https://mooseframework.inl.gov
ParallelStudy< WorkType, ParallelDataType > Class Template Reference  [abstract]

#include <ParallelStudy.h>

Inheritance diagram for ParallelStudy< WorkType, ParallelDataType >:

Public Types

typedef MooseUtils::Buffer< WorkType >::iterator work_iterator
 
typedef MooseUtils::Buffer< std::shared_ptr< ParallelDataType > >::iterator parallel_data_iterator
 

Public Member Functions

 ParallelStudy (const libMesh::Parallel::Communicator &comm, const InputParameters &params, const std::string &name)
 
void preExecute ()
 Pre-execute method that MUST be called before execute() and before adding work. More...
 
void execute ()
 Execute method. More...
 
template<typename... Args>
MooseUtils::SharedPool< ParallelDataType >::PtrType acquireParallelData (const THREAD_ID tid, Args &&... args)
 Acquire a parallel data object from the pool. More...
 
void moveParallelDataToBuffer (std::shared_ptr< ParallelDataType > &data, const processor_id_type dest_pid)
 Moves parallel data objects to the send buffer to be communicated to processor dest_pid. More...
 
const ReceiveBuffer< ParallelDataType, ParallelStudy< WorkType, ParallelDataType > > & receiveBuffer () const
 Gets the receive buffer. More...
 
const MooseUtils::Buffer< WorkType > & workBuffer () const
 Gets the work buffer. More...
 
unsigned long long int sendBufferPoolCreated () const
 Gets the total number of send buffer pools created. More...
 
unsigned long long int parallelDataSent () const
 Gets the total number of parallel data objects sent from this processor. More...
 
unsigned long long int buffersSent () const
 Gets the total number of buffers sent from this processor. More...
 
unsigned long long int poolParallelDataCreated () const
 Gets the total number of parallel data created in all of the threaded pools. More...
 
unsigned long long int localWorkStarted () const
 Gets the total amount of work started from this processor. More...
 
unsigned long long int localWorkExecuted () const
 Gets the total amount of work executed on this processor. More...
 
unsigned long long int totalWorkCompleted () const
 Gets the total amount of work completed across all processors. More...
 
unsigned long long int localChunksExecuted () const
 Gets the total number of chunks of work executed on this processor. More...
 
bool currentlyExecuting () const
 Whether or not this object is currently in execute(). More...
 
bool currentlyPreExecuting () const
 Whether or not this object is between preExecute() and execute(). More...
 
unsigned int maxBufferSize () const
 Gets the max buffer size. More...
 
unsigned int chunkSize () const
 Gets the chunk size. More...
 
unsigned int clicksPerCommunication () const
 Gets the number of iterations to wait before communicating. More...
 
unsigned int clicksPerRootCommunication () const
 Gets the number of iterations to wait before communicating with root. More...
 
unsigned int clicksPerReceive () const
 Gets the number of iterations to wait before checking for new parallel data. More...
 
ParallelStudyMethod method () const
 Gets the method. More...
 
void reserveBuffer (const std::size_t size)
 Reserve size entries in the work buffer. More...
 
const Parallel::Communicator & comm () const
 
processor_id_type n_processors () const
 
processor_id_type processor_id () const
 
void moveWorkToBuffer (WorkType &work, const THREAD_ID tid)
 Adds work to the buffer to be executed. More...
 
void moveWorkToBuffer (const work_iterator begin, const work_iterator end, const THREAD_ID tid)
 
void moveWorkToBuffer (std::vector< WorkType > &work, const THREAD_ID tid)
 

Static Public Member Functions

static InputParameters validParams ()
 

Protected Types

enum  MoveWorkError {
  DURING_EXECUTION_DISABLED, PRE_EXECUTION_AND_EXECUTION_ONLY, PRE_EXECUTION_ONLY, PRE_EXECUTION_THREAD_0_ONLY,
  CONTINUING_DURING_EXECUTING_WORK
}
 Enum for providing useful errors during work addition in moveWorkError(). More...
 

Protected Member Functions

virtual std::unique_ptr< MooseUtils::Buffer< WorkType > > createWorkBuffer ()
 Creates the work buffer. More...
 
virtual void executeWork (const WorkType &work, const THREAD_ID tid)=0
 Pure virtual to be overridden that executes a single object of work on a given thread. More...
 
virtual void moveWorkError (const MoveWorkError error, const WorkType *work=nullptr) const
 Virtual that allows for the customization of error text for moving work into the buffer. More...
 
virtual bool alternateSmartEndingCriteriaMet ()
 Insertion point for derived classes to provide an alternate ending criteria for SMART execution. More...
 
virtual void postExecuteChunk (const work_iterator, const work_iterator)
 Insertion point for acting on work that was just executed. More...
 
virtual void preReceiveAndExecute ()
 Insertion point called just after trying to receive work and just before beginning work on the work buffer. More...
 
virtual void postReceiveParallelData (const parallel_data_iterator begin, const parallel_data_iterator end)=0
 Pure virtual for acting on parallel data that has JUST been received and filled into the buffer. More...
 
virtual bool workIsComplete (const WorkType &)
 Can be overridden to denote if a piece of work is not complete yet. More...
 
bool buffersAreEmpty () const
 Whether or not ALL of the buffers are empty: Working buffer, threaded buffers, receive buffer, and send buffers. More...
 
void moveContinuingWorkToBuffer (WorkType &work)
 Moves work that is considered continuing for the purposes of the execution algorithm into the buffer. More...
 
void moveContinuingWorkToBuffer (const work_iterator begin, const work_iterator end)
 

Protected Attributes

const processor_id_type _pid
 This rank. More...
 
const std::string _name
 Name for this object for use in error handling. More...
 
const InputParameters & _params
 The InputParameters. More...
 
const ParallelStudyMethod _method
 The study method. More...
 
bool _has_alternate_ending_criteria
 Whether or not this object has alternate ending criteria. More...
 
const Parallel::Communicator & _communicator
 

Private Member Functions

void flushSendBuffers ()
 Flushes all parallel data out of the send buffers. More...
 
void smartExecute ()
 Execute work using SMART. More...
 
void harmExecute ()
 Execute work using HARM. More...
 
void bsExecute ()
 Execute work using BS. More...
 
bool receiveAndExecute ()
 Receives packets of parallel data from other processors and executes work. More...
 
void executeAndBuffer (const std::size_t chunk_size)
 Execute a chunk of work and buffer. More...
 
void canMoveWorkCheck (const THREAD_ID tid)
 Internal check for if it is allowed to currently add work in moveWorkToBuffer(). More...
 
void postReceiveParallelDataInternal ()
 Internal method for acting on the parallel data that has just been received into the parallel buffer. More...
 

Private Attributes

const unsigned int _min_buffer_size
 Minimum size of a SendBuffer. More...
 
const unsigned int _max_buffer_size
 Number of objects to buffer before communication. More...
 
const Real _buffer_growth_multiplier
 Multiplier for the buffer size for growing the buffer. More...
 
const Real _buffer_shrink_multiplier
 Multiplier for the buffer size for shrinking the buffer. More...
 
const unsigned int _chunk_size
 Number of objects to execute at once during communication. More...
 
const bool _allow_new_work_during_execution
 Whether or not to allow the addition of new work to the buffer during execution. More...
 
const unsigned int _clicks_per_communication
 Iterations to wait before communicating. More...
 
const unsigned int _clicks_per_root_communication
 Iterations to wait before communicating with root. More...
 
const unsigned int _clicks_per_receive
 Iterations to wait before checking for new objects. More...
 
Parallel::MessageTag _parallel_data_buffer_tag
 MessageTag for sending parallel data. More...
 
std::vector< MooseUtils::SharedPool< ParallelDataType > > _parallel_data_pools
 Pools for re-using destructed parallel data objects (one for each thread) More...
 
std::vector< std::vector< WorkType > > _temp_threaded_work
 Threaded temporary storage for work added while we're using the _work_buffer (one for each thread) More...
 
const std::unique_ptr< MooseUtils::Buffer< WorkType > > _work_buffer
 Buffer for executing work. More...
 
const std::unique_ptr< ReceiveBuffer< ParallelDataType, ParallelStudy< WorkType, ParallelDataType > > > _receive_buffer
 The receive buffer. More...
 
std::unordered_map< processor_id_type, std::unique_ptr< SendBuffer< ParallelDataType, ParallelStudy< WorkType, ParallelDataType > > > > _send_buffers
 Send buffers for each processor. More...
 
unsigned long long int _local_chunks_executed
 Number of chunks of work executed on this processor. More...
 
unsigned long long int _local_work_completed
 Amount of work completed on this processor. More...
 
unsigned long long int _local_work_started
 Amount of work started on this processor. More...
 
unsigned long long int _local_work_executed
 Amount of work executed on this processor. More...
 
unsigned long long int _total_work_started
 Amount of work started on all processors. More...
 
unsigned long long int _total_work_completed
 Amount of work completed on all processors. More...
 
bool _currently_executing
 Whether we are within execute() More...
 
bool _currently_pre_executing
 Whether we are between preExecute() and execute() More...
 
bool _currently_executing_work
 Whether or not we are currently within executeAndBuffer() More...
 

Detailed Description

template<typename WorkType, typename ParallelDataType>
class ParallelStudy< WorkType, ParallelDataType >

Definition at line 29 of file ParallelStudy.h.
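
ParallelStudy is an abstract base class: a derived class must implement executeWork() and postReceiveParallelData(), and may override the other virtual insertion points documented below. The following is a minimal sketch of such a derived class; MyWork, MyParallelData, and MyStudy are hypothetical illustration types (a real ParallelDataType must also be communicable between processors, which is not shown here).

// Hypothetical sketch of a derived study (not part of MOOSE).
struct MyWork { /* describes one unit of work */ };
struct MyParallelData { /* data communicated between processors */ };

class MyStudy : public ParallelStudy<MyWork, MyParallelData>
{
public:
  MyStudy(const libMesh::Parallel::Communicator & comm, const InputParameters & params)
    : ParallelStudy<MyWork, MyParallelData>(comm, params, "MyStudy")
  {
  }

protected:
  // Required: execute a single unit of work on the given thread (must be thread safe)
  void executeWork(const MyWork & work, const THREAD_ID tid) override
  {
    // ... act on 'work' ...
  }

  // Required: act on parallel data that was just received from other processors
  void postReceiveParallelData(const parallel_data_iterator begin,
                               const parallel_data_iterator end) override
  {
    // ... turn the received objects into results or continuing work ...
  }
};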

Member Typedef Documentation

◆ parallel_data_iterator

template<typename WorkType, typename ParallelDataType>
typedef MooseUtils::Buffer<std::shared_ptr<ParallelDataType> >::iterator ParallelStudy< WorkType, ParallelDataType >::parallel_data_iterator

Definition at line 34 of file ParallelStudy.h.

◆ work_iterator

template<typename WorkType, typename ParallelDataType>
typedef MooseUtils::Buffer<WorkType>::iterator ParallelStudy< WorkType, ParallelDataType >::work_iterator

Definition at line 32 of file ParallelStudy.h.

Member Enumeration Documentation

◆ MoveWorkError

template<typename WorkType, typename ParallelDataType>
enum ParallelStudy::MoveWorkError
protected

Enum for providing useful errors during work addition in moveWorkError().

Enumerator
DURING_EXECUTION_DISABLED 
PRE_EXECUTION_AND_EXECUTION_ONLY 
PRE_EXECUTION_ONLY 
PRE_EXECUTION_THREAD_0_ONLY 
CONTINUING_DURING_EXECUTING_WORK 

Definition at line 182 of file ParallelStudy.h.

Constructor & Destructor Documentation

◆ ParallelStudy()

template<typename WorkType , typename ParallelDataType >
ParallelStudy< WorkType, ParallelDataType >::ParallelStudy ( const libMesh::Parallel::Communicator &  comm,
const InputParameters &  params,
const std::string &  name 
)

Definition at line 370 of file ParallelStudy.h.

374  : ParallelObject(comm),
375  _pid(comm.rank()),
376  _name(name),
377  _params(params),
378 
379  _method((ParallelStudyMethod)(int)(params.get<MooseEnum>("method"))),
381  _min_buffer_size(params.isParamSetByUser("min_buffer_size")
382  ? params.get<unsigned int>("min_buffer_size")
383  : params.get<unsigned int>("send_buffer_size")),
384  _max_buffer_size(params.get<unsigned int>("send_buffer_size")),
385  _buffer_growth_multiplier(params.get<Real>("buffer_growth_multiplier")),
386  _buffer_shrink_multiplier(params.get<Real>("buffer_shrink_multiplier")),
387  _chunk_size(params.get<unsigned int>("chunk_size")),
388  _allow_new_work_during_execution(params.get<bool>("allow_new_work_during_execution")),
389 
390  _clicks_per_communication(params.get<unsigned int>("clicks_per_communication")),
391  _clicks_per_root_communication(params.get<unsigned int>("clicks_per_root_communication")),
392  _clicks_per_receive(params.get<unsigned int>("clicks_per_receive")),
393 
398  _receive_buffer(std::make_unique<
401 
402  _currently_executing(false),
405 {
406 #ifndef LIBMESH_HAVE_OPENMP
407  if (libMesh::n_threads() != 1)
408  mooseWarning(_name, ": Threading will not be used without OpenMP");
409 #endif
410 
413  ": When allowing new work addition during execution\n",
414  "('allow_new_work_during_execution = true'), the method must be SMART");
415 }
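
As a construction sketch (reusing the hypothetical MyStudy above): the parameters read in this constructor, such as "method", "chunk_size", and "send_buffer_size", come from validParams() and can be adjusted before constructing the study. The params.set calls below assume standard InputParameters usage; the chosen values are illustrative only.

// Hypothetical driver code.
InputParameters params = ParallelStudy<MyWork, MyParallelData>::validParams();
params.set<unsigned int>("chunk_size") = 100;        // objects executed per chunk
params.set<unsigned int>("send_buffer_size") = 1000; // objects buffered before a send
MyStudy study(comm, params);                         // 'comm' is the communicator to use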

Member Function Documentation

◆ acquireParallelData()

template<typename WorkType, typename ParallelDataType>
template<typename... Args>
MooseUtils::SharedPool<ParallelDataType>::PtrType ParallelStudy< WorkType, ParallelDataType >::acquireParallelData ( const THREAD_ID  tid,
Args &&...  args 
)
inline

Acquire a parallel data object from the pool.

Definition at line 73 of file ParallelStudy.h.

74  {
75  return _parallel_data_pools[tid].acquire(std::forward<Args>(args)...);
76  }
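
A typical pattern, sketched below rather than taken from the MOOSE source, is to acquire a pooled object while executing work, fill it, and hand it to the send buffer for another processor. MyParallelData and its field are hypothetical, and the returned PtrType is assumed here to be usable as a std::shared_ptr.

// Inside a derived class's executeWork(), for example:
std::shared_ptr<MyParallelData> data = acquireParallelData(tid); // constructor args may follow 'tid'
data->value = 42;                         // hypothetical payload
moveParallelDataToBuffer(data, dest_pid); // dest_pid must be a different rank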

◆ alternateSmartEndingCriteriaMet()

template<typename WorkType , typename ParallelDataType >
bool ParallelStudy< WorkType, ParallelDataType >::alternateSmartEndingCriteriaMet ( )
protected virtual

Insertion point for derived classes to provide an alternate ending criteria for SMART execution.

Only called when _has_alternate_ending_criteria == true.

Definition at line 1202 of file ParallelStudy.h.

1203 {
1204  mooseError(_name, ": Unimplemented alternateSmartEndingCriteriaMet()");
1205 }
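
The base implementation simply errors, so a derived class that sets _has_alternate_ending_criteria to true must also provide an override; a sketch (the _externally_done member is hypothetical):

// In a derived class whose constructor sets _has_alternate_ending_criteria = true:
bool alternateSmartEndingCriteriaMet() override
{
  // Allow SMART execution to finish once an application-specific condition holds
  return _externally_done; // hypothetical flag maintained by the derived study
}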

◆ bsExecute()

template<typename WorkType , typename ParallelDataType >
void ParallelStudy< WorkType, ParallelDataType >::bsExecute ( )
private

Execute work using BS.

Definition at line 900 of file ParallelStudy.h.

901 {
903  mooseError("ParallelStudy: Alternate ending criteria not yet supported for BS");
905  mooseError(_name, ": The addition of new work during execution is not supported by BS");
906  mooseAssert(_method == ParallelStudyMethod::BS, "Should be called with BS only");
907 
908  Parallel::Request work_completed_probe_status;
909  Parallel::Request work_completed_request;
910 
911  // Temp for use in sending the current value in a nonblocking sum instead of an updated value
912  unsigned long long int temp;
913 
914  // Get the amount of work that were started in the whole domain
915  comm().sum(_local_work_started, _total_work_started, work_completed_probe_status);
916 
917  // Keep working until done
918  while (true)
919  {
920  bool receiving = false;
921  bool sending = false;
922 
923  Parallel::Request some_left_request;
924  unsigned int some_left = 0;
925  unsigned int all_some_left = 1;
926 
927  do
928  {
929  _receive_buffer->receive();
932 
933  receiving = _receive_buffer->currentlyReceiving();
934 
935  sending = false;
936  for (auto & send_buffer : _send_buffers)
937  sending = sending || send_buffer.second->currentlySending() ||
938  send_buffer.second->currentlyBuffered();
939 
940  if (!receiving && !sending && some_left_request.test() && all_some_left)
941  {
942  some_left = receiving || sending;
943  comm().sum(some_left, all_some_left, some_left_request);
944  }
945  } while (receiving || sending || !some_left_request.test() || all_some_left);
946 
948 
949  comm().barrier();
950 
951  if (work_completed_probe_status.test() && work_completed_request.test())
952  {
954  return;
955 
956  temp = _local_work_completed;
957  comm().sum(temp, _total_work_completed, work_completed_request);
958  }
959  }
960 }

◆ buffersAreEmpty()

template<typename WorkType , typename ParallelDataType >
bool ParallelStudy< WorkType, ParallelDataType >::buffersAreEmpty ( ) const
protected

Whether or not ALL of the buffers are empty: Working buffer, threaded buffers, receive buffer, and send buffers.

Definition at line 1209 of file ParallelStudy.h.

1210 {
1211  if (!_work_buffer->empty())
1212  return false;
1213  for (const auto & threaded_buffer : _temp_threaded_work)
1214  if (!threaded_buffer.empty())
1215  return false;
1216  if (_receive_buffer->currentlyReceiving())
1217  return false;
1218  for (const auto & map_pair : _send_buffers)
1219  if (map_pair.second->currentlySending() || map_pair.second->currentlyBuffered())
1220  return false;
1221 
1222  return true;
1223 }

◆ buffersSent()

template<typename WorkType , typename ParallelDataType >
unsigned long long int ParallelStudy< WorkType, ParallelDataType >::buffersSent ( ) const

Gets the total number of buffers sent from this processor.

Definition at line 1178 of file ParallelStudy.h.

Referenced by PerProcessorRayTracingResultsVectorPostprocessor::execute().

1179 {
1180  unsigned long long int total_sent = 0;
1181 
1182  for (const auto & buffer : _send_buffers)
1183  total_sent += buffer.second->buffersSent();
1184 
1185  return total_sent;
1186 }

◆ canMoveWorkCheck()

template<typename WorkType , typename ParallelDataType >
void ParallelStudy< WorkType, ParallelDataType >::canMoveWorkCheck ( const THREAD_ID  tid)
private

Internal check for if it is allowed to currently add work in moveWorkToBuffer().

Definition at line 1050 of file ParallelStudy.h.

1051 {
1053  {
1055  moveWorkError(MoveWorkError::DURING_EXECUTION_DISABLED);
1056  }
1057  else if (!_currently_pre_executing)
1058  {
1060  moveWorkError(MoveWorkError::PRE_EXECUTION_AND_EXECUTION_ONLY);
1061  else
1062  moveWorkError(MoveWorkError::PRE_EXECUTION_ONLY);
1063  }
1064  else if (tid != 0)
1065  moveWorkError(MoveWorkError::PRE_EXECUTION_THREAD_0_ONLY);
1066 }

◆ chunkSize()

template<typename WorkType, typename ParallelDataType>
unsigned int ParallelStudy< WorkType, ParallelDataType >::chunkSize ( ) const
inline

Gets the chunk size.

Definition at line 148 of file ParallelStudy.h.

148 { return _chunk_size; }

◆ clicksPerCommunication()

template<typename WorkType, typename ParallelDataType>
unsigned int ParallelStudy< WorkType, ParallelDataType >::clicksPerCommunication ( ) const
inline

Gets the number of iterations to wait before communicating.

Definition at line 153 of file ParallelStudy.h.

153 { return _clicks_per_communication; }

◆ clicksPerReceive()

template<typename WorkType, typename ParallelDataType>
unsigned int ParallelStudy< WorkType, ParallelDataType >::clicksPerReceive ( ) const
inline

Gets the number of iterations to wait before checking for new parallel data.

Definition at line 161 of file ParallelStudy.h.

161 { return _clicks_per_receive; }

◆ clicksPerRootCommunication()

template<typename WorkType, typename ParallelDataType>
unsigned int ParallelStudy< WorkType, ParallelDataType >::clicksPerRootCommunication ( ) const
inline

Gets the number of iterations to wait before communicating with root.

Definition at line 157 of file ParallelStudy.h.

157 { return _clicks_per_root_communication; }

◆ createWorkBuffer()

template<typename WorkType , typename ParallelDataType >
std::unique_ptr< MooseUtils::Buffer< WorkType > > ParallelStudy< WorkType, ParallelDataType >::createWorkBuffer ( )
protected virtual

Creates the work buffer.

This is virtual so that derived classes can use their own specialized buffers

Definition at line 419 of file ParallelStudy.h.

420 {
421  std::unique_ptr<MooseUtils::Buffer<WorkType>> buffer;
422 
423  const auto buffer_type = _params.get<MooseEnum>("work_buffer_type");
424  if (buffer_type == "lifo")
425  buffer = std::make_unique<MooseUtils::LIFOBuffer<WorkType>>();
426  else if (buffer_type == "circular")
427  buffer = std::make_unique<MooseUtils::CircularBuffer<WorkType>>();
428  else
429  mooseError("Unknown work buffer type");
430 
431  return buffer;
432 }
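
Because this method is virtual, a derived class can bypass the "work_buffer_type" parameter and always supply a particular buffer; for example, a sketch that always uses the LIFO buffer referenced above:

std::unique_ptr<MooseUtils::Buffer<MyWork>> createWorkBuffer() override
{
  // Ignore 'work_buffer_type' and always use a LIFO buffer
  return std::make_unique<MooseUtils::LIFOBuffer<MyWork>>();
}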

◆ currentlyExecuting()

template<typename WorkType, typename ParallelDataType>
bool ParallelStudy< WorkType, ParallelDataType >::currentlyExecuting ( ) const
inline

Whether or not this object is currently in execute().

Definition at line 135 of file ParallelStudy.h.

135 { return _currently_executing; }

◆ currentlyPreExecuting()

template<typename WorkType, typename ParallelDataType>
bool ParallelStudy< WorkType, ParallelDataType >::currentlyPreExecuting ( ) const
inline

Whether or not this object is between preExecute() and execute().

Definition at line 139 of file ParallelStudy.h.

139 { return _currently_pre_executing; }

◆ execute()

template<typename WorkType , typename ParallelDataType >
void ParallelStudy< WorkType, ParallelDataType >::execute ( )

Execute method.

Definition at line 988 of file ParallelStudy.h.

989 {
991  mooseError(_name, ": preExecute() was not called before execute()");
992 
993  _currently_pre_executing = false;
994  _currently_executing = true;
995 
996  switch (_method)
997  {
999  smartExecute();
1000  break;
1002  harmExecute();
1003  break;
1005  bsExecute();
1006  break;
1007  default:
1008  mooseError("Unknown ParallelStudyMethod");
1009  }
1010 
1011  _currently_executing = false;
1012 
1013  // Sanity checks on if we're really done
1014  comm().barrier();
1015 
1016  if (!buffersAreEmpty())
1017  mooseError(_name, ": Buffers are not empty after execution");
1018 }

◆ executeAndBuffer()

template<typename WorkType , typename ParallelDataType >
void ParallelStudy< WorkType, ParallelDataType >::executeAndBuffer ( const std::size_t  chunk_size)
private

Execute a chunk of work and buffer.

Definition at line 497 of file ParallelStudy.h.

498 {
500 
501  // If chunk_size > the number of objects left, this will properly grab all of them
502  const auto begin = _work_buffer->beginChunk(chunk_size);
503  const auto end = _work_buffer->endChunk(chunk_size);
504 
506 
507 #ifdef LIBMESH_HAVE_OPENMP
508 #pragma omp parallel
509 #endif
510  {
511  const THREAD_ID tid =
512 #ifdef LIBMESH_HAVE_OPENMP
513  omp_get_thread_num();
514 #else
515  0;
516 #endif
517 
518 #ifdef LIBMESH_HAVE_OPENMP
519 #pragma omp for schedule(dynamic, 20) nowait
520 #endif
521  for (auto it = begin; it < end; ++it)
522  executeWork(*it, tid);
523  }
524 
525  // Increment the executed and completed counters
526  _local_work_executed += std::distance(begin, end);
527  for (auto it = begin; it != end; ++it)
528  if (workIsComplete(*it))
530 
531  // Insertion point for derived classes to do something to the completed work
532  // Example: Create ParallelData to spawn additional work on another processor
533  postExecuteChunk(begin, end);
534 
535  // Remove the objects we just worked on from the buffer
536  _work_buffer->eraseChunk(chunk_size);
537 
538  // If new work is allowed to be generated during execution, it goes into _temp_threaded_work
539  // during the threaded execution phase and then must be moved into the working buffer
541  {
542  // Amount of work that needs to be moved into the main working buffer from
543  // the temporary working buffer
544  std::size_t threaded_work_size = 0;
545  for (const auto & work_objects : _temp_threaded_work)
546  threaded_work_size += work_objects.size();
547 
548  if (threaded_work_size)
549  {
550  // We don't ever want to decrease the capacity, so only set it if we need more entries
551  if (_work_buffer->capacity() < _work_buffer->size() + threaded_work_size)
552  _work_buffer->setCapacity(_work_buffer->size() + threaded_work_size);
553 
554  // Move the work into the buffer
555  for (auto & threaded_work_vector : _temp_threaded_work)
556  {
557  for (auto & work : threaded_work_vector)
558  _work_buffer->move(work);
559  threaded_work_vector.clear();
560  }
561 
562  // Variable that must be set when adding work so that the algorithm can keep count
563  // of how much work still needs to be executed
564  _local_work_started += threaded_work_size;
565  }
566  }
567 
570 
572 }

◆ executeWork()

template<typename WorkType, typename ParallelDataType>
virtual void ParallelStudy< WorkType, ParallelDataType >::executeWork ( const WorkType &  work,
const THREAD_ID  tid 
)
protected pure virtual

Pure virtual to be overridden that executes a single object of work on a given thread.

Implemented in ParallelRayStudy.

◆ flushSendBuffers()

template<typename WorkType , typename ParallelDataType >
void ParallelStudy< WorkType, ParallelDataType >::flushSendBuffers ( )
private

Flushes all parallel data out of the send buffers.

Definition at line 610 of file ParallelStudy.h.

611 {
612  for (auto & send_buffer_iter : _send_buffers)
613  send_buffer_iter.second->forceSend();
614 }

◆ harmExecute()

template<typename WorkType , typename ParallelDataType >
void ParallelStudy< WorkType, ParallelDataType >::harmExecute ( )
private

Execute work using HARM.

Definition at line 803 of file ParallelStudy.h.

804 {
806  mooseError("ParallelStudy: Alternate ending criteria not yet supported for HARM");
808  mooseError(_name, ": The addition of new work during execution is not supported by HARM");
809  mooseAssert(_method == ParallelStudyMethod::HARM, "Should be called with HARM only");
810 
811  // Request for the total amount of work started
812  Parallel::Request work_started_request;
813  // Requests for sending the amount of finished worked to every other processor
814  std::vector<Parallel::Request> work_completed_requests(comm().size());
815  // Whether or not the finished requests have been sent to each processor
816  std::vector<bool> work_completed_requests_sent(comm().size(), false);
817  // Values of work completed on this processor that are being sent to other processors
818  std::vector<unsigned long long int> work_completed_requests_temps(comm().size(), 0);
819  // Work completed by each processor
820  std::vector<unsigned long long int> work_completed_per_proc(comm().size(), 0);
821  // Tag for sending work finished
822  const auto work_completed_requests_tag = comm().get_unique_tag();
823 
824  // Get the amount of work that was started in the whole domain
825  comm().sum(_local_work_started, _total_work_started, work_started_request);
826 
827  // All work has been executed, so time to communicate
829 
830  // HARM only does some communication based on times through the loop.
831  // This counter will be used for that
832  unsigned int communication_clicks = 0;
833 
834  Parallel::Status work_completed_probe_status;
835  int work_completed_probe_flag;
836 
837  // Keep working until done
838  while (true)
839  {
841 
843 
844  if (communication_clicks > comm().size())
845  {
846  // Receive messages about work being finished
847  do
848  {
849  MPI_Iprobe(MPI_ANY_SOURCE,
850  work_completed_requests_tag.value(),
851  comm().get(),
852  &work_completed_probe_flag,
853  work_completed_probe_status.get());
854 
855  if (work_completed_probe_flag)
856  {
857  auto proc = work_completed_probe_status.source();
858  comm().receive(proc, work_completed_per_proc[proc], work_completed_requests_tag);
859  }
860  } while (work_completed_probe_flag);
861 
862  _total_work_completed = std::accumulate(
863  work_completed_per_proc.begin(), work_completed_per_proc.end(), _local_work_completed);
864 
865  // Reset
866  communication_clicks = 0;
867  }
868 
869  // Send messages about objects being finished
870  for (processor_id_type pid = 0; pid < comm().size(); ++pid)
871  if (pid != _pid &&
872  (!work_completed_requests_sent[pid] || work_completed_requests[pid].test()) &&
873  _local_work_completed > work_completed_requests_temps[pid])
874  {
875  work_completed_requests_temps[pid] = _local_work_completed;
876  comm().send(pid,
877  work_completed_requests_temps[pid],
878  work_completed_requests[pid],
879  work_completed_requests_tag);
880  work_completed_requests_sent[pid] = true;
881  }
882 
883  // All procs agree on the amount of work started and we've finished all the work started
884  if (work_started_request.test() && _total_work_started == _total_work_completed)
885  {
886  // Need to call the post wait work for all of the requests
887  for (processor_id_type pid = 0; pid < comm().size(); ++pid)
888  if (pid != _pid)
889  work_completed_requests[pid].wait();
890 
891  return;
892  }
893 
894  communication_clicks++;
895  }
896 }

◆ localChunksExecuted()

template<typename WorkType, typename ParallelDataType>
unsigned long long int ParallelStudy< WorkType, ParallelDataType >::localChunksExecuted ( ) const
inline

Gets the total number of chunks of work executed on this processor.

Definition at line 130 of file ParallelStudy.h.

Referenced by PerProcessorRayTracingResultsVectorPostprocessor::execute().

130 { return _local_chunks_executed; }

◆ localWorkExecuted()

template<typename WorkType, typename ParallelDataType>
unsigned long long int ParallelStudy< WorkType, ParallelDataType >::localWorkExecuted ( ) const
inline

Gets the total amount of work executed on this processor.

Definition at line 122 of file ParallelStudy.h.

Referenced by PerProcessorRayTracingResultsVectorPostprocessor::execute().

122 { return _local_work_executed; }

◆ localWorkStarted()

template<typename WorkType, typename ParallelDataType>
unsigned long long int ParallelStudy< WorkType, ParallelDataType >::localWorkStarted ( ) const
inline

Gets the total amount of work started from this processor.

Definition at line 118 of file ParallelStudy.h.

Referenced by PerProcessorRayTracingResultsVectorPostprocessor::execute().

118 { return _local_work_started; }

◆ maxBufferSize()

template<typename WorkType, typename ParallelDataType>
unsigned int ParallelStudy< WorkType, ParallelDataType >::maxBufferSize ( ) const
inline

Gets the max buffer size.

Definition at line 144 of file ParallelStudy.h.

144 { return _max_buffer_size; }

◆ method()

template<typename WorkType, typename ParallelDataType>
ParallelStudyMethod ParallelStudy< WorkType, ParallelDataType >::method ( ) const
inline

Gets the method.

Definition at line 166 of file ParallelStudy.h.

166 { return _method; }

◆ moveContinuingWorkToBuffer() [1/2]

template<typename WorkType, typename ParallelDataType >
void ParallelStudy< WorkType, ParallelDataType >::moveContinuingWorkToBuffer ( WorkType &  work)
protected

Moves work that is considered continuing for the purposes of the execution algorithm into the buffer.

Definition at line 1128 of file ParallelStudy.h.

1129 {
1131  moveWorkError(MoveWorkError::CONTINUING_DURING_EXECUTING_WORK);
1132 
1133  _work_buffer->move(work);
1134 }

◆ moveContinuingWorkToBuffer() [2/2]

template<typename WorkType, typename ParallelDataType >
void ParallelStudy< WorkType, ParallelDataType >::moveContinuingWorkToBuffer ( const work_iterator  begin,
const work_iterator  end 
)
protected

Definition at line 1138 of file ParallelStudy.h.

1140 {
1142  moveWorkError(MoveWorkError::CONTINUING_DURING_EXECUTING_WORK);
1143 
1144  const auto size = std::distance(begin, end);
1145  if (_work_buffer->capacity() < _work_buffer->size() + size)
1146  _work_buffer->setCapacity(_work_buffer->size() + size);
1147 
1148  for (auto it = begin; it != end; ++it)
1149  _work_buffer->move(*it);
1150 }

◆ moveParallelDataToBuffer()

template<typename WorkType , typename ParallelDataType>
void ParallelStudy< WorkType, ParallelDataType >::moveParallelDataToBuffer ( std::shared_ptr< ParallelDataType > &  data,
const processor_id_type  dest_pid 
)

Moves parallel data objects to the send buffer to be communicated to processor dest_pid.

Definition at line 576 of file ParallelStudy.h.

578 {
579  mooseAssert(comm().size() > dest_pid, "Invalid processor ID");
580  mooseAssert(_pid != dest_pid, "Processor ID is self");
581 
583  mooseError(_name, ": Cannot sendParallelData() when not executing");
584 
585  // Get the send buffer for the proc this object is going to
586  auto find_pair = _send_buffers.find(dest_pid);
587  // Need to create a send buffer for said processor
588  if (find_pair == _send_buffers.end())
590  .emplace(dest_pid,
591  std::make_unique<
593  comm(),
594  this,
595  dest_pid,
596  _method,
602  .first->second->moveObject(data);
603  // Send buffer exists for this processor
604  else
605  find_pair->second->moveObject(data);
606 }

◆ moveWorkError()

template<typename WorkType, typename ParallelDataType >
void ParallelStudy< WorkType, ParallelDataType >::moveWorkError ( const MoveWorkError  error,
const WorkType *  work = nullptr 
) const
protected virtual

Virtual that allows for the customization of error text for moving work into the buffer.

Definition at line 1022 of file ParallelStudy.h.

1024 {
1025  if (error == MoveWorkError::DURING_EXECUTION_DISABLED)
1026  mooseError(_name,
1027  ": The moving of new work into the buffer during work execution requires\n",
1028  "that the parameter 'allow_new_work_during_execution = true'");
1029  if (error == MoveWorkError::PRE_EXECUTION_AND_EXECUTION_ONLY)
1030  mooseError(
1031  _name,
1032  ": Can only move work into the buffer in the pre-execution and execution phase\n(between "
1033  "preExecute() and the end of execute()");
1034  if (error == MoveWorkError::PRE_EXECUTION_ONLY)
1035  mooseError(_name,
1036  ": Can only move work into the buffer in the pre-execution phase\n(between "
1037  "preExecute() and execute()");
1038  if (error == MoveWorkError::PRE_EXECUTION_THREAD_0_ONLY)
1039  mooseError(_name,
1040  ": Can only move work into the buffer in the pre-execution phase\n(between "
1041  "preExecute() and execute()) on thread 0");
1042  if (error == CONTINUING_DURING_EXECUTING_WORK)
1043  mooseError(_name, ": Cannot move continuing work into the buffer during executeAndBuffer()");
1044 
1045  mooseError("Unknown MoveWorkError");
1046 }
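
A derived class can override this to customize messages for specific errors while deferring to the base implementation otherwise; a sketch (MyStudy and MyWork as in the earlier hypothetical example):

void moveWorkError(const MoveWorkError error, const MyWork * work = nullptr) const override
{
  if (error == MoveWorkError::DURING_EXECUTION_DISABLED)
    mooseError(_name, ": MyStudy only accepts new work before execution begins");
  // Fall back to the base class messages for everything else
  ParallelStudy<MyWork, MyParallelData>::moveWorkError(error, work);
}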

◆ moveWorkToBuffer() [1/3]

template<typename WorkType, typename ParallelDataType >
void ParallelStudy< WorkType, ParallelDataType >::moveWorkToBuffer ( WorkType &  work,
const THREAD_ID  tid 
)

Adds work to the buffer to be executed.

This will move the work into the buffer (with std::move); the passed-in work is therefore invalid after this call. For the purposes of the completion algorithm, this added work is considered NEW work.

During pre-execution (between preExecute() and execute()), this method can ONLY be called on thread 0.

During execute(), this method is thread safe and can be used to add work during execution.

Definition at line 1070 of file ParallelStudy.h.

1071 {
1072  // Error checks for moving work into the buffer at unallowed times
1073  canMoveWorkCheck(tid);
1074 
1075  // Can move directly into the work buffer on thread 0 when we're not executing work
1076  if (!_currently_executing_work && tid == 0)
1077  {
1078  ++_local_work_started; // must ALWAYS increment when adding new work to the working buffer
1079  _work_buffer->move(work);
1080  }
1081  // Objects added during execution go into a temporary threaded vector (is thread safe) to be
1082  // moved into the working buffer when possible
1083  else
1084  _temp_threaded_work[tid].emplace_back(std::move(work));
1085 }
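
A typical driving sequence, as a sketch (my_work is a hypothetical container of initial work objects and study is the hypothetical MyStudy from earlier):

study.preExecute();                         // must precede any work addition
for (MyWork & w : my_work)
  study.moveWorkToBuffer(w, /* tid = */ 0); // 'w' is moved from and invalid afterwards
study.execute();                            // runs until all work completes on every rank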

◆ moveWorkToBuffer() [2/3]

template<typename WorkType, typename ParallelDataType >
void ParallelStudy< WorkType, ParallelDataType >::moveWorkToBuffer ( const work_iterator  begin,
const work_iterator  end,
const THREAD_ID  tid 
)

Definition at line 1089 of file ParallelStudy.h.

1092 {
1093  // Error checks for moving work into the buffer at unallowed times
1094  canMoveWorkCheck(tid);
1095 
1096  // Get work size beforehand so we can resize
1097  const auto size = std::distance(begin, end);
1098 
1099  // Can move directly into the work buffer on thread 0 when we're not executing work
1100  if (!_currently_executing_work && tid == 0)
1101  {
1102  if (_work_buffer->capacity() < _work_buffer->size() + size)
1103  _work_buffer->setCapacity(_work_buffer->size() + size);
1104  _local_work_started += size;
1105  }
1106  else
1107  _temp_threaded_work[tid].reserve(_temp_threaded_work[tid].size() + size);
1108 
1109  // Move the objects
1110  if (!_currently_executing_work && tid == 0)
1111  for (auto it = begin; it != end; ++it)
1112  _work_buffer->move(*it);
1113  else
1114  for (auto it = begin; it != end; ++it)
1115  _temp_threaded_work[tid].emplace_back(std::move(*it));
1116 }

◆ moveWorkToBuffer() [3/3]

template<typename WorkType, typename ParallelDataType >
void ParallelStudy< WorkType, ParallelDataType >::moveWorkToBuffer ( std::vector< WorkType > &  work,
const THREAD_ID  tid 
)

Definition at line 1120 of file ParallelStudy.h.

1122 {
1123  moveWorkToBuffer(work_vector.begin(), work_vector.end(), tid);
1124 }

◆ parallelDataSent()

template<typename WorkType , typename ParallelDataType >
unsigned long long int ParallelStudy< WorkType, ParallelDataType >::parallelDataSent ( ) const

Gets the total number of parallel data objects sent from this processor.

Definition at line 1166 of file ParallelStudy.h.

Referenced by PerProcessorRayTracingResultsVectorPostprocessor::execute().

1167 {
1168  unsigned long long int total_sent = 0;
1169 
1170  for (const auto & buffer : _send_buffers)
1171  total_sent += buffer.second->objectsSent();
1172 
1173  return total_sent;
1174 }

◆ poolParallelDataCreated()

template<typename WorkType , typename ParallelDataType >
unsigned long long int ParallelStudy< WorkType, ParallelDataType >::poolParallelDataCreated ( ) const

Gets the total number of parallel data created in all of the threaded pools.

Definition at line 1190 of file ParallelStudy.h.

Referenced by PerProcessorRayTracingResultsVectorPostprocessor::execute().

1191 {
1192  unsigned long long int num_created = 0;
1193 
1194  for (const auto & pool : _parallel_data_pools)
1195  num_created += pool.num_created();
1196 
1197  return num_created;
1198 }

◆ postExecuteChunk()

template<typename WorkType, typename ParallelDataType>
virtual void ParallelStudy< WorkType, ParallelDataType >::postExecuteChunk ( const work_iterator  ,
const work_iterator   
)
inline protected virtual

Insertion point for acting on work that was just executed.

This is not called in threads.

Definition at line 219 of file ParallelStudy.h.

219 {}
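
A derived class might use this hook to harvest per-object results from the chunk that just ran; a sketch in which the _results member and MyWork::result() are hypothetical:

void postExecuteChunk(const work_iterator begin, const work_iterator end) override
{
  // Called outside the threaded region, so no locking is needed here
  for (auto it = begin; it != end; ++it)
    _results.push_back(it->result());
}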

◆ postReceiveParallelData()

template<typename WorkType, typename ParallelDataType>
virtual void ParallelStudy< WorkType, ParallelDataType >::postReceiveParallelData ( const parallel_data_iterator  begin,
const parallel_data_iterator  end 
)
protected pure virtual

Pure virtual for acting on parallel data that has JUST been received and filled into the buffer.

The parallel data in the range passed here will have its use count reduced by one if it still exists after this call.
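
For example, a derived class might turn each received object into continuing work; a sketch in which makeWorkFromData() is a hypothetical conversion helper:

void postReceiveParallelData(const parallel_data_iterator begin,
                             const parallel_data_iterator end) override
{
  for (auto it = begin; it != end; ++it)
  {
    MyWork work = makeWorkFromData(**it); // hypothetical conversion from the received data
    moveContinuingWorkToBuffer(work);     // continuing work, not new work, for the completion algorithm
  }
}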

◆ postReceiveParallelDataInternal()

template<typename WorkType , typename ParallelDataType >
void ParallelStudy< WorkType, ParallelDataType >::postReceiveParallelDataInternal ( )
private

Internal method for acting on the parallel data that has just been received into the parallel buffer.

Definition at line 630 of file ParallelStudy.h.

631 {
632  if (_receive_buffer->buffer().empty())
633  return;
634 
635  // Let derived classes work on the data and then clear it after
636  postReceiveParallelData(_receive_buffer->buffer().begin(), _receive_buffer->buffer().end());
637  for (auto & data : _receive_buffer->buffer())
638  if (data)
639  data.reset();
640 
641  _receive_buffer->buffer().clear();
642 }

◆ preExecute()

template<typename WorkType , typename ParallelDataType >
void ParallelStudy< WorkType, ParallelDataType >::preExecute ( )

Pre-execute method that MUST be called before execute() and before adding work.

Definition at line 964 of file ParallelStudy.h.

965 {
966  if (!buffersAreEmpty())
967  mooseError(_name, ": Buffers are not empty in preExecute()");
968 
969  // Clear communication buffers
970  for (auto & send_buffer_pair : _send_buffers)
971  send_buffer_pair.second->clear();
972  _send_buffers.clear();
973  _receive_buffer->clear();
974 
975  // Clear counters
982 
984 }

◆ preReceiveAndExecute()

template<typename WorkType, typename ParallelDataType>
virtual void ParallelStudy< WorkType, ParallelDataType >::preReceiveAndExecute ( )
inline protected virtual

Insertion point called just after trying to receive work and just before beginning work on the work buffer.

Definition at line 225 of file ParallelStudy.h.

225 {}

◆ receiveAndExecute()

template<typename WorkType , typename ParallelDataType >
bool ParallelStudy< WorkType, ParallelDataType >::receiveAndExecute ( )
private

Receives packets of parallel data from other processors and executes work.

Definition at line 646 of file ParallelStudy.h.

647 {
648  bool executed_some = false;
649 
650  if (_receive_buffer->currentlyReceiving() && _method == ParallelStudyMethod::SMART)
651  _receive_buffer->cleanupRequests();
652  else
653  _receive_buffer->receive();
654 
656 
658 
659  while (!_work_buffer->empty())
660  {
661  executed_some = true;
662 
663  // Switch between tracing a chunk and buffering with SMART
665  {
666  // Look for extra work first so that these transfers can be finishing while we're executing
667  // Start receives only if our work buffer is decently sized
668  const bool start_receives_only = _work_buffer->size() > (2 * _chunk_size);
669  _receive_buffer->receive(_work_buffer->size() > (2 * _chunk_size));
670  if (!start_receives_only)
672 
673  // Execute some objects
675  }
676  // Execute all of them and then buffer with the other methods
677  else
679  }
680 
681  return executed_some;
682 }

◆ receiveBuffer()

template<typename WorkType, typename ParallelDataType>
const ReceiveBuffer<ParallelDataType, ParallelStudy<WorkType, ParallelDataType> >& ParallelStudy< WorkType, ParallelDataType >::receiveBuffer ( ) const
inline

Gets the receive buffer.

Definition at line 88 of file ParallelStudy.h.

Referenced by PerProcessorRayTracingResultsVectorPostprocessor::execute().

89  {
90  return *_receive_buffer;
91  }

◆ reserveBuffer()

template<typename WorkType , typename ParallelDataType >
void ParallelStudy< WorkType, ParallelDataType >::reserveBuffer ( const std::size_t  size)

Reserve size entries in the work buffer.

This can only be used during the pre-execution phase (between preExecute() and execute()).

This is particularly useful when one wants to move many work objects into the buffer using moveWorkToBuffer() and wants to allocate the space ahead of time.

Definition at line 618 of file ParallelStudy.h.

619 {
620  if (!_currently_pre_executing)
621  mooseError(_name, ": Can only reserve in object buffer during pre-execution");
622 
623  // We don't ever want to decrease the capacity, so only set if we need more entries
624  if (_work_buffer->capacity() < size)
625  _work_buffer->setCapacity(size);
626 }
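
For example, a pre-execution phase that adds a known number of work objects might reserve the space once up front. The sketch below is illustrative only: my_study, num_objects, and make_work() are placeholders, and the moveWorkToBuffer(work, tid) overload is assumed from the description above.

my_study.preExecute();

// Reserve once so repeated additions do not grow the buffer incrementally
my_study.reserveBuffer(num_objects);

for (std::size_t i = 0; i < num_objects; ++i)
{
  auto work = make_work(i);                       // placeholder work factory
  my_study.moveWorkToBuffer(work, /* tid = */ 0); // assumed overload
}

my_study.execute();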

◆ sendBufferPoolCreated()

template<typename WorkType , typename ParallelDataType >
unsigned long long int ParallelStudy< WorkType, ParallelDataType >::sendBufferPoolCreated ( ) const

Gets the total number of send buffer pools created.

Definition at line 1154 of file ParallelStudy.h.

Referenced by PerProcessorRayTracingResultsVectorPostprocessor::execute().

1155 {
1156  unsigned long long int total = 0;
1157 
1158  for (const auto & buffer : _send_buffers)
1159  total += buffer.second->bufferPoolCreated();
1160 
1161  return total;
1162 }

◆ smartExecute()

template<typename WorkType , typename ParallelDataType >
void ParallelStudy< WorkType, ParallelDataType >::smartExecute ( )
private

Execute work using SMART.

Definition at line 686 of file ParallelStudy.h.

687 {
688  mooseAssert(_method == ParallelStudyMethod::SMART, "Should be called with SMART only");
689 
690  // Request for the sum of the started work
691  Parallel::Request started_request;
692  // Request for the sum of the completed work
693  Parallel::Request completed_request;
694 
695  // Temp for use in sending the current value in a nonblocking sum instead of an updated value
696  unsigned long long int temp;
697 
698  // Whether or not to make the started request first, or after every finished request.
699  // When allowing adding new work during the execution phase, the starting object counts could
700  // change after right now, so we must update them after each finished request is complete.
701  // When not allowing generation during propagation, we know the counts up front.
702  const bool started_request_first = !_allow_new_work_during_execution;
703 
704  // Get the amount of work that was started in the whole domain, if applicable
705  if (started_request_first)
706  comm().sum(_local_work_started, _total_work_started, started_request);
707 
708  // Whether or not the started request has been made
709  bool made_started_request = started_request_first;
710  // Whether or not the completed request has been made
711  bool made_completed_request = false;
712 
713  // Good time to get rid of whatever's currently in our SendBuffers
714  flushSendBuffers();
715 
716  // Use these to try to delay some forced communication
717  unsigned int non_executing_clicks = 0;
718  unsigned int non_executing_root_clicks = 0;
719  bool executed_some = true;
720 
721  // Keep executing work until it has all completed
722  while (true)
723  {
724  executed_some = receiveAndExecute();
725 
726  if (executed_some)
727  {
728  non_executing_clicks = 0;
729  non_executing_root_clicks = 0;
730  }
731  else
732  {
733  non_executing_clicks++;
734  non_executing_root_clicks++;
735  }
736 
737  if (non_executing_clicks >= _clicks_per_communication)
738  {
739  non_executing_clicks = 0;
740 
741  flushSendBuffers();
742  }
743 
744  if (_has_alternate_ending_criteria)
745  {
746  if (buffersAreEmpty() && alternateSmartEndingCriteriaMet())
747  {
748  comm().barrier();
749  return;
750  }
751  }
752  else if (non_executing_root_clicks >= _clicks_per_root_communication)
753  {
754  non_executing_root_clicks = 0;
755 
756  // We need the starting work sum first but said request isn't complete yet
757  if (started_request_first && !started_request.test())
758  continue;
759 
760  // At this point, we need to make a request for the completed work sum
761  if (!made_completed_request)
762  {
763  made_completed_request = true;
764  temp = _local_work_completed;
765  comm().sum(temp, _total_work_completed, completed_request);
766  continue;
767  }
768 
769  // We have the completed work sum
770  if (completed_request.test())
771  {
772  // The starting work sum must be requested /after/ we have finishing counts and we
773  // need to make the request for said sum
774  if (!made_started_request)
775  {
776  made_started_request = true;
777  temp = _local_work_started;
778  comm().sum(temp, _total_work_started, started_request);
779  continue;
780  }
781 
782  // The starting work sum must be requested /after/ we have finishing sum and we
783  // don't have the starting sum yet
784  if (!started_request_first && !started_request.test())
785  continue;
786 
787  // Started count is the same as the finished count - we're done!
788  if (_total_work_started == _total_work_completed)
789  return;
790 
791  // Next time around we should make a completed sum request
792  made_completed_request = false;
793  // If we need the starting work sum after the completed work sum, we need those now as well
794  if (!started_request_first)
795  made_started_request = false;
796  }
797  }
798  }
799 }
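
The termination handshake above relies on libMesh's nonblocking reductions: each rank contributes its local counters to a sum and keeps working while the request is in flight. The stripped-down sketch below shows just that handshake, using the same Parallel::Request and comm().sum() calls as the listing; it is illustrative only and blocks with wait(), whereas smartExecute() polls with test() so it can keep executing work in the meantime.

#include "libmesh/parallel.h"

// Returns true once every piece of started work has been completed globally.
bool all_work_done(const libMesh::Parallel::Communicator & comm,
                   unsigned long long int local_started,
                   unsigned long long int local_completed)
{
  unsigned long long int total_started = 0, total_completed = 0;
  libMesh::Parallel::Request started_request, completed_request;

  // Nonblocking sums: the totals become valid once the requests complete
  comm.sum(local_started, total_started, started_request);
  comm.sum(local_completed, total_completed, completed_request);

  started_request.wait();
  completed_request.wait();

  return total_started == total_completed;
}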

◆ totalWorkCompleted()

template<typename WorkType, typename ParallelDataType>
unsigned long long int ParallelStudy< WorkType, ParallelDataType >::totalWorkCompleted ( ) const
inline

Gets the total amount of work completed across all processors.

Definition at line 126 of file ParallelStudy.h.

Referenced by RayTracingStudyResult::getValue().

126 { return _total_work_completed; }

◆ validParams()

template<typename WorkType , typename ParallelDataType >
InputParameters ParallelStudy< WorkType, ParallelDataType >::validParams ( )
static

Definition at line 436 of file ParallelStudy.h.

437 {
438  auto params = emptyInputParameters();
439 
440  params.addRangeCheckedParam<unsigned int>(
441  "send_buffer_size", 100, "send_buffer_size > 0", "The size of the send buffer");
442  params.addRangeCheckedParam<unsigned int>(
443  "chunk_size",
444  100,
445  "chunk_size > 0",
446  "The number of objects to process at one time during execution");
447  params.addRangeCheckedParam<unsigned int>("clicks_per_communication",
448  10,
449  "clicks_per_communication >= 0",
450  "Iterations to wait before communicating");
451  params.addRangeCheckedParam<unsigned int>("clicks_per_root_communication",
452  10,
453  "clicks_per_root_communication > 0",
454  "Iterations to wait before communicating with root");
455  params.addRangeCheckedParam<unsigned int>("clicks_per_receive",
456  1,
457  "clicks_per_receive > 0",
458  "Iterations to wait before checking for new objects");
459 
460  params.addParam<unsigned int>("min_buffer_size",
461  "The initial size of the SendBuffer and the floor for shrinking "
462  "it. This defaults to send_buffer_size if not set (i.e. the "
463  "buffer won't change size)");
464  params.addParam<Real>("buffer_growth_multiplier",
465  2.,
466  "How much to grow a SendBuffer by if the buffer completely fills and "
467  "dumps. Will max at send_buffer_size");
468  params.addRangeCheckedParam<Real>("buffer_shrink_multiplier",
469  0.5,
470  "0 < buffer_shrink_multiplier <= 1.0",
471  "Multiplier (between 0 and 1) to apply to the current buffer "
472  "size if it is force dumped. Will stop at "
473  "min_buffer_size.");
474 
475  params.addParam<bool>(
476  "allow_new_work_during_execution",
477  true,
478  "Whether or not to allow the addition of new work to the work buffer during execution");
479 
480  MooseEnum methods("smart harm bs", "smart");
481  params.addParam<MooseEnum>("method", methods, "The algorithm to use");
482 
483  MooseEnum work_buffers("lifo circular", "circular");
484  params.addParam<MooseEnum>("work_buffer_type", work_buffers, "The work buffer type to use");
485 
486  params.addParamNamesToGroup(
487  "send_buffer_size chunk_size clicks_per_communication clicks_per_root_communication "
488  "clicks_per_receive min_buffer_size buffer_growth_multiplier buffer_shrink_multiplier method "
489  "work_buffer_type allow_new_work_during_execution",
490  "Advanced");
491 
492  return params;
493 }
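
Derived studies typically start from these parameters and append their own. A brief sketch follows; MyStudy and its "max_passes" parameter are hypothetical.

template <typename WorkType, typename ParallelDataType>
InputParameters
MyStudy<WorkType, ParallelDataType>::validParams()
{
  // Inherit send_buffer_size, chunk_size, method, etc. from the base class
  auto params = ParallelStudy<WorkType, ParallelDataType>::validParams();

  // Hypothetical study-specific option
  params.addParam<unsigned int>("max_passes", 10, "Maximum number of execution passes");

  return params;
}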

◆ workBuffer()

template<typename WorkType, typename ParallelDataType>
const MooseUtils::Buffer<WorkType>& ParallelStudy< WorkType, ParallelDataType >::workBuffer ( ) const
inline

Gets the work buffer.

Definition at line 96 of file ParallelStudy.h.

96 { return *_work_buffer; }

◆ workIsComplete()

template<typename WorkType, typename ParallelDataType>
virtual bool ParallelStudy< WorkType, ParallelDataType >::workIsComplete ( const WorkType &  )
inlineprotectedvirtual

Can be overridden to denote that a piece of work is not yet complete.

The "complete" terminology is used within the execution algorithms to determine whether the study is complete.

Reimplemented in ParallelRayStudy.

Definition at line 243 of file ParallelStudy.h.

243 { return true; }
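
A hedged fragment of a possible override follows; the work type and its shouldContinue() query are placeholders for illustration, not the actual ParallelRayStudy reimplementation.

class MyStudy : public ParallelStudy<MyWork, MyParallelData>
{
protected:
  bool workIsComplete(const MyWork & work) override
  {
    // Work that still wants to keep going is not counted as complete,
    // so the execution algorithms will continue to account for it.
    return !work.shouldContinue();
  }

  // ... remaining required overrides omitted ...
};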

Member Data Documentation

◆ _allow_new_work_during_execution

template<typename WorkType, typename ParallelDataType>
const bool ParallelStudy< WorkType, ParallelDataType >::_allow_new_work_during_execution
private

Whether or not to allow the addition of new work to the buffer during execution.

Definition at line 322 of file ParallelStudy.h.

Referenced by ParallelStudy< std::shared_ptr< Ray >, Ray >::ParallelStudy().

◆ _buffer_growth_multiplier

template<typename WorkType, typename ParallelDataType>
const Real ParallelStudy< WorkType, ParallelDataType >::_buffer_growth_multiplier
private

Multiplier for the buffer size for growing the buffer.

Definition at line 316 of file ParallelStudy.h.

◆ _buffer_shrink_multiplier

template<typename WorkType, typename ParallelDataType>
const Real ParallelStudy< WorkType, ParallelDataType >::_buffer_shrink_multiplier
private

Multiplier for the buffer size for shrinking the buffer.

Definition at line 318 of file ParallelStudy.h.

◆ _chunk_size

template<typename WorkType, typename ParallelDataType>
const unsigned int ParallelStudy< WorkType, ParallelDataType >::_chunk_size
private

Number of objects to execute at once during communication.

Definition at line 320 of file ParallelStudy.h.

Referenced by ParallelStudy< std::shared_ptr< Ray >, Ray >::chunkSize().

◆ _clicks_per_communication

template<typename WorkType, typename ParallelDataType>
const unsigned int ParallelStudy< WorkType, ParallelDataType >::_clicks_per_communication
private

Iterations to wait before communicating.

Definition at line 325 of file ParallelStudy.h.

Referenced by ParallelStudy< std::shared_ptr< Ray >, Ray >::clicksPerCommunication().

◆ _clicks_per_receive

template<typename WorkType, typename ParallelDataType>
const unsigned int ParallelStudy< WorkType, ParallelDataType >::_clicks_per_receive
private

Iterations to wait before checking for new objects.

Definition at line 329 of file ParallelStudy.h.

Referenced by ParallelStudy< std::shared_ptr< Ray >, Ray >::clicksPerReceive().

◆ _clicks_per_root_communication

template<typename WorkType, typename ParallelDataType>
const unsigned int ParallelStudy< WorkType, ParallelDataType >::_clicks_per_root_communication
private

Iterations to wait before communicating with root.

Definition at line 327 of file ParallelStudy.h.

Referenced by ParallelStudy< std::shared_ptr< Ray >, Ray >::clicksPerRootCommunication().

◆ _currently_executing

template<typename WorkType, typename ParallelDataType>
bool ParallelStudy< WorkType, ParallelDataType >::_currently_executing
private

Whether we are within execute()

Definition at line 362 of file ParallelStudy.h.

Referenced by ParallelStudy< std::shared_ptr< Ray >, Ray >::currentlyExecuting().

◆ _currently_executing_work

template<typename WorkType, typename ParallelDataType>
bool ParallelStudy< WorkType, ParallelDataType >::_currently_executing_work
private

Whether or not we are currently within executeAndBuffer()

Definition at line 366 of file ParallelStudy.h.

◆ _currently_pre_executing

template<typename WorkType, typename ParallelDataType>
bool ParallelStudy< WorkType, ParallelDataType >::_currently_pre_executing
private

Whether we are between preExecute() and execute()

Definition at line 364 of file ParallelStudy.h.

Referenced by ParallelStudy< std::shared_ptr< Ray >, Ray >::currentlyPreExecuting().

◆ _has_alternate_ending_criteria

template<typename WorkType, typename ParallelDataType>
bool ParallelStudy< WorkType, ParallelDataType >::_has_alternate_ending_criteria
protected

Whether or not this object has alternate ending criteria.

Definition at line 269 of file ParallelStudy.h.

◆ _local_chunks_executed

template<typename WorkType, typename ParallelDataType>
unsigned long long int ParallelStudy< WorkType, ParallelDataType >::_local_chunks_executed
private

Number of chunks of work executed on this processor.

Definition at line 349 of file ParallelStudy.h.

Referenced by ParallelStudy< std::shared_ptr< Ray >, Ray >::localChunksExecuted().

◆ _local_work_completed

template<typename WorkType, typename ParallelDataType>
unsigned long long int ParallelStudy< WorkType, ParallelDataType >::_local_work_completed
private

Amount of work completed on this processor.

Definition at line 351 of file ParallelStudy.h.

◆ _local_work_executed

template<typename WorkType, typename ParallelDataType>
unsigned long long int ParallelStudy< WorkType, ParallelDataType >::_local_work_executed
private

Amount of work executed on this processor.

Definition at line 355 of file ParallelStudy.h.

Referenced by ParallelStudy< std::shared_ptr< Ray >, Ray >::localWorkExecuted().

◆ _local_work_started

template<typename WorkType, typename ParallelDataType>
unsigned long long int ParallelStudy< WorkType, ParallelDataType >::_local_work_started
private

Amount of work started on this processor.

Definition at line 353 of file ParallelStudy.h.

Referenced by ParallelStudy< std::shared_ptr< Ray >, Ray >::localWorkStarted().

◆ _max_buffer_size

template<typename WorkType, typename ParallelDataType>
const unsigned int ParallelStudy< WorkType, ParallelDataType >::_max_buffer_size
private

Number of objects to buffer before communication.

Definition at line 314 of file ParallelStudy.h.

Referenced by ParallelStudy< std::shared_ptr< Ray >, Ray >::maxBufferSize().

◆ _method

template<typename WorkType, typename ParallelDataType>
const ParallelStudyMethod ParallelStudy< WorkType, ParallelDataType >::_method
protected

The study method.

◆ _min_buffer_size

template<typename WorkType, typename ParallelDataType>
const unsigned int ParallelStudy< WorkType, ParallelDataType >::_min_buffer_size
private

Minimum size of a SendBuffer.

Definition at line 312 of file ParallelStudy.h.

◆ _name

template<typename WorkType, typename ParallelDataType>
const std::string ParallelStudy< WorkType, ParallelDataType >::_name
protected

Name for this object for use in error handling.

Definition at line 263 of file ParallelStudy.h.

Referenced by ParallelStudy< std::shared_ptr< Ray >, Ray >::ParallelStudy().

◆ _parallel_data_buffer_tag

template<typename WorkType, typename ParallelDataType>
Parallel::MessageTag ParallelStudy< WorkType, ParallelDataType >::_parallel_data_buffer_tag
private

MessageTag for sending parallel data.

Definition at line 332 of file ParallelStudy.h.

◆ _parallel_data_pools

template<typename WorkType, typename ParallelDataType>
std::vector<MooseUtils::SharedPool<ParallelDataType> > ParallelStudy< WorkType, ParallelDataType >::_parallel_data_pools
private

Pools for re-using destructed parallel data objects (one for each thread)

Definition at line 334 of file ParallelStudy.h.

Referenced by ParallelStudy< std::shared_ptr< Ray >, Ray >::acquireParallelData().

◆ _params

template<typename WorkType, typename ParallelDataType>
const InputParameters& ParallelStudy< WorkType, ParallelDataType >::_params
protected

The InputParameters.

Definition at line 265 of file ParallelStudy.h.

◆ _pid

template<typename WorkType, typename ParallelDataType>
const processor_id_type ParallelStudy< WorkType, ParallelDataType >::_pid
protected

This rank.

Definition at line 261 of file ParallelStudy.h.

◆ _receive_buffer

template<typename WorkType, typename ParallelDataType>
const std::unique_ptr<ReceiveBuffer<ParallelDataType, ParallelStudy<WorkType, ParallelDataType> > > ParallelStudy< WorkType, ParallelDataType >::_receive_buffer
private

The receive buffer.

Definition at line 341 of file ParallelStudy.h.

Referenced by ParallelStudy< std::shared_ptr< Ray >, Ray >::receiveBuffer().

◆ _send_buffers

template<typename WorkType, typename ParallelDataType>
std::unordered_map< processor_id_type, std::unique_ptr<SendBuffer<ParallelDataType, ParallelStudy<WorkType, ParallelDataType> > > > ParallelStudy< WorkType, ParallelDataType >::_send_buffers
private

Send buffers for each processor.

Definition at line 346 of file ParallelStudy.h.

◆ _temp_threaded_work

template<typename WorkType, typename ParallelDataType>
std::vector<std::vector<WorkType> > ParallelStudy< WorkType, ParallelDataType >::_temp_threaded_work
private

Threaded temporary storage for work added while we're using the _work_buffer (one for each thread)

Definition at line 336 of file ParallelStudy.h.

◆ _total_work_completed

template<typename WorkType, typename ParallelDataType>
unsigned long long int ParallelStudy< WorkType, ParallelDataType >::_total_work_completed
private

Amount of work completed on all processors.

Definition at line 359 of file ParallelStudy.h.

Referenced by ParallelStudy< std::shared_ptr< Ray >, Ray >::totalWorkCompleted().

◆ _total_work_started

template<typename WorkType, typename ParallelDataType>
unsigned long long int ParallelStudy< WorkType, ParallelDataType >::_total_work_started
private

Amount of work started on all processors.

Definition at line 357 of file ParallelStudy.h.

◆ _work_buffer

template<typename WorkType, typename ParallelDataType>
const std::unique_ptr<MooseUtils::Buffer<WorkType> > ParallelStudy< WorkType, ParallelDataType >::_work_buffer
private

Buffer for executing work.

Definition at line 338 of file ParallelStudy.h.

Referenced by ParallelStudy< std::shared_ptr< Ray >, Ray >::workBuffer().


The documentation for this class was generated from the following file: