libMesh::MeshCommunication Class Reference

This is the MeshCommunication class. More...

#include <mesh_communication.h>

Public Member Functions

 MeshCommunication ()
 Constructor. More...
 
 ~MeshCommunication ()
 Destructor. More...
 
void clear ()
 Clears all data structures and resets to a pristine state. More...
 
void broadcast (MeshBase &) const
 This method takes a mesh (which is assumed to reside on processor 0) and broadcasts it to all the other processors. More...
 
void redistribute (DistributedMesh &mesh, bool newly_coarsened_only=false) const
 This method takes a parallel distributed mesh and redistributes the elements. More...
 
void gather_neighboring_elements (DistributedMesh &) const
 
void send_coarse_ghosts (MeshBase &) const
 Examine a just-coarsened mesh, and for any newly-coarsened elements, send the associated ghosted elements to the processor which needs them. More...
 
void gather (const processor_id_type root_id, DistributedMesh &) const
 This method takes an input DistributedMesh which may be distributed among all the processors. More...
 
void allgather (DistributedMesh &mesh) const
 This method takes an input DistributedMesh which may be distributed among all the processors. More...
 
void delete_remote_elements (DistributedMesh &, const std::set< Elem * > &) const
 This method takes an input DistributedMesh which may be distributed among all the processors. More...
 
void assign_global_indices (MeshBase &) const
 This method assigns globally unique, partition-agnostic indices to the nodes and elements in the mesh. More...
 
void check_for_duplicate_global_indices (MeshBase &) const
 Throw an error if we have any index clashes in the numbering used by assign_global_indices. More...
 
template<typename ForwardIterator >
void find_local_indices (const libMesh::BoundingBox &, const ForwardIterator &, const ForwardIterator &, std::unordered_map< dof_id_type, dof_id_type > &) const
 This method determines a locally unique, contiguous index for each object in the input range. More...
 
template<typename ForwardIterator >
void find_global_indices (const Parallel::Communicator &communicator, const libMesh::BoundingBox &, const ForwardIterator &, const ForwardIterator &, std::vector< dof_id_type > &) const
 This method determines a globally unique, partition-agnostic index for each object in the input range. More...
 
void make_elems_parallel_consistent (MeshBase &)
 Copy ids of ghost elements from their local processors. More...
 
void make_p_levels_parallel_consistent (MeshBase &)
 Copy p levels of ghost elements from their local processors. More...
 
void make_node_ids_parallel_consistent (MeshBase &)
 Assuming all ids on local nodes are globally unique, and assuming all processor ids are parallel consistent, this function makes all other ids parallel consistent. More...
 
void make_node_unique_ids_parallel_consistent (MeshBase &)
 Assuming all unique_ids on local nodes are globally unique, and assuming all processor ids are parallel consistent, this function makes all ghost unique_ids parallel consistent. More...
 
void make_node_proc_ids_parallel_consistent (MeshBase &)
 Assuming all processor ids on nodes touching local elements are parallel consistent, this function makes all other processor ids parallel consistent as well. More...
 
void make_new_node_proc_ids_parallel_consistent (MeshBase &)
 Assuming all processor ids on nodes touching local elements are parallel consistent, this function makes processor ids on new nodes on other processors parallel consistent as well. More...
 
void make_nodes_parallel_consistent (MeshBase &)
 Copy processor_ids and ids on ghost nodes from their local processors. More...
 
void make_new_nodes_parallel_consistent (MeshBase &)
 Copy processor_ids and ids on new nodes from their local processors. More...
 

Detailed Description

This is the MeshCommunication class.

It handles all the details of communicating mesh information from one processor to another. All parallelization of the Mesh data structures is done via this class.

Author
Benjamin S. Kirk
Date
2003

Definition at line 50 of file mesh_communication.h.
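
A minimal usage sketch (not from the libMesh sources; the input file name is hypothetical and the libmesh/-prefixed header paths are the usual convention, assumed here):

    #include "libmesh/libmesh.h"
    #include "libmesh/distributed_mesh.h"
    #include "libmesh/mesh_communication.h"

    using namespace libMesh;

    int main (int argc, char ** argv)
    {
      LibMeshInit init (argc, argv);

      // A mesh distributed across the processors of init.comm()
      DistributedMesh mesh (init.comm());
      mesh.read ("input.e"); // hypothetical input file

      // Serialize the full mesh on every processor (collective)
      MeshCommunication mc;
      mc.allgather (mesh);

      return 0;
    }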

Constructor & Destructor Documentation

◆ MeshCommunication()

libMesh::MeshCommunication::MeshCommunication ( )
inline

Constructor.

Definition at line 57 of file mesh_communication.h.

57 {}

◆ ~MeshCommunication()

libMesh::MeshCommunication::~MeshCommunication ( )
inline

Destructor.

Definition at line 62 of file mesh_communication.h.

62 {}

Member Function Documentation

◆ allgather()

void libMesh::MeshCommunication::allgather ( DistributedMesh &  mesh) const
inline

This method takes an input DistributedMesh which may be distributed among all the processors.

Each processor then sends its local nodes and elements to the other processors. The end result is that a previously distributed DistributedMesh will be serialized on each processor. Since this method is collective it must be called by all processors.
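
For example, a sketch (assuming a DistributedMesh named mesh that has already been read or generated in parallel):

    MeshCommunication mc;
    mc.allgather (mesh); // collective: every processor now stores
                         // all nodes and elements of the mesh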

Definition at line 135 of file mesh_communication.h.

References gather(), libMesh::DofObject::invalid_processor_id, and mesh.

◆ assign_global_indices()

void libMesh::MeshCommunication::assign_global_indices ( MeshBase &  mesh) const

This method assigns globally unique, partition-agnostic indices to the nodes and elements in the mesh.

The approach is to compute the Hilbert space-filling curve key and use its value to assign an index in [0,N_global). Since the Hilbert key is unique for each spatial location, two objects occupying the same location will be assigned the same global id. Thus, this method can also be useful for identifying duplicate nodes which may occur during parallel refinement.
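
A short usage sketch (mesh setup omitted); the DEBUG-only call is the optional clash check documented under check_for_duplicate_global_indices() below:

    MeshCommunication mc;

    // Collective: assign Hilbert-key-based, partition-agnostic ids
    mc.assign_global_indices (mesh);

    #ifdef DEBUG
    // Optional sanity check for duplicate keys
    mc.check_for_duplicate_global_indices (mesh);
    #endif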

Definition at line 178 of file mesh_communication_global_indices.C.

179 {
180  LOG_SCOPE ("assign_global_indices()", "MeshCommunication");
181 
182  // This method determines partition-agnostic global indices
183  // for nodes and elements.
184 
185  // Algorithm:
186  // (1) compute the Hilbert key for each local node/element
187  // (2) perform a parallel sort of the Hilbert key
188  // (3) get the min/max value on each processor
189  // (4) determine the position in the global ranking for
190  // each local object
191 
192  const Parallel::Communicator & communicator (mesh.comm());
193 
194  // Global bounding box. We choose the nodal bounding box for
195  // backwards compatibility; the element bounding box may be looser
196  // on curved elements.
197  BoundingBox bbox =
198    MeshTools::create_nodal_bounding_box (mesh);
200  //-------------------------------------------------------------
201  // (1) compute Hilbert keys
202  std::vector<Parallel::DofObjectKey>
203  node_keys, elem_keys;
204 
205  {
206  // Nodes first
207  {
208  ConstNodeRange nr (mesh.local_nodes_begin(),
209  mesh.local_nodes_end());
210  node_keys.resize (nr.size());
211  Threads::parallel_for (nr, ComputeHilbertKeys (bbox, node_keys));
212 
213  // // It's O(N^2) to check that these keys don't duplicate before the
214  // // sort...
215  // MeshBase::const_node_iterator nodei = mesh.local_nodes_begin();
216  // for (std::size_t i = 0; i != node_keys.size(); ++i, ++nodei)
217  // {
218  // MeshBase::const_node_iterator nodej = mesh.local_nodes_begin();
219  // for (std::size_t j = 0; j != i; ++j, ++nodej)
220  // {
221  // if (node_keys[i] == node_keys[j])
222  // {
223  // CFixBitVec icoords[3], jcoords[3];
224  // get_hilbert_coords(**nodej, bbox, jcoords);
225  // libMesh::err << "node " << (*nodej)->id() << ", " << static_cast<Point &>(**nodej) << " has HilbertIndices " << node_keys[j] << std::endl;
226  // get_hilbert_coords(**nodei, bbox, icoords);
227  // libMesh::err << "node " << (*nodei)->id() << ", " << static_cast<Point &>(**nodei) << " has HilbertIndices " << node_keys[i] << std::endl;
228  // libmesh_error_msg("Error: nodes with duplicate Hilbert keys!");
229  // }
230  // }
231  // }
232  }
233 
234  // Elements next
235  {
236  ConstElemRange er (mesh.local_elements_begin(),
237  mesh.local_elements_end());
238  elem_keys.resize (er.size());
239  Threads::parallel_for (er, ComputeHilbertKeys (bbox, elem_keys));
240 
241  // // For elements, the keys can be (and in the case of TRI, are
242  // // expected to be) duplicates, but only if the elements are at
243  // // different levels
244  // MeshBase::const_element_iterator elemi = mesh.local_elements_begin();
245  // for (std::size_t i = 0; i != elem_keys.size(); ++i, ++elemi)
246  // {
247  // MeshBase::const_element_iterator elemj = mesh.local_elements_begin();
248  // for (std::size_t j = 0; j != i; ++j, ++elemj)
249  // {
250  // if ((elem_keys[i] == elem_keys[j]) &&
251  // ((*elemi)->level() == (*elemj)->level()))
252  // {
253  // libMesh::err << "level " << (*elemj)->level()
254  // << " elem\n" << (**elemj)
255  // << " centroid " << (*elemj)->centroid()
256  // << " has HilbertIndices " << elem_keys[j]
257  // << " or " << get_dofobject_key((*elemj), bbox)
258  // << std::endl;
259  // libMesh::err << "level " << (*elemi)->level()
260  // << " elem\n" << (**elemi)
261  // << " centroid " << (*elemi)->centroid()
262  // << " has HilbertIndices " << elem_keys[i]
263  // << " or " << get_dofobject_key((*elemi), bbox)
264  // << std::endl;
265  // libmesh_error_msg("Error: level " << (*elemi)->level() << " elements with duplicate Hilbert keys!");
266  // }
267  // }
268  // }
269  }
270  } // done computing Hilbert keys
271 
272 
273 
274  //-------------------------------------------------------------
275  // (2) parallel sort the Hilbert keys
276  Parallel::Sort<Parallel::DofObjectKey> node_sorter (communicator,
277  node_keys);
278  node_sorter.sort(); /* done with node_keys */ //node_keys.clear();
279 
280  const std::vector<Parallel::DofObjectKey> & my_node_bin =
281  node_sorter.bin();
282 
283  Parallel::Sort<Parallel::DofObjectKey> elem_sorter (communicator,
284  elem_keys);
285  elem_sorter.sort(); /* done with elem_keys */ //elem_keys.clear();
286 
287  const std::vector<Parallel::DofObjectKey> & my_elem_bin =
288  elem_sorter.bin();
289 
290 
291 
292  //-------------------------------------------------------------
293  // (3) get the max value on each processor
294  std::vector<Parallel::DofObjectKey>
295  node_upper_bounds(communicator.size()),
296  elem_upper_bounds(communicator.size());
297 
298  { // limit scope of temporaries
299  std::vector<Parallel::DofObjectKey> recvbuf(2*communicator.size());
300  std::vector<unsigned short int> /* do not use a vector of bools here since it is not always so! */
301  empty_nodes (communicator.size()),
302  empty_elem (communicator.size());
303  std::vector<Parallel::DofObjectKey> my_max(2);
304 
305  communicator.allgather (static_cast<unsigned short int>(my_node_bin.empty()), empty_nodes);
306  communicator.allgather (static_cast<unsigned short int>(my_elem_bin.empty()), empty_elem);
307 
308  if (!my_node_bin.empty()) my_max[0] = my_node_bin.back();
309  if (!my_elem_bin.empty()) my_max[1] = my_elem_bin.back();
310 
311  communicator.allgather (my_max, /* identical_buffer_sizes = */ true);
312 
313  // Be careful here. The *_upper_bounds will be used to find the processor
314  // a given object belongs to. So, if a processor contains no objects (possible!)
315  // then copy the bound from the lower processor id.
316  for (auto p : IntRange<processor_id_type>(0, communicator.size()))
317  {
318  node_upper_bounds[p] = my_max[2*p+0];
319  elem_upper_bounds[p] = my_max[2*p+1];
320 
321  if (p > 0) // default hilbert index value is the OK upper bound for processor 0.
322  {
323  if (empty_nodes[p]) node_upper_bounds[p] = node_upper_bounds[p-1];
324  if (empty_elem[p]) elem_upper_bounds[p] = elem_upper_bounds[p-1];
325  }
326  }
327  }
328 
329 
330 
331  //-------------------------------------------------------------
332  // (4) determine the position in the global ranking for
333  // each local object
334  {
335  //----------------------------------------------
336  // Nodes first -- all nodes, not just local ones
337  {
338  // Request sets to send to each processor
339  std::map<processor_id_type, std::vector<Parallel::DofObjectKey>>
340  requested_ids;
341  // Results to gather from each processor - kept in a map so we
342  // do only one loop over nodes after all receives are done.
343  std::map<dof_id_type, std::vector<dof_id_type>>
344  filled_request;
345 
346  // build up list of requests
347  for (const auto & node : mesh.node_ptr_range())
348  {
349  libmesh_assert(node);
350  const Parallel::DofObjectKey hi =
351  get_dofobject_key (node, bbox);
352  const processor_id_type pid =
353  cast_int<processor_id_type>
354  (std::distance (node_upper_bounds.begin(),
355  std::lower_bound(node_upper_bounds.begin(),
356  node_upper_bounds.end(),
357  hi)));
358 
359  libmesh_assert_less (pid, communicator.size());
360 
361  requested_ids[pid].push_back(hi);
362  }
363 
364  // The number of objects in my_node_bin on each processor
365  std::vector<dof_id_type> node_bin_sizes(communicator.size());
366  communicator.allgather (static_cast<dof_id_type>(my_node_bin.size()), node_bin_sizes);
367 
368  // The offset of my first global index
369  dof_id_type my_offset = 0;
370  for (auto pid : IntRange<processor_id_type>(0, communicator.rank()))
371  my_offset += node_bin_sizes[pid];
372 
373  auto gather_functor =
374  [
375 #ifndef NDEBUG
376  & node_upper_bounds,
377  & communicator,
378 #endif
379  & my_node_bin,
380  my_offset
381  ]
382  (processor_id_type,
383  const std::vector<Parallel::DofObjectKey> & keys,
384  std::vector<dof_id_type> & global_ids)
385  {
386  // Fill the requests
387  const std::size_t keys_size = keys.size();
388  global_ids.reserve(keys_size);
389  for (std::size_t idx=0; idx != keys_size; idx++)
390  {
391  const Parallel::DofObjectKey & hi = keys[idx];
392  libmesh_assert_less_equal (hi, node_upper_bounds[communicator.rank()]);
393 
394  // find the requested index in my node bin
395  std::vector<Parallel::DofObjectKey>::const_iterator pos =
396  std::lower_bound (my_node_bin.begin(), my_node_bin.end(), hi);
397  libmesh_assert (pos != my_node_bin.end());
398  libmesh_assert_equal_to (*pos, hi);
399 
400  // Finally, assign the global index based off the position of the index
401  // in my array, properly offset.
402  global_ids.push_back(cast_int<dof_id_type>(std::distance(my_node_bin.begin(), pos) + my_offset));
403  }
404  };
405 
406  auto action_functor =
407  [&filled_request]
408  (processor_id_type pid,
409  const std::vector<Parallel::DofObjectKey> &,
410  const std::vector<dof_id_type> & global_ids)
411  {
412  filled_request[pid] = global_ids;
413  };
414 
415  // Trade requests with other processors
416  const dof_id_type * ex = nullptr;
417  Parallel::pull_parallel_vector_data
418  (communicator, requested_ids, gather_functor, action_functor, ex);
419 
420  // We now have all the filled requests, so we can loop through our
421  // nodes once and assign the global index to each one.
422  {
423  std::map<dof_id_type, std::vector<dof_id_type>::const_iterator>
424  next_obj_on_proc;
425  for (auto & p : filled_request)
426  next_obj_on_proc[p.first] = p.second.begin();
427 
428  for (auto & node : mesh.node_ptr_range())
429  {
430  libmesh_assert(node);
431  const Parallel::DofObjectKey hi =
432  get_dofobject_key (node, bbox);
433  const processor_id_type pid =
434  cast_int<processor_id_type>
435  (std::distance (node_upper_bounds.begin(),
436  std::lower_bound(node_upper_bounds.begin(),
437  node_upper_bounds.end(),
438  hi)));
439 
440  libmesh_assert_less (pid, communicator.size());
441  libmesh_assert (next_obj_on_proc[pid] != filled_request[pid].end());
442 
443  const dof_id_type global_index = *next_obj_on_proc[pid];
444  libmesh_assert_less (global_index, mesh.n_nodes());
445  node->set_id() = global_index;
446 
447  ++next_obj_on_proc[pid];
448  }
449  }
450  }
451 
452  //---------------------------------------------------
453  // elements next -- all elements, not just local ones
454  {
455  // Request sets to send to each processor
456  std::map<processor_id_type, std::vector<Parallel::DofObjectKey>>
457  requested_ids;
458  // Results to gather from each processor - kept in a map so we
459  // do only one loop over elements after all receives are done.
460  std::map<dof_id_type, std::vector<dof_id_type>>
461  filled_request;
462 
463  for (const auto & elem : mesh.element_ptr_range())
464  {
465  libmesh_assert(elem);
466  const Parallel::DofObjectKey hi =
467  get_dofobject_key (elem, bbox);
468  const processor_id_type pid =
469  cast_int<processor_id_type>
470  (std::distance (elem_upper_bounds.begin(),
471  std::lower_bound(elem_upper_bounds.begin(),
472  elem_upper_bounds.end(),
473  hi)));
474 
475  libmesh_assert_less (pid, communicator.size());
476 
477  requested_ids[pid].push_back(hi);
478  }
479 
480  // The number of objects in my_elem_bin on each processor
481  std::vector<dof_id_type> elem_bin_sizes(communicator.size());
482  communicator.allgather (static_cast<dof_id_type>(my_elem_bin.size()), elem_bin_sizes);
483 
484  // The offset of my first global index
485  dof_id_type my_offset = 0;
486  for (auto pid : IntRange<processor_id_type>(0, communicator.rank()))
487  my_offset += elem_bin_sizes[pid];
488 
489  auto gather_functor =
490  [
491 #ifndef NDEBUG
492  & elem_upper_bounds,
493  & communicator,
494 #endif
495  & my_elem_bin,
496  my_offset
497  ]
498  (processor_id_type,
499  const std::vector<Parallel::DofObjectKey> & keys,
500  std::vector<dof_id_type> & global_ids)
501  {
502  // Fill the requests
503  const std::size_t keys_size = keys.size();
504  global_ids.reserve(keys_size);
505  for (std::size_t idx=0; idx != keys_size; idx++)
506  {
507  const Parallel::DofObjectKey & hi = keys[idx];
508  libmesh_assert_less_equal (hi, elem_upper_bounds[communicator.rank()]);
509 
510  // find the requested index in my elem bin
511  std::vector<Parallel::DofObjectKey>::const_iterator pos =
512  std::lower_bound (my_elem_bin.begin(), my_elem_bin.end(), hi);
513  libmesh_assert (pos != my_elem_bin.end());
514  libmesh_assert_equal_to (*pos, hi);
515 
516  // Finally, assign the global index based off the position of the index
517  // in my array, properly offset.
518  global_ids.push_back (cast_int<dof_id_type>(std::distance(my_elem_bin.begin(), pos) + my_offset));
519  }
520  };
521 
522  auto action_functor =
523  [&filled_request]
524  (processor_id_type pid,
525  const std::vector<Parallel::DofObjectKey> &,
526  const std::vector<dof_id_type> & global_ids)
527  {
528  filled_request[pid] = global_ids;
529  };
530 
531  // Trade requests with other processors
532  const dof_id_type * ex = nullptr;
533  Parallel::pull_parallel_vector_data
534  (communicator, requested_ids, gather_functor, action_functor, ex);
535 
536  // We now have all the filled requests, so we can loop through our
537  // elements once and assign the global index to each one.
538  {
539  std::vector<std::vector<dof_id_type>::const_iterator>
540  next_obj_on_proc; next_obj_on_proc.reserve(communicator.size());
541  for (auto pid : IntRange<processor_id_type>(0, communicator.size()))
542  next_obj_on_proc.push_back(filled_request[pid].begin());
543 
544  for (auto & elem : mesh.element_ptr_range())
545  {
546  libmesh_assert(elem);
547  const Parallel::DofObjectKey hi =
548  get_dofobject_key (elem, bbox);
549  const processor_id_type pid =
550  cast_int<processor_id_type>
551  (std::distance (elem_upper_bounds.begin(),
552  std::lower_bound(elem_upper_bounds.begin(),
553  elem_upper_bounds.end(),
554  hi)));
555 
556  libmesh_assert_less (pid, communicator.size());
557  libmesh_assert (next_obj_on_proc[pid] != filled_request[pid].end());
558 
559  const dof_id_type global_index = *next_obj_on_proc[pid];
560  libmesh_assert_less (global_index, mesh.n_elem());
561  elem->set_id() = global_index;
562 
563  ++next_obj_on_proc[pid];
564  }
565  }
566  }
567  }
568 }

References libMesh::Parallel::Sort< KeyType, IdxType >::bin(), libMesh::ParallelObject::comm(), libMesh::MeshTools::create_nodal_bounding_box(), distance(), libMesh::MeshBase::element_ptr_range(), end, libMesh::MeshTools::Generation::Private::idx(), libMesh::libmesh_assert(), libMesh::MeshBase::local_elements_begin(), libMesh::MeshBase::local_elements_end(), libMesh::MeshBase::local_nodes_begin(), libMesh::MeshBase::local_nodes_end(), mesh, libMesh::MeshBase::n_elem(), libMesh::MeshBase::n_nodes(), libMesh::MeshBase::node_ptr_range(), libMesh::Threads::parallel_for(), and libMesh::Parallel::Sort< KeyType, IdxType >::sort().

Referenced by libMesh::MeshTools::Private::globally_renumber_nodes_and_elements().

◆ broadcast()

void libMesh::MeshCommunication::broadcast ( MeshBase &  mesh) const

This method takes a mesh (which is assumed to reside on processor 0) and broadcasts it to all the other processors.

It also broadcasts any boundary information the mesh has associated with it.
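
Schematically (the setup is hypothetical; in practice readers such as NameBasedIO::read() make this call for you):

    // Assume 'mesh' was populated on processor 0 only (e.g. by a
    // serial file reader) and is empty on the other processors.
    MeshCommunication mc;
    mc.broadcast (mesh); // collective: every processor receives a
                         // copy, including the mesh's BoundaryInfo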

Definition at line 1084 of file mesh_communication.C.

1085 {
1086  // no MPI == one processor, no need for this method...
1087  return;
1088 }

Referenced by libMesh::NameBasedIO::read(), libMesh::CheckpointIO::read(), MeshInputTest::testDynaReadElem(), MeshInputTest::testDynaReadPatch(), MeshInputTest::testExodusCopyElementSolution(), and MeshInputTest::testExodusWriteElementDataFromDiscontinuousNodalData().

◆ check_for_duplicate_global_indices()

void libMesh::MeshCommunication::check_for_duplicate_global_indices ( MeshBase &  mesh) const

Throw an error if we have any index clashes in the numbering used by assign_global_indices.

Definition at line 577 of file mesh_communication_global_indices.C.

578 {
579  LOG_SCOPE ("check_for_duplicate_global_indices()", "MeshCommunication");
580 
581  // Global bounding box. We choose the nodal bounding box for
582  // backwards compatibility; the element bounding box may be looser
583  // on curved elements.
584  BoundingBox bbox =
585    MeshTools::create_nodal_bounding_box (mesh);
587 
588  std::vector<Parallel::DofObjectKey>
589  node_keys, elem_keys;
590 
591  {
592  // Nodes first
593  {
594  ConstNodeRange nr (mesh.local_nodes_begin(),
595  mesh.local_nodes_end());
596  node_keys.resize (nr.size());
597  Threads::parallel_for (nr, ComputeHilbertKeys (bbox, node_keys));
598 
599  // It's O(N^2) to check that these keys don't duplicate before the
600  // sort...
601  MeshBase::const_node_iterator nodei = mesh.local_nodes_begin();
602  for (std::size_t i = 0; i != node_keys.size(); ++i, ++nodei)
603  {
604  MeshBase::const_node_iterator nodej = mesh.local_nodes_begin();
605  for (std::size_t j = 0; j != i; ++j, ++nodej)
606  {
607  if (node_keys[i] == node_keys[j])
608  {
609  CFixBitVec icoords[3], jcoords[3];
610  get_hilbert_coords(**nodej, bbox, jcoords);
611  libMesh::err <<
612  "node " << (*nodej)->id() << ", " <<
613  *(Point *)(*nodej) << " has HilbertIndices " <<
614  node_keys[j] << std::endl;
615  get_hilbert_coords(**nodei, bbox, icoords);
616  libMesh::err <<
617  "node " << (*nodei)->id() << ", " <<
618  *(Point *)(*nodei) << " has HilbertIndices " <<
619  node_keys[i] << std::endl;
620  libmesh_error_msg("Error: nodes with duplicate Hilbert keys!");
621  }
622  }
623  }
624  }
625 
626  // Elements next
627  {
628  ConstElemRange er (mesh.local_elements_begin(),
629  mesh.local_elements_end());
630  elem_keys.resize (er.size());
631  Threads::parallel_for (er, ComputeHilbertKeys (bbox, elem_keys));
632 
633  // For elements, the keys can be (and in the case of TRI, are
634  // expected to be) duplicates, but only if the elements are at
635  // different levels
636  MeshBase::const_element_iterator elemi = mesh.local_elements_begin();
637  for (std::size_t i = 0; i != elem_keys.size(); ++i, ++elemi)
638  {
639  MeshBase::const_element_iterator elemj = mesh.local_elements_begin();
640  for (std::size_t j = 0; j != i; ++j, ++elemj)
641  {
642  if ((elem_keys[i] == elem_keys[j]) &&
643  ((*elemi)->level() == (*elemj)->level()))
644  {
645  libMesh::err <<
646  "level " << (*elemj)->level() << " elem\n" <<
647  (**elemj) << " centroid " <<
648  (*elemj)->centroid() << " has HilbertIndices " <<
649  elem_keys[j] << " or " <<
650  get_dofobject_key((*elemj), bbox) <<
651  std::endl;
652  libMesh::err <<
653  "level " << (*elemi)->level() << " elem\n" <<
654  (**elemi) << " centroid " <<
655  (*elemi)->centroid() << " has HilbertIndices " <<
656  elem_keys[i] << " or " <<
657  get_dofobject_key((*elemi), bbox) <<
658  std::endl;
659  libmesh_error_msg("Error: level " << (*elemi)->level() << " elements with duplicate Hilbert keys!");
660  }
661  }
662  }
663  }
664  } // done checking Hilbert keys
665 }

References libMesh::MeshTools::create_nodal_bounding_box(), libMesh::err, libMesh::MeshBase::local_elements_begin(), libMesh::MeshBase::local_elements_end(), libMesh::MeshBase::local_nodes_begin(), libMesh::MeshBase::local_nodes_end(), mesh, and libMesh::Threads::parallel_for().

◆ clear()

void libMesh::MeshCommunication::clear ( )

Clears all data structures and resets to a pristine state.

Definition at line 276 of file mesh_communication.C.

277 {
278  // _neighboring_processors.clear();
279 }

◆ delete_remote_elements()

void libMesh::MeshCommunication::delete_remote_elements ( DistributedMesh &  mesh,
const std::set< Elem * > &  extra_ghost_elem_ids 
) const

This method takes an input DistributedMesh which may be distributed among all the processors.

Each processor deletes all elements which are neither local elements nor "ghost" elements which touch local elements, and deletes all nodes which are not contained in local or ghost elements. The end result is that a previously serial DistributedMesh will be distributed between processors. Since this method is collective it must be called by all processors.

The std::set is a list of extra elements that you don't want to delete. These will be left on the current processor along with local elements and ghosted neighbors.
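
As a sketch (the choice of extra ghost element is hypothetical):

    std::set<Elem *> extra_ghosts;
    extra_ghosts.insert (mesh.elem_ptr (42)); // hypothetical element id

    MeshCommunication mc;
    mc.delete_remote_elements (mesh, extra_ghosts); // collective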

Definition at line 1852 of file mesh_communication.C.

1854 {
1855  // The mesh should know it's about to be parallelized
1856  libmesh_assert (!mesh.is_serial());
1857 
1858  LOG_SCOPE("delete_remote_elements()", "MeshCommunication");
1859 
1860 #ifdef DEBUG
1861  // We expect maximum ids to be in sync so we can use them to size
1862  // vectors
1863  libmesh_assert(mesh.comm().verify(mesh.max_node_id()));
1864  libmesh_assert(mesh.comm().verify(mesh.max_elem_id()));
1865  const dof_id_type par_max_node_id = mesh.parallel_max_node_id();
1866  const dof_id_type par_max_elem_id = mesh.parallel_max_elem_id();
1867  libmesh_assert_equal_to (par_max_node_id, mesh.max_node_id());
1868  libmesh_assert_equal_to (par_max_elem_id, mesh.max_elem_id());
1869 #endif
1870 
1871  std::set<const Elem *, CompareElemIdsByLevel> elements_to_keep;
1872 
1873  // Don't delete elements that we were explicitly told not to
1874  for (const auto & elem : extra_ghost_elem_ids)
1875  {
1876  std::vector<const Elem *> active_family;
1877 #ifdef LIBMESH_ENABLE_AMR
1878  if (!elem->subactive())
1879  elem->active_family_tree(active_family);
1880  else
1881 #endif
1882  active_family.push_back(elem);
1883 
1884  for (const auto & f : active_family)
1885  elements_to_keep.insert(f);
1886  }
1887 
1888  // See which elements we still need to keep ghosted, given that
1889  // we're keeping local and unpartitioned elements.
1890  query_ghosting_functors
1891  (mesh, mesh.processor_id(),
1892  mesh.active_pid_elements_begin(mesh.processor_id()),
1893  mesh.active_pid_elements_end(mesh.processor_id()),
1894  elements_to_keep);
1895  query_ghosting_functors
1896  (mesh, DofObject::invalid_processor_id,
1897  mesh.active_pid_elements_begin(DofObject::invalid_processor_id),
1898  mesh.active_pid_elements_end(DofObject::invalid_processor_id),
1899  elements_to_keep);
1900 
1901  // The inactive elements we need to send should have their
1902  // immediate children present.
1903  connect_children(mesh, mesh.pid_elements_begin(mesh.processor_id()),
1904  mesh.pid_elements_end(mesh.processor_id()),
1905  elements_to_keep);
1906  connect_children(mesh,
1907  mesh.pid_elements_begin(DofObject::invalid_processor_id),
1908  mesh.pid_elements_end(DofObject::invalid_processor_id),
1909  elements_to_keep);
1910 
1911  // The elements we need should have their ancestors and their
1912  // subactive children present too.
1913  connect_families(elements_to_keep);
1914 
1915  // Don't delete nodes that our semilocal elements need
1916  std::set<const Node *> connected_nodes;
1917  reconnect_nodes(elements_to_keep, connected_nodes);
1918 
1919  // Delete all the elements we have no reason to save,
1920  // starting with the most refined so that the mesh
1921  // is valid at all intermediate steps
1922  unsigned int n_levels = MeshTools::n_levels(mesh);
1923 
1924  for (int l = n_levels - 1; l >= 0; --l)
1925  for (auto & elem : as_range(mesh.level_elements_begin(l),
1926  mesh.level_elements_end(l)))
1927  {
1928  libmesh_assert (elem);
1929  // Make sure we don't leave any invalid pointers
1930  const bool keep_me = elements_to_keep.count(elem);
1931 
1932  if (!keep_me)
1933  elem->make_links_to_me_remote();
1934 
1935  // delete_elem doesn't currently invalidate element
1936  // iterators... that had better not change
1937  if (!keep_me)
1938  mesh.delete_elem(elem);
1939  }
1940 
1941  // Delete all the nodes we have no reason to save
1942  for (auto & node : mesh.node_ptr_range())
1943  {
1944  libmesh_assert(node);
1945  if (!connected_nodes.count(node))
1946  mesh.delete_node(node);
1947  }
1948 
1949  // If we had a point locator, it's invalid now that some of the
1950  // elements it pointed to have been deleted.
1951  mesh.clear_point_locator();
1952 
1953  // Much of our boundary info may have been for now-remote parts of
1954  // the mesh, in which case we don't want to keep local copies.
1955  mesh.get_boundary_info().regenerate_id_sets();
1956 
1957  // We now have all remote elements and nodes deleted; our ghosting
1958  // functors should be ready to delete any now-redundant cached data
1959  // they use too.
1960  for (auto & gf : as_range(mesh.ghosting_functors_begin(), mesh.ghosting_functors_end()))
1961  gf->delete_remote_elements();
1962 
1963 #ifdef DEBUG
1964  MeshTools::libmesh_assert_valid_refinement_tree(mesh);
1965 #endif
1966 }

References libMesh::Elem::active_family_tree(), libMesh::MeshBase::active_pid_elements_begin(), libMesh::MeshBase::active_pid_elements_end(), libMesh::as_range(), libMesh::MeshBase::clear_point_locator(), libMesh::ParallelObject::comm(), libMesh::connect_children(), libMesh::connect_families(), libMesh::MeshBase::delete_elem(), libMesh::MeshBase::delete_node(), libMesh::MeshBase::get_boundary_info(), libMesh::MeshBase::ghosting_functors_begin(), libMesh::MeshBase::ghosting_functors_end(), libMesh::DofObject::invalid_processor_id, libMesh::MeshBase::is_serial(), libMesh::MeshBase::level_elements_begin(), libMesh::MeshBase::level_elements_end(), libMesh::libmesh_assert(), libMesh::MeshTools::libmesh_assert_valid_refinement_tree(), libMesh::Elem::make_links_to_me_remote(), libMesh::MeshBase::max_elem_id(), libMesh::MeshBase::max_node_id(), mesh, libMesh::MeshTools::n_levels(), libMesh::MeshBase::node_ptr_range(), libMesh::MeshBase::pid_elements_begin(), libMesh::MeshBase::pid_elements_end(), libMesh::ParallelObject::processor_id(), libMesh::query_ghosting_functors(), libMesh::reconnect_nodes(), libMesh::BoundaryInfo::regenerate_id_sets(), and libMesh::Elem::subactive().

◆ find_global_indices()

template<typename ForwardIterator >
void libMesh::MeshCommunication::find_global_indices ( const Parallel::Communicator &  communicator,
const libMesh::BoundingBox &  bbox,
const ForwardIterator &  begin,
const ForwardIterator &  end,
std::vector< dof_id_type > &  index_map 
) const

This method determines a globally unique, partition-agnostic index for each object in the input range.
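
For instance, a sketch that indexes every node of a mesh, using the same nodal bounding box used elsewhere on this page:

    BoundingBox bbox = MeshTools::create_nodal_bounding_box (mesh);

    std::vector<dof_id_type> index_map;
    MeshCommunication mc;
    mc.find_global_indices (mesh.comm(), bbox,
                            mesh.nodes_begin(), mesh.nodes_end(),
                            index_map);
    // index_map[k] now holds the global index of the k-th node in
    // the iteration order of the input range.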

Definition at line 710 of file mesh_communication_global_indices.C.

715 {
716  LOG_SCOPE ("find_global_indices()", "MeshCommunication");
717 
718  // This method determines partition-agnostic global indices
719  // for nodes and elements.
720 
721  // Algorithm:
722  // (1) compute the Hilbert key for each local node/element
723  // (2) perform a parallel sort of the Hilbert key
724  // (3) get the min/max value on each processor
725  // (4) determine the position in the global ranking for
726  // each local object
727  index_map.clear();
728  std::size_t n_objects = std::distance (begin, end);
729  index_map.reserve(n_objects);
730 
731  //-------------------------------------------------------------
732  // (1) compute Hilbert keys
733  // These aren't trivial to compute, and we will need them again.
734  // But the binsort will sort the input vector, trashing the order
735  // that we'd like to rely on. So, two vectors...
736  std::vector<Parallel::DofObjectKey>
737  sorted_hilbert_keys,
738  hilbert_keys;
739  sorted_hilbert_keys.reserve(n_objects);
740  hilbert_keys.reserve(n_objects);
741  {
742  LOG_SCOPE("compute_hilbert_indices()", "MeshCommunication");
743  for (ForwardIterator it=begin; it!=end; ++it)
744  {
745  const Parallel::DofObjectKey hi(get_dofobject_key (*it, bbox));
746  hilbert_keys.push_back(hi);
747 
748  if ((*it)->processor_id() == communicator.rank())
749  sorted_hilbert_keys.push_back(hi);
750 
751  // someone needs to take care of unpartitioned objects!
752  if ((communicator.rank() == 0) &&
753  ((*it)->processor_id() == DofObject::invalid_processor_id))
754  sorted_hilbert_keys.push_back(hi);
755  }
756  }
757 
758  //-------------------------------------------------------------
759  // (2) parallel sort the Hilbert keys
760  START_LOG ("parallel_sort()", "MeshCommunication");
761  Parallel::Sort<Parallel::DofObjectKey> sorter (communicator,
762  sorted_hilbert_keys);
763  sorter.sort();
764  STOP_LOG ("parallel_sort()", "MeshCommunication");
765  const std::vector<Parallel::DofObjectKey> & my_bin = sorter.bin();
766 
767  // The number of objects in my_bin on each processor
768  std::vector<unsigned int> bin_sizes(communicator.size());
769  communicator.allgather (static_cast<unsigned int>(my_bin.size()), bin_sizes);
770 
771  // The offset of my first global index
772  unsigned int my_offset = 0;
773  for (auto pid : IntRange<processor_id_type>(0, communicator.rank()))
774  my_offset += bin_sizes[pid];
775 
776  //-------------------------------------------------------------
777  // (3) get the max value on each processor
778  std::vector<Parallel::DofObjectKey>
779  upper_bounds(1);
780 
781  if (!my_bin.empty())
782  upper_bounds[0] = my_bin.back();
783 
784  communicator.allgather (upper_bounds, /* identical_buffer_sizes = */ true);
785 
786  // Be careful here. The *_upper_bounds will be used to find the processor
787  // a given object belongs to. So, if a processor contains no objects (possible!)
788  // then copy the bound from the lower processor id.
789  for (auto p : IntRange<processor_id_type>(1, communicator.size()))
790  if (!bin_sizes[p]) upper_bounds[p] = upper_bounds[p-1];
791 
792 
793  //-------------------------------------------------------------
794  // (4) determine the position in the global ranking for
795  // each local object
796  {
797  //----------------------------------------------
798  // all objects, not just local ones
799 
800  // Request sets to send to each processor
801  std::map<processor_id_type, std::vector<Parallel::DofObjectKey>>
802  requested_ids;
803  // Results to gather from each processor
804  std::map<processor_id_type, std::vector<dof_id_type>>
805  filled_request;
806 
807  // build up list of requests
808  std::vector<Parallel::DofObjectKey>::const_iterator hi =
809  hilbert_keys.begin();
810 
811  for (ForwardIterator it = begin; it != end; ++it)
812  {
813  libmesh_assert (hi != hilbert_keys.end());
814 
815  std::vector<Parallel::DofObjectKey>::iterator lb =
816  std::lower_bound(upper_bounds.begin(), upper_bounds.end(),
817  *hi);
818 
819  const processor_id_type pid =
820  cast_int<processor_id_type>
821  (std::distance (upper_bounds.begin(), lb));
822 
823  libmesh_assert_less (pid, communicator.size());
824 
825  requested_ids[pid].push_back(*hi);
826 
827  ++hi;
828  // go ahead and put pid in index_map, that way we
829  // don't have to repeat the std::lower_bound()
830  index_map.push_back(pid);
831  }
832 
833  auto gather_functor =
834  [
835 #ifndef NDEBUG
836  & upper_bounds,
837  & communicator,
838 #endif
839  & bbox,
840  & my_bin,
841  my_offset
842  ]
843  (processor_id_type, const std::vector<Parallel::DofObjectKey> & keys,
844  std::vector<dof_id_type> & global_ids)
845  {
846  // Ignore unused lambda capture warnings in devel mode
847  libmesh_ignore(bbox);
848 
849  // Fill the requests
850  const std::size_t keys_size = keys.size();
851  global_ids.clear();
852  global_ids.reserve(keys_size);
853  for (std::size_t idx=0; idx != keys_size; idx++)
854  {
855  const Parallel::DofObjectKey & hilbert_indices = keys[idx];
856  libmesh_assert_less_equal (hilbert_indices, upper_bounds[communicator.rank()]);
857 
858  // find the requested index in my node bin
859  std::vector<Parallel::DofObjectKey>::const_iterator pos =
860  std::lower_bound (my_bin.begin(), my_bin.end(), hilbert_indices);
861  libmesh_assert (pos != my_bin.end());
862 #ifdef DEBUG
863  // If we could not find the requested Hilbert index in
864  // my_bin, something went terribly wrong, possibly the
865  // Mesh was displaced differently on different processors,
866  // and therefore the Hilbert indices don't agree.
867  if (*pos != hilbert_indices)
868  {
869  // The input will be hilbert_indices. We convert it
870  // to BitVecType using the operator= provided by the
871  // BitVecType class. BitVecType is a CBigBitVec!
872  Hilbert::BitVecType input;
873 #ifdef LIBMESH_ENABLE_UNIQUE_ID
874  input = hilbert_indices.first;
875 #else
876  input = hilbert_indices;
877 #endif
878 
879  // Get output in a vector of CBigBitVec
880  std::vector<CBigBitVec> output(3);
881 
882  // Call the indexToCoords function
883  Hilbert::indexToCoords(output.data(), 8*sizeof(Hilbert::inttype), 3, input);
884 
885  // The entries in the output racks are integers in the
886  // range [0, Hilbert::inttype::max] which can be
887  // converted to floating point values in [0,1] and
888  // finally to actual values using the bounding box.
889  const Real max_int_as_real =
890  static_cast<Real>(std::numeric_limits<Hilbert::inttype>::max());
891 
892  // Get the points in [0,1]^3. The zeroth rack of each entry in
893  // 'output' maps to the normalized x, y, and z locations,
894  // respectively.
895  Point p_hat(static_cast<Real>(output[0].racks()[0]) / max_int_as_real,
896  static_cast<Real>(output[1].racks()[0]) / max_int_as_real,
897  static_cast<Real>(output[2].racks()[0]) / max_int_as_real);
898 
899  // Convert the points from [0,1]^3 to their actual (x,y,z) locations
900  Real
901  xmin = bbox.first(0),
902  xmax = bbox.second(0),
903  ymin = bbox.first(1),
904  ymax = bbox.second(1),
905  zmin = bbox.first(2),
906  zmax = bbox.second(2);
907 
908  // Convert the points from [0,1]^3 to their actual (x,y,z) locations
909  Point p(xmin + (xmax-xmin)*p_hat(0),
910  ymin + (ymax-ymin)*p_hat(1),
911  zmin + (zmax-zmin)*p_hat(2));
912 
913  libmesh_error_msg("Could not find hilbert indices: "
914  << hilbert_indices
915  << " corresponding to point " << p);
916  }
917 #endif
918 
919  // Finally, assign the global index based off the position of the index
920  // in my array, properly offset.
921  global_ids.push_back (cast_int<dof_id_type>(std::distance(my_bin.begin(), pos) + my_offset));
922  }
923  };
924 
925  auto action_functor =
926  [&filled_request]
927  (processor_id_type pid,
928  const std::vector<Parallel::DofObjectKey> &,
929  const std::vector<dof_id_type> & global_ids)
930  {
931  filled_request[pid] = global_ids;
932  };
933 
934  const dof_id_type * ex = nullptr;
935  Parallel::pull_parallel_vector_data
936  (communicator, requested_ids, gather_functor, action_functor, ex);
937 
938  // We now have all the filled requests, so we can loop through our
939  // nodes once and assign the global index to each one.
940  {
941  std::vector<std::vector<dof_id_type>::const_iterator>
942  next_obj_on_proc; next_obj_on_proc.reserve(communicator.size());
943  for (auto pid : IntRange<processor_id_type>(0, communicator.size()))
944  next_obj_on_proc.push_back(filled_request[pid].begin());
945 
946  unsigned int cnt=0;
947  for (ForwardIterator it = begin; it != end; ++it, cnt++)
948  {
949  const processor_id_type pid = cast_int<processor_id_type>
950  (index_map[cnt]);
951 
952  libmesh_assert_less (pid, communicator.size());
953  libmesh_assert (next_obj_on_proc[pid] != filled_request[pid].end());
954 
955  const dof_id_type global_index = *next_obj_on_proc[pid];
956  index_map[cnt] = global_index;
957 
958  ++next_obj_on_proc[pid];
959  }
960  }
961  }
962 
963  libmesh_assert_equal_to(index_map.size(), n_objects);
964 }

References libMesh::Parallel::Sort< KeyType, IdxType >::bin(), distance(), end, libMesh::MeshTools::Generation::Private::idx(), libMesh::DofObject::invalid_processor_id, libMesh::libmesh_assert(), libMesh::libmesh_ignore(), libMesh::Real, and libMesh::Parallel::Sort< KeyType, IdxType >::sort().

Referenced by libMesh::ParmetisPartitioner::initialize(), libMesh::MetisPartitioner::partition_range(), and libMesh::Partitioner::partition_unpartitioned_elements().

◆ find_local_indices()

template<typename ForwardIterator >
void libMesh::MeshCommunication::find_local_indices ( const libMesh::BoundingBox &  bbox,
const ForwardIterator &  begin,
const ForwardIterator &  end,
std::unordered_map< dof_id_type, dof_id_type > &  index_map 
) const

This method determines a locally unique, contiguous index for each object in the input range.
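
A sketch over the local node range (bounding box as in find_global_indices()):

    BoundingBox bbox = MeshTools::create_nodal_bounding_box (mesh);

    std::unordered_map<dof_id_type, dof_id_type> index_map;
    MeshCommunication mc;
    mc.find_local_indices (bbox,
                           mesh.local_nodes_begin(),
                           mesh.local_nodes_end(),
                           index_map);
    // index_map maps each local node's id() to a contiguous index,
    // ordered by the nodes' Hilbert keys.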

Definition at line 674 of file mesh_communication_global_indices.C.

678 {
679  LOG_SCOPE ("find_local_indices()", "MeshCommunication");
680 
681  // This method determines id-agnostic local indices
682  // for nodes and elements by sorting Hilbert keys.
683 
684  index_map.clear();
685 
686  //-------------------------------------------------------------
687  // (1) compute Hilbert keys
688  // These aren't trivial to compute, and we will need them again.
689  // But the binsort will sort the input vector, trashing the order
690  // that we'd like to rely on. So, two vectors...
691  std::map<Parallel::DofObjectKey, dof_id_type> hilbert_keys;
692  {
693  LOG_SCOPE("local_hilbert_indices", "MeshCommunication");
694  for (ForwardIterator it=begin; it!=end; ++it)
695  {
696  const Parallel::DofObjectKey hi(get_dofobject_key ((*it), bbox));
697  hilbert_keys.emplace(hi, (*it)->id());
698  }
699  }
700 
701  {
702  dof_id_type cnt = 0;
703  for (auto key_val : hilbert_keys)
704  index_map[key_val.second] = cnt++;
705  }
706 }

References end.

Referenced by libMesh::Partitioner::_find_global_index_by_pid_map().

◆ gather()

void libMesh::MeshCommunication::gather ( const processor_id_type  root_id,
DistributedMesh &  mesh 
) const

This method takes an input DistributedMesh which may be distributed among all the processors.

Each processor then sends its local nodes and elements to processor root_id. The end result is that a previously distributed DistributedMesh will be serialized on processor root_id. Since this method is collective it must be called by all processors. For the special case of root_id equal to DofObject::invalid_processor_id this function performs an allgather.
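
For example:

    MeshCommunication mc;

    // Serialize the mesh on processor 0 only (collective)
    mc.gather (0, mesh);

    // With root_id == DofObject::invalid_processor_id this is
    // equivalent to mc.allgather(mesh).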

Definition at line 1168 of file mesh_communication.C.

1169 {
1170  // no MPI == one processor, no need for this method...
1171  return;
1172 }

Referenced by allgather().

◆ gather_neighboring_elements()

void libMesh::MeshCommunication::gather_neighboring_elements ( DistributedMesh &  mesh) const

Definition at line 540 of file mesh_communication.C.

541 {
542  // no MPI == one processor, no need for this method...
543  return;
544 }

Referenced by libMesh::Nemesis_IO::read().

◆ make_elems_parallel_consistent()

void libMesh::MeshCommunication::make_elems_parallel_consistent ( MeshBase &  mesh)

Copy ids of ghost elements from their local processors.

Definition at line 1513 of file mesh_communication.C.

1514 {
1515  // This function must be run on all processors at once
1516  libmesh_parallel_only(mesh.comm());
1517 
1518  LOG_SCOPE ("make_elems_parallel_consistent()", "MeshCommunication");
1519 
1520  SyncIds syncids(mesh, &MeshBase::renumber_elem);
1521  Parallel::sync_element_data_by_parent_id
1522  (mesh, mesh.active_elements_begin(),
1523  mesh.active_elements_end(), syncids);
1524 
1525 #ifdef LIBMESH_ENABLE_UNIQUE_ID
1526  SyncUniqueIds<Elem> syncuniqueids(mesh, &MeshBase::query_elem_ptr);
1527  Parallel::sync_dofobject_data_by_id(mesh.comm(),
1528  mesh.active_elements_begin(),
1529  mesh.active_elements_end(), syncuniqueids);
1530 #endif
1531 }

References libMesh::MeshBase::active_elements_begin(), libMesh::MeshBase::active_elements_end(), libMesh::ParallelObject::comm(), mesh, libMesh::MeshBase::query_elem_ptr(), libMesh::MeshBase::renumber_elem(), libMesh::Parallel::sync_dofobject_data_by_id(), and libMesh::Parallel::sync_element_data_by_parent_id().

Referenced by libMesh::MeshRefinement::_refine_elements().

◆ make_new_node_proc_ids_parallel_consistent()

void libMesh::MeshCommunication::make_new_node_proc_ids_parallel_consistent ( MeshBase &  mesh)

Assuming all processor ids on nodes touching local elements are parallel consistent, this function makes processor ids on new nodes on other processors parallel consistent as well.

Definition at line 1693 of file mesh_communication.C.

1694 {
1695  LOG_SCOPE ("make_new_node_proc_ids_parallel_consistent()", "MeshCommunication");
1696 
1697  // This function must be run on all processors at once
1698  libmesh_parallel_only(mesh.comm());
1699 
1700  // When this function is called, each section of a parallelized mesh
1701  // should be in the following state:
1702  //
1703  // Local nodes should have unique authoritative ids,
1704  // and new nodes should be unpartitioned.
1705  //
1706  // New ghost nodes touching local elements should be unpartitioned.
1707 
1708  // We may not have consistent processor ids for new nodes (because a
1709  // node may be old and partitioned on one processor but new and
1710  // unpartitioned on another) when we start
1711 #ifdef DEBUG
1712  MeshTools::libmesh_assert_valid_procids<Node>(mesh);
1713  // MeshTools::libmesh_assert_parallel_consistent_new_node_procids(mesh);
1714 #endif
1715 
1716  // We have two kinds of new nodes. *NEW* nodes are unpartitioned on
1717  // all processors: we need to use a id-independent (i.e. dumb)
1718  // heuristic to partition them. But "new" nodes are newly created
1719  // on some processors (when ghost elements are refined) yet
1720  // correspond to existing nodes on other processors: we need to use
1721  // the existing processor id for them.
1722  //
1723  // A node which is "new" on one processor will be associated with at
1724  // least one ghost element, and we can just query that ghost
1725  // element's owner to find out the correct processor id.
1726 
1727  auto node_unpartitioned =
1728  [](const Elem * elem, unsigned int local_node_num)
1729  { return elem->node_ref(local_node_num).processor_id() ==
1730  DofObject::invalid_processor_id; };
1731 
1732  SyncProcIds sync(mesh);
1733 
1734  Parallel::sync_node_data_by_element_id_once
1735  (mesh, mesh.not_local_elements_begin(),
1736  mesh.not_local_elements_end(), ElemNodesMaybeNew(),
1737  node_unpartitioned, sync);
1738 
1739  // Nodes should now be unpartitioned iff they are truly new; those
1740  // are the *only* nodes we will touch.
1741 #ifdef DEBUG
1742  MeshTools::libmesh_assert_parallel_consistent_new_node_procids(mesh);
1743 #endif
1744 
1745  NodeWasNew node_was_new(mesh);
1746 
1747  // Set the lowest processor id we can on truly new nodes
1748  for (auto & elem : mesh.element_ptr_range())
1749  for (auto & node : elem->node_ref_range())
1750  if (node_was_new.was_new.count(&node))
1751  {
1752  processor_id_type & pid = node.processor_id();
1753  pid = std::min(pid, elem->processor_id());
1754  }
1755 
1756  // Then finally see if other processors have a lower option
1757  Parallel::sync_node_data_by_element_id
1758  (mesh, mesh.elements_begin(), mesh.elements_end(),
1759  ElemNodesMaybeNew(), node_was_new, sync);
1760 
1761  // We should have consistent processor ids when we're done.
1762 #ifdef DEBUG
1763  MeshTools::libmesh_assert_parallel_consistent_procids<Node>(mesh);
1764  MeshTools::libmesh_assert_parallel_consistent_new_node_procids(mesh);
1765 #endif
1766 }

References libMesh::ParallelObject::comm(), libMesh::MeshBase::element_ptr_range(), libMesh::MeshBase::elements_begin(), libMesh::MeshBase::elements_end(), libMesh::DofObject::invalid_processor_id, libMesh::MeshTools::libmesh_assert_parallel_consistent_new_node_procids(), libMesh::MeshTools::libmesh_assert_parallel_consistent_procids< Node >(), mesh, libMesh::Elem::node_ref(), libMesh::Elem::node_ref_range(), libMesh::MeshBase::not_local_elements_begin(), libMesh::MeshBase::not_local_elements_end(), libMesh::DofObject::processor_id(), libMesh::Parallel::sync_node_data_by_element_id(), and libMesh::Parallel::sync_node_data_by_element_id_once().

Referenced by make_new_nodes_parallel_consistent().

◆ make_new_nodes_parallel_consistent()

void libMesh::MeshCommunication::make_new_nodes_parallel_consistent ( MeshBase &  mesh)

Copy processor_ids and ids on new nodes from their local processors.

Definition at line 1813 of file mesh_communication.C.

1814 {
1815  // This function must be run on all processors at once
1816  libmesh_parallel_only(mesh.comm());
1817 
1818  // When this function is called, each section of a parallelized mesh
1819  // should be in the following state:
1820  //
1821  // All nodes should have the exact same physical location on every
1822  // processor where they exist.
1823  //
1824  // Local nodes should have unique authoritative ids,
1825  // and new nodes should be unpartitioned.
1826  //
1827  // New ghost nodes touching local elements should be unpartitioned.
1828  //
1829  // New ghost nodes should have ids which are either already correct
1830  // or which are in the "unpartitioned" id space.
1831  //
1832  // Non-new nodes should have correct ids and processor ids already.
1833 
1834  // First, let's sync up new nodes' processor ids.
1835 
1836  make_new_node_proc_ids_parallel_consistent(mesh);
1837 
1838  // Second, sync up dofobject ids.
1839  make_node_ids_parallel_consistent(mesh);
1840 
1841  // Third, sync up dofobject unique_ids if applicable.
1842  make_node_unique_ids_parallel_consistent(mesh);
1843 
1844  // Finally, correct the processor ids to make DofMap happy
1845  MeshTools::correct_node_proc_ids(mesh);
1846 }

References libMesh::ParallelObject::comm(), libMesh::MeshTools::correct_node_proc_ids(), make_new_node_proc_ids_parallel_consistent(), make_node_ids_parallel_consistent(), make_node_unique_ids_parallel_consistent(), and mesh.

Referenced by libMesh::MeshRefinement::_refine_elements().

◆ make_node_ids_parallel_consistent()

void libMesh::MeshCommunication::make_node_ids_parallel_consistent ( MeshBase &  mesh)

Assuming all ids on local nodes are globally unique, and assuming all processor ids are parallel consistent, this function makes all other ids parallel consistent.

Definition at line 1460 of file mesh_communication.C.

1461 {
1462  // This function must be run on all processors at once
1463  libmesh_parallel_only(mesh.comm());
1464 
1465  // We need to agree on which processor owns every node, but we can't
1466  // easily assert that here because we don't currently agree on which
1467  // id every node has, and some of our temporary ids on unrelated
1468  // nodes will "overlap".
1469 //#ifdef DEBUG
1470 // MeshTools::libmesh_assert_parallel_consistent_procids<Node> (mesh);
1471 //#endif // DEBUG
1472 
1473  LOG_SCOPE ("make_node_ids_parallel_consistent()", "MeshCommunication");
1474 
1475  SyncNodeIds syncids(mesh);
1476  Parallel::sync_node_data_by_element_id
1477  (mesh, mesh.elements_begin(), mesh.elements_end(),
1478  SyncEverything(), SyncEverything(), syncids);
1479 
1480  // At this point, with both ids and processor ids synced, we can
1481  // finally check for topological consistency of node processor ids.
1482 #ifdef DEBUG
1483  MeshTools::libmesh_assert_topology_consistent_procids<Node> (mesh);
1484 #endif
1485 }

References libMesh::ParallelObject::comm(), libMesh::MeshBase::elements_begin(), libMesh::MeshBase::elements_end(), libMesh::MeshTools::libmesh_assert_topology_consistent_procids< Node >(), mesh, and libMesh::Parallel::sync_node_data_by_element_id().

Referenced by make_new_nodes_parallel_consistent(), and make_nodes_parallel_consistent().

◆ make_node_proc_ids_parallel_consistent()

void libMesh::MeshCommunication::make_node_proc_ids_parallel_consistent ( MeshBase &  mesh)

Assuming all processor ids on nodes touching local elements are parallel consistent, this function makes all other processor ids parallel consistent as well.

Definition at line 1664 of file mesh_communication.C.

1665 {
1666  LOG_SCOPE ("make_node_proc_ids_parallel_consistent()", "MeshCommunication");
1667 
1668  // This function must be run on all processors at once
1669  libmesh_parallel_only(mesh.comm());
1670 
1671  // When this function is called, each section of a parallelized mesh
1672  // should be in the following state:
1673  //
1674  // All nodes should have the exact same physical location on every
1675  // processor where they exist.
1676  //
1677  // Local nodes should have unique authoritative ids,
1678  // and processor ids consistent with all processors which own
1679  // an element touching them.
1680  //
1681  // Ghost nodes touching local elements should have processor ids
1682  // consistent with all processors which own an element touching
1683  // them.
1684  SyncProcIds sync(mesh);
1685  Parallel::sync_node_data_by_element_id
1686  (mesh, mesh.elements_begin(), mesh.elements_end(),
1687  SyncEverything(), SyncEverything(), sync);
1688 }

References libMesh::ParallelObject::comm(), libMesh::MeshBase::elements_begin(), libMesh::MeshBase::elements_end(), mesh, and libMesh::Parallel::sync_node_data_by_element_id().

Referenced by make_nodes_parallel_consistent().

◆ make_node_unique_ids_parallel_consistent()

void libMesh::MeshCommunication::make_node_unique_ids_parallel_consistent ( MeshBase &  mesh)

Assuming all unique_ids on local nodes are globally unique, and assuming all processor ids are parallel consistent, this function makes all ghost unique_ids parallel consistent.

Definition at line 1489 of file mesh_communication.C.

1490 {
1491  // Avoid unused variable warnings if unique ids aren't enabled.
1492  libmesh_ignore(mesh);
1493 
1494  // This function must be run on all processors at once
1495  libmesh_parallel_only(mesh.comm());
1496 
1497 #ifdef LIBMESH_ENABLE_UNIQUE_ID
1498  LOG_SCOPE ("make_node_unique_ids_parallel_consistent()", "MeshCommunication");
1499 
1500  SyncUniqueIds<Node> syncuniqueids(mesh, &MeshBase::query_node_ptr);
1501  Parallel::sync_dofobject_data_by_id(mesh.comm(),
1502  mesh.nodes_begin(),
1503  mesh.nodes_end(),
1504  syncuniqueids);
1505 
1506 #endif
1507 }

References libMesh::ParallelObject::comm(), libMesh::libmesh_ignore(), mesh, libMesh::MeshBase::nodes_begin(), libMesh::MeshBase::nodes_end(), libMesh::MeshBase::query_node_ptr(), and libMesh::Parallel::sync_dofobject_data_by_id().

Referenced by libMesh::BoundaryInfo::add_elements(), make_new_nodes_parallel_consistent(), make_nodes_parallel_consistent(), and libMesh::Nemesis_IO::read().

◆ make_nodes_parallel_consistent()

void libMesh::MeshCommunication::make_nodes_parallel_consistent ( MeshBase &  mesh)

Copy processor_ids and ids on ghost nodes from their local processors.

This is useful for code which wants to add nodes to a distributed mesh.
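
Schematically (the node-adding code is hypothetical and omitted):

    // ... each processor adds nodes/elements to its piece of a
    // distributed mesh ...
    MeshCommunication mc;
    mc.make_nodes_parallel_consistent (mesh); // collective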

Definition at line 1771 of file mesh_communication.C.

1772 {
1773  // This function must be run on all processors at once
1774  libmesh_parallel_only(mesh.comm());
1775 
1776  // When this function is called, each section of a parallelized mesh
1777  // should be in the following state:
1778  //
1779  // All nodes should have the exact same physical location on every
1780  // processor where they exist.
1781  //
1782  // Local nodes should have unique authoritative ids,
1783  // and processor ids consistent with all processors which own
1784  // an element touching them.
1785  //
1786  // Ghost nodes touching local elements should have processor ids
1787  // consistent with all processors which own an element touching
1788  // them.
1789  //
1790  // Ghost nodes should have ids which are either already correct
1791  // or which are in the "unpartitioned" id space.
1792 
1793  // First, let's sync up processor ids. Some of these processor ids
1794  // may be "wrong" from coarsening, but they're right in the sense
1795  // that they'll tell us who has the authoritative dofobject ids for
1796  // each node.
1797 
1798  make_node_proc_ids_parallel_consistent(mesh);
1799 
1800  // Second, sync up dofobject ids.
1801  make_node_ids_parallel_consistent(mesh);
1802 
1803  // Third, sync up dofobject unique_ids if applicable.
1804  make_node_unique_ids_parallel_consistent(mesh);
1805 
1806  // Finally, correct the processor ids to make DofMap happy
1807  MeshTools::correct_node_proc_ids(mesh);
1808 }

References libMesh::ParallelObject::comm(), libMesh::MeshTools::correct_node_proc_ids(), make_node_ids_parallel_consistent(), make_node_proc_ids_parallel_consistent(), make_node_unique_ids_parallel_consistent(), and mesh.

Referenced by libMesh::MeshRefinement::_coarsen_elements(), and libMesh::MeshTools::Modification::all_tri().

◆ make_p_levels_parallel_consistent()

void libMesh::MeshCommunication::make_p_levels_parallel_consistent ( MeshBase &  mesh)

Copy p levels of ghost elements from their local processors.

Definition at line 1537 of file mesh_communication.C.

1538 {
1539  // This function must be run on all processors at once
1540  libmesh_parallel_only(mesh.comm());
1541 
1542  LOG_SCOPE ("make_p_levels_parallel_consistent()", "MeshCommunication");
1543 
1544  SyncPLevels syncplevels(mesh);
1545  Parallel::sync_dofobject_data_by_id(mesh.comm(),
1546  mesh.elements_begin(), mesh.elements_end(),
1547  syncplevels);
1548 }

References libMesh::ParallelObject::comm(), libMesh::MeshBase::elements_begin(), libMesh::MeshBase::elements_end(), mesh, and libMesh::Parallel::sync_dofobject_data_by_id().

Referenced by libMesh::MeshRefinement::_coarsen_elements(), and libMesh::MeshRefinement::_refine_elements().

◆ redistribute()

void libMesh::MeshCommunication::redistribute ( DistributedMesh &  mesh,
bool  newly_coarsened_only = false 
) const

This method takes a parallel distributed mesh and redistributes the elements.

Specifically, any elements stored on a given processor are sent to the processor which "owns" them. Similarly, any elements assigned to the current processor but stored on another are received. Once this step is completed, any required ghost elements are updated. The final result is that each processor stores only the elements it actually owns and any ghost elements required to satisfy data dependencies. This method can be invoked after a partitioning step to effect the new partitioning.

Redistribution can also be restricted to the neighbors of newly coarsened elements only.
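
As a sketch (the repartitioning step shown is hypothetical; in practice DistributedMesh::redistribute() drives this call):

    mesh.partitioner()->partition (mesh); // hypothetical repartitioning step

    MeshCommunication mc;
    mc.redistribute (mesh);               // move elements to their new owners

    // After coarsening, the cheaper variant:
    mc.redistribute (mesh, /* newly_coarsened_only = */ true);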

Definition at line 285 of file mesh_communication.C.

286 {
287  // no MPI == one processor, no redistribution
288  return;
289 }

Referenced by libMesh::DistributedMesh::redistribute().

◆ send_coarse_ghosts()

void libMesh::MeshCommunication::send_coarse_ghosts ( MeshBase &  mesh) const

Examine a just-coarsened mesh, and for any newly-coarsened elements, send the associated ghosted elements to the processor which needs them.

Definition at line 915 of file mesh_communication.C.

916 {
917  // no MPI == one processor, no need for this method...
918  return;
919 }

Referenced by libMesh::MeshRefinement::_coarsen_elements().
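Direct calls are rarely needed; per the note above, MeshRefinement::_coarsen_elements() invokes this internally. A minimal sketch of such a call site (the surrounding coarsening context is assumed):

// Sketch only: after coarsening, resend ghosts of newly coarsened
// elements to the processors that still need them. As the listing
// above shows, this is a no-op in a build without MPI.
libMesh::MeshCommunication().send_coarse_ghosts(mesh);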


The documentation for this class was generated from the following files:
mesh_communication.h
mesh_communication.C