libMesh
libMesh::MeshCommunication Class Reference

This is the MeshCommunication class. More...

#include <mesh_communication.h>

Public Member Functions

 MeshCommunication ()=default
 Constructor. More...
 
 ~MeshCommunication ()=default
 Destructor. More...
 
void clear ()
 Clears all data structures and resets to a pristine state. More...
 
void broadcast (MeshBase &) const
 This method takes a mesh (which is assumed to reside on processor 0) and broadcasts it to all the other processors. More...
 
void redistribute (DistributedMesh &mesh, bool newly_coarsened_only=false) const
 This method takes a parallel distributed mesh and redistributes the elements. More...
 
void gather_neighboring_elements (DistributedMesh &) const
 Finds all the processors that may contain elements that neighbor my elements. More...
 
void send_coarse_ghosts (MeshBase &) const
 Examine a just-coarsened mesh, and for any newly-coarsened elements, send the associated ghosted elements to the processor which needs them. More...
 
void gather (const processor_id_type root_id, MeshBase &) const
 This method takes an input DistributedMesh which may be distributed among all the processors. More...
 
void allgather (MeshBase &mesh) const
 This method takes an input DistributedMesh which may be distributed among all the processors. More...
 
void delete_remote_elements (DistributedMesh &, const std::set< Elem *> &) const
 This method takes an input DistributedMesh which may be distributed among all the processors. More...
 
void assign_global_indices (MeshBase &) const
 This method assigns globally unique, partition-agnostic indices to the nodes and elements in the mesh. More...
 
void check_for_duplicate_global_indices (MeshBase &) const
 Throw an error if we have any index clashes in the numbering used by assign_global_indices. More...
 
template<typename ForwardIterator >
void find_local_indices (const libMesh::BoundingBox &, const ForwardIterator &, const ForwardIterator &, std::unordered_map< dof_id_type, dof_id_type > &) const
 This method determines a locally unique, contiguous index for each object in the input range. More...
 
template<typename ForwardIterator >
void find_global_indices (const Parallel::Communicator &communicator, const libMesh::BoundingBox &, const ForwardIterator &, const ForwardIterator &, std::vector< dof_id_type > &) const
 This method determines a globally unique, partition-agnostic index for each object in the input range. More...
 
void make_elems_parallel_consistent (MeshBase &)
 Copy ids of ghost elements from their local processors. More...
 
void make_p_levels_parallel_consistent (MeshBase &)
 Copy p levels of ghost elements from their local processors. More...
 
void make_node_ids_parallel_consistent (MeshBase &)
 Assuming all ids on local nodes are globally unique, and assuming all processor ids are parallel consistent, this function makes all other ids parallel consistent. More...
 
void make_node_unique_ids_parallel_consistent (MeshBase &)
 Assuming all unique_ids on local nodes are globally unique, and assuming all processor ids are parallel consistent, this function makes all ghost unique_ids parallel consistent. More...
 
void make_node_bcids_parallel_consistent (MeshBase &)
 Assuming all processor ids are parallel consistent, this function makes all ghost boundary ids on nodes parallel consistent. More...
 
void make_node_proc_ids_parallel_consistent (MeshBase &)
 Assuming all processor ids on nodes touching local elements are parallel consistent, this function makes all other processor ids parallel consistent as well. More...
 
void make_new_node_proc_ids_parallel_consistent (MeshBase &)
 Assuming all processor ids on nodes touching local elements are parallel consistent, this function makes processor ids on new nodes on other processors parallel consistent as well. More...
 
void make_nodes_parallel_consistent (MeshBase &)
 Copy processor_ids and ids on ghost nodes from their local processors. More...
 
void make_new_nodes_parallel_consistent (MeshBase &)
 Copy processor_ids and ids on new nodes from their local processors. More...
 

Detailed Description

This is the MeshCommunication class.

It handles all the details of communicating mesh information from one processor to another. All parallelization of the Mesh data structures is done via this class.

Author
Benjamin S. Kirk
Date
2003

Definition at line 53 of file mesh_communication.h.

Constructor & Destructor Documentation

◆ MeshCommunication()

libMesh::MeshCommunication::MeshCommunication ( )
default

Constructor.

◆ ~MeshCommunication()

libMesh::MeshCommunication::~MeshCommunication ( )
default

Destructor.

Member Function Documentation

◆ allgather()

void libMesh::MeshCommunication::allgather ( MeshBase & mesh) const
inline

This method takes an input DistributedMesh which may be distributed among all the processors.

Each processor then sends its local nodes and elements to the other processors. The end result is that a previously distributed DistributedMesh will be serialized on each processor. Since this method is collective it must be called by all processors.

Definition at line 138 of file mesh_communication.h.

References gather(), libMesh::DofObject::invalid_processor_id, and mesh.

Referenced by libMesh::Nemesis_IO::read().


◆ assign_global_indices()

void libMesh::MeshCommunication::assign_global_indices ( MeshBase & mesh) const

This method assigns globally unique, partition-agnostic indices to the nodes and elements in the mesh.

The approach is to compute the Hilbert space-filling curve key and use its value to assign an index in [0,N_global). Since the Hilbert key is unique for each spatial location, two objects occupying the same location will be assigned the same global id. Thus, this method can also be useful for identifying duplicate nodes which may occur during parallel refinement.

Definition at line 204 of file mesh_communication_global_indices.C.

References libMesh::Parallel::Sort< KeyType, IdxType >::bin(), libMesh::ParallelObject::comm(), communicator, libMesh::MeshTools::create_nodal_bounding_box(), distance(), libMesh::MeshTools::Generation::Private::idx(), libMesh::libmesh_assert(), libMesh::MeshTools::libmesh_assert_equal_connectivity(), libMesh::MeshTools::libmesh_assert_equal_points(), libMesh::make_range(), mesh, libMesh::MeshBase::n_elem(), libMesh::MeshBase::n_nodes(), libMesh::Threads::parallel_for(), and libMesh::Parallel::Sort< KeyType, IdxType >::sort().

Referenced by libMesh::MeshTools::Private::globally_renumber_nodes_and_elements().

205 {
206  LOG_SCOPE ("assign_global_indices()", "MeshCommunication");
207 
208  // This method determines partition-agnostic global indices
209  // for nodes and elements.
210 
211  // Algorithm:
212  // (1) compute the Hilbert key for each local node/element
213  // (2) perform a parallel sort of the Hilbert key
214  // (3) get the min/max value on each processor
215  // (4) determine the position in the global ranking for
216  // each local object
217 
218  const Parallel::Communicator & communicator = mesh.comm();
219 
220 #ifdef DEBUG
221  // This is going to be a mess if geometry is out of sync
222  MeshTools::libmesh_assert_equal_points(mesh);
223  MeshTools::libmesh_assert_equal_connectivity(mesh);
224 #endif
225 
226  // Global bounding box. We choose the nodal bounding box for
227  // backwards compatibility; the element bounding box may be looser
228  // on curved elements.
229  const BoundingBox bbox =
230  MeshTools::create_nodal_bounding_box (mesh);
231 
232  const Point bboxinv = invert_bbox(bbox);
233 
234  //-------------------------------------------------------------
235  // (1) compute Hilbert keys
236  std::vector<Parallel::DofObjectKey>
237  node_keys, elem_keys;
238 
239  {
240  // Nodes first
241  {
242  ConstNodeRange nr (mesh.local_nodes_begin(),
243  mesh.local_nodes_end());
244  node_keys.resize (nr.size());
245  Threads::parallel_for (nr, ComputeHilbertKeys (bbox, bboxinv, node_keys));
246 
247  // // It's O(N^2) to check that these keys don't duplicate before the
248  // // sort...
249  //
250  // MeshBase::const_node_iterator nodei = mesh.local_nodes_begin();
251  // for (std::size_t i = 0; i != node_keys.size(); ++i, ++nodei)
252  // {
253  // MeshBase::const_node_iterator nodej = mesh.local_nodes_begin();
254  // for (std::size_t j = 0; j != i; ++j, ++nodej)
255  // {
256  // if (node_keys[i] == node_keys[j])
257  // {
258  // CFixBitVec icoords[3], jcoords[3];
259  // get_hilbert_coords(**nodej, bbox, bboxinv, jcoords);
260  // libMesh::err << "node " << (*nodej)->id() << ", " << static_cast<Point &>(**nodej) << " has HilbertIndices " << node_keys[j] << std::endl;
261  // get_hilbert_coords(**nodei, bbox, bboxinv, icoords);
262  // libMesh::err << "node " << (*nodei)->id() << ", " << static_cast<Point &>(**nodei) << " has HilbertIndices " << node_keys[i] << std::endl;
263  // libmesh_error_msg("Error: nodes with duplicate Hilbert keys!");
264  // }
265  // }
266  // }
267  }
268 
269  // Elements next
270  {
271  ConstElemRange er (mesh.local_elements_begin(),
272  mesh.local_elements_end());
273  elem_keys.resize (er.size());
274  Threads::parallel_for (er, ComputeHilbertKeys (bbox, bboxinv, elem_keys));
275 
276  // // For elements, the keys can be (and in the case of TRI, are
277  // // expected to be) duplicates, but only if the elements are at
278  // // different levels
279  // MeshBase::const_element_iterator elemi = mesh.local_elements_begin();
280  // for (std::size_t i = 0; i != elem_keys.size(); ++i, ++elemi)
281  // {
282  // MeshBase::const_element_iterator elemj = mesh.local_elements_begin();
283  // for (std::size_t j = 0; j != i; ++j, ++elemj)
284  // {
285  // if ((elem_keys[i] == elem_keys[j]) &&
286  // ((*elemi)->level() == (*elemj)->level()))
287  // {
288  // libMesh::err << "level " << (*elemj)->level()
289  // << " elem\n" << (**elemj)
290  // << " vertex average " << (*elemj)->vertex_average()
291  // << " has HilbertIndices " << elem_keys[j]
292  // << " or " << get_dofobject_key((**elemj), bbox, bboxinv)
293  // << std::endl;
294  // libMesh::err << "level " << (*elemi)->level()
295  // << " elem\n" << (**elemi)
296  // << " vertex average " << (*elemi)->vertex_average()
297  // << " has HilbertIndices " << elem_keys[i]
298  // << " or " << get_dofobject_key((**elemi), bbox, bboxinv)
299  // << std::endl;
300  // libmesh_error_msg("Error: level " << (*elemi)->level() << " elements with duplicate Hilbert keys!");
301  // }
302  // }
303  // }
304  }
305  } // done computing Hilbert keys
306 
307 
308 
309  //-------------------------------------------------------------
310  // (2) parallel sort the Hilbert keys
311  Parallel::Sort<Parallel::DofObjectKey> node_sorter (communicator,
312  node_keys);
313  node_sorter.sort(); /* done with node_keys */ //node_keys.clear();
314 
315  const std::vector<Parallel::DofObjectKey> & my_node_bin =
316  node_sorter.bin();
317 
318  Parallel::Sort<Parallel::DofObjectKey> elem_sorter (communicator,
319  elem_keys);
320  elem_sorter.sort(); /* done with elem_keys */ //elem_keys.clear();
321 
322  const std::vector<Parallel::DofObjectKey> & my_elem_bin =
323  elem_sorter.bin();
324 
325 
326 
327  //-------------------------------------------------------------
328  // (3) get the max value on each processor
329  std::vector<Parallel::DofObjectKey>
330  node_upper_bounds(communicator.size()),
331  elem_upper_bounds(communicator.size());
332 
333  { // limit scope of temporaries
334  std::vector<Parallel::DofObjectKey> recvbuf(2*communicator.size());
335  std::vector<unsigned short int> /* do not use a vector of bools here since it is not always so! */
336  empty_nodes (communicator.size()),
337  empty_elem (communicator.size());
338  std::vector<Parallel::DofObjectKey> my_max(2);
339 
340  communicator.allgather (static_cast<unsigned short int>(my_node_bin.empty()), empty_nodes);
341  communicator.allgather (static_cast<unsigned short int>(my_elem_bin.empty()), empty_elem);
342 
343  if (!my_node_bin.empty()) my_max[0] = my_node_bin.back();
344  if (!my_elem_bin.empty()) my_max[1] = my_elem_bin.back();
345 
346  communicator.allgather (my_max, /* identical_buffer_sizes = */ true);
347 
348  // Be careful here. The *_upper_bounds will be used to find the processor
349  // a given object belongs to. So, if a processor contains no objects (possible!)
350  // then copy the bound from the lower processor id.
351  for (auto p : make_range(communicator.size()))
352  {
353  node_upper_bounds[p] = my_max[2*p+0];
354  elem_upper_bounds[p] = my_max[2*p+1];
355 
356  if (p > 0) // default hilbert index value is the OK upper bound for processor 0.
357  {
358  if (empty_nodes[p]) node_upper_bounds[p] = node_upper_bounds[p-1];
359  if (empty_elem[p]) elem_upper_bounds[p] = elem_upper_bounds[p-1];
360  }
361  }
362  }
363 
364 
365 
366  //-------------------------------------------------------------
367  // (4) determine the position in the global ranking for
368  // each local object
369  {
370  //----------------------------------------------
371  // Nodes first -- all nodes, not just local ones
372  {
373  // Request sets to send to each processor
374  std::map<processor_id_type, std::vector<Parallel::DofObjectKey>>
375  requested_ids;
376  // Results to gather from each processor - kept in a map so we
377  // do only one loop over nodes after all receives are done.
378  std::map<dof_id_type, std::vector<dof_id_type>>
379  filled_request;
380 
381  // build up list of requests
382  for (const auto & node : mesh.node_ptr_range())
383  {
384  libmesh_assert(node);
385  const Parallel::DofObjectKey hi =
386  get_dofobject_key (*node, bbox, bboxinv);
387  const processor_id_type pid =
388  cast_int<processor_id_type>
389  (std::distance (node_upper_bounds.begin(),
390  std::lower_bound(node_upper_bounds.begin(),
391  node_upper_bounds.end(),
392  hi)));
393 
394  libmesh_assert_less (pid, communicator.size());
395 
396  requested_ids[pid].push_back(hi);
397  }
398 
399  // The number of objects in my_node_bin on each processor
400  std::vector<dof_id_type> node_bin_sizes(communicator.size());
401  communicator.allgather (static_cast<dof_id_type>(my_node_bin.size()), node_bin_sizes);
402 
403  // The offset of my first global index
404  dof_id_type my_offset = 0;
405  for (auto pid : make_range(communicator.rank()))
406  my_offset += node_bin_sizes[pid];
407 
408  auto gather_functor =
409  [
410 #ifndef NDEBUG
411  & node_upper_bounds,
412  & communicator,
413 #endif
414  & my_node_bin,
415  my_offset
416  ]
417  (processor_id_type,
418  const std::vector<Parallel::DofObjectKey> & keys,
419  std::vector<dof_id_type> & global_ids)
420  {
421  // Fill the requests
422  const std::size_t keys_size = keys.size();
423  global_ids.reserve(keys_size);
424  for (std::size_t idx=0; idx != keys_size; idx++)
425  {
426  const Parallel::DofObjectKey & hi = keys[idx];
427  libmesh_assert_less_equal (hi, node_upper_bounds[communicator.rank()]);
428 
429  // find the requested index in my node bin
430  std::vector<Parallel::DofObjectKey>::const_iterator pos =
431  std::lower_bound (my_node_bin.begin(), my_node_bin.end(), hi);
432  libmesh_assert (pos != my_node_bin.end());
433  libmesh_assert_equal_to (*pos, hi);
434 
435  // Finally, assign the global index based off the position of the index
436  // in my array, properly offset.
437  global_ids.push_back(cast_int<dof_id_type>(std::distance(my_node_bin.begin(), pos) + my_offset));
438  }
439  };
440 
441  auto action_functor =
442  [&filled_request]
443  (processor_id_type pid,
444  const std::vector<Parallel::DofObjectKey> &,
445  const std::vector<dof_id_type> & global_ids)
446  {
447  filled_request[pid] = global_ids;
448  };
449 
450  // Trade requests with other processors
451  const dof_id_type * ex = nullptr;
452  Parallel::pull_parallel_vector_data
453  (communicator, requested_ids, gather_functor, action_functor, ex);
454 
455  // We now have all the filled requests, so we can loop through our
456  // nodes once and assign the global index to each one.
457  {
458  std::map<dof_id_type, std::vector<dof_id_type>::const_iterator>
459  next_obj_on_proc;
460  for (auto & p : filled_request)
461  next_obj_on_proc[p.first] = p.second.begin();
462 
463  for (auto & node : mesh.node_ptr_range())
464  {
465  libmesh_assert(node);
466  const Parallel::DofObjectKey hi =
467  get_dofobject_key (*node, bbox, bboxinv);
468  const processor_id_type pid =
469  cast_int<processor_id_type>
470  (std::distance (node_upper_bounds.begin(),
471  std::lower_bound(node_upper_bounds.begin(),
472  node_upper_bounds.end(),
473  hi)));
474 
475  libmesh_assert_less (pid, communicator.size());
476  libmesh_assert (next_obj_on_proc[pid] != filled_request[pid].end());
477 
478  const dof_id_type global_index = *next_obj_on_proc[pid];
479  libmesh_assert_less (global_index, mesh.n_nodes());
480  node->set_id() = global_index;
481 
482  ++next_obj_on_proc[pid];
483  }
484  }
485  }
486 
487  //---------------------------------------------------
488  // elements next -- all elements, not just local ones
489  {
490  // Request sets to send to each processor
491  std::map<processor_id_type, std::vector<Parallel::DofObjectKey>>
492  requested_ids;
493  // Results to gather from each processor - kept in a map so we
494  // do only one loop over elements after all receives are done.
495  std::map<dof_id_type, std::vector<dof_id_type>>
496  filled_request;
497 
498  for (const auto & elem : mesh.element_ptr_range())
499  {
500  libmesh_assert(elem);
501  const Parallel::DofObjectKey hi =
502  get_dofobject_key (*elem, bbox, bboxinv);
503  const processor_id_type pid =
504  cast_int<processor_id_type>
505  (std::distance (elem_upper_bounds.begin(),
506  std::lower_bound(elem_upper_bounds.begin(),
507  elem_upper_bounds.end(),
508  hi)));
509 
510  libmesh_assert_less (pid, communicator.size());
511 
512  requested_ids[pid].push_back(hi);
513  }
514 
515  // The number of objects in my_elem_bin on each processor
516  std::vector<dof_id_type> elem_bin_sizes(communicator.size());
517  communicator.allgather (static_cast<dof_id_type>(my_elem_bin.size()), elem_bin_sizes);
518 
519  // The offset of my first global index
520  dof_id_type my_offset = 0;
521  for (auto pid : make_range(communicator.rank()))
522  my_offset += elem_bin_sizes[pid];
523 
524  auto gather_functor =
525  [
526 #ifndef NDEBUG
527  & elem_upper_bounds,
528  & communicator,
529 #endif
530  & my_elem_bin,
531  my_offset
532  ]
533  (processor_id_type,
534  const std::vector<Parallel::DofObjectKey> & keys,
535  std::vector<dof_id_type> & global_ids)
536  {
537  // Fill the requests
538  const std::size_t keys_size = keys.size();
539  global_ids.reserve(keys_size);
540  for (std::size_t idx=0; idx != keys_size; idx++)
541  {
542  const Parallel::DofObjectKey & hi = keys[idx];
543  libmesh_assert_less_equal (hi, elem_upper_bounds[communicator.rank()]);
544 
545  // find the requested index in my elem bin
546  std::vector<Parallel::DofObjectKey>::const_iterator pos =
547  std::lower_bound (my_elem_bin.begin(), my_elem_bin.end(), hi);
548  libmesh_assert (pos != my_elem_bin.end());
549  libmesh_assert_equal_to (*pos, hi);
550 
551  // Finally, assign the global index based off the position of the index
552  // in my array, properly offset.
553  global_ids.push_back (cast_int<dof_id_type>(std::distance(my_elem_bin.begin(), pos) + my_offset));
554  }
555  };
556 
557  auto action_functor =
558  [&filled_request]
559  (processor_id_type pid,
560  const std::vector<Parallel::DofObjectKey> &,
561  const std::vector<dof_id_type> & global_ids)
562  {
563  filled_request[pid] = global_ids;
564  };
565 
566  // Trade requests with other processors
567  const dof_id_type * ex = nullptr;
568  Parallel::pull_parallel_vector_data
569  (communicator, requested_ids, gather_functor, action_functor, ex);
570 
571  // We now have all the filled requests, so we can loop through our
572  // elements once and assign the global index to each one.
573  {
574  std::vector<std::vector<dof_id_type>::const_iterator>
575  next_obj_on_proc; next_obj_on_proc.reserve(communicator.size());
576  for (auto pid : make_range(communicator.size()))
577  next_obj_on_proc.push_back(filled_request[pid].begin());
578 
579  for (auto & elem : mesh.element_ptr_range())
580  {
581  libmesh_assert(elem);
582  const Parallel::DofObjectKey hi =
583  get_dofobject_key (*elem, bbox, bboxinv);
584  const processor_id_type pid =
585  cast_int<processor_id_type>
586  (std::distance (elem_upper_bounds.begin(),
587  std::lower_bound(elem_upper_bounds.begin(),
588  elem_upper_bounds.end(),
589  hi)));
590 
591  libmesh_assert_less (pid, communicator.size());
592  libmesh_assert (next_obj_on_proc[pid] != filled_request[pid].end());
593 
594  const dof_id_type global_index = *next_obj_on_proc[pid];
595  libmesh_assert_less (global_index, mesh.n_elem());
596  elem->set_id() = global_index;
597 
598  ++next_obj_on_proc[pid];
599  }
600  }
601  }
602  }
603 }

◆ broadcast()

void libMesh::MeshCommunication::broadcast ( MeshBase & mesh) const

This method takes a mesh (which is assumed to reside on processor 0) and broadcasts it to all the other processors.

It also broadcasts any boundary information the mesh has associated with it.

Definition at line 1177 of file mesh_communication.C.

Referenced by main(), libMesh::NameBasedIO::read(), libMesh::CheckpointIO::read(), ExodusTest< elem_type >::test_read_gold(), ExodusTest< elem_type >::test_write(), MeshInputTest::testAbaqusRead(), MeshInputTest::testCopyElementSolutionImpl(), MeshInputTest::testCopyElementVectorImpl(), MeshInputTest::testCopyNodalSolutionImpl(), MeshInputTest::testDynaFileMappings(), MeshInputTest::testDynaNoSplines(), MeshInputTest::testDynaReadElem(), MeshInputTest::testDynaReadPatch(), MeshInputTest::testExodusFileMappings(), MeshInputTest::testExodusIGASidesets(), MeshInputTest::testExodusWriteElementDataFromDiscontinuousNodalData(), MeshInputTest::testGmshBCIDOverlap(), MeshInputTest::testGoodGmsh(), MeshInputTest::testGoodSTL(), MeshInputTest::testGoodSTLBinary(), MeshInputTest::testLowOrderEdgeBlocks(), MeshInputTest::testTetgenIO(), and libMesh::NetGenMeshInterface::triangulate().

1178 {
1179  // no MPI == one processor, no need for this method...
1180  return;
1181 }

◆ check_for_duplicate_global_indices()

void libMesh::MeshCommunication::check_for_duplicate_global_indices ( MeshBase & mesh) const

Throw an error if we have any index clashes in the numbering used by assign_global_indices.

Definition at line 612 of file mesh_communication_global_indices.C.

References libMesh::MeshTools::create_nodal_bounding_box(), libMesh::err, mesh, and libMesh::Threads::parallel_for().

613 {
614  LOG_SCOPE ("check_for_duplicate_global_indices()", "MeshCommunication");
615 
616  // Global bounding box. We choose the nodal bounding box for
617  // backwards compatibility; the element bounding box may be looser
618  // on curved elements.
619  const BoundingBox bbox =
620  MeshTools::create_nodal_bounding_box (mesh);
621 
622  const Point bboxinv = invert_bbox(bbox);
623 
624  std::vector<Parallel::DofObjectKey>
625  node_keys, elem_keys;
626 
627  {
628  // Nodes first
629  {
630  ConstNodeRange nr (mesh.local_nodes_begin(),
631  mesh.local_nodes_end());
632  node_keys.resize (nr.size());
633  Threads::parallel_for (nr, ComputeHilbertKeys (bbox, bboxinv, node_keys));
634 
635  // It's O(N^2) to check that these keys don't duplicate before the
636  // sort...
637  MeshBase::const_node_iterator nodei = mesh.local_nodes_begin();
638  for (std::size_t i = 0; i != node_keys.size(); ++i, ++nodei)
639  {
640  MeshBase::const_node_iterator nodej = mesh.local_nodes_begin();
641  for (std::size_t j = 0; j != i; ++j, ++nodej)
642  {
643  if (node_keys[i] == node_keys[j])
644  {
645  CFixBitVec icoords[3], jcoords[3];
646  get_hilbert_coords(**nodej, bbox, bboxinv, jcoords);
647  libMesh::err <<
648  "node " << (*nodej)->id() << ", " <<
649  *(const Point *)(*nodej) << " has HilbertIndices " <<
650  node_keys[j] << std::endl;
651  get_hilbert_coords(**nodei, bbox, bboxinv, icoords);
652  libMesh::err <<
653  "node " << (*nodei)->id() << ", " <<
654  *(const Point *)(*nodei) << " has HilbertIndices " <<
655  node_keys[i] << std::endl;
656  libmesh_error_msg("Error: nodes with duplicate Hilbert keys!");
657  }
658  }
659  }
660  }
661 
662  // Elements next
663  {
664  ConstElemRange er (mesh.local_elements_begin(),
665  mesh.local_elements_end());
666  elem_keys.resize (er.size());
667  Threads::parallel_for (er, ComputeHilbertKeys (bbox, bboxinv, elem_keys));
668 
669  // For elements, the keys can be (and in the case of TRI, are
670  // expected to be) duplicates, but only if the elements are at
671  // different levels
672  MeshBase::const_element_iterator elemi = mesh.local_elements_begin();
673  for (std::size_t i = 0; i != elem_keys.size(); ++i, ++elemi)
674  {
675  MeshBase::const_element_iterator elemj = mesh.local_elements_begin();
676  for (std::size_t j = 0; j != i; ++j, ++elemj)
677  {
678  if ((elem_keys[i] == elem_keys[j]) &&
679  ((*elemi)->level() == (*elemj)->level()))
680  {
681  libMesh::err <<
682  "level " << (*elemj)->level() << " elem\n" <<
683  (**elemj) << " vertex average " <<
684  (*elemj)->vertex_average() << " has HilbertIndices " <<
685  elem_keys[j] << " or " <<
686  get_dofobject_key((**elemj), bbox, bboxinv) <<
687  std::endl;
688  libMesh::err <<
689  "level " << (*elemi)->level() << " elem\n" <<
690  (**elemi) << " vertex average " <<
691  (*elemi)->vertex_average() << " has HilbertIndices " <<
692  elem_keys[i] << " or " <<
693  get_dofobject_key((**elemi), bbox, bboxinv) <<
694  std::endl;
695  libmesh_error_msg("Error: level " << (*elemi)->level() << " elements with duplicate Hilbert keys!");
696  }
697  }
698  }
699  }
700  } // done checking Hilbert keys
701 }

◆ clear()

void libMesh::MeshCommunication::clear ( )

Clears all data structures and resets to a pristine state.

Definition at line 442 of file mesh_communication.C.

443 {
444  // _neighboring_processors.clear();
445 }

◆ delete_remote_elements()

void libMesh::MeshCommunication::delete_remote_elements ( DistributedMesh & mesh,
const std::set< Elem *> &  extra_ghost_elem_ids 
) const

This method takes an input DistributedMesh which may be distributed among all the processors.

Each processor deletes all elements which are neither local elements nor "ghost" elements which touch local elements, and deletes all nodes which are not contained in local or ghost elements. The end result is that a previously serial DistributedMesh will be distributed between processors. Since this method is collective it must be called by all processors.

The std::set is a list of extra elements that you don't want to delete. These will be left on the current processor along with local elements and ghosted neighbors.

Definition at line 2147 of file mesh_communication.C.

References libMesh::Elem::active_family_tree(), libMesh::as_range(), libMesh::MeshBase::clear_point_locator(), libMesh::ParallelObject::comm(), libMesh::connect_children(), libMesh::connect_element_dependencies(), libMesh::MeshBase::delete_elem(), libMesh::MeshBase::delete_node(), libMesh::MeshBase::ghosting_functors_begin(), libMesh::MeshBase::ghosting_functors_end(), libMesh::DofObject::invalid_processor_id, libMesh::MeshBase::is_serial(), libMesh::libmesh_assert(), libMesh::MeshTools::libmesh_assert_valid_constraint_rows(), libMesh::MeshTools::libmesh_assert_valid_refinement_tree(), libMesh::Elem::make_links_to_me_remote(), libMesh::MeshBase::max_elem_id(), libMesh::MeshBase::max_node_id(), mesh, libMesh::MeshBase::n_constraint_rows(), libMesh::MeshTools::n_levels(), libMesh::ParallelObject::processor_id(), libMesh::DofObject::processor_id(), libMesh::query_ghosting_functors(), libMesh::Elem::subactive(), and TIMPI::Communicator::verify().

2149 {
2150  // The mesh should know it's about to be parallelized
2151  libmesh_assert (!mesh.is_serial());
2152 
2153  LOG_SCOPE("delete_remote_elements()", "MeshCommunication");
2154 
2155 #ifdef DEBUG
2156  // We expect maximum ids to be in sync so we can use them to size
2157  // vectors
2160  const dof_id_type par_max_node_id = mesh.parallel_max_node_id();
2161  const dof_id_type par_max_elem_id = mesh.parallel_max_elem_id();
2162  libmesh_assert_equal_to (par_max_node_id, mesh.max_node_id());
2163  libmesh_assert_equal_to (par_max_elem_id, mesh.max_elem_id());
2164  const dof_id_type n_constraint_rows = mesh.n_constraint_rows();
2165 #endif
2166 
2167  connected_elem_set_type elements_to_keep;
2168 
2169  // Don't delete elements that we were explicitly told not to
2170  for (const auto & elem : extra_ghost_elem_ids)
2171  {
2172  std::vector<const Elem *> active_family;
2173 #ifdef LIBMESH_ENABLE_AMR
2174  if (!elem->subactive())
2175  elem->active_family_tree(active_family);
2176  else
2177 #endif
2178  active_family.push_back(elem);
2179 
2180  for (const auto & f : active_family)
2181  elements_to_keep.insert(f);
2182  }
2183 
2184  // See which elements we still need to keep ghosted, given that
2185  // we're keeping local and unpartitioned elements.
2186  query_ghosting_functors
2187  (mesh, mesh.processor_id(),
2188  mesh.active_pid_elements_begin(mesh.processor_id()),
2189  mesh.active_pid_elements_end(mesh.processor_id()),
2190  elements_to_keep);
2191  query_ghosting_functors
2192  (mesh, DofObject::invalid_processor_id,
2193  mesh.active_pid_elements_begin(DofObject::invalid_processor_id),
2194  mesh.active_pid_elements_end(DofObject::invalid_processor_id),
2195  elements_to_keep);
2196 
2197  // The inactive elements we need to send should have their
2198  // immediate children present.
2199  connect_children(mesh, mesh.pid_elements_begin(mesh.processor_id()),
2200  mesh.pid_elements_end(mesh.processor_id()),
2201  elements_to_keep);
2202  connect_children(mesh,
2203  mesh.pid_elements_begin(DofObject::invalid_processor_id),
2204  mesh.pid_elements_end(DofObject::invalid_processor_id),
2205  elements_to_keep);
2206 
2207  // And see which elements and nodes they depend on
2208  connected_node_set_type connected_nodes;
2209  connect_element_dependencies(mesh, elements_to_keep, connected_nodes);
2210 
2211  // Delete all the elements we have no reason to save,
2212  // starting with the most refined so that the mesh
2213  // is valid at all intermediate steps
2214  unsigned int n_levels = MeshTools::n_levels(mesh);
2215 
2216  for (int l = n_levels - 1; l >= 0; --l)
2217  for (auto & elem : as_range(mesh.level_elements_begin(l),
2218  mesh.level_elements_end(l)))
2219  {
2220  libmesh_assert (elem);
2221  // Make sure we don't leave any invalid pointers
2222  const bool keep_me = elements_to_keep.count(elem);
2223 
2224  if (!keep_me)
2225  elem->make_links_to_me_remote();
2226 
2227  // delete_elem doesn't currently invalidate element
2228  // iterators... that had better not change
2229  if (!keep_me)
2230  mesh.delete_elem(elem);
2231  }
2232 
2233  // Delete all the nodes we have no reason to save
2234  for (auto & node : mesh.node_ptr_range())
2235  {
2236  libmesh_assert(node);
2237  if (!connected_nodes.count(node))
2238  {
2239  libmesh_assert_not_equal_to(node->processor_id(),
2240  mesh.processor_id());
2241  mesh.delete_node(node);
2242  }
2243  }
2244 
2245  // If we had a point locator, it's invalid now that some of the
2246  // elements it pointed to have been deleted.
2247  mesh.clear_point_locator();
2248 
2249  // We now have all remote elements and nodes deleted; our ghosting
2250  // functors should be ready to delete any now-redundant cached data
2251  // they use too.
2252  for (auto & gf : as_range(mesh.ghosting_functors_begin(), mesh.ghosting_functors_end()))
2253  gf->delete_remote_elements();
2254 
2255 #ifdef DEBUG
2256  const dof_id_type n_new_constraint_rows = mesh.n_constraint_rows();
2257  libmesh_assert_equal_to(n_constraint_rows, n_new_constraint_rows);
2258 
2259  MeshTools::libmesh_assert_valid_refinement_tree(mesh);
2260  MeshTools::libmesh_assert_valid_constraint_rows(mesh);
2261 #endif
2262 }
void libmesh_assert_valid_refinement_tree(const MeshBase &mesh)
A function for verifying that elements on this processor have valid descendants and consistent active...
Definition: mesh_tools.C:1547
MeshBase & mesh
void libmesh_assert_valid_constraint_rows(const MeshBase &mesh)
A function for verifying that all mesh constraint rows express relations between nodes and elements t...
Definition: mesh_tools.C:1665
const Parallel::Communicator & comm() const
dof_id_type n_constraint_rows() const
Definition: mesh_base.C:2044
virtual bool is_serial() const
Definition: mesh_base.h:211
virtual void delete_elem(Elem *e)=0
Removes element e from the mesh.
void query_ghosting_functors(const MeshBase &mesh, processor_id_type pid, MeshBase::const_element_iterator elem_it, MeshBase::const_element_iterator elem_end, connected_elem_set_type &connected_elements)
static const processor_id_type invalid_processor_id
An invalid processor_id to distinguish DoFs that have not been assigned to a processor.
Definition: dof_object.h:493
SimpleRange< IndexType > as_range(const std::pair< IndexType, IndexType > &p)
Helper function that allows us to treat a homogeneous pair as a range.
Definition: simple_range.h:57
unsigned int n_levels(const MeshBase &mesh)
Definition: mesh_tools.C:802
virtual dof_id_type max_elem_id() const =0
void clear_point_locator()
Releases the current PointLocator object.
Definition: mesh_base.C:1671
libmesh_assert(ctx)
virtual void delete_node(Node *n)=0
Removes the Node n from the mesh.
void connect_children(const MeshBase &mesh, MeshBase::const_element_iterator elem_it, MeshBase::const_element_iterator elem_end, connected_elem_set_type &connected_elements)
std::set< const Node * > connected_node_set_type
std::set< GhostingFunctor * >::const_iterator ghosting_functors_begin() const
Beginning of range of ghosting functors.
Definition: mesh_base.h:1291
timpi_pure bool verify(const T &r) const
void connect_element_dependencies(const MeshBase &mesh, connected_elem_set_type &connected_elements, connected_node_set_type &connected_nodes)
std::set< const Elem *, CompareElemIdsByLevel > connected_elem_set_type
virtual dof_id_type max_node_id() const =0
processor_id_type processor_id() const
std::set< GhostingFunctor * >::const_iterator ghosting_functors_end() const
End of range of ghosting functors.
Definition: mesh_base.h:1297
uint8_t dof_id_type
Definition: id_types.h:67

◆ find_global_indices()

template<typename ForwardIterator >
void libMesh::MeshCommunication::find_global_indices ( const Parallel::Communicator communicator,
const libMesh::BoundingBox bbox,
const ForwardIterator &  begin,
const ForwardIterator &  end,
std::vector< dof_id_type > &  index_map 
) const

This method determines a globally unique, partition-agnostic index for each object in the input range.

Definition at line 748 of file mesh_communication_global_indices.C.

References libMesh::Parallel::Sort< KeyType, IdxType >::bin(), communicator, distance(), libMesh::MeshTools::Generation::Private::idx(), libMesh::DofObject::invalid_processor_id, libMesh::libmesh_assert(), libMesh::libmesh_ignore(), libMesh::make_range(), libMesh::Real, and libMesh::Parallel::Sort< KeyType, IdxType >::sort().

Referenced by libMesh::ParmetisPartitioner::initialize(), libMesh::MetisPartitioner::partition_range(), and libMesh::Partitioner::partition_unpartitioned_elements().

753 {
754  LOG_SCOPE ("find_global_indices()", "MeshCommunication");
755 
756  // This method determines partition-agnostic global indices
757  // for nodes and elements.
758 
759  // Algorithm:
760  // (1) compute the Hilbert key for each local node/element
761  // (2) perform a parallel sort of the Hilbert key
762  // (3) get the min/max value on each processor
763  // (4) determine the position in the global ranking for
764  // each local object
765  index_map.clear();
766  std::size_t n_objects = std::distance (begin, end);
767  index_map.reserve(n_objects);
768 
769  const Point bboxinv = invert_bbox(bbox);
770 
771  //-------------------------------------------------------------
772  // (1) compute Hilbert keys
773  // These aren't trivial to compute, and we will need them again.
774  // But the binsort will sort the input vector, trashing the order
775  // that we'd like to rely on. So, two vectors...
776  std::vector<Parallel::DofObjectKey>
777  sorted_hilbert_keys,
778  hilbert_keys;
779  sorted_hilbert_keys.reserve(n_objects);
780  hilbert_keys.reserve(n_objects);
781  {
782  LOG_SCOPE("compute_hilbert_indices()", "MeshCommunication");
783  const processor_id_type my_pid = communicator.rank();
784  for (ForwardIterator it=begin; it!=end; ++it)
785  {
786  const Parallel::DofObjectKey hi(get_dofobject_key (**it, bbox, bboxinv));
787  hilbert_keys.push_back(hi);
788 
789  const processor_id_type pid = (*it)->processor_id();
790 
791  if (pid == my_pid)
792  sorted_hilbert_keys.push_back(hi);
793 
794  // someone needs to take care of unpartitioned objects!
795  if ((my_pid == 0) &&
796  (pid == DofObject::invalid_processor_id))
797  sorted_hilbert_keys.push_back(hi);
798  }
799  }
800 
801  //-------------------------------------------------------------
802  // (2) parallel sort the Hilbert keys
803  START_LOG ("parallel_sort()", "MeshCommunication");
804  Parallel::Sort<Parallel::DofObjectKey> sorter (communicator,
805  sorted_hilbert_keys);
806  sorter.sort();
807  STOP_LOG ("parallel_sort()", "MeshCommunication");
808  const std::vector<Parallel::DofObjectKey> & my_bin = sorter.bin();
809 
810  // The number of objects in my_bin on each processor
811  std::vector<unsigned int> bin_sizes(communicator.size());
812  communicator.allgather (static_cast<unsigned int>(my_bin.size()), bin_sizes);
813 
814  // The offset of my first global index
815  unsigned int my_offset = 0;
816  for (auto pid : make_range(communicator.rank()))
817  my_offset += bin_sizes[pid];
818 
819  //-------------------------------------------------------------
820  // (3) get the max value on each processor
821  std::vector<Parallel::DofObjectKey>
822  upper_bounds(1);
823 
824  if (!my_bin.empty())
825  upper_bounds[0] = my_bin.back();
826 
827  communicator.allgather (upper_bounds, /* identical_buffer_sizes = */ true);
828 
829  // Be careful here. The *_upper_bounds will be used to find the processor
830  // a given object belongs to. So, if a processor contains no objects (possible!)
831  // then copy the bound from the lower processor id.
832  for (auto p : IntRange<processor_id_type>(1, communicator.size()))
833  if (!bin_sizes[p]) upper_bounds[p] = upper_bounds[p-1];
834 
835 
836  //-------------------------------------------------------------
837  // (4) determine the position in the global ranking for
838  // each local object
839  {
840  //----------------------------------------------
841  // all objects, not just local ones
842 
843  // Request sets to send to each processor
844  std::map<processor_id_type, std::vector<Parallel::DofObjectKey>>
845  requested_ids;
846  // Results to gather from each processor
847  std::map<processor_id_type, std::vector<dof_id_type>>
848  filled_request;
849 
850  // build up list of requests
851  std::vector<Parallel::DofObjectKey>::const_iterator hi =
852  hilbert_keys.begin();
853 
854  for (ForwardIterator it = begin; it != end; ++it)
855  {
856  libmesh_assert (hi != hilbert_keys.end());
857 
858  std::vector<Parallel::DofObjectKey>::iterator lb =
859  std::lower_bound(upper_bounds.begin(), upper_bounds.end(),
860  *hi);
861 
862  const processor_id_type pid =
863  cast_int<processor_id_type>
864  (std::distance (upper_bounds.begin(), lb));
865 
866  libmesh_assert_less (pid, communicator.size());
867 
868  requested_ids[pid].push_back(*hi);
869 
870  ++hi;
871  // go ahead and put pid in index_map, that way we
872  // don't have to repeat the std::lower_bound()
873  index_map.push_back(pid);
874  }
875 
876  auto gather_functor =
877  [
878 #ifndef NDEBUG
879  & upper_bounds,
880  & communicator,
881 #endif
882  & bbox,
883  & my_bin,
884  my_offset
885  ]
886  (processor_id_type, const std::vector<Parallel::DofObjectKey> & keys,
887  std::vector<dof_id_type> & global_ids)
888  {
889  // Ignore unused lambda capture warnings in devel mode
890  libmesh_ignore(bbox);
891 
892  // Fill the requests
893  const std::size_t keys_size = keys.size();
894  global_ids.clear();
895  global_ids.reserve(keys_size);
896  for (std::size_t idx=0; idx != keys_size; idx++)
897  {
898  const Parallel::DofObjectKey & hilbert_indices = keys[idx];
899  libmesh_assert_less_equal (hilbert_indices, upper_bounds[communicator.rank()]);
900 
901  // find the requested index in my node bin
902  std::vector<Parallel::DofObjectKey>::const_iterator pos =
903  std::lower_bound (my_bin.begin(), my_bin.end(), hilbert_indices);
904  libmesh_assert (pos != my_bin.end());
905 #ifdef DEBUG
906  // If we could not find the requested Hilbert index in
907  // my_bin, something went terribly wrong, possibly the
908  // Mesh was displaced differently on different processors,
909  // and therefore the Hilbert indices don't agree.
910  if (*pos != hilbert_indices)
911  {
912  // The input will be hilbert_indices. We convert it
913  // to BitVecType using the operator= provided by the
914  // BitVecType class. BitVecType is a CBigBitVec!
915  Hilbert::BitVecType input;
916 #ifdef LIBMESH_ENABLE_UNIQUE_ID
917  input = hilbert_indices.first;
918 #else
919  input = hilbert_indices;
920 #endif
921 
922  // Get output in a vector of CBigBitVec
923  std::vector<CBigBitVec> output(3);
924 
925  // Call the indexToCoords function
926  Hilbert::indexToCoords(output.data(), 8*sizeof(Hilbert::inttype), 3, input);
927 
928  // The entries in the output racks are integers in the
929  // range [0, Hilbert::inttype::max] which can be
930  // converted to floating point values in [0,1] and
931  // finally to actual values using the bounding box.
932  const Real max_int_as_real =
933  static_cast<Real>(std::numeric_limits<Hilbert::inttype>::max());
934 
935  // Get the points in [0,1]^3. The zeroth rack of each entry in
936  // 'output' maps to the normalized x, y, and z locations,
937  // respectively.
938  Point p_hat(static_cast<Real>(output[0].racks()[0]) / max_int_as_real,
939  static_cast<Real>(output[1].racks()[0]) / max_int_as_real,
940  static_cast<Real>(output[2].racks()[0]) / max_int_as_real);
941 
942  // Extract the bounding box limits in each direction
943  Real
944  xmin = bbox.first(0),
945  xmax = bbox.second(0),
946  ymin = bbox.first(1),
947  ymax = bbox.second(1),
948  zmin = bbox.first(2),
949  zmax = bbox.second(2);
950 
951  // Convert the points from [0,1]^3 to their actual (x,y,z) locations
952  Point p(xmin + (xmax-xmin)*p_hat(0),
953  ymin + (ymax-ymin)*p_hat(1),
954  zmin + (zmax-zmin)*p_hat(2));
955 
956  libmesh_error_msg("Could not find hilbert indices: "
957  << hilbert_indices
958  << " corresponding to point " << p);
959  }
960 #endif
961 
962  // Finally, assign the global index based off the position of the index
963  // in my array, properly offset.
964  global_ids.push_back (cast_int<dof_id_type>(std::distance(my_bin.begin(), pos) + my_offset));
965  }
966  };
967 
968  auto action_functor =
969  [&filled_request]
970  (processor_id_type pid,
971  const std::vector<Parallel::DofObjectKey> &,
972  const std::vector<dof_id_type> & global_ids)
973  {
974  filled_request[pid] = global_ids;
975  };
976 
977  const dof_id_type * ex = nullptr;
978  Parallel::pull_parallel_vector_data
979  (communicator, requested_ids, gather_functor, action_functor, ex);
980 
981  // We now have all the filled requests, so we can loop through our
982  // nodes once and assign the global index to each one.
983  {
984  std::vector<std::vector<dof_id_type>::const_iterator>
985  next_obj_on_proc; next_obj_on_proc.reserve(communicator.size());
986  for (auto pid : make_range(communicator.size()))
987  next_obj_on_proc.push_back(filled_request[pid].begin());
988 
989  unsigned int cnt=0;
990  for (ForwardIterator it = begin; it != end; ++it, cnt++)
991  {
992  const processor_id_type pid = cast_int<processor_id_type>
993  (index_map[cnt]);
994 
995  libmesh_assert_less (pid, communicator.size());
996  libmesh_assert (next_obj_on_proc[pid] != filled_request[pid].end());
997 
998  const dof_id_type global_index = *next_obj_on_proc[pid];
999  index_map[cnt] = global_index;
1000 
1001  ++next_obj_on_proc[pid];
1002  }
1003  }
1004  }
1005 
1006  libmesh_assert_equal_to(index_map.size(), n_objects);
1007 }
The IntRange templated class is intended to make it easy to loop over integers which are indices of a...
Definition: int_range.h:53
std::pair< Hilbert::HilbertIndices, unique_id_type > DofObjectKey
Real distance(const Point &p)
uint8_t processor_id_type
Definition: id_types.h:104
void libmesh_ignore(const Args &...)
static const processor_id_type invalid_processor_id
An invalid processor_id to distinguish DoFs that have not been assigned to a processor.
Definition: dof_object.h:493
libmesh_assert(ctx)
DIE A HORRIBLE DEATH HERE typedef LIBMESH_DEFAULT_SCALAR_TYPE Real
The parallel sorting method is templated on the type of data which is to be sorted.
Definition: parallel_sort.h:55
DIE A HORRIBLE DEATH HERE typedef MPI_Comm communicator
IntRange< T > make_range(T beg, T end)
The 2-parameter make_range() helper function returns an IntRange<T> when both input parameters are of...
Definition: int_range.h:140
A Point defines a location in LIBMESH_DIM dimensional Real space.
Definition: point.h:39
unsigned int idx(const ElemType type, const unsigned int nx, const unsigned int i, const unsigned int j)
A useful inline function which replaces the macros used previously.
uint8_t dof_id_type
Definition: id_types.h:67

◆ find_local_indices()

template<typename ForwardIterator >
void libMesh::MeshCommunication::find_local_indices ( const libMesh::BoundingBox bbox,
const ForwardIterator &  begin,
const ForwardIterator &  end,
std::unordered_map< dof_id_type, dof_id_type > &  index_map 
) const

This method determines a locally unique, contiguous index for each object in the input range.

Definition at line 710 of file mesh_communication_global_indices.C.

Referenced by libMesh::Partitioner::_find_global_index_by_pid_map().

714 {
715  LOG_SCOPE ("find_local_indices()", "MeshCommunication");
716 
717  // This method determines id-agnostic local indices
718  // for nodes and elements by sorting Hilbert keys.
719 
720  index_map.clear();
721 
722  const Point bboxinv = invert_bbox(bbox);
723 
724  //-------------------------------------------------------------
725  // (1) compute Hilbert keys
726  // These aren't trivial to compute, and we will need them again.
727  // But the binsort will sort the input vector, trashing the order
728  // that we'd like to rely on. So, two vectors...
729  std::map<Parallel::DofObjectKey, dof_id_type> hilbert_keys;
730  {
731  LOG_SCOPE("local_hilbert_indices", "MeshCommunication");
732  for (ForwardIterator it=begin; it!=end; ++it)
733  {
734  const Parallel::DofObjectKey hi(get_dofobject_key ((**it), bbox, bboxinv));
735  hilbert_keys.emplace(hi, (*it)->id());
736  }
737  }
738 
739  {
740  dof_id_type cnt = 0;
741  for (auto key_val : hilbert_keys)
742  index_map[key_val.second] = cnt++;
743  }
744 }
std::pair< Hilbert::HilbertIndices, unique_id_type > DofObjectKey
A Point defines a location in LIBMESH_DIM dimensional Real space.
Definition: point.h:39
uint8_t dof_id_type
Definition: id_types.h:67

◆ gather()

void libMesh::MeshCommunication::gather ( const processor_id_type  root_id,
MeshBase mesh 
) const

This method takes an input DistributedMesh which may be distributed among all the processors.

Each processor then sends its local nodes and elements to processor root_id. The end result is that a previously distributed DistributedMesh will be serialized on processor root_id. Since this method is collective it must be called by all processors. For the special case of root_id equal to DofObject::invalid_processor_id this function performs an allgather.

Definition at line 1318 of file mesh_communication.C.

Referenced by allgather().

1319 {
1320  // no MPI == one processor, no need for this method...
1321  return;
1322 }

◆ gather_neighboring_elements()

void libMesh::MeshCommunication::gather_neighboring_elements ( DistributedMesh mesh) const

Definition at line 676 of file mesh_communication.C.

Referenced by libMesh::Nemesis_IO::read().

677 {
678  // no MPI == one processor, no need for this method...
679  return;
680 }

◆ make_elems_parallel_consistent()

void libMesh::MeshCommunication::make_elems_parallel_consistent ( MeshBase mesh)

Copy ids of ghost elements from their local processors.

Definition at line 1802 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), mesh, libMesh::MeshBase::query_elem_ptr(), libMesh::MeshBase::renumber_elem(), libMesh::Parallel::sync_dofobject_data_by_id(), and libMesh::Parallel::sync_element_data_by_parent_id().

Referenced by libMesh::MeshRefinement::_refine_elements().

1803 {
1804  // This function must be run on all processors at once
1805  libmesh_parallel_only(mesh.comm());
1806 
1807  LOG_SCOPE ("make_elems_parallel_consistent()", "MeshCommunication");
1808 
1809  SyncIds syncids(mesh, &MeshBase::renumber_elem);
1810  Parallel::sync_element_data_by_parent_id
1811  (mesh, mesh.active_elements_begin(),
1812  mesh.active_elements_end(), syncids);
1813 
1814 #ifdef LIBMESH_ENABLE_UNIQUE_ID
1815  SyncUniqueIds<Elem> syncuniqueids(mesh, &MeshBase::query_elem_ptr);
1816  Parallel::sync_dofobject_data_by_id
1817  (mesh.comm(), mesh.active_elements_begin(),
1818  mesh.active_elements_end(), syncuniqueids);
1819 #endif
1820 }
MeshBase & mesh
const Parallel::Communicator & comm() const
void sync_dofobject_data_by_id(const Communicator &comm, const Iterator &range_begin, const Iterator &range_end, SyncFunctor &sync)
Request data about a range of ghost dofobjects uniquely identified by their id.
virtual const Elem * query_elem_ptr(const dof_id_type i) const =0
virtual void renumber_elem(dof_id_type old_id, dof_id_type new_id)=0
Changes the id of element old_id, both by changing elem(old_id)->id() and by moving elem(old_id) in t...
void sync_element_data_by_parent_id(MeshBase &mesh, const Iterator &range_begin, const Iterator &range_end, SyncFunctor &sync)
Request data about a range of ghost elements uniquely identified by their parent id and which child t...

◆ make_new_node_proc_ids_parallel_consistent()

void libMesh::MeshCommunication::make_new_node_proc_ids_parallel_consistent ( MeshBase mesh)

Assuming all processor ids on nodes touching local elements are parallel consistent, this function makes processor ids on new nodes on other processors parallel consistent as well.

Definition at line 1982 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), libMesh::DofObject::invalid_processor_id, libMesh::MeshTools::libmesh_assert_parallel_consistent_new_node_procids(), libMesh::MeshTools::libmesh_assert_parallel_consistent_procids< Node >(), mesh, libMesh::Elem::node_ref(), libMesh::Elem::node_ref_range(), libMesh::DofObject::processor_id(), libMesh::Parallel::sync_node_data_by_element_id(), and libMesh::Parallel::sync_node_data_by_element_id_once().

Referenced by make_new_nodes_parallel_consistent().

1983 {
1984  LOG_SCOPE ("make_new_node_proc_ids_parallel_consistent()", "MeshCommunication");
1985 
1986  // This function must be run on all processors at once
1987  libmesh_parallel_only(mesh.comm());
1988 
1989  // When this function is called, each section of a parallelized mesh
1990  // should be in the following state:
1991  //
1992  // Local nodes should have unique authoritative ids,
1993  // and new nodes should be unpartitioned.
1994  //
1995  // New ghost nodes touching local elements should be unpartitioned.
1996 
1997  // We may not have consistent processor ids for new nodes (because a
1998  // node may be old and partitioned on one processor but new and
1999  // unpartitioned on another) when we start
2000 #ifdef DEBUG
2002  // MeshTools::libmesh_assert_parallel_consistent_new_node_procids(mesh);
2003 #endif
2004 
2005  // We have two kinds of new nodes. *NEW* nodes are unpartitioned on
2006  // all processors: we need to use an id-independent (i.e. dumb)
2007  // heuristic to partition them. But "new" nodes are newly created
2008  // on some processors (when ghost elements are refined) yet
2009  // correspond to existing nodes on other processors: we need to use
2010  // the existing processor id for them.
2011  //
2012  // A node which is "new" on one processor will be associated with at
2013  // least one ghost element, and we can just query that ghost
2014  // element's owner to find out the correct processor id.
2015 
2016  auto node_unpartitioned =
2017  [](const Elem * elem, unsigned int local_node_num)
2018  { return elem->node_ref(local_node_num).processor_id() ==
2019  DofObject::invalid_processor_id; };
2020 
2021  SyncProcIds sync(mesh);
2022 
2023  Parallel::sync_node_data_by_element_id_once
2024  (mesh, mesh.not_local_elements_begin(),
2025  mesh.not_local_elements_end(), Parallel::SyncEverything(),
2026  node_unpartitioned, sync);
2027 
2028  // Nodes should now be unpartitioned iff they are truly new; those
2029  // are the *only* nodes we will touch.
2030 #ifdef DEBUG
2031  MeshTools::libmesh_assert_parallel_consistent_new_node_procids(mesh);
2032 #endif
2033 
2034  NodeWasNew node_was_new(mesh);
2035 
2036  // Set the lowest processor id we can on truly new nodes
2037  for (auto & elem : mesh.element_ptr_range())
2038  for (auto & node : elem->node_ref_range())
2039  if (node_was_new.was_new.count(&node))
2040  {
2041  processor_id_type & pid = node.processor_id();
2042  pid = std::min(pid, elem->processor_id());
2043  }
2044 
2045  // Then finally see if other processors have a lower option
2046  Parallel::sync_node_data_by_element_id
2047  (mesh, mesh.elements_begin(), mesh.elements_end(),
2048  ElemNodesMaybeNew(), node_was_new, sync);
2049 
2050  // We should have consistent processor ids when we're done.
2051 #ifdef DEBUG
2052  MeshTools::libmesh_assert_parallel_consistent_procids<Node>(mesh);
2053  MeshTools::libmesh_assert_parallel_consistent_new_node_procids(mesh);
2054 #endif
2055 }
This is the base class from which all geometric element types are derived.
Definition: elem.h:94
MeshBase & mesh
void libmesh_assert_parallel_consistent_new_node_procids(const MeshBase &mesh)
A function for verifying that processor assignment is parallel consistent (every processor agrees on ...
Definition: mesh_tools.C:2065
const Parallel::Communicator & comm() const
uint8_t processor_id_type
const Node & node_ref(const unsigned int i) const
Definition: elem.h:2529
static const processor_id_type invalid_processor_id
An invalid processor_id to distinguish DoFs that have not been assigned to a processor.
Definition: dof_object.h:493
void sync_node_data_by_element_id(MeshBase &mesh, const MeshBase::const_element_iterator &range_begin, const MeshBase::const_element_iterator &range_end, const ElemCheckFunctor &elem_check, const NodeCheckFunctor &node_check, SyncFunctor &sync)
Synchronize data about a range of ghost nodes uniquely identified by an element id and local node id...
void libmesh_assert_parallel_consistent_procids< Node >(const MeshBase &mesh)
Definition: mesh_tools.C:2103
SimpleRange< NodeRefIter > node_ref_range()
Returns a range with all nodes of an element, usable in range-based for loops.
Definition: elem.h:2665
bool sync_node_data_by_element_id_once(MeshBase &mesh, const MeshBase::const_element_iterator &range_begin, const MeshBase::const_element_iterator &range_end, const ElemCheckFunctor &elem_check, const NodeCheckFunctor &node_check, SyncFunctor &sync)
Synchronize data about a range of ghost nodes uniquely identified by an element id and local node id...
processor_id_type processor_id() const
Definition: dof_object.h:905

◆ make_new_nodes_parallel_consistent()

void libMesh::MeshCommunication::make_new_nodes_parallel_consistent ( MeshBase mesh)

Copy processor_ids and ids on new nodes from their local processors.

Definition at line 2105 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), libMesh::MeshTools::correct_node_proc_ids(), make_new_node_proc_ids_parallel_consistent(), make_node_bcids_parallel_consistent(), make_node_ids_parallel_consistent(), make_node_unique_ids_parallel_consistent(), and mesh.

Referenced by libMesh::MeshRefinement::_refine_elements().

2106 {
2107  // This function must be run on all processors at once
2108  libmesh_parallel_only(mesh.comm());
2109 
2110  // When this function is called, each section of a parallelized mesh
2111  // should be in the following state:
2112  //
2113  // All nodes should have the exact same physical location on every
2114  // processor where they exist.
2115  //
2116  // Local nodes should have unique authoritative ids,
2117  // and new nodes should be unpartitioned.
2118  //
2119  // New ghost nodes touching local elements should be unpartitioned.
2120  //
2121  // New ghost nodes should have ids which are either already correct
2122  // or which are in the "unpartitioned" id space.
2123  //
2124  // Non-new nodes should have correct ids and processor ids already.
2125 
2126  // First, let's sync up new nodes' processor ids.
2127 
2128  make_new_node_proc_ids_parallel_consistent(mesh);
2129 
2130  // Second, sync up dofobject ids.
2131  make_node_ids_parallel_consistent(mesh);
2132 
2133  // Third, sync up dofobject unique_ids if applicable.
2134  make_node_unique_ids_parallel_consistent(mesh);
2135 
2136  // Fourth, sync up any nodal boundary conditions
2137  make_node_bcids_parallel_consistent(mesh);
2138 
2139  // Finally, correct the processor ids to make DofMap happy
2140  MeshTools::correct_node_proc_ids(mesh);
2141 }
void make_node_unique_ids_parallel_consistent(MeshBase &)
Assuming all unique_ids on local nodes are globally unique, and assuming all processor ids are parall...
void correct_node_proc_ids(MeshBase &)
Changes the processor ids on each node so be the same as the id of the lowest element touching that n...
Definition: mesh_tools.C:2393
MeshBase & mesh
const Parallel::Communicator & comm() const
void make_node_bcids_parallel_consistent(MeshBase &)
Assuming all processor ids are parallel consistent, this function makes all ghost boundary ids on nod...
void make_new_node_proc_ids_parallel_consistent(MeshBase &)
Assuming all processor ids on nodes touching local elements are parallel consistent, this function makes processor ids on new nodes on other processors parallel consistent as well.
void make_node_ids_parallel_consistent(MeshBase &)
Assuming all ids on local nodes are globally unique, and assuming all processor ids are parallel cons...

◆ make_node_bcids_parallel_consistent()

void libMesh::MeshCommunication::make_node_bcids_parallel_consistent ( MeshBase mesh)

Assuming all processor ids are parallel consistent, this function makes all ghost boundary ids on nodes parallel consistent.

Definition at line 1780 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), libMesh::libmesh_ignore(), mesh, and libMesh::Parallel::sync_dofobject_data_by_id().

Referenced by make_new_nodes_parallel_consistent().

1781 {
1782  // Avoid unused variable warnings if unique ids aren't enabled.
1783  libmesh_ignore(mesh);
1784 
1785  // This function must be run on all processors at once
1786  libmesh_parallel_only(mesh.comm());
1787 
1788  LOG_SCOPE ("make_node_bcids_parallel_consistent()", "MeshCommunication");
1789 
1790  SyncBCIds<Node> syncbcids(mesh);
1791  Parallel::sync_dofobject_data_by_id (mesh.comm(),
1792  mesh.nodes_begin(),
1793  mesh.nodes_end(),
1794  syncbcids);
1795 }
MeshBase & mesh
const Parallel::Communicator & comm() const
void libmesh_ignore(const Args &...)
void sync_dofobject_data_by_id(const Communicator &comm, const Iterator &range_begin, const Iterator &range_end, SyncFunctor &sync)
Request data about a range of ghost dofobjects uniquely identified by their id.

◆ make_node_ids_parallel_consistent()

void libMesh::MeshCommunication::make_node_ids_parallel_consistent ( MeshBase mesh)

Assuming all ids on local nodes are globally unique, and assuming all processor ids are parallel consistent, this function makes all other ids parallel consistent.

Definition at line 1730 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), libMesh::MeshTools::libmesh_assert_topology_consistent_procids< Node >(), mesh, and libMesh::Parallel::sync_node_data_by_element_id().

Referenced by make_new_nodes_parallel_consistent(), and make_nodes_parallel_consistent().

1731 {
1732  // This function must be run on all processors at once
1733  libmesh_parallel_only(mesh.comm());
1734 
1735  // We need to agree on which processor owns every node, but we can't
1736  // easily assert that here because we don't currently agree on which
1737  // id every node has, and some of our temporary ids on unrelated
1738  // nodes will "overlap".
1739 //#ifdef DEBUG
1740 // MeshTools::libmesh_assert_parallel_consistent_procids<Node> (mesh);
1741 //#endif // DEBUG
1742 
1743  LOG_SCOPE ("make_node_ids_parallel_consistent()", "MeshCommunication");
1744 
1745  SyncNodeIds syncids(mesh);
1746  Parallel::sync_node_data_by_element_id
1747  (mesh, mesh.elements_begin(), mesh.elements_end(),
1748  Parallel::SyncEverything(), Parallel::SyncEverything(), syncids);
1749 
1750  // At this point, with both ids and processor ids synced, we can
1751  // finally check for topological consistency of node processor ids.
1752 #ifdef DEBUG
1753  MeshTools::libmesh_assert_topology_consistent_procids<Node>(mesh);
1754 #endif
1755 }
void libmesh_assert_topology_consistent_procids< Node >(const MeshBase &mesh)
Definition: mesh_tools.C:1483
MeshBase & mesh
const Parallel::Communicator & comm() const
void sync_node_data_by_element_id(MeshBase &mesh, const MeshBase::const_element_iterator &range_begin, const MeshBase::const_element_iterator &range_end, const ElemCheckFunctor &elem_check, const NodeCheckFunctor &node_check, SyncFunctor &sync)
Synchronize data about a range of ghost nodes uniquely identified by an element id and local node id...

◆ make_node_proc_ids_parallel_consistent()

void libMesh::MeshCommunication::make_node_proc_ids_parallel_consistent ( MeshBase mesh)

Assuming all processor ids on nodes touching local elements are parallel consistent, this function makes all other processor ids parallel consistent as well.

Definition at line 1953 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), mesh, and libMesh::Parallel::sync_node_data_by_element_id().

Referenced by make_nodes_parallel_consistent(), and libMesh::SimplexRefiner::refine_via_edges().

1954 {
1955  LOG_SCOPE ("make_node_proc_ids_parallel_consistent()", "MeshCommunication");
1956 
1957  // This function must be run on all processors at once
1958  libmesh_parallel_only(mesh.comm());
1959 
1960  // When this function is called, each section of a parallelized mesh
1961  // should be in the following state:
1962  //
1963  // All nodes should have the exact same physical location on every
1964  // processor where they exist.
1965  //
1966  // Local nodes should have unique authoritative ids,
1967  // and processor ids consistent with all processors which own
1968  // an element touching them.
1969  //
1970  // Ghost nodes touching local elements should have processor ids
1971  // consistent with all processors which own an element touching
1972  // them.
1973  SyncProcIds sync(mesh);
1974  Parallel::sync_node_data_by_element_id
1975  (mesh, mesh.elements_begin(), mesh.elements_end(),
1976  Parallel::SyncEverything(), Parallel::SyncEverything(), sync);
1977 }

◆ make_node_unique_ids_parallel_consistent()

void libMesh::MeshCommunication::make_node_unique_ids_parallel_consistent ( MeshBase &  mesh )

Assuming all unique_ids on local nodes are globally unique, and assuming all processor ids are parallel consistent, this function makes all ghost unique_ids parallel consistent.

Definition at line 1759 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), libMesh::libmesh_ignore(), mesh, libMesh::MeshBase::query_node_ptr(), and libMesh::Parallel::sync_dofobject_data_by_id().

Referenced by libMesh::BoundaryInfo::add_elements(), make_new_nodes_parallel_consistent(), make_nodes_parallel_consistent(), and libMesh::Nemesis_IO::read().

1760 {
1761  // Avoid unused variable warnings if unique ids aren't enabled.
1762  libmesh_ignore(mesh);
1763 
1764  // This function must be run on all processors at once
1765  libmesh_parallel_only(mesh.comm());
1766 
1767 #ifdef LIBMESH_ENABLE_UNIQUE_ID
1768  LOG_SCOPE ("make_node_unique_ids_parallel_consistent()", "MeshCommunication");
1769 
1770  SyncUniqueIds<Node> syncuniqueids(mesh, &MeshBase::query_node_ptr);
1771  Parallel::sync_dofobject_data_by_id(mesh.comm(),
1772  mesh.nodes_begin(),
1773  mesh.nodes_end(),
1774  syncuniqueids);
1775 
1776 #endif
1777 }

◆ make_nodes_parallel_consistent()

void libMesh::MeshCommunication::make_nodes_parallel_consistent ( MeshBase &  mesh )

Copy processor_ids and ids on ghost nodes from their local processors.

This is useful for code which wants to add nodes to a distributed mesh.

Definition at line 2060 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), libMesh::MeshTools::correct_node_proc_ids(), make_node_ids_parallel_consistent(), make_node_proc_ids_parallel_consistent(), make_node_unique_ids_parallel_consistent(), and mesh.

Referenced by libMesh::MeshRefinement::_coarsen_elements(), and libMesh::MeshTools::Modification::all_tri().

2061 {
2062  // This function must be run on all processors at once
2063  libmesh_parallel_only(mesh.comm());
2064 
2065  // When this function is called, each section of a parallelized mesh
2066  // should be in the following state:
2067  //
2068  // Element ids and locations should be consistent on every processor
2069  // where they exist.
2070  //
2071  // All nodes should have the exact same physical location on every
2072  // processor where they exist.
2073  //
2074  // Local nodes should have unique authoritative ids,
2075  // and processor ids consistent with all processors which own
2076  // an element touching them.
2077  //
2078  // Ghost nodes touching local elements should have processor ids
2079  // consistent with all processors which own an element touching
2080  // them.
2081  //
2082  // Ghost nodes should have ids which are either already correct
2083  // or which are in the "unpartitioned" id space.
2084 
2085  // First, let's sync up processor ids. Some of these processor ids
2086  // may be "wrong" from coarsening, but they're right in the sense
2087  // that they'll tell us who has the authoritative dofobject ids for
2088  // each node.
2089 
2090  make_node_proc_ids_parallel_consistent(mesh);
2091 
2092  // Second, sync up dofobject ids.
2093  make_node_ids_parallel_consistent(mesh);
2094 
2095  // Third, sync up dofobject unique_ids if applicable.
2096  make_node_unique_ids_parallel_consistent(mesh);
2097 
2098  // Finally, correct the processor ids to make DofMap happy
2099  MeshTools::correct_node_proc_ids(mesh);
2100 }

◆ make_p_levels_parallel_consistent()

void libMesh::MeshCommunication::make_p_levels_parallel_consistent ( MeshBase &  mesh )

Copy p levels of ghost elements from their local processors.

Definition at line 1826 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), mesh, and libMesh::Parallel::sync_dofobject_data_by_id().

Referenced by libMesh::MeshRefinement::_coarsen_elements(), and libMesh::MeshRefinement::_refine_elements().

1827 {
1828  // This function must be run on all processors at once
1829  libmesh_parallel_only(mesh.comm());
1830 
1831  LOG_SCOPE ("make_p_levels_parallel_consistent()", "MeshCommunication");
1832 
1833  SyncPLevels syncplevels(mesh);
1834  Parallel::sync_dofobject_data_by_id
1835  (mesh.comm(), mesh.elements_begin(), mesh.elements_end(),
1836  syncplevels);
1837 }

◆ redistribute()

void libMesh::MeshCommunication::redistribute ( DistributedMesh &  mesh,
bool  newly_coarsened_only = false 
) const

This method takes a parallel distributed mesh and redistributes the elements.

Specifically, any elements stored on a given processor are sent to the processor which "owns" them. Similarly, any elements assigned to the current processor but stored on another are received. Once this step is completed any required ghost elements are updated. The final result is that each processor stores only the elements it actually owns and any ghost elements required to satisfy data dependencies. This method can be invoked after a partitioning step to effect the new partitioning.

Redistribution can also be done with newly coarsened elements' neighbors only.

Definition at line 451 of file mesh_communication.C.

Referenced by libMesh::DistributedMesh::redistribute().

452 {
453  // no MPI == one processor, no redistribution
454  return;
455 }

◆ send_coarse_ghosts()

void libMesh::MeshCommunication::send_coarse_ghosts ( MeshBase &  mesh ) const

Examine a just-coarsened mesh, and for any newly-coarsened elements, send the associated ghosted elements to the processor which needs them.

Definition at line 1055 of file mesh_communication.C.

Referenced by libMesh::MeshRefinement::_coarsen_elements().

1056 {
1057  // no MPI == one processor, no need for this method...
1058  return;
1059 }

The documentation for this class was generated from the following files: