libMesh
libMesh::LinearPartitioner Class Reference

The LinearPartitioner simply takes the element list and splits it into equal-sized chunks assigned to each processor. More...

#include <linear_partitioner.h>

Inheritance diagram for libMesh::LinearPartitioner:

Public Member Functions

 LinearPartitioner ()=default
 Ctors, assignment operators, and destructor are all explicitly defaulted for this class. More...
 
 LinearPartitioner (const LinearPartitioner &)=default
 
 LinearPartitioner (LinearPartitioner &&)=default
 
LinearPartitioner & operator= (const LinearPartitioner &)=default
 
LinearPartitioner & operator= (LinearPartitioner &&)=default
 
virtual ~LinearPartitioner ()=default
 
virtual PartitionerType type () const override
 
virtual std::unique_ptr< Partitioner > clone () const override
 
virtual void partition_range (MeshBase &mesh, MeshBase::element_iterator it, MeshBase::element_iterator end, const unsigned int n) override
 Called by the SubdomainPartitioner to partition elements in the range (it, end). More...
 
virtual void partition (MeshBase &mesh, const unsigned int n)
 Partitions the MeshBase into n parts by setting processor_id() on Nodes and Elems. More...
 
virtual void partition (MeshBase &mesh)
 Partitions the MeshBase into mesh.n_processors() parts by setting processor_id() on Nodes and Elems. More...
 
void repartition (MeshBase &mesh, const unsigned int n)
 Repartitions the MeshBase into n parts. More...
 
void repartition (MeshBase &mesh)
 Repartitions the MeshBase into mesh.n_processors() parts. More...
 
virtual void attach_weights (ErrorVector *)
 Attach weights that can be used for partitioning. More...
 

Static Public Member Functions

static std::unique_ptr< Partitioner > build (const PartitionerType partitioner_type)
 Builds a Partitioner of the type specified by partitioner_type. More...
 
static void partition_unpartitioned_elements (MeshBase &mesh)
 These functions assign processor IDs to newly-created elements (in parallel) which are currently assigned to processor 0. More...
 
static void partition_unpartitioned_elements (MeshBase &mesh, const unsigned int n)
 
static void set_parent_processor_ids (MeshBase &mesh)
 This function is called after partitioning to set the processor IDs for the inactive parent elements. More...
 
static void set_node_processor_ids (MeshBase &mesh)
 This function is called after partitioning to set the processor IDs for the nodes. More...
 
static void processor_pairs_to_interface_nodes (MeshBase &mesh, std::map< std::pair< processor_id_type, processor_id_type >, std::set< dof_id_type >> &processor_pair_to_nodes)
 On the partitioning interface, a surface is shared by two and only two processors. More...
 
static void set_interface_node_processor_ids_linear (MeshBase &mesh)
 Nodes on the partitioning interface are linearly assigned within each pair of processors. More...
 
static void set_interface_node_processor_ids_BFS (MeshBase &mesh)
 Nodes on the partitioning interface are clustered into two groups using a BFS (Breadth First Search) scheme for each pair of processors. More...
 
static void set_interface_node_processor_ids_petscpartitioner (MeshBase &mesh)
 Nodes on the partitioning interface are partitioned into two groups using a PETSc partitioner for each pair of processors. More...
 

Protected Member Functions

virtual void _do_partition (MeshBase &mesh, const unsigned int n) override
 Partition the MeshBase into n subdomains. More...
 
bool single_partition (MeshBase &mesh)
 Trivially "partitions" the mesh for one processor. More...
 
bool single_partition_range (MeshBase::element_iterator it, MeshBase::element_iterator end)
 Slightly generalized version of single_partition which acts on a range of elements defined by the pair of iterators (it, end). More...
 
virtual void _do_repartition (MeshBase &mesh, const unsigned int n)
 This is the actual re-partitioning method which can be overridden in derived classes. More...
 
virtual void _find_global_index_by_pid_map (const MeshBase &mesh)
 Construct contiguous global indices for the current partitioning. More...
 
virtual void build_graph (const MeshBase &mesh)
 Build a dual graph for the partitioner. More...
 
void assign_partitioning (MeshBase &mesh, const std::vector< dof_id_type > &parts)
 Assign the computed partitioning to the mesh. More...
 

Protected Attributes

ErrorVector_weights
 The weights that might be used for partitioning. More...
 
std::unordered_map< dof_id_type, dof_id_type > _global_index_by_pid_map
 Maps active element ids into a contiguous range, as needed by the parallel partitioner. More...
 
std::vector< dof_id_type > _n_active_elem_on_proc
 The number of active elements on each processor. More...
 
std::vector< std::vector< dof_id_type > > _dual_graph
 A dual graph corresponding to the mesh, typically used by the partitioner. More...
 
std::vector< Elem * > _local_id_to_elem
 

Static Protected Attributes

static const dof_id_type communication_blocksize
 The blocksize to use when doing blocked parallel communication. More...
 

Detailed Description

The LinearPartitioner simply takes the element list and splits it into equal-sized chunks assigned to each processor.

Warning: the resulting domain decomposition can be arbitrarily bad in terms of edge-cut and other communication-based metrics!

Author
Benjamin S. Kirk
Date
2003

Partitions the elements based solely on their ids.

Definition at line 42 of file linear_partitioner.h.
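
A minimal usage sketch, not taken from the libMesh sources; the 10x10 build_square mesh and the choice of 4 parts are arbitrary:

#include "libmesh/libmesh.h"
#include "libmesh/mesh.h"
#include "libmesh/mesh_generation.h"
#include "libmesh/linear_partitioner.h"

using namespace libMesh;

int main (int argc, char ** argv)
{
  LibMeshInit init (argc, argv);

  // Build a small 2D mesh to partition.
  Mesh mesh (init.comm());
  MeshTools::Generation::build_square (mesh, 10, 10);

  // Split the active elements into 4 contiguous chunks by id.
  LinearPartitioner partitioner;
  partitioner.partition (mesh, 4);

  return 0;
}

Because the chunks simply follow element ids, no attempt is made to minimize the edge-cut mentioned in the warning above.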

Constructor & Destructor Documentation

◆ LinearPartitioner() [1/3]

libMesh::LinearPartitioner::LinearPartitioner ( )
default

Ctors, assignment operators, and destructor are all explicitly defaulted for this class.

◆ LinearPartitioner() [2/3]

libMesh::LinearPartitioner::LinearPartitioner ( const LinearPartitioner & )
default

◆ LinearPartitioner() [3/3]

libMesh::LinearPartitioner::LinearPartitioner ( LinearPartitioner &&  )
default

◆ ~LinearPartitioner()

virtual libMesh::LinearPartitioner::~LinearPartitioner ( )
virtual default

Member Function Documentation

◆ _do_partition()

void libMesh::LinearPartitioner::_do_partition ( MeshBase & mesh,
const unsigned int  n 
)
override protected virtual

Partition the MeshBase into n subdomains.

Implements libMesh::Partitioner.

Definition at line 115 of file linear_partitioner.C.

References mesh, and partition_range().

117 {
118  this->partition_range(mesh,
119  mesh.active_elements_begin(),
120  mesh.active_elements_end(),
121  n);
122 }

◆ _do_repartition()

virtual void libMesh::Partitioner::_do_repartition ( MeshBase & mesh,
const unsigned int  n 
)
inline protected virtual inherited

This is the actual re-partitioning method which can be overridden in derived classes.

Note
The default behavior is to simply call the partition function.

Reimplemented in libMesh::ParmetisPartitioner.

Definition at line 251 of file partitioner.h.

References libMesh::Partitioner::_do_partition().

Referenced by libMesh::Partitioner::repartition().

252  { this->_do_partition (mesh, n); }

◆ _find_global_index_by_pid_map()

void libMesh::Partitioner::_find_global_index_by_pid_map ( const MeshBase & mesh )
protected virtual inherited

Construct contiguous global indices for the current partitioning.

The global indices are ordered part-by-part

Definition at line 1067 of file partitioner.C.

References libMesh::Partitioner::_global_index_by_pid_map, libMesh::Partitioner::_n_active_elem_on_proc, TIMPI::Communicator::allgather(), libMesh::ParallelObject::comm(), libMesh::MeshTools::create_bounding_box(), libMesh::MeshCommunication::find_local_indices(), libMesh::make_range(), mesh, libMesh::MeshBase::n_active_local_elem(), libMesh::ParallelObject::n_processors(), and libMesh::Parallel::sync_dofobject_data_by_id().

Referenced by libMesh::Partitioner::build_graph().

1068 {
1069  const dof_id_type n_active_local_elem = mesh.n_active_local_elem();
1070 
1071  // Find the number of active elements on each processor. We cannot use
1072  // mesh.n_active_elem_on_proc(pid) since that only returns the number of
1073  // elements assigned to pid which are currently stored on the calling
1074  // processor. This will not in general be correct for parallel meshes
1075  // when (pid!=mesh.processor_id()).
1076  auto n_proc = mesh.n_processors();
1077  _n_active_elem_on_proc.resize(n_proc);
1078  mesh.comm().allgather(n_active_local_elem, _n_active_elem_on_proc);
1079 
1080  std::vector<dof_id_type> n_active_elem_before_proc(mesh.n_processors());
1081 
1082  for (auto i : make_range(n_proc-1))
1083  n_active_elem_before_proc[i+1] =
1084  n_active_elem_before_proc[i] + _n_active_elem_on_proc[i];
1085 
1067 {
1086  libMesh::BoundingBox bbox =
1087  MeshTools::create_bounding_box (mesh);
1088 
1089  _global_index_by_pid_map.clear();
1090 
1091  // create the mapping which is contiguous by processor
1092  MeshCommunication().find_local_indices (bbox,
1093  mesh.active_local_elements_begin(),
1094  mesh.active_local_elements_end(),
1095  _global_index_by_pid_map);
1096 
1097  SyncLocalIDs sync(_global_index_by_pid_map);
1098 
1099  Parallel::sync_dofobject_data_by_id
1100  (mesh.comm(), mesh.active_elements_begin(), mesh.active_elements_end(), sync);
1101 
1102  for (const auto & elem : mesh.active_element_ptr_range())
1103  {
1104  const processor_id_type pid = elem->processor_id();
1105  libmesh_assert_less (_global_index_by_pid_map[elem->id()], _n_active_elem_on_proc[pid]);
1106 
1107  _global_index_by_pid_map[elem->id()] += n_active_elem_before_proc[pid];
1108  }
1109 }
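
The part-by-part numbering above amounts to an exclusive prefix sum of the per-processor element counts, added to each element's processor-local index. A self-contained sketch of that arithmetic, with made-up counts standing in for _n_active_elem_on_proc:

#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

int main ()
{
  // Hypothetical per-processor active element counts, standing in for
  // _n_active_elem_on_proc.
  const std::vector<std::uint64_t> n_active_elem_on_proc {4, 3, 5};

  // Exclusive prefix sum: the first global index owned by each processor,
  // as in n_active_elem_before_proc above.
  std::vector<std::uint64_t> n_active_elem_before_proc (n_active_elem_on_proc.size(), 0);
  for (std::size_t i = 0; i + 1 < n_active_elem_on_proc.size(); ++i)
    n_active_elem_before_proc[i+1] =
      n_active_elem_before_proc[i] + n_active_elem_on_proc[i];

  // An element with local index 2 on processor 1 gets global index
  // 4 + 2 = 6: the numbering is contiguous part-by-part.
  const std::size_t pid = 1, local_index = 2;
  std::cout << n_active_elem_before_proc[pid] + local_index << '\n'; // prints 6
}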

◆ assign_partitioning()

void libMesh::Partitioner::assign_partitioning ( MeshBase & mesh,
const std::vector< dof_id_type > &  parts 
)
protected inherited

Assign the computed partitioning to the mesh.

Definition at line 1334 of file partitioner.C.

References libMesh::Partitioner::_global_index_by_pid_map, libMesh::Partitioner::_n_active_elem_on_proc, libMesh::ParallelObject::comm(), libMesh::libmesh_assert(), libMesh::make_range(), mesh, libMesh::MeshBase::n_active_local_elem(), libMesh::MeshBase::n_partitions(), libMesh::ParallelObject::n_processors(), and libMesh::ParallelObject::processor_id().

1335 {
1336  LOG_SCOPE("assign_partitioning()", "Partitioner");
1337 
1338  // This function must be run on all processors at once
1339  libmesh_parallel_only(mesh.comm());
1340 
1341  dof_id_type first_local_elem = 0;
1342  for (auto pid : make_range(mesh.processor_id()))
1343  first_local_elem += _n_active_elem_on_proc[pid];
1344 
1345 #ifndef NDEBUG
1346  const dof_id_type n_active_local_elem = mesh.n_active_local_elem();
1347 #endif
1348 
1349  std::map<processor_id_type, std::vector<dof_id_type>>
1350  requested_ids;
1351 
1352  // Results to gather from each processor - kept in a map so we
1353  // do only one loop over elements after all receives are done.
1354  std::map<processor_id_type, std::vector<processor_id_type>>
1355  filled_request;
1356 
1357  for (auto & elem : mesh.active_element_ptr_range())
1358  {
1359  // we need to get the index from the owning processor
1360  // (note we cannot assign it now -- we are iterating
1361  // over elements again and this will be bad!)
1362  requested_ids[elem->processor_id()].push_back(elem->id());
1363  }
1364 
1365  auto gather_functor =
1366  [this,
1367  & parts,
1368 #ifndef NDEBUG
1369  & mesh,
1370  n_active_local_elem,
1371 #endif
1372  first_local_elem]
1373  (processor_id_type, const std::vector<dof_id_type> & ids,
1374  std::vector<processor_id_type> & data)
1375  {
1376  const std::size_t ids_size = ids.size();
1377  data.resize(ids.size());
1378 
1379  for (std::size_t i=0; i != ids_size; i++)
1380  {
1381  const dof_id_type requested_elem_index = ids[i];
1382 
1383  libmesh_assert(_global_index_by_pid_map.count(requested_elem_index));
1384 
1385  const dof_id_type global_index_by_pid =
1386  _global_index_by_pid_map[requested_elem_index];
1387 
1388  const dof_id_type local_index =
1389  global_index_by_pid - first_local_elem;
1390 
1391  libmesh_assert_less (local_index, parts.size());
1392  libmesh_assert_less (local_index, n_active_local_elem);
1393 
1394  const processor_id_type elem_procid =
1395  cast_int<processor_id_type>(parts[local_index]);
1396 
1397  libmesh_assert_less (elem_procid, mesh.n_partitions());
1398 
1399  data[i] = elem_procid;
1400  }
1401  };
1402 
1403  auto action_functor =
1404  [&filled_request]
1405  (processor_id_type pid,
1406  const std::vector<dof_id_type> &,
1407  const std::vector<processor_id_type> & new_procids)
1408  {
1409  filled_request[pid] = new_procids;
1410  };
1411 
1412  // Trade requests with other processors
1413  const processor_id_type * ex = nullptr;
1414  Parallel::pull_parallel_vector_data
1415  (mesh.comm(), requested_ids, gather_functor, action_functor, ex);
1416 
1417  // and finally assign the partitioning.
1418  // note we are iterating in exactly the same order
1419  // used to build up the request, so we can expect the
1420  // required entries to be in the proper sequence.
1421  std::vector<unsigned int> counters(mesh.n_processors(), 0);
1422  for (auto & elem : mesh.active_element_ptr_range())
1423  {
1424  const processor_id_type current_pid = elem->processor_id();
1425 
1426  libmesh_assert_less (counters[current_pid], requested_ids[current_pid].size());
1427 
1428  const processor_id_type elem_procid =
1429  filled_request[current_pid][counters[current_pid]++];
1430 
1431  libmesh_assert_less (elem_procid, mesh.n_partitions());
1432  elem->processor_id() = elem_procid;
1433  }
1434 }

◆ attach_weights()

virtual void libMesh::Partitioner::attach_weights ( ErrorVector * )
inline virtual inherited

Attach weights that can be used for partitioning.

This ErrorVector should be exactly the same on every processor and should have mesh->max_elem_id() entries.

Reimplemented in libMesh::MetisPartitioner.

Definition at line 213 of file partitioner.h.

213 { libmesh_not_implemented(); }
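
A sketch of how a caller might attach weights, assuming a replicated mesh (so every processor builds the identical vector) and using MetisPartitioner, which overrides this method; the uniform weight of 1 is a stand-in for a real per-element cost:

#include "libmesh/error_vector.h"
#include "libmesh/mesh.h"
#include "libmesh/metis_partitioner.h"

using namespace libMesh;

void weighted_partition (Mesh & mesh)
{
  // One weight per possible element id, identical on every processor.
  ErrorVector weights (mesh.max_elem_id());
  for (const auto & elem : mesh.active_element_ptr_range())
    weights[elem->id()] = 1.; // stand-in for a real per-element cost

  MetisPartitioner partitioner;
  partitioner.attach_weights (&weights); // base class would abort here
  partitioner.partition (mesh, mesh.n_processors());
}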

◆ build()

std::unique_ptr< Partitioner > libMesh::Partitioner::build ( const PartitionerType  partitioner_type )
static inherited

Builds a Partitioner of the type specified by partitioner_type.

Definition at line 158 of file partitioner.C.

References libMesh::CENTROID_PARTITIONER, libMesh::Utility::enum_to_string(), libMesh::HILBERT_SFC_PARTITIONER, libMesh::LINEAR_PARTITIONER, libMesh::MAPPED_SUBDOMAIN_PARTITIONER, libMesh::METIS_PARTITIONER, libMesh::MORTON_SFC_PARTITIONER, libMesh::PARMETIS_PARTITIONER, libMesh::SFC_PARTITIONER, and libMesh::SUBDOMAIN_PARTITIONER.

Referenced by libMesh::DistributedMesh::DistributedMesh(), libMesh::ReplicatedMesh::ReplicatedMesh(), PartitionerTest< PartitionerSubclass, MeshClass >::testBuild(), and MeshDeletionsTest::testDeleteElem().

159 {
160  switch (partitioner_type)
161  {
162  case CENTROID_PARTITIONER:
163  return std::make_unique<CentroidPartitioner>();
164  case LINEAR_PARTITIONER:
165  return std::make_unique<LinearPartitioner>();
166  case MAPPED_SUBDOMAIN_PARTITIONER:
167  return std::make_unique<MappedSubdomainPartitioner>();
168  case METIS_PARTITIONER:
169  return std::make_unique<MetisPartitioner>();
170  case PARMETIS_PARTITIONER:
171  return std::make_unique<ParmetisPartitioner>();
172  case HILBERT_SFC_PARTITIONER:
173  return std::make_unique<HilbertSFCPartitioner>();
174  case MORTON_SFC_PARTITIONER:
175  return std::make_unique<MortonSFCPartitioner>();
176  case SFC_PARTITIONER:
177  return std::make_unique<SFCPartitioner>();
178  case SUBDOMAIN_PARTITIONER:
179  return std::make_unique<SubdomainPartitioner>();
180  default:
181  libmesh_error_msg("Invalid partitioner type: " <<
182  Utility::enum_to_string(partitioner_type));
183  }
184 }
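Obtaining a concrete partitioner is then a one-line factory call; a sketch (it assumes the PartitionerType enumeration is visible, e.g. through libmesh/enum_partitioner_type.h):

#include "libmesh/partitioner.h"
#include "libmesh/enum_partitioner_type.h" // assumed location of PartitionerType

using namespace libMesh;

std::unique_ptr<Partitioner> make_linear_partitioner ()
{
  // Dispatches on the enum; unknown values hit the libmesh_error_msg()
  // default branch in the switch above.
  return Partitioner::build (LINEAR_PARTITIONER);
}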

◆ build_graph()

void libMesh::Partitioner::build_graph ( const MeshBase & mesh )
protected virtual inherited

Build a dual graph for the partitioner.

Reimplemented in libMesh::ParmetisPartitioner.

Definition at line 1111 of file partitioner.C.

References libMesh::Partitioner::_dual_graph, libMesh::Partitioner::_find_global_index_by_pid_map(), libMesh::Partitioner::_global_index_by_pid_map, libMesh::Partitioner::_local_id_to_elem, libMesh::Partitioner::_n_active_elem_on_proc, libMesh::as_range(), libMesh::MeshBase::get_constraint_rows(), libMesh::DofObject::id(), libMesh::libmesh_assert(), libMesh::libmesh_ignore(), libMesh::make_range(), mesh, libMesh::MeshBase::n_active_local_elem(), and libMesh::ParallelObject::processor_id().

1112 {
1113  LOG_SCOPE("build_graph()", "Partitioner");
1114 
1115  const dof_id_type n_active_local_elem = mesh.n_active_local_elem();
1116 
1117  // If we have boundary elements in this mesh, we want to account for
1118  // the connectivity between them and interior elements. We can find
1119  // interior elements from boundary elements, but we need to build up
1120  // a lookup map to do the reverse.
1121  typedef std::unordered_multimap<const Elem *, const Elem *> map_type;
1122  map_type interior_to_boundary_map;
1123 
1124  // If we have spline nodes in this mesh, we want to account for the
1125  // connectivity between them and integration elements. We can find
1126  // spline nodes from integration elements, but need a reverse map
1127  // {integration_elements} = elems_constrained_by[spline_nodeelem]
1128  map_type elems_constrained_by;
1129 
1130  const auto & mesh_constrained_nodes = mesh.get_constraint_rows();
1131 
1132  for (const Elem * elem : mesh.active_element_ptr_range())
1133  {
1134  if (!mesh_constrained_nodes.empty()) // quick test for non-IGA cases
1135  {
1136  const auto end_it = mesh_constrained_nodes.end();
1137 
1138  // Use a set to avoid duplicates
1139  std::set<const Elem *> constraining_elems;
1140  for (const Node & node : elem->node_ref_range())
1141  {
1142  auto row_it = mesh_constrained_nodes.find(&node);
1143  if (row_it != end_it)
1144  for (auto [pr, coef] : row_it->second)
1145  {
1146  libmesh_ignore(coef); // avoid gcc 7 warning
1147  constraining_elems.insert(pr.first);
1148  }
1149  }
1150  for (const Elem * constraining_elem : constraining_elems)
1151  elems_constrained_by.emplace(constraining_elem, elem);
1152  }
1153 
1154  // If we don't have an interior_parent and we don't have any
1155  // constrained nodes then there's nothing else to look up.
1156  if (elem->interior_parent())
1157  {
1158  // get all relevant interior elements
1159  std::set<const Elem *> neighbor_set;
1160  elem->find_interior_neighbors(neighbor_set);
1161 
1162  for (const auto & neighbor : neighbor_set)
1163  interior_to_boundary_map.emplace(neighbor, elem);
1164  }
1165  }
1166 
1167 #ifdef LIBMESH_ENABLE_AMR
1168  std::vector<const Elem *> neighbors_offspring;
1169 #endif
1170 
1171  // This is costly, and we only need to do it if the mesh has
1172  // changed since we last partitioned... but the mesh probably has
1173  // changed since we last partitioned, and if it hasn't we don't
1174  // have a reliable way to be sure of that.
1175  this->_find_global_index_by_pid_map(mesh);
1176 
1177  dof_id_type first_local_elem = 0;
1178  for (auto pid : make_range(mesh.processor_id()))
1179  first_local_elem += _n_active_elem_on_proc[pid];
1180 
1181  _dual_graph.clear();
1182  _dual_graph.resize(n_active_local_elem);
1183  _local_id_to_elem.resize(n_active_local_elem);
1184 
1185  for (const auto & elem : mesh.active_local_element_ptr_range())
1186  {
1187  libmesh_assert (_global_index_by_pid_map.count(elem->id()));
1188  const dof_id_type global_index_by_pid =
1189  _global_index_by_pid_map[elem->id()];
1190 
1191  const dof_id_type local_index =
1192  global_index_by_pid - first_local_elem;
1193  libmesh_assert_less (local_index, n_active_local_elem);
1194 
1195  std::vector<dof_id_type> & graph_row = _dual_graph[local_index];
1196 
1197  // Save this off to make it easy to index later
1198  _local_id_to_elem[local_index] = const_cast<Elem*>(elem);
1199 
1200  // Loop over the element's neighbors. An element
1201  // adjacency corresponds to a face neighbor
1202  for (auto neighbor : elem->neighbor_ptr_range())
1203  {
1204  if (neighbor != nullptr)
1205  {
1206  // If the neighbor is active treat it
1207  // as a connection
1208  if (neighbor->active())
1209  {
1210  libmesh_assert(_global_index_by_pid_map.count(neighbor->id()));
1211  const dof_id_type neighbor_global_index_by_pid =
1212  _global_index_by_pid_map[neighbor->id()];
1213 
1214  graph_row.push_back(neighbor_global_index_by_pid);
1215  }
1216 
1217 #ifdef LIBMESH_ENABLE_AMR
1218 
1219  // Otherwise we need to find all of the
1220  // neighbor's children that are connected to
1221  // us and add them
1222  else
1223  {
1224  // The side of the neighbor to which
1225  // we are connected
1226  const unsigned int ns =
1227  neighbor->which_neighbor_am_i (elem);
1228  libmesh_assert_less (ns, neighbor->n_neighbors());
1229 
1230  // Get all the active children (& grandchildren, etc...)
1231  // of the neighbor
1232 
1233  // FIXME - this is the wrong thing, since we
1234  // should be getting the active family tree on
1235  // our side only. But adding too many graph
1236  // links may cause hanging nodes to tend to be
1237  // on partition interiors, which would reduce
1238  // communication overhead for constraint
1239  // equations, so we'll leave it.
1240 
1241  neighbor->active_family_tree (neighbors_offspring);
1242 
1243  // Get all the neighbor's children that
1244  // live on that side and are thus connected
1245  // to us
1246  for (const auto & child : neighbors_offspring)
1247  {
1248  // This does not assume a level-1 mesh.
1249  // Note that since children have sides numbered
1250  // coincident with the parent then this is a sufficient test.
1251  if (child->neighbor_ptr(ns) == elem)
1252  {
1253  libmesh_assert (child->active());
1254  libmesh_assert (_global_index_by_pid_map.count(child->id()));
1255  const dof_id_type child_global_index_by_pid =
1256  _global_index_by_pid_map[child->id()];
1257 
1258  graph_row.push_back(child_global_index_by_pid);
1259  }
1260  }
1261  }
1262 
1263 #endif /* ifdef LIBMESH_ENABLE_AMR */
1264 
1265 
1266  }
1267  }
1268 
1269  if ((elem->dim() < LIBMESH_DIM) &&
1270  elem->interior_parent())
1271  {
1272  // get all relevant interior elements
1273  std::set<const Elem *> neighbor_set;
1274  elem->find_interior_neighbors(neighbor_set);
1275 
1276  for (const auto & neighbor : neighbor_set)
1277  {
1278  const dof_id_type neighbor_global_index_by_pid =
1279  _global_index_by_pid_map[neighbor->id()];
1280 
1281  graph_row.push_back(neighbor_global_index_by_pid);
1282  }
1283  }
1284 
1285  // Check for any boundary neighbors
1286  for (const auto & pr : as_range(interior_to_boundary_map.equal_range(elem)))
1287  {
1288  const Elem * neighbor = pr.second;
1289 
1290  const dof_id_type neighbor_global_index_by_pid =
1291  _global_index_by_pid_map[neighbor->id()];
1292 
1293  graph_row.push_back(neighbor_global_index_by_pid);
1294  }
1295 
1296  // Check for any constraining elements
1297  if (!mesh_constrained_nodes.empty()) // quick test for non-IGA cases
1298  {
1299  const auto end_it = mesh_constrained_nodes.end();
1300 
1301  // Use a set to avoid duplicates
1302  std::set<const Elem *> constraining_elems;
1303  for (const Node & node : elem->node_ref_range())
1304  {
1305  auto row_it = mesh_constrained_nodes.find(&node);
1306  if (row_it != end_it)
1307  for (auto [pr, coef] : row_it->second)
1308  {
1309  libmesh_ignore(coef); // avoid gcc 7 warning
1310  constraining_elems.insert(pr.first);
1311  }
1312  }
1313  for (const Elem * constraining_elem : constraining_elems)
1314  {
1315  const dof_id_type constraining_global_index_by_pid =
1316  _global_index_by_pid_map[constraining_elem->id()];
1317 
1318  graph_row.push_back(constraining_global_index_by_pid);
1319  }
1320  }
1321 
1322  // Check for any constrained elements
1323  for (const auto & pr : as_range(elems_constrained_by.equal_range(elem)))
1324  {
1325  const Elem * constrained = pr.second;
1326  const dof_id_type constrained_global_index_by_pid =
1327  _global_index_by_pid_map[constrained->id()];
1328 
1329  graph_row.push_back(constrained_global_index_by_pid);
1330  }
1331  }
1332 }
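
The resulting structure is plain compressed adjacency: one row of global neighbor indices per local active element. A standalone sketch for a chain of four elements (each face neighbor contributes one entry; refinement, interior parents and constraints add further entries as in the listing):

#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

int main ()
{
  // Dual graph of a 1D chain of 4 elements: 0-1-2-3.
  // Row i lists the global indices of the elements sharing a face with
  // element i, mirroring _dual_graph above.
  const std::vector<std::vector<std::uint64_t>> dual_graph
    {{1}, {0, 2}, {1, 3}, {2}};

  for (std::size_t i = 0; i < dual_graph.size(); ++i)
    {
      std::cout << i << ':';
      for (auto j : dual_graph[i])
        std::cout << ' ' << j;
      std::cout << '\n';
    }
}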

◆ clone()

virtual std::unique_ptr<Partitioner> libMesh::LinearPartitioner::clone ( ) const
inline override virtual
Returns
A copy of this partitioner wrapped in a smart pointer.

Implements libMesh::Partitioner.

Definition at line 62 of file linear_partitioner.h.

63  {
64  return std::make_unique<LinearPartitioner>(*this);
65  }

◆ operator=() [1/2]

LinearPartitioner & libMesh::LinearPartitioner::operator= ( const LinearPartitioner & )
default

◆ operator=() [2/2]

LinearPartitioner & libMesh::LinearPartitioner::operator= ( LinearPartitioner && )
default

◆ partition() [1/2]

void libMesh::Partitioner::partition ( MeshBase & mesh,
const unsigned int  n 
)
virtual inherited

Partitions the MeshBase into n parts by setting processor_id() on Nodes and Elems.

Note
If you are implementing a new type of Partitioner, you most likely do not want to override the partition() function; see instead the protected virtual _do_partition() method below. The partition() function is responsible for doing a lot of libmesh-internals-specific setup and finalization before and after the _do_partition() function is called. The only responsibility of the _do_partition() function, on the other hand, is to set the processor IDs of the elements according to a specific partitioning algorithm. See, e.g., MetisPartitioner for an example.

Definition at line 195 of file partitioner.C.

References libMesh::Partitioner::_do_partition(), libMesh::ParallelObject::comm(), libMesh::MeshTools::libmesh_assert_valid_remote_elems(), mesh, libMesh::MeshBase::n_active_elem(), libMesh::Partitioner::partition_unpartitioned_elements(), libMesh::MeshBase::redistribute(), libMesh::MeshBase::set_n_partitions(), libMesh::Partitioner::set_node_processor_ids(), libMesh::Partitioner::set_parent_processor_ids(), libMesh::Partitioner::single_partition(), and libMesh::MeshBase::update_post_partitioning().

Referenced by libMesh::ParmetisPartitioner::_do_repartition(), and libMesh::Partitioner::partition().

197 {
198  libmesh_parallel_only(mesh.comm());
199 
200  // BSK - temporary fix while redistribution is integrated 6/26/2008
201  // Uncomment this to not repartition in parallel
202  // if (!mesh.is_serial())
203  // return;
204 
205  // we cannot partition into more pieces than we have
206  // active elements!
207  const unsigned int n_parts =
208  static_cast<unsigned int>
209  (std::min(mesh.n_active_elem(), static_cast<dof_id_type>(n)));
210 
211  // Set the number of partitions in the mesh
212  mesh.set_n_partitions()=n_parts;
213 
214  if (n_parts == 1)
215  {
216  this->single_partition (mesh);
217  return;
218  }
219 
220  // First assign a temporary partitioning to any unpartitioned elements
221  Partitioner::partition_unpartitioned_elements(mesh, n_parts);
222 
223  // Call the partitioning function
224  this->_do_partition(mesh,n_parts);
225 
226  // Set the parent's processor ids
227  Partitioner::set_parent_processor_ids(mesh);
228 
229  // Redistribute elements if necessary, before setting node processor
230  // ids, to make sure those will be set consistently
231  mesh.redistribute();
232 
233 #ifdef DEBUG
234  MeshTools::libmesh_assert_valid_remote_elems(mesh);
235 
236  // Messed up elem processor_id()s can leave us without the child
237  // elements we need to restrict vectors on a distributed mesh
238  MeshTools::libmesh_assert_valid_procids<Elem>(mesh);
239 #endif
240 
241  // Set the node's processor ids
242  Partitioner::set_node_processor_ids(mesh);
243 
244 #ifdef DEBUG
245  MeshTools::libmesh_assert_valid_procids<Elem>(mesh);
246 #endif
247 
248  // Give derived Mesh classes a chance to update any cached data to
249  // reflect the new partitioning
250  mesh.update_post_partitioning();
251 }
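
Following that note, a new partitioner only needs to override the pure virtual members; a hypothetical sketch (RoundRobinPartitioner is invented for illustration, and the type() override and any error handling are omitted):

#include "libmesh/partitioner.h"
#include "libmesh/mesh_base.h"
#include "libmesh/elem.h"

using namespace libMesh;

// Hypothetical example class, not part of libMesh.
class RoundRobinPartitioner : public Partitioner
{
public:
  virtual std::unique_ptr<Partitioner> clone () const override
  { return std::make_unique<RoundRobinPartitioner>(*this); }

protected:
  // partition() performs all setup and finalization; _do_partition()
  // only has to set element processor ids.
  virtual void _do_partition (MeshBase & mesh, const unsigned int n) override
  {
    for (auto & elem : mesh.active_element_ptr_range())
      elem->processor_id() = cast_int<processor_id_type>(elem->id() % n);
  }
};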

◆ partition() [2/2]

void libMesh::Partitioner::partition ( MeshBase & mesh )
virtual inherited

Partitions the MeshBase into mesh.n_processors() parts by setting processor_id() on Nodes and Elems.

Note
If you are implementing a new type of Partitioner, you most likely do not want to override the partition() function; see instead the protected virtual _do_partition() method below. The partition() function is responsible for doing a lot of libmesh-internals-specific setup and finalization before and after the _do_partition() function is called. The only responsibility of the _do_partition() function, on the other hand, is to set the processor IDs of the elements according to a specific partitioning algorithm. See, e.g., MetisPartitioner for an example.

Definition at line 188 of file partitioner.C.

References mesh, libMesh::ParallelObject::n_processors(), and libMesh::Partitioner::partition().

189 {
190  this->partition(mesh,mesh.n_processors());
191 }

◆ partition_range()

void libMesh::LinearPartitioner::partition_range ( MeshBase & mesh,
MeshBase::element_iterator  it,
MeshBase::element_iterator  end,
const unsigned int  n 
)
override virtual

Called by the SubdomainPartitioner to partition elements in the range (it, end).

Reimplemented from libMesh::Partitioner.

Definition at line 36 of file linear_partitioner.C.

References libMesh::as_range(), distance(), mesh, libMesh::DofObject::processor_id(), and libMesh::Partitioner::single_partition_range().

Referenced by _do_partition(), and libMesh::SFCPartitioner::partition_range().

40 {
41  const bool mesh_is_serial = mesh.is_serial();
42 
43  // Check for easy returns
44  if (it == end && mesh_is_serial)
45  return;
46 
47  if (n == 1)
48  {
49  this->single_partition_range (it, end);
50  return;
51  }
52 
53  libmesh_assert_greater (n, 0);
54 
55  // Create a simple linear partitioning
56  LOG_SCOPE ("partition_range()", "LinearPartitioner");
57 
58  // This has to be an ordered set
59  std::set<dof_id_type> element_ids;
60 
61  // If we're on a serialized mesh, we know our range is the same on
62  // every processor.
63  if (mesh_is_serial)
64  {
65  const dof_id_type blksize = cast_int<dof_id_type>
66  (std::distance(it, end) / n);
67 
68  dof_id_type e = 0;
69  for (auto & elem : as_range(it, end))
70  {
71  if ((e/blksize) < n)
72  elem->processor_id() = cast_int<processor_id_type>(e/blksize);
73  else
74  elem->processor_id() = 0;
75 
76  e++;
77  }
78  }
79  // If we're on a replicated mesh, we might have different ranges on
80  // different processors, and we'll need to gather the full range.
81  //
82  // This is not an efficient way to do this, but if you want to be
83  // efficient then you want to be using a different partitioner to
84  // begin with; LinearPartitioner is more for debugging than
85  // performance.
86  else
87  {
88  for (const auto & elem : as_range(it, end))
89  element_ids.insert(elem->id());
90 
91  mesh.comm().set_union(element_ids);
92 
93  const dof_id_type blksize = cast_int<dof_id_type>
94  (element_ids.size() / n);
95 
96  dof_id_type e = 0;
97  for (auto eid : element_ids)
98  {
99  Elem * elem = mesh.query_elem_ptr(eid);
100  if (elem)
101  {
102  if ((e/blksize) < n)
103  elem->processor_id() = cast_int<processor_id_type>(e/blksize);
104  else
105  elem->processor_id() = 0;
106  }
107 
108  e++;
109  }
110  }
111 }
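
In the serial branch above, the assignment reduces to integer arithmetic: ids fall into blocks of blksize, and the remainder of an uneven division lands on processor 0. A standalone sketch of that rule:

#include <cstdint>
#include <iostream>

int main ()
{
  // 10 elements split linearly over n = 3 processors: blksize = 10/3 = 3.
  const unsigned int n = 3;
  const std::uint64_t n_elem = 10;
  const std::uint64_t blksize = n_elem / n;

  for (std::uint64_t e = 0; e < n_elem; ++e)
    {
      // Chunks of blksize consecutive ids per processor; the remainder
      // (here the final element) falls back to processor 0, as in the
      // listing above.
      const std::uint64_t pid = (e/blksize < n) ? e/blksize : 0;
      std::cout << "element " << e << " -> processor " << pid << '\n';
    }
}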

◆ partition_unpartitioned_elements() [1/2]

void libMesh::Partitioner::partition_unpartitioned_elements ( MeshBase & mesh )
static inherited

These functions assign processor IDs to newly-created elements (in parallel) which are currently assigned to processor 0.

Definition at line 344 of file partitioner.C.

References mesh, and libMesh::ParallelObject::n_processors().

Referenced by libMesh::Partitioner::partition(), and libMesh::Partitioner::repartition().

345 {
346  partition_unpartitioned_elements(mesh, mesh.n_processors());
347 }

◆ partition_unpartitioned_elements() [2/2]

void libMesh::Partitioner::partition_unpartitioned_elements ( MeshBase & mesh,
const unsigned int  n_subdomains 
)
static inherited

Definition at line 351 of file partitioner.C.

References libMesh::as_range(), libMesh::ParallelObject::comm(), libMesh::MeshTools::create_bounding_box(), distance(), libMesh::MeshCommunication::find_global_indices(), libMesh::make_range(), mesh, libMesh::MeshTools::n_elem(), and libMesh::ParallelObject::n_processors().

353 {
354  MeshBase::element_iterator it = mesh.unpartitioned_elements_begin();
355  const MeshBase::element_iterator end = mesh.unpartitioned_elements_end();
356 
357  const dof_id_type n_unpartitioned_elements = MeshTools::n_elem (it, end);
358 
359  // the unpartitioned elements must exist on all processors. If the range is empty on one
360  // it is empty on all, and we can quit right here.
361  if (!n_unpartitioned_elements)
362  return;
363 
364  // find the target subdomain sizes
365  std::vector<dof_id_type> subdomain_bounds(mesh.n_processors());
366 
367  for (auto pid : make_range(mesh.n_processors()))
368  {
369  dof_id_type tgt_subdomain_size = 0;
370 
371  // watch out for the case that n_subdomains < n_processors
372  if (pid < n_subdomains)
373  {
374  tgt_subdomain_size = n_unpartitioned_elements/n_subdomains;
375 
376  if (pid < n_unpartitioned_elements%n_subdomains)
377  tgt_subdomain_size++;
378 
379  }
380 
381  //libMesh::out << "pid, #= " << pid << ", " << tgt_subdomain_size << std::endl;
382  if (pid == 0)
383  subdomain_bounds[0] = tgt_subdomain_size;
384  else
385  subdomain_bounds[pid] = subdomain_bounds[pid-1] + tgt_subdomain_size;
386  }
387 
388  libmesh_assert_equal_to (subdomain_bounds.back(), n_unpartitioned_elements);
389 
390  // create the unique mapping for all unpartitioned elements independent of partitioning
391  // determine the global indexing for all the unpartitioned elements
392  std::vector<dof_id_type> global_indices;
393 
394  // Calling this on all processors a unique range in [0,n_unpartitioned_elements) is constructed.
395  // Only the indices for the elements we pass in are returned in the array.
396  MeshCommunication().find_global_indices (mesh.comm(),
397  MeshTools::create_bounding_box(mesh), it, end,
398  global_indices);
399 
400  dof_id_type cnt=0;
401  for (auto & elem : as_range(it, end))
402  {
403  libmesh_assert_less (cnt, global_indices.size());
404  const dof_id_type global_index =
405  global_indices[cnt++];
406 
407  libmesh_assert_less (global_index, subdomain_bounds.back());
408  libmesh_assert_less (global_index, n_unpartitioned_elements);
409 
410  const processor_id_type subdomain_id =
411  cast_int<processor_id_type>
412  (std::distance(subdomain_bounds.begin(),
413  std::upper_bound(subdomain_bounds.begin(),
414  subdomain_bounds.end(),
415  global_index)));
416  libmesh_assert_less (subdomain_id, n_subdomains);
417 
418  elem->processor_id() = subdomain_id;
419  //libMesh::out << "assigning " << global_index << " to " << subdomain_id << std::endl;
420  }
421 }
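
The core of this routine is the cumulative subdomain_bounds vector combined with a std::upper_bound lookup; a standalone sketch with made-up sizes (4, 3 and 3 elements over three subdomains):

#include <algorithm>
#include <cstdint>
#include <iostream>
#include <iterator>
#include <vector>

int main ()
{
  // Cumulative subdomain bounds for 10 unpartitioned elements over 3
  // subdomains (sizes 4, 3, 3), mirroring subdomain_bounds above.
  const std::vector<std::uint64_t> subdomain_bounds {4, 7, 10};

  // A global index maps to the first subdomain whose cumulative bound
  // exceeds it.
  for (std::uint64_t global_index : {0, 3, 4, 9})
    {
      const auto subdomain_id =
        std::distance(subdomain_bounds.begin(),
                      std::upper_bound(subdomain_bounds.begin(),
                                       subdomain_bounds.end(),
                                       global_index));
      std::cout << global_index << " -> " << subdomain_id << '\n';
    }
}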

◆ processor_pairs_to_interface_nodes()

void libMesh::Partitioner::processor_pairs_to_interface_nodes ( MeshBase & mesh,
std::map< std::pair< processor_id_type, processor_id_type >, std::set< dof_id_type >> &  processor_pair_to_nodes 
)
static inherited

On the partitioning interface, a surface is shared by two and only two processors.

Try to find which pair of processors corresponds to which surfaces, and store their nodes.

Definition at line 578 of file partitioner.C.

References libMesh::ParallelObject::comm(), libMesh::DofObject::invalid_processor_id, libMesh::libmesh_assert(), mesh, and n_nodes.

Referenced by libMesh::Partitioner::set_interface_node_processor_ids_BFS(), libMesh::Partitioner::set_interface_node_processor_ids_linear(), and libMesh::Partitioner::set_interface_node_processor_ids_petscpartitioner().

580 {
581  // This function must be run on all processors at once
582  libmesh_parallel_only(mesh.comm());
583 
584  processor_pair_to_nodes.clear();
585 
586  std::set<dof_id_type> mynodes;
587  std::set<dof_id_type> neighbor_nodes;
588  std::vector<dof_id_type> common_nodes;
589 
590  // Loop over all the active elements
591  for (auto & elem : mesh.active_element_ptr_range())
592  {
593  libmesh_assert(elem);
594 
595  libmesh_assert_not_equal_to (elem->processor_id(), DofObject::invalid_processor_id);
596 
597  auto n_nodes = elem->n_nodes();
598 
599  // prepare data for this element
600  mynodes.clear();
601  neighbor_nodes.clear();
602  common_nodes.clear();
603 
604  for (unsigned int inode = 0; inode < n_nodes; inode++)
605  mynodes.insert(elem->node_id(inode));
606 
607  for (auto i : elem->side_index_range())
608  {
609  auto neigh = elem->neighbor_ptr(i);
610  if (neigh && !neigh->is_remote() && neigh->processor_id() != elem->processor_id())
611  {
612  neighbor_nodes.clear();
613  common_nodes.clear();
614  auto neigh_n_nodes = neigh->n_nodes();
615  for (unsigned int inode = 0; inode < neigh_n_nodes; inode++)
616  neighbor_nodes.insert(neigh->node_id(inode));
617 
618  std::set_intersection(mynodes.begin(), mynodes.end(),
619  neighbor_nodes.begin(), neighbor_nodes.end(),
620  std::back_inserter(common_nodes));
621 
622  auto & map_set = processor_pair_to_nodes[std::make_pair(std::min(elem->processor_id(), neigh->processor_id()),
623  std::max(elem->processor_id(), neigh->processor_id()))];
624  for (auto global_node_id : common_nodes)
625  map_set.insert(global_node_id);
626  }
627  }
628  }
629 }
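
Per element, the interface detection boils down to intersecting two sorted node-id sets; a standalone sketch with invented ids for two face neighbors:

#include <algorithm>
#include <cstdint>
#include <iostream>
#include <iterator>
#include <set>
#include <vector>

int main ()
{
  // Node ids of two face-neighboring elements owned by different
  // processors; their shared face is the intersection of the node sets.
  const std::set<std::uint64_t> mynodes {0, 1, 4, 5};
  const std::set<std::uint64_t> neighbor_nodes {1, 2, 5, 6};

  std::vector<std::uint64_t> common_nodes;
  std::set_intersection(mynodes.begin(), mynodes.end(),
                        neighbor_nodes.begin(), neighbor_nodes.end(),
                        std::back_inserter(common_nodes));

  // These are the interface nodes recorded for the (min_pid, max_pid) pair.
  for (auto id : common_nodes)
    std::cout << id << '\n'; // prints 1 and 5
}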

◆ repartition() [1/2]

void libMesh::Partitioner::repartition ( MeshBase & mesh,
const unsigned int  n 
)
inherited

Repartitions the MeshBase into n parts.

(Some partitioning algorithms can repartition more efficiently than computing a new partitioning from scratch.) The default behavior is to simply call this->partition(mesh,n).

Definition at line 262 of file partitioner.C.

References libMesh::Partitioner::_do_repartition(), mesh, libMesh::MeshBase::n_active_elem(), libMesh::Partitioner::partition_unpartitioned_elements(), libMesh::MeshBase::set_n_partitions(), libMesh::Partitioner::set_node_processor_ids(), libMesh::Partitioner::set_parent_processor_ids(), and libMesh::Partitioner::single_partition().

Referenced by libMesh::Partitioner::repartition().

264 {
265  // we cannot partition into more pieces than we have
266  // active elements!
267  const unsigned int n_parts =
268  static_cast<unsigned int>
269  (std::min(mesh.n_active_elem(), static_cast<dof_id_type>(n)));
270 
271  // Set the number of partitions in the mesh
272  mesh.set_n_partitions()=n_parts;
273 
274  if (n_parts == 1)
275  {
276  this->single_partition (mesh);
277  return;
278  }
279 
280  // First assign a temporary partitioning to any unpartitioned elements
281  Partitioner::partition_unpartitioned_elements(mesh, n_parts);
282 
283  // Call the partitioning function
284  this->_do_repartition(mesh,n_parts);
285 
286  // Set the parent's processor ids
287  Partitioner::set_parent_processor_ids(mesh);
288 
289  // Set the node's processor ids
290  Partitioner::set_node_processor_ids(mesh);
291 }

◆ repartition() [2/2]

void libMesh::Partitioner::repartition ( MeshBase & mesh )
inherited

Repartitions the MeshBase into mesh.n_processors() parts.

This is required since some partitioning algorithms can repartition more efficiently than computing a new partitioning from scratch.

Definition at line 255 of file partitioner.C.

References mesh, libMesh::ParallelObject::n_processors(), and libMesh::Partitioner::repartition().

256 {
257  this->repartition(mesh,mesh.n_processors());
258 }

◆ set_interface_node_processor_ids_BFS()

void libMesh::Partitioner::set_interface_node_processor_ids_BFS ( MeshBase & mesh )
static inherited

Nodes on the partitioning interface are clustered into two groups using a BFS (Breadth First Search) scheme for each pair of processors.

Definition at line 656 of file partitioner.C.

References libMesh::MeshTools::build_nodes_to_elem_map(), libMesh::ParallelObject::comm(), libMesh::MeshTools::find_nodal_neighbors(), mesh, libMesh::MeshBase::node_ref(), libMesh::DofObject::processor_id(), and libMesh::Partitioner::processor_pairs_to_interface_nodes().

Referenced by libMesh::Partitioner::set_node_processor_ids().

657 {
658  // This function must be run on all processors at once
659  libmesh_parallel_only(mesh.comm());
660 
661  // I see occasional consistency failures when using this on a
662  // distributed mesh
663  libmesh_experimental();
664 
665  std::map<std::pair<processor_id_type, processor_id_type>, std::set<dof_id_type>> processor_pair_to_nodes;
666 
667  processor_pairs_to_interface_nodes(mesh, processor_pair_to_nodes);
668 
669  std::unordered_map<dof_id_type, std::vector<const Elem *>> nodes_to_elem_map;
670 
671  MeshTools::build_nodes_to_elem_map(mesh, nodes_to_elem_map);
672 
673  std::vector<const Node *> neighbors;
674  std::set<dof_id_type> neighbors_order;
675  std::vector<dof_id_type> common_nodes;
676  std::queue<dof_id_type> nodes_queue;
677  std::set<dof_id_type> visted_nodes;
678 
679  for (auto & pmap : processor_pair_to_nodes)
680  {
681  std::size_t n_own_nodes = pmap.second.size()/2;
682 
683  // Initialize node assignment
684  for (dof_id_type id : pmap.second)
685  mesh.node_ref(id).processor_id() = pmap.first.second;
686 
687  visted_nodes.clear();
688  for (dof_id_type id : pmap.second)
689  {
690  mesh.node_ref(id).processor_id() = pmap.first.second;
691 
692  if (visted_nodes.find(id) != visted_nodes.end())
693  continue;
694  else
695  {
696  nodes_queue.push(id);
697  visted_nodes.insert(id);
698  if (visted_nodes.size() >= n_own_nodes)
699  break;
700  }
701 
702  while (!nodes_queue.empty())
703  {
704  auto & node = mesh.node_ref(nodes_queue.front());
705  nodes_queue.pop();
706 
707  neighbors.clear();
708  MeshTools::find_nodal_neighbors(mesh, node, nodes_to_elem_map, neighbors);
709  neighbors_order.clear();
710  for (auto & neighbor : neighbors)
711  neighbors_order.insert(neighbor->id());
712 
713  common_nodes.clear();
714  std::set_intersection(pmap.second.begin(), pmap.second.end(),
715  neighbors_order.begin(), neighbors_order.end(),
716  std::back_inserter(common_nodes));
717 
718  for (auto c_node : common_nodes)
719  if (visted_nodes.find(c_node) == visted_nodes.end())
720  {
721  nodes_queue.push(c_node);
722  visted_nodes.insert(c_node);
723  if (visted_nodes.size() >= n_own_nodes)
724  goto queue_done;
725  }
726 
727  if (visted_nodes.size() >= n_own_nodes)
728  goto queue_done;
729  }
730  }
731  queue_done:
732  for (auto node : visted_nodes)
733  mesh.node_ref(node).processor_id() = pmap.first.first;
734  }
735 }
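
The growth step above is an ordinary breadth-first search that stops once half the shared nodes are visited; a standalone sketch with an invented interface adjacency:

#include <cstddef>
#include <cstdint>
#include <iostream>
#include <map>
#include <queue>
#include <set>
#include <vector>

int main ()
{
  // Interface nodes shared by one processor pair, with a hypothetical
  // node-neighbor adjacency restricted to that interface.
  const std::set<std::uint64_t> shared_nodes {0, 1, 2, 3, 4, 5};
  const std::map<std::uint64_t, std::vector<std::uint64_t>> neighbors
    {{0, {1}}, {1, {0, 2}}, {2, {1, 3}}, {3, {2, 4}}, {4, {3, 5}}, {5, {4}}};

  // Grow a connected half of the set by BFS, as the listing above does;
  // the visited half goes to the lower-ranked processor, the remainder
  // stays with the higher-ranked one.
  const std::size_t n_own_nodes = shared_nodes.size()/2;
  std::set<std::uint64_t> visited {*shared_nodes.begin()};
  std::queue<std::uint64_t> q;
  q.push(*shared_nodes.begin());

  while (!q.empty() && visited.size() < n_own_nodes)
    {
      const auto node = q.front();
      q.pop();
      for (auto n : neighbors.at(node))
        if (shared_nodes.count(n) && visited.insert(n).second)
          {
            q.push(n);
            if (visited.size() >= n_own_nodes)
              break;
          }
    }

  for (auto node : visited)
    std::cout << node << '\n'; // the BFS-grown half: 0, 1, 2
}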

◆ set_interface_node_processor_ids_linear()

void libMesh::Partitioner::set_interface_node_processor_ids_linear ( MeshBase & mesh )
static inherited

Nodes on the partitioning interface are linearly assigned within each pair of processors.

Definition at line 631 of file partitioner.C.

References libMesh::ParallelObject::comm(), mesh, libMesh::MeshBase::node_ref(), libMesh::DofObject::processor_id(), and libMesh::Partitioner::processor_pairs_to_interface_nodes().

Referenced by libMesh::Partitioner::set_node_processor_ids().

632 {
633  // This function must be run on all processors at once
634  libmesh_parallel_only(mesh.comm());
635 
636  std::map<std::pair<processor_id_type, processor_id_type>, std::set<dof_id_type>> processor_pair_to_nodes;
637 
638  processor_pairs_to_interface_nodes(mesh, processor_pair_to_nodes);
639 
640  for (auto & pmap : processor_pair_to_nodes)
641  {
642  std::size_t n_own_nodes = pmap.second.size()/2, i = 0;
643 
644  for (dof_id_type id : pmap.second)
645  {
646  auto & node = mesh.node_ref(id);
647  if (i <= n_own_nodes)
648  node.processor_id() = pmap.first.first;
649  else
650  node.processor_id() = pmap.first.second;
651  i++;
652  }
653  }
654 }
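
A standalone sketch of that linear split with invented ids; note that it reproduces the listing's <= comparison, which gives the lower-ranked processor n_own_nodes+1 of the shared nodes:

#include <cstddef>
#include <cstdint>
#include <iostream>
#include <set>

int main ()
{
  // Interface nodes shared by the processor pair (1, 2), in id order.
  const std::set<std::uint64_t> shared_nodes {3, 8, 11, 14, 20};
  const unsigned int lower_pid = 1, higher_pid = 2;

  // The first part of the set (by id order) goes to the lower pid, the
  // rest to the higher, mirroring the loop in the listing above.
  const std::size_t n_own_nodes = shared_nodes.size()/2;
  std::size_t i = 0;
  for (auto id : shared_nodes)
    {
      const unsigned int pid = (i <= n_own_nodes) ? lower_pid : higher_pid;
      std::cout << "node " << id << " -> processor " << pid << '\n';
      ++i;
    }
}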

◆ set_interface_node_processor_ids_petscpartitioner()

void libMesh::Partitioner::set_interface_node_processor_ids_petscpartitioner ( MeshBase & mesh )
static inherited

Nodes on the partitioning interface are partitioned into two groups using a PETSc partitioner for each pair of processors.

Definition at line 737 of file partitioner.C.

References libMesh::MeshTools::build_nodes_to_elem_map(), libMesh::ParallelObject::comm(), libMesh::MeshTools::find_nodal_neighbors(), libMesh::is, libMesh::libmesh_ignore(), mesh, libMesh::MeshBase::node_ref(), libMesh::DofObject::processor_id(), and libMesh::Partitioner::processor_pairs_to_interface_nodes().

Referenced by libMesh::Partitioner::set_node_processor_ids().

738 {
739  libmesh_ignore(mesh); // Only used if LIBMESH_HAVE_PETSC
740 
741  // This function must be run on all processors at once
742  libmesh_parallel_only(mesh.comm());
743 
744 #if LIBMESH_HAVE_PETSC
745  std::map<std::pair<processor_id_type, processor_id_type>, std::set<dof_id_type>> processor_pair_to_nodes;
746 
747  processor_pairs_to_interface_nodes(mesh, processor_pair_to_nodes);
748 
749  std::vector<std::vector<const Elem *>> nodes_to_elem_map;
750 
751  MeshTools::build_nodes_to_elem_map(mesh, nodes_to_elem_map);
752 
753  std::vector<const Node *> neighbors;
754  std::set<dof_id_type> neighbors_order;
755  std::vector<dof_id_type> common_nodes;
756 
757  std::vector<dof_id_type> rows;
758  std::vector<dof_id_type> cols;
759 
760  std::map<dof_id_type, dof_id_type> global_to_local;
761 
762  for (auto & pmap : processor_pair_to_nodes)
763  {
764  unsigned int i = 0;
765 
766  rows.clear();
767  rows.resize(pmap.second.size()+1);
768  cols.clear();
769  for (dof_id_type id : pmap.second)
770  global_to_local[id] = i++;
771 
772  i = 0;
773  for (auto id : pmap.second)
774  {
775  auto & node = mesh.node_ref(id);
776  neighbors.clear();
777  MeshTools::find_nodal_neighbors(mesh, node, nodes_to_elem_map, neighbors);
778  neighbors_order.clear();
779  for (auto & neighbor : neighbors)
780  neighbors_order.insert(neighbor->id());
781 
782  common_nodes.clear();
783  std::set_intersection(pmap.second.begin(), pmap.second.end(),
784  neighbors_order.begin(), neighbors_order.end(),
785  std::back_inserter(common_nodes));
786 
787  rows[i+1] = rows[i] + cast_int<dof_id_type>(common_nodes.size());
788 
789  for (auto c_node : common_nodes)
790  cols.push_back(global_to_local[c_node]);
791 
792  i++;
793  }
794 
795  // Next we construct an IS from a MatPartitioning
796  WrappedPetsc<IS> is;
797  {
798  PetscInt *adj_i, *adj_j;
799  PetscCalloc1(rows.size(), &adj_i);
800  PetscCalloc1(cols.size(), &adj_j);
801  PetscInt rows_size = cast_int<PetscInt>(rows.size());
802  for (PetscInt ii=0; ii<rows_size; ii++)
803  adj_i[ii] = rows[ii];
804 
805  PetscInt cols_size = cast_int<PetscInt>(cols.size());
806  for (PetscInt ii=0; ii<cols_size; ii++)
807  adj_j[ii] = cols[ii];
808 
809  const PetscInt sz = cast_int<PetscInt>(pmap.second.size());
810 
811  // Create sparse matrix representing an adjacency list
812  WrappedPetsc<Mat> adj;
813  MatCreateMPIAdj(PETSC_COMM_SELF, sz, sz, adj_i, adj_j, nullptr, adj.get());
814 
815  // Create MatPartitioning object
816  WrappedPetsc<MatPartitioning> part;
817  MatPartitioningCreate(PETSC_COMM_SELF, part.get());
818 
819  // Apply MatPartitioning, storing results in "is"
820  MatPartitioningSetAdjacency(part, adj);
821  MatPartitioningSetNParts(part, 2);
822  PetscObjectSetOptionsPrefix((PetscObject)(*part), "balance_");
823  MatPartitioningSetFromOptions(part);
824  MatPartitioningApply(part, is.get());
825  }
826 
827  PetscInt local_size;
828  const PetscInt *indices;
829  ISGetLocalSize(is, &local_size);
830  ISGetIndices(is, &indices);
831 
832  i = 0;
833  for (auto id : pmap.second)
834  {
835  auto & node = mesh.node_ref(id);
836  if (indices[i])
837  node.processor_id() = pmap.first.second;
838  else
839  node.processor_id() = pmap.first.first;
840 
841  i++;
842  }
843  ISRestoreIndices(is, &indices);
844  }
845 #else
846  libmesh_error_msg("PETSc is required");
847 #endif
848 }
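The pattern above is worth isolating: for each processor pair, the shared interface nodes become the vertices of a small serial graph in CSR form (rows/cols), which is wrapped in a MatCreateMPIAdj matrix and handed to a 2-way MatPartitioning; the "balance_" options prefix lets the method be chosen at runtime (e.g. -balance_mat_partitioning_type). The following is a minimal, self-contained sketch of the same PETSc sequence on a toy 4-node path graph; it is illustrative only, not libMesh code.

// Illustrative only -- a toy stand-in for the listing above, not libMesh code.
#include <petscmat.h>

int main (int argc, char ** argv)
{
  PetscInitialize(&argc, &argv, nullptr, nullptr);

  // CSR adjacency for a 4-node path graph 0-1-2-3.
  const PetscInt n = 4;
  const PetscInt rows[] = {0, 1, 3, 5, 6};
  const PetscInt cols[] = {1, 0, 2, 1, 3, 2};

  // MatCreateMPIAdj takes ownership of these arrays, so allocate them
  // with PetscCalloc1, as the listing above does.
  PetscInt *adj_i, *adj_j;
  PetscCalloc1(n + 1, &adj_i);
  PetscCalloc1(6, &adj_j);
  for (PetscInt k = 0; k <= n; k++) adj_i[k] = rows[k];
  for (PetscInt k = 0; k < 6; k++) adj_j[k] = cols[k];

  Mat adj;
  MatCreateMPIAdj(PETSC_COMM_SELF, n, n, adj_i, adj_j, nullptr, &adj);

  // Ask for a 2-way split; the partitioning type can be chosen from the
  // options database, exactly as the "balance_" prefix allows above.
  MatPartitioning part;
  MatPartitioningCreate(PETSC_COMM_SELF, &part);
  MatPartitioningSetAdjacency(part, adj);
  MatPartitioningSetNParts(part, 2);
  MatPartitioningSetFromOptions(part);

  // "is" assigns each vertex to group 0 or group 1.
  IS is;
  MatPartitioningApply(part, &is);
  ISView(is, PETSC_VIEWER_STDOUT_SELF);

  ISDestroy(&is);
  MatPartitioningDestroy(&part);
  MatDestroy(&adj);
  return PetscFinalize();
}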

◆ set_node_processor_ids()

void libMesh::Partitioner::set_node_processor_ids ( MeshBase & mesh )
static, inherited

This function is called after partitioning to set the processor IDs for the nodes.

By definition, a Node's processor ID is the minimum processor ID for all of the elements which share the node.

Definition at line 851 of file partitioner.C.

References libMesh::as_range(), libMesh::Node::choose_processor_id(), libMesh::ParallelObject::comm(), libMesh::DofObject::id(), libMesh::DofObject::invalid_processor_id, libMesh::MeshBase::is_serial(), libMesh::libmesh_assert(), mesh, libMesh::MeshTools::n_elem(), libMesh::MeshTools::n_levels(), libMesh::MeshBase::node_ref(), libMesh::on_command_line(), libMesh::DofObject::processor_id(), libMesh::Partitioner::set_interface_node_processor_ids_BFS(), libMesh::Partitioner::set_interface_node_processor_ids_linear(), libMesh::Partitioner::set_interface_node_processor_ids_petscpartitioner(), and libMesh::Parallel::sync_node_data_by_element_id().

Referenced by libMesh::MeshRefinement::_refine_elements(), libMesh::UnstructuredMesh::all_first_order(), libMesh::Partitioner::partition(), libMesh::XdrIO::read(), libMesh::Partitioner::repartition(), and libMesh::BoundaryInfo::sync().

852 {
853  LOG_SCOPE("set_node_processor_ids()","Partitioner");
854 
855  // This function must be run on all processors at once
856  libmesh_parallel_only(mesh.comm());
857 
858  // If we have any unpartitioned elements at this
859  // stage there is a problem
860  libmesh_assert (MeshTools::n_elem(mesh.unpartitioned_elements_begin(),
861  mesh.unpartitioned_elements_end()) == 0);
862 
863  // Start from scratch here: nodes we used to own may not be
864  // eligible for us to own any more.
865  for (auto & node : mesh.node_ptr_range())
866  {
867  node->processor_id() = DofObject::invalid_processor_id;
868  }
869 
870  // Loop over all the active elements
871  for (auto & elem : mesh.active_element_ptr_range())
872  {
873  libmesh_assert(elem);
874 
875  libmesh_assert_not_equal_to (elem->processor_id(), DofObject::invalid_processor_id);
876 
877  // Consider updating the processor id on this element's nodes
878  for (Node & node : elem->node_ref_range())
879  {
880  processor_id_type & pid = node.processor_id();
881  pid = node.choose_processor_id(pid, elem->processor_id());
882  }
883  }
884 
885  // How we finish off the node partitioning depends on our command
886  // line options.
887 
888  const bool load_balanced_nodes_linear =
889  libMesh::on_command_line ("--load-balanced-nodes-linear");
890 
891  const bool load_balanced_nodes_bfs =
892  libMesh::on_command_line ("--load-balanced-nodes-bfs");
893 
894  const bool load_balanced_nodes_petscpartition =
895  libMesh::on_command_line ("--load-balanced-nodes-petscpartitioner");
896 
897  unsigned int n_load_balance_options = load_balanced_nodes_linear;
898  n_load_balance_options += load_balanced_nodes_bfs;
899  n_load_balance_options += load_balanced_nodes_petscpartition;
900  libmesh_error_msg_if(n_load_balance_options > 1,
901  "Cannot perform more than one load balancing type at a time");
902 
903  if (load_balanced_nodes_linear)
904  set_interface_node_processor_ids_linear(mesh);
905  else if (load_balanced_nodes_bfs)
906  set_interface_node_processor_ids_BFS(mesh);
907  else if (load_balanced_nodes_petscpartition)
908  set_interface_node_processor_ids_petscpartitioner(mesh);
910  // The node balancing algorithm is responsible for assigning owned nodes.
911  // We still need to sync PIDs.
912  {
913  // For inactive elements, we will have already gotten most of
914  // these nodes, *except* for the case of a parent with a subset
915  // of active descendants which are remote elements. In that
916  // case some of the parent nodes will not have been properly
917  // handled yet on our processor.
918  //
919  // We don't want to inadvertently give one of them an incorrect
920  // processor id, but if we're not in serial then we have to
921  // assign them temporary pids to make querying work, so we'll
922  // save our *valid* pids before assigning temporaries.
923  //
924  // Even in serial we'll want to check and make sure we're not
925  // overwriting valid active node pids with pids from subactive
926  // elements.
927  std::unordered_set<dof_id_type> bad_pids;
928 
929  for (auto & node : mesh.node_ptr_range())
930  if (node->processor_id() == DofObject::invalid_processor_id)
931  bad_pids.insert(node->id());
932 
933  // If we assign our temporary ids by looping from finer elements
934  // to coarser elements, we'll always get an id from the finest
935  // ghost element we can see, which will usually be "closer" to
936  // the true processor we want to query and so will reduce query
937  // cycles that don't reach that processor.
938 
939  // But we can still end up with a query cycle that dead-ends, so
940  // we need to prepare a "push" communication step here.
941 
942  const bool is_serial = mesh.is_serial();
943  std::unordered_map
944  <processor_id_type,
945  std::unordered_map<dof_id_type, processor_id_type>>
946  potential_pids;
947 
948  const unsigned int n_levels = MeshTools::n_levels(mesh);
949  for (unsigned int level = n_levels; level > 0; --level)
950  {
951  for (auto & elem : as_range(mesh.level_elements_begin(level-1),
952  mesh.level_elements_end(level-1)))
953  {
954  libmesh_assert_not_equal_to (elem->processor_id(),
955  DofObject::invalid_processor_id);
956 
957  const processor_id_type elem_pid = elem->processor_id();
958 
959  // Consider updating the processor id on this element's nodes
960  for (Node & node : elem->node_ref_range())
961  {
962  processor_id_type & pid = node.processor_id();
963  if (bad_pids.count(node.id()))
964  pid = node.choose_processor_id(pid, elem_pid);
965  else if (!is_serial)
966  potential_pids[elem_pid][node.id()] = pid;
967  }
968  }
969  }
970 
971  if (!is_serial)
972  {
973  std::unordered_map
974  <processor_id_type,
975  std::vector<std::pair<dof_id_type, processor_id_type>>>
976  potential_pids_vecs;
977 
978  for (auto & pair : potential_pids)
979  potential_pids_vecs[pair.first].assign(pair.second.begin(), pair.second.end());
980 
981  auto pids_action_functor =
982  [& mesh, & bad_pids]
983  (processor_id_type /* src_pid */,
984  const std::vector<std::pair<dof_id_type, processor_id_type>> & data)
985  {
986  for (auto pair : data)
987  {
988  Node & node = mesh.node_ref(pair.first);
989  processor_id_type & pid = node.processor_id();
990  auto it = bad_pids.find(pair.first);
991  if (it != bad_pids.end())
992  {
993  pid = pair.second;
994  bad_pids.erase(it);
995  }
996  else
997  pid = node.choose_processor_id(pid, pair.second);
998  }
999  };
1000 
1001  Parallel::push_parallel_vector_data
1002  (mesh.comm(), potential_pids_vecs, pids_action_functor);
1003 
1004  // Using default libMesh options, we'll just need to sync
1005  // between processors now. The catch here is that we can't
1006  // initially trust Node::choose_processor_id() because some
1007  // of those node processor ids are the temporary ones.
1008  CorrectProcIds correct_pids(mesh, bad_pids);
1009  Parallel::sync_node_data_by_element_id
1010  (mesh, mesh.elements_begin(), mesh.elements_end(),
1011  Parallel::SyncEverything(), Parallel::SyncEverything(),
1012  correct_pids);
1013 
1014  // But once we've got all the non-temporary pids synced, we
1015  // may need to sync again to get any pids on nodes only
1016  // connected to subactive elements, for which *only*
1017  // "temporary" pids are possible.
1018  bad_pids.clear();
1019  Parallel::sync_node_data_by_element_id
1020  (mesh,
1021  mesh.elements_begin(), mesh.elements_end(),
1022  Parallel::SyncEverything(), Parallel::SyncEverything(),
1023  correct_pids);
1024  }
1025  }
1026 
1027  // We can't assert that all nodes are connected to elements, because
1028  // a DistributedMesh with NodeConstraints might have pulled in some
1029  // remote nodes solely for evaluating those constraints.
1030  // MeshTools::libmesh_assert_connected_nodes(mesh);
1031 
1032 #ifdef DEBUG
1033  MeshTools::libmesh_assert_valid_procids<Node>(mesh);
1034  //MeshTools::libmesh_assert_canonical_node_procids(mesh);
1035 #endif
1036 }
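The three --load-balanced-nodes-* flags are read from libMesh's command line and are mutually exclusive; with none of them given, node ownership comes from the choose_processor_id() heuristic alone. A minimal usage sketch follows (assumed application code, not part of this documentation); run it as, e.g., ./app --load-balanced-nodes-bfs.

// Assumed application code: repartition a mesh and let
// set_node_processor_ids() pick node owners according to any
// --load-balanced-nodes-* flag passed on the command line.
#include "libmesh/libmesh.h"
#include "libmesh/linear_partitioner.h"
#include "libmesh/mesh_generation.h"
#include "libmesh/replicated_mesh.h"

int main (int argc, char ** argv)
{
  libMesh::LibMeshInit init (argc, argv);

  libMesh::ReplicatedMesh mesh (init.comm());
  libMesh::MeshTools::Generation::build_square (mesh, 16, 16);

  // partition() assigns element owners and then calls
  // Partitioner::set_node_processor_ids() internally, which honors
  // whichever node balancing flag (if any) was passed above.
  libMesh::LinearPartitioner partitioner;
  partitioner.partition (mesh, mesh.n_processors());

  return 0;
}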

◆ set_parent_processor_ids()

void libMesh::Partitioner::set_parent_processor_ids ( MeshBase & mesh )
static, inherited

This function is called after partitioning to set the processor IDs for the inactive parent elements.

By convention, a parent's processor ID is the minimum processor ID of its children.

Definition at line 425 of file partitioner.C.

References libMesh::as_range(), libMesh::Elem::child_ref_range(), libMesh::ParallelObject::comm(), libMesh::Partitioner::communication_blocksize, libMesh::DofObject::invalid_processor_id, libMesh::DofObject::invalidate_processor_id(), libMesh::MeshBase::is_serial(), libMesh::libmesh_assert(), libMesh::libmesh_ignore(), libMesh::MeshBase::max_elem_id(), mesh, TIMPI::Communicator::min(), libMesh::MeshTools::n_elem(), libMesh::Elem::parent(), libMesh::DofObject::processor_id(), and libMesh::Elem::total_family_tree().

Referenced by libMesh::Partitioner::partition(), and libMesh::Partitioner::repartition().

426 {
427  // Ignore the parameter when !LIBMESH_ENABLE_AMR
428  libmesh_ignore(mesh);
429 
430  LOG_SCOPE("set_parent_processor_ids()", "Partitioner");
431 
432 #ifdef LIBMESH_ENABLE_AMR
433 
434  // If the mesh is serial we have access to all the elements,
435  // in particular all the active ones. We can therefore set
436  // the parent processor ids indirectly through their children, and
437  // set the subactive processor ids while examining their active
438  // ancestors.
439  // By convention a parent is assigned to the minimum processor
440  // of all its children, and a subactive is assigned to the processor
441  // of its active ancestor.
442  if (mesh.is_serial())
443  {
444  for (auto & elem : mesh.active_element_ptr_range())
445  {
446  // First set descendants
447  std::vector<Elem *> subactive_family;
448  elem->total_family_tree(subactive_family);
449  for (const auto & f : subactive_family)
450  f->processor_id() = elem->processor_id();
451 
452  // Then set ancestors
453  Elem * parent = elem->parent();
454 
455  while (parent)
456  {
457  // invalidate the parent id, otherwise the min below
458  // will not work if the current parent id is less
459  // than all the children!
460  parent->invalidate_processor_id();
461 
462  for (auto & child : parent->child_ref_range())
463  {
464  libmesh_assert(!child.is_remote());
465  libmesh_assert_not_equal_to (child.processor_id(), DofObject::invalid_processor_id);
466  parent->processor_id() = std::min(parent->processor_id(),
467  child.processor_id());
468  }
469  parent = parent->parent();
470  }
471  }
472  }
473 
474  // When the mesh is parallel we cannot guarantee that parents have access to
475  // all their children.
476  else
477  {
478  // Setting subactive processor ids is easy: we can guarantee
479  // that children have access to all their parents.
480 
481  // Loop over all the active elements in the mesh
482  for (auto & child : mesh.active_element_ptr_range())
483  {
484  std::vector<Elem *> subactive_family;
485  child->total_family_tree(subactive_family);
486  for (const auto & f : subactive_family)
487  f->processor_id() = child->processor_id();
488  }
489 
490  // When the mesh is parallel we cannot guarantee that parents have access to
491  // all their children.
492 
493  // We will use a brute-force approach here. Each processor finds its parent
494  // elements and sets the parent pid to the minimum of its
495  // semilocal descendants.
496  // A global reduction is then performed to make sure the true minimum is found.
497  // As noted, this is required because we cannot guarantee that a parent has
498  // access to all its children on any single processor.
499  libmesh_parallel_only(mesh.comm());
500  libmesh_assert(MeshTools::n_elem(mesh.unpartitioned_elements_begin(),
501  mesh.unpartitioned_elements_end()) == 0);
502 
503  const dof_id_type max_elem_id = mesh.max_elem_id();
504 
505  std::vector<processor_id_type>
506  parent_processor_ids (std::min(communication_blocksize,
507  max_elem_id));
508 
509  for (dof_id_type blk=0, last_elem_id=0; last_elem_id<max_elem_id; blk++)
510  {
511  last_elem_id =
512  std::min(static_cast<dof_id_type>((blk+1)*communication_blocksize),
513  max_elem_id);
514  const dof_id_type first_elem_id = blk*communication_blocksize;
515 
516  std::fill (parent_processor_ids.begin(),
517  parent_processor_ids.end(),
518  DofObject::invalid_processor_id);
519 
520  // first build up local contributions to parent_processor_ids
521  bool have_parent_in_block = false;
522 
523  for (auto & parent : as_range(mesh.ancestor_elements_begin(),
524  mesh.ancestor_elements_end()))
525  {
526  const dof_id_type parent_idx = parent->id();
527  libmesh_assert_less (parent_idx, max_elem_id);
528 
529  if ((parent_idx >= first_elem_id) &&
530  (parent_idx < last_elem_id))
531  {
532  have_parent_in_block = true;
533  processor_id_type parent_pid = DofObject::invalid_processor_id;
534 
535  std::vector<const Elem *> active_family;
536  parent->active_family_tree(active_family);
537  for (const auto & f : active_family)
538  parent_pid = std::min (parent_pid, f->processor_id());
539 
540  const dof_id_type packed_idx = parent_idx - first_elem_id;
541  libmesh_assert_less (packed_idx, parent_processor_ids.size());
542 
543  parent_processor_ids[packed_idx] = parent_pid;
544  }
545  }
546 
547  // then find the global minimum
548  mesh.comm().min (parent_processor_ids);
549 
550  // and assign the ids, if we have a parent in this block.
551  if (have_parent_in_block)
552  for (auto & parent : as_range(mesh.ancestor_elements_begin(),
553  mesh.ancestor_elements_end()))
554  {
555  const dof_id_type parent_idx = parent->id();
556 
557  if ((parent_idx >= first_elem_id) &&
558  (parent_idx < last_elem_id))
559  {
560  const dof_id_type packed_idx = parent_idx - first_elem_id;
561  libmesh_assert_less (packed_idx, parent_processor_ids.size());
562 
563  const processor_id_type parent_pid =
564  parent_processor_ids[packed_idx];
565 
566  libmesh_assert_not_equal_to (parent_pid, DofObject::invalid_processor_id);
567 
568  parent->processor_id() = parent_pid;
569  }
570  }
571  }
572  }
573 
574 #endif // LIBMESH_ENABLE_AMR
575 }
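Note how the parallel branch never allocates one max_elem_id-sized vector: parent pids are reduced communication_blocksize ids at a time, each processor filling a block with minima over its semilocal descendants before comm().min() merges the blocks globally. The following sketch shows that blocked-reduction pattern in isolation (illustrative only; local_min_pid is a hypothetical callback, not libMesh API).

// Illustrative sketch of the blocked global-min pattern used above.
// "local_min_pid" is a hypothetical callback returning this rank's
// minimum pid for an id, or the invalid (all-ones) pid if it has none.
#include "libmesh/id_types.h"
#include "libmesh/parallel.h"

#include <algorithm>
#include <limits>
#include <vector>

using libMesh::dof_id_type;
using libMesh::processor_id_type;

template <typename F>
void blocked_min_reduction (const libMesh::Parallel::Communicator & comm,
                            const dof_id_type max_id,
                            const dof_id_type blocksize,
                            F local_min_pid)
{
  std::vector<processor_id_type> block;

  for (dof_id_type first = 0; first < max_id; first += blocksize)
    {
      const dof_id_type last = std::min(first + blocksize, max_id);

      // Local contributions; the invalid pid loses every min().
      block.assign(last - first,
                   std::numeric_limits<processor_id_type>::max());
      for (dof_id_type id = first; id != last; ++id)
        block[id - first] = local_min_pid(id);

      // One bounded-size global reduction per block.
      comm.min(block);

      // block[id - first] now holds the global minimum pid for each id
      // in [first, last); consume it here as needed.
    }
}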

◆ single_partition()

bool libMesh::Partitioner::single_partition ( MeshBase & mesh )
protected, inherited

Trivially "partitions" the mesh for one processor.

Simply loops through the elements and assigns all of them to processor 0. It is provided as a separate function so that derived classes may use it without reimplementing it.

Returns true iff any processor id was changed.

Definition at line 297 of file partitioner.C.

References libMesh::ParallelObject::comm(), TIMPI::Communicator::max(), mesh, libMesh::MeshBase::redistribute(), and libMesh::Partitioner::single_partition_range().

Referenced by libMesh::SubdomainPartitioner::_do_partition(), libMesh::Partitioner::partition(), and libMesh::Partitioner::repartition().

298 {
299  bool changed_pid =
300  this->single_partition_range(mesh.elements_begin(),
301  mesh.elements_end());
302 
303  // If we have a distributed mesh with an empty rank (or where rank
304  // 0 has only its own component of a disconnected mesh, I guess),
305  // that rank might need to be informed of a change.
306  mesh.comm().max(changed_pid);
307 
308  // We may need to redistribute, in case someone (like our unit
309  // tests) is doing something silly (like moving a whole
310  // already-distributed mesh back onto rank 0).
311  if (changed_pid)
312  mesh.redistribute();
313 
314  return changed_pid;
315 }

◆ single_partition_range()

bool libMesh::Partitioner::single_partition_range ( MeshBase::element_iterator  it,
MeshBase::element_iterator  end 
)
protected, inherited

Slightly generalized version of single_partition which acts on a range of elements defined by the pair of iterators (it, end).

Returns true iff any processor id was changed.

Definition at line 319 of file partitioner.C.

References libMesh::as_range(), and libMesh::DofObject::processor_id().

Referenced by partition_range(), libMesh::MetisPartitioner::partition_range(), libMesh::SFCPartitioner::partition_range(), libMesh::MappedSubdomainPartitioner::partition_range(), libMesh::CentroidPartitioner::partition_range(), and libMesh::Partitioner::single_partition().

321 {
322  LOG_SCOPE("single_partition_range()", "Partitioner");
323 
324  bool changed_pid = false;
325 
326  for (auto & elem : as_range(it, end))
327  {
328  if (elem->processor_id())
329  changed_pid = true;
330  elem->processor_id() = 0;
331 
332  // Assign all this element's nodes to processor 0 as well.
333  for (Node & node : elem->node_ref_range())
334  {
335  if (node.processor_id())
336  changed_pid = true;
337  node.processor_id() = 0;
338  }
339  }
340 
341  return changed_pid;
342 }
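As the list of referencing methods above suggests, the typical use is an n == 1 short-circuit inside a derived class's partition_range(). A hedged sketch, with MyPartitioner purely hypothetical:

// Hypothetical derived class, shown only to illustrate the call pattern.
#include "libmesh/mesh_base.h"
#include "libmesh/partitioner.h"

#include <memory>

using namespace libMesh;

class MyPartitioner : public Partitioner
{
public:
  virtual std::unique_ptr<Partitioner> clone () const override
  { return std::make_unique<MyPartitioner>(*this); }

  virtual void partition_range (MeshBase & mesh,
                                MeshBase::element_iterator it,
                                MeshBase::element_iterator end,
                                const unsigned int n) override
  {
    // With a single target processor there is nothing to compute:
    // every element (and node) in the range goes to processor 0.
    if (n == 1)
      {
        this->single_partition_range (it, end);
        return;
      }

    libmesh_ignore(mesh);
    // ... otherwise run the real partitioning algorithm on (it, end) ...
  }

protected:
  virtual void _do_partition (MeshBase & mesh, const unsigned int n) override
  {
    this->partition_range (mesh,
                           mesh.active_elements_begin(),
                           mesh.active_elements_end(), n);
  }
};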

◆ type()

PartitionerType libMesh::LinearPartitioner::type ( ) const
override, virtual

Reimplemented from libMesh::Partitioner.

Definition at line 30 of file linear_partitioner.C.

References libMesh::LINEAR_PARTITIONER.

Member Data Documentation

◆ _dual_graph

std::vector<std::vector<dof_id_type> > libMesh::Partitioner::_dual_graph
protected, inherited

A dual graph corresponding to the mesh; it is typically used by a partitioner.

A vertex represents an element, and its neighbors are the element neighbors.

Definition at line 302 of file partitioner.h.

Referenced by libMesh::Partitioner::build_graph().
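
A sketch of assembling such a dual graph from face neighbors (illustrative; the real build_graph() additionally remaps element ids into a contiguous range via _global_index_by_pid_map):

// One vertex per element; edges point to face-neighbor element ids.
#include "libmesh/elem.h"
#include "libmesh/mesh_base.h"

#include <vector>

std::vector<std::vector<libMesh::dof_id_type>>
build_dual_graph_sketch (const libMesh::MeshBase & mesh)
{
  std::vector<std::vector<libMesh::dof_id_type>> dual_graph (mesh.max_elem_id());

  for (const auto & elem : mesh.active_element_ptr_range())
    for (auto s : elem->side_index_range())
      if (const libMesh::Elem * neighbor = elem->neighbor_ptr(s))
        dual_graph[elem->id()].push_back (neighbor->id());

  return dual_graph;
}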

◆ _global_index_by_pid_map

std::unordered_map<dof_id_type, dof_id_type> libMesh::Partitioner::_global_index_by_pid_map
protected, inherited

Maps active element ids into a contiguous range, as needed by a parallel partitioner.

Definition at line 286 of file partitioner.h.

Referenced by libMesh::Partitioner::_find_global_index_by_pid_map(), libMesh::Partitioner::assign_partitioning(), and libMesh::Partitioner::build_graph().

◆ _local_id_to_elem

std::vector<Elem *> libMesh::Partitioner::_local_id_to_elem
protected, inherited

Definition at line 305 of file partitioner.h.

Referenced by libMesh::Partitioner::build_graph().

◆ _n_active_elem_on_proc

std::vector<dof_id_type> libMesh::Partitioner::_n_active_elem_on_proc
protected, inherited

The number of active elements on each processor.

Note
ParMETIS requires that each processor have some active elements; it will abort if any processor passes a nullptr _part array.

Definition at line 295 of file partitioner.h.

Referenced by libMesh::Partitioner::_find_global_index_by_pid_map(), libMesh::Partitioner::assign_partitioning(), and libMesh::Partitioner::build_graph().

◆ _weights

ErrorVector* libMesh::Partitioner::_weights
protected, inherited

The weights that might be used for partitioning.

Definition at line 281 of file partitioner.h.

Referenced by libMesh::MetisPartitioner::attach_weights(), and libMesh::MetisPartitioner::partition_range().
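
A hedged usage sketch of attaching weights (assumed application code; per the references above, MetisPartitioner is one partitioner that honors attach_weights()):

// Assumed application code: give every element a weight, then partition.
#include "libmesh/error_vector.h"
#include "libmesh/mesh_base.h"
#include "libmesh/metis_partitioner.h"

void weighted_partition (libMesh::MeshBase & mesh)
{
  // One weight slot per element id; here every element gets weight 1.
  // A real application would weight by expected per-element cost.
  libMesh::ErrorVector weights (mesh.max_elem_id(), 1.0);

  libMesh::MetisPartitioner partitioner;
  partitioner.attach_weights (&weights);
  partitioner.partition (mesh, mesh.n_processors());

  // "weights" must outlive the partition() call, since the partitioner
  // stores only the _weights pointer.
}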

◆ communication_blocksize

const dof_id_type libMesh::Partitioner::communication_blocksize
static, protected, inherited
Initial value: dof_id_type(1000000)

The blocksize to use when doing blocked parallel communication.

This limits the maximum vector size which can be used in a single communication step.

Definition at line 258 of file partitioner.h.

Referenced by libMesh::Partitioner::set_parent_processor_ids().


The documentation for this class was generated from the following files: