libMesh
libMesh::PetscDMWrapper Class Reference

This class defines a wrapper around the PETSc DM infrastructure. More...

#include <petsc_dm_wrapper.h>

Public Member Functions

 PetscDMWrapper ()=default
 
 ~PetscDMWrapper ()
 
void clear ()
 Destroys and clears all built DM-related data. More...
 
void init_and_attach_petscdm (System &system, SNES snes)
 
void init_and_attach_petscdm (System &system, KSP ksp)
 

Private Member Functions

unsigned int init_petscdm (System &system)
 Initialize the PETSc DM and return the number of geometric multigrid levels. More...
 
void init_dm_data (unsigned int n_levels, const Parallel::Communicator &comm)
 Init all the n_mesh_level dependent data structures. More...
 
DM & get_dm (unsigned int level)
 Get reference to DM for the given mesh level. More...
 
PetscSection & get_section (unsigned int level)
 Get reference to PetscSection for the given mesh level. More...
 
PetscSF & get_star_forest (unsigned int level)
 Get reference to PetscSF for the given mesh level. More...
 
void build_section (const System &system, PetscSection &section)
 Takes System, empty PetscSection and populates the PetscSection. More...
 
void build_sf (const System &system, PetscSF &star_forest)
 Takes System, empty PetscSF and populates the PetscSF. More...
 
void set_point_range_in_section (const System &system, PetscSection &section, std::unordered_map< dof_id_type, dof_id_type > &node_map, std::unordered_map< dof_id_type, dof_id_type > &elem_map, std::map< dof_id_type, unsigned int > &scalar_map)
 Helper function for build_section. More...
 
void add_dofs_to_section (const System &system, PetscSection &section, const std::unordered_map< dof_id_type, dof_id_type > &node_map, const std::unordered_map< dof_id_type, dof_id_type > &elem_map, const std::map< dof_id_type, unsigned int > &scalar_map)
 Helper function for build_section. More...
 
dof_id_type check_section_n_dofs (PetscSection &section)
 Helper function to sanity check PetscSection construction. The PetscSection contains local dof information. More...
 
void add_dofs_helper (const System &system, const DofObject &dof_object, dof_id_type local_id, PetscSection &section)
 

Private Attributes

std::vector< WrappedPetsc< DM > > _dms
 Vector of DMs for all grid levels. More...
 
std::vector< WrappedPetsc< PetscSection > > _sections
 Vector of PetscSections for all grid levels. More...
 
std::vector< PetscSF > _star_forests
 Vector of star forests for all grid levels. More...
 
std::vector< std::unique_ptr< PetscMatrixBase< Number > > > _pmtx_vec
 Vector of projection matrices for all grid levels. More...
 
std::vector< std::unique_ptr< PetscMatrixBase< Number > > > _subpmtx_vec
 Vector of sub projection matrices for all grid levels for fieldsplit. More...
 
std::vector< PetscDMContext > _ctx_vec
 Vector of internal PetscDM context structs for all grid levels. Pointers to these C++ objects are passed to DMShellSetContext(); they are cleaned up automatically by their destructors. More...
 
std::vector< std::unique_ptr< PetscVector< Number > > > _vec_vec
 Vector of solution vectors for all grid levels. More...
 
std::vector< unsigned int > _mesh_dof_sizes
 Stores n_dofs for each grid level, to be used for projection matrix sizing. More...
 
std::vector< unsigned int > _mesh_dof_loc_sizes
 Stores n_local_dofs for each grid level, to be used for projection vector sizing. More...
 

Detailed Description

This class defines a wrapper around the PETSc DM infrastructure.

By coordinating DM data structures with libMesh, we can use libMesh mesh hierarchies for geometric multigrid. Additionally, by setting the DM data, we can (with or without multigrid) define recursive fieldsplits of our variables.

Author
Paul T. Bauman, Boris Boutkov
Date
2018

Definition at line 97 of file petsc_dm_wrapper.h.
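
A minimal usage sketch may help orient readers. The code below is illustrative only and is not library code: it assumes a PETSc-enabled libMesh build, an already-initialized System, and a SNES created elsewhere (inside libMesh this wiring is normally done by PetscDiffSolver::setup_petsc_data()); the helper function name is a placeholder and include paths may differ by installation.

#include <petsc_dm_wrapper.h>   // as listed above; may live under a libmesh/ prefix in installed trees
#include <petscsnes.h>

// Hypothetical helper: attach DM data for the given system to a SNES solver.
void attach_dm_sketch(libMesh::System & system, SNES snes)
{
  libMesh::PetscDMWrapper dm_wrapper;

  // Builds a DM/PetscSection/PetscSF per mesh level and hands SNES the
  // finest-level DM.  The run must use --node-major-dofs; geometric
  // multigrid is selected at runtime with e.g. "-pc_mg_levels 3 -pc_type mg",
  // and recursive fieldsplits with the usual "-pc_type fieldsplit" options.
  dm_wrapper.init_and_attach_petscdm(system, snes);

  // ... solve with SNES while dm_wrapper is still alive: the DMs hold
  // pointers into its internal context structs and vectors ...

  // Destroy all DM-related data once the solver is torn down.
  dm_wrapper.clear();
}

In practice the wrapper is owned by the solver object (see the "Referenced by" entries below), so its lifetime automatically spans the solve.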

Constructor & Destructor Documentation

◆ PetscDMWrapper()

libMesh::PetscDMWrapper::PetscDMWrapper ( )
default

◆ ~PetscDMWrapper()

libMesh::PetscDMWrapper::~PetscDMWrapper ( )
default

Member Function Documentation

◆ add_dofs_helper()

void libMesh::PetscDMWrapper::add_dofs_helper ( const System &  system,
const DofObject &  dof_object,
dof_id_type  local_id,
PetscSection &  section 
)
private

Definition at line 962 of file petsc_dm_wrapper.C.

References libMesh::ParallelObject::comm(), libMesh::make_range(), libMesh::DofObject::n_dofs(), libMesh::System::n_vars(), and libMesh::System::number().

Referenced by add_dofs_to_section().

966  {
967  unsigned int total_n_dofs_at_dofobject = 0;
968 
969  // We are assuming variables are also numbered 0 to n_vars()-1
970  for (auto v : make_range(system.n_vars()))
971  {
972  unsigned int n_dofs_at_dofobject = dof_object.n_dofs(system.number(), v);
973 
974  if( n_dofs_at_dofobject > 0 )
975  {
976  LibmeshPetscCall2(system.comm(), PetscSectionSetFieldDof( section,
977  local_id,
978  v,
979  n_dofs_at_dofobject ));
980 
981  total_n_dofs_at_dofobject += n_dofs_at_dofobject;
982  }
983  }
984 
985  libmesh_assert_equal_to(total_n_dofs_at_dofobject, dof_object.n_dofs(system.number()));
986 
987  LibmeshPetscCall2(system.comm(), PetscSectionSetDof( section, local_id, total_n_dofs_at_dofobject ));
988  }

◆ add_dofs_to_section()

void libMesh::PetscDMWrapper::add_dofs_to_section ( const System &  system,
PetscSection &  section,
const std::unordered_map< dof_id_type, dof_id_type > &  node_map,
const std::unordered_map< dof_id_type, dof_id_type > &  elem_map,
const std::map< dof_id_type, unsigned int > &  scalar_map 
)
private

Helper function for build_section.

This function will set the DoF info for each "point" in the PetscSection.

Definition at line 908 of file petsc_dm_wrapper.C.

References add_dofs_helper(), libMesh::ParallelObject::comm(), libMesh::System::get_mesh(), libMesh::OrderWrapper::get_order(), libMesh::libmesh_assert(), mesh, libMesh::ParallelObject::n_processors(), libMesh::FEType::order, libMesh::ParallelObject::processor_id(), libMesh::Variable::type(), and libMesh::System::variable().

Referenced by build_section().

913  {
914  const MeshBase & mesh = system.get_mesh();
915 
916  // Now we go through and add dof information for each "point".
917  //
918  // In libMesh, for most finite elements, we just associate those DoFs with the
919  // geometric nodes. So we can loop over the nodes we cached in the node_map and
920  // the DoFs for each field for that node. We need to give PETSc the local id
921  // we built up in the node map.
922  for (const auto [global_node_id, local_node_id] : node_map )
923  {
924  libmesh_assert( mesh.query_node_ptr(global_node_id) );
925 
926  const Node & node = mesh.node_ref(global_node_id);
927 
928  this->add_dofs_helper(system,node,local_node_id,section);
929  }
930 
931  // Some finite element types associate dofs with the element. So now we go through
932  // any of those with the Elem as the point we add to the PetscSection with accompanying
933  // dofs
934  for (const auto [global_elem_id, local_elem_id] : elem_map )
935  {
936  libmesh_assert( mesh.query_elem_ptr(global_elem_id) );
937 
938  const Elem & elem = mesh.elem_ref(global_elem_id);
939 
940  this->add_dofs_helper(system,elem,local_elem_id,section);
941  }
942 
943  // Now add any SCALAR dofs to the PetscSection
944  // SCALAR dofs live on the "last" processor, so only work there if there are any
945  if (system.processor_id() == (system.n_processors()-1))
946  {
947  for (const auto [local_id, scalar_var] : scalar_map )
948  {
949  // The number of SCALAR dofs comes from the variable order
950  const int n_dofs = system.variable(scalar_var).type().order.get_order();
951 
952  LibmeshPetscCall2(system.comm(), PetscSectionSetFieldDof( section, local_id, scalar_var, n_dofs ));
953 
954  // In the SCALAR case, there are no other variables associated with the "point", so
955  // the total number of dofs on the point is the same as that for the field
956  LibmeshPetscCall2(system.comm(), PetscSectionSetDof( section, local_id, n_dofs ));
957  }
958  }
959 
960  }

◆ build_section()

void libMesh::PetscDMWrapper::build_section ( const System &  system,
PetscSection &  section 
)
private

Takes System, empty PetscSection and populates the PetscSection.

Take the System in its current state and an empty PetscSection and then populate the PetscSection. The PetscSection is composed of global "point" numbers, where a "point" in PetscDM parlance is a geometric entity: node, edge, face, or element. Then, we also add the DoF numbering for each variable for each of the "points". The PetscSection, together with the PetscSF, will allow for recursive fieldsplits from the command line using PETSc.
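
As a concrete illustration of the point/field/DoF layout described above, here is a small standalone PETSc sketch (plain PETSc, not libMesh code; it assumes PETSc >= 3.18 for PetscCall(), and the two points, field names, and dof counts are invented). Two local "points" each carry two "u" dofs and one "p" dof, mirroring the chart-then-dofs-then-setup ordering used by build_section():

#include <petscsys.h>
#include <petscsection.h>

int main(int argc, char ** argv)
{
  PetscSection section;
  PetscCall(PetscInitialize(&argc, &argv, nullptr, nullptr));

  PetscCall(PetscSectionCreate(PETSC_COMM_SELF, &section));
  PetscCall(PetscSectionSetNumFields(section, 2));            // fields "u" and "p"
  PetscCall(PetscSectionSetFieldName(section, 0, "u"));
  PetscCall(PetscSectionSetFieldName(section, 1, "p"));

  // Two local points (think: two nodes), numbered contiguously in [0,2).
  PetscCall(PetscSectionSetChart(section, 0, 2));
  for (PetscInt p = 0; p < 2; ++p)
  {
    PetscCall(PetscSectionSetFieldDof(section, p, 0, 2));     // 2 "u" dofs at this point
    PetscCall(PetscSectionSetFieldDof(section, p, 1, 1));     // 1 "p" dof at this point
    PetscCall(PetscSectionSetDof(section, p, 3));             // total dofs at this point
  }
  PetscCall(PetscSectionSetUp(section));

  // After SetUp, offsets into the local dof ordering are available.
  PetscInt offset;
  PetscCall(PetscSectionGetOffset(section, 1, &offset));      // first dof of point 1 (= 3 here)
  PetscCall(PetscPrintf(PETSC_COMM_SELF, "point 1 starts at dof %d\n", (int)offset));

  PetscCall(PetscSectionDestroy(&section));
  PetscCall(PetscFinalize());
  return 0;
}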

Definition at line 671 of file petsc_dm_wrapper.C.

References add_dofs_to_section(), check_section_n_dofs(), libMesh::ParallelObject::comm(), TIMPI::Communicator::get(), libMesh::make_range(), libMesh::System::n_local_dofs(), libMesh::System::n_vars(), libMesh::on_command_line(), set_point_range_in_section(), and libMesh::System::variable_name().

Referenced by init_petscdm().

672  {
673  LOG_SCOPE ("build_section()", "PetscDMWrapper");
674 
675  LibmeshPetscCall2(system.comm(), PetscSectionCreate(system.comm().get(),&section));
676 
677  // Tell the PetscSection about all of our System variables
678  LibmeshPetscCall2(system.comm(), PetscSectionSetNumFields(section,system.n_vars()));
679 
680  // Set the actual names of all the field variables
681  for (auto v : make_range(system.n_vars()))
682  LibmeshPetscCall2(system.comm(), PetscSectionSetFieldName( section, v, system.variable_name(v).c_str() ));
683 
684  // For building the section, we need to create local-to-global map
685  // of local "point" ids to the libMesh global id of that point.
686  // A "point" in PETSc nomenclature is a geometric object that can have
687  // dofs associated with it, e.g. Node, Edge, Face, Elem.
688  // The numbering PETSc expects is continuous for the local numbering.
689  // Since we're only using this interface for solvers, we can just
690  // assign whatever local id to any of the global ids. But it is local
691  // so we don't need to worry about processor numbering for the local
692  // point ids.
693  std::unordered_map<dof_id_type,dof_id_type> node_map;
694  std::unordered_map<dof_id_type,dof_id_type> elem_map;
695  std::map<dof_id_type,unsigned int> scalar_map;
696 
697  // First we tell the PetscSection about all of our points that have
698  // dofs associated with them.
699  this->set_point_range_in_section(system, section, node_map, elem_map, scalar_map);
700 
701  // Now we can build up the dofs per "point" in the PetscSection
702  this->add_dofs_to_section(system, section, node_map, elem_map, scalar_map);
703 
704  // Final setup of PetscSection
705  // Until Matt Knepley finishes implementing the commented out function
706  // below, the PetscSection will be assuming node-major ordering
707  // so let's throw an error if the user tries to use this without
708  // node-major order
709  libmesh_error_msg_if(!libMesh::on_command_line("--node-major-dofs"),
710  "ERROR: Must use --node-major-dofs with PetscSection!");
711 
712  //else if (!system.identify_variable_groups())
713  // ierr = PetscSectionSetUseFieldOffsets(section,PETSC_TRUE);LIBMESH_CHKERR(ierr);
714  //else
715  // {
716  // std::string msg = "ERROR: Only node-major or var-major ordering supported for PetscSection!\n";
717  // msg += " var-group-major ordering not supported!\n";
718  // msg += " Must use --node-major-dofs or set System::identify_variable_groups() = false!\n";
719  // libmesh_error_msg(msg);
720  // }
721 
722  LibmeshPetscCall2(system.comm(), PetscSectionSetUp(section));
723 
724  // Sanity checking at least that local_n_dofs match
725  libmesh_assert_equal_to(system.n_local_dofs(),this->check_section_n_dofs(section));
726  }

◆ build_sf()

void libMesh::PetscDMWrapper::build_sf ( const System &  system,
PetscSF &  star_forest 
)
private

Takes System, empty PetscSF and populates the PetscSF.

The PetscSF (star forest) is a cousin of PetscSection. PetscSection has the DoF info, and PetscSF gives the parallel distribution of the DoF info. So PetscSF should only be necessary when we have more than one MPI rank. Essentially, we are copying the DofMap.send_list(): we are specifying the local dofs, the rank that communicates each off-processor dof's info, and that dof's local index on that rank.

https://jedbrown.org/files/StarForest.pdf
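
The following standalone sketch (plain PETSc, not libMesh code; it assumes PETSc >= 3.18 for PetscCall(), and the dof counts, owner rank, and remote indices are invented) shows the kind of graph build_sf() assembles from DofMap::get_send_list(): each ghost dof becomes a leaf that records the (owner rank, index on owner) of its root.

#include <petscsf.h>

// Hypothetical helper: build a toy star forest for one rank that owns 4 dofs
// and ghosts 2 dofs owned by rank 0.
PetscErrorCode build_toy_sf(MPI_Comm comm, PetscSF * sf)
{
  const PetscInt n_owned  = 4;
  const PetscInt n_leaves = 2;
  PetscInt    leaf_local[2]  = {n_owned + 0, n_owned + 1};  // local slots holding the ghost copies
  PetscSFNode leaf_remote[2];                               // (owner rank, index on owner rank)

  leaf_remote[0].rank = 0;  leaf_remote[0].index = 1;
  leaf_remote[1].rank = 0;  leaf_remote[1].index = 3;

  PetscCall(PetscSFCreate(comm, sf));
  // As in build_sf() below, the root space counts every local storage slot
  // (owned + ghost); the leaves say where each ghost value comes from.
  PetscCall(PetscSFSetGraph(*sf, n_owned + n_leaves, n_leaves,
                            leaf_local, PETSC_COPY_VALUES,
                            leaf_remote, PETSC_COPY_VALUES));
  return 0;
}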

Definition at line 728 of file petsc_dm_wrapper.C.

References libMesh::ParallelObject::comm(), libMesh::DofMap::dof_owner(), libMesh::DofMapBase::first_dof(), TIMPI::Communicator::get(), libMesh::System::get_dof_map(), libMesh::DofMap::get_send_list(), libMesh::index_range(), and libMesh::DofMap::n_local_dofs().

Referenced by init_petscdm().

729  {
730  LOG_SCOPE ("build_sf()", "PetscDMWrapper");
731 
732  const DofMap & dof_map = system.get_dof_map();
733 
734  const std::vector<dof_id_type> & send_list = dof_map.get_send_list();
735 
736  // Number of ghost dofs that send information to this processor
737  const PetscInt n_leaves = cast_int<PetscInt>(send_list.size());
738 
739  // Number of local dofs, including ghost dofs
740  const PetscInt n_roots = dof_map.n_local_dofs() + n_leaves;
741 
742  // This is the vector of dof indices coming from other processors
743  // We need to give this to the PetscSF
744  // We'll be extra paranoid about this ugly double cast
745  static_assert(sizeof(PetscInt) == sizeof(dof_id_type),"PetscInt is not a dof_id_type!");
746  PetscInt * local_dofs = reinterpret_cast<PetscInt *>(const_cast<dof_id_type *>(send_list.data()));
747 
748 #ifdef DEBUG
749  // PETSc 3.18 and above don't want duplicate entries here ... and
750  // frankly we shouldn't have duplicates in the first place!
751  {
752  std::set<dof_id_type> send_set(send_list.begin(), send_list.end());
753  libmesh_assert_equal_to(send_list.size(), send_set.size());
754  }
755 #endif
756 
757  // This is the vector of PetscSFNode's for the local_dofs.
758  // For each entry in local_dof, we have to supply the rank from which
759  // that dof stems and its local index on that rank.
760  // PETSc documentation here:
761  // http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PetscSF/PetscSFNode.html
762  std::vector<PetscSFNode> sf_nodes(send_list.size());
763 
764  for (auto i : index_range(send_list))
765  {
766  dof_id_type incoming_dof = send_list[i];
767 
768  const processor_id_type rank = dof_map.dof_owner(incoming_dof);
769 
770  // Dofs are sorted and continuous on the processor so local index
771  // is counted up from the first dof on the processor.
772  PetscInt index = incoming_dof - dof_map.first_dof(rank);
773 
774  sf_nodes[i].rank = rank; /* Rank of owner */
775  sf_nodes[i].index = index;/* Index of dof on rank */
776  }
777 
778  PetscSFNode * remote_dofs = sf_nodes.data();
779 
780  LibmeshPetscCall2(system.comm(), PetscSFCreate(system.comm().get(), &star_forest));
781 
782  // TODO: We should create pointers to arrays so we don't have to copy
783  // and then can use PETSC_OWN_POINTER where PETSc will take ownership
784  // and delete the memory for us. But then we'd have to use PetscMalloc.
785  LibmeshPetscCall2(system.comm(), PetscSFSetGraph(star_forest,
786  n_roots,
787  n_leaves,
788  local_dofs,
789  PETSC_COPY_VALUES,
790  remote_dofs,
791  PETSC_COPY_VALUES));
792  }

◆ check_section_n_dofs()

dof_id_type libMesh::PetscDMWrapper::check_section_n_dofs ( PetscSection &  section)
private

Helper function to sanity check PetscSection construction. The PetscSection contains local dof information.

This helper function facilitates sanity checking that the section in fact contains only n_local_dofs.

Definition at line 991 of file petsc_dm_wrapper.C.

Referenced by build_section().

992  {
993  PetscInt n_local_dofs = 0;
994 
995  // Grab the starting and ending points from the section
996  PetscInt pstart, pend;
997  PetscErrorCode ierr = PetscSectionGetChart(section, &pstart, &pend);
998  if (ierr)
999  libmesh_error();
1000 
1001  // Count up the n_dofs for each point from the section
1002  for( PetscInt p = pstart; p < pend; p++ )
1003  {
1004  PetscInt n_dofs;
1005  ierr = PetscSectionGetDof(section,p,&n_dofs);
1006  if (ierr)
1007  libmesh_error();
1008  n_local_dofs += n_dofs;
1009  }
1010 
1011  static_assert(sizeof(PetscInt) == sizeof(dof_id_type),"PetscInt is not a dof_id_type!");
1012  return n_local_dofs;
1013  }

◆ clear()

void libMesh::PetscDMWrapper::clear ( )

Destroys and clears all built DM-related data.

Definition at line 386 of file petsc_dm_wrapper.C.

References _ctx_vec, _dms, _mesh_dof_loc_sizes, _mesh_dof_sizes, _pmtx_vec, _sections, _star_forests, _subpmtx_vec, and _vec_vec.

Referenced by libMesh::PetscDiffSolver::clear().

387  {
388  // Calls custom deleters
389  _dms.clear();
390  _sections.clear();
391 
392  // We don't manage the lifetime of the PetscSF objects
393  _star_forests.clear();
394 
395  // The relevant C++ destructors are called for these objects
396  // automatically.
397  _pmtx_vec.clear();
398  _subpmtx_vec.clear();
399  _vec_vec.clear();
400 
401  // These members are trivially clear()able.
402  _ctx_vec.clear();
403  _mesh_dof_sizes.clear();
404  _mesh_dof_loc_sizes.clear();
405  }

◆ get_dm()

DM& libMesh::PetscDMWrapper::get_dm ( unsigned int  level)
inlineprivate

Get reference to DM for the given mesh level.

init_dm_data() should be called before this function.

Definition at line 184 of file petsc_dm_wrapper.h.

References _dms.

Referenced by init_and_attach_petscdm(), and init_petscdm().

185  { libmesh_assert_less(level, _dms.size());
186  return *_dms[level]; }

◆ get_section()

PetscSection& libMesh::PetscDMWrapper::get_section ( unsigned int  level)
inlineprivate

Get reference to PetscSection for the given mesh level.

init_dm_data() should be called before this function.

Definition at line 192 of file petsc_dm_wrapper.h.

References _sections.

Referenced by init_petscdm().

193  { libmesh_assert_less(level, _sections.size());
194  return *_sections[level]; }

◆ get_star_forest()

PetscSF& libMesh::PetscDMWrapper::get_star_forest ( unsigned int  level)
inlineprivate

Get reference to PetscSF for the given mesh level.

init_dm_data() should be called before this function.

Definition at line 200 of file petsc_dm_wrapper.h.

References _star_forests.

Referenced by init_petscdm().

201  { libmesh_assert_less(level, _star_forests.size());
202  return _star_forests[level]; }

◆ init_and_attach_petscdm() [1/2]

void libMesh::PetscDMWrapper::init_and_attach_petscdm ( System &  system,
SNES  snes 
)

Definition at line 653 of file petsc_dm_wrapper.C.

References libMesh::ParallelObject::comm(), get_dm(), init_petscdm(), and libMesh::MeshTools::n_levels().

Referenced by libMesh::PetscDiffSolver::setup_petsc_data().

654  {
655  const auto n_levels = this->init_petscdm(system);
656 
657  // Lastly, give SNES the finest level DM
658  DM & dm = this->get_dm(n_levels-1);
659  LibmeshPetscCall2(system.comm(), SNESSetDM(snes, dm));
660  }

◆ init_and_attach_petscdm() [2/2]

void libMesh::PetscDMWrapper::init_and_attach_petscdm ( System &  system,
KSP  ksp 
)

Definition at line 662 of file petsc_dm_wrapper.C.

References libMesh::ParallelObject::comm(), get_dm(), init_petscdm(), and libMesh::MeshTools::n_levels().

663  {
664  const auto n_levels = this->init_petscdm(system);
665 
666  // Lastly, give KSP the finest level DM
667  DM & dm = this->get_dm(n_levels-1);
668  LibmeshPetscCall2(system.comm(), KSPSetDM(ksp, dm));
669  }

◆ init_dm_data()

void libMesh::PetscDMWrapper::init_dm_data ( unsigned int  n_levels,
const Parallel::Communicator &  comm 
)
private

Init all the n_mesh_level dependent data structures.

Definition at line 1015 of file petsc_dm_wrapper.C.

References _ctx_vec, _dms, _mesh_dof_loc_sizes, _mesh_dof_sizes, _pmtx_vec, _sections, _star_forests, _subpmtx_vec, _vec_vec, and libMesh::MeshTools::n_levels().

Referenced by init_petscdm().

1016  {
1017  _dms.resize(n_levels);
1018  _sections.resize(n_levels);
1019  _star_forests.resize(n_levels);
1020  _ctx_vec.resize(n_levels);
1021  _pmtx_vec.resize(n_levels);
1022  _subpmtx_vec.resize(n_levels);
1023  _vec_vec.resize(n_levels);
1024  _mesh_dof_sizes.resize(n_levels);
1025  _mesh_dof_loc_sizes.resize(n_levels);
1026 
1027  for( unsigned int i = 0; i < n_levels; i++ )
1028  {
1029  // Call C++ object constructors
1030  _pmtx_vec[i] = std::make_unique<PetscMatrix<Number>>(comm);
1031  _subpmtx_vec[i] = std::make_unique<PetscMatrix<Number>>(comm);
1032  _vec_vec[i] = std::make_unique<PetscVector<Number>>(comm);
1033  }
1034  }

◆ init_petscdm()

unsigned int libMesh::PetscDMWrapper::init_petscdm ( System &  system)
private

Initialize the PETSc DM and return the number of geometric multigrid levels.

Definition at line 407 of file petsc_dm_wrapper.C.

References _ctx_vec, _dms, _mesh_dof_loc_sizes, _mesh_dof_sizes, _pmtx_vec, _subpmtx_vec, _vec_vec, build_section(), build_sf(), libMesh::ParallelObject::comm(), libMesh::command_line_next(), libMesh::DofMap::distribute_dofs(), libMesh::DofMapBase::end_old_dof(), libMesh::DofMapBase::first_old_dof(), TIMPI::Communicator::get(), get_dm(), libMesh::System::get_dof_map(), libMesh::System::get_mesh(), get_section(), get_star_forest(), init_dm_data(), libMesh::libmesh_assert(), libMesh::libmesh_petsc_DMCoarsen(), libMesh::libmesh_petsc_DMCreateInterpolation(), libMesh::libmesh_petsc_DMCreateRestriction(), libMesh::libmesh_petsc_DMCreateSubDM(), libMesh::libmesh_petsc_DMRefine(), libMesh::DofMap::local_variable_indices(), mesh, libMesh::DofMap::n_dofs(), libMesh::MeshTools::n_levels(), libMesh::DofMap::n_local_dofs(), libMesh::ParallelObject::n_processors(), n_vars, libMesh::System::n_vars(), libMesh::DofMap::prepare_send_list(), libMesh::System::projection_matrix(), libMesh::System::reinit_constraints(), libMesh::MeshRefinement::uniformly_coarsen(), and libMesh::MeshRefinement::uniformly_refine().

Referenced by init_and_attach_petscdm().

408  {
409  LOG_SCOPE ("init_and_attach_petscdm()", "PetscDMWrapper");
410 
411  MeshBase & mesh = system.get_mesh(); // Convenience
412  MeshRefinement mesh_refinement(mesh); // Used for swapping between grids
413 
414  // There's no need for these code paths while traversing the hierarchy
415  mesh.allow_renumbering(false);
416  mesh.allow_remote_element_removal(false);
417  mesh.partitioner() = nullptr;
418 
419  // First walk over the active local elements and see how many maximum MG levels we can construct
420 /*
421  unsigned int n_levels = 0;
422  for ( auto & elem : mesh.active_local_element_ptr_range() )
423  {
424  if ( elem->level() > n_levels )
425  n_levels = elem->level();
426  }
427  // On coarse grids some processors may have no active local elements,
428  // these processors shouldn't make projections
429  if (n_levels >= 1)
430  n_levels += 1;
431 */
432 
433  unsigned int n_levels = MeshTools::n_levels(mesh);
434 
435  // How many MG levels did the user request?
436  unsigned int usr_requested_mg_lvls = 0;
437  usr_requested_mg_lvls = command_line_next("-pc_mg_levels", usr_requested_mg_lvls);
438 
439  // Only construct however many levels were requested if something was actually requested
440  if ( usr_requested_mg_lvls != 0 )
441  {
442  // Don't request more than the available number of levels on the mesh; require at least 2 levels
443  libmesh_assert_less_equal( usr_requested_mg_lvls, n_levels );
444  libmesh_assert( usr_requested_mg_lvls > 1 );
445 
446  n_levels = usr_requested_mg_lvls;
447  }
448  else
449  {
450  // if -pc_mg_levels is not specified we just construct fieldsplit related
451  // structures on the finest mesh.
452  n_levels = 1;
453  }
454 
455 
456  // Init data structures: data[0] ~ coarse grid, data[n_levels-1] ~ fine grid
457  this->init_dm_data(n_levels, system.comm());
458 
459  // Step 1. contract : all active elements have no children
460  mesh.contract();
461 
462  // Start on the finest grid. Construct DM data and stash some info for
463  // later projection_matrix and vec sizing
464  for(unsigned int level = n_levels; level >= 1; level--)
465  {
466  // Save the n_fine_dofs before coarsening for later projection matrix sizing
467  _mesh_dof_sizes[level-1] = system.get_dof_map().n_dofs();
468  _mesh_dof_loc_sizes[level-1] = system.get_dof_map().n_local_dofs();
469 
470  // Get refs to things we will fill
471  DM & dm = this->get_dm(level-1);
472  PetscSection & section = this->get_section(level-1);
473  PetscSF & star_forest = this->get_star_forest(level-1);
474 
475  // The shell will contain other DM info
476  LibmeshPetscCall2(system.comm(), DMShellCreate(system.comm().get(), &dm));
477 
478  // Set the DM embedding dimension to help PetscDS (Discrete System)
479  LibmeshPetscCall2(system.comm(), DMSetCoordinateDim(dm, mesh.mesh_dimension()));
480 
481  // Build the PetscSection and attach it to the DM
482  this->build_section(system, section);
483 #if PETSC_VERSION_LESS_THAN(3,10,0)
484  LibmeshPetscCall2(system.comm(), DMSetDefaultSection(dm, section));
485 #elif PETSC_VERSION_LESS_THAN(3,23,0)
486  LibmeshPetscCall2(system.comm(), DMSetSection(dm, section));
487 #else
488  LibmeshPetscCall2(system.comm(), DMSetLocalSection(dm, section));
489 #endif
490 
491  // We only need to build the star forest if we're in a parallel environment
492  if (system.n_processors() > 1)
493  {
494  // Build the PetscSF and attach it to the DM
495  this->build_sf(system, star_forest);
496 #if PETSC_VERSION_LESS_THAN(3,12,0)
497  LibmeshPetscCall2(system.comm(), DMSetDefaultSF(dm, star_forest));
498 #else
499  LibmeshPetscCall2(system.comm(), DMSetSectionSF(dm, star_forest));
500 #endif
501  }
502 
503  // Set PETSC's Restriction, Interpolation, Coarsen and Refine functions for the current DM
504  LibmeshPetscCall2(system.comm(), DMShellSetCreateInterpolation ( dm, libmesh_petsc_DMCreateInterpolation ));
505 
506  // Not implemented. For now we rely on galerkin style restrictions
507  bool supply_restriction = false;
508  if (supply_restriction)
509  LibmeshPetscCall2(system.comm(), DMShellSetCreateRestriction ( dm, libmesh_petsc_DMCreateRestriction ));
510 
511  LibmeshPetscCall2(system.comm(), DMShellSetCoarsen ( dm, libmesh_petsc_DMCoarsen ));
512 
513  LibmeshPetscCall2(system.comm(), DMShellSetRefine ( dm, libmesh_petsc_DMRefine ));
514 
515  LibmeshPetscCall2(system.comm(), DMShellSetCreateSubDM(dm, libmesh_petsc_DMCreateSubDM));
516 
517  // Uniformly coarsen if not the coarsest grid and distribute dof info.
518  if ( level != 1 )
519  {
520  LOG_CALL("PDM_coarsen", "PetscDMWrapper", mesh_refinement.uniformly_coarsen(1));
521  LOG_CALL("PDM_dist_dof", "PetscDMWrapper", system.get_dof_map().distribute_dofs(mesh));
522  LOG_CALL("PDM_prep_send", "PetscDMWrapper", system.get_dof_map().prepare_send_list());
523  }
524  } // End PETSc data structure creation
525 
526  // Now fill the corresponding internal PetscDMContext for each created DM
527  for( unsigned int i=1; i <= n_levels; i++ )
528  {
529  // Set context dimension
530  _ctx_vec[i-1].mesh_dim = mesh.mesh_dimension();
531 
532  // Create and attach a sized vector to the current ctx
533  _vec_vec[i-1]->init( _mesh_dof_sizes[i-1] );
534  _ctx_vec[i-1].current_vec = _vec_vec[i-1].get();
535 
536  // Set a global DM to be used as reference when using fieldsplit
537  _ctx_vec[i-1].global_dm = &(this->get_dm(n_levels-1));
538 
539  if (n_levels > 1 )
540  {
541  // Set pointers to surrounding dm levels to help PETSc refine/coarsen
542  if ( i == 1 ) // we're at the coarsest mesh
543  {
544  _ctx_vec[i-1].coarser_dm = nullptr;
545  _ctx_vec[i-1].finer_dm = _dms[1].get();
546  }
547  else if( i == n_levels ) // we're at the finest mesh
548  {
549  _ctx_vec[i-1].coarser_dm = _dms[_dms.size() - 2].get();
550  _ctx_vec[i-1].finer_dm = nullptr;
551  }
552  else // we're in the middle of the hierarchy
553  {
554  _ctx_vec[i-1].coarser_dm = _dms[i-2].get();
555  _ctx_vec[i-1].finer_dm = _dms[i].get();
556  }
557  }
558  } // End context creation
559 
560  // Attach a vector and context to each DM
561  if ( n_levels >= 1 )
562  {
563  for ( unsigned int i = 1; i <= n_levels ; ++i)
564  {
565  DM & dm = this->get_dm(i-1);
566 
567  LibmeshPetscCall2(system.comm(), DMShellSetGlobalVector( dm, _ctx_vec[i-1].current_vec->vec() ));
568 
569  LibmeshPetscCall2(system.comm(), DMShellSetContext( dm, &_ctx_vec[i-1] ));
570  }
571  }
572 
573  // DM structures created; now we need projection matrices if GMG is requested.
574  // To prepare for projection creation go to second coarsest mesh so we can utilize
575  // old_dof_indices information in the projection creation.
576  if (n_levels > 1 )
577  {
578  // First, stash the coarse dof indices for FS purposes
579  unsigned int n_vars = system.n_vars();
580  _ctx_vec[0].dof_vec.resize(n_vars);
581 
582  for( unsigned int v = 0; v < n_vars; v++ )
583  {
584  std::vector<numeric_index_type> di;
585  system.get_dof_map().local_variable_indices(di, system.get_mesh(), v);
586  _ctx_vec[0].dof_vec[v] = di;
587  }
588 
589  LOG_CALL ("PDM_refine", "PetscDMWrapper", mesh_refinement.uniformly_refine(1));
590  LOG_CALL ("PDM_dist_dof", "PetscDMWrapper", system.get_dof_map().distribute_dofs(mesh));
591  LOG_CALL ("PDM_cnstrnts", "PetscDMWrapper", system.reinit_constraints());
592  LOG_CALL ("PDM_prep_send", "PetscDMWrapper", system.get_dof_map().prepare_send_list());
593  }
594 
595  // Create the Interpolation Matrices between adjacent mesh levels
596  for ( unsigned int i = 1 ; i < n_levels ; ++i )
597  {
598  if ( i != n_levels )
599  {
600  // Stash the rest of the dof indices
601  unsigned int n_vars = system.n_vars();
602  _ctx_vec[i].dof_vec.resize(n_vars);
603 
604  for( unsigned int v = 0; v < n_vars; v++ )
605  {
606  std::vector<numeric_index_type> di;
607  system.get_dof_map().local_variable_indices(di, system.get_mesh(), v);
608  _ctx_vec[i].dof_vec[v] = di;
609  }
610 
611  unsigned int ndofs_c = _mesh_dof_sizes[i-1];
612  unsigned int ndofs_f = _mesh_dof_sizes[i];
613 
614  // Create the Interpolation matrix and set its pointer
615  _ctx_vec[i-1].K_interp_ptr = _pmtx_vec[i-1].get();
616  _ctx_vec[i-1].K_sub_interp_ptr = _subpmtx_vec[i-1].get();
617 
618  unsigned int ndofs_local = system.get_dof_map().n_local_dofs();
619  unsigned int ndofs_old_first = system.get_dof_map().first_old_dof();
620  unsigned int ndofs_old_end = system.get_dof_map().end_old_dof();
621  unsigned int ndofs_old_size = ndofs_old_end - ndofs_old_first;
622 
623  // Init and zero the matrix
624  _ctx_vec[i-1].K_interp_ptr->init(ndofs_f, ndofs_c, ndofs_local, ndofs_old_size, 30 , 20);
625 
626  // Disable Mat destruction since PETSc destroys these for us
627  _ctx_vec[i-1].K_interp_ptr->set_destroy_mat_on_exit(false);
628  _ctx_vec[i-1].K_sub_interp_ptr->set_destroy_mat_on_exit(false);
629 
630  // TODO: Projection matrix sparsity pattern?
631  //MatSetOption(_ctx_vec[i-1].K_interp_ptr->mat(), MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE);
632 
633  // Compute the interpolation matrix and set K_interp_ptr
634  LOG_CALL ("PDM_proj_mat", "PetscDMWrapper", system.projection_matrix(*_ctx_vec[i-1].K_interp_ptr));
635 
636  // Always close matrix that contains altered data
637  _ctx_vec[i-1].K_interp_ptr->close();
638  }
639 
640  // Move to next grid to make next projection
641  if ( i != n_levels - 1 )
642  {
643  LOG_CALL ("PDM_refine", "PetscDMWrapper", mesh_refinement.uniformly_refine(1));
644  LOG_CALL ("PDM_dist_dof", "PetscDMWrapper", system.get_dof_map().distribute_dofs(mesh));
645  LOG_CALL ("PDM_cnstrnts", "PetscDMWrapper", system.reinit_constraints());
646  LOG_CALL ("PDM_prep_send", "PetscDMWrapper", system.get_dof_map().prepare_send_list());
647  }
648  } // End create transfer operators. System back at the finest grid
649 
650  return n_levels;
651  }

◆ set_point_range_in_section()

void libMesh::PetscDMWrapper::set_point_range_in_section ( const System &  system,
PetscSection &  section,
std::unordered_map< dof_id_type, dof_id_type > &  node_map,
std::unordered_map< dof_id_type, dof_id_type > &  elem_map,
std::map< dof_id_type, unsigned int > &  scalar_map 
)
private

Helper function for build_section.

This function will count how many "points" on the current processor have DoFs associated with them and give that count to PETSc. We also cache the mapping between each global node id and our local count in this function, because we will need that local number again in the add_dofs_to_section function.

Definition at line 794 of file petsc_dm_wrapper.C.

References libMesh::ParallelObject::comm(), libMesh::DofMap::dof_indices(), libMesh::FEType::family, libMesh::System::get_dof_map(), libMesh::System::get_mesh(), libMesh::libmesh_assert(), libMesh::DofMap::local_index(), libMesh::make_range(), mesh, libMesh::DofMap::n_local_dofs(), libMesh::ParallelObject::n_processors(), libMesh::DofMap::n_SCALAR_dofs(), libMesh::System::n_vars(), libMesh::System::number(), libMesh::ParallelObject::processor_id(), libMesh::SCALAR, libMesh::Variable::type(), and libMesh::System::variable().

Referenced by build_section().

799  {
800  // We're expecting this to be empty coming in
801  libmesh_assert(node_map.empty());
802 
803  // We need to count up the number of active "points" on this processor.
804  // Nominally, a "point" in PETSc parlance is a geometric object that can
805  // hold DoFs, i.e. node, edge, face, elem. Since we handle the mesh and are only
806  // interested in solvers, the only thing PETSc needs is a unique *local* number
807  // for each "point" that has active DoFs; note however this local numbering
808  // we construct must be continuous.
809  //
810  // In libMesh, for most finite elements, we just associate those DoFs with the
811  // geometric nodes. So we can loop over the nodes on this processor and check
812  // if any of the fields have active DoFs on that node.
813  // If so, then we tell PETSc about that "point". At this stage, we just need
814  // to count up how many active "points" we have and cache the local number to global id
815  // mapping.
816 
817 
818  // These will be our local counters. pstart should always be zero.
819  // pend will track our local "point" count.
820  // If we're on a processor who coarsened the mesh to have no local elements,
821  // we should make an empty PetscSection. An empty PetscSection is specified
822  // by passing [0,0) to the PetscSectionSetChart call at the end. So, if we
823  // have nothing on this processor, these are the correct values to pass to
824  // PETSc.
825  dof_id_type pstart = 0;
826  dof_id_type pend = 0;
827 
828  const MeshBase & mesh = system.get_mesh();
829 
830  const DofMap & dof_map = system.get_dof_map();
831 
832  // If we don't have any local dofs, then there's nothing to tell to the PetscSection
833  if (dof_map.n_local_dofs() > 0)
834  {
835  // Conservative estimate of space needed so we don't thrash
836  node_map.reserve(mesh.n_local_nodes());
837  elem_map.reserve(mesh.n_active_local_elem());
838 
839  // We loop over active elements and then cache the global/local node mapping to make sure
840  // we only count active nodes. For example, if we're calling this function and we're
841  // not the finest level in the Mesh tree, we don't want to include nodes of child elements
842  // that aren't active on this level.
843  for (const auto & elem : mesh.active_local_element_ptr_range())
844  {
845  for (const Node & node : elem->node_ref_range())
846  {
847  // get the global id number of local node n
848 
849  // Only register nodes with the PetscSection if they have dofs that belong to
850  // this processor. Even though we're active local elements, the dofs associated
851  // with the node may belong to a different processor. The processor who owns
852  // those dofs will register that node with the PetscSection on that processor.
853  std::vector<dof_id_type> node_dof_indices;
854  dof_map.dof_indices( &node, node_dof_indices );
855  if( !node_dof_indices.empty() && dof_map.local_index(node_dof_indices[0]) )
856  {
857 #ifndef NDEBUG
858  // We're assuming that if the first dof for this node belongs to this processor,
859  // then all of them do.
860  for( auto dof : node_dof_indices )
861  libmesh_assert(dof_map.local_index(dof));
862 #endif
863  // Cache the global/local mapping if we haven't already
864  // Then increment our local count
865  dof_id_type node_id = node.id();
866  if( node_map.count(node_id) == 0 )
867  {
868  node_map.emplace(node_id, pend);
869  pend++;
870  }
871  }
872  }
873 
874  // Some finite elements, e.g. Hierarchic, associate element interior DoFs with the element
875  // rather than the node (since we ought to be able to use Hierarchic elements on a QUAD4,
876  // which has no interior node). Thus, we also need to check element interiors for DoFs
877  // as well and, if the finite element has them, we also need to count the Elem in our
878  // "point" accounting.
879  if( elem->n_dofs(system.number()) > 0 )
880  {
881  dof_id_type elem_id = elem->id();
882  elem_map.emplace(elem_id, pend);
883  pend++;
884  }
885  }
886 
887  // SCALAR dofs live on the "last" processor, so only work there if there are any
888  if( dof_map.n_SCALAR_dofs() > 0 && (system.processor_id() == (system.n_processors()-1)) )
889  {
890  // Loop through all the variables and cache the scalar ones. We cache the
891  // SCALAR variable index along with the local point to make it easier when
892  // we have to register dofs with the PetscSection
893  for (auto v : make_range(system.n_vars()))
894  {
895  if( system.variable(v).type().family == SCALAR )
896  {
897  scalar_map.emplace(pend, v);
898  pend++;
899  }
900  }
901  }
902 
903  }
904 
905  LibmeshPetscCall2(system.comm(), PetscSectionSetChart(section, pstart, pend));
906  }

Member Data Documentation

◆ _ctx_vec

std::vector<PetscDMContext> libMesh::PetscDMWrapper::_ctx_vec
private

Vector of internal PetscDM context structs for all grid levels. Pointers to these C++ objects are passed to DMShellSetContext(); they are cleaned up automatically by their destructors.

Definition at line 162 of file petsc_dm_wrapper.h.

Referenced by clear(), init_dm_data(), and init_petscdm().

◆ _dms

std::vector<WrappedPetsc<DM> > libMesh::PetscDMWrapper::_dms
private

Vector of DMs for all grid levels.

These are PETSc objects created by calling DMShellCreate(), so we are responsible for cleaning them up.

Definition at line 122 of file petsc_dm_wrapper.h.

Referenced by clear(), get_dm(), init_dm_data(), and init_petscdm().

◆ _mesh_dof_loc_sizes

std::vector<unsigned int> libMesh::PetscDMWrapper::_mesh_dof_loc_sizes
private

Stores n_local_dofs for each grid level, to be used for projection vector sizing.

Definition at line 175 of file petsc_dm_wrapper.h.

Referenced by clear(), init_dm_data(), and init_petscdm().

◆ _mesh_dof_sizes

std::vector<unsigned int> libMesh::PetscDMWrapper::_mesh_dof_sizes
private

Stores n_dofs for each grid level, to be used for projection matrix sizing.

Definition at line 172 of file petsc_dm_wrapper.h.

Referenced by clear(), init_dm_data(), and init_petscdm().

◆ _pmtx_vec

std::vector<std::unique_ptr<PetscMatrixBase<Number> > > libMesh::PetscDMWrapper::_pmtx_vec
private

Vector of projection matrices for all grid levels.

These are C++ objects; they are cleaned up automatically by their destructors.

Definition at line 148 of file petsc_dm_wrapper.h.

Referenced by clear(), init_dm_data(), and init_petscdm().

◆ _sections

std::vector<WrappedPetsc<PetscSection> > libMesh::PetscDMWrapper::_sections
private

Vector of PetscSections for all grid levels.

These are PETSc objects which are attached to the DM by calling DMSetLocalSection(). While the DM takes care of destroying existing PetscSections in calls to DMSetLocalSection(), it does not appear to clean up PetscSections it holds when it is destroyed, so we manage their lifetimes using WrappedPetsc objects.

Definition at line 132 of file petsc_dm_wrapper.h.

Referenced by clear(), get_section(), and init_dm_data().

◆ _star_forests

std::vector<PetscSF> libMesh::PetscDMWrapper::_star_forests
private

Vector of star forests for all grid levels.

These are PETSc objects which are attached to the DM by calling DMSetSectionSF(). The DM seems to take care of cleaning these up itself as far as I can tell, so we do not try to manage their lifetime in any way.

Definition at line 141 of file petsc_dm_wrapper.h.

Referenced by clear(), get_star_forest(), and init_dm_data().

◆ _subpmtx_vec

std::vector<std::unique_ptr<PetscMatrixBase<Number> > > libMesh::PetscDMWrapper::_subpmtx_vec
private

Vector of sub projection matrices for all grid levels for fieldsplit.

These are C++ objects; they are cleaned up automatically by their destructors.

Definition at line 155 of file petsc_dm_wrapper.h.

Referenced by clear(), init_dm_data(), and init_petscdm().

◆ _vec_vec

std::vector<std::unique_ptr<PetscVector<Number> > > libMesh::PetscDMWrapper::_vec_vec
private

Vector of solution vectors for all grid levels.

These are C++ objects; they are cleaned up automatically by their destructors.

Definition at line 169 of file petsc_dm_wrapper.h.

Referenced by clear(), init_dm_data(), and init_petscdm().


The documentation for this class was generated from the following files:
petsc_dm_wrapper.h
petsc_dm_wrapper.C