libMesh
libMesh::PetscDMWrapper Class Reference

This class defines a wrapper around the PETSc DM infrastructure.

#include <petsc_dm_wrapper.h>

Public Member Functions

 PetscDMWrapper ()=default
 
 ~PetscDMWrapper ()
 
void clear ()
 Destroys and clears all built DM-related data.
 
void init_and_attach_petscdm (System &system, SNES &snes)
 

Private Member Functions

void init_dm_data (unsigned int n_levels, const Parallel::Communicator &comm)
 Init all the n_mesh_level dependent data structures.
 
DM & get_dm (unsigned int level)
 Get reference to DM for the given mesh level.
 
PetscSection & get_section (unsigned int level)
 Get reference to PetscSection for the given mesh level.
 
PetscSF & get_star_forest (unsigned int level)
 Get reference to PetscSF for the given mesh level.
 
void build_section (const System &system, PetscSection &section)
 Takes the System and an empty PetscSection and populates the PetscSection.
 
void build_sf (const System &system, PetscSF &star_forest)
 Takes the System and an empty PetscSF and populates the PetscSF.
 
void set_point_range_in_section (const System &system, PetscSection &section, std::unordered_map< dof_id_type, dof_id_type > &node_map, std::unordered_map< dof_id_type, dof_id_type > &elem_map, std::map< dof_id_type, unsigned int > &scalar_map)
 Helper function for build_section.
 
void add_dofs_to_section (const System &system, PetscSection &section, const std::unordered_map< dof_id_type, dof_id_type > &node_map, const std::unordered_map< dof_id_type, dof_id_type > &elem_map, const std::map< dof_id_type, unsigned int > &scalar_map)
 Helper function for build_section.
 
dof_id_type check_section_n_dofs (PetscSection &section)
 Helper function to sanity check PetscSection construction.
 
void add_dofs_helper (const System &system, const DofObject &dof_object, dof_id_type local_id, PetscSection &section)
 Helper function to reduce code duplication when setting dofs in section.
 

Private Attributes

std::vector< std::unique_ptr< DM > > _dms
 Vector of DMs for all grid levels.
 
std::vector< std::unique_ptr< PetscSection > > _sections
 Vector of PetscSections for all grid levels.
 
std::vector< std::unique_ptr< PetscSF > > _star_forests
 Vector of star forests for all grid levels.
 
std::vector< std::unique_ptr< PetscMatrix< Number > > > _pmtx_vec
 Vector of projection matrices for all grid levels.
 
std::vector< std::unique_ptr< PetscMatrix< Number > > > _subpmtx_vec
 Vector of sub projection matrices for all grid levels for fieldsplit.
 
std::vector< std::unique_ptr< PetscDMContext > > _ctx_vec
 Vector of internal PetscDM context structs for all grid levels.
 
std::vector< std::unique_ptr< PetscVector< Number > > > _vec_vec
 Vector of solution vectors for all grid levels.
 
std::vector< unsigned int > _mesh_dof_sizes
 Stores n_dofs for each grid level, to be used for projection matrix sizing.
 
std::vector< unsigned int > _mesh_dof_loc_sizes
 Stores n_local_dofs for each grid level, to be used for projection vector sizing.
 

Detailed Description

This class defines a wrapper around the PETSc DM infrastructure.

By coordinating DM data structures with libMesh, we can use libMesh mesh hierarchies for geometric multigrid. Additionally, by setting the DM data, we can (with or without multigrid) define recursive fieldsplits of our variables.

Author
Paul T. Bauman, Boris Boutkov
Date
2018

Definition at line 95 of file petsc_dm_wrapper.h.
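A minimal usage sketch of the public interface is shown below, assuming a System named system that has already been initialized and a SNES named snes created by the nonlinear solver; the wrapper function name attach_geometric_multigrid is invented for illustration. In libMesh itself this sequence is driven by libMesh::PetscDiffSolver::setup_petsc_data(), so the code is illustrative rather than canonical.

  #include "libmesh/petsc_dm_wrapper.h"

  // Illustrative only: build the DM hierarchy for an already-initialized System
  // and hand the finest-level DM to the given SNES.
  void attach_geometric_multigrid (libMesh::System & system, SNES & snes)
  {
    libMesh::PetscDMWrapper dm_wrapper;

    // Builds per-level DM, PetscSection and PetscSF data and attaches the
    // finest-level DM to the SNES, so PETSc multigrid/fieldsplit options
    // (e.g. -pc_mg_levels) can see the mesh hierarchy.
    dm_wrapper.init_and_attach_petscdm(system, snes);

    // ... the SNES solve would run here, while dm_wrapper is still alive ...

    // Tidy up the wrapper's containers; PETSc destroys the attached objects.
    dm_wrapper.clear();
  }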

Constructor & Destructor Documentation

◆ PetscDMWrapper()

libMesh::PetscDMWrapper::PetscDMWrapper ( )
default

◆ ~PetscDMWrapper()

libMesh::PetscDMWrapper::~PetscDMWrapper ( )

Definition at line 425 of file petsc_dm_wrapper.C.

426  {
427  this->clear();
428  }

References clear().

Member Function Documentation

◆ add_dofs_helper()

void libMesh::PetscDMWrapper::add_dofs_helper ( const System &  system,
const DofObject &  dof_object,
dof_id_type  local_id,
PetscSection &  section 
)
private

Helper function to reduce code duplication when setting dofs in section.

Definition at line 1039 of file petsc_dm_wrapper.C.

1043  {
1044  unsigned int total_n_dofs_at_dofobject = 0;
1045 
1046  // We are assuming variables are also numbered 0 to n_vars()-1
1047  for (auto v : IntRange<unsigned int>(0, system.n_vars()))
1048  {
1049  unsigned int n_dofs_at_dofobject = dof_object.n_dofs(system.number(), v);
1050 
1051  if( n_dofs_at_dofobject > 0 )
1052  {
1053  PetscErrorCode ierr = PetscSectionSetFieldDof( section,
1054  local_id,
1055  v,
1056  n_dofs_at_dofobject );
1057 
1058  CHKERRABORT(system.comm().get(),ierr);
1059 
1060  total_n_dofs_at_dofobject += n_dofs_at_dofobject;
1061  }
1062  }
1063 
1064  libmesh_assert_equal_to(total_n_dofs_at_dofobject, dof_object.n_dofs(system.number()));
1065 
1066  PetscErrorCode ierr =
1067  PetscSectionSetDof( section, local_id, total_n_dofs_at_dofobject );
1068  CHKERRABORT(system.comm().get(),ierr);
1069  }

References libMesh::ParallelObject::comm(), libMesh::ierr, libMesh::DofObject::n_dofs(), libMesh::System::n_vars(), and libMesh::System::number().

Referenced by add_dofs_to_section().

◆ add_dofs_to_section()

void libMesh::PetscDMWrapper::add_dofs_to_section ( const System &  system,
PetscSection &  section,
const std::unordered_map< dof_id_type, dof_id_type > &  node_map,
const std::unordered_map< dof_id_type, dof_id_type > &  elem_map,
const std::map< dof_id_type, unsigned int > &  scalar_map 
)
private

Helper function for build_section.

This function will set the DoF info for each "point" in the PetscSection.

Definition at line 972 of file petsc_dm_wrapper.C.

977  {
978  const MeshBase & mesh = system.get_mesh();
979 
980  PetscErrorCode ierr;
981 
982  // Now we go through and add dof information for each "point".
983  //
984  // In libMesh, for most finite elements, we just associate those DoFs with the
985  // geometric nodes. So we can loop over the nodes we cached in the node_map and set
986  // the DoFs for each field for that node. We need to give PETSc the local id
987  // we built up in the node map.
988  for (const auto & nmap : node_map )
989  {
990  const dof_id_type global_node_id = nmap.first;
991  const dof_id_type local_node_id = nmap.second;
992 
993  libmesh_assert( mesh.query_node_ptr(global_node_id) );
994 
995  const Node & node = mesh.node_ref(global_node_id);
996 
997  this->add_dofs_helper(system,node,local_node_id,section);
998  }
999 
1000  // Some finite element types associate dofs with the element. So now we go through
1001  // any of those, adding the Elem as the point in the PetscSection along with its
1002  // accompanying dofs
1003  for (const auto & emap : elem_map )
1004  {
1005  const dof_id_type global_elem_id = emap.first;
1006  const dof_id_type local_elem_id = emap.second;
1007 
1008  libmesh_assert( mesh.query_elem_ptr(global_elem_id) );
1009 
1010  const Elem & elem = mesh.elem_ref(global_elem_id);
1011 
1012  this->add_dofs_helper(system,elem,local_elem_id,section);
1013  }
1014 
1015  // Now add any SCALAR dofs to the PetscSection
1016  // SCALAR dofs live on the "last" processor, so only work there if there are any
1017  if (system.processor_id() == (system.n_processors()-1))
1018  {
1019  for (const auto & smap : scalar_map )
1020  {
1021  const dof_id_type local_id = smap.first;
1022  const unsigned int scalar_var = smap.second;
1023 
1024  // The number of SCALAR dofs comes from the variable order
1025  const int n_dofs = system.variable(scalar_var).type().order.get_order();
1026 
1027  ierr = PetscSectionSetFieldDof( section, local_id, scalar_var, n_dofs );
1028  CHKERRABORT(system.comm().get(),ierr);
1029 
1030  // In the SCALAR case, there are no other variables associated with the "point",
1031  // so the total number of dofs on the point is the same as that for the field
1032  ierr = PetscSectionSetDof( section, local_id, n_dofs );
1033  CHKERRABORT(system.comm().get(),ierr);
1034  }
1035  }
1036 
1037  }

References add_dofs_helper(), libMesh::ParallelObject::comm(), libMesh::System::get_mesh(), libMesh::OrderWrapper::get_order(), libMesh::ierr, libMesh::libmesh_assert(), mesh, libMesh::ParallelObject::n_processors(), libMesh::FEType::order, libMesh::ParallelObject::processor_id(), libMesh::Variable::type(), and libMesh::System::variable().

Referenced by build_section().

◆ build_section()

void libMesh::PetscDMWrapper::build_section ( const System &  system,
PetscSection &  section 
)
private

Takes the System and an empty PetscSection and populates the PetscSection.

Take the System in its current state and an empty PetscSection and then populate the PetscSection. The PetscSection comprises global "point" numbers, where a "point" in PetscDM parlance is a geometric entity: node, edge, face, or element. Then, we also add the DoF numbering for each variable for each of the "points". The PetscSection, together with the PetscSF, will allow for recursive fieldsplits from the command line using PETSc.
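As an informal illustration of what the populated PetscSection contains, the sketch below (not part of this class; the function name inspect_section is invented) uses standard PETSc query routines to dump the section and walk its local chart.

  #include <petsc.h>

  // Sketch only: inspect a PetscSection after build_section() has populated it.
  void inspect_section (PetscSection section)
  {
    // Print the per-point layout on every rank.
    PetscSectionView(section, PETSC_VIEWER_STDOUT_WORLD);

    PetscInt pstart, pend;
    PetscSectionGetChart(section, &pstart, &pend);   // local "point" range [pstart, pend)

    for (PetscInt p = pstart; p < pend; ++p)
      {
        PetscInt ndofs, offset;
        PetscSectionGetDof(section, p, &ndofs);      // total dofs stored at this point
        PetscSectionGetOffset(section, p, &offset);  // where those dofs start locally
      }
  }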

Definition at line 729 of file petsc_dm_wrapper.C.

730  {
731  START_LOG ("build_section()", "PetscDMWrapper");
732 
733  PetscErrorCode ierr;
734  ierr = PetscSectionCreate(system.comm().get(),&section);
735  CHKERRABORT(system.comm().get(),ierr);
736 
737  // Tell the PetscSection about all of our System variables
738  ierr = PetscSectionSetNumFields(section,system.n_vars());
739  CHKERRABORT(system.comm().get(),ierr);
740 
741  // Set the actual names of all the field variables
742  for (auto v : IntRange<unsigned int>(0, system.n_vars()))
743  {
744  ierr = PetscSectionSetFieldName( section, v, system.variable_name(v).c_str() );
745  CHKERRABORT(system.comm().get(),ierr);
746  }
747 
748  // For building the section, we need to create local-to-global map
749  // of local "point" ids to the libMesh global id of that point.
750  // A "point" in PETSc nomenclature is a geometric object that can have
751  // dofs associated with it, e.g. Node, Edge, Face, Elem.
752  // The numbering PETSc expects is continuous for the local numbering.
753  // Since we're only using this interface for solvers, then we can just
754  // assign whatever local id to any of the global ids. But it is local
755  // so we don't need to worry about processor numbering for the local
756  // point ids.
757  std::unordered_map<dof_id_type,dof_id_type> node_map;
758  std::unordered_map<dof_id_type,dof_id_type> elem_map;
759  std::map<dof_id_type,unsigned int> scalar_map;
760 
761  // First we tell the PetscSection about all of our points that have
762  // dofs associated with them.
763  this->set_point_range_in_section(system, section, node_map, elem_map, scalar_map);
764 
765  // Now we can build up the dofs per "point" in the PetscSection
766  this->add_dofs_to_section(system, section, node_map, elem_map, scalar_map);
767 
768  // Final setup of PetscSection
769  // Until Matt Knepley finishes implementing the commented out function
770  // below, the PetscSection will be assuming node-major ordering
771  // so let's throw an error if the user tries to use this without
772  // node-major order
773  if (!libMesh::on_command_line("--node-major-dofs"))
774  libmesh_error_msg("ERROR: Must use --node-major-dofs with PetscSection!");
775 
776  //else if (!system.identify_variable_groups())
777  // ierr = PetscSectionSetUseFieldOffsets(section,PETSC_TRUE);LIBMESH_CHKERR(ierr);
778  //else
779  // {
780  // std::string msg = "ERROR: Only node-major or var-major ordering supported for PetscSection!\n";
781  // msg += " var-group-major ordering not supported!\n";
782  // msg += " Must use --node-major-dofs or set System::identify_variable_groups() = false!\n";
783  // libmesh_error_msg(msg);
784  // }
785 
786  ierr = PetscSectionSetUp(section);
787  CHKERRABORT(system.comm().get(),ierr);
788 
789  // Sanity checking at least that local_n_dofs match
790  libmesh_assert_equal_to(system.n_local_dofs(),this->check_section_n_dofs(section));
791 
792  STOP_LOG ("build_section()", "PetscDMWrapper");
793  }

References add_dofs_to_section(), check_section_n_dofs(), libMesh::ParallelObject::comm(), libMesh::ierr, libMesh::System::n_local_dofs(), libMesh::System::n_vars(), libMesh::on_command_line(), set_point_range_in_section(), and libMesh::System::variable_name().

Referenced by init_and_attach_petscdm().

◆ build_sf()

void libMesh::PetscDMWrapper::build_sf ( const System &  system,
PetscSF &  star_forest 
)
private

Takes the System and an empty PetscSF and populates the PetscSF.

The PetscSF (star forest) is a cousin of PetscSection. PetscSection has the DoF info, and PetscSF gives the parallel distribution of the DoF info. So the PetscSF should only be necessary when we have more than one MPI rank. Essentially, we are copying the DofMap.send_list(): we are specifying the local dofs, the rank that communicates that dof info (for off-processor dofs that are communicated), and the dof's local index on that rank.

https://jedbrown.org/files/StarForest.pdf
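For orientation, the sketch below (not part of this class; the function name inspect_star_forest is invented) uses standard PETSc calls to examine the star forest that build_sf() produces.

  #include <petsc.h>

  // Sketch only: examine the graph built by build_sf().
  void inspect_star_forest (PetscSF star_forest)
  {
    // Print the root/leaf layout on every rank.
    PetscSFView(star_forest, PETSC_VIEWER_STDOUT_WORLD);

    // Recover the graph as passed to PetscSFSetGraph(): for each leaf,
    // iremote gives the (rank, index) of the dof's owner.
    PetscInt nroots, nleaves;
    const PetscInt * ilocal;
    const PetscSFNode * iremote;
    PetscSFGetGraph(star_forest, &nroots, &nleaves, &ilocal, &iremote);
  }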

Definition at line 795 of file petsc_dm_wrapper.C.

796  {
797  START_LOG ("build_sf()", "PetscDMWrapper");
798 
799  const DofMap & dof_map = system.get_dof_map();
800 
801  const std::vector<dof_id_type> & send_list = dof_map.get_send_list();
802 
803  // Number of ghost dofs that send information to this processor
804  const PetscInt n_leaves = cast_int<PetscInt>(send_list.size());
805 
806  // Number of local dofs, including ghost dofs
807  const PetscInt n_roots = dof_map.n_local_dofs() + n_leaves;
808 
809  // This is the vector of dof indices coming from other processors
810  // We need to give this to the PetscSF
811  // We'll be extra paranoid about this ugly double cast
812  static_assert(sizeof(PetscInt) == sizeof(dof_id_type),"PetscInt is not a dof_id_type!");
813  PetscInt * local_dofs = reinterpret_cast<PetscInt *>(const_cast<dof_id_type *>(send_list.data()));
814 
815  // This is the vector of PetscSFNode's for the local_dofs.
816  // For each entry in local_dof, we have to supply the rank from which
817  // that dof stems and its local index on that rank.
818  // PETSc documentation here:
819  // http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PetscSF/PetscSFNode.html
820  std::vector<PetscSFNode> sf_nodes(send_list.size());
821 
822  for (auto i : index_range(send_list))
823  {
824  dof_id_type incoming_dof = send_list[i];
825 
826  const processor_id_type rank = dof_map.dof_owner(incoming_dof);
827 
828  // Dofs are sorted and continuous on the processor so local index
829  // is counted up from the first dof on the processor.
830  PetscInt index = incoming_dof - dof_map.first_dof(rank);
831 
832  sf_nodes[i].rank = rank; /* Rank of owner */
833  sf_nodes[i].index = index;/* Index of dof on rank */
834  }
835 
836  PetscSFNode * remote_dofs = sf_nodes.data();
837 
838  PetscErrorCode ierr;
839  ierr = PetscSFCreate(system.comm().get(), &star_forest);
840  CHKERRABORT(system.comm().get(),ierr);
841 
842  // TODO: We should create pointers to arrays so we don't have to copy
843  // and then can use PETSC_OWN_POINTER where PETSc will take ownership
844  // and delete the memory for us. But then we'd have to use PetscMalloc.
845  ierr = PetscSFSetGraph(star_forest,
846  n_roots,
847  n_leaves,
848  local_dofs,
849  PETSC_COPY_VALUES,
850  remote_dofs,
851  PETSC_COPY_VALUES);
852  CHKERRABORT(system.comm().get(),ierr);
853 
854  STOP_LOG ("build_sf()", "PetscDMWrapper");
855  }

References libMesh::ParallelObject::comm(), libMesh::DofMap::dof_owner(), libMesh::DofMap::first_dof(), libMesh::System::get_dof_map(), libMesh::DofMap::get_send_list(), libMesh::ierr, libMesh::index_range(), libMesh::DofMap::n_local_dofs(), and PETSC_COPY_VALUES.

Referenced by init_and_attach_petscdm().

◆ check_section_n_dofs()

dof_id_type libMesh::PetscDMWrapper::check_section_n_dofs ( PetscSection &  section)
private

Helper function to sanity check PetscSection construction.

The PetscSection contains local dof information. This helper function just facilitates sanity checking that in fact it only has n_local_dofs.
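For reference, PETSc can report the same total directly; the helper name section_storage_size below is invented, and its result should agree with the hand-rolled loop in this function.

  #include <petsc.h>

  // Sketch only: PetscSectionGetStorageSize() sums the dof counts over the chart.
  PetscInt section_storage_size (PetscSection section)
  {
    PetscInt total_dofs = 0;
    PetscSectionGetStorageSize(section, &total_dofs);
    return total_dofs;
  }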

Definition at line 1072 of file petsc_dm_wrapper.C.

1073  {
1074  PetscInt n_local_dofs = 0;
1075 
1076  // Grab the starting and ending points from the section
1077  PetscInt pstart, pend;
1078  PetscErrorCode ierr = PetscSectionGetChart(section, &pstart, &pend);
1079  if (ierr)
1080  libmesh_error();
1081 
1082  // Count up the n_dofs for each point from the section
1083  for( PetscInt p = pstart; p < pend; p++ )
1084  {
1085  PetscInt n_dofs;
1086  ierr = PetscSectionGetDof(section,p,&n_dofs);
1087  if (ierr)
1088  libmesh_error();
1089  n_local_dofs += n_dofs;
1090  }
1091 
1092  static_assert(sizeof(PetscInt) == sizeof(dof_id_type),"PetscInt is not a dof_id_type!");
1093  return n_local_dofs;
1094  }

References libMesh::ierr.

Referenced by build_section().

◆ clear()

void libMesh::PetscDMWrapper::clear ( )

Destroys and clears all built DM-related data.

Definition at line 430 of file petsc_dm_wrapper.C.

431  {
432  // PETSc will destroy the attached PetscSection and PetscSF, as well as
433  // other related data such as the projections, so we just tidy up the
434  // containers here.
435 
436  _dms.clear();
437  _sections.clear();
438  _star_forests.clear();
439  _pmtx_vec.clear();
440  _vec_vec.clear();
441  _ctx_vec.clear();
442 
443  }

References _ctx_vec, _dms, _pmtx_vec, _sections, _star_forests, and _vec_vec.

Referenced by libMesh::PetscDiffSolver::clear(), and ~PetscDMWrapper().

◆ get_dm()

DM& libMesh::PetscDMWrapper::get_dm ( unsigned int  level)
inline private

Get reference to DM for the given mesh level.

init_dm_data() should be called before this function.

Definition at line 144 of file petsc_dm_wrapper.h.

145  { libmesh_assert_less(level, _dms.size());
146  return *(_dms[level].get()); }

References _dms.

Referenced by init_and_attach_petscdm().

◆ get_section()

PetscSection& libMesh::PetscDMWrapper::get_section ( unsigned int  level)
inline private

Get reference to PetscSection for the given mesh level.

init_dm_data() should be called before this function.

Definition at line 152 of file petsc_dm_wrapper.h.

153  { libmesh_assert_less(level, _sections.size());
154  return *(_sections[level].get()); }

References _sections.

Referenced by init_and_attach_petscdm().

◆ get_star_forest()

PetscSF& libMesh::PetscDMWrapper::get_star_forest ( unsigned int  level)
inline private

Get reference to PetscSF for the given mesh level.

init_dm_data() should be called before this function.

Definition at line 160 of file petsc_dm_wrapper.h.

161  { libmesh_assert_less(level, _star_forests.size());
162  return *(_star_forests[level].get()); }

References _star_forests.

Referenced by init_and_attach_petscdm().

◆ init_and_attach_petscdm()

void libMesh::PetscDMWrapper::init_and_attach_petscdm ( System &  system,
SNES &  snes 
)

Definition at line 445 of file petsc_dm_wrapper.C.

446  {
447  START_LOG ("init_and_attach_petscdm()", "PetscDMWrapper");
448 
449  PetscErrorCode ierr;
450 
451  MeshBase & mesh = system.get_mesh(); // Convenience
452  MeshRefinement mesh_refinement(mesh); // Used for swapping between grids
453 
454  // There's no need for these code paths while traversing the hierarchy
455  mesh.allow_renumbering(false);
456  mesh.allow_remote_element_removal(false);
457  mesh.partitioner() = nullptr;
458 
459  // First walk over the active local elements and see the maximum number of MG levels we can construct
460  unsigned int n_levels = 0;
461  for ( auto & elem : mesh.active_local_element_ptr_range() )
462  {
463  if ( elem->level() > n_levels )
464  n_levels = elem->level();
465  }
466  // On coarse grids some processors may have no active local elements;
467  // these processors shouldn't make projections
468  if (n_levels >= 1)
469  n_levels += 1;
470 
471  // How many MG levels did the user request?
472  unsigned int usr_requested_mg_lvls = 0;
473  usr_requested_mg_lvls = command_line_next("-pc_mg_levels", usr_requested_mg_lvls);
474 
475  // Only construct however many levels were requested if something was actually requested
476  if ( usr_requested_mg_lvls != 0 )
477  {
478  // Don't request more than the available number of levels on the mesh; require at least 2 levels
479  libmesh_assert_less_equal( usr_requested_mg_lvls, n_levels );
480  libmesh_assert( usr_requested_mg_lvls > 1 );
481 
482  n_levels = usr_requested_mg_lvls;
483  }
484  else
485  {
486  // if -pc_mg_levels is not specified we just construct fieldsplit related
487  // structures on the finest mesh.
488  n_levels = 1;
489  }
490 
491 
492  // Init data structures: data[0] ~ coarse grid, data[n_levels-1] ~ fine grid
493  this->init_dm_data(n_levels, system.comm());
494 
495  // Step 1. contract : all active elements have no children
496  mesh.contract();
497 
498  // Start on the finest grid. Construct DM data and stash some info for
499  // later projection_matrix and vec sizing
500  for(unsigned int level = n_levels; level >= 1; level--)
501  {
502  // Save the n_fine_dofs before coarsening for later projection matrix sizing
503  _mesh_dof_sizes[level-1] = system.get_dof_map().n_dofs();
504  _mesh_dof_loc_sizes[level-1] = system.get_dof_map().n_local_dofs();
505 
506  // Get refs to things we will fill
507  DM & dm = this->get_dm(level-1);
508  PetscSection & section = this->get_section(level-1);
509  PetscSF & star_forest = this->get_star_forest(level-1);
510 
511  // The shell will contain other DM info
512  ierr = DMShellCreate(system.comm().get(), &dm);
513  CHKERRABORT(system.comm().get(),ierr);
514 
515  // Set the DM embedding dimension to help PetscDS (Discrete System)
516  ierr = DMSetCoordinateDim(dm, mesh.mesh_dimension());
517  CHKERRABORT(system.comm().get(),ierr);
518 
519  // Build the PetscSection and attach it to the DM
520  this->build_section(system, section);
521 #if PETSC_VERSION_LESS_THAN(3,12,0)
522  ierr = DMSetDefaultSection(dm, section);
523 #else
524  ierr = DMSetSection(dm, section);
525 #endif
526  CHKERRABORT(system.comm().get(),ierr);
527 
528  // We only need to build the star forest if we're in a parallel environment
529  if (system.n_processors() > 1)
530  {
531  // Build the PetscSF and attach it to the DM
532  this->build_sf(system, star_forest);
533 #if PETSC_VERSION_LESS_THAN(3,12,0)
534  ierr = DMSetDefaultSF(dm, star_forest);
535 #else
536  ierr = DMSetSectionSF(dm, star_forest);
537 #endif
538  CHKERRABORT(system.comm().get(),ierr);
539  }
540 
541  // Set PETSc's Restriction, Interpolation, Coarsen and Refine functions for the current DM
542  ierr = DMShellSetCreateInterpolation ( dm, libmesh_petsc_DMCreateInterpolation );
543  CHKERRABORT(system.comm().get(),ierr);
544 
545  // Not implemented. For now we rely on galerkin style restrictions
546  bool supply_restriction = false;
547  if (supply_restriction)
548  {
549  ierr = DMShellSetCreateRestriction ( dm, libmesh_petsc_DMCreateRestriction );
550  CHKERRABORT(system.comm().get(),ierr);
551  }
552 
553  ierr = DMShellSetCoarsen ( dm, libmesh_petsc_DMCoarsen );
554  CHKERRABORT(system.comm().get(),ierr);
555 
556  ierr = DMShellSetRefine ( dm, libmesh_petsc_DMRefine );
557  CHKERRABORT(system.comm().get(),ierr);
558 
559  ierr= DMShellSetCreateSubDM(dm, libmesh_petsc_DMCreateSubDM);
560  CHKERRABORT(system.comm().get(), ierr);
561 
562  // Uniformly coarsen if not the coarsest grid and distribute dof info.
563  if ( level != 1 )
564  {
565  START_LOG ("PDM_coarsen", "PetscDMWrapper");
566  mesh_refinement.uniformly_coarsen(1);
567  STOP_LOG ("PDM_coarsen", "PetscDMWrapper");
568 
569  START_LOG ("PDM_dist_dof", "PetscDMWrapper");
570  system.get_dof_map().distribute_dofs(mesh);
571  STOP_LOG ("PDM_dist_dof", "PetscDMWrapper");
572  }
573  } // End PETSc data structure creation
574 
575  // Now fill the corresponding internal PetscDMContext for each created DM
576  for( unsigned int i=1; i <= n_levels; i++ )
577  {
578  // Set context dimension
579  (*_ctx_vec[i-1]).mesh_dim = mesh.mesh_dimension();
580 
581  // Create and attach a sized vector to the current ctx
582  _vec_vec[i-1]->init( _mesh_dof_sizes[i-1] );
583  _ctx_vec[i-1]->current_vec = _vec_vec[i-1].get();
584 
585  // Set a global DM to be used as reference when using fieldsplit
586  _ctx_vec[i-1]->global_dm = &(this->get_dm(n_levels-1));
587 
588  if (n_levels > 1 )
589  {
590  // Set pointers to surrounding dm levels to help PETSc refine/coarsen
591  if ( i == 1 ) // we're at the coarsest mesh
592  {
593  (*_ctx_vec[i-1]).coarser_dm = nullptr;
594  (*_ctx_vec[i-1]).finer_dm = _dms[1].get();
595  }
596  else if( i == n_levels ) // we're at the finest mesh
597  {
598  (*_ctx_vec[i-1]).coarser_dm = _dms[_dms.size() - 2].get();
599  (*_ctx_vec[i-1]).finer_dm = nullptr;
600  }
601  else // we're in the middle of the hierarchy
602  {
603  (*_ctx_vec[i-1]).coarser_dm = _dms[i-2].get();
604  (*_ctx_vec[i-1]).finer_dm = _dms[i].get();
605  }
606  }
607 
608  } // End context creation
609 
610  // Attach a vector and context to each DM
611  if ( n_levels >= 1 )
612  {
613 
614  for ( unsigned int i = 1; i <= n_levels ; ++i)
615  {
616  DM & dm = this->get_dm(i-1);
617 
618  ierr = DMShellSetGlobalVector( dm, (*_ctx_vec[ i-1 ]).current_vec->vec() );
619  CHKERRABORT(system.comm().get(),ierr);
620 
621  ierr = DMShellSetContext( dm, _ctx_vec[ i-1 ].get() );
622  CHKERRABORT(system.comm().get(),ierr);
623  }
624  }
625 
626  // DM structures created, now we need projection matrices if GMG is requested.
627  // To prepare for projection creation, go to the second-coarsest mesh so we can utilize
628  // old_dof_indices information in the projection creation.
629  if (n_levels > 1 )
630  {
631 
632  // First, stash the coarse dof indices for FS purposes
633  unsigned int n_vars = system.n_vars();
634  _ctx_vec[0]->dof_vec.resize(n_vars);
635 
636  for( unsigned int v = 0; v < n_vars; v++ )
637  {
638  std::vector<numeric_index_type> di;
639  system.get_dof_map().local_variable_indices(di, system.get_mesh(), v);
640  _ctx_vec[0]->dof_vec[v] = di;
641  }
642 
643  START_LOG ("PDM_refine", "PetscDMWrapper");
644  mesh_refinement.uniformly_refine(1);
645  STOP_LOG ("PDM_refine", "PetscDMWrapper");
646 
647  START_LOG ("PDM_dist_dof", "PetscDMWrapper");
648  system.get_dof_map().distribute_dofs(mesh);
649  STOP_LOG ("PDM_dist_dof", "PetscDMWrapper");
650 
651  START_LOG ("PDM_cnstrnts", "PetscDMWrapper");
652  system.reinit_constraints();
653  STOP_LOG ("PDM_cnstrnts", "PetscDMWrapper");
654  }
655 
656  // Create the Interpolation Matrices between adjacent mesh levels
657  for ( unsigned int i = 1 ; i < n_levels ; ++i )
658  {
659  if ( i != n_levels )
660  {
661  // Stash the rest of the dof indices
662  unsigned int n_vars = system.n_vars();
663  _ctx_vec[i]->dof_vec.resize(n_vars);
664 
665  for( unsigned int v = 0; v < n_vars; v++ )
666  {
667  std::vector<numeric_index_type> di;
668  system.get_dof_map().local_variable_indices(di, system.get_mesh(), v);
669  _ctx_vec[i]->dof_vec[v] = di;
670  }
671 
672  unsigned int ndofs_c = _mesh_dof_sizes[i-1];
673  unsigned int ndofs_f = _mesh_dof_sizes[i];
674 
675  // Create the Interpolation matrix and set its pointer
676  _ctx_vec[i-1]->K_interp_ptr = _pmtx_vec[i-1].get();
677  _ctx_vec[i-1]->K_sub_interp_ptr = _subpmtx_vec[i-1].get();
678 
679  unsigned int ndofs_local = system.get_dof_map().n_dofs_on_processor(system.processor_id());
680  unsigned int ndofs_old_first = system.get_dof_map().first_old_dof(system.processor_id());
681  unsigned int ndofs_old_end = system.get_dof_map().end_old_dof(system.processor_id());
682  unsigned int ndofs_old_size = ndofs_old_end - ndofs_old_first;
683 
684  // Init and zero the matrix
685  _ctx_vec[i-1]->K_interp_ptr->init(ndofs_f, ndofs_c, ndofs_local, ndofs_old_size, 30 , 20);
686 
687  // Disable Mat destruction since PETSc destroys these for us
688  _ctx_vec[i-1]->K_interp_ptr->set_destroy_mat_on_exit(false);
689  _ctx_vec[i-1]->K_sub_interp_ptr->set_destroy_mat_on_exit(false);
690 
691  // TODO: Projection matrix sparsity pattern?
692  //MatSetOption(_ctx_vec[i-1]->K_interp_ptr->mat(), MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE);
693 
694  // Compute the interpolation matrix and set K_interp_ptr
695  START_LOG ("PDM_proj_mat", "PetscDMWrapper");
696  system.projection_matrix(*_ctx_vec[i-1]->K_interp_ptr);
697  STOP_LOG ("PDM_proj_mat", "PetscDMWrapper");
698 
699  // Always close matrix that contains altered data
700  _ctx_vec[i-1]->K_interp_ptr->close();
701  }
702 
703  // Move to next grid to make next projection
704  if ( i != n_levels - 1 )
705  {
706  START_LOG ("PDM_refine", "PetscDMWrapper");
707  mesh_refinement.uniformly_refine(1);
708  STOP_LOG ("PDM_refine", "PetscDMWrapper");
709 
710  START_LOG ("PDM_dist_dof", "PetscDMWrapper");
711  system.get_dof_map().distribute_dofs(mesh);
712  STOP_LOG ("PDM_dist_dof", "PetscDMWrapper");
713 
714  START_LOG ("PDM_cnstrnts", "PetscDMWrapper");
715  system.reinit_constraints();
716  STOP_LOG ("PDM_cnstrnts", "PetscDMWrapper");
717 
718  }
719  } // End create transfer operators. System back at the finest grid
720 
721  // Lastly, give SNES the finest level DM
722  DM & dm = this->get_dm(n_levels-1);
723  ierr = SNESSetDM(snes, dm);
724  CHKERRABORT(system.comm().get(),ierr);
725 
726  STOP_LOG ("init_and_attach_petscdm()", "PetscDMWrapper");
727  }

References _ctx_vec, _dms, _mesh_dof_loc_sizes, _mesh_dof_sizes, _pmtx_vec, _subpmtx_vec, _vec_vec, build_section(), build_sf(), libMesh::ParallelObject::comm(), libMesh::command_line_next(), libMesh::DofMap::distribute_dofs(), libMesh::DofMap::end_old_dof(), libMesh::DofMap::first_old_dof(), libMesh::ReferenceElem::get(), get_dm(), libMesh::System::get_dof_map(), libMesh::System::get_mesh(), get_section(), get_star_forest(), libMesh::ierr, init_dm_data(), libMesh::libmesh_assert(), libMesh::libmesh_petsc_DMCoarsen(), libMesh::libmesh_petsc_DMCreateInterpolation(), libMesh::libmesh_petsc_DMCreateRestriction(), libMesh::libmesh_petsc_DMCreateSubDM(), libMesh::libmesh_petsc_DMRefine(), libMesh::DofMap::local_variable_indices(), mesh, libMesh::DofMap::n_dofs(), libMesh::DofMap::n_dofs_on_processor(), libMesh::MeshTools::n_levels(), libMesh::DofMap::n_local_dofs(), libMesh::ParallelObject::n_processors(), n_vars, libMesh::System::n_vars(), libMesh::ParallelObject::processor_id(), libMesh::System::projection_matrix(), libMesh::System::reinit_constraints(), libMesh::MeshRefinement::uniformly_coarsen(), and libMesh::MeshRefinement::uniformly_refine().

Referenced by libMesh::PetscDiffSolver::setup_petsc_data().
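A quick way to confirm what this function attached is sketched below (not libMesh code; the function name check_attached_dm is invented): pull the DM back off the SNES and view it.

  #include <petsc.h>

  // Sketch only: after init_and_attach_petscdm() returns, recover and print
  // the finest-level DM that was handed to the SNES via SNESSetDM().
  void check_attached_dm (SNES snes)
  {
    DM dm;
    SNESGetDM(snes, &dm);
    DMView(dm, PETSC_VIEWER_STDOUT_WORLD);
  }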

◆ init_dm_data()

void libMesh::PetscDMWrapper::init_dm_data ( unsigned int  n_levels,
const Parallel::Communicator &  comm 
)
private

Init all the n_mesh_level dependent data structures.

Definition at line 1096 of file petsc_dm_wrapper.C.

1097  {
1098  _dms.resize(n_levels);
1099  _sections.resize(n_levels);
1100  _star_forests.resize(n_levels);
1101  _ctx_vec.resize(n_levels);
1102  _pmtx_vec.resize(n_levels);
1103  _subpmtx_vec.resize(n_levels);
1104  _vec_vec.resize(n_levels);
1105  _mesh_dof_sizes.resize(n_levels);
1106  _mesh_dof_loc_sizes.resize(n_levels);
1107 
1108  for( unsigned int i = 0; i < n_levels; i++ )
1109  {
1110  _dms[i] = libmesh_make_unique<DM>();
1111  _sections[i] = libmesh_make_unique<PetscSection>();
1112  _star_forests[i] = libmesh_make_unique<PetscSF>();
1113  _ctx_vec[i] = libmesh_make_unique<PetscDMContext>();
1114  _pmtx_vec[i]= libmesh_make_unique<PetscMatrix<Number>>(comm);
1115  _subpmtx_vec[i]= libmesh_make_unique<PetscMatrix<Number>>(comm);
1116  _vec_vec[i] = libmesh_make_unique<PetscVector<Number>>(comm);
1117  }
1118  }

References _ctx_vec, _dms, _mesh_dof_loc_sizes, _mesh_dof_sizes, _pmtx_vec, _sections, _star_forests, _subpmtx_vec, _vec_vec, and libMesh::MeshTools::n_levels().

Referenced by init_and_attach_petscdm().

◆ set_point_range_in_section()

void libMesh::PetscDMWrapper::set_point_range_in_section ( const System &  system,
PetscSection &  section,
std::unordered_map< dof_id_type, dof_id_type > &  node_map,
std::unordered_map< dof_id_type, dof_id_type > &  elem_map,
std::map< dof_id_type, unsigned int > &  scalar_map 
)
private

Helper function for build_section.

This function will count how many "points" on the current processor have DoFs associated with them and give that count to PETSc. While doing so, we cache a mapping between each global node id and our local count, because we will need that local number again in the add_dofs_to_section function.
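As a hypothetical worked example of the cached mapping (the node ids below are invented for illustration):

  #include <unordered_map>
  #include "libmesh/id_types.h"

  // Hypothetical contents of node_map on one rank: dofs on global nodes 7, 12
  // and 40 are locally owned, so those three "points" are numbered 0, 1, 2 and
  // the chart passed to PetscSectionSetChart() would be [0, 3).
  std::unordered_map<libMesh::dof_id_type, libMesh::dof_id_type> example_node_map =
    { {7, 0}, {12, 1}, {40, 2} };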

Definition at line 857 of file petsc_dm_wrapper.C.

862  {
863  // We're expecting this to be empty coming in
864  libmesh_assert(node_map.empty());
865 
866  // We need to count up the number of active "points" on this processor.
867  // Nominally, a "point" in PETSc parlance is a geometric object that can
868  // hold DoFs, i.e node, edge, face, elem. Since we handle the mesh and are only
869  // interested in solvers, then the only thing PETSc needs is a unique *local* number
870  // for each "point" that has active DoFs; note however this local numbering
871  // we construct must be continuous.
872  //
873  // In libMesh, for most finite elements, we just associate those DoFs with the
874  // geometric nodes. So we can loop over the nodes on this processor and check
875  // if any of the fields have active DoFs on that node.
876  // If so, then we tell PETSc about that "point". At this stage, we just need
877  // to count up how many active "points" we have and cache the local number to global id
878  // mapping.
879 
880 
881  // These will be our local counters. pstart should always be zero.
882  // pend will track our local "point" count.
883  // If we're on a processor who coarsened the mesh to have no local elements,
884  // we should make an empty PetscSection. An empty PetscSection is specified
885  // by passing [0,0) to the PetscSectionSetChart call at the end. So, if we
886  // have nothing on this processor, these are the correct values to pass to
887  // PETSc.
888  dof_id_type pstart = 0;
889  dof_id_type pend = 0;
890 
891  const MeshBase & mesh = system.get_mesh();
892 
893  const DofMap & dof_map = system.get_dof_map();
894 
895  // If we don't have any local dofs, then there's nothing to tell to the PetscSection
896  if (dof_map.n_local_dofs() > 0)
897  {
898  // Conservative estimate of space needed so we don't thrash
899  node_map.reserve(mesh.n_local_nodes());
900  elem_map.reserve(mesh.n_active_local_elem());
901 
902  // We loop over active elements and then cache the global/local node mapping to make sure
903  // we only count active nodes. For example, if we're calling this function and we're
904  // not the finest level in the Mesh tree, we don't want to include nodes of child elements
905  // that aren't active on this level.
906  for (const auto & elem : mesh.active_local_element_ptr_range())
907  {
908  for (const Node & node : elem->node_ref_range())
909  {
910  // get the global id number of local node n
911 
912  // Only register nodes with the PetscSection if they have dofs that belong to
913  // this processor. Even though we're looping over active local elements, the dofs associated
914  // with the node may belong to a different processor. The processor who owns
915  // those dofs will register that node with the PetscSection on that processor.
916  std::vector<dof_id_type> node_dof_indices;
917  dof_map.dof_indices( &node, node_dof_indices );
918  if( !node_dof_indices.empty() && dof_map.local_index(node_dof_indices[0]) )
919  {
920 #ifndef NDEBUG
921  // We're assuming that if the first dof for this node belongs to this processor,
922  // then all of them do.
923  for( auto dof : node_dof_indices )
924  libmesh_assert(dof_map.local_index(dof));
925 #endif
926  // Cache the global/local mapping if we haven't already
927  // Then increment our local count
928  dof_id_type node_id = node.id();
929  if( node_map.count(node_id) == 0 )
930  {
931  node_map.insert(std::make_pair(node_id,pend));
932  pend++;
933  }
934  }
935  }
936 
937  // Some finite elements, e.g. Hierarchic, associate element interior DoFs with the element
938  // rather than the node (since we ought to be able to use Hierarchic elements on a QUAD4,
939  // which has no interior node). Thus, we also need to check element interiors for DoFs
940  // as well and, if the finite element has them, we also need to count the Elem in our
941  // "point" accounting.
942  if( elem->n_dofs(system.number()) > 0 )
943  {
944  dof_id_type elem_id = elem->id();
945  elem_map.insert(std::make_pair(elem_id,pend));
946  pend++;
947  }
948  }
949 
950  // SCALAR dofs live on the "last" processor, so only work there if there are any
951  if( dof_map.n_SCALAR_dofs() > 0 && (system.processor_id() == (system.n_processors()-1)) )
952  {
953  // Loop through all the variables and cache the scalar ones. We cache the
954  // SCALAR variable index along with the local point to make it easier when
955  // we have to register dofs with the PetscSection
956  for (auto v : IntRange<unsigned int>(0, system.n_vars()))
957  {
958  if( system.variable(v).type().family == SCALAR )
959  {
960  scalar_map.insert(std::make_pair(pend,v));
961  pend++;
962  }
963  }
964  }
965 
966  }
967 
968  PetscErrorCode ierr = PetscSectionSetChart(section, pstart, pend);
969  CHKERRABORT(system.comm().get(),ierr);
970  }

References libMesh::ParallelObject::comm(), libMesh::DofMap::dof_indices(), libMesh::FEType::family, libMesh::System::get_dof_map(), libMesh::System::get_mesh(), libMesh::ierr, libMesh::libmesh_assert(), libMesh::DofMap::local_index(), mesh, libMesh::DofMap::n_local_dofs(), libMesh::ParallelObject::n_processors(), libMesh::DofMap::n_SCALAR_dofs(), libMesh::System::n_vars(), libMesh::System::number(), libMesh::ParallelObject::processor_id(), libMesh::SCALAR, libMesh::Variable::type(), and libMesh::System::variable().

Referenced by build_section().

Member Data Documentation

◆ _ctx_vec

std::vector<std::unique_ptr<PetscDMContext> > libMesh::PetscDMWrapper::_ctx_vec
private

Vector of internal PetscDM context structs for all grid levels.

Definition at line 126 of file petsc_dm_wrapper.h.

Referenced by clear(), init_and_attach_petscdm(), and init_dm_data().

◆ _dms

std::vector<std::unique_ptr<DM> > libMesh::PetscDMWrapper::_dms
private

Vector of DMs for all grid levels.

Definition at line 111 of file petsc_dm_wrapper.h.

Referenced by clear(), get_dm(), init_and_attach_petscdm(), and init_dm_data().

◆ _mesh_dof_loc_sizes

std::vector<unsigned int> libMesh::PetscDMWrapper::_mesh_dof_loc_sizes
private

Stores n_local_dofs for each grid level, to be used for projection vector sizing.

Definition at line 135 of file petsc_dm_wrapper.h.

Referenced by init_and_attach_petscdm(), and init_dm_data().

◆ _mesh_dof_sizes

std::vector<unsigned int> libMesh::PetscDMWrapper::_mesh_dof_sizes
private

Stores n_dofs for each grid level, to be used for projection matrix sizing.

Definition at line 132 of file petsc_dm_wrapper.h.

Referenced by init_and_attach_petscdm(), and init_dm_data().

◆ _pmtx_vec

std::vector<std::unique_ptr<PetscMatrix<Number> > > libMesh::PetscDMWrapper::_pmtx_vec
private

Vector of projection matrices for all grid levels.

Definition at line 120 of file petsc_dm_wrapper.h.

Referenced by clear(), init_and_attach_petscdm(), and init_dm_data().

◆ _sections

std::vector<std::unique_ptr<PetscSection> > libMesh::PetscDMWrapper::_sections
private

Vector of PetscSections for all grid levels.

Definition at line 114 of file petsc_dm_wrapper.h.

Referenced by clear(), get_section(), and init_dm_data().

◆ _star_forests

std::vector<std::unique_ptr<PetscSF> > libMesh::PetscDMWrapper::_star_forests
private

Vector of star forests for all grid levels.

Definition at line 117 of file petsc_dm_wrapper.h.

Referenced by clear(), get_star_forest(), and init_dm_data().

◆ _subpmtx_vec

std::vector<std::unique_ptr<PetscMatrix<Number> > > libMesh::PetscDMWrapper::_subpmtx_vec
private

Vector of sub projection matrices for all grid levels for fieldsplit.

Definition at line 123 of file petsc_dm_wrapper.h.

Referenced by init_and_attach_petscdm(), and init_dm_data().

◆ _vec_vec

std::vector<std::unique_ptr<PetscVector<Number> > > libMesh::PetscDMWrapper::_vec_vec
private

Vector of solution vectors for all grid levels.

Definition at line 129 of file petsc_dm_wrapper.h.

Referenced by clear(), init_and_attach_petscdm(), and init_dm_data().


The documentation for this class was generated from the following files:

petsc_dm_wrapper.h
petsc_dm_wrapper.C