MOOSE Newsletter (March 2023)
MOOSE Improvements
Customized user object / postprocessor execution order
A new optional parameter, execution_order_group, which expects an integer value, has been added. All user objects in the same execution order group are executed in the same order as before (which is now documented under the UserObject System). All objects with the same execution_order_group parameter value are executed in the same round (i.e., pass over the mesh), and groups with a lower number are executed first. The default execution_order_group is 0 (zero). Negative values can be specified to force user object execution before the default group, and positive values can be used to create execution groups that run after the default group. Execution order groups apply to all execute_on flags specified for a given object.
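For illustration, here is a minimal input-file sketch; the object names and types are hypothetical placeholders, while execution_order_group and execute_on are the actual parameters discussed above:

```
[UserObjects]
  [compute_average]
    type = MyAveragingUserObject     # hypothetical user object type
    execute_on = TIMESTEP_END
    execution_order_group = -1       # negative group: runs in a round before the default group (0)
  []
  [consume_average]
    type = MyConsumingUserObject     # hypothetical user object that reads the result computed above
    execute_on = TIMESTEP_END
    # execution_order_group defaults to 0, so this object executes in a later round
  []
[]
```

Because lower-numbered groups complete their pass over the mesh first, compute_average has finished executing before consume_average runs, even though both objects share the same execute_on flags.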
libMesh-level Changes
2023.03.02 Update
- Improvements to mesh redistribution:
  - ReplicatedMesh and serialized DistributedMesh objects now also call GhostingFunctor::redistribute() callbacks (enabling communication of distributed data, like MOOSE stateful materials, on top of replicated mesh data).
  - GhostingFunctor code now supports (in deprecated builds) less strict behavior from subclasses.
  - Redundant calls to redistribute() in special cases have been combined.
  - scatter_constraints() now uses a more optimal data structure.
  - send_coarse_ghosts() and redistribute() now use the NBX parallel synchronization algorithm; these had been the last two distributed algorithms in libMesh still using older, less-scalable MPI techniques.
  - Bug fix: in some use cases (including MOOSE applications using mesh refinement), libMesh could fail to properly synchronize changing nodeset definitions.
- Side boundary ids can now be set on child elements, not just coarse mesh elements, allowing for adaptive refinement of sidesets.
- Clearer error messages are now printed when a parallel_only assertion fails.
- subdomain_id has been added to the output when printing Elem info.
- send_list data is now properly prepared in all use cases. This fixes Geometric MultiGrid compatibility with PETSc 3.18 and may give slight performance improvements elsewhere.
- A System::has_constraint_object() query API has been added.
- Bug fixes for msys2 / Windows builds, for TIMPI set_union of maps with inconsistent values assigned to the same key, and for packed-range communication of pairs with fixed-size data and padding bytes.
2023.03.29 Update
- PetscVector local array behavior fixes
- Bezier extraction from ExodusII IsoGeometric Analysis meshes can now be done for a discontinuous mesh, if specified by user code; this is less efficient but much more flexible.
- ExodusII I/O support for lower-order edge blocks referring to higher-order edges
- HDF5 underlying Nemesis output is now a user-controllable option
- More assertions of parallel synchronization in PetscNonlinearSolver
- TIMPI updates:
  - Better error messages in cases where TIMPI assertions of users' parallel synchronization fail
  - parallel_sync.h algorithms now fully support rvalue data, and their behavior with local data is commented slightly better.
  - Packing specializations for container types now only exist if the container contents are entirely packable.
  - Bug fixes for certain Packing<tuple> cases
  - Better test coverage
- MetaPhysicL updates:
  - Fixes for builds with TIMPI but without MPI
  - Fixes for many compiler warnings. Warnings triggered by the --enable-paranoid-warnings configuration are now all fixed; that configuration is now enabled in MetaPhysicL whenever it is enabled in libMesh itself, and it is now tested for regressions in CI.
- Compiler warning fixes for libMesh builds with newer clang++
VTK 9.2.6 (moose-libmesh-vtk)
Bumped the VTK version from 9.1.0 to 9.2.6. Fixed an issue involving relinking on modern systems with the latest libm.so:

    libmesh-vtk/lib/././libvtkkissfft-9.1.so.1' with `/lib64/libm.so.6' for IFUNC symbol `sincos'