MOOSE Newsletter (February 2021)
Incompressible Navier-Stokes with Finite Volumes (INSFV)
We now have a finite volume implementation of the incompressible Navier-Stokes equations, with tests for lid-driven cavity, natural-convection, and channel flows. The implementation solves the mass and momentum conservation equations in a fully coupled manner, using a Rhie-Chow interpolation for the velocity field to avoid checker-board patterns in the pressure field (the same checker-boarding arises in finite element calculations when using unstabilized equal-order bases for the pressure and velocity). We have conducted verification tests in both 2D and 3D, checking for mass, momentum, and energy conservation (when the latter is included in the simulation). Moreover, testing with the method of manufactured solutions (MMS) shows the scheme to be second-order convergent with respect to mesh refinement.
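For reference, the equations being solved are the standard incompressible mass and momentum conservation equations, written here in primitive variables (u is the velocity, p the pressure, ρ the density, μ the dynamic viscosity, and f a body force):

```latex
% Incompressible Navier-Stokes: mass and momentum conservation
\nabla \cdot \vec{u} = 0
\qquad
\rho \left( \frac{\partial \vec{u}}{\partial t}
          + \vec{u} \cdot \nabla \vec{u} \right)
  = -\nabla p + \mu \nabla^{2} \vec{u} + \vec{f}
```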
Default Automatic Differentiation (AD) configuration change
We have transitioned MOOSE to use a globally indexed, sparse derivative container for automatic differentiation by default. Experience has shown that this container type is hardly ever slower than the previous default AD container type (locally indexed, non-sparse), and when it is, the difference is very small; in some cases it is much faster. This change in default configuration should be largely transparent to users. For "power" AD developers, however, it may open doors to more complex calculations. For example, finite volume gradient reconstruction requires neighbor-of-neighbor information; handling that larger element/cell stencil with a locally indexed container would be extremely difficult, whereas it is straightforward and natural with the globally indexed container. Here global vs. local refers to indexing by globally numbered vs. locally numbered degrees of freedom: as residuals come to depend on more and more non-local information, a local indexing scheme becomes more and more untenable.
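To illustrate the distinction, here is a schematic sketch in plain C++ (these are not MOOSE's actual AD types; the struct and member names are invented for illustration). The locally indexed container reserves one dense slot per local degree of freedom of the current element, while the globally indexed container stores sparse (global dof id, derivative) pairs and can therefore record a dependence on any degree of freedom in the system, such as one living on a neighbor of a neighbor:

```cpp
#include <cstddef>
#include <map>
#include <vector>

// Locally indexed, non-sparse: a fixed slot per *local* dof of the current
// element. A dependence on a dof from a neighbor-of-neighbor element has no
// natural home here.
struct LocalDenseDerivatives
{
  std::vector<double> d; // d[i] = derivative w.r.t. local dof i
};

// Globally indexed, sparse: (global dof id, derivative) pairs, so a residual
// may depend on *any* dof in the system -- exactly what extended stencils
// such as finite volume gradient reconstruction require.
struct GlobalSparseDerivatives
{
  std::map<std::size_t, double> d; // key = global dof id

  void add(std::size_t global_dof, double value) { d[global_dof] += value; }
};
```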
GridPartitioner Improvement
GridPartitioner overlays a uniform processor grid on the mesh to be partitioned and assigns all elements of that mesh falling within a given grid cell to the same processor. Previously, a native MOOSE mesh was explicitly created to represent the uniform processor grid, and in certain scenarios this extra mesh interfered with the attachment of geometric relationship managers. This issue has been addressed by replacing the native MOOSE mesh with a virtual grid; all element-to-processor assignments are now computed against this temporary virtual grid, as sketched below.
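The assignment itself is simple enough to sketch (illustrative only; the function below is hypothetical and shows a 2D version that bins an element centroid into an nx-by-ny virtual grid over the mesh bounding box, mapping each grid cell to one processor):

```cpp
#include <algorithm>

// Bin an element centroid (x, y) into a virtual nx-by-ny grid laid over the
// mesh bounding box; the linearized cell index doubles as the processor id.
// No grid mesh is ever constructed.
unsigned int
gridProcessorId(double x, double y,          // element centroid
                double xmin, double xmax,    // mesh bounding box
                double ymin, double ymax,
                unsigned int nx, unsigned int ny) // processor grid dimensions
{
  auto bin = [](double v, double lo, double hi, unsigned int n) {
    const auto i = static_cast<unsigned int>(n * (v - lo) / (hi - lo));
    return std::min(i, n - 1); // clamp points on the maximal boundary
  };
  return bin(y, ymin, ymax, ny) * nx + bin(x, xmin, xmax, nx);
}
```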
Default PETSc Configuration Change
As more and more users explore MOOSE's massively parallel HPC capabilities on large problems, it was determined that PETSc should be built with 64-bit integers by default. This change should not affect simulation performance; the motivation is to let MOOSE smoothly run simulations with hundreds of millions (or billions) of DoFs, beyond the roughly 2.1 billion that 32-bit signed indices can address. In conjunction with this change, libMesh's DoF index has been made a 64-bit integer as well.
Subdomain-specific Quadrature Order
We've recently added support allowing users to specify different quadrature orders for each subdomain/block in the mesh. This can be done via the Executioner/Quadrature subblock in an input file:
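(A minimal sketch follows; the custom_blocks and custom_orders parameter names reflect our reading of the Quadrature action's syntax and should be checked against the current documentation.)

```
[Executioner]
  [Quadrature]
    # default rule order used on all blocks not listed below
    order = SECOND
    # per-subdomain overrides: blocks 1 and 2 get these orders, respectively
    custom_blocks = '1 2'
    custom_orders = 'FOURTH SIXTH'
  []
[]
```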
The specified quadrature orders are also applied to any face-quadrature rules executed on those subdomains. When a face borders two subdomains with different quadrature orders, the higher of the two orders is used consistently for the face.
It is also possible to bump the quadrature order on a per-subdomain basis via code in MOOSE object constructors:
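(A sketch follows; the bumpVolumeQRuleOrder and bumpAllQRuleOrder methods on FEProblemBase reflect our understanding of the new API, and MyKernel, along with its access to an FEProblemBase reference named _fe_problem, is hypothetical.)

```cpp
// Hypothetical object that raises the quadrature order on subdomain 2 from
// its constructor. The _fe_problem member and the bump*QRuleOrder calls are
// assumptions about the API; adapt to your object's actual members.
MyKernel::MyKernel(const InputParameters & parameters) : Kernel(parameters)
{
  // Ensure at least a fourth-order volumetric quadrature rule on block 2
  _fe_problem.bumpVolumeQRuleOrder(FOURTH, 2);

  // Or bump the volume and face rules on block 2 together
  _fe_problem.bumpAllQRuleOrder(FOURTH, 2);
}
```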