NekRSProblem

This class performs all activities related to solving NekRS as a MOOSE application. This class also facilitates data transfers between NekRS's internal solution fields and MOOSE by reading and writing MooseVariables, Postprocessors, and scalar numbers. You can use this class to couple NekRS to MOOSE for:

  • Boundary conjugate heat transfer (CHT), by passing heat fluxes and wall temperatures

  • Volume coupling via temperature and heat source feedback (such as for coupling to neutronics)

  • Fluid-structure interaction, by passing wall displacements and stresses

  • Systems-level coupling, by reading and writing scalar numbers representing boundary conditions

  • Extracting the NekRS solution for postprocessing or one-way coupling in MOOSE, such as to:
    - Query the solution, evaluate heat balances and pressure drops, or evaluate solution convergence
    - Provide one-way coupling to other MOOSE applications, such as for transporting scalars based on NekRS's velocity solution or for projecting NekRS turbulent viscosity closure terms onto another MOOSE application's mesh
    - Project the NekRS solution onto other discretization schemes, such as a subchannel discretization, or onto other MOOSE applications, such as for providing closures
    - Automatically convert nondimensional NekRS solutions into dimensional form
    - Obtain a representation of the NekRS solution in Exodus, VTK, CSV, and other formats, because the MOOSE framework supports many different output formats

All of the above options can be combined together in a flexible, modular system.

Note

This class must be used in conjunction with two other classes in Cardinal:

  1. NekRSMesh, which builds a mirror of the NekRS mesh in a MOOSE format so that all the usual Transfers understand how to send data into/out of NekRS. The settings on NekRSMesh also determine which coupling types (of those listed above) are available.

  2. NekTimeStepper, which allows NekRS to control its own time stepping.

Therefore, we recommend first reading the documentation for the above classes before proceeding here.

The smallest possible MOOSE-wrapped input file that can be used to run NekRS is shown below. casename is the prefix describing the NekRS input files, i.e. this parameter would be casename = 'fluid' if the NekRS input files are fluid.re2, fluid.par, fluid.udf, and fluid.oudf.

Listing 1: Smallest possible NekRS wrapped input file.

[Problem<<<{"href": "../../syntax/Problem/index.html"}>>>]
  type = NekRSProblem
  casename<<<{"description": "Case name for the NekRS input files; this is <case> in <case>.par, <case>.udf, <case>.oudf, and <case>.re2."}>>> = 'fluid'
[]

[Mesh<<<{"href": "../../syntax/Mesh/index.html"}>>>]
  type = NekRSMesh
  boundary = '1 2'
  volume = true
[]

[Executioner<<<{"href": "../../syntax/Executioner/index.html"}>>>]
  type = Transient

  [TimeStepper<<<{"href": "../../syntax/Executioner/TimeStepper/index.html"}>>>]
    type = NekTimeStepper
  []
[]
(doc/content/source/problems/smallest_input.i)

The remainder of this page describes how NekRSProblem wraps NekRS as a MOOSE application.

Overall Calculation Methodology

NekRSProblem inherits from the ExternalProblem class. For each time step, the calculation proceeds according to the ExternalProblem::solve() function. Data gets sent into NekRS, NekRS runs a time step, and data gets extracted from NekRS. NekRSProblem mostly consists of defining the syncSolutions and externalSolve methods. Each of these functions is now described.

void
ExternalProblem::solve(const unsigned int)
{
  TIME_SECTION("solve", 1, "Solving", false)

  syncSolutions(Direction::TO_EXTERNAL_APP);
  externalSolve();
  syncSolutions(Direction::FROM_EXTERNAL_APP);
}
(contrib/moose/framework/src/problems/ExternalProblem.C)

External Solve

The actual solve of a time step by NekRS is performed within the externalSolve method, which essentially copies NekRS's main() function into MOOSE.

nekrs::runStep(time, dt, time_step_index);
nekrs::ocopyToNek(time + dt, time_step_index);
nekrs::udfExecuteStep(time + dt, time_step_index, is_output_step);
if (is_output_step) nekrs::outfld(time + dt);

These four functions are defined in the NekRS source code, and perform the following:

  • Run a single time step

  • Copy the device-side solution to the host (so that it can be accessed by MOOSE via AuxVariables, Postprocessors, UserObjects, etc.)

  • Execute a user-defined function in NekRS, UDF_ExecuteStep, for Nek-style postprocessing (optional)

  • Write a NekRS output file

Because externalSolve is wrapped between two syncSolutions calls, data is sent to and from NekRS for every NekRS time step, even if NekRS runs with a smaller time step than the MOOSE application to which it is coupled (i.e., even if the data going into NekRS has not changed since it was last sent). An approach for eliminating some of these technically unnecessary data transfers is described in Reducing CPU/GPU Data Transfers.

Transfers to NekRS

In the TO_EXTERNAL_APP data transfer, FieldTransfers and ScalarTransfers read from auxvariables and postprocessors and write data into the NekRS internal data space. Please click on the links to learn more.

  • FieldTransfers: passes field data (values defined throughout the nodal points on a mesh) between NekRS and MOOSE

  • ScalarTransfers: passes scalar data (single values or postprocessors) between NekRS and MOOSE

Transfer from NekRS

In the FROM_EXTERNAL_APP data transfer, FieldTransfers and ScalarTransfers read from NekRS's internal data space into auxvariables and postprocessors. Please click on the links to learn more.

  • FieldTransfers: passes field data (values defined throughout the nodal points on a mesh) between NekRS and MOOSE

  • ScalarTransfers: passes scalar data (single values or postprocessors) between NekRS and MOOSE
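
As a sketch of the syntax, a single scalar could be sent into NekRS with a ScalarTransfers sub-block like the one below. The NekScalarValue object exists in Cardinal, but the exact parameter names shown here (value, usrwrk_slot) are assumptions for illustration; consult the ScalarTransfers documentation for the authoritative syntax.

```
[Problem]
  type = NekRSProblem
  casename = 'fluid'
  n_usrwrk_slots = 1

  [ScalarTransfers]
    [inlet_velocity]
      # NekScalarValue writes a single scalar into a slot of nrs->usrwrk;
      # the parameter names below are illustrative assumptions
      type = NekScalarValue
      value = 1.0
      usrwrk_slot = 0
    []
  []
[]
```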

Nondimensional Solution

NekRS is most often solved in nondimensional form, normalizing all solution variables by problem-specific characteristic scales so that they are of order unity. However, most other MOOSE applications use dimensional units. When transferring field data to/from NekRS or when postprocessing the NekRS solution, it is important for the NekRS solution to match the dimensional solution of the coupled MOOSE application. It is also often helpful to visualize and interpret a NekRS solution in dimensional form. Cardinal automatically handles conversions between dimensional and nondimensional form if you add a Dimensionalize sub-block. Please consult the documentation for Dimensionalize for more information.
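
As an illustration, a Dimensionalize sub-block provides the characteristic scales used in the nondimensionalization. The parameter names and numeric values below are assumptions for illustration only; see the Dimensionalize documentation for the exact syntax.

```
[Problem]
  type = NekRSProblem
  casename = 'fluid'

  [Dimensionalize]
    # characteristic scales used to redimensionalize the NekRS solution;
    # names and values here are illustrative assumptions
    U = 1.0       # characteristic velocity (m/s)
    T = 573.0     # reference temperature (K)
    dT = 10.0     # characteristic temperature rise (K)
    L = 0.01      # characteristic length (m)
    rho = 834.5   # fluid density (kg/m^3)
    Cp = 1228.0   # isobaric specific heat (J/(kg K))
  []
[]
```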

Outputting the Scratch Array

This class allows you to write slots in the nrs->usrwrk scratch space array to NekRS field files. This can be useful for viewing the data sent from MOOSE to NekRS on the actual spectral mesh where they are ultimately used. This feature can also be used to write field files for other quantities in the scratch space, such as a wall distance computation from the Nek5000 backend. To write the scratch space to a field file, set usrwrk_output to an array with each "slot" in the nrs->usrwrk array that you want to write. Then, specify a filename prefix to use to name each field file.

In the example below, the first two "slots" in the nrs->usrwrk array will be written to field files on the same interval that NekRS writes its usual field files. These files will be named aaabrick0.f00001, etc. and cccbrick0.f00001, etc. Based on limitations in how NekRS writes its files, the fields written to these files will all be named temperature when visualized.

[Problem<<<{"href": "../../syntax/Problem/index.html"}>>>]
  type = NekRSProblem
  casename<<<{"description": "Case name for the NekRS input files; this is <case> in <case>.par, <case>.udf, <case>.oudf, and <case>.re2."}>>> = 'brick'
  usrwrk_output = '0 1'
  usrwrk_output_prefix = 'aaa ccc'
  n_usrwrk_slots<<<{"description": "Number of slots to allocate in nrs->usrwrk to hold fields either related to coupling (which will be populated by Cardinal), or other custom usages, such as a distance-to-wall calculation"}>>> = 2
[]
(test/tests/nek_file_output/usrwrk/nek.i)

Reducing CPU/GPU Data Transfers

As shown in External Solve, data is passed in and out of NekRS for every NekRS time step. If NekRS is run as a sub-application to a master application and sub-cycling is used, many of these Central Processing Unit (CPU)/Graphics Processing Unit (GPU) data transfers can be omitted.

First, let's explain what MOOSE does in the usual master/sub coupling scheme when subcycling = true, using a CHT case as an example. Suppose you have a master application with a time step size of 1 second, and you run NekRS as a sub-application with a time step size of 0.4 seconds that executes at the end of the master application time step. The calculation procedure involves:

  1. Solve the master application from t to t + 1 seconds.

  2. Transfer an auxvariable representing flux (and a postprocessor representing its integral) from the master application to the NekRS sub-application at t + 1 seconds.

  3. Read from the auxvariable and write into the nrs->usrwrk array using a NekBoundaryFlux object. Normalize the flux and then copy it from host to device.

  4. Run a NekRS time step from t to t + 0.4 seconds.

  5. Copy the temperature from device to host and then interpolate NekRS's temperature from NekRS's GLL points to the NekRSMesh using a NekFieldVariable.

  6. Repeat steps 3-5 two more times, once for a time step size of 0.4 seconds and again for a time step size of 0.2 seconds, for the NekRS sub-application to "catch up" to the master application's overall time step length of 1 second. Even though the flux data hasn't changed, and even though the temperature data isn't yet needed by the master application, ExternalProblem::solve() performs data transfers in/out of NekRS for every one of these NekRS time steps.
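
For reference, the master/sub arrangement described above is created with a MultiApps block in the master application's input file. A minimal sketch might look like the following (the input file name is illustrative):

```
[MultiApps]
  [nek]
    type = TransientMultiApp
    input_files = 'nek.i'   # illustrative name for the NekRS wrapper input file
    sub_cycling = true      # let NekRS take multiple 0.4 s steps per 1 s master step
    execute_on = timestep_end
  []
[]
```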

If NekRS is run with a time step N times smaller than its master application, this structuring of ExternalProblem performs N - 1 unnecessary interpolations and CPU-to-GPU copies of the flux, and N - 1 unnecessary GPU-to-CPU copies and interpolations of the temperature. NekRSProblem contains features that allow you to turn off these extra transfers. However, MOOSE's MultiApp system is designed in such a way that sub-applications know very little about their master applications (and for good reason; such a design is what enables flexible multiphysics couplings). So, the only way that NekRS can definitively know that a data transfer from a master application is the first data transfer after the flux data has been updated is to monitor the value of a dummy postprocessor sent by the master application to NekRS. In other words, we define a postprocessor in the master application that simply has a value of 1.

[Postprocessors<<<{"href": "../../syntax/Postprocessors/index.html"}>>>]
  [synchronize]
    type = Receiver<<<{"description": "Reports the value stored in this processor, which is usually filled in by another object. The Receiver does not compute its own value.", "href": "../postprocessors/Receiver.html"}>>>
    default<<<{"description": "The default value"}>>> = 1
  []
[]
(tutorials/sfr_7pin/solid.i)

We define this postprocessor as a Receiver postprocessor, but we won't actually use it to receive anything from other applications. Instead, we set the default value to 1 in order to indicate "true". Then, at the same time that we send new flux values to NekRS, we also pass this postprocessor.

[Transfers<<<{"href": "../../syntax/Transfers/index.html"}>>>]
  [synchronize_in]
    type = MultiAppPostprocessorTransfer<<<{"description": "Transfers postprocessor data between the master application and sub-application(s).", "href": "../transfers/MultiAppPostprocessorTransfer.html"}>>>
    to_postprocessor<<<{"description": "The name of the Postprocessor in the MultiApp to transfer the value to.  This should most likely be a Reporter Postprocessor."}>>> = transfer_in
    from_postprocessor<<<{"description": "The name of the Postprocessor in the Master to transfer the value from."}>>> = synchronize
    to_multi_app<<<{"description": "The name of the MultiApp to transfer the data to"}>>> = nek
  []
[]
(tutorials/sfr_7pin/solid.i)

We then receive this postprocessor in the sub-application. This means that, when the flux data is new, the NekRS sub-application receives a value of "true" from the master application (through the lens of this postprocessor) and sends the data into NekRS. For data transfer out of NekRS, we determine when the temperature data is ready for use by MOOSE by monitoring how close the sub-application is to the synchronization time with the master application.

All that is required to use this reduced communication feature is to define the dummy postprocessor in the master application and transfer it to the sub-application. Then, set the following options in NekRSProblem, shown here minimizing the communication going in and out of NekRS.

[Problem<<<{"href": "../../syntax/Problem/index.html"}>>>]
  type = NekRSProblem
  casename<<<{"description": "Case name for the NekRS input files; this is <case> in <case>.par, <case>.udf, <case>.oudf, and <case>.re2."}>>> = 'sfr_7pin'
  synchronization_interval = parent_app
  n_usrwrk_slots<<<{"description": "Number of slots to allocate in nrs->usrwrk to hold fields either related to coupling (which will be populated by Cardinal), or other custom usages, such as a distance-to-wall calculation"}>>> = 1

  [FieldTransfers<<<{"href": "../../syntax/Problem/FieldTransfers/index.html"}>>>]
    [heat_flux]
      type = NekBoundaryFlux<<<{"description": "Reads/writes boundary flux data between NekRS and MOOSE."}>>>
      direction<<<{"description": "Direction in which to send data"}>>> = to_nek
      usrwrk_slot<<<{"description": "When 'direction = to_nek', the slot(s) in the usrwrk array to write the incoming data; provide one entry for each quantity being passed"}>>> = 0
    []
    [temperature]
      type = NekFieldVariable<<<{"description": "Reads/writes volumetric field data between NekRS and MOOSE."}>>>
      direction<<<{"description": "Direction in which to send data"}>>> = from_nek
    []
  []
[]
(tutorials/sfr_7pin/nek.i)
Warning

When the interpolate_transfers = true option is used by the TransientMultiApp, MOOSE interpolates the heat flux sent to NekRS for each NekRS time step based on the master application time steps bounding the NekRS step. That is, if MOOSE computes the heat flux at two successive time steps to be q1 and q2, and NekRS is being advanced to a sub-cycled step between those two times, then with interpolate_transfers = true the avg_flux variable is actually a linear interpolation of the two flux values at the end points of the master application's solve interval. Using this "minimal transfer" feature will ignore the fact that MOOSE is interpolating the heat flux.

Input Parameters

  • casenameCase name for the NekRS input files; this is <case> in <case>.par, <case>.udf, <case>.oudf, and <case>.re2.

    C++ Type:std::string

    Controllable:No

    Description:Case name for the NekRS input files; this is <case> in <case>.par, <case>.udf, <case>.oudf, and <case>.re2.

Required Parameters

  • blockANY_BLOCK_ID List of subdomains for kernel coverage and material coverage checks. Setting this parameter is equivalent to setting 'kernel_coverage_block_list' and 'material_coverage_block_list' as well as using 'ONLY_LIST' as the coverage check mode.

    Default:ANY_BLOCK_ID

    C++ Type:std::vector<SubdomainName>

    Controllable:No

    Description:List of subdomains for kernel coverage and material coverage checks. Setting this parameter is equivalent to setting 'kernel_coverage_block_list' and 'material_coverage_block_list' as well as using 'ONLY_LIST' as the coverage check mode.

  • constant_interval1Constant interval (in units of number of time steps) with which to synchronize the NekRS solution

    Default:1

    C++ Type:unsigned int

    Controllable:No

    Description:Constant interval (in units of number of time steps) with which to synchronize the NekRS solution

  • disable_fld_file_outputFalseWhether to turn off all NekRS field file output writing

    Default:False

    C++ Type:bool

    Controllable:No

    Description:Whether to turn off all NekRS field file output writing

  • n_usrwrk_slots0Number of slots to allocate in nrs->usrwrk to hold fields either related to coupling (which will be populated by Cardinal), or other custom usages, such as a distance-to-wall calculation (which will be populated by the user from the case files)

    Default:0

    C++ Type:unsigned int

    Controllable:No

    Description:Number of slots to allocate in nrs->usrwrk to hold fields either related to coupling (which will be populated by Cardinal), or other custom usages, such as a distance-to-wall calculation (which will be populated by the user from the case files)

  • regard_general_exceptions_as_errorsFalseIf we catch an exception during residual/Jacobian evaluation for which we don't have specific handling, immediately error instead of allowing the time step to be cut

    Default:False

    C++ Type:bool

    Controllable:No

    Description:If we catch an exception during residual/Jacobian evaluation for which we don't have specific handling, immediately error instead of allowing the time step to be cut

  • skip_final_field_fileFalseBy default, we write a NekRS field file on the last time step; set this to true to disable

    Default:False

    C++ Type:bool

    Controllable:No

    Description:By default, we write a NekRS field file on the last time step; set this to true to disable

  • solveTrueWhether or not to actually solve the Nonlinear system. This is handy in the case that all you want to do is execute AuxKernels, Transfers, etc. without actually solving anything

    Default:True

    C++ Type:bool

    Controllable:Yes

    Description:Whether or not to actually solve the Nonlinear system. This is handy in the case that all you want to do is execute AuxKernels, Transfers, etc. without actually solving anything

  • synchronization_intervalconstantWhen to synchronize the NekRS solution with the mesh mirror. By default, the NekRS solution is mapped to/receives data from the mesh mirror for every time step.

    Default:constant

    C++ Type:MooseEnum

    Options:constant, parent_app

    Controllable:No

    Description:When to synchronize the NekRS solution with the mesh mirror. By default, the NekRS solution is mapped to/receives data from the mesh mirror for every time step.

  • usrwrk_outputUsrwrk slot(s) to output to NekRS field files; this can be used for viewing the quantities passed from MOOSE to NekRS after interpolation to the CFD mesh. Can also be used for any slots in usrwrk that are written by the user, but unused for coupling.

    C++ Type:std::vector<unsigned int>

    Controllable:No

    Description:Usrwrk slot(s) to output to NekRS field files; this can be used for viewing the quantities passed from MOOSE to NekRS after interpolation to the CFD mesh. Can also be used for any slots in usrwrk that are written by the user, but unused for coupling.

  • usrwrk_output_prefixString prefix to use for naming the field file(s); only the first three characters are used in the name based on limitations in NekRS

    C++ Type:std::vector<std::string>

    Controllable:No

    Description:String prefix to use for naming the field file(s); only the first three characters are used in the name based on limitations in NekRS

  • write_fld_filesFalseWhether to write NekRS field file output from Cardinal. If true, this will disable any output writing by NekRS itself, and instead produce output files with names a01...a99pin, b01...b99pin, etc.

    Default:False

    C++ Type:bool

    Controllable:No

    Description:Whether to write NekRS field file output from Cardinal. If true, this will disable any output writing by NekRS itself, and instead produce output files with names a01...a99pin, b01...b99pin, etc.

Optional Parameters

  • allow_initial_conditions_with_restartFalseTrue to allow the user to specify initial conditions when restarting. Initial conditions can override any restarted field

    Default:False

    C++ Type:bool

    Controllable:No

    Description:True to allow the user to specify initial conditions when restarting. Initial conditions can override any restarted field

  • restart_file_baseFile base name used for restart (e.g. <path>/<filebase> or <path>/LATEST to grab the latest file available)

    C++ Type:FileNameNoExtension

    Controllable:No

    Description:File base name used for restart (e.g. <path>/<filebase> or <path>/LATEST to grab the latest file available)

Restart Parameters

  • control_tagsAdds user-defined labels for accessing object parameters via control logic.

    C++ Type:std::vector<std::string>

    Controllable:No

    Description:Adds user-defined labels for accessing object parameters via control logic.

  • default_ghostingFalseWhether or not to use libMesh's default amount of algebraic and geometric ghosting

    Default:False

    C++ Type:bool

    Controllable:No

    Description:Whether or not to use libMesh's default amount of algebraic and geometric ghosting

  • enableTrueSet the enabled status of the MooseObject.

    Default:True

    C++ Type:bool

    Controllable:No

    Description:Set the enabled status of the MooseObject.

Advanced Parameters

  • kernel_coverage_block_listList of subdomains for kernel coverage check. The meaning of this list is controlled by the parameter 'kernel_coverage_check' (whether this is the list of subdomains to be checked, not to be checked or not taken into account).

    C++ Type:std::vector<SubdomainName>

    Controllable:No

    Description:List of subdomains for kernel coverage check. The meaning of this list is controlled by the parameter 'kernel_coverage_check' (whether this is the list of subdomains to be checked, not to be checked or not taken into account).

  • material_coverage_block_listList of subdomains for material coverage check. The meaning of this list is controlled by the parameter 'material_coverage_check' (whether this is the list of subdomains to be checked, not to be checked or not taken into account).

    C++ Type:std::vector<SubdomainName>

    Controllable:No

    Description:List of subdomains for material coverage check. The meaning of this list is controlled by the parameter 'material_coverage_check' (whether this is the list of subdomains to be checked, not to be checked or not taken into account).

  • material_coverage_checkTRUEControls, if and how a material subdomain coverage check is performed. With 'TRUE' or 'ON' all subdomains are checked (the default). Setting 'FALSE' or 'OFF' will disable the check for all subdomains. To exclude a predefined set of subdomains 'SKIP_LIST' is to be used, while the subdomains to skip are to be defined in the parameter 'material_coverage_block_list'. To limit the check to a list of subdomains, 'ONLY_LIST' is to be used (again, using the parameter 'material_coverage_block_list').

    Default:TRUE

    C++ Type:MooseEnum

    Options:FALSE, TRUE, OFF, ON, SKIP_LIST, ONLY_LIST

    Controllable:No

    Description:Controls, if and how a material subdomain coverage check is performed. With 'TRUE' or 'ON' all subdomains are checked (the default). Setting 'FALSE' or 'OFF' will disable the check for all subdomains. To exclude a predefined set of subdomains 'SKIP_LIST' is to be used, while the subdomains to skip are to be defined in the parameter 'material_coverage_block_list'. To limit the check to a list of subdomains, 'ONLY_LIST' is to be used (again, using the parameter 'material_coverage_block_list').

Simulation Checks Parameters

  • not_zeroed_tag_vectorsExtra vector tags which the system will not zero when other vector tags are zeroed. The outer index is for which nonlinear system the extra tag vectors should be added for

    C++ Type:std::vector<std::vector<TagName>>

    Controllable:No

    Description:Extra vector tags which the system will not zero when other vector tags are zeroed. The outer index is for which nonlinear system the extra tag vectors should be added for

Contribution To Tagged Field Data Parameters

  • parallel_barrier_messagingFalseDisplays messaging from parallel barrier notifications when executing or transferring to/from Multiapps (default: false)

    Default:False

    C++ Type:bool

    Controllable:No

    Description:Displays messaging from parallel barrier notifications when executing or transferring to/from Multiapps (default: false)

  • verbose_multiappsFalseSet to True to enable verbose screen printing related to MultiApps

    Default:False

    C++ Type:bool

    Controllable:No

    Description:Set to True to enable verbose screen printing related to MultiApps

  • verbose_restoreFalseSet to True to enable verbose screen printing related to solution restoration

    Default:False

    C++ Type:bool

    Controllable:No

    Description:Set to True to enable verbose screen printing related to solution restoration

  • verbose_setupfalseSet to 'true' to have the problem report on any object created. Set to 'extra' to also display all parameters.

    Default:false

    C++ Type:MooseEnum

    Options:false, true, extra

    Controllable:No

    Description:Set to 'true' to have the problem report on any object created. Set to 'extra' to also display all parameters.

Verbosity Parameters

  • restore_original_nonzero_patternFalseWhether we should reset matrix memory for every Jacobian evaluation. This option is useful if the sparsity pattern is constantly changing and you are using hash table assembly or if you wish to continually restore the matrix to the originally preallocated sparsity pattern computed by relationship managers.

    Default:False

    C++ Type:bool

    Controllable:No

    Description:Whether we should reset matrix memory for every Jacobian evaluation. This option is useful if the sparsity pattern is constantly changing and you are using hash table assembly or if you wish to continually restore the matrix to the originally preallocated sparsity pattern computed by relationship managers.

  • use_hash_table_matrix_assemblyFalseWhether to assemble matrices using hash tables instead of preallocating matrix memory. This can be a good option if the sparsity pattern changes throughout the course of the simulation.

    Default:False

    C++ Type:bool

    Controllable:No

    Description:Whether to assemble matrices using hash tables instead of preallocating matrix memory. This can be a good option if the sparsity pattern changes throughout the course of the simulation.

Nonlinear System(S) Parameters

  • show_invalid_solution_consoleTrueSet to true to show the invalid solution occurrence summary in console

    Default:True

    C++ Type:bool

    Controllable:No

    Description:Set to true to show the invalid solution occurrence summary in console

Solution Validity Control Parameters

Input Files