Stochastic Tools System Requirements Specification
This template follows INL template TEM-135, "IT System Requirements Specification".
This document serves as an addendum to Framework System Requirements Specification and captures information for SRS specific to the Stochastic Tools application.
Introduction
System Purpose
MOOSE is a tool for solving complex coupled multiphysics equations using the finite element method. MOOSE uses an object-oriented design to abstract data structure management, parallelism, threading, and compiling while providing an easy-to-use interface targeted at engineers who may not have extensive software development experience. MOOSE will require extreme scalability and flexibility when compared to other FEM frameworks. For instance, MOOSE needs the ability to run extremely complex material models, or even third-party applications, within a parallel simulation without sacrificing parallelism. This capability is in contrast to what is often seen in commercial packages, where custom material models can limit the parallel scalability, forcing serial runs in the most severe cases. When comparing high-end capabilities, many MOOSE competitors target modest-sized clusters with just a few thousand processing cores. MOOSE, however, will be required to routinely execute on much larger clusters, with scalability to clusters on the TOP500 list (top500.org). MOOSE will also be targeted at smaller systems such as high-end laptop computers.
The design goal of MOOSE is to give developers ultimate control over their physical models and applications. Designing new models or solving completely new classes of problems will be accomplished by writing standard C++ source code within the framework's class hierarchy. Scientists and engineers will be free to implement completely new algorithms using pieces of the framework where possible, and extending the framework's capabilities where it makes sense to do so. Commercial applications do not have this capability, and instead opt for either a more rigid parameter system or a limited application-specific metalanguage.
System Scope
MOOSE's scope is to provide a set of interfaces for building FEM simulations. Abstractions to all underlying libraries are provided.
Solving coupled problems where competing physical phenomena impact one another in a significant nonlinear fashion represents a serious challenge to several solution strategies. Small perturbations in strongly-coupled parameters often have very large adverse effects on convergence behavior. These adverse effects are compounded as additional physics are added to a model. To overcome these challenges, MOOSE employs three distinct yet compatible systems for solving these types of problems.
First, an advanced numerical technique called the JFNK method is employed to solve the most fully-coupled physics in an accurate, consistent way. An example of this would be the effect of temperature on the expansion or contraction of a material. While the JFNK numerical method is very effective at solving fully-coupled equations, it can also be computationally expensive. Moreover, not all physical phenomena in a given model are truly coupled to one another. For instance, in a reactor, the speed of the coolant flow may not have any direct effect on the complex chemical reactions taking place inside the fuel rods. We call such models "loosely-coupled". A robust, scalable system must strike the proper balance between the various modeling strategies to avoid performing unnecessary computations or incorrectly predicting behavior in situations such as these.
MOOSE's MultiApp system will allow modelers to group physics into logical categories where MOOSE can solve some groups fully-coupled and others loosely-coupled. The MultiApp system goes even further by also supporting a "tightly-coupled" strategy, which falls somewhere between the "fully-coupled" and "loosely-coupled" approaches. Several sets of physics can then be linked together into logical hierarchies using any one of these coupling strategies, allowing for several potential solution strategies. For instance, a complex nuclear reactor model might consist of several tightly-coupled systems of fully-coupled equations.
Finally, MOOSE's Transfers system ties all of the physics groups contained within the MultiApp system together and allows for full control over the flow of information among the various groups. This capability bridges physical phenomena from several different complementary scales simultaneously. When these three MOOSE systems are combined, myriad coupling combinations are possible. In all cases, the MOOSE framework handles the parallel communication, input, output and execution of the underlying simulation. By handling these computer science tasks, the MOOSE framework keeps modelers focused on doing research.
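As a brief, non-normative illustration of how the MultiApp and Transfers systems are expressed in practice, the sketch below shows a parent input file that launches a sub-application and copies a field back from it. The block layout follows standard MOOSE input syntax, but the sub-application file name, variable names, and the particular transfer object are assumptions chosen for illustration.

```
# Illustrative parent input: loosely couple a sub-application each time step.
[MultiApps]
  [thermal]
    type = TransientMultiApp          # advance the sub-app alongside the parent
    input_files = 'thermal_sub.i'     # hypothetical sub-application input file
    execute_on = 'timestep_end'
  []
[]

[Transfers]
  [pull_temperature]
    type = MultiAppCopyTransfer       # copy a field solution back from the sub-app
    from_multi_app = thermal
    source_variable = T               # variable names assumed for illustration
    variable = T_parent
  []
[]
```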
MOOSE innovates by building advanced simulation capabilities on top of the very best available software technologies in a way that makes them widely accessible for innovative research. MOOSE is equally capable of solving small models on common laptops and the very biggest FEM models ever attempted—all without any major changes to configuration or source code. Since its inception, the MOOSE project has focused on both developer and computational efficiency. Improved developer efficiency is achieved by leveraging existing algorithms and technologies from several leading open-source packages. Additionally, MOOSE uses several complementary parallel technologies (both the distributed-memory message passing paradigm and shared-memory thread-based approaches are used) to lay an efficient computational foundation for development. Using existing open technologies in this manner helps the developers reduce the scope of the project and keeps the size of the MOOSE code base maintainable. This approach provides users with state-of-the-art finite element and solver technology as a basis for the advanced coupling and solution strategies mentioned previously.
MOOSE's developers work openly with other package developers to make sure that cutting-edge technologies are available through MOOSE, providing researchers with competitive research opportunities. MOOSE maintains a set of objects that hide parallel interfaces while exposing advanced spatial and temporal coupling algorithms in the framework. This accessible approach places developmental technology into the hands of scientists and engineers, which can speed the pace of scientific discovery.
System Overview
System Context
MOOSE is a command-line driven application. This is typical for high-performance software designed to run across several nodes of a cluster. As such, all usage of the software is through any standard terminal program generally available on all supported operating systems. Similarly, for the purpose of interacting with the software, there is only a single user, "the user", who interacts with the software through the command line. MOOSE does not maintain any back-end database or interact with any system daemons. It is an executable, which may be launched from the command line and writes out various result files as it runs.
Figure 1: Usage of MOOSE and MOOSE-based applications.
System Functions
Since MOOSE is a command-line driven application, all functionality provided in the framework is operated through the use of standard UNIX command line flags and the extendable MOOSE input file. The framework is completely extendable so individual design pages should be consulted for specific behaviors of each user-defined object.
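For orientation, a minimal input file has the general shape sketched below: each top-level block configures one pluggable system, and applications extend the set of available blocks and objects. The objects shown are common framework objects, but the exact content is illustrative rather than prescriptive; such a file is passed to the executable with the -i flag described under System Modes and States.

```
# Skeleton of a typical MOOSE input file (illustrative only).
[Mesh]
  [gen]
    type = GeneratedMeshGenerator   # simple structured mesh
    dim = 2
    nx = 10
    ny = 10
  []
[]

[Variables]
  [u]
  []
[]

[Kernels]
  [diff]
    type = Diffusion                # weak form of the Laplacian term
    variable = u
  []
[]

[BCs]
  [left]
    type = DirichletBC
    variable = u
    boundary = left
    value = 0
  []
  [right]
    type = DirichletBC
    variable = u
    boundary = right
    value = 1
  []
[]

[Executioner]
  type = Steady
  solve_type = PJFNK                # preconditioned JFNK, per the discussion above
[]

[Outputs]
  exodus = true
[]
```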
User Characteristics
Framework Developers: These are the core developers of the framework. They will be responsible for following and enforcing the appropriate software development standards. They will be responsible for designing, implementing and maintaining the software.
Developers: A scientist or engineer who utilizes the framework to build his or her own application. This user will typically have a background in modeling and simulation techniques and/or numerical analysis but may only have a limited skill-set when it comes to object-oriented coding and the C++ language. This is our primary focus group. In many cases these developers will be encouraged to contribute their code back to the framework maintainers.
Analysts: These are users that will run the code and perform various analyses on the simulations they run. These users may interact with developers of the system, requesting new features and reporting bugs, and will typically make heavy use of the input file format.
Assumptions and Dependencies
The Stochastic Tools application is developed using MOOSE and is based on various modules; as such, the SRS for Stochastic Tools is dependent upon the files listed at the beginning of this document.
References
Definitions and Acronyms
This section defines, or provides the definition of, all terms and acronyms required to properly understand this specification.
Definitions
- Verification: (1) The process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. (2) Formal proof of program correctness (e.g., requirements, design, implementation reviews, system tests) (24765:2010(E), 2010).
Acronyms
Acronym | Description |
---|---|
FEM | Finite Element Method |
INL | Idaho National Laboratory |
JFNK | Jacobian-Free Newton-Krylov |
LGPL | GNU Lesser General Public License |
MOOSE | Multiphysics Object Oriented Simulation Environment |
NQA-1 | Nuclear Quality Assurance Level 1 |
POSIX | Portable Operating System Interface |
SRS | Software Requirement Specification |
System Requirements
- A POSIX-compliant Unix, including the two most recent versions of MacOS and most current versions of Linux
- 4 GB of RAM for optimized compilation (8 GB for debug compilation), 2 GB per core for execution
- 100 GB disk space
- C++17 compatible compiler (GCC, Clang)
- Python 3.7+
- Git
Functional Requirements
- stochastic_tools: Distributions (an illustrative input sketch follows this group)
- 3.1.1 The system shall provide distribution functions including
- uniform,
- Weibull (3 parameter),
- Kernel Density 1D with a Gaussian kernel and data file as input,
- Kernel Density 1D with a Uniform kernel and data file as input,
- Kernel Density 1D with a Gaussian kernel and data vector as input,
- Kernel Density 1D with a Gaussian kernel and user defined bandwidth,
- Kernel Density 1D with a Gaussian kernel and standard deviation as bandwidth,
- normal,
- truncated normal,
- lognormal,
- Johnson Special Bounded (SB), and
- logistic distributions.
- 3.1.2 The system shall provide a normal distribution with the ability to directly call methods with distribution inputs.
- 3.1.3 The system shall produce an error if a distribution is retrieved with a different type than supplied.
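A non-normative sketch of how several of the distributions above might be declared follows; the type names correspond to the distributions listed, while the parameter names and values are assumptions for illustration.

```
# Illustrative [Distributions] block (parameter names and values are assumed).
[Distributions]
  [uniform]
    type = Uniform
    lower_bound = 1
    upper_bound = 9
  []
  [gauss]
    type = Normal
    mean = 0
    standard_deviation = 1
  []
  [weibull]
    type = Weibull                  # three-parameter form
    location = -1
    scale = 1
    shape = 5
  []
[]
```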
- stochastic_tools: Ics (an illustrative input sketch follows this group)
- 3.2.1 The system shall generate parallel-agnostic random initial conditions using a distribution function.
- 3.2.2 The system shall generate an error if the random initial condition is used with both a distribution and a min or max value defined.
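A random initial condition driven by a distribution, as required above, might be declared as sketched here; the object and parameter names are assumptions for illustration.

```
# Illustrative random initial condition fed by a distribution (assumed names).
[ICs]
  [u_ic]
    type = RandomIC                 # parallel-agnostic random values
    variable = u
    distribution = uniform          # defined in the [Distributions] block
  []
[]
```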
- stochastic_tools: Multiapps (an illustrative input sketch follows this group)
- 3.3.1 The system shall be able to set command line parameter(s) for a sub-application that executes completely from a sample distribution
- for a single parameter,
- for a single parameter for a batch of sub-applications,
- for multiple parameters,
- for vector parameters for a batch of sub-applications,
- for multiple parameters for a batch of sub-applications, and
- for multiple parameters using their global column indexes for a batch of sub-applications.
- 3.3.2 The system shall error when the supplied sampler object operates in a mode that does not allow for command line arguments to be modified.
- 3.3.3 The system shall error when the supplied sampler does not use the correct execution flags.
- 3.3.4 The system shall error when '[]' syntax is not used for all parameters.
- 3.3.5 The system shall error when the provided global column index is out of bounds.
- 3.3.6 The system shall support pulling postprocessor data from a sub-application for each row of sampled data.
- 3.3.7 The system shall support running sub-applications in batches
- on a single processor and
- on multiple processors.
- 3.3.8 The stochastic tools module shall support pulling postprocessor data from a single sub-application running a batch of sampled data
- on a single processor,
- on multiple processors, and
- on multiple processors using in-memory backup.
- 3.3.9 The system shall support running sub-applications with input parameters varying at each time step
- with individual sub-applications and
- with batches of sub-applications using in-memory restore functionality.
- 3.3.10 The SamplerTransientMultiApp object shall error if the 'batch-reset' mode is supplied.
- 3.3.11 The system shall be able to set sub-application command line parameters from a sample distribution
- for a single parameter and
- for multiple parameters.
- 3.3.12 The system shall error when sub-applications are constructed too early and thus cannot be modified by samplers.
- 3.3.13 The system shall error when the number of samples differs from the number of command line parameters.
- 3.3.14 The system shall support the modification of the number of complete sub-application simulations performed with
- normal execution,
- batch execution with memory-based restoring, and
- batch execution with reset-based restoring.
- 3.3.15 The system shall error when the size of a sampler is altered and sub-applications are progressing in time with the main application.
- 3.3.16 The system shall have consistent partitioning between multiapps and sampler for full solves
- with fewer processors than rows in normal mode;
- with more processors than rows in normal mode;
- with specified minimum processors per app in normal mode;
- with fewer processors than rows in batch-reset mode;
- with more processors than rows in batch-reset mode;
- with specified minimum processors per app in batch-reset mode;
- with fewer processors than rows in batch-restore mode;
- with more processors than rows in batch-restore mode;
- with specified minimum processors per app in batch-restore mode;
- error when partitionings do not match.
- 3.3.17 The system shall have consistent partitioning between multiapps and sampler for transient solves
- with fewer processors than rows in normal mode;
- with more processors than rows in normal mode;
- with specified minimum processors per app in normal mode;
- with fewer processors than rows in batch-restore mode;
- with more processors than rows in batch-restore mode;
- with specified minimum processors per app in batch-restore mode;
- error when partitionings do not match.
- 3.3.18 The system shall provide the ability to create a full-solve type sub-application from sampled data from distributions.
- 3.3.19 The system shall provide the ability to create a transient sub-application from the sample data generated from distributions.
- 3.3.20 The system shall provide the ability to set transient sub-application command line parameters from a sample distribution.
- 3.3.21 The system shall support performing complete solves within a sub-application that include perturbed inputs that yield repeatable results
- using normal operation;
- using in memory backup operation;
- using reset operation.
- 3.3.22 The system shall support performing complete solves within a sub-application that include perturbed inputs that yield changing results
- using normal operation;
- using in memory backup operation;
- using reset operation.
- 3.3.23 The system shall be able to perform stochastic simulations of steady-state models while obeying unperturbed command line arguments with:
- command line control in normal mode;
- command line control in batch mode;
- parameter transfer in normal mode;
- parameter transfer in batch-reset mode;
- parameter transfer in batch-restore mode;
- 3.3.24 The system shall be able to perform stochastic simulations of transient models while obeying unperturbed command line arguments with:
- command line control;
- parameter transfer in normal mode;
- parameter transfer in batch mode;
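As a non-normative illustration of the requirements above, the sketch below drives a batch of sub-applications from a sampler and perturbs a command line parameter per sample. SamplerFullSolveMultiApp and the 'batch-restore' mode are named in the requirements; the control object, its parameter names, and the perturbed parameter path are assumptions.

```
# Illustrative sampler-driven MultiApp executing sub-apps in batches.
[MultiApps]
  [runner]
    type = SamplerFullSolveMultiApp   # complete solve per sample row
    input_files = 'sub.i'             # hypothetical sub-application input
    sampler = mc
    mode = batch-restore              # reuse sub-apps between samples
  []
[]

# Hypothetical control that sets a command line parameter for each sample.
[Controls]
  [cli]
    type = MultiAppSamplerControl     # assumed object and parameter names
    multi_app = runner
    sampler = mc
    param_names = 'Materials/conductivity/prop_values'
  []
[]
```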
- stochastic_tools: Reporters (an illustrative input sketch follows this group)
- 3.4.1 The system shall support the calculation of statistics using
- vectors of data from the postprocessing system and include
- confidence level intervals for statistics calculations.
- 3.4.2 The system shall support the calculation of statistics using
- vectors of data from the reporting system, including
- confidence level intervals for statistics calculations
- and error if the supplied type is not supported.
- 3.4.3 The system shall support computing bias-corrected and accelerated confidence level intervals of statistics
- of a vector of data
- of a vector of vector data
- using data that is replicated and
- using data that is distributed.
- 3.4.4 The system shall error when computing confidence level intervals when
- the confidence level intervals are omitted;
- the confidence level intervals are less than or equal to zero;
- the confidence level intervals are greater than or equal to one;
- input is not provided.
- 3.4.5 The system shall support computing percentile confidence level intervals of statistics
- of a vector of data
- of a vector of vector data
- using data that is replicated or
- distributed across processors.
- 3.4.6 The system shall support the ability to compute first, second, and total-effect Sobol sensitivity indices with a reporter.
- 3.4.7 The system shall support the ability to compute Sobol sensitivity indices for vector-type data.
- 3.4.8 The system shall be capable of computing the statistics of a data vector that
- is replicated and
- distributed.
- 3.4.9 The system shall be capable of computing the statistics from vector reporter values.
- 3.4.10 The system shall support the ability to use transferred reporter data to
- compute statistics.
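A statistics calculation with confidence intervals of the kind required above might be configured roughly as sketched below; the reporter value name and the confidence-interval parameter names are assumptions.

```
# Illustrative statistics reporter with percentile confidence intervals.
[Reporters]
  [stats]
    type = StatisticsReporter
    reporters = 'storage/results:avg'   # hypothetical stochastic reporter value
    compute = 'mean stddev'             # statistics to evaluate
    ci_method = percentile              # confidence interval method (assumed name)
    ci_levels = '0.05 0.95'
  []
[]
```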
- stochastic_tools: Samplers (an illustrative input sketch follows this group)
- 3.5.1 The system shall include an Adaptive Importance Sampling method for sampling distribution data.
- 3.5.2 The system shall include a Parallel Subset Simulation method for sampling distribution data.
- 3.5.3 The system shall throw an error when
- the selected sampler type is not of an adaptive type.
- 3.5.4 The system shall include the ability to create a Cartesian product sampling scheme.
- 3.5.5 The CSV Sampler shall read samples from a CSV file while the sample data is
- distributed across processors,
- replicated across processors, and
- distributed across processors with the output also distributed.
- 3.5.6 The CSV Sampler shall sample from a CSV file when column indices are provided.
- 3.5.7 The CSV Sampler shall sample from a CSV file when column names are provided.
- 3.5.8 The system shall support generating random samples of data
- that remain constant in size and
- that are dynamic in size.
- 3.5.9 The system shall support the creation of data sampled from a distribution during the initial setup of a simulation.
- 3.5.10 The system shall support the ability to sample data using the Latin Hypercube method that can operate
- using a global matrix,
- a local matrix,
- or row-by-row.
- 3.5.11 The system shall support the ability to sample data using the Latin Hypercube method with more processors than rows that can operate
- using a global matrix,
- a local matrix,
- or row-by-row.
- 3.5.12 The system shall include a utility that visually displays the results of the Latin Hypercube sampling test.
- 3.5.13 The system shall include a Monte Carlo method for sampling distribution data including
- a uniform distribution distributed across processors,
- a uniform distribution replicated across processors,
- a uniform distribution distributed across processors (output is also distributed),
- a Weibull distribution distributed across processors, and
- a Weibull distribution replicated across processors.
- 3.5.14 The system shall include a nested Monte Carlo sampling scheme where sets of distributions are sampled as nested loops of rows
- in serial;
- in parallel;
- 3.5.15 The system shall error out when the number of nested Monte Carlo loops does not match the number of sets of distributions.
- 3.5.16 The system shall include a SOBOL method for sampling distribution data:
- with the re-sampling matrix and
- without the re-sampling matrix.
- 3.5.17 The system shall error if the SOBOL sampling method is set up with input sampling matrices
- with differing number of rows;
- with differing number of columns; and
- if the matrices are the same.
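Two of the sampling schemes named above might be declared as in the sketch below, drawing from a [Distributions] block such as the one illustrated earlier; the number of rows and the referenced distributions are illustrative.

```
# Illustrative sampler declarations (sample counts and references are assumed).
[Samplers]
  [mc]
    type = MonteCarlo
    num_rows = 100                    # number of sample rows
    distributions = 'uniform weibull'
  []
  [hypercube]
    type = LatinHypercube
    num_rows = 100
    distributions = 'uniform weibull'
  []
[]
```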
- stochastic_tools: Surrogates (an illustrative input sketch follows this group)
- 3.6.1 The system shall demonstrate a Gaussian process surrogate by
- training a Gaussian process model and
- evaluating the trained Gaussian process model.
- 3.6.2 The system shall be able to produce a Gaussian process surrogate with
- a squared exponential kernel;
- an exponential kernel;
- a Matern half-integer kernel.
- 3.6.3 The system shall be able to tune hyperparameters of a Gaussian process surrogate with
- a squared exponential kernel;
- an exponential kernel;
- a Matern half-integer kernel.
- 3.6.4 The system shall throw an error when
- no optimization is selected while parameter tuning is required.
- 3.6.5 The system shall be able to train and evaluate a libtorch-based neural network in the same input file.
- 3.6.6 The system shall be able to train a libtorch-based neural network.
- 3.6.7 The system shall be able to evaluate a previously trained, libtorch-based neural network.
- 3.6.8 The system shall be able to retrain a pretrained and saved libtorch-based neural network.
- 3.6.9 The system shall be able to throw an error if the user requests libtorch-based objects without installing libtorch.
- 3.6.10 The system shall be able to train a libtorch-based neural network using a relative tolerance instead of a fixed epoch number.
- 3.6.11 The system shall support the creation of surrogate models that can be
- trained with replicated stochastic data and
- evaluated separately (with replicated data);
- trained with distributed stochastic data and
- evaluated separately (with distributed data);
- trained with distributed stochastic data;
- evaluated separately with a different number of processors;
- and be trained and evaluated in memory with a single input file.
- 3.6.12 The system shall create a surrogate that evaluates the closest point from training data by
- training then
- evaluating,
- training and loading, and
- using explicitly specified predictors.
- 3.6.13 The system shall demonstrate a POD-RB surrogate (with Dirichlet BC) by
- training using known 4D data
- and then evaluating new samples separately for new data.
- 3.6.14 PODFullSolveMultiApp shall throw an error when
- the trainer object cannot be found.
- 3.6.15 PODSamplerSolutionTransfer shall throw an error when
- the trainer object cannot be found.
- 3.6.16 PODReducedBasisTrainer shall throw an error when
- the variable names cannot be found on sub-applications,
- the number of energy limits and variable names do not match,
- the number of tag names and tag types do not match,
- the Dirichlet tag types do not exist,
- and the residual generation is called before having the basis vectors.
- 3.6.17 PODReducedBasisSurrogate shall throw an error when
- the number of inputs in 'change_rank' and 'new_ranks' is not the same.
- 3.6.18 The system shall demonstrate a POD-RB surrogate (without Dirichlet BC) by
- training using known 3D data,
- saving the eigenvalues,
- then evaluating new samples separately for new data,
- and doing both together in one input file.
- 3.6.19 The system shall compute polynomial chaos coefficients using
- MonteCarlo sampler with Uniform distribution,
- Quadrature sampler with Uniform distribution, and
- Quadrature sampler with Normal distribution.
- 3.6.20 The system shall compute relevant statistics with polynomial chaos expansion including
- statistical moments with Legendre polynomials,
- statistical moments with Hermite polynomials,
- sampler and user defined local sensitivities with Legendre polynomials,
- sampler and user defined local sensitivities with Hermite polynomials, and
- Sobol sensitivity indices.
- 3.6.21 The system shall include the ability to use sparse grid methods to evaluate polynomial chaos expansion coefficients including
- Smolyak and
- Clenshaw-Curtis methods.
- 3.6.22 The system shall throw an error when
- the number of samples does not match the number of results.
- 3.6.23 The system shall demonstrate a polynomial regression surrogate by
- training using known 3D data
- and then evaluating new samples separately for the same data
- and then doing both on another 1D case.
- 3.6.24 The system shall be able to create a polynomial regression surrogate with a vector-type response.
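A trainer and surrogate pairing of the kind covered above might look roughly like the sketch below; the trainer and surrogate type names, their parameters, and the response reporter name are assumptions for illustration.

```
# Illustrative polynomial regression trainer and surrogate (assumed names).
[Trainers]
  [poly_train]
    type = PolynomialRegressionTrainer
    regression_type = ols             # ordinary least squares fit
    max_degree = 2
    sampler = mc
    response = storage/results:avg    # hypothetical stochastic results value
  []
[]

[Surrogates]
  [poly]
    type = PolynomialRegressionSurrogate
    trainer = poly_train              # evaluate using the trained coefficients
  []
[]
```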
- stochastic_tools: Transfers (an illustrative input sketch follows this group)
- 3.7.1 The system shall include the ability to modify parameters for sub-applications using values from a distribution
- on a single processor,
- on multiple processors,
- and on more processors than samples.
- 3.7.2 The system shall include the ability to modify parameters for sub-applications executed in batches using values from a distribution
- on a single processor,
- on multiple processors, and
- on multiple processors using in-memory sub-application restore.
- 3.7.3 The system shall include the ability to transfer stochastic results for two sub-applications.
- 3.7.4 The 'StochasticToolsTransfer' object shall error if the 'execute_on' parameter is defined when the corresponding MultiApp object is running in batch mode.
- 3.7.5 The 'StochasticToolsTransfer' object shall error if the 'execute_on' parameter does not match when the corresponding MultiApp object is running in normal mode.
- 3.7.6 The system shall report a reasonable error if parameters for a transfer between multiapps are provided to a stochastic transfer, which does not currently support this.
- 3.7.7 The system shall support the creation of a sub-application for each row of the stochastic data.
- 3.7.8 The system shall produce an error if neither a 'SamplerTransientMultiApp' nor a 'SamplerFullSolveMultiApp' is provided in SamplerPostprocessorTransfer.
- 3.7.9 The system shall produce an error if the 'result' object in 'SamplerPostprocessorTransfer' is not a 'StochasticResults' object.
- 3.7.10 The system shall support the ability to transfer a single value from each sub-application for a set of stochastic data.
- 3.7.11 The system shall error if the supplied name is invalid when attempting to transfer a single value from a sub-application.
- 3.7.12 When the sub-application solve does not converge, the system shall either
- abort the run,
- transfer the last computed postprocessor value,
- or transfer NaN.
- 3.7.13 The system shall support the ability to transfer reporter data from each sub-application for a set of stochastic data
- in normal mode,
- in batch mode,
- with distributed output,
- with more processors than samples,
- and error if transferring unsupported type.
- 3.7.14 The system shall produce an error if neither a 'SamplerTransientMultiApp' nor a 'SamplerFullSolveMultiApp' is provided in SamplerParameterTransfer.
- 3.7.15 The system shall produce an error if the sampler sub-application does not contain a Control object with the name 'stochastic'.
- 3.7.16 The system shall produce an error if the sampler sub-application does not have a correct Control object with the 'to_control' parameter being of 'SamplerReceiver' type.
- 3.7.17 The system shall produce an error if the supplied vector of real values is not sized correctly within the SamplerParameterTransfer object.
- 3.7.18 The system shall produce an error if the sampling method differs between the sub-application and the associated sub-application data transfer.
- 3.7.19 The system shall be capable of transferring scalar data to sub-applications for each row of the stochastic data
- using a Monte Carlo and
- Sobol sampling scheme.
- 3.7.20 The system shall be capable of transferring vector data to sub-applications for each row of the stochastic data.
- 3.7.21 The system shall error if the transferred vector to a sub-application
- is not sized correctly for stochastic data,
- is not sized uniformly across sub-applications,
- if the vector parameter does not exist, and
- if the sub-application does not consume all of the supplied data.
- 3.7.22 The system shall support the creation of a sub-application for each row of sampled data generated from a Sobol scheme.
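The parameter and result transfers referenced above are typically paired with a SamplerReceiver control named 'stochastic' in the sub-application (see 3.7.15 and 3.7.16); a minimal, non-normative sketch with assumed parameter names follows.

```
# Parent application: push sampled parameters, pull results (assumed names).
[Transfers]
  [set_params]
    type = SamplerParameterTransfer
    to_multi_app = runner
    parameters = 'Kernels/source/value'  # hypothetical controllable parameter
    to_control = 'stochastic'            # SamplerReceiver in the sub-application
  []
  [get_results]
    type = SamplerPostprocessorTransfer
    from_multi_app = runner
    from_postprocessor = avg             # hypothetical postprocessor name
    to_vector_postprocessor = results    # assumed StochasticResults object name
  []
[]

# Sub-application: receiver control named 'stochastic' (see 3.7.15).
[Controls]
  [stochastic]
    type = SamplerReceiver
  []
[]
```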
- stochastic_tools: Vectorpostprocessors (an illustrative input sketch follows this group)
- 3.8.9 The system shall support the collection of stochastic data from multiple sub-applications.
- 3.8.10 The system shall be able to output samples from a sampler using the sampling method
- get global matrix;
- get local matrix;
- get next local row;
- 3.8.11 The system shall be able to output distributed samples from a sampler using the sampling method
- get local matrix;
- get next local row;
- 3.8.12 The system shall be able to output samples from a sampler with
- one column;
- multiple columns;
- a large number of columns;
- 3.8.13 The system shall support the ability to compute first, second, and total-effect Sobol sensitivity indices.
- 3.8.14 The system shall support the ability to compute confidence intervals on Sobol sensitivity indices.
- 3.8.17 The system shall support the collection of stochastic data that is
- replicated on all processors and
- distributed across many processors.
- 3.8.18 The system shall support the labeling of collections of stochastic data
- with a custom prefix and
- without a prefix.
- 3.8.19 The system shall support the collection of stochastic data that
- can be appended into a single data set or
- contain a single file per timestep.
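Collection and output of stochastic data as required above might be configured as sketched here; the object names are taken from the requirements where available, and the remaining parameters are assumptions.

```
# Illustrative collection and CSV output of stochastic data.
[VectorPostprocessors]
  [results]
    type = StochasticResults          # storage for transferred stochastic data
  []
  [sample_data]
    type = SamplerData                # writes the sampler matrix for inspection
    sampler = mc
  []
[]

[Outputs]
  csv = true                          # write vector postprocessor data to CSV
[]
```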
Usability Requirements
Performance Requirements
System Interfaces
System Operations
Human System Integration Requirements
MOOSE is a command line driven application which conforms to all standard terminal behaviors. Specific human system interaction accommodations shall be a function of the end-user's terminal. MOOSE supports optional coloring of its output, subject to the terminal's ability to display color, and this coloring may be disabled.
Maintainability
- The latest working version (defined as the version that passes all tests in the current regression test suite) shall be publicly available at all times through the repository host provider.
- Flaws identified in the system shall be reported and tracked in a ticket or issue based system. The technical lead will determine the severity and priority of all reported issues and assign resources at his or her discretion to resolve identified issues.
- The software maintainers will entertain all proposed changes to the system in a timely manner (within two business days).
- The core framework in its entirety will be made publicly available under the LGPL version 2.0 license.
Reliability
The regression test suite will cover at least 80% of all lines of code at all times. Known regressions will be recorded and tracked (see Maintainability) to an independent and satisfactory resolution.
System Modes and States
MOOSE applications run in normal execution mode when an input file is supplied. However, there are a few other modes that can be triggered with various command line flags as indicated here:
Command Line Flag | Description of mode |
---|---|
-i <input_file> | Normal execution mode |
--split-mesh <splits> | Read the mesh block splitting the mesh into two or more pieces for use in a subsequent run |
--use-split | (implies -i flag) Execute the simulation but use pre-split mesh files instead of the mesh from the input file |
--yaml | Output all object descriptions and available parameters in YAML format |
--json | Output all object descriptions and available parameters in JSON format |
--syntax | Output all registered syntax |
--registry | Output all known objects and actions |
--registry-hit | Output all known objects and actions in HIT format |
--mesh-only (implies -i flag) | Run only the mesh related tasks and output the final mesh that would be used for the simulation |
--start-in-debugger <debugger> | Start the simulation attached to the supplied debugger |
The list of system modes may not be exhaustive, as the system is designed to be extendable to end-user applications. The complete list of command line options for applications can be obtained by running the executable with zero arguments. See the command line usage.
Physical Characteristics
MOOSE is software only with no associated physical media. See System Requirements for a description of the minimum required hardware necessary for running a MOOSE-based application.
Environmental Conditions
Not Applicable
System Security
MOOSE-based applications have no requirements or special needs related to system security. The framework is designed to run completely in user-space with no elevated privileges required nor recommended.
Information Management
The core framework in its entirety will be made publicly available on an appropriate repository hosting site. Backups and security services will be provided by the hosting service.
Policies and Regulations
MOOSE-based applications must comply with all export control restrictions.
System Life Cycle Sustainment
MOOSE-based development follows various agile methods. The system is continuously built and deployed in a piecemeal fashion since objects within the system are more or less independent. Every new object requires a test, which in turn requires an associated requirement and design description. Some MOOSE-based development teams follow the NQA-1 standards.
Packaging, Handling, Shipping and Transportation
No special requirements are needed for packaging or shipping any media containing MOOSE source code. However, some MOOSE-based applications may be export controlled, in which case all export control restrictions must be adhered to when packaging and shipping media.
Verification
The regression test suite will employ several verification tests using comparison against known analytical solutions, the method of manufactured solutions, and convergence rate analysis.