MOOSE Tools System Design Description

This template follows INL template TEM-140, "IT System Design Description."

Introduction

The main objective of creating a set of support utilities such as MOOSE Tools is to provide application and framework developers with a resource that saves time and offers tailored capabilities not otherwise attainable. The MOOSE Tools mission is just that: to provide a set of capabilities for engineers and scientists using the MOOSE framework that supports testing, verification, analysis, and documentation of their simulations and applications.

MOOSE Tools is designed to be as easy and straightforward as possible for scientists and engineers to use. MOOSE is meant to be approachable by non-computational scientists who use MOOSE to perform their research. Thus, MOOSE Tools has grown and developed alongside MOOSE to be as effective as possible in supporting these aims. This has led to many of the unique features of MOOSE Tools:

  • A general testing platform that locates arbitrarily placed test cases within a code base and facilitates highly parallel testing

  • A general documentation platform that can use source code, markdown, and code input to create flexible websites, reports, and presentations directly from the code repository

  • Utilities for manufactured solution code verification

  • Utilities for data manipulation, conversion, and differencing

  • Integrated, automatic, and rigorous testing

  • Rapid, continuous integration development cycle

  • Codified, rigorous path for contributing

Each of these characteristics is meant to build trust in the framework among those attempting to use it, by providing comprehensive solutions for their general needs coupled with an extensible system for new requirements. Ultimately, the decision to utilize code contained in MOOSE Tools comes down to whether or not you trust the code in the utilities, and those developing it, to support your desired use case. No matter the technical capabilities of a code, without trust users will look elsewhere. This is especially true of those not trained in software development or computational science.

Developing trust in a code base goes beyond utilizing "best practices" for the code developed; it is equally important that the code itself is built upon tools that are trusted. For this reason, MOOSE Tools relies on well-known, community-driven libraries and utilities to perform some of its functions. See the MOOSE Tools Software Library List for more information.

With these principles in mind, an open-source, flexible tool to complement the greater MOOSE ecosystem was conceived. MOOSE Tools is an ongoing project started to support MOOSE itself and is aimed toward a common platform for code testing, documentation, verification, and data support. This document provides design details pertinent to application developers as well as framework developers.

Use Cases

The set of MOOSE Tools utilities is targeted at two main groups of actors: Developers and Users. Developers are the primary group; these are typically students and professionals trained in science and engineering fields with some level of coding experience but typically very little formal software development training. The second group, Users, are those who intend to use an application built upon the MOOSE framework without writing any computer code themselves. Instead, they may modify or create input files for driving a simulation, run the application, and analyze the results. Analysis, testing and verification of newly created code, and documentation support are then handled through MOOSE Tools utilities. All interaction with MOOSE Tools is primarily through the command-line interface.

System Purpose

MOOSE Tools is a set of Python utilities designed to support the MOOSE framework. MOOSE Tools contains two main utilities – MOOSE Documentation System (MooseDocs) and TestHarness – that facilitate MOOSE documentation (e.g., website and presentation building) and code testing. While tied to the framework most directly (and housed within the MOOSE code repository), MOOSE Tools can be used in a more general sense to support other codes. More specific information on the design and structure of MOOSE Tools utilities can be found on the main documentation page.

The design goal of MOOSE Tools is to give code developers access to common tools and resources to help them verify, test, and document their code and simulations, as well as make best use of the input and output data obtained in the process of performing research. To this end, many of the Tools utilities are extensible and flexible, like MOOSE itself, and therefore are designed to be expanded and reconfigured to meet the needs of the end-user.

System Scope

The scope of MOOSE Tools is to provide a set of utilities that support MOOSE development: namely, verification, testing, documentation, and data analysis and manipulation of MOOSE code and Input/Output (I/O). Tools code is written in a general way so that it can be extended and tuned to the needs of the user or developer. The two major systems of the Tools utilities are described in the following sections; more information on the support utilities also contained within MOOSE Tools can be found on the main documentation page.

MooseDocs

The MOOSE Documentation System (MooseDocs) facilitates documentation of the MOOSE and MOOSE Tools code bases, as well as supports the MOOSE Software Quality Assurance (SQA) practices. It contains many extensions for website rendering, navigation, linking, bibliographic references, and support for integration of code and input file snippets (among many more features), as well as capabilities for development of training and presentation slides and reports.
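
A MooseDocs build is typically driven by a small YAML configuration file that lists content directories and the renderer to use. The sketch below is a hypothetical, minimal configuration illustrating that shape; the directory paths are assumptions and will differ per project:

```yaml
# Minimal, illustrative MooseDocs configuration (paths are hypothetical)
Content:
    - doc/content                         # this project's markdown pages
    - ${MOOSE_DIR}/framework/doc/content  # shared framework documentation
Renderer:
    type: MooseDocs.base.MaterializeRenderer  # render an HTML website
```

Additional extensions (e.g., for application syntax or bibliographies) are enabled through further entries in this file.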

TestHarness

The TestHarness system is responsible for finding tests and running them. The extended philosophy behind MOOSE testing can be found in the MOOSE Test System documentation page, and this philosophy has driven the creation and design choices of the TestHarness system.

Within MOOSE there are three different testing ideas:

  1. The "tests": typically "Regression Tests" consisting of input files and known good outputs ("gold" files).

  2. Unit tests that exercise the functionality of small, separable pieces.

  3. The TestHarness: a piece of software written to _run_ tests and aggregate the results.

The TestHarness integrates with the MOOSE continuous integration (CI) and continuous deployment (CD) workflows to facilitate testing across multiple operating systems and hardware architectures.
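
Regression tests are declared in specification files that the TestHarness discovers and runs. A hypothetical, minimal specification might look like the following; the test name and file names are illustrative only:

```
[Tests]
  [diffusion_test]
    type = 'Exodiff'                    # compare Exodus output against a gold file
    input = 'simple_diffusion.i'        # input file to run
    exodiff = 'simple_diffusion_out.e'  # gold file to compare against
  []
[]
```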

Dependencies and Limitations

MOOSE Tools has several dependencies on other software packages, and its scope is constantly evolving based upon funding, resources, priorities, and lab direction as the MOOSE framework expands. However, the software is open source, and many feature requests and even bug fixes can be offloaded to developers with appropriate levels of knowledge and direction from the main design team. The primary software dependencies are listed in the MOOSE Tools Software Library List. This list is not meant to be exhaustive. Individual operating systems may require specific packages to be installed prior to using MOOSE Tools, which can be found on the Install MOOSE pages.

Definitions and Acronyms

This section defines all terms and acronyms required to properly understand this specification.

Definitions

  • Pull (Merge) Request: A proposed change to the software (usually a code change, but it may also include documentation, requirements, design, and/or testing).

  • Baseline: A specification or product (e.g., project plan, maintenance and operations (M&O) plan, requirements, or design) that has been formally reviewed and agreed upon, that thereafter serves as the basis for use and further development, and that can be changed only by using an approved change control process (NQA-1, 2009).

  • Validation: Confirmation, through the provision of objective evidence (e.g., acceptance test), that the requirements for a specific intended use or application have been fulfilled (24765:2010(E), 2010).

  • Verification: (1) The process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. (2) Formal proof of program correctness (e.g., requirements, design, implementation reviews, system tests) (24765:2010(E), 2010).

Acronyms

Acronym   Description
API       Application Programming Interface
CD        continuous deployment
CI        continuous integration
DOE-NE    Department of Energy, Nuclear Energy
HPC       High Performance Computing
I/O       Input/Output
INL       Idaho National Laboratory
MOOSE     Multiphysics Object Oriented Simulation Environment
MPI       Message Passing Interface
SQA       Software Quality Assurance

Design Stakeholders and Concerns

Design Stakeholders

Stakeholders for MOOSE Tools include its funding sources, such as the Department of Energy, Nuclear Energy (DOE-NE) and the INL. However, since MOOSE Tools is an open-source project, several universities, companies, and foreign governments also have an interest in the development and maintenance of the MOOSE Tools project.

Stakeholder Design Concerns

Concerns from many of the stakeholders are similar. These concerns include correctness, stability, and performance, and each is mitigated as follows. For correctness, MOOSE development requires either regression or unit testing for all new code added to the repository; the project contains several comparisons against analytical solutions where possible, as well as other verification methods such as the Method of Manufactured Solutions (MMS). For stability, MOOSE maintains multiple branches to incorporate several layers of testing, both internally and for dependent applications. Finally, performance tests are performed as part of the normal testing suite to monitor the impact of code changes on performance.

System Design

MOOSE Tools is composed of a wide range of utilities. Each utility generally consists of a single Python object or a small set of Python objects or interfaces intended to be specialized and combined by a Developer to support a specific code base, application, or simulation. To accomplish this design goal, MOOSE Tools utilities generally use a particular design pattern, consisting of a central core of common functionality extended by code designed for specific tasks. Users needing to extend or create new MOOSE Tools utilities may, in many cases, use the main code as a Python package and specialize or modify it to provide an implementation meeting their needs. The design of each of these systems is documented on the MOOSE homepage. Additionally, up-to-date documentation extracted from the source is maintained on the same documentation site after every successful merge to MOOSE's stable branch.
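
The core-plus-extensions pattern described above can be sketched in a few lines of Python. The class names here are purely illustrative, not actual MOOSE Tools APIs:

```python
# Hypothetical sketch of the "central core + task-specific extension" pattern;
# class names are illustrative and do not correspond to real MOOSE Tools code.

class Extension:
    """Base interface: one unit of task-specific behavior plugged into a core."""
    def execute(self, data):
        raise NotImplementedError

class Translator:
    """Core object holding common functionality; runs each registered extension."""
    def __init__(self):
        self._extensions = []

    def register(self, extension):
        self._extensions.append(extension)

    def run(self, data):
        # The core provides the pipeline; extensions provide the behavior.
        for ext in self._extensions:
            data = ext.execute(data)
        return data

class UpperCaseExtension(Extension):
    """A trivial task-specific extension."""
    def execute(self, data):
        return data.upper()

core = Translator()
core.register(UpperCaseExtension())
print(core.run("moose tools"))  # MOOSE TOOLS
```

A Developer supporting a specific application would supply their own `Extension`-like specializations while reusing the common core unchanged.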

System Structure

The MOOSE Tools system consists of multiple utilities. Each utility has its own unique structure, generally containing a set of core functionality surrounded by extensions or flexible methods for user/developer usage. Links to design documentation for MOOSE Tools can be found below in Table 1.

Table 1: MOOSE Tools utilities and their design documentation.

TestHarness: Tool for testing that applications work correctly as code is developed.
Memory Logger: Tool for gathering memory usage of a running process.
CSVDiff Tool: Tool for computing differences between comma-separated value (CSV) files.
Method of Manufactured Solutions (MMS): Utilities for verifying solves with the method of manufactured solutions.
free_energy.py: Tool for extracting MOOSE parsed function expressions from thermodynamic database files.
moosetree: Tool for building and searching tree structures.
pyhit: Tool for reading, writing, and manipulating MOOSE input files.
Combine CSV: Tool for combining CSV files together.
MOOSE SQA Tools: Tools for managing SQA documentation.
ReporterReader: Tool for reading the JSON output of Reporter data.
Module Hash Tool: Tool for generating a hash suffix for contributed modules.
MOOSE Documentation System (MooseDocs): Tool for creating documentation.
MooseControl: Tool for interacting with a WebServerControl.
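
To make the role of a tool such as CSVDiff concrete, the sketch below shows the kind of tolerance-based comparison it performs. This is not the actual CSVDiff implementation, only a self-contained illustration of the idea; the function name and tolerance scheme are assumptions:

```python
# Illustrative sketch of tolerance-based CSV comparison (NOT the real CSVDiff).
import csv
import io

def csv_diff(text_a, text_b, rel_tol=1e-6):
    """Return a list of (row, column, a, b) entries whose values differ
    by more than a relative tolerance."""
    rows_a = list(csv.DictReader(io.StringIO(text_a)))
    rows_b = list(csv.DictReader(io.StringIO(text_b)))
    diffs = []
    for i, (ra, rb) in enumerate(zip(rows_a, rows_b)):
        for key in ra:
            a, b = float(ra[key]), float(rb[key])
            if abs(a - b) > rel_tol * max(abs(a), abs(b), 1.0):
                diffs.append((i, key, a, b))
    return diffs

gold = "time,value\n0,1.0\n1,2.0\n"
result = "time,value\n0,1.0\n1,2.5\n"
print(csv_diff(gold, result))  # [(1, 'value', 2.0, 2.5)]
```

A "gold" file in a test directory plays the role of `gold` here: the test passes only when the freshly computed output matches it within tolerance.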

The design of these utilities is fluid and is managed through agile methods and the issue (ticket) request system on the MOOSE repository website.

Data Design and Control

At a high level, the system is designed to process configuration input, source code documentation, and other support files in order to fulfill its design function: supporting the MOOSE framework and MOOSE-based applications. Some components of the utilities may in turn load other file-based resources, such as secondary configuration files or data files, to complete their processes. The system then assembles its prerequisites and performs its function using the libraries of the Code Platform, producing the outputs associated with its design function: a website, test results, a manufactured solution, translated data, etc. An example of this is the MooseDocs rendering of documentation webpages: configuration files, markdown documentation, code documentation, input files, and source code itself are assembled and then rendered to a functional conclusion (website, presentation, or document) using the MooseDocs code extension systems.

Human-Machine Interface Design

MOOSE Tools utilities are command-line driven programs. All interaction with MOOSE and MOOSE-based codes and data is ultimately done through the command line. This is typical for HPC applications that use the MPI interface for running on computing clusters.

System Design Interface

All external system interaction is performed either through file I/O or through local Application Programming Interface (API) calls. MOOSE Tools is not designed to interact with any external system directly through remote procedure calls.

Security Structure

MOOSE Tools does not require any elevated privileges to operate and does not run any stateful services, daemons, or other network programs. Distributed runs rely on the MPI library.

Requirements Cross-Reference

  • python: MOOSE Tools
  • 2.1.11 The system shall include a utility for reporting the status of software quality items.

    Specification(s): check

    Design: MOOSE Tools

    Issue(s): #12049

    Collection(s): FAILURE_ANALYSIS

    Type(s): PythonUnitTest

  • 2.1.12 The system shall include a utility for generating documentation stub pages.

    Specification(s): generate

    Design: MOOSE Tools

    Issue(s): #16155

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.1.13 The system shall include a utility for displaying the application syntax tree.

    Specification(s): syntax

    Design: MOOSE Tools

    Issue(s): #16155

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.1.14 The system shall include a utility for converting markdown documentation to other formats.

    Specification(s): build

    Design: MOOSE Tools

    Issue(s): #16807, #18137

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.1.15 The system shall include a utility to initialize documentation items.

    Specification(s): init

    Design: MOOSE Tools

    Issue(s): #16868

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.1.38 The system shall include data structures for reporting requirement information.

    Specification(s): Requirement

    Design: MOOSE Tools

    Issue(s): #12049

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.1.39 The system shall include a tool for checking application syntax for software quality documentation.

    Specification(s): check_syntax

    Design: MOOSE Tools

    Issue(s): #12049

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.1.40 The system shall include a tool for gathering requirement information for software quality documentation.

    Specification(s): get_requirements

    Design: MOOSE Tools

    Issue(s): #12049

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.1.41 The system shall include a tool for testing requirement information for software quality documentation.

    Specification(s): check_requirements

    Design: MOOSE Tools

    Issue(s): #12049

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.1.42 The system shall have reporting tools for software quality.

    Specification(s): SQAReport

    Design: MOOSE Tools

    Issue(s): #12049

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.1.43 The system shall have tools for reporting software quality status that include reports for monitoring
    1. files and websites;
    2. tests and requirements; and
    3. design content.

    Specification(s): reports/SQADocumentReport, reports/SQARequirementReport, reports/SQAMooseAppReport

    Design: MOOSE Tools

    Issue(s): #12049

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.1.44 The system shall include a tool for gathering multiple software quality reports.

    Specification(s): get_reports

    Design: MOOSE Tools

    Issue(s): #12049

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.1.45 The system shall include a utility to enable the recording of logging messages without display.

    Specification(s): silent_logging

    Design: MOOSE Tools

    Issue(s): #12049

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.1.46 The system shall include a utility to enable control of logging messages for software quality reports.

    Specification(s): LogHelper

    Design: MOOSE Tools

    Issue(s): #12049

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.1.47 The system shall include documented requirement collections and SQA template files.

    Specification(s): documents

    Design: MOOSE Tools

    Issue(s): #12049

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.1.48 The system shall include the necessary components for building a syntax tree from application information.

    Specification(s): base

    Design: MOOSE Tools

    Issue(s): #6699, #12049

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.1.49 The system shall include a utility for generating a complete syntax tree from an application executable.

    Specification(s): tree

    Design: MOOSE Tools

    Issue(s): #6699, #12049

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.2 The system shall contain python utilities that include a messaging interface.

    Specification(s): mooseMessage

    Design: MOOSE Tools

    Issue(s): #11189

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.3 The system shall contain python utilities that include a messaging interface capable of creating a dialog window.

    Specification(s): mooseMessageDialog

    Design: MOOSE Tools

    Issue(s): #11189

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.4 The system shall contain python utilities for reading CSV data via pandas.DataFrame.

    Specification(s): moose_data_frame

    Design: MOOSE Tools

    Issue(s): #11189

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.5 The system shall contain python utilities for reading postprocessor data.

    Specification(s): postprocessors

    Design: MOOSE Tools

    Issue(s): #11189

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.6 The system shall contain python utilities for reading vector postprocessor data.

    Specification(s): vector_postprocessors

    Design: MOOSE Tools

    Issue(s): #11189

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.7 The system shall contain python utilities for reading reporter data.

    Specification(s): reporters

    Design: MOOSE Tools

    Issue(s): #17391

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.8 The system shall contain python utilities for converting camel case text to underscore separated text.

    Specification(s): camel

    Design: MOOSE Tools

    Issue(s): #11189

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest
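
A minimal sketch of such a camel-case conversion follows. It is hypothetical and not the actual MOOSE Tools implementation; the function name is an assumption:

```python
import re

def camel_to_underscore(text):
    """Convert CamelCase text to underscore-separated lowercase.
    (Illustrative sketch, not the real MOOSE Tools utility.)"""
    # Insert an underscore before every uppercase letter except the first
    # character, then lowercase the whole string.
    return re.sub(r'(?<!^)(?=[A-Z])', '_', text).lower()

print(camel_to_underscore("TestHarness"))  # test_harness
```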

  • 2.2.9 The system shall contain python utilities for reading YAML files.

    Specification(s): yaml_load

    Design: MOOSE Tools

    Issue(s): #11189

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.10 The system shall contain python utilities for breaking a list of items into a specified number of chunks.

    Specification(s): make_chunks

    Design: MOOSE Tools

    Issue(s): #11189

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest
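
Such chunking can be sketched as follows; this is an illustrative stand-in, not the actual MOOSE Tools implementation, and the function name is an assumption:

```python
def make_chunks(items, n):
    """Split 'items' into n chunks whose sizes differ by at most one.
    (Illustrative sketch, not the real MOOSE Tools utility.)"""
    size, extra = divmod(len(items), n)
    chunks = []
    start = 0
    for i in range(n):
        # The first 'extra' chunks each absorb one leftover item.
        end = start + size + (1 if i < extra else 0)
        chunks.append(items[start:end])
        start = end
    return chunks

print(make_chunks(list(range(7)), 3))  # [[0, 1, 2], [3, 4], [5, 6]]
```

Even chunking like this is what lets a test runner distribute work across parallel workers.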

  • 2.2.11 The system shall include a utility for locating a MOOSE-based application executable.

    Specification(s): find_moose_executable

    Design: MOOSE Tools

    Issue(s): #15017

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.12 The system shall include utilities for executing version control system commands.

    Specification(s): gitutils

    Design: MOOSE Tools

    Issue(s): #11189

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.13 The system shall include a utility for running an executable.

    Specification(s): run_executable

    Design: MOOSE Tools

    Issue(s): #15996

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.14 The system shall include a tool for accessing CIVET testing results.

    Specification(s): civet_results

    Design: MOOSE Tools

    Issue(s): #11189

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.15 The system shall contain python utilities for reading PerfGraphReporter data.

    Specification(s): perfgraph

    Design: MOOSE Tools

    Issue(s): #16256

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.16 The system shall contain python utilities for reading compared CSV files.

    Specification(s): csvdiff

    Design: MOOSE Tools

    Issue(s): #20032

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.17 The system shall contain a python postprocessor class to combine CSV files together.

    Specification(s): combine_csv

    Design: MOOSE Tools

    Issue(s): #13988

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.18 The system shall contain a python utility for parsing hierarchical input text (HIT) files.

    Specification(s): parser

    Design: MOOSE Tools

    Issue(s): #11189

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.19 The system shall contain a python utility for tokenizing hierarchical input text (HIT) files.

    Specification(s): tokenize

    Design: MOOSE Tools

    Issue(s): #15889

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.20 The system shall include examples for reading, writing, and manipulating input file syntax using python.

    Specification(s): examples

    Design: MOOSE Tools

    Issue(s): #16622

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • python: TestHarness
  • 2.2.21 The system shall report a non-failing status after a predetermined time of no activity.

    Specification(s): long_running

    Design: TestHarness

    Issue(s): #9280

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.22 The system shall support the output of the longest running jobs.

    Specification(s): longest_jobs

    Design: TestHarness

    Issue(s): #16752

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.23 The system shall report a failure when encountering a CSV differential result.

    Specification(s): csvdiffs

    Design: TestHarness

    Issue(s): #11250, #11251

    Collection(s): FAILURE_ANALYSIS

    Type(s): PythonUnitTest

  • 2.2.24 The system shall report a failure when encountering a differential result.

    Specification(s): diff

    Design: TestHarness

    Issue(s): #8373

    Collection(s): FAILURE_ANALYSIS

    Type(s): PythonUnitTest

  • 2.2.25 The system shall report a failure when encountering a differential result with a custom gold directory.

    Specification(s): diff_gold

    Design: TestHarness

    Issue(s): #10647

    Collection(s): FAILURE_ANALYSIS

    Type(s): PythonUnitTest

  • 2.2.26 The system shall restrict tests based on the available dual number derivative vector size.

    Specification(s): min_ad_size

    Design: TestHarness

    Issue(s): #427

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.27 The system shall report a failure during a cyclic dependency event.

    Specification(s): cyclic

    Design: TestHarness

    Issue(s): #427

    Collection(s): FAILURE_ANALYSIS

    Type(s): PythonUnitTest

  • 2.2.28 The system shall not perform a test if said test has a skipped dependency.

    Specification(s): dependency_skip

    Design: TestHarness

    Issue(s): #427

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.29 The system shall report a failure if a test is missing its gold file.

    Specification(s): missing_gold

    Design: TestHarness

    Issue(s): #427

    Collection(s): FAILURE_ANALYSIS

    Type(s): PythonUnitTest

  • 2.2.30 The system shall report a failure if expected output is not reported.

    Specification(s): expect

    Design: TestHarness

    Issue(s): #9933

    Collection(s): FAILURE_ANALYSIS

    Type(s): PythonUnitTest

  • 2.2.31 The system shall report a failure if test output causes a race condition.

    Specification(s): duplicate

    Design: TestHarness

    Issue(s): #427

    Collection(s): FAILURE_ANALYSIS

    Type(s): PythonUnitTest

  • 2.2.32 The system shall report deleted tests as failures when specified with the additional --extra-info option. In all other cases, deleted tests will be treated as skipped tests.

    Specification(s): deleted

    Design: TestHarness

    Issue(s): #427

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.33 The system shall perform all operations required of the TestHarness except executing a test.

    Specification(s): dry_run

    Design: TestHarness

    Issue(s): #8637

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.34 The system shall run only tests designated with display_required.

    Specification(s): dislpay_required

    Design: TestHarness

    Issue(s): #8700, #8701

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.35 The system shall allow users to ignore and override specified prerequisites.

    Specification(s): ignore

    Design: TestHarness

    Issue(s): #427

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.36 The system shall report a failure if a test exceeds a predetermined walltime.

    Specification(s): timeout

    Design: TestHarness

    Issue(s): #427

    Collection(s): FAILURE_ANALYSIS

    Type(s): PythonUnitTest

  • 2.2.37 The system shall report a failure if a test depends on another non-existent test.

    Specification(s): unknown_prereq

    Design: TestHarness

    Issue(s): #427

    Collection(s): FAILURE_ANALYSIS

    Type(s): PythonUnitTest

  • 2.2.38 The system shall report a failure due to issues with input files.

    Specification(s): syntax

    Design: TestHarness

    Issue(s): #9249

    Collection(s): FAILURE_ANALYSIS

    Type(s): PythonUnitTest

  • 2.2.39 The system shall report a failure if a test requires an object not present in the executable.

    Specification(s): required_objects

    Design: TestHarness

    Issue(s): #6781

    Collection(s): FAILURE_ANALYSIS

    Type(s): PythonUnitTest

  • 2.2.40 The system shall skip a test if a required application is unavailable.

    Specification(s): required_apps

    Design: TestHarness

    Issue(s): #11095

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.41 The system shall only perform the validation of test results without executing the test itself.

    Specification(s): should_execute

    Design: TestHarness

    Issue(s): #9932

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.42 The system shall skip syntax-only tests if instructed to do so.

    Specification(s): report_skipped

    Design: TestHarness

    Issue(s): #9359

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.43 The system shall properly run tests using distributed mesh options.

    Specification(s): distributed_mesh

    Design: TestHarness

    Issue(s): #9181

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.44 The system shall supply the necessary resources a test requires, and report when these resources are insufficient to run said test.

    Specification(s): allocations

    Design: TestHarness

    Issue(s): #10272

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.45 The system shall print all caveats pertaining to the test involved.

    Specification(s): extra_info

    Design: TestHarness

    Issue(s): #10272

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.46 The system shall report a failure if a test file is not constructed properly or does not contain valid parameters.

    Specification(s): parser_errors

    Design: TestHarness

    Issue(s): #10400

    Collection(s): FAILURE_ANALYSIS

    Type(s): PythonUnitTest

  • 2.2.47 The system shall perform normal operating procedures on a single provided test spec file.

    Specification(s): arbitrary_tests

    Design: TestHarness

    Issue(s): #11076

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.48 The system shall write the output (stdout|stderr) that an executed test generated to a file as designated by user-supplied arguments.

    Specification(s): write_results

    Design: TestHarness

    Issue(s): #11116

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.49 The system shall be able to perform recovery of a test.

    Specification(s): recover_tests

    Design: TestHarness

    Issue(s): #11492

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.50 The system shall trim output once a threshold has been exceeded.

    Specification(s): trim_output

    Design: TestHarness

    Issue(s): #12167

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.51 The system shall detect and report race conditions that exist in the supplied tests.

    Specification(s): race_conditions

    Design: TestHarness

    Issue(s): #13186

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.52 The system shall detect and report unreadable output in executed commands.

    Specification(s): unreadable_output

    Design: TestHarness

    Issue(s): #14370

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.53 The system shall run only tests which have previously failed.

    Specification(s): failed_tests

    Design: TestHarness

    Issue(s): #14512

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.54 The system shall support restrictions based on the python version available.

    Specification(s): python_version

    Design: TestHarness

    Issue(s): #13903

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.55 The system shall be able to compare computed values against measured data using mean value and standard deviation.

    Specification(s): csvvalidationtester

    Design: TestHarness

    Issue(s): #14511

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.56 The system shall be able to run tests in relative path directories supplied by the spec file.

    Specification(s): working_directory

    Design: TestHarness

    Issue(s): #14962

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.57 The system shall produce a descriptive error with the file and line number when a test specification parameter is unknown.

    Specification(s): unknown_param

    Design: TestHarness

    Issue(s): #14803

    Collection(s): FAILURE_ANALYSIS

    Type(s): PythonUnitTest

  • 2.2.58 The system shall perform a test after all other tests have passed, if specified to do so.

    Specification(s): do_last

    Design: TestHarness

    Issue(s): #15230

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.59 The system shall report multiple failures resulting from SchemaDiff operations.

    Specification(s): schema_expect_err

    Design: TestHarness

    Issue(s): #427

    Collection(s): FAILURE_ANALYSIS

    Type(s): PythonUnitTest

  • 2.2.60 The system shall be able to replay the last results.

    Specification(s): test_show_last_results

    Design: TestHarness

    Issue(s): #22545

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.61 The system shall be able to evaluate a given test with a user-supplied evaluation function.

    Specification(s): custom_eval

    Design: TestHarness

    Issue(s): #22946

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.62 The system shall test for a valid JSON output dump.

    Specification(s): test_failed_json

    Design: TestHarness

    Issue(s): #21967

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.63 The system shall skip tests not capable of being run depending on binary installation type.

    Specification(s): test_install_type

    Design: TestHarness

    Issue(s): #24195

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.64 The system shall skip tests not capable of being run depending on micro architecture.

    Specification(s): test_machine_type

    Design: TestHarness

    Issue(s): #25317

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.65 The system shall not skip non-heavy tests on which heavy tests depend.

    Specification(s): test_soft_heavy

    Design: TestHarness

    Issue(s): #26215

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

  • 2.2.66 The system shall provide a common interface for storing and retrieving output that supports sanitization.

    Specification(s): test_output_interface

    Design: TestHarness

    Issue(s): #427

    Collection(s): FUNCTIONAL

    Type(s): PythonUnitTest

References

  1. ISO/IEC/IEEE 24765:2010(E). Systems and software engineering: Vocabulary. First edition, December 15, 2010.
  2. ASME NQA-1. ASME NQA-1-2008 with the NQA-1a-2009 addenda: Quality Assurance Requirements for Nuclear Facility Applications. First edition, August 31, 2009.