MOOSE Tools Failure Analysis Report

Introduction

MOOSE Tools, as a collection of utilities, is designed to operate as a library of functionality. While each utility is tailored to perform a specific set of functions, the code can be arbitrarily expanded and built upon to create extended capability. This flexibility exists by design: the utilities are typically built around a central core of framework code surrounded by extensions that house specific features (the best example being the MOOSE Documentation System, MooseDocs). With respect to performing failure analysis, this flexibility can be detrimental because there is no single well-defined problem to assess. To minimize the possibility of failure when using these utilities, various automated methods exist for developers. This document discusses these features and includes a list of requirements associated with software failure analysis.

References

No citations exist within this document.

Failure Analysis

MOOSE Tools has three primary methods for handling utility failures, ranging from test input errors to output differential failures. These sources of failure are common across MOOSE and MOOSE-based applications when testing Python-based support tools. The following sections detail the handling of each:

  1. Test input parameter errors,

  2. Test dependency errors, and

  3. Missing objects/files failures.

Beyond these common sources of error, the PythonUnitTest tester (see Python Test Automation (Scripting) for more information) can also capture custom failures defined through Python script input.
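As a minimal sketch of this capability (assuming, as the tester's name suggests, that the supplied script is executed with Python's standard unittest framework), the following hypothetical script illustrates a custom failure check; the file name, class, and checked condition are invented for illustration, and a failed assertion is what the test harness captures as a failure.

# test_example.py -- hypothetical script referenced by a PythonUnitTest entry
import unittest

class TestCustomFailure(unittest.TestCase):
    """Capture a custom failure condition with a plain unittest assertion."""

    def test_tool_reports_success(self):
        # Hypothetical check: the dictionary stands in for output produced by
        # the utility under test. A failed assertion is reported as a test failure.
        result = {'status': 'error', 'message': 'bad input'}
        self.assertEqual(result['status'], 'ok',
                         'custom failure: tool reported %s' % result['message'])

if __name__ == '__main__':
    unittest.main()

In a test specification, such a script would typically be referenced by a block with type = PythonUnitTest and an input parameter pointing at the script file.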

Test Input Parameter Errors

Test specification files used to control MOOSE Tools testing use the same standard HIT syntax as MOOSE itself. The file parser automatically detects syntax mistakes and reports them as errors. For example, consider the following test specification file that contains a duplicate parameter:

[Tests]
  [./syntax]
    type = RunApp
    input = good.i
    check_input = true
    check_input = true
  [../]
[]
(test/tests/test_harness/parse_errors)

If this test specification file is used to run a test, the system automatically reports the error and the line number where it occurred:


parse_errors:5: duplicate parameter "/Tests/syntax/check_input"
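This check happens during parsing of the specification file. Purely as an illustration (not the actual parser), the following sketch shows how a repeated key = value parameter can be flagged with its line number while scanning a specification:

# Illustrative only: a simplified duplicate-parameter check, not the real parser.
def check_duplicates(lines):
    """Report line-numbered errors for repeated 'key = value' parameters."""
    seen = {}
    errors = []
    for lineno, line in enumerate(lines, start=1):
        stripped = line.strip()
        if '=' in stripped and not stripped.startswith('['):
            key = stripped.split('=', 1)[0].strip()
            if key in seen:
                errors.append('line %d: duplicate parameter "%s" (first set on line %d)'
                              % (lineno, key, seen[key]))
            else:
                seen[key] = lineno
    return errors

spec = """[Tests]
  [./syntax]
    type = RunApp
    input = good.i
    check_input = true
    check_input = true
  [../]
[]""".splitlines()

for error in check_duplicates(spec):
    print(error)  # line 6: duplicate parameter "check_input" (first set on line 5)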

Test Dependency Errors

Tests can depend on each other, so trees of test dependencies can result. The testing system automatically handles dependency issues, including a dependency on a non-existent test or a cyclic dependency chain. For an example of a cyclic dependency, consider the following:

[Tests]
  [./testA]
    type = RunApp
    input = foo.i
    prereq = testC
  [../]
  [./testB]
    type = RunApp
    input = foo.i
    prereq = testA
  [../]
  [./testC]
    type = RunApp
    input = foo.i
    prereq = testB
  [../]
[]
(test/tests/test_harness/cyclic_tests)

Following the dependency chain (specified with the prereq parameter), notice that testB and testC depend on each other (testC directly, and testB indirectly through testA), which produces a cyclic dependency. The system detects this and reports an error:


tests/test_harness.testC: Cyclic dependency error!
tests/test_harness.testC: 	testA --> testB --> testC <--> testB
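The TestHarness performs this detection internally. Purely as an illustration (not the actual implementation), the following sketch finds such a cycle with a depth-first search over the prereq graph from the example above:

# Illustrative only: detect a cycle in a 'prereq' dependency graph.
def find_cycle(prereqs):
    """Return a list describing a dependency cycle, or None if no cycle exists."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {name: WHITE for name in prereqs}
    stack = []

    def visit(node):
        color[node] = GRAY
        stack.append(node)
        for dep in prereqs.get(node, []):
            if color.get(dep, WHITE) == GRAY:   # back edge closes a cycle
                return stack[stack.index(dep):] + [dep]
            if color.get(dep, WHITE) == WHITE:
                cycle = visit(dep)
                if cycle:
                    return cycle
        stack.pop()
        color[node] = BLACK
        return None

    for name in prereqs:
        if color[name] == WHITE:
            cycle = visit(name)
            if cycle:
                return cycle
    return None

# Mirrors the example above: testA depends on testC, testC on testB, testB on testA.
deps = {'testA': ['testC'], 'testB': ['testA'], 'testC': ['testB']}
print(' --> '.join(find_cycle(deps)))  # testA --> testC --> testB --> testA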

Missing Objects/Files Failure

Failures are also encountered when files required for the successful completion of a test are missing or incorrect. For example, consider the following test specification, in which comma-separated values (CSV) and Exodus outputs are compared against gold files that do not exist:

[Tests]
  [./exodiff_diff_gold]
    type = Exodiff
    input = exodiff_diff_gold.i
    exodiff = exodiff_diff_gold_out.e
    gold_dir = diff_gold
  [../]

  [./csvdiff_diff_gold]
    type = CSVDiff
    input = csvdiff_diff_gold.i
    csvdiff = csvdiff_diff_gold_out.csv
    gold_dir = diff_gold
  [../]
[]
(test/tests/test_harness/diff_golds)

When these tests are executed, an error is raised during the file comparison. The content of a missing gold file is treated as empty, so the resulting error relates to differences between the test output and an empty file. In the case of the CSVDiff test, the following failure output is produced:


ERROR: In file csvdiff_diff_gold_out.csv: Columns with header 'time' aren't the same length
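To illustrate why a missing gold file produces this message, the following simplified sketch (not the actual CSVDiff utility) compares column lengths between an output CSV file and its gold counterpart; a missing gold file contributes no columns, so every column length differs:

# Illustrative only: a simplified column-length comparison in the spirit of CSVDiff.
import csv
import os

def read_columns(path):
    """Return {header: [values]} for a CSV file, or {} if the file is missing."""
    if not os.path.exists(path):
        return {}
    with open(path, newline='') as handle:
        rows = list(csv.DictReader(handle))
    columns = {}
    for row in rows:
        for key, value in row.items():
            columns.setdefault(key, []).append(value)
    return columns

def compare_csv(out_file, gold_file):
    """Return error strings for columns whose lengths differ between the files."""
    out = read_columns(out_file)
    gold = read_columns(gold_file)
    errors = []
    for header, values in out.items():
        if len(values) != len(gold.get(header, [])):
            errors.append("In file %s: Columns with header '%s' aren't the same length"
                          % (os.path.basename(out_file), header))
    return errors

# With no gold file present every column, including 'time', differs in length:
# compare_csv('csvdiff_diff_gold_out.csv', 'diff_gold/csvdiff_diff_gold_out.csv')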

Failure Analysis Requirements

  • python: Test
  • 2.1.11 The system shall include a utility for reporting the status of software quality items.

    Specification(s): check

    Design: MOOSE Tools

    Issue(s): #12049

    Collection(s): FAILURE_ANALYSIS

    Type(s): PythonUnitTest

  • python: Tests
  • 2.2.23 The system shall report a failure when encountering a CSV differential result.

    Specification(s): csvdiffs

    Design: TestHarness

    Issue(s): #11250, #11251

    Collection(s): FAILURE_ANALYSIS

    Type(s): PythonUnitTest

  • 2.2.24 The system shall report a failure when encountering a differential result.

    Specification(s): diff

    Design: TestHarness

    Issue(s): #8373

    Collection(s): FAILURE_ANALYSIS

    Type(s): PythonUnitTest

  • 2.2.25 The system shall report a failure when encountering a differential result with a custom gold directory.

    Specification(s): diff_gold

    Design: TestHarness

    Issue(s): #10647

    Collection(s): FAILURE_ANALYSIS

    Type(s): PythonUnitTest

  • 2.2.27 The system shall report a failure during a cyclic dependency event.

    Specification(s): cyclic

    Design: TestHarness

    Issue(s): #427

    Collection(s): FAILURE_ANALYSIS

    Type(s): PythonUnitTest

  • 2.2.29 The system shall report a failure if a test is missing its gold file.

    Specification(s): missing_gold

    Design: TestHarness

    Issue(s): #427

    Collection(s): FAILURE_ANALYSIS

    Type(s): PythonUnitTest

  • 2.2.30 The system shall report a failure if expected output is not reported.

    Specification(s): expect

    Design: TestHarness

    Issue(s): #9933

    Collection(s): FAILURE_ANALYSIS

    Type(s): PythonUnitTest

  • 2.2.31 The system shall report a failure if test output causes a race condition.

    Specification(s): duplicate

    Design: TestHarness

    Issue(s): #427

    Collection(s): FAILURE_ANALYSIS

    Type(s): PythonUnitTest

    Prerequisite(s): 2.2.41

  • 2.2.36 The system shall report a failure if a test exceeds a predetermined walltime.

    Specification(s): timeout

    Design: TestHarness

    Issue(s): #427

    Collection(s): FAILURE_ANALYSIS

    Type(s): PythonUnitTest

  • 2.2.37 The system shall report a failure if a test depends on another non-existent test.

    Specification(s): unknown_prereq

    Design: TestHarness

    Issue(s): #427

    Collection(s): FAILURE_ANALYSIS

    Type(s): PythonUnitTest

  • 2.2.38 The system shall report a failure due to issues with input files.

    Specification(s): syntax

    Design: TestHarness

    Issue(s): #9249

    Collection(s): FAILURE_ANALYSIS

    Type(s): PythonUnitTest

  • 2.2.39 The system shall report a failure if a test requires an object not present in the executable.

    Specification(s): required_objects

    Design: TestHarness

    Issue(s): #6781

    Collection(s): FAILURE_ANALYSIS

    Type(s): PythonUnitTest

  • 2.2.46 The system shall report a failure if a test file is not constructed properly or does not contain valid parameters.

    Specification(s): parser_errors

    Design: TestHarness

    Issue(s): #10400

    Collection(s): FAILURE_ANALYSIS

    Type(s): PythonUnitTest

  • 2.2.59 The system shall produce a descriptive error with the file and line number when a test specification parameter is unknown.

    Specification(s): unknown_param

    Design: TestHarness

    Issue(s): #14803

    Collection(s): FAILURE_ANALYSIS

    Type(s): PythonUnitTest

  • 2.2.61 The system shall report multiple failures resulting from SchemaDiff operations.

    Specification(s): schema_expect_err

    Design: TestHarness

    Issue(s): #427

    Collection(s): FAILURE_ANALYSIS

    Type(s): PythonUnitTest