SALAMANDER Verification, Validation, and Example Cases
For software quality assurance (SQA) purposes, SALAMANDER undergoes verification, validation, and benchmarking. The SALAMANDER development team defines these terms as:
Verification: Comparing SALAMANDER predictions against analytical solutions under a range of conditions, typically for simplified cases.
Validation: Comparing SALAMANDER predictions against experimental data.
Benchmarking: Comparing SALAMANDER predictions against other codes.
Note that in addition to monitoring SALAMANDER's performance and reproducibility on the verification and validation cases, the effects of changes made to SALAMANDER are tracked. A suite of automated tests runs via continuous integration using CIVET to identify any changes in SALAMANDER's predictions, thereby ensuring stability and robustness.
Example cases, independent of the SQA cases listed below, have also been added to showcase existing SALAMANDER capabilities and demonstrate potential applications. In all cases, the existing input files should allow users to build on prior work.
Finally, for additional examples of SALAMANDER usage, a list of publications supporting SALAMANDER development can be found here.
SALAMANDER couples with other MOOSE-based applications, such as TMAP8 and Cardinal. These projects maintain their own SQA documentation and examples, which can be found at the following locations:
List of verification cases
SALAMANDER is under active development and does not currently have any verification cases fully documented for users. However, several verification cases are already available in the salamander/test/tests folder.
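
For reference, MOOSE-based applications such as SALAMANDER typically register these cases in HIT-format `tests` specification files, which the CIVET-driven continuous integration discovers and runs automatically. The sketch below illustrates that convention only; the test, input, and output names are hypothetical and do not correspond to actual files in salamander/test/tests.

```
# Hypothetical tests specification sketch for a verification case.
# All names below are placeholders, not actual SALAMANDER content.
[Tests]
  [ion_wall_losses_verification]
    # Run the (hypothetical) input file and compare the resulting CSV
    # output against a stored gold file within a tolerance.
    type = CSVDiff
    input = ion_wall_losses.i
    csvdiff = ion_wall_losses_out.csv
  []
[]
```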
List of validation cases
SALAMANDER is under active development and does not currently have any validation cases available to users.
List of benchmarking cases
| Case | Title |
| --- | --- |
| 1 | Lieberman Ion Wall Losses |
List of example cases
SALAMANDER is under active development and does not currently have any example cases available to users.