HPC Cluster

The following instructions are for those operating on an HPC cluster.

Note: It is entirely possible that your cluster's libraries are too old. If this turns out to be the case after the instructions below fail, please follow our regular Linux instructions instead for an easier, but lower-performance, installation.

Prerequisites

Minimum System Requirements

In general, the following is required for MOOSE-based development:

  • GCC/Clang C++17-compliant compiler (GCC @ 7.5.0 or greater, Clang @ 10.0.1 or greater)

    • Note: Intel compilers are not supported.

  • Memory: 8 GB of RAM for optimized compilation (16 GB for debug compilation), and roughly 2 GB per core during execution

  • Processor: 64-bit x86 or ARM64 (specifically, Apple Silicon)

  • Disk: 30 GB of free space

  • A Portable Operating System Interface (POSIX) compliant Unix-like operating system, including the two most recent versions of macOS and most current versions of Linux.

  • Git version control system

  • Python @ 3.7 or greater

  • CMake: a modern version is required to build some of the meta packages we need to include in PETSc.

  • Python 3.x Development libraries

Your cluster will most likely have these requirements available via some form of environment management software. If you are unfamiliar with how to manage your environment or unsure how to obtain the above requirements, please consult with your cluster administrators.
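
If you are unsure whether the loaded toolchain meets these requirements, you can check the versions directly. A minimal sketch, assuming a GCC toolchain and that these tools are already on your PATH:


gcc --version
python3 --version
cmake --version
git --version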

Activate Environment

Activate your desired MPI environment (refer to your cluster administrators on how to do this); this usually involves module load commands. Please note again that Intel compilers are not supported.
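
As an illustration only (module names and versions are hypothetical and vary from cluster to cluster), activating an MPI environment might look like:


module load gcc/12.2.0 openmpi/4.1.5
module list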

Sometimes, even after loading a proper MPI environment, it is still necessary to set some variables. Check whether the following variables are set:


echo $CC $CXX $FC $F90 $F77

If nothing is returned, or if the returned values do not follow MPI naming conventions (e.g., $CC is set to gcc rather than mpicc as required), you will need to set them manually each time you load the environment:


export CC=mpicc CXX=mpicxx FC=mpif90 F90=mpif90 F77=mpif77

For this reason, you may wish to add the above export command to your shell startup profile so that these variables are always set. How to achieve this differs for each shell type (and there are a number of them), so consult the documentation for your particular shell.
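
For example, assuming a bash shell (zsh users would target ~/.zshrc instead), you could append the export to your startup profile like so:


echo 'export CC=mpicc CXX=mpicxx FC=mpif90 F90=mpif90 F77=mpif77' >> ~/.bashrc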

You can determine which shell your environment operates within by running the following command:


echo $0
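
For a login shell, this typically prints something like -bash or -zsh, indicating bash or zsh respectively.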

Cloning MOOSE

MOOSE is hosted on GitHub and should be cloned directly from there using git. We recommend creating a directory ~/projects to contain all of your MOOSE related work.

To clone MOOSE, run the following commands in a terminal:


mkdir -p ~/projects
cd ~/projects
git clone https://github.com/idaholab/moose.git
cd moose
git checkout master

Note: The master branch of MOOSE is the stable branch that is only updated after all tests are passing. This protects you from the day-to-day changes in the MOOSE repository.

PETSc, libMesh, and WASP

MOOSE requires several support libraries in order to build or run properly. These libraries (PETSc, libMesh, and WASP) can be built using our supplied scripts:


cd ~/projects/moose/scripts
export MOOSE_JOBS=6 METHODS=opt
./update_and_rebuild_petsc.sh  
./update_and_rebuild_libmesh.sh 
./update_and_rebuild_wasp.sh 

Tip: MOOSE_JOBS is a loosely influential environment variable that dictates how many cores to use when executing many of our scripts.

METHODS is an influential environment variable that dictates how to build libMesh. If this variable is not set, libMesh will build all four methods by default (taking 4x longer to finish).
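
If you want MOOSE_JOBS to match the cores available on your build node, one option on Linux is to query the core count with nproc (be considerate on shared login nodes, where using every core may be discouraged):


export MOOSE_JOBS=$(nproc) METHODS=opt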

Build and Test MOOSE

To build MOOSE, run the following commands:


cd ~/projects/moose/test
make -j 6
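
If you exported MOOSE_JOBS earlier, you can reuse the same value here rather than hard-coding a core count (the same applies to run_tests below):


make -j $MOOSE_JOBS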

To test MOOSE, run the following commands:


cd ~/projects/moose/test
./run_tests -j 6

Some tests will be SKIPPED. This is normal, as some tests depend on available resources or other constraints that your machine may not satisfy. If you see failures, or you see MAX FAILURES, that's a problem, and it needs to be addressed before continuing:

  • Supply a report of the actual failure (scroll up a ways in the output). For example, the following snippet does not give the full picture (created with ./run_tests -i always_bad):

    
    Final Test Results:
    --------------------------------------------------------------------------------
    tests/test_harness.always_ok .................... FAILED (Application not found)
    tests/test_harness.always_bad .................................. FAILED (CODE 1)
    --------------------------------------------------------------------------------
    Ran 2 tests in 0.2 seconds. Average test time 0.0 seconds, maximum test time 0.0 seconds.
    0 passed, 0 skipped, 0 pending, 2 FAILED
    

    Instead, you need to scroll up and report the actual error (a note on narrowing the output follows this example):

    
    tests/test_harness.always_ok: Working Directory: /Users/me/projects/moose/test/tests/test_harness
    tests/test_harness.always_ok: Running command:
    tests/test_harness.always_ok:
    tests/test_harness.always_ok: ####################################################################
    tests/test_harness.always_ok: Tester failed, reason: Application not found
    tests/test_harness.always_ok:
    tests/test_harness.always_ok .................... FAILED (Application not found)
    tests/test_harness.always_bad: Working Directory: /Users/me/projects/moose/test/tests/test_harness
    tests/test_harness.always_bad: Running command: false
    tests/test_harness.always_bad:
    tests/test_harness.always_bad: ###################################################################
    tests/test_harness.always_bad: Tester failed, reason: CODE 1
    tests/test_harness.always_bad:
    tests/test_harness.always_bad .................................. FAILED (CODE 1)
    
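
When investigating a failure, it can also help to rerun a smaller subset of tests rather than the whole suite. The TestHarness offers a number of filtering and verbosity options beyond the -i flag used above, all of which are listed by:


./run_tests --help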

Now that you have a working MOOSE and know how to make your MPI wrappers available, proceed to 'New Users' to begin your tour of MOOSE!