# HPC Cluster
The following instructions are for those operating on an HPC cluster.
It is entirely possible that your cluster's libraries are too old. If the instructions below fail for this reason, please follow our regular Linux instructions instead for an easier, but lower-performance, installation.
## Prerequisites

### Minimum System Requirements
In general, the following is required for MOOSE-based development:
A POSIX-compliant Unix-like operating system. This includes any modern Linux-based operating system (e.g., Ubuntu, Fedora, Rocky, etc.), or a Macintosh machine running either of the last two macOS releases.
| Hardware | Information |
| --- | --- |
| CPU Architecture | x86_64, ARM (Apple Silicon) |
| Memory | 8 GB (16 GB for debug compilation) |
| Disk Space | 30 GB |
| Libraries | Version / Information |
| --- | --- |
| GCC | 8.5.0 - 12.2.1 |
| LLVM/Clang | 10.0.1 - 16.0.6 |
| Intel (ICC/ICX) | Not supported at this time |
| Python | 3.9 - 3.11 |
| Python Packages | packaging pyaml jinja2 |
- CMake. A modern version of CMake is required to build some of the meta packages we need to include in PETSc.
- Python 3.x development libraries.
Your cluster will most likely have these requirements available via some form of environment management software. If you are unfamiliar with how to manage your environment or unsure how to obtain the above requirements, please consult with your cluster administrators.
## Activate Environment
Activate your desired MPI environment (refer to your cluster administrators on how to do this). This usually involves `module load` commands. Please note again that Intel compilers are not supported.
Sometimes after loading a proper MPI environment, it is still necessary to set some variables. Check to see if the following variables are set:
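A quick way to check is to print the common compiler variables; the exact set your cluster uses may differ, but these are the usual suspects:

```shell
# Print the compiler wrapper variables (empty output means they are unset)
echo $CC $CXX $FC
```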
If nothing returns, or what does return does not follow MPI naming conventions (e.g., `$CC` is `gcc` and not `mpicc` like we need), you need to set these variables manually each and every time you load said environment:
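A typical set of exports looks like the following; the wrapper names (`mpicc`, `mpicxx`, `mpif90`, `mpif77`) are the common MPICH/Open MPI conventions, and your cluster's wrappers may be named differently:

```shell
# Point the compiler variables at the MPI wrappers
export CC=mpicc CXX=mpicxx FC=mpif90 F90=mpif90 F77=mpif77
```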
For this reason you may wish to add the above export command to your shell startup profile so that these variables are always set. Achieving this is different for each shell type (and there are a number of them). You'll want to read up on how to do this for your particular shell.
You can determine what shell your environment operates within by running the following command:
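One simple way (there are others) is to print the name of the currently running shell:

```shell
# Print the name of the current shell (e.g., bash, zsh, tcsh)
echo $0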
## Cloning MOOSE
MOOSE is hosted on GitHub and should be cloned directly from there using git. We recommend creating a `~/projects` directory to contain all of your MOOSE-related work, but you are free to choose any location you wish.
To clone MOOSE, run the following commands in a terminal:
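A typical sequence is sketched below, assuming the `~/projects` location suggested above (the repository URL is the official `idaholab/moose` GitHub repository):

```shell
# Create the projects directory and clone the stable master branch of MOOSE
mkdir -p ~/projects
cd ~/projects
git clone https://github.com/idaholab/moose.git
cd moose
git checkout master
```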
The master branch of MOOSE is the stable branch that will only be updated after all tests are passing. This protects you from the day-to-day changes in the MOOSE repository.
## PETSc, libMesh, and WASP
MOOSE requires several support libraries in order to build or run properly. These libraries (PETSc, libMesh, and WASP) can be built using our supplied scripts:
`MOOSE_JOBS` is a loosely influential environment variable that dictates how many cores to use when executing many of our scripts. While operating on INL HPC login nodes alongside everyone else, it is a courtesy to limit your CPU core usage. We prefer that users limit themselves to 6:

```shell
export MOOSE_JOBS=6
```
`METHODS` is an influential environment variable that dictates how to build libMesh. If this variable is not set, libMesh will by default build 4 methods (taking 4x longer to finish). Most of the time, folks will want to use only the optimized method:

```shell
export METHODS=opt
```
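With those variables set, the support libraries can be built with the supplied scripts. The invocation below assumes the `~/projects/moose` clone location; the script names are those shipped in the `moose/scripts` directory:

```shell
# Build PETSc, libMesh, and WASP (each can take a while)
cd ~/projects/moose/scripts
./update_and_rebuild_petsc.sh
./update_and_rebuild_libmesh.sh
./update_and_rebuild_wasp.sh
```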
## Build and Test MOOSE
To build MOOSE run the following commands:
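A typical build, assuming the `~/projects/moose` clone location and the `MOOSE_JOBS` limit chosen above:

```shell
# Build the MOOSE test application
cd ~/projects/moose/test
make -j 6
```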
To test MOOSE, run the following commands:
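Again assuming the `~/projects/moose` clone location:

```shell
# Run the MOOSE test suite
cd ~/projects/moose/test
./run_tests -j 6
```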
Some tests are SKIPPED. This is normal, as some tests are specific to available resources or some other constraint your machine does not satisfy. If you see failures, or you see MAX FAILURES, that's a problem! And it needs to be addressed before continuing:
Supply a report of the actual failure (scroll up a ways). A snippet showing only the failure summary (such as one created with `./run_tests -i always_bad`) does not give the full picture; instead, you need to scroll up and report the actual error.
Now that you have a working MOOSE and know how to make your MPI wrapper available, proceed to 'New Users' to begin your tour of MOOSE!