ExaConstit provides a modular build system with automated installation scripts for different platforms and backends. The build system automatically handles all dependencies including RAJA, MFEM, ExaCMech, Hypre, and METIS.
- Quick Start
- Build System Architecture
- First-Time Setup
- Advanced Configuration
- Build Locations and Output
- Troubleshooting
- Manual Build
## Quick Start

```bash
# Clone the repository
git clone https://github.com/LLNL/ExaConstit.git

# Create a separate build directory (recommended)
# This keeps source and build artifacts separate
mkdir -p exaconstit_builds
cd exaconstit_builds
```

Note: The build scripts compile all dependencies in your current working directory. Using a separate build directory outside the source tree is strongly recommended to keep your workspace organized.
Edit the appropriate config file in `ExaConstit/scripts/install/configs/` and update:
- Compiler paths and versions
- MPI installation location
- Python executable path
- Module load commands (for HPC systems)
See the Installation Guide for detailed configuration instructions.
Then run the install script for your platform:

```bash
../ExaConstit/scripts/install/unix_cpu_intel_install.sh   # Intel CPU
../ExaConstit/scripts/install/unix_cpu_mac_install.sh     # macOS CPU
../ExaConstit/scripts/install/unix_gpu_cuda_install.sh    # NVIDIA CUDA GPU
../ExaConstit/scripts/install/unix_gpu_hip_install.sh     # AMD HIP GPU
```

Note for MI300A users: Set `HSA_XNACK=1` in your environment before running simulations. This is required due to unified memory requirements and current limitations in MFEM's HIP backend.
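For example, in a job script you might export the flag before launching (the launcher, rank count, and options file below are illustrative, not prescribed by the build system):

```bash
# Required on MI300A: enable unified-memory page migration
export HSA_XNACK=1

# Launch as usual (illustrative launcher and arguments)
srun -n 4 ./ExaConstit/build_hip/mechanics_driver -opt options.toml
```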
## Build System Architecture

The installation framework is organized into three components:

```
scripts/install/
├── common/
│   ├── dependency_versions.sh   # Centralized version control
│   ├── preflight_checks.sh      # Validation and utilities
│   └── build_functions.sh       # Shared build logic
├── configs/
│   ├── cpu_intel_config.sh      # Intel compiler configuration
│   ├── cpu_mac_config.sh        # macOS configuration
│   ├── gpu_cuda_config.sh       # NVIDIA CUDA configuration
│   └── gpu_hip_config.sh        # AMD HIP configuration
└── unix_*_install.sh            # Platform-specific entry points
```
- `common/`: Shared build logic used across all platforms
- `configs/`: Platform-specific compiler paths, flags, and settings
- Entry scripts: Simple launchers that source the appropriate config and common functions (sketched below)
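The entry scripts are intentionally thin. As a rough sketch (the structure and function names here are assumptions, not copied from the repository), an entry point just sources its config plus the shared pieces and then kicks off the build:

```bash
#!/bin/bash
# Illustrative structure of a unix_*_install.sh entry point.
# Function names below are hypothetical; see scripts/install/ for the real ones.
set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

# 1. Platform-specific compilers, paths, and flags
source "${SCRIPT_DIR}/configs/gpu_cuda_config.sh"

# 2. Pinned dependency versions and shared helpers
source "${SCRIPT_DIR}/common/dependency_versions.sh"
source "${SCRIPT_DIR}/common/preflight_checks.sh"
source "${SCRIPT_DIR}/common/build_functions.sh"

# 3. Validate the environment, then build everything (hypothetical calls)
run_preflight_checks
build_all_dependencies
```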
## First-Time Setup

Before running an install script, you'll need to customize the configuration file for your system.

Navigate to the ExaConstit repository and open the appropriate config file in `scripts/install/configs/` with a terminal editor or an IDE such as VS Code:

```bash
cd ExaConstit

# For Intel CPU builds
code scripts/install/configs/cpu_intel_config.sh

# For CUDA GPU builds
code scripts/install/configs/gpu_cuda_config.sh

# For HIP/AMD GPU builds
code scripts/install/configs/gpu_hip_config.sh

# For macOS builds
code scripts/install/configs/cpu_mac_config.sh
```

Each config file has a clearly marked section at the top for system-specific paths. Update these for your environment:
**Intel CPU (`cpu_intel_config.sh`):**

```bash
###########################################
# Compiler Versions and Base Paths
###########################################
INTEL_VERSION="2023.2.1-magic"  # Update to your Intel version
COMPILER_VERSION="intel-${INTEL_VERSION}"
INTEL_BASE="/usr/tce/packages/intel/${COMPILER_VERSION}"  # Update to your path

MPI_IMPL="mvapich2"  # Or openmpi, mpich, etc.
MPI_VERSION="2.3.7"  # Update to your MPI version
MPI_COMPILER_VERSION="${MPI_IMPL}-${MPI_VERSION}"
MPI_BASE="/usr/tce/packages/${MPI_IMPL}/${MPI_COMPILER_VERSION}-${COMPILER_VERSION}"  # Update to your path

PYTHON_VERSION="3.12.2"  # Update to your Python version
PYTHON_BASE="/usr/apps/python-${PYTHON_VERSION}"  # Update to your path
```

How to find your paths:
```bash
# Find your compilers
which icc      # Intel C compiler
which icpc     # Intel C++ compiler
which mpicc    # MPI C wrapper
which mpicxx   # MPI C++ wrapper
which python3  # Python executable

# Get version information
icc --version
mpicc --version
python3 --version
```

**CUDA GPU (`gpu_cuda_config.sh`):**

```bash
###########################################
# Compiler Versions and Base Paths
###########################################
# Host Compiler
CLANG_VERSION="ibm-14.0.5" # Update to your Clang version
COMPILER_VERSION="clang-${CLANG_VERSION}"
CLANG_BASE="/usr/tce/packages/clang/${COMPILER_VERSION}"
# CUDA
CUDA_VERSION="11.8.0" # Update to your CUDA version
CUDA_BASE="/usr/tce/packages/cuda/cuda-${CUDA_VERSION}" # Update to your CUDA Path
# MPI
MPI_IMPL="spectrum-mpi" # Update to your MPI implementation / version / path
MPI_VERSION="rolling-release"
MPI_COMPILER_VERSION="${MPI_IMPL}-${MPI_VERSION}"
MPI_BASE="/usr/tce/packages/${MPI_IMPL}/${MPI_COMPILER_VERSION}-${COMPILER_VERSION}"
# Python
PYTHON_VERSION="3.8.2" # Like stated earlier update to your version / path
PYTHON_BASE="/usr/tce/packages/python/python-${PYTHON_VERSION}"How to find CUDA paths:
```bash
# Find CUDA installation
which nvcc
nvcc --version

# CUDA is typically at /usr/local/cuda or /usr/local/cuda-11.8
echo $CUDA_HOME  # May already be set
```

**HIP GPU (`gpu_hip_config.sh`):**

```bash
###########################################
# Compiler Versions and Base Paths
###########################################
# ROCm Compiler
# Update everything below to match your system's versions, paths,
# and any other system-specific settings
ROCM_VERSION="6.4.2"
ROCM_MAGIC_SUFFIX="magic"
COMPILER_VERSION="rocmcc-${ROCM_VERSION}-${ROCM_MAGIC_SUFFIX}"
ROCM_BASE="/usr/tce/packages/rocmcc/${COMPILER_VERSION}"
# MPI - Cray MPICH
MPI_IMPL="cray-mpich"
MPI_VERSION="9.0.1"
MPI_COMPILER_VERSION="${MPI_IMPL}-${MPI_VERSION}"
MPI_BASE="/usr/tce/packages/${MPI_IMPL}/${MPI_COMPILER_VERSION}-${COMPILER_VERSION}"
# Python
PYTHON_VERSION="3.9.12"
PYTHON_BASE="/usr/tce/packages/python/python-${PYTHON_VERSION}"How to find ROCm paths:
```bash
# Find ROCm installation
which amdclang
which hipcc
hipcc --version

# ROCm is typically at /opt/rocm or /opt/rocm-6.4.2
echo $ROCM_PATH  # May already be set
```

**macOS (`cpu_mac_config.sh`):**

```bash
###########################################
# User-Configurable Paths
###########################################
# Homebrew location
HOMEBREW_PREFIX="${HOMEBREW_PREFIX:-/opt/homebrew}" # or /usr/local for Intel Macs
# System Clang (usually fine as-is)
CLANG_BASE="/usr/bin"
# MPI installation (REQUIRED: Update this!)
MPI_BASE="${HOME}/local/bin" # Update to your MPI location
# Common locations:
# Homebrew: /opt/homebrew/bin
# MacPorts: /opt/local/bin
# Anaconda: ${HOME}/anaconda3/bin
# Python location (REQUIRED: Update this!)
PYTHON_BASE="${HOME}/anaconda3/bin" # Update to your Python location
# Common locations:
# Homebrew: /opt/homebrew/bin
#   System:   /usr/bin
```

How to find paths on macOS:
```bash
# Check architecture
uname -m  # arm64 for Apple Silicon, x86_64 for Intel

# Find MPI (install if missing: brew install open-mpi)
which mpicc
which mpicxx

# Find Python
which python3
python3 --version  # Should be 3.8 or newer

# Check Homebrew prefix
brew --prefix
```

If you're on an HPC system with a module system, update the module load commands to match your system:
```bash
###########################################
# Module Loading
###########################################
module load intel/2023.2.1-magic  # Update to match your system's modules
module load CMake/3.26.3
module load python/3.12
module list
```

How to find available modules:
```bash
module avail        # List all available modules
module avail intel  # Search for Intel modules
module avail cuda   # Search for CUDA modules
module list         # Show currently loaded modules
```

If your system doesn't use modules (as on most macOS or personal Linux systems), you can comment out or remove the module commands, as sketched below.
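If you want one config to work both on and off HPC systems, a simple guard (a sketch, not taken from the repository) keeps the module commands harmless elsewhere:

```bash
# Only load modules if the "module" command exists on this system
if command -v module >/dev/null 2>&1; then
    module load intel/2023.2.1-magic
    module load CMake/3.26.3
    module load python/3.12
fi
```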
Once you've customized the config file, run the appropriate install script from your build directory:

```bash
cd ../exaconstit_builds  # Or wherever you want to build
../ExaConstit/scripts/install/unix_cpu_intel_install.sh  # Or the appropriate script
```

The script will:
- Validate your configuration
- Display a build summary
- Download and build all dependencies
- Build ExaConstit
- Save detailed logs in each component's build directory
Expected build time: roughly 10 to 45 minutes, depending on your system, the level of build parallelism, and whether you're building for GPU.
## Advanced Configuration

All dependency versions are centralized in `common/dependency_versions.sh`:
```bash
# Edit the version file
code ExaConstit/scripts/install/common/dependency_versions.sh
```

```bash
# Portability libraries
export CAMP_VER="v2025.09.2" # Update to newer version
export RAJA_VER="v2025.09.1"
export UMPIRE_VER="v2025.09.0"
export CHAI_VER="v2025.09.1"
# Material models
export EXACMECH_REPO="https://github.com/LLNL/ExaCMech.git"
export EXACMECH_BRANCH="develop" # Change to different branch if needed
# FEM infrastructure
export HYPRE_VER="v2.32.0" # Update to newer version
export METIS_VER="5.1.0"
export MFEM_REPO="https://github.com/rcarson3/mfem.git"
export MFEM_BRANCH="exaconstit-dev" # Change branch if needed
# Main application
export EXACONSTIT_REPO="https://github.com/llnl/ExaConstit.git"
export EXACONSTIT_BRANCH="exaconstit-dev" # Change branch if needed
# Build standards
export CMAKE_CXX_STANDARD="17"
export CMAKE_BUILD_TYPE="Release"
```

After updating versions, all build scripts automatically use them; no other changes are needed.
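To see why a single edit suffices: the shared build logic reads these exported variables when it clones and configures each dependency. A minimal sketch of that pattern (assumed for illustration; the real logic lives in `common/build_functions.sh`):

```bash
# Illustrative: a build function consuming the centralized version pin
build_raja() {
    git clone --branch "${RAJA_VER}" --depth 1 --recursive \
        https://github.com/LLNL/RAJA.git
    cmake -S RAJA -B "RAJA/build_${BUILD_SUFFIX}" \
        -DCMAKE_BUILD_TYPE="${CMAKE_BUILD_TYPE}" \
        -DCMAKE_CXX_STANDARD="${CMAKE_CXX_STANDARD}" \
        -DCMAKE_INSTALL_PREFIX="${PWD}/RAJA/install_${BUILD_SUFFIX}"
    cmake --build "RAJA/build_${BUILD_SUFFIX}" -j "${MAKE_JOBS}" --target install
}
```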
Override the default GPU architecture at runtime:

```bash
# CUDA: Target Ampere A100 instead of default Volta V100
CMAKE_GPU_ARCHITECTURES=80 ./unix_gpu_cuda_install.sh

# HIP: Target MI250X instead of default MI300A
CMAKE_GPU_ARCHITECTURES=gfx90a ./unix_gpu_hip_install.sh
```

Common GPU architectures:
NVIDIA CUDA:

- `60` - Pascal (P100)
- `70` - Volta (V100)
- `75` - Turing (RTX 20xx, T4)
- `80` - Ampere (A100)
- `86` - Ampere (RTX 30xx, A40)
- `89` - Ada Lovelace (RTX 40xx, L40)
- `90` - Hopper (H100)

AMD HIP:

- `gfx906` - MI50
- `gfx908` - MI100
- `gfx90a` - MI200 series (MI210, MI250)
- `gfx940` - MI300X (compute-only)
- `gfx942` - MI300A (APU)
- `gfx942:xnack+` - MI300A with unified memory support
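If you're not sure which architecture your hardware is, you can usually query it directly (exact flags depend on your driver and ROCm versions):

```bash
# NVIDIA: compute capability, e.g. "8.0" -> CMAKE_GPU_ARCHITECTURES=80
nvidia-smi --query-gpu=name,compute_cap --format=csv

# AMD: gfx target, e.g. "gfx90a"
rocminfo | grep -m1 gfx
```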
Control the build behavior with environment variables:

```bash
# Clean rebuild (removes all build directories and rebuilds from scratch)
REBUILD=ON ./unix_gpu_hip_install.sh

# Force submodule updates (syncs and updates all git submodules)
SYNC_SUBMODULES=ON ./unix_gpu_cuda_install.sh

# Adjust parallel build jobs (default is 4)
MAKE_JOBS=16 ./unix_cpu_intel_install.sh

# Combine multiple options
REBUILD=ON MAKE_JOBS=8 CMAKE_GPU_ARCHITECTURES=80 ./unix_gpu_cuda_install.sh
```

Available environment variables:
- `REBUILD` - `ON` to clean and rebuild, `OFF` to reuse existing builds (default: `OFF`)
- `SYNC_SUBMODULES` - `ON` to force submodule sync, `OFF` to skip (default: `OFF`)
- `MAKE_JOBS` - Number of parallel build jobs (default: `4`)
- `CMAKE_GPU_ARCHITECTURES` - GPU architecture target (default varies by platform)
- `MFEM_HIP_ARCHITECTURES` - MFEM-specific HIP arch (HIP only, default: `gfx942`)
- `OPENMP_ON` - Enable OpenMP (default: `OFF`)
- `ENABLE_TESTS_EXACONSTIT` - Build tests (default: `ON`)
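These overrides work through ordinary shell defaulting; a minimal sketch of the pattern (assumed, not copied from the scripts):

```bash
# Each variable falls back to its default when not set by the caller
REBUILD="${REBUILD:-OFF}"
MAKE_JOBS="${MAKE_JOBS:-4}"
CMAKE_GPU_ARCHITECTURES="${CMAKE_GPU_ARCHITECTURES:-70}"
```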
To use a fork or a different branch, edit `common/dependency_versions.sh`:
```bash
# Use your fork of MFEM
export MFEM_REPO="https://github.com/YOUR_USERNAME/mfem.git"
export MFEM_BRANCH="my-custom-feature"

# Use the development branch of ExaConstit
export EXACONSTIT_BRANCH="develop"

# Use a different ExaCMech repository
export EXACMECH_REPO="https://github.com/YOUR_USERNAME/ExaCMech.git"
export EXACMECH_BRANCH="custom-material-models"
```

To use a specific commit instead of a branch:
```bash
# The build scripts will clone the repo; then manually check out the commit:
cd mfem && git checkout abc123def456
cd ../exaconstit_builds

# Re-run the build script with REBUILD=ON
```

You can add custom flags by editing the config file:
```bash
# In your config file (e.g., configs/gpu_cuda_config.sh)

# Add optimization flags
export CMAKE_CXX_FLAGS="-fPIC -std=c++17 --gcc-toolchain=${GCC_BASE} -O3 -march=native"

# Add debugging symbols
export CMAKE_CXX_FLAGS="${CMAKE_CXX_FLAGS} -g"

# Add preprocessor definitions
export CMAKE_CXX_FLAGS="${CMAKE_CXX_FLAGS} -DMY_CUSTOM_DEFINE"
```

## Build Locations and Output

By default, all builds install to your build directory:
```
your_build_directory/
├── camp/
│   ├── build_${BUILD_SUFFIX}/     # Build artifacts
│   └── install_${BUILD_SUFFIX}/   # Installed library
├── RAJA/
│   ├── build_${BUILD_SUFFIX}/
│   └── install_${BUILD_SUFFIX}/
├── Umpire/                        # GPU builds only
│   ├── build_${BUILD_SUFFIX}/
│   └── install_${BUILD_SUFFIX}/
├── CHAI/                          # GPU builds only
│   ├── build_${BUILD_SUFFIX}/
│   └── install_${BUILD_SUFFIX}/
├── ExaCMech/
│   ├── build_${BUILD_SUFFIX}/
│   └── install_${BUILD_SUFFIX}/
├── hypre/
│   ├── build_${BUILD_SUFFIX}/
│   └── src/hypre_${BUILD_SUFFIX}/ # Installed library
├── metis-5.1.0/
│   └── install_${BUILD_SUFFIX}/
├── mfem/
│   ├── build_${BUILD_SUFFIX}/
│   └── install_${BUILD_SUFFIX}/
└── ExaConstit/
    ├── build_${BUILD_SUFFIX}/     # Build artifacts
    └── install_dir/               # Final installation
```
Where `${BUILD_SUFFIX}` is:

- `cpu` for CPU builds
- `cuda` for CUDA/NVIDIA builds
- `hip` for HIP/AMD builds
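If you plan to follow the manual build steps later on this page, it can help to capture the install prefixes once. The variable names match those used in the manual configure step below; a CUDA build is assumed here, so swap the suffix for your platform:

```bash
# Convenience exports for downstream builds (run from your build directory)
BUILD_ROOT="$PWD"
export CAMP_INSTALL_DIR="${BUILD_ROOT}/camp/install_cuda"
export RAJA_INSTALL_DIR="${BUILD_ROOT}/RAJA/install_cuda"
export UMPIRE_INSTALL_DIR="${BUILD_ROOT}/Umpire/install_cuda"
export CHAI_INSTALL_DIR="${BUILD_ROOT}/CHAI/install_cuda"
export EXACMECH_INSTALL_DIR="${BUILD_ROOT}/ExaCMech/install_cuda"
export MFEM_INSTALL_DIR="${BUILD_ROOT}/mfem/install_cuda"
```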
Build logs are saved in each component's build directory with standardized names:

- `my_<package>_config` - CMake configuration output
- `my_<package>_build` - Compilation output
- `my_<package>_install` - Installation output
Example: to check why RAJA failed to build:

```bash
cd RAJA/build_cuda
less my_raja_build  # View the build log
```

Typical disk space usage:
- CPU build: ~1 GB total
- GPU build (CUDA/HIP): ~2 GB total
  - Includes the additional Umpire and CHAI libraries
  - GPU architecture targets add to binary sizes
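To see which components are taking up the space, a generic one-liner (not part of the install scripts) works from the build directory:

```bash
# Per-component disk usage, largest last
du -sh */ | sort -h
```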
## Troubleshooting

```
ERROR: Unable to locate a modulefile for 'intel/2023.2.1-magic'
```

Solution:

- Check available modules: `module avail intel`
- Update the module version in your config file
- If not using a module system, comment out the `module load` commands
```
CMake Error: CMAKE_C_COMPILER not found
```

Solution:

- Verify the compiler path: `ls -la /path/to/compiler`
- Check that the executable exists: `which icc` or `which clang`
- Update the `*_BASE` variable in your config file
- Ensure the compiler is in your `PATH`
```
Could NOT find Python3 (missing: Python3_EXECUTABLE)
```

Solution:

- Verify the Python installation: `which python3` or `which python`
- Check the Python version (must be 3.8+): `python3 --version`
- Update `PYTHON_BASE` in your config file
- If using Anaconda: `conda activate your_env` before building
```
Could not find MPI compiler wrappers
```

Solution:

- Verify the MPI installation: `which mpicc && which mpicxx`
- Update `MPI_BASE` in your config file
- Test MPI: `mpicc --version`
- On macOS, install with: `brew install open-mpi`
```
c++: fatal error: Killed signal terminated program cc1plus
```

Solution:

- Reduce parallel jobs: `MAKE_JOBS=2 ./unix_*_install.sh`
- Close other applications
- Add swap space if building on a resource-constrained system
```
No space left on device
```

Solution:

- Check available space: `df -h .`
- Clean previous builds: `REBUILD=ON ./unix_*_install.sh`
- Build in a location with more space
- Remove old build directories
**A dependency fails to build**

Solution:

- Check the specific log file in the failing component's build directory
- Common issues:
  - Missing system libraries (install via package manager)
  - Version incompatibilities (check `dependency_versions.sh`)
  - Network issues during git clone (retry with `SYNC_SUBMODULES=ON`)
- Try a clean rebuild: `REBUILD=ON ./unix_*_install.sh`
- Build dependencies individually to isolate the issue
```
fatal: unable to access 'https://github.com/...': Failed to connect
```

Solution:

- Check network connectivity
- If behind a firewall, configure a git proxy (see the example below)
- Clone repositories manually if needed
- Force submodule sync: `SYNC_SUBMODULES=ON ./unix_*_install.sh`
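For reference, a git proxy is typically configured like this (the host and port are placeholders; use your site's values):

```bash
# Route git's HTTP(S) traffic through a site proxy
git config --global http.proxy http://proxy.example.com:8080
git config --global https.proxy http://proxy.example.com:8080

# Undo later with:
#   git config --global --unset http.proxy
#   git config --global --unset https.proxy
```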
**Xcode Command Line Tools missing (macOS)**

Solution:

```bash
xcode-select --install
```

**Missing build tools (macOS)**

Solution:

```bash
# Install Homebrew if not already installed
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Install required tools
brew install cmake
brew install open-mpi
brew install python3
```

**Architecture mismatch (Apple Silicon)**

Solution:
- Ensure all dependencies are built for the same architecture
- For Apple Silicon, use ARM-native tools: `arch -arm64 brew install ...`
- For Intel compatibility on Apple Silicon: `arch -x86_64 brew install ...`
**Module environment conflicts (HPC)**

Solution:

```bash
# Clear all modules and reload.
# Some HPC systems swap modules for you automatically,
# so only purge modules if your HPC center advises it.
module purge
module load intel/2023.2.1-magic
module load cmake/3.26.3
# ... load other required modules
```

**Disk quota exceeded (HPC)**

Solution:
- Build in your scratch space instead of home directory
- Clean old builds regularly
- Check your quota: `quota -s` or `lfs quota -h $HOME`
**CUDA toolkit not found**

Solution:

- Verify the CUDA installation: `which nvcc`
- Update `CUDA_BASE` in `gpu_cuda_config.sh`
- Load the CUDA module if on HPC: `module load cuda`
- Set the `CUDA_HOME` environment variable
```
nvcc fatal: Unsupported gpu architecture 'compute_XX'
```

Solution:

- Check your GPU architecture: `nvidia-smi`
- Update `CMAKE_GPU_ARCHITECTURES` to match your hardware
- Common fix: `CMAKE_GPU_ARCHITECTURES=70 ./unix_gpu_cuda_install.sh`
**ROCm not found**

Solution:

- Verify the ROCm installation: `which amdclang`
- Update `ROCM_BASE` in `gpu_hip_config.sh`
- Load the ROCm modules: `module load rocm`
- Set the `ROCM_PATH` environment variable
**Memory access faults on MI300A**

Solution:

- Set the unified memory flag: `export HSA_XNACK=1`
- Verify the architecture: `CMAKE_GPU_ARCHITECTURES=gfx942:xnack+`
- Check the system: `rocminfo | grep xnack`
**Runtime linking errors**

Possible causes:

- Library path issues - ensure `LD_LIBRARY_PATH` includes all dependency lib directories
- ABI incompatibility - rebuild with consistent compiler versions
- Missing runtime dependencies - check with `ldd` on the executable
**MPI runtime issues**

Solution:

```bash
# Test the MPI installation
mpirun -np 2 hostname

# Verify MPI library paths
ldd ExaConstit/build_*/mechanics_driver | grep mpi

# Check the module environment
module list
```

If you encounter issues not covered here:
1. Check the build logs
   - Navigate to the failing component's build directory
   - Review `my_<package>_config`, `my_<package>_build`, or `my_<package>_install`
   - Look for specific error messages
2. Verify your configuration
   - Confirm all paths in your config file are correct
   - Test each tool independently: `which <compiler>`, `mpicc --version`, etc.
3. Search existing issues
   - Check the GitHub Issues page
   - Search for similar error messages
4. Open a new issue
   - Go to GitHub Issues
   - Include:
     - Your platform and OS version (`uname -a`, `lsb_release -a`, etc.)
     - The config file you're using
     - Relevant sections from error logs
     - Steps you've already tried
     - Output of `module list` (if applicable)
## Manual Build

If you prefer to build manually or need more control over the build process:
You'll need to manually build all dependencies first:
- CAMP (`v2025.09.2`)
- RAJA (`v2025.09.1`)
- Umpire (`v2025.09.0`) - GPU builds only
- CHAI (`v2025.09.1`) - GPU builds only
- ExaCMech (`develop` branch)
- Hypre (`v2.32.0`)
- METIS (`5.1.0`)
- MFEM (`exaconstit-smart-ptrs` branch)
See `scripts/install/common/build_functions.sh` for the exact build commands and CMake options, and `common/dependency_versions.sh` for the currently pinned versions and branches.
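As a rough template, each dependency follows the same clone/configure/build/install pattern. The sketch below builds Hypre under assumptions that match the layout shown earlier (CMake build from `src/`, CPU suffix); treat the flags as illustrative rather than the scripts' exact options:

```bash
# Illustrative manual build of one dependency (Hypre)
git clone --branch v2.32.0 https://github.com/hypre-space/hypre.git
cmake -S hypre/src -B hypre/build_cpu \
    -DCMAKE_BUILD_TYPE=Release \
    -DCMAKE_C_COMPILER=mpicc \
    -DCMAKE_INSTALL_PREFIX="$PWD/hypre/src/hypre_cpu"
cmake --build hypre/build_cpu -j 4 --target install
```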
Once all dependencies are built:

```bash
# 1. Clone ExaConstit
git clone https://github.com/LLNL/ExaConstit.git
cd ExaConstit
git checkout the_great_refactoring
git submodule update --init --recursive

# 2. Create a build directory
mkdir build && cd build

# 3. Configure (CPU example)
cmake .. \
    -DCMAKE_CXX_COMPILER=mpicxx \
    -DCMAKE_C_COMPILER=mpicc \
    -DENABLE_TESTS=ON \
    -DENABLE_OPENMP=OFF \
    -DENABLE_FORTRAN=OFF \
    -DPYTHON_EXECUTABLE=/usr/bin/python3 \
    -DMFEM_DIR=${MFEM_INSTALL_DIR}/lib/cmake/mfem/ \
    -DECMECH_DIR=${EXACMECH_INSTALL_DIR}/ \
    -DSNLS_DIR=${EXACMECH_INSTALL_DIR}/ \
    -DRAJA_DIR=${RAJA_INSTALL_DIR}/lib/cmake/raja/ \
    -Dcamp_DIR=${CAMP_INSTALL_DIR}/lib/cmake/camp/ \
    -DCMAKE_BUILD_TYPE=Release

# 4. Build
make -j $(nproc)

# 5. Test
ctest
```

For CUDA builds, add:
```bash
cmake .. \
    ... (all of the above options) ... \
    -DCMAKE_CXX_COMPILER=${CUDA_ROOT}/bin/nvcc \
    -DCMAKE_CUDA_COMPILER=${CUDA_ROOT}/bin/nvcc \
    -DCMAKE_CUDA_HOST_COMPILER=${HOST_CXX_COMPILER} \
    -DCMAKE_CUDA_ARCHITECTURES=80 \
    -DENABLE_CUDA=ON \
    -DFMT_DIR=${UMPIRE_INSTALL_DIR}/lib64/cmake/fmt \
    -DUMPIRE_DIR=${UMPIRE_INSTALL_DIR}/lib64/cmake/umpire \
    -DCHAI_DIR=${CHAI_INSTALL_DIR}/lib/cmake/chai
```

For HIP builds, add:
```bash
cmake .. \
    ... (all of the above options) ... \
    -DCMAKE_CXX_COMPILER=${ROCM_ROOT}/bin/amdclang++ \
    -DCMAKE_HIP_COMPILER=${ROCM_ROOT}/bin/amdclang++ \
    -DCMAKE_HIP_ARCHITECTURES=gfx942 \
    -DENABLE_HIP=ON \
    -DFMT_DIR=${UMPIRE_INSTALL_DIR}/lib64/cmake/fmt \
    -DUMPIRE_DIR=${UMPIRE_INSTALL_DIR}/lib64/cmake/umpire \
    -DCHAI_DIR=${CHAI_INSTALL_DIR}/lib/cmake/chai
```

After successful installation:
- Run the test suite: `cd ExaConstit/build_*/`, then `ctest` or `make test`
- Try example problems: see the `examples/` directory
- Read the documentation: check the `docs/` folder for detailed usage guides
- Join the community: open issues or discussions on GitHub
For questions about using ExaConstit after installation, see the main README and the documentation in the `docs/` folder.