A parallel implementation of a 2D-decomposed cellular automaton with periodic boundary conditions, designed to run on the Cirrus HPC system.
Note: If the video does not play, you can access it directly at .git_assets/automaton.mp4
- icc (Intel C Compiler 19.1.3.304)
- mpicc
- make
- mpirun (v4.1.6)
- slurm (v22.05.11)
- Extract the contents of the zip folder into a directory of your choosing:
unzip MPP2425-B264122.zip
- Once the zip has been extracted, enter the directory:
cd MPP2425-B264122/
- Run the make command to compile the program using the provided Makefile. This creates the automaton executable in the bin directory. By default the Makefile compiles with icc (the Intel Compiler) at -O3 optimisation:
make -j
- Execute the program from the root directory with your specified options:
mpirun -n <processes> ./bin/automaton <seed> <rho> <maxsteps> <gridsize>
For example, to run on 16 processes with seed 8766, $\rho = 0.52$, 9600 maximum steps and a grid size of 960:
mpirun -n 16 ./bin/automaton 8766 0.52 9600 960
Or run the program in serial like so:
./bin/automaton 8766 0.52 9600 960
All parameters for the program must be provided as command line arguments in the following order:
- seed (required)
- rho $\rho$ (optional)
- maxsteps (optional)
- grid size $L$ (optional)
Note: The mpirun program expects the number of processes to run on, which is provided with the -n command-line argument.
Note: The grid size ($L$) must be an exact multiple of the total number of processes ($P$), otherwise the program will terminate.
An automaton.slurm script has been provided that specifies the parameters needed to execute the program on the CPU compute nodes of Cirrus. Ensure that your account code is set correctly before running.
Use the sbatch command to submit a job to Cirrus:
sbatch automaton.slurm
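If you need to adapt or rewrite the submission script, a minimal sketch might look like the following; the partition, QoS, time limit and account placeholder are assumptions and should be checked against the provided automaton.slurm and the Cirrus documentation:

```shell
#!/bin/bash
# Illustrative sketch of a Slurm submission script for Cirrus.
# The partition/QoS names and resource values below are assumptions,
# not copied from the provided automaton.slurm.
#SBATCH --job-name=automaton
#SBATCH --nodes=1
#SBATCH --ntasks=16
#SBATCH --time=00:10:00
#SBATCH --account=<your-account-code>
#SBATCH --partition=standard
#SBATCH --qos=standard
#SBATCH --output=output/slurm-%j.out

mpirun -n 16 ./bin/automaton 8766 0.52 9600 960
```

Note that the --output path assumes the output/ directory already exists, matching the requirement stated below for program-generated files.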
The program is structured into three modules: automaton, parallel and io. Each module is compartmentalised and is responsible for a specific aspect of the overall program.
- automaton The main module that runs the entire program and calls into the lower-level modules. It initialises all resources, including the map, and calls MPI to distribute data and communicate between processes.
- parallel Contains the MPI-specific functionality for the program, including the creation of the virtual topology, map distribution and collection, and the halo swapping. The implementation details are hidden from other modules so that the underlying MPI library can easily be replaced with another implementation.
- io Responsible for parsing the command-line arguments provided by the user and writing the final grid to a file on disk.
The program's directory structure is shown below:
- src/ Main source directory containing source files for the main program.
- libs/ Contains library dependencies that are used by the main program.
- bin/ The binary directory is created when the program is compiled and contains the program executable and intermediate object files.
- output/ Directory storing program generated files such as the output cell image and Slurm logs. Note that this directory must exist before running the program.
