2D Decomposed Cellular Automaton

A parallel implementation of a 2D-decomposed cellular automaton with periodic boundary conditions on the $i^{th}$ dimension. The boundary cells along $\frac{2}{3}$ of the $j^{th}$ dimension are set to alive. A termination condition is imposed: the simulation stops if the number of living cells falls below $\frac{3}{4}$ or rises above $\frac{4}{3}$ of the initial count. The implementation uses MPI and a Cartesian virtual topology to decompose the grid into two dimensions, where each process receives a subsection of the grid. Communication between processes is performed by halo swapping via non-blocking point-to-point communication (MPI_Isend and MPI_Irecv). This README contains all the information required to compile and run the program both locally and on the Cirrus CPU compute nodes via Slurm.
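The termination test above can be carried out in exact integer arithmetic by cross-multiplying, avoiding floating-point comparisons. A minimal sketch in C (the function and variable names are hypothetical, not the identifiers used in the source):

    /* Hypothetical sketch: stop once the live-cell count drops below 3/4
     * or climbs above 4/3 of the initial population. */
    int should_terminate(long ncells, long ncells_initial)
    {
        return 4 * ncells < 3 * ncells_initial    /* ncells < (3/4) * initial */
            || 3 * ncells > 4 * ncells_initial;   /* ncells > (4/3) * initial */
    }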


Note: If the video does not play, you can access it directly at .git_assets/automaton.mp4

Building

Requirements

  • icc (Intel C Compiler 19.1.3.304)
  • mpicc
  • make
  • mpirun (v4.1.6)
  • slurm (v22.05.11)

  1. Extract the contents of the zip folder into a directory of your choosing:
unzip MPP2425-B264122.zip
  2. Once the zip has been extracted, enter the directory:
cd MPP2425-B264122/
  3. Run the make command to compile the program using the provided Makefile. This creates the automaton executable in the bin directory. By default the Makefile compiles with icc (the Intel compiler) at -O3 optimisation, along the lines sketched after this list:
make -j
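The provided Makefile is the authoritative build description; the sketch below only illustrates the general shape such a Makefile takes (the flags, paths and file lists here are assumptions, and recipe lines must be indented with tabs):

    # Illustrative sketch only; see the provided Makefile for the real rules.
    CC     = mpicc          # wraps icc when the Intel environment is loaded
    CFLAGS = -O3
    SRC    = $(wildcard src/*.c)
    OBJ    = $(SRC:src/%.c=bin/%.o)

    bin/automaton: $(OBJ)
    	$(CC) $(CFLAGS) -o $@ $(OBJ)

    bin/%.o: src/%.c | bin
    	$(CC) $(CFLAGS) -c $< -o $@

    bin:
    	mkdir -p bin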

Running

  1. Execute the program from the root directory with your specified options:
mpirun -n <processes> ./bin/automaton <seed> <rho> <maxsteps> <gridsize>

For example, to run on 16 processes with $seed=8766$, $\rho=0.52$, $maxsteps=9600$ and $L=960$:

mpirun -n 16 ./bin/automaton 8766 0.52 9600 960

Or run the program in serial like so:

./bin/automaton 8766 0.52 9600 960

The program's parameters are passed as command-line arguments in the following order:

  • seed (required)
  • rho $\rho$ (optional)
  • maxsteps (optional)
  • grid size $L$ (optional)

Note: mpirun expects the number of processes to run on, which is provided with the -n command-line argument.

Note: The grid size ($L$) must be an exact multiple of the total number of processes ($P$), otherwise the program will terminate.

Running on Cirrus

An automaton.slurm script file is provided which specifies the parameters needed to execute the program on the CPU compute nodes of Cirrus. Ensure that your account code is set correctly before running.

Use the sbatch command to submit a job to Cirrus:

sbatch automaton.slurm
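The exact directives live in the provided script; the following is only a minimal sketch of what an equivalent automaton.slurm might look like (the job size, time limit, partition and QoS values are placeholder assumptions):

    #!/bin/bash
    # Illustrative sketch only; submit the provided automaton.slurm instead.
    #SBATCH --job-name=automaton
    #SBATCH --time=00:10:00
    #SBATCH --nodes=1
    #SBATCH --tasks-per-node=16
    #SBATCH --account=<account-code>   # set your account code here
    #SBATCH --partition=standard
    #SBATCH --qos=standard

    # Load the MPI environment used at build time, then launch the program.
    srun ./bin/automaton 8766 0.52 9600 960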

Structure

The program is structured into three modules: automaton, parallel and io. Each module is compartmentalised and responsible for a specific aspect of the overall program.

  • automaton The main module that runs the entire program and calls into the lower-level modules. It initializes all resources, including the map, and uses MPI to distribute work and communicate between processes.

  • parallel Contains the MPI-specific functionality for the program, including the creation of the virtual topology, map distribution and collection, and the halo swapping (a sketch of this pattern appears after this list). The implementation details are hidden from other modules so that the underlying MPI library can easily be replaced with another implementation.

  • io Responsible for parsing the command-line arguments provided by the user and for writing the final grid to a file on disk.
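As a concrete illustration of the halo swapping handled by the parallel module, here is a minimal sketch assuming a 2D Cartesian communicator cart and an (NX+2) x (NY+2) local subgrid with one-cell halos. All identifiers are hypothetical, not the module's actual API:

    #include <mpi.h>

    #define NX 60   /* local subgrid extent in i (assumed) */
    #define NY 60   /* local subgrid extent in j (assumed) */

    /* Hypothetical sketch of one halo swap using the non-blocking pattern
     * described above. Neighbours that fall outside a non-periodic edge are
     * MPI_PROC_NULL, so those sends and receives complete immediately. */
    void halo_swap(int local[NX + 2][NY + 2], MPI_Comm cart)
    {
        int prev_i, next_i, prev_j, next_j;
        MPI_Request reqs[8];
        MPI_Datatype column;

        /* Neighbour ranks in each dimension of the Cartesian topology. */
        MPI_Cart_shift(cart, 0, 1, &prev_i, &next_i);
        MPI_Cart_shift(cart, 1, 1, &prev_j, &next_j);

        /* A column is strided in memory: NX elements, NY + 2 ints apart. */
        MPI_Type_vector(NX, 1, NY + 2, MPI_INT, &column);
        MPI_Type_commit(&column);

        /* Rows (contiguous) are exchanged with the i-direction neighbours. */
        MPI_Irecv(&local[0][1],      NY, MPI_INT, prev_i, 0, cart, &reqs[0]);
        MPI_Irecv(&local[NX + 1][1], NY, MPI_INT, next_i, 1, cart, &reqs[1]);
        MPI_Isend(&local[NX][1],     NY, MPI_INT, next_i, 0, cart, &reqs[2]);
        MPI_Isend(&local[1][1],      NY, MPI_INT, prev_i, 1, cart, &reqs[3]);

        /* Columns (strided) are exchanged with the j-direction neighbours. */
        MPI_Irecv(&local[1][0],      1, column, prev_j, 2, cart, &reqs[4]);
        MPI_Irecv(&local[1][NY + 1], 1, column, next_j, 3, cart, &reqs[5]);
        MPI_Isend(&local[1][NY],     1, column, next_j, 2, cart, &reqs[6]);
        MPI_Isend(&local[1][1],      1, column, prev_j, 3, cart, &reqs[7]);

        MPI_Waitall(8, reqs, MPI_STATUSES_IGNORE);
        MPI_Type_free(&column);
    }

Posting all eight non-blocking calls before a single MPI_Waitall lets the four exchanges progress concurrently, which is the motivation for MPI_Isend/MPI_Irecv over their blocking counterparts.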

The program's directory structure is shown below:

  • src/ Main source directory containing source files for the main program.
  • libs/ Contains library dependencies that are used by the main program.
  • bin/ The binary directory is created when the program is compiled and contains the program executable and intermediate object files.
  • output/ Directory storing program-generated files such as the output cell image and Slurm logs. Note that this directory must exist before running the program.

About

2D cellular automaton decomposition across MPI processes. Developed as part of the EPCC11002 course on the Cirrus supercomputer.
