The Biodiversity Cell Atlas is a coordinated international effort aimed at molecularly characterizing cell types across the eukaryotic tree of life. Our mission is to pave the way for the efficient expansion of cell atlases to hundreds of species.
This project uses:
- Podman Compose to manage multiple Podman containers (using docker-compose backend for compatibility)
- Ghost, a blog-focused Content Management System (CMS), to set up the main website
- Mailpit, which captures Ghost's transactional emails and provides a web interface to read them
- Django, a high-level Python web framework, served with Gunicorn to power the data portal
- PostgreSQL, a relational database
- Nginx, a reverse proxy
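Conceptually, these services fit together in a Compose file along the following lines. This is an illustrative sketch only, not the project's actual compose.yml: the service names, images, and port mappings here are assumptions.

```yaml
# Illustrative only - the real compose.yml in the repository is authoritative
services:
  ghost:        # main website (CMS)
    image: ghost
  mailpit:      # captures Ghost's transactional emails
    image: axllent/mailpit
  web:          # Django data portal, served by Gunicorn
    build: .
  db:           # PostgreSQL database
    image: postgres
    profiles: [db]
  nginx:        # reverse proxy in front of everything
    image: nginx
    profiles: [nginx]
    ports:
      - "80:80"
```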
To set up the project and run the web app locally, first install:
- Podman — consider installing via Podman Desktop to make it easier to manage Podman containers
- docker-compose (standalone) — Docker itself is not required
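You can quickly check whether both tools are already on your PATH with a small shell snippet (this only checks the command names and makes no assumption about how they were installed):

```shell
# List which of the required tools are missing from PATH, if any
missing=""
for tool in podman docker-compose; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -n "$missing" ]; then
  echo "Missing:$missing"
else
  echo "All required tools are installed"
fi
```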
Then, download the project directory from GitHub and follow these steps:
```sh
# Go to the project directory
cd bca-website

# Copy the *.template files to drop the .template suffix
cp .env.template .env
cp .pg_service.conf.template .pg_service.conf
cp .pgpass.template .pgpass

# Fix the permissions for .pgpass
chmod 600 .pgpass

# Start Podman Compose to locally deploy the web app
# - Prepares, downloads and starts all containers
# - `-d`: starts the containers in detached mode
# - `--build`: rebuilds the web image (for instance, after adding new Python dependencies to `requirements.txt`)
podman compose up -d --build

# Create a superuser (only required once for database setup)
podman compose exec web python manage.py createsuperuser
```

These are some of the commands to use during development:
```sh
# Locally deploy the web app to localhost
podman compose up -d --build

# Check information about the active Compose containers
podman compose ps

# Check container logs
# - use `-f` to live update log output
# - add a container name to print logs only for that container
podman compose logs
podman compose logs -f
podman compose logs web

# Run a bash shell within the web app container
podman compose exec web bash

# Run a Python shell within the context of the web app
# https://docs.djangoproject.com/en/dev/intro/tutorial02/#playing-with-the-api
podman compose exec web python manage.py shell

# Run unit tests: https://docs.djangoproject.com/en/dev/topics/testing/
podman compose exec web python manage.py test

# Stop and delete all containers and Compose-related networks
podman compose down
```

The project directory is automatically mounted into the web app container, so changes are reflected in the running web app in real time, except for static files and Django model updates.
After launching the service, the main website will be available at http://localhost and the Data Portal at http://portal.localhost.
Note
If you are using proxies, localhost subdomains may need to be excluded in your proxy settings.
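For example, with tools that honor the conventional NO_PROXY/no_proxy environment variables, you can exclude localhost and its subdomains as shown below; whether your proxy client actually reads these variables depends on your setup.

```shell
# Bypass the proxy for localhost, its subdomains and the loopback address
export NO_PROXY="localhost,.localhost,127.0.0.1"
export no_proxy="$NO_PROXY"   # some tools only read the lowercase variant
echo "$NO_PROXY"
```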
Static files are served by Nginx. If you need to manually update the static files (such as when editing them), run the collectstatic command:

```sh
podman compose exec web python manage.py collectstatic --noinput
```

The collectstatic command runs automatically when the web app container starts, so you can simply run `podman compose restart web`.
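For context, serving the collected files from Nginx typically boils down to a location block like the following. This is a generic sketch: the path and settings in the project's actual Nginx configuration may differ.

```nginx
# Hypothetical fragment: let Nginx serve Django's collected static files
location /static/ {
    alias /var/www/static/;   # assumed collectstatic target directory
}
```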
To apply changes to Django models, run the migrate command (if you changed the models themselves, first generate the migration files with `manage.py makemigrations`):

```sh
podman compose exec web python manage.py migrate
```

The migrate command runs automatically when the web app container starts in development mode, so you can simply run `podman compose restart web`. Note that the automatic command will not work if there is an issue that requires manual intervention.
A dedicated Compose file (such as compose.prod.yml) can be used for production-specific settings:

```sh
# Set COMPOSE_FILE in .env: COMPOSE_FILE=compose.yml:compose.prod.yml
# Deploy in production mode
podman compose up -d
```

By default, the project uses the Postgres database service to serve the Django app. However, you can instead connect to any database by editing the Postgres files .pg_service.conf and .pgpass, and then changing which database service to connect to in .env:
```sh
POSTGRES_SERVICE=remote-bca-db
```

In case the database service is not needed because you are connecting to an external database, edit the .env file to exclude the db profile:

```sh
# Change the following line to exclude the db service
# COMPOSE_PROFILES=nginx,db
COMPOSE_PROFILES=nginx
```

If the database can only be accessed via an intermediate host, you will need to connect to the host via an SSH tunnel:
```sh
# -f: run in the background; -N: do not execute a remote command;
# -L: forward local port 5432 to db-host.com:5432 through the intermediate host
ssh -fN -L 5432:db-host.com:5432 darwin@intermediate.host.com
```

To connect to the database through the SSH tunnel, use host host.docker.internal. You can configure your .pg_service.conf and .pgpass files like this:

```ini
# .pg_service.conf
[ssh-bca-db]
host=host.docker.internal
port=5432
dbname=bca_db
user=wallace
```

```sh
# .pgpass
host.docker.internal:5432:bca_db:wallace:mypassword
```

You can now start the project as usual via `podman compose up`.
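The .pgpass entry follows libpq's host:port:dbname:user:password field order. As a small demo, the example values (with the placeholder password from above) can be split into those five fields:

```shell
# Split a .pgpass entry into its five colon-separated fields
line="host.docker.internal:5432:bca_db:wallace:mypassword"
IFS=: read -r host port dbname user password <<EOF
$line
EOF
echo "connecting as $user to $dbname on $host:$port"
```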
The main website is built with the Ghost blogging platform. Base templates
in the ghost/ folder modify the default theme.
Transactional emails (like those sent to reset passwords or create new user accounts) can be read in the Mailpit web interface at http://localhost:8025 (Mailpit's default web UI port; port 1025 is its SMTP endpoint).
In case you want to run Nginx or another reverse proxy yourself, edit the .env file to exclude the nginx profile:

```sh
# Change the following line to exclude the nginx service
# COMPOSE_PROFILES=nginx,db
COMPOSE_PROFILES=db
# If you need neither the nginx nor the db service, simply delete the whole line
```

Super-Linter is run for every Pull Request. To run it locally using Podman, execute the following commands (the correct image is automatically pulled based on the version used in the GitHub workflow):
```sh
# Run in check mode on changed files
./superlinter.sh check

# Run in fix mode on changed files
./superlinter.sh fix

# Run in fix mode on changed files using Python and JS linters only
./superlinter.sh fix --python --js

# Run in fix mode on the whole codebase
./superlinter.sh fix --all

# Print all available options
./superlinter.sh
```

The environment files that Super-Linter automatically loads are available in .github/linters: super-linter.env and super-linter-fix.env.
