A lightweight, easy-to-extend JupyterLab image built on the official Python Docker image. 4x smaller than quay.io/jupyter/scipy-notebook (820 MB vs 3.36 GB) with the same core scientific stack.
- Small — 820 MB, 7 layers, ~28-line Dockerfile. The official `scipy-notebook` is 3.36 GB across 37 layers.
- Simple — one base image, pip-only installs, no conda, no JupyterHub, no TeX Live
- Readable — a single Dockerfile you can read in 30 seconds and extend in minutes
- Secure — non-root user by default; no secrets baked in
- AI-ready — optional jupyter-ai variant with chat sidebar and `%%ai` cell magic; bring your own API key
```shell
git clone https://github.com/gitjeff05/jupyterlab-minimalist-image.git
cd jupyterlab-minimalist-image
docker build -t jupyterlab-minimalist:latest .
docker run --rm -it -p 8888:8888 \
  -w /home/jordan/work \
  --mount type=bind,source=$(pwd)/project,target=/home/jordan/work \
  jupyterlab-minimalist:latest
```

| Variant | Description |
|---|---|
| (root) | Core scientific stack: numpy, pandas, scipy, matplotlib, seaborn, bokeh, scikit-learn, sympy |
| `basic/` | Python + numpy + matplotlib only. No JupyterLab. |
| `astropy/` | Astronomy stack (astropy, astroplan). Plain Python or JupyterLab via `Dockerfile.jupyter`. |
| `pytorch/` | Full scientific stack + PyTorch 2.x (CPU). Swap the index URL in `requirements.txt` for GPU builds. |
| `jupyter-ai/` | Full scientific stack + jupyter-ai chat sidebar and `%%ai` cell magic. API key injected at runtime via env var. |
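Each variant builds from its own subdirectory. A sketch for the PyTorch variant (the `dockerfiles/pytorch/` path follows the pattern used by the jupyter-ai variant; confirm the exact path in the repository):

```shell
docker build -t jupyterlab-pytorch:latest dockerfiles/pytorch/
```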
The jupyter-ai variant adds a chat sidebar (Jupyternaut) and `%%ai` cell magic powered by your choice of LLM. No key is baked into the image — pass it at runtime:
```shell
docker build -t jupyterlab-minimalist-ai:latest dockerfiles/jupyter-ai/
docker run --rm -it -p 8888:8888 \
  -e ANTHROPIC_API_KEY=sk-ant-... \
  -w /home/jordan/work \
  --mount type=bind,source=$(pwd)/project,target=/home/jordan/work \
  jupyterlab-minimalist-ai:latest
```

See dockerfiles/jupyter-ai/README.md for full usage, OpenAI support, and `%%ai` magic examples.
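With the container running and a key passed in, the magic is used from notebook cells. A rough sketch (the model alias below is illustrative; check the jupyter-ai documentation for the providers and model names your version supports). First, load the extension in one cell:

```
%load_ext jupyter_ai_magics
```

Then prompt the model from a separate cell, with `%%ai` on the first line:

```
%%ai anthropic-chat:claude-3-5-sonnet-20240620
Summarize what a pandas DataFrame is in two sentences.
```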
A minimalist image built from a small Dockerfile that is easy to understand. This project follows Docker best practices and in particular:
- Use an intuitive Dockerfile that is easy to extend
- Produce an image as small as possible:
- using multi-stage builds
- minimizing RUN, COPY, ADD commands
- minimizing dependencies
- Start with an appropriate base image (i.e., the official Python Docker image)
Disclaimer: This is experimental. You should review the Dockerfile and test the image carefully before putting this in production. Feedback is welcome.
Setting up a local environment for data science is cumbersome. Between environment and dependency management, many hours can be spent on configuration before any work can begin.
A good way to create consistent, portable and isolated environments is containerization. Containers can be preconfigured with packages and software installed. They are efficient and can be shared easily.
The solutions discussed here focus on containerization as opposed to environment managers like Anaconda or Virtualenv.
For a more in-depth rundown of containers, consider reading the introductions from NetApp, Google, and Docker.
Currently, Jupyter Docker Stacks leverage the power of containerization to provide an array of Docker images for data science applications. Note: as of October 2023, these images are published to Quay.io — the Docker Hub versions are frozen and no longer updated.
However, the resulting images from Jupyter Docker Stacks are quite large and some other downsides include:
- The Dockerfiles and startup scripts are long and somewhat difficult to follow
- The images arguably violate the best practice of decoupling (e.g., by including packages like TeX Live, git, vim)
- The base image chain is complex; to extend `jupyter/scipy-notebook`, one must understand the full hierarchy: `ubuntu:noble` → `docker-stacks-foundation` → `jupyter/base-notebook` → `jupyter/minimal-notebook` → `jupyter/scipy-notebook`
If you require Conda or JupyterHub, then Jupyter Docker Stacks is a good option for you. They also support R, Spark, TensorFlow, Julia and other kernels that this project does not (yet). However, this same approach has been used to build PyTorch, Astropy, and AI-enabled variants.
We desired a solution based on the official Python Docker image. Why does this matter? Starting with an appropriate base image is a best practice and helps reduce both the complexity and the size of the image. We also employ multi-stage builds to produce a lean final image.
The Dockerfile uses two stages:
- `builder` — installs all Python packages from `requirements.txt` into an isolated virtual environment at `/venv`. This stage handles pip's temporary files, wheel downloads, and any build-time overhead.
- `final` — starts from a fresh `python:3.13-slim-bookworm` base and copies only `/venv` from the builder using `COPY --from=builder`. No pip cache, no build artifacts, and no intermediate layers carry forward.
```
builder → installs packages into /venv
final   → clean base + COPY --from=builder /venv /venv
```
This is what keeps the final image at 7 layers. The venv is self-contained — the PATH is updated to point into it, so jupyter, python, and all installed packages are available without any system-level installation.
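In Dockerfile terms, the two stages sketch out roughly as follows. This is an illustration of the pattern described above, not the project's actual Dockerfile; the user name, paths, and CMD are assumptions:

```dockerfile
# Stage 1 (builder): install everything into an isolated venv
FROM python:3.13-slim-bookworm AS builder
COPY requirements.txt /tmp/requirements.txt
RUN python -m venv /venv && \
    /venv/bin/pip install --no-cache-dir -r /tmp/requirements.txt

# Stage 2 (final): fresh base; only the finished venv is copied over
FROM python:3.13-slim-bookworm
COPY --from=builder /venv /venv
ENV PATH="/venv/bin:$PATH"
RUN useradd --create-home jordan
USER jordan
WORKDIR /home/jordan/work
EXPOSE 8888
CMD ["jupyter", "lab", "--ip=0.0.0.0", "--port=8888"]
```

Because the second `FROM` starts from a clean base, everything the builder stage did (pip cache, downloaded wheels, temporary files) is discarded; only the single `COPY --from=builder` layer carries the installed packages forward.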
The resulting image is built from a ~28 line Dockerfile using a real multi-stage build. A comparison with quay.io/jupyter/scipy-notebook (measured March 2026) is shown below.
| Image | # Layers | # lines in Dockerfile | Size |
|---|---|---|---|
| `jupyterlab-minimalist` | 7 | 28 | 820 MB |
| `quay.io/jupyter/scipy-notebook` | 37 | 190+ | 3.36 GB |
The number of lines in the Dockerfile was calculated across all Dockerfiles in the chain, with blank lines and comments removed.
Note: This is not exactly a fair comparison because the scipy image from Jupyter Docker Stacks includes so much more (e.g., Conda, JupyterHub, Git, and more).
```shell
docker build -t jupyterlab-minimalist:latest .
```

BuildKit is enabled by default in Docker 23+, so no additional flags are needed.
```shell
docker run --rm -it -p 8888:8888 \
  -w /home/jordan/work \
  --mount type=bind,source=/Users/alex/project,target=/home/jordan/work \
  jupyterlab-minimalist:latest
```

Want to use ggplot or plotly? Simply modify the requirements.txt file and rebuild.
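For example, adding plotly is a one-line change followed by a rebuild (the package is left unpinned here for brevity; pinning a version in requirements.txt is recommended):

```shell
echo "plotly" >> requirements.txt
docker build -t jupyterlab-minimalist:latest .
```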
Generate certificates using mkcert — see the cert folder README for setup instructions. Once you have localhost.pem and localhost-key.pem, pass them to the container:
```shell
docker run --rm -it -p 8888:8888 \
  -w /home/jordan/work \
  -v /Users/alex/project:/home/jordan/work \
  -v /Users/alex/certs:/home/jordan/certs \
  jupyterlab-minimalist:latest \
  --ip=0.0.0.0 --port=8888 \
  --certfile=/home/jordan/certs/localhost.pem \
  --keyfile=/home/jordan/certs/localhost-key.pem
```

Note: `--ip` and `--port` must be repeated here because these arguments override the default CMD.
Any feedback is most welcome. Please feel free to open an issue or pull request if you would like to see any additional functionality or additional kernels added.