A modular ROS 2 (Jazzy) platform for researching and benchmarking autonomous robot navigation in 2D and 3D simulated environments. It supports classical planners (Nav2), deep-RL planners (rosnav_rl), and a variety of simulators (Gazebo, Isaac Sim).
Prerequisites: a Docker installation with the nvidia-container-toolkit for GPU support. The current user must be in the `docker` group.
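Missing `docker`-group membership is a common setup failure. A quick check, with the standard fix printed rather than executed (since `usermod` requires sudo):

```sh
# Check whether the current user can talk to Docker without sudo.
if id -nG | grep -qw docker; then
  echo "user is already in the docker group"
else
  # Standard fix (needs sudo, then a re-login or `newgrp docker`):
  echo "run: sudo usermod -aG docker $(whoami)"
fi
```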
Afterwards, run the following commands to install Arena:
```sh
curl https://raw.githubusercontent.com/voshch/Arena/jazzy/install.sh > install.sh
bash install.sh
```

Follow the prompts. This will create a ROS 2 workspace at your target location and tell you how to proceed (yellow text).
```sh
cd ~/arena_ws                   # replace with your actual workspace path
source arena
arena feature isaac install     # optional
arena feature gazebo install    # optional
arena feature training install  # optional
arena feature vllm install      # optional: local LLM backend
```

We recommend installing at least one simulator.
Runs a local vLLM server plus a LiteLLM proxy that speaks the Gemini API, so GPT consumers in `task_generator` transparently hit local inference instead of Google. Defaults target an 11 GB 2080 Ti (Qwen3-0.6B, 40% GPU utilization).
Tune via `_meta/docker/features/vllm/config.yaml`:

| key | default | purpose |
|---|---|---|
| `model` | `Qwen/Qwen3-0.6B` | HF model id |
| `gpu_memory_utilization` | `0.4` | fraction of VRAM vLLM may claim |
| `max_model_len` | `4096` | context window |
| `port` / `proxy_port` | `8000` / `4000` | vLLM / LiteLLM ports |
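For orientation, a `config.yaml` with the documented defaults might look like this (key names are from the table above; the flat file layout is an assumption):

```yaml
model: Qwen/Qwen3-0.6B
gpu_memory_utilization: 0.4
max_model_len: 4096
port: 8000
proxy_port: 4000
```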
After editing, re-run `arena feature vllm update` to recreate the container.
The container starts automatically on `source` and keeps running in the background. To free up GPU memory, stop it with `arena feature docker stop`.
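Once the container is up, one way to confirm the proxy answers is a quick request — a sketch assuming the default `proxy_port` of 4000 and LiteLLM's OpenAI-compatible chat endpoint; adjust the model name to match your `config.yaml`:

```sh
PROXY_PORT=4000   # default proxy_port from config.yaml
curl -s "http://localhost:${PROXY_PORT}/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{"model":"Qwen/Qwen3-0.6B","messages":[{"role":"user","content":"ping"}]}' \
  || echo "proxy not reachable on port ${PROXY_PORT}"
```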
```sh
cd ~/arena_ws   # replace with your actual workspace path
source arena
arena launch sim:=isaac                                         # Isaac Sim
arena launch local_planner:=rosnav_rl agent_name:=<your_agent>  # DRL planner
arena launch sim:=gazebo local_planner:=rosnav_rl env_n:=2 train_config:=<path to config.yaml>  # DRL training
```

Place your trained agent folder inside `Arena/arena_training/agents/<agent_name>/` (it must contain `training_config.yaml` and `best_model.zip`), then launch with `local_planner:=rosnav_rl agent_name:=<agent_name>`. Refer to `arena_training` for training instructions.
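A launch that fails on a missing agent is easier to debug with a quick layout check first. A sketch using the folder layout described above; `my_agent` is a placeholder name:

```sh
AGENT=my_agent   # placeholder agent name
DIR="Arena/arena_training/agents/${AGENT}"
# Report which of the required agent files are present.
for f in training_config.yaml best_model.zip; do
  if [ -f "${DIR}/${f}" ]; then
    echo "${f} OK"
  else
    echo "missing: ${DIR}/${f}"
  fi
done
```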
Linting is handled by Ruff, driven by pre-commit. Config lives in the root `pyproject.toml`; the hook pin is in `.pre-commit-config.yaml`. Auto-formatting is intentionally not enforced.
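For reference, a Ruff hook entry in `.pre-commit-config.yaml` typically looks like the following (the `rev` pin here is illustrative, not the repo's actual pin):

```yaml
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.6.9   # illustrative pin; the real one lives in the repo
    hooks:
      - id: ruff
```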
One-time setup:
```sh
pip install pre-commit
pre-commit install
```

Everyday use: hooks run automatically on `git commit` against staged files. To run manually:

```sh
pre-commit run     # staged files only
pre-commit run -a  # entire repo
ruff check .       # check without pre-commit
```

If the hook auto-fixes something, the commit is aborted and the fixes are left unstaged; `git add` and re-commit.
`.github/workflows/lint.yml` runs the same pre-commit hooks on every push to `jazzy` and every pull request targeting it. The GitHub check uses the exact config and hook pins from `.pre-commit-config.yaml`, so local and CI never drift. Make the check required in branch protection to block merges on lint failures.
Bump the Ruff version with `pre-commit autoupdate`.
To register the NVIDIA runtime with Docker (and with containerd, if you use it), then restart the daemons:

```sh
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
sudo nvidia-ctk runtime configure --runtime=containerd
sudo systemctl restart containerd
```