NVIDIA DGX Spark resources
- DGX Spark User Guide
- DGX Spark
- DGX Spark Playbooks
- Set up Tailscale on Your Spark | DGX Spark
- vLLM for Inference | DGX Spark
- https://github.com/search?q=DGX+Spark&type=repositories
- NVIDIA/dgx-spark-playbooks
- eugr/spark-vllm-docker
- mark-ramsey-ri/vllm-dgx-spark
- eelbaz/dgx-spark-vllm-setup
- ohoachuck/dgx-spark-vllm-qwen3-omni: Tweak of eelbaz/dgx-spark-vllm-setup to support Qwen3-omni multimodal capabilities on NVIDIA DGX Spark
- eelbaz/dgx-spark-vllm-setup: One-command vLLM installation for NVIDIA DGX Spark with Blackwell GB10 GPUs (sm_121 architecture)
- dataforgex/dgx_spark
- kenmoini/lab-dgx-spark: Resources to quickly bootstrap a DGX Spark (or other) AI machine for productivity - includes container and model management, reverse proxy, service portal, observability, and more!
- kshetrajna12/sparkstation: Unified LLM orchestration and gateway service for DGX Spark — dynamically manages vLLM, SGLang, and TensorRT-LLM backends under a single OpenAI-compatible API.
- codekunoichi/dgx-spark-open-source: A practical, hands-on guide for Mac users transitioning into Ubuntu + GPU workflows. Step-by-step notes, cheatsheets, and setup scripts from my DGX Spark journey.
- paruparu/faster-whisper-dgx-spark: Reproducible Docker setup for running faster-whisper with CUDA on NVIDIA DGX Spark–class ARM systems.
- Trosfy/dgx-spark-ai-curriculum: Comprehensive 40-week AI/ML curriculum optimized for NVIDIA DGX Spark (Grace Blackwell GB10). From neural network fundamentals to 70B model fine-tuning, quantization, and AI agents. All tasks in JupyterLab with hands-on projects.
- Sniper711/DGX-Spark-Day03-DGX-Spark-Now-Accessible-on-Tablets-and-Mobile-Devices-20260102: This is an extension of my previous articles "DGX Spark: Day01A & Day01B, and Day02". Here, I'll expand Client support from Mac/PC to any Tablets/Phones.
- assix/pytorch-aarch64-cuda130-python310-wheels: PyTorch Wheels for DGX Spark (aarch64 / Python 3.10 / CUDA 13.0)
- edu-ide/gb10-wheels: Pre-built Python wheels for NVIDIA DGX Spark (GB10, SM121)
- atripathy86/transcribe: Multi-format Whisper/Faster-Whisper transcription tool with DGX-Spark optimization
- raibid-labs/osai: Comprehensive guide to self-hosted AI infrastructure on NVIDIA DGX Spark - your $4,000 datacenter
- cslev/llamacpp-cuda-arm64-docker: This repo is to build your own llama.cpp Dockerimage with CUDA for ARM64 (DGX Spark)
- operezmuena/voxcpm-1.5-fastapi-server: DGX Spark–focused example: Dockerized FastAPI REST server for VoxCPM 1.5 TTS (voice prompting + batch text files).
- andrewcapatina/research-assistant: This project will summarize a collection of research papers gathered from internet sources on the NVIDIA DGX Spark/Jetson development boards.
- rakpan/project-vyasa: Project Vyasa is a local-first research execution framework for DGX Spark that helps researchers, journal authors, and domain experts turn unstructured documents into defensible, evidence-bound manuscripts for high-stakes, long-running inquiry. It keeps humans in control of judgment while AI handles extracting, validating, and governing evidence.
- assix/ctranslate2-aarch64-cuda13-binaries: CTranslate2 Binaries for DGX Spark (aarch64 / CUDA 13)
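Several of the repos above (e.g. eugr/spark-vllm-docker, eelbaz/dgx-spark-vllm-setup, kshetrajna12/sparkstation) serve models behind an OpenAI-compatible API, as vLLM does with `vllm serve`. For orientation, a minimal sketch of the request body such an endpoint expects — the endpoint URL and model name here are illustrative assumptions, not taken from any specific repo:

```python
import json

# vLLM's OpenAI-compatible server listens on port 8000 by default;
# this URL and the model name below are assumptions for illustration.
BASE_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model: str, prompt: str, max_tokens: int = 128) -> dict:
    """Build a request body following the OpenAI chat-completions schema."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

body = build_chat_request("my-local-model", "Hello from a DGX Spark!")
print(json.dumps(body, indent=2))
```

The same payload works against any of the backends sparkstation multiplexes (vLLM, SGLang, TensorRT-LLM), which is the point of standardizing on the OpenAI schema.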