Feature Request: Add NVIDIA Jetson support for edge deployments #195

@matedev01

Description

Add first-class support for NVIDIA Jetson devices (Orin, Xavier, Nano) so Dream Server can run on edge hardware with local GPU inference.

Use Case

Nonprofits, field deployments, and edge use cases often rely on Jetson for low-cost, low-power local AI. Dream Server’s “sovereign AI” mission fits these scenarios, but today the installer and stack target x86 + discrete GPUs or Apple Silicon. Jetson (ARM64 + Tegra CUDA) is a different path that isn’t yet supported. Supporting Jetson would extend Dream Server to robotics, kiosks, field clinics, and similar edge deployments.

Proposed Solution

  1. Detection: Extend scripts/detect-hardware.sh and scripts/classify-hardware.sh to detect Jetson (e.g. /etc/nv_tegra_release, Tegra device IDs, uname -m = aarch64).
  2. Tier mapping: Add a Jetson tier in installers/lib/tier-map.sh with model choices by device (e.g. Orin 16GB → 7B, Nano 4GB → 1.5B–3B).
  3. Compose overlay: Add docker-compose.jetson.yml (or similar) with linux/arm64 platform and CUDA images compatible with JetPack.
  4. Compose resolution: Update scripts/resolve-compose-stack.sh to select the Jetson overlay when Jetson is detected.
  5. Documentation: Update docs/SUPPORT-MATRIX.md and add a Jetson quickstart.
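The detection and tier-mapping steps above could be sketched roughly as follows. This is an illustrative sketch, not the actual implementation: the function names are hypothetical, the real logic would live in scripts/detect-hardware.sh and installers/lib/tier-map.sh, and the memory thresholds are assumptions loosely based on the Orin 16GB → 7B / Nano 4GB → 1.5B–3B examples above.

```shell
# Hypothetical Jetson detection: L4T (Linux for Tegra) systems run on
# aarch64 and ship /etc/nv_tegra_release describing the L4T/JetPack
# release. Both arguments are overridable so the logic is testable
# off-device.
detect_jetson() {
  local arch="${1:-$(uname -m)}"
  local release_file="${2:-/etc/nv_tegra_release}"
  if [ "$arch" = "aarch64" ] && [ -f "$release_file" ]; then
    echo "jetson"
  else
    echo "generic"
  fi
}

# Hypothetical tier mapping by total memory in MB. Thresholds are
# placeholder assumptions: >=12GB -> 7B-class, >=6GB -> 3B-class,
# below that -> 1.5B-class.
jetson_tier() {
  local mem_mb="$1"
  if [ "$mem_mb" -ge 12288 ]; then
    echo "7b"
  elif [ "$mem_mb" -ge 6144 ]; then
    echo "3b"
  else
    echo "1.5b"
  fi
}
```

On a real device, mem_mb could be read from /proc/meminfo; keeping detection and tier mapping as separate functions matches the existing split between the detect/classify scripts and tier-map.sh.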

Alternatives Considered

  • Cloud mode only: Doesn’t meet the goal of fully local, sovereign inference on edge hardware.
  • Manual Docker: Possible today but not discoverable or supported; users need a guided path.
  • Generic ARM64: Jetson has specific CUDA/JetPack requirements; a dedicated path is more reliable than a generic ARM64 path.

Metadata

Assignees

No one assigned

Labels

enhancement (New feature or request)

Type

No type

Projects

No projects

Milestone

No milestone

Relationships

None yet

Development

No branches or pull requests
