Cross-platform installer for Triton and SageAttention on ComfyUI. Simplifies GPU-accelerated inference setup for Windows users with automated dependency management and RTX 5090 support.
Updated Mar 16, 2026 - Python
RTX 5090 & RTX 5060 Docker container with PyTorch + TensorFlow. First fully-tested Blackwell GPU support for ML/AI. CUDA 12.8, Python 3.11, Ubuntu 24.04. Works with RTX 50-series (5090/5080/5070/5060) and RTX 40-series.
High-performance LLM inference engine in C++/CUDA for NVIDIA Blackwell GPUs (RTX 5090)
A high-performance local AI pipeline for restoring VHS audio, transcribing with Whisper, and translating subtitles using NLLB-200.
Autonomous node manager for Vast.ai - Dynamic pricing, GPU monitoring, auto-bidding & watchdog for dual RTX 5090 compute rigs. Production-ready with REST API, web dashboard, and systemd integration.
⚡ Compare AI models by Accuracy × Cost × Carbon — RTX 5090 benchmarks reveal 4-bit quantization wastes energy on small models
Lightweight GPU & CPU system tray monitor for NVIDIA GPUs (RTX 5090, RTX 6000, RTX 4090, RTX 3090, Tesla, TCC mode). Real-time power, temperature, VRAM & CPU usage badges. Works where HWMonitor, GPU-Z & MSI Afterburner fail.
Pre-built onnxruntime-gpu 1.24.1 with Blackwell sm_120 CUDA kernels (RTX 5090/5080/5070)
A simulation of the NVIDIA RTX 5090 Founders Edition GPU.
Technical insights from r/LocalLLaMA — vLLM, FP8, NVFP4, Blackwell GPU benchmarks, and more. Unverified community knowledge, generated by Nemotron 9B. Issues welcome.
Generate images, videos, and audio from your terminal. Natural language to ComfyUI — 15 commands, WSL2 auto-detection, zero dependencies.
Enterprise-grade Sovereign AI Stack optimized for NVIDIA Blackwell (sm_120) & vLLM. Features 256K context window, 5.8k tok/s prefill, and integrated observability via Langfuse.