
go-proxmox


A Cloud-Native Unified Compute Platform for VMs and Virtual Kubernetes Clusters

go-proxmox is a modern Golang reimplementation of Proxmox VE core capabilities, establishing a unified compute plane where traditional Virtual Machines (VMs) and Virtual Kubernetes Clusters (vCluster/VC) are treated as first-class citizens.


Mission Statement

go-proxmox aims to build an open-source, cloud-native friendly virtualization and container unified compute platform. While preserving the Proxmox VE design philosophy of "direct low-level control, minimal dependencies, and high performance," it provides a unified management plane through modern Golang implementation where VMs and lightweight Virtual Kubernetes Clusters coexist as equal first-class resources.


Why go-proxmox?

Industry Pain Points Addressed

| Pain Point | Traditional Approach | go-proxmox Solution |
|---|---|---|
| VM/Container Dichotomy | Separate management planes for VMs and Kubernetes | Unified resource abstraction with shared storage/network |
| Multi-Tenancy Complexity | Manual namespace isolation, weak boundaries | Native VC with independent control planes, hardware-level isolation |
| Cloud-Native Incompatibility | Legacy Perl codebase, poor K8s integration | Go-native, CRD-ready, Operator patterns |
| AI Workload Gaps | No native GPU passthrough orchestration | First-class GPU/NPU support, AI agent scheduling |
| Operational Fragmentation | 100+ repos, inconsistent tooling | Single monorepo, unified binary deployment |
| Vendor Lock-in | Proprietary HCI solutions (VMware, SmartX) | Open-source Apache 2.0, community-driven |

Key Advantages

  • Unified Compute Plane: VM and VC share identical lifecycle state machines, quota models, and observability
  • libvirt-Free Architecture: Direct QMP control over QEMU/KVM, no abstraction overhead
  • Single Binary Deployment: gpve-server + gpve-agent, declarative YAML configuration
  • MySQL-Backed Strong Consistency: All metadata in single source of truth, no distributed config complexity
  • Explicit Compensation + Idempotent Operations: No distributed transactions, predictable failure recovery
  • CNCF Conformance: kubelet-in-netns VNodes pass full Kubernetes conformance tests

Key Features

Virtual Machine Management

  • Full lifecycle: create, start, stop, migrate (live/cold), snapshot, clone
  • Direct QMP protocol control (no libvirt)
  • GPU/vGPU/NPU passthrough
  • Secure Boot, TPM 2.0, OVMF/SeaBIOS
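"Direct QMP protocol control" means speaking QEMU's line-delimited JSON protocol over a Unix socket: the client reads a greeting, negotiates with `{"execute":"qmp_capabilities"}`, then sends commands of the same shape. A minimal sketch of building such a command (the `qmpCommand` helper and the socket path in the comment are illustrative, not go-proxmox API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// qmpCommand builds one JSON line for QEMU's QMP socket.
// Every QMP command has the shape {"execute": ..., "arguments": {...}}.
func qmpCommand(execute string, args map[string]any) ([]byte, error) {
	cmd := map[string]any{"execute": execute}
	if args != nil {
		cmd["arguments"] = args
	}
	return json.Marshal(cmd)
}

func main() {
	// In an agent this line would be written to the VM's QMP Unix
	// socket (path illustrative), e.g. /var/run/gpve/qmp-100.sock.
	b, _ := qmpCommand("query-status", nil)
	fmt.Println(string(b)) // {"execute":"query-status"}
}
```

Skipping libvirt means the management plane sees QMP responses and events directly, with no intermediate XML domain model to keep in sync.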

Virtual Cluster (vCluster) as First-Class Resource

  • Independent apiserver per tenant
  • Tiered control plane storage: embedded SQLite or external etcd
  • VNode isolation levels: L1 (runc) → L2 (Kata) → L3 (QEMU microvm)
  • Pod → VNode → Physical Machine complete traceability

Unified Storage Abstraction

  • Single plugin interface for all content types
  • Backends: Local, ZFS, LVM-thin, Ceph RBD, NFS, iSCSI
  • Shared between VM disks and VC PersistentVolumes
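The "single plugin interface" idea can be sketched like this. The interface and method names below are hypothetical (the real go-proxmox plugin contract may differ); the point is that one small interface fronts every backend, here stubbed with a toy in-memory store standing in for ZFS, Ceph RBD, and the rest:

```go
package main

import "fmt"

// StoragePlugin is an illustrative sketch of a unified backend interface.
type StoragePlugin interface {
	Name() string
	AllocVolume(name string, sizeBytes int64) (string, error) // returns volume ID
	DeleteVolume(id string) error
}

// memStorage is a toy in-memory backend standing in for ZFS/Ceph/etc.
type memStorage struct {
	vols map[string]int64
}

func newMemStorage() *memStorage { return &memStorage{vols: map[string]int64{}} }

func (m *memStorage) Name() string { return "mem" }

func (m *memStorage) AllocVolume(name string, sizeBytes int64) (string, error) {
	id := m.Name() + ":" + name
	if _, ok := m.vols[id]; ok {
		return "", fmt.Errorf("volume %s already exists", id)
	}
	m.vols[id] = sizeBytes
	return id, nil
}

func (m *memStorage) DeleteVolume(id string) error {
	delete(m.vols, id)
	return nil
}

func main() {
	var s StoragePlugin = newMemStorage()
	id, _ := s.AllocVolume("vm-100-disk-0", 50<<30) // 50 GiB
	fmt.Println(id)                                 // mem:vm-100-disk-0
}
```

Because VM disks and VC PersistentVolumes go through the same interface, a single backend can serve both without duplicate provisioning logic.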

Unified Network Abstraction

  • Linux Bridge + VLAN baseline
  • Optional SDN: EVPN/BGP, nftables-based
  • VM NICs and VC Pod networks on same infrastructure

Enterprise Features

  • Multi-tenant RBAC with quota management
  • Cluster HA with automatic failover
  • Node maintenance: cordon, drain, replace
  • Prometheus metrics, OpenTelemetry tracing

Architecture Overview

graph LR
    %% GPVE system architecture

    %% ========== Server modules ==========
    subgraph SERVER["gpve-server"]
        direction TB

        REST["REST API Server"]
        SCHED["Scheduler"]
        EXEC["Task Executor"]
        QUOTA["Quota Manager"]

        DB[(MySQL)]

        REST --> DB
        SCHED --> DB
        EXEC --> DB
        QUOTA --> DB
    end

    %% ========== gRPC transport ==========
    SERVER -->|gRPC| AGENT1
    SERVER -->|gRPC| AGENT2
    SERVER -->|gRPC| AGENT3

    %% ========== Agent node modules ==========
    subgraph AGENT1["gpve-agent<br/>Node 1"]
        QMP["QMP Driver"]
        QEMU1[QEMU]
        QMP --- QEMU1
    end

    subgraph AGENT2["gpve-agent<br/>Node 2"]
        VNODE["VNode Runtime"]
        KUBELET["kubelet-in-netns"]
        VNODE --- KUBELET
    end

    subgraph AGENT3["gpve-agent<br/>Node 3"]
        STORAGE["Storage Plugins"]
        BACKEND["ZFS/Ceph"]
        STORAGE --- BACKEND
    end

    %% ========== Style definitions ==========
    classDef serverBox fill:#2d3748,stroke:#4a5568,color:#e2e8f0
    classDef apiNode fill:#3182ce,stroke:#2c5282,color:#fff
    classDef schedNode fill:#38a169,stroke:#276749,color:#fff
    classDef execNode fill:#d69e2e,stroke:#b7791f,color:#fff
    classDef quotaNode fill:#805ad5,stroke:#6b46c1,color:#fff
    classDef dbNode fill:#e53e3e,stroke:#c53030,color:#fff
    classDef agentBox fill:#1a365d,stroke:#2a4365,color:#e2e8f0
    classDef driverNode fill:#4299e1,stroke:#3182ce,color:#fff
    classDef runtimeNode fill:#48bb78,stroke:#38a169,color:#fff

    class REST apiNode
    class SCHED schedNode
    class EXEC execNode
    class QUOTA quotaNode
    class DB dbNode
    class QMP,QEMU1 driverNode
    class VNODE,KUBELET runtimeNode
    class STORAGE,BACKEND driverNode

For detailed architecture documentation, see docs/architecture.md.


Getting Started

Prerequisites

  • Go 1.22+
  • MySQL 8.0+ (or MariaDB 10.6+)
  • QEMU/KVM 8.0+
  • Linux kernel 5.15+

Installation

# Install from source
go install github.com/turtacn/go-proxmox/cmd/gpve-server@latest
go install github.com/turtacn/go-proxmox/cmd/gpve-agent@latest

# Or build from repository
git clone https://github.com/turtacn/go-proxmox.git
cd go-proxmox
make build

Quick Start

# Initialize configuration
gpve-server init --config /etc/gpve/server.yaml

# Start server
gpve-server serve --config /etc/gpve/server.yaml

# On each node, start agent
gpve-agent join --server https://control-plane:8443 --token <bootstrap-token>

Usage Examples

Create a Virtual Machine

package main

import (
    "context"
    "log"

    "github.com/turtacn/go-proxmox/pkg/client"
    "github.com/turtacn/go-proxmox/pkg/api/types"
)

func main() {
    cli, err := client.New("https://gpve-server:8443", client.WithToken("your-token"))
    if err != nil {
        log.Fatal(err)
    }

    vm := &types.VirtualMachine{
        Metadata: types.ObjectMeta{
            Name:     "my-vm",
            TenantID: "tenant-001",
        },
        Spec: types.VMSpec{
            CPU:    types.CPUSpec{Cores: 4, Sockets: 1},
            Memory: types.MemorySpec{SizeBytes: 8 * 1024 * 1024 * 1024},
            Disks: []types.DiskSpec{
                {Size: "50G", Storage: "local-zfs", Interface: "virtio"},
            },
            Networks: []types.NetworkSpec{
                {Bridge: "vmbr0", Model: "virtio"},
            },
        },
    }

    created, err := cli.VMs().Create(context.Background(), vm)
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("VM created: %s, status: %s", created.Metadata.Name, created.Status.Phase)
}

Create a Virtual Cluster

package main

import (
    "context"
    "log"

    "github.com/turtacn/go-proxmox/pkg/client"
    "github.com/turtacn/go-proxmox/pkg/api/types"
)

func main() {
    cli, err := client.New("https://gpve-server:8443", client.WithToken("your-token"))
    if err != nil {
        log.Fatal(err)
    }

    vc := &types.VirtualCluster{
        Metadata: types.ObjectMeta{
            Name:     "dev-cluster",
            TenantID: "tenant-001",
        },
        Spec: types.VCSpec{
            KubernetesVersion: "v1.30.0",
            ControlPlane: types.ControlPlaneSpec{
                Tier:         types.ControlPlaneTierStandard,
                BackingStore: types.BackingStoreSQLite,
            },
            WorkerNodes: types.WorkerNodesSpec{
                Count:          3,
                Mode:           types.WorkerModePrivate,
                IsolationLevel: types.IsolationL2Kata,
            },
            Quota: types.ResourceQuota{
                CPU:     "16",
                Memory:  "32Gi",
                Storage: "500Gi",
            },
        },
    }

    created, err := cli.VClusters().Create(context.Background(), vc)
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("VCluster created: %s", created.Metadata.Name)

    // Get kubeconfig
    kubeconfig, err := cli.VClusters().GetKubeconfig(context.Background(), "tenant-001", "dev-cluster")
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("Kubeconfig:\n%s", kubeconfig)
}

Query Pod-to-Node Mapping

// Query complete traceability: Pod → VNode → Physical Machine
mapping, err := cli.VClusters().GetPodMapping(context.Background(), "tenant-001", "dev-cluster", "my-pod")
if err != nil {
    log.Fatal(err)
}

log.Printf("Pod %s runs on VNode %s, which is on Physical Machine %s",
    mapping.PodName,
    mapping.VNodeName,
    mapping.PhysicalMachine,
)

Benchmarks

go-proxmox targets the following performance benchmarks:

| Benchmark | Target | Notes |
|---|---|---|
| VM Density | ≥500 VMs/node | Small-footprint VMs |
| VC Density | ≥1000 VCs/cluster | Lightweight control planes |
| API Latency | <100 ms p99 | Control-plane operations |
| Live Migration | <30 s for an 8 GB VM | Shared storage |
| Oracle RAC | Certified compatible | Enterprise workload validation |
| K8s Conformance | 100% pass | CNCF certified |
| OLAP/Data Warehouse | TPC-H validated | Analytical workloads |

Documentation


Contributing

We welcome contributions! Please see our Contributing Guide for details.

Development Setup

# Clone repository
git clone https://github.com/turtacn/go-proxmox.git
cd go-proxmox

# Install dependencies
make deps

# Run tests
make test

# Run linter
make lint

# Build all binaries
make build

Code Style

  • Follow Effective Go
  • Use gofmt and golangci-lint
  • Write tests for all new features
  • Document public APIs

Roadmap

  • Core VM lifecycle management
  • MySQL-backed metadata store
  • Task orchestration engine
  • VCluster integration (Q2 2026)
  • Web UI (Rust/Yew, Q3 2026)
  • Firecracker runtime support (Q4 2026)
  • Multi-region federation (2027)

License

go-proxmox is licensed under the Apache License 2.0.

Copyright 2024-2026 The go-proxmox Authors

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Acknowledgments



About

go-proxmox is a generative AI infrastructure runtime that reimagines virtualization management for the cloud-native era. By rebuilding Proxmox VE's core capabilities in Go, it delivers a unified compute plane where Virtual Machines and vClusters (lightweight virtual Kubernetes clusters) are treated as equal first-class citizens.
