| Tool | Purpose | Min version |
|---|---|---|
| Rust (nightly) | Compiler | Managed by rust-toolchain.toml |
| QEMU | x86_64 emulator | 7.0+ |
| mtools | FAT32 image creation | any |
| clang | C shard compilation (freestanding x86-64) | 14+ |
| OVMF | UEFI firmware for QEMU | Bundled with QEMU (Homebrew) or the `ovmf` package (apt) |
| mise | Task runner (optional) | 2024.0+ |
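Before installing anything, you can sanity-check which prerequisites are already on your `PATH`. A minimal sketch — the binary names below are the common defaults (`mcopy` is part of mtools) and may differ on your distro:

```sh
# Report which prerequisite binaries are already on PATH.
# Binary names are common defaults (mcopy comes from mtools); adjust per distro.
for tool in rustup qemu-system-x86_64 mcopy clang; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok:      $tool"
  else
    echo "missing: $tool"
  fi
done
```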
**macOS**

```sh
# Rust (installs nightly automatically via rust-toolchain.toml)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# QEMU + mtools + clang (OVMF firmware ships with QEMU)
brew install qemu mtools llvm
# mise (optional)
brew install mise
```

**Linux (Debian/Ubuntu)**

```sh
# Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# QEMU + mtools + OVMF + clang
sudo apt install qemu-system-x86 mtools ovmf clang lld
# mise (optional)
curl https://mise.run | sh
```

Clone the repository:

```sh
git clone https://github.com/coconut-os/coconutOS.git
cd coconutOS
```

On first build, rustup will install the nightly toolchain and components (rust-src, llvm-tools-preview) specified in rust-toolchain.toml.
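The rust-toolchain.toml mentioned above would look roughly like the following. This is an illustrative sketch assembled from the channel and components named in this section — the file in the repo is authoritative and may additionally pin a dated nightly or extra targets:

```toml
[toolchain]
channel = "nightly"                              # nightly is required
components = ["rust-src", "llvm-tools-preview"]  # installed automatically on first build
```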
**With mise**

```sh
mise trust          # trust the mise.toml config
mise run build-all  # build supervisor + bootloader
mise run run        # build and boot in QEMU
```

**Without mise**

```sh
./scripts/qemu-run.sh
```

This single script builds all Rust crates and C shards, then launches QEMU.
On a successful boot you should see output like:
```
coconutOS supervisor v0.3.4 booting...
Higher-half: page tables built, CR3 switched
GDT: loaded (7 entries, TSS active)
IDT: loaded (256 entries, higher-half)
Syscall: configured (LSTAR, STAR, SFMASK)
PIC: remapped (IRQ 0-15 -> vectors 32-47)
PIT: configured (~1ms periodic, channel 0)
CR4: OSFXSR + TSD set
IOMMU: translation enabled
GPU: 2 partitions (8 MiB VRAM each, 4 CUs each)
Filesystem: ext2 ramdisk, 128 KiB, 2 files
Scheduler: starting run loop
...
GPU mem: freed+zeroed, compute ok
GPU DMA: recv ok, verified
Hello from coconutFS!
Hello from C shard!
llama-inference: loaded model (dim=32, layers=2, vocab=32)
llama-inference: token 0 -> 'i'
llama-inference: token 1 -> 't'
...
llama-inference: inference complete (16 tokens)
llama-pipeline stage 0: forwarding layers 0-0
llama-pipeline stage 1: forwarding layers 1-1
...
--- Shard Profiling Summary ---
ID  Syscalls  Cycles/Syscall  Switches  Wall (ms)  Name
0   12        4523            8         45         gpu-hal
1   11        4201            7         42         gpu-hal
...
coconutOS supervisor v0.3.4: all shards completed.
Halting.
```
The system boots, then creates GPU HAL shards (one per partition), a filesystem reader shard, a C FFI demo shard, a transformer inference shard, and two pipeline shards that split the model across stages via IPC. Once all shards complete, a per-shard profiling summary is printed and the system halts cleanly.
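Because the profiling summary is plain columnar text, quick ad-hoc aggregation also works. For example, summing syscalls and wall time across shards with awk — the column layout is taken from the sample output above, and the captured boot log is inlined here for illustration:

```sh
# Sum syscalls (col 2) and wall time (col 5) over all shard rows
# of the profiling summary; layout taken from the sample output above.
awk '
  /^--- Shard Profiling Summary ---/ { inprof = 1; next }
  inprof && $1 ~ /^[0-9]+$/          { syscalls += $2; wall += $5 }
  END { printf "total syscalls: %d, total wall: %d ms\n", syscalls, wall }
' <<'EOF'
--- Shard Profiling Summary ---
ID  Syscalls  Cycles/Syscall  Switches  Wall (ms)  Name
0   12        4523            8         45         gpu-hal
1   11        4201            7         42         gpu-hal
EOF
```

In practice you would pipe the QEMU serial output into awk instead of a here-document.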
To parse the profiling summary into a formatted report:

```sh
./scripts/qemu-run.sh 2>&1 | python3 scripts/coconut-prof.py
```

- Building — workspace layout, build targets, cargo configuration
- Debugging — GDB, serial output, common faults
- Architecture — full system design document