Native S3 storage plugin for Proxmox VE. Use any S3-compatible object store as a Proxmox storage backend for ISO images, container templates, snippets, and backups.
FUSE-based S3 mounts (s3fs, rclone, goofys) present S3 as a local filesystem. This causes problems in a PVE cluster:
- Network outages block the cluster. When the proxy or internet goes down, the FUSE mount hangs or disappears. pvestatd tries to stat the mountpoint, blocks, and the polling loop backs up. This affects all storages on the node, not just the S3 one.
- No offline status. A hung FUSE mount can't report itself as offline. It blocks until the kernel times out. PVE cannot distinguish "unreachable" from "slow".
- PVE doesn't know it's remote. PVE treats the mount as a local directory. There is no health check, no online/offline state, and no way to tell a missing file from a network error.
- Cache coherency. FUSE caches are opaque to PVE. Stale reads and partial data can occur under load or network instability.
ProxS3 is a native Proxmox storage plugin. S3 operations run in a separate Go daemon, not in PVE's polling path. The local cache is a real directory on disk that always exists. When S3 is unreachable, PVE gets online: false from cached state within the normal polling interval. Other storages are unaffected.
ProxS3 has two components:
- proxs3d - Go daemon that handles S3 operations, maintains a local file cache, and monitors endpoint connectivity. Listens on a Unix socket.
- S3Plugin.pm - Perl storage plugin that registers the `s3storage` type in Proxmox. Forwards all storage operations to the daemon via the Unix socket.
```
Proxmox UI / pvesm CLI
          |
          v
S3Plugin.pm (PVE::Storage::Custom)
          | Unix socket (local only)
          v
proxs3d (Go daemon)
    |                        |
    | HTTPS                  | local disk
    v                        v
S3 bucket                file cache
(source of truth)        (real directory, always exists)
```
The daemon auto-discovers S3 storages from /etc/pve/storage.cfg. When you add, update, or remove an S3 storage via the Proxmox UI or pvesm, the plugin signals the daemon to reload automatically.
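For orientation, an S3 entry in `storage.cfg` looks roughly like the stanza below. This is a sketch: the field names mirror the `pvesm` options shown later in this README, but the exact type string and layout are assumptions, and credentials are never written to this file.

```
s3: my-s3-store
        endpoint s3.us-east-1.amazonaws.com
        bucket my-proxmox-bucket
        region us-east-1
        content iso,vztmpl,snippets
```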
When the S3 endpoint becomes unreachable (proxy down, internet outage, provider issue):
| PVE Operation | Behaviour | Effect |
|---|---|---|
| Status polling (pvestatd) | Returns `online: false` from cached state | Storage shown as unavailable. No network call, no blocking. |
| List volumes | 10s timeout, returns empty list | UI shows no contents. Does not hang. |
| Access a cached file | Serves local cached copy, skips S3 validation | Running VMs continue to work with cached ISOs/templates. |
| Access an uncached file | Returns error | File is not available without S3 connectivity. |
| Upload | File written to local cache, S3 upload fails (logged) | File preserved locally, not yet synced to S3. |
When connectivity returns, the health check detects it within 30 seconds and the storage status returns to online.
- Proxmox VE 9.x (Debian Trixie)
- An S3-compatible object store (AWS S3, MinIO, Ceph RGW, Cloudflare R2, Wasabi, etc.)
- Network access from each Proxmox node to the S3 endpoint (direct or via HTTP proxy)
This gets you from zero to a working S3-backed ISO/template store in under 5 minutes.
On each Proxmox node in your cluster:
```bash
# Download the latest release
wget https://github.com/sol1/proxs3/releases/latest/download/proxs3_amd64.deb

# Install
dpkg -i proxs3_amd64.deb
```

This installs:

- `/usr/bin/proxs3d` - the Go daemon
- `/usr/share/perl5/PVE/Storage/Custom/S3Plugin.pm` - the Proxmox plugin
- `/usr/share/pve-manager/js/s3storage.js` - web UI panel for the S3 storage type
- `/lib/systemd/system/proxs3d.service` - systemd unit
- `/etc/proxs3/proxs3d.json` - daemon config (only created if not already present)
Important: After installing or upgrading the package, you must restart the PVE services so they load the new plugin code:

```bash
systemctl restart pvedaemon pveproxy pvestatd
```

These services load the storage plugin at startup. Without a restart, the S3 storage type won't appear in the UI, storage status will show as "unknown", and content type changes won't take effect. The proxs3d daemon is managed separately and is restarted automatically by the package.
Edit /etc/proxs3/proxs3d.json on each node:
```json
{
  "socket_path": "/run/proxs3d.sock",
  "cache_dir": "/var/cache/proxs3",
  "cache_max_mb": 4096,
  "credential_dir": "/etc/pve/priv/proxs3",
  "storage_cfg": "/etc/pve/storage.cfg",
  "headroom_gb": 100
}
```

The only settings you're likely to change are `cache_dir` and `cache_max_mb`.
Important: choose your cache location carefully. ISOs and templates are large files. The cache should live on a disk with plenty of free space, not on a small rootfs. Good choices:
- A dedicated local disk or partition (e.g., `/mnt/proxs3-cache`)
- A fast SSD with spare capacity
- An LVM volume sized for your expected workload
The cache_max_mb setting controls the maximum cache size in megabytes. When exceeded, the least recently used files are evicted. Set this to roughly 80% of the available space on your cache disk.
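As a rough sizing aid, the 80% rule above can be computed from `df` output. This is a sketch; `CACHE_DIR` defaults to `/var/cache` here purely for illustration, so point it at your actual cache mount.

```shell
# Compute a cache_max_mb value as ~80% of free space on the cache disk.
CACHE_DIR="${CACHE_DIR:-/var/cache}"
avail_mb=$(df --output=avail --block-size=1M "$CACHE_DIR" | tail -n 1 | tr -d '[:space:]')
cache_max_mb=$(( avail_mb * 80 / 100 ))
echo "cache_max_mb=$cache_max_mb"
```

Put the resulting number into `cache_max_mb` in `/etc/proxs3/proxs3d.json` and restart the daemon.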
This config file is per-node (it's in /etc/proxs3/, not /etc/pve/) so you can set different cache paths and sizes on different nodes.
If you set cache_dir outside /var/cache/, you need a systemd override so the daemon has write access (it runs with ProtectSystem=strict):
```bash
systemctl edit proxs3d
```

Add:

```ini
[Service]
ReadWritePaths=/mnt/proxs3-cache /run
```

Then restart: `systemctl restart proxs3d`
Create a bucket in your S3-compatible store and set up the expected directory structure. ProxS3 uses these key prefixes:
| Content Type | S3 Prefix | Example Key |
|---|---|---|
| ISO images | `template/iso/` | `template/iso/debian-12.7-amd64-netinst.iso` |
| Container templates | `template/cache/` | `template/cache/debian-12-standard_12.2-1_amd64.tar.zst` |
| Snippets | `snippets/` | `snippets/cloud-init-user.yaml` |
| Backups | `dump/` | `dump/vzdump-qemu-100-2024_01_01.vma.zst` |
You can create these prefixes by uploading a file to each path, or by creating "folders" in the S3 console. Most S3-compatible stores create prefixes implicitly when you upload objects.
To pre-populate with ISOs, simply upload them to the template/iso/ prefix:
```bash
aws s3 cp debian-12.7-amd64-netinst.iso s3://my-bucket/template/iso/
```

Then enable and start the daemon:

```bash
systemctl enable --now proxs3d
```

Check that it started correctly:

```bash
systemctl status proxs3d
journalctl -u proxs3d --no-pager -n 20
```

You should see log lines showing the socket path and the number of discovered storages (zero at this point, since we haven't added one yet).
You only need to do this once per cluster (the config is shared across all nodes via pmxcfs).
Option A: Via the web UI
Go to Datacenter -> Storage -> Add -> S3 and fill in the fields.
Option B: Via the command line
```bash
# AWS S3 example - endpoint is just the hostname, no https://
pvesm add s3 my-s3-store \
    --endpoint s3.us-east-1.amazonaws.com \
    --bucket my-proxmox-bucket \
    --region us-east-1 \
    --access-key AKIAIOSFODNN7EXAMPLE \
    --secret-key wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY \
    --content iso,vztmpl,snippets \
    --use-ssl 1

# DigitalOcean Spaces example
pvesm add s3 my-do-store \
    --endpoint syd1.digitaloceanspaces.com \
    --bucket my-space-name \
    --region syd1 \
    --access-key DO00EXAMPLE \
    --secret-key secretkeyhere \
    --content iso,vztmpl,snippets \
    --use-ssl 1

# MinIO example - path-style 1 required
pvesm add s3 my-minio-store \
    --endpoint minio.local:9000 \
    --bucket my-bucket \
    --content iso,vztmpl,snippets \
    --path-style 1
```

What happens behind the scenes:
- The plugin writes your credentials to `/etc/pve/priv/proxs3/my-s3-store.json` (root-only, 0600). This file is cluster-shared via pmxcfs.
- The storage configuration (endpoint, bucket, region, etc.) is written to `/etc/pve/storage.cfg`, but credentials are not stored there.
- The plugin signals proxs3d to reload its configuration.
- The daemon re-reads `storage.cfg`, discovers the new S3 storage, loads its credentials, and starts health-checking the endpoint.
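If you want to confirm the credential file landed with the right permissions, a small check like this works. It's a sketch: the helper name is ours, and the expected mode (0600) comes from the description above.

```shell
# Warn if a credential file is not mode 0600, the root-only mode
# the plugin uses for /etc/pve/priv/proxs3/<storeid>.json.
check_cred_perms() {  # check_cred_perms <file>
  mode=$(stat -c '%a' "$1") || return 1
  if [ "$mode" = "600" ]; then
    echo "ok: $1 is 0600"
  else
    echo "WARN: $1 is $mode, expected 600"
  fi
}
```

Usage: `check_cred_perms /etc/pve/priv/proxs3/my-s3-store.json`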
```bash
# Check storage status
pvesm status

# List contents
pvesm list my-s3-store

# Or check the web UI - your S3 storage should appear with a green status indicator
```

You should now see any ISOs or templates you uploaded to the bucket. You can upload new ones via the Proxmox UI or download ISOs directly from the storage view.
| Field | Default | Description |
|---|---|---|
| `socket_path` | `/run/proxs3d.sock` | Unix socket path for plugin communication |
| `cache_dir` | `/var/cache/proxs3` | Local cache directory for downloaded objects |
| `cache_max_mb` | `4096` | Maximum cache size in MB before LRU eviction |
| `credential_dir` | `/etc/pve/priv/proxs3` | Directory containing per-storage credential files |
| `storage_cfg` | `/etc/pve/storage.cfg` | Path to Proxmox storage configuration |
| `headroom_gb` | `100` | Available space (in GiB) to report to Proxmox. S3 has no real capacity limit, so this is the "free space" PVE always sees. Set to match what makes sense for your environment. |
| `proxy.http_proxy` | (empty) | HTTP proxy URL for outbound connections |
| `proxy.https_proxy` | (empty) | HTTPS proxy URL for outbound connections |
| `proxy.no_proxy` | (empty) | Comma-separated list of hosts to bypass proxy |
| Field | Required | Description |
|---|---|---|
| `endpoint` | Yes | S3 endpoint hostname (see Endpoint and URL Style below) |
| `bucket` | Yes | S3 bucket name |
| `region` | No | S3 region (defaults to `us-east-1`) |
| `access-key` | No | S3 access key ID (omit for public buckets) |
| `secret-key` | No | S3 secret access key (omit for public buckets) |
| `content` | No | Comma-separated content types: `iso`, `vztmpl`, `snippets`, `backup`, `import` |
| `use-ssl` | No | Use HTTPS (`1`) or HTTP (`0`). Defaults to on. |
| `path-style` | No | URL style (see Endpoint and URL Style below) |
| `cache-max-age` | No | Maximum age of cached files in days. `0` (default) keeps files forever. See Cache Age Eviction below. |
| `nodes` | No | Restrict storage to specific cluster nodes |
The endpoint field must be the base hostname only: no https:// prefix, no bucket name, no trailing slash.
S3 has two URL styles for addressing buckets:
| Style | URL Format | `path-style` | Example |
|---|---|---|---|
| Virtual-hosted (default) | `https://BUCKET.ENDPOINT/key` | `0` | `https://my-bucket.s3.amazonaws.com/template/iso/debian.iso` |
| Path | `https://ENDPOINT/BUCKET/key` | `1` | `https://minio.local:9000/my-bucket/template/iso/debian.iso` |
Common mistake: If your provider gives you a URL like https://my-bucket.syd1.digitaloceanspaces.com, the endpoint is just syd1.digitaloceanspaces.com. The bucket name is a separate field. Don't include it in the endpoint or it will be doubled.
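As a quick sanity check, plain shell parameter expansion can strip the scheme and bucket label from a virtual-hosted URL like the one above. This sketch only applies to virtual-hosted URLs (where the bucket is the first hostname label):

```shell
# Derive the endpoint field from a virtual-hosted bucket URL.
url="https://my-bucket.syd1.digitaloceanspaces.com"
host=${url#https://}     # drop the scheme
host=${host#http://}
endpoint=${host#*.}      # drop the leading bucket label
echo "$endpoint"         # syd1.digitaloceanspaces.com
```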
How to choose:
- AWS S3, DigitalOcean Spaces, Wasabi, Cloudflare R2, Backblaze B2 → Virtual-hosted (`path-style 0`, the default)
- MinIO, Ceph RGW, most self-hosted S3 → Path style (`path-style 1`)
If your Proxmox nodes access the internet through an HTTP proxy, configure it in the daemon config:
```json
{
  "proxy": {
    "https_proxy": "http://proxy.internal:3128",
    "http_proxy": "http://proxy.internal:3128",
    "no_proxy": "localhost,127.0.0.1,.internal"
  }
}
```

The proxy settings apply to all outbound S3 connections from the daemon.
ProxS3 works with any S3-compatible object store. Here are the recommended settings for common providers:
| Provider | `endpoint` | `region` | `use-ssl` | `path-style` |
|---|---|---|---|---|
| AWS S3 | `s3.us-east-1.amazonaws.com` | `us-east-1` | `1` | `0` |
| AWS S3 (other region) | `s3.ap-southeast-2.amazonaws.com` | `ap-southeast-2` | `1` | `0` |
| DigitalOcean Spaces | `syd1.digitaloceanspaces.com` | `syd1` | `1` | `0` |
| Wasabi | `s3.wasabisys.com` | `us-east-1` | `1` | `0` |
| Cloudflare R2 | `<account-id>.r2.cloudflarestorage.com` | `auto` | `1` | `0` |
| Backblaze B2 | `s3.us-west-004.backblazeb2.com` | `us-west-004` | `1` | `0` |
| MinIO | `minio.local:9000` | depends | depends | `1` |
| Ceph RGW | `rgw.local:7480` | depends | depends | `1` |
Note: The endpoint is always just the hostname (and port if non-standard). Never include https://, the bucket name, or a trailing slash.
The cache is critical to how ProxS3 works. Without it, every file access would require a full download from S3.
How it works:
- When Proxmox requests a file (e.g., to boot a VM from an ISO), the daemon first checks the local cache.
- If the file is cached, the daemon does a lightweight `HeadObject` call to S3 to check the ETag and LastModified timestamp.
- If the cached copy matches (ETag is identical), the local path is returned immediately. No download needed.
- If the remote object has changed (different ETag), the cached copy is invalidated and the new version is downloaded.
- If the file is not cached at all, it's downloaded from S3 and stored in the cache.
This means S3 is always the source of truth. If you update an ISO in your S3 bucket, every node will pick up the change on next access.
Cache eviction: When the total cache size exceeds cache_max_mb, the oldest files (by modification time) are automatically evicted to make room. This runs asynchronously after each new file is cached.
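The eviction pass can be pictured as the sketch below. This is illustrative only (the daemon implements it internally in Go): while the directory exceeds the limit, remove the file with the oldest modification time.

```shell
# Illustrative LRU-style eviction: shrink <dir> until it fits in <max_mb> MB,
# always removing the file with the oldest mtime first.
evict_lru() {  # evict_lru <dir> <max_mb>
  dir=$1; max_mb=$2
  while [ "$(du -sm "$dir" | cut -f1)" -gt "$max_mb" ]; do
    oldest=$(find "$dir" -type f -printf '%T@ %p\n' | sort -n | head -n 1 | cut -d' ' -f2-)
    [ -n "$oldest" ] || break   # nothing left to evict
    rm -f -- "$oldest"
  done
}
```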
Upload caching: When you upload a file via the Proxmox UI, it's sent to S3 and simultaneously cached locally. This means the file is available for immediate use without waiting for a download.
The cache-max-age storage property controls how long files stay in the local cache. This is a per-storage setting, configured in Proxmox (not in the daemon config), so different storages can have different policies.
Why per-storage? A single daemon often serves multiple storages - for example, one for ISOs and one for backups. You probably want ISOs cached indefinitely (they're accessed repeatedly) but backup files cleaned up after a few days (they're written once, synced to S3, and rarely accessed locally again).
```bash
# Keep backup cache for 7 days
pvesm set my-backup-store --cache-max-age 7

# Keep ISOs forever (default, no need to set)
pvesm set my-iso-store --cache-max-age 0
```

The daemon checks file ages hourly. Files older than `cache-max-age` days (by modification time) are removed from the local cache. The files remain in S3 and will be re-downloaded if needed.
This is separate from cache_max_mb (the daemon-wide size limit) and from prune-backups (which controls backup retention in S3 itself, not the local cache).
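The hourly sweep is equivalent in effect to a find-by-age pass like this (a sketch of the behaviour, not the daemon's implementation):

```shell
# Remove cached files whose mtime is older than <max_age_days>; S3 keeps the originals.
sweep_cache_age() {  # sweep_cache_age <cache_dir> <max_age_days>
  [ "$2" -gt 0 ] || return 0           # 0 means "keep forever", matching cache-max-age
  find "$1" -type f -mtime +"$2" -delete
}
```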
ProxS3 is designed for Proxmox clusters where you want every node to have access to the same ISOs and templates:
- Add the storage once. `storage.cfg` is shared across all nodes via pmxcfs. After `pvesm add`, every node sees the storage.
- Credentials are cluster-shared. Stored in `/etc/pve/priv/proxs3/`, distributed by pmxcfs. Root-only permissions (0600).
- Install the .deb on each node. The daemon and plugin must be present on every node that needs access.
- Cache is per-node. Each node maintains its own local cache. Nodes pull from S3 independently and validate against S3 metadata.
- Daemon config is per-node. `/etc/proxs3/proxs3d.json` is local to each node, so you can set different cache paths and sizes per node.
```bash
# Start/stop
systemctl start proxs3d
systemctl stop proxs3d

# Reload config (re-reads storage.cfg and credentials, picks up new/changed/removed storages)
systemctl reload proxs3d

# View logs
journalctl -u proxs3d -f

# Check health
systemctl status proxs3d
```

The daemon performs health checks against each configured S3 endpoint every 30 seconds. If an endpoint becomes unreachable, the storage is marked as offline in Proxmox. When connectivity is restored, it's automatically marked as online again.
The PVE services (pvedaemon, pveproxy, pvestatd) load storage plugins at startup. If you installed or upgraded ProxS3 without restarting them, they won't know about the S3 type:
```bash
systemctl restart pvedaemon pveproxy pvestatd
```

Then hard-refresh your browser (Ctrl+Shift+R). This is the most common issue after install or upgrade.
- Check the daemon is running: `systemctl status proxs3d`
- Check the logs: `journalctl -u proxs3d -f`
- Verify S3 connectivity from the node: `curl -I https://your-endpoint/your-bucket`
- Verify credentials: `cat /etc/pve/priv/proxs3/<storeid>.json`
The daemon isn't running or the socket doesn't exist:
```bash
systemctl restart proxs3d
ls -la /run/proxs3d.sock
```

The cache validates against S3 on every access via ETag. If you're seeing stale data:
- The S3 provider may not be returning proper ETags (some providers don't for multipart uploads)
- You can clear the cache manually:
rm -rf /var/cache/proxs3/<storeid>/ - Restart the daemon:
systemctl restart proxs3d
Set cache_max_mb in /etc/proxs3/proxs3d.json to limit the cache size. Move cache_dir to a disk with more space if needed. After changing, restart the daemon.
Verify the proxy settings in /etc/proxs3/proxs3d.json. The daemon must be restarted (not just reloaded) for proxy changes to take effect:
```bash
systemctl restart proxs3d
```

Build from source:

```bash
# Requires Go 1.24+
make build
sudo make install
```

Build the Debian package:

```bash
sudo apt install debhelper golang-go build-essential
make deb

# The .deb is created in the parent directory
dpkg -i ../proxs3_*.deb
```

| Content Type | Proxmox Value | S3 Prefix | Description |
|---|---|---|---|
| ISO images | `iso` | `template/iso/` | Installation media |
| Container templates | `vztmpl` | `template/cache/` | LXC container templates |
| Snippets | `snippets` | `snippets/` | Cloud-init configs, hookscripts |
| Backups | `backup` | `dump/` | VM/CT backup files |
| Import (disk images) | `import` | `images/` | Golden images for VM templates |
Note: ProxS3 does not support running VM disk images (images) or container rootdirs (rootdir) directly from S3. Live VM disks require block-level random access which S3 cannot provide. Use the import content type to store golden images that can be copied to local storage to create templates.
Store installation media in S3 and make it available across all nodes in your cluster. Upload once, available on every node. No need to copy ISOs between nodes or maintain a shared NFS mount.
```bash
# Upload ISOs to your bucket
aws s3 cp debian-12.7-amd64-netinst.iso s3://my-bucket/template/iso/
aws s3 cp ubuntu-24.04-live-server-amd64.iso s3://my-bucket/template/iso/

# They appear in the Proxmox UI on every node immediately
```

When a node needs an ISO (e.g., to boot a VM installer), ProxS3 downloads it to the local cache on first use. Subsequent uses on the same node are served from cache with an ETag check to ensure freshness. Update an ISO in S3 and every node picks up the new version automatically.
Store base VM disk images in S3 and import them on any node to create templates. This is ideal for maintaining a library of pre-built images (e.g., a hardened Debian base, a pre-configured application stack) that can be deployed across clusters.
```bash
# Upload golden images to the images/ prefix
aws s3 cp base-debian12-disk-0.raw s3://my-bucket/images/
aws s3 cp base-ubuntu2404-disk-0.qcow2 s3://my-bucket/images/
```

Enable the `import` content type on your S3 storage, then use Proxmox's import functionality to copy disk images to local storage and convert them into templates. The originals stay in S3 as your single source of truth.
Maintain a central library of LXC container templates across your cluster. Particularly useful for custom templates that aren't available from the standard Proxmox repositories.
```bash
# Upload custom container templates
aws s3 cp my-custom-debian-12_1.0_amd64.tar.zst s3://my-bucket/template/cache/
```

Templates appear in the Proxmox UI under the S3 storage. When you create a container, ProxS3 downloads the template to the local cache. Like ISOs, templates are validated against S3 on each access. Update a template in the bucket and nodes pick up the change automatically.
Store cloud-init user-data, network-config, and vendor-data files in S3 for use across the cluster. Keep your infrastructure-as-code configs in one place.
```bash
aws s3 cp cloud-init-user.yaml s3://my-bucket/snippets/
aws s3 cp network-config.yaml s3://my-bucket/snippets/
```

Use S3 as a backup target for vzdump. Backups are uploaded directly to S3 and can be restored on any node. Combined with S3 lifecycle policies, this gives you cost-effective long-term backup retention without managing local disk space.
Note: Backup to S3 requires sufficient local cache space to stage the backup file before upload, and restore requires downloading the full backup before extraction.
MIT License. See LICENSE for details.