Proxmox Backup Server 4.0 (August 2025) introduced native S3 object storage as a datastore backend. This replaces the old pattern of mounting S3 with s3fs-fuse or running third-party proxies like pmoxs3backuproxy — approaches Proxmox never officially supported. This guide covers both supported deployment patterns, with exact commands and the caveats that matter for production use.
What you'll configure: An S3 endpoint, an S3-backed datastore with local cache, and — optionally — a sync job that keeps a local datastore and an S3 datastore in step for true 3-2-1 backups.
You'll need a PBS 4.x installation (confirm with proxmox-backup-manager versions), an S3-compatible bucket, and credentials with GetObject, PutObject, ListBucket, and DeleteObject permissions on that bucket. PBS does not create buckets or manage ACLs.
⚠️ The S3 datastore backend is still marked technology preview as of PBS 4.1.6. It works and is reasonable to use for secondary or offsite copies, but run restore tests frequently and watch the Proxmox release notes before trusting it as your only copy.
PBS does not put self-contained snapshots into the bucket. It uses the same content-addressable chunk store model it uses locally: each backup is split into deduplicated, compressed, optionally encrypted chunks identified by hash. Those chunks are written to S3 as individual objects, prefixed by the datastore name. Index files that map chunks back to snapshots are also stored as objects.
Two consequences:
- The bucket is authoritative. If the PBS host dies, install a fresh PBS, recreate the S3 endpoint, recreate the datastore with --reuse-datastore true --overwrite-in-use true, and your backups are recoverable.
- A local cache is mandatory. PBS keeps recently-read chunks and index metadata on disk so garbage collection, verification, and reads don't hit S3 for every operation. Without the cache, cost and latency would both be unusable.
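The dedup side of the chunk model is easy to see in miniature: identical data always hashes to the same chunk ID, so however many backups reference it, the store keeps one copy. A toy sketch with coreutils, not PBS's actual chunker:

```shell
# Two "backups" containing the same data produce the same chunk ID,
# so a content-addressable store keeps only one copy
printf 'guest disk block' > /tmp/backup-a.chunk
printf 'guest disk block' > /tmp/backup-b.chunk
sha256sum /tmp/backup-a.chunk /tmp/backup-b.chunk
```

Both lines print the same hash; in the real chunk store that hash is the object name, which is what makes deduplication automatic.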
Pattern 1 (direct-to-S3): PBS writes directly to the S3 datastore. No local chunk storage beyond the cache.
Use when: homelab, small deployments, or secondary PBS acting purely as offsite target. Simplest to set up.
Trade-off: every backup, restore, verification, and GC operation touches S3. Initial backup speeds are bound by your upstream bandwidth. Restores are bound by downstream.
Pattern 2 (local-first with sync): backups land on a local datastore first (fast). A scheduled sync job on the same PBS instance pulls from local to S3 for offsite retention. You get both copies from one PBS host.
Use when: you want backup speed to match local disk throughput and need an automated offsite copy. This is the recommended pattern for MSPs and production use.
Trade-off: more storage required on the PBS host, slightly more complex.
The rest of this guide sets up the S3 endpoint and datastore once. Both patterns diverge only at the final step (whether you point PVE at the S3 datastore directly, or configure a sync job).
At your S3 provider, create a bucket plus an access key/secret pair with GetObject, PutObject, ListBucket, and DeleteObject permissions on that bucket.
If your provider supports object versioning or object lock, enable it on the bucket for ransomware protection. PBS never modifies existing chunks, but a compromised client with delete permissions could — versioning gives you a recovery window.
Via the web UI: Navigate to Configuration → Remotes → S3 Endpoints → Add. Fill in name, access key, secret, endpoint URL, region, and (for self-signed providers) fingerprint.
Via CLI, using the templated endpoint form that most providers support:
proxmox-backup-manager s3 endpoint create my-s3-ep \
--access-key 'YOUR_ACCESS_KEY' \
--secret-key 'YOUR_SECRET_KEY' \
--endpoint '{{bucket}}.s3.{{region}}.example.com' \
--region eu-central-1
The {{bucket}} and {{region}} placeholders are expanded automatically when PBS makes requests. This gives you one endpoint config that works across multiple buckets.
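PBS performs this substitution internally; as a plain-shell illustration of what the template expands to for the example values above (the sed here is ours, not PBS code):

```shell
# Illustration only: how the endpoint template expands for one bucket/region
TEMPLATE='{{bucket}}.s3.{{region}}.example.com'
BUCKET='pbs-offsite-bucket'
REGION='eu-central-1'
printf '%s\n' "$TEMPLATE" | sed -e "s/{{bucket}}/$BUCKET/" -e "s/{{region}}/$REGION/"
# prints: pbs-offsite-bucket.s3.eu-central-1.example.com
```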
Vhost vs path style: PBS defaults to vhost-style addressing (bucket as subdomain). If your provider requires path-style (bucket in the URL path), add --path-style true. Cloudflare R2 and some self-hosted providers need this.
Self-signed certificates: Add --fingerprint 'XX:XX:XX:...'. Get the fingerprint with:
openssl s_client -connect your-s3-endpoint:443 -servername your-s3-endpoint < /dev/null 2>/dev/null | \
openssl x509 -fingerprint -sha256 -noout
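If you want to see the output format before pointing openssl at a live endpoint, you can generate a throwaway self-signed certificate locally; the fingerprint line is what PBS expects, minus the leading label:

```shell
# Generate a throwaway self-signed cert, then print its SHA-256 fingerprint
# in the colon-separated XX:XX:... form used by --fingerprint
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo.key -out /tmp/demo.crt -days 1 -subj '/CN=demo' 2>/dev/null
openssl x509 -in /tmp/demo.crt -fingerprint -sha256 -noout
```

Copy only the hex pairs after the "Fingerprint=" label into the endpoint config.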
Verify the endpoint:
proxmox-backup-manager s3 endpoint list
The cache must live on a dedicated path. A ZFS dataset with a quota is the cleanest option:
zfs create -o mountpoint=/mnt/datastore/s3-cache rpool/s3-cache
zfs set quota=128G rpool/s3-cache
Or use a dedicated partition mounted at /mnt/datastore/s3-cache. Whatever you choose, do not use an existing datastore path — PBS will reject it.
Via the web UI: Datastore → Add Datastore, select S3 as backend, pick your endpoint from the dropdown, set bucket name and cache path.
Via CLI:
proxmox-backup-manager datastore create s3-offsite \
/mnt/datastore/s3-cache \
--backend type=s3,client=my-s3-ep,bucket=pbs-offsite-bucket
- s3-offsite is the datastore name. It becomes the prefix for all objects in the bucket, so pick something stable — renaming later means reseeding.
- /mnt/datastore/s3-cache is the local cache path.
- client=my-s3-ep references the endpoint you created in Step 2.
- bucket=pbs-offsite-bucket is the S3 bucket name.

List datastores to confirm:
proxmox-backup-manager datastore list
The datastore is now usable. At this point your config diverges depending on pattern.
Pattern 1 (direct-to-S3): in Proxmox VE, go to Datacenter → Storage → Add → Proxmox Backup Server. Enter your PBS IP/hostname, credentials, the datastore name (s3-offsite), and the PBS server fingerprint.
Get the PBS fingerprint:
proxmox-backup-manager cert info | grep Fingerprint
Backup jobs in PVE that target this storage now write directly to S3 via the PBS local cache.
Pattern 2 (local-first with sync): create a local datastore alongside the S3 one (if you don't already have one):
proxmox-backup-manager datastore create local-pbs /mnt/datastore/local-pbs
Point your PVE backup jobs at local-pbs. Now configure a pull sync job that mirrors snapshots from local-pbs into s3-offsite.
PBS sync jobs are designed to pull from a Remote (another PBS instance). To sync between two datastores on the same PBS host, the supported approach is to create a Remote that points back at localhost. Create a dedicated API token for the sync user first (Configuration → Access Control → API Token, role DatastoreReader on /datastore/local-pbs), then:
proxmox-backup-manager remote create self \
--host 127.0.0.1 \
--userid 'sync@pbs!syncjob' \
--password 'YOUR_API_TOKEN_SECRET' \
--fingerprint "$(proxmox-backup-manager cert info | awk '/Fingerprint/ {print $3}')"
Then create the sync job from the local datastore (via the self-remote) into the S3 datastore:
proxmox-backup-manager sync-job create offsite-sync \
--remote self \
--remote-store local-pbs \
--store s3-offsite \
--schedule 'daily' \
--remove-vanished false
- --remote self references the loopback remote you just created.
- --remote-store local-pbs is the source datastore.
- --store s3-offsite is the target (S3) datastore.
- --remove-vanished false is a ransomware safety measure: if an attacker deletes snapshots on your local datastore, the sync job won't propagate those deletions to S3. Manage retention directly on the S3 datastore with a separate prune job.

You can also configure all of this from the web UI under Datastore → s3-offsite → Sync Jobs → Add after the self-remote is in place — often the faster path for first-time setup.
PBS supports client-side encryption. Chunks are encrypted on the PVE host before being sent to PBS — your S3 provider only ever sees ciphertext. This holds whether or not the bucket is publicly accessible: correctly configured, a compromised bucket leaks nothing useful.
On each PVE node:
proxmox-backup-client key create /etc/pve/priv/pbs-encryption.key
Reference the key in your storage config in PVE (Datacenter → Storage → your PBS storage → Encryption Key). All subsequent backups are encrypted.
⚠️ Back up the encryption key separately — not on PBS, not in the S3 bucket it protects. If you lose the key, every backup in S3 is unrecoverable. Print the paper-key version and store it in a safe, or keep it in a password manager that is not itself backed up to the same PBS.
Use the master-key feature (--master-pubkey-file) to allow recovery of individual backup keys from a master keypair. The Proxmox Backup Client documentation covers the master-key workflow in detail.
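The idea behind the master key can be sketched with plain openssl. This shows only the concept, not PBS's key format or workflow (see the Proxmox Backup Client manual for that): a per-backup key is encrypted to an RSA public key, and only the offline private half can recover it.

```shell
# Concept sketch: wrap a per-backup session key with an RSA master public
# key; only the (offline) master private key can unwrap it
openssl genrsa -out /tmp/master.key 2048 2>/dev/null
openssl rsa -in /tmp/master.key -pubout -out /tmp/master.pub 2>/dev/null
printf 'per-backup-session-key' > /tmp/session.key
openssl pkeyutl -encrypt -pubin -inkey /tmp/master.pub \
  -in /tmp/session.key -out /tmp/session.key.enc
openssl pkeyutl -decrypt -inkey /tmp/master.key -in /tmp/session.key.enc
# prints: per-backup-session-key
```

The practical upshot: store the master private key offline, and a lost per-backup key stops being a total-loss event.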
GC on an S3-backed datastore issues significantly more API requests than GC on local storage. Schedule it less frequently than you would locally — weekly is reasonable for most workloads, not daily.
proxmox-backup-manager datastore update s3-offsite --gc-schedule 'Sun 04:00'
Verification jobs read chunks back and recompute their hashes. On S3 this means downloading chunks — egress cost applies unless your provider offers zero-egress. Configure verify jobs from the web UI under Datastore → Verify Jobs → Add with a conservative schedule (monthly is a reasonable starting point for S3 datastores). Enable the "skip verified" option with a 30-day window so verification is incremental rather than full.
Manual verification is also possible from the CLI:
proxmox-backup-manager verify s3-offsite --ignore-verified true
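Per chunk, a verify job is essentially a re-read and re-hash against the stored chunk ID. The same idea in miniature with coreutils:

```shell
# Store data alongside its hash, then later re-read and confirm the hash
# still matches — the per-chunk essence of a verify job
printf 'chunk payload' > /tmp/chunk
sha256sum /tmp/chunk > /tmp/chunk.sha256
sha256sum -c /tmp/chunk.sha256
# prints: /tmp/chunk: OK
```

On an S3 datastore every such re-read is a download, which is why the schedule and the "skip verified" window matter.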
A backup you have not restored is a backup you do not have. Before relying on the setup, restore at least one full VM and a handful of individual files from the S3 datastore and confirm they come back intact.
Do this after the initial setup, after any PBS upgrade, and on a rotating sample of snapshots at least monthly.
HummingTribe S3 runs on Garage (S3-compatible) from our Hetzner facility in Germany. All storage is in the EU, zero egress fees, GDPR-compliant by default.
Values you'll use in the PBS S3 endpoint config:
| Field | Value |
|---|---|
| Endpoint | storage.hummingtribe.com |
| Region | (see your account dashboard) |
| Path style | true |
| Access key / Secret key | from your S3 console |
Create the endpoint:
proxmox-backup-manager s3 endpoint create hummingtribe \
--access-key 'YOUR_HT_ACCESS_KEY' \
--secret-key 'YOUR_HT_SECRET_KEY' \
--endpoint 'storage.hummingtribe.com' \
--region 'YOUR_REGION' \
--path-style true
Then create the datastore against your HummingTribe bucket:
proxmox-backup-manager datastore create ht-s3-offsite \
/mnt/datastore/ht-s3-cache \
--backend type=s3,client=hummingtribe,bucket=your-bucket-name
Why this is a fit for PBS offsite: zero egress means restore and verification operations don't incur surprise costs. EU-only data residency satisfies GDPR without a DPA negotiation. Flat monthly pricing removes the API-request cost variable that hurts PBS deployments on hyperscaler object storage.
certificate verify failed on endpoint test. Self-signed or private CA cert. Add --fingerprint to the endpoint config with the SHA-256 fingerprint.
Access Denied on datastore creation. Access key missing s3:PutObject, s3:DeleteObject, or s3:ListBucket on the bucket. On AWS IAM, the policy needs both arn:aws:s3:::bucket-name and arn:aws:s3:::bucket-name/* resources.
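A minimal AWS-style policy granting exactly what PBS needs might look like the following; the bucket name is a placeholder, and other providers' IAM dialects may name the actions differently:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::bucket-name"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::bucket-name/*"
    }
  ]
}
```

Note the split: ListBucket applies to the bucket ARN, while the object actions apply to objects under it, which is why both resources are required.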
Region errors on Cloudflare R2 or similar. Set --region auto — R2 does not validate the region name but requires a non-empty value.
Datastore creation fails with "path already a datastore". Pick a cache path that is not already a PBS datastore. The cache cannot be nested inside another datastore directory.
Migrating to a new PBS host. On the new host, recreate the S3 endpoint config identically, then create the datastore with the same datastore name and both --reuse-datastore true and --overwrite-in-use true. Never run two PBS instances against the same S3 datastore simultaneously — use the overwrite-in-use flag only when the original host is retired.
Running out of space on S3 mid-write. When an upload fails for lack of space, the automatic cleanup can fail too, leaving partial objects behind. Manually remove stray objects for the affected snapshot in the S3 console, then run an S3 refresh on the datastore (UI: Datastore → Refresh from S3, or via CLI).
If you're evaluating providers for PBS offsite, the three variables that matter are: data residency (EU if you need GDPR), egress pricing (zero-egress beats per-GB charges for any verification workload), and API request pricing (matters for GC frequency). HummingTribe S3 addresses all three, with flat per-TB pricing and no egress charges, hosted in Germany.