[{"data":1,"prerenderedAt":3003},["ShallowReactive",2],{"docs-truenas-s3-offsite":3,"docs-related-truenas-s3-offsite":830},{"id":4,"title":5,"body":6,"date":812,"description":813,"extension":814,"meta":815,"navigation":816,"path":817,"seo":818,"sitemap":819,"stem":820,"tags":821,"tool":822,"__hash__":829},"docs/docs/truenas-s3-offsite.md","TrueNAS offsite to S3 — full Cloud Sync setup guide",{"type":7,"value":8,"toc":791},"minimark",[9,22,28,33,69,73,80,85,91,94,98,101,128,131,135,149,152,191,198,236,246,251,255,264,269,327,331,355,361,365,379,382,386,392,455,464,468,479,482,486,489,513,520,524,527,532,569,576,579,583,586,589,654,657,663,667,680,695,707,725,750,756,766,770],[10,11,12,13,17,18,21],"p",{},"TrueNAS ships with two cloud backup features: ",[14,15,16],"strong",{},"Cloud Sync Tasks"," (rclone-based, works with any S3-compatible provider) and ",[14,19,20],{},"TrueCloud Backup"," (restic-based, Storj-only). If your offsite target is Amazon S3, Backblaze B2, HummingTribe, Wasabi, or any other S3-compatible object store, Cloud Sync Tasks is the only option — TrueCloud Backup doesn't support non-Storj endpoints. This guide covers the full Cloud Sync workflow for both TrueNAS Community Edition (25.04 Fangtooth, 25.10 Goldeye) and the legacy TrueNAS 13.0 CORE.",[10,23,24,27],{},[14,25,26],{},"What you'll configure:"," a cloud credential for your S3 provider, a scheduled sync task that pushes one or more datasets to a bucket, encryption, exclude rules, and a tested restore path.",[29,30,32],"h2",{"id":31},"prerequisites","Prerequisites",[34,35,36,40,59,66],"ul",{},[37,38,39],"li",{},"TrueNAS Community Edition 25.04 or newer (Fangtooth or Goldeye), or TrueNAS 13.0 CORE. UI paths below match Community Edition. 
CORE differences are noted inline.",[37,41,42,43,47,48,47,51,54,55,58],{},"An S3-compatible object store with a pre-created bucket and an access key pair with ",[44,45,46],"code",{},"GetObject",", ",[44,49,50],{},"PutObject",[44,52,53],{},"ListBucket",", and ",[44,56,57],{},"DeleteObject"," permissions on that bucket.",[37,60,61,62,65],{},"A TrueNAS administrator account with ",[14,63,64],{},"Full Administrative Access"," (shares admin and other restricted roles cannot configure cloud tasks in 25.04 and later).",[37,67,68],{},"Outbound HTTPS to the S3 endpoint. Firewall rules blocking port 443 egress will break the sync.",[29,70,72],{"id":71},"cloud-sync-tasks-vs-truecloud-backup-which-applies-here","Cloud Sync Tasks vs TrueCloud Backup — which applies here",[10,74,75,76,79],{},"TrueNAS 24.10 Electric Eel introduced ",[14,77,78],{},"TrueCloud Backup Tasks",", a restic-based feature with deduplication, snapshot-style versioning, and per-snapshot restores. It is technically superior to Cloud Sync for backup use cases — but it only supports Storj iX as a target. Community forum threads from 2025 confirm that iXsystems has no announced roadmap for TrueCloud Backup on other S3 providers.",[10,81,82,84],{},[14,83,16],{},", the older feature, uses rclone under the hood. It supports every major S3 provider via the \"Amazon S3\" provider type (which accepts custom endpoints for S3-compatible stores) and transfers data as individual files rather than restic chunks. No deduplication, no snapshot versioning — but it works universally.",[86,87,88],"blockquote",{},[10,89,90],{},"For offsite backup to anything other than Storj, use Cloud Sync Tasks. 
The rest of this guide is Cloud Sync only.",[10,92,93],{},"If you specifically want restic-based backup to a non-Storj S3 target, your options are: use Storj with TrueCloud Backup, run restic manually via a cron task on the TrueNAS shell (advanced, not officially supported), or use a separate restic host with the TrueNAS dataset mounted via NFS. None of these are covered here.",[29,95,97],{"id":96},"step-1-create-the-s3-bucket-and-credentials","Step 1 — Create the S3 bucket and credentials",[10,99,100],{},"At your S3 provider:",[34,102,103,110,117],{},[37,104,105,106,109],{},"Create a ",[14,107,108],{},"bucket"," dedicated to TrueNAS. One bucket per TrueNAS host is the simplest model — avoid sharing buckets between systems.",[37,111,112,113,116],{},"Create an ",[14,114,115],{},"access key pair"," with read/write/list/delete permissions scoped to that bucket only. Save the secret key; most providers show it once.",[37,118,119,120,123,124,127],{},"Note the ",[14,121,122],{},"endpoint URL"," and ",[14,125,126],{},"region",".",[10,129,130],{},"If your provider supports object versioning, enable it on the bucket. Cloud Sync's SYNC mode deletes remote files when local files are deleted — versioning gives you a recovery window against accidental deletion on the TrueNAS side and against compromised TrueNAS credentials.",[29,132,134],{"id":133},"step-2-add-the-cloud-credential-in-truenas","Step 2 — Add the cloud credential in TrueNAS",[10,136,137,138,141,142,145,146,127],{},"Navigate to ",[14,139,140],{},"Credentials → Backup Credentials → Cloud Credentials"," and click ",[14,143,144],{},"Add",". On TrueNAS 13.0 CORE, the path is ",[14,147,148],{},"System → Cloud Credentials",[10,150,151],{},"Fill in the Add Cloud Credentials form:",[34,153,154,163,173,179,185],{},[37,155,156,159,160],{},[14,157,158],{},"Name:"," descriptive, e.g. 
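Versioning can be enabled from any machine with the AWS CLI pointed at your provider's endpoint. A sketch, assuming the AWS CLI is installed and configured with the key pair created above; the endpoint and bucket name are placeholders, and some S3-compatible stores don't implement versioning at all (the second call tells you):

```bash
# Enable object versioning on the backup bucket (endpoint and bucket name
# are placeholders; substitute your provider's values).
aws s3api put-bucket-versioning \
  --endpoint-url https://storage.hummingtribe.com \
  --bucket my-truenas-backup \
  --versioning-configuration Status=Enabled

# Confirm it took effect; the response shows "Enabled" once versioning is on.
aws s3api get-bucket-versioning \
  --endpoint-url https://storage.hummingtribe.com \
  --bucket my-truenas-backup
```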
",[44,161,162],{},"hummingtribe-backup",[37,164,165,168,169,172],{},[14,166,167],{},"Provider:"," ",[44,170,171],{},"Amazon S3"," — this is the correct provider type for any S3-compatible store, not just AWS",[37,174,175,178],{},[14,176,177],{},"Authentication Mode:"," default (Access Key + Secret Key)",[37,180,181,184],{},[14,182,183],{},"Access Key ID:"," your S3 access key",[37,186,187,190],{},[14,188,189],{},"Secret Access Key:"," your S3 secret",[10,192,193,194,197],{},"For non-AWS providers, expand ",[14,195,196],{},"Advanced"," and set:",[34,199,200,210,224,230],{},[37,201,202,205,206,209],{},[14,203,204],{},"Endpoint URL:"," the full HTTPS URL of your S3 endpoint (e.g. ",[44,207,208],{},"https://storage.hummingtribe.com",")",[37,211,212,215,216,219,220,223],{},[14,213,214],{},"Region:"," the region identifier for your provider. If unknown, check provider documentation. For Cloudflare R2 set ",[44,217,218],{},"auto","; for MinIO single-region deployments set ",[44,221,222],{},"us-east-1"," (the default); for Garage-based providers (like HummingTribe), use the value shown in your dashboard.",[37,225,226,229],{},[14,227,228],{},"Disable Endpoint Region:"," leave off unless instructed by your provider",[37,231,232,235],{},[14,233,234],{},"Signature Version:"," leave at default (v4) unless your provider requires v2",[10,237,238,239,242,243,245],{},"Click ",[14,240,241],{},"Verify Credential",". TrueNAS makes a test API call to list buckets. If this fails, the most common causes are: wrong endpoint URL, wrong region, or the access key lacks ",[44,244,53],{}," on at least one bucket. Do not proceed until verification passes.",[10,247,238,248,127],{},[14,249,250],{},"Save",[29,252,254],{"id":253},"step-3-create-the-cloud-sync-task","Step 3 — Create the Cloud Sync Task",[10,256,137,257,260,261,263],{},[14,258,259],{},"Data Protection → Cloud Sync Tasks"," widget → ",[14,262,144],{},". 
## Step 3 — Create the Cloud Sync Task

Navigate to **Data Protection → Cloud Sync Tasks** widget → **Add**. The Cloud Sync Task Wizard opens in Community Edition; CORE presents a single form.

### What to back up

- **Description:** descriptive name for the task (e.g. `Daily offsite to HummingTribe`)
- **Direction:** `PUSH` (TrueNAS → cloud). Use `PULL` only for restore operations.
- **Transfer Mode:** see the next section — this choice matters.
- **Directory/Files:** browse to the dataset or folder to back up. You can select a parent dataset and all children are included unless excluded.
- **Credential:** select the credential you created in Step 2.
- **Bucket:** pick from the dropdown; TrueNAS lists buckets accessible to the access key.
- **Folder:** optional path inside the bucket (e.g. `truenas-pool/media`). Use this to organize if backing up multiple datasets to one bucket.

### Transfer mode — the decision that matters

- **SYNC** mirrors the source exactly: if a file is deleted locally, it's deleted remotely on the next run. **Risk:** if a TrueNAS compromise deletes files locally, the next sync propagates the deletion. Mitigate with bucket versioning enabled.
- **COPY** copies new and changed files, but never deletes on the remote. Safer against ransomware, but the bucket grows over time and needs manual cleanup.
- **MOVE** copies to remote, then deletes from source. Use for one-way data migration, not backup.

**For most backup scenarios, COPY with bucket-side lifecycle rules is the safest pattern.** If you need exact mirroring, use SYNC with bucket versioning enabled.
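Under the hood the three modes correspond to rclone verbs. An illustrative sketch (the `remote:` name and paths are placeholders, not something TrueNAS asks you to run):

```bash
rclone copy /mnt/tank/media remote:bucket/media  # COPY: adds and updates, never deletes remotely
rclone sync /mnt/tank/media remote:bucket/media  # SYNC: also deletes remote files that vanished locally
rclone move /mnt/tank/media remote:bucket/media  # MOVE: uploads, then deletes the local copies
```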
### Schedule and retention

- **Schedule:** set a preset (hourly/daily/weekly) or a custom cron expression. Daily at off-peak hours (02:00–04:00) is a sensible default for most deployments.
- **Enabled:** leave checked.

TrueNAS Cloud Sync has no built-in retention policy — versioning and lifecycle rules happen on the S3 bucket side. Configure your S3 provider's lifecycle policy to: expire old versions after N days, transition to cold storage after M days, or delete non-current versions beyond a retention period. This is where the rclone-based Cloud Sync approach shows its limits vs restic — you're managing retention in two places.
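As one example of bucket-side retention, old object versions can be expired with the AWS CLI. Lifecycle rule support varies by provider, and the endpoint, bucket, and 90-day window below are placeholders:

```bash
# Keep noncurrent (overwritten or deleted) versions for 90 days, then let
# the bucket clean them up automatically.
aws s3api put-bucket-lifecycle-configuration \
  --endpoint-url https://storage.hummingtribe.com \
  --bucket my-truenas-backup \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "expire-old-versions",
      "Status": "Enabled",
      "Filter": {},
      "NoncurrentVersionExpiration": { "NoncurrentDays": 90 }
    }]
  }'
```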
Common entries: ",[44,417,418],{},"*.tmp",[44,420,421],{},".DS_Store",[44,423,424],{},"Thumbs.db",[44,426,427],{},"@eaDir"," (Synology artifact), ",[44,430,431],{},".snapshots"," (ZFS snapshot dirs if you don't want them backed up).",[37,434,435,438,439,442],{},[14,436,437],{},"Pre-script / Post-script:"," shell scripts to run before/after the task. Common uses: ",[44,440,441],{},"zfs snapshot pool/dataset@presync"," as a pre-script, email notifications as a post-script.",[37,444,445,448],{},[14,446,447],{},"Bandwidth Limit:"," throttle upload speed to avoid saturating your connection. Schedule-aware limits (faster at night, slower during business hours) are available.",[37,450,451,454],{},[14,452,453],{},"Transfers:"," concurrent file transfers. Default of 4 is sensible; increase to 8–16 only if your upstream bandwidth is underutilized.",[10,456,238,457,459,460,463],{},[14,458,250],{},". The task appears in the widget with an ",[14,461,462],{},"enabled"," toggle.",[29,465,467],{"id":466},"step-4-run-a-test-sync","Step 4 — Run a test sync",[10,469,470,471,474,475,478],{},"Click the run arrow (▶) next to the task. The task enters ",[14,472,473],{},"RUNNING"," state. Watch the task log under ",[14,476,477],{},"Jobs"," (top toolbar, briefcase icon) to verify there are no errors.",[10,480,481],{},"First runs for large datasets will take a long time — plan for the initial seed to run outside business hours. Subsequent runs are incremental and much faster.",[29,483,485],{"id":484},"encryption-three-layers-to-consider","Encryption — three layers to consider",[10,487,488],{},"TrueNAS Cloud Sync to S3 has three independent encryption layers. Understand what each protects:",[490,491,492,498,507],"ol",{},[37,493,494,497],{},[14,495,496],{},"ZFS dataset encryption"," (TrueNAS side, at rest on local disks). Not related to cloud backup; protects the data on your TrueNAS if a drive walks away. 
## Step 4 — Run a test sync

Click the run arrow (▶) next to the task. The task enters **RUNNING** state. Watch the task log under **Jobs** (top toolbar, briefcase icon) to verify there are no errors.

First runs for large datasets will take a long time — plan for the initial seed to run outside business hours. Subsequent runs are incremental and much faster.

## Encryption — three layers to consider

TrueNAS Cloud Sync to S3 has three independent encryption layers. Understand what each protects:

1. **ZFS dataset encryption** (TrueNAS side, at rest on local disks). Not related to cloud backup; protects the data on your TrueNAS if a drive walks away. Cloud Sync reads decrypted content from the source dataset and sends it to the cloud.
2. **Remote Encryption (rclone crypt)** in the Cloud Sync Task advanced options. Encrypts file content (and optionally filenames) before upload. Your S3 provider stores ciphertext only. **You manage the keys.**
3. **S3 server-side encryption (SSE)**. Provider encrypts data at rest; keys managed by provider. Protects against provider-side storage compromise but not against compromise of your S3 credentials.

For offsite backups, enable **Remote Encryption** in the task — this gives you provider-independent confidentiality. SSE is a defense-in-depth layer, not a substitute. Do not rely only on ZFS dataset encryption; Cloud Sync reads decrypted content.

## Step 5 — Test a restore before you need one

Do not skip this. A backup you have not restored from is hypothetical.

Create a **PULL** Cloud Sync Task:

- **Direction:** `PULL`
- **Credential and Bucket:** same as your PUSH task
- **Folder:** the same remote path
- **Directory/Files:** a new empty dataset on TrueNAS (e.g. `tank/restore-test`) — **never restore over your production dataset**
- **Transfer Mode:** `COPY`

Run the PULL task. Verify the restored files match the originals in your primary dataset (`diff -r` over SSH, or spot-check file counts and a handful of file hashes). Then delete the restore-test dataset.

Do this after the initial setup, after any TrueNAS major version upgrade, and at least quarterly on a rotating sample.
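The hash comparison can be scripted. A sketch assuming GNU coreutils (`sha256sum`, as on Community Edition; CORE ships `sha256` instead); the paths in the usage comment are placeholders:

```bash
# Compare two directory trees by content hash. Prints "trees match" or
# "MISMATCH". Usage: verify_restore /mnt/tank/media /mnt/tank/restore-test/media
verify_restore() {
  src=$1; dst=$2
  a=$(mktemp); b=$(mktemp)
  # Hash every file relative to each tree root, sort for a stable comparison.
  (cd "$src" && find . -type f -exec sha256sum {} + | sort) > "$a"
  (cd "$dst" && find . -type f -exec sha256sum {} + | sort) > "$b"
  if diff "$a" "$b" > /dev/null; then echo "trees match"; else echo "MISMATCH"; fi
  rm -f "$a" "$b"
}
```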
## HummingTribe S3 configuration

HummingTribe S3 runs on Garage (S3-compatible) from our Hetzner facility in Germany. All storage is in the EU, zero egress fees, GDPR-compliant by default — which matters for Cloud Sync restore operations and any future disaster recovery from offsite.

Values for the TrueNAS cloud credential:

| Field | Value |
| --- | --- |
| Provider | `Amazon S3` |
| Endpoint URL | `https://storage.hummingtribe.com` |
| Region | (see your account dashboard) |
| Signature Version | `v4` (default) |
| Access Key / Secret Key | from your S3 console |

Path-style vs vhost-style addressing is handled by rclone automatically for S3-compatible endpoints when you provide a custom endpoint URL — no separate setting is required on the TrueNAS side.

**Why this is a fit for TrueNAS offsite:** zero egress means your quarterly test restores don't incur surprise costs. EU-only data residency satisfies GDPR for media and SMB datasets without negotiating a DPA with a US provider. Flat per-TB pricing removes the per-request cost uncertainty that hurts Cloud Sync on hyperscalers (every small file = at least one PUT request, which adds up fast on photo libraries or source code).
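The per-request point is easy to quantify. A back-of-envelope sketch with illustrative numbers (not a quote from any provider):

```bash
# Initial seed of many small files: at least one PUT per file on a
# per-request price sheet. 500,000 files at $0.005 per 1,000 PUTs
# (illustrative numbers only).
FILES=500000
PRICE_PER_1000_PUTS=0.005
awk -v f="$FILES" -v p="$PRICE_PER_1000_PUTS" \
  'BEGIN { printf "seed PUT cost: $%.2f\n", f / 1000 * p }'
# On a flat per-TB plan the same seed adds nothing in request charges.
```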
## Troubleshooting

**Credential verification fails with `RequestTimeTooSkewed`.** The TrueNAS system clock has drifted. Check **System Settings → General → NTP Servers** and force a time sync.

**Credential verification fails with `Access Denied`.** Access key lacks `ListBucket` or `ListAllMyBuckets`. For minimum-privilege keys, create the credential with a specific bucket name rather than relying on list-all-buckets; some providers let you skip the list step with explicit bucket configuration.

**Credential verification fails with `SignatureDoesNotMatch`.** Wrong secret key (most common), clock skew, or an endpoint URL with a trailing slash or path when it should be bare. Try `https://endpoint.example.com` with no trailing slash.

**Task fails with `context deadline exceeded` on large files.** rclone transfer timeout. In advanced options, decrease **Transfers** concurrency so each large file gets more of the link, raise or remove the **Bandwidth Limit**, or split the task by subfolder.

**Task succeeds but files are missing in the bucket.** Check the **Exclude** patterns — a too-broad glob (`*.db`, `*tmp*`) can match more than intended. Trigger the task from the shell (`midclt call cloudsync.sync TASK_ID`) and watch the job log to see which files rclone is skipping.
Run with ",[44,743,744],{},"--verbose"," via shell (",[44,747,748],{},"midclt call cloudsync.run TASK_ID",") to see what rclone is skipping.",[10,751,752,755],{},[14,753,754],{},"Task runs fine manually but fails on schedule."," Most often a permissions issue with the admin account owning the task. In 25.04 and later, recreate the task under a Full Administrative Access account.",[10,757,758,761,762,765],{},[14,759,760],{},"Restored files have wrong ownership/permissions."," rclone preserves file content, not POSIX ACLs. For datasets with complex ACLs, Cloud Sync is not sufficient — combine with ",[44,763,764],{},"zfs send"," replication to a second TrueNAS for true data-fidelity backup.",[29,767,769],{"id":768},"what-to-do-next","What to do next",[10,771,772,773,776,777,780,781,784,785,790],{},"If you're evaluating providers for TrueNAS offsite, the three variables that matter most for Cloud Sync workloads are: ",[14,774,775],{},"per-request pricing"," (matters for datasets with many small files), ",[14,778,779],{},"egress cost"," (matters for test restores), and ",[14,782,783],{},"data residency"," (matters for GDPR if you handle EU personal data). 
",[786,787,789],"a",{"href":788},"/s3#pricing","HummingTribe S3"," covers all three, with flat per-TB pricing, zero egress, and EU-only hosting in Germany.",{"title":792,"searchDepth":793,"depth":793,"links":794},"",2,[795,796,797,798,799,806,807,808,809,810,811],{"id":31,"depth":793,"text":32},{"id":71,"depth":793,"text":72},{"id":96,"depth":793,"text":97},{"id":133,"depth":793,"text":134},{"id":253,"depth":793,"text":254,"children":800},[801,803,804,805],{"id":267,"depth":802,"text":268},3,{"id":329,"depth":802,"text":330},{"id":363,"depth":802,"text":364},{"id":384,"depth":802,"text":385},{"id":466,"depth":793,"text":467},{"id":484,"depth":793,"text":485},{"id":522,"depth":793,"text":523},{"id":581,"depth":793,"text":582},{"id":665,"depth":793,"text":666},{"id":768,"depth":793,"text":769},"2026-04-22","Configure TrueNAS Community Edition to back up datasets to S3-compatible object storage using Cloud Sync Tasks. Covers encryption, scheduling, and retention.","md",{},true,"/docs/truenas-s3-offsite",{"title":5,"description":813},{"loc":817},"docs/truenas-s3-offsite",[822,823,824,825,826,827,828],"truenas","scale","core","s3","backup","offsite","setup-guide","7eAThfAdw19k9VBaqioF6TPWZK3fjaFA9fkyMZIzQNo",[831,1295,1767],{"id":832,"title":833,"body":834,"date":812,"description":1283,"extension":814,"meta":1284,"navigation":816,"path":1285,"seo":1286,"sitemap":1287,"stem":1288,"tags":1289,"tool":1291,"__hash__":1294},"docs/docs/cyberduck-eu-s3-setup.md","Cyberduck — S3 Browser and Ad-Hoc Transfers (Windows, macOS)",{"type":7,"value":835,"toc":1272},[836,843,847,853,875,878,882,898,902,921,927,930,990,996,1012,1015,1019,1033,1039,1043,1059,1069,1075,1091,1095,1098,1134,1144,1148,1151,1158,1174,1180,1184,1191,1198,1222,1225,1231,1234,1238,1260,1267],[10,837,838,839,842],{},"GUI-based S3 client for Windows and macOS. Best for browsing your bucket, ad-hoc uploads and downloads, manual restores, and inspecting backup contents. 
",[14,840,841],{},"Not a scheduled backup tool"," — for automated backups, use restic, rclone, or Duplicati. Cyberduck complements those tools rather than replacing them.",[265,844,846],{"id":845},"_1-install-cyberduck","1. Install Cyberduck",[10,848,849,850,127],{},"Download the installer from ",[14,851,852],{},"cyberduck.io",[34,854,855,865],{},[37,856,857,860,861,864],{},[14,858,859],{},"Windows:"," run the ",[44,862,863],{},".exe"," installer.",[37,866,867,870,871,874],{},[14,868,869],{},"macOS:"," mount the ",[44,872,873],{},".zip"," and drag Cyberduck to Applications.",[10,876,877],{},"Cyberduck is donationware — free to use, with a nag screen on launch unless you purchase a registration key from the Mac App Store or Microsoft Store.",[265,879,881],{"id":880},"_2-get-your-s3-credentials","2. Get your S3 credentials",[10,883,884,885,889,890,893,894,897],{},"Log in to your ",[786,886,888],{"href":887},"/dashboard","HummingTribe dashboard"," → S3 Storage tab. Copy your ",[14,891,892],{},"Access Key ID"," and reveal your ",[14,895,896],{},"Secret Access Key"," (shown once — save it now). Note your bucket name.",[265,899,901],{"id":900},"_3-create-a-new-bookmark","3. 
Create a new bookmark",[10,903,904,905,908,909,912,913,916,917,920],{},"Open Cyberduck → ",[14,906,907],{},"Bookmark"," menu → ",[14,910,911],{},"New Bookmark"," (or press ",[44,914,915],{},"Cmd+Shift+B"," / ",[44,918,919],{},"Ctrl+Shift+B",").",[10,922,923,924,127],{},"In the bookmark editor, set the connection type at the top to ",[14,925,926],{},"S3 (HTTPS)",[10,928,929],{},"Fill in the fields:",[590,931,932,940],{},[593,933,934],{},[596,935,936,938],{},[599,937,601],{},[599,939,604],{},[606,941,942,952,962,972,979],{},[596,943,944,947],{},[611,945,946],{},"Nickname",[611,948,949],{},[44,950,951],{},"HummingTribe",[596,953,954,957],{},[611,955,956],{},"Server",[611,958,959],{},[44,960,961],{},"storage.hummingtribe.com",[596,963,964,967],{},[611,965,966],{},"Port",[611,968,969],{},[44,970,971],{},"443",[596,973,974,976],{},[611,975,892],{},[611,977,978],{},"your Access Key ID",[596,980,981,984],{},[611,982,983],{},"Path",[611,985,986,987,209],{},"your bucket name (e.g. ",[44,988,989],{},"my-bucket",[10,991,238,992,995],{},[14,993,994],{},"More Options"," to expand advanced settings:",[34,997,998,1006],{},[37,999,1000,168,1003],{},[14,1001,1002],{},"Transfer Files:",[44,1004,1005],{},"Use browser connection",[37,1007,1008,1011],{},[14,1009,1010],{},"Connect Mode:"," leave default",[10,1013,1014],{},"Close the bookmark editor — Cyberduck saves automatically.",[265,1016,1018],{"id":1017},"_4-connect-and-authenticate","4. Connect and authenticate",[10,1020,1021,1022,1024,1025,1028,1029,1032],{},"Double-click the bookmark in the main browser window. Cyberduck prompts for your ",[14,1023,896],{}," — paste it and tick ",[14,1026,1027],{},"Add to Keychain"," (macOS) or ",[14,1030,1031],{},"Save Password"," (Windows) so you don't have to re-enter it.",[10,1034,238,1035,1038],{},[14,1036,1037],{},"Login",". Cyberduck connects to HummingTribe and shows the contents of your bucket. 
### 5. Upload and download files

**Upload:** drag files or folders from Finder/Explorer into the Cyberduck window. Transfers run in the **Transfers** window (`Cmd+T` / `Ctrl+T`) with per-file progress.

**Download:** drag files from Cyberduck to your desktop, or right-click → **Download To...** to pick a destination.

**Resume interrupted transfers:** Cyberduck automatically detects partial transfers and offers to resume on next connect.

For large multi-gigabyte uploads, Cyberduck uses S3 multipart uploads automatically. The default chunk size is 10 MB — adjust under **Preferences** → **Transfers** → **General** → **Multipart download/upload** if you need to.
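If you later want the same transfers scriptable, Cyberduck ships a command-line sibling, `duck`. A sketch, assuming `duck` is installed and using placeholder key, bucket, and file names; it prompts for the secret key interactively:

```bash
# Upload a file into the bucket (placeholder names throughout).
duck --username YOUR_ACCESS_KEY_ID \
     --upload s3://storage.hummingtribe.com/my-bucket/reports/report.pdf report.pdf

# List the bucket contents.
duck --username YOUR_ACCESS_KEY_ID \
     --list s3://storage.hummingtribe.com/my-bucket/
```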
### 6. Browse and inspect backups

Cyberduck is the easiest way to verify what your backup tools have written:

- **Path navigation:** click into folders to drill down. Use the breadcrumb bar at the top to jump back up.
- **File info:** right-click any file → **Info** to see size, modification date, storage class, and S3 metadata.
- **Search:** `Cmd+F` / `Ctrl+F` filters the current folder by name.
- **Sort:** click column headers (Filename, Size, Modified) to sort.

This is particularly useful for confirming restic snapshot directories, rclone sync results, or Duplicati `.dblock` and `.dindex` files are present in the bucket.

### 7. Restore a single file

Restoring an individual file from a structured backup (restic, Duplicati) requires the original tool — those tools store data in their own internal format and cannot be browsed file-by-file in Cyberduck.

Cyberduck restores work for files uploaded **directly** as files (e.g. via rclone copy, or manual uploads). To restore:

1. Navigate to the file in the bucket.
2. Right-click → **Download To...**
3. Choose a local destination → **Choose**.

For an entire folder, right-click the folder → **Download To...** — Cyberduck downloads the folder tree recursively.
### 8. Optional — client-side encryption with Cryptomator

Cyberduck integrates with **Cryptomator** for transparent client-side encryption. Files are encrypted on your machine before upload — HummingTribe never sees the plaintext.

Install Cryptomator from **cryptomator.org**. Then in Cyberduck:

1. Connect to your bucket (step 4).
2. Right-click in the browser pane → **New Encrypted Vault**.
3. Choose a vault name (e.g. `vault`) and a strong **passphrase**.
4. Cyberduck creates the Cryptomator vault structure in your bucket.

After creation, Cyberduck shows a virtual unlocked vault. Files dragged in are encrypted before upload; files dragged out are decrypted on download. The vault passphrase is required on every reconnection.

**If you lose the vault passphrase, the files are unrecoverable.** Cryptomator has no recovery mechanism. Store the passphrase in a password manager.

This is a useful pattern for sensitive ad-hoc files — but for full automated backups, use restic or Duplicati's built-in encryption instead.

### 9. Sync (one-off, manual)

Cyberduck has a **Synchronize** feature (right-click bookmark → **Synchronize**) that compares a local folder to a remote folder and offers three modes: **Download** (remote → local), **Upload** (local → remote), or **Mirror** (both directions).

This is useful for occasional one-off sync operations, but **it is not scheduled and not incremental** — every sync rescans the entire folder tree. For automated, deduplicated, scheduled sync, use rclone instead.

Manage your bucket and credentials from your [HummingTribe dashboard](/dashboard).
---

# Duplicati — Automated Encrypted Backups (Windows, macOS, Linux)

Free, open-source, cross-platform backup with a web-based GUI. Built-in AES-256 encryption, block-level deduplication, and scheduling. Runs on Windows, macOS, and Linux.
### 1. Install Duplicati

Download the latest installer for your platform from **duplicati.com/download**.

- **Windows:** run the `.msi` installer. Duplicati installs as a tray application and a background service.
- **macOS:** mount the `.dmg` and drag Duplicati to Applications.
- **Linux (Debian/Ubuntu):** install the `.deb` package:

```bash
sudo apt install ./duplicati_*.deb
```

On first launch, Duplicati opens the web UI at `http://localhost:8200`. All configuration happens through the browser.

### 2. Get your S3 credentials

Log in to your [HummingTribe dashboard](/dashboard) → S3 Storage tab. Copy your **Access Key ID** and reveal your **Secret Access Key** (shown once — save it now). Note your bucket name.

### 3. Create a new backup job

In the Duplicati web UI, click **Add backup** → **Configure a new backup** → **Next**.

Enter a **Name** for the job (e.g. `Laptop → HummingTribe`) and an optional description.
Configure encryption",[10,1424,1425,1426,1429,1430,1433],{},"On the same page, leave encryption set to ",[14,1427,1428],{},"AES-256 encryption, built in",". Enter a strong ",[14,1431,1432],{},"Passphrase"," and confirm it.",[10,1435,1436,1439],{},[14,1437,1438],{},"If you lose this passphrase, your backups are unrecoverable"," — Duplicati has no password reset. Store it in a password manager.",[10,1441,238,1442,127],{},[14,1443,1407],{},[265,1445,1447],{"id":1446},"_5-configure-the-s3-destination","5. Configure the S3 destination",[10,1449,1450,1451,1454,1455,1458,1459,127],{},"On the ",[14,1452,1453],{},"Destination"," screen, set ",[14,1456,1457],{},"Storage Type"," to ",[14,1460,1461],{},"S3 Compatible",[10,1463,929],{},[590,1465,1466,1474],{},[593,1467,1468],{},[596,1469,1470,1472],{},[599,1471,601],{},[599,1473,604],{},[606,1475,1476,1485,1493,1501,1509,1519,1531,1538,1546],{},[596,1477,1478,1480],{},[611,1479,956],{},[611,1481,1482],{},[44,1483,1484],{},"Custom server URL",[596,1486,1487,1489],{},[611,1488,1484],{},[611,1490,1491],{},[44,1492,961],{},[596,1494,1495,1498],{},[611,1496,1497],{},"Bucket name",[611,1499,1500],{},"your bucket name from the dashboard",[596,1502,1503,1506],{},[611,1504,1505],{},"Bucket create region",[611,1507,1508],{},"leave blank",[596,1510,1511,1514],{},[611,1512,1513],{},"Storage class",[611,1515,1516],{},[44,1517,1518],{},"(Default)",[596,1520,1521,1524],{},[611,1522,1523],{},"Folder path",[611,1525,1526,1527,1530],{},"leave blank (or e.g. ",[44,1528,1529],{},"laptop-backup"," for a subfolder)",[596,1532,1533,1536],{},[611,1534,1535],{},"AWS Access ID",[611,1537,978],{},[596,1539,1540,1543],{},[611,1541,1542],{},"AWS Access Key",[611,1544,1545],{},"your Secret Access Key",[596,1547,1548,1551],{},[611,1549,1550],{},"Client library to use",[611,1552,1553],{},[44,1554,1555],{},"Amazon AWS SDK",[10,1557,238,1558,1561,1562,127],{},[14,1559,1560],{},"Test connection",". 
Duplicati will verify credentials and confirm the bucket is reachable. If prompted to use path-style URLs, accept — HummingTribe requires path-style access. Click ",[14,1563,1407],{},[265,1565,1567],{"id":1566},"_6-select-source-data","6. Select source data",[10,1569,1570],{},"Expand the filesystem tree and tick the folders you want to back up. Typical selections:",[34,1572,1573,1586,1597],{},[37,1574,1575,168,1577,47,1580,47,1583],{},[14,1576,859],{},[44,1578,1579],{},"C:\\Users\\\u003Cname>\\Documents",[44,1581,1582],{},"Desktop",[44,1584,1585],{},"Pictures",[37,1587,1588,168,1590,47,1593,47,1595],{},[14,1589,869],{},[44,1591,1592],{},"/Users/\u003Cname>/Documents",[44,1594,1582],{},[44,1596,1585],{},[37,1598,1599,168,1602],{},[14,1600,1601],{},"Linux:",[44,1603,1604],{},"/home/\u003Cuser>",[10,1606,1607,1608,1611,1612,127],{},"Use the ",[14,1609,1610],{},"Filters"," tab to exclude caches, virtual machines, or large files you don't need backed up. Click ",[14,1613,1407],{},[265,1615,1617],{"id":1616},"_7-set-schedule","7. Set schedule",[10,1619,1620,1621,197],{},"Enable ",[14,1622,1623],{},"Automatically run backups",[34,1625,1626,1632,1640],{},[37,1627,1628,1631],{},[14,1629,1630],{},"Next time:"," today's date and a time after hours (e.g. 02:00)",[37,1633,1634,168,1637],{},[14,1635,1636],{},"Run again every:",[44,1638,1639],{},"1 Days",[37,1641,1642,1645],{},[14,1643,1644],{},"Allowed days:"," all",[10,1647,238,1648,127],{},[14,1649,1407],{},[265,1651,1653],{"id":1652},"_8-set-retention-policy","8. Set retention policy",[10,1655,1656,1657,1660,1661,1664],{},"Under ",[14,1658,1659],{},"Backup retention",", pick a policy. 
",[14,1662,1663],{},"Smart backup retention"," is the sensible default — it keeps one backup per day for the last week, one per week for the last month, and one per month for the last year.",[10,1666,1667,1668,1671],{},"For more control, choose ",[14,1669,1670],{},"Custom backup retention"," and enter a policy string like:\n7D:1D,4W:1W,12M:1M",[10,1673,1674],{},"This reads as: keep one version per day for 7 days, one per week for 4 weeks, one per month for 12 months.",[10,1676,1677,1678,1458,1681,1684,1685,127],{},"Set ",[14,1679,1680],{},"Remote volume size",[44,1682,1683],{},"50 MB"," (default) for most connections. Click ",[14,1686,250],{},[265,1688,1690],{"id":1689},"_9-run-the-first-backup-and-verify","9. Run the first backup and verify",[10,1692,1693,1694,1697],{},"From the backup job's panel, click ",[14,1695,1696],{},"Run now",". The first backup uploads all selected data and will take time proportional to the dataset size and your upload bandwidth. Subsequent backups only upload changed blocks.",[10,1699,1700,1701,1704],{},"After completion, click ",[14,1702,1703],{},"Verify files"," on the job panel. Duplicati downloads a sample of backup volumes and checks their integrity against the local database. Run this periodically — a backup you haven't verified is a backup you don't have.",[265,1706,1708],{"id":1707},"_10-restore-from-backup","10. Restore from backup",[10,1710,238,1711,1714],{},[14,1712,1713],{},"Restore"," in the left sidebar → select the backup job → choose a restore point (date/time) → tick the files or folders to restore.",[10,1716,1717,1718,1721,1722,1725,1726,1728],{},"Choose a restore destination — ",[14,1719,1720],{},"Original location"," (overwrites existing files) or ",[14,1723,1724],{},"Pick location"," (restores to a new folder). 
Click ",[14,1727,1713],{}," and wait for completion.",[10,1730,1731,1732,1081,1734,1737],{},"To restore on a different machine (disaster recovery), install Duplicati, choose ",[14,1733,1713],{},[14,1735,1736],{},"Direct restore from backup files",", enter the same S3 destination settings, and provide your encryption passphrase. Duplicati will rebuild the local database from the remote backup and let you restore any snapshot.",[10,1739,1269,1740,127],{},[786,1741,888],{"href":887},[1743,1744,1745],"style",{},"html pre.shiki code .sScJk, html code.shiki .sScJk{--shiki-default:#6F42C1;--shiki-dark:#B392F0}html pre.shiki code .sZZnC, html code.shiki .sZZnC{--shiki-default:#032F62;--shiki-dark:#9ECBFF}html pre.shiki code .sj4cs, html code.shiki .sj4cs{--shiki-default:#005CC5;--shiki-dark:#79B8FF}html .default .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}html.dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: 
var(--shiki-dark-text-decoration);}",{"title":792,"searchDepth":793,"depth":793,"links":1747},[1748,1749,1750,1751,1752,1753,1754,1755,1756,1757],{"id":1305,"depth":802,"text":1306},{"id":880,"depth":802,"text":881},{"id":1394,"depth":802,"text":1395},{"id":1421,"depth":802,"text":1422},{"id":1446,"depth":802,"text":1447},{"id":1566,"depth":802,"text":1567},{"id":1616,"depth":802,"text":1617},{"id":1652,"depth":802,"text":1653},{"id":1689,"depth":802,"text":1690},{"id":1707,"depth":802,"text":1708},"Set up Duplicati for scheduled, encrypted backups to HummingTribe S3. GUI-based, cross-platform, with built-in AES-256 encryption and retention policies.",{},"/docs/duplicati-eu-s3-setup",{"title":1297,"description":1758},{"loc":1760},"docs/duplicati-eu-s3-setup",[825,826,828,1765],"duplicati","zjrwGDNPVZ1ODbnCXI5olSoyy9c5g4KFeFiLGYcJhhs",{"id":1768,"title":1769,"body":1770,"date":812,"description":2993,"extension":814,"meta":2994,"navigation":816,"path":2995,"seo":2996,"sitemap":2997,"stem":2998,"tags":2999,"tool":3000,"__hash__":3002},"docs/docs/proxmox-backup-server-s3-offsite.md","Proxmox Backup Server offsite to S3 — full setup guide",{"type":7,"value":1771,"toc":2973},[1772,1783,1788,1790,1818,1822,1831,1838,1841,1859,1866,1870,1874,1877,1883,1889,1893,1900,1905,1910,1913,1915,1918,1938,1941,1945,1955,1961,2026,2036,2046,2056,2116,2119,2135,2139,2142,2175,2182,2186,2198,2203,2237,2262,2265,2278,2281,2285,2295,2298,2321,2324,2328,2331,2350,2362,2385,2456,2459,2524,2554,2561,2565,2568,2571,2589,2596,2604,2611,2615,2622,2644,2651,2654,2674,2678,2681,2697,2700,2702,2705,2708,2752,2755,2818,2821,2853,2859,2861,2874,2900,2910,2916,2937,2951,2953,2970],[10,1773,1774,1775,1778,1779,1782],{},"Proxmox Backup Server 4.0 (August 2025) introduced native S3 object storage as a datastore backend. 
This replaces the old pattern of mounting S3 with ",[44,1776,1777],{},"s3fs-fuse"," or running third-party proxies like ",[44,1780,1781],{},"pmoxs3backuproxy"," — approaches Proxmox never officially supported. This guide covers both supported deployment patterns, with exact commands and the caveats that matter for production use.",[10,1784,1785,1787],{},[14,1786,26],{}," An S3 endpoint, an S3-backed datastore with local cache, and — optionally — a sync job that keeps a local datastore and an S3 datastore in step for true 3-2-1 backups.",[29,1789,32],{"id":31},[34,1791,1792,1798,1809,1812,1815],{},[37,1793,1794,1795,127],{},"Proxmox Backup Server 4.0 or newer (4.1.6+ recommended). Check with ",[44,1796,1797],{},"proxmox-backup-manager versions",[37,1799,42,1800,47,1802,47,1804,54,1806,1808],{},[44,1801,46],{},[44,1803,50],{},[44,1805,53],{},[44,1807,57],{}," permissions on that bucket. PBS does not create buckets or manage ACLs.",[37,1810,1811],{},"A dedicated disk, partition, or ZFS dataset for the local cache. Proxmox recommends 64–128 GiB. Do not reuse an existing PBS datastore path.",[37,1813,1814],{},"Outbound HTTPS to the S3 endpoint. Plain HTTP is rejected. Self-signed certificates require the TLS fingerprint in config.",[37,1816,1817],{},"Stable bandwidth to the S3 endpoint. Initial seeding writes a large chunk of your datastore to S3.",[29,1819,1821],{"id":1820},"how-the-s3-backend-actually-works","How the S3 backend actually works",[86,1823,1824],{},[10,1825,1826,1827,1830],{},"⚠️ ",[14,1828,1829],{},"The S3 datastore backend is marked technology preview in PBS 4.1.6."," It works and is reasonable to use for secondary or offsite copies, but run restore tests frequently and watch the Proxmox release notes before trusting it as your only copy.",[10,1832,1833,1834,1837],{},"PBS does not put self-contained snapshots into the bucket. 
It uses the same ",[14,1835,1836],{},"content-addressable chunk store"," model it uses locally: each backup is split into deduplicated, compressed, optionally encrypted chunks identified by hash. Those chunks are written to S3 as individual objects, prefixed by the datastore name. Index files that map chunks back to snapshots are also stored as objects.",[10,1839,1840],{},"Two consequences:",[490,1842,1843,1849],{},[37,1844,1845,1848],{},[14,1846,1847],{},"Dedup and compression still work."," You don't pay the S3 storage cost of full copies.",[37,1850,1851,1854,1855,1858],{},[14,1852,1853],{},"You cannot restore directly from the bucket."," A PBS instance is always required. The good news: if you lose the PBS host, you can point a fresh PBS install at the same bucket with the same datastore name and ",[44,1856,1857],{},"--reuse-datastore true --overwrite-in-use true",", and your backups are recoverable.",[10,1860,1861,1862,1865],{},"A ",[14,1863,1864],{},"local cache"," is mandatory. PBS keeps recently-read chunks and index metadata on disk so garbage collection, verification, and reads don't hit S3 for every operation. Without the cache, cost and latency would both be unusable.",[29,1867,1869],{"id":1868},"two-deployment-patterns","Two deployment patterns",[265,1871,1873],{"id":1872},"pattern-a-s3-as-the-only-datastore","Pattern A — S3 as the only datastore",[10,1875,1876],{},"PBS writes directly to the S3 datastore. No local chunk storage beyond the cache.",[10,1878,1879,1882],{},[14,1880,1881],{},"Use when:"," homelab, small deployments, or secondary PBS acting purely as offsite target. Simplest to set up.",[10,1884,1885,1888],{},[14,1886,1887],{},"Trade-off:"," every backup, restore, verification, and GC operation touches S3. Initial backup speeds are bound by your upstream bandwidth. 
Restores are bound by downstream.",[265,1890,1892],{"id":1891},"pattern-b-local-datastore-s3-datastore-sync-job","Pattern B — Local datastore + S3 datastore + sync job",[10,1894,1895,1896,1899],{},"Backups land on a local datastore first (fast). A scheduled ",[14,1897,1898],{},"sync job on the same PBS instance"," pulls from local to S3 for offsite retention. You get both copies from one PBS host.",[10,1901,1902,1904],{},[14,1903,1881],{}," you want backup speed to match local disk throughput and need an automated offsite copy. This is the recommended pattern for MSPs and production use.",[10,1906,1907,1909],{},[14,1908,1887],{}," more storage required on the PBS host, slightly more complex.",[10,1911,1912],{},"The rest of this guide sets up the S3 endpoint and datastore once. Both patterns diverge only at the final step (whether you point PVE at the S3 datastore directly, or configure a sync job).",[29,1914,97],{"id":96},[10,1916,1917],{},"At your S3 provider, create:",[34,1919,1920,1925,1931],{},[37,1921,1861,1922,1924],{},[14,1923,108],{}," dedicated to PBS. Do not share it with other tools — PBS manages object lifecycle itself.",[37,1926,1927,1928,1930],{},"An ",[14,1929,115],{}," scoped to that bucket only. Save the secret key; most providers only show it once.",[37,1932,119,1933,1935,1936,127],{},[14,1934,126],{}," identifier and the ",[14,1937,122],{},[10,1939,1940],{},"If your provider supports object versioning or object lock, enable it on the bucket for ransomware protection. PBS never modifies existing chunks, but a compromised client with delete permissions could — versioning gives you a recovery window.",[29,1942,1944],{"id":1943},"step-2-configure-the-s3-endpoint-in-pbs","Step 2 — Configure the S3 endpoint in PBS",[10,1946,1947,1950,1951,1954],{},[14,1948,1949],{},"Via the web UI:"," Navigate to ",[14,1952,1953],{},"Configuration → Remotes → S3 Endpoints → Add",". 
Fill in name, access key, secret, endpoint URL, region, and (for self-signed providers) fingerprint.",[10,1956,1957,1960],{},[14,1958,1959],{},"Via CLI"," — this is the pattern most providers use:",[1342,1962,1964],{"className":1344,"code":1963,"language":1346,"meta":792,"style":792},"proxmox-backup-manager s3 endpoint create my-s3-ep \\\n  --access-key 'YOUR_ACCESS_KEY' \\\n  --secret-key 'YOUR_SECRET_KEY' \\\n  --endpoint '{{bucket}}.s3.{{region}}.example.com' \\\n  --region eu-central-1\n",[44,1965,1966,1986,1996,2006,2017],{"__ignoreMap":792},[1350,1967,1968,1971,1974,1977,1980,1983],{"class":1352,"line":1353},[1350,1969,1970],{"class":1356},"proxmox-backup-manager",[1350,1972,1973],{"class":1360}," s3",[1350,1975,1976],{"class":1360}," endpoint",[1350,1978,1979],{"class":1360}," create",[1350,1981,1982],{"class":1360}," my-s3-ep",[1350,1984,1985],{"class":1370}," \\\n",[1350,1987,1988,1991,1994],{"class":1352,"line":793},[1350,1989,1990],{"class":1370},"  --access-key",[1350,1992,1993],{"class":1360}," 'YOUR_ACCESS_KEY'",[1350,1995,1985],{"class":1370},[1350,1997,1998,2001,2004],{"class":1352,"line":802},[1350,1999,2000],{"class":1370},"  --secret-key",[1350,2002,2003],{"class":1360}," 'YOUR_SECRET_KEY'",[1350,2005,1985],{"class":1370},[1350,2007,2009,2012,2015],{"class":1352,"line":2008},4,[1350,2010,2011],{"class":1370},"  --endpoint",[1350,2013,2014],{"class":1360}," '{{bucket}}.s3.{{region}}.example.com'",[1350,2016,1985],{"class":1370},[1350,2018,2020,2023],{"class":1352,"line":2019},5,[1350,2021,2022],{"class":1370},"  --region",[1350,2024,2025],{"class":1360}," eu-central-1\n",[10,2027,2028,2029,123,2032,2035],{},"The ",[44,2030,2031],{},"{{bucket}}",[44,2033,2034],{},"{{region}}"," placeholders are expanded automatically when PBS makes requests. This gives you one endpoint config that works across multiple buckets.",[10,2037,2038,2041,2042,2045],{},[14,2039,2040],{},"Vhost vs path style:"," PBS defaults to vhost-style addressing (bucket as subdomain). 
If your provider requires path-style (bucket in the URL path), add ",[44,2043,2044],{},"--path-style true",". Cloudflare R2 and some self-hosted providers need this.",[10,2047,2048,2051,2052,2055],{},[14,2049,2050],{},"Self-signed certificates:"," Add ",[44,2053,2054],{},"--fingerprint 'XX:XX:XX:...'",". Get the fingerprint with:",[1342,2057,2059],{"className":1344,"code":2058,"language":1346,"meta":792,"style":792},"openssl s_client -connect your-s3-endpoint:443 -servername your-s3-endpoint \u003C /dev/null 2>/dev/null | \\\n  openssl x509 -fingerprint -sha256 -noout\n",[44,2060,2061,2099],{"__ignoreMap":792},[1350,2062,2063,2066,2069,2072,2075,2078,2081,2085,2088,2091,2094,2097],{"class":1352,"line":1353},[1350,2064,2065],{"class":1356},"openssl",[1350,2067,2068],{"class":1360}," s_client",[1350,2070,2071],{"class":1370}," -connect",[1350,2073,2074],{"class":1360}," your-s3-endpoint:443",[1350,2076,2077],{"class":1370}," -servername",[1350,2079,2080],{"class":1360}," your-s3-endpoint",[1350,2082,2084],{"class":2083},"szBVR"," \u003C",[1350,2086,2087],{"class":1360}," /dev/null",[1350,2089,2090],{"class":2083}," 2>",[1350,2092,2093],{"class":1360},"/dev/null",[1350,2095,2096],{"class":2083}," |",[1350,2098,1985],{"class":1370},[1350,2100,2101,2104,2107,2110,2113],{"class":1352,"line":793},[1350,2102,2103],{"class":1356},"  openssl",[1350,2105,2106],{"class":1360}," x509",[1350,2108,2109],{"class":1370}," -fingerprint",[1350,2111,2112],{"class":1370}," -sha256",[1350,2114,2115],{"class":1370}," -noout\n",[10,2117,2118],{},"Verify the endpoint:",[1342,2120,2122],{"className":1344,"code":2121,"language":1346,"meta":792,"style":792},"proxmox-backup-manager s3 endpoint list\n",[44,2123,2124],{"__ignoreMap":792},[1350,2125,2126,2128,2130,2132],{"class":1352,"line":1353},[1350,2127,1970],{"class":1356},[1350,2129,1973],{"class":1360},[1350,2131,1976],{"class":1360},[1350,2133,2134],{"class":1360}," list\n",[29,2136,2138],{"id":2137},"step-3-prepare-the-local-cache","Step 
3 — Prepare the local cache",[10,2140,2141],{},"The cache must live on a dedicated path. A ZFS dataset with a quota is the cleanest option:",[1342,2143,2145],{"className":1344,"code":2144,"language":1346,"meta":792,"style":792},"zfs create -o mountpoint=/mnt/datastore/s3-cache rpool/s3-cache\nzfs set quota=128G rpool/s3-cache\n",[44,2146,2147,2163],{"__ignoreMap":792},[1350,2148,2149,2152,2154,2157,2160],{"class":1352,"line":1353},[1350,2150,2151],{"class":1356},"zfs",[1350,2153,1979],{"class":1360},[1350,2155,2156],{"class":1370}," -o",[1350,2158,2159],{"class":1360}," mountpoint=/mnt/datastore/s3-cache",[1350,2161,2162],{"class":1360}," rpool/s3-cache\n",[1350,2164,2165,2167,2170,2173],{"class":1352,"line":793},[1350,2166,2151],{"class":1356},[1350,2168,2169],{"class":1360}," set",[1350,2171,2172],{"class":1360}," quota=128G",[1350,2174,2162],{"class":1360},[10,2176,2177,2178,2181],{},"Or use a dedicated partition mounted at ",[44,2179,2180],{},"/mnt/datastore/s3-cache",". Whatever you choose, do not use an existing datastore path — PBS will reject it.",[29,2183,2185],{"id":2184},"step-4-create-the-s3-backed-datastore","Step 4 — Create the S3-backed datastore",[10,2187,2188,168,2190,2193,2194,2197],{},[14,2189,1949],{},[14,2191,2192],{},"Datastore → Add Datastore",", select ",[14,2195,2196],{},"S3"," as backend, pick your endpoint from the dropdown, set bucket name and cache path.",[10,2199,2200],{},[14,2201,2202],{},"Via CLI:",[1342,2204,2206],{"className":1344,"code":2205,"language":1346,"meta":792,"style":792},"proxmox-backup-manager datastore create s3-offsite \\\n  /mnt/datastore/s3-cache \\\n  --backend type=s3,client=my-s3-ep,bucket=pbs-offsite-bucket\n",[44,2207,2208,2222,2229],{"__ignoreMap":792},[1350,2209,2210,2212,2215,2217,2220],{"class":1352,"line":1353},[1350,2211,1970],{"class":1356},[1350,2213,2214],{"class":1360}," datastore",[1350,2216,1979],{"class":1360},[1350,2218,2219],{"class":1360}," 
s3-offsite",[1350,2221,1985],{"class":1370},[1350,2223,2224,2227],{"class":1352,"line":793},[1350,2225,2226],{"class":1360},"  /mnt/datastore/s3-cache",[1350,2228,1985],{"class":1370},[1350,2230,2231,2234],{"class":1352,"line":802},[1350,2232,2233],{"class":1370},"  --backend",[1350,2235,2236],{"class":1360}," type=s3,client=my-s3-ep,bucket=pbs-offsite-bucket\n",[34,2238,2239,2245,2250,2256],{},[37,2240,2241,2244],{},[44,2242,2243],{},"s3-offsite"," is the datastore name. It becomes the prefix for all objects in the bucket, so pick something stable — renaming later means reseeding.",[37,2246,2247,2249],{},[44,2248,2180],{}," is the local cache path.",[37,2251,2252,2255],{},[44,2253,2254],{},"client=my-s3-ep"," references the endpoint you created in Step 2.",[37,2257,2258,2261],{},[44,2259,2260],{},"bucket=pbs-offsite-bucket"," is the S3 bucket name.",[10,2263,2264],{},"List datastores to confirm:",[1342,2266,2268],{"className":1344,"code":2267,"language":1346,"meta":792,"style":792},"proxmox-backup-manager datastore list\n",[44,2269,2270],{"__ignoreMap":792},[1350,2271,2272,2274,2276],{"class":1352,"line":1353},[1350,2273,1970],{"class":1356},[1350,2275,2214],{"class":1360},[1350,2277,2134],{"class":1360},[10,2279,2280],{},"The datastore is now usable. At this point your config diverges depending on pattern.",[29,2282,2284],{"id":2283},"pattern-a-use-s3-datastore-directly-from-pve","Pattern A — Use S3 datastore directly from PVE",[10,2286,2287,2288,2291,2292,2294],{},"In Proxmox VE: ",[14,2289,2290],{},"Datacenter → Storage → Add → Proxmox Backup Server",". 
Enter your PBS IP/hostname, credentials, the datastore name (",[44,2293,2243],{},"), and the PBS server fingerprint.",[10,2296,2297],{},"Get the PBS fingerprint:",[1342,2299,2301],{"className":1344,"code":2300,"language":1346,"meta":792,"style":792},"proxmox-backup-manager cert info | grep Fingerprint\n",[44,2302,2303],{"__ignoreMap":792},[1350,2304,2305,2307,2310,2313,2315,2318],{"class":1352,"line":1353},[1350,2306,1970],{"class":1356},[1350,2308,2309],{"class":1360}," cert",[1350,2311,2312],{"class":1360}," info",[1350,2314,2096],{"class":2083},[1350,2316,2317],{"class":1356}," grep",[1350,2319,2320],{"class":1360}," Fingerprint\n",[10,2322,2323],{},"Backup jobs in PVE that target this storage now write directly to S3 via the PBS local cache.",[29,2325,2327],{"id":2326},"pattern-b-local-datastore-sync-job-to-s3","Pattern B — Local datastore + sync job to S3",[10,2329,2330],{},"Create a local datastore alongside the S3 one (if you don't already have one):",[1342,2332,2334],{"className":1344,"code":2333,"language":1346,"meta":792,"style":792},"proxmox-backup-manager datastore create local-pbs /mnt/datastore/local-pbs\n",[44,2335,2336],{"__ignoreMap":792},[1350,2337,2338,2340,2342,2344,2347],{"class":1352,"line":1353},[1350,2339,1970],{"class":1356},[1350,2341,2214],{"class":1360},[1350,2343,1979],{"class":1360},[1350,2345,2346],{"class":1360}," local-pbs",[1350,2348,2349],{"class":1360}," /mnt/datastore/local-pbs\n",[10,2351,2352,2353,2356,2357,2359,2360,127],{},"Point your PVE backup jobs at ",[44,2354,2355],{},"local-pbs",". Now configure a pull sync job that mirrors snapshots from ",[44,2358,2355],{}," into ",[44,2361,2243],{},[10,2363,2364,2365,2368,2369,2372,2373,2376,2377,2380,2381,2384],{},"PBS sync jobs are designed to pull from a ",[14,2366,2367],{},"Remote"," (another PBS instance). To sync between two datastores on the same PBS host, the supported approach is to create a Remote that points back at ",[44,2370,2371],{},"localhost",". 
Create a dedicated API token for the sync user first (",[14,2374,2375],{},"Configuration → Access Control → API Token",", role ",[44,2378,2379],{},"DatastoreReader"," on ",[44,2382,2383],{},"/datastore/local-pbs","), then:",[1342,2386,2388],{"className":1344,"code":2387,"language":1346,"meta":792,"style":792},"proxmox-backup-manager remote create self \\\n  --host 127.0.0.1 \\\n  --userid 'sync@pbs!syncjob' \\\n  --password 'YOUR_API_TOKEN_SECRET' \\\n  --fingerprint \"$(proxmox-backup-manager cert info | awk '/Fingerprint/ {print $3}')\"\n",[44,2389,2390,2404,2414,2424,2434],{"__ignoreMap":792},[1350,2391,2392,2394,2397,2399,2402],{"class":1352,"line":1353},[1350,2393,1970],{"class":1356},[1350,2395,2396],{"class":1360}," remote",[1350,2398,1979],{"class":1360},[1350,2400,2401],{"class":1360}," self",[1350,2403,1985],{"class":1370},[1350,2405,2406,2409,2412],{"class":1352,"line":793},[1350,2407,2408],{"class":1370},"  --host",[1350,2410,2411],{"class":1370}," 127.0.0.1",[1350,2413,1985],{"class":1370},[1350,2415,2416,2419,2422],{"class":1352,"line":802},[1350,2417,2418],{"class":1370},"  --userid",[1350,2420,2421],{"class":1360}," 'sync@pbs!syncjob'",[1350,2423,1985],{"class":1370},[1350,2425,2426,2429,2432],{"class":1352,"line":2008},[1350,2427,2428],{"class":1370},"  --password",[1350,2430,2431],{"class":1360}," 'YOUR_API_TOKEN_SECRET'",[1350,2433,1985],{"class":1370},[1350,2435,2436,2439,2442,2444,2447,2450,2453],{"class":1352,"line":2019},[1350,2437,2438],{"class":1370},"  --fingerprint",[1350,2440,2441],{"class":1360}," \"$(",[1350,2443,1970],{"class":1356},[1350,2445,2446],{"class":1360}," cert info ",[1350,2448,2449],{"class":2083},"|",[1350,2451,2452],{"class":1356}," awk",[1350,2454,2455],{"class":1360}," '/Fingerprint/ {print $3}')\"\n",[10,2457,2458],{},"Then create the sync job from the local datastore (via the self-remote) into the S3 
datastore:",[1342,2460,2462],{"className":1344,"code":2461,"language":1346,"meta":792,"style":792},"proxmox-backup-manager sync-job create offsite-sync \\\n  --remote self \\\n  --remote-store local-pbs \\\n  --store s3-offsite \\\n  --schedule 'daily' \\\n  --remove-vanished false\n",[44,2463,2464,2478,2487,2496,2505,2515],{"__ignoreMap":792},[1350,2465,2466,2468,2471,2473,2476],{"class":1352,"line":1353},[1350,2467,1970],{"class":1356},[1350,2469,2470],{"class":1360}," sync-job",[1350,2472,1979],{"class":1360},[1350,2474,2475],{"class":1360}," offsite-sync",[1350,2477,1985],{"class":1370},[1350,2479,2480,2483,2485],{"class":1352,"line":793},[1350,2481,2482],{"class":1370},"  --remote",[1350,2484,2401],{"class":1360},[1350,2486,1985],{"class":1370},[1350,2488,2489,2492,2494],{"class":1352,"line":802},[1350,2490,2491],{"class":1370},"  --remote-store",[1350,2493,2346],{"class":1360},[1350,2495,1985],{"class":1370},[1350,2497,2498,2501,2503],{"class":1352,"line":2008},[1350,2499,2500],{"class":1370},"  --store",[1350,2502,2219],{"class":1360},[1350,2504,1985],{"class":1370},[1350,2506,2507,2510,2513],{"class":1352,"line":2019},[1350,2508,2509],{"class":1370},"  --schedule",[1350,2511,2512],{"class":1360}," 'daily'",[1350,2514,1985],{"class":1370},[1350,2516,2518,2521],{"class":1352,"line":2517},6,[1350,2519,2520],{"class":1370},"  --remove-vanished",[1350,2522,2523],{"class":1370}," false\n",[34,2525,2526,2532,2538,2544],{},[37,2527,2528,2531],{},[44,2529,2530],{},"--remote self"," references the loopback remote you just created.",[37,2533,2534,2537],{},[44,2535,2536],{},"--remote-store local-pbs"," is the source datastore.",[37,2539,2540,2543],{},[44,2541,2542],{},"--store s3-offsite"," is the target (S3) datastore.",[37,2545,2546,2549,2550,2553],{},[44,2547,2548],{},"--remove-vanished false"," is a ",[14,2551,2552],{},"ransomware safety measure",": if an attacker deletes snapshots on your local datastore, the sync job won't propagate those deletions to S3. 
Manage retention directly on the S3 datastore with a separate prune job.",[10,2555,2556,2557,2560],{},"You can also configure all of this from the web UI under ",[14,2558,2559],{},"Datastore → s3-offsite → Sync Jobs → Add"," after the self-remote is in place — often the faster path for first-time setup.",[29,2562,2564],{"id":2563},"step-5-encryption-do-not-skip","Step 5 — Encryption (do not skip)",[10,2566,2567],{},"PBS supports client-side encryption. Chunks are encrypted on the PVE host before being sent to PBS — your S3 provider only ever sees ciphertext. This is independent of the bucket being public-accessible or not; correctly configured, a compromised bucket leaks nothing useful.",[10,2569,2570],{},"On each PVE node:",[1342,2572,2574],{"className":1344,"code":2573,"language":1346,"meta":792,"style":792},"proxmox-backup-client key create /etc/pve/priv/pbs-encryption.key\n",[44,2575,2576],{"__ignoreMap":792},[1350,2577,2578,2581,2584,2586],{"class":1352,"line":1353},[1350,2579,2580],{"class":1356},"proxmox-backup-client",[1350,2582,2583],{"class":1360}," key",[1350,2585,1979],{"class":1360},[1350,2587,2588],{"class":1360}," /etc/pve/priv/pbs-encryption.key\n",[10,2590,2591,2592,2595],{},"Reference the key in your storage config in PVE (",[14,2593,2594],{},"Datacenter → Storage → your PBS storage → Encryption Key","). All subsequent backups are encrypted.",[86,2597,2598],{},[10,2599,1826,2600,2603],{},[14,2601,2602],{},"Back up the encryption key separately — not on PBS, not in the S3 bucket it protects."," If you lose the key, every backup in S3 is unrecoverable. Print the paper-key version and store it in a safe, or keep it in a password manager that is not itself backed up to the same PBS.",[10,2605,2606,2607,2610],{},"Use the master-key feature (",[44,2608,2609],{},"--master-pubkey-file",") to allow recovery of individual backup keys from a master keypair. 
The Proxmox Backup Client documentation covers the master-key workflow in detail.",[29,2612,2614],{"id":2613},"step-6-garbage-collection-and-verification-on-s3","Step 6 — Garbage collection and verification on S3",[10,2616,2617,2618,2621],{},"GC on an S3-backed datastore issues significantly more API requests than GC on local storage. Schedule it ",[14,2619,2620],{},"less frequently"," than you would locally — weekly is reasonable for most workloads, not daily.",[1342,2623,2625],{"className":1344,"code":2624,"language":1346,"meta":792,"style":792},"proxmox-backup-manager datastore update s3-offsite --gc-schedule 'Sun 04:00'\n",[44,2626,2627],{"__ignoreMap":792},[1350,2628,2629,2631,2633,2636,2638,2641],{"class":1352,"line":1353},[1350,2630,1970],{"class":1356},[1350,2632,2214],{"class":1360},[1350,2634,2635],{"class":1360}," update",[1350,2637,2219],{"class":1360},[1350,2639,2640],{"class":1370}," --gc-schedule",[1350,2642,2643],{"class":1360}," 'Sun 04:00'\n",[10,2645,2646,2647,2650],{},"Verification jobs read chunks back and recompute their hashes. On S3 this means downloading chunks — egress cost applies unless your provider offers zero-egress. Configure verify jobs from the web UI under ",[14,2648,2649],{},"Datastore → Verify Jobs → Add"," with a conservative schedule (monthly is a reasonable starting point for S3 datastores). 
Enable the \"skip verified\" option with a 30-day window so verification is incremental rather than full.",[10,2652,2653],{},"Manual verification is also possible from the CLI:",[1342,2655,2657],{"className":1344,"code":2656,"language":1346,"meta":792,"style":792},"proxmox-backup-manager verify s3-offsite --ignore-verified true\n",[44,2658,2659],{"__ignoreMap":792},[1350,2660,2661,2663,2666,2668,2671],{"class":1352,"line":1353},[1350,2662,1970],{"class":1356},[1350,2664,2665],{"class":1360}," verify",[1350,2667,2219],{"class":1360},[1350,2669,2670],{"class":1370}," --ignore-verified",[1350,2672,2673],{"class":1370}," true\n",[29,2675,2677],{"id":2676},"step-7-test-a-restore","Step 7 — Test a restore",[10,2679,2680],{},"A backup you have not restored is a backup you do not have. Before relying on the setup:",[490,2682,2683,2686,2691,2694],{},[37,2684,2685],{},"From the PBS UI, select a snapshot on the S3 datastore.",[37,2687,238,2688,127],{},[14,2689,2690],{},"File Restore",[37,2692,2693],{},"Browse the archive and extract a handful of files.",[37,2695,2696],{},"Separately, restore an entire VM snapshot to a new VM ID on PVE and boot it.",[10,2698,2699],{},"Do this after the initial setup, after any PBS upgrade, and on a rotating sample of snapshots at least monthly.",[29,2701,582],{"id":581},[10,2703,2704],{},"HummingTribe S3 runs on Garage (S3-compatible) from our Hetzner facility in Germany. 
All storage is in the EU, zero egress fees, GDPR-compliant by default.",[10,2706,2707],{},"Values you'll use in the PBS S3 endpoint config:",[590,2709,2710,2718],{},[593,2711,2712],{},[596,2713,2714,2716],{},[599,2715,601],{},[599,2717,604],{},[606,2719,2720,2729,2735,2745],{},[596,2721,2722,2725],{},[611,2723,2724],{},"Endpoint",[611,2726,2727],{},[44,2728,961],{},[596,2730,2731,2733],{},[611,2732,631],{},[611,2734,634],{},[596,2736,2737,2740],{},[611,2738,2739],{},"Path style",[611,2741,2742],{},[44,2743,2744],{},"true",[596,2746,2747,2750],{},[611,2748,2749],{},"Access key / Secret key",[611,2751,653],{},[10,2753,2754],{},"Create the endpoint:",[1342,2756,2758],{"className":1344,"code":2757,"language":1346,"meta":792,"style":792},"proxmox-backup-manager s3 endpoint create hummingtribe \\\n  --access-key 'YOUR_HT_ACCESS_KEY' \\\n  --secret-key 'YOUR_HT_SECRET_KEY' \\\n  --endpoint 'storage.hummingtribe.com' \\\n  --region 'YOUR_REGION' \\\n  --path-style true\n",[44,2759,2760,2775,2784,2793,2802,2811],{"__ignoreMap":792},[1350,2761,2762,2764,2766,2768,2770,2773],{"class":1352,"line":1353},[1350,2763,1970],{"class":1356},[1350,2765,1973],{"class":1360},[1350,2767,1976],{"class":1360},[1350,2769,1979],{"class":1360},[1350,2771,2772],{"class":1360}," hummingtribe",[1350,2774,1985],{"class":1370},[1350,2776,2777,2779,2782],{"class":1352,"line":793},[1350,2778,1990],{"class":1370},[1350,2780,2781],{"class":1360}," 'YOUR_HT_ACCESS_KEY'",[1350,2783,1985],{"class":1370},[1350,2785,2786,2788,2791],{"class":1352,"line":802},[1350,2787,2000],{"class":1370},[1350,2789,2790],{"class":1360}," 'YOUR_HT_SECRET_KEY'",[1350,2792,1985],{"class":1370},[1350,2794,2795,2797,2800],{"class":1352,"line":2008},[1350,2796,2011],{"class":1370},[1350,2798,2799],{"class":1360}," 'storage.hummingtribe.com'",[1350,2801,1985],{"class":1370},[1350,2803,2804,2806,2809],{"class":1352,"line":2019},[1350,2805,2022],{"class":1370},[1350,2807,2808],{"class":1360}," 'YOUR_REGION'",[1350,2810,1985],{"class":1370},[1350,2812,2813,2816],{"class":1352,"line":2517},[1350,2814,2815],{"class":1370},"  --path-style",[1350,2817,2673],{"class":1370},[10,2819,2820],{},"Then create the datastore against your HummingTribe bucket:",[1342,2822,2824],{"className":1344,"code":2823,"language":1346,"meta":792,"style":792},"proxmox-backup-manager datastore create ht-s3-offsite \\\n  /mnt/datastore/ht-s3-cache \\\n  --backend type=s3,client=hummingtribe,bucket=your-bucket-name\n",[44,2825,2826,2839,2846],{"__ignoreMap":792},[1350,2827,2828,2830,2832,2834,2837],{"class":1352,"line":1353},[1350,2829,1970],{"class":1356},[1350,2831,2214],{"class":1360},[1350,2833,1979],{"class":1360},[1350,2835,2836],{"class":1360}," ht-s3-offsite",[1350,2838,1985],{"class":1370},[1350,2840,2841,2844],{"class":1352,"line":793},[1350,2842,2843],{"class":1360},"  /mnt/datastore/ht-s3-cache",[1350,2845,1985],{"class":1370},[1350,2847,2848,2850],{"class":1352,"line":802},[1350,2849,2233],{"class":1370},[1350,2851,2852],{"class":1360}," type=s3,client=hummingtribe,bucket=your-bucket-name\n",[10,2854,2855,2858],{},[14,2856,2857],{},"Why this is a fit for PBS offsite:"," zero egress means restore and verification operations don't incur surprise costs. EU-only data residency satisfies GDPR without a DPA negotiation. Flat monthly pricing removes the API-request cost variable that hurts PBS deployments on hyperscaler object storage.",[29,2860,666],{"id":665},[10,2862,2863,2869,2870,2873],{},[14,2864,2865,2868],{},[44,2866,2867],{},"certificate verify failed"," on endpoint test."," Self-signed or private CA cert. 
Add ",[44,2871,2872],{},"--fingerprint"," to the endpoint config with the SHA-256 fingerprint.",[10,2875,2876,2881,2882,47,2885,2888,2889,2892,2893,123,2896,2899],{},[14,2877,2878,2880],{},[44,2879,686],{}," on datastore creation."," Access key missing ",[44,2883,2884],{},"s3:PutObject",[44,2886,2887],{},"s3:DeleteObject",", or ",[44,2890,2891],{},"s3:ListBucket"," on the bucket. On AWS IAM, the policy needs both ",[44,2894,2895],{},"arn:aws:s3:::bucket-name",[44,2897,2898],{},"arn:aws:s3:::bucket-name/*"," resources.",[10,2901,2902,2905,2906,2909],{},[14,2903,2904],{},"Region errors on Cloudflare R2 or similar."," Set ",[44,2907,2908],{},"--region auto"," — R2 does not validate the region name but requires a non-empty value.",[10,2911,2912,2915],{},[14,2913,2914],{},"Datastore creation fails with \"path already a datastore\"."," Pick a cache path that is not already a PBS datastore. The cache cannot be nested inside another datastore directory.",[10,2917,2918,2921,2922,2925,2926,123,2929,2932,2933,2936],{},[14,2919,2920],{},"Migrating to a new PBS host."," On the new host, recreate the S3 endpoint config identically, then create the datastore with the ",[14,2923,2924],{},"same datastore name"," and both ",[44,2927,2928],{},"--reuse-datastore true",[44,2930,2931],{},"--overwrite-in-use true",". Never run two PBS instances against the same S3 datastore simultaneously — use the ",[44,2934,2935],{},"overwrite-in-use"," flag only when the original host is retired.",[10,2938,2939,2942,2943,2946,2947,2950],{},[14,2940,2941],{},"Running out of space on S3 mid-write."," Cleanup operations may fail alongside. 
Manually remove stray objects for the affected snapshot in the S3 console, then run an ",[14,2944,2945],{},"S3 refresh"," on the datastore (UI: ",[14,2948,2949],{},"Datastore → Refresh from S3",", or via CLI).",[29,2952,769],{"id":768},[10,2954,2955,2956,2958,2959,2962,2963,2966,2967,2969],{},"If you're evaluating providers for PBS offsite, the three variables that matter are: ",[14,2957,783],{}," (EU if you need GDPR), ",[14,2960,2961],{},"egress pricing"," (zero-egress beats per-GB charges for any verification workload), and ",[14,2964,2965],{},"API request pricing"," (matters for GC frequency). ",[786,2968,789],{"href":788}," addresses all three, with flat per-TB pricing and no egress charges, hosted in Germany.",[1743,2971,2972],{},"html pre.shiki code .sScJk, html code.shiki .sScJk{--shiki-default:#6F42C1;--shiki-dark:#B392F0}html pre.shiki code .sZZnC, html code.shiki .sZZnC{--shiki-default:#032F62;--shiki-dark:#9ECBFF}html pre.shiki code .sj4cs, html code.shiki .sj4cs{--shiki-default:#005CC5;--shiki-dark:#79B8FF}html .default .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}html.dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}html pre.shiki code .szBVR, html code.shiki 
.szBVR{--shiki-default:#D73A49;--shiki-dark:#F97583}",{"title":792,"searchDepth":793,"depth":793,"links":2974},[2975,2976,2977,2981,2982,2983,2984,2985,2986,2987,2988,2989,2990,2991,2992],{"id":31,"depth":793,"text":32},{"id":1820,"depth":793,"text":1821},{"id":1868,"depth":793,"text":1869,"children":2978},[2979,2980],{"id":1872,"depth":802,"text":1873},{"id":1891,"depth":802,"text":1892},{"id":96,"depth":793,"text":97},{"id":1943,"depth":793,"text":1944},{"id":2137,"depth":793,"text":2138},{"id":2184,"depth":793,"text":2185},{"id":2283,"depth":793,"text":2284},{"id":2326,"depth":793,"text":2327},{"id":2563,"depth":793,"text":2564},{"id":2613,"depth":793,"text":2614},{"id":2676,"depth":793,"text":2677},{"id":581,"depth":793,"text":582},{"id":665,"depth":793,"text":666},{"id":768,"depth":793,"text":769},"Configure PBS with S3-compatible object storage: primary datastore, local + offsite sync pattern, encryption, and GC tuning. For homelab and MSP use.",{},"/docs/proxmox-backup-server-s3-offsite",{"title":1769,"description":2993},{"loc":2995},"docs/proxmox-backup-server-s3-offsite",[3000,3001,825,826,827,828],"pbs","proxmox","KZ8RonFJOmpTwwHf105xbY-yodEm9fVVhwbjneiq2wg",1776858558246]