TrueNAS offsite to S3 — full Cloud Sync setup guide

TrueNAS ships with two cloud backup features: Cloud Sync Tasks (rclone-based, works with any S3-compatible provider) and TrueCloud Backup (restic-based, Storj-only). If your offsite target is Amazon S3, Backblaze B2, HummingTribe, Wasabi, or any other S3-compatible object store, Cloud Sync Tasks is the only option — TrueCloud Backup doesn't support non-Storj endpoints. This guide covers the full Cloud Sync workflow for both TrueNAS Community Edition (25.04 Fangtooth, 25.10 Goldeye) and the legacy TrueNAS 13.0 CORE.

What you'll configure: a cloud credential for your S3 provider, a scheduled sync task that pushes one or more datasets to a bucket, encryption, exclude rules, and a tested restore path.

Prerequisites

  • TrueNAS Community Edition 25.04 or newer (Fangtooth or Goldeye), or TrueNAS 13.0 CORE. UI paths below match Community Edition. CORE differences are noted inline.
  • An S3-compatible object store with a pre-created bucket and an access key pair with GetObject, PutObject, ListBucket, and DeleteObject permissions on that bucket.
  • A TrueNAS administrator account with Full Administrative Access (shares admin and other restricted roles cannot configure cloud tasks in 25.04 and later).
  • Outbound HTTPS to the S3 endpoint. Firewall rules blocking port 443 egress will break the sync.

Cloud Sync Tasks vs TrueCloud Backup — which applies here

TrueNAS 24.10 Electric Eel introduced TrueCloud Backup Tasks, a restic-based feature with deduplication, snapshot-style versioning, and per-snapshot restores. It is technically superior to Cloud Sync for backup use cases — but it only supports Storj iX as a target. Community forum threads from 2025 confirm that iXsystems has no announced roadmap for TrueCloud Backup on other S3 providers.

Cloud Sync Tasks, the older feature, uses rclone under the hood. It supports every major S3 provider via the "Amazon S3" provider type (which accepts custom endpoints for S3-compatible stores) and transfers data as individual files rather than restic chunks. No deduplication, no snapshot versioning — but it works universally.

For offsite backup to anything other than Storj, use Cloud Sync Tasks. The rest of this guide is Cloud Sync only.

If you specifically want restic-based backup to a non-Storj S3 target, your options are: use Storj with TrueCloud Backup, run restic manually via a cron task on the TrueNAS shell (advanced, not officially supported), or use a separate restic host with the TrueNAS dataset mounted via NFS. None of these are covered here.

Step 1 — Create the S3 bucket and credentials

At your S3 provider:

  • Create a bucket dedicated to TrueNAS. One bucket per TrueNAS host is the simplest model — avoid sharing buckets between systems.
  • Create an access key pair with read/write/list/delete permissions scoped to that bucket only. Save the secret key; most providers show it once.
  • Note the endpoint URL and region.

If your provider supports object versioning, enable it on the bucket. Cloud Sync's SYNC mode deletes remote files when local files are deleted — versioning gives you a recovery window against accidental deletion on the TrueNAS side and against compromised TrueNAS credentials.
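For providers that accept AWS-style IAM policies, the scoped key from the second bullet can be expressed roughly as follows. The bucket name truenas-offsite is a placeholder, and non-AWS stores (including Garage-based ones) use their own permission models, so treat this as a sketch of the intent, not a copy-paste artifact:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::truenas-offsite"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::truenas-offsite/*"
    }
  ]
}
```

Note that ListBucket applies to the bucket itself while the object actions apply to keys inside it — mixing the two up is a common cause of Access Denied during credential verification.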

Step 2 — Add the cloud credential in TrueNAS

Navigate to Credentials → Backup Credentials → Cloud Credentials and click Add. On TrueNAS 13.0 CORE, the path is System → Cloud Credentials.

Fill in the Add Cloud Credentials form:

  • Name: descriptive, e.g. hummingtribe-backup
  • Provider: Amazon S3 — this is the correct provider type for any S3-compatible store, not just AWS
  • Authentication Mode: default (Access Key + Secret Key)
  • Access Key ID: your S3 access key
  • Secret Access Key: your S3 secret

For non-AWS providers, expand Advanced and set:

  • Endpoint URL: the full HTTPS URL of your S3 endpoint (e.g. https://storage.hummingtribe.com)
  • Region: the region identifier for your provider. If unknown, check provider documentation. For Cloudflare R2 set auto; for MinIO single-region deployments set us-east-1 (the default); for Garage-based providers (like HummingTribe), use the value shown in your dashboard.
  • Disable Endpoint Region: leave off unless instructed by your provider
  • Signature Version: leave at default (v4) unless your provider requires v2

Click Verify Credential. TrueNAS makes a test API call to list buckets. If this fails, the most common causes are: wrong endpoint URL, wrong region, or the access key lacks ListBucket on at least one bucket. Do not proceed until verification passes.

Click Save.
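Under the hood, the saved credential corresponds to an rclone S3 remote. If verification keeps failing and you want to rule out the TrueNAS UI, you can test the same values from any workstation with rclone installed, using a config section roughly like this (the remote name offsite and the region value are examples — use the values from your provider):

```ini
[offsite]
type = s3
provider = Other
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETKEY
endpoint = https://storage.hummingtribe.com
region = your-region
```

Running rclone lsd offsite: should list your buckets. The same error appearing both here and in TrueNAS points at the credentials or the provider side rather than the TrueNAS configuration.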

Step 3 — Create the Cloud Sync Task

Navigate to Data Protection → Cloud Sync Tasks widget → Add. The Cloud Sync Task Wizard opens in Community Edition; CORE presents a single form.

What to back up

  • Description: descriptive name for the task (e.g. Daily offsite to HummingTribe)
  • Direction: PUSH (TrueNAS → cloud). Use PULL only for restore operations.
  • Transfer Mode: see the next section — this choice matters.
  • Directory/Files: browse to the dataset or folder to back up. Selecting a parent dataset includes all of its children unless you add exclude rules.
  • Credential: select the credential you created in Step 2.
  • Bucket: pick from the dropdown; TrueNAS lists buckets accessible to the access key.
  • Folder: optional path inside the bucket (e.g. truenas-pool/media). Use this to organize if backing up multiple datasets to one bucket.

Transfer mode — the decision that matters

  • SYNC mirrors the source: if a file is deleted locally, it's deleted remotely on the next run. Matches the source exactly. Risk: if a TrueNAS compromise deletes files locally, the next sync propagates the deletion. Mitigate with bucket versioning enabled.
  • COPY copies new and changed files, but never deletes on the remote. Safer against ransomware, but the bucket grows over time and needs manual cleanup.
  • MOVE copies to remote, then deletes from source. Use for one-way data migration, not backup.

For most backup scenarios, COPY with bucket-side lifecycle rules is the safest pattern. If you need exact mirroring, use SYNC with bucket versioning enabled.
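The remote-side effect of each mode can be illustrated as set operations. This is a toy model, not TrueNAS code — real transfers also compare sizes and modification times — but it captures why SYNC propagates deletions and COPY does not:

```python
def remote_after(mode, local, remote):
    """Toy model: which object names exist on the remote after one run."""
    if mode == "SYNC":
        return set(local)                # remote mirrors local; extras deleted
    if mode == "COPY":
        return set(remote) | set(local)  # adds/updates, never deletes remotely
    if mode == "MOVE":
        return set(remote) | set(local)  # uploads, then deletes the *local* copies
    raise ValueError(mode)

local = {"a.jpg", "b.jpg"}             # c.jpg was deleted locally
remote = {"a.jpg", "b.jpg", "c.jpg"}

print(sorted(remote_after("SYNC", local, remote)))  # deletion propagated
print(sorted(remote_after("COPY", local, remote)))  # c.jpg survives
```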

Schedule and retention

  • Schedule: set a preset (hourly/daily/weekly) or a custom cron expression. Daily at off-peak hours (02:00–04:00) is a sensible default for most deployments.
  • Enabled: leave checked.

TrueNAS Cloud Sync has no built-in retention policy — versioning and lifecycle rules happen on the S3 bucket side. Configure your S3 provider's lifecycle policy to: expire old versions after N days, transition to cold storage after M days, or delete non-current versions beyond a retention period. This is where the rclone-based Cloud Sync approach shows its limits vs restic — you're managing retention in two places.
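For providers that accept AWS-style lifecycle configurations, the retention side might look roughly like this. The 30-day value is an example, and lifecycle feature support varies widely between S3-compatible stores (storage-class transitions in particular are often unsupported), so verify against your provider's documentation first:

```json
{
  "Rules": [
    {
      "ID": "expire-old-versions",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "NoncurrentVersionExpiration": {"NoncurrentDays": 30}
    },
    {
      "ID": "drop-stale-delete-markers",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Expiration": {"ExpiredObjectDeleteMarker": true}
    }
  ]
}
```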

Advanced options worth setting

Click Advanced Options:

  • Remote Encryption: encrypts filenames and file contents on the remote. Uses rclone's crypt backend. Keep the password and salt in a secure place outside TrueNAS — losing them means the remote data is unreadable. Note: rclone has known issues with filename encryption when filenames are very long; for media-heavy datasets, consider content-only encryption instead.
  • Filename Encryption: separate option from content encryption. Turn off if you hit issues with long paths.
  • Exclude: glob patterns for files to skip. Common entries: *.tmp, .DS_Store, Thumbs.db, @eaDir (Synology artifact), .zfs (the ZFS snapshot directory, visible only when the dataset's snapdir property is set to visible).
  • Pre-script / Post-script: shell scripts to run before/after the task. Common uses: zfs snapshot pool/dataset@presync as a pre-script, email notifications as a post-script.
  • Bandwidth Limit: throttle upload speed to avoid saturating your connection. Schedule-aware limits (faster at night, slower during business hours) are available.
  • Transfers: concurrent file transfers. Default of 4 is sensible; increase to 8–16 only if your upstream bandwidth is underutilized.

Click Save. The task appears in the widget with an enabled toggle.

Step 4 — Run a test sync

Click the run arrow (▶) next to the task. The task enters RUNNING state. Watch the task log under Jobs (top toolbar, briefcase icon) to verify there are no errors.

First runs for large datasets will take a long time — plan for the initial seed to run outside business hours. Subsequent runs are incremental and much faster.

Encryption — three layers to consider

TrueNAS Cloud Sync to S3 has three independent encryption layers. Understand what each protects:

  1. ZFS dataset encryption (TrueNAS side, at rest on local disks). Not related to cloud backup; protects the data on your TrueNAS if a drive walks away. Cloud Sync reads decrypted content from the source dataset and sends it to the cloud.
  2. Remote Encryption (rclone crypt) in the Cloud Sync Task advanced options. Encrypts file content (and optionally filenames) before upload. Your S3 provider stores ciphertext only. You manage the keys.
  3. S3 server-side encryption (SSE). Provider encrypts data at rest; keys managed by provider. Protects against provider-side storage compromise but not against compromise of your S3 credentials.

For offsite backups, enable Remote Encryption in the task — this gives you provider-independent confidentiality. SSE is a defense-in-depth layer, not a substitute. Do not rely only on ZFS dataset encryption; Cloud Sync reads decrypted content.

Step 5 — Test a restore before you need one

Do not skip this. A backup you have not restored from is hypothetical.

Create a PULL Cloud Sync Task:

  • Direction: PULL
  • Credential and Bucket: same as your PUSH task
  • Folder: the same remote path
  • Directory/Files: a new empty dataset on TrueNAS (e.g. tank/restore-test) — never restore over your production dataset
  • Transfer Mode: COPY

Run the PULL task. Verify the restored files match the originals in your primary dataset (diff -r over SSH, or spot-check file counts and a handful of file hashes). Then delete the restore-test dataset.
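The spot-check can be automated with a short hash comparison. This is a sketch (the temp-dir demo stands in for real paths like /mnt/tank/media and /mnt/tank/restore-test, which are examples, not your actual dataset names):

```python
import hashlib
import os
import tempfile

def tree_hashes(root):
    """Map relative path -> sha256 hex digest for every file under root."""
    out = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(full, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            out[os.path.relpath(full, root)] = h.hexdigest()
    return out

def compare_trees(original, restored):
    a, b = tree_hashes(original), tree_hashes(restored)
    return {
        "missing": sorted(set(a) - set(b)),    # in original, absent from restore
        "extra": sorted(set(b) - set(a)),      # in restore only
        "changed": sorted(k for k in set(a) & set(b) if a[k] != b[k]),
    }

# Self-contained demo; in real use, point at the source and restore datasets.
src, dst = tempfile.mkdtemp(), tempfile.mkdtemp()
for d in (src, dst):
    with open(os.path.join(d, "a.txt"), "w") as f:
        f.write("hello")
print(compare_trees(src, dst))  # all three lists empty when the trees match
```

A clean restore reports empty missing, extra, and changed lists; anything else deserves investigation before you delete the restore-test dataset.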

Do this after the initial setup, after any TrueNAS major version upgrade, and at least quarterly on a rotating sample.

HummingTribe S3 configuration

HummingTribe S3 runs on Garage (S3-compatible) from our Hetzner facility in Germany. All storage is in the EU, zero egress fees, GDPR-compliant by default — which matters for Cloud Sync restore operations and any future disaster recovery from offsite.

Values for the TrueNAS cloud credential:

  • Provider: Amazon S3
  • Endpoint URL: https://storage.hummingtribe.com
  • Region: (see your account dashboard)
  • Signature Version: v4 (default)
  • Access Key / Secret Key: from your S3 console

Path-style vs vhost-style addressing is handled by rclone automatically for S3-compatible endpoints when you provide a custom endpoint URL — no separate setting is required on the TrueNAS side.

Why this is a fit for TrueNAS offsite: zero egress means your quarterly test restores don't incur surprise costs. EU-only data residency satisfies GDPR for media and SMB datasets without negotiating a DPA with a US provider. Flat per-TB pricing removes the per-request cost uncertainty that hurts Cloud Sync on hyperscalers (every small file = at least one PUT request, which adds up fast on photo libraries or source code).

Troubleshooting

Credential verification fails with RequestTimeTooSkewed. The TrueNAS clock has drifted; S3 request signatures are time-sensitive. Check System Settings → General → NTP Servers and force a time sync.

Credential verification fails with Access Denied. Access key lacks ListBucket or ListAllMyBuckets. For minimum-privilege keys, create the credential with a specific bucket name rather than relying on list-all-buckets; some providers let you skip the list step with explicit bucket configuration.

Credential verification fails with SignatureDoesNotMatch. Wrong secret key (most common), clock skew, or endpoint URL uses a trailing slash or path when it should be bare. Try https://endpoint.example.com with no trailing slash.

Task fails with context deadline exceeded on large files. rclone transfer timeout. In advanced options, lower the Transfers concurrency so each large file gets more bandwidth and finishes sooner, relax any Bandwidth Limit that is throttling too aggressively, or split the task by subfolder.

Task succeeds but files are missing in the bucket. Check the Exclude patterns — a too-broad glob (*.db, *tmp*) can match more than intended. Trigger the task from the shell (midclt call cloudsync.sync TASK_ID) and review the job log to see what rclone is skipping.

Task runs fine manually but fails on schedule. Most often a permissions issue with the admin account owning the task. In 25.04 and later, recreate the task under a Full Administrative Access account.

Restored files have wrong ownership/permissions. rclone preserves file content, not POSIX ACLs. For datasets with complex ACLs, Cloud Sync is not sufficient — combine with zfs send replication to a second TrueNAS for true data-fidelity backup.

What to do next

If you're evaluating providers for TrueNAS offsite, the three variables that matter most for Cloud Sync workloads are: per-request pricing (matters for datasets with many small files), egress cost (matters for test restores), and data residency (matters for GDPR if you handle EU personal data). HummingTribe S3 covers all three, with flat per-TB pricing, zero egress, and EU-only hosting in Germany.
