I have spent a lot of time trying out backup solutions, and I feel strongly enough about this one to warn others away from it. As other commenters mentioned, Duplicati is pretty unstable. Over several years I was never even able to finish the initial backup (less than 2 TB) on my PC. If you pause an ongoing backup, it never actually resumes properly.
I'd use restic or duplicacy if you need something that works well both on Linux and Windows.
Duplicati's advantage is that it has a nice web UI, but if the core features don't work... that's not very useful.
It seems hard to find a universal recommendation. I've heard good things about Arq, although it didn't work well for me personally; ironically, Duplicati did. I'm currently using Restic.
I've had a good experience with Kopia [0] for over a year. Linux and Windows boxes all writing to the same repository, which is synchronized to both B2 and a portable drive every night. The one thing it lacks that I'd like is error correction, so I store it on a RAID1 btrfs system. ECC is apparently being developed [1], but is currently experimental and I believe requires a new repository.
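For reference, the nightly mirror is basically two commands like these (the bucket name, key variables, and mount path are placeholders, and I'm going from memory on the exact flags):

    # mirror the kopia repository to the portable drive and to B2
    kopia repository sync-to filesystem --path /mnt/portable/kopia-repo
    kopia repository sync-to b2 --bucket my-kopia-bucket --key-id "$B2_KEY_ID" --key "$B2_KEY"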
I've had issues trying to use multiple different Kopia repos from one machine. (A dedicated back-up server basically)
With compression landing in the most recent Restic release, I'll probably switch back to that for my servers. Though I'm still keeping Kopia for my clients, where I like having a GUI once in a while.
After hearing a lot of praise for Arq here, I tried it out hoping it would become my new Windows backup solution. (I'm looking for a linux one too, but Arq doesn't do linux). But I was very underwhelmed. The user experience for browsing file versions in time was not really there. If I recall correctly, I could only browse by snapshot. And it was extremely slow for just a few gigabytes. The backup process didn't inspire confidence, I was never sure if something had interrupted it or what the status was.
I also recommend Arq, at least for Windows (I have not tried it on Mac). I'm using Arq 7 cloud (something like $60 a year) on a Windows desktop. The software is straightforward, generally stays out of your way, gives alerts when needed, is reliable, saves versions similar to Time Machine, and is fairly configurable. Backups are end-to-end encrypted and can be saved to Arq's own cloud service, any local media, or most other cloud services. I had lots of permission errors for a small bunch of files when starting out, but was able to fix them by either resetting permissions or excluding files (e.g., caches). I think these are the kinds of problems you can expect on Windows when using shadow copies; no reflection on Arq.
I have had similar experiences. I could not get a non-corrupt backup from one machine; it would repeatedly ask me to regenerate the local database from the remote, which never succeeded. Oddly, another machine never seemed to have an issue, but that's not an argument in favor of using the software. It is possible there are "safe" versions, but there's no way to identify them (all the releases I used were linked from the homepage).
Just another data point... I've been using it for about a year and a half to back up 1 TB, stored encrypted on Backblaze B2. I've tested restoring, and so far it's been very stable.
Just to balance this: I use Duplicati for both my web server, where I host client websites, and my personal home NAS.
I've had to use it to restore multiple times and have never had an issue with it. It's saved my ass multiple times. It's always been set-it-and-forget-it until I remember I need it.
Never tried Duplicati, but restic + B2 has been great as "a different choice", and for my use case of backing up a variety of OS's (Windows, Mac, and different Linux distros, anyway), it's worked great.
Restic and B2 "just work". Works how I expect it to, and restores what I expect it to. Not amazingly fast in backups or restorations, but it works reliable for me. I have restic running on everything from workstations and laptops, (~200G each), to servers (500G-2TB) to a mini 'data hoard' (25TB+) level of backups, and its been doing great on each.
I did not like and could not trust duplicati to finish backups or restore from them.
I had a very similar experience with Duplicati on a backup set that was small disk-space-wise but had a very large number of files, which bloated the SQLite data store.
I use Urbackup to back up Windows and Linux hosts to a server on my home network and then use Borg to back that up for DR. I'm currently in the process of testing Restic now that it has compression and may switch Borg out for that.
I've been using Borg for a while (successfully, with Vorta as the UI on Mac), and I'm curious to learn whether there is something I've been missing that restic provides.
You probably aren't missing anything unless you are doing ridiculously large amounts of backups. I'm using Borg as a disaster recovery backup of a backup server.
Borg has issues properly maintaining the size of its local cache, and that results in RIDICULOUS amounts of RAM being consumed at runtime unless I manually clear the cache out periodically. It also brings in some Python package for something FUSE-related that constantly vomits a warning to the console on each run on Ubuntu.
I'm still not 100% sold on migrating to Restic. It seems to not suffer the same cache or FUSE problem (since it isn't Python) so far but the overall archive sizes seem to be a bit larger than Borg and I have to pay for every byte of storage I consume.
At BorgBase.com the largest Borg repo we host is about 70 TB. Still manageable with server-side pruning. Mostly larger files from what the user told me.
We just added support for Restic too. Using Nginx with HTTP/2. Fastest combination I've seen so far. So very excited to offer two great options now.
How strange. I have been backing up my own computers (4) and those of my family (another 3) using Duplicati for over three years now, and aside from the very rare full-body derp that required a complete dump of the backup profile (once) and a rebuild of the remote backup (twice), it’s been working flawlessly. I do test restores of randomly chosen files at least once a year, and have never had an issue.
Granted, the backup itself errors out on in-use files (and just proceeds to the next file), but show me a backup program that doesn’t. Open file handles make backing up rather hard for anything that needs to obey the underlying operating system.
I started using Duplicati 2 about a month ago to try it out, and it has been working flawlessly for me, except for an occasional time-out of the web UI. I only back up local directories, and the destinations I tried include an external drive over USB, Google Drive, and an SSH connection.
I'm using it to back up a Firefox profile while I'm using Firefox. It backed up active files even as they were being written to! I'm also using it to back up a VeraCrypt container file (a single 24 GB file), and incremental backups worked quite well there too.
Thanks for the words of advice, I will keep testing longer before I make the switch.
I've looked around quite a bit too, but did you actually use restic and duplicacy?
They ate my RAM quite heavily; on data sets that weren't even that huge, they caused the machine to freeze up by exhausting memory, so I stopped using them a year or so ago.
I've come to the conclusion to use Borg and zfs as backup solutions (it's better to run multiple reliable, independent implementations). The latter is quite fast because, as a file system, it knows what changed between incremental backups, unlike other utilities that need to scan the entire data set to figure out what changed since the last run.
You can run a 1 GB memory instance and attach HDD-based block storage (far cheaper), such as on Vultr or AWS, for a cheap remote zfs target. Ubuntu gets zfs running easily by simply installing the zfsutils-linux package.
If you need a lot of space, rsync.net gives you a zfs target at $0.015/GB but with a 4 TB minimum commitment. It's also a good target for Borg at the same price, with a 100 GB minimum yearly commitment. Hetzner storage boxes and BorgBase seem good for that too.
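The replication itself is just a snapshot plus an incremental send over ssh, something like this (pool/dataset names, snapshot names, and the host are made up):

    # take tonight's snapshot
    zfs snapshot tank/data@today
    # send only the delta since the previous snapshot to the remote pool
    zfs send -i tank/data@yesterday tank/data@today | ssh backup-host zfs recv -u backup/data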
If you use restic/kopia, how are you managing scheduling and failure/success reporting together?
That's one thing I can't seem to quite figure out with those solutions. I know there are scripts out there (or I could try my own), but that seems error-prone and could result in failed backups.
I've tinkered with that using healthchecks, but I don't really trust that I know what I'm doing when setting it up.
Restic is also confusing to me when it comes to forgetting snapshots and doing cleanup; I don't understand why that isn't run as part of the backup (or is it? The docs aren't clear).
No, you have to run "restic forget" with the policy you want (keep last X, keep monthly Y, etc.), followed by a "restic prune". Or you can pass "--prune" to the "forget" command, I think.
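In practice it's a single invocation, something like this (the repo path and retention numbers are just examples):

    # drop old snapshots according to the policy, then prune unreferenced data
    restic -r /srv/backups/repo forget \
        --keep-last 7 --keep-weekly 4 --keep-monthly 12 \
        --prune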
You don't always want to forget/prune snapshots. Especially if you're using a cloud service like B2. It can easily cost you more to prune than actual storage costs if you're not careful.
On Linux I used cron + email. You can set up Postfix to relay through your personal Gmail or whatever, and then you can do echo "message" | mail -s "subject" you@example.com to send an email. The big email providers always allow you to send an email as yourself to yourself.
On Windows, I used the native Task Scheduler (with various triggers like time, workstation lock, idle, and so on) and sent the notification with PowerShell, which can also send email over SMTP.
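On the Linux side, the whole thing boils down to a crontab entry along these lines (the address, repo path, password file, and schedule are placeholders):

    # run nightly at 02:30 and mail stdout/stderr to yourself
    30 2 * * * restic -r /srv/backup/repo --password-file /root/.restic-pass backup /home 2>&1 | mail -s "restic backup" you@example.com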
Same here. I have a wrapper script that runs restic commands. Whether I run it in a console or via crontab, stdout/stderr is logged to a file and (in the crontab case) emailed to me. Nothing fancy yet, but it works and I am satisfied. Still pretty new to restic, though. In another life I had a disaster recovery role and was using DLT for backup/restore of all the things, so...
I read that Duplicati is also in beta (and has been for years now), which really seems discouraging. Restic looks great, but it's also at version 0.14 at the moment. Would you consider restic a stable product, despite the version number?
Restic's versioning doesn't denote that it's not production-ready: it absolutely is. Stable, reliable and developed thoughtfully, with data integrity and security in mind. I highly recommend it.
Could you provide your reasoning for the switch? I've had good enough luck with duplicacy but I'm curious about it vs restic now that restic supports compression.
Yes, it's stable. They even added compression this year. We just added support for Restic on BorgBase.com. Will have more user feedback in a few months, but first tests and benchmarks are pretty encouraging.
Even this late, the warning has to be issued: restic still has serious problems writing to Samba shares. To the authors' credit, the manual clearly tells you about that:
On Linux, storing the backup repository on a CIFS (SMB) share is not recommended due to compatibility issues.
There seems to be some deeper system-level problem with Go concurrency on such shares.
Duplicacy seems to upload every chunk as a separate object/file, which is great for deduplication but bad for your cloud bill (S3 providers usually charge for PUT requests). There's a reason everybody else packs up chunks.
I had a mixed experience. I've been able to successfully restore backups (the most important thing), but I frequently had to fix database issues, which makes the backup less seamless (perhaps the second most important thing).
I strongly advise people to not rely on Duplicati. Throughout its history, it's had a lot of weird, fatal problems that the dev team has shown little interest in tracking down while there is endless interest in chasing yet another storage provider or other shiny things.
Duplicati has been in desperate need of an extended feature freeze and someone to comb through the forums and github issues looking for critical archive-destroying or corrupting bugs.
"If you interrupt the initial backup, your archive is corrupted, but silently, so you'll do months of backups, maybe even rely upon having those backups" was what made me throw up my hands in disgust. I don't know if it's still a thing; I don't care. Any backup software that allows such a glaring bug to persist for months if not years has completely lost my trust.
In general there seemed to be a lot of local database issues where it could become corrupted, you'd have no idea, and worse, a lot of situations seemed to be unrecoverable - even doing a rebuild based off the 'remote' archive would error out or otherwise not work.
The duplicati team has exactly zero appreciation for the fact that backup software should be like filesystems: the most stable, reliable, predictable piece of software your computer runs.
Also, SSD users should be aware that Duplicati assembles each archive object on the local filesystem before uploading (extra local writes); on spinning rust, this significantly impacts performance.
Oh, and the default archive object size is comically small for modern usage and will cause significant issues if you're not using object storage (say, a remote directory). After just a few backups of a system with several hundred GB, you could end up with a "cripples standard Linux filesystem tools" number of files in a single directory.
And of course, there's no way to switch or migrate object sizes...
I had a terrible experience too. The UI is incredibly slow and personally, I had issues where the "local db" had to be constantly repaired. The tool is just buggy and doesn't work well IMO.
FWIW: I ran it on 3 separate Windows PCs for around 6 months without any real luck getting it to work consistently.
Does this discount also apply to the raw ZFS plans at rsync.net? Looking for a reliable and cost efficient place to push my ZFS snapshots via “zfs send”.
Backblaze is much cheaper, and can have free egress when using Cloudflare with it.
There is also Storj, a decentralized storage network with its own coin; it gives you 150 GB for free, then $4/TB, with free egress up to the amount you have stored.
Another one is IDrive E2: it's $4/TB, with the first year costing the same as a single month, and egress is free up to about three times the amount stored.
Hetzner's storage boxes are pretty cheap, but that's for a reason.
The upload speed is pretty slow from outside Hetzner's network (in my experience), and more importantly, the data is only protected by a single RAID cluster.
They do also offer free unlimited egress.
But I would personally go with Backblaze or maybe IDrive.
Or put a small computer with a disk at a friend's home and back up to that. It's cheaper than the cloud after one or two years (though less reliable), the network speed is probably OK, and you can have physical access. If the friend is a techy, it could be one among many other little computers in that home. You can reciprocate by hosting his/her backup at your home.
That's a charming idea; the question is how far away your friend lives. If it's too far, the upstream bandwidth of residential internet can be a problem during a restore.
Since I'm an IT person, my landlord asked me for recommendations for doing backups. Some googling revealed Duplicati and we gave it a go. Installation and configuration were easy and the features were sane. That was 6-7 years ago and it is still running without issue (AFAIK ^^)
Have you tested restores? The problem I had with duplicati was that eventually restoring from a backup would take exponentially longer, to the point of never finishing. Maybe it would have eventually, but I can't wait multiple days to restore one file. There's a possibility it was an error or problem on my end, and this was a couple of years ago, so ymmv.
I'm a new user of Duplicati and so far so good, but what you describe sounds like their biggest issue with the original storage mechanism (full+huge chain of incremental backups). The new mechanism would likely completely fix your concern. Here's a brief description of how it now works on their website: https://www.duplicati.com/articles/Storage-Engine/
The one full-backup restore I did on my wife's system - after her MacBook Air decided to fry its storage (it was obsolete anyhow) - went perfectly. 23 GB of personal files (she's not the data pack rat I am) came streaming back down inside of 20 hours. And we were on a much slower connection at the time, certainly not the symmetrical gigabit we have now.
Duplicacy has been incredibly stable for me over the years, and I still prefer its lock-free deduplication design. Looks like there was a major release 28 days ago as well. Time to upgrade. :)
Agreed, duplicacy seems to be more resilient to the inevitable errors or hiccups along the way. The only downside is that it seems to be inefficient with storage with small metadata updates which happen frequently with my use case.
Another happy long-term Duplicacy user here. My only problem with it is that on the rare occasions I need to restore something from backup, I can never remember the correct syntax and always have to look it up again.
Same. I ended up writing the steps in a file because I could never remember them. It's not very complicated but a bit counter-intuitive: instead of pulling everything from the remote you first have to recreate "repository" with the same parameters as the original one, and _then_ run the restore command.
I had the exact same reaction. I have little "README.md" text files scattered about to remind me how to do things and never thought to make an interactive post-it note.
They have some major differences. Enough so that I first tried Duplicati and ran into corruption issues so frequently that I sought out an alternative and luckily found Duplicacy.
Duplicacy has been stable for years now and I gladly pay for the commercial license. It seems like Duplicati constructs a giant DB of all the files and manages everything that way, whereas Duplicacy's approach is much simpler and less prone to corruption. The large-DB approach seems to fail when the backup set contains the large number of files that many users have.
That's right -- Duplicati is the one that constructs the giant house-of-cards DB. I sometimes need to run ps -ax to remember which one I'm using when it comes time to change the config.
I occasionally use restic but one thing I don't like about it is the sheer number of data files it creates (45k for ~800GB in my case) which makes it a pain to use with certain cloud storage providers that don't always handle tens of thousands of files very well (gdrive being a good example).
Is there some way to get it to not make as many files?
I've used restic with the backblaze and S3 backends - works pretty well for me. The newest version also has compression on top of deduplication, like borg, which is nice. (Of course, it will only make a difference for compressible data - most images or videos won't compress, but say, JSONs will).
Same. Database corruption hit me after ~1.5 years and I could never figure out what the cause was or how to fix it. Which is a shame, because Duplicati looks like a great open source project with a lot of dev time and effort invested into it. But when it comes to backup software, your core functionality better work reliably, and Duplicati just isn't there. I since switched to Duplicacy and couldn't be happier.
If you plan to use Duplicati, please pay attention to the docs around block size. We used it to back up a couple hundred GB of data to S3. Recovery was going to take over 3 days to reassemble the blocks based on the default 100KB block size. For most applications you will want at least 1MB, if not more.
Otherwise a good product and has been reliable enough for us.
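If you drive it from the command line, the block size is set when you create the backup, roughly like this (I'm going from memory on the exact option names, and the URL and paths are placeholders):

    # larger blocks mean far fewer objects to reassemble on restore
    duplicati-cli backup "s3://my-bucket/backups" /data --blocksize=1MB --dblock-size=250MB
    # note: the block size cannot be changed after the first backup run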
Many years ago I was a happy user of CrashPlan as the data was also easily accessible, but when they stopped their private user plans I looked into several solutions (Duplicati, duplicacy, and some others too). restic was the only one light enough for me to use consistently, which is a critical thing about backups.
While there's probably some overlap for certain use cases, I'd say they're more complementary. In fact, Restic leverages rclone to support a lot of cloud storage services. Restic is specifically meant as a backup tool and does encryption, deduplication, snapshots, and now apparently compression. Rclone is more of a synchronization tool/copying tool (which could also be used to make backups), more like rsync or even just cp (but with cloud storage support).
Rclone is more for syncing than backups. It's great for moving files between storage systems and syncing one path to a destination. Some backup tools use it for uploading, etc.
Rclone is like rsync for the cloud: you can sync files to Google Drive or another service. And like CCC, it can archive deleted files as a safety net. I love the simplicity; no deltas, snapshots, or restore procedures, the files are just there on the destination.
I use restic to back up to a local drive, then use cloud storage to back up the repos. I know restic supports some cloud backends directly, but this approach seems more decoupled and less prone to errors/hangs.
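Concretely it's just two steps, roughly like this (the paths, bucket, and rclone remote name are placeholders):

    # 1. back up into a repository on a local drive
    restic -r /mnt/backup/restic-repo backup /home/me
    # 2. mirror the whole repository to cloud storage afterwards
    rclone sync /mnt/backup/restic-repo b2remote:my-bucket/restic-repo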
The thing that made me want to post is this bullshit on the download page:
Download Duplicati 2.0 (beta)
or
Look at 1.3.4 (not supported)
I work at a big tech company and see this all the time: if your "new shiny promotion-ready" version is not ready, why the hell would you drop support for the old version that works? I'd stay away from any products operated by such irresponsible teams.
Yeah, I know, lots of biased guesses/views and sentiment on my part, but you get how much this angers me.
Thanks! seems great to me.
It's going to switch KopiaUI from Electron to a Go binary plus browser. I thought its server already provides a browser UI, so I'm not sure why it needs a new desktop UI that is also browser-based. Why both?
Restic is CLI focused whereas Duplicati is GUI focused. Restic is based around repositories, which can contain multiple backups from multiple sources, whereas Duplicati's backups are not (although the actual backup format is similarly broken up into lots of small blocks).
+1 for Restic. What I ended up doing was writing a script on my home server, which I called `backupctl`, driven by a file that specifies the set of directories under the home directories to back up. This wound up being a good way to separate the files whose loss would merely be an annoyance (i.e. a house fire isn't catastrophic) from the "irreplaceable" ones (precious memories), which I want to ship off to Restic.
For things like family photos this works really well since if we copy them all over the place across devices, restic will still deduplicate them down to just 1 record when it gets uploaded to Backblaze.
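The script itself is nothing fancy; a stripped-down sketch of the idea (file locations, repo, and names are all made up):

    #!/bin/sh
    # backupctl (sketch): back up the directories listed in a plain-text file
    export RESTIC_REPOSITORY=b2:my-bucket:home-backups
    export RESTIC_PASSWORD_FILE=/etc/backupctl/password
    # dirs.txt contains one directory per line, e.g. /home/us/photos
    restic backup --files-from /etc/backupctl/dirs.txt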
I've had a good experience with [crestic](https://github.com/nils-werner/crestic), even though it seems a lot smaller and simpler than autorestic.
But I really like how the same backup can be configured for different backends. Autorestic's setup seemed more complicated in comparison.
There’s also resticprofile which takes care of scheduling (with launchd on macOS) and maintenance tasks for restic. I especially enjoy that resticprofile can create a prom file for the backup status that I can just scoop up to my monitoring.
I've been using Autorestic, but it has an issue: it keeps modifying the YAML file on its own, adding an invalid config option, which causes the backups to fail.
Not a good thing for something that's supposed to run in the background and keep things backed up.
I use ZFS snapshots and send/replication. This has been the easiest and most reliable backup solution for everything. I especially enjoy taking backup of SQL Server with ZFS with the new snapshot feature in SQL Server 2022 "ALTER DATABASE MyDatabase SET SUSPEND_FOR_SNAPSHOT_BACKUP = ON";
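The nightly job is essentially freeze, snapshot, unfreeze, then replicate. A rough sketch (database, dataset, and host names are made up; check the SQL Server docs for the exact unfreeze/metadata-backup step):

    # freeze SQL Server writes so the zfs snapshot is consistent
    sqlcmd -Q "ALTER DATABASE MyDatabase SET SUSPEND_FOR_SNAPSHOT_BACKUP = ON"
    zfs snapshot tank/sqldata@nightly
    sqlcmd -Q "ALTER DATABASE MyDatabase SET SUSPEND_FOR_SNAPSHOT_BACKUP = OFF"
    # replicate the snapshot to the backup box (use dated snapshot names in practice)
    zfs send tank/sqldata@nightly | ssh backup-host zfs recv -u backup/sqldata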
I'm a happy user. I use it as a solution to back up specific user folders on a Windows system to an SMB network share. It's been chugging along for years now, I've even done a few recoveries, and I've never had a problem. I'm surprised to read the other reviews here.
Have been using this for years, it has its quirks but it works and it costs me next to nothing - I keep looking at possible alternatives but so far haven’t shifted.
It's a shame Duplicati runs quite poorly (I have the same experience). I moved to restic with the autorestic wrapper and configured notifications through another method for both failures and successful backups.
That second option works amazingly well and is much quicker, more reliable, and offers more control than Duplicati. But it's much harder and more time-consuming to set up, requiring timers, scripts, and setting up notifications. For people new to self-hosting, reliable incremental off-site backups can be a right pain. How many poorly tested cron jobs that fail to create backups, and that nobody will act on, are running right now? At least the Duplicati GUI gives you a glanceable view of its backup failures.
I have been using Kopia to back up all of my laptops' home dirs to a Raspberry Pi for at least a couple of years now. There is a CLI and a UI. The UI is somewhat funky and could benefit from an "easy mode" a la Time Machine, but it does work. I restored my home dir from it just the other day when migrating from one OS to another. My favorite thing about Kopia is that it performs incremental backups on tens of GB _much_ quicker than plain rsync can, and it is much more space-efficient to boot.
I've been using Duply as a simple CLI front end to Duplicity (not Duplicati) for years now. It's worked great for me on many servers and personal machines.
I just started using Duplicati last week as my backup for ~900GB worth of photos, music, and other assorted data in an Ubuntu RAID1 array to Backblaze B2. I noticed it was a little sketchy when I poked at it (i.e. pausing the backup), but didn't realize it was so unstable. The initial backup did finish.
I don't use it. I've tried; it's a large, bloated, unstable program in Docker, and when installing natively there are more dependencies than there is actual backup software. It would quadruple the size of my install on a RasPi.
I use restic on servers and Syncthing set to one-way sync for basic folders.
Does anyone have experience with using regular backup software in conjunction with reverse-encrypting filesystems like gocryptfs, eCryptFS, or encFS? I.e., mount the plaintext directory as a reverse-mode ciphertext directory and back up the cipher one.
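Roughly this, to make the question concrete (a sketch with gocryptfs reverse mode; the paths and host are made up):

    # one-time setup: create the reverse-mode config for the plaintext directory
    gocryptfs -init -reverse ~/documents
    # expose an on-the-fly encrypted view of the plaintext
    mkdir -p /tmp/cipher
    gocryptfs -reverse ~/documents /tmp/cipher
    # back up / sync only the ciphertext view
    rsync -a /tmp/cipher/ backup-host:/backups/documents.cipher/
    fusermount -u /tmp/cipher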
I do something like this. EncFS a local clone of my data, and rsync that clone to remote servers.
I prefer this over complex formats created by software like duplicati. This is easier to recover 20 years from now (I just went through that with a USB stick from 2001 :P )
I used Duplicati for a few years. The backup process would fail silently every once in a while and wouldn't run again until I manually reset it. It did save me once after a storage device failure. Now I just put stuff I want to back up in Dropbox or git.
My interest was piqued, so I started going through the issues tagged with 'bug' from oldest to newest. I got 100 in and... well, Jiminy Cricket... I was still in 2017. Think I'll pass.
Borg doesn't have a Windows version, for example. Borg is also command line only, while Duplicati has a nice graphical UI - by running a web server on localhost.
I've been using Vorta as a GUI for Borg for a while for personal use. It's not the prettiest thing out there but it has seemed to work fine so far. As far as restoring from old backup goes, though, I've only really tried that with a few individual files.
Yet another recommendation: Borg backup to a local server daily, then rclone to S3 (or another cloud provider) to back up the whole local backup-server repo weekly, or something like that...
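Sketched out, that's one daily job on each client and one weekly job on the backup server (repo path, remote name, and directories are just examples):

    # daily, on each machine: push an archive to the local backup server over ssh
    borg create --stats ssh://backupserver/srv/borg/myrepo::{hostname}-{now} /home /etc
    # weekly, on the backup server: mirror the whole repo to an S3-type rclone remote
    rclone sync /srv/borg/myrepo s3remote:my-bucket/borg/myrepo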
Still beta right? What fool trusts their backups to beta software? I tried this many years ago and it started failing eventually and I gave up. As expected, it's beta.
Interesting that Cryptomator hasn't been mentioned so far. I've been thinking about setting it up to work with my 2 TB GDrive. Does anybody know how it compares?
Wait. The title uses the word free, but you write that it requires payment. Looking at the website I can't find a way to pay for it other than a donation.