Duplicati: Free backup software to store encrypted backups online (duplicati.com)
278 points by memorable on Nov 3, 2022 | 188 comments



I have spent a lot of time trying out backup solutions, and I feel strongly enough about this one to write a warning to stop others from using it. As other commenters mentioned, Duplicati is pretty unstable. Over several years I was never even able to finish the initial backup (less than 2 TB) on my PC. If you pause an ongoing backup, it never actually works again.

I'd use restic or duplicacy if you need something that works well both on Linux and Windows.

Duplicati's advantage is that it has a nice web UI but if the core features don't work.. that's not very useful.


Also can't recommend duplicati. I never got it to work despite sinking many hours into it using different storage options. Not even local disk worked.

Instead, I'd recommend Arq backup.


It seems hard to find a universal recommendation. I've heard good things about Arq although it didn't work well for me personally whereas ironically Duplicati did, although I'm currently using Restic.


I've had a good experience with Kopia [0] for over a year. Linux and Windows boxes all writing to the same repository, which is synchronized to both B2 and a portable drive every night. The one thing it lacks that I'd like is error correction, so I store it on a RAID1 btrfs system. ECC is apparently being developed [1], but is currently experimental and I believe requires a new repository.

[0] https://github.com/kopia/kopia

[1] https://github.com/kopia/kopia/pull/2308


I've had issues trying to use multiple different Kopia repos from one machine. (A dedicated back-up server basically)

With compression landing in the most recent Restic release, I'll probably switch back to that for my servers. Though I'm still keeping Kopia for my clients, where I like having a GUI once in a while.


After hearing a lot of praise for Arq here, I tried it out hoping it would become my new Windows backup solution. (I'm looking for a linux one too, but Arq doesn't do linux). But I was very underwhelmed. The user experience for browsing file versions in time was not really there. If I recall correctly, I could only browse by snapshot. And it was extremely slow for just a few gigabytes. The backup process didn't inspire confidence, I was never sure if something had interrupted it or what the status was.


I also recommend Arq, at least for Windows (I have not tried it on Mac). I'm using Arq 7 cloud (something like $60 a year) on a Windows desktop. The software is straightforward, generally stays out of your way, gives alerts when needed, is reliable, saves versions similar to Time Machine, and is fairly configurable. Backups are end-to-end encrypted and can be saved to Arq's own cloud service, any local media, and most other cloud services. I had lots of permission errors for a small bunch of files when starting out, but was able to fix them by either resetting permissions or excluding files (e.g., caches). I think these are the kind of problems you can expect on Windows when using shadow copy, no reflection on Arq.


Arq on windows for me just stalled forever and didn't complete anything after 2-3 weeks.


Same for Duplicati running in Docker, as well as TimeMachine on macOS. Due to this thread I've swapped to Restic/Rclone.


I have had similar experiences. I could not get a non-corrupt backup from one machine; it would repeatedly ask me to regenerate the local database from remote, which never succeeded. Oddly, another machine never seemed to have an issue, but that's not an argument in favor of using the software. It is possible there are "safe" versions, but I have no way to identify them (all the releases I used were linked from the homepage).


I had a similar experience with Duplicati. I attempted a 2TB backup of my NAS to cloud storage and it got up to ~500GB and would just hang there.

I switched to restic and recommend it over Duplicati.


Just another data point... I've been using it against 1TB, storing encrypted backups to Backblaze B2, for about a year and a half. I've tested restoring and so far it's been very stable.


Just to balance this. I use duplicati for both my web server where I host client websites, and my personal home nas.

I've had to use it to restore multiple times, and have never had an issue with it. It's saved my ass multiple times. It's always been a set it and forget it until I remember I need it.


Never tried Duplicati, but restic + B2 has been great as "a different choice", and for my use case of backing up a variety of OS's (Windows, Mac, and different Linux distros, anyway), it's worked great.


Restic and B2 "just work". It works how I expect it to, and restores what I expect it to. Not amazingly fast in backups or restorations, but it works reliably for me. I have restic running on everything from workstations and laptops (~200G each), to servers (500G-2TB), to a mini 'data hoard' (25TB+) level of backups, and it's been doing great on each.

I did not like and could not trust duplicati to finish backups or restore from them.


I'll throw a +1 in for Duplicacy too. I think I'm backing up something like 8TB to Wasabi using it and it's excellent in terms of de-duplication.


I had a very similar experience with Duplicati on a backup set that was small disk-space-wise but had a very large number of files, which bloated the SQLite data store.

I use Urbackup to back up Windows and Linux hosts to a server on my home network and then use Borg to back that up for DR. I'm currently in the process of testing Restic now that it has compression and may switch Borg out for that.


What does restic offer that borg doesn't?

I've been using borg for a while (successfully, with Vorta as UI on mac) and curious to learn if there is something I've been missing that restic provides.


You probably aren't missing anything unless you are doing ridiculously large amounts of backups. I'm using Borg as a disaster recovery backup of a backup server.

Borg has issues properly maintaining the size of its local cache and that results in RIDICULOUS amounts of ram being consumed at runtime unless I manually clear the cache out periodically. It also brings in some python package for something FUSE related that constantly vomits a warning to the console on each run on Ubuntu.

I'm still not 100% sold on migrating to Restic. It seems to not suffer the same cache or FUSE problem (since it isn't Python) so far but the overall archive sizes seem to be a bit larger than Borg and I have to pay for every byte of storage I consume.


At BorgBase.com the largest Borg repo we host is about 70 TB. Still manageable with server-side pruning. Mostly larger files from what the user told me.

We just added support for Restic too. Using Nginx with HTTP/2. Fastest combination I've seen so far. So very excited to offer two great options now.


The main thing I was going to mention was deletion but it looks like borg has that now.


How strange. I have been backing up my own computers (4) and those of my family (another 3) using Duplicati for over three years now, and aside from the very rare full-body derp that required a complete dump of the backup profile (once) and a rebuild of the remote backup (twice), it’s been working flawlessly. I do test restores of randomly chosen files at least once a year, and have never had an issue.

Granted, the backup itself errors out on in-use files (and just proceeds to the next file), but show me a backup program that doesn’t. Open file handles make backing up rather hard for anything that needs to obey the underlying operating system.


I started using Duplicati 2 about a month ago to try it out, and it has been working flawlessly for me, except for an occasional time-out of the web UI. I only back up local directories, and the destinations I tried out include an external drive over USB, Google Drive, and an SSH connection.

I'm using it to back up a Firefox profile while I'm using Firefox. It backed up active files as they were being written, too! I'm also using it to back up a Veracrypt container file (a single 24GB file), and incremental backups worked quite well too.

Thanks for the words of advice, I will keep testing longer before I make the switch.


Agree duplicati is quite immature.

I've looked around quite a bit too but did you actually use restic and duplicacy?

They both ate my RAM quite heavily; on data sets that weren't even that large, they exhausted memory and froze the machine, so I stopped using them a year or so ago.

I've settled on Borg and ZFS as backup solutions (it's better to run multiple reliable, independent implementations). The latter is quite fast because, as a filesystem, it already knows what changed for each incremental backup, unlike utilities that have to scan the entire dataset to figure out what changed since the last run.

You can run a 1 GB memory instance and attach HDD-based (far cheaper) block storage (on Vultr or AWS, for example) for a cheap remote ZFS target. Ubuntu gets ZFS running easily by simply installing the zfsutils-linux package.
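As a sketch of how little setup that needs on Ubuntu (the device name and pool layout are placeholders):

  sudo apt install zfsutils-linux
  sudo zpool create backup /dev/vdb     # the attached HDD block volume
  sudo zfs create backup/laptop         # one dataset per machine to receive snapshots into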

If you need a large space, rsync.net gives you a ZFS target at $0.015/GB, but with a 4TB minimum commitment. It's also a good target for Borg at the same price, but with a 100GB minimum yearly commitment. Hetzner storage box and BorgBase seem good for that too.


If you use restic/kopia, how are you managing scheduling and failure/success reporting together?

That's one thing I can't seem to quite figure out with those solutions. I know there are scripts out there (or I could try my own), but that seems error-prone and could result in failed backups.


You could use one of those services that expect a regular http heartbeat. I'm personally using uptimerobot for that. Within a .bat or .sh file, add a

  restic [...] && curl <heartbeat-url>
and you'll eventually get notified if the backup job fails too often.
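A slightly fuller sketch of such a wrapper, assuming nothing beyond stock restic and curl (the repository, password file and ping URL are placeholders):

  #!/bin/sh
  # restic reads these from the environment
  export RESTIC_REPOSITORY="b2:my-bucket:hostname"
  export RESTIC_PASSWORD_FILE="$HOME/.restic-password"

  # ping the heartbeat URL only if the backup succeeded
  if restic backup /home /etc; then
      curl -fsS --retry 3 "https://heartbeat.example/ping/abc123"
  fi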


I've tinkered with that using healthchecks, but I don't really trust that I know what I'm doing when setting it up.

Restic is also confusing to me with forgetting snapshots and doing cleanup, I don't understand why that isn't run as part of the backup (or is it? The docs aren't clear).


no, you have to run "restic forget" with the policy you want (keep last X, last monthly Y, etc.) followed up with a "restic prune". Or you can pass "--prune" to the "forget" command I think.

You don't always want to forget/prune snapshots. Especially if you're using a cloud service like B2. It can easily cost you more to prune than actual storage costs if you're not careful.

See here: https://www.seanh.cc/2022/04/03/restic/#maintaining-your-bac...

and

https://kdecherf.com/blog/2018/12/28/restic-and-backblaze-b2...
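For reference, the policy the parent describes looks roughly like this (the keep counts are just an example):

  # keep 7 daily, 4 weekly and 12 monthly snapshots, then prune unreferenced data in one go
  restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --prune

  # or split the steps, since prune is the expensive part on providers like B2
  restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 12
  restic prune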


Thanks for the links! That's helpful, the part about B2 makes sense.


Yeah I had to invent my own.

On Linux I used cron + email. You can set up postfix to relay through your personal Gmail or whatever; then you can do echo "message" | mail -s "subject" you@example.com to send an email. They (the big email providers) always allow you to send an email as yourself to yourself.

On Windows, I used the native Task Scheduler (with various triggers like time, lock workstation, idle and so on) and send an email using PowerShell, which can also send emails over SMTP.
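Roughly what the Linux side looks like, as a sketch (the script path and address are placeholders, and it assumes postfix is already relaying through your provider):

  # crontab -e
  # run the backup nightly at 02:30 and mail the log to yourself
  30 2 * * * /usr/local/bin/backup.sh >/tmp/backup.log 2>&1; mail -s "backup $(hostname)" me@example.com < /tmp/backup.log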


Same here. I have a wrapper script that runs restic commands. Whether I run it in a console or via crontab, stdout/stderr is logged to a file and emailed to me (in the crontab case). Nothing fancy yet, but it works and I am satisfied. Still pretty new to restic though. In another life I had a disaster recovery role and was using DLT for backup/restore of all the things, so ...


yeah, I scripted my backup jobs, and use good old email notifications to report.

I expect an email every day. If I don't receive one, I know there's a problem with email delivery.


I read that Duplicati is also in beta (and has been for years now), and that really seems discouraging. Restic looks great, but it's also only at 0.14 at the moment. Would you consider restic a stable product, despite the version number?


Restic's versioning doesn't denote that it's not production-ready: it absolutely is. Stable, reliable and developed thoughtfully, with data integrity and security in mind. I highly recommend it.


I've used restic for years now without issue. I'd definitely consider it stable.

I started with duplicacy and moved to restic.


Could you provide your reasoning for the switch? I've had good enough luck with duplicacy but I'm curious about it vs restic now that restic supports compression.


To me, it shows "beta" and "not supported" options.. so it's hard to choose :)


Yes, it's stable. They even added compression this year. We just added support for Restic on BorgBase.com. Will have more user feedback in a few months, but first tests and benchmarks are pretty encouraging.


Restic is rock solid. I have backed up servers with TBs of data with it. It has never failed.

Encryption is properly implemented.


I've been using it since 2018, no issues so far.


Even if it comes late, this warning has to be issued: restic still has serious problems writing to Samba shares. To the authors' credit, the manual clearly tells you about that:

On Linux, storing the backup repository on a CIFS (SMB) share is not recommended due to compatibility issues.

There seems to be some deeper system-level problem with Go concurrency:

https://github.com/restic/restic/issues/2659


I agree. I really liked the interface and gave it a go at least 3 or 4 times, and got burned every single time with errors or random issues.


Duplicacy seems to upload every chunk as a separate object/file, which is great for deduplication but bad for your cloud bill (S3 providers usually charge for PUT requests). There's a reason everybody else packs up chunks.


I had a mixed experience. I've been able to successfully restore backups (the most important thing), but I frequently had to fix database issues, which makes the backup less seamless (perhaps the second most important thing).


Duplicacy has worked well for several years on both my wife's and mother's laptops. Doesn't require much work and just keeps operating.


Adding to the choir. I like the web UI of Duplicati but found it buggy and unstable, which are definitely not things you want in a backup system.


In my experience, Duplicacy is the most stable backup software in the Dupli* family. I won't say it's rock solid, but it mostly works.


Agree totally with this. It's a hot mess tbh and very unreliable. As suggested restic (with autorestic as a wrapper) is a great replacement.


It's hard to see restic as a Duplicati replacement when there's no official documentation about backing up data via SFTP on Windows.


What do you mean? It’s just “sftp” in front of the repository name!

And SFTP is SFTP, regardless of the OS.
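For example (the host and path are made up; as far as I know restic just shells out to an ssh client, which recent Windows ships with):

  restic -r sftp:user@backup.example.com:/srv/restic-repo init
  restic -r sftp:user@backup.example.com:/srv/restic-repo backup C:\Users\me\Documents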


What "SFTP" do I have on Windows?


For client, WinSCP, and more.


And how do I use it with restic? This is what I'm talking about.


I too had huge problems with Duplicati restoring. Switched to Borg, using Vorta as the GUI and am much happier.


I did use it. It worked 90% of the time. I backed up to OneDrive. I just ended up getting Veeam.


I strongly advise people to not rely on Duplicati. Throughout its history, it's had a lot of weird, fatal problems that the dev team has shown little interest in tracking down while there is endless interest in chasing yet another storage provider or other shiny things.

Duplicati has been in desperate need of an extended feature freeze and someone to comb through the forums and github issues looking for critical archive-destroying or corrupting bugs.

"If you interrupt the initial backup, your archive is corrupted, but silently, so you'll do months of backups, maybe even rely upon having those backups" was what made me throw up my hands in disgust. I don't know if it's still a thing; I don't care. Any backup software that allows such a glaring bug to persist for months if not years has completely lost my trust.

In general there seemed to be a lot of local database issues where it could become corrupted, you'd have no idea, and worse, a lot of situations seemed to be unrecoverable - even doing a rebuild based off the 'remote' archive would error out or otherwise not work.

The duplicati team has exactly zero appreciation for the fact that backup software should be like filesystems: the most stable, reliable, predictable piece of software your computer runs.

Also, SSD users should be aware that Duplicati assembles each archive object on the local filesystem. On spinning rust, it significantly impacts performance.

Oh, and the default archive object size is comically small for modern usage and will cause significant issues if you're not using object storage (say, a remote directory). After just a few backups of a system with several hundred GB, you could end up with a "cripples standard Linux filesystem tools" number of files in a single directory.

And of course, there's no way to switch or migrate object sizes...


I had a terrible experience too. The UI is incredibly slow and personally, I had issues where the "local db" had to be constantly repaired. The tool is just buggy and doesn't work well IMO.

FWIW: I ran it on 3 separate Windows PCs for around 6 months without any real luck getting it to work consistently.


This looks interesting, thanks for all those warnings, I will stay away from it for now.

However the next question would always be which cloud provider to use.

Is OVH cloud archive the cheapest cloud storage for backups in europe? It lets me use scp or rsync, among others.

They are charging(§) $0.011/GB for traffic and $0.0024/month/GB for storage.

So if my total backup is 100 GB and I upload 5 GB per day of incremental backups, I pay around $2 per month (100 GB × $0.0024 ≈ $0.24 for storage, plus ~150 GB × $0.011 ≈ $1.65 for traffic).

--

§ https://www.ovhcloud.com/en/public-cloud/prices/#473


"Is OVH cloud archive the cheapest cloud storage for backups in europe? It lets me use scp or rsync, among others."

OVH may, indeed, be the cheapest.

If you email[1] and ask for the long-standing "HN Reader Discount" you can get $0.01/GB storage and free usage/bandwidth/transfer.

Zurich Equinix ZH4 on init7 pipes.

Depending on your preference either [2] or [3] may be the most compelling aspect of our service.

[1] info@rsync.net

[2] https://news.ycombinator.com/item?id=26960204

[3] https://www.rsync.net/products/universal.html


Does this discount also apply to the raw ZFS plans at rsync.net? Looking for a reliable and cost efficient place to push my ZFS snapshots via “zfs send”.


Yes, although larger (4TB) account size minimum still applies.

Email us ...


Great offer, thanks. Is there an open source backup software you recommend to your clients for encrypted backups?


borg:

https://borgbackup.readthedocs.io/en/stable/

A good description of how it works and why you should use it is here:

https://www.stavros.io/posts/holy-grail-backups/

Here are a few rsync.net-specific HOWTOs that have been created:

https://www.rsync.net/resources/howto/borg.html


It's pretty hard to beat Hetzner Storage Boxes, if you can live with the fixed provisioning (beyond being able to switch between the tiers).

https://www.hetzner.com/storage/storage-box


> It's pretty hard to beat Hetzner Storage Boxes

They had a recent change in pricing ... did you take that into account?


The data resiliency is pretty weak. It's only a single RAID array away from losing data.


This is true, but I already try not to rely too much on one backup target, so it works out for me as yet another replica.


Does Hetzner have a service that might work for ZFS receive (beyond a dedicated server)?


Far from it really.

Backblaze is much cheaper, and can have free egress when using Cloudflare with it.

There is also Storj, a decentralized storage network with its own coin; it gives 150 GB for free, then $4/TB, with free egress matching what you store.

Another one is IDrive E2 at $4/TB, with the first year costing the same as a single month, and free egress up to about three times the size of what's stored.

Hetzner's storage boxes are pretty cheap, but that's for a reason.

The upload speed is pretty slow outside Hetzner's network (in my experience), and more importantly, the data is only protected by a single RAID array.

They do offer free unlimited egress, though.

But I would personally go with Backblaze or maybe IDrive.


Sorry, but I asked for a European offering; Backblaze is a US company, as is IDrive. I should have been less ambiguous when I wrote "in europe".


Then yeah, Hetzner or Contabo Object storage seem to be the way to go.


Well, OVH is a similar price as Hetzner but offers a real storage service (not just DIY) and Contabo is 4x as expensive.


Or a small computer with a disk at a friend's home, and back up to that. It's cheaper than the cloud after one or two years, though less reliable; network speed is probably OK, and you can have physical access. If the friend is a techy it could be one among many other little computers in that home. You can reciprocate by hosting his/her backup at your home.


Yeah this is what I do.. One at a friend's house in his rack, the other one elsewhere with an external drive on a raspberry zero 2 :P

The good thing is you can add more storage. The bad thing is no enterprise class guarantees of course. But having multiple mitigates that.


That's a charming idea; the question is how far away your friend lives. If it's too far, the upstream bandwidth of residential internet can be a problem during a restore.


Cheap and dirty: Office 365 family plan with 6 accounts at 1 TB each for around $60/year.


Seems to be $100 a year now.


Since I'm an IT person, my landlord asked me for recommendations for doing backups. Some googling revealed Duplicati and we gave it a go. Installation + configuration was easy and the features were sane. That was 6-7 years ago and it is still running without issue (AFAIK ^^)


Have you tested restores? The problem I had with duplicati was that eventually restoring from a backup would take exponentially longer, to the point of never finishing. Maybe it would have eventually, but I can't wait multiple days to restore one file. There's a possibility it was an error or problem on my end, and this was a couple of years ago, so ymmv.


I'm a new user of Duplicati and so far so good, but what you describe sounds like their biggest issue with the original storage mechanism (full+huge chain of incremental backups). The new mechanism would likely completely fix your concern. Here's a brief description of how it now works on their website: https://www.duplicati.com/articles/Storage-Engine/


The one full-backup restore I did on my wife’s system - after her MacBook Air decided to fry its storage (it was obsolete anyhow) - went perfectly. 23 GB of personal files (she’s not the data pack rat I am) came streaming back down inside of 20 hrs. And we were on a much slower connection at the time, certainly not the symmetrical gigabit that we have now.


Yes, we did test that and it worked reasonably fast (backup to an external USB SSD).


> running without issue (AFAIK ^^)

If you don't know, then it's not working. At least that should be your stance on backups.


Not to be confused with duplicity, or duplicacy backup programs which have similar features.


Duplicacy has been incredibly stable for me over the years and I still prefer its lock-free deduplication design. Looks like there was a major release 28 days ago as well. Time to upgrade. :)

https://github.com/gilbertchen/duplicacy


Agreed, duplicacy seems to be more resilient to the inevitable errors or hiccups along the way. The only downside is that it seems to be storage-inefficient with small metadata updates, which happen frequently in my use case.


Another happy long-term Duplicacy user here. My only problem with it is that on the rare occasions I need to restore something from backup, I can never remember the correct syntax and always have to look it up again.


Same. I ended up writing the steps in a file because I could never remember them. It's not very complicated but a bit counter-intuitive: instead of pulling everything from the remote, you first have to recreate the "repository" with the same parameters as the original one, and _then_ run the restore command.
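From memory, and hedged accordingly (the snapshot id, storage URL and revision number are placeholders), the two steps are roughly:

  # 1. re-create an empty repository pointing at the same storage and snapshot id
  mkdir restore && cd restore
  duplicacy init my-snapshot-id b2://my-bucket

  # 2. then pull the files from the revision you want
  duplicacy restore -r 42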


Make a script!


Or a do-nothing script if you don't want to automate it right away.

https://blog.danslimmon.com/2019/07/15/do-nothing-scripting-...


Never seen that concept before. I've done similar in the past using aliases in my ~/.zshrc

  alias taskname='echo "blah. blah... instructions for task..."'
Of course, then I have to remember what mnemonic I used for 'taskname'!


I had the exact same reaction. I have little "README.md" text files scattered about to remind me how to do things and never thought to make an interactive post-it note.


Note, it's not open source. Duplicati is.


It seems to be open source (the source code is on GitHub), but the license isn't free for commercial use. I'm not exactly sure what to call it...


Source-available, I guess.


Proprietary Source available


Source available


They have some major differences. Enough so that I first tried Duplicati and ran into corruption issues so frequently that I sought out an alternative and luckily found Duplicacy.

Duplicacy has been stable for years now and I gladly pay the commercial license. It seemed like Duplicacy constructs a giant DB of all the files and manages everything that way, whereas Duplicacy's approach is much simpler and is less prone to corruption. The large DB approach seems to fail when the backup set contains a large number of files that many users manage.


> It seemed like Duplicacy

Duplicati?

----

These names are always a mess. I half the time quit comparing these tools due to not being able to keep the names straight.


That's right -- Duplicati constructs the giant house-of-cards DB. I sometimes need to run a $> ps -ax to remember which one I'm using when it comes time to change the config.


Just use restic and rclone and be done with it.

https://bobek.cz/blog/2020/restic-rclone/
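The glue between the two is restic's rclone backend; a minimal sketch, assuming an rclone remote named gdrive is already configured:

  restic -r rclone:gdrive:backups/laptop init
  restic -r rclone:gdrive:backups/laptop backup ~/Documents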


I occasionally use restic, but one thing I don't like about it is the sheer number of data files it creates (45k for ~800GB in my case), which makes it a pain to use with certain cloud storage providers that don't always handle tens of thousands of files very well (Google Drive being a good example).

Is there some way to get it to not make as many files?



I've used restic with the backblaze and S3 backends - works pretty well for me. The newest version also has compression on top of deduplication, like borg, which is nice. (Of course, it will only make a difference for compressible data - most images or videos won't compress, but say, JSONs will).


I dropped duplicati after its database got corrupted irreversibly. Also, recoveries always took a very long time.

I now use restic and I'm very happy. I find it to be very resilient. No more database, only indexes and data packs, which can be repaired.


Same. Database corruption hit me after ~1.5 years and I could never figure out what the cause was or how to fix it. Which is a shame, because Duplicati looks like a great open source project with a lot of dev time and effort invested into it. But when it comes to backup software, your core functionality better work reliably, and Duplicati just isn't there. I since switched to Duplicacy and couldn't be happier.


If you plan to use Duplicati, please pay attention to the docs around block size. We used this to back up a couple hundred GB of data to S3. Recovery was going to take over 3 days to reassemble the blocks based on the default 100KB block size. For most applications you will want at least 1MB, if not more.

Otherwise a good product and has been reliable enough for us.

* https://duplicati.readthedocs.io/en/latest/appendix-c-choosi...
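If I remember the option names correctly (treat this as a sketch, not gospel), the relevant settings are blocksize and the remote volume size dblock-size, e.g. on the CLI (credentials options omitted):

  Duplicati.CommandLine.exe backup "s3://my-bucket/backups" "D:\data" --blocksize=1MB --dblock-size=250MB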


Thanks for the note!


use restic. It's like git but for backups.

Many years ago I was a happy user of CrashPlan as the data was also easily accessible, but when they stopped their private user plans I looked into several solutions (Duplicati, duplicacy, and some others too). restic was the only one light enough for me to use consistently, which is a critical thing about backups.


Does anyone have experience with comparing restic and Rclone? I feel like they are similar.

* https://rclone.org/


While there's probably some overlap for certain use cases, I'd say they're more complementary. In fact, Restic leverages rclone to support a lot of cloud storage services. Restic is specifically meant as a backup tool and does encryption, deduplication, snapshots, and now apparently compression. Rclone is more of a synchronization tool/copying tool (which could also be used to make backups), more like rsync or even just cp (but with cloud storage support).


Thanks!


Rclone is more for syncing than backups. Its great for moving files between storage systems and syncing one path to a destination. Some backup tools use it for uploading/etc.


Rclone is like rsync for the cloud: you can sync files to Google Drive or another service. And like CCC it can archive deleted files as a safety net. I love the simplicity - no deltas, snapshots or restore procedures, the files are just there on the destination.


I use restic to back up to a local drive, then use cloud storage to back up the repos. I know restic supports direct backup to some cloud backends, but this seems more decoupled and less prone to errors/hangs.


The thing that made me want to post is this bullshit:

Download Duplicati 2.0 (beta) or Look at 1.3.4 (not supported)

I work at a big tech company and see this all the time: if your "new shiny promotion-ready" version is not ready, why the hell would you drop support for the old version that works? I'd stay away from any product operated by such irresponsible teams.

Yeah, I know, lots of biased guesses/views and sentiment on my part, but you get how much this angers me.


If you're at a big tech company and even considered using Duplicati, you should look around more.


There is also https://kopia.io

- Cross platform

- GUI

- Encryption/Compression/Deduplication


Unfortunately it looks like it does not back up full metadata (ACLs, extended attributes, flags, alternate data streams, special files, etc).

I wonder if there is any program that does?


Can recommend kopia as well. It is the one that I settled with after trying out pretty much all the other open source solutions (at the time).

Works great across all my devices (win, mac, linux).


Yeah, I was wondering about how Duplicati compares to Kopia, which seems to check pretty much all the boxes for me.


Google storage only? Can it add rclone as a backend to support other storage providers?


Huh? No... Kopia supports cloud object storage, Google Drive, WebDAV, SFTP (ssh), or its own repository server. https://kopia.io/docs/repositories/

I just use SFTP.


Thanks! Seems great to me. It's going to switch kopia-ui from Electron to a Go binary plus browser; I thought its server already provided a browser UI, so I'm not sure why it needs a new desktop UI that is browser-based, or why both.


- LGPL license

- Cross platform (.NET / Mono)

- Incremental backups with compression

- Encryption (AES-256)

- Backup verification

- Block level deduplication

- WebUI

- Lots of backends supported

Wondering how that compares to https://restic.net/


Restic is CLI focused whereas Duplicati is GUI focused. Restic is based around repositories, which can contain multiple backups from multiple sources, whereas Duplicati's backups are not (although the actual backup format is similarly broken up into lots of small blocks).


One of these backup apps had a comparison table in their wiki. I can't remember which one and how accurate it still is.


You may be referring to this page: https://github.com/gilbertchen/duplicacy

If you scroll down, below the benchmarks, it lists features and comparisons with other options.



I'm using restic on servers, Kopia on pc/mac


+1 for Restic. What I ended up doing was writing a script on my home server, which I called `backupctl`, driven by a file that specifies a set of directories under the home directories to back up. This wound up being a good way to split the problem into "try to save these files from loss because losing them would be an annoyance" - i.e. a house fire isn't catastrophic - and "irreplaceable" (precious memories), which I want to hand off to Restic.

For things like family photos this works really well since if we copy them all over the place across devices, restic will still deduplicate them down to just 1 record when it gets uploaded to Backblaze.
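A rough sketch of that kind of wrapper (the file layout and names here are my own, not a standard tool):

  #!/bin/sh
  # /usr/local/bin/backupctl -- back up every path listed (one per line) in /etc/backup-dirs
  # RESTIC_REPOSITORY and RESTIC_PASSWORD_FILE are expected in the environment
  set -eu
  restic backup --files-from /etc/backup-dirs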


Restic is absolutely great. Wrap it in Autorestic and it's even better!

https://autorestic.vercel.app/


I've had a good experience with crestic (https://github.com/nils-werner/crestic), even though it seems a lot smaller and simpler than autorestic. But I really like how the same backup can be configured for different backends. Autorestic seemed more complicated in comparison.


There’s also resticprofile which takes care of scheduling (with launchd on macOS) and maintenance tasks for restic. I especially enjoy that resticprofile can create a prom file for the backup status that I can just scoop up to my monitoring.

https://creativeprojects.github.io/resticprofile/


I've been using Autorestic but it has an issue: it keeps modifying the YAML file on its own with an invalid config option, which causes the backups to fail.

Not a good thing for something that's supposed to run in the background and keep things backed up.


The problem with autorestic is its development is pretty slow. Many PRs are still not merged, and there are very few commits.


Just wondering: Any reason you don't use Kopia on the servers as well?


I use Kopia + Backblaze with linux. No problems so far.


Same question. I set up Duplicati for a small server to back up WordPress websites in Docker with a script, and it seems to work just fine.


Oops wrong comment to reply to.


I use ZFS snapshots and send/replication. This has been the easiest and most reliable backup solution for everything. I especially enjoy backing up SQL Server with ZFS using the new snapshot feature in SQL Server 2022: "ALTER DATABASE MyDatabase SET SUSPEND_FOR_SNAPSHOT_BACKUP = ON";
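The ZFS side of that, as a sketch (the pool/dataset names and target host are placeholders):

  # snapshot, then replicate incrementally to another host
  zfs snapshot tank/sql@2022-11-03
  zfs send -i tank/sql@2022-11-02 tank/sql@2022-11-03 | ssh backup-host zfs receive backup/sql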


The only issue is, there is no ZFS cloud backend other than one (which is very expensive) and a half!

Otherwise, ZFS snapshots are sweet!


Once you get used to zfs snapshots, operating without them feels like developing without source control.


I'm a happy user. I use it as a solution to back up specific user folders on a Windows system to an SMB network share. It's been chugging along for years now, I've even done a few recoveries, and I've never had a problem. I'm surprised to read the other reviews here.


I highly recommend Borg Backup. I had to give up on Duplicati years ago, perhaps they're OK now. But Borg is magic.

https://github.com/borgbackup/borg


I use borg with a Hetzner storage box [0]. Works very well for me.

[0] https://www.hetzner.com/storage/storage-box?country=us


+1 with Vorta as the GUI frontend. Much more reliable and performant than Duplicati and Duplicacy which I used to use.


Have been using this for years, it has its quirks but it works and it costs me next to nothing - I keep looking at possible alternatives but so far haven’t shifted.


It's a shame Duplicati runs quite poorly (I have the same experience). I moved to restic with the autorestic wrapper and configured notifications through another method for both failures and successful backups.

That second option works amazingly well and is much quicker, more reliable, and offers more control than Duplicati. But it's much harder and more time-consuming to set up, requiring timers, scripts and setting up notifications. For people new to self-hosting, reliable incremental off-site backups can be a right pain. How many poorly tested cron jobs that fail to create backups, and that nobody will take action on, are running right now? At least Duplicati will give you a glanceable GUI showing its backup failures.


Has anyone tried Kopia yet? I have used restic, which works pretty much fine, yet I felt it misses a few features that Kopia promises.

https://kopia.io/


I have been using Kopia to back up all of my laptops' home dirs to a Raspberry Pi for at least a couple of years now. There is a CLI and a UI. The UI is somewhat funky and could benefit from an "easy mode" a la Time Machine, but it does work. I restored my home dir from it just the other day when migrating from one OS to another. My favorite thing about Kopia is that it performs incremental backups on tens of GB _much_ quicker than plain rsync can, and is much more space-efficient to boot.


I recommend duply[1]. It is a frontend for duplicity:

https://duply.net/Duply_(simple_duplicity)


I've been using Duply as a simple CLI front end to Duplicity (not Duplicati) for years now. It's worked great for me on many servers and personal machines.


I just started using Duplicati last week as my backup for ~900GB worth of photos, music, and other assorted data in an Ubuntu RAID1 array to Backblaze B2. I noticed it was a little sketchy when I poked at it (e.g. pausing the backup), but didn't realize it was so unstable. The initial backup did finish.

Is restic the best option for Linux backup to B2?


I don’t use it. I’ve tried, and it’s a large, bloated, unstable program in Docker, and when installing natively there are more dependencies than there is actual backup software. It would quadruple the size of my install on a RasPi.

I use restic on servers and Syncthing set to one-way sync for basic folders.


One of my friends really swears by this but I find something so GUI driven really complex to automate for the things I do :)

Most of my 'backup' is more like synchronising offline archives. I don't really backup full machines, just the data.

So I'm back to some custom encfs + rsync scripts and pretty happy with that :P


Does anyone have experience with using regular backup software in conjunction with reverse-encrypting filesystems like gocryptfs, eCryptfs or EncFS? I.e. mount the plaintext directory as a new reverse-ciphertext directory, and back up the cipher one:

  ./gocryptfs -reverse plain cipher
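For illustration, the rest of the cycle would be something like (the remote path is a placeholder):

  rsync -a cipher/ user@remote:backups/   # back up only the ciphertext view
  fusermount -u cipher                    # unmount when done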


I do something like this. EncFS a local clone of my data, and rsync that clone to remote servers.

I prefer this over complex formats created by software like duplicati. This is easier to recover 20 years from now (I just went through that with a USB stick from 2001 :P )


gocryptfs works fast and well; it is the way to... go for encrypted backups.


These comments are becoming backup recommendations.

Want to add that I have used "back in time" for a long time, probably 10+ years; recovery has always worked so far.

Only issue is that it does not do block-level deduplication. But it is good enough for my laptop-plus-external-hard-drive usage.


I used Duplicati for a few years. The backup process would fail silently every once in a while and wouldn't run again until I manually reset it. It did save me once after a storage device failure. Now I just put stuff I want to back up in Dropbox or git.


My interest was piqued, so I started going through the issues tagged with 'bug' from oldest to newest. I got 100 in and... well, Jiminy Cricket... I was still in 2017. Think I'll pass.



How does it compare to Borg?


Borg doesn't have a Windows version, for example. Borg is also command line only, while Duplicati has a nice graphical UI - by running a web server on localhost.


I've been using Vorta as a GUI for Borg for a while for personal use. It's not the prettiest thing out there but it has seemed to work fine so far. As far as restoring from old backup goes, though, I've only really tried that with a few individual files.

Windows still seems to be unsupported.


There is a nice GUI on MacOS called Vorta I think.


Borg works great (except mounting) under Cygwin.


My main issue with Borg is the disk space needed locally for backups.

No idea on the rest, but that's the reason why I discarded it as an option.


Yet Another Recommendation: Borg backup to a local server daily, then rclone to S3 (or another cloud provider) to back up the whole local backup server repo weekly or something...


Still beta right? What fool trusts their backups to beta software? I tried this many years ago and it started failing eventually and I gave up. As expected, it's beta.


Interesting that Cryptomator hasn't been mentioned so far. I've been thinking about setting it up to work with my 2TB GDrive. Anybody know how it compares?


I recommend Arq Backup all the way https://www.arqbackup.com/


Paid software that is not available for Linux is not a replacement for many.


The fact that it's been in beta since forever means even the developers don't trust it themselves.


Do all the programs mentioned here also work with external hard drives as the "cloud"?


Borg and Duplicati definitely do. I use them to push my backups to a local Samba share, and I have also tested them by backing up just to a local folder.


Last update was June 2021, is the project still maintained?


There's some progress on GitHub, so I guess work is still ongoing.


I will use Restic (or Borg), in preference to Duplicati.


I still use Backblaze for this kind of stuff.


duplicati is basically a paid C# version of duplicity, an open source backup application https://en.wikipedia.org/wiki/Duplicity_(software)

https://en.wikipedia.org/wiki/Duplicati

edit: oh boy, need another cup of coffee. I'm entirely wrong. I was thinking of duplicacy https://duplicacy.com/ which comes with a slick paid frontend.


Wait. The title uses the word free, but you write that it requires payment. Looking at the website I can't find a way to pay for it other than a donation.


If it’s any comfort, you’re not the first one to confuse these programs…


A more accurate description might be that Duplicati is a free open source Windows version of Duplicity. It works pretty well, by the way.


re: edit: duplicacy is also free and open source for just the CLI version ( https://github.com/gilbertchen/duplicacy/releases ) , the 'slick paid frontend' is optional and sold by https://duplicacy.com/ and even then is free "to restore, check, copy, or prune existing backups"


> duplicacy is also free and open source for just the CLI version

Not what I see.

From the website, https://duplicacy.com/buy.html

>GUI/CLI licenses are not required under the following situations:

> - Running the CLI version to back up personal files/documents on a home computer

> - Running the CLI version to restore, check, copy, or prune existing backups on any computer

> - Running the CLI version to back up any files on a computer where a GUI license is already installed

From the repo,

https://github.com/gilbertchen/duplicacy/blob/master/LICENSE...

> Copyright © 2017 Acrosync LLC

> - Free for personal use or commercial trial

> - Non-trial commercial use requires per-computer CLI licenses available from duplicacy.com at a cost of $50 per year

> - The computer with a valid commercial license for the GUI version may run the CLI version without a CLI license

> - CLI licenses are not required to restore or manage backups; only the backup command requires valid CLI licenses

> - Modification and redistribution are permitted, but commercial use of derivative works is subject to the same requirements of this license

Not really open source.


How is it paid?

It's fully open-source, LGPL licensed, and from the website the only payment options are donations.



