Rclone can also mount cloud storage to local disk, which is especially nice from Kubernetes. Read/write speed isn't the fastest when using it as a drive with lots of files in the same folder, but it's a quick and easy way to utilize cloud storage for projects.
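A minimal mount is one command, assuming a remote named dropbox was already set up with `rclone config` (the mount point and cache mode here are just one reasonable choice):

    # cache writes locally so apps that expect normal file semantics work
    rclone mount dropbox: /mnt/dropbox --vfs-cache-mode writes --daemon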
It can also e2e encrypt locations, so everything you put into the mounted drive gets written encrypted to, for example, Dropbox folder /foo. Nice, because Dropbox and other providers like S3 don't have native e2e support yet.
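Under the hood this is rclone's crypt remote layered over a regular remote. A sketch of what the config ends up looking like, with made-up remote names and target folder:

    # in rclone.conf; "secret" wraps dropbox:foo (filenames are encrypted too by default)
    [secret]
    type = crypt
    remote = dropbox:foo
    password = <obscured value, e.g. via `rclone obscure`>

Anything you copy to secret: (or a mount of it) lands in /foo as ciphertext, so Dropbox never sees plaintext.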
All in all, Rclone is great! One of those tools that's good to have on hand, and it solves so many use cases.
It's also a trivial way to set up an ad-hoc FTP server which serves either local or cloud storage. E.g. my Pi 4 runs an rclone FTP server which exposes my Dropbox to the rest of my intranet, and a separate FTP server which my networked printer can save scans to. The same machine uses rclone to run automated backups of cloud storage for my household and a close friend. rclone is a godsend for my intranet.
No configuration needed beyond rclone's per-cloud-storage-account config (and serving local dirs this way doesn't require any cloud storage config at all). Some variation of that can be added to crontab like this (a sketch; remote names, port, and paths below are placeholders):
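    # serve Dropbox over FTP to the intranet at boot, plus a nightly backup
    @reboot rclone serve ftp dropbox: --addr :2121
    0 3 * * * rclone sync dropbox: /backups/dropbox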
Assuming your Dropbox is synchronized somewhere on your file system, the FTP server could serve that directory. Although I guess not everyone synchronizes their Dropbox locally.
> Although I guess not everyone synchronizes their Dropbox locally.
And I've yet to see a Pi-native Dropbox client for doing so.
PS: I actually do sync Dropbox to my main workstation, but selectively. The lion's share of my Dropbox is stuff I don't need locally, so I don't bother to sync it. The rclone approach gives me easy access to the whole Dropbox, when needed, from anywhere in my intranet, and I can use my local file manager instead of the Dropbox web interface.
It is indeed great for this, but you need to make sure your network is stable.
I use it on my desktop and laptop to mount Google Drive. The problem on the laptop is that the OS sees the drive as local, and Rclone doesn't time out on network errors. So if you are not connected to wifi and an application tries to read/write to the drive, it will hang forever. This locks up most of the UI under XFCE, for example, if you have a Thunar window open.
Thanks, but unfortunately this doesn't work, for my issues at least. I have this (and --contimeout) set to 15 seconds, but it makes no difference. I tried those based on another user reporting the same issue here:
The --timeout param is documented as "If a transfer has started but then becomes idle for this long it is considered broken and disconnected". This seems to apply only to file transfers in progress.
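For reference, the flags under discussion look like this (remote name and mount point hypothetical):

    rclone mount gdrive: ~/gdrive --timeout 15s --contimeout 15s

As documented, --timeout governs idle in-flight transfers and --contimeout governs connection setup, so neither covers a mount blocking on a dead network.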
I traced it once, and Rclone gets a "temporary DNS failure" error once the network is down, but just keeps retrying.
Has anyone used it successfully as a replacement for two-way sync apps (like Insync for Google Drive)? Insync sucks, and Google Drive sucks, but for something I depend on every day I feel like I really need a local copy of the files and immediate sync between local and server, particularly when Internet access is spotty.
But absolutely, 100%, remember to block automatic updates, because even minor-minor version updates change the protocol (and even different versions of the OCaml compiler with the same version of the Unison source can mismatch).
This pain has always stopped me using Unison whenever I give it another go (and it's been like this since, what, 2005? with no sign of them stabilising the protocol over major versions.)
FWIW, I've been using Insync on Linux since it went online (because Google never released a Linux-native client). Aside from one massive screw-up on their part about 8 or 10 years ago (where they automatically converted all of my 100+ gdocs-format files to MS Office and deleted the originals), I've not had any issues with them. (In that one particular case the deleted gdocs were all in the trash bin, so they could be recovered. Nothing was lost; it was just a huge pain in the butt.)
> Aside from one massive screw-up on their part about 8 or 10 years ago (where they automatically converted all of my 100+ gdocs-format files to MS office and deleted the originals)
Why. Just why. How does that shit ever happen in a public release?
I have bi-directional sync between two computers, one running Windows 10, the other Linux, using Google Drive as a buffer. All data are encrypted client-side because I don't want Google to know my business. But the sync runs every 30 minutes, not immediately. I have no fancy GUI over it like Syncthing etc., just a plain CLI command triggered by cron and Task Scheduler respectively.
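One way to sketch the Linux half of that kind of setup is rclone's bisync command over a crypt remote (all names below are placeholders, and bisync needs a first run with --resync):

    # crontab: reconcile the local dir and the encrypted remote every 30 minutes
    */30 * * * * rclone bisync gdrive-crypt:work /home/me/work --log-file /home/me/.bisync.log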
You can still do this, but Dropbox can’t use the File Provider API for that yet, so the experience won’t be quite as integrated as it is with Dropbox for macOS on File Provider. See https://help.dropbox.com/installs/dropbox-for-macos-support for more.
When you say "e2e" encryption, do you mean client-side encryption? Because S3 supports both client- and server-side encryption. (It doesn't really need to do anything on the service side to support client-side encryption, tbf.)
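For what it's worth, server-side encryption on S3 is just a flag at upload time (bucket and key are made up here), while client-side means encrypting before the put, e.g. via rclone's crypt remote:

    aws s3 cp report.pdf s3://my-bucket/report.pdf --sse AES256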