
Rclone can also mount cloud storage to local disk, which is especially nice from Kubernetes. Read/write speed isn't the fastest when using it as a drive with lots of files in the same folder, but it's a quick and easy way to use cloud storage for projects.
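As a sketch of that mount workflow (the remote name "mycloud:" and the mount point are placeholders; this assumes an already-configured rclone remote):

    # Mount a configured remote as a local folder; cache writes locally
    # so apps that expect normal file semantics behave better
    rclone mount mycloud:projects /mnt/projects \
        --vfs-cache-mode writes \
        --daemon
Unmount later with `fusermount -u /mnt/projects` on Linux.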

It can also e2e-encrypt locations, so everything you put into the mounted drive gets written encrypted to Dropbox folder /foo, for example. Nice, because Dropbox and other providers like S3 don't have native e2e support yet.
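A minimal sketch of that crypt setup, assuming an existing remote named "mydropbox:" (all names and the passphrase are placeholders):

    # Layer a "crypt" remote over a Dropbox folder; the password must be
    # stored obscured, which rclone obscure handles
    rclone config create mycrypt crypt \
        remote mydropbox:foo \
        password "$(rclone obscure 'your-passphrase-here')"
    # Anything copied to mycrypt: lands encrypted under Dropbox /foo
    rclone copy ~/private mycrypt:backup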

All in all, Rclone is great! One of those tools that's good to have on hand, and it solves so many use cases.



It's also a trivial way to set up an ad-hoc ftp server which serves either local or cloud storage. e.g. my Pi4 runs an rclone ftp server which exposes my dropbox to the rest of my intranet, and a separate ftp server which my networked printer can save scans to. The same machine uses rclone to run automated backups of cloud storage for my household and a close friend. rclone is a godsend for my intranet.


And then all you need is a trivial curlftpfs mount to have a synchronized folder available. No Dropbox client needed anymore?
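For example, pointing curlftpfs at an rclone FTP server like the one described above (host, port, and paths are placeholders):

    # Mount the FTP share as a local folder via FUSE
    mkdir -p ~/dropbox-ftp
    curlftpfs ftp://MYIP:2121/ ~/dropbox-ftp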


Oh wow I didn't know about this! Good tip


Do you mind explaining why it's so trivial versus setting up traditional ftp? I'm missing something.

Thank you


> Do you mind explaining why it's so trivial versus setting up traditional ftp?

    nohup /usr/bin/rclone serve ftp --addr MYIP:2121 $PWD &>/dev/null &
No configuration needed (beyond rclone's per-cloud-storage-account config; serving local dirs this way doesn't require any cloud storage config at all), and some variation of that can be added to crontab like:

    @reboot /usr/bin/sleep 30 && ...the above command...
Noting that $PWD can be a cloud drive identifier (part of the rclone config) so it can proxy a remote cloud service this same way. So, for example:

    rclone serve ftp --addr MYIP:2121 mydropbox:
assuming "mydropbox" is the locally-configured name for your rclone dropbox connection, that will serve your whole dropbox.


Just write a systemd unit. These commands are not any easier to support and are far worse from a purely technical point of view. You'll get:

- startup only when the network is up

- proper logging

- automatic restarts on failure

- optional protection for your ssh keys and other data if there's a breach (refer to `systemd-analyze security`)

Run:

  $ systemctl --user edit --full --force rclone-ftp.service
this opens a text editor; paste these lines:

  [Unit]
  After=network-online.target
  Wants=network-online.target

  [Install]
  WantedBy=default.target

  [Service]
  ExecStart=/usr/bin/rclone serve ftp --addr MYIP:2121 /directory
  Restart=on-failure
and then enable and start the service:

  $ systemctl --user enable --now rclone-ftp


Seriously yes. Crontab isn't meant to keep your services up. We have a proper service manager now, out with the hacks.


People go out of their way to build their own crappy version of systemd.

systemd is far from perfect, and Poettering is radically anti-user. But it's the best we've got, and it serves us well.


> Poettering is radically anti-user

What does that mean?


One of the authors of systemd


I know who he is but don't understand how he's supposed to be "anti-user".


Can traditional FTP talk to Dropbox?


Assuming the Dropbox is synchronized somewhere to your file system, the FTP server could serve that directory. Although I guess not everyone synchronizes their Dropbox locally.


> Although I guess not everyone synchronizes their Dropbox locally.

And i've yet to see a pi-native dropbox client for doing so.

PS: i actually do sync dropbox to my main workstation but selectively do so. The lion's share of my dropbox is stuff i don't need locally so don't bother to sync it. The rclone approach gives me easy access to the whole dropbox, when needed, from anywhere in my intranet, and i can use my local file manager instead of the dropbox web interface.


It is indeed great for this, but you need to make sure your network is stable.

I use it on my desktop and laptop to mount Google drives. The problem on the laptop is that the OS sees the drive as local, and Rclone doesn't time out on network errors. So if you are not connected to wifi and an application tries to read/write to the drive, it will hang forever. Under XFCE, for example, this results in most of the UI locking up if you have a Thunar window open.


There is in fact a default timeout of 5 minutes, and you can change it: https://rclone.org/docs/#timeout-time

I shorten it to prevent lockups like you are describing.
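A shortened mount invocation might look like this (the remote name and mount point are placeholders):

    rclone mount mydrive: /mnt/gdrive \
        --timeout 15s \      # consider idle transfers broken after 15s
        --contimeout 15s \   # give up on new connections after 15s
        --daemon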


Thanks, but unfortunately this doesn't work, for my issue at least. I have this (and --contimeout) set to 15 seconds, but it makes no difference. I tried those based on another user reporting the same issue here:

https://forum.rclone.org/t/how-to-get-rclone-mount-to-issue-...

The timeout param is listed as "If a transfer has started but then becomes idle for this long it is considered broken and disconnected". This seems to be only for file transfers in progress.

I traced it once, and Rclone gets a "temporary DNS failure" error once the network is down, but just keeps retrying.


Sounds like you have enough for a decent bug report


Similar issues with WebDAV mounts on macOS


> Rclone can also mount cloud storage to local disk

It's not immediately apparent what this means: does it use FUSE, 9p, a driver, or some other mechanism to convert FS calls into API calls?

EDIT: it's FUSE.


Has anyone used it successfully as a replacement for two-way sync apps (like Insync for Google Drive)? Insync sucks, and Google Drive sucks, but for something I depend on every day I feel like I really need a local copy of the files and immediate sync between local and server, particularly when Internet access is spotty.


You might want to try Unison: https://github.com/bcpierce00/unison

I've been using it to great effect for over 10 years on a daily basis.


But absolutely 100% remember to block automatic updates, because even minor-minor version updates change the protocol (and even different versions of the OCaml compiler with the same version of the Unison source can mismatch).

This pain has always stopped me from using Unison whenever I give it another go (and it's been like this since, what, 2005? with no sign of them stabilising the protocol across major versions).


The recent updates have stabilised things a whole lot, with nice features like atomic updates.


> Insync sucks...

FWIW, i've been using Insync on Linux since it went online (because Google never released a Linux-native client). Aside from one massive screw-up on their part about 8 or 10 years ago (where they automatically converted all of my 100+ gdocs-format files to MS office and deleted the originals), i've not had any issues with them. (In that one particular case the deleted gdocs were all in the trash bin, so could be recovered. Nothing was lost, it was just a huge pain in the butt.)


> Aside from one massive screw-up on their part about 8 or 10 years ago (where they automatically converted all of my 100+ gdocs-format files to MS office and deleted the originals)

Why. Just why. How does that shit ever happen in a public release?


You probably want syncthing


I have bi-directional sync between two computers, one running Windows 10, the other Linux, using Google Drive as a buffer. All data is encrypted client-side because I don't want Google to know my business. But the sync runs every 30 min, not immediately. I have no fancy GUI over it like Syncthing etc., just a plain CLI command triggered by cron and the Task Scheduler.
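A crontab sketch of that kind of scheduled two-way sync ("mycrypt:" stands in for a hypothetical client-side-encrypted remote; note that the very first bisync run needs --resync):

    # Two-way sync every 30 minutes; rclone bisync compares both sides
    # and propagates changes in each direction
    */30 * * * * /usr/bin/rclone bisync "$HOME/work" mycrypt:work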


Seems like a market opportunity exists for this, especially with Apple cutting external-drive support for third-party sync tools, including Dropbox.


You can still do this, but Dropbox can’t use the File Provider API for that yet, so the experience won’t be quite as integrated as it is with Dropbox for macOS on File Provider. See https://help.dropbox.com/installs/dropbox-for-macos-support for more.


Syncthing maybe?


When you say “e2e” encryption do you mean client-side encryption? Because S3 supports both client and server side encryption. (It doesn’t really need to do anything on the service side to support client-side encryption tbf)

For client side encryption they have a whole encrypted S3 client and everything. (https://docs.aws.amazon.com/amazon-s3-encryption-client/late...)


This seems to be an SDK or library, not a command line tool.


I first used it at work to sync a OneDrive folder from a shared drive due to different audiences. Very cool tool. I really love open source stuff like this.



