> Is it better to make up common rsync “recipes” for people to use, or is it better to let folks have access to ALL of the flags [...] and pick’n’mix what they want?
I think both. This is the hard part about GUI design -- it is pretty easy to make a list of 100 command-line options listed alphabetically on the man page, but for GUIs, people expect much more.
There should be some sort of logical grouping. One area of the screen would be options related to what is transferred, a second area is metadata, the third one is speed-only optimizations, fourth is logging and so on. Having "hardlinks" all the way on the left and "symlinks" all the way on the right does not make a good GUI.
There is a very wide variety of input elements, so checkboxes are not the right solution all the time! For example, while having separate "--verbose" and "--quiet" options is fine for a CLI, in a GUI one would expect a drop-down or a slider. This decreases cognitive load because there are fewer things to read and no need to worry about what happens if you check both.
Another great GUI feature is dynamicity -- take the metadata, for example. A good design might have a dropdown: "everything", "nothing", "x-bit only", "custom". Most of the time, the first three options would be fine, but "custom" would reveal an entire new group of checkboxes.
... this was just my subjective design, based on how I use the program. If I were trying to make a popular product, I'd have to spend some time asking people / reading the forums to determine which sets of features people actually use. For all I know, there is a super common case for "--group" without "--owner", and this is just me not knowing about it.
I would think there may be a "command-line" version of GUI design: supply a limited set of default buttons for common options, but allow user customization. For example, I would love to have the button from the author's design but tune it specifically to my needs, including running a specific set of "command lines".
rsync does actually support profiles via option aliasing. rsync uses popt(3) to parse its command-line options, and popt allows you to define aliases, so you can put something like this in ~/.popt:
rsync alias --sync -rpcLDvz --chmod=D0755,F0644
A call like "rsync --sync foo@bar:baz/ baz" will then expand into "rsync -rpcLDvz --chmod=D0755,F0644 ...".
The "program alias newopt opt" syntax is actually popt's thing (see Option Aliasing[1]) and it works with everything that uses popt(3) for its command line parsing.
> There should be some sort of logical grouping. One area of the screen would be options related to what is transferred, a second area is metadata, the third one is speed-only optimizations, fourth is logging and so on. Having "hardlinks" all the way on the left and "symlinks" all the way on the right does not make a good GUI.
This is exactly what Microsoft has done with its ribbon interface[1], which is a mix of menus and toolbars, with the most commonly used functions logically grouped and prominently displayed. The ribbon also changes as the context changes, and frequently used functionality can be pinned to the always-visible Quick Access Toolbar. There is also a context-sensitive popup toolbar available on right click.
I think I would mind //graphical menus// less than the change to an always in your face ribbon.
What I loved most about the seriously ancient UIs, way back in the Win 3.x days and forward, was that not only were the options categorized, they also had prominently displayed (right justified instead of left) the keyboard shortcut you could memorize if you knew you were doing it frequently. Yes, that sort of still exists, but you have to go out of your way to do something extra to see it: these days you have to tap a modifier key while in the focus area of the menu.
> I think I would mind //graphical menus// less than the change to an always in your face ribbon.
The ribbon has always been collapsible / "auto-hidable" to just a menu bar (double-click the active tab; admittedly not entirely easily discoverable). It's very much a graphical "pull" menu in the classic sense in that mode and the two-step Alt+ shortcuts might feel a lot more understandable/close-to-home in that context, as it is just like menu bars have always worked (Alt+E,C or Alt,E,C for Edit > Copy as opposed to Alt+H,C or Alt,H,C for Home > Copy; and in Office the old menu shortcuts still work to this day if you have muscle memory of them, you just don't get visual feedback).
> What I loved most about the seriously ancient UIs, way back in the Win 3.x days and forward, was that not only were the options categorized, they also had prominently displayed (right justified instead of left) the keyboard shortcut you could memorize if you knew you were doing it frequently. Yes, that sort of still exists, but you have to go out of your way to do something extra to see it: these days you have to tap a modifier key while in the focus area of the menu.
In addition to the Alt bubble pop ups, hovering every ribbon command has always provided a tooltip with command name and description and in that tooltip has always shown the most direct shortcut key in bold if there is one. Maybe silly to use the mouse to discover the fastest shortcut, but it's roughly the same discovery process as classic menus. (For instance, Format Painter shows Ctrl+Shift+C and Ctrl+Shift+V as opposed to Alt+H,F,P.)
(Not to mention, Windows stopped showing Menu keys by default way back in Windows 95, and even stopped showing some menus entirely unless Alt was pushed over the years. So the Windows 3.x days where everything always had underlines and macOS-like right-justified keyboard keys were relatively fewer than the many years of push Alt to see keyboard stuff on Windows. The ribbon wasn't a big shift in that department.)
> The ribbon has always been collapsible / "auto-hidable" to just a menu bar
Sadly, this is no longer true in Outlook, at least in the version recently pushed to my work computer. The only two options now are the "simplified ribbon" and the classic ribbon (https://support.office.com/en-us/article/use-the-simplified-...). You can no longer hide the ribbon completely. I assume other Office applications will soon follow suit.
Both options for the ribbon still support collapse and auto-hide. It may be confusing that the switch between Simplified and Classic Ribbon is a Caret button that looks like it should handle collapse / auto-hide (but that's still the odd boxed arrow icon in the title bar, right click menu options, and double-click the active tab name).
(Interesting to me is that the Simplified Ribbon in Outlook actually feels more useful to me and less space wasting than its Classic version and I switched my preference in Outlook from collapsed to shown because of it.)
'People complain about Electron a lot, but the primary complaint just seems to be that its bloated – carrying an entire browser engine along with something small. But like, all of our computers are already bloated, and there’s at least a 40% chance that right now I am bloated, and it doesn’t mean you should love me any less, and I think the above points I’ve raised override the lack of svelt-ness of the program. People don’t care that an application is a streamlined 3MB or a big round 200MB boy – they care that it works for them and it makes them feel good about using it.'
I think this is true in some cases, but the sorts of folk who run rsync may be more interested than average in lean software that can run on older machines. Then again, they'd probably opt for sticking to the command-line tool. Still a very cool project!
I avoid Electron apps because the approach is almost anti-web.
I would use apps built more like Electron if they simply removed the Electron part, gave the app a namespace, and shipped the JS/HTML/images along with an API service.
Installing would mean binding the app to that namespace and then using the browser of your choice, navigating to namespace.localhost -- in this case, rsync.localhost.
The system would be a bit more complex for handling auth and a few things but let's not get bogged down in solvable details.
Then I could access my apps from my phone, or from another system (if I chose to expose them publicly).
Same here, just give me a PWA -- or if one insists on using the Web stack for the UI, then deliver the application as a daemon/service that uses whatever the standard browser is.
You can already use node + Carlo if you want something without the browser piece. Of course, then you need to worry about multiple versions of node and the browser, and distribution becomes more of an issue.
This is more or less how Jupyter notebooks work, and I agree this experience is pretty great. Make it a local web server and the browser becomes your UI.
Like maybe we can build this out: a common daemon to register with.
The concept would be very simple to build. I might start a github project.
- Global daemon to register apps with. Apps could be installed system-wide or per user.
Think plopping a zip/tar or directory structure into something like /usr/local/webapp.d/(namespace) -- there might be a better directory -- plus a manifest file that describes some things about the service, such as how to start the API service and the default entry point (index.html).
- The global service is responsible for bringing apps online, either by activating an instance when its namespace is visited or by just keeping them running.
- Services, once launched, pick a random port from a range to bind to, and the system service acts as a reverse proxy and common auth layer for validating the local user. Maybe even ensure each of the API services runs per user and is jailed to the home directory.
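A sketch of what the registration layout might look like -- every path, file name, and manifest key here is hypothetical, since no such daemon exists yet:

```shell
# Hypothetical layout; a temp dir stands in for /usr/local/webapp.d
root=$(mktemp -d)
mkdir -p "$root/rsync"

# 'EOF' is quoted so $PORT stays literal -- the daemon would substitute it.
cat > "$root/rsync/manifest" <<'EOF'
name = rsync
entry = index.html              # default entry point at rsync.localhost
start = api/serve --port $PORT  # daemon assigns $PORT from its range
scope = per-user                # run the API service as the visiting user
EOF

# The daemon would scan this tree, read each manifest, launch the service,
# and reverse-proxy <name>.localhost to whatever port the service reports.
ls "$root/rsync"
```

That's roughly the whole contract: drop a directory in, describe how to start it, let the daemon do the proxying and auth.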
IDK, just thinking off the top of my head. I have written bits and pieces of this for various work projects, but never glued the idea into one single system.
It's not that hard, actually; I've done this with Node. Since you can install Node anywhere, include a binary, or whatever, you can have Node run as a server, and it can run with elevated privileges if needed.
You have it scan a directory structure and auto-generate a path that you can use in a browser -- any browser -- to view your newly installed app.
You can do this with APIs too so you can have internal tools with APIs via a Node server that do all kinds of things.
Some simple port forwarding rules (ymmv) make this available to your LAN or to the WWW. Add a DNS layer on top and you have something really cool.
I do this for all kinds of stuff, mostly PWAs, but it lets me have my own notes, todo, systems management and more.
It is easy, until you get into the trouble of local firewalls, double use of port numbers, and all those details where the daemon runs unexpectedly or stops without good information to the user, etc.
In my mind's eye, we would give one dedicated port to the main service. The apps would pick a random unused port and report back to the main service, which would then act as a reverse proxy from the main service's port to the bound port.
So if the rsync app bound to 5392, it would report that back to the main service, which would then ensure that all requests (or some subset of requests) to rsync.localhost were reverse-proxied to localhost:5392.
For the most part, when I say some requests I mean most if not all, but it might be nice to have a system-level rsync.localhost/.info URI that would give system-level information about the app.
Could containers fit into this? Would it be better to couple it with a service discovery/publishing tool too (say Consul, to make the endpoints of multiple apps easily accessible)?
Is there something that does this already, by any chance? (I might not have come across that!)
What about local filesystem access, OS integration, and all the other things that you expect a desktop app to be able to do (like, in this case, executing a separate binary), which violate the security models of browsers?
App becomes a local backend server that has privileged access to the machine (with control over what it accesses, maybe give it a specific user), and a web UI to access the app.
Someone in the tree of replies to your parent explained it and was contemplating actually going through with it.
While disk space may not often be an issue (though small SSDs are commonplace), RAM usage certainly is an issue if every little tool is an electron app.
The sort of folk who run rsync is a niche clique. Maybe this GUI is the perfect occasion to put rsync in the hands of more people? We all know how powerful it is; there is no reason it should be used only by people who are happy testing the different flags, reading random blog posts about what combination to use, and trying to find salvation in a 3000-line manual page.
>The sort of folk who run rsync is a niche clique.
While that may be true, I think:
"The sort of folk who would benefit from being able to run rsync" is a much, much broader group.
(And even the existing group of rsync users can benefit from a tool to help construct a command line, that they then copy out and use. Usability matters!)
Ashley's target market is archivists. They may or may not use rsync already, but you can see why they'd all benefit from it.
What kind of crap response is that? Ask any 100 people on the street about any industry specific tooling or terms and you're likely to get nothing. Go ask people 100 people about "liquidated derivatives" and see what happens.
> People don’t care that an application is a streamlined 3MB or a big round 200MB boy
At the risk of beating this topic to death here on HN...Some people care; I know I do. I think the audience for rsync would care more than a nontechnical user, for sure. If I downloaded a program with an extremely basic UI that essentially is a string builder, I probably wouldn't bother if the final executable was over, say, 10MB. My preference would be to hack together a basic CLI menu that could do the same thing and I can stick it in a folder that contains a bunch of other scripts.
Am I the target audience, or even close? Probably not, but I don't know many people who want what rsync offers who aren't programmers.
From the linked ServerFault page asking why Windows users don't use rsync:
> I would say that rsync is just too freaking complicated. Anyone I know who uses it regularly has a pre-set group of flags that generally does what they want.
This is followed by more reasons Windows users can't handle or don't want a CLI, along with a few mentions of Robocopy on Windows. Which, honestly, is the reason I don't use rsync on Windows. Windows (both Desktop and Server) ships with Robocopy, not rsync. To run rsync, I'm (or rather, was until WSL) forced to use a Cygwin-like environment to run it, which is overkill unless you're already using an environment like that (i.e. not on a Windows Server instance, most likely).
If ever there was a tool that didn't need a GUI, it's rsync. The idea is to keep two folder structures in sync -- exactly the sort of thing you want to script, or trigger when one side changes, or run from cron.
I've used rsync all my computing life, but I've rarely wanted to have to manually push a button to make it work.
A good cli interface is clearly the correct interface to rsync.
Embedding rsync in an app that does something else makes sense, e.g. edit locally, auto-push changes; but embedding rsync in an app that does nothing but rsync, without any automation or integration options, makes no sense to me.
You have to learn a new interface that you can't automate. It's a dead-end interface. The whole point of CLI interfaces is that they are infinitely hackable and pluggable.
I think the foundation of this comment is incorrect. This helps you build up the rsync command and test it. It gives you the rsync command that it runs. You're supposed to take the command it generates and use another tool to automate it. You're not supposed to automate the interface. It fulfills the conditions you expect of it.
And personally, I loathe constructing rsync commands. Every two months I have to parse that 2300-line rsync man page. And every two months Google sees another search query for "rsync GUI", because I am sick of building these things.
And if the tool has an option to say "yes, that's what I needed, now make me a crontab entry!" then I'd say it could be very useful.
As the parent mentioned, I script rsync once I know what I need. But like you said, I do it so infrequently that I usually have to man rsync to find the right command line to add to my script.
I appreciate this comment, because at first I didn't understand why I would ever use this program, and now I want to use it. Rereading 'man rsync' for the 10th time really is annoying.
But it just will never be worth a 200mb download. Gosh. It's like bloat on a whole 'nother level.
rsync could have a WIMP GUI and a console one. One does not preclude the other. I think you are right that embedding rsync in a more narrowly focused app might be the best option but it is not the only way.
And though I tend to be a GUI hater, Unison is one tool where I find the GUI much more useful than the command line alone. The game is to iteratively come up with a set of rules which Unison will put into effect on your dataset. The GUI quickly shows you the effect of your current rules, and lets you propose the next iteration, etc.
Very few people "know" rsync's interface. Any time I want to do anything more than -avz, I have to look it up. A well-made GUI would mean that instead of looking up "what option or combination of options does this?", you could just find the label that says "does this".
Of course, this GUI seems to only have the few most common options (i.e. the ones people are likely to remember), which indeed makes it pretty useless.
Making a GUI around a CLI is like translating an experimental, supposedly untranslatable Russian novel. Sure, it's hard, but there are still people out there wanting to read the book who could enjoy the result, no matter how imperfect.
One of the main audiences for this GUI are people who maybe only ever use the CLI once a month/quarter and can’t even remember how to edit mid-line in the terminal. For them this GUI is a great idea.
> Successful UNIX tools (or any tools, for the most part) are the ones with a simple concept manifested very thoroughly, and rsync is certainly that. [...] There are over 100 flags in rsync that you can select to do different things. So, this is what I mean by doing something simple, and doing it very well and very thoroughly.
Rsync is certainly not simple. It certainly doesn't do one thing. It's "extraordinarily versatile" to the point of madness. Here's what it does:
- Copies files from A to B
- Mirrors directories
- Computes diffs
- Applies patches
- Evaluates filters
- Runs as a daemon
- Runs over various remote shells
- Applies filters to filenames
And then it doesn't give you a good progress bar or let you resume interrupted operations, or gather errors at the end. I think this is a good case where the tool may have grown too large and unwieldy, and it may be better to replace it with an API or a collection of small, UNIX-like tools. For example, if I just want to copy files from point A to B, I can do this:
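Something like a tar pipe (the remote names below are placeholders):

```shell
# Local demonstration of a tar pipe; the remote form just inserts ssh:
#   tar -C src -cf - . | ssh user@host tar -C dest -xf -
src=$(mktemp -d); dst=$(mktemp -d)
echo hello > "$src/file.txt"

# Pack the tree on one side of the pipe, unpack it on the other.
tar -C "$src" -cf - . | tar -C "$dst" -xf -

cat "$dst/file.txt"   # prints "hello"
```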
I believe rsync can do two of the things on your doesn't-do list:
> And then it doesn't give you a good progress bar
rsync --progress
> let you resume interrupted operations
rsync --partial
...or enable both at the same time with rsync -P
(These features are only available in samba.org's original rsync, not OpenBSD's openrsync[1], but I don't think openrsync has seen widespread use yet, as it's still pretty new.)
rsync --progress isn't very good, which is why I worded it that way.
If you use rsync --partial, it has to rescan the source and destination directories again. All it does is let you resume transfers, and for some of us, the transfer is not the slow part.
Your "piped tar" doesn't cover the main use case of rsync, which is (unsurprisingly), remote file hierarchy sync'ing.
In particular, when there's a hierarchy on a remote machine that I want to sync to a partially-up-to-date hierarchy on a local machine, the required rsync command is a one-liner and it's very fast because it uses a diff.
Incidentally, the "tar" man page is not exactly a model of simplicity itself.
> Your "piped tar" doesn't cover the main use case of rsync, which is (unsurprisingly), remote file hierarchy sync'ing.
That's exactly my point, isn't it? rsync does several different things, depending on how you invoke it, rather than being a UNIXy tool that does one thing well. Because the single tool does everything, you can't easily swap out different parts of it, and the single tool takes a hundred options.
For copying files from A to B, I wrote a simple script[1]. rsync is not as universally available on hosts, the tar version is sometimes significantly faster than rsync when just copying, and rsync won't let me triangle-route copies.
fyi, combining something like `--out-format="file: %f %l"` with `--progress` results in reasonably parse-able progress updates.
Keep in mind, however, that the format of --progress has changed in the past. Depending on your version of rsync, it may or may not insert thousands-separating commas into the number of bytes written, e.g. 2,384,908 vs 2384908. The former can be suppressed with --no-h, an option not available in at least some older versions of rsync that default to no commas.
The "does not resume interrupted operations" gripe confuses me, because that is the main reason I reach for rsync: I have a large tree of files I need to be somewhere else, and if the transfer fails I would like it to not re-transfer the files already transferred.
So I use rsync, which offers resuming interrupted operations as a core feature.
That one is badly phrased; it's about resuming a single interrupted large file. By default rsync will create a new temp file and start over, but there are options to make it resumable.
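If memory serves, the options in question are --partial, which keeps the half-transferred file instead of deleting it, and --append / --append-verify (the latter in rsync >= 3.0), which pick it up again on the next run:

```shell
# Sketch, assuming a samba.org rsync on the PATH; local dirs for demo.
src=$(mktemp -d); dst=$(mktemp -d)
head -c 1048576 /dev/zero > "$src/big.img"   # a 1 MiB "large" file

# --partial keeps a partially-transferred file around; adding
# --append-verify (rsync >= 3.0) would resume it and re-checksum
# the part already on disk.
rsync -a --partial "$src/" "$dst/"
```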
I don't like the --no-h change they made to the --progress format (mentioned in another comment) because it broke my parsing once, but otherwise I don't really see a problem with it and I expect most people have never tried to parse it. What's so bad about --progress?
Edit: --progress also tells you how many files rsync has been told to sync, and of those, how many are remaining.
--info=progress2 actually gives you the progress of the whole transfer (but I don't think you can do both). You can use --info=name0,progress2 if you don't care about individual files scrolling the terminal.
Seems like this user-hostile interface could have been averted if they gave rsync a "porcelain" mode, since --progress is frozen so it doesn't break scripts.
I think you're really exaggerating the severity of what is fundamentally just a minor matter of personal taste.
(I say this as somebody who's written a GUI wrapper around rsync to, among other things, implement my own progress bar. So I agree with the general principle of writing porcelain around rsync.) (Incidentally --progress has actually changed in a way that breaks (my) scripts in the past; the version that ships with MacOS and the latest version exhibit different behavior by default. It would be nice if there was a better way. If rsync followed the ffmpeg model of foremost being a library with a standard command line interface, I think that could be ideal.)
Nothing beats a good CLI with a proper man page in my book. I like having all the power at my fingertips, and a good way to search for what I want to do without using the internet.
When reading a man page, especially a classic one like the ps man page, or maybe a system-call man page like sigaction's, I try to keep in mind that the information is there but may require careful study. Good man pages are often quite terse, requiring attentive reading.
I like using man the best, but there are two other methods that command-line programs can use to provide documentation: GNU's info, and built-in help (triggered by -h or --help on the command line).
GNU's info works better than man for more complex documentation -- it's practically a curses-based e-book reader. For a taste of info-based documentation, try
info spell
on the command line to see the documentation for the aspell command. Info is even used as the internal help documentation for Emacs, reachable from inside Emacs (via "Ctrl-h i", naturally). I think rsync has too many facets for a man page and would benefit from a longer info file that could explain its many use cases.
"proper" man page is quite the qualifier. (Edit: as in, I know I'm "supposed" to use man pages, but it's almost easier to google for it. The man pages I've seen aren't as easy to use for common cases).
CLIs are much more limited at discoverability compared to a graphical interface (TUI/GUI) which can show you all the options and let the user directly interact.
I always end up having to google how to do stuff with CLI tools but GUIs let you discover how to do things intuitively. Tried using man pages but they're always slower to find stuff in and a stackoverflow answer is more likely to target the use case I'm going for.
> so you can begin a (regex) search with / and hit n to go to the next result or N to go back.
His point about that being slower still stands. I also find that just googling for what I generally want can more quickly lead me to a Stack Overflow post with exactly what is needed, while man pages can be rather cryptic at times without reading the whole thing.
And some new CLI programs don't ship with their manpages at all. I've also seen some manpages that don't match the specific version/release of a program installed on the OS.
In my experience the coverage is shockingly good. And if you hit a case where there is no tldr page, you're back where you started, having lost no more than five seconds. The failure case is graceful, so it was easy to put tldr into my workflow.
Or just script it, include good comments explaining why you used all the switches you used, and back up the scripts to your favorite backup service. Available everywhere and explained well, so you don't forget why you chose the switches you did.
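A minimal sketch of that kind of self-documenting wrapper (the host and paths are made up):

```shell
# Write the wrapper somewhere convenient; run it by hand or from cron.
cat > /tmp/backup-photos.sh <<'EOF'
#!/bin/sh
# Why these switches:
#   -a        archive mode: recurse, keep permissions/times/symlinks
#   -v        list each transferred file
#   -z        compress in transit (helps on slow links, pointless on a LAN)
#   --delete  mirror deletions too; drop it for a grow-only archive
rsync -avz --delete "$HOME/Photos/" backup@nas:/backups/photos/
EOF
chmod +x /tmp/backup-photos.sh
```

Six months later the comments answer "why did I choose these flags?" before you even open the man page.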
I created a single-serving Ansible Docker image with a couple of playbooks for this very purpose -- devs were frequently asking to pull schemas or dumps from various internal data sources: "Hey, I forgot the command to do this, what was it again?"
Eventually I gave them a Dockerfile, wrote up some documentation on how to change hosts in the run command if they so desired, questions all but stopped immediately, and the devs loved it.
I now get a new question "hey, can you make one of those but to do this?"
"Sure, here's a playbook for it, put this in that directory on your local machine and it'll be available in the container". Combined with dynamic host inventories, it's made me a few friends lol
There are lots and lots of ffmpeg GUIs, the problem is that IMO, all of them are either (1) not powerful enough or (2) more difficult to use than just the command line.
The first problem may, of course, be a non-issue depending on what you need to do.
Someone with a sufficient itch (and some Elisp skills) could probably do for ffmpeg what the Magit team did for Git. Git's another program with a litany of options, and Magit does an outstanding job of covering them with an easy-to-use interface.
I'd love to just make a GUI for generating ffmpeg filter graphs. The MS SDK had a DirectShow filter graph editor (GraphEdit, I think it was called) that was quite pleasant to use back in the day.
Very interested in this GUI for rsync that you wrote (for obvious reasons) ... one thing that jumps out at me right away is that none of your examples show a remote in the form of user@host:path ... I also did a text search of your page for "ssh" and it is not mentioned ...
Does your GUI have the ability to specify a remote in the form of user@host:path ?
This gets a bit complicated - not only do you need to allow password interaction with ssh but further, upon initial connection, you'll be presented with a host key dialog and need to confirm the key fingerprint, etc.
I really wish there were more GUI wrappers that literally wrote the CLI commands out on screen as the GUI was modified, to help with bridging the GUI/CLI gap for people.
Apple's MPW (Macintosh Programmers Workshop) had exactly this: a tool called Commando which allowed you to graphically put together and run command lines.
Apple did this with MPW back in the day. I think the feature was called "commando".
You would type the command name, then "..." and up pops a dialog box with all the options and popup help for what different things meant. As you fiddled with the GUI, it would generate the flags as text you could see. And finally, you could execute, or paste the command line text into the console, or both. It was awesome.
The problem with this is that it misses the point of most cli programs, which is that they are to be composed with other programs.
Besides, when colleagues of mine decided to demystify the command line and learn it properly, they were always surprised how much easier it is than they expected.
The GUI/CLI gap, as far as I have seen, is one of willingness to invest some time to learn.
> The problem with this is that it misses the point of most cli programs, which is that they are to be composed with other programs.
That's not true, CLI programs would still exist even if they could not be composed. And I would guess that if you counted how many of our commands include 1, 2, or 3 executables, the 1 column would be the biggest by far.
One of the major headaches of CLI programs is that they're hard to learn interactively because there's no starting point. You have to read the docs. The point of a GUI command builder is that it gives you something interactive to play with to start to learn how the CLI commands work. The point is not to do everything through the builder, the point is to bootstrap people to the point where they no longer need it.
Also, there's no reason a GUI command builder couldn't also show you how to do composition.
There is a GUI for rsync that's quite popular. It's called Dropbox.
A lot of successful projects seem to be of the form 1) Consider old-school unix tool. 2) Aim for one aspect of its functionality that serves a mainstream need. 3) Put friendly face on it so less technical users can reap the benefits.
I recommend also trying Grsync (released in 2005), which is mostly the same thing and ships with support for multiple languages. http://www.opbyte.it/grsync/
> People don’t care that an application is a streamlined 3MB or a big round 200MB boy
Assuming all else to be equal (UX, functionality, etc), people will absolutely gravitate towards the [5MB storage, 20MB RAM, 0.5% CPU] combination to electron's [300MB storage, 200MB RAM, 5% CPU], especially when they are running multiple such programs.
But what Electron offers is getting from zero to product across all desktop OSes in arguably the least time possible. So the real choice is between [PRODUCT_NO_SHIP] and [300MB storage, 200MB RAM, 5% CPU], and we all know what the world chose.
It seems to me that the issue here is mostly web developers, who are used to coding for browsers, slowly moving into the desktop application domain and, instead of adopting the traditional languages used there (C and C++), circumventing them by creating frameworks that allow them to continue using their web languages in a desktop environment.
If that is the case... they'll never get rid of the megabytes of bloat. That's basically just the baseline overhead of a browser engine itself.
Crazy idea: Reverse the entire process? Use C for everything, including web tasks, i.e. FastCGI? Correct me if I'm wrong, but with WebAssembly around, is it possible that C programming might make a comeback into web development?
I think the central problem with native UI is how much better-looking web-based UIs are, and the talk I cited spends time making this point. UIs built with HTML/CSS/JS have much better, modern, reactive styling options and, when done properly, look much better than, say, GTK.
I'm personally a huge proponent of C and would love to have access to Web APIs [1] via WebAssembly. That would still necessitate a runtime capable of interpreting HTML/CSS into views, but projects like webview [2] have demonstrated that this access needn't come at a huge cost. For instance, in [3], for a UI in HTML/CSS with business logic in C, system resources clock in at [RAM: 6MB, CPU: 1.3%, Storage: 10MB], which is completely reasonable.
Wow, thanks, those resources will hopefully keep me busy later... I'm building a cross-platform C engine, and had the usual suspects of "Windows, Linux, maybe Mac" in mind, but then recently also "discovered" this ancient FastCGI technology, and just a day or two ago skimmed the WebAssembly wiki, and I'd like to explore the possibility of implementing my engine in web applications too, afterwards.
But if I understand WebView correctly, wouldn't this be HTML in C/C++? I.e., is WebView... "just" a browser? Or is it "just" spawning a minimal OS window on the target system with a system-built-in browser engine? For my purposes, that's kind of the reverse of what I'm aiming for with my C project, which would rather bring sprites and raw data formats to your browser window.
In essence, what I meant with "reversing everything with C" is basically programming webpages like C programs. Taken to the extreme, what I mean is, for example, compiling Windows to run in a browser context instead of on a physical x86 PC.
An Electron GUI for rsync, when all one needs to do is read the very good rsync man page. Sigh.
Yes, yes, it's not "intuitive". But maybe, just maybe, learning and/or teaching is just as important, and remembering "rsync -avuz [from] [to]" is really not that hard.
There are reasons why it will be difficult to write a GUI for rsync:
1. The 80% use case for rsync is a carefully crafted pre-set script that runs from a crontab, not a one-off thing that you select options for and hit "run".
2. There are so many switches and options that you're not going to be able to model them all in a coherent manner without overwhelming the user.
3. The price of failure is data loss, which means that people will either be afraid to use it, or foolishly lose data and then be afraid to use it. A dry-run mode helps, but is not a panacea.
What you need is a multipart system:
1. A set of vetted recipes with good, succinct descriptions of what they do, why they're useful, and where the cliffs are, and a program to apply them for the specific user.
2. A design GUI that helps guide you through the process of building your own recipes.
3. A runner for running your recipes from the command line, so you can crontab it (if the recipe system doesn't generate self-contained executables - scripts or whatever - that you can just call directly).
4. A runner for running or testing your recipes in a GUI environment, for the 5% of times you actually need this.
Trying to mix more than one of these parts into a single program is a UX mistake.
> Unison is a file-synchronization tool for OSX, Unix, and Windows. It allows two replicas of a collection of files and directories to be stored on different hosts (or different disks on the same host), modified separately, and then brought up to date by propagating the changes in each replica to the other.
> Unison shares a number of features with tools such as configuration management packages (CVS, PRCS, Subversion, BitKeeper, etc.), distributed filesystems (Coda, etc.), uni-directional mirroring utilities (rsync, etc.), and other synchronizers (Intellisync, Reconcile, etc). However, there are several points where it differs: ...
Unison is terrible in that you must have exactly the same version (down to the micro version) of Unison on both sides. That makes it very hard to use in practice.
I think the UI for Unison is very good: after the initial config and setup, 90% of the UI is just the changelog, and you can then relatively easily choose how to sync.
Thanks for making one, I only recently learned about rsync and was wondering if there was a handy gui for it :), then it magically appears before me on HN.
I'm wary of this "Electron" thing though. Do I have to download a 500MB package to run this?
It seems like to write a tool that's genuinely helpful for non-experts, you need to become something of an expert in how people use the tool: what are they usually trying to accomplish? What are common mistakes?
It's similar knowledge to what you would need to write a book about rsync.
And the way to do that is with lots of experiments and user testing. A good start would be to find one person who really wants this and make sure it works for them. Designing things by yourself, even with lots of sympathy for the user, doesn't seem like enough?
I agree with the other users who have a pre-set of options that works for them. I have used -avzP for 99% of my rsync uses over a decade and a half (the exceptions being adding -e ssh or refactoring for backup solutions). By putting all the options in an 'options menu', keeping the defaults sane, and keeping the checkboxes off the main screen, you end up with a much simpler and easier-to-use tool.
I'm not sure. I don't want to be mean, and I'm sure building this was a great learning experience, but it seems more like an excuse to practice Electron than a way to make rsync more usable.
I think both. This is the hard part about GUI design -- it is pretty easy to make a list of 100 command-line options listed alphabetically on the man page, but for GUIs, people expect much more.
There should be some sort of logical grouping. One area of the screen would be options related to what is transferred, a second area is metadata, the third one is speed-only optimizations, fourth is logging and so on. Having "hardlinks" all the way on the left and "symlinks" all the way on the right does not make a good GUI.
There is a very wide variety of input elements, so checkboxes are not the right solution all the time! For example, while having separate "--verbose" and "--quiet" options is fine for a CLI, in a GUI one would expect to see a drop-down or a slider. This decreases cognitive load because there are fewer things to read and no need to worry about what happens if you check both.
Another great GUI feature is dynamicity -- take the metadata options, for example. A good design might have a dropdown: "everything", "nothing", "x-bit only", "custom". Most of the time the first three options would be fine, but "custom" would reveal an entire new group of checkboxes.
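To make the idea concrete, here is one way such a dropdown could translate into flags behind the scenes. The option names and the exact flag sets are my own guesses, not a canonical mapping:

```shell
# metadata_flags: turns the hypothetical "metadata" dropdown value into
# the rsync flags the GUI would append to the command line.
metadata_flags() {
  case "$1" in
    everything) echo "--perms --times --owner --group --xattrs" ;;
    nothing)    echo "--no-perms --no-times --no-owner --no-group" ;;
    x-bit-only) echo "--executability" ;;  # -E: preserve exec bit only
    custom)     echo "$2" ;;               # flags from the extra checkboxes
  esac
}
```

The GUI would then assemble something like `rsync -rv $(metadata_flags x-bit-only) src/ dst/`.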
... this was just my subjective design, based on how I use the program. If I were trying to make a popular product, I'd have to spend some time asking people / looking at the forums to determine which sets of features people use. For all I know, there is a super common case for "--group" without "--owner", and this is just me not knowing about it.