DomTerm approaches this from the opposite end from eshell: at first glance it's a solid, no-compromise (mostly) xterm-compatible terminal emulator, but it also has features that make it a great REPL/shell interface: rather than just plain text, it supports structure (delimited commands, prompts, input, output); it allows "rich" HTML output: images, styled text, fonts; it has a builtin less-style paginator and readline-style input editing, plus tmux-style panes and session management. DomTerm isn't integrated with Emacs, but there are embeddings for Atom and the Theia IDE.
FWIW when I wrote Emacs term mode my goal was for it to subsume comint mode (the basis for shell mode), but alas no-one else seemed interested in such a merger.
I thought XMLterm was going to catch on, but no one seemed to care. Almost 20 years later, and now scientists seem happy with Jupyter Notebooks.
Wolfram spent that time trying to get everything into a regular model, so any output could be used as a valid input argument. I have tried, and yet it seems only possible to use if your a priori model of the problem domain already matches the Wolfram Language.
We need better terminals. But so far they seem very specialized.
I too thought XMLterm was very cool - and I hope DomTerm can fill its place.
The author of XMLterm (R. Saravanan) more recently developed GraphTerm (https://github.com/mitotic/graphterm) but has not had time to continue work on it.
I must say I really like the UI paradigm of Emacs. It's the "best of both worlds" combination of text-based interfaces and GUIs. It supports software that can be interactive, and yet also as interoperable as CLI applications (or even more).
It's the reason it isn't silly that people read e-mails, chat or manage their files from Emacs. A CLI e-mail client is annoying because of the typing. A GUI e-mail client lacks any sensible interoperability. A curses-based e-mail client is the worst of both worlds - neither interoperable, nor particularly nice looking. An e-mail client in Emacs - and any other application made within it - immediately gains several deep levels of productivity features and interoperability:
- All your usual keybindings work. All your usual searching and editing operations work - that includes not just typical "move around and edit stuff", but also things like advanced autocompletion, interactive or batch regexp search & replace, grepping through everything that's displayed, multiple cursors, and whatever other thing you like.
- Since the application's UI is rendered mostly as text (with some minimal non-text overlays if necessary), you can navigate around, interact with, and copy everything you see. Need to copy the headers of an e-mail somewhere? If they're displayed, you can. Need to copy a list of e-mails sent by someone? Filter those mails, then just copy the list from the buffer, as if it were regular text.
- The above solves maybe 90% of your basic interoperability requirements and lets you be extremely productive. Need more? Learn some basic elisp, and now you can script or extend everything, both from outside (using the application's "exposed" API) and inside (augmenting/modifying the application's code at runtime). Emacs exposes lots of functions optimized for working with text in an editor, so it usually takes just a few lines of code to compose a new productivity feature. Need the previously mentioned list of e-mails regularly, for your weekly report? That's probably one call to generate it in the background, a few more lines to select and copy it as text, and paste it straight into the file you're editing.
Really, in my life I haven't seen any other platform embracing interoperability by default.
"The UI paradigm of Emacs" is also what ESR calls a roguelike interface - vi's too: "In Chapter 11 we described the effect of the absence of standard arrow keys on early roguelike programs; vi was one of these."
http://www.catb.org/esr/writings/taoup/html/ch13s03.html
Skimming the article, Emacs seems to be an outlier even for that category. Quoting from the abstract, the set of ideas defining Orthodox UIs:
1. Distinct command set layer with commands that can be entered from the command line and reflected in GUI interface. In this sense vi is a reference implementation and OFM inspired by vi have some interesting, distinct from traditional line of OFM ideas implemented. See ranger and vifm.
2. Tiled, nonoverlapping windows with minimum decorations
3. Stress on availability of all commands via keyboard, not only via mouse clicks, although mouse can be productively used and is used in such interface.
4. Ability to redirect output of commands executed in one window to other windows and processes.
5. Usage of GUI elements to generate commands on command line (macrovariables and such commands as Ctrl-Enter, Ctrl-[ and Ctrl-] in OFM. )
6. Accent of extensibility and programmability (with shell and/or scripting languages) instead of eye candy.
Emacs doesn't seem to meet #1. Sure, you can transfer commands from commandline to a new and/or existing Emacs instance (e.g. through emacsclient -e "some lisp code"), but it's not a common way of using it. It definitely meets #2, #3 and #4. It would fail at #5 if I understand it correctly - sure you can do this, but most of the time you use keys to execute elisp directly, not through shelling out. And as for #6, Emacs blows everything else out of the water.
This is the reason I'd like to write more about it. The author has a needlessly narrow definition.
I'd argue it is more useful to talk about "a language interpreter" rather than a "commandline". A command line is a language interpreter, of course, where the commands make up the language. Emacs is also, at its core, a language interpreter -- but its language is not a simple command set, it is Emacs Lisp.
From that perspective, one can reevaluate 1 and 5:
1. Emacs is at its core a virtual machine that runs elisp, and any modifications you make to that elisp environment show in the GUI. Change the mode line variable? The GUI reflects it. Evaling Emacs Lisp expressions is one of the most common ways to interact with the editor in more fundamental ways than offered by the interactive functions.
5. It is common for some people to use the mouse to click around in Emacs. I do it too, sometimes, with stuff like dired. These clicks are not magic built-ins. They are, just like keys on the keyboard, only bound to execute Emacs Lisp functions. Redefine the function, and you redefine the meaning of the click.
This is the very reason Emacs is as hackable as it is.
If you have more thoughts on that, I'd love to read them. So I want to encourage you to write about it. It's a very much underdiscussed topic.
I thought about it in the context of the "Tech's Two Philosophies" article (https://news.ycombinator.com/item?id=17030339) that showed up a couple hours ago. I look at the overall way Emacs works, and how many other powerful OSS tools work, and then I look at those companies mentioned, and it makes me deeply sad. Those companies, which happen to be trendsetters of our industry, are so up to their ears in moneymaking that they no longer realize most of their practices and recommendations are user-hostile and productivity-minimizing.
A lot of this seems similar to what plan 9's acme editor provides. You can run shell commands anywhere and their output is written to a new window. You can also run an interactive shell in a window and edit the text directly. I have been using this as my main dev environment for a while now and I just hate when I need an application that tries to use fancy terminal features with no fallback. Russ Cox has a good video showing how acme works: https://research.swtch.com/acme
Plus it had the benefit of being implemented in a GC enabled systems programming language.
Maybe one day we could get ACME re-implemented in Go, running on something like "Goberon System 3", especially now that it finally has plugin support, essential for how Oberon implemented UI commands.
I find that the regular `M-x shell` functionality gives you a reasonable approximation of the acme `win` workflow. The 'shell' buffer acts mostly like a regular text buffer, so you can easily move around, edit things, and cut-n-paste.
Pressing <enter> sends the current line of buffer text as command-line input.
I love and use acme too.
I hate when I hit some tool that still emits color ansi escapes even with TERM=dumb (phabricator arcanist, I'm looking at you).
Do you know if there is a simple wrapper that swallows ansi escapes but preserves the illusion that the output is a tty (so that the wrapped command doesn't freak out)?
EDIT: I'm having some success with `unbuffer -p arc diff | perl -pe 's/\e\[?.*?[\@-~]//g'` (part of `expect`) but interactive prompts are not flushing the line correctly in acme's win
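If it helps: a small sed filter can do the stripping, and something like `script -qc 'arc diff' /dev/null` (util-linux) should keep the tty illusion for the wrapped command. A sketch, assuming a GNU userland; `strip_ansi` is just a name I made up:

```shell
# Strip CSI escape sequences (colors, cursor movement) from a stream.
strip_ansi() {
  esc=$(printf '\033')
  sed -e "s/${esc}\[[0-9;?]*[@-~]//g"
}

# The text survives, the escapes are gone:
printf '\033[31mred\033[0m plain\n' | strip_ansi   # → red plain

# For a real interactive program, roughly:
#   script -qc 'arc diff' /dev/null | strip_ansi
```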
I share many of the frustrations mentioned in the article and it motivated me to make my own terminal, Extraterm http://extraterm.org/features.html . It already supports many of the workflows listed in the "Enter Eshell" section.
* Long command output is a PITA? fine, Extraterm will 'frame' it and you can easily get to the top of it (Ctrl+Shift+PageUp).
* Want to keep that output for later? No problem, you can move the frame into its own tab. Hell, you can even drag and drop it into your desktop file manager.
* You should have filtered that last command output but you forgot? Extraterm's `from` command will let you feed the contents of a frame back into your shell pipeline.
* You did a `cat file-list.txt` and now you want one of those paths? Just go into cursor mode, move up into the frame, edit in the rest of the command and run it. It's a bit like the C=64 again! yay!
* You just want to see that image? Sure, `show` command can show it directly.
* Sick of composing paths for `scp` to move stuff across machines? Use `show` and `from` directly to download and move binaries.
It also integrates with your favourite shell (as long as it's bash, zsh or fish!), and happily works across SSH. It also runs Emacs and any other normal terminal application, because it is a proper terminal emulator.
> Long command output is a PITA? fine, Extraterm will 'frame' it and you can easily get to the top of it (Ctrl+Shift+PageUp).
* Shift+Home to scroll to the top
* Shift+PageUp to scroll up
* Ctrl+Shift+F and search for " $" to find the beginning of the output
* ... | less
> Want to keep that output for later? No problem, you can move the frame into its own tab. Hell, you can even drag and drop it into your desktop file manager.
* ... &> output.txt
* Select the output with the mouse and copy it
* F10 -> Right -> a (keyboard shortcut for Select All), copy and paste into an editor, remove parts I don't want
> You should have filtered that last command output but you forgot? Extraterm's `from` command will let you feed the contents of a frame back into your shell pipeline.
Use one of the methods above to get the output into output.txt and then use `cat output.txt`.
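For the no-foresight case, `tee` at least lets you keep a copy without giving up the live output - a plain-shell sketch:

```shell
# Keep a copy of a command's output while still watching it scroll by,
# so it can be filtered later without re-running the command.
printf 'alpha\nbeta\ngamma\n' | tee output.txt

# Later: filter the saved copy instead of the (gone) scrollback.
grep beta output.txt   # → beta
```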
> You did a `cat file-list.txt` and now you want one of those paths? Just go into cursor mode, move up into the frame, edit in the rest of the command and run it. It's a bit like the C=64 again! yay!
Triple-click to select the path, middle-click to paste it.
> You just want to see that image? Sure, `show` command can show it directly.
`xdg-open` (has the advantage that it opens my image viewer which can also zoom, rotate, show next image, etc.)
> Sick of composing paths for `scp` to move stuff across machines? Use `show` and `from` directly to download and move binaries.
I'm using my file-manager for uploading / downloading files via SSH.
Of course Extraterm will work better for some of these tasks (the first two points are a lot more cumbersome with my methods), but not by so much that I think it's worth it.
Your comment makes some great arguments about just how awful the traditional ways of working in the terminal are. They're often very clumsy and tedious or require additional tools and foresight, e.g. remembering to redirect output to a file.
I'm not saying that we couldn't do these things in the past. I will say that ease-of-use matters.
Also, a lot of the methods you outlined don't work well across SSH. Extraterm's methods do.
I was going to be all "yeah, but I don't run a Linux desktop because I'm not a masochist, so my only use for a terminal is over SSH from Windows.", but you actually support win32 so I assume I can use it with SSH from there.
Unfortunately all it does is complain about not having any session types configured, offering precisely no guidance as to what to do about this. The documentation doesn't mention it at all either.
> I was going to be all "yeah, but I don't run a Linux desktop because I'm not a masochist, so my only use for a terminal is over SSH from Windows."
I purchased a Windows laptop a couple of years ago as an additional machine (my main platform is Linux), and honestly it’s using Windows which feels masochistic: ads, system prompts which pop up beneath windows[0], ads, the login screen swallowing my first few characters, ads, massive over-use of the trackpad (but this could be my fault), ads, very poor update experience (compared to Debian), ads, sluggishness, IE, Edge, ads, lack of free software and — oh yes, lest I forget — ads. Did I mention that an OS I paid for shows me ads all the time? ’Cause it does.
By comparison, my Debian machines are a joy to use. Every time I have to use my Windows laptop I physically deflate; every time I return to one of my GNU/Linux machines I sit up straight & smile.
0: When I'm using a site which requires a smart card login, Windows pops the smart card certificate & PIN prompt up beneath the current browser window — so it appears that nothing is happening, unless I move the mouse down to the taskbar & hover, revealing the waiting prompt. This is … odd … behaviour.
Don't get me wrong, Windows has its issues and lately Microsoft seems to be doing their best to make it worse, but Linux is just completely unsuitable for the ways I use a desktop OS. And apparently for the other 90% of people who also don't use Linux on the desktop.
> I've run Linux for years with no issues whatsoever.
So clearly no one else has issues, right? None of the people complaining about poor power management, insane application deployment model, poor support for high-end GPUs, general fragmentation, poor backwards and forwards compatibility, or any of the other innumerable problems have ever tried a Linux desktop.
This attitude of "it works for me, therefore it is good enough for everybody" is one of the biggest problems I have with Linux. It isn't so much that there are flaws, it's that the community refuses to recognize that there are flaws.
Doesn't actually solve any of the problems I have with Linux as a Desktop. And if I listed those problems someone would just tell me to use a different distribution, which is itself a problem. Or alternatively try to convince me they aren't actually problems "for most people" as though I should give a damn what is and isn't a problem for someone else.
I have to make a small apology here. Only cygwin is supported on Windows and there was once a useful error message but it died during recent development. This week I've been working on Windows console app support (think: cmd.exe, powershell) and WSL support. Then at the very least you would get a cmd.exe session by default.
For Windows users though, the shell integration stuff is only going to work with WSL and cygwin because anything Windows console based doesn't use or pass VT style escape codes, thus I can't extend the protocol there. This may change in the future as the MS console team is working to add more support for VT and unix style terminals. Making Extraterm into an SSH client itself is also an option for Windows but I'm having trouble gauging how important a feature that is and whether it should get some prio.
The thing is that on Windows it isn't really possible to make a terminal/console app which talks directly to the shell (cmd.exe etc) running in it. What everyone is doing is opening a hidden 'console' component which runs cmd.exe/powershell.exe and then screen-scraping the contents out of it and rendering it elsewhere. Most emulators on Windows which support cmd.exe and friends are using a little bit of software which handles the hidden console, monitors its contents and then outputs a VT style stream of the updates. It is not pretty. MS have been working to improve the situation though: https://github.com/Microsoft/console/issues/57
The bizarre twist to this story is that it is going to be easier to add shell integration features to PowerShell Core running on every platform which ISN'T Windows.
What you're describing is necessary when going the opposite direction and trying to translate windows console buffers into VT data streams, but that's not really necessary for something like this since the native console interaction isn't used. Just grab plink's stdin and stdout and you'll get precisely what comes through SSH, escape codes and all. Well, almost, I think plink does some EoL conversion.
Not to start the whole "I hate Electron" thing again (that conversation has been had -- err over-had ;-) ), but I think it's relevant to both Eshell and DomTerm (such as I understand it): I don't want an interface running in such a massive infrastructure. I use Emacs as my daily editor these days and I still don't want Eshell (I don't want to have Emacs up all the time).
But I think the arguments ring true none-the-less. Our ancient terminals are awful. Colours are basically a hack upon hack. You can't rely on getting text effects working properly. Throwing tmux/screen into the works is almost necessary and as much as I like tmux the complication at the interface between the terminal and tmux is insane. How many times have you used vim or emacs in tmux in a terminal and found that somehow the terminal settings aren't getting through properly? I'm practically an expert in that stuff now (through long hard experience) and sometimes I still run into problems that leave me scratching my head.
We're ripe for something new... but I don't think an application of that kind of girth is going to cut the mustard. Again, I'm super happy that people are tackling this problem and if it works for you, then more power to you. But I think that I'm probably pretty representative of the kind of person that lives in a terminal. I can't see that kind of thing getting popular.
What would be awesome, though, is the generation of a new set of standards. We need "terminal mark 2" that has these kinds of abilities, and we need standards that will allow interoperability with applications running in these terminals. For example, as much as the article asks if we need terminal capabilities like ncurses -- I think we do. But we also need capabilities like being able to spawn panes in a tmux pager (just like a window manager). We need proper history navigation, cut and paste (across an ssh session!), etc, etc. These things need to be environment agnostic so that we can build an ecosystem of tools that will become popular. If people want to live in Emacs, or in Electron, then great. If not, then there are potential options.
I know it's a completely half-formed argument, but I hope it resonates with some people :-)
I suspect modern terminals just evolved underneath you and you didn't notice since you were looking for a revolution. Most of what you want, you can have. I actually like eshell because I can "cat someImage.png" and it will literally inline the image for me on the terminal.[1] I don't think this is limited to eshell, either. I just don't know how to do it in other terminals.
To be clear, I think it is enticing to imagine a perfect solution. I just don't think it is fair to ignore all of the work that has happened. Nor do I think it is realistic that something will be able to get past the ridiculous cost of entry at this point. There is a reason eshell isn't a complete shell replacement.
To be direct to your points of things we need, though. I think you'd be surprised at just how well eshell does all of them. The only real limit to how well it works is that piping a lot of commands together is limited due to everything going through a buffer. For those cases, "shell" and then doing whatever I was wanting to do, works like a champ. And if I am really wanting to do some fancy stuff, an org buffer is better anyway.
> I actually like eshell because I can "cat someImage.png" and it will literally inline the image for me on the terminal.[1] I don't think this is limited to eshell, either. I just don't know how to do it in other terminals.
FWIW DomTerm doesn't really need Electron. Any embedded browser can work - using 'chrome --app=MAGIC_URL' works pretty well. The main thing Electron is used for are the menubar and a context menu - and those can be simulated with plain JavaScript.
All these features are already possible in modern terminals.
1) True color support
2) Bold and italic fonts
3) Emojis, Ligatures, unicode in general
4) Copy/paste across SSH
5) The ability to split windows into tmux-like panes and send text to / control the different panes
6) The ability to display images in true color with alpha blending
These are only a small subset of the features modern terminals have. There is absolutely no need for a "terminal 2" or for awfully slow terminal implementations based on rendering via a DOM.
"Rendering via DOM" enables rich text via HTML, which is much more (and nicer) than "bold and italic fonts" and "images". For example, look at the 'domterm help' output in the upper right pane in http://domterm.org/images/domterm-panes-1.png - it is not true that many terminals can display this. (Except of course in the trivial sense of using a library to render to an image and then displaying the image.)
It is also worth pointing out that most of the features you mention (except 1 and maybe 3) use protocols that are not widely supported. DomTerm uses familiar HTML wrapped in a trivial escape sequence. A related benefit is that reflow on window resize works - and you can copy/paste or save as HTML.
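For the curious, emitting HTML to DomTerm looks roughly like this - a sketch based on my reading of the DomTerm docs, which describe an OSC escape wrapping an HTML fragment (verify the exact sequence number against your DomTerm version):

```shell
# Wrap an HTML fragment in the OSC sequence DomTerm uses for rich output.
# (OSC 72 per the DomTerm documentation I've seen - an assumption, check it.)
html_out() {
  printf '\033]72;%s\007' "$1"
}

html_out '<b>bold</b>, <i>italic</i>, <a href="http://domterm.org">a link</a>'
```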
"Awful slow" is relative. DomTerm does take some extra seconds if you 'cat' a large file - but if that is your primary measure of a terminal emulator then our needs are very different.
The only difference between "rich text" and what you can do in a terminal is changing font families/sizes. And that is not even close to useful enough to justify running a terminal on top of a browser.
As for familiar HTML, it is trivial to write a library that accepts "familiar" HTML and converts it to SGR codes for formatting. I could do it in an afternoon.
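The afternoon-project version of that converter really is small - here's a sketch handling only `<b>` and `<i>` with plain sed (SGR 1/22 toggle bold, 3/23 toggle italic):

```shell
# Translate a tiny subset of HTML formatting into SGR escape codes.
html_to_sgr() {
  esc=$(printf '\033')
  sed -e "s|<b>|${esc}[1m|g"  -e "s|</b>|${esc}[22m|g" \
      -e "s|<i>|${esc}[3m|g"  -e "s|</i>|${esc}[23m|g"
}

echo '<b>bold</b> and <i>italic</i>' | html_to_sgr
```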
And for "awful slow", do the following experiment. Open a large text file in less in your terminal. Then scroll it continuously and monitor CPU usage (of the terminal and X together). Now compare with a real terminal. Think of all the battery life and all the energy you are sacrificing just for the ability to use multiple font sizes and families.
There are still desirable features that modern terminals struggle with, or provide in an inconsistent way because they all implement it with slightly different hacks.
Some sort of common semantic markup/annotation would be useful to allow terminals to offer intelligent click/hover/select actions on urls, file/pathnames, etc, etc.
This can be done with regex or deeper parsers in the terminal program, but that's slow & fragile. If the outputting program has a way of generating '<a href=...>yourlink</a>' the term can just interpret those and save a lot of trouble.
It means you need a) a common markup standard, b) support from enough terminals, c) support from enough output-producing programs to make it all worthwhile.
You'd probably also want any such markup to be backwards compatible so it didn't horribly mangle content on unsupported terms, unless you could sneak it in through feature detection in termcap/terminfo.
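For links specifically, such a convention already exists: the OSC 8 hyperlink sequence (supported by gnome-terminal, iTerm2 and others), and it degrades reasonably - terminals that don't understand it generally just show the plain text. A sketch:

```shell
# Emit a clickable hyperlink using the OSC 8 convention.
# Unsupporting terminals typically swallow the OSC parts and show only the text.
hyperlink() {
  url=$1 text=$2
  printf '\033]8;;%s\033\\%s\033]8;;\033\\' "$url" "$text"
}

hyperlink 'http://domterm.org/Tips.html' 'DomTerm tips'
```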
Also, I'm not sure 'rendering via a DOM' is the real point of contention here. My understanding is that there's already a DOM of sorts in most terminals, being used to represent the current window in terms of lines, rows, character cells, etc.
Those cells hold attributes for text colour, formatting, and content, etc.
I don't think it would be entirely impossible/impractical to come up with an enhanced representation along those lines that allowed supporting producers/consumers to do more better things.
A fully-fledged HTML DOM, and the bulk of a browser engine required to actually render it, is where I think the complication and performance cost come in. Not to mention the loss of some of the relative simplicity of display generation that a fixed-size character grid affords producers.
Many terminals recognize URLs. DomTerm also recognizes 'FILENAME:LINE:COLUMN:' as in error messages emitted by many compilers. Clicking on such a link opens an editor window at the specified FILE:COLUMN position. The editor is customizable, with builtin support for Atom and Emacs. See http://domterm.org/Tips.html#linkification
Neat, I've never seen the link support from there before.
And yes, I wasn't arguing in favour of using a full HTML DOM, but that potentially some simplicity-favouring middle-ground might allow new and interesting features. As you've just demonstrated :P
The overhead of an HTML DOM for terminal text is substantial and unfortunate - but does it matter once you have a few browser tabs displaying typical "modern" web pages?
Thanks for the link. I'm playing with kitty now. It looks great. It's a little memory hungry, but performance seems really amazing, so I'm probably OK with that.
I understand your point, and as an Emacs user I have also considered moving to something less "heavy" quite a few times. But I am not convinced that something like you describe can actually be built while being less massive than existing interfaces.
As a question to think about, what features does Emacs provide that are unused? I think most Emacs users end up using quite a lot of the features, so why do you think it will be possible to create something more lightweight?
Don't get me wrong, I want this to exist. But it is important to look at existing solutions first. For example, isn't the X11 or Wayland spec an implementation of "terminal mark 2"? A window manager is the shell. Perhaps we are just missing the right kind of utilities to use this environment as effectively as a terminal shell?
Another point to consider: are the frameworks massive by themselves? I would argue that the bloat comes mostly from having multiple frameworks. If all apps used the same version of electron then you could have a single electron runtime and then it could be more efficient. Same if all apps agreed on a single Qt or GTK version, or any other framework. If in fact redundancy between multiple frameworks is the problem, then another, new standard will not solve this.
In the end, I want to believe that there's something better, and if you have any examples or arguments to convince me I am eager to hear them.
On the original gnu.org machine I had a little trampoline program in /etc/passwd that simply launched emacs as my shell -- my init file looked for argv[0] of -emacs (that's how login indicates a top level program) and didn't allow exit without checking first (as that would have logged me out).
Until we had window systems this really was quite practical.
Although I'm a huge fan of Emacs, and use it as my primary text editor and IDE I've never managed to get on with eshell.
I've usually got a large number of terminals open simultaneously to maximise use of my visual memory and it's useful to be able to context-switch between "editor mode" and "terminal mode". I don't really like all my shell and code views jumbled up.
Anyone got any tips on maintaining separate configurations of eshell windows and code/text windows?
Keep eshells in another frame (or in usual UI terms, in another Emacs window). You can spawn a new frame for the current instance with C-x 5 2, and close it with C-x 5 0.
When you use multiple frames, they are still connected to the same Emacs process, so they share buffers, kill rings, etc. Personally, I use frames to have Emacs windows on multiple screens and desktops.
Yes, I use separate frames, but I find that having a combined buffer list of shells and editing windows just adds to the context switching overhead - it's just less mental separation of stuff.
I'm not sure what I want exactly - maybe something like separable workspaces that have independent saveable and restorable configurations.
Also, start using the built-in ibuffer instead of regular buffer list, if you aren't yet.
(global-set-key (kbd "C-x C-b") 'ibuffer)
> maybe something like separable workspaces that have independent saveable and restorable configurations
This probably could be handled with one of the couple windowing/"desktop configuration" management packages available on MELPA, but I haven't used any of them so I can't recommend any. Myself, I use a tiling WM, so I just place Emacs frames where I need them and keep them there.
On Windows, I use eshell a lot, because I use Emacs anyway, and it provides a more familiar shell experience while also working smoothly with Windows instead of seeing the whole system through a Unix-lens as Cygwin does (i.e. Windows paths work, you can say //myhost/myshare/myfolder/myfile.txt to directly open a file on a network share, etc.)
On Un*x systems, I use it a fair bit, too, because its integration with Emacs is more comfortable than running a regular shell from within Emacs. Being able to call elisp functions from within the shell is something I do not do often, but it is very convenient to have anyway.
I just can't use eshell. Too many programs I use expect a terminal. They use control and render codes, emoji, etc that eshell doesn't understand. And there are no Emacs alternatives for them nor am I interested in developing one.
The ansi-term mode is annoying. It doesn't work like regular Emacs so it messes up all of your keybindings.
It's much easier to just use a decent terminal emulator and live with the cruft than to fight it.
I use asdf [0] for managing the multiple versions of different programming languages. Here's the golang [1] plugin.
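For context, asdf pins versions per project through a `.tool-versions` file in the project root - a sketch with illustrative version numbers:

```
# .tool-versions (read by asdf's shims when you run `go`, `node`, ...)
golang 1.10.2
nodejs 8.11.1
```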
You need to be able to make some assumptions about your environment in order to implement tools like these. All you're doing is specifying the context in which certain commands within your workspace will run.
A common pattern I've seen with some npm packages is to have the globally installed executable delegate to the project-specific version. This is a nice solution, but it's unrealistic to rewrite lots of projects to use this approach.
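The delegation trick itself fits in a few lines of shell - a sketch where `mytool` is a made-up name and the `echo` just stands in for the globally installed binary:

```shell
# Global entry point that defers to a project-local copy when one exists.
mytool() {
  if [ -x ./node_modules/.bin/mytool ]; then
    ./node_modules/.bin/mytool "$@"    # project-specific version wins
  else
    echo "global mytool $*"            # stand-in for the global install
  fi
}

mytool --help
```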
IMO, text-based interfaces are still too limiting though. I want to interact with GUI apps programmatically as well! On macOS you can achieve a little bit of that with AppleScript (or with JavaScript in newer versions), but it's still not frictionless.
Oh boy, if someone ports Common Lisp management to that, it'll be a huge mess. In CL, we already have ASDF[0] - and it's the de-facto standard build system for the language.
Everything is great about eshell except for one thing: the documentation is crap. Nobody uses it because it's hard to find out how to use it efficiently. Personally, when I launch eshell sometimes it takes me 10 minutes to google how to do basic operations, after which I quit eshell and forget about it for another couple of months.
The concept of terminals and shells is still a beautiful thing for its simplicity. Yes, it has quirks, and yes, it has bad graphical support. But that's also the advantage. This brings interoperability into the ecosystem. Programs that output text are trivially also programs that can draw to the screen. Try doing that with a GUI program!
Now, regarding the distinction terminal vs text. Translated to the GUI world what the author says is basically, "meh, my framebuffer doesn't support me in opening URLs even if I tell it the location of the pixels in its memory".
Of course that can't work. In the same way, the terminal is just an output device. GUI terminal emulators (like xterm) come with some additional features like selecting and pasting text or reporting mouse events.
We can easily make a shell that buffers all the output from the jobs run through it, and possibly stores a copy of them.
However, that would not play nice with programs that actually want a terminal connected to take advantage of advanced capabilities. The shell would have to simulate a terminal for these programs to work. That's not trivial - it's basically what programs like screen and tmux do.
So instead, we're forced to re-run jobs in cases where simple mouse selection from the terminal does not work. Or, if we know in advance that we'll want to store contents, we'll just redirect the output explicitly to a file.
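For the "store a copy" case, a thin shell wrapper gets you most of the way without simulating a terminal. A sketch (the `keep` name and log location are made up for illustration):

```shell
# keep: run a command while saving a copy of its combined output,
# so you can grep or reopen it later instead of re-running the job.
KEEP_DIR=${KEEP_DIR:-$HOME/.shell-output}

keep() {
  mkdir -p "$KEEP_DIR"
  log=$KEEP_DIR/$(date +%Y%m%d-%H%M%S)-$1.log
  # tee shows the output as usual while writing the copy
  "$@" 2>&1 | tee "$log"
}

# Usage:
#   keep make test                     # output shown and saved
#   grep -l error "$KEEP_DIR"/*.log    # find it again later
```

Of course, a curses program run under `keep` would misbehave - which is exactly the advanced-vs-plain split discussed below.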
It's not perfect, but I've yet to see a GUI that comes close to the comfort I get from a terminal + shell combination. A big part of that comfort comes from the fact that so many programs work trivially in this environment.
I think one question might be, should we make a distinction between "advanced" and "output-only" terminal programs? The former could be started with an explicit "start" command or similar, and speak to the terminal directly. The latter could have their output redirected to the shell, which does whatever thing with it (display it in a text box, have it easily searchable, etc...).
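The plumbing for that distinction already half-exists: a program can ask whether its stdout is a terminal (isatty(3); `test -t` in shell). A sketch of the check in POSIX sh:

```shell
# describe_stdout: report whether output goes to a terminal or is
# being captured -- the same check tools like ls and
# grep --color=auto use to pick "advanced" vs "output-only" behavior.
describe_stdout() {
  if [ -t 1 ]; then
    echo "stdout is a terminal: free to use cursor movement, colors, etc."
  else
    echo "stdout is redirected: plain text only"
  fi
}
```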
I think this would quickly lead into a rathole, where we want to make more distinctions, like programs that output files that can be clicked on, programs that output URLs, etc. So, those programs should probably just indicate what kind of thing they are drawing themselves, by using a complicated API. We've reinvented the GUI!
> GUI terminal emulators (like xterm) come with some additional features like selecting and pasting text or reporting mouse events.
I think that's sort of the point of the author. Emacs is kind of like a GUI terminal emulator (and multiplexer). Kind of, because in reality it's an extremely advanced and powerful GUI for everything text-based.
Of course, it does have its quirks, especially when used to run curses tools. The problem here is a difference in philosophy between curses and how you would write software for Emacs. In fact, it's a very similar difference to what you mentioned about CLI vs GUI programs. Emacs UI conventions assume high interoperability as a fundamental platform feature. Curses applications are like GUIs or modern web pages - they want to have full control over content, instead of giving most of it to the platform. That's why a curses app run under Emacs will suck (assuming it'll even run correctly). An alternative application written for Emacs will excel at productivity and interoperability, because instead of trying to lock you out of what's on screen, it'll yield itself to the full feature set of the platform.
Looking at the readline-ncurses-termcap stack, simplicity and interoperability are not the first words that come to mind. And despite all the work sunk into it, I still seem to stumble upon more or less broken terminals way too often, ranging from cosmetic errors to completely unusable behavior (e.g. bash with broken line wrapping, leading to misplaced cursors and being unable to see what I'm actually about to run).
The model is extremely simple. What you experience is just that the terminal state is not coupled to process lifetimes. But this separation is required for interoperability. "reset"/"tput reset" are your friends.
Btw, if you have bugs other than those from left-over state from crashed processes, try xterm. There are too many other terminals that try to do too much, in ways that cannot work in all cases.
I feel the performance with large outputs (5-digit line counts) got better with Emacs 26. At least since switching 1.5 months ago, I haven't noticed much lag anymore.
Right now I'm in the "experimenting with shells in Emacs" phase, but I can already report good results. Most of the typical shell work I do in eshell now, but I also keep some shells (regular and M-x async-shell-command ones) in the background, to e.g. connect with openvpn or run node.js servers. I haven't seen any performance issues so far.
So I heard, yes, but before Emacs 26, I had problems with some buffers (particularly SLIME and eshell) slowing down as they accumulated a couple thousand fontified lines. I usually had to clear the SLIME buffer (C-c M-o) after 1-2k lines because the slowdown in typing got noticeable. Since switching to Emacs 26, I've had SLIME accumulate a hundred thousand lines without visible slowdown.
I didn't spend much time digging into Emacs and profiling the problem, so I can't tell you much beyond my guesses that it had to do with fontification.
Part of the magic of eshell is that it can use elisp functions as if they were shell commands. So instead of C-x C-f filename, you can use find-file filename as a command. find-file in particular is useful as an alias.
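For reference, eshell aliases can be kept one per line in the file pointed to by `eshell-aliases-file` (commonly `~/.emacs.d/eshell/alias`, though the path may vary by setup); a sketch with hypothetical short names:

```
alias ff find-file $1
alias d dired $1
```

After this, typing `ff notes.org` at the eshell prompt opens the file in a regular Emacs buffer.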
You know what's a real internet rabbithole? Reading up on terminal emulators, and how they interact with (n)curses. Only recently did I become aware that Thomas E. Dickey maintains both ncurses and xterm. And do you know how many people are trying to scratch the itch to write a better one? From the vast number of terminal emulator projects, alive, dead, and zombie-shuffling along, it would seem that everyone on the internet is working on, or has worked on, a better terminal emulator.
Famous last words! I think this may be why so many of the terminal emulator projects are abandoned. One thinks, "I could do that," and then trips over Dickey's vttest and Paul Flo Williams' vt100.net. And maybe ECMA-048. And then one remembers about unicode... Anyhow, I was going to start one this morning after reading about eshell, but my research persuaded me this would be a lot of trouble.
A lot of the benefits mentioned here are not eshell-specific; they also apply to `ansi-term` and `shell`, just by virtue of being in Emacs and being regular buffers.
The main benefit of this article for me is discovering fzf.
I've tried using eshell many times, but there are various quirks that just break it for me. It's also really hard to beat tmux for window/pane/session management.
When I used Emacs, for some reason it scrolled more than one line at a time as I hit the end of the screen. This was literally the only reason I have never bothered with Emacs and stuck with vim.
Terrible I know, but that's my story. I guess Eshell is not for me...
In my config, shift-up and shift-down scroll by one line no matter where the cursor is, and the cursor can often remain where it is until you scroll too far. (Maybe it's outdated, since I did it a long time ago.)