A look at terminal emulators, part 2 (lwn.net)
222 points by vquemener on April 26, 2018 | 73 comments



Shameless plug: the article puts the minimum latency of your keyboard at 3ms. I recently published the kinX, a keyboard controller for the Kinesis Advantage with merely 0.2ms of input latency: https://michael.stapelberg.de/posts/2018-04-17-kinx/


That article led me to the page on Cherry’s site about their “analog keyboard controller”, which promises no scanning latency. “Analog” suggests to me that each key has its own power-of-two-ish resistor value, and the controller can just look at the total resistance to see which keys are pressed. Does this seem plausible? If so, it could be a fun way to build a DIY keyboard.


More likely they are just using a package with enough pins to read every key individually, without a scan matrix. There are no real cost constraints on doing this nowadays with modern IC packaging, and the added hardware to track all those inputs is inconsequential with modern process technologies. Then their marketing department got hold of it and came up with some BS to sell it.
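If so, the firmware side becomes trivial. A rough sketch of the idea, with read_pin() as a hypothetical stand-in for whatever GPIO read the real controller uses:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_KEYS 64

    /* Hypothetical stand-in for the MCU's per-key GPIO read. */
    static bool read_pin(int pin) { (void)pin; return false; }

    /* With a dedicated input pin per key there is no scan matrix and
     * no row-strobe settling time: one pass over the pins yields the
     * complete key state. */
    static uint64_t read_keys(void) {
        uint64_t state = 0;
        for (int i = 0; i < NUM_KEYS; i++)
            if (read_pin(i))
                state |= (uint64_t)1 << i;
        return state;
    }

    int main(void) {
        printf("key state: %016llx\n", (unsigned long long)read_keys());
        return 0;
    }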


Yes, that sounds a lot easier than the resistor thing (though even the resistor approach sounds doable).


> Does this seem plausible?

No.

1. The precision of your resistors is a limiting factor. If you use standard 1% resistors, you can't even put 8 switches on a single ADC: the tolerance band of the largest resistor ends up wider than the entire value of the smallest one (rough numbers in the sketch after point 3). Increasing the precision of the resistors helps to a degree, but it drives up the price dramatically.

2. Contact resistance and switch bounce become a problem -- instead of just making a key show up twice, they could potentially result in a key you didn't press showing up.

3. Using ADCs doesn't get you away from scanning. ADCs have a sampling rate too -- and unless you pay a lot for your ADCs, it'll be slower than scanning a diode matrix.
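To put rough numbers on point 1, a back-of-the-envelope sketch (my assumptions: power-of-two values in units of the smallest resistor R, worst case taken as the tolerance of the largest resistor alone):

    #include <stdio.h>

    /* With power-of-two values R, 2R, 4R, ..., a 1% tolerance on the
     * largest resistor soon exceeds the entire value of the smallest
     * one, making key combinations ambiguous. */
    int main(void) {
        const double tolerance = 0.01; /* standard 1% resistors */
        for (int keys = 2; keys <= 10; keys++) {
            double largest = 1 << (keys - 1);   /* in units of R */
            double error = largest * tolerance; /* worst-case drift */
            printf("%2d keys: +/-%.2fR on the largest resistor%s\n",
                   keys, error,
                   error >= 1.0 ? " <-- wider than the smallest value (1R)" : "");
        }
        return 0;
    }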


While it sounds difficult to do, I wouldn't underestimate the manufacturers.

> 1. The precision of your resistors is a limiting factor.

Suppose your resistors and measuring equipment were good enough that you only needed 5% spacing between neighbouring values. Then you can squeeze 141 distinct values in between 1 kilohm and 1 megohm. Seems plenty to me.
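A quick check of that count, for anyone who wants to verify the arithmetic:

    #include <math.h>
    #include <stdio.h>

    /* How many resistor values fit between 1 kilohm and 1 megohm if
     * neighbouring values must be 5% apart? */
    int main(void) {
        double lo = 1e3, hi = 1e6, step = 1.05;
        printf("%d values\n", (int)floor(log(hi / lo) / log(step))); /* 141 */
        return 0;
    }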

> 2. Contact resistance and switch bounce become a problem -- instead of just making a key show up twice, they could potentially result in a key you didn't press showing up.

They definitely become issues, but they strike me as manageable. For good switches, contact resistance should add perhaps 1 ohm of variation, and above we saw that a 50 ohm gap between neighbouring values (5% of the smallest 1 kilohm value) is enough margin.

Switch bounce means you can't afford to accidentally average over a bounce: you need to sample fast (say 200 kHz) and then take, say, the Nth-highest of 20 samples. That gives you a result 100 microseconds after the first switch goes high. (Obviously that algorithm is just a guess, to show the kind of options engineers have when they come up against the real system.)
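In code, that guess might look something like this -- purely illustrative, with adc_read() as a hypothetical stand-in for one ADC conversion of the resistor network:

    #include <stdio.h>
    #include <stdlib.h>

    #define SAMPLES 20 /* 20 samples at 200 kHz = a 100 microsecond window */
    #define NTH      5 /* take the Nth-highest sample to ride out bounces */

    /* Hypothetical stand-in for one ADC conversion. */
    static int adc_read(void) { return 0; }

    static int cmp_desc(const void *a, const void *b) {
        return *(const int *)b - *(const int *)a;
    }

    /* Sample fast across the bounce window, then take the Nth-highest
     * reading rather than an average, so a mid-window bounce can't drag
     * the measurement to a value between two valid codes. */
    static int read_key_code(void) {
        int buf[SAMPLES];
        for (int i = 0; i < SAMPLES; i++)
            buf[i] = adc_read();
        qsort(buf, SAMPLES, sizeof buf[0], cmp_desc);
        return buf[NTH - 1];
    }

    int main(void) {
        printf("code: %d\n", read_key_code());
        return 0;
    }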

> 3. Using ADCs doesn't get you away from scanning. ADCs have a sampling rate too -- and unless you pay a lot for your ADCs, ...

Yes, you need a good ADC, but that ADC is still a single IC doing nothing vastly fancy. Such a thing is difficult and expensive to buy retail, but if I were Cherry I might well be able to negotiate a deal that brought it to a fraction of the end price of the keyboard (which I would sell at a premium).


> Suppose your resistors and measuring equipment were good enough that you only needed 5% spacing between neighbouring values. Then you can squeeze 141 distinct values in between 1 kilohm and 1 megohm. Seems plenty to me.

How would you detect multiple simultaneous keypresses, either for n-key rollover or for shift/ctrl/alt types of keys?


Went looking for this after you mentioned it. As far as I can tell, Cherry is not licensing that tech or selling it yet except in the form of a complete keyboard.

https://the-gadgeteer.com/2017/08/11/cherry-mx-board-6-0-mec...


That does seem plausible to me at first glance. I had read about the analog keyboard controller before, but nobody seemed to _really_ know what they’re doing. It would be interesting to confirm this :)


How would a keyboard like this detect simultaneous key presses? It seems like you'd either see only the lower-resistance key (if the resistances are orders of magnitude apart) or end up with a completely nonsense value.


Let's say that Q, W, and E are 1Ω, 2Ω, and 4Ω, respectively. I'll pretend they're wired in series to make the numbers work out nicely, but it should also work in parallel. If you measure 3Ω, then Q and W are pressed. 6Ω: W and E. 7Ω: all three.
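Put differently, the measured total is just a bitmask, so decoding is trivial. A minimal sketch of the series case:

    #include <stdio.h>

    /* Series case from above: Q = 1 ohm, W = 2, E = 4, so the total
     * resistance is literally a bitmask of the pressed keys. */
    int main(void) {
        const char *names[] = {"Q", "W", "E"};
        int measured = 6; /* 6 ohms -> W and E */
        for (int bit = 0; bit < 3; bit++)
            if (measured & (1 << bit))
                printf("%s pressed\n", names[bit]);
        return 0;
    }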


That's the keyboard I use... it's the most perfect keyboard ever.


All these latencies are well below 50ms worst-case. For me, those are perfectly fine for typing. While it is obvious that high latencies have an adverse effect on typing, there is certainly a "good enough" threshold. I find it very hard to believe that you'd be able to see any difference between 2ms and 20ms of latency while typing. The article even implies that terminals with higher latency might cause RSI, which is damn close to fearmongering. The article also claims that the GNOME Human Interface Guidelines require 10ms latency, while clicking that link shows "0.1s", which is much more reasonable. And just to be clear: I'm strictly talking about typing. I do not question that in other areas latencies have to be much lower than 50ms (mouse movement, music making, etc.).


Latency, as the article points out, is less of an issue than jitter.

Consider that 80wpm works out to roughly 150ms a keystroke (at the usual five characters per word). 50ms of jitter means some keystrokes land a third of a beat later than others. I suspect that's a noticeable difference; indeed, I suspect I've been noticing it for decades now.
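For anyone checking the conversion (assuming the usual five characters per word):

    #include <stdio.h>

    /* Convert words per minute to milliseconds per keystroke,
     * at the conventional 5 characters per word. */
    int main(void) {
        double wpm = 80.0;
        double cps = wpm * 5.0 / 60.0; /* characters per second */
        printf("%.0f wpm = %.0f ms/keystroke\n", wpm, 1000.0 / cps); /* 150 */
        return 0;
    }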


Agreed. Where it's inconsistent, I personally find myself working harder to manage the stream of input.


I can definitely feel the lag in normal usage, and it stresses me. Having tried various terminals, I've always quickly returned to xterm with bitmap fonts -- even before I'd heard of latency measurements being done on terminal emulators. xterm feels almost instantaneous. GNOME Terminal, for example, always felt sluggish. Not having done any measurements, I couldn't tell what it was (and didn't think other terminals could possibly have noticeably higher latency) -- and I still think there can be other contributing factors, like the use of vector fonts and anti-aliasing. Or, as someone else points out, it might be more jitter than lag.

I also never liked GNOME 3, and suspected it must have something to do with it being a compositing WM and the latency it probably adds on top.


Between Windows, macOS, and the major Linux desktop environments, practically no one is using a non-composited desktop these days.

Compositors tend to aim for 60fps, because that's what most displays can handle. That means you _start_ with 16ms worst case latency on the output side.


You can use one of the many bare-bones window managers to get rid of compositing. I still use xdm and only recently switched away from icewm (because of bitrot) to openbox.


The physical display is still 60 fps. Frankly, CRT displays used to be only 60-80 fps.

I still think that there are much longer (and noticeable) delays in the terminal emulators themselves, dwarfing the compositing and display delays.


So am I the only one left without a compositing WM?


Here I am too. I stick to dwm, and am currently migrating to wmutils for portability; no compositing on my side. Firefox is the only GUI program I use (besides st), and it will probably remain that way, since the web has become a chaotic and complex environment, unusable without a major web browser.


While I agree with most of what you put here, the idea that higher latencies may contribute to RSI is plausible.

A while back, I was showing symptoms. It turns out that I was tensing up during periods of waiting to see feedback. Now that I'm aware of it, it's no big deal, and I don't do it anymore. Symptoms gone.


As a gamer, I wonder how noticeable this latency truly is (2ms - 50ms). The browser runs at "60fps", so any input has a 16ms delay by default (unless I'm misunderstanding the browser). This kind of delay does not faze me one bit. However, if it were 3 frames (i.e. ~50ms), how much would that bother me?


Lands right in the range where you see the last char hit the screen as the next one is being typed, or is about to be, depending on typing speed.


50ms = 0.05s = 1/20s

Are you sure you're typing 20 letters per second?


No, not often. For some common words, yes. It depends on the keyboard at that point. Some will take it, some won't.

In game terms, input can approach that speed, which is really what I was writing about. I should have said: by analogy.

For me, a whole lot depends. If I am transcribing, I look at the screen, and it can matter. Same for quickly typing a common phrase. I'll blast that out.

For most things, it's a few CPS and slower.

And this is all modal. Interactive use, like cursor navigation, is different from general character input, which is different from common, short words.


Yes, it is plausible for latencies in the range of >100ms, but not for the ones shown in the article.


You don't think someone with a pretty rapid response time could tense and relax?

They may just do it when the latency varies.


You mean you hit key "a", and while waiting 50ms for it to appear you tense up? No, I don't find that likely.


For one key, in isolation, agreed, not likely.

However, for a string of them? Yeah, could be a factor. My own trigger was mere awareness.

Just knowing it could be latent to excess triggered that tensing, like "stay ready."

All I'm saying is it's plausible. People can, do, and will respond very differently to these kinds of things. Worth investigating.


For typing I doubt it; ideally you should be able to type fine even if the display only updated every 10 seconds.

For the arrow keys, e.g., or anything interactive, you will probably be watching for the response.


Agreed. Another example would be gaming with the keyboard -- latencies are surely much more noticeable there. I would guess it's similar to music making, where you strive for latencies under 10ms.


It's surprising how (relatively) poorly Alacritty did. The first question in their FAQ still says:

> Is it really the fastest terminal emulator?

> In the terminals I've benchmarked against, alacritty is either faster, WAY faster, or at least neutral. There are no benchmarks in which I've found Alacritty to be slower.

This despite them already acknowledging that they have problems with latency [1].

1. https://github.com/jwilm/alacritty/issues/673


That's because alacritty's claims are pure bullshit: https://github.com/jwilm/alacritty/issues/289


For a long time, the response to "but what about scrolling?" has been "eh, just use tmux".

I think the author may have walked back on that a bit, but this is definitely the thing that turned away most of the people I know who tried it.


I actually just started using alacritty full-time recently, after walking away from it a few months ago because of the lack of scrollback. The only reason I'm back is that I am now super comfortable with using tmux in my workflow (and tmux's scrollback), so I don't need scrollback in a terminal emulator anymore.


You are conflating the presence of scrollback buffering with scrolling speed. craftyguy was talking about speed, and thus the latter.


Also, it requires OpenGL 3 (IIRC), so you can't even run it on a lot of older computers (2012 era?). I can't on an X201.


2012 is the Sandy Bridge/Ivy Bridge era; the X201 is older than that.


I believe the X201 was from 2010. I can add that I can't use alacritty on my X201 for this reason either. I've stuck with st since I find it easy to customize (Xresources seems to require scanning lots of documentation, versus just a quick grep through the code), but I wouldn't call it anything special. I might switch to mlterm if I can put in the effort to learn the Xresources incantations I need.


I believe that's because Alacritty focused on optimizing for throughput instead of latency. That is, it will take less time to render a large chunk of output, such as when running yes.


iTerm2's new Metal renderer [1] has greatly increased my terminal happiness on macOS. Highly recommended.

[1]: https://gitlab.com/gnachman/iterm2/wikis/Metal-Renderer


Just enabled this and it's very snappy. Thanks!


I just tried it and ran a `time cat bigassfile.txt` test. How come it's slower after I enabled this? I have a mid-2013 MBA.


When the Metal renderer is enabled, the screen updates at 60fps. When it is not, the refresh rate is governed by the current bandwidth. There are cases where the new renderer has lower throughput because it does not use an adaptive framerate. I'll look into this.


As another data point: for a 100k-line log file, iTerm2 showed up to 2.5x variability between runs, and even the fastest run was about twice as slow as Terminal.app.


I tried it myself without benchmarking and it felt much faster, but actually timing it shows the original renderer was about 1.5s faster. Fascinating. I'm guessing the original renderer drops frames.


Lack of ligature support is a deal breaker for me, though, and there doesn't seem to be an easy fix for this issue.


I haven't jumped on ligatures yet... In what ways do they increase productivity?


They don't but they don't reduce it either. They just look pretty. Some people like it while others do not.

The problem with the IT community is that we often try to argue personal preference using some bastardisation of the scientific method. So you'll get people making claims about readability et al. But really it's just personal preference.

Personally I quite like them when used with a typeface that doesn't go nuts with ligatures. Hasklig is a great example; it's based on Source Code Pro and only really uses ligatures for character combinations that are generally only used together in an ASCII art kind of way. But that's just what I find pretty; others will undoubtedly hate it and have their own reasons too.

So my advice is just to experiment. If you find them pretty, then use them; if you don't, then don't.


Code readability.


It makes code less readable for me. To each his own, I suppose.


I agree. The ligature thing looks neat at first but I have spent far too many years looking at regular ASCII text.

I mean, spot the difference between ==, === and =. Or <= and ≤. I especially hate != rendered as ≠.

And since at some point we have to use a different editor or read a git diff, we need to be able to read the regular ASCII form anyway. So what's the point, and why bother learning the ligatures?


I like the ligatures too, but I found it was slowing my terminal down too much. YMMV


I wonder how much “deep” profiling we can do to improve things, e.g. to know how much is just graphics, how much is time spent processing byte-sequence spam, or heck even time spent processing scrollback buffers and stuff. I suspect it would turn out to be a mix. And I strongly suspect we could identify just a handful of efficient terminal sequences and push for broader support of the most efficient ones (i.e. things that directly tell the emulator to do X instead of receiving three dozen other sequences that produce the same result).

Graphics are a likely culprit but even then there can be multiple layers to the problem, sometimes literally. Putting bits in a window can be surprisingly expensive and it’s hard to have nice bells and whistles and speed at the same time.

It gets harder when the emulator is handed many bytes at a time. I once observed that simply splitting a “vim” buffer vertically made my terminal receive significantly more bytes (extra spaces for layout, plus several more special terminal sequences). The split also seemed to trigger more “full screen” or “most of screen” refreshes, versus the smaller, cheaper updates typical of a single editor pane. Scrolling, as it turns out, is a lot more complex in a split buffer.
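To make "directly tell the emulator to do X" concrete: DECSTBM and SU are standard VT100/ECMA-48 sequences that scroll a region in one short burst instead of re-sending every row, and whether an emulator handles them on a fast path is exactly the kind of thing such profiling would reveal. A sketch:

    #include <stdio.h>

    /* One "do X directly" sequence versus repainting: scroll part of
     * the screen with DECSTBM (CSI r) and SU (CSI S) instead of
     * re-sending every row. */
    int main(void) {
        printf("\x1b[5;20r"); /* DECSTBM: restrict scrolling to rows 5-20 */
        printf("\x1b[1S");    /* SU: scroll that region up by one line */
        printf("\x1b[r");     /* reset the scroll region to full screen */
        fflush(stdout);
        return 0;
    }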


Wow, is a terminal emulator expected to write content to your /tmp? Security!

> It turns out the VTE library actually writes the scrollback buffer to disk, a "feature" that was noticed back in 2010 and that is still present in modern implementations. At least the file contents are now encrypted with AES256 GCM since 0.39.2, but this raises the question of what's so special about the VTE library that it requires such an exotic approach.

https://bugzilla.gnome.org/show_bug.cgi?id=664611#c48 seems to clarify. This affects all VTE-based terminal emulators, but only when set up with unlimited scrollback.

See https://lwn.net/Articles/752924/ for a quick test I made, which seems to confirm that with limited scrollback, nothing is written to disk.


I started using Cathode a lot. It has terrible input latency, destroys my battery, and is very hard to read, but coupled with some Cherry blue switches it really brings me back to a better time.

If I ever need to blow off some steam, I just start smacking the "degauss" button.

Puts the meta key in the wrong place but I'm usually using vim for remote editing anyway.


The difference between Vim (GTK2) and Vim (GTK3) is rather jarring. I have two hypotheses:

1. GTK3 is just slow.

2. The GTK2 code path in Vim is older and thus more optimized than the GTK3 code path.

Which one is it?


I would put good money on the first one, based on experience with GTK development and my guesses about the way Vim uses GTK (presumably, they don't do anything different across GTK2/GTK3 that they don't have to).


Maybe GTK2 didn't vsync?



While I understand that testing all terminals like this would be unreasonable to ask, I would have liked to see at least a couple of real end-to-end tests, i.e. captures with a high-speed camera measuring keyboard-to-display latency. Considering how complex the whole pipeline is, that would have helped validate the methodology used and might also have revealed additional differences due to the way the applications render themselves.


Keep in mind that most monitors add at least 10ms of latency on top of the inherent latency introduced by the refresh rate, and most input devices are quite slow as well.


Really surprised they didn't include the Kitty terminal. It's the fastest I've found on macOS (even compared to Alacritty) when using the PragmataPro font (which is what made me switch away from Terminal.app due to slow rendering).


Couldn't agree more! With all due respect to Alacritty, I find Kitty easier to install and update, with more features and at least comparable performance (it also uses OpenGL for rendering). Overall it seems a pretty mature product, and the author is very responsive to GitHub issues.

I'd like to see it more in these terminal-related articles and discussions...


Just set up uxterm -- and I don't know if this is placebo, but it feels noticeably snappier than urxvt.


I prefer urxvt, but it does take customization and work to get it up to par feature-wise. That's a good thing, though, as that's the power of urxvt in the first place. I also have a strange affinity for the Xfce terminal; I use it sometimes even though awesome is my WM.


Hm, gedit is in there, but kate is missing ;) Interesting article, though: I use Terminator (I like the tiling functionality) and vim as my editor, and it never felt that "slow" to me (compared to konsole or gnome-terminal; but that's of course terribly subjective).


gnome-terminal never felt slow to me. I tried out a few of these other terminals and honestly, alacritty and terminology and gnome-terminal all feel exactly the same.


There's a whole class of terminal emulators that this ignores, including (but not limited to) bogl-bterm, zhcon, kmscon, fbpad, fbterm, the one in Linux itself, and console-terminal-emulator+console-fb-realizer.


mlterm also supports sixel, ReGIS, and Tektronix modes.


Never tried mlterm before... it's insanely fast.


Okay, so this is a thing.



