A Testament to X11 Backwards Compatibility (theresistornetwork.com)
258 points by foob on Dec 4, 2013 | 89 comments



This is a great positive example of the (often negatively presented) double-edged sword that is extreme backwards compatibility.

X11 is an incredibly backwards compatible system, but this comes at the cost of positively absurd code complexity. I remember reading some release notes from a new version a few years ago that said 100,000+ lines of deprecated code were removed, and that was barely a drop in the bucket.

And, of course, this is why projects like Wayland and Mir (Canonical's new display server) have been born — it's simply too cumbersome (and in some cases, impossible) to innovate on a behemoth like X11.

But people forget why X11 has such a giant codebase. It's not just bloat — a lot of it exists for a very good reason, and this article is an excellent illustration of what that is.


I will never understand why people insist on making new clients and servers that are backward-compatible with terribly old, deprecated protocols, instead of just preserving the old clients and servers that spoke those protocols natively within some sort of sandboxed emulation system.

Take Trident (IE's rendering engine), for example. Why can only one version of IE be installed at a time? Why does IE11 come with IE7/8/9/10 "compatibility modes" (which, much as they might claim, aren't faithful recreations of IE7/8/9/10 behavior)? Wouldn't it make more sense to just ship an IE "browser chrome" that could load up the actual rendering engine DLLs from IE6-through-11 (similar to how Google made IE connect to Chrome Frame), with increasing levels of virtualization and sandboxing as you go further back to cope with the difference in environments?

Apple and Microsoft have both got this right a few times: Apple had Rosetta for PowerPC emulation; Windows had WOW16 and WOW64. But besides these isolated instances, it really isn't a common idea.


EDIT: I didn't see your edit which gave Apple's Rosetta and X11 as examples. Those are good counter-examples to my point. But then again, those strategies were not done by X11 itself, rather by compatibility layers as you suggested. And in fact this is precisely what a lot of companies are doing, for example the XMir project (http://mjg59.dreamwidth.org/26254.html).


I agree that that's a great strategy in general, but I don't think it translates well to display servers.

You can't just "put it in a sandbox". You need driver support, and you'd need a sandboxed environment sophisticated enough to deal with all the idiosyncrasies of the X protocol.

Even running an old version of X11/UNIX in a virtual machine is of limited benefit, because X11's core functionality is often tightly coupled with drivers and other things which would need a lot of special wrappers and hacks to work properly.

I'm not saying it wouldn't work (and I am not an X11 developer), but I do think there are extremely compelling and legitimate reasons for people to not follow your suggestion (at least in this case).


Why not make a clean break? None of this sandbox stuff.

X11 2: Electric Boogaloo - The list of major graphical Linux programs in active use is very finite, and most of them don't talk to X directly but rely on stuff like GNOME or Qt, which are libraries that have frequent churn anyway. No?

I bet if you could get the Top 30 projects to target your clean shiny X replacement in one fell swoop, you'd end up most of the way there in terms of adoption within a few years. Linux is command-line centric and frequently updated, after all, with users who expect things to break anyway. "Old" X odds and ends that won't be updated can be thought of much like Java Applets are thought of nowadays.

Oh you need that abandoned Chemistry application from 2006 that depends on X11? Spin up a VM. We can afford to be more pragmatic nowadays. Nobody expects to run 5 year old apps on their smartphones, but desktop linux can't afford a clean break once every 2 decades?


You're looking for the Wayland project. Excellent implementation. They reuse lots of ancillary X11 code too, like the input code from X.org, which has the virtue of working with a billion weird language/input/hardware combinations.

You can run X applications on top of it, transparently. They perform just as well.

GTK+ and Qt have been ported to it, Gnome and KDE are working hard at getting full ports over.

Canonical, of Ubuntu fame, has recently started working on a competitor, following the same strategy, called Mir. They've ported Qt to it, but are generally lagging. Wayland has the advantage of being crewed almost entirely by old X.org hands, who know a lot about what they're doing.


> Oh you need that abandoned Chemistry application from 2006 that depends on X11? Spin up a VM. We can afford to be more pragmatic nowadays.

That was exactly what I was suggesting with "sandbox stuff"--except that the OS can provide a stub library that recognizes what that "abandoned Chemistry application from 2006" is asking for, and spin up the VM itself, in the background, making it look as if it was just a compatibility layer. See, for example, Windows 7/8's "Windows XP Mode" -- or, as I mentioned before, Rosetta.


That's not as easy as it sounds. In a desktop GUI, customers expect seamless integration between such sandboxes. Making the clipboard and drag and drop work two-way between such VMs can be a lot of work. Even for simple text, that may involve encoding conversions. That, in turn, means that the host must be able to infer what encoding the guest expects. Styled text and graphics are way harder.

The VMs also must react to changes in the parent OS such as a change in the keyboard layout, two apps running in different VMs may both want to have some control over hardware, etc.

The approach works well in Windows because Microsoft spends lots of time on it and because the compatibility changes aren't that great.

Of course, it also works well in systems where there is no need for the VMs to interact (other than via the network). VM (http://en.wikipedia.org/wiki/VM_(operating_system)) is a nice example there.


Making the clipboard and drag-and-drop work between any two X11 programs running on the same server is a hell of a lot of work, and many times simply impossible. So why are your goals for sandboxing so impossibly high?


Because I want my system to work, unlike that example you give. And impossibly? It worked fine in Apple's 68k-PPC and PPC-x86 transitions, across the various shims that Microsoft has for all kinds of old software, and may even work fine across various VM hosts running on Mac OS X (disclaimer: I have little experience with those).


> Oh you need that abandoned Chemistry application from 2006 that depends on X11? Spin up a VM.

No, just load a compatibility library, possibly in the form of a wrapper program. If Wine can run Windows programs on Linux by translating Win32 to POSIX and X11, which it can to a very usable extent, something equivalent can translate X11 library calls to whatever the new library speaks, especially since it will be able to use whatever X11 code it needs.


And now, you're supporting the old system and the new system.

Maybe it's worth it. Maybe it's not. Just keep in mind that if you're keeping an emulated version alive, someone has to mind the emulator, and that's not free.


Sandboxing doesn't remove the need to provide ongoing security maintenance for a Web browser engine. For one, sandboxes can be attacked through whatever IPC mechanism you provide (e.g. Pwnium). But more importantly, the Web browser engine enforces security mechanisms that OS-level sandboxing does not address (e.g. the same-origin policy and history sniffing countermeasures).


People get upset when stuff doesn't work. If you want customers, you make stuff work. You don't break stuff, or change stuff so they have to call you and have you tell them how to make it work.


How do you deal with security updates? You'd have to apply them to different code bases all the time. As bad as it sounds, it's easier to work with one code base.


Joel Spolsky, Things You Should Never Do, Part I

http://www.joelonsoftware.com/articles/fog0000000069.html

  Back to that two page function. Yes, I know, it's just a 
  simple function to display a window, but it has grown 
  little hairs and stuff on it and nobody knows why. Well, 
  I'll tell you why: those are bug fixes. One of them fixes 
  that bug that Nancy had when she tried to install the 
  thing on a computer that didn't have Internet Explorer. 
  Another one fixes that bug that occurs in low memory 
  conditions. Another one fixes that bug that occurred when 
  the file is on a floppy disk and the user yanks out the 
  disk in the middle. That LoadLibrary call is ugly but it 
  makes the code work on old versions of Windows 95.


The right way to resolve such bugs is to have abstraction layers that hide this awful hairiness.

If hair regarding low-memory conditions, Internet Explorer and other things all appears in the same function, some abstractions are definitely missing.

Also, if the reason for these things isn't clear, that's what comments are for.

Rewriting code, in my experience, has had only great results.


This is so absolutely true. What nobody recognizes is that most of these really fundamental systems we run on - unix system calls, our window managers, our shells, etc - were designed and implemented in an era where the supercomputers couldn't dent your Nexus 5 in terms of most computing performance metrics.

They didn't have the luxury of an API a few thousand KB in size, loaded into memory just to act as a single redirection to the underlying implementation. They worked within the confines of bytes of memory, not gigabytes.

We can afford to be generic, to make extensible runtime-programmable interfaces and runtime-evaluation-style dynamism, because we have the performance necessary. But our core APIs are still written like it's 1980.


However, this line of reasoning also leads to slow web applications that make a high-powered workstation feel like a slow 386 from 15 years ago.


My only concern is that over the years I've seen everything go in cycles. So maybe 8 MB is not a lot of RAM for a computer nowadays; a few years ago it was a lot for a router. I like my OpenWrt router that runs Linux. Before that, the same could be said for mobile computers with GSM modules (cellphones).

I'd like to think that Linux will continue to run on machines with at most 4 MB of RAM and not much processing capacity, because if history keeps repeating itself, we're going to keep on inventing new devices with those constraints.


> But our core APIs are still written like it's 1980.

Exactly what level of abstraction is good for the "core APIs"? Something actually has to send a series of bytes to be written to the disk, even if a bunch of serialization and encoding abstractions are written on top of that. If the latter is the "core", what's the less abstract stuff inside that?


Rewriting can be great as long as you take all the things Joel notes into account and decide it is still the right decision for your project.


Then you end up with "abstraction layer" like autoconf, and the "cure" is MUCH worse than the disease.


autoconf is operating within difficult constraints (only /bin/sh and make are assumed to be installed), and uses shell scripts that write shell scripts that write shell scripts.

A) autoconf is not similar at all to an API layer hiding hairiness in its domain

B) autoconf becoming terrible does not mean that portability abstractions are necessarily terrible


Amusingly enough, Wayland is already developing backwards-compatibility cruft even though it's not shipping anywhere yet. The core Wayland support for window creation and management cannot support minimizing windows, so it's being essentially obsoleted by an extension called xdg_shell, but xdg_shell isn't mandatory so applications and Wayland compositors will wind up having to support both.


This is funny. You know that window minimization is not in the X11 spec either, right? It's something that X window managers handle internally (just like on Wayland, where this is an internal compositor thing).

Of course, desktop apps wanted to interact with minimization (read current state, etc), so the WM authors joined forces to create a spec that allowed this. See how it specs minimization here: http://standards.freedesktop.org/wm-spec/wm-spec-1.3.html#id...

This is part of the "wm spec", and the thing corresponding to it in Wayland is called "xdg_shell". Both are produced by "xdg" (the X Desktop Group) and are optional things that are used by "traditional desktop environments".

It is a good thing that Wayland does not enforce the existence of window minimization, because it is not something that makes sense in all use cases of Wayland. For instance, on a phone like the Jolla.


>The core Wayland support for window creation and management cannot support minimizing windows

One of the problems with Big Rewrite projects is that, inevitably, early releases never reach feature parity with the latest version of whatever is being rebuilt. Managing expectations then becomes critical to avoid this sort of legacy-support nightmare.


I thought things like xdg_shell grow up as either an extension or in Weston, and then get merged into the stable Wayland protocol after they are battle-tested and ready. Is this not the case for xdg_shell?


This is probably a stupid question but why can't someone make an X11 version that speeds up and optimizes everything but breaks backwards compatibility?


That's basically Wayland. If we're lucky, the KMS+Wayland+XWayland stack will end up with a better architecture, backwards compatibility, and less code than the old XOrg.

(My hope is inspired by ZFS which, due to its "rampant layering violations", had more features and less code than the UFS+LVM stack it replaced.)


Because something you depend on every day needs one of those backward compatible features.


There's really no point to doing that, since X is a protocol, and a pretty flexible one at that. Your new shiny can have an X server running in it to act as a go-between and talk something more modern with the new stuff instead.

The OP would have been able to do the same stuff with Exceed on Windows or the X server that comes with OS X, after all. X11's survival on Linux seems to be largely a matter of momentum and the fact that it's the least common denominator in a fragmented landscape.

But now Ubuntu has approached the level of ubiquity necessary to push for a real change (thus Mir), and the rest of the Linux world is rallying behind Wayland.


What are you going to run on it? 99% of userland programs expect compatibility with some historical version of X11. Are you going to rewrite GNOME and KDE and everything else?


Many userland applications don't use X libraries directly, but use a toolkit like Qt and GTK. Both projects are working on Wayland support [1, 2], bypassing X completely.

1 - http://wayland.freedesktop.org/qt5.html

2 - http://wayland.freedesktop.org/gtk.html


It's worth noting that GTK+'s abstraction over X11 is fairly leaky in places, hence the always shaky state of its ports to Win & Mac.

Qt has less trouble in this regard.


This is also a testament to the old HP. Around the time this logic analyser was built, I used a fair amount of HP kit - oscilloscopes, EPROM programming kit, the plotter printer where you could have multiple ink colours, and the original DeskJet printer. All of it was fantastic stuff and very expensive - we joked that 'HP' was an acronym for 'Highly Priced'!

What do I mean by fantastic stuff? Specialist items such as this logic analyser were built to an exceptionally high standard. To all intents and purposes HP equipment back then was bullet-proof. It cost a lot but it was an investment rather than an expense. The documentation was also superb. You felt quite privileged to use it.

Moving on to the products HP make now, I have a few of their laptops and a printer. The laptops are collecting dust and the printer does not have any ink in it. I have no intention of ever buying any of their kit ever again. It is all consumer stuff that does a lot compared to what was possible in 1992 but none of it has 'wow' factor.

I cannot think of anything they make nowadays that is sophisticated in a 'rocket science' kind of way. Sure, I might not be exposed to their finer and more sophisticated products, but I should be, through marketing, reading 'tech' news websites and so on.

'HP' really should have held onto the pioneering/cutting-edge-technology image and made their PCs the kit of choice for anyone that does difficult maths, scientific stuff, interfacing or UNIX.


"You felt quite privileged to use it."

And using that high-quality stuff could give some semblance of order and comfort while debugging tough hardware problems. ("At least I know the problem is not with the 'scope.")

I used to use HP oscilloscopes and logic analyzers from that era, and I still have and use a DMM and a couple of calculators.


Companies change focus but hang on to their brands for consumer recognition, despite radically different strategies. The "HP" of back then is the "Agilent" of today. Similarly, yesterday's "Motorola" is today's "Freescale".


...and sometimes they do something completely different. One company that changed focus and held onto their brand name is the UK's Whitbread. They had a virtual monopoly on beer and public houses in southern England up until some time in the 1980s; then, for no obvious reason, they sold all of their brewing interests and bought a coffee chain, a few hotels and some gyms. They kept the Whitbread name, which was once synonymous with beer, even though they moved so far away from the stuff that they most definitely were not the same company. There is no 'consumer recognition' as they trade on the High Street under different names - 'Costa Coffee' etc. - but they do trade on the stock market as Whitbread.

Back to the point in hand, Apple have somehow managed to make themselves the de facto choice for musicians, video editors and people that draw pretty pictures in Photoshop. These are the 'halo' users, and mere mortals who think of themselves as possibly being creative one day buy Apple in part because that is what the creative professionals use. The fact they get no further than 'Crazy Birds' on their iPad Air is neither here nor there.

HP should have worked on a similar strategy but in the 'technical/scientific' sphere rather than the 'creative' sphere. They should have listened to and looked after the customer base they had built up from selling things like 'scopes, so that the de facto kit to buy for anyone doing anything vaguely technical was HP. This too could have had a halo effect, so anyone studying something like engineering would instinctively want to buy HP rather than some other cheap Chinese junk.


>mere mortals who think of themselves as possibly being creative one day buy Apple in part because that is what the creative professionals use. The fact they get no further than 'Crazy Birds' on their iPad Air is neither here nor there.

I keep hearing this sentiment bandied around but I have seen no evidence of it in real life. Most of the "mere mortals" I know who use Macs don't make any pretensions beyond "it's much nicer and I can afford it." Nobody is saying to me "I bought it because I fantasise that one day I'll run a recording studio from it." And who on earth buys iPhones and iPads to be "creative"? What a load of nonsense.


If you ask the average first-world human being "What are Macs good at?", they usually say something about ease of use and multimedia stuff. It's just what people think.

Now, on my Mac, I do exactly zero multimedia-related things. So I agree that the multimedia thing isn't necessarily true, but that's what I hear about Macs from non-technical people.


You can blame Meg Whitman for part of that.


Motorola is still Motorola, their radios are of the same quality as ever.


Agilent could build Unix workstations ;-) ...


I would buy one of those instantly, regardless of what weird ass CPU architecture or interpretation of Unix was involved.


Even H-POX??!


Did your spelling corrector just mutilate HP-UX?

BTW, as long as it can run X, compile Python from source and run Emacs, I'm perfectly fine.


More like HP mutilated Unix. ;)


Ah yes, the inevitable fall away from the power users towards the oh-so profitable average-consumer market. It's a sad(ish) story that's seen in too many companies.

Something I've noticed a bit of though is that this "shift" is generally only a perceived shift - it's usually not the case that the high-end is abandoned entirely, usually it's just an expansion to include crappy, cheap, consumer stuff.

Take, for example, Dell: Dell has one foot in the horrible world of consumer computers, with loads of crap. However, they still produce the totally excellent (at least last time I used them) OptiPlex line of workstations for businesses. Those things are nicely laid out internally, with good support and pretty solid reliability.

Dell also produces some of the nicest monitors you can buy; I've lusted after one of those beautiful 30-inch 2560x1600 monitors for pretty much my entire life.

I don't have much experience with HP, but I've owned their inkjet printers, and they're total garbage. However, I also know that their larger business-oriented printers are durable workhorses that plenty of people will swear by.

So, yeah, often we think the companies we know and love go to crap, but in reality it's just us not spending the money on a worthwhile product (from them or any other company).


I'm a fan of the HP Microserver for home use, it's pretty neat, but certainly not as high end as the examples you mentioned.


One of my pet peeves is in there:

   DisallowTCP = false
Negating negatives is always more difficult to understand and more error-prone. Why not just this, with the default being false?

   AllowTCP = true


I have the feeling that the default used to be true, and when the default was later switched to false, it was done by this option to not change the behavior of legacy configurations.


My problem is what is on the left side of the equals sign. It is already a negative ("Disallow") so the value on the right side is negated.


While I can't speak to this example specifically, in general:

The issue is: what happens if the line is missing entirely?

If the default is to allow, then adding a line "AllowTCP = true" hasn't actually changed your config. And deleting a line "AllowTCP = true" doesn't stop allowing TCP, which could be confusing.

What the addition of the line does is actually disallow TCP.

The above is the kind of reasoning that causes configs to end up with double negatives.


The solution is very simple in my eyes - do not include a negative on the left side, and pick the default on the right side to match intentions.

These cases seem to have happened the other way around - values/meaning are picked for the right side first, which then requires contorting the left in order to match intentions.
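
Concretely, building on the AllowTCP example from above:

   AllowTCP = true

If the line is absent, TCP is refused (the intended default); adding the line is the only thing that changes behavior, so the config reads the way it acts.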


I cannot fail to disagree with you less :p


You shouldn't not disallow no quintuple negatives neither.


I remember I started to use Linux in the early-to-mid nineties. Back then home networking equipment was a bit too expensive for me (basically I was student-broke), so I set up a local network between my "big" Linux desktop computer and my small laptop using PLIP (that is: IP over the parallel line interface, IIRC). Then I'd launch an X11 server on the laptop but display on it a session from a user on the desktop. So both my brother and I could surf simultaneously from the "fast" machine. As I remember it, the network was slow, but I clearly remember that it worked. We had our first Internet connection (dial-up) and we were "sharing it", surfing simultaneously (using Mosaic?) for hours and hours.
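
For the record, the X part of that boils down to something like this (hostnames invented, and the browser binary name may have been different):

  # on the laptop, which runs the X server (i.e. the display):
  xhost +desktop
  # on the desktop, aim clients at the laptop's display:
  DISPLAY=laptop:0 xmosaic &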

Later on I configured similar setups so that several devs could run fat IDEs from the one fast machine on older PCs (it was funny the day I yanked the power cord of the fast machine, interrupting everybody ; )

To this day I still love the fact that it's trivial for one local user (if you allow it) to run programs in the visual X11 session of another user. I'm using several browsers from unrelated user accounts: one only for my personal GMail / Google Docs, one only for browsing, one only for my professional GMail account, etc. That's a feature I use daily and, for my use, I think it's easier (and faster) than running several VMs.
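
The mechanics, roughly (the account name "mailuser" is made up):

  # as the logged-in user, let another local account onto this display:
  xhost +si:localuser:mailuser
  # then run a browser as that account on the same display:
  sudo -u mailuser env DISPLAY=:0 firefox &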

You can also run several X11 sessions simultaneously, for example at different sizes (say one at 1920x1200, another at 1280x800, etc.).
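
A nested server is one way to do the different-sizes trick (sizes arbitrary; a second real server on another VT works too):

  # a second X server, nested inside the first, at another resolution:
  Xephyr :1 -screen 1280x800 &
  DISPLAY=:1 xterm &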

I'd really miss these features if they were to go away: I hope the newer Wayland and Mir etc. will still allow one "user" to display graphical apps on the display of another user.


For bandwidth reasons, I had a headless VNC X server on the box, and executed X apps locally. Then I'd also start the usual X server and client and run a VNC client full screen. So I had the first VNC X server showing on the screen.

Then on another box, I had the usual X server+client combo, and I could open a VNC client and have fast remote access and a shared screen :)... VNC was much faster than remote X, as it only sent input and graphics. X sends all sorts of metadata that is not really that useful in most cases.
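
The setup, roughly (hostname and geometry invented):

  # on the big box: a headless VNC X server for the apps to render into
  vncserver :1 -geometry 1024x768
  # on the other box, inside its normal X session: view it full screen
  vncviewer -fullscreen bigbox:1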


If you're willing to dick around with the kernel config options, you can still get x86 Linux to run binaries from the early 90s. I vaguely recall reading a mailing list thread where Alan Cox, I think it was, commented that he had a bunch of a.out binaries from that era still kicking around for testing.
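
If anyone wants to try it, the relevant switch should be CONFIG_BINFMT_AOUT (menu location from memory):

  # Executable file formats --->
  #   Kernel support for a.out and ECOFF binaries
  CONFIG_BINFMT_AOUT=y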


Alan Cox said something to that effect on Google+ during that "stable ABI" slapfight a while ago:

"However it's not an Open Source disease its certain projects like Gnome disease - my 3.6rc kernel will still run a Rogue binary built in 1992. X is back compatible to apps far older than Linux."

https://plus.google.com/115250422803614415116/posts/hMT5kW8L...

(IIRC Alan Cox maintained a version of Rogue for Linux for a while.)


It's worth reading that G+ post just to see Alan Cox and Linus jumping all over GNOME.


Well, he has a point: while the kernel interfaces remain consistent, most userspace libraries have the habit of breaking compatibility every few years or getting replaced altogether.

It's easier to run the Windows version of early Linux games like Unreal through Wine than to get the native version to work.


GLib and GTK+ remained ABI-stable for the whole of 2.x, AFAIK.


For a while people were porting the RH5 libc around :/.


The buried story here is that somebody is running e17 and there's no screenshot of the logic analyzer UI doing a backflip.


I especially hate the smugness of some of the developers of the X replacements; of course you can design a leaner architecture when dropping 80% of the features that make X so great. Congratulations, you are true geniuses!


The ORIGINAL geniuses, because of course nobody ever thought of replacing X11 before, so all that is left for you to do is use your unbridled creativity to decide whether to call it X12 or Y1.


I certainly hope you realize that quite a few members of the Wayland core development team are former/current X.org maintainers/core developers.


The reason they're starting from scratch is they ran out of features to cut from X without breaking it.

On one hand, they're justified because most people don't use X as intended.

On the other hand, maybe people should be.


I wonder what the network adapter on a 1992 machine is. Coaxial cable?


"The LAN interface includes both 10BaseT and 10Base2 connectors."

http://alliancetesteq.com/equipment/agilent-hp-1670a

So an off-the-shelf Ethernet switch will do. Contemporary 10GBase-T products are generally compatible with ancient 10Base-T. Another fine example of extreme backwards compatibility.


I got one of these bad boys onto the University of Washington CS lab network once: http://en.wikipedia.org/wiki/AT%26T_UNIX_PC

It didn't have DHCP, so I had to configure my laptop with its MAC address, acquire an IP, and then quickly move the cable to the UNIX PC. It didn't have DNS, so I had to manually find the IP of the CS webserver. And it obviously didn't have an HTTP client, so I had to use telnet. But it worked just fine!
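
For anyone who hasn't fetched a page by hand: it looks roughly like this (hostname illustrative; in my case it had to be the raw IP):

  $ telnet www.cs.washington.edu 80
  GET / HTTP/1.0

...followed by a blank line to end the request; the server answers with headers and raw HTML.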


The user manual is at http://cp.literature.agilent.com/litweb/pdf/01660-97025.pdf

It's RJ-45 and BNC, aka 10BASE-T and 10BASE-2.


My old Sun workstation of that vintage had an AUI[1] port that could be connected to either a 10Base-T, 10Base-5, or 10Base-2 transceiver.

[1] http://en.wikipedia.org/wiki/Attachment_Unit_Interface


That was also my initial guess! But sadly incorrect.


You could also get 10Base-FL AUI transceivers.


AUI connectors were common in 1992 (and, sigh, AAUI for the Apple kit I looked after; same function, twice the price!). A D-plug that accepted different transceivers for thinnet, thicknet, or 10BaseT.


Wow, I had no idea tech like that went into test gear at that time.


Okay that's seriously metal.



It probably performs pretty well, too. The farther you get away from Xlib, the more likely you are to interact with X correctly.


That is the least impressive thing about X11 backwards compatibility.

And this is the sole reason why everyone hates X11, sadly.

X11 is always a server (which shows things on the screen) and a client (which handles windows and sends what to show on screen to the server... or something like that).

So any X11 instance on a desktop is a server+client. Hence, anywhere you have any X graphical interface, you can either receive windows from some other client, or send your windows to another server (since you have both server and client running, remember). So the only thing that device did was send the X client windows to some TCP socket instead of the local one. Not taking anything away from it having X at all... that is awesome. But what this post describes couldn't be more banal.
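
To make the banality concrete, the redirect is a one-liner on the client side (address and program name invented):

  # aim the client at a remote X server instead of the local one:
  DISPLAY=192.168.0.10:0 some_x_app &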


Sure, remote display is no big deal if you're already using X11. But most embedded devices were not farsighted enough to use X11 in the first place; it's much more common to see the wheel being reinvented poorly.


The Tektronix logic analyzer from the time used an embedded copy of Windows and had no 'remote' capability. No doubt if you hooked it up to a modern network with Internet access it would become compromised immediately.


I am sure the X client running on the oscilloscope has plenty of security bugs too.


I agree and don't doubt that it did, but few things were both as vulnerable and as targeted as an unpatched Win95/98 system attached to the Internet (capital I). The reason was simply that embedded versions rarely got patched, and when a vulnerability was found that was in the embedded version as well, malware could find it and use it long after everything else was safe. There were a couple of EMC storage management consoles that got hit by this problem and ATM terminals as well. The scary part would be having your 15 year old piece of test equipment be rendered unusable by such an event. The odds were small but decidedly non-zero.


Agreed. That is why I ended with "Not taking anything away from it having X at all... that is awesome. But what this post describes couldn't be more banal."

Also, remember this device probably cost more than 7x an average PC. And it was already the '90s...


Legacy cruft that no one cares about anymore. That's why Wayland exists.


>Wayland exists to be legacy cruft that no one cares about anymore.

FTFY




