Aren't smartphones already an example of starting over? Both iOS and Android run each program in its own sandbox, with access restricted to only the necessary system resources.
At a fundamental level, they're still very similar to every computer on the market. They're very nearly IBM architecture, and the security features you're discussing are added after the fact (not inherent to the architecture itself).
I've been thinking more and more about the problems holding back progress in user experience. I think it's the dogma baked into the operating system itself. I'd love to share my ideas and help build a new OS with people.
You should give more details. A good number of HNers agree with you, have direct experience with building OSes, and a working knowledge of the existing alternatives to the main three.
I don't think the next big thing will be a new operating system -- that's thinking too shallow. We've tried dozens over the years, and right now we're dividing between consumer experience and server work (and poorly at that; no Linux distribution these days bothers to pretend they're separate any more). Plan 9 also swam as far as it could off to the deep end and didn't catch on, for a few reasons; it's a genuinely good model but as we know from this startup game, the good ideas don't always take hold.
I suspect even if you designed something better than Plan 9, which would be a feat, the smart minds and money are already thinking past Intelville. Getting past The Architecture (what do we call it? IBM?) that's been a staple of computing for decades is the next big thing. That's what the author is hinting at, I think, and I'll be interested to read his paper.
(ARM isn't what we're looking for, it's just a better Intel. Same architecture.)
One of my deep-seated beliefs is that backward compatibility can hurt more than benefit, and this is sort of a corollary.
Oddly: Linux started as a desktop personal computer operating system (with one user: Linus), and that's always been his focus (though others have of course had other interests).
The funny thing is that there's not a whole lot of difference between the needs of servers and personal systems. Both value uptime and latency, both benefit from hotplug flexibility (personal systems because we're always plugging things into them, servers because you can't take the system down when adding/modifying parameters), and both care about security, sandboxing, device support, and what all else. The biggest likely difference is whether, and how advanced, a direct graphical output device is attached; beyond that, they're similar.
As for scrapping everything and starting over: it's almost always a mistake. Refactoring and incremental improvements discard much less knowledge and provide a continuous migration path (Plan 9's biggest failing, absent licensing, since fixed but far too late). Virtualization may well offer a buffer against this -- we can run old environments in their own imaginary boxen.
>I don't think the next big thing will be a new operating system -- that's thinking too shallow.
What exactly do you have in mind? As long as the new thing is Turing-complete, the path of least resistance says that we'll just drag all the old baggage over with us.
The stack we have is incredibly malleable; there is no such thing as a sufficiently clean break. The ability to fix it now is as present as it ever will be. Pretending otherwise is navel-gazing that relies on the assumption that architecting new paradigms for a fresh pasture, and building a new stack on top of them, could somehow beat to market the simpler process of porting what we've already got and bolting on access to the new niceties.
The GUI of computers could be largely rethought, especially after the introduction of mobile devices which raised consumer expectations of user interfaces. I look forward to more subtle touch gestures.
Disagree. Desktop UIs are being rethought these days with "mobile" somehow given as the argument, and everything after KDE 3.5 and Gnome 2 has only become worse instead of better. Please give me back the proper desktop UIs.
The two interfaces really have different styles. The dual 21 inch setup a couple feet from me does not need to behave at all like the 5 inch display in the palm of my hand. The very suggestion that they would baffles me.
If we're ever going to redesign computer architecture, I think we'll give higher priority to designing for artificial intelligence than to designing for security.
Wait, isn't "don't let code and data sit in the same memory" the whole point of the no-execute bit, which AFAIK is hardware-enforced on any remotely modern AMD or Intel CPU? Granted, it takes OS support too...
Strict separation of code and data contradicts the definition and purpose of "computers." Single-tape Turing machines do not separate code and data (it's a single tape and a single "memory space"). Computers are (supposed to be) Turing complete within finite resource limits, meaning they can be used to simulate a single-tape Turing machine.
I don't really want another single-purpose appliance, so it's kind of a bummer to see the TSA "security" arguments (the article, not you) working to break general-purpose computing... (I think the best we can do for now is a trust system and, for me at least, I still have to see the code to have faith in that.)
As for the "principle of least privilege", well, that was the point of microkernels. We're all using Linux and Windows and Mac OS X (none of which are microkernels), so even if microkernels are "better" they may not be very "practical" for the moment (the old Tanenbaum–Torvalds debate).
> Strict separation of code and data contradicts the definition and purpose of "computers."
I don't believe that your argument here is correct. Code and data sharing the same memory space is not a necessary condition of being Turing complete. Just because a Turing machine works that way doesn't require any Turing complete computer to work that way.
On the other hand, the distinction between 'code' and 'data' when running a simulator is a little bit arbitrary.
> As for the "principle of least privilege", well, that was the point of microkernels.
"Code and data sharing the same memory space is not a necessary condition of being Turing complete."
See: the Church-Turing thesis. If you can simulate a Turing machine (which can clearly be said to hold code and data in one memory space), then the hosting machine can also be said to share code and data, in the precise sense given by the mapping of the simulated Turing machine onto the host machine (its implementation). A computation in the simulated Turing machine is just a computation on the host, with some overhead. Conversely, a simulated CPU can't compute faster than its host's CPU; if a Turing machine was simulated, the host was Turing complete.
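To make that mapping concrete, here's a toy Python sketch (my own illustration, nothing from the article): the simulated machine's "program" (its transition table) and its "data" (the tape) are both just ordinary objects sitting in the host's single memory space. The particular machine is the 2-state busy beaver, chosen only because it's tiny.

    def run_tm(rules, tape, state="A", head=0, blank=0, max_steps=1000):
        # The transition table ("code") and the tape ("data") are both
        # plain Python values living in the same host memory.
        cells = dict(enumerate(tape))
        for _ in range(max_steps):
            if state == "HALT":
                break
            symbol = cells.get(head, blank)
            write, move, state = rules[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return [cells[i] for i in sorted(cells)]

    # 2-state busy beaver: writes four 1s and halts.
    rules = {
        ("A", 0): (1, "R", "B"),
        ("A", 1): (1, "L", "B"),
        ("B", 0): (1, "L", "A"),
        ("B", 1): (1, "R", "HALT"),
    }
    print(run_tm(rules, [0]))   # [1, 1, 1, 1]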
"Principle of least privilege" implies a sandbox and IPC for every process, all so things don't run within the kernel, or any other process when possible. That sounds like a microkernel to me. You might say another "point" of microkernels (I don't have an exhaustive list) was to make design and debugging easy too, but those are just the same ideals inflicted on the programmer: keep it simple and homogeneous; the less you do, the less you can do wrong.
As you said, where to draw the line of separation is arbitrary. One could certainly make code and data (and processes generally) "more separate," but it's all at the expense of programmability (the general in general purpose) and speed (IMHO, why microkernels aren't everywhere), which brings us to the current compromises: monolithic kernels, OSs that warn you before installing, NX flags, patch Tuesday, and antivirus programs.
We build up an environment of temporarily-appropriate limitations from a blank (general) slate. If I ever feel too restricted, I put on my black fedora and perform a "jailbreak" (I reboot). I'm still better off with a malleable computer than with many individual limited tools. (Not that you suggested otherwise, just my 2 cents.)
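For what it's worth, here's a rough Python sketch of the sandbox-plus-IPC idea a couple of paragraphs up: a trivial "service" runs in its own process, and the only thing other code can do to it is send messages over a narrow channel. The file-reading service and its rules are invented purely for illustration.

    from multiprocessing import Pipe, Process

    def file_service(conn):
        # Runs in its own process; the only capability it exposes is
        # reading small files directly under /etc, one request at a time.
        while True:
            name = conn.recv()
            if name is None:
                break
            if "/" in name:                  # refuse path traversal
                conn.send(b"denied")
                continue
            try:
                with open("/etc/" + name, "rb") as f:
                    conn.send(f.read(256))
            except OSError:
                conn.send(b"error")

    if __name__ == "__main__":
        parent, child = Pipe()
        svc = Process(target=file_service, args=(child,))
        svc.start()

        parent.send("hostname")              # a legitimate request
        print(parent.recv())
        parent.send("../shadow")             # blocked by the narrow API
        print(parent.recv())
        parent.send(None)                    # shut the service down
        svc.join()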
Technically NX is the same memory, just different permissions applied to portions. It is possible to have an architecture where code and data are completely separate - http://en.wikipedia.org/wiki/Harvard_architecture
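To illustrate the difference, here's a toy Python interpreter (an invented instruction set, nothing more) with a Harvard-style split: instructions and data live in two separate stores, and stores can only touch the data side, so nothing written into the data array can ever be fetched as an instruction.

    def run(program, data):
        # program: list of (opcode, operand); data: mutable list of ints.
        # The two stores are separate objects; writes only reach `data`.
        pc, acc = 0, 0
        while pc < len(program):
            op, arg = program[pc]
            if op == "load":        # acc <- data[arg]
                acc = data[arg]
            elif op == "add":       # acc <- acc + data[arg]
                acc += data[arg]
            elif op == "store":     # data[arg] <- acc
                data[arg] = acc
            elif op == "halt":
                return acc
            pc += 1

    program = [("load", 0), ("add", 1), ("store", 2), ("halt", 0)]
    data = [2, 3, 0]
    print(run(program, data))       # 5; nothing in `data` can alter `program`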
Data - noun - Data you haven't labelled as code—yet.
A Python program is "data": it's just a text file. So. You know. There's that. Other things that are "data" but aren't really: PDFs, JavaScript, and CSS.
The idea that you're going to be able to wave a magic Harvard-architecture wand and fix the problem of bad inputs making programs do things they weren't intended to do is a misunderstanding of the problem.
You sacrifice an enormous amount of flexibility and extensibility by enforcing such distinctions. Much of the power and elegance of languages like lisp comes from blurring the boundary between code and data. To the C etc. mentality, it's unthinkable, but in lisp you can maintain (modify/extend/fix) a running application, without having to unload and reload everything.
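For a flavour of that in a more mainstream language, here's a small Python stand-in (not Lisp, and not anyone's real application): a worker keeps calling a function while new source code, handled as plain data, is exec'd into the running process to replace it.

    import threading
    import time

    def handle(x):
        return x + 1                 # pretend this has a bug we want to fix live

    def worker():
        for _ in range(6):
            print("handle(10) ->", handle(10))
            time.sleep(0.5)

    t = threading.Thread(target=worker)
    t.start()

    time.sleep(1.5)
    new_source = "def handle(x):\n    return x + 100   # 'fixed' behaviour\n"
    exec(new_source, globals())      # source arrives as data, becomes live code
    t.join()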
Not really. The problem is that what you see as data can often be used to control code without modifying it. It doesn't have to run directly on the processor to end up being as powerful as code. You often find you don't have to modify code at all: you can use the code that's already there to do what you please. This is true of a surprisingly large portion of modern exploits. A strict Harvard architecture or the NX bit only helps with what is becoming an increasingly narrow portion of the attack surface.
Again, see also: Python, PDF, Javascript, CSS for a few different types of "data" which end up being as powerful as code.
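A concrete Python example of "data" that ends up as powerful as code (a deliberately harmless one): a pickle is nominally a serialization format, yet a crafted byte string makes the loader call whatever function the attacker names. Never unpickle untrusted input.

    import os
    import pickle

    class Payload:
        def __reduce__(self):
            # Tells pickle to rebuild this object by calling os.system(...)
            return (os.system, ("echo this is code now",))

    blob = pickle.dumps(Payload())   # looks like opaque data on the wire
    pickle.loads(blob)               # ...but loading it runs a shell command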
A Harvard architecture can help in the same way NX does (as well as boost performance for fixed-purpose applications that rarely need to be re-programmed), but for it to still be Turing complete it's going to need some way to modify the executable code, including the potential for exploits, albeit Harvard architecture-specific ones.
The Harvard architecture could be used as the basis for a more rigorous trust model (iff the owner of the system controlled the root of trust.) Democracy is out of fashion though, so we would undoubtedly get something similar to what we have today with Redhat (for all practical purposes) "having to" pay for the right to boot Linux on a system "certified for Windows 8"...
Re: the second question, the problem with NX is that it only protects you from overflows where the attacker jumps into the buffer.
Overflows are still exploitable with NX. The attacker instead jumps to a series of fragments of library code[1]. Since libraries will always be executable, that's no obstacle (aside from the difficulty of finding the right chain of "gadgets").
ASLR goes some way toward preventing return-oriented programming (ROP) attacks, but it isn't bulletproof.
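A quick way to see what ASLR buys you (a rough sketch, assuming a Unix-like system with ASLR enabled): the address where libc's printf lands changes from run to run, which is exactly what keeps a ROP chain from hard-coding gadget addresses.

    import ctypes

    libc = ctypes.CDLL(None)                  # handle to the already-loaded libc
    addr = ctypes.cast(libc.printf, ctypes.c_void_p).value
    print(f"printf is at {addr:#x} in this run")
    # Run it a few times: with ASLR on, the address moves each time;
    # with randomization disabled (e.g. via `setarch -R`), it stays put.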
Well, it can be argued that any security feature can be circumvented in theory, which is why super-secure networks are fond of air gaps. The NX bit isn't really an air gap, and neither are ASLR, DEP, and so forth.
Processor features designed to protect the processor from its own memory are really just stopgaps on the way to the next paradigm that supplants von Neumann; that, I think, is what Watson is saying.
1) We think (but have not proved) that factoring large numbers is hard. We use this for cryptography. In theory the crypto could be brute-forced, or someone might find a new method for factoring. In practice, brute forcing would take longer than the Universe will exist, and a breakthrough in factoring large numbers is unlikely. (A toy sketch of what brute force means here follows the list.)
2) We think that a single overwrite of a hard disc platter is enough to destroy the information. No software exists that claims to be able to recover information that has been overwritten once. No companies exist that claim to be able to recover data that has been overwritten once. No university research exists showing recovery of data that has had a single overwrite. No criminals have been prosecuted or convicted with evidence recovered from a disc that's had a single overwrite. Everything we know suggests that a single overwrite is fine. But, because a well-funded government might be able to recover that data, we suggest that people do 3 (or 8, or 30-something if you're being silly) overwrites, or, if the data is really important, that people destroy the platters. In theory the data might be recovered, and so people have decided that in practice they will destroy the drive or overwrite more than once.
When talking about security it's a good idea to assume that someone can break whatever you're doing, and then ask if you need to do more, or need to do things differently.
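The toy sketch promised above, in Python: trial division finds the factors of a small semiprime instantly, but the same loop applied to a 2048-bit RSA modulus would need an astronomical number of candidate divisors (around 2^1023), which is what "longer than the Universe will exist" means in practice. The numbers are illustrative only.

    def trial_factor(n):
        # Naive factoring: try every odd divisor up to sqrt(n).
        if n % 2 == 0:
            return 2, n // 2
        d = 3
        while d * d <= n:
            if n % d == 0:
                return d, n // d
            d += 2
        return None                          # n is prime

    p, q = 104729, 1299709                   # the 10,000th and 100,000th primes
    print(trial_factor(p * q))               # instant for a toy modulus...
    # ...but a 2048-bit modulus has ~1024-bit factors, so this loop would
    # need roughly 2**1023 iterations before it found anything.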
Ironically, the one-time pad is breakable in practice, due to mistakes made, shortcuts taken[0], and side-channel attacks[1].
Besides, it relies on securely distributing the pad itself before information exchange can take place, which in turn is prone to the usual array of physical insecurity, design errors (e.g. using publicly available randomness), or, if distributed by a digital channel, to failures of the encryption used.
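The classic shortcut failure is easy to show in a few lines of Python (messages and pad invented for the example): reuse the pad, and the XOR of the two ciphertexts cancels the key entirely, leaving the XOR of the two plaintexts for the attacker to pick apart.

    import os

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    m1 = b"attack at dawn!"
    m2 = b"retreat at noon"
    pad = os.urandom(len(m1))       # a genuinely random pad...

    c1 = xor(m1, pad)
    c2 = xor(m2, pad)               # ...reused, which is the mistake

    print(xor(c1, c2) == xor(m1, m2))   # True: the pad has dropped out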
From TFA: "The role of operating system security has shifted from protecting multiple users from each other toward protecting a single … user from untrustworthy applications. …"
Interestingly, most OSes are still very good at protecting users from each other. And on Linux (but not on OS X nor on Windows), thanks to how X works, it is trivial to allow an app running as one user to access the display (and only the display) of another user.
So my way of protecting myself, the user, from untrustworthy applications (mainly the web browser and its daily major Java / Flash / CSS / JavaScript / etc. security issues) is to run applications in separate user accounts.
One browser in one user account for my personal email + personal online banking (although that one would be more secure if done from a Live CD), one browser for surfing the rest of the Web, one browser for my professional email, etc. Most user accounts (besides my developer account which, by default, has no Internet access [but I can whitelist sites per user using iptables userid rules, of course]: no auto-updating of any of the software I'm using) are throwaway and can be reset to defaults using a script.
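For the curious, the per-account whitelisting amounts to a couple of iptables owner-match rules; here's a rough sketch, driven from Python only to keep one language across these examples. The uid and the whitelisted address are placeholders, it has to run as root, and you'd adapt the chains to your own setup.

    import subprocess

    DEV_UID = "1002"                         # the no-Internet developer account (example)

    rules = [
        # allow this uid to reach one whitelisted host...
        ["iptables", "-A", "OUTPUT", "-m", "owner", "--uid-owner", DEV_UID,
         "-d", "192.0.2.10", "-j", "ACCEPT"],
        # ...and reject everything else it sends
        ["iptables", "-A", "OUTPUT", "-m", "owner", "--uid-owner", DEV_UID,
         "-j", "REJECT"],
    ]

    for rule in rules:
        subprocess.run(rule, check=True)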
As for making and receiving phone calls: a good old Nokia phone onto which you cannot even install J2ME apps is perfect ;)