Safe Native Code (joeduffyblog.com)
141 points by panic on Dec 20, 2015 | 72 comments



> You need a great inliner. You want common subexpression elimination (CSE), constant propagation and folding, strength reduction, and an excellent loop optimizer. These days, you probably want to use static single assignment form (SSA), and some unique SSA optimizations like global value numbering (although you need to be careful about working set and compiler throughput when using SSA everywhere)…I hate to say it, but doing great at all of these things is "table stakes."

I'm not sure. It depends on the audience you're aiming for. Go has done very well with a compiler that performs essentially no optimizations by modern standards (certainly it doesn't do most of the above), simply by positioning itself appropriately.

> For example, there are ways to write the earlier loop that can easily "trick" the more basic techniques discussed earlier:

Yeah, bounds check elimination ends up nightmarish quickly. I think I prefer an approach that fixes the problem at its source, by just encouraging programmers to stop using for loops (which are bad for non-performance-related reasons as well). Note that all of the examples given use for loops; if they were rewritten using iterators, there wouldn't be a problem. C# has iterators, so the machinery is there; they just need to make it more painful to use a for loop than the iterator alternative. :)
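
For illustration, here is that contrast sketched in Go rather than C# (the bounds-check problem is the same there; the function names are just for this example):

    package main

    import "fmt"

    // sumIndexed uses a classic index loop. Simple forms like this are
    // usually proven safe, but small variations (starting at an offset,
    // comparing i against the length of a different slice) can defeat
    // the bounds-check analysis and reintroduce a per-access check.
    func sumIndexed(a []int) int {
        s := 0
        for i := 0; i < len(a); i++ {
            s += a[i]
        }
        return s
    }

    // sumRange uses the iterator-style form. There is nothing to prove:
    // every access is in bounds by construction.
    func sumRange(a []int) int {
        s := 0
        for _, v := range a {
            s += v
        }
        return s
    }

    func main() {
        a := []int{1, 2, 3}
        fmt.Println(sumIndexed(a), sumRange(a))
    }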

> Imagine you have a diamond. Library A exports a List<T> type, and libraries B and C both instantiate List<int>. A program D then consumes both B and C and maybe even passes List<T> objects returned from one to the other. How do we ensure that the versions of List<int> are compatible?

I don't understand why this is a problem. Both separately compiled implementations of List<int> should monomorphize to bit-identical runtime value representations. Does the problem have to do with quirks of the compiler's RTTI implementation (e.g. making sure "foo instanceof List<int>" in package A works even if "foo" came from package B)?

> First, Go generates the interface tables on the fly, because interfaces are duck typed.

Does it really? That's interesting and seems extremely inefficient. I would have assumed that the Golang compiler would just statically determine the set of vtables that need to be used by the program and write them up front into rodata. Am I missing something?


They were trying to build the entire OS using a 100% memory-safe language. At one point they could elide process boundaries and protected memory (and the associated context switch overhead) because the loader could statically verify your program didn't do anything unsafe.

The ultimate answer to Singularity/Midori/etc is yes, you can have an OS (including drivers and interrupt handlers) that is 100% provably memory safe and thread safe and is performance-competitive with existing systems.

Unfortunately Microsoft basically shelved most of the work and it doesn't look like anyone else is going to pick up the slack. I predict more heartbleeds and zero-day RCEs in our future. At least we'll all get 0wned with fast C code.


> I would have assumed that the Golang compiler would just statically determine the set of vtables that need to be used by the program and write them up front into rodata. Am I missing something?

IIRC the Go compiler will generate the itables that it can determine are needed at compile time, but because you can dynamically request an interface conformance for a type there is also a runtime fallback.
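
A minimal Go sketch of that dynamic case (the types here are made up for illustration): the first assignment involves a pairing the compiler can see, while the interface-to-interface assertion is the kind of conversion that forces the runtime fallback.

    package main

    import "fmt"

    type Walker interface{ Walk() }
    type Quacker interface{ Quack() }

    type Duck struct{}

    func (Duck) Walk()  { fmt.Println("walking") }
    func (Duck) Quack() { fmt.Println("quacking") }

    func main() {
        var w Walker = Duck{} // the (Walker, Duck) pairing is visible statically

        // Interface-to-interface assertion: the compiler cannot know every
        // (interface, concrete) pair that might flow through a conversion
        // like this, so the runtime needs a fallback that builds or looks
        // up the (Quacker, Duck) itable dynamically.
        if q, ok := w.(Quacker); ok {
            q.Quack()
        }
    }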

Swift does something similar; the compiler tries to generate protocol conformances as much as possible, but because there may be an unbounded number of types at runtime, clearly it can't generate all of them.


> IIRC the Go compiler will generate the itables that it can determine are needed at compile time, but because you can dynamically request an interface conformance for a type there is also a runtime fallback.

That would make more sense. But http://research.swtch.com/interfaces claims it's all runtime-based (though that document may well be out of date).


It looks like the optimization I remember was implemented in gccgo, but maybe they never added it to the main compiler: http://www.airs.com/blog/archives/277


Here's a little more info on how and why Go does it: http://research.swtch.com/interfaces


Interesting, thanks. The relevant part seems to be here:

> Go's dynamic type conversions mean that it isn't reasonable for the compiler or linker to precompute all possible itables: there are too many (interface type, concrete type) pairs, and most won't be needed.


>Thankfully, these days you can get an awesome off-the-shell optimizing compiler like LLVM that has most of these things already battle tested, ready to go, and ready for you to help improve.

I wonder if "off-the-shell" is a typo or a pun.


So what exactly was Midori? He says they benchmarked "booting all of Windows" - what does that mean in context? And why are they back to pushing and focusing on C++?

Really good series of posts. Always good to see sufficiently smart compilers and such.


One of the earlier posts in the series explained Midori in more detail. Apparently it was a research operating system built at Microsoft to explore using safe/managed code for building the entire OS while also aiming for the same kind of performance you get with raw C/C++.


So what does "booting Windows" mean? Did they port some sort of Win32 compat thing or was that supposed to be a general term for their UI?


In addition to backend compilation for managed code, Phoenix could continue to be used to compile C/C++ code. One of the things the team did was continue to compile the Windows codebase to compare Phoenix and UTC (for both functional and performance reasons, IIRC).

Disclaimer: I was on the Midori team for a few years but did not work on Phoenix itself.


How much of this tech has been published in academic papers? Microsoft's normal MO seems to be to just patent it, then publish and/or use it, like they did with a lot of other stuff (incl. VerveOS). I'd love to read the technical reports on how they handled each problem and what specific results came from it. Meanwhile, Joe's write-ups are a great substitute.


Joe explains this in the first blog entry: almost nothing, which is why he decided to write these posts, so that the information doesn't get lost and the world outside MSR gets to learn a bit about what Midori was all about.


Looking forward to CoreRT being usable for non-trivial applications. But I wonder why CoreRT is still in early development, while .NET Native is ready for universal apps on the Windows Store. Why wouldn't the runtime from .NET Native be suitable for desktop and server apps today?


As I understand it, it doesn't cover all possible MSIL opcodes or .NET APIs.

Only libraries that can run on top of CoreCLR can be AOT compiled with .NET Native. For example, the F# team is currently making it possible to target CoreCLR.

Then there are possibly political issues around where Microsoft wants to take .NET Native.


Great article series. Very easy to understand, and it brings a lot of the inner workings of a managed runtime out into the open.


How does this compare to native client?


Admins, how did this dupe https://news.ycombinator.com/item?id=10764870 (11 hours ago)?


(I'm not an admin. You probably want to email HN because they won't see the question otherwise.)

In the past HN had a fairly strict dupe-detection filter.

That meant that a lot of good stories that didn't get attention on the first posting didn't get reposted.

Currently the dupe-detection is much weaker than it used to be. A story that didn't get much attention on the first post can be reposted easily now.

HN tried an experiment where they'd email people and ask them to repost submissions, and give those reposts a small bump. That was a lot of work, so they only do that for "Show HNs". Now they do something like an auto-repost which resets the timestamp.

This means that sometimes you'll post something, and it won't get much attention, and a few hours later someone else will post the same thing and it'll get upvotes.

This isn't going to stay like it is. They're working on a better system.

https://news.ycombinator.com/item?id=10754760

>> We've recently started doing things to make the original submitter get the front-page slot more often

https://news.ycombinator.com/item?id=10753401

>> Invited reposts are mostly deprecated now in favor of re-ups [1], but when it looks like the submitter might also be the author (as e.g. with Show HNs), we still send them. It's nice for an author to know that their post may still get discussed, and it's good for HN when an author jumps into the thread.

https://news.ycombinator.com/item?id=10705926

(A meta post, with links to previous discussion).


Thanks for that! I knew they'd loosened de-dupe up a bit, but this was ~10 hours.


Interesting. Maybe dupe shouldn't be a hard boolean but something more analog.

A dupe submit could be like a major upvote, weighted by proximity: 0h-24h : +100, 1d-12d : +50, _ : +1
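
A sketch of that scheme as code (purely hypothetical; the function name and cutoffs just encode the numbers above):

    package main

    import (
        "fmt"
        "time"
    )

    // dupeWeight treats a duplicate submission as an upvote whose weight
    // decays with the age of the original post.
    func dupeWeight(age time.Duration) int {
        switch {
        case age < 24*time.Hour:
            return 100
        case age < 12*24*time.Hour:
            return 50
        default:
            return 1
        }
    }

    func main() {
        fmt.Println(dupeWeight(10 * time.Hour))      // 100
        fmt.Println(dupeWeight(3 * 24 * time.Hour))  // 50
        fmt.Println(dupeWeight(30 * 24 * time.Hour)) // 1
    }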


Could they make it so that the discussion in the dupe is merged into the latest post? As it is now, people add links to previous posts.


I guess they decided it didn't get a fair run and removed it from the dupe checker? That happens sometimes...


It's sad that such otherwise smart people choose to stay at Microsoft. Microsoft doesn't yet realize it, but the world really would be a better place if it were to die. They did a lot of interesting things over the years, but their view of the world is so fundamentally out of touch with what the world is today, that they can't help but lose mindshare. They're still quite strong, as there's no alternative to them on the desktop (for most people), but they never quite gained dominance on the server (and continue to further fuck it up by introducing more and more bizarre licensing arrangements), their development environments and APIs look like a bad joke, and most decent engineers would rather have their balls/ovaries removed than code for the Microsoft platform. Nearly a decade ago, when I actually did develop for Windows, I asked a fellow engineer who coded for Linux why he refused to admit how superior C# was to Java. He basically said that he is not interested in chaining himself to any given platform. In the years that followed, I saw just how right he was.

So my message to Joe and other solid folks like him: there are tons of opportunities outside MS campus. I know large companies tend to make you believe it's a barren desert out there, but it's simply not true, and it's especially not true now, when the job market is starved for good talent. Go out there, make the world a better place. Let Microsoft ride into the sunset.


> their development environments and APIs look like a bad joke

I agree, cmd.exe is bad. Everything that doesn't require the command line is better in Windows land though. Nothing else comes even close to Visual Studio (CLion is getting there for C/C++ though), .NET has an amazing set of APIs that I wouldn't trade for any other language's, and Direct3D is actually a good API compared to the pile of crap that is OpenGL. Win32 is actually not that bad compared to the crap you have to deal with on Linux.

For certain things, Windows is the superior alternative. However, if your job requires you to do JS/ruby/python/etc. then by all means, go with Linux/OSX, it is clearly better. But the integration of some tools and languages on Windows beats any other OS.


cmd.exe and batch script are horrible, no argument there.

Powershell is a huge improvement, though, and in many ways far surpasses the unix command line environments. You have real objects at the prompt, with fields and methods, you aren't just wrangling plain text all over the place. Naming and parameter sets are consistent and non-cryptic. Everything is tab-completable (command names, parameter names, [some] parameter values, types, registry keys, paths, history, etc, etc). Writing new shell commands is super simple. Parameter sets for scripts and shell commands are completely declarative, no more manual argument parsing.

Not to say there aren't weak spots: performance and the error-handling model come to mind. Disabling execution of scripts by default, in a scripting environment, also a massive facepalm. There's also the separate-but-related matter of the terminal host environment itself, which has been historically terrible on Windows. At least with Win10 that's finally moving in the right direction.

Anyways, it's a shame people still think "Windows CLI" == "cmd.exe" because most of us moved past that 6 years ago when Win7 included Powershell by default.


One thing I don't get is why does powershell have to be so shitty as a programming language? It literally doesn't look like anything else on Windows. Also, the handling of _interactive_ command line in powershell is atrocious.


> so shitty as a programming language

> doesn't look like anything else on Windows

These are unrelated statements, not sure what you are getting at.

Powershell is more than adequate to build one-liners, scripts, tools, and system automation. The syntax is kind of like C# and Perl had a baby. I'd say it stands up quite well against Bash, Perl, or unquestionably Batch. It's not without its warts, and I wouldn't want to build a large production app/service with it, but that's not what it was designed for nor what it claims to target.

As I mentioned in my earlier comment, the interactive terminal environment itself has long been quite poor on Windows, and Powershell inherited that. Maybe that's what you mean by "interactive command line"? It's miles better today, though, for two reasons:

1. The default Win10 terminal is much improved (resizeable on-the-fly, copy/paste, line selection ... still a ways to go though).

2. Powershell itself now integrates PSReadLine out of the box, which brings all manner of features that were badly needed (undo/redo stack, syntax highlighting, a solid multi-line editing story, optional emacs mode, smarter navigation, it-just-works copy/paste, easily remappable keybindings, more).


How do you think should it look to be more in line with everything else on Windows? Like VBScript? Batch files? C#?

And what bothers you about the interactive command line?


>> And what bothers you about the interactive command line?

Try zsh with a decent dotfile setup, and you'll understand. Until then it's like explaining colors to a color-blind person.


> Try zsh with a decent dotfile setup ...

I'd have to guess what exactly that means and entails, but you're talking about customization to your favourite shell and comparing that to the stock shell without anything on another OS. Don't you think that's a bit unfair?


Your parent poster asked how a command line should look "to be more in line with everything else on Windows". Not what a good command line under some UNIX looks like.


Do you always only read the first sentence?


Huh? For the past several years I've been doing C++ development in Vim, with YouCompleteMe providing excellent code completion. Eclipse works pretty great for C++ too, especially for browsing code. Seems to me you haven't seriously worked in Linux. The only thing that's lacking is the debugger. Everything else is either the same (e.g. Java) or an order of magnitude better. All non-ms programming languages are UNIX-first. They feel bolted-on when ported. And let us also not forget the complete freedom you have on UNIX systems. Want to spin up a VM or a dozen? Go right ahead, for free. Want your server to serve five hundred people? Go ahead, no need for extra licensing. Want a dev setup for nearly any language under the sun? It's a one-liner. And so on and so forth.

Another benefit is, the lower level APIs are the same they were two decades ago, some are even older. And that API surface is much smaller. You basically learn them once, and they're good for life. Same with much of the tooling such as text editors, build systems, command line tools, and so on.

Once you grok all this, there's _really_ no going back to the ball and chain that is Windows.


> All non-ms programming languages are UNIX-first. They feel bolted-on when ported.

This is not an argument against Windows, but against these programming languages and their maintainers.

> And let us also not forget the complete freedom you have on UNIX systems.

There are lots of commercial UNIX systems that have much more restrictive licensing terms than Windows.

> Another benefit is, the lower level APIs are the same they were two decades ago, some are even older. And that API surface is much smaller. You basically learn them once, and they're good for life.

The same holds for WinAPI etc.


> This is not an argument against Windows, but against those programming languages and their maintainers.

Disagree. It's the operating system's job to make it easy for programmers to write the programs they want. If language implementors consistently don't want to support a certain operating system, it's a sign that something might be wrong with that operating system. Not necessarily from a technical point of view - it could be, say, a marketing issue.


1. Be that as it may, no one is idiot enough to create their life's work on a proprietary platform.

2. Which is why commercial unixes are being supplanted by FOSS. And it is far easier to do than, e.g., porting anything from Windows to anything else.

3. WinAPI is a verbose, poorly designed turd, so that doesn't really help your argument any.


I'm primarily a Linux user, and only use Windows at work, and don't consider POSIX precisely the pinnacle of tasteful API design.


It doesn't have to run faster than the bear. It only has to run faster than the other guy.


It isn't clear to me that POSIX is strictly better than the Windows API in all respects. For instance, synchronizing processes manipulating a common file is very awkward in POSIX.
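
To make the awkwardness concrete, here is a minimal sketch of cross-process exclusion via the POSIX flock(2) advisory lock (written in Go; Unix-only, and the file name is made up). Even this simplest case is only advisory, and the fcntl(2) alternative has worse semantics still:

    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    func main() {
        f, err := os.OpenFile("shared.dat", os.O_CREATE|os.O_RDWR, 0644)
        if err != nil {
            panic(err)
        }
        defer f.Close()

        // flock(2) gives an exclusive advisory lock: cooperating processes
        // block here, but any process that skips the lock can still write.
        // (fcntl(2) record locks are worse: they are dropped when *any*
        // descriptor for the file in the process is closed.)
        if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
            panic(err)
        }
        defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)

        fmt.Fprintln(f, "exclusive write")
    }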


You lost me at saying eclipse is pretty great...


How recent is your experience?


> Win32 is actually not that bad

Having written code for Win32 for a while, I kind of have to disagree. Compared to Gtk+ (I admit, I only used it from Python/Perl/Ruby) or Qt, building GUIs in C/Win32 is a huge pain.

I only had very brief contact with Win32's threading API, but it looked like that was indeed more fun to use than pthreads.


> Having written code for Win32 for a while, I kind of have to disagree. Compared to Gtk+ (I admit, I only used it from Python/Perl/Ruby) or Qt, building GUIs in C/Win32 is a huge pain.

On the other hand: WinAPI stays (even binary-)compatible with new versions of Windows, while Gtk+ is replaced by newer incompatible versions all the time.


Yes. For better or worse, Microsoft does a really impressive job at maintaining backwards compatibility.

(Although I feel for the poor programmers that have to make sure twenty year-old applications that grossly abuse the documented API keep running.)


Applications can depend on old versions of libraries. Keeping compatibility forever isn't a goal of libraries like it is for operating systems.


But if one wants to use GNU/Linux on the desktop (as opposed to the server), the desktop libraries are part of the operating system - thus keeping compatibility for desktop libraries is keeping compatibility for the operating system.


The WinAPI's closest analog in Unix-land is POSIX, which is just as horrifying. Gtk+ vs. WinAPI is apples to oranges. IMO, even WinForms and Visual Studio's form designer are infinitely better than the mess that is Gtk+ and Glade. I haven't tried Qt too seriously, but my impression of it is that its object model combines the disadvantages of C# (less than adequate support for generic programming) with those of C++ (manual memory management, not even aided by smart pointers).


Have you tried QtCreator? It is somewhat Qt-centric, but other than that it's an excellent IDE.


It's interesting to me because, the way I see it, the Linux scene is too busy duplicating effort on new init systems and loggers to make good user experiences, while MS has evolved their flagship OS into something incredibly user-focused and good to use.


How does Android fit into this worldview?


Android is a Java based OS that just happens to use the Linux kernel.

Google could release Android 7 with another POSIX-like kernel, using the same official NDK APIs, and the only apps that would break are the ones using non-public APIs.


Switched to Windows Phone 8 when my expensive android phone couldn't load a contact list quickly.


You're mistaking a few noobs for "the Linux scene". The actual Linux scene is basically running the world by now. Not even Microsoft can ignore this fact. In fact I suspect Microsoft will recognize just how shitty Windows is for most things they want to do in cloud, but not anytime soon.


You realize Linux is a first class citizen in Azure VMs, don't you?

Microsoft is not only Windows.


Try to compare performance of their Linux VMs against Amazon and Google, then come back and tell me if it's first class or not.


Eh Azure's performance (and particularly perf/price) is way off in general. It's weird (the SSD model is just dumb). Windows VMs on Azure regularly get "lost" on reboot and require several resizes to come back. I've been using Azure for a couple of years and it still remains just, well, messy, to put it politely.

I don't think it says anything about their stance on Linux that AWS and Google do a better job.


How is it "first class" then if it's like 35% slower on CPU-bound workloads?


If the Windows VMs on Azure are just as slow then that's the textbook definition of "first-class citizen." Your point would only stand if Azure ran Windows VMs as fast as or faster than their competition, but ran Linux VMs much slower.


> In fact I suspect Microsoft will recognize just how shitty Windows is for most things they want to do in cloud, but not anytime soon.

They did when they used Linux servers to centralize all the Skype traffic: http://arstechnica.com/business/2012/05/skype-replaces-p2p-s...


The world doesn't need yet another UNIX clone factory.

Microsoft and Apple are the only companies left that still care about OS research.


How come, then, new OSes feel sluggish on slightly older systems? You would assume that with better know-how they would run better. (Me: running the latest Mac OS X on an old MacBook Pro.)


What does that have to do with OS research? You should look at Microsoft Research's pages if you want to judge them on R&D or innovation. They do a lot of great stuff. The Midori project pushes the envelope in tons of ways that you're not going to see in most FOSS communities. That's the value of having lots of bright people in one place, paid to stay focused on the goal over long periods.

Another great example is their VerveOS work for highly-assured OS's:

https://people.csail.mit.edu/jeanyang/papers/pldi117-yang.pd...

Xax for legacy code protection was clever, too:

http://research.microsoft.com/pubs/72878/xax-osdi08.pdf


Having a great research organisation is one thing, turning that research into products (or integrating it with existing products) is quite another.


Certainly. Gotta remember that Microsoft's core business is supporting what they already built and extending it without breaks. So, they have to be careful what they put into existing code. However, they use some R&D results in newer projects. I think uptake is slow for the same reasons it is everywhere.


Midori is dead, so it doesn't push any envelopes. So the dude is reduced to essentially maintenance work on a 20 year old unstable pile of garbage. What "operating systems research" does Windows 10 actually represent? It's essentially Win 8 with a slightly tweaked UI and Cortana.


> Midori is dead, so it doesn't push any envelopes.

"Dead"? how many research OS's aren't "dead"? The project being completed just means the active research has stopped. The whole point of making a research project like this is to find some interesting bits of knowledge that you can incorporate into other projects.

I'm sure this was already useful for .NET Native, and will have bits (or at least lessons learned) for every MS language and OS down the road.

> What "operating systems research" does Windows 10 actually represent?

I'm sure there are some fancy bits buried in it. The new JIT in .NET 4.6 is pretty fancy, for example (even though it isn't strictly OS research, it's pretty tightly tied to the OS given how the universal app platform and store work). Win 10 is indeed mostly a polish release of Windows 8.1 (a few years ago it surely would have been called Windows 8.2 or Windows 8.1 SP1). I think the naming was just part of the strategy to stop bumping major versions, like OS X.


I assumed OS research would include improving user experience, performance and security.


I find Win10 runs better than 7 on the same hardware (which in turn I think everyone agrees is snappier than Vista). I actually don't think the latest iterations of OS X work so well on older hardware. Upgrading a 2011 MBP to Yosemite was certainly a worse experience than upgrading my 2011 desktop from Win7 to Win10, so I guess it varies. The story of that upgrade was, of course, a hardware bottleneck: the Mac had only 2GB of RAM, which proved way too little for Yosemite and required an upgrade. The 2011 PC had 8GB, so it was fine. I think it's perfectly fine for a newer OS X version to use more RAM.


Because they also care about selling new hardware.


> there are tons of opportunities outside MS campus. [...] Go out there, make the world a better place. Let Microsoft ride into the sunset.

Where are opportunities outside the MS campus for people who love research about operating systems and/or programming languages (besides academia)?



