Zephyr is a no-protection microkernel, like VxWorks (but unlike QNX, Minix, or L4, which run user processes in protected mode). Everything is in one address space. It's actually Rocket, from Wind River, which also sells VxWorks and has open-sourced Rocket. Zephyr is for very small systems. Think smart light bulb, not WiFi router. Supported hardware includes the Arduino Due (ARM) and Arduino 101 (x86). The QEMU emulator can be used for testing. The API is C-oriented and very low level - fibers, threads, mailboxes, semaphores, etc.
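To give a feel for that API level, here is a minimal producer/consumer sketch using the kernel primitives as Zephyr documents them today (K_THREAD_DEFINE, k_sem_give/k_sem_take). I'm going off the current docs, so the exact names in the Rocket-derived release may differ, and the stack size, priority, and "sensor" value are made up for illustration:

    /* Illustrative sketch of Zephyr-style kernel primitives using the current
     * k_ API; the original fiber/nanokernel names may differ. Stack size and
     * priority are arbitrary. */
    #include <zephyr/kernel.h>       /* older trees use <zephyr.h> */
    #include <zephyr/sys/printk.h>

    K_SEM_DEFINE(data_ready, 0, 1);      /* binary semaphore, initially empty */

    static volatile int shared_reading;  /* everything shares one address space */

    static void sensor_thread(void *a, void *b, void *c)
    {
        while (1) {
            shared_reading = 42;         /* pretend we sampled something */
            k_sem_give(&data_ready);     /* wake the consumer */
            k_sleep(K_MSEC(1000));
        }
    }

    K_THREAD_DEFINE(sensor_tid, 1024, sensor_thread, NULL, NULL, NULL, 5, 0, 0);

    int main(void)                       /* older releases use void main(void) */
    {
        while (1) {
            k_sem_take(&data_ready, K_FOREVER);
            printk("reading: %d\n", shared_reading);
        }
        return 0;
    }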
Are you saying that Zephyr is Rocket? I didn't see anything in the press release saying that, and I didn't see anything on the web site saying that (admittedly, I only spent a few minutes looking...).
If it is Rocket, is it a product that couldn't get traction so now they are giving it away?
Here's the statement from Wind River:[1] "Wind River contributed the Rocket kernel to Zephyr who brings together industry leaders in software and hardware to build an RTOS for IoT." There will still be a proprietary version of Rocket, which is the same kernel but also has components for connecting to Wind River's "App Cloud", which seems to be a hosted IDE. The Zephyr project offers only command line build tools under Linux.
Its description of features sounds a lot like eCos's marketing material. That one has been around a while. What advantage(s) do you see in this one?
That's a good question. Anyone know the political background?
Zephyr is for systems far too small for Linux. You don't even have to have a file system.
Several of the endorsers in the press release speak of Zephyr as "secure". This OS has no memory protection, no user IDs, and no permission system. Any code can do anything. Messages are passed around as raw pointers; they're not copied. Nor is the size sent with the message. And it's all in C. What could possibly go wrong?
There's nothing wrong with using an OS like this when your application program is a few hundred lines. But if you add a networking stack and a mini web server, there's going to be trouble.
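To make that concrete, here's a hypothetical sketch in plain C (not any actual Zephyr API) of the failure mode: a message is just a pointer with no length and no copy, so the receiver has to guess a size, and without memory protection nothing stops a wrong guess from corrupting or leaking memory silently.

    /* Hypothetical illustration, not Zephyr code: the message is a raw pointer
     * into someone else's buffer, with no size field and no copy. */
    #include <stdio.h>
    #include <string.h>

    struct msg {
        void *payload;          /* raw pointer, length not included */
    };

    #define ASSUMED_LEN 256     /* the receiver's guess */

    static void handle(struct msg *m)
    {
        char local[ASSUMED_LEN];
        /* If the sender's buffer is shorter than ASSUMED_LEN, this reads past
         * it. With no MMU there is no fault -- just silent corruption or a
         * memory leak of whatever happened to sit next to the buffer. */
        memcpy(local, m->payload, ASSUMED_LEN);
        printf("first byte: %d\n", local[0]);
    }

    int main(void)
    {
        char small[16] = "short packet";
        struct msg m = { .payload = small };
        handle(&m);             /* overread: nothing in the system catches it */
        return 0;
    }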
You just made me think of what might actually be possible if you could hack a microwave through the control pad. I'm not sure what could be accomplished by sending unrestricted commands to the hardware in a microwave, but depending on the cheapness of the components, I suspect it's not good.
I have a friend who succeeded in emulating a microwave using MAME and a dump of the microwave's ROM. (I don't remember exactly what all of the microcontroller outputs are; most of them have to do with the LED user interface rather than activating the magnetron.) Of course the emulated microwave can't really cook food (maybe only emulated food...), but you could use that kind of emulation with a debugger to experiment with possible software vulnerabilities.
Since it was possible to reprogram Super Mario World through the controller, it seems pretty plausible that it's possible to reprogram some embedded systems through their UI even when they're not designed for that!
> Since it was possible to reprogram Super Mario World through the controller, it seems pretty plausible that it's possible to reprogram some embedded systems through their UI even when they're not designed for that!
That's about what I was thinking. Hell, if there's a standard set of firmware that most, or even just most cheap ones ship with, then it might even work for a large chunk of the units in the wild.
Just because expert C programmers were able to make it work doesn't mean that C is the best way to write secure code. I say this as someone who writes quite a bit of C (though I don't view myself as an expert in it yet).
I highly doubt they just drew C out of a hat and decided to make it work for their multi-million-dollar, "absolutely cannot fail" software. They would have considered other languages out there. C was the winner.
Speaking from the high-assurance niche, C absolutely sucked for that in more ways than you can imagine. As with the recent NASA post, the reason they use C is tooling and available talent. Ridiculous amounts of manpower went into developing static analysis, testing, compilers, etc. for C. The best talent in high assurance pretty much all know C, too, whether or not it's their default language. Almost all the hardware supports it, too.
So, a combo of top talent w/ a C history and tons of tooling made most of them go with C. Many others went with Ada/SPARK. A few small fish even use typed or structured assembler. C is only chosen due to tooling and "better the devil you know" philosophy. Work on superior languages got more results with less effort in both human and mechanical reviews, though. So, the inferior language is only used due to an accident of history.
Are you recommending that people switch to Ada, or Pascal, or what? Your pastebin seems to strongly favor Algol-likes. It would be nice to see some more substantive reasons why people should be using an Algol-like language in 2016. Just saying that C was originally hacked together to be expedient doesn't quite make that case.
There are a number of HLLs to choose from today with fewer safety issues. Start with an efficient one. If developing languages or compilers, then put effort into one that has a nicer foundation than C. Similarly, there's prior work on C-like languages with enhanced safety (Cyclone, Ivory) and tools like Softbound+CETS that automatically make code safe w/ a penalty. Use or invest effort in such tools. And finally, if nothing else is available, use a subset of C with sensible coding style and throw every bug-finding tool you can at it (a small sketch of that appears at the end of this comment).
Those are the recommendations in that order. What you use depends on your domain, constraints, skills, and so on. Modula-2, Active Oberon, Modula-3, Free Pascal, and Component Pascal were probably closest to C's domain in terms of safe, efficient, easy-to-compile languages. Ada is systematic in all the errors it handles but has a steep learning curve. SPARK straight up proves your code. One can also use DSL's/4GL's that generate proper C w/ automated checks like iMatix and Galois do (and I once did). I've also seen Ocaml and LISP used for real-time, even embedded, systems with custom runtimes. Ocaml was also used for tooling w/ source to object code verification.
So, there's a number of options. Each is currently behind C in tooling due to limited uptake and investment. More uptake and investment can fix that. Even so, average results with those languages show far fewer defects than C code, with shorter time to train people (ignoring Ada). That most teams aren't composed of geniuses in safe coding means that's important too.
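Here's the sketch of that disciplined-C fallback: pass sizes explicitly, check them, and fail loudly instead of overflowing; a consistent idiom like this also gives static analyzers and fuzzers something concrete to latch onto. It's a generic illustration, not code from any project named above.

    /* Generic illustration of a defensive-C idiom: every copy takes an explicit
     * destination size and reports failure instead of trusting the caller. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    static bool copy_bounded(void *dst, size_t dst_len,
                             const void *src, size_t src_len)
    {
        if (dst == NULL || src == NULL || src_len > dst_len) {
            return false;            /* refuse rather than overflow */
        }
        memcpy(dst, src, src_len);
        return true;
    }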
ATS is interesting. I've seen it used in device drivers and one 8-bit system IIRC. The reason I left it off is that I'm not sure most people can learn how to use it. Whereas, we've taught ALGOL languages, Scheme, and Ocaml to all kinds of people where they're effective enough with types & assertions.
These more mathematical languages have to prove themselves out with real-world examples, algorithms and compiled code, before I recommend them. I'd like to see CompSci keep hammering away at them to find easier and easier ways of learning and using them. Not to mention improve tooling.
Oh yeah, you had another question: why ALGOL-like languages? I chose ALGOL-like over C-like specifically rather than over every style. The reason is that ALGOL-like languages had better design decisions:
1. Be closer to English to make them easier to read. Important as software is read more than written. Also can help in reviews.
2. Generally have proper features like modules or interfaces to aid programming in the large.
3. Still efficient enough for programming close to the metal.
4. Prefer sane defaults or automated checks on issues that trip programmers up. A good implementation usually lets you turn these off for a module if needed (a rough C approximation of this appears at the end of this comment).
5. Tradition of thorough language reports and grammars means less undefined behavior.
6. Most common stuff is still efficient once compiled. If Wirth-like, then also efficient to compile.
7. Less tied to a specific processor style, with their own style being more amenable to safe, maintainable operation. See the Burroughs B5000 processor vs the PDP-11.
8. IMPORTANT. Easier to develop tooling for verification, esp static or dynamic analysis.
9. Easier to verify mathematically.
So, these are some things off the top of my head that various languages more ALGOL-like did better than C-like languages. I wouldn't say there was a single one that had all these traits but Modula-3 came close. A modified version of Go might as well given its predecessors were used in safe-language OS's. Hope this brainstorm gives you some idea of what things I valued vs C's style.
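Here's the rough C approximation of point 4 promised above: a checked accessor that fails fast on a bad index in normal builds and can be compiled down to a plain access per module where you've measured the need. The safer ALGOL-family languages give you this without the boilerplate; the sketch is only illustrative.

    /* Rough approximation in C of bounds checking that safer languages do by
     * default. Debug builds abort on an out-of-range index; defining NDEBUG
     * for a given module compiles the check away. */
    #include <assert.h>
    #include <stddef.h>

    #define BUF_LEN 8

    static int buf[BUF_LEN];

    static int checked_get(size_t i)
    {
        assert(i < BUF_LEN);    /* the "sane default": fail fast, not silently */
        return buf[i];
    }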
I suspect NASA chose C due to some combination of (a) the tiny verified subset they were using was secure; (b) it doesn't matter that much how secure your language is when you're spending countless sums to verify it anyway; (c) their people and/or their new hires had experience with C; (d) their systems were designed for it. None of these reasons are valid reasons to start a new, reasonably-budgeted, security-critical project in C.
Great response except for (b). The language choice matters. Some are too complex to verify in typical usage. Fortunately, despite its complexity, C was primitive enough for tools to catch up with a subset of it. One can overdo it. See INTERCAL. ;)
As for the Linux Foundation, they seem to be branching out into strengthening the entire Linux ecosystem. Having Zephyr for embedded firmware on the parts of Linux systems that don't themselves run Linux (e.g. sensors) is a step toward that.
Likely they are shooting for the whole unikernel craze.
Cram this with some server into a VM and drop the VM on top of Linux in the cloud.
Supposedly this setup provides a performance-vs-security middle ground between containers and a full-blown distro in a VM.
Meaning that since there is just a minimal kernel present you get higher performance, while being wrapped in a VM reduces the potential for exploits compared to a container.
I don't know their reasons...but virtual memory is generally non-deterministic. In practice, when I did real time programming with C, we had to empirically test for best case and worst case latencies, and then model our code performance in order to make sure that it wouldn't kill people. Perhaps if this is running on a processor with well defined instruction execution timing (that doesn't describe most modern processors), they can get full determinism.
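For anyone curious what that empirical testing looks like in practice, the shape is roughly this (a POSIX sketch; the real harnesses and the code path under test were obviously project-specific):

    /* Rough sketch of empirical best/worst-case latency measurement using a
     * POSIX monotonic clock. Only the shape is shown; the workload, iteration
     * count, and reporting are stand-ins. */
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    static void work_under_test(void)
    {
        /* placeholder for the real-time code path being characterized */
    }

    int main(void)
    {
        int64_t best = INT64_MAX, worst = 0;
        struct timespec t0, t1;

        for (int i = 0; i < 100000; i++) {
            clock_gettime(CLOCK_MONOTONIC, &t0);
            work_under_test();
            clock_gettime(CLOCK_MONOTONIC, &t1);

            int64_t ns = (int64_t)(t1.tv_sec - t0.tv_sec) * 1000000000LL
                       + (int64_t)(t1.tv_nsec - t0.tv_nsec);
            if (ns < best)  best = ns;
            if (ns > worst) worst = ns;
        }

        printf("best %lld ns, worst %lld ns\n",
               (long long)best, (long long)worst);
        return 0;
    }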
Anything with caches is non-deterministic, although there is worst case timing. Just because you have an MMU doesn't mean you have to have paging to disk. Paging to disk is on the way out; few mobile devices do it. RAM is too cheap.
Yep. Are you referring to mlock? That's one way of keeping your data from getting swapped out to disk when you know something the memory manager doesn't.
No, he's referring to something that is not commonly available or used in desktop systems. (I think some Intel CPUs might have it built in, but it's a totally inaccessible feature).
"mlock" is about virtual memory manager "locking" physical pages, so they can't be swapped out to disk.
Here we're talking about systems that might not have a disk, or even an MMU to implement virtual memory. They might still have dedicated memory protection units, but those aren't necessarily able to support memory virtualization; they simply protect memory ranges in safety-critical systems.
This is about the ability to lock CPU caches (or parts of them, via cache-line locking) to get something akin to scratchpad memory and ensure deterministic memory-access latencies.
For some applications there's a difference whether memory access takes 10 or 100 nanoseconds.
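And here is that minimal mlock example: roughly what pinning looks like on a POSIX system that does have an MMU and swap, i.e. exactly the mechanism that doesn't apply to the MMU-less parts discussed above. The buffer size is arbitrary.

    /* Minimal POSIX sketch of mlock: pin a buffer so the VM system won't page
     * it out. Only meaningful on systems with an MMU and swap. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>

    #define BUF_SIZE (64 * 1024)

    int main(void)
    {
        void *buf = malloc(BUF_SIZE);
        if (buf == NULL) {
            return 1;
        }
        if (mlock(buf, BUF_SIZE) != 0) {   /* can fail: rlimits, CAP_IPC_LOCK */
            perror("mlock");
            free(buf);
            return 1;
        }
        /* ... latency-sensitive work on buf ... */
        munlock(buf, BUF_SIZE);
        free(buf);
        return 0;
    }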
MMUs are pretty rare in the area this OS is going after. You've got the regular size constraints, plus you have fun things like memory aliasing, where the same block of memory can be accessed from multiple addresses.
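A hypothetical illustration of that aliasing (the addresses below are invented; real parts document their own memory maps): the same physical SRAM shows up at two bus addresses, so a store through one pointer is visible through the other even though the pointers compare unequal.

    /* Hypothetical aliasing example: the SAME physical SRAM appears at two bus
     * addresses. Addresses are invented for illustration; consult the part's
     * memory map for the real ones. */
    #include <stdint.h>

    #define SRAM_BASE        0x20000000u   /* hypothetical primary mapping */
    #define SRAM_ALIAS_BASE  0x60000000u   /* hypothetical aliased mapping */

    void aliasing_demo(void)
    {
        volatile uint32_t *word       = (volatile uint32_t *)SRAM_BASE;
        volatile uint32_t *word_alias = (volatile uint32_t *)SRAM_ALIAS_BASE;

        *word = 0xDEADBEEFu;
        /* On aliased hardware this reads back 0xDEADBEEF even though
         * word != word_alias: different address, same memory. */
        uint32_t seen = *word_alias;
        (void)seen;
    }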
I'm questioning the need, as we head toward 2020, for anything that is limited to 8 KB of memory. 8 MB maybe, but 8 KB?
I am holding in my hand a device the size of a postage stamp (or a full-sized SD card) that has 64 MB of RAM and 16 MB of flash ROM and supports eight simultaneous WiFi devices (the zsun USB adapter). It uses two chips to accomplish this (a Qualcomm MIPS CPU) and a third to support an external microSD card.
That device will last for three and a half hours on a 700 mAh battery. The devices here will last for months, if not years, on a single coin cell. Something like a wireless moisture sensor so you can more efficiently water your garden doesn't need 8 megs of RAM. In such a case, why waste all that power on parts of the chip that are never going to be used? There are a lot of areas where a small chip can do the job just as well, if not better, than a much larger chip using a more conventional OS.
[1] http://www.linuxfoundation.org/news-media/announcements/2016...