Just briefly reading through the specs, I can't see any reason why someone would want to use this over FreeRTOS unless they have a Rust-based embedded system project. Is there something I'm missing here?
Leaving aside other reasons to use it (userspace/kernel separation, better security and isolation, Rust)...
Almost anything is a better choice than FreeRTOS. FreeRTOS has a not-quite-open-source license, a modified version of the GPL with a clause that makes it non-free and GPL-incompatible.
FreeRTOS isn't Free Software. "OpenRTOS", the version you get if you pay for a non-copyleft license, isn't Open Source. What does that say about "SafeRTOS"?
I don't think most users of FreeRTOS particularly care about the somewhat wonky license. It certainly hasn't damaged its commercial adoption.
FreeRTOS' license is GPLv2 plus an addendum to permit its incorporation in commercial products. FreeRTOS allows you to statically link your own modules as long as you don't modify the FreeRTOS kernel. If you do modify the kernel, you have to contribute back upstream.
The wording of the license is somewhat troublesome, which makes it GPL-incompatible (see: https://opensource.stackexchange.com/questions/4676/is-it-wr...), but the developers have been very clear as to the spirit of the license in many public forums: you can use your own code alongside FreeRTOS and link it into a binary blob, but if you modify the kernel, share your changes. The no-benchmarking clause is sort of annoying, but really isn't a showstopper for most people.
FreeRTOS is intended for use on severely resource-constrained embedded systems where you have a few kB of RAM. Most people developing in that sort of environment don't really care about the existence of a benchmarking clause that probably won't get enforced anyway.
The benchmarking clause itself is a moot point. The fact that it makes the software proprietary and GPL-incompatible is the main issue.
For much the same reason, the infamous "this software shall be used for good, and not for evil" clause isn't just problematic for people who worry that their actions will be deemed "evil": it makes the license non-free and GPL-incompatible regardless of what you use the software for.
I've always had good experiences with FreeRTOS (and friends).
It works well, is well supported, and runs on a wide variety of platforms.
The real security in Tock comes from its use of the MPU, which can also be used in FreeRTOS.
The use of Rust is nice for application robustness, but does not enhance security over the equivalent in C.
> The use of Rust is nice for application robustness, but does not enhance security over the equivalent in C.
C with an MMU doesn't help you avoid overrunning buffers, leaking memory, using memory after freeing it, returning pointers to stack objects, getting reference counts wrong, or many other issues. And C doesn't provide nearly as many facilities for abstraction without overhead.
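To make that concrete, here's a minimal C sketch (names and contents made up) of two of those bug classes; both compile, whereas the equivalent Rust is rejected by the borrow checker at compile time:

    #include <stdlib.h>
    #include <string.h>

    /* Returning a pointer to a stack object: buf dies when the
       function returns, but C hands out its address anyway. */
    char *make_greeting(void)
    {
        char buf[16];
        strcpy(buf, "hello");
        return buf; /* dangling pointer */
    }

    /* Use-after-free: nothing stops us reading freed memory. */
    int main(void)
    {
        char *p = malloc(16);
        strcpy(p, "data");
        free(p);
        return p[0]; /* undefined behaviour */
    }

Note that an MPU or MMU catches neither bug, because every access lands in memory the program legitimately owns.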
> generally FreeRTOS applications have full access to all the memory.
Well yeah, because MCUs almost universally lack Memory Management Units. They can only set general permissions like R/W/X on blocks of memory, which makes enforcing permissions on a per-process basis extremely slow compared to MMU-based hardware.
I'd be curious to see how performant / power-efficient this will be compared to FreeRTOS.
Current microcontrollers often have better protection mechanisms than that. Not necessarily fully capable MMUs, but ones capable enough to allow for memory protection as well as kernel/userspace separation.
FreeRTOS supports as much use of the MPU as you wish.
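For illustration, a minimal sketch of what that looks like with FreeRTOS-MPU's xTaskCreateRestricted(): the task runs unprivileged and only touches the regions you explicitly grant it (buffer names and sizes here are made up):

    #include "FreeRTOS.h"
    #include "task.h"

    /* Hypothetical buffer the task may read. Cortex-M MPU regions
       must be power-of-two sized and aligned to their size. */
    static uint8_t ucSharedBuffer[512] __attribute__((aligned(512)));
    static StackType_t xTaskStack[128] __attribute__((aligned(512)));

    static void vRestrictedTask(void *pvParameters)
    {
        for (;;) {
            /* Reads of ucSharedBuffer are fine; any access outside
               the granted regions raises a memory-management fault. */
        }
    }

    static const TaskParameters_t xTaskDefinition = {
        .pvTaskCode     = vRestrictedTask,
        .pcName         = "restricted",
        .usStackDepth   = 128,
        .pvParameters   = NULL,
        .uxPriority     = 1, /* no portPRIVILEGE_BIT: runs unprivileged */
        .puxStackBuffer = xTaskStack,
        .xRegions       = {
            { ucSharedBuffer, sizeof(ucSharedBuffer), portMPU_REGION_READ_ONLY },
            { NULL, 0, 0 },
            { NULL, 0, 0 },
        },
    };

    void vCreateRestrictedTask(void)
    {
        xTaskCreateRestricted(&xTaskDefinition, NULL);
    }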
The use cases for essentially 3rd-party modules that I've come across are basically all of the same form: generic 1st-party (trusted) code running 3rd-party blobs for specific customers. Examples include smart meters, Bluetooth devices, vending machines, etc.
The thing is... in most of these cases, the best approach is to sandbox only the 3rd-party code, not your own (sandboxing your own code hampers and slows development).
Although it's nice in theory to also have your own subsystems protected from each other, in practice the resource constraints prevent that from being done well.
Putting tight controls around just the blobs, on the other hand, is a much easier thing to do.
Arm Cortex-M3 and up cores usually have a 'Memory Protection Unit'. The MPU allows you to set permissions on memory regions and gives basic kernel/application isolation.
The MPU simply provides a small number (~8) of variably sized windows with RWX-type permissions.
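Concretely, on ARMv7-M each window is one of (usually) eight regions programmed through memory-mapped registers. A bare-metal sketch; the register addresses and field encodings follow the ARMv7-M Architecture Reference Manual, while the helper itself is made up:

    #include <stdint.h>

    /* ARMv7-M MPU registers. */
    #define MPU_CTRL (*(volatile uint32_t *)0xE000ED94u)
    #define MPU_RNR  (*(volatile uint32_t *)0xE000ED98u) /* region select */
    #define MPU_RBAR (*(volatile uint32_t *)0xE000ED9Cu) /* base address  */
    #define MPU_RASR (*(volatile uint32_t *)0xE000EDA0u) /* attrs + size  */

    #define RASR_ENABLE     (1u << 0)
    #define RASR_SIZE(log2) (((log2) - 1u) << 1) /* region = 2^log2 bytes */
    #define RASR_AP_RO      (6u << 24)           /* read-only, all modes  */
    #define RASR_XN         (1u << 28)           /* execute never         */

    /* Make a 1 KiB, 1 KiB-aligned window read-only and non-executable. */
    void mpu_protect_readonly(uint32_t base, uint8_t region)
    {
        MPU_RNR  = region;
        MPU_RBAR = base;           /* must be aligned to the region size */
        MPU_RASR = RASR_AP_RO | RASR_XN | RASR_SIZE(10) | RASR_ENABLE;
        MPU_CTRL = (1u << 2) | 1u; /* PRIVDEFENA + enable */
    }

Eight such windows are enough to box in a couple of untrusted blobs, but nowhere near enough for fine-grained isolation of every subsystem, which is exactly the resource constraint mentioned above.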
...and indeed, this is the real protection here. Malicious or badly-behaved code does not care that you've written the kernel in Rust; it's simply a bunch of opcodes.
The Rust stuff is nice... but meh for security/robustness.
A controller in a device manufactured by a large and dysfunctional corporation, where the different bits of the software are written by different teams who don't trust each other, and aren't trusted by the team which integrates the product?
At the risk of being a little draconian, any device that receives data from sources outside a whitelist of validated origins falls under this example. In particular, anything that can plug into arbitrary computers (and is therefore exposed to an arbitrary USB stack on the host side), or that receives network packets (especially from networks with relatively weak authentication/security, which are therefore easy for malicious actors to join), fits this description.
Off the top of my head: fitness bracelets and the like (which can contain health or even location data, or at least data that can be partially leveraged to determine location), hardware authentication keys, and wireless medical implants/devices (even if there's no opportunity for data retrieval, stack smashing is a cheap way to deny service, and I have had tons of fun debugging stack smashes...)