Hacker News
Mbeddr – engineering the future of embedded software (mbeddr.com)
85 points by _qc3o on Sept 13, 2014 | hide | past | favorite | 71 comments



When I work on an embedded system, I want as few layers between my code and the assembler as possible. Even your C compiler can produce undesirable code if you don't put the proper const and volatile qualifiers on your pointers.
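A minimal sketch of the qualifier point, with the hardware register simulated by an ordinary variable so it can run off-target (the names and the register layout are illustrative, not from any vendor header):

```c
#include <stdint.h>

#define STATUS_READY 0x1u

static uint32_t fake_status; /* stand-in for a memory-mapped register */
static volatile uint32_t *const status_reg = &fake_status;

/* Because the pointee is volatile-qualified, every call re-reads the
 * "register". Without volatile, a busy-wait loop around this read is a
 * legal candidate for the compiler to hoist into a single load, and the
 * loop would spin forever on a cached value. */
static int is_ready(void)
{
    return (*status_reg & STATUS_READY) != 0;
}
```

Similarly, `const` on lookup tables lets the toolchain keep them in flash instead of copying them to RAM at startup.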

The last thing I want is a layer of software and an IDE developed by a team that I can only assume has zero embedded experience (judging by the team page, they are all architects or academics).

Secondly, if I handed off work for hire implemented in this language I would not get the next assignment. There is no room for fad modelling in the established engineering trade, and unfortunately I don't have the pull to start the disruption even if I truly believed in it.


There are two problematic types of engineers out there: some have no clue what they do, and some have too much clue and wave their flags all over the place. You are in the second group and have probably been doing too much C for too long.


For whatever reason, CASE tools come and go but 'C' tends to be forever.


I really appreciate your succinct criticism of my points, you didn't need to qualify with anything but "you have been doing C for too long." I consider that a compliment.

Ask my clients how much of a problem I am and then ask them how many problems I solve. And no, I don't solve them all with C.


That's good for you! I don't personally find the tool in question the most appealing, but what I am excited about is that the industry is moving on, as C is far too dated as a language and even C14 doesn't give much hope. So, since you said you don't appreciate the abstraction this tool adds, would you consider Rust, as it compiles right down to machine code? Essentially Rust is much like C in terms of basic syntax, but has the expressiveness of a functional language, a safety-aware compiler and a ton of other features, including the fact that it's aimed at system-level applications and one should be able to write a kernel in Rust.


We differ in our opinions of C. It may be dated for you, but I know how to manipulate the syntax to get the desired result. Rust has been on my list of things to check out, but not to use for clients. Right now all roads (from Javascript, C++, Objective-C and Java) lead to C. So when I write a library in C, I know I can use it in my iOS, Android, node.js and C++ projects. I cannot say the same for Rust. I do not want to be on the Rust journey quite yet because I am still smarting from node.js. And 2012, when I was using node.js to solve problems, was when I was closest to becoming a problem engineer, as you believe I am.


Rust does allow you to write a dependency-free C-ABI library, without requiring any runtime or such. Hope we'll see you around at some point. :)

(It seems errordeveloper is doing exactly the same flag waving they originally unfairly lambasted you for...)


Lol, supposedly mine wasn't of the same origin. To be fair, you just broke my whole idea of the two groups of people, which indeed was a very poor attempt that by itself falls into yet another group. I am going to assume codehero is a nice guy and walk away from this incredibly amusing discussion. Thank you, dbaupp, for pointing this out.


Pattern matching looks cool but I have some concerns. What is the best forum for Rust discussion? I get the feeling HN would rake me over the coals for bringing up switch, falling through case labels and of course, goto.


reddit.com/r/rust is probably the best place to have exploratory discussions.

Although, our pattern matching is a strict superset of C's switch (other than fall-through). But yeah, fall-through and goto aren't supported yet, and if we were to get them (in safe code), they may have to be restricted to preserve memory safety.
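For readers less familiar with the C side of that comparison, here is the fall-through behaviour being discussed, with made-up tier/price names. The missing `break` after `case 2` is deliberate, which is exactly the kind of intent a C compiler cannot check for you:

```c
static int price(int tier)
{
    int p = 0;
    switch (tier) {
    case 2:
        p += 10; /* no break: deliberately falls through into tier 1 */
    case 1:
        p += 5;
        break;
    default:
        p = -1;
        break;
    }
    return p;
}
```

Rust's `match` has no equivalent: every arm is exclusive, so this pattern must be expressed explicitly.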


Clearly a hero.


I have to say that the sequence of "errordeveloper" responding to "codehero" is quite funny. :)


So true


Funny you should say that because his team was hired to develop all the tools and IDEs for Siemens and their industrial systems. The tools you see on that page were developed jointly by the BMW embedded development team and I think a few government agencies. So I'm a little puzzled by your comment and your reaction to an obviously better way of developing correct and verified software using C.


Name dropping does not mean a thing, and I doubt they developed ALL the tools and IDEs for Siemens' industrial systems. If they did, then they would be wise not to mention the numerous SCADA faults found in Siemens control systems. There is absolutely no way you would roll out a system like that and have every industrial product line using it. So I am a little puzzled by your claim.

The thing about these systems is that they attempt to prevent you from shooting your own foot by not letting you aim. Some operations (such as erasing and writing flash) require you to do dangerous things like relocating code to SRAM and running from there while your flash operation completes. The details vary by platform: ARM code is generated in a position-dependent fashion; MSP430 code is not.
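A hedged sketch of that flash-programming constraint: on many parts the routine driving the flash controller must itself execute from SRAM, which GCC-style toolchains often arrange with a section attribute. Here the controller is simulated with an ordinary array so the logic can be exercised off-target; the function name, the `.ramfunc` section, and the register choreography in the comments are illustrative, not from any specific vendor.

```c
#include <stddef.h>
#include <stdint.h>

#define FAKE_FLASH_WORDS 16
static uint32_t fake_flash[FAKE_FLASH_WORDS]; /* stand-in for the flash array */

/* On real hardware this function would be placed in SRAM, e.g. with
 * __attribute__((section(".ramfunc"))), because the flash array cannot
 * serve instruction fetches while it is being programmed. */
static int flash_write_word(size_t word_idx, uint32_t data)
{
    if (word_idx >= FAKE_FLASH_WORDS)
        return -1; /* out of range */
    /* Real sequence (vendor-specific): unlock the controller, set the
     * program bit, write the word, then poll the busy flag. */
    fake_flash[word_idx] = data;
    return 0;
}
```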

I am certainly not calling C the end-all be-all, but it's a tough system to beat. You could convince managers that mbeddr may be the way to go, but it would be an absolute boondoggle to get existing engineers to use it. (They couldn't even convince the Park-o-matic folks to rewrite their code to use it.) To a "C simpleton" like me, mbeddr is not obviously better.


I think you fear learning something new more than anything else, and calling yourself a C simpleton doesn't really help your argument. Like I said in another comment, the same exact arguments were thrown around when assemblers were being developed, so if history is any guide these tools are the future, and the sooner you learn them the better position you'll be in.


You're as judgmental as errordeveloper; let me do the namedropping this time. I have timestamped when I started playing with certain technologies:

SVG: 2001
AJAX/Javascript: 2002
Windows CE: 2005
DirectFB: 2005
FLTK: 2006
SCons: 2007

I have been learning new things my entire career. I have wasted tons of time trying to support the "better" technology. mbeddr presents a very weak case in my mind.

My question to you is: how has your experience with mbeddr been so far? You seem convinced of its superiority. Do the code size and RAM consumption live up to your expectations?


You don't have to prove anything to me, but your reaction is not rational. I don't think anyone will argue against the claim that a tool that integrates unit testing and model checking into a coherent experience is an overall win. As for code size and other matters, it depends on your use-case, and in this day and age I think a few MBs here and there is not going to make or break an embedded product offering, especially if whoever is making that product can deliver it with fewer bugs.


> As for code size and other matters, it depends on your use-case, and in this day and age I think a few MBs here and there is not going to make or break an embedded product offering, especially if whoever is making that product can deliver it with fewer bugs.

This statement alone disqualifies you from making any comment about embedded development and automotive embedded development in particular.


Agree


You just keep sounding that way, which doesn't necessarily mean you are that bad. It's just how you came across in your original comment ;)


This is great on many levels.

1. They are using MPS [0] from JetBrains, a system for constructing tooling around Domain Specific Languages.

2. Markus Voelter is a huge proponent of DSLs and a very level headed fellow. Loved his interview with Laurence Tratt [1] about compile time metaprogramming [2].

This brings the benefits of Ada and Lisp with a modern JetBrains-style IDE while targeting a C99 runtime. I think being able to sculpt the language to the project can greatly shrink the abstraction distance from the problem domain to the code.

[0] http://www.jetbrains.com/mps/ [1] http://tratt.net/laurie/ [2] http://www.se-radio.net/2007/05/episode-57-compile-time-meta...


It's great that it generates C, but I'd worry that developers who write embedded code won't bother to try it; they are often trying to squeeze code into a small amount of memory on a limited processor. Some of the things it provides (state machines, unit conversions and error logging) are pretty simple to do in C anyway, and developers might not appreciate or want an additional level of abstraction.


State machines, as you noted, are easy to do in C. I think the added value here is that this makes them much easier to manage. I often hear guys at work complaining about how they'd like to switch to an RTOS because the codebase is a tangled mess of state machines (not that the two are mutually exclusive, of course). Having said that, I seem to remember there is a tool dedicated to creating state machines that compile down to C; the name escapes me, however.
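A minimal example of the switch-based style being discussed, as a pure transition function (the states and the button input are made up for illustration):

```c
/* Three-state sketch of a debounce-ish button state machine. */
typedef enum { IDLE, PRESSED, HELD } state_t;

static state_t step(state_t s, int button_down)
{
    switch (s) {
    case IDLE:    return button_down ? PRESSED : IDLE;
    case PRESSED: return button_down ? HELD : IDLE;
    case HELD:    return button_down ? HELD : IDLE;
    }
    return IDLE; /* unreachable; keeps compilers quiet */
}
```

Keeping each machine as a pure `state × input → state` function is what makes them manageable; the tangle the parent describes usually comes from transitions hidden in side effects across many files.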



The QP framework. It's bloody brilliant.


That's the one.


This is clearly aimed at applications that must be very reliable, e.g. life-critical applications like cars (note that BMW sponsors it).

Most embedded programming isn't that bug-sensitive, but quite a lot is and I'd wager they'd love something like this.

I know I'd use it if I were writing safety-critical code.


Code exists in the context of a culture and processes. This may well facilitate those, but those kind of have to come first.


This adds an amazing amount of verification, but it doesn't mean that the resulting code is slow. Doing unit checks is awesome. The grid state machine view makes it easier to catch errors and refactor the transitions.


This looks like a lot of effort was put into it, but what pain point / problem does it address? Embedded applications (bare metal) are usually not that complicated, and when they are, they are usually developed either on an RTOS or on embedded Linux, which gives you almost all the advantages of desktop application development.

Coming back to embedded applications not being that complicated: now you also need to get your developers to learn another framework and IDE while they are already under pressure to deliver their current feature set.

Like codehero states elsewhere, it only adds another abstraction layer between you and the silicon, in a field where exact control of the silicon is paramount (low power, real-time response latency, limited resources, etc.).

Furthermore, what guarantee do you have that Mbeddr will support the latest processor you are working on? At least with C, the de facto standard in the industry, you have that guarantee. Most processor manufacturers also supply demonstration code for their latest processor features; how easy is it to pull that into Mbeddr?

I'm afraid this is a very fancy and polished solution looking for a problem.


It's all C. It is all syntactic sugar on top of C, so if at the end of the day you want to go back to C, then just compile it down and take it from there.

Your comment reminds me of what people were saying when assemblers were first developed. Real programmers did not use assemblers because they were too high level, or too far removed from the hardware, or any number of other excuses. The fact is that plain C is inadequate for delivering correct software in high assurance environments. This tool addresses that problem and then some.


'C' is perfectly adequate for delivering safe and correct* code. Just because it's possible to write 'C' code that is unsafe is not a blanket indictment of the language. Michael Barr, the MISRA team and Valgrind all exist to aid and abet the delivery of safe and correct systems.

*whatever that means in context...

As we say - "doctor, doctor it hurts when I do that!" "Well, don't do that!"

Tools like this simply automate or add leverage to that process.

And unless there's considerable community support for the use of something like this, it'll remain a smaller thing than raw 'C'. But many embedded projects are small enough that there's little pain in reinventing the wheel.


I'm sure one can use any libraries/SDKs from Mbeddr, right? And my other impression is that the aim is not to support boards or chips at all, it's just a tool, in my understanding. Are there plans to support C++?


This looks great! With JetBrains' expertise in IDEs, this was surely a missing piece of software.

I am slightly concerned by the DSL thing, because I don't want each embedded project to have its own specific dialect, like we see in Lisp projects.

But, on average, the gains are high: modules! tests! Inline contracts and model checking! Very nice state machines! I like that!

Also, why C99? There's hardware where only C89 compilers are available...


Because nobody would use those chips for new projects. Legacy chips need legacy tools and run legacy software.


I don't agree. Popular but legacy chips can get updated tools, while new but conservative chips can come with odd software packaging.

Moreover, embedded development is conservative by nature, and I know a lot of teams which prefer to run projects on known hardware when possible, even if we have to suffer the tooling.

Also, don't forget the licensing. We already had to downgrade our target platform because we couldn't afford the whole tooling, and the OSS alternative was only compatible with the previous version.


Great to hear that quality meta-tooling is making its way into an industry full of outdated and bad engineering practices.


Could you offer some specifics?


"A cleaned up version of C99 helps avoid low-level bugs. For example, the preprocessor is not supported"

NOPE.

From my cold, dead hands.


Why? The preprocessor is a huge hack that is mostly obsolete. 99% of my programs only use `#include` and `#pragma once`.


Macros used right can really help to clean up messy code and/or reduce repeating code to something clearer (e.g. in crypto-code).


Pre-processing macros can reduce repetition, but they are certainly a hack: they are not type-safe. They are not even syntactically safe. A missing brace in one can cause untoward damage down the chain. And getting it right is especially important in crypto-code!
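The textbook version of the hazard, for concreteness: macro arguments are pasted textually, so precedence from the call site leaks into the expansion unless every use of the argument is parenthesized.

```c
/* SQUARE_BAD(1 + 2) expands to (1 + 2 * 1 + 2), which is 5, not 9. */
#define SQUARE_BAD(x) (x * x)

/* Parenthesizing each argument use (and the whole body) fixes it. */
#define SQUARE_OK(x)  ((x) * (x))
```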


#ifdef alone:

    #ifdef HW_REVISION_1
    #ifdef HW_REVISION_2

etc.


Playing devil's advocate here: when possible, this kind of if_arch/else code should be compartmentalized into its own object files or dynamic libraries, and then linked conditionally to produce a unique binary.

You shouldn't clutter your code with ifdefs. It makes the code hard to think about.
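A sketch of the object-file approach, under assumed names (the `dma.h` / `dma_generic.c` split and the function name are hypothetical): callers see one header with no #ifdef in it, and the build system links exactly one implementation per target. A portable fallback stands in for the per-arch versions here.

```c
#include <string.h>

/* dma.h -- the only thing callers ever include; no #ifdef anywhere. */
int dma_copy(void *dst, const void *src, unsigned len);

/* dma_generic.c -- portable fallback; a build would instead link one of
 * the hypothetical dma_x86.o / dma_power.o files for those targets. */
int dma_copy(void *dst, const void *src, unsigned len)
{
    memcpy(dst, src, len);
    return 0;
}
```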


The whole of Plan9 is written like that.

If you look here http://plan9.bell-labs.com/sources/plan9/sys/src/9/

You can see that port & boot are the portable code. Each architecture can fall back on that less optimised code should it need to.

Then each of the directories: bcm, kw, mtx, omap, pc, pcboot, ppc, rb, teg

have platform-specific C files.

This greatly eases porting to a new architecture and also means cross compilation is significantly easier.

#ifdefs are rare in the source code and other pragmas are few and far between.

Conditional compilation is the enemy of readable code.


Moving and re-moving conditionals is our life. Is there a name for that structure? Polymorphic portability.


The "when possible" in your statement is a caveat large enough to drive a semi through. Professionally I have only done "big" embedded, but I have for example had a rather lengthy signal processing algorithm where sections needed to be implemented one way for most POWER systems and another way for x86-64 systems. The wrong branch cratered performance[1]. There was no way to separate the code the way you describe without implementing it twice.

[1] At the risk of starting a flame war, this situation and those like it have really soured me on the rest of the industry's obsession with "big O" for complexity analysis. I have never found it useful; I always needed to pay attention to the stuff I was supposed to be dropping.


> The "when possible" in your statement is a caveat large enough to drive a semi through

You are absolutely right. It's a real effort of architecture, and sometimes one of the marks of a great project. EDIT (thinking of a bad experience of mine): Sometimes this compartmentalization is also unnecessary and forces the architecture to be far worse: too loosely coupled and hard to follow.

> There was no way to separate the code the way you describe without implementing it twice.

Your mileage may vary, but implementing it twice can be the best solution if there are that many dependencies on the platform. You should then document the algorithm and reference that documentation in each implementation so that they don't diverge too much. That's not always the best solution, though. There's no silver bullet in this world...


> You are absolutely right. It's a real effort of architecture, and sometimes one of the marks of a great project. EDIT (thinking of a bad experience of mine): Sometimes this compartmentalization is also unnecessary and forces the architecture to be far worse: too loosely coupled and hard to follow.

Well, the architecture of what I was working on was not going to win any awards. :P It was made worse because we were converting code that was originally implemented to run on DSPs.

> Your mileage may vary, but implementing it twice can be the best solution if there are that many dependencies on the platform. You should then document the algorithm and reference that documentation in each implementation so that they don't diverge too much. That's not always the best solution, though. There's no silver bullet in this world...

Standard practice was for systems engineers[1] to create Algorithm Description Documents and Algorithm Implementation Documents that explained in mathematical formulas (ADD) and pseudocode (AID) what the code was doing. At least, that was the theoretical practice. The actual practice was a mess. But yes, there is no silver bullet.

[1] Which was me, even though I got to write lots of code, too.


Never done any DSP. How does it change the paradigms?

> At least, that was the theoretical practice. The actual practice was a mess.

:( This is another big problem yet to be solved...


There were two big architectural differences that created some weird (to developers used to conventional processors) code.

First was that the chips were fixed-point. That mainly changed the arithmetic of the individual algorithm steps, but it led to some really strange number packing schemes[1] that affected memory layout and added a bunch of code just for dealing with them.
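To make the fixed-point point concrete, here is the kind of arithmetic those chips force on you, in Q15 format (an `int16_t` scaled by 2^15, so 0x4000 represents 0.5). This is an illustrative sketch, not code from the project being discussed:

```c
#include <stdint.h>

/* Multiply two Q15 values: widen to a Q30 intermediate, round, and
 * shift back down to Q15. Every arithmetic step in a fixed-point
 * algorithm needs this kind of explicit scaling bookkeeping. */
static int16_t q15_mul(int16_t a, int16_t b)
{
    int32_t p = (int32_t)a * (int32_t)b; /* Q30 intermediate */
    return (int16_t)((p + (1 << 14)) >> 15); /* round, rescale to Q15 */
}
```

Complex numbers compound this: each complex multiply is four of these plus two additions, each a chance to overflow or lose precision, which is what made the packing schemes so strange.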

Second was the concept of "on-chip" and "off-chip" memory. The chips did not have a conventional memory bus. Instead, the program had to manually make a DMA request to move data on-chip for processing. This was basically a stack memory space local to the DSP core. It was the only memory space the core could access directly, though. Thus, algorithms had to be implemented such that they moved memory on-chip in blocks, processed the data, then moved it back off-chip. Since these movements were expensive, the implementation also tried to chain as many things together on the core in order to minimize the DMA transactions.

Needless to say, these make for extremely inefficient code in some cases when you try to recompile on a real processor or controller that actually has an FPU and memory controller.

[1] If you have never tried working with complex numbers on a fixed-point processor, don't.


While I dislike #if/#else/#endif, dynamic libraries rarely exist in embedded systems.

Using many small object files is my solution of choice but it pushes the if/else/endif problem into the Makefile instead.


I personally tend to use YAML files which contain compatibility matrices between objects, used to generate CMake statements.

I know people who use the file header to autodetect the right architecture, but I found that overkill and prone to weird bugs.


I actually agree; none of my code has #ifdef, except at the top of the C file, where it decides whether that C file will be built or not. (This simplifies having different C files per platform in the build system; adding new projects to our system is complicated enough as is! Right now it just builds every .c and .cpp in every directory.)
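A sketch of that whole-file guard pattern (the file role, macro and function names are hypothetical; in a real build `TARGET_STM32` would come from the build system, but it is defined inline here so the sketch is self-contained):

```c
/* Pretend the build system passed -DTARGET_STM32 for this variant. */
#define TARGET_STM32 1

/* The entire file compiles to nothing on other targets, so the build
 * system can blindly compile every .c in the directory. */
#ifdef TARGET_STM32

static int gpio_state; /* stand-in for a real GPIO output register */

void gpio_set(int pin)
{
    gpio_state |= (1 << pin);
}

int gpio_get_state(void)
{
    return gpio_state;
}

#endif /* TARGET_STM32 */
```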


This looks awesome. One thing though: "All code is stored in XML files". I'm not sure I see the advantage of this; what else is stored in the XML other than the code, and why can't the code be stored on its own?


This is built on JetBrains MPS[0], which is essentially an abstract-tree editor made to behave like a regular text editor. The weird thing is that the DSLs can collide; there may not be enough information in the text alone to decide unambiguously which DSL the author intended. Therefore, additional data has to be stored to disambiguate.

[0] http://www.jetbrains.com/mps/


Well, not only that, but MPS allows for things like graphical languages too. XML is used because you're not really editing text at all; you're editing at a higher level of abstraction.

Regardless, MPS has support for things like version control merging/diffing and so on. It's a pretty mind blowing tool all round.


Oh, what a terrible name, because of

http://mbed.org/

which is also an online embedded software toolchain, from IDE to linker.


Why not use something like Rust instead of this? Rust should provide the safety missing in C while also providing higher abstractions and a safe macro system, with about the same performance as C.


Because Rust is not ready for production?


How big is the runtime? I have 64K of SRAM on my part and 512MB of onboard Flash.


That is quite a lot for an embedded system.

Looking through the comments in an earlier HN post https://news.ycombinator.com/item?id=6268291 on running Rust on the Arduino Due, they link to zero.rs, which should make it possible to run without any runtime.


Zero.rs is dead now. Check out http://zinc.rs :)


Sweet! I'd really like to know how big the resulting binary for that blinking-LEDs example is.


This is really nice! Thanks!


Hmm... is there any relation to mbed.org ? mbed is a new methodology (supported by ARM) for programming and debugging embedded devices through CMSIS-DAP.


So this is something like the friendly-C variant proposed some weeks ago?


Maybe they've never heard of Eclipse.


Embedded firmware engineers are so conservative I bet the firmware for tricorders will be written in C.


As opposed to all of those other languages that lend themselves so well to the task?

Since tricorders will most likely run Linux, by default they will have lots of C.



