I'm very unlike the author in that I mostly dislike the terminal and I really like IDEs.
That said, I think he's onto something here because, invariably, the IDE experience for embedded programming sucks. The amount of chipsets that pretty much force you to use an outdated compiler toolchain with some shitty unmaintained Eclipse plugin, or Visual Studio 2008 or whatever outdated piece of shit, is baffling.
I'll never understand why professional embedded developers just suck this up. I know large companies that develop enormous C codebases in stuff like Notepad++ or SlickEdit with ancient toolchains, no way to attach a debugger, and no way to automate tests. When I moved from embedded programming to web dev, I moved to an environment where people care about their own productivity. I can't overstate how different that is; it was a breath of fresh air.
> I'll never understand why professional embedded developers just suck this up
Here's my point of view: historically, software engineers didn't pick the ecosystem. The electrical engineers did. They looked at (in descending order) cost, I/O capability, manufacturing lifespan, and then finally the software ecosystem.
When things started getting too similar, the chipmakers picked up on this and started selling the software as a feature: "Our neato IDE has a Part Wizard! You pick the part and variant and our gizmo will set up a blank project with headers and mux already done. You can run Hello World with a mouseclick!" And thus we got things like CodeWarrior in Eclipse.
The dirty little secret in embedded is that everyone is lazy and they're behind. If that manufacturer-provided IDE does the setup and gets my code launched on the part then let's go. Hell, they even give me reference projects to steal from!
Look at all the work the OP does setting up a compiler and makefiles. Could we have done that years ago? Of course. But nobody wanted to make the effort.
A lot of us have moved on. I recently worked in an ecosystem built with Zephyr, ARM GCC (psst: get it right from ARM and skip Launchpad), VSCode (w/Cortex Debug plugin), Git, and CMake. The only thing I had to pay real money for was the JTAG debugger. It ran natively in Linux and it was sweet.
The tooling is not very good because nobody pays for software tools. When the electrical engineers pick the ecosystem, they are not stupid enough to just use whatever tooling comes with it for free; they spend $50k/year per engineer(!) to buy adequate tools to make their engineers productive. In contrast, software engineers and their engineering organizations are happy to waste equally valuable engineers just to save a couple of bucks; it is insanity.

In what other industry is a $10k expenditure an instant deal-breaker when investing in the productivity of an employee who costs more than $100k/yr? Companies invest more in the productivity of their truck drivers and admins than they do in software engineers. We are extremely valuable employees, and we should demand that our companies invest reasonable amounts into actually making us productive instead of tossing us the scraps; they are harming their own profitability by doing so. I mean, think of all the awesome tools that would make our jobs easier and more productive if our productivity were valued even half as much as the average electrical engineer's.
Having used paid software development tools, I really don't think this is the case. In particular, I'm thinking of Xilinx Vivado, a popular FPGA IDE that starts at $3000/seat and is one of the worst pieces of software I've ever had to use. FPGA developers aren't cheap; I shudder to think of the amount of money paid for time spent dealing with issues and bugs in this program.
I am not really talking about the purchase of specific tools; obviously you can buy expensive garbage, and the most expensive stuff is not necessarily the best. I am talking about a systemic failure to properly invest in tooling and productivity, which results in tooling being undervalued. Put another way: if I gave you $1k/yr to freely spend, do you think you could improve your productivity by 1%? If I gave you $10k/yr, could you improve it by 10%? If the scale is not high enough: if I gave 100 engineers $1M/yr to spend, do you think you could improve the aggregate productivity of all of them by 10%?
One more way to put it: the fully-burdened cost of a typical software engineer is something like $200k/yr. At $3k/seat, Xilinx Vivado probably costs less than 1.5% of your yearly cost to your company. If your company paid $30k/seat, or 15% of the cost of each engineer using it, do you think you could put together a system that makes you (and probably everybody else in your company using it) 30% more productive than bare Xilinx Vivado, whether by buying a different product, making your own, or configuring around Vivado's problems? If so, that would clearly be an economically sound investment for your company, even ignoring the much more significant second-order benefits of increased productivity. Do you think anybody even seriously considered spending $30k/seat to make you more productive? Everybody I have ever talked to would consider that an absolutely ludicrous, throw-you-out-of-the-room amount to spend on a software engineer, even though it is in fact a mere 15% cost increase and would thus only require a similarly modest productivity increase to be economically sound. Given the way people normally talk about such a thing, you would think it increases costs by 10x and so would need to produce a 10x productivity increase to make economic sense, but that is just plainly not the case.
While I agree with you about FPGA tools, that's a function of the fact that Xilinx and Altera won't disclose the bitstream so nobody can build an alternative.
In the embedded space, people like Green Hills Software charge lots of money for tools that they claim are worth it.
"Historically", electrical engineers were the software engineers. You picked from a small collection of available parts which met your cost and design goals, and you put up with whatever horrific software tools supported the platform. This was my experience as an EE grad doing embedded SW work in the late 90s. Embedded back then didn't get anywhere near the attention that desktop processors received. Toolchains, debuggers, emulators, library support... all poor. Expectations were low and remained low. They could be, because most embedded stuff was handled by EE folks who didn't know any better and often weren't interested in the potential overhead of any 'better' ways. Embedded margins were typically tight, so there really wasn't a bunch of money sloshing around to fund improvements. You're talking about an industry where fractional-cent savings mattered.
Very solid point, and something I should have said earlier.
I'm a computer science major that went into embedded by circumstance of working in a particular field, but we were always kind of rare. (I should have gone into CE, which in my school was a hybrid of CS and EE and would have been more valuable).
The influx of CS majors into embedded now is creating a new field of innovation and also a new field of problems, like people trying to cram node.js into microcontrollers.
But, if you saw the actual code that EEs would write, the past was way way worse.
Haha yeah, I was the young guy working with a lot of senior, senior engineers. The code was often atrocious. Those guys had a ton of knowledge, but software was just an afterthought to them. On the plus side, it kept things simple. Definitely a new world these days.
If you're strapped for cash you can run Black Magic Probe firmware on a $2 bluepill board for JTAG with a fully open-source tool set. (I run it on a $2 esp8266 for wireless JTAG. Open source gives many options!)
Free as in speech, almost free as in beer. Less than a single (good) beer.
For the sealed-bottle experience, you can buy the commercially supported Black Magic Probe hardware. Still open-source and better integrated into gdb than Segger etc.
> I'll never understand why professional embedded developers just suck this up
Had this sort of argument during an interview (I was applying for a junior embedded C developer position). I was talking about the fact that I had experience building cross-compilers, which could come in handy with embedded development.
The guy stopped me right there and said, basically: "Just no. Let's just not. We're not doing that here and have no intention of starting doing that".
He went on to explain, and basically it boils down to the fact that while technically possible, anything outside the BSP (board support package, i.e. IDE + compiler/linker/debugger/programmer) is unsupported. The vendors just won't support anything that's not built with their tools (in retrospect, that's very reasonable).
> I'll never understand why professional embedded developers just suck this up.
It's only been very recently that you can apply the enterprise development tools to the embedded space.
Embedded used to have all manner of screwball compilers to support some screwball architectures. Cygnus used to specialize in porting gcc to these architectures. Now, pretty much everybody has converged on gcc so you don't have to support weird, obscure things.
It's only been recently that so much embedded development has converged on 32-bit ARM Cortex. That's a lot fewer architectures you have to support and 32 bits instead of 8 bits means that you have a unified memory space rather than weird I/O accessors and chunked memory and far pointers and ... Now your tools can specialize in one debugging format/protocol and get better with time rather than having to be rewritten and rewritten and rewritten.
Embedded folks are starting to move. A lot of chip vendors support an Eclipse-based toolchain. Microchip threw in behind NetBeans (not a good move, but they realized they simply couldn't spend enough money on their proprietary IDE to keep it going). Microsoft is throwing stupid amounts of resources behind VSCode, so it's going to be hard to match. The Rust embedded folks are getting really good with Visual Studio Code integration, and the Visual Studio Code architecture is much better than most other IDEs'. Other people are starting to realize that they can use "standard" tools as well.
However, the critical mass required has just coalesced and embedded development changes really slowly.
Life as an amateur is probably worse. I understand why businesses don't focus on this market, but the hurdles imposed on simply getting into embedded programming are absurd. Yes, you have things like Arduino that can target multiple platforms. On the other hand, all of the hand-holding with libraries and hardware modules doesn't exactly encourage diving deeply into the subject. Then there is the stuff that simply isn't accessible, such as attaching to debuggers.
That said, there are some promising projects out there. PlatformIO seems to offer a more comprehensive solution. arduino-cli is useful for those who don't use IDEs and appears to be something that IDE developers can build upon.
Life as an amateur is better. Arduino + PlatformIO is plain fun to work with.
As a professional (as in getting paid to do it), I shed being ashamed of using Arduino a while ago. With PlatformIO, you have great tooling and you can avoid the garbage that is the vendor-supplied IDE. And even though the Arduino HAL isn't exactly elegant, I can switch between uCs without having to learn a new API. I can even port between uCs by simply changing some pin names and so on.
I'm currently looking into other frameworks, like Zephyr, to trade up from Arduino, and I'm especially looking into using Rust in the future (because Rust would give me an actual benefit to offset the cost of using the "harder" language, which will in the long run make stuff easier).
But the ease of use, especially ease of prototyping, of Arduino + PlatformIO is now the baseline.
There are also MicroPython and Espruino - again, stuff that people will make fun of you for using, but you won't care because you will be having too much fun using it. Performance and size penalties mean they don't work for all problems, but often enough, they do.
There are even working examples of using Swift on uCs - and I really wish that language were better supported under Linux.
There are a lot of great things about Arduino and I agree that there is no reason to be ashamed of it, but the community can be a bit disappointing at times.
I suppose that a lot of people approach the Arduino from the perspective of wanting to create something, yet I am more interested in learning how things work. You can definitely pursue the latter with Arduino. The hurdle I've been running into is that a great many people are only interested in using the Arduino libraries, while the few who dive deeper seem to focus upon documenting their projects. Very few people seem to discuss bridging the gap.
Sometimes the most interesting bits lie in that gap. I learned more about microcontrollers by disassembling simple programs and referencing datasheets than from following tutorials or attempting to read the datasheets on their own. You simply cannot do that sort of stuff with the basic Arduino IDE. It hides the details of the implementation and of the toolchain. In my case, it took the arduino-cli tools.
None of that is meant to diminish Arduino, MicroPython, or the many other projects that are intended to make life easier. Are they more fun? That depends upon what you're trying to accomplish. I play with this stuff because it brings back memories of the early days of personal computers, so I am more keen on seeing what can be accomplished within tight constraints and shedding away the layers of abstraction. Of course, different people will have different goals and expectations.
The problem with a lot of these solutions is that they won't hold for complex projects. If all you are doing is turning on some LEDs and connecting through USB, it works okay.
Try heavily time-sensitive synchronized SPI transfers and going deep into the hardware layout becomes necessary.
One of the things I like doing is taking an Arduino project and rewriting it without using the Arduino libraries, and the benefits go beyond complex problems like timing. It is possible to trim a lot of memory and flash usage if you are tailoring your code to a specific project. The end result is you can do more without purchasing and learning a new microcontroller.
One factor to consider is long term stability, an area where web dev works very differently from embedded devices.
Embedded code is often tied to a specific hardware device. I maintain code that was originally written almost 20 years ago.
I'm extremely conservative in my technological choices for this reason. Whatever I'm using today needs to keep working if I (or my successors) have to unearth the project in 10 years.
And you're right that the vendor-provided tools are often pretty bad, but that's one of the main reasons I personally don't care for IDEs and deep integration with the environment. FZF works reliably everywhere, dummy (non-syntax aware) completion works reliably everywhere etc...
Would I be more productive with fancy-pansy code completion? Probably a little bit. But having a simple, agnostic, portable and stable development environment is well worth the trade-off IMO.
Not that I think that people are wrong for preferring IDEs, they're just a different set of trade-offs.
> Would I be more productive with fancy-pansy code completion? Probably a little bit. But having a simple, agnostic, portable and stable development environment is well worth the trade-off IMO.
I'd say historically this was less of a concern, but 10-100kloc codebases are becoming pretty common. More powerful uCs with more flash/ram, heterogeneous cores, etc are allowing very complex embedded projects to exist. I'd say fancy IDE features start to help in these cases.
It may just be a matter of time. Those of us who are old enough to remember embedded programming in assembly language, aren't all that old! The following developments occurred after I had been doing embedded development for a few years:
* Microcontrollers that could be programmed in C with tolerable results
* Big enough memory and performance to not need hand tweaked assembly code
* Vendors who didn't expect you to pay for development tools
* A single CPU platform prevalent enough for someone to adapt GCC to it
However, embedded developers are certainly not unaware of developments in general software development tools. For one thing, they invariably end up writing higher level software to interact with and test their embedded creations. For another, all but the biggest shops have some interaction between the embedded and software teams. In my own case of doing modest projects in R&D, I've often got Arduino and a Python IDE open side-by-side on a lab computer.
So the embedded industry basically has three types of toolchains/dev environments. Open source, vendor supplied, or paid commercial stuff.
The commercial stuff costs a lot of money. Depending on what the project is and the size of the company, there's a good chance an embedded developer will have to suffer with this stuff. Compiler error messages are straight out of the 90s in terms of how bad they are, basically what compilers were like before Clang showed that C/C++ compilers could be helpful. The IDE is more like a glorified text editor with some hook-ups to their own toolchain/build system. Oh, and things like code completion or just finding the definition of a function can be completely broken with no indication as to why (thanks for the system beep as a generic error indicator, IAR).
Next up comes the vendor-supplied stuff. As you said, lots of outdated stuff. They come with wizards to generate low-level code which tends to be pretty dire (thanks for using malloc in an ISR, ST). I could understand a hobbyist using this stuff to get their feet wet, but I've seen way too many companies use this stuff for actual products and it hurts to think about.
Lastly is the open source stuff. This really is my preferred method, but it's hard to get both companies and embedded developers on board with this. Most embedded developers I speak with ended up using the vendor supplied or paid IDE because they wanted the debugger "to just work", similarly for the build system. This is kind of a fair point, it sucks having a ton of work to do just to set-up and maintain the project outside of actually developing code. Of course from my perspective open source offers you flexibility, the previous two categories lock you into a specific way of doing things and you will be stuck doing it their way even if it sucks, which it often does.
You're absolutely right about the suffering with the commercial stuff and the open source stuff being more flexible.
However, when a product needs to use a certified toolchain to meet required safety standards, and to be supported using that toolchain throughout the product life, it does have its place. Doesn't stop you using GCC or LLVM in addition for the better diagnostics though. But you would not want to bet with people's lives on the assembler output of optimising compilers, when it comes down to real life critical stuff in production. You could use it, but independently validating the toolchain would be really expensive. The commercial toolchains (allegedly) provide behaviour guarantees that standard compilers do not. Probably more expensive than forking out for the commercial licences in most situations, which is most likely why they are so costly.
Doesn't mean that us devs don't wish daily for GCC and LLVM levels of user friendliness and features though...
I'd encourage anyone doing embedded development in 2020 to give VisualGDB a try. I'm very happily developing embedded systems in VS2019 with C++17/20, and it has built-in support for every platform I would want to use.
I have no affiliation with this company, just a happy customer.
Eclipse CDT is perfect for embedded development, with no real competitor. Given the compilation command-line output, its built-in parser can understand very large codebases (the Linux kernel, the Chrome browser, etc.) efficiently and precisely, and it could do so long before LLVM-based tools appeared.
No need to configure tons of vim plugins, you have tons of powerful features in the default settings, like type hierarchy, call hierarchy, debugging etc.
You may have heard that Eclipse CDT doesn't work out of the box. To make it work, you only need to understand how its parser works and how to properly set it up. It can support any build system, as long as the build system can output the build command lines. It has worked for almost every project I have encountered in the past nine years.
> Eclipse CDT is perfect for embedded developing, with no real competitor.
ugh, having used cdt a bit, it is terrible when compared to qt creator. It's pretty easy to make, say, an AVR-GCC toolchain and use it with CMake, which will then work ootb with QtC like any other C/C++ project.
A large portion of the problems with Eclipse CDT come from the Language Settings Provider NOT being properly set up. See the MDN article mentioned in another comment for more on this. CDT's parser is NOT bound to any build system. You can make it work with almost any build system, provided the build system has legitimate command-line output. And CDT's performance is better than most of the LLVM/LSP-based tools. LSP-based solutions have not matured enough to be a real competitor.
I remember that around 2011 I tried Qt Creator, and it was a bad experience since I didn't know how to cope with several build systems. I've no idea how Qt Creator's multiple-build-system support is now. For the same reason, CLion is not good enough to compete with CDT.
LLVM/LSP/Vscode/CLion, these are all exciting and have a lot of fans. Unfortunately, none of them is good enough in this subfield.
Author here. Wow, I wrote this article back in college and I was really surprised to see it up here on HN! Pretty cool to see so many talented embedded folks reading my "write to learn" piece :)
This blog post was the start of me discovering my love of diving deep into embedded development. Since then, I built a patient monitor device using this makefile-driven build approach. Nowadays, I am re-writing this device using embedded Rust. It has been such a great experience watching the embedded Rust space grow and mature over the years; it wasn't the case several years ago, but now I can build a complex embedded system using stable Rust! I've even got on-device unit tests working, and it's still the same terminal-driven, vim-based workflow I've gotten so familiar with.
Did the toolchain you were working on support both GCC and Clang? I ask because I was recently trying to work on an ESP32 project in vim and got stuck hard on YouCompleteMe only offering a clangd-based completer while the project was only compilable with a GCC-based toolchain, leading to annoying things like headers not being found due to `#if GCC...` guards (or the like), and also not all CLI switches matching up.
Any tips on getting completion working in vim for GCC-only CXX projects is highly appreciated...
In my case, I don't believe I had any GCC-only restrictions for the STM32F4 chip I was developing on, so I didn't ever run into this issue.
That being said, it's always worth experimenting with flags in YCM or clangd to see if you can get something working well enough for most development needs. I would take a look at the places where those `#ifdef GCC` macros are used and see if you can spoof the flags enough to get something workable. Throwing a `-DGCC` into your YCM flags will, in the worst case, just raise an error message and you can remove it. In the best case, it won't behave quite the same as GCC, but you can get basic linting, autocomplete, etc. working. Sometimes this stuff requires a bit of exploration to get working the first time, but once you get a working setup it's very satisfying!
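For illustration, a hypothetical `.ycm_extra_conf.py` along these lines. The `Settings` hook returning a `flags` list is YCM's actual mechanism for C-family projects; the specific macro and include path below are placeholders you'd swap for your project's:

```python
# Hypothetical .ycm_extra_conf.py: hand clangd-backed YCM a flag set that
# approximates the GCC cross-toolchain closely enough for completion/linting.
def Settings(**kwargs):
    return {
        'flags': [
            '-x', 'c',          # treat files as C
            '-Wall',
            # Spoof whatever your codebase's `#ifdef GCC`-style guards check;
            # '-DGCC' is a placeholder for the project's actual macro.
            '-DGCC',
            # Placeholder path: point at the cross-toolchain's headers so
            # includes resolve during completion.
            '-I/usr/lib/arm-none-eabi/include',
        ],
    }
```

Drop it in the project root; YCM picks it up automatically and asks it for flags per-file.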
On the flip side, I've only recently started using an IDE (vscode) after using a pretty light vim build for almost a decade to learn full stack app development (there's no demand for a crusty scientific programmer). I've also recently ditched my arch+awesome wm box for a mac.
I'm blown away by the features, speed of code completion, tooling, and great built-in terminals. These were all features I'd wanted for a while, but they always seemed kind of clunky in vim plugins (I didn't look too hard :)). I'm still not a fan of the memory footprint, but I have plenty of it.
I'm not an embedded guy, but Makefiles are super versatile, I still use them to automate everything from building large latex docs to trivial git commits. I consider Make one of the greatest pieces of free software developed.
Make is great if you understand how to use it. It's sort of like git in that many people just pick it up as needed and so they don't understand the underlying model, which makes it mysterious to them and causes it to break in ways they don't understand.
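For anyone who hasn't internalized that model: make's core is just "rebuild a target when it's older than any prerequisite," and everything else layers on top of that. A minimal sketch (rule names are made up):

```make
# Make's whole model: a target is rebuilt only when a prerequisite is newer.
app: main.o util.o          # "app" depends on these object files
	cc -o app main.o util.o

%.o: %.c                    # any .o can be built from the matching .c
	cc -c -o $@ $<          # $@ = target name, $< = first prerequisite
```

Touch `main.c` and only `main.o` and `app` rebuild; once you see every rule as "target: prerequisites + recipe" plus timestamp comparison, most "mysterious" behavior stops being mysterious.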
I'm surprised that the author equated embedded programming with IDEs! In my experience that's where the IDE programming fares poorly because every environment is different (so everybody has their "official" IDE that you need to relearn) whereas the low level tools tend to be fairly standard (you usually have a vendor-provided GCC toolchain and a few command line tools for generating images and flashing).
I know that IDEs are incredibly popular nowadays, but I do 100% of my development in Vim without any issues. It does require proficiency with the command line, Makefile and shell scripting though.
For instance the Makefile shown in TFA is not great, in particular because it doesn't track changes on header files. My personal template looks like:
    NAME = my_prog
    CFLAGS = -Wall -O2 -MMD -MP

    SRC = main.c foo.c bar.c
    OBJ = $(SRC:%.c=%.o)
    DEP = $(SRC:%.c=%.d)

    $(NAME): $(OBJ)
    	$(info LD $@)
    	$(CC) $(LDFLAGS) -o $@ $^

    -include $(DEP)

    %.o: %.c
    	$(info CC $@)
    	$(CC) -c $(CFLAGS) -o $@ $<

    .PHONY: clean
    clean:
    	$(info CLEAN $(NAME))
    	rm -f $(OBJ) $(DEP)

    # Be verbose if V is set
    $V.SILENT:
Note that it hides the actual commands being executed unless you set "V" on the command line, which I know some people actively dislike.
The "magic" here is the "-MMD -MP" flags to gcc to generate header dependencies and the "-include $(DEP)" to integrate them in the Makefile.
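For anyone curious what those flags actually produce, here's an illustrative `foo.d` as gcc would generate it with `-MMD -MP` (file names made up):

```make
# foo.d, emitted by gcc -MMD -MP while compiling foo.c:
foo.o: foo.c foo.h bar.h

# -MP also emits an empty rule per header, so make doesn't error out
# with "No rule to make target" if a header is deleted or renamed:
foo.h:
bar.h:
```

`-include` (with the leading dash) pulls these in without complaining on the first build, when no `.d` files exist yet.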
> I'm surprised that the author equated embedded programming with IDEs!
I think it depends how you get into it. I came to it from a hobbyist background, so using gcc and Makefiles was natural. But people coming from a commercial background will have had a variety of terrible IDEs inflicted upon them.
I'm biased for sure, but embedded rust has a growing toolset that is likely to be long lived as it's a collection of simple but effective tooling. It's the entire goal of knurling-rs and surrounding tooling to make it dead simple to run your programs on hardware.
haha, indeed I was quite late to the party in discovering the power of these tools :) In a way though, the fact that a college student in 2016 can get excited about the possibilities of vi and make speaks volumes to how influential a well-designed software tool can be!
Well if you take suggestions, I personally only use vi over slow links (it shines on high latency connections where a normal editor does your head in with the slooow updates).
Since you're mentioning the arm gcc you probably have access to the likes of jed, nano or even the editor built into mc there.
There's a conflation in embedded of IDE as a toolchain assist, instead of as a code or language assist. It's a conflict of interest: Should the IDE be centered around the hardware (Current standard), or the language (standard in other domains)?
For embedded Rust, I've found the Jetbrains plugin to be outstanding. Ie, CLion, PyCharm etc with the Rust plugin. [Here's a minimal STM32 quickstart](https://github.com/David-OConnor/rust-embedded-quickstart) I wrote. It's not tied to an IDE. It uses probe-run to flash and debug.
It always bothers me when otherwise well-meaning educational materials are untested and completely miss major points they seem to be making.
Case in point: the article explains how to teach make to compile C files into object files, and then provides a rule that builds the final binary _from source_ rather than from the object files.
    # Tell make how to compile your *.c files into *.o files
    %.o: %.c
    	gcc -c -o $@ $< $(CFLAGS)

    # Finally, tell make how to build the whole project
    final_binary.elf: $(SRCS)
    	gcc $(INCLUDE) $(CFLAGS) $(LFLAGS) $^ -o $@
This compiles the final binary from the sources, not using the object files and the rule above it.
(And clearly the leading example was never tested, as it also missed the closing parenthesis after "$(CFLAGS"!)
Anyway, to take advantage of the C-to-object rule, the last rule's prerequisites should be changed to the object files, like this:
    # Finally, tell make how to build the whole project
    final_binary.elf: $(SRCS:.c=.o)
    	gcc $(INCLUDE) $(LFLAGS) $^ -o $@
To be pedantic, the above will recompile faster but will not necessarily be correct when header files change, so with that change it would be good to integrate automatic dependency generation [1] as well (for the object files).
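A sketch of what that integration could look like, combining the article's rules with the `-MMD -MP` trick described elsewhere in this thread:

```make
CFLAGS += -MMD -MP              # emit a .d dependency file next to each .o

%.o: %.c
	gcc -c -o $@ $< $(CFLAGS)

final_binary.elf: $(SRCS:.c=.o)
	gcc $(INCLUDE) $(LFLAGS) $^ -o $@

-include $(SRCS:.c=.d)          # pull in generated header dependencies;
                                # the leading dash ignores missing .d files
                                # on the very first build
```

With that, editing a header correctly triggers recompilation of every `.c` file that includes it.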
My team works on Pigweed [1], which offers a more terminal based approach to embedded development. Instead of working with Make, we use GN [3] as our primary build system (though we also support CMake, and plan to support Bazel).
The "pw watch" command is an integrated watcher that can detect file changes from e.g. vim, then re-build, re-flash your device, and re-run tests according to the dependency graph. I use 2 or 3 STM32F429i Discovery boards to run tests in parallel.
If you're curious, we're giving a workshop [4] at Hackaday's Remoticon; feel free to join or watch the recording after it's up.
For what it's worth I do embedded/hardware development and almost everyone I work with uses Vim/command line tools. Some people use an IDE for programming, but all the build scripts, etc. can be run from the command line.
Just note that Makefiles require tabs for indentation in recipes. You may need to configure your text editor to not apply a global tabs-to-spaces conversion when editing Makefiles.
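The failure mode is at least easy to recognize once you've seen it. A sketch of the symptom:

```make
# If the recipe line below were indented with spaces instead of a tab,
# make would abort immediately with the famously terse:
#
#   Makefile:7: *** missing separator.  Stop.
#
# Recipe lines must begin with a literal tab character:
all:
	echo "indented with a tab, so this runs"
```

In vim, `:set noexpandtab` for Makefiles (or relying on the built-in make filetype settings) avoids this; `:set list` makes the tabs visible.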