Nerves: buildroot linux and Erlang, with an Erlang "init" (nerves-project.org)
193 points by kevlar1818 on June 3, 2016 | hide | past | favorite | 43 comments



I don't think I'd agree that Nerves is "firmware". Nerves is, at its root, buildroot linux and Erlang, with an Erlang "init". At least the last time I looked, the "hardware interface" was just file I/O operations in sysfs. Which is cool and all, but that's just a process in Linux userland reading and writing files.

I spent quite some time trying to get tighter integration between a kernel module and the Erlang runtime. Because of the way Erlang works, one can't select(2)/poll(2) file descriptors in a useful way, which makes sysfs interrupt files less useful. I wrote a very simple Erlang node[0] to send messages when select/poll returns. It works okay, but the jitter is very high.
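The wait-and-forward trick is simple enough to sketch. Here's a rough Python stand-in for what the C node does (a pipe plays the role of the sysfs file so the sketch is self-contained; a real gpio "value" file needs a dummy read first and a poll for POLLPRI | POLLERR rather than POLLIN):

```python
# Stand-in sketch of the C node's wait-and-forward loop. A pipe simulates
# the sysfs file and a thread simulates the interrupt source.
import os
import select
import threading
import time

r, w = os.pipe()

def interrupt_source():
    time.sleep(0.01)
    os.write(w, b"x")  # simulates sysfs_notify firing

t = threading.Thread(target=interrupt_source)
t.start()

poller = select.poll()
poller.register(r, select.POLLIN)
events = poller.poll(1000)  # block up to 1 s -- this is what BEAM won't let you do
for fd, _ in events:
    os.read(fd, 1)
    print("event")  # here the C node would send a message to the Erlang node
t.join()
```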

Peer Stritzinger wrote something called Grisp[1] which uses the POSIX interface to RTEMS[2] to run Erlang without a canonical OS. RTEMS is still ultimately an embedded kernel, but it keeps you much closer to the metal than Linux. Unfortunately, Peer hasn't/won't release his code despite my pleas.

Most promisingly, there's Cloudozer's Erlang on Xen port to the Raspberry Pi[3]. This is a clean-sheet Erlang VM implementation that actually runs bare-metal (no POSIX!). The code is definitely experimental and I don't have the time to try to retarget it to my preferred platforms, but it's exciting anyway. It's definitely what I would call "firmware" in Erlang.

Anyway, Nerves is cool, but it's (u-boot + Linux + epmd + BEAM).

[0] https://github.com/thenewwazoo/erl_poc/blob/master/c_src/nod...

[1] http://www.grisp.org

[2] http://www.rtems.org

[3] https://github.com/cloudozer/ling/tree/raspberry-pi


> I don't think I'd agree that Nerves is "firmware". Nerves is, at its root, buildroot linux and Erlang, with an Erlang "init".

Ok, we'll use that as a provisional title, since people here have been objecting to all the previous titles. If anyone can suggest a better (i.e. more accurate and neutral) one, we'll change it again. (Normally it isn't so hard to look at a project page and find a suitable description, but I couldn't do that here—this one is surprisingly on the other side of the breathless marketing divide. Et tu, Erlang?)


The web page's original title is perfectly good. The new one is like an early investor looking at Airbnb and saying "Oh, it's just like VRBO but with higher fees." True but not helpful, and not likely to result in a correct investment decision.

  * it's Elixir not Erlang
  * to the people who would use the project, it is "firmware"
Nerves is really cool. And the cool part is that it's becoming possible to do what firmware programmers used to do, but using higher-level, more productive languages and OSes.


If you have many Erlang actors waiting on IO, does the runtime automatically epoll all those file descriptors?

Most things in Haskell work this way. If you wanted to get epoll behaviour without actually calling epoll yourself, you could spawn a ton of processes to wait on different sockets or whatever and then just have them write into an STM container when they receive anything. The thread receiving from x million different clients or whatever just keeps reading from the STM container.

Of course, at large scale you would probably just use epoll directly and save on the few hundred bytes per thread overhead.
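For the curious, the fan-in pattern described above is easy to sketch in Python, with queue.Queue standing in for the STM container (names are made up for illustration):

```python
# Fan-in sketch: many waiting workers push into one shared container and a
# single consumer drains it. queue.Queue stands in for the STM container.
import queue
import threading

inbox = queue.Queue()

def worker(client_id):
    # A real worker would block on its own socket; this one just
    # delivers a single message when it "receives" something.
    inbox.put(("client", client_id))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The consumer would loop on inbox.get(); here we just drain it once.
received = sorted(inbox.get() for _ in range(3))
print(received)
```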


It's been a while now and I don't remember the details, sorry. To the best of my recollection, the BEAM VM won't do a blocking select(2). Or maybe it forces use of poll(2) when cross-compiling? Basically, the gist is that it breaks the BEAM scheduler to block on I/O for longer than a very short time, and the file read code is pretty deeply internal, so you just can't do it (network sockets are a very special exception). You can poll a file (that is, repeated reads), but you can't block and wait for, e.g. sysfs_notify. Since all Erlang threads are green threads, there's no way to accomplish a blocking read within the auspices of the VM. Thus, the C node.

My goal was to reduce the time between (physical) interrupt and Erlang thread response because I was trying to make sure I didn't miss an event.
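The repeated-read fallback mentioned above looks roughly like this (a sketch with a fake reader so it runs anywhere; a real one would re-read the sysfs file):

```python
# Busy-polling sketch: re-read a value until it changes, since blocking in
# select(2)/poll(2) isn't an option inside the BEAM scheduler. The fake
# reader below is a stand-in for re-reading a sysfs file.
import time

def poll_value(read_fn, interval_s=0.001, timeout_s=0.5):
    """Re-read until the value changes; return the new value, or None on timeout."""
    last = read_fn()
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        cur = read_fn()
        if cur != last:
            return cur
        time.sleep(interval_s)  # jitter comes from this interval + scheduling
    return None

values = iter([0, 0, 0, 1])
def fake_read():
    return next(values, 1)  # settles at 1, as if the gpio line went high

result = poll_value(fake_read)
print(result)
```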


Ling appears to be abandoned though ... no activity in the repo for a year, no website updates etc.


> The code is definitely experimental

For Ling in general or for the raspi port?


I meant for the port, but I'll admit that it's supposition on my part. It doesn't look like the Rπ port keeps feature/fix parity with the main branch, but I may be mistaken.


That was the most interesting tech thing I saw for ... months at least. Thank you sir for your shared pointers and insight.


Thanks! Now if only I could parlay my enthusiasm for embedded systems into related employment. :)


Ditto...


I think the term embedded systems is slightly abused in this context.

From what I understand, Nerves essentially creates a minimal Linux firmware image that boots into your Elixir app. That is way too huge and bulky for most embedded systems i.e. microcontrollers and FPGAs. Furthermore, I don't think that the overhead of the firmware and VM can actually handle the level of realtime control required in the embedded space.

Since Nerves seems to be targeting the IoT space, why not just say "Elixir for IoT"?

Nonetheless, it looks like an awesome project. Elixir is a really interesting language running on a mature platform, and has come quite a long way in a very short amount of time.

Edit: from the ElixirDaze talk [1], Garth says that the image is around 18 MB in size.

[1]: https://www.youtube.com/watch?v=poWoCWDLxRU&t=9m05s


I'm an old embedded dog and I've struggled with this a lot recently.

I get that an RPi or Beaglebone or any large SoC architecture qualifies as an embedded system in the sense of "this board is running a drone/driverless car/CNC machine/etc".

But, at least to me, these are highly miniaturized desktop systems now. There's disk. There's advanced I/O (HDMI, LVDS, 802.11, Ethernet, SATA, PCIe!). There's a full operating system on there. You could put one inside a 1U case and pretend it's a server.

Not that this is a bad thing. SoCs are awesome and I personally have a half-dozen projects cooking with iMX6s and other cool parts in this domain. But I think we need to find a way, especially in the IoT era, to quantify a small device node architecture versus a Linux-desktop-on-a-board. I already see Microsoft abusing this by putting Win10 on an RPi and calling it an IoT architecture.


As someone who has played in the embedded world too, I struggled with that last year.

But let's be honest: unless you have really strong constraints on power consumption and form factor, most of this small stuff is far easier to work with, probably performs better (because yes, your software has bugs), and has so, so much better tooling.

Because the embedded world has a big tooling problem. I still hope to see some Rust at the lower levels to push it into the 21st century.

And the Nerves project is made by people who come from the embedded world. They know its limits well and do not try to push it everywhere.


It's not only power consumption, but also realtime applications, like those used in industrial and automotive control systems.

In such scenarios, the overhead and unpredictability of the OS scheduler is unacceptable. When you want to, say, isolate a fault exactly 100 µs after it occurs, you can't have the OS executing that task after some nondeterministic delay.


Well, in that case you enter a completely different world. Hard real-time systems are not really only "embedded". They also exist for bigger things.

It is just a really different way of thinking. And a really hard one, I may add.


Hard real-time can also mean latencies of seconds, to further complicate things.


I think "embedded" may be the wrong word. Perhaps "very low power" is the correct way to differentiate these things (e.g. RPi versus TI MSP430). I think that at this point they are both "embedded" since I would consider "embedded" to be anything without a keyboard and monitor (hence "embedded" inside another thing).


So a headless server is ‘embedded’? An old-school vector supercomputer that required a frontside host system is ‘embedded’? No, surely not: ‘embedded’ requires the notion of the computing system to be essentially a controller of a physical process or system, i.e., the component that controls a device or machine that operates in our actual physical world. It might very well possess a (perhaps rudimentary) screen and keyboard or keypad.


I like that definition.

I used to maintain a system most people would call true "embedded" - it was a PowerPC based VME system running VxWorks. The job of the system was to monitor some analog voltages, detect an unsafe condition, and open some relays if the unsafe condition was detected in < 10 ms. VxWorks dev tools and the PowerPC cards are very expensive, so we decided to replace it with an x86-64 server running RedHat MRG. That is the Linux kernel with the PREEMPT_RT patchset. We had a PCIe card in there for monitoring the analogs and firing the relays. Was the system still embedded? I would argue it was. The heart of the code was basically the same - VxWorks has the POSIX API for pthreads after all. There were just some driver call changes and init looked different.


Thanks for mentioning VxWorks. It looks quite interesting, and I'm surprised that I haven't heard of it before!


> …the component that controls a device or machine that operates in our actual physical world.

So is an industrial factory control server embedded? It controls valves, actuators, reads sensors. It can still be a whole rack top to bottom full of high power Xeon servers.

Embedded literally means inside something. It is not a computer used on its own, say to run general purpose software, but as part of another larger system. There are no power or size requirements for that. And most of all, it is a fuzzy term.


You make a good point. But the old-school vector supercomputer was likely not very-low-power. That's my general point, since very-low-power devices have different programming models than a more conventional OS-based computer.


That would be a controller. "Embedded", in a similar vein, means that a controller or whatever is embedded in a bigger stand-alone system, and thus isn't stand-alone on its own.

A desktop is stand-alone. A web server isn't, but the internet it's part of isn't really stand-alone either. Mobiles were called embedded, but newer phones with apps are rather stand-alone. All are embeddable.


> I still hope to see some Rust at the lower levels to push it into the 21st century.

Well, there are some nice tools available, but not many seem to be willing to pay for them.

http://www.mikroe.com/compilers/

http://www.astrobe.com/default.htm

http://www.ptc.com/developer-tools/apexada


I used to, but I have come to accept that embedded nowadays means anything that is black-box-like and does its thing. It can be Linux or a PIC program written in assembler.


I'm a new Arduino hobbyist and I struggle with it too. Raspberry Pis are definitely not embedded, they're a full-blown computer that's small. It's amazing how they managed to fit an entire desktop in a Pi-Zero-sized package, but it's not embedded.


I disagree that a Raspberry Pi is "definitely not embedded".

What if I had an ARM SoC (like in the RPi) running VxWorks or RTEMS controlling a missile? Few would argue that is not an embedded system.

What if the RPi is running real-time Linux doing computer vision on an assembly line? What if the timing deadlines are pretty loose (>10 ms), a miss a month is acceptable, and it runs a stock Linux kernel? I would argue all of those cases are real-time embedded systems, albeit with different hard real-time versus soft real-time requirements.


You can replace "RPi" with "my Dell desktop" in all of these, and they would be equally true. Does that mean it's embedded? If not, that leaves just the physical size. Does "embedded" only mean "any computer that's very small"?


Sorry if I offended anyone by using the term "embedded systems" in this context. I think of an embedded system as a large umbrella group for application-specific hardware/firmware configurations; maybe that definition doesn't hold water. It certainly differs from the popular "real-time performance" definition, which I believe is just a large sub-group of embedded systems.

I've altered the title of the post to use Nerves' actual tagline.

Disclaimer: I work for a company whose products are based around SoCs running Linux for soft-real-time applications, and we whole-heartedly call these products embedded systems.


I don't think it was offensive at all. The smartphone/SoC era has just blurred the lines. We can now have dedicated application processors that do incredible things, stuff that was inconceivable a decade ago.


Nah, not at all, I just felt that it was slightly misleading. I for one expected Elixir running on a microcontroller, similar to MicroPython [1].

[1]: https://micropython.org/


The "can Erlang handle realtime control" question seems to come up fairly regularly, but more as an expression of doubt than anything concrete.

This is the classic example of it being done: https://www.youtube.com/watch?v=96UzSHyp0F8

What I don't know is whether that's generalisable, or relies on particularly fiddly work on the microprocessor to keep latencies down.


> That is way too huge and bulky for most embedded systems i.e. microcontrollers and FPGAs

I've seen embedded 2U Xeon servers on a rack, used for signal processing or for controlling industrial systems. I would still consider those embedded. They are used inside of a larger system.

> Furthermore, I don't think that the overhead of the firmware and VM can actually handle the level of realtime control required in the embedded space.

It is perfectly fine. Just a few years ago someone demonstrated using it to read data out of a camera sensor and stream it. The BeagleBone Black, for example, has two real-time co-processors (PRUs) which can be used to read and process data off sensors and such.

http://www.slideshare.net/fhunleth/building-a-network-ip-cam...


> That is way too huge and bulky for most embedded systems

I heard a good definition: is it used as a computer or as an appliance. If it's the latter, it's at least kind of embedded, even if it's a fairly powerful processor with lots of memory and so on.


Yeah, embedded to me is something in the PIC/AVR/Cortex M0|3 space.

Still awesome and exciting stuff.


Autonomous driving will be way too huge for any µC the likes of AVR and PIC, but it's still called an embedded system. Or a system of embedded systems, at least.


Nerves looks like a really exciting project. IoT is only going to get bigger and as a result there are a number of problems to overcome. Things like how to do remote deployment and version handling to the devices. Also with Nerves you can write the device "client" code and backend server code all in the same language.

Anyway checkout the Elixir Conference talk on Nerves: https://www.youtube.com/watch?v=118-g0ODfgg&feature=youtu.be

If anyone wants a 30% off coupon to an Elixir course, here you go: https://www.udemy.com/elixir-for-beginners/?couponCode=Save3...



Great course by the way +1


> Nerves is young, but already powers rock-solid shipping industrial products!

Wow. Okay. Can you show us?


Check out Garth Hitchens' talk about 2:40 in: https://www.youtube.com/watch?v=poWoCWDLxRU


To be complete, IIRC it is that product that is in production. The ElixirConf EU talk shown above also talks about some production stuff. http://www.rosepointnav.com/commercial-radar-interface/



