Unpopular Opinion: Don’t Use a Raspberry Pi for That (set-inform.com)
235 points by Narutu on March 22, 2023 | 262 comments



I often end up giving the opposite unpopular opinion: RPis are overkill for a lot of DIY IoT uses. Want to have something that opens your curtains or flashes some RGB lights in your hallway or whatever? Pick yourself up an Arduino / ESP32 with built-in WiFi, often an Ethernet port, tons of GPIO. It consumes milliwatts of power, boots in under a second, and is cheap enough to be disposable.


This combined with the main article shows the exact reason why Raspberry Pi is as popular as it is. When I'm starting a new project I can either worry about Arduino or ESP32 or other microcontrollers or various NUCs or getting a board fabricated from scratch...or I can grab a RPi from the drawer and know that it will just work, regardless of whether I need to toggle some RGB lights, browse the web, or run a Kubernetes cluster.

There is nothing else out there that hits the sweet spot in between price, power consumption, processing speed, extensibility, software compatibility, out-of-box experience and lots more.


I guess that's true for a certain wealth target. If you consider the Pi as the sweet spot for price, have at it.

But https://pine64.com/product/pinecone-bl602-evaluation-board/ costs $4. It fits a different sweet spot for price, power consumption, etc. A drawer-full of 20 Pis could run me $2000. A drawer-full of Pinenuts would run me... well perhaps $2000 because I could fit 1000 of them in a drawer.

If I want to browse the web, PineTab probably beats Pi. And that's only considering a single vendor.

Not knocking Pi. If it works for you, that's fine with me.


That’s apples to oranges; a more comparable board would be the Pi Pico W, which is like $6, has better documentation, and is more likely to be available when you need one.

I have a bunch of pine devices, but if you think raspberry pi’s are difficult to come by, finding a pine device in stock over the last few years has been a challenge at least for me personally. Things have started to get much better though, it seems.


Not only are Pine hard to find stock of, but I've basically never heard of a well-built Pine64 product, and that's coming from someone with three of them. They're all buggy, flimsy, doorstops at this point. Whereas my biggest complaints about Pis are that (1) I broke a plastic case for one of mine while it was in a moving box, oops, (2) SD cards are a pain in the butt and fragile.


The fingernail-sized, “user-unfriendly” SD card can be replaced with USB memory as the boot device.


A 4GB Raspberry Pi 4 costs... $169 USD currently on Amazon (https://www.amazon.com/Raspberry-Model-2019-Quad-Bluetooth/d...).

I can find more powerful laptops for cheaper. I can get a Dell R720 which has drastically more computing power (32x RAM, much more powerful CPU) for twice the price.

RPI used to be economical. If it still cost $50 USD, I'd grab them in a heartbeat to do these types of projects, but as it is, on price, they are almost Apple levels of overpriced (if not more).


The RPi starts at $35. The prices you’re seeing on Amazon are scalpers taking advantage of the chip shortage. When Adafruit, for example, has them in stock, they’re at MSRP.


If you can't order a dozen any day of the week for the list price, they aren't really available at the list price.

As far as I can tell, the only Pi that you can reliably buy in quantity 1 at list price is the Pico.


I mean, isn't the truth somewhere in between? You can probably get one a lot cheaper than $150 or whatever it's selling for on Amazon if you don't need it today. At the same time, you're right that it means something that you can't just go out to the store and buy one at anything remotely close to MSRP.


Pi Zero 2 costs $15. Pico series starts from $4. There are plenty of cheap options if you don't need the extra power.


It is the software compatibility. Things working out of the box, without hassle. Almost nothing beats that. Almost everything else is secondary.


I find the lifetime effort of an AVR/ESP8266/ESP32 to be lower than a Pi.

I’ve had AVRs (of mine) running in the house for 9+ years without being touched (and ESPs for over 5).

Things do just work out of the box on them (often more easily than installing Raspbian, getting it onto wifi, setting up ssh, looking up the commands to set a GPIO, figuring out cron, rc.d, etc.)

It’s way faster IME to just use the Arduino digitalWrite() functionality in setup and loop. (I’m a long-time C/C++ programmer, which helps a bit.) The exploitable footprint is way lower, so you pretty much never need to do a security patch. If the power fails, you’ll never* end up with a bad volume; it just boots back from flash and resumes working.
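
For reference, the whole program for something like a relay on a timer really is about this small. A rough sketch (the pin number is just a placeholder, adjust for your wiring):

    // Toggle a relay once a second; pin 7 is only an example.
    const int RELAY_PIN = 7;

    void setup() {
      pinMode(RELAY_PIN, OUTPUT);    // configure the pin once at boot
    }

    void loop() {
      digitalWrite(RELAY_PIN, HIGH);
      delay(1000);                   // a blocking delay is fine for a trivial task
      digitalWrite(RELAY_PIN, LOW);
      delay(1000);
    }

No OS, no SD card, nothing to patch.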


There's a lot of stuff you can do to Linux on the Pi to make it have some of the nice qualities you're describing.

I've set up my Buildroot project to copy "authorized_keys" and "wpa_supplicant.conf" from the FAT32-formatted boot partition to their normal locations. So I flash an SD card, drag and drop the files onto the SD card, plug it into the Pi, and SSH right in.

With regards to filesystem corruption on power failure, you can mount the root filesystem as read-only. If you need to write to files you could mount a tmpfs volatile filesystem.


You make a lot of sense except for the part about price. You're right that MSRP represents a great value for the advantages, but the actual cost is way out of whack for what you get with a Pi, simply because they've been in short supply and thus mediated by 'scalpers' for 3+ years now.

I can see how for some people it is still worth it, because, say, an HP thin client decidedly isn't a good substitute if you want to fit it in an outlet box and run it off 5V USB power. For anything where you're not using the GPIO/"hats"/whatever though, unless size is a big concern, I would use a small older computer over a Pi.

If I am doing a project with a 'hardware' component tomorrow though, I agree with you, I'd (grudgingly) overpay for a Pi rather than those other things, because, of all platforms with interface GPIO pins that you can use to do cool stuff, the Pi is the one most likely to have "support" out there -- meaning either someone else has already made a tool to do some of the things I want, or someone else has run into the problems I'll run into and prompted a discussion about how to fix it.


I think it depends on the type of project. I found the out-of-box experience to be super easy with Arduino. You install the IDE, plug in the board via USB, and can start writing code right away.

As a full stack web developer, I am always finding myself getting bogged down in activities to support the work of development (managing dependencies, deployments, builds, etc). I don't enjoy that part of being a developer.

The experience of being able to write a few lines of C code and have stuff happen right away is very pleasant and a breath of fresh air. Unless it's necessary, I would rather not complicate things by having to deal with an OS and everything that entails.

You can get an ESP32 (Seeed Studio XIAO) Arduino-compatible board for $5 from Digikey. Incredibly cheap for hobby-scale projects!


Shhhh stop spilling our secret to the software people... The semiconductor shortage is bad enough already...

That said though, programming embedded devices like Arduino and ESP32 needs a completely different style of thinking than high-level languages or web stuff. Things like MicroPython reduce the friction just a bit to make it easy enough to get on board.


The thing that tripped me up the most was concurrent tasks. In languages like Go or JavaScript it’s pretty easy to figure out how to decrement a timer or wait until x time to perform a task without having the waiting period be blocking. On an arduino, you have a few options but none of them are likely to be familiar to high level software devs.
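
The usual bare-Arduino workaround is the millis() pattern: poll a timestamp every pass through loop() instead of blocking. A rough sketch (the interval and LED pin are arbitrary):

    // Run a task every 5 seconds without blocking the rest of loop().
    const unsigned long INTERVAL_MS = 5000;
    unsigned long lastRun = 0;
    bool ledOn = false;

    void setup() {
      pinMode(LED_BUILTIN, OUTPUT);
    }

    void loop() {
      unsigned long now = millis();
      if (now - lastRun >= INTERVAL_MS) {   // unsigned subtraction survives rollover
        lastRun = now;
        ledOn = !ledOn;
        digitalWrite(LED_BUILTIN, ledOn ? HIGH : LOW);
      }
      // other non-blocking work can go here and stays responsive
    }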

It isn’t rocket science, but there are loads of little details like that which will make you pause and then write loads of bugs and awful software before you finally figure it out. Meanwhile, accomplishing the same thing with a high level language might be trivial.

It can be discouraging but I’ve come to love it. You learn a lot, and having a physical board doing a tangible thing with actuators and sensors can be really gratifying.


RTOSes are developed to solve this exact problem of concurrency. Try Zephyr, ThreadX (from MS) or FreeRTOS (from Amazon) or whichever OS your microcontroller vendor supports. It will be eye opening, and you get to learn to write safe code in C with synchronization primitives just like our ancestors.


Amazon did not create FreeRTOS; they simply took over maintenance a few years back. I didn’t know that happened until I fact-checked your comment, but it is enough to make me never want to use it ever again. And that makes me very sad.


They improved the licensing to MIT. The pre-acquisition license was "GPL" but it had a noncompliant restriction prohibiting comparative benchmarks. There have also been useful improvements to the codebase. Amazon is a net positive in this case.


FreeRTOS is now MIT licensed, and the primary developer is actually getting paid to work on FreeRTOS.

This can only improve FreeRTOS and the embedded ecosystem in general.

And, if Amazon becomes a problem, people can fork and bail.


An ESP32 running the Arduino framework comes with FreeRTOS out of the box.
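
And you can use it straight from a sketch. A rough example assuming the Arduino-ESP32 core (pin numbers, stack sizes and priorities are arbitrary):

    // Two FreeRTOS tasks running alongside the normal Arduino loop() on an ESP32.
    #include <Arduino.h>

    const int LED_PIN = 2;   // example GPIO

    void blinkTask(void *param) {
      pinMode(LED_PIN, OUTPUT);
      for (;;) {
        digitalWrite(LED_PIN, HIGH);
        vTaskDelay(pdMS_TO_TICKS(500));   // yields the CPU to other tasks
        digitalWrite(LED_PIN, LOW);
        vTaskDelay(pdMS_TO_TICKS(500));
      }
    }

    void printTask(void *param) {
      for (;;) {
        Serial.println("still alive");
        vTaskDelay(pdMS_TO_TICKS(1000));
      }
    }

    void setup() {
      Serial.begin(115200);
      // xTaskCreate(function, name, stack bytes, argument, priority, handle)
      xTaskCreate(blinkTask, "blink", 2048, nullptr, 1, nullptr);
      xTaskCreate(printTask, "print", 2048, nullptr, 1, nullptr);
    }

    void loop() {
      vTaskDelay(pdMS_TO_TICKS(1000));    // loop() itself runs as just another task
    }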


>just like our ancestors.

why am I laughing at how much this stings? we're not that old, damnit!


I saw a tee shirt the other day that said "It sucks to be the same age as old people". We are that old. :-)


Ha, I'd never seen FreeRTOS – thanks! It looks awesome.


Many microcontrollers have timers and counters or even state machines (the PIOs on the RP2040 are a nice example of that) for this reason. They allow you to handle those cases outside the main computing unit. The MSP430s are also full of these magic things. They are harder to get introduced to than just writing Python, but you don't realize how powerful those things are until you step away from how you were handling things on a multi-GHz computer with a ton of RAM.
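
As a small flavour of that on plain AVR: a hardware timer can keep a 1 kHz tick going no matter what the main loop is doing. A rough sketch assuming an ATmega328P-class board at 16 MHz:

    // Timer1 in CTC mode fires an interrupt every millisecond, independent of loop().
    #include <Arduino.h>

    volatile uint32_t ticks = 0;

    ISR(TIMER1_COMPA_vect) {
      ticks++;                                   // maintained by the timer peripheral
    }

    void setup() {
      noInterrupts();
      TCCR1A = 0;
      TCCR1B = (1 << WGM12) | (1 << CS11) | (1 << CS10);  // CTC mode, prescaler 64
      OCR1A  = 249;                              // 16 MHz / 64 / 250 = 1 kHz
      TIMSK1 = (1 << OCIE1A);                    // enable the compare-match interrupt
      interrupts();
    }

    void loop() {
      noInterrupts();
      uint32_t t = ticks;                        // 32-bit reads aren't atomic on AVR
      interrupts();
      // ...do slow or blocking work here; the tick count keeps advancing regardless
      (void)t;
    }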


And these peripherals can run at hundreds of MHz doing real work in every cycle. I can probably do very low-latency audio processing with an interrupt firing at the sample rate. Delay is a few samples instead of at least a couple of ms like on a PC with an audio interface.


You probably could, but I’d recommend getting a chip with a dedicated I2S peripheral and using FreeRTOS with a dedicated high-priority audio task whose sole job is processing audio. You really don’t want to miss an audio sample. It’s very audible, and depending on what you’re doing it makes audio processing essentially a hard real-time constraint.


FreeRTOS has a massive effect on throughput. If your audio processing is predictable and your interrupts are too, there is no reason to need that overhead.


Yes, working on an Arm M7 at 480 MHz I get 1 to 3 samples of latency on most things I'm doing (mostly synthesis). And that's on a single core, even.


I recently used Rust (RTIC) for that and it was a relatively pleasant experience. It has "tasks" (that cleverly use unused interrupt handlers for context switching, so it's nice and efficient) that can be triggered by various things, not unlike threads, and they have priorities. As long as you busy-waited only in the main, lowest-priority one, it "just worked".


Not having high-level languages available is a big part of the fun of programming microcontrollers.


I completely agree. It brings you back to more fundamental aspects of problem solving that sometimes you miss with higher level programming. There are a lot of things I can do on a modern web server with a language like Go that are pretty cool, but not particularly interesting because it's kind of trivial. There are so many resources available to the program, things to fall back on for resilience, plenty of common problems are ironed out into popular libraries, etc.

Trying to automate something truly reliably and consistently with a microcontroller, on the other hand, can be simultaneously soul crushing and exciting — there are so many edge cases and challenging problems.

Sometimes I'll spend hours trying to figure out how to interface with a single sensor, and while it isn't important or impressive in the scheme of things, I really enjoy it.


> Sometimes I'll spend hours trying to figure out how to interface with a single sensor, and while it isn't important or impressive in the scheme of things, I really enjoy it.

I found a fake sensor like that once... I was wondering why my code, written from the datasheet, didn't work on the cheapo breakout board I bought off AliExpress.

Then I read the ID register and found they had used an older chip that had some of the registers set up differently...


Same, I bought a couple of waterproof temperature probes under the name of a reputable manufacturer and wound up getting what turned out to be notorious knockoffs. They were way harder to set up, and the actual sensors only agreed with each other to within several degrees C, haha. I'm a lot more careful about where I order components from now.


You can always run Espruino on your ESP32 -- a Node.js-like environment for microcontrollers. It works well and has lots of drivers for different components.


While we're on the general topic, let me share this Gist: https://gist.github.com/phkahler/1ddddb79fc57072c4269fdd6716...

Fade a GPIO LED on/off cyclically in 6 lines of code. It should read "every millisecond or faster", but OK.
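
Not the linked gist, but the same idea really does fit in a handful of lines on anything with PWM (pin 9 is just an example PWM pin):

    // Fade an LED up and down forever using hardware PWM.
    const int LED_PIN = 9;

    void setup() { pinMode(LED_PIN, OUTPUT); }

    void loop() {
      unsigned long t = millis() % 512;              // 0..511, repeating
      analogWrite(LED_PIN, t < 256 ? t : 511 - t);   // triangle wave brightness
    }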


Using Rust and esp-idf you can simply call the standard library threading facilities if you want to do concurrency, and it's all handled in the background by FreeRTOS for you.


Realistically you will need to use an RTOS for threads and scheduling on a micro.


This is so true! I just programmed an Arduino for a DIY attic fan controller. Was it hard? Not really. Was it a bigger pain in the butt than programming for Home Assistant? Oh gosh yes. And quite limited in comparison. It really made me appreciate Z-Wave plus Python plus Home Assistant plus AppDaemon. High-level coding - use it if you can!


I used ESPHome for something similar. It uses a YAML file to generate C++ code for you, and it works like a charm (in 99% of cases). It integrates very easily with Home Assistant and has support for the usual home automation peripherals.

https://esphome.io


Honestly, working with all the restrictions of an embedded system like this is something that all programmers should experience at some point. Being forced to actually consider how you're using resources teaches you a lot about how to write efficient code. Or at least it gives you a better understanding of what the machine is actually doing with your code.


Not all embedded systems are that limited. Check the specs of some of the latest flagship phones.


You chose to ignore the "like this" your parent mentioned ;)

And I agree. All programmers should at some point work on a super slow, super limited AVR or something like that. Something where you have to make hard decisions because you don't have enough RAM, CPU power, storage, I/O pins, etc.

Even non flagship phones are more powerful than the boxes running Windows Embedded 20 years ago.


I once ran a project on an Attiny. That bad boy has 1KiB of flash and 128 Bytes of RAM at an unthinkable 4/8MHz.

I can't begin to tell you how incredibly annoying it was, but it taught me a lot. I know a lot of people who would be completely dumbfounded at 128B of memory. That's only thirty-two int32s!


Anyone comfortable with using high level languages on an Amstrad PC1512 will do just fine.

Many still don't grasp how powerful ESP32 actually happens to be.


To get on-motherboard, one might even say.


And most importantly, it doesn't have an OS that provides next to no benefit for many of those tasks, and suffers from inconsistent timing.

Something like an ESP32 is much more reliable at controlling hardware. It will never fail to react to a triggered limit switch in time because memory ran out for some reason and the system started swapping.


Ah, that kind of timing. I have made several kinds of clocks with RPis, one that ran fine for years driving a little servo to strike an hourly chime like a grandfather clock, which was all very easy thanks to having an OS with NTP and cron.

But I mostly agree with the article, I have a Gigabyte Brix fanless NUC-alike as my real home server, and a couple of Pis doing little things (and switched to 'overlay file system' so running only from memory and not writing to those frail SD cards).


> very easy thanks to having an OS with NTP and cron.

An ESP32 can still have both of those things in some capacity.

https://randomnerdtutorials.com/esp32-ntp-client-date-time-a...

https://github.com/DavidMora/esp_cron
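
The NTP part is a few lines with the Arduino-ESP32 core; a rough sketch (SSID, password and timezone offsets are placeholders):

    // Set the ESP32's clock from NTP once Wi-Fi is up, then print the time.
    #include <WiFi.h>
    #include <time.h>

    void setup() {
      Serial.begin(115200);
      WiFi.begin("your-ssid", "your-password");   // placeholders
      while (WiFi.status() != WL_CONNECTED) delay(250);

      configTime(0, 0, "pool.ntp.org");           // UTC, no DST offset
      struct tm timeinfo;
      if (getLocalTime(&timeinfo)) {              // waits briefly for the first sync
        char buf[32];
        strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S", &timeinfo);
        Serial.println(buf);
      }
    }

    void loop() {
      delay(60000);   // SNTP keeps the clock in sync in the background
    }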


The point the GP made was that timing issues weren't a real problem on the rpi thanks to the tools mentioned.


There are different kinds of timing issues.

There's knowing the time, which you can do with something like NTP. That the RPI can manage just fine.

And there's acting with precise timing, eg, if you need to control a mechanism and reliably react on a deadline of a few ms. A RPI doesn't perform well there, which is why 3D printers use microcontrollers instead.


Bare metal C++ for PI. https://github.com/rsta2/circle

Access to most of the hardware and real-time deterministic behavior. It’s a really great project and lets you twiddle those gpio pins at ridiculous speeds with perfect timing (less than a millisecond).

A PI comes with a whole bunch of great hardware baked in, so if you have one laying around, and want to do some microcontroller stuff, I think it’s a great choice.


It's still an A-profile MPU and not an R- or M-profile MCU, and while it will be fast, it will have less deterministic behaviour than we might like. If you disable the caches and MMU you'll get better consistency. But wouldn't we expect ~microsecond accuracy from a properly configured MCU? ~Millisecond accuracy is not a particularly high bar.


You can read pins (well one) with sub-microsecond latency using the Fast Interrupt Request, but I have not tried this myself. I think a PI would be more than capable of matching most microcontrollers just due to its very fast clock speed. Add multiple cores with the PI4 and you get a crazy amount of compute between each pulse as well.

There are a bunch of clocks that run plenty fast to enable high resolution timing as well.


The high clock speed and multiple cores are great. It's definitely a beefy system. But this is completely orthogonal to timing accuracy and consistency. Speed does not make it more consistent. Tiny low power MCUs have much more accurate and consistent timing.

Low latency can be a good thing, but it's also not related to consistency, particularly when you start looking at what the worst-case scenario can be.


So I am the opposite of an expert here, but I don’t follow. If I have control over the interrupts (which I do) and I have high precision timers (which I have), why can I not drive a gpio pin high for X microseconds accurately? What’s going to stuff it up?


As I mentioned in the previous reply, the CPU caches and the MMU to begin with. You're probably running your application from SDRAM and an SD card. The caches and page tables result in nondeterminism, because the timing depends upon existing cache state, and how long it takes to do a lookup in the page tables. And as soon as you have multiple cores, the cache coherency requirements can cause further subtle effects. This is why MCUs run from SRAM and internal FLASH with XIP, and use an MPU. It can give you cycle-accurate determinism.

The A-profile cores are for throughput and speed, not for accurate or consistent timing. However, you can disable both the cache and the MMU if you want to, which will get you much closer to the behaviour of a typical M-profile core, modulo the use of SRAM and XIP. If you're running bare metal with your own interrupt handlers, you should get good results, excepting the above caveats, but I don't think you'll be able to get results as accurate and consistent as you would be able to achieve with an MCU. But I would love to be proven wrong.

While most of my bare metal experience has been with ST and Nordic parts, I've recently started playing around with a Zynq 7000 FPGA which contains two A9 A-profile cores and GIC. It's a bit more specialised than the MPU since you need to define the AXI buses and peripherals in the FPGA fabric yourself, but it has the same type of interrupt controller and MMU. It will be interesting to profile it and see how comparable it is to the RPi in practice.


This is something that could only really be proven by actually testing and I don’t have a fast enough scope to really prove things.

Having said that, I think some of the concerns have fairly simple mitigations. Because of the high clock speed, I can’t see that disabling cache and MMU is required. The maximum “stall times” from either of these components should still fall well below what would be needed. It’s bounded non determinism. That’s completely different to running things under Linux.

Secondly, having multiple cores allows for offloading non-deterministic operations. The primary core can be used for real-time, while still allowing non-deterministic operations on others. The only thing to consider is maximum possible time for synchronization (for which there are some helpful tools).

As I said, I’m far from an expert. It was close to 20 years ago when I last did embedded development for a job, and I was a junior back then anyway. Still, I’d be interested to know if you think I’m way off beam.


I think you're pretty much correct. Whether these details matter is entirely application-specific, but you can go the extra mile if your application requirements demand it.

There are certainly multi-core MPUs and MCUs with a mixture of cores. The i.MX series from NXP have multi-core A7s with an M4 core for realtime use. Some of the ST H7 MCUs have dual M7 and M4 cores for partitioning tasks. There are plenty of others as well, these are the ones I've used in the past and present.


A few ms? In my experience that seems well within the capabilities of Linux. I guess last time I measured wasn't on a Raspberry Pi. I'm kinda tempted to take a shot at profiling this and writing up a blog post since it seems like a useful topic, although it will probably be a few months until I can get around to it.


I think milliseconds overestimates by a few orders of magnitude, but non-real-time OSs really suffer for the intermediate IO stuff you expect in an embedded project (e.g. SPI, I2C, etc)

A long time ago, I was playing with Project Nerves on an Orange Pi running some flavor of debian. I was doing some I2C transaction (at 400 kHz, each bit is single-digit microseconds), and I ultimately had to have a re-attempt loop because the transaction would fail so often. I found a failure cutoff of 5 attempts was sufficient to keep going. I don't recall the failure rate, but basically, whenever a transaction failed, I'd have to reattempt 2-3 times before it eventually succeeded.

Meanwhile, on a bog-standard Arduino with an ATMega328P, I send the I2C traffic once, and unless the circuit is physically damaged, the transaction will succeed.
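
For context, the whole transaction on the ATmega side is just the stock Wire calls. A rough sketch (the device address and register are made up, substitute your sensor's):

    // Read a 16-bit register over I2C with the Arduino Wire library (hardware TWI).
    #include <Wire.h>

    const uint8_t DEV_ADDR = 0x48;   // example 7-bit device address
    const uint8_t REG      = 0x00;   // example register

    void setup() {
      Serial.begin(9600);
      Wire.begin();
    }

    void loop() {
      Wire.beginTransmission(DEV_ADDR);
      Wire.write(REG);
      Wire.endTransmission(false);               // repeated start, keep the bus
      Wire.requestFrom(DEV_ADDR, (uint8_t)2);    // clock in two bytes
      if (Wire.available() >= 2) {
        uint8_t hi = Wire.read();
        uint8_t lo = Wire.read();
        Serial.println((int16_t)((hi << 8) | lo));
      }
      delay(1000);
    }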


No, the consistency of the timing is terrible on Linux.

Seriously, stick a scope or logic analyser on e.g. an I2C line and look at the timing consistency. Even on specialised kernels for realtime use, you can have variable timing delays between each transaction on the bus. And this is all in-kernel stuff that's inconsistent--it looks like it's getting pre-empted during a single I2C_RDWR transaction between receipt of one response and sending of the next message. The actual transmission timing under control of the hardware peripheral is really tight, but the inter-transmission delays are all over the place. Compare it with an MCU where the timing is consistent and accurate, and it's night and day.


The parent comment says

> control a mechanism and reliably react on a deadline of a few ms

I actually did measure this with an oscilloscope on embedded Linux (not a raspberry pi). A PPS signal was fed into Linux, and in response to the interrupt Linux sent a tune command to a radio. Tuning the radio itself had some unknown latency.

End-to-end, including the unknown latency of tuning the radio, I never observed a latency that would even round to 1 ms. That's unpatched and untuned Linux, no PREEMPT_RT. I didn't dig any further because it met our definition of "reliable" and was well, well within our timing budget.

I'll be the first to admit it wasn't some kind of rigorous test, just a casual characterization. I would not suggest anyone use Linux for a pacemaker, airplane flight controller, etc.

This is making me itch to buy an oscilloscope and run some more thorough tests. I'd like to see how PREEMPT_RT, loading, etc changes things.


My profiling was on an NXP i.MX8 MPU, which is an A-profile quad-core SoC very similar to an RPi. I think it was with a PREEMPT_RT kernel, but I can't guarantee that; I was fairly shocked at the lack of consistency in I2C timing when doing fairly trivial tasks (e.g. a readout of an EEPROM in a single I2C_RDWR request). You wouldn't see this when doing the equivalent on an M-profile MCU with a bare metal application or an RTOS.

What is acceptable does of course depend upon the requirements of your application, and for many applications Linux is perfectly acceptable. However, for stricter requirements Linux can be a completely inappropriate choice, as can A-profile cores. They are not designed or intended for this type of use.

Profiling this stuff is a really interesting challenge, particularly statistical analysis of all of the collected data to compare different systems or scenarios. I've seen some really interesting behaviours on Linux when it comes to the worst-case timings, and they can occasionally be shockingly bad.


I was referring to that yes, even if Linux performs well in the ideal case, it's not necessarily reliable, and the possible problems are hard to compensate for.

Eg, your process can randomly get stuck because something in the background is checking for updates and IO is being much slower than usual, or the system ran out of RAM and everything got bogged down by swap.

On a microcontroller you just don't have anything else running, so those risks don't exist. Eg, a 3D printer controls a MOSFET to enable/disable the heaters. The system can overheat and actually catch on fire if something makes the software get bogged down badly enough. On a Linux system there's a whole bunch of stuff that can go wrong, most of which is completely outside the software you actually wanted to run.


I guess I feel like things are a bit tangled up here.

Sure, a single purpose MCU controlling a heater MOSFET has a lot fewer failure modes than a Linux device doing the same.

I don't dispute there are a lot fewer ways it's even possible for that system to misbehave.

The original comment was recommending ESP32s over Raspberry Pis for DIY projects like opening your curtains or flashing LEDs. The ESP IDF runs on FreeRTOS, so we're already moving away from the bulletproof single task MCU. People will almost certainly be adding some custom rolled HTTP webserver on top. They might be leaking memory all over the place, there are probably all kinds of interrupts they have no idea about firing off in the background. I wouldn't trust an ESP32 curtain-bot not to strangle me any more than I'd trust a Raspberry Pi based one.

Your example about running out of RAM seems just as relevant to MCUs. You can leak memory and crash an MCU. You can overload an MCU with tasks and degrade performance. You can use cgroups or ulimit to help prevent a bad process from bringing Linux down.

I agree that Linux is not going to be as reliable as going baremetal, and I'm not recommending you use it as a motor controller. But even the most reliable MCU can fail. An MCU can get hit by cosmic rays or ESD. People might spill water on the 3d printer or physically damage it. It's not even a binary "works right or dies" thing. I've voltage glitched MCUs to get them to skip instructions and get into an unanticipated state.

In any case, the best path to safety is to imagine that the computer might be taken over by Skynet and do everything in its power to kill you. Or worse, ruin your print. If safety is the goal, it's probably best achieved by requiring the computer system to take some positive action to keep the heater on. Or even better, a feedback safety mechanism like a thermal fuse.


Being within the capabilities of something and guaranteeing that it will never exceed that are two different things. At least in the past real time guarantees for Linux came as part of an optional patch set for the kernel since guaranteeing that an algorithm would complete within a set time frame or that things like priority inversion issues would be handled correctly came with a performance cost.


I think we are talking about things like interrupt latency, not NTP synchronization.

MCU interrupt latency can be extremely deterministic. I ran some measurements for work and found Linux to be adequate for many uses, but it is a valid concern. There are some Linux kernel patches like PREEMPT_RT that attempt to bound Linux latencies, but generally MCUs are a lot better suited if latency is critical. In part because they just have less software running on them to interfere with timing.


That's not the kind of timing the original point was talking about AFAICT. Real time response is the issue with regular Linux, not vaguely accurate wall time.


I can't really disagree with what you're saying about Pis often being overkill, but I've been using Raspberry Pi Zero Ws for more projects where I might have used ESP32s, and I've been very happy with the choice. Basically any project I have that isn't battery powered or timing critical, I'd prefer to use a Pi.

Zero Ws have a $10 MSRP (of course, huge shortage at the moment). I think they're pretty cost competitive for DIY IoT.

Buildroot makes it really easy to create a custom Linux OS with your software preinstalled, any kind of custom kernel tweaks you want, and an impressive amount of software packages available. If you strip unneeded functionality from your kernel you can boot a lot faster too.

Here's a list of some stuff I like about a Pi Zero W vs an ESP32

* Ease of programming. Flash an SD card and swap it out, without having to hook the device up to a programmer.

* Extremely solid TCP/IP stack.

* Multitasking with real process isolation.

* Program organization (related to above). I find the OS abstraction very useful for enforcing cleaner designs.

* Access to Linux software packages. I can easily add nginx, apache, or lighttpd to my rootfs. It doesn't involve mangling any of my other software packages

* Interactive access. I can debug the applications by sshing into the Pi and looking at logs. I can scp new files onto the Pi.


"huge shortage at the moment"

AFAIK this "at the moment" period has now extended all the way from the time they were introduced to the present day. One long moment for sure.


My recollection is shit really hit the fan after the pandemic? It looks like the Zero W was released in 2017. I remember up until the last couple years, you could at least get one per order from Adafruit/Sparkfun at MSRP. I think there may have even been a time when you could get like, 10 per order or something?


Strongly agree here. Local SD storage IO aside, a Pi 4B 8GB is basically on par with a high end desktop I had in the mid-2000s.

That’s an insane amount of compute on something that can be sub-2W over POE, or 1.3W on WiFi.

Though for all the homelabbers out there, you probably have a NAS. Use Log2RAM and present some iSCSI volumes to the Pi, and you'd be staggered by the very real work a Pi can do when not saddled with the SD card - without having to directly attach storage.

- -

I’d argue that for “IoT” stuff even Arduinos are overkill in terms of computing power. Granted, I realize this is the case functionally because of BOM optimization and making it forgiving (more power than needed) for beginners.

- -

Granted, going back to the original article: Yes, 1L form factors are nuts. Mac Minis are insane (but not cheap), and if you look at 35W Zen 3 Ryzen 7 PROs you can get similarly insane power cheaper if you need x86. But all the much older former office 1Ls are everywhere and offer ludicrous performance to hobbyists for pennies on the dollar, with (as mentioned by the article) sub-10W idle.

- -

Honestly, it’s just a great time to be a tinkerer. We’re drowning in ubiquitous, cheap compute.


Distro/stack recommendations for a Pi? Intrigued by your usage with Log2RAM and ISCSI volumes!


Personally I've standardized on "DietPi" which is functionally a super-stripped down Debian.


I don't see it mentioned in the thread, so I'll link to ESPHome. It is an incredible platform when you just want to read some sensors or put some relays or switches onto the network, and it integrates with Home Assistant. One of the most enjoyable firmware installation procedures I've ever encountered.

https://esphome.io

Most of the use cases I've seen have them report to Home Assistant or something similar, but I think some people use them directly without a host.


Well, it is right-sized to run a hub controlling all those little devices.

But yeah, especially now with various ESP32 firmwares like ESPHome, you can essentially just write a YAML file specifying where to listen and what bit to flip, and get a simple switch/controller with zero actual coding.

There is even custom firmware to make off-the-shelf IoT devices work with "open" standards.


I've been playing with home automation and I agree. This guy is creating a mini AWS at his place while I am using a small Pi to run my Lua server[1] that takes less than 2MB of RAM and can run even on a microcontroller.... if anything the Pi is totally overkill already, but it's nice to be on GNU/Linux and have access to so many utilities, even a browser.

[1] https://realtimelogic.com/ba/doc/


> Want to have something that opens your curtains or flashes some RGB lights in your hallway or whatever? Pick yourself up an Arduino / ESP32

The reason I'd go for an ESP32 for this use case isn't because RPis are overkill, but rather because it's much easier to write, compile, flash, and run code on bare metal on an ESP32 that has access to Bluetooth and Wi-Fi but isn't vulnerable to file system corruption. You can do this on an RPi, it's just much harder.


ESP32-C3 is cool if you want to play with RISC-V for cheap.


Even in the Raspberry Pi ecosystem you can easily pick up a Pico, which offers all of that.


I prefer the Pico to the ESP32. I'm very impressed with the quality and organization of the Pico SDK and documentation.

Although I have heard the Pico is not very competitive in terms of power optimization. I haven't done many battery powered projects to encounter these issues though.


Even easier, in a way... a Parallax Propeller and program in SPIN (their proprietary structured BASIC-like language), or C++ etc.

Exotic, but simpler mental model for most people. 8 independent cores and a simple non-interrupt driven programming model for doing basic timing and I/O with any of the 32 pins configurable for either input or output.

And super easy to interface both 3v and 5v, available in DIP, and in various easy format boards.


Yes. I followed a tutorial for building a doorbell that uses an ESP32 to send a notification to our phones when the button is pushed. I haven’t touched it since installation and I’m always a bit amazed it still works. An RPi would’ve been overkill.
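
In case anyone wants to try something similar, the core of that kind of doorbell is roughly this; the Wi-Fi credentials and notification URL are placeholders for whatever webhook/push service you use:

    // ESP32 doorbell: on button press, hit an HTTP endpoint that pushes to your phone.
    #include <WiFi.h>
    #include <HTTPClient.h>

    const int BUTTON_PIN = 4;                        // example GPIO, button to GND

    void setup() {
      pinMode(BUTTON_PIN, INPUT_PULLUP);
      WiFi.begin("your-ssid", "your-password");      // placeholders
      while (WiFi.status() != WL_CONNECTED) delay(250);
    }

    void loop() {
      if (digitalRead(BUTTON_PIN) == LOW) {          // pressed
        HTTPClient http;
        http.begin("http://example.local/doorbell"); // placeholder notification endpoint
        http.addHeader("Content-Type", "text/plain");
        http.POST("Someone is at the door");
        http.end();
        delay(5000);                                 // crude debounce / rate limit
      }
    }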


I typically see two kinds of raspi hate, and only one of them is reasonable imo. The first is stuff like this article, where you can probably get something better than a raspi these days for about the same price (especially when you consider scalper pricing) or just run a docker image in a server you can make with an old computer. This is very good advice, especially now that getting a raspi at MSRP is borderline impossible.

The second is most of the comments here, declaring that novices should just learn embedded programming and use an Arduino for everything! If you're in this camp, you should consider _why_ people choose to use raspis for projects, and watch your ass because an EE is about to arrive to drag you for not just using a 555 timer in your projects.


> and watch your ass because an EE is about to arrive to drag you for not just using a 555 timer in your projects.

I know from an electrical engineer that with microcontrollers having become that cheap, it can often be cheaper to use a massively produced microcontroller instead of a 555 despite the former often being insanely overpowered for the task at hand.


Oh, cool! A summons!

Hey, nerds! Do some real hacking and program with solder and a 555 timer!

Regards, The EE who showed up to drag you all


I am not an EE, but I've done a missing-pulse detector both with a 555 and with an 8-bit micro. Both solutions were great. I sure as heck wouldn't use an RPi for it though.


> novices should just learn embedded programming

For people that think this is a significant barrier, I'd suggest to take a look at MicroPython (on something like a ESP32 or RP2040 -- there's no reason to bother with the classic 8-bit Arduinos anymore). It gives you an environment that's surprisingly similar to what you'd use on a Pi.


I've done embedded programming, but I just don't enjoy it. It's far too in the weeds for the type of simple tasks I want to do around my house. Every time I start with Arduino, I realize that I'm simply not having fun.

The Raspberry Pi lets me work at a higher level that I actually find enjoyable.


Totally respect this. Especially for "household" things.

There's an alternate timeline where CPUs are really hard to make, so every SOC 'wasted' on a curtain-opener or a glorified doorbell deprives someone who badly needs a general-purpose computer and can't afford one. But in our timeline, every day probably a million perfectly good "computers" (cell phones, desktops, smart speakers, etc) get tossed in e-waste anyway, so it's not a sin to use an overkill computer for a task. (Ironically of course the Pi itself is in a shortage but... :shrug:)


EE here: micros are cheaper than 555s and discrete lol. Not many bother with them when we can toss a micro in for pennies. I agree with the sentiment tho that this whole argument sounds like when programmers bash others for not using REAL programming languages like C! Python is overkill why don’t you write this in assembly!?


From a professional perspective, I have a general annoyance towards raspi or embedded linux more generally. I have been part of multiple projects where there was a push towards prototyping/initial release on Pis and SOMs when a simple MCU was all that was required. It enabled a metric ton of bloat that made debugging the system a nightmare and made it impossible to easily port the software to a simpler hardware setup. Software grows to fit the available hardware.


> I typically see two kinds of raspi hate, and only one of them is reasonable imo. The first is stuff like this article, where you can probably get something better than a raspi these days for about the same price (especially when you consider scalper pricing) or just run a docker image in a server you can make with an old computer. This is very good advice, especially now that getting a raspi at MSRP is borderline impossible.

This bit was always the head scratcher to me. I've never owned or worked with a pi, simply because I have an old SFF PC that is many times more powerful. I remember reading stuff here on HN about people building k8s clusters and similar things on raspis... why would you do this to yourself? Just virtualize on the junky old PC if you really want to have multiple "nodes" of some sort.

I get there are use cases where "put a small slow computer in a physical location" is what you actually want, but it feels like raspi gets jammed into a lot of uses which don't make much sense. Just because you can doesn't mean you should...


> why would you do this to yourself? Just virtualize on the junky old PC if you really want to have multiple "nodes" of some sort.

An RPi uses as much power as charging a phone. Your old junk PC probably uses an order of magnitude more, and most people don't need the extra CPU power. Why waste electricity? I bought all of mine at MSRP, and at that price it's likely covered by the savings on power quickly.

That said, I've also used a few "thin client" PCs to build a cluster when I got the space. As much as I've wanted it, I can't convince the better half to let me install a full server rack in our guest bedroom (or pay for electricity).

On top of all this, it's small, so it can fit anywhere. I ran a few hanging with zipties under the bed frame in my dorm room years ago. My first apartment was 400sqft - I threw out my "server" PC I built and used a few Pis. When not in use they fit in a shoebox or drawer, so it's easy to have extras.


I suppose it depends on your background and goals. Most non-techies I've worked with find it easier to learn Arduino than an entire Linux stack.

On the other-hand, if you have existing software that you want to move over to a small system, then a platform with a widely supported OS is going to be easier.


I think "hate" is a strong term but I know what you are trying to say. Articles like this suggest that too many people think the Pi is the only tool we have in our toolbox. Truth be told, we have a lot of tools and should use them appropriately.



> server you can make with an old computer

Power consumption inefficiencies often make this a very poor choice (for the same workload that can be done on a low-wattage RPi or similar).


I hate PI because its hardware is just junk. Cheap low end stuff designed for consumer electronics like TVs!

Devops folks like to use Intel NICs and ECC memory only. But somehow Pi junk gets a pass because it is "open source" or whatever.

Great, we will run this mission critical deployment from cheap SD CARD!!!!!


The funny thing about hardware and non-nerds/non-EEs/etc. is that most people don't care if the components are "junk". They care if they can make their project work on it. They care how much time it takes from beginning to end. They care how easy it is to get help when something doesn't make sense.

It's small, it's inexpensive, it gets the job done.


It does not get the job done.

Last problem I had with Pi: my keyboard did not work for some reason. Guess what, Pi has shitty USB 1.1 implementation (USB 2.0 is different stack), on top of that it does not give enough power to connected devices. With normal computers I had similar problems in 2003!!!

For serious use it is like Gentoo. You have to learn about its boot process, how it bootstraps from video memory... Or how USB-C power delivery is not really important (remember the Pi 4 initial batches?)...

It only makes sense if you are deploying embedded devices at mid scale (~100 devices). For hobby projects it is nonsense. You have to learn a completely new HW platform just to read sensor data...? Haha


Your arguments are almost identical to the ones greybeard embedded devs have against the Arduino. Yeah, it's expensive, uses an outdated micro (at least the AVR-based Arduinos), but it's effective because of its popularity. Basically a flywheel effect. Doesn't have to be good or optimal, just has to be flexible and have a big community.

I doubt anyone is using an off the shelf Pi with SD card for an actual safety critical deployment and expect to get it certified. There are options like the Revolution Pi which is half PLC and uses the Pi's Broadcom SOC for non-safety calculations. Some even support CODESYS.

I agree that most ARM embedded Linux SoCs can be absolute dumpster fires when it comes to peripheral documentation and poorly maintained device trees (looking at you, Texas Instruments!!!) but that's nothing new in embedded dev. Learning how each manufacturer/platform does hardware peripherals is half the battle.

So I agree that the pi isn't always the best device for an application. Cost and power savings on an ESP32, better processing on your old laptop-turned-server, and so on. But the Pi does have excellent documentation, and was lucky enough to gain enough traction to create an ecosystem that reduces friction to just get something running for beginners, which is literally its original design intention.


I once encountered a hydroponic nutrient dosing system that was, no shit, a RPi 3+ with a custom HAT for the electrochemistry and actuation. These were sold to businesses running container farms and the like.

At the end of the day, it seemed like the manufacturer had the (good) idea to automate the dosing, but thought that all the standard industrial automation tactics (PLCs, ladder logic, HMIs, etc) were somehow overkill for the application.

Which meant that the end users had to write all the software to make it work with a standard industrial automation system anyway. It was super annoying.


Every rpi (ever?) has supported the 100mA USB 1.1 output limit. All except one have additionally supported the 2.0 500mA limit, and most of those support 1000/1200mA with a boot config parameter at most, assuming your power supply is adequate.

I'm sorry you had issues, but these are just normal embedded systems problems. I totally get if that's not what you want to deal with, but you should really consider a different platform like a NUC.


Sure, running off the SD card sucks, but "flash the card on another computer, boot straight to your installed OS" is the nicest OS installation experience I've ever had.


And it gives users an easy way to set up an SSH key and wifi configuration as well


If anything you're doing can be described as mission critical, there was never a real argument for using a Raspberry Pi, unless there are no consequences to losing the mission, in which case what does mission critical really mean?


[flagged]


Eh, “web dev” hate is overplayed. Most devs are “higher level” devs, be that web, Java, .NET, whatever. The professional incentive isn’t there to become a lower level programmer for many. Doesn’t mean they’re dumb, they’re just heading where the work is.


> Doesn’t mean they’re dumb, they’re just heading where the work is.

And the money is much more lucrative in Web Dev than it is in embedded right now.


I disagree with the author here. The vast majority of rpi users aren’t trying to purchase the optimal hardware for their project, they’re trying to tinker around with something fun. Literally all of my rpi projects have short lifespans since I know the rpi is able to do so many things, so I use it in many different projects.

Rpis are almost never the optimal/cheapest/best option for a project, but they’re almost always the optimal computer to reuse in 10 different projects over and over!


I don't know if it started with pi-hole but one of these projects must have been the turning point for "load up an SD card and let your pi run for years at a time with nothing else". That's how a lot of people use them and especially with recent prices it's pretty wasteful. I have 2 pis and both run multiple of the programs that people usually run solo on a pi (Octoprint, MPD server and player, UPS controller, (Zig|ZW)2MQTT and so on)


With recent prices I agree it's a problem, but they were otherwise cheap and borderline disposable computers.

I have a Pi3 that's dedicated to Octoprint - just because I want it to have all the resources when printing and I have some heavy plugins (the Arc Welder, for one).

However, the Pi4 is running Home Assistant and several other things within HassIO, like Adguard, NodeRed, the UPS monitor, Tailscale, etc. It's a much more powerful Pi, with 8GB. I would replace it with a TinyMiniMicro if it died today. Although, given that it's currently sitting in a rack, I want to also give it control of the rack fans(heat dissipation is not a problem most of the time) and add some RGB lighting, which would be perfect for the Pi.

I'd argue that, given the current prices, if you don't need the GPIO pins, the author is correct.


I shorted a few RPis in my day. It would hurt to do that now given their price.

Even so, OP is about running server stuff rather than electronics or robotics hobbyist stuff. The Pi works as a little hass and automation server, but only barely.

I've been "mini pc" curious for a while, and TFA mentions a "TinyMiniMicro" resource that seems quite useful.


From the article:

"I migrated my Pi-hole from an actual RPi"

Can a RPI be a final solution? It sure can! Can an RPI be a bridge to something else? Yes and that is where it shines!

Having a RPI or two around gives you tools to experiment with. It can be a server, or it can act as an IOT device, or a USB HID device or...

Some of my RPI projects have been replaced with old laptops, or Beelink boxes, or ESP hardware. I think of RPI as a first step, that lets me play before committing to something (and spending).


Yeah I don't think it's an awful tool for experimenting or education. But I regularly see people expecting a Raspberry Pi fleet to be an actual server cluster, and you're just creating a bad time for yourself there.

I use a combination of Intel NUCs, Gigabyte Brix, and flashed Datto Altos (branded Zotac Z-Boxes) for various roles and they absolutely shine whether running Linux or Windows-based workloads.


My homelab, up until the beginning of last year, consisted completely of Raspberry Pi's - gen 1 through 4. Because they're so resource constrained, adding new services in my homelab meant adding more Pi's. Pi's themselves then became near-impossible to buy, which meant I couldn't expand anymore.

I bought a set of four 10-year-old HP rackmount servers instead - 56 cores, 88GB of RAM, and 12TB of storage total (note: I pay for the electricity solely with home solar, so the ~400w idle isn't a financial concern). They run Kubernetes for workload scheduling. All told, it cost about $500 to acquire all of the equipment.

Honestly, I haven't looked back to the Pi days. Provisioning a new service in my homelab used to have a lead time of two weeks (acquire a Pi, image it, install services, etc.), and now I can deploy something new in 15-30 minutes. I learned a ton through running systems on resource-constrained ARM32 devices like those, but it's ultimately not a great use for them.


> I pay for the electricity solely with home solar, so the ~400w idle isn't a financial concern.

Maybe not a financial concern, but probably an environmental one. Yearly that is 3.5 MWh that could be used to decarbonize the grid instead.


Probably not actually, but nice concern troll.


Arguably in the winter that 400W is also helping heat their house, and depending on what type of furnace they use, may be reducing some of their dependency on natural gas.


Unfortunately that is similarly offset by the extra need to cool in warmer weather. Though it's interesting to think about given you have to factor in different system efficiencies for different power and heat/cool types as well as climate. Figuring out where the balance actually lies is gonna be different for everybody.


Ah, but if their cooling is electric, and hence in their case, solar, and their heat is natural gas, this is still better. =)

That being said, it's entirely possible for someone who has a solar set up to also have switched to electric heat, it's just pretty uncommon in northern climates still.

Generally though, yeah, I prefer NUCs and the like over servers for home use for power and noise constraints.


I think Pis are good for fun and for tinkering cheaply with the ARM architecture. Server hardware will always be better once the workload is heavy enough.


I've been thinking about this for a year or so. I currently have 7 Pis (5 Pi0ws and 2 Pi3s) running various projects at home. I started collecting them on Pi day each year, and any time I want to try something new I just grab a new one and go. A few weeks ago I took one on a plane and installed Pi OS on a 0w using the plane's wifi, then SSHed to the pi over USB and played with it that way. At home they're hanging off various UPS and power outlets, I once forgot about one powered by my basement router and I didn't notice for two years. It was still happily serving an old version of my cookbook.

Yeah, there's things I want to move to more powerful hardware. But it's a step up: it's more expensive, more setup, a more persistent system (even if everything's in containers). I wouldn't steer people away from using Pis based on that tradeoff, they still have a high ceiling for what they can do and when you do bump into that ceiling it's much easier to justify spending 10-100x for the right hardware. It took me 7 years of tinkering to hit that ceiling with my projects, I don't regret staying on Pis for so long.


If you don't need GPIO, then you don't need a RPi. They are useful, but an old laptop can do most things a Pi can do, and putting them to use keeps them out of the landfill longer.


I'm a bigger fan of Dell desktop that companies have put out to pasture after a refresh cycle. They're very expandable, have plenty of horsepower, and the idle power draw isn't that bad. The better thermal characteristics compared to a laptop usually lead to very long life as well.

Related to a sibling comment, a decent UPS could power a Dell like that for a couple of hours if it's mostly idling.


That and latitude laptops.

An OptiPlex desktop is around 10-15W idle.

Laptops with screen off can hit around 5W (close to pi under load).

I would assume that what loads the Pi would be almost idle on these older Dell machines.


Yes, I've got a Dell T3610 under the desk here now. My only beef is that it has weird Dell proprietary hardware, so I can't easily fit quieter fans to the PSU, or reuse the case with an ATX mobo


Agreed. A laptop also comes with battery backup, screen, and input devices out of the gate. Granted, it consumes more power than RPi. But a laptop is definitely a solid choice, based on my experience with RPi resets and corrupted SD cards, etc.


> If you don't need GPIO, then you don't need a RPi.

You might not need a Pi. You can buy a hundred ESP8266 boards for the price of a modern Pi. The ESP8266 and ESP32 are also more forgiving with their GPIOs.


And it includes built-in battery backup.


Normally a good idea to remove the battery actually, they bulge over time and could present a fire hazard.


That assumes the battery still works. I have some old laptops, but few have usable batteries.


A laptop would consume significantly more power than an RPi.


However, you can buy a lot of electricity for the price of a scalped RPi these days.


This is true. But some people are not concerned just about the price.


What are you concerned about?


The feeling of waste when I'm running a 20 watt laptop in the corner 365/24/7.


20 watt is very little. Are you sure it's more of a waste than buying a brand new energy-efficient device?


Depends on the use case and your perspective. If you want to run a few dozen lines of code to do something like monitoring a sensor and calling an API endpoint -- a microcontroller can do that with milliwatts of power.


> A laptop would consume significantly more power than an RPi.

Not necessarily true. Even some of the TinyMiniMicros mentioned in the article have a single digit W consumption when idle (because they are basically laptop parts).

Now, when loaded this would be true. However, their higher processing power also means tasks get completed faster, so they don't stay at the higher power states for long.


Depends on the laptop, really. I've got enough laptops that use 1W at idle with the display off, while the Pi seldom idles because every process is work for it.


I hear this argument all the time. But unless you're doing something that's solar, battery, RTG or otherwise non-grid powered, then what does it really matter? At what scale is a person running so many old laptops that they need to switch to RPi's to save power?

And if that person can afford the Pi's, they can afford the power, so there's no point quoting power costs.


I can also afford to let my faucet run 24/7. Should I do it to avoid needing to turn the knob when I need to wash my hands?


Here in Ireland, water (to domestic properties) is free - as I embarrassingly discovered when phoning them up to set up an account :)

In theory, I could leave all the taps on 24/7, however that’s A) abuse of a free service, and B) a complete waste of resources.

Even cleaning the car I’m very strict about turning the hose off when I’m using the sponges.


Good, you understood my point. Using an overpowered device is a complete waste of resources.


I didn't say that having resources meant they should be abused. I said that somebody who can afford $150 for an RPi can afford $2.50/mo for power instead of $0.50/mo, and that as a result, factoring in lower power use is a poor reason for buying something you don't otherwise need, especially when that power is easily available.

If that power were coming from locally finite source such as solar or wind, or battery, it's a different story.


You know there's this little event going on called climate disruption, and that a leading cause is energy overconsumption, right?


That's a broad generalization. Sure, reduce, re-use, recycle.

But... it's nice to have a system that has a very minimal footprint, which isn't exactly the case with a laptop.

Then again, we can argue semantics over "need" vs "want".


> but an old laptop can do most things a Pi can do, and putting them to use keeps them out of the landfill longer.

But you'll lose Internet points.


Having worked on safety-critical, custom circuitry and programming, particularly in the entertainment engineering field, I find the prolific use of Arduinos, RPis, and ESPs to be concerning. You pay more for an equivalent industrial board, but there's been a lot more QA/QC done on it. Don't get me wrong, I started with Basic Stamps from Parallax back in the 90s to control window display animatronics in NYC department stores. Great stuff. Fun. But with all the emphasis on high-integrity hardware and software, Rust (I prefer Ada/SPARK2014 for such things until Rust matures in this field), and hobbyist programmers creating show control or machinery controls, I worry that people don't realize that you don't have the same assurances you do with industrial-grade equipment. I opened a very large box a vendor created to control their overhead rigging for a show I was running, found an Arduino inside, and when I had the code audited, found a state that would be unsafe. Most tech workers in entertainment would not catch this sort of thing. Yet this is machinery being deployed above an audience. Thankfully, there are new standards being developed to catch this sort of thing, but these standards/guidelines lag behind, and with the pace of production, I am sure there is a lot being done out there that is unsafe.


> I [...] found a state that would be unsafe.

That seems separate from the hardware issue with the raspi, like something that would happen with better hardware as well.

It feels like this needs to be handled in the dev tools where going into a dangerous state is automatically followed up by exiting it once an operation is performed and you can't mistakenly stay in the dangerous state. And then have an exhaustive analysis of the FSM to show it performs as expected with the correct state transitions.
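As a toy illustration of the "can't mistakenly stay in the dangerous state" idea (nothing like a real safety case, and no substitute for hardware interlocks or formal FSM analysis), a guard that forces the exit path to run no matter how the operation ends might look like this in Python; the brake functions are purely hypothetical stand-ins:

    from contextlib import contextmanager

    @contextmanager
    def dangerous_state(enter, leave):
        """Enter a hazardous state and guarantee the exit runs, even on failure."""
        enter()
        try:
            yield
        finally:
            leave()   # the exit always happens; you can't stay in the state

    # stand-ins for real actuator commands (hypothetical)
    def release_brake(): print("brake released")
    def engage_brake(): print("brake engaged")

    with dangerous_state(release_brake, engage_brake):
        print("running the move")   # even an exception here re-engages the brake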

What was it (an automated curtain puller?) and did the machine have useful hardware lockouts, such as physical rope-speed limiters or whatever, to make sure it couldn't do bad stuff even if commanded to?


True, I was just offering up an example on the software end, but it has to do with hobbyist embedded people thinking they can write an Arduino sketch for safety-critical applications.

I am more concerned with hardware issues, since the plethora of ESPs and Arduinos being made do not go through a Six Sigma-type process control. Also, the piece I pointed out did not have any separate safety hardware or other watchdogs in the box, like a Pilz unit supervising Beckhoff I/O. It was an Arduino with some relays off of its GPIO pins. High-integrity systems need to include both the hardware and the software. There are actually standards for high-integrity systems, aside from the usual aerospace stuff, that apply to show control or machinery control. Safety-Related Control Systems (SRCS) are being addressed more and more in ASTM F24 for Amusement Rides and Devices.

I loved my Basic Stamp, Pic-chip, and Propeller chip days. Fun, but I am glad I progressed beyond the hobby level before anyone let me put a piece of kit up! Window displays were fairly innocuous!

I've always tried to add actual physical, mechanical interlocks on some of the stage machinery I've designed where a flipped bit or faulty i/o would cause harm or death! See my Arduino reference below.

I wish SPARK2014 would get more love. It has been around for a while with real-world applications, but Rust is the darling of the tech crowd now. AdaCore and Ferrous Systems are teaming up to bring some Ada goodness to Rust along with the legacy experience and apps.

Cool article on drones and SPARK2014: https://blog.adacore.com/how-to-prevent-drone-crashes-using-...

Cubesats: https://www.cambridge.org/core/books/building-high-integrity...

Arduino and Safety-Critical Circuit: https://forum.arduino.cc/t/safety-critical-circuit/319986/2


Just caught your "curtain puller" reference. I have designed a stage lift that travels at 4 ft/s from below stage, after a sloat moves at 6 ft/s out of its way, to reveal 7 performers on stage after a poof of smoke. A sloat is the piece of the stage floor that needs to run down and under the stage to make an opening. I put in some beefy metal flags that would only release the lift to go up after the sloat had cleared the opening. A mechanical interlock. No software needed. This was the late 90s and I was already suspicious of bits flipping as the software guys started taking over the electro-mechanical effects. Don't get me wrong. I have been programming since 1978, but I have also been a machinist, welder, underwater technical diver, and rope technician, and I built my own CNC machine back in 2001, before you could buy a $300 3D printer or an X-Y-Z table for less than $20k. I have theoretical and practical experience, and red flags go up when somebody (usually a hobbyist or an artist's tech) tells me they use Arduinos or RPis for their controls. I am not against it, but I know this carries some attendant concerns. Who is programming the logic? What other secondary watchdog/safety hardware (if any) is being used to ensure another layer of safety, etc.?

SPARK2014 the programming language and the dev tools include a verification toolset, automated proofs, and unit testing. Rust may eventually catch up ;)


I'd never heard of a sloat before. Yeah, that sounds like the kind of thing which could crush someone pretty easily if not done right.

I agree with the hardware safety interlocks, which I guess is why I don't think so much about the software - because I imagine it was programmed by a monkey (perhaps myself, late the previous night) and I don't trust it anyways. It's like a switch an untrained stagehand might hit which needs to be safe regardless.

(I've never made anything to be around a performance, but some of my things could certainly have hurt me or friends if we weren't careful.)

> SPARK2014 the programming language and the dev tools include a verification toolset

This is something that third-party FSM libraries often lack, decent controls and verification.

The cubesats are cool. Usually I can walk over and kick my creations when they need a reset, that's a whole 'nother level.


I know that you didn't ask, but my $0.02 is that yes, RasPi's are unreliable. If you use one as an unattended remote server, it will do you well to include a hardware watchdog that power cycles the device after it hangs. But modern NUCs seem to take MINUTES to boot the BIOS.

I had been using various BeagleBones and found them more reliable than RasPi's and faster to boot than the NUCs in the office.

But depending on the availability of a TI part is the road to perdition, so maybe I'm just using the wrong NUC models.


> I know that you didn't ask, but my $0.02 is that yes, RasPi's are unreliable.

I've been using RPis and "clones" for internal services for years, and there's hardly any fuss with them at all. At most a power cycle once a year.

They've endured the house losing power multiple times, I've yet to swap the SD card on any of them, and they just keep chugging.

But sure, I wouldn't trust my life to one. I just find it odd that people say they're highly unreliable while all of mine just work.


Happy that yours works well. Many problems don't show up until you hit larger numbers.

We deployed over 1k RasPi's for a particular customer. We averaged about five reboots per day. On top of that I had to deal with the RasPi Organization's insistence that they were not an ODM. Though these were the RasPi B's and RasPi2 B+'s. I'm sure the reliability has gotten better over the years.

I'm not a big TI fan, but everything I needed I got in a BeagleBone: they're more reliable than RasPi's, you can actually get the firmware source code and they have a "normal" returns process.

I don't have the data for the BBB based cube-sats running Kubos (which was the project after the RasPi project) and there were less than 50 deployed, but I've never heard of any of them rebooting themselves unless the ground told them to do so or there was a battery failure.


Raspis are stable if you respect two things:

- avoid writing to the SD card too much (log2ram mitigates this; Alpine in read-only mode solves it)

- plenty of power (the recent 3 and 4 have huge spikes of current draw!)

Do you know the reason for your reboots? Also, before the 3+, the die has no RF shield. The B and 2 B+ are exposed if you don't use a metal case with a separation from the PSU.


> plenty of power

Actually this is a good point I forgot about.

I had one Pi which would reboot or hang every week or so. Dismissed it for a while but then decided to troubleshoot it. I had a USB cable tester, and quickly found that the USB "charging cable" I had bought from a local shop had a 1 Ohm resistance! Threw it away and replaced it with a good one and never had an issue again.


Yes. RasPis are stable except when they're not.


Ok 1k is certainly a sizable fleet, and I only got 3B's or newer.

Nobody imports BeagleBone here so have to get it through DigiKey or similar, which means prices are way higher.


> We deployed over 1k RasPi's for a particular customer

Is that the proper tool for the job?? I just read an article saying "Unpopular Opinion: Don't Use a Raspberry Pi for That" (https://news.ycombinator.com/item?id=35260322)


... what were they doing with 1k Pi's

Please tell me they were using the GPIO pins for something.


Of course. GPIO is the simplest way to get a blinky light.

Seriously though, we used a GPIO pin to stroke the watchdog.
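For anyone curious, "stroking" an external watchdog from the Pi side is just a periodic pin toggle. A minimal sketch with RPi.GPIO, where the BCM pin number and timing are placeholders and the pulse period has to stay well inside the external watchdog's timeout:

    import time
    import RPi.GPIO as GPIO

    WATCHDOG_PIN = 17            # placeholder BCM pin wired to the watchdog input

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(WATCHDOG_PIN, GPIO.OUT)

    while True:                  # if this loop ever stops, the watchdog resets the board
        GPIO.output(WATCHDOG_PIN, GPIO.HIGH)
        time.sleep(0.05)
        GPIO.output(WATCHDOG_PIN, GPIO.LOW)
        time.sleep(1.0)          # pulse well within the watchdog timeout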


More anecdata: my Open Sprinkler Pi has been running for many years, with only power-fail reboots (maybe once a year). It doesn’t write much to flash, but it’s never needed a reboot because it failed.


My rPi pihole hasn't had a single hiccup for years. Multi power outages and everything. I actually thought it was going to be a way bigger hassle over time but its been smooth sailing.


I've had about half a dozen Raspberry Pi in total (only 1 through 3), and many dozens of SBCs, and I found Raspis generally unreliable except for the Raspberry Pi 1 (model B), which is extremely slow for anything nowadays.

The longest I've had running was a The Things Network gateway on a Raspberry Pi 3: OK for 2 years, then it killed the sd card (not corruption, it just wouldn't work anywhere anymore). I set it up again and it killed the new card within a week. Again, and within a month. I gave up and it sucks because the LoRa shield is Raspberry-specific.

A Raspberry Pi 2 simply died while decoding ADS-B. The SD card was fine.

For the rest of the experiences, I always found some data corruption at some point. Often hanging and needing monthly reboots.

The only worse SBC (and it wasn't really an SBC) I've experienced was the Cubox i4-Pro. I eventually lost it, and I'm almost glad, weekly data corruptions and crashes, underwhelming performance and very fickle behavior overall.

Other <randomfruit> Pis have turd-tier hardware support, but whatever you manage to get running tends to be reliable. Pine64 (the oldest model) is so far the only SBC I haven't yet seen crash, trash an SD card, or suffer FS corruption.


So what you're saying is except for the times it rebooted, it didn't reboot.


As long as you use a good power supply along with Ethernet instead of WiFi, Raspberry Pis are rock solid in my experience.

Regardless, I'd also recommend setting up the hardware watchdog just in case. It's saved me in the past when I overloaded my Pi with a bad cron job.
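The usual route is the watchdog daemon or systemd's RuntimeWatchdogSec=, but the kernel interface underneath is just a device file. A hedged sketch of petting it yourself in Python, assuming the Pi's BCM watchdog driver is loaded and /dev/watchdog exists (needs root; the Pi SoC's maximum timeout is roughly 15 s):

    import time

    # Opening the device arms the timer; if this process ever stops writing
    # (hang, crash, OOM), the SoC resets the board after the hardware timeout.
    wd = open("/dev/watchdog", "wb", buffering=0)
    try:
        while True:
            wd.write(b".")     # any write "pets" the watchdog
            time.sleep(5)      # keep well under the timeout
    except KeyboardInterrupt:
        wd.write(b"V")         # "magic close": disarm so a clean exit doesn't reboot
        wd.close()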


> I know that you didn't ask, but my $0.02 is that yes, RasPi's are unreliable. If you use one as an unattended remote server, it will do you well to include a hardware watchdog that power cycles the device after it hangs.

Maybe you got a bad batch. My Pis never had any problems whatsoever.

Please check the logs to see if they are experiencing brownouts. Not all power supplies work, especially with a Pi 4.

The thing that likes to fail a lot is SD cards. My Home Assistant Pi uses an SSD (harvested from an old Chromebook) because of that.

TinyMiniMicros seem to be a better option than NUCs these days. Boot faster too.


People haven't been able to get an RPi for years, and yet they still won't switch to those nice BeagleBone Blacks running standard Debian that are sitting in stock over there.

Shrug. At this point I see RPis as a litmus test.


Should probably expand on this a bit. In 2014 we were involved in a project with a reasonably well-known power grid operator. They had been deploying expensive equipment to monitor the status of various devices at remote substations. They had soft requirements for uptime, which means they set a budget and got the most reliable equipment they could afford. The monitoring system was expected to work somewhere between 4 and 5 nines, but if it didn't no one was going to get fired. The equipment they had been deploying was more or less expensive overkill, offering 5 9's for about $50k per box. Multiply that by about 1600 and you get the idea for the budget. Someone in the organization hit on the idea of using redundant consumer-grade equipment and initially spec'd out some rack-mount x86 systems with beefy redundant power supplies and SSD storage. But then they heard about RasPi's and wanted to give them a try.

Should you use a Raspberry Pi for that? The best way to answer that question was to try it out. The first 20 units were built out with a beefy, though small UPS and a decent enclosure. They were deployed alongside existing systems to see if they a) worked, b) gave the same data as the expensive system and c) were reliable.

The answer was... some individual units did not hit the 4 9's reliability target. The ones that did, seemed to continue to be reliable up to at least 5 9's. But it was hard to determine which would fail without putting them in the field, waiting six months and seeing which ones hung. But the price was cheap enough that putting two in the same corner of the wiring cabinet and adding a hardware watchdog was quite affordable.

We deployed just over 1000 in this redundant configuration and it worked fine.

By this time we had collected enough data to chart an MTBF histogram, essentially a chart of how many machines lasted how many days without needing a reboot. We wound up using a lot of original Raspberry Pi B+'s in 2015, which was after the RasPi 2 came out. (Purchasing at this client took a LONG time.) I often wonder if our supplier sent us boards that had been returned by other customers.

I had good experiences with BeagleBoard's in the late 2000's and this was just as the Black was coming out. We deployed another 600 with BeagleBone Blacks and got MUCH better reliability numbers. Is the BBB an intrinsically better product? I dunno. Did I just get a batch of bad RasPi's from my supplier? I dunno. Should you ever run an embedded system without a hardware watchdog? Probably not, no matter how reliable you think your system is.

But... the real problem with the RasPis was support. How do you return a RasPi? You don't. You throw it away and get a new one. Yeah. That doesn't work for a lot of people. How do you get the firmware for a RasPi in 2015? You don't. Can I get the gerbers so I can turn a few custom boards? No. I talked with RPT several times about this and their response was "we're not an ODM."

And that's PERFECTLY FINE. I never said I thought RasPi's were "bad" -- I may have said "there are applications for which RasPis are not a great fit." If you need the firmware, consistent reliability or the ability to turn custom boards, you absolutely don't want to buy a Raspberry Pi in 2015.

Also... a lot of people are responding with "Except for the several times the system rebooted, I didn't have to reboot my RasPi," which I'm not sure I understand.


I generally agree, there are a few places I'll throw a pi, because it's small and easy to set up and just hack something.. Like a print/scan server or an easy way to get a microphone or webcam hooked up to the LAN..

But a lot of times, an ESP8266 or even just ATmega part is just fine.. other times, some old laptop or PC is way better..

Currently, my only Pis in operation are the Pi400 on the living-room TV for RetroPie, and the one attached to the printer and scanner to network those without having to install weird stuff on the Windows/Linux clients.


Agree with this article - those little ProDesks are available very cheap, and with some impressive configurations for power draw vs. performance.

I have an i3-8100T version which includes on-CPU video encoding, ideal for Plex. It runs unRAID and replaced a much larger system that drew much more power.


This is what I do. Small form factor PCs meant to sit behind monitors are plenty powerful for many homelab tasks. I divide mine up into multiple VMs and have CPU, memory, and disk to spare for whenever I want to spin up something new.


> A complete Raspberry Pi 4 Model B 8GB kit is admittedly cheaper — typically $150 these days

Raspberry Pi was great for $35. For $150 not so much.


RPi prices are so inflated they barely make sense at all, let alone for all these little "give my plants half a cup of water every day" projects.


The power available in a RasPi is way overkill for "give my plants half a cup of water every day" projects too. I don't need four cores and dual HDMI outputs to tell a motor to spin for half a minute once a day.

I'm just building an outside watering system. I was going to use a RasPi, simply so that it can log things like temperature, light levels, rainfall, and water butt levels that it uses to work out how much water to dole out, and so that I can log in remotely and change the settings. But there's a shortage and I ended up getting an Arduino instead. It can do everything I need except data logging and remote access, but those don't matter so much.

RasPi should be split into two camps - one being a system for people who want to build a small computer with decent speed, RAM, real-time video decoding, etc., and the other for people who just need a slow, low-power Linux device with GPIO.


> I'm just building an outside watering system. I was going to use a RasPi,

You could also use a $1 ESP8266 or a slightly more expensive ESP32.
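For the "spin a pump for half a minute once a day" part, the entire program on an ESP-class board running MicroPython is roughly the sketch below; the relay pin is a placeholder, and a real build would sync time over NTP or an RTC rather than sleeping for 24 hours:

    import time
    from machine import Pin

    PUMP = Pin(5, Pin.OUT)    # placeholder: whichever GPIO drives the relay/MOSFET

    while True:
        PUMP.on()                        # water the plants
        time.sleep(30)                   # half a minute
        PUMP.off()
        time.sleep(24 * 60 * 60 - 30)    # roughly once a day; an RTC or NTP
                                         # sync would keep this from drifting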


While that looks like a very nice chip, it still doesn't have ethernet or non-volatile storage, so it doesn't help with remote access or logging.


Silence, volume, and stability are three features they also have, at least with heatsinks like these: http://move.rupy.se/file/cluster_client.png

SD cards are saturated even by the 2W TDP Raspberry Pi 2, but the 4 has the speed for CPU-intensive workloads.

This cloud hosting service has been running from my home fiber with better uptime than AWS/GCP for the last 10 years: http://host.rupy.se

Longevity is the key value.


If your alternative to RPis is used NUCs or laptops, then you've lost me.

I want reliable and quiet. I have RPis running for more than a decade everyday with not a single crash.

In what condition are those used PSUs? Who knows. I'm also not a fan of bringing other people's dust and keyboard grease along.


Many NUCs are whisper quiet and just as reliable as an RPi. Often more so, given the number of folks that run everything on their RPi off an SD card without considering its lifespan.

I didn’t replace my RPi with a NUC but I did get a Celeron powered mini PC. It’s basically silent and has been happily running Ubuntu under my desk for over a year now.


If new, sure, but then you're in for a lot more money than the Pi.

If it's used, I'm not interested.


Used = value. But if you want new and still cheap, there are mini PCs with SoC-style Pentiums: barebones around $200, $20 for 8GB of RAM and $20 for a 256GB NVMe. And they often have 2.5Gb Ethernet these days.


My mini PC was $150. Definitely more than a RPi but not insane.


I've got a mix of boxes ranging from Raspberry Pis to NUCs to many-core Xeons. They each have their roles - for example, a NUC acts as a DVR for security cameras, and I want it separate from other stuff. But I agree that RPis can be very good for basic stuff. I've had one running as a print server for an indestructible old laser printer for 9 years now with no issues whatsoever. It's outlived two older Mac Minis I used before the NUC.


> I want reliable and quiet. I have RPis running for more than a decade everyday with not a single crash.

I had the original Model B running continuously for 8+ years until the USB contacts rusted-out due to high humidity.

The prices in the article are absurd: why would I choose a $260 SFF PC vs a $5 RPi Zero W I got 5 years ago? The comparison may make sense for new purchases with the ongoing shortage.


"NUC"s can be fanless PCs. The fanless part makes them quiet. There are no moving parts to make a noise.


I still miss the sound of spinning disks, sometimes; it let me know the computer was thinking hard.


For prototyping, Raspberry Pis will get you where you're going faster than anything else. The Linux environment gives you speed of development. The form factor lets you toss it in a parts bin. The GPIO gives you direct, easy access to the physical world.

After prototyping, redeveloping for a different platform is often not worth the cost. If you didn't use the GPIO, switch to a NUC or similar. Or don't.

Raspberry Pis are in data centers now, occupying server racks. Don't fight the signal; retransmit.


It's unfortunate that people don't include nuance in their discussions. These "do" or "don't" type articles make broad generalizations, sometimes as arguments in favor of their position.

For instance, using SD cards for storage is problematic for certain uses, but that doesn't include the popular practices of using a small USB-attached SSD, nor of logging to RAM, nor of logging / writing to NFS. That by itself is just a factor, not a reason.

Likewise, price and availability make the assumption that we're talking about Raspberry Pis, which we know have gone commercial and are way too expensive and hard to get. There are many, many other kinds of Pis like Orange, Nano, Rock, et cetera.

There are plenty of other factors which are good or bad, depending on your situation, like power supplies - a single NUC power supply brick can be HUGE, but a single IKEA USB power adapter can power several USB devices - so it depends on your use case.

These are nitpicks, but what really irks me is when people make comments like this:

..."don’t have to put up with the quirks of Raspian or running an alternative distro that has zero community"

That's dismissive, reductive, and a rather shitty take. Perhaps this author shouldn't be running Linux at all because of all the quirks in each distro. Perhaps they should run Windows. Oh, wait! Windows has even more quirks, and arguably a community so large and disparate that I'd consider it worse than any community of any OS project with "zero community".

Oh, well. My take is that Pis of all sorts are wonderful little devices that have an assortment of advantages and shortfalls for many, many use cases, and you can learn lots using those advantages and overcoming those shortfalls, if you want. That last part - "if you want" - means more than anything. It's personal, and others shouldn't tell you what you should or shouldn't want.


Certainly agree that the Raspberry Pi's SD card is unsuitable as a boot device if you have any issues with power failures or voltage regulation. You can boot from a USB drive, but by the time you pay for it you are in the same price range as other micro computers.


I often had filesystem corruption after needing to pull the plug only once. Some Linux gurus said I should install ZFS!

I have a habit of using heavy-duty sdcards, because they also seem very susceptible to ESD in this dry, year-round desert weather.

RPis are fun toys and I got a lot of mileage out of an $80 kit I picked up while I was in community college. I got some attachments and a 7" touchscreen and I had so much fun installing Raspbian on it. I later attempted to configure it as some sort of surveillance station but that didn't go well. Then I installed the actual Pi-Hole package on it and that was great, but I eventually migrated to NextDNS.

Nowadays I have practically no use for a Linux box with a tiny screen and abysmal storage abilities. It's sitting on my desk, darkened screen for months on end. I couldn't even think of a use case. I tried making it some sort of media server to hook up to my speakers, but the media centre packages are abysmal and crashy and can't handle Bluetooth or anything useful. I couldn't even get them to play YouTube videos. They're as bad as the surveillance packages.


Look at HiFiBerry and the OS that comes with it. I use it to set up AirPlay to some passive speakers and it works great. It supports various sources, including AirPlay 2.


FWIW, my home DNS has been running on a pi SD card for at least 5 years with many, many power outages (I live in a rural area with very unreliable electricity). So far it hasn't had a problem. Overall I agree with the sentiment, and will probably move off the pi at some point. Just saying the SD card situation isn't as dire as it is sometimes made out to be.


I used an HDD connected via USB as a boot drive, and after a few days of working fine it suddenly couldn't be mounted any more; it was something to do with not enough current from the USB ports. Lost all data and had to reformat the whole thing.


"the storage is WAY more reliable than micro-SD cards"

Common MLC media is good for about 5,000 write cycles before end of life. SLC media can take around 100,000 write cycles, and is available in SD card form (but it is more expensive).


IMO it's "You don't have to use a Pi for that"

Also the later generation thin clients (i.e. Dell/Wyse 5060, 5070) are cheaper still and also pretty good. They're reasonably responsive for desktop tasks and can even drive 4k60 over DisplayPort.


I've seen people waiting months to get their hands on a Raspberry Pi 4 to self-host a bunch of apps. Not sure why. Something like an HP EliteBook G2 with an i5 and 8GB of RAM can be had for $110 on secondary markets like Amazon Renewed or eBay. That's about 10x the performance of a Pi 4. The only thing missing is the GPIO port, but even that is an easy fix for another $10-15. The only downside is the power draw: compared to a Pi 4, you're going to see an average draw of 20-40 watts.


I run a Mastodon server. In one of the giant floods of new users, I installed Docker on an RPi that I already had laying around and used it to increase my asynchronous worker capacity. I doubt I would have gone out and bought an RPi for that, but I had one up and running anyway, and it turned out to be perfect for the job.

Note that I could solder my own boards together from bare PCBs and components. I just don't want to have to when someone else has already done the hardware work for me.


1. Too expensive. Maybe in 5-10 years, pricing will return to normalcy. When inventories are tight, they should put corporate orders at the back of the queue, because those deep-pocketed buyers drive the scarcity that harms innovation and accessibility for average retail customers. Have a preorder waiting list rather than unactionable "out-of-stock" indications that lose sales.

2. Probably can do the same (electronics) with an Arduino or run (a lightweight job) in a container on an existing computer.


Comparing off-the-shelf USED hardware to new hardware isn't exactly fair.

Yeah, it's an option to consider, but I don't think the price comparison is fair or even important here.

What is important, however, is that you can generally just shove an SSD into a NUC and forget about it forever, and not worry about microSD woes, finding the right case and power supply for an rPi, etc.

I think the Pi only gives real benefits when you're actually using the IO it provides over a "normal PC", or if you really need to shave those few watts off.


I am getting access denied from India.

“You do not have access to set-inform.com. The site owner may have set restrictions that prevent you from accessing the site.”

I got an error when visiting set-inform.com/2021/08/24/unpopular-opinion-dont-use-a-raspberry-pi-for-that/. Error code: 1020 Ray ID: 7ac01a0f4d06415e Country: IN Data center: bom06 IP: 49.37.133.177 Timestamp: 2023-03-22 17:19:49 UTC


They are also blocking my Hong Kong IP


I wanted to give my girlfriend the option to watch streams on her TV.

An important point was that the solution had a regular browser that worked with every streaming website out there.

So, I got an Asus TinkerBoard, because it had good video rendering. I constantly had issues with that thing. Updates, connection problems, etc.

Then the FireTV stick was released with the Silk browser. I bought one, and never went back.


I've been pricing out small systems recently for a light weight server at home (to use in part with a weather station). Given the lack of availability/expense of RPi 4s and the expense of a decent NUC, I'm leaning in the direction of an M2 Mac Mini, which has better idle power than AMD64 options, and not much worse than an RPi 4.


Availability has been and is still OK at other "Pi-ish" SBC vendors like Hardkernel, Radxa/Allnet, FriendlyElec, Orange, Banana, Pine64, etc. Most of them get new stock regularly even when things sell out.

If OS options are making you skittish, pick one with good Armbian support.

If you need the extra power (even an RPi 4 sounds like overkill for a weather station), you could also look at Intel embedded systems. I haven't verified, but I'd wager you can find something comparable to the M2 in size and power for half the price or less.


Isn't the very cheapest M2 Mac Mini still like, $600...? That seems to put you in a completely different price and performance category than even the most kitted-out RPI possible, even at inflated grey-market prices.


You can buy an Intel Compute Stick for cheap.


I have been running my homelab for 2-3 years, and I never considered an RPi to be one of the servers ... It is just for toys or some "high school robots" stuff, with near-zero availability, and they are very fragile. How many times has your SD card failed, your filesystem broken, an undervoltage occurred, or an accidental short-circuit happened? If you want a host running Linux 24/7, go with a used thin client on eBay (they are at least x86). If you are like me and want to build a rack at home, go with some used PowerEdge / ProLiant gear from eBay. They do way better things than your Pi (with BMC, Xeon cores, and possibly ECC memory), and they are cheaper as well (my PowerEdge R520 servers cost less than C$200 each). These machines have proper CPUs, hard drives, and power supplies. Do not use a Pi unless you are building some IoT experiment that requires GPIOs.


Why am I getting a cloudflare error? Visiting from India via Chrome. Msg: I got an error when visiting set-inform.com/2021/08/24/unpopular-opinion-dont-use-a-raspberry-pi-for-that/. Error code: 1020 Country: IN Data center: fra08 Timestamp: 2023-03-22 13:33:46 UTC


One thing CloudFlare can do is basically replicate the old-school abuse-and-attack-reduction tactic of blocking most Asian IP blocks.


I get the same 403 error.

Here is archive: https://archive.is/HmDQz


Cloudflare thinks you are a threat. If your ISP assigns you a dynamic IP, maybe try renewing your lease?


Same.

    Error code: 1020
    Ray ID: 7abeef5a9d8c6e55


Same but attempting to visit from Vietnam


This is a letdown of the current state of technology - I guess most of us expected to be able to run a (regular-distro) Linux on $2 chips that could even embed a "real-time processor" to do the bitbanging. The opposite happened: a RasPi Zero's price rivals a NUC's.


The article is right. If you need a (very) low-power server the RPi is unbeatable; however, I have tried a lot over the years to make it as robust as possible, and I find it's only a matter of time until a power failure, regular update, or even a simple reboot renders a Pi unusable. And you'll be powerless to fix whatever flavor of boot error is occurring half a world away. I never had these kinds of issues with regular servers. Perhaps if there were a way to "boot to ssh" to fix these...

Some recommend the mini PCs from Dell, Fujitsu, etc. But they also have custom firmware with their own set of problems and strange design decisions, which you won't be able to change if you need to.


I never had a breaking update on Raspbian. However, I have run into write errors caused by power failures that required an fsck on the SD card from a different computer. I now have a `@reboot touch /forcefsck` cronjob, but it may just be a talisman (IIRC, dirty filesystems get an automatic fsck at boot).

Have you considered read-only boot & system partitions? That way, your Pi will always boot, at the cost of having to remount in read/write mode to perform updates.


The big benefit of something like a pi is that it offers the flexibility of a linux-bootable computer with the IO pin interface of an MCU. That being said, maybe there's a cheap, easy way to get a GPIO interface on a NUC? I know there are USB to GPIO boards you can buy, but I have no idea how easy they are to use, and what the drawbacks might be. There are also extremely expensive GPIO interfaces that negate the cost-savings of using a cheap NUC.


I fully agree. We built our app-testing course with about 10 phones, 10 NFC simulators, and a Raspberry Pi to orchestrate the whole thing.

The Raspberry's USB ports sometimes did not work properly, and we feared that we could quickly kill the SD card. Additionally, the processing power was not that great.

In the end we switched to an Intel N5000 notebook with a broken screen (working HDMI) that we had lying around (used price under 100€, in this condition maybe under 50€). The system now has an uptime of over 100 days and works much better.


When you're chasing minimum power consumption, tho, a Pi4 (in my case, a CM4) with a 200MHz base clock, a USB3 card, a SATA card with 6 SSDs attached, running 3 Alpine VMs and a couple Docker containers will idle at 12W and perform surprisingly well when asked to while drawing no more than 27W with every system running at full tilt. I didn't expect it to be as good as it is, and I can't find anything else that can touch it. But my use case is narrow and specific.


Without disputing anything in the article, I often see little to no value placed on consistency, simplicity, and repeatability in these kinds of analyses.

In this case, I would prefer to manage 1 set/type of hardware for all my uses unless I really needed to deviate. If the difference is $100 in overkill for a Pi-hole, but right-sized for other use cases, I find value in the consistency.

1 form factor, 1 OS, 1 set of hardware compatibility, if a script works 1 place, it works everywhere, etc.


My house has one Raspberry Pi 2b for DNS (Pi Hole), and then everything else (including a second Pi Hole) is in docker containers on my file server. Between the two, I've been very happy.

The file server is an old i7 2600k that used to be my desktop PC before I added some hard drives and installed Unraid on it. Unraid makes managing docker containers incredibly easy, especially with the Community Applications and Auto Update Applications plugins.


The value of the RPi is its versatility. It's not the optimal tool for any task, but I can be confident it will work well enough for an extremely wide range of projects. Some projects you might know exactly what you need at the start, but most times you don't, and when you prematurely optimize you often find yourself bending over backwards to make the wrong thing work.


For me the main issue has been the storage reliability.

But what if there was a "Pi" that would boot from the network? Not locally, but from the Internet. With some tinkering, you could add a small network-mounted drive for persisting things like configuration information. Boot-up times would be long, but quite often these types of devices are not constantly turned on and off.


Pi already can boot from the network, mate.


The thing that makes the RPi such a great choice when there are so many choices is the community and the plethora of information about the platform. Here's the hardware and the software that we know works, and piles of people have it running right now. Mix in an unknown element and who knows how it's going to behave, let alone where you can get help.


the rpi foundation is another example of openai-like evolution of an org - started out with a noble cause but switched course when faced with the realities.

the whole supply-chain fiasco was a great example of their current priorities when their arm was twisted. they started out with promoting cs education but ended up being more loyal to their corporate partners.

but to be fair to them, half the problem was us, who want it for cheap computing. the "hackers" on this site could easily achieve similar results by repurposing an old phone rather than buying another one of these sbcs for their home lab. to be honest, we are spoilt by the rpi ecosystem, and we choose it even when it's overkill because we can (or at least could, until they became over $100).

at the end of the day, you do what you want with your wallet.


Given that in Canada it's lately cheaper to just buy a small Intel PC than a Raspberry Pi 4, I've been exploring alternatives lower down ... ST-Micro's stuff as an alternative to Arduino (which is also $$$ here).

I guess it depends on what you're doing. I like working with hardware and low-level software.


I have to say these look really tempting: https://www.aliexpress.us/item/3256805075742151.html

I need to split some services off of my overloaded Raspberry Pi 4 but I just can't find any in stock anywhere. That NUC is so low priced it's hard to resist pulling the trigger, and in all likelihood it probably crushes the RPI4 in terms of performance.


Yeah, watch for heat though. (I didn't have that specific model, but I had one overheat on a hot summer day ...)

Of all places, London Drugs sometimes carries them. Places that sell POS (point-of-sale) systems are more likely to have the small form factor computers.


Yeah, you can use an old refurbished PC for most of it. But running on 5W... no. Putting it literally into the wall, no. Arduino is too much overhead upfront. The Pi is the ideal middle ground.

I went with a Rock Pi because of the shortages... works just as well and has a little more punch.


And instead of buying more powerful hardware, you can try to get the always-free Oracle VPS (4 vCPU ARM, 24GB RAM, 200GB storage), do your compute stuff there, and keep the RPi as Pi-hole/Home Assistant only. It will save you on the electricity bill as well.


> free oracle VPS (4vcpu ARM, 24GB RAM, 200GB storage)

<insert my standard disclaimer regarding oracle based on lived experience>

https://news.ycombinator.com/item?id=29514359

Just because you haven't been bitten yet, does not in any way mean you won't be, and crucially: it doesn't preclude others following your advice being bitten.


> it doesn't preclude others following your advice being bitten.

Just Google which ones are actually always free. Didn't have any problems personally.


At least in my case I was not presented an obviously free option and the shapes given matched the always free tier, despite not being included in it.

I'm a reasonably smart individual, I did not click through blindly and made a concerted effort to discover the free option and was taken aback by the fact that I couldn't find an option that was branded as being free and was assured by a friend (working for Apiary in Oracle) that it was fine to pick the shape I picked.

Look, regardless of how it came to pass: I should have been permitted to cancel. I continually paid what I considered to be the final bill (3 times in fact) until ultimately I was charged an amount that it was not possible for me to actually pay (imagine trying to pay 0.1 US cents, or $0.001).

I know we like to think that we're smarter than other posters and that "maybe they were holding it wrong", but I can't put into words how sincerely you'd be mistaken in thinking that in this case. You are very likely thinking while reading this: "Yeah, but that wouldn't happen to me, it hasn't happened, I feel safe enough in my decisions I could get out of it" -- you'd be wrong.


Thanks for the thread. It is full of Oracle horror stories

I was about to give Oracle free tier a try for some hobby projects.


Pi-hole shouldn't be writing much to the SD card, if at all; something seems wrong there. Once it boots (a read-only affair), it should be running exclusively from memory, no? Maybe periodically updating the rules and saving to disk?


I have noticed that this website is blocked for some reason in India. For affected users, here is the archived copy: https://archive.ph/HmDQz


Please start here. Tons of thin clients:

https://www.parkytowers.me.uk/thin/hware/hardware.shtml


Yeah, the value proposition is quite out of whack at the moment on most SBCs in the RasPi 4 class.

Not a particularly unpopular opinion anymore, though. Most subreddits, for example, are already steering people towards mini PCs.


I tend to try out a lot of Raspberry Pi projects (Pi-hole, WeeWX, etc.), then eventually migrate them to a VM on an ESXi server. Much easier to manage and more reliable.


While his arguments of performance to power might have some truth, I strongly dislike that he's using the current pricing as a comparison point.

Sure, buying one now is (relatively) expensive for what it is, but I bought my Pi's back when they were MSRP ($45-54 USD for RPI4 4GiB and $35 USD for RPi3B+). Nothing can compete at that price point. While there are cheaper options or more powerful options for a bit more, none of them have the same support the Raspberry PI has.

As for reliability, dropping ~$25 USD for a drive enclosure and having it boot from that is doable with the Raspberry Pi 4 (and the 3 but I've never tested it).


I agree that you shouldn't use the pi as a server. The biggest drawback is that arm is still not as well supported as x86 and isn't as heavily optimized.


> You do not have access to set-inform.com. The site owner may have set restrictions that prevent you from accessing the site.

Anyone else get this?


I do. I'm in Turkey right now.

Guess they want the opinion to stay unpopular after all, shrug.


Same from my country… I checked if it’s my iPhone’s iCloud Private Relay.


I've found an Intel Compute Stick for $30. I'm tempted to buy it and find a use for it later. Pi-hole, mail server.


Ha! I was building OpenCV on a Pi and I remembered I had a 6500T thin DELL I could use instead.


The reason not to use a Raspberry Pi is that you can never buy them.


Do NUCs have GPIO or do I have to stick to an SBC for those?


Industrial mini PCs will often have 8-32 GPIOs for controlling equipment. Expect to pay a hefty markup for one of these from a name brand, or a moderately reasonable price for a no-name Chinese NUC.

The NUCs you see from e.g. Intel had a "custom solutions header", which was a bunch of common buses on an internal header you could expose with some bios config and wires, including 1-2 GPIOs, but that was it.


I have a qotom box which I use for my firewall and it has a header for GPIO. I think that it might be a singular GPIO, though. You'd have better luck attaching an esp32 over USB to add additional GPIOs.


There are various sorts of USB-based add-on GPIO boards these days, but often you're fine with an ESP or Pico doing some of the low-level work and reporting back via serial or USB HID.
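The "report back via serial" half is only a few lines. A minimal MicroPython sketch for a plain (non-W) Pico, where the input pin is a placeholder, pin 25 is the on-board LED, and print() goes out over the USB serial port the host sees:

    import time
    from machine import Pin

    button = Pin(16, Pin.IN, Pin.PULL_UP)   # placeholder input, e.g. a door switch
    led = Pin(25, Pin.OUT)                  # on-board LED on a non-W Pico

    while True:
        pressed = button.value() == 0       # pulled up, so pressed reads low
        led.value(pressed)
        print("pressed" if pressed else "released")   # read this on the host's serial port
        time.sleep(0.1)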


You can always use an AVR or ESP32 as your remote GPIO.


Not typically


ESP8266 is more than enough for many of my projects.



