I'd go one further and say that agile in hardware is fundamentally impossible in any sense that people commonly use the word.
If you have any custom hardware, you are basically stuck with a turn time of at minimum a day or two for any changes (often more; weeks-plus for new PCBs is common if you're not throwing huge amounts of money at people).
In this context, any process that depends on rapid small iterations is basically impossible, because each iteration just takes too much time.
I've done firmware for a long time. Platforms vary considerably and most have surprises. Debugging facilities are limited. Reference examples are sparse, and ChatGPT hasn't got much info on niche OSes like Zephyr and SafeRTOS beyond some FreeRTOS.
Pieces of code interact more heavily than on a Linux machine. Testing requires more hand-holding/baby-sitting. Cross-platform architectures don't scale down well. There are many types of comms buses with no/few standard embeddings.
Some teams know how to make it a lot easier. Some CTOs know this, but most find out the hard way. Embedded practices lag webdev by 5-10 or more years because they were good enough for a long time, or for small projects. Expectations are rising, but there is less leverage than in adtech, so salaries are OK but not explosive.
I wonder (as someone who's basically always been in the pure software land) if the way to get around this is to overbuild your hardware prototype. Throw on more sensors, actuators, and motors than you actually need, and parameterize the physical properties of the hardware (like mass, power, and center of gravity). Then do your iterations in software. You can always ignore a signal coming from a sensor, but it takes weeks to add a new sensor and get a new prototype. So work out all the behavior in software, where you can iterate on a minutes -> hours timetable.
Then once you know how it works and have done most of your optimizations, you can trim down the BOM and get rid of hardware that proved useless. You'd probably want another round of tuning at this point, but at least you know basically how it works, and have tried out many different combinations of hardware in your software iterations.
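To make that concrete, here's a minimal sketch of what "parameterize the hardware and iterate in software" could look like; the class, fields, and sensor names are purely hypothetical, not from any real project:

```python
from dataclasses import dataclass, field


@dataclass
class PrototypeConfig:
    """Physical properties of the over-built prototype (made-up example values)."""
    mass_kg: float = 2.4
    center_of_gravity_mm: tuple = (0.0, 0.0, 35.0)
    battery_wh: float = 50.0
    # Every sensor that physically exists on the prototype,
    # flagged by whether this software iteration listens to it.
    sensors: dict = field(default_factory=lambda: {
        "imu": True,
        "tof_front": True,
        "tof_rear": False,      # populated on the board, ignored in software
        "current_sense": False,
    })


def active_readings(config: PrototypeConfig, raw: dict) -> dict:
    """Drop signals from sensors this iteration has decided to ignore."""
    return {name: value for name, value in raw.items() if config.sensors.get(name)}
```

An iteration is then just a config change and a re-run against the same physical unit, instead of a new board spin.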
It's unfortunately not that simple. For most parts you can get breakout boards or dev kits that implement the chip and all its support circuitry. You can drop those into a simple PCB or just a breadboard and get going pretty quick. This is appallingly expensive for more than a handful of prototypes, but it does work. IME, this is how software people approach electronics.
The real trouble comes during the next step. You have to integrate all these disparate boards into one unit. You can copy schematics and layouts from the dev boards, but that takes a lot of time. Once you've done that, you need to re-validate your entire stack: electronics, firmware, and software. You will inevitably find that you've missed a connection, or the routing introduced some noise, or you got a component value wrong. So you spin a new board, wait a week, and do it all over again several more times.
Debugging electronics is a lot like debugging software. Except that the code is physical objects and the bugs are emergent properties of the whole system and are inherently chaotic. You (generally) can't step through the system like you would with code, you can only observe a small number of nodes in the system at any one time.
It's harder than you expect because you're dealing with invisible forces and effects of the fundamental laws of physics. It requires either very deep knowledge or a lot of time in iterations to solve some of these problems.
- when your function calls don't have enough white space between them, they'll sometimes mix up their return values (crosstalk)
- the more deeply your if/else statements are nested, the more random their results end up being (voltage/IR drop)
- when the user makes a selection, your bound function is called somewhere between once and a dozen times, each time with a different value (bounce; see the sketch after this list)
- there is no `true` or `false`. You are just given a float between zero and one, where one is true and zero is false. Sometimes `true` is as low as 0.3 and sometimes `false` is as high as 0.7 (logic thresholds)
- the farther apart your variable declaration is from its use, the less it represents what you actually declared. If it's two lines apart, a 'foo' string is still 'foo'. A hundred lines apart, though, and 'foo' might become 5.00035 (attenuation)
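For the bounce item above, the usual firmware-side fix is to ignore extra edges for a short settle window. A minimal sketch, assuming a 20 ms window and a caller-supplied pin-reading function (both are illustrative assumptions):

```python
import time

DEBOUNCE_S = 0.020  # assumed settle window; tune it for the actual switch


class DebouncedButton:
    """Collapses the burst of extra edges a real switch produces into one event."""

    def __init__(self, read_pin):
        self.read_pin = read_pin              # callable returning the raw pin level (0 or 1)
        self.stable = read_pin()              # last accepted level
        self.last_change = time.monotonic()

    def rising_edge(self) -> bool:
        """True once per physical press, no matter how much the contacts bounce."""
        raw = self.read_pin()
        now = time.monotonic()
        if raw != self.stable and (now - self.last_change) >= DEBOUNCE_S:
            self.stable = raw
            self.last_change = now
            return raw == 1                   # report only the low-to-high transition
        return False
```

The same effect can be had in hardware with an RC filter or a Schmitt trigger, but the software version is free.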
If you want your program to execute as fast as possible, you have to worry about the speed of light! A trace a few millimeters longer than its partner can have nanoseconds of delay which can easily corrupt your data!
And don't forget that you have to balance the physical shape and arrangement of components and traces against frequency and the surrounding components, otherwise you've created a transmitter spewing out noise at tens of MHz.
Or the corollary: if you aren't careful you can receive radio signals that will corrupt your data.
Oh, you think your wire can handle the 0.5A your widget needs? Let me tell you about transients that spike to tens of amps for a few hundred nanoseconds. But it's okay, that problem can be solved with a bit of trigonometry.
On the plus side, if you forget to connect your ADC to something, you now have a surprisingly decent random number source.
I love the absolute chaotic insanity of electronics. On the surface things make sense, but one level deeper and nothing makes sense. If you go further than that, at the bottom you'll find beautiful and pure physics and everything makes sense again.
I feel the same way about software. It's a hot mess, but under everything there's this little clockwork machine that simply reads some bits, then flips some other bits based on a comparison to another set of bits. There's no magic, just pure logic. I find it a very beautiful concept.
Not so fast, some alpha particles from a distant galaxy strike your memory chips and some bits flip. If the CPU gets too hot or too cold it starts misinterpreting opcodes, branches, etc.
The reality is that computers are composed of several PCBs carrying thousands of multi-GHz signals, so all of the foregoing design principles had to be observed to make our systems as reliable as they are.
I came here to say what you just did. Hardware modularity is incredibly useful in the face of unstable product requirements. Focusing on integration and optimizing form factor can come once the requirements are locked in (if such a condition ever exists, lol).
It's common to design PC boards that have holes and traces for components that aren't installed. If you need three motor controllers now, design a board with space for six, plus a prototyping area of plain holes. Allow for extra sensors, inputs, and outputs. It's easy to take the extras out of the design later when you make the board for the production product.
That's what dev kits are in electronics. They usually even come with schematics/PCB layouts so an engineer can quickly copy a working design and remove the stuff they don't need. There's still a huge gulf between those prototypes and production, and there are plenty of mistakes to make that require multiple revisions.
I'd go one step further and say that Agile might as well be tossed away entirely since it's common enough for companies to treat it like waterfall anyway, making the practice of many small iterations or iteration at all unwelcome. If your software team embraces iteration and incremental improvement, any presence of agile is probably redundant; if they aren't, or the nature of the work doesn't facilitate it, then Agile gets in the way regardless of the domain.
When hardware design can be software-emulated, iteration can be more agile. There's a story about how NVIDIA was the first to do this for GPUs; it was done out of desperation because they were out of money and had to ship quickly. They didn't have the time or money to do any revisions, so they just shipped the design they had only tested in software emulation, even though some of the features were defective.
https://www.acquired.fm/episodes/nvidia-the-gpu-company-1993...
Absolutely all chip designs get software emulated/simulated for rigorous testing before being sent to the fab for production.
It seems like the publication turned what is industry standard into a sensationalistic article.
The only thing Nvidia did differently was going directly to tapeout with their simulated design, without any intermediate prototypes. That was indeed a risky move, but not unheard of for cash-strapped semiconductor startups.
We have been able to largely get rid of COBOL. Why haven't we been able to do the same with Javascript? It's a very useful language given the amount of existing code and support, but very few people I know actually like it.
In the 90s, vendors seriously invested in major development suites (Visual Studio, VisualAge, C++ Builder, Delphi, Visual Basic) and improved languages (Java, C#, Pascal, Basic) for desktop environments. But the web development landscape is largely dominated by JavaScript, which is actively hated, and TypeScript, a better but far-from-great alternative, with everything else being a tiny percentage. What happened?
What happened is that contempt culture solidified as one language became the top language, and as pro-native users came together over their disdain for the web.
There is a steady stream of JS contempt on HN. I don't think it's representative of how actual devs think about JS. The contempt seems forever stuck at the shallowest stage: assuming JS is terrible and disparaging it out of hand, without making a single concrete claim or contention.
Building on the web is not difficult & gets your experience in front of users fast. We've made better and better tools, while exploring a huge variety of app development styles.
I don't know what js did to so many of y'all. I've done web stuff across libcgi, perl, php, java, and yeah early js has some warts but it's like 98% the same as any other language would be (even if classes looked weird for the first decade). The npm package management ecosystem is vastly easier to use & understand than anything I'd seen before. It's unclear what about this language makes people so miserable, and it's not something I've seen in person in my professional career. But the contempt is on high display regularly in the comments.
I think much contempt is actually directed at how JS features have not always had the same performance profile, support, or implementation across browsers.
The language itself is fine, and incongruities have long been polyfilled.
With Shared Array Buffers, Webworkers, WASM and WebGPU, offloading heavy computational work will only become easier.
I guess some people don't see the problem with an excess of accidental complexity when they get paid a lot of money to do simple things like building UIs in a very complicated way.
It feels cheap to me to write off an ecosystem that did amazing things because sometimes the job might be simple. It's rooting around to try to be unhappy. And it's unclear that it really is a problem. An endpoint in Node and a small React app can be cobbled together in an hour. That same basis in skill scales up nicely. Is there really a complexity problem? It seems like a hand-wavy excuse.
Yeah the hello world stories are always golden paths. People be making bloated, hard to reason about, hard to understand and debug software for some extra money and I'm digging to be unhappy? ok...
npm is easy to use? In my experience npm and the ecosystem that surrounds it are the only package managers I have ever used that consistently don't work, and in terms of design/ease of use in the theoretical scenario of it working properly it's marginally better than pip and significantly worse than cargo and hex.
Any popular language is going to seem like it sucks because everyone is going to have an opinion of it. Yes, even your favorite language that you think would be so much better on the web than JS.
We're stuck with JS because it's easier to just improve it than swap out the entire ecosystem. Now it's quite good.
1. COBOL is still around, and it is much older than JS.
2. COBOL systems are not as entrenched as client-side JS, which is used on websites because it's what browsers support.
JavaScript (TypeScript) is my all time favorite language. I write code in Go, PHP and Python and have coded in C# and Java. But for me nothing beats the simplicity, ergonomics and maintainability of modern TypeScript.
I think you misunderstood the post; they mean that the original purpose of JavaScript has been misused (with the fault lying mostly with the web stack), not that it's a bad language.
Javascript was the first modern high level programming language available to the masses, and easily the simplest to use and deploy. It allowed millions of people a degree of creative freedom only available in universities, and it's the reason the web is more than simply a collection of research papers. I know a good portion of HN wishes it were otherwise, because the endless (and at times deranged) antipathy towards javascript is so constant here there is an entire rule in the guidelines telling people to not just go on a full tilt rant about its very existence when a site requires it. But for everyone else (especially those of us for whom javascript was a gateway drug) it was obviously not a mistake.
The mistake was all of the extra complexity and nonsense that came about when SV realized the web was Serious Business. Having to compile it from a "strict superset" language because Serious Business means strict types and tests, tests, tests. Taking it out of the browser, aligning the entire ecosystem to a single, fragile, badly designed point of failure. Let's abandon the entire field of application programming and just ship an entire Chromium instance and a webapp for everything. What happened to all of my RAM? Who cares, everything is free now, numbers are a lie.
I mean the entire point of frontend frameworks was supposed to be that it would simply be faster to transport JSON diffs rather than entire html pages, and do the rendering in the browser. But of course any such efficiencies wound up being eaten by increasingly bloated and inefficient code. And now people are discovering backend rendering and "vanilla" JS like Howard Carter discovering the tomb of King Tut.
Javascript the language, even Javascript the idea (being able to write and run code on the web), is at worst neutral, and arguably a good idea. At least from the point of view of the old hacker ethos, where the point is to liberate the masses from the gatekeepers of intellect and information in as many ways as possible. But as with everything else turning to shit in our world, blame capitalism for taking something with the potential for good and utterly ruining it.
It's not exponential backoff, but I've done this in RabbitMQ with some queue weirdness.
I have the main queue with a reasonably short timeout (30 seconds). That queue is set up with a dead-letter queue, where failed messages that don't get ACKed get moved.
The dead-letter queue has a TTL of ~5 minutes, and its own dead-letter queue is the original queue.
So basically, if a message fails in a worker, it gets kicked over to the dead-letter queue, which then moves it back to the main queue after the TTL times out. This does mean a message that keeps crashing will cycle forever (so you have to keep a careful eye on how many messages are in the dead-letter queue), but I've managed to work around this so far. Or you can use proprietary extensions (x-delivery-attempts).
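For anyone curious what that topology looks like, here's a minimal pika sketch; the queue names and the `process` function are made up, while the TTLs mirror the 30-second and ~5-minute values above:

```python
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()

# Main queue: rejected/expired messages get dead-lettered to the retry queue.
ch.queue_declare(queue="work", arguments={
    "x-message-ttl": 30 * 1000,            # the "reasonably short timeout"
    "x-dead-letter-exchange": "",          # default exchange
    "x-dead-letter-routing-key": "work.retry",
})

# Retry queue: no consumers; messages expire after ~5 minutes and are
# dead-lettered straight back onto the main queue.
ch.queue_declare(queue="work.retry", arguments={
    "x-message-ttl": 5 * 60 * 1000,
    "x-dead-letter-exchange": "",
    "x-dead-letter-routing-key": "work",
})

def process(body):
    ...                                    # hypothetical worker logic goes here

def handle(channel, method, properties, body):
    try:
        process(body)
        channel.basic_ack(method.delivery_tag)
    except Exception:
        # requeue=False sends the message to work.retry instead of
        # bouncing it straight back onto the main queue.
        channel.basic_nack(method.delivery_tag, requeue=False)

ch.basic_consume(queue="work", on_message_callback=handle)
ch.start_consuming()
```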
The clever thing about Cog is that it's a templating language which is designed to be hidden in comments inside files (such as Markdown) that don't usually support an embedded templating language.
Normally code generation via templates is part of the build process, but this is not always ideal. For example, some templates are simply too slow to run, or need a specialized dependency which is otherwise not required. Or a particular piece of code can be written by hand, but you want to document how it could have been generated automatically. Cog templates are intended to persist: the generated output is written into the same file alongside the generator code, so the result can safely replace the input and be checked directly into the repository without any additional dependency, making Cog useful for such situations.
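To show what that looks like in practice, here's a tiny made-up example with a Python host file (Cog works the same inside the comment syntax of any language):

```python
# constants.py -- both the generator and its output live in the file.
#
# [[[cog
# import cog
# for name in ["apple", "banana", "cherry"]:
#     cog.outl(f'{name.upper()} = "{name}"')
# ]]]
APPLE = "apple"
BANANA = "banana"
CHERRY = "cherry"
# [[[end]]]
```

Running `cog -r constants.py` regenerates only the region between `]]]` and `[[[end]]]`, so the file stays valid Python and can be committed as-is; anyone who doesn't have Cog installed just sees ordinary code with some odd comments.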
(This concept is not that unusual, by the way. CPython itself has a preprocessor called the Argument Clinic [1] which was in fact inspired by Cog.)
Just because you can do so easily does not mean everyone can. I am apparently unable to segment my muscle memory like you seem to be able to (I'm actually jealous you can do that!).
I don't think it's any different from, say, speaking other languages or switching back and forth from countries that drive on different sides. These things just become automatic.
But that's not universal either; I have a friend who started losing English vocabulary when they first learned French. (The collected-anecdote metaphor, not backed up AFAIK by any particular cognitive science, seems to be having "slots" for languages: one vs. two vs. many.)
I just think it comes down to practice, like being able to speak multiple languages or program in different programming languages.
And something that can help is to differentiate your layout physically. In my case an entirely different keyboard, but even something like pressing space with the other hand can help keep them separate.