Putting out the hardware dumpster fire (acm.org)
223 points by peter_d_sherman on June 23, 2023 | 109 comments



I think one of the primary reasons that it is such a dumpster fire is that there traditionally hasn't been an "open" ecosystem in the hardware world, though vendors are now being forced in that direction kicking and screaming.

Every part of the hardware ecosystem has traditionally been done in closed, NDA-ridden environments, and only over the last 5-10 years has that even started to change.

Designing chips has required NDA-based PDKs. Designing complex PCBs has required closed-source EDA tools. Interacting with any of the IC peripherals often requires binary or non-redistributable firmware.

Hell, even with the modern "open" switch architectures, like Trident3, you can't get software or detailed datasheets from Broadcom without an NDA. Same thing with some of the ARM-based stuff, like the Raspberry Pi.


We're aiming to push things in the other direction with OpenTitan: https://github.com/lowRISC/opentitan/

It's an open silicon root of trust: all the RTL (the actual hardware design, in SystemVerilog), firmware, documentation, and the verification environment are open source and in the repository I just linked.

We're closing in on our first discrete chip (details here https://opensource.googleblog.com/2023/06/opentitan-rtl-free... and https://lowrisc.org/blog/2023/06/opentitans-rtl-freeze-lever...) and have lots more in the pipeline (our project director Dom Rizzo gave a keynote at the Barcelona RISC-V Europe summit recently with some details, sadly not available on video yet).

The hope is that this will be a real proof point for the value of open source in hardware and, if it's as successful as we'd like it to be, can push the industry from closed-by-default to one where people have to justify why they're not using open technology.


Can you explain how the root of trust is configured? Is it efuses or some sort of onboard mutable nonvolatile storage? If I buy a system with one of these chips in, am I likely to be able to set my own root of trust, or will the OEM have irreversibly set the root of trust to their own key?


It looks like the chips themselves should support change of ownership. Whether an OEM ships them in an unlocked (transferrable) state is up to them.

https://opentitan.org/book/doc/security/specs/ownership_tran...


If you ever need someone to test your stuff when it makes it into the physical world, would love to help :)

I really want to see an open motor control peripheral that could be used in robotics or UAVs; it's amazing how ubiquitous they are, yet they're rarely as integrated as other controllers.


I like this approach, speaking from a very general perspective (very uneducated regarding hardware, comparatively). Absolutely brings back memories of the 90s and arguments regarding open vs closed source and "security through obscurity".

Security through obscurity doesn't work, ultimately. When economic stakes are / were lower, it CAN have benefits. At this point, I suspect it's likely that more eyes and openness is better.

That said, I do think that the best solution is likely to be based in a mixture of approaches, much as has been pursued (to my knowledge) and developed over time already. However, personally, I'm a big fan of "formal methods" and seeing more real-world deployment of such methods.

In practice, as has been done that I've seen, you start with small & critical subsystems, trying to design for "parsimony" - making formal methods and everything else more realistic / practical (e.g., "microkernels", everyone's favorite 'solution' since the 80s, at least). But, it's all very challenging because then it has to be balanced against performance, cost, etc.

Not sure this comment adds much, here - again, not an area I have much direct knowledge or experience in - but, your comment did bring some analogous areas and work I'm more familiar with to mind.


Most electrical components have traditionally had open datasheets. It is only a relatively recent phenomenon that mass market ICs are locked behind NDAs.


It really depends on the complexity. Sure, 555 timers and even microcontrollers have datasheets. CPUs that run Linux, Wifi chips supporting recent standards, etc.? Rare. This is why Linux in the 90s was a big pain; you get some random Ethernet chip and there is no documentation on how to talk to it, so you're just dead in the water. (I think Android was the turning point. From then on, stuff had to support Linux, because Mobile was the future and Apple wasn't going to buy your chip. But of course, "works on Linux" means that it works in the hardware vendor's 3-year-old version of Linux. And "works" is always up in the air when hardware vendors start writing code.)

When I worked on embedded devices, we would report bugs and the vendor would send us new datasheets with updated known issues; updated to include the issue we reported. Or we'd do something like "the datasheet says it can handle a 100MHz clock but it's very unreliable" and they'd send back a datasheet saying it only supports 50MHz. That's the reason datasheets are closed: every customer gets their own datasheet.

The more you get into higher margin stuff, be it hardware or software, the more you'll run into datasheets or branches just for you. It is weird and inefficient. But cheaper than testing it yourself.


It seems to depend on the market.

For example, for CPUs that run Linux, open reference manuals are still the norm in the industrial range (chips like the NXP i.MX series, the Microchip SAMA ones, or the fairly new STM32MP1), whereas in the mobile market (Qualcomm, Exynos, etc.) you can't get any information without an NDA, and they'll only sign one with you if you're going to be buying millions.

The first category also tends to have good mainline support; the second is stuck on ancient vendor kernels.


> But cheaper than testing it yourself.

It always felt to me like we end up testing it ourselves anyway. Obviously we don't go through the whole spec, but like you, we implement what we need and find the relevant bugs/errata ourselves. It's very painful to run across these, and I don't recall an MCU without at least one that we run into and have to work around, because development is already deep enough that we're economically locked in and it wouldn't be worth the rework to switch to a different MCU. Sometimes the bugs are in the silicon, sometimes they're in the middleware, but both are very painful to root-cause.


I think that they were saying it was cheaper for the manufacturer to not test it and use their customers as bug testers and troubleshooters.


Complex CPUs and peripherals up to around 1995, e.g. up to Pentium, had mostly open documentation, not much different than for a 555.

Only after that did public documentation become progressively more and more restricted.

Some restrictions are quite recent: e.g. AMD stopped publishing the BIOS and Kernel Developer's Guides only with the first Zen, and Intel stopped publishing even summary datasheets for mobile CPUs just a few product generations ago (summary datasheets are still published for desktop CPUs).


> CPUs that run Linux, Wifi chips supporting recent standards, etc.? Rare.

Intel was relatively open with its documentation (including reference schematics -- I think you can still find the 440BX ones on their site somewhere) until around the end of the P4 era, so that covers "CPUs that run Linux". As for the wireless stuff, I suspect a lot of that has to do with regulatory issues.


I mean, sure, for certain x86 things... like 20 years ago, but IIRC their host controllers were not, and that's only a fraction of all CPUs anyway.

Think about how many billions of devices there are where you can't get a simple pinout diagram for the main CPU. I would argue only a tiny, TINY, percentage of Linux devices actually have CPUs/SoCs/SOMs with sufficient documentation to look at from a hardware design standpoint.


StrongARM and the follow-on ARM CPUs from Intel had good documentation too.


>Android was the turning point

>vendor's 3-year-old version of Linux

Android uses the latest LTS Linux kernel.


I only hit NDAs on secure element chips and similar things. 99% of my embedded work is plain ol' MCUs with reams of datasheets available. I don't do PC stuff though.


One common component where things are really secretive is with flash storage. This is because the underlying physical components are pretty commodified, and the software/firmware (like block virtual addressing) is where brands actually distinguish themselves. It's kind of unfortunate since there's a lot of really cool stuff you could do with more control over the hardware like reducing the size of a failing disk to extend its life instead of complex wear-leveling techniques.


If I'm ever going to get on the conspiracy theory bandwagon, it will be on two topics:

1) Return to office.

2) Why we need a dedicated CPU and DRAM attached to flash memory. You can garbage collect and wear level in your OS if you want to. Manufacturers say "no, we have a super secret secret sauce that nobody else could possibly improve upon". Uh huh, sure.


> Why we need a dedicated CPU and DRAM attached to flash memory. You can garbage collect and wear level in your OS if you want to.

The thing I don't get is, the chips are all commodities, and it's not like soldering them to a board is rocket science. Why isn't one of the companies that makes fully-specified inexpensive RISC-V chips selling one attached to some commodity flash chips and an NVMe connector? Include some minimalist open source firmware that offloads most of the work to the host OS and let the Linux community figure out how to make it better.

At minimum it should allow you to undercut the existing competition on price because you're using a less expensive controller and no DRAM, and in a few years the open source drivers could have enough advantages over black box devices that nobody wants to buy anything else.


For starters, I doubt that acting as an NVMe device is something you can do with an off-the-shelf "inexpensive RISC-V" chip with any kind of acceptable performance. An NVMe engine would almost certainly be something that the flash controller would have implemented in fixed-function hardware.

Also, NAND flash practically requires some sort of error correction system - another thing that fixed-function logic in a custom ASIC is very well suited for. Burning host CPU time on that would probably suck.


The basic function of transferring data between the flash chips and the PCIe bus shouldn't be a bottleneck.

Dedicated silicon for error correction might be more efficient, but when there are already fast idle CPU cores it may not matter, and ECC algorithms can already be accelerated by SIMD instructions. Or for high performance machines add the dedicated silicon to the host CPU.


These kinds of high-speed interfaces aren't something that one just bitbangs on some microcontroller.

I think even the first step, i.e. finding a RISC-V chip that is capable of acting as a PCIe device at all, will be a challenge. "Inexpensive", for sure not.


There are RISC-V chips with PCIe controllers on them. I'm not sure if you can use the hardware designed to act as a host to act as a device, but it wouldn't surprise me. In any case I would find it hard to believe that the hardware that can act as a device would cost significantly more.

You can get a whole NVMe SSD for $15:

https://www.newegg.com/silicon-power-128gb-p34a60/p/N82E1682...

The hardware required to do this can't be that expensive.


> I'm not sure if you can use the hardware designed to act as a host to act as a device, but it wouldn't surprise me.

I don't know how it is for PCIe, but for example USB is very asymmetric and host controllers generally cannot act as devices. There do exist dual-mode IP cores (e.g. for mobile SoCs).

> In any case I would find it hard to believe that the hardware that can act as a device would cost significantly more.

Indeed, the area/cost difference between a host-only and a dual-mode controller may be small. But in a general-purpose chip there will be dozens of other competing features in all the other subsystems besides PCIe. Each of those small features would increase the chip cost, so they won't be added without a cost-benefit calculation. So it can still be that while PCIe device support would be cheap to add, it is considered too low bang-for-buck to be worth including. Or there may be such chips supporting PCIe device mode, but they then also include things like video encoding/decoding, HDMI, ethernet, etc., and are thus expensive chips.

> The hardware required to do this can't be that expensive.

With a custom ASIC (the linked one seems to have Phison E12) that includes only the minimal needed hardware, yes.


>RISC-V in charge of NAND storage, presenting NVMe (or SATA) interface.

Aren't WD SSDs just that?


No, they are not just a converter that exposes raw NAND to your PC. Rather, they abstract away all the details of how painful raw NAND is to deal with behind a simple "read/write sectors" interface that all your normal filesystems like ext4, NTFS, etc. deal with.

Raw NAND comes with various pain-in-the-ass limitations such as: only erasable in huge (~megabytes) blocks at a time, only writable in slightly-uncomfortably-big chunks (like 32KB or so), and erasing a given region can be done only a limited number of times before it breaks. Oh, and all those details (like the exact sizes of erases and writes) will change from time to time as manufacturing processes of NAND dies get more advanced.
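To make those constraints concrete, here's a minimal sketch in C of the rules an FTL has to live with; the geometry and endurance numbers are made up for illustration and differ between parts and generations:

```c
#include <stdint.h>
#include <string.h>

/* Illustrative NAND geometry -- real parts differ and change between generations. */
#define PAGE_SIZE        16384          /* smallest unit you can program (write)   */
#define PAGES_PER_BLOCK  256            /* ~4 MB erase block                       */
#define MAX_ERASE_CYCLES 3000           /* block wears out after this many erases  */

struct nand_block {
    uint8_t  data[PAGES_PER_BLOCK][PAGE_SIZE];
    uint8_t  page_written[PAGES_PER_BLOCK]; /* a page is written at most once per erase */
    uint32_t erase_count;
};

/* Erase only works on a whole block, and costs one wear cycle. */
static int nand_erase_block(struct nand_block *b)
{
    if (b->erase_count >= MAX_ERASE_CYCLES)
        return -1;                       /* block is worn out */
    memset(b->data, 0xFF, sizeof b->data);
    memset(b->page_written, 0, sizeof b->page_written);
    b->erase_count++;
    return 0;
}

/* Programming happens a page at a time, and only into an erased page.
 * Changing one byte "in place" means copying the rest of the block elsewhere,
 * erasing, and rewriting -- which is exactly what the FTL hides from you. */
static int nand_program_page(struct nand_block *b, int page, const uint8_t *src)
{
    if (page < 0 || page >= PAGES_PER_BLOCK || b->page_written[page])
        return -1;
    memcpy(b->data[page], src, PAGE_SIZE);
    b->page_written[page] = 1;
    return 0;
}
```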


That's software, not hardware. And operating systems have long dealt with things like variable sector sizes. You want the abstraction layer. What you don't want is for it to be a black box, and what you shouldn't care about is whether the silicon that runs the code is soldered to the flash chips or not.


Why offload it to the host? If the CPU on the NVMe card is well documented then run the flash software on that.

The same goes for network offload features in ethernet controllers, make it an open architecture that you can write your own firmware for.


The host CPU is faster, which lowers access latency, and modern machines have large numbers of generally idle cores. It also reduces the cost of the drive.

Meanwhile you could also have drives with offload engines, in the same way that some network cards have offload engines and some don't. But the ones that do are more expensive.


My guess is because flash chips are quirky and not very interchangeable. The error correction, wear leveling, etc all probably needs to be configured and calibrated to the specific chips used in a way that only can really be done by the manufacturer or a really big OEM like Apple.


I'm trying to imagine why wear leveling would depend on the chips. The number of writes you can get out of them would, but the optimal wear leveling algorithm should be to spread the writes around as much as possible regardless of that, shouldn't it?

Likewise, some chips could require more error correction because they expect more errors. But the amount of error correction is a tunable in the algorithm; it's a space vs. resilience trade off. Where to set the dial shouldn't be hard to derive from the manufacturer's specifications, or empirical testing if the spec is worthless.
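As a sketch of both points (deliberately naive, and not any vendor's algorithm): the "spread the writes around" policy can be as small as a greedy least-worn-block pick, and the ECC dial is roughly a parity-bits-per-correctable-bit trade:

```c
#include <stddef.h>
#include <stdint.h>

struct block_state {
    uint32_t erase_count;
    int      has_free_pages;
};

/* Naive dynamic wear leveling: write into the free block that has been erased
 * the fewest times. Real FTLs also migrate cold data off low-wear blocks
 * (static wear leveling), which is where chip-specific tuning starts to matter. */
static int pick_block(const struct block_state *blocks, size_t nblocks)
{
    int best = -1;
    for (size_t i = 0; i < nblocks; i++) {
        if (!blocks[i].has_free_pages)
            continue;
        if (best < 0 || blocks[i].erase_count < blocks[best].erase_count)
            best = (int)i;
    }
    return best; /* -1 means no free block: time to garbage collect */
}

/* The ECC "dial": a BCH-style code over GF(2^m) correcting t bit errors per
 * codeword costs roughly m*t parity bits, where 2^m - 1 >= codeword length.
 * More correctable bits = more parity overhead = less usable space. */
static unsigned ecc_parity_bits(unsigned t, unsigned codeword_bits)
{
    unsigned m = 0;
    while (((1u << m) - 1u) < codeword_bits)
        m++;
    return t * m;
}
```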


Wear leveling: Just search Google Scholar for "wear leveling" to get an idea for how deep that rabbit hole goes. At minimum think about the difference between placing an infrequently written logical sector vs a frequently written one, and what the optimal strategy would be given physical blocks with varying write life left on them. Then how do you partition between regions you treat as SLC vs MLC? And on and on and on.

Error correction: Not just amount. What block size? Interleaving? How do you trade overhead due to more ECC vs overhead due to more spare blocks (have to decide how many to allocate up front)? What is the behavior vs temperature and what are you running the thing at?

There has been an ongoing push to standardize NAND interfaces but the controller still needs to know a fair amount about the specific chips it's talking to. I don't know how it will resolve but hopefully the integration becomes more like DRAM where things are a bit more consistent. Then again, if you care a lot about reliability you qualify specific DRAM modules too...


> At minimum think about the difference between placing an infrequently written logical sector vs a frequently written one, and what the optimal strategy would be given physical blocks with varying write life left on them.

That's not a simple answer but why would it depend on whose chips you have?

> What is the behavior vs temperature and what are you running the thing at?

This is where you might start seeing differences between manufacturers. But this is also optimization. If you don't have this information the chip should still meet a minimum spec and you can use a conservative value. If you do know it you can do something more efficient. And if this becomes popular the chip makers will start publishing what's necessary to do the optimization.


> That's not a simple answer but why would it depend on whose chips you have?

How do you estimate how much life is left on any given part of the physical flash?

This is one of those areas where neglecting the optimizations qualitatively changes the result. It's like Twitter without scalability: sure, you can build it in a weekend, but what problem does that solve?

Don't get me wrong, I would like to see all of this stuff get better standardized, documented, and open source, but for the most part this is a cost-sensitive commodity and if you want to roll your own SSD qty 1 from bare chips it's just going to be orders of magnitude more expensive than letting SanDisk or whoever do it. Maybe it will be possible to get the hyperscalers and cloud operators to push for the kind of standardization you describe, it would benefit them.


> How do you estimate how much life is left on any given part of the physical flash?

In proportion to how many times it's been rewritten already, or by checking the error rate for what's already written there.

> This is one of those areas where neglecting the optimizations qualitatively changes the result.

It mostly changes how much error correction you have to use. But the nature of it also allows other optimizations.

Suppose you're going to use a multi-drive array. Now you could stripe the error correction across devices and use the same system to recover from device failures, which at the same level of total error correction is more resilient.
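A sketch of that striping idea in its simplest single-parity form (illustrative only; it ignores the per-device ECC you'd still run underneath): XOR parity across N devices lets you rebuild any one lost stripe, whether it was an uncorrectable sector or a whole drive failure.

```c
#include <stddef.h>
#include <stdint.h>

/* Compute the parity stripe as the XOR of the data stripes from each device. */
static void parity_build(uint8_t *parity, uint8_t *const data[], size_t ndev, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        uint8_t p = 0;
        for (size_t d = 0; d < ndev; d++)
            p ^= data[d][i];
        parity[i] = p;
    }
}

/* Rebuild the stripe from a single failed device by XORing the parity with
 * the stripes of all surviving devices. */
static void stripe_rebuild(uint8_t *out, const uint8_t *parity,
                           uint8_t *const data[], size_t ndev,
                           size_t failed, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        uint8_t p = parity[i];
        for (size_t d = 0; d < ndev; d++)
            if (d != failed)
                p ^= data[d][i];
        out[i] = p;
    }
}
```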


I think NAND controllers are fine. If erase is slow, especially at lower voltage levels, then buffer away. I also think internal controllers help mask and work around shitty yields on the arrays.


> 1) Return to office.

In the spirit of unbridled conspiracy theory (i.e. I have no evidence for this and don't believe it is generally true):

Landlords are paying CxOs kickbacks to push return to office in order to prop up commercial real estate.


I think that's basically true but it's not a conspiracy - take Apple, for instance. They have a $5 billion HQ on their balance sheet. If everyone works from home, then their investors will ask the CEO "why aren't you using this asset that costs $5 billion? It's inefficient - sell it off."

The problem here is that if the CEO sells off the HQ (in a market where everyone has a bunch of unused real estate) they'll almost certainly take a loss (and potentially a loss of billions of dollars, if they e.g. can only sell it for $3B), whereas if they keep it then it's technically valued at what it originally cost to build. The CEO's job is to make the company (look) profitable, so the CEO wants to find any excuse to use the HQ, even if it loses a bit of money or doesn't otherwise make much sense.

For companies that lease, it's the same problem or even worse - corporate office leases tend to be e.g. 3 years, and they're locked into paying actual dollars every quarter until the lease term ends, so if they can't claim to be using the office space then they're directly losing cash. Renegotiating the lease is a bad bet as the landlords have basically zero incentive to do so in the current environment. Thus, they want to appear to use their office space.

There are some other niche benefits too - forcing return to office is a great way to reduce employee count without needing to claim you're downsizing (although if you want to downsize then you can totally make the claim), and it makes office perks more valuable.

There are also some tax breaks from local governments that are contingent on having X number of workers in the office, and WFH could potentially require reversing the tax break.

None of these are "conspiracies", they're just sparkling incentives.


Maybe. The most compelling reason I have heard is it's an easy way to get attrition. You don't have to do layoffs, which are expensive emotionally and financially.


In my city office buildings are being converted to residential housing. 3 buildings recently in an area of population size 250k.


What part of the world? I've heard that the cost of converting office spaces to residential is more than building new.


People keep telling me this too, so I’d be super interested to know if it wasn’t the case, because it really feels like a weak/overinflated reason.


The reasons I've seen cited, chiefly individual unit plumbing, don't jibe with modern building construction.

Most office buildings are reinforced concrete floors, supported by internal load bearing columns, with glass facings.

... that's exactly the same as a hotel.

Hotels have per-unit bathrooms, typically stacked on top of each other (floor over floor).

Ergo, you can frame out/lose some space to cut several hotel-like vertical utility paths, plumb pipes from them to each floor's bathroom (or HVAC), and Bob's your uncle.

Actual reasons would be things like office floors are built to lower load bearing standards than residential... which I don't believe is true.


I found some papers [1,2] which claim conversions are not prohibitively expensive, though I assume not all office buildings are equally suitable. Open-plan office in particular seem impractical to convert. I wonder how many of the companies that built them that way to save money are now stuck with an empty building no one wants to buy?

[1] https://pdfs.semanticscholar.org/da2d/49a0090e8631ff08eb4813...

[2] https://journals.sagepub.com/doi/pdf/10.1080/004209805003803...


Waterloo Canada


(1) is really simple: managers want to lower the effort it takes them to interrupt people with sudden meetings.


IMHO the secrecy is more because they don't want people realising how incredibly unreliable NAND flash is becoming. 20 years of retention after 1M cycles? Advertise that proudly to everyone. 10 years after 100K cycles? Not too bad. 5 years after 10K cycles? OK. 3 months after 1K cycles? No, don't tell anyone. The very few leaked TLC/QLC flash datasheets out there don't even have any clear endurance numbers anymore.


I have done lots of flash storage work and thankfully never hit an NDA.


That is my experience too; as for NDAs, Atmel wouldn’t let you see a datasheet without one.

But, I’m starting to see fairly typical data sheets for micros being hidden away behind portals.


There has traditionally been no "open" ecosystem in the hardware world only if "tradition" is understood to reach back no more than about three decades.

The IBM Personal Computer was amazingly open, and that was tremendously important for the evolution of the computer industry because it created de facto hardware standards.

Unfortunately, many managers have been less impressed by the huge benefits this openness brought to society as a whole than by the fact that IBM later failed to exploit the open standards it had itself created as profitably as its competitors did.

Then, slowly after 1990 and more and more after 2000, the ugly fashion of secret documentation and NDAs, designed to prevent competition in the market in the hope of enabling higher product prices, spread everywhere.

It is impossible to estimate whether this secrecy has ever been profitable for the companies that practice it. While it has prevented the appearance of competitors that would have lowered prices, it has also limited the size of the markets in which their products are sold, by restricting their customer lists mostly to existing customers, because many potential customers cannot evaluate whether a product is suitable for their needs.

In order to accept the harassment of an NDA, you must already be convinced that you need that product. Before this annoying NDA practice, it was normal to evaluate a much greater number of products for any new project, and it was much more common to decide on new suppliers.

NDAs may be beneficial for those who strongly dominate a market, who do not hope to grow but only to retain their captive customers, but they prevent the growth of smaller companies. Nevertheless, most small companies appear to stupidly imitate the behavior of the big ones and are equally secretive about their products, which has no rational justification.


I completely agree here. I would say that POSIX and the OSI layers are to software what the PC/104 and ATX standards are to hardware design.

Software designers, being the privileged bunch, wag the hardware dog, whereas prior to the late 90s the limitations of hardware kept software decadence in check. I'm not advocating for dominance of one over the other, but great design requires coordination. I wanted to say "platform independent", but that can sometimes introduce its own issues. I don't think there is such a thing as "too platform agnostic" until it runs into performance and compatibility issues.

A few months ago, Hackaday wrote about the single-board computer ecosystem, lamenting the lack of standards (some of which have more to do with connectors than with the dimensions of the board): https://hackaday.com/2022/10/05/the-state-of-the-sbc-interfa...

I would add to that that SBCs are little more than glorified motherboards, with the major developers - Raspberry Pi, BeagleBone, Orange Pi, Rock Pi - not all discussing their form factors, apparently feeling that a $25 board is too cheap to warrant a discussion of mounting holes the way ITX and ATX got one. I've built PCs since the 2000s. My first was a VIA C7 with an integrated CPU. I still have an integrated ASRock AMD E-350 in an ITX form factor (E350M1), which I can use in an ATX case or in a tiny MBox 350 case. A never-adopted mobile standard, such as Mobile-ITX or EOMA68, would allow hand-me-down boards to be repurposed into laptop cases, such as the Pi-Top v3. Most buyers of an RPi 3 have a desktop PC or laptop and might not use it as one, but someone in the third world might, and the laptop form factor isn't going away anytime soon (despite the interest in VR/AR glasses).

Even if some laptops are getting much slimmer than a Pi-Top v3, there are still plenty of gaming laptops keeping the bulky form factor alive. Among the failures of the OLPC were its battery life and the fact that it wasn't really repairable or upgradeable. This isn't to say there needs to be a laptop for everyone, but a lot of the obvious issues have already been addressed by truly solar-powerable computers, such as the Ambiq Apollo 4, which runs at 6uA/MHz (and they have plans to develop a Linux-capable chip with an Apollo4-like energy footprint). The Raspberry Pi has been a household name for over 10 years, and yet they remain ignorant of competing boards. If you look at all the boards out there - Pine64, Caninos Loucos - it's like "Join my platform! We're the virtuous developers, using all GPL software."

But honestly, I don't think someone who has no PC really cares about that. They might, if they had the time to read about the differences between open source, BSD, and GPL, but the engineers at Broadcom and RPi seem to think that the environment is an inevitable victim, and that not much effort should be spent on component design when it seems like such a low priority for something with only 3-4 years of product support that they aren't even going to think about after its official End of Life. To that I say: if someone handed me a bunch of salvaged parts - an empty laptop case, a Raspberry Pi 3 - I could put it together. But if someone just handed me an RPi 3, a keyboard, a mouse, USB power, and a 720p monitor, and I live in a country with no power, that's not a really practical solution. It's more future-proof to develop a form factor that can outlive its individual components, so that while an LCD display on a modular laptop might only be 13" and use 2 watts of power today, a future one could be retrofitted - like a large Sharp memory-in-pixel display that uses 30mW and could be solar powered - without buying a new laptop case. (For reference, a 4.4" MIP display uses around 5mW.)

At the very worst, SBC makers would have to allocate more empty PCB space to fit a form factor, when they could get by making a smaller one (or at least having to pay for developing 2 form factors, one "universal.")

Yet it wouldn't be as embarrassing as Walmart's $200 Linux PC in 2007, which used an embedded VIA C7 on a Mini-ITX motherboard inside a mid-ATX case to keep up appearances: https://www.wired.com/2007/10/200-everex-gree/ (And there isn't really anything wrong with having an ITX board in a full-ATX case either, especially if installing a 3-slot RTX 4090.)

At best, more users might be willing to buy a universal form factor because they could easily adapt it to components they already have.

Of course, I'm only addressing a very specific issue/request, and I'm not suggesting that technology is going to solve everything. There is a book on the OLPC, "The Charisma Machine", that already addresses its failures. In my opinion, though, a computer is a tool that can serve more as an appliance than a service, much as TI solar calculators were useful tools. Software as a Service and Apple (Hardware as a Service) are not always practical solutions, because not everyone can afford the security updates (and sometimes the application might not even require them, like a public kiosk for general information). I think computers reaching their "end of life" have utility at the very least as an offline encyclopedia and e-library.


> I think one of the primary reasons that it is such a dumpster fire is there traditionally hasn't been an "open" ecosystem in the hardware world

Is it, though? Software thrives because it has an open ecosystem. But it's no less of a dumpster fire of complexity, being slowly buried under the technical debt. We literally have decades-old system designs wrapped into multiple layers of progressively newer systems, and openness doesn't help here. Most software is write-only.

The overarching reason might be the lack of refactoring and feedback loops. It's more related to the production cycles and incentives than to the second-order effects of openness.


Having run my own internal HW/warranty service for many years: HW management is a skill. There are a lot of traces, and a cold joint, zinc whiskers, etc. can wreak havoc, so diagnostic skills are imperative.

Diagnosis is much more important than openness, because truly diagnosing with open hardware requires EE-level skills and time, which are not free.


What forces them toward an open ecosystem?


Customer demand. If you're Google or AWS or any sufficiently large customer and the black box firmware is getting in your way, you have the resources to roll your own. Then the incumbents not only lose their business, they face the prospect of new competition when that company publishes the documentation and firmware because they're more interested in commodifying their complement and getting bug reports on what they're now using internally than in competing in that commodity market.

Then from the other end, RISC-V is starting to get good enough that relatively small companies can put the pieces together into useful open products that compete with closed incumbents.

The hardware vendors are better off to get in front of it and publish the same for their own hardware so they can gain some market share in the time before everybody is doing it.


That's interesting, thanks!


One long-term project I’m planning is actually a fully open source desktop computing platform. While originally meant as a learning project, I realized a few years ago Ben Eater[0] has done this in a way far superior to anything I could create myself, so I started focusing on very basic hardware, beginning with power supplies. My goal ultimately is to select a processor that is as close to open source as possible, design a motherboard around it, make it fast enough to be suitable for general purpose uses, design a PSU, daughter boards (hot swap SATA backplane, I/O ports), a few PCIe x16 lanes, and ultimately a custom graphics card.

Designing the motherboard is surprisingly easy; the way PCIe is set up makes routing high-speed connections fairly straightforward, and most I/O chips are just some sort of bus input and the interface output. The hardest part is finding ICs that actually have good documentation not locked behind an NDA, or that have good alternatives, as one of my criteria is that every chip I select must have a pin-for-pin drop-in replacement available. 6Gb/s SATA is the hardest one to source. I suspect this problem will only compound if I ever get to creating a graphics card.

[0]: https://eater.net/


This is something I'm also working on, but using off the shelf hardware for now and working on the OS first[0]. I'm going for something lower spec and portable for now, mostly because there are a fair number of relatively well documented RISC-V processors these days (the single core, 1 GHz C906 CPU inside the SoC probably technically counts as open source!), where it seems feasible to write drivers. I'm not sure it would be a wonderful "full desktop" solution (it's close to the original Pi Zero in terms of performance), but I hope to target larger chips as the OS gets more mature. The i.MX8 they mention in the research paper is actually one of the ones I've considered, it's also used by the MNT Reform (definitely worth a look if you haven't seen it before, and are designing your own larger scale computer!), and the Librem 5 (which I got after many years of waiting, and don't have a ton of use for right now).

I actually would like to eventually play around with treating all the "hardware bits" inside of the computer, like the wifi chip, graphics card, etc., more like a distributed system than black boxes, but this generally comes down to writing a lot of the firmware myself (luckily this is a hobby/research project, so "non viable" solutions are still on the table).

The more immediate goal for me is something portable, and comparable to a palm pilot/blackberry in terms of performance and capabilities. The current hardware will likely have an ESP32C3 for wifi (32-bit RISC-V), the main D1 CPU (64-bit RISC-V), and an RP2040 (32-bit Arm Cortex-M) for keyboard and IO, so I'll be able to test out some of my "network of tiny computers that make one small computer" ideas.

[0]: https://onevariable.com/blog/mnemos-moment-1/


I love this concept, and it's why my current designs are littered with E-keyed M.2 ports and the only real I/O on my board is USB-C and a few USB-As. The advantage is that I can supplement built-in peripherals with USB devices until I build them.


Does the Talos II not fit this already?


Who wants to spend tens of thousands of dollars for that?


Right, I’m not quite sure where it fits within the market. It’s quite pricey for pro-sumer and not enterprise-y enough for larger companies to buy into it.


It's about the fastest system you can get if you want open hardware. There are people willing to pay a premium for that. It's obviously not going to find a mass market at that price.

But maybe they'll make enough money to develop a lower-priced one that can reach more customers.


My understanding is their “Blackbird” model is meant to be the lower priced model, though Raptor acknowledges the whole line was never going to be mass market; rather, a well made and open computer for those who want full custody over a “modern”/current desktop or server platform.


> There are people willing to pay a premium for that.

I wish this existed in the laptop space, where I can't even buy what I want.

But the Talos II exists, where you can get 4 threads per core for the low price of... oh, just $8,000.

That's almost one and a half MacBooks!


I like the boldness of the title and the simple clarity of the writing.

Academic papers are frustrating to me in that they seem to use esoteric language in order to create a veneer of importance, sometimes above trivial real content.

This stands against that trend and harkens back to papers of old.


> they seem to use esoteric language in order to create a veneer of importance,

It's kind of true, but also I think the real answer is that you're not the target audience.

A lot of the time you use esoteric language because part of what is being worked out in these conversations is:

1. what are useful and meaningful abstractions?

2. which abstractions are seeing uptake?

3. Who gets credit for the abstraction?

This means papers are very often introducing new terminology as part of an exploratory conversation. Communication is optimized for the people engaging in that process.

As time passes most of these terms fall into disuse and the field circles around a few winners which then pass into the wider world. This stage is much more approachable and this content is what most of us are used to consuming.


Nah.

Nobody complains when you invent words or phrases to add clarity, but most academic papers are NOT doing this. It's an academic circle-j** where all the authors play the same game to try to fit in with each other.

And in this respect, you're 100% right. Professionals who make a living doing work, even the ones who know more about the topic than the researchers, are not the target audience. The target audience is the others participating in the circle-j** who determine if the paper should be published.

And no, I'm not saying that researchers are worthless and don't know anything. Merely that they have different purposes and standards for their work. And it has always been this way, in every field.


>Professionals who make a living doing work, even the ones who know more about the topic than the researchers, are not the target audience.

You know what's the shittiest spot to be in? Being a PhD student who is also one of those working-class professionals. Can you imagine working on something you know represents a significant technical challenge in industry, writing a paper on it, and then having it get rejected because it's "just engineering"?

My personal favorite (i.e. in my area) is that the original PyTorch paper isn't publishable for this reason (but is simultaneously, through its arXiv record, the highest-cited ML compiler paper of all time).


I'm purely curious at this point, I have nothing else to add here---what's your favorite example of the other failure, the type where all the researchers in the industry hype something and then 10 or 20 years later, you can look and say "well, that was a swing and a miss..." ?


Uh I'm not a pundit or historian or whatever but I don't think this is possible except in a negative externalities sense (PFAS, climate change, etc). If you hype something in industry and it doesn't work then you or your company get fired (or something like that). While in academia you can write a paper that "proposes" or is "toward" something or has an implementation in the grad student's personal dialect of whatever language (ie shit code). Granted shit code abounds in industry too but at least in industry someone is going to review some aspect of the implementation before, during, or after it ships.


I know it’s probably asking too much but I wonder what the world would look like if academic papers were actually fun to read.

Brian Greene is probably considered a pariah around here but I really enjoyed reading his book The Elegant Universe.


I'm not sure. Imagine if life-critical safety documents were "fun to read". Or material safety data sheets. Or assembly instructions for heavy machinery. Or even code.

All of this stuff needs to be extremely clear, precise, and readable by professionals. That's not the same as being "fun to read", and certainly not by laypeople. You can't really know whether an academic paper is clear, precise, and readable unless you're a professional in the sub-discipline the paper was written for.


Here's a Usenix presentation by the last author, Timothy Roscoe, from August, 2021 on YouTube. [1]

He makes a very compelling case for greatly expanding the concept of an operating system into all of the hardware in a given computing environment.

[1] https://www.youtube.com/watch?v=36myc8wQhLo


> He makes a very compelling case for greatly expanding the concept of an operating system into all of the hardware in a given computing environment.

I haven't watched the presentation, and just glanced at the paper, but while I see how this would be an improvement, I'm not sure if giving operating systems all that power would be an absolutely good idea.

While I certainly don't trust the firmware that runs on my devices, in most cases it's self-contained to the device itself. Modern operating systems are themselves a dumpster fire of security issues and spyware, built by corporations who realized they can also profit from their user's data, and they take advantage of that in subtle and nefarious ways. Even Linux distros aren't safe, and users need to be under constant vigilance that the software they're running isn't actively tracking or exfiltrating their data. These are much larger and widespread threats than any ones caused by firmware.

I think the solution must involve a radical simplification of hardware, and a migration towards open source platforms. RISC-V looks very promising in this regard, and, coupled with a reasonably secure OS, is the best path towards safer computing.


"Dumpster fire?" You are too kind. It's a cesspool, and a leaky one a that. Mostly Microsoft fault, and it has been now known to be getting worse.


> Mostly Microsoft's fault,

It's the lack of capability-based security, and a common tragic misunderstanding of what it is, that is the root cause of this dumpster fire. This affects all operating systems in common use, including all versions of Linux, macOS, Windows, etc.

Imagine if the only way to buy an ice cream cone were to give the other person full access to your checking account, forever. That's what phones do, and as a result, we've learned to call that a "capability".

Imagine taking out a $5 and handing it over to pay for that ice cream... the most you can lose, ever, is $5. That is what capability based security does. You interactively chose what side effects you'll allow (by picking bills out of your wallet), at the time of purchase.

For desktop apps, like an office suite, all you have to do to port code over is change the calls to file selection dialogs (which return a list of file names) to "powerbox" calls (which return handles), then use those handles instead of directly opening the files. The user doesn't see any difference in behavior, but they are now firmly in control of things, instead of your program.
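For illustration, here's a minimal sketch of that porting recipe in C. powerbox_choose_for_reading() is a hypothetical stand-in for whatever the platform's real powerbox call would be, stubbed out here so the example compiles; the point is only that the app receives an already-open handle instead of a name it opens under its own ambient authority.

```c
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

/* Hypothetical powerbox API: a trusted system component shows the file dialog
 * and returns an already-open, read-only descriptor for whatever the user
 * picked, or -1 on cancel. This stub just simulates it for the example. */
static int powerbox_choose_for_reading(const char *prompt)
{
    (void)prompt;
    return open("/tmp/example.txt", O_RDONLY);  /* stand-in for the real thing */
}

int main(void)
{
    /* Before: the dialog returns a *name* and the app opens it itself,
     * which only works because the app could have opened anything at all:
     *
     *   char path[4096];
     *   file_dialog(path, sizeof path);
     *   FILE *f = fopen(path, "r");
     */

    /* After: the app never sees a path, and holds no authority beyond the
     * one handle the user explicitly granted. */
    int fd = powerbox_choose_for_reading("Choose a document to open");
    if (fd < 0)
        return 1;

    FILE *f = fdopen(fd, "r");
    if (!f) {
        close(fd);
        return 1;
    }

    char line[256];
    while (fgets(line, sizeof line, f))
        fputs(line, stdout);
    fclose(f);
    return 0;
}
```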


> all you have to do to port code over is change the calls to file selection dialogs (which return a list of file names) to "powerbox" calls (which return handles),

Actually you don't even necessarily need that. Assuming you have operating-system-provided file selection dialogs, you can just return "/app/d41d8cd98f00b204e9800998ecf8427e/spool/whatever-the-fuck-it-is.ext", and only allow the app to read and write that directory. Although that's a bit more balky in general, it can handle most use cases without even needing to modify the offending program.


I asked a friend about why their company could not keep their devices up to date for more than two or three years.

Basically, they did not have long-term maintenance agreements in place for the underlying subcomponents. The subcomponents were filled with vendor coded firmware blobs. There was some desire to fix this going forward for marketing purposes, but little desire to go back in time to fix old devices that were short on support.


From the abstract:

> The immense hardware complexity of modern computers, both mobile phones and datacenter servers, is a seemingly endless source of bugs and vulnerabilities in system software.

> Classical OSes cannot address this, since they only run on a small subset of the machine. The issue is interactions within the entire ensemble of firmware blobs, co-processors, and CPUs that we term the de facto OS. The current “whac-a-mole” approach will not solve this problem, nor will clean-slate redesign: it is simply not possible to replace some firmware components and the engineering effort is too great.


30 years ago, I said "When Moore's law finally stops, we will go back and clean up all the kludges we committed to over the decades." Then I realized that was how we would know that Moore's law was dead - when we started cleaning up our mess.

I'm sorry to say we are there. Just after Mr. Moore himself died, his law died.


I thought that when the Intel Management Engine implementation details (hidden Minix!) were released, it would only be a matter of time before a "full FOSS" movement took hold within enthusiast communities.

I was wrong, it seems.

There hasn't even been a push for an affordable 802.3ab gigabit ethernet card with an FPGA.


An Ethernet card built from an FPGA, or one with an FPGA on the side? Because in both cases you're talking about the fact that there are basically three companies providing FPGAs, and none of them seem particularly interested in killing their high-margin markets. Worse, the "DPU" crowd are largely selling about $100 worth of compute for $3k+ because it has an attached 100G port.

OTOH I assume the next step for the icestorm/etc folks once they actually have a reasonable toolchain working on the lattice parts is to actually find a company willing to produce a low cost open FPGA and write the backend bits to work with the toolchain.

Similarly there are various open ethernet mac projects floating around, although AFAIK none of them actually have been fabbed into something a random consumer can buy.

So much of this is really just the broken VC model in the US: no one is interested in funding stable but low-margin businesses when they can gamble on high-risk/reward ones. And banks won't loan people money without some kind of collateral. I suspect you have to look to China etc. at this point to fill in the "sell a crapload with little margin" business (ex: Lenovo). Which makes it sort of inevitable that they steamroll everyone, because if there is one truth in the tech field, it's that the guy with the most volume tends to win long term.

Along those lines, the silicon consolidation has really been a negative in many aspects of the market. If one could still buy current-generation PCIe switches for less than the price of an entire computer, it would still be possible to find motherboards with shared device bandwidth (aka a half dozen x4 M.2 slots, or three x16 slots that aren't electrically just x4s) all hung off an x16 Gen4/5 slot. That's really the scam here: in my case I have a number of Gen3 parts which would all work great behind a single Gen4 x16 port with a switch, but instead I have to waste Gen4 slots on Gen3 devices.


> I assume the next step for the icestorm/etc folks once they actually have a reasonable toolchain working on the lattice parts

Wait, they don’t? What’s missing?


It's not a democracy -- you discovered the implementation that is convenient to the towers of executive authority. There is no change just because it is more widely known; in fact, boldness in tracking activity in real time and building dossiers on users for "ads" is increasing, along with the stockpile of money available to implement that.


> Indeed, these cores and their firmware usually explicitly sandbox Linux on a corner of the chip (the “application cores”), preventing it taking any meaningful role in managing and securing the platform. On an Android phone, Linux is effectively an application runtime.

The reason Linux is sandboxed into a corner of the system on an Android phone is because the phone is more interested in not trusting you than it is not trusting the SIM chip's processor, cellular radio, etc.

Google don't want you:

* getting unrestricted access to digital content they or others sell and make available on the platform. Hollywood won't let them sell movies on Play Store if Google can't guarantee you won't be able to make a copy of the files on the device, or output HDMI to a recording device. Ditto for Netflix. Ditto for record companies/RIAA. Ditto for magazines and newspapers. The list goes on. Your smartphone, no matter what anyone tells you otherwise, is primarily designed as a media consumption device. Its secondary purpose is a cloud services consumption device (Google or Apple) and last on the list is "a device for you to use however you want." For example, on iOS, the Files app and any file save dialog always defaulting to your iCloud folder, and the inability to set a default save location, is not even remotely an accident. Apple forces you to, every single time, select somewhere else if you don't want to save to iCloud.

* disabling the methods they use to track you or opting out of the data collection they perform wholescale and continuously.

* disabling the methods application publishers use to track and collect data about you.


I'm not sure what fire has been put out here. IMO, a more convincing proof of the usefulness of this approach would be that it allows finding whole new classes of exploitable vulnerabilities that can then be corrected ahead of time. That's how static code analysis tools typically demonstrate their value. Without such proof, the description they give of a hardware platform is intellectually interesting but not clearly useful.


I think there’s potential for a second-order effect here: generate enough vulnerabilities and SoC designers will start to put it out themselves.


Absolutely. Fire won't be put out unless someone shows exactly how much there is...


> On an Android phone, Linux is effectively an application runtime.

This is a good way of putting it, and it’s sad

Our OS is no longer open source -- they built the real one underneath


> The immense hardware complexity of modern computers, both mobile phones and datacenter servers, is a seemingly endless source of bugs and vulnerabilities in system software.

I'm a bit torn here. On one side, I'd really love to have equipment that can be proven to be bug-free in theory or in practice. On the other side, often enough the only thing allowing me as a user to actually use a computing device for things the manufacturer did not intend - everything from running my own software, to running pirated games, to using ad and tracking blockers - is (serious) security bugs. Or, when one uses the manufacturer-provided options to load one's own software, the device irrevocably bricks parts of itself, as Samsung's Knox environment does after rooting your phone, which breaks a ton of stuff - most notably Netflix DRM and Google Pay.


I'm basically on the "other side". I know some people have genuine security concerns, but I still find it amusing when people complain about devices being "obsolete" once they no longer receive updates, when I avoid them like the plague because all they ever do is remove intended and unintended freedoms.

But maybe we're in a trap: because we can use hacks to do what we need, it's not worth the effort of switching from Android etc. But if these hacks were fully gone, alternatives would find it easier to get off the ground.

I have to hope so, because we're going to find out: these systems are only getting more secure and locked down with time.


Are vertically-integrated SoCs (Apple Silicon, NVIDIA Tegra, etc.) subject to the “OS only controls part of the hardware” problem?

I always figured that in these systems, there would be power-efficiency and BOM-cost pressure to “de-vendor” the SoC by replicating IP-core functionality with logic merged into firmware or a few larger service cores — or even kernel daemons run on the efficiency cores of the application processor. I would expect e.g. the fan controllers on these systems to be a part of the PCH’s event-loop logic, rather than its own little microcontroller with its own little firmware.

If this doesn’t happen—why doesn’t it?


> or even kernel daemons run on the efficiency cores

When (not if) the kernel scheduler locks up while the fans are slow/stopped, at the same moment some scheduled user process begins driving 250W+ through a CPU, the isolated fan controller (not to mention the isolated thermal throttling logic, also independent of the OS) has great value.

> If this doesn’t happen—why doesn’t it?

Hardware evolves faster than operating systems and has complex requirements of which operating systems are oblivious, and there is no world where device manufacturers and consumers will tolerate being gated on operating system evolution. These complications have to run somewhere so independent processing elements appear and grow. This pattern delivers ever crucial compatibility by hiding new complications from legacy operating system interfaces.

The only path I can imagine that would change this paradigm requires a.) greatly increasing the reliability and security of operating systems to the point where they can be trusted not to fail and damage hardware or corrupt things and b.) generalizing all known and future hardware and operating system interfaces such that hardware drivers (which, due to the more rapid evolution of hardware, do not enjoy the long term maintenance cycle of operating systems) can be highly portable, both across platforms and forward. Both of these requirements are "Hard" as in we have only begun to achieve this in primordial ways.


The M1 mac laptops have gone partway in this direction, though they still have a great deal running as firmware: the speaker drivers can drive a lot of power into the speakers in order to get better and louder sound, but this opens up the possibility of a particularly aggressive waveform damaging the speakers from overheating. The OS is responsible for ensuring that this doesn't happen by modelling the power dissipated in the speakers and limiting the power when they are likely to be getting too hot.
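For a flavor of what that OS-side protection can look like, here's a generic first-order thermal-model sketch in C. This is emphatically not Apple's actual model (whose structure and parameters aren't public); the parameters would come from characterizing the speaker.

```c
/* Generic first-order speaker thermal model -- illustrative only.
 * Parameter values are made up for the example. */
struct speaker_thermal {
    double temp_c;        /* estimated voice-coil temperature            */
    double ambient_c;     /* assumed ambient temperature                 */
    double r_thermal;     /* thermal resistance, deg C per watt          */
    double tau_s;         /* thermal time constant, seconds              */
    double limit_c;       /* temperature above which we back off         */
};

/* Advance the estimate by dt seconds (assumes dt << tau_s) given the
 * electrical power just driven into the speaker, then return a gain in
 * [0, 1] that the mixer should apply to the next chunk of audio. */
static double speaker_thermal_step(struct speaker_thermal *s, double power_w, double dt)
{
    double steady = s->ambient_c + power_w * s->r_thermal;  /* where temp is headed  */
    s->temp_c += (steady - s->temp_c) * (dt / s->tau_s);    /* first-order approach  */

    if (s->temp_c <= s->limit_c)
        return 1.0;                                          /* full volume is fine  */

    /* Soft limiting: fade the gain out over a 10 degree band past the limit. */
    double over = s->temp_c - s->limit_c;
    double gain = 1.0 - over / 10.0;
    return gain > 0.0 ? gain : 0.0;
}
```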


You don’t seem to be paying attention to the words “vertically integrated” in my question — which were the whole point of my question.

Apple creates both the OS and the hardware for their phones. So why shouldn’t the hardware microcontroller firmware for the hardware they create (or commission and constrain the design of), run as one or several RTOS blobs that ship as part of the OS on one or several shared MCUs?

Let me put it this way: in x86, the motherboard has a power controller, but the CPU also has its own power controller. But on a vertically-integrated SoC system, I would expect there to be only one power controller, that controls power for the CPU, the other cores on the SoC die, and the various other components on the logic board. And I would expect that power controller, whether standalone or integrated into the CPU, to be running first-party code written by the system integrator (who is also the designer of the logic board, the SoC, the CPU in the SoC, and the power controller itself) rather than running code written by some vendor who was writing it in a “this chip could be used in many different builds, so it must be designed defensively for badly-integrated systems” manner.


One thing is that even in a vertically integrated company you don't necessarily have great communication between the teams responsible for one part or another. And so if there's not a great technical push for it, it can still make sense to have a bunch of silos to simplify the interfaces between different parts of the system.


> You don’t seem to be paying attention to the words “vertically integrated” in my question

That's because vertical integration doesn't actually solve the core problems.


I’ll go as far as to say this approach is also doomed to fail. We gotta standardize and rewrite everything from scratch, both hardware and software; otherwise the current mess will never be tidied up.


"A key result of this work as applied to existing hardware will be whether it is possible to assign less-than-complete trust to any of the black-box components in an SoC, or whether current HW design practices are incompatible with building secure, correct systems."

Having been exploring this research area for a while, I fear it is the latter: there is no clean formal model that will apply to real hardware as currently construed.


[pdf]


I've noticed the trend of increasingly complex hardware with little or no change in performance since the 90s, and I've ranted about it extensively since I joined HN. I think we're past considering the problem from a technical perspective and need to look at the conspiratorial forces involved.

To me, the problem is one of monopoly and corporate thinking. When Intel and AMD make most of the chips, we end up with a Coke vs Pepsi mentality. There may be a hundred other hungry manufacturers, but without access to capital they'll never get enough traction to scale.

Around 2000, FPGAs were set to go mainstream and let us design CPUs of our own, but Xilinx chose to keep them proprietary, so they never evolved. It's unfortunate that the company most capable of thwarting the status quo is the one propping it up. But they were trendsetters; that's how it's going with all things tech now.

Then Nvidia mostly monopolized the GPU industry and sent us down the SIMD rabbit hole. So we can process large vector buffers at great personal effort... and that's about it.

What I, and I think a lot of people, probably want is a return to big dumb processors. Something like a large array of RISC-V cores with local memories, connected via content-addressable memory and copy-on-write, presenting a symmetric, unified address space that can be driven with traditional desktop programming languages. Had a CPU like this kept up with Moore's law, we would have had a 10-core machine in 2000, a 1,000-core machine around 2010, and a 100,000-core machine today, reaching between 1 million and 10 million cores by 2030. Which shows just how incredibly slow CPUs are vs what they could be.
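
(Back of the envelope, assuming a classic doubling every ~18 months from 10 cores in 2000; my figures above are rough, the order of magnitude is the point:)

    # Core-count projection: start at 10 cores in 2000, double every 18 months.
    def cores(year, base=10, base_year=2000, doubling_years=1.5):
        return base * 2 ** ((year - base_year) / doubling_years)

    for y in (2000, 2010, 2023, 2030):
        print(y, f"{cores(y):,.0f}")
    # -> roughly 10; 1,000; 400,000; 10,000,000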

The situation is so bad that I've all but given up on things ever improving. I mostly think about getting out of programming now and going back to living a normal life in the time I have left. Because the writing is on the wall: SIMD will deliver a narrowly conceived notion of neural network AI that's "good enough" and will stop all further evolution of CPUs. We'll miss out on the dozen or so other approaches like genetic algorithms and be forever separated from simple brute-force methods that "just work". The AI will soon be like Mrs. Davis and the vast majority of people will be face-down in their phones, then connected neurologically. The arrival of WALL-E and Idiocracy will coincide with the destruction of the natural world by 2100 and that will be that.

The argument really comes down to centralized control vs distributed resilience and freedom. It's pretty obvious which one we have right now, and that it's getting worse each year. Now I look at each headline with growing weariness, picking up on which mistakes will be doubled down on, which kool-aid they want us to drink this time. Because without the intervention of at least one successful internet lottery winner, or a concerted effort by thousands of hobbyists, there's simply no viable path from where we are now to where we could be, making the vast majority of the work we do a waste of time in the face of the potential we might have had.

It's hard to write anymore without sounding like a fringe lunatic projecting frustration on a world that is blissfully unaware that anything's even wrong. I'm probably wrong for doing so. Just another reason to probably get out of this business.


What do you mean "little or no change in performance since the 90s"? Computers now are hundreds if not thousands of times faster than in the 90s.

Also, if anything, modern desktop hardware is so complex not because of centralization, but because of decentralization. If your computer was made by a single manufacturer, that manufacturer could optimize the final product as much as it wanted, because it'd have control not only of each component, but of the communication between each pair of components. It'd only need to make sure that the programming interface remained the same, so that the software could still function. Because different companies make your CPU, your motherboard, your storage, your memory, etc. and they all have to agree on some protocol so the parts can talk to each other, each manufacturer focuses their optimization efforts on the part that they manufacture, irrespective of what the rest of the system is doing. That's how you get SSDs running garbage collectors, GPUs with little operating systems, motherboards with little operating systems, etc.


They might be referring to the fact that, e.g., opening Microsoft Word today can take significantly longer than opening Word 2003 on Windows XP (or similar). See also Wirth's Law[1] and such.

EDIT: though they focus on the hardware-could-be-better side, rather than the software-could-be-less-awful side (:

There have been some good outspoken critics of how software bloat has canceled out (or more than canceled out) modern hardware gains. Casey Muratori, Jonathan Blow, and Mike Acton come to mind; they have some good material on the subject [2][3][4]. Some of the issues you mention, w.r.t. the hardware being discombobulated, are addressed in that first blog/video. No big surprise these people come from a video-games background; video game hardware is traditionally much more tightly bound together.

[1] https://en.wikipedia.org/wiki/Wirth%27s_law

[2] https://caseymuratori.com/blog_0031

[3] https://www.youtube.com/watch?v=pW-SOdj4Kkk

[4] https://youtu.be/rX0ItVEVjHc?t=4211 (timestamp, but the whole talk is good)


> What do you mean "little or no change in performance since the 90s"? Computers now are hundreds if not thousands of times faster than in the 90s.

Do you seriously think the person you’re replying to didn’t know this?


Well, it's quite hard to interpret that statement any other way, so I can understand the request for clarification, including a direct contradiction of the obvious interpretation.


FPGAs will never be mainstream for replacing CPUs or GPUs: the performance overhead of an FPGA compared to dedicated silicon is huge and not going down, regardless of how open or closed they are. FPGAs have their uses in niches, but I don't think they'll ever be something the average user has in their device, and they certainly won't be the only thing in there. Anyone wishing to disrupt the way mainstream compute is done will need to make silicon, and competitive silicon (thankfully, this is more accessible than it has ever been, thanks to the fabs no longer being vertically integrated with chip-design companies, but it's still a very large barrier to entry). It would be interesting to see someone with the resources make a 100,000-core machine, but I doubt the results would be very good (see GreenArrays).

Similarly, genetic algorithms are not actually very good. Their only advantage is that they kinda work for a wide variety of wonky problems, but there's almost always a better option, and usually it's far better than a GA.


The point of FPGAs is not to make competitive CPUs. It’s to _prototype_ them. Give wide access to that, and more people will be able to design cores and experiment with different design trade-offs, or even computing models.

Granted, at high volumes a soft core on an FPGA is much more expensive and much less capable than a corresponding ASIC. At low volumes, however, it’s the only affordable alternative, and that makes it the gateway to hardware design by the people (and increasingly, for the people as well).

By the way, I recently purchased a Tillitis key¹, a security dongle with a RISC-V soft core (PicoRV32) running on the iCE40 Ultra Plus FPGA. I absolutely love their approach, where you can run arbitrary programs on the key, and each program gets an independent secret seed (derived from the key’s secret and a hash of the program). Sure, this FPGA soft core is much less powerful than an equivalently priced hard core would have been, but I’m not sure they could have done such an open design without it.

Especially the upcoming unlocked version, which will allow customers to customise the “hardware” itself. I personally can’t wait; I’d like to experiment with a couple of optimisations.
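
For the curious, the per-program seed derivation works roughly like this (a simplified Python sketch, not Tillitis’ exact construction; the secrets and sizes here are made up, and on the real key this runs in device firmware):

    import hashlib

    UDS = bytes(32)          # unique device secret burned into each key (dummy here)
    USS = b"my passphrase"   # optional user-supplied secret

    def app_secret(app_binary: bytes) -> bytes:
        # Measure the application, then mix the measurement with the
        # device secret (and user secret) to derive a per-app seed.
        app_digest = hashlib.blake2s(app_binary).digest()
        return hashlib.blake2s(UDS + USS + app_digest).digest()

    print(app_secret(b"signer app v1").hex())
    print(app_secret(b"signer app v2").hex())  # a completely different seed

Same device and same program give the same seed every time; change a single bit of the program (or move to another key) and you get an unrelated one, which is why each app can have stable keys without the device storing any of them.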

[1]: https://tillitis.se/



