I think that the patent threat will crumble as more and more complex features are implemented in the open-source cores, like BOOM and so on. I don't think that ideas that are taught commonly at university and are all over GitHub will be able to stay "owned".
The question then will be how close we can come to the proprietary systems, and which use cases are low-key enough that their cutting-edge new innovations don't make a difference.
I think the microcontroller space will be wholly owned by RISC-V in the long run. For cheap, commoditized stuff, open is better than closed for everyone. For the high-performance stuff, the manufacturer still needs to protect their investment, especially because they will need to invest more and more for ever smaller gains.
Do you think that makes sense, or am I fantasizing? :D
Someone eventually has to build the hardware and that's the really expensive part. Arm has the upper hand here as they deliver more than just an ISA, they can sell you a whole silicon and software stack including engineering expertise to help get your thing to market. Plus they control the ecosystem so you're not going to run into odd edge cases (e.g. Loongson which is sorta MIPS). Not saying that companies like SiFive are not competent but they are just one player in what will become a very big game.
> There is stuff like that going on in PCB design, where manufacturers make free design software or distributors make footprint libraries.
Totally different things. Asking TSMC how to design a RISC-V chip is akin to asking Advanced Circuits or OSH Park how to design an Intel motherboard. They can make the board for you, but they don't design it.
> they can make the board for you, but they don't design it.
TSMC is a manufacturer, but they still have a whole IP/Parts library that they share with their customers.
> TSMC's Design Service Alliance partners provide industry-leading, silicon-verified libraries, IP and design services directly to designers in fabless semiconductor companies, IDMs and systems houses. DSA partners provide best-of-class, silicon-verified libraries; high-performance, leading-edge intellectual property cores; unprecedented accuracy in design simulation, validation and verification; faster cycle time from specification through tapeout to finished wafers; and access to experienced designers and developers of complex functions.
It's not about asking them how to design a chip, it's about them providing the tools needed to go from a Verilog design to something they can manufacture.
So I would say that to that extent they do: the foundries provide dev kits with cells to use on their process, and there's definitely the same incentive — good reusable IP gets you products faster, which gets them more business. I think a lot of the landscape is just driven by the sheer cost of getting it wrong. Spinning a PCB is a bummer. Needing a new mask set is so much worse.
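To make the "Verilog to something they can manufacture" step concrete: the foundry's dev kit typically includes a standard-cell library as a Liberty (`.lib`) file, and open tools like Yosys can map generic RTL onto those cells. A rough sketch of that mapping step (the file names here are placeholders; the `.lib` would come from the foundry's kit):

```shell
# Synthesize placeholder RTL (counter.v) and map it onto a foundry's
# standard-cell library (cells.lib) using Yosys. The output netlist
# references only cells the foundry knows how to fabricate.
yosys -p '
  read_verilog counter.v;          # load the generic RTL
  synth -top counter;              # generic synthesis
  dfflibmap -liberty cells.lib;    # map flip-flops to library FF cells
  abc -liberty cells.lib;          # map logic to library gate cells
  write_verilog -noattr netlist.v  # emit the mapped gate-level netlist
'
```

The netlist then goes through place-and-route against the same dev kit before anything resembling a mask set exists.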
There is eFabless, among other efforts in the vein you describe; they do a multi-project wafer shuttle thing that Google sponsors using SkyWater. It's supposedly an open-source PDK, though I haven't used it.
Absolutely agree with GP on microcontrollers. RISC-V is practically everywhere now, from video encoding/decoding engines to inside GPUs.
And that cost is exactly why the mainstream view on the internet and HN — that SiFive or RISC-V is going to crush ARM on server/desktop/smartphone computing — makes zero sense. Despite repeatedly asking, I've never heard what the competitive advantage is supposed to be. Of course, China betting everything on RISC-V is a different story.
And this submission only has 52 upvotes with very few comments, i.e. it will never reach mainstream thought. Although I don't think any of the RISC-V Evangelism Strikeforce would want to hear this anyway.
> "For cheap, commoditized stuff, open is better than closed for everyone."
Having an open core is only half the battle. Cheap, commoditized stuff is tuned and tweaked to within an inch of its life for the available foundries and the specific processes on which it is being fabbed, both to squeeze out maximum yield and to give the highest possible performance at minimum power. That's very expensive engineering, and investors are going to want to earn their money back.
I think for the high-performance stuff, history will repeat itself like with China's silk production. The "silkworms" (today's silicon IP) will eventually become open and everyone will know how China did it.
China’s silk industry attempted to prevent replication by withholding information: in today’s lingo, that’s not patents, that’s trade secrets. Those are intentionally mutually exclusive: the motivation for patents was quite literally to prevent knowledge being kept secret and eventually lost by granting time-limited exclusivity in exchange for public disclosure. (That’s why you can’t get a patent for something that was previously published elsewhere, even if it was you publishing it.)
In theory, every patented technique is already public: it’s described by the text of the patent. In practice, today’s patents are malicious compliance writ large: most of the time, they describe only the most indispensable (not necessarily the most difficult) parts of the processes in the most obtuse language possible. And while I hold that a patent lawyer’s job is morally repugnant, the overall fault is hardly with those exploiting loopholes, it’s with those who cast the loopholes in stone (making them part of national law, pervasive international agreement bundles such as the WTO, etc.) in the first place.
Can someone explain why the RISC-V ecosystem won't eventually stabilise into a series of lower-performance open-source cores and higher-performance closed-source (and non-licensable) cores?
In other words why should designers open source performance leading cores for competitors to copy?
You're Netgear or Asus or someone. You need a core to put in your network equipment, and the low performance one is nearly good enough. If you spend some money to make it 10% better then it is good enough. If that amount of money is less than you'd have to pay to license ARM, you might as well do that.
You also might as well release your changes.
If you try and keep your improvements to yourself then when someone else goes to make their own improvements, they start with the original design instead of yours. To get the benefit of their improvements you now have to do more work yourself in order to integrate them with yours. How does that help you?
Think about how Linux works. If you release it for anyone to use, when someone else wants to make independent improvements, they start with your design and you get the benefit of their work for free (and vice versa). And you make your money selling routers/phones/coffee machines/whatever, not processors, so this is in your interest.
Several iterations of this later and the "higher performance closed source (and non licensable) cores" are starting to look like HP-UX and AIX, meanwhile Linux has captured nearly the entire market.
> and the low performance one is nearly good enough.
You may be surprised at the CPU requirements of a modern Wi-Fi router.
> If that amount of money is less than you'd have to pay to license ARM, you might as well do that.
That is not all of it; there is also the cost of supporting the Linux firmware these routers will be based on, for which having the ARM ecosystem is beneficial. So your initial cost estimate will have to factor that in as well.
And it is part of the reason why all routers are moving from their own MIPS CPUs to ARM CPUs.
> You may be surprised at the CPU requirements of a modern Wi-Fi router.
You may be surprised at the performance of modern CPUs. Even the slow ones are pretty fast.
> That is not all of it; there is also the cost of supporting the Linux firmware these routers will be based on, for which having the ARM ecosystem is beneficial. So your initial cost estimate will have to factor that in as well.
The Linux ecosystem is highly portable. It's not like Windows where you're stuck with already-compiled binaries for the wrong architecture. You have the source code, it already runs on half a dozen processor architectures, so you compile it for the one you're using.
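To illustrate how low the porting barrier is: the kernel's own build system cross-compiles for another architecture with two variables. A rough sketch, assuming a `riscv64-linux-gnu-` GNU cross-toolchain is installed (the toolchain prefix depends on your distribution):

```shell
# Cross-compile the Linux kernel for 64-bit RISC-V from an x86 host.
# ARCH selects the target architecture tree; CROSS_COMPILE is the
# prefix of the cross-toolchain binaries (gcc, ld, etc.).
make ARCH=riscv CROSS_COMPILE=riscv64-linux-gnu- defconfig
make ARCH=riscv CROSS_COMPILE=riscv64-linux-gnu- -j"$(nproc)"
```

The same two variables work for ARM, MIPS, and the rest, which is exactly why the firmware stack is not locked to any one ISA.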
> And it is part of the reason why all routers are moving from their own MIPS CPUs to ARM CPUs.
They're doing that because MIPS is dead. The company that owns MIPS went bankrupt and has now announced they're going to make RISC-V processors.
The key point you’ve missed is that Linux is GPL-licensed. If you make changes and distribute those changes in a binary, you have to release the source code. Not so for a new RISC-V core (or, notably, for many of Linux’s competitors).
Sure if you make a small change then you might release it. Designing cores isn’t all about small incremental changes though.
I'm still not sure what the incentive is for a firm that has made a significant investment in a new core, or in extending an existing core, to release those changes.
With software, the one true incentive is that someone else will maintain your code and you won't have to spend valuable time constantly rebasing your downstream patches. This is why e.g. Netflix contributes heavily to FreeBSD's network stack.
This is the real reason to upstream, regardless of license. When that incentive doesn't apply and companies only release for compliance, we get Android-vendor-style source dumps and "BSPs" that are not very useful — not upstreamable and often barely even work as documentation.
It's a bit harder with silicon since a chip is "done" once you've sent it to manufacturing, but still somewhat applies when you maintain an evolving line of chips.
Thanks - agree completely. Possibly also true that Netflix isn’t really worried about competitors using their contributions to FreeBSD to compete with them?
That should generally be the default assumption, because companies can pay for the improvements that most benefit their niche.
Suppose your company makes the mid-range device. You have competitors above and below you. The one above you uses a fast expensive processor, the one below you uses a slow cheap processor. You need a medium processor for a medium price. If you design your own, it gets cheaper than the existing middle one but still not as cheap as the cheap one.
So if you design one, it's too big, costs too much to fab, for the low end. It's too slow for the high end. You're the only one who can use it. You and companies outside your market who don't compete with you at all, because you make network switches and they make cars.
You also make a good point about the GPL above, but there isn't any reason why processors couldn't use a GPL-style license, is there?
I’m not a lawyer so can’t comment on the GPL point but I hope you’re right.
I suspect that the most likely route to open source higher performance cores is some sort of cross industry consortium collaboration - rather than a single firm - with GPL cores and a central body doing a lot of the work.
I do wonder whether you eventually end up with something that economically looks a lot like a cross between Red Hat and Arm though - say with membership fees for support rather than licensing fees.
It's a fair question. How does an open high end chip design benefit anyone other than a competing company? How would an end user be able to verify that the chip in their hand is the same as what's in the schematics they get? Wouldn't it be cost-prohibitive for regular people to have those designs manufactured? High performance cores being opened would be nice to have but not really practical, unless I'm missing something.
This is why I think the priority for software will shift back towards performance in the coming decades. Even now very few people can get decent GPUs. Who knows if one day chips that can run the latest bloatware will be affordable?
Because open-source systems don't just 'stabilize'. Linux didn't stabilize into something you only use on low-performance systems while reaching for something proprietary at the high end.
Open hardware is still much more difficult, but we will not reach any 'stabilization' anytime soon.
"As should be obvious by now, there is no situation where these foundry processes and tools are open source."
This is already false. The well-known open SkyWater PDK (sponsored by Google) is proving this article wrong from the beginning. (https://github.com/google/skywater-pdk)
Yes, that's a rather old technology node, but you can now synthesize your free RISC-V design with a free toolchain (OpenROAD) onto this open PDK.
That process node is not just "rather old", it is literally 20 years old. Sure, it's nice that these old processes are being opened up, but for anything finer you're still going to run into the patent concerns that the article talks about.
Sure, but the ability to have an open 1 GHz Athlon XP-class chip (with the IPC increases allowed by modern microarchitectures) would be nothing to sneeze at. See https://en.wikipedia.org/wiki/130_nm_process