RISC-V is succeeding (semiengineering.com)
413 points by PaulHoule on March 1, 2022 | 299 comments



The real reason RISC-V is succeeding, relative to ARM:

If you make your own CPU

- With ARM, you have to pay ARM for an ISA license (after a several-months-long negotiation), and you cannot license your CPU design to anyone, as ARM has the exclusive right to do that with their designs.

- With RISC-V, you get a free ISA license, and you can license your CPU design to others.

If you do not make your own CPU

- With ARM, you can (after a several-months-long negotiation) license one of the few designs ARM has available. There are no other vendors that can offer you ARM CPU designs.

- With RISC-V, right now, you can license any of hundreds of available options, from tens of vendors. The licensing process is usually very short and straightforward. Alternatively, there are some open hardware designs. You can get commercial support for some of them.

Frankly, unless ARM radically changes their business model, I do not expect them to survive.


> Frankly, unless ARM radically changes their business model, I do not expect them to survive.

Survive what? I don't see RISC-V disrupting much of ARM's big-name business (e.g., phones, with some inroads into other things like Apple Silicon & laptops). Maybe Amazon pivots to RISC-V for the next Graviton? But that also seems unlikely unless someone with very deep pockets invests in making an actually competitive RISC-V CPU core at the mid/high end.

ARM's low-end market seems likely to be taken over by RISC-V. So the Cortex-M series' days seem numbered without a change in the business model. But that seems to be about it.


> Survive what? I don't see RISC-V disrupting much of ARM's big-name business (e.g., phones, with some inroads into other things like Apple Silicon & laptops).

Go back a decade or so: how many people thought that ARM could compete against Intel/AMD?


One decade ago was 2012.

Chromebooks on ARM had already been out for a year. Windows RT for ARM was coming out later in the year. Smartphones were clearly all going ARM. The 64-bit ARM spec was out and people were excited about it.

I think it was a lot clearer then that ARM was going to succeed against x86 than it is now, looking forward, that RISC-V will beat ARM.

I do suspect the microcontroller ecosystem will have lots of RISC-V. But it seems a lot less clear that it will succeed ARM in the mobile/laptop/desktop/server markets. I personally do not think that’ll likely happen any time soon.


The microcontroller space is where the cost of the ISA really matters, too. Most of the other IP in a microcontroller is cheap. If you compare that to a mobile SoC, the CPU core often isn't even the most expensive block.


ARM still isn't really competing against Intel/AMD. It took an entirely new form factor that radically changed the consumer landscape for ARM to get a foothold at all. What is RISC-V's smartphone explosion?

ARM has so far, outside of Apple and very limited cloud experiments (and some very badly received laptop experiments), not really put a dent in Intel/AMD's markets. But all of this was fueled by the once-in-a-generation explosion that gave ARM untold increases in adoption "for free". RISC-V has seemingly nothing similar, and RISC-V itself certainly isn't manufacturing any such radical shift.


Graviton2 is generally available and depending on your codebase and dependencies can be a drop-in replacement. I was amazed at the breadth of ARM docker images that exist for common use cases.


Umm, ARM has the share in the phone market, where Intel/AMD couldn't make a dent...

It's only a matter of time before other market shares are eaten, since x86 is having a hard time innovating on its decades-old design.

Longer battery life for consumer laptops and lower electricity bills for cloud vendors are very attractive.


> Umm, ARM has the share in the phone market, where Intel/AMD couldn't make a dent...

Yes, obviously, which I mentioned repeatedly. For ARM to be successful outside of embedded it took an entirely new category of device to appear. What's RISC-V's entirely new category of device where it happens to be uniquely positioned?

> It's only a matter of time before other market shares are eaten, since x86 is having a hard time innovating on its decades-old design.

Based off of what? The only ARM CPU that isn't thoroughly outclassed by AMD & Intel's x86 CPUs is Apple's, and Apple sure isn't licensing that to anyone. And so far every time ARM has tried to enter the domain of x86 it's been either absolutely embarrassingly bad (laptops) or mediocre at best, and only for the very latest generation at that (servers).

> lower electricity bills for cloud vendors

ARM server CPUs have the ~same 250W TDPs as Intel & AMD server CPUs. There's no power savings to be had here.


>Go back a decade or so: how many people thought that ARM could compete against Intel/AMD?

Hello, that would be me? :)

A decade ago was 2011/2012. Not a lot (if not zero). Even Anand from AnandTech was still cheering on Intel because of Intel's foundry leadership.

I wrote it on AnandTech and AppleInsider (probably on HN with my old account as well) the moment Intel decided not to make chips for the iPhone, or fab chips for the iPhone, in 2011. That was before Intel announced their Custom Foundry. They later went on to do a JV with a Chinese company which later became Spreadtrum, now known as Unisoc.

The whole reason Arm competed against Intel and AMD wasn't ARM the ISA. It was business interest and the foundry model, on a projected 1.5B annual smartphone shipments by 2020 (which turned out to be a little too optimistic) against 150M PC shipments by 2020, continuing to trend downwards (a little too pessimistic). Even had TSMC not taken over the leading-edge crown, or stayed one node behind Intel, the market today would still have been the same. The smartphone market was far bigger than the PC market. This actually ties into why people were dismissing Moore's law in the late 00s and early 10s. In the end, it was economies of scale that won.

What turned out to be wrong, though, was that tablets didn't take over the PC. We are not in a post-PC world. E-sports (x86) picked up, and the PC as a gaming platform is bigger than anyone could have imagined. The PS4 moved to x86 (I don't think rumours of the PS4 on AMD x86 even began till early 2012, and they were only partly confirmed by 2013). The GPU is no longer a gaming niche but fundamental to data science. The hype of the so-called Cloud, which everyone laughed at or was sceptical of (including me), turned out to be multiple orders of magnitude bigger as well. x86 (Intel/AMD) thrives because of all that.


I worked in the CPU business 30 years ago and it was clear then that ARM would be very competitive. I was in the UK. Perhaps that made a difference.


RISC-V is more than a decade old.


A decade ago Microsoft was releasing ARM-based laptops (or laptop-like tablets, if you insist). They were slow-as-hell Tegra 3 disappointments, but it was happening 10 years ago.


Their phone business is probably safe for now, but as you said they are set to lose a lot of their embedded business. The stuff that powers your HDD, your fridge, your IoT devices. Lots of high-volume use cases.


IDK about Cortex-M being overtaken by RISC-V any time soon.

There is a huge discussion about peripherals and IP. You can buy peripherals for ARM all day long, and they’ll work from one chip to another for the most part. Once everyone can “make their own CPU”, expect fragmentation in the peripheral interfaces.

I think low-end Cortex-A and Cortex-R are where RISC-V will shine first. Something like a Cisco IP phone, not an iPhone.


> ARM's low-end market seems likely to be taken over by RISC-V. So the Cortex-M series' days seem numbered without a change in the business model. But that seems to be about it.

Doesn't that inevitably set up a classic Innovator's Dilemma?


I don't think so. ARM may just not care about losing that market. The CPUs they design at that ultra-low-end really don't carry over into the mid & high ends, which is where the high profile design wins & "high" (relatively) margins are anyway. Like what makes for a good hard drive controller doesn't really have anything at all to do with what makes a good smartphone CPU. So RISC-V winning at hard drive controllers or other (tiny) embedded usages is not really setting up an "S" curve for explosion here to challenge ARM's markets. They are really fundamentally different markets & products.

Kinda like how Intel just never cared about going after embedded, and they're not exactly struggling as a result of that. With the big asterisk of smartphones where scaling up proved a quicker path to shipping than scaling down, at which point inertia took over, but for that to repeat would require a currently unknown new product category.


I'd bet on AR glasses that somehow need to squeeze the compute power of a midrange cellphone into the form factor of a pair of glasses.


And if ARM started allowing everyone to use their IP for free you would expect them to survive? ARM the architecture might survive, but probably not ARM the company. IMHO a more likely outcome with RISC-V (assuming it gets widely adopted) would be one or two companies like Samsung, Qualcomm, AMD or even Intel gaining a massive advantage and profiting from it instead of cheaply licensing their designs to other companies. I don't really see how that is preferable to the ARM model, where any company can theoretically license the most advanced ARM cores there are (aside, of course, from what Apple is doing) on mostly equal terms.

Of course RISC-V might push them to make their licensing mechanism more flexible if they start considering RISC-V a real threat to them (I don't think this is very likely to happen in the near future, though).


>if they start considering RISC-V a real threat to them

I'm afraid they have seen RISC-V as a threat for a while now. It has already been years since their first FUD campaign.

>if ARM started allowing everyone to use their IP for free you would expect them to survive?

That's not how I see ARM surviving.

One way ARM could survive is by doing what the MIPS owners did: Abandon the ISA, move to RISC-V, use your extensive expertise to make competitive cores, and your clout in the market to sell them to your pre-existing clients.

They could totally pull it off, but it would honestly surprise me if they did so, considering how poorly they've dealt with RISC-V so far.


> One way ARM could survive is by doing what the MIPS owners did: Abandon the ISA, move to RISC-V, use your extensive expertise to make competitive cores, and your clout in the market to sell them to your pre-existing clients.

Even if that’s their best bet, long term, I don’t see why they would have to start doing that right now. Why would they leave their castle with its golden egg-laying goose, only because they know it won’t live for X more years?

Timing such transitions is always difficult (you can’t wait until your goose is dead), but I think they’re better off waiting at least a few more years.

The MIPS owners were in a different situation, weren’t they? Their goose already was dying or even dead.


ARM has extensive IP and know-how. They could build up and sell high-performance RISC-V IP/cores now and prevent RISC-V-focused companies like SiFive from gaining a foothold by denying them revenue and investment.

Alas, that would probably have worked better 2+ years ago; there is a lot of movement now.

It seems like Intel has realized this. They tried to buy SiFive, but have now themselves joined as a major RISC-V International sponsor and are investing heavily in the ecosystem.

Of course there is a downside there too, because it accelerates a software and hardware ecosystem transition to a different architecture that can enable other players. We'll see how it plays out.


> Intel has realized this

One of the reasons Intel is spending money on RISC-V could be that they want to undercut Arm, as the latter are now beginning to be competitive in Intel's x86 world. If RISC-V is going to succeed at all as an ISA, it is going to hurt Arm before Intel.


Intel bought into Arm as well, then sold it off like many other things in the past. It will be interesting to see if Intel is forward-thinking this time, or just chasing profit... again.


>I don’t see why they would have to start doing that right now.

Me neither, but spreading FUD about RISC-V as they've been doing doesn't help their credibility should they take the MIPS path.


That's just a job for the marketing department. I can hear them already:

ARM's vast experience brings the quality you've come to expect to the RISC-V world. Want to lower TCO and level up your designs, while leveraging existing experience, but worried about low quality IP polluting your tech? Fear not. etc...

You can probably let an AI spew this stuff by now. Training data is easy to find.


> Training data is easy to find.

Got a good chuckle from that, I did! :D


Take a look at the Halloween Documents and then look at Windows Subsystem for Linux. This industry can absolutely forgive FUD in the long term if a company changes its path.


Long term, yes.


But is RISC-V so superior to ARM that ditching their ISA and switching to RISC-V is their only way to survive? Is there anything specific about RISC-V that would allow ARM to make significantly more efficient cores? (I don't think there is any evidence that this would be the case.) Because if not, it would be an exceptionally bad move for ARM to make (basically like Intel deciding to ditch AMD64 to make ARM or RISC-V CPUs).


Abandoning the ISA is probably too extreme.

MIPS was pretty dead when they abandoned it, ARM is still alive and well.

ARM could survive by making the ISA free and competing by having the best ARM core designers and designs, and/or by having proprietary add-ons for media handling/decoding.


They will have no choice. Either ARM becomes irrelevant as people switch to an easier to license ISA, or they have to swallow a different profit model and compete by having the _best_ ARM cores in an open market of ARM IP.

The trend with RISC-V suggests that ISAs are going to be commoditized and the real value will be in the implementations themselves.


> real value will be in the implementations themselves.

It is not that difficult to re-target a micro-architecture (what you call an implementation) to a new ISA, especially if the ISAs are relatively close, as most modern ISAs are. The RISC-V and Arm ISAs are an example where that should be especially easy.


Totally, all the more reason that the specific ISA is less important than the implementation/micro-arch.

Why pay for ARM if you can retarget to RISC-V and still retain most of the benefits of your designs?


As "baq" also says: you pay Arm (or Apple, or Qualcomm or ...) for the micro-architecture. I expect Arm to drop licensing costs for their ISA to zero to compete with RISC-V at the low end.

Right now, the Arm ISA has much better micro-architectures than RISC-V, so why bother with RISC-V? I think this will change over the next 5 years. One of the ways this could change is Arm selling RISC-V implementations.

Of course there are also dangers in RISC-V: one is that we will get too many RISC-V versions, so no stable software ecosystem emerges for any single one of them. Adapting Linux, GCC and so on to your specific version of RISC-V (or any processor really) is rather expensive.


You'll pay ARM for a good ARM RISC-V CPU.


Market segmentation exists. RISC-V being open means it's also open to ARM. You may, in the future, pay ARM to license a SoC that's a RISC-V ISA core or four with their tried-and-true peripheral cores and maybe a couple of M0 cores to boot.


Assumes that Arm have no implementation IP - which comes with the ISA and is not available to others - and that the only factor in choosing an ISA is how 'easy' it is to license. Very strongly disagree with these assumptions.


> Very strongly disagree with these assumptions.

It is the fundamental business ideas that they have all wrong, which is why every time these discussions come up they are unproductive. It is no different to saying Linux would take over the desktop in the 90s because it is free. We are in the 20s, coming up on the 30s. It is still not happening.

But at least (or hopefully) we have passed the stage of arguing whether RISC-V is a fundamentally better ISA than ARM, and can talk about the business aspect of it.

Edit: Ok I was wrong, the ISA debate is near the bottom of the page. I didn't invent the term "Riscy Silver Bullet" for no reason :).


> It is no different to saying Linux would take over the desktop in the 90s because it is free. We are in the 20s, coming up on the 30s. It is still not happening.

The desktop? Maybe not. But Linux has done well with laptops in recent years (Chromebooks). Definitely not "taken over", but 30 million units a year is nothing to sneeze at either.


ChromeOS is Linux in name only, just like Android. Google just needed a kernel and a functional low-level stack. They can switch to Fuchsia in the near/medium future and most users would probably not notice a thing.


"Riscy Silver Bullet" is very good.

Completely agree.


ARM won't become irrelevant as long as Apple exists.


Arm is the fourth ISA for the Macintosh and its descendants. Apple has shown three times that they're anything but wed to architectures.


"And if ARM started allowing everyone to use their IP for free you would expect them to survive?"

Irrelevant question that no one anywhere has any obligation to care about.


Well, the comment above (which I assume you didn't read) implied that ARM would have to change its business practices to remain competitive. Whereas I think its current business model is the main reason it's still competitive.

Long term, I'm afraid that a completely open ISA might result in a less competitive market as long as the cost of developing competitive cores is high enough. All the top players could just start behaving like Apple and keep their CPUs for their own products/cloud services instead of selling them at commodity prices and losing a competitive advantage. So having a 'neutral' player like ARM might still be preferable.


I don't think the parent comment was suggesting we have an obligation to care about ARM's survival.

But ARM probably cares about ARM's survival so it is relevant to the discussion about what future decisions ARM might make.


When the landscape changes that some business model depended on, it is no one else's obligation to care if some business using that model adapts or doesn't.

I literally quoted a question that asks what "you expect", and what I am pointing out (not declaring, but observing something that is already a fact whether or not I say it) is that neither you nor I nor anyone else has any obligation to expect anything.

It doesn't even matter if we do expect something. Even if you were some kind of Arm fan like being a fan of a sneaker brand or a celebrity, your good wishes still don't change anything.

"you expect" is just irrelevant in this question. Arm can adapt to the new environment or not. The environment is changing and it doesn't matter if anyone likes it or not.

And the nature of the change, people giving away something that someone else used to be able to charge rent for, is not unfair or unethical or immoral in any way, nor is it any kind of net loss for society as a whole, so there is no argument for doing something to artificially protect Arm's business model.

Steve used to sell cakes. Sally gave everyone her recipe for cake and now fewer people buy Steve's cakes. So what?

Do you mean to say that Sally should not be able to give away her own cake recipe? Steve doesn't like it because he had a good gig going for a while there, but so what? The environment Steve was operating in changed, and his business model no longer works. And while you could sorta-kinda say it's Sally's fault, you can't say she did anything wrong either legally or morally or in a holistic sense for everyone as a whole, and so there is nothing anyone else should do about it. Steve had no special right to his cakes being purchased. He has the right to offer them, but no one is obligated to buy them, and no one is obligated to care if he fails to sell enough to live well on.

I don't expect anything related to Arm, and neither can anyone else. It's just a totally irrelevant question.


I think the real value isn't so much to do with the Free-As-In-Beer aspects, and more to do with the Free-As-In-Freedom aspects. Having so much open documentation and off the shelf capabilities is like a superpower, and you're starting to see it with stories like these:

https://riscv.org/blog/2020/11/13-year-old-nicholas-sharkey-...

We've got kids designing their own cores. That was a bar that previously was out of reach to anybody without a nine-figure engineering payroll. That's amazing.


Not really; students have been designing RISC cores as long as RISC has been around. It's a standard exercise.

RISC-V does make it a bit easier and gives you a clean ISA, but it hasn't had that much influence due to its own design; rather, it's caused people to herd around it, so there are lots of tutorials and work to borrow floating around.


Having a relatively sophisticated core you can take off the shelf and modify for your own research can be incredibly useful in academia; I would have really liked that when I was doing my thesis. The same goes for large companies who might want particular new instructions or sets of features best suited to their own use cases.


SPARC has been open source since 2005, complete with a high-quality implementation released in 2006: 64-bit, 8 cores, 32 hyperthreads, rising to 64 by 2007. A simulator was also provided.

SPARC was able to run Linux and Solaris, and was used by Sun in the servers that powered much of the commercial internet in the late 90s and early 2000s (before x86 Linux took over).

There have been a few others since then. So relatively sophisticated open source cores for study and modification have been around for many years by now.

It looks to me like the difference with RISC-V is that a larger community has formed around it due to smart organisation and the timing being right, rather than availability.


This is because the underlying CAD tooling has gotten much smarter and easier to use. It's not because of anything RISC-V did in particular.

This toolchain enables going from knowing nothing to a core in one class. One doesn't even need to enroll in the class to follow along.

See: https://ocw.mit.edu/courses/electrical-engineering-and-compu...


Surely it is because of the open source ecosystem, especially open-source tooling, reaching a critical mass?

From article: “The open-source community developed key tools that are crucial to make RISC-V-based processors ubiquitous, such as chip technology process design kits, design verification suites, implementation tools, and more”

Closed source tooling was available and “smart”, but licensing, cost, and inflexibility were significant roadblocks. Disclaimer: I don’t work in the industry.


I think to really have this conversation we would have to define the terms. Much of the original CAD tooling was open source because it was written at universities.

Things like: is it enough for one to make any core or does one have to make the "best" core? If it just has to exist that was pretty much always possible. Definitely not the lowest friction way to go. But possible.

For example, here is a basic MIPS core originally written in 2003, from chapter 1 of a common VLSI textbook: http://pages.hmc.edu/harris/cmosvlsi/4e/code.html

It's just over 400 lines of Verilog and super easy to follow. The tooling changes that happened in the 90s made that possible.

I experimented with making cores that were taped out at my University, as an undergraduate, in the late 90s. It didn't cost my University even close to 9 figures. It was definitely a lot harder than now, but the designs were also a lot more modest. Things essentially untrained students can do now would've been unthinkable then.


Stands to reason, then, that they are not designing in the same sense, but configuring and customizing. People have been doing that for a while now on FPGAs, or in hardware with the 8080, and it's hardly anybody's kids doing it. What's so "amazing" about it?

Dumb cores are a commodity now, having progressed in horsepower from their ca. 60-year-old ancestors. I.e. the core is hardly innovative, so what is so amazing about it, technically?


Also, waiting for IP is a huge barrier to entry.

Suppose you just want a soft core for a one-off FPGA project. RISC-V is a no-brainer if you need to run complex stuff like a Linux kernel. There are myriad projects to pick from on GitHub. Try them all, pick the one you prefer.

The barrier to entry is so much reduced that you can use RISC-V "by accident" without having considered it in advance, as part of a normal engineering process in companies of any size, vs months-long waits for IP, after deciding it was worth obtaining.


This all rests on several suppositions, including:

- Arm has no IP that can’t be quickly replicated by other firms (for very little money).

- That the quality of designs from other firms will be better than those from Arm (or, for the architecture licensees, their own).

- The really big Arm licensees would see a commercial advantage to switching.

Intel threw billions at the smartphone market and still lost against Arm. Today there are zero RISC-V smartphone designs. That may change but probably largely because of the Arm China and Nvidia missteps.

I expect (and look forward to) RISC-V establishing a presence in the market but this sort of commentary does the ISA no favours.

Edit: I see elsewhere you think they should abandon the Arm ISA in favour of RISC-V! really not sure about that ..


1) Intel is/was handicapped by using an old CISC ISA which has some cost in power/performance. This complex x86 decoder ain't free.

2) Network effect favoured ARM.


Intel also had the advantage at the time of a clear process advantage which almost certainly would have more than offset the decoder cost.


Nothing in life is ever simple...

> With ARM, you have to pay ARM for an ISA license

This is only true for some definitions of ARM. See for example:

https://en.wikipedia.org/wiki/Amber_(processor) "The Amber core is fully compatible with the ARMv2a instruction set and is thus supported by the GNU toolchain. This older version of the ARM instruction set is supported because it is not covered by patents, and so can be implemented with no license from ARM Holdings"

> Frankly, unless ARM radically changes their business model, I do not expect them to survive.

Looks like someone is having similar thoughts inside ARM. https://hackaday.com/2018/10/02/free-arm-cores-for-xilinx-fp...

There are other efforts too.


If it becomes an issue, ARM probably will change dramatically. However, keep in mind RISC-V lets you license hundreds of probably pretty crummy cores (and a handful of good ones), whereas ARM's are battle-tested, verified, and come with documentation. RISC-V docs are still a bit 1980s, i.e. as far as I'm aware there's no official online source of HTML docs, and there's not as much blessed software as ARM has (i.e. compilers tested in sync with the ISA). Oh, and people to sue...

I also trust the ARM ISA designers more than the RISC-V ones, but I don't know enough that you should trust me.


As a hobbyist I see this from a very different perspective. ARM documentation is overwhelming and ironically ARM is confusing as hell to read about due to its fragmentation into 32-bit and 64-bit ARM as well as Thumb1 and Thumb2. I don't have a great memory, and I really get lost when reading ARM tutorials and docs.

RISC-V, for a hobbyist like me, is just WAY WAY easier to deal with. There is a great consistency between RISC-V 32-bit and 64-bit which ARM lacks. There is just one compressed ISA, and it maps to the regular one in a straightforward manner. Register naming is more sensible and consistent.

ARM instructions with their complex addressing modes are not easy to read. ARM vector instructions make your head explode trying to keep all the details in your head at the same time. RISC-V vector instructions are just way easier to grasp.

In many ways it reminds me of the documentation situation with Python vs Julia. Python has more documentation for sure, but a big problem with that is all the stuff that is outdated or irrelevant, combined with the difference between Python 2.x and Python 3.x, although that has gotten easier in recent years.

New technologies will always have less documentation but they often benefit from not having lots of legacy which complicates understanding.

At least ARM is easier than x86, but that is a pretty low bar.


The year is 2012. Blender is open-source 3D modeling and CG software. There are hundreds of crummy plugins and hacks (and a handful of good ones), whereas Maya is battle-tested, developed by professional Autodesk engineers. It comes with good documentation and support. Blender documentation and tutorials are essentially YouTube videos with cheesy electronic music in the background. (I'm probably getting some details wrong, but you get the point.)

You're not wrong in your assessment. However, you're looking at it as it is right now, as opposed to where it's headed and what it could ultimately become. While CG and computer hardware are very different fields, the effects of communal knowledge sharing are very similar and very powerful. When bored high schoolers and broke college kids (and curious adults) can tinker with the tech, they enter the industry with that much more experience and/or have another avenue for career transition. And that is far more valuable than the mild (and sometimes nonexistent) conveniences of ARM over RISC-V.


Writing code for Blender != Developing a high performance core that is going to be fabricated on TSMC's leading edge process.

In the first case anyone with the right skills can do it. In the second you need access to lots of specialist tools / proprietary industry knowledge.

If RISC-V 'wins' in application processors it will be because an Intel or a Qualcomm invests (probably) hundreds of millions in building a team that works on a multi year project. It definitely won't be bored high schoolers.


Who says you have to fabricate a RISC-V processor on leading edge TSMC nodes? What about FPGAs? What about older nodes? The whole point of my argument is that when you make the proprietary "industry" knowledge more generally available, more people can contribute to the ecosystem as it removes barriers to entry.

> In the first case anyone with the right skills can do it. In the second you need access to lots of specialist tools / proprietary industry knowledge.

In the first case, it used to require specialist tools like the Autodesk suite, and people had to have access to that software to develop the right skills and acquire the industry knowledge.

I don't think RISC-V "winning" is the right way of measuring success. Whether or not it becomes the dominant ISA for everything is irrelevant. It serves an important role in our market and it doesn't need the guys at Qualcomm and Intel to use it to make it successful. Either way Intel is investing big money into RISC-V: https://www.zdnet.com/article/intel-invests-in-open-source-r...

Also, careful what you say about high schoolers: https://riscv.org/blog/2020/11/13-year-old-nicholas-sharkey-...


Fair enough, and I don't disagree at all, but I do think it's a bit different to software development in that there is probably a limit to how far you can go without one firm spending serious money. As you say, it may be Intel - which doesn't really give me a warm glow.


Shockingly, this is not strictly the case anymore, and it's becoming less true every day.

https://info.efabless.com/press-release-efabless-launches-ch...


130nm - that was leading edge 20 years ago.


That means we are only 20 years away from running everything on open-source hardware.


I'm sorry to say there does seem to be a certain lack of respect for the Arm ISA designers floating around. Arm v8 is in billions of smartphones and powers the core with the fastest single-threaded performance out there, but somehow it's obviously inferior and will be supplanted.


> I'm sorry to say there does seem to be a certain lack of respect for the Arm ISA designers floating around.

I was going to use that exact wording in a different comment but decided against it, spooky


That is spooky!

I mean there was clearly a lot of analysis put into aarch64 - which probably involved Apple etc - but I've seen comments previously saying they obviously got it wrong because [insert some headline design decision].

They might have got it wrong but they're definitely not amateurs at this and you can't prove anything with a one line comment.


I guess the point is, given the eyeballs RISC-V is going to attract, you'd have the Tencents and Apples of the world pouring resources into it, just like they did with AArch64. And when that happens, where does ARM, as a company, go? They did to Intel what RISC-V is poised to do to them, maybe in a decade or two.

I feel ARM could go the Docker way. Docker faced a similar foe in stakeholders seeking to commoditize its core business, who ruthlessly stomped on it, creating k8s, runc, registries, microVMs, serverless, and what-not. Despite Docker having stellar engineers working for them (and in some cases better tech: Swarm, for instance), there's nothing they could do.


Why would Apple pour resources into it? To save a few cents licensing costs on a $1,000 smartphone and give up a decade of hardware and software investment. Sorry, not buying it.


Apple already have a "We can do whatever the fuck we want" licence on aarch64 anyway; notice how they've been adding their own instructions?


> you'd have the Tencents and Apples of the world pouring resources into it

The Tencent and Apple of the world are...Tencent and Apple.

If Amazon, say, decided to throw billions at developing a Graviton 4 using RISC-V... they really couldn't leverage much of their previous Graviton development work. They also wouldn't have a deep bench of low-level RISC-V wunderkinder to bring on board to leverage the new platform. It's a chicken-and-egg problem.

Why would Amazon then want to pay to bootstrap a whole RISC-V market? That's a lot of CapEx without a clear potential for returns. You'll note they did develop Graviton using the ARM ISA so they didn't have to create from whole cloth a "Graviton" market.

Amazon made the same decision as PC makers in the 80s. They consolidated around the IBM clone architecture to take advantage of the rest of the market, especially Microsoft and IBM, supporting the IBM clone market. Commodore, Atari, and Apple all had to fight uphill against the IBM clone market.

I haven't seen any technical advantage of RISC-V over ARM that would suggest it's worth pouring billions of development dollars into. It might make sense for Western Digital for drive controllers but not phone, tablet, PC, or server makers.


AFAIU all the Graviton chips so far are using off the shelf cores licensed from Arm. What Amazon is doing is assembling a shopping list of cores, peripherals (probably also to a significant extent licensed from Arm rather than a 3rd party or even in-house) etc. and creating a SoC, which is manufactured by someone else. Even this is no walk in the park, I'm sure Amazon has hundreds(?) of engineers working on Graviton.

Now, if someone were to create a RISC-V core competitive with the Arm server cores, and an ecosystem of RISC-V-compatible peripherals shows up (well, not compatible in the sense that some random peripheral would need to care about the ISA, but in the sense that Arm might not want to license their peripherals to a RISC-V-based project?), I'm sure Amazon could do roughly the same with a hypothetical RISC-V Graviton. But if you're talking about designing a RISC-V core from scratch, that's a whole different ballgame.


> AFAIU all the Graviton chips so far are using off the shelf cores licensed from Arm. What Amazon is doing is assembling a shopping list of cores, peripherals (probably also to a significant extent licensed from Arm rather than a 3rd party or even in-house) etc.

This is my point. Amazon using ARM lets them tap into the huge ARM ecosystem with minimal investment on their part. The cores they're using have known qualities and meet whatever performance targets they want.

At this point RISC-V doesn't have that same ecosystem. It doesn't even have extant high performance cores someone can just order. Building momentum is expensive. If selling that ecosystem isn't your main business it's probably not worth investing millions or billions to develop it.


> I haven't seen any technical advantage

RISC-V has a geopolitical advantage, however, in that it's not controlled by the US/UK.

I can see why Amazon wouldn't pay to bootstrap that, but I can see why Tencent/Huawei/Alibaba would try to.


> Frankly, unless ARM radically changes their business model, I do not expect them to survive.

I kind of agree, but it's not straightforward. ARM's growth wasn't so much a hockey stick as a more continuous curve with a small exponent. Yes, they only started in 1990, but that was itself a reboot/restart after Acorn's 12 years of effort. And they were scrambling hard for every deal for at least another 15 years.

The environment in which RISC-V emerged is different, of course, but many of the dynamics are still the same. Car companies were still using 16 and even 8 bit devices into the 21st century.

Secondly, ARM has a large arsenal of patents that go along with the license, and they continue to add to the load.

I bear ARM no ill will but I also want RISC-V to supplant them. I just don’t think it’s automatic.


Their business model was to be sold to NVIDIA, but that didn't work out.

Anyway, that's all well and good, but - does anyone end up making decent RISC-V processors that could replace a desktop CPU, or an RPi CPU, or something weaker but not in a small niche? Or is this still a vision for the future?


>does anyone end up making decent RISC-V processors that could replace a desktop CPU

As of November, a large number of extensions got ratified, including the vector extension, cryptography acceleration, hypervisor support and other important features.

RISC-V is finally not missing any important feature ARM or amd64 have, and it does it with an order of magnitude lower number of instructions (equivalent but simpler, i.e. better) and with significantly higher code density.

However, test chips with the first designs implementing all of that will take time, even assuming they were taped out right then, after confirming no last-minute changes.

High performance cores depend on these extensions, so we'll begin to see them soon. We know multiple such efforts exist.

Tenstorrent has one such project, led by Jim Keller.

>Their (ARM) business model was to be sold to NVIDIA, but that didn't work out.

They intend to go public now. I recommend against buying those shares, as I do not expect ARM to turn around.


> with significantly higher code density.

This is not correct and is a known weakness of RISC-V at the moment. The ARM toolchains are really good and have decades of improvements in them. This is a big issue for those who need to buy ROM in volume.

Better tools are coming online though. For example:

https://blog.segger.com/code-size-closing-the-gap-between-ri...

"One of the issues faced by RISC-V developers is that the code density of the RISC-V instruction set for deeply embedded processors does not match that of Cortex-M with existing tools."


> RISC-V is finally not missing any important feature ARM or amd64 have, and it does it with an order of magnitude lower number of instructions (equivalent but simpler, i.e. better) and with significantly higher code density.

This has historically been a problem with RISC-V and it’s not really something you’ve backed up as being improved.


> […] an order of magnitude lower number of instructions (equivalent but simpler, i.e. better) […]

I am not sure how to parse the «simpler, i.e. better» part; simplicity vs betterness is orthogonal at best and is contradictory at worst.

Very few people write meticulously handcrafted, highly optimised code for any ISA today, and the compiler writers actually prefer a more complex ISA as it becomes easier for them to translate high-level programming abstractions into an efficient concrete hardware implementation and run it through an optimising pass. Simplicity of the ISA does not immediately translate into faster or more optimised code.

Moreover, creating a highly efficient optimiser is no easy feat as it depends on a large variety of factors and the in-depth knowledge of a specific CPU incarnation: the knowledge of each instruction's timing, instruction interdependencies (i.e. the most optimal instruction sequence that does not stall the pipeline or maximises the ALU/whatever utilisation etc etc), L1 I and D cache line sizes, the knowledge of a particular CPU model / generation… and that is just the beginning. There is a good reason why Intel, AMD, IBM, ARM et al publish hefty optimisation guidelines / documents for each generation of their CPUs with the compiler writers in mind: the same sequence of instructions issued for the CPU model 123 might be less (or more) efficient than in the next CPU model 456 and vice versa; moreover (e.g. specifically for Intel vs AMD), a sequence of instructions optimised for a particular Intel generation might underperform on an equivalent AMD model or vice versa. The same is likely to be true for different RISC-V core designs from different RISC-V vendors. This is the reason why -mcpu=XXX and -mtune=YYY exist in GCC and clang/LLVM: efficient low-level code optimisation is a tricky business and depends on a large number of compounding factors.
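To make the -mcpu/-mtune point concrete, here is a minimal sketch (the flags and core names are just illustrative examples of GCC's AArch64 options, not a claim about any particular core):

    // saxpy.cpp - the same source, scheduled differently per core:
    //   g++ -O2 -mtune=cortex-a53 saxpy.cpp   (tuned for an in-order core)
    //   g++ -O2 -mtune=cortex-a76 saxpy.cpp   (tuned for an out-of-order core)
    // Both binaries run on any ARMv8-A CPU; only instruction selection and
    // scheduling differ, guided by the vendor's optimisation guides.
    void saxpy(int n, float a, const float *x, float *y) {
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];  // may be unrolled/vectorised differently per -mtune
    }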

It remains to be seen how optimisation will be tackled in the RISC-V case, with its potpourri of optional ISA extensions and with instruction fusion. Given that a random concrete RISC-V implementation can be a combination of the base ISA plus a random permutation of RISC-V extensions (in the worst-case scenario), emitting code that will run equally efficiently across N different RISC-V core designs appears to be problematic. Instruction fusion can pose another challenge, as a fused macro-instruction may or may not run as efficiently on a RISC-V instance that does not fuse it. There is no guarantee that the compiler-generated / optimised code will be equally performant across different implementations, even for the same set of ISA extensions.


>and the compiler writers actually prefer a more complex ISA as it becomes easier for them to translate high-level programming abstractions into an efficient concrete hardware implementation and run it through an optimising pass.

Itanium, aka the Itanic, demonstrated this is not the case.


What remedy do you recommend against buying those shares, though?


My take is that RISC-V is not that great of an architecture and might never compete at the high end. (e.g. displace ARM, Intel, POWER, etc.)

On the other hand, there is a vast market for low-end CPUs that are specialized to be disk controllers and the like. If the ISA and all the IP required to make specialized devices are free, and particularly if there is a culture in which it is easy to learn how to do that, then RISC-V will have a special place.

My next spatial computing project is going to be RP2040-based, mostly because I have the parts in stock (other projects are blocked because of supply chain issues). But I am really an AVR8 fanatic, and the only path forward I see to higher-performance AVR8 systems is a soft core running on an FPGA, where very hard tasks (vision? comms?) get offloaded to the FPGA fabric. That can be a very appealing architecture where somebody might prefer RISC-V, particularly if the tooling and accelerator integration are there.


> might never compete at the high end. (e.g. displace ARM, Intel, POWER, etc.)

It might also be successful, even in that segment. The RISC-V ISA was specifically designed to scale up (e.g. to OOO and vector processors) while still being very simple, especially at the low-end - a basic RV32E is really comparable in complexity with many real-world 8- and 16-bit chips.


The pretty minimal RV32EC core Ibex is estimated at 15k gates: https://github.com/lowRISC/ibex

But, the similarly minimal Arm Cortex M0 is, drumroll, 12k gates.

Not saying this means Arm is better than RISC-V; such a small difference can probably be explained by some microarchitectural feature or the lack of one. Just that you can scale down Arm as well.

But yes, I can see tiny chips like these putting a squeeze on 8 and 16 bit microcontrollers.


>My take is that RISC-V is not that great of an architecture

That's not the impression I got, when I studied the specification.

Do you care to elaborate?



From your statement, I thought you had looked into it yourself.

I remember reading that ex-ARM employee's take back then, when it was discussed here[0]. It was based on a pre-standard version of the ISA which was already old at that point in time, and it also ignored the rationale behind many of the decisions it criticized.

By objective metrics (simplicity of instruction set, features available, code density, absence of µarch assumptions), I find RISC-V to be the best general purpose ISA available at the present time.

[0]: https://news.ycombinator.com/item?id=24958423


These objective metrics are not even close to being tested at the moment, whereas aarch64 has made noticeably different decisions guided by real life. They might be wrong, but I can't confidently say RISC-V is actually better, other than in simplicity.

If the macro fusions don't work, for example, that's a pretty dramatic blow against RISC-V compared to aarch64. Similarly, code density is a tradeoff, because RISC-V's simplicity means more instructions in the first place; i.e. it requires actual battle testing first.


Jim Keller is overseeing a very large RISC-V core that should be around the same performance level as X1 or Zen 3. I guess we'll see what the ISA can do pretty soon.

He said they were talking about open-sourcing the CPU block. If they did (the CPU isn't their core business), we could see lots of RISC-V competition popping up. The age of CPUs being commodity items seems to be approaching.

https://youtu.be/KOHQQyAKY14?t=494


That is definitely a phat core, but I wouldn't expect it to be exactly trading blows with Zen 3 or Alder Lake (that depends on one's definition of "around the same", I suppose), since the backend seems quite narrow. That might be all they need for AI, however; going to watch the video tomorrow. It has big caches by the looks of things, but the throughput is more like Zen 2, depending on what they can clock it at.

Either way, I don't think anyone is seriously arguing you can't make a high-performance RISC-V processor, but the thing is that scaling from (say) fast at memory and basic integer operations all the way up to all the sometimes wacky shit people want from their desktop machines will probably not be easy. It took Apple decades to do it with an ISA they helped design themselves.


https://images.anandtech.com/doci/15813/A78-X1-crop-23.png

They didn’t mention how many branch units they have, but the ALU count is identical to X1.

They have half the SIMD units, but each unit is twice the width which increases instruction density on high performance workloads.


They are planning to open the small CPU core, the packet processor and the vector processing unit, NOT the actual high-performance RISC-V core.

I also think "high performance" here is relative to their small core.


The bitmanip extension that is scheduled to be part of the RVA22 profile will contain add-with-shift instructions (d = a + (b << c)), which should take care of one of the more common cases that would otherwise need fusion.
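To illustrate with a hedged example (assuming a toolchain new enough to know the Zba extension, e.g. built with -march=rv64gc_zba): indexing an array of 8-byte elements computes base + (i << 3), which is exactly the shape these instructions cover.

    // Without Zba this address computation needs two instructions
    // (slli + add); with Zba the compiler can emit a single sh3add.
    long get(const long *base, long i) {
        return base[i];  // address = base + (i << 3)
    }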


I thought that at first, but the truth is that x86 and ARM are stuck in a certain mindset; their instruction sets are so complicated it is impossible to mix compressed and uncompressed instructions. Once you consider compression, RISC-V is on average slightly denser than x86. Compression will be everywhere by default.

The only really janky part is the fusion of more than 3 instructions. Now that is insane. However, it is like energy storage for renewables: once you have an electric grid that can store days' worth of energy, that's actually extremely amazing; the old way of doing things will be viewed as something cavemen or barbarians did. If macro-op fusion scales beyond 3 instructions, then RISC-V will easily beat other architectures, because 2 is the limit for every other one.


>Once you consider compression, RISC-V is on average slightly denser than x86.

A note for those not following: RISC-V as in RV64GC; x86 as in 32-bit x86.

64-bit x86, aka x86-64, aka amd64, is dramatically worse in code density.


The high end today requires specialized chips. E.g. if you want massively parallel processing, you want small cores, because you can make more of them. RISC-V gives you that kind of flexibility: designing minimalist cores which can then be repeated a thousand times on a silicon die. The same things that are beneficial for cheap specialized hardware are also often beneficial for high-end stuff, where you really need parallelism to push performance.

Whether you are designing a server chip or a machine learning powerhouse, it is all about having lots of cores. To get more cores you need smaller cores, and RISC-V has demonstrated it can deliver cores half the size of an ARM core with similar performance.

RISC-V simplicity really starts paying off once you go very small.


Jim Keller's team is creating a 6-wide issue RISC-V CPU. It should have similar IPC as ARM X1 or AMD Zen 3.

https://youtu.be/KOHQQyAKY14?t=494


Here's an 8-issue design:

https://moonbaseotago.github.io/


There are rumours of Apple being interested in RISC-V. Maybe the MacBook 2030 will feature it.


Apple posted job advertisements looking for RISC-V technical roles.

What they plan to do with them is unknown, but Apple has the cash flow to try a lot of things.

I would expect them to quickly adopt it for embedded applications, and to experiment with making their own high-performance design, including porting macOS and Rosetta. They can afford to do this much, regardless of whether they end up moving their main CPUs to RISC-V or not.


I would imagine Apple actually ships hundreds of devices with processors - each dongle and peripheral has at least one.


Yep. There's surely a bunch of uses of RISC-V that have little or nothing to do with the main CPU, even in a conventional machine.

Trust modules, device controllers, smart adapter cables etc.


For what it's worth, Apple always has people on hand monitoring innovations in technology and testing their stack on new hardware, most of which never sees the light of day.


See OS X on Intel being a thing inside Apple for years and years before they decided it should see the light of day.


NEXTSTEP was multi-architecture even before Apple owned it.


Probably for future research, or if they intend to run a smear campaign against it :-)


Apple had a big part in aarch64 being what it is today so I am quite skeptical they'd just jump ship.

I might be making this up (I can't remember the company), but I'm pretty sure at least one chip designer has moved from RISC-V to ARM. It's not clear-cut.


Making an RPi equivalent would require a GPU with Linux support; I don't see any sign that RISC-V chip vendors are able to do this.

Building a desktop system with a PCIe graphics card would be easier.


There actually exists at least one GPL-licensed GPU core (GPLGPU), but it's based on a 90s PC 3D accelerator that's really just not that good.

I believe the team behind LibreSOC is also working on a 3D processor, but I don't know how far along they are.

However, if a vendor instead ships a decent framebuffer and high-performance vector unit, I think LLVMpipe would not be a problem for most RPi-equivalent use cases, either light desktop use or server use (especially the latter!)


Isn't Imgtec doing that?


Where can I get the source code for the PowerVR kernel DRM driver from?



Nowhere, because it's not available yet. They only made a few announcements.


This has since changed. They've released a DRM/DRI driver.


I follow some of the writings of Nassim Taleb. I came to the conclusion that RISC-V will "succeed" because there's no way for it to fail. It's like Linux.

I don't think licensing fees to ARM are a problem per se; it's the "frictional costs" they introduce. My understanding is that Raspberry Pi developed the RP2040 because ARM had a reasonable package available for its low-end Cortex-M processors, which doesn't apply to the higher-end Cortex ones.

RISC-V doesn't even have these restrictions. Even extremely small outfits can play around with RISC-V designs. Most of them won't go very far, but with enough lottery tickets, one of them is bound to draw the winning numbers.

Sure there are considerable obstacles. I toy with microcontrollers. For hobbyists like me, ARM processors are still the most sensible choice. I'm not expecting any shift from that for at least five years. But who knows after that? Anything can happen.


ARM's less open platform also comes with some advantages though. It's easier for ARM to prevent ecosystem fragmentation and non-standard instruction set extensions. Or to push for migrating to newer platforms like the move to 64bit ARM.

Software is less affected by these issues because it's inherently more flexible. If I install some wonky experimental Linux feature, it doesn't really matter and I can just revert. Different story when this is baked into hardware. The startup costs for independents to contribute to a software-only ecosystem are also much lower. How many organisations have the talent to contribute to RISC-V, but wouldn't be able to afford purchasing a license?

Didn't mean to sound so negative, but I worry that in the hardware space the advantages of such an open design are not as great.


> ARM's less open platform also comes with some advantages though. It's easier for ARM to prevent ecosystem fragmentation and non-standard instruction set extensions.

It's a kind of funny way of looking at the core part of the ARM ecosystem while forgetting how much outside of the CPU is non-standard and undefined. None of the ARM devices share a bootloader, device enumeration, or the plethora of other things needed for an open, non-fragmented OS/software ecosystem like the PC has.

Maybe you can run parts of the same ARM machine code on most devices, but it's not terribly portable, to be honest; it has to be very generic. For example, Android devices end up in a pile of trash because you can't just upgrade the kernel to the latest version on a 1-, 5-, or 10-year-old smartphone without losing functionality, or you get stuck at step 1 for lack of tools, with broken forum links and shady fileshares. So much for software flexibility...


So you're criticising Arm fragmentation but the possibility of more fragmentation in RISC-V is ok?


My idea is that a CPU is just a component, and it's useless by itself without considering the rest of the computer system. ARM needs to dip more into standardising the rest of the picture, and the RISC-V guys could also start looking into creating an open computer architecture initiative/group to prevent further fragmentation.


RISC-V is trying to create a standard platform specification to address some of these concerns:

https://www.youtube.com/watch?v=l2w4cWFpqAA


Agree 100% - Arm could certainly do better.

I do worry though that the RISC-V ecosystem could be really torn in two by a big player (Intel?) who adds proprietary extensions and associated software.


The market is already fragmented, depending on how you look at it. There are lots of custom chips today for all sorts of stuff, each with its own ISA, because ARM cannot be used: it imposes the requirement to implement around 1000 instructions.

With RISC-V you could end up with LESS fragmentation, not more. A lot more of the specialized chips can start using RISC-V with particular extensions rather than inventing their own ISAs.

That has benefits in that one can reuse more tools and code.

I don't see a reason why RISC-V for desktop systems or high-end smartphones should get any more fragmented than ARM. There is already a standard set of extensions (RV64GC) for that class of hardware.

And at least RISC-V has been designed from scratch to handle fragmentation. The base standard gives developers a way to check in code which extensions are supported. It also allows operating systems to trap and emulate unsupported instructions.
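
For instance, extension discovery can look something like this (a minimal sketch, machine-mode only, since the misa CSR isn't readable from user space; on Linux you'd consult the device tree or /proc/cpuinfo instead):

    #include <stdint.h>

    /* Read the misa CSR; bits 0-25 flag the single-letter extensions
       ('A' = bit 0, 'C' = bit 2, 'M' = bit 12, and so on). */
    static inline uint64_t read_misa(void) {
        uint64_t v;
        __asm__ volatile("csrr %0, misa" : "=r"(v));
        return v;
    }

    static int has_extension(char ext) {
        return (read_misa() >> (ext - 'A')) & 1;
    }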

RISC-V is prepared for fragmentation in a way that ARM isn't. Designing for that possibility and having ways to deal with it is different from designing around the idea that strict rules will prevent fragmentation from ever happening.


So ARM's lack of fragmentation means I can install any Android ROM on any ol' ARM device?

Or are ARM devices so crazily fragmented that I need to custom compile for each and every device in existence?


That's the equivalent of saying forking is bad.

The big plus of RISC-V is allowing those who accept the risks and cost to experiment and develop new specialized components. Arm won't let you.


If anything were standardized, Linux wouldn't have its driver problem.


> will "succeed" because there's no way for it to fail. It's like Linux.

Didn't OpenRISC fail? https://en.wikipedia.org/wiki/OpenRISC

Also there are different definitions of "succeed" from different viewpoints. In academia RISC-V is already a big success.


Good point. How is RISC-V better than OpenRISC's ISA?


RISC-V does not have unused/unusable exceptions, and it avoids exposing internal implementation details as much as possible.

To show the difference: OpenRISC has a division-by-zero exception, which is unnecessary - the case can be ruled out by the compiler or, if the compiler cannot rule it out, checked and triggered with a comparison, a conditional jump, and a trap instruction. MIPS did exactly that almost forty years ago.
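
In C terms, the inserted check amounts to something like this (a sketch; checked_div is a made-up name, and the builtin typically lowers to a single trap instruction):

    int checked_div(int a, int b) {
        if (b == 0)
            __builtin_trap();  /* one compare, one branch, one trap insn */
        return a / b;          /* the divide itself never needs to fault */
    }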

OpenRISC also has an (optional) branch delay slot, which complicates out-of-order implementations and even in-order implementations with a memory management unit - what state should you keep to properly handle an MMU exception on the instruction in the branch delay slot?

Alpha AXP, an ISA designed to stay useful for 20-25 years, ditched both of these in the early 1990s. RISC-V, which builds on the experience of RISC-IV (SPUR [1]) and other RISC designs, including POWER and SPARC, also does not sport these atrocities.

[1] https://news.ycombinator.com/item?id=19266152

All in all, RISC-V is an ISA that can be implemented in many different ways with less effort than OpenRISC and, to be frank, most other RISC ISAs.


They did not build on OpenRISC because it was originally mostly just a chip design, and it was 32-bit only.

RISC-V started out with the goal of scaling from very small to really large chips, and of supporting both 32-bit and 64-bit.

OpenRISC did add 64-bit later, but by that point RISC-V was already going.


Mainline Linux kernel support is a fairly big plus.


> I toy with microcontrollers. For hobbyists like me, ARM processors are still the most sensible choice. I'm not expecting any shift from that for at least five years.

I started using ESP32C3s in my projects. They are cheap, powerful RISC-V micros supported by the Arduino SDK. IMHO it's already better than the ARM offerings, but I'm still waiting for good Rust support.


The ESP32C3 seems to have the same friction - closed WiFi blobs? It might get supplanted by something with a more malleable software stack.


Someone told me the WiFi blobs exist because of regulations prohibiting WiFi chipsets from being easily modified to use illegal frequencies; vendors still want to do much of the functionality in firmware so they can have more flexible ASIC designs that keep up with standards changes. If that is all true, WiFi blobs are unlikely to change.


Regulatory issues are one reason. The second is that the WiFi side of things might be some third-party IP they bought, with licensing terms that prevent open-sourcing it. WiFi stacks are complex and filled with edge cases. And every release just makes that moat bigger.


Are there any bigger RISC-V boards around yet? I'd love to goof around making a toy RISC-V operating system, but those ESP32C3s look a bit gutless (400 KB of RAM? Oof.).

Ideally I'd like something specced closer to a raspberry pi.


The HiFive Unmatched[0] from SiFive is a beast of a dev board, but with a price to match ($665 from Mouser). For a smaller Arduino-style board, the HiFive1 Rev B[1] exists as well, but it's more an MCU than a CPU.

[0]: https://www.sifive.com/boards/hifive-unmatched

[1]: https://www.sifive.com/boards/hifive1-rev-b


What?! 400 KB of RAM is a huge amount for an MCU. Arduinos started out with 2 KB of SRAM. Linux also requires an MMU to run, which most MCUs don't have.


Thanks; but I don’t want an MCU. I don’t even know what an MCU is. Arduino style devices aren’t interesting to me.

I want something I can run Linux on, like a Raspberry Pi but RISC-V. I know there were some bigger development boards out there for $600+ a few years back. Is there anything in the pipeline that's smaller, for hobbyists messing around?


There is at least one now called the Sipeed Nezha, costs about $115 on Aliexpress with 1GB RAM. Seems like it's similar to an older RPi.


Oh, thanks for pointing me in the right direction. It looks like they're now shipping $20 development boards with their new chips in the Sipeed LicheeRV - 512 MB of RAM and Linux support:

https://www.aliexpress.com/item/1005003594875290.html


> It's like Linux.

It's unlike Linux in so many ways, licensing being just one really key example.

RISC-V may well succeed but not because it's like Linux.


Indeed - it's not like Linux. It's an ISA spec, not an implementation.

It's more like POSIX or something from that comparison.


If you are making a custom chip, you will still license a ton of different IP blocks that you will pay licensing costs for. What is special about the CPU?

I guess it's nice to have a free CPU core but at the end of the day you want USB, external RAM, a bus to connect all these things and so on.

Maybe we will have free implementations of those at some point but it probably won't be from the current crop of chip designers - there have probably never been more ungrateful, non-reciprocal users of free software.


Right. When you are talking cores, this is all fine. A completely valid topic for academic discussion.

When you are talking chips with peripherals and packages and memory configs, it gets a little more muddy.

Things get muddier still when the conversation moves from the academic to trying to buy a chip and ship a product.

I am aware of a popular chip vendor making a three-core chip next year. Core 0, the fastest and primary core, is a RISC-V; it will be flanked by two smaller and weaker ARM Cortex-M0s or M4s.

They're doing that because they have peripherals established for ARM and almost none for RISC-V.


RISC-V cores are not necessarily free.


I am also a hobbyist, and that is actually what makes me very hopeful about RISC-V. For a non-expert like me, the RISC-V instruction set is actually possible to grasp. Modern ARM is getting way too complex.

Depends on what you do. If you just write code in C/C++, then it doesn't matter. But if you like to dabble with assembly code, RISC-V actually makes that possible.

It may not be as awesome to write as 68k was back in the Amiga days, but RISC-V is the closest to that I have experienced in modern times.
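
For a small taste (a sketch assuming a riscv64 GCC/Clang toolchain, wrapped in inline asm just to make it compilable):

    /* RISC-V assembly is regular enough to read at a glance:
       one operation, one destination, two sources. */
    static long add_pair(long a, long b) {
        long r;
        __asm__("add %0, %1, %2" : "=r"(r) : "r"(a), "r"(b));
        return r;
    }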

In particular I think RISC-V has a great potential in teaching a new generation how programming works in detail. I felt when I went into this industry that it helped being able to actually read the low level code produced by the compiler and understand it. That is getting ever more difficult.

We are reaching levels of complexity where it becomes very hard to give anyone today the kind of bottom-up understanding of how computers work that you could get back in the '80s and '90s.


Which part of modern ARM do you feel is getting too complex? And what is there to gain by making an ISA more understandable to humans? Educational ISAs already exist for learning. The complexity of even something like x86 is not a real obstacle for people who actually need to interact with it.


It is already possible to experiment with Arm IP for free with Arm Flexible Access.


Can you expand on Taleb's message, and what you mean when you say "can't fail"?

Free stuff does fail for all kinds of reasons.


I can't remember the exact criteria that Taleb mentioned that led me to believe that great things might come of RISC-V.

So I'll fall back on the "optionality" argument. Vendors, particularly small vendors, can experiment with RISC-V either for free or very cheaply - more so than they can with ARM. Right there, RISC-V opens up a segment that didn't exist before. There'll be more experimentation. When your downsides are limited but your upside is potentially huge, that's what I mean by "optionality". This is a powerful driver of success.

Naturally, any individual endeavour can fail, but collectively, there's going to be some huge winners.

My prediction is that RISC-V will open up some new category of computing. I can't tell you what it is, who will do it, or when they'll do it, because I don't know. My point is not so much that I can predict what it will be, but merely that there are forces in favour of RISC-V.

I am not, however, predicting the imminent demise of ARM.

I think also that a lot of commentators are viewing things through the lens of what we understand and the received wisdom today. Even - or maybe especially - experts tend to be hemmed in by what is "obvious".

RISC-V is like a genie that's been let out of the bottle. You can't put it back in.


Vendors can experiment with Arm for free with “Arm Flexible Access”!


The "minion cores"[1] idea from lowrisc.org might be a good early example. You can find something similar for one ARM chip I know of (TI Sitara with PRUs), but only one that I know of...perhaps because it's expensive to experiment like that, as you mention.

[1] https://lowrisc.org/docs/memo-2014-001-tagged-memory-and-min...


Companies fail. Information doesn't.

By virtue of being open source, RISC-V won't vanish, no matter how many companies choose to adopt it as their ISA and fail.

Seems like, given enough time, one of those companies will just not die, and end up getting big.


There are a lot of open-source projects; most of them fail. I don't think the availability of an OSS ISA is enough to overcome the friction of adopting a brand-new technology, unless it enables something that competitors don't. And it doesn't seem to do that w.r.t. ARM (so far).


If I write a free operating system, nobody will use it except me. When I lose interest literally zero people will care that it exists. Hard for me to see how that would be a success no matter how many copies of the useless bits reside around the world.


You absolutely will (almost certainly).

But if you take the set of people who have written free operating systems, there's at least one big winner.


I work with RISC-V these days. Many AI chips are using RISC-V.

A non-technical perspective on RISC-V: China is leading to some extent in RISC-V land (SiFive works with HiFive in China, Alibaba released its own RISC-V core, etc.). China could not license x86, could not acquire MIPS, and is now trying to get part of ARM (the Chinese ARM company is fighting with ARM). The country desperately needs a CPU core it can "own" at will, and RISC-V fits perfectly.


Did they give up on Loongson? I thought they already had a domestic core architecture.


No, they are even working on upstreaming support for LoongArch and getting distros to support it. For example:

https://wiki.debian.org/LoongArch https://gcc.gnu.org/pipermail/gcc-patches/2021-November/5855... https://lists.llvm.org/pipermail/llvm-dev/2021-December/1543...


That was just MIPS without paying licensing fees. MIPS has lost most support and I doubt China wants to foot the bill for maintaining everything. RISC-V allows them to offload most of the hard software parts to other countries.


...and RISC-V is almost a clone of MIPS anyway, so no surprise there.


In what way is RISC-V a clone of MIPS?

There are VERY different ideas, most notably the use of variable-length instructions rather than a single instruction width (raising the question of whether RISC-V is actually RISC).


An argument could be made that the instructions look similar. Very RISC.

Their encoding is completely different, and that's what matters.


We run a cluster of RISC-V CPUs based on open source designs to schedule and post process workloads for our edge accelerator ASIC.

10/10 would do it again, except this time we may pay SiFive or someone like that for something requiring less "customization".


Interesting, can you say anything about what sort of chip to chip communication is used? AXI, Wishbone, plain old serial?


AXI primarily.

But, I should also mention, we only use the RISC-V cores as a pre/post-processor, scheduling engine, service processor. We have custom hardware that does the bulk of the inference math (we are a convolutional accelerator with a number of constraints traded off for speed and power). The fabric itself is driven by engines that are programmed by the scheduling engine (RISC-V).

Happy to answer more specific DMs.


Can you DM on this site?


Yikes, no! But hit me up here: shaba@recogni.com.


Do you have a website / product page?


I don’t like this article because it is very scarce on data.

You are succeeding? I've heard this each of the last four years.

Show me the numbers: shipping units, examples of ARM designs getting displaced by RISC-V.

This article is very light on proof. It is just RISC-V talking points.

Don’t get me wrong, concept is fine. But at this stage, after hearing about this for years, where is the traction? Prove it.


I see too much cheerleading for RISC-V.

Apple is not going to adopt it, ARM is still fighting against the x86 Imperium in the desktop/server market, and the mobile ecosystem of tooling, compilers, libraries, and build processes just isn't there.

If RISC-V manages to win the education market previously targeted by MIPS, plus microcontrollers, that is already quite good.


Apple is not opposed to RISC-V: https://jobs.apple.com/en-us/search?search=risc-v (3 jobs available)


The CPU is just one of dozens of processors in any given desktop/server. Thanks to SSD and HDD controllers alone, a significant portion of the processors in servers are ARM (and RISC-V candidates).


Which are meaningless for most people. The last time I cared what CPU was driving the hard disk, MS-DOS still ruled the PC world.


It's meaningless for you to consider, but it'll have an impact. As ARM took over the controller world, it gained the volume and engineering effort to move to phones and mobile... and now Macs and servers.


That's part of the reason ARM will be very hard to displace. Just like there is a ton of x86 code out there in the server world that's not going anywhere. There is a ton of ARM code (especially peripheral driver code) out there in the uC world. And much, probably most, of it is not open source and not actively maintained.

The good news is compute and memory are getting so cheap that it's probably not long until RISC-V cores just emulate ARM and solve the problem, all inside a greeting card at a profit...


We've moved past this for the most part. Anything running on an ARM will be almost entirely written in C. At that point, porting to RISC-V becomes much cheaper. As long as the savings from RISC-V exceed the cost of porting the firmware, this is what companies would do.


The code may have been written in C but many drivers are distributed in binary only form with headers. Unfortunately the binaries are often ARM only.


Indeed: 37 years after ARM was founded, servers are hardly impacted by it, and Macs are meaningless for 80% of the desktop world - which, by the way, is still Intel where the Mac Pro workstation is concerned.

So cheerleading for RISC-V to overtake in a couple of years what took ARM 37 years to achieve is really flying high.


AWS is moving big into ARM, and Macs are just the first consumer computers to move - not the last. The story isn't over yet.

> so cheerleading for RISC-V to overtake in a couple of years what took ARM 37 years to achieve, is really flying high.

Oh yeah, I agree it won't be a few years. It'll be many years - but it'll move faster than ARM did, just because it's so much easier to do today than 37 years ago.


Macs were also the first consumer computers to move to PowerPC.

I can hardly execute Mac software as old as some of the stuff I have on Windows floppies; backwards compatibility is what keeps x86 ruling the desktop.


All JavaScript-based web apps run on ARM Macs and Intel Macs without any rewriting or migration - not even a compat layer like Rosetta. When a RISC-V OS gets a major browser, most people (maybe not the HN crowd) will be able to do their day-to-day work on it.

Yes, stuff from 2000 doesn't run today, and yes, it's unlikely that web apps will age well, but it does change how the average user thinks about software. A sufficiently large number of people don't care about backwards compatibility, and Apple's graveyard of un-runnable software proves it.


It's meaningless to us as software developers, but I'm sure ARM likes having customers buying their low-spec designs. Not every product has to be a top-tier, high-performance CPU design.


Indeed, and not every RISC-V CPU needs to be 100% open source.


IIRC Western Digital already switched to RISC-V for their storage controllers.


They do, and they affect the industry as much as whatever CPUs they were building in-house before the switch.


I would argue it already has. In 2017/18 in my last year of undergrad, our computer architecture class used RISC-V as the ISA to teach with. Granted, we used the book written by the designers from Berkeley, but our old book was MIPS and also written by them.


Intel’s support for RISC-V marks a technological and cultural shift https://semiengineering.com/which-processor-is-best/


Apple has demonstrated that it is possible to run x86 software on another architecture without any major loss of computing power. This has opened up possibilities not only for ARM-based designs, but for RISC-V and others as well. I believe that architectures like x86 and ARM will be a thing of the past in the next few years as companies move to produce chips that suit their needs instead of paying royalties to Intel or ARM.


Apple added non-standard ARM extensions to make that possible. It could be done on other architectures, but it doesn't come for free.


FWIW, RISC-V has the Ztso "extension" (total store ordering, matching x86's memory model) in its standard.


> Instead of paying royalties to Intel or ARM.

I totally expect Intel to offer a RISC-V chip in the next decade. Made by them, competing with ARM for low power.


I totally don't, unless they can create a nice moat. Intel doesn't play well if it can't prevent others from playing in its space.

Perhaps they'll try to do another WinTel combo with RISC-V? Sounds like MS can just do it on their own.


I think the goal would be to sell packaged up RISC-V IP for manufacturing on their fabs (to chip designers). If they want people to use their fabs, they'd do well to provide some pre-made IP that works well that can be included easily in bigger designs.

The goal is to make money as a fab, not make money as a chip seller to OEMs.


Is there a platform similar to the RPi but with a RISC-V CPU? (I don't care about the GPU, so a headless one, but RJ45 and a few USB 3 ports would be great.)


SiFive did have one in the past (the HiFive Unleashed). They still offer an Arduino-like board for $59.

See: https://www.sifive.com/boards/hifive1-rev-b

The more RPi-like one actually available seems to be the Nezha, available on AliExpress for around $99-170 (and in other places too):

https://fr.aliexpress.com/item/1005002668194142.html

A short review here: https://www.cnx-software.com/2021/05/20/nezha-risc-v-linux-s...

And an extensive wiki page: https://linux-sunxi.org/Allwinner_Nezha

Finally, there also seems to be a BeagleBoard (coming soon, or vaporware?) in association with the Chinese chip-maker StarFive:

https://beagleboard.org/static/beagleV/beagleV.html


How about the https://www.espressif.com/en/news/ESP32_C3 ? Not sure if it supports USB or Ethernet (IIRC previous versions supported Ethernet and also USB to some extent). This would be less powerful than an RPi, but on the upside it's available right now on Amazon for about $20 with good support.


"Sipeed Nezha 64bit RISC-V" may fit. https://de.aliexpress.com/item/1005002856721588.html


Kind of expensive though; its cousin[1] is more appealing to me, though I haven't ordered one yet.

1: https://www.aliexpress.com/item/1005003594875290.html


Sipeed Lichee RV. At around 28 euro with a dock it's pretty unbeatable. (For a RISC-V)


How about the Starfive VisionFive [1]?

[1] https://starfivetech.com/en/site/exploit


RISC-V will succeed because countries like China, India, and Russia need a CPU ISA that isn't directly or indirectly controlled by the U.S. government.

Somehow, they're also too stupid to roll their own.


Russia has the Elbrus VLIW architecture, which is quite good at the computations common in military applications (it has roots in specialized DSPs), so they are certainly not "stupid". But the problem is that developing an ISA is a relatively small part; most of the investment and value lies in the compiler and software infrastructure around it. One of the biggest issues with Elbrus is the crappy proprietary compiler, which cannot fully unlock the hardware's potential. They also have to compile and debug Linux distributions and packages for it themselves in-house, which adds to costs as well.


A VLIW processor that suffers due to an insufficiently smart compiler? Gee, haven't heard that one before.


The whole premise of RISC-V is that the ISA itself is a commodity and there is no value in creating yet another proprietary ISA, so it makes sense to have it freely available to anyone and pool resources around the software ecosystem.

So China/India/Russia/EU/whoever are in fact smart in choosing RISC-V rather than reinventing the wheel for no benefit.


>Somehow, they're also too stupid to roll their own.

Or smart enough to avoid doing that.


What would it take for Apple to switch to making RISC-V CPUs? I mean, wouldn't it mostly be a change in the instruction decode logic? Once you are past the decoder, aren't most conventional chips (e.g. not The Mill or many-core designs) basically the same logically?


They've already got a lot of ARM licenses set up as perpetual ones, a lot of expertise, and an already-deployed ARM toolchain. So there's no reason for Apple to switch to Apple-manufactured RISC-V over Apple-manufactured ARM, and given how Apple got burned by Intel's stagnation from Sandy Bridge until Alder Lake, I'm not sure there's a run of success long enough for another company to convince Apple to drop their own control in the medium-term future.


I meant specifically: besides rewriting the instruction decode unit, and maybe tweaking the memory access to meet some technicalities, what would be involved in taking the M1 and making it support a different instruction set?


That has very little to do with Apple. What you're really asking is how hard would it be to add support to a CPU for a second instruction set.


Well, Apple was only an example of a company with the money and perhaps the inclination to do so. But no, it isn't really an Apple specific question.


Not really worth pondering. Apple has enough funds to do it just as an experiment, without much care about cost.


Key quote from an excellent and well balanced article.

> I don’t believe the success of RISC-V is because it is cheap or lower cost. If you just want to do the same as you can get with an Arm core, you absolutely should just buy an Arm core because it’s so well verified. It’s so well designed. It’s exactly what you want. The only reason for using RISC-V is because you want the freedom to change it and add your own things to it.

RISC-V is great and important as an enabler of innovation and experimentation. As a like-for-like replacement for the cores in your smartphone, less so.


If someone really believes in RISC-V's long-term success, is there a way to bet actual money on the outcome? By this I mean buying shares, options, or other instruments.

Not looking for investment advice; I myself do not know enough about investing or RISC-V. I'm just curious how one can think about making money from this type of development that may impact a whole part of the industry. I guess one could bet against Intel, but are there other approaches?


Hope we can get a performant 64-bit RISC-V desktop CPU in the not-too-distant future. This could be a significant inflection point in computer systems.


It will succeed even further after China solves its fab issues. Considering how the West overplays the technological-sanctions card, many countries will jump on this bandwagon as soon as RISC-V solutions are more or less competitive. We can already see it in the embedded space (even Russia has a sizable stake in RISC-V), and I expect we will see reasonable high-end solutions in 5-10 years.


I think the last point in the article is super interesting and under-discussed here - now that academics and engineers have a wealth of free core IP, will we see more open development in the EDA space? Yosys and the ecosystem around it certainly points this direction, but there's a LOT to the EDA space.

Are universities pursuing academic EDA development, too?


I'm itching for a RISC-V version of the Pinebook Pro. I'd drop some decent coin for whoever makes that available.


Does this mean that we can have completely free computers soon?


No

(People forget about foundry IP, miscellaneous analog bits of CPUs, the actual physical cost of manufacture, and the question of what "free/libre" actually means for hardware.)


I think it's interesting even just to try to define an "IP-free computer" in a clear way. Like presumably we don't care if minor components like resistors or screws are proprietary, because it's trivial to replace them with a different supplier. Or taking it to the extreme, obviously we don't care where the steel in the screws comes from. Steel is steel, even if being a steel company requires tons (kilotons?) of proprietary machinery. It's hard to even figure out where the screws in a computer come from, much less the steel.


Steel is steel, but there's a whole question of "rare" minerals that Fairphone was looking at. Tantalum capacitors require a mineral that's sourced from the warzone in the DRC!


Depends whether you mean "free" as in "zero financial cost" or "free" as in "all the RTL, firmware, and software source code is available to anyone with a permissive license that permits derivative works".


RISC-V is an open ISA; it has absolutely no legal influence over the openness of a chip that implements it.


It has a lot of influence, actually, simply because it makes an "open source" chip possible.


https://libre-soc.org/ is going for free hardware with free tools, but the value of "soon" might be quite high. There was a previous item on 180nm silicon: https://news.ycombinator.com/item?id=27772066


You can have them yesterday (RISC-V or otherwise), if performance isn't a concern and you're OK with deploying said computer inside an FPGA.


The fine article points out that RISC-V is succeeding because it allows businesses to create processors customized with their own (no doubt proprietary) extensions for their own needs. So unless somebody wants to pay the immense engineering costs to create a completely libre processor, no.


Yes


Think for a minute about the energy inputs, the materials inputs, the transportation inputs, the intellectual-work inputs, the software-architecture advances...

Asking for "free" in the face of these factual inputs appears to echo the laziest and most gluttonous of human instincts. Really unproductive - spoken as a person who has tried to confront the excesses of American markets, e.g. software patents, rent-seeking models, etc.

Also spoken as a person who regularly defends "free as in freedom, not free as in beer"... this is a call for free beer in the worst form. Really unproductive.


I think the GP meant completely free as in speech: a computer made of 100% open-source components, all chips with Verilog, all masks etc. for the PCB... and I think that is a long way off.


The AMD64/x86_64 ISA patents expire in 2023. This makes RISC-V kind of pointless, since x86 has a vastly superior ecosystem. I would prefer a solid x86 system without IME, Pluton, secret microcode, an EFI cripple-bootloader, and all the other surveillance, anti-freedom crap over RISC-V any day.


The 32-bit x86 patents expired a while ago, and no one has done anything with them (as far as I can tell). It's just really hard to implement an x86 CPU, and there probably isn't much motivation when they're so ubiquitous and cheap already.


VIA technically still exists and makes x86 CPUs, mostly as part of China's efforts to get off of foreign technology dependencies: https://en.wikipedia.org/wiki/Zhaoxin


AIUI, there's quite a few new x86 systems targeted at industrial/embedded use and retrocomputing. I've also heard of at least one modern core which implements i586.


There are reasons why one would want to implement a RISC CPU over x86-64, at least as a small team. Another problem is that many applications today assume the presence of at least SSE if not AVX2, which have been around for close to a decade or longer. A patent-expired CPU will always have to play catch-up, unless the architecture being cloned has stopped being developed (as SuperH had, which J-Core based its design on).


AMD64 contains everything up to SSE2. AVX2 has only existed since 2013. I doubt many applications require it.


I think most of today's software runs on any AMD64 CPU. That said, this is changing. GCC and LLVM developers have introduced aliases "x86_64-v2" through "x86_64-v4", with the non-existent "v1" being the base spec as defined by the Athlon 64/Opteron.

v2 adds for example SSE4.2 and SSSE3, present since Intel Nehalem (first-generation Core i7 etc). I think other families support v2 since Intel Silvermont (Atom), AMD Bulldozer (FX), and AMD Jaguar (AMD E-series). I think one of the bigger losses would be AMD Phenom/Phenom II.
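
For what it's worth, you can probe part of the v2 feature set at runtime with GCC/Clang builtins on x86 - a rough sketch:

    #include <stdio.h>

    int main(void) {
        __builtin_cpu_init();
        /* SSE4.2 and SSSE3 are among the features x86_64-v2 adds over baseline */
        if (__builtin_cpu_supports("sse4.2") && __builtin_cpu_supports("ssse3"))
            printf("covers the SIMD side of x86_64-v2\n");
        else
            printf("baseline x86_64 only\n");
        return 0;
    }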

I think Red Hat wants to require x86_64-v2 starting with RHEL 9. Arch Linux seems to also have considered it, but has seemingly since decided to support both v1 and v3, to provide both a bigger speed boost and better compatibility with old PCs.


I too remember when AMD and Intel simply gave up on filing any new patents in the early 2000s


Do you run many programs that need anything newer than SSE2? I can name one, I think.


x86_64 is much harder to implement though.

The superiority of the ecosystem is debatable. I guess you mostly have proprietary software in mind? (including Microsoft Windows). That will improve with time.


Are there plans by any vendor to develop/implement/sell any reasonably powerful x86 processor without IME/PSP?


It depends what you mean by "reasonably powerful" --- there are a handful of small Chinese companies with x86-cored SoCs, and of course this:

https://news.ycombinator.com/item?id=25589081


Do you have a source for that? The initial announcement was in 1999. Did they file for the patents later?


Too much focus on ARM OR RISC-V. The processor space is incredibly diverse. There are tons of embedded applications that don't require a massive software ecosystem. The Tensilica, ARC, and CEVA ISAs have all shipped in the billions, sitting next to ARM inside SoCs.


What scares the hell out of me is that it would be so easy to backdoor an implementation of the open core in a well-hidden way.

Whereas the "big" CPU providers are staking their reputation and therefore future business on providing a non-backdoored CPU, it would be fairly trivial for an individual device manufacturer to provide a backdoored CPU design for their chip design.

It could become the whole cheap-device OEM firmware situation all over again (as we saw with many backdoored routers), but this time the blob is located on-die, so it is significantly harder to reverse engineer or audit.


The main providers already provide backdoored CPUs - Intel ME and AMD PSP.

There is a general belief that only some good guys have the keys. I don't know what it is based on.


This is a problem with common thinking these days: the "general belief that only some good guys have the keys". I paid for the device; I should get the keys!


If you care about security, you probably don't care about performance. High-performance cores are very complicated and hard to follow. But, if you give up performance, you can design a core that uses very simple concepts, making it hard to hide a backdoor in the design. Things like Chisel, which let you write your design in a higher-level language, help with that too.


> it would be so easy to backdoor an implementation of the open core in a well-hidden way

If anything, it's way more difficult than doing so on a closed core.

> It could become the whole cheap-device OEM firmware situation

If you think high-end proprietary routers were not backdoored, think again.


It completely depends on your definition of success. If you want an open ISA with open implementations (maybe on open process nodes) that are available, cheap, and running open firmware, we aren't there yet. The closest to that seems to be SiFive, but they don't have open firmware and I don't think they do open implementations.


I have a question about RISC-V: why is the opcode on the 'wrong' end of the instruction? Has there ever been ANY machine that puts the opcode in the low-order bits?

It doesn't really matter - but why? Most RISC-V decisions were made for some good reason. (Also, I'm not convinced RV32E makes sense anymore.)


It's a little-endian architecture, so the low-order bits are found "first", directly at the insn address. This matters because of the variable-length insn support.


Yes, and RV32 is 32-bit; although addresses access bytes, the hardware I've seen generally fetches at least a word at a time. Such a word may be two packed instructions, true. But I still see no advantage to putting the opcode in the low-order bits.

Now, there is a bit-serial implementation of RV32I, which may indeed benefit from this bit ordering. However, optimizing for an unusual hardware approach (bit-serial) seems sub-optimal.

In other words, I would hope there would be some demonstrable benefit to putting the opcode in the low-order bits when memory fetches are at least 32-bits wide in even the simplest machines.


You have to put the length encoding within the lowest-order 16 bits because of the little-endian arch and the variable-length structure. Might as well put everything related in the very lowest bits. Not sure what your objection to that is. Besides, common insn formats do have funct* bits that are essentially part of the opcode and are found elsewhere in the insn words. The whole arrangement is meant to conserve encoding space as far as possible.
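
Concretely, a decoder can find an instruction's length from just the low bits of its first 16-bit parcel - a sketch covering the encodings in common use:

    #include <stdint.h>

    static int insn_length(uint16_t first_parcel) {
        if ((first_parcel & 0x3) != 0x3)
            return 2;  /* bits [1:0] != 11: compressed 16-bit instruction */
        if ((first_parcel & 0x1c) != 0x1c)
            return 4;  /* bits [4:2] != 111: standard 32-bit instruction */
        return 0;      /* 48-bit and longer encodings, reserved/uncommon */
    }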


Popularity often decides which tech succeeds. I can see RISC-V becoming a huge player in large cloud infrastructure, but I would probably put my money on ARM at this point for most things. What is technically best is rarely what wins.

That being said, I am still holding on to my goddamn LaserDisc player until the end!


I feel like The Mill is an architecture that deserves to succeed more than RISC-V, if only they would take the RISC-V openness approach instead of patenting it.

https://millcomputing.com/


The Mill architecture has shown no sign of being commercialized. I honestly don't know why anyone would care about it. Sabotaging RISC-V for the sake of this isolated island project is a very weird trade-off. I'm sure adopting RISC-V will save a life or two. With the Mill project, you're unlikely to ever see it manufactured at scale.


Mill has been slideware for as long as I can remember, and I don't see that changing. There are some interesting ideas there that I think would be interesting to explore for comparch researchers, but I have very near zero faith in Mill Computing ever producing something people can buy and use.


Why was the title changed?


Are there any RISC-V laptops on the market? I would love to get my hands on one if they're available.


Just curious, is the RISC-V arch better/worse than ARM?


That's a very hard question.

First of all, it depends on what your goal is. Do you want to create a high-performance general-purpose CPU? Do you want to create a power-efficient CPU? Maybe a special-purpose accelerator? How about a low-cost microcontroller?

Second of all, none of the above is directly dictated by the ISA, but rather by the hardware implementation (think AMD Zen vs Intel Core, or the embedded ARM Cortex-M4 vs the higher-performance Cortex-A53). An ISA can, however, more easily accommodate certain goals (RISC-V, for example, is very easily extensible, so if you want to create a special-purpose CPU, it might be right up your alley).

Finally, software also matters a lot. Compiler quality is very important to the resulting performance. I think mediocre compiler support and insufficiently advanced compilers are among the reasons Intel Itanium never took off, for example. Itanium was a very complex ISA and therefore harder to exploit to its full potential.


Yes.


It depends. For some things, yes; for others, no. This is the best book ever on RISC-V (I think): https://www.amazon.com/RISC-V-Reader-Open-Architecture-Atlas... and it directly compares ARM, x86, and RISC-V to each other. As you'll see, "it depends".


> RISC-V is succeeding MIPS

Fixed it for you. But hey, it's not like there's anything wrong with that!


This is one of the cases where HN's attempt to remove clickbait from article titles hurts the meaning. An article titled "Why RISC-V is succeeding", like this one, would lead me to expect an overview of the positive contributors to RISC-V's growth, which this article delivers.

An article titled "RISC-V is succeeding" would lead me to expect an article focused on market share over time, which this one is not.


It can't succeed fast enough. I'm sick of space heaters posing as x86 laptops, and I refuse to voluntarily buy into the Apple ecosystem, even if they do some things well. How far away are we from competitive laptop/desktop-quality RISC-V chips? It seemed like there were a few extensions that everybody was waiting on, but now most of them are complete, and we still haven't seen anything of substance.


> I'm sick of space heaters posing as x86 laptops

Then buy AMD? There's choices here, and AMD's laptop CPUs for the last few generations have been very power efficient. See for example today's Anandtech coverage of the latest Zephyrus G14: https://www.anandtech.com/show/17276/amd-ryzen-9-6900hs-remb...

It's nonsensical to think that RISC-V will somehow fix this, since it's a heck of a lot more about execution quality than ISA (fine-grained clocking, power gating, sleep states, etc... as well as just an overall efficient cache & instruction execution design).


Consider how long it took the ARM industry to produce a single decent laptop/desktop processor - the M1 - after ARM64 was published. And it has yet to be matched by any other company.

Odds are good we're going to be waiting for a while.


ARM started at a time when Moore's Law still held strong, so it was much harder to catch up. The incumbents can't double their products' performance every 18 months anymore. So RISC-V can have an easier time catching up than ARM did, one hopes.


What's the reason the law no longer holds?

Physical limitations, or did Intel just bust with their research?


During a CPU cycle the entire chip needs to stabilize; new voltages come in as inputs, and some time is required for the outputs to stabilize to the correct values. This time cannot be lower than size_of_chip/speed_of_light, and we are already quite close to that limit, but the chip cannot get much smaller, as the transistors are already near atomic scale.

Breakthroughs are now architectural: if someone figured out a way to implement a CPU with half as many transistors, we could get a ~30% increase in clock frequency.
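
As a rough back-of-the-envelope illustration (assuming an idealized ~2 cm die and vacuum light speed; real on-chip signals are slower and gates add delay):

    t_min = 0.02 m / 3e8 m/s ≈ 67 ps  =>  f_max ≈ 15 GHz

Practical clocks sit well below that, since signals travel through wires at a fraction of c and pass through many gate delays per cycle.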


> space heaters posing as x86 laptops

The instruction set has nothing to do with this. Apple has better numbers because it uses the most advanced manufacturing process.


You can certainly argue that's because of other design choices (optimizing for high IPC at low clocks by trading off space/etc) but it's certainly not just because of their manufacturing process. Both the A15/M2 architecture and AMD Zen4 will be on TSMC N5P this year, it's pretty doubtful that AMD will be able to catch Apple in IPC or perf/watt imo.

And AMD is currently the more "low clock/high IPC" of the two major x86 vendors, so they're the easiest comparison, Intel is really clock-focused still (see: today's Anandtech review comparing AMD 6000/Zen3+ against Alder Lake Mobile and note the power scaling numbers).

https://www.anandtech.com/show/17276/amd-ryzen-9-6900hs-remb...

Anyway Jim Keller's "I'm sure x86 isn't dead yet" aside, it seems undeniable that tweaking the instruction set to enable deeper reorder and scale the frontend wider should have performance benefits. Saying out-of-order depth doesn't matter is like saying that code density doesn't matter, or speculation depth doesn't matter. These are things that can be simulated, or measured on real-world code, I don't have numbers at-hand but it seems self-evident that there are metrics that those tweaks should improve on.

It doesn't mean x86 is at the end of the line, but if you told me that, against a 50-year legacy ISA (even one that's been cleaned up a lot over the last 30 years), a clean-sheet design had maybe a 10-20% performance, perf/watt, or area/watt benefit due to "intentional design" (aka tuning to the task) and the knowledge of hindsight - I have no reason to dismiss that as facially untrue.

10-20% is still "competitive", so Keller isn't wrong, but it also still puts x86 at a disadvantage in the long term. That's a generation of memory (Zen3+ got 12% moving to DDR5) or most of an architectural generation or maybe a half-node step that x86 would have to stay ahead to maintain parity (not just competitiveness).


This article suggests that on a Haswell, decoders consumed between 3 and 10% of the power, depending on the workload tested: https://www.usenix.org/system/files/conference/cooldc16/cool...

On newer, wider, and deeper designs this number is most likely smaller, and of course the decoders on Arm or RISC-V consume more than 0% too. So most likely, all in all, the "x86 tax" in terms of power consumption is in the low single-digit percent range.


Haswell is a much narrower architecture than A14 though - 4-wide decode vs 8-wide for Firestorm. So it's 3-10% for an implementation that does significantly less than Apple's. And the problem is that x86 has geometric complexity as you go wider, so it consumes significantly more if you want to go as wide as Apple did.

You're saying "nothing wrong with an n^2 algorithm, it works fine for us with n=1000", when you're comparing against someone who is showing results for n=1M. Obviously it would be better for someone in the latter situation to use an n-log-n algorithm. Not real numbers or transistor order-complexities, but you get the idea, x86 has asymptotically worse transistor complexity than ARM as you increase the decode width, and that is true.

Also, that number is for a full-fat Performance Core. If the rest of the core shrinks, the decoder gets relatively bigger, unless you reduce its performance too. So the amount of decoder area (as a % of the overall core) is worse on efficiency cores than performance cores, because there's less "everything else" there.

(likely you will reduce the decoder somewhat, Goldmont and Gracemont both use a 3-wide instead of 4-wide, but geometric scaling means that it's much more expensive to scale wider than you would save from scaling down. And the reliance on 'tricks' like instruction cache likely won't scale nearly as well - it's the same amount of "hot code" for many tasks, regardless of how many threads are running in it, because most of the work happens in certain hotspots. The increased overhead of decoding could, among other factors, be one of the reasons why Gracemont e-cores are not particularly appealing in perf/area. They are very solid on performance but the area scaling isn't all that great compared to hyperthreaded Golden Cove Performance Cores.)

Also that's really just looking at one part of the processor in isolation. OK, so the decoder only consumes 3-10%, but that doesn't say anything about how it limits the ways the rest of the processor to scale. Everyone loves car analogies, it doesn't matter how big an engine you put in the car if it has a rinky-dink air intake that can't provide enough air to fully utilize its displacement. Looking at "the size of the air intake relative to the size of the engine bay" doesn't give you a realistic picture of how it affects the ability of the rest of the design to scale. A turbo might allow you to scale the rest of the engine up much larger than a naturally aspirated one.

(or maybe the fuel system might be a better analogy...)

As far as the instruction cache goes - those tricks can be used on ARM architectures too (and I believe Apple does?); they aren't themselves a substitute for a scalable ISA, they're treating the symptom. They may do more on an architecture that is more bottlenecked in that area, but they give you at least some speedup on everything. Treating the symptoms is really orthogonal to the question of how much an ISA limits decoding width or reorder depth in itself.

Anyway, x86 isn't Itanium and it isn't going anywhere, but it strongly looks to me like ARMv8 is on a solid foundation that solves some weaknesses of the x86 design. It's a clean-sheet "how would we do out-of-order/speculation if we could design our instructions over again" and it largely achieves that goal IMO. I don't see it as an inherent law of the universe that "there must exist enough tricks for x86 to match the performance of any competitor". There's always lots of metrics to look at, and in many cases Apple is designing for fundamentally different metrics than AMD/Intel, but I don't quite get the reluctance people have to admit that ARM is going to be better than x86 in some of those metrics, even on iso-node (like A15 vs Zen4). And this is like, the core metric that aarch64 was designed around.


> You're saying "nothing wrong with an n^2 algorithm

Quadratic scaling is certainly not ideal, but in the grand scheme of things OoO processors have other components that consume more power as well as having quadratic or even worse scaling. For instance, the issue queue, the ROB, multiported register files, bypass networks.

Hence, when you go to a wider and deeper OoO design I claim that the relative importance of the decoder will decrease.

> If the rest of the core shrinks, the decoder gets relatively bigger,

In the previous paragraph you're arguing that as the core gets beefier the decoder will get relatively bigger. Now you're arguing the exact opposite! So which is it?

> As far as the instruction cache - those are tricks can be used on ARM architectures too (and I believe Apple does?), they aren't themselves a substitute for a scalable ISA, they're treating the symptom. They may do more on an architecture that is more bottlenecked in that area, but they do give you at least some speedup on everything. Treating the symptoms is really orthogonal to the topic of how much an ISA limits decoding width or reorder depth in itself.

I'm quite sure pretty much any processor outside of some really tiny microcontroller will have an instruction cache. But maybe you're talking about micro-op caches. So yes, they are in a way treating the symptom that it can be more efficient to cache decoded instructions instead of decoding them over and over in a loop. But so what, if it works use it! Might as well claim that caching in general is cheating instead of just having faster memory.

> looks to me like ARMv8 is on a solid foundation that solves some weaknesses of the x86 design. It's a clean-sheet "how would we do out-of-order/speculation if we could design our instructions over again" and it largely achieves that goal IMO.

I fully agree, ARMv8 is certainly pretty much a 'best practice' general purpose ISA. I just don't think it's such a dominating factor. If ARM or RISC-V eventually take over the world I think it'll be more due to different licensing/business models/etc rather than the superiority of the ISA itself.

> I don't see it as an inherent law of the universe that "there must exist enough tricks for x86 to match the performance of any competitor".

Indeed there is no such inherent law, but so far looking at the history of microelectronics it seems that with a modest transistor budget penalty and some elbow grease you can make almost any ISA fly.


Comparing decoder width for x86 vs ARM is really apples-to-oranges. One x86 instruction may do the work of several ARM ones. All the OoO stuff happens at the uop level anyway.


That's true if you take the strict textbook definition of "RISC" and "CISC" but everyone has been saying for 10+ years that really isn't how things work anymore. ARM has pulled a lot of the "useful goodies" out of CISC ISAs and CISC ISAs have gone to RISC micro-ops internally on a lot of things. And x86-64 is really not all that dense anyway either.

Also, a huge amount of instructions end up being used very little, there are a few "hotspot" instructions that make up a huge percent of actual instructions executed.

https://arxiv.org/pdf/1607.02318.pdf

Measuring code density on SPEC2006, ARMv8 has a geomean 6% more instructions and a geomean 12% higher code size. But Apple's designs use 4x the decoders-per-thread in their Performance Cores compared to Intel (8 decoders for 1 thread for Apple, vs 4 decoders for 2 threads for Intel).

Apple's designs are targeting much, much higher decode performance even considering the lower density of ARM (4/1.06 = 3.77x as much "normalized decode performance per thread"). Which shouldn't be surprising as their IPC is around 3x of modern x86 processors as well according to Anandtech's work on M1. They aren't magic, the code runs about the same on their processors too, they're just designed for much wider layouts than x86 and using super deep re-ordering to get that performance into a single thread.


This argues even more against x86 as it translates to an internal ISA.

In effect, the argument becomes that decode-to-risc-cost + risc-execution-cost is always going to be bigger than risc-execution-cost.

And this is before you consider the effects of things like looser memory ordering or more registers reducing unnecessary MOVs.


But it does. x86 is hell to decode in parallel, because it's variable length (and in a complex way), so Intel can't build CPUs that keep a wide pipeline fed with instructions efficiently like Apple can, with the 8-wide decoder in the M1. That's one of the things that makes the M1 special, and how it gets away with lower clocks and power consumption than competing x86 designs.


Variable length doesn't matter that much; you just put something with a wide shifter on the front. And instruction density is important because it saves cache. A single x86 instruction can take the place of several RISC ones, which is why they needed to make it decode so wide. The M1 is almost entirely a process advantage.


It's not about a shifter, it's about determining instruction boundaries. With variable-length instructions the next instruction start depends on the previous one, making it an inherently serial process. Working around that to decode in parallel is not easy and gets superlinearly more complex the wider you make the decoder. A fixed instruction size architecture like ARM64 doesn't have to deal with any of that.
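
A toy illustration of that serial chain (a sketch; x86_insn_length and decode are hypothetical helpers):

    /* Each iteration needs the decoded length of instruction i before it
       can even locate instruction i+1. */
    size_t off = 0;
    while (off < buf_len) {
        size_t len = x86_insn_length(&buf[off]);  /* hypothetical helper */
        decode(&buf[off], len);                   /* hypothetical helper */
        off += len;                               /* the serial dependency */
    }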


> A fixed instruction size architecture like ARM64 doesn't have to deal with any of that.

What it does have to deal with is needing several times higher fetch bandwidth. It's all a bunch of tradeoffs.

Related article: https://chipsandcheese.com/2021/07/13/arm-or-x86-isa-doesnt-...


> It's all a bunch of tradeoffs.

Fixed vs. variable involves a bunch of powerful tradeoffs.

The way x86 can stack a bunch of prefixes on an instruction is pretty bad though. You have to interpret many pieces of an instruction to find the size.


Current x86 cores, as well as many Arm cores, have micro-op caches, so they don't need decoders as wide as a design without one, like the M1. (That doesn't take away from the fact that the M1 is a very impressive design, of course.)


> Apple has better numbers because it uses the most advanced manufacturing process.

On the contrary, ARM is a simpler ISA to implement, which means radically less circuitry for reading the instructions. Modern Intel CPUs go so far as to have entire systems that translate external x86 instructions into an internal RISC-based form. That's a huge source of wasted power, i.e. heat.


ARM also decodes to uOps, and no, that decoding is not a huge source of wasted power. The vast, vast majority of power (and thus heat) in x86 CPUs (or ARM, for that matter) is spent doing actual work: speculation, branch prediction, the actual ALU math, cache management, prefetching, etc. Almost none of it goes to dealing with the ISA.

The almost singular advantage ARM has is the looser memory model, although the best ARM CPU on the market by a landslide (Apple's M1) can run x86's memory model, and it seemingly doesn't cost much (hence how Rosetta 2 can perform so well).
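
To make the memory-model point concrete (a sketch; exact instruction selection varies by compiler): a release store needs no fence on x86's strong TSO model, while AArch64 compilers emit a dedicated store-release instruction (stlr):

    #include <stdatomic.h>

    atomic_int ready;

    void publish(void) {
        /* x86-64: compiles to an ordinary mov (TSO already forbids the
           relevant reordering); AArch64: compiles to stlr. */
        atomic_store_explicit(&ready, 1, memory_order_release);
    }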


Modern ARM CPUs also go so far as to have entire systems that translate external ARM instructions into an internal RISC-based form.

I hope you realize that any big instruction set can be reduced to a smaller instruction set. The only instruction set that cannot be reduced further is a one-instruction set.


I run an x86 processor at 50% power and it's less of a heater.



