Arm is canceling Qualcomm's chip design license (bloomberg.com)
622 points by necubi 63 days ago | 450 comments




Qualcomm is known for having a particularly aggressive & hardball-style legal department to enforce its patents on core telecom IP. I believe the most likely outcome is they just settle the dispute here. Arm fighting hardball with hardball.

Which would not really affect the ecosystem of phones using Qualcomm ARM chips; it would just change the margins / market cap of Qualcomm.

Yes, long term Q might invest in their own RISC-V implementations, but I don't see a viable business case for Qualcomm to just stop ARM development for the foreseeable future.


Qualcomm doesn't have nearly as much to lose as ARM does and they know it.

Qualcomm is almost certainly ARM's biggest customer. If ARM loses, Qualcomm doesn't have to pay out. If ARM wins, Qualcomm moves to RISC-V and ARM loses even harder in the long-term.

The most likely outcome is that Qualcomm agrees to pay slightly more than they are currently paying, but nowhere near what ARM is demanding, and in the meantime, Qualcomm continues having a team work on a RISC-V frontend for Oryon.


Just the impact of making this move will have a chilling effect, regardless of the long-term outcome.

ARM Ltd wants to position itself as the ISA. It is highly proprietary of course, but the impression they want to give is that it is "open" and freely available, no lock-in, etc.

This really brings the reality back into focus that ARM controls it with an iron fist, and they're not above playing political games and siding against you if you annoy their favored customers. Really horrible optics for them.


"Chilling effect" implies that we should want ARM to succeed.

IMO we need to question the premises of the current IP ecosystem. Obviously, the principles of open-source are quite the opposite to how ARM licenses IP. (Afaik, ARM also licenses ready-to-go cores, which is very different from what Q is getting.)

It's easy to see how RISC-V avoids the conflict of interest between owning the ISA and licensing specific implementations.


> RISC-V

We’d just get a bunch of proprietary cores which might not even be compatible with each other due to extensions. Companies like Qualcomm would have zero incentives to share their designs with anyone.

ARM is not perfect, but it at least guarantees some minimally level playing field.

> Afaik, ARM also licenses ready-to-go cores

Which is the core of Qualcomm’s business. All their phone chips are based on Cortex. Of course ARM has a lot of incentives to keep it that way, hence this whole thing.


> We’d just get a bunch of proprietary cores which might not even be compatible with each other due to extensions.

No different than ARM. Apple has matrix extensions that others don't, for example.

The ecosystem (e.g., Linux and OSS) pressure will strongly encourage compatible subsets, however. There is some concern about the RISCV fragmentation hell that ARM used to suffer from, but at least in the Linux-capable CPU space (e.g., not highly embedded or tiny), a plethora of incompatible cores will probably not happen.

> Companies like Qualcomm would have zero incentives to share their designs with anyone.

ARM cores are also proprietary. All ARM cores, actually; you can't get an architectural license from ARM to create an open source core. With RISCV you can at least make open cores, and there are some out there.

But opening the ISA is attacking a different level of the stack than the logic and silicon.


> RISCV fragmentation hell that ARM used to suffer from

Does "fragmentation hell" refer to Age Old problem of massive incompatibility in the Linux codebase, or the more "modern" problem people refer to which is the problem of device trees and old kernel support for peripheral drivers? Because you can rest assured that this majestic process will not be changed at all for RISC-V devices. You will still be fed plenty of junkware that will require an unsupported kernels with blobs. The ISA isn't going to change the strategy of the hardware manufacturers.


Neither. It refers to the proliferation of incompatible extensions that plagued ARM. The most well-known one was the hard-float (hf) ABI incompatibility, but there were others too.


> No different than ARM

Everyone has more or less the same access to relatively competitive Cortex and Neoverse cores. As ARM’s finances show, that’s not a very good business model, so it’s unlikely anyone would do that with RISC-V.

You can make open source cores, but nobody investing the massive amounts of money/resources required to design high-end CPUs will make them open source. The situation with ARM is of course not ideal, but at least the playing field is somewhat more even.


> No different than ARM. Apple has matrix extensions that others don't, for example.

Not anymore, M4 supports ARM SME instead.


> Companies like Qualcomm would have zero incentives to share their designs with anyone.

And yet, that's what Linux did in 1991: they shared the code, lowering the cost of buying an operating system. I wouldn't say there is zero incentive, but the incentive is certainly lower without a profitable complementary hardware implementation that can be sold for less than the proprietary-ISA alternative. A royalty-free license that grants the manufacturer/fab designer "mask rights" lets them keep a small margin within the price difference against the foundry/proprietary-ISA core competitor.


Hardware is not software. Almost nothing alike. Sure, it might happen for low-end/old chips with low margins, but nothing cutting edge or competitive on the high end.


Software used to be hardware limited, and therefore efficient. Today software relies on many more transistors per joule to compute a number of operations in high-level languages. I'd agree 22nm isn't leading edge, but foundries like Skywater could theoretically provide more options at 65nm and 90nm in coming years that are fully open source, except perhaps for the cost of the foundry technique: https://riscv.org/wp-content/uploads/2018/07/1130-19.07.18-L...


Yes, I think we might be talking about slightly different things. I don’t really see the open-source model working for higher-end / leading-edge chips.

In segments where chips are almost an interchangeable commodity and the R&D cost might be insignificant relative to manufacturing/foundry it would make a lot more sense.


Even the base integer instructions are Turing complete in RISC-V. The only instruction extensions that could be a point of contention are matrix operations, as T-Head and Tenstorrent already have their own. Even then, I can't see how this "clash" is any different from those in the x64 or ARM space.

Even if Qualcomm makes their own RISC-V chips that are somehow incompatible with everyone else's, they can't advertise that it's RISC-V due to the branding guidelines. They should know them because they are on the board as a founding top tier member.


> they can't advertise that it's RISC-V due to the branding guidelines

Unless it’s a superset of RISC-V. They can still have proprietary extensions


> "Chilling effect" implies that we should want ARM to succeed.

It really doesn't.

I agree an actual open ISA is far preferable; ARM is not much different from x86.


I don’t understand how they can copyright just the ISA. Didn’t a recent Supreme Court case, Oracle v. Google over Java, decide that you can copy an API if you implement it differently? So what exactly is ARM pulling? Implementation hardware specs? I suspect Qualcomm can do that on its own


> Didn’t a recent Supreme Court case, Oracle v. Google over Java, decide that you can copy an API if you implement it differently

No, it didn’t. It ruled that the specific copying and use that Google did with Java in Android was fair use, but did not rule anything as blanket as “you can copy an API as long as you re-implement it”.


It was a little more nuanced than that. Oracle was hoping SCOTUS would rule that API structure, sequence, and organization are copyrightable - the court sidestepped that question altogether by ruling that if APIs are copyrightable[1], the case fell under fair use. So the pre-existing case law holds (broadly, it's still fine to re-implement APIs - like s3 - for compatibility since SCOTUS chose not to weigh in on it in Google v. Oracle).

1. Breyer's majority statement presupposes APIs are copyrightable, without declaring it or offering any kind of test on what's acceptable.


> So the pre-existing case law holds (broadly, it's still fine to re-implement APIs - like s3 - for compatibility since SCOTUS chose not to weigh in on it in Google v. Oracle).

There is no clear preexisting national case law on API copyrightability, and it is unclear how other, more general, case law would apply to APIs categorically (or even if it would apply a categorical copyrightable-or-not rule), so, no, it's not “ok”, it's indeterminate.


You are right that there is no single national case that clearly ruled either way, but the status quo is that it's de facto ok. Adjacent case law made clean-room reverse engineering & API re-implementation "ok" de jure, which is why most storage vendors - including very large ones - are confident enough to implement the S3 protocol without securing a license from Amazon first.

Edit: none of the large companies (except Oracle) are foolish enough to pursue a rule that declares APIs as falling under copyright, because they all do it. In Google v. Oracle, Microsoft filed briefs supporting both sides after seemingly changing their mind. In the lower courts, they submitted an amicus brief supporting Oracle; then, when it got to SCOTUS, they filed one supporting Google, stating how disastrous it would be to the entire industry.


I think it might actually be patents rather than copyright restrictions that are in play here.


Qualcomm is slightly bigger than ARM so it seems like a fair fight to me. Does Qualcomm police its IP at all?


According to Wikipedia,

Qualcomm has 50,000 employees, $51 billion assets and $35 billion revenue https://en.wikipedia.org/wiki/Qualcomm

ARM Holdings has 7000 employees, $8 billion assets and $3 billion revenue https://en.wikipedia.org/wiki/Arm_Holdings

I think "slightly bigger" is an understatement.


That's roughly the same size -- like Swamp Thing vs. Namor -- both are name-brand, almost blue-chip heroes.

Or put another way -- as they said in Gawker[1] -- if you're in a lawsuit with a billionaire, you better have a billionaire on your side or you're losing.

In this case -- it's unlikely that Qualcomm will have quite enough juice to just smoosh ARM, in the same way that they would be able to just smoosh a company that's 1/100th the size of ARM (not just 1/10th), regardless of the merits of the case.

[1]https://gawker.com/how-things-work-1785604699


> Qualcomm is slightly bigger than ARM so it seems like a fair fight to me.

I'm not really sure what you're responding to. Whether or not something is fair has nothing to do with size; it's what is in the contract. None of us know exactly what's there, so if it becomes disputed then a court is going to have to decide what is fair.

But that was entirely not the point of my comment. I'm talking about how corporations looking to make chips or get into the ecosystem view ARM and its behavior with its architecture and licensing. ARM Ltd might well be in the right here by the letter of their contracts, but canceling their customer's license (ostensibly if not actually siding with another customer who is in competition with the first) is just not a good look for the positioning they are going for.


You might be right, but they do perhaps also have to establish that their contracts are going to be defended/enforced. Otherwise they have nothing.


There's a big middle ground before the nuclear option of canceling the license entirely, though. It's a bad look too because Nuvia/QC has bad blood with Apple, and Apple is suspected to be a very favored client for ARM, so ARM has this problem of potentially appearing to be taking Apple's side against Qualcomm.

I'm not saying that's what happened or that ARM did not try to negotiate and QC was being unreasonable and the whole thing has nothing at all to do with Apple, or that ARM had any better options available to them. Could be they were backed into a corner and couldn't do anything else. I don't know. That doesn't mean it's not bad optics for them though.


Does Qualcomm police its IP at all?

Traditionally they've been known as a tech company that employs more lawyers than engineers, if that tells you anything.

I'd probably go up against IBM or Oracle before I tugged on Qualcomm's cape. Good luck to ARM, they'll need it.


I am an ex-Qualcomm employee. We often called ourselves a law firm with a tech problem. QC doesn't actually have more lawyers than engineers, but I'd not be surprised if the legal department got paid more than all the engineers combined.


Oracle v Qualcomm would be epic.


The public will likely be the loser of such a battle. :-(


Not if they end up cancelling each other's patents. :)


>Qualcomm moves to RISC-V and ARM loses even harder in the long-term.

I think long term is doing a lot of heavy lifting here. How long until:

1. Qualcomm develops a chip that's competitive in performance with ARM

2. The entire software world is ready to recompile everything for RISC-V

Unless you are Apple I see such a transition taking a decade easily.


> 1. Qualcomm develops a chip that's competitive in performance with ARM

Virtually all high performance processors these days operate on their own internal “instructions”. The instruction decoder at the very front of the pipeline that actually sees ARM or RISC-V or whatever is a relatively small piece of logic.

If Qualcomm were motivated, I believe they could swap ISAs relatively easily on their flagship processors, and the rest of the core would be the same level of performance that everyone is used to from Qualcomm.

This isn’t the old days when the processor core was deeply tied to the ISA. Certainly, there are things you can optimize for the ISA to eke out a little better performance, but I don’t think this is some major obstacle like you indicate it is.
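
As a toy illustration of the idea (hedged: this is nothing like a real pipeline; the two encodings below are the real RV32I ADD and A64 register ADD, but the internal micro-op format is invented for the sketch):

    /* Toy model: two ISA front-ends decoding into one shared internal
       micro-op format. The backend never sees the external ISA. */
    #include <stdint.h>

    typedef struct { uint8_t op, rd, rs1, rs2; } uop; /* invented format */
    enum { UOP_ADD = 1 };

    /* RV32I ADD: opcode 0x33, funct3 = 0, funct7 = 0. */
    static int decode_riscv(uint32_t insn, uop *u) {
        if ((insn & 0x7F) == 0x33 && ((insn >> 12) & 7) == 0 && (insn >> 25) == 0) {
            *u = (uop){ UOP_ADD, (insn >> 7) & 0x1F, (insn >> 15) & 0x1F,
                        (insn >> 20) & 0x1F };
            return 0;
        }
        return -1;
    }

    /* A64 "ADD Xd, Xn, Xm" (shifted register): top byte 0x8B (shift bits
       ignored in this toy). Different decoder, same micro-op comes out. */
    static int decode_arm64(uint32_t insn, uop *u) {
        if ((insn >> 24) == 0x8B) {
            *u = (uop){ UOP_ADD, insn & 0x1F, (insn >> 5) & 0x1F,
                        (insn >> 16) & 0x1F };
            return 0;
        }
        return -1;
    }

The hard part, of course, is everything this leaves out: the memory model, CSRs, privileged state, and so on, which is where the real porting cost lives.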

> 2. The entire software world is ready to recompile everything for RISC-V

#2 is the only sticking point. That is ARM’s only moat as far as Qualcomm is concerned.

Many Android apps don’t depend directly on “native” code, and those could potentially work on day 1. With an ARM emulation layer, those with a native dependency could likely start working too, although a native RISC-V port would improve performance.

If Qualcomm stopped making ARM processors, what alternatives are you proposing? Everyone is switching to Samsung or MediaTek processors?

If Qualcomm were switching to RISC-V, that would be a sea change that would actually move the needle. Samsung and MediaTek would probably be eager to sign on! I doubt they love paying ARM licensing fees either.

But, all of this is a very big “if”. I think ARM is bluffing here. They need Qualcomm.


> Everyone is switching to Samsung or MediaTek processors?

Why not? MediaTek is very competitive these days.

It would certainly perform better than a RISC-V decoder slapped onto a core designed for ARM having to run emulation for games (which is pretty much the main reason why you need a lot of performance on your phones).

Adopting RISC-V is also a risk for the phone producers like Samsung. How much of their internal tooling (e.g. diagnostics, build pipelines, testing infrastructure) are built for ARM? How much will performance suffer, and how much will customers care? Why take that risk (in the short/medium term) instead of just using their own CPUs (they did it in some generations) or use MediaTek (many producers have experience with them already)?

Phone producers will be happy to jump to RISC-V over the long term given the right incentives, but I seriously doubt they will be eager to transition quickly. All risks, no benefits.


> Virtually all high performance processors these days operate on their own internal “instructions”. The instruction decoder at the very front of the pipeline that actually sees ARM or RISC-V or whatever is a relatively small piece of logic.

You're talking essentially about microcode; this has been the case for decades, and isn't some new development. However, as others have pointed out, it's not _as_ simple as just swapping out the decoder (especially if you've mixed up a lot of decode logic with the rest of the pipeline). That said, it's happened before and isn't _impossible_.

On a higher level, if you listen to Keller, he'll say that the ISA is not as interesting - it's just an interface. The more interesting things are the architecture, micro-architecture and as you say, the microcode.

It's possible to build a core with comparable performance - it'll vary a bit here and there, but it's not that much more difficult than building an ARM core for that matter. But it takes _years_ of development to build an out-of-order core (even an in-order takes a few years).

Currently, I'd say that in-order RISC-V cores have reached parity. Out of order is a work in progress at several companies and labs. But the chicken-and-egg issue here is that in-order RISC-V cores have ready-made markets (embedded, etc) and out of order ones (mostly used only in datacenters, desktop and mobile) are kind of locked in for the time being.

> Many Android apps don’t depend directly on “native” code, and those could potentially work on day 1.

That's actually true, but porting Android is a nightmare (not because it's hard, but because the documentation on it sucks). Work has started, so let's see.

> With an ARM emulation layer, those with a native dependency could likely start working too, although a native RISC-V port would improve performance.

I wonder what the percentage here is... Again, I don't think recompiling for a new target is necessarily the worst problem here.


> > Virtually all high performance processors these days operate on their own internal “instructions”. The instruction decoder at the very front of the pipeline that actually sees ARM or RISC-V or whatever is a relatively small piece of logic.

> You're talking essentially about microcode; this has been the case for decades, and isn't some new development.

Microcode is much less used nowadays than in the past. For instance, several common desktop processors have only a single instruction decoder capable of running microcode, with the rest of the instruction decoders capable only of decoding simpler non-microcode instructions. Most instructions on typical programs are decoded directly, without going through the microcode.

> However, as others have pointed out, it's not _as_ simple as just swapping out the decoder

Many details of an ISA extend beyond the instruction decoder. For instance, the RISC-V ISA mandates specific behavior for its integer division instruction, which has to return a specific value on division by zero, unlike most other ISAs which trap on division by zero; and the NaN-boxing scheme it uses for single-precision floating point in double-precision registers can be found AFAIK nowhere else. The x86 ISA is infamous for having a stronger memory ordering than other common ISAs. Many ISAs have a flags register, which can be set by most arithmetic (and some non-arithmetic) instructions. And that's all for the least-privileged mode; the supervisor or hypervisor modes expose many more details which differ greatly depending on the ISA.
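
The division case is easy to see from C (a toy probe; note integer division by zero is undefined behavior in C itself, so this only illustrates what the underlying divide instruction does when the compiler emits a plain hardware divide):

    #include <stdio.h>

    int main(void) {
        /* volatile forces a real runtime divide instead of constant folding */
        volatile int num = 42, den = 0;
        /* RISC-V DIV: no trap; quotient is -1 (all bits set), REM gives the
           dividend. AArch64 SDIV: no trap; quotient is 0. x86 IDIV: raises
           #DE, which Linux delivers as SIGFPE, killing the program here. */
        printf("%d %d\n", num / den, num % den);
        return 0;
    }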


> Many details of an ISA extend beyond the instruction decoder. For instance, the RISC-V ISA mandates specific behavior for its integer division instruction, which has to return a specific value on division by zero, unlike most other ISAs which trap on division by zero; and the NaN-boxing scheme it uses for single-precision floating point in double-precision registers can be found AFAIK nowhere else. The x86 ISA is infamous for having a stronger memory ordering than other common ISAs. Many ISAs have a flags register, which can be set by most arithmetic (and some non-arithmetic) instructions. And that's all for the least-privileged mode; the supervisor or hypervisor modes expose many more details which differ greatly depending on the ISA.

All quite true, and to that, add things like cache hints and other hairy bits in an actual processor.


1. That doesn't mean you can just slap a RISC-V decoder on an ARM chip and it will magically work though. The semantics of the instructions and all the CSRs are different. It's going to be way more work than you're implying.

But Qualcomm have already been working on RISC-V for ages so I wouldn't be too surprised if they already have high performance designs in progress.


That is a good comment, and I agree things like CSR differences could be annoying, but compared to the engineering challenges of designing the Oryon cores from scratch… I still think the scope of work would be relatively small. I just don’t think Qualcomm seriously wants to invest in RISC-V unless ARM forces them to.


> I just don’t think Qualcomm seriously wants to invest in RISC-V unless ARM forces them to.

That makes a lot of sense. RISC-V is really not at all close to being at parity with ARM. ARM has existed for a long time, and we are only now seeing it enter into the server space, and into the Microsoft ecosystem. These things take a lot of time.

> I still think the scope of work would be relatively small

I'm not so sure about this. Remember that an ISA is not just a set of instructions: it defines how virtual memory works, what the memory model is like, how security works, etc. Changes in those things percolate through the entire design.

Also, I'm going to go out on a limb and claim that verification of a very high-powered RISC-V core that is going to be manufactured in high-volume is probably much more expensive and time-consuming than the case for an ARM design.

edit: I also forgot about the case with Qualcomm's failed attempt to get code size extensions. Using RVC to approach parity on code density is expensive, and you're going to make the front-end of the machine more complicated. Going out on another limb: this is probably not unrelated to the reason why THUMB is missing from AArch64.


> verification of a very high-powered RISC-V core that is going to be manufactured in high-volume is probably much more expensive and time-consuming than the case for an ARM design.

Why do you say this?


Presumably, when you have a relationship with ARM, you have access to things that make it somewhat less painful:

- People who have been working with spec and technology for decades

- People who have implemented ARM machines in fancy modern CMOS processes

- Stable and well-defined specifications

- Well-understood models, tools, strategies, wisdom

I'm not sure how much of this exists for you in the RISC-V space: you're probably spending time and money building these things for yourself.


There is a market for RISC-V design verification.

And there are already some companies specializing in supplying this market. They consistently present at the RISC-V Summit.


The bigger question is how much of their existing cores utilize Arm IP… and how sure are they that they would find all of it?


> That doesn't mean you can just slap a RISC-V decoder on an ARM chip and it will magically work though.

Raspberry Pi RP2350 already ships with ARM and RISC-V cores. https://www.raspberrypi.com/products/rp2350/

It seems that the RISC-V cores don't take much space on the chip: https://news.ycombinator.com/item?id=41192341

Of course, microcontrollers are different from mobile CPUs, but it's doable.


That's not really comparable. Raspberry Pi added entirely separate RISC-V cores to the chip, they didn't convert an ARM core design to run RISC-V instructions.

What is being discussed is taking an ARM design and modifying it to run RISC-V, which is not the same thing as what Raspberry Pi has done and is not as simple as people are implying here.


Nevertheless, several companies that originally had MIPS implementations did exactly this, to implement ARM processors.


I am a fan of the Jeff Geerling YouTube series in which he is trying to make GPUs (AMD/Nvidia) run on a Raspberry Pi. It is not easy - and they have the Linux kernel source code available to modify. Now imagine all Qualcomm clients having to do similar stuff with their third-party hardware, possibly with no access to driver source code. Then debug and fix all the bugs that pop up in the wild for 3 years. What a nightmare.

Apple at least has full control of its hardware stack (Qualcomm does not, as it only sells chips to others).


Hardware drivers certainly can be annoying, but a hobbyist struggling to bring big GPUs’ hardware drivers to a random platform is not at all indicative of how hard it would be for a company with teams of engineers. If NVidia wanted their GPUs to work on Raspberry Pi, then it would already be done. It wouldn’t be an issue. But NVidia doesn’t care, because that’s not a real market for their GPUs.

Most OEMs don’t have much hardware secret sauce besides maybe cameras these days. The biggest OEMs probably have more hardware secret sauce, but they also should have correspondingly more software engineers who know how to write hardware drivers.

If Qualcomm moved their processors to RISC-V, then Qualcomm would certainly provide RISC-V drivers for their GPUs, their cellular modems, their image signal processors, etc. There would only be a little work required from Qualcomm’s clients (the phone OEMs) like making sure their fingerprint sensor has a RISC-V driver. And again, if Qualcomm were moving… it would be a sea change. Those fingerprint sensor manufacturers would absolutely ensure that they have a RISC-V driver available to the OEMs.

But, all of this is very hypothetical.


> If NVidia wanted their GPUs to work on Raspberry Pi, then it would already be done. It wouldn’t be an issue. But NVidia doesn’t care, because that’s not a real market for their GPUs.

It's weird af that Geerling ignores nVidia. They have a line of ARM-based SBCs with GPUs from Maxwell to Ampere. They have full software support for OpenGL, CUDA, etc. For the price of an RPi 5 + discrete GPU, you can get a Jetson Orin Nano (8 GB RAM, 6 A78 ARM cores, 1024 Ampere cores.) All in a much better form factor than a Pi + PCIe hat and graphics card.

I get the fun of doing projects, but if what you're interested in is a working ARM based system with some level of GPU, it can be had right now without being "in the shop" twice a week with a science fair project.


> It's weird af that Geerling ignores nVidia.

“With the PCI Express slot ready to go, you need to choose a card to go into it. After a few years of testing various cards, our little group has settled on Polaris generation AMD graphics cards.

Why? Because they're new enough to use the open source amdgpu driver in the Linux kernel, and old enough the drivers and card details are pretty well known.

We had some success with older cards using the radeon driver, but that driver is older and the hardware is a bit outdated for any practical use with a Pi.

Nvidia hardware is right out, since outside of community nouveau drivers, Nvidia provides little in the way of open source code for the parts of their drivers we need to fix any quirks with the card on the Pi's PCI Express bus.”

Reference = https://www.jeffgeerling.com/blog/2024/use-external-gpu-on-r...

I’m not in a position to evaluate his statement vs yours, but he’s clearly thought about it.


I mean in terms of his quest for GPU + ARM. He's been futzing around with Pis and external GPUs and the entire time you've been able to buy a variety of SBCs from nVidia with first class software support.


AFAIK the new SiFive dev board actually supports AMD discrete graphics cards over PCIe


Naively, it would seem like it would be as simple as updating android studio and recompiling your app, and you would be good to go? There must be less than 1 in 1000 (probably less than 1 in 10,000) apps that do their own ARM specific optimizations.


Without any ARM specific optimizations, most apps wouldn’t even have to recompile and resubmit. Android apps are uploaded as bytecode, which is then AOT compiled by Google’s cloud service for the different architectures, from what I understand. Google would just have to decide to support another target, and Google has already signaled their intent to support RISC-V with Android.

https://opensource.googleblog.com/2023/10/android-and-risc-v...


I remember when Intel was shipping x86 mobile CPUs for Android phones. I had one pretty soon after their release. The vast majority of Android apps I used at the time just worked without any issues. There were some apps that wouldn't appear in the store but the vast majority worked pretty much day one when those phones came out.


I'm not sure how well it fits the timeline (i.e. x86 images for the Android emulator becoming popular due to better performance than the ARM images vs. actual x86 devices being available), but at least these days a lot of apps shipping native code probably maintain an x86/x64 version purely for the emulator.

Maybe that was the case back then, too, and helped with software availability?


Yep! I had the Zenfone with an Intel processor in it, and it worked well!


> Android apps are uploaded as bytecode, which is then AOT compiled by Google’s cloud service for the different architectures, from what I understand.

No, Android apps ship the original bytecode which then gets compiled (if at all) on the local device. Though that doesn't change the result re compatibility.

However – a surprising number of apps do ship native code, too. Of course especially games, but also any other media-related app (video players, music players, photo editors, even my e-book reading app) and miscellaneous other apps, too. There, only the original app developer can recompile the native code to a new CPU architecture.
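
Roughly, the split inside an APK looks like this (a sketch; "libfoo.so" is a made-up NDK library):

    my-app.apk
    ├── classes.dex                <- portable bytecode, runs anywhere
    └── lib/
        ├── arm64-v8a/libfoo.so    <- per-ISA native code
        ├── armeabi-v7a/libfoo.so
        └── x86_64/libfoo.so       <- no riscv64/ until the dev rebuilds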


> No, Android apps ship the original bytecode which then gets compiled (if at all) on the local device.

Google Play Cloud Profiles is what I was thinking of, but I see it only starts “working” a few days after the app starts being distributed. And maybe this is merely a default PGO profile, and not a form of AOT in the cloud. The document isn’t clear to me.

https://developer.android.com/topic/performance/baselineprof...


Yup, it's just a PGO profile (alternatively, developers can also create their own profile and ship that for their app).


> Virtually all high performance processors these days operate on their own internal “instructions”. The instruction decoder at the very front of the pipeline that actually sees ARM or RISC-V or whatever is a relatively small piece of logic.

If that's true, then what is arm licensing to Qualcomm? Just the instruction set or are they licensing full chips?

Sorry for the dumb question / thanks in advance.


Qualcomm has historically licensed both the instruction set and off the shelf core designs from ARM. Obviously, there is no chance the license for the off the shelf core designs would ever allow Qualcomm to use that IP with a competing instruction set.

In the past, Qualcomm designed their own CPU cores (called Kryo) for smartphone processors, and just made sure they were fully compliant with ARM’s instruction set, which requires an Architecture License, as opposed to the simpler Technology License for a predesigned off the shelf core. Over time, Kryo became “semi-custom”, where they borrowed from the off the shelf designs, and made their own changes, instead of being fully custom.

These days, their smartphone processors have been entirely based on off the shelf designs from ARM, but their new Snapdragon X Elite processors for laptops include fully custom Oryon ARM cores, which is the flagship IP that I was originally referencing. In the past day or two, they announced the Snapdragon 8 Elite, which will bring Oryon to smartphones.


thank you for explaining


A well-designed (by Apple [1], by analyzing millions of popular applications and what they do) instruction set. One where there are reg+reg/reg+shifted_reg addressing modes, only one instruction length, and sane, useful instructions like SBFX/UBFX, BFC, BFI, and TBZ. All of that is much better than promises of a magical core that can fuse 3-4 instructions into one.

[1] https://news.ycombinator.com/item?id=31368681
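
To make the bitfield point concrete, a minimal C sketch (the field layout is invented): compilers typically turn each helper into a single AArch64 instruction (UBFX and BFI respectively), while a base RISC-V target needs a shift-and-mask sequence.

    #include <stdint.h>

    /* Extract bits [12:5]: one UBFX on AArch64; srli+andi on base RISC-V. */
    uint32_t get_field(uint32_t w) {
        return (w >> 5) & 0xFF;
    }

    /* Insert an 8-bit value at bits [12:5]: recognized as a single BFI. */
    uint32_t set_field(uint32_t w, uint32_t v) {
        return (w & ~(0xFFu << 5)) | ((v & 0xFFu) << 5);
    }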


1 - thank you

2 - thank you again for sharing your eink hacking project!


Note that these are just a person's own opinions, obviously not shared by the architects behind RISC-V.

There are multiple approaches here. There's this tendency for each designer to think their own way is the best.


I get that. I just work quite distantly from chips and find it interesting.

That said, licensing an instruction set seems strange. With very different internal implementations, you'd expect instructions and instruction patterns in a licensed instruction set to have pretty different performance characteristics on different chips leading to a very difficult environment to program in.


Note that this is not in any way a new development.

If you look at the incumbent ISAs, you'll find that most of the time ISA and microarchitecture were intentionally decoupled decades ago.


>Many Android apps don’t depend directly on “native” code, and those could potentially work on day 1. With an ARM emulation layer, those with a native dependency could likely start working too, although a native RISC-V port would improve performance.

This is only true if the application is written purely in Java/Kotlin with no native code. Unfortunately, many apps do use native code. In a CppCon talk, Microsoft noted that more than 70% of the top 100 apps on Google Play used native code.

>I think ARM is bluffing here. They need Qualcomm.

Qualcomm's survival is dependent on ARM. Qualcomm's entire revenue stream evaporates without ARM IP. They may still be able to license their modem IP to OEMs, but not if their modem also used ARM IP. It's only a matter of time before Qualcomm capitulates and signs a proper licensing agreement with ARM. The fact that Qualcomm's lawyers didn't do their due diligence to ensure that Nuvia's ARM Architecture licenses were transferable is negligent on their part.


ARM already did the hard work. Once you've ported your app to ARM, you've no doubt made sure all the ISA-specific bits are isolated while the rest is generic and portable. This means you already know where to go and what to change and hopefully already have testing in place to make sure your changes work correctly.

Aside from the philosophy, lots of practical work has been done and is ongoing. On the systems level, there has already been massive ongoing work. Alibaba for example ported the entirety of Android to RISC-V then handed it off to Google. Lots of other big companies have tons of coders working on porting all kinds of libraries to RISC-V and progress has been quite rapid.

And of course, it is worth pointing out that an overwhelming majority of day-to-day software is written in managed languages on runtimes that have already been ported to RISC-V.


Interesting, does anyone know what percentage of top Android apps run on RISC-V? I'd expect a lot of apps like games to only have binaries for ARM


The thing about RISC-V is that they indirectly have the R&D coffers of the Chinese government backing them for strategic reasons. They are the hardware equivalent of Uber's scale-first, make-money-later strategy. This is not a competition that ARM can win purely by relying on their existing market dominance.


Aren’t Android binaries in Dalvik so you only need to port that to get it to run on RISC-V?


Many games, multimedia apps (native FFMPEG libs), and other apps that require native C/C++ libs would require a recompile/translation for RISC-V.


Not Android, but Box86 already works on RISC-V, even already running games on top of Wine and DXVK: https://youtu.be/qHLKB39xVkw

It redirects calls to x86 libraries to native RISC-V versions of the library.
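
The redirection idea itself is simple; a conceptual sketch (not Box86's actual code, and a real translator also has to bridge guest/host calling conventions):

    #include <math.h>
    #include <string.h>

    typedef double (*fn_d_d)(double);

    /* Map known guest library symbols to native host implementations, so
       emulated code calling e.g. cos() runs at native speed. */
    static const struct { const char *sym; fn_d_d native; } thunks[] = {
        { "cos", cos }, { "sin", sin }, { "sqrt", sqrt },
    };

    fn_d_d find_native(const char *sym) {
        for (size_t i = 0; i < sizeof thunks / sizeof thunks[0]; i++)
            if (strcmp(thunks[i].sym, sym) == 0)
                return thunks[i].native;
        return NULL; /* unknown: fall back to emulating the guest library */
    }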


FFMPEG has a RISC-V port. We're yet to try it, but I did successfully compile it to target RISC-V vector extensions.


Most FLOSS libraries are already ported over thanks to GNU/Linux.



Aren't most applications NOT using the ndk?


Everyone that doesn't want to write Java/Kotlin is using the NDK.

Although from Google's point of view the NDK's only purpose is enabling the writing of native methods, reuse of C and C++ libraries, games, and real-time audio, from the point of view of others, it is how they sneak Cordova, React Native, Flutter, Xamarin, ... into Android.


NDK usage is pretty high among applications that actually matter.


Most major apps use the NDK.


That's what's magical about Apple. It was a decade-long transition. All the 32-bit support that macOS began deprecating in 2017 and finally removed in 2019 was in preparation for the ARM move in 2020.


Apple has done it multiple times now and has it down to a science.

68k -> PPC -> x86 -> ARM, with the 64 bit transition you mixed in there for good measure (twice!).

Has any other consumer company pulled off a full architecture switch? Companies pulled off leaving Alpha and SPARC, but those were servers, which have a different software landscape.


I don't believe any major company has done it. Even Intel failed numerous times to move away from x86 with iAPX432, i960, i860, and Itanium all failing to gain traction.


For Apple it was do or die the first few times. Until x86, if they didn’t move they’d just be left in the dust and their market would disappear.

The ARM transition wasn’t strictly necessary like the last ones. It had huge benefits for them, so it makes sense, but they also knew what they were doing by then.

In your examples (which are great) Intel wasn’t going to die. They had backups, and many of those seem guided more by business goals than a do-or-die situation.

I wonder if that’s part of why they failed.


In a way that's also true for the x86->ARM transition, isn't it? I had a 2018 MacBook Air. And.. "it was crap" is putting it very, very mildly. Yes, it was still better than any Windows laptop I got since, and much less of a hassle than any Linux laptop that I'm aware of in my circle. But the gap was really, really small and it cost twice as much.

But the most important part for the transition working is probably that, in any of these cases, the typical end user didn't even notice. Yes, a lot of Hacker News-like people noticed, as they had to recompile some of their programs. But most people :tm: didn't. They either use App Store apps, which were fixed ~immediately, or Rosetta made everything runnable, even if performance suffered.

But that's pretty much the requirement you have: you need to be able to transition ~all users to the new platform with ~no user work, and even without most vendors doing anything. Intel could never provide that, or even aim for it. So they basically had to either a) rip their market in pieces or b) support the "deprecated" ISA forever.


> Rosetta made everything runnable, even if performance suffered.

I think a very important part was that even with the Rosetta overhead, most x86 programs were faster on the m1 than on the machines which it would have been replacing. It wasn’t just that you could continue using your existing software with a perf hit; your new laptop actually felt like a meaningful upgrade even before any of your third party software got updated.


I don’t think so. I’ve got a 2019 MBP and yeah, the heat issue is a big problem.

But they weren’t going to be left in the performance dust like the last times. Their chip supplier wasn’t going to stop selling chips to them.

They would have likely had to give up on how thin their laptops were, but they could have continued on just fine.

I do think the ARM transition wasn’t strictly good, it let them stay thin and quiet and cooler. They got economies of scale with their phone chips.

But it wasn’t necessary to the degree the previous ones were.


> I do think the ARM transition wasn’t strictly good

That’s a total typo I didn’t catch in time. I’m not sure what I tried to type, but I thought the transition was good. They didn’t have to but I’m glad they did.


IBM also did it, with mainframes. But otherwise, no.


In a sense, Freescale/NXP did it from their old PowerPC to ARM.


> Companies pulled off leaving Alpha and SPARC

Considering the commercial failure of these efforts, I might disagree


MacOS (as NeXTSTEP and/or OpenStep) also ran on SPARC and PA-RISC I believe.


OpenStep was developed on SunOS, and was the primary GUI out of the box


I think windows-on-arm is fairly instructive as to how likely RISC-V would go.


>> 1. Qualcomm develops a chip that's competitive in performance with ARM

Done. Qualcomm is currently gunning for Intel.

>> 2. The entire software world is ready to recompile everything for RISC-V

Android phones use a virtual machine which is largely ported already. Linux software is largely already ported.


And with VM tech and the power of modern devices, even some emulator/thunking layer is not too crazy for apps that (somehow) couldn't be cross-compiled.


2. Except games...

But ARM and RISC-V are relatively similar, and it's easy to add custom instructions to RISC-V to make them even more similar if you want, so you could definitely do something like Rosetta.


Switches like that are major, but they get easier every year, and are easier today than they were yesterday, as everyone's tools at all levels up and down both the hardware and software stacks get more powerful all the time.

It's an investment with a cost and a payoff like any other investment.


Keep in mind, Apple _did_ actually take a good decade from starting with ARM to leaving x86.


With 100% control of the stack and an insanely good emulator in Rosetta.


Qualcomm's migration would be much easier than Apple's.

Most of the Android ecosystem already runs on a VM, Dalvik or whatever it's called now. I'm sure Android RISC-V already runs somewhere and I don't see why it would run any worse than on ARM as long as CPUs have equal horsepower.


Yeah, but Qualcomm doesn’t control Android or any of the phone makers. It’s hard for large corps to achieve the internal coordination necessary for a successful ISA change (something literally only Apple has ever accomplished), but trying to coordinate with multiple other large corps? Seems insane. You’re betting your future on the fact that none of the careerists at Google or Samsung get cold feet and decide to just stick with what works.


Wouldn’t coordination to change ISA between multiple companies receive heavy scrutiny in the Lina Khan era?


NDK exists.


The companies with large relevant apps running on the NDK are well staffed and funded enough to recompile.


It's not about whether they can, it's whether they will. History has proven that well-resourced teams don't like doing this very much and will drag their feet if given the chance.


it's not about that, it's about running the apps whose makers are out of business or just find it easier to tell their customers to buy different phones


Is the transition fully over if the latest MacOS still runs an x86 emulator for old software?


> Qualcomm develops a chip that's competitive in performance with ARM

That’s what Oryon is, in theory.


>2. The entire software world is ready to recompile everything for RISC-V

This would suggest that RISC-V is starting from scratch.

Yet in reality it is well underway; RISC-V is rapidly growing the strongest ecosystem.


I think it takes Apple at least 9 years to prepare and 1 year to implement.


Thing is, businesses don't work like side projects do.

Qualcomm is more or less a research company; the main cost of their business is paying engineers to build their modems/SoCs/processors/whatever.

They have been working with ARM for the last, I don't know, 20 years? Even if they manage to switch to RISC-V, if each employee has a negative performance impact of like 15% for 2-3 years, this ends up costing billions of dollars, because you have to hire more people or lose speed.

If corporate would force me to work with idk Golang instead of TypeScript, I could certainly manage to do so, but I would be slower for a while, and if you extrapolate that across an entire company, this is big $$.


> because you have to hire more people or lose speed

Yes and 9 women can make a baby in 1 month :)


In Norse mythology, Heimdallr was born of nine sisters. I'm not sure that it took any less time than usual, but I enjoy the story all the same. https://en.wikipedia.org/wiki/Nine_Mothers_of_Heimdallr


and Norse mythology has 9 world dimensions, so maybe it worked for them


Just take a guess at how the baby will be like and get everyone to pretend it already exists for the 8 months (and throw away the experience if mispredicted afterwards) :)


It's called pipelining, and the concept works well in all modern processors. It can also work with people; you only have an initial setup delay :)


No but 9 women can have 9 babies in 9 months.

Which is a 9x output.

Production and development requires multiple parties. This mythical man month stuff is often poorly applied. Many parts of research and development need to be done in parallel.


If you make screws, sure :)


> If corporate would force me to work with idk Golang instead of TypeScript

I think the most evil thing to do would be to switch places: TS for backend, Go for frontend. It can certainly work though!


Building a website that way would yield quite a popular Show HN post!


TS running under Node.js for the backend, I'd dare say, looks pretty standard.

But I like to imagine the Web frontend made in Go, compiled to WASM. Would be a fun project, for sure.


Try Java 1.8.


>Qualcomm doesn't have nearly as much to lose as ARM does and they know it.

Not even close. Android OEMs can easily switch to the MediaTek 9400, which delivers the same performance as Qualcomm's high-end mobile chip at a significantly reduced price, or even the Samsung Exynos. Qualcomm, on the other hand, has everything to lose, as most of their profits rely on the sales of high-end Snapdragon chips to Android OEMs.

Qualcomm thought they were smart by trying to use the Nuvia ARM design license, which was not transferable, as part of their acquisition instead of doing the proper thing and negotiating a design license with ARM. Qualcomm is at the mercy of ARM as ARM has very many revenue streams and Qualcomm does not. It's only a matter of time before Qualcomm capitulates and does the right thing.


The transition to RISC-V depends entirely on how much of the CPU core subsystem is from ARM. The ISA itself is one part; there are branch predictors, L1/L2/L3 caches, MMUs, virtualization, vector extensions, the pipeline architecture, etc. So moving away from ARM means they need performant replacements for all of that.

I'm sure there are folks like SiFive that have much of this, but how competitive it is I don't know, and how the next Snapdragon would compete if even one of those areas is lacking... Interesting times.


Moving to a whole new architecture is really really hard, no? The operating systems and applications all need to be ported. Just because Qualcomm cannot be friends with arm, every single Qualcomm customer from google to some custom device manufacturer needs to invest years and millions to move to a new architecture? Unless I am fundamentally misunderstanding this, it seems like something they won’t be able to achieve.


Android already supports RISC-V, so while migrating an SOC to it is not painless (third-party binaries, device-specific drivers...), the hard work of porting the operating system itself to the ISA is done.


> If ARM wins, Qualcomm moves to RISC-V

Around 30-40% of Android apps published on the Play Store include native binaries. Such apps need to be recompiled for RISC-V, otherwise they won’t run. Neither Qualcomm nor Google can do that because they don’t have the source code for these apps.

It’s technically possible to emulate ARMv8 on top of RISC-V, however doing so while keeping the performance overhead reasonable is going to be insanely expensive in R&D costs.
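
For flavor, the interpretive core of such a layer is easy to sketch in C (vastly simplified: one A64 instruction form and an invented register file; real translators like Rosetta JIT-compile whole blocks, and the R&D expense is in fidelity and speed, not this part):

    #include <stdint.h>
    #include <stdio.h>

    static uint64_t x[32]; /* guest X0..X30, plus a slot standing in for XZR/SP */

    /* Execute A64 "ADD Xd, Xn, #imm12" (top byte 0x91; shift bit ignored). */
    static int step(uint32_t insn) {
        if ((insn >> 24) == 0x91) {
            x[insn & 0x1F] = x[(insn >> 5) & 0x1F] + ((insn >> 10) & 0xFFF);
            return 0;
        }
        return -1; /* unhandled opcode */
    }

    int main(void) {
        x[1] = 40;
        step(0x91000820); /* add x0, x1, #2 */
        printf("x0 = %llu\n", (unsigned long long)x[0]);
        return 0;
    }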


Binary-only translators exist, for instance Apple has https://en.wikipedia.org/wiki/Rosetta_(software)


Apple's gross revenue is 10x Qualcomm's, and the difference in net income is even larger. Apple could easily afford these R&D costs.

Another obstacle: even if Qualcomm develops an awesome emulator / JIT compiler / translation layer, I’m not sure the company is in a position to ship that thing to market. Unlike Apple, Qualcomm doesn’t own the OS. Such an emulator would require extensive support in the Android OS. I’m not sure Google will be happy supporting a huge piece of complicated third-party software as a part of their OS.

P.S. And also there’re phone vendors who actually buy chips from Qualcomm. They don’t want end users to complain that their favorite “The Legendary Cabbage: Ultimate Loot Garden Saga” is lagging on their phone, while working great on a similar ARM-based Samsung.


> I’m not sure Google will be happy supporting a huge piece of complicated third-party software as a part of their OS.

Yeah, for the upcoming/already happening 64-bit-only transition (now that Qualcomm is dropping 32-bit support from their latest CPU generations), Google has decided to go for a hard cut-off, i.e. old apps that are still 32-bit-only simply won't run anymore.

Though from what I've heard, some third party OEMs (I think Xiaomi at least?) still have elected to ship a translation layer for their phones at the moment.


You’re suggesting that Snapdragon processors would switch to RISC-V and that would be no big deal? Presumably Qualcomm is committed to numerous multi-year supplier agreements for the arm chipsets.


Qualcomm pitched Znew quite a while ago. It mostly ditched 16-bit instructions and added a bunch of instructions that were basically ripped straight from ARM64.

The idea was obviously an attempt at making it as easy as possible to replace ARM with RISC-V without having to rework much of the core.

https://lists.riscv.org/g/tech-profiles/attachment/332/0/cod...


An attempt that failed miserably. (it was formally rejected a year ago)

But, by now, it is expected that Qualcomm's RISC-V designs have been re-aligned to match the reality that Qualcomm does not control the standard.


Actually, it was an attempt to reuse as much as possible of the ARM design they got when they bought Nuvia while moving to a different CPU architecture. They were worried about ASIC design, not software code.


This affects their custom Nuvia-derived cores. I'm sure Qualcomm will be able to keep using ARM-designed cores to cover themselves while they wean off ARM in favor of RISC-V if they need to.


This is a bit off topic, but has anyone demonstrated it's possible to design a big RISC-V core, that's performance competitive with the fastest x86 and ARM designs?


Well, Tenstorrent, Andes and others have their respective designs...

On the in-order side, I can see on-par performance with the ARM A5x series quite easily.


After a bit of digging, I found that the SiFive P670 has performance equivalent to the Galaxy S21, or the desktop Ryzen 7 2700, which is not too bad and definitely usable in a smartphone/laptop, so competitive with the 2021-era designs. It's not clear what the power level is.


The P670 is a core, not a chip, so you can't really get to power numbers (or indeed, raw performance as opposed to performance / watt or performance / GHz) without actually making a practical chip out of it in some given process node. You're better off comparing it to a core, such as the ARM Cortex series, rather than the S21.

SiFive claims a SPECint2006 score of > 12/GHz, meaning that it'll get a performance of about 24 at 2 GHz or ~31 at 2.6 GHz, making it on par with an A76 in terms of raw performance.


Qualcomm’s business strategy has become: hold everyone at gunpoint, then act surprised when everyone looks for alternative customers/partners/suppliers.


They're the Oracle of hardware.


> Qualcomm doesn't have nearly as much to lose as ARM does and they know it.

question: isn't arm somewhat apple?

...Advanced RISC Machines Limited and structured as a joint venture between Acorn Computers, Apple, and VLSI Technology.

https://en.wikipedia.org/wiki/Arm_Holdings#Founding


> question: isn't ARM somewhat Apple?

Not for decades. Apple sold its stake in ARM when Steve Jobs came back, they needed the money to keep the company going.


>>Qualcomm moves to RISC-V and ARM

That is a HUGE cost!


> Qualcomm is almost certainly ARM's biggest customer.

You think Qualcomm is larger than Apple?


Absolutely.

There are nearly 2B smartphones sold each year and only 200M laptops, so Apple's 20M laptop sales are basically a rounding error and not worth considering.

As 15-16% of the smartphone market, Apple is generally selling around 300m phones. I've read that Qualcomm is usually around 25-35% of the smartphone market which would be 500-700M phones.

But Qualcomm almost certainly includes ARM processors in their modems which bumps up those numbers dramatically. Qualcomm also sells ARM chips in the MCU/DSP markets IIRC.


Qualcomm's modems aren't ARM processors, they're Hexagon.

https://en.wikipedia.org/wiki/Qualcomm_Hexagon


Qualcomm may have the market but Apple has the profit.


Apple has (to a first approximation) a royalty-free license to ARM IP by virtue of the fact that they co-founded ARM - so yes, Qualcomm is most likely paying ARM more than Apple is.


Just to clarify for those that don't know ARM's history: Acorn were building computers and designing CPUs before they spun out the CPU design portion.

Apple did not help them design the CPU/Architecture - that was already a decade of design and manufacturing - they VC'ed the independence of the CPU. The staffing and knowledge came from Acorn.


> Apple did not help them design the CPU/Architecture

I believe they had a big hand in ARM64. Though best reference I can find right now is this very site: https://news.ycombinator.com/item?id=31368489


Oh, I was just wanting to clarify the "Apple co-founded".

They had the Newton project, found ARM did a better job than the other options, but there were a few missing pieces. They funded the spun out project so they could throw ARM a few new requirements for the CPU design.

As a "cofounder" of ARM, they didn't contribute technical experience and the architecture did already exist.


Sigh. Newton. So far ahead of its time.


On the modem side they can move to whatever they want without impact. But on the apps side they need to run Linux/Android/Windows/etc., so they are dependent on ARM.


> If ARM wins

Qualcomm pays them.

> Qualcomm moves to RISC-V

That’s like chopping your foot off to save on shoes…

It would take years for Qualcomm to develop a competitive RISC-V chip. Just look at how long it took them to design a competitive ARM core…

Of course they could use this threat (even if it’s far-fetched) to negotiate a somewhat more favorable settlement.


> Qualcomm is almost certainly ARM's biggest customer

what about Apple?


Is RISC-V anywhere near the same efficiency ballpark?


RISC-V is very competitive with ARM when comparing similar PPA niches.


High-performance and low-power laptop and phone SoCs, no way. There exists no competitive RISC-V chip.


We're all assuming that Oryon-V is already being developed.


"Developed" and "successfully shipped" are two enormously different things.


Not just telecom; they are just super aggressive in general, a bully with a pile of weapons. I remember they tried to threaten the Opus codec with patents just because, when the IETF proposed it as an Internet standard. Luckily that failed, but it shows their nasty approach very clearly. So now they are getting a taste of their own medicine.


The phones haven't been custom ARM chips since 32-bit Krait, IIRC.

This is about Nuvia.

https://en.m.wikipedia.org/wiki/Krait_(processor)


Snapdragon 805 had a 32-bit Krait designed by Qualcomm

https://www.qualcomm.com/products/mobile/snapdragon/smartpho...

810 had a 64-bit core designed by ARM

https://www.qualcomm.com/products/mobile/snapdragon/smartpho...

820/821 had a 64-bit Kryo custom core designed by Qualcomm

https://www.qualcomm.com/products/mobile/snapdragon/smartpho...

After that it was all cores from ARM. The custom CPU team worked on their server chip before getting cancelled and most of the team went to Microsoft


When you look at the https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.ht... table, you can see that in the Snapdragon 8xx series the first to use off-the-shelf ARM cores was the 888 in 2020.

The 865 (2019) has Cortex-A77 + Kryo 4xx Silver; the 888 (2020) uses Cortex-X1 + Cortex-A78 + Cortex-A55 cores.


Most of the cores branded "Kryo" have negligible differences from the original Arm cores. There might be some differences in the cache memories and in the interfaces between the cores and the rest of the Qualcomm SoC, but there are no differences in the inner CPU cores.

Snapdragon 865 has standard Arm cores. The same is true for the older Snapdragon 855, Snapdragon 845 and Snapdragon 835, which I use or have used in my phones.

Qualcomm's claim that those cores were "semi-custom" is mostly BS, because the changes Qualcomm made to the cores licensed from Arm were minimal.


That table doesn't have the Snapdragon 810

https://www.qualcomm.com/products/mobile/snapdragon/smartpho...

I worked on it in 2014. The table does have 808 listed. That may have been a lower end version.

Qualcomm got caught being late. They were continuing development of custom 32-bit cores when Apple came out with a 64-bit ARM core in the iPhone. The Chief Marketing Officer of Qualcomm called it a gimmick, but Apple was a huge customer of Qualcomm's modems, so Qualcomm shoved him off to the side for a while.

https://www.cnet.com/tech/mobile/qualcomm-gambit-apple-64-bi...

Because Q's custom 64-bit CPU was not ready, the stop-gap plan was to license a 64-bit RTL design from ARM and use that in the 810. It also had overheating problems, but that's a different issue. There were a lot of internal politics going on at Q over the custom cores and server chips that ended up in layoffs.


I need to have a closer look. Thanks.


Qualcomm should put a giant bid on SiFive tomorrow, to remind ARM that it's not unassailable


Intel supposedly put in a multi-billion dollar offer and got laughed out of the room.


So a business that is entirely dependent on ARM IP and, for the most part, Android should "remind" the company they're dependent upon? Let's do a thought experiment - Qualcomm switches to RISC-V while the other Android SoC makers (MediaTek, Samsung, Google, Xiaomi, etc.) stay on ARM. Who buys the new Snapdragon RISC-V phone?


They can design chips. What Qualcomm might not be able to do is deliver solid software support for their riscv64 hardware. Android decelerated its efforts toward RISC-V support recently.


>Yes, longterm Q might invest in their own RISC implementations

Q is investing over $1 billion into RISC-V.

ARM is fucked long term. Sure, Qualcomm themselves are no angel. But the absurdities of this case are basically making ARM toxic to any serious long-term investment. Especially when ARM is in Apple's pocket and isn't releasing any chip designs competitive with Apple's, while Apple gets free rein to do as it wants. Basically a permanent handicap on ARM chip performance.


Since I don't see the Android or iOS app stores ever switching to RISC-V, I think ARM will be fine.


Why would you think that? The Play store already supports x86 binaries when necessary.


Because an extremely low percentage of apps support x86.


An extremely low percentage ship x86 support, but a huge number support it already for internal uses. The emulator, for both local and CI use, and any host-side tests are all x86. These are extremely common to have for everyone. Big companies love having CI, and small ones love the emulator to avoid needing a ~dozen dev phones for the different API levels and form factors.

But importantly, since ARM and RISC-V have the same memory model, unlike x86, it really is just going to be an extra compilation target for any native libraries, and that's about it.


The Qualcomm online defenders are something else too.

Qualcomm have been acting badly for years, including attempting to turn RISC-V into Arm64 but without the restrictions. You cannot trust people that behave like this, where everything they do is important and everything you do is worthless.

The funny thing is Qualcomm do have some wildly impressive tech which is kept secret despite being so ubiquitous, but they have had persistent execution failures at integration which have led to them throwing their partners under the bus.

Qualcomm have the same sort of corporate difficulty you see at Boeing, only in a less high profile sector.


> throwing their partners under the bus

I found it telling that every single smartphone vendor refused to license Qualcomm's proprietary tech for smartphone to satellite messaging.

> In a statement given to CNBC, Qualcomm says smartphone makers “indicated a preference towards standards-based solutions” for satellite-to-phone connectivity

https://arstechnica.com/gadgets/2023/11/qualcomm-kills-its-c...


People only ever use Qualcomm chips because they have a gun to their head.

Usually that gun is the latest wireless standard, like 4G or 5G.


4g and 5g are open standards.

It was their now mostly irrelevant CDMA patents that Qualcomm used as a weapon against device makers.

> Many carriers (such as AT&T, UScellular and Verizon) shut down 3G CDMA-based networks in 2022 and 2024, rendering handsets supporting only those protocols unusable for calls, even to 911.

https://en.m.wikipedia.org/wiki/Code-division_multiple_acces...

In my opinion, Qualcomm's abuse of their CDMA patents is the reason that zero device makers were willing to get on board with a new Qualcomm proprietary technology.


They’re open standards, sure, but there is almost no competition in the cellular modem space. Intel tried and failed spectacularly. Apple bought the scraps of the Intel modem business and still hasn’t released their own modems after… 5 years? Cellular tech is hard and has a lot of esoteric requirements. If you want a cell modem for a smartphone, you essentially buy either the latest and greatest from Qualcomm or you buy something from one of the Chinese companies (Huawei, Mediatek) which has its own set of problems.


Samsung, Huawei and MediaTek SOCs implement the same open 4g/5g standards.

Apple's modem is said to be shipping this coming spring in the newest iPhone SE iteration.

Google's Pixel phone lineup has used Samsung's modems for generations now.


There’s so many sub varieties of the standards. You’re making a gross oversimplification that they’re “the same”. Compare intel vs Qualcomm modems that were released on iPhones. They were “the same” standard but the Qualcomm modems were notably faster in testing. Maybe they’re all at parity these days, but it’s pretty hard to do a fair comparison.


By all means, point to some official statement showing that Google cannot market the Pixel phone as supporting the 4g/5g standards due to Google's use of Samsung modems.


You’re not hearing what I’m saying. I’m saying that Qualcomm historically has supported more advanced 4g and 5g standards that allow them to achieve faster modem rates. Whether that’s true today I’m not sure, but it was definitely true back when Intel was making modems. “Supporting 4g/5g” is meaningless. It matters what bands, what data rates, the sub carrier rates, how many channels you can bond together, etc etc. Take a look at the product briefs of each modem and compare the actual “supported features” and it’s a lot more specific than just “5g” and certain bands, for instance.


>You’re not hearing what I’m saying.

You are saying Qualcomm doesn't have competition because Qualcomm makes the best modem and others making worse products can't compete?


Google certainly tortured me and everyone else I knew who had a Pixel 6 (or Pro) phone: you would randomly lose cellular and WiFi, and they would not recover on their own, necessitating a reboot or toggling airplane mode to get back online.

The Exynos chipset is cursed; Samsung only ships it in markets where performance is a lower priority than price, hence not shipping Exynos in the US outside the Google Pixel whitelabel relationship.


> hence not shipping Exynos in the US

I thought it was primarily because of some patent/royalty dispute with Qualcomm?

And/or it not having support for CDMA which was not relevant outside of the US. Now that it’s not an issue I wouldn’t be surprised if Samsung would transition to Exynos eventually (they are already apparently selling some models).


Google's Pixel handsets have worse modem performance than similar flagship Samsungs sold in the US, as even Samsung won't sell their underperforming Exynos chipsets in their flagship phones in the USA.

Exynos 5G New Radio chipsets got really bad with the Pixel 6 series, where the phone randomly loses cell signal and WiFi at the same time in areas with strong signal, and the only way to get back online is to put the phone in airplane mode or reboot it - though sometimes neither works.


"We're disinclined to acquiesce to your request.

Means 'no'."


> attempting to turn RISC-V into Arm64 but without the restrictions

This flew past me, do you have a link?


Brucehoult mentions it further down.

> This time last year they were all over the RISC-V mailing lists, trying to convince everyone to drop the "C" extension from RVA23 because (basically confirmed by their employees) it was not easy to retrofit mildly variable length RISC-V instructions (2 bytes and 4 bytes) to the Aarch64 core they acquired from Nuvia. At the same time, Qualcomm proposed a new RISC-V extension that was pretty much ARMv8-lite.

This is enough of a philosophy change to break existing RISC-V software, and so is purely motivated by a desire to clone IP they supposedly licensed as honest brokers.


Not a CPU designer, but aren't variable-length instructions a big part of why x86 decoders take up so much area, and don't they also increase branch predictor complexity?


The RISC-V creators were aware of the issues when they designed the C extension with 16 bit instructions to be able to compete with ARM's Thumb. So the bottom two bits of an instruction are enough to distinguish between 16 bit and 32 bit encodings (the standard has a scheme for future longer instructions, but hardware that doesn't implement them can ignore that).

This means that if a RISC-V core reads a 16 byte block from instruction memory, it only has to look at 8 pairs of bits. This would require 8 NAND gates plus 8 more NANDs to ignore the top half of any 32 bit instructions. That is 4x(8+8)=64 transistors.

The corresponding circuit for x86 would be huge.
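In software terms the separation check really is tiny. A minimal C model of that two-bit test (my own sketch, not code from any of the projects mentioned here):

    #include <stdint.h>

    /* RISC-V puts the length in the low two bits of the first 16-bit
       parcel: 0b11 means a 32-bit instruction, anything else is a
       16-bit compressed one (ignoring the reserved longer encodings). */
    static int rv_insn_length_bytes(uint16_t first_parcel) {
        return (first_parcel & 0x3) == 0x3 ? 4 : 2;
    }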

But note that this just separates the instructions. You still have to decode them. Most simple RISC-V implementations have a circuit that transforms each 16 bit instruction into the corresponding 32 bit one, which is all the rest of the processor has to deal with. Here are the sizes of some such circuits:

Hazard 3: 733 NANDs (used in the Raspberry Pi RP2350)

SERV: 532 NANDs (serial RISC-V)

Revive: 506 NANDs (part of FPGAboy)

You would need 8 such circuits to handle the maximum of eight 16 bit instructions in a 16 byte block, and then you would need more circuits to decode the resulting 32 bit instructions. So the 16 NANDs to separate the variable length instructions are not a problem like they are for other ISAs.
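To make the "transform each 16 bit instruction into the corresponding 32 bit one" idea concrete, here is what one entry of such an expander does, as a C model - C.ADDI only, and purely my own illustrative sketch, not code from Hazard 3, SERV or Revive:

    #include <stdint.h>

    /* Expand C.ADDI rd, imm (funct3=000 | imm[5] | rd | imm[4:0] | op=01)
       into the 32-bit ADDI rd, rd, imm that the rest of the core sees. */
    static uint32_t expand_c_addi(uint16_t c) {
        uint32_t rd  = (c >> 7) & 0x1F;
        uint32_t raw = (((c >> 12) & 1) << 5) | ((c >> 2) & 0x1F);
        int32_t  imm = (int32_t)(raw ^ 0x20) - 0x20;  /* sign-extend 6 bits */
        /* I-type ADDI: imm[11:0] | rs1 | funct3=000 | rd | opcode=0010011 */
        return (((uint32_t)imm & 0xFFF) << 20) | (rd << 15) | (rd << 7) | 0x13;
    }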

The problem with 16 bit instructions for small RISC-V implementations is that now 32 bit instructions will not always be aligned with 32 bit words. Having to fetch an instruction from two separate words adds circuits that can be a large fraction of a small design.


x86 specifically is extra awful to decode because there's no easy way to tell the instruction length without doing a good portion of the decode process, so it ends up needing to attempt to decode at each byte offset. Valid lengths being all between 1 and 15 doesn't help.

So processing a fetched 16 bytes requires doing 16 partial decodes, each of them non-trivial (skip over unordered prefix bytes; parse fixed prefix before opcode (most bytes are the opcode immediately, except 0x0F, 0xC4, 0xC5, and more for rarer instructions & AVX-512/APX, which have extra skipping necessary); index a LUT by the opcode (with a different table for the 0x0F case, and maybe more cases idk; plus maybe some more length depending on byte after opcode; for some opcodes the length even depends on if a specific prefix byte was given, which intel just assumes doesn't happen and behaves slowly if it does); if the opcode needs ModR/M, also parse that from the next byte (another variable-length sequence)).
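Even just the first step - skipping legacy prefixes to find the opcode byte - already needs a loop. A deliberately incomplete C sketch of only that step (my own, covering just the common legacy prefixes):

    #include <stddef.h>
    #include <stdint.h>

    /* Count the legacy prefix bytes preceding the opcode. A real decoder
       must still handle 0x0F escapes, ModR/M, SIB, displacement and
       immediates before the total instruction length is known. */
    static size_t skip_legacy_prefixes(const uint8_t *p, size_t max) {
        size_t i = 0;
        while (i < max) {
            switch (p[i]) {
            case 0xF0: case 0xF2: case 0xF3:             /* LOCK, REPNE, REP */
            case 0x2E: case 0x36: case 0x3E: case 0x26:  /* segment overrides */
            case 0x64: case 0x65:                        /* FS, GS overrides */
            case 0x66: case 0x67:                        /* size overrides */
                i++;
                break;
            default:
                return i;
            }
        }
        return i;
    }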


It's all relative: on a giant OoO core it's barely noticeable. In the case of x86 the range of lengths is extreme; a simple choice between two or four bytes is much simpler.


I don't really see how this follows. Fixed length instructions are a well known and highly desirable property for instruction decoders (at the cost of some density), which has been known about and done since the first RISCs. And ARMv8 is basically just a warmed-over RISC with very little new or innovative in there. It's just an ISA.

ARM Ltd is basically rent-seeking on keeping the ISA proprietary, same as Intel and AMD do. Sure, it's a highly specialized and very impressive task to make an ISA, but not to the value of extracting hundreds of millions of dollars every year. If ARM gets upset because someone wanted fixed-length instructions standardized in RISC-V, that's really the height of hypocrisy.


>If ARM gets upset because someone wanted fixed-length instructions standardized in RISCV that's really the height of hypocrisy.

ARM never said that.

>but not to the value of extracting hundreds of millions of dollars ever year.

ARM makes money on selling ARM design, not from their ISA licensing.


> ARM never said that.

Someone did.

> ARM makes money on selling ARM design, not from their ISA licensing.

That's not correct, they make money from ISA licensing. That's called their architectural license, and that is what is being canceled here.


When do Qualcomm's patents run out?


As I understand it [1] the context is:

Qualcomm had one type of ARM license, granting them one type of IP at one royalty rate.

A startup called "Nuvia" had a different type of ARM license, granting them more IP but at a higher royalty rate. Nuvia built their own cores based on the extra IP.

Then Qualcomm bought Nuvia - and they think they should keep the IP from the Nuvia license, but keep paying the lower royalty rate from the Qualcomm license.

ARM offer a dizzying array of licensing options. Tiny cores for cheap microcontrollers, high-end cores for flagship smartphones. Unmodifiable-but-fully-proven chip layouts, easily modifiable but expensive to work with verilog designs. Optional subsystems like GPUs where some chip vendors would rather bring their own. Sub-licensable soft cores for FPGAs. I've even heard of non-transferable licenses - such as discounts for startups, which only apply so long as they're a startup.

If Nuvia had a startup discount that wasn't transferable when they were acquired, and Qualcomm has a license with a different royalty rate but covering slightly different IP, I can see how a disagreement could arise.

[1] https://www.theregister.com/2022/08/31/arm_sues_qualcomm/


Do you think that Qualcomm bought Nuvia with the expectation that Nuvia's royalty agreement would remain intact? Perhaps they wouldn't have paid as much, or purchased them at all if the license is able to be terminated in that way.


I have no idea of the specifics of Nuvia's license.

But it's totally common for corporations to make value-destroying acquisitions. Some research suggests 60%-90% of mergers actually reduce shareholder value. Look at the HP/Autonomy acquisition, for example - where the "due diligence" managed to overlook a $5 billion black hole in a $10 billion deal. And how often have we seen a big tech co acquire a startup only to shut it down?

Mergers only seem rational because once a mistake is set in stone, the CEO usually has to put a brave face on it and declare it a big success.

I could certainly believe during the acquisition process that the specifics of Nuvia's license were overlooked, or not fully understood by the person who read them.


Then, they should have done their homework because any transferability clause would be in Nuvia’s licensing agreement.

Or maybe there is no such language in the contract and Arm is over-extending, but that sounds unlikely.


> and they think they should keep the IP from the Nuvia license

Incorrect according to Qualcomm. They claim the Snapdragon X Elite & other cores are from-scratch rebuilds, not using any of the designs created at Nuvia.

They did however use the engineers who had designed Nuvia's cores, so there may be a noted resemblance in places. As the latest Tech Poutine put it: 'you can't delete your mind.'


Reasonably sure that Qualcomm aren’t claiming their X Elite cores are free of Nuvia created IP. Rather that Arm can’t prevent the transfer of that IP to Qualcomm.


The linked article says: "However, says Arm, it appeared from subsequent press reports that Qualcomm may not have destroyed the core designs and still intended to use the blueprints and technology it acquired with Nuvia"

Obviously it's hard to know for sure - it could even be an Anthony Levandowski type situation, where an ambitious employee passes off an unauthorised personal copy as their own work without Qualcomm realising.


That's all routine though. This kind of license negotiation happens all the time, in every industry. Companies need to work together to sell their products. And almost always it ends up just being rolled into whatever the next contract they write is. Very occasionally, it ends up in court and one side settles once it's clear which direction the wind is blowing.

But getting to the point where a supplier of critical infrastructure pulls a figurative knife on one of their biggest customers for no particularly obvious reason is just insane. ARM Ltd. absolutely loses here (Qualcomm does too, obviously), in pretty much any analysis. Their other licensees are watching carefully and thinking hard about future product directions.


See, this is one of the downsides of running an IP-based business.

If you're selling physical chips and customer decides not to pay for their last shipment, you stop sending them chips. No need to get the courts involved; the customer can pay, negotiate, or do without.

But when you're selling IP and a customer decides not to pay? You can't stop them making chips using your IP, except by going through the courts. And when you do, people think you're "pulling a figurative knife on one of your biggest customers for no reason"


That's true enough, but you can refuse to license them for their next product, which is (or should be) incentive enough. Selling silicon IP blocks is a very mature industry, ARM is one of many such companies, and no one else is out there throwing doomsday bombs like this. Really, this is extremely weird.


> but you can refuse to license them for their next product

That might not be legally possible - or deemed to be anticompetitive. Cancelling an existing license if a firm has breached it would probably be less problematic.


My understanding is that it's the other way around.

- Qualcomm has a "technology license". Because ARM designs the entire chip under that license, ARM charges a premium royalty.

- Nuvia had an "architectural licence" (the more basic licence). Nuvia then had to design the chip around that foundation architecture (i.e. Nuvia did more work). The architectural license has a lower royalty.

Qualcomm decided they were using Nuvia chips, and therefore should pay Nuvia's lower royalty rate.

ARM decided that Nuvia's chips were more or less ARM technology chips, or possibly that Nuvia's license couldn't be transferred, and that therefore the higher royalty rate applied.


No. Both Qualcomm and Nuvia had an ALA = "Architecture License Agreement". Qualcomm had also licensed many ARM-designed cores separately.

An ALA signed with ARM gives the right to design CPU cores that conform to the Arm architecture specification. When the cores designed under it are sold, a royalty must be paid to ARM.

The royalties negotiated by Nuvia were much higher than those negotiated by Qualcomm, presumably based on the fact that Qualcomm sells a huge number of CPU cores, while Nuvia was expected to sell few, if any.

When Qualcomm bought Nuvia, ARM requested that Qualcomm pay the royalties specified by the Nuvia ALA for any CPU cores derived in any way from work done at Nuvia. Qualcomm refused, claiming that they should pay the smaller royalties specified by the Qualcomm ALA.

ARM then cancelled the Nuvia ALA, and claims that any cores designed by Qualcomm that are derived from work done at Nuvia are unlicensed, so Qualcomm must stop any such design work, destroy all design data and obviously stop selling any products containing such CPU cores.

The trial date is in December, and ARM has given advance notice that they will also cancel the Qualcomm ALA some time shortly after the trial. So this will have no effect for now; it is just a means to put more pressure on Qualcomm, so they might accept a settlement before the trial.

Qualcomm buying Nuvia should increase the revenue for ARM from the work done at Nuvia, because Qualcomm will sell far more CPU cores than Nuvia, so even with smaller royalties the revenue for ARM will be greater.

Therefore the reason ARM does not accept this deal is that, in parallel, their revenue from the ARM-designed cores licensed to Qualcomm would soon drop to zero. Qualcomm has announced that they will replace the ARM-designed cores in all their products, from smartphones and laptops to automotive CPUs.


Not quite. Qualcomm had an existing architecture license, but with lower royalty rates than Nuvia's. They claim they can sell Nuvia-derived designs under that license with its lower royalty rates.


Thanks for the clarification! :-)


Maybe? The article I quoted was a bit vague on that point, only mentioning "chip blueprints", which is pretty ambiguous.

However, some sources [1] say the "architectural license" is "higher license fee, fewer use constraints, greater commercial and technical interaction"

There are often two parts to the cost of these licenses - an upfront fee, and a per-chip royalty. So it could be both at the same time: Nuvia, who made few chips, might have negotiated a lower upfront fee and a higher per-chip royalty. Whereas Qualcomm, who make lots of chips, might have prioritised a lower per-chip royalty, even if the upfront fee was greater.

[1] https://www.anandtech.com/show/7112/the-arm-diaries-part-1-h...


What is often overlooked on this topic is that ARM also has a duty to protect its ecosystem.

By using its dominant position in smartphone chipsets, Qualcomm is in the process of establishing a custom ARM architecture as the new standard for several industries, fragmenting the ARM ecosystem.

For decades, ARM has carefully avoided this by allowing selected partners to "explore" evolutions of the IP in an industry, but with rules and methods to make sure they can't diverge too much from ARM's instruction set.

Qualcomm acquired Nuvia and is now executing the plan of using their restricted IP in an unrestricted fashion across several industries ("powering flagship smartphones, next-generation laptops, and digital cockpits, as well as Advanced Driver Assistance Systems, extended reality and infrastructure networking solutions").

ARM has designed architectures which achieve comparable performance to Nuvia's IP (Blackhawk, Cortex-X), but Qualcomm's assumption is that they don't need them and that they can apply Nuvia's IP on top of their existing architecture without licensing any new ARM design.


>What is often overlooked on this topic is, that ARM also has a duty to protect its ecosystem.

It is not overlooked. The duty of any company to protect its IP and contracts during a dispute is largely if not entirely irrelevant on the internet, including but not limited to HN. People simply want whatever company they like to win and the one they hate to lose.

This has been shown repeatedly in Apple vs Qualcomm on modems and IP licensing, in the monopoly trials, in Apple vs the US, etc.

And I just want to say thank you. You are one of the very, very few - to the point I can count them on my fingers - to actually dig into the court case instead of relying on whatever the media decided to expose us to.


Last time I checked, Qualcomm did not introduce any custom instructions into the ISA. What damage are you speaking of?


The merging of the Qualcomm architecture with the Nuvia IP they acquired, which was created under a far-reaching license ARM granted to Nuvia. Combining both creates a custom architecture different from ARM's consolidated and harmonized designs offered to licensees (i.e. Blackhawk or Cortex-X).

Nuvia's IP was not supposed to be used in all the use-cases that Qualcomm intends to deploy it in (and moreover there is still the ongoing legal dispute over whether Qualcomm is allowed to use it at all).


So when is ARM going after Apple for their custom architecture?

Afaik, Q hasn't diverged from the standard instruction set at all in the Oryon Snapdragons.


Apple was a founding partner of ARM and owned almost half of the company back when Advanced RISC Machines was first created.


Presumably Apple didn't violate the terms of their own licensing agreement with ARM.


> So when is ARM going after Apple for their custom architecture?

As soon as they violate the terms of their architectural license, which seemingly hasn't happened yet.


The wrinkle is that Nuvia's IP supposedly wasn't used here. Qualcomm set up an IP firewall during the acquisition and immediately switched the team over to doing new work under the existing Qualcomm license. That's about the right timeframe for the v2 chips at the very least.


That's not Qualcomm's position in court. Do you have any legitimate source for that statement?

Also, the foundation of Qualcomm's "Oryon" is clearly Nuvia's "Phoenix" core, which is based on Arm’s v8.7-A ISA.

After the acquisition, Qualcomm formed a team to redesign Phoenix for use in consumer products instead of servers, creating Oryon.

That's the issue they have. Qualcomm was/is confident it could resolve this IP issue of the technical QCT division via their licensing strong-arm QTL, forcing ARM into accepting Qualcomm's view.

However, they possibly overstepped a bit, as they also expect that they don't need to license newer CPU-designs from ARM because (like Apple) they built a custom design under their architecture license.

But in reality the core design of Oryon was in part built under Nuvia's license agreement, which has explicit limitations on transferability (only Nuvia as-is) and usage (only servers).

In court, Qualcomm doesn't even dispute that; they argue that this contract should not be enforced and hope that the court agrees.


In a way, the legalese details matter less than the industry's perception of ARM, as the move is seen as lighting the house on fire in an attempt to win an argument.

Qualcomm is the supplier of high-performance ARM-based SoCs in the consumer segment, with the best-performing core design. ARM is not doing damage to Qualcomm here but to ARMv8/9's long-term survival.

I, for one, am greatly unhappy about it because RISC-V is a disgusting design that happened to be in the right place at the right time, becoming yet another example of the industry pushing an abysmally inferior choice due to circumstantial factors. I sincerely hope it fails in all possible ways (especially the RVV extension) and a completely redone, better design that is very close to ARMv8-A takes its place.

But, in the meantime, we have ARMv8/9-A, which is the best general-purpose ISA, even with the shortcomings of SVE/2, where the AVX family, especially with the AVX-512VL extension, is just so much better.


The alternative for ARM is to accept that the supplier of high-performance ARM-based SoCs is taking control over the future roadmap of ARM architecture not just for the consumer segment, but a wide range of industries, just like they announced publicly. And they do that by taking the cash-cow of consumer products to push their custom architecture into those other industries.

As of now, ARM is largely in control of the evolution of the ARM architecture, because even for those with an ALA (like Qualcomm), ARM's CPU designs are the reference for the evolution in the respective industries. Straying too far from those designs has turned out not to be economically feasible for most players since the move to 64-bit, which is a beneficial development for ARM, as they can drive a harmonized ecosystem across different industries.

Now, ARM gave Nuvia a very permissive license to cooperate on the creation of an ARM-based architecture for a segment where ARM was very weak: servers. The licensing contract explicitly limited the resulting IP to servers, and to Nuvia alone.

Regardless of the legal dispute, Qualcomm now plans to use this IP to create a design roadmap parallel to that of ARM, with a market position in consumer smartphone SoCs funding a potential hostile takeover of several other industries where ARM carefully works to establish and maintain a competitive landscape.

Qualcomm's plan is to achieve something similar to Apple, but with the plan to sell the resulting chipset.

So while ARM is building and maintaining an ecosystem of ARM as a vendor-agnostic architecture-option in several industries, Qualcomm is on a trajectory to build up a consolidated dominant position in all those industries (which may end up forcing ARM to actually follow Qualcomm in order to preserve the ecosystem, with Qualcomm having little vested interest to support an ecosystem outside of Qualcomm).


Is there anything you think is missing in RVV? I'm trying to collect such things and create a list of instructions that could improve it.


Picked it up from skimming the Irrational Analysis piece on it today, but it looks like unsubstantiated speculation on Oryon v2.


> If Arm follows through with the license termination, Qualcomm would be prevented from doing its own designs using Arm’s instruction set

i'm not sure this is true. certainly "chip" IP has been a real legal quagmire since, forever.

but it was my understanding that you could neither patent nor copyright simply an "instruction set".

presumably what you get from ARM with an architecture license would be patent licenses and the trademark. if so, what patents might be relevant or would be a problem if you were to make an "ARMv8-ish compatible" ISA/Architecture with a boring name? i haven't seen much about ARM that's architecturally particularly unique or new, even if specific implementation details may be patent-able. you could always implement those differently to the same spec.

to further poke at the issue, if it's patents, then how does a RISC-V CPU or other ISA help you? simply because it's a different ISA, doesn't mean its implementation doesn't trample on some ARM patents either.

if it's something to do with the ISA itself, how does that affect emulators?

what's ARM's IP really consist of when you build your own non-ARM IP CPU from scratch? anyone have examples of show-stopper patents?


You can patent something like "any possible hardware circuit that implements [the functionality of some weird yet mandatory ARM instruction]". The patent doesn't cover emulators because they're not hardware.

Way back in the day there were some MIPS patents that only covered a few instructions so people would build not-quite-MIPS clone CPUs without paying any royalties.


thanks for the hint. i found https://www.probell.com/lexra/

sheesh, patent https://patents.google.com/patent/US4814976A/en is a real "gem"

but it's probably a good example: a faulty patent (later invalidated) on something obvious

MIPS sues a company that doesn't even implement the odd instructions because it traps them, allowing a possibility of emulation.

there's literally no case here

just to sue them into oblivion and squish them with superior cash resources. and then to get squished by ARM because they weren't paying attention.

it's like a dark fairy tale. i hate corporate lawyers.


Thanks for highlighting Lexra, my first startup. After becoming a patent agent and strategist 15 years later, these are my takeaways.

1. Little companies don't sue big companies. No need. Startups exist because they have something new and move faster.

2. Big companies don't sue little companies. Too little money, it would look anticompetitive to the gov't, and most startups fail, anyway.

3. Medium companies sue little companies when they start taking away prime customers.

ARM sued Picoturbo and they settled. Lexra chose to fight MIPS. Lexra and MIPS hurt each other. That gave ARM the opportunity to dominate the industry.

On an unrelated topic, readers looking for concise basic info on patenting that your attorney might not mention might enjoy: https://www.probell.com/patents


That is what was done with the THUMB instructions, IIRC.


Are there remaining patents relevant to the ARM architecture that haven't expired already?


For the ARM7TDMI probably not, but for the A, R and M cores, for sure there are. That is the name of the game.


Most comments here seem to think that Qualcomm has to settle or switch to RISC-V. But from my understanding the article is only about their license to design custom chips with ARM IP, not about using ARM's designs.

For example the Snapdragon 8 Gen 1 uses 1 ARM Cortex-X2, 3 ARM Cortex-A710 and 4 ARM Cortex-A510, which are ARM designs. Their latest announced chip though, Snapdragon 8 Elite, uses 8 Oryon cores, which Qualcomm designed themselves (after acquiring Nuvia).

So isn't Qualcomm still able to create chips like the former, and just prevented from creating chips like the latter? Or does "putting a chip together" (surely there is a bit more going into it) like the Snapdragon 8 Gen 1 still count as custom design?


They're only losing their license to make custom cores. They're still free to use ARM's own cores.

The reason being that ARM gave Nuvia a license to design cores at a specific royalty rate, then Qualcomm bought them to use those cores. ARM claims that the rate attached to that design license is not transferable.


Now Qualcomm does not want to continue to use any Arm cores, both because their own cores are better and because that would save them the cost for royalties.

Obviously, Arm tries to prevent Qualcomm from using their own cores, because this time Arm would lose a major source of their revenue if Qualcomm stopped licensing cores.

When Arm gave architectural licenses to Qualcomm and Nuvia, they were not worried about competition, because Qualcomm could not design good cores, while Nuvia had no prospect of selling enough cores for this to matter.

The merging of Nuvia into Qualcomm has completely changed the possible effect of those architectural licenses, so Arm probably considers granting them to have been a big mistake and now tries to mend this by cancelling them, in the hope of convincing the court that this is not illegal.

For any non-Arm employee or shareholder, it is preferable for Arm to lose, unless the reduction in revenue for Arm would be so great as to affect their ability to continue to design improved cores for other companies and for other applications, but that is unlikely.


There’s a LOT of conjecture in your comment.

Your very first line is one to begin with.

ARM also doesn’t seem to care if QC design their own cores. They just care that they renegotiate the royalty agreement. This is clear if you actually read their statements.


Because Arm wants to increase the royalties for the cores designed by Qualcomm, that is a pretty certain indication that those royalties are smaller than for the cores licensed from Arm.

Therefore Arm cares a lot if Qualcomm designs their own cores, because that would cause a smaller revenue for Arm.

If Arm had not cared whether Qualcomm designs their own cores, they would have never sued Qualcomm.

The official reason why Arm has sued Qualcomm is not to increase the royalties, because that has no legal basis.

It is obvious that the lawsuit is just a blackmail instrument to force Qualcomm to pay higher royalties for the cores they design themselves. But the official object of the lawsuit is to forbid Qualcomm from designing their own cores, by claiming that the Oryon cores used in the new Qualcomm chipsets for laptops, smartphones and automotive applications were designed in violation of the architectural licenses granted by Arm to Qualcomm and Nuvia. Arm therefore requests that Qualcomm stop making any products with these Arm-compatible cores and destroy all their existing core designs.


ARM wants to raise the royalties for the cores designed by Nuvia, whose IP has now permeated all of Qualcomm's IP.

Again, your comments are pure conjecture not based on anything factual. I might as well start saying that QC wants to rip off ARM's IP, and it would be just as factually relevant as your comments.


ARM doesn't care that Apple designs their own cores.

ARM is perfectly happy for Qualcomm to design their own cores, as long as it is at the agreed rate for that sector.

ARM is happy to compete on design, which is what the Cortex X5 is doing. And it has been shown to be just as competitive as Oryon.

The rest of your comments read like made-up stories to back up whatever you think is the truth, and most of them have zero factual basis.


This "cancellation" is likely to be paused until the lawsuit is resolved so it's hard to say what this means. Presumably this is a part of the negotiations going on behind the scenes.


Let’s just assume this happens for a moment.

What do Android OEMs do? They can’t use Apple chips, or now Qualcomm chips. Switching to another architecture is a big deal.

Would this basically hand the Android market to Samsung and their Exynos chips? Or does another short term viable competitor exist?


This move doesn't stop Qualcomm from licensing ARMs reference cores, it only blocks them from designing their own in-house ARM cores like Apple does. The vast majority of Qualcomm chips currently on the market are built around reference cores, they only recently got back into the custom core game with their acquisition of Nuvia which also kicked off this dispute with ARM.


1. Qualcomm is still allowed to buy, use and sell ARM's design IP, i.e. going back to the Cortex X series.

2. MediaTek is available, mostly with ARM's latest IP. And extremely competitive. The only thing missing is Qualcomm's modem. It isn't that MediaTek's modems are bad - they are at least far better than whatever Apple used or planned from Intel's modem unit. The only problem is Qualcomm's is so good that customers still prefer it for the relatively small premium they pay.

3. It is not like Android OEMs can't make their own SoCs. Especially considering the smartphone market can now be largely grouped as Apple, Samsung and the Chinese vendors. Together they are 95%+ of market share.


There are probably Risc-V companies eagerly anticipating an opportunity in that space, but I don't know if any are in the performance ballpark right now.


Qualcomm itself is one such company.


MediaTek is still available.


I notice how 'viable' isn't an operative in your statement.


MediaTek was always viable; it was always the GPU that made non-Qualcomm chips non-viable.


They've come a long way, Samsung is apparently considering Mediatek chips for their next flagship phone (S25).


Their flagship tablet lineup, the Galaxy Tab S10 Plus and Ultra, is entirely MediaTek too.


They're pretty good. Just can't beat qualcomm / Apple flagship. So around Intel level ;)


300W in a phone chip is a bit toasty.


Intel's latest Arrow Lake stuff, about to go on sale, is said to have much better power efficiency, FWIW. Something like half the wattage for roughly equivalent benchmark results, according to them.


Dimensity 9400 looks good


Without (complete) kernel sources, they're already e-waste.



Don't know about cell phones but their Chromebooks are pretty good.


Everyone's going to have to buy a Pixel phone, ahahahahaahahha.


Samsung phones are presumably fine as well. They recently switched to Snapdragon (Qualcomm) chips, but before that they were using Exynos (Samsung) chips.


Samsung phones use both, in a very literal sense. Their flagship devices usually have both Snapdragon and Exynos variants for different regions, and their lower end devices are mostly Exynos across the board.

The S23 line was an exception in using Snapdragon worldwide, but then the S24 line switched back to using Snapdragon in NA and Exynos everywhere else, except for the S24 Ultra which is still Snapdragon everywhere.

Yes, it's a confusing mess, and it's arguably misleading when the predominantly NA-based tech reviewers and influencers get the usually superior QCOM variant and promote it to a global audience who may get a completely different SoC when they buy the "same" device.


Is the Qualcomm chip still considered significantly better than the Exynos? I remember that was the case a few years ago.


Last time I checked, the latest Qualcomm Snapdragon was faster and more energy efficient than the latest Exynos. Especially the top-binned chips that Samsung gets.

Still, the fact that Samsung can swap out the chip in their flagship product with virtually no change other than slightly different benchmark scores means that these chips are pretty much fungible. If either manufacturer runs into serious problems, the other one is ready to eat their market share for lunch.


Yes, Exynos is still behind in performance and thermals. Exynos modems are also still garbage and plague Pixel phones with issues. Though slowly improving with each generation, it's awful that it's being troubleshot in public over years.


Samsung still uses qualcomm in US markets and exynos outside US even for their flagships.


There is MTK that offers the Dimensity series SoCs with Arm cores. Qcom can also go back to using Arm Cortex cores in the next Snapdragon SoC.


Because of all the discussions in the comments about ARM and RISC-V, could someone explain to me the difficulties of designing a chip for a new ISA?

I'm wondering because to me as a layman it sounds like it's 'only' a different language, so why is it not that easy to take already existing designs and modify them to 'speak' that language and that's it?

Or is an ISA more than just a different 'language'?

Or is hardware not really the biggest problem, but rather Software like compilers, kernels, etc.?


Your view even of human languages is simplistic. Forgive me, I don't mean to be rude, I'm just trying to explain.

You might think that languages just have different words for the same things.

In reality the problems are where the same things don't exist. People don't view the world the same way and don't have equivalent words. In Turkish it's very important whether your aunt is on your mother's side or your father's side, so there are different words for each... but there are no words for "he" or "she", as they don't bother with gender in sentences.

So, for example, every conversation converted from Turkish to English loses an important bit of meaning about relationships, and the sex of a person has to be inferred from context, which is not easy to do automatically.

Similarly computer software....and there's a lot of it.


It's interesting that many with a CS/CE background believe in the irreversible death of the Sapir-Whorf hypothesis and the victory of Universal Grammar theory, while many from the exact same cohort can trivially explain how tightly coupled an ISA and a CPU implementation are, and how mutually incompatible different ISAs can be.

I mean, it's most likely overlapping groups and not actually the same set of people, but I find it ironic and funny.


I suspect that it's an argument about nothing. E.g. I can think about my "mother's sister", but I don't care about it enough to invent a shortcut word for it like "teyze". To Turks I think it means something, because in some families there's great emphasis on the connections with one or the other side of the family. It's a rather brutal and dismissive attitude sometimes.

So I personally think that groups of people have a common understanding and some commonly accepted attitude that make up their culture. The purpose of words is to reference those feelings. An outsider can understand to a degree because we are all human but they usually get the emphasis wrong and also tend to miss lots of implications.

Of course you as an entrant to a culture (e.g. a kid) are going to get educated over time about what it all means and you're going to be discouraged from expressing alternate cultural values because overall not enough people feel like that to have invented convenient ways of expressing it.

So language is going to affect you but as some idea becomes popular and needs expression people do invent new words. So you can affect it - if you can get enough people to pick up on your invention by adding a new idea to their mental model of life.


> Or is an ISA more than just a different 'language'?

It tends to be more like going from C89 to Haskell. You're not just switching the keywords around, but also fundamental architectural concepts. There's still some parts you can recycle and some skills that transfer, but less than you'd like.

> Or is hardware not really the biggest problem, but rather Software like compilers, kernels, etc.?

That's the next problem. Kernels, device drivers, support hardware, a lot of low level stuff needs to be adapted, and even a company the size of Qualcomm doesn't necessarily do everything inhouse, there will be lots of external IPs involved and all those partners need to also be willing to migrate over to a different ISA.


I'd perhaps put ARM vs RISC-V closer to like C# vs Java (or maybe JS vs Python) - on a high level rather similar, but a majority of the specifics don't really line up (especially at the uop level requiring dumping a bunch of previous optimized cases, and adding a bunch of new ones).


Designing a new, clean, better ISA is easy. With current experience it could be done in weeks or a few months at most, allowing for simulation of how various variants run benchmarks in order to determine the best one.

Nevertheless, designing the ISA is the only easy part. Then you have to write high-quality compilers, assemblers, linkers, debuggers and various other tools, and also good documentation for the ISA and all the tools.

Developing such toolchains and convincing people to use them and educating them can take some years. Optimizing various libraries for which the performance is important for the new ISA can also take years.

These things are what the incumbent ISAs like Aarch64, RISC-V or POWER provide.


Interesting to see RISC-V described as an "incumbent" :D


Having already shipped in 10b+ chips by two years ago, it isn't wrong.

It has already entrenched itself in the industry.


It's not about the chip itself at all, it's about the software that runs on it. My Computer Architecture professor always used to say, nobody wants to recompile their code!

There are still binaries that were compiled in the 80s running happily on an x86 system because the chip conforming to the ISA guarantees that machine instruction will run the same as it did in the 80s.

As for "only" a different language, absolutely lots of software does this. As part of Apple's move from x86 to ARM, they implemented a software called Rosetta which translates x86 instructions into ARM (also known as emulation). The only problem with this is that there's a performance penalty you pay for the emulation, which can make using a program slower, choppier, etc.



And Qualcomm makes the only competitive ARM chips on the market besides Apple's. And now they are being taken out by ARM. Is it really that expensive to re-license things? This seems self-defeating.


Samsung and MediaTek make pretty competitive ARM processors.

MediaTek 9400 is literally the top-performing SoC on the market.

Samsung Exynos 2400, 2400e

MediaTek Dimensity 9400, 9300 Plus, 9300

https://nanoreview.net/en/soc-list/rating


The Geekbench scores of Apple are highest, and then Qualcomm, according to this page. MediaTek's top chip is literally 25% slower according to this listing.

The ordering in ranking is a little weird.


25% slower than Apple is more than enough for a mobile chip. The new Apple chips are faster than desktop chips from a few years ago.


You got even that wrong. Single-core GB score is highest, multicore is not. Snapdragon's score is higher.


People do not seem to understand that these chips are in different price brackets, even between Qualcomm and Mediatek. That is why there is a discrepancy in performance.

Would not be fair to compare a $20 toaster to a $50 toaster and say that the $20 toaster is slower.


I never thought I would hear MediaTek and top chip maker in the same line


AWS has the Graviton ARM CPU that is pretty competitive but you can only rent them.


Ampere has Graviton-like chips that you can actually buy, but neither Graviton or Ampere are really in the same market segment as Qualcomm and Apple.


System76 recently released an ampere desktop. It starts at about half the price of a Mac Pro and seems to top out at many more cores.

I’m not sure if the low end ampere is as slow as a high end mac though.


Graviton is just an ARM neoverse core no? It's not a bespoke design.


I think so.


I believe Graviton is competitive in the server market yes, but not in mobile or laptops.


Graviton4 is based on Neoverse V2 which is based on X3.

Neoverse V3 was announced Feb of this year. It should have an 80-85% performance increase and should basically be based on something very close to x925.


The issue is that Qualcomm wants to switch to making ARM chips for which they don't pay ARM much money.

When you're making ARM chips, you can either license the instruction set or license whole core designs. Most people license whole core designs: you build a chip using an ARM Cortex X4 core, 3 Cortex A720 cores, and 4 Cortex A520 cores and call it a day. But when you're using ARM's core designs, you have to pay a lot to license them, whereas when you're licensing just the instruction set (and designing your own cores), you pay ARM a tiny licensing fee. This is what Apple does.

In this case, the story goes:

A startup called Nuvia wanted to create custom ARM cores for servers and negotiated a deal with ARM that was very favorable since ARM would like to grow its server marketshare. The agreement included a stipulation that they couldn't sell their IP to another company for that other company to build ARM cores built on Nuvia IP (according to ARM). Qualcomm argues that they have an instruction set license so they're allowed to build custom cores based off that license. ARM says that Nuvia's instruction set license means that Qualcomm can't.

I don't know what the cost difference is between core licenses and instruction licenses, but some places seem to think it's around 5x. Qualcomm has around 35% of the ARM chip market, but crucially a huge portion of the more expensive (and profitable for ARM) flagship cores. It's possible that Qualcomm is half of ARM's business (maybe more). If Qualcomm shifts to their own core designs and starts paying ARM 20% of what they're paying now, that could wipe out 40% of ARM's revenue.
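Back-of-envelope on that 40% figure (both the ~50% share and the 5x price ratio are the guesses above, not disclosed numbers):

    #include <stdio.h>

    int main(void) {
        const double share = 0.5;  /* assumed: Qualcomm ~half of ARM revenue */
        const double rate  = 0.2;  /* assumed: ISA royalty ~1/5 of core royalty */
        printf("revenue wiped out: %.0f%%\n", 100.0 * share * (1.0 - rate));  /* 40% */
        return 0;
    }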

If Qualcomm can shift from more expensive core licenses to cheaper instruction licenses, it would wipe out a huge portion of ARM's business. Worse, if those cores are better and become the de-facto flagship cores, it'd wipe out even more of ARM's business as companies like Samsung and Google might feel the need to buy Qualcomm chips (with better cores) rather than buying ARM's Cortex X cores for their flagship phones.

Likewise, Qualcomm is unlikely to stop at smartphones. They're already moving into laptops which will make it harder for any company using ARM-designed cores to get a foothold there. Qualcomm could move into servers in the near future and offer something better than the ARM Neoverse cores that are used by AWS's Graviton, Google's Axion, and Ampere.

So it's an enormous threat to ARM's business and ARM feels like it gave Nuvia a sweetheart deal because they were a small startup looking to enter a mostly new market rather than disrupting their current business - and they gave them a license with restrictions on it. Then Nuvia sold to Qualcomm who is using that IP to stop paying ARM - and ARM thinks that goes against the restrictions that Nuvia agreed to.


Ah? ARM doesn't build chips but provides the architecture and licenses it to other companies.

There are plenty of ARM chips designed by multiple companies and built by multiple foundries


ARM licenses cores for both CPU and GPU, not just the architecture.

There are not “plenty of ARM chips designed by multiple companies”; almost all of them, except for Apple's (and now Qualcomm's), use ARM's off-the-shelf designs.


https://www.arm.com/markets/computing-infrastructure/cloud-c...

> Annapurna Labs, Ampere Computing, NVIDIA, Intel, Marvell, Pensando Systems, and others use Arm Neoverse and Arm technologies to create cloud-optimized CPUs and DPUs.

https://en.m.wikipedia.org/wiki/ARM_architecture_family

> Companies that have designed cores that implement an ARM architecture include Apple, AppliedMicro (now: Ampere Computing), Broadcom, Cavium (now: Marvell), Digital Equipment Corporation, Intel, Nvidia, Qualcomm, Samsung Electronics, Fujitsu, and NUVIA Inc. (acquired by Qualcomm in 2021).


The first list is basically what I said: they use licensed core designs and don’t make their own.

The second list is out of date. Intel has completely pulled out of ARM earlier this year and most of the others do not actually design their own ARM cores anymore. It’s become a lot less common in the ARMv8+ era.


Also NPU


Now that the Nvidia deal has fallen through, SoftBank is trying to ramp up profitability while continuing to search for another buyer.


There's a missing word here (which otherwise makes the sentence nonsensical): "He’s also expanding into new areas, most notably computing, where Arm is making its own push."

I'm guessing cloud computing, but guess you could add any buzzword in...


Another odd word choice early in the piece:

> their so-called architectural license agreement

Definition 2 is when this tends to be used: https://www.merriam-webster.com/dictionary/so-called


On mobile devices efficiency is so important that I don't see how Qualcomm would be able to live without ARM licences. RISC-V and other architectures like x86-64 are nice; actually I think the peripheral libraries, boot code and stuff like that are the bigger headache for Qualcomm's clients to replace, given that they can just switch gcc to a different arch. Still, if your code is 25% less efficient, that'll be quite noticeable for the consumer - or in your battery and weight costs.

What am I not seeing here? I think they'll just settle.


You're assuming there is something inherent to Arm specifically that makes it efficient. I'm not sure about that: it just evolved naturally as it was used in portable devices predominantly. Same thing can be done with RISC-V-based designs, but obviously it will take a lot of time.


ARMv8 is not an evolution of previous designs, it was created by and for Apple's phone SoCs.

Whereas RISC-V was created by academics who mostly just claim it's the best because they invented it.


I suggest having a look into the actual pedigree of RISC-V, rather than making wild guesses.

It might have been born in academia, but from those drafts until the first ratified spec there's a good decade of high quality input from distinguished industry veterans.


What’s missing from most of these analyses is the perspective that Arm really doesn’t want Qualcomm to become a dominant - architecture license (ALA) based - vendor of Arm based SoCs. Bad for Arm and for Arm’s other customers and the ecosystem.

Whilst Qualcomm has a wide ranging ALA that’s always a possibility. This might just be an opportunistic move to remove that threat to Arm’s business model.


Bad for Arm – sure. But that’s because Arm themselves want to be the dominant vendor. Arm’s other customers lose either way.


So previously ARM mainly just licensed the ISA (for those who wanted to create their own cores) or licensed ready-made cores, and now they want to shift the paradigm to one where you mainly buy cores from ARM and get ARM to customise them for you? They want to move up the food chain?


> Arm’s other customers lose either way.

Sure, Arm has done things customers hate, e.g. the v8 to v9 pricing. But do you really think Mediatek, for example, wants to compete with Qualcomm selling Nuvia-based cores with a low ALA-based royalty?


I guess they'll have to give ARM all that money they got from Apple over their modem dispute.


Wasn't/isn't Arm for sale?

Is this just a ploy to strongarm Qualcomm into buying Arm?


I don't think that sale would be approved by any regulatory body.


What does all this have to do with Intel and AMD calling a truce?


AMD and Intel aren't 'calling a truce'. They've always worked together on projects where the industry would benefit from standardization and from having many experienced people put their 2c in. They still compete on products, just not on standards, which is a good thing.


Not much, I'd say. Intel has its own massive yield/fab issues to deal with, and AMD's GPU business is being eaten by NVIDIA while its CPU business never had much market share to begin with... it doesn't make sense for these two to fight each other, not when NVIDIA is knocking on both their doors.


Qualcomm is using these custom cores to launch into the PC market, directly competing with x86.


Paywalled.


I hope Qualcomm wins and wins its countersuits too.


[flagged]


We've banned this account for repeatedly breaking HN's guidelines and ignoring our request to stop.

If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.


Apple sheep?


Bloomberg disappoints by failing to mention RISC-V at all in the entire article.

They have to be doing this deliberately, as it's hard to explain otherwise.


If the comments in here are correct, RISC-V is really not an option at this time due to performance.


>due to performance

That would require pretending Ventana Veyron V2, Tenstorrent Ascalon/Alastor, SiFive P870, Akeana 5000-series and others do not exist or do not yet have any customers.

Pretending, because they actually exist, have customers, and are thus bound to show up in actual products anytime now.


I don’t feel like you addressed the performance issue.

I don’t think anyone said they don’t exist.


Could be the best thing that's ever happened for RISC-V!


Most people do not realize how slow RISC-V is right now. Yes, it will definitely get better, but it will take some time given how far behind it is.

Something like 30x slower than a top-of-the-line Apple Mx series CPU. Maybe there is a high-performing RISC-V chip out there, but I haven't yet run into one.

RISC-V benchmarks: https://browser.geekbench.com/search?q=RISC-V. Compare to an Apple M4 benchmark: https://browser.geekbench.com/v6/cpu/8224953

That said, RISC-V is good for embedded applications where raw performance isn't a factor. I think no other markets are accessible to RISC-V chips until their performance massively improves.


There is a chip out there that contains both an ARM and a RISC-V core, the RP2350. It's reasonable to assume that the ARM part and RISC-V part are manufactured in the same process. There are some benchmarks pitting the two against each other on e.g. this page: https://forums.raspberrypi.com/viewtopic.php?t=375268

For a generic logic workload like Fibo(24), the performance is essentially the same (quote from above page):

    Average Runtime = 0.020015 Pico 2
    Average Runtime = 0.019015 Pico 2 RiscV
Note that neither core on the RP2350 comes with advanced features like SIMD.
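
The post doesn't include the benchmark source, but Fibo(24) is conventionally the naive doubly recursive version; a minimal sketch of that assumed workload:

    /* Assumed shape of the Fibo(24) microbenchmark: naive recursion,
       exercising calls, branches and integer adds, with no SIMD. */
    #include <stdio.h>
    #include <time.h>

    static unsigned fib(unsigned n) {
        return n < 2 ? n : fib(n - 1) + fib(n - 2);
    }

    int main(void) {
        clock_t t0 = clock();
        unsigned r = fib(24);
        printf("fib(24) = %u in %.6f s\n", r,
               (double)(clock() - t0) / CLOCKS_PER_SEC);
        return 0;
    }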


It is true you can find slow ARM chips. But you cannot find fast RISC-V chips.


I wager that statement would be turned on its head if we restricted the comparison to chips of similar transistor density. Fast ARM chips do exist, as ARMv8 designs fabbed on TSMC 5nm with noncompliant SIMD implementations. If there were RISC-V chips in the same vein as Ampere or Nvidia's Grace CPU, I don't see any reason why they couldn't be more competitive than an ARM chip that's forced to adhere directly to the ISA spec.

RISC-V hedged its bet by creating different design specs for smaller edge applications and larger multicore configurations. Right now ARM is going through a rift where the last vestiges of ARMv6/7 support are holding out for the few remaining 32-bit customers. But all the progress is happening on the bloating ARMv9 spec that benefits nobody but Apple and Nvidia. For all of ARM's success in the embedded world, it would seem they're at an impasse between low-power and high-power solutions. RISC-V could do both better at the same time.


It's a Cortex-M33, a 32-bit microcontroller core with no virtual memory. Are we really comparing microcontrollers to modern aarch64 processors?


Yes? Because nobody has released a RISC-V MPU comparable to what you perceive as "modern" arm64 MPUs.

RISC-V is simply an ISA, not a core. The ISA affects some of the core architecture, but the rest is implementor-specific. High-end cores will take time to reach market. Companies with big guns like Qualcomm could most likely pump one out if they wanted to, and most likely will in the future, since they are pumping over $1 billion into the effort.


How you design a core is very different depending on whether you're targeting ultra-low-power tiny microcontroller designs or high-performance, high-power laptop/desktop-tier designs.

And it has not been proven (yet) that RISC-V is a good match for the second group.

Remember, it's sometimes very non-obvious which quirks of an ISA might be difficult until you actually try to implement it. One of the reasons ARM did a pretty much "clean sheet" rewrite in ARMv8 is that things like the condition codes turned out to be difficult to manage in wide superscalar designs with speculative execution, which is exactly the sort of thing required to meet "laptop-tier" performance requirements.

It may be they've avoided all those pitfalls, but we don't really know until it's been done.


We're not quite there yet. A bunch of mission-critical stuff like SIMD was only added in the last 2-3 years. As it takes 4-5 years to design and ship high-performance chips, we still have a ways to go.

Ventana Veyron looks interesting. Tenstorrent's upcoming 8-wide design should perform well.

Qualcomm pitched making a bunch of changes to RISC-V that would move it closer to ARM64 and make porting easier, so I think it's an understatement to say that they are considering the idea. If the ISA doesn't matter, why pay tons of money for the ISA?


There were two competing SIMD specs, and personally I'm glad that RVV won out over PSIMD. It presents an easier programmer's view and needs fewer instructions to implement.
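
To illustrate that "easier programmer's view": RVV code is vector-length-agnostic, so the same loop runs on any hardware vector width. A minimal SAXPY sketch using the standard RVV C intrinsics (assuming a toolchain that ships riscv_vector.h, e.g. a recent GCC or Clang):

    // Vector-length-agnostic y[i] += a * x[i] using RVV intrinsics.
    #include <stddef.h>
    #include <riscv_vector.h>

    void saxpy(size_t n, float a, const float *x, float *y) {
        while (n > 0) {
            size_t vl = __riscv_vsetvl_e32m8(n);            // elements this pass
            vfloat32m8_t vx = __riscv_vle32_v_f32m8(x, vl); // load x chunk
            vfloat32m8_t vy = __riscv_vle32_v_f32m8(y, vl); // load y chunk
            vy = __riscv_vfmacc_vf_f32m8(vy, a, vx, vl);    // y += a * x
            __riscv_vse32_v_f32m8(y, vy, vl);               // store y chunk
            n -= vl; x += vl; y += vl;
        }
    }

The same binary works whether the hardware vector register is 128 or 1024 bits wide, whereas a packed-SIMD ISA needs separate opcodes (and code paths) per register width.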


RVV was not going to lose this one. RVV's roots run deeper than RISC-V's.

RISC-V was created because UC Berkeley's vector processor needed a scalar ISA, and the incumbents were not suitable. Then, it uncovered pre-existing interest in open ISAs, as companies started showing up with the desire for a frozen spec.

Legend has it that MIPS quoted UC Berkeley some silly amount. We can thank them for RISC-V. Ironically, they ended up embracing RISC-V as well.


It is much more than just SIMD.

I think RISC-V chips in the wild do not do things like pipelining, out-of-order execution, register renaming, multiple int/float/logic units, speculation, branch predictors, or smart caching.

I think all existing RISC-V chips in the wild right now are just simplistic in-order processors.


You are wildly mistaken here.

Back in 2016, BOOMv1 (Berkeley Out-of-Order Machine) had pipelining, register renaming, 3-wide dispatch, a branch predictor, caches, etc. A quick google seems to indicate that it was started in 2011 and had taped out 11 times by 2016 (with actual production apparently done on IBM 45nm).

They are on BOOMv3 now.


Almost all in-order processors do pipelining, so that's there. Many are even multi-issue. Andes has an out-of-order core [1] and so does SiFive [2] (though I don't know of many actual chips using these).

[1] https://www.andestech.com/en/2024/01/05/andes-announces-gene... [2] https://www.sifive.com/cores/performance-p650-670


Could patents be one of the reasons why? Genuine question.


You're confusing the ISA with the chip. Current RISC-V chips are slower than high performance ARM ones, but that's because you don't start by designing high performance chips! You start with small embedded cores and work your way up.

Exactly the same thing happened with ARM. It started in embedded, then phones, and finally laptops and servers. ARM was never slow, they just hadn't worked up to highly complex high performance designs yet.


> you don't start by designing high performance chips! You start with small embedded cores and work your way up.

I disagree. For example, the first PowerPC was pretty fast and went into flagship products immediately. Itanium also went directly for the high-end market (it failed for unrelated reasons). RISC-V would be much better off if some beastly chips like Rivos's had been released early on.


The high end requires specifications that were not available until RVA22 and Vector 1.0 were ratified. The first chips implementing these are starting to show up, as seen in e.g. the MILK-V Jupiter, one of the newest development boards on the market.

With the ISA developed in the open, the base specs microcontrollers can target would naturally tend to be ratified first, and thus microcontrollers would show up first. RVA22+V were ratified in November 2021.

With the ISA developed inside working groups involving several parties, some slowness is unavoidable, as all parties need to agree on how to move forward. Hence the years-long gap between the ratification of the privileged and unprivileged specs (2019) and RVA22+V.

RVA23 has just been ratified. This spec is on par with x86-64 and ARMv9, feature-wise. Yet hardware using it will, in turn, take years to appear as well.


Isn't the problem the lack of advanced features for executing the current ISA with speed? I thought RISC-V chips seen in the wild do not do pipelining, out-of-order execution, register renaming, multiple int/float/logic units, speculation, branch predictors, multi-tier caching, etc. The lack of speed isn't really related to a few missing vector instructions.


There are a lot of cores that do all of that.

Most cores are pipelined; it is RISC after all.

There are quite a few superscalar cores; even a c906 is superscalar.

The c910/c920 is an OoO, renaming core, with speculation.

What they're lacking is area and power. A ROB with six entries is not going to compete with a ROB of six hundred entries.


The first PowerPC chip was introduced in 1992! Itanium was in 2001, wasn't a from-scratch design and was famously a disaster!

Not really comparable.


Is it slower by design or just because its implementations have not been aggressively optimized?


I believe it is just lacking aggressive optimization. ARM is basically RISC as well, so it isn't an architectural limitation.


(Naive question) Then is it really a big hurdle for a company that knows how to make ARM chips to try making RISC-V chips?


The question is more whether your customers will agree to go along with this major architectural shift that sets you back on price-performance-power curves by at least five years and moves you out of the mainstream of board support packages, drivers, and everything else software-wise for phones.

Also we should not pretend that ARM is just going to sit there waiting for RISC-V to catch up.


> The question is more whether your customers will agree to go along with this major architectural shift that sets you back on price-performance-power curves by at least five years and moves you out of the mainstream of board support packages, drivers, and everything else software-wise for phones.

Embedded is moving to RISC-V where they have low performance needs.

One example is the Espressif line of CPUs, which has shipped over 1B units. They have moved most of their offerings to RISC-V over the last few years, and they are very well supported by dev tools: https://www.espressif.com/en/products/socs


Yes, that caveat is clear. I am wondering about the question of performance raised in this thread.


It’s not a big hurdle so long as they hire Jim Keller to rapidly improve yet another architecture. (Only half-joking.)


Jim Keller is actually working on this right now at Tenstorrent.


Ascalon (claiming Zen5-tier performance) is done, and has been available for licensing for about a year now.

It was then made public that LG bought a license right away.


It’s easy to claim a lot of things.


>It’s easy to claim a lot of things.

It certainly is easy to casually spread fear and doubt.

But it is really far-fetched to think that the people at Tenstorrent, who have successfully delivered very high performance microarchitectures at other companies before, are lying about Ascalon, and that LG is helping them do it.

It would be even more far-fetched to claim that Ventana Veyron V2, SiFive P870, and the Akeana 5000-series, all of them available high-performance IP, are lying about performance.


I suspect not? I think the principles and methods of optimization are the same.

But I say this as a software guy who doesn't actually know CPU design.


Making the chip isn’t an issue for them. It’s the software compatibility post facto.


Well, you need several years to catch up, and those doing ARM are not standing still. It's the same problem big software rewrites have: some are successful, but it takes a large investment while everyone is still using the old stuff, which is better for now.


Maybe fine for Android, but this will set their Windows plans back another decade if it happens.

It has taken them that long to make ARM a thing on Windows, and that's building on people porting stuff to ARM for the Mac to finally get momentum.

Windows on RISC-V will take an eternity to become feasible.


Just something I, as a random person, have been thinking: how likely is it that the next version of Windows is _not_ going to be something Linux-based with WINE+Bochs preinstalled?

The Windows brand is now forever tied to x86/x64 Win32 legacy compatibility, while WSL has captured back a lot of webdevs from the Mac. Google continues to push Chrome, but Electron continues to grow side by side. Lots of stuff is happening with AI on Linux too, with both Windows and Mac remaining consumer deployment targets. Phone CPUs are fast enough to run some games on WINE+Bochs.

At this point, would it not make sense for MS to make its own ChromeOS and bolt on an "LSW"?


I think it’s almost certain that Microsoft will not be changing their kernel to Linux.

I think you’re overestimating what percentage of users use WSL. They’re an insignificant fraction of the user base.

And with games, I think you’re also overestimating how good translation layers like Proton are, and how rapidly Microsoft advances DX as well.


>Windows on RISC-V will take an eternity to become feasible.

Will it now?

Microsoft was already deeply involved in 2021, as per the technical talks at that year's RISC-V Summit. Ztso was pushed by them.


Whether Microsoft has Windows running on an architecture is a very different question from whether it’s feasible to use it as a daily driver. The ecosystem is what matters for most people.

Windows on ARM dates back to 2011. They’re only just now getting native ARM ports of several major packages. That’s ~13 years for a well-established architecture that’s used much more universally than RISC-V. They don’t even have ARM ports for lots of software that has ARM ports on macOS.

RISC-V will take an aeon longer to get a respectable amount of the Windows ecosystem ported over.


>The ecosystem is what matters for most people.

Absolutely agree.

The key development Microsoft has demonstrated recently is the ability to run x86 Windows software on non-x86 Windows systems.

Now that this is in place (and it will only get better), there is no longer a chicken-and-egg situation.

Instead, what we have is a clearly defined path to migrate away from x86.


ARM on Windows may date to 2011, but it was mostly a side project with 1-2 maintainers. With sufficient investment, it shouldn’t take 13 years to build up RISC-V support.


This.

It is evident to anybody paying attention that Microsoft has RISC-V support well underway.

But even if they had to start from scratch, it would be much easier, thanks to ARM having paved the way.


Like everything else, it doesn’t matter much. Windows ran on Itanium, Alpha, and, as pointed out, ARM for over a decade.

Without the ISVs, it’s a flop for consumers.

MS has had an abysmal time getting them to join in on ARM, only starting to have a little success now. Saying “Ha ha, just kidding, it’s RISC-V now” would be a disaster. That’s the kind of rug pull that helped kill Windows Mobile.

Emulators aren’t good enough. They’re a stopgap. Unless the new chip is so much better that it's faster running emulated code than the old one was running native, no one will accept it for long. Apple’s been there, but that’s not where MS sits today.

And if your emulator is too good, what stops ISVs from saying “you did it for us, we don’t have to care”? So once again they don’t have to do it at all and you have no native software.

MS can’t drop their ARM push unless they want to drop all non-x86 initiatives for a long time.


>And if your emulator is too good, what stops ISVs from saying “you did it for us, we don’t have to care”? So once again they don’t have to do it at all and you have no native software.

x86 emulation enables adoption.

Adoption means having a user base.

Having a user base means developers will consider making the platform a target.

>Saying “Ha ha, just kidding, it’s RISC-V now” would be a disaster.

Would it now? If anything, offering RISC-V support as well would further reinforce the idea that Windows is ISA-independent, and not tied to x86 anymore.


Switching CPU architecture is not about changing a compilation option; it's about eliminating decades-old assembly code, binaries, and third-party components, and re-engineering everything to be self-hosted on-prem at the company. Commercial software companies are reckless, stupidly lazy, and unbelievably inept, so lots of them won't be able to do this, especially for a second time.

In case this translation was needed at all: the point is that it is not a "-riscv" compilation option.


> If anything, offering RISC-V support as well would further reinforce the idea that Windows is ISA-independent, and not tied to x86 anymore.

Anymore? It’s been independent since the 90s. It’s only ISVs that have been an issue.

And a rug pull is a fantastic way to scare all the ISVs far far away.


You sure? Microsoft dropped Alpha, MIPS, and PowerPC by the time Windows 2000 rolled around. Beyond that point, only the Xbox 360 and Itanium versions had anything different from the usual x86/64 offering.


While there was only one popular choice, they’ve always kept it flexible. That was a core design decision.

But, as an example, Windows Phone 8 and later were based on the NT kernel. You already mentioned the 360.


Whose rug would even be pulled?


How do you imagine ARM having helped?


The x86 emulator for one.


Fair, though I don’t think translation is a good long-term strategy. You need native apps, otherwise you’re always dealing with a ~20-30% disadvantage.

The competition isn’t sitting still either, and QC already hit this with Intel stealing their thunder with Lunar Lake. Those chips are efficient enough that the difference in efficiency is far overshadowed by their compatibility story.

Ecosystem support will always go to the incumbent, and this would place RISC-V third behind x86 and ARM. macOS did this right by saying there’s only one true way forward. It forces adoption.


>You need native apps

For native apps, you need users. For users, you need emulation.

It cannot be overstated how important successful x86 emulation is for the migration to anything else to be feasible.


I think you just ignored the rest of my comment though which specifically addresses why I don’t think just relying on translation is an effective strategy. Users aren’t going to switch to a platform that has lower compatibility when the incumbent has almost as good efficiency and performance.


>when the incumbent has almost as good efficiency and performance.

The incumbent is just two companies, Intel and AMD, the only ones that can make x86 hardware.

The alternative is the rest of the industry.

Thus having a migration path should be plenty on its own.

Intel and AMD can both join by making RISC-V or ARM hardware themselves. My take is that they, too, will eventually come around. Or they'll just disappear from relevance.


The incumbent is not just x86 but now ARM as well.

You have to think in network effects. You mention “the rest of the industry” yet ignore that it's mostly ARM, which would make ARM the incumbent.

x86 is the king for Windows. But ARM has massive inroads in mobile, now desktop with macOS, and servers with Amazon/Nvidia etc.

There’s a lot better incentive for software developers to support ARM than RISC-V. It isn’t one or the other, but it is a question of resources.

Intel and AMD seem fine turning x86 around when threatened, as can be seen with Lunar Lake and Strix Point. Both have been good enough to steal QC’s thunder. You don't think ARM manufacturers will do the same to RISC-V?

TBH most of your arguments for RISC-V adoption seem to start from the position that it’s inevitable AND that competing platforms won’t also improve.


So which of Microsoft’s false starts would you take as them taking ARM seriously?

Why do you think they’d take RISC-V any more seriously than their previous attempts at ARM?

There are two fallacies to overcome here.


I think it's already a great thing for RISC-V. Even if things somehow go well for Qualcomm, do you really think they wouldn't prepare a plan B, given that ARM tried to push them out of the market?


I don’t think they have a plan B. Architectures take half a decade of work. Porting from ARM to RISC-V is not a matter of a backup plan; it’s a very costly pivot.


Qualcomm have a Plan B.

This time last year they were all over the RISC-V mailing lists, trying to convince everyone to drop the "C" extension from RVA23 because (as their employees basically confirmed) it was not easy to retrofit mildly variable-length RISC-V instructions (2 bytes and 4 bytes) onto the Aarch64 core they acquired from Nuvia.

At the same time, Qualcomm proposed a new RISC-V extension that was pretty much ARMv8-lite.

The proposed extension was actually not bad, and could very reasonably be adopted.

Dropping "C" overnight and thus making all existing Linux software incompatible is completely out of the question. RISC-V will eventually need a deprecation policy and procedure -- and the "C" extension could potentially be replaced by something else -- but you wouldn't find anyone who thinks the deprecated-but-supported period should be less than 10 years.

So they'd have to support both "C" and its replacement anyway.

Qualcomm tried to make a case that decoding two instruction widths is too hard to do in a very wide (e.g. 8-wide) instruction decoder. Everyone else working on designs in that space ... SiFive, Rivos, Ventana, Tenstorrent ... said "nah, it didn't cause us any problems". Qualcomm jumped on a "we're listening, tell us more" from Rivos as being support for dropping "C" ... and were very firmly corrected on that.


> Dropping "C" overnight and thus making all existing Linux software incompatible is completely out of the question.

For general purpose Linux, I agree. But if someone makes Android devices and maintains that for RISC-V… that's basically a closed, malleable ecosystem where you can just say "f it, set this compiler option everywhere".

But also, yes, another commenter pointed out C brings some power savings, which you'd presumably want on your Android device…


Qualcomm can do whatever they want with CPUs for Android. I don't care. They only have to convince Google.

But what they wanted to do was strip the "C" extension out of the RVA23 profile, which is (will be) used for Linux too, as a compatible successor to RVA22 and RVA20, both of which include the "C" extension.

If Qualcomm wants to sponsor a different, new, profile series ... RVQ23, say ... for Android then I don't have a problem with that. Or they can just go ahead and do it themselves, without RISC-V International involvement.


Dropping "C" overnight and thus making all existing Linux software incompatible is completely out of the question.

Android was never really Linux though.


This is officially too much quibbling. Even if we settled philosophical questions like "Is Android Linux?" and then "If not, would dropping C make RISC-V nonviable?", there isn't actually an Android version that'll do RISC-V anywhere near on the horizon. Support _reversed_ for it, got pulled 5 months ago.


>Support _reversed_ for it, got pulled 5 months ago

Cursory research will show that this was a technicality carrying no weight against Google's strong commitment to RISC-V Android support.


That's if you trust PR (I don't, and I worked on Android for 7 years until a year ago). This is a nitpick 5 levels down; regardless of how you weigh it, there is no Android RISC-V.


[flagged]


There is no Android RISC-V. There isn't an Android available to run on RISC-V chips. There is no code to run on RISC-V in the Android source tree; it was all recently and actively removed.[1]

Despite your personal feelings about their motivation, these sites were factually correct in relaying what happened to the code, and they went out of their way to say exactly what Google said, respecting Google's claim that they remain committed, with 0 qualms.

I find it extremely discomfiting that you are so focused on how the news makes you feel that you're casting aspersions on the people you heard the news from, and ignoring what I'm saying on a completely different matter, because you're keyword matching.

I'm even more discomfited that you're being this obstinate about the completely off-topic need for us all to respect Google's strong off-topic statement of support[2] over the fact that they removed all the code for it.

[1] "Since these patches remove RISC-V kernel support, RISC-V kernel build support, and RISC-V emulator support, any companies looking to compile a RISC-V build of Android right now would need to create and maintain their own fork of Linux with the requisite ACK and RISC-V patches."

[2] "Android will continue to support RISC-V. Due to the rapid rate of iteration, we are not ready to provide a single supported image for all vendors. This particular series of patches removes RISC-V support from the Android Generic Kernel Image (GKI)."



I don't know why people keep replying as if I'm saying Android isn't going to do RISC-V.

I especially don't understand offering code that predates the removal from the tree and hasn't been touched since. Or a mailing list where, clicking the second link, we see a Google employee saying on October 10th "there isn't an Android riscv64 ABI yet either, so it would be hard to have [verify Android runs properly on RISC-V] before an ABI :-)"

That's straight from the horse's mouth. There's no ABI for RISC-V. Unless you've discovered something truly novel that you left out, you're not compiling C that'll run on RISC-V if it makes any system calls.

I assume there's some psychology thing going on where my 110% correct claim that it doesn't run on RISC-V today is transmuted into "lol risc-v doesn't matter and Android has 0 plans".

I thoroughly believe Android will fully support RISC-V sooner rather than later.


[flagged]


Here are the actual commits that all the fuss was about: https://android-review.googlesource.com/c/kernel/build/+/306... and those at https://android-review.googlesource.com/q/topic:%22ack_riscv...

It's certainly more than just disabling a build type: it's actually removing a decent number of configuration options and even TODO comments. Then again, it's not removing anything particularly significant, and it even has a comment of "BTW, this has nothing to do with kernel build, but only related to CC rules. Do we still want to delete this?". Presumably it's easy to revert later, and it might even just be a revert itself.


Dropping the compressed instructions is also a performance / power issue. That matters to mobile.


What do you mean by plan B? From what you've just said, it sounds like their proposal was rejected, so there is no plan B now?


They can roll their sleeves up and do the small amount of work that they tried to persuade everyone else was not necessary. And I'm sure they will have done so.

It's not that hard to design a wide decoder that can decode mixed 2-byte and 4-byte instructions from a buffer of 32 or 64 bytes in a clock cycle. I've come up with the basic schema for it and written about it here and on Reddit a number of times. Yeah, it's a little harder than for pure fixed-width Arm64, but it is massively, massively easier than for amd64.

Not that anyone is going that wide at the moment. SiFive's P870 fetched 36 bytes/cycle from L1 icache, but decodes a maximum of 6 instructions from it. Ventana's Veyron v2 decodes 16 bytes per clock cycle into 4-8 instructions (average about 6 on random code).
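
In serial form the boundary-finding step is trivial; a sketch of my own for illustration (not the schema referred to above; in hardware, the loop-carried dependency below is what gets turned into a parallel prefix circuit):

    /* Walk a fetch buffer of 16-bit parcels and record instruction
       starts. Bits [1:0] == 0b11 in the first parcel of an instruction
       mean a 4-byte instruction; anything else is 2-byte compressed. */
    #include <stdint.h>
    #include <stddef.h>

    static size_t find_starts(const uint16_t *buf, size_t nparcels,
                              size_t *starts) {
        size_t n = 0;
        for (size_t i = 0; i < nparcels; ) {
            starts[n++] = i;
            i += ((buf[i] & 0x3) == 0x3) ? 2 : 1;  /* 2 parcels or 1 */
        }
        return n;
    }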


> Yeah, it's a little harder than for pure fixed-width Arm64, but it is massively massively easier than for amd64.

For those who haven't read the details of the RISC-V ISA: the first two bits of every instruction tell the decoder whether it's a 16-bit or a 32-bit instruction. It's always in that same fixed place, there's no need to look at any other bit in the instruction. Decoding the length of a x86-64 instruction is much more complicated.


Why do they use two bits for it? Do they plan to support other instruction lengths in the future?


So that there are 48k combinations available for 2-byte instructions and 1 billion for 4-byte (or longer) instructions: three of the four two-bit prefixes go to 2-byte instructions (3 × 2^14 = 49,152 encodings) and one goes to 4-byte instructions (2^30 ≈ 1.07 billion encodings). Using just 1 bit to choose would mean 32k 2-byte instructions and 2 billion 4-byte instructions.

Note that ARMv7 uses a similar scheme with two instruction lengths, but uses the first 4 bits of each 2-byte parcel to determine the instruction length. It's quite complex, but the end result is that 7/8 of 2-byte patterns (56k) are 2-byte instructions and the remaining 1/8 yield 512 million 4-byte instructions.

The IBM 360 in 1964 through Z-System today also uses a 2-bit scheme to choose the instruction length, with 00 meaning 2 bytes (16k instructions available), 01 or 10 meaning 4 bytes (2 billion available), and 11 meaning 6 bytes (64 tera available).


> Why do they use two bits for it?

To increase the number of 16-bit instructions. Of the four possible combinations of these two bits, one indicates a 32-bit or longer instruction, while the other three are used for 16-bit instructions.

> Do they plan to support other instruction lengths in the future?

They do. Of the eight possible combinations of the next three bits after these two, one indicates that the instruction is longer than 32 bits. But processors which do not know any instruction longer than 32 bits do not need to care about that; these longer instructions can be treated as if they were an unknown 32-bit instruction.
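
Put together, a decoder that only handles instructions up to 32 bits can classify a first parcel with two mask checks; a sketch of the rules described above (reserved longer encodings simply treated as unknown 32-bit instructions):

    #include <stdint.h>

    /* Length in bytes from the first 16-bit parcel:
       bits [1:0] != 11               -> 16-bit compressed instruction
       bits [1:0] == 11, [4:2] != 111 -> 32-bit instruction
       bits [4:2] == 111              -> reserved longer encoding     */
    static int insn_len_bytes(uint16_t parcel) {
        if ((parcel & 0x03) != 0x03) return 2;
        if ((parcel & 0x1c) != 0x1c) return 4;
        return 4; /* longer/reserved: treat as an unknown 32-bit insn */
    }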


Qualcomm has been working on RISC-V for a while, at outwardly-small scale. It's probably intended as a long-term alternative rather than a ready-to-go plan B. From a year ago: "The most exciting part for us at Qualcomm Technologies is the ability to start with an open instruction set. We have the internal capabilities to create our own cores — we have a best-in-class custom central processing unit (CPU) team, and with RISC-V, we can develop, customize and scale easily." -- https://www.qualcomm.com/news/onq/2023/09/what-is-risc-v-and..., more: https://duckduckgo.com/?q=qualcomm+risc-v&t=fpas&ia=web


Qualcomm pitched a Znew extension for RISC-V that basically removes compressed (16-bit) instructions and adds more ARM64-like stuff. It felt very much like trying to make an easier plan B for if/when they need/want to transition from ARM to RISC-V.

https://lists.riscv.org/g/tech-profiles/attachment/332/0/cod...


I suppose Google (+ Samsung) can bear that cost in the context of Android ARM -> Android RISC-V.


Qualcomm's been involved with RISC-V for several years now.

If anything, ARM is the plan B that they'll likely end up abandoning.


It’s a bit much to call the primary product they’ve shipped for decades a plan B. By definition it cannot be a plan B if it’s executed first and is successful.

I think a lot of RISC-V advocates are perhaps a little too overeager in their perception of the landscape.


Typo. Meant to write "ARM is the plan A that they'll likely end up abandoning".


No kidding; and while RISC-V is a massive improvement, I hate to be the wet blanket, but RISC-V will not change signed boot, bootloader restrictions, messy or closed-source drivers, carrier requirements, DRM implementations, or other painful day-to-day paper cuts. An open-source architecture != open-source software, and certainly != open-source hardware, no matter what the YouTubers think.


Until ARM or RISC-V standardize the bootloader, it’s always going to be a big deal for each ARM/RISC-V device added anywhere…


Relevant RISC-V specs were released years ago and implementations follow them.

I know of no boards that have application processors yet do not implement SBI. Furthermore, everybody seems to be using the opensbi implementation.

ARM and RISC-V are not the same.
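
To make "implement SBI" concrete: supervisor-mode software reaches the firmware through a fixed ecall convention (per the SBI spec: extension ID in a7, function ID in a6, arguments in a0-a5, results back in a0/a1). A minimal sketch of a call site, using the legacy console putchar extension (EID 0x01, which ignores the FID) as an example; this is illustrative, not taken from opensbi:

    /* Minimal SBI ecall wrapper, for RISC-V S-mode code such as an
       early kernel. Uses the GCC/Clang register-variable extension. */
    static long sbi_call(long eid, long fid, long arg0) {
        register long a0 __asm__("a0") = arg0;
        register long a6 __asm__("a6") = fid;
        register long a7 __asm__("a7") = eid;
        __asm__ volatile ("ecall"
                          : "+r"(a0)
                          : "r"(a6), "r"(a7)
                          : "memory");
        return a0;  /* SBI error code (legacy calls: return value) */
    }

    /* Usage: sbi_call(0x01, 0, 'A');  -- print 'A' via the firmware */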


It would be naive to think that Qualcomm is only starting its RISC-V effort today and from scratch.


Meanwhile RISC-V slowly but surely picks up momentum.


It’s getting extremely tiresome to see RISC-V comments in every thread about this. It’s unnecessary and irrelevant.


It's arguably the most glaring element of background context I can imagine; there's a reason people are mentioning it. Even though it's not ready to compete right now, in the medium-to-long term it's looking like a flat-out alternative to ARM. ARM wants their slice of the money now because this decade could easily end up being peak ARM times. Sell high.


Damn!

So what happens to the Raspberry Pi?

Edit: OK, following the discussion now. Nothing in the short term, potentially longer term.


The Raspberry Pi uses Broadcom, not Qualcomm chips. It also uses cores designed by Arm, which are not affected by today's news.


Yeah this is a total botch for me.

That is what I get for posting tired.


Broadcom (the makers of the chips used on the Pi) did have a bid to acquire Qualcomm back in 2017 [0], but the bid was withdrawn after Trump blocked the deal.

So nothing will happen to the Pi (Arm also has a minority stake in Raspberry Pi).

[0] https://investors.broadcom.com/news-releases/news-release-de...


Next week, Qualcomm will likely announce a 64-core RISC-V RVA23 design.

ARM really shouldn't pursue an aggressive posture with product lines outside the iOS or Win11 ecosystems. The leverage won't hold for a position already fractured off a legacy market. =3


You think they can just flip a RISC-V switch and keep all the performance instantly? I can't really understand the logic from some people here.


People also seem to forget that everything needs to be ported. If you're an Android manufacturer, you're not going to stop shipping phones while waiting for Android on RISC-V to catch up to ARM, or for RISC-V to get the speed and features of current ARM CPUs. You're going to buy ARM processors from another vendor.

The Windows RISC-V port is going to take even longer; I doubt that Microsoft has that working at anything beyond the research stage, if that.

Getting the RISC-V ecosystem up to par with ARM is going to take years.

If you want to spin this in RISC-V's favor, then yes, forcing a company like Qualcomm to switch would speed things up, but it might also give them a bit of a stranglehold on the platform, in the sense that they'd be the dominant vendor, with all of their own customisations.


In theory, the Raspberry Pi Foundation could easily move 3 million 1.8GHz RVA23 parts in one quarter... with 64 cores + DSP ops it wouldn't necessarily need a GPU initially. =3


The Raspberry Pi community would probably jump on a RISC-V board, but that doesn't help Qualcomm or its customers.


Manufacturers adapt quickly to most architectural changes.

If you are running in a POSIX environment, then porting a build is measured in days once you have a working bootstrap compiler. RISC-V already has the full gcc and OS stack available for deployment.

We also purchase several vendors' ARM products for deployment. Note, there was a time in history when purchasing even a few million in chips would open custom silicon options.

Given how glitched/proprietary/nondeterministic ARM ops are outside the core compatibility set, it is amazing ARM is as popular as the current market demonstrates.

Engineers solve problems, and the ARM corporation has made itself a problem. =3


"keep all the performance instantly"

It depends what you mean by performance (most vendors' ARM accelerated features are never used, for compatibility reasons), and upping the core count with a simpler architecture is a fair trade on wafer space.

i.e. if ARM is using anti-trust tactics to extort more revenue, that budget is 100% fungible with extra resources. Note, silicon is usually much cheaper than IP licenses.

One can ask politely for an explanation without being rude to people. Have a wonderful day =3


Funnily enough Qualcomm tried to persuade RISC-V to let them drop compressed instructions. Presumably because they're trying to crowbar a RISC-V decoder onto the Nuvia design and compressed instructions are breaking it somehow.


They should buy the Intel-alumni-founded RISC-V startup and pour resources into an RVA23-based chip with dual on-chip SDR ASIC sections (they have the IP).

i.e. create a single open-chip solution for mid-tier mobile communication platforms.

They won't do this due to their cellular chip line interests. However, even if it just ran a bare-bones Linux OS... it would open up entire markets. =3


With how poorly Intel is doing, not sure "Intel alumni" is a plus. Lmao.


Indeed, the Intel installed base has enough market inertia to last a business cycle or two against AMD.

Even with the recent silicon defects... people will tolerate the garbage because they want the NVIDIA+Intel performance.

Architecturally speaking, there were better options available... just never at the equivalent price-over-performance of consumer-grade hardware. =)


1. I am fairly sure games and other performance-sensitive apps use the Android NDK, which is not available for RISC-V.

2. I am fairly sure a competitive RISC-V CPU is not days or weeks but years away.


> 2. I am fairly sure a competitive RISC-V CPU is not days or weeks but years away.

And chasing a moving target fueled by the largest technology companies on the planet.


Tying products to Google's ecosystem is usually financially risky. Not a good long-term strategy for startups. =3


I think it is more of a "chicken and egg" ordering problem.

1. The RISC-V design standard fragmentation issue has been addressed.

2. A reasonable mobile-class SoC will be available for integration after any large production run of the chips.

If ARM forces #2 out of silliness, then it also accelerates #1 in the market.

In general, there are plenty of use-cases even if a chip is not cutting edge. =3


ARM is owned by an investment firm, SoftBank, which operates kind of like Goldman Sachs. ARM is becoming a chipmaker, just like Intel. But Intel's CHIPS Act grant is more similar to a pre-emptive, 2008-era bailout of the banks (TARP), for being "too big to fail" (because it makes defense chips). https://irrationalanalysis.substack.com/p/arms-chernobyl-mom...


ARM is not becoming a foundry. Almost no one wants to become a foundry because the margins are too low.

That is why foundries are subsidized to the tune of tens of billions of dollars by countries all over the world.


I understand ARM is not becoming a foundry (at least not anytime soon, which I will explain in a second). First, I said they are becoming a chipmaker like Intel because they have two of the three things needed to make chips: an architecture, and physical core IP (POP) https://www.arm.com/products/silicon-ip-physical/pop-ip. They are making chips at all three foundries but obviously don't own physical fabs. SoftBank has 46 trillion in assets, with over 57.8 billion in operating income. What's stopping SoftBank from making an offer to buy a majority stake in a Japanese foundry such as Rapidus (2nm), plus EUV equipment makers such as Lasertec? https://semiwiki.com/forum/index.php?threads/shared-pain-sha... Ultimately, one starts to question whether Qualcomm's interest in producing more consumer laptop chips is really competitive with the offerings from AMD and Intel, and whether this is really the best use of foundry space when foundries producing chips could one day be used against an amphibious assault on Formosa.

My point is that the commercial lawsuits between Qualcomm and ARM are just one part of a larger geopolitical issue. x86 lost the mobile market 20 years ago, and the only reason Intel is surviving is national security; they could have been bankrupt had they not been propped up by a pre-emptive Defense Production Act. Consumers benefit because now they have the choice between more ARM software and x86 products, but I think that is just a short-term benefit. Eventually an architecture cuts off support for old software, as with 32-bit x86: with X86S, they would support only 64-bit. So in the long term, it's better to have options. WINE was developed because of a fear of repeating the Irish Potato Famine, in economic terms (a monoculture: https://gitlab.winehq.org/wine/wine/-/wikis/Importance-of-Wi...). In other words, just because Intel might not want to sell 32-bit chips anymore doesn't mean others might not want or need to use some application that only exists on one platform (with all the engineers retired and the code lost or unported).

There's an AI bubble: https://www.marketwatch.com/story/the-ai-bubble-is-looking-w... When a number of companies get investments and start to produce chips that add little extra value (slightly faster chips at lower power, in 10 different architectures that all run Windows 11), there is less justification to keep investing in companies that do not produce interesting new hardware, because the end result is that they are being shaped by Windows 11 rather than by a unique feature. Automotive efficiency for in-car apps, sure, but laptops running Oryon that are hard to boot Linux on aren't any more interesting than an x86S processor that can only boot Linux 6.8, etc.



