Open source semiconductors is such a broad area, and sadly only a tiny speck of it is actually open source: the HDL code that describes the digital circuit.
Everything else is very experimental at best. These days the FOSS FPGA tools are finally getting some traction, with Yosys and Nextpnr. But AsicOne tried to make an ASIC with open source tools and faced endless troubles.
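For concreteness, the open FPGA flow is already usable end to end. Here's a minimal sketch of driving it from Python, assuming yosys, nextpnr-ice40 and the icestorm tools are installed; blinky.v and blinky.pcf are made-up file names:

    # Sketch: Verilog -> bitstream for a Lattice iCE40 using the open toolchain.
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Synthesize the Verilog to a netlist of iCE40 primitives.
    run(["yosys", "-p", "synth_ice40 -top blinky -json blinky.json", "blinky.v"])

    # 2. Place and route for a specific device/package, honoring the pin constraints.
    run(["nextpnr-ice40", "--hx8k", "--package", "ct256",
         "--json", "blinky.json", "--pcf", "blinky.pcf", "--asc", "blinky.asc"])

    # 3. Pack the routed design into a bitstream the FPGA can load.
    run(["icepack", "blinky.asc", "blinky.bin"])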
For ASIC there is basically QFlow, which is quite old but has been used successfully to tape out chips in the past, and there is OpenROAD, which is very new, experimental and ambitious. There are still major gaps in these tools, so in the end you inevitably have to sign an NDA and use proprietary tools and libraries.
And that's just talking about DIGITAL semiconductors where you compile HDL to pretty much generate the transistors from foundry cell libraries. So you have to sign an NDA to get the cell library, but you can at least release your code.
For analog chips, you can't do anything. An analog design highly depends on the parameters of the transistors you use, so before you even BEGIN designing, you have to sign an NDA to get the transistor models and you can NEVER open source an analog design.
The small dot of light at the end of the tunnel is projects like Minimal Fab, which make more accessible fabrication lines with open transistor models.
The crazy thing is that back in the day there were lambda rules, which were open rules anyone could use to design and model with. But with sub-micron devices these scalable rules no longer scale, so fabs started producing secret models for their specific process.
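To illustrate what made lambda rules so nice: every geometry was a multiple of a single scale parameter λ (half the minimum feature size), so retargeting a layout to a new process was essentially changing one number. A toy sketch in Python; the rule multiples are illustrative, loosely in the spirit of the old MOSIS SCMOS rules, not an actual rule deck:

    # Toy illustration of scalable lambda design rules: every dimension is a
    # multiple of lambda, so moving to a new process just means changing lambda.
    # The multiples below are illustrative only.
    LAMBDA_RULES = {
        "min_poly_width":      2,   # drawn gate length = 2 * lambda
        "min_metal1_width":    3,
        "min_diffusion_width": 3,
        "metal1_spacing":      3,
        "contact_size":        2,
    }

    def physical_rules(lambda_um):
        """Turn lambda multiples into physical dimensions in microns."""
        return {name: mult * lambda_um for name, mult in LAMBDA_RULES.items()}

    # The same layout database "scales" from a 3 um process (lambda = 1.5 um)
    # down to a 0.8 um process (lambda = 0.4 um) without redrawing anything.
    print(physical_rules(1.5))
    print(physical_rules(0.4))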
I'm hopeful that after FPGA, and digital ASIC, analog will be next to be revolutionized.
> And that's just talking about DIGITAL semiconductors where you compile HDL to pretty much generate the transistors from foundry cell libraries. So you have to sign an NDA to get the cell library, but you can at least release your code.
Yes you can release your code, but you can't release your netlist or GDS-II. There's no guarantee that somebody else will be able to take the same HDL and close timing, even with the same foundry libraries (say if they are using a different tool, or different options). You'll also need things like clock-gating cells, memories, IOs (at a minimum) and those are foundry specific, so those would need to be abstracted out in some way.
> For analog chips, you can't do anything. An analog design highly depends on the parameters of the transistors you use, so before you even BEGIN designing, you have to sign an NDA to get the transistor models and you can NEVER open source an analog design.
Now this is where I disagree. Sure, you can't open source your analog GDS-II, but maybe that's not the way to go. In my opinion, what you want to do is build a foundry-independent PDK for a generic 28nm, 40nm or whatever node using PTM models. A well designed analog circuit needs to be relatively independent of specifics, otherwise it's not going to work across all corners (this is more true for modern nodes than the kind of nodes the old textbooks talk about) and it'll be difficult to port to another process. So there's a good chance that analog circuits built for a 'generic 28nm' or 'generic 40nm' could be ported to any foundry's process (of course the PDK needs to be well designed). Yes, you won't be able to push things to the limit as the DRC rules will be wider, but analog rarely needs to go to the limit. You could probably take the same approach for digital, but that's a lot more open source stuff to build.
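To make the 'generic PDK' idea concrete: the PTM model cards are publicly downloadable SPICE decks, and you can at least explore corner behaviour by assembling your own skewed card sets and simulating with ngspice. A rough sketch; ptm_45nm_tt.lib and friends are placeholder file names for corner libraries you would put together yourself, and the deck assumes those libraries define nmos/pmos model cards:

    # Sketch: sweep an inverter's DC transfer curve across hand-built process
    # corners using public PTM model cards and ngspice (batch mode).
    import subprocess, textwrap

    DECK = textwrap.dedent("""
        * CMOS inverter on generic 45nm PTM models
        .include {corner_lib}
        Vdd vdd 0 1.0
        Vin in  0 0.5
        Mp  out in vdd vdd pmos w=180n l=45n
        Mn  out in 0   0   nmos w=90n  l=45n
        .dc Vin 0 1.0 0.01
        .control
          run
          wrdata {out_file} v(out)
        .endc
        .end
    """)

    for corner in ["tt", "ss", "ff"]:
        deck_file = f"inv_{corner}.sp"
        with open(deck_file, "w") as f:
            f.write(DECK.format(corner_lib=f"ptm_45nm_{corner}.lib",
                                out_file=f"inv_{corner}.csv"))
        subprocess.run(["ngspice", "-b", deck_file], check=True)  # batch run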
Check out OpenRAM and FreePDK45 for academic projects taking this approach. Unfortunately FreePDK45 is only available to those with an academic email (despite being called 'open source'), which makes me very sad.
Yeah, I think this is an interesting approach. But the FreePDK45 slides mention that it's not designed for manufacture, and there does not appear to be an easy upgrade path. So you could design a chip with FreePDK and release the design, but you'd then have to redo the whole thing in a vendor PDK.
I talked to someone who worked on AsicOne, and he said that even if you make your own PDK and draw your own transistors and everything, you'll still have to sign an NDA to do the sign-off and whatnot. I'm not intimately familiar with the whole process myself, but from what I understand it is basically impossible to have an open source analog design that you can actually manufacture. (Sure, you can make a theoretical toy thing, but if you can't manufacture it, who cares?)
I'm quite familiar with the process and I believe it's entirely possible.
You will need to run foundry DRC decks, but the company you're taping out through will do this for you (I presume that you're not big enough to deal with TSMC directly). This is because a design that fails DRC could actually break other people's chips if you're sharing a wafer.
Of course if you really want to know that it'll work, you need to also run foundry LVS and simulate corners with foundry SPICE models and foundry PEX. But if you're gutsy you could skip this, if you believe you've put enough margin into your PDK corners.
Certainly there is zero need to redraw your transistors. Transistors are transistors, a few layers (od, poly, contact, implant, ...); there's no magic, no secret sauce. The foundry wants a GDS with overlapping rectangles (of some minimum size), nothing more.
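That 'rectangles on layers' point is quite literal. As a sketch of what actually goes into the GDS-II, using the gdstk Python library; the layer numbers are invented, the real ones come from the target process's layer map:

    # Minimal sketch of what a GDS-II contains: named cells holding polygons
    # on numbered layers. Layer numbers are invented for illustration.
    import gdstk

    lib = gdstk.Library(name="demo", unit=1e-6, precision=1e-9)  # micron units
    cell = lib.new_cell("NMOS_SKETCH")

    # A crude transistor-ish stack: diffusion, a poly gate crossing it, contacts.
    cell.add(gdstk.rectangle((0.0, 0.0), (1.2, 0.6), layer=1))   # "diffusion"
    cell.add(gdstk.rectangle((0.5, -0.2), (0.7, 0.8), layer=2))  # "poly"
    cell.add(gdstk.rectangle((0.1, 0.2), (0.3, 0.4), layer=3))   # "contact"
    cell.add(gdstk.rectangle((0.9, 0.2), (1.1, 0.4), layer=3))   # "contact"

    lib.write_gds("nmos_sketch.gds")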
I think the bigger thing for Open Source Chips is going to be the expiration of a ton of the x86 patents next year. That is going to free up a whole host of people to take a look at trying to build a better open x86 processor. Maybe then we can finally get to having a truly open source server from chip through software.
What’s the point of open source for consumers, except for maybe education, when you can’t produce something from the source yourself nor be sure the product you buy is made from the source?
It’s very cost-prohibitive to run a batch through a semiconductor fab. You are not going to request one-offs unless you have a few million to spare.
It’s not like running make after cloning some sources.
Well, the barriers are pretty low today. Even fairly small companies like Axis (they make security cameras) or Ubiquiti (routers and access points) can work with CPU vendors to license their tech and get a custom CPU. With open source IP the barrier will be even lower, maybe even within well-funded Kickstarter range.
Maybe a killer home router with a 100% open hardware+software platform that can route 8 ports at 5 Gbit/port at line speed, which would be more trusted than random commodity hardware.
Or something to handle say 8 security cameras and use machine learning to handle all 8 streams for not just motion detection, but also identifying what person/object is in each stream.
I'd certainly pay a premium for smart devices in my home that I knew I could trust, that weren't spying on me, don't require cloud connections, use open APIs/standards, and wouldn't die the next time a company dies, gets bored, gets greedy, gets purchased, etc.
This is not accurate: Axis is NOT a small company by any standard. They have done ASICs since the 1980s, and a tape-out there now is in the $10M range or more I'm sure, plus about the same amount again in salaries to do all the design and testing.
It's fun to have access to HDL code but it's simply not possible to actually turn it into chips as an OSS effort.
FPGA-stuff is the way to go I think. The HW cost will be much higher but you can muck around with it as a lone engineer in your home with a turnaround time of minutes.
"Custom CPU" seems like a bit of a stretch. They have ASIC people on staff (eg https://rocketreach.co/ya-chau-yang-email_61973123), but based on the fact that you can flash basically all their devices with OpenWRT (including those disc-shaped APs), I think the "ASIC" is just a conventional ARM or MIPS core with some peripherals geared toward hardware acceleration of common network tasks— switch routing, vlan tagging, etc.
This page talks about the specific features available for hardware offloading:
Faster iteration, faster development, and more competition mean more innovation, cheaper prices, and more choice for consumers.
The primary beneficiary of open source has never literally been the consumer, because only an exceedingly small niche of people makes use of the source directly. The biggest beneficiary is the ecosystem as a whole.
For consumers, the great thing about open source has never been that they themselves can run make in a terminal. It's that other people can run make in a terminal and then produce products for consumers.
There are efforts such as https://libresilicon.com/ to design open source processes to manufacture silicon chips at the small scale. I am hopeful that these efforts will bring down one-off cost to the 10k USD level or less.
AIUI, you can just use an FPGA on a 10 nm-ish process and get roughly the same performance as hardwired logic on a micron-precision chip. Such a process is still useful for analog though, and I imagine that going even a bit smaller while staying very cheap (in the hundred-nm range, perhaps) would still yield better performance than a state-of-the-art FPGA, given the one or two orders of magnitude of overhead that's inherent in those.
Wouldn't it be x86-64 rather than x86 patents which are due to expire next year? It's weird that we've seen no interest so far in an open source reimplementation of x86 16-bit or 32-bit. You would think that it ought to garner quite a bit of interest among retrocomputing enthusiasts, if nothing else.
I'm sure there's tons of patents, but the p6 core came out in 1995 and most of the 32 bit stuff predates that and has all been patent free for quite a while. I guess if you're building your own processor, why make an x86?
Which specific patents? Isn't the ISA protected by copyright rather than patents? And implementing x86 is anyway not that attractive due to the complexity of the frontend. If you had to do something from scratch, why would you do x86 today?
> Isn't the ISA protected by copyright rather than patents?
Interfaces are uncopyrightable, according to everyone but the Court of Appeals for the Federal Circuit. The case where the latter said otherwise has been appealed to the Supreme Court specifically to overturn that ruling, and the Supreme Court will probably do so.
I'm aware of the Google case, but "ownership" of ISAs seems to be a settled thing. That's partly what ARM's business is (i.e., licensing the ISA to Apple, for example). Do you have a clarifying source on copyright vs. patents regarding ISAs? RISC-V would also not be so significant if it were just a matter of running out patent clocks. MIPS is quite old too and would have expired soonish by the same logic (MIPS's own open-source announcement notwithstanding). Intel has also been passive-aggressively threatening MS and others legally so that they don't emulate x86 on ARM hardware. I remember when MS showcased some ARM laptop, Intel Legal sent out some press briefing claiming how they'll protect their IP. QEMU gets a pass though, since it's non-commercial and there is nobody to sue. It would also be bad PR.
Edit: You may be right that it's only patents after all and not copyright. This was Intel's briefing:
"However, there have been reports that some companies may try to emulate Intel’s proprietary x86 ISA without Intel’s authorization. Emulation is not a new technology, and Transmeta was notably the last company to claim to have produced a compatible x86 processor using emulation (“code morphing”) techniques. Intel enforced patents relating to SIMD instruction set enhancements against Transmeta’s x86 implementation even though it used emulation. In any event, Transmeta was not commercially successful, and it exited the microprocessor business 10 years ago."
The AMD64 patents. The first Athlon 64 was released in 2003, so the architecture is still protected. Until those patents expire, the best you can do is a 32-bit P6/K7 equivalent.
> I think the bigger thing for Open Source Chips is going to be the expiration of a ton of the x86 patents next year.
AMD has been building x86 chips and the patents haven't stopped it. ARM has been building non-x86 chips and the patents haven't stopped it. I'm failing to see how the expiration of x86 patents will have any effect.
That’s because AMD licenses the x86 patents from Intel and Intel licenses the AMD64 patents from AMD. There are a few other niche x86 licensees (VIA, DM&P, etc.) that don’t make anything in the high-end market.
Is the real barrier to CPU production the open sourcing of the logic design, or is it the research into chip production and node design (including hundreds or potentially thousands of process steps), maximising yields, segmentation of the market and other marketing issues, defining standards, chipset design and integration, validation, the generally prohibitive cost of building fabs and actually producing the chips, and getting sufficient capital to make inroads into the marketplace, e.g. with OEMs?
The idea that open sourcing the logic is going to make other companies competitive with a financial behemoth like Intel is really a stretch.
If silicon/chip IP becomes free, and the only major costs to tape-out are an integration team and fab-related stuff, then China, not Silicon Valley, is the real winner.
> All the more reason to make it profitable for fabs to exist within the US.
I'm particularly interested in this space. Anyone more familiar with it care to comment? Is there a push for more US fab capacity? Any startups? Any political push to change laws to make it more favorable to start one?
I can't answer your questions but may I piggy-back?
What stops fab facilities from existing in "western" countries at the moment?
Someone told me a major reason for the move was that the chemicals used in top-tier facilities are basically banned in the west. Not technically banned, but considered so dangerous (they're all carcinogenic) that the costs (health and safety, insurance, compensation) make them uneconomic.
Is that right?
Does it matter whether fab facilities are near final manufacturing locations? Will Apple/Foxconn buy chips from a San Fran shop if they then have to be shipped to China for inclusion in the device? Since most devices are put together in SE Asia, the delay might be a killer...
Sorry, not much to back it up other than a lack of knowledge of any shared-wafer fabs in the US. From what I can tell, having a fab in the US is more a consequence of wanting to make one's own chips, not a 'chips as a commodity' (foundry) business:
The story stretches the rapid adoption of RISC-V to a "revolution." A non-proprietary ISA doesn't automatically lead to ML accelerators or signal processors or 10 billion transistor 5 nm ULSI devices.
RISC-V is great; a universal commodity ISA is welcome and will remove some impediments. But RISC-V isn't going to reduce the IC business to simple integration and fab. Specialized devices are already indispensable and will only become more critical as the general purpose CPU performance curve continues to flatten.
"Fab related" stuff is the biggest portion of silicon/chip IP in any given design - the
"software-like", front-end part is quite minor to begin with. Of course it is important to get it right, which is why improvements in high-level design and the like (including open source designs) can still be a big win.
Yeah, I work in photonics (PhD student in a lab that does optical microresonators). The problem is that photonic structures are massive (tens to hundreds of µm) and waste a lot of space. Photonics will never beat electronics in terms of density on the chip because you're limited by the wavelength of the light you're using. They are also quite lossy. Every time the waveguide turns, you have some radiation loss, for example. You can't contain an optical signal on a chip as well as you can contain an electrical signal.
I'm not a photonics expert, but I have done some development on a device using optical parametric amplification (OPA) for high performance bulk compute.
The project goal was to outperform ASICs on a very well-defined, highly regular problem, and achieve it without the capital cost of typical ASICs, so that specialised circuits could realistically be built and used for different problems. So, not a general purpose CPU, but something that can compute.
OPA is potentially an extremely high-bandwidth signal-processing technique, one that doesn't involve converting to electrical signals, going through transistors, and then converting back to optical the way some photonic designs do. It is more like the way optical communications amplifiers work, directly amplifying the modulated light that is passing through.
We are talking >1THz bit rates per logic element, and it's also quite an energy efficient process (despite limited OPA conversion efficiency, because you can recycle some of the light that hasn't converted), so it was worth exploring.
No insurmountable technical obstacles were found during the time of the project, but we ran out of time and money.
But it was surprising to find that, despite the superficial promise of photonics, it wasn't obviously a lot faster, or faster per Watt, than the best silicon electronics after all. This is because silicon transistors are pretty fast and efficient these days, and because you can fit a huge number of them in an area much smaller than the wavelength of visible light. You can confine light too, and there has been some published progress at nanoscale OPA elements, but it's a much more complicated structure and process (plasmons etc) than OPA in bulk materials, and nanoscale OPA may be just as difficult to manufacture as nanoscale transistors. Also, quantum: Just due to light being quantized as photons, there comes a point where to carry enough information at high data rates, the power density needed is an issue.
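To put a rough number on that last point (my own arithmetic, not anything from the project): at telecom wavelengths each photon carries about 0.8 eV, so even a modest photon budget per bit adds up quickly at THz rates. Assuming 1550 nm light and, arbitrarily, 1000 photons per bit for a workable signal-to-noise ratio:

    # Back-of-envelope for the power floor set by light being quantized.
    h = 6.626e-34          # Planck constant, J*s
    c = 2.998e8            # speed of light, m/s
    wavelength = 1550e-9   # m (telecom band)
    photon_energy = h * c / wavelength      # ~1.3e-19 J, i.e. ~0.8 eV

    bit_rate = 1e12        # 1 Tb/s per logic element, as claimed above
    photons_per_bit = 1000 # assumed SNR budget (illustrative)

    power = bit_rate * photons_per_bit * photon_energy
    print(f"{power * 1e3:.2f} mW per logic element")  # ~0.13 mW
    # Multiply by millions of logic elements and the power density issue is clear.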
Sorry, I didn't answer your question :-)
My guess: Actually making a photonic CPU is economically and motivationally constrained rather than science constrained at this point, even though there's plenty of R&D still needed to do it. The motivation isn't that strong because the benefits aren't that obvious, and I think if they were obvious, the big commercial labs would have shipped a working prototype already.
There's lots of optical data transport, even plausible research for on-chip communication, but nobody really has a plausibly commercial optical transistor except at a very conceptual stage.
Though optical parts are still 'big' and nobody seems to know how to make small ICs - 20 years ago I worked in networking and they were trying to make optical switches with mirrors!
At the long haul level, they use a transport layer tech called SONET. Rather than demuxing the optical signal into bits, then back into optical and out another pipe, they wanted to switch using fancy mirrors. Frankly, I'm not sure of the advantage. Maybe some performance? Security? I'm doubtful though.
This trend (if it's actually happening) is a symptom of the death of Moore's law. When Moore's law was operating in full swing, it would rarely make sense to build your own chip using open source. In the time that you could design and build your chip, general processors would have doubled in speed and you'd be better off waiting for the general processor.
Here are a few predictions, some of which have already occurred, that may result due to the death of Moore's law:
Already Happening:
- More custom chips (squeezing the last bit of performance)
- More reliance on the cloud, to push off processing power where there are more economies of scale
- The rise of traditionally "second tier" processor manufacturers (e.g., AMD, ARM) to be head-to-head with traditional leaders (e.g., Intel).
- A greater amount of chip manufacturers' R&D dollars spent for each dollar of revenue.
Starting to Happen:
- China and developing countries catching up with chip technology (when the leaders are no longer growing exponentially, it's easier to catch up)
- Governments imposing their will on chip developers (when there is less competition over performance, other factors like trust and national origin will start mattering)
- Trade secret theft, i.e., in the past if you stole Intel's designs, you would get one good chip, but that theft would be obsolete in 18 months. Now, it gives you a much longer advantage.
- A societal shift from utility to branding, i.e., as all goods start becoming equal, branding becomes the main differentiator.
- Living standards in developing countries catching up to Western and U.S. standards.
Further Afield:
- No significant technological improvements for decades.
- Stagnant per-worker/capita productivity.
- Economic growth becoming far more tied to population than individual productivity.
- Economies/governments fighting to increase their population (e.g., through legal immigration or by force).
- Governments' power and control becoming based more on population than on ideals or innovation (e.g., China).
- More monopolies ... in a dynamic and innovative society, a small smart company can defeat a larger slower one. In a stagnant society, that won't work and the competitive advantage can only be obtained by consolidation and economies of scale.
- Social unrest ... in an exponentially growing society, there is room for every generation to become wealthier than their parents, but in a flat society, on average half will become richer and half will become poorer.
I would argue there is more to an economy than processor speeds or the price of transistors, e.g. hardware security, software security, data integration, expansion of communication capabilities, not to mention other industries (biotech, transportation, energy), etc.
Although I must admit that I do not know how much the increase in modern hardware capabilities contributes to the productivity of the economy.
I disagree with your statement regarding significant future discoveries, as that is something that is very hard to predict.
If CPUs and computer hardware become commodities, it would certainly be a new era of tinkering that is upon us. Perhaps more pressing problems than the computational power of CPUs and transistor cost will then be attacked.
> there is more to an economy than processor speeds ... I frankly do not know how much the increase in modern hardware capabilities contributes to the productivity of the economy.
In industrial times, you would be right. However since the "information revolution", a huge amount of productivity gains can be directly attributable to transistors. Sure it started with improving simple calculators, but then came spreadsheets, industrial CAD models, instantaneous global communication, remote teams, global branding and a whole lot more.
In fact, it's pretty fair to say that nearly every standard-of-living improvement in western countries since roughly 1980 can be attributed to the transistor.
> In fact, it's pretty fair to say that nearly every standard-of-living improvement in western countries since roughly 1980 can be attributed to the transistor.
Very interesting statement.
Out of curiosity, is there any way you could back that up? E.g. chemical and biological research was not very digital for a long time after the 1980s. Of course, digitization was transformative at the end of the 90s and beginning of the 2000s, and it seems to me that we are only now in that exponential growth of exploiting IC technology.
I quickly checked your impressive background, and I guess you have seen a lot of the IC sector in terms of innovation and technology. I believe that you are making a sound call here.
In the end, everything is intertwined. Advances in materials science feed back into IC design and production, and vice versa.
Indeed, performance has become less important since it has reached some kind of plateau for the moment. So things that were secondary in the past, like stability, security and customizability, become more important; that's where open source chips come in. When buying a new computer or smartphone, there are now plenty of startups selling decent devices that are effectively on par with brand laptops. When Linux is of interest, it's just a matter of time until it becomes available for the particular configuration.
Honestly, I therefore think the reverse development will continue further. Big brands will have an even harder time convincing people to buy their devices. Of course the economy might become more stagnant; on the other hand, people might have to work less and be able to invest more time into other ideas - even if those aren't that profitable.
Is it worth adding that specific applications (high-frequency trading, crypto mining, on-chip support for things like 5G) and the increased focus on special factors they bring (like latency and energy efficiency) mean general-purpose processing units are less and less attractive? I guess that is the other side of the coin of the death of Moore's law...