How to Design a New Chip on a Budget (ieee.org)
145 points by kensai on Feb 6, 2018 | 56 comments



I believe the major disruption waiting to happen in chip design is open source tools and hardware description languages.

Frankly, the closed source tools are pretty awful from a user interface perspective. Under the hood, amazing things happen, but the tools are miserable to use.

And the languages... ugh, the languages. Our big leap forward was SystemVerilog. It's a bastardization of three languages: 80's C++, 80's Verilog, and an interesting constraint solver in a distinct dialect.

We need (in my humble opinion):

* Open source high performance discrete event simulator (language agnostic; use an intermediate representation like a source-mapped netlist; see the sketch after this list)

* Said simulator, but distributed (multi-core, multi-process, data center)

* An ecosystem that encourages language development (Said simulator could help)

* An open source synthesis framework that can read the intermediate representation netlist.

* GUIs that don't cause eye hemorrhaging and monitor punching.
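
To make the "source-mapped netlist" idea concrete, here is a minimal sketch (Scala, with every name invented for illustration) of what such an intermediate representation could look like: nets and cells as plain data, with each cell keeping a pointer back to the source that produced it.

    // Sketch of a tiny source-mapped netlist IR (all names invented here):
    // cells reference nets by id, and each cell points back to its source line.
    final case class SrcLoc(file: String, line: Int)
    final case class Net(id: Int, width: Int)
    final case class Cell(kind: String,       // e.g. "AND2", "DFF"
                          inputs: Seq[Int],   // net ids
                          outputs: Seq[Int],  // net ids
                          src: SrcLoc)        // back-reference for debugging/waveforms
    final case class Netlist(nets: Seq[Net], cells: Seq[Cell])

    object NetlistExample {
      // a single AND gate driving net 2 from nets 0 and 1
      val tiny = Netlist(
        nets  = Seq(Net(0, 1), Net(1, 1), Net(2, 1)),
        cells = Seq(Cell("AND2", Seq(0, 1), Seq(2), SrcLoc("top.sv", 42)))
      )
    }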


I worked in EDA for several years, late 80's/early 90's. It's tough. Yes, the current tools are miserable. But the market isn't huge, so there are only so many development dollars to go around. And the interfaces between tools are information-lossy kludges held together with duct tape and string.

Open source is hard, because without access to the data, it is hard to build a good tool like a timing analyzer. Heck, even when you do have access to the data and can talk to the physical chemist who designed the process because his desk is 5 rows over, it is still hard to get it better than "close enough".

I've thought many times about doing an open source event-driven logic simulator. At one point, I was a top guru of that technology. The thing is, I doubt if a sufficient number of people would care, and it is basically worthless without all the tools that feed it and drive it.
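
For what it's worth, the kernel of an event-driven simulator is conceptually small: a time-ordered queue of events, where running an event may schedule further events. Here is a toy sketch (Scala, all names invented); the real difficulty is making it fast, handling delta cycles properly, and building all the tooling around it.

    import scala.collection.mutable

    // Toy event-driven simulation kernel: a time-ordered queue of
    // (time, action) events; running an event may schedule more events.
    object ToySim {
      final case class Event(time: Long, action: () => Unit)
      private val queue =
        mutable.PriorityQueue.empty[Event](Ordering.by[Event, Long](e => -e.time)) // earliest first
      private var now = 0L

      def schedule(delay: Long)(action: => Unit): Unit =
        queue.enqueue(Event(now + delay, () => action))

      def run(until: Long): Unit =
        while (queue.nonEmpty && queue.head.time <= until) {
          val e = queue.dequeue()
          now = e.time
          e.action()
        }

      def main(args: Array[String]): Unit = {
        var clk = false
        // a free-running "clock" that toggles and re-schedules itself every 5 ticks
        def toggle(): Unit = { clk = !clk; println(s"t=$now clk=$clk"); schedule(5)(toggle()) }
        schedule(0)(toggle())
        run(until = 30)
      }
    }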


I would be happy if all my bullet points applied only to cycle accurate sims.

I think the market would create itself with the right ingredients present. I think a high-level language, a well-defined intermediate representation (with an eye toward synthesis but initially just targeting simulation), and a cycle-accurate, high-performance simulator for said intermediate language would be explosive.

(bonus points for writing the sim in WASM, heh)


WASM? Well, if I'm going to tilt at windmills, I'll use the project to learn Rust while I'm at it.

Explosive? I remain unconvinced. There just aren't enough people doing that kind of work.

I am excited to see progress in open source FPGA fitters. I've always felt those would be hard to do because without access to the performance model, it is hard to do a good job of auto-placement. So that is cool, even though the current open source tool (I forget the name...) only does a few FPGAs.


If an open source tool exists, people will end up playing with it. That's categorically not the case with closed source tools - as social creatures we want to share what we're doing, so if we can't share the fact that we stumbled on a cool closed-source tool leaked on the internet, what's the point of playing with it?

So there's that.

I would very much like to hear about your progress on this, FWIW.

Also, I'm not sure if these are relevant, but here are a couple of things I was reminded of, in case they're directly or indirectly useful (for ideas or parts, or because some of the engineers may be interesting to talk to..?):

- MARSSx86 - a cycle-accurate x86 emulator built on top of QEMU: http://www.marss86.org/~marss86/index.php/Home

- Cling - a C++ interpreter built on top of LLVM's JIT: https://root.cern.ch/cling

There are several cycle-accurate emulators out there; MARSSx86 is the first I discovered. I'm not sure if it's useful.

I'm not entirely sure why I'm mentioning Cling. It used to be based on a custom runtime (and called CINT) that was absolutely massive and was basically its own C++ implementation. Cling is effectively a very small patch/driver on top of LLVM's C++ implementation and its JIT runtime.


Why do you think a high-level language would be a positive thing? The ones that are currently available generate worse designs than hand-written Verilog.

Verilog is still used for a very good reason; it is not because the EDA industry is stuck in time.


Email me (email in profile) please, would love to connect.



I don't see how this will ever work. The needs of the defense sector are just fundamentally at odds with the needs of the commercial sector. On the hardware side you get things like Intel and AMD trying to hide "chips within chips", and on the software side you get things like foreign governments demanding to inspect proprietary software before they'll allow it to be imported.

The past and present are mired in "national interests"; but the real obstacle seems to be the idea that you can prevent other people from having what you have (an idea, a weapon, a key, etc.).

EDIT: I was responding to this quote from your link: "The goal of the ERI is to more constructively enmesh the technology needs and capabilities of the defense enterprise with the commercial and manufacturing realities of the electronics industry." I didn't mean to imply that what your company is working on won't work. Unless, of course, that is what your company is working on.


I especially like your focus on physical design, as it (along with verification) is usually the bottleneck in modern cutting edge node chip development. Any plans to update the public about progress?


Having spent about a decade in a chip design software startup that was eventually bought by Cadence, I'll say:

- you're absolutely right that the tools are horrible to use, and Verilog's time is up

- existing chip design companies are extremely conservative and hate adopting anything new even from their own preferred vendor, because risk

- a lot of the key knowledge is spread by apprenticeship and oral culture; analog IC engineers are another culture altogether


I spent over a decade doing circuit design and I can corroborate your points. Chip design is maddening! One of the most interesting differences between software and hardware engineering is how much writing about software you can find on the web. You can almost self-teach a master's in CS through blogs and various online postings! The only writing you find online about hardware is either patents or IEEE white papers.

There is still something magical when a new chip design comes back from the fab and boots up!


On the languages front, there is Chisel, which looks promising. I'm not a hardware guy, but Chisel looks a lot more reasonable to work with than VHDL/Verilog, and it's good enough for doing real work on RISC-V. As someone who has thought about dabbling in this space, I'm curious about the thoughts of anyone who has worked with it: https://chisel.eecs.berkeley.edu/

For the rest; yeah, it looks like it's pretty eye-bleeding and hacked together from decades worth of odd little one-off tools: http://opencircuitdesign.com/qflow/, https://opencores.org/howto/eda, https://chitlesh.fedorapeople.org/papers/FEL12ReleaseNotes.p...


I would stick with VHDL and SystemC. Any tools that can actually synthesize your designs are going to handle VHDL and Verilog.

I do prefer VHDL to Verilog. My main reason is that Verilog does not make a distinction between variables and other objects; it treats almost everything as if it were the same kind of assignment type. Furthermore, variables in Verilog are shared by default. VHDL shows how it should be done: there is a clear distinction between signals and variables, and variables are local by default.
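
As a toy illustration of that distinction (Scala, invented names, not full VHDL semantics): a variable updates immediately, while a signal assignment is deferred and only becomes visible when the current delta cycle ends.

    import scala.collection.mutable

    // "Variables" update immediately; "signal" assignments are queued and only
    // become visible when the delta cycle ends (roughly VHDL's `q <= x` vs `v := x`).
    object DeltaDemo {
      val signals = mutable.Map("q" -> 0)
      val pending = mutable.Map.empty[String, Int]

      def signalAssign(name: String, value: Int): Unit = pending(name) = value
      def endDeltaCycle(): Unit = { signals ++= pending; pending.clear() }

      def main(args: Array[String]): Unit = {
        var v = 0                             // variable: immediate
        v += 1
        signalAssign("q", signals("q") + 1)   // signal: deferred
        println(s"before delta: v=$v, q=${signals("q")}")  // v=1, q=0
        endDeltaCycle()
        println(s"after delta:  v=$v, q=${signals("q")}")  // v=1, q=1
      }
    }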

My opinion, of course. VHDL can be a bit more wordy, but wordiness is the least of my worries when designing something. I would rather have clear semantics, for me and other people, about what the heck is going on.

However, Verilog is just as capable.


Chisel compiles to Verilog, so you can use that to bridge to other tools for actually synthesizing your designs.

But it brings with it the full flexibility of a general-purpose programming language, which makes it a lot easier to build parameterizable designs that can be more easily reused.
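
For a flavor of that, here is a minimal sketch of a width-parameterized Chisel 3 module (the module itself is made up for illustration): an ordinary Scala parameter drives hardware generation, and the result is emitted as Verilog for downstream tools.

    import chisel3._

    // `width` is an ordinary Scala value; the elaborated hardware (and the
    // Verilog that Chisel emits) changes accordingly.
    class Accumulator(width: Int) extends Module {
      val io = IO(new Bundle {
        val in  = Input(UInt(width.W))
        val out = Output(UInt(width.W))
      })
      val acc = RegInit(0.U(width.W))
      acc := acc + io.in
      io.out := acc
    }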


SystemVerilog is much more capable than either Verilog or VHDL, and has brought in many of the VHDL safety features that make designs safer by construction, plus more, without the wordiness. Yes, you can still write bad code, but any decent hardware flow or synthesis step will catch basic things like mis-sharing of variables. While I agree that's a language weakness, it's not an issue in practical hardware design. In addition to the design enhancements, the fact that it has pretty powerful verification features makes it my language of choice today, at least among the low-level HDLs.


Have you ever tried Bluespec SystemVerilog? I've never done hardware design, but from what I've read, it seems like an improvement over VHDL and Verilog. I'm curious to hear the opinion of someone who's actually done hardware design.


It's better than Verilog. It extends the type system and makes user-defined types easier to use. This helps a lot with a big design with a lot of modules, acting as a layer of abstraction. The type system is stronger than Verilog's to some degree, but it is still weakly typed to retain backward compatibility; VHDL still has much stronger type checking.

It also supports direct programming interfaces (DPI). For instance, say the chip you're designing needs to run software: your testbench can call into that C code to test some of the software at the same time. You can also use it to speed up simulations by modeling a module's inputs and outputs in C. However, I prefer SystemC for this. It's great for behavior-level simulation, runs quickly, and is much easier to interface with any software you want to test alongside the design.

So I prefer SystemC for behavioral models and VHDL for gate-level implementation. However, you can't really ignore Verilog, although that depends on where you are working and who you are working with. SystemVerilog is kind of in between SystemC and VHDL.

https://preview.ibb.co/hRz8qc/Screen_Shot_2018_02_07_at_9_21...


I'm not sure open source EDA will happen and be as competitive as open source software has been in general. Many of the tools are extremely complex. Take your example of just a simulator and SystemVerilog: the SV spec is more than 10 years old, yet the best open source can do today is compatibility with a small subset of it, orders of magnitude slower than a commercial simulator.

The reason? The skill set needed to write one of these requires not only very solid programming skills and knowledge of advanced algorithms for performance, but also domain knowledge to understand how the tool should work from both the user side and the implementation side. This last point is why most open source EDA today is written by hardware engineers who know some programming rather than by experienced software developers. Of course you will get cases where a single engineer knows both areas well and produces some nice/interesting tools like vloghammer, Icarus, Verilator, etc. However, these folks are rare, and for open source EDA to be truly competitive it needs many contributors with this skill set. That's why a language as big as SystemVerilog has not once been fully implemented in open source.

So it's hard to see the other, arguably harder, challenges like place and route, STA (on smaller nodes), analog simulation, parasitic extraction, etc. being taken on in open source EDA for anything but older technologies, or with very limited capabilities. Open source IP, on the other hand, could take off.


I read all that as an indictment of SystemVerilog rather than of simulators.

I agree with the assessment of hw engineers doing software.


So why, in your opinion, has it not happened for SV, or SPICE, or place and route, despite references for each of these being available, in some cases for decades? There are free implementations of each, but they don't come close to a commercial offering.


Heh, #1 and #2 we are developing internally. Bonus: it supports analog/mixed-signal as well through the chosen abstraction (SPICE-level black box), and models leakage, power consumption, and thermals. Email me (address in profile) if you want to connect.

For a public example, our simulator is very similar to Level 2 of the PARADISE flow being developed at LBNL.


http://efabless.com does 180 nm open source design in the browser, and you get a packaged chip for 5K USD or less.

X-FAB is on the back end.


The article mentions "Magic" as an open source design tool.

I was hired by UC Davis as a summer intern to work on porting it from X10 to X11 in 1988.

Very cool to see it still around and being used after 30 years!


I remember using it in my layout class in 2004 or so. I didn't realize it was open source; I'm sort of tempted to find it and give it a spin.



I have thought about this. I have looked at several multi-project wafer services for my own projects, mainly for some interesting analog designs mixed with digital.

For instance, http://cmp.imag.fr/ has a 350 nm process for only 650 euros per sq. mm. However, that excludes packaging costs, which are quite steep for such a low number of devices.


A minimum of 5.5 mm^2 is required, so we are talking about 3575 euros here. Besides, it is very rare to get a production-grade chip on the first try; bugs happen all the time, and you might need 2 or 3 or even more tries to get a working product.


You might, but there are plenty of examples of first silicon working; the trick is to spend a lot of time on getting your simulations right.


How easy is it to take code written for an FPGA (e.g., Verilog) and transfer it to an ASIC? Are they so different that it would need to be rewritten from scratch, or could it be done with the equivalent of a recompile? Do people often use FPGAs as a stepping stone to a custom ASIC design? Would learning how to program FPGAs be of any use when it comes to designing ASICs?

I'm interested in getting into FPGAs, but I can't help but worry they are a bit of a dead end as any sufficiently popular use case will eventually be replaced by an ASIC.


FPGAs are definitely not a dead end. By virtue of being reconfigurable, they will never be obsolete as long as ASICs are a thing. Now, some whole new technology will come along eventually, supplanting present day ASICs and FPGAs... but until then...

Program as a term means something different with chip design than it does with software. An analogy is that to program an FPGA is to paint a canvas. The source code in chip design is instructions for how the canvas should be painted.

Another analogy would be to program an FPGA is to cook a meal. The source code is the recipe for the meal. But one doesn't run a recipe on a meal.

These analogies break down because a painting and a meal are passive... they don't do anything by themselves, or react to the outside world.

So another analogy would be building a car. Here "programming" and "building" are the analogous terms. The instructions for the assembly line to construct the car is the source code. Once built, the car responds to stimulus (steering wheel, pedals) and does stuff. Same with the FPGA. It has inputs, it responds and does stuff. If you painted a picture of a CPU in your FPGA, it could run software.

There is tremendous overlap in designing for an FPGA and an ASIC. Most ASICs start life as an FPGA simply to prototype an idea.

The difference between an ASIC and an FPGA, at a high level from a design perspective, is the difference between writing with a pen vs a pencil. Learning to write is equally applicable.

It's probably not helpful to think about right now, but an FPGA is actually an ASIC.


The LUT-based architecture is starting to run out of steam. I think a CGRA (coarse-grained reconfigurable array) sort of architecture is the future, but programmable logic startups will likely fail, and there's approximately a zero percent chance that Xilinx or Altera would try anything that new.


Problem is, you still generally need simple logic to combine coarse-grained blocks. Also, we already have a lot of FPGAs that include adders, RAM, DSP cores, and other coarse-grained devices.

Honestly, a LUT can be a pretty efficient structure for what it does. The biggest advantage of coarse-grained structures is that they are much faster, since the internal construction can use optimal routing.

The biggest issue with FPGAs is the programmable routing/connections. Ideally the LUTs would form a complete graph. However, the number of wires then grows as n(n-1)/2, where n is the number of LUTs, so instead the structure is more hierarchical. Still, the majority of the silicon on an FPGA is used just for routing.
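
A back-of-the-envelope illustration of that quadratic growth (Scala):

    // Wires needed to connect every LUT directly to every other LUT
    // (a complete graph on n LUTs).
    object CrossbarGrowth {
      def fullCrossbarWires(n: Long): Long = n * (n - 1) / 2

      def main(args: Array[String]): Unit =
        Seq(64L, 1024L, 65536L).foreach { n =>
          println(f"$n%6d LUTs -> ${fullCrossbarWires(n)}%,d wires")
        }
    }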

However, I think an array of ALUs actually could be quite useful for some applications over an FPGA.


I think GPUs, FPGAs and scalar cores will all mix into a single fabric. As you mentioned, FPGAs are getting dedicated hard blocks, GPUs are getting scalar cores and CPUs are getting LUTs.

> However, I think an array of ALUs actually could be quite useful for some applications over an FPGA.

http://www.adapteva.com/announcements/epiphany-v-a-1024-core...

http://www.greenarraychips.com/home/documents/greg/PB001-100...

http://www.xmos.com/products/silicon/xcore-200


No, I understand, but the granularity of the LUT exacerbates the routing problem, because in a CGRA you can route multiple wires at once.


Well, this depends on the underlying routing architecture of either system. However, you are right in general, since finer-grained logic means more things that need routing.

Nothing stops you from treating LUT outputs in groups like a coarse-grained system, though. FPGA manufacturers could make chips with a different routing topology that works really well for certain applications.

However, we could be making lots of devices that fit certain data flow patterns better; doing so makes the devices simpler and faster.

Routing is pretty important. It's just that current FPGAs are built with quite flexible interconnect.

If you want to see really limited programmable interconnect, look at some old PLDs that you program by blowing fuses.


I would say that most digital ASICs have their logic proven in FPGAs first. While automated tools exist to go from FPGA to ASIC, they are nowhere near as good as hand layout.

FPGAs are not going away, as they fill the niche where a CPU/GPU is not fast enough but the quantity of units needed does not justify the cost of making an ASIC. Cheap FPGAs also replace glue logic in smaller-run products.

Many different FPGAs exist for different tasks. Some include multiple ARM cores so your FPGA design doesn’t have to waste FPGA space on a processor. A few even have programmable analog sections.


I was under the impression that automated layout had gotten a lot better over the past few years. In particular, it's very different than hand layout and works best when you let it go whole hog at a chip, but doesn't really compare well to hand layout on tiny sections.


It can be done with essentially a recompile as long as it's all digital, but the result won't be great. If you want to use area well and get high clock rates and power efficiency, you really need to spend a significant amount of effort on physical design.

Technically speaking, you have to worry about these things on FPGAs as well.

But FPGAs and ASICs are quite different, so you'll have to adjust the design for those differences. In fact, you may have to adjust the design when switching fabs, because of process differences and resulting standard cell library differences.


Not easy or cheap, but quite possible.


The TL;DR of this article is:

"...a simple ASIC (say one that is a few square millimeters in size, fabricated using the 250-nm technology node) might cost a few thousand bucks for a couple dozen samples."


There are a lot of people quoting the price in this thread but very few coming forward to say "yes, I've actually done this".


Everyone who took a grad class in VLSI layout has done this, it really isn't uncommon.


The number of people with the experience, knowledge, cash, and who are not bound by NDA is very small.


I studied Electronic Systems Engineering, and particularly enjoyed my classes in circuit design, including writing a basic CPU in Verilog and deploying it to an FPGA.

My summer jobs in university were for software companies, because it's easier for software devs to make something useful in a couple of months, and I was more interested in the company's location than the type of work.

I think I want to get back into hardware. I just transferred with my boss from a spin-off to a parent company, and the large company has absurd policies that bother me. Everything done during working hours belongs to the company, so all my side projects are on hold. They don't keep me busy with company projects though, so I'm really bored a lot of the time (and end up here on Hacker News). The side projects (e.g. Chinese learning) are totally unrelated to the company's business (control systems for microSD testing equipment), but it's futile to oppose the policy.

What's it like on the hardware side? If I get a job doing hardware, but have idle time in the office, am I allowed to pursue my own side projects and publish those on Github? That would encourage me to make the leap.


Literally everyone who has done this is bound by NDA. Every single foundry will make you sign one.


Having signed a couple of those NDAs: the foundries are mainly concerned about their standard cell library and any information that may let a competitor understand the details of their lithographic process. Most engineers just use the cell library from the foundry, but the cell library does contain information about a foundry's process.

A lot of foundries will let you make chips without using their cell library. You sometimes have to make completely custom components in the analog world. (Be warned: this is a ton of work and no easy undertaking.)

However, even if you developed your own cell library for a particular foundry, it will still be tied up by an NDA, since it may leak information about how the foundry handles optical proximity correction and uses phase shifting (of light) to increase resolution. Also, how many layers it takes to implement something, and the particular characteristics of each of those layers, may reveal some of the chemistry and materials science used to dope the silicon or create certain structures in their process.

~edit a few typos/omissions~


How much of this was "simulatable" -- either by Cadence or Magic?

What's the average number of tries to get something right?

How'd you end up in this business?


Cadence, for instance, lets you create device models. If we are making a simple inverter, some of the things you would need are: width and height, channel (width x height), zero-bias voltage, zero-bias depletion capacitance (planar and sidewall), channel length, surface potential, oxide thickness, carrier saturation velocity, junction grading, diffusion area, transconductance, carrier mobility...

The list goes on. Once you get this information you can create a model file that will let you simulate your simple NOT gate. However, if you are starting from scratch, such as when something does not exist in the standard cell library, you can't just measure these properties, since the device does not exist yet. So you have to run other simulation software to get reasonable values, as calculating them by hand is not pretty. Also, to get these values you may need information from the foundry. For instance, Intel's FinFET transistors behave differently in some regards compared to a traditional planar transistor (mainly the channel). Intel is not going to just tell you how they work so you can get an accurate model of them without an NDA. Also, a foundry's process can affect your design, as layer thickness can change things such as capacitance. So the big thing is that Cadence does not let you model...

Cadence also can only simulate a limited number of devices, so for large designs/systems you can't simulate the whole thing; you can only simulate subcomponents. It's also slow to simulate a large design, again pushing you toward smaller, simpler subcomponents. It's limited to things you can generate a netlist for. It additionally will let you calculate cross voltages. I could keep going on and on, but there is only so much I can cram into a Hacker News response.

Depends on who you are working for and what you are making. However, you generally budget 2 spins for a large device. I bet you have heard the term "engineering silicon"; that's usually the first spin. If there are problems, usually the only changes are made to the metal (wiring) layers of the masks. If a serious problem is discovered, it may require a complete re-spin. That's if you are mainly using the standard cell library provided by the foundry; if you are making something from scratch, that's a whole other story. However, you still generally build into your design elements that let you shut off defective parts, or include redundant elements to increase the chance of one of them working. So designs also include a lot of additional circuits and logic for debugging and testing purposes that you may not be aware of. If you can't get the device perfect, you may still sell it and publish errata, or specify a more limited range of operation.

I'll just say I am a computer engineer. My first job was at Micron.


Yes, God forbid you need native (low threshold) devices... want to minimize NWell spacing for stacked devices while preventing ESD latch-up... care about capacitance density or voltage variation.

Basically, if you are designing mixed-signal/analog, then your PDK (process design kit) either comes from a tier-1 foundry (TSMC/UMC), the process is a very good copy of one (SMIC/GF), or you need a year of support and one or more full-time process support engineers.


Question: I read that fabs use specific, proprietary technology to produce ASICs, and that this technology is protected by NDAs and trade secrets. Some components require intimate process knowledge, which may never be open. Why wouldn't these companies have patented these technologies?

Granted there isn't a lot of love for patents with software, but I think ASIC design is a case where the concept would be beneficial for the public. Although the fabs would have a monopoly on their process for a time, it would at least be published so that open source tools could be made that take advantage of these processes.


Process technology details go out of date quickly, so trade secrets/NDAs work well enough. Patents are more expensive and require explicit disclosure.


You said it yourself: "Although the fabs would have a monopoly on their process for a time, it would at least be published so that open source tools could be made that take advantage of these processes."

I know of at least one place that grows their own wafers and does all their own processing, and publishes absolutely nothing about it - no patents, no external papers, zip, zilch, nada - specifically because other people could read the patents and figure out what they were doing, so they keep it all trade secret. It gives them absolutely no advantage to publish about something that requires enormous capital costs to even be considering, it risks exposing some of their secret sauce, and it costs tens of millions of dollars (...a year) to develop and maintain - and you'd want them to what, give it away? For 'open source'?

Open source hasn't even given us a competent PCB package, and has only recently given us a marginally passable office suite. In 2018.

Let the magnitude of the utter failure of open source just sink in for a minute, there. Are people really that clueless, that they think "well, someone will take advantage of this if we just get people to publish about all the details that someone else spent all the money to create AND maintain!"

I absolutely don't get the circle jerk for open source. It's amazing and wonderful at times, but let's not pretend it solves real problems. Elon Musk isn't going to open source his rocket designs anytime soon, but if you want a selection of 35000 poorly designed MP3 players, open source has your back!!112

If an open source solution solves a real problem, it was government funded (SPICE, LAPACK, MAXIMA, etc). We all paid for it. The one shining counterexample is the GNU Compiler - but FFS, they couldn't build a kernel even after the success of their compiler!


I don't care what anybody else thinks, I 1000% agree with this rant. I feel similarly strongly about it, and have ever since I discovered the whole open source thing myself several years ago.

With gcc, IIRC Richard Stallman was basically just playing cat-and-mouse with feature parity with the commercial compilers for several years. Okay, very impressive investment in terms of total SLOC, but methinks that's a byproduct of a brain being able to excel in exactly the field it's really really good at. Apparently rms wasn't a kernel person (?).

Broadly speaking open source gives me the impression of a bunch of people who honestly don't know what they're talking about and who aren't really all that smart. As a collective, that group isn't going to have very good ideas or execution. There are undoubtedly some smart people hiding in the corners, but their work is shunned or scoffed at because of the collective lack of intelligence of the whole.

While probably a frustratingly unanswerable question, I've been yearning for an online community for some time that are like-minded toward the idea that open source isn't everything and that there are better things out there.


It isn't online: for most of us, that's work. :)


> (One study found that the average commercial application contains 35 percent open-source code.)

Yay!

<Clicks the link> https://info.blackducksoftware.com/rs/872-OLS-526/images/OSS...

Nooo…



