The problem is that Verilog/VHDL isn't a "programming language" in the sense that C, Lisp, Haskell, or Python are programming languages. So approaching them with a programming language mindset is asking for a lot of pain and misunderstanding.
HDLs like Verilog and VHDL describe digital circuits, not algorithms and instructions for manipulating data. If C code is akin to instructions for getting to a grocery store and shopping for vegetables, HDL code is describing the blueprint of a house textually. Maybe the solution is building some ultra high level abstraction that can somehow encompass both problem domains, but given how difficult hardware synthesis with existing HDLs is right now I don't know if that'll happen anytime soon. And the fact that logic takes so long to synthesize and simulate really has little to do with Verilog's deficiencies; if anything it's a limitation of the register-transfer level abstraction that's currently used to design digital hardware.
"To write Verilog that will produce correct hardware, you have to first picture the hardware you want to produce."
I think that's the crux of the issue. Most digital designers do have a picture of the actual hardware, as a block diagram, in their heads. When I write RTL, the process is very front-loaded: I spend hours with a pen and paper before I even sit down at the keyboard. The algorithms in question are only a small part of the work of building functioning hardware; whereas when designing software I would let the compiler make decisions about how long it expects certain operations to take, and what to inline where, these are all things that I plot "by hand" when building hardware, before I even open a text editor.
I think, then, that the author kind of misses the point when he goes on to say that "you have to figure out how to describe it in this weird C-like [...] language" -- to be honest, that's the same for all types of programming: when I go home and write C, I have to take abstract concepts and express them in this weird C-like language, too! Arcane syntax is irritating, but is not something fundamentally 'hard' (unless it's too arcane, anyway).
By the way -- I also often wondered "why the hell does synthesis take so long?". I originally assumed it was because Xilinx's (and Synopsys's, and ...) software engineers were terrible and had no idea how to write efficient programs. This might be true, but I now believe it's probably not the main cause; if "why's it taking so long?" is a question that interests you, I recommend looking into the VLSI CAD tools class on Coursera.
By far, most of my time spent in FPGA dev so far has been envisioning the state machines and timing details involved. I use VHDL, so it's never been a question of "how can I make VHDL output what I want?"---for me, it's always the struggle of "why can't I just ask VHDL to make n of these units?", where n is some upstream provided number.
I think the author might need to step away and look at it from the other side: how can we take a working, but overly-verbose language like VHDL and make it more powerful? At least VHDL was envisioned from the beginning as a hardware description language, and it definitely shows.
VHDL generics and generate work nicely for 1d cases, but for 2d cases (systolic arrays), it's difficult to make the scripting really work without hard-coding a bunch of corner cases.
Another example: barrel shifters are impossible to parameterize, because you need to hardcode the mux cases (see the Xilinx datasheet[1]). That's kind of insane, considering that bit-shifting is a very common and basic operation. This is particularly problematic if you're trying to describe something without having to resort to platform-specific instantiation blocks.
It's a little frustrating that VHDL doesn't have a higher-level standard instantiation library, because you're chained to a platform the moment you start doing anything other than basic flip-flops.
Well, I suppose you could say assembly is the analog of writing HDL that is very structural.
I haven't done it myself, but generates can be nested. You'd have to check to see if your tools support it or not though.
With the xilinx example, I'm not sure what you mean. Is it choosing to do multi-level muxing vs a naive muxing solution? I'd start by just writing a simple behavioral version, and only if that didn't meet performance constraints would I bother doing anything structural about it.
It's late and maybe I'm just not thinking it through. I'll take a stab at some of this and maybe it'll be clearer to me.
Thanks for the course pointer. It seems there is no new offering of this course in the near future. Do you have any pointers to similar online courses related to VLSI CAD tools?
> given how difficult hardware synthesis with existing HDLs is right now I don't know if that'll happen anytime soon
Synthesis sometimes feels like a great blind spot in the hierarchy of abstractions. It is hard, critical, and yet appears to be developed only by niche players.
> the fact that logic takes so long to synthesize and simulate really has little to do with Verilog's deficiencies
IMO it has everything to do with the open-ended nature of synthesis. When you compile software, it's very procedural: you have a linear chain or network of paths. You construct it, and you improve on it where you can. Hardware, on the other hand: you have a cloud of logic described in RTL, and you construct it. That's not hard. But when you get to improving it? It's like the packing problem with N elements, except that to make things better every element can be substituted with a variety of different shapes!
I think the issue here is that Synthesis and Place and Route tools are squarely in the Computer Science Algorithms domain. Hardware engineers in general don't have the background for that kind of work.
And software engineers don't cross over to the hardware side often.
So the people suffering with the "slow" tools etc, are usually not in a very good position to do anything about it.
But really, the slow part is place and route. If you don't over-constrain your design, this can actually go pretty quickly. It's when timing is tight, and first-pass guesses aren't coming up with a satisfactory solution, that things slow down.
IIRC, most of the time these just end up boiling down to 3-SAT, which will make the average computer science person throw up their hands and say "it's NP-hard, you can't make it more efficient" (even though P vs. NP is still an open problem).
I think there's one EE/CE professor at my university working on the SAT solvers that form the crux of the optimizers in most of these tools, but at the end of the day it's still a bunch of heuristics that, worst case, run in O(2^n) time.
> And the fact that logic takes so long to synthesize and simulate really has little to do with Verilog's deficiencies; if anything it's a limitation of the register-transfer level abstraction that's currently used to design digital hardware.
If that's the case, then why are Chisel and Bluespec much faster to simulate despite having less investment in tooling?
> HDL code is describing the blueprint of a house textually
... combined with features for simulating dynamic loads, say by modeling a party full of people jumping around.
From what I've seen of SystemC, I thought it was basically the same idea but with a different syntax - the entirety is available for simulation, but only a subset of the language constructs are synthesizable.
Much agreed. Verilog/VHDL are simply not programming languages. They are Hardware Description Languages. They describe parallel components that will actually be "wired" together.
My advice if you are a programmer or computer scientist and you get tasked with writing Verilog or VHDL "code": you need to be able to explain the difference -- you've just been offered a job as a hardware designer and engineer. Having spent 5 years doing hardware engineering and a lot longer doing software consulting, I can say it's an entirely different set of skills, if not an entirely different career path.
"it's politically difficult", "scary", "one person suggested banning any Haskell based solution"
Just going by his description (as I don't know much about hardware design except a single uni course ~15 years ago), it sounds as if a functional programming language like Haskell would be a perfect fit?
"one person suggested banning any Haskell based solution" -- that's pretty much literally calling a taboo on Haskell.
The article doesn't really name any reasons for this, can anyone here explain, maybe?
The only reason I can imagine would be that if you've got seasoned veteran hardware developers who have lived and breathed Verilog for decades and are used to doing it "this way", they will be averse to change, especially if it means learning a new programming language, and perhaps in particular if it means a new paradigm, or because of Haskell's pure/mathematical/function-theoretical bent? (I'm just guessing here.)
However (and this is where I'm most probably wrong), if you're developing systems that cost 8 figures to prototype, and the current solutions are super slow and/or inaccurate to test, shouldn't even seasoned veterans, or at least some of them, be able to swallow their pride?
> "one person suggested banning any Haskell based solution" -- that's pretty much literally calling a taboo on Haskell.
Personally, I would love to call this nonsense out: Say, "OK, you get to lead a team that is prohibited from using any Haskell-based solution. My team will compete with yours and must use a Haskell-based solution. May the best team win."
Functional programming doesn't make sense for this sort of thing as you can't define a circuit recursively. Mitrion-C is an example of a higher-level functional language aimed at configuring FPGAs. It's pretty useless unless you have insider knowledge on how the system works.
Synchronous data-flow programming is a functional-style language paradigm made for programming embedded systems.
It follows the paradigm of clocked circuits, where variables are wires.
The latter has explicit recursion which can be disabled when designing embedded systems.
Haskell libraries closely follow this paradigm with their functional reactive DSLs.
Recursion is actually useful to recursively define circuits with repeated patterns, e.g., the butterfly FFT.
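To make the "variables are wires" idea and the use of recursion concrete, here is a toy Haskell sketch; the Signal type and the register/counter names are invented for this comment, not taken from any particular library:

    -- A signal is the sequence of values a wire carries on successive clock ticks.
    type Signal a = [a]

    -- A D flip-flop delays its input by one tick, starting from a reset value.
    register :: a -> Signal a -> Signal a
    register rst s = rst : s

    -- A counter described by feedback: the wire refers to its own delayed value.
    counter :: Signal Int
    counter = register 0 (map (+ 1) counter)

    -- take 5 counter == [0,1,2,3,4]

The recursion here expresses feedback through a register, not a call stack, which is roughly the style the Haskell hardware DSLs follow.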
Consider a circuit that calculates the nth Fibonacci number or performs quicksort. I imagine any synthesis would have to generate something that is effectively an iterative solution: instead of producing n identical circuit blocks for n calls of the function, it would only have one circuit used over and over again. But this defeats the ability to exploit the potential parallelism that could be achieved in the execution of the algorithm.
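For what it's worth, the "one circuit used over and over" version is exactly what a feedback-style description gives you. A toy Haskell sketch, again modeling a wire as the list of values it carries on successive clock ticks (the name fibSignal is just for illustration):

    -- Two registers plus an adder, expressed as stream feedback:
    -- each element is the value on the output wire at one clock tick.
    fibSignal :: [Integer]
    fibSignal = 0 : 1 : zipWith (+) fibSignal (tail fibSignal)

    -- take 8 fibSignal == [0,1,1,2,3,5,8,13]

Whether a tool can get from an arbitrary recursive function to a structure like that automatically is, of course, the hard part.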
Well, you can define a combinational circuit as something that has n inputs and m outputs and it is one of:
* a gate,
* a circuit that takes two inputs and produces two outputs by swapping the order,
* a circuit that takes one input and produces one output,
* a circuit that takes one input and produces no outputs,
* the circuit obtained by taking two circuits and composing them in serial,
* the circuit obtained by taking two circuits and composing them in parallel.
This produces a simple inductive definition for combinational circuits. Synchronous, sequential circuits can be obtained by taking a combinational circuit and connecting the first n outputs to the first n inputs via D flip-flops. If you want asynchronous circuits, you can add another combinator that takes a circuit and connects the first n outputs to the first n inputs without the flip-flops.
With these definitions, it's quite straightforward to manipulate circuits in a functional manner.
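As a rough Haskell sketch of that inductive definition (the constructor names and the way gates are represented are invented here for illustration, not taken from an existing library):

    -- A circuit with some number of inputs and outputs, built inductively.
    data Circuit
      = Gate String Int Int       -- a primitive gate with n inputs and m outputs
      | Swap                      -- two inputs, two outputs, order swapped
      | Wire                      -- one input, one output, passed through
      | Sink                      -- one input, no outputs
      | Serial Circuit Circuit    -- compose two circuits in series
      | Parallel Circuit Circuit  -- compose two circuits in parallel
      | LoopD Int Circuit         -- feed the first n outputs back to the first
                                  -- n inputs through D flip-flops (sequential)
      deriving Show

    -- Manipulating circuits then becomes ordinary functional programming,
    -- e.g. counting primitive gates by folding over the structure:
    gateCount :: Circuit -> Int
    gateCount (Gate _ _ _)   = 1
    gateCount (Serial a b)   = gateCount a + gateCount b
    gateCount (Parallel a b) = gateCount a + gateCount b
    gateCount (LoopD _ c)    = gateCount c
    gateCount _              = 0

The asynchronous combinator mentioned above would just be one more constructor like LoopD, minus the flip-flops.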
In school we learned VHDL. I found it EASY to adapt.
However you have to ditch the notion you are working with a programming language.
So either build your own CPU in VHDL, specifically optimized for your problem, or "code" it like you would build hardware!
This means taking a different look at things:
For example, storing into an array: you create a bus: an address bus of the width you need, a data bus of the data width you need to write (keep it reasonable, or you run out of routing), and a read/write bit. Then hook your counter up to that, increment the address and the data bus, and have fun.
VHDL for FPGAs is easy, but in the real world you need to deal with things like latency, PLLs, and so on. There are few academic programs that make students aware of the trials and tribulations of modern hardware design.
I agree. You can't look at it as a programming language. We also learned it in school, and at some point a switch flipped and it was remarkably easy to use after a few failures. I thought it was very exciting to build digital circuits with it.
IMO the syntax of Verilog isn't so problematic. Adhering to a simple style guide will avoid most gotchas. The author does make a passing reference to the real problems though:
1. Simulation is many orders of magnitude slower than the same code running in production (and you see code in production only after many months and dollars). You don't face this slowdown when writing software.
2. Compared to software, there is much less code reuse in hardware. With software, it doesn't cost much if your binary is 10% or even 100% bigger, so your class can be written a more general/reusable but less efficient way. But a 10% increase in the size of a chip makes it significantly more expensive, so as a hardware designer you tend to create one-off 'classes' that are as efficient as possible, but less general/reusable.
You can overcome 2 by building software that parametrizes your hardware.
Just like a core wizard, you can build Python scripts that output Verilog/VHDL code according to the desired settings. Not only does this accelerate the process, it may also prevent bugs in the long run.
> The aversion to Haskell is so severe that when we discussed a hardware style here at Google, one person suggested banning any Haskell based solution
...curious, but why would folks in this field even care about what programming language this is based on? Theoretically, abusing the concept, wouldn't you just use something Brainfuck-based if you found a nice way to describe your hardware using some BF-based software and DSL? Most attributes of programming languages that we software guys care about seem irrelevant at this level, so Haskell or C or Lisp, why would it matter?
Well, because some languages are harder. Haskell is a tough nut to crack (especially the "Monad" thing, but the type system also raises some question marks).
Yes, functional languages are closer to what hardware does, but more importantly, they may be close without being close enough.
As the example showed, you have several side effects. Combinational logic (http://en.wikipedia.org/wiki/Combinational_logic) is a very good match for functional languages; sequential logic, however, is "very different" (from an analysis point of view).
> not because of deficiencies in [Haskell], but because it’s politically difficult to get people to use a Haskell based language
I know tons of languages, but I've never really been able to understand Haskell. The whole monad thing is just...weird. I've read several descriptions, and had it explained to me multiple times on HN, but I still just Don't Get It. And I feel like I should Get It, because I have a degree in math including an upper-level course in logic.
To me, that suggests a problem with the language design of Haskell -- maybe monads are a bad abstraction, maybe the syntax should be different, maybe there's a brilliant and simple explanation that I haven't found yet. But the bottom line is that the learning curve for people coming from other languages needs to be a lot lower for Haskell to ever be anything but a tiny niche.
I agree, except that I think Haskell would be pretty big in an alternative universe where everybody already learns programming with Haskell. That is, in the dynamic system of software development, default-to-imperative and default-to-pure-functional are both stable equilibria.
As far as monads go, the things that helped me were (1) think of writing in monads not as programming in an imperative language, but as writing a functional program that returns an imperative program (as a value in a rather opaque type), and (2) forget about do-notation and learn to use the arrow-like operators directly.
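A small illustration of point (2): the same IO program written with do-notation and then directly with (>>=), the operator that do-notation desugars to.

    -- With do-notation:
    greet :: IO ()
    greet = do
      name <- getLine
      putStrLn ("hello " ++ name)

    -- The same program using (>>=) directly:
    greet' :: IO ()
    greet' = getLine >>= \name -> putStrLn ("hello " ++ name)

Both are just values of type IO (); nothing runs until the runtime actually executes them, which is what point (1) is getting at.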
There isn't much to Monads at all. One problem that some Haskell beginners hit is that they read some bunk monad explanations and only get confused, and that monads get so hyped up that when met with the actual description, they try to look for something more that isn't actually there.
Monads are just a set of two or three operations on a data structure, which obey certain laws. The operations are either ( map(function,structure), collapse(structure), create(value) ) or ( bind(function,structure), create(value) ), where you can implement map and collapse in terms of bind, and vice versa. You don't even need to know the laws to use monads, only to create a new monad.
map(function,structure) is a function that applies a function to every element in the structure. Pretty straightforward. For example:
-> map(addThree,[1,2,3])
[4,5,6]
Collapse is a function which takes structures embedded in a structure of the same type, and gets rid of one level. For lists, it's the same as concatenating all sublists.
-> collapse([[],[3,4],[6,1,2]])
[3,4,6,1,2]
Create takes a value and puts it into the structure in the simplest way possible.
-> create(3)
[3]
These three functions are enough to have a monad over your structure, and are how monads are usually defined in mathematics. But let's look at the alternative.
For the sake of explaining bind, let's introduce a new function which we will just use as an example: say duplicate(x) returns [x,x]. bind applies such a function to every element of the structure and then collapses the result, i.e. bind(function,structure) = collapse(map(function,structure)):
-> bind(duplicate,[1,2,3])
[1,1,2,2,3,3]
This definition of bind is valid for all monads.
I have used the list monad as an example here, but many other structures are also monads. You don't really see the power of what you can really do with monads until you use them in a language that supports them well.
In the actual definition of monads, collapse is called join. The definition using bind is the one that is more popular in programming, while the definition using join and map is more popular in mathematics.
I'm not actually going to show what the laws governing the interactions between these functions are in this comment, but I can explain them if you ask.
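For anyone who wants to connect the above to the names Haskell actually uses, here is the list-monad version of those functions, using only the standard Prelude and Control.Monad (the bind helper is defined locally just to mirror the naming in this comment):

    import Control.Monad (join)

    -- map(function,structure)  is  fmap:    fmap (+3) [1,2,3]        == [4,5,6]
    -- collapse(structure)      is  join:    join [[],[3,4],[6,1,2]]  == [3,4,6,1,2]
    -- create(value)            is  return:  (return 3 :: [Int])     == [3]

    -- bind(function,structure) is  (>>=) with its arguments flipped,
    -- and can be defined from the other two:
    bind :: Monad m => (a -> m b) -> m a -> m b
    bind f xs = join (fmap f xs)

    -- e.g. bind (\x -> [x, -x]) [1,2]  ==  [1,-1,2,-2]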
One generally simulates the RTL long before doing anything with gates, mostly because RTL is faster. Only when you have things sufficiently nailed down, have done all the timing analysis, etc., do you start running gate sims.
Even for pretty damn large designs, compiling RTL shouldn't be a big hindrance… minutes instead of hours.
Considering the magnitude of complexity involved with gate sims, I find it hard to complain. I can't speak for other vendors, but Mentor's Questa does a pretty amazing job at optimizing and running at "speed."
The main issue I think is that simulations are generally single core affairs. When you've got millions of gates in your design, having to serialize that event simulation down to a single core is surely going to be a bottleneck.
When simulators can run on multiple machines, partitioning the design (this is the tricky part) and running more in parallel, we'll see some speed advances.
But for that to work, you'd have to be very careful how you do your design. If it's spaghetti with tentacles reaching out from every part of the design to every other part, you probably won't have much luck splitting that up for parallel sims.
Personally, I think Verilog is weird, but I come from a VHDL background. Like Verilog, you have a subset that is synthesizable, but it's harder to shoot yourself in the foot with VHDL, in my opinion.
I spend a lot of my time in SystemVerilog doing verification these days. It brings some nice concepts to the verification table, but what an awful language. It's like they stapled 3 different languages together, and took everything that is bad about OO and stuffed it in there without ever asking "Does this actually make Verification easier?"
VCS took 15 minutes for an SV recompile and 60+ for a clean compile (depending on the level of "clean" and level of "compile") for "large" projects on top-of-the-line servers.
Obviously hardware development looks hard and weird to software people, in the same way C++ looks weird to an EE.
It's slow only because the synthesizer makes huge efforts to optimize the design, trying to pack the logic into as few resources as possible. Shut down the optimizations and suddenly it's blazing fast (like Icarus Verilog).
Chip vendors often give you a way to instantiate on chip hardware (memories, latches, etc).
I found the path of least resistance (and highest performance) was to figure out what circuit I wanted, then basically use Verilog to wire these primitives together to make it. By getting the tools to use an actual memory block, rather than a stack of flip-flops, you would also get a nice performance increase.
The trouble is that doing that isn't vendor neutral, and the whole approach probably wouldn't be any good if you were targeting ASIC. It was just 'alright at college'.
In 'real' pro Verilog development, do people do this?
I'm late to this discussion, but as a "real pro" VHDL coder, I can at least sate your curiosity.
At my company, we abstract the vendor specific implementations to have a common interface that we can then use to keep the rest of our code vendor neutral.
For example, within the "Dual Port RAM" section of our revision control system, we have separate files that instantiate memory control blocks for Xilinx Spartan and Altera Cyclone FPGAs, and generalize the interfaces so that all I see when creating a design is a vendor-agnostic "dpram" component interface. When I need to use one in our design, I just import the correct file into my build, corresponding to the actual FPGA that will be used. Migrating to another vendor involves changing which file gets used during synthesis.
I would ask your team to take a second look at Bluespec. I used it for a project some 4 years back, and even today, when I have to code something, I wish I had Bluespec.
I'd also like to point out MyHDL: http://www.myhdl.org/doku.php It allows the full power of Python (being just another module) while you're developing and testing, and when you're ready to compile to hardware you can compile the RTL subset of your program to Verilog or VHDL and take it to hardware from there.
MyHDL is great, more people should use it. I find it especially powerful for building test frameworks. I love using the power of Python for all the verification logic.
LOL, I love this thread. Hardware is the new Software! But hardware description languages are just that: structured languages that describe hardware. If you come from a background of programming, it will be hard for you. I am someone who comes from a background of hardware that had to learn software, but then when Verilog came along, oh man, was I in heaven. Do not think of VHDL or Verilog as a programming language; think of it more in terms of HTML. HTML is a markup language for typesetting; HDL is a markup language for hardware. Hardware is not sequential: everything will happen at the same time unless you prevent that with a state machine. With sequential languages like C or Pascal, everything happens sequentially unless you go out of your way to try to make things happen seemingly at once. With HDL, you have the opposite problem: everything will happen at once on each clock cycle, unless you make a state machine to make things sequential. Once you see this, and the amazing power of it, you will see why I say... Hardware is the next Software.
Agreed that hardware language is generally hard to pick up on but "produced something random" is a bit strong (excluding bugs in simulators, which are a bit more common than I would've expected).
I'm having a hard time reading this article, considering he starts off with false information like:
"The problem is that Verilog was originally designed as a language to describe simulations, so it has constructs to describe arbitrary interactions between events. When X transitions from 0 to 1, do Y. [...] But then someone had the bright idea of using Verilog to represent hardware."
Verilog was made from the start with the sole intention of describing hardware. Hardware programming and software programming are fundamentally different. Comparing them is irrelevant.