LowRISC: Open-source RISC-V SoC (lowrisc.org)
135 points by br0ke on Aug 5, 2014 | 43 comments



Hm... the link only talks about the CPU, but calls itself a SoC. There's a lot more that needs to go on any chip that calls itself a "SoC", and much of it is very poorly served by existing "open source" solutions:

    + DRAM
    + I2C
    + GPIO (with stuff like 3.3v, tristate outputs, pull up/down, etc...)
    + USB2 host/device
    + SD/MMC
And that's just at the very basic level. Once you get into the consumer world you need to start talking about video output, camera input, video decode and encode acceleration, programmable GPUs,...

Really the CPU is, in some sense, the most solved problem from the perspective of open source. The designs themselves may be closed IP, but the instruction sets are meticulously documented and their behavior is very standard across many vendors and ISAs.


The peripherals you list span a wide range of complexity.

GPIO, I2C, and SD interfaces (in approximately increasing order of complexity from my point of view) are one-person jobs for the right person. I've been in charge of all the GPIO for complex mixed-signal chips several times in the past, and I could crank these out in no time. But someone who's never designed for ESD and latchup, beyond-the-rail inputs, etc., would probably find these pretty nasty.

Forget using fab-supplied GPIO; those designs are almost always much (3-5x or more) larger than they need to be, and in advanced nodes, they tend to add additional processing cost. Specifically, most fabs want to stick with per-pad snapback clamps, which are (1) big and (2) require (in processes newer than about 0.13u; you can get away without it in earlier nodes) an additional "ESD" layer to fix doping gradients so that the devices don't destroy themselves as soon as they snap back.

A much better solution, for many reasons, is transient rail clamps and steering diodes. First, because diodes can handle insane amounts of current per micron of perimeter (think: easily 50 mA/um of perimeter for a diode, vs. single-digit mA/um for snapback devices), their layouts are more compact, and they don't require ballasting to prevent current crowding. Second, and more importantly, clamps and diodes can be simulated (and the simulations, if not correct, are at least predictable in the way they fail, depending on the models the fab gives you). Snapback is effectively voodoo: design what looks like it should work, test it, and hope that some circuit you accidentally put too close to the pad doesn't change the behavior enough to cause failures.
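
For a rough sense of why the perimeter numbers matter, here's a back-of-envelope sketch (my own arithmetic, not from any fab kit), assuming a 2 kV HBM event peaks at roughly 2000 V / 1.5 kOhm ≈ 1.33 A:

    // Back-of-envelope ESD sizing sketch (illustrative assumptions only):
    // a 2 kV HBM discharge through the standard 1.5 kOhm HBM resistor peaks
    // at about 1.33 A; divide by the per-micron current capability.
    object EsdSizing extends App {
      val peakCurrentMilliAmps   = 2000.0 / 1500.0 * 1000.0  // ~1333 mA
      val diodeMilliAmpsPerUm    = 50.0  // diode perimeter capability (from the numbers above)
      val snapbackMilliAmpsPerUm = 5.0   // "single digit" mA/um for snapback devices

      println(f"diode perimeter needed:    ~${peakCurrentMilliAmps / diodeMilliAmpsPerUm}%.0f um")    // ~27 um
      println(f"snapback perimeter needed: ~${peakCurrentMilliAmps / snapbackMilliAmpsPerUm}%.0f um") // ~267 um
    }

Order of magnitude only, but it shows why the diode-based approach lays out so much more compactly.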

DRAM controllers are another step up in complexity. Depending on what standard you're going after, this is going to take some reasonable work.

USB2 is, in a word, hideous. A team starting fresh is looking at several person-years (or more) for a well-designed physical interface, control logic, etc.

One wonders if they can convince someone to donate designs. Come to think of it, I'd do their GPIO/ESD/latchup design for them just for the fun of it; my current employer certainly wouldn't object.


USB2 has a fairly sane external PHY specification, though, with lots of parts on the market. I was thinking about USB in the context of the data layer, which (to my mostly-software eyes) seems comparatively straightforward and sane.

But absent some random hackery on opencores.org, no one seems to have really put effort into doing it on an open part in a serious way.


Greetings. I use some of those aforementioned very standard CPUs and have an issue: many tasks I use the computer for are far more security-critical than performance-critical. I would like to have someone augment a CPU design that I'm using to give it 256-bit 'pointers' which pack a 64-bit start, end, and offset, plus a set of fine-grained permissions, and special privileged instructions for modifying these pointers. This way huge classes of security vulnerabilities will be prevented by the hardware.
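
A hypothetical software model of that fat pointer, just to make the idea concrete (the field layout and permission bits here are my illustration, not CHERI's actual encoding):

    // Hypothetical model of a 256-bit capability: 64-bit base, bound, and
    // offset plus a permission word, with every access bounds-checked.
    final case class Perms(read: Boolean, write: Boolean, execute: Boolean)

    final case class Capability(base: Long, bound: Long, offset: Long, perms: Perms) {
      // Resolve to an absolute address, faulting (here, throwing) on any
      // out-of-bounds or unpermitted access.
      def resolve(len: Long, needWrite: Boolean = false): Long = {
        val addr = base + offset
        require(if (needWrite) perms.write else perms.read, "capability permission violation")
        require(addr >= base && addr + len <= bound, "capability bounds violation")
        addr
      }
      // Ordinary (unprivileged) pointer arithmetic only moves the offset;
      // base, bound, and perms are touched only by privileged instructions.
      def advance(delta: Long): Capability = copy(offset = offset + delta)
    }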

I won't mind if it's 10x slower— though the thousands of times slower that I'd get with a software simulation would likely be too slow to be practical.

What? You say that the chips I currently use have closed and secret designs and are not available for modification? But I thought you said that the CPU is the most solved problem from the perspective of open source??

I guess it's good that people are working on actually open CPUs so that things like http://www.cl.cam.ac.uk/research/security/ctsrd/cheri/ can be built.


And with an "actually open CPU," how does one verify that the silicon in the final package is actually what's in the design and that no "closed and secret designs" have been added by the fabricator?


You'd need to X-ray the die and confirm the designs are the same, but even that doesn't guarantee that extra or malicious transistors haven't been inserted that leak data.

You never really know.

https://www.cl.cam.ac.uk/~sps32/ches2012-backdoor.pdf

https://www.usenix.org/legacy/event/leet08/tech/full_papers/...

http://en.wikipedia.org/wiki/Hardware_Trojan


One solution could be to obfuscate the design when it goes to fabrication and then check the amount of time it takes to fabricate. This assumes that obfuscation is possible in logic design, and insertion of back-doors in an obfuscated design is going to be non-trivial.


Well, it's somewhat of a solved problem if your hardware is a uniform combinational logic and routing mesh (e.g. an FPGA), though not exactly energy-efficient.

But I think this is a weird diversion: that I can't add (or pay to add) advanced security features to my CPUs even at substantial (but sane) costs is a clear reason the current closed ecosystem is inferior to an open-source one.

This remains true even if an open CPU design were not cost-effectively auditable at the hardware level; it's an orthogonal issue (and even more so: closed CPU designs are inherently less auditable if hardware backdoors are your concern). An open design doesn't have to be better in every possible way to be better in some.


I think from the perspective of openness and verification a simple 8-bit CPU like a 6502[1] would be ideal - there's not a lot that can hide in 3500 transistors.

[1] http://www.visual6502.org/


RISC-V looks like MIPS but with some of the more dubious design decisions of the time (e.g. branch delay slots) fixed. The mix of 16-bit and 32-bit instruction lengths is reminiscent of ARC.

In other words, the characteristics of SoCs using this core will likely be very similar to the many out there using MIPS: cheap and simple, with performance that's acceptable for applications like routers and other embedded devices.


That raises an interesting point -- the original MIPS ISA hasn't been patent-encumbered in quite a while (and even then, only two non-essential instructions were patent-protected). Why should a person use this unknown ISA instead of MIPS?


RISC-V started off from a modified MIPS ISA, but we frankly ran out of opcode space. We needed 64b, IEEE floating point, and a ton of opcode space to explore new accelerator and vector ISA extensions.

Even the smallest changes to MIPS to clean up things like branch delay slots mean it's a new ISA anyways, so you get zero benefit from keeping it "mostly MIPS". You can read a bit more about this in the "history" section in the back of the user-level ISA manual.


I think it may be to avoid any political/legal issues - despite the patents having expired, MIPS still sells ISA licenses. On the other hand, RISC-V basically is most of MIPS (but most RISC ISAs are very similar anyway).

They also avoided the patented instruction issue completely by removing all alignment restrictions from the regular load/store; probably a good idea, with memory bandwidths being the bottleneck now and buses growing wider - the extra hardware is also negligible, basically a barrel shifter and logic to do an extra bus cycle if needed.
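
A rough software model of that extra hardware (my sketch, not anything from the spec): a misaligned word load becomes at most two aligned bus reads plus a shift-and-merge.

    // Illustrative model of a misaligned 32-bit load over a word-wide bus
    // (little-endian, like RISC-V): at most one extra bus cycle plus the
    // "barrel shifter" merge.
    def loadWordUnaligned(mem: Array[Int], byteAddr: Long): Int = {
      val wordIdx = (byteAddr >>> 2).toInt
      val byteOff = (byteAddr & 3).toInt
      if (byteOff == 0) mem(wordIdx)                 // aligned: a single bus cycle
      else {
        val lo = mem(wordIdx)                        // first bus cycle
        val hi = mem(wordIdx + 1)                    // the extra bus cycle
        (lo >>> (8 * byteOff)) | (hi << (8 * (4 - byteOff)))
      }
    }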


The specification (http://riscv.org/riscv-spec-v2.0.pdf) clearly states the reasons. It has nothing to do with political/legal issues. There are very good technical reasons for designing a new ISA.

That RISC-V resembles MIPS is a testament to what was good about the MIPS design; however, if you look closely you will find the many ways in which RISC-V is different.

Truly, the specification is highly readable and the footnotes enjoyable. Having implemented multiple MIPS cores and so far one RISC-V core, I'm deeply impressed with the care that went into the design.


> The mix of 16-bit and 32-bit instruction lengths is reminiscent of ARC.

MIPS also has MIPS16...


And now microMIPS, another attempt at 16-bit encodings.


Thanks, I didn't know about microMIPS. I read about and was surprised by the LWMx (Load Word Multiple) instruction, which doesn't seem in the RISC spirit at all... That's funny; I think ARM with its v8 ISA (64-bit registers) dropped a similar instruction which it had before.


I would attempt a nibble-based compact instruction representation to reduce external memory bandwidth. Fixed-width instructions kinda suck now that memory is such a bottleneck.


I've long wished for a middle ground between FPGAs and CPUs - namely a CPU with user-changeable instructions.

Have a CPU that presents as a CISC (but is internally a microcoded TTA), with a large chunk of the microcode user-writable (so you have push-inst and pop-inst, where push-inst pushes the new instruction microcode into the microcode storage and copies the old instruction microcode onto the stack, and pop-inst does the opposite). It keeps the advantages of fixed-width instructions while, depending on how the microcode is encoded, potentially having significant memory savings.
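
A hypothetical behavioural model of that push-inst/pop-inst scheme (just to sketch the bookkeeping; the micro-op encoding is hand-waved):

    // Behavioural sketch of per-opcode microcode with push/pop semantics.
    import scala.collection.mutable

    object MicrocodeStore {
      type MicroSeq = Vector[String]   // stand-in for real TTA micro-ops

      private val table  = mutable.Map.empty[Int, MicroSeq]                       // opcode -> current microcode
      private val stacks = mutable.Map.empty[Int, List[MicroSeq]].withDefaultValue(Nil)

      // push-inst: install new microcode for an opcode, saving the old copy.
      def pushInst(opcode: Int, microcode: MicroSeq): Unit = {
        table.get(opcode).foreach(old => stacks(opcode) = old :: stacks(opcode))
        table(opcode) = microcode
      }

      // pop-inst: restore the previously saved microcode, if any.
      def popInst(opcode: Int): Unit = stacks(opcode) match {
        case old :: rest =>
          table(opcode) = old
          stacks(opcode) = rest
        case Nil =>
          table.remove(opcode)         // nothing saved: the opcode becomes undefined again
      }

      def decode(opcode: Int): Option[MicroSeq] = table.get(opcode)
    }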


The ARC processor line from Synopsys does this commercially, I believe. RISC-V seems to be trying to support this sort of thing; there is reserved opcode space for implementation-specific extensions.


I was under the impression that Synopsys' ARC processors were configurable at design time, not runtime.

Or am I missing something? Have a link?


JITs could synth new instructions.


That hardware implementation of RISC-V listed on their website is written in Scala (using Chisel). That's very cool. I want to see synthesis results.


Hi, are you talking about the Sodor cores? I wrote those and wouldn't mind answering any questions about RISC-V or Chisel.

Regarding Sodor, they're designed to be instructional (we use them on our undergrads at Berkeley) and open for anybody with a C++ compiler so they can learn about Chisel and RISC-V. I pushed them through synthesis once just for kicks, but I didn't work on making them FPGA-ready. Chisel will give you the Verilog of the core, but you'd still need to write a test-harness that's specific to your FPGA.
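
For anyone who hasn't seen Chisel, here's a toy module just to show the flavor (a sketch in later Chisel 3 syntax, not from the Sodor sources; the Chisel 2 in use at the time differs in surface details):

    // Toy Chisel module: hardware described as a Scala class.
    import chisel3._

    class Accumulator(width: Int) extends Module {
      val io = IO(new Bundle {
        val in  = Input(UInt(width.W))
        val en  = Input(Bool())
        val out = Output(UInt(width.W))
      })
      val acc = RegInit(0.U(width.W))
      when(io.en) { acc := acc + io.in }  // elaborates to a register, adder, and mux
      io.out := acc
    }

    object EmitAccumulator extends App {
      // Elaborate the Scala description down to Verilog (Chisel 3 API).
      println((new chisel3.stage.ChiselStage).emitVerilog(new Accumulator(32)))
    }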

The RISC-V user manual lists some of our existing RISC-V silicon implementations (8 so far, listed in Section 19.2), whose RTL aren't (yet) open-source.


Yes, that's what I was talking about. Dave Patterson was giving a talk in Portland about three years ago on heterogeneous computing, and he mentioned Chisel. And now it seems to be relatively mature. I am happy to see innovation in both the HDL and uArch space.


Hey, I've been interested in RISC-V and Chisel, and am a Bay Area local (I live in Oakland)... what is the best way to get in contact with you and others at UCB?


Hmmm, I actually have no idea!

Chisel has a Google group where you can post any comments or questions you have (chisel.eecs.berkeley.edu). If you wait a week, we should have something similar up at riscv.org too.

Chisel has an occasional "boot-camp" where you can come and learn how to use it, and RISC-V will have something similar too I believe in January.


wow. yeah i totally missed this the first time i skimmed this. Very Cool.


> Volume silicon manufacture is planned

I highly doubt it. But if it's true, that would be the missing link for all open source hardware design.

It would also be nice if they gave some idea of the kind of performance or implementation they are considering.


People are making volumes of Bitcoin ASICs for surprisingly small amounts of money, so it is possible. The question in my mind is who's going to be buying volume quantities of this chip and why.


A company like Silent Circle/Blackphone could buy a million or two in the future. Maybe some Raspberry Pi competitors, too.


One of the team members was a co-founder of the Raspberry Pi.


So you would say this isn't a low risc proposition?


I'm not familiar with the IP issues. Would it be possible to center open-source processor development around the ARM instruction set?

It looks like the privileged part of the RISC-V ISA is not finished yet. This is a great project, but it seems a long way off.


I'm surprised they don't just use OpenRISC, there's already hardware shipping that uses ASIC implementations of that internally.


They discuss their reasoning briefly in the manual [1, p. 3]:

We are far from the first to contemplate an open ISA design suitable for hardware implementation. We also considered other existing open ISA designs, of which the closest to our goals was the OpenRISC architecture. We decided against adopting the OpenRISC ISA for several technical reasons:

-- OpenRISC has condition codes and branch delay slots, which complicate higher performance implementations.

-- OpenRISC uses a fixed 32-bit encoding and 16-bit immediates, which precludes a denser instruction encoding and limits space for later expansion of the ISA.

-- OpenRISC does not support the 2008 revision to the IEEE 754 floating-point standard.

-- The OpenRISC 64-bit design had not been completed when we began.

[1] http://www.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-54...


Or SPARC, which is patent and license free, open, and has a decade of shipping in real hardware to back it up.


Historically, ARM really, really did not like unlicensed or open-source ARM processors. But times change, so I wouldn't be surprised to see them take some easy PR from openwashing at some point.


That seems highly unlikely - unless we're talking about chips a few generations behind the latest tech. I could see them open-sourcing, say, the ARMv6 architecture in 3+ years, when ARMv8 has already taken off and ARMv7 is in legacy mode. But meh.


ARM would very aggressively go after any open source implementation of the ARM ISA. For the longest time you couldn't find any ARM documentation on the net because it was all behind a license agreement that read, "won't be used to make an open source version of our schwag"


If I remember correctly, the last time I checked, 586-level x86 was fully open and some of the P6 patents were close to expiring, so it might make another contender for an open-source CPU. Since Intel and AMD don't license x86 soft-cores, unlike ARM and MIPS (which RISC-V is similar to), I think there could be fewer legal issues. Compatibility is another bonus; it's possible to put an entire PC-compatible on a single chip: http://www.vortex86.com/dx


(Disclaimer: I'm developing my own RV64 FPGA implementation.)

I wish the page were a little more clear on what the intentions are, etc., but seeing RV64 in silicon would be immensely exciting. Producing a simple in-order machine, even with the usual set of peripherals, isn't very hard at all, nor that expensive on an older process node, but there's a world of difference if we start talking superscalar out-of-order multi-core SMP. Seeing OpenRISC on the Advisory Board, I suspect it's more the former than the latter.


I've spent some time with Robert Mullins and he is an absolutely stand-up guy. It was a pleasure to be taught by him.



