India's first RISC-V based Chip is Here: Linux boots on Shakti processor (geekdave.in)
452 points by signa11 on July 30, 2018 | 88 comments



Annoyingly this chip lacks the Compressed (RVC) extension, making it incompatible with all existing Linux distros. These will either have to be recompiled without any Compressed instructions (which increases I-cache pressure on other CPUs that do support it), or we'll need to ship two versions of everything. From discussions I believe they've been recompiling Fedora & Debian from sources without RVC.


The Shakti effort was started in 2014. As a member of the group I can say that, at the time, RVC was only seen as an optional extension. We got the tapeout opportunity a year ago. Later, the RISC-V community took the decision to make the RVC extension mandatory for distros without involving most of its committee members. Had we been told earlier, we would have added the support.


This is true, and how it was handled is very unfortunate. However I do think the decision to standardize on RVC for Unix-like systems / servers was the right one, no matter how it was arrived at.


I think the politics of how the decision was made are more important than the technical merits of the decision. Given that, apparently, some community members decided to make RVC mandatory without telling the rest of the committee, I think the only right thing to do is to recompile distros without RVC and live with it. Then, the large binaries and icache pressure will stand as a permanent reminder of how not to make a decision.

Also, is it possible to use only a handful of RVC-enabled binaries on a system that otherwise doesn't use RVC? If so, it might not be so bad.


This doesn't seem correct to me. No one has made RVC "mandatory". For a start, no one had any authority to do any such thing!

Some people building Linux-capable processors decided to include RVC.

Some people with Linux distributions (Fedora, Debian), seeing that all known current efforts to build Linux-capable processors were supporting RVC, and seeing the large benefits it provides, decided to build their distros using RVC.

If someone wants a Linux distro that doesn't use RVC... it's not a problem. The instructions supported are exactly the same. There are no portability problems. If a package builds successfully with RVC then it will compile without it, no problem, with no additional human work. Just change one configure flag when you build the compiler (to default to rv64g instead of rv64gc). The only difference is the binaries will all be 30% to 50% bigger.
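
For what it's worth, a program can check at compile time which flavour it is being built for: RISC-V GCC and Clang predefine __riscv_compressed when the C extension is in effect. A minimal sketch:

    /* Prints which flavour the toolchain produced: RISC-V GCC/Clang
       predefine __riscv_compressed when generating code with the C
       extension (e.g. -march=rv64gc); -march=rv64g leaves it
       undefined. */
    #include <stdio.h>

    int main(void)
    {
    #ifdef __riscv_compressed
        puts("built with RVC (e.g. rv64gc)");
    #else
        puts("built without RVC (e.g. rv64g)");
    #endif
        return 0;
    }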

Some more build machines will be required, and a few GB of disk space. These are trivial things.

It's absolutely nothing like the difference between, say, armeabi and armhf or the differences between armv4, armv4t, armv5, armv6, armv7.


Is this really such a big issue? Currently there are already multiple ARM builds from some distros to deal with similar problems: https://archlinuxarm.org/about/downloads


It's exactly the fragmentation of ARM that I'd like to avoid. Same story goes for bootloaders, kernels, out-of-tree drivers etc.

You might find my 15 minute talk at the RISC-V workshop interesting on this subject: https://rwmj.wordpress.com/2018/05/21/my-talk-from-the-risc-...

The bit about RISC-V on servers starts around 5 mins in.


>> It's exactly the fragmentation of ARM that I'd like to avoid.

RISC-V has an extensible ISA, so it's likely to be a mess by design. I'm a huge fan of RISC-V, so I really feel bad saying this. It's telling that RV-IMAFDC is the base for Unix systems. It was originally IMAFD = G, but now GC is the standard. Then there's the V extension, which I think will be very important, and it already sounds like it may be fragmented (graphics and AI may be extra beyond vectors). But then there is space in the instruction encodings explicitly for custom extensions.

These variations are important, and the ability to customize in a standard way is too. I think it's going to be an amazing decade or two in the CPU space because of RISC-V, but it's also going to be a lot of churn IMHO.

Once open designs catch up to the state of the art and a real base ISA solidifies I hope a RISC-6 comes along with a more optimal encoding and things become really stable for a long time. But that's just my optimism getting out of hand.


Why do you talk about 'the real base ISA' solidifying? The 'real base ISA' is solidified, and it's designed to let you run a lot of software at minimal complexity.

The problem is that there is NEVER one optimal ISA for all applications. It makes absolutely no sense to have the V, T, J extensions in the new 'real base' because there are tons of systems that don't ever need them, and it would be a waste.

That's why the RISC-V foundation defines profiles for certain application domains. RV64GC is the logical one for Linux for the moment.

Also, you ignore that very often an ISA is not used in a mass-market product, but rather in a smaller, more specialized space. The custom extensions are very important for people like that. In other cases, the potential AI extensions for vectors for example will probably never show up in a standard profile for Linux.

I think what we are likely going to see is that RV64GC will remain the base level for the Linux profile, and eventually more advanced profiles with V and potentially others will emerge.

The whole point is that you no longer build everything around an ISA, which is never optimal, but rather around profiles.


>> Why do you talk about 'the real base ISA' solidifying? The 'real base ISA' is solidified, and it's designed to let you run a lot of software at minimal complexity.

The "real base ISA" will be whatever set of options becomes widespread 10-15 years from now. In my mind it's entirely feasible (though not as likely) that the approach of using hundreds of minion cores (see Esperanto) to do graphics will become a common thing (since there is no free GPU and none around the corner). If that is the case, the V extension will become part of the "real base ISA" because everyone will want it. So far the A and C extension went from optional to necessary - it would be naive to think it will stop there. Also, as implementations improve someone may come up with a low cost (low area) way to implement DSP instructions. Maybe that becomes common because the low additional cost is a "why not?". That's part of my point too, the flexibility will bring innovation and that may lead to certain things naturally becoming part of most peoples expectations.

It looks like OpenGL has done something similar. They had a few specifications (ES in particular) for different classes of hardware, but as it became clear that certain features could be handled by the lesser hardware variants those became part of the standard.


Again, I don't see how even in that case, the V extension would be part of 'the real base ISA'.

You seem to again ignore that there are many, many applications that don't need or want a graphics card or SIMD of any kind.

What you are talking about is not the 'real base ISA' but the 'real OS application profile'. By not hard-coding these choices into the ISA, you have much more freedom to create new profiles for future needs without changing anything about the ISA.

> So far the A and C extensions went from optional to necessary

That is not the case. It's simply false. There are tons of RISC-V chips that don't have either; some of them are even in production.

Before the Linux RV64GC profile was released, there simply was no defined official standard for OS applications like Linux.

> Also, as implementations improve someone may come up with a low cost (low area) way to implement DSP instructions

That already exists; it's the P working group.

> That's part of my point too: the flexibility will bring innovation, and that may lead to certain things naturally becoming part of most people's expectations.

What you are missing is that 'most people's expectations' are limited to a specific program domain. Nobody will ever expect graphics on an IoT edge processor. So the default profile for IoT will not include that.


I guess I'm talking about the base ISA for Unix systems. This whole discussion started because someone designed a processor without the C extension, and now they have to rebuild every Linux package without it. The base ISA for Unix-like systems was originally G (IMAFD) without C, and it changed. So the base for Unix systems will be whatever the major distributions decide it will be, regardless of what the RISC-V foundation says. So far they are in alignment. If the P extension ends up costing little area beyond GC, and a bunch of developers want it because of audio/video applications, they're going to want to rebuild everything with it by default - because server chips should just include it anyway.

I'm not saying this exact scenario or the V one will come to pass, just that I expect the standard set of options widely in use will continue to change like it already has.


G was just shorthand to bundle common extensions; it was never defined by the foundation as 'the base ISA for Linux'. That was just what the first released version of the RISC-V spec had in it.

I would assume that the V, B, J and possibly P extensions will all end up in the most used Linux profile.

However, I think RV64GC will still be the relevant fallback that most distros will offer, to cover a very large range of chips.


Is the volume for these specific problem domains ever going to get high enough to make yields worth it? There's a reason commodity CPUs took over.


RISC-V is designed to be universal. That means you have to support everything that is now dominated by MIPS, ARM, x86 and so on. There are lots of markets, and even today we use lots of ISAs.

Let's remember that there are BILLIONS of chips that are internal to companies building other hardware, and all of those should be RISC-V as well.

So if you have that ambition, you need a way to balance standardization and fragmentation. Profiles are a way to define standards for specific use types, primarily so that software standardization can happen on top of these profiles.


If the only issue is the kernel (which I don't know that it is)... I'm not sure there is such a big problem. If these are essentially different architectures, requiring all binaries to be compiled differently, then the problem is much more dire.

I think your points in the video are important - or rather, should be important - but judging from the fragmentation of ARM, it seems they are not as important to chip makers as they should be. However, ARM's fragmentation seems not to have been a serious impediment to its adoption.

I think, though, that going forward the solution may be to target higher-level machines (e.g. the JVM). This obviously is not realistic for the kernel - but for software that most people want to run on RHEL it would most likely be fine.


For RVC the problem is everywhere. However I argue in the video that even just different kernels, bootloaders or out-of-tree drivers are a real problem when you're deploying servers at scale.

ARM has distinctly not been popular in the data center (despite years of work). That's for many reasons but one is surely the fragmentation of the platform.


> ARM has distinctly not been popular in the data center (despite years of work). That's for many reasons but one is surely the fragmentation of the platform.

At least for the server realm, ARM defined the "Server Base System Architecture" standard

> https://en.wikipedia.org/w/index.php?title=Server_Base_Syste...

exactly to avoid fragmentation in an area where customers desire standardization.


Fascinating talk. Feels like every line has been earned with tears and sweat ;)

Can I ask off-topic about the build farm you mention (approx. 12:00)? Is build performance an issue? How long do typical builds take? Would a 5x performance improvement become a must-have feature?


Actually the current build farm, with 3 x HiFive Unleashed boards and a bunch of VMs, is fine (perhaps less so if we had to build everything twice). There are two bottlenecks now: lack of SATA disks on two of the boards (instead we're using NBD), and the Koji build software itself, which introduces a lot of overhead by creating fresh buildroots from scratch for every build.


Is there a reason why you are not cross-compiling?


32-bit ARM also having a problem doesn't mean it's not a problem.


Your link shows exactly how big an issue this is.


> Annoyingly this chip lacks the Compressed (RVC) extension, making it incompatible with all existing Linux distros. These will either have to be recompiled without any Compressed instructions (which increases I-cache pressure on other CPUs that do support it), or we'll need to ship two versions of everything.

Perhaps ARM actually did have a point with their "FUD" campaign against RISC-V:

> https://archive.is/SkiH0

(EDIT: in particular, consider point 3).


And how many mutually incompatible instruction sets have been shipped under the ARM branding? I mean, glass houses and all.


Not really; this is simply a case where the Shakti people were so early that the standard simply wasn't defined yet. It was not even clear when or how things would be standardized.


OTOH Arch provides 29 different ARM packages…


Why did they choose not to have RVC? Also, what happens if one of these groups working on RISC-V decides they need to add a couple more instructions? Is there a group you can send your patches to for review, like with the Linux kernel? I do not understand the compatibility story of RISC-V CPUs!


I was discussing this with someone involved with the project, and it comes down to wanting to handle only 32-bit instructions (RVC instructions are 16-bit; other RISC-V extensions can have >32-bit instructions, although of course if you don't support any of those extensions then you don't have to deal with that). This is a simplifying assumption in the decoder element of their pipeline.

RISC-V does allow you to define other extensions in a methodical and detectable way, and we intend to detect those at runtime (as you would, for example, with something like AES-NI or AVX on Intel). However RVC impacts every bit of compiled code so there's no way to deal with it at runtime (something like that would make all binaries much bigger and would greatly complicate the toolchain).
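
To illustrate the runtime-detection side: on RISC-V Linux the kernel currently reports the ISA string in /proc/cpuinfo (e.g. "isa : rv64imafdc"), so a first approximation looks like the sketch below. That format is informational rather than a stable ABI, so treat this as a sketch only.

    /* Sketch: probe the kernel's /proc/cpuinfo "isa" line to see
       which single-letter extensions the CPU reports. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[256];
        FILE *f = fopen("/proc/cpuinfo", "r");

        if (!f)
            return 1;
        while (fgets(line, sizeof line, f)) {
            if (strncmp(line, "isa", 3) == 0) {
                const char *isa = strchr(line, ':');
                printf("%s", line);
                printf("compressed (C): %s\n",
                       (isa && strchr(isa, 'c')) ? "present" : "absent");
                break;
            }
        }
        fclose(f);
        return 0;
    }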


I like how they just want to process single-length (32-bit) instructions. That's so much easier to build debuggers/assemblers etc. for. I think it's a good move for allowing people to write a custom OS for it. With the more complicated aspects of generating machine code gone, it's easier to write targeted compilers, assemblers, debuggers and so on. I find this a very interesting choice!


It's probably more about the hardware's instruction decoder. The RVC code has unaligned 32-bit instructions, and that greatly increases the complexity at instruction decode time. Now you have to worry about crazy stuff like an instruction that straddles two different pages.
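
For concreteness, the length rule the decoder has to apply once RVC is allowed looks roughly like this (a simplified sketch of the encoding scheme in the base spec; 48-bit and longer encodings nest further):

    /* Instruction length from the low bits of the first 16-bit
       parcel, per the RISC-V base encoding scheme (simplified). */
    #include <stdint.h>
    #include <stdio.h>

    static int insn_length(uint16_t first_parcel)
    {
        if ((first_parcel & 0x3) != 0x3)
            return 2;   /* bits [1:0] != 11: 16-bit compressed (RVC) */
        if ((first_parcel & 0x1c) != 0x1c)
            return 4;   /* bits [4:2] != 111: standard 32-bit */
        return 6;       /* 48-bit+ encodings (not handled here) */
    }

    int main(void)
    {
        /* 0x4501 is c.li a0,0 (16-bit); 0x0513 is the low parcel of
           addi a0,zero,0 (32-bit). */
        printf("%d %d\n", insn_length(0x4501), insn_length(0x0513));
        return 0;
    }

So with RVC a 32-bit instruction can start 2 bytes before a page boundary, which is where the straddling headache comes from.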


I'm surprised to hear this; I would have expected a 16-bit nop to be necessary to align 32-bit instructions. Is there a summary of the discussion about this choice?


The spec calls out code density as the reason.

> The standard compressed ISA extension described in Chapter 14 reduces code size by providing compressed 16-bit instructions and relaxes the alignment constraints to allow all instructions (16 bit and 32 bit) to be aligned on any 16-bit boundary to improve code density.


The only solution I see is to launch a JIT translator (cough QEMU cough) whenever an illegal instruction exception pops, and hope for the best. Certainly not the best choice not to support RVC when everybody else does.
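
A user-space cousin of that idea would be trap-and-emulate: catch SIGILL, interpret the compressed instruction, and resume. A purely hypothetical sketch, assuming riscv64 glibc's ucontext layout, with emulate_rvc() as an invented placeholder rather than any real API:

    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <ucontext.h>

    /* Stand-in for a real decoder: each of the ~40 RVC instructions
       would update uc->uc_mcontext.__gregs[] to mimic the hardware. */
    static void emulate_rvc(ucontext_t *uc, uint16_t insn)
    {
        (void)uc; (void)insn;
    }

    static void sigill_handler(int sig, siginfo_t *si, void *ctx)
    {
        ucontext_t *uc = ctx;
        uintptr_t pc = uc->uc_mcontext.__gregs[REG_PC];
        uint16_t parcel = *(uint16_t *)pc;

        (void)sig; (void)si;
        if ((parcel & 0x3) != 0x3) {   /* low bits != 11: RVC */
            emulate_rvc(uc, parcel);
            uc->uc_mcontext.__gregs[REG_PC] = pc + 2;
            return;
        }
        fprintf(stderr, "genuinely illegal instruction at %#lx\n",
                (unsigned long)pc);
        exit(1);
    }

    int main(void)
    {
        struct sigaction sa;

        memset(&sa, 0, sizeof sa);
        sa.sa_sigaction = sigill_handler;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGILL, &sa, NULL);
        /* ... run code that may contain RVC instructions ... */
        return 0;
    }

Since compressed instructions typically make up around half of a distro's code, nearly everything would trap, which is why recompiling is the realistic answer.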


In a sibling comment "neuron_clash" states that when work on shakti started RVC was considered an optional extension, and by the time the community decided RVC should be mandatory it was too late for shakti to support it.


What are you talking about, bro? It's already running on Ubuntu.


What does that have to do with anything? AIUI they have compiled their own distro (which is not Ubuntu). It leaves the rest of us with the problem of how (or even if) we support this chip. Do we offer two versions of Fedora, with twice the storage, twice the build time, and toolchain changes to support these two architectures? Or do we compile Fedora without RVC, impacting performance on chips which do support RVC? This is an unfortunate (but long-predicted) side-effect of not making RVC mandatory for Unix-like RISC-V systems.


If you are referring to that screenshot in the article, that is just the PC it's connected to. If you look closer at the terminal you can see it's running Buildroot.



Another presentation from the team: https://content.riscv.org/wp-content/uploads/2018/07/1100-19...

The project lead looks to be running a semiconductor company, and they say they have an NDA with the fab company.


Thanks; this article was pretty lackluster, to be honest.


Few things excite me as much as RISC-V. Hardware is in dire need of major disruption, and it's starting to happen, bit by bit. Hardware manufacturing needs disruption as well, but that's an ultra-hard mountain to climb, for now.


If others are curious what Shakti means, as I was, here's the Wikipedia summary:

> Shakti is the concept or personification of divine feminine creative power, sometimes referred to as “The Great Divine Mother” in Hinduism. As a mother, she is known as “Adi Shakti” or “Adi Parashakti”. On the earthly plane, Shakti most actively manifests through female embodiment and creativity/fertility, though it is also present in males in its potential, unmanifest form.[3] Hindus believe that Shakti is both responsible for creation and the agent of all change. Shakti is cosmic existence as well as liberation, its most significant form being the Kundalini Shakti, a mysterious psychospiritual force.[4][5]

> In Shaktism, Shakti is worshipped as the Supreme Being. Shakti embodies the active feminine energy of Shiva and is synonymously identified with Tripura Sundari or Parvati.

https://en.wikipedia.org/wiki/Shakti


In everyday speech, it generally means power/strength.


They called it 'power' because initially they wanted to use the POWER ISA rather than RISC-V. Luckily they switched over.


Why luckily? What's wrong with POWER?


Because RISC-V is an open ISA. POWER would have been 'made in India' in the sense of 'your patents are not valid here', and would not have been usable outside of India.

RISC-V also is becoming the standard for research and the open-hardware community.

Now the Indians can share and collaborate with the open-hardware community.


Isn't it the case that RISC-V is only open for non-commercial use, and that for commercial products it still has licensing requirements attached to it?


No. I'm sure there'll be some cores with that license (if there aren't already), but the ISA itself is BSD licensed.


RISC-V was made by people at Berkeley and then released through a non-profit foundation. It is completely free and open to use.

There might be specific implementations that could be licensed as you describe, but I am not aware of any at the moment.


Not (as) open!


People should really look at the Shakti program. It is really quite cool, and because it releases everything as open source, we have a real shot at getting all kinds of advanced processors and other free IPs.

There was a workshop at Chennai:

https://riscv.org/2018/07/risc-v-workshop-in-chennai-proceed...

Also, this is an old video but gives the basic information for the project: https://www.youtube.com/watch?v=OoxOzvf78uQ

This is one of the leads: https://news.ycombinator.com/user?id=gsmadhusudan

I think I will repost his comment on Shakti; more people should see it. It was an answer in a thread about Shakti being an 'ARM killer':

------------------------------------------------------------------------------------

As the lead architect of Shakti and the guy who helped kick-start the project, I figure I am owed my 2 cents!

1. We never positioned it as an ARM killer! That was the imagination of the reporter who wrote the article.

2. Shakti is not a state-only project. Parts of Shakti are funded by the govt; these relate to cores and SoCs needed by the govt. The defense and strategic sector procurement is huge, running in the 10s of billions of USD. There is significant funding in terms of manpower, tools and free foundry shuttles provided by the private sector. In fact Shakti has more traction with the private sector than the govt sector in terms of immediate deployments.

3. The CPU ecosystem, including ARM's, is a bit sclerotic. It is not the license cost that is the problem, it is the inherent lack of flexibility in the model.

4. Shakti is not only a CPU. Other components include a new interconnect based on SRIO and GenZ with our extensions, accompanied by open source silicon; a new NVMe+ based storage standard, again based on open source SSD controller silicon (using Shakti cores of course); an open source Rust based MK OS for supporting tagged ISAs for secure Shakti variants; fault-tolerant variants for aerospace and ADAS applications; and ML/AI accelerators based on our AI research (we are one of the top RL ML labs around).

5. The Shakti program will also deliver a whole host of IPs, including the smaller trivial ones and, as needed, bigger blocks like SRIO, PCIe and DDR4. All open source of course.

6. We are also doing our own 10G and 25G PHYs.

7. A few startups will come out of this but that can wait till we have a good open source base.

8. The standard cores coming out of IIT will be production grade and not research chips.

And building a processor is still tough these days. Try building a 16-core, quad-wide server monster with 4 DDR4 channels, 4x25G I/O ports and 2 ports for multi-socket support, all connected via a power-optimized mesh fabric. Of course you have to develop the on-chip and off-chip cache coherency stuff too!

9. And yes, we are in talks with AMD about using the EPYC socket. But don't think they will bite.

Just ignore the India bit and look at what Shakti aims to achieve; then you will get a better picture. I have no idea how successful we will be and I frankly do not care. What we will achieve (and have to some extent already) is:

- create a critical mass of CPU architects in India

- create a concept-to-fab ecosystem in India for designing any class of CPUs

- add a good dose of practical CPU design knowhow into the engineering curriculum

- become one of the top 5 CPU arch labs around

Shakti is already going into production. The first design is actually in the control system of an experimental civilian nuclear reactor. IIT is within the fallout zone so you can be sure we will get the design right. If you want any further info, mail me. My email is on the Shakti site. G S Madhusudan


Things are very different in the Indian semiconductor industry compared to China:

1. Availability of professionals

India: produces tons of electronics engineers and semiconductor specialists, but very, very few of them find employment in the country.

China: there is a somewhat OK supply of undergraduate cadres, but for anything above this, you have to attract people from abroad. And yes, Chinese fabless companies were hiring from abroad since the very beginning. In fact, the people who make SoCs at Allwinner, Rockchip, etc. are around 50% undergrad and 50% master's level people. In their early days they were eager to hire random college grads and teach them Verilog on site.

2. Goals

India: a research program; all work in the past few decades has been about delivering some kind of proof-of-concept-level "national chip".

China: make money quick - 9 out of 10 Chinese fabless companies start with bog-standard, off-the-shelf "solutions" from ARM, and add some flavour: here you have a 4-channel camera controller, here eDP on chip, and here 10G Ethernet for pennies.

3. Markets

India: with all respect, the truth is there are none. And from many people I hear the same criticism - even if the 10th state-backed effort in a row to make the "national chip" succeeds, there is no chance of it ever sustaining itself on the microscopic domestic market demanded by political mandate.

China: foreign markets - even 15 years ago, Chinese fabless companies understood well that their value proposition was actually smaller in the domestic market than in export manufacturing. Most Chinese buying a PC 20 years ago were not deliberating whether their PC had a Sigmatel audio codec or some cheaper domestic analogue, but for somebody making stuff for export, every penny saved on an expensive imported chip mattered a lot. Even today the pattern holds: Chinese domestic-market smartphone models mostly have high-end Qualcomm or Samsung flagship-class chips, and for export they use Mediatek, Allwinner and Spreadtrum.


I think the difference could be that now they are playing in the RISC-V space, which is developing into a global movement.

Now the Indian national program is developing and working on the same stuff that many Silicon Valley startups and Western universities do.

We are seeing something really exciting in the works, and many companies in China also see the potential.


The biggest difference is that China can actually deliver. They built their own top supercomputer on a new architecture.


> open source Rust based MK OS for supporting tagged ISAs for secure Shakti variants

Oy! Got a link?


I don't think it has been released. You might want to look at the lowRISC tagged-ISA stuff that has been released; Shakti and lowRISC will work together on some of that. You can boot that on an FPGA.

However, the Rust-based OS is not there yet so far as I know.


Thanks for sharing this technical information.

Unfortunately, I'm not the kind of engineer to jump in and help build CPUs.

Makes me feel proud of the work at least partly from the city I grew up and studied in - Chennai, India.

Also very proud of the humble explanation of progress, and sensible goals. One could easily imagine a project like this getting distracted by PR and chasing headlines instead of making technical progress (remember the cheap laptop competitor to OLPC?).

I cheer for this project and the people involved in it with all my heart.


Thanks for sharing this comment!

I wonder if part of the govt's interest comes after discoveries of backdoors in US- and China-based processors: a national security motivation to develop indigenous manufacturing.

Congrats to this team. Great project!


That may be one. But I think another strong motivation is the threat of sanctions from the US. I don't know the specifics, but even as recently as today, there was news about the US wanting India to stop buying oil from Iran or face sanctions (https://economictimes.indiatimes.com/news/economy/policy/no-...).

There is a movie on Netflix called "Parmanu" (https://en.wikipedia.org/wiki/Parmanu:_The_Story_of_Pokhran) that talks about how India was pressured by the US back when it wanted to test nuclear bombs. There are also countless other stories, including how the US tried to stop India from buying cryogenic engines (https://timesofindia.indiatimes.com/india/India-overcame-US-...) and a GPS-related incident in the Kargil war with Pakistan (https://timesofindia.indiatimes.com/home/science/How-Kargil-...).

This post on Reddit also has some good info: https://www.reddit.com/r/india/comments/27l015/what_fuels_in...


This is excellent news.

I wonder how long it will be until we can buy a full raspi-like platform that has no proprietary chips or firmware.


Current RISC-V chips are still proprietary; it's only the ISA that is free.

As for firmware, that's already pretty open with ARM.

You can boot a Tegra X1 (Google Pixel C, Nintendo Switch) without any blobs with coreboot. (You still want one tiny blob — DDR4 training stuff — if you want your RAM not to be super slow.)

Allwinner and Rockchip platforms also tend to not have blobs, at least not user visible/modifiable ones.

You can even boot a Raspberry Pi without blobs https://github.com/christinaa/rpi-open-firmware (but without usb/ethernet/etc.)


> Current RISC-V chips are still proprietary

Whenever people make statements like 'RISC-V chips', you know it's wrong, because there is a broad range of chips from many different people.


Even if there were open RISC-V designs out there you'd need a way to make sure it's actually the design that's etched on the chip. It seems like a hard problem to solve. Even FPGAs (and their closed source toolchains) could be backdoored.


Super-high security is just one application, and not needed for most applications. Also, taping out open designs is still far more secure than anything else.


So then I'm sure you can name a counter example? What fully open chip can I buy?


What fully open integrated circuit of any kind can you buy? Maybe there exist simple 74xx chips with completely open designs, taped out using open source tools, manufactured using completely transparent processes, and then independently verified to prevent backdoors? I'm not aware of any, but I suppose the military have something like this.

Anyway depending on your level of trust at the moment you could:

* Download rocketchip (BSD licensed) and run it on an FPGA. You'd have to trust the FPGA vendor and their proprietary tools like Vivado. This lets you run Linux, at a speed of about 50 MHz.

* Run PicoRV32 on the reverse-engineered Lattice 8K FPGA using the open source toolchain. You cannot run Linux on this, but you can run short C programs without libc, and given that Clifford has done a very good job fully understanding the FPGA we can be reasonably sure there are no backdoors. My experiments with this are here: https://rwmj.wordpress.com/tag/icestorm/


Of any kind... I think the uA741 is about as open as any IC design can possibly be at this point.

Still available from many places. Here is one. http://www.ti.com/lit/ds/symlink/ua741.pdf


Shakti, Pulp platform, OnChip and others.


What do you think of the HiFive Unleashed [1]? I don't have one, but they have claimed [2] that the entire boot process is free and open source.

1. https://www.sifive.com/products/hifive-unleashed/

2. https://forums.sifive.com/t/who-should-buy-the-hifive-unleas...


Looks very cool, I'd not heard of this before!

I can't see where you can actually buy one, or how much it costs, but it looks like a project to keep an eye on.


Weird, there's something up with their Crowd Supply page. I saw the price there the other day (in the right hand panel that is currently empty):

https://www.crowdsupply.com/sifive/hifive-unleashed

The expansion board still lists a price:

https://www.crowdsupply.com/microsemi/hifive-unleashed-expan...

From memory, the HiFive Unleashed was on the order of $1000. The FAQ discusses why the price is high, they're planning for it to come down as demand ramps up.

There's also an Arduino-like embedded board:

https://www.crowdsupply.com/sifive/hifive1


The price for the initial batch was $999 a piece. Unless you can buy one from one of the initial backers I think you will have to wait for them to produce more of them.


> no proprietary chips

This is a huge boundary to cross, and I'm not convinced that it's actually worth doing.


Once somebody builds a competitive modern semiconductor process themselves, owns it outright, and is willing to publish everything about it.

So, never.


lowRISC[1] has been working on this for a few years now. I think one of the lowRISC people worked on the Raspberry Pi as well.

[1] https://www.lowrisc.org/


What is that supposed to mean? Did they design the VideoCore 4 itself? Slapping proprietary ARM cores onto an existing proprietary design to make a crippled SoC isn't that useful if your goal is to make an SoC from scratch with a new ISA.


lowRISC is the latter.


Previous discussion, about 8 months ago: https://news.ycombinator.com/item?id=15684225


Sorry, but this is not the same discussion. The article you referred to was an introduction to the Shakti project, but now Linux is booting on the CPU.

Two very different things.


I do not take these links to mean, "why are you posting this, we have already discussed this", but as "here's related conversation that you might find interesting and relevant".

Such crosslinking is a true service.


Ah, I was inadvertently conflating that with this. Thanks for your clarification; as you have pointed out, it most definitely is quite valuable.


The article misspells it as 'Shakrti' in the body. Posting here because there is no comment section on the blog.


So it appears that this is taped out using Intel's 22nm FinFET process.

https://twitter.com/ShaktiProcessor/status/10232743804309217...


What's the price point of one of these processors? The article doesn't seem to be clear on this.


From what I can gather, this is only at the test stage and has not been fabricated yet. They are testing this using an FPGA.


No, it's a test chip manufactured using Intel's 22 nm process:

https://twitter.com/ShaktiProcessor/status/10232743804309217...


This is the best news I have seen so far.



