FPGA Interchange format to enable interoperable FPGA tooling (googleblog.com)
259 points by teleforce on Feb 12, 2022 | 53 comments



I'm really interested in this subject. What with it being my job. I've thought about this a lot, and the fundamental problem for me is that you have RTL languages, but these don't mean anything; they're a cruel joke on engineering students: "Oh sure, you can do that in this language, but does it map?"

The problem is that really it's only a tiny subset of RTL languages that are supported by FPGAs, and an even tinier subset that's optimal. God alone can write 10 lines of RTL that'll map to DSP blocks utilizing the carry chain. The problem isn't that the languages don't support multiple targets, it's that the targets are so idiosyncratic that you have to know exactly how it maps.
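
(To make that concrete, here's a minimal sketch, in plain Verilog, of the kind of RTL people write hoping for a DSP block: a registered multiply-accumulate. Whether it actually lands in a DSP48/MULT18, let alone uses the dedicated cascade and carry paths, depends entirely on the target part and the synthesizer.)

    module mac #(parameter W = 18) (
      input  wire                  clk,
      input  wire signed [W-1:0]   a, b,
      output reg  signed [2*W+3:0] acc   // extra bits of accumulation headroom
    );
      // The canonical "please infer a DSP" idiom: a registered multiply-accumulate.
      always @(posedge clk)
        acc <= acc + a * b;
    endmodule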

Now, what this format promises to do encompasses describing the resources on a device, but there are a few problems with this. Firstly, they're incredibly complex in terms of trade-offs (you can do X, Y, and Z, but X only in A mode, Y in all modes but B, and Z but the timing of N is off). The likely result of this is a set of simplified rules that make it tractable, but not performant. Secondly, since the only way of getting optimal hardware is to design it for one target, an interchange format seems like nonsense. It's exactly like RTL - it describes everything but doesn't functionally help you translate between targets at all.

Fundamentally this just doesn't seem like it solves anything; it seems to say "Hey guys, adopt our standard for all your stuff!", but why? They don't seem to be offering anything of value.

If I were being particularly glib, I'd wish a swift promotion to the guys at Google who authored this.


All of this. There is a reason why nextpnr, the project mentioned in the article, uses custom code for every architecture instead of a fixed database. There is also a reason why almost everyone uses nextpnr instead of the alternatives that use such a single-format chip database. It is an approach that, according to all current evidence, lends itself well to experimentation and research, but does not scale to large and complex real-world architectures.


Except that nextpnr is one of the tools supporting the interchange format and you can see the status at https://symbiflow.github.io/fpga-interchange-tests/


If by the alternative you are referring to VPR, you are wrong. There are both public and non-public FPGA tool chains based on modified versions of VPR (and some of them are far more complex than what is currently handled by nextpnr).


This isn't quite how nextpnr is architected. Yes, there is architecture-specific code for each targeted FPGA such as ECP5, Gowin, Nexus parts, etc. But those architectural components are largely descriptions of the physical components that exist on the chip, and how they're wired together. Each architecture does implement a basic interface for describing logic elements. But it's not like there's a separate placer or router for each architecture that's hand-tuned to the primitives. The placer and router have to work in the abstract on some basic logical unit for any code reuse to occur at all! Go look at the ECP5-specific code in nextpnr, for instance; it's all actually very small and there's nothing too magical about it. It has some device-specific passes, but that's not really unusual (a software compiler might have broadly generic APIs for all machines, and then some machine-specific passes, too). Most of it is just "this is what a logical element looks like" or "what I/O standards are valid for pin XYZ" and other rote stuff. Some of this is boilerplate or chaff and some of it is actual wheat, but it's not, like, earth-shattering. Any compiler developer would think it's normal...

In short: each architecture has some code for it, and often a lot of it is boilerplate. Some of it is for specific purposes (some devices might have specific rules about promoting clock lines or global device objects, whatever, and you might put them there). But the generic descriptions of what the place-and-route algorithms work with, the "Basic Elements of Logic" (BELs) - that's exactly what this project aims to solve. It describes how an Artix-7 or an ECP5 device can theoretically be mapped to these basic place-and-route abstractions. All EDA tools work this way... nobody would rewrite a placer and router from scratch for every target architecture or PDK for every client they have. They all work at some level on an abstract description of a physical layout, and the vendor brings the real meat to the table and wires them together. The question of API design, and where to draw the line between those two, is, well, software engineering. So it's not like there will be magical zero-effort plug-and-play between nextpnr and some random architecture. It just means that the integration work that must happen is now easier. That's it.

So this does not make technology mapping or anything like that more effective. It's just an efficient binary format for describing the FPGA wire layout with enough fidelity to do things like place-and-route effectively. That's all; it doesn't really improve expressiveness, fmax, or anything like that. It's a developer-velocity thing, so that vendors and tool developers get a little more stuff "for free" than before.

Honestly, as long as this lets me use nextpnr for Xilinx devices easily, I'm fine with it. As the maintainer of the nextpnr package for a package manager, there might also be ancillary benefits for me, but I don't think those matter much.


I'm an FPGA noob, but I really like GHDL. It also has a feature to check that code is synthesizable and translate it down to a VHDL-93 netlist, so you can use modern features in your VHDL codebase that proprietary synthesizers may not support.


I worked with FPGAs for my master's thesis (specifically, scheduling workloads for a runtime-reconfigurable FPGA). The bad taste of the proprietary tooling is what made me abandon the field.

I've always kept an eye on the GHDL and gEDA projects, but open progress has been glacial. The Icestorm project finally opened up FPGAs enough for me to want to work with them, but it came 15 years too late.

I agree with you that designing for FPGAs hasn't become any easier since the introduction of hardware accelerators inside the chip (multipliers, block RAMs, etc.), but the lack of standardization has been hurting the field for more than 20 years. Even upgrading a project from Xilinx $chip to Xilinx $chip++ was a multi-week project.


Your comment is insightful. But what would be the solution? IP blocks for everything? Smarter compilers/mappers? A dominant FPGA vendor who would force a standard for everyone to follow? Or are you content with the status quo? I mean, I don't see why an FPGA expert wouldn't be :)


Perhaps this is why FPGAs are still relegated to mostly prototyping tools, or niche use cases where such low level reprogrammability is needed.

For mainstream use and high perf/low power you're maybe still better off doing custom synthesis.


I'm not quite sure I'm parsing this right but I thought FPGAs were pretty widely used. If you're not using an FPGA then you either need some other general purpose hardware or you need an ASIC. An ASIC is a multi-million-dollar investment. There are many applications that need "hardware performance" but don't justify manufacturing an ASIC.

As the parent says, you're likely to be targeting a specific FPGA for your application.


> I'm not quite sure I'm parsing this right but I thought FPGAs were pretty widely used.

They are very widely used. Some iPhones used an FPGA; the iPhone 7 had a small Lattice FPGA. https://www.ifixit.com/Teardown/iPhone+7+Teardown/67382


"FPGA" is a big space, like "processor". There are big server CPUs, and tiny microcontrollers.

The FPGA in that iPhone is the FPGA version of a microcontroller. Those are very popular, as glue logic, and for kludging in emergency fixes after production has already begun. They are also not terribly difficult to program; you hardly even need an HDL, much like you often don't really need a compiler to program a microcontroller.
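
(For a sense of scale, "glue logic" here means a handful of lines like this - a made-up example of a chip-select decode and a polarity fix:)

    module glue (
      input  wire [3:0] addr,
      input  wire       irq_n,           // active-low interrupt from a peripheral
      output wire       cs_ram, cs_uart, irq
    );
      assign cs_ram  = (addr == 4'h2);   // decode a couple of chip selects
      assign cs_uart = (addr == 4'h6);
      assign irq     = ~irq_n;           // the host wants active-high
    endmodule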

All the "high level synthesis" pfaff is aimed at the big gigantic $20,000-per-chip FPGAs. Those have mostly been a big fat failure, except for a few big military and cryptocurrency-mining sales. They are wonderful for prototyping ASICs, but that application doesn't sell enough chips to be economically meaningful.


> If you're not using an FPGA then you either need some other general purpose hardware or you need an ASIC. An ASIC is a multi-million-dollar investment.

And if you are using an ASIC, most of the design is usually carried out in an HDL, just like an FPGA design. The end result may be different, but the process is similar.


I do not think it needs full optimality to be useful, though. Because the tools are open source, I can see a streamlined SpinalHDL-et-al.-to-bitfile flow becoming a reality. You can finally have performant, functional languages supported by tools. Because, as someone who uses them every day: VHDL, Verilog, and SystemVerilog are not good.


The problem is not that you need a unifying language to do this; you already have two, VHDL and Verilog, which are mature. The problem is, and has always been, that vendors don't want to release their tools. It's like saying only one assembly language should exist. Why? There can be multiple versions of assembly, but the vendors of these things need to release a version of LLVM or GCC that allows people to build for their devices without having to pay tens of thousands of dollars to do so.

Players like Xilinx have held back their compilers, instead selling them and using lawsuits to prevent people from reverse-engineering their IP. This hasn't stopped people, or lesser players like Lattice, from doing it, but it's definitely slowed things down. I think open sourcing or releasing the compilers is just kind of forced, because what new company would buy a compiler if they can get one for free (Yosys)?


HDLs like VHDL and Verilog are at a different level. If HDLs are like C, the interchange format is more comparable to a .o file.

That being said, EDIF already exists.


We've commonly used Verilog (just instantiating gates) to do this - partly because for a period (i.e. the '90s) we didn't really trust synthesis tools to get it right, and we always ran gatesim before we taped out, just in case.

My crowning-glory hack from that era was a Perl script that read Verilog netlists and Lisp placement data between the place and route stages of the P&R tool and did location-dependent scan insertion, producing a netlist and the Lisp instructions to the P&R tool to fix it up.


You can pretty well represent low-level netlists in Verilog.
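
(For instance, a 1-bit full adder as a purely structural netlist - nothing but Verilog gate primitives and wires, no behavioral code at all:)

    module full_adder (input a, b, cin, output sum, cout);
      wire axb, ab, cin_axb;
      xor g1 (axb, a, b);          // a ^ b
      xor g2 (sum, axb, cin);      // sum  = a ^ b ^ cin
      and g3 (ab, a, b);
      and g4 (cin_axb, axb, cin);
      or  g5 (cout, ab, cin_axb);  // cout = a&b | cin&(a^b)
    endmodule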

EDIF is big and complex, and there are different, only partially compatible versions in parallel use.


On a semi-related note, Google has also been investing in improving SystemVerilog support in open source tooling -> https://opensource.googleblog.com/2021/09/open-source-system...


HDL can really be either. I would compare RTL to assembly, and gate netlists to .o files.
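
(E.g. at the "assembly" level, a 1-bit full adder is a single behavioral line that a human still writes and reads; the ".o"-like gate netlist of the same design would be an explicit list of gate or LUT instances and the wires between them. A sketch of the RTL side:)

    module full_adder_rtl (input a, b, cin, output sum, cout);
      // One behavioral line; synthesis lowers this to gates or LUTs.
      assign {cout, sum} = a + b + cin;
    endmodule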


Xilinx is a software company.

They consider their chips to be the copy-protection dongles for the software.

It's nuts, and it infuriates me, but looking at it this way explains a whole lot of their behavior.


Soon: there are 15 different formats for passing netlists around (oblig. https://xkcd.com/927/)

I have very little hope for any netlist format that comes from Yosys. Not only has it changed a couple of times in recent years, but the latest iteration is some JSON-based monstrosity that has to be by far the most inconvenient netlist format ever created (see https://github.com/nturley/netlistsvg/blob/master/test/digit...).


https://www.chisel-lang.org/

Remember when this was going to bring it all together?


Did any project other than Chisel make use of FIRRTL? https://github.com/chipsalliance/firrtl


Like a few other people have said, EDIF has existed for decades and it works and it's open.

Of late there seems to be a lot of reinventing of things that have already been solved, except with some use of JSON, Python, etc. It feels like people solving problems they find interesting rather than solving problems that are necessary.


Since I was curious myself to see what was out there for EDIF, I stumbled upon this -- an open-source C++ library to read/write EDIF (I haven't used the library nor do I have any affiliation with the project):

http://torc-isi.github.io/torc/index.html


Indeed, in the mid 2000s I worked on building tool flows in a multi-vendor environment with EDIF as the common netlist format. It worked fairly OK, with some tweaking.

I synthesized complex modules to EDIF/NGC using Synopsys Design Compiler, Xilinx XST and Mentor Precision and stitched them together via NGDbuild targeting Virtex 4 FPGAs.


Fair enough to create a device/vendor independent format for describing FPGA resources, but why not use EDIF [1] for the logical and physical netlists?

[1] https://en.wikipedia.org/wiki/EDIF


Or BLIF or EBLIF (already used by the verilog-to-routing flow), or structural verilog, or a dozen more. But no, let's invent yet another netlist format.

The fact that these existing formats are all named "xxx logic interchange format" should give you an idea what will happen to this new "interchange format". It will be used mostly to interchange with itself.

From what I gather (https://github.com/SymbiFlow/python-fpga-interchange/blob/ma...), the new format is basically close to the existing Yosys/nextpnr JSON format, except dumped as a Cap'n Proto binary file. I am not impressed -- this is even more inconvenient than a huge JSON file.

I guess the meat here is on the universal device resources format anyway...


A few reasons why EDIF might not be a good candidate for this:

1) The standards body that controls EDIF no longer exists, which could limit the ability to evolve and improve it.

2) Large placed-and-routed FPGA designs contain a lot of data, and rendering it to EDIF would just slow down the process of transferring, parsing, etc. Binary is the way to go (except for debugging; the FPGA Interchange Format offers a human-readable version for that). As a comparison, some of the proprietary vendor design formats can take tens of minutes to load a full design.

3) Expressing the details necessary to reconstruct a placed and routed FPGA circuit would not naturally flow from EDIF. EDIF is good for describing a logical netlist. However, a placed and routed FPGA implementation is the mapping of a logical netlist onto the physical netlist of the FPGA. Building a format suited to FPGA architecture makes expression of these constructs easier and more efficient.

4) Writing and maintaining an EDIF parser/de-parser creates more work than using a schema-based code generator. Although work is still involved when interfacing the tool to the format, it removes a pile of issues associated with text-based formats.

[edit: fixed typo]


All of these are good points, but they miss a key thing.

The interchange format includes a computer-readable description of the FPGA, which tells the tooling what resources are available inside the FPGA and how they can be connected.


Cadence et al. came up with "OpenAccess" for circuit design databases in the early 2000s. Needless to say... it's not very open. Hopefully this is different, but I'm not holding my breath.

"OpenAccess is a proprietary API controlled by the OpenAccess Coalition" https://en.wikipedia.org/wiki/OpenAccess


That's because they did it to counter Artisan's "free IP".

Artisan is now off the scene, and they have no incentive to do it properly anymore.


Note: I support silicon design at Google, but made no contributions to this project.

I'd love to see an open standard be widely adopted in this space. In my daily work, open standards (Si2.org formats and SDC in particular) drive meaningful competition and move the industry forward.


Is SDC actually open? The company whose name is the first letter of the acronym is known to be litigation-happy, and has successfully sued smaller players into oblivion just for trying to interoperate. The larger players never care about this.


You can download some generic specs from Synopsys under an "open source license agreement" (hah) for various things including SDC and Liberty.[1] I think it even comes with a generic set of Tcl definitions for things like set_false_path, create_clocks, etc. that is intended to be integrated by EDA toolmakers themselves into whatever they're making, I guess.

It's not like it's all that uniform between the vendors; it's just a generic set of limited Tcl APIs, but I guess it doesn't matter if they can just bully you anyway.

[1] https://www.synopsys.com/community/interoperability-programs...


Potentially relevant: AMD Receives Approval for Acquisition of Xilinx[1] and HN discussion[2]

[1] - https://www.amd.com/en/press-releases/2022-02-10-amd-receive...

[2] - https://news.ycombinator.com/item?id=30285837


OCP ODSA work on chiplets, https://www.opencompute.org/events/past-events/odsa-chiplet-...

Could the upcoming TSMC fab in Arizona be used for FPGAs which require US mainland provenance?

OpenFPGA, https://osfpga.org/introduction-to-openfpga-a-fully-customiz...

> The Open-Source FPGA Foundation offers a set of free and open-source tools enabling fast prototyping for FPGA chips and automated EDA support, through open standard collaboration. The Foundation aims to set FPGA companies free from engineering-labor intensive process in producing FPGA chips, give full freedom for software developers when customize FPGA software stacks and provide an open collaboration for FPGA end-users to implement high-quality designs.


Semi-related but what is the best intro to FPGAs, their pros/cons and how to program/interact with them?

I have a background in software engineering so I'm fairly comfortable with that part but looking for a sideline into FPGAs.


>By focusing on the only architecture type in mainstream use on the market today, namely island-based (also called tile-based) FPGAs, the standard reaches a level of universality and conciseness, which makes it easy to work with, adopt, and implement.

This implies that there are other architectures this group looked at and felt did not need to be addressed in this announcement. What are they, I wonder?


The other popular architecture is the CPLD (complex programmable logic device) which grew out of the PALs (programmable array logic) of the late 1970s. Altera pioneered the CPLD and Xilinx the FPGA in the mid 1980s.

Until recently, "FPGA" was used for tile-based architectures and "CPLD" for macrocell architectures, but marketing has redefined the terms such that small FPGAs are now called CPLDs if they are non-volatile (they are already configured when you turn them on).


I think it might be trying to explain the difference between a format for use when building ASICs and one that targets FPGA architectures. With an FPGA, there is a structure (the tile) that contains some RAM, some LUTs, and other blocks. The LUTs are reconfigurable, and so is the wiring, but otherwise the FPGA is mostly fixed. This is quite different from an ASIC, where there are more cell types (flip-flops, logic gates, FETs, etc.) and they can be freely placed in the design.
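
(As a rough sketch: the unit of currency in such a tile is a LUT feeding a flip-flop, so behavioral Verilog like this would typically be covered by exactly one 4-input LUT plus one FF:)

    module one_tile_worth (input clk, a, b, c, d, output reg q);
      // 4 inputs -> 1 output: fits a single LUT4...
      always @(posedge clk)
        q <= (a & b) | (c ^ d);   // ...registered by the tile's flip-flop
    endmodule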


Pretty sure it’s just PR/HR gibberish to justify why this project is so acutely aware of the Xilinx chips they’re supporting.


There is a nice discussion[1] about the same idea in SymbiFlow[2]. I hope they at least read through it.

[1] https://github.com/SymbiFlow/ideas/issues/19

[2] https://symbiflow.github.io/


I don't think it's the same. That proposal seems to be an intermediate format that VHDL and Verilog (and things like MyHDL/SpinalHDL) would compile to. The FPGA Interchange Format seems to tackle a more compelling challenge: a format for defining the routed (or maybe partially routed) design (actually, it seems like a general-purpose netlist that can describe the design both before and after place & route).


What's funny about this is that it exists, but is owned by Xilinx... In the early '90s, NeoCAD created multi-vendor P&R tools. Xilinx bought them, and their software became the basis for Foundation and then ISE.

Lattice also uses the NeoCAD tools for their FPGAs (at the time, they were AT&T's ORCA FPGAs). Basically, Lattice Diamond and ISE are almost the same tool.


General hints based on history:

Car - you need a Model T
Computer - you need an OS/360, and later MS-DOS
OS - you need OS/360, Unix/Linux, and MS-DOS, then Windows
Mobile - you need Apple, then Android

The key sign of immaturity is that when you start this, no one can tell you whether to start with the ECP5 or with some other chip. And that is the problem. You need to start with one, or better two, dominant players in a free-to-choose environment.

Waiting for the market to mature.


Should have used XSD. ARM, STM, TI, and AUTOSAR already use XML-based formats.


Has anyone tried the tooling of inaccel.com?


Google: "So ... we're loosing search ... and people don't trust our email ... and we killed Reader ... Android is falling to bits ... but we have money, and we need a new business ... ... lets get into FPGAs!"


I down-voted you and I want to explain why. First, the authors of the article are from Antmicro, not Google. They partnered with Google in this effort.

But more importantly, your comment is easy cynicism and doesn't offer any new information or insight. I'm guilty of it myself, and when I recognize it in myself I try to resist. All language is open to interpretation, especially written language. Dialog is more productive if one tries to assume the most charitable reading instead of the worst.


Fair enough - it was quick cynicism - apologies for my quick reaction. Not really up to Hacker News standards.

For some perspective, I've been around long enough to have seen many, many similar things before ... generally not for the better. I saw Google appear, and do very well, all the while acknowledging that their world is complex and their business had the potential for terrible consequences. Remember "Don't be Evil."

That could have been enough - do great, useful, interesting stuff, and profit from it.

But Google (like most of the others) has been subsumed by new managerialism and short-term goals. Search and Maps have both morphed and now look more like low-end advertising directories. Android hasn't learnt much from Windows; it is driven by "feature tick-boxism" and becomes more bloated and confused (for users and programmers) with each release. And short-term, profit-driven decisions have written off or endangered products like Reader, Drive, and Gmail.

So when I saw "Google" and "FPGA", I was primed for warning flags. And flags-a-plenty there are. I'm afraid I distrust the basic model: pick an open source tool (nextpnr) and "extend" it (which locks users into the new tool). Create an "alliance", bundle, and add bits and pieces. It looks (deliberately or not) like an attempt to dominate an environment - it reminds me of the Android development environment, and not in a good way.


With each passing month it gets harder to assume the most charitable reading of things coming out of Google when the negative things keep coming. They really need to stop penny-pinching when it comes to their customers.



