Hacker News
FPGA Dev Boards for $150 or Less (fpgajobs.com)
182 points by cushychicken on Nov 6, 2023 | hide | past | favorite | 88 comments



It's missing the best option for beginners: the UPduino[0]! It's a cheaper ($30) and more capable (39 GPIOs) alternative to the iCEstick or TinyFPGA BX. There's even a slightly more expensive variant with an onboard RP2040 microcontroller[1].

[0]: https://www.tindie.com/products/tinyvision_ai/upduino-v31-lo...

[1]: https://www.tindie.com/products/tinyvision_ai/pico-ice-rp204...


For the more expensive price class, there's also the Glasgow Interface Explorer, which after a few years' delay is finally shipping now:

https://www.crowdsupply.com/1bitsquared/glasgow


Glasgow isn't an FPGA development board. It has an FPGA in it, but it is explicitly not designed as a general-purpose board and would be poorly suited as one; get something else if that's what you're after.


   “ The following items cannot be shipped to your country and have been removed from your cart:

   UPduino v3.1 low cost Lattice iCE40 FPGA board”
Great


If you're in the EU, you can order through their European storefront: https://lectronz.com/stores/tinyvision-ai-store

(Haven't tried it but they explicitly call it out in the shipping section.)


Thank you for pointing this out.

Unfortunately, this becomes $64

Unit price: €32.59

Total price: €32.59

Subtotal: €32.59

Estimated shipping (42 g): €16.76

Taxes (21.0%): €10.36

Total in EUR: €59.71


That's really disappointing. I was toying with the idea of getting one myself. But if it's basically twice as expensive I'm not sure.


That's very cool. We've gotten some other great suggestions from Reddit, too. Gonna add to this list! Thanks for sharing this!


A lot of beginner FPGA projects are just crappy microcontroller / crappy microprocessor projects.

I'm thinking back to my college years, where I spent about 70% of the LUTs of our little FPGA board making a Wallace Tree Multiplier. Yes, good to learn Verilog on, good for learning how half-adders and adders could work together to make bigger circuits and all that, but it's not exactly a good use of FPGA capabilities.

Given how many chips are available today on the market, what are hobby-level FPGA designs that truly take advantage of custom logic in a way that a microcontroller and/or microprocessor (or other common parts) cannot replicate?

---------

Looking at history: I think the traditional use of FPGAs and/or ASICs was matrix multiplication routines, specifically Reed Solomon error correction codes. The most common implementation was probably CD-ROM error correction, IIRC.

But I'd argue that such routines are doable with ARM Neon these days, especially with PMULL (Neon carryless multiplication, specifically designed to accelerate Galois Field multiplication). And a lot of other matrix-multiplications are likely an ARM Neon problem solvable with a tiny Cortex-A5 or Cortex-A7. (These CPUs are available at $8 to $20 price points, far cheaper than an FPGA, and they run Linux so they're also easier to program for than learning Verilog). Microchip's SAMA5D2 for example is like $10 and a total solution is under 500mW of power consumption (DDR2 included).
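To make that concrete, here's a rough Python sketch of the Galois-field multiply that PMULL accelerates: a carryless multiply followed by reduction modulo the field polynomial. (The polynomial 0x11B below is the common AES-field example, chosen purely for illustration; a Reed Solomon codec uses whatever its spec defines.)

```python
# Sketch of GF(2^8) multiplication -- the primitive that ARM's PMULL
# (carryless multiply) accelerates. Polynomial 0x11B (the AES field) is
# used purely as an example; a real codec uses its spec's polynomial.

def clmul(a: int, b: int) -> int:
    """Carryless (polynomial) multiply: XOR instead of add, no carries."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    return result

def gf256_mul(a: int, b: int, poly: int = 0x11B) -> int:
    """GF(2^8) multiply: carryless multiply, then reduce mod the field poly."""
    product = clmul(a, b)
    for bit in range(14, 7, -1):  # reduce the up-to-15-bit product to 8 bits
        if product & (1 << bit):
            product ^= poly << (bit - 8)
    return product
```

On an FPGA the whole thing collapses into a small tree of AND/XOR gates, which is why these codes were such a natural ASIC/FPGA fit in the first place.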

I think communications is the right overall idea. A lot of problems come down to large matrix multiplication or other large-scale compute problems. But a lot of radio circuits (ex: Bluetooth, LoRa, Zigbee, etc. etc.) already have ASICs. Perhaps communication protocols themselves need experimentation, and FPGAs are best at that?

I do think that a low-cost, low-latency, low-power communication protocol should be invented for wired communications, or infrared, etc. etc. And that might make more sense to FPGA-out rather than using a microprocessor / SIMD / ARM-Neon on.


The number of compute-focused applications that are better on FPGA is going to be tiny. Doubly so if low-end, triply so if not real-time.

FPGAs shine in hard real-time applications and as "EE Duct Tape," but almost never as raw compute, even if your utilization is rather high. If you need to slurp in data from a bunch of ADCs at many GB/s and do signal processing without missing a sample, FPGAs shine. Radar, sonar, signal analyzers, beam forming, that sort of thing. If you need to connect PC buses (PCIe, Ethernet) together in a novel fashion, say because you are prototyping a new PC chip or router or building AWS, then FPGAs shine. The moment volume gets high, the scales tip back towards ASICs, but many important applications are intrinsically low volume. Often in prototyping, but sometimes in deployment too. How many F-22s exist? Only about 200. Custom chips wouldn't come close to filling a FOUP, so you can bet your bottom dollar that they (and the labs that engineered them) are full of FPGAs.

The world is full of "Look ma, I did a FPGA" projects that in the real world would have absolutely no business running on an FPGA. That's fine, we all need to train on something, but the natural inclination to overstate the scope of these pet projects can be confusing unless you know that real FPGA applications are confined to narrow (but extremely important and exciting and valuable) verticals.


According to this, the F-22 used an Intel i960MX plus a custom DSP from Raytheon derived from the radar processor on the F-15, but the engineers expected to replace the DSP with a PowerPC chip.

https://www.militaryaerospace.com/computers/article/16710716...


Well... I'm thinking from the perspective of a hobby-engineer. Not so much F35 scale.

iCE40 is a $6 surface mount chip, which means I'm comparing it against all other $1 to $20 chips within my capability to put into OSHPark's 6-layer PCB-layout service.

My toolbox includes 8-bit uCs like AVR (ATMega, AVR DD, AVR EA), 16-bits like MSP430, 32-bits like Cortex-M0+, M4, M7. It includes Linux-scale Microprocessors like Microchip SAMA5D2, Microchip SAM9x60-D1G, or Boards like Beaglebone or Rasp. Pi. (And yes, I've double-checked. These 0.80mm pitch BGAs seem like they fit and route on OSHPark's 6layer 5mil trace/space impedance controlled specifications)

So where does an FPGA fit inside of here?

--------

Strangely enough, "Glue Logic" is 8-bit territory these days. The AVR DD has CCL, which is 4x 3-input LUTs + 2x JK flip-flops + an event system that executes even while the 8-bit CPU is asleep.

See here: https://ww1.microchip.com/downloads/en/AppNotes/TB3218-Getti...

So the smallest "glue logic" purpose of FPGAs is... well... outcompeted. The $1 uCs are beating FPGAs at this particular task now. I truly can configure 12 input pins of the 8-bit uCs + 4 output pins to act as simple glue logic, fully async from the uC's clock (IE: zero code / MHz used, still functional during sleep, etc. etc. Bonus points: the event-routing system means that events route to the ADC/timers/etc. even while the uC is sleeping, for maximum power efficiency). If some latency can be tolerated, you can even hook up these CCL / routing paths to interrupts and run a bit of code on them.
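For reference, the LUT half of the CCL story is simple enough to model in a few lines. This is a hedged sketch of the generic 3-input-LUT idea (the app note above covers the actual register layout): an 8-bit truth table, indexed by the three inputs packed into a 3-bit number.

```python
# Hedged model of a generic 3-input LUT, the building block behind the CCL
# (see the TB3218 app note for the real TRUTHn register details): an 8-bit
# truth table, indexed by the three inputs packed into a 3-bit number.

def lut3(truth: int, in2: int, in1: int, in0: int) -> int:
    index = (in2 << 2) | (in1 << 1) | in0
    return (truth >> index) & 1

# Example config: out = (in0 AND in1) XOR in2, built by enumerating all 8 cases.
TRUTH = 0
for i2 in (0, 1):
    for i1 in (0, 1):
        for i0 in (0, 1):
            if (i0 & i1) ^ i2:
                TRUTH |= 1 << ((i2 << 2) | (i1 << 1) | i0)
```

Configuring the peripheral is just computing that one truth-table byte.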

AVR DD's CCL isn't good enough for any serious design like a 32-bit LFSR. But you know, a CRC32 (an LFSR implementation) probably would be best done on such an iCE40 FPGA rather than on the 8-bitter's piss-poor compute capabilities. But 3x AND gates + 1x XOR gate scattered across the board? That's an 8-bitter job today.
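A bit-serial CRC32 really is just that small an LFSR. Here's a quick Python model of the circuit (one feedback bit, one shift, a conditional XOR with the polynomial per clock), cross-checked against zlib:

```python
import zlib  # stdlib reference implementation, used only to cross-check

# Bit-serial CRC32: exactly the LFSR being described -- one feedback bit,
# one shift, and a conditional XOR with the (reflected) polynomial per clock.
CRC32_POLY = 0xEDB88320

def crc32_bitwise(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        for bit in range(8):
            feedback = (crc ^ (byte >> bit)) & 1
            crc >>= 1
            if feedback:
                crc ^= CRC32_POLY
    return crc ^ 0xFFFFFFFF

# The standard CRC32 check value for "123456789" is 0xCBF43926.
assert crc32_bitwise(b"123456789") == zlib.crc32(b"123456789") == 0xCBF43926
```

In hardware, that inner loop is a row of flip-flops plus a handful of XOR taps; a byte-per-clock version just unrolls it 8x.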

---------

I think the answer for "What is the best total solution under $50" will likely be microprocessors and full scale chips. (Or even a full sized SBC like Rasp. Pi or Beaglebone).

But if we change the question to "What is the best total solution under 50mA", suddenly the FPGA is far more competitive. FPGAs aren't that expensive, now that I'm looking up these tiny iCE40 chips. But 1k LUTs is still pretty small.

Speaking of which: ouch. A lot of iCE40 are 0.40mm and 0.50mm pitch BGAs, so no OSHPark 6-layer for those. QFN and TQFP are available though. So just be careful about chip selection and think about the PCB you're planning to use with these chips.


> iCE40 is a $6 surface mount chip, which means I'm comparing it against all other $1 to $20 chips within my capability to put into OSHPark's 6-layer PCB-layout service.

If you are a hobby EE (and work as a software engineer for your day job), $6 is negligible. Some of the higher-end RF chips cost 3 figures per chip. Cost of BOM only truly matters at scale.


I dunno. I think my mental model for my hobby stuff is that I'm aiming for a small-run (1000 or less) Etsy store kinda deal.

IE: I'm going to sell something for $150 to $500 in relatively small numbers, that meaningfully helps people with some specialized niche task that big companies are blind to... with a BoM aimed at maybe $30 and an overall production line of 1-hour (assembly time + testing / manufacturing / boxing) time or less, since I'd likely be the only person boxing these devices up and shipping them out.

I mean, ideally maybe like 10-minutes assembly time or shorter really. Depends on how much time you're valuing your labor.

I bought an HDMI lag tester that proved whether monitors for the fighting-game community were 18ms lag or 30ms lag, since the fighting game community is very, very, very particular about tournament setups. There's no way a device like this would sell at large scale, but that's the kind of "Etsy project" that I literally bought back when I cared a lot about getting my home setup close to tournament specs.

Or perhaps $300 joysticks custom built to look/feel like arcade sticks, at least before Madcatz / big guys started making them.

In case you're curious: this was a $120 doohicky with an HDMI output that flashed white rectangles on the top-left, top-right, center, bottom-left, and bottom-right of the screen, plus a photodiode that measured, to the millisecond, the latency between the HDMI signal going out and the flash appearing on screen. The last detected time was updated through the HDMI output.

This is a project most of us hobby EEs could accomplish and likely sell on Etsy. But we gotta keep costs down below $30 BoM in practice. It's a meaningful project and something good tournament organizers knew to buy and test with.

---------

I've heard the estimate that for hobby / Etsy store level manufacturing, you're looking at 5x BoM for a fair price. Ex: $20 BoM sells at $100, $100 BoM sells at $500. If you can't accomplish this, then your business idea sucks; go think of another, more profitable idea. If this niche product exists, then you've got a potential Etsy-business idea.

I think there's a good market for $100 to $500 specialist niche electronic tools like this, taking advantage of the small sizes of communities, small scale of builds, small markets, etc. etc. (If it were a large market, Hasbro or Nintendo or some "big guy" would jump in and likely take your market. If it's like 1000 total lifetime sales, that's enough to make the hobby worth it but small enough that no big company would tackle that niche.)

If you're talking about $500 parts, then we're talking about $2500 sales price (using the 5x BoM fair price scaling as a mental model), which is likely outside of the hobby/Etsy craft tool for niche subject market.

There's a lot of hobbies out there where $100 to $500 tools (ex: $100 HDMI lag tester, $300 joystick, replicated Pop'n Music controller) are fair. Going above $500 or $1000 bill-of-materials (aka: sales prices in the $2500+ range) kind of gets you back into professional tools, and you're suddenly a loser.


Ah you plan to sell it, I thought you meant just building it for personal use. Yeah for production BOM optimization is an entirely different story.


Or at least, I'm pretending that I'll sell it. Lol.

No promises. But if something looks good enough maybe I'll ramp it up to a real production run.


There are still places I see FPGAs used by hobbyists, for example Hams working with Software Defined Radio, game console emulators with a focus on correct timing, other retro computing where FPGAs can replace/upgrade components that are hard to find.


> So where does an FPGA fit inside of here?

It doesn't. You're not missing anything.


To answer my own question, I've decided to look up the specs of Lattice Semiconductor's iCE40 LM1K FPGA. This is very small, just 1k LUTs. But a lot of these "matrix multiplications" and Galois-field stuff simplify down into absurdly small linear-feedback shift registers (LFSRs) in practice (!!). At least for encoding (decoding is far more difficult).

With that in mind, these iCE-40 low-power devices are claiming to be of the ~10mA class, which puts them in the small microcontroller region. (Ex: RP2040 is 20mA, so we're already undercutting RP2040 let alone a proper Cortex-A level chip).

So... yeah. Okay, I see the use. But that's still a _lot_ of extra work compared to grabbing an off-the-shelf Cortex-A5, lol. But given the right power constraints, I can imagine that the $6 to $20 iCE40 FPGA would be more useful than adding a full-size Cortex-A5 (or better) with SIMD / other such advanced computational instruction sets.

Ex: I think I'd be able to program an LFSR for 8-bit Reed Solomon encoding (Galois add/multiply) that'd pair up with a standard microcontroller (think any ARM Cortex-M4 here), all for a total solution power consumption under 20mA going full tilt.

Since DDR2 RAM starts at like 100mA power consumption, there's a lot of FPGA+Microcontroller that you can fit before even the smallest microprocessors (aka: Cortex-A5) make sense.

----------

So I'm thinking that a small microcontroller that needs write-only communication over a noisy channel could, in practice, require a Reed Solomon encoder (or turbocodes or whatever modern crap exists; I'm not up-to-date with the latest techniques). A Reed Solomon encoder is 100% better on an FPGA since it's just a linear-feedback shift register.

Or heck, the matrix multiplication to decode a Reed Solomon error correction scheme is surprisingly compute heavy, and might also be better on an FPGA than on the 10mA-class uC.
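Since the encoder keeps coming up: here's a hedged Python sketch of a systematic Reed Solomon encoder written as the LFSR it reduces to. The field polynomial 0x11D and generator alpha = 2 are just common textbook choices, picked for illustration; a real design would match its target spec. On an FPGA, the inner loop becomes nparity parallel GF(2^8) constant multipliers feeding a shift register, one message symbol per clock.

```python
# Hedged sketch of a systematic Reed Solomon encoder as the GF(2^8) LFSR
# it reduces to. Field polynomial 0x11D and generator alpha = 2 are common
# textbook choices, used here purely for illustration.

POLY = 0x11D  # x^8 + x^4 + x^3 + x^2 + 1

def gf_mul(a: int, b: int) -> int:
    """GF(2^8) multiply by shift-and-XOR with on-the-fly reduction."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= POLY
    return r

def poly_mul(p, q):
    """Multiply polynomials over GF(2^8), coefficients highest-degree first."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] ^= gf_mul(a, b)
    return r

def rs_generator(nparity: int):
    """g(x) = (x + a^0)(x + a^1)...(x + a^(nparity-1))."""
    g, root = [1], 1
    for _ in range(nparity):
        g = poly_mul(g, [1, root])
        root = gf_mul(root, 2)
    return g

def rs_encode(msg, nparity: int):
    """Systematic encode: append the remainder of msg(x)*x^nparity mod g(x)."""
    g = rs_generator(nparity)
    lfsr = [0] * nparity
    for sym in msg:
        feedback = sym ^ lfsr[0]      # XOR at the register's output
        lfsr = lfsr[1:] + [0]         # shift
        if feedback:
            for i in range(nparity):  # the parallel constant GF multipliers
                lfsr[i] ^= gf_mul(feedback, g[i + 1])
    return list(msg) + lfsr

def poly_eval(p, x):
    """Horner evaluation over GF(2^8), handy for sanity-checking codewords."""
    y = 0
    for c in p:
        y = gf_mul(y, x) ^ c
    return y
```

Every valid codeword evaluates to zero at a^0 through a^(nparity-1), which gives a quick self-check that the LFSR division is right.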


In my day job I work on a product that has FPGAs, and we don't do a single matrix multiplication.

We use them primarily for performant interface with obscure bus protocols, where high performance variously means high throughput (tens of Gbps) with zero acceptable loss, or low latency (interpret the bus protocol and produce the correct response in <10ns), but amusingly for our particular application, not usually both at the same time.

Our volume is too low and the set of bus protocols we need to interact with changes too rapidly for ASICs to be economical. And it's not possible to meet our performance targets with off the shelf SoCs alone or discrete logic gates.

Although I agree with your point that it's hard to beat CPUs (and GPUs) when your needs are primarily computation.


One common student project we had used the FPGA to generate a (VGA*) video signal. For example using the onboard ADC to sample a signal and visualise the waveforms. A more advanced idea was to also implement a line-drawing algorithm on the FPGA to generate wireframe graphics. While this can also be done on a microcontroller and some even include video outputs and GPUs, I think it is a nice way to see on a low level how to generate the signals with the correct timing. I used this for example to add a video output to a Gameboy.

Another a bit more exotic and involved application is a Time to Digital Converter, which can take advantage of the low-level routing inside the FPGA to sample a digital signal with significantly higher precision than the clock (resolutions of 10s of picoseconds down to below 10ps depending on the FPGA).
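A toy software model of that delay-line TDC idea, with an invented 25 ps tap delay (real resolution depends on the FPGA's carry-chain timing): the edge ripples through a chain of buffers, the sampling clock snapshots the chain, and the resulting thermometer code tells you when the edge arrived at sub-clock resolution.

```python
# Toy model of a delay-line TDC: an input edge ripples through a chain of
# buffers, a register snapshots the chain on the sampling clock, and the
# thermometer code gives sub-clock arrival time. The 25 ps tap delay is an
# assumption for illustration; real figures come from the FPGA's carry chain.

TAP_DELAY_PS = 25
N_TAPS = 400  # 400 taps x 25 ps covers one 10 ns sampling period

def sample_delay_line(edge_time_ps: float, sample_time_ps: float) -> list[int]:
    """Tap i has seen the edge iff the edge reached it before the sample clock."""
    return [1 if edge_time_ps + i * TAP_DELAY_PS <= sample_time_ps else 0
            for i in range(N_TAPS)]

def decode_time(taps: list[int], sample_time_ps: float) -> float:
    """k set taps -> the edge arrived between (k-1) and k tap delays ago."""
    return sample_time_ps - (sum(taps) - 1) * TAP_DELAY_PS
```

The recovered time is quantized to one tap delay, which is exactly why the FPGA's fine-grained routing sets the resolution floor.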

For work, we mostly use FPGAs for data acquisition systems, low level data processing, high speed data links and so on.


Alas, modern embedded screens (ex: Newhaven Display) are either SPI (for small screens) or "8080-protocol" (8080 bus-like protocol) on the faster / larger screens, and somewhat easily implemented using bitbanging. So VGA is somewhat out-of-date for a hobbyist; the market has moved on from VGA in practice.

> Another a bit more exotic and involved application is a Time to Digital Converter, which can take advantage of the low-level routing inside the FPGA to sample a digital signal with significantly higher precision than the clock (resolutions of 10s of picoseconds down to below 10ps depending on the FPGA).

That certainly sounds doable and not too difficult to think about, actually. But as you mentioned, it's exotic. I don't think many people need picosecond-resolution timing, lol.

Still, the timing idea is overall correct as an FPGA superpower. While picosecond resolution is stupidly exotic, I think even single-digit nanosecond-level timing is actually well within a hobbyist's possible day-to-day. (Ex: a 20MHz clock is just 50 nanoseconds, and bit-stuffing so that you pass 4 bits of info across 16 time slots per clock tick means needing to accurately measure signals at the 3.125ns level...) This is neither exotic nor complicated anymore, and is "just" a simple 80Mbit encoding scheme that probably has real applicability as a custom low-power protocol.

And it's so simple that it'd only use a few dozen or so LUTs of an FPGA to accurately encode/decode.

Ex: 0000 is encoded with a 0ns phase delay off the master clock.

0001 is encoded as 3.125ns phase delay off the clock.

0010 is encoded as 6.25ns phase delay off the clock.

... (etc. etc.)

1111 is encoded as 46.875ns phase delay off the master clock.
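The slot arithmetic above checks out; here's a tiny sketch of the encode/decode mapping (names invented for illustration):

```python
# Sketch of the 16-slot pulse-position scheme above. 20 MHz -> 50 ns period;
# 16 slots of 3.125 ns each carry one 4-bit symbol, i.e. 80 Mbit/s.
# Function names are invented for illustration.

CLOCK_HZ = 20_000_000
PERIOD_NS = 1e9 / CLOCK_HZ   # 50.0 ns
SLOT_NS = PERIOD_NS / 16     # 3.125 ns per slot

def encode(nibble: int) -> float:
    """Map a 4-bit symbol to its phase delay (ns) off the master clock."""
    assert 0 <= nibble <= 0xF
    return nibble * SLOT_NS

def decode(delay_ns: float) -> int:
    """Recover the symbol by quantizing the measured delay to the nearest slot."""
    return round(delay_ns / SLOT_NS) % 16
```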


Yes, VGA is really not very useful nowadays, but I think it is still a useful (student) project for FPGA beginners that is relatively easy to implement, more exciting than blinking an LED and can be built on for other things.

The downside of SPI (and to some degree 8080) screens is the low refresh rate / missing vsync. There are also screens with an RGB interface, which is then again similar to VGA but digital. But yes, this does not really require an FPGA and an ARM controller with RGB interface is probably much more useful for most applications. (Or even MIPI-DSI, but I have not used it myself so far.)

Still, I have a TFP410 lying around that I wanted to strap to my FPGA at some point to get something better than VGA.

> Still, the timing idea is overall correct as an FPGA-superpower.

And while this is especially true on FPGAs with dedicated hardware like a serdes or gearbox, one can still squeeze out a bit more on most FPGAs with DDR IO or several phase-shifted clocks.


Jump Trading has a FPGA team. There are always job openings for it. Not really a hobby project, but it gives you an idea of real world applications.

>We’re looking for brilliant engineering talent to join our FPGA team that is building next-generation, ultra-low-latency systems to power trading with machine learning and other algorithms on a global scale.

>You’ll work alongside a small team of experienced engineers who came to Jump from leading companies in FPGAs, semiconductors, networking cards, and more… as well as PhDs from top FPGA research labs around the world.

https://www.jumptrading.com/careers/5305081/?gh_jid=5305081


"Given how many chips are available today on the market, what are hobby-level FPGA designs that truly take advantage of custom logic in a way that a microcontroller and/or microprocessor (or other common parts) cannot replicate?"

Any boolean-logic heavy workload such as password cracking or SHA256-mining (Bitcoin) is perfectly suited for FPGA platforms and will outperform any microprocessor or GPU in terms of performance per watt. For example in the early days of Bitcoin, FPGAs such as the Xilinx XC6SLX150 ruled mining, and many such implementations were developed by hobbyists.


I honestly don't think it's possible to implement SHA256 on the 1k LUTs of the FPGA dev boards discussed in this post. (Let alone an implementation that's going to beat out traditional CPUs or GPUs.)

Like seriously: 1k x 4-LUTs means that these iCE40 FPGAs have 4096 total inputs to all of their logic. SHA256 has, ya know, 256 bits of input and probably takes more than 16 "steps" to implement even with perfect routing. (But if anyone proves me wrong, consider me happy.)

You're thinking orders of magnitude too big here. The FPGAs described in this post are much, much, much smaller.


Oh, right, not 1k LUTs. But toward the $120 range, such as the Digilent Arty S7 listed in the post, with 23k LUTs, it's likely possible to implement SHA256 cracking or mining and beat a CPU or GPU in performance/watt. Probably not performance/dollar though.


(Haven't looked into all the software available recently, so YMMV, just some thoughts:)

The other side of the story is the availability of a low-cost/free and capable tool chain. It's my impression that AMD/Xilinx wins on that.

Of course this also depends strongly on the purpose. I think open source tool chains are not yet at a state where you can tackle bigger problems with them, so if you want to get into the job market, maybe train with vendor software. Different story if it's for home projects. And if you want to hack on the open source tool chain, all the power to you!


Without a doubt the case.

Pretty much all of the major vendors require a license agreement, and a node lock to a specific MAC for your computer.

They generally do hobble the toolchain a little bit as far as the number of LUTs you can compile to. Top-tier, huge AMD/Intel chips are gonna require you to shell out to use all the LUTs and specialized IP blocks.


This. I'm currently planning on purchasing one of the Kria KV260s for that reason. They're above budget, but are quite capable, and you get the free Vivado/Vitis toolchain.


Be forewarned: when I bought one, I had to email back and forth for a couple weeks with Xilinx and sign some legal stuff before they would ship it, even within the US. Might've been ITAR, but they wanted to be really sure I wasn't going to build a guided missile with it.


I wonder if they could just put a fuse in the silicon that blows out if subjected to beyond a reasonable acceleration or abrupt altitude change?


I'm planning on ordering one via Digikey, but that's fine. I work in the US defense industry regardless, so I shouldn't have too much trouble.


When I bought one a year ago in the US I had no issues. I guess their insistence on it being a neural network inference device has bitten them in the ass.


Can I make a suggestion - the DE0-Nano and DE0-CV. Lots of legacy IO while being an older device family that still has a very fast and efficient compilation time. Hardware wise, both have vanilla SDRAM, the CV board has 7segments, PS/2 ports and VGA out.

If you want to learn FPGAs, don't let tools get in the way of learning. Stick to Altera/Intel if for only 1 reason: Signaltap.

SignalTap is the single best tool you can get for getting somewhere in your FPGA journey, primarily if you ever plan on interfacing with hardware or ICs outside your FPGA. That is what FPGAs are designed for, anyway. It's like superman x-ray vision for your bugs.

I would recommend not using the open source tools with the iCE40, if only because there is no equivalent of SignalTap. Imagine having no gdb, no printf debugging, and all you have is 2 LEDs. Don't waste your time. Sure, Quartus can be annoying sometimes, but don't get distracted by tooling when you are trying to get your PS/2 keyboard or mouse interface working. Or when checking how many pixels your VGA horizontal back porch is, even though you think you wrote the Verilog correctly.

With SignalTap you bake a highly configurable logic analyzer into the design, and you can include any IO pin, internal bus, register, or state machine.

It's one thing to play around in the simulator and test bench your own code, but that's rarely where the issues come up. Nothing beats actually analyzing your own design and external interfaces, warts and all, to see down to the exact clock cycle where something happened.

By the way, Xilinx does have something similar, the ILA, but it's not nearly as good, and the 7-series compilation times are not gonna be as fast as the Cyclone IV's.


I really like the DE0-CV in particular for having the 7 segment displays and accessible buttons and switches. When I was first getting started in FPGA stuff, I spent a lot of time just getting basic circuits to work, and you need some physical IO to get any feedback that your stuff works. It gives you a lot of things to learn on before you start worrying about VGA output or whatever (but does have a lot of interesting stuff on board for when you do want to mess around with that stuff!)


This type of debugging is fine for people just starting out or hobbyists but won't be useful if you're trying to design HDL professionally. You really need to be able to create accurate simulations so you're not debugging on hardware the entire time. Hardware debugging takes a long time (builds can take hours) and you're more limited in terms of how many signals you can view/how many samples you can take.


You are absolutely correct, and because it doesn't scale well, you have to keep it highly targeted.

It all depends on what you are designing. Something DSP focused like a MJPEG encoder can be entirely simulated as it can be abstracted away from anything external. Most you would need would be a simple model of your external DRAM controller for a framebuffer. And this would definitely be both the fastest and easiest way to develop it.

On the other hand, I did a job where I had to design a secure enclave accessible as a typical LBA-accessible SD card through a standard SD reader. I designed a pcb specifically to let me sniff the protocol both with a real card and my own IP. Of course, I started with implementing everything in the SD spec to the letter. It still required a crazy week of in situ debugging with dozens of card readers and cards to see why both ends violated the spec but worked anyway. In the end, I was able to design the flash translation layer fully in simulation but the SD link and phy layer were developed almost fully with the signaltap debug and compile loop.


SignalTap is a delight but as the other commenter points out, it's a hard thing to deploy on most real designs due to how much space it eats.


Xilinx has ILAs (integrated logic analyzer) which is similar. In both cases, they are just logic analyzers.

You never would deploy a product with SignalTap or ILAs; what would be the purpose? 1) You usually read out the ILA/SignalTap stream using JTAG. The newer UltraScales do have a core that lets you avoid this, but in most cases, JTAG is what you use. 2) For debugging, you can always load a debugging bitstream with your ILA in the circuit. 3) Bottom line, ILAs take up space and are only useful for slower signals unless you want to use precious BRAMs.


This is missing boards with Lattice ECP5 FPGAs, which are a nice alternative to the iCE40 FPGAs and are also supported by the open source tools, but also offer more logic, memory, and IO.

I only know of the evaluation board from Lattice [0] and the OrangeCrab board [1], but there are probably more.

[0]: https://www.latticesemi.com/products/developmentboardsandkit...

[1]: https://orangecrab-fpga.github.io/orangecrab-hardware/


There are more! The ULX3S [0] offers the ECP5 in three different sizes in up to 84K LUTs, plus it has an onboard ESP32. Fully open source with plenty of projects and examples built around it [1].

[0] https://www.crowdsupply.com/radiona/ulx3s

[1] https://ulx3s.github.io/


I was going to point this out too, but I'm frankly not sure how well the opensource toolchains actually work on these chips. I spent a number of hours a year or two back trying to figure out how to talk to the high speed serdes, and failed miserably.

But for the kinds of use cases one gets out of an ICE40, it seems the ECP5 devices are going to be pretty solid choices with the open source tool chains. Ex, lots more LUTs talking to slow devices/GPIO pins.


It seems to work well as far as I can tell, but my ECP5 unfortunately does not have the serdes. I wanted to stick to the 256 caBGA package to keep the board simple, but there is no variant with serdes... The PLLs, block ram, and other IO work fine (have not tried the DDR and gearbox blocks yet).


I am a fan of these boards; I believe the economy of scale is propped up by the LED billboard industry.


I can vouch for the OrangeCrab. If you want an ECP5-based FPGA board, it's great.


Missing from the list is the BeagleV-Fire for $150 which was released last week with:

* RISC-V CPU: 4x 64-bit RV64GC application cores & 1x 64-bit RV64IMAC monitor/boot core

* FPGA: 23K logic elements (4-input LUT + DFF), 68 Math blocks (18×18 MACC), and 4 SerDes lanes of 12.7 Gbps.

[0] https://www.beagleboard.org/blog/2023-11-02-beaglev-fire-ann...


What a neat little device.

Get into RISC-V and design a custom “accelerator” type thing on the FPGA.


The list is missing Lattice ICE40HX8K-B-EVN.

This board costs $85 or EUR 80.

It does not include anything superfluous, but it has four 40-pin 2.54 mm (100 mil) headers. Most of these 160 pins are usable as FPGA I/O pins.

No other cheap FPGA board offers so many I/O pins and by using standard large-pitch headers it is easy to connect the pins to anything else.

This series of Lattice FPGAs had its bitstream reverse-engineered and there are open-source tools for programming it.

There are faster FPGA boards available, but in many cases those are not usable due to having too few pins routed to external connectors.


Tang Nano starts at about $10: https://tangnano.sipeed.com/en/


That’s the one I know too, thought that’s the way to go.


Is anyone here aware of beginner friendly learn-by-building style resources for getting into FPGA programming? Eventually, I would like to get to a point where I could build something like a DCPU-16 or maybe even an rv32mi core.


I wish 8bitworkshop got more love. It is amazing. From the homepage "Write 8-bit code in your browser. Ever wanted to be an old-school game programmer? Learn how classic game hardware worked. Write code and see it run instantly." It lets you get your feet wet in Verilog without buying the hardware first. This description doesn't do it justice at all so check it out.

https://8bitworkshop.com/


At the risk of sounding like a shill because it's the third time I link to it: https://nostarch.com/gettingstartedwithfpgas

I was looking for a resource to get started too and stumbled upon this new book. I haven't picked it up yet but liked all No Starch Press books so far.


Russell's book is great. We got to preread it for FPGAjobs. His website, NANDland.com, is also great - and free!


Nand2tetris, skip around until you find something that interests you.

Fpga4fun is another good resource, if it's still around.

Try and do some stuff besides just another softcore CPU in the beginning, even if it seems redundant. Maybe try a VGA pattern generator with some cosine LUTs or something.


In a game jam a friend and myself turned the Nand2tetris computer into a working computer on an FPGA board with VGA output. Then we built a small game (game and watch level) using the Nand2Tetris high level language. Very fun.


I haven't tried this, but it looks interesting: Designing Video Game Hardware in Verilog. https://www.amazon.com/gp/product/1728619440/ref=as_li_tl?ie...

And the online learn by building simulator. https://8bitworkshop.com/v3.10.1/?platform=verilog&file=cloc...



https://github.com/enjoy-digital/litex

they have tutorials, you can get compatible boards for around $20


NANDland.com is a great resource for this.

We're going to work on improving our resources for getting into FPGA programming. Stay tuned.


So FPGA prices have come down to hobbyist levels. Would someone point me toward hobbyist-level resources for programming such a device? Or is toying with machine code simply too tedious for achieving substantial results? My interest in these stems from the potential to hardwire inner-loop procedures that would otherwise have run atop a stack of multi-million LOC abstractions.



There's various open source work going on, e.g. at Boston University (https://www.bu.edu/rhcollab/projects/software-hardware/fpgas...) but it's still pretty much at the research level.


Yosys and nextpnr have been production-ready for years, they handle your average hobbyist FPGA project just fine (and 1-2 orders of magnitude faster than the vendor tools).
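For reference, the whole open flow for an iCE40 part is just a few commands. A sketch assuming the oss-cad-suite binaries are on your PATH and targeting the iCEstick's HX1K; the file names and pin-constraint file are illustrative:

```shell
yosys -p 'synth_ice40 -top top -json top.json' top.v
nextpnr-ice40 --hx1k --package tq144 --json top.json \
              --pcf icestick.pcf --asc top.asc
icepack top.asc top.bin   # assemble the bitstream
iceprog top.bin           # flash it over the onboard FTDI
```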


They were at hobbyist levels 15 years ago when Spartan-3 came out. Then everyone decided that FPGAs had to be premium priced and killed off the affordable parts.


Spartan-3 never had an open/accessible toolchain, IIRC. That's what put me off from investing my spare time in it.


The toolchain is free (not open source) unless you are building very, very high performance products. Even in industry we use the free Vivado most of the time. At one point, there were some Synopsys products that groups were using, but they lagged in features compared to the vendor tools.


I've followed this tutorial recently, and it's amazing:

https://github.com/BrunoLevy/learn-fpga/blob/master/FemtoRV/...

The author includes detailed instructions for building a microcontroller in Verilog on an iCEstick, starting from a very simple blinker all the way to a functional RISC-V core.

My other suggestion would be: for most of the toolchain, skip your package manager and directly install the binary artifacts published on this Github repo:

https://github.com/YosysHQ/oss-cad-suite-build

You'll spare yourself a world of pain.
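For a sense of scale, the tutorial's starting point, the blinker, is only a handful of lines of Verilog. A minimal sketch (module and port names are illustrative and depend on your .pcf constraints file):

```verilog
// Blink an LED from the iCEstick's 12 MHz oscillator.
module blinker (
    input  wire clk,   // 12 MHz clock input
    output wire led
);
    reg [23:0] counter = 0;
    always @(posedge clk)
        counter <= counter + 1;
    // Bit 23 toggles every 2^23 cycles, giving a ~0.7 Hz blink.
    assign led = counter[23];
endmodule
```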


What do you consider "hobbyist-level resources"? The tools to take you from HDL (Verilog or VHDL) to a bitstream to load into the device are typically free to download.



I bought a Spartan-6 board in 2016 for $35.


This is a pretty cool-looking product that marries an FPGA with a Raspberry Pi Pico. $35.00 https://www.tindie.com/products/picolemon/picofabric/#produc...


Here's some of what we used during undergrad: https://www.realdigital.org/

We used the Blackboard ($139) primarily and it covered most needs.

There are free, well-written courses as well on their website, covering basic digital logic to creating IP that communicates with the PS over AXI. My only complaint is their community forum is completely unmoderated and abandoned.


I always wonder what happened to CPLDs. Wouldn't it be possible to make them at the same capacities as FPGAs? What would the intrinsic differences be?


CPLDs ran into scaling issues. Routing problems increase exponentially the more logic you add to them. Eventually routing delays make it pointless, not to mention power usage is horrific.

Not many CPLDs were made beyond about 256 macrocells. Even a typical lowend FPGA will be 5k to 50k "macrocells" or some other form of LUT-based logic cell.

As an example, the last time I had to design with a CPLD it was a 128 macrocell part, and had a static power draw of 0.5W, which is kind of ridiculous.

Altera did try to make a sort of hybrid part, the MaxII and MaxV series which are just tiny FPGAs that are flash programmed. Though, if you wanted that, there are plenty of better ones out there like the ice40.


CPLDs are used mainly when you have a PCB design with lots of slow logic that you want to simplify or shrink, and that's it. They still have their purpose.


I have a project where I want a 1 Hz PLL trained to an intermittent phase and frequency aligning signal (PPS from GPS) that I then want phase-aligned via PLL or DLL to a feedback signal (current sense resistor in line with a nixie tube, might have to go through an ADC which might need a static negative time offset). This is a pretty low resource task, but it seems like the number of PLLs in these ICE FPGAs is quite limited. Is a dedicated hardware PLL necessary to do this or could I synthesize DLLs with reasonable accuracy?

I'm late to post here, but I've been waiting for the tail of interesting suggestions and lists to drop down. It sounds like the UPduino BX / pico-ice (iCE40UP5K) and OrangeCrab (ECP5) are the two best picks. The OrangeCrab feels like a sledgehammer, but I worry that the one PLL and lack of native ADC (granted that's fixable) on the UP5K options won't be enough.


On this awesome list [1] I found the ICEBreaker board [2] which was great to learn on. It’s a fully open source design too [3].

1. https://www.joelw.id.au/FPGA/CheapFPGADevelopmentBoards

2. https://1bitsquared.com/collections/fpga/products/icebreaker

3. https://github.com/icebreaker-fpga/icebreaker

Edit: link formatting.


[1] Looks like a very nice list to explore, thx!


I'm surprised this does not include the excellent Alchitry boards [1] which are quite cheap and capable ($100 for an Artix-7 35T dev board is a steal)

[1]: https://www.sparkfun.com/categories/tags/alchitry


Does anyone know if yosys support for the Xilinx Spartan-7 is possible yet? When I last looked it was only partially supported.


Which ones would be capable of generating HDMI at 4K?


In practice: those which come with an HDMI connector onboard (or perhaps DVI + an external DVI->HDMI adapter?).

Unless your hand soldering is good enough to obtain correct impedance & match wire lengths.

I'd expect most such boards to be capable of generating a 4K signal timing-wise. But do they have enough LUTs, block RAM, etc. to do something useful with it? As usual: it depends.
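The pixel clock alone is sobering. Using the standard CTA-861 raster totals for 3840x2160 at 60 Hz (taken as given here, not derived), a quick back-of-the-envelope check:

```python
# Pixel clock = total raster size (active + blanking) times refresh rate.
def pixel_clock_hz(h_total, v_total, refresh_hz):
    return h_total * v_total * refresh_hz

# CTA-861 4K60 timing: 4400 x 2250 total raster.
clk = pixel_clock_hz(4400, 2250, 60)
print(clk / 1e6, "MHz")  # 594.0 MHz
```

594 MHz is well beyond what low-cost FPGA I/O can serialize without dedicated high-speed transceivers, which is why most cheap boards top out around 1080p over HDMI.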


Why would you use an FPGA for generating 4K content?


Why not?


I got a dev board; what's the next step?


How are those so expensive?



