top | item 27162358

MicroFPGA – A coming revolution in small electronics (2019) [video]

106 points | peter_d_sherman | 4 years ago | youtube.com

56 comments

[+] contingencies|4 years ago|reply
There is substantial complexity being swept under the rug here.

Interesting talk but I remain skeptical. Anyone got stats on how FPGA ecosystems have fared vs. MCUs over chipageddon? From a cursory search it seems: 1. There are very few PMOD suppliers 2. They do extremely low volume. So low, in fact, it appears there is simply ~no demand. Therefore, good luck hiring and scaling a team that is familiar with this stuff on a budget. Especially since the half-life of the software tooling is probably 3-6 months and the majority of toolchains are still vendor-specific. Further, in terms of supply chain security, the plethora of suppliers for lower-end MCUs cannot be matched in lower-cost FPGA offerings, which rely heavily on specific vendors who cannot readily guarantee global availability.

M-LVDS is an onboard high speed signalling standard whereas USB and ethernet are specifically designed for off-board connectivity (and have features like power distribution, dynamic topologies, etc.). They are apples and oranges.

The notion of a baseboard for these things is so simple, why isn't there an FPGA+PMOD+baseboard PCB generation and fabrication webservice offered by Digilent? Because it's not that simple. PCB design is often largely constrained by non-electronic functional aspects such as form factor, thermals, mounting and assembly considerations. Having modules to just plug in is great for a quick test, but the second you actually want a product you have to consider the whole problem, and that typically means multiple iterations of traditional PCBs.

Conclusion: The case is over-stated and the market has effectively rejected the current offerings.

[+] g_p|4 years ago|reply
> Anyone got stats on how FPGA ecosystems have fared vs. MCUs over chipageddon?

No data I can offer as such, but anecdotes from several suppliers who integrate some big-name FPGAs into their products. They've seen supply dry up like other chips, adding several months to lead times. It's not "zero supply", but a significant reduction in supply.

The vendor-specific toolchains and supply chain security issues you point out are huge, and ones most people (even security experts) are a bit blind to. You can't verify the state of a programmed FPGA by inspection (at least not using normal techniques), so you can't easily verify the output of your bitstream compiler, which is itself the output of a complex design toolchain.

The toolchain output and bitstream are probably encrypted (or heavily obfuscated at minimum), which will make it near impossible to audit the component. I don't see how anyone can defend against a SolarWinds-style toolchain compromise (or a Stuxnet-style attack) in the long run: a compromised toolchain could introduce arbitrary modifications to the logic that would be very difficult to ever detect once in the field.

[+] metaphor|4 years ago|reply
This talk struck me as conflicting in a sort of have-your-cake-and-eat-it-too kind of way. The wrinkles part[1] summarizes a lot of what this individual is advocating. Paraphrasing (with equivalent liberal handwaving of details as the talk does):

I want to write FPGA code like any other high-level programming language (but quickly dismiss that the hard part of developing native HDL is in fact that it's fundamentally a lot more akin to architecting physical hardware).

I want to use FPGAs because they're powerful (but only when a conceptual design aligns with a traditional microcontroller architecture).

I want to use FPGAs because they're flexible (but I need to nerf proprietary building blocks that endow specific devices with competitive edge, or trade significant performance in other ways to do so).

I want to commodify the development peripheral market (but only if they conform to a certain commercially trademarked and weakly defined spec[2] inherently constrained to low-frequency applications where sloppy SI is presumed to be inconsequential).

I want to standardize high-performance interfaces (so arbitrarily "abandon standards you don't like" and simply write your own, but only if they're not "annoying"...and look like a custom FIFO with specific signal labels).

I'm sure this will be an unpopular opinion, but the positive feedback loop generated around this topic needs a stability check.

[1] https://youtu.be/ME_e06ApxJA?t=2080

[2] https://digilentinc.com/Pmods/Digilent-Pmod_%20Interface_Spe...

[+] pclmulqdq|4 years ago|reply
FPGA evangelists like this really need a reality check. Programming in RTL is not easy, and it's not really a skill that most people need, so it doesn't make a lot of sense as an educational tool.

The "killer app" for hobbyist FPGAs might be some sort of "SoC builder" tool that lets you build a specific microcontroller around a few high-performance cores, and then program it like software.

But don't worry. Since 2000, mainstream FPGA programming has been right around the corner.

[+] elcritch|4 years ago|reply
Good points! His talk is full of enthusiasm, and I am partial to the overall notion of better integration of data and peripherals.

Still, it seems much of what he wants could be handled by approaches like the RP2040 chip from the RPi folks, which lets you define your own low-level pin protocols using assembly to program state machines. Also, many newer MCUs allow you to define almost any function on any pin via internal IO muxes. Some MCUs even contain a small FPGA / CPLD.

Also, I prefer the "Mikroe Click" add-on / peripheral boards from Mikroe. The ecosystem is decent, and it's handy for making quick proto PCBs that integrate sensors while letting you iterate on smaller circuits (even with hand-made proto boards: https://www.mikroe.com/proto-click). There are a few non-Mikroe vendors as well.

Given how difficult FPGAs seem to be to program, there are lots of alternatives for achieving the same goals. Things like RISC-V or OpenPOWER, plus the leveling off of semiconductor node shrinks, will probably lead to a lot more "customizable peripherals" in the next decade, but programmed from the MCU.

[+] haberman|4 years ago|reply
What constrains PMod to low-frequency applications with sloppy SI? What inherently limits the frequency?

I have been dabbling in FPGA programming on a Xilinx Nexys A7. Through some PMod ports, I've managed to drive the shift registers on an LED matrix display at 20MHz. Is this considered high frequency?

My oscilloscope shows that the signal is somewhat messy at this frequency. But the data sheet for the shift register suggests that the thresholds are 0.7xVcc and 0.3xVcc for high and low, so it seems to be good enough. Is this shift register unusually accepting of sloppy SI?

I'm not understanding how a PMod port would constrain the usable frequency. If my clock is going at 100MHz inside the FPGA, what prevents signals of this frequency being sent through PMod wires? Is a PMod port different than the normal I/O pins offered on a board like TinyFPGA?
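For what it's worth, nothing in the fabric stops you from toggling a PMod pin at a divided-down system clock; the limit is the connector and wiring, not the FPGA. A minimal sketch (module and pin names are hypothetical, not from any real Nexys A7 constraints file):

```verilog
// Sketch only: drive a PMod pin from a divided-down 100 MHz clock.
module pmod_shift #(
    parameter DIV = 2             // sclk = 100 / (2*DIV) = 25 MHz here
) (
    input  wire       clk100,     // 100 MHz system clock
    input  wire [7:0] data,       // byte to shift out, MSB first
    output reg        pmod_sclk,  // shift clock on a PMod pin
    output reg        pmod_sdata  // serial data on a PMod pin
);
    reg [7:0] divcnt = 0;
    reg [2:0] bitidx = 3'd7;

    always @(posedge clk100) begin
        if (divcnt == DIV - 1) begin
            divcnt    <= 0;
            pmod_sclk <= ~pmod_sclk;
            if (pmod_sclk)              // advance on the falling edge
                bitidx <= bitidx - 1;   // wraps 0 -> 7, repeating the byte
        end else begin
            divcnt <= divcnt + 1;
        end
        pmod_sdata <= data[bitidx];
    end
endmodule
```

At these rates the fabric is nowhere near its limit; what degrades first is the signal over the unmatched PMod header and wiring.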

[+] pclmulqdq|4 years ago|reply
As a (former) FPGA developer, this revolution has been coming for a while, and I'm still waiting. Unfortunately, the programming model actually makes things really difficult for people who think in a software frame of mind. High-level synthesis gets part of the way, but not for low-level peripherals.

PMODs are good for university/hobby projects with FPGAs, though, and I'm glad the bar is getting lower and lower. FPGAs are usually built on very high-tech processes, so the gap between an FPGA (at 28 nm) configured like an SoC and an actual microcontroller (at 90 nm) may be smaller than we think, and that may be the saving grace of FPGAs as hobbyist devices.

[+] blihp|4 years ago|reply
I think it's closer than most people might think. I suspect this is largely because there seems to be a wide gulf between most hardware and software hackers (i.e. they don't talk much and tend to throw things over the wall to each other) and FPGAs require you to have a foot in both disciplines. You're right that the mindset required is a hurdle but once I got over that (I'm primarily a software person and it only took the better part of an afternoon), I've been finding it pretty straightforward. I'm struggling more with dusting off my digital logic design skills... I'll get there, just need to get back in practice.

The current state of the open source tool chain is looking pretty reasonable (covering the development life-cycle with yosys/nextpnr/iverilog/verilator/etc) and a few FPGA families having been reverse engineered (the ones I find the most interesting are also the furthest along: the ice40 and ECP5) so you can buy/build open devices and use them with open tooling today. Granted, the devices to choose from are at the low end of the universe of FPGAs out there (i.e. <100k LUTs) but that's enough to do a lot of interesting things with for now and (hopefully) eventually higher-end devices will be added. That's not terribly different than the state of play re: microcontrollers and SoCs these days in the open source world.
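For reference, the open iCE40 flow mentioned above boils down to a few commands. A sketch only: file names, the part (--hx8k) and the package are placeholders for a specific board.

```shell
# Synthesize Verilog to a netlist with the iCE40 backend.
yosys -p 'synth_ice40 -top top -json top.json' top.v

# Place and route against the pin constraints for your board.
nextpnr-ice40 --hx8k --package ct256 \
              --json top.json --pcf pins.pcf --asc top.asc

# Pack the textual bitstream into binary and flash it.
icepack top.asc top.bin
iceprog top.bin
```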

The one part of his talk that I think is off base is replacing microcontrollers/SoCs with FPGAs in most scenarios. At any given process node, the specialized device (i.e. a processor) should be able to crush the generalized one (i.e. FPGA) every time in terms of cost, raw performance and perf/watt and it would likely take more than the current ~2-3 process node delta to overcome that. I don't look at an FPGA as being an either/or proposition: use a microcontroller/SoC for what they're good at in conjunction with an FPGA for what it's good at where the application warrants it. Sure, if you need a tiny bit of software control throw a soft core in there and save the hassle of adding a micro. But don't try to replace a hard core with a soft one just because you can.[1]

[1] Two cases where exceptions seem to be obvious and make sense: - New/experimental ISAs like RISC-V where you can't readily/affordably source CPUs in the open source world yet. - Emulators mainly because the ISA you need is typically at least effectively dead. Any devices still available aren't often good fits for what you're trying to accomplish.

[+] e-_pusher|4 years ago|reply
Curious - why did you get out of FPGA development, and what do you work on now?

I have anecdotally heard of FPGA/RTL devs leaving the field mostly because of the poor tooling etc.

[+] xfer|4 years ago|reply
Not to mention the tools are shit and synthesis takes a long time. If you thought C++ compilers were slow, just try using FPGA tools.
[+] ozmaverick72|4 years ago|reply
The talk mentioned a number of small FPGA boards. Can someone recommend one of them to get started playing with FPGAs?
[+] snvzz|4 years ago|reply
Open synth/route stack supports Lattice iCE40 and ECP5 the best.

iCE40 is a relatively simple, cheap, easy-to-work-with architecture with a focus on low power usage. iCESugar + nanoDLA (a cheap sigrok-friendly logic analyzer) is a good, cheap set to get started on FPGAs with. Both are cheap on AliExpress.

ECP5 is much larger, much more complex, and powerful enough to fit a whole computer in (e.g. the Minimig open Amiga implementation). I don't recommend going anywhere near it until you've gotten started with the iCE40, but a high-density, price-effective development board is the ULX3S.

[+] analog31|4 years ago|reply
Haven't gotten very far yet, but the TinyFPGA BX was easy to get set up and running "LED blink", which is the hardware version of hello world. I've used microcontrollers since forever and want to expand my options.
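For anyone curious what that looks like, a minimal blink sketch (clock frequency and LED polarity are assumptions; adjust the counter width for your board's clock):

```verilog
// A minimal "hardware hello world": blink an LED from a free-running counter.
module blink (
    input  wire clk,      // e.g. 16 MHz on the TinyFPGA BX
    output wire led
);
    reg [23:0] counter = 0;

    always @(posedge clk)
        counter <= counter + 1;

    assign led = counter[23];   // top bit toggles roughly once per second
endmodule
```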
[+] ccosm|4 years ago|reply
Barring some exponential increase in the usability of FPGA tools, I think the mainstreaming of these technologies will forever remain just around the corner.

More coarse-grained architectures like GPUs or reduced-scope architectures with a less intense cognitive load like the programmable IOs found in Raspberry Pi Picos or some Texas Instruments MCUs seem like a much more feasible solution for the vast majority of potential FPGA use-cases a tinkerer would run into.

[+] jvanderbot|4 years ago|reply
Q for the community. FPGA seems to be used for three things: 1) Custom I/O or high performance interfaces that aren't widely standardized 2) Prototyping boards / processor cores 3) Blazing fast implementations of algorithms that are hard to run otherwise.

Is that about right?

If even close, (3) is very interesting to me for a variety of reasons. Is my understanding correct that this is a reasonable use of FPGAs and that maybe now is a reasonable time to get into it?

[+] DoingIsLearning|4 years ago|reply
Caveat on 3):

The only real benefit of an FPGA for algorithms is when your algorithm benefits from parallelization.

There is nothing intrinsically faster about programmable logic.

The point is that execution is truly concurrent: as long as you have space in the FPGA fabric, you can _almost_ do everything at the same time.
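To make the concurrency point concrete, here is a sketch of several multiply-accumulate lanes that all update in the same clock cycle (lane count and widths are arbitrary; a CPU would iterate, the fabric does every lane at once):

```verilog
// LANES independent MAC units, all clocked simultaneously.
module parallel_mac #(
    parameter LANES = 8
) (
    input  wire                clk,
    input  wire [LANES*8-1:0]  a,    // packed 8-bit operands
    input  wire [LANES*8-1:0]  b,
    output wire [LANES*16-1:0] acc   // packed 16-bit accumulators
);
    genvar i;
    generate
        for (i = 0; i < LANES; i = i + 1) begin : lane
            reg [15:0] sum = 0;
            always @(posedge clk)
                sum <= sum + a[i*8 +: 8] * b[i*8 +: 8];
            assign acc[i*16 +: 16] = sum;
        end
    endgenerate
endmodule
```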

I say this as someone who has done a fair share of FPGA projects: it is very difficult to make the business case for an FPGA if your problem can be solved with GPU programming on a COTS GPU.

Regardless of whatever you read, FPGAs do have a purpose but will most likely continue to be used only in niche/custom applications.

[+] metaphor|4 years ago|reply
I offer 3 different reasons why FPGAs might make sense to consider:

1. (general) application reconfigurability where the benefits of SRAM performance, integrated hard IP, fabric real estate, and/or cycle-accurate RTL control cannot be easily replaced by other roughly asymptotic solutions;

2. (general) offloading massively/embarrassingly parallel compute architectures where commodity GPGPUs are either physically incompatible and/or power-wise too inefficient;

3. (enterprise) overall volume/performance/budget/schedule objectives fall short of the sort needed to justify pushing a bespoke ASIC design through process pipeline.

Relatively speaking, "blazing fast implementations of algorithms" is possible iff the compute bottleneck can be effectively parallelized; I'm not certain what was meant by "hard to run otherwise".

To put this into perspective, reliably clocking a non-trivial FPGA application at 200 MHz is objectively hard in a way that hobbyist types outside of the professional domain can't seem to understand (or arrogantly dismiss). And yet for the right class of problems, such an application may have the potential to put a GPGPU clocked in the GHz to shame on both compute throughput and power consumption fronts.

[+] coryrc|4 years ago|reply
Correct. I know how to use both well and would like to use an FPGA in my own projects, but it never makes sense to. Even a custom Lisp processor is slower than extra instructions on a faster processor :( and it's not like I have time for that anymore
[+] mikewarot|4 years ago|reply
CuriousMarc just did an intro[1] to logisim-evolution[2], which can generate verilog to then program FPGAs.

  [1] - https://www.youtube.com/watch?v=gYmDpcV0X7k
  [2] - https://github.com/reds-heig/logisim-evolution
It's the best of both worlds, simulate to design and debug, then push through to hardware.
[+] bryzaguy|4 years ago|reply
If you currently program microcontrollers, I bet you'll like this. It really checks all the boxes: why you'll be interested, how it works, what the landscape is, and advice for getting started/contributing. Doesn't feel like a pitch, unlike similar talks; rather, it seems like an actual cool opportunity! Makes me want to start learning Verilog.
[+] ellis0n|4 years ago|reply
A great talk and good slides. Just pair good development software with low-cost FPGA hardware and you won't need a CPU in many cases.
[+] spiritplumber|4 years ago|reply
Why not use a propeller or propeller 2? Similar benefits, cheaper, easier to code on.
[+] blihp|4 years ago|reply
FPGAs just let you remove a layer or two of abstraction and the corresponding latency (vs. a CPU) where you don't want or can't have it, giving you control, speed, and/or efficiency improvements in exchange. The price you pay is dealing with things at a much lower level (i.e. bits and wires)... if you don't need the trade-off, don't make it. They are for situations where no off-the-shelf solution quite solves your particular problem adequately.
[+] analog31|4 years ago|reply
I can't speak for the OP, but for myself, my motivation for learning FPGAs is to find out what they can do that I can't already do on a microcontroller. I'm already pushing a screaming-fast MCU to its limits. The engineers look at my designs and say: "You're trying to do stuff in an MCU that's easy in an FPGA."
[+] balefrost|4 years ago|reply
I believe one big advantage is that most FPGAs can have multiple clock domains. So while everything on the Propeller is tied to the system clock (and the system clock is hopefully fast enough that you have the resolution that you need to do the right thing at the right time), FPGAs can simply use clocks running at different frequencies.
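As a sketch of what that looks like (clock sources and frequencies are assumptions, e.g. two PLL outputs), two independent domains with a standard two-flop synchronizer where a signal crosses between them:

```verilog
// Two independent clock domains; flag_fast crosses into the slow domain.
module two_domains (
    input  wire clk_fast,   // e.g. 100 MHz domain
    input  wire clk_slow,   // e.g. 12 MHz domain
    input  wire flag_fast,  // produced in the fast domain
    output reg  flag_slow   // consumed in the slow domain
);
    reg sync_ff1;

    // Two-flop synchronizer: the first stage absorbs metastability.
    // Assumes flag_fast is held long enough to be sampled by clk_slow.
    always @(posedge clk_slow) begin
        sync_ff1  <= flag_fast;
        flag_slow <= sync_ff1;
    end
endmodule
```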
[+] amelius|4 years ago|reply
FPGAs offer parallelism limited only by the size of the fabric.
[+] dtgriscom|4 years ago|reply
To quote the top comment on the YouTube page: "When the speaker is talking about a slide, show the slide." Amen.
[+] nynx|4 years ago|reply
This is a great talk.