Almost all FPGAs have entirely proprietary toolchains, from Verilog/HDL synthesis all the way down to the bitstream loaders used to program your board. These tools are traditionally very poorly built as applications: scriptable only with things like Tcl, terrible error messages, godawful user interfaces designed in 1999, and so on.[1] This makes integrating with them a bit more "interesting" for things like build system integration.
Furthermore, FPGAs are inherently much more expensive to reprogram than your application is to recompile. Synthesis and place-and-route can take excessive amounts of time even for some kinds of small designs. (And for larger designs, you tend to do post-synthesis edits on the resulting gate netlists to fix problems if you can, in order to avoid the horrid turnaround time of hours or days for your tools.) The "flow" of debugging, synthesis, testing, etc. is fundamentally different and comes with a substantially different cost model.
This means many "dream" ideas of on-the-fly reconfiguration -- "Just-in-Time"-ing your logic cells based on things like hot, active datapaths -- are difficult for all but the simplest designs, due to the sheer overhead of the toolchain and design phase. That said, some simple designs are still incredibly useful!
What's more likely with these systems is that you'll use, or develop, pre-baked sets of bitstreams to add custom functionality for customers or yourselves on demand: e.g. push out a whole fleet of Xeons, configure the cells of machines on the edge to do hardware VPN offload, configure backend machines running your databases with logic your database engine can use to accelerate queries, etc.
Whether or not that will actually pan out remains to be seen, IMO. Large enterprises can definitely get over the hurdles of shitty EDA tools and very long timescales (days) for things like synthesis, but small businesses that may still be in a datacenter will probably want to stick with commercial hardware accelerators.
Note that I'm pretty new to FPGA-land, but this is my basic understanding and impression, given some time with the tools. I do think FPGAs have not really hit their true potential, and I do think the shit tooling -- and frankly godawful learning materials -- has a bit to do with it, driving away many people who could otherwise learn.
On the other hand: FPGAs and hardware design are so fundamentally different for most software programmers that there are simply things that, I think, cannot easily be abstracted over at all -- and you have to accept at some level that the design, the costs, and the thought process are just different.
[1] The only actual open-source Verilog->Upload-to-board flow is IceStorm, here: http://www.clifford.at/icestorm/. This is only specifically for the iCE40 Lattice FPGAs; they are incredibly primitive and very puny compared to any modern Altera or Xilinx chip (only 8,000 LUTs!), but they can drive real designs, and it's all open source, at least.
Just as an example, I've been learning with IceStorm, purely because it's so much less shitty. When I tried the iCE40 software Lattice provided, I had to immediately fix 2 bash scripts that couldn't detect that my 4.x kernel was "Linux", and THEN sed /bin/sh to /bin/bash in 20 more scripts __which used bash features__, just so that the Verilog synthesis tool they shipped actually worked. What the fuck.
Oh! Also, this was after I had to create a fake `eth0` device with a juked MAC address (an exact MAC being required by the licensing software), because systemd assigns different names to Ethernet interfaces based on the device (TBF, for a decent reason: hotplugging and out-of-order device attachment mean you really want a deterministic naming scheme based on the card), but their tool can't handle that; it only expects eth0.
As a software programmer, my mind is blown by the idea of a 3GHz CPU. That's 3,000,000,000 cycles per second. The speed of light c = 299,792,458 meters per second. Which means that, in the time it takes to execute 1 cycle, 1/3,000,000,000 seconds -- light has moved approximately 0.0999m ~= 4 inches. Unbelievable. Computers are fast!
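The arithmetic above, as a quick sanity check in plain Python (nothing FPGA-specific here):

```python
c = 299_792_458.0            # speed of light, m/s
clock_hz = 3e9               # a 3 GHz CPU clock

meters_per_cycle = c / clock_hz               # distance light covers in one cycle
inches_per_cycle = meters_per_cycle / 0.0254  # 1 inch = 0.0254 m

print(f"{meters_per_cycle:.4f} m per cycle")   # 0.0999 m
print(f"{inches_per_cycle:.2f} in per cycle")  # 3.93 in, i.e. about 4 inches
```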
On the other hand... Once you begin to deal with FPGAs, you often run into "timing constraints", where your design cannot physically be mapped onto a board because it would take too long for a clock signal (read: light) to traverse the silicon along the path of gates chosen by the tool. You figure this out only after your tool has spent ages synthesizing. You suddenly wish the speed of light wasn't so goddamn slow.
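To make the timing-constraint point concrete -- the delay numbers below are made up for illustration, but the shape of the calculation is how static timing analysis works: the clock period must cover the slowest register-to-register path.

```python
# Illustrative (made-up) delays along one register-to-register path:
logic_delay_ns = 2.5    # combinational logic on the critical path
routing_delay_ns = 1.8  # wire delay from the route the tool picked
setup_ns = 0.6          # setup time of the capturing flip-flop

period_ns = logic_delay_ns + routing_delay_ns + setup_ns
fmax_mhz = 1000.0 / period_ns
print(f"critical path {period_ns:.1f} ns -> Fmax ~ {fmax_mhz:.0f} MHz")
# -> critical path 4.9 ns -> Fmax ~ 204 MHz
# If you asked for, say, a 250 MHz clock (a 4.0 ns period), this design
# fails timing -- and the tool only tells you after place-and-route.
```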
In one aspect of life, the speed of light, and computers, are very fast, and I hate inefficiency. In another, the speed of light is just way too slow for me sometimes, and it's irritating beyond belief, because my design would look so much nicer if it weren't for timing requirements forcing extra work.
How to reconcile this kind of bizarre self-realization is an exercise left to the reader.
"pre-baked sets of bitstreams" is a good way to describe what I expect to be the most likely scenario. In other words, spiritually very similar to ordinary configurable hardware (PCIe cards, CPUs, DIMMs, USB devices, etc)
Don't forget the niche nature of an FPGA, too. They are incredibly slow and power-hungry compared to a dedicated CPU, so, similar to the CPU/GPU split, you find that only particular workloads are suitable.
Your comment represents some of the best of HN (detailed, illuminating, informative), but is incredibly depressing for someone with an idle curiosity in FPGAs. This is what I've long suspected, and it seems that the barrier to entry is generally a bit too high for "software" people.
My understanding (and please correct me if I'm wrong!) is that it's pretty easy to burn out FPGAs electrically, in a way that isn't nearly so easy with other sorts of circuitry. So one downside of exposing your bitstreams is that people are going to blow things up and demand new chips.