top | item 12709654

yid | 9 years ago

This aspect of their new chips is massively underrated. An FPGA is the future-proof solution here, not chip-level instructions for the soup-du-jour in machine learning.

Edit: which is not to say that I'm not welcoming the new instructions with open arms...

astrodust | 9 years ago

I'm not as hyped about FPGA-in-CPU as I am about Intel releasing a specification for their FPGAs that would allow development of third-party tools to program them.

Right now the various vendors seem to insist on their own proprietary everything, which makes it hard to streamline your development toolchain. Many of the tools I've used are inseparably linked to a Windows-only GUI application.

wcrichton | 9 years ago

We're starting a project at Stanford to solve just this problem! The Agile Hardware Center: https://aha.stanford.edu/

The plan is to have a completely open-source toolchain, from the HDL down to place-and-route (P&R).

ronald_raygun | 9 years ago

I'm not too familiar with FPGAs, but isn't the tradeoff that, because they're flexible, they're usually much slower than CPUs/GPUs, and that they're usually used to prototype an ASIC? How is FPGA-in-CPU going to be a good thing?

greendragon | 9 years ago

It will only really take off if they can get the user experience of getting an HDL program onto the FPGA to the same level as getting a shader program onto the GPU. Unless they can do all of that in microcode, though, it's going to force them to open up some of Altera's closed processes. I'm hopeful, but there's a lot of proprietary baggage around FPGAs that I think has kept them from truly reaching their potential.

yid | 9 years ago

> I'm hopeful, but there's a lot of proprietary baggage around FPGAs that I think has kept them from truly reaching their potential.

I don't really know much about this aspect; could you elaborate? I'm genuinely curious.

oneshot908 | 9 years ago

There's no evidence so far that FPGAs come anywhere close to GPUs with respect to deep-learning performance. All the benchmarks so far, through Arria 10, show them to be mediocre for inference, and the lack of training benchmark data IMO implies they're a disaster for that task. See also Google flat-out refusing to say which processors they measured TPU performance and efficiency against.

FPGAs are best when deployed on streaming tasks. And one would think inference would be just that, yet the published performance numbers are on par with 2012 GPUs. That said, if FPGAs had as efficient a memory architecture as GPUs do, things could get interesting down the road. But by then I suspect ASICs (including one from NVIDIA) will be the new kings.

wyldfire | 9 years ago

GPUs have GDDR5, and that is primarily what allows them to dominate in so many applications. Many of those applications are memory-bound rather than computation-bound, which means that super-fast GDDR memory, paired with algorithms that do a predictable linear walk through memory, gets an enormous speed boost over almost anything else out there.
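To make the memory-bound point concrete, here's a roofline-style back-of-the-envelope sketch. The peak-FLOP and bandwidth figures below are illustrative assumptions for a GDDR5-era GPU, not measurements:

```python
# Roofline-style estimate: is a streaming kernel memory-bound?
# All hardware numbers below are illustrative assumptions for a
# GDDR5-era GPU, not measured values.

PEAK_FLOPS = 6e12   # assumed compute peak: 6 TFLOP/s single precision
PEAK_BW = 300e9     # assumed memory bandwidth: 300 GB/s GDDR5

def attainable_flops(flops_per_byte):
    """Roofline model: performance is capped by compute or by bandwidth,
    whichever roof the kernel's arithmetic intensity hits first."""
    return min(PEAK_FLOPS, PEAK_BW * flops_per_byte)

# SAXPY (y = a*x + y): 2 flops per element, 12 bytes moved
# (read x, read y, write y; 4 bytes each in fp32).
saxpy_intensity = 2 / 12  # flops per byte

perf = attainable_flops(saxpy_intensity)
print(f"SAXPY arithmetic intensity: {saxpy_intensity:.3f} flop/byte")
print(f"Attainable: {perf/1e9:.0f} GFLOP/s of a {PEAK_FLOPS/1e9:.0f} GFLOP/s peak")
# The kernel hits the bandwidth roof long before the compute roof,
# so faster memory helps far more than faster ALUs.
```

Under these assumed numbers the linear-walk kernel can reach only ~50 GFLOP/s of a 6,000 GFLOP/s peak, which is why the memory system, not the ALUs, decides the winner.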

protomok | 9 years ago

> But by then I suspect ASICs (including one from NVIDIA) will be the new kings.

Yes, I suspect Nvidia has been developing/prototyping a Deep Learning ASIC for some time now. The power savings from an ASIC (particularly for inference) are just too massive to ignore.

Nvidia also seems to be involved in an inference-only accelerator from Stanford called EIE (excellent paper here: https://arxiv.org/pdf/1602.01528v2.pdf).
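Part of why an inference-only accelerator like EIE saves so much power is that it computes only on the nonzero weights of a pruned network. A toy plain-Python sketch of that idea, using CSR storage (the paper's actual compressed format and zero-activation skipping are more involved; this is just the core trick):

```python
# Toy sparse matrix-vector multiply over CSR-compressed weights.
# Illustrates the idea behind EIE-style inference (compute only on
# nonzero weights); this is not the paper's actual storage format.

def csr_matvec(values, col_idx, row_ptr, x):
    """y = W @ x where W is stored in CSR form (values/col_idx/row_ptr)."""
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        # Iterate only over the stored (nonzero) entries of row i.
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# A 3x4 weight matrix with 75% of entries pruned to zero:
# [[0, 2, 0, 0],
#  [1, 0, 0, 3],
#  [0, 0, 0, 0]]
values  = [2.0, 1.0, 3.0]
col_idx = [1, 0, 3]
row_ptr = [0, 1, 3, 3]

x = [1.0, 2.0, 3.0, 4.0]
print(csr_matvec(values, col_idx, row_ptr, x))  # → [4.0, 13.0, 0.0]
```

With 75% of the weights pruned, the loop does three multiply-adds instead of twelve; in silicon that translates directly into fewer memory fetches, which dominate inference energy.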

jwr | 9 years ago

Oh, these are entirely different approaches. New instructions are something I can use immediately (well, after I get the CPU). I know how to feed data to them, I can debug them, they are predictable, my tools support them, overall — I can make good use of them from day one.

As for FPGAs, anybody who has actually used one, especially for more than a toy problem, will tell you that the experience is nowhere near that. Plus, at the moment, we are comparing a solution that might someday materialise with something that will simply appear as part of the next CPU.
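For a sense of what these "chip-level instructions" do in jwr's day-one scenario: VNNI-style ML instructions fuse a quantized multiply-accumulate (int8 lanes into an int32 accumulator) into a single operation. A plain-Python model of that primitive (the function name and structure are mine, not Intel's):

```python
# Model of the quantized fused multiply-accumulate that VNNI-style ML
# instructions execute in hardware: int8 lanes multiplied pairwise and
# accumulated into int32. Pure-Python illustration; names are mine.

def dot_i8_acc_i32(acc, a_lanes, b_lanes):
    """Multiply int8 lanes pairwise and add to an int32 accumulator,
    clamping at the int32 limits (hardware offers both saturating and
    wrapping variants; the saturating one is modeled here)."""
    INT32_MAX, INT32_MIN = 2**31 - 1, -2**31
    for a, b in zip(a_lanes, b_lanes):
        assert -128 <= a <= 127 and -128 <= b <= 127, "int8 range"
        acc += a * b
    return max(INT32_MIN, min(INT32_MAX, acc))

# Quantized weights and activations as int8:
weights     = [127, -64, 3, 0]
activations = [2, 1, -1, 100]
print(dot_i8_acc_i32(0, weights, activations))  # 127*2 - 64 - 3 + 0 = 187
```

Because the operation is fixed, documented, and reachable from an intrinsic or even plain autovectorized C, it fits jwr's point: existing compilers, debuggers, and profilers handle it on day one, with no bitstream toolchain in sight.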

sliverstorm | 9 years ago

FPGAs sound amazing, but if you work with them you learn they can be a real PITA. The vision of the dynamically shapeshifting coprocessor FPGA is a long way in the future.