diamondman | 11 years ago
GPUs are vector processors. They are good at applying the same operation to lots of independent pieces of data (parallel math). They can be good at simulating physics, since physics is largely described with matrix math.
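To make "repetitive independent math" concrete, here is a hedged sketch of the kind of physics update a GPU runs well: identical arithmetic applied to every particle, with no particle depending on another. On a real GPU each particle would map to one thread; a plain Python loop stands in for that here, and the `step` function name is illustrative, not from any particular API.

```python
# Sketch: a data-parallel physics timestep. Every particle's update is
# independent, which is exactly the shape of work a GPU's vector lanes eat up.

def step(positions, velocities, dt, gravity=-9.8):
    """Advance every particle one timestep; each update is independent."""
    new_vel = [(vx, vy + gravity * dt) for (vx, vy) in velocities]
    new_pos = [(x + vx * dt, y + vy * dt)
               for (x, y), (vx, vy) in zip(positions, new_vel)]
    return new_pos, new_vel

pos, vel = step([(0.0, 10.0)], [(1.0, 0.0)], dt=0.1)
```

Because no update reads another particle's result, the loop body can be dispatched to thousands of GPU threads at once, which is why this workload fits and gate-level chip simulation (below) does not fit nearly as cleanly.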
You could simulate the gates in a chip, though this is a MASSIVE problem and will tax the GPU for meh results. And if you decided to simulate a chip on a GPU, simulating an FPGA would be quite pointless: FPGAs trade away efficiency precisely so they can be reconfigured to implement (not simulate) different chips.
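For a sense of what gate-level simulation actually is, here is a hedged sketch: each gate becomes a boolean operation, and simulating a real chip means re-evaluating millions of these every simulated clock, which is why it's a MASSIVE problem. The full-adder netlist below is my own illustrative example, not anything from the thread.

```python
# Sketch: simulating the gates of a 1-bit full adder. A real chip is
# millions of gates like these, re-evaluated every simulated clock cycle.

def full_adder(a, b, cin):
    """Evaluate a standard XOR/AND/OR full-adder netlist, gate by gate."""
    s1   = a ^ b        # XOR gate
    sum_ = s1 ^ cin     # XOR gate
    c1   = a & b        # AND gate
    c2   = s1 & cin     # AND gate
    cout = c1 | c2      # OR gate
    return sum_, cout
```

Note the data dependencies: `sum_` needs `s1`, `cout` needs `c1` and `c2`. Chains like that run through an entire chip, which is part of why gate simulation maps poorly onto a GPU's independent vector lanes.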
This would be like running a SPARC emulator in a Java VM implementation of an Intel VM... maybe throw JavaScript in there somewhere. Each layer of emulation, simulation, or implementation robs resources.
Yes, those architectures are more open, but they do not do the same thing. More work on those setups, so that, say, Postgres can offload certain math to the GPU, is great and should be done. But this is somewhat different from FPGAs (though in the same spirit).
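A hedged sketch of the kind of database math worth offloading: an aggregate over a column applies the same arithmetic to every row independently, which maps cleanly onto GPU lanes. Pure Python here only illustrates the shape of the work; the actual GPU offload machinery (e.g. an OpenCL kernel) is assumed, not shown, and the column data is made up.

```python
# Sketch: a column aggregate, e.g. SUM(price * (1 - discount)).
# Every row's computation is independent -- embarrassingly parallel --
# so a database could hand this loop to a GPU kernel.

prices    = [9.99, 4.50, 12.00, 7.25]
discounts = [0.10, 0.00, 0.25, 0.05]

revenue = sum(p * (1.0 - d) for p, d in zip(prices, discounts))
```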
CPUs, GPUs, and FPGAs each solve different types of problems very well. But they do not run each other's problems well, and they certainly do not implement each other well. Well, with one exception: you can build a reasonable GPU with shaders in an FPGA, but not one that outperforms the FPGA it is implemented on. If you hardwired one FPGA to run the equivalent operations of an OpenCL shader, and ran that shader on a GPU implemented in an identical FPGA, the hardwired FPGA would win hands down.