top | item 17731582


EchoAce | 7 years ago

It may be my lack of knowledge about optics, but from an ML perspective this seems rather mundane if not useless. Model training involves high levels of parallelism at large scale for difficult tasks, something I can’t see these optical chips doing. Does anyone have any further information that might enlighten me otherwise?


stochastic_monk | 7 years ago

By performing these transformations optically, they primarily get data parallelism (much like GPUs/TPUs), so I expected something like this. NVIDIA’s ACDC paper provides an FFT-accelerated neural network layer (similar to deep-fried convnets), with an offhand remark that the transformations could be performed optically. I wonder what kind of information bandwidth they can get, though.
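The core trick behind FFT-accelerated layers like ACDC and deep-fried convnets is that multiplying by a structured (here, circulant) weight matrix is equivalent to an elementwise product in the Fourier domain. A minimal NumPy sketch of that equivalence (variable names are illustrative, not from either paper):

```python
import numpy as np

# A circulant weight matrix is fully described by its first column c:
# multiplying by it is circular convolution, which the FFT diagonalizes.
rng = np.random.default_rng(0)
n = 8
c = rng.standard_normal(n)
x = rng.standard_normal(n)

# Dense path: build the full n x n circulant matrix and multiply (O(n^2)).
C = np.stack([np.roll(c, i) for i in range(n)], axis=1)
y_dense = C @ x

# FFT path: elementwise multiply in the Fourier domain (O(n log n)).
y_fft = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real

assert np.allclose(y_dense, y_fft)
```

The optical angle is that the FFT step itself can, in principle, be done by free-space optics rather than arithmetic.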

dekhn | 7 years ago

Physicists were using optical lenses to do approximate FFTs over a hundred years ago.
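The lens-as-Fourier-transformer idea (Fraunhofer diffraction) can be sketched numerically: the far-field amplitude behind an aperture, which a lens produces at its focal plane, is essentially the Fourier transform of the aperture function. A rough 1-D simulation, assuming a simple slit aperture:

```python
import numpy as np

# Fraunhofer diffraction: the far-field amplitude behind an aperture is
# (up to scaling) the Fourier transform of the aperture function.
n = 1024
x = np.linspace(-1, 1, n)
aperture = (np.abs(x) < 0.1).astype(float)  # a simple slit

far_field = np.fft.fftshift(np.fft.fft(aperture))
intensity = np.abs(far_field) ** 2

# The result is the familiar sinc^2 single-slit pattern: a dominant
# central lobe flanked by weaker symmetric side lobes.
peak = int(np.argmax(intensity))
assert peak == n // 2  # central maximum at zero spatial frequency
```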

drumttocs8 | 7 years ago

Can you use something like wavelength division multiplexing to get different data streams and achieve parallelism there?
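Conceptually, WDM parallelism means each wavelength carries an independent data stream through the same physical mesh, so one fixed linear transform is applied to many inputs at once. Modeled numerically (this is a sketch of the idea, not any specific photonic design), that is just a batched matrix-vector product:

```python
import numpy as np

# WDM sketch: one weight matrix W (the mesh's fixed linear transform)
# applied to k wavelength channels simultaneously -- a batched matvec.
rng = np.random.default_rng(1)
n, k = 4, 3                             # vector size, number of wavelengths
W = rng.standard_normal((n, n))
channels = rng.standard_normal((n, k))  # one column per wavelength

out = W @ channels                      # all wavelengths in one pass

# Identical to transforming each wavelength separately:
for i in range(k):
    assert np.allclose(out[:, i], W @ channels[:, i])
```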

d--b | 7 years ago

I can't read the article as it's behind a paywall, but if you can make a chip that's 100% optical, then when you beam your input data at the input end of the chip, you _instantly_ get the output at the other end. No cycles needed for multiplying, adding, and so on. Plus it wouldn't heat up the way silicon does.

p1esk | 7 years ago

you _instantly_ get the output

That's not how physics works, unfortunately.
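The pushback can be made concrete with back-of-the-envelope numbers: light still takes finite time to traverse the chip. Assuming an illustrative 1 cm on-chip optical path and a rough group index of 3.5 for a silicon waveguide (both assumptions, not figures from the article):

```python
# Even an all-optical chip has finite latency: light must traverse it.
c = 3.0e8          # speed of light in vacuum, m/s
n_group = 3.5      # assumed group index of a silicon waveguide
path = 0.01        # assumed 1 cm of on-chip optical path, m

latency_s = path * n_group / c
print(f"{latency_s * 1e12:.0f} ps")
```

That works out to roughly a hundred picoseconds: very fast, but not instant, and in practice modulation and detection bandwidth at the edges would dominate anyway.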