item 38474277


goldenkey | 2 years ago

The future of this is the same as what happened with PCs: the specialized hardware will initially be discarded, then added back as an accelerator, and eventually folded back in as an automatically invoked accelerator. It all comes full circle through optimizations, hinting, tiering, and heuristics.

AI will be used to select which net to load automatically. Nets will be cached, branch-predicted, etc.
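A toy sketch of that idea, not anyone's real system: a lightweight "router" picks which specialized net a task needs, and loaded nets are cached (here with a plain LRU policy) so repeated tasks skip the load, much like a branch-target cache. All names here are hypothetical.

```python
from functools import lru_cache

class TinyRouter:
    """Hypothetical stand-in classifier mapping a task to a net name."""
    def route(self, task: str) -> str:
        # A real router might itself be a small model; we fake it.
        return "code_net" if "code" in task else "chat_net"

@lru_cache(maxsize=4)  # keep recently used nets resident, LRU eviction
def load_net(name: str) -> str:
    # Real code would deserialize weights here; we return a placeholder.
    return f"<weights for {name}>"

router = TinyRouter()

def run(task: str) -> str:
    net = load_net(router.route(task))  # cache hit if this net was recent
    return f"ran {task!r} on {net}"

print(run("write code"))
print(run("write more code"))  # second call reuses the cached net
```

The point of the sketch is only the shape: selection, caching, and eviction are the same tricks CPUs already use, applied one level up.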

AI software and hardware don't yet support the scale we need for this type of generalized AI processor (think CPU, but call it an AIPU).

And no, GPUs aren't an AIPU; we can't even fit some of the largest models whole on these things without running them in pieces. They also don't yet have a higher-level language, like C, that compiles down to more specific actions after optimization (not PTX/LLVM/CUDA/OpenCL).


No comments yet.