top | item 13596271

general_ai | 9 years ago

The way I see it, TF is about to pull _way_ ahead thanks to XLA JIT/AOT. All of a sudden you get the ability to fuse ops at a much more granular level, which could cut memory bandwidth requirements substantially. Frameworks like Torch can't do any fusing at all, since their computation is fully imperative: each op executes eagerly, so there's no graph to optimize across op boundaries. Tactical win for imperative frameworks, I suppose, but strategically a functional graph is the way to go. DB people realized this in the 70s; ML people are realizing it now.
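A toy sketch of the fusion point above (plain Python, not XLA itself; the function names are made up for illustration): unfused execution materializes a full intermediate array after every elementwise op, while a fused kernel does one round trip through memory per element. An XLA-style compiler derives the fused form automatically from the graph; an op-at-a-time imperative runtime never sees past the current op.

```python
def unfused(xs, w, b):
    # Three separate passes over memory, as an imperative
    # framework would run them op by op:
    t1 = [x * w for x in xs]          # write intermediate 1
    t2 = [t + b for t in t1]          # write intermediate 2
    return [max(t, 0.0) for t in t2]  # relu: final read + write

def fused(xs, w, b):
    # One pass: each element is loaded once and stored once,
    # which is what a graph compiler can emit after fusing
    # multiply + add + relu into a single kernel.
    return [max(x * w + b, 0.0) for x in xs]
```

Both produce identical results; the difference is purely in memory traffic, which is exactly the bandwidth saving claimed for fusion.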

congerous | 9 years ago

TF is way behind on UI, which is why it's making Keras its front-end. It's fairly slow on multi-GPU setups compared to Torch and neon. It might pull ahead in performance on GCE, but that's just for lock-in.

general_ai | 9 years ago

TF is in a fortunate position of having several UIs at this point. It's a lower level framework with a lot of power. If you don't need all that power, Keras or TFLearn or Slim are pretty great. If you do, it's there for you. I see no evidence that Google's goal with TF is to lock you into anything, and especially GCE. I'm a former Google employee, and I can tell you unequivocally — that's not how Google actually works.