ansk|1 year ago
I've seen and ignored a lot of "pytorch good, tensorflow bad" takes in my time, but this is so egregiously wrong I can't help but chime in. Facilitating graph-level optimizations has been one of the most central tenets of tensorflow's design philosophy since its inception. The XLA compiler was designed in close collaboration with the tensorflow team and was available in the tensorflow API as far back as 2017. It's not an exaggeration to say that pytorch is 5+ years behind on this front. Before anyone invokes the words "pythonic" or "ergonomic", I'd like to note that the tensorflow 2 API for compilation is nearly identical to torch.compile.
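For concreteness, the parallel described here can be sketched in a few lines (a minimal illustration; the function body and shapes are made up, not taken from any comment in the thread):

```python
import tensorflow as tf

# tf.function with jit_compile=True routes the traced graph through XLA,
# which is roughly the role torch.compile plays in the PyTorch 2 API.
@tf.function(jit_compile=True)
def dense_forward(x, w, b):
    return tf.nn.relu(tf.matmul(x, w) + b)

y = dense_forward(tf.ones((2, 3)), tf.ones((3, 4)), tf.zeros((4,)))
```

Decorating the function is the whole API surface, much like wrapping a callable with `torch.compile(fn)`.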
brrrrrm|1 year ago
TF's API doesn't seem very good. I just tried to figure out how to learn a linear mapping with TF and went through this:
1. googled "linear layer in tensorflow" and got to the page about linear.
2. spent 5 minutes trying to understand why monotonicity would be a central tenet of the documentation
3. realizing that's not the right "linear", I couldn't think of what the appropriate name would be
4. I know MLPs have them, google "tensorflow mlp example"
5. click the apr '24 page: https://www.tensorflow.org/guide/core/mlp_core
6. read through 10[!] code blocks that are basically just boilerplate setup of data and visualizations, entirely unrelated to MLPs
7. realize they call it "dense" in tensorflow world
8. see that "dense" needs to be implemented manually
9. think that's strange, google "tensorflow dense layer"
10. find a keras API (https://www.tensorflow.org/api_docs/python/tf/keras/layers/D...)
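The end of that search, as a minimal sketch (the layer sizes here are arbitrary): what other frameworks call a "linear" layer is `Dense` in Keras.

```python
import tensorflow as tf

# A Dense layer with no activation is a plain linear map:
# y = x @ kernel + bias
layer = tf.keras.layers.Dense(units=4, activation=None)

x = tf.ones((2, 3))  # batch of 2, feature dim 3
y = layer(x)         # shape (2, 4); kernel is built lazily as (3, 4)
```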
__rito__|1 year ago
I have seen some good ones, too, of course.
mft_|1 year ago
(This pattern is relatively easy to understand: smart people creating something get their gratification from the creation process, not writing tedious documentation; and this is systemically embedded for people at Google, who are probably directly incentivised in a similar way.)
sroussey|1 year ago
https://chatgpt.com/share/66fc325a-99e8-800d-925c-4924837b1e...
dekhn|1 year ago
From what I can tell, Google is moving in a direction that doesn't require tensorflow, and I don't see it gaining significant adoption outside Google, so it seems most likely we will simply see it deprecated in about 10 years. It's best to see it as a transitional technology that Jeff Dean created to spur ML development internally, which was mistakenly open sourced, and now, Jeff's reports typically use Jax or other systems.
zozbot234|1 year ago
Agreed, of course, but it's not like they came up with this approach from scratch. They seem to have just picked it up from Theano (now Aesara/PyTensor).
uoaei|1 year ago
JAX is right there. No need to beat a dead horse when there's a stallion in the stables.
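For comparison, the same linear mapping in JAX is only a few lines (a sketch with made-up shapes; `jax.jit` compiles through XLA, the same backend tensorflow uses):

```python
import jax
import jax.numpy as jnp

@jax.jit  # traces the function and compiles it with XLA
def linear(params, x):
    w, b = params
    return x @ w + b

params = (jnp.ones((3, 4)), jnp.zeros((4,)))
y = linear(params, jnp.ones((2, 3)))  # shape (2, 4)
```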