This is a fun idea. With these kinds of coding tasks you won't get any advantage from using the differentiable programming paradigm, but it is a nice reminder of how syntactically bad TensorFlow is. The code of any differentiable program should look identical to that of a non-differentiable program. Maybe a small annotation à la TorchScript [0] can be tolerated, but not reimplementing everything via function calls with overly descriptive names.
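To illustrate how small the TorchScript annotation from [0] is: a single decorator is enough, and the body stays plain Python (the function and input values here are made up for illustration, not taken from the article):

```python
import torch

@torch.jit.script
def count_increases(xs: torch.Tensor) -> int:
    # Plain Python control flow; TorchScript compiles it as-is,
    # keeping the code readable while making it serializable.
    n = 0
    for i in range(1, xs.size(0)):
        if xs[i] > xs[i - 1]:
            n += 1
    return n
```

The scripted function can then be saved and loaded independently of the Python source, while the code reads like any ordinary function.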
Btw, the link to the GitHub repo is broken. Copy-pasting the URL works.
IMO it's a huge advantage to be able to write torch model code that looks like how you'd write the same program in pure Python, and still have it be serializable.
Author here. The goal of these challenges (and the related articles) is to demonstrate how TensorFlow can be used like any other programming language. I am totally against using machine learning where it's not required at all, and I hope to be able to solve all the puzzles without having to rely upon deep-learning-based solutions (but who knows: if there's something that can be easily expressed as an optimization problem, having a language that's differentiable can help a lot).
If in the other puzzles there's some optimization problem that can be expressed in a differentiable way, then I'd use ML for sure. But as long as there exists a deterministic solution, ML is just a waste (I say this as an ML researcher :) )
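As a sketch of what "expressed as an optimization problem" buys you in a differentiable language (toy objective chosen purely for illustration):

```python
import tensorflow as tf

# Toy problem: minimize f(x) = (x - 3)^2 by gradient descent.
x = tf.Variable(0.0)

for _ in range(100):
    with tf.GradientTape() as tape:
        loss = (x - 3.0) ** 2
    grad = tape.gradient(loss, x)  # automatic differentiation: d(loss)/dx
    x.assign_sub(0.1 * grad)       # x converges toward the minimum at 3
```

The same loop works unchanged for any objective built from TensorFlow ops, which is the payoff of having differentiation built into the language.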
That's Google AdSense auto ads (ML-based placement of the ads on the page). I've set the level to "low", but I guess the number and positioning of ads are modulated by Google using the info it has on you. I see only 3 ads, but maybe different users see different numbers of ads.
All the comparisons like > are better written using their TensorFlow equivalents (e.g. tf.greater). AutoGraph can convert them (you could write >), but it's less idiomatic, and I recommend not relying upon the automatic conversion, in order to keep full control.
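A small sketch of this explicit style (the function name and puzzle-like input values are made up, not from the article):

```python
import tensorflow as tf

@tf.function
def count_increases(measurements):
    # Explicit TensorFlow ops instead of Python operators:
    # tf.greater rather than `>`, tf.reduce_sum rather than sum().
    increased = tf.greater(measurements[1:], measurements[:-1])
    return tf.reduce_sum(tf.cast(increased, tf.int32))

result = count_increases(tf.constant([199, 200, 208, 210, 200, 207]))
```

Written this way, every op that ends up in the graph is visible in the source, with no AutoGraph rewriting involved.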
...but I'm not sure you realized that the for loop and the if statement in your code are being transparently compiled to dataset.map() and tf.cond() for you by Autograph :)
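For instance, AutoGraph turns ordinary Python control flow over tensors into graph ops behind your back (a minimal sketch, not the article's actual code):

```python
import tensorflow as tf

@tf.function
def count_positive(values):
    n = tf.constant(0)
    for v in values:   # AutoGraph compiles this loop to tf.while_loop
        if v > 0:      # ...and this branch to tf.cond
            n += 1
    return n
```

The source looks like plain Python, but tracing the function shows only graph ops: the `for` and `if` never execute as Python at graph run time.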
Even if AutoGraph is now able to convert them correctly, I still prefer to have every operator explicitly converted whenever possible.
The loop, luckily, never had these transpilation problems.
Good reading! It would be interesting to have other similar challenges, such as Project Euler, solved in idiomatic TensorFlow and PyTorch. Also some examples of more complicated algorithms, such as sorting/graph/tree algorithms, reimplemented in these frameworks.
It would be a great introduction to these frameworks for people who never touched anything ML-related, leaving the neural network content to later in the learning process.
Learning how to create differentiable algorithms and neural networks would be easier once the way those frameworks work is understood (ingesting data, iterating over datasets, running, debugging, profiling, etc.).
If you are starting with neural networks or differentiable programming, learning both the math and the frameworks at the same time can be quite overwhelming.
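The "ingesting data, iterating over datasets" part can be shown in a few lines with TensorFlow's tf.data API (values here are arbitrary):

```python
import tensorflow as tf

# Ingest -> transform -> batch -> iterate: a minimal tf.data pipeline.
dataset = (tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])
           .map(lambda x: x * 2)   # element-wise transformation
           .batch(3))              # group elements into batches of 3

batches = [batch.numpy().tolist() for batch in dataset]
# batches == [[2, 4, 6], [8, 10, 12]]
```

None of this involves neural networks, which is exactly why it works as a gentle first step into the framework.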
wtf
A Reddit user also contacted me to tell me that. Do you have any idea of the reason, and what I can do to avoid this censorship?
The Reddit user guessed it was because of the word "leone" in the domain name, but it is part of my surname; I can't change it :<
mlajtos | 4 years ago
[0] https://pytorch.org/docs/stable/jit_language_reference.html#...
nevermore | 4 years ago
tubby12345 | 4 years ago
keyle | 4 years ago
Day 8 "FML!" checks python version installed...
me2too | 4 years ago
not2b | 4 years ago
me2too | 4 years ago
Let's see if I'm able to face them all (it's also my first year joining the AoC, so it's totally new for me).
an-allen | 4 years ago
Was hoping to see some training of a model to produce outputs. Good effort nonetheless!
me2too | 4 years ago
NeutralForest | 4 years ago
me2too | 4 years ago
exdsq | 4 years ago
me2too | 4 years ago
That's unfortunate
werdnapk | 4 years ago
cglong | 4 years ago
brilee | 4 years ago
me2too | 4 years ago
antpls | 4 years ago
0-_-0 | 4 years ago
me2too | 4 years ago
bufferoverflow | 4 years ago
NotEvil | 4 years ago
me2too | 4 years ago
udbhavs | 4 years ago