felippee's comments

felippee | 7 years ago | on: Autopsy of a deep learning paper

Oh, somebody got triggered here! Yes, there is sarcasm in this post! And if you don't like it, fine. But please, don't give me bullshit about being a jerk. I think you probably have not seen a real jerk in your life yet.

felippee | 7 years ago | on: Autopsy of a deep learning paper

I'm not sure how to interpret these pictures. They don't suggest anything to me, and they certainly don't suggest anything about the quality of the representations. And BTW, how do you measure the quality of representations?

felippee | 7 years ago | on: Autopsy of a deep learning paper

Yes, it is certainly not fair that the network they spent a page explaining and probably weeks training and researching can be hardwired in 30 lines of Python. Very unfair. But that is the reality, and that is what the post states.
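To make that concrete, here is a minimal sketch of the kind of hardwiring I mean (my own illustrative code, not theirs; the 64x64 grid size is an assumption to match their toy setup):

    import numpy as np

    def coords_to_onehot(x, y, size=64):
        # The "hard" direction: paint a single on-pixel at (x, y).
        img = np.zeros((size, size), dtype=np.float32)
        img[y, x] = 1.0
        return img

    def onehot_to_coords(img):
        # The inverse: find the on-pixel and read off its coordinates.
        y, x = np.unravel_index(np.argmax(img), img.shape)
        return x, y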

Also, the idea of adding coordinates as features has been used in the past without anyone giving it much thought.
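If anyone wants to see how little thought it takes, a sketch (hedged: NumPy, channels-last layout, and normalized coordinates are all my own choices here):

    import numpy as np

    def add_coord_channels(batch):
        # batch: (N, H, W, C). Append normalized x and y coordinate
        # channels so the network can "see" where each pixel is.
        n, h, w, _ = batch.shape
        ys, xs = np.meshgrid(np.linspace(-1.0, 1.0, h),
                             np.linspace(-1.0, 1.0, w), indexing="ij")
        coords = np.broadcast_to(np.stack([xs, ys], axis=-1), (n, h, w, 2))
        return np.concatenate([batch, coords], axis=-1)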

Toy examples are great, as long as they are not trivial. Some guy, presumably smart, once said that "things should be as simple as possible but not simpler". The toy example they play with is just too simple.

felippee | 7 years ago | on: Autopsy of a deep learning paper

Author of the post here: I think their paper would have been much better if they had included the piece of code I wrote in Python, to make clear that the transformation they are learning is obviously trivial and that the fact that it works is not in question. That would have left them a lot more space to focus on something interesting, perhaps exploring the GANs a little further, because what they did there is somewhat rudimentary. But that omission (and the lack of context for previous uses of such features in the literature) left a vulnerability which I have every right to exploit in a blog post.

felippee | 7 years ago | on: Autopsy of a deep learning paper

Author of the post here. I totally agree that negative results should be published, but without the fanfare. I think they could have changed the tone of that paper and I would not have an issue with it. It is likely that if they had, they would never have gotten past some idiot reviewer who expects "a positive result" or some similar silliness; this is not a perfect world. But the paper as it stands makes strong claims about the novelty and usefulness of their gimmick. If it turns out your stuff is at least partially hollow and you took on a pompous tone, you have to be ready to take some heat. Science is not about patting friends on the back (which BTW is what happens a lot in the so-called "deep learning community"). Science is about fighting to get at some truth, even if that means taking some heat. People so fragile that they cannot take criticism should just not do it.

felippee | 7 years ago | on: Autopsy of a deep learning paper

The post mocks them primarily for learning the trivial coordinate transform. That is the core of the paper, and ridiculing that piece leaves very little left on the table. The ImageNet test is just an appendix, a cherry on top, a curiosity one might say.

felippee | 7 years ago | on: Autopsy of a deep learning paper

Author here (of the post, not the paper). I think you don't understand how science works. The whole point of the exercise (which indeed may have been forgotten these days) is to attack ideas and papers. The first line of attack should be your friends, to make sure you don't put anything silly out there. The second line of attack is the reviewers, who may or may not be idiots themselves, but who in a perfect world would serve the same purpose. The third line of attack is independent readers, people like me. I found the paper trivial and took the liberty of attacking it. It is not personal and should not be taken as such. These guys may in the future publish the most amazing piece of research ever, but this one is not it. They should realize this, and my blog post serves that purpose. If somebody gets offended and takes it personally, so be it. I think people should have a bit thicker skin, especially in science. I have taken quite a bit of bullshit myself (and I'm sure I will have to take more) and never complained. So relax, read the paper, read the post, learn something from both, and move on.

felippee | 7 years ago | on: Self-Supervised Tracking via Video Colorization

No, there is something special about machine learning and AI in general. SIGGRAPH papers, for example, are different: no one there claims to solve more than they actually solve. DL is soaked with hype and self-congratulatory BS. The best way to spot it is to check the citations. Typically they solve an already solved problem, skipping entirely any pre-deep-learning literature on it (or, if they do cite it, only to dump BS on it), and then just cite a few of their own more or less relevant papers. I'm aware I'm overgeneralizing here, and not every paper is like that, but I've seen enough to detect a trend.

It is as if defending or advertising "deep learning" were the purpose of the paper. It is not. The purpose of a paper is to show a solution to a problem. Much of the DL literature (again, not all of it) is a solution in desperate search of a problem, rather than the opposite.

I think many of these papers (including this one) would make great blog posts, but there just isn't quite enough scientific content for a full-blown paper. A curiosity, a nice gimmick, but nothing more. Not really a solution to a problem, and no real sign of non-trivial universality.

felippee | 7 years ago | on: Self-Supervised Tracking via Video Colorization

Right, I totally agree. What is more, I think this same result could have been sold without the pompous deep learning bullshit and been received quite differently. If they had not claimed to reinvent the wheel, but rather modestly noted their observation (which, in a limited way, is actually quite cool; that is coming from a known deep learning skeptic, at least on this forum), it would have made a much better impression.

The same is actually true of many DL papers. They would be genuinely cool if they weren't oversold.

felippee | 7 years ago | on: AI winter is well on its way

Author here: seriously, I'm on the front page for the second day in a row!?

The sheer viral popularity of this post, which really was just a bunch of relatively loose thoughts, indicates that there is something in the air regarding an AI winter. Maybe people are really sick of all the hype pumping...

Just a note: I'm a bit overwhelmed, so I can't address all the criticism. One thing I would like to state, however, is that I'm actually a fan of connectionism. I think we are doing it naively, though, and instead of focusing on the right problems we are inflating a hype bubble. There are applications where DL really shines, and there is no question about that. But in the case of autonomy and robotics we have not even defined the problems well enough, never mind solved anything. Unfortunately, those are the areas where most of the bets and expectations sit, and therefore I'm worried about a winter.

felippee | 7 years ago | on: AI winter is well on its way

It's not like humanity really needs another chess-playing program 20 years after IBM solved that problem (now merely using 1000x more compute). I just find all these game-playing contraptions really uninteresting. There are plenty of real-world problems of much higher practicality left to solve. Moravec's paradox on full display.

felippee | 7 years ago | on: AI winter is well on its way

> Also why does every single result has to be breathtaking?

If you build up the hype like, say, Andrew Ng, it had better be. Also, if you consume more money per month than all the CS departments of a mid-sized country, it had better be.

felippee | 7 years ago | on: AI winter is well on its way

I'm skeptical, and side with Rodney Brooks on this one. First, reinforcement learning is incredibly inefficient. Sure, humans and animals have forms of reinforcement learning, but my hunch is that it operates on an already incredibly semantically relevant representation and utilizes a forward model. That model is generated by unsupervised learning (which is far more data-efficient). Actually, I side with Yann LeCun on this one; see some of his recent talks. But Yann is not a robotics guy, so I don't think he fully appreciates the role of a forward model.

Now, using models for RL is the obvious choice, since trying to teach a robot even a basic behavior with pure RL is absurdly impractical. But the problem is that when somebody builds such a model (a 3D simulation), they put in the stuff they think is relevant for representing reality. And that is the same trap as labeling a dataset: we only put in what is symbolically relevant to us, omitting a bunch of low-level things we never even perceive.
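To make "forward model" concrete, here is a toy sketch of what I mean (purely illustrative, with my own hypothetical names, not anyone's published method): learn to predict the next state, then choose actions by rolling the model forward in imagination instead of burning real-world trials:

    import numpy as np

    def plan_with_forward_model(predict_next, reward, state,
                                candidate_actions, horizon=5):
        # predict_next(state, action) -> predicted next state: the learned
        # forward model. reward(state) -> scalar. We score short imagined
        # rollouts, which is where the data efficiency comes from.
        best_action, best_return = None, -np.inf
        for action in candidate_actions:
            s, total = state, 0.0
            for _ in range(horizon):
                s = predict_next(s, action)  # imagine, don't execute
                total += reward(s)
            if total > best_return:
                best_action, best_return = action, total
        return best_action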

This is a longer subject, and an HN comment is not enough to cover it, but there is also something to be said about complexity. Reality is not just more complicated than simulation; it is complex, with all the consequences of that. Every attempt to put a human-filtered input between the AI and the world will inherently lose that complexity, and ultimately the AI will not be able to immunize itself against it.

This is not an easy subject and if you read my entire blog you may get the gist of it, but I have not yet succeeded in verbalizing it concisely to my satisfaction.

felippee | 7 years ago | on: AI winter is well on its way

I certainly agree there are many interesting things going on; there is no question about that. But most of them are toy problems in some restricted domains, while a huge bag of equally interesting real-world problems sits untouched. And let me tell you, all those VCs who put in probably way north of $10B are not looking forward to more NIPS papers or yet another style-transfer algorithm.

felippee | 7 years ago | on: AI winter is well on its way

Sure, but have you heard of Moravec's paradox? And if so, don't you find it curious that over 30 years of exponential Moore's-law progress in computing, almost nothing has improved on that side of things, while we just kept playing fancier games?