The primary contribution of this technique seems to be that it uses specific assumptions supported by neuroscience research to allow for composability of learning and better generalization. By introducing these assumptions (e.g. contours define objects), they reduce the complexity the model has to learn and thereby the amount of data it needs.
Obviously, the question then becomes: what happens when you have visual situations that violate or come close to violating the assumptions made?
I'm not familiar enough with the specifics of RCNs to be able to answer this; maybe someone else can. Very interesting paper and approach regardless.
After six or seven click-throughs, I downloaded the PDF.
I haven't read it, but skimming, I could see that there were definitely no formulas in it at all. Which sort of says that, at best, what it tells you is "we did this thing, which is kind of like X and kind of like Y with Z changes". Essentially, there's no way to reproduce or understand it by itself. The first reference then had a link behind a paywall...
So despite lots of apparent explanation, it seems like what they're actually doing is essentially unspecified (at least to the interested layman). It seems like at best an expert in the field of "compositional models" could say what is happening.
Also, the paper is published under the heading of an AI firm in Fremont, CA, rather than folks in a university, with the many authors listed by initial and last name...
Again: no one cares about CAPTCHA in the deep learning world compared to other, more challenging benchmarks. I wouldn't be surprised if many optimizations could be made with ANY kind of effort put into it. Still waiting for Vicarious to go beyond MNIST and text CAPTCHAs.
This is trueish, but there is more to it than that.
It is true for sure that absolute performance on MNIST isn't the most interesting thing in the world.
But when introducing a new tool or technique, being able to show competitive performance on MNIST is a good way to show that it isn't an entirely useless thing.
I'd note that the recent Sabour, Frosst and Hinton paper (https://arxiv.org/pdf/1710.09829.pdf), where they finally got Hinton's capsules to work, spends most of the paper analyzing how it performs on MNIST, and only a short section on other datasets.
I assume I don't need to point out that Geoff Hinton does know a little about deep learning, and if he thinks submitting a NIPS paper on MNIST is acceptable in 2017 then I'm not going to argue too hard against it.
Paper abstract highlights the model's data efficiency several times:
Learning from few examples and generalizing to dramatically different situations are capabilities of human visual intelligence that are yet to be matched by leading machine learning models. By drawing inspiration from systems neuroscience, we introduce a probabilistic generative model for vision in which message-passing based inference handles recognition, segmentation and reasoning in a unified way. The model demonstrates excellent generalization and occlusion-reasoning capabilities, and outperforms deep neural networks on a challenging scene text recognition benchmark while being 300-fold more data efficient. In addition, the model fundamentally breaks the defense of modern text-based CAPTCHAs by generatively segmenting characters without CAPTCHA-specific heuristics. Our model emphasizes aspects like data efficiency and compositionality that may be important in the path toward general artificial intelligence.
Unclear how to run on the CAPTCHA examples referenced in the paper, even though they did make the datasets for those examples available.
Bummer; a big part of what the paper touts as being so great about this RCN model is its ability to segment sequences of characters (of indeterminate length, even!). Sad that I cannot easily verify this for myself!
We talked about releasing more comprehensive proof of concept code, but ultimately decided against it. While helpful for other researchers, offering anyone on the internet a ready-to-use arbitrary captcha breaker seemed like a net-negative for society.
I'd love to read this, but the faint text on a white background... good god. I went through the code looking to change the background so I could read it and found this:
body { text-rendering: optimizeLegibility; }
Huh. Did they change it? I see a very thin font in the header and in bulleted lists, but the rest of the text on the page is black (literally #000000) and relatively bold compared to what I'm used to seeing online (could just be that it's slightly larger, which is also good! it's by no means big, just nice to see something not pointlessly tiny).
The header has the awful "ObjektivMk1-Thin" font mentioned elsewhere, but for me the body is a normal "Roboto","Helvetica Neue",Helvetica,Arial,sans-serif font-family.
Featuring some of the worst typography I've seen on the internet. There clearly was an attempt, but just leaving font-face as default would've been more readable.
This paper looks really interesting to me, although after quickly reading the introduction it's evident that I'm going to have to invest quite a bit of time and effort in the paper to grasp its key ideas. I come from more of an encoding-decoding, deep/machine-learning background, as opposed to a probabilistic graphical modeling (PGM) background, and my knowledge of neuroscience is minimal.
To date, my experience with "deep PGM models" (for lack of a better term) is limited to some tinkering with (a) variational autoencoders using ELBO maximization as the training objective, and to a much lesser extent (b) "bi-directional" GANs using a Jensen-Shannon divergence between two joint distributions as the training loss.
Has anyone here with a similar background to mine had a chance to read this paper? Any thoughts?
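For anyone who hasn't seen it, the ELBO objective mentioned above decomposes into a reconstruction term and a KL term; with a diagonal-Gaussian encoder and a standard-normal prior, the KL is closed-form. A minimal numerical sketch (function names invented; this is illustrative, not the paper's model):

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) )."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

def elbo(x, x_recon, mu, log_var):
    """Evidence lower bound for a Bernoulli decoder:
    reconstruction log-likelihood minus the KL regularizer."""
    eps = 1e-9  # avoid log(0)
    rec = np.sum(x * np.log(x_recon + eps) + (1 - x) * np.log(1 - x_recon + eps))
    return rec - gaussian_kl(mu, log_var)

# A posterior that matches the prior contributes zero KL,
# so the ELBO is just the reconstruction log-likelihood.
x = np.array([1.0, 0.0, 1.0])
print(elbo(x, x_recon=np.array([0.99, 0.01, 0.99]),
           mu=np.zeros(2), log_var=np.zeros(2)))
```

Training a VAE then amounts to maximizing this quantity (in practice, minimizing its negative) over encoder and decoder parameters.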
It looks like RCN sits between traditional machine learning (with manual feature selection) and 'modern' neural networks (CNNs). The traditional methods are too rigid to capture the essential information, while CNNs are sometimes too flexible to avoid overfitting. Unlike CNNs, RCNs have a predetermined structure. Humans are not born as blank slates; we have a neural structure encoded in our genes, so we don't need millions of training samples to recognize objects. So maybe RCN is onto something.
I am curious how RCN performs on real-life images like ImageNet, and how it holds up against adversarial examples. If it can easily recognize adversarial examples, that would be very interesting...
> In 2013, we announced an early success of RCN: its ability to break text-based CAPTCHAs like those illustrated below (left column). With one model, we achieve an accuracy rate of 66.6% on reCAPTCHAs, 64.4% on BotDetect, 57.4% on Yahoo, and 57.1% on PayPal, all significantly above the 1% rate at which CAPTCHAs are considered ineffective (see [4] for more details). When we optimize a single model for a specific style, we can achieve up to 90% accuracy.
66% with reCaptcha and up to 90% when optimised is much higher than what I can achieve with my actual brain. Maybe I should consider using a neural network to answer those, it happens quite frequently that I need 2-3 rounds to get through reCaptcha.
This is a paper that departs from the 'normal' AI routine and takes a very different approach. Is there another paper formally describing the RCN network? What goes inside the RCN cell? I find it more like a teaser than a revelation at this point.
I do not see a discussion in the paper regarding computational efficiency of RCN detection. The only hint about performance that I found is at the end of supplementary material where the authors state:
> Use of appearance during the forward pass: Surface appearance is now only used after the backward pass. This means that appearance information (including textures) is not being used during the forward pass to improve detection (whereas CNNs do). Propagating appearance bottom-up is a requisite for high performance on appearance-rich images.
I presume from this that in its current form RCN requires much more computation than a CNN per detection, but I could be wrong.
If I'm not mistaken, a Deep Belief Net or Deep Belief Machine would also be a generative model with enormously greater data efficiency. Comparing against CNNs is a red herring: the advantage of requiring less data to develop a model is more a generative/discriminative issue than it is an "RCN vs everyone else" issue.
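The generative/discriminative data-efficiency point can be illustrated with a toy generative classifier: it estimates p(y) and p(x|y) and can make sensible predictions from a handful of examples. A hand-rolled Gaussian naive Bayes sketch (purely illustrative; unrelated to RCN or the paper):

```python
import numpy as np

def fit_gaussian_nb(X, y):
    """Generative training: estimate p(y) and class-conditional
    Gaussians p(x|y) via per-feature means and variances."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (len(Xc) / len(X),        # prior p(y = c)
                     Xc.mean(axis=0),          # feature means
                     Xc.var(axis=0) + 1e-6)    # feature variances (smoothed)
    return params

def predict_gaussian_nb(params, X):
    """Classify by the largest log p(y) + log p(x|y)."""
    classes = sorted(params)
    scores = []
    for c in classes:
        prior, mu, var = params[c]
        log_lik = -0.5 * np.sum(np.log(2 * np.pi * var)
                                + (X - mu) ** 2 / var, axis=1)
        scores.append(np.log(prior) + log_lik)
    return np.array(classes)[np.argmax(np.stack(scores), axis=0)]

# Four training points suffice to separate two well-spread classes.
X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.2, 4.9]])
y = np.array([0, 0, 1, 1])
model = fit_gaussian_nb(X, y)
print(predict_gaussian_nb(model, np.array([[0.1, 0.1], [4.9, 5.1]])))
# → [0 1]
```

This is the classic Ng/Jordan observation: a generative model often reaches its (possibly higher) asymptotic error with far less data than a discriminative one.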
joe_the_user|8 years ago
PDF for the curious:
http://science.sciencemag.org/content/sci/early/2017/10/26/s...
Edit: tracked down a paper that apparently has some "real" math. Whether it is even what the OP is doing remains to be seen.
https://staff.fnwi.uva.nl/t.e.j.mensink/zsl2016/zslpubs/lake...
singularity2001|8 years ago
does your network solve/recognise those?
flor1s|8 years ago
The title of the paper is: A generative vision model that trains with high data efficiency and breaks text-based CAPTCHAs
The title of the article is: Common Sense, Cortex, and CAPTCHA
Neither comes anywhere near the sensationalist title on HN: RCN is much more data efficient than traditional Deep Neural Networks
nnx|8 years ago
PS: thank god for Reader mode in Safari
stochastic_monk|8 years ago
What I don't quite understand is why Deep Belief Nets seem to not be getting press these days. For example, see this paper from 2010: http://proceedings.mlr.press/v9/salakhutdinov10a.html.
gugagore|8 years ago
https://gizmodo.com/a-new-ai-system-passed-a-visual-turing-t... / http://web.mit.edu/cocosci/Papers/Science-2015-Lake-1332-8.p...