heinrichf | 5 years ago
Two blog posts by a professor at U. Chicago, calling it intellectual fraud:
https://www.galoisrepresentations.com/2019/07/17/the-ramanuj...
https://www.galoisrepresentations.com/2019/07/07/en-passant-...
dealforager|5 years ago
PartiallyTyped|5 years ago
[1] https://www.reddit.com/r/MachineLearning/comments/c4ylga/d_m...
[2] https://www.reddit.com/r/MachineLearning/comments/c8zf14/d_w...
[3] https://www.nature.com/articles/s41586-019-1582-8 / https://arxiv.org/pdf/1904.01983.pdf
bawolff|5 years ago
I suppose it's all in the implications, though, and that's where the contradiction lies: the Nature article implies this is a big deal, but it doesn't give any examples of interesting conjectures, or of interesting consequences if any of the conjectures should be true. They talk a lot about alternate formulae for calculating things we already know how to calculate. Why would we care? Do they have a smaller big-O? Nature mentions the possibility of links between other areas of math; if true that's great, but if it were true surely they would have mentioned an example of such a link? Anyway, I lean towards this not being that interesting, even judging only by what the Nature article itself says.
kevinventullo|5 years ago
omginternets|5 years ago
QuesnayJr|5 years ago
testfoobar|5 years ago
This is also why I think social media platforms will inevitably fail at regulating truth versus non-truth.
perl4ever|5 years ago
Although, if it was really from The Register it probably would have said "boffins" rather than "humans".
zests|5 years ago
The truth doesn't have a chance.
TeMPOraL|5 years ago
https://slatestarcodex.com/2019/06/03/repost-epistemic-learn...
yharris|5 years ago
Note that the blog post you're citing was written a year and a half ago. It refers to a select few conjectures, and naturally makes no reference to the developments of the past year and a half (which were the main reason the paper got published).
Furthermore, the author of the blog didn't respond to multiple emails we sent him, attempting to discuss the actual mathematics.
So basically, the vast majority of the criticism here is based on a single, outdated blog post by a professor (respected as he may be) who has not revisited the issues and new results since first posting it, and who has not given any mathematical argument as to why the results shown in the paper (the actual, updated paper that was published) are supposedly unimportant.
Would appreciate your opinions on the matter.
_8091149529|5 years ago
1) To anyone who's studied algebra, it is clear that identities of the form LHS = RHS can be obtained by a nested application of transformations and substitutions in a consistent manner.
2) Of course, arriving at a new, insightful result often involves taking mundane steps. In this case, however, the new mathematical discoveries based on the output tableaux of your algorithm are hypothetical, whereas the manuscript (and its authors) have already pocketed one of the premier accolades in science in the form of a Nature publication.
3) To drive the point above home, do you think the resulting mathematical insights themselves, without riding on the "AI" novelty aspect, would clear the bar for a Nature (or similar high-impact) publication? To be clear, I'm not a mathematician, but I believe the answer would be no. Contrast this with another AI/ML advance published in Nature quite recently: AlphaGo. Note how the gist of their paper, superhuman performance in Go, is a self-standing achievement that merely makes use of machine learning techniques.
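Point (1) above can be made concrete with Euler's continued fraction formula, a standard identity that mechanically rewrites any series of nested partial products as a continued fraction by pure substitution (shown here purely for illustration, not taken from the paper):

```latex
% Euler's continued fraction formula:
a_0 + a_0 a_1 + a_0 a_1 a_2 + \cdots
  = \cfrac{a_0}{1 - \cfrac{a_1}{1 + a_1 - \cfrac{a_2}{1 + a_2 - \cfrac{a_3}{1 + a_3 - \cdots}}}}
% Taking a_0 = 1 and a_n = 1/n for n >= 1 makes the partial products 1/n!,
% so the left-hand side is \sum_{n \ge 0} 1/n! = e, and the identity
% yields a continued fraction for e by nested substitution alone.
```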
timkam|5 years ago
Radim|5 years ago
There's an arms race:
* People are evolving memetic resistance to the incessant BS, ads and bombastic headlines.
* The SV/startup culture is evolving to inject authenticity, to overcome people's BS defenses and convince them they need a change.
Honestly, do you still get excited when you read "AI solves X!"?
Probably another huckster peddling empty air, cutting corners, externalizing costs. The whole game is tired, and people are taking note. Not everything that exists requires a radical change.
catgary|5 years ago
> The paper is amazingly bad. None of the authors are mathematicians as far as I can see. I think the word “new” appears 50+ times in the paper. Looks like they updated the paper to include your observation from last time about the Gauss continued fraction without mentioning the source (the authors admit here they read your blog: http://www.ramanujanmachine.com/idea/some-well-known-results...). Classy!
Just some light plagiarism/academic misconduct!
mxcrossb|5 years ago
This is what I was wondering about while reading the article. If the AI only generates formulae whose proofs involve only a few trivial steps back to something that is known, then it doesn't feel useful. But I feel like the question "what makes a good conjecture?" in its own right makes for a very interesting discussion.
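For readers wondering what the search loop looks like in practice, here is a minimal sketch of the numeric half: evaluate a candidate continued fraction to high precision, then match the value against a small dictionary of known constants. This is my own simplified illustration, not the authors' code (the paper describes smarter matching, e.g. meet-in-the-middle variants over polynomial continued fractions); the constant dictionary and function names are hypothetical.

```python
import math
from fractions import Fraction

def e_cf_terms(n):
    """First n terms of the simple continued fraction of e:
    [2; 1, 2, 1, 1, 4, 1, 1, 6, ...] (a known pattern, used as the candidate)."""
    terms = [2]
    k = 1
    while len(terms) < n:
        terms += [1, 2 * k, 1]
        k += 1
    return terms[:n]

def eval_cf(terms):
    """Evaluate [a0; a1, a2, ...] exactly, folding from the bottom up."""
    value = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        value = a + 1 / value  # int / Fraction stays exact
    return value

# Hypothetical dictionary of target constants to match against.
CONSTANTS = {"e": math.e, "pi": math.pi, "phi": (1 + 5 ** 0.5) / 2}

approx = float(eval_cf(e_cf_terms(30)))
best = min(CONSTANTS, key=lambda name: abs(CONSTANTS[name] - approx))
print(best, approx)  # the nearest known constant is e
```

A real search would enumerate many candidate term patterns and flag only those that agree with some constant to far more digits than chance allows; the point of the sketch is just that "generate, evaluate, match" is cheap, while deciding which matches are deep is the hard part.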
prof-dr-ir|5 years ago
https://www.quantamagazine.org/the-subtle-art-of-the-mathema...
bawolff|5 years ago
j7ake|5 years ago
The knowledge gap in mathematics between professional mathematicians and non professionals is vast, and this tool could narrow the gap.
I would bet the majority of readers of Nature would not be able to point out that the outputs of the ML tool were trivial. So there is a need to narrow this gap.
unknown|5 years ago
[deleted]
alisonkisk|5 years ago
QuesnayJr|5 years ago
unknown|5 years ago
[deleted]
0b01|5 years ago
agumonkey|5 years ago
boriselec|5 years ago
David147|5 years ago
unknown|5 years ago
[deleted]
Daniel51|5 years ago
http://www.ramanujanmachine.com/wp-content/uploads/2020/06/c... http://www.ramanujanmachine.com/wp-content/uploads/2020/06/p... http://www.ramanujanmachine.com/wp-content/uploads/2020/06/z...
The main criticism in the blog is that "the program has not yet generated anything new", but the post does not refer to these results. So the blog post seems outdated and irrelevant compared to the Nature publication.