item 26045328

heinrichf|5 years ago

The praise from the popular press and the promotion by the authors should be put into the context of what mathematicians think of it.

Two blog posts by a professor at the University of Chicago, characterizing it as intellectual fraud:

https://www.galoisrepresentations.com/2019/07/17/the-ramanuj...

https://www.galoisrepresentations.com/2019/07/07/en-passant-...


dealforager|5 years ago

Man, this is an example of how difficult it is to know what is BS or not if you're not an expert on the subject. On one hand, this article was published in Nature, which I thought was trustworthy. On the other, there's this comment on a social media platform that links to a blog that also seems legit. No wonder misinformation spreads so fast. Even after reading both, I don't know what to make of it. The reaction and comments here just confuse me more.

PartiallyTyped|5 years ago

Nature has published some very questionable papers in AI/ML that are riddled with methodological malpractice. Another bogus paper that comes to mind claimed to predict earthquake aftershocks with a deep (read: huge) neural network that appears to have suffered from information leakage, and it was fuelled by the hype around DL when a simple logistic regression (i.e. a single neuron) could perform just as well [1,2,3].

[1] https://www.reddit.com/r/MachineLearning/comments/c4ylga/d_m...

[2] https://www.reddit.com/r/MachineLearning/comments/c8zf14/d_w...

[3] https://www.nature.com/articles/s41586-019-1582-8 / https://arxiv.org/pdf/1904.01983.pdf

bawolff|5 years ago

The articles don't contradict each other when it comes to cited facts - you can believe both!

I suppose it's all in the implications though, which do contradict each other: the Nature article implies it is a big deal. Yet it doesn't give any examples of interesting conjectures, or of interesting consequences should any of the conjectures be true. They talk a lot about alternate formulae for calculating things we already know how to calculate. Why would we care? Do they have a smaller big-O? Nature references the theory of links between other areas of math; if true, that's great, but surely they would have mentioned an example of such a link? Anyway, I lean towards this not being that interesting, even judging just from what the Nature article said.

kevinventullo|5 years ago

FWIW, that blog is written by one of the leading number theorists in the US today. Of course, his opinions are his own and you're free to form your own, but I just wanted to clarify that the blog is very much legit.

omginternets|5 years ago

Science and Nature are prestigious, but the quality of their articles is often questionable. Part of the problem is the short format, which makes it difficult to include much context and sanity-checking. Another issue is that they prioritize the “sexiness” of the research over pretty much everything else.

QuesnayJr|5 years ago

Nature in particular seems vulnerable to the academic equivalent of click-bait articles. I think the top journals within a specific field are more reliable.

testfoobar|5 years ago

This is why I have empathy for conspiracy believers. From their perspective, their understanding of the world is accurate.

This is also why I think social media platforms will inevitably fail at regulating truth versus non-truth.

perl4ever|5 years ago

As a thorough non-expert, I don't take headlines in the style of The Register seriously, even if the article is in Nature.

Although, if it was really from The Register it probably would have said "boffins" rather than "humans".

zests|5 years ago

Sometimes popular science is itself the misinformation. The authors stretch the findings to land in prestigious journals. The news stretches the findings further to sell clicks (cf. the Gell-Mann Amnesia effect). People on the internet selectively quote some articles and selectively ignore others. The algorithm tries to only show you content that you like.

The truth doesn't have a chance.

yharris|5 years ago

Full disclosure: I am one of the authors of the paper.

Note that the blog you're citing was written a year and a half ago. It refers to a select few conjectures, and naturally has no references to the developments of the past year and a half (which were the main reason the paper got published).

Furthermore, the author of the blog didn't respond to multiple emails we sent him, attempting to discuss the actual mathematics.

So basically the vast majority of the criticism here is based on a single, outdated blog post by a professor (respected as he may be) who has not revisited the issues and new results since first posting, and who has not given any mathematical argument as to why the results shown in the paper (the actual updated paper that was published) are supposedly unimportant.

Would appreciate your opinions on the matter.

_8091149529|5 years ago

Not the person you're replying to, but I admit to characterizing your paper as "garbage" in another comment thread. Since you're inviting discourse, which I greatly appreciate, I'm compelled to reply.

1) To anyone who's studied algebra, it is clear that identities of the form LHS = RHS can be obtained by a nested application of transformations and substitutions in a consistent manner.

2) Of course, arriving at a new, insightful result often involves taking mundane steps. However, in this case, the new mathematical discoveries based on the output tableaus of your algorithm are hypothetical, whereas the manuscript (and the authors) have already pocketed one of the premium accolades in the sciences in the form of a Nature publication.

3) To drive the point above home: do you think the resulting mathematical insights themselves, without riding on the "AI" novelty aspect, would clear the bar for a Nature (or similar high-impact) publication? To be clear, I'm not a mathematician, but I believe the answer would be no. Contrast this with another AI/ML advance published in Nature quite recently: AlphaGo. Note how the gist of their paper, superhuman performance in Go, is a self-standing achievement that merely makes use of machine learning techniques.

timkam|5 years ago

I think the vast majority of criticism here does not target the research per se, but rather the way the results are "hyped" and presented as a massive breakthrough. I agree with this criticism, and also think that the two positive Nature reviews seem rather shallow, at least from a non-expert's perspective (this is not your fault, of course). When it comes to long-term impact, I'd find it interesting to discuss how your work can (ideally) interact with proof assistants like Lean. Also, the work around Lean is a good example of a "hyped" topic that is presented by its contributors with caution and modesty.

Radim|5 years ago

Ah yes, the good old "SV culture disrupts X! Revolution at 8 o'clock!"

There's an arms race:

* People are evolving memetic resistance to the incessant BS, ads and bombastic headlines.

* The SV/startup culture is evolving to inject authenticity to overcome people's BS defenses and convince them they need a change.

Honestly, do you still get excited when you read "AI solves X!"?

Probably another huckster peddling empty air, cutting corners, externalizing costs. The whole game is tired, and people are taking note. Not everything that exists requires a radical change.

catgary|5 years ago

Look at the comments:

> The paper is amazingly bad. None of the authors are mathematicians as far as I can see. I think the word “new” appears 50+ times in the paper. Looks like they updated the paper to include your observation from last time about the Gauss continued fraction without mentioning the source (the authors admit here they read your blog: http://www.ramanujanmachine.com/idea/some-well-known-results...). Classy!

Just some light plagiarism/academic misconduct!

mxcrossb|5 years ago

> Well … OK I guess? But, pretty much exactly as pointed out last time, not only is the proof only one line, but the nature of the proof makes clear exactly how unoriginal this is to mathematics

This is what I was wondering about while reading the article. If the AI only generates formulas whose proofs involve just a few trivial steps back to something already known, then it doesn't feel useful. But I feel like the question "what makes a good conjecture?" is in its own right a very interesting discussion.

bawolff|5 years ago

Wouldn't a good conjecture be anything that's interesting if true? Bonus points if it intuitively seems like it should be obviously true (or false) yet is hard to prove, or if proving it would allow you to prove lots of other interesting statements.

j7ake|5 years ago

I think one interesting lesson from this qualification is that, at the moment, ML methods for learning mathematics may look trivial to a professional mathematician (i.e. the results are unoriginal or trivial), but perhaps the target audience for this method is non-professional mathematicians or students training to become mathematicians. I could still see this ML tool as a way to automate some of the more "trivial" (from the POV of an expert) mathematics, although not the work of professional mathematicians.

The knowledge gap in mathematics between professional mathematicians and non professionals is vast, and this tool could narrow the gap.

I would bet the majority of readers of Nature would not be able to point out that the outputs of the ML tool were trivial. So there is a need to narrow this gap.

alisonkisk|5 years ago

Simple results in almost any specialist field would stump most readers of Nature. That's not a reason to publish in Nature.

QuesnayJr|5 years ago

This is shocking stuff. I encourage everyone to read these two links.

0b01|5 years ago

I agree with the article you linked. Mathematical knowledge is about compression. Most if not all of these formulae are just specializations of known formulae. So the value of this approach is questionable. Generating these forms can possibly be done in a much simpler way.
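For what it's worth, the "much simpler way" can be sketched in a few lines: evaluate candidate continued fractions numerically and flag any that match a known constant to many digits. The following is my own toy illustration, not the authors' actual algorithm. The sanity check uses the classical simple continued fraction of e, [2; 1, 2, 1, 1, 4, 1, 1, 6, ...], and the tiny search happens to rediscover the classical fraction coth(1) = 1 + 1/(3 + 1/(5 + ...)):

```python
import math

def eval_cf(a, b, depth):
    """Bottom-up evaluation of the generalized continued fraction
    a(0) + b(1)/(a(1) + b(2)/(a(2) + ...)), truncated at `depth` levels."""
    v = float(a(depth))
    for n in range(depth, 0, -1):
        if v == 0:  # dead end; report "no value" rather than divide by zero
            return float("nan")
        v = a(n - 1) + b(n) / v
    return v

def e_coeff(n):
    # Simple continued fraction of e: [2; 1, 2, 1, 1, 4, 1, 1, 6, ...]
    if n == 0:
        return 2
    return 2 * (n + 1) // 3 if n % 3 == 2 else 1

# Sanity check: the truncation converges rapidly to e.
approx = eval_cf(e_coeff, lambda n: 1, depth=30)
assert abs(approx - math.e) < 1e-9

# A brute-force "conjecture generator" is then just enumeration: try small
# integer patterns a(n) = p*n + q with constant b(n) = s, and flag any
# candidate whose value matches a target constant to 10 digits.
def search(targets, depth=50, tol=1e-10):
    hits = []
    for p in range(4):
        for q in range(1, 6):
            for s in (-2, -1, 1, 2):
                v = eval_cf(lambda n, p=p, q=q: p * n + q,
                            lambda n, s=s: s, depth)
                for name, t in targets.items():
                    if abs(v - t) < tol:
                        hits.append((name, (p, q), s))
    return hits

# Rediscovers coth(1) = 1 + 1/(3 + 1/(5 + ...)): a(n) = 2n + 1, b(n) = 1.
hits = search({"coth(1)": 1 / math.tanh(1)})
```

This is of course a caricature; as I understand it, the actual paper searches far larger families with cleverer algorithms, but the underlying primitive is the same high-precision numeric matching.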

agumonkey|5 years ago

Who else holds the compression view?

boriselec|5 years ago

I understand his frustration. But calling it a fraud is a little bit too much.

David147|5 years ago

Looking at his post, the main criticism is "that the program has not yet generated anything new", but the post does not address the actual results (like the formulas for Catalan's and Apéry's constants).

Daniel51|5 years ago

Irrelevant.

Daniel51|5 years ago

The Nature paper presents several new conjectures related to the Catalan constant, pi^2, and zeta(3) (Apéry's constant):

http://www.ramanujanmachine.com/wp-content/uploads/2020/06/c... http://www.ramanujanmachine.com/wp-content/uploads/2020/06/p... http://www.ramanujanmachine.com/wp-content/uploads/2020/06/z...

The main criticism of the blog is "that the program has not yet generated anything new", but the post does not refer to these results. So it seems that this blog post is currently irrelevant and outdated compared to the Nature publication.
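Whatever one makes of the novelty debate, conjectures of this shape are at least cheap to check numerically to high precision before anyone attempts a proof. A minimal sketch of that verification step, using the classical continued fraction pi = 3 + 1^2/(6 + 3^2/(6 + 5^2/(6 + ...))) as a stand-in (the paper's actual conjectures live in the PDFs above; this classical identity is just an illustration):

```python
import math

def pi_cf(depth):
    """Evaluate pi = 3 + 1^2/(6 + 3^2/(6 + 5^2/(6 + ...))),
    truncated after `depth` levels, from the bottom up."""
    v = 6.0
    for k in range(depth, 0, -1):
        v = 6 + (2 * k + 1) ** 2 / v
    return 3 + 1 / v

# The truncations coincide with the partial sums of the Nilakantha series,
# so the error shrinks like 1/depth^3.
assert abs(pi_cf(5000) - math.pi) < 1e-9
```

Passing such a check to many digits doesn't make an identity interesting, which is the blog's point, but it does make it a well-posed conjecture rather than a typo.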