nicf|11 months ago
To be honest, it's hard for me not to get kind of emotional about this. Obviously I don't know what's going to happen, but I can imagine a future where some future model is better at proving theorems than any human mathematician, like the situation, say, chess has been in for some time now. In that future, I would still care a lot about learning why theorems are true --- the process of answering those questions is one of the things I find the most beautiful and fulfilling in the world --- and it makes me really sad to hear people talk about math being "solved", as though all we're doing is checking theorems off of a to-do list. I often find the conversation pretty demoralizing, especially because I think a lot of the people I have it with would probably really enjoy the thing mathematics actually is much more than the thing they seem to think it is.
jasonhong|11 months ago
> "The rapid advance of computers has helped dramatize this point, because computers and people are very different. For instance, when Appel and Haken completed a proof of the 4-color map theorem using a massive automatic computation, it evoked much controversy. I interpret the controversy as having little to do with doubt people had as to the veracity of the theorem or the correctness of the proof. Rather, it reflected a continuing desire for human understanding of a proof, in addition to knowledge that the theorem is true."
Incidentally, I've also seen a similar problem when reviewing HCI and computer systems papers. OK, sure, this deep learning neural net worked better, but what did we as a community actually learn that others can build on?
nicf|11 months ago
troymc|11 months ago
We "know" it's true, but only because a machine ground mechanically through lots of tedious cases. I'm sure most mathematicians would appreciate a simpler and more elegant proof.
[1] https://en.wikipedia.org/wiki/Kepler_conjecture
7402|11 months ago
However, by the end of that period it seemed to transition to a situation where the most important skill for achieving a good score was manipulating statistical machine-learning tools (Random Forests was a popular one, I recall), rather than gaining a deep understanding of the physics or sociology of the problem, and I started doing worse and lost interest in Kaggle.
So be it. If you want to win, you use the best tools. But the part that brought me joy was not fighting for the chance to win a few hundred bucks (which I never did), but the intellectual pleasure and excitement of learning about an interesting problem in a new field that was amenable to mathematical analysis.
kevindamm|11 months ago
mcguire|11 months ago
"Mathematics advances by solving problems using new techniques because those techniques open up new areas of mathematics."
lupire|11 months ago
mb7733|11 months ago
senderista|11 months ago
psunavy03|11 months ago
DeepSeaTortoise|11 months ago
For actual research mathematics, there is no reason why an AI (though maybe not the current, purely statistical models) shouldn't be able to guide you through it exactly the way you prefer. Then it's just a matter of being honest about your own desires.
But it'll also vastly blow up the field of recreational mathematics. Have the AI toss a problem your way you can solve in about a month. A problem involving some recent discoveries. A problem Franklin could have come up with. During a brothel visit. If he was on LSD.
lordleft|11 months ago
nicf|11 months ago
Like I said, I don't have any idea what's going to happen. The thing that makes me sad about these conversations is that the people I talk to sometimes don't seem to have any appreciation for the thing they say they want to dismantle. It might even be better for humanity on the whole to arrive in this future; I'm not arguing that one way or the other! Just that I think there's a chance it would involve losing something I really love, and that makes me sad.
weitendorf|11 months ago
In mathematics, it is just as important (if not more so) to be able to apply the techniques used in novel proofs as it is to know that the theorem itself is true. Not only might those techniques be used to solve similar problems that the theorem alone cannot, but they might even uncover wholly new mathematical concepts that lead you to mathematics you previously could not even conceive of.
Machine proofs in their current form are basically huge searches/brute forces from some initial statements to the theorem being proved, by way of logical inference. Mathematics is in some ways the opposite of this: it's about understanding why something is true, not solely whether it is true. Machine proofs give you a path from A to B, but that path could be generalizable-but-not-understandable (a brute force), understandable-but-not-generalizable (finding some simple application of existing theorems to get the result that mathematicians simply missed), or neither understandable nor generalizable (imagine gigabytes of pure propositional logic on variables with names like n098fne09 and awbnkdujai).
Interestingly, some mathematicians like Terry Tao are starting to experiment with combining LLMs with automated theorem proving, because it might help both in guiding the theorem prover and in explaining its results. I find that philosophically fascinating because LLMs themselves rely on mechanisms that are not fully understood (hence the article), and this combination may validate pairing formal logic with informal intuition as a way of understanding the world (both in mathematics and, more generally, in the way our own minds combine logical reasoning with imprecise language and feelings).
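To make the "path from A to B" versus "why" distinction above concrete, here is a minimal sketch in Lean 4, one of the proof assistants used in such experiments. The toy theorems and their names are invented for illustration; omega, Nat.add_assoc, and Nat.add_comm are standard in recent Lean toolchains.

    -- Search-style proof: the omega decision procedure finds a path from
    -- the hypotheses to the goal on its own, but records no reusable insight.
    theorem swap_middle (a b c : Nat) : (a + b) + c = (a + c) + b := by
      omega

    -- Explanatory proof: a calc chain spells out why the identity holds,
    -- one named algebraic fact at a time.
    theorem swap_middle' (a b c : Nat) : (a + b) + c = (a + c) + b :=
      calc (a + b) + c = a + (b + c) := Nat.add_assoc a b c
        _ = a + (c + b) := by rw [Nat.add_comm b c]
        _ = (a + c) + b := (Nat.add_assoc a c b).symm

Both proofs certify the same fact; only the second records which properties of addition it actually depends on.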
mvieira38|11 months ago
gessha|11 months ago
piokoch|11 months ago
This algorithm will happily predict whatever it was fed. Just ask ChatGPT to write a review of a non-existent camera, car, or washing machine: you will receive a nicely written list of the item's advantages, never mind that it doesn't exist.
Philpax|11 months ago
whynotminot|11 months ago
crazygringo|11 months ago
I really wish that had been my experience taking undergrad math courses.
Instead, I remember linear algebra where the professor would prove a result by introducing an equation pulled out of thin air, plugging it in, showing that the result was true, and that was that. OK sure, the symbol manipulation proved it was true, but it gave zero understanding of why. And when I'd ask professors about the why, I'd encounter outright hostility -- all that mattered was whether it was proven, and asking "why" was positively amateurish and unserious. It was irrelevant to the truth of a result. The same attitude prevailed when it came to quantum mechanics -- "shut up and calculate".
I know there are mathematicians who care deeply about the why, and I have to assume it's what motivates many of them. But my actual experience studying math was the polar opposite. And so I find it very surprising to hear the idea of math being described as being more interested in why than what. The way I was taught didn't just not care about the why, but seemed actively contemptuous of it.
Ar-Curunir|11 months ago
losvedir|11 months ago
I majored in math at MIT, and even at the undergraduate level it was more like what OP is describing and less like what you're saying. I actually took linear algebra twice, since my first major was Economics before I decided to add on a math major, and the version of linear algebra for your average engineer or economist (i.e., a bunch of plug-and-chug matrix stuff), which is what I assume you're referring to, was very different. Linear algebra for mathematicians was all about vector spaces and bases and such, and was very interesting and full of proofs. I don't think concretely multiplying matrices was even a topic!
So I guess linear algebra is one of those topics where the math side is interesting and very much what all the mathematicians here are describing, but where it turned out to be so useful for everything, that there's a non-mathematician version of it which is more like what it sounds like you experienced.
jvans|11 months ago
ssivark|11 months ago
Maybe because CS is more engineering than science (at least as far as what drives the sociology), a lot of people approach AI from the same industrial perspective -- be it applications to math, science, art, coding, or whatever else. Ideas like _the bitter lesson_ only reinforce the zeitgeist.
optimalsolver|11 months ago
Which is to say, if you only concern yourself with theorems which have short, understandable proofs, aren't you cutting yourself off from vast swathes of math space?
nicf|11 months ago
If you're talking about questions that are well-motivated but whose answers are ugly and incomprehensible, then a milder version of this actually happens fairly often --- some major conjecture gets solved by a proof that everyone agrees is right but which also doesn't shed much light on why the thing is true. In this situation, I think it's fair to describe the usual reaction as, like, I'm definitely happy to have the confirmation that the thing is true, but I would much rather have a nicer argument. Whoever proved the thing in the ugly way definitely earns themselves lots of math points, but if someone else comes along later and proves it in a clearer way then they've done something worth celebrating too.
Does that answer your question?
mb7733|11 months ago
Care or not, what are they supposed to do with it?
Sure, they can now assume the theorem to be true, but nothing stopped them from doing that before.
godelski|11 months ago
But being over in the AI/ML world now, this is my NUMBER ONE gripe. Very few are trying to understand why things are working. I'd argue that the biggest reason these machines are black boxes is that no one is bothering to look inside them. You can't solve things like hallucinations and errors without understanding these machines (and there's a lot we already do understand). There's a strong pushback against mathematics and I really don't understand why. It has so many tools that can help us move forward, but yes, it takes a lot of work. It's bad enough that I know people who have gotten PhDs from top CS schools (top 3!) and don't understand things like probability distributions.
Unfortunately, doing great things takes great work and great effort. I really do want to see the birth of AI (I wouldn't be doing this if I didn't), but I think it'd be naive to believe that this grand challenge can be solved entirely by one field, or by something as simple as throwing more compute at it (data, hardware, parameters, or however you want to reframe the Bitter Lesson this year).
Maybe I'm biased because I come from physics where we only care about causal relationships. The "_why_" is the damn Chimichanga. And I should mention, we're very comfortable in physics working with non-deterministic systems and that doesn't mean you can't form causal relationships. That's what the last hundred and some odd years have been all about.[1]
[0] Undergrad in physics, moved to work as an engineer, then went to grad school to do CS because I was interested in AI and specifically in the mathematics of it. Boy did I become disappointed years later...
[1] I think there is a bias in CS. I notice there is a lot of test-driven development, despite it being well known to be full of pitfalls. You unfortunately can't test your way into a proof, as any mathematician or physicist can tell you. Just because your thing does well on some tests doesn't mean there is proof of anything. Evidence, yes, but that's far from proof. Don't make the mistake Dyson did: https://www.youtube.com/watch?v=hV41QEKiMlM
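As a toy illustration of that last footnote's point (the code below is a hypothetical example, not from the comment; the polynomial is Euler's classic n^2 + n + 41), a test suite can pass on many cases while the general claim is still false:

    # Claim under "test": f(n) = n^2 + n + 41 is prime for every natural n.
    def f(n: int) -> int:
        return n * n + n + 41

    def is_prime(k: int) -> bool:
        if k < 2:
            return False
        d = 2
        while d * d <= k:
            if k % d == 0:
                return False
            d += 1
        return True

    # The first 40 "test cases" all pass...
    assert all(is_prime(f(n)) for n in range(40))

    # ...but the claim is false: f(40) = 1681 = 41 * 41.
    assert not is_prime(f(40))
    print("40 passing tests, yet the claim fails at n = 40")

Passing the tests is evidence, not proof; ruling out a counterexample for every n takes an actual argument.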
solveit|11 months ago
People do look, but it's extremely hard. Take a look at how hard the mechanistic interpretability people have to work for even small insights. Neel Nanda[1] has some very nice writeups if you haven't already seen them.
[1]: https://www.neelnanda.io/mechanistic-interpretability
the_cat_kittles|11 months ago
cladopa|11 months ago
SwtCyber|11 months ago
Henchman21|11 months ago
But absolutely worst of all is the arrogance. The hubris. The thinking that because some human somewhere has figured a thing out, it's then just implicitly known by these types. The casual disregard for their fellow humans. The lack of true care for anything and anyone they touch.
Move fast and break things!! Even when it's the society you live in.
That arrogance and/or hubris is just another type of stupidity.
dang|11 months ago
It's not just that comments that vent denunciatory feelings are lower-quality themselves, though usually they are. It's that they exert a degrading influence on the rest of the thread, for a couple reasons: (1) people tend to respond in kind, and (2) these comments always veer towards the generic (e.g. "lack of true care for anything and anyone", "just another type of stupidity"), which is bad for curious conversation. Generic stuff is repetitive, and indignant-generic stuff doubly so.
By the time we get further downthread, the original topic is completely gone and we're into "glorification of management over ICs" (https://news.ycombinator.com/item?id=43346257). Veering offtopic can be ok when the tangent is even more interesting (or whimsical) than the starting point, but most tangents aren't like that—mostly what they do is replace a more-interesting-and-in-the-key-of-curiosity thing with a more-repetitive-and-in-the-key-of-indignation thing, which is a losing trade for HN.
bluefirebrand|11 months ago
This is the part I don't get honestly
Are people just very shortsighted and don't see how these changes are potentially going to cause upheaval?
Do they think the upheaval is simply going to be worth it?
Do they think they will simply be wealthy enough that it won't affect them much, they will be insulated from it?
Do they just never think about consequences at all?
I am trying not to be extremely negative about all of this, but the speed of which things are moving makes me think we'll hit the cliff before even realizing it is in front of us
That's the part I find unnerving
dkarl|11 months ago
I worked in an organization afflicted by this and still have friends there. In the case of that organization, it was caused by an exaggerated glorification of management over ICs. Managers truly did act according to the belief, and show every evidence of sincerely believing in it, that their understanding of every problem was superior to the sum of the knowledge and intelligence of every engineer under them in the org chart, not because they respected their engineers and worked to collect and understand information from them, but because managers are a higher form of humanity than ICs, and org chart hierarchy reflects natural superiority. Every conversation had to be couched in terms that didn't contradict those assumptions, so the culture had an extremely high tolerance for hand-waving and BS. Naturally this created cover for all kinds of selfish decisions based on politics, bonuses, and vendor perks. I'm very glad I got out of there.
I wouldn't paint all of tech with the same brush, though. There are many companies that are better, much better. Not because they serve higher ideals, but because they can't afford to get so detached from reality, because they'd fail if they didn't respect technical considerations and respect their ICs.