I find software engineers spend too much time focused on notation. Maybe they are right to do so, and notation can certainly help or hinder, but the goal of any mathematical field is understanding. It's not even to prove theorems. Proving theorems is useful (a) because it identifies what is true and under what circumstances, and (b) because the act of proving forces one to build a deep understanding of the phenomenon under study. This requires looking at examples, making a hypothesis more specific or sometimes more general, using formal arguments, geometric arguments, studying algebraic structures: basically anything that leads to better understanding. Ideally, one understands a subject so well that notation basically doesn't matter. In a sense, the really key ingredient is the definitions, because the objects are chosen carefully to be interesting but workable.
If the idea is that the right notation will make getting insights easier, that's a futile path to go down. What really helps is looking at objects and their relationships from multiple viewpoints. This is really what one does in both mathematics and physics.
Someone quoted von Neumann about getting used to mathematics. My interpretation always was that once one is immersed in a topic, it slowly becomes natural enough that one can think about it without getting thrown off by relatively superficial strangeness. As a very simple example, someone might get thrown off the first time they learn about point-set topology. It might feel very abstract coming from analysis, but after a standard semester course, almost everyone gets comfortable enough with the basic notions of topological spaces and homeomorphisms.
One thing mathematics education is really bad at is motivating the definitions. This happens because progress is meandering and chaotic, and exposing the full lineage of ideas would just take far too long. Physics education is generally far better at this. I don't know of a general solution except to pick up appropriate books that go over the history (e.g. https://www.amazon.com/Genesis-Abstract-Group-Concept-Contri...)
Understanding new math is hard, and a lot of people don't have a deep understanding of the math they use. Good notation has a lot of understanding already built-in, and that makes math easier to use in certain ways, but maybe harder to understand in other ways. If you understand something well enough, you are either not troubled by the notation, because you are translating it automatically into your internal representation, or you might adapt the notation to something that better suits your particular use case.
Notation makes a huge difference. I mean, have you TRIED to do arithmetic with Roman numerals?
> If the idea is that the right notation will make getting insights easier, that's a futile path to go down. What really helps is looking at objects and their relationships from multiple viewpoints. This is really what one does in both mathematics and physics.
Seeing the relationships between objects is partly why math has settled on a terse notation (the other reason being that you need to write stuff over and over). This helps up to a point, but mainly IF you are writing the same things again and again. If you are not exercising your memory in such a way, it is often easier to try to make sense of more verbose names. But at all times there is tension between convenience, visual space consumed, and memory consumption.
> One thing mathematics education is really bad at is motivating the definitions.
I was annoyed by this in some introductory math lectures, where the prof skipped explaining the general idea and motivation of some lemmata and instead just went through the proofs line by line.
It felt a bit like being asked to use vi, without knowing what the program does, let alone knowing the key combinations - and instead of a manual, all you have is the source code.
> If the idea is that the right notation will make getting insights easier, that's a futile path to go down.
I agree wholeheartedly.
What I want to see is mathematicians employing the same rigor as journalists using abbreviations: define (numerically) your notation or terminology the first time you use it, then feel free to use it as notation or jargon for the remainder of the paper.
Math education rarely emphasizes this. Either you have talent and you get intuition for free, or you're average and you swim as much as you can until the next floater. It's sad, because the internal and external value is immense.
I think this would be extremely valuable:
“We need to focus far more energy on understanding and explaining the basic mental infrastructure of mathematics—with consequently less energy on the most recent results.”
I've long thought that more of us could devote time to serious maths problems if they were written in a language we all understood.
> I've long thought that more of us could devote time to serious maths problems if they were written in a language we all understood.
That assumes it’s the language that makes it hard to understand serious math problems. That’s partially true (and the reason why mathematicians keep inventing new language), but IMO the complexity of truly understanding large parts of mathematics is intrinsic, not dependent on terminology.
Yes, you can say “A monad is just a monoid in the category of endofunctors” in terms that more people know of, but it would take many pages, and that would make it hard to understand, too.
A little off topic perhaps, but out of curiosity - how many of us here have an interest in recreational mathematics? [https://en.wikipedia.org/wiki/Recreational_mathematics]
He separates conceptual understanding from notational understanding, pointing out that the interface of using math has a major impact on utility and understanding. For instance, Roman numerals inhibit understanding and utilization of multiplication.
Better notational systems can be designed, he claims.
Yeah, I don't want to be uncharitable, but I've noticed that a lot of STEM fields make heavy use of esoteric language and syntax, and I suspect they do so as a means of gatekeeping.
I understand that some degree of formalism is required to enable the sharing of knowledge amongst people across a variety of languages, but sometimes I'll read a white paper and think "wow, this could be written a LOT more simply". Statistics is a major culprit of this.
A lot of people here are suggesting they'd be great mathematicians if only it wasn't for the pesky notation. What they are missing is that the notation is the easy part.
Not at all. Over and over I find really intimidating math notation actually represents pretty simple concepts. Sigma notation is a good example of this. Hmm, giant sigma or sum()?
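To make that concrete (my own side-by-side, not from the comment): the giant sigma \sum_{i=1}^{n} i^2 and the code below say the identical thing.

    # "Giant sigma" vs sum(): both read "add up i^2 for i from 1 to n".
    def sum_of_squares(n: int) -> int:
        return sum(i * i for i in range(1, n + 1))

    assert sum_of_squares(10) == 385  # agrees with the closed form n(n+1)(2n+1)/6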
It's like saying that learning Arabic is the easy part of writing a great Saudi novel. True, but you have to understand that being literate is the price of admission. Clearly you consider yourself very facile with mathematical notation, but you might have some empathy for the innumerate. Not everyone had the good fortune of great math teachers or even the luxury of attending a good school. I believe there is valid frustration borne out of poor mathematical education.
This is so wrong it can only come from a place of inexperience and ignorance.
Mathematics is flush with inconsistent, abbreviated, and overloaded notation.
Show a child a matrix numerically and they can understand it; show them Ax+s=b, and watch the confusion.
I love math, but the symbology and notation get in my way. Two ideas:
1. Can we reinvent notation and symbology? No superscripts or subscripts or greek letters and weird symbols? Just functions with input and output? Verifiable by type systems AND human readable. (A sketch of what this might look like follows after this list.)
2. Also, make the symbology hyperlinked, i.e. if it uses a theorem or axiom that's not in the paper, hyperlink to its proof, and so on.
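A minimal sketch of what idea 1 might look like (a hypothetical style invented here, not an existing standard): definitions as plain typed functions that a checker such as mypy can verify.

    # Divisibility and gcd with no special symbols, only typed functions.
    def divides(d: int, n: int) -> bool:
        # "d | n" in conventional notation
        return d != 0 and n % d == 0

    def gcd(a: int, b: int) -> int:
        # Euclid's algorithm: gcd(a, b) = gcd(b, a mod b)
        while b != 0:
            a, b = b, a % b
        return abs(a)

    assert divides(gcd(12, 18), 12) and divides(gcd(12, 18), 18)

Readable, yes, but notice how much longer this is than "d | n" once you need it twenty times on a page.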
Notation and symbology come out of a minimax optimization: minimizing complexity while maximizing reach. As with every local critical point, it is probably not the only state we could have ended up at.
For example, for your point 1: we could probably start there, but once you get familiar with the notation you don't want to keep writing a huge list of parameters, so you would probably come up with a higher-level data structure parameter, which is more abstract, to write as an input. And then the next generation would complain that the data structure is too abstract or takes too much effort to be communicated to someone new to the field, because they did not live first-hand the problem that made you come up with a solution.
And for your point 2: where do you draw the line with your hyperlinks? If you mention the real plane, do you reference the construction of the real numbers? And dimension? If you reason by contradiction, do you reference the axioms of logic? If you say "let {x_n} be a converging sequence", do you reference convergence, natural numbers, and sets? Or just convergence? It's not that simple, so we came up with a minimax solution, which is what everybody does now.
Having said this, there are a lot of articles and books that are not easy to understand. But that is probably more an issue of their being written by someone who is bad at communicating than of the notation.
(1) I always tell my students that if they don't understand why things are done a certain way, they should try to do it in the way most natural to them and then iterate to improve it. Eventually they will settle on something very similar to the most common practice.
(2) Higher-level proofs use so many ideas simultaneously that doing this would be tantamount to writing Lean code from scratch: painful.
1. I work in finance and here people sometimes write math using words as variable names. I can tell you it gets extremely cumbersome to do any significant amount of formula manipulation or writing with this notation. Keep in mind that pen and paper are still pretty much universally used in actual mathematical work and writing full words takes a lot of time compared to single Greek letters.
A large part of math notation exists to compress the writing so that you can actually fit a full equation in your field of vision.
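As a toy contrast (my own example, not the commenter's), the same compound-growth formula written with word variables and with conventional symbols:

\text{final\_balance} = \text{initial\_balance}\cdot(1+\text{annual\_rate})^{\text{years}} \quad\text{vs.}\quad B = B_0(1+r)^t

Push the left-hand form through three or four algebraic steps and you are re-copying those long names at every line.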
Also, something like what you want already exists; see e.g. Lean: https://lean-lang.org/doc/reference/latest/. It is used to write math so that theorems and proofs can be machine-checked. No one wants to use it for actually studying math or manually proving theorems, because it looks horrible compared to conventional mathematical notation (as long as you are used to the conventional notation).
I'd love getting rid of all the weird symbols in favor of clear-text functions or whatever. As someone who never learnt all the weird symbols, it's really preventing me from getting into math again... It is just not intuitive.
Probably not. The conventional math notation has three major advantages over the "[n]o superscripts or subscripts or [G]reek letters and weird symbols" you're proposing:
1. It's more human-readable. The superscripts and subscripts and weird symbols permit preattentive processing of formula structures, accelerating pattern recognition.
2. It's familiar. Novel math notations face the same problem as alternative English orthographies like Shavian (https://en.wikipedia.org/wiki/Shavian_alphabet) in that, however logical they may be, the audience they'd need to appeal to consists of people who have spent 50 years restructuring their brains into specialized machines to process the conventional notation. Aim t3mpted te rait qe r3st ev q1s c0m3nt 1n mai on alterned1v i6gl1c orx2grefi http://canonical.org/~kragen/alphanumerenglish bet ai qi6k ail rez1st qe t3mpt8cen because, even though it's a much better way to spell English, nobody would understand it.
3. It's optimized for rewriting a formula many times. When you write a computer program, you only write it once, so there isn't a great burden in using a notation like (eq (deriv x (pow e y)) (mul (pow e y) (deriv x y)) 1), which takes 54 characters to say what the conventional math notation¹ says in 16 characters³. But, when you're performing algebraic transformations of a formula, you're writing the same formula over and over again in different forms, sometimes only slightly transformed; the line before that one said (eq (deriv x (pow e y)) (deriv x x) 1), for example². For this purpose, brevity is essential, and as we know from information theory, brevity is proportional to the logarithm of the number of different weird symbols you use.
We could certainly improve conventional math notation, and in fact mathematicians invent new notation all the time in order to do so, but the direction you're suggesting would not be an improvement.
People do make this suggestion all the time. I think it's prompted by this experience where they have always found math difficult, they've always found math notation difficult, and they infer that the former is because of the latter. This inference, although reasonable, is incorrect. Math is inherently difficult, as far as anybody knows (an observation famously attributed to Euclid), and the difficult notation actually makes it easier. Undergraduates routinely perform mental feats that defied Archimedes because of it.
______
¹ \frac d{dx}e^y = e^y\frac{dy}{dx} = 1
² \frac d{dx}e^y = \frac d{dx}x = 1
³ See https://nbviewer.org/url/canonical.org/~kragen/sw/dev3/logar... for a cleaned-up version of the context where I wrote this equation down on paper the other day.
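To spell out the information-theory claim in point 3 (a back-of-the-envelope restatement of mine, not the commenter's): picking out one of N possible formulas using an alphabet of k distinct symbols takes roughly

L \approx \frac{\log_2 N}{\log_2 k}

symbols of writing, so each additional weird symbol in the alphabet buys brevity, though only logarithmically.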
I was writing a small article about [Set, Set Builder Notation, and Set Comprehension](https://adropincalm.com/blog/set-set-builder-natatio-set-com...) and while I was investigating, it surprised me how many different ways there are to describe the same thing. E.g.: see all the notations for a Set or a Tuple.
One last rant point is that you don't have "the manual" of math in the same way you would go to your programming language's man page, so there is no single source of truth.
Everybody assumes...
I find it strange to compare "math" with one programming language. Mathematics is a huge and diverse field, with many subcommunities and hence also differing notation.
Your rant would be akin to this, with the sides reversed: "It's surprising how many different ways there are to describe the same thing. E.g.: see all the notations for dictionaries (hash tables? associative arrays? maps?) or lists (vectors? arrays?). You don't have 'the manual' of programming languages."
I wrote about overlapping intervals a while ago, and used what I thought was the standard math notation for closed and half-open intervals. From comments, I learned that half-open intervals are written differently in French mathematics: https://lobste.rs/s/cireck/how_check_for_overlapping_interva...
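For what it's worth, the overlap test itself is the same under either convention (the French style writes the half-open interval [a, b) as [a, b[); a minimal sketch:

    def overlaps(a_start: float, a_end: float,
                 b_start: float, b_end: float) -> bool:
        # Half-open intervals [a_start, a_end) and [b_start, b_end)
        # intersect iff each one starts before the other ends.
        return a_start < b_end and b_start < a_end

    assert overlaps(0, 10, 5, 15)       # [0, 10) and [5, 15) share [5, 10)
    assert not overlaps(0, 10, 10, 20)  # half-open: they merely touch at 10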
"The unknown thing to be known appeared to me as some stretch of earth or hard marl, resisting penetration... the sea advances insensibly in silence, nothing seems to happen, nothing moves, the water is so far off you hardly hear it... yet finally it surrounds the resistant substance."
A. Grothendieck
Understanding mathematical ideas often requires simply getting used to them
Mathematics is hard when not much time is invested in processing the core idea.
For example, the Dvoretzky–Rogers theorem is hard to understand in isolation. As more applications, generalizations, and alternative proofs of it appear, it becomes clearer. So it takes time for something to become digestible, but the effort spent gives the real insights.
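For reference, the usual statement of the theorem (added here for context, not part of the original comment): a Banach space is finite-dimensional if and only if every unconditionally convergent series in it converges absolutely; equivalently, every infinite-dimensional Banach space contains a series \sum x_n that converges unconditionally while \sum \lVert x_n \rVert diverges.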
Last but not least is the presentation of this theorem. Some authors are cryptic; others refactor the proof into discrete steps or find similarities with other proofs.
Yes, it is hard, but part of the work of the mathematician is to make it easier for the others.
Exactly like in code: there is a lower bound on hardness, but this is not an excuse to keep it harder than that.
Mathematics is such an old field, older than anything except arguably philosophy, that it's too broad and deep for anyone to really understand everything. Even in graduate school I often took classes in things discovered by Gauss or Euler centuries before. A lot of the mathematical topics the HN crowd seems to like--things like the Collatz conjecture or Busy Beavers--are 60, 80 years old. So, you end up having to spend years specializing and then struggle to find others with the same background.
All of which is compounded by the desire to provide minimal "proofs from the book" and leave out the intuitions behind them.
> A lot of the mathematical topics the HN crowd seems to like--things like the Collatz conjecture or Busy Beavers--are 60, 80 years old.
Do you know the reason for that? The reason is that those problems are open and easy to understand. For the rest of open problems, you need an expert to even understand the problem statement.
I'll argue for astronomy being the oldest. Minimal knowledge would help pre-humans navigate and keep track of the seasons. Birds are known to navigate by the stars.
The desire to hide all traces of where a proof comes from is a real problem, and having more context would often be very helpful. I think some modern authors/teachers are nowadays getting good at giving more context. But mostly you have to be thankful that the people from the minimalist era (Bourbaki, ...) at least gave precise, consistent definitions for basic terminology.
Mathematics is old, but a lot of basic terminology is surprisingly young. Nowadays everyone agrees what an abelian group is. But if you look into some old books from 1900 you can find authors that used the word abelian for something completely different (e.g. orthogonal groups).
Reading a book that uses "abelian" to mean "orthogonal" is confusing, at least until you finally understand what is going on.
Actually, a lot of minimal proofs expose more intuition than the older proofs people found first. Counterintuitively, I usually don't find it extremely enlightening to read the first proofs of results.
> Mathematics is such an old field, older than anything except arguably philosophy
If we are already venturing outside the scientific realm with philosophy, I'm sure the fields of literature or politics are older. Especially since philosophy is just a subset of literature.
> As Venkatesh concludes in his lecture about the future of mathematics in a world of increasingly capable AI, “We have to ask why are we proving things at all?” Thurston puts it like this: there will be a “continuing desire for human understanding of a proof, in addition to knowledge that the theorem is true.”
This type of reasoning becomes void if instead of "AI" we used something like "AGA" or "Artificial General Automation", which is a closer description of what we actually have (natural language as a programming language).
Increasingly capable AGA will do things that mathematicians do not like doing. Who wants to compute logarithmic tables by hand? That got solved by calculators. Who wants to compute chaotic dynamical systems by hand? Computer simulations solved that. Who wants to improve a real analysis bound over an integral by 2% to get closer to the optimal bound? AGA is very capable of doing that. We just want to do it ourselves if it actually helps us understand why, and surfaces some structure. If not, who cares if it's you who does it or a machine that knows all of the olympiad-type tricks.
I thought we were well past trying to understand mathematics. After all, John von Neumann long ago said "In mathematics we don't understand things. We just get used to them."
Many ideas in math are extremely simple at heart. Some very precise definitions, maybe a clever theorem. The hard part is often: Why is this result important? How does this result generalize things I already knew? What are some concrete examples of this idea? Why are the definitions the way they are, and not something slightly different?
To use an example from functional programming, I could say:
- "A monad is basically a generalization of a parameterized container type that supports flatMap and newFromSingleValue."
- "A monad is a generalized list comprehension."
- Or, famously, "A monad is just a monoid in the category of endofunctors, what's the problem?"
The basic idea, once you get it, is trivial. But the context, the familiarity, the basic examples, and the relationships to other ideas take a while to sink in. And once they do, you ask "That's it?"
So the process of understanding monads usually isn't some sudden flash of insight, because there's barely anything there. It's more a situation where you work with the idea long enough and you see it in a few contexts, and all the connections become familiar.
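A minimal sketch of that first description (a hand-rolled Maybe container; the method names come from the bullet above, not from any real library):

    from typing import Callable, Generic, Optional, TypeVar

    T = TypeVar("T")
    U = TypeVar("U")

    class Maybe(Generic[T]):
        """A container holding zero or one value."""
        def __init__(self, value: Optional[T]) -> None:
            self.value = value

        @classmethod
        def new_from_single_value(cls, value: T) -> "Maybe[T]":
            return cls(value)          # "return"/"unit" in monad jargon

        def flat_map(self, f: Callable[[T], "Maybe[U]"]) -> "Maybe[U]":
            if self.value is None:     # absent value short-circuits the chain
                return Maybe(None)
            return f(self.value)       # "bind" in monad jargon

    def safe_div(n: float, d: float) -> Maybe[float]:
        return Maybe(None) if d == 0 else Maybe(n / d)

    m = Maybe.new_from_single_value(10.0)
    print(m.flat_map(lambda x: safe_div(x, 2)).value)  # 5.0
    print(m.flat_map(lambda x: safe_div(x, 0)).value)  # None

The None propagating through the chain is the whole trick: flat_map decides whether the next step runs at all.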
(I have a long-term project to understand one of the basic things in category theory, "adjoint functors." I can read the definition just fine. But I need to find more examples that relate to things I already care about, and I need to learn why that particular abstraction is a particularly useful one. Someday, I presume I'll look at it and think, "Oh, yeah. That thing. It's why interesting things X, Y and Z are all the same thing under the hood." Everything else in category theory has been useful up until this point, so maybe this will be useful, too?)
It's probably a neurological artefact. When the brain has spent enough time looking at a pattern, the pattern can suddenly become obvious. You can go from blind to enlightened without the usual conscious logical effort. It's very odd.
Just because someone said it doesn't mean we all agree with it, fortunately.
You know the meme with the normal distribution where the far right and the far left reach the same conclusion for different reasons, and the ones in the middle have a completely different opinion?
So on the far right you have people like von Neumann, who say "In mathematics we don't understand things". On the far left you have people like you, who say "me no mats". Then in the middle you have people like me, who say "maths is interesting, let me do something I enjoy".
Awesome that for mathematicians notation does not matter, and every solved problem is trivial... But for a student this is not the case yet.
Take the simple pi vs tau debate. Of course it doesn't matter which you use once you understand them. But if you don't understand it yet, and are learning about it for the first time, tau makes everything a lot more intuitive.
The views quoted are just as cryptic as modern mathematics. Did mathematicians lose the ability to convey things in plain, simple ways?
Probably they are trying to romanticize something that may not sound good if told plainly.
Face it: mathematics is one of the fields most strongly affected by AI, just like programming. You need to be more straightforward about it rather than beating around the bush.
To put it simply, it appears to be a struggle to redefine the road map, for survival and adoption in the AI era.
I recently came to realize the same things about physics. Even physicists find it hard to develop an intuitive mental picture of how space-time folds or what a photon is.
Well, that's just the esoteric nature of physics, no? I mean, the old adage that "if you think you understand quantum physics, you do not understand quantum physics" is a reflection of this.
As someone who has always struggled with mathematics at the calculational level, but who really enjoys theorems and proofs (abstract mathematics), here are some things that help me.
1. Study predicate logic, then study it again, and again, and again. The better and more ingrained predicate logic becomes in your brain the easier mathematics becomes.
2. Once you become comfortable with predicate logic, look into set theory and model theory and understand both of these well. Understand the precise definition of "theory" with respect to model theory. If you do this, you'll have learned the rules that unify nearly all of mathematics, and you'll also understand how to "plug" models into theories to try and better understand them.
3. Close reading. If you've ever played Magic: The Gathering, mathematics is the same thing--words are defined and used exactly as they are in games. You need to resist the temptation to read in meanings that aren't there. You need to read slowly. I've often come upon a key insight about a particular object, and an accurate understanding, only after rereading a passage like 50 times. If the author didn't make a certain statement, they didn't make that statement; even if it seems "obvious", you need to follow the logical chain of reasoning to make sure.
4. Translate into natural English. A lot of math books will have whole sections of proofs and/or exercises with little to no corresponding natural-language "explainer" of the symbolic statements. One thing that helps me tremendously is to try and frame any proof or theorem, or collection of these, in terms of the linguistic names for the various definitions, and to try and summarize a body of proofs into helpful statements. For example: "groups are all about inverses and how they allow us to 'reverse' compositions of (associative) operations; this is the essence of 'solvability'". This summary statement about groups helps set up a framing for me whenever I go and read a proof involving groups. The framing helps tremendously because it can serve as a foil too, i.e. if some surprising theorem contravenes the summary ("oh, maybe groups aren't just about inversions"), that allows for an intellectual development and expansion that I find more intuitive. (See the worked example after this list.) I sometimes think of myself as a scientist examining a world of abstract creatures (the various models (individuals) of a particular theory (species)).
5. Contextualize. Nearly all of mathematics grew out of certain lines of investigation, and often out of concrete technical needs. Understanding this history is a surprisingly effective way to make many initially mysterious aspects of a theory more obvious, more concrete, and more related to other bits of knowledge about the world, which really helps bolster understanding.
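As a worked instance of tip 4 (my example, not the commenter's): the symbolic definition of \lim_{x \to a} f(x) = L,

\forall \varepsilon > 0\ \exists \delta > 0\ \forall x\ \big(0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon\big),

translates to: "you can force f(x) as close to L as you like by keeping x close enough to, but distinct from, a."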
> Venkatesh argued that the record on this is terrible, lamenting that “for a typical paper or talk, very few of us understand it.”
> "few of us"
You see, if you plebs are unable to understand our genius, it's solely due to your inadequacies as a person and as an intellect; but if we are unable to understand our genius, well, that's a lamentable crisis.
To make Mathematics "understandable" simply requires the inclusion of numerical examples. A suggestion 'the mathematics community' is hostile to.
If you are unable to express numerically then I'd argue you are unable to understand.
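In that spirit, a numerical example of the kind being asked for (my illustration, not the commenter's): spot-checking the classic identity that the first n odd numbers sum to n^2.

    # Spot-check: 1 + 3 + 5 + ... + (2n - 1) == n^2 for the first hundred n.
    for n in range(1, 101):
        assert sum(range(1, 2 * n, 2)) == n * n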
Applied math is little more than semantics compression.
This fundamental truth is embedded in the common symbols of arithmetic...
+ ... one line combined with another ... linear ... line, wee
− ... the opposite of +, one line removed
× ... repeated addition, combining groups
÷ ... the opposite, breaking into groups ... also hints at the inherent ratio
From there it's symbols that describe different objects and how to apply the fundamental arithmetic operations, like playing over a chord in music.
The interesting work is in physical science, not the notation. Math is used to capture physics that would be too verbose to describe in English or some other "human" language. Which IMO should be reserved for capturing emotional context anyway, as that's where it originates.
Programming languages have senselessly obscured the simple and elegant reality of computation, which is really just a subset of math; the term computer originated to describe humans who computed by hand. TypeScript, Python, etc. don't exist[1]. They are leaky abstractions that waste a lot of resources to run some electromagnetic geometry state changes.
Whether it's politics, religion, engineering, or "blue" language, humans seem obsessed with notation fetishes. IMO it's all rather prosaic and boring.
[1] at best they exist as ethno objects of momentary social value to those who discuss them