At the end of the article, the author mentions how we could possibly find other designs of mathematics. Well, some people already have!
Some mathematicians did not like the law of excluded middle, which states that for any proposition A, either A is true or A is false. So they invented intuitionistic logic, which is normal logic without the excluded middle, and started rewriting mathematical proofs in this new system. Turns out there's a lot of stuff you can prove in intuitionistic logic.
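To give the flavor of what survives without the excluded middle: the law itself is not provable intuitionistically, but its double negation is. A minimal sketch in Lean 4 (a toy example of my own, using no classical axioms):

```lean
-- ¬¬(P ∨ ¬P) is provable constructively, even though P ∨ ¬P is not
theorem nn_em (P : Prop) : ¬¬(P ∨ ¬P) :=
  fun h => h (Or.inr (fun p => h (Or.inl p)))
```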
Some mathematicians did not like the axiom of choice. One of the consequences of this axiom is that every nonempty subset of the real numbers has a least element according to some ordering. Think about it: what is the least element of {1/n : n >= 1}? Who knows! So what did they do? Some found it so weird that they either replaced it with a weaker axiom or with a contradictory one.
There are even syntax arguments in mathematics! What's the derivative of a function f? Is it f'(x) or df/dx? Is multiplication represented by a dot (.), a cross (x), or by juxtaposition of expressions?
Sometimes we use big existing proofs in the middle of a proof to save time. And sometimes we use the big proof to prove something far simpler than the big proof. This creates a big dependency, and some people hate these dependencies because the reader of the new proof will have trouble understanding it completely. It's like dropping some magic into the middle of the proof and saying: "if you want to understand this proof completely, go read this other 50-page article." Sound familiar? Some mathematicians hate this so much that they insist on proving things from the ground up whenever possible, so that the proof is as comprehensible as possible. This is the mathematical equivalent of dependency management.
> One of the consequences of this axiom is that every nonempty subset of the real numbers has a least element according to some ordering.
An even weirder consequence of AC is the Banach-Tarski paradox [1].
Other examples of how mathematicians come up with alternative
perspectives are non-Euclidean geometries that replace the parallel postulate of the common Euclidean geometry, e.g. Lobachevskian [2] and Riemannian geometry [3].
[1] https://en.wikipedia.org/wiki/Banach%E2%80%93Tarski_paradox

[2] https://en.wikipedia.org/wiki/Hyperbolic_geometry

[3] https://en.wikipedia.org/wiki/Elliptic_geometry
> There are even syntax arguments in mathematics! What's the derivative of a function f? Is it f'(x) or df/dx? Is multiplication represented by a dot (.), a cross (x), or by juxtaposition of expressions?
Those aren't arguments. All of those are very standard things to do.
There's a Spiked Math comic with some good mathematics-engineering trash talk, though, with the mathematicians shooing away a hapless engineer with comments like "do you even know how to spell 'imaginary'?" and "why don't you go jmagine you have friends?" Again, I wouldn't describe this as an area of active debate.
Weirdly enough, I always struggled with advanced syntax, but I recently understood that it was due to a lack of focus on the abstraction. Kinda like programming languages :)
But that's something you can't really understand when too young.
I would like to add: sometimes mathematicians bring up random variables/constants without explicitly naming or explaining them first. This is very upsetting for someone at a lower level who is trying to understand what's going on.
Is there a version of mathematics, somewhere, that doesn't give you the impression of being snarked upon by an old dude twirling his mustache?
I'm a big fan of linear algebra because it's the best example of why learning math is useful. Sure, knowing about equations and calculus comes in handy, but linear algebra is pure modelling superpowers and a much more valuable tool overall.
Related: An awesome LA introductory lecture by Prof. Strang: http://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-...

Related 2: A short tutorial on LA that I wrote: http://minireference.com/static/tutorials/linear_algebra_in_...
I don't think you can say of any topic in mathematics that it's the best example of why learning math is useful. What about calculus? It's an immensely powerful tool that, by harnessing the power of the infinite and the infinitesimal, unlocks a massive body of practical applications in nearly every field of quantitative knowledge. And what about discrete math? Without it there would be no such thing as a computer. Differential equations, the tool for modeling dynamical systems?
Mathematics is a vast topic that every quantitative discipline must necessarily draw from. Each field of study will benefit more from certain topics of it.
I definitely think linear algebra has a lot of practical applications, but I'm not sure I would necessarily call it the "best" nor the "most valuable tool overall". It certainly provides the most bang for your buck if you are doing certain kinds of modeling, but even just within modeling which tool will be the most useful will depend a ton on what you are doing. For example, for dynamical systems you may be using linear algebra, but mostly as a computational tool for solving differential equations. In this case, analysis of your problem at the level of calculus is extremely important if you want to come up with an accurate and computationally feasible model.
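To make the "linear algebra as a computational tool for differential equations" point concrete, here is a minimal sketch (plain Python with a toy 2x2 system of my own choosing, not a production integrator): forward Euler applied to the linear system x' = Ax, where each step is just a matrix-vector product.

```python
# Forward Euler for the linear ODE system x' = A x, using plain lists.
def matvec(A, x):
    """Multiply matrix A (a list of rows) by vector x."""
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def euler(A, x0, dt, steps):
    """Integrate x' = A x from x0 for `steps` steps of size dt."""
    x = list(x0)
    for _ in range(steps):
        dx = matvec(A, x)
        x = [xi + dt * di for xi, di in zip(x, dx)]
    return x

# Harmonic oscillator x'' = -x written as a first-order linear system:
# state = (position, velocity), so A is the rotation generator.
A = [[0.0, 1.0],
     [-1.0, 0.0]]
state = euler(A, [1.0, 0.0], dt=1e-3, steps=1000)  # integrate to t = 1
```

The exact solution at t = 1 is (cos 1, -sin 1), roughly (0.540, -0.841); forward Euler with dt = 1e-3 lands close to that, and all the linear algebra hiding inside is the `matvec` call.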
Mathematical truths and objects are real things with existence independent of our minds that we "discover," not just designed things. The author seems to believe that the language used to describe mathematics (which is indeed a designed thing, just like software) is the only thing "there." She is probably a formalist.
I think it is important to remember this, because mathematics, like a computer, "fights back." You cannot simply dream up whatever structure you want and have it mean what you want and behave how you want. See Godel's incompleteness theorems. No matter what you are doing, your mathematical constructs (including your implicit Turing Machines in your computer programs) must obey certain underlying constraints that are completely mind-independent. These constraints are what mathematicians study, albeit through a glass, darkly.
Regardless of ontological issues with the post, I like that it emphasizes the designed nature of our mathematical tools. The space of possible tools is so large that there is near-limitless room for human creativity and design in mathematical research. It is a shame that most mathematics classes don't really get that across.
You can simply dream up whatever axioms, undefined terms, and rules of logic you want. However, one runs the risk of having an inconsistent system or a system that is not interesting to others. Godel's Incompleteness Theorem does not say that this can't be done. Furthermore, the "underlying constraints" imposed by Godel's Incompleteness Theorem are not at all what most mathematicians study. Unless I'm misinterpreting your meaning here.
There are knowledgeable people who do not believe that mathematics is independent of our minds. It's not too far fetched of an idea. While I do not personally agree with this, I won't downplay such beliefs.
>You cannot simply dream up whatever structure you want and have it mean what you want and behave how you want. See Godel's incompleteness theorems.
That is not at all what the Incompleteness Theorems actually say. They say literally nothing whatsoever about what sorts of structures you can implement inside a given foundational theory, except that there will always be more, because given any foundational theory, you can construct two more foundational theories as extensions (one in which the Goedel statement is unprovable, and one in which the theory believes it's inconsistent).
>Mathematical truths and objects are real things with existence independent of our minds
What makes you say this? Isn't this an open philosophical question? What makes you say that mathematical objects exist independent of our minds? I can dream up a set of axioms of my own and do maths from there, so I don't think mathematics necessarily exists in some Platonic ideal dimension independent of our minds.
>Mathematical truths and objects are real things with existence independent of our minds that we "discover," not just designed things.
While it's almost certainly true that the content of mathematics is mind-independent, it is far from obvious that these objects are "real things". The real meat of the issue is how exactly the mind-independence is cashed out. Different ideas paint a vastly different picture of mathematics and even the universe: for example, platonism vs. nominalism. Let's not be so quick to put forward as an obvious truth the critical issue in question.
It's actually a she =)

Anyway, I do agree with you (and the author) that mathematics has the potential of being a superb pedagogical vehicle in teaching design thinking.
Well said. Advanced math is mostly about working with properties higher up the chain of abstraction, and then seeing what happens when you bring the insights learned up there back down to more concrete examples.
From an OO point of view, the real numbers inherit almost every useful trait: they're a field, they have a topology, they have a measure. Studying the parent classes, so to speak, gives you abstract algebra, topology, and analysis, respectively.
Once you get the basics of each, you can study how they interact. Then, once that stuff is clear, they can be recombined in beautiful ways to give you new objects to study.
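That inheritance metaphor can be sketched literally. A toy illustration (the class and method names here are my own invention for the analogy, not any standard library's):

```python
from abc import ABC, abstractmethod

class Field(ABC):
    """Algebraic structure: supplies +, * (abstract algebra)."""
    @abstractmethod
    def add(self, a, b): ...
    @abstractmethod
    def mul(self, a, b): ...

class TopologicalSpace(ABC):
    """Supplies a notion of nearness (topology)."""
    @abstractmethod
    def is_close(self, a, b, eps): ...

class MeasureSpace(ABC):
    """Supplies a notion of size of sets (measure theory / analysis)."""
    @abstractmethod
    def measure(self, interval): ...

class Reals(Field, TopologicalSpace, MeasureSpace):
    """The real numbers implement all three interfaces at once."""
    def add(self, a, b): return a + b
    def mul(self, a, b): return a * b
    def is_close(self, a, b, eps): return abs(a - b) < eps
    def measure(self, interval):  # Lebesgue measure of an interval
        lo, hi = interval
        return max(hi - lo, 0.0)

R = Reals()
```

Studying `Field` on its own is abstract algebra, `TopologicalSpace` is topology, `MeasureSpace` is analysis; `Reals` is the concrete class where the insights recombine.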
"Well said. Advanced math is mostly about working with properties higher up the chain of abstraction, and then seeing what happens when you bring the insights learned up there back down to more concrete examples."
I came here to say the same thing. I went through a lesser known engineering discipline, "Mathematics and Engineering" [0], and found that the type of thinking one learns doing pure math proofs has served me well in my eventual career in aerospace systems engineering. I find that the thought process in considering a proof as a high-level whole/black box, or being able to drill down to the finest detail while still keeping the big picture in mind, has translated quite well to my day-to-day traversal up and down the abstraction ladder at work.
[0] http://www.mast.queensu.ca/meng/undergrad/info.php
But when I look at the implementation in code it's so obvious what's going on.

I can't fathom how someone who cannot understand the math formula can understand the code.
What a great article. Paraphrased "Math is a designed thing, for humans and by humans, not an absolute truth." Also this post is the BEST introduction to linear algebra that I have seen.
To assume that math is itself the absolute truth, which supposedly actual mathematics is supposed to lead to, is an apt syllogism. It's saying that any kind of math that doesn't reveal an absolute truth isn't real math.
You can play creatively in a particular nexus of math and software engineering called Djinn [0], the Haskell program that writes your Haskell programs for you.
1. An ancestor of Djinn is automated theorem proving. Why can't machines prove math theorems for us? This quest goes back to the dawn of computing science.
2. A more recent development is the Curry-Howard Correspondence. Programming in a (typed) FP language is like playing tetris. Solving symbolic logic problems [1] is also like playing tetris. Djinn exposes the connection in a REPL you can play with. And see how the computer plays tetris for you!
3. Don't want to install Djinn? No problem, just hop over to the Haskell IRC [2]. Lambdabot has a working Djinn plugin.
[0] https://hackage.haskell.org/package/djinn

[1] https://www.coursera.org/course/intrologic

[2] https://wiki.haskell.org/IRC_channel
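To see the "tetris" fit concretely: for the type `a -> (a -> b) -> b` (modus ponens under Curry-Howard), Djinn synthesizes the term `\a f -> f a`. Here is the same term rendered in Python (a shape-for-shape illustration only; Python's types carry no proof guarantee the way Haskell's do):

```python
# Curry-Howard toy: the proposition A -> ((A -> B) -> B) becomes a type,
# and this function's body is the program/proof inhabiting it.
def modus_ponens(a):
    """The term Djinn writes for the type  a -> (a -> b) -> b,
    i.e. \\a f -> f a: given evidence for A, turn any proof of
    A -> B into a proof of B."""
    return lambda f: f(a)

# "Running the proof" with A = int, B = str:
result = modus_ponens(42)(str)  # applies str to 42
```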
I've had a similar thought as the author, and often wondered: could we develop alternative systems for intermediate-to-advanced mathematical concepts that would make them easier to parse?
From the first page:
"Geometric algebra and its extension to geometric calculus unify, simplify, and generalize vast areas of mathematics that involve geometric ideas, including linear algebra, multivariable calculus, real analysis, complex analysis, and euclidean, noneuclidean, and projective geometry. They provide a unified mathematical language for physics (classical and quantum mechanics, electrodynamics, relativity), the geometrical aspects of computer science (e.g., graphics, robotics, computer vision), and engineering."
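For a taste of why that quote calls it unifying, here is a toy geometric product in 2D (a minimal sketch of my own; real libraries such as the Python `clifford` package handle arbitrary dimensions). A 2D multivector has components (scalar, e1, e2, e12); the product of two vectors packs the dot product into the scalar part and the oriented area into the bivector part, and e12 squares to -1, so the complex numbers fall out for free:

```python
def gp(a, b):
    """Geometric product of 2D multivectors (a0 + a1*e1 + a2*e2 + a3*e12),
    using e1*e1 = e2*e2 = 1 and e1*e2 = -e2*e1 = e12."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (
        a0*b0 + a1*b1 + a2*b2 - a3*b3,   # scalar part
        a0*b1 + a1*b0 - a2*b3 + a3*b2,   # e1 part
        a0*b2 + a2*b0 + a1*b3 - a3*b1,   # e2 part
        a0*b3 + a3*b0 + a1*b2 - a2*b1,   # e12 (bivector) part
    )

e1  = (0, 1, 0, 0)
e2  = (0, 0, 1, 0)
e12 = (0, 0, 0, 1)
```

Here `gp(e1, e2)` is the unit bivector, `gp(e12, e12)` has scalar part -1, and `gp(v, v)` of a vector is its squared length: linear algebra, complex arithmetic, and geometry in one product.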
Very good article. Studying computer science as my sole field, I am starting to realize how much I have missed by not getting an alternative take on things.
The other day I realized that a man-made law is also a bit like mathematics or computer software. It is carefully designed and constructed. Ideally, it is intended to work like a machine with as little room for human discretion as possible. And just like mathematics, adding another "axiom" to the law has far, far-reaching consequences.
Changes in law (almost) never add axioms because that would have global effects; instead, they tend to add folds to the manifold that law is. And many folds there certainly are. It is hard to find anything in law that holds universally. Examples (I’m picking mostly US law here, but I’m sure similar exceptions exist elsewhere):
- everybody can vote? Well, we have https://en.wikipedia.org/wiki/Felony_disenfranchisement and, on the other end of the spectrum, in the UK "Although the law relating to elections does not specifically prohibit the Sovereign from voting in a general election or local election, it is considered unconstitutional for the Sovereign and his or her heir to do so" (http://www.royal.gov.uk/MonarchUK/QueenandGovernment/Queenan...)

- everybody with a sufficiently high income must pay social security taxes? Not if you're a member of certain religious groups (https://faq.ssa.gov/link/portal/34011/34019/Article/3821/Are...)

Even the universal declaration of human rights (http://www.un.org/en/documents/udhr/) often has small exceptions. For example:
- "Everyone has the right to take part in the government of his country, directly or through freely chosen representatives"? Not quite in the USA, as one must be born in the USA to become president.
- "higher education shall be equally accessible to all on the basis of merit": questionable in many countries, given the costs.
It seems there’s no rule so universal that it doesn’t have some exception. That, IMO, makes law so different from math that any analogy is useless.
> And just like mathematics, adding another "axiom" to the law has far, far-reaching consequences.
Actually, mathematicians virtually never do this. Almost all of mathematics (from arithmetic to calculus to category theory) operates in the confines of ZFC [1] and adds no further axioms. All of these fields may add definitions, but these are just shorthand; they are conservative and have no actual consequences. It is more like fixing Newton's laws and then experimenting with all the machines you can build with them.
[1]: https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_t...
>Ideally, it is intended to work like a machine with as little room for human discretion as possible
Actually, these laws are the most dystopian.
One concept that seems to have this design goal is strict liability: what you intended, what you believed, what elaborate conspiracies were created to deceive you, none of that matters. If you did it, you are guilty and will be punished, full stop. This is attractive because it's much harder to prove beyond a reasonable doubt that someone did something "with malice" or "intentionally" than to prove that they did it. This tends to show up around youth sexuality, probably because that's an area where social norms have changed quite a bit.
It doesn't matter if the 17.5-year-old you slept with presented a fake ID saying she was 18, took the lead in all sexual activity, etc. It doesn't matter if she shows up in court and begs the judge to let you go. People who have not yet turned 18 cannot give consent, full stop. Sex without consent is rape, rapists are bad people and should be made to suffer, so you're going to prison and then on to the sex offender registry. Your judge, and the judges you appeal to, all think this is ridiculous, but their hands are tied because the law was written like a computer program that failed to consider edge cases. (Literally edge cases - those that fall near numerical boundaries. Some states have patched their legal systems to allow consent between partners who are close in age but under the normal age of consent. Some have, but only for heterosexual couples. Some still haven't. My state actually tried and convicted two teenagers as adults for raping each other at the same time.)
Same with child porn. Child porn laws were created at a time when taking and distributing a picture could only be part of a commercial publishing operation. Anyone who creates or possesses an image of a nude child is guilty of a child pornography offense. Seems reasonable. Now every teenager has an internet-connected camera in their pocket, and we found an edge case we forgot: the child pornographer is the child him/herself and the consumer is her long-term boyfriend, also underage. Still just as illegal, and the same punishment is required. You would hope prosecutors would turn a blind eye, but "the law is the law."
These rather salacious examples are the most high-profile, but I'm sure the same problems happen in more mundane areas of law as well.
At some point you have to trust judges and you have to empower bureaucrats to help people get out of obviously ridiculous situations. Laws are much more difficult to change than computer programs; there needs to be a reasonable amount of discretion for a manual override until the law can be fixed.
It is also interesting that there are many parallels between software engineering and the design of mathematical proofs (or theoretical CS proofs, which I am more familiar with).
In theoretical CS, people talk of catching and fixing "bugs" in proofs, namely, mistakes that make the proof fail but can hopefully be fixed while sticking to essentially the same idea.
One can "refactor" proofs, in superficial ways (e.g., renaming of concepts), but in deeper ways also, e.g., extract part of a proof to make it an independent lemma that you can reuse (or "invoke") from other parts of the proof. One often tries to "decouple" large proofs into independent parts with clearly defined "interfaces", that the reader can understand separately from each other, though this usually implies a tradeoff (a more tightly integrated proof requires more mental space but is usually shorter overall).
One can think of the statement of sub-results (lemmas) as providing an "interface" to invoke them elsewhere, which you try to "decouple" from the actual "implementation", namely, the way the lemmas are really proven. It takes practice to find the right way to abstract away the essence of a result to state it correctly, without burdening it with implementation details, but without forgetting an important aspect of the result that will be necessary later. As in software engineering, once a result is proven, you stop burdening your mind with the implementation and mostly think about the statement (i.e., what the result is supposed to be doing) when using it.
In software engineering, one must decide which part of the code is "responsible" for checking certain properties on the objects, and that code may "assume" some preconditions on its inputs and must "guarantee" some postconditions on its outputs. In the same way, in proofs, one often wonders where certain conditions should be verified. Should they be part of the definition of the object? Does this lemma enforce more conditions on the object than what is guaranteed by its statement?
The parallel is not perfect. In software engineering, you can rely on the computer to check that your code is correct, and to execute it. In mathematics you rely on other humans to do this and check that they are convinced by your proofs. This means you can get away with appeals to human intuition which are not fully formal, but on the other hand there is no safety net when you make an error in your reasoning, no reality check that you can invoke to avoid exploring erroneous consequences. Also, this does not apply to all types of proofs; but it applies especially well to proofs that describe a construction, i.e., a way to "build" a certain abstract object, often to justify that an object with a certain desirable set of properties exists.
If you are writing programs in Coq or another dependently typed programming language, these parallels between math and programming are not just incidental; they are one and the same. Theorems are stated as functions, whose type signatures reflect the theorem being proven. Even the gap that you mention at the end of your post is bridged, since the type checker will verify that your proof is correct (modulo the trust in the metatheory of the language itself, and its implementation). Things like refactoring, choosing more or less specific statements of theorems (e.g. more or less abstraction in a library), interfaces, etc, all become exactly analogous to the same kinds of problems faced in software engineering. It's really quite remarkable.
(Disclaimer: I'm hardly an expert in these languages; I just dabble.)
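A one-line illustration in Lean 4 (a close cousin of Coq; a toy example of my own): the statement A ∧ B → B ∧ A is a type, and the definition below is simultaneously the program and the proof.

```lean
-- the type is the theorem; the body is both program and proof
def andComm {A B : Prop} : A ∧ B → B ∧ A :=
  fun ⟨a, b⟩ => ⟨b, a⟩
```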
Understanding programming languages definitely helped my understanding of math. Smarter people than myself can do this the other way around, but I always needed to understand the why before it could start to stick. I didn't really understand programming languages until I could dig into the source code and the standard libraries to see how and why everything was done. The problem with modern math teaching is that it starts with fully baked axioms and doesn't walk you through the process of discovery before it was all cleaned up into a neat, terse explanation. One exception is a great book from the 40s called Mathematician's Delight. It was recommended to me by my Yale professor and I highly recommend it.
Interesting article. I always thought math felt like programming but in a language far higher level than any of the available programming languages. So like programming but with a lot less friction when going from thought to symbols.
For example, creating new domain specific control flows with Lisp macros versus defining a Dirac delta function using limits and integrals. In programming it's easy for bugs to seep in because there are more little/subtle details and leaky abstractions. But math on the other hand feels much more abstract and clean.
Perhaps this is just because dumb silicon boxes interpret our code and humans interpret our math which gives us a much more sophisticated base language to work with.
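For instance, the Dirac delta "defined using limits and integrals" mentioned above is typically written like this (one standard choice of nascent delta; other families work equally well):

```latex
\delta(x) = \lim_{\varepsilon \to 0^{+}} \frac{1}{\pi}\,\frac{\varepsilon}{x^{2} + \varepsilon^{2}},
\qquad \text{in the sense that} \qquad
\int_{-\infty}^{\infty} f(x)\,\delta(x)\,dx = f(0)
```

for suitable test functions f. The notation states the limiting behavior directly; a program would have to pick a concrete ε and approximate.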
Not trying to evangelize, but FP was a great hint for that. Seeing 'tangible' incarnations (that I can create, see, and step through) of groups, monoids, transitive relations, etc. gave an operational grounding to abstract algebra. Something some of us needed before we could see the abstraction behind the notation and understand it.
The perception of FP as being somehow more "math-y" is nothing but a bias. There is no intrinsic magical property of "mathiness", it's just that the operational semantics of functional languages are much more well-defined. Logic programming is itself firmly based in axiomatic semantics and has a similar a priori system of reasoning to it, though distinct from FP. Imperative languages can be modeled well on Hoare logic, too, and Dijkstra did plenty of research on it, known as predicate transformer semantics. It is comparatively understudied, though.
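As a toy illustration of predicate transformer semantics (my own minimal encoding, not any textbook's library): represent predicates as functions on a program state, and the weakest precondition of an assignment is just the postcondition with the assigned expression substituted in.

```python
def wp_assign(var, expr):
    """Dijkstra's weakest precondition for the assignment `var := expr`:
    wp(var := expr, Q) = Q[var := expr].
    Predicates are modeled as functions from a state dict to bool."""
    def transform(post):
        return lambda state: post({**state, var: expr(state)})
    return transform

# Check the Hoare triple {x > 0}  x := x + 1  {x > 1}:
inc = wp_assign("x", lambda s: s["x"] + 1)
pre = inc(lambda s: s["x"] > 1)  # computed weakest precondition
# pre(s) holds exactly when s["x"] > 0, matching the triple's precondition
```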
I've never taken a college-level linear algebra course, and I've found KhanAcademy's linear algebra course to be a good gentle introduction. I just skip the parts I already know.
[+] [-] rathereasy|10 years ago|reply
Some mathematicians did not like the law of excluded middle, which states that for any proposition A, either A is true or A is false. So they invented intuitionistic logic, which is normal logic without the excluded middle, and started rewriting mathematical proofs in this new system. Turns out there's a lot of stuff you can prove in intuitionistic logic.
Some mathematicians did not like the axiom of choice. One of the consequences of this axiom is that every subset of the real numbers has a least element according to some ordering. Think about it, what is the least element of {1/n : n >= 1} ? Who knows! So what did they do? Some people found it so weird they either replaced it with a weaker axiom or a contradictory one.
There's even syntax arguments in mathematics! What's the derivative of a function f? is it f'(x) or df/dx ? Is multiplication represented by a dot (.) or a cross (x) or by a juxtaposition of expressions?
Sometimes we use big existing proofs in the middle of a proof to save time. And sometimes we use the big proof to prove something far simpler than the big proof. This creates a big dependency and some people dislike hate these dependencies because the reader of the new proof will have trouble understanding the proof completely. It's like dropping in some magic in the middle of the proof and saying: "if you want to understand this proof completely, go read this other 50 page article" Sound familiar? Some mathematicians hate this so much they insist on proving things from the ground up whenever possible so that the proof is as comprehensible as possible. This is the mathematical equivalent of dependency management.
[+] [-] avz|10 years ago|reply
An even weirder consequence of AC is the Banach-Tarski paradox [1].
Other examples of how mathematicians come up with alternative perspectives are non-Euclidean geometries that replace the parallel postulate of the common Euclidean geometry, e.g. Lobachevskian [2] and Riemannian geometry [3].
[1] https://en.wikipedia.org/wiki/Banach%E2%80%93Tarski_paradox
[2] https://en.wikipedia.org/wiki/Hyperbolic_geometry
[3] https://en.wikipedia.org/wiki/Elliptic_geometry
[+] [-] thaumasiotes|10 years ago|reply
Those aren't arguments. All of those are very standard things to do.
There's a Spiked Math comic with some good mathematics-engineering trash talk, though, with the mathematicians shooing away a hapless engineer with comments like "do you even know how to spell 'imaginary'?" and "why don't you go jmagine you have friends?" Again, I wouldn't describe this as an area of active debate.
[+] [-] agumonkey|10 years ago|reply
(MIT Sussman, SICM book) http://mitpress.mit.edu/sites/default/files/titles/content/s...
Weirdly enough, I always struggled with advanced syntax, but I recently understood that it was a lack of focus on the abstraction. Kinda like programming languages :) But that's something you can't really understand when too young.
[+] [-] zeroonetwothree|10 years ago|reply
[+] [-] Fiahil|10 years ago|reply
[+] [-] unknown|10 years ago|reply
[deleted]
[+] [-] ivansavz|10 years ago|reply
I'm a big fan of linear algebra because it's the best example of why learning math is useful. Sure knowing about equation and calculus come in handy, but linear algebra is pure modelling superpowers and a much more valuable tool overall.
Related: An awesome LA introductory lecture by Prof. Strang: http://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-...
Related 2: A short tutorial on LA that I wrote: http://minireference.com/static/tutorials/linear_algebra_in_...
[+] [-] andrepd|10 years ago|reply
Mathematics is a vast topic that every quantitative discipline must necessarily draw from. Each field of study will benefit more from certain topics of it.
[+] [-] johncolanduoni|10 years ago|reply
[+] [-] rivalis|10 years ago|reply
I think it is important to remember this, because mathematics, like a computer, "fights back." You cannot simply dream up whatever structure you want and have it mean what you want and behave how you want. See Godel's incompleteness theorems. No matter what you are doing, your mathematical constructs (including your implicit Turing Machines in your computer programs) must obey certain underlying constraints that are completely mind-independent. These constraints are what mathematicians study, albeit through a glass, darkly.
Regardless of ontological issues with the post, I like that it emphasizes the designed nature of our mathematical tools. The space of possible tools is so large that there is near-limitless room for human creativity and design in mathematical research. It is a shame that most mathematics classes don't really get that across.
edit: fixed misgendering, sorry, that was sexist.
[+] [-] yequalsx|10 years ago|reply
There are knowledgeable people who do not believe that mathematics is independent of our minds. It's not too far fetched of an idea. While I do not personally agree with this, I won't downplay such beliefs.
[+] [-] eli_gottlieb|10 years ago|reply
That is not at all what the Incompleteness Theorems actually say. They say literally nothing whatsoever about what sorts of structures you can implement inside a given foundational theory, except that there will always be more, because given any foundational theory, you can construct two more foundational theories as extensions (one in which the Goedel statement is unprovable, and one in which the theory believes it's inconsistent).
[+] [-] andrepd|10 years ago|reply
What makes you say this? Isn't this an open philosophical question? What makes you say that mathematical objects exist independent of our minds? I can dream up a set of axioms of my own and do maths from there, so I don't think mathematics necessarily exists in some Platonic ideal dimension independent of our minds.
[+] [-] hackinthebochs|10 years ago|reply
While its almost certainly true that the content of mathematics is mind independent, it is far from obvious that these objects are "real things".The real meat of the issue is how exactly the mind-independence is cashed out. Different ideas paint a vastly different picture of mathematics and even the universe. For example platonism vs. nominalism. Lets not be so quick to put forward as an obvious truth the critical issue in question.
[+] [-] kiyoto|10 years ago|reply
It's actually a she =)
Anyway, I do agree with you (and the author) that mathematics has the potential of being a superb pedagogical vehicle in teaching design thinking.
[+] [-] mshron|10 years ago|reply
From an OO point of view, the real numbers inherit almost every useful trait: they're a field, they have a topology, they have a measure. Studying the parent classes, so to speak, gives you abstract algebra, topology, and analysis, respectively.
Once you get the basics of each, you can study how they interact. Then, once that stuff is clear, they can be recombined in beautiful ways to give you new objects to study.
[+] [-] kejaed|10 years ago|reply
I came here to say the same thing. I went through a lesser known engineering discipline, "Mathematics and Enigneering" [0] and found that the type of thinking one learns doing pure math proofs has served me well in my eventual career in aerospace systems engineering. I find that the thought process in considering a proof as a high level whole/black box or being able to drill down to the finest detail while still keeping the big picture in mind has translated quite well to my day to day traversal up and down the abstraction ladder at work.
[0] http://www.mast.queensu.ca/meng/undergrad/info.php
[+] [-] shockzzz|10 years ago|reply
But when I look at the implementation in code it's so obvious what's going on.
[+] [-] thachmai|10 years ago|reply
I can't fathom how someone who cannot understand the math formula can understand the code.
[+] [-] ccvannorman|10 years ago|reply
[+] [-] musername|10 years ago|reply
[+] [-] ky3|10 years ago|reply
1. An ancestor of Djinn is automated theorem proving. Why can't machines prove math theorems for us? This quest goes back to the dawn of computing science.
2. A more recent development is the Curry-Howard Correspondence. Programming in a (typed) FP language is like playing tetris. Solving symbolic logic problems [1] is also like playing tetris. Djinn exposes the connection in a REPL you can play with. And see how the computer plays tetris for you!
3. Don't want to install Djinn? No problem, just hop over to the Haskell IRC [2]. Lambdabot has a working Djinn plugin.
[0] https://hackage.haskell.org/package/djinn
[1] https://www.coursera.org/course/intrologic
[2] https://wiki.haskell.org/IRC_channel
[+] [-] matheweis|10 years ago|reply
[+] [-] adam930|10 years ago|reply
From the first page:
"Geometric algebra and its extension to geometric calculus unify, simplify, and generalize vast areas of mathematics that involve geometric ideas, including linear algebra, multivariable calculus, real analysis, complex analysis, and euclidean, noneuclidean, and projective geometry. They provide a unified mathematical language for physics (classical and quantum mechanics, electrodynamics, relativity), the geometrical aspects of computer science (e.g., graphics, robotics, computer vision), and engineering."
[+] [-] Yomammas_Lemma|10 years ago|reply
[+] [-] abc_lisper|10 years ago|reply
[+] [-] wmichelin|10 years ago|reply
[+] [-] euske|10 years ago|reply
[+] [-] Someone|10 years ago|reply
- everybody can vote? Well, we have https://en.wikipedia.org/wiki/Felony_disenfranchisement and, on the other end of the spectrum, in the UK "Although the law relating to elections does not specifically prohibit the Sovereign from voting in a general election or local election, it is considered unconstitutional for the Sovereign and his or her heir to do so” (http://www.royal.gov.uk/MonarchUK/QueenandGovernment/Queenan...)
- everybody with a sufficiently high income must pay social security taxes? Not if you’re member of certain religious groups (https://faq.ssa.gov/link/portal/34011/34019/Article/3821/Are...)
Even the universal declaration of human rights (http://www.un.org/en/documents/udhr/) often has small exceptions. For example:
- "Everyone has the right to take part in the government of his country, directly or through freely chosen representatives"? Not quite in the USA, as one must be a natural-born citizen to become president.
- "higher education shall be equally accessible to all on the basis of merit": questionable in many countries, given the costs.
It seems there’s no rule so universal that it doesn’t have some exception. That, IMO, makes law so different from math that any analogy is useless.
johncolanduoni|10 years ago|reply
Actually, mathematicians virtually never do this. Almost all of mathematics (from arithmetic to calculus to category theory) operates within the confines of ZFC [1] and adds no further axioms. All of these fields may add definitions, but these are just shorthand; they are conservative and have no actual consequences. It is more like fixing Newton's laws and then experimenting with all the machines you can build with them.
[1]: https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_t...
superuser2|10 years ago|reply
Actually, these laws are the most dystopian.
One concept that seems to have this design goal is strict liability: what you intended, what you believed, what elaborate conspiracies were created to deceive you, none of that matters. If you did it, you are guilty and will be punished, full stop. This is attractive because it's much harder to prove beyond a reasonable doubt that someone did something "with malice" or "intentionally" than to prove that they did it. This tends to show up around youth sexuality, probably because that's an area where social norms have changed quite a bit.
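Strict liability written out as code makes the design goal obvious. A hypothetical sketch (the names are mine, not from any statute): the rule consults only the act, never intent or belief, and there is no "manual override" branch for a judge's discretion.

```python
# Hypothetical sketch of strict liability as a program: the inputs about
# intent and deception exist, but the rule deliberately ignores them.
def is_guilty_strict(did_act: bool, intended: bool, was_deceived: bool) -> bool:
    # Mens rea and deception are simply never consulted; only the act matters.
    return did_act

print(is_guilty_strict(did_act=True, intended=False, was_deceived=True))   # True
print(is_guilty_strict(did_act=False, intended=True, was_deceived=False))  # False
```

The attraction for prosecutors is exactly that the function takes one effective input: proving `did_act` beyond a reasonable doubt is far easier than proving `intended`.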
It doesn't matter if the 17.5-year-old you slept with presented a fake ID saying she was 18, took the lead in all sexual activity, etc. It doesn't matter if she shows up in court and begs the judge to let you go. People whose 18th birthday has not passed cannot give consent, full stop. Sex without consent is rape, rapists are bad people and should be made to suffer, so you're going to prison and then on to the sex offender registry. Your judge, and the judges you appeal to, all think this is ridiculous, but their hands are tied because the law was written like a computer program that failed to consider edge cases. (Literally edge cases: those that fall near numerical boundaries. Some states have patched their legal systems to allow consent between partners who are close in age but under the normal age of consent; some have done so only for heterosexual couples; some still haven't. My state actually tried and convicted two teenagers as adults for raping each other at the same time.)
Same with child porn. Child porn laws were created at a time when taking and distributing a picture could only be part of a commercial publishing operation. Anyone who creates or possesses an image of a nude child is guilty of a child pornography offense. Seems reasonable. Now every teenager has an internet-connected camera in their pocket, and we found an edge case we forgot: the child pornographer is the child him/herself and the consumer is her long-term boyfriend, also underage. Still just as illegal, and the same punishment is required. You would hope prosecutors would turn a blind eye, but "the law is the law."
These rather salacious examples are the most high-profile, but I'm sure the same problems happen in more mundane areas of law as well.
At some point you have to trust judges and you have to empower bureaucrats to help people get out of obviously ridiculous situations. Laws are much more difficult to change than computer programs; there needs to be a reasonable amount of discretion for a manual override until the law can be fixed.
a3_nm|10 years ago|reply
In theoretical CS, people talk of catching and fixing "bugs" in proofs, namely, mistakes that make the proof fail but can hopefully be fixed while sticking to essentially the same idea.
One can "refactor" proofs in superficial ways (e.g., renaming concepts), but also in deeper ways, e.g., extracting part of a proof to make it an independent lemma that you can reuse (or "invoke") from other parts of the proof. One often tries to "decouple" large proofs into independent parts with clearly defined "interfaces" that the reader can understand separately from each other, though this usually implies a tradeoff (a more tightly integrated proof requires more mental space but is usually shorter overall).
One can think of the statement of sub-results (lemmas) as providing an "interface" to invoke them elsewhere, which you try to "decouple" from the actual "implementation", namely, the way the lemmas are really proven. It takes practice to find the right way to abstract away the essence of a result to state it correctly, without burdening it with implementation details, but without forgetting an important aspect of the result that will be necessary later. As in software engineering, once a result is proven, you stop burdening your mind with the implementation and mostly think about the statement (i.e., what the result is supposed to be doing) when using it.
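A minimal Lean sketch of this interface/implementation split (my own toy example, not from the comment): once a lemma is proven, other proofs invoke its statement alone and never look at how it was proven.

```lean
-- The statement is the "interface"; the proof term is the "implementation".
theorem double_nonneg (n : Nat) : 0 ≤ n + n :=
  Nat.zero_le _

-- Another proof "invokes" the lemma through its statement alone,
-- without re-proving or even inspecting its implementation.
example (n : Nat) : 0 ≤ (n + n) + 0 := by
  rw [Nat.add_zero]
  exact double_nonneg n
```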
In software engineering, one must decide which part of the code is "responsible" for checking certain properties on the objects, and that code may "assume" some preconditions on its inputs and must "guarantee" some preconditions on its outputs. In the same way, in proofs, one often wonders where certain conditions should be verified. Should they be part of the definition of the object? Does this lemma enforce more conditions on the object than what is guaranteed by its statement?
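The precondition/guarantee split has a direct software analogue in design-by-contract style assertions. A hypothetical Python sketch: the docstring plays the role of a lemma's statement, and the asserts mark exactly where the "responsibility" for checking each property lives.

```python
# Hypothetical example: the docstring is the "statement" (interface);
# the assertions are where the proof obligations are actually discharged.
def mean(xs: list) -> float:
    """Precondition: xs is non-empty. Guarantee: min(xs) <= result <= max(xs)."""
    assert len(xs) > 0, "caller must guarantee non-emptiness"
    result = sum(xs) / len(xs)
    # Postcondition checked here, so callers may assume it without re-deriving it.
    assert min(xs) <= result <= max(xs)
    return result

print(mean([1.0, 2.0, 3.0]))  # 2.0
```

As with lemmas, once the guarantee is established in one place, every caller reasons from the statement rather than the implementation.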
The parallel is not perfect. In software engineering, you can rely on the computer to check that your code is correct, and to execute it. In mathematics you rely on other humans to do this and check that they are convinced by your proofs. This means you can get away with appeals to human intuition which are not fully formal, but on the other hand there is no safety net when you make an error in your reasoning, no reality check that you can invoke to avoid exploring erroneous consequences. Also, this does not apply to all types of proofs; but it applies especially well to proofs that describe a construction, i.e., a way to "build" a certain abstract object, often to justify that an object with a certain desirable set of properties exists.
thinkpad20|10 years ago|reply
(Disclaimer: I'm hardly an expert in these languages; I just dabble.)
currentoor|10 years ago|reply
For example, creating new domain-specific control flow with Lisp macros versus defining a Dirac delta function using limits and integrals. In programming it's easy for bugs to seep in because there are more little, subtle details and leaky abstractions. Math, on the other hand, feels much more abstract and clean.
Perhaps this is just because dumb silicon boxes interpret our code while humans interpret our math, which gives us a much more sophisticated base language to work with.
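For reference, the Dirac delta construction mentioned above can be written as a limit of Gaussians (one of several equivalent "nascent delta" definitions), with its defining property under the integral:

```latex
\delta(x) \;=\; \lim_{\epsilon \to 0^{+}} \frac{1}{\epsilon\sqrt{\pi}}\, e^{-x^{2}/\epsilon^{2}},
\qquad
\int_{-\infty}^{\infty} f(x)\,\delta(x)\,dx \;=\; f(0).
```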
vayarajesh|10 years ago|reply
I have a keen interest in neural networks, and it requires a good foundation in math.
throwaway593492|10 years ago|reply
I've never taken a college-level linear algebra course, and I've found KhanAcademy's linear algebra course to be a good gentle introduction. I just skip the parts I already know.