top | item 26825058


defrex | 4 years ago

Whilst it's obviously powerful, I often find myself wishing math used syntax even half as easy to understand as any decent programming language.

I suppose it's a result of being developed on a chalkboard, but math seems to value _terseness_ above all else. Rather than a handful of primitives and simple named functions, it's single Greek characters and invented symbols. That kind of shenanigans would never pass a code review, but somehow when we're talking about math it's "elegant" and "powerful".

I call bullshit. Math syntax is bad.


IngoBlechschmid|4 years ago

Working mathematician here. Generally I concede.

However I'd like to add that often in mathematics, we are discussing very generic situations. For instance, we are not talking about the radius of some specific circle, which perhaps should be named `wheelRadius`, but about the radius of an arbitrary circle or even an arbitrary number.

I wouldn't really know a better name for an arbitrary number than `x`. The alternative `arbitraryNumber` gets old soon, especially as soon as a second number needs to be considered -- should it be called `arbitraryNumber2`? I'll take `y` over that any day :-)

Also there are contextually dependent but generally adhered to naming conventions which help to quickly gauge the types of the involved objects. For instance, `x` is usually a real number, `z` is a complex number, `C` is a constant, `n` and `m` are natural numbers, `i` is a natural number used as an array index, `f` and `g` are functions, and so on.

My favorite symbol, by the way, is `よ`, which denotes the Yoneda embedding and is slowly catching on. All the other commonly used symbols for the Yoneda embedding clashed with other common names, which has been a real nuisance when studying category theory.

mattmanser|4 years ago

We use i, x, y, etc. all the time as variable names as professional programmers.

So you're sort of arguing against a straw man there, almost no programmer would expect you to name such a concept 'arbitraryNumber2', we would also name it x or y if it made sense in the code.

robocat|4 years ago

Every programming language overloads the same few ASCII non-alphanumeric characters to have multiple meanings. A colon : can mean multiple things in most computer languages, or it gets combined. Even a symbol like less-than < takes on extremely different meanings depending on context: comparison, templates, XML, the << operator (bonus if overloaded), the <- operator (ugh, mixed up with minus for bonus confusion), etcetera.

I think saying programming languages are better than Mathematics is just due to your familiarity.

phailhaus|4 years ago

Oooh don't even get me started on how they name things after people instead of anything remotely descriptive or helpful. Imagine if you named functions after yourself.

WalterBright|4 years ago

> Imagine if you named functions after yourself.

Ooh, good idea! I'm getting sick and tired of foo(), it's time for walter().

taeric|4 years ago

Shell's sort would like a word. Or Bloom's filter, for that matter...

tartoran|4 years ago

I find it a good thing to name something after the person who discovered it or pioneered a branch of the field. Sometimes it makes things confusing, but most of the time the name reference also makes it very easy to remember.

ithinkso|4 years ago

Comparing programming languages to maths doesn't really make sense because they serve to express vastly different things. Programming languages need to unambiguously describe how to transform input data into output data. Maths language is more like a natural language and is used to communicate. It evolves the same way natural languages evolve, and any attempt to codify it precisely is futile because there will always be idiomatic expressions and exceptions to the rules, and it depends heavily on context. You use maths language to write a story or tell a friend what you did last night; you use a programming language to build a shed or bake bread.

It might be awful from an outsider's perspective, but so is any foreign language you never learned. It's hard to complain about, though: if you want to know what others are talking about, there is no way around learning it. It won't change to make things easier for you; it will change to make things easier for its speakers.

yongjik|4 years ago

A codebase easily contains thousands of identifiers - sometimes millions. You need a verbose, (hopefully) unique way to refer to them, because otherwise you will never find out which variable refers to what.

On the other hand, in a typical math textbook, the kind that will take you a full year to read through, the list of "all the symbols ever used in this book" usually fits in a single page.

There's no point in writing "CircumferenceRatio" when π does the job. Imagine solving a partial differential equation with CircumferenceRatio appearing five times on each line.

creata|4 years ago

Algebraically manipulating stuff (factoring, rearranging, cancelling, expanding, simplifying, etc.) without the terseness of math notation sounds like a nightmare, regardless of whether I'm using a chalkboard or an endless sheet of paper.

> Those kind of shenanigans would never pass a code review

Yes, because code is used in very different ways from a mathematical expression. When you see code in a repository or a textbook, I doubt you find yourself copying it out over and over again in your own work.

ben_bradley|4 years ago

Both math(s) and (almost?) all programming languages have their quirks and inconsistencies. What immediately comes to mind is (a thing I've recently learned) C++ template syntax, where getting a single character wrong buys you dozens of lines of error messages (there's a code golf challenge on exactly this).

At least with programming, you generally don't see different semantics depending on the value of something! With math, there's sin^2, as in sin^2 theta + cos^2 theta = 1, which reads as the square of the sine of theta. But then there's sin^-1, which means the inverse sine, AKA arcsine, and NOT 1/sin, which would be consistent with the previous usage.
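A quick Python sketch of the same three readings the notation overloads; in code, each one has to be spelled out and the ambiguity disappears:

```python
import math

theta = 0.5

# sin^2(theta): the square of sin(theta)
sin_squared = math.sin(theta) ** 2

# sin^-1(x): the inverse function (arcsine), NOT a reciprocal
arcsine = math.asin(math.sin(theta))  # recovers theta for theta in [-pi/2, pi/2]

# 1/sin(theta): the reciprocal (cosecant), a different thing entirely
cosecant = 1 / math.sin(theta)

print(sin_squared + math.cos(theta) ** 2)  # the identity: sin^2 + cos^2 = 1
print(arcsine, cosecant)                   # ~0.5 vs ~2.09 -- clearly not the same
```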

paulpauper|4 years ago

Are you kidding? Math syntax is super easy. But maybe that is just me. It's mostly just the same dozen or so Greek symbols used over and over.

User23|4 years ago

The use of invisible operators is obnoxious because it means symbol names must all be atoms. Why is yz (multiply y z) but 23 is (toint (cat “2” “3”))? A great deal of mathematical syntax is actually ambiguous as written, too. Plenty of it is fine, but it’s intellectually dishonest to deny that many common notations have no merit beyond widespread historical usage. Which, in case it isn’t clear, means yes, of course the student should learn them for the benefit of reading great works of the past.

currymj|4 years ago

in computer science people can use pseudocode or descriptive variable and function names, and do sometimes, but still often fall back on math notation and Greek letters.

sometimes the terseness, and leaving certain details implicit, does actually add to clarity rather than hurting it. the eye can only take in so much at one time.

elihu|4 years ago

My main frivolous gripe with math notation is how everyone uses radians by default, to the point where your first visual clue that something is an angle is not any kind of unit, but rather the fact that it's being multiplied or divided by some multiple or fraction of pi. I think that the most sensible universal angle unit is "rotations". So, 360 degrees is 1, 45 degrees is 1/8, and so forth. Radians are only useful for a few special cases, like determining how far a car rolls if its 10-inch-radius tire rotated by 300 radians. (I wonder if somewhere, there's a mathematician who has modded their car's tachometer to output radians per second rather than revolutions per minute, just to make the math work out easier...)
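For what it's worth, the "rotations" idea is easy to sketch in code; these `sin_turns`/`cos_turns` helpers are my own invention, not any standard API, and they just convert turns to radians under the hood:

```python
import math

def sin_turns(turns):
    """Sine of an angle given in full rotations (1.0 == 360 degrees)."""
    return math.sin(turns * 2 * math.pi)

def cos_turns(turns):
    """Cosine of an angle given in full rotations."""
    return math.cos(turns * 2 * math.pi)

# 1/8 of a rotation is 45 degrees, 1/4 is a quarter turn:
print(sin_turns(1 / 8))  # ~0.7071
print(cos_turns(1 / 4))  # ~0 (up to floating-point rounding)
```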

Anyways, programming languages generally follow math notation, and use radians for trig functions and so on. Usually that's not too much of a problem, but when applied to file formats like VRML which were meant to be human readable, the results are ugly.

For the most part though, I think math notation is pretty good. At least when compared to something like standard music notation, which is full of weird rules and historical accidents.

mannerheim|4 years ago

Algorithms for calculating trig functions would probably not look good using degrees. It might look OK with what I assume is usually used (lookup tables + interpolation?), but for the Taylor series expansion you have to multiply by powers of pi/180 everywhere.

Calculus is generally worse with degrees. The derivative of sin(pi/180 x) is pi/180 cos(pi/180 x). That's pretty inconvenient, especially if you're writing any sort of models that need to solve differential equations. Same reason base e is preferred for exponents.

hervature|4 years ago

Radians vs. degrees isn't notation, it's a convention. And you even say the reason why it is the convention: multiplying the radius by the angle in radians gives you the arc length. It is the only representation of angles with this special property. I mean, why should 360 represent 1 rotation? Why not use rotations themselves? That way 1/4 of a turn is 1/4, 1/8 is 1/8, and so forth.