No, I'm going to agree that that is 100% within the field of academic computer science. I literally covered all the relevant notation in my freshman year.
What I'm wondering is whether or not there was some other factor going on, because I'm trained as a computer scientist and found nothing particularly objectionable about the formula, other than the f[] application notation. (And as a polyglot programmer, I've long since made my peace with that sort of notation mutation.) And I am by no means well-practiced in that sort of thing; I've been out of school for 14 years now, and only dabble on the side in this sort of thing now. The "forall y there exists an x such that" pattern in the middle is an extremely common recurring pattern, and what surrounds it on either side is also extremely simple.
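For reference, the recurring pattern I mean, written out (this is my rendering of the pattern, not necessarily the exact formula from the talk; the set names A and B are placeholders):

```latex
% "for all y there exists an x such that f applied to x equals y",
% using the bracketed application notation mentioned above:
\forall y \in B \;\; \exists x \in A : f[x] = y
```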
Did you do it in a few seconds while reading from a slide and listening to a lecturer talk about math? I did the same thing as you and it took me more than 10 seconds to parse through the notation. If someone put this up on a slide during a talk and asked if I 'understood it', the answer would be no. Doesn't mean I'm incapable of understanding it, just that it uses muscles I don't flex very often.
'Outside their field' was poor wording on my part. I meant more that it's not the sort of thing that most CS engineers see day to day. People forget things they don't use often.
This is such a notation issue. If I were to show an implementation of permutations of 1..N in, e.g., Brainfuck (to choose an extreme example), there's no way the mathematicians would get it. Why don't mathematicians learn math??
This issue comes up frequently when I try to read CS papers which explain their concepts using math notation. It's much, much easier to reverse-engineer the concept from a working example written in some notation I can actually read, such as... any programming language, even a programming language I've never formally learned. The math notation is literally Greek to me (if you'll pardon the pun) and does more to obscure than communicate meaning.
Not sure it's just notation; it may also be a matter of understanding.
In computer programming, "exists" is a matter of checking all the possibilities and finding one, so it is restricted to finite sets (and, realistically speaking, quite small ones).
In mathematics, on the other hand, there is no such restriction. Existence is just an assumption: if there is at least one, we proceed with the assumption, no matter whether we are talking about finite sets, countably infinite sets, or uncountably infinite sets.
I've been working as a computer programmer for quite a long time, and I also find it very annoying to see people around me thinking only in finite terms when they have to solve real problems.
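A minimal sketch of the contrast (hypothetical names, not from the article): a programmatic "exists" can only enumerate a finite domain, while a mathematical one is settled by any witness, enumerable or not.

```python
# A programmatic "exists": enumerate a finite domain and stop at the
# first witness found. This only terminates because the domain is finite.
def exists(domain, predicate):
    return any(predicate(x) for x in domain)

# "Does some x in 1..100 satisfy x * x == 49?"
print(exists(range(1, 101), lambda x: x * x == 49))  # True: the witness is x = 7

# The mathematical statement "there exists an even prime" needs no
# enumeration at all: exhibiting the single witness 2 settles it,
# even though the set being quantified over is infinite.
```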
Didn't read the article; is that supposed to show the properties of a function that is both "1:1" and "onto"? I.e., the set of all inputs is the same as the set of possible outputs, so for any x in the set, f(x) will return a result that is also in the set?
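On a finite set, both of those properties can be checked directly by brute force (a sketch with made-up names, not code from the article):

```python
# Check whether f, restricted to a finite domain, is 1:1 (injective)
# and onto (surjective) a given codomain.
def is_one_to_one(f, domain):
    images = [f(x) for x in domain]
    return len(images) == len(set(images))  # no two inputs collide

def is_onto(f, domain, codomain):
    # "for all y in the codomain there exists an x with f(x) = y"
    return all(any(f(x) == y for x in domain) for y in codomain)

s = {0, 1, 2, 3}
f = lambda x: (x + 1) % 4  # a rotation of s: both 1:1 and onto s

print(is_one_to_one(f, s))  # True
print(is_onto(f, s, s))     # True
```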
Didn't major or minor in mathematics, but I took a few papers. Time to read the article and collect my prize, or look ignorant under my real name on the web.
EDIT: I was wrong! Though in my defense, had I been given the context from TFA, I think I would have got it.
But isn't CS one of the formal sciences (together with mathematics, statistics, etc.)? Familiarity with mathematical notation should be pretty foundational for active researchers in the field I would think.
It just doesn't come up very much. The computers abstract it enough that you rarely need to actually look at notation like that. Unless you're doing some real close to the metal work, which the people who kept their hands raised probably were.
jerf|9 years ago
WaxProlix|9 years ago
bananabill|9 years ago
WaxProlix|9 years ago
marssaxman|9 years ago
ionut_popa|9 years ago
VyseofArcadia|9 years ago
[formula] is not expert-level material.
lacampbell|9 years ago
thearn4|9 years ago
bananabill|9 years ago
huherto|9 years ago