ionfish | 15 years ago

Bijections aren't necessarily isomorphisms. A bijection is just a one-to-one correspondence between two sets; an isomorphism also preserves functional and relational structure on those sets.
I note that this was brought up in the comments and dismissed by the author. A mistake is one thing—we all make them—but the way he shrugged off the correction is concerning.
robinhouston | 15 years ago

I like this objection, because it illustrates why category theory (or the category-theoretic way of thinking) is valuable even in elementary discussions.
“Isomorphism” is a great example of a word that is not well-defined out of context. An isomorphism of rings is not the same as an isomorphism of groups, for example.
In category theory, an isomorphism is a morphism that has an inverse. Category theorists sometimes use the punning device of treating “iso” as an adjective, so that an “isomorphism” is an “iso morphism”.
But before you can talk about morphisms, you need to know which category you’re talking about. An isomorphism in the category of sets is indeed precisely the same thing as a bijection, so in that sense bijections are necessarily isomorphisms. But the word is traditionally used in the context of categories of algebras (e.g. groups, rings, fields, etc.): the morphisms of those categories are the homomorphisms (i.e. functions that preserve the algebraic structure), and so of course the iso morphisms in those categories are also homomorphisms.
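A small sketch of that last point (my own illustration, not from the thread): the cyclic group Z/4Z and the Klein four-group Z/2Z × Z/2Z have underlying sets of the same size, so there are plenty of bijections between them, but none of those bijections preserves the group structure, because Z/4Z has an element of order 4 and the Klein group does not.

```python
# Illustration: a bijection between the underlying sets of two groups
# need not be a group isomorphism. We enumerate every bijection from
# Z/4Z to the Klein four-group and check the homomorphism property.

from itertools import permutations

z4 = [0, 1, 2, 3]
klein = [(0, 0), (0, 1), (1, 0), (1, 1)]

def add_z4(a, b):
    return (a + b) % 4

def add_klein(a, b):
    return ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2)

def is_homomorphism(f):
    """Check that f(a + b) == f(a) + f(b) for all a, b in Z/4Z."""
    return all(f[add_z4(a, b)] == add_klein(f[a], f[b]) for a in z4 for b in z4)

# Every assignment of the four Klein elements to 0, 1, 2, 3 is a bijection
# (4! = 24 of them)...
bijections = [dict(zip(z4, perm)) for perm in permutations(klein)]
print(len(bijections))                              # 24

# ...but not a single one of them preserves the group law.
print(sum(is_homomorphism(f) for f in bijections))  # 0
```

So in the category of sets these two objects are isomorphic, while in the category of groups they are not; which category you are in changes the answer.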
btilly | 15 years ago

> Bijections aren't necessarily isomorphisms. A bijection is just a one-to-one correspondence between two sets; an isomorphism also preserves functional and relational structure on those sets.
That is an important distinction, but it tends to be pretty easy for most people.
Another important distinction, which I personally found much harder, is the distinction between an isomorphism and a natural isomorphism: a natural isomorphism is an isomorphism that is defined by the functional and relational structure on those sets alone, without any arbitrary choices.
I had so completely ingrained the idea that "isomorphic means the same" that it was hard for me to understand that some pairs of things were more "the same" than others. What part of "same" didn't my teachers understand?
The classic example involves vector spaces. The set of linear functions from a finite-dimensional vector space over the reals to the reals is itself a vector space, called the dual. It turns out to be isomorphic to the original vector space, but not naturally isomorphic. The dual of the dual, by contrast, is naturally isomorphic to the original vector space. The natural isomorphism is defined like this: if v is a vector in the original space, and f is a linear function, then map v to the function v' defined by v'(f) = f(v).
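The evaluation map just described can be sketched in a few lines of code (my own sketch, taking V = R² and representing a linear functional by its pair of coefficients). Notice that the definition of the map never mentions a basis of V or any other arbitrary choice, which is the informal sense in which it is "natural":

```python
# Sketch of the canonical map V -> V** for V = R^2.
# A linear functional f on R^2 is represented by coefficients (c1, c2),
# meaning f(v) = c1*v[0] + c2*v[1].

def apply_functional(f, v):
    """Apply the functional with coefficients f to the vector v."""
    return f[0] * v[0] + f[1] * v[1]

def into_double_dual(v):
    """The natural map: send v to evaluation-at-v, i.e. v'(f) = f(v).
    No basis of V is chosen anywhere in this definition."""
    return lambda f: apply_functional(f, v)

v = (3.0, 5.0)
f = (2.0, -1.0)       # the functional f(x, y) = 2x - y

v_dd = into_double_dual(v)
print(v_dd(f))                   # 1.0
print(apply_functional(f, v))    # 1.0, the same value: v'(f) = f(v)
```

By comparison, any isomorphism from V to its dual has to pick an identification of vectors with functionals (for instance, dot product against a chosen basis), and different choices give different isomorphisms.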
As difficult as it was for me to understand this, it does turn out to be important. For instance the distinction between a vector space and its dual is closely related to the distinction between covariant and contravariant.
hackerblues | 15 years ago

For people who aren't up to speed on Abstract Algebra, the analogy to have in mind is:
A: Rectangles and squares are the same things.
B: That isn't correct. All squares are rectangles but the reverse isn't true.
A: I only care about the number of sides the polygon has so for my domain of interest rectangles and squares are equivalent. (Therefore the falsity of the first statement is a minor issue?)
barrkel | 15 years ago

Bijective functions are of interest to me for practical purposes in data binding, UIs and the like. Having a reversible function translating data from one form to another makes creating bindings more declarative: if your function is bijective to begin with, you don't have to prove that the getter and setter halves are equivalent (the usual alternative).
And for this practical purpose, I also don't care about preserving functional and relational structure in the mapped sets - isomorphism is not particularly relevant.
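A minimal sketch of that binding style (my own, not barrkel's code; the names `make_binding`, `to_display`, `from_display` are invented for illustration): declare the binding as a single pair of mutually inverse functions, and spot-check the round-trip property once instead of arguing separately about each half.

```python
# A two-way binding declared as a bijection given by (forward, backward).

def make_binding(forward, backward, samples=()):
    """Build a two-way binding from a pair of mutually inverse functions.

    `samples` is an optional set of model values used to spot-check
    that backward really inverts forward on representative inputs."""
    for x in samples:
        assert backward(forward(x)) == x, f"not a bijection at {x!r}"
    return forward, backward

# Example: bind a model value in cents to a display string and back.
to_display, from_display = make_binding(
    forward=lambda cents: f"{cents // 100}.{cents % 100:02d}",
    backward=lambda s: int(s.replace(".", "")),
    samples=[0, 5, 1234],
)

print(to_display(1234))       # "12.34"
print(from_display("12.34"))  # 1234
```

The point is exactly the one made above: because the pair is reversible, the getter and setter directions cannot drift out of sync, and no structure beyond the bijection itself needs to be preserved.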
So what I'd like to know is why exactly you think this is "concerning". What bad ramifications are you afraid of?