It's very important to note here that 0^0=1 is a shorthand and not a truth.
Mathematicians are absolutely not stating that they have proven, or that it is true, that 0^0=1. It is a definition, not a claim of equality. They're not saying "0^0 is 1" in the sense that they say "1+1 is 2" or "0.999... is 1". They're saying "we define 0^0 to be 1". The difference is more than just pedantry; it strikes at the core of why mathematics is the most powerful tool for determining truth that humanity has either discovered or invented (hat tip to anyone familiar with that debate).
One of the cold, austere beauties of mathematics* is that if you do not accept a definition, you can reject it as false and reason with the result. To give the traditional example, if you accept as true that two parallel lines never intersect, then along with the other four of Euclid's axioms you can prove all of Euclidean Geometry (what you learn in high school). If you do not accept it as true (and explicitly accept it as false), then you can prove all of Hyperbolic Geometry (one form of non-Euclidean Geometry).
In the case here, you're free to reject the convention that defines 0^0 as 1 and reason with the result; you will not break any mathematics. But you should know that mathematicians have run into essentially no issues with this convention (and can handle the few that do arise trivially), so you're only adding a lot of work for yourself.
When you really think about it, it fills you with awe. It's awesome.
*“Mathematics, rightly viewed, possesses not only truth, but supreme beauty—a beauty cold and austere, like that of sculpture, without appeal to any part of our weaker nature, without the gorgeous trappings of painting or music, yet sublimely pure, and capable of a stern perfection such as only the greatest art can show.” -Bertrand Russell
In a certain sense, "1+1 is 2" is also merely a definition, in the same sense that "0^0 is 1" is a definition. Addition can be formally defined in mathematics; we habitually omit this definition because it is tedious, and because addition is such an intuitive operation that we do not require a definition in order to reason about it.
Much as the question "what if the parallel axiom didn't hold?" leads to alternative geometries, the question "what if addition didn't work in the same way?" (or "how can we generalize addition?") leads to some basic notions in abstract algebra.
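To make that concrete, here is a minimal sketch (my own toy encoding, not any official formalization) of Peano-style naturals, with addition defined by recursion; "1 + 1 = 2" then becomes a tiny theorem about that definition rather than a primitive truth:

```haskell
-- Peano-style naturals: a number is either zero or a successor.
data Nat = Z | S Nat deriving (Eq, Show)

-- Addition is *defined* by recursion on the first argument.
add :: Nat -> Nat -> Nat
add Z     n = n
add (S m) n = S (add m n)

-- "1 + 1 = 2" is now provable from the definition.
main :: IO ()
main = print (add (S Z) (S Z) == S (S Z))  -- True
```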
This is not true. A very natural way to define a^b is as the number of functions from a set with b elements to a set with a elements. In this case there is exactly one function from the empty set to the empty set, and we have proven 0^0 = 1. This is no different from defining addition and then proving 1+1 = 2.
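This combinatorial definition is easy to check by brute force. A sketch (the name countFuns is mine): a function from a b-element set to an a-element set is a choice of image for each of the b inputs, i.e. a length-b list over the a possible outputs.

```haskell
import Control.Monad (replicateM)

-- Count functions from a b-element set to an a-element set by
-- enumerating them: replicateM b outputs lists every length-b
-- sequence of choices from the output set.
countFuns :: Int -> Int -> Int
countFuns a b = length (replicateM b [1 .. a])

main :: IO ()
main = do
  print (countFuns 2 3)  -- 8, i.e. 2^3
  print (countFuns 0 0)  -- 1: exactly one function, the empty one
  print (countFuns 0 3)  -- 0: nowhere to send the inputs
```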
I see it as somewhat complex but in the end I sort of agree that it is a definition. Let's start with a problem:
0/0
The problem with this is not that the expression itself is meaningless. The limit of n/x as x->0 from the right is infinity for any positive real n, and negative infinity for any negative n. In essence 0/0 ends up reducing to 0 * infinity, which isn't very helpful.
I would argue that a discontinuity of this form can never be unique to a single point in an otherwise continuous function, if the limit approached from both sides is the same, unless that point has a specific, defined value. This is because a point is arbitrarily small. The limit of 1/x as x->0 is discontinuous because the limits from the negative side and the positive side are different, as anyone can see when you plot it on a graph.
However here we have x/x, and the limit stays at 1 from both sides. It may be formally undefined at the exact point 0/0, but the limit from both sides is the same, and it remains constant as you get arbitrarily close. Therefore it is simpler, system-wise, to treat it as 1 than to make an exception for it.
> In the case here, you're free to reject the convention that defines 0^0 as 1 and reason with the result;
But you add complexity which is neither helpful nor desirable.
Can anyone recommend a really good book on mathematics that demonstrates the "beauty" of it? I'd love to really learn about and understand Fast Fourier Transforms and the like, but any book I have is from my old college days and is just so bloody tedious.
I used to really love maths when I was younger but it got beaten out of me by endless repetition and "now do Questions 1 - 50" tedium.
Now I get scared whenever I see the "summation" symbol.
Well of course we made it up. No one handed us any stone tablets with 0^0 on them.
Mathematicians are always making definitions, and working out which of them should be kept and which should be discarded. We keep the definitions that make the most sense, that make our lives the easiest, that make theorems easy to state, that give math a sense of being natural. Indeed, in the early days of algebraic geometry there were big debates over which definitions to adopt.
It is like deciding on a convention when you design a new programming language. In this case, experience has shown that it is pretty much always better to say that 0^0 is 1 and not 0. Among other reasons, there is exactly one map from the empty set to the empty set.
But if you say 0^0 = 0, you don't get math blowing up in some big contradiction. You just get a little more kludge here and there, a few extra special cases of lemmas that have to be spelled out in more detail. Nothing too awful.
Correct me if I'm wrong, but I infer that you're using "we made it all up" as a pejorative toward mathematicians. Of course mathematicians invented the terminology, notation, and methodology, but that's not a bad thing. It's a great thing, just like it's great that engineers "make up" bridges, chemists "make up" pharmaceuticals, writers "make up" novels, etc.
0^0, like any indeterminate form, can be made to equal anything via sufficient cleverness. Consider the limit:
y = lim_[x->0] x^[a / log(x)]
We have log y = lim_[x->0] (a / log(x)) log(x) = a, so y = e^a, which can be any positive number at all! Of course, nobody in their right mind would define exponents this way, but the indeterminacy is inherent in the definition of the symbols.
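In fact x^(a/log x) = exp((a/log x) * log x) = e^a exactly for every x in (0, 1), so no limit trickery is even needed. A quick numerical sketch:

```haskell
-- For any 0 < x < 1, x ** (a / log x) equals exp a (up to rounding),
-- while the exponent a / log x itself tends to 0 as x -> 0.  So a
-- "0^0-shaped" limit can be arranged to hit any positive value exp a.
main :: IO ()
main = do
  let a = 2.0 :: Double
  mapM_ (\x -> print (x ** (a / log x))) [0.1, 0.01, 1e-9]
  print (exp a)  -- all of the above approximate exp 2
```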
The real problem here is that x^y is a single shorthand which refers to a few fundamentally different mathematical concepts (which happen to have significant overlap with each other).
First, it refers to a function f:C x N --> C, defined in terms of repeated multiplication. f(x,0) is 1 for all x != 0, and so we adopt the convention that f(0,0) is also 1.
But it also refers to a function g:C x C --> C, defined as g(x,y) = exp(y log(x)), which has a branch cut on the negative real axis of the first argument and an essential singularity at (0,0), and so g(0,0) is necessarily undefined.
The value of 0^0 depends entirely on what sort of mathematics one is doing at the time, and therefore which function one is referring to.
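A sketch of the two readings (the names powNat and powAnalytic are mine, and Double stands in for C to keep it simple):

```haskell
-- The discrete power: repeated multiplication, with the convention
-- that the empty product is 1, so powNat 0 0 = 1.
powNat :: Double -> Int -> Double
powNat _ 0 = 1
powNat x n = x * powNat x (n - 1)

-- The analytic power exp(y * log x).  At x = 0, log 0 is -Infinity,
-- 0 * (-Infinity) is NaN under IEEE arithmetic: no value exists there.
powAnalytic :: Double -> Double -> Double
powAnalytic x y = exp (y * log x)

main :: IO ()
main = do
  print (powNat 0 0)               -- 1.0
  print (isNaN (powAnalytic 0 0))  -- True
```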
But if you define 0^0=1 in general, it doesn't cause a problem here; that definition never disagrees with x^y = exp(y log(x)), it just defines it at the point 0^0, while the latter leaves it undefined. In other words, it's possible to make a common extension of the two; they don't actually give different values in any case.
Of course, doing this makes exponentiation discontinuous at (0,0), but seeing as it already had an essential singularity there, this isn't really a loss.
In fact, exponentiation f : C x N -> C has a natural generalization to any monoid C, where f(x,n) is x "multiplied" by itself using the monoid operator n times. In this setting, the only sensible choice is that x^0 = 1 for any x, where 1 is the unit of the monoid in question. In particular, that is the only definition of exponentiation which is parametric in our choice of monoid.
Naive Haskell example code (using Int for Nat):
f :: Monoid m => (m, Int) -> m
f (_, 0) = mempty
f (x, n) = x `mappend` f (x, n - 1)
-- or equivalently
f (x, n) = mconcat (replicate n x)
- "0^0. Why? Because mathematicians said so. No really, it’s true."
- [Detailed explanation of the tradeoffs involved in choosing different definitions of exponentiation.]
So, it's not "because mathematicians said so", it's because of a deep review of the tradeoffs of defining how exponentiation generalizes, the kind of thing that mathematicians happen to study more than other identifiable groups.
Edit: To be clear, this is good practice when linking to arXiv in general. From the abstract, one can easily click through to the PDF; not so the reverse. And the abstract allows one to do things like see different versions of the paper, search for other things by the same authors, etc.
I really like that Knuth paper: the Iverson bracket in particular is some really handy notation. Mathematicians would do well to spend more effort on notation. Currently it doesn’t seem nearly as valued as theorem proving, but in my opinion it’s just as important, because it defines how we think about the structures we’re working with.
I think computer programming would actually be quite excellent exercise for a mathematician, because it involves such heavy intimate experience with the problems of naming things, working with notation, and defining the boundaries of various abstractions. From kindergarten up through the end of an undergraduate degree, mathematics students mostly take existing notation and definitions for granted, and don’t get much hands-on experience with the problems which result from inventing bad notation or bad names. As a result, they have a less visceral understanding of the importance of good notation and good names.
[I also think programming students should spend at least a bit of time working with as many different abstraction styles and notations as they can, as well as e.g. trying to implement new toy programming languages with new semantics.]
I feel like in discrete mathematics (especially combinatorics), since we don't use continuous functions, it's useful to say 0^0 is 1, along with 0! = 1, and so on. Makes a lot of things around the binomial theorem and the like easier. I'm not so sure if it's safe to use that when doing calculus proofs or anything along those lines, but there, you have more useful tools for dealing with limits that might approach 0^0.
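One concrete instance of the binomial-theorem point: (x+y)^n = sum over k of C(n,k) x^k y^(n-k) only holds uniformly at x = 0 if the k = 0 term contributes 0^0 = 1. A sketch (choose and binomialSum are my names; Haskell's ^ already uses the 0^0 = 1 convention):

```haskell
-- C(n,k) via factorials; product of an empty list is 1, which
-- quietly handles the k = 0 and k = n edge cases.
choose :: Integer -> Integer -> Integer
choose n k = product [1 .. n] `div` (product [1 .. k] * product [1 .. n - k])

-- Right-hand side of the binomial theorem.  At x = 0 the k = 0 term
-- is C(n,0) * 0^0 * y^n, so the identity needs 0^0 = 1.
binomialSum :: Integer -> Integer -> Integer -> Integer
binomialSum x y n = sum [ choose n k * x ^ k * y ^ (n - k) | k <- [0 .. n] ]

main :: IO ()
main = print (binomialSum 0 3 4 == (0 + 3) ^ 4)  -- True
```

Had 0^0 been defined as 0, the k = 0 term would vanish and the statement of the theorem would need an explicit special case at x = 0.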
0! really is 1 in a much more reasonable sense. The empty product is the multiplicative identity. The continuous notion of ! also agrees: http://en.wikipedia.org/wiki/Gamma_function
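The empty-product view in code (a sketch; the name factorial is mine):

```haskell
-- 0! falls straight out of the definition: the product over the
-- empty list [1 .. 0] is 1, the multiplicative identity.
factorial :: Integer -> Integer
factorial n = product [1 .. n]

main :: IO ()
main = print (map factorial [0 .. 5])  -- [1,1,2,6,24,120]
```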
Missing Q and A: But if mathematicians insist it is 1, why do high school teachers act like they know more than the mathematicians do?
A: They don't. The statement that mathematicians uniformly say it is 1 is simply false. My high school teacher had a PhD in math, I think it's fair to say she was a mathematician. And yes, she said it was undefined.
I thought "most of the time" was implied in these sorts of things by now. And you will note the mathematicians don't "say it is 1"... they say they are choosing to follow the convention of defining it as 1, with reasons listed below.
I could come up with whatever crazy definitions in math I wanted to, and sometimes even get useful results. However, this view is not... convenient for teaching, so they pretend that you are learning "math", as opposed to "a particular math".
>"The statement that mathematicians uniformly say it is 1 is simply false"
Not really. No serious mathematician would dispute the fact that for every real x, e^x = sum_(n=0 to infinity) x^n/n!.
But the above fails at x=0 if we don't define 0^0=1. So even mathematicians who claim to not use 0^0=1, can almost always be convinced to admit that they do indeed use 0^0=1, using the Taylor series for e^x.
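A sketch of that argument (expSeries is my name, truncating the series at n = 20):

```haskell
-- Partial Taylor sum for e^x.  At x = 0 every n >= 1 term vanishes,
-- so the whole sum is the n = 0 term, 0^0 / 0!.  Haskell's ^ uses the
-- 0^0 = 1 convention, so expSeries 0 correctly gives e^0 = 1.
expSeries :: Double -> Double
expSeries x = sum [ x ^ n / fromIntegral (product [1 .. n]) | n <- [0 .. 20 :: Integer] ]

main :: IO ()
main = do
  print (expSeries 0)                      -- 1.0
  print (abs (expSeries 1 - exp 1) < 1e-9) -- True
```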
Are there any examples where the x^0=1 definition turns out to make other definitions more complicated to write down? For example, you'd have a general definition and need a special definition for when you get an exponent that equals 0.
I find it a bit disappointing that the article considers only limits that would justify "0^0 = 0" and "0^0 = 1". In fact, the term "0^0" as a limit can reach any value, the same way as "0/0" can reach any value.
I'm stressing this because back in school, I really thought that although 0^0 is undetermined, it can only reach exactly 0 or 1, but nothing else. This is of course wrong, but no teacher was able to tell me why. For some time I even thought I had found a new theorem and tried to prove it. Later, I told some math guru about this; he thought about it for a minute, and told me two functions f(x) and g(x) whose limits are each 0, but for which the limit of f(x)/g(x) is 2 (or any other value, if you adjust f(x) and g(x) accordingly).
Having said that, in most cases "0^0 = 1" is a useful convention, especially in a purely algebraic context when polynomials are involved.
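Such a pair of functions is easy to exhibit. A sketch: f(x) = 2x and g(x) = x both tend to 0 as x -> 0, yet their ratio is identically 2.

```haskell
-- f and g both tend to 0 at 0, but f x / g x = 2 for every x /= 0,
-- so the 0/0-shaped limit of the ratio is 2.  Scaling f by any
-- constant c gives a 0/0 limit equal to c instead.
f, g :: Double -> Double
f x = 2 * x
g x = x

main :: IO ()
main = mapM_ (\x -> print (f x / g x)) [0.1, 0.01, 0.001]  -- 2.0 each time
</imports>
```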
This question doesn't have an answer because, as this post suggests, the meanings of the first 0, the second 0, and the (^) are overloaded. The reason mathematicians don't mind the overloading is that they all have "kind of" similar meanings.
So we can talk about a recursively defined function on naturals where it's convenient to "define" 0^0 = 1. We can talk about various kinds of limit techniques in the reals where 0^0 is not actually a value but instead a shorthand for a particular kind of limit.
This probably forms the most interesting notion of what 0^0 means, where we define it as the limit of a particular kind of path in the complex plane and then notice that 0^0 can take any value we choose, depending on exactly what path was taken to get there. [0]
This is exactly why you have things like 0! = 1, 0 choose 0 = 1, 0^0 = 1, the empty sum is 0 and the empty product is 1, and so on. These are DEFINITIONS and they make notation easier. They basically help the flow of mathematics. It's often difficult to watch someone try to explain "intuitively" why some of these things are the way they are and completely miss the point that they are like this because they help make other things easier.
I have a problem with this part of the explanation:
>However, this definition extends quite naturally from the positive integers to the non-negative integers, so that when x is zero, y is repeated zero times, giving y^{0} = 1, which holds for any y. Hence, when y is zero, we have 0^0 = 1.
When y is zero, don't we have 0^0 = 1 x [y zero times]? Maybe I'm conceptualizing it incorrectly, but I'm envisioning an empty space where y would be, akin to an empty set. 1 times an 'empty set' is _not_ 1, it's one empty set, which strikes me as another way of saying nothing/zero, not 1.
I know my language is imprecise and I'm probably describing empty sets and the definition of zero incorrectly. The point is that the last step of his explanation doesn't sit right with me. Just because Y exists zero times does not mean you can just throw it out of the multiplication.
Had a set theory professor who taught us that for the non-negative integers, m^n was just the number of unique mappings from a set of cardinality n to one of cardinality m. Ergo, for all sets A such that |A| = k, k^0 is just the number of mappings from Ø, and there is necessarily exactly one: the mapping with empty image and pre-image. So 0^0 = 1.
While high schoolers try to prove their own intuitions about their understanding of exponents (intuition drilled into them through rote learning), mathematicians just say "we defined it that way".
It speaks to the tragedy that is the high school math curriculum.
I did not encounter this convention while working on my math degree. I am surprised that none of the characters in the article said "0^0 is nothing, but limits of the form 0^0 can be any nonnegative real number or infinite".
That's a rather long text to say "it's an arbitrary, and conveniently chosen, definition of a special case of the power function, similar to how 1 is not prime". I also think the presentation was chosen poorly: lots of wrong information before the correct approach is presented.
I love Wikipedia. It's amazing. It makes the world a better place. I'm a pretty decent programmer. I do video games so I do lots of 3d math. I'd say I'm decent at that as well.
I hate Wikipedia for math. Absolutely hate it. Unless you are a mathematician by trade Wikipedia is damn near useless for learning new math concepts. I don't even bother checking it anymore.
In no way was Wikipedia better; it was confusing and basically incomprehensible unless you already understood the material.
His presentation was much better because it leads you to the answer instead of dropping it on you from the sky. (The difference between learning by rote vs learning by understanding.)
Interesting article. I ran across a problem yesterday that was similar to P(x)^Q(x) = 1, which then asked me to find the sum of the solutions. I noticed that both P(x) and Q(x) share a root at some a. But I realized, 0^0 is most often defined as 1, and carried on. Checking the answer key later on showed that they chose to neglect that a, and call 0^0 undefined. I'm not sure how I really felt about it.
Note, I also forgot to check when P(x) = -1, assuming that Q(x) is even there ;P
I think that the most intuitive way to get the idea of why 0^0 = 1 is to take an example from combinatorics. n^k is the number of distinct sequences for sampling with replacement and ordering (for example, picking balls from a repository of n different balls and counting the number of distinct ways that k balls can be picked and ordered, with replacement). I think that there is only one way of ordering the results of drawing zero balls from a set of 0 different balls :)
chacham15 | 12 years ago
What is the equivalent in this analogy if you do not accept that 0^0=1 (i.e. accept that 0^0=0)?
didgeoridoo | 12 years ago
Teachers: Let's do it by the book and come up (somehow) with conflicting answers.
Mathematicians: Yeah, sorry guys. We made it all up.
Pretty much captures most mathematicians I know.
Bahamut | 12 years ago
However, definitions aren't chosen all willy-nilly; there are good arguments for why particular definitions are adopted, as the article shows.
glhaynes | 12 years ago
(Well, and because they can justify it, of course.)
yaks_hairbrush | 12 years ago
http://arxiv.org/pdf/math/9205211v1.pdf
See page 6.
xamuel | 12 years ago
If 0^0 were defined as 0, you could write the above function as f(x)=x^0.
With 0^0 defined as 1, you're forced to write something like f(x)=1-0^|x| (absolute values to avoid division by zero), a bit more complicated.
This is silly though, and of no importance anywhere.
[0] http://en.wikipedia.org/wiki/Complex_logarithm
myhf | 12 years ago
http://www.wolframalpha.com/input/?i=y%3Dx%5Ex
quchen | 12 years ago
Wikipedia is probably a better source here: https://en.wikipedia.org/wiki/0%5E0#Zero_to_the_power_of_zer...
socrates1998 | 12 years ago
I guess I have always been drawn to the application of the concepts in the real world rather than the abstract beauty of it.