jpeterson | 1 year ago
1) Create a bunch of variables and initialize them to random values. We're going to add and multiply these variables. The specific way that they're added and multiplied doesn't matter so much, though it turns out in practice that certain "architectures" of addition and multiplication patterns are better than others. But the key point is that it's just addition and multiplication.
2) Take some input: a bunch of numbers that convey properties of some object, say a house (think square feet, number of bedrooms, number of bathrooms, etc.), and add/multiply them into the set of variables we created in step 1. Once we plug and chug through all the additions and multiplications, we get a number. This is the output. At first this number will be essentially random, because we initialized all our variables to random values. Measure how far the output is from the expected value corresponding to the given inputs (say, the purchase price of the house). This is the error, or "loss". In the case of purchase price, we can just subtract the predicted price from the expected price (and then square it, to make the calculus easier).
3) Now, since all we're doing is adding and multiplying, it's very straightforward to set up a calculus problem that minimizes the error of the output with respect to our variables. The number of multiplication/addition steps doesn't even matter, since we have the chain rule. It turns out this is very powerful: it gives us a procedure for minimizing the error of our system of variables (i.e. the model) by iteratively "nudging" each variable according to how it affects the error of the output. This iterative nudging is what we call "learning". At the end of the procedure, rather than producing random outputs, the model will produce price predictions that reflect the relationship between the inputs (square footage, bedrooms, bathrooms, etc.) and the prices we saw in the training set. All three steps are sketched in code below.
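To make this concrete, here's a minimal sketch of all three steps in plain Python: a tiny linear model fit to a few made-up house listings by gradient descent. The data, learning rate, and step count are arbitrary illustrations, not taken from any real dataset.

    import random

    # Hypothetical toy data: (square feet, bedrooms, bathrooms) -> sale price.
    # Every number here is made up purely for illustration.
    data = [
        ((2000.0, 3.0, 2.0), 250000.0),
        ((1500.0, 2.0, 1.0), 180000.0),
        ((2400.0, 4.0, 3.0), 320000.0),
        ((1100.0, 2.0, 1.0), 140000.0),
    ]

    # Step 1: create variables and initialize them to random values.
    random.seed(0)
    w = [random.uniform(-1.0, 1.0) for _ in range(3)]  # one weight per input feature
    b = random.uniform(-1.0, 1.0)                      # plus a bias term

    def predict(x):
        # Step 2: the "model" is literally just additions and multiplications.
        return sum(wi * xi for wi, xi in zip(w, x)) + b

    lr = 1e-8  # tiny learning rate, since the inputs are large and unscaled
    for step in range(50001):
        total_loss = 0.0
        for x, target in data:
            err = predict(x) - target
            total_loss += err ** 2  # squared error, the "loss"
            # Step 3: the chain rule gives d(loss)/d(w[i]) = 2*err*x[i]
            # and d(loss)/d(b) = 2*err. Nudge each variable a little in
            # the direction that shrinks the loss.
            for i in range(3):
                w[i] -= lr * 2 * err * x[i]
            b -= lr * 2 * err
        if step % 10000 == 0:
            print(f"step {step}: loss {total_loss:,.0f}")

    print("prediction for (1800 sqft, 3 bed, 2 bath):", round(predict((1800.0, 3.0, 2.0))))

Real frameworks compute those gradient lines for you automatically via the chain rule, which is exactly why the number of addition/multiplication steps doesn't matter.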
In a sense, ML and AI are really just the next logical step of calculus once we have big data and computational capacity.