item 44942471


pncnmnp | 6 months ago

I have a question that's bothered me for quite a while now. In 2018, Michael Jordan (UC Berkeley) wrote a rather interesting essay - https://medium.com/@mijordan3/artificial-intelligence-the-re... (Artificial Intelligence — The Revolution Hasn’t Happened Yet)

In it, he stated the following:

> Indeed, the famous “backpropagation” algorithm that was rediscovered by David Rumelhart in the early 1980s, and which is now viewed as being at the core of the so-called “AI revolution,” first arose in the field of control theory in the 1950s and 1960s. One of its early applications was to optimize the thrusts of the Apollo spaceships as they headed towards the moon.

I was wondering whether anyone could point me to the paper or piece of work he was referring to. Schmidhuber's piece has many citations, and in my previous attempts I've gotten lost in the papers.


drsopp|6 months ago

Perhaps this:

Henry J. Kelley (1960). Gradient Theory of Optimal Flight Paths.

[1] https://claude.ai/public/artifacts/8e1dfe2b-69b0-4f2c-88f5-0...

pncnmnp|6 months ago

Thanks! This might be it. I looked up Henry J. Kelley on Wikipedia, and in the notes I found a citation to this paper from Stuart Dreyfus (Berkeley): "Artificial Neural Networks, Back Propagation and the Kelley-Bryson Gradient Procedure" (https://gwern.net/doc/ai/nn/1990-dreyfus.pdf).

I am still going through it, but the Dreyfus paper is quite interesting!

cco|6 months ago

Count another in the win column for the USA's heavy investment into basic sciences during the space race.

So sad to see the current state. Hopefully we can turn it around.

leokoz8|6 months ago

It is in Applied Optimal Control by Bryson and Ho (1969). Yann LeCun acknowledges this in his 1989 paper on backpropagation: https://new.math.uiuc.edu/MathMLseminar/seminarPapers/LeCunB....

> "Since his first work on the subject, the author has found that A. Bryson and Y.-C. Ho [Bryson and Ho, 1969] described the backpropagation algorithm using Lagrange formalism. Although their description was, of course, within the framework of optimal control rather than machine learning, the resulting procedure is identical to backpropagation."
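The equivalence LeCun describes can be sketched numerically: treat the layers of a network as stages of a discrete-time control problem, and run the backward pass as the adjoint (costate) recursion that falls out of the Lagrange formalism. A minimal sketch (the dynamics, shapes, and values here are toy choices for illustration, not taken from Bryson and Ho):

```python
import numpy as np

# Stages of a discrete-time "control" problem: x_{k+1} = tanh(W_k x_k).
# These are exactly the layers of a small network.
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((3, 3)) * 0.5 for _ in range(4)]

def forward(x0):
    xs = [x0]
    for W in Ws:
        xs.append(np.tanh(W @ xs[-1]))
    return xs

x0 = rng.standard_normal(3)
target = np.ones(3)
xs = forward(x0)

# Terminal cost J = 0.5 * ||x_N - target||^2; costate lam_N = dJ/dx_N.
# The backward recursion over stages *is* backpropagation.
lam = xs[-1] - target
grads = []
for W, x_in, x_out in zip(reversed(Ws), reversed(xs[:-1]), reversed(xs[1:])):
    delta = lam * (1 - x_out ** 2)        # chain rule through tanh
    grads.append(np.outer(delta, x_in))   # dJ/dW_k
    lam = W.T @ delta                     # costate one stage earlier
grads.reverse()

# Sanity-check one entry against a central finite difference.
eps = 1e-6
Ws[0][0, 0] += eps
J_plus = 0.5 * np.sum((forward(x0)[-1] - target) ** 2)
Ws[0][0, 0] -= 2 * eps
J_minus = 0.5 * np.sum((forward(x0)[-1] - target) ** 2)
Ws[0][0, 0] += eps
fd = (J_plus - J_minus) / (2 * eps)
```

The costate update `lam = W.T @ delta` plays the role of the Lagrange multiplier recursion in the optimal-control derivation; relabel the stages as layers and it reads as the standard backward pass.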

psYchotic|6 months ago

pncnmnp|6 months ago

Apologies - I should have been clearer. I was not referring to Rumelhart et al., but to pieces of work that point to "optimizing the thrusts of the Apollo spaceships" using backprop.

seertaak|6 months ago

Rumelhart et al wrote "Parallel Distributed Processing"; there's a chapter where he proves that the backprop algorithm maximizes "harmony", which is simply a different formulation of error minimization.

I remember reading this book enthusiastically back in the mid-90s. I don't recall struggling with the proof; it was fairly straightforward. (I was in my senior year of high school at the time.)

duped|6 months ago

They're probably talking about Kalman Filters (1961) and LMS filters (1960).
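The LMS connection, at least, is concrete: the Widrow-Hoff LMS update is stochastic gradient descent on instantaneous squared error for a single linear unit, i.e., backprop with no hidden layers. A toy system-identification sketch (the filter taps and step size are made-up values):

```python
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([0.5, -0.3, 0.8])   # unknown filter to identify
w = np.zeros(3)                        # adaptive filter taps
mu = 0.05                              # step size

for _ in range(2000):
    x = rng.standard_normal(3)         # input samples
    d = true_w @ x                     # desired (reference) signal
    e = d - w @ x                      # instantaneous error
    w += mu * e * x                    # LMS update = -mu * grad of 0.5*e**2
```

After enough samples `w` converges to `true_w`; the update rule is exactly the delta rule later used to train single-layer networks.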

pjbk|6 months ago

To be fair, any multivariable regulator or filter (estimator) with a quadratic cost (LQR/LQE) will naturally yield a backpropagation-like procedure when an iterative algorithm optimizes its cost or error function by differentiating through the dynamics.
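A sketch of that observation: gradient descent on the open-loop controls of a linear-quadratic problem, where each iteration's gradient comes from a backward costate pass over the trajectory. The same recursion, relabeled, is backprop through time. (All matrices and the step size below are toy values, not from any particular reference.)

```python
import numpy as np

# Linear dynamics x_{k+1} = A x_k + B u_k with quadratic cost
# sum(x'Qx) + sum(u'Ru), optimized iteratively over the controls u_k.
A = np.array([[0.9, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), 0.01 * np.eye(1)
N, x0 = 50, np.array([1.0, 0.0])

def rollout(us):
    xs = [x0]
    for u in us:
        xs.append(A @ xs[-1] + B @ u)
    return xs

def cost(us):
    xs = rollout(us)
    return sum(x @ Q @ x for x in xs) + sum(u @ R @ u for u in us)

us = [np.zeros(1) for _ in range(N)]
for _ in range(200):
    xs = rollout(us)
    lam = 2 * Q @ xs[-1]                   # terminal costate dJ/dx_N
    grads = [None] * N
    for k in reversed(range(N)):
        grads[k] = 2 * R @ us[k] + B.T @ lam   # dJ/du_k
        lam = 2 * Q @ xs[k] + A.T @ lam        # costate recursion
    us = [u - 0.05 * g for u, g in zip(us, grads)]
```

The `A.T @ lam` sweep backwards through the horizon is the structural analogue of multiplying by transposed weight matrices in a network's backward pass.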

dataflow|6 months ago

[deleted]

throawayonthe|6 months ago

it's rude to show people your llm output

cubefox|6 months ago

> ... first arose in the field of control theory in the 1950s and 1960s. One of its early applications was to optimize the thrusts of the Apollo spaceships as they headed towards the moon.

I think "its" refers to control theory, not backpropagation.