Maybe it's for the best that McCarthy passed away in 2011; if he were still alive, he might have gotten a grant from SpaceX to put his scheme into action.
Some may find it interesting that John McCarthy coined the term "Artificial Intelligence" and was called a (or the) father of AI before the deep learning folks took over. This AI referred to the symbolic flavor, not the connectionist one.
Nowadays (god-)fathers of AI are Geoff Hinton and Yann LeCun and others, but 20 years ago things were very different…
> This AI referred to the symbolic flavor not the connectionist one
That's not a fair characterization at all. The title contains the term "Artificial Intelligence", and soon after, item 3 of the proposal is "Neuron Nets". The word symbolic is not used at all, although the term Abstractions is used.
A PROPOSAL FOR THE DARTMOUTH SUMMER RESEARCH PROJECT ON ARTIFICIAL INTELLIGENCE
J. McCarthy, Dartmouth College
M. L. Minsky, Harvard University
N. Rochester, I.B.M. Corporation
C.E. Shannon, Bell Telephone Laboratories
August 31, 1955
We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.
The following are some aspects of the artificial intelligence problem:
1. Automatic Computers
If a machine can do a job, then an automatic calculator can be programmed to simulate the machine. The speeds and memory capacities of present computers may be insufficient to simulate many of the higher functions of the human brain, but the major obstacle is not lack of machine capacity, but our inability to write programs taking full advantage of what we have.
2. How Can a Computer be Programmed to Use a Language
It may be speculated that a large part of human thought consists of manipulating words according to rules of reasoning and rules of conjecture. From this point of view, forming a generalization consists of admitting a new word and some rules whereby sentences containing it imply and are implied by others. This idea has never been very precisely formulated nor have examples been worked out.
3. Neuron Nets
How can a set of (hypothetical) neurons be arranged so as to form concepts? Considerable theoretical and experimental work has been done on this problem by Uttley, Rashevsky and his group, Farley and Clark, Pitts and McCulloch, Minsky, Rochester and Holland, and others. Partial results have been obtained but the problem needs more theoretical work.
4. Theory of the Size of a Calculation
If we are given a well-defined problem (one for which it is possible to test mechanically whether or not a proposed answer is a valid answer) one way of solving it is to try all possible answers in order. This method is inefficient, and to exclude it one must have some criterion for efficiency of calculation. Some consideration will show that to get a measure of the efficiency of a calculation it is necessary to have on hand a method of measuring the complexity of calculating devices which in turn can be done if one has a theory of the complexity of functions. Some partial results on this problem have been obtained by Shannon, and also by McCarthy.
5. Self-Improvement
Probably a truly intelligent machine will carry out activities which may best be described as self-improvement. Some schemes for doing this have been proposed and are worth further study. It seems likely that this question can be studied abstractly as well.
6. Abstractions
A number of types of "abstraction" can be distinctly defined and several others less distinctly. A direct attempt to classify these and to describe machine methods of forming abstractions from sensory and other data would seem worthwhile.
7. Randomness and Creativity
A fairly attractive and yet clearly incomplete conjecture is that the difference between creative thinking and unimaginative competent thinking lies in the injection of some randomness. The randomness must be guided by intuition to be efficient. In other words, the educated guess or the hunch includes controlled randomness in otherwise orderly thinking.
In addition to the above collectively formulated problems for study, we have asked the individuals taking part to describe what they will work on. Statements by the four originators of the project are attached.
... the paper continues ...
Of course one can do this, and it might work as a stopgap solution and abstraction layer until something different comes along (I have done similar), but ultimately this will not be very efficient. It might be a better, albeit less beautiful, solution to use linear algebra bindings implemented in a low-level language for the Lisp at hand. Hopefully one exists.
Now we know: John McCarthy used (x)emacs. Clearly an important revelation in the ongoing browser wars.
/s (and not to minimize how genuinely interesting this is)
What's interesting to me is that the creator of Lisp found Emacs Lisp acceptable for practical general use. (Although as noted this file can be loaded verbatim into Common Lisp and work fine.)
McCarthy did not invent Lisp with the expectation that it could even be used as a language for real computers. Steve Russell had to convince him that his theoretical definition of Lisp would actually work, by implementing the interpreter himself.
I don't think McCarthy would have been too snobbish about dialects, considering he wasn't expecting a practical language at all :D
Very cool, but I wonder why he defined foot as:
(setq foot (* 0.3048 m))
And not
(setq foot (* 12 inch))
It comes to the same thing, but the inch is defined in metric as exactly 2.54 cm, and the foot is a unit derived from the inch. Defining it that way would clearly spell out the dependency.
I'm not criticizing; it was his library for his own use. I'm just wondering whether there is a deeper meaning beyond "good enough".
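For what it's worth, that dependency chain could be spelled out in the style of the file itself. This is only a sketch of the alternative the parent describes; these exact setq's are not in McCarthy's file:

```elisp
;; Hypothetical alternative, deriving foot from inch from cm
;; (not what McCarthy's file actually does):
(setq m 1.0)              ; the base unit, the metre
(setq cm (* 0.01 m))
(setq inch (* 2.54 cm))   ; the inch is defined as exactly 2.54 cm
(setq foot (* 12 inch))   ; a foot is 12 inches by definition
;; 12 * 2.54 * 0.01 = 0.3048, the same value McCarthy wrote directly
```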
One thing I always wonder is how people throughout history, in famous historic codebases etc., can indent things 90% of the way, and then out of dozens of things done correctly there will be 3 things that are painfully off. Man, just indent them all the way if you're going to do it at all.
I've seen this in Dennis Ritchie's code, the Doom code, etc.
Am I the only person who sees these things stand out like a sore thumb? It would drive me mad. They either all have to be aligned or all unaligned, but not 90%.
In travel time, no. In energy cost, yes, but only if you're willing to wait a long time to get there. Also impossible to communicate with directly by radio.
So, I know that Elisp has historically lacked lexical scope, so setting variables without a prefix has the potential for name clashes, since even a variable setq'd into existence inside a defun will be added to the global namespace.
I did an experiment to double-check that it's still true in a recent-ish Emacs:
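The snippet of the experiment didn't survive the scrape, but it presumably looked something like this sketch (the function and variable names here are hypothetical):

```elisp
;; -*- lexical-binding: t -*-
;; Even with lexical binding enabled, a setq of a variable that was
;; never defvar'd or let-bound still lands in the global namespace:
(defun my-leaky-fn ()
  (setq my-scratch-var 42))   ; free variable: the assignment is global

(my-leaky-fn)
;; (boundp 'my-scratch-var) is now t, everywhere.

;; Wrapping the variable in `let' keeps the binding local:
(defun my-tidy-fn ()
  (let ((my-scratch-var 0))
    (setq my-scratch-var 42)  ; sets only the local binding
    my-scratch-var))
```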
lmfao, is there anywhere I can read more about McCarthy's plans for this?
[0]: https://frinklang.org/frinkdata/units.txt
Schmidhuber's part of the story should also be considered more, I think.
http://www-formal.stanford.edu/jmc/history/dartmouth/dartmou...
;;; multiplying a matrix by a column vector
(defun mvmult (matrix vector)
  (list (scap (nth 0 matrix) vector)
        (scap (nth 1 matrix) vector)))

;;; sum of two vectors
(defun vplus (vec1 vec2)
  (list (+ (nth 0 vec1) (nth 0 vec2))
        (+ (nth 1 vec1) (nth 1 vec2))))

;;; difference of two vectors
(defun vminus (vec1 vec2)
  (list (- (nth 0 vec1) (nth 0 vec2))
        (- (nth 1 vec1) (nth 1 vec2))))

;;; scalar product of two vectors
(defun scap (vec1 vec2)
  (+ (* (nth 0 vec1) (nth 0 vec2))
     (* (nth 1 vec1) (nth 1 vec2))))

;;; product of scalar and vector
(defun svmult (sca vec)
  (list (* sca (nth 0 vec)) (* sca (nth 1 vec))))

;;; sum of a list of vectors
(defun addup (veclist)
  (if (null veclist)
      zerovec
    (vplus (car veclist) (addup (cdr veclist)))))

(defconst zerovec '(0 0) "zero vector with two components")

;;; length of a vector (note: this shadows the built-in `length')
(defun length (x)
  (sqrt (+ (expt (nth 0 x) 2) (expt (nth 1 x) 2))))

(defconst Imatrix '((1.0 0.0) (0.0 1.0)) "unit 2x2 matrix")

;;; column n of a 2x2 matrix; used by mmult below but not part of
;;; the quoted excerpt, so this definition is a reconstruction
(defun col (n matrix)
  (list (nth n (nth 0 matrix)) (nth n (nth 1 matrix))))

;;; product of scalar and matrix
(defun smmult (sca matrix)
  (list (svmult sca (nth 0 matrix)) (svmult sca (nth 1 matrix))))

;;; sum of two matrices
(defun mplus (mat1 mat2)
  (list (vplus (nth 0 mat1) (nth 0 mat2))
        (vplus (nth 1 mat1) (nth 1 mat2))))

;;; difference of two matrices
(defun mminus (mat1 mat2)
  (list (vminus (nth 0 mat1) (nth 0 mat2))
        (vminus (nth 1 mat1) (nth 1 mat2))))

;;; product of two matrices: rows of mat1 against columns of mat2
(defun mmult (mat1 mat2)
  (list (list (scap (nth 0 mat1) (col 0 mat2))
              (scap (nth 0 mat1) (col 1 mat2)))
        (list (scap (nth 1 mat1) (col 0 mat2))
              (scap (nth 1 mat1) (col 1 mat2)))))

;;; product of a list of matrices
(defun multiplyup (matlist)
  (if (null matlist)
      Imatrix
    (mmult (car matlist) (multiplyup (cdr matlist)))))
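As a quick sanity check of the 2-vector helpers, here are a few example calls of my own (not from the file):

```elisp
;; row-by-row dot products of a 2x2 matrix with a column vector:
(mvmult '((1 2) (3 4)) '(5 6))   ; => (17 39)
;; scalar (dot) product of two vectors:
(scap '(1 2) '(3 4))             ; => 11
;; componentwise sum of two vectors:
(vplus '(1 2) '(3 4))            ; => (4 6)
```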
Also saw someone else just added it to one of theirs: https://github.com/bbarclay7/bb-emacs/blob/5858823bb033be113...
> (setq avogadro 6.0221367e23) ; Avogadro number
This is now standardized to exactly 6.02214076e23
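In the file's own notation, the up-to-date line would be (the 2019 SI redefinition made the constant exact):

```elisp
(setq avogadro 6.02214076e23)   ; mol^-1, exact since the 2019 SI redefinition
```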
To humans.
There is some information in the Info manual under Elisp about lexical binding, but you can just use `let' to keep variables in a lexical scope.
jigger := 1.5 floz