top | item 39625987

throwaway48r7r | 2 years ago

LLMs solve for the next word. Human intelligence solves for survival, with many types of input: visual, audio, etc. You can't create an AGI if you don't solve for the problems that created human GI.

mitthrowaway2 | 2 years ago

Digital data is all 1s and 0s, whether it encodes words, sounds, or pictures. Why do you think transformers only work for predicting words, when they're already successfully being used for other applications as well?
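The "all 1s and 0s" point can be made concrete: whatever the modality, the data a sequence model sees is just a sequence of integers. A toy sketch (purely illustrative; no real tokenizer or codec is used here):

```python
# Toy illustration: text and image data both reduce to integer
# sequences of the same kind, which is what a transformer consumes.
text = "hi"
text_tokens = list(text.encode("utf-8"))   # bytes of the string: [104, 105]

pixels = bytes([0, 128, 255])              # a made-up 3-pixel grayscale "image"
image_tokens = list(pixels)                # [0, 128, 255]

# From the model's perspective, both are just sequences of small integers.
assert all(isinstance(t, int) for t in text_tokens + image_tokens)
```

Real multimodal models use learned tokenizers and patch embeddings rather than raw bytes, but the interface is the same: integers in, predictions out.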

throwaway48r7r | 2 years ago

I think that, much as the basic Turing machine definition shows computation is possible on a variety of substrates, some kind of intelligence can be created with a whole class of implementations, transformers included. Indeed, the video and image input of LLMs is one of the most exciting use cases.

baq | 2 years ago

They’re trained to optimize guessing the next word. What they solve for to get this good at predicting the next word is an open question with answers hidden in plain sight in the weight blob.
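The training objective being described is next-token cross-entropy. A minimal sketch (toy numbers and a hypothetical `cross_entropy_loss` helper, not any real LLM's code):

```python
import math

def cross_entropy_loss(predicted_probs, actual_next_token):
    # The model assigns a probability to each candidate next token;
    # the loss is the negative log-probability of the token that
    # actually came next. Training pushes this loss down.
    return -math.log(predicted_probs[actual_next_token])

# Toy distribution over possible next words after "the cat sat on the"
probs = {"mat": 0.6, "sofa": 0.3, "moon": 0.1}
loss = cross_entropy_loss(probs, "mat")  # ~0.51; a worse guess costs more
```

What internal machinery the weights develop to drive this loss down is exactly the open question the comment points at.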

keiferski | 2 years ago

Yes, I don’t think AGI (which is an entirely ill-defined concept, but put that aside) will happen until AI is embodied in the physical world.

tavavex | 2 years ago

Why not? As a hypothetical example: if we assume that simulating a human is AGI, and we have some hypothetical space-age magic tech brute-force the problem by simulating every neuron and connection in the brain... why would being "embodied" factor into this?

throwaway48r7r | 2 years ago

Absolutely. The physical world is the input that creates the feedback loop for learning.

I would propose a definition of AGI: "A model capable of affecting the physical world through speech or physical action in a manner indistinguishable from a human."