top | item 45733487

fadedsignal | 4 months ago

I don't think AGI will happen with LLMs. For example, can an LLM drive a car? I know it's a silly question, but it points at a real limitation.

torginus|4 months ago

It can?

If you use 'multimodal transformer' instead of LLM (which most SOTA models are), I don't see any reason a transformer architecture couldn't be trained to drive a car. In fact, I'm fairly sure that's what Tesla and co. are using in their cars right now.

I'm sure self-driving will become good enough to be commercially viable in the next couple of years (with some limitations), but that doesn't mean it's AGI.
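To make the claim concrete: the point is that the attention machinery behind LLMs isn't text-specific, so in principle the same block can map image-patch tokens to control outputs. Below is a toy sketch in numpy; every shape, weight, and the `[steering, throttle]` head are made-up illustrations, not any real system's architecture (Tesla's actual stack is not public).

```python
import numpy as np

# Hypothetical sketch of a "driving policy" attention block.
# All dimensions and weights here are invented for illustration.
rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    # tokens: (n_patches, d_model) embeddings of camera-frame patches,
    # exactly the same scaled dot-product attention used on text tokens.
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return scores @ v

d = 16                                    # toy embedding dimension
patches = rng.normal(size=(64, d))        # 64 patch embeddings from one frame
Wq, Wk, Wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
W_ctrl = rng.normal(size=(d, 2)) * 0.1    # head mapping to [steering, throttle]

attended = self_attention(patches, Wq, Wk, Wv)
pooled = attended.mean(axis=0)            # pool over patches
steering, throttle = np.tanh(pooled @ W_ctrl)  # tanh keeps outputs in [-1, 1]
```

The only thing that changed relative to a language model is the tokenizer (image patches instead of subwords) and the output head (continuous controls instead of a vocabulary softmax); whether training such a model to human-level driving is tractable is the actual open question.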

tsimionescu|4 months ago

There is a vast gulf between "GPT-5 can drive a car" and "a neural network using the transformer architecture can be trained to drive a car". And I see no proof whatsoever that we can, today, train a single model that can both write a play and drive a car. Even less so one that could do both at the same time, as a generally intelligent being should be able to.

If someone wants to claim that, say, GPT-5 is AGI, then it is on them to connect GPT-5 to a car control system and inputs and show that it can drive a car decently well. After all, it has consumed all of the literature on driving and physics ever produced, plus untold numbers of hours of video of people driving.

oldestofsports|4 months ago

Okay, but then can a multimodal transformer do everything an LLM can?

JoelMcCracken|4 months ago

This is something I think about: the state of the art in self-driving cars still makes mistakes that humans wouldn't make, despite all the investment into this specific problem.

This bodes very poorly for AGI in the near term, IMO