Might happen. Or not. Reliable LLM-based systems that interact with a world model are still iffy.
Waymo is an example of a system that uses machine learning, but the machine learning does not directly drive action generation. There's a lot of sensor processing and classifier work that generates a model of the environment, which can be displayed on a screen and compared with the real world. Then there's a part which, given the environment model, generates movement commands. It's unclear how much of that uses machine learning.
Tesla tries to use end-to-end machine learning, and the results are disappointing. There's a lot of "why did it do that?". It's unclear whether even Tesla knows why. Waymo tried end-to-end machine learning, to see if they were missing something, and it was worse than what they have now.
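For concreteness, here's a rough Python sketch of the two architectures being contrasted. Every name in it is made up; this is not Waymo's or Tesla's actual code, just the shape of "modular pipeline with an inspectable environment model" versus "one learned function from sensors to controls":

    # Hypothetical sketch; none of these names come from any real stack.
    from dataclasses import dataclass, field

    @dataclass
    class EnvironmentModel:
        # Inspectable intermediate: can be drawn on a screen and
        # compared against the real world.
        obstacles: list = field(default_factory=list)
        lanes: list = field(default_factory=list)
        ego_speed_mps: float = 0.0

    def perceive(sensor_frames) -> EnvironmentModel:
        # ML-heavy stage: sensor fusion and classifiers build the model.
        return EnvironmentModel()

    def plan(env: EnvironmentModel) -> str:
        # Given the model, emit a movement command. May or may not use
        # ML, but its input is a debuggable artifact either way.
        return "brake" if env.obstacles else "cruise"

    def modular_drive(sensor_frames) -> str:
        # Modular pipeline: failures are attributable to a stage
        # because the interface between stages is observable.
        return plan(perceive(sensor_frames))

    def end_to_end_drive(sensor_frames, policy_net) -> str:
        # End-to-end: one learned function from raw sensors to
        # controls, with nothing in the middle to inspect --
        # hence "why did it do that?".
        return policy_net(sensor_frames)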
I dunno. My comment on this for the last year or two has been this: systems which use LLMs end to end and actually do something seem to be deployed only where the cost of errors is absorbed by the user or customer, not the service operator. LLM errors are mostly treated as an externality dumped on someone else, like pollution.
Of course, when that problem is solved, they'll be ready for management positions.
> The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin.
Is this at all ironic, considering we power modern AI using custom and/or non-general compute rather than general, CPU-based compute?
The "bitter lesson" is extrapolating from ONE datapoint where we were extremely lucky with Dennart scaling. Sorry, the age of silicon magic is over. It might be back - at some point, but for now it's over.
The way things will scale is not limited to optimizing low-level hardware; there's also brute-force investment in the construction of massive data centers, which is absolutely happening.
dartos|11 months ago
I doubt an expert system's accuracy would change if you threw more energy at it, for example.
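To make that concrete with a toy, entirely made-up example: a rule-based expert system's output is a pure function of its hand-written rules, so extra compute or energy buys nothing.

    # Made-up toy rules; the point is only that the answer is fixed by
    # the rules, independent of how much compute runs them.
    RULES = [
        (lambda p: p["temp_c"] > 38.0 and p["cough"], "flu"),
        (lambda p: p["temp_c"] > 38.0, "unexplained fever"),
    ]

    def diagnose(patient):
        for condition, verdict in RULES:
            if condition(patient):
                return verdict
        return "healthy"

    patient = {"temp_c": 38.5, "cough": True}
    # Run it once on a laptop or a thousand times on a supercomputer:
    # same verdict, same accuracy.
    assert all(diagnose(patient) == "flu" for _ in range(1000))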