top | item 42477713


duluca|1 year ago

The first computers cost millions of dollars and filled entire rooms to accomplish what we would now consider simple computational tasks. That same computing power now fits on a fingernail. I don’t get how technologists balk at the cost of experimental tech, or assume current tech will run at the same efficiency for decades to come and melt the planet into a puddle. AGI won’t happen until you can fit several data centers’ worth of compute into a brain-sized vessel, so the thing can move around and process the world in real time. This is all going to take some time, to say the least. Progress is progress.



8n4vidtmkvmk|1 year ago

I thought you were going to say that now we're back to bigger-than-room sized computers that cost many millions just to perform the same tasks we could 40 years ago.

I of course mean we're using these LLMs for a lot of tasks that they're inappropriate for, and a clever manually coded algorithm could do better and much more efficiently.

adwn|1 year ago

> and a clever manually coded algorithm could do better and much more efficiently.

Sure, but how long would it take to implement this algorithm, and would that be worth it for one-off cases?

Just today I asked Claude to create a jq query that looks for objects with a certain value for one field, but which lack a certain other field. I could have spent a long time trying to make sense of jq's man page, but instead I spent 30 seconds writing a short description of what I'm looking for in natural language, and the AI returned the correct jq invocation within seconds.
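For illustration, a query of that shape might look like the following (the field names `.status` and `.email` are made up here, not from the comment):

```shell
# Select objects whose .status is "active" but which lack an .email key.
echo '[{"status":"active","email":"a@x"},{"status":"active"},{"status":"idle"}]' \
  | jq -c '.[] | select(.status == "active" and (has("email") | not))'
# → {"status":"active"}
```

`has("email") | not` is the jq idiom for "this key is absent"; `select` drops everything that doesn't match.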

arthurcolle|1 year ago

Just ask the LLM to solve enough problems (even new problems), cache the best, do inference-time compute for the rest, figure out the best/fastest implementations, and boom, you have new training data for future AIs.
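A minimal sketch of that "solve, cache the best, reuse later" loop. `ask_llm` and `score` are hypothetical stand-ins for a model call and a quality metric, not real APIs:

```python
# Sketch: spend inference-time compute on new problems, cache the best answer,
# and reuse the cache (which doubles as future training data) on repeats.
import hashlib

cache: dict[str, str] = {}  # problem hash -> best known solution


def solve(problem: str, ask_llm, score, attempts: int = 4) -> str:
    key = hashlib.sha256(problem.encode()).hexdigest()
    if key in cache:  # cheap path: a previously solved problem
        return cache[key]
    # Expensive path: sample several candidates, keep the highest-scoring one.
    candidates = [ask_llm(problem) for _ in range(attempts)]
    best = max(candidates, key=lambda s: score(problem, s))
    cache[key] = best
    return best
```

The cached pairs are exactly the (problem, best solution) examples the comment proposes distilling into future models.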

globalise83|1 year ago

The LLMs are now writing their own algorithms to answer questions. Not long before they can design a more efficient algorithm to complete any feasible computational task, in a millionth of the time needed by the best human.

lxgr|1 year ago

> fit several data centers’ worth of compute into a brain-sized vessel, so the thing can move around and process the world in real time

How so? I'd imagine a robot connected to the data center embodying its mind, connected via low-latency links, would have to walk pretty far to get into trouble when it comes to interacting with the environment.

The speed of light is about three orders of magnitude faster than the speed of signal propagation in biological neurons, after all.

byw|1 year ago

The robot brain could be layered so that more basic functions are embedded locally while higher-level reasoning is offloaded to the cloud.

waldrews|1 year ago

6 orders of magnitude if we use 120 m/s vs 300,000 km/s
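As a sanity check on that ratio (assuming ~120 m/s for fast myelinated axons and c ≈ 3×10⁸ m/s):

```python
# Ratio of light-speed signaling to fast biological nerve conduction.
c = 3.0e8        # speed of light in vacuum, m/s
neuron = 120.0   # fast myelinated axon conduction velocity, m/s

ratio = c / neuron
print(f"{ratio:.2e}")  # 2.50e+06, i.e. roughly six orders of magnitude
```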

nopinsight|1 year ago

Many of humans' capabilities are pretrained with massive computing through evolution. Inference results of o3 and its successors might be used to train the next generation of small models to be highly capable. Recent advances in the capabilities of small models such as Gemini-2.0 Flash suggest the same.

Recent research from NVIDIA suggests such an efficiency gain is quite possible in the physical realm as well. They trained a tiny model to control the full body of a robot via simulations.

---

"We trained a 1.5M-parameter neural network to control the body of a humanoid robot. It takes a lot of subconscious processing for us humans to walk, maintain balance, and maneuver our arms and legs into desired positions. We capture this “subconsciousness” in HOVER, a single model that learns how to coordinate the motors of a humanoid robot to support locomotion and manipulation."

...

"HOVER supports any humanoid that can be simulated in Isaac. Bring your own robot, and watch it come to life!"

More here: https://x.com/DrJimFan/status/1851643431803830551

---

This demonstrates that with proper training, small models can perform at a high level in both cognitive and physical domains.

bigprof|1 year ago

> Many of humans' capabilities are pretrained with massive computing through evolution.

Hmm .. my intuition is that humans' capabilities are gained during early childhood (walking, running, speaking .. etc) ... what are examples of capabilities pretrained by evolution, and how does this work?

lumost|1 year ago

The concern here is mainly practicality. The original mainframes did not command startup valuations counted in fractions of the US economy, though they did qualify for billions in investment.

This is a great milestone, but OpenAI will not be successful charging 10x the cost of a human to perform a task.

owenpalmer|1 year ago

> OpenAI will not be successful charging 10x the cost of a human to perform a task.

True, but they might be successful charging 20x for 2x the skill of a human.

BriggyDwiggs42|1 year ago

I wouldn’t expect it to cost 10x in five years, if only because parallel computing still seems to be roughly obeying Moore’s law.

fragmede|1 year ago

How much does AWS charge for compute?

If it can be spun up with Terraform, I bet you they could.

pera|1 year ago

Maybe AGI as a goal is overvalued: If you have a machine that can, on average, perform symbolic reasoning better than humans, and at a lower cost, that's basically the end game, isn't it? You won capitalism.

harrall|1 year ago

Right now I can ask an (experienced) human to do something for me and they will either just get it done or tell me that they can’t do it.

Right now when I ask an LLM… I have to sit there and verify everything. It may have done some helpful reasoning for me but the whole point of me asking someone else (or something else) was to do nothing at all…

I’m not sure you can reliably fulfill the first scenario without achieving AGI. Maybe you can, but we are not at that point, so we don’t know yet.

Existenceblinks|1 year ago

Honestly, it doesn't need to be local. An API some 200 ms away is ok-ish; make it 50 ms and it will be practically usable for the vast majority of interactions.

otabdeveloper4|1 year ago

Intelligence has nothing whatsoever to do with compute.

oefnak|1 year ago

Unless you're a dualist who believes in a magic spirit, I cannot understand how you think that's the case. Can you please explain?

patrickhogan1|1 year ago

Do you think intelligence exists without prior experience? For instance, can someone instantly acquire a skill—like playing the piano—as if downloading it in The Matrix? Even prodigies like Mozart had prior exposure. His father, a composer and music teacher, introduced him to music from an early age. Does true intelligence require a foundation of prior knowledge?