duluca|1 year ago
The first computers cost millions of dollars and filled entire rooms to accomplish what we would now consider simple computational tasks. That same computing power now fits on a fingernail. I don’t get how technologists balk at the cost of experimental tech, or assume current tech will run at the same efficiency for decades to come and melt the planet into a puddle.
AGI won’t happen until you can fit several data centers’ worth of compute into a brain-sized vessel, so the thing can move around and process the world in real time. This is all going to take some time, to say the least. Progress is progress.
8n4vidtmkvmk|1 year ago
I of course mean that we're using these LLMs for a lot of tasks they're inappropriate for, and that a clever manually coded algorithm could do better, and much more efficiently.
adwn|1 year ago
Sure, but how long would it take to implement this algorithm, and would that be worth it for one-off cases?
Just today I asked Claude to create a jq query that looks for objects with a certain value for one field, but which lack a certain other field. I could have spent a long time trying to make sense of jq's man page, but instead I spent 30 seconds writing a short description of what I'm looking for in natural language, and the AI returned the correct jq invocation within seconds.
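For illustration, a jq invocation of the kind described might look like this (the field names and data here are invented; the commenter's actual query isn't shown):

```shell
# Hypothetical data: keep objects whose "type" field is "user" but which
# lack an "email" field entirely.
echo '[{"type":"user","email":"a@b.c"},{"type":"user"},{"type":"group"}]' \
  | jq -c '[.[] | select(.type == "user" and (has("email") | not))]'
# → [{"type":"user"}]
```

`has("email")` tests for the key's presence, which is distinct from testing whether its value is null.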
lxgr|1 year ago
How so? I'd imagine a robot connected to the data center embodying its mind, connected via low-latency links, would have to walk pretty far to get into trouble when it comes to interacting with the environment.
After all, the speed of light is about six orders of magnitude faster than signal propagation in even the fastest biological neurons (~3×10⁸ m/s vs. ~100 m/s).
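To put rough numbers on that, a back-of-envelope comparison (the 100 km robot-to-data-center distance is an assumption, not from the thread):

```python
# Round-trip latency to a data-center "mind" 100 km away over fiber,
# vs. a nerve signal traveling 1 m along a fast myelinated axon.
c_fiber = 2.0e8      # m/s, typical signal speed in optical fiber
distance = 100_000   # m, assumed robot-to-data-center distance
rtt = 2 * distance / c_fiber   # seconds, fiber round trip
nerve = 1.0 / 100.0            # seconds, 1 m at ~100 m/s nerve conduction
print(f"{rtt * 1000:.1f} ms fiber round trip")   # → 1.0 ms
print(f"{nerve * 1000:.1f} ms nerve, one way")   # → 10.0 ms
```

Even with a 100 km detour through a data center, the round trip is an order of magnitude faster than a single meter of nerve.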
nopinsight|1 year ago
Recent research from NVIDIA suggests such an efficiency gain is quite possible in the physical realm as well. They trained a tiny model to control the full body of a robot via simulations.
---
"We trained a 1.5M-parameter neural network to control the body of a humanoid robot. It takes a lot of subconscious processing for us humans to walk, maintain balance, and maneuver our arms and legs into desired positions. We capture this “subconsciousness” in HOVER, a single model that learns how to coordinate the motors of a humanoid robot to support locomotion and manipulation."
...
"HOVER supports any humanoid that can be simulated in Isaac. Bring your own robot, and watch it come to life!"
More here: https://x.com/DrJimFan/status/1851643431803830551
---
This demonstrates that with proper training, small models can perform at a high level in both cognitive and physical domains.
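For a sense of scale, parameter counts in a small fully connected policy network add up quickly; a sketch with invented layer sizes (not HOVER's actual architecture) lands in the same ballpark:

```python
# Parameter count for a hypothetical small MLP policy: weights (in * out)
# plus biases (out) per layer. Layer sizes are illustrative, not HOVER's.
layers = [512, 1024, 768, 512, 69]  # made-up obs -> hidden -> action dims
params = sum(i * o + o for i, o in zip(layers, layers[1:]))
print(params)  # → 1741637, i.e. ~1.7M parameters
```

At 4 bytes per float32 parameter, a model of that size fits in under 10 MB, small enough for an embedded controller.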
bigprof|1 year ago
Hmm... my intuition is that humans' capabilities (walking, running, speaking, etc.) are gained during early childhood. What are examples of capabilities pretrained by evolution, and how does that work?
lumost|1 year ago
This is a great milestone, but OpenAI will not be successful charging 10x the cost of a human to perform a task.
raincole|1 year ago
https://a16z.com/llmflation-llm-inference-cost/
owenpalmer|1 year ago
True, but they might be successful charging 20x for 2x the skill of a human.
fragmede|1 year ago
If it can be spun up with Terraform, I bet you they could.
harrall|1 year ago
Right now when I ask an LLM, I have to sit there and verify everything. It may have done some helpful reasoning for me, but the whole point of asking someone else (or something else) was so that I could do nothing at all…
I’m not sure you can reliably fulfill the first scenario without achieving AGI. Maybe you can, but we’re not at that point, so we don’t know yet.