golol|3 months ago

The gap between high-level and low-level control of robots is closing. Right now, thousands of hours of task-specific training data are being collected and trained on to create models that can control robots to execute specific tasks in specific contexts. This essentially turns the operation of a robot into a kind of video game, where inputs are only needed in a low-dimensional, abstract form, such as "empty the dishwasher" or "repeat what I do" or "put your finger in the loop and pull the string". This will be combined with high-level control agents like SIMA 2 to create useful real-world robots.
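
To make the idea concrete, here is a minimal sketch of what such a low-dimensional interface could look like. The class name, joint count, and shapes are illustrative assumptions, not any vendor's actual API:

    # Illustrative sketch (not a real product API): a task-conditioned
    # policy reduces robot operation to short natural-language commands.
    import numpy as np

    class TaskConditionedPolicy:
        """Maps (command, observation) -> low-level actuator targets."""

        def __init__(self, num_joints: int = 28):
            self.num_joints = num_joints

        def act(self, command: str, camera_frame: np.ndarray) -> np.ndarray:
            # A trained model would fuse the language command with vision
            # here; this stub just returns a zero action of the right shape.
            return np.zeros(self.num_joints)

    policy = TaskConditionedPolicy()
    frame = np.zeros((480, 640, 3), dtype=np.uint8)  # dummy camera input
    torques = policy.act("empty the dishwasher", frame)  # one control step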

catgary|3 months ago

I work on a much easier problem (physics-based character animation) after spending a few years in motion planning, and I haven’t really seen anything to suggest that the problem is going to be solved any time soon by collecting more data.

glial|3 months ago

https://danijar.com/project/dreamer4/

"We present Dreamer 4, a scalable agent that learns to solve control tasks by imagination training inside of a fast and accurate world model. ... By training inside of its world model, Dreamer 4 is the first agent to obtain diamonds in Minecraft purely from offline data, aligning it with applications such as robotics where online interaction is often impractical."

In other words, it learns by watching, i.e. by having more data of a certain type.
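
A rough sketch of what "imagination training" means in this line of work, assuming stand-in encoder, dynamics, and reward functions. The names and shapes here are illustrative, not Dreamer 4's actual code:

    import numpy as np

    rng = np.random.default_rng(0)

    def encode(obs):
        # Observation -> latent state (stand-in for a learned encoder).
        return np.array([obs.mean()])

    def world_model(z, a):
        # Predict the next latent state (stand-in for learned dynamics).
        return z + 0.1 * a

    def reward_model(z):
        # Predicted reward in latent space.
        return float(z.sum())

    def policy(z):
        # Actor to be improved; random here for illustration.
        return rng.normal(size=1)

    # Offline data only: observations come from logged trajectories.
    logged_obs = [rng.normal(size=8) for _ in range(16)]

    for obs in logged_obs:
        z = encode(obs)
        imagined_return = 0.0
        for _ in range(15):  # roll out entirely inside the world model
            a = policy(z)
            z = world_model(z, a)
            imagined_return += reward_model(z)
        # A real agent would update the actor/critic from imagined_return;
        # the point is that no online environment interaction is needed.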

onlyrealcuzzo|3 months ago

Is physics-based character animation an easier problem?

Almost any problem can be really hard depending on the number of 9s.

Maybe there's more room for error in a lot of robotics applications than for your physics-based character animation?

golol|3 months ago

I am pushing the optimism a bit, of course, but we can already see many demos of robots doing basic tasks, and it seems to be quite easy nowadays to do this with the data-driven approach.

wordpad|3 months ago

Why? The physics of large, discrete objects (such as a robot) isn't very complicated.

I thought it's fast, accurate OCR that's holding everything back.

jcims|3 months ago

I just grabbed a beer about ten minutes ago.

Next to zero cognition was involved in the process. There's some kind of hierarchy of thought in the way my mind/brain/body processed the task. I did cognitively decide to get the beer, but I was focused on something at work and continued to think about that in great detail as the rest of me did all of the motion planning and articulation required to get up, walk through two doorways, open the door on the fridge, grab a beer, close the door, walk back and crack the beer as I was sitting down.

Basically zero thought in that entire sequence.

I think what's happening today with all of this stuff is ultimately like me trying to play Für Elise on piano. I don't have a piano. I don't know how to play one. I'm going to be all brain in that entire process and it's going to be awful.

We need to learn how to use the data we have to train these layers of abstraction that allow us to effectively compress tons of sophistication into 'get a beer'.
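
A toy sketch of that layering, where one high-level decision expands into a canned sequence of low-level skills. Every name here is invented for illustration:

    # One "cognitive" decision fires once; cheap lower layers run the
    # whole sequence with no further high-level involvement.
    SKILLS = {
        "get a beer": ["stand_up", "walk_to_kitchen", "open_fridge",
                       "grasp_bottle", "close_fridge", "walk_back",
                       "sit_down"],
    }

    def motor_primitive(skill: str) -> None:
        # Stand-in for a trained low-level controller running at high
        # frequency, analogous to the motion the comment describes.
        print(f"executing {skill}...")

    def act_on_intent(intent: str) -> None:
        # A single high-level intent expands into many low-level steps.
        for skill in SKILLS[intent]:
            motor_primitive(skill)

    act_on_intent("get a beer")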

Vegenoid|3 months ago

> This essentially turns the operation of a robot into a kind of video game, where inputs are only needed in a low-dimensional, abstract form, such as "empty the dishwasher" or "repeat what I do" or "put your finger in the loop and pull the string"

I don't really understand. How is this like a video game? What about these inputs is "low-dimensional"? How does what you describe interact with "high-level control agents like SIMA 2"? Doesn't SIMA 2 translate inputs like "empty the dishwasher" into key presses or interaction with some other direct control interface?

golol|3 months ago

Say you want to steer an android to walk forward. You need to provide angles or forces or voltages for all the actuators at every moment in time, so that's high-dimensional. If you already have certain control models, neural or not, you can instead just press forward on a joystick. So what I mean by low-dimensional input is when someone steers a robot using a controller. That's got like, idk, 10-20 dimensions max. And my understanding is that SIMA 2, when it plays No Man's Sky or whatever, basically provides such low-dimensional controls, like a video game. Companies like Figure and Tesla are training models that can do tasks like folding clothes or emptying the dishwasher given low-dimensional inputs like "move in this direction and tidy up". SIMA has the understanding to provide these inputs.
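
A back-of-envelope sketch of that gap, using assumed joint counts and control rates rather than any specific robot's numbers:

    # Joint count and rates below are assumed, illustrative numbers.
    num_joints = 28          # actuators on a hypothetical humanoid
    control_rate_hz = 1000   # low-level commands per second per actuator
    joystick_dims = 2        # forward/back + left/right
    joystick_rate_hz = 60    # how often the stick is sampled

    raw_channel = num_joints * control_rate_hz           # 28,000 values/s
    joystick_channel = joystick_dims * joystick_rate_hz  # 120 values/s

    print(f"raw control: {raw_channel} values/s, "
          f"joystick: {joystick_channel} values/s")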