item 11383044

Google Wants to Solve Robotic Grasping by Letting Robots Learn for Themselves

125 points | Osiris30 | 10 years ago | spectrum.ieee.org

67 comments

[+] ryanjshaw|10 years ago|reply
I always thought this was the only way to build a true AI -- build a 'virtual baby' that has to go through much the same experiences as a human baby. I'm sure this idea has been explored somewhere already - anybody have any pointers?
[+] jackhack|10 years ago|reply
This is precisely the approach taken at MIT under Prof. Rod Brooks: "Within our group, Marjanovic, Scassellati & Williamson(1996) applied a similar bootstrapping technique to enable the robot* to learn to point to a visual target. " *named Cog, short for "Cognition"

http://people.csail.mit.edu/brooks/papers/CMAA-group.pdf

If I may paraphrase, his model is biologically inspired -- he believes that hierarchical layers of behaviours, the lack of a central planning model (distributed processing), and physical and temporal placement in the world (rather than abstractions of the world, or observe/process/react loops) are essential to the formation of a truly intelligent machine.

[+] peterwwillis|10 years ago|reply
Physical babies take a long time to adapt and form complex connections (http://www.urbanchildinstitute.org/why-0-3/baby-and-brain). There are probably a lot of shortcuts one could take to train it for specific tasks.

In 'The Matrix' they had developed programs that could be uploaded to a physical brain to essentially pre-wire the brain with complex synaptic connections (or so I imagine was the effect). That would be a lot more efficient than waiting 3 years just to get to the point of trimming useless connections. We may have to 'grow' a virtual brain via training to develop the basic platform, but then pre-load it with operations to save time.

[+] alan_cx|10 years ago|reply
If that were to happen, I would suggest that an artificial "soul" could or would be created. Once you have that on your hands, things begin to get interesting. Ethics, religion, law, and so on will have one hell of a job on their hands. Especially when you consider that there would be immediate and obvious military applications.
[+] duaneb|10 years ago|reply
> true AI

What does this even mean?

[+] jing|10 years ago|reply
Are physics engines not yet accurate enough to enable "virtual" pre-training (or even full training) of the networks, including simulated lighting conditions, etc.? If they are, exclusively using physical robots seems somewhat inefficient.
[+] chriswarbo|10 years ago|reply
Closest thing I can think of is Hod Lipson's self-modelling robots: http://www.creativemachineslab.com/self-modeling.html

Their system evolves a virtual body which is evaluated by comparing its predicted behaviour (e.g. if motor A is rotated by X degrees, sensor B should get response Y) to real physical movements (moving motor A and reading sensor B). Once an accurate virtual body has been made, it's used to evaluate a bunch of (again, evolved) movement styles in simulation. Once an efficient style has been found, it's used to control the physical motors on the robot.
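That evaluate-model-against-reality loop can be caricatured in a few lines. A toy Python sketch, with a single scalar "gain" standing in for the evolved body model (everything here is illustrative, not Lipson's actual system):

```python
import random

random.seed(0)

# Toy stand-in for the physical robot: sensor response is 2.0 * motor angle.
def real_robot(angle):
    return 2.0 * angle

# A candidate "self-model" is just a guessed gain here; the real work evolves
# full morphology, but the loop structure is the same.
def model_error(gain, trials):
    return sum((gain * a - real_robot(a)) ** 2 for a in trials)

def evolve_self_model(generations=50, pop_size=20):
    # Record (action, sensor) trials on the "real" robot, then evolve models
    # that best predict those recordings.
    trials = [random.uniform(-1, 1) for _ in range(10)]
    pop = [random.uniform(0, 4) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: model_error(g, trials))
        survivors = pop[: pop_size // 2]
        pop = survivors + [g + random.gauss(0, 0.1) for g in survivors]
    return min(pop, key=lambda g: model_error(g, trials))

best = evolve_self_model()
# best converges toward the true gain of 2.0
```

Once the self-model is accurate, candidate gaits would be scored against it in simulation instead of on hardware.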

Also related, their lab has a "universal gripper" made out of a balloon filled with coffee granules: http://creativemachines.cornell.edu/positive_pressure_grippe...

[+] louprado|10 years ago|reply
Hmmm... does anyone know if Grand Theft Auto has an API? I would like to pre-train my autonomous vehicle controller before connecting it to an actual car.
[+] bgalbraith|10 years ago|reply
Ideally, yes, we want to pre-train in a virtual environment using as close to the real model robot as possible. I worked on such a problem as part of my PhD research on mobile robots using the Webots simulator (https://www.cyberbotics.com/overview) as my virtual environment.

In my case, I was working on biologically-inspired models for picking up distant objects. It's impractical to tune hyperparameters in hardware, so you need to be able to create a virtual version that gets you close enough. Once you can demonstrate success there, you then have to move to the physical robot, which introduces several additional challenges: 1) imperfections in your actual hardware behavior vs idealized simulated ones, 2) real-world sensor noise and constraints, 3) dealing with real-world timing and inputs instead of a clean, lock-step simulated environment, 4) having a different API to poll sensors/actuate servos between the virtual and hardware robots, and 5) ensuring that your trained model can be transferred effectively between your virtual and hardware robot control system.
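Challenge 4 in particular can be softened with a thin abstraction layer so the controller never sees which backend it is driving. A minimal Python sketch (all class and method names are hypothetical, not the Webots API):

```python
from abc import ABC, abstractmethod

class RobotInterface(ABC):
    """One API for both the simulated model and the physical robot."""

    @abstractmethod
    def read_sensors(self) -> dict:
        ...

    @abstractmethod
    def set_servo(self, name: str, angle: float) -> None:
        ...

class SimulatedRobot(RobotInterface):
    """Toy adapter; a real one would wrap the simulator's own API calls."""

    def __init__(self):
        self._angles = {}

    def read_sensors(self):
        # Here we simply echo the commanded state back as "sensor" readings.
        return dict(self._angles)

    def set_servo(self, name, angle):
        self._angles[name] = angle

def controller_step(robot: RobotInterface):
    # The controller only depends on RobotInterface, so the same trained
    # policy can run against the virtual or the hardware adapter unchanged.
    sensors = robot.read_sensors()
    robot.set_servo("gripper", 0.5 if not sensors else 0.0)

sim = SimulatedRobot()
controller_step(sim)
```

A `HardwareRobot` adapter implementing the same two methods would then address challenge 5 by letting the identical controller code drive the physical servos.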

I was able to solve these issues for my particular constrained research use case, and was pretty happy with the results. You can see a demo reel of the robot here: https://www.youtube.com/watch?v=EoIXFKVGaXw

[+] tgflynn|10 years ago|reply
That's a very interesting question. My guess is that the physics of grabbing things, especially non-rigid things, is very messy and difficult to simulate. It would be great if someone here were able to give a detailed answer to this question though.
[+] Animats|10 years ago|reply
Gazebo with Mike Sherman's physics engine might be good enough. DARPA paid to get a decent physics engine into Gazebo; the ones from games were never quite right.
[+] bliti|10 years ago|reply
There are things you can't simulate (yet). In my experience it's beneficial to run real live testing to gather data about individual parts themselves. For example, I had a robot's navigation fail when it encountered a certain type of water container (one gallon type in a given color found in US supermarkets). Like kissing, you can't replace the real thing.
[+] Animats|10 years ago|reply
This is the bin-picking problem, which has been worked on since the 1980s. For objects of known shape, it's more or less solved.[1] The general case is still a problem. It's good to see Google making progress with this.

[1] https://www.youtube.com/watch?v=TU71MtDC-4E

[+] smegel|10 years ago|reply
That sounds like...machine learning!
[+] basicplus2|10 years ago|reply
Steve Grand... this is the man for the job.

I can't recommend his book "Growing Up with Lucy" enough.

[+] logicallee|10 years ago|reply
in the future we will fondly remember the simple times of the 2010s, when you could "solve x" ... by just "letting robots learn for themselves..."
[+] effry-much|10 years ago|reply
It's not a good idea to give robots some form of level III consciousness.
[+] forgotAgain|10 years ago|reply
So how does the robot know it's supposed to pick up the object?
[+] jonnycowboy|10 years ago|reply
Reinforcement learning, i.e. a bigger score/reward the more objects are correctly grasped.
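As a toy sketch of that idea: an epsilon-greedy bandit over a few candidate grasp poses, with reward 1 for a successful grasp (the poses and success probabilities here are made up, and the real system learns from camera input rather than a fixed pose list):

```python
import random

random.seed(0)

# Hypothetical per-pose grasp success probabilities; pose 2 works best.
SUCCESS_PROB = [0.1, 0.3, 0.8]

def attempt_grasp(pose):
    # Reward is simply whether the object was grasped.
    return 1.0 if random.random() < SUCCESS_PROB[pose] else 0.0

def train(episodes=2000, epsilon=0.1, alpha=0.1):
    q = [0.0] * len(SUCCESS_PROB)  # estimated success rate per grasp pose
    for _ in range(episodes):
        # Epsilon-greedy: mostly exploit the best-known pose, sometimes explore.
        if random.random() < epsilon:
            pose = random.randrange(len(q))
        else:
            pose = max(range(len(q)), key=q.__getitem__)
        reward = attempt_grasp(pose)
        q[pose] += alpha * (reward - q[pose])  # incremental value update
    return q

q = train()
# q[2] ends up highest, so the learned policy favors the best grasp pose
```

The estimates converge toward the true success rates, so the greedy choice settles on the most reliable grasp.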