I always thought this was the only way to build a true AI -- build a 'virtual baby' that has to go through much the same experiences as a human baby. I'm sure this idea has been explored somewhere already - anybody have any pointers?
This is precisely the approach taken at MIT under Prof. Rod Brooks: "Within our group, Marjanovic, Scassellati & Williamson (1996) applied a similar bootstrapping technique to enable the robot* to learn to point to a visual target."
*named Cog, short for "Cognition"
If I may paraphrase, his model is biologically inspired -- he holds that hierarchical layers of behaviours, the lack of a central planning model (distributed processing), and physical and temporal placement in the world (rather than abstractions of the world, or observe/process/react loops) are essential to the formation of a truly intelligent machine.
I've come across some curiosity-driven skill acquisition research for robots that is similar to this. Below is a link to one of the articles; more related work can be found from the same author.
In 'The Matrix' they had developed programs that could be uploaded to a physical brain to essentially pre-wire the brain with complex synaptic connections (or so I imagine was the effect). That would be a lot more efficient than waiting 3 years just to get to the point of trimming useless connections. We may have to 'grow' a virtual brain via training to develop the basic platform, but then pre-load it with operations to save time.
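For what it's worth, the "grow once, then pre-load" step has a mundane analogue in how trained network weights are saved and restored today. A toy sketch in plain Python -- the training task, file name, and numbers are all invented for illustration:

```python
import json, os, random, tempfile

rng = random.Random(0)

# "Grow" a brain the slow way: fit w in y = w*x to data from y = 2x
# (a toy stand-in for the long developmental training phase).
w = rng.gauss(0, 1)
for _ in range(200):
    x = rng.uniform(-1, 1)
    w -= 0.1 * 2 * (w * x - 2 * x) * x  # gradient step on (w*x - 2x)^2

# "Pre-load" a fresh brain: write the learned weights out once, then any
# number of new brains can start from them instead of re-training.
path = os.path.join(tempfile.gettempdir(), "pretrained.json")
with open(path, "w") as f:
    json.dump({"w": w}, f)

with open(path) as f:
    w_new = json.load(f)["w"]

print(round(w_new * 3.0, 2))  # a pre-loaded brain predicting y for x = 3
```

The expensive part happens once; every subsequent "brain" skips straight to the trained state.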
If that were to happen, I would suggest that an artificial "soul" could or would be created. Once you have that on your hands, things begin to get interesting. Ethics, religion, law, and so on will have one hell of a job on their hands. Especially when you consider that there would be immediate and obvious military applications.
Are physics engines not yet accurate enough to enable "virtual" pre-training / full training of the networks, lighting conditions, etc? If they are, exclusively using physical robots seems somewhat inefficient.
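One common hedge against simulators that are *not* quite accurate enough is domain randomization: randomize lighting, friction, sensor noise, etc. per training episode, so the policy can't overfit any single (inevitably wrong) simulation. A toy sketch, with all parameter names and ranges invented for illustration:

```python
import random

def make_randomized_episode(rng):
    # Sample a fresh "world" for each training episode so the learner
    # never sees the same imperfect simulation twice.
    return {
        "light_intensity": rng.uniform(0.2, 1.0),   # lighting conditions
        "friction":        rng.uniform(0.4, 1.2),   # surface physics
        "sensor_noise_sd": rng.uniform(0.0, 0.05),  # camera/IMU noise
    }

rng = random.Random(42)
episodes = [make_randomized_episode(rng) for _ in range(1000)]

# A policy trained across all of these is more likely to transfer to the
# one real world, which hopefully lies inside the sampled ranges.
print(len(episodes))
```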
Their system evolves a virtual body which is evaluated by comparing its predicted behaviour (e.g. if motor A is rotated by X degrees, sensor B should get response Y) to real physical movements (moving motor A and reading sensor B). Once an accurate virtual body has been made, it's used to evaluate a bunch of (again, evolved) movement styles in simulation. Once an efficient style has been found, it's used to control the physical motors on the robot.
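That two-stage loop (evolve a self-model against real measurements, then evolve behaviours inside the model) can be sketched with a toy evolutionary search. The linear motor-to-sensor relationship and all numbers below are invented for illustration, not the lab's actual method:

```python
import random

rng = random.Random(1)

# Hidden "physical robot": rotating motor A by x degrees moves sensor B
# by true_gain * x. The learner never sees true_gain directly.
true_gain = 1.7
def real_robot(angle):
    return true_gain * angle + rng.gauss(0, 0.01)  # noisy real reading

# Stage 1: evolve a virtual body (here just one gain parameter) so its
# predictions match a handful of real motor/sensor measurements.
trials = [(a, real_robot(a)) for a in (10, -20, 35)]
def model_error(gain):
    return sum((gain * a - s) ** 2 for a, s in trials)

pop = [rng.uniform(0, 5) for _ in range(20)]
for _ in range(100):
    pop.sort(key=model_error)                          # keep the best half,
    pop = pop[:10] + [g + rng.gauss(0, 0.1) for g in pop[:10]]  # mutate it
best_gain = min(pop, key=model_error)

# Stage 2: search for a movement in simulation only -- e.g. the angle that
# drives sensor B to a target reading -- then send it to the real motors.
target = 51.0
best_angle = min((rng.uniform(-60, 60) for _ in range(5000)),
                 key=lambda a: abs(best_gain * a - target))
print(round(best_gain, 2), round(best_angle, 1))
```

The point of stage 1 is that real-robot trials are expensive, so you spend a few of them calibrating a model, then do the bulk of the search for free in simulation.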
Hmmm... does anyone know if Grand Theft Auto has an API? I would like to pre-train my autonomous vehicle controller before connecting it to an actual car.
Ideally, yes, we want to pre-train in a virtual environment using as close to the real model robot as possible. I worked on such a problem as part of my PhD research on mobile robots using the Webots simulator (https://www.cyberbotics.com/overview) as my virtual environment.
In my case, I was working on biologically-inspired models for picking up distant objects. It's impractical to tune hyperparameters in hardware, so you need to be able to create a virtual version that gets you close enough. Once you can demonstrate success there, you then have to move to the physical robot, which introduces several additional challenges: 1) imperfections in your actual hardware behavior versus the idealized simulated one, 2) real-world sensor noise and constraints, 3) dealing with real-world timing and inputs instead of a clean, lock-step simulated environment, 4) different APIs for polling sensors and actuating servos on the virtual and hardware robots, and 5) ensuring that your trained model can be transferred effectively between the virtual and hardware robot control systems.
I was able to solve these issues for my particular constrained research use case, and was pretty happy with the results. You can see a demo reel of the robot here: https://www.youtube.com/watch?v=EoIXFKVGaXw
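Challenge 4 above (different APIs between the virtual and hardware robots) is commonly handled with a thin abstraction layer, so the same controller code can drive either backend. A minimal sketch; all class and method names here are invented for illustration:

```python
from abc import ABC, abstractmethod

class RobotBackend(ABC):
    """One interface implemented by both the simulator and the hardware driver."""
    @abstractmethod
    def read_sensor(self, name: str) -> float: ...
    @abstractmethod
    def set_motor(self, name: str, value: float) -> None: ...

class SimulatedRobot(RobotBackend):
    """Stand-in for e.g. a Webots binding; here just a lock-step toy model."""
    def __init__(self):
        self._motors = {"left": 0.0, "right": 0.0}
        self._pos = 0.0
    def read_sensor(self, name):
        # Fake odometry: position integrates average wheel speed per tick.
        self._pos += sum(self._motors.values()) / 2 * 0.1
        return self._pos
    def set_motor(self, name, value):
        self._motors[name] = value

def controller_step(robot: RobotBackend, target: float) -> float:
    # The controller sees only the interface, so the same code can later
    # drive the physical robot once a hardware backend is written.
    error = target - robot.read_sensor("odometry")
    robot.set_motor("left", 0.5 * error)
    robot.set_motor("right", 0.5 * error)
    return error

sim = SimulatedRobot()
for _ in range(20):
    err = controller_step(sim, target=1.0)
print(round(err, 3))
```

A hardware backend implementing the same two methods could then be swapped in without touching `controller_step`, which addresses the API mismatch (though not, of course, the noise and timing issues).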
That's a very interesting question. My guess is that the physics of grabbing things, especially non-rigid things, is very messy and difficult to simulate. It would be great if someone here were able to give a detailed answer to this question though.
Gazebo with Mike Sherman's physics engine might be good enough. DARPA paid to get a decent physics engine into Gazebo; the ones from games were never quite right.
There are things you can't simulate (yet). In my experience it's beneficial to run real live testing to gather data about individual parts themselves. For example, I had a robot's navigation fail when it encountered a certain type of water container (one gallon type in a given color found in US supermarkets). Like kissing, you can't replace the real thing.
This is the bin-picking problem, which has been worked on since the 1980s. For objects of known shape, it's more or less solved.[1] The general case is still a problem. It's good to see Google making progress with this.
jackhack | 10 years ago:
http://people.csail.mit.edu/brooks/papers/CMAA-group.pdf
cvarjas | 10 years ago:
https://scholar.google.com/citations?view_op=view_citation&h...
erichocean | 10 years ago:
BabyX First Words https://vimeo.com/103501130
Flexible Muscle-Based Locomotion for Bipedal Creatures https://vimeo.com/79098420
duaneb | 10 years ago:
What does this even mean?
chriswarbo | 10 years ago:
Also related, their lab has a "universal gripper" made out of a balloon filled with coffee granules: http://creativemachines.cornell.edu/positive_pressure_grippe...
acd | 10 years ago:
Brett robot folding clothes. https://www.youtube.com/watch?v=Thpjk69h9P8
Animats | 10 years ago:
[1] https://www.youtube.com/watch?v=TU71MtDC-4E
basicplus2 | 10 years ago:
I can't recommend his book "Growing Up with Lucy" enough.