All of the examples in these videos are cherry-picked. Ask anyone working on humanoid robots today: almost everything you see here, if repeated 10 times, will enter a failure mode, because the happy path is so narrow. There should really be benchmarks where you invite robots from different companies, ask them beforehand about their capabilities, and then create an environment that is within those capabilities but was not used in the training data; then you would see the real failure rate. These things are currently not ready for anything besides tech demos. Most of the training is done in simulations that approximate physics, and the rest (almost everything they do with the hands) is done manually by humans using joysticks. Failure rates are staggering.
wongarsu|4 months ago
I'm not sure that task needs a humanoid robot, but the ability to grab and manipulate all those packages and recover from failures is pretty good
1: https://x.com/adcock_brett/status/1931391783306678515
aDyslecticCrow|4 months ago
An industrial robot arm with air-powered suction cups would do the trick... https://bostondynamics.com/products/stretch/ ...
... So the task they work best at is the one that already has cheaper, better robots purpose-built for it.
Animats|4 months ago
An obvious application, if this robot could do it, is retail store shelf restocking. That's a reasonably constrained pick and place task, some mobility is necessary, and the humanoid form is appropriate working in aisles and shelves spaced for humans. How close is that?
It's been tried before. In 2020.[1] And again in 2022.[2] That one runs on a track, is closer to a traditional industrial robot, and is used by 7-11 Japan.
Robots that just cruise around stores and inspect the shelves visually are in moderately wide use. They just compare the shelf images with the planogram; they don't handle the merchandise. So there are already systems to help plan the restocking task.
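That shelf-vs-planogram comparison is essentially a diff between the planned layout and what the camera sees. A minimal sketch, with hypothetical data shapes and SKU names (not any real vendor's format):

```python
# Hypothetical sketch of the planogram check described above: compare what was
# detected on a shelf against the planned layout and emit a restock list.
# SKU names and the dict-of-counts layout are illustrative assumptions only.

def restock_list(planogram: dict[str, int], observed: dict[str, int]) -> dict[str, int]:
    """Return items whose observed facings fall short of the planogram."""
    return {
        sku: planned - observed.get(sku, 0)
        for sku, planned in planogram.items()
        if observed.get(sku, 0) < planned
    }

planogram = {"cola-355ml": 6, "chips-bbq": 4, "soap-bar": 3}
observed = {"cola-355ml": 2, "chips-bbq": 4}  # soap shelf is empty
print(restock_list(planogram, observed))  # {'cola-355ml': 4, 'soap-bar': 3}
```

The point of the comment stands either way: the diff is the easy part, and today's robots only do this read-only half, leaving the actual merchandise handling to humans.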
Technical University Delft says their group should be able to do this in five years.[3] (From when? No date on the press release.)
[1] https://www.youtube.com/watch?v=cHgdW1HYLbM
[2] https://blogs.nvidia.com/blog/telexistence-convenience-store...
[3] https://www.tudelft.nl/en/stories/articles/shelf-stocking-ro...
martythemaniak|4 months ago
https://rodneybrooks.com/why-todays-humanoids-wont-learn-dex...
In short, he makes the case that, unlike text and images, human dexterity is based on sensory inputs that we barely understand and that these robots don't have, and that it will take a long time to get the right sensors in, record the right data, and only then train them to the level of a human. He is very skeptical that they can learn from video-only data, which is what the companies are doing.
smath|4 months ago
The essay was long, so I can't claim I read it in detail. One question in my mind is whether humanoids need to do dexterity the same way that humans do. Yes, they don't have skin and tiny receptors, but maybe there is another way to develop dexterity?
skilled|4 months ago
Indeed, all the videos/examples are marketing pieces.
I would love to see a video like this "Logistics"[0] one, that shows this new iteration doing some household tasks. There is no way that it's not clunky and prone to all kinds of accidents and failures. Not that it's a bad thing - it would simply be nice to see.
Maybe they will do another video? Would love that.
[0]: https://www.youtube.com/watch?v=lkc2y0yb89U
lossolo|4 months ago
"Building Figure won’t be an easy win; it will require decades of commitment and ingenuity."
"Our focus is on what we can achieve 5, 10, 20+ years from now, not the near-term wins."
At least it's not Musk's forever "next year".
kibwen|4 months ago
We are nowhere near the same point for autonomous robots, and it's not even funny. To continue using the internet as an analogy for LLMs, we are pre-ARPANET, pre-ASCII, pre-transistor. We don't even have the sensors that would make safe household humanoid robots possible. Any theater from robot companies about trying to train a neural net based on motion capture is laughably foolish. At the current rate of progress, we are more than decades away.
hadlock|4 months ago
If you can make it look believable on camera for 15 seconds under controlled studio conditions... it's probable you can do it autonomously in 10-15 years. I don't think anyone is going to be casually buying these for their house by this time next year, but it certainly demonstrates what is realistically possible.
If they can provably make these things safe, it will have huge implications for in-home care in advanced age, where instead of living in an assisted-living home at $huge expense for 20+ years, you might be able to live on your own for most of that time.
I am cautiously optimistic.
jcims|4 months ago
Neural networks for motion control are very clearly producing some incredible capabilities in a relatively short amount of time vs. the more traditional control hierarchies used in something like Boston Dynamics' robots. Look at Unitree's G1:
https://www.youtube.com/shorts/mP3Exb1YC8o
https://www.youtube.com/watch?v=bPSLMX_V38E
It's like an agile idiot: very physically capable, but with no purpose.
The next domain is going to be incorporating goals, intent, and short- and long-term chains of causality into the model, and for that it seems we're presently missing quite a bit of usable training data. That will clearly evolve over time, as will the fidelity of simulations that can be used to train the model and the learned experience of deployed robots.
dust42|4 months ago
The video shows several glitches. From the comments:
Also, many of the packages on the left are there throughout the video. But then, I think a lot of this can be solved in software, and having seen how LLMs have advanced in the last few years, I'd not be surprised to see these robots become useful in 5 years.
godelski|4 months ago
Is it supposed to be taking packages and placing them label face down?
I cannot understand how a robot doing this is cheaper than a second scanner so you can read the label face down or face up. I mean you could do that with a mirror.
But I'm not convinced it is even doing that. Several packages are already "label side down" and it just moves them along. Do those packages even have labels? Clearly the behavior learned is "label not on top", not "label side down". No way is that the intended behavior.
If the bar code is the issue, then why not switch to a QR code or some other format? There's not much information you need in shipping so the QR code can have lots of redundancy, making it readable from many different angles and even if significantly damaged.
The video description also says "approaching human-level dexterity and speed". No way. I'd wager I could do this task at least 10x its speed, if not 20x. And that I'd do it better! I mean I watched a few minutes at 2x speed and man is it slow. Sure, this thing might be able to run 24/7 without breaks, but if I'm running 10-20x faster then what's that matter? I could just come in a few hours a day and blow through its quota. I'd really like to see an actual human worker for comparison.
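The break-even arithmetic behind that wager is simple (illustrative numbers from the comment, not measurements): if the robot runs 24 hours a day but a human is k times faster, the human matches the robot's daily output in 24/k hours.

```python
# Back-of-envelope check of the "10-20x faster" claim above.
# These speedup factors are the commenter's guesses, not measured values.
def human_hours_to_match(robot_hours_per_day: float, speedup: float) -> float:
    """Hours per day a human needs to match a robot running robot_hours_per_day."""
    return robot_hours_per_day / speedup

print(human_hours_to_match(24, 10))  # 2.4 hours/day at 10x speed
print(human_hours_to_match(24, 20))  # 1.2 hours/day at 20x speed
```

So even against a 24/7 robot, a 10x-faster human clears the same quota in under a two-and-a-half-hour shift, which is the commenter's point.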
But if we did want something to do this very narrow task 24/7, I'm pretty sure there are a hundred different cheaper ways to do it. If there aren't, then it is because there are some edge cases that are pretty important, and without knowing those we can't actually properly evaluate this video. Besides, this video seems like a pretty simple, idealized case. I'm not sure what an actual Amazon sorting process looks like, but I suspect not like this.
Regardless, the results look pretty cool and I'm pretty impressed with Figure even if it is an over-simplified case.
JKCalhoun|4 months ago
…and have a surprise dance-off.
WanderPanda|4 months ago
https://youtu.be/nmEy1_75qHk
They for sure did not anticipate that the user would backflip into their robot and knock it (and himself) out :D
more_corn|4 months ago
You can control the happy path when the whole thing is your box.
robots0only|4 months ago
The current best neural networks only have around 60% success rates for short-horizon tasks (think 10-20 seconds, e.g. pick up an apple). That is why there are so many cut motions in this video. The future will be awesome, but it will take time; a lot of research still needs to happen (e.g. robust hands, tactile sensing, how to even collect large-scale data, RL).
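A rough sketch of why those cuts follow from a ~60% per-step success rate, assuming independent failures (an oversimplification, but it shows the compounding):

```python
# If each short-horizon sub-task succeeds with probability p_step and failures
# are independent, a chore composed of n such steps succeeds with p_step**n.
# The 0.6 figure comes from the comment above; independence is an assumption.
def chain_success(p_step: float, n_steps: int) -> float:
    """Probability that n_steps consecutive sub-tasks all succeed."""
    return p_step ** n_steps

print(chain_success(0.6, 1))  # 0.6   -- one pick-up-apple-style step
print(chain_success(0.6, 5))  # ~0.078 -- a five-step chore almost always fails
```

That is exactly why uncut long-horizon footage is so rare: chaining even a handful of 60%-reliable steps drives the end-to-end success rate into single digits.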
brailsafe|4 months ago
Perhaps this is a bit pedantic, but what about the probable eventual proliferation of useful humanoid robots will make the future awesome? What does an awesome future look like compared to today, to you?
tamimio|4 months ago
As someone who worked in the robotics industry: 90% of the demos and videos are cherry-picked, or even blatantly fake. That's why, for any new robot on the market, my criterion is: can I buy it? If it's affordable and a consumer can buy it and find it useful in day-to-day life, then the robot is useful and has potential; otherwise, it's just an investor-money-grab PR hype.
aowie|4 months ago
The fabric wrap is idiotic. Insanely stupid. Let's have an expensive fabric-covered robot wash dishes covered in food. Genius. It's a good thing those "dirty dishes" were already perfectly clean. I doubt this machine could handle anything more. Put it in a real commercial kitchen and have it scrape oven pans and I'll be impressed.
I'm so glad I left robotics. I don't want to have anything to do with this very silly bubble.