bglazer | 2 months ago
I do find it interesting that they state each task is done with a fine-tuned model. I wonder whether that's a limitation of the dataset their current foundation model is trained on (which is what I think they're suggesting in the post), or whether it reflects something more fundamental about robotics tasks. It reminds me of LLMs a few years ago, when fine-tuning was more prevalent. I don't follow LLM training methodology closely, but my impression is that the bulk of recent improvements have come from better RL post-training and inference-time reasoning.
Obviously they're pursuing RL, and I'm not sure spending more tokens at inference time would even help for fine manipulation like this, notwithstanding the latency problems it would introduce.
So maybe the need for fine-tuning goes away with a better foundation model, as they suggest? I hope this doesn't point toward more fundamental limitations on robot learning with the current VLA foundation model architectures.
ACCount37 | 2 months ago
But it seems like a degree of "RL in real life" is nigh-inevitable - imitation learning only gets you so far. Kind of like how RLVR is nigh-inevitable for high LLM performance on agentic tasks, and for many of the same reasons.
tim333 | 2 months ago
Re: not expecting it for ten years at least - current progress is pretty much in line with Moravec's predictions from 35 years ago (https://jetpress.org/volume1/moravec.htm).
I wonder if he still follows this stuff?
makeitdouble | 2 months ago
What fascinates me is that we could probably make self-folding clothes. We also already have wrinkle-free clothes, for which folding is minimally needed. I'd wager we could go a lot further if we invested a tad more in the materials themselves.
Yet the first image people seem to have of super-advanced, multi-thousand-dollar robots is still folding the laundry.