Hacker News
Moravec's Paradox and the Robot Olympics
bglazer
I do find it interesting that they state that each task is done with a fine-tuned model. I wonder if that’s a limitation of the current data set their foundation model is trained on (which is what I think they’re suggesting in the post) or if it reflects something more fundamental about robotics tasks. It reminds me of a few years ago in LLMs, when fine-tuning was more prevalent. I don’t follow LLM training methodology closely, but my impression is that the bulk of recent improvements have come from better RL post-training and inference-time reasoning.
Obviously they’re pursuing RL, and I’m not sure spending more tokens at inference would even help for fine manipulation like this, notwithstanding the latency problems with that approach.
So maybe the need for fine-tuning goes away with a better foundation model, as they’re suggesting? I hope this doesn’t point towards more fundamental limitations on robot learning with the current VLA foundation model architectures.
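To make the "one fine-tuned copy per task" vs. "one task-conditioned generalist policy" contrast concrete, here's a toy sketch; everything in it (module names, sizes, task names) is made up for illustration and isn't the architecture from the post:

    # Illustrative only: contrasting per-task fine-tuned copies with a single
    # task-conditioned generalist policy. Toy stand-ins, not the post's model.
    import copy
    import torch
    import torch.nn as nn

    class TinyVLAPolicy(nn.Module):
        """Toy stand-in for a vision-language-action policy."""
        def __init__(self, obs_dim=32, instr_dim=16, act_dim=7):
            super().__init__()
            self.backbone = nn.Sequential(nn.Linear(obs_dim + instr_dim, 64), nn.ReLU())
            self.action_head = nn.Linear(64, act_dim)

        def forward(self, obs, instr):
            # Conditioning on an instruction embedding lets one model cover many tasks.
            return self.action_head(self.backbone(torch.cat([obs, instr], dim=-1)))

    pretrained = TinyVLAPolicy()

    # What the post describes today: a separate fine-tuned copy per task.
    per_task = {t: copy.deepcopy(pretrained)  # each copy gets fine-tuned on that task's demos
                for t in ["fold_shirt", "key_in_lock", "open_door"]}

    # The hoped-for end state: one generalist policy, with the task selected
    # purely by the instruction embedding and no per-task weights at all.
    obs = torch.randn(1, 32)
    instr = torch.randn(1, 16)  # would come from a language encoder in practice
    print(pretrained(obs, instr).shape)  # torch.Size([1, 7])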
ACCount37
But it seems like a degree of "RL in real life" is nigh-inevitable - imitation learning only gets you so far. Kind of like how RLVR is nigh-inevitable for high LLM performance on agentic tasks, and for many of the same reasons.
tim333
Re: not expecting it for at least ten years, current progress is pretty much in line with Moravec's predictions from 35 years ago. (https://jetpress.org/volume1/moravec.htm)
I wonder if he still follows this stuff?
makeitdouble
What fascinates me is that we could probably make self-folding clothes. We also already have wrinkle-free clothes that need minimal folding. I'd wager we could go a lot further if we invested a tad more in the matter.
But the first image people seem to have of super-advanced, multi-thousand-dollar robots is still folding the laundry.
Animats
Here are some of the same tasks being attempted as part of the DARPA ARM program in 2012.[1] Compare key-in-lock and door opening with the 2025 videos linked above. Huge improvement.
We just might be over the hump on manipulation.
godelski
> The gold-medal task is to hang an inside-out dress shirt, after turning it right-side-in, which we do not believe our current robot can do physically, because the gripper is too wide to fit inside the sleeve
You don't need to fit inside the sleeve to turn it right-side-out... Think about a sock (the same principle applies, but it's easier to visualize). You scrunch the sock up so it's like a disk, then pull to invert it.
This can be done with any piece of clothing. It's something I do frequently because it's often easier (I turn all my clothes inside out before washing).
DonHopkins
Our robotic overlords have come a long way in 35 years!
Check out the cool retro robot photos:
https://en.wikipedia.org/wiki/First_Robot_Olympics
https://en.wikipedia.org/wiki/Turing_Institute
My favorites:
Robug II disqualified for trying to mount Russian competitor during race.
Torchbearer NEL carrying flame to Olympic Venue from Greek Restaurant.
Walking pizza box Biped Walker, University of Wales. Paul Channon & Simon Hopkins.
The Seventh Incarnation of Dr Who (Sylvester McCoy) opens the event with Sue Mowforth.
Richard 1st robot head commentator from The Turing Institute, Glasgow. (I use this as my Slack avatar!)