I've seen a different trick in the past: adding an IMU[1] to the robot arm. Combining two different types of sensors is called sensor fusion[2], and it's really common to pair an IMU with GPS and slap a Kalman filter[3] on top for very accurate position readings.
The particularly cool thing of this video though is that they could mount the new sensor within the motor itself, making it all a lot more compact.
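To make the fusion idea concrete, here's a minimal 1-D sketch (all noise figures invented): the filter dead-reckons with IMU acceleration in the predict step and corrects with GPS-like position fixes in the update step.

```python
import random

def kalman_fuse(gps, accel, dt=0.1, gps_var=4.0, accel_var=0.25):
    """Fuse noisy position fixes with accelerometer data (1-D sketch)."""
    x, v = 0.0, 0.0                      # state: position, velocity
    p_xx, p_xv, p_vv = 1.0, 0.0, 1.0     # state covariance entries
    out = []
    for z, a in zip(gps, accel):
        # Predict: dead-reckon with the IMU acceleration
        x += v * dt + 0.5 * a * dt * dt
        v += a * dt
        p_xx += dt * (2 * p_xv + dt * p_vv)
        p_xv += dt * p_vv
        p_vv += accel_var * dt           # process noise from the IMU
        # Update: correct with the GPS position fix
        s = p_xx + gps_var               # innovation variance
        k_x, k_v = p_xx / s, p_xv / s    # Kalman gains
        y = z - x                        # innovation
        x += k_x * y
        v += k_v * y
        p_xx, p_xv, p_vv = (1 - k_x) * p_xx, (1 - k_x) * p_xv, p_vv - k_v * p_xv
        out.append(x)
    return out
```

Standing still, the fused estimate ends up noticeably tighter than the raw fixes, which is the whole point of the trick.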
If anyone wants to build this sort of thing, the new Raspberry Pi Pico 2 is both orders of magnitude more capable than the chip used here and around half the price.
It's by far the best value for money in an introductory 32-bit ARM/RISC-V embedded device right now.
It's relatively old at this point, but I'm still getting excellent performance from the Teensy 4.1. It's a little more expensive, around $30, but runs a Cortex-M7 at 600 MHz and includes a generous complement of I/O and protocols.
There are larger industrial robots that use secondary encoders to improve "out of the box" accuracy for more demanding tasks. The secondary joint feedback is paired with a kinematic model of the robot structure/mechanics to accurately predict where the robot tool point actually is.
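As a toy illustration of that pairing (a 2-link planar arm with invented dimensions): the kinematic model turns whatever the output-side encoders report into a tool-point prediction, so even a fraction of a degree of discrepancy between commanded and actual joint angle shows up as a concrete tip error.

```python
import math

def tool_point(theta1, theta2, l1=0.4, l2=0.3):
    """Forward kinematics: joint angles (rad) -> (x, y) of the tool."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Commanded angles vs. what output-side encoders actually report
# (e.g. 0.5 deg of backlash/deflection on joint 1):
commanded = tool_point(math.radians(30.0), math.radians(45.0))
actual = tool_point(math.radians(30.5), math.radians(45.0))
error_mm = math.hypot(commanded[0] - actual[0], commanded[1] - actual[1]) * 1000.0
```

With these made-up numbers the half-degree error at the shoulder already puts the tool several millimetres off, which is exactly what the secondary feedback lets the controller see and correct.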
You can’t judge backlash by how the robot repeats the exact same set of movements over and over. That removes hysteresis from the problem definitionally.
I sort of struggle to see how getting good positioning accuracy from a high backlash system under zero load can have a useful application.
Maybe just lack of imagination on my part.
There's a trend that says: make and buy bad hardware, the software will solve it. I haven't noticed that paying off. Tesla using webcams for self-driving is one example. Boeing designing its planes and then relying on faulty attitude sensors is another.
I would be way more impressed if the robot did something useful. My suspicion is that its real world application capabilities are rather limited.
You have oversimplified the Boeing one: their goal was to create an efficient plane to compete with Airbus without needing the expense and delays of a new type certification.
To do this they needed bigger engines on the same frame, which in turn needed to be mounted further forward affecting flight characteristics and requiring retraining. Retraining would be a sales killer so they hacked on some software systems to attempt to make the plane fly like an older 737.
Then pilots could upgrade with just an iPad training course. The augmentation had to keep the pilot from noticing (I think) that the plane could get stuck in a stall at too high an AoA (this is where my memory might be off...), so the MCAS software uses AoA sensors to push the nose down based on the detected AoA.
The AoA sensors were never designed for a directly life-and-death-critical use case, and sometimes they got stuck or failed. MCAS only used one as an input. If MCAS incorrectly assesses that a nose-down is required and the pilot follows their 737 training, they are having their last day. That plane is going down.
Basically, people were murdered by Boeing so that at every stage of this wretched plan it could make more money.
I think you're right, but Boeing's was perhaps the worst possible asshole design, and deserves its own league.
Question for anyone who has used one of these analog measuring devices: the indicator seems to go all the way around before the camera zooms in to read the indicated value. Is this video actually showing the accuracy it is claiming?
I haven’t watched the whole video, but I’m assuming what they were showing was ‘move x from 0.00 to 10.00’ with the gauge showing the final move was to (actual) 10.05.
Which with how floppy that rig is, is pretty impressive.
Notably though, those gauges do need to be ‘preloaded’ (compressed into their ‘positive’ range) to be able to measure negative direction shifts, and while it looks like that was done, I can’t be 100% sure without analyzing it far more than I want to do right now.
Also, those gauges provide a degree of preload (not much, but some), which might be taking a bunch of slop out of the system and giving overly rosy accuracy numbers.
Yes. The sphere at the tip has a certain radius, and the indicator will show zero (again) when the sphere has been deflected by its radius (i.e. the contact point is exactly at the center line). When out of contact, it's essentially telling you that you're missing at least a whole millimeter to the point where you should be.
Often there is a second needle indicating which of these situations you're in, but I assume it's not considered necessary because if you're 1mm off, the situation is (in the contexts in which these devices are used) very obvious.
Yes, but they're not robot arms, so it's not as fascinating. The length of the arm amplifies error, so if you made a "mechanics and steppers" arm with the same positional accuracy as a printer, the motors would have to be much more precise, or if you geared them down, the backlash would have to be extremely low, like on an industrial robot arm.
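Back-of-envelope version of that amplification (numbers invented): for small angles the tip error is roughly arm length times the angular error in radians.

```python
import math

def tip_error_mm(arm_length_m, angle_err_deg):
    """Small-angle tip error: arm length * angle (rad), returned in mm."""
    return arm_length_m * math.radians(angle_err_deg) * 1000.0

# A belt-driven printer axis can hold roughly 0.05 mm; for a 0.5 m arm
# to match that at the tip, the joint has to be good to ~0.006 deg.
# Even 0.1 deg of joint error misses by:
print(round(tip_error_mm(0.5, 0.1), 2))
```

Which is why a gear train that's fine on a printer carriage turns into millimetres of slop at the end of an arm.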
What their video demonstrates is mostly same-direction repeatability, not absolute static accuracy. They can correct for backlash at individual motors, but not slop or bend in the linkages.
This uses DC motors. If you use modern 3-phase servomotors, you know more of what the motor is doing.
I don't see how the second sensor would improve accuracy (or rather precision). If I understand correctly, the second sensor allows for improved speed. Couldn't backlash of the motor (and gears and linkage) be accounted for using a PID controller?
That said, I'm impressed by how precise this rather flimsy-looking robot actually is.
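A minimal sketch of why the second sensor helps here (everything invented: a pure P-controller and a 0.5-unit gear deadband). Closing the loop on the motor-side encoder converges with a backlash-sized offset at the load, while closing it on an output-side sensor does not; no amount of PID tuning on the motor-side signal can see, let alone remove, that deadband.

```python
BACKLASH = 0.5  # deadband width in output units (made up)

def load_position(motor_pos, load_pos):
    """Load follows the motor only outside the backlash deadband."""
    if motor_pos - load_pos > BACKLASH / 2:
        return motor_pos - BACKLASH / 2
    if load_pos - motor_pos > BACKLASH / 2:
        return motor_pos + BACKLASH / 2
    return load_pos

def drive(target, sense_load):
    """P-controller; sense_load picks which encoder closes the loop."""
    motor, load = 0.0, 0.0
    for _ in range(200):
        feedback = load if sense_load else motor
        motor += 0.2 * (target - feedback)   # proportional step
        load = load_position(motor, load)
    return load
```

With these numbers, `drive(10.0, False)` settles at 9.75 (the motor shaft is exactly on target, the load lags by half the deadband), while `drive(10.0, True)` settles at 10.0.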
I have a hunch that Optimus likewise leans heavily on inverse kinematic modeling, though presumably not using the paper-plate tech.
It would be sick if they used a pure vision ML approach to train a heuristic understanding of its own muscles, instead of these fixed rotary encoders, which don't account for material deflection, sensor dislodgement, etc., sort of like Meta Quest player tracking in the SLAM loop.
I'm not a fan of the YouTube link trend on HN, as cool as the latest robots are. I know they're encroaching on territory previously held by much heavier additive and subtractive machines.
And I'm okay with YouTube when a video makes sense, but in this case they've basically crammed a short article into a video, making it more awkward to read: slides with text and diagrams, some background music, and only a video demonstration at the end.
Are you saying you don't like video and would prefer text, or is it something specific to YouTube that you object to? For many topics, video is really helpful in understanding stuff.
franciscop | 1 year ago
[1] https://en.wikipedia.org/wiki/Inertial_measurement_unit
[2] https://en.wikipedia.org/wiki/Sensor_fusion
[3] https://en.wikipedia.org/wiki/Kalman_filter
shellfishgene | 1 year ago
llm_trw | 1 year ago
5ADBEEF | 1 year ago
nativeit | 1 year ago
chipdart | 1 year ago
That's cool and all but what are the tradeoffs?
enginoor | 1 year ago
https://electroimpact.com/Products/Robots/AchievingAccuracy
amelius | 1 year ago
gaze | 1 year ago
shellfishgene | 1 year ago
acyou | 1 year ago
boeinggggg | 1 year ago
leovailati | 1 year ago
shellfishgene | 1 year ago
unknown | 1 year ago
[deleted]
imoverclocked | 1 year ago
lazide | 1 year ago
tgsovlerkhgsel | 1 year ago
KeplerBoy | 1 year ago
It's not control theory, but mechanics and steppers.
mmoustafa | 1 year ago
foxglacier | 1 year ago
maille | 1 year ago
TacticalCoder | 1 year ago
It's used to cut precise wood pieces, or to carve wood or metal, etc.
https://en.wikipedia.org/wiki/Pantograph
https://youtu.be/s56J_Rnh_Co
You use the "big" part to drive the "small" one, which gives it great precision.
abecedarius | 1 year ago
whamlastxmas | 1 year ago
convolvatron | 1 year ago
Animats | 1 year ago
guenthert | 1 year ago
elif | 1 year ago
dreamcompiler | 1 year ago
Kudos!
luikore | 1 year ago
mglz | 1 year ago
iancmceachern | 1 year ago
emmelaich | 1 year ago
smolder | 1 year ago
defanor | 1 year ago
bob112 | 1 year ago
d0mine | 1 year ago
dr_dshiv | 1 year ago