top | item 24208000


mike_mg | 5 years ago

I find the inverse surprising: that many algorithms that work on real-life robots _do_ _provide_ error bounds and their optimality / convergence properties are proven in the papers that introduce them.

A great example of this is motion planning, where papers on both sampling-based methods (such as SST) and search-based methods (descendants of the A* family) argue at length for their theoretical optimality and convergence properties.
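To make the A* optimality claim concrete: the classic guarantee is that A* returns a minimum-cost path whenever its heuristic is admissible (never overestimates the true remaining cost). Here is a minimal sketch; the grid, the `neighbors` callback, and the function names are illustrative, not from any particular planning library.

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Minimal A*. If h never overestimates the true remaining cost
    (admissible), the returned path cost is provably optimal."""
    open_heap = [(h(start), 0, start)]   # (f = g + h, g, node)
    came_from = {start: None}
    g_score = {start: 0}
    closed = set()
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node in closed:
            continue                     # stale heap entry
        closed.add(node)
        if node == goal:
            path = []                    # walk parents back to start
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1], g
        for nbr, cost in neighbors(node):
            ng = g + cost
            if ng < g_score.get(nbr, float('inf')):
                g_score[nbr] = ng
                came_from[nbr] = node
                heapq.heappush(open_heap, (ng + h(nbr), ng, nbr))
    return None, float('inf')

# Toy example: 4-connected unit-cost 5x5 grid.
# Manhattan distance is admissible here, so the result is optimal.
def grid_neighbors(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < 5 and 0 <= ny < 5:
            yield (nx, ny), 1

manhattan = lambda p: abs(p[0] - 4) + abs(p[1] - 4)
path, cost = a_star((0, 0), (4, 4), grid_neighbors, manhattan)
```

Note that the proof applies to this abstract graph search; whether the guarantee carries over to a deployed robot depends on how faithfully the discretization and cost model reflect the physical system, which is exactly the gap the parent comment is pointing at.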

On another note, I think requiring more theoretical analysis as a guarantee of safety could partially be an AI-winter meme rather than a practical solution. Case in point: do people run a quick check of the aerodynamics maths before boarding a flight? No - they rely mostly on the engineering and regulatory processes that gradually made passenger flights safer.

helltone | 5 years ago

It seems you are talking about theoretical error bounds, that is, proofs in papers with assumptions on input probabilities etc. These don't always apply to actual implementations on real physical robots. There is a huge gap in safety practices between aerospace engineering and robotics.