You sue whoever misrepresented the robot's capabilities to you. There's nothing wrong with making a robot that's bad at trading. There is something wrong with pretending it's not bad at trading when you're selling it.
Your answers diverge depending on who is operating the "robot". If it's being presented as a service by an investment firm, then you obviously have a standard dispute with the investment firm, the same as if one of their human brokers went nuts. If it was sold as a software system to be self-administered, then you have an issue with a vendor. The latter may hinge solely on misrepresentation, because software is generally sold with little warranty for precisely these reasons.
Is it even possible to misrepresent the robot's capabilities? In the UK most (all?) advertisements come with cautionary notices such as "capital at risk" or "past performance not indicative of future returns" or similar...
IANAL, but I could see a lawsuit being possible if historical "actual" performance figures were misrepresented, or if back-tested theoreticals (read: overfit) were passed off as actuals.
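To see why "back-tested theoreticals" are so easy to overfit, here's a toy sketch (entirely hypothetical numbers, not anyone's actual system): generate enough random strategies on the same historical data, report only the best one, and the back-test looks impressive even though the true edge is zero.

```python
import random

random.seed(0)

def daily_returns(n):
    # Simulated market with zero exploitable edge
    return [random.gauss(0, 0.01) for _ in range(n)]

def pnl(signal, rets):
    # P&L of a daily long/short signal applied to a return series
    return sum(s * r for s, r in zip(signal, rets))

in_sample = daily_returns(250)   # data the "back-test" is fit on
out_sample = daily_returns(250)  # data the buyer actually trades on

# 200 random long/short strategies; by construction none has real skill.
strategies = [[random.choice([-1, 1]) for _ in range(250)]
              for _ in range(200)]

# Cherry-pick the best back-test and present it as "actual" performance.
best = max(strategies, key=lambda s: pnl(s, in_sample))
print(f"in-sample P&L:     {pnl(best, in_sample):+.3f}")  # looks great
print(f"out-of-sample P&L: {pnl(best, out_sample):+.3f}") # no real edge
```

The selection step alone manufactures the in-sample performance; out of sample, the picked strategy is just another coin flip.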
If the seller didn't fully understand how it worked, but fraudulently represented that they did, they'd be on the hook. It all hinges on what the buyer was promised at the time of sale. If the seller had a good disclaimer, the buyer might be out of luck.
There are places where adopting AI is going to be difficult for this reason - notably anywhere that has a legal requirement to explain why a decision went a particular way. That includes things like insurance risk models, pension investments, etc.
mindslight|6 years ago
So much money sloshing around can make things a little grayer, because in either case the investor didn't actually perform any work themselves. But I'm confident they can use some of that money to discern who ultimately owned what, who employed whom, and what type of relationship was created.

I feel like newswriters just love to generate hot air with this "sue a robot" trope. Whom do you sue for that?
tomp|6 years ago
Although obviously such warnings can't reach some people... My favourite example of this was the collapse of OptionSellers.com following the huge move in Natural Gas futures. I mean, they were selling options - it's right there in the name...
https://www.bloomberg.com/news/articles/2018-11-19/hedge-fun...
TuringNYC|6 years ago
inflatableDodo|6 years ago
bumby|6 years ago
Saying the algorithm wins 95% of its trades may not mean much if the remaining 5% of trades cause you to lose most of your value.
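That point is just expected-value arithmetic; with made-up illustrative numbers:

```python
# Hypothetical strategy: wins 95% of trades, but the losers are large.
win_rate = 0.95
avg_win = 0.01    # winners gain 1%
avg_loss = -0.30  # the rare losers drop 30%

# Per-trade expectation: 0.95 * 0.01 + 0.05 * (-0.30) = -0.0055
expected_return = win_rate * avg_win + (1 - win_rate) * avg_loss
print(expected_return)  # negative despite a 95% win rate
```

A single blow-up trade dominates the expectation, which is exactly the failure mode of short-option strategies like the OptionSellers.com collapse mentioned above.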
Loughla|6 years ago
hammock|6 years ago
rfugger|6 years ago
onion2k|6 years ago
hinkley|6 years ago
Is cheesing an algorithm illegal?