top | item 32209674


carbadtraingood | 3 years ago

The answer from the federation is unconscionable. They blamed the boy and then said they could not be held responsible. Fuck off, a kid made a reasonable kid movement. If the robot wasn't ready to be around children, it shouldn't have been deployed around children. And there should have been a big red button that immediately stops everything, within reach of every player.



godelski|3 years ago

Honestly, as someone who researches ML, this is my major concern. It isn't AGI that has the potential to kill us, it's current ML systems that can't handle OOD data, which engineers deploy anyway because "that's the user's fault." Same reason we have Teslas crashing. AI safety might talk about AGI a lot, but its main area of research is modern systems and the concerns around them.

OOD data is really hard to deal with, FWIW. But personally I'm not confident that adding more matrix multiplies will generalize well enough that OOD stops being a major concern.
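To make the OOD difficulty concrete: one common (and known-to-be-imperfect) heuristic is to flag inputs where the model's maximum softmax probability falls below a threshold. This is only an illustrative sketch; the function names and threshold are made up for the example, and overconfident models defeat exactly this kind of check, which is the core of the problem.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def looks_out_of_distribution(logits, threshold=0.7):
    """Flag low-confidence predictions as possibly OOD.
    Deep nets are often *overconfident* on OOD inputs, so this
    heuristic can silently fail -- that is the hard part."""
    return max(softmax(logits)) < threshold
```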

ALittleLight|3 years ago

AGI alignment is a vastly bigger problem. Of course poorly built and deployed ML systems will kill and injure people - but these are tragedies of the kind humanity can endure and overcome and has overcome. Poorly aligned AGI is nothing less than the entire species at stake.

imtringued|3 years ago

This is a classic case of undefined behaviour or memory unsafety. Your mistakes can have an infinitely bad outcome, but people blame the programmer even though there are memory-safe languages. Yes, they sacrifice efficiency, but who the fuck wants to consider the billion potential ways of operating a robot in a physically unsafe way?

This means we are going to have the equivalent of GC in robots that interact with other humans.

throwaway894345|3 years ago

Genuine question: is Tesla’s autopilot crashing more often or more severely than human drivers?

salawat|3 years ago

By OOD, do you mean Out Of Domain?

dmix|3 years ago

Are there any other major examples of modern ML ethical issues besides some Tesla cars killing their drivers?

Are ML driven robots in factories killing people or Something? Because I haven’t heard of anything else.

The only other modern AI ethics stuff I hear about is making image generators more politically correct and maybe some criminal sentencing algorithms that are being misused (which isn’t really an AI ethics problem but a judicial procedural one).

jrumbut|3 years ago

It's appalling the robot was designed to ever use that much force in its grip. Even if the chess pieces were made of lead I can't see it being needed. In general, more attention should be paid to failing safe.

But the kid is some kind of local chess champion, I can't fully fault the decision to have him play with the experimental chess robot. Is it more dangerous than a lawn mower or a blender or any other machine that 9 year olds might begin to operate?

NamTaf|3 years ago

Correct. The fact they made a robot that could crush a human hand means they paid no attention to this hazard. Competent execution of Safety in Design concepts would demand limiting the grip force to only what's necessary to reliably move the pieces, which almost certainly wouldn't break bone. If that isn't possible, then it would imply the requirement to find some other way to resolve this hazard in the hierarchy of controls.
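The force-limiting idea is standard in gripper control. Here's a minimal sketch of the pattern, with entirely hypothetical sensor/actuator callbacks: close the jaws in small increments and stop the instant the measured force reaches a hard cap chosen far below injury levels, no matter what is between the jaws.

```python
# Hard cap on grip force: plenty to lift a chess piece, far below
# what's needed to break bone. The value here is illustrative only.
MAX_GRIP_FORCE_N = 15.0

def close_gripper(read_force, step_gripper, max_steps=100):
    """Close the gripper in small increments, stopping as soon as the
    force cap is reached. `read_force` and `step_gripper` stand in for
    a real sensor/actuator interface. Returns the last measured force."""
    force = 0.0
    for _ in range(max_steps):
        force = read_force()
        if force >= MAX_GRIP_FORCE_N:
            break  # never squeeze harder, whatever is in the jaws
        step_gripper()
    return force
```

The design point is that the cap is enforced in the innermost loop, not left to a higher-level planner that might misjudge what it's gripping.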

Relying on a human is the last option, not the default, when it comes to safety. Human adaptability is not a licence to hand-wave away design responsibility. The most glaring example is Tesla, who is unforgivably guilty of this.

This is bog-standard competent engineering in almost all domains of engineering. It is the table stakes-level expectation of a reasonable approach to safety. I'd literally end up in jail if something went wrong and I had been found to not consider these factors.

Software- and computer-related domains of engineering are a conspicuous outlier when it comes to this philosophy.

zuminator|3 years ago

If that is the official statement of the federation, I agree it's abysmal. But I question whether Lazarev really blurted all of that in one go, or if he was asked a series of leading questions and then his answers were misleadingly pasted together to make him sound as callous as possible.

carbadtraingood|3 years ago

To be clear, any answer short of, "this was a failure on our part to protect the children who attended, and as the leader of the organization, responsibility falls on me. We will make this right for the family and I will tender my resignation" is not adequate.

imtringued|3 years ago

The kid tried to capture a piece before the robot's piece was even placed, and then the robot tried to set its piece down on his while his hand was still covering it. This could have led to a minor injury even with a human opponent. What is strange is that they insisted on using such a powerful robot arm without any compliance in the actuators.

Aeolun|3 years ago

Now it broke a finger, which sucks, but it'll heal. But what if it had taken out an eye?

rowanG077|3 years ago

[deleted]

lupire|3 years ago

That's not what they said. They said they weren't responsible for the robot, which is true. Same as if a human player injured another player. The robot's operator is responsible.

carbadtraingood|3 years ago

I disagree: if they are running the event, they are responsible for ensuring the safety of participants. If you go skydiving and the chute fails in a totally predictable way, the company that takes you up shouldn't just shrug and say, "well, we aren't responsible for the chute, that was provided by another company."

twayt|3 years ago

[deleted]

kortilla|3 years ago

>people sign a disclaimer about the risks, they can not be held responsible.

This isn’t adequate in civilized countries. You can’t run an amusement park that severs the limbs of 1% of the participants under the protection of a disclaimer.

ClumsyPilot|3 years ago

You can't just 'waive' criminal responsibility. If I have a dangerous dog or industrial machinery that chops off hands, it doesn't matter what piece of paper the kid or the legal guardians sign: if I purposefully let kids play with them, I have put kids in harm's way.

makeitdouble|3 years ago

Putting responsibility on guardians for not having foreseen or prevented this is a slippery slope, or at least leads to unintended consequences.

Imagine a world where guardians will never let a kid go somewhere or do something that they don’t have 100% knowledge of, or aren’t 100% sure it’s perfectly safe.

In this specific case:

- you wouldn’t expect that issue at first sight

- it looks fun enough to give it a try

- the kid didn’t die. It truly hurts and can have long-lasting damage if not treated properly, but a broken finger is not the end of the world for the kid.

beached_whale|3 years ago

If this was any workplace in many countries, the robot owner would totally be culpable, even if it wasn't a child. There's a reason that people are not allowed near robots in many cases.

kstenerud|3 years ago

This is why in civilized societies the government mandates minimum safety standards so that a company is not even allowed to place such dangerous equipment around people.

You wouldn't be allowed to build a mangler anymore in a civilized society, waiver or no.

gaudat|3 years ago

I'm pretty sure there is only a tiny fraction of the population that is even willing to press the emergency stop when something bad happens. Most likely they will scream or freeze in place instead.

Quoting from another reply: Video of the incident: https://twitter.com/xakpc/status/1550224137041371144

There is not even an e-stop.

I can imagine what it sounded like at the scene, though: "Ay Blin," and then people scurrying around looking for the robot's power plug or hopelessly trying to overpower a heavily geared joint motor.

roenxi|3 years ago

[deleted]

nerdawson|3 years ago

> there should be some debate about whether this is even a significant enough incident to require an apology.

The robot behaved in an unexpected way which caused injury to a child. An apology is the absolute bare minimum they could do.

As a parent, I think it’s perfectly reasonable to expect event organisers to have put adequate safeguards in place.

We don’t need to put kids in front of a bear to teach them about the wonders of nature. Likewise, they don’t need to be in the path of a dangerous robot to discover machines.

userbinator|3 years ago

Are you really going to go for the "think of the children" argument...?

Although I do agree with you about having an e-stop.

dymk|3 years ago

"Day cares should be safe for children"

"OMG did you just use the 'think of the children' fallacy??? Invalid!"

bee_rider|3 years ago

"Think of the children" when used as an expression refers to the situation in which children are used as an excuse to implement rules which would be otherwise unpalatable. Not every case of protecting kids is a "think of the children" situation.

carbadtraingood|3 years ago

Yes, I'm going to make the outrageous claim here: children should not be expected to work next to industrial machinery with inadequate safeguards.

I know this is controversial to some, such as userbinator, who believe that industrial robotics without safeguards are just fine to mix with children.

Wowfunhappy|3 years ago

There are in fact times when we need to think about children. If we don't, humanity won't have much of a future.

Vespasian|3 years ago

There are tons of rules around these kind of robots in regular work environments for precisely this reason.

Usually there needs to be either a physical barrier like a cage, or a virtual one like a laser curtain that detects foreign objects in the robot's perimeter and emergency-stops it.

These rules were disregarded here.

I used to work in a company where such machines were developed, and even a very experienced engineer working on a prototype was once hit by one (no serious injuries, and safety was improved afterwards), because they can move very fast and in unexpected ways.

These days there are better solutions available (so-called cobots), which are designed to work in very close proximity with humans without physical separation. They feature very sensitive force sensors and are severely restricted in the way they are allowed to move.

So yes "think of the humans/children" does apply here. This is a solved problems and the operators decided to disregard established procedures and went instead for "flashy and cheap" (cobots are more expensive and slow as molasses)

charcircuit|3 years ago

>a kid made a reasonable kid movement

What do you mean? There's no reasonable time where you and your opponent are touching the pieces at the same time. Nor is there a reasonable time where you reach for the same piece.

ordu|3 years ago

Is it normal in chess to break the fingers of an opponent who breaks the rules? I think not. If it had been a human being rather than a robot, he would be considered guilty. Yes, the boy broke chess rules, and what? He should perhaps be punished by losing chess points or something, but not by having his fingers broken. A human who broke his opponent's fingers would be disqualified from chess for life and would face charges. It's a robot, it cannot be guilty, so someone else is. Who? Its creators? The organizers of the event? The parents who allowed the kid to go face the robot? Some adults are guilty here, not the boy.

krallja|3 years ago

Seven-year-old boys are known for always doing things that are reasonable, and never doing anything unreasonable.

carbadtraingood|3 years ago

Dude, children get antsy sometimes. They literally have different brain development than adults do. They do not have the same controls and inhibitions.

Children are literally wired for physical experimentation and echopraxia.

Kids also want to be seen as helping. If this robot put a piece down poorly, the kid might be trying to straighten up after it.

All of which are eminently reasonable for a kid who has no concept of operating around dangerous machinery.

stevage|3 years ago

Sure there is, like if you move a piece, hit the clock, then realise your piece wasn't quite centered on the square. Maybe not technically correct, but reasonable.

rmbyrro|3 years ago

It's a kid, my friend.

lupire|3 years ago

The penalty for an illegal move is maybe forfeit, not a broken finger.