top | item 46032514


singiamtel | 3 months ago

I found this principle particularly interesting:

    Human oversight: The use of AI must always remain under human control. Its functioning and outputs must be consistently and critically assessed and validated by a human.


Sharlin | 3 months ago

Interesting in what sense? Isn't it just stating something plainly obvious?

jacquesm | 3 months ago

It is, but unfortunately the fact that it is obvious to you, and to me, does not mean it is obvious to everybody.

SiempreViernes | 3 months ago

Did you forget the entire DOGE episode, where every government worker in the US had to send a weekly email to an LLM to justify their existence?

mk89 | 3 months ago

I want to see how obvious this becomes when you start to add agents left and right that make decisions automagically...

xtiansimon | 3 months ago

Where is “human oversight” in an automated workflow? I noticed the quote didn’t say “inputs”.

And with testing and other services, I guess human oversight can be reduced to _looking at the dials_ for the green and red lights?

SiempreViernes | 3 months ago

One person's inputs are someone else's outputs; I don't think you have spotted an interesting gap. Certainly just looking at the dials will do for monitoring functioning, but it falls well short of validating the system's performance.

monkeydust | 3 months ago

The really interesting thing is how that principle interplays with their pillars and goals: if the goal is to "optimize workflow and resource usage", then having a human in the loop at all points might limit or fully erode that ambition. Obviously it's not that black and white; certain tasks could be fully autonomous while others require human validation, and you could still come out net positive. But this challenge is not exclusive to CERN, that's for sure.

contrarian1234 | 3 months ago

Do they hold the CERN Roomba to the same standard? If it cleans the same section of carpet twice is someone going to have to do a review?

conartist6 | 3 months ago

It's still just a platitude. Being somewhat critical still means giving some implicit trust. If you didn't give it any trust at all, you wouldn't use it at all! So my read is that they endorse trusting it, exactly the opposite of what they appear to say!

It's funny how many official policies leave me thinking that they're corporate cover-your-ass policies, and that if the authors really meant it they would have found a much stronger and plainer way to say it.

MaybiusStrip | 3 months ago

"You can use AI but you are responsible for and must validate its output" is a completely reasonable and coherent policy. I'm sure they stated exactly what they intended to.

hgomersall | 3 months ago

That doesn't follow. Say you write a proof for something I request; I can then check that proof. That doesn't mean I don't derive any value from being given the proof. A lack of trust does not imply no use.

SiempreViernes | 3 months ago

> So they endorse trusting it is my read, exactly the opposite of what they appear to say!

They endorse limited trust, not exactly a foreign concept to anyone who's taken a closer look at an older loaf of bread before cutting a slice to eat.

miningape | 3 months ago

I think you're reading what you want out of that, but that's the problem: it's too ambiguous to be useful.