What I find interesting is the implicit prioritisation: explainability, (human) accountability, lawfulness, fairness, safety, sustainability, data privacy and non-military use.
I agree, though I would prefer to highlight the first half of the first item: transparency. Also, perhaps make Safety an independent principle rather than combining it with Security.
These are a good set of principles that any company (or individual) can follow to guide how they use AI.
Good guidelines. My primary principle for using AI is that it should be used as a tool under my control to make me better, by making it easier to learn new things and by offering alternative viewpoints. Sadly, AI training seems headed towards producing ‘averaged behaviors’, while in my career the best I had to offer employers was an ability to think outside the box and bring different perspectives.
How can we train and create AIs with diverse creative viewpoints? The flexibility and creativity of AIs, or lack thereof, should guide the proper principles of using AI.
I'm not optimistic about this in the short term. Creative and diverse viewpoints seem to come from diverse life experiences, which AI does not have and which, even when present in the training data, are mostly washed out. Statistical models are like that: the objective function is to predict close to the average output, after all (see the toy sketch below).
In the long term I am at least certain that AI can emulate anything humans do en masse, where there is training data, but without unguided self-evolution, I don't see them solving truly novel problems. They still fail to write coherent code if you go a little out of the training distribution, in my experience, and that is a pretty easy domain, all things considered.
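To make the "predict the average" point concrete, here is a toy sketch (my own illustration, with hypothetical data, nothing to do with CERN): a single prediction trained under squared-error loss on diverse targets converges to exactly their mean.

    # Toy sketch: squared-error training pulls the prediction to the mean.
    import numpy as np

    viewpoints = np.array([-1.0, 0.0, 4.0])  # hypothetical diverse target outputs

    c = 0.0   # one shared prediction, fit by gradient descent on MSE
    lr = 0.1
    for _ in range(1000):
        grad = 2.0 * np.mean(c - viewpoints)  # d/dc of mean((c - y)^2)
        c -= lr * grad

    print(c, viewpoints.mean())  # both ~1.0: the optimum is exactly the mean

Loosely, that same pull toward the statistical center is what washes distinct viewpoints out of larger models trained on average-loss objectives.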
It is an organisation-wide document of "general principles"; how could it possibly have anything more specific to say about the inherently context-specific trade-offs of each particular use of AI?
Organizations above a certain size absolutely cannot help but publish this stuff. It is the work of senior middle managers. Ark Fleet Ship B.
I work in a corporate setting that has been working on a "strategy rebrand" for over a year now, and despite numerous meetings, endless PowerPoint, and God knows how much money to consultants, I still have no idea what any of this has to do with my work.
In such a scientific environment, there are gentlemen's agreements about many things that boil down to "Don't be an asshole" or "Be considerate of others", with some hard requirements at this or that point for things that are very serious.
What's so special about military research or AI that the two can't be done together even though the organization is not in principle opposed to either?
> CERN’s convention states: “The Organization shall have no concern with work for military requirements and the results of its experimental and theoretical work shall be published or otherwise made generally available.”
CERN was founded after WW2 in Europe, and like all major European institutions founded at the time, it was meant to be a peaceful institution.
CERN is in principle opposed to military research. That and stuff like lawfulness, fairness, sustainability, privacy are just general CERN principles restated for fluff.
One reason I can think of is with regard to confidentiality. A lot of AI services are controlled by companies in the US or China, and they may not want military research to leak to these countries.
Classified projects obviously have stricter rules, such as air gaps, but sometimes the limits are a bit fuzzy, like a non-classified project that supports a classified one. And I may be wrong, but academics don't seem to be the type who are good at keeping secrets, nor to see the security implications of their actions. Which is a good thing in my book: science is about sharing, not keeping secrets! So no AI for military projects could be a step in that direction.
> Human oversight: The use of AI must always remain under human control. Its functioning and outputs must be consistently and critically assessed and validated by a human.

And with testing and other services, I guess human oversight can be reduced to _looking at the dials_ for the green and red lights?
The really interesting thing is how that principle interplays with their pillars and goals, i.e. if the goal is to "optimize workflow and resource usage", then having a human in the loop at all points might limit or fully erode this ambition. Obviously it's not that black and white: certain tasks could be fully autonomous while others require human validation, and you could still be net positive. But this challenge is not exclusive to CERN, that's for sure.
It's still just a platitude. Being somewhat critical is still giving some implicit trust; if you didn't give it any trust at all, you wouldn't use it at all! So my read is that they endorse trusting it, exactly the opposite of what they appear to say!
It's funny how many official policies leave me thinking that it's a corporate cover-your-ass policy, and that if they really meant it they would have found a much stronger and plainer way to say it.
> Responsibility and accountability: The use of AI, including its impact and resulting outputs throughout its lifecycle, must not displace ultimate human responsibility and accountability.
This is critical to understand if the mandate to use AI comes from the top: make sure to communicate from day 1 that you are using AI as mandated, not that you are increasing productivity as mandated. Play it dumb; protect yourself from "if it's not working out then you are using it wrong" attacks.
This corporate crap makes me want to puke. It is a consequence of the forced bureaucracy from European regulations, particularly the EU AI Act, which is not well thought out and actively adds liability and risk to anyone on the continent touching AI, including old-school methods such as bank credit-scoring systems.
‘Sustainability: The use of AI must be assessed with the goal of mitigating environmental and social risks and enhancing CERN's positive impact in relation to society and the environment.’ [1]
‘CERN uses 1.3 terawatt hours of electricity annually. That’s enough power to fuel 300,000 homes for a year in the United Kingdom.’ [2]
I think AI is the least of their problems, seeing as they burn a lot of trees for the sake of largely impractical pure knowledge.

[1] https://home.web.cern.ch/news/official-news/knowledge-sharin...
[2] https://home.cern/science/engineering/powering-cern
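As a quick sanity check on that homes comparison (my arithmetic, not from either source):

    # 1.3 TWh/year spread over 300,000 homes, in kWh per home per year
    annual_use_kwh = 1.3e9        # 1 TWh = 1e9 kWh
    homes = 300_000
    print(annual_use_kwh / homes) # ~4,333 kWh per home per year

Roughly 4,300 kWh per household per year is a plausible annual electricity figure, so the quoted comparison is at least self-consistent.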
Humans have poured resources into the pursuit of largely impractical pure knowledge for millennia. This has been said of an incredible number of human scientific endeavors before they found use in other domains.
All this impractical knowledge people accumulated over centuries gave you cars, planes, computers, air conditioning, antibiotics, iPhones and, in fact, everything humankind has gained since it left the trees. So I would rather burn these 1.3 terawatt-hours on this than on, say, running Facebook or mining Bitcoin.
Also, the web was invented at CERN.
Far less power than those projected gigawatt data centers that are surely the one thing keeping AI companies from breaking even.