gnur | 4 years ago
If it's about AI, where is the line? Does reinforcement learning count? What about deep learning? Neural networks? 5 clever if statements?
Deukhoofd | 4 years ago
> The proposal also wants to prohibit AI systems that cause harm to people by manipulating their behavior, opinions or decisions; exploit or target people's vulnerabilities; and for mass surveillance.
> many technologies currently in use in Europe today, such as algorithms used to scan CVs, make creditworthiness assessments, hand out social security benefits or asylum and visa applications, or help judges make decisions, would be labeled as "high risk," and would be subject to extra scrutiny.
> One of the Commission’s requirements in the draft are that data sets do not “incorporate any intentional or unintentional biases” which may lead to discrimination.
patates | 4 years ago
So "5 clever if statements" (which, to be clear, is what most CV parsers are) would also be subject to extra scrutiny, depending on whether the application is labeled high risk or not.
I'm not sure how to feel about this. It seems like good intentions, marketed wrong.
Why not just say we will evaluate critical software the way we evaluate other critical engineering projects and infrastructure? Why mix in AI?
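The "CV parser as a handful of if statements" point can be sketched concretely. This is a hypothetical illustration; the field names, thresholds, and rules are all invented, but a filter like this is exactly the kind of non-"AI" system that can still encode the biases the draft worries about (e.g., an employment-gap rule that penalizes parental leave):

```python
# Hypothetical rule-based CV screener -- the "5 clever if statements"
# the comment describes. All fields and thresholds are invented.

def screen_cv(cv: dict) -> bool:
    """Return True if the CV passes this naive rule-based filter."""
    if cv.get("years_experience", 0) < 3:
        return False
    if "python" not in [s.lower() for s in cv.get("skills", [])]:
        return False
    # A rule like this can discriminate without any "AI" involved:
    # it penalizes e.g. parental leave or illness.
    if cv.get("employment_gap_months", 0) > 12:
        return False
    if not cv.get("degree"):
        return False
    if cv.get("references_checked") is False:
        return False
    return True

print(screen_cv({"years_experience": 5, "skills": ["Python", "SQL"],
                 "degree": "BSc"}))  # True
print(screen_cv({"years_experience": 5, "skills": ["Python"],
                 "degree": "BSc", "employment_gap_months": 18}))  # False
```

No model is trained anywhere here, yet the outcome for an applicant is the same as with a "high risk" ML system, which is presumably why the proposal scopes by application rather than by technique.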
deepstack | 4 years ago
I would think the intent is to cover technology that could hurt citizens; it's more about responsible use of technology. As someone in tech, I support this kind of initiative. Anything that allows software to identify a person becomes a slippery slope toward totalitarian government (and in this day and age it won't just be governments, but also transnational corporations).