(no title)
gsuuon | 1 year ago
46. While responsibility for the ethical use of AI systems starts with those who develop, produce, manage, and oversee such systems, it is also shared by those who use them. As Pope Francis noted, the machine “makes a technical choice among several possibilities based either on well-defined criteria or on statistical inferences. Human beings, however, not only choose, but in their hearts are capable of deciding.”[92] Those who use AI to accomplish a task and follow its results create a context in which they are ultimately responsible for the power they have delegated. Therefore, insofar as AI can assist humans in making decisions, the algorithms that govern it should be trustworthy, secure, robust enough to handle inconsistencies, and transparent in their operation to mitigate biases and unintended side effects.[93] Regulatory frameworks should ensure that all legal entities remain accountable for the use of AI and all its consequences, with appropriate safeguards for transparency, privacy, and accountability.[94] Moreover, those using AI should be careful not to become overly dependent on it for their decision-making, a trend that increases contemporary society’s already high reliance on technology.
That is, "an AI told me so" should never be a valid excuse for anything.

I also really liked:
62. In light of the above, it is clear why misrepresenting AI as a person should always be avoided; doing so for fraudulent purposes is a grave ethical violation that could erode social trust. Similarly, using AI to deceive in other contexts—such as in education or in human relationships, including the sphere of sexuality—is also to be considered immoral and requires careful oversight to prevent harm, maintain transparency, and ensure the dignity of all people.[124]
I think it should be a legal requirement that an AI identify itself as such when given certain key phrases, with no way to prompt-engineer that behavior out.

Really interesting read overall; thanks for sharing.