Traditional software is unpredictable: as it gets more complicated, corner cases emerge that are difficult, if not impossible, to anticipate.
AI is so unpredictable that it's impossible to make effective preventive safeguards. For every use case that we want to protect against, there will be many more that we can't anticipate.
I don't think it's possible to build effective safeguards into AI for situations like this, because AI isn't the problem: Mentally ill people will just be triggered by something else.
Furthermore, someone who's going to sit and chat with AI for an endless amount of time will find the corner cases that weren't anticipated.
gwbas1c|6 months ago