The same thing that happens with ChatGPT? I.e., if you ask in an overt way you get a canned suicide-prevention response, but you can still get the "real" results if you try hard enough to work around the safety measures.
My point is, clearly there is some sense of liability/responsibility/whatever you want to call it. It's not really the same as selling rope; rope doesn't come with suicide warnings.