While this is generally correct, we prefer to look at this probabilistically. Do you think the expected number of harmful behaviors would stay the same if anyone could break these safety guardrails? Even if most users could get this kind of info elsewhere, a small percentage of malicious ones can have an outsized impact. Some of the data we've seen—like bomb-making instructions—is highly detailed and convincing, making it far more accessible than a random Google search would be. Removing safeguards doesn't create masterminds, but it does lower the barrier for harm.
thatguy0900|1 year ago
Anyone who wants to make a bomb can easily find The Anarchist Cookbook, a widely discussed book you can even buy on Amazon that includes detailed guides and instructions for exactly this and more. If anything, asking ChatGPT for detailed instructions and follow-up questions will probably just make it hallucinate and blow you up, I'd imagine. It's just hard to take seriously.
BdaOOngM|1 year ago