CollinEMac | 4 months ago
This sounds a little dramatic. The capabilities of ChatGPT are known: it generates text and images. The qualities of the content of the generated text and images are not fully known.
beyarkay | 4 months ago
There's a big difference between generating text that does someone's homework and text that changes people's opinions about the world (e.g. the unauthorized experiment on r/changemyview, in which the AI was better than almost all humans at changing people's views (99th percentile), and not a single user was able to spot it as AI [1]).
If you're disagreeing with the precise wording of "capabilities" vs "qualities of the content", then sure, use whatever words make sense to you. But I don't think that's an interesting discussion to have.
I stand by my original statement.
[1]: https://www.reddit.com/r/changemyview/comments/1k8b2hj/meta_...
Nasrudith | 4 months ago
Likewise, knowing what to ask it, say, how to make some horrific toxic chemical, a nuclear bomb, or similar, isn't much good if you cannot recognize a correct answer, and dangerous capability depends heavily on what you have available to you. Any idiot can be dangerous with C4 and a detonator, or with bleach and ammonia. Even if ChatGPT could give entirely accurate instructions on how to build an atomic bomb, it wouldn't do much good, because you wouldn't be able to source the tools and materials without setting off red flags.