(no title)
bodge5000 | 1 month ago
Remember back in the early 2000s when people would photoshop one animal's head onto another and trick people into thinking "science has created a new animal"? That obviously doesn't work anymore because we know that's possible, even relatively trivial, with Photoshop. I imagine the same will happen here: as AI writing gets more common, we'll begin a subconscious process of determining whether the writer is human. That's probably a bit unfairly taxing on our brains, but we survived Photoshop, I suppose.
air7 | 1 month ago
The obviously fake ones were easy to detect, and the less obvious ones took some sleuthing. But the good fakes totally fly under the radar. You literally have no idea how many of the images you see are well-doctored, because you can't tell.
Same for LLMs in the near future (or perhaps already). What will we do when we realize we have no way of distinguishing man from bot on the internet?
bodge5000 | 1 month ago
> What will we do when we realize we have no way of distinguishing man from bot on the internet?
The idea is that it's a completely different scenario if we're aware of this as a potential problem versus not being aware of it at all. Maybe we won't be able to tell 100% of the time, but it's something we'll consider.