LiamPowell | 5 days ago
The ASCII flowcharts all contain jagged vertical lines. This is the biggest indicator of LLM output, as no human would ever produce that. You can see it's wrong at a glance.
> there’s no way for us to prove that they don’t have access to all of that data anyway. we can only assume that they don’t have access to all of that data. but if you want my two cents, they probably do.
This doesn't quite read as LLM output, but it makes the whole article look like a conspiracy theory.
> after trying to write a few exploits, vmfunc decided to browse their infra on shodan. it all started with a Shodan search. a single IP. 34.49.93.177 sitting on Google Cloud in Kansas City. one open port. one SSL certificate. two hostnames that tell a story nobody was supposed to read:
> and the company that runs all of this is the same one that takes your passport photo when you sign up for ChatGPT. same codebase. same platform. different deployment. same facial recognition. same screening algorithms. same data model.
> and as always, the information wants to be free. we didn’t break anything. we didn’t bypass anything. we queried URLs, pressed buttons, and read what came back. if that’s enough to expose the architecture of a global surveillance platform… maybe the problem isn’t us.
These all absolutely stink of LLM writing patterns.