oliverf's comments

oliverf | 2 years ago | on: Show HN: Secure ChatGPT – a safer way to interact with generative AI

Thanks Ken! We’re just adding support for hashing personally identifiable information with MD5 and SHA-256, but that’s only one approach… the next step is definitely tokenization, which allows obfuscation plus reversal for those with the right permissions.. this would let you comply with the growing number of regional privacy regulations.
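The hashing-vs-tokenization distinction could be sketched like this (a minimal illustration with hypothetical names — `hash_pii`, `Tokenizer` — not Secure ChatGPT's actual API; a real vault would be an encrypted store, not an in-memory dict):

```python
import hashlib
import secrets

def hash_pii(value: str, salt: str = "") -> str:
    """One-way obfuscation: SHA-256 hash of a PII field (not reversible)."""
    return hashlib.sha256((salt + value).encode()).hexdigest()

class Tokenizer:
    """Reversible obfuscation: swap PII for random tokens, keep the
    token -> value mapping in a vault that only permitted callers can read."""

    def __init__(self):
        self._vault = {}  # token -> original value

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = value
        return token

    def detokenize(self, token: str, authorized: bool) -> str:
        # Stand-in for a real permission check (IAM role, policy engine, ...)
        if not authorized:
            raise PermissionError("caller lacks detokenization rights")
        return self._vault[token]

t = Tokenizer()
tok = t.tokenize("jane.doe@example.com")
print(tok)                               # opaque token safe to send to the LLM
print(t.detokenize(tok, authorized=True))  # original value, for permitted users
```

The point is that a hash can satisfy "don't leak PII to the model" but a tokenized value can also be restored downstream, which is what regional privacy rules (right-of-access, deletion) tend to require.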

oliverf | 2 years ago | on: Show HN: Secure ChatGPT – a safer way to interact with generative AI

That's the scary part and it's definitely coming.. if it isn't here already. If an LLM is a threat actor using traditional threat vectors like:

- traffic that is a bot or originates from a botnet

- connections from your network to a known botnet C&C server

- injection of malicious URLs

- redirection to malicious hosted domains

- delivery of malicious file objects

- passwords exposed in a large-scale data breach

Then yes.. although technically these APIs are meant to be embedded in a cloud app, which then ensures the user is protected. There's a lot of work being done right now to use LLMs for defense, simulating what a SOC analyst would do when triaging a security event.. but there's likely an equal amount of work being done to put LLMs on the offense. You could automate Nigerian 419 scams, spear phishing, all kinds of wire fraud.. and if you hooked an LLM up to penetration testing tools it could literally launch real attacks.. it's a new world..
