Show HN: Secure ChatGPT – a safer way to interact with generative AI
35 points | oliverf | 2 years ago | github.com
I’m the founder of Pangea. We’ve built a developer platform where you can easily add security to your code through a simple set of APIs - features like authentication, secrets management, audit logging, redaction of PII, restricting access from embargoed countries, known-threat-actor intelligence, etc.
With the ChatGPT and LLM explosion, we thought about ways to reduce the risk of both the inputs to and outputs from these services. Our Next.js sample app adds a security layer on top of ChatGPT, built from services you can implement quickly.
It’s basically a front end to the OpenAI API that you can deploy which does a few security-related things:
- AuthN - provides authentication so you can track who inputs what and when
- Redact - provides PII redaction with detection of over 40 different types of sensitive information
- Secure Audit Log - logs the user, cleansed prompt, and model to a secure tamper-proof audit trail
- Sends the cleansed prompt to the OpenAI API and receives the response
- Domain Intel - Performs a Domain Reputation lookup on any domain names in the response
- URL Intel - Performs a URL Reputation lookup on any URLs in the response
- Defangs any malicious domains or URLs found in the response
- On closing your session, the history of prompts disappears
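The flow above can be sketched end to end. Everything below is an illustrative stand-in - the function names, redaction patterns, and model string are assumptions for the sketch, not the actual Pangea SDK or app code:

```typescript
// Stand-in for Pangea's Redact service: mask two common PII patterns.
// (The real service detects 40+ types of sensitive information.)
function redactPII(prompt: string): string {
  return prompt
    .replace(/[\w.+-]+@[\w-]+\.[\w.-]+/g, "<EMAIL>")
    .replace(/\b\d{3}-\d{2}-\d{4}\b/g, "<US_SSN>");
}

interface AuditEntry {
  user: string;
  prompt: string; // the cleansed prompt, never the raw one
  model: string;
  ts: string;
}
const auditTrail: AuditEntry[] = [];

// Steps 2-7 of the flow; authentication (step 1) is assumed to have
// happened upstream. The model call and response sanitization are
// injected so the pipeline itself stays self-contained.
async function handlePrompt(
  user: string,
  prompt: string,
  callModel: (cleansed: string) => Promise<string>, // step 4: OpenAI call
  sanitizeResponse: (resp: string) => string        // steps 5-7: intel + defang
): Promise<string> {
  const cleansed = redactPII(prompt); // step 2: redact PII
  auditTrail.push({
    user,
    prompt: cleansed,
    model: "gpt-3.5-turbo", // illustrative model name
    ts: new Date().toISOString(),
  }); // step 3: audit log
  return sanitizeResponse(await callModel(cleansed));
}
```

The key design point is ordering: redaction happens before both the audit write and the model call, so PII never leaves the security layer in either direction.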
Storing what users have prompted allows you to better train your model, feed it more relevant information, and keep an audit log of the history. The Secure Audit Log service can store the user inputs in a secure log so that you can track who did what and when.
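Tamper-proofing a log is typically done by hash-chaining entries, so that editing any past record invalidates every hash after it. A minimal sketch of that idea - not Pangea's actual implementation - looks like this:

```typescript
import { createHash } from "node:crypto";

interface LogEntry {
  user: string;
  prompt: string;
  model: string;
  prevHash: string; // hash of the previous entry, chaining the log
  hash: string;     // hash over this entry's fields plus prevHash
}

const chain: LogEntry[] = [];
const GENESIS = "0".repeat(64);

function entryHash(user: string, prompt: string, model: string, prevHash: string): string {
  return createHash("sha256")
    .update(JSON.stringify([user, prompt, model, prevHash]))
    .digest("hex");
}

function appendEntry(user: string, prompt: string, model: string): void {
  const prevHash = chain.length ? chain[chain.length - 1].hash : GENESIS;
  chain.push({ user, prompt, model, prevHash, hash: entryHash(user, prompt, model, prevHash) });
}

// Recompute every hash; an edit to any stored entry breaks the chain
// from that point forward.
function verifyChain(): boolean {
  let prev = GENESIS;
  for (const e of chain) {
    if (e.prevHash !== prev || e.hash !== entryHash(e.user, e.prompt, e.model, e.prevHash)) {
      return false;
    }
    prev = e.hash;
  }
  return true;
}
```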
The final layer of defense is the Domain Intel and URL Intel services, which detect and neutralize malicious URLs and domain names in the OpenAI API's response.
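Defanging rewrites an indicator so it no longer resolves or auto-links - a common convention is `hxxp` for the scheme and `[.]` for dots. Here is a minimal sketch of that last layer, with the reputation verdict stubbed out as a callback (the real app asks Pangea's Domain Intel / URL Intel APIs for it):

```typescript
// true = the indicator was flagged malicious by a reputation lookup.
type Verdict = (indicator: string) => boolean;

// Rewrite one URL or domain so it can't be clicked or resolved:
// http -> hxxp, "." -> "[.]".
function defang(indicator: string): string {
  return indicator.replace(/^http/i, "hxxp").replace(/\./g, "[.]");
}

// Scan a model response for URLs and defang only the flagged ones,
// leaving benign links intact.
function sanitizeResponse(response: string, isMalicious: Verdict): string {
  return response.replace(/https?:\/\/[^\s)>"']+/g, (url) =>
    isMalicious(url) ? defang(url) : url
  );
}
```

For example, `sanitizeResponse` would turn `https://bad.example` into `hxxps://bad[.]example` while passing clean URLs through untouched.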
The proof-of-concept app is open-source on GitHub. Visit our repo https://github.com/pangeacyber/secure-chatgpt and deploy the app with a simple NPX command.
We’d love your feedback.
-Oliver
oliverf | 2 years ago | reply
- the user is a bot, or traffic is originating from a botnet
- or your network is connecting to a known bot C&C server
- injecting malicious URLs
- directing you to maliciously hosted domains
- sending you malicious file objects
- your password has been breached as part of a large-scale data breach
Then yes.. although technically these APIs are meant to be embedded into a cloud app, which then ensures the user is protected. There's a lot of work being done right now to use LLMs for defense, simulating what a SOC analyst would do triaging a security event.. but there's likely an equal amount of work being done to put LLMs on the offense. You could automate Nigerian 419 scams, spear phishing, all kinds of wire fraud.. and if you hooked an LLM up to penetration testing tools it could literally launch real attacks.. it's a new world..