Show HN: Cape API – Keep your sensitive data private while using GPT-4
29 points | gavinuhma | 2 years ago | capeprivacy.com
With Cape, you can easily de-identify sensitive data before sending it to OpenAI. You can also create embeddings from sensitive text and documents and perform vector searches to improve your prompt context, all while keeping the data confidential.
Developers are using Cape with data like financial statements, legal contracts, and internal/proprietary knowledge that would otherwise be too sensitive to process with the ChatGPT API.
You can try CapeChat, our playground for the API, at https://chat.capeprivacy.com/
The Cape API is self-serve and has a free tier. The main features of the API are:
De-identification — Redacts sensitive data like PII, PCI, and PHI from your text and documents.
Re-identification — Reverts de-identified data back to the original form.
Upload documents — Converts sensitive documents to embeddings (supports PDF, Excel, Word, CSV, TXT, PowerPoint, and Markdown).
Vector Search — Performs a vector search on your embeddings to augment your prompts with context.
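To picture what the de-identify/re-identify pair does, here is a toy round trip using regex patterns and a local placeholder map. This is only an illustration: Cape's actual redaction is model-based and runs inside the enclave, and none of these function names come from the Cape API.

```python
import re

def deidentify(text, patterns):
    """Replace matches of each sensitive-data pattern with numbered
    placeholders, returning the redacted text plus a reverse map."""
    mapping = {}
    counters = {}

    def make_sub(label):
        def _sub(match):
            counters[label] = counters.get(label, 0) + 1
            token = f"[{label}{counters[label]}]"
            mapping[token] = match.group(0)
            return token
        return _sub

    for label, pattern in patterns.items():
        text = re.sub(pattern, make_sub(label), text)
    return text, mapping

def reidentify(text, mapping):
    """Swap placeholders back to the original values."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

patterns = {"Name": r"Alice Smith", "Card": r"\b\d{4}-\d{4}-\d{4}-\d{4}\b"}
redacted, mapping = deidentify("Alice Smith paid with 4242-4242-4242-4242.", patterns)
# redacted text reads: "[Name1] paid with [Card1]."
restored = reidentify(redacted, mapping)
```

The key property is that the mapping never leaves your side: only the placeholder text would be sent to the LLM, and the response is re-identified locally.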
To do all this, we work with a number of privacy and security techniques.
First of all, we process data within a secure enclave, which is an isolated VM with in-memory encryption. The data remains confidential. No human, including our team at Cape or the underlying cloud provider, can see the data.
Secondly, within the secure enclave, Cape de-identifies your data by removing PII, PCI, and PHI before it is processed by OpenAI. As GPT-4 generates and streams back the response tokens, we re-identify the data so it becomes readable again.
In addition to de-identification, Cape also has API endpoints for embeddings, vector search, and document uploads, which all operate entirely within the secure enclave (no external calls and no sub-processors).
Why did we build this?
Developers asked us for help! We've been working at the intersection of privacy and AI since 2017, and with the explosion of interest in LLMs we've had a lot of questions from developers.
Privacy and security remain one of the biggest barriers to adopting AI like LLMs, particularly for sensitive data.
We’ve spoken with many companies that have been experimenting with ChatGPT or the GPT-4 API. They are extremely excited about the potential; however, they find that taking an LLM-powered feature from PoC to production is a major lift, and it’s uncharted territory for many teams. Developers have questions like:
- How do we ensure the privacy of our customers’ data if we’re sending it to OpenAI?
- How can we securely feed large bodies of internal, proprietary data into GPT-4?
- How can we mitigate hallucinations and bias so that we have higher trust in AI-generated text?
The features of the Cape API are designed to help solve these problems for developers, and we have a number of early customers using the API in production already.
To get started, check out our docs: https://docs.capeprivacy.com/
View the API reference: https://api.capeprivacy.com/v1/redoc
Join the discussion on our Discord: https://discord.gg/nQW7YxUYjh
And of course try the CapeChat playground at https://chat.capeprivacy.com/
garciasn|2 years ago
So far, most companies we have spoken to are literally SHOCKED that we require SOC 3 (one company even told me they'd never heard of SOC 3) and that everything needs to be hashed before it goes out and mapped back to the actual values on our end. They think we're being too cautious and are really trying to get to a sale without understanding that it's literally NOT something we can do, and NO ONE else should be doing it either.
gavinuhma|2 years ago
The de-identification itself requires a language model, which has its own complexity and costs to operate. At Cape we're going as far as we can to offer a secure API that's self-serve and easy to use, to make these features accessible to developers, but it does require trust in Cape and the underlying AWS Nitro Enclaves that we use. Client-side attestation is a security feature that can provide cryptographic verification of the secure enclave to the client. But local is always better when possible!
survirtual|2 years ago
I want fewer parties involved with secure data, not more. This should be an on-prem solution with no external network access and no direct calls to OpenAI. A call is made to this service to obfuscate, then another call to OpenAI, all managed by a coordinating mechanism that is open source / trusted.
Better yet, maybe LLMs should be required to have weights released considering they are trained on the collective of human knowledge. Seems strange to use a significant sum of human knowledge that is publicly available then deny everyone access to the weights.
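The on-prem flow proposed above could be sketched roughly like this, with both the local redactor and the LLM call stubbed out (none of this is Cape's or OpenAI's actual API; the function names are hypothetical):

```python
def redact_locally(text):
    # On-prem step: strip sensitive values before anything leaves the network.
    # (Stub: a real deployment would run a local de-identification model here.)
    return text.replace("Alice Smith", "[Name1]"), {"[Name1]": "Alice Smith"}

def call_llm(prompt):
    # External step: only the redacted text ever reaches the LLM provider.
    # (Stub standing in for an OpenAI API call.)
    return f"Summary: {prompt}"

def coordinate(text):
    """Open-source coordinator: redact on-prem, call out, restore locally."""
    redacted, mapping = redact_locally(text)
    response = call_llm(redacted)
    for token, original in mapping.items():
        response = response.replace(token, original)
    return response
```

The design point is that the coordinator and the redactor are the trusted, auditable pieces, and the external provider only ever sees placeholders.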
moffkalast|2 years ago
Take financial statements: if it can't read credit card numbers and names, then it can't tell you on which days a given credit card was used and by whom. Maybe that's not the typical use case, but I would imagine it being very annoying, given the already high typical LLM failure rate.
gavinuhma|2 years ago
Many developers have moved away from relying on LLMs for facts, toward providing LLMs with facts and having those facts repurposed.
For example, if you ask an LLM about a famous person, like Wayne Gretzky, it may give you a good answer but there is a chance it may hallucinate key details like the number of points he had in his NHL career.
To combat this, you can provide the LLM with a biography of Wayne Gretzky and you may get more factual answers, but the LLM may still hallucinate if you probe for facts that were not provided.
If you redact his name instead, for example asking “Who is [Name1]?”, the LLM will be unable to answer the question without further context. But if you then provide the redacted biography, the LLM can answer the question while relying only on the provided context (the biography will contain information about [Name1]). If the question falls outside of the context, the LLM will be unable to answer, which is often the desired result. In other words, the LLM cannot rely on its training data about Wayne Gretzky because it is only dealing with [Name1], along with redacted locations, organizations, occupations, etc. from the biography about [Name1]. You force the model to rely on the provided facts.
The use cases we see are people providing legal contracts and financial statements where names and currencies get redacted, and the LLM must work with the redacted values and any other context provided.
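That prompting pattern can be sketched as applying one placeholder mapping to both the question and the retrieved context before building the prompt. The mapping and helper below are hypothetical illustrations, not Cape's implementation:

```python
def build_prompt(question, context, mapping):
    """Apply the same placeholder mapping to the question and the
    retrieved context, so the model can only reason about [Name1] etc."""
    for original, token in mapping.items():
        question = question.replace(original, token)
        context = context.replace(original, token)
    return (
        "Answer using only the context below.\n"
        f"Context: {context}\n"
        f"Question: {question}"
    )

mapping = {"Wayne Gretzky": "[Name1]", "NHL": "[Org1]"}
prompt = build_prompt(
    "Who is Wayne Gretzky?",
    "Wayne Gretzky scored 2,857 points in his NHL career.",
    mapping,
)
```

Because the same entity always maps to the same token, the model can still connect the question to the facts in the context without ever seeing the real name.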
bjtitus|2 years ago
> De-identification
> Re-identification
Wouldn't these two features address your concern? ChatGPT gets a generated unique ID that is still a consistent value for each card, just not the number itself. Then when the results are returned, that generated ID is turned back into the real card number.
This only becomes a problem when the de-identified data itself is needed to answer a question, like tell me how many Visa cards were used in these transactions by checking the card numbers.
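One way to get that consistent-but-redacted behavior is deterministic tokenization: derive each token from a keyed hash of the value, so the same card always yields the same placeholder. This is a hypothetical sketch, not necessarily how Cape implements it:

```python
import hashlib
import hmac

SECRET = b"enclave-only-key"  # hypothetical; would be kept inside the enclave
vault = {}  # token -> real value, used for re-identification

def tokenize(card_number):
    """Deterministic pseudonym: the same card always maps to the same
    token, so the LLM can still group transactions by card."""
    digest = hmac.new(SECRET, card_number.encode(), hashlib.sha256).hexdigest()
    token = f"[Card-{digest[:8]}]"
    vault[token] = card_number
    return token

def detokenize(token):
    """Restore the original card number for a previously issued token."""
    return vault[token]
```

With this scheme the model can answer "how many distinct cards appear here?" while never seeing a real number; only questions about the digits themselves (e.g. "which are Visa cards?") remain out of reach.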
luke-stanley|2 years ago
I do think stripping personal info and adding it back only when needed is in principle a good idea for some situations. But I have big doubts about injecting another party into the mix.
gavinuhma|2 years ago
Please see https://api.capeprivacy.com/v1/docs#/ for more info.
gavinuhma|2 years ago
Re mechanism, the redactions themselves are powered by a language model.
gavinuhma|2 years ago
When the search endpoint is called, the encrypted embeddings are pulled into the Nitro Enclave and decrypted. They are then loaded into an in-memory vector DB, and the search is executed entirely within the enclave. This adds some latency, but it’s more secure because the embeddings are only accessible to the enclave.
Chat history is never stored by the API; developers can manage it on their own client side. With CapeChat, we keep chat history local on the device.
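The in-enclave search step can be pictured as a brute-force similarity scan over the decrypted embeddings. This toy cosine-similarity version stands in for whatever vector DB actually runs inside the enclave:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(query, embeddings, top_k=1):
    """Brute-force in-memory vector search: score every stored embedding
    against the query and return the best-matching document ids."""
    scored = sorted(
        embeddings.items(),
        key=lambda item: cosine(query, item[1]),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:top_k]]

# Hypothetical 2-d embeddings; real ones would have hundreds of dimensions.
embeddings = {"doc-a": [1.0, 0.0], "doc-b": [0.0, 1.0], "doc-c": [0.7, 0.7]}
```

The retrieved document ids would then be used to pull the matching (redacted) text into the prompt as context.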