chrisnolet | 1 year ago
I worked at Apple for many years and their approach to privacy really left a mark on me. I strongly believe that preserving privacy is a moral obligation. (Especially when you're handling people's emails.)
Now, while the beta is running, when you log in to Pocket, there is a big blue switch above the fold under the title 'Privacy.' It says: 'Share recordings with our team.' If you leave it on, that's really helpful for me! But it does exactly what it says, and if you have anything sensitive you don't want to share with me, turn it off.
For your questions:
- The voice data is routed through Retell and the transcripts are passed to OpenAI's API.
- Sensitive data is retained by Retell for 10 minutes (when sharing is off).
- Sensitive data is retained by OpenAI for 30 days 'to identify abuse.'
I'm working with OpenAI to get Zero Data Retention. As it stands, their commitment has been that they will not use API input or output to train models. (I personally trust that commitment, but I understand the skepticism, and I get it if that's a deal-breaker for you.)
Retell is HIPAA-compliant and SOC 2 Type II certified. They've been great to work with.
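To make the data path above concrete, here is a minimal sketch of the transcript leg of the pipeline: audio stays with Retell, and only the text transcript is forwarded to OpenAI's Chat Completions API. This is illustrative only, not Pocket's actual code; the model name, system prompt, and `build_request` helper are all assumptions.

```python
def build_request(transcript: str) -> dict:
    """Package a call transcript as a Chat Completions payload.

    Only the text transcript leaves this function; no audio is included.
    """
    return {
        "model": "gpt-4o",  # placeholder model name, not necessarily what Pocket uses
        "messages": [
            {"role": "system", "content": "Summarize the user's email-related request."},
            {"role": "user", "content": transcript},
        ],
    }

if __name__ == "__main__":
    # Requires the `openai` package (>= 1.0) and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY automatically
    response = client.chat.completions.create(
        **build_request("Archive the newsletter from Saturday.")
    )
    print(response.choices[0].message.content)
```

Under OpenAI's stated API policy, inputs like this transcript are not used for training, though (absent Zero Data Retention) they may be retained for up to 30 days for abuse monitoring, as described above.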
- Regarding the privacy policy: 'User data obtained through third-party APIs (will not be used) to develop, improve, or train generalized AI and/or ML models.' This language was actually required by Google. The word 'generalized' here is broader than it might sound: it doesn't mean AGI, it covers any kind of shared foundation model. There might be a point in the future where we can fine-tune one model per user with a LoRA, but I agree that the risk of PII leaking from a shared model is far too great.
- The company is a Delaware C-corp and subject to U.S. and California laws.
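For readers unfamiliar with the per-user LoRA idea mentioned above, here is a toy sketch of why it keeps user data out of the shared model: the base weights W stay frozen and shared, while each user gets only a small low-rank delta B @ A trained on their data. All names, sizes, and the NumPy framing here are made up for illustration.

```python
import numpy as np

d, r = 16, 2  # model width and LoRA rank (r is much smaller than d)
rng = np.random.default_rng(0)

W = rng.normal(size=(d, d))          # frozen base weights, shared by all users
A = rng.normal(size=(r, d)) * 0.01   # per-user adapter (down-projection)
B = np.zeros((d, r))                 # per-user adapter (up-projection, zero-initialized)

x = rng.normal(size=(d,))
y = x @ (W + B @ A)  # forward pass with this user's adapter applied

# With B zero-initialized, the adapted output matches the base model exactly;
# training only ever updates A and B, so anything learned from one user's data
# lives in their own small adapter, never in the shared W.
assert np.allclose(y, x @ W)
```

The privacy argument is that W is never updated with any individual's data, so there is nothing user-specific in the shared model to leak.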
I really appreciate the opportunity to discuss this. I always want to put privacy and security first, and make sure that's baked into the company culture. Thanks for advocating!
upwardbound2 | 1 year ago
Would you consider allowing the user to select between OpenAI vs Anthropic for the foundation model? I'd recommend making Anthropic the default, as does the Perplexity team: https://www.anthropic.com/customers/perplexity
In the Privacy Policy, maybe you can keep the Google-required sentence, and also add another sentence that makes it explicit that user data will only be used to train user-specific models. This would go a long way towards reassuring many people.
If you're accepting dev partners, I'd love to try your DSL; you can reach me at strangecompanyventure@gmail.com. It seems very powerful, especially if it's the same thing you used for the D&D game project.
Is the game still available somewhere? The old link no longer seems to point to it, but I'm a big fan of the interactive fiction genre and would love to test the game too, along with any other examples of the DSL you're designing.
Cheers, and thank you for your commitment to principles. You have my respect, and probably that of a number of other readers too.