
notjoshjames | 2 years ago

The first thing I look for when I see a new app in the cognitive space is the privacy policy. Here it is in its entirety (https://www.comigo.ai/privacy):

* Information We Collect: We collect information you provide when you register for our Service, including your Google account data. This data includes, but is not limited to, your name and email address.

* Use of Information: We use this information to personalize, understand, and improve our Service, communicate with you, respond to your requests, and enhance the overall user experience.

* Sharing of Information: Comigo will not share your personal information with third parties.

* Security: We prioritize protecting your data and have implemented technical and organizational measures to ensure its safety. However, no method of transmission or storage is entirely secure, thus we cannot guarantee absolute security.

* Changes to this Privacy Policy: We may update this Privacy Policy from time to time. We will notify you of any changes by posting the new Privacy Policy on this page.

I've been working on my own similar, local-first solution for several years, because I simply cannot introduce a significant external dependency to my mental stack. Behavioral data is some of the most sensitive and dangerous data to leak.

There's an irony with these types of apps: the more utility I gain from them, the more risk I take on should they introduce dark patterns, become defunct, or even make well-intended changes that break my personal workflows.

The scenario outlined on Comigo's home page begins with the user prompt "Hey Comigo my board meeting is in 4 hours and I wasn't able to work on my presentation slides..." - a context-aware agent would certainly have pre-empted this well before the same day, right?

Sorry if this seems overly critical. I have a lot of passion for this space, and there's a huge need, but there are something like 20,000+ "mental health" apps out there, and nearly every one I've encountered has big red flags.


jasonhansel | 2 years ago

This policy says very little about what happens to the chat logs. In particular: will other humans (e.g. developers, analysts, etc.) have access to them? If so, I wouldn't trust it with anything.