top | item 45998855


falleng0d | 3 months ago

From the dialogues in the pictures it doesn’t sound like they are using anyone’s emails for training. The messaging indicates it’s more like using them as context and/or generating embeddings for RAG. Unless there’s something else I’m not aware of.

I know that Google does a lot of bad stuff but we don’t need to make up stuff they just aren’t doing

This doomsday messaging and alarmism only serves to degrade the whole cause

edit: and before someone says that they also don’t want that, then let’s criticize it for what it is (opting users into a feature without consent). We don’t need to make stuff up, it really doesn’t help.

johnnyanmac|3 months ago

>When smart features are on, your data may be used to improve these features. Across Google and Workspace, we’ve long shared robust privacy commitments that outline how we protect user data and prioritize privacy. Generative AI doesn’t change these commitments — it actually reaffirms their importance. Learn how Gemini in Gmail, Chat, Docs, Drive, Sheets, Slides, Meet & Vids protects your data.

That's what I see when I click "Learn more" while toggling the smart features on/off.

It may not do it now, but I really don't like the implications. Especially the tone of "it's not actually bad, it's good!"

squigz|3 months ago

> It may not do it now

I don't believe "may" is being used to indicate possibility, but rather permission.

That is to say, there's no reason to think it's not being used, given that wording.

falleng0d|3 months ago

I agree. What I really don’t like is that we have to choose between having smart search and giving up our data.

Is it too much to ask to be able to not give up data for “improvements” but keep the functionality?

neuralkoi|3 months ago

This seems to indeed be confirmed over at https://support.google.com/mail/answer/14615114

"Your data stays in Workspace. We do not use your Workspace data to train or improve the underlying generative AI and large language models that power Gemini, Search, and other systems outside of Workspace without permission."

djhn|3 months ago

But if the terms include a vague permission and/or license to use the data for improving results, then the text is factually correct while obscuring that they have in fact solicited your permission and are therefore using the data, with your permission.

x0x0|3 months ago

Discovering new settings that I was opted in to without being asked does not scream good faith.

Separately, their help docs are gibberish. They must use this phrase 20 times: "content is not used for training generative AI models outside of your domain without your permission." Without telling you if that checkbox is that permission; where that permission is set; or indeed, even if that permission is set. From reading their documentation, I cannot tell if that checkbox in gmail allows using my data outside my organization or not.

NaomiLehman|3 months ago

i don't understand one thing - isn't the Venn Diagram of people who use Gmail in 2025 and who are on HN just 2 circles that don't touch?

rootnod3|3 months ago

Sorry, but that "doomsday" "alarmism" is exactly what is needed and warranted. This practice of sneakily opting users into things they don't want, instead of a very clear full-on pop-up saying "We now use your data and private emails for AI training", is exactly the problem.

> I know that Google does a lot of bad stuff but we don’t need to make up stuff they just aren’t doing

No no. a) they ARE doing a lot of bad stuff and b) that shit ain't made up and they ARE exactly doing that. Or do you also think that Github is NOT using private repos to train Copilot? Do you honestly and truly believe that?

If you do truly believe that, I've got a bunch of bridges to sell you.