Section 4 of the Jitsi Meet ToS grants them similar rights, just with mushier language.
> You give 8×8 (and those we work with) a worldwide license to use, host, store, reproduce, modify, create derivative works..., communicate, publish, publicly perform, publicly display, and distribute such content solely for the limited purpose of operating and enabling the Service to work as intended for You and for no other purposes.
IANAL, but it seems like that would include training on your data as long as the model was used as part of their service.
Everyone who operates a video conferencing service will have some sort of clause like this in their ToS. Zoom is being more explicit, which is generally a good thing. If Jitsi wanted to be equally explicit, they could add something clarifying that this does not include training AI models.
For various reasons I have a bunch of different groups where I use different videocall software for regular meetings - Zoom, Jitsi, Teams, Skype, Google Meet and Webex.
Out of all those, Jitsi is the only one where I can't rely on the core functionality - video calls and screensharing for small meetings (5-6 people); I have had multiple cases when we've had to switch to something else because the video/audio quality simply wasn't sufficient, but a different tool worked just fine for the same people/computers/network.
Like, I fully understand the benefits of having a solution that's self-hosted and controlled, so we do keep using self-hosted Jitsi in some cases for all these reasons, but for whatever reason the core functionality performs significantly worse than the competitors. Like, I hate MS Teams due to all kinds of flaws it has, but when I am on a Teams meeting with many others, at least I don't have to worry if they will be able to hear me and see the data I'm showing.
How does Jitsi handle 500-person+ conference calls these days? This is the killer Zoom feature - it looks like Jitsi can handle up to 500 now: https://jaas.8x8.vc/#/comparison
That's personally not enough for many remote companies. So if we're going to have to have Zoom on our machines anyway (to handle an all-company meeting), why not just use it for the rest?
Worst is relative. Zoom has a lower barrier to entry for normal users (who far outnumber us nerdy types) than any other app in its class. Worst for privacy, best for usability, many argue.
What I take to be the TOS for Google Meet (it's a little hard to tell!) makes no specific reference to AI, but does mention use of customer data for "developing new technologies and services" more generally. https://policies.google.com/terms#toc-permission
Actually, they only affect their hosted meet.jit.si service, right? Not if you self-host Jitsi on your own server (which you should if you're a medium-to-large company, for data protection and all that).
Also, Jitsi can easily be self-hosted, which means no information leaks out at all.
I've refused to install zoom since they installed a Mac backdoor and refused to remove it until Apple took a stand and marked them as malware until they removed it. And that was far from their only skullduggery.
Tangentially related, but a number of telehealth operations with hospitals/therapists/etc... use Zoom -- I suspect because their clients can connect without an app or an account over a browser.
When you join a Zoom session over the browser, you don't sign a TOS. And I assume that actual licensed medical establishments are under their own TOS provisions that are compatible with HIPAA requirements. Training on voice-to-text transcription, etc. would be a pretty huge privacy violation, particularly in the scope of services like therapy: both because there are demonstrable attacks on AIs to get training data out of them, and because presumably that data would then be accessible to employees/contractors who were validating that it was fit for training.
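The extraction risk the parent mentions can be shown with a deliberately tiny sketch (purely illustrative; the corpus, prompt, and secret are invented, and real extraction attacks on large models are far more involved): a character-level trigram model "trained" on transcripts that include one secret will regurgitate it verbatim from a short prompt.

```python
from collections import Counter, defaultdict

# Toy character-level trigram "model" trained on two transcript lines,
# one of which contains a secret. Greedy decoding from a short prompt
# regurgitates the secret verbatim -- memorization in miniature.
corpus = ["the meeting starts at noon", "my ssn is 123-45-6789"]

counts = defaultdict(Counter)  # 2-char context -> next-char frequencies
for line in corpus:
    for i in range(len(line) - 2):
        counts[line[i : i + 2]][line[i + 2]] += 1

def complete(prompt: str, max_new: int = 20) -> str:
    out = prompt
    for _ in range(max_new):
        options = counts.get(out[-2:])
        if not options:
            break
        out += options.most_common(1)[0][0]  # greedy: most frequent next char
    return out

print(complete("my ssn is 1"))  # -> "my ssn is 123-45-6789"
```

A model that has memorized rare training strings will happily complete them for anyone who guesses a prefix; scale changes the difficulty of the attack, not the failure mode.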
Out of curiosity, has anyone using telehealth checked with their doctor/therapist to see what Zoom's privacy policies are for them?
I know that many smaller therapists use Zoom for exactly the reasons you mentioned above - ease of use. They often don't have the technical know-how to assess the technology they're using.
The UK, for example, has hundreds of private mental health practitioners (therapists, psychologists, etc.) that provide their services directly to clients. They almost universally use off-the-shelf technology for video calling, messaging, and reporting.
IANAL, but I did health tech for 10 years and had my fair share of interactions with lawyers asking questions about stuff I built.
HIPAA applies to the provider. Patients have no responsibility to ensure the tech used by their care provider is secure or that their medical records don't wind up on Twitter. HIPAA dictates that care providers ensure that, by placing both civil and sometimes criminal liability on the provider for not going to great lengths here.
In practice, this means lawyers working with the care providers have companies sign legal contracts ensuring the business associate is in compliance with HIPAA, and are following all of the same rules as HIPAA (search: HIPAA BAA).
Additionally, you can be in compliance with HIPAA and still fax someone's medical records.
Related to this, anyone know if Zoom has a separate offering for education (universities, schools, etc)? I teach at a university, and not only do we use Zoom for lectures etc, but also for office hours, meetings, etc, where potentially sensitive student information may be discussed. I'm probably not searching for the right thing; all I found was this: https://explore.zoom.us/docs/doc/FERPA%20Guide.pdf
(FERPA is to higher ed in the US what HIPAA is to healthcare.)
IANAL but “Zoom for Healthcare” is a business associate under HIPAA and treated as an extension of the provider with some added restrictions.
Covered entities (including the EMR and hospital itself) can use protected health information for quality improvement without patient consent and deidentified data freely.
Where this gets messy is that deidentification isn’t always perfect even if you think you’re doing it right (especially if via software) and reidentification risk is a real problem.
To my understanding business associates can train on deidentified transcripts all they want as the contracts generally limit use to what a covered entity would be allowed to do (I haven’t seen Zoom’s). I know that most health AI companies from chatbots to image analysis do this. Now if their model leaks data that’s subsequently reidentified this is a big problem.
Most institutions therefore have policies more stringent than HIPAA and treat software deidentified data as PHI. Stanford for example won’t allow disclosure of models trained on deidentified patient data, including on credentialed access sources like physionet, unless each sample was manually verified which isn’t feasible on the scale required for DL.
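A minimal sketch of why software deidentification is brittle (the scrubber, the note, and all identifiers in it are invented for illustration): a pattern-based filter only removes identifiers in the formats it was written for, and everything else rides along.

```python
import re

def naive_deidentify(text: str) -> str:
    """Deliberately naive scrubber: masks SSN-style numbers and MRNs,
    but has no concept of names, birth dates, or addresses."""
    text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]", text)
    text = re.sub(r"\bMRN:?\s*\d+\b", "[MRN]", text)
    return text

note = ("Pt Jane Doe (MRN: 884213, SSN 123-45-6789), DOB 4 July 1962, "
        "seen at the 12 Elm St clinic.")

scrubbed = naive_deidentify(note)
print(scrubbed)
# The SSN and MRN are masked, but the name, birth date, and address
# survive -- more than enough for reidentification.
```

Free-text transcripts are far messier than this one-liner, which is exactly why institutions treat software-deidentified data as PHI anyway.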
“Limitations on Use and Disclosure. Zoom shall not Use and/or Disclose the Protected Health Information except as otherwise limited in this Agreement or by application of 42 C.F.R. Part 2 with respect to Part 2 Patient Identifying Information, for the proper management and administration of Zoom…”
“Management, Administration, and Legal Responsibilities. Except as otherwise limited in this BAA, Zoom may Use and Disclose Protected Health Information for the proper management and administration of Zoom…”
Not sure if “proper management and administration” has a specific legal definition or would include product development.
“But how should a business associate interpret these rules when effective management of its business requires data mining? What if data mining of customer data is necessary in order to develop the next iteration of the business associate’s product or service? … These uses of big data are not strictly necessary in order for the business associate to provide the contracted service to a HIPAA-covered entity, but they may very well be critical to management and administration of the business associate’s enterprise and providing value to customers through improved products and services.
In the absence of interpretive guidance from the OCR on the meaning of ‘management and administration’, a business associate must rely almost entirely on the plain meaning of those terms, which are open to interpretation.”
Edit: I'm retracting my earlier comment, in which I wrote that the headline didn't seem to match what was in the TOS, since OP never mentioned which part they were concerned about.
I'm now assuming the part they don't like is §10.4(ii):
> 10.4 Customer License Grant. You agree to grant and hereby grant Zoom a perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license and all other rights required or necessary to redistribute, publish, import, access, use, store, transmit, review, disclose, preserve, extract, modify, reproduce, share, use, display, copy, distribute, translate, transcribe, create derivative works, and process Customer Content and to perform all acts with respect to the Customer Content: [...] _(ii) for the purpose of product and service development, marketing, analytics, quality assurance, machine learning, artificial intelligence, training, testing, improvement of the Services, Software, or Zoom’s other products, services, and software, or any combination thereof_
Notice that 10.4(ii) says they can use Customer Content "for ... machine learning, artificial intelligence, training", which is certainly allowing training on user content.
But it is saying that your customer content may be used for training AI, in 10.4:
> 10.4 Customer License Grant. You agree to grant and hereby grant Zoom a perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license and all other rights required or necessary to redistribute, publish, import, access, use, store, transmit, review, disclose, preserve, extract, modify, reproduce, share, use, display, copy, distribute, translate, transcribe, create derivative works, and process Customer Content and to perform all acts with respect to the Customer Content: (i) as may be necessary for Zoom to provide the Services to you, including to support the Services; (ii) for the purpose of product and service development, marketing, analytics, quality assurance, machine learning, artificial intelligence, [...]
Quibbles over the definition of phrases like “Customer Content” and “Service Generated Data” are designed to obfuscate meaning and confuse readers into thinking that the headline is wrong. It is not wrong. This company does what it wants to, obviously, given its complicity with a regime that is currently engaging in genocide.
I am curious if they have been silently saving voice to text transcription in the background on all calls and if AI will be permitted to ingest all of that data. A great deal could be learned from private one on one calls in the corporate world. The insider knowledge one could gain about corporations and governments would be fascinating.
I feel as if 2023 could become the inflection point where we will finally start investing in our own infrastructure again. Video calls for example are really a commodity service to be set up at this point.
It's quite common for corporate/government contracts to have totally different terms that prohibit any kind of AI training (or recording/access at all). This has been the case for years now. Precisely because of the risks you highlight.
In these cases, companies train on content stored/transmitted in the free/individual consumer version only.
10.2 … You agree that Zoom compiles and may compile Service Generated Data based on Customer Content and use of the Services and Software. You consent to Zoom’s access, use, collection, creation, modification, distribution, processing, sharing, maintenance, and storage of Service Generated Data for any purpose, to the extent and in the manner permitted under applicable Law, including for the purpose of product and service development, marketing, analytics, quality assurance, machine learning or artificial intelligence (including for the purposes of training and tuning of algorithms and models), training, testing, improvement of the Services, Software, or Zoom’s other products, services, and software, or any combination thereof, and as otherwise provided in this Agreement
This is very much like the Black Mirror episode Joan Is Awful.
By using modern services we consent to our data, including our likeness, being used in any way the service can extract value from it. User data is such a gold mine that most services should be paying their users instead. Even giving the service away for "free" doesn't come close to making this a fair exchange.
Not to sound pessimistic, but we are already living in a dystopia, and it will only get much, much worse. Governments are way behind in regulating Big Tech, which in most cases they have no desire in since they're in a symbiotic relationship. It's FUBAR.
Hi there - this is Aparna from Zoom, our Chief Operating Officer. Thank you for your care and concern for our customers - we are grateful for the opportunity to double click on how we treat customer content.
To clarify, Zoom customers decide whether to enable generative AI features (recently launched on a free trial basis) and separately whether to share customer content with Zoom for product improvement purposes.
Also, Zoom participants receive an in-meeting notice or a Chat Compose pop-up when these features are enabled through our UI, and they will definitely know their data may be used for product improvement purposes.
There seems to be a subtle shift in general with AI: people feel entitled to treat it as an end in itself, a black box. The agreement says they can use "User Content" for:
> the purpose of product and service development, marketing, analytics, quality assurance, machine learning, artificial intelligence, training, testing, improvement of the Services, Software
So notice that most of these are somehow qualified by the service they are providing you, but the AI part stands alone. If it were to improve the service to me, that would be pretty reasonable, but here it says they can use it for AI as an end unto itself.
Something about the inscrutability of modern AI (nobody knows how it really works, what the limits of its capabilities are etc.) seems to lend itself to this kind of open ended vagueness. If they just wrote "we can use your user generated content for anything we like" it would almost amount to the same thing but people would be outraged. But when they say "it's for AI" everyone nods their head as if it's somehow different to that.
Why not switch? I've had good experiences with competitors. I don't know if they're as nice for mass meetings instead of one-to-one or small groups, but at least for the chats I've had, there's never been any reason to go to Zoom.
(I care more about spyware, privacy, and user sovereignty than AI training.)
At the scale of Zoom and MS Teams ... you could theoretically train an AI model that can autonomously conduct all meetings businesses ever need -- all day every day -- without any human ever needing to attend. So much productivity claimed back!
GenAI provides the agenda, GenAI bots log in with an AI avatar and spout hallucinations, bots agree to disagree and set up a follow-up meeting next week after resolving fake calendar conflicts amongst themselves. Minutes and action items are sent out and reviewed in the next meeting, Jiras are updated, CRs approved, budgets allocated and rescinded.
I like how everyone is up in arms over the use of your meetings for AI training specifically, when the ToS clearly says all "Customer Content/Customer Input" AKA your words, text, voice, face, etc can be used for "Product and Services Development" which could as easily be a facial recognition database, a corporate espionage service, a direct competitor to whatever company you work for, or literally anything else before it's an AI.
Does this mean that Zoom is basically using every attendee's audio and video stream to train their models? How do they define "Service Generated Data"?
I made a video-conferencing app for virtual events (https://flat.social). No audio and video is ever recorded, packets are only forwarded to subscribed clients. Frankly, while designing the service it didn't even cross my mind to use this data for anything else than the online event itself.
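The forward-only design described above (essentially a selective forwarding unit: media packets are relayed to current subscribers and never persisted) can be sketched roughly like this; the class and method names are hypothetical, not flat.social's actual code.

```python
class Room:
    """Toy selective-forwarding unit: fans each packet out to the current
    subscribers and keeps no copy once forwarding is done."""

    def __init__(self) -> None:
        self.subscribers = {}  # participant id -> callback taking bytes

    def subscribe(self, pid: str, on_packet) -> None:
        self.subscribers[pid] = on_packet

    def unsubscribe(self, pid: str) -> None:
        self.subscribers.pop(pid, None)

    def publish(self, sender: str, packet: bytes) -> None:
        # Relay to everyone except the sender; nothing is stored.
        for pid, cb in self.subscribers.items():
            if pid != sender:
                cb(packet)

room = Room()
alice_inbox = []
room.subscribe("alice", alice_inbox.append)
room.subscribe("bob", lambda p: None)
room.publish("bob", b"video-frame-1")
print(alice_inbox)  # alice received bob's frame; the room retained nothing
```

With this topology there is simply no stored recording to train on unless the operator deliberately adds one, which is the architectural point the parent is making.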
It never ceases to amaze me how companies choose the worst software!
It's pretty bad that many nontechnical users are not aware of it, compared to Google Meet or Teams.
> I have tried most of them: Google Meet, Teams, Slack, Discord, Skype, Jitsi and so far I liked Jitsi the most and Skype the least.
A local accounting firm with 4 employees just wants their conferencing software to work - Zoom does that better than anyone else.
There is nothing "worst" about that. It never ceases to amaze me that this community is so out of touch with the general populace.
https://blog.zoom.us/answering-questions-about-zoom-healthca...
Zoom’s BAA: https://explore.zoom.us/docs/en-us/baa.html
My non-expert reading of this legal article suggests “proper management and administration” can cover product development: https://www.morganlewis.com/-/media/files/publication/outsid...
All of this is a lot of BS about nothing.
https://techcrunch.com/2020/06/11/zoom-admits-to-shutting-do...
https://www.bbc.com/news/world-asia-china-22278037.amp
Why do you trust them to generate an AI model of your appearance and voice that could be used to destroy your life? I don’t.
https://web.archive.org/web/20230401045359/https://explore.z...
(Presuming of course that their closed source software really E2E encrypts without a backdoor)
My company pays for Zoom; presumably we agreed to some form of terms before this change. Is this the same TOS for paid accounts too?
> Control over your data
> ...
> Google does not store video, audio, or chat data unless a meeting participant initiates a recording during the Meet session.
https://support.google.com/meet/answer/9852160
Though, I suppose this isn't exactly the same as a TOS.