Recently, the popular video conferencing platform Zoom made a significant change to its terms of service that has sent ripples of worry across its vast user base.
With this revision, Zoom granted itself permission to use customer data to train artificial intelligence (AI) models. While the change has undoubtedly sparked a lot of chatter and apprehension online, let's look at what it truly implies.
The updated terms grant Zoom a "perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license" over customer content. This extends to purposes such as "machine learning" and "artificial intelligence".
In essence, Zoom can use specific user data for the enhancement of machine learning or AI, including algorithmic training and tuning.
The changes were initially spotted by Stack Diary, a developer-centric website, and the news soon snowballed into a heated debate online. Many users expressed outrage over Zoom's decision to use customer data for AI. The opacity around exactly how Zoom uses data for AI and machine learning only adds to the consternation.
In response to the outcry, on Monday morning, Zoom’s Chief Product Officer Smita Hashim published a blog post that says, essentially, that the company doesn’t do the things described in its Terms of Service.
Zoom reassured its customers that they retain ownership of their content, even though the company can use this content to offer "value-added services". They added that the terms related to AI training referred to the general usage data of their product, which Zoom considers its proprietary data.
Zoom, quite explicitly, has affirmed that, "For AI, we do not use audio, video, or chat content for training our models without customer consent." However, the company also stated that if users opt to use Zoom's AI features, such as a meeting summary tool, they would be asked to consent to sharing that content for AI training. Essentially, users have the choice to turn access to their data on or off.
The company introduced “Zoom IQ” in March, a set of features which summarize chat threads and help you generate automated responses to written chat questions. Zoom IQ is optional. When you enable these features, Zoom presents a little checkbox that is turned on by default.
If you don’t bother to change it, you agree to let the company collect data to build and improve its AI. When a call starts with Zoom IQ enabled, other people in the call get a notification about it titled “Meeting Summary has been enabled.” The popup says “The account owner may allow Zoom to access and use your inputs and AI-generated content to provide the feature and for Zoom IQ product improvement, including model training.”
As a participant on the call, you get two options: “Leave Meeting,” a button that appears in gray, or a cheerier, bright blue button that says “Got it.” That means if you don’t leave the call, someone else has given Zoom consent on your behalf to let the company harness your data to build its AI.
While it's not uncommon for companies to use service-generated data or even vast amounts of user-generated data to hone AI, the thought of our personal video calls being used for such purposes has definitely taken many by surprise. Zoom assures users that they have the final say in this, but the firm language in the terms of service has understandably frightened many.
In a statement, a Zoom spokesperson said:
“Zoom customers decide whether to enable generative AI features, and separately whether to share customer content with Zoom for product improvement purposes.”
“While they published a blog post, a lot of questions remain https://t.co/55NyfDyxic For example, what other value-added services are in development? How will a user learn about them? Value for whom? At what cost? With what implications and accountability in case of failures?”
— Bogdana Rakova (@bobirakova), August 7, 2023
Zoom didn't respond to queries about "Service Generated Content". This content includes everything other than your video, audio, and chats, like usage statistics.
According to Zoom's privacy policy, they can use this data for many things, including training their artificial intelligence. Gizmodo, a technology publication, noted that it looked through Zoom’s settings and couldn’t find any way to opt out of allowing the company to train its AI on service generated content.
In the past, Zoom has not been very good at keeping its promises about user privacy. In 2020, the company said it would offer end-to-end encryption only to paying users, then reversed course after people complained about having to pay for privacy.
Zoom also shared user data with big companies like Google and Facebook without telling users. These and other issues led Zoom to pay $85 million in a settlement in 2021.
Last week, Zoom officially reneged on its work-from-home policy, forcing employees living within 50 miles of an office to be physically present at least two days per week.
At the core of both these issues lies the quintessential question of trust. While Zoom seems to be grappling with trusting its employees to work remotely, customers are finding it difficult to place their trust in Zoom's updated terms. The saga continues, and it remains to be seen how these changes will shape Zoom's future. For now, though, the cloud of uncertainty continues to hover.