Google Confirms It Will Not Train AI Models With User Data Without Consent
Google has announced that it will not use user data to train its artificial intelligence models without permission. The search giant also claims that generative AI will not change its commitment to privacy; rather, it “reinforces” it.
This is likely a response to the growing popularity of generative artificial intelligence, which has fueled concerns among users that companies are using their data to train AI chatbots and large language models (LLMs).
Yulie Kwon Kim, Vice President of Product Management for the Workspace Platform, clarified that Google does not use users’ Workspace data “to train or improve the underlying generative AI and large language models that power Bard, Search, and other systems outside of Workspace without permission.”
Beyond generative AI, Google Workspace guarantees that the content you add to its services, such as emails and documents, remains yours. Kim went on to say that “we never sell your data, and you can delete it or export it.”
Google has also stated that it does not collect, scan, or use your content in Google Workspace services for advertising purposes.
Kim further stated that Google’s privacy commitment for Workspace extends to all of its users, adding: “But they’re not just words.” To ensure that Google continues to meet these “high standards,” independent auditors verify that Google’s practices are consistent with international standards.
Interestingly, Google can still use publicly available data to train its AI models. In July of this year, Google updated the wording of its privacy policy, changing “language models” to “AI models,” allowing it to use publicly available data to build features and even full products such as Google Bard. This means that anything public and available online can now be used to train AI models such as PaLM 2 and, in the future, even Gemini.