Google Cautions Employees on Potential Risks of Chatbots Powered by Artificial Intelligence
According to four people familiar with the matter, Alphabet Inc is cautioning employees about how they use chatbots, including its own Bard, even as it markets the program around the world.
Google’s parent company has instructed employees not to feed its confidential material to artificial intelligence chatbots, the people said, and the company confirmed, citing a long-standing policy on data protection.
Chatbots, including Bard and ChatGPT, are human-sounding programs that use so-called generative artificial intelligence to converse with users and respond to countless prompts. Human reviewers may read the conversations, and researchers have found that similar AI models can reproduce the data they absorbed during training, creating a leak risk.
Alphabet also warned its engineers to avoid directly using computer code generated by chatbots, some of the people said.
Asked for comment, the company said Bard can make undesired code suggestions but nonetheless helps programmers. Google also said it aims to be transparent about the limitations of its technology.
The concerns show how Google wishes to avoid business harm from software it launched in competition with ChatGPT. At stake in Google’s race against ChatGPT’s backers, OpenAI and Microsoft Corp, are billions of dollars of investment and still untold advertising and cloud revenue from new AI programs.
Google’s caution also reflects what is becoming a security standard for corporations: warning personnel about their use of publicly available chat programs.
A growing number of businesses around the world have set up guardrails on AI chatbots, among them Samsung, Amazon.com and Deutsche Bank, the companies told Reuters. Apple, which did not respond to requests for comment, has reportedly done so as well.
Some 43 percent of professionals were using ChatGPT or other AI tools as of January, often without telling their bosses, according to a survey of nearly 12,000 respondents, including from top U.S. companies, by the networking site Fishbowl.
By February, Google had told staff testing Bard before its launch not to give it internal information, Insider reported. Now Google is rolling out Bard in more than 180 countries and 40 languages as a springboard for creativity, and its warnings extend to its code suggestions.
Google told Reuters it has held detailed discussions with Ireland’s Data Protection Commission and is responding to regulators’ questions, after Politico reported on Tuesday that the company was postponing Bard’s EU launch this week pending more information about the chatbot’s privacy implications.
WORRIES ABOUT SENSITIVE INFORMATION
Such technology can draft emails, documents, even software itself, promising to vastly speed up tasks. However, this content can include misinformation, sensitive data or even copyrighted passages from a “Harry Potter” novel.
Google’s privacy notice, updated on June 1, also says, “Do not include confidential or sensitive information in your Bard conversations.”
Some companies have developed software to address such concerns. For instance, Cloudflare, which defends websites against cyberattacks and offers other cloud services, markets a capability that lets businesses tag certain data and restrict it from flowing externally.
Google and Microsoft also offer enterprise customers conversational tools that carry a higher price tag but don’t absorb data into public AI models. The default setting in Bard and ChatGPT is to store users’ chat history, which users can delete if they wish.
It “makes sense” that companies don’t want their staff to use public chatbots at work, said Yusuf Mehdi, head of consumer marketing at Microsoft.
“Companies are taking an appropriately conservative stance,” said Mehdi, explaining how Microsoft’s free Bing chatbot compares to its enterprise software. “There, our policy is much stricter.”
Microsoft declined to comment on whether it has a blanket ban on staff entering confidential information into public AI programs, including its own, although a different executive there told Reuters he had personally limited his use.
Matthew Prince, Cloudflare’s CEO, said that typing confidential matters into chatbots was like “turning a bunch of PhD students loose in all of your private records.”