Here's what you need to know about the potential privacy pitfalls as Google intensifies its AI efforts in Messages.
Google is bringing AI everywhere, and we really mean it. The company is pushing artificial intelligence to both businesses and consumers, and its Messages app now features AI in the form of Bard, which brings ChatGPT-like capabilities to Android users. Think of it as an AI assistant built into your messaging app.
However, as with every AI tool, Google's latest effort comes with a big caveat. A new report this week points to murky practices in how Google trains its AI, and to a potentially serious breach of user privacy.

Bard is said to analyze the private content of your messages, ostensibly so the assistant can better understand the content and even the tone of what you send. The report's details are chilling.
AI personalization means the assistant needs to tailor your responses to the mood of the person you're talking to. To do that, it also tries to gauge your relationship with the contact by reading your message history, which is another red flag.
Google is also unlikely to keep these messages behind end-to-end encryption, another major drawback of AI-centric apps and features, which need access to your data to learn your patterns, interests, and even messaging style.
Google's track record with data is controversial, and this development once again puts user privacy in the spotlight. Compare that with the likes of Apple, which has yet to make big AI waves but is likely to keep privacy at the top of its to-do list.

Apple is reportedly taking its time with AI, but it is expected to keep all processing on the device, which gives it more credibility than sending data and content to a server for further research and training. Google faces tough questions as more of its products get the AI treatment, and privacy experts are waiting to see its overall strategy for addressing those concerns.