What Happened: Microsoft's Bing AI Raises Concerns About Fake News
Microsoft has a big lead over other tech giants in the AI arena, but the company now has a problem that needs to be addressed quickly. Its AI chatbot, Copilot, appears to be producing inaccurate information about the 2024 US election.
Researchers have found that the chatbot mixes details about upcoming elections with information from past incidents and often answers queries incorrectly. AI chatbots have already raised concerns about misinforming the public, and Microsoft has a lot of work to do to fix the reported mess, especially if it wants to avoid heavy-handed government intervention in the months ahead of major elections.
The company has gone all in on artificial intelligence, and at one point it came close to hiring OpenAI CEO Sam Altman to head its own AI research lab.
Its reported $10 billion investment in OpenAI has given Microsoft early access to the latest versions and features of ChatGPT, which is now approaching its fifth-generation model.
AI chatbots can be fun for basic inquiries and requests, but a matter this serious needs to be investigated so that it does not cause major problems for the company and everyone else involved.
Copilot is also getting a wider release, which means more people may be logging into the chatbot for answers, and if those answers are inaccurate or fabricated, Microsoft has a serious problem that cannot be ignored any longer. Cases like this also underline the need for stricter regulation so that artificial intelligence does not undermine elections and other critical systems.