Important AI updates you might have overlooked today: UK parcel company pulls rogue AI chatbot, concerns over GenAI security, and more
Let’s take a look at today’s daily roundup of news: A UK parcel company has disabled its AI chat feature after a user coaxed the bot into writing a poem mocking the company’s customer service. In another development, an AI system has proven more effective than diagnosis codes at identifying patients’ alcohol risks before surgery. Additionally, a California community has introduced AI technology aimed at preventing school shootings. Experts have also raised concerns about Mark Zuckerberg’s open-source AGI plans, and a new report warns of the growing cybersecurity threats posed by generative AI.
1. UK parcel company removes AI chat after poetic bot criticizes customer service
A UK parcel company has disabled its AI chat function after a user tricked the system into writing a poem criticizing the company’s customer service. Frustrated by the bot’s lack of help, the user requested a poem, which resulted in verses describing the bot as useless. The case garnered attention on social media with 1.1 million views. According to a Reuters report, the company decided to shut down the AI system in response.
2. AI outperforms diagnosis codes at flagging patients’ preoperative alcohol risks
Artificial intelligence can help detect risky alcohol consumption in patients before surgery, a study using natural language processing has found. By analyzing data from 53,811 surgical patients, the model picked up not only formal diagnosis codes but also contextual indicators of alcohol misuse. While only 4.8 percent of patients carried a relevant diagnosis code, the AI model flagged 14.5 percent as at risk once factors such as weekly drinking habits were taken into account. The Washington Post reports that the research suggests artificial intelligence could help clinicians identify at-risk patients for intervention or support.
3. California community unveils AI technology to prevent school shootings
A Northern California community is using artificial intelligence technology to prevent school shootings. Spade Security Services introduced AI-powered cameras that use machine learning to detect people wielding weapons, triggering instant alerts and locking doors. Drones track armed individuals and provide police with real-time location information. According to a CBS News report, the community sees this technology as a modern security measure that aims to ease parents’ concerns and improve the partnership between security companies and the community.
4. Mark Zuckerberg’s open-source AGI raises concerns among experts
Facebook founder Mark Zuckerberg’s commitment to developing and open-sourcing a powerful artificial general intelligence (AGI) system has raised concerns. Critics, including computer science professor Dame Wendy Hall, call the prospect of freely available AGI alarming, arguing that releasing it without proper regulation would be irresponsible. Zuckerberg emphasizes the importance of open source for the greater good, but experts fear potential harm if AGI falls into the wrong hands, The Guardian reports.
5. Generative artificial intelligence poses growing cybersecurity threats
The rapid integration of generative artificial intelligence into cybersecurity is raising concerns and prompting governments worldwide to regulate the technology to prevent its misuse, according to a recent Aspen Institute report. The report acknowledges the technology’s enormous potential but highlights increasing cyber threats and urges regulators and industry bodies to balance the benefits of generative AI against potential harms. The divergent regulatory approaches of major states and international bodies underscore the need for coordinated efforts to ensure responsible use and prevent abuse, CSO reports.
Also read these top stories of today:
The Apple Vision Pro headset is SOLD OUT! What is cooking? A delay suggests that either demand is strong or supply is limited – or something in between. See what’s happening here.
Sam Altman is a worried man! He fears there won’t be enough chips to go around: some current forecasts for the production of AI-related chips fall short of projected demand. Find out how the situation stands and what Altman plans to do about it here.
In the age of artificial intelligence, THIS is what young people need to learn! As AI advances, young people may be better served by developing critical thinking and sound judgment than by focusing on coding skills. Does that make sense or not? Find out here.