The company is being pushed to roll out new safety features that should help prevent major deepfake incidents in the future.

Microsoft Introduces Enhanced Safety Measures for Its AI Tool in Response to Taylor Swift Deepfake Concerns

Microsoft has added more protections to Designer, its artificial intelligence text-to-image tool, which users had exploited to create nonconsensual sexual images of celebrities.

The changes come after AI-generated nude images of American singer-songwriter Taylor Swift went viral on X last week. The images reportedly originated on 4chan and a Telegram channel where users create AI-generated images of celebrities, 404 Media reported.

“We are investigating these reports and taking appropriate action to address them,” a Microsoft spokesperson said.

“Our code of conduct prohibits the use of our tools to create adult or non-consensual intimate content, and any repeated attempts to produce content that violates our policies may result in loss of access to the service. We have large teams working on the development of guardrails and other safety systems in accordance with our responsible AI principles,” the spokesperson added.

Microsoft stated that its ongoing investigation could not confirm whether the images of Swift circulating on X were created with Designer. However, the company continues to strengthen its text prompt filtering and to combat abuse of its services, the report states.

Meanwhile, Microsoft chairman and CEO Satya Nadella has said that the explicit Swift AI fakes are “alarming and terrible.”

“I think we have to move quickly on this,” Nadella said in an interview with NBC Nightly News. Swift is reportedly weighing possible legal action against the website responsible for creating the deepfakes.
