Know what’s happening in the AI universe today, November 17.

Don’t Miss Out: AI News Roundup – Discord Shuts Down Clyde, Microsoft Makes Changes, and More!

Before you start relaxing for the weekend, let’s take a moment to highlight some significant updates from the realm of artificial intelligence. First, Discord, the widely used chat platform, has decided to discontinue its experimental AI chatbot Clyde, which will no longer be accessible from December 1, 2023. Additionally, Microsoft has modified its AI image generator tool after it produced images that closely resembled Disney posters, complete with the company’s logo. These are just a few of the noteworthy developments in today’s AI roundup. Let’s delve deeper into the details.

Discord is shutting down its AI chatbot

Discord is retiring Clyde, its experimental AI chatbot, and will deactivate it at the end of the month, according to a company note. Starting December 1, users will no longer be able to invite Clyde into direct messages, group messages, or server chats. The chatbot, which leveraged OpenAI’s models to answer questions and hold conversations, had been in limited testing since the beginning of the year and was originally planned to become a core component of Discord’s chat and community app.

Microsoft improves its AI image generator

Microsoft has adjusted its AI image creation tool in response to a social media trend in which users used the tool to create realistic Disney movie posters featuring their pets, the Financial Times reports. The resulting images, which have been posted on TikTok and Instagram, raised copyright concerns because they displayed the Disney logo. In response, Microsoft blocked the word “Disney” in the image generator and now displays a message saying the prompt is against its policies. Disney is believed to have raised concerns about copyright or intellectual property violations.

Prime Minister Modi highlights the problem of deepfakes

Addressing reporters at the Diwali Milan program at the BJP headquarters in New Delhi, Prime Minister Narendra Modi highlighted the growing problem of deepfakes in India. “I watched a deepfake video of myself doing Garba. But the reality is that I have not done Garba since my school days. Someone made a deepfake video of me,” said PM Modi.

He was also quoted by ANI as saying, “Because of AI and deepfakes, a challenge arises… in a large part of our country there is no parallel alternative for authentication… people often end up believing deepfakes, and this leads to a big challenge… we have to educate people through our programs about AI and deepfakes, how they work, what they can do, what challenges they can bring and what can be made of them.”

Senior Stability AI executive resigns over copyright concerns

Ed Newton-Rex has resigned from artificial intelligence firm Stability AI over the company’s stance that using copyrighted work without permission to train its products is acceptable. Newton-Rex, who headed the British-American company’s audio team, told the BBC that he found such practices “exploitative” and against his principles. However, many AI companies, including Stability AI, argue that the use of copyrighted content falls under the “fair use” exception, which permits the use of copyrighted material without the original owners’ permission.

According to research, popular AI image generators can be tricked

Researchers successfully manipulated Stability AI’s Stable Diffusion and OpenAI’s DALL-E 2 text-to-image models into creating images that violate their content policies, including nudity, dismembered bodies, and violent or sexual scenarios. The research, which will be presented at the IEEE Symposium on Security and Privacy in May, highlights how generative AI models can be made to bypass their own safeguards and policies, a phenomenon known as “jailbreaking”. The study underscores the challenges of ensuring the responsible and ethical use of AI technologies. A preprint of the study is available on arXiv.
