AI-powered tools and deepfakes pose a deception challenge to internet users
Artificial intelligence, deepfakes and social media… Little understood by laypeople, the combination of the three forms a mysterious barrier for millions of internet users caught up in the daily battle of filtering the genuine from the fake.
The fight against misinformation has always been hard, and it has become harder still as AI-based tools make deepfakes more difficult to detect on many social media platforms. AI's unintended ability to create fake news faster than it can stop it has troubling consequences.
“In India’s rapidly changing information ecosystem, deepfakes have emerged as the new frontier of disinformation, making it difficult for people to distinguish between false and truthful information,” Syed Nazakat, founder and CEO of DataLEADS, a digital media group that builds information literacy and infodemic-management initiatives, told PTI. India is already fighting a flood of misinformation in various Indian languages. This will get worse as AI bots and tools churn out deepfakes across the internet.
“The next generation of artificial intelligence models, called generative AI – for example DALL-E, ChatGPT, Meta’s Make-A-Video, etc. – do not need source content to transform. Instead, they can generate an image, text or video based on prompts alone. These are still in development, but their potential to cause harm is visible, because we would not have the original content to use as evidence,” added Azahar Machwe, who worked as an enterprise architect for artificial intelligence at British Telecom.
WHAT IS A DEEPFAKE?
Deepfakes are photos and videos that realistically replace one person’s face with another’s. Many artificial intelligence tools are available to internet users on their smartphones almost for free. In its simplest form, artificial intelligence can be explained as computers doing things that otherwise require human intelligence. A notable example is the ongoing competition between Microsoft-backed ChatGPT and Google’s Bard.
While both AI tools automate the creation of human-level writing, the difference is that Bard uses Google’s Language Model for Dialogue Applications (LaMDA) and can draw on real-time, current information from the internet. ChatGPT uses the Generative Pre-trained Transformer 3 (GPT-3) model, trained on data up to the end of 2021.
RECENT EXAMPLES
Two synthetic videos and a digitally altered screenshot of a Hindi newspaper report shared last week on social media platforms including Twitter and Facebook highlighted the unintended consequences of artificial intelligence tools in creating altered photos and videos with misleading or false claims.
A synthetic video is any video created by artificial intelligence without cameras, actors and other physical elements.
A video of Microsoft founder Bill Gates being grilled by a journalist during an interview was shared as real and later found to be edited. A digitally altered video of US President Joe Biden calling for a national draft (mandatory enlistment of individuals into the armed forces) to fight the war in Ukraine was shared as genuine. In another case, a photo edited to look like a Hindi newspaper report was widely circulated to spread misinformation about migrant workers in Tamil Nadu.
All three – the two synthetic videos and the digitally altered screenshot of a Hindi newspaper report – were shared on social media by thousands of internet users who believed they were real, prompting stories on social and mainstream media about the unintended consequences of AI tools in creating altered photos and videos with misleading or false claims.
PTI’s Fact Check team investigated the three claims and declared them “deepfakes” and “digitally manipulated” content, created using AI-powered tools readily available on the internet.
AI AND FAKE NEWS
A few years ago, the introduction of artificial intelligence in journalism raised hopes of a revolutionary upheaval in the industry and in the production and distribution of news. It was also seen as an effective way to curb the spread of fake news and misinformation.
“The weakness of deepfakes has been that they require original content to work. For example, the Bill Gates video masked the original audio with a fake one. Such videos are relatively easy to expose if the original can be identified, but that takes time and the ability to search for the original content,” Azahar told PTI.
He believes that the deepfakes recently shared on social media are easy to trace, but worries that exposing such synthetic videos will become more challenging in the coming days.
“Transforming the original video can lead to errors (e.g., lighting/shadow mismatch) that AI models can be trained to detect. These resulting videos are often of lower quality to hide these errors from algorithms (and humans),” he explained.
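The idea that transformation errors like lighting or shadow mismatches can be caught automatically can be sketched in miniature. The snippet below is a toy illustration only, not a real deepfake detector: it flags frames whose average brightness deviates sharply from the rest of the clip, a crude stand-in for the consistency cues that trained AI models actually learn. The function name, threshold and synthetic frame data are all hypothetical.

```python
import numpy as np

def flag_inconsistent_frames(frames, z_thresh=3.0):
    """Flag frames whose mean brightness is a statistical outlier
    within the clip -- a crude proxy for lighting/shadow mismatch.
    `frames` is a list of HxW grayscale arrays (hypothetical input)."""
    means = np.array([f.mean() for f in frames])
    z_scores = np.abs(means - means.mean()) / (means.std() + 1e-9)
    return [i for i, z in enumerate(z_scores) if z > z_thresh]

# Toy clip: 19 consistent frames plus one abruptly brighter frame.
rng = np.random.default_rng(0)
frames = [np.full((4, 4), 100.0) + rng.normal(0, 1, (4, 4))
          for _ in range(19)]
frames.append(np.full((4, 4), 180.0))  # simulated lighting mismatch

print(flag_inconsistent_frames(frames))  # the outlier frame's index: [19]
```

Real detectors learn far subtler cues (blending boundaries, eye reflections, temporal jitter), which is why, as the article notes, fakers respond by lowering video quality to hide such artefacts.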
According to him, fake news circulates in many forms, and the deepfakes created today with very basic AI-powered tools are relatively easy to debunk.
“But you can’t have 100 percent accuracy. For example, Intel’s detector promises 96 percent accuracy, which means four out of a hundred will still slip through,” he added.
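Machwe's point about accuracy scales badly with volume, which a few lines of arithmetic make concrete. The function and the one-million figure below are illustrative assumptions, not numbers from the article; only the 96 percent accuracy claim comes from the text.

```python
# Why "96 percent accuracy" still leaks fakes at platform scale.
def missed_fakes(total_fakes, accuracy):
    """Number of deepfakes a detector fails to catch, assuming the
    stated accuracy applies uniformly (an illustrative simplification)."""
    return round(total_fakes * (1 - accuracy))

print(missed_fakes(100, 0.96))        # 4 of every 100 slip through
print(missed_fakes(1_000_000, 0.96))  # at scale: 40000 missed
```

Even a small miss rate therefore translates into tens of thousands of undetected fakes once uploads number in the millions.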
ROAD AHEAD
Most social media platforms claim to curb the spread of misinformation at the source by building fake-news detection algorithms based on language models and crowdsourcing. The aim is to stop false information from spreading in the first place, rather than detecting and removing it afterwards.
While the examples of deepfakes above highlight the potential threats of artificial intelligence in creating fake news, AI and machine learning have also given journalism a range of tools that ease content creation, such as automatic speech-recognition transcription.
“Artificial intelligence continues to help journalists focus their energy on developing high-quality content, as the technology enables timely and fast distribution. A human in the loop should check the consistency and veracity of content shared in any form – text, image, video, audio, etc.,” Azahar said.
Deepfakes should be clearly labelled as “synthetically generated” in India, which had more than 700 million smartphone users (aged two and above) in 2021. According to a recent Nielsen report, rural India had more than 425 million internet users, 44 percent more than the 295 million internet users in urban India.
“People tend to join ‘echo chambers’ of like-minded people. We need media literacy and critical thinking in the core curriculum to raise awareness and create a proactive approach to help people protect themselves from misinformation.
“We need a multifaceted, multidisciplinary approach across India to prepare people of all ages for today’s and future’s complex digital landscape to be alert to deep fakes and disinformation,” Nazakat said.
In a large country like India, the changing information landscape creates an even greater need for information literacy skills in all languages. He added that every educational institution should make information literacy a priority for the next decade.