YouTube policies scrutinized as shocking deepfake video of Ripple CEO emerges online
Security researchers have had a difficult year in 2023 due to the misuse of artificial intelligence (AI). While AI has brought greater efficiency and convenience to users, it has also been exploited by malicious actors to deceive people and carry out illegal activities. In numerous incidents, scammers have impersonated prominent figures in videos. A recent example is a deepfake video that surfaced on YouTube, in which a counterfeit Ripple CEO urged viewers to multiply their cryptocurrency investments. Learn more about this incident below.
Ripple CEO deepfake controversy
The crypto community has seen another rise in deepfakes, this time targeting Brad Garlinghouse, CEO of US crypto solutions provider Ripple. In a fraudulent video previously available on YouTube, a fake Ripple CEO urged people to send their XRP tokens to a specific contract, promising to double them. The video also included a QR code directing unsuspecting victims to a fake website, creating a real risk of financial loss. This is just the latest example in a recent wave of XRP scams.
Surprisingly, Google still hasn’t removed the video, according to reports. Concerned Redditors reached out to the Mountain View-based tech giant, but its ethics and safety staff reportedly denied the request, saying the ad did not violate its policies, and even asked for more information to be provided within six months.
What is a deepfake?
According to a report by the National Cybersecurity Alliance (NCA), deepfakes are AI-generated videos, images, and audio that have been edited or manipulated to make someone appear to say or do something they never did. Deepfakes can be used to trick, manipulate, and defame anyone, whether a celebrity, a politician, or an ordinary person. The NCA warned: “If your voice identity and sensitive information fall into the wrong hands, a cybercriminal could use a deep fake voice to contact your bank.”