Report Warns Users of Malicious Android Apps Disguised as ChatGPT
A new report released on Friday revealed an increase in Android malware that mimics the well-known AI chatbot ChatGPT in order to victimize smartphone users.
According to Palo Alto Networks Unit 42 researchers, these malware variants emerged alongside the release of OpenAI’s GPT-3.5 and, later, GPT-4, targeting victims interested in using the ChatGPT tool.
Researchers identified two types of active malware: a Meterpreter Trojan disguised as a “SuperGPT” application, and a fake “ChatGPT” application that sends text messages to premium-rate numbers in Thailand.
Additionally, the report mentioned that researchers found a malicious Android Package Kit (APK) sample that turned out to be a Trojan version of a legitimate application.
The legitimate app is an AI assistant built on the latest version of ChatGPT. If the exploit is successful, the trojanized version of the app gives the threat actor remote access to the Android device.
Researchers also found another cluster of APK malware samples. On the surface, the malware appears only to display a web page with a description of ChatGPT. However, according to the report, this threat conceals malicious functionality.
Additionally, all of these APK samples use the OpenAI logo, which is widely associated with ChatGPT, as their app icon, reinforcing the misleading impression that the apps are related to the ChatGPT AI tool.
These APK malware samples are capable of sending text messages to premium-rate numbers in Thailand.
Premium-rate numbers cost more to call or text than regular phone numbers, with the extra charge intended to pay for some kind of service (e.g., an information line).
The company operating the number collects the revenue, but such numbers can also be misused for scams and other fraudulent activity, the report said.
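For context, the premium-SMS abuse described above relies on a standard Android capability: any app that holds the SEND_SMS permission can send text messages programmatically, without the user seeing a messaging screen. The Kotlin sketch below is purely illustrative; the report does not publish the actual malware code, and the number and message shown are placeholders, not values taken from the samples.

```kotlin
import android.telephony.SmsManager

// Illustrative sketch only. Requires the SEND_SMS permission declared in the
// app manifest (and granted at runtime on modern Android versions).
fun sendPremiumSms() {
    val smsManager = SmsManager.getDefault()
    smsManager.sendTextMessage(
        "1900000000",   // placeholder premium-rate number, not from the report
        null,           // use the default SMS service center
        "SUBSCRIBE",    // placeholder message body
        null,           // no sent-status PendingIntent
        null            // no delivery-status PendingIntent
    )
}
```

Because nothing in this call alerts the user, a message like this can be sent silently in the background, which is why granting SMS permissions to unfamiliar apps carries a direct financial risk.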