ChatGPT is OpenAI's generative artificial intelligence chatbot. (Pexels)

Essential Information on ChatGPT-Related Cybersecurity Risks

ChatGPT is becoming increasingly popular with the general public, with famous personalities and politicians incorporating it into their daily routines. But while most people are using advanced generative artificial intelligence (AI) tools legitimately, one group is exploiting the technology for its own gain: hackers.

While hackers haven’t yet made great strides in the relatively new genre of generative AI, it’s advisable to stay aware of how they can exploit the technology. A new piece of Android malware has surfaced that passes itself off as ChatGPT, according to a blog post by American cybersecurity giant Palo Alto Networks. The malware appeared just after OpenAI released GPT-3.5 and GPT-4 in March 2023, targeting users interested in trying the ChatGPT tool.

According to the blog, the malware contains the Meterpreter Trojan disguised as a “SuperGPT” application. After successful exploitation, it enables remote access to infected Android devices.

The digital code-signing certificate used in the malware samples has been linked to an attacker calling himself “Hax4Us”, and the same certificate has already appeared in several other malware samples. One sample, disguised as a ChatGPT-themed application, sends text messages to premium-rate numbers in Thailand, racking up charges for victims.

The risk for Android users is that the official Google Play store is not the only place they can download apps from, so unverified apps find their way onto Android phones. One basic precaution before sideloading anything is to verify that the file you downloaded is the one the publisher actually shipped, as in the sketch below.
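As an illustration, here is a minimal Python sketch of a checksum check a cautious user could run on a downloaded APK before installing it. It assumes the publisher lists an official SHA-256 checksum somewhere trustworthy; the file name and usage line are hypothetical.

```python
import hashlib
import sys

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 hex digest of a file, reading it in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Usage: python check_apk.py some_app.apk <publisher's SHA-256>
    apk_path, expected = sys.argv[1], sys.argv[2].lower()
    actual = sha256_of(apk_path)
    if actual == expected:
        print("OK: checksum matches the published value")
    else:
        print(f"WARNING: checksum mismatch\n  expected {expected}\n  got      {actual}")
```

A matching checksum doesn’t prove an app is safe, only that it wasn’t altered in transit; a mismatch, though, is a clear signal not to install.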

The rise of advanced technologies such as OpenAI’s GPT-3.5 and GPT-4 has inadvertently facilitated the creation of new AI-based threats. Zscaler, Inc.’s 2023 ThreatLabz Phishing Report highlights that these platforms have empowered cybercriminals to create malicious code, launch Business Email Compromise (BEC) attacks, and develop sophisticated malware that evades detection. In addition, malicious actors are taking advantage of the InterPlanetary File System (IPFS), leveraging its distributed network to host phishing pages and make their removal more difficult.

Phishing using ChatGPT

The impact of AI tools like ChatGPT extends beyond this particular malware. Phishing campaigns targeting well-known brands such as Microsoft, Binance, Netflix, Facebook and Adobe have become more common, and the combination of ChatGPT and ready-made phishing kits lowers the technical barrier for criminals, saving them time and resources. Many of these campaigns hinge on lookalike domains that merely mention a brand, a pattern even simple tooling can flag, as sketched below.
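As a toy illustration, the following Python heuristic flags URLs whose hostname mentions a well-known brand without actually belonging to one of that brand’s domains. The allow-list here is illustrative and incomplete; real phishing detection relies on far richer signals.

```python
from urllib.parse import urlparse

# Illustrative (and deliberately incomplete) map of brand keywords
# to domains the brand legitimately uses.
OFFICIAL_DOMAINS = {
    "microsoft": {"microsoft.com", "live.com"},
    "netflix": {"netflix.com"},
    "binance": {"binance.com"},
}

def looks_like_brand_phish(url: str) -> bool:
    """Flag URLs that name-drop a brand on a domain the brand doesn't own."""
    host = (urlparse(url).hostname or "").lower()
    for brand, domains in OFFICIAL_DOMAINS.items():
        if brand in host:
            # Accept the exact domain or any of its subdomains; flag the rest.
            if not any(host == d or host.endswith("." + d) for d in domains):
                return True
    return False

print(looks_like_brand_phish("https://netflix.account-verify.example"))  # True
print(looks_like_brand_phish("https://www.netflix.com/login"))           # False
```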

In April, Facebook’s parent company Meta said in a report that malware posing as ChatGPT was on the rise across its platforms. The tech giant’s security teams had found ten malware families using ChatGPT and similar themes to deliver malicious software to users’ devices since March 2023.

The consequences are far-reaching as unsuspecting users fall victim to these increasingly sophisticated attacks.

Even ChatGPT itself has had vulnerabilities, such as a recent bug that exposed some users’ chat histories and payment information. The incident, which was traced to an open-source library, served as a reminder that open-source components can become an unintended gateway to security breaches.

Chatbot popularity attracts hackers

Chatbots based on large language models (LLMs) aren’t going anywhere. In fact, they have a bright future in terms of popularity, especially in Asia. According to a Juniper Research report, Asia Pacific accounts for 85 percent of global chatbot retail spending, despite the region accounting for only 53 percent of the world’s population. Several online retailers have partnered with messaging apps, including WeChat, LINE and Kakao.

These partnerships have already fostered a high level of trust in chatbots as a retail channel. Naturally, then, hackers see these tools as a way to make a quick buck on the sly, or simply to harvest valuable personal information.

Mike Starr, CEO and founder of vulnerability and patch management platform trackd, told ReturnByte: “The tried and true compromise methods that have brought bad guys success for years still work exceptionally well: exploiting unpatched vulnerabilities, identity theft, and the installation of malware, often through phishing.” According to Starr, the mechanisms behind these three categories of compromise may evolve, but “the basic elements remain the same.”

How it affects consumers

Cybersecurity threats involving LLM companies can affect ordinary consumers at home in a number of ways, whether it’s a student looking for homework help or someone seeking advice on running a small business. Without proper safeguards for personal data such as chat logs or user-generated content, LLMs are only one breach away from exposing user data. Unauthorized access to or leakage of sensitive information can have serious consequences for consumers, including identity theft or misuse of personal information. Users can limit their exposure by keeping obvious personal data out of prompts in the first place, as sketched below.
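As a rough illustration, here is a minimal Python sketch that strips a few common kinds of personal data from a prompt before it is sent anywhere. The regular expressions are deliberately simplistic and the function name is our own invention; a real scrubber would be far more thorough.

```python
import re

# Simplistic patterns for common personal data. Order matters: card numbers
# are matched before phone numbers so long digit runs aren't mislabelled.
PII_PATTERNS = {
    "card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "phone": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]*\w"),
}

def redact(prompt: str) -> str:
    """Replace likely personal data in a prompt with placeholder tags."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

print(redact("My email is jane.doe@example.com and my card is 4111 1111 1111 1111."))
# -> My email is [EMAIL REDACTED] and my card is [CARD REDACTED].
```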

Does this mean hackers could one day hijack our digital lives via chatbots? Not exactly, says Starr.

“If it ain’t broke, don’t fix it, even for cyber threat actors. AI will likely increase the effectiveness of existing cybercriminals and may make it easier for willing or less skilled hackers to enter the business, but predictions of an AI-driven cyber apocalypse are more a figment of Hollywood writers’ imaginations than they are objective reality,” he says.

So it’s not time to panic, but staying aware is a good idea.

“While none of these activities have risen to the serious effects of ransomware, data extortion, denial of service, cyber terrorism, and so on, these attack vectors are still future possibilities,” said a report by Recorded Future, a US-based cybersecurity company.

To mitigate these effects, it is always wise to be critical of information produced by LLMs, to fact-check when necessary, and to be aware of possible biases or manipulation.

Cybersecurity measures are needed

The emergence of the ChatGPT malware threat underscores the pressing need for cybersecurity measures. Because this malware disguises itself as a trusted application, users risk unknowingly installing it on their devices. Its remote access capabilities pose a significant risk, potentially compromising sensitive data and exposing users to various forms of cybercrime.

To combat this threat, individuals and organizations must prioritize cybersecurity practices such as regularly updating software, using reliable antivirus software, and being careful when downloading applications from unofficial sources.

Additionally, raising awareness of the existence of such threats and promoting cybersecurity education can enable users to identify and mitigate potential risks associated with ChatGPT malware and other evolving cyber threats.

By Navanwita Sachdev, The Tech Panda
