A new AI tool, WormGPT, poses a serious threat to ordinary users, as it enables hackers to target their personal data; even emails can be forged easily. (Pixabay)

AI Alert: Be Cautious of WormGPT, the Artificial Intelligence Tool That Could Benefit Cybercriminals

Millions of people are enthusiastically embracing generative AI and AI chatbots, turning to them for answers and everyday conversation. However, with the growing availability of such tools, it is hard to judge their reliability or spot potential warning signs. Alarmingly, some AI tools can even be used to launch cyber attacks on unsuspecting individuals.

A new generative AI tool called WormGPT, similar to ChatGPT, has recently been developed. Its creator describes it as a direct rival to ChatGPT, because it imposes no limits on abusive or illegal text generation.

But what is the threat posed by WormGPT?

WormGPT can be used by criminals to carry out malicious activity. According to findings from SlashNext, the tool can generate phishing and business email compromise (BEC) attacks.

“This tool presents itself as a blackhat alternative to GPT templates designed specifically for malicious activities,” security researcher Daniel Kelley was quoted as saying by newsrnd.com. “Cybercriminals can use such technology to automate the creation of highly persuasive fake emails, personalized to the recipient, increasing the chances of the attack being successful.”

By taking advantage of WormGPT’s chat memory retention and code formatting features, hackers can easily craft advanced, more convincing phishing emails and texts. In fact, cybercriminals don’t even need much skill to use WormGPT for malicious purposes.

Unlike ChatGPT, WormGPT has no pre-defined limits on the content it will create, so it has the potential to fuel serious cyber attacks and drive a rise in online scams and crime.

What ordinary people can do to guard against the misuse of such AI tools is simply to stay cautious about the emails they receive and the conversations they encounter on social media or elsewhere. The best strategy is to avoid clicking links from, or interacting with, anyone on the internet who is a complete stranger to you.

It is not just regular messages and chats: cybercriminals are also using AI tools to create AI-generated videos that impersonate people, such as friends or family members, and can hold conversations on platforms like WhatsApp to scam victims out of their money.
