ChatGPT Raises Concerns for Microsoft and OpenAI Due to New Hacking Risks
Microsoft and OpenAI said on Wednesday that hackers are using large language models (LLMs) such as ChatGPT to refine their existing cyberattack techniques.
The companies have detected attempts by Russian-, North Korean-, Iranian- and Chinese-backed groups to use tools like ChatGPT to conduct reconnaissance on targets and develop social engineering techniques.
In collaboration with Microsoft Threat Intelligence, OpenAI disrupted five state-affiliated actors that attempted to use its AI services in support of malicious cyber activities.
“We disrupted two China-linked threat actors known as Charcoal Typhoon and Salmon Typhoon; an Iran-linked threat actor known as Crimson Sandstorm; a North Korean-linked actor known as Emerald Sleet; and a Russian-based actor known as Forest Blizzard,” said the Sam Altman-led company.
The OpenAI accounts identified as associated with these actors were terminated. The actors generally sought to use OpenAI services to query open-source information, translate, find coding errors, and run basic coding tasks.
“Cybercriminal groups, nation-state threat actors, and other adversaries are researching and testing different AI technologies as they emerge to understand the potential value to their operations and the security controls they may need to circumvent,” Microsoft said in a statement.
While attackers will remain interested in AI and will probe the current capabilities and security controls of these technologies, it is important to keep these risks in context, the company said.
“As always, hygiene practices such as multi-factor authentication (MFA) and Zero Trust defenses are essential as attackers can use AI-based tools to enhance their existing cyber-attacks based on social engineering and discovery of unprotected devices and accounts,” the tech giant noted.
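To illustrate the kind of MFA hygiene Microsoft refers to (this example is not from either company's statement), the sketch below shows a time-based one-time password (TOTP) check in Python using the pyotp library; the account name, issuer, and helper function are hypothetical, and real deployments would also need secret storage, rate limiting, and recovery codes.

```python
# Minimal TOTP-based MFA sketch using pyotp (illustrative only; secret
# storage, rate limiting, and recovery flows are out of scope here).
import pyotp

# Hypothetical per-user secret; in practice this is generated once at
# enrollment and stored encrypted on the server side.
user_secret = pyotp.random_base32()

# The user enrolls by loading this URI into an authenticator app,
# typically via a QR code. Name and issuer below are placeholders.
provisioning_uri = pyotp.TOTP(user_secret).provisioning_uri(
    name="alice@example.com", issuer_name="ExampleCorp"
)
print(provisioning_uri)

def verify_mfa(secret: str, submitted_code: str) -> bool:
    """Return True if the submitted code matches the current TOTP
    window; valid_window=1 tolerates slight clock drift."""
    return pyotp.TOTP(secret).verify(submitted_code, valid_window=1)

# Example check against the code an authenticator app would show now.
current_code = pyotp.TOTP(user_secret).now()
assert verify_mfa(user_secret, current_code)
```

Even a basic second factor like this raises the cost of the social engineering and credential-guessing attacks the statement describes, since a phished password alone is no longer enough to log in.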