Risks of AI Chatbots Identified by British Officials
Organisations in the UK are being cautioned by British officials against incorporating chatbots driven by artificial intelligence into their operations, as research indicates a growing risk of these bots being manipulated into carrying out detrimental actions.
Britain’s National Cyber Security Center (NCSC) said in two blog posts on Wednesday that experts have yet to get to grips with the potential security problems associated with algorithms that can create human-sounding interactions – called large language models, or LLMs.
Artificial intelligence-based tools are starting to be used as chatbots, which some imagine will displace not only internet searches but also customer service work and sales calls.
According to the NCSC, there may be risks involved, especially if such models are connected to other elements of an organization’s business processes. Academics and researchers have repeatedly found ways to subvert chatbots by feeding them rogue commands or tricking them into bypassing their own built-in safeguards.
For example, an AI-powered chatbot used by a bank could be tricked into performing an unauthorized transaction if the hacker constructed their query correctly.
“Organizations building services that use LLMs need to exercise caution, just as they would use a beta product or code library,” NCSC said in one of its blog posts, referring to the experimental software releases.
“They might not let the product participate in making transactions on behalf of the client, and hopefully wouldn’t fully trust it. The same caution should be applied to LLM companies as well.”
Authorities around the world are grappling with the rise of LLM companies such as OpenAI’s ChatGPT, which companies are combining with a wide range of services, including sales and customer service. The security implications of AI also continue to be highlighted, with US and Canadian officials saying they’ve seen hackers embrace the technology.
A recent Reuters/Ipsos survey found that many corporate employees were using tools like ChatGPT for basic tasks such as drafting emails, summarizing documents and conducting preliminary research.
About 10 percent of those surveyed said their bosses specifically banned external AI tools, while a quarter didn’t know if their company allowed them to use the technology.
Oseloka Obiora, chief technology officer at cyber security firm RiverSafe, said the race to integrate AI into business practices would have “disastrous consequences” if business leaders did not implement the necessary checks.
“Instead of jumping into bed with the latest AI trends, senior managers should think again,” he said. “Evaluating the benefits and risks and implementing the necessary cyber protection to ensure the organization is safe from harm.”