AI is being used by UK officials, but it has been providing discriminatory results. (Bloomberg)

Allegations of Discrimination Surround UK Government’s AI-Based Immigration and Crime Policies

The global adoption of artificial intelligence (AI) has been widespread, with almost every industry incorporating the technology. In recent months, AI has been deployed across sectors including education, finance, healthcare, and agriculture. Although it has proven effective at improving efficiency, it has also given rise to concerns about misinformation and hallucinations. Despite ongoing efforts by governments to regulate AI, its extensive use has resulted in discriminatory outcomes.

The use of artificial intelligence leads to discriminatory results

According to a Guardian report, UK government officials are using artificial intelligence for a range of tasks. From flagging potential sham marriages to deciding which retirees receive benefits, AI involvement has proven useful. However, it has also led to discriminatory results. One case highlighted by the Guardian’s investigation involved the Department for Work and Pensions (DWP), where an algorithm flagged dozens of people who, according to an MP, were later cleared of any wrongdoing.

In another case, the UK Home Office has used an AI algorithm to flag potential sham marriages, but the tool reportedly singles out certain nationalities more than others. A facial recognition tool used by the Metropolitan Police has likewise been accused of making more mistakes when recognizing Black faces than white ones.

These are life-changing decisions made with the help of artificial intelligence, a technology that has already shown itself prone to fabricating facts and hallucinating. While UK Prime Minister Rishi Sunak recently said that AI could transform public infrastructure, “from saving teachers hundreds of hours of lesson planning time to helping NHS patients get faster diagnoses and more accurate tests”, these issues put AI in a bad light.

Spreading racist medical ideas

A new study led by the Stanford School of Medicine, published on Friday, found that while AI chatbots can help patients by summarizing doctors’ notes and checking health information, they can also spread racist medical ideas that have already been debunked.

The study, published in a Nature journal, posed medical questions about kidney function and lung capacity to four AI chatbots, including ChatGPT and Google’s Bard. Instead of providing medically accurate answers, the chatbots responded with “false beliefs about differences between white and black patients in things like skin thickness, pain tolerance, and brain size.”

The problem with AI hallucinations

Not only has AI produced discriminatory and even racist results, it has also been accused of presenting false and fabricated information as fact. Earlier this month, Bloomberg’s Shirin Ghaffary asked popular chatbots such as Google Bard and Bing questions about the ongoing conflict between Israel and Hamas, and both chatbots inaccurately claimed that a ceasefire was in place.

AI chatbots have been known to misrepresent facts from time to time, a problem known as AI hallucination. For the uninitiated, AI hallucinations occur when a large language model (LLM) makes up facts and reports them as absolute truth.

Another inaccurate claim by Google Bard concerned the exact death toll. Asked about the conflict on October 9, Bard said the death toll had exceeded “1,300” on October 11, a date that had not yet arrived.

While AI exploded onto the scene with ChatGPT’s debut as a technology that could make life a lot easier and potentially take over jobs, these problems show that the point at which AI can be trusted completely to get the job done is still a few years away.
