AI systems are safe, secure, trustworthy and socially responsible. (REUTERS)

US NIST develops guidelines to ensure safety and reliability of artificial intelligence

Setting standards for AI safety is a significant challenge for the Biden administration, as artificial intelligence is expected to have a profound impact on our future. Unlike nuclear fission, which was tightly regulated from the start, AI development has been driven primarily by the private tech sector, which has resisted regulation. With billions of dollars at stake, ensuring that AI systems are safe, secure, trustworthy and socially responsible is crucial.

The administration has turned to a small federal agency, the National Institute of Standards and Technology (NIST), to define the parameters. NIST’s tools and measures underpin products and services ranging from atomic clocks to election security technology and nanomaterials.

The agency’s AI work is led by Elham Tabassi, NIST’s chief AI advisor. She oversaw the AI Risk Management Framework released 12 months ago, which laid the groundwork for Biden’s Oct. 30 AI executive order. It catalogs risks such as bias against non-whites and threats to privacy.


Born in Iran, Tabassi came to the United States in 1994 for a master’s degree in electrical engineering and joined NIST soon after. She is the main architect of the standard the FBI uses to measure the image quality of fingerprints.

This interview with Tabassi has been edited for length and clarity.

Q: Emerging AI technologies have capabilities that even their creators don’t fully understand. The technology is so new that there isn’t even an agreed-upon vocabulary. You have highlighted the importance of creating an AI vocabulary. Why?

A: Most of my work has been in computer vision and machine learning. There, too, we needed a common vocabulary so that discussions didn’t quickly descend into disagreement. One term can mean different things to different people, and talking past one another is especially common in interdisciplinary fields like artificial intelligence.

Q: You have said that for your work to be successful, you need the input of not only computer scientists and engineers, but also lawyers, psychologists and philosophers.

A: AI systems are inherently socio-technical, shaped by their environments and conditions of use. They need to be tested in real-world conditions to understand their risks and effects. So we need cognitive scientists, social scientists and, yes, philosophers.

Q: This task is a tall order for a small agency under the Commerce Department that the Washington Post has called “notoriously underfunded and understaffed.” How many people at NIST are working on this?

A: First, I’d like to say that we at NIST have a great history of engaging with broad communities. In compiling the AI risk framework, we heard from more than 240 separate organizations and received approximately 660 sets of public comments. In the quality and impact of what we produce, we do not come across as small. We have more than a dozen people on the team and we are expanding.

Q: Will NIST’s budget increase beyond the current $1.6 billion because of the AI mission?

A: Congress writes the checks for us and we have been grateful for its support.

Q: The executive order gives you until July to create tools to ensure the safety and reliability of AI. I understand you called that a “nearly impossible deadline” at a conference last month.

A: Yes, but I quickly added that this is not the first time we have faced this kind of challenge, that we have a great team, and that we are committed and enthusiastic. As for the deadline, we’re not starting from scratch. In June we convened a public working group focused on four different sets of guidelines, including the authentication of synthetic content.

Q: Members of the House Science and Technology Committee said in a letter last month that they learned NIST plans to issue grants or awards through the new AI Safety Institute, suggesting a lack of transparency.

A: We are exploring options for a competitive process to support collaborative research opportunities. Scientific independence is really important to us. While we run an enormous engagement process, we are the authors of everything we produce. We never delegate that to anyone else.

Q: The consortium created to help the AI Safety Institute is open to criticism because of its industry involvement. What must consortium members agree to?

A: We posted the template for that agreement on our website at the end of December. Openness and transparency are our hallmark. The template is out there.

Q: The AI risk framework was voluntary, but the executive order imposes some obligations on developers. These include submitting large language models to the government’s red-teaming program (to test for risks and vulnerabilities) once they reach a certain size and computing-power threshold. Is NIST responsible for determining which models get red-teamed?

A: Our mission is to advance the measurement science and standards necessary for this work. That will include some evaluations. It’s what we have done for facial recognition algorithms. As for the tasking itself (the red-teaming), NIST is not going to do any of those things. Our role is to help industry develop technically sound, scientifically valid standards. We are a non-regulatory agency, neutral and objective.

Q: How AI systems are trained and what safeguards are placed on them can vary greatly. And sometimes safeguards like cybersecurity have been an afterthought. How do we ensure that risks are accurately assessed and identified – especially when we may not know what publicly released models were trained on?

A: In the AI risk management framework, we came up with a kind of taxonomy of trustworthiness that emphasizes the importance of addressing it during design, development and deployment, including regular monitoring and evaluation throughout the lifecycle of AI systems. Everyone has learned that we can’t afford to try to fix AI systems after they’re already in use. It must be done as early as possible.

And yes, a lot depends on the use case. Take facial recognition. It’s one thing if I use it to unlock my phone. Completely different security, privacy and accuracy requirements come into play when, for example, law enforcement authorities use it to investigate a crime. Trade-offs between convenience and security, bias and privacy – all depend on the operating environment.

