AI systems can potentially be biased, leading to ethical, social and legal issues. The standard lays out a set of procedures to identify potential biases and a set of metrics to measure the fairness of such AI systems.

DoT Agency Releases Standard Framework for Assessing and Rating Fairness of AI Systems

The Department of Telecommunications (DoT) recently announced that the Telecommunication Engineering Centre (TEC), a government agency, has introduced a new standard for evaluating and rating the fairness of artificial intelligence (AI) systems.

Although the draft came out in December last year, the standard was officially launched on Friday at a Centre for Development of Telematics (C-DOT) event in New Delhi.

Artificial intelligence systems are increasingly used in e-governance, but they can be biased, leading to ethical, social and legal problems. Here, bias refers to a systematic error in a machine learning model that causes it to make unfair or discriminatory predictions.

The standard published by TEC therefore provides a framework for evaluating the fairness of AI systems: a set of procedures for identifying potential biases, and a set of metrics for measuring how fair an AI system is.

Governments, companies and organizations can use the standard to demonstrate their commitment to fairness, and individuals can use it to assess the fairness of the AI systems they interact with. The standard is also based on the principles of responsible artificial intelligence developed by NITI Aayog, which include equality, inclusivity and non-discrimination.

A step-by-step approach

Artificial intelligence is increasingly being used in all sectors, including telecommunications and related information and communication technology, to make decisions that can affect everyday life. Because unintentional bias in artificial intelligence systems can have serious consequences, this standard provides a systematic approach to ensuring fairness.

It approaches certification through a three-step process: bias risk assessment, setting thresholds for the chosen metrics, and bias testing, in which the system is tested across different scenarios to verify that it works equally well for all individuals.
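
The article does not spell out which metrics or thresholds the standard prescribes, but a minimal sketch can show what the second and third steps might look like in practice for tabular data. The metric below (demographic parity difference) and the 0.10 threshold are common choices from the fairness literature; they are assumptions for illustration, not values taken from the TEC standard:

```python
# Illustrative sketch of threshold-based bias testing on tabular data.
# Metric and threshold are assumptions, not taken from the TEC standard.

def demographic_parity_difference(predictions, groups, positive=1):
    """Largest gap in positive-prediction rates across protected groups."""
    counts = {}
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + (pred == positive))
    rates = {g: pos / total for g, (total, pos) in counts.items()}
    return max(rates.values()) - min(rates.values()), rates

# Step 3 of the certification flow: test decisions across groups and compare
# the observed disparity against a threshold agreed during risk assessment.
preds = [1, 0, 1, 1, 0, 1, 0, 0]                   # model decisions (e.g. loan approved)
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]  # protected attribute per person

gap, rates = demographic_parity_difference(preds, groups)
THRESHOLD = 0.10  # assumed acceptable disparity, illustrative only

print("Positive-prediction rate per group:", rates)
print(f"Disparity: {gap:.2f} ->", "PASS" if gap <= THRESHOLD else "FAIL")
```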

Data modality, in simpler terms, refers to the type of data used to train an AI system. Data comes in various formats, including tabular, text, image, video and audio. For example, tabular data is arranged in rows and columns, text data is presented as text, image data as images, and so on.

The procedure for detecting bias can differ across data types. For example, a common source of bias in text data stems from the encoding of the text input. This means that the way text is represented numerically for the computer can itself carry bias, which can lead the AI system to make biased predictions.
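
To make the encoding point concrete, here is a toy, self-contained sketch. The three-dimensional word vectors are invented purely for illustration (real systems learn embeddings with hundreds of dimensions from large text corpora); the idea is that if occupation words sit closer to one gendered word than another in the embedding space, a model built on those encodings can inherit the association:

```python
import math

# Toy word vectors, invented purely for illustration. In a real embedding
# model these would be learned from text and can absorb societal biases.
vectors = {
    "doctor": [0.9, 0.4, 0.1],
    "nurse":  [0.2, 0.9, 0.3],
    "he":     [1.0, 0.2, 0.0],
    "she":    [0.1, 1.0, 0.2],
}

def cosine(a, b):
    """Cosine similarity: how close two word vectors point."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# An occupation word is skewed if it sits noticeably closer to one
# gendered anchor word than the other.
for word in ("doctor", "nurse"):
    gap = cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])
    print(f"{word}: he-vs-she similarity gap = {gap:+.2f}")
```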

For now, the standard covers only tabular data, with plans to extend it to other formats. It can be used in two ways: self-certification and independent certification.

What is self-certification?

In this case, the entity that developed the artificial intelligence system performs an internal evaluation to check whether the system meets the requirements of the standard. If it does, the entity can then publish a report declaring the system fair.

What is independent certification?

In this case, an external auditor evaluates the artificial intelligence system against the requirements of the standard. If the system meets them, the auditor can then issue a report certifying that it is fair.
