All developers of general-purpose AI systems – powerful models with a wide range of possible uses – must meet basic transparency requirements, unless they are provided free and open source. (AFP)

The European Union’s Approach to Regulating Advanced AI Models Such as OpenAI’s ChatGPT

In a significant development, the European Union has reached a preliminary agreement that would impose restrictions on how advanced AI models such as the one behind ChatGPT can operate. The move is considered a crucial component of the world’s first comprehensive regulation of artificial intelligence.

According to an EU document seen by Bloomberg, all developers of general-purpose AI systems – powerful models with a wide range of uses – must meet basic transparency requirements, unless they are provided free and open source.

These include:

  • Maintaining an acceptable use policy
  • Keeping up-to-date information on how they trained their models
  • Providing a detailed summary of the data used to train the models
  • Complying with EU copyright law

Models deemed to pose “systemic risk” will be subject to additional rules, according to the document. The EU would determine that risk based on the amount of computing power used to train the model, with the threshold set at models trained on more than 10 trillion trillion (10 septillion, or 10^25) operations.

Currently, the only model that would automatically meet this threshold is OpenAI’s GPT-4, according to experts. Others may be designated by the EU executive depending on the size of the data set, whether they have at least 10,000 registered business users in the EU, or the number of registered end users, among other possible metrics.
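
To put those figures in perspective, here is a minimal sketch of how such a designation check could be expressed in code. It is purely illustrative: the function and field names are invented for this example, it assumes the compute threshold refers to the total number of operations used in training, and the actual designation process rests with the EU executive and involves further criteria.

```python
# Illustrative sketch only: not part of the AI Act's text or any official
# tooling. The 10**25-operation figure and the 10,000 EU business-user figure
# come from the reporting above; treating the compute figure as *total*
# training operations, and the data structure and function names, are
# assumptions made for this example.
from dataclasses import dataclass

# 10 trillion trillion = 10 * 10**12 * 10**12 = 10**25 operations
COMPUTE_THRESHOLD = 10 ** 25
EU_BUSINESS_USER_THRESHOLD = 10_000


@dataclass
class ModelProfile:
    name: str
    training_operations: float   # total operations used to train the model
    eu_business_users: int       # registered business users in the EU


def may_pose_systemic_risk(model: ModelProfile) -> bool:
    """Toy check of the two criteria reported above."""
    # Automatic designation: training compute above the threshold.
    if model.training_operations > COMPUTE_THRESHOLD:
        return True
    # Possible designation by the EU executive, e.g. based on business users.
    return model.eu_business_users >= EU_BUSINESS_USER_THRESHOLD


if __name__ == "__main__":
    example = ModelProfile("hypothetical-model", 2e25, 3_000)
    print(may_pose_systemic_risk(example))  # True: compute exceeds 10**25
```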

These highly capable models will initially have to sign up to codes of conduct while the European Commission develops more harmonized, longer-term controls. Those that do not sign will have to prove to the Commission that they comply with the AI Act. The exemption for open-source models does not apply to those deemed to pose a systemic risk.

These models will also have to:

  • Report their energy consumption
  • Perform red-teaming, or adversarial testing, either internally or externally
  • Assess and mitigate possible systemic risks and report any incidents
  • Ensure they use adequate cybersecurity controls
  • Report the data used to fine-tune the model and details of their system architecture
  • Conform to more energy-efficient standards if such standards are developed

The preliminary agreement still requires the approval of the European Parliament and the 27 EU member states. France and Germany have previously raised concerns that excessive regulation of general-purpose AI models could kill off European competitors such as France’s Mistral AI or Germany’s Aleph Alpha.

For now, Mistral is unlikely to have to meet the general-purpose AI controls because the company is still in the research and development phase, Spain’s Secretary of State for Digitalization and Artificial Intelligence, Carme Artigas, said early Saturday.
