[Image: lock with a European flag]

EU AI regulation as a pioneer

Low-risk, limited-risk, high-risk, or prohibited: providers' software systems are to be classified into these four categories, with control authorities responsible for checking them. This model underlies the European AI regulation, the world's first attempt to regulate artificial intelligence, and it brings more clarity, for example with regard to the controversial use of biometric data.

The planned European AI regulation divides artificial intelligence into risk classes and demands comprehensive risk management from AI providers. Control authorities will be established for monitoring.

The European AI regulation is the world's first attempt to regulate artificial intelligence. It thus has a pioneering character and could become a model for corresponding approaches outside the EU and by international bodies, similar to the General Data Protection Regulation (GDPR).

  • In the future, AI systems will be divided into four risk classes
  • Manufacturers and suppliers must carry out a risk assessment, which is verified by control authorities
  • Implementation of the regulation is expected in 2026

After tough wrangling, the EU Parliament recently approved the draft AI regulation, following a year and a half of negotiations. The Parliament struggled with the Commission's draft; after all, it concerns regulating a technology that could bring great benefits to humanity but at the same time harbours unforeseeable dangers. The use of biometric data in AI systems was particularly controversial.


Four risk classes for AI systems

The central point of the regulation is a risk classification of AI applications. Artificial intelligence will be divided into four classes: low-risk, limited-risk, high-risk and prohibited software systems. The last group includes social scoring systems that evaluate people's behaviour, the automatic recognition of emotions, for example in police interrogations, and area-wide surveillance using real-time biometric data.

In the low-risk group, only minimum standards are to apply to developers of generative AI systems. In the second category, limited-risk applications, special rules apply whenever AI is used in areas that involve a high risk, such as healthcare. High-risk systems form the third level; this category contains specifications for cooperation between providers of foundation models and the commercial customers who use these models for their own purposes. OpenAI, for example, as the provider of ChatGPT, would have to pass on certain information about the functioning of the model to its customers.
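
To make the tiered model concrete, here is a minimal Python sketch of how a provider might represent the four risk classes and the duties described above. The enum labels, the example use-case mapping and the one-line duty summaries are illustrative assumptions paraphrasing this article; the regulation itself defines the categories in legal language, not as a lookup table.

```python
from enum import Enum

class RiskClass(Enum):
    """The four risk tiers described in the article (illustrative labels)."""
    LOW = "low-risk"
    LIMITED = "limited-risk"
    HIGH = "high-risk"
    PROHIBITED = "prohibited"

# Hypothetical mapping from example use cases to tiers -- invented for
# illustration, not taken from the regulation's annexes.
EXAMPLE_CLASSIFICATION = {
    "spam_filter": RiskClass.LOW,
    "medical_triage_assistant": RiskClass.LIMITED,
    "foundation_model_api": RiskClass.HIGH,
    "social_scoring": RiskClass.PROHIBITED,
    "realtime_biometric_surveillance": RiskClass.PROHIBITED,
}

# One-line duty summaries paraphrasing the article, not the legal text.
DUTIES = {
    RiskClass.LOW: "meet minimum standards",
    RiskClass.LIMITED: "follow special rules in high-risk domains such as healthcare",
    RiskClass.HIGH: "share information about the model with commercial customers",
    RiskClass.PROHIBITED: "may not be offered in the EU at all",
}

for use_case, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{use_case}: {tier.value} -> {DUTIES[tier]}")
```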

In addition, so-called data governance rules are meant to ensure that training data does not lead to prejudice and discrimination, for example because people of colour are underrepresented. Another point: if copyrighted material is used to train AI models, this must be disclosed so that authors can exercise their rights. This has already led to conflicts in the past, as shown by the lawsuit worth billions filed by the photo agency Getty Images against Stability AI, whose AI functions, among other things, as an image generator.
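
The kind of underrepresentation check these data governance rules imply can be pictured with a short, hypothetical Python sketch: it counts how often each value of a demographic attribute occurs in a training set and flags values below a minimum share. The attribute name, the 5% threshold and the toy data are all invented for illustration; the regulation does not prescribe any particular metric.

```python
from collections import Counter

def representation_report(records, attribute, min_share=0.05):
    """Flag attribute values whose share of the training data falls below
    min_share -- a crude proxy for the underrepresentation that data
    governance rules are meant to catch."""
    counts = Counter(record[attribute] for record in records)
    total = sum(counts.values())
    return {
        value: (count / total,
                "underrepresented" if count / total < min_share else "ok")
        for value, count in counts.items()
    }

# Toy data: 'group' is a hypothetical demographic label.
training_data = [{"group": "A"}] * 90 + [{"group": "B"}] * 8 + [{"group": "C"}] * 2
for value, (share, status) in representation_report(training_data, "group").items():
    print(f"group {value}: {share:.0%} ({status})")
```

On this toy data, group C falls below the 5% threshold and is flagged, while groups A and B pass; a real audit would of course use domain-appropriate attributes and thresholds.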


Control by audit authorities

Compliance with the planned measures is to be monitored by audit authorities. How this control will be designed, however, is still open and is likely to cause controversy. There could also be country-specific differences that complicate the use of AI in the EU. Criticism has already been voiced that companies are to carry out their own risk analysis first, with audit authorities only checking afterwards whether it is correct.

In principle, the idea behind the EU regulatory model is that AI companies set up a comprehensive risk management system that assesses all foreseeable risks to areas such as security, freedom of expression or democracy, and derives risk mitigation measures from that assessment. How practicable this approach is remains controversial: critics point out that this form of regulation could slow down the development of generative AI in the EU in important areas such as medicine and education. Rapid implementation is not expected in any case; the laws resulting from the regulation are not due to come into force until 2026.
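
As a rough illustration of what such a risk management system might record, the following Python sketch models a risk register with likelihood, impact and mitigation fields. All field names, scales and example entries are assumptions made for illustration; the regulation does not specify any data format.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    """One entry in a provider's risk register; all fields are illustrative."""
    description: str
    affected_area: str       # e.g. security, freedom of expression, democracy
    likelihood: int          # 1 (rare) .. 5 (frequent)
    impact: int              # 1 (minor) .. 5 (severe)
    mitigations: list = field(default_factory=list)

    @property
    def score(self) -> int:
        # Simple likelihood-times-impact score, a common risk-matrix convention.
        return self.likelihood * self.impact

register = [
    Risk("model generates disinformation at scale", "democracy", 3, 5,
         ["output filtering", "provenance labelling"]),
    Risk("prompt injection leaks personal data", "security", 2, 4,
         ["input sanitisation", "regular red-team tests"]),
]

# Review the highest-scoring risks first and attach mitigations to each.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"[{risk.score}] {risk.affected_area}: {risk.description} "
          f"-> mitigations: {', '.join(risk.mitigations)}")
```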
