The EU AI Act and its Consequences

The European Parliament

The EU's AI Act creates regulatory clarity about the use of AI – including in medicine. It also brings new obligations. We support customers and manufacturers in meeting them.

For five years, the European Parliament worked on the world's first legislation governing the placing of products with artificial intelligence (AI) on the market. The text of the act (EU Regulation 2024/1689) was published in July 2024. Its objective is to promote reliable and trustworthy AI in the European Union – including in medicine. To this end, the AI Act follows a risk-based approach, like the Medical Devices Regulation (MDR). Not every AI system used in medical devices poses a risk to humans that would make it subject to particularly strict procedures. However, the AI Act automatically classifies devices in Class IIa or higher under the MDR as "high-risk systems", because their use may pose a considerable hazard to health, safety or fundamental rights.

Four action areas in focus

In the words of Dr. Marc Kämmerer, Head of our Innovation Management Unit: "It introduces obligations not only for producers and distributors but also for users – obligations they must clearly understand." Roughly speaking, four fields of action can be distinguished. Governance of data quality is still quite clearly a matter for the manufacturers. Among other things, this prevents incomplete training data sets – which can arise, for example, when a data set recruited predominantly from studies on male subjects causes the learning system to construct an algorithm whose results are valid only for male, not female, biology. A further requirement concerns the minimization of known and foreseeable risks, for example the risk of a false positive result becoming the official diagnosis. A third aspect is the assurance of adequate system performance, which is measured not only by output quality but also, for example, by processing time. Should users become aware of shortcomings in this respect, they are legally obligated to report them to the manufacturer.

Last but not least, manufacturers must also meet obligations in the fourth field of action: informing and training users. Marc Kämmerer explains: "The AI Act requires that users know how to deal with an AI system, especially with regard to the risks and the limits of the significance of its diagnoses." It is also important to understand that all of these requirements apply over the entire product lifecycle of an AI system.

Reporting is mandatory for users

In recent months, our team has intensively examined the implications and possible interpretations of the AI Act – with regard both to our own obligations and to the ways we can support customers and partners. For example, JiveX may in future be able to make a substantial contribution to recording how users operate AI systems and what quality of results those systems deliver, with a view to post-market surveillance. Our software could additionally offer a low-threshold way of transmitting these data reliably and in compliance with the GDPR. In this respect, Marc Kämmerer emphasizes: "Even where such channels do not yet exist, all AI users are obligated to report problems with anomalies in the system to the manufacturers." His advice: even though the transition deadline for high-risk systems does not expire until 01 August 2026, manufacturers, distributors and users should familiarize themselves as soon as possible with the new obligations and the necessities arising from them.

You can read the text of the AI Act here.
