The threats and risks associated with Generative AI are numerous and need to be addressed as early as possible by companies. After describing cognitive biases, let's take a look at the regulatory risks associated with Generative AI.
Understanding and mastering the regulatory challenges of Generative AI
GDPR compliance
Generative AI, like any processing of personal data, is subject to the principles of the European Union's General Data Protection Regulation (GDPR). Both the pre-processing of data and the training and use of models are affected, and any failure to comply can result in a fine of up to €20 million or 4% of annual worldwide turnover, whichever is higher, in addition to potential reputational damage.
It is therefore crucial to ensure that the GDPR will be respected end to end before launching a Generative AI initiative.
What regulatory principles must generative AI respect?
- Legality: the collection and processing of personal data must have a legal basis, which is difficult to define for all the data needed to train or use Generative AI.
- Minimization: processing must use the strict minimum of data. This can degrade the performance of Generative AI, particularly by increasing the risk of hallucinations.
- Automated profiling: personal data processing that has an impact on individuals cannot be purely automated, which conflicts with Generative AI, over whose operation users have little control.
- Transparency: users must be informed of how their data is used, and on which source the model is based.
- Right to explanation: it is necessary to be able to explain how the algorithm used works in order to be able to justify a result, which is extremely difficult with Generative AI.
- Right to be forgotten: individuals must be able to have their personal data deleted on request, which is extremely difficult to honor once that data has been used to train a Generative AI model.
- Data accuracy: data must be accurate and up-to-date. With Generative AI, there's a risk that the information generated or provided will be inaccurate or misleading.
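The minimization principle above can be illustrated as a pre-processing step that strips records down to the fields a use case strictly needs and masks personal identifiers in free text before anything reaches a model. This is a minimal sketch, not a complete anonymization pipeline; the field names and the email-only masking rule are illustrative assumptions.

```python
import re

# Fields the (hypothetical) use case strictly needs; everything else is dropped.
REQUIRED_FIELDS = {"ticket_id", "category", "message"}

# Simplistic pattern for email addresses in free text; a real pipeline
# would also handle names, phone numbers, addresses, etc.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def minimize(record: dict) -> dict:
    """Keep only required fields and mask emails in the message text."""
    kept = {k: v for k, v in record.items() if k in REQUIRED_FIELDS}
    if "message" in kept:
        kept["message"] = EMAIL_RE.sub("[EMAIL]", kept["message"])
    return kept
```

A record such as `{"ticket_id": 1, "name": "Alice", "category": "billing", "message": "contact alice@example.com"}` would come out with the `name` field dropped and the email replaced by `[EMAIL]`.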
Taking account of the AI Act now
An entry into force in 2026 that must be anticipated today
If GDPR compliance when using Generative AI can prove challenging for companies, the fact that they have had to apply the regulation since May 25, 2018 gives them the perspective needed to adapt. Compliance with the forthcoming European regulation that aims to govern not the processing of personal data but the use of AI itself, the AI Act, will undoubtedly prove much trickier.
Indeed, the AI Act, passed unanimously on February 2, 2024 and due to come into force in 2026, is much broader in scope than the GDPR and intends to limit the risks to fundamental rights as much as possible by ensuring that AI is safe, transparent, traceable, non-discriminatory, environmentally friendly and always supervised by humans.
What are the regulatory risks associated with the AI Act?
The approach consists of defining the following four levels of risk, each requiring appropriate remediation or limitation measures:
- Unacceptable risks: Some AIs will be banned because they are considered a threat to people. AIs enabling social scoring, biometric identification or the categorization of people, for example, fall into this category.
- High risk: AIs that adversely affect security or fundamental rights will be considered highly risky, and will need to be assessed and brought into compliance before they are put on the market and throughout their lifecycle. Bank scoring, for example, falls into this category.
- Limited risk: AIs presenting limited risk will have to comply with minimum transparency requirements to enable users to make informed decisions. Artistic images generated by AI, for example, fall into this category.
- Minimal risk: AIs that do not fall into the other three categories, such as spam filters. They must nevertheless be subject to a code of conduct.
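The four tiers above can be sketched as a simple classification table. The tiers themselves come from the AI Act; the mapping of example use cases to tiers below is a simplified assumption for illustration, not a legal qualification.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"  # banned practices (e.g. social scoring)
    HIGH = "high"                  # conformity assessment required (e.g. bank scoring)
    LIMITED = "limited"            # transparency obligations (e.g. AI-generated images)
    MINIMAL = "minimal"            # voluntary code of conduct (e.g. spam filters)

# Illustrative mapping of example use cases (named arbitrarily here)
# to AI Act risk tiers.
USE_CASE_TIERS = {
    "social_scoring": RiskLevel.UNACCEPTABLE,
    "biometric_identification": RiskLevel.UNACCEPTABLE,
    "bank_credit_scoring": RiskLevel.HIGH,
    "ai_generated_artwork": RiskLevel.LIMITED,
    "spam_filter": RiskLevel.MINIMAL,
}

def classify(use_case: str) -> RiskLevel:
    """Return the risk tier for a known use case (simplified sketch)."""
    if use_case not in USE_CASE_TIERS:
        raise ValueError(f"Unknown use case: {use_case}")
    return USE_CASE_TIERS[use_case]
```

In practice, classification requires a case-by-case legal analysis; a lookup table like this can only serve as a first screening aid.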
What do companies risk if they fail to comply with the AI Act?
In this context, Generative AI will have to meet transparency requirements at the very least. Its classification as a limited-risk or high-risk system will depend on whether it is open or closed, and on the computing power required to train it. In all cases, publishers will have to publish a summary of the data used for training, and will have to comply with EU copyright law.
Fines for non-compliance are substantial and depend on the level of risk:
- 7% of worldwide annual turnover or €35 million for placing on the market an AI posing unacceptable risks.
- 3% of worldwide annual turnover or €15 million for a non-compliant high-risk AI.
- 1% of worldwide annual turnover or €7.5 million for supplying incorrect or misleading information to the authorities.
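The fine ceilings above combine a percentage of turnover with a fixed amount. The sketch below assumes the "whichever is higher" reading that also governs GDPR fines; the tier names are placeholders for the violations listed above.

```python
# Fine ceilings per violation tier: (share of worldwide annual turnover,
# fixed amount in euros). Tier keys are illustrative labels.
FINE_TIERS = {
    "unacceptable": (0.07, 35_000_000),    # prohibited AI placed on the market
    "high_risk": (0.03, 15_000_000),       # non-compliant high-risk AI
    "misleading_info": (0.01, 7_500_000),  # incorrect or misleading information
}

def max_fine(tier: str, annual_turnover_eur: float) -> float:
    """Upper bound of the fine, assuming the higher of the two caps applies."""
    pct, fixed = FINE_TIERS[tier]
    return max(pct * annual_turnover_eur, fixed)
```

For a company with €1 billion in turnover, marketing a prohibited AI would thus expose it to a fine of up to €70 million (7% of turnover exceeding the €35 million floor), while a smaller company would still face the fixed amount.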
Compliance by Design to limit the regulatory risks associated with Generative AI
Taking regulatory risks seriously right from the design stage
Regulatory risks must be taken seriously by all business lines and technical teams, and limiting them must be an absolute priority for companies.
Indeed, bringing Generative AI, or use cases based on it, into compliance must not be done a posteriori, at the risk of higher costs or an incomplete solution that proves irrelevant. It is therefore necessary to follow a strategic approach that integrates the principles of the GDPR and the AI Act at every stage of development and deployment, in a logic of Privacy by Design and Trustworthy AI by Design.
This requires strong sponsorship to ensure that these principles are integrated into the company's culture, and an active role for compliance departments in overseeing their implementation, in collaboration with development teams and end-users.
Adapting design processes
Procedures must be established to ensure transparent practices, particularly with regard to the explainability of algorithms, secure data processing and ethical use. Continuous monitoring and updating are also essential to maintain high standards of processing compliance.
Last but not least, all users must be fully trained in the various regulations in force, so that they can identify the limits they must impose on themselves when using Generative AI, and also be able to recognize the risks arising from the processing carried out.
