Generative AI: what are the regulatory risks?

The threats and risks associated with Generative AI are numerous, and companies need to address them as early as possible. After describing cognitive biases, let's take a look at the regulatory risks associated with Generative AI.



Understanding and mastering the regulatory challenges of Generative AI

GDPR compliance

Generative AI, like any processing of personal data, is subject to the principles of the European Union's General Data Protection Regulation (GDPR). Both the pre-processing of data and the training and use of models are affected, and any failure to comply can result in a fine of up to 20 million euros or 4% of annual worldwide turnover, in addition to a potential reputational risk.

It is therefore crucial to ensure that the GDPR will be respected from end to end before launching a Generative AI initiative.
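In practice, end-to-end GDPR compliance starts with the pre-processing stage mentioned above. As a minimal, hypothetical sketch (the field names and salt are illustrative), personal identifiers such as e-mail addresses can be replaced with salted hash tokens before a corpus is used for training. Note that under the GDPR this is pseudonymisation, not anonymisation: the data remains personal data, but exposure is reduced.

```python
import hashlib
import re

# Matches plausible e-mail addresses in free text (illustrative pattern).
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def pseudonymise(text: str, salt: str = "per-project-secret") -> str:
    """Replace each e-mail address with a salted, truncated hash token."""
    def _token(match: re.Match) -> str:
        digest = hashlib.sha256((salt + match.group()).encode()).hexdigest()
        return f"<user_{digest[:8]}>"
    return EMAIL_RE.sub(_token, text)

record = "Ticket from jane.doe@example.com: cannot reset password."
print(pseudonymise(record))
```

Because the hash is salted and deterministic, the same person maps to the same token across records, preserving some analytical value while keeping the raw identifier out of the training set.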

What regulatory principles must generative AI respect?

Taking account of the AI Act now

An entry into force in 2026 that needs to be planned today

While GDPR compliance when using Generative AI can prove challenging for companies, the fact that they have had to apply the regulation since May 25, 2018 gives them the hindsight needed to adapt. Compliance with the forthcoming European regulation that governs not the processing of personal data but the use of AI itself, the AI Act, will undoubtedly prove much trickier.

Indeed, the AI Act, approved unanimously on February 2, 2024 and due to come into force in 2026, is much broader in scope than the GDPR. It intends to limit risks to fundamental rights as much as possible by ensuring that AI is safe, transparent, traceable, non-discriminatory, environmentally friendly and always under human oversight.

What are the regulatory risks associated with the AI Act?

The approach consists of defining four levels of risk, each requiring appropriate remediation or limitation measures:

- Unacceptable risk: practices that are banned outright (e.g. social scoring, manipulative techniques);
- High risk: systems subject to strict obligations before they can be placed on the market (e.g. recruitment, credit scoring);
- Limited risk: systems subject to transparency obligations (e.g. chatbots that must disclose they are AI);
- Minimal risk: all other systems, which face no specific obligations.

What do companies risk if they fail to comply with the AI Act?

In this context, Generative AI will at the very least have to meet transparency requirements. Its classification as a limited-risk or high-risk system will depend on whether it is open or closed, and on the computing power required to train it. In all cases, publishers will have to publish a summary of the data used for training and comply with copyright law.

Fines for non-compliance are substantial and depend on the level of risk:

- up to 35 million euros or 7% of worldwide annual turnover for prohibited AI practices;
- up to 15 million euros or 3% of worldwide annual turnover for breaches of the Act's other obligations;
- up to 7.5 million euros or 1% of worldwide annual turnover for supplying incorrect or misleading information to the authorities.

Compliance by Design to limit the regulatory risks associated with Generative AI

Taking regulatory risks seriously right from the design stage

Regulatory risks must be taken seriously by all business lines and technical teams, and limiting them must be an absolute priority for companies.

Indeed, bringing Generative AI, or the use cases built on it, into compliance must not be done a posteriori, at the risk of higher costs or an incomplete solution that proves irrelevant. It is therefore necessary to follow a strategic approach that integrates the principles of the GDPR and the AI Act at every stage of development and deployment, in a logic of Privacy by Design and Trustworthy AI by Design.

This requires strong sponsorship to ensure that these principles are embedded in the company's culture, and an active role for compliance departments in overseeing the implementation of the roadmap, in collaboration with development teams and end-users.

Adapting design processes

Procedures must be established to guarantee transparent practices, particularly with regard to the explainability of algorithms, secure data processing and ethical use. Continuous monitoring and updating are also essential to maintain high standards of compliance.
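The traceability side of these procedures can be built into the code itself. As a minimal sketch (function and field names are hypothetical, and the model call is a stand-in), every generation request can be routed through a wrapper that records the prompt/response pair for later compliance review:

```python
import datetime
import json

def audited_generate(generate_fn, prompt: str, log: list) -> str:
    """Call the model and record the exchange for compliance auditing."""
    response = generate_fn(prompt)
    log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
    })
    return response

audit_log: list = []
fake_model = lambda p: f"echo: {p}"  # stand-in for a real model call
answer = audited_generate(fake_model, "Summarise our refund policy.", audit_log)
print(answer)
print(json.dumps(audit_log[0], indent=2))
```

Keeping the audit trail as structured records rather than free-form logs makes it easier to answer both AI Act transparency requests and GDPR access or erasure requests later on.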

Last but not least, all users must be fully trained in the regulations in force, so that they can identify the limits to impose on themselves when using Generative AI, and also recognise the risks arising from the processing carried out.


Christophe VALLET

iQo Partner


Jérôme PRIOUZEAU

iQo Partner