Limits and obstacles to the adoption of Generative AI applied to Contract Management

Part two of our thematic dossier on the challenges of adopting Generative AI. Find the first article here: Generative AI use cases for Contract Management.

In this article, we look at the obstacles to the adoption of Generative AI for Contract Management missions and at how to address them.


Generative AI: looking beyond the positive impact on productivity

According to a joint study by MIT and BCG, the arrival of generative AI has the potential to increase labor productivity by up to 40%.
However, despite the advantages offered by this new technology, limitations remain, leading some analysts, such as CCS Insight, to predict a "cold shower" for 2024.

In its annual predictions, CCS Insight reported that "the hype around generative AI has been just immense in 2023, so much so that we believe it is overhyped, and that many hurdles need to be overcome to bring it to market".

As a reminder, the "hype cycle" popularized by Gartner represents the maturity curve of emerging technologies, enabling us to identify their potential and pace of deployment.

The "peak of inflated expectations" is followed by a phase of "disillusionment", during which interest wanes, at least temporarily, before gradually picking up again, often for longer.

Source: The state of AI in 2023: Generative AI's breakout year

What are the obstacles to the adoption of Generative AI for Contract Management?

Data security / confidentiality issues

The first limitation is data security and confidentiality. Companies are understandably reluctant to feed their contracts into third-party AI solutions: these documents are so sensitive that the highest possible level of security is required.

Many companies have simply banned the use of GenAI tools until these risks have been addressed.

Others are developing their own generative AI tools based on open source models, in order to provide a secure architecture for retaining ownership of the content made available to AI models.

Explainability of content

Proposals generated by Generative AI tools must not only be reliable and relevant, but must also be justifiable, to avoid the risk of false, misleading or unsourced content.

To date, however, it remains very difficult to obtain from AI tools a detailed explanation of the provenance of content and the underlying sources.

Note also the impact of variations in "prompts": seemingly minor changes in phrasing can lead to significantly different results depending on the turns of phrase employed by the user.

This has led to the emergence of a new profession, the "Prompt Engineer", and a discipline that involves running sensitivity tests on prompt variations: checking that tools can handle a wide range of queries and respond to complex scenarios, while guaranteeing neutrality and the absence of bias.
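Such a sensitivity test can be sketched in a few lines: ask the same question in several phrasings and measure how similar the answers are. This is a minimal illustration, not a production test harness; `stub_model` is a hypothetical stand-in for a real LLM API call, and the similarity measure is a simple text ratio rather than a semantic comparison.

```python
from difflib import SequenceMatcher


def consistency_score(model, prompt_variants):
    """Query the model with each phrasing of the same request and
    return the average pairwise similarity of the answers (1.0 = identical)."""
    answers = [model(p) for p in prompt_variants]
    pairs = [(a, b) for i, a in enumerate(answers) for b in answers[i + 1:]]
    if not pairs:
        return 1.0
    return sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)


# Hypothetical stub standing in for a real LLM call.
def stub_model(prompt):
    return "Clause 12.3 caps liability at 100% of annual fees."


variants = [
    "What is the liability cap in this contract?",
    "Summarise the limitation of liability clause.",
    "Is there a cap on damages? If so, what is it?",
]
score = consistency_score(stub_model, variants)
```

In practice, a low score on paraphrased prompts flags exactly the instability described above and tells the prompt engineer which formulations need to be constrained or templated.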

Consistency of results and contextual adaptability are thus key elements in improving the reliability and safety of large language models (LLMs).

The central question of cost

The cost of Generative AI is becoming a real issue, given the mass of data and parameters processed by LLMs: answering a user's prompt means running complex mathematical models, which requires considerable computing power and is therefore expensive.

Beyond experimentation and proof-of-concept, the issue of costs is a very sensitive one, particularly with a view to scaling up over a wide range of use cases, sometimes using different LLMs, or a larger volume of users.

In this respect, the Retrieval Augmented Generation (RAG) method in particular makes it possible to limit costs: a model that has already been pre-trained on very large volumes of data is combined with retrieval over the company's own documents, avoiding the need to fine-tune the model on massive quantities of proprietary training data.
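The RAG idea can be illustrated with a minimal sketch: retrieve the contract snippets most relevant to the question, then prepend them to the prompt sent to the pre-trained model. This is a toy example under simplifying assumptions; a real system would rank snippets with vector embeddings rather than the naive word overlap used here.

```python
import re


def retrieve(query, documents, k=2):
    """Rank contract snippets by naive word overlap with the query
    and return the top-k (real systems use embedding similarity)."""
    q_words = set(re.findall(r"\w+", query.lower()))
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(re.findall(r"\w+", d.lower()))),
        reverse=True,
    )
    return scored[:k]


def build_prompt(query, documents):
    """Assemble the augmented prompt: retrieved context plus the question."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"


docs = [
    "Termination requires 90 days written notice by either party.",
    "Liability is capped at the total fees paid in the preceding 12 months.",
    "Payment terms are net 30 days from invoice date.",
]
prompt = build_prompt("What notice is required for termination?", docs)
```

Because only a handful of short, relevant snippets are injected at query time, the model answers from the company's own contracts without any costly retraining, which is precisely the cost advantage described above.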

The new European AI regulation: AI Act

The European Union recently reached an agreement to regulate the use of AI in Europe. The aim is to establish an ethical and secure framework, emphasizing transparency, security and respect for fundamental rights in the processing of certain types of sensitive data (e.g. health data) or high-risk use cases (e.g. facial recognition), even going so far as to ban certain uses (e.g. social scoring or the manipulation of human behavior).

This AI Act will also force companies to:

  • ensure the quality of the data used to feed the algorithms;
  • ensure compliance with copyright;
  • ensure that generated content is clearly identified as artificial;
  • require developers to write technical documentation and distribute detailed summaries of the content used to train their AI.

As a result, companies may have to postpone the deployment of artificial intelligence solutions and invest more to comply with recently introduced regulations.

5 best practices for embracing Generative AI

If you want to make the most of generative AI, especially for Contract Management, we recommend respecting a few simple principles.

Interested in finding out more? Don't hesitate to contact us for more information.

In the meantime, stay tuned for our next article where we'll explore in depth the impact of Generative AI on business.
