


In December 2025, the Direction générale de la sécurité intérieure (DGSI), operating under the authority of the Ministère de l’Intérieur, published Economic Interference Flash No. 117, devoted to the risks associated with the use of artificial intelligence in the professional environment, with a particular focus on generative AI.
The official document commented on in this article may be accessed in full via the link below.
This publication forms part of the DGSI's broader economic security mission and aims to raise awareness among French and international companies of new forms of economic interference, data leakage and manipulation enabled or exacerbated by uncontrolled uses of artificial intelligence.
The French intelligence services first recall that the rapid democratisation of artificial intelligence, and especially generative AI, is profoundly reshaping corporate working practices. While the productivity gains and innovation potential are undeniable, the DGSI stresses that these technologies also create significant legal, economic and strategic vulnerabilities.
These risks are heightened by the frequent use of foreign-developed AI tools, subject to extraterritorial legislation, combined with limited transparency regarding data governance, model training and reuse of information submitted by users.
The first scenario concerns employees of a strategically important French company who used a public generative AI tool to translate confidential internal documents, without prior authorisation.
The DGSI underlines a critical point: many free or standard versions of generative AI systems reuse user inputs to train their models, creating a substantial risk of loss of control over sensitive or strategic information.
From a legal standpoint, such practices may result in:
The second case highlights the risks arising from the full delegation of partner due diligence to an AI-based tool, without any human verification.
The DGSI identifies several well-known but often underestimated risks, including:
Such practices raise serious issues of corporate governance and directors’ liability, particularly where strategic, financial or compliance decisions are taken solely on the basis of AI-generated outputs.
The third scenario involves an attempted fraud based on a deepfake, combining the artificial reproduction of a company executive's face and voice in order to induce an unlawful transfer of funds.
According to the DGSI, AI has become a key vector for economic interference, facilitating:
These developments significantly increase risks related to cybersecurity, fraud, corporate liability and reputational harm.
In response, the French intelligence services set out a series of structured recommendations, including:
These recommendations align closely with broader requirements relating to legal compliance, digital sovereignty and protection of intangible assets.
Conclusion
With this Economic Interference Flash No. 117, the French intelligence services deliver a clear message to company directors, legal departments, CIOs and compliance officers: artificial intelligence, while a powerful driver of competitiveness, must be deployed within a robust legal, contractual and governance framework.
AI is no longer merely a technological issue; it has become a core strategic, legal and sovereignty challenge for companies operating in an increasingly hostile economic environment.

