News
17/01/2026

Artificial Intelligence and Economic Interference: French Intelligence Services Warn Companies about the Risks of Professional AI Use

In December 2025, the Direction générale de la sécurité intérieure (DGSI), operating under the authority of the Ministère de l’Intérieur, published Economic Interference Flash No. 117, devoted to the risks associated with the use of artificial intelligence in the professional environment, with a particular focus on generative AI.

The official document commented on in this article may be accessed in full by clicking on the link below.

This publication forms part of the DGSI’s broader economic security mission and aims to raise awareness among French and international companies of new forms of economic interference, data leakage and manipulation, enabled or exacerbated by uncontrolled uses of artificial intelligence.

1. Generative AI: from productivity driver to legal and strategic vulnerability

The French intelligence services first recall that the rapid democratisation of artificial intelligence, and especially generative AI, is profoundly reshaping corporate working practices. While the productivity gains and innovation potential are undeniable, the DGSI stresses that these technologies also create significant legal, economic and strategic vulnerabilities.

These risks are heightened by the frequent use of foreign-developed AI tools, subject to extraterritorial legislation, combined with limited transparency regarding data governance, model training and reuse of information submitted by users.

2. Disclosure of sensitive information through public AI tools: a major threat to trade secrets

The first scenario described concerns employees of a strategically important French company who used a public generative AI tool to translate confidential internal documents, without prior authorisation.

The DGSI underlines a critical point: many free or standard versions of generative AI systems reuse user inputs to train their models, creating a substantial risk of loss of control over sensitive or strategic information.

From a legal standpoint, such practices may result in:

  • breaches of trade secret protection, as defined under Directive (EU) 2016/943 and its French implementation,
  • violations of contractual confidentiality obligations,
  • non-compliance with the GDPR, particularly where personal data are processed or stored outside the European Union.
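By way of illustration only (this is not part of the DGSI flash), a company could reduce this first risk with a simple pre-submission guardrail that refuses to forward documents bearing confidentiality markers to an external AI service. The marker list and function names below are hypothetical; a real deployment would rely on a proper data-classification scheme.

```python
import re

# Hypothetical confidentiality markers a company might stamp on documents.
CONFIDENTIALITY_MARKERS = [
    r"\bconfidential\b", r"\bconfidentiel\b",
    r"\btrade\s+secret\b", r"\bsecret\s+des\s+affaires\b",
    r"\binternal\s+use\s+only\b",
]

def is_safe_for_external_ai(text: str) -> bool:
    """Return False if the text carries any confidentiality marker."""
    lowered = text.lower()
    return not any(re.search(p, lowered) for p in CONFIDENTIALITY_MARKERS)

def submit_to_public_ai(text: str) -> str:
    """Gate every outbound prompt; block marked documents."""
    if not is_safe_for_external_ai(text):
        raise PermissionError("Blocked: document is marked confidential.")
    return text  # in a real system, forward to an approved AI endpoint
```

Such a filter is only a first line of defence; it catches marked documents, not unmarked sensitive content, which is why the DGSI's emphasis on training and internal policies remains essential.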

3. Blind delegation of strategic decisions to AI: bias, liability and governance failures

The second case highlights the risks arising from the full delegation of partner due diligence to an AI-based tool, without any human verification.

The DGSI identifies several well-known but often underestimated risks, including:

  • excessive reliance on automated recommendations,
  • amplification of biases present in training data,
  • lack of explainability (“black box” effects),
  • and so-called hallucinations, whereby AI systems generate inaccurate or fictitious information.

Such practices raise serious issues of corporate governance and directors’ liability, particularly where strategic, financial or compliance decisions are taken solely on the basis of AI-generated outputs.
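One minimal way to operationalise human oversight, sketched here under hypothetical names and not drawn from the DGSI flash, is to make it structurally impossible to act on an AI-generated due diligence score without a recorded human decision:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DueDiligenceResult:
    partner: str
    ai_risk_score: float             # produced by a (hypothetical) AI tool
    human_reviewed: bool = False
    human_decision: Optional[str] = None

def approve_partner(result: DueDiligenceResult) -> bool:
    """Never act on the AI score alone: require an explicit human decision."""
    if not result.human_reviewed or result.human_decision is None:
        raise RuntimeError("AI output must be reviewed by a human before use.")
    return result.human_decision == "approve"
```

The design choice is deliberate: the AI score is an input to the record, never the decision itself, which keeps a human accountable for every outcome.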

4. AI, cybercrime and deepfakes: increasingly sophisticated fraud schemes

The third scenario described involves an attempted fraud based on a deepfake, combining the artificial reproduction of a company executive’s face and voice in order to induce an unlawful transfer of funds.

According to the DGSI, AI has become a key vector for economic interference, facilitating:

  • highly personalised phishing and spear-phishing attacks,
  • the creation of credible fake content,
  • data poisoning techniques,
  • and adversarial attacks targeting AI systems themselves.
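A classic organisational countermeasure to deepfake-driven transfer fraud is dual control with second-channel verification. The sketch below is illustrative (threshold, names and workflow are assumptions, not DGSI requirements): above a set amount, a transfer requires confirmation by two verifiers other than the person whose identity was claimed on the call, obtained via a separate trusted channel such as a callback to a number from the internal directory.

```python
from dataclasses import dataclass, field

@dataclass
class TransferRequest:
    amount_eur: float
    beneficiary: str
    requested_by: str                  # identity claimed on the call/video
    confirmations: set = field(default_factory=set)

THRESHOLD_EUR = 10_000.0  # hypothetical dual-control threshold

def confirm_via_callback(req: TransferRequest, verifier: str) -> None:
    """Record a confirmation obtained on a separate, trusted channel."""
    req.confirmations.add(verifier)

def may_execute(req: TransferRequest) -> bool:
    """Execute only if below threshold, or confirmed by two verifiers
    distinct from the (possibly spoofed) requester."""
    if req.amount_eur < THRESHOLD_EUR:
        return True
    independent = req.confirmations - {req.requested_by}
    return len(independent) >= 2
```

Because the confirmations travel over a different channel than the fraudulent call, a convincing deepfake of one executive is no longer sufficient to trigger the payment.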

These developments significantly increase risks related to cybersecurity, fraud, corporate liability and reputational harm.

5. DGSI recommendations: towards legally sound AI governance in companies

In response, the French intelligence services set out a series of structured recommendations, including:

  • formally regulating AI use through internal IT and AI governance policies,
  • strictly limiting the categories of data that may be submitted to AI tools,
  • favouring solutions hosted in France or within the European Union,
  • ensuring transparency regarding AI use vis-à-vis management and business partners,
  • regularly training employees on AI and cybersecurity issues,
  • and reporting any suspicious AI-related incidents to the DGSI.
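The recommendations on regulating AI use and favouring EU-hosted solutions can be enforced technically as well as contractually. As a minimal sketch, with entirely hypothetical hostnames, an internal gateway could check every outbound AI request against an allowlist of endpoints vetted by legal and IT:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of AI endpoints approved under the internal AI
# governance policy (e.g. EU-hosted services vetted by legal and IT).
APPROVED_AI_HOSTS = {
    "ai.internal.example.eu",
    "approved-vendor.example.eu",
}

def is_endpoint_approved(url: str) -> bool:
    """Check an outbound AI request against the governance allowlist."""
    host = urlparse(url).hostname or ""
    return host in APPROVED_AI_HOSTS

def route_ai_request(url: str, prompt: str) -> str:
    """Refuse any AI endpoint not covered by the internal policy."""
    if not is_endpoint_approved(url):
        raise PermissionError(f"AI endpoint not approved by policy: {url}")
    return prompt  # in practice, forward to the approved endpoint and log it
```

Logging each routed request also supports the transparency and incident-reporting recommendations: suspicious AI-related activity can only be reported to the DGSI if it is visible internally in the first place.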

These recommendations align closely with broader requirements relating to legal compliance, digital sovereignty and protection of intangible assets.

Conclusion

With this Economic Interference Flash No. 117, the French intelligence services deliver a clear message to company directors, legal departments, CIOs and compliance officers: artificial intelligence, while a powerful driver of competitiveness, must be deployed within a robust legal, contractual and governance framework.

AI is no longer merely a technological issue; it has become a core strategic, legal and sovereignty challenge for companies operating in an increasingly hostile economic environment.

Vincent FAUCHOUX