


On 13 September 2025, the Joint Advisory Council on the Ethics of the Magistrate–Lawyer Relationship (France) published an important guidance document on the use of generative artificial intelligence (IAG, from the French intelligence artificielle générative) in legal practice.¹ This body, which brings together representatives of the French judiciary and the French legal profession, proposes a shared ethical framework to guide the use of AI tools in the courtroom and in the work of lawyers. The objective is to ensure that the development of these technologies remains compatible with the fundamental principles that govern French judicial decision-making and professional secrecy.
The document acknowledges that generative AI may serve as a useful support tool. It can accelerate legal research, help structure and summarise large case files, and assist in drafting working documents. When used strictly as an instrument of assistance, and under human supervision, IAG can contribute to improving efficiency in legal work.
However, the text also identifies significant risks associated with improper or uncritical use. Generative AI cannot replace the legal reasoning of the judge or the lawyer, who remains solely responsible for the content produced or validated. AI systems may reproduce biases present in their training data and may generate inaccurate or fictitious references ("hallucinations"). Relying on such outputs without verification can lead to errors in legal assessment, which, in the French judicial system, would directly undermine the quality and reliability of the decision.
The document also emphasises the centrality of professional secrecy (secret professionnel) and data sovereignty. No confidential information relating to ongoing cases should be processed through AI tools whose security, hosting and data-governance guarantees are not fully controlled, particularly when such tools are not hosted within the European legal framework. Such use would conflict with core ethical and professional duties applicable in France.
Finally, the guidance insists on the requirement of explainability. The legal professional must always be able to explain and justify the reasoning underlying a position or decision. Generative AI does not "reason" in a legal sense; it produces linguistic output. The responsibility for reasoning and deciding remains exclusively human.
What the French guidance therefore advocates is a measured, supervised and critical use of generative AI: it assists, but does not decide; it supports reasoning, but never replaces it.
¹ Conseil consultatif conjoint de déontologie de la relation magistrats-avocats, Usage de l'intelligence artificielle générative dans la pratique judiciaire, 13 September 2025.

