In 2025, the Digital Transformation Agency (DTA) of the Commonwealth of Australia released version 2.0 of its Artificial Intelligence (AI) Model Clauses. The document sets out a modular set of contractual provisions designed to govern the use, development, and procurement of AI systems under public sector contracts. The model clauses reflect a broader commitment by the Australian Government to promote the ethical, transparent, and accountable deployment of AI technologies, in line with the national AI Ethics Principles and associated regulatory frameworks.
The DTA model clauses are intended for inclusion in public contracts where the supplier uses an AI system in delivering services, develops a bespoke AI solution for the buyer, or provides software with embedded AI functionality. The clauses are voluntary and modular, allowing buyers to adapt them to the nature and risk profile of each procurement; once incorporated, they operate as ordinary contractual terms. They are framed within the context of existing Australian legislation, including the Privacy Act 1988 (Cth), various anti-discrimination statutes, and cybersecurity obligations, as well as international standards such as ISO/IEC 42001:2023 on AI management systems.
A foundational principle of the framework is that suppliers must not use AI systems without prior written approval by the buyer. As stated in clause 1.1.1, “Where the Seller intends to use an AI System for the provision of the products and services, it must notify the Buyer […] and obtain the Buyer’s prior written approval for such use.” The supplier remains fully responsible for contractual performance, including when AI is employed in the supply chain. Furthermore, certain AI systems—such as those developed by DeepSeek—are expressly prohibited, and their use constitutes a ground for immediate termination of the contract.
The clauses impose detailed obligations on suppliers to ensure that the AI system remains subject to human oversight and that its functioning is traceable and intelligible to the contracting authority and, where relevant, to affected persons. Clause 5.1.1 requires the supplier to design systems that can be monitored through effective human-machine interfaces. Under clause 5.3.1, the AI system must be developed in a manner that ensures transparency of outputs and allows for meaningful explanation of the logic underlying automated decisions. The supplier must also provide circuit-breaker mechanisms that allow system operations to be interrupted in the event of malfunction or risk.
Data governance is treated with particular rigour. The supplier must comply with the Privacy Act and promptly notify the buyer of any eligible data breaches. The clauses strictly prohibit data mining, ingestion into large language models, or use of buyer data for training purposes unless specifically authorised. Clause 11.2.1 provides that the supplier “must not, at any time, conduct Data Mining activities with any Buyer Data or ingest Buyer Data into a large language model or any AI model,” except where permitted by the contract. Additionally, AI datasets must be returned or destroyed at the end of the contract, in accordance with explicit procedures and evidentiary requirements.
The model provides two alternative regimes for intellectual property ownership: either the buyer owns all contract material, or the supplier retains ownership and grants the buyer a broad, irrevocable licence. In both scenarios, the buyer is guaranteed the ability to use, reproduce, adapt and disclose the AI system and its components. The clauses also require suppliers to adopt internal AI governance systems, and to comply with ISO/IEC 42001:2023, particularly where risk assessments indicate heightened exposure. Regular reporting on bias, system updates, and performance monitoring may be required, depending on the buyer’s specifications.
Conclusion
The DTA AI Model Clauses represent one of the most comprehensive examples of contractual standardisation in the domain of AI procurement. By translating abstract ethical and legal principles into enforceable contractual obligations, they offer a robust legal framework for public institutions seeking to deploy AI responsibly. For legal practitioners and policymakers alike, these clauses provide a valuable reference model that combines regulatory foresight with operational clarity.