Purpose of this Practice Notice
This Practice Notice provides guidance to participants regarding the appropriate use of artificial intelligence (“AI”) in proceedings before the Canadian International Trade Tribunal.
It reflects emerging legal guidance, including Principles adopted by the Federal Court of Canada, and establishes expectations for transparency, accountability, and procedural fairness.
This Practice Notice aims to:
- promote responsible and transparent use of AI;
- ensure fairness;
- preserve the integrity of the decision-making process; and
- maintain public confidence in Tribunal decision-making.
The Tribunal recognizes that AI offers benefits but also carries risks, including inaccuracies and bias.
This Practice Notice will be updated as AI technology evolves. The Tribunal remains committed to transparency and to consultation with participants.
Use of AI by participants before the Tribunal
The Tribunal recognizes that participants may use AI tools to assist with their participation in proceedings, including drafting, researching and summarizing.
The use of AI will not, on its own, lead to a negative interpretation of submissions. In other words, participants will not be penalized simply for using AI tools. However, as detailed in this Practice Notice, AI must be used responsibly. Inaccurate, false, or misleading information derived from the use of AI tools could negatively impact a case before the Tribunal.
Standards and responsibilities
Regardless of whether AI is used, participants must comply with the Tribunal’s Rules, this Practice Notice and all applicable professional obligations and duties.
All participants, including self-represented participants, who sign or file a document with the Tribunal:
- have full responsibility for its accuracy and reliability;
- must verify all AI-assisted content; and
- remain accountable for all representations made.
Confidentiality and privacy
Counsel who have been granted access to third-party confidential information by the Tribunal are strictly prohibited from inputting that information into any AI tool.
Use of AI by the Tribunal
Decision-making
The Tribunal hears cases and makes decisions based solely on the evidence and submissions before it. The Tribunal does not use automated decision-making tools or generative AI systems to make its decisions.
All decisions are made by Tribunal members, who remain fully accountable for their reasoning, conclusions and findings.
Best practices for responsible use of AI
Use caution and validate accuracy
AI tools may generate inaccurate or false information. All AI-assisted content must be independently verified.
Avoid bias and discrimination
AI systems may reflect bias. Parties must critically assess outputs for fairness.
Verify legal sources
Parties must rely on authoritative legal sources. AI should never replace primary legal research.
Participants must exercise caution when relying on legal references or analyses generated by AI. Case names and citations, quoted passages, statutory provisions and legal interpretation produced by AI must be independently verified.
Authorities must be submitted in accordance with the Practice Notice on Citing authorities. When citing jurisprudence, statutes, regulations, policies, or commentary, only well-recognized sources should be used, including:
- official court, Tribunal and government websites;
- established commercial legal publishers; and
- trusted public legal services such as CanLII.
Effective date
January 30, 2026