Applying Securities Laws to AI: Key Takeaways from CSA Guidance for Market Participants

On December 5, 2024, the Canadian Securities Administrators (CSA) published Staff Notice and Consultation 11-348 – Applicability of Canadian Securities Laws and the use of Artificial Intelligence Systems in Capital Markets (the “Notice”). The Notice offers interpretive guidance on how current securities laws apply to market participants using artificial intelligence (AI) systems in their operations.

Key Themes for AI Use

The Notice outlines five key themes that market participants should consider when implementing AI systems:

  • Securities Laws Remain Technology-Neutral: Regulatory obligations apply based on the activity conducted, not the technology used. The Notice states that, while securities laws are generally designed to accommodate innovation, different technologies may require different approaches to compliance.
  • Governance and Oversight: Firms should adopt governance frameworks that reflect the operational risks associated with AI systems. This includes ensuring appropriate oversight (such as human review), adequate staff training, and controls across the AI system’s lifecycle.
  • Explainability: AI systems should be explainable to users, compliance teams, and regulators. Firms should be able to describe how the AI system generates its outputs, particularly where those outputs influence decisions subject to regulatory requirements.
  • Disclosure: Material AI use should be clearly disclosed in regulatory filings and marketing documents. The CSA cautions against “AI washing” (the practice of overstating or misrepresenting the use of AI to attract investment).
  • Conflicts of Interest: Firms are expected to identify and address conflicts that may arise from the use of AI (for example, when systems produce biased outputs or favour firm interests over those of clients).

Guidance Specific to Various Market Participants

In addition to these overarching themes, the CSA provides targeted guidance for different categories of market participants.

Non-Investment Fund Reporting Issuers

Issuers subject to continuous disclosure obligations under Canadian securities laws are expected to take a thoughtful and tailored approach when disclosing their use of AI systems. The Notice emphasizes that generic references to AI, or language that overstates its capabilities, are insufficient where AI use is material to the issuer’s business.

  • Tailored Disclosure in MD&A and AIF: Where AI plays a material role in an issuer’s operations, products, or strategy, the issuer should describe how AI is used, why it is relevant, and what risks it presents. This may include technical risks (such as data integrity or model drift), operational risks, and reputational or regulatory risks. Disclosure should be specific to the issuer’s circumstances.
  • Avoiding Boilerplate and Exaggeration: Vague language and promotional claims about AI should be avoided. Statements such as “leveraging AI to transform the business” may be viewed as misleading unless supported by clear context and evidence of implementation.
  • Forward-Looking Information: Forward-looking statements relating to AI (such as plans to adopt or expand AI capabilities) must be based on reasonable assumptions and disclosed in accordance with applicable rules. Issuers should ensure that related risk factors are provided and that the information is not presented in a misleading manner.
  • Material Change Reporting: Where an issuer’s deployment of AI results in a significant change to its business or operations, this may trigger timely disclosure obligations, including the filing of a material change report.

By providing specific, balanced, and risk-informed disclosure, issuers can help ensure investors are properly informed while mitigating regulatory and reputational risk.

Registrants

Registrants (including advisers, dealers and investment fund managers) must assess how their use of AI aligns with existing regulatory obligations under National Instrument 31-103 – Registration Requirements, Exemptions and Ongoing Registrant Obligations. While securities laws remain technology-neutral, the CSA notes that AI introduces distinctive challenges in areas such as oversight, suitability, and client communication.

  • Disclosure and Registration Updates: Material AI use should be disclosed in registration applications. Registrants must also notify regulators if the use of AI changes how registrable services are delivered or supervised.
  • Governance and Oversight: Firms should implement supervision and control systems tailored to AI-related risks. This includes documented policies and procedures, testing and validation processes, and the ability to monitor and explain system outputs. Firms remain responsible for the outcomes of AI-assisted processes, including those affecting client advice or portfolio decisions.
  • Outsourcing: Registrable activities (such as portfolio management or suitability determinations) may not be outsourced to third-party AI service providers. Where AI is used to support non-core functions, firms remain accountable for oversight, performance, and data security.
  • Conflicts of Interest: Firms should identify and mitigate conflicts arising from AI-generated outputs, including those that may result from biased algorithms or system-driven promotion of proprietary products. Policies should provide for testing, ongoing monitoring, and corrective action where needed.

Investment Fund Managers

Investment fund managers (IFMs) are expected to evaluate how the use of AI affects their obligations under applicable securities laws, particularly where AI is used in a fund’s investment strategy, risk management, or marketing.

  • Disclosure in Offering Documents: Where AI forms a material part of a fund’s investment process, its use should be clearly disclosed in the fund’s prospectus and summary documents. This includes a description of how the system functions and any related risks.
  • Material and Fundamental Changes: A change in strategy involving the use of AI may qualify as a fundamental change requiring investor approval or may constitute a material change requiring updated disclosure and a material change report.
  • Sales Communications: Claims about AI capabilities in marketing materials should be accurate and consistent with the fund’s disclosure record. IFMs should avoid AI washing and ensure that statements are substantiated by the fund’s operations.
  • Index Funds: Where funds track AI-generated indices, IFMs must ensure the index methodology is rules-based, transparent, and free from discretionary elements. If these conditions are not met, the fund may not qualify as an index-tracking fund.
  • Conflict Management: IFMs should consider whether the use of AI gives rise to conflicts of interest requiring referral to the fund’s independent review committee (for example, if an affiliated AI system is used to select investments).

Marketplaces and Infrastructure Providers

Marketplaces and Infrastructure Providers (such as clearing agencies, trade repositories, and matching service utilities) are expected to:

  • Maintain robust internal controls over AI systems including testing, validation, risk mitigation, and incident response protocols.
  • Ensure the design and deployment of AI aligns with existing obligations relating to fair access, system integrity, and record-keeping.
  • Ensure that regulators retain appropriate access to relevant system data and outputs where required for oversight purposes.

Rating Organizations and Benchmark Administrators

Rating Organizations and Benchmark Administrators should:

  • Publicly disclose material use of AI (including models, assumptions, and methodology).
  • Prioritize transparency and ensure stakeholders can understand how outputs are generated.

Next Steps Following Consultation

The CSA’s consultation period on the use of AI in capital markets closed on March 31, 2025. The CSA is now reviewing stakeholder feedback to assess whether updates to the existing regulatory framework are needed to address AI-related risks and opportunities. Depending on the outcome, new guidance or rule proposals may be published in late 2025 or 2026.

Practical Implications

Although the Notice does not introduce new legal requirements, it underscores the CSA’s focus on AI as a priority area for regulatory oversight. Firms using (or planning to use) AI should review their internal controls, disclosure practices, and governance frameworks to ensure alignment with the CSA’s expectations.

For further information regarding the CSA’s guidance on AI systems, please contact any member of our Capital Markets Group.