


AI and Canadian Capital Markets: CSA Guidance for Non-Investment Fund Reporting Issuers
On December 5, 2024, the Canadian Securities Administrators (the “CSA”) published CSA Staff Notice and Consultation 11-348 regarding the Applicability of Canadian Securities Laws and the use of Artificial Intelligence Systems in Capital Markets (the “CSA Notice”) to provide market participants with guidance on the use of artificial intelligence (“AI”) systems in Canadian capital markets.
The guidance from the CSA Notice is premised on the principle that investors should have accurate, equal and timely access to material information when making investment decisions. It emphasizes the importance of transparency, accountability, and risk mitigation when leveraging AI in capital markets. The CSA Notice highlights the current state of AI and its application to securities law requirements and includes guidance for the following market participants:
- Non-Investment Fund Reporting Issuers;
- Registrants;
- Marketplaces and Marketplace Participants;
- Clearing Agencies and Matching Service Utilities;
- Trade Repositories and Derivatives Data Reporting;
- Designated Rating Organizations; and
- Designated Benchmark Administrators.
This bulletin focuses on the guidance for Non-Investment Fund Reporting Issuers (“Issuers”).
Requirements for Non-Investment Fund Reporting Issuers
The CSA Notice provides guidance to Issuers with respect to securities law requirements under National Instrument 51-102 Continuous Disclosure Obligations (“NI 51-102”). The guidance applies to Form 51-102F1 Management’s Discussion and Analysis and Form 51-102F2 Annual Information Form. Additionally, the guidance is expected to apply to certain prospectus filings.
The CSA Notice emphasizes that there is no “one size fits all” model when determining disclosure obligations regarding AI systems. Disclosure should be tailored to the Issuer. The CSA Notice focuses on the following four considerations for Issuers: (1) Disclosure of Current AI Systems Business Use; (2) AI-related Risk Factors; (3) Promotional Statements about AI-related Use; and (4) AI and Forward-Looking Information.
(1) Disclosure of Current AI Systems Business Use
The CSA recommends that Issuers include disclosure where the use of AI is material to an Issuer’s business operations. The disclosure should be tailored to inform investors and potential investors of the impact and risk of AI on the operations and finances of the Issuer. Examples of specific disclosure that may be relevant for an Issuer to incorporate in its disclosure filings include, but are not limited to:
- the Issuer’s definition of AI;
- how AI is being used by the Issuer;
- the current and anticipated impact that the use or development of AI systems will have on the Issuer’s business and financial condition;
- any material contracts relating to the Issuer’s use of AI;
- the impact of the adoption of AI systems on the competitive position of the Issuer in the market; and
- whether the AI system used by the Issuer is supplied by a third party.
Additionally, the CSA notes that it may be prudent for an Issuer to disclose its sources and providers of any data used to develop the AI systems used by the Issuer.
(2) AI-related Risk Factors
The CSA encourages Issuers to provide entity-specific risk factors and to avoid the use of boilerplate language in the preparation of risk factor disclosure. Examples of AI-related risk factors include, but are not limited to:
- operational risks tied to the development and use of AI, such as the impact of disruptions, unintended consequences, misinformation, inaccuracies and errors, bias and technological challenges of AI;
- third-party risks associated with the reliance on AI systems offered by third-party service providers;
- ethical risks arising from the use of AI that may impact the Issuer’s reputation, liability and costs;
- regulatory risks associated with the evolving regulatory and legal landscape with respect to AI systems;
- competitive risks in relation to products, services and industry standards involving AI; and
- cybersecurity risks associated with the use and development of AI.
Furthermore, the CSA recommends that Issuers consider and include the source and nature of AI-related risks, the potential consequences of such risks, the adequacy of any preventive measures to AI-related risks, and any prior material incidents where AI has raised concerns relevant to the Issuer.
(3) Promotional Statements about AI-related Use
The CSA expects Issuers to avoid overly promotional disclosure regarding the use and development of AI systems. AI-related disclosure should abide by general securities law prohibitions against false or misleading statements and should provide balanced and fair disclosure by including favourable and unfavourable information. For example, if an Issuer discloses the use of AI in its business, the CSA expects the Issuer to provide adequate details to substantiate any claims by the Issuer as it relates to the use of AI. The CSA also cautions that the scrutiny of promotional statements with respect to AI may not be limited to an Issuer’s continuous disclosure filings and could also extend to an Issuer’s social media activities.
(4) AI and Forward-Looking Information
The CSA Notice provides that Issuers should consider whether disclosure of the prospective use of AI systems constitutes forward-looking information (“FLI”). Pursuant to applicable securities laws,[1] AI-related FLI disclosure must:
- clearly identify the information as forward-looking;
- caution that actual results may vary from the FLI;
- disclose material factors and assumptions used to develop the FLI; and
- identify material factors that could cause actual results to differ from the FLI.
The CSA reminds Issuers that statements about the prospective use of AI systems should not be included unless the Issuer has a reasonable basis for the FLI.
Guidance in Other Jurisdictions
The CSA is not the first regulator to provide AI-related guidance; regulators around the world have had to grapple with the risks that AI systems pose to both issuers and investors. The following section outlines approaches taken by other jurisdictions to address the rapidly increasing adoption of AI systems.
United States
In 2024, the United States Securities and Exchange Commission (the “SEC”) released a series of guidance relevant to the use of AI systems. The guidance was released in video format by the Chair of the SEC and covered topics such as conflicts of interest in artificial intelligence,[2] systemic risk in artificial intelligence,[3] AI washing[4] and fraud and deception in artificial intelligence.[5] Additionally, in June 2024, the SEC charged the CEO and founder of an AI hiring startup with fraud for making false and misleading statements to investors.[6]
Furthermore, the SEC’s Division of Examinations (the “Division”) released its examination priorities for the 2025 fiscal year.[7] The Division plans to review the accuracy of registrant representations regarding AI use and capabilities. The Division also plans to assess the implementation of adequate policies and procedures to monitor the use of AI, with a focus on fraud prevention and detection, back-office operations, anti-money laundering and trading functions.
United Kingdom
The United Kingdom (“UK”) Financial Conduct Authority (the “FCA”) released an AI Update in April 2024 to promote the safe and responsible use of AI in UK financial markets.[8] The AI Update states that the risks associated with AI are not necessarily unique to AI itself and can be mitigated within the FCA’s existing legislative and regulatory frameworks. The FCA states that although the existing regulatory framework covers the use of technology such as AI, the FCA will continue to monitor the current situation and may consider future regulatory adaptations if needed.[9]
Additionally, the London Stock Exchange Group has developed Responsible AI Principles, drawing from existing frameworks to guide its approach to AI adoption.[10] To date, no other major stock exchange in Canada, the United States or Australia has provided specific guidance regarding the use of AI.
Australia
The Australian Securities and Investments Commission (“ASIC”) has provided guidance on regulating AI in the financial sector in the report “REP 798 Beware the gap: Governance arrangements in the face of AI innovation” (“REP 798”).[11] REP 798 states that ASIC’s regulatory framework for financial services is technology neutral. However, ASIC states that it supports the introduction of regulatory measures to mandate guardrails for the use of AI in high-risk settings, as the Australian government has proposed to do.
Moving Forward
In considering the approaches of Canada and other jurisdictions, continuous efforts by both regulators and issuers are necessary to keep pace with the developing uses and risks of AI systems. While the CSA Notice sheds light on how Canadian securities regulators may approach AI, it primarily serves as guidance and does not outline definitive requirements for businesses and investors. Issuers should be proactive in monitoring the evolving regulatory landscape, as AI is here to stay.
For More Information
If you have any questions regarding the CSA Notice and how it may impact your business, please contact a member of our Capital Markets & Securities Group at McMillan LLP.
[1] NI 51-102, section 4A.3 and Companion Policy 51-102CP Continuous Disclosure Obligations, section 4A.3, 4A.5 and 4A.6.
[2] Gary Gensler, “Conflicts of Interest in Artificial Intelligence | Office Hours with Gary Gensler” (13 August 2024).
[3] Gary Gensler, “Office Hours with Gary Gensler: Systemic Risk in Artificial Intelligence” (19 September 2024).
[4] Gary Gensler, “Office Hours with Gary Gensler: AI Washing” (4 September 2024).
[5] Gary Gensler, “Office Hours With Gary Gensler: Fraud and Deception in Artificial Intelligence” (10 October 2024).
[6] Securities Exchange Commission, “SEC Charges Founder of AI Hiring Startup Joonko with Fraud” (11 June 2024).
[7] Securities Exchange Commission, “Fiscal Year 2025 Examination Priorities – Division of Examinations” (21 October 2024) at page 13.
[8] Financial Conduct Authority, “AI Update” (22 April 2024).
[9] Financial Conduct Authority, supra note 8 at 21.
[10] Jenny Cosco & Sabrina Feng, “Towards a unified roadmap for responsible AI policy” (11 June 2024).
[11] Australian Securities & Investments Commission, “REP 798 Beware the gap: Governance arrangements in the face of AI innovation” (29 October 2024).
By Andjela Sabet, Julia Currie, Rhythm Jethi (Articling Student) and Anica Villamayor (Articling Student)
A Cautionary Note
The foregoing provides only an overview and does not constitute legal advice. Readers are cautioned against making any decisions based on this material alone. Rather, specific legal advice should be obtained.
© McMillan LLP 2025