In a speech before Yale Law School in February 2024, SEC Chair Gary Gensler had AI top of mind. Interrupted only by a colorful collection of movie references, Chair Gensler focused almost the entirety of his remarks on AI and the SEC’s corresponding regulatory duties. He addressed the risks associated with AI while cautioning reporting companies to avoid “AI washing” and boilerplate AI disclosures that are not particularized to the company. The speech nicely underscores the SEC’s two-fold, and at times competing, concerns about this important emerging technology.

Chair Gensler spent the bulk of his time discussing the risks and potential abuses the capital markets are exposed to as AI continues to evolve. These include:

  • systemic risks associated with independent financial institutions relying on the same AI models;
  • inaccurate AI predictions;
  • difficulty avoiding conflicts of interest when using AI models; and
  • using AI for fraudulent schemes.

In contrast to these “inherent risks,” Chair Gensler also devoted a portion of his speech to cautioning against “AI washing.” Much like “greenwashing,” a term the SEC sometimes uses in the ESG space, “AI washing” encompasses not just outright false claims but also overly generalized disclosures that do not actually help investors. With AI making headlines almost daily, companies may feel pressured to reference AI in some way in their public disclosures, even if there is nothing concrete to report. This, Chair Gensler says, is a mistake. In particular, he cautioned against:

  • boilerplate AI disclosures not particularized to the company;
  • disclosing the use of AI models when the underlying technology is not actually AI-driven; and
  • AI-related projections that do not have a reasonable basis.

In the midst of reporting season, companies must strike a balance between these competing AI disclosure concerns. On the one hand, companies should report material risks, developments and financial results as they relate to AI. Chair Gensler advised that companies should ask themselves, “if we are discussing AI in earnings calls or having extensive discussions with the board, is it potentially material?” Such AI disclosures could be appropriate in a variety of places, including in the risk factors, MD&A and financial statements of a Form 10-K, and in the letters to shareholders and CD&A of a proxy statement.

On the other hand, companies should not reference AI in public disclosures just for the sake of doing so—and this is not the first time the SEC has issued that warning. So far in 2024 we have seen a sharp uptick in mentions of AI in Form 10-Ks filed with the SEC. A recent Bloomberg study estimated that over 40% of S&P 500 companies referenced AI in their 2024 Form 10-K, with a majority of those disclosing certain AI risks. Certainly, some companies have serious AI risks to disclose, but do 40% of companies really have a particularized AI disclosure to make? The SEC may be growing suspicious.