Artificial intelligence has rapidly become a defining feature of corporate strategy. Public companies now routinely describe AI as a driver of growth, efficiency, innovation, and competitive advantage. Earnings calls, investor presentations, and periodic filings increasingly reference AI-enabled products, operational improvements, and strategic positioning. At the same time, the Securities and Exchange Commission has made clear that disclosures involving AI must be accurate, complete, and grounded in fact.
For investors and securities attorneys, the SEC’s approach has helped open the door to a growing category of securities litigation focused not only on exaggerated AI capabilities, but also on understated AI-related risks. Recent enforcement actions, regulatory commentary, and newly filed securities class actions suggest that AI disclosures are entering a familiar trajectory. Indeed, AI-related filings were one of the largest trend categories in securities class action litigation in 2025, and accounted for a disproportionate share of total market capitalization losses, underscoring how consequential these cases have become for investors and companies alike.[1] While regulators debate whether additional guidance is necessary, private litigants are already testing how long-standing disclosure principles apply when AI materially affects a company’s operations, revenue, or risk profile.
In 2024, the SEC brought two enforcement actions against investment advisers alleging that they overstated their use of artificial intelligence. Those actions[2] alleged that the advisers marketed AI-driven investment strategies and analytics that were either nonexistent or materially exaggerated.
In announcing the settlements, then-Chair Gary Gensler cautioned against so-called “AI washing,” emphasizing that companies may not invoke AI as a marketing tool without substantiation. Enforcement Director Gurbir Grewal likewise stressed that claims about emerging technologies would be scrutinized under traditional anti-fraud standards. These actions sent a clear signal that AI-related representations are material to investors and subject to the same disclosure obligations as any other operational claims.
The SEC has continued this enforcement trajectory into 2025. In January 2025, the SEC settled an action against Presto Automation, Inc. for allegedly misrepresenting the capabilities of its AI-powered restaurant ordering technology, including falsely claiming that its system eliminated the need for human intervention when in fact the vast majority of orders still required it. In April 2025, the SEC and the U.S. Department of Justice filed parallel civil and criminal actions against Albert Saniger, the former CEO of Nate, Inc., alleging that he raised more than $42 million from investors by falsely claiming his mobile shopping app used AI to complete purchases automatically, when transactions were in fact processed manually by overseas contractors.[3] Together, these actions confirm that AI washing enforcement is a sustained priority, not a passing focus.
Against that backdrop, the SEC’s December 4, 2025 Investor Advisory Committee meeting offered a revealing contrast. Citing a lack of consistency in contemporary AI disclosures that is “problematic for investors seeking clear and comparable information,” the Committee voted in favor of recommending that the SEC issue guidance requiring issuers to define AI, disclose board oversight mechanisms, and report on how AI deployment affects their operations and consumer-facing activities. However, despite that vote, SEC Chair Paul Atkins and Commissioner Hester Peirce both delivered remarks signaling skepticism about adopting prescriptive AI-specific requirements. Their comments stressed that the SEC’s existing disclosure framework is sufficient to inform investors about AI’s impact on a business, and warned that technology-specific mandates could chill innovation or discourage companies from accessing public markets.
These remarks framed AI as simply another business input that should be captured, if material, by existing disclosure rules. According to this view, investors could rely on general requirements concerning material risks, trends, and uncertainties, without the need for tailored AI guidance. From a securities litigation perspective, however, this framing understates how AI-related disputes are likely to arise. The central question in most securities cases is not whether the SEC has adopted AI-specific rules that companies were required to follow, but whether companies made statements that were materially misleading in light of information known at the time. Indeed, the framework invoked at the December 2025 meeting is exactly what plaintiffs currently rely on when bringing securities claims. When companies speak about growth, stability, traffic, margins, or strategic resilience, they must ensure that those statements are not misleading in light of known AI-related risks. The absence of prescriptive SEC rules does not insulate issuers from liability when optimistic narratives conflict with internal data or external developments.
A securities class action filed in June 2025 against Reddit illustrates how these issues are already playing out.[4] Reddit relies heavily on traffic generated through Google Search. When Google introduced AI-powered search features, including AI Overviews, users increasingly received answers directly from Google’s interface without clicking through to external websites. According to the complaint, this zero-click dynamic materially reduced Reddit’s traffic and advertising revenue.
Rather than alleging that Reddit overstated its own AI capabilities, the complaint focuses on what the company allegedly failed to disclose. Plaintiffs contend that Reddit repeatedly reassured investors that the impact of Google’s AI initiatives was manageable and temporary, while downplaying information suggesting a more permanent disruption to its business model. Analysts later downgraded the stock after concluding that Google’s AI tools posed a lasting threat to Reddit’s traffic-dependent revenues.
The Reddit action is significant for two reasons. First, it reflects a shift away from prior AI-washing claims toward allegations of understated AI-related risk. The theory is not that Reddit promised too much about AI, but that it failed to adequately disclose how AI developments outside its control were already affecting core business metrics.
Second, the alleged risk arose from AI deployed by a third party. Google’s implementation of AI search features altered referral patterns across the internet, with direct consequences for companies dependent on search traffic. The case underscores that AI-related disclosure obligations are not limited to a company’s internal technology. AI adoption by customers, suppliers, competitors, or platform partners can create material risks that must be addressed when companies speak publicly about performance or outlook.
The Reddit lawsuit is not an isolated event. A securities class action was filed against Apple around the same time, alleging that the company made misleading statements concerning the timing and readiness of AI-powered Siri enhancements for the iPhone 16.[5] That complaint similarly focuses on the disconnect between public assurances and the alleged realities of AI development and deployment. Apple has since moved to dismiss the action, arguing that there was no proof its executives knew at the time that the features would be significantly delayed — a defense that will test how courts evaluate the line between optimistic forward-looking statements and actionable misrepresentations in the AI context. Together with earlier AI-related suits, these cases demonstrate that artificial intelligence issues are likely to become a recurring source of securities litigation exposure.
As AI becomes more deeply embedded across industries, courts are likely to see an increasing number of claims alleging that companies failed to disclose the operational risks associated with rapidly evolving AI technologies. These risks may arise from a company’s own AI initiatives, from delays or limitations in AI development, or from the adoption of AI by third parties that alters competitive dynamics, user behavior, or revenue streams.
The SEC’s Division of Examinations has reinforced this trajectory. In its fiscal year 2026 examination priorities, released in November 2025, the Division identified AI as a top focus area, signaling that it will scrutinize whether companies’ AI-related disclosures, supervisory frameworks, and controls align with their actual practices.[6] While new SEC leadership has cautioned against prescriptive AI-specific requirements, the existing disclosure framework remains fully capable of supporting securities fraud claims where companies minimize or misstate AI-related risks. Investors are increasingly attentive to how AI affects corporate performance, sustainability, and competitive positioning, and disclosures in this area are now subject to heightened scrutiny by both regulators and private litigants.
The Reddit lawsuit may represent the leading edge of this trend. It provides a blueprint for future AI-related securities actions focused on undisclosed or understated AI risks, including risks arising from third-party AI adoption beyond a company’s direct control. For plaintiff-side securities attorneys, these cases signal fertile ground for claims, particularly where companies provide confident or reassuring narratives that conflict with internal data or known external developments.
[1] According to Cornerstone Research, AI-related filings reached 16 in 2025, more than the 2024 total, and accounted for 57% of the total Maximum Dollar Loss Index for the year, reflecting the outsized financial stakes of AI-related securities litigation. Alexander Aganin et al., Cornerstone Research & Stanford Law School Securities Class Action Clearinghouse, Securities Class Action Filings—2025 Year in Review 2, 5 (Jan. 28, 2026), https://www.cornerstone.com/wp-content/uploads/2026/01/Securities-Class-Action-Filings-2025-Year-in-Review.pdf.
[2] SEC Press Release No. 2024-36, Advisor Charged with Fraud for Claiming to Use AI When Making Investment Decisions (Mar. 18, 2024); SEC Press Release No. 2024-57, SEC Charges Investment Adviser with Fraud for Falsely Claiming to Use AI (Apr. 26, 2024).
[3] See SEC Press Release No. 33-11352, SEC Charges Restaurant Technology Company with Making Misleading Statements About AI Product (Jan. 14, 2025); SEC v. Saniger, No. 25-cv-02937 (S.D.N.Y. filed Apr. 9, 2025).
[4] Tamraz v. Reddit, Inc., No. 3:25-cv-05144 (N.D. Cal. filed June 18, 2025).
[5] Tucker v. Apple Inc., No. 3:25-cv-05197 (N.D. Cal. filed June 20, 2025).
[6] See SEC Div. of Examinations, Fiscal Year 2026 Examination Priorities (Nov. 17, 2025), https://www.sec.gov/files/2026-exam-priorities.pdf.