March 15, 2024 | Law360 | 4 minute read

Months after US Securities and Exchange Commission Chair Gary Gensler warned[1] regulated business owners about potential securities law liability over conduct known as AI washing,[2] or misleading investors as to the company’s true artificial intelligence capabilities, a class of plaintiffs has filed suit in New Jersey federal court, seeking damages under this novel theory.[3]

The Innodata Lawsuit

In February, a software and data engineering company, Innodata Inc., was sued[4] in the US District Court for the District of New Jersey by a proposed class of investors alleging its stock price dropped more than 30% after a financial research firm, Wolfpack Research, published a report saying its artificial intelligence technology was “smoke and mirrors,” and that its marketing claims were akin to “putting lipstick on a pig.”

Innodata investor David D’Agostino filed the complaint against the company, its President and CEO Jack Abuhoff, interim Chief Financial Officer Marissa Espineli, and former CFO Mark Spelker on behalf of all individuals and entities that purchased or acquired Innodata common stock from May 9, 2019, through Feb. 14, 2024.

According to the complaint, over the preceding decades, Innodata’s original data solutions business had slowly decayed as automatic data annotation made the company’s historical business of offshoring manual data annotation less profitable. In 2019, Innodata had allegedly begun implementing AI and machine learning processes, and began marketing itself as an AI company.

The complaint asserts that, throughout the proposed class period, Innodata advertised its new AI-focused operations to investors, and the individual defendants repeatedly made positive statements about the company’s AI expertise and capabilities, along with touting a growing number of Silicon Valley contracts.

But plaintiffs claim that Innodata’s promises were falsehoods, and that while touting supposed new AI capabilities, the named defendants contradicted their own public statements by privately reducing Innodata’s research and development spending, in a move that was not disclosed to investors.

The complaint further alleges that the defendants made material misrepresentations regarding Innodata’s actual lack of any viable AI technology, that its Goldengate AI platform was rudimentary software, and that it was not going to use AI in any significant way to gain new Silicon Valley contracts.

Crucially, D’Agostino states that Innodata had been able to “pose as an AI company as a result of contracts with true AI companies to use its [data solutions arm] to outsource the labor intensive” task of using its own data to train large language models.

The complaint alleges that public perception of Innodata, and therefore Innodata’s stock price, began to plummet when financial research firm Wolfpack Research published a report, revealing Innodata’s misrepresentations.

“While defendants touted Innodata’s status as an AI pioneer, other companies were only hiring Innodata for cheap labor and its operations were powered by thousands of low-wage offshore workers, not proprietary AI technology,” the complaint states.

A Preview of Future AI Washing Enforcement Actions

While the D’Agostino complaint does not explicitly label Innodata’s actions as AI washing, the conduct alleged is precisely what the SEC has recently cautioned against under that same label.

Indeed, D’Agostino’s allegations echo Gensler’s admonition to regulated entities that “[y]ou can sell a security, you can promote an opportunity,” but that an entity also must “fairly and accurately describe the material risks” as they relate to their AI capabilities. Nevertheless, it is notable that this theory is first being tested in private litigation instead of in the form of a government enforcement action.

While legal observers may have expected the SEC to act as the vanguard of a new era of AI-related securities litigation by bringing an initial enforcement action, especially in light of the SEC’s recent rulemaking movement[5] in that space, the regulator may want to allow private litigation to test these novel theories before wading in on its own.

Recent Comments From Regulators Underscore AI Washing Risk

Even so, a regulatory enforcement action related to AI washing appears to be a question of when, not if.

To be sure, during a March 6 panel discussion at the American Bar Association’s White Collar Crime 2024 conference in San Francisco, Gurbir S. Grewal, director of the SEC’s Division of Enforcement, emphasized that the SEC has its eye on AI washing, once again drawing comparisons between it and so-called ESG washing.

“We saw a lot of issuers talk about their ESG practice and products, and those products didn’t live up to what was stated,” he said, comparing the two different modes of securities law violations.

“We see advisors saying they are incorporating AI when making investment decisions for individuals when they aren’t,” Grewal added, emphasizing that AI washing is certainly top of mind for the Enforcement Division, and also that it was a priority for its Examinations Division.

The SEC is not alone. During the same March 6 panel discussion, Ian McGinley, director of the Commodity Futures Trading Commission’s Division of Enforcement, stated, “You can see [AI washing] also playing out with more sophisticated firms, where investors want to know what AI capability a firm has. Everyone will want to make sure [the disclosures] are entirely accurate.”

Considerations for Covered Entities

Going forward, entities subject to SEC regulation should maintain a keen awareness of the accuracy of their representations related to AI. Technology generally qualifies as AI only if it exhibits some level of learning, adaptation and autonomy.[6] Technologies providing advanced automation and statistical analysis are not necessarily considered AI.

Companies should be mindful of this distinction when representing their capabilities to current and potential investors, and should always make good faith efforts to describe capabilities accurately.

Entities should especially steer clear of touting their own AI-related advancements when they are instead privately reducing internal spending on research and development in that area, as Innodata is alleged to have done in the instant class action complaint.

As with all reporting, the best defense against future litigation and investigation related to AI disclosures is a robust compliance program with controls for reporting and verification to ensure that all public statements are accurate and not misleading.



[3] D’Agostino v. Innodata Inc. et al., 2:24-cv-00971 (D.N.J. 2024).




Article originally published by Law360 on March 14, 2024.