In a significant ruling under U.S. consumer protection law, a New Mexico jury has returned a $375 million civil penalty against Meta Platforms, finding that the company engaged in widespread deceptive practices by misrepresenting the safety of its platforms for children.

The verdict, delivered unanimously, held that Meta violated the New Mexico Unfair Practices Act by making misleading claims about the safety features of its platforms, including Facebook, Instagram, and WhatsApp. The law prohibits false advertising, deceptive trade practices, and the exploitation of consumer ignorance.

According to court findings, the jury identified approximately 75,000 instances of “unfair or unconscionable” trade practices. Each violation carried a statutory penalty of $5,000, which produced the $375 million total and reflected what the court described as a pattern of deliberate and sustained misconduct.
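The reported figures are internally consistent, as a quick arithmetic sketch shows (the variable names below are illustrative, not drawn from any court filing):

```python
# Arithmetic check of the reported penalty figures.
violations = 75_000             # jury-identified instances of unfair practices
penalty_per_violation = 5_000   # statutory penalty per violation, in dollars

total_penalty = violations * penalty_per_violation
print(f"${total_penalty:,}")    # prints "$375,000,000"
```

This confirms that 75,000 violations at $5,000 each account exactly for the $375 million verdict.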

The case focused on Meta’s public representations that its platforms were equipped with “robust safety measures” for minors. However, evidence presented during the trial included internal corporate documents indicating that the company was aware of significant risks related to child safety, including online grooming and exposure to harmful content.

State attorneys argued that Meta failed to adequately disclose these risks while continuing to promote its platforms as safe environments for young users. The jury concluded that this omission constituted actionable deception under state law.

The ruling reinforces the application of state-level consumer protection laws to large technology platforms. Under the Unfair Practices Act, liability arises not only from false statements but also from the failure to disclose material information that could influence consumer decisions.

New Mexico Attorney General Raúl Torrez led the case, arguing that Meta’s conduct amounted to misleading commercial speech. The jury agreed that the company’s representations fell short of statutory requirements for transparency and truthfulness.

Notably, the court rejected Meta’s reliance on constitutional and federal statutory defenses. The company had invoked the First Amendment, asserting that its statements were protected speech. However, the court held that misleading commercial claims do not receive the same level of protection under U.S. law.

In addition, Meta’s argument for immunity under Section 230 of the Communications Decency Act was dismissed earlier in the proceedings. The court determined that the case did not center on third-party content alone but on the company’s own design choices and safety representations, which created independent grounds for liability.

A key aspect of the case involved the role of platform algorithms. State attorneys argued that Meta’s systems amplified harmful or explicit content, including material that could reach minors. The failure to disclose these operational risks, while simultaneously marketing the platforms as safe, was considered a critical factor in establishing deception.

The verdict signals increasing judicial scrutiny of how technology companies design and promote their platforms, particularly when vulnerable users such as children are involved.

The case will now move into a second phase, expected in May 2026, in which the state will seek injunctive relief. Proposed measures include mandatory age verification systems, stricter content moderation protocols, and enhanced mechanisms to detect and remove predatory behavior.

If granted, these remedies could have far-reaching implications for social media regulation, potentially influencing compliance standards across jurisdictions in the United States.

The ruling represents one of the largest consumer protection penalties imposed on a technology company at the state level. Legal experts note that it underscores a broader shift toward holding digital platforms accountable not only for user-generated content but also for their business practices and public representations.

By affirming that misleading safety claims can constitute consumer protection violations, the decision may pave the way for similar actions in other states. It also highlights the growing intersection of technology regulation, child safety, and consumer rights within the evolving legal landscape.

As scrutiny intensifies, the case sets a precedent for stricter enforcement of transparency and accountability standards in the digital economy.