The Federal Trade Commission (FTC) announced on Thursday an extensive study focused on major players in the artificial intelligence sector, including Amazon, Alphabet (Google), Microsoft, Anthropic, and OpenAI. FTC Chair Lina Khan revealed the inquiry during the agency’s tech summit on AI, characterizing it as a “market inquiry into the investments and partnerships being formed between AI developers and major cloud service providers.”
The investigation is being conducted under the FTC's 6(b) authority, which allows the agency to study companies separately from its law enforcement work and to compel them to file special reports and answer questions in writing about their businesses.
Khan emphasized that the rapid development and deployment of AI are informing the agency's work and that there is no AI exemption from existing laws. The inquiry is focused on ensuring that companies are not leveraging their power to thwart competition or mislead the public.
A Google spokesperson welcomed the study, expressing hope that it would shed light on companies’ approaches to AI services, emphasizing the openness of Google Cloud.
In a statement, Rima Alaily, corporate vice president of Microsoft's competition and market regulation group, said that partnerships between independent companies such as Microsoft and OpenAI promote competition and innovation, and that Microsoft is ready to provide the information the FTC needs for its study.
Both Amazon and OpenAI declined to comment on the inquiry, while Anthropic did not immediately respond to CNBC’s request for comment.
The FTC has conducted similar inquiries before, including a 2022 study of the prescription drug middleman industry and a 2020 study of past acquisitions by major tech companies, including Alphabet, Amazon, Apple, Microsoft, and Facebook (now Meta).
Khan concluded by noting that how liability regimes will apply to AI remains an open question, and that the agency's enforcement experience in other domains will inform its approach to this work.
 
