Elon Musk, during testimony in a California federal court, acknowledged that his artificial intelligence startup, xAI, has utilized technology from OpenAI to train its own models. This admission came during cross-examination in his ongoing lawsuit against OpenAI, where he alleges the company abandoned its nonprofit mission. The revelation sheds light on a controversial but widespread industry practice known as model distillation.
The Courtroom Admission
When questioned by OpenAI's attorney, William Savitt, about whether xAI had used OpenAI's models for training, Musk initially deflected, saying it was a general practice among all AI companies. Pressed for a direct answer, he conceded, "Partly," framing the action as a standard validation procedure. The exchange came amid his broader legal battle with the ChatGPT maker.
Understanding Model Distillation
Model distillation is a technique where a smaller AI model learns to mimic the behavior of a larger, more powerful one, effectively acting as a "student" to a "teacher" model. This process can significantly reduce the time and cost required to develop competitive AI systems. While labs often use distillation internally on their own models, its application on a competitor's technology is a contentious issue.
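The teacher-student dynamic can be made concrete with a toy sketch. The code below is purely illustrative: the "teacher" is a small fixed linear model standing in for a frontier model, and the "student" is trained from scratch to match the teacher's temperature-softened output distributions on unlabeled queries, the core mechanic of distillation. All names, sizes, and hyperparameters here are assumptions for demonstration, not any lab's actual pipeline.

```python
# Toy sketch of model distillation: a "student" model learns to mimic a
# "teacher" by matching its softened output distributions. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# "Teacher": a fixed linear model standing in for the larger, established model.
D, C = 8, 3                       # input dimension, number of classes
W_teacher = rng.normal(size=(D, C))

# Unlabeled queries: the student sees only inputs and the teacher's responses.
X = rng.normal(size=(512, D))
T = 2.0                           # temperature softens the teacher's labels
soft_targets = softmax(X @ W_teacher, T=T)

# "Student": trained from scratch to match the teacher's soft outputs.
W_student = np.zeros((D, C))
lr = 1.0
for _ in range(1000):
    p = softmax(X @ W_student, T=T)
    # Gradient of cross-entropy between soft targets and student predictions
    grad = X.T @ (p - soft_targets) / len(X)
    W_student -= lr * grad

# The student's hard predictions now largely agree with the teacher's,
# without it ever seeing the teacher's weights or training data.
agree = (softmax(X @ W_student).argmax(1) == softmax(X @ W_teacher).argmax(1)).mean()
print(f"student/teacher agreement: {agree:.0%}")
```

The key point the sketch captures is that the student never touches the teacher's weights or original training data; access to the teacher's outputs alone is enough to transfer much of its behavior, which is why API access to a competitor's model is commercially sensitive.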
The practice allows smaller or newer companies to rapidly close the capability gap with established leaders who have invested heavily in data and computing infrastructure. This threatens the competitive advantage built by frontier labs like OpenAI and Anthropic. The controversy lies in whether this constitutes intellectual property theft or is simply a clever engineering shortcut.
An Industry-Wide Concern
Musk's statement that "all the AI companies" engage in this practice points to a broader, albeit often unspoken, reality in the sector. Major AI developers have become increasingly concerned about distillation, particularly from international competitors. OpenAI, Anthropic, and Google have reportedly formed an initiative through the Frontier Model Forum to combat these efforts.
The primary focus of these defensive measures has been on Chinese firms, with companies like DeepSeek and Moonshot being publicly named for allegedly using distillation. Frontier labs are actively working to harden their models against systematic querying designed to extract their underlying knowledge. This has become a key front in the global race for AI supremacy.
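The actual defenses labs deploy are undisclosed, but one commonly discussed signal is traffic shape: distillation pipelines send very large volumes of unusually diverse queries, unlike human users. The heuristic below is purely hypothetical; the `looks_like_distillation` function, its thresholds, and its logic are assumptions for illustration, not any provider's real detection system.

```python
# Hypothetical heuristic for flagging distillation-style API traffic:
# high query volume combined with unusually diverse prompts. Illustrative
# only -- thresholds and logic are assumptions, not a real provider's rules.

def looks_like_distillation(prompts, volume_threshold=1000,
                            diversity_threshold=0.9):
    """Flag a batch of prompts sent within one time window.

    A human user tends to send few, often-repetitive queries; a scripted
    extraction run sends thousands of near-unique ones.
    """
    if len(prompts) < volume_threshold:
        return False
    unique_ratio = len(set(prompts)) / len(prompts)
    return unique_ratio >= diversity_threshold

# A chatty human: low volume, repeated phrasings -> not flagged.
human = ["fix my email draft", "make it shorter", "make it shorter"] * 50
print(looks_like_distillation(human))     # False (only 150 prompts)

# A scripted extraction run: thousands of unique prompts -> flagged.
pipeline = [f"Explain concept #{i} step by step" for i in range(5000)]
print(looks_like_distillation(pipeline))  # True
```

In practice, any real defense would be far more involved (embedding-level similarity, account reputation, watermarking), but the basic trade-off is the same: filters strict enough to stop extraction risk blocking legitimate heavy users.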
The Competitive Landscape and Legal Gray Area
While the legal status of model distillation remains unsettled, the practice frequently violates AI providers' terms of service. This leaves companies in the awkward position of enforcing their own contractual rules to protect their innovations. There is a notable irony here: these same companies have faced criticism for their own data acquisition methods.
The intense competition has led to a more guarded ecosystem, with some companies cutting off rivals' access to their platforms entirely. For instance, Anthropic has blocked both OpenAI and xAI from using its models for certain tasks. Musk's testimony underscores the aggressive tactics employed by companies striving to gain an edge in the rapidly evolving AI landscape.
Elon Musk's courtroom admission pulls back the curtain on the fierce and ethically complex race for AI dominance. It confirms that even major U.S. labs leverage competitors' technology, a practice they publicly condemn when done by others. This development highlights the urgent need for clearer rules and ethical guidelines surrounding intellectual property and competitive practices in the artificial intelligence industry.

