{"id":5607,"date":"2026-05-05T22:55:34","date_gmt":"2026-05-05T22:55:34","guid":{"rendered":"https:\/\/stock999.top\/?p=5607"},"modified":"2026-05-05T22:55:34","modified_gmt":"2026-05-05T22:55:34","slug":"the-elon-musk-openai-trial-provides-more-heat-than-light-on-the-debate-over-who-should-control-ai","status":"publish","type":"post","link":"https:\/\/stock999.top\/?p=5607","title":{"rendered":"The Elon Musk-OpenAI trial provides more heat than light on the debate over who should control AI"},"content":{"rendered":"<p><img src=\"https:\/\/fortune.com\/img-assets\/wp-content\/uploads\/2026\/05\/GettyImages-2273248179-e1778004327484.jpg?w=2048\" \/><\/p>\n<p>Hello and welcome to Eye on AI\u2026In this edition: Sparks fly as Musk and Brockman testify in battle over OpenAI\u2019s restructuring\u2026the White House does a 180 degree U-turn on AI regulation and may begin reviewing AI models prior to release\u2026OpenAI and Anthropic both target PE-backed companies with new joint ventures\u2026a breakthrough in a foundation model for robotics\u2026AI scientists may still be a ways off.<\/p>\n<p>People in Silicon Valley and far beyond have been enthralled by the drama playing out in a courtroom in Oakland, California, where a jury is currently hearing testimony in Elon Musk\u2019s lawsuit against OpenAI cofounders Sam Altman and Greg Brockman. The judge and jurors in the case (the jury\u2019s verdict is merely advisory) will need to decide whether Altman\u2019s and Brockman\u2019s communications with Musk around the formation of OpenAI established a formal \u201ccharitable trust\u201d and whether Altman and Brockman subsequently violated that trust when they restructured OpenAI so that its non-profit board no longer had sole control over its for-profit arm. 
They will also have to decide on Musk\u2019s allegations that Altman and Brockman unjustly enriched themselves as OpenAI shifted from being a research-oriented lab to being primarily a commercial entity.<\/p>\n<p>Most legal analysts say Musk\u2019s case is weak and that he\u2019s likely to lose. In fact, I\u2019m surprised the case has even come to trial. I thought that Musk would opt to settle at the last minute. I had long assumed that this was one of those legal cases where the lawsuit itself was the whole point, not whether Musk ultimately prevailed. I thought his intention was two-fold: 1) to sow enough investor doubt about the viability of OpenAI\u2019s new for-profit company structure to make it harder for OpenAI to raise further investment and possibly go for an IPO and 2) to use the discovery process to surface lots of embarrassing emails, internal documents, and details about Altman, Brockman, and the constant drama at OpenAI that would tarnish the reputation of his former cofounders.<\/p>\n<p>Has Musk\u2019s lawsuit already accomplished what he wanted?<\/p>\n<p>So far, it\u2019s not clear the litigation has had much impact on OpenAI\u2019s ability to continue to raise money. It has held several successful funding rounds since Musk filed his suit, including an additional $122 billion fundraise at an $852 billion valuation that closed in March. An IPO still appears to be on the cards\u2014and to the extent that it is looking shaky, it has nothing to do with Musk\u2019s lawsuit.<\/p>\n<p>But plenty of documents have emerged that paint Altman and Brockman in a less than flattering light, and those documents have helped feed lots of media coverage about internal strife at OpenAI. So you might think Musk would say: blows landed, mission accomplished, time to cut bait. Yet Musk apparently thought there was more damage that could be done by going to trial. 
We know this because Musk said so explicitly in an email to Brockman on the eve of the trial\u2014an email that OpenAI\u2019s lawyers made public on Sunday and tried, unsuccessfully, to have admitted into evidence.<\/p>\n<p>According to OpenAI\u2019s lawyers, Musk reached out to Brockman about discussing a settlement of the case in the week before the trial. Brockman suggested that both sides drop their respective claims (OpenAI has counter-sued Musk, claiming harassment.) Musk wrote back: \u201cBy the end of this week, you and Sam will be the most hated men in America. If you insist, so it will be.\u201d<\/p>\n<p>The email was a spectacular moment in a trial that has, so far, resulted in few bombshell revelations on the witness stand. That\u2019s because much of the sensational stuff has already been disclosed in the documents that surfaced through the pre-trial discovery process. Hearing those details repeated on the stand doesn\u2019t change the public narrative much.<\/p>\n<p>A few fireworks from both Musk and Brockman<\/p>\n<p>There have been a couple of wowzer moments though: One was Musk\u2019s admission that his AI company, xAI, had trained its Grok model in part by \u2018distilling\u2019 OpenAI\u2019s GPT models. Distillation is the process of training a model on the answers from another model. This tactic violates OpenAI\u2019s terms of service, so it is likely that this was done using fake or fraudulent OpenAI accounts, and Musk\u2019s admission to this conduct was something of a bombshell. Musk\u2019s excuse was essentially \u201ceveryone does it.\u201d<\/p>\n<p>The other startling moments so far came in Monday\u2019s testimony from Brockman, which included a number of potentially damaging admissions. Brockman acknowledged he never followed through on his own initial pledge to donate $100,000 to OpenAI\u2019s non-profit when it was set up, but now has a stake in the for-profit company worth $30 billion. 
<\/p>\n<p>Musk\u2019s lawyers also questioned Brockman about his own journal entries from November 2017 in which he wrote about being \u201cwarm to steal the nonprofit from [Musk] to convert to b corp without him.\u201d He also wrote, \u201c[Musk\u2019s] story will correctly be that we weren\u2019t honest with him in the end about still wanting to do for profit just without him.\u201d Brockman\u2019s words may prove damning, since they seem to confirm some of the key allegations Musk makes in his suit. So too may Brockman\u2019s admission that he was an investor in the AI chip startup Cerebras at the time OpenAI was discussing a potential acquisition of the company and that he never disclosed his investment to Musk. Altman was also a Cerebras investor. That may help Musk\u2019s attorneys make the case for unjust enrichment, although the merger proposal did not go ahead. (OpenAI did later sign a major partnership with Cerebras that significantly boosted the chip startup\u2019s valuation.)<\/p>\n<p>Still, it\u2019s far from certain Musk will prevail, either legally or in shifting public opinion against his one-time cofounders turned bitter rivals, Brockman and Altman. In many ways, the trial is a distraction, generating much more heat than light on the bigger concerns about who controls AI and the risks the technology presents. While the Musk-OpenAI courtroom showdown has been billed as the first great technology trial of the AI era, a legal showdown that matters far more will take place two weeks from now in a courtroom in Washington, D.C. That\u2019s when a federal appeals court panel will hear arguments in Anthropic\u2019s challenge to the \u2018supply chain risk\u2019 designation the Trump Administration slapped on it after the company refused to agree to the government\u2019s specified contract terms for providing its AI models to the U.S. military. 
That\u2019s a case with huge implications not just for Anthropic and the fate of the AI industry, but also for the balance of power between the state and industry more generally.<\/p>\n<p>Even as that case moves forward, the ground is shifting in D.C. Anthropic\u2019s Mythos model, with its powerful cyber capabilities, combined with growing public fears about AI technology, seems to have convinced the Trump administration to perform a head-spinning U-turn: moving from a highly laissez-faire approach to AI to a mandate that the government receive early access to AI models and essentially license their release to the wider public. (More on that in the news section below.) This policy reversal may not have the drama of a trial, but it matters far more for the shape of AI development.<\/p>\n<p>Ok, with that, here\u2019s this week\u2019s AI news.<\/p>\n<p>Jeremy Kahn<br \/>jeremy.kahn@fortune.com<br \/>@jeremyakahn<\/p>\n<p>But before we get to the news: Do you want to learn more about how AI is likely to reshape your industry? Do you want to hear insights from some of tech\u2019s savviest executives and mingle with some of the best investors, thinkers, and builders in Silicon Valley and beyond? Do you like fly fishing or hiking? Well, then come join me and my fellow Fortune Tech co-chairs in Aspen, Colo., for Fortune Brainstorm Tech, the year\u2019s best technology conference. And this year will be even more special because we are celebrating the 25th anniversary of the conference\u2019s founding. We will hear from CEOs such as Carol Tom\u00e9 from UPS, Snowflake CEO Sridhar Ramaswamy, Anduril CEO Brian Schimpf, Yahoo! CEO Jim Lanzone, and many more. There are AI aces like Boris Cherny, who heads Claude Code at Anthropic, and Sara Hooker, who is cofounder and CEO of Adaption Labs. And there are tech luminaries such as Steve Case and Meg Whitman. And you, of course! 
Apply to attend here.<\/p>\n<p>FORTUNE ON AI<\/p>\n<p>UK-based Google DeepMind workers vote to unionize over military AI contracts amid internal backlash over its Pentagon deal\u2014by Beatrice Nolan<\/p>\n<p>Employee revolt once forced Google to back off on military contracts. But, in the wake of a new Pentagon AI contract, their leverage appears limited\u2014by Beatrice Nolan<\/p>\n<p>A decade after the \u2018Godfather of AI\u2019 said radiologists were obsolete, their salaries are up to $571K and demand is growing fast\u2014by Marco Quiroz-Gutierrez<\/p>\n<p>AI IN THE NEWS<\/p>\n<p>White House looks to control access to advanced AI models. The Trump administration\u2014which spent the past year tearing up the Biden-era AI rulebook\u2014is now weighing an executive order to convene a working group of tech executives and officials to design frontier-model oversight, with a formal pre-release review process reportedly among the options on the table, the New York Times reports, citing sources familiar with the deliberations. White House officials briefed Anthropic, Google and OpenAI on the plans last week, and some inside the administration are pushing for a system that would give the government first access to new models but without the ability to block their release. The abrupt policy shift has been driven in part by Anthropic&#8217;s Mythos model, whose cyber-vulnerability discovery capabilities prompted the company to withhold a public release, and by mounting bipartisan public concern about AI&#8217;s impact on jobs, energy, education and mental health. It also tracks a leadership change in the West Wing: AI czar David Sacks departed in March, and Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent\u2014who recently held a &#8220;productive&#8221; meeting with Dario Amodei aimed at thawing the Pentagon-Anthropic standoff\u2014have stepped in to shape policy. 
Meanwhile, the Wall Street Journal reports that Google, Microsoft, and xAI have already agreed to give early access to their advanced models to the U.S. government. It also previously reported that the White House has opposed Anthropic sharing Mythos with more companies to help them safeguard their systems\u2014although it is unclear whether this is because the White House fears that sharing the model more widely will increase the chance it winds up in the hands of bad actors, or because it wants to hoard Mythos\u2019 potential offensive cyber capabilities for itself rather than see more companies use the model to harden their defenses.<\/p>\n<p>OpenAI and Anthropic both set up companies to push AI into private equity-backed companies. The two AI rivals unveiled competing joint ventures within minutes of each other on Monday, both designed to push their AI tools deep into the operations of private equity-backed companies. OpenAI&#8217;s &#8220;Deployment Company&#8221; drew more than $4 billion from 19 investors\u2014led by TPG, Brookfield Asset Management, Advent and Bain Capital, with Dragoneer and SoftBank also participating\u2014at a $10 billion valuation, with OpenAI itself contributing capital and retaining majority control. The PE backers were, according to press reports citing leaked documents, offered a 17.5% guaranteed annual return floor over five years. Anthropic&#8217;s $1.5 billion vehicle, by contrast, is anchored by Blackstone, Hellman &amp; Friedman and Goldman Sachs\u2014with General Atlantic, Leonard Green, Apollo, GIC and Sequoia also backing it. It is targeting mid-sized businesses and will see \u201cforward-deployed engineers\u201d sent to implement Anthropic\u2019s AI models inside those companies. You can read more from the Wall Street Journal here and Bloomberg here.<\/p>\n<p>Anthropic announces new financial services agents. 
The company debuted 10 new AI agents built for banks and financial services firms\u2014handling tasks like building pitchbooks, closing the books, and drafting credit memos\u2014as it deepens its push into a sector that&#8217;s central to its enterprise strategy ahead of an anticipated IPO this year. Anthropic\u2019s archrival OpenAI has also been targeting financial services use cases, but the new rollout puts Anthropic in more direct competition with vendors like Microsoft and Salesforce, as well as specialist financial data providers such as Bloomberg and AlphaSense. Read more from the Wall Street Journal here.<\/p>\n<p>SAP moves to stop OpenClaw and other third-party agents from using its software. SAP last month told customers it could throttle, suspend or terminate access for those using unauthorized external AI agents to pull data from its apps\u2014an escalation in the brewing data wars between incumbent enterprise software vendors and vendors of AI tools, the Information reports. SAP has its own AI agent called Joule, but many customers prefer the ability of third-party agents to handle workflows across many different software applications. SAP CEO Christian Klein framed the move as protection against &#8220;mass data requests&#8221; that strain performance and as a defense of SAP&#8217;s proprietary semantic models, but the policy lands amid clear signs of pressure: SAP shares are down roughly 28% this year and longtime customer Mercedes-Benz has cut its SAP instances by 40% in recent months while leaning on its own and frontier-lab AI models to clean and analyze data. SAP says it already permits agents from some other companies, including Microsoft, Google, Amazon and IBM, and hinted at &#8220;agentic integration architectures&#8221; with Anthropic\u2014suggesting Claude Code or Cowork access may be close\u2014while singling out open-source harnesses like OpenClaw as a security risk. 
SAP\u2019s stance mirrors that of Workday, Salesforce and ServiceNow, which have all made moves to erect tollgates around their data.<\/p>\n<p>OpenAI changes privacy policy to share user data with advertisers. OpenAI updated its U.S. privacy policy on April 30 to allow the use of cookies and limited identifiers (like email addresses or cookie IDs) to promote its products on third-party websites and measure ad effectiveness, Wired reported. The company has said, however, that ChatGPT conversations remain private and aren&#8217;t shared with marketing partners. Wired found that this marketing tracking was enabled by default for free accounts but off by default for Plus and Enterprise subscribers, with users able to opt out by changing a toggle in account settings. The change comes as OpenAI expands its own in-product advertising (rolling out ads beneath ChatGPT outputs in February) and prepares for a potential IPO later this year, with the off-platform ads aimed largely at converting free users into paying subscribers.<\/p>\n<p>EYE ON AI RESEARCH<\/p>\n<p>Foundation models for robotics make a big leap forward. Physical Intelligence, a San Francisco-based builder of foundation models for robotics with some pedigreed cofounders (ex-Google DeepMind researchers and robotics profs from both Stanford and UC Berkeley), achieved a breakthrough with a new foundation model called \u03c00.7. The model can recombine learned skills to handle new situations, something large language models can do, but which has proved elusive in physical AI. A single \u03c00.7 model can fold laundry, operate an espresso machine, peel vegetables, and take out the trash without any task-specific fine-tuning, matching the performance of specialized models trained for each individual task. 
More striking, \u03c00.7 showed that it could transfer those skills between different brands and types of robots without additional training\u2014although here the performance only matched that of a human operator who had never done the task before, controlling the robot remotely. The team also showed it can be &#8220;coached&#8221; through entirely new multi-stage tasks, such as loading a sweet potato into an air fryer, using only verbal step-by-step instructions.<\/p>\n<p>All of this is a pretty big deal that should make it far easier for companies to deploy robots in more settings, far faster than before. One of the big breakthroughs that Physical Intelligence made was in what they call \u201cdiverse context conditioning\u201d\u2014training the model not just on what to do but on rich metadata describing how each demonstration went, including quality scores, speed, mistakes, and AI-generated images of intermediate subgoals. The metadata labels seem to be key, helping the model learn which intermediate actions were most likely to result in success. You can read the research paper here on arxiv.org and see the company\u2019s blog on \u03c00.7 here.<\/p>\n<p>AI CALENDAR<\/p>\n<p>June 8-10: Fortune Brainstorm Tech, Aspen, Colo. Apply to attend here.<\/p>\n<p>June 17-20: VivaTech, Paris.<\/p>\n<p>July 6-11:\u00a0International Conference on Machine Learning (ICML), Seoul, South Korea.<\/p>\n<p>July 7-10:\u00a0AI for Good Summit, Geneva, Switzerland.<\/p>\n<p>Aug. 4-6:\u00a0Ai4 2026, Las Vegas.<\/p>\n<p>BRAIN FOOD<\/p>\n<p>Maybe AI scientists aren\u2019t so close after all. There\u2019s been a lot of hype recently about how fast AI scientists are coming along and how soon AI models will be able to automate scientific research. AI research itself certainly seems on the cusp of automation, and there have been promising experiments in other fields, such as drug discovery and materials discovery. 
<\/p>\n<p>But researchers from Germany\u2019s Friedrich Schiller University Jena and the Indian Institute of Technology Delhi found that large language models (they tested OpenAI\u2019s GPT-4o and GPT-OSS, as well as Anthropic\u2019s Claude Sonnet 4.5) that have not been specifically trained to act as AI scientists can produce scientific results that seem superficially valid but lack key evidence and reasoning steps.<\/p>\n<p>The results are actually pretty abysmal. Hypotheses were stated but left untested by experiments in 63% of cases. In 68% of cases, the models failed to incorporate available experimental evidence into their process. In 71% of reasoning traces, the models\u2019 hypotheses were not updated in the face of counter-evidence. Only 26% of reasoning traces showed any belief revision based on new evidence from experiments. Bringing multiple experiments and independent lines of evidence to bear on a single hypothesis occurred in less than 10% of cases. Results like these make it seem like scientists\u2019 jobs will be safe for quite a while longer than some AI boosters claim. You can read the research here.<\/p>\n<p>AI Playbook: Keeping up with AI&#8217;s rapid evolution<\/p>\n<p>AI is becoming an even more useful\u2014and dangerous\u2014tool as it gets smarter. Fortune AI Editor Jeremy Kahn breaks down best practices for deploying AI agents, how to protect your data from AI-powered cyberattacks, and just how smart AI can really get. 
Watch the playbook.\u00a0<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Hello and welcome to Eye on AI\u2026In this edition: Sparks fly as Musk and Brockman&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[245],"tags":[578,3251,2283,1038,1862,2027,1225,11088,406,439,810],"_links":{"self":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts\/5607"}],"collection":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=5607"}],"version-history":[{"count":0,"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts\/5607\/revisions"}],"wp:attachment":[{"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=5607"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=5607"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=5607"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}