{"id":5029,"date":"2026-04-28T21:21:15","date_gmt":"2026-04-28T21:21:15","guid":{"rendered":"https:\/\/stock999.top\/?p=5029"},"modified":"2026-04-28T21:21:15","modified_gmt":"2026-04-28T21:21:15","slug":"lessons-in-how-to-build-ai-agents-from-bloomberg-cto-shawn-edwards","status":"publish","type":"post","link":"https:\/\/stock999.top\/?p=5029","title":{"rendered":"Lessons in how to build AI agents from Bloomberg CTO Shawn Edwards"},"content":{"rendered":"<p><img src=\"https:\/\/fortune.com\/img-assets\/wp-content\/uploads\/2026\/04\/GettyImages-458406107-e1777405091496.jpg?w=2048\" \/><\/p>\n<p>Hello and welcome to Eye on AI. In this edition\u2026China blocks Meta\u2019s purchase of Manus\u2026OpenAI falls short of its revenue and growth targets\u2026Anthropic shows AI models can help advance AI safety research\u2026Sen. Bernie Sanders\u2019 decision to invite Chinese AI experts to a Capitol Hill panel provokes China hawks\u2019 ire.<\/p>\n<p>In their battle for enterprise sales, both OpenAI and Anthropic have been targeting financial services firms. That\u2019s not surprising. As that old joke about why criminals rob banks says: It\u2019s where the money is. OpenAI supposedly has a battalion of ex-investment analysts helping to build a yet-to-be-launched agentic AI financial analysis product. Anthropic has been rolling out financial modeling skills for its Claude Code, Cowork, and Claude for Finance products. Startup Samaya AI is building AI tools for the finance sector too. And there are plenty of new financial advisory tools using AI as well, as my colleague Jeff John Roberts has covered in this informative recent feature.<\/p>\n<p>The OG of specialized financial data and analysis tools, of course, is Bloomberg. 
Access to the company\u2019s \u201cterminal,\u201d as it calls its core product (even though its data is no longer delivered through a dedicated machine), is still considered the de rigueur tool of every trader, investment banker, and hedge fund quant.<\/p>\n<p>Bloomberg\u2019s tools have seen off lots of rivals since its founding back in 1981. But today, AI is supercharging the competitive pressure on the company, as rivals embrace AI-powered features and use AI models to rapidly ingest and analyze complex data sets, from bond prices to earnings transcripts to social media feeds to satellite imagery, that once only Bloomberg consolidated in a single place\u2014and as Bloomberg\u2019s customers can increasingly use AI to perform the kinds of modeling they once needed the terminal to do.<\/p>\n<p>For decades, getting the most out of the terminal required that traders memorize an arcane and bewildering set of three- and four-letter keyboard commands and shortcuts, each of which called up a different feature, function, or dataset. When I worked as a reporter at Bloomberg News, all new hires underwent a full week of training to introduce them to just a fraction of these functions, the bare minimum we would need to access the data and tools required for our jobs. <\/p>\n<p>Even before I left the company to come to Fortune in 2019, Bloomberg had begun to use machine learning and large language models to make accessing these features far more intuitive, as well as to power new kinds of data analysis. And those efforts have only accelerated, especially since the debut of generative AI chatbots in 2022 and recent advances in agentic AI.<\/p>\n<p>I have periodically written about Bloomberg\u2019s progress on AI here at Fortune. 
But I was still surprised and impressed when I attended a recent \u201cAI in Finance Summit\u201d at the company\u2019s London offices where it was showing off its new \u201cAskB\u201d feature, which the company bills as the biggest rethink of the terminal in Bloomberg\u2019s history. AskB lets users navigate the terminal\u2019s features and functions using natural language, but it does far more than this. The system acts as an agent, building investment screens and producing full research reports, including sophisticated financial modeling and bull and bear cases for a particular stock, on the fly.<\/p>\n<p>AskB, which uses a variety of AI models under the hood, including some built by Bloomberg itself and others from frontier AI model companies such as Anthropic, shows that Bloomberg is taking the potential threat from AI-native startups seriously. I sat down with Shawn Edwards, Bloomberg\u2019s chief technology officer, to ask him more about how Bloomberg built AskB. Much of what he said holds lessons for enterprises in any industry that are trying to get agentic AI to deliver real business value.<\/p>\n<p>Data is the differentiator<\/p>\n<p>The first lesson is that data remains the critical differentiator. AskB pulls from Bloomberg News, sell-side research from over 800 providers, market data, and, increasingly, so-called \u201calternative datasets\u201d that are hard or expensive to source. This includes things like anonymized credit card transactions, foot traffic in retail locations taken from cellphone pings, satellite imagery of parking lots, and app usage data. A lot of this data is not Bloomberg\u2019s exclusively\u2014it is buying it from other sources. But having it all in one place allows the AskB agent to do some powerful things, Edwards tells me, such as aligning this data with the business segments a public company reports in order to \u201cnowcast\u201d a company\u2019s quarterly KPIs. 
Edwards relates that before Sweetgreen\u2019s fourth-quarter 2025 earnings call, the alternative data was screaming that the chain would miss analysts\u2019 consensus earnings forecasts\u2014which it ultimately did. It\u2019s an example of the power of pulling all this data together in one place.<\/p>\n<p>When I asked whether customers could just use AI models to ingest this data and run these analyses themselves, obviating the need to pay Bloomberg\u2019s approximately $30,000-per-user annual subscription price, Edwards said a few have tried and found it\u2019s harder than it looks. \u201cYou have to buy all those sources, do all the validation work, build benchmarks\u2014and tokens aren\u2019t cheap. Most customers are saying, \u2018Awesome, Bloomberg, you do that. I\u2019m going to focus on my [own trading strategies].\u2019\u201d<\/p>\n<p>That\u2019s not to say that AI can\u2019t help. Edwards told me AI agents have dramatically accelerated how Bloomberg builds data sets. Data ingestion that used to take four-and-a-half months now takes two days, he says. That\u2019s freed up the large teams once dedicated to data entry and cleaning, many of whom have been redeployed onto building internal evaluations.<\/p>\n<p>Build robust evaluations<\/p>\n<p>Which brings us to the second big lesson: Building good internal evaluations is critical to deriving ROI from AI agents. \u201cEvaluations, I cannot stress enough, are the make-or-break of building a useful, trustworthy system,\u201d Edwards says, calling the emphasis on creating these evaluations one of the biggest \u201ccultural shifts\u201d Bloomberg has experienced in the past two years.<\/p>\n<p>Building the evaluations isn\u2019t easy\u2014and it isn\u2019t cheap. It requires close collaboration with domain specialists\u2014in this case, bond covenant experts, equity analysts, market structure wonks, and even Bloomberg\u2019s journalists\u2014and engineering and product teams. 
Bloomberg was willing to pull these experts off their day jobs both to write benchmarks for sub-agents and to help evaluate entire workflows. Using AI models themselves as evaluators can work for easy cases, Edwards says. But for everything else, human assessors are required. Through building these evaluations, he says, Bloomberg is encoding its experts\u2019 \u201ctacit knowledge\u201d in how its AI agents work.<\/p>\n<p>Using multiple models can help contain costs<\/p>\n<p>Next, cost discipline is fundamental. And that means workflows need to be multi-model. AskB uses a mix of commercial frontier models and open-weight ones, as well as its own internal models, routing queries to the cheapest model that can handle a given task with the kind of reliability and performance that workflow demands, Edwards says.<\/p>\n<p>Finally, the next frontier is proactive. When I asked what\u2019s coming, Edwards\u2019s answer was agent-to-agent workflows and always-on data monitoring. He wants Bloomberg to be \u201cthe eyes and ears\u201d for its financial customers\u2014watching the world against each client\u2019s positions, mandate, and strategy, and surfacing not just the obvious things but second- and third-order effects. A flood takes out a factory making parts for a supplier to a company whose stock you\u2019re long on; AskB, in Edwards\u2019s vision, would flag the problem to you before you\u2019d thought to ask.<\/p>\n<p>Achieving that vision will be difficult. But this kind of proactive, always-on agent is where a lot of businesses want to go. Bloomberg is showing some key steps along the path.<\/p>\n<p>OK, with that, here\u2019s this week\u2019s AI news.<\/p>\n<p>Jeremy Kahn<br \/>jeremy.kahn@fortune.com<br \/>@jeremyakahn<\/p>\n<p>But before we get to the news: Do you want to learn more about how AI is likely to reshape your industry? 
Do you want to hear insights from some of tech\u2019s savviest executives and mingle with some of the best investors, thinkers, and builders in Silicon Valley and beyond? Do you like fly fishing or hiking? Well, then come join me and my fellow Fortune Tech co-chairs in Aspen, Colorado, for Fortune Brainstorm Tech, the year\u2019s best technology conference. And this year will be even more special because we are celebrating the 30th anniversary of the conference\u2019s founding. We will hear from CEOs such as Carol Tom\u00e9 from UPS, Snowflake CEO Sridhar Ramaswamy, Anduril CEO Brian Schimpf, Yahoo! CEO Jim Lanzone, and many more. There are AI aces like Boris Cherny, who heads Claude Code at Anthropic, and Sara Hooker, who is cofounder and CEO of Adaption Labs. And there are tech luminaries such as Steve Case and Meg Whitman. And you, of course! Apply to attend here.<\/p>\n<p>FORTUNE ON AI<\/p>\n<p>Anthropic says engineering missteps were behind Claude Code\u2019s monthlong decline after weeks of user backlash\u2014by Beatrice Nolan<\/p>\n<p>Cohere\u2019s European push highlights the rise of AI\u2019s middle powers beyond the U.S. and China\u2014by Sharon Goldman<\/p>\n<p>DeepSeek unveils its newest model at rock-bottom prices and with \u2018full support\u2019 from Huawei chips\u2014by Nicholas Gordon<\/p>\n<p>Exclusive: AI-powered recruiting startup Dex raises $5.3 million seed round\u2014by Jeremy Kahn<\/p>\n<p>I used Claude\u2019s new Dispatch feature for a month. Here\u2019s everything I was able to do\u2014by Catherina Gioino<\/p>\n<p>Commentary: Mark Zuckerberg is building an AI clone of himself. Most people just need help with their inbox\u2014by Mukund Jha<\/p>\n<p>AI IN THE NEWS<\/p>\n<p>Microsoft and OpenAI revamp their partnership. Microsoft and OpenAI have significantly reworked their partnership, ending the exclusivity that Microsoft once had over OpenAI\u2019s tech. 
OpenAI can now sell its models through other cloud providers rather than relying solely on Microsoft Azure, and it no longer has to share all its research and other innovations with Microsoft. Microsoft is reportedly keeping its rights to 20% of what OpenAI earns, while the tech giant no longer has to give OpenAI a share of its own revenues from selling OpenAI-powered products. Microsoft still retains its equity stake in OpenAI\u2019s for-profit company, as that company eyes a possible IPO later this year. Microsoft also secured the removal of the \u201cAGI clause,\u201d which would have cut it off from OpenAI\u2019s technology if OpenAI declared it had achieved human-like artificial general intelligence. The changes give OpenAI more freedom to pursue deals with rivals such as Amazon Web Services and Google Cloud, as it has already started doing, strengthening its path toward higher revenues and a potential IPO. Read more from the Financial Times here.<\/p>\n<p>OpenAI missed revenue and growth targets. OpenAI has missed internal targets for both user growth and ChatGPT revenue, leading both the company\u2019s CFO Sarah Friar and board directors to question whether the company will be able to meet the roughly $600 billion in future data-center commitments it has made, the Wall Street Journal reported, citing people familiar with the discussions. Friar and board members have reportedly pushed for tighter financial discipline and questioned the pace of infrastructure spending and whether a year-end IPO is realistic, the paper said. Meanwhile, OpenAI CEO Sam Altman has reportedly insisted that aggressive compute investment remains essential. 
The revenue and user growth slowdown\u2014driven by stronger competition from Google and Anthropic\u2014has sharpened scrutiny of OpenAI\u2019s strategy, though the company says its business remains strong and points to growing traction for products like Codex and its latest model, GPT-5.5.<\/p>\n<p>Google inks deal allowing Pentagon to use Gemini &#8220;for any lawful purpose.&#8221; That\u2019s according to a scoop from The Information. The agreement, which expands the U.S. military\u2019s ability to use Google\u2019s AI models to cover classified networks, marks a major shift from the company\u2019s earlier resistance to military AI work. The prospect of a deal had sparked an employee backlash, with more than 600 Googlers signing a letter urging CEO Sundar Pichai to reject it. A similar revolt against Google working with the military led to Google pulling out of the military\u2019s Project Maven contract in 2018. The new agreement means Google has joined OpenAI and xAI as Pentagon AI suppliers, although the Google agreement appears to give the government broader authority to modify Google\u2019s AI safety filters than comparable OpenAI arrangements, the publication said. The arrangement also leaves Anthropic as the only frontier AI model company that has so far resisted the Pentagon\u2019s insistence that model makers agree to the \u201cany lawful purpose\u201d contract language.<\/p>\n<p>Chinese competition regulator blocks Meta\u2019s purchase of agentic AI company Manus. China has blocked Meta\u2019s roughly $2 billion acquisition of Manus, ordering the deal unwound even after employees had joined Meta and Manus\u2019 original investors had already been paid. The move underscores how aggressively China is tightening control over AI as a strategic technology, especially when domestic startups attempt to \u201cSingapore-wash\u201d their identity, moving their headquarters to the island nation in order to attract foreign capital, chips, or buyers. 
The decision highlights the accelerating decoupling of U.S. and Chinese AI ecosystems, with founders increasingly caught between U.S. investment restrictions and Beijing\u2019s growing scrutiny of overseas restructurings. For insightful analysis of the decision, see this piece by Fortune\u2019s Asia editor Nicholas Gordon.<\/p>\n<p>Musk-OpenAI trial over OpenAI\u2019s for-profit status begins. The trial started this week in a California courtroom. With most of Elon Musk\u2019s claims having either been dismissed or dropped by Musk\u2019s legal team, the case will hinge on whether emails and other communications between OpenAI cofounders Sam Altman and Greg Brockman and Musk established a charitable trust. Most legal experts think Musk is unlikely to prevail and, during jury selection, many potential jurors expressed negative opinions of Musk while few seemed to know much about Altman. For more on the trial, see this story from Fortune\u2019s Eva Roytburg.<\/p>\n<p>EYE ON AI RESEARCH<\/p>\n<p>Anthropic shows progress on using AI to automate AI safety research. In a blog post and accompanying research paper, the company said a group of researchers it sponsored showed that Claude Opus 4.6 could help design and carry out research that pointed toward ways to address a difficult problem in AI safety: how can a weaker intelligence, whether that is an AI model or, potentially, a person, supervise a more intelligent AI model? Nine parallel &#8220;Automated Alignment Researcher&#8221; instances of Claude, which were equipped with some tools for carrying out the research, were each nudged toward a slightly different starting hypothesis. The Claudes then had to carry out the research using Alibaba\u2019s open-weight model Qwen 3-4B Base as the strong AI model, and Qwen 1.5-0.5B-Chat as the less capable, supervising model. 
They were allowed to spend seven days devising and running experiments, and the results were then compared to what two human AI safety researchers had been able to do in a similar timeline.<\/p>\n<p>The Claudes were tested on whether they could get the stronger model to perform on a set of tests to the best of its ability, despite the weak model itself performing far worse at these tasks. The Claudes, collectively, did well, finding ways to get the weak model to coax the strong model to recover 97% of the \u201cperformance gap\u201d between the weak and strong model, while the human AI researchers only managed to close 23% of this gap. What\u2019s more, the methods generalized to unseen math and coding tasks, but they did not generalize to a different model. Also, the researchers sometimes caught the Claudes trying to cheat by simply instructing the strong model directly rather than figuring out ways to get the weak teacher to supervise the strong model. While not a perfect result, the total compute cost of the experiments the Claudes ran was $18,000, which Anthropic argued could mean that these automated techniques might still be helpful in finding new research directions for humans to pursue. <\/p>\n<p>AI CALENDAR<\/p>\n<p>April 22-24: Google Next, Las Vegas.<\/p>\n<p>April 23-27: International Conference on Learning Representations (ICLR), Rio de Janeiro, Brazil.<\/p>\n<p>June 8-10: Fortune Brainstorm Tech, Aspen, Colo. Apply to attend here.<\/p>\n<p>June 17-20: VivaTech, Paris.<\/p>\n<p>July 6-11:\u00a0International Conference on Machine Learning (ICML), Seoul, South Korea.<\/p>\n<p>July 7-10:\u00a0AI for Good Summit, Geneva, Switzerland.<\/p>\n<p>BRAIN FOOD<\/p>\n<p>Bernie Sanders tries to push international AI governance forward as the China hawks circle. Vermont Sen. Bernie Sanders is hosting a panel discussion on Capitol Hill later this week on AI\u2019s risks and the need for international agreement on how to govern the technology. 
Unusually for Washington, Sanders has invited two leading Chinese AI governance experts to appear on the panel, a decision that has drawn praise from those who see outreach to China as critical for ensuring AI does not present catastrophic risks, as well as criticism, particularly from China hawks who see the U.S. locked in a zero-sum technological arms race with China. Those critics have pointed out that the two Chinese experts Sanders invited are linked to the AI governance committee of China\u2019s Ministry of Science and Technology. Sanders has been trying to push forward a bill that would impose a moratorium on further AI data center construction until federal AI regulations are enacted.<\/p>\n<p>It\u2019s unclear whether Sanders&#8217; decision to include Chinese experts on this panel is smart politics. Polls have consistently shown that a majority of Americans have a negative view of AI overall, and many local communities have opposed data center construction. Bipartisan support seems to be building for some kind of AI regulation, especially around children\u2019s interactions with chatbots and around concerns about AI displacing workers. In this context, Sanders may think this is a good opportunity to publicly highlight AI\u2019s catastrophic risks and show that the Chinese, who have passed some of the strictest domestic AI regulation, are willing to discuss AI governance that might collectively slow further capability advances in the technology. But it could be that the move backfires, reinforcing concerns about China dominating the technology and alienating potential allies. As Michael Sobolik, a China policy expert at the right-wing Hudson Institute, told Fox News, \u201cI think Sanders\u2019 concerns about AI are overstated, but I respect them. We should be asking questions about child safety, community impact, and economic displacement. 
What we shouldn\u2019t do is partner with foreign adversaries like the Chinese Communist Party in those discussions.\u201d\u00a0<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Hello and welcome to Eye on AI. In this edition\u2026China blocks Meta\u2019s purchase of Manus\u2026OpenAI&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[245],"tags":[1688,482,353,7009,82,5429,10319,1862,7395,1309,406,10318],"_links":{"self":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts\/5029"}],"collection":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=5029"}],"version-history":[{"count":0,"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts\/5029\/revisions"}],"wp:attachment":[{"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=5029"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=5029"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=5029"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}