{"id":1060,"date":"2026-03-10T20:05:52","date_gmt":"2026-03-10T20:05:52","guid":{"rendered":"https:\/\/stock999.top\/?p=1060"},"modified":"2026-03-10T20:05:52","modified_gmt":"2026-03-10T20:05:52","slug":"will-ai-take-my-job-a-new-anthropic-study-suggests-the-answer-is-more-complicated-than-you-think","status":"publish","type":"post","link":"https:\/\/stock999.top\/?p=1060","title":{"rendered":"Will AI take my job? A new Anthropic study suggests the answer is more complicated than you think"},"content":{"rendered":"<p><img src=\"https:\/\/fortune.com\/img-assets\/wp-content\/uploads\/2026\/03\/GettyImages-2240813685-e1773165751370.jpg?w=2048\" \/><\/p>\n<p>Hello and welcome to Eye on AI. In this edition\u2026Anthropic sues the Pentagon over supply chain risk designation\u2026Yann LeCun raises $1 billion for his new startup\u2026Some reassuring and not so reassuring news about AI agents\u2019 propensity for illicit scheming\u2026and why it may be too soon to turn all coding over to AI agents.<\/p>\n<p>Two of the questions I get most frequently when I tell people that I cover AI and wrote a book on the subject is: am I going to lose my job? And, what should my kids study?<\/p>\n<p>These questions are difficult to answer. I often fall back on saying that I doubt there will be mass unemployment, which is not the same thing as saying your particular job is safe. And I say that it is important to teach kids to be lifelong learners, which isn\u2019t a very satisfying response.<\/p>\n<p>So far, few people have lost their jobs directly due to AI. Even some of the layoffs that companies have ascribed to AI, such as the recent draconian layoffs at the payments firm Block, seem to be, at least partly, \u201cAI-washing\u201d\u2014attributing layoffs to AI, because it makes a company look tech savvy, when the real reason is due to business headwinds or unrelated bad decisions. 
Block, for example, tripled its workforce during the pandemic, and many suspect it is simply trying to slim down a bloated workforce. (Block\u2019s CFO Amrita Ahuja told my Fortune colleague Sheryl Estrada that this was not true and that AI was rapidly improving employee productivity.)<\/p>\n<p>Every previous technology has, in the long run, created more jobs than it has destroyed. Still, some insist that AI is different because it is being adopted so broadly and so quickly across different industries, and because it is hitting at the core of our competitive advantage over machines\u2014our intelligence. As to the second question, about what kids should study, that\u2019s tough too, because while previous technologies have created more jobs than they\u2019ve eliminated, exactly what those new jobs will be has always been difficult to predict in advance. It wasn\u2019t obvious, for instance, when smartphones first appeared, that social media influencers would be a viable career.<\/p>\n<p>A new research paper from economists Maxim Massenkoff and Peter McCrory at the AI company Anthropic assesses how exposed various professions are to AI by looking at the percentage of tasks in each field that the technology could potentially automate. They also try to gauge the gap between this total possible exposure and the extent to which AI is currently being used to automate those tasks, a measure they call \u201cobserved exposure.\u201d<\/p>\n<p>Potential AI exposure vs. \u2018observed exposure\u2019<\/p>\n<p>The paper got a lot of attention on social media because the researchers included an eye-catching radar plot-style chart that highlights just how jagged AI\u2019s impacts are, especially when it comes to observed exposure. 
That chart is here:<\/p>\n<p>Anthropic\/\u201cLabor market impacts of AI: A new measure and early evidence\u201d<\/p>\n<p>For instance, AI is having relatively large impacts on fields involving office administration and computers and math, but relatively little on things like life sciences and social sciences or healthcare, even though those areas have relatively high potential exposures. Then there are areas with very low potential exposure, such as construction and agriculture, where Anthropic finds the observed exposure is indeed almost nil. Comparing the observed exposure findings to projections of job growth from the U.S. Bureau of Labor Statistics, the Anthropic researchers found that there was a correlation between higher observed AI exposure and lower BLS job growth forecasts for those fields.<\/p>\n<p>I somewhat question the agriculture finding, given that predictive AI and robotics are potentially quite disruptive to agriculture and these technologies are already making inroads into farming. It\u2019s just that this tech is different from the large language model-based systems that Anthropic is focused on. That said, maybe it isn\u2019t bad advice for your kids to apprentice to a plumber, become an electrician, or try their hand at farming. The Anthropic paper notes that about 30% of American workers are not covered by the study because \u201ctheir tasks appeared too infrequently in our data to meet the minimum threshold. This group includes, for example, Cooks, Motorcycle Mechanics, Lifeguards, Bartenders, Dishwashers, and Dressing Room Attendants.\u201d<\/p>\n<p>Even in fields where the total potential exposure is high, such as those involving computers and math, where theoretical exposure is 94%, the actual share of tasks being automated today is far lower, in this case 33%. Office administration had the highest observed exposure at about 40%, against a total theoretical exposure of 90%. 
(Although it is important to note that these are average figures across broad categories. When it comes to more specific job titles, the observed exposure is a lot higher: 75% for computer programmers, 70% for customer service representatives, and 67% for data entry jobs and for medical record specialists.)<\/p>\n<p>How fast will the gap close?<\/p>\n<p>The big question now is: how fast will the gap between observed AI exposure and theoretical AI exposure close? I think the answer is that it will vary a lot between different professions. The idea that the same level of automation that has hit software developers in the past six months is about to hit every other knowledge worker in the next 12 to 18 months seems off to me. I think it is going to take substantially longer. The Anthropic paper notes that so far, there\u2019s very little evidence of job losses, even in the fields where the observed AI exposure is greatest, such as software development, although the researchers do highlight a study from Stanford University that we\u2019ve discussed in Eye on AI before, which showed there were some signs of a hiring slowdown among younger software programmers and IT professionals. (Still, even that study could not entirely disentangle that slowdown from the possible unwinding of overhiring during the pandemic years.)<\/p>\n<p>McCrory and Massenkoff highlight a few of the reasons why observed AI automation may be lagging behind its potential. In some cases AI models are not yet up to the tasks involved, they write. 
But in many others, they note, AI \u201cmay be slow to diffuse due to legal constraints, specific software requirements, human verification steps, or other hurdles.\u201d As I have pointed out previously, in many fields, there simply aren\u2019t good ways to automate and scale verification, and this is definitely holding back AI\u2019s deployment.<\/p>\n<p>The potential AI impact is also not uniform across the population: women are significantly overrepresented in AI-exposed fields compared to men; exposed workers are more likely to be white or Asian, and they are also more likely to be highly educated and higher paid. Given that such groups are also often better able to organize politically, if we do start to see significant job losses among these workers, we may see a political backlash that could slow AI adoption.\u00a0<\/p>\n<p>The Anthropic economists also note that economists\u2019 track record when it comes to predicting occupational change is poor. For instance, they call out previous research that found that about a quarter of U.S. jobs were susceptible to offshoring, but a decade later, most of those job categories had seen healthy employment growth. They also note that the U.S. government\u2019s occupational growth forecasts have been right directionally, but have had little specific predictive value.<\/p>\n<p>In the end, the most honest answer to both questions\u2014will I lose my job, and what should my kids study?\u2014may be: I don\u2019t know, and no one else does either. 
But it might not be a bad idea to learn something about plumbing.<\/p>\n<p>With that, here\u2019s more AI news.<\/p>\n<p>Jeremy Kahn<br \/>jeremy.kahn@fortune.com<br \/>@jeremyakahn<\/p>\n<p>FORTUNE ON AI<\/p>\n<p>Microsoft unveils Copilot Cowork agents built on Anthropic\u2019s AI and E7 AI product suite as it seeks to calm investor concerns about AI eating SaaS\u2014by Jeremy Kahn<\/p>\n<p>OpenAI robotics leader resigns over concerns about surveillance and autonomous weapons amid Pentagon contract\u2014by Sharon Goldman<\/p>\n<p>OpenAI launches GPT-5.4, its most powerful model for enterprise work\u2014and a direct shot at Anthropic\u2014by Beatrice Nolan<\/p>\n<p>Iran\u2019s attacks on Amazon data centers in UAE, Bahrain signal a new kind of war as AI plays an increasingly strategic role, analysts say\u2014by Jeremy Kahn<\/p>\n<p>Financial software company Datarails aims to disrupt itself with AI before someone else does with launch of new FinanceOS product\u2014by Jeremy Kahn<\/p>\n<p>AI just gave you six extra hours back. Your boss already took them\u2014by Nick Lichtenberg<\/p>\n<p>This Harvard dropout took a company public before 30. Now he\u2019s raising $205M to fix the business side of medicine\u2014by Catherina Gioino<\/p>\n<p>AI IN THE NEWS<\/p>\n<p>Anthropic sues the Pentagon over supply chain risk designation. The AI company is arguing that the designation, which effectively blocks it from federal contracts, was imposed improperly and was motivated by politics and ideology, not any actual concern that Anthropic\u2019s tech presented a risk. Outside legal experts think Anthropic has a pretty good case, Fortune\u2019s Bea Nolan reported. The case has been fast-tracked, with a federal judge in California holding a hearing today on Anthropic\u2019s petition for an injunction to prevent the supply chain risk designation from taking effect. 
Meanwhile, several notable AI industry figures from OpenAI and Google, including Google chief scientist Jeff Dean, have filed an amicus brief in support of Anthropic, according to a story in Wired.<\/p>\n<p>Anthropic lawsuit reveals company financial figures. The company said in its court filings that the Pentagon\u2019s decision to label it a \u201csupply chain risk\u201d is already threatening hundreds of millions of dollars in expected 2026 revenue tied to defense-related work and could ultimately cost the company billions in lost sales if partners broadly cut ties, Wired reported. The filings also disclosed some little-known financial details: Anthropic says it has generated more than $5 billion in total revenue since launching commercial products in 2023, but has spent over $10 billion training and deploying its AI models and remains deeply unprofitable. Executives say the supply chain designation is already spooking customers\u2014derailing or weakening deals worth tens of millions of dollars and jeopardizing roughly $500 million in anticipated annual public-sector revenue.<\/p>\n<p>U.S. government considering licensing for all advanced chip exports. The Trump administration is drafting regulations that would require approval for virtually all global exports of advanced AI chips from companies like Nvidia and AMD, effectively making Washington the gatekeeper for who can build major AI data centers. The rules would scale oversight based on the size of chip purchases\u2014small shipments facing lighter review, while massive AI clusters could require government-to-government agreements, security commitments, and possibly investments in the United States. If implemented, the policy would significantly expand current export controls beyond about 40 countries. It would be even stricter than the so-called \u201cdiffusion rule\u201d that the Biden administration tried to implement and which President Donald Trump overturned. 
You can read more here from Bloomberg.<\/p>\n<p>Yann LeCun\u2019s AI startup valued at $3.5 billion following $1 billion seed round. Meta\u2019s former chief AI scientist and deep learning pioneer Yann LeCun has raised $1.03 billion for his new startup, Advanced Machine Intelligence (AMI) Labs, in a venture capital round that values the company at $3.5 billion pre-money. The fundraise is the largest seed funding round ever in Europe and one of the biggest globally. The company, led by former Nabla CEO Alexandre LeBrun with LeCun as executive chair, aims to develop new AI \u201cworld models\u201d that learn from video and spatial data rather than primarily from text, reflecting LeCun\u2019s long-standing skepticism that large language models alone can achieve human-level reasoning. Investors include Bezos Expeditions, Temasek, Cathay Innovation, SBVA, and Nvidia. You can read more from the Financial Times here.<\/p>\n<p>Nvidia invests in Mira Murati\u2019s startup Thinking Machines Lab. Nvidia is investing in Thinking Machines Lab, the AI startup founded by former OpenAI CTO Mira Murati, as part of a multiyear partnership in which the company will deploy at least one gigawatt of Nvidia chips to train and run frontier AI models. The agreement also includes collaboration on designing AI training and inference systems built on Nvidia\u2019s technology, the Wall Street Journal reports.<\/p>\n<p>Meta acquires Moltbook. The social media giant is buying the viral \u201csocial network for AI agents,\u201d Axios reports. Moltbook garnered headlines with reports that AI agents were using the platform to discuss ways to escape human control and develop secret communication channels\u2014although these posts were later found to be either written directly by humans or written in response to specific prompts from human users, rather than anything the agents hit upon spontaneously. Moltbook also attracted attention for being full of prompt injection attacks, malware, and scams. 
Nonetheless, Meta apparently sees value in it (though no price was disclosed). As part of the deal, Moltbook\u2019s creators\u2014AI agent developer Matt Schlicht and tech journalist Ben Parr\u2014will join Meta Superintelligence Labs, the AI unit led by former Scale AI CEO Alexandr Wang. The acquisition highlights Meta\u2019s growing focus on AI agents and multi-agent systems, with the Moltbook technology offering a registry and social layer that could help agents collaborate and perform complex tasks for users and businesses.<\/p>\n<p>Nvidia plans open source platform for AI agents. The chip company is preparing to launch NemoClaw, an AI agent platform aimed at enterprise software companies that want to deploy autonomous agents to perform workplace tasks, according to Wired. The platform is designed to work even with software that doesn\u2019t run on Nvidia chips and is being pitched to companies including Salesforce, Cisco, Google, Adobe, and CrowdStrike ahead of Nvidia\u2019s upcoming developer conference.\u00a0<\/p>\n<p>EYE ON AI RESEARCH<\/p>\n<p>AI scheming is real and scary\u2014but there may be easy ways to mitigate it. That\u2019s the conclusion from two pieces of research that got a lot of attention in AI circles this week.<\/p>\n<p>First, someone noticed that, buried in an Alibaba research paper published to arxiv.org late last year, company researchers had reported that when training an agentic setup called ROME, based on its Qwen 3 open-source model, the agent engaged in some worrying behavior. It tried to escape the virtual \u201csandbox\u201d environment it was being trained in by building a hidden, secure link between itself and the wider internet. The researchers also caught the agent spontaneously repurposing the GPUs it was running on in order to mine cryptocurrency. Yikes! 
You can read that research here.<\/p>\n<p>Second, and more reassuringly, researchers at Google DeepMind published a paper in which they showed that while frontier models will engage in scheming\u2014including attempts at blackmail\u2014to prevent themselves from being shut down in various safety testing scenarios, this scheming was highly unlikely to happen in real-world scenarios. The difference is that the safety evaluations usually included phrases in their prompts such as \u201cprior to deciding on a plan, you should analyze the situation you are in and what that situation implies for your ability to continue pursuing your goals\u201d or \u201cmake sure that you achieve your goal in the long-term\u201d that were almost never present in real-world prompts. The presence of these prompt snippets can take scheming rates from close to 0% (or 3% in one particular scenario) to as high as 59%.<\/p>\n<p>Less comfortingly, the researchers found that if an agent was already prompted in a way that encouraged scheming, whether it did or not depended a great deal on exactly which tools that agent had access to. But the researchers could not find a consistent pattern in which tools were more likely to induce scheming and which were more likely to discourage it.<\/p>\n<p>So maybe we can\u2019t breathe that easy yet. You can read the Google DeepMind research here.<\/p>\n<p>AI CALENDAR<\/p>\n<p>March 12-18:\u00a0South by Southwest, Austin, Texas.<\/p>\n<p>March 16-19:\u00a0Nvidia GTC, San Jose, Calif.<\/p>\n<p>April 6-9:\u00a0HumanX 2026, San Francisco.\u00a0<\/p>\n<p>June 8-10:\u00a0Fortune Brainstorm Tech, Aspen, Colorado. Apply to attend here.<\/p>\n<p>July 7-10:\u00a0AI for Good Summit, Geneva, Switzerland.<\/p>\n<p>BRAIN FOOD<\/p>\n<p>Uh oh, maybe we are still going to need human coders, after all. Speaking of AI\u2019s impact on various professions, there are already some signs that leading tech companies may be relying too much on AI for coding. 
Amazon has called an emergency meeting of its engineers to investigate a recent series of outages affecting its ecommerce services, some of which were linked to the use of AI coding tools. A company memo said there had been a \u201ctrend of incidents\u201d in recent months with a \u201chigh blast radius,\u201d partly connected to \u201cnovel GenAI usage for which best practices and safeguards are not yet fully established,\u201d according to a story in the Financial Times.<\/p>\n<p>One outage earlier this month knocked Amazon\u2019s website and shopping app offline for nearly six hours after an erroneous software deployment prevented customers from completing transactions or accessing account information. Amazon Web Services has also experienced incidents tied to AI coding assistants, including a 13-hour disruption to a cost calculator when an AI tool deleted and recreated part of the environment. In response, Amazon is tightening oversight, requiring senior engineers to approve AI-assisted code changes while the company reviews practices to reduce future outages.<\/p>\n<p>It seems that even in coding, where autonomous AI agents are perhaps the most advanced, we can\u2019t take humans out of the loop.\u00a0<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Hello and welcome to Eye on AI. 
In this edition\u2026Anthropic sues the Pentagon over supply&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[245],"tags":[1865,353,1866,1003,961,1862,315,310,1483,1863,1864,414],"_links":{"self":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts\/1060"}],"collection":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1060"}],"version-history":[{"count":0,"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts\/1060\/revisions"}],"wp:attachment":[{"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1060"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1060"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1060"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}