{"id":6448,"date":"2026-05-16T10:50:23","date_gmt":"2026-05-16T10:50:23","guid":{"rendered":"https:\/\/stock999.top\/?p=6448"},"modified":"2026-05-16T10:50:23","modified_gmt":"2026-05-16T10:50:23","slug":"ais-cyborg-problem-you-have-to-embrace-it-to-really-succeed-but-90-of-people-cant-or-dont-want-to","status":"publish","type":"post","link":"https:\/\/stock999.top\/?p=6448","title":{"rendered":"AI&#8217;s cyborg problem: you have to embrace it to really succeed but 90% of people can&#8217;t or don&#8217;t want to"},"content":{"rendered":"<p><img src=\"https:\/\/fortune.com\/img-assets\/wp-content\/uploads\/2026\/05\/GettyImages-656172343-e1778788999655.jpg?w=2048\" \/><\/p>\n<p>A few weeks ago, I became briefly famous for the wrong reasons.<\/p>\n<p>The Wall Street Journal ran a piece about how I use AI in my work as an editor at Fortune \u2014 prompting drafts, synthesizing interviews, and accelerating a reporting process that used to take me twice as long. The response was swift, loud, and chaotic. The \u201cjournalism community\u201d was divided as editors perked up and reporters recoiled. Strangers on the internet called me lazy. A few journalists told me privately they were doing the same thing and would never admit it. One reader asked to meet for coffee specifically to explain why I was wrong.<\/p>\n<p>I had not expected this. I had expected, maybe, curiosity. What I got instead felt like something older and more personal than a debate about journalism ethics \u2014 more like the look you get when a coworker figures out a shortcut and doesn\u2019t share it.<\/p>\n<p>I\u2019ve been trying to understand the reaction ever since. The person who finally gave me a framework for it wasn\u2019t a media critic or a journalism professor. 
She was a neuroscientist who has spent 30 years wiring AI into human beings.<\/p>\n<p>The experiment<\/p>\n<p>Vivienne Ming\u2019s career began in 1999, when her undergraduate honors thesis \u2014 a facial analysis system trained to distinguish real smiles from fake ones, which she proudly told me was partly funded by the CIA for lie-detection research \u2014 introduced her to machine learning before most people had even heard the term. She went on to build one of the first learning AI systems embedded in a cochlear implant, a model that learned to hear within a human brain that was also learning to hear. She has since founded companies applying AI to hiring bias, Alzheimer\u2019s research, and postpartum depression. For three decades, her self-appointed mission has been to take a technology most people misunderstand and figure out how to use it to make the world better.<\/p>\n<p>courtesy of Vivienne Ming<\/p>\n<p>Last year, she ran an experiment that got a lot of attention for what she\u2019s called the \u201ccognitive divide\u201d and even a \u201cdementia crisis.\u201d But she told me it clarified something she had long suspected.<\/p>\n<p>Ming recruited teams of UC Berkeley students to use AI tools to predict real-world outcomes on Polymarket \u2014 the forecasting exchange where professionals with real money bet on geopolitical events, commodity prices, and economic indicators. The task was specifically designed to be impossible to game from memory: no amount of studying would tell you what a barrel of oil would cost in six months. She wanted to see not whether AI helped, but\u00a0how\u00a0humans used it \u2014 and what that revealed about the humans themselves.<\/p>\n<p>She also put EEG monitors on some participants.<\/p>\n<p>What the brain scans showed, before she had even fully analyzed the behavioral data, was something out of a Marvel comic. 
When most students handed a question to the AI and submitted the answer, their gamma wave activity \u2014 the neural signature of cognitive engagement \u2014 dropped by roughly 40%. \u201cThat would be the equivalent of going from working on a hard math problem to watching TV,\u201d she told me. These were bright students at a top university. With access to the most powerful AI tools in the world, they had become, in her words, \u201ca very expensive copy-paste function that needed health insurance.\u201d<\/p>\n<p>She calls this group the\u00a0automators. They were the majority.<\/p>\n<p>A second group \u2014 the\u00a0validators\u00a0\u2014 used AI differently: to confirm what they already believed. They cherry-picked supporting evidence, ignored pushback, submitted answers that reflected their priors more than the data. They performed\u00a0worse\u00a0than AI operating alone.<\/p>\n<p>Then there was the third group. Small \u2014 she estimates 5% to 10% of the general population. When she analyzed their interaction transcripts, something unusual appeared: you couldn\u2019t tell who was making the decisions. The human and the machine were genuinely integrated. The humans would explore \u2014 surfacing hypotheses, chasing hunches, venturing into territory the data didn\u2019t obviously support. The AI would ground them, correcting overreach, pulling back toward evidence. The human would update and push further. Round after round.<\/p>\n<p>Ming calls them\u00a0cyborgs. They outperformed the best individual humans in the study and they outperformed the best AI models running alone. They were roughly on par with Polymarket\u2019s expert markets \u2014 professionals with millions of dollars on the line.<\/p>\n<p>Here is the detail that most surprised her: it barely mattered whether the cyborg teams used a state-of-the-art model or a cheap open-source one you could run on a phone. 
The benchmarks that AI companies obsess over \u2014 the ones cited in Senate hearings and investor decks and every major tech announcement \u2014 predicted almost nothing about outcomes. What predicted everything was the quality of the human.<\/p>\n<p>The four qualities<\/p>\n<p>Ming isolated four traits that reliably predicted whether someone became a cyborg or an automator. They are worth naming carefully, because they matter more than anything else in this story.<\/p>\n<p>Curiosity \u2014 the disposition to keep searching even when the AI has given you a good enough answer. Fluid intelligence \u2014 the ability to reason through novel problems that don\u2019t fit existing templates. Intellectual humility \u2014 the willingness to update your beliefs when the machine pushes back, rather than digging in or collapsing entirely. Perspective-taking \u2014 the ability to model how others see the world, to explore possibilities that the data doesn\u2019t obviously surface.<\/p>\n<p>Ming notes that these same four traits, measured in children, predict lifetime earnings and all-cause mortality rates. \u201cThere\u2019s a reason these things are predictive of life outcomes, because they change how we engage with the world.\u201d They are not incidental or peripheral qualities. They are the deepest measures of human capability we have \u2014 and they are almost entirely absent from the hiring systems and educational frameworks that currently sort people into careers.<\/p>\n<p>courtesy of McKinsey<\/p>\n<p>A week later, I was sitting across from Kate Smaje at McKinsey\u2019s office on the 61st floor of 3 World Trade Center. 
Smaje is the consulting giant\u2019s global leader of technology and AI, and I started to think she had been eavesdropping on my call with Ming.<\/p>\n<p>Across hundreds of client engagements on every continent, in every major industry, she has arrived at a list of four human skills that remain essential and irreplaceable in an AI-augmented world. These are: Judgment \u2014 the ability to decide what matters when you\u2019re drowning in more output than you can process. Conceptual problem-solving \u2014 the capacity to create something net new, to see connections that even sophisticated models miss. Empathy \u2014 the depth of genuine human-to-human understanding that no machine can replicate. Trust \u2014 the scarce resource in a world of AI-generated abundance, built only through human relationships. They map almost directly onto Ming\u2019s list. Judgment: fluid intelligence. Conceptual problem-solving: curiosity. Empathy: perspective-taking. Trust: intellectual humility.<\/p>\n<p>\u201cI fundamentally believe that the world is going to need really great humans,\u201d Smaje told me, adding that she sees this as the most underappreciated insight in the entire AI transition. Organizations are not failing, she explained, because they can\u2019t get the technology. \u201cThey\u2019re failing because they didn\u2019t put in place the level of human change that needed to sit around it.\u201d<\/p>\n<p>Where I come in<\/p>\n<p>When Ming described the cyborg profile to me, I told her (with as much intellectual humility as possible) that it sounded like me. In my journalism, I consider the AI to be handling a lot of the well-posed work \u2014 what does this transcript say, how does this connect to that data \u2014 while I try to handle the ill-posed work: what is the real story here, what does this mean, why does it matter.<\/p>\n<p>My process isn\u2019t complicated. 
I use AI to generate first drafts from my notes, to find angles I might have missed, to synthesize large amounts of material quickly. Then I check everything \u2014 every quote against the original transcript, every claim against the source. I ask the AI what I\u2019m missing. I push back when it goes in a direction I don\u2019t recognize. I try to stay in control of the ideas. And it\u2019s true: I have been thinking of myself as more and more of a cyborg for months now.<\/p>\n<p>Ming responded with an idea she writes about in her new book, Robot-Proof: the difference between what she calls \u201cwell-posed problems\u201d and \u201cill-posed problems.\u201d The former are problems where we understand the question and know how to get the answer, and machines, especially AI, are superhuman at solving them. But machines haven\u2019t been very effective at tackling ill-posed problems.<\/p>\n<p>\u201cI think most interesting problems in the world are ill-posed,\u201d Ming said, adding that she sees a world struggling to adjust because it\u2019s been built for much easier problems. \u201cWe built a whole employment system that\u2019s based on people getting some degree of an education to answer well-posed questions that nowadays are better answered by a machine.\u201d This could explain much of the backlash \u2014 and much of the scramble within the C-suite, as boards ask McKinsey leaders like Smaje to suddenly pivot their companies from well-posed to ill-posed problems.<\/p>\n<p>Fear of other people<\/p>\n<p>Ming has a name for what was underneath the response I received. \u201cMost of our fears about AI,\u201d she told me, \u201care fears about other people.\u201d<\/p>\n<p>Her answer surprised me with its specificity. She wasn\u2019t dismissive of AI risk. She said she worries about autonomous weapons and about hiring, medical, and policing algorithms making civil-rights decisions in milliseconds, built by companies with no fiduciary obligation to the people they affect. 
These are real concerns.<\/p>\n<p>But the ambient dread \u2014 the kind that fills comment sections and manifests as professional outrage when a colleague admits to using a tool differently than expected \u2014 is not, she argues, really about the technology. It is the specific anxiety of watching someone else gain leverage you haven\u2019t figured out how to gain yourself. A cyborg colleague doesn\u2019t just work faster. They implicitly change what the job is, and in doing so, indict the way you\u2019ve been doing it.<\/p>\n<p>Other people I spoke with for this piece had each, in their own way, run into the same wall.<\/p>\n<p>courtesy of Bret Greenstein<\/p>\n<p>A wall of framed Marvel comics surrounded Bret Greenstein, chief AI officer at the consulting firm West Monroe, as he told me about the psychological resistance he most often encounters when helping organizations adopt AI. It\u2019s not confusion or skepticism, but identity. \u201cPeople identify as \u2018the person who makes the PowerPoint\u2019 and \u2018the person who fills in the Excel\u2019 and \u2018the person who, you know, writes the thing,\u2019\u201d he said. Those identities obscure the fact that in the world of work, you\u2019re really a person who makes decisions more than a person who does a thing. He agreed that he may be predisposed to welcome the cyborg future as someone who, like me, has been reading Marvel comics most of his life and already saw it expressed in the form of, say, Iron Man, aka Tony Stark.<\/p>\n<p>West Monroe calculated that AI added the equivalent of 320 full-time employees\u2019 worth of output in six months without adding headcount, according to Greenstein. He said that when he showed people what was possible, some lit up. Others shut down \u2014 not because the technology was hard, but because it made their sense of professional self suddenly feel unstable. 
<\/p>\n<p>courtesy of EY-Parthenon<\/p>\n<p>Mitch Berlin, Americas vice chair at EY-Parthenon, the strategy consulting arm of the Big Four giant, told me that he\u2019s largely not seeing resistance, at least in conversations with C-suite leaders. The people he talks to are \u201cpretty on board and excited right now,\u201d he said, citing a recent survey by his firm showing that the overwhelming majority see AI as a lever for both growth and productivity. He described the current landscape as a \u201cgap\u201d between \u201cthe acknowledgement that it\u2019s there and it\u2019s not going away, but how do you actually implement it in your organization?\u201d In other words, there aren\u2019t enough cyborgs in the workforce, or they haven\u2019t yet been identified, or identified themselves.<\/p>\n<p>courtesy of Gad Levanon<\/p>\n<p>Gad Levanon, chief economist at the Burning Glass Institute and one of the country\u2019s leading labor experts, has watched anti-AI sentiment consolidate along a striking demographic line: \u201chighly educated liberals,\u201d disproportionately in creative and knowledge professions. \u201cGenerative AI is a real threat to many professions that many liberals have,\u201d he told me \u2014 journalism, design, writing, academia. He wasn\u2019t entirely unsympathetic to the underlying anxiety: these are people watching a tool emerge that targets exactly what they spent years and significant money becoming good at. He, for one, said he welcomed the chance to become a cyborg. \u201cI don\u2019t write easily. Like, it doesn\u2019t come easy to me. And I\u2019m also not a native speaker. So for me, it was a big difference. I usually give it, like, bullet points and ask it to develop the prose out of that.\u201d<\/p>\n<p>Dror Poleg, an economic historian whose forthcoming book focuses on how to thrive in a world of intensifying uncertainty, inequality, and volatility, offered a more precise diagnosis. 
He pointed to remote work as a template for understanding what\u2019s happening with AI resistance now: the technology didn\u2019t create a new reality so much as force people to confront one that had been quietly arriving for years. \u201cAI is like a catalyst, or a forcing function,\u201d he told me, \u201ca bit like COVID forced us to realize things about remote work and the internet that maybe were true five or 15 years before COVID.\u201d<\/p>\n<p>courtesy of Dror Poleg<\/p>\n<p>Poleg argued that for 50 years, the economy\u2019s center of gravity has been shifting toward producing intangible rather than tangible things, meaning \u201cmore inequality, more uncertainty, more professions, fewer places to hide, like fewer normal jobs where you can just learn something, and that knowledge will remain useful for the next 20, 30, 40 years, and you\u2019ll just do the same thing.\u201d AI is simply the thing that made this shift visible: a technology that has existed for decades but took on a new face over the last four years.<\/p>\n<p>What\u2019s actually at stake<\/p>\n<p>The stakes beneath the culture war are significant enough to warrant separation from it.<\/p>\n<p>Levanon\u2019s reading of the labor data is that the economy is bifurcating in a specific and underreported way. Entry-level white-collar positions \u2014 the apprenticeship layer of professional careers \u2014 are quietly disappearing, hollowed out first because they are composed almost entirely of what Ming calls well-posed problems: tasks with known methods and computable answers. This is not a prediction about the future. Young college graduates are already feeling it, competing for fewer entry points in professions that once reliably absorbed them. Levanon\u2019s own daughter, a recent graduate, took far longer than expected to find work. 
Her friends are still looking.<\/p>\n<p>The Microsoft AI Diffusion Report for Q1 2026 quantifies the pace: global AI adoption grew 1.5 percentage points in a single quarter, with the Global North now at 27.5% of the working-age population versus 15.4% in the Global South \u2014 a divide widening as adoption grows twice as fast in wealthier economies. Within countries, a similar split is forming among individuals: between those learning to work with these tools and those who haven\u2019t, or won\u2019t.<\/p>\n<p>courtesy of Microsoft<\/p>\n<p>Ming frames this split with more precision than most. She said she agrees with the Jevons paradox argument, increasingly popular on Wall Street and on the lips of Anthropic\u2019s Dario Amodei, that efficiency gains tend to expand demand rather than shrink it. The problem, she added, has more to do with resistance to our coming cyborg future. \u201cIt\u2019s going to create more jobs, but the thing no one\u2019s saying is, who\u2019s going to be qualified to fill these jobs?\u201d<\/p>\n<p>Explaining that she expects demand for both well-posed (low-pay, low-autonomy) and ill-posed (high-pay, high-creativity) labor, she said she sees the labor supply for the latter as highly inelastic. Just because there\u2019s more demand for creative problem solvers doesn\u2019t mean workers will get more creative. \u201cWe\u2019re acting as though demand automatically produces supply,\u201d she said. \u201cThere\u2019ll be lots of jobs. Most of them will be mediocre and have little autonomy. And the ones that people really want will become even more esoteric, and the competition for that elite labor will go up.\u201d After all, she added, there is no six-week job retraining program for cyborgs.<\/p>\n<p>Levanon, who has tracked white-collar labor markets longer than most in his field, sees the same bifurcation arriving in the data. 
His forecast is for a prolonged period of labor market \u201csoftness\u201d \u2014 potentially spanning decades \u2014 driven not by a collapse in the number of jobs but by \u201ckind of like a race between job elimination and job creation.\u201d He drew an analogy to the manufacturing hollowing of the Midwest in the 1990s and 2000s: devastating for the communities it hit, but invisible to everyone else precisely because it was concentrated in places and populations the professional class didn\u2019t have to look at. \u201cIf the manufacturing thing happened to the entire population rather than just the manufacturing communities,\u201d he told me, \u201cit would have been a very, very big shock.\u201d<\/p>\n<p>The false productivity trap<\/p>\n<p>Critics are not wrong to be worried, Ming said. They are wrong about what to be worried about. The automators in her study weren\u2019t bad people making lazy choices \u2014 they were doing what most humans do when handed a powerful tool and no framework for using it well. They optimized for the appearance of productivity rather than its substance. The machine lowered their cognitive load, and they accepted the gift without asking what it cost them.<\/p>\n<p>Unprompted, McKinsey\u2019s Smaje warned me about the same problem in a separate conversation. \u201cYou have to be careful in this environment of not falling into the false productivity trap,\u201d she said. 
Maybe you are doing so much more than you did before, \u201cbut that doesn\u2019t mean that that more and more and more is valuable.\u201d This is a question increasingly coming up in media circles, as the erosion of Google search results leads away from SEO-optimized trending news and toward more original reporting, like the story you\u2019re reading now, from the industry\u2019s supposed \u201cAI guy.\u201d<\/p>\n<p>Ming has been arguing for a generation that education systems need to change \u2014 away from passive absorption of well-posed answers, toward active cultivation of exactly these traits. Nothing has changed. She is not sanguine about the timeline. But she is still running experiments, still building companies, still asking what she is missing.<\/p>\n<p>That last part, I think, is the whole point.<\/p>\n<p>Some people really are getting further ahead as cyborgs in this new economy, and I\u2019ve talked to some of them, like the millionaire janitor in Canada who\u2019s using AI agents to read his emails and schedule his appointments, or the three-person startup with agent colleagues that became instantly profitable selling medical aesthetics in Texas.<\/p>\n<p>The backlash I received was, in its way, a gift. Not because it was fair \u2014 I don\u2019t think it was \u2014 but because it was clarifying. The argument was never really about whether I fact-checked my quotes or disclosed my process. It was about something older: the anxiety of a professional class watching the tools of their trade become accessible to more people, in more configurations, with less gatekeeping than before.<\/p>\n<p>The EEG data suggest that getting mad about it is, neurologically speaking, the equivalent of watching TV.<\/p>\n<p>For this story,\u00a0Fortune\u00a0journalists used generative AI as a research tool. 
An editor verified the accuracy of the information before publishing.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>A few weeks ago, I became briefly famous for the wrong reasons. The Wall Street&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[245],"tags":[1349,4687,4425,12199,1462,12200,12198,363,823,1693,9367,1779,2851],"_links":{"self":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts\/6448"}],"collection":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=6448"}],"version-history":[{"count":0,"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts\/6448\/revisions"}],"wp:attachment":[{"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=6448"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=6448"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=6448"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}