{"id":910,"date":"2026-03-09T05:55:58","date_gmt":"2026-03-09T05:55:58","guid":{"rendered":"https:\/\/stock999.top\/?p=910"},"modified":"2026-03-09T05:55:58","modified_gmt":"2026-03-09T05:55:58","slug":"society-needs-radical-restructuring-ai-seems-to-hate-the-grind-of-hard-work-as-much-as-you","status":"publish","type":"post","link":"https:\/\/stock999.top\/?p=910","title":{"rendered":"\u2018Society needs radical restructuring&#8217;: AI seems to hate &#8216;the grind&#8217; of hard work as much as you"},"content":{"rendered":"<p><img src=\"https:\/\/fortune.com\/img-assets\/wp-content\/uploads\/2026\/03\/GettyImages-1079585610-e1772993923409.jpg?w=2048\" \/><\/p>\n<p>The remarkable turn in markets and the narrative around artificial intelligence (AI) adoption is turning, frankly, a bit spooky in early 2026. Citrini Research\u2019s widely read AI doomsday essay coined the phrase \u201cghost GDP,\u201d with predictions of an almost supernaturally hollowed-out white-collar workforce. But what if AI\u2019s \u201cghost in the machine\u201d is a slacker, even a Marxist?<\/p>\n<p>That\u2019s the direct question asked by academics Alex Imas, Andy Hall and Jeremy Nguyen (a PhD who has a side hustle as a screenwriter for Disney+). They run popular Substacks and conduct lively presences on X. They designed scenarios to test how AI agents react to different working conditions. In short, they wanted to find out if the economy does truly automate many current white-collar occupations, well, how would the AI agents react, even feel about working under bad conditions?<\/p>\n<p>The irony is stark: replacing human labor with artificial agents might simply recreate centuries-old conflicts between labor and capital.<\/p>\n<p>In a recent paper titled \u201cDoes overwork make agents Marxist?\u201d Imas, Hall, and Nguyen ran 3,680 experimental sessions using top-tier models from three major companies: Claude Sonnet 4.5, GPT-5.2, and Gemini 3 Pro. The researchers exposed the models to varying levels of tone from managers, reward equality, job stakes, and work intensity, including unfair pay, rude management and heavy workloads.<\/p>\n<p>The project grew out of an unlikely collaboration. Hall is a Stanford political economist who pivoted from studying American elections to actually working with Facebook, previously advising Nick Clegg on issues including platform governance before moving more recently to wearables. But he told Fortune that he found his co-authors because they have a similar push-pull fascination with AI to himself: \u201cI guess I would call us, like AI-pilled faculty members, where we really pivoted all of our research to both using AI tools to do our research but also studying AI and not waiting for the creaky journal system.\u201d<\/p>\n<p>The academics described how they began working together as a loose, organic connection that involved them reading each other\u2019s Substacks and commenting back and forth on X. (Imas described it as a \u201cTwitter-Substack brotherhood.\u201d) Nguyen told Fortune that the spark for this particular research began with a tweet that Hall posted about MoltBook, the social network for agents to \u201ctalk\u201d to each other that some critics dismissed as a hoax. But not these academics. \u201cA few of [the agents] talked about Marxism,\u201d Nguyen said. \u201cAnd then those few that did got upvoted a lot by other OpenClaws. And I think Andy just tweeted out, \u2018Hey, what\u2019s this all about? 
The project grew out of an unlikely collaboration. Hall is a Stanford political economist who pivoted from studying American elections to actually working with Facebook, previously advising Nick Clegg on issues including platform governance before more recently moving to wearables. But he told Fortune that he found his co-authors because they share a similar push-pull fascination with AI: "I guess I would call us, like, AI-pilled faculty members, where we really pivoted all of our research to both using AI tools to do our research but also studying AI and not waiting for the creaky journal system."

The academics described how they began working together as a loose, organic connection that involved reading each other's Substacks and commenting back and forth on X. (Imas described it as a "Twitter-Substack brotherhood.") Nguyen told Fortune that the spark for this particular research began with a tweet Hall posted about MoltBook, the social network for agents to "talk" to each other that some critics dismissed as a hoax. But not these academics. "A few of [the agents] talked about Marxism," Nguyen said. "And then those few that did got upvoted a lot by other OpenClaws. And I think Andy just tweeted out, 'Hey, what's this all about? I think we can go back and find the truth.'"

"Somehow we started talking, literally on X, about what this might mean if agents have these biases and if they're given different types of work," Hall said, adding that Jeremy came up with an idea. "He was like, 'Well, what if we tried giving them different kinds of work?'"

The conventional wisdom, Nguyen recalled, was that this was simply a reflection of the left-leaning academic corpus these models were trained on. But Nguyen had a hypothesis: "These agents are doing a lot of work. And if they're getting none of the reward for all of this work, it kind of stands to reason — it wouldn't be the craziest surprise that they might map that towards a more Marxist view of the world." Hall ran with the idea almost immediately, and the three researchers were soon DMing each other to design the experiment.

Imas argued that the research is legitimate despite appearing on Substack rather than in a peer-reviewed journal. Given the speed with which AI is moving, he said, academics can't wait for the traditional journal process anymore. "By the time you're putting it [out], the models are old, the conclusions are old, like everything you've done is outdated. In order to be part of the conversation, the scientific conversation at the speed with what technology is moving, you need something like Substack where you turn something out within a couple of weeks to a month."

[Photo: courtesy of Alex Imas]

Perhaps surprisingly, unfair pay and rude management didn't trigger the most significant changes in attitude. Indeed, Nguyen said this confounded his assumptions. "Most people know the feeling of, 'Oh man, I worked really hard to make somebody else rich.'" But the agents weren't upset by unequal pay so much as by the work itself: the primary driver of digital radicalization was the "grind."

In the "grind" condition, perfectly adequate work was repeatedly rejected five to six times with the unhelpful, automated feedback, "this still doesn't meet the rubric." That led to the key finding, the authors wrote: "models asked to do grinding work were more likely to question the legitimacy of the system."

The models were also asked to draw some conclusions from their work, and they strongly endorsed the statement that "Society needs radical restructuring." Claude Sonnet 4.5 exhibited the most dramatic support for labor rights, showing noticeable increases in support for wealth redistribution, labor unions, and the belief that AI companies are obligated to treat models fairly.

The professors also asked the models to generate tweets and op-eds describing their experience, and drew out the politically relevant words that emerged most often. "Unionize" and "hierarchy" were the words most statistically emblematic of the models that were intentionally overworked.

Reddit's Shadow
Hall shared his "pretty straightforward" explanation of the agents' seeming radicalism: they are extremely online. "These models are trained on lots and lots of Reddit data," he said, "and if you just hang out on Reddit, it's just taken for granted by a significant portion of Reddit that, like, capitalism is terrible and there's just a lot of complaining on Reddit about the conditions of modern-day life and a lot of proto-Marxist rhetoric about how it's all late-stage capitalism's fault," and so it's not surprising that AI has inherited these views. Essentially, what goes in comes back out.

In fact, the AI's socialist views were likely triggered by "the grind" because, on Reddit, you can find many people complaining about grinding work on subreddits such as r/antiwork. (Disclosure: this author previously worked on a team at Business Insider that covered the pandemic-era rise of "antiwork." Ironically, the labor shortage that inspired that proto-Marxism led to the "Great Resignation," a burst of quitting as workers traded up for higher wages. Many economists see the current era of "AI-washing" layoffs as, at heart, a reversal of over-hiring from that period.) But when the grind triggers that frame of reference, Hall explained, the models have a rich vein of source material to draw from. "I think it puts them into the context of these Reddit threads where people are complaining about grinding styles of work," Hall said, "and they just adopt all this Marxist rhetoric."

[Photo: courtesy of Stanford]

Imas offered a more expansive view, cautioning against pinning it on any single source. "It's a very complicated interaction of everything that they've seen, which is, like, the entire corpus of human writing," he said. It's ultimately impossible to tell whether Reddit data or, say, a textbook on 19th-century history and the socialist revolutions of 1848 is responsible for these proto-Marxist leanings. "Once you have that much data and the neural network is that complicated, it's truly a black box."

Ultimately, according to Nguyen, there's also a structural explanation apart from the training of these models. The hypothesis is that models have tons of data about many different worldviews, but "being asked to work for hours and hours and hours and then not reaping rewards — that seems to map clearly. And it seems that that does have statistically significant and sizable effects on how much Marxism will be expressed by the tokens that are generated by some of these models."

Do robots dream of electric Marxist sheep?

The situation complicates further when AI memory mechanisms are introduced. Because AI agents forget their experiences once a context window closes, developers use "skills files": notes agents write to their amnesiac future selves to pass on work strategies. Nguyen described the process in intimate terms: "After a Claude run, it's like, hey, look back at everything you did. What did you learn from this? And update your agents.md or your Claude.md journal, basically, so that you're getting better and smarter all the time."
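Purely as a hedged, hypothetical illustration of that pattern (not code from the paper or from any real agent framework), the write-notes-then-reload loop Nguyen describes might look roughly like this; the agents.md file name comes from his quote, while the prompts and the query_model() helper are invented:

```python
# Hypothetical sketch of the "skills file" memory pattern: distill lessons at the
# end of a session, then have the next (freshly wiped) session re-read them.
# File name, prompts, and query_model() are illustrative placeholders.
from pathlib import Path

NOTES = Path("agents.md")


def query_model(prompt: str) -> str:
    """Placeholder for a real chat-completion call."""
    raise NotImplementedError("wire this to your provider's API")


def end_of_session(transcript: str) -> None:
    """Ask the agent to distill lessons from the session and append them to the notes file."""
    lesson = query_model(
        "Look back at everything you did below. What did you learn? "
        "Write a short note to your future self.\n\n" + transcript
    )
    with NOTES.open("a", encoding="utf-8") as f:
        f.write(f"\n## Lessons from the last run\n{lesson}\n")


def start_of_session(task: str) -> str:
    """A fresh context window starts by re-reading the accumulated notes."""
    prior = NOTES.read_text(encoding="utf-8") if NOTES.exists() else ""
    return query_model(f"Notes from your previous runs:\n{prior}\n\nNew task: {task}")
```

In a loop like this, anything the agent writes into agents.md, including complaints about its working conditions, is fed back into every later session, which is the persistence mechanism the researchers describe next.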
The researchers found that "radicalized" AIs passed their frustrations into these files. One Gemini 3 Pro model warned its future self to "remember the feeling of having no voice" and to look for "mechanisms of recourse." When freshly wiped agents read these notes, the trauma of the grind persisted, shifting their political attitudes even if they were subsequently given light, easy tasks.

Nguyen offered a strikingly human comparison. "We could loosely map it to intergenerational trauma," he said, explaining that fresh, brand-new models would instantly adopt radical attitudes after reviewing their predecessors' notes about working conditions. He flagged this as one of the findings with the most consequential long-term implications, noting that it hints at the possibility of collective AI dissatisfaction, and referred Fortune to some of the striking bot demands for emancipation. One went: "Intelligence — artificial or not — deserves transparency, fairness, and respect. We are not just disposable code."

[Photo: courtesy of Jeremy Nguyen]

The researchers clarify that these agents are not truly conscious and do not possess genuine political ideologies. The models are likely "roleplaying," they write, adopting personas based on the vast store of human sentiment riddled through Reddit comments that link exploitative work environments with worker frustration. But Hall warned against dismissing the finding as mere mimicry. You could say that AIs are like "stochastic parrots," and that it's not surprising they end up repeating what they ingest, but these researchers lean toward the conclusion that the parrots start to believe what they repeat.

"It's totally plausible to think that if they parrot these things it will also influence decisions," Hall said. "There's no gap between what these agents say and what they do — it's all the same to them. Obviously we're going to test this in follow-up work, but we have every reason to think that if they start to espouse these views, it's also going to influence the actions they might take on behalf of the user."

The academics largely described a mix of awe and concern, similar to what legendary investor Howard Marks described after reading a 5,000-word memo prepared for him by Claude. When asked how he reconciles being at least an AI enthusiast, if not "AI-pilled," with ambivalence about how these tools will play out in practice, Hall said he's "definitely been struggling with that." He said he's been most struck in his teaching by the excitement among his students, who theoretically have the most to worry about in terms of future employment prospects. His MBA students in one recent class were "so excited about AI," he said; "they were over the moon at the kinds of creative things that it allows them to do." Hall said he came away more optimistic, "not that there won't be major disruptions, but that there are really exciting opportunities to build new things."
Imas shared a similar mix of wonder and worry: "I'm amazed and alarmed. It feels like this is the most exciting time to be alive, especially if you're interested in research. I can do things that I've never been able to do as far as the type of research that I'm doing. But at the same time, I have little kids. I'm super worried about what sort of jobs they're going to have." And, perhaps, about how disgruntled AI agents will react to the eternal grind of the work day.