{"id":3250,"date":"2026-04-06T23:25:04","date_gmt":"2026-04-06T23:25:04","guid":{"rendered":"https:\/\/stock999.top\/?p=3250"},"modified":"2026-04-06T23:25:04","modified_gmt":"2026-04-06T23:25:04","slug":"how-people-are-reacting-to-openais-13-page-policy-paper-on-ai-superintelligence","status":"publish","type":"post","link":"https:\/\/stock999.top\/?p=3250","title":{"rendered":"How people are reacting to OpenAI\u2019s 13-page policy paper on AI superintelligence"},"content":{"rendered":"<p><img src=\"https:\/\/fortune.com\/img-assets\/wp-content\/uploads\/2026\/03\/GettyImages-2265992523-e1774135124291.jpg?w=2048\" \/><\/p>\n<p>OpenAI says the world needs to rethink everything from the tax system to the length of the workday to prepare for the wrenching changes of superintelligence technology\u2014the point at which AI systems are capable of outperforming the smartest humans.<\/p>\n<p>On Monday, in a 13-page paper titled \u201cIndustrial Policy for the Intelligence Age,\u201d OpenAI said it wanted to \u201ckick-start\u201d the conversation with a \u201cslate of people-first policy ideas.\u201d How much faith to put in OpenAI\u2019s words and motives, however, seems to be a key question for many of the people reading the paper. The paper was released on the same day that The New Yorker published the results of a one-and-a-half-year investigation into OpenAI that raised questions about CEO Sam Altman\u2019s trustworthiness on various issues, including AI safety.<\/p>\n<p>Written by the OpenAI global affairs team, the paper outlines many of the expected economic impacts of superintelligence and floats various approaches for addressing them. 
\u201cWe offer them not as a comprehensive or final set of recommendations, but as a starting point for discussion that we invite others to build on, refine, challenge, or choose among through the democratic process,\u201d said the introductory blog post.<\/p>\n<p>The self-described \u201cslate of ideas\u201d in the document\u2014spanning everything from public wealth funds to shorter workweeks\u2014may not do much to reassure a public increasingly nervous about and disenchanted with the pace and consequences of AI-driven change. And OpenAI, of course, is one of the least neutral parties in this ongoing discussion, which is the core tension of the document, said Lucia Velasco, a senior economist and AI policy leader at the D.C.-based Inter-American Development Bank and former head of AI policy at the United Nations Office for Digital and Emerging Technologies.<\/p>\n<p>\u201cOpenAI is the most interested party in how this conversation turns out, and the proposals it advances shape an environment in which OpenAI operates with significant freedom under constraints it has largely helped define,\u201d she said, adding that this wasn\u2019t a reason to dismiss the document, but \u201cit is a reason to ensure that the conversation it is trying to start does not end with the same company that started it.\u201d<\/p>\n<p>Still, she emphasized that OpenAI is correct in saying that governments are behind in advancing policy solutions. \u201cMost are still treating AI as a technology problem when it\u2019s actually a structural economic shift that needs proper industrial policy,\u201d she said. \u201cThat\u2019s a useful contribution, and the document deserves to be taken seriously as an agenda-setting exercise, even if it\u2019s a starting point.\u201d<\/p>\n<p>Soribel Feliz, an independent AI policy advisor who previously served as a senior AI and tech policy advisor for the U.S. 
Senate, agreed that OpenAI deserves credit for \u201cputting this on paper.\u201d The acknowledgment that both U.S. institutions and safety nets are falling behind AI development and deployment is correct, she said, \u201cand the conversation needs to happen at this level at this moment.\u201d<\/p>\n<p>However, she emphasized that most of what is being proposed is not new: \u201cSome of these pillars\u2014\u2018share prosperity broadly, mitigate risks, democratize access\u2019\u2014have been the framework for every major AI governance conversation since ChatGPT came out in November 2022.<\/p>\n<p>\u201cI worked in the U.S. Senate in 2023\u201324, and we had nine AI policy fora sessions where all of this was said. I have it in my handwritten notes! All of this was already said, all of it,\u201d she wrote to Fortune in a direct message. \u201cThe language around public-private partnerships, AI literacy, and worker voice reads like it came out of a UNESCO or OECD AI policy framework report. The ideas are not wrong. The problem is the gap between naming the solutions and building real mechanisms to achieve them.\u201d<\/p>\n<p>Clearly, the paper\u2019s target audience is not OpenAI\u2019s hundreds of millions of weekly ChatGPT users. Instead, it is the Beltway policymakers who have been pushing for AI regulation (or kicking the can down the road) in various forms ever since ChatGPT was released in November 2022. In that sense, some said it represents an improvement over earlier efforts.<\/p>\n<p>\u201cI found this document to genuinely be a real improvement from previous documents that were even more floaty and high-level,\u201d said Nathan Calvin, vice president of state affairs and general counsel of Encode AI. 
\u201cI think some of the concrete suggestions around things like auditing or incident reporting and government restrictions on certain uses of AI are good ideas.\u201d<\/p>\n<p>But he also pointed to lobbying efforts led by OpenAI executives with the Leading the Future PAC, which advocates for AI-industry-friendly policies. OpenAI\u2019s global affairs head Chris Lehane is considered a force behind these efforts, while OpenAI president Greg Brockman has been the biggest donor.<\/p>\n<p>\u201cI hope this document signals a move toward more constructive engagement, instead of attacking politicians pushing the very policies OpenAI is now endorsing,\u201d said Calvin, pointing specifically to Leading the Future\u2019s lobbying against New York congressional candidate Alex Bores, author and primary sponsor of the RAISE Act, the New York AI safety and transparency law recently signed by Gov. Kathy Hochul.<\/p>\n<p>Calvin has also accused OpenAI of using intimidation tactics to undermine California\u2019s SB 53, the California Transparency in Frontier Artificial Intelligence Act, while it was still being debated. He also alleged that OpenAI used its ongoing legal battle with Elon Musk as a pretext to target and intimidate critics, including Encode, which the company implied was secretly funded by Musk.<\/p>\n<p>Still, while OpenAI CEO Sam Altman compared Monday\u2019s slate of policy ideas to the New Deal in an interview with Axios, some say it reads less like FDR-era legislation and more like a Silicon Valley thought experiment that won\u2019t magically turn into action.<\/p>\n<p>For example, Anton Leicht, a visiting scholar with the Carnegie Endowment\u2019s technology and international affairs team, wrote on X that in reality, the ideas are fundamental societal changes and heavy political lifts. \u201cThey\u2019re not just going to emerge as an organic alternative,\u201d he wrote. 
\u201cOn that read, this is comms work to provide cover for regulatory nihilism.\u201d<\/p>\n<p>A better version of this, he said, would be to redirect the AI industry\u2019s political funding and lobbying skills to make progress on this kind of policy agenda. However, he said that the \u201cvague nature and timing\u201d of the document \u201cdoesn\u2019t make me too optimistic.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>OpenAI says the world needs to rethink everything from the tax system to the length&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[245],"tags":[7361,406,7360,141,363,747,7359,439,7362],"_links":{"self":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts\/3250"}],"collection":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=3250"}],"version-history":[{"count":0,"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts\/3250\/revisions"}],"wp:attachment":[{"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=3250"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=3250"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=3250"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}