{"id":6099,"date":"2026-05-12T04:42:16","date_gmt":"2026-05-12T04:42:16","guid":{"rendered":"https:\/\/stock999.top\/?p=6099"},"modified":"2026-05-12T04:42:16","modified_gmt":"2026-05-12T04:42:16","slug":"microsofts-chief-scientific-officer-weighs-in-on-the-dangers-of-a-i-and-the-open-letter-for-a-6-month-pause","status":"publish","type":"post","link":"https:\/\/stock999.top\/?p=6099","title":{"rendered":"Microsoft\u2019s Chief Scientific Officer weighs in on the dangers of A.I. and the open letter for a 6-month pause"},"content":{"rendered":"<p><img src=\"https:\/\/fortune.com\/img-assets\/wp-content\/uploads\/2023\/04\/Microsoft-Chief-Scientific-Officer-Eric-Horvitz.jpg?w=2048\" \/><\/p>\n<p>Eric Horvitz, Microsoft\u2019s first chief scientific officer and one of the leading voices within the rapidly evolving sector of artificial intelligence, has spent a lot of time thinking about what it means to be human.<\/p>\n<p>It\u2019s now, perhaps more than ever, that underlying philosophical questions rarely mentioned in the workplace are bubbling to the C-suite: What sets humans apart from machines? What is intelligence\u2014how do you define it? Large language models are getting smarter, more creative, and more powerful faster than we can blink. And, of course, they are getting more dangerous.<\/p>\n<p>\u201cThere will always be bad actors and competitors and adversaries harnessing [A.I.] as weapons, because it\u2019s a stunningly powerful new set of capabilities,\u201d Horvitz says, adding: \u201cI live in this, knowing this is coming. And it\u2019s going faster than we thought.\u201d<\/p>\n<p>Horvitz speaks much more like an academic than an executive: He is candid and visibly excited about the possibilities of new technology, and he welcomes questions many other executives might prefer to dodge. Horvitz is one of Microsoft\u2019s senior leaders in its ongoing, multibillion-dollar A.I. 
efforts: He has led key ethics and trustworthiness initiatives to guide how the company will deploy the technology, and spearheads research on its potential and ultimate impact. He is also one of more than two dozen individuals who advise President Joe Biden as a member of the President\u2019s Council of Advisors on Science and Technology, which met most recently in early April. It\u2019s not lost on Horvitz where A.I. could go off the guardrails, and in some cases, where it is doing exactly that already.\u00a0\u00a0<\/p>\n<p>Just last month, more than 20,000 people\u2014including Elon Musk and Apple cofounder Steve Wozniak\u2014signed an open letter urging companies like Microsoft, which earlier this year started rolling out an OpenAI-powered search engine to the public on a limited basis, to take a six-month pause. Horvitz sat down with me for a wide-ranging discussion where we talked about everything from the letter, to Microsoft laying off one of its A.I. ethics teams, to whether large language models will be the foundation for what\u2019s known as \u201cAGI,\u201d or artificial general intelligence. (Some portions of this interview have been edited or rearranged for brevity and\/or clarity.)<\/p>\n<p>Fortune: I feel like now, more than ever, it is really important that we can define terms like intelligence. Do you have your own definition of intelligence that you are working off of at Microsoft?<\/p>\n<p>Horvitz: We don\u2019t have a single definition. I do think that Microsoft [has] views about the likely beneficial uses of A.I. technologies to extend people and to empower them in different ways, and then we\u2019re exploring that in different application types. It takes a whole bunch of creativity and design to figure out how to basically harness what we\u2019re considering to be these [sparks] of more general intelligence.<\/p>\n<p>That also gets into the whole idea of what we call responsible A.I., which is, well, how can this go off the rails? 
The Kevin Roose article in the New York Times\u2014I heard it was a very widely read article. Well, what happened there exactly? And can we understand that? In some ways, when we field complex technologies like this, we do the best we can in advance in-house. We red-team it. We have people doing all sorts of tests and try different things out to try to understand the technology. We characterize it deeply in terms of the rough edges, as well as the power for helping people out and achieving their goals, to empower people. But we know that one of the best tests we can do is to put it out in limited preview and actually have it in the open world of complexity, and watch carefully without having it be widely distributed to understand that better. We learned quite a bit from that as well. And some of the early users, I have to say, some were quite intensive testers, pushing the system in ways that we didn\u2019t necessarily all push the system internally\u2014like staying with a chat for, I don\u2019t know how many hours, to try to get it to go off the rails, and so on. These kinds of things happened in limited preview. So we learn a lot in the open world as well.\u00a0<\/p>\n<p>Let me ask you something about that: Some people have pushed back against Microsoft and Google\u2019s approach of going ahead and rolling this out. And there was that open letter that was signed by more than 20,000 people\u2014asking companies to sort of take a step back, take a six-month pause. I noticed that a few Microsoft engineers signed their names on that letter. And I\u2019m curious about your opinion on that\u2014and if you think these large language models could be existentially dangerous, or become a threat to society?<\/p>\n<p>I really actually respect [those that signed the letter]. And I think it\u2019s reasonable that people are concerned. 
To me, I would prefer to see more knowledge, and even an acceleration of research and development, rather than a pause for six months, which I am not sure if it would even be feasible. It\u2019s a very ill-defined request in some ways. On the Partnership on A.I. (PAI), we spent time thinking about what are the actual issues. If you were going to pause something, what specific aspects should be paused and why? And what\u2019s the cost and benefits of stopping versus investigating more deeply and coming up with solutions that might address concerns?\u00a0<\/p>\n<p>In a larger sense, six months doesn\u2019t really mean very much for a pause. We need to really just invest more in understanding and guiding and even regulating this technology\u2014jump in, as opposed to pause. I do think that it\u2019s more of a distraction, but I like the idea that it\u2019s a call for expressing anxiety and discomfort with the speed. And that\u2019s clear to everybody.<\/p>\n<p>What concerns you most about these models? And what concerns you least?<\/p>\n<p>I\u2019m least concerned with science-fiction-centric notions that scare people of A.I. taking over\u2014of us being in a state where humans are somehow outsmarted by these machines in a way that we can\u2019t escape, which is one of these visions that some of the people that signed that letter dwell on. I\u2019m perhaps most concerned about the use of these tools for disinformation, manipulation, and impersonation. Basically, they\u2019re used by bad actors, by bad human actors, right now.\u00a0<\/p>\n<p>Can we talk a little bit more about the disinformation? Something that comes to mind that really shocked me and made me think about things differently was that A.I.-generated image of the pope that went viral of him in the white puffer jacket. It really made me take a step back and reassess how even more prevalent misinformation could become\u2014more so than it already is now. 
What do you see coming down the pipeline when it comes to misinformation, and how can companies, how can the government, how can people get ahead of that?<\/p>\n<p>These A.I. technologies are here with us to stay. They\u2019ll only get more sophisticated, and we won\u2019t be able to easily control them by saying companies should stop doing X, Y, or Z\u2014because they\u2019re now open-source technologies. Soon after DALL-E 2, which generates imagery of the form you\u2019re talking about, was made available, there were two or three open-sourced versions of it that came to be\u2014some quite better in certain ways, and doing even more realistic imagery.\u00a0<\/p>\n<p>In 2016, or 2017 or so, I saw my first deepfake. I gave a talk at South by Southwest on this and I said: Look what\u2019s happening\u2026 I said this is a big deal, and I told the audience this is going to be a game-changer, a big challenge for everybody. We need to think more deeply about this as a society. Things have gone from there into\u2014we see all sorts of uses of these technologies by nation-states that are trying to foment unrest or dissatisfaction or polarization all the way to satire.\u00a0<\/p>\n<p>So what do we do about this? I put a lot of my time and attention into this, because I think it really threatens to erode democracies, because democracies really depend on an informed citizenry to function well. And if you have systems that can really misinform and manipulate, it\u2019s not clear that you\u2019ll have effective democracy. I think this is a really critical issue, not just for the United States, but for other countries, and it needs to be addressed.\u00a0<\/p>\n<p>In 2019, in January, I met with the [former director general of BBC, Tony Hall] at the World Economic Forum. We had a one-on-one meeting, and I showed him some of the breaking deepfakes and he had to sit down\u2014he was beside himself. 
And that led to a major effort at Microsoft that we pulled together across several teams to create what we call the authentication of media provenance to know that nobody has manipulated from the camera and the production by a trusted news source like BBC, for example, or the New York Times, nobody has faked it or changed things all the way to your display. Across [three] groups now, there are over 1,000 members participating and coming up with standards for authenticating the provenance of media. So someday soon you\u2019ll be seeing, when you look at video, there\u2019ll be a sign that tells you, and you can hover over it, that certifies that it is coming from a trusted source that you know, and that there has been no manipulation along the way.\u00a0<\/p>\n<p>But my view is there\u2019s no one silver bullet. We\u2019re going to need to do all those things. And we\u2019re also probably going to need regulations.<\/p>\n<p>I want to ask you about the layoffs at Microsoft. In mid-March Platformer reported that Microsoft had laid off its ethics and society team, which was focused on how to design A.I. tools responsibly. And this seems to me like the time when that is needed most. I wanted to hear your perspective on that.<\/p>\n<p>Just like A.I. systems can manipulate minds and distort reality, so can our attention-centric news economy now. And here\u2019s the example. Any layoff makes us very sad at Microsoft. It\u2019s something that is really a challenge when it happens. In this case, the layoff was a very small number of people who were in a design team and, from my point of view, quite peripheral to our major responsible and ethical and trustworthy A.I. 
efforts.\u00a0<\/p>\n<p>I wished we would talk more publicly about our engineering efforts that went into several different work streams\u2014all coordinated on safety, trustworthiness, and broader considerations of responsibility in shipping out to the world the Bing chat, and the other technologies\u2014incredible amounts of red-teaming. I\u2019d say, if I had to estimate, over 120 people altogether have been involved in a significant set of work streams, with daily check-ins. That small number of people were not central in that work, although we respect them and I like their design work over the years. They\u2019re part of a larger team. And it was poor timing, and kind of amplified reporting about that being the ethics team, but it was not by any means. So I don\u2019t mean to say that it is all fake news, but it was certainly amplified and distorted.<\/p>\n<p>I\u2019ve been on this ride, [part of] leading this effort of responsible A.I. at Microsoft since 2016 when it really took off. It is central at Microsoft, so you can imagine we were kind of heartbroken with those articles. It was unfortunate that those people at that time were laid off. They did happen to have ethics in their title. It\u2019s unfortunate timing.<\/p>\n<p>[A spokeswoman later said that fewer than 10 team members were impacted and said that some of the former members now hold key positions within other teams. \u201cWe have hundreds of people working on these issues across the company, including dedicated responsible A.I. teams that continue to grow, including the Office of Responsible A.I., and a responsible A.I. team known as RAIL that is embedded in the engineering team responsible for our Azure OpenAI Service.\u201d]<\/p>\n<p>I want to circle to the paper you published at the end of March. It talks about how you\u2019re seeing sparks of AGI from GPT-4. You also mentioned in the paper that there\u2019s still a lot of shortfalls, and overall, it\u2019s not very human-like. 
Do you believe that large language models like GPT, which are trained to predict the next word in a sentence, are laying the groundwork for artificial general intelligence\u2014or would that be something else entirely?<\/p>\n<p>A.I. in my mind has always been about general intelligence. The phrase \u201cAGI\u201d only came into vogue in large use by people outside the field of A.I. when they saw the current versions of A.I. successes being quite narrow. But from the earliest days of A.I., it\u2019s always been about how can we understand general principles of intelligence that might apply to humans and machines, sort of an aerodynamics of intelligence. And that\u2019s been a long-term pursuit. Various projects along the way from 1950s to now have shown different kinds of aspects of what you might call general principles of intelligence.\u00a0<\/p>\n<p>It\u2019s not clear to me that the current approach with large language models is going to be the answer to the dreams of artificial intelligence research and aspirations that people may have about where A.I. is going to build intelligence that might be more human-like or that might be complementary to human-like competencies. But we did observe sparks of what I would call magic, or unexpected magic, in the system\u2019s abilities that we go through in the paper and list point by point. For example, we did not expect a system that was not trained on visual information to know how to draw or to recognize imagery.<\/p>\n<p>And so, the idea that a system can do these things, with very simple short questions without any kind of pre-training or fancy prompt engineering, as it\u2019s called\u2014it\u2019s pretty remarkable. 
These kinds of powerful, subtle, unexpected abilities, whether it be in medicine, or in education, chemistry, physics, general mathematics and problem solving, drawing, and recognizing images\u2014I would view them as bright little sparks that we didn\u2019t expect that have raised interesting questions about the ultimate power of these kinds of models, and as they scale to be more sophisticated. At the same time, there are specific limitations we described in the paper. The system doesn\u2019t do well at backtracking, and certain kinds of problems really confound it. And the fact that it\u2019s fabulously brilliant and embarrassingly stupid other places means that this is not really human-like. To have a system that does advanced math, integrals, and notation\u2014and then it can\u2019t do arithmetic\u2026 It can\u2019t multiply but it can do this incredible proof of the infinite numbers of primes and do poetry about it and do it in a Shakespearean pattern.\u00a0<\/p>\n<p>Just taking a step back, to make sure I understand clearly how you\u2019re answering the first part of my question. Are you saying that large language models could be the foundation of these aspirations people have for creating human intelligence, but you\u2019re not sure?<\/p>\n<p>I\u2019d say I am uncertain, but when you see a spark of something that\u2019s interesting, a scientist will follow that spark and try to understand it more deeply. And here\u2019s my sense: What we\u2019re seeing is raising questions and pointers and directions for research that would help us to better understand how to get there. It\u2019s not clear that when you see little sparks of flint, you have the ability to really do something more sustained or deeper, but it certainly is a way.\u00a0 We can investigate, as we are now and as the rest of the computer science community is now.<\/p>\n<p>So I guess, to be clear, the current large language models have given us some evidence of interesting things happening. 
We\u2019re not sure enough if you need the gigantic, large language models to do that, but we\u2019re certainly learning from what we\u2019re seeing about what it might take moving forward.<\/p>\n<p>You don\u2019t have access to OpenAI\u2019s training data for its models. Do you feel like you have a comprehensive understanding of how the A.I. models work and how they come to the conclusions that they do?<\/p>\n<p>I think it\u2019s pretty clear that we have general ideas about how they work and general ideas and knowledge about the kinds of data the system was trained on. And depending on what your relationship is with OpenAI and our research agreements\u2026 There are some understandings of the training data and so on. <\/p>\n<p>That doesn\u2019t mean that there\u2019s a deep understanding of every aspect. We don\u2019t understand everything about what\u2019s happening in these models. No one does yet. And I think to be fair to the people that are asking for a slowdown\u2014there\u2019s anxiety, and some fear about not understanding everything about what we\u2019re seeing. And so I understand that, and as I say, my approach to it is we want to both study it more intensively and work extra hard to not only understand the phenomenon but also understand how we can get more transparency into these processes, how we can have these systems become better explainers to us about what they\u2019re doing. And also understand any potential social or societal implication of this.<\/p>\n<p>I think today there are lots of questions about how these systems work, at the details, even when, broadly, we have good understandings of the power of scale and the fact that these systems are generalizing and have the ability to synthesize.\u00a0<\/p>\n<p>On that thread\u2014do you think that the models should be open source so that people can study them and understand how they work? 
Or is that too dangerous?<\/p>\n<p>I\u2019m a strong supporter of the need to have these models shared out for academic research. I think it\u2019s not the greatest thing to have these models cloistered within companies in a proprietary way when having more eyes, more scientific effort more broadly on the models, could be very helpful. If you look at what\u2019s called the Turing Academic Program research, we\u2019ve been a big supporter of taking some of our biggest models and making them available, from Microsoft, to university-based researchers.<\/p>\n<p>I know how much work that OpenAI did and that Microsoft did and we did together on working to make these models safer and more accurate, more fair, and more reliable. And that work, which includes the colloquial phrase \u201calignment,\u201d aligning the models with human values, was very effortful. So I\u2019m concerned with these models being out in their raw form in open source, because I know how much effort went into polishing these systems for consumers and for our product line. And these were major, major efforts to grapple with what you call hallucination, inaccuracy, to grapple with reliability\u2014to grapple with the possibility that they would stereotype or generate toxic language. And so I and others share the sense that open sourcing them without those kinds of controls and guardrails wouldn\u2019t be the greatest thing at this point in time.<\/p>\n<p>In your position serving on PCAST, how is the U.S. government already involved in the oversight of A.I. and in what ways do you think that it should be?<\/p>\n<p>There\u2019s been regulation of various kinds of technologies, including A.I. and automation, for a very long time. The National Highway Transportation Safety Administration, the Fair Housing Act, the Civil Rights Act of 1964\u2014these all talk about what the responsibilities of organizations are. 
The Equal Employment Opportunity Commission oversees and makes it illegal to discriminate against a person for employment, and there\u2019s another one for housing. So systems that will have influences\u2014there is opportunity to regulate them through various agencies that already exist in different sectors.<\/p>\n<p>My overall sense is that it will be the healthiest to think about actual use cases and applications and to regulate those the way they have been for decades, and to bring A.I. as another form of automation that\u2019s already being looked at very carefully by government regulations.\u00a0<\/p>\n<p>These A.I. models are so powerful that they\u2019re making us ask ourselves some really important underlying questions about what it means to be human, and what distinguishes us from machines as they get more and more capable. You\u2019ve spoken before about music, and one of my colleagues pointed out to me a paper that you wrote about captions for New Yorker cartoons a few years ago. Throughout all of the research and time you\u2019ve spent digging into artificial intelligence and the impact it could have on society, have you come to any personal realizations of what it is that distinctly makes us human, and what things could never be replaced by a machine?<\/p>\n<p>My reaction is that almost everything about humanity won\u2019t be replaced by machines. I mean, the way we feel and think, our consciousness, our need for one another\u2014the need for human touch, and the presence of people in our lives. I think, to date, these systems are very good at synthesizing and taking what they\u2019ve learned from humanity. They learn and they have become bright because they\u2019re learning from human achievements. And while they could do amazing things, I haven\u2019t seen the incredible bursts of true genius that come from humanity.<\/p>\n<p>I just think that the way to look at these systems is as ways to understand ourselves better. 
In some ways we look at these systems and we think: Okay, what about my intellect and its evolution on the planet that makes me who I am\u2014what might we learn from these systems to tell us more about some aspects of our own minds? They can light up our celebration of the more magical intellects that we are in some ways by seeing these systems huff and puff to do things that are sparking creativity once in a while.\u00a0<\/p>\n<p>Think about this: These models are trained for many months, with many machines, and using all of the digitized content they can get their hands on. And we watch a baby learning about the world, learning to walk, and learning to talk without all that machinery, without all that training data. And we know that there\u2019s something very deeply mysterious about human minds. And I think we\u2019re way off from understanding that. Thank goodness. I think we will be very distinct and different forever than the systems we create\u2014as smart as they might become.<\/p>\n<p>Jeremy Kahn contributed research for this story.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Eric Horvitz, Microsoft\u2019s first chief scientific officer and one of the leading voices within 
the&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[245],"tags":[11754,11753,2665,697,619,5815,879,6197,699,181,406,6544,11752,786],"_links":{"self":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts\/6099"}],"collection":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=6099"}],"version-history":[{"count":0,"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts\/6099\/revisions"}],"wp:attachment":[{"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=6099"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=6099"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=6099"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}