AI’s risks are already being forgiven – Daily Business Magazine
Pressure to exploit artificial intelligence is defying attempts, even by its creators, to control it, says IAN RITCHIE
The fourth Artificial Intelligence (AI) Summit was held recently in India, following others in the UK, South Korea, and France, each one with a subtle change of title. The first, held at Bletchley Park in 2023, was called the AI Safety Summit, but the most recent was labelled the AI Impact Summit. It’s almost as if the safety aspects of AI have been solved and our main concern now is the effective deployment of this revolutionary technology.
If only that were true.
Modern AI systems are trained by feeding them as much written material as can be found, after which they are encouraged to generate new answers by stitching together ideas similar to those the system was exposed to in training – maybe they should really be labelled ‘Artificial Plagiarism’.
There is a name for dodgy information systems: ‘GIGO’, short for ‘garbage in, garbage out’. In other words, if the original data is not reliable, then the results cannot be trusted either.
So we need to ask: what is the source of the material used to train the current AI systems? Unfortunately, the AI companies have been less than helpful in telling us, largely because most of it is subject to copyright protection and the companies might be exposed to legal challenge.
One such lawsuit was raised by a group of book publishers and was settled out of court by Anthropic in mid 2025 at a cost of $1.5 billion. Early this year, interesting details emerged amongst 4,000 pages of disclosure for this case. An undercover exercise called Project Panama had been set up to destructively scan all the books in the world by buying all the volumes that could be found and slicing off their spines, after which the loose pages were scanned into the system.
A leak of correspondence emerged between two employees at Meta worrying that they were being required to download millions of books, free of charge, from ‘torrent’ platforms that encourage online piracy. There was even some evidence that this practice had been approved in an email by an ‘mz’, thought to be Meta CEO Mark Zuckerberg.
In all these cases there was no attempt to assess the quality of the texts used to train the systems, much of which may well contain outdated facts or unacceptable old social attitudes.
It is also well known that scraping information from the internet to train AI systems is extremely unreliable. Centres in Russia, Iran, North Korea and China, among others, remain very active in creating false postings on social media as they try to create unrest in western democracies.
The Oxford Internet Institute reported that one third of social media postings during the recent Swedish election were faked, aimed at disrupting the vote.
There is even a Scottish dimension to this. Last year, research by anti-disinformation firm Cyabra found that around a quarter of social media profiles discussing Scottish independence on X were linked to a state-backed Iranian influence campaign.
These accounts posted thousands of messages using pro-independence and anti-Brexit narratives in an attempt to disrupt free debate in the UK. This suspicion was confirmed when the recent shutdown of the internet in Iran caused this disinformation material to suddenly dry up.
So what about scientific papers, which are subject to peer review – checking by other experts – before publication? Surely they must be reliable? Sadly, no. A Nobel Prize winner at a top US university recently retracted 15 of his scientific papers, and the esteemed Dana-Farber Cancer Institute was forced to settle a lawsuit, paying $15 million, in a case alleging falsified data.
The uncomfortable fact is that scientific papers are often used to back applications for grant funding and so there is an inbuilt encouragement to ‘enhance’ the results reported. A non-profit science journalism site, called Retraction Watch, run by the Center for Scientific Integrity, currently reports over 500 retractions of scientific papers every month, with over 63,000 retractions logged in its database.
AI companies are now concluding deals with major publishers. OpenAI, for example, has reached agreement with the Associated Press, Reuters, Le Monde, the FT, and News Corp, owner of the Wall Street Journal and The Times, to access their reports to train its systems.
News sources are often referred to as ‘the first draft of history’ and are prone to error, especially in the age of ‘click-bait’, where sites often use exaggeration, or outright falsehoods, to encourage readers to click on a story and then subject them to the excessive advertising that funds the site.
The chief scientist behind the ChatGPT product, Ilya Sutskever, resigned from OpenAI when it reduced the funding of the team working on safety features. Shortly after leaving, Sutskever co-founded a new company, Safe Superintelligence, which focuses exclusively on developing safe superintelligent AI systems, without the commercial product pressure typical of major AI labs.
After leaving Google in 2023, Geoffrey Hinton – often referred to as the ‘godfather of AI’ – became one of the most outspoken voices about potential dangers, predicting that AI creates existential risks if systems are not regulated and warning that the possibility of catastrophic outcomes for mankind is a real one.
Even the ethical stance taken by AI suppliers is under threat. Dario Amodei, the CEO of Anthropic, recently declared that the US Government would not be authorised to use Claude, Anthropic’s AI system, for mass surveillance of the public, or for automatically controlling autonomous weapons – two tasks which the company considered too risky to permit.
The Trump administration is furious over this restriction as Claude is heavily integrated into its operations, not least at the Pentagon.
In response, Trump has ordered the US government to ‘immediately cease’ the use of Anthropic technology by all government agencies, describing it as a ‘woke’ and ‘radical left’ company, and Pete Hegseth, the Secretary of War, labelled the company a ‘supply risk to national security’.
Other AI companies have indicated support for Anthropic’s stance, but in a commercial world the government should easily be able to find cooperative suppliers with lower ethical standards. Trump has also signed an executive order that restricts individual US states from imposing ‘onerous’ laws to regulate AI systems.
We are now facing a stand-off between the AI community, which knows about the risks of AI, and a free-for-all government, which wants to remove all the guardrails and effectively expose the world to unforeseen risks.
It is a dangerous time to be alive.