Google’s parent company Alphabet has abandoned a long-standing principle, lifting its ban on the use of artificial intelligence (AI) to develop weapons and surveillance tools.
The company has rewritten the guidelines governing its use of AI, dropping a section that ruled out applications “likely to cause harm”.
In a blog post, Google defended the change, arguing that businesses and democratic governments needed to collaborate on AI that “supports national security.”
Experts say artificial intelligence could be deployed widely on the battlefield, although there are also fears about its use, particularly in autonomous weapon systems.
In its blog post, Google said that democracies should lead in AI development, guided by what it called “core values” such as freedom, equality and respect for human rights.
“And we believe that companies, governments, and organizations that share these values should work together to create AI that protects people, promotes global growth, and supports national security,” the post said.
It was written by senior vice president James Manyika and Demis Hassabis, who heads the AI lab Google DeepMind. They said the company’s original AI principles, published in 2018, needed to be updated as the technology had evolved.
“Killing on a large scale”
Awareness of the military potential of AI has been growing recently.
In January, MPs argued that the conflict in Ukraine had shown the technology “provides serious military advantages on the battlefield”. As AI becomes more widespread and sophisticated, it will “change the way defence works, from the back office to the front line,” wrote Emma Lewell-Buck MP, who chaired a recent Commons report on the British military’s use of AI.
That context also helps explain why Donald Trump, the self-styled “president of peace”, wants to spend $500 billion building AI infrastructure in the United States.
But there is debate among AI experts and practitioners about how such a powerful new technology should be governed, how far commercial gains should be allowed to determine its direction, and how best to guard against risks to humanity at large.
Concern is greatest about the potential for AI-powered weapons capable of carrying out lethal actions autonomously, with campaigners arguing that controls are urgently needed. The Doomsday Clock, which symbolizes how close humanity is to destruction, cited this concern in its latest assessment of the dangers facing humanity.
“Systems that incorporate AI into military targeting have been used in Ukraine and the Middle East, and more countries are moving to integrate AI into their militaries,” its statement said.
“Such efforts raise questions about the extent to which machines will be allowed to make military decisions, even decisions that could kill on a large scale.”
This sentiment has been echoed by Catherine Connolly from the organization Stop Killer Robots.
“The money we’re seeing being poured into autonomous weapons and the use of things like AI targeting systems is extremely worrying,” she told the Guardian.
“Don’t be evil”
Originally, long before the current wave of interest in AI ethics, Google’s founders, Sergey Brin and Larry Page, said their motto for the firm was “don’t be evil”.
When the company was restructured under the name Alphabet Inc in 2015, the parent company switched its motto to “Do the right thing”. Since then, Google employees have sometimes pushed back against the approach taken by their managers.
In 2018, the firm declined to renew a contract for AI work with the Pentagon following staff resignations and a petition signed by thousands of employees. They feared that “Project Maven” was the first step towards using artificial intelligence for lethal purposes.
The blog post was published just ahead of Alphabet’s end-of-year financial report, which showed results that were weaker than market expectations and knocked back its share price. That was despite a 10% rise in revenue from digital advertising, its biggest source of income, boosted by US election spending.
In its earnings report, the company said it would spend $75 billion on AI projects this year, 29% more than analysts on Wall Street had expected.
The company is investing in the infrastructure to power AI, AI research, and applications such as AI-powered search.