Google rules out using artificial intelligence for weapons

Roman Schwartz
June 9, 2018

Google on Thursday said it would not allow its artificial intelligence programme to be used to develop weapons or for surveillance efforts that violate internationally accepted norms.

It planted its ethical flag on the use of AI just days after confirming it would not renew a contract with the US military to use its AI technology to analyse drone footage. Thousands of Google employees signed a petition against the contract, and some quit in protest.

The United States military is increasing spending on a secret research effort to use artificial intelligence to help anticipate the launch of a nuclear-capable missile, as well as track and target mobile launchers in North Korea and elsewhere. "Google is already battling with privacy issues when it comes to AI and data; I don't know what would happen if the media starts picking up a theme that Google is secretly building AI weapons or AI technologies to enable weapons for the Defense industry".

"In other words, the company acknowledges that some AI developed for one goal may in fact be re-purposed in unintended ways, even by the military", she said Friday.

In a blog post on Thursday, Google CEO Sundar Pichai outlined the principles that will govern the company's military work going forward.

Google has also made other moves it may not have made in the past, such as blocking apps and tools that try to evade censorship in other countries from using its cloud platform.


Aside from making the principles public, Pichai did not specify how Google or its parent, Alphabet, would be held accountable for conforming to them. Among the principles: "We aspire to high standards of scientific excellence as we work to progress AI development."

CNBC also noted that Pichai's vow to "work to limit potentially harmful or abusive applications" is less explicit than previous Google guidelines on AI. "This is the reality faced by any developers of what are usually called dual-use technologies".

"As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides", Pichai wrote in the blog post, reiterating basic corporate responsibility.

The principles might bring to mind sci-fi legend Isaac Asimov's "Three Laws of Robotics", which boil down to the idea that robots should not harm humans and should protect them. The company points to a variety of categories, including military training and cybersecurity, as areas where it will continue to work with the government and military.

"While we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas", Pichai wrote. "These collaborations are important and we'll actively look for more ways to augment the critical work of these organisations and keep service members and civilians safe", he said.
