
Google has made one of the most substantial revisions to its AI principles since first publishing them in 2018. As The Washington Post reports, the company has removed commitments promising it would not design or deploy AI tools for use in weapons or surveillance. An earlier version of the guidelines included a section titled “applications we will not pursue,” which is absent from the current document.
In place of those commitments, Google has added a new section titled “responsible development and deployment.” There, the company says it will implement “appropriate human oversight, due diligence, and feedback mechanisms” to align with user goals, social responsibility, and widely accepted principles of international law and human rights. These are far broader commitments than the specific ones the company made previously.
For example, the prior guidelines stated that Google would not design AI for use in “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” nor AI surveillance tools that would violate “internationally accepted norms.” The removal of these explicit prohibitions raises questions about where Google is steering its AI work.
Asked for comment on the changes, a Google spokesperson pointed Engadget to a blog post the company published on Thursday. The post was written by DeepMind CEO Demis Hassabis and James Manyika, Google’s senior vice president of research, labs, technology, and society. In it, the two argue that the emergence of AI as a “general-purpose technology” necessitated the policy change.
They write that democracies should lead AI development, guided by core values like freedom, equality, and respect for human rights, and that companies, governments, and organizations sharing those values should work together to create AI that protects people, promotes global growth, and supports national security. They add that, consistent with its AI Principles, Google will continue to focus on AI research and applications that align with its mission and areas of expertise, always weighing the benefits against potential risks.
Google first published its AI principles in 2018, in the aftermath of Project Maven, a controversial government contract that, had Google renewed it, would have seen the company provide AI software to the Department of Defense for analyzing drone footage. Employee protests and resignations over the contract pushed Google to let it lapse, and when the company subsequently published its guidelines, CEO Sundar Pichai reportedly told staff he hoped they would stand the test of time.
By 2021, however, reports indicated that Google was again pursuing military contracts, including a competitive bid for the Pentagon’s Joint Warfighting Cloud Capability contract. And at the beginning of this year, The Washington Post reported that Google employees had repeatedly worked with Israel’s Defense Ministry to expand the government’s use of AI tools, adding to concerns about these partnerships.
