Google employees are pressing chief executive Sundar Pichai to bar the US military from using the company’s artificial intelligence tools, according to reports in the Financial Times. The push comes as Google is reportedly close to striking a new agreement with the US Department of Defense that would expand the Pentagon’s access to Google’s AI and cloud capabilities, reigniting long‑running internal tensions over military technology.

More than 100 staff members working on Google’s AI projects have signed an internal letter urging the company’s leadership to set strict “red lines” on military contracts, mirroring positions recently adopted by rivals such as Anthropic. The employees have asked Google to refuse to provide AI systems that could be used for mass surveillance, autonomous weapons, or other forms of warfare, echoing a broader push across big‑tech firms to limit how advanced AI supports military operations.

The letter specifically warns that allowing the Pentagon to use Google’s AI without clear restrictions could deepen public distrust and damage the company’s reputation, especially at a time when the US is already deploying AI‑enabled systems in conflicts, such as the ongoing operations involving Iran and the Strait of Hormuz. Some employees have also drawn parallels to earlier protests in 2018, when thousands of Google workers signed a petition against the company’s involvement in the Pentagon’s Project Maven, which used AI to analyse drone‑video footage.

In response to those earlier protests, Google promised not to design AI for use in weapons and adopted a set of AI principles aimed at limiting harmful applications, including military uses that violate human rights or international law. Employees now argue, however, that those principles are not binding enough and that the company risks drifting back toward deeper military integration unless Pichai issues an explicit veto on specific Pentagon‑linked AI deployments.

The current push reflects a wider wave of employee activism in Silicon Valley, where engineers at multiple firms are demanding that their AI not be used to enhance drone targeting, surveillance, or autonomous decision‑making on the battlefield. As the US government seeks more AI‑driven capabilities, business leaders and workers are increasingly at odds over the ethical boundary between national‑security partnerships and what employees call the “business of war.”