A militarized Google goes for the gold
Although Donald Trump's rather unconvincing remarks framed the discussion of AI as being about improving the lives of ordinary people, it is now clear that a major share of AI spending will go to the defense and intelligence space, with much of that spending, and the intentions behind it, entirely obscured from view.
Google's decision to jump back into the weapons market, in response to the current giveaway by the AI-empowered Department of Defense under Steve Feinberg, deputy secretary of defense and CEO of the defense-budget-gobbling private equity firm Cerberus (named for the three-headed beast that guards the gates of hell), tells us quite a lot.
Google had pledged to stay out of the weapons business after its deception of the public, and of its own workforce, regarding its deep involvement in the Air Force's Maven project for supplying data to drones was revealed. Google employees resigned en masse in protest at how they had been deceived about this covert misuse of Google assets.
But the implications of Google bidding for even more defense and intelligence contracts take on a sinister tone as the nature of defense and intelligence work is rapidly transformed. The Trump administration's decision to deploy the military domestically, starting at the border with Mexico, and to merge military operations with those of domestic security organizations like the FBI and ICE, to such a degree that it is hard to tell them apart, is most revealing. We saw on television the FBI and ICE, in military gear and armored personnel carriers, sent out to round up illegal aliens, and some legal aliens and citizens as well (by mistake), making it clear that almost anything goes now. And anything will go in a few years.
February 5, 2025 roundup of immigrants by FBI in military uniforms
That is to say, we are forced to recognize the militarization of all things as accepted practice. It is now fine for the government, the military, to threaten to send citizens who protest government actions to jails in El Salvador, or to the prison camp at Guantanamo Bay. Both are best known for their torture programs.
In such an environment, what will Google do with the enormous databases it has amassed about every citizen through Google searches, Gmail, Google Scholar, and Google Drive over the last twenty-five years? Remember that we were forced to use Google and Gmail; we never had a chance to make any real choice, because the entire system was designed to force-feed these services to us.
Will that information be used to track down and kill citizens who are on the wrong list, or maybe put there by (a convenient) mistake? What might be done in the near future when lethal drones are employed for law enforcement by the FBI and ICE, in combination with the next generation of Star Link and Star Shield low-orbit drones? How might those systems be combined with Google’s massive data troves? Could it be that the horrific Maven drone program that Google participated in previously will now be coming home to roost as the Trump administration increasingly benchmarks Israel and Argentina in American domestic policies?
And the suggestion that China is somehow already ahead in AI, as mentioned in this article, serves as all the more justification to import such systems without any concern for how they lay the foundations for totalitarian rule. As Professor Johannes Himmelreich, interviewed below, remarks,
“Military and surveillance tech aren’t bad or unethical as such. Instead, supporting national security and doing so in the right way is incredibly important. And supporting national security is, in fact, arguably the ethical thing to do.”
Can’t argue with that—unless you want to be placed in a camp by a Google drone!
“What Google’s return to defense AI means”
Defense One
February 6, 2025
Patrick Tucker
Google has discarded its self-imposed ban on using AI in weapons, a step that simultaneously drew praise and criticism, marked a new entrant in a hot field, and underscored how the Pentagon—not any single company—must act as the primary regulator on how the U.S. military uses AI in combat.
On Tuesday, Google defended its decision to strip its AI-ethics principles of a 2018 prohibition against using AI in ways that might cause harm.
“There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights,” it reads.
The move is a long-overdue correction to an overcorrection, one person familiar with the company’s decision-making process told Defense One.
That “overcorrection” was Google’s 2018 decision not to renew its contract to work on the Air Force’s Maven project. At the time, Maven was the Pentagon’s flagship AI effort: a tool that vastly reduced the time needed to find useful intelligence in hours and hours of drone-video footage. Within defense circles, the program wasn’t controversial at all. Military officials describing the program consistently said Maven’s primary purpose was to enable human operators to understand large data volumes, especially when performing time-sensitive tasks under enormous cognitive burdens. Many praised the effort as pointing the way toward other AI-powered decision aids.
But Google was less than perfectly transparent about its involvement in the project, particularly with its workforce, which, in part, led to an employee revolt in the form of mass resignations and protests. The company soon dropped the contract—but at the cost of competing for other important Pentagon IT contracts.
The episode catalyzed the 2019 drafting of the Defense Department’s own AI ethics principles, which were far more comprehensive than those of most Silicon Valley companies. They aimed to reassure the American tech community and international partners that the Pentagon could lead in the ethical use of AI in combat.
The person familiar with the decision-making process at Google said that this week’s announcement was driven by the rapidly shifting landscape around military use of AI.
“The primary driver of this decision was to ensure Google remains a leading voice in responsible AI. The technology frontier and business landscape is totally altered since 2018, so it was time to turn the page on Maven once-and-for-all,” the person said.
Not everyone is pleased, including some Google employees and human-rights groups.
But Greg Allen, director of the Wadhwani AI Center at the Center for Strategic and International Studies, told Defense One, “This is a fabulous decision and one that Google should have made years ago. Helping to protect America is ethical.”
Google is joining a crowded field of AI-focused firms that are increasingly collaborating to shape Pentagon AI use. But Google brings with it unique cloud and AI capabilities, which are part of the reason it was chosen for Project Maven in the first place. Google’s decision, and the emergence of other rival players in the AI defense space, shows how much sentiment in Silicon Valley has changed to allow collaboration with the military.
Syracuse University professor Johannes Himmelreich, who researches the ethics of artificial intelligence and political philosophy and co-edits the Oxford Handbook of AI Governance, said in an email, “Military and surveillance tech aren’t bad or unethical as such. Instead, supporting national security and doing so in the right way is incredibly important. And supporting national security is, in fact, arguably the ethical thing to do.”
Google’s original ban “probably was overly zealous to begin with,” Himmelreich said.
But Google’s decision also highlights the importance of the Defense Department as the ultimate monitor of how it uses AI in warfare. Whether its AI ethics principles will change under the new administration, or as China and Russia rapidly advance their own capabilities, is an open question.
One AI entrepreneur suggested that China was already ahead.
“We don't really have industrial policy,” Noosheen Hashemi, CEO of health-app maker January AI, said Thursday at the Globsec Transatlantic Forum. “And, of course, [China’s] AI is all in the military. They have an AI military doctrine and they already have incorporated AI into at least 300 different programs in their military. And we don't have an AI military doctrine, which is really unfortunate because, you know, we have a lot of bureaucracy, slow approval cycles, but we have insisted on having a human on the loop, and they have not insisted on that. So they have set themselves up for autonomous warfare, which will be faster.”
Google, Facebook, and X have always been fronts for the NSA, DOD, CIA, DARPA, etc... In fact, Google showed China how to establish an internet surveillance system.
Julian Assange discussed in detail the connection between Google and the intelligence agencies.
So it's absolutely no surprise that AI technology, whether through biomedicine, biometrics, or DOD surveillance, will tighten the noose around humanity's neck. 🤐
https://wikileaks.org/google-is-not-what-it-seems/
https://theintercept.com/2019/07/11/china-surveillance-google-ibm-semptian/
Scary times for any resistance.