We live in a world where AI is already widely used in a variety of weapons systems by a number of countries.
Drones and UAVs are a prime example, with AI selecting and engaging targets without human intervention, and loitering munitions (so-called kamikaze drones) identifying and striking targets on their own. Also in development are "swarming technologies," in which multiple AI-controlled drones operate in coordination.
But there's much more: missile defense systems use AI for the automatic detection and engagement of incoming missiles or aircraft; AI-enabled targeting systems identify targets in conflict zones; autonomous naval systems (unmanned ships) are at sea; and DARPA's Air Combat Evolution (ACE) program has even had AI pilot an actual F-16 in flight.
On top of it all, there are AI-enhanced logistics and decision-support systems optimizing resource allocation and tactical decisions.
So it would make little sense, really, for a top-tier player in the AI landscape like Google to opt out of this ongoing revolution in weapons and surveillance systems.

Gizmodo reported:
"Google dropped a pledge not to use artificial intelligence for weapons and surveillance systems on Tuesday. And it's just the latest sign that Big Tech is no longer concerned with the potential blowback that can come when consumer-facing tech companies get big, lucrative contracts to develop police surveillance tools and weapons of war."
Google was revealed in 2018 to have a contract with the US Department of Defense for "Project Maven," which used AI for drone imaging.
"Shortly after that, Google released a statement laying out 'our principles', which included a pledge to not allow its AI to be used for technologies that 'cause or are likely to cause overall harm', weapons, surveillance, and anything that 'contravenes widely accepted principles of international law and human rights'."

But Google has announced "updates" to its AI Principles, and all the previous vows not to use AI for weapons and surveillance are now gone.
There are now three principles listed, starting with "Bold Innovation".
"We develop AI that assists, empowers, and inspires people in almost every field of human endeavor; drives economic progress; and improves lives, enables scientific breakthroughs, and helps address humanity's biggest challenges," the website reads, in the kind of Big Tech corporate speak we've all come to expect.
The company now promises to develop AI "where the likely overall benefits substantially outweigh the foreseeable risks".
On the "ethics of AI", Google commits to "employing rigorous design, testing, monitoring, and safeguards to mitigate unintended or harmful outcomes and avoid unfair bias".
Source: The Gateway Pundit