The Pentagon is combing the AI developer community to try to build a self-starting swarm of AI-enabled drones capable of recognizing and tracking a target – ostensibly for search and rescue, but potentially for search and destroy.
The Joint Artificial Intelligence Center (JAIC) issued a request for information (RFI) late in December seeking the latest and greatest in AI and drone-swarm tech – and looking for clever ideas on how to combine the two. The resulting smart swarm, the agency said, should be self-directing: able to find and track humans (“and manmade objects,” the RFI specifies) on its own, capable of streaming video of its activities, and willing to nudge its minders (who won’t be minding it most of the time, since it’s primarily AI-powered) when it has latched on to something interesting. The swarm will have to fly at a minimum of 50 knots (93 km/h), stay in the air for at least two hours, and cover 100 square nautical miles (343 square km).
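As a quick sanity check on the figures above (a sketch of our own, not anything from the RFI – the only assumption is the standard definition of 1 nautical mile as exactly 1.852 km):

```python
# Verify the unit conversions quoted alongside the RFI's requirements.
#   speed:    knots -> km/h multiplies by 1.852
#   coverage: square nautical miles -> square km multiplies by 1.852**2
KM_PER_NMI = 1.852

speed_kmh = 50 * KM_PER_NMI            # 50 knots
coverage_km2 = 100 * KM_PER_NMI ** 2   # 100 square nautical miles

print(round(speed_kmh))     # 93  -> matches the 93 km/h in the text
print(round(coverage_km2))  # 343 -> matches the 343 square km in the text
```

Both figures in the article check out once rounded to whole units.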
The official purpose of the “smart swarm” is search and rescue, the RFI explains, while the JAIC program comprises four “mission initiatives” – “predictive maintenance, humanitarian aid and disaster relief, and cyberspace and robotic process automation.”
So, where is number four? The RFI isn’t telling, but we can speculate based on what JAIC is known to do. The AI-specialist department, which only debuted in 2018, ended up running Project Maven, the initiative to weaponize machine learning and Big Data that Google supposedly walked away from after employees got cold feet about helping the Pentagon kill people.
Project Maven has not only beefed up the quality of video input from tactical drones and used computing power to organize the mountains of data obtained through surveillance; it has also taught participants how to “use algorithms for war.”
Will a smart swarm capable of hunting humans with minimal outside direction – along the lines specified in the RFI – follow the rules and stick to search and rescue, or will its AI confuse that mission with “search and destroy”? If the Pentagon’s humans can’t tell a civilian from a terrorist, how are its drone swarms supposed to? And even if the algorithms work just fine, what’s to stop the humans in charge from using their new toys for killing?
JAIC works with DARPA and other secretive high-level Pentagon agencies; if the US military were ever to develop a swarm of killer drones capable of operating without human input (they already have drone swarms that work with human guides), it would happen there.
Before you think “but they wouldn’t dare,” consider that it would be exceedingly hard to prosecute an AI algorithm for war crimes, especially one written in open source.
We’ve seen this movie before, literally. The “Hated in the Nation” episode of the techno-dystopian TV show Black Mirror features a swarm of autonomous drone “bees” that have taken the place of the real thing after a mass die-off. This digital hive proves hackable, and each day a single solitary drone bee is weaponized as an assassin, killing whichever public figure garners the most social media hate. Bees lost at such a slow trickle, no matter the kamikaze drama of their last seconds, never register with the hive’s minders.
The bee-masters in Black Mirror defend their creation by blustering about “military-grade” encryption, insisting no one could ever hack their high-tech better-than-nature-made-them pollinators. By that point, however, someone already had.
Military-grade encryption is only as good as the backdoors deliberately installed in it, and Attorney General William Barr and the whole Five Eyes intelligence network are intent on installing enough backdoors to keep themselves busy peeking through your private data for the rest of their lives. Once they do, everyone’s drone swarm – literally and figuratively – belongs to everyone else. If there were ever a time not to rush into manufacturing an apocalyptic technology, it would be now.