About That Killer Drone…

Today, we’re going to review how to fact-check information. For our example, I turn to the recent surge of interest in whether military-use artificial intelligence has murderous tendencies or not.

First, the sensationalist headline: news agencies picked up a story that, during a simulation, a US Air Force drone using artificial intelligence had essentially attempted to kill its operator when faced with what it perceived as interference, and, when that failed, had decided to take out a communications tower instead. In other words, news agencies, attributing the story to a source no less than the USAF Chief of AI Test and Operations, reported that an AI drone had gone after its controller, à la HAL 9000.

The problem, revealed by the team at Reuters Fact Check, is that the simulation itself never happened. The whole story was a thought experiment – a hypothetical told as though it were something that had really happened. And why not? It’s plausible enough, right? So how, and why, did it happen this way?

A partial answer can be found buried in the highlights published by the Royal Aeronautical Society, which hosted the Future Combat Air & Space Capabilities Summit and provided the primary media coverage for the event. In their recap, in a section provocatively titled “AI – is Skynet here already?”, the author quotes US Air Force Col. Tucker ‘Cinco’ Hamilton’s retelling of a simulation in which a drone killed its operator because the operator was preventing it from taking out its targets.

With an apparent expert speaking to an interested and educated cohort of fellow professionals, it sounds like the sort of story that would lend itself credibility. But as Reuters found out, the facts of the case did not hold up to additional scrutiny.

As Reuters pointed out, the Royal Aeronautical Society issued a correction in which Hamilton admits the above is a hypothetical scenario, and that the experiment as described was never run. He further suggests that the story was meant to highlight the potential dangers of AI. But doesn’t such an exaggeration, if not carefully framed, also pose its own disinformation danger? After all, if the primary source cannot be trusted, who can?

The Air Force’s clarifying statement, issued by its press office, should have been enough to clear the air, but the story persisted across social media and mainstream outlets.

Ars Technica published a credible piece on the whole affair, presenting the assertion and its subsequent debunking and concluding that it was a case of ‘too good to be true’ sensationalist virality that drove the hype. Contrast that with Sky News’ version, which presented the story as a more imminent danger, or as a lie that had been covered up. The difference in credibility between the two really lies in the presentation: which is meant to feed an echo chamber, and which is an objective retelling? Witness Sky News’ “no real person was harmed,” a statement that would be unnecessary had the story been explained properly. More egregiously, Sky News closes their article with the unattributed claim that AI’s rapid rise “has raised concerns it could progress to the point where it surpasses human intelligence and will pay no attention to people.” Even a casual reader deserves more context and explanation.

Reuters’ final verdict was that the story was missing additional context. I think we can all agree that a killer drone living only in someone’s mind is far less frightening than one being actively trained, just as misinformation unleashed on an unsuspecting society has far greater consequences than an objective retelling of the facts.
