This Israeli Presentation on How to Make Drone Strikes More “Efficient” Disturbed Its Audience

New technology would help find people fleeing drone strikes and predict their movements. An Israeli conference audience was not impressed.


Research backed by the U.S. and Israeli military scandalized a conference near Tel Aviv earlier this year after a presentation showed how the findings would help drone operators more easily locate people — including targets — fleeing their strikes and better navigate areas rendered unrecognizable by prior destruction.

The doctoral student who presented the research demonstrated how pioneering data visualization techniques could show a drone operator, using lines and arrows of varying thickness, which direction fast-moving people and vehicles were most likely to travel, for example, at an intersection or while fleeing a building. The presentation clearly angered at least some of the crowd, including the moderator, prompting hostile questions.

“The guy’s talk (and its video documentation) revealed much of what’s very wrong about UAV warfare,” said Mushon Zer-Aviv, a web designer and activist and an organizer of the conference, the data visualization confab known as ISVIS.

The incident at ISVIS underscores the extent to which drone warfare’s deeply technological basis, and its inhumanity, have become a major part of the global public debate around its use. Once viewed (and still promoted) as an efficient, safer way to target terrorists, lethal drone strikes, now ubiquitous in global hotspots, are increasingly seen as creating wastelands and fomenting the very support for terrorism they are designed to eradicate.

Part of the controversy over the research presentation traces back to the desensitized environment in which drone pilots operate, which is not frequently seen by outsiders. In this world, the pilots ask questions that might sound absurd outside the context of aerial robot-aided killing: What happens when you want to kill someone, but they’ve run into a building, and you’re not sure where they’ll exit? What happens when a town has been so thoroughly destroyed, you can’t recognize it anymore and get lost?

The presenter of the drone material, Yuval Zak, told The Intercept he was surprised by the audience reaction and the hostile questioning that followed his presentation. “The conversation changed from dealing with visualization and improving information presentation on a … map to a discussion about the ethical issues of using drones,” he wrote in an email. “But the focus of the conference and my paper is entirely different.” The technology he presented could just as easily be used for policing and search and rescue as for drone strikes, he said — any time-critical scenario involving a map.

Still, Zer-Aviv said he was stunned as the presentation unfolded. He was the co-chair of ISVIS, which has billed itself as Israel’s first data visualization event, bringing together “design, engineering, and psychological perspectives on visualization.” Like many conferences in any field, ISVIS put out an open call for presentations, hoping to bring a sampling of the burgeoning world of data visualization under one roof at Shenkar College of Engineering, Design and Art in Ramat Gan.

“What is gained and what is lost in the transition from data, through images, to insights?” read the ISVIS manifesto. The programming looked thoughtful and sharp, covering topics from storytelling and journalism to political activism and aesthetics. One session promised to explain how “for museum curators it is imperative to learn, analyze, and understand the behavior patterns of the visitors,” in part through “recent developments in the field of indoor positioning systems.”

This sort of work is central to a lot of applied data science: how to make things we’re already doing more efficient, more effective, less laborious. But what if we’re talking about shooting missiles at people from flying robots? Should drone warfare, already so remote and clinical, receive further layers of software abstraction? Should killing be engineered to be more efficient?

These were the sorts of urgent, necessary questions that Zak ignored. His presentation focused on nuts and bolts, presuming that drone warfare ought to be made more efficient in the first place. His slides indicated his work was part of a “research collaboration between Ben-Gurion University,” the Israeli military, and the U.S. Army Research, Development and Engineering Command’s Aviation and Missile Research, Development and Engineering Center, or, in poetic Pentagon-speak, RDECOM AMRDEC.

The ISVIS organizers were “obviously very curious” when Zak submitted his talk, said Zer-Aviv, and decided to place it in a segment titled “Power and Change,” alongside a presentation on feminist data visualization. “This panel was expected to take on visualization’s use both by those in power and by citizens who may want to grapple with or oppose this power,” explained Zer-Aviv.

Yuval Zak speaks at an Israeli data visualization conference.

Zak opened his presentation with a startling statement that must have, somehow, felt matter-of-fact:

It has been said that in the upcoming round of combat, for example, the Israel Air Force will knock down some 1,000 buildings or more, so anyone who goes into Gaza won’t even be able to identify what he thought he should be able to see there.

Herein lies the problem confronting Israel’s high-tech air power, as Zak’s team sees it: What happens when you’ve so devastated an urban area that it’s no longer recognizable? How will you navigate, for the purposes of killing and destruction, a place that you’ve been transforming through that same killing and destruction? Therein lies a central problem of drone warfare, which relies heavily on sensor-laden robots still operated by humans, whose finite memories and visual processing are easily confused by rubble and ruin. This is where Zak’s research comes in. He explained in his remarks that the goal of his research was “at the end of the day, to improve the efficiency of unmanned drone operators in the army in their missions.”

Zak then described the work environment of the drone operator, who has video from the aircraft and a map, typically with some sort of overlay, which might show existing forces. “What he does not have,” Zak said, “is some sort of aggregate information about past missions.”

In other words, he takes off, he knows where the enemy is expected to be, where our forces are expected to be. He won’t know how the enemy acted in yesterday’s mission unless he remembers, he won’t know how he acted in last week’s mission or two weeks ago and even so, he has an information load and coping with it is very difficult for him.

The issue at hand, then, boils down to one with which an MBA candidate or Deloitte consultant might grapple: How can our organization make sense of an over-abundance of data and increase employee productivity by leveraging 21st century software techniques? The only difference here is that the organization in question is interested in the business of killing, and an increase in employee productivity means killing more easily. Israel’s record of civilian deaths in the course of its unmanned drone campaigns is already well-documented.

Zak covered four different visualization techniques explored during his research, noting that the first in the series was “a visualization that most of the [drone] operators we consulted liked very much.” Suppose you’re tailing a person or a car filled with people. Now, you’re piloting a drone equipped with a litany of hard-to-pronounce imaging sensors capable of incredible visual detail, day or night. But one thing the cameras and lasers can’t discern is what a person on the ground, at a street intersection, for instance, will do next:

You’re following a vehicle, a suspect, you come to a junction, and you have the possibility of going straight, turning right, or turning left. In other words, not you, the target you’re following. So what is the probability that that target turns to each of the directions at the junction? When we can display this probability with either a number that we add to the visualization or using the thickness of the line, and some filtering can be done about this, perhaps the time, the type of target, the date, if it’s a moving target, a vehicle or a pedestrian.
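
Mechanically, the technique Zak described is simple to picture. Below is a minimal, purely illustrative sketch of that kind of display, with invented turn counts and no connection to Zak’s actual system, encoding each exit’s probability as both a number and arrow thickness:

```python
import matplotlib.pyplot as plt

# Invented counts of which way tracked vehicles exited one junction
# in past missions; purely hypothetical numbers for illustration.
turn_counts = {"straight": 42, "right": 25, "left": 13}
total = sum(turn_counts.values())

# Unit vectors for each exit, with the junction at the origin.
directions = {"straight": (0, 1), "right": (1, 0), "left": (-1, 0)}

fig, ax = plt.subplots()
for name, count in turn_counts.items():
    p = count / total
    dx, dy = directions[name]
    # Encode the probability as arrow thickness and as a label,
    # mirroring the two encodings mentioned in the talk.
    ax.annotate("", xy=(dx, dy), xytext=(0, 0),
                arrowprops={"arrowstyle": "-|>", "lw": 1 + 12 * p})
    ax.text(dx * 1.2, dy * 1.15, f"{name}: {p:.0%}", ha="center")

ax.set_xlim(-1.6, 1.6)
ax.set_ylim(-0.4, 1.5)
ax.set_aspect("equal")
ax.axis("off")
plt.show()
```

The filtering Zak mentioned (by time, date, or target type) would simply condition the counts before normalizing them into probabilities.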

The drone operators Zak has been working with, he said, were particularly tickled by this visualization because there are missions during which “they follow a vehicle and … sometimes lose it, because you go into some kind of a cloud, and then they get out of the cloud, and they want to know ‘OK, we’ve lost the target, and there was a junction, so where do we look for it?’”

It’s unclear where the data necessary for such a narrow prediction is coming from, and it’s not the only example of its kind Zak trotted out. Other visualizations under consideration by the Israeli-American research team include one for following individuals as they might flee on foot, in which drone operators would receive a colorful visual display of “the probability of entering and exiting each door in each building,” designated by arrows of varying thickness, and a system for tracking a “permanent target” like Ismail Haniyeh, a senior Hamas leader and former Palestinian Authority prime minister. For people like Haniyeh, Zak said, “we can build a movement grid for him, where the places where he was and the probabilities are shown via the thickness of the lines or of those dots.” The “surveillance grid for an individual target received a very high efficiency ranking” from drone operators, Zak noted with pride. It’s a bit like Netflix suggestions, only for people to fire missiles at.
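
The “movement grid” for a recurring individual amounts to a frequency map of past sightings. Again as a rough sketch only, with invented data and none of Zak’s actual methods, visit probability can be encoded as dot size over a grid:

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented sighting history for one individual: (x, y) grid cells
# recorded across past missions. Entirely fabricated for illustration.
rng = np.random.default_rng(7)
sightings = rng.integers(0, 10, size=(300, 2))

# Count visits per cell and normalize into probabilities.
grid = np.zeros((10, 10))
for x, y in sightings:
    grid[y, x] += 1
probs = grid / grid.sum()

# Encode each cell's visit probability as dot size, one of the two
# encodings (dots or line thickness) described in the talk.
ys, xs = np.nonzero(probs)
plt.scatter(xs, ys, s=probs[ys, xs] * 4000, alpha=0.6)
plt.gca().invert_yaxis()  # image-style coordinates, origin at top left
plt.title("Visit probabilities as dot size (illustrative)")
plt.show()
```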

Zak quickly lost the crowd. “I think no one in the room really expected this,” Zer-Aviv told The Intercept. Sure enough, according to a transcript of the Q&A session following Zak’s talk, the first question was actually a denunciation: “I’m just saying that when you hurt so many people, not all of whom are Ismail Haniyeh, for these purposes, we can look a bit less self-satisfied,” an audience member said. “Not everything is inherently honorable.”

The segment’s moderator tried to press Zak on this point:

We hear a lot of talk these days about predictive policing. About using algorithms, too, to make certain policy decisions. Be it policing policy, in our case, it is targeted assassination policy. Making life-and-death decisions based on data. What is the role both of your data processing and of the visualizations in these complex ethical questions?

In his reply, Zak sidestepped the ethical issues, stating, “In the big picture, our job is to make the work of a drone operator more efficient.” He added that his visualization work would not take two targets and determine “that one has to be destroyed and that one not.” That determination, he said, is made “by people who … view video screens and evaluate the situation based on that.”

In his email to The Intercept, Zak stated that the benefits of increased accuracy for drone operators go beyond efficient killing:

If an operator has better information, there will be less chance of errors or accidents.

Most UAV accidents and mishaps are related to human errors so the technology calls for developing UAVs holistically, which includes human factors in addition to technology. Unfortunately, most UAVs are developed to achieve certain technical goals, without considering the human cognitive limitations in operating the system, or the decision-making process. This is where our research can contribute to improving operators’ performance.

For example, take a reported U.S. case in which UAV operators failed to observe and report on the presence of children in a suspected crowd in Afghanistan, causing a helicopter to kill 23 civilians. These are precisely the incidents we aim to avoid by improving operators’ abilities to focus.

If you can make those video screens as rich and information-packed as possible, well, why wouldn’t you? Isn’t it smarter? Better? But these completely ethics-agnostic replies — so reminiscent of Silicon Valley accountability dodging — are basically the “guns don’t kill people” of drone warfare. Accountability lies with the button-pushers, the reasoning goes, rather than with the people who designed and built the buttons in the first place. The view of drone operators as merely passive consumers of content, who need the best content available in order to make the best decisions possible, allows us to avoid uncomfortable questions about whether this system ought to be used so frequently in the first place, and allows critics to be waved off with promises of better data just around the corner. Maybe the problem with the so-called kill chain used to authorize robotic killing isn’t that it’s an abstracted, desensitizing, information-centric form of remote assassination, but that we’re just not throwing enough good data in the war sluice?

Top photo: A picture shows an Israeli army unmanned aerial vehicle landing in an airfield, in the Israeli-annexed Golan Heights, on Jan. 20, 2015, two days after an Israeli airstrike killed six Hezbollah members in the Syrian-controlled side of the Golan Heights.
