
How to Make Crowdsourcing Disaster Relief Better

We must identify successes and mistakes to optimize usefulness of data collected

January 3, 2013

This article originally appeared in U.S. News and World Report on November 23, 2012.

By Jennifer Chan

In the wake of Sandy's destruction, digital volunteers mobilized again. From their homes and offices, using iPads and laptops, hundreds of volunteers crowd-sourced information and took on microtasks to help FEMA and other agencies process large swaths of information and speed humanitarian response.

For instance, in the first 48 hours after the hurricane, 381 aerial photos collected by the Civil Air Patrol were viewed by hundreds of volunteers, with the goal of quickly giving an overview of the extent of storm and flood damage. This project was called the Humanitarian OpenStreetMap MapMill project. In response to a request from FEMA, project developer Schuyler Erle volunteered to launch and lead the project. By mid-afternoon on November 2, more than 3,000 volunteers had assessed 5,131 images, viewing them more than 12,000 times. Just a week later, more than 24,000 images had been assessed. Each view from a digital volunteer—a mother, a researcher, a friend, a colleague—helped FEMA gauge the degree of damage along the Eastern Seaboard, assessing the condition of buildings, roads, and houses, with the aim of aiding the agency's post-disaster recovery and planning. That's an amazing effort.
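The core of such a crowd-rating workflow is simple to illustrate: many volunteers each score an image, and the scores are aggregated per image to flag likely damage. The sketch below is a minimal illustration of that idea only; the rating scale, field names, and threshold are assumptions, not the MapMill project's actual schema.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical volunteer ratings: each entry scores one aerial photo's
# damage on a 1 (none) to 3 (severe) scale. Data are illustrative only.
ratings = [
    ("photo_001", 3), ("photo_001", 3), ("photo_001", 2),
    ("photo_002", 1), ("photo_002", 1),
]

def aggregate(ratings):
    """Average each image's crowd ratings and flag likely severe damage."""
    by_image = defaultdict(list)
    for image_id, score in ratings:
        by_image[image_id].append(score)
    return {
        image_id: {
            "mean": mean(scores),
            "votes": len(scores),
            "severe": mean(scores) >= 2.5,  # assumed triage threshold
        }
        for image_id, scores in by_image.items()
    }

summary = aggregate(ratings)
```

Averaging over many independent views is what makes the approach robust: no single volunteer's judgment decides an image, and images with few votes can be queued for more review.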

But did it actually help?

This isn't the first time digital volunteers have supported disaster-affected communities. Nor is it the first time that those volunteers and other disaster responders have been left wondering: Did that help? If so, how?

After the 2010 Haiti earthquake, 650 volunteers began tracing roads from annotated maps and satellite imagery into an online map called OpenStreetMap. This created a post-disaster map of Haiti, especially Port-au-Prince, revealing what remained of its roads, buildings, hospitals, and shelters. At the same time, more than 80,000 text messages, mostly in Haitian Kreyol, poured over the country's mobile telephone networks, asking for help via the emergency short code 4636. "Mission 4636" was a predominantly Haitian-run initiative, but it was set up with the help of a few international individuals, including Rob Munro, a computational linguist. At first, those messages went to a Web platform where online volunteers—Haitians around the world, including in Haiti—translated and organized them. Messages were sent onward to relevant first responders, including the U.S. military, for search and rescue and other emergency activities, and back to radio stations and community groups in Haiti. Each translation took a volunteer about five minutes—not much time per message, but cumulatively an enormous amount of work. Munro's evaluation of the Mission 4636 project found that the power of this effort was that it helped Haitians communicate with one another during the disaster—showing that the work needed to make this happen can occur all around the world, and often simultaneously. In other words, a distributed workforce of Haitians with powerful local knowledge was able to help international organizations respond to a disaster—and, more important, to help Haitians help themselves.
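Stripped of its human translation step, Mission 4636 was essentially a triage-and-routing pipeline: an incoming message is translated, then forwarded to whichever responder it concerns. The sketch below shows that routing step only; the category names, keywords, and message fields are illustrative assumptions, not the project's actual rules.

```python
# Minimal sketch of a Mission-4636-style pipeline stage: once a volunteer
# has translated an SMS, the translated text is routed onward. The routing
# keywords and destination names here are hypothetical.

def route(message):
    """Pick a downstream recipient for a translated message."""
    text = message["translated"].lower()
    if "trapped" in text or "rescue" in text:
        return "search_and_rescue"
    if "water" in text or "food" in text:
        return "relief_distribution"
    # Everything else goes back out as community information.
    return "community_radio"

incoming = [
    {"translated": "Family trapped under rubble near the market"},
    {"translated": "We need clean water at the camp"},
]

dispatched = [(m["translated"], route(m)) for m in incoming]
```

In the real effort the "routing rules" were human judgment plus local knowledge—volunteers knew neighborhoods and slang that no keyword list captures—which is exactly why the distributed Haitian workforce mattered.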

In still another project, Ushahidi Haiti, Tufts students collected information from several data sources, including text messages, Twitter, and news websites; they then helped categorize and geolocate this information, offering responders a map. Afterwards, the Tufts students insisted on learning whether or not the time they put in had actually helped. Researchers found that the U.S. Southern Command, which was tasked with support, search, and rescue operations in Haiti, used some of this information. But we still lack a deep understanding of how many other responding organizations used it, for what purpose, and whether it affected their response effort. And despite wanting to know, we still know little about how many Haitians knew about the project and, if they did use it, whether it changed their lives.

As a physician and public health provider who has worked on the ground with humanitarian missions, I learned in Haiti how important—and difficult—such assistance can be. Consider the post-disaster mapping of an area. Pre-disaster maps do not fully reflect a disaster, and sometimes actually get in the way when you're trying to respond on the ground. During my daily work in Haiti, I needed to know where and how quickly I could transfer patients from our hospital to other care centers. To do so, I needed an updated map, and one that could focus on a specific area. When there's no updated post-disaster map of an area, it can be terribly difficult to plan how to provide healthcare services, where to send trucks to deliver food, or which areas need urban search and rescue. Volunteers' digital assessments have made a tremendous difference on the ground.

But when those tools have been evaluated and improved based on that evaluation, they have been still more helpful to first responders. For instance, after the Deepwater Horizon oil spill in 2010, Jeffrey Warren at the Public Laboratory for Open Technology created MapMill, an online platform that lets people crowdsource the categorization of images, creating maps from pictures taken of the Gulf. In 2012, a group from Humanitarian OpenStreetMap, FEMA, the Civil Air Patrol, and the National Geospatial-Intelligence Agency's Readiness Response Recovery Team took this platform, carefully customized it, and assessed how well it could feed aerial imagery into MapMill for people to assess. The Civil Air Patrol took pictures as it flew over the Camp Roberts simulation site in Paso Robles, Calif. Humanitarian OpenStreetMap then determined how well people could rate these images, and improved the platform at the same time to make the effort faster and more efficient.

But such evaluation efforts need to be systematized, making ongoing assessment and feedback central to digital humanitarianism. Some volunteers prefer to dive directly into saving lives without building relationships with local or international responders—but that runs the risk of creating digital solutions that aren't what those on the ground actually need. When many different groups of digital volunteers pour out their different streams of information—as happened in Haiti and again with Sandy—it's extremely difficult for those on the ground to sort through what's available, how relevant it is, and how to use it.

In 2011, John Crowley of the Harvard Humanitarian Initiative and I co-authored Disaster Relief 2.0, a report commissioned by the United Nations Foundation. In our interviews, humanitarian responders often told us that they had trouble making sense of so much information coming from so many different groups of people they hadn't worked with before. Sometimes they didn't even know that digital volunteers were part of the response. What was missing, essentially, were humanitarian intermediaries: people who could identify the different efforts, match incoming information with those on the ground who needed that particular information, make sure it was presented in a manner useful to those who would receive it, and then deliver it appropriately.

Until there are credibility and relevance checks, the groups on the ground—and the people who need assistance—won't have the patience or trust to wade through the data pouring from those passionately dedicated volunteers.

But even before we have intermediaries handling and passing along that information, we need better assessments of what's useful and what isn't. Ad hoc efforts need to be scrupulously researched as they are happening—so that we learn, next time, how best to use our volunteer resources, both human and economic. Which communication pathway is most effective? What digital information is most needed at each phase of an emergency? How can we prepare next time so that Web applications don't crash under the load of all that volunteer generosity? We need to be systematic about identifying our successes and searching for our mistakes, so that we can learn from them. If we don't know what the mistakes are, we are going to keep repeating them. If we don't know what our successes and "wins" are, we may not know how to make them happen again. Before we can fix these emergency communications pathways, we have to evaluate our efforts—so that each time our response is faster, more efficient, and more useful to those on the ground.

Disasters happen. We have technologies and incredible creativity standing by to help us respond. Isn't it time to invest in evaluating and improving as we go, so that our digital volunteers can be confident that their efforts are genuinely making a difference?

- Dr. Jennifer Chan is the director of Global Emergency Medicine in the Department of Emergency Medicine at Northwestern University's Feinberg School of Medicine.