News and culture through the lens of Southern California.
Hosted by A Martínez
Airs Weekdays 2 to 3 p.m.

Satellite technology's new role in the fight against poverty




This image of Earth's city lights was created with data from the Defense Meteorological Satellite Program (DMSP) Operational Linescan System (OLS).
NASA/Getty Images


More than a billion people are living in poverty around the world, according to researchers at Stanford University. But one of the biggest obstacles to providing aid is the lack of resources to pinpoint the places that need the most help.

Aid organizations have typically used door-to-door surveys to find information on impoverished communities. But now, Stanford researchers are using satellite technology to get a clearer picture, literally, of impoverished areas.

In a new study, images of the Earth were taken at night, and how bright an area appeared was used as a guide to figure out which areas are in need. The study focused on the countries of Malawi, Nigeria, Tanzania and Uganda.

A Martínez spoke with the study's co-author, Neal Jean, a doctoral candidate in electrical engineering at Stanford, about how this new technology could be a catalyst for providing aid to people in poverty around the world.

Interview Highlights

How does this technology work?

We don't have as much poverty data as we need to work on the poverty problem, but we do have lots of satellite images that contain a lot of unstructured information, some of which tells us about the poverty and socioeconomic outcomes we care about. Our job is to build a machine learning algorithm that takes in these raw pixels, in the form of satellite imagery, and outputs predictions about poverty.

What are you looking for with these images?

We teach the computer to take in these satellite images and predict whether those areas are light or dark at night. In that process, the computer learns to pick up image features such as buildings or roads or forests or water. And our hypothesis is that some of these features are useful in predicting poverty as well.

But isn't pinpointing poverty based on how light or dark an area is too simple? Is that the only factor this technology uses?

People have tried to use nighttime lights alone to predict poverty and similar outcomes, but night lights are just a single value, so they don't carry that much information. For example, it's hard to use night lights to separate an area that's densely populated but poor from an area that's rich but sparsely populated. So our hope is that by using daytime satellite imagery as well, we can pick up on a lot of other information.

Do you think we can expect to see this technology in action on a larger scale?

I'm not sure how quickly this technology could be put to use, but it could happen in the next few months. We've been talking to some people who would be interested in taking the detailed poverty maps we can produce and overlaying them with maps of their current operations. Then these organizations would be able to see whether they're deploying their resources to the right places.