Should you grab your umbrella before you walk out the door? Checking the weather forecast beforehand will only be helpful if that forecast is accurate.
Spatial prediction problems, like weather forecasting or air pollution estimation, involve predicting the value of a variable in a new location based on known values at other locations. Scientists typically use tried-and-true validation methods to determine how much to trust these predictions.
But MIT researchers have shown that these popular validation methods can fail quite badly for spatial prediction tasks. This might lead someone to believe that a forecast is accurate or that a new prediction method is effective, when in reality that is not the case.
The researchers developed a technique to assess prediction-validation methods and used it to prove that two classical methods can be substantively wrong on spatial problems. They then determined why these methods can fail and created a new method designed to handle the types of data used for spatial predictions.
In experiments with real and simulated data, their new method provided more accurate validations than the two most common techniques. The researchers evaluated each method using realistic spatial problems, including predicting the wind speed at Chicago O'Hare Airport and forecasting the air temperature at five U.S. metro locations.
Their validation method could be applied to a range of problems, from helping climate scientists predict sea surface temperatures to aiding epidemiologists in estimating the effects of air pollution on certain diseases.
“Hopefully, this will lead to more reliable evaluations when people are coming up with new predictive methods and a better understanding of how well methods are performing,” says Tamara Broderick, an associate professor in MIT’s Department of Electrical Engineering and Computer Science (EECS), a member of the Laboratory for Information and Decision Systems and the Institute for Data, Systems, and Society, and an affiliate of the Computer Science and Artificial Intelligence Laboratory (CSAIL).
Broderick is joined on the paper by lead author and MIT postdoc David R. Burt and EECS graduate student Yunyi Shen. The research will be presented at the International Conference on Artificial Intelligence and Statistics.
Evaluating validations
Broderick’s group has recently collaborated with oceanographers and atmospheric scientists to develop machine-learning prediction models that can be used for problems with a strong spatial component.
Through this work, they noticed that traditional validation methods can be inaccurate in spatial settings. These methods hold out a small amount of training data, called validation data, and use it to assess the accuracy of the predictor.
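To make that standard approach concrete, here is a minimal holdout-validation sketch in Python. The model, data, and split are illustrative placeholders, not the researchers' actual setup.

```python
# Minimal sketch of traditional holdout validation: set aside part of the
# training data and score the predictor on it. All names here are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(500, 2))               # sensor locations (lon, lat)
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)    # measured values plus noise

# Hold out a random slice of the data as "validation data."
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

model = KNeighborsRegressor(n_neighbors=5).fit(X_train, y_train)
val_error = mean_squared_error(y_val, model.predict(X_val))
print(f"Estimated prediction error: {val_error:.3f}")
```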
To find the root of the problem, they conducted a thorough analysis and determined that traditional methods make assumptions that are inappropriate for spatial data. Evaluation methods rely on assumptions about how validation data and the data one wants to predict, called test data, are related.
Traditional methods assume that validation data and test data are independent and identically distributed, which implies, among other things, that the value of any data point does not depend on the other data points. But in a spatial application, this is often not the case.
For instance, a scientist may be using validation data from EPA air pollution sensors to test the accuracy of a method that predicts air pollution in conservation areas. However, the EPA sensors are not independent — they were sited based on the location of other sensors.
In addition, perhaps the validation data are from EPA sensors near cities while the conservation sites are in rural areas. Because these data are from different locations, they likely have different statistical properties, so they are not identically distributed.
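A small, hypothetical simulation (not from the paper) makes the danger concrete: when the held-out sensors sit in one region and the test sites in another, a random holdout can report a much smaller error than the predictor actually makes.

```python
# Hypothetical illustration of the failure mode above: sensors cluster in one
# region ("near cities") while test sites sit elsewhere ("rural areas"), so
# the random-holdout error estimate is far too optimistic.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)

def field(x):                    # a smooth spatial process (stand-in for pollution)
    return np.sin(x[:, 0]) * np.cos(x[:, 1])

X_obs = rng.uniform(0, 5, size=(400, 2))     # sensors confined to one region
y_obs = field(X_obs) + 0.05 * rng.normal(size=400)
X_test = rng.uniform(5, 10, size=(200, 2))   # test sites in a different region
y_test = field(X_test)

model = KNeighborsRegressor(n_neighbors=5).fit(X_obs[:300], y_obs[:300])

val_err = mean_squared_error(y_obs[300:], model.predict(X_obs[300:]))
test_err = mean_squared_error(y_test, model.predict(X_test))
print(f"holdout estimate: {val_err:.3f}  true test error: {test_err:.3f}")
```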
“Our experiments showed that you get some really wrong answers in the spatial case when these assumptions made by the validation method break down,” Broderick says.
The researchers needed to come up with a new assumption.
Specifically spatial
Thinking specifically about a spatial context, where data are gathered from different locations, they designed a method that assumes validation data and test data vary smoothly in space.
For instance, air pollution levels are unlikely to change dramatically between two neighboring houses.
“This regularity assumption is appropriate for many spatial processes, and it allows us to create a way to evaluate spatial predictors in the spatial domain. To the best of our knowledge, no one has done a systematic theoretical evaluation of what went wrong to come up with a better approach,” says Broderick.
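As a minimal, hypothetical sketch of that idea, the Python function below estimates a predictor's error at new locations by kernel-smoothing its squared residuals on held-out validation data. The estimator, the `lengthscale` parameter, and the helper name are our illustration of the smoothness assumption, not the paper's actual algorithm or interface.

```python
# Hypothetical sketch of a smoothness-based evaluation: nearby validation
# residuals are informative about error at a target location, so we average
# them with distance-based weights. Not the paper's exact method.
import numpy as np

def estimate_error_at(predict, X_val, y_val, X_target, lengthscale=1.0):
    """Estimate the predictor's squared error at each target location by
    smoothing squared validation residuals over space."""
    sq_resid = (y_val - predict(X_val)) ** 2
    # Gaussian kernel weights from each target to each validation site.
    d2 = ((X_target[:, None, :] - X_val[None, :, :]) ** 2).sum(-1)
    w = np.exp(-0.5 * d2 / lengthscale**2)
    w /= w.sum(axis=1, keepdims=True)
    return w @ sq_resid          # one error estimate per target location

# Usage (with objects like those in the earlier sketches):
# err_per_site = estimate_error_at(model.predict, X_obs[300:], y_obs[300:], X_test)
```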
To use their evaluation technique, one inputs the predictor, the locations to be predicted, and the validation data; the technique then automatically does the rest, estimating how accurate the predictor's forecast will be for each location in question. However, effectively assessing the validation technique itself proved to be a challenge.
“We are not evaluating a method, instead we are evaluating an evaluation. So, we had to step back, think carefully, and get creative about the appropriate experiments we could use,” Broderick explains.
First, they designed several tests using simulated data, which had unrealistic aspects but allowed them to carefully control key parameters. Then, they created more realistic, semi-simulated data by modifying real data. Finally, they used real data for several experiments.
Using these three types of data on realistic problems, such as predicting the price of a flat in England based on its location and forecasting wind speed, enabled them to conduct a comprehensive evaluation. In most experiments, their technique was more accurate than either traditional method they compared it to.
In the future, the researchers plan to apply these techniques to improve uncertainty quantification in spatial settings. They also want to find other areas where the regularity assumption could improve the performance of predictors, such as with time-series data.
This research is funded, in part, by the National Science Foundation and the Office of Naval Research.