Is drought on the horizon? Researchers turn to AI in a bid to improve forecasts


As winter drags on, some people wonder whether to pack shorts for a late-March escape to Florida, while others eye April temperature trends in anticipation of sowing crops. Water managers in the western U.S. check for the possibility of early-spring storms to top off mountain snowpack that is crucial for irrigation, hydropower and salmon in the summer months.

Unfortunately, forecasts for this timeframe — roughly two to six weeks out — are a crapshoot, noted Lester Mackey, a statistical machine learning researcher at Microsoft’s New England research lab in Cambridge, Massachusetts. Mackey is bringing his expertise in artificial intelligence to the table in a bid to increase the odds of accurate and reliable forecasts.

“The subseasonal regime is where forecasts could use the most help,” he said.

Mackey knew little about weather and climate forecasting until Judah Cohen, a climatologist at Atmospheric and Environmental Research, a Verisk business in Lexington, Massachusetts, that consults on climate risk, reached out to him for help. Cohen wanted to use machine learning techniques to tease out repeating weather and climate patterns from mountains of historical data as a way to improve subseasonal and seasonal forecast models.

In a competition sponsored by the U.S. Bureau of Reclamation, the preliminary machine learning–based forecast models that Mackey, Cohen and their colleagues developed outperformed the standard models used by U.S. government agencies to generate subseasonal forecasts of temperature and precipitation two to four weeks and four to six weeks out.

Mackey’s team recently secured funding from Microsoft’s AI for Earth initiative to improve and refine its technique with an eye toward advancing the technology for the social good.

“Lester is working on this because it is a hard problem in machine learning, not because it is a hard problem in weather forecasting,” noted Lucas Joppa, Microsoft’s chief environmental officer who runs the AI for Earth program, as he explained why his group is helping fund the research. “It just so happens that the techniques he is interested in exploring have huge applicability in weather forecasting, which happens to have huge applicability in broader societal and economic domains.”

Fields being irrigated on the edge of the desert in the Cuyama Valley. Photo by Getty Images.

AI on the brain

Mackey said current weather models perform well up to about seven days in advance, and climate forecast models get more reliable as the time horizon extends from seasons to decades. Subseasonal forecasts are a middle ground, relying on a mix of variables that impact short-term weather such as daily temperature and wind and seasonal factors such as the state of El Niño and the extent of sea ice in the Arctic.

Cohen contacted Mackey out of a belief that machine learning, the branch of AI concerned with recognizing patterns in data to make predictions, could help improve his method of generating subseasonal forecasts by gleaning insights from troves of historical weather and climate data.

“I am basically doing something like machine learned pattern recognition in my head,” explained Cohen, noting that weather patterns repeat throughout the seasons and from year to year and that therefore pattern recognition can and should inform longer-term forecasts. “I thought maybe I can improve on what I am doing in my head with some of the machine learning techniques that are out there.”

Using patterns in historical weather data to predict the future was standard practice in weather and climate forecast generation until the 1980s. That’s when physical models of how the atmosphere and oceans evolve began to dominate the industry. These models have grown in popularity and sophistication with the exponential rise in computing power.

“Today, all of the major climate centers employ massive supercomputers to simulate the atmosphere and oceans,” said Mackey. “The forecasts have improved substantially over time, but they make relatively little use of historical data. Instead, they ingest today’s weather conditions and then push forward their differential equations.”
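
As a loose illustration of that "push forward" idea, and not any forecast center's actual model, the toy sketch below steps a simplified dynamical system forward in time from an initial state. Real numerical weather models integrate coupled atmosphere and ocean equations on supercomputers, but the basic recipe is the same: ingest today's conditions, then step the dynamics forward. The system and initial state here are made up for illustration.

```python
import numpy as np

def toy_forecast(state0, steps, dt=0.01):
    """Toy illustration only: integrate the Lorenz-63 system forward
    from an initial state with simple Euler steps. Real weather models
    evolve far more complex equations, but the idea -- start from
    observed conditions and step the dynamics forward -- is the same."""
    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
    x, y, z = state0
    trajectory = [state0]
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        trajectory.append((x, y, z))
    return np.array(trajectory)

# "Ingest today's conditions" (a made-up initial state) and step forward.
print(toy_forecast((1.0, 1.0, 1.0), steps=500)[-1])
```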

A combine harvester moving on a snow-covered field. Photo by Getty Images.

Forecast competition

As Mackey and Cohen were discussing a research collaboration, Cohen received notice of a competition sponsored by the U.S. Bureau of Reclamation to improve subseasonal forecasts of temperature and precipitation in the western U.S. The government agency is interested in improved subseasonal forecasts to better prepare water managers for shifts in hydrologic regimes, including the onset of drought and wet weather extremes.

“I said, ‘Hey, what do you think about trying to enter this competition as a way to motivate us, to make some progress,’” recalled Cohen.

Mackey, who was an assistant professor of statistics at Stanford University in California prior to joining Microsoft’s research organization and remains an adjunct professor at the university, invited two graduate students to participate on the project. “None of us had experience doing work in this area and we thought this would be a great way to get our feet wet,” he said.

Over the course of the 13-month competition, the researchers experimented with two types of machine learning approaches. One combed through a kitchen sink of data containing everything from historical temperature and precipitation records to data on sea ice concentration and the state of El Niño as well as an ensemble of physical forecast models. The other approach focused only on historical data for temperature when forecasting temperature or precipitation when forecasting precipitation.
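
The article doesn't give implementation details for either approach, but a minimal, hypothetical sketch conveys the flavor of the second one: regress the target variable on lagged values of its own history. The toy data, lag length and model choice below are assumptions for illustration, not the team's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical stand-in data: daily temperature anomalies at one location.
temps = np.cumsum(rng.normal(size=4000) * 0.3)

# Build lagged features: use the most recent k days to predict the value
# `horizon` days ahead (roughly a "weeks 3-4" target in spirit).
k, horizon = 30, 21
X = np.stack([temps[i : i + k] for i in range(len(temps) - k - horizon)])
y = temps[k + horizon :]

# A simple regularized linear model standing in for the purely
# historical-data approach described above.
model = Ridge(alpha=1.0).fit(X[:-500], y[:-500])
print("held-out correlation:", np.corrcoef(model.predict(X[-500:]), y[-500:])[0, 1])
```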

“We were making forecasts every two weeks and between those forecasts we were acquiring new data, processing it, building some of the infrastructure for testing out new methods, developing methods and evaluating them,” Mackey explained. “And then every two weeks we had to stop what we were doing and just make a forecast and repeat.”

Toward the end of the competition, Mackey’s team discovered that an ensemble of both machine learning approaches performed better than either alone.
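
The article doesn't say how the two sets of forecasts were blended. One simple way to ensemble two forecasts, shown here purely as a hypothetical sketch, is a weighted average, with the weight either fixed or tuned on past performance.

```python
import numpy as np

def ensemble_forecast(forecast_a, forecast_b, weight_a=0.5):
    """Blend two forecasts of the same field with a convex combination.
    A 50/50 average is the simplest choice; the weight could instead be
    tuned on historical skill. Purely illustrative -- not the team's method."""
    forecast_a, forecast_b = np.asarray(forecast_a), np.asarray(forecast_b)
    return weight_a * forecast_a + (1.0 - weight_a) * forecast_b

# Example with made-up anomaly forecasts from the two learners.
print(ensemble_forecast([1.2, -0.4, 0.7], [0.8, -0.1, 0.5]))
```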

Final results of the competition were announced today. Mackey, Cohen and their colleagues captured first place in forecasting average temperature three to four weeks in advance and second place in forecasting total precipitation five to six weeks out.

A flooded river under a walking bridge. Photo by Getty Images.

Forecast for the future

After the competition, the collaborators combined their ensemble of machine learning approaches with the standard models used by U.S. government agencies to generate subseasonal forecasts, and found that the combined models improved the accuracy of the operational forecasts by 37 to 53 percent for temperature and by 128 to 154 percent for precipitation. These results are reported in a paper the team posted on arXiv.org.
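
The paper reports those gains as relative improvements in forecast skill. As a purely illustrative sketch, assuming a cosine-similarity-style skill score between predicted and observed anomalies (one common way to score spatial forecasts; all data below is made up), a relative improvement could be computed like this:

```python
import numpy as np

def cosine_skill(pred, obs):
    """Cosine similarity between forecast and observed anomaly fields,
    one common way to score spatial forecasts (illustrative choice here)."""
    pred, obs = np.ravel(pred), np.ravel(obs)
    return float(pred @ obs / (np.linalg.norm(pred) * np.linalg.norm(obs)))

# Made-up anomaly fields: observed truth, an operational baseline forecast,
# and a forecast that blends the baseline with the ML ensemble.
rng = np.random.default_rng(1)
obs = rng.normal(size=100)
baseline = obs + rng.normal(size=100) * 2.0   # noisier forecast
combined = obs + rng.normal(size=100) * 1.2   # hypothetically better

skill_base, skill_comb = cosine_skill(baseline, obs), cosine_skill(combined, obs)
print(f"relative improvement: {100 * (skill_comb - skill_base) / skill_base:.0f}%")
```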

“I think we will continue to see these types of approaches be further refined and increase in the breadth of their use within the field of forecasting,” said Kenneth Nowak, water availability research coordinator with the U.S. Bureau of Reclamation, who organized the forecast rodeo. He added that government agencies will “look for opportunities to leverage” machine learning in future generations of operational forecast models.

Microsoft’s AI for Earth program is providing funding to Mackey and colleagues to hire an intern to expand and refine their machine learning based forecasting technique. The collaborators also hope that other machine learning researchers will be drawn to the challenge of cracking the code to accurate and reliable subseasonal forecasts. To encourage these efforts, they have made available to the public the dataset they created to train their models.

Cohen, who kicked off the collaboration with Mackey out of a curiosity about the potential impact of AI on subseasonal to seasonal climate forecasts, said, “I see the benefit of machine learning, absolutely. This is not the end; more like the beginning. There is a lot more that we can do to increase its applicability.”


John Roach writes about Microsoft research and innovation. Follow him on Twitter.
