Google shares rain ‘nowcasting’ ML research, more accurate than current forecasts

Machine learning has the potential to be applied to nearly every field. Google today shared research on “Machine Learning for Precipitation Nowcasting from Radar Images,” which aims to tackle forecasts for localized rain storms and other short-term weather events that remain “particularly challenging.”

Today’s numerical methods, which directly simulate atmospheric dynamics, ocean effects, thermal radiation, and other physical processes, are constrained by available computational resources. The volume of input data alone is enormous: the National Oceanic and Atmospheric Administration (NOAA), for example, collects nearly 100 terabytes of data per day.

Numerical methods also take multiple hours to run. If it takes six hours to compute a forecast, that allows only 3-4 runs per day, and the resulting forecasts are based on data that is already more than six hours old, which limits our knowledge of what is happening right now.

The company wants to tackle short-term forecasts — known as “nowcasting” — that are not well-served by current methods. This is useful for “immediate decisions” like traffic routing, logistics, and even evacuation planning.

Google’s approach uses radar data and treats weather prediction as a computer vision problem. The neural network learns “atmospheric physics from the training examples alone, not by incorporating a priori knowledge of how the atmosphere actually works.”

As a typical example of the type of predictions our system can generate, consider the radar-to-radar forecasting problem: given a sequence of radar images for the past hour, predict what the radar image will be N hours from now, where N typically ranges from 0-6 hours.

Left: Existing model. Center: Actual. Right: Google model.
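
To make the computer vision framing concrete, here is a minimal, hypothetical sketch of the radar-to-radar setup: a stack of recent radar frames goes in, and a single predicted future frame comes out. The frame count, resolution, and layer sizes below are assumptions chosen for illustration, and the random tensors stand in for real radar sequences; the actual paper describes a more sophisticated U-Net-style architecture trained on historical multi-radar data.

```python
# Illustrative sketch only: nowcasting framed as image-to-image prediction.
# Shapes and hyperparameters are assumed, not taken from Google's paper.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

PAST_FRAMES = 4   # e.g. one hour of radar scans at 15-minute intervals (assumed)
H = W = 256       # spatial resolution of each radar image (assumed)

# Input: a stack of past radar frames; output: one predicted future frame.
inputs = layers.Input(shape=(H, W, PAST_FRAMES))
x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
x = layers.MaxPooling2D()(x)                   # downsample
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
x = layers.UpSampling2D()(x)                   # upsample back to input size
outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)  # per-pixel rain probability

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy")

# Toy training data: random tensors standing in for radar sequences.
past = np.random.rand(8, H, W, PAST_FRAMES).astype("float32")
future = (np.random.rand(8, H, W, 1) > 0.5).astype("float32")
model.fit(past, future, epochs=1, verbose=0)
```

Once trained on real data, a model like this would be fed the most recent hour of radar imagery and return a prediction in a single forward pass, which is what makes the approach so fast compared to running a physics simulation.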

Compared to three widely used forecasting models, Google’s ML-powered rain forecasting “outperforms all three of these models” in the short term.

One of the advantages of the ML method is that predictions are effectively instantaneous, meaning forecasts are based on fresh data, while the widely used High Resolution Rapid Refresh (HRRR) numerical model is hindered by a computational latency of 1-3 hours. This gives computer vision methods the advantage for very short-term forecasting.

Moving forward, Google is looking at combining its system with HRRR, which is better at longer-term forecasts because it leverages a 3D physical model.

As can be seen, the quality of our neural network forecast outperforms all three of these models (since the blue line is above all of the other models’ results).

