WeatherFusionNet — Our Model Wins Weather4cast Challenge

Weather radars are the way to go for real-time, high-resolution monitoring of severe weather. For the past couple of years, we have been working to take this information further, from monitoring to accurate nowcasting. However, severe weather also happens in areas with unreliable or no access to radar data, where only satellite imagery is available. The second edition of the Weather4cast challenge, organized by IARAI as part of last year's NeurIPS conference, allowed our joint team of Meteopress and the Czech Technical University in Prague to dive into precipitation nowcasting from satellite data for these areas. Despite our initial doubts, high-resolution nowcasting of precipitation from satellite data is possible, and we won the CORE part of the challenge.

The task of the challenge was to predict precipitation as a binary image (rain / no rain) 8 hours into the future for an area of 504x504 km with a 2 km resolution and a 15-minute time step. The input consists of 4 satellite images from the previous hour, covering a larger 3024x3024 km area in a 12 km resolution. Thus, both input and output images have 252x252 pixels, and the objective was to predict for the center 42x42 pixel window at six times higher resolution. The ground truths were based on OPERA radar composites, and each EUMETSAT satellite image consists of 2 visible (VIS), 2 water vapor (WV), and 7 infrared (IR) bands.
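The geometry of the task can be double-checked with a few lines of arithmetic. This is just a sanity-check sketch of the numbers stated above (pixel counts, resolutions, and the six-fold resolution ratio); no challenge code is implied.

```python
# Sanity check of the Weather4cast region geometry described in the text.
input_px, input_res_km = 252, 12    # satellite input: 252x252 px at 12 km/px
output_px, output_res_km = 252, 2   # radar target: 252x252 px at 2 km/px

# Input covers 3024x3024 km, output covers 504x504 km.
assert input_px * input_res_km == 3024
assert output_px * output_res_km == 504

# The 504 km target area corresponds to a 42x42 px window of the input.
center_window_px = (output_px * output_res_km) // input_res_km
assert center_window_px == 42

# The output resolution is six times higher than the input resolution.
assert input_res_km // output_res_km == 6

# 8 hours at a 15-minute time step gives 32 output frames.
frames_out = 8 * 60 // 15
assert frames_out == 32
```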

Relation of input and output regions.

The approach of our joint team builds on our previous work in radar-based precipitation nowcasting. WeatherFusionNet fuses three types of processing of the input satellite data and consists of three separately trained neural networks.

  • Firstly, our already tested and tuned architecture based on PhyDNet is used to predict 10 future satellite images.
  • Secondly, a U-Net-based module, which we call sat2rad, is employed to create precipitation estimates from the input satellite sequence.
  • Finally, another U-Net module merges all the input images, the PhyDNet predictions, and the sat2rad estimates to create the output sequence in the original satellite resolution.
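The fusion step above amounts to stacking everything along the channel dimension before the final U-Net. The sketch below only tracks tensor shapes with NumPy; the channel counts for the sat2rad output (one rain map per input frame) and the exact stacking order are our assumptions for illustration, not the paper's definitive layout.

```python
import numpy as np

# Assumed shapes (channels, height, width), flattening time into channels:
n_bands = 11          # 2 VIS + 2 WV + 7 IR bands per satellite frame
frames_in = 4         # input frames from the previous hour
frames_pred = 10      # future frames predicted by the PhyDNet module

sat_input = np.zeros((frames_in * n_bands, 252, 252))    # 44 channels
phydnet_out = np.zeros((frames_pred * n_bands, 252, 252))  # 110 channels
sat2rad_out = np.zeros((frames_in, 252, 252))            # assumed: 1 rain map per frame

# The final U-Net sees all three sources concatenated channel-wise.
fused = np.concatenate([sat_input, phydnet_out, sat2rad_out], axis=0)
print(fused.shape)  # (158, 252, 252)
```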

The final upscaling of the center 42x42 window to the 252x252 target is done by a simple upsampling layer with bilinear interpolation.
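For readers who want to see what this upscaling does concretely, here is a minimal, framework-free sketch of bilinear interpolation in NumPy (align-corners style, so the image corners are preserved). The original model uses a standard upsampling layer; this hand-rolled version is only for illustration.

```python
import numpy as np

def bilinear_upsample(img: np.ndarray, factor: int) -> np.ndarray:
    """Upsample a 2D array by an integer factor using bilinear interpolation."""
    h, w = img.shape
    H, W = h * factor, w * factor
    # Sample positions in the source image (corners map to corners).
    ys = np.linspace(0, h - 1, H)
    xs = np.linspace(0, w - 1, W)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]   # fractional weights along y
    wx = (xs - x0)[None, :]   # fractional weights along x
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

# A 42x42 crop upsampled 6x yields the 252x252 target grid.
crop = np.random.rand(42, 42)
out = bilinear_upsample(crop, 6)
print(out.shape)  # (252, 252)
```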

Diagram of the model architecture. The dimensions in parentheses denote the temporal and channel dimension sizes, respectively.
sat2rad module prediction examples. Each row illustrates one sample time instance; the first three columns show input satellite data (three different channels), with the black/white square highlighting the target radar area. The fourth column presents the predicted rain probability, and the last column is the target radar image.

The challenge had two parts: CORE, with test data from the same regions and years as the training data, and TRANSFER, where the test dataset was shifted spatially (new regions) and temporally (different years). Our WeatherFusionNet won the CORE challenge, proving to be the best at predicting precipitation for a targeted region from satellite imagery among teams from universities and companies worldwide. You can find more information about WeatherFusionNet in our NeurIPS paper.