The US Climate Prediction Center issues forecasts beyond the National Weather Service's normal 10-14 day window: weekly and monthly outlooks out to 3 months. Given the timeframe and the fact that their forecasts cover the entire continental US, it's not surprising that the forecasts are often wrong. But how wrong? And is their skill improving over time?
I analyzed their 3-month temperature and precipitation forecast skill using data provided on their "Gridded Seasonal Verifications" webpage.
Note that skill is measured on a scale from -50 to 100: -50 would be a forecast that was exactly wrong in every area, 0 a forecast that did no better than chance, and 100 a forecast that was exactly right in every area.
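That -50 to 100 range is consistent with a three-category Heidke skill score, which the CPC uses for its below/near/above-normal outlooks. Here is a minimal sketch of that calculation, under the assumption that the three categories are equally likely, so a chance forecast gets one third of the grid points right:

```python
# Sketch of a 3-category Heidke skill score, assuming the CPC-style
# convention that each category (below / near / above normal) is
# equally likely by climatology, so chance hits 1/3 of grid points.
def heidke_skill(hits: int, total: int) -> float:
    """Skill on the -50..100 scale: 100 = all right, 0 = chance, -50 = all wrong."""
    expected = total / 3  # hits expected from a pure-chance forecast
    return 100 * (hits - expected) / (total - expected)

print(heidke_skill(100, 100))  # exactly right everywhere -> 100.0
print(heidke_skill(33, 99))    # no better than chance     -> 0.0
print(heidke_skill(0, 100))    # wrong everywhere          -> -50
```

The asymmetry of the scale falls out of the math: being wrong everywhere only loses the one-third of points chance would have earned, while being right everywhere gains the remaining two-thirds.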
They provide data starting in 1995. Since the mid-1990s, linear trendlines show that their forecast skill has slightly improved for both Temperature and Precipitation. Precipitation skill started lower but has roughly doubled (from 10 to 20), while Temperature skill started higher but has increased less (from 22 to 28).
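A trendline like the ones described above is just a least-squares line through the annual skill scores. The sketch below shows the fit with NumPy; the skill values are illustrative placeholders (random noise around a 10-to-20 ramp), not the actual CPC verification data:

```python
# Hedged sketch of fitting a linear trendline to yearly skill scores.
# The precip_skill series here is synthetic, NOT the real CPC data.
import numpy as np

years = np.arange(1995, 2025)
rng = np.random.default_rng(0)
# hypothetical precipitation skill rising roughly from 10 to 20, plus noise
precip_skill = np.linspace(10, 20, years.size) + rng.normal(0, 3, years.size)

# degree-1 polynomial fit: slope in skill points per year
slope, intercept = np.polyfit(years, precip_skill, 1)
print(f"skill trend: {slope:+.2f} points/year")
```

A positive slope over the full record, as in the post's trendlines, can coexist with a flat or declining slope over the last decade, which is why the next section re-fits the recent years separately.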
However, the last 10 years have not been as successful:
Since 2012, neither Precipitation nor Temperature skill has increased. In fact, mean Temperature forecast skill has decreased markedly since 2018, after performing quite well over 2014-2018; it is not clear what changed in 2018. A similar transition may be happening with Precipitation: the period 2019-2022 saw consistently good predictions, but since the beginning of 2023 forecast skill has fallen off a cliff.
With increased use of machine learning, it seems likely that long-range forecast skill should improve. However, complex, chaotic weather patterns have their greatest impact on climate predictions in the 1-3 month time frame, so this area of weather/climate prediction may continue to see lower-than-hoped-for success.