Verification, impacts and post-processing

The VIPP team generates weather forecasts for the Public Weather Service (PWS) and for incorporation into many customer products. The team also carries out verification and diagnosis of NWP and forecast products.

Main responsibilities

  • Post-processing, which converts the output of NWP and nowcasting systems into forecast products used by operational meteorologists and into automated forecasts for PWS customers, including the public through the Met Office website and App.
  • Tools to support prediction of the impact of weather for PWS applications. For example, first-guess weather warnings, weather pattern forecasts and Hazard Impact Modelling.
  • Verification systems which allow us to measure the quality of forecasts and diagnose model performance.

VIPP works closely with other areas of Science, and also with meteorologists and technology teams, to implement the operational forecasting and verification systems. More details on the work carried out by the VIPP team are given below.

Verification

The VIPP team carries out continual monitoring of the end-to-end forecast production system. End-to-end means that each stage of the forecast production process is considered, to understand where post-processing adds value.

Forecasts are compared against a variety of observations, from standard observing sites to remotely sensed observations (e.g. radar, satellite). To do this, both area-based and station-based systems are required, and these systems need to change and evolve, just like the models that they evaluate. The team is responsible for the upkeep and development of these systems.
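
As a rough illustration of the station-based side of this, the sketch below computes a few standard scores from matched forecast-observation pairs at a single site. It is a minimal example in plain NumPy; the function name and the data are hypothetical, not taken from any operational system.

    import numpy as np

    def station_scores(forecast, observed):
        """Basic verification scores for matched forecast-observation
        pairs at one site (e.g. 2 m temperature over several cycles)."""
        error = np.asarray(forecast) - np.asarray(observed)
        return {
            "bias": float(np.mean(error)),                # systematic over/under-forecasting
            "mae": float(np.mean(np.abs(error))),         # typical error magnitude
            "rmse": float(np.sqrt(np.mean(error ** 2))),  # penalises large misses
        }

    # Hypothetical forecasts and observations for one site over five cycles.
    fc = [12.1, 13.4, 11.8, 14.0, 12.7]
    ob = [11.5, 13.9, 11.2, 13.1, 12.9]
    print(station_scores(fc, ob))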

Quality control of observations forms an integral part of these activities. Many observations are fully automated, and instruments are subject to malfunction. From a statistical perspective, erroneous observations will lead to misleading results. New observation types are also becoming available all the time, especially satellite-based observations. These often need specialised handling and new methods, and each must be assessed to determine whether it is fit for purpose.
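
A common first line of defence in quality control is a gross-error check against a short-range forecast background. The minimal sketch below illustrates the idea; the tolerance value and the data are illustrative assumptions, not operational settings.

    import numpy as np

    def gross_error_check(obs, background, tolerance):
        """Accept an observation only if its departure from a short-range
        forecast background is within a fixed tolerance; anything else is
        flagged and excluded from verification."""
        departure = np.abs(np.asarray(obs) - np.asarray(background))
        return departure <= tolerance  # boolean mask: True = accept

    obs = np.array([10.2, 10.8, 35.0, 9.9])          # 35.0 looks like a sensor fault
    background = np.array([10.0, 11.0, 10.5, 10.1])  # short-range forecast values
    mask = gross_error_check(obs, background, tolerance=5.0)
    print(obs[mask])  # the faulty value is screened out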

Key aims

  • To develop and maintain operational verification systems.
  • To monitor the forecasting system.
  • To help model developers understand model forecast behaviour.
  • To monitor observation quality and assess new observation types and their usefulness.
  • To develop novel methods for assessing forecasts.

Current projects

  • Verification system upgrades and renewal.
  • Forecast Monitoring: Continuous monitoring of forecast performance is essential for diagnosing problems.
  • Developing novel tools for understanding the behaviour of probabilistic forecasts (one standard building block of such tools is sketched after this list).
  • Assessing satellite observation types for verification use.
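
As a small example of the kind of measure probabilistic-forecast tools build on, the sketch below computes the Brier score for probability forecasts of a binary event. This is a textbook score shown in plain NumPy; it is not drawn from the team's actual codebase.

    import numpy as np

    def brier_score(probabilities, outcomes):
        """Brier score for probability forecasts of a binary event
        (outcome 1 = event occurred, 0 = it did not); 0 is perfect."""
        p = np.asarray(probabilities, dtype=float)
        o = np.asarray(outcomes, dtype=float)
        return float(np.mean((p - o) ** 2))

    # Probabilities of rain issued over six days, against what happened.
    p = [0.9, 0.2, 0.7, 0.1, 0.5, 0.8]
    o = [1, 0, 1, 0, 1, 1]
    print(brier_score(p, o))  # lower is better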

Post-processing

The VIPP team adds value to the raw model output using a number of different methods. Very short-range forecasts are improved by accounting for differences in model grids and resolutions and by integrating model forecasts with the latest observations. Systematic errors in longer-range forecasts are corrected using statistical techniques, which are also used to generate optimal site-specific forecasts from the raw model fields.
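
To make the statistical side concrete, here is a minimal sketch of one simple form such a correction can take: subtracting the mean forecast error over a recent training window at a site. The window length, data and function name are illustrative assumptions only, not the operational scheme.

    import numpy as np

    def bias_corrected(raw_forecast, recent_forecasts, recent_obs):
        """Subtract the mean error over a recent training window from the
        latest raw forecast: one simple statistical correction for
        systematic near-surface errors at a site."""
        recent_bias = np.mean(np.asarray(recent_forecasts) - np.asarray(recent_obs))
        return raw_forecast - recent_bias

    # A site where the model has run about 1 degree warm over the last week.
    past_fc = [15.0, 14.2, 16.1, 13.8, 15.5, 14.9, 16.0]
    past_ob = [14.1, 13.0, 15.2, 12.9, 14.4, 13.8, 15.1]
    print(bias_corrected(15.6, past_fc, past_ob))  # ~14.6 once the warm bias is removed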

Forecast data is also fed into downstream applications, some of which are able to assess the impact of weather on a customer's problem, thus making Met Office products more focused on customer needs.

Key aims

  • To reduce systematic near-surface errors in model forecasts using physically and/or statistically based techniques.
  • To tailor model output to specific customer needs.

Current projects

  • Nowcasting: Extrapolating the latest observations forward as the best forecast for the next 1-2 hours, before merging with model forecasts for further ahead (a simple blending sketch follows this list).
  • Using output from ensemble forecasts: Using ensemble forecasts to provide a complete picture of the risks and uncertainties in the weather forecast (a minimal example of deriving probabilities from ensemble members also follows this list).
  • Developing a new post-processing and verification system (IMPROVER): IMPROVER stands for "Integrated Model postPROcessing and VERification". Under the Met Office Transformation and Efficiency Programme we are developing a completely new post-processing system, which will replace existing systems in 2019-20. It will be probabilistic at its heart, to fully exploit the ensemble forecast data, and will have verification integrated at every step to enable improved assessment of future developments. The IMPROVER code is open source, to enable wide use by collaboration partners and to encourage shared development contributions.
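
A hedged sketch of the blending idea in the nowcasting item above: weight the observation-extrapolation nowcast heavily at the shortest lead times and hand over to the model forecast beyond a crossover time. The linear weighting and the crossover value are illustrative assumptions, not the operational scheme.

    import numpy as np

    def blended_forecast(nowcast, model, lead_time_hours, crossover=3.0):
        """Linear blend of an observation-extrapolation nowcast with an
        NWP forecast: the nowcast dominates at the shortest lead times
        and the model takes over beyond the crossover time."""
        w = max(0.0, 1.0 - lead_time_hours / crossover)  # nowcast weight in [0, 1]
        return w * np.asarray(nowcast) + (1.0 - w) * np.asarray(model)

    rain_nowcast = np.array([2.0, 0.5])  # mm/h at two grid points
    rain_model = np.array([1.0, 1.5])
    print(blended_forecast(rain_nowcast, rain_model, lead_time_hours=1.0))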
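
And a minimal example for the ensemble item: the simplest way to turn a set of ensemble members into a probabilistic product is the fraction of members exceeding a threshold. The member values and threshold here are invented for illustration.

    import numpy as np

    def exceedance_probability(members, threshold):
        """Fraction of ensemble members exceeding a threshold: the most
        basic way of turning ensemble output into a probability."""
        return float(np.mean(np.asarray(members) > threshold))

    # Twelve hypothetical members' 6-hour rainfall totals (mm) at one point.
    members = [0.0, 0.2, 1.5, 3.0, 4.2, 0.8, 2.5, 5.1, 0.1, 3.7, 2.2, 6.0]
    print(exceedance_probability(members, threshold=2.0))  # P(rainfall > 2 mm)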

Forecast model development and diagnostics

The VIPP team's continual monitoring of model performance, in conjunction with detailed comparisons against a wide range of satellite and remotely sensed observations and feedback from forecasters, enables us to develop physically based and testable hypotheses about the causes of model problems and systematic biases, and to develop model changes which address them.

This process involves specialists both within the Met Office and from around the world. We also work with the Seamless Ensemble Prediction group to examine the performance of the model across the full range of temporal and spatial scales forecast by the Unified Model.

The proposed model changes are then tested by running a forecast trial, where an operational forecast model is re-run for a period, modified by the inclusion of one or more proposed changes. The impact of the changes can then be evaluated by looking in detail at the verification results and any other suitable observations.
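
As an indication of how a trial's impact might be summarised quantitatively, the sketch below makes a paired comparison of a verification score between control and trial runs over the same forecast cases. This is a generic statistical illustration (real trial cases are serially correlated, which a plain paired t-test ignores), not the team's actual methodology.

    import numpy as np
    from scipy import stats

    def trial_impact(control_scores, trial_scores):
        """Paired comparison of a verification score (here RMSE, lower is
        better) between control and trial runs on the same cases."""
        diff = np.asarray(trial_scores) - np.asarray(control_scores)
        t_stat, p_value = stats.ttest_rel(trial_scores, control_scores)
        return float(np.mean(diff)), float(p_value)  # negative mean = trial better

    control = [1.42, 1.38, 1.55, 1.47, 1.60, 1.44]  # control-run RMSE per case
    trial = [1.36, 1.35, 1.49, 1.46, 1.52, 1.40]    # trial-run RMSE per case
    mean_diff, p = trial_impact(control, trial)
    print(f"mean RMSE change: {mean_diff:+.3f} (p = {p:.3f})")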

As the issue is investigated further, a package of complementary changes is built, which is then tested, prior to operational use, in a Parallel Suite. The Parallel Suite is run alongside the operational NWP configuration of the Unified Model. This acts as a final quality control on the changes and ensures that forecasters and post-processing systems are prepared for them.

Key aims

  • To monitor the performance of the Met Office operational forecasting models.
  • To diagnose the causes of any systematic biases and problems.
  • To fix problems, either directly or in collaboration with the wider research community.
  • To test proposed model changes to ensure that they improve forecast quality without unintended consequences.