Using ensemble forecasts in probability and decision-making

Ensemble prediction allows the uncertainty in a forecast to be assessed quantitatively, and attaching numbers to that confidence or uncertainty allows users to assess the risks more accurately.

Ensemble forecasts contain a huge amount of information (see Fig. 1).

Fig. 1: Rainfall forecast output from the individual ensemble members

Using Fig. 1, at two days ahead we can be confident that there will be showers and bands of heavy rain around the UK, but there is considerable uncertainty about the location and extent of the heavy rain (shown in the reddish colours).

How we use the ensembles to help decision-making

Our forecasters often like to see the individual forecasts, but for other users we need to find efficient ways to summarise the information. One way is using probability forecasts.

To make best use of a probability forecast, users must choose a probability threshold that gives the right balance of alerts and false alarms for their particular application.

Probability forecasts

Probability forecasts can be used in two main ways:

  • Using a range of values

  • Using percentages

Range of values

For a specific weather element, such as temperature or wind speed, a range of values can be provided, along with a measure of how confident we are that the actual value will fall within that range.

Fig. 2 shows the range of uncertainty in temperature at a specific location, plus some indication of the most probable values. At each forecast time a range of possible values is plotted, along with an estimate of the probability that the temperature will fall within that range.

Fig. 2: Possible temperature values with associated levels of confidence
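
A range like the one shown in Fig. 2 can be derived directly from the raw ensemble output. The sketch below is a minimal illustration in Python: it takes the 10th and 90th percentiles across the members at each lead time to give a central 80% range. The member values, the six-member ensemble and the choice of an 80% range are all assumptions made for the example, not Met Office practice.

```python
import numpy as np

# Hypothetical ensemble temperatures (degC): rows are members, columns are
# lead times in hours. All of these numbers are invented for illustration.
lead_times_h = [24, 48, 72]
members = np.array([
    [12.1, 10.4,  8.9],
    [12.6, 11.0,  9.7],
    [11.8, 10.1,  7.5],
    [12.3, 11.5, 10.2],
    [12.0,  9.8,  8.1],
    [12.4, 10.9,  9.0],
])

# A central 80% range: the 10th and 90th percentiles across the members at
# each lead time (the 80% figure is a choice made for this sketch).
lower = np.percentile(members, 10, axis=0)
upper = np.percentile(members, 90, axis=0)

for t, lo, hi in zip(lead_times_h, lower, upper):
    print(f"T+{t}h: about an 80% chance the temperature lies between "
          f"{lo:.1f} and {hi:.1f} degC")
```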

Percentages

A probability forecast can express, as a percentage, how likely a defined event is to occur. This helps users to assess the risks associated with the particular weather events to which they are sensitive.

Ensembles are designed to estimate these probabilities by sampling the range of possible forecast outcomes. The probability of a particular event occurring is estimated by counting the proportion of ensemble members which forecast that event to occur. So if six out of the 24 members predict more than 5 mm of rain at a specified location in a defined period, we would estimate there to be a 1-in-4, or 25%, chance of the event happening. 
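
The counting step described above is straightforward to express in code. The sketch below, in Python with invented rainfall totals for a 24-member ensemble at a single location, reproduces the six-out-of-24 example; the individual member values are made up.

```python
# Hypothetical rainfall totals (mm) for one location and period from a
# 24-member ensemble; the values are invented for illustration.
rain_mm = [0.2, 1.4, 6.1, 0.0, 7.3, 2.2, 0.8, 5.6, 3.1, 0.0, 9.4, 1.1,
           0.4, 2.8, 5.9, 0.0, 1.9, 6.8, 0.3, 2.5, 0.1, 3.7, 1.0, 0.6]

threshold_mm = 5.0
n_exceed = sum(1 for r in rain_mm if r > threshold_mm)
probability = n_exceed / len(rain_mm)

print(f"{n_exceed} of {len(rain_mm)} members exceed {threshold_mm} mm "
      f"-> estimated probability {probability:.0%}")
```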

Fig. 3 is an illustration of a probability forecast. The darker the blue becomes, the greater the probability of the rainfall exceeding 5 mm in six hours. For additional information, the contour lines show the pressure at mean sea level predicted by averaging all the ensemble members. This gives an indication of the weather system producing the risk.

Fig. 3: A chart showing the spatial variation in the probability of the six-hour rainfall exceeding 5 mm
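
A chart like Fig. 3 can be produced by applying the same counting at every grid point, and by averaging the members' pressure fields to give the contours. The sketch below uses random numbers in place of real model output, and the ensemble size, grid dimensions and statistical distributions are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_members, ny, nx = 24, 50, 60   # assumed ensemble size and grid dimensions

# Stand-ins for real model output: six-hour rainfall (mm) and mean sea level
# pressure (hPa) for each member on a latitude-longitude grid.
rain_mm = rng.gamma(shape=0.8, scale=3.0, size=(n_members, ny, nx))
mslp_hpa = 1010.0 + rng.normal(scale=5.0, size=(n_members, ny, nx))

# Probability of exceeding 5 mm: the fraction of members above the threshold
# at each grid point (this is what the blue shading in Fig. 3 represents).
prob_exceed_5mm = (rain_mm > 5.0).mean(axis=0)

# Ensemble-mean pressure field, which would be drawn as the contour lines.
mean_mslp = mslp_hpa.mean(axis=0)

print("highest exceedance probability on the grid:", prob_exceed_5mm.max())
print("ensemble-mean MSLP range:",
      round(mean_mslp.min(), 1), "to", round(mean_mslp.max(), 1), "hPa")
```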

If the probability of an event occurring is 10%, this means that, on average, the event will occur on one occasion in every 10 (or equivalently 10 in every 100) and will not occur on the other nine. A consequence is that we can never say whether a single probability forecast was right or wrong; we can only measure how good our probability forecasts are by looking at a large set of them. We can then group all the 10% forecasts together and check that the event occurred on roughly one in 10 of those occasions.
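
Checking probability forecasts in this way is a measure of their reliability. The sketch below groups an invented history of forecast-outcome pairs into probability bins and compares each bin with the frequency at which the event actually occurred; the synthetic data are constructed to be perfectly reliable, which real forecasts will not be.

```python
import numpy as np

# Invented verification history: (forecast probability, did the event occur?).
rng = np.random.default_rng(1)
forecast_prob = rng.uniform(0.0, 1.0, size=2000)
# Synthetic outcomes generated so that the forecasts are perfectly reliable.
event_occurred = rng.uniform(0.0, 1.0, size=2000) < forecast_prob

# Group the forecasts into 10% bins and compare with the observed frequency.
bin_edges = np.arange(0.0, 1.01, 0.1)
for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
    in_bin = (forecast_prob >= lo) & (forecast_prob < hi)
    if in_bin.any():
        observed = event_occurred[in_bin].mean()
        print(f"forecasts of {lo:.0%}-{hi:.0%}: event occurred "
              f"{observed:.0%} of the time")
```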

Although they give a useful guide, ensembles cannot provide a perfect representation of probability. By reviewing past performance we can use statistics to calibrate the forecast and give improved probability forecasts.
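
One very simple form of such calibration is sketched below, assuming a verification history like the one in the previous example: each raw probability is replaced by the frequency observed among past forecasts that fell in the same probability bin. The data and the miscalibration are invented, and in practice more sophisticated statistical methods would normally be used.

```python
import numpy as np

def calibrate(raw_prob, past_probs, past_outcomes, n_bins=10):
    """Replace a raw probability with the event frequency observed for past
    forecasts in the same probability bin (a deliberately simple sketch)."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = min(np.digitize(raw_prob, edges) - 1, n_bins - 1)
    in_bin = (past_probs >= edges[idx]) & (past_probs < edges[idx + 1])
    if not in_bin.any():
        return raw_prob        # no history for this bin: leave unchanged
    return past_outcomes[in_bin].mean()

# Invented verification history in which the raw forecasts are overconfident:
# the event actually occurs with probability 0.5 * p + 0.25.
rng = np.random.default_rng(2)
past_probs = rng.uniform(0.0, 1.0, size=5000)
past_outcomes = rng.uniform(0.0, 1.0, size=5000) < 0.5 * past_probs + 0.25

print("a raw 80% forecast becomes about",
      f"{calibrate(0.8, past_probs, past_outcomes):.0%}")
```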

Examples in practice

Both users below receive the same probability forecasts from the Met Office, but they respond to them in different ways.

User A is liable to suffer a loss when a particular weather event occurs, so they would like to protect themselves. However, protection is itself expensive (though less expensive than being unprotected when the event occurs), so they should protect themselves only when the probability of the event is high.

User B is sensitive to the same weather event and is liable to suffer a much larger loss than User A, but with a warning they can protect themselves quite cheaply. This user should therefore protect themselves at much lower probabilities. They will get a larger number of false alarms but have the best chance of being protected when an event does occur.

User B will react at low probabilities, perhaps anything more than 20%, but User A may only take action when the probability reaches 80%. The precise level at which each user should start to react depends on their cost of protection and their potential losses. The Met Office offers a consultancy service in how to maximise the benefit of the forecasts for any particular application.
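
A common way of framing this trade-off, used here as an illustration rather than a description of the Met Office service, is the simple cost-loss model: it is worth protecting whenever the forecast probability exceeds the ratio of the cost of protection to the potential loss. The figures below are invented, chosen so that the implied thresholds match the 80% and 20% examples above.

```python
def should_protect(probability, protection_cost, potential_loss):
    """Cost-loss rule: protect when the expected loss (probability x loss)
    exceeds the cost of protecting, i.e. when probability > cost / loss."""
    return probability > protection_cost / potential_loss

# Invented figures for the two users described above.
user_a = {"protection_cost": 800.0, "potential_loss": 1000.0}   # threshold 0.8
user_b = {"protection_cost": 400.0, "potential_loss": 2000.0}   # threshold 0.2

for prob in (0.1, 0.3, 0.6, 0.9):
    print(f"P={prob:.0%}: User A protects? {should_protect(prob, **user_a)}, "
          f"User B protects? {should_protect(prob, **user_b)}")
```

User B, with the larger potential loss and the relatively cheap protection, ends up acting on much weaker signals than User A, exactly as described above.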
