Perceptions of accuracy
How accurate are Met Office forecasts – and how accurate do people think they are?
Every provider of weather forecasts needs a means of validating how well they are performing, and the Met Office is no exception. Verifying the accuracy of our forecasts is a key way to measure performance, enabling us to test the performance characteristics of our models and identify the research needed to make improvements. We do this by testing various parameters – such as temperature or wind direction – against actual observations, then assessing performance trends against established benchmarks.
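As a minimal sketch of this idea (not Met Office verification code – the function name, tolerance and figures below are invented for illustration), verifying a parameter can be as simple as comparing each forecast value against the matching observation and reporting the share that fall within an agreed tolerance:

```python
# Illustrative sketch only: verifying forecast temperatures against
# observations with a simple "percentage correct within tolerance" measure.

def percent_correct(forecasts, observations, tolerance=2.0):
    """Share of forecasts within `tolerance` degrees of the observed value."""
    if len(forecasts) != len(observations):
        raise ValueError("forecasts and observations must align")
    hits = sum(1 for f, o in zip(forecasts, observations) if abs(f - o) <= tolerance)
    return 100.0 * hits / len(forecasts)

forecast_temps = [14.2, 9.8, 17.5, 6.1, 12.0]   # hypothetical forecasts (°C)
observed_temps = [13.5, 11.9, 17.0, 6.4, 14.8]  # matching observations (°C)
print(percent_correct(forecast_temps, observed_temps))  # prints 60.0
```

In practice a verification system would track such scores over time and compare them against established benchmarks rather than a single batch of values.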
It may sound straightforward, but with the Met Office issuing thousands of forecasts every day, how do we decide which parameters to verify? Here’s where the science becomes something of an art, as Richard Orrell, Deputy Head of the Public Weather Service, explains: “The key is being able to assess how well we’re doing around the criteria that matter to our customers. We spend a lot of time working with people who use our services to find out what those criteria are.”
For example, while a key concern for members of the public is whether it’s going to rain or not, an aviation customer primarily wants to know the wind and temperature forecasts at cruising altitude, or whether the weather at destinations is likely to cause problems on arrival.
How the accuracy of these different parameters is measured also varies according to the needs of the end user. For example, ‘percentage of correct forecasts’ may be a general and accessible measure, but it may be less relevant to a transport network customer more interested in unusual or severe weather. With this in mind, the Met Office gears statistics and measures to the specific requirements of each customer, ensuring they stay relevant and useful.
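The point about 'percentage of correct forecasts' being less relevant for unusual or severe weather can be illustrated with a small, invented example (the figures and function names are hypothetical, not Met Office statistics): because severe events are rare, a forecast that never warns can score very highly on overall percentage correct while missing every event, which is why event-based measures such as hit rate matter to customers focused on severe weather.

```python
# Illustrative sketch: why "percentage correct" can mislead for rare events.
# Over 100 days with 3 severe-weather days, a forecast that never warns is
# 97% "correct" overall, yet an event-based hit rate exposes its uselessness.

def contingency(forecast_events, observed_events):
    """Counts of hits, misses and false alarms for yes/no event forecasts."""
    hits = sum(f and o for f, o in zip(forecast_events, observed_events))
    misses = sum((not f) and o for f, o in zip(forecast_events, observed_events))
    false_alarms = sum(f and (not o) for f, o in zip(forecast_events, observed_events))
    return hits, misses, false_alarms

observed = [True] * 3 + [False] * 97   # 3 severe days in 100
never_warn = [False] * 100             # a forecaster that never warns

percent_correct = sum(f == o for f, o in zip(never_warn, observed)) / 100
hits, misses, _ = contingency(never_warn, observed)
hit_rate = hits / (hits + misses)      # probability of detection

print(percent_correct)  # prints 0.97 — looks excellent
print(hit_rate)         # prints 0.0  — but every severe event was missed
```

This is one reason the same forecast can warrant quite different accuracy measures for different customers.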
While the focus is on establishing and communicating the objective truth behind the science, understanding the perceived truth is also key. Different people experience the weather differently, in different places and at different times of year, and this inevitably influences their perception of a forecast's accuracy. The context in which a forecast is used also has a significant impact. Other potential drivers include brand affiliation, the usability of the product, how weather-dependent a person's job is, and so on. The Met Office is looking to identify these drivers, with a view to potentially tracking them in the future.
The rise of social media has added a further element to an already complex picture, with likes, comments and dislikes also influencing how forecast accuracy is perceived. While this presents challenges, opportunities also abound, such as mining data from social conversations to gain insights into where and how the weather is affecting people.
The Met Office is also embracing ‘citizen science’. For example, we are working with local Flood Action Groups to see whether they can engage local communities in providing weather impact reports. Not only could this information help with verification, it could also enhance how the Met Office validates success. As Richard says, “We can objectively define our accuracy, but if that doesn’t correlate with people’s perception, we have to redefine what success actually is.”
So, if accuracy is improving, why has the public's perception of forecast accuracy remained relatively unchanged at around 77% over the last decade? There are many factors, but part of the answer may lie in the link between consistency and perceived accuracy: the public can receive differing forecasts from different providers and so conclude that the forecast is inaccurate. Another factor may be that people's expectations have simply risen, particularly in a world where they are used to constant updates and information on demand.
Surveys tell us that 87% of people trust the Met Office. As ever, our challenge is to raise the public’s confidence in forecast accuracy further, quantify the inherent uncertainty within a forecast and communicate a consistent message across different channels.
Getting better all the time
Are the Met Office’s forecasts getting better? Answering this question relies on establishing a benchmark.
To make sure individual weather events or seasonal influences don’t skew results, the Met Office takes three years of performance data to establish benchmark measurements – even longer for unusual or severe weather.
Against these set benchmarks, Met Office forecasts are generally improving at a rate of one day of additional accuracy per decade. In other words, a three-day forecast today is about as accurate as a two-day forecast was a decade ago. This is true for surface pressure but does vary for other parameters as some types of weather are easier to forecast than others.
According to methodology set by the World Meteorological Organization (WMO), the Met Office’s global Numerical Weather Prediction model is the most accurate operational model in the world. Accuracy varies between forecasts, but as an example, one-day temperature forecasts in the UK are 95% accurate within two degrees of the observed temperature. This accuracy decreases the further ahead you look, so the five-day temperature forecast is 75% accurate.
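The decline in accuracy with lead time described above can be made concrete with a small, invented verification sample (the records below are hypothetical, not Met Office data): grouping forecast–observation pairs by lead time and applying the same "within two degrees" criterion shows the score falling as the forecast looks further ahead.

```python
# Illustrative sketch: "within two degrees" accuracy typically falls as
# lead time grows. All figures below are invented, not Met Office statistics.

from collections import defaultdict

# (lead_days, forecast °C, observed °C) — hypothetical verification records
records = [
    (1, 14.0, 13.5), (1, 9.0, 9.6), (1, 17.0, 16.2), (1, 6.0, 7.5),
    (5, 14.0, 11.2), (5, 9.0, 12.4), (5, 17.0, 15.8), (5, 6.0, 7.1),
]

within_two = defaultdict(list)
for lead, fcst, obs in records:
    within_two[lead].append(abs(fcst - obs) <= 2.0)

for lead in sorted(within_two):
    pct = 100 * sum(within_two[lead]) / len(within_two[lead])
    print(f"{lead}-day forecasts: {pct:.0f}% within 2 °C")
# prints:
# 1-day forecasts: 100% within 2 °C
# 5-day forecasts: 50% within 2 °C
```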