Land surface climate station records - frequently asked questions
A few of the most frequently asked questions about the HadCRUT3 subset.
1. Are the data that you are providing the 'value-added' or the 'underlying' data?
The station data that we are providing are from the database used to produce the global temperature series. Some of these data are the original underlying observations and some are observations adjusted to account for non-climatic influences, for example changes in observing methods or site location.
The database, therefore, consists of the 'value-added' product that has been quality-controlled and adjusted to account for identified non-climatic influences. Adjustments were applied to only a subset of the stations, so in many cases the data provided are the underlying data with any obviously erroneous values removed by quality control. The Met Office does not hold information about which adjustments were applied and so cannot advise which stations are underlying data only and which contain adjustments.
2. What about the underlying data?
Underlying data are held by the National Meteorological Services (NMSs) and other data providers. Such data have in certain cases been released for research purposes under specific licences that govern their usage and distribution.
It is important to distinguish between the data released by the NMSs and the truly raw data, e.g. the temperature readings noted by the observer. The data may have been adjusted to take account of non-climatic influences, for example changes in observing methods, and in some cases this adjustment may not have been recorded, so it may not be possible to recreate the original data as recorded by the observer.
3. Why is there no comprehensive copy of the underlying data?
The data set of temperatures, which is provided as a gridded product extending back to 1850, was largely compiled in the 1980s and 1990s, when it was technically difficult and expensive to keep multiple copies of the database.
For the IT infrastructure of the time this was an exceedingly large database, and multiple copies could not be kept at reasonable cost. There is no suggestion that anything untoward or unacceptable, in terms of best practice at the time, occurred.
4. How can you be sure that the global temperature record is accurate?
The methods used have been peer-reviewed. There are three independent global temperature data sets, all of which clearly show the rise in global temperatures over the last 150 years. Furthermore, the strong scientific evidence that climate is changing as a result of human influence is also based on growing evidence that other aspects of the climate system are changing. These include: the atmosphere becoming more moist; changing global rainfall patterns; reductions in snow cover; decreases in glacier volume and Arctic sea ice; increases in sea level; and changes in global-scale circulation patterns. There are also numerous changes in phenological records which point towards a general warming and support the veracity of the instrumental record.
5. Why have you not previously shared the HadCRUT data?
We have always provided the gridded HadCRUT product freely and without restriction for research use. The data set is available through the Met Office, University of East Anglia Climatic Research Unit (CRU) and British Atmospheric Data Centre (BADC) and has been used widely in research papers and the media.
6. What about the underpinning observations on which the grid box averages are based?
The underpinning land station data are owned by other countries and institutions and previous releases were made only with agreement from these data providers. Following the ruling of the Information Commissioner we have decided to release all station data, except some stations in Poland which we do not currently have permission to release.
The underpinning ocean data component of HadCRUT is publicly available as the International Comprehensive Ocean-Atmosphere Data Set (ICOADS).
7. Why can you not release all the data?
When National Meteorological Services were contacted, many did not reply, and a lack of a reply could not be taken to imply consent. Of the countries that did reply, only Poland stated that we could not release their data (aside from those data we released in December 2009 that were either in the Regional Basic Climatological Network (RBCN) or in the Global Climate Observing System (GCOS) Surface Network). Following the ruling of the Information Commissioner we have decided to release all station data, except those stations in Poland which we do not currently have permission to release.
8. Will releasing a subset skew the principal findings?
No, all stations, except some stations in Poland, have now been released and any differences between the full data set and the near-complete subset will be insignificant at a global scale.
9. Does this subset constitute a new data set?
This is not a new data set. Data sets are only released when they have gone through the proper process of scientific review.
It is important that due scientific process is followed if we are to have confidence in our findings. If we were proposing this as a new data set then we would have submitted it for peer-review and only released it once accepted. The three principal data sets have all undergone this process and, therefore, retain primacy.
10. Why aren't all the underpinning land station records available for free?
Making observations costs substantial amounts of money and requires a degree of technical expertise and training to meet internationally agreed standards prescribed by the World Meteorological Organization (WMO). Furthermore, these data, even at a monthly mean resolution, can have significant economic value to the rights holders. In many parts of the world, NMSs are expected to act as commercial entities, and removing potential revenue streams could substantially harm many such organisations. We, therefore, cannot guarantee that all NMSs will permit release of station-level data.
11. When will you release all the data?
We will release the remaining station data once we have the permissions in place to do so. We are dependent on international approvals to enable this final step and cannot guarantee that we will get permission from all data owners.
12. How have you dealt with the freedom of information requests regarding releasing the underpinning global temperature data?
We take our responsibilities under the Freedom of Information Act very seriously and have, in all cases, handled and responded to requests in accordance with our obligations under the legislation.
We have been consistent in our responses in stating that the Met Office is not in a position to release the underpinning land station data as we do not have the authority to do so, as the data are owned by other countries and any such release would need to be agreed with data providers. We are in the process of seeking this agreement from the Polish NMS, so that we will hopefully be in a position to release the last few stations in the future.
13. What have you done to gain permissions?
We, with the CRU, have written a letter to all rights holders requesting permission to publish the underlying station data. We have monitored responses and actively pursued the rights holders for a decision.
14. Who is ultimately responsible for the land data record?
The University of East Anglia Climatic Research Unit (CRU) is responsible for the land data record and holds the primary intellectual property rights for it, as explained in the answer to the next question.
15. Why is this responsibility with the University of East Anglia CRU and not the Met Office Hadley Centre?
During the 1980s the University of East Anglia CRU was funded, primarily by the United States Department of Energy, to collate a global land temperature record. Since then it has undertaken several major updates to the record, increasing station density and time-series completeness. This is why it owns the primary intellectual property rights (IPR) for the land climate records.
16. So does the Met Office Hadley Centre have any involvement with the land climate data?
Since 2002 the Met Office Hadley Centre has formally assisted University of East Anglia CRU by providing quality control and real-time updates for the land climate data set. The Met Office Hadley Centre is entirely responsible for the global sea surface temperature component of the global mean temperature and also responsible for merging these series to create the HadCRUT product. This underpinning ocean data component of HadCRUT is available publicly at ICOADS.
17. Are there gaps in your data sets?
Because observations are not available from every location around the world, there are places from which data are not received. HadCRUT does not infill any of these gaps in the data.
18. As there are gaps in the HadCRUT data set, does this have an effect on the global temperatures announced so far?
The HadCRUT headline number provides an estimate solely for the portion of the globe where data are reported. This coverage changes over time and needs to be accounted for, because climate change is not identical everywhere around the globe. The HadCRUT product therefore comes with uncertainty estimates. The main contributor to this uncertainty is the sampling effect, which is particularly large in the late 19th century, when observational coverage was poorer. The warming trend is much larger than the uncertainty estimates, so the headline finding of a warming world is unaffected by sampling issues.
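As an illustration of estimating a global mean only from the grid boxes where data are reported, here is a minimal Python sketch. This is not the actual HadCRUT code (which is in Perl); the grid layout and box values are invented for the example. Each box is weighted by the cosine of its latitude, in proportion to its surface area, and missing boxes are simply absent.

```python
import math

def global_mean(grid):
    """Area-weighted mean over observed grid boxes only (gaps ignored).

    grid: dict mapping (lat_centre, lon_centre) -> anomaly in degC,
    with unobserved boxes simply absent from the dict.
    """
    wsum = vsum = 0.0
    for (lat, _lon), anom in grid.items():
        w = math.cos(math.radians(lat))  # box area scales with cos(lat)
        wsum += w
        vsum += w * anom
    return vsum / wsum

# Two boxes observed, one near the equator and one at high latitude;
# the equatorial box dominates because it covers more area.
demo = {(2.5, 2.5): 1.0, (62.5, 2.5): 0.0}
```

Because the average is taken only over observed boxes, sparser coverage (as in the 19th century) means the estimate rests on fewer boxes, which is the sampling effect described above.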
19. If some land stations are shown to be incorrect does this invalidate the whole data set?
A single station or even a handful of stations that are incorrect will not materially impact the global estimates, although they will have impacts at the immediate regional level. Any issues or discrepancies arising from the data would be rectified as a matter of course, as soon as we are informed of them.
20. I can find more data on the web for a station than you hold, why is this?
Our data sets include all the historical data that were available at the time when CRUTEM3 was published. Since then we have undertaken only monthly updates to the most recent portion of the record. When further historical records are found to supplement a station, these are applied only when a new version of the data set is released. This allows us to better control the accuracy of updates to the temperature records and ensures that, except for the most recent period, the record does not change substantially between versions, allowing intercomparison of published research.
21. I can see obviously erroneous individual values in the data - do these make it through to the final product?
The gridding code includes a final check against the station record's standard deviation, which removes observations that are very different from the rest of that station's record, so most obviously erroneous values do not make it through. Inevitably, a few bad individual observations are included in the final product, but as the product is an average of observations from many stations their effect on regional or global scale means is negligible.
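A standard-deviation screen of this kind can be sketched in a few lines of Python. This is only an illustration of the idea, not the released gridding code (which is in Perl), and the `n_sd` cut-off below is an assumed threshold, not the one actually used.

```python
import statistics

def screen_outliers(values, n_sd=5.0):
    """Drop values lying more than n_sd standard deviations from the
    series mean. n_sd is an assumed threshold for illustration only."""
    mean = statistics.fmean(values)
    sd = statistics.pstdev(values)
    if sd == 0.0:
        return list(values)  # constant series: nothing to flag
    return [v for v in values if abs(v - mean) <= n_sd * sd]

# 24 plausible monthly anomalies plus one keying error (25.0 degC):
series = [0.1, -0.2, 0.3, 0.0, -0.1, 0.2,
          -0.3, 0.1, 0.0, 0.2, -0.2, 0.1] * 2
series.append(25.0)
clean = screen_outliers(series, n_sd=3.0)
```

Here the single gross error is removed while all the ordinary values survive, which is the sense in which only "obviously erroneous" values are caught.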
22. Do sources for a station differ over time?
The historical component of the record is generally (but not always) from a single source. Updates, however, depend upon data routinely exchanged between NMSs in the form of CLIMAT messages, and this source may differ from that for the historical record. Sources of much of the early CRU data are documented within United States Department of Energy Technical Reports 22, 27 and 17.
23. I am having problems unzipping the data files. What is the solution?
We are aware that some users are getting error messages when trying to unzip the data files. The following are some suggestions for how to get around this problem.
- If you have a choice of web browser, try using Firefox, Google Chrome or Opera, as these browsers do not appear to generate errors.
- If you are using Internet Explorer, please consult the advice in the Microsoft Knowledge Base.
- If you still have problems with Internet Explorer, try changing your Internet Options to disable the use of HTTP 1.1.
24. How do I use the code that you have provided to calculate gridded statistics from the data files?
The code consists of two scripts written in the Perl scripting language.
The first, 'station_gridder.perl', takes the data files and creates a gridded data set. The second, 'make_global_average_ts_ascii.perl', calculates the time series of global average temperatures from the gridded files created by the first program.
You will need Perl installed on your computer. This is available from the Perl programming language website.
You will then need to create a directory (or folder), place the two code files in it, and also create a directory (folder) in it called 'station_files' into which you should extract the station files from the zip file containing the data.
Linux users: Set the first directory you created as your current directory and then type: perl station_gridder.perl | perl make_global_average_ts_ascii.perl > global_time_series.txt
Windows users: Open a command prompt (MS-DOS) window, change to the folder you first created and then type: perl station_gridder.perl | perl make_global_average_ts_ascii.perl > global_time_series.txt
25. Why are there differences between the data that were released in December and the data released in January?
Feedback from users and internal review of the data identified a number of minor issues with some of the station data. We have corrected these. The corrections are described on the CRUTEM3 January 2010 update webpage. The minor corrections that have been applied have minimal impact on the global temperature record. Even in the two regions most affected by the changes, the United States and Australasia, the impact is only discernible in the 19th century when there are few stations and the published uncertainties are largest. The new version falls within the stated 95% confidence limits of the old version much more than 95% of the time.
26. Why do the data differ from those released by CRU?
Station data to be released by CRU are for the latitude zone 30°N to 40°S and relate to a version of the data as of January 2009. Corrections to the CRUTEM3 station dataset were made by the Met Office in January 2010 (in conjunction with CRU) and were reported at the time.