Monday 20 September 2021

Adapting the QC to account for the June 2021 North American Heatwave (part 3)

We have seen in parts 1 and 2 that the Climatological and Distributional Gap checks have been adjusted because some values at some stations were likely being erroneously flagged during the heatwave over North America in June 2021.

To see how much of an effect these updated checks have had on the flagging rates, we look at a map of the temperatures at each station during the last days of June 2021, with observations flagged by our unmodified QC shown in green (Figure 1).  As is clear, a large number of stations are flagged during this time.

Figure 1.  Temperatures at HadISD stations on 28-June-2021 at 00:00UTC (17:00PDT on 27th June).  Flagged stations are shown in green, and non-reporting as transparent.


We produced the same map after the modifications to the two QC tests, together with the corresponding adjustment to the buddy check to allow any tentative flags to be unset (Figure 2).  Many fewer stations are flagged with these updated tests; however, a number still are.  These are flagged throughout the period of interest rather than just during the hottest part of the day, and so are likely the result of a test which flags an entire month.  Spot checks on these stations showed that the flags come from the excessive variance test.

Figure 2.  As for Figure 1 but after updated QC tests.

The excessive variance test looks at the distribution of the within-month variance, and identifies months with exceptionally low or exceptionally high variance.  The scaling used for this test is the interquartile range, and months which have a variance more than 8 IQR from the average are flagged.  As can be seen for the example of Osoyoos (712150-99999, latitude 49.033, longitude -119.433), the variance for June 2021 is much larger than for any previous June (Figure 3).
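
To make this concrete, below is a minimal sketch of this style of check, assuming a series of within-month variances for one calendar month at a single station.  It is not the HadISD code: the function name is ours, and we use the median as the "average" purely for illustration.

```python
import numpy as np


def excessive_variance_months(monthly_variances, n_iqr=8.0):
    """Illustrative sketch of an excessive-variance style check (not the
    HadISD code).  `monthly_variances` holds the within-month variance for
    one calendar month (e.g. every June) at a single station.  Months whose
    variance lies more than `n_iqr` interquartile ranges from the centre of
    the distribution (here the median) are marked for flagging."""
    variances = np.asarray(monthly_variances, dtype=float)
    q25, q75 = np.percentile(variances, [25, 75])
    centre = np.median(variances)
    return np.abs(variances - centre) > n_iqr * (q75 - q25)
```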

Figure 3.  The Variance Check for Osoyoos (BC, 49.033, -119.433) showing all of June 2021 flagged.

The variance check uses a fixed threshold of 8 IQR rather than thresholds generated from the properties of the distribution itself (as in the climatological and distributional checks).  Updating this check to determine thresholds from the distribution would be a larger change than the relatively small ones we have made so far.  Also, in the example in Figure 3, a reasonable threshold determined from the distribution may still have excluded June 2021 (note that the y-axis is on a log scale), and we might struggle to remain objective if tailoring the change to this specific event, perhaps causing issues in other regions.  In contrast, the changes to the climatological and distributional checks were easily motivated (and perhaps should have been spotted during development of the monthly updates).

As we noted in the HadISD papers, the automated QC is a balance between removing erroneous or dubious observations and retaining true extremes, and we do not want to make changes with inadvertently large impacts elsewhere.  Our plan at this point is to note this as an issue with this test (and event) to look at in any future major update to HadISD.  Any flagged data in HadISD are removed from the netCDF data fields, but remain available within the netCDF files should users wish to access them.  If you have thoughts on this, please do get in touch or comment below.

The next update to HadISD (in October 2021) will show a version number increment to reflect these changes in the QC tests (3.2.0.202109p).

[Animations of three days of the heatwave showing the flagged stations before and after the QC test updates are available on the HadISD homepage].

Tuesday 14 September 2021

Adapting the QC to account for the June 2021 North American heatwave (part 2)

Continuing this took longer than planned, so there has been another version release of HadISD (v3.1.2.202107p) in the meantime.

In the last post, we went through the changes that were made to the Climatological Outlier check as a result of the temperatures experienced in North America in June 2021.  Since then, there have continued to be heatwave events across the world, with temperatures and impacts around the Mediterranean being the current focus (at time of writing).  We will continue to use the North American heatwave to illustrate these changes, for consistency, but note that of course changes to our QC will affect all stations and variables, and hence all events.

Distributional Gap Check

This check in fact comprises two parts.  The first uses monthly aggregated data to look for asymmetries in the distribution, and we haven't changed that one.  The second, which is what we delve into here, uses all observations within a calendar month and identifies gaps in this distribution to decide where to flag.  We'll use the same station for our plots as in the previous post, 711130-99999 (Agassiz, BC, Canada).

As we use a very similar approach in this test, we had the same issue where our diagnostic plots initially were not showing data from the incomplete calendar year.  That was an easy fix; see Figure 1.

Fig. 1: The distribution of scaled anomalies for June from Agassiz (711130-99999), with the flagged observations highlighted in red.  The distribution from all years before 2021 is in black, and from all years including 2021 in grey.  Blue is the fit to the data including skew and kurtosis using Gauss-Hermite polynomials.  Note the logarithmic y-axis.

Here again, a handful of observations were being flagged because they fall beyond the bulk of the distribution, but only when ignoring others from the incomplete calendar year.  We also noted the single observation at the low end being flagged.  This test should only look for gaps at least two bin-widths wide (i.e. two consecutive empty bins), which does not seem to have happened here.  We fixed that at the same time as the other changes.
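
For illustration, below is a minimal sketch of a gap search that only reports a gap once two consecutive empty bins have been found.  It is not the HadISD implementation; the function and argument names are our own.

```python
def find_gap(hist_counts, start_bin, direction=1, min_empty_bins=2):
    """Walk outwards from `start_bin` along histogram counts and return the
    index of the first bin beyond a run of `min_empty_bins` consecutive
    empty bins, or None if no such gap is found.  `direction` is +1 to
    search the upper tail and -1 for the lower tail.  Illustrative only."""
    empty_run = 0
    index = start_bin
    while 0 <= index < len(hist_counts):
        if hist_counts[index] == 0:
            empty_run += 1
            if empty_run >= min_empty_bins:
                # everything from the next occupied bin outwards sits
                # beyond the gap and would be a candidate for flagging
                return index + direction
        else:
            empty_run = 0
        index += direction
    return None
```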

As for the climatological check, we treated the complete and incomplete years separately, which meant that these observations were now only tentatively flagged, and so can be unset by the neighbour check (Figure 2).

Fig. 2: Same as Fig. 1, but with the data from June 2021 now being correctly marked as tentatively flagged.  Orange vertical lines are derived from the fit of the distribution (blue) to complete years only (black), and red ones are where a gap has been found.  The purple and pink lines are derived when including the final month.

The final thing we wanted to change was the curve used to fit the distribution.  When putting this code together, we wanted to include skew and kurtosis, as the distributions were clearly non-Gaussian.  At the time, we used Gauss-Hermite polynomials to obtain a fit with these higher moments of the distribution.  However, we have since found that these sometimes have artefacts which result in "wiggles" in the fitted curves (see Figure 1).  This approach is still useful for gauging where to start looking for gaps in the distribution, but we thought this was an opportunity to see what else could be done.  We tried using the same skewed distribution (no kurtosis) as for the Climatological Outlier check.

Fig. 3: Same as Fig. 2, but now using a skewed-Gaussian distribution for the fit rather than the Gauss-Hermite polynomials.

For this month it is a more sensible fit, and it has the co-benefit of moving the value from which this test starts searching for a gap to the right, so including all of the hot temperatures in June.
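
As a rough illustration of this change, the sketch below fits a skew-normal (skew, no kurtosis) using scipy.  The HadISD code fits a curve to the binned distribution itself, so fitting the raw scaled anomalies like this is only an approximation, and the names are ours.

```python
import numpy as np
from scipy.stats import skewnorm


def skew_normal_curve(scaled_anomalies, bin_width=0.5):
    """Fit a skew-normal to the scaled anomalies and return a function that
    evaluates the fitted curve on the same scale as histogram counts built
    with bins of `bin_width`.  Illustrative only, not the HadISD code."""
    shape, loc, scale = skewnorm.fit(np.asarray(scaled_anomalies, dtype=float))
    n_obs = len(scaled_anomalies)

    def curve(x):
        # scale the pdf so it is directly comparable with bin counts
        return n_obs * bin_width * skewnorm.pdf(x, shape, loc=loc, scale=scale)

    return curve
```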

Monday 26 July 2021

Adapting the QC to account for the June 2021 North American heatwave (part 1)

As we noted in the previous post, the extreme temperatures during June 2021 in western Canada and the USA were being erroneously flagged by some of the HadISD QC tests.  This is likely the first of a number of posts as we delve deeper into the causes and resolutions.

Climatological Outlier Check

We go back to the station which we showed in the previous post, 711130-99999 (Agassiz, BC).  The plot we showed was one of the raw diagnostic outputs from HadISD, produced when we ran the QC to see what was going on.

Firstly, while implementing changes for this test, we noticed that the plot was incomplete.  It does correctly show the distribution from which the flagging thresholds were calculated, as well as the flagged observations highlighted in red.

However, for this test, the thresholds are determined from the data themselves, using the distribution of the anomalised observations.  So that the addition of each month during the year does not affect the thresholds set by any test, the thresholds are calculated from a distribution using the complete years only.  In this case, that's all data from all Junes up to the end of 2020.  What was shown on that plot was the distribution (black) of all the observations from complete years only, and the fitted Gaussian (blue).  The red values were the observations that were flagged, which were from data in 2021 only; missing, however, were the data from June 2021 which were not flagged.  The updated plot is below (Fig. 1), where the grey histogram includes all data from June 2021.  This shows that there are other observations in June 2021 which are warmer than the average of previous years.  Some are even warmer than all previous years, but not sufficiently so to be flagged by this test.

[As a reminder, this test fits a Gaussian to the histogram, and then uses this to determine a threshold (from where the fit crosses y=0.1).  Observations which are further from the peak than the threshold are treated in two ways.  If they are separated by an empty bin from the main distribution, then they are flagged. However, if they are "attached" (no empty bin) then they are tentatively flagged (and could be re-instated by the buddy check).]
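
A minimal sketch of that two-level logic, working on histogram bin counts, is below.  It is not the HadISD code, and the simplification of scanning only the upper tail outwards from the peak is ours.

```python
def classify_upper_tail(hist_counts, peak_bin, threshold_bin):
    """Walk upward from the peak of the distribution.  Any occupied bin at
    or beyond `threshold_bin` is hard-flagged if an empty bin has been
    crossed on the way (i.e. it is detached from the bulk), and tentatively
    flagged otherwise.  Illustrative sketch only."""
    hard, tentative = [], []
    crossed_empty_bin = False
    for index in range(peak_bin, len(hist_counts)):
        if hist_counts[index] == 0:
            crossed_empty_bin = True
            continue
        if index >= threshold_bin:
            (hard if crossed_empty_bin else tentative).append(index)
    return hard, tentative
```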

Fig. 1: the distribution of scaled anomalies for June from Agassiz (711130-99999), with the flagged ones highlighted in red.  Distribution from all years before 2021 is in black, and from all years including 2021 in grey. Note the logarithmic y-axis.

This updated plot highlighted something we hadn't realised when adapting this test to work for the monthly updates.  The thresholds are set on the complete-year data (up to 2020), and because these data all fall into a single distribution, this test identifies any observation further from the peak than the threshold as bad and flags it (highlighted in red).  However, when the 2021 data are included in the monthly update, there isn't an empty bin in the distribution.  We note that had some June observations in earlier (complete) years been very hot or very cold, they would have been correctly flagged by the test.

So the first thing we have done is to rectify this, ensuring that in cases like this only tentative flags are set rather than full flags (Fig. 2), as these values are part of a contiguous distribution rather than being separate from it.  In the case of monthly updates, the threshold for flagging (requiring that empty bin) is re-estimated, and is shown by the purple line in Fig. 2.  As in the case of complete years, any observation in the final year further from the peak of the distribution is only tentatively flagged as there is no empty bin (pink).

Fig. 2: Same as Fig. 1, but with the data from June 2021 now being correctly marked as tentatively flagged.  Red and orange vertical lines are derived from the distribution from complete years only (black) and its fit (blue).  The purple and pink lines are derived when including the final month.

The other thing to address was to allow for skew in the Gaussian fit, as it is clear that a symmetric function is not the best fit to these data; the result is shown in Fig. 3.  This reduces the number of observations on which a tentative flag is set to single figures.  However, at the low-temperature end the threshold for the tentative flag has reduced, so should Agassiz get a very cold June, it is possible some values may get tentative flags set instead.

Fig. 3: Same as Fig. 2, but now using a skewed-Gaussian distribution for the fit.

The automated quality control works on data from meteorological stations around the world.  In the case of Agassiz in Canada, we can be reasonably confident that all the data from June 2021 have been included in this update to HadISD, and therefore we could have dropped the "complete" versus "in progress" year distinction.  However, for other locations we do get data coming through for earlier months than the most recent one (e.g. data filling in from January through to May for the release that included June).  In that case it is possible (though perhaps unlikely) that the thresholds for this test in earlier months could change from monthly release to monthly release, resulting in values being flagged or unflagged in different releases.  Our approach is more stable during the monthly updates, and so we keep this distinction.

At the end of the calendar year, we run the QC on the data for the final release of that HadISD version.  That run is on a complete year, so for that release only (the one processed in January each year) this distinction isn't made, and all the June data go into the distribution from which the thresholds are determined.  The original form of the test would then have seen a contiguous distribution for this station, and so would only have set tentative flags.  However, after updating it to use the skew-Gaussian, in fact no observations are flagged at all (Fig. 4).

Fig. 4: Same as Fig. 3, but for the version of the test as it would be run for the update at the end of a calendar year.

We will continue checking other QC tests as well as running further diagnostics before these changes are implemented in the HadISD QC suite, with a version number increment to reflect the changes.  It is likely that these will not be available in time for the release in August 2021 (including data up to the end of July).

Friday 9 July 2021

The June 2021 North American Heatwave and v3.1.2.202106p

We have just run the automated quality control (QC) for the latest monthly update, which includes data from June 2021.  I'm sure you are all aware of the severe heatwave which affected the western part of North America towards the end of that month.  British Columbia (Canada), Oregon and Washington (USA) experienced a number of days of exceptionally high temperatures, well over 40C in some cases.

Station records did not just fall, they were smashed, with new values up to 5C higher than the previous records.  A report by the World Weather Attribution project indicates that this event was virtually impossible without human-induced climate change.  Given that this event was so much warmer than anything experienced in this region in the past, we thought to check how the automated QC handled these exceptional values.

Any QC procedure will always result in retaining some bad values (false negatives) and erroneously removing some good ones (false positives).  It is important, however, to minimise these as far as possible and to do "least harm".  To this end, observations that have been flagged by the QC are removed from the main data stream, but are available in a separate data field in the netCDF files, should any user wish to re-insert them into the time series.
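
For users who do want those observations back, a minimal sketch of the re-insertion using xarray is below.  The file path and the variable names "temperatures" and "flagged_obs" are placeholders rather than the actual names used in the HadISD netCDF files; check the file headers or the Product User Guide for those.

```python
# A minimal sketch, assuming xarray and placeholder names: the path and the
# variable names "temperatures" and "flagged_obs" are NOT guaranteed to match
# the real HadISD files -- check the netCDF headers for the actual names.
import xarray as xr

ds = xr.open_dataset("hadisd_station_file.nc")  # placeholder path

qcd = ds["temperatures"]      # QC'd series; flagged values assumed decoded as NaN
flagged = ds["flagged_obs"]   # observations removed by the QC, stored separately

# Put the flagged observations back into the gaps the QC left behind
restored = qcd.fillna(flagged)
```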

HadISD QC

The town of Lytton (BC) recorded the highest temperatures during this event, but that station does not form part of HadISD.  We had a look for nearby stations, and Figure 1 shows one of them: Agassiz, which is south of Lytton and further down the Fraser River, towards Vancouver.

Figure 1. Temperature timeseries for Agassiz (BC, 711130-99999) for (a) all of 2021 and (b) the latter half of June 2021.  Observations are in black with any flagged by the climatological QC test in red.

As you can clearly see, the highest values at the peak of the heatwave on the 27th to 29th June have all been removed, in this case by the Climatological Outlier check.  At some level this is unsurprising, given that the temperatures experienced surpassed anything in the previous record.  Climatologically speaking they are exceptional values, and so, without any other information to go on, could be dubious.

The Current QC

Of course we know that these are likely to be valid observations, and as HadISD has been designed to retain true extremes, some adjustments to the QC algorithms are necessary.  Firstly, let's have a look at how the current test is identifying and flagging these values.  For full details see the HadISD paper (Dunn et al., 2012).

The Climatological Outlier check works on a monthly basis, and calculates climatological values for each hour of the day for each month using the winsorized observations (winsorizing is a process whereby all values beyond certain thresholds [the 5th and 95th percentiles in this case] are replaced by those threshold values).  Using these 24 climatological values, anomalies are calculated and then scaled using their inter-quartile range.

We have included a way to account for some of the effects of a shifting climate using a low-pass filter.  However, this is only applied to complete years of data, and so has not yet been included in our monthly updates.  The resulting distribution of scaled anomalies is fitted with a Gaussian, and we use the point where this fitted Gaussian crosses the y=0.1 line to set our threshold, rounded up to the next whole degree.
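
A minimal sketch of these anomaly and threshold steps is below.  It uses scipy's winsorize and a closed-form expression for where an unnormalised Gaussian drops to 0.1; the function names and details (e.g. using the winsorized mean as the climatology) are our own, so treat it as illustrative rather than as the HadISD code.

```python
import numpy as np
from scipy.stats.mstats import winsorize


def scaled_anomalies_for_month(temps, hours):
    """Winsorize at the 5th/95th percentiles per hour of day, subtract the
    resulting hourly climatology, and scale the anomalies by their IQR.
    Illustrative sketch only.  `temps` are all observations for one calendar
    month across years; `hours` gives the hour of day (0-23) of each one."""
    temps = np.asarray(temps, dtype=float)
    hours = np.asarray(hours)
    anomalies = np.empty_like(temps)
    for hour in range(24):
        in_hour = hours == hour
        if not in_hour.any():
            continue
        clim = winsorize(temps[in_hour], limits=(0.05, 0.05)).mean()
        anomalies[in_hour] = temps[in_hour] - clim
    iqr = np.subtract(*np.percentile(anomalies, [75, 25]))
    return anomalies / iqr


def threshold_from_fit(amplitude, sigma, floor=0.1):
    """Distance from the fitted peak at which an unnormalised Gaussian
    A * exp(-x**2 / (2 * sigma**2)) falls to `floor`, rounded up to the
    next whole unit of scaled anomaly.  Assumes `amplitude` > `floor`."""
    return np.ceil(sigma * np.sqrt(2.0 * np.log(amplitude / floor)))
```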

Figure 2: the distribution of scaled anomalies for June from Agassiz (711130-99999), with the flagged ones highlighted in red.  Note the logarithmic y-axis.

The test has two levels of flagging, depending on whether there is an empty bin between the main distribution and the observations further from the centre than the threshold values.  If there is such a gap, then these observations are flagged, as shown in Figure 2.  If there isn't an empty bin, and the bins are part of a contiguous distribution but further from the mean than the threshold, then these are "tentatively" flagged (see Figure 10 in the HadISD paper).  When running the neighbour checks, these tentative flags can be removed if sufficient neighbours indicate that the values are reasonable.

In the case of Agassiz, the observations were so extreme that this test flagged them outright, without the option of the neighbour check undoing this (Figure 2).

Amending the QC

There are a number of options for improving the behaviour of this automated QC.  However, the important thing is to make sure that whatever we implement has as few knock-on effects as possible on other regions and flags.  The intention is to improve this test in a robust, responsible way.

A number of options have so far come to mind, including:

  • Amend the low-pass filter to include data from the year in progress.

  • Amend the fitting function from a pure Gaussian to one which allows skew or even kurtosis.  As seen in Figure 2, the high tail of the distribution sits above the fitted Gaussian, and accounting for this will affect the threshold used.  This approach is already used in a different check in the HadISD QC.

  • Use a rolling range to determine the years contributing to the climatologies used when creating anomalies, so values from 1931 are not contributing to 2021.

  • Amend the neighbour check so that spatially coherent anomalies result in flags being unset from a greater subset of QC tests.

All of these approaches will need to be tested with care to ensure that any updates do not result in detrimental performance of the QC suite elsewhere in time or space.

We will release this version of HadISD (v3.1.2.202106p) with a note that observations from this event have been erroneously flagged.  As this is a preliminary version of HadISD, this is reasonable, and gives us time to implement a solution in "slow time".  Watch this space for an update.

References:

Dunn, R. J. H., Willett, K. M., Thorne, P. W., Woolley, E. V., Durre, I., Dai, A., Parker, D. E., and Vose, R. S.: HadISD: a quality-controlled global synoptic report database for selected variables at long-term stations from 1973–2011, Clim. Past, 8, 1649–1679, https://doi.org/10.5194/cp-8-1649-2012, 2012  

[Edited 9-Jul-2021 10.50BST to add option of amending the neighbour check]

Friday 5 March 2021

v3.1.2.202102p

It's been over a year since I last posted an update on this blog about HadISD.  I suppose that's because everything has been ticking along nicely.

The dataset has been updated every month with version 3.1.2.202102p just being released.  That contains data from 1-Jan-1931 to 28-Feb-2021.

In the recent update of all data in the deep past to create v3.1.2.202101p (and the new station selection that goes with that), we have filled in a gap during April 2015 which was inadvertently left in the ISD.  There has been a healthy increase in the number of stations in that update as well, with now 9278 stations in the dataset.

At the end of last year, we released a Product User Guide, to help users with the dataset.  If you have any thoughts or suggestions about this, or anything else with HadISD, do get in touch.