Accurate Climate Change Assessment: An Impossible Task?

By Joseph D’Aleo | October 14, 2007

Recently, NASA made a critical adjustment[1] to its US annual mean temperature record since 1895 after Steve McIntyre, who also blew the whistle on the flawed hockey stick of Mann, Bradley and Hughes, found an error in the agency’s adjustments. Tellingly, the correction came without a press release or even an explanation on the site, sparking considerable attention in the blogosphere but drawing little mention in the national media. One can rest assured that had the adjustment gone the other way (a warming), there would have been press releases, widespread hype and headlines for days.

The adjustment fell primarily on the temperatures in the years after 2000, whose annual means declined by about 0.15C.

NASA GISS’s Gavin Schmidt and James Hansen have told the media that the downward correction of 0.15C was not significant.

If one accepts that as true and examines the new US temperature curve since 1930, one finds a trend of only 0.12C (0.22F) over the 77 years of measurement. Thus, by Schmidt and Hansen’s own recent standard, the warming in the United States over the past 77 years has also been “insignificant”.
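A trend figure like this is just the least-squares slope of the annual series accumulated over the period. Here is a minimal sketch of the calculation, using placeholder values rather than the actual GISS series:

    import numpy as np

    def linear_trend(years, temps):
        """Least-squares slope (degrees per year) of an annual series."""
        slope, _intercept = np.polyfit(years, temps, 1)
        return slope

    # Placeholder anomalies: a weak trend of ~0.0016 C/yr plus a little noise,
    # which accumulates to roughly the 0.12 C total cited above.
    years = np.arange(1930, 2007)  # 77 years, 1930-2006
    temps = 0.0016 * (years - 1930) + np.random.default_rng(0).normal(0.0, 0.05, years.size)

    slope = linear_trend(years, temps)
    print(f"Trend: {slope:.4f} C/yr, about {slope * 77:.2f} C over 77 years")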

This change is comparable to the one I found using the old NCDC USHCN data set and presented at the AMS Annual Meeting in a paper entitled Multi-Decadal Scale Temperature Cycles: Trends, Causes and Modifiers; there I found an approximate 0.25F change over the same period.

How much of this warming is real remains a question we can’t answer with any certainty. That is because a number of researchers, including Roger Pielke Sr, have published peer-reviewed papers documenting siting issues with stations and insufficient urban adjustments in the official temperature records. Recently Anthony Watts, a former TV meteorologist, initiated a grassroots effort to have volunteers across the country survey and photograph the instrument sites and post the results at http://www.surfacestations.org/ [2]. To date about 33% of the 1,221 stations have been surveyed and photographed.

The preliminary results show that more than half the sampled sites appear to fall short of federal guidelines for optimum placement. Some instrument stations are placed near sewage treatment plants, parking lots, buildings and air-conditioners: all well-known artificial heat sources that could bias temperature records upward.

On Watts’s home page, he gives just one example of two nearby sites: one a well-sited rural station at Orland that has been stable for 100 years, the other in a small town of 12,000 whose sensors have been encroached upon by buildings and other warming influences.

The station in Orland, CA is not surrounded by artificial heat emitters. Its record shows that current temperatures are not unusual, and are even cooler than some periods in the recent past.

On the other hand, the station in nearby Marysville, CA sits in what has progressively become a virtual “heat island,” causing its temperatures to climb.

“It’s really for the best assessment of the climate,” Pielke said. “We need temperature data that is located in locations more fairly representative of a large area.”

Pielke said the National Weather Service should have had a station-checking process similar to Watts’s “years ago.” He said Watts’s work fills a need to know how the stations gather data. Pielke’s previous research has shown that many weather stations have been poorly placed.

Again, almost all of the poor placements infect the official records with an artificial warming bias. Thus, it is entirely possible that current temperatures are no warmer than they were in the 1930s and 1940s. That is consistent with what we find when we look at temperatures in the Arctic (Polyakov) and Greenland (NASA GISS), where siting issues and urbanization are not contaminating the data.

THE NEW US DATA AND TOP TEN WARMEST YEARS

With NASA’s revised database, 6 of the top 10 warmest years now fall within the span from the 1920s to the 1950s, and only 4 in the last two decades. This suggests a cyclical US warming rather than an accelerating warming due to greenhouse gases. Recall the claims that 10 of the warmest years on record came in the last 11 years; the revised data counters such claims.
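For readers who want to reproduce the tally, it is a simple ranking exercise: sort the annual values, take the ten warmest, and bucket them by era. A sketch in Python; the (year: anomaly) values below are approximate illustrations of the revised ranking, not the official GISS numbers:

    # Approximate illustrative anomalies (deg C), not the official GISS values.
    records = {1934: 1.25, 1998: 1.23, 1921: 1.15, 2006: 1.13, 1931: 1.08,
               1999: 0.94, 1953: 0.90, 1990: 0.88, 1938: 0.86, 1939: 0.85}

    top10 = sorted(records, key=records.get, reverse=True)[:10]
    early  = sum(1920 <= y < 1960 for y in top10)   # 1920s through 1950s
    recent = sum(y >= 1987 for y in top10)          # last two decades (as of 2007)
    print(f"Top ten: {early} from the 1920s-1950s, {recent} from the last two decades")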

CAN WE HOPE TO GET AN ACCURATE ASSESSMENT OF PAST CHANGES?

Securing an accurate global temperature record is an even more daunting, perhaps impossible, task.[3]

First, the oceans cover two-thirds of the globe. In a 2007 paper, Viktor Gouretski and Klaus Peter Koltermann estimated that, because of instrument-related biases, the ocean heat content increase since the 1950s needed to be reduced by a factor of 0.62.
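To see what that factor means in practice: if a reported ocean heat content increase were, say, 10 x 10^22 joules (a purely illustrative figure, not one from the paper), the corrected increase would be 0.62 x 10 x 10^22 = 6.2 x 10^22 joules, a 38% reduction in the estimated warming of the oceans.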

Secondly, if we focus on land, we have the global issue of station dropout (from about 6,000 stations in 1970 to about 2,000 today), missing monthly data, changes in instrumentation, changes in the time of day of observations, changes in instrument location, and land use change and urbanization. In many cases, these changes were not well documented, let alone properly adjusted for.

At least the stations in the United States have been somewhat more stable. In 1990 NCDC attempted to adjust the raw data to produce data sets that could perhaps be used for climate change assessments, but without a proper study of station siting, as Pielke and Watts have shown, these adjustments must be questioned. Steve McIntyre has also recently reported on the lack of an adjustment for the 1990s change to new instrumentation with a known warm bias[4].

Also, aggregate population size alone matters less than the change in population, or the growth of the city around a site. New York has been a city for 100 years and has not changed significantly in that time; its raw data show only a cyclical change with little net warming. Even though the five boroughs have a population of around 8 million, the physical changes around little Marysville, with its population of just 12,000, were more significant, and Marysville showed a warming not seen in the Big Apple. Making the correct adjustment decisions may prove nearly impossible, because the whole process introduces human subjective judgment and the possibility of data manipulation and/or error. This appears clear in the following case study.

CASE STUDY: THE BIG APPLE


THREE RADICALLY DIFFERENT US GOVERNMENT VERSIONS

As noted above, our national centers regard station data as critical for measuring recent climate change. The raw observations are taken from the stations and then adjusted to account for local factors such as site changes, changes in instrumentation, time of observation and, in some cases, urbanization (Karl 1988). One would think the differences would be small and that, once adjusted, the data would stand the test of time.

However, we found that to be far from the truth by examining the data sets for our biggest city, New York City, and its climate station in Central Park.

Historical Central Park observations were taken from the periphery of the park: from 1909 to 1919 at the Arsenal Building on 5th Ave (between 63rd & 64th), and since 1920 at Belvedere Castle on Transverse Rd (near 79th & 81st).

[Photo: Belvedere Castle, Central Park, New York City]

The National Climatic Data Center takes this raw data and makes adjustments for the factors mentioned. The first major compilation and station-by-station adjustment came with USHCN Version 1[5] in 1990. I compared that adjusted data with the raw data taken directly from the NWS New York City website for Central Park, choosing the two extreme-temperature months, July and January, for the comparison.
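The mechanics of such a comparison are simple: subtract the raw series from the adjusted one, year by year, and inspect the implied adjustment over time. A minimal sketch, with placeholder numbers standing in for the actual NWS and USHCN values:

    import numpy as np

    def implied_adjustment(raw, adjusted):
        """Adjusted minus raw; negative values mean the adjustment cooled the raw record."""
        return np.asarray(adjusted) - np.asarray(raw)

    # Placeholder July means (deg F), not the actual Central Park values.
    years    = np.array([1955, 1965, 1975, 1985, 1995, 2005])
    raw      = np.array([77.2, 76.9, 77.4, 77.1, 77.3, 77.5])
    adjusted = np.array([71.0, 70.7, 71.2, 70.9, 75.6, 75.9])

    for y, d in zip(years, implied_adjustment(raw, adjusted)):
        print(f"{y}: implied adjustment {d:+.1f} F")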

JULY COMPARISON

[Figure: the two July data sets, raw versus USHCN-adjusted, plotted together]

Note that the adjustment was a significant one (a cooling exceeding 6 degrees from the mid-1950s to the mid-1990s). Then, inexplicably, the adjustment diminished to less than 2 degrees.

Thus what was a flat trend for the past 50 years became one with accelerated warming in the past 20 years. It is not clear what changes in the metropolitan area over the last 20 years would warrant such a major upward adjustment: the park has remained the same, and the city’s population did not decline but in fact grew in a spurt during the 1990s.

JANUARY COMPARISON

I repeated the analysis for January in Central Park using the same two data sources. A similar UHI adjustment pattern was seen.

The effect on the adjusted temperatures was the same: they show a recent warming not present in the raw (unadjusted) data.

If government officials had left the urban heat island (UHI) adjustment consistent after 1990, the following would have been the adjusted result.

Clearly no warming trend is evident in either the unadjusted or the uniformly UHI-adjusted plots for one of the world’s largest cities, whether in January or over the last half century or more in July.

Now, though the larger the city, the greater the urban heat island (UHI), most of the incremental warming from UHI occurs in cities whose populations grow rapidly, or where an initially rural observing site (an airport, say) has the city grow up around it. Central Park, however, sits at the center of a city that has been big for a long time. Though it is no doubt warmer in the city than in rural areas, significant incremental UHI-induced warming should not be expected there. Certainly no precipitous decline should be expected either.

Steve McIntyre at Climate Audit became interested in the data at this point. He was able to confirm my results after exchanging emails with NCDC. Then he went further.

“I’ve been able to emulate the [Tom] Karl adjustment. If one reverse engineers this adjustment to calculate the New York City population used in the USHCN urban adjustment, the results are, in Per’s words, gobsmacking, even by poor climate science standards.”

Here is the implied New York City population required to justify Karl’s “urban warming bias” adjustments.

In other words, for the HCN Version 1 data to be valid for Central Park, the population of the metro area would have had to decline to pre-1900 levels!
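The logic of McIntyre’s reverse-engineering is straightforward: if the urban adjustment is a known function of population, then the adjustment actually applied can be inverted to recover the population it implies. A toy sketch, assuming a hypothetical log-linear form; the coefficients are placeholders, not those of Karl’s (1988) regression:

    def implied_population(adjustment_f, a=0.0, b=-1.0):
        """Invert a hypothetical urban adjustment of the form
        adjustment = a + b * log10(population) to solve for population.
        The coefficients a and b are placeholders, not Karl's (1988) values."""
        return 10.0 ** ((adjustment_f - a) / b)

    # Under this toy model, shrinking the cooling adjustment from -6 F to -2 F
    # implies the city's population collapsed by four orders of magnitude.
    for adj in (-6.0, -2.0):
        print(f"adjustment {adj:+.1f} F -> implied population ~{implied_population(adj):,.0f}")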

Version 2 of the USHCN is about to be released. It uses a different approach with no adjustment for urbanization. We eagerly await the results.

THE NCDC GHCN VERSION 2 DATA SET

The story doesn’t end there. The same NCDC maintains a global database of station data used for climate change assessment, called GHCN. Version 2 contains some of the same adjustments, but not the Karl urban adjustment. Central Park is one of the GHCN sites.

I decided to compare Central Park in the latest GHCN data set (V2) with the HCN (V1) data set, each relative to the raw data.

The differences between the data sets are startlingly large for the July monthly mean through much of the record (11F!). The difference has diminished since 1990, as the HCN adjustments for urbanization inexplicably shrank even as New York City’s population grew.

Ironically, GHCN agrees with the raw data in January for Central Park, with only minor, seemingly random adjustments. But recall that the January adjustment in USHCN was as much as 6F until recent years, when the adjustment inexplicably diminished.

These kinds of huge variances in the “data” for a single location raise serious questions as to whether we can trust any surface-station-based data set to determine changes on the order of a tenth of a degree for climate change assessment and policy prescriptions.

THE FUTURE LOOKS BRIGHTER…BUT IT IS OUT THERE

NOAA has begun an effort to address this issue by establishing a national network of carefully sited and maintained stations in a project called NERON. As of earlier this year it had 114 stations, with a goal of over 1,000. At current budget levels the equipment will be placed at a rate of about 50 stations per year, so it will be a while (well over a decade) before the network has sufficient density, and a sufficient span of years, to even begin assessing change.
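A rough check on that timetable: (1,000 - 114) remaining stations at about 50 installations per year works out to (1000 - 114) / 50, or roughly 18 more years just to reach the target count, before the completed network can begin accumulating the multi-year record needed to assess trends.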

Jay Lawrimore of NCDC says this new network of weather stations, called the Climate Reference Network, is being built with climate in mind and is geared to avoid artificial factors that affect readings.

Money should be diverted to accelerate the implementation of this new network. Without it, we are at the mercy of imperfect raw data that is adjusted subjectively without oversight and independent auditing.

Amazingly, for policy purposes we are attempting to ascertain changes on the order of a tenth of a degree from what is clearly imperfect data, and using those assessments to craft policy decisions that may cost hundreds of billions or even trillions of dollars and affect the livelihood and life expectancy of every American in potentially significant ways. What is wrong with this picture?

Joseph D’Aleo: CCM, AMS Fellow

Joseph D’Aleo has over 35 years experience in professional meteorology. He was the first Director of Meteorology and co-founder of the cable TV Weather Channel. Mr. D’Aleo was Chief Meteorologist at Weather Services International Corporation and Senior Editor for WSI’s popular Intellicast.com web site. He is a former college professor of Meteorology at Lyndon State College. He is the author of a Resource Guide on El Nino and La Nina. Mr. D’Aleo has frequently written about and made presentations on how research into ENSO and other atmospheric and oceanic phenomena has made skillful seasonal forecasts possible as well as the roles cycles in the sun and oceans have played in climate change. He is currently Executive Director of the International Climate and Environmental Change Assessment Project.


[1] SPPI US Temperature Ranking

[2] http://www.surfacestations.org/

[3] See discussion at: SPPI: Fallacies about Global Warming

[4] http://www.climateaudit.org/?p=1954

[5] http://www.ncdc.noaa.gov/oa/climate/research/ushcn/ushcn.html