Positive Feedback: Have We Been Fooling Ourselves?

By Roy W. Spencer | August 15, 2007


There are three main points/opinions/issues I’d like to explore, which are all interrelated:

1. The traditional way in which feedbacks have been diagnosed from observational data has very likely misled us about the existence of positive feedbacks in the climate system.
2. Our new analyses of satellite observations of intraseasonal oscillations suggest negative cloud feedbacks, supporting Lindzen’s Infrared Iris hypothesis.
3. I am increasingly convinced that understanding precipitation systems is the key to understanding climate sensitivity.

Unfortunately, these three represent too much material to present today. Since the second set of results (on the Infrared Iris) was just published by us in GRL (August 9, 2007), it would seem the logical one for me to discuss before the others. But the first issue is, in some sense, much more important and fundamental, and will help us put the newly published results in a more meaningful context.

So, for now, I’m going to discuss just the first issue (potential biases in feedback diagnosis) and then maybe Roger will have me back to continue with the second and third issues.

What you are about to read is, I believe, more than a little alarming. And maybe someone here will even point out the obvious error in my analysis that will render my conclusions silly and meaningless. After all, that would save me the effort of writing and submitting our next journal article, wouldn’t it? So, let’s forge ahead with the first, feedback diagnosis issue.

The Feedback Concern

Feedbacks are at the heart of most disagreements over how serious man-induced global warming and climate change will be. To the climate community, a feedback is by definition a RESULT of surface temperature change. For instance, low cloud cover decreasing with surface warming would be a positive feedback on the temperature change by letting more shortwave solar radiation in.

But what never seems to be addressed is the question: What caused the temperature change in the first place? How do we know that the low cloud cover decreased as a response to the surface warming, rather than the other way around?

For a while, a few people had me convinced that this question doesn’t really matter. After all, cause and effect are all jumbled up in the climate system, so what’s the point of trying to separate them? Just build the climate models, and see if they behave the way we observe in nature, right? Well, that’s true – but I think I can demonstrate that the way we have been doing that comparison is seriously misleading.

Feedbacks from observational data have traditionally been diagnosed by plotting the co-variability between top-of-atmosphere radiation budget changes and surface temperature changes, after the data have been averaged to monthly, seasonal, or annual time scales. The justification for this averaging has always remained a little muddy, but from what I can gather, researchers think that it helps to approach a quasi-equilibrium state in the climate system.
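In shorthand (my generic notation here; sign conventions differ from paper to paper), the traditional diagnosis amounts to estimating the slope in

ΔR ≈ λ ΔTs

where ΔR is the anomaly in top-of-atmosphere radiative flux, ΔTs is the surface temperature anomaly, and the regression slope λ (in W m-2 K-1) is interpreted as the feedback parameter.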

The trouble with this approach, though, is that when we average observational data to seasonal or annual time scales in our attempts to diagnose feedbacks, it turns out that there are a variety of very different physical ways to get the very same statistical relationships. (Be patient with me here, I’ll demonstrate this below).

In particular, ANY non-feedback cloud variations that cause surface temperature to change will, necessarily, look like a positive feedback — even if no feedback exists. And the time averaging that everyone employs actually destroys all evidence that could have indicated to us that we were misinterpreting the data.

I am not the first one to discuss this issue, although the way I am expressing it might be different. Graham Stephens’ 2005 J. Climate review paper on cloud feedbacks, if you read it carefully, implied the same thing. Similarly, Aires and Rossow (2003 QJRMS) presented a new method of diagnosing feedbacks, arguing that feedback diagnostics must go to very short time scales to have any hope of providing meaningful validation for climate models.

But the issue has not been well articulated, and I fear that many climate scientists simply haven’t understood what these few investigators were trying to get across to us. For instance, Stephens spent a lot of time discussing how clouds are very dependent upon aspects of the atmospheric circulation, not just upon surface temperature, but it took me a while before I realized the practical importance of what he was saying.

Stephens was pointing out that our diagnosis of what has caused a certain relationship in observational data depends entirely upon how we view the climate “system”. In other words, it matters a lot what we think is causing what. Again, once you have averaged the data to seasonal or annual time scales, you have destroyed most of the information that would have allowed you to diagnose what kind of system you are looking at.

More recently, a 2006 J. Climate paper by Forster and Gregory presented equations to allow us to discuss individual terms in feedback analysis; theirs is the most thorough treatment I’m aware of in this regard. But they made a critical assumption – a claim – that sounded good at first but, upon a little reflection, cannot be supported. In fact, it was a single sentence that ends up totally changing the analysis of feedbacks.

Forster and Gregory included a term to represent internal variability – appropriately called an “X” term – but they claimed that, to the extent that any internal variability was uncorrelated to surface temperature change, it would not corrupt the regression slope when plotting radiation changes versus temperature changes. In other words, we’d still diagnose a good feedback number, even in the presence of internal variability.

Well, while that statement is literally true, the assumption that any internally-caused fluctuations in the radiation budget would be uncorrelated with surface temperature is not true. It is the radiation changes that CAUSE temperature change – the two cannot be uncorrelated!
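To make this a little more concrete (again in my shorthand, not their exact notation), write the measured radiative anomaly as the sum of a true feedback response and internally generated radiative variability:

ΔR = λ ΔTs + X

The regression slope that gets interpreted as the feedback parameter is then

cov(ΔR, ΔTs) / var(ΔTs) = λ + cov(X, ΔTs) / var(ΔTs)

which recovers the true λ only if X and ΔTs are uncorrelated. But if X itself forces ΔTs, that covariance cannot be zero, and the diagnosed slope is biased away from the true feedback.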

So far, what I have presented is admittedly hand waving, and all of the above-mentioned investigators also addressed the problem in a hand-waving fashion. So, what to do? How do we quantitatively demonstrate something in simple terms that is also physically realistic?

I know! Let’s build a model!

A Simple Model Demonstration

So, Danny Braswell and I built a simple energy balance model based upon the global-average vertical energy flux diagram that is famously attributed to Trenberth. But our model has some enhancements. It has three time-dependent temperature equations, for (1) the ocean surface, (2) a lower atmospheric temperature that radiates downward, and (3) an upper atmospheric temperature that radiates out to space. We gave it a swamp ocean with ten times the heat capacity of the atmosphere (about 190 m deep). We found that the model equilibrates to a new energy balance state in about 5 years after an imbalance in any of the terms is imposed.
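For readers who want to experiment, here is a minimal sketch of this kind of model in Python. The structure (three prognostic temperatures, a swamp ocean with ten times the atmospheric heat capacity, daily time steps) follows the description above, but the radiative scheme is a generic gray two-layer formulation of my own choosing, and every numerical value is an illustrative placeholder rather than the tuning in our actual model:

```python
import numpy as np

# Sketch of a three-temperature global energy balance model:
# ocean surface, lower atmosphere, upper atmosphere.
SIGMA = 5.67e-8        # Stefan-Boltzmann constant (W m-2 K-4)
S_ABS = 239.4          # absorbed solar: (1 - 0.3) * 342 W m-2
EPS   = 0.48           # gray-layer emissivity, tuned so Ts ~ 288 K
C_ATM = 1.0e7          # heat capacity of each atmospheric layer (J m-2 K-1)
C_OCN = 10.0 * C_ATM   # "swamp" ocean: 10x the atmospheric heat capacity
DT    = 86400.0        # one-day time step (s)

def step(Ts, Ta, Tu, d_solar=0.0, d_nonrad=0.0):
    """Advance (Ts, Ta, Tu) one day. d_solar perturbs absorbed SW
    (e.g. random cloud forcing); d_nonrad perturbs the surface
    non-radiatively (e.g. random evaporation changes)."""
    Es, Ea, Eu = SIGMA * Ts**4, SIGMA * Ta**4, SIGMA * Tu**4
    # Surface: solar + downwelling LW from both layers - emission
    dTs = (S_ABS + d_solar + d_nonrad
           + EPS * Ea + (1 - EPS) * EPS * Eu - Es) / C_OCN
    # Lower layer: absorbs surface and upper-layer emission, emits up and down
    dTa = (EPS * Es + EPS**2 * Eu - 2 * EPS * Ea) / C_ATM
    # Upper layer: absorbs transmitted surface emission and lower-layer emission
    dTu = (EPS * (1 - EPS) * Es + EPS**2 * Ea - 2 * EPS * Eu) / C_ATM
    return Ts + DT * dTs, Ta + DT * dTa, Tu + DT * dTu

# Spin up to equilibrium; with these placeholders Ts settles near 288 K
Ts, Ta, Tu = 288.0, 253.0, 230.0
for _ in range(10 * 365):
    Ts, Ta, Tu = step(Ts, Ta, Tu)
print(f"equilibrium Ts = {Ts:.1f} K")
```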

In order to demonstrate elements of the problem, we need up to three sources of temperature variability. We chose the following: (1) daily random non-cloud SST forcing (e.g. from evaporation), (2) daily random cloud forcing, and (3) cloud feedbacks on any surface temperature changes.

With these three sources of variability, we discovered we could get a wide variety of model behaviors, so I decided that we had to constrain our simulations to physically realistic ranges.

To do this, I computed from 6 years of Terra CERES tropical radiation budget data that the standard deviation of 30 day anomalies in tropical oceanic reflected shortwave (SW) was about 1.3 W m-2. So, we made model runs where the SW variability (from all cloud variations, no matter the source) produced similar 30-day statistics.
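As a sketch of that calibration step (continuing the Python example above; the daily white-noise amplitude of about 7 W m-2 is a placeholder found by trial so that the 30-day means come out near 1.3 W m-2):

```python
rng = np.random.default_rng(0)
years, ndays = 30, 30 * 365

# Daily random cloud forcing of absorbed SW, with the amplitude tuned
# so that 30-day mean anomalies have a standard deviation near 1.3 W m-2
sw_daily = 7.1 * rng.standard_normal(ndays)
sw_30day = sw_daily.reshape(-1, 30).mean(axis=1)
print(f"30-day SW anomaly s.d. = {sw_30day.std():.2f} W m-2")  # ~1.3
```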

The following is a 30 year plot from one run, forced only with daily random cloud variations, and no cloud feedback. Note that yearly, and even decadal, variability in the surface temperature occurs in a random walk fashion, but one that is constrained to meander around the equilibrium SST value of 288 K (the value which is consistent with Trenberth’s energy balance numbers).

Now, when we plot this model run’s output of SW variability versus surface temperature variability (365 day averages), we get a diagnosed “feedback” parameter of -1.4 W m-2 K-1. This is very close to the average of what the IPCC AR4 models produce for their SW cloud feedback — even though we haven’t yet imposed a feedback in the model!
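In sketch form (continuing the Python example; our actual model run differs in detail, and the sign of the diagnosed slope depends on whether one regresses reflected or absorbed SW):

```python
# Force the sketch model with daily random cloud variations only
# (no cloud feedback anywhere in the model), then regress 365-day
# averages of the SW anomaly on the surface temperature anomaly.
Ts, Ta, Tu = 288.1, 253.0, 230.0
Ts_hist = np.empty(ndays)
for d in range(ndays):
    Ts, Ta, Tu = step(Ts, Ta, Tu, d_solar=sw_daily[d])
    Ts_hist[d] = Ts

Ts_yr = Ts_hist.reshape(years, 365).mean(axis=1)
sw_yr = sw_daily.reshape(years, 365).mean(axis=1)

slope = np.polyfit(Ts_yr - Ts_yr.mean(), sw_yr, 1)[0]
r2 = np.corrcoef(Ts_yr, sw_yr)[0, 1] ** 2
print(f"diagnosed 'feedback' slope = {slope:+.2f} W m-2 K-1, r^2 = {r2:.2f}")
# A nonzero slope appears even though the model contains no feedback:
# warm years are simply the years that happened to receive more SW.
```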

Furthermore, note that the explained variance is relatively low. This is just like what has been reported for “feedbacks” diagnosed from observational data (Forster and Gregory, 2006 J. Climate). In contrast, when the source of the SW variability in the model is specified to be through cloud feedback, the explained variance is always very high.

In other words, it appears that low explained variance is evidence of non-feedback cloud forcing, as opposed to cloud feedback.

Finally, we also find that there is NO WAY to get anywhere near a 30 day s.d. of 1.3 W m-2 in SW variability out of the model with only cloud feedback. You must invoke non-feedback sources of cloud variability.
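A back-of-the-envelope calculation shows why (with made-up but generous numbers): if 30-day surface temperature anomalies have a standard deviation of, say, 0.2 K, then even a strong cloud feedback of 3 W m-2 K-1 can generate SW variability of only about 0.6 W m-2, less than half the 1.3 W m-2 seen in CERES. Feedback-driven SW variability is limited by how much the surface temperature itself can vary on those time scales.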

In other words, the large amount of variability in the CERES SW data argues for a non-feedback cloud source of SST variability.

After running many different combinations of model forcings and feedbacks, we concluded the following: To the extent that non-feedback cloud sources of SST variations occur, they ALWAYS lead to positive bias in diagnosed “feedback”. The bias is especially strong if the real cloud feedback is negative, and can easily obscure a negative cloud feedback with a diagnosed “false positive”. Note that the reason the bias is always in the direction of positive feedback is because the alternative is energetically impossible (you can’t force an SST increase by reducing SW input into the ocean).

This is indeed the general behavior I expected to find, but I needed a simple model demonstration to convince myself.

Pinatubo: A Negative Feedback “Unmasked”?

Now, what we really need in the climate system is some big, non-cloud source of radiative forcing, where the cloud feedback signal is not so contaminated by the obscuring effect of cloud forcing. The only good example we have of this during the satellite era is the cooling after the 1991 eruption of Mt. Pinatubo.

And guess what? The SW cloud feedback calculation from the Pinatubo-caused variability in Forster and Gregory was – surprise, surprise! – anomalously negative, rather than positive like all of their other examples of feedback diagnosed from interannual variability!

Conclusion

I think it is time to provoke some serious discussion and reconsideration regarding what we think we know about feedbacks in the real climate system, and therefore about climate sensitivity. While I’ve used the example of low cloud SW feedback, the potential problem exists with any kind of feedback.

For instance, everyone believes that water vapor feedback is positive, and conceptually justifies this by saying that a warmer surface causes more water to evaporate. But evaporation is only half the story in explaining the equilibrium concentration of atmospheric water vapor; precipitation is the other half. What if a decrease in precipitation efficiency is, instead, the cause of the surface warming, by not removing as much water vapor from the atmosphere? Then it would be the water vapor increase driving the surface temperature change, and this would bias the diagnosed water vapor feedback in the positive direction, whatever the (unknown) true feedback happens to be.

Of course, researchers still have no clue about what controls precipitation efficiency, although our new GRL paper suggests that, at least in the case of tropical intraseasonal oscillations, it increases with tropospheric warming.

What I fear is that we have been fooling ourselves with what we thought was positive cloud feedback in observational data, when in fact what we have been seeing was mostly non-feedback cloud “forcing” of surface temperature. In order to have any hope of ferreting out feedback signals, we must stop averaging observational data to long time scales, and instead examine short time-scale behavior. This is why our GRL paper addressed daily variability.

Will this guarantee that we will be able to observationally estimate feedbacks? No. It all depends upon how strong they are relative to other non-feedback forcings.

It seems like this whole issue should already have been explored by someone I’m not aware of, and maybe someone here can point me in that direction. But I think that a simple model demonstration, like the one I’ve briefly presented, is the only way to convincingly demonstrate, in a quantitative fashion, how much of a problem this issue might be for the observational determination of climate sensitivity.

Roy W. Spencer is a principal research scientist at the University of Alabama in Huntsville, where he directs a variety of climate research studies using satellite data. Dr. Spencer is also the U.S. Science Team leader for the Advanced Microwave Scanning Radiometer (AMSR-E) flying on NASA’s Aqua satellite.