If someone with a hand-held stopwatch tells you that a runner cut his time by 0.00005 seconds, you should be skeptical. If someone with a climate model tells you that a 0.036 Wm-2 CO2 signal can be detected within an environment of 150 Wm-2 error, you should be just as skeptical.
This is from David R. Henderson and Charles L. Hooper, “Flawed Climate Models,” Defining Ideas, April 4, 2017.
The whole thing, which is not long, is worth reading.
READER COMMENTS
John Hall
Apr 6 2017 at 10:39am
“Defining Ideas, April 4, 2107. ”
Typo: should be 2017
Paul T
Apr 6 2017 at 11:56am
0.8 ± 1 is a confidence interval, which means it is normally distributed around the estimate, with the ± giving the edges of the 95% confidence interval. That means (with Frank’s own numbers) there is less than 95% certainty that the increase is positive, but still a high probability.
If it’s 80% (not the precise integral) likely that there is an increase, 50% likely that there is at least a 0.9 deg increase, and as likely that there is actually at least a 2 deg increase as no increase, is it not prudent to hedge until the error bars come down?
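A minimal numerical sketch of those probabilities, assuming the 0.8 ± 1 figure describes a normal distribution; whether the ± term is one standard deviation or a 95% half-width is left open here, since the two readings give different numbers:

```python
# A rough sketch, not Frank's or Paul T's calculation: probabilities implied by a
# 0.8 +- 1 deg C estimate under a normal-distribution assumption. Two readings of
# the "+-1" are shown, since they give different numbers.
from scipy.stats import norm

mean = 0.8  # central estimate of the temperature increase (deg C)

for label, sd in [("+-1 read as one standard deviation", 1.0),
                  ("+-1 read as a 95% half-width", 1.0 / 1.96)]:
    p_positive = norm.sf(0.0, loc=mean, scale=sd)  # P(increase > 0)
    p_ge_0p9 = norm.sf(0.9, loc=mean, scale=sd)    # P(increase >= 0.9 deg C)
    p_ge_2 = norm.sf(2.0, loc=mean, scale=sd)      # P(increase >= 2 deg C)
    print(f"{label}: P>0 = {p_positive:.2f}, "
          f"P>=0.9 = {p_ge_0p9:.2f}, P>=2 = {p_ge_2:.2f}")
```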
Also, it doesn’t really mean much that Frank has published a paper claiming the temperature record is much less reliable than previously thought; you can’t just cherry-pick a single paper and use that to invalidate the results from an entire field. You need to do a full literature review and critique the methodology of the papers on both sides of the argument. That said, it’s worth investigating further, as, if correct, it is a significant result.
pyroseed13
Apr 6 2017 at 12:07pm
I have to agree with Paul T above me here. While some of the concerns raised in this column are valid, I didn’t really get the sense you are presenting a complete picture of the literature. Some of the papers you cited are dated (not to imply that they are “wrong,” but surely more papers have been published since then which address similar issues), and you seem to be relying on the research of some controversial figures (Lindzen, Soon).
Robert Simmons
Apr 6 2017 at 12:45pm
Is using the warmest since 1998 cherry-picking? I honestly don’t know, but what would the results look like if you moved back or forward one or two years? Admittedly, fully reliable models wouldn’t be much affected by one outlier year, but we are dealing with (hopefully) pretty good models, not perfect ones.
David R. Henderson
Apr 6 2017 at 1:12pm
@John Hall,
Correction made. Thanks.
@Paul T,
If it’s 80% (not the precise integral) likely that there is an increase, 50% likely that there is at least a 0.9deg increase, and as likely that there is actually at least a 2deg increase as no increase, is it not prudent to hedge until the error bars come down?
Depends on what’s causing it. That’s why I highlighted the section I did rather than the point about confidence intervals.
Robert Simmons
Apr 6 2017 at 1:15pm
Oops, meant warming, not warmest.
LK Beland
Apr 6 2017 at 1:29pm
Michaels, Lindzen and Knappenberger stop their comparison in 2014. Indeed, global surface temperatures were at the low end of the model predictions at that time.
More recently, however, global temperatures have been near the median model prediction.
https://usercontent1.hubstatic.com/13153328_f520.jpg
Likewise, sea-level rise has been well within model predictions.
http://www.climatechange2013.org/images/figures/WGI_AR5_Fig1-10.jpg
Arctic ice extent is on target as well:
http://onlinelibrary.wiley.com/store/10.1029/2012GL052676/asset/image_n/grl29477-fig-0002.png?v=1&t=j16o0v4w&s=e07a6912c9d96d0954afdaa5993593d150c7b320
So is ocean heat content:
http://159.226.119.58/aosl/article/2015/1674-2834-8-6-333/thumbnail/img_4.png
I’ll try to limit the snark here. But I would recommend talking to actual climate scientists at Stanford and ask for clarification, rather than solely contacting synchrotron scientists…
Andrew_FL
Apr 6 2017 at 4:54pm
@LK Beland-ENSO noise bringing a single year up toward the ensemble mean level does not mean that the long-term trend is not below the ensemble mean trend. Your graphic is not comparable to Michaels et al’s and therefore tells us nothing about what adding two additional years of data would tell us about how well the models are doing.
“Actual climate scientists” seem to have difficulty understanding basic statistical concepts like the difference between levels and trends.
Thaomas
Apr 6 2017 at 5:25pm
It seems to me that the dead-weight loss of a carbon tax is small enough to make a policy of preventing as much build-up of CO2 as possible very cost-effective, even at a low probability of large losses from the accumulation. There is very little chance of my house burning this year, but I do not mind paying the insurance premium.
Thomas Sewell
Apr 6 2017 at 9:22pm
@Thaomas,
The economic distortion effects and the wastage from government processing dwarf the dead-weight costs of a carbon tax.
And that’s if you managed to pass a “perfect” carbon tax, as opposed to what you’d really get if dealing with Congress deciding what to do with the collected taxes.
First you should establish with evidence (and the climate models aren’t evidence) that a projected slight increase in world temperature would even be a net negative (plants grow better, cold weather causes more deaths than heat, etc.), THEN maybe we can start talking about the most cost-effective method for mankind to change the climate to suit us better.
This movement has always struck me as more of a solution in search of a problem than the other way around.
LK Beland
Apr 6 2017 at 9:26pm
Andrew_FL
“Your graphic is not comparable to Michaels et al’s”
Actually, it is quite comparable. Michaels et al’s histogram is equivalent to a vertical slice of this graph (that is, a vertical slice at year 2014):
https://usercontent1.hubstatic.com/13153328_f520.jpg
“mean level does not mean that the long term trend is not below the ensemble mean trend”
Take a look at the graph again. The trend is pretty much spot on.
Pat Frank
Apr 6 2017 at 11:22pm
Paul T, the interval is an empirical standard deviation. It is non-normal and represents systematic sensor measurement error, arising mostly from solar irradiance and wind speed effects. This problem is discussed in some detail in my first paper here (870 kb pdf).
This systematic measurement error shows up in temperature sensor calibration experiments, including those for SST sensors. They are roundly ignored by the consensus community.
In fact, only since my papers came out has John Kennedy at the UK Met Office mentioned systematic error in his papers (we’ve exchanged emails, so I know he’s aware of my work). Then he expresses faith in the Central Limit Theorem, assumes all the systematic error averages away as though it were iid, and moves on.
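A minimal simulation of that distinction, with purely illustrative error magnitudes (not values from Frank’s papers): independent errors shrink as more sensors are averaged, while a shared systematic offset does not.

```python
# Illustrative only: iid noise averages away roughly as 1/sqrt(n); a shared
# systematic offset does not. The magnitudes below are assumptions for the demo.
import numpy as np

rng = np.random.default_rng(0)
true_temp = 15.0        # hypothetical true temperature (deg C)
iid_sd = 0.5            # assumed independent (random) sensor noise (deg C)
systematic_bias = 0.3   # assumed offset shared by every sensor (deg C)

for n_sensors in (10, 100, 10_000):
    readings = (true_temp
                + systematic_bias                      # same bias in every reading
                + rng.normal(0.0, iid_sd, n_sensors))  # independent noise per sensor
    error_of_mean = readings.mean() - true_temp
    print(f"n = {n_sensors:>6}: error of the average = {error_of_mean:+.3f} deg C")
```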
Not only do they ignore systematic error, the community also ignores sensor resolution. They construct their global temperature averages as though the instruments had infinite resolution, never mind about measurement error.
I’ve published an overview paper critiquing the entire field, from models through measurements. If anyone would like a reprint, email me at pfrank_8_3_zero_AT_earthlink_d_net.
Take a close look at their papers — the entire community — UKMET, NASA GISS, Berkeley BEST — they all publish global temperature constructions with improbably small uncertainty bars. Their papers rarely mention systematic sensor measurement error and do not mention instrumental resolution at all.
Reading their papers with the critical eye of an experimentalist, one is left wondering whether any of those people ever made a measurement or struggled with an instrument.
LK Beland
Apr 7 2017 at 9:12am
Dr Frank,
I wonder if you had any comments about this take on your critique:
https://www.youtube.com/watch?v=rmTuPumcYkI
Capt. J Parker
Apr 7 2017 at 1:43pm
The 0.036 Wm-2 CO2 signal seems to be an unusually tiny number for a climate forcing caused by CO2.
Here’s Judith Curry in a blog post called: CO2 no-feedback sensitivity: “The IPCC TAR adopted the value of 3.7 W/m2 for the direct CO2 forcing, and I could not find an updated value from the AR4. This forcing translates into 1C of surface temperature change. These numbers do not seem to be disputed, even by most skeptics. Well, perhaps they should be disputed.” She never states what she believes the correct number to be but the issues involved are differences of maybe a factor of 2 and not two orders of magnitude.
The 2011 Frank paper is gated, so I’m not able to see where the 0.036 W/m2 figure comes from or what it actually describes. 3.7 W/m2 is still a very small signal to pick out of 150 W/m2 of noise, so why not reference the more generally accepted figure? The conclusions would not change.
Pat Frank
Apr 7 2017 at 3:17pm
Climate model projections are not predictions in the scientific sense, e.g., in the sense that calculations from Maxwell’s EM theory are predictions of the behavior of electromagnetic radiation.
The reason is that models are tuned to yield reasonable-seeming projections.
Tuning means model parameters are adjusted to yield target observables, before any projection is carried out. To achieve correspondence, parameters end up with off-setting errors.
It is never known, therefore, whether the underlying physics, and the physical relationships, expressed in the model are correct.
Climate models are not capable of unique solutions. Unique solutions from theory are the sine qua non of falsifiability. They constitute the test of physical reliability. They establish (or not) causality.
Parameter uncertainties are never propagated through model projections. We are never given any indication of the physical reliability of, e.g., a temperature projection, except by crude visual comparisons or non-physical tests of statistical correspondence.
Their inability to produce unique solutions to the problem of the climate energy-state means that climate models are incapable of establishing any causal relation between CO2 emissions and the terrestrial climate.
Andrew_FL
Apr 7 2017 at 5:25pm
@LK Beland-
No, it isn’t, but the fact that you think it is shows either that you don’t know the difference between levels and trends (a common problem among “climate scientists”) or that you don’t understand what Michaels et al’s histogram actually is of. It is not a histogram of levels but of trends; a single slice at a particular year would be a histogram of levels.
I’d be quite impressed that you are apparently able to do least squares regression by eye, if I actually believed you could.
Either come back with a chart that is actually comparable to Michaels et al’s histogram of trends, or you’ve got no actual evidence that two additional years of data would completely overturn their analysis. This is what we in the world of actual science call replication.
Pat Frank
Apr 8 2017 at 2:39am
LK Beland, Yes, Patrick Brown and I had a long conversation about his analysis, but on his own site where he originally posted his video response; not at Youtube.
You can see that discussion here.
After a detailed evaluation of his analysis, which is laid out on his site, it’s my very considered view that it has no critical impact.
Pat Frank
Apr 8 2017 at 2:50am
Capt. J Parker, I believe the 2011 paper is open access, downloadable here (1 mb pdf).
However, that paper discusses the air temperature record, not climate models or CO2 emissions.
The 0.036 W/m^2 is the average annual (per year) change in forcing since 1979, due to CO2 emissions.
The 3.7 W/m^2 you mention is the change in forcing due to a doubling of CO2.
Mark Bahner
Apr 8 2017 at 10:40am
Yes, indisputably.
Capt. J Parker
Apr 9 2017 at 10:30pm
@ Pat Frank, many thanks.
Mark Bahner
Apr 10 2017 at 12:38pm
From the article (published April 4, 2017):
As I commented previously, using 1998 as a starting year is the worst kind of cherry-picking. But even using 1998 as the starting year, if one uses 2016 as the final year and the NASA GISTEMP global average combined land and sea-surface temperature from January to December:
1998: 0.63 degrees Celsius
2016: 0.98 degrees Celsius
Delta = 0.35 degrees Celsius
Time elapsed = 18 years
Temperature change per decade = 0.35 degrees Celsius divided by 1.8 decades = 0.19 degrees Celsius per decade.
Which is…basically what the models predicted.
And if one uses, say, 1999 to 2016, one gets:
1999: 0.41 degrees Celsius
2016: 0.98 degrees Celsius
Delta = 0.57 degrees Celsius
Time elapsed = 17 years
Temperature change per decade = 0.57 degrees Celsius divided by 1.7 decades = 0.34 degrees Celsius per decade. So now the models are significantly under-predicting the temperature change.
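A small sketch of the same endpoint arithmetic, using the anomaly values quoted above (degrees Celsius):

```python
# Endpoint-to-endpoint warming rate; values are the GISTEMP anomalies quoted
# in this comment, in degrees Celsius.
def decadal_trend(start_year, start_anom, end_year, end_anom):
    """Temperature change per decade between two endpoint years."""
    return (end_anom - start_anom) / ((end_year - start_year) / 10.0)

print(round(decadal_trend(1998, 0.63, 2016, 0.98), 2))  # ~0.19 deg C per decade
print(round(decadal_trend(1999, 0.41, 2016, 0.98), 2))  # ~0.34 deg C per decade
```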
In following the climate debate for about 20 years, one thing seems clear to me as someone who makes a living doing environmental analyses…neither side of the climate debate uses much science.
Pat Frank
Apr 10 2017 at 8:38pm
Mark Bahner, the burden of the Henderson/Hooper argument is that climate models do not, and cannot, make predictions.
Climate model air temperature projections are not predictions in the scientific sense. They do not indicate causality. They are not unique solutions to the problem of the climate energy-state.
They are elaborations of the assumptions about CO2 forcing built into the models.
It does not matter if air temperature exactly follows the model outputs. Climate models cannot eliminate the possibility that the observed changes in air temperature are entirely natural.
Thus far, the shifts in air temperature since 1900 (or 1850) are completely indistinguishable from natural variation.
+++++++++++++
Capt. J. Parker, you’re welcome. 🙂
Mark Bahner
Apr 10 2017 at 10:41pm
OK, so make some predictions that are predictions in the scientific sense.
What will the globally averaged lower tropospheric temperature anomaly relative to the 5 years centered around 2000 be in the five years centered around 2020, 2040, 2060, 2080, and 2100?
Capt. J Parker
Apr 10 2017 at 11:01pm
From the Henderson and Hooper Defining Ideas article
Henderson and Hooper have misquoted or misunderstood Frank. The value 0.036 W/m^2 is not the “annual anthropogenic greenhouse gas contribution.” It is the annual change in the anthropogenic greenhouse gas contribution since 1979. It’s not valid to compare that small rate of change to the magnitude of solar radiative forcing. Thirty-eight years (1979 to 2017) of the greenhouse gas contribution increasing at a rate of 0.036 W/m^2 per year gets us to roughly 1.4 W/m^2, and IPCC AR4 gives a value of 1.6 W/m^2 for the anthropogenic greenhouse gas contribution. That is still a small number compared to the 150 W/m^2 total climate model uncertainty, though not dramatically smaller than the 4 W/m^2 forcing uncertainty due to clouds. I think the overall conclusion of Henderson and Hooper remains intact even if they used the more appropriate value for the anthropogenic greenhouse gas contribution, so they really ought to do so, or risk being accused of being misinformed about a basic parameter in climate models.
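A rough arithmetic check of that accumulation, assuming the forcing change has been a constant 0.036 W/m^2 per year since 1979 (a simplification; the actual year-to-year increments vary):

```python
# Sketch only: cumulative greenhouse-gas forcing from a constant annual
# increment, compared against the IPCC AR4 figure quoted in this comment.
annual_increment = 0.036   # W/m^2 per year (the figure quoted in the thread)
years = 2017 - 1979        # ~38 years
cumulative = annual_increment * years
print(f"{cumulative:.2f} W/m^2")  # ~1.37 W/m^2, close to the AR4 value of ~1.6 W/m^2
```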
Pat Frank
Apr 12 2017 at 2:11pm
Mark Bahner, “OK, so make some predictions that are predictions in the scientific sense.”
No one can make such predictions. That’s the point; that’s the message of the error analysis.
Climate models are non-predictive.
There exists no predictive physical theory of climate. Ignorance still reigns in the field.
Pat Frank
Apr 12 2017 at 2:25pm
Capt. J. Parker, the ±4 W/m^2 is an average annual error in cloud forcing. It’s not appropriate to compare a single-year error with a multi-year estimate of CO2 forcing.
The ~0.036 W/m^2 annual average change in CO2 forcing has to be resolved against a lower limit annual error of ±4 W/m^2. That lower limit of error, alone, dwarfs the perturbation.
It’s also fair to compare that 0.036 W/m^2 with the total radiant energy in order to understand what effect is being teased out against what background of energy flux.
The models are being used to resolve the effect of a tiny change in energy in the midst of a huge flow. One needs an extremely accurate and complete physical theory in order to accomplish that feat.
Climate science does not attain that level of completion or accuracy of theory. Nowhere close.
Mark Bahner
Apr 13 2017 at 12:47pm
I wrote:
Pat Frank responds:
More than 120 years ago, in 1896, Svante Arrhenius wrote the paper, “On the Influence of Carbonic Acid in the Air upon the Temperature of the Ground”.
Isn’t that a predictive physical theory of the effect of atmospheric carbon dioxide on global average surface temperature?
Capt. J Parker
Apr 13 2017 at 1:14pm
@ Pat Frank,
My fundamental complaint against the Henderson Hooper paper is that their statement:
is not correct. The current extra energy flux from anthropogenic CO2 is not 0.036 W/m^2; it is 1.6 W/m^2.
You said:
This statement would only be true if the effect being teased out were an anthropogenic change in global temperature from one year to the next. But no one is claiming to be able to tease out such an effect. The claim is that the climate models can tease out a change in global temperature due to anthropogenic CO2 that takes place over decades, not from one year to the next.
More generally, the comparison of the rate of change of the anthropogenic forcing of 0.036 W/m^2/year to the background solar forcing of 342 W/m^2 is problematic because the two values have different units. If such a comparison of those two magnitudes had validity, you could just as easily have picked 0.000098 W/m^2/day or 0.0000041 W/m^2/hour to compare to 342 W/m^2.
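A quick unit check of those per-day and per-hour figures (a sketch only; it simply re-expresses the same rate in different time units):

```python
# The same 0.036 W/m^2-per-year rate expressed per day and per hour; its
# numerical size depends entirely on the chosen time unit.
rate_per_year = 0.036                         # W/m^2 per year
rate_per_day = rate_per_year / 365.0          # roughly 1e-4 W/m^2 per day
rate_per_hour = rate_per_year / (365.0 * 24)  # roughly 4.1e-6 W/m^2 per hour
print(f"{rate_per_day:.2e}  {rate_per_hour:.2e}")
```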
Pat Frank
Apr 16 2017 at 8:40pm
Mark Bahner, “Isn’t that a predictive physical theory of the effect of atmospheric carbon dioxide on global average surface temperature?”
Nope.
Explanation: did he include, for example, that the climate is convection driven? Was that taken properly into account in deriving air temperatures?
And that’s just one element. One really does need a pretty complete theory of climate to predict the effect of such a small perturbation as CO2 forcing.
Capt. J Parker, in order to tease out the effect of CO2 forcing over decades, you’ve got to add up the effect of the forcing change per unit of intervening time. There’s no getting around that.
One cannot just jump the climate across 100 years, add in the 1.6 W/m^2 CO2 forcing and claim to know the outcome.
Agreed about the units. Nevertheless the point remains that CO2 forcing is tiny against the background irradiance.
Climate model time steps are of order 30 minutes. When projecting the effects of CO2, there are software calls for time-wise parameter updates. The changes in CO2 forcing are additive across the year, which means they must be resolved per time step against the constant irradiance background of 342 W/m^2.
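A back-of-the-envelope sketch of that per-time-step scale, using the figures quoted in this thread; spreading the annual increment evenly over 30-minute steps is an illustrative assumption, not a description of any particular model's code:

```python
# Sketch only: the annual CO2 forcing increment apportioned evenly over
# 30-minute model time steps, compared with the average solar background.
annual_increment = 0.036        # W/m^2 added to the forcing over a year
background = 342.0              # W/m^2, global-average solar irradiance
steps_per_year = 365 * 24 * 2   # 30-minute time steps in a year
per_step = annual_increment / steps_per_year
print(f"per-step increment: {per_step:.1e} W/m^2")             # ~2e-6 W/m^2
print(f"fraction of background: {per_step / background:.1e}")  # ~6e-9
```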
Given climate non-linearity, the variation in local irradiance with cloud cover, and its evolution in time, the 342 W/m^2 is an artificially constant global average.
The Hooper/Henderson comparison should be seen in this light. It’s a qualitative heuristic for how to evaluate the claims of extreme resolution for climate models against the reality of the tiny perturbation within a huge cascade of solar energy flux.
Pat Frank
Apr 19 2017 at 10:08pm
Mark Bahner, just to follow up a bit, Arrhenius does get full credit for making the first real calculation (prediction) about the radiation physics of atmospheric CO2.
There isn’t any doubt that CO2 converts radiant energy (15 micron IR from the surface) into kinetic energy, by collision with atmospheric nitrogen and oxygen molecules.
The question about its effect is: how does the climate respond to this energy? The standard assumption is that this energy just shows up as heat. But it could show up as increased convection, and/or as increased evaporation of water and, later, increased condensation into clouds.
No one knows the answer to this. The physical theory of climate is not advanced enough to answer the question of which response channel is dominant.
It’s been known at least since the late 1950’s that a small increase in tropical cloudiness and precipitation can entirely compensate for the increased kinetic energy put into the atmosphere by CO2 emissions.
But such processes cannot be modeled. The physical theory is totally inadequate.