Patrick Frank is a scientist at the Stanford Synchrotron Radiation Lightsource (SSRL), part of the SLAC National Accelerator Laboratory (formerly the Stanford Linear Accelerator Center) at Stanford University. The SSRL produces extremely bright X-rays that let researchers study our world at the atomic and molecular level.
In a bit of a shift, Frank has shone a bright light on general circulation models (GCMs)–models used to predict long-term changes in climate–and illuminated some fatal flaws. His bottom line is that these models, as they stand today, are useless for helping us understand the relationship between greenhouse gas emissions and global temperatures. This means that all the predictions of dramatic impending warming and ancillary calls for strong government action are based on conjecture.
These are the opening two paragraphs in Charles L. Hooper and David R. Henderson, “A Fatal Flaw with Climate Models,” Regulation, Winter 2016-2017.
A key paragraph:
The IPCC has looked at a number of different cases and it reports that temperatures could be, in the worst case, up to 4°C higher by 2100. However, based on Frank’s work, when considering the errors in clouds and CO2 levels only, the error bars around that prediction are ±15°C. This does not mean–thankfully–that it could be 19° warmer in 2100. Rather, it means the models are looking for a signal of a few degrees when they can’t differentiate within 15° in either direction; their internal errors and uncertainties are too large. This means that the models are unable to validate even the existence of a CO2 fingerprint because of their poor resolution, just as you wouldn’t claim to see DNA with a household magnifying glass.
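The scale mismatch in that paragraph comes from how a per-step uncertainty compounds over a long projection: if each simulated year contributes an independent error, the uncertainties add in quadrature, so the envelope grows with the square root of the number of steps. Here is a minimal numeric sketch; the per-year figure is purely hypothetical, not Frank’s published value:

```python
import math

def propagated_uncertainty(per_step_sigma, n_steps):
    """Root-sum-square growth of an independent per-step uncertainty.

    After n steps, sigma_total = per_step_sigma * sqrt(n), because
    independent errors combine in quadrature.
    """
    return per_step_sigma * math.sqrt(n_steps)

# Hypothetical per-year temperature uncertainty (illustrative only).
sigma_per_year = 1.6          # degrees C, an assumed value
years = 2100 - 2015           # projection horizon

envelope = propagated_uncertainty(sigma_per_year, years)
signal = 4.0                  # worst-case projected warming quoted above, degrees C

print(f"uncertainty envelope after {years} years: +/-{envelope:.1f} C")
print("signal resolvable?" , signal > envelope)
```

Even a modest per-step uncertainty swamps a few-degree signal over an 85-year horizon, which is the shape of the argument in the quoted paragraph.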
Charley discovered Professor Frank online. See the Readings at the end of the piece for the items we read and watched. So we took him for coffee up at Stanford in October. It was a scintillating conversation, not only about the science and his struggles with getting editors of climate science journals to understand confidence intervals, but also about his own personal immigration story. I told him that this was actually the most exciting intellectual interaction I had had that year. He answered that I need to get out more. Both statements are true.
READER COMMENTS
John Hall
Dec 16 2016 at 12:18pm
On the subject of the statistical knowledge of climate scientists, you should read the Hockey Stick Illusion by AW Montford. He goes into a lot of detail on the statistical errors made in creating the hockey stick graph.
Thaomas
Dec 16 2016 at 1:09pm
So does the increases in the uncertainty of the estimates of the effects of CO2 accumulation lead you to raise or to lower your recommendation for the optimal level of a Carbon tax/subsidy?
David R. Henderson
Dec 16 2016 at 2:47pm
@Thaomas,
So does the increases [sic] in the uncertainty of the estimates of the effects of CO2 accumulation lead you to raise or to lower your recommendation for the optimal level of a Carbon tax/subsidy?
It’s not an increase; it’s a recognition of an existing level of uncertainty.
Given that I have recommended a zero carbon tax, I recommend neither an increase nor a decrease.
Chip Knappenberger
Dec 16 2016 at 4:09pm
Reading through the Regulation piece, I was struck by this sentence:
“this error is 114 times as large as the estimated extra energy from excess CO2 (±4.0 Wm–2 versus 0.035 Wm–2).”
What is the source for the 0.035W/m2 of forcing from CO2? The IPCC AR5 gives this number as ~1.68W/m2 (change from 1765 to 2011, AR5 SPM Figure 5).
Still smaller than the error in cloud forcing, but not as dramatically.
Thanks!
-Chip
Ben H.
Dec 16 2016 at 6:09pm
[Comment removed. Please consult our comment policies and check your email for explanation.–Econlib Ed.]
Pat Frank
Dec 17 2016 at 3:27pm
Chip, that 0.035 Wm^-2 is the average annual change in CO2 forcing since 1979, as calculated a couple of years ago.
The annual change in CO2 forcing is far more relevant a number than the total change since 1900 (your number) when appraising climate models, because it represents the size of the annual perturbation climate models must resolve when projecting future air temperatures.
As an annual average thermal flux error of CMIP5 models, that ±4.0 Wm^-2 is indeed ±114 times larger than the perturbation they are trying to evaluate.
Pat Frank
Dec 17 2016 at 3:31pm
By the way, I thank David for the promotion, but I’m merely scientific staff at SLAC, not a Professor.
Scientific staff is a better position, really, because I get to do my own research, rather than consigning the work to grad students and postdocs. 🙂
David R. Henderson
Dec 17 2016 at 5:00pm
@Pat Frank,
Thanks for answering Chip.
Also, I apologize for promoting you. 🙂
Mark Bahner
Dec 18 2016 at 12:37am
Other than that humans have caused the atmospheric CO2 concentration to increase from about 280 ppm to over 400 ppm, and humans will likely cause the atmospheric CO2 concentration to increase to about 600 ppm by the end of this century (barring significant removal of CO2 from the atmosphere).
Also, humans have caused the atmospheric methane concentration to increase from approximately 700 ppb to approximately 1800 ppb, and the atmospheric methane concentration is also rising.
Roger Sweeny
Dec 18 2016 at 9:51am
There seems to be an inconsistency in the article. You tell how Dr. Frank thinks there is a large uncertainty in recent temperature measurements:
The 1856-2004 global surface air temperature anomaly with its 95% confidence interval is 0.8 C ± 0.98 C. Thus, the global average surface air temperature trend is statistically indistinguishable from 0 C.
But you also quote a recent speech of his:
The climate has warmed and cooled in the past without any changes at all from us or from changes in carbon dioxide or apparently in greenhouse gases, and the changes that we’ve seen are well within natural variability.
But if we don’t know recent temperatures with great accuracy, it seems even less likely that we know changes previous to that with greater accuracy. And if we don’t, we can’t make very precise statements about what is natural variability.
Pat Frank
Dec 18 2016 at 1:56pm
Roger, the ice core records and some spleothems do show climate swings in temperature. Because of confounding effects, though, it’s nearly impossible to convert the physical changes in the cores into Celsius.
One can compare the recent core record with the record of the past to get a qualitative idea of the relative magnitude of temperature changes, though, even if numbers can’t be derived.
Another indication of recent variability involves the movement of the northern treeline. It has been migrating poleward over the last century-plus, but has not yet reached the latitude it occupied during the Medieval Warm Period. That latitude is indicated by old stumps of dead trees, that can be C-14 dated. So, it’s likely the MWP was warmer than now.
Tree ring records do reveal warmer/wetter vs. colder/drier, and trends in climate can be compared in that way, too. But there’s not yet a physical method to convert tree ring metrics into temperatures.
Other qualitative indicators include using old farm records or stratigraphic pollen counts to reconstruct the length of the growing season, and the warmth requirements and cold tolerance of the vegetation and the crops grown.
But you’re right, that the uncertainty in the measurement record makes it impossible to presently know the temperature changes quantitatively, during all this time.
Pat Frank
Dec 18 2016 at 8:41pm
Mark, “humans have caused the atmospheric CO2 … [and] methane [concentrations] to increase.”
You’re right, and I believe no one here disputes that. Neither increase seems to have had any measurable effect on the terrestrial climate, though.
Despite that, the increase in CO2 has had a couple of important impacts. It’s caused the global ecology to green up since 1980, and is apparently responsible for a significant improvement in farm yields.
Chip Knappenberger
Dec 18 2016 at 10:01pm
[Comment removed pending confirmation of email address. Email the webmaster@econlib.org to request restoring this comment. A valid email address is required to post comments on EconLog and EconTalk.–Econlib Ed.]
Mark Bahner
Dec 19 2016 at 1:16pm
Do you dispute the global average surface temperature anomaly measurements as reported by, for instance, NASA?
NASA global average surface temperature anomalies
If so, what are your proposed values?
And do you dispute the satellite lower tropospheric temperature anomalies as reported by, for instance, UAH?
Satellite-measured lower tropospheric temperature anomalies
If so, what are your proposed values?
Roger Sweeny
Dec 19 2016 at 4:04pm
Patrick, Thanks for your reply. When you wrote spleothems, I assume you meant speleothems, mineral deposits in caves:
https://www.ncdc.noaa.gov/data-access/paleoclimatology-data/datasets/speleothem
Pat Frank
Dec 19 2016 at 10:47pm
Roger, yes, thank-you, and apologies for the mistake.
Mark, increasing global air temperature need not be caused by CO2 emissions. Pointing to them establishes nothing about human causality.
In any case, I’ve published on the surface air temperature record. Two of the papers are open access.
The first is here (869.8 kb pdf), and the second here (1 mb pdf).
The workers in the field have thoroughly neglected systematic measurement error. When that’s taken into account, the entire surface air temperature record is revealed as unreliable. It can say nothing of the rate or magnitude of the global air temperature change since 1850, or 1880, or 1900.
I intend a third paper that will truly nail the point. If you’d like a preview of much of that work, it’s here, from a plenary talk given at the World Federation of Scientists conference in Erice, Sicily, August 2015.
Mark Bahner
Dec 20 2016 at 12:20pm
That’s right, it doesn’t establish causality. But it does establish that the average global temperature has increased. Surface temperature measurements show that. Satellite lower tropospheric temperature measurements show that. Balloon lower tropospheric temperature measurements show that.
According to the NASA webpage to which I previously linked, the global average surface temperature anomaly rose by about 0.4 degrees Celsius from the period 1890-1899 to 1979. Do you agree with that assessment, or do you disagree? If you disagree, what do you think the actual global average surface temperature change was?
According to both the NASA website and the UAH website to which I linked, both the global average surface temperature and global average lower tropospheric temperature rose by more than 0.6 degrees Celsius from 1979 to 2016. Do you agree or disagree? If you disagree, what do you think the actual numbers should be for the global average surface temperature change and the global average lower tropospheric temperature change from 1979 to 2016?
Rich Berger
Dec 20 2016 at 1:41pm
Pat-
This is a fascinating analysis and you are to be commended for the calm and cordial way you have responded to questions or criticisms. If your approach were more widely followed, there would be a lot less heat and much more light in scientific discussions.
Mark-
Did you read the post based on the talk that Pat gave at the World Federation of Scientists’ conference? It may help to answer your questions. It made me wonder how much error there is in my local measurements with my non-aspirated Davis weather station, located on my deck, which is often snow-covered in the winter!
Mark Bahner
Dec 20 2016 at 5:48pm
Hi Rich,
Yes, I read the post based on the talk Pat gave, and it didn’t help to answer my questions.
I want to know what Pat thinks the following temperature changes were:
1) Global average surface temperature change from the average of 1890-1899 to 1979.
2) Global average surface temperature change from 1979 to 2016.
3) Global average lower tropospheric temperature change from 1979 to 2016.
The sites I gave have numerical answers to those questions. For Pat to say that the numerical answers provided on those two sites are wrong doesn’t mean much, unless Pat can also say what numbers he thinks are right.
Pat Frank
Dec 20 2016 at 7:51pm
Mark, the work I’ve done shows that no one knows what the air temperature change has been since 1850, or 1900 (you pick), to better than about ±1 C (2-sigma interval).
The “right answer” is unknowable. Accurate data do not exist.
The historical air temperature sensors suffered from too much systematic error for any determination to be made, that is more accurate than about ±1 C.
The numbers you see on the NASA or GISS sites are subject to that accuracy limit, though they do not report it.
The satellite temperatures are likely not better than ±0.3 C in accuracy; nor are the balloon measurements.
However, one never sees a presentation of air temperatures that includes a full analysis of instrumental errors, and the trend lines are always displayed without measurement error bars.
Given all that, it should be clear that I can offer no alternative estimate of air temperature change. Nor can anyone else. No alternative estimate is possible. And the estimates from your cited sources are not analytically supportable.
My work indicates that the official temperatures, as presented, represent false precision, and that no estimate at all is available, better than about ±1 C accuracy (2-sigma confidence interval).
Apart from that, we ought to be celebrating the increased global warmth. The world blooms during warm periods.
Pat Frank
Dec 20 2016 at 7:56pm
Rich, thank-you for your kind words.
You’re right, too, that a more dispassionate discussion would have spared us all the heated intemperance in the debate.
But honestly, I expect that much of that heat was deliberately injected.
And given the fear so many have to counter the AGW assertion, that heat has had its intended effect.
David S
Dec 20 2016 at 8:59pm
Pat, in your article you state that the temperature record was recorded in (at best) 0.25 C increments. You then stated that averaging cannot improve that resolution.
Do you have a source I could learn more on that from? In my own field of electrical engineering, we use low pass filtering (an average is a cheap low pass filter) to improve the bit resolution of ADCs, at the expense of frequency. What is the essential difference? Is it that the samples being averaged are coming from different devices?
From my own sampling experience, I’d be more concerned with Nyquist rate/frequency based sampling errors. From what I’ve seen these temperature readings are taken at most a few times a day. If the thermometer responds faster to temperature swings than that then you have aliasing in the samples. Aliased samples contain only corrupted information – essentially the high frequency content is mapped onto the low frequency. For example, take these hourly actual temperatures:
0,1,2,3,1,4,2,0,-1,-2,-1,-4,-3,-2,0
The actual average is 0. But when sampled on every fourth element, the average is -1/4, -3/4, 1/4, or 3/4. Aliasing has destroyed the low frequency data. This is definitely a case where averaging will not increase accuracy – the errors are not normally distributed.
Of course, maybe this doesn’t apply to temperature measurements for some reason. But the theoretical foundation looks pretty bad. The only way to avoid this would be to make the thermometers respond very slowly to temperature changes – if you take one sample a day, the thermometer needs to have a maximum bandwidth of 0.5 cycles/day.
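David S’s toy series can be checked directly. A short Python sketch (illustrative only) confirms that the full-series mean is 0 while the every-fourth-sample averages all miss it, whichever sampling offset is chosen:

```python
# David S's hourly series above: the true mean is 0, but subsampling
# every fourth value aliases the high-frequency swings onto the mean.
temps = [0, 1, 2, 3, 1, 4, 2, 0, -1, -2, -1, -4, -3, -2, 0]

true_mean = sum(temps) / len(temps)
print("true mean:", true_mean)  # 0.0

# Averages of every-fourth-sample subsets, one per starting offset.
for offset in range(4):
    sub = temps[offset::4]
    print(f"offset {offset}: samples {sub} -> mean {sum(sub) / len(sub):+.2f}")
```

None of the subsampled means recovers the true mean, which is the aliasing point: once the high-frequency content is folded in, no amount of averaging of those samples gets it back.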
Pat Frank
Dec 22 2016 at 2:12pm
David S, interesting comments, thanks. The early instruments were physical thermometers inside a louvered box, and that gave single-point data. So, it doesn’t seem likely that sampling continuous waveforms is a good analogy to the problem.
This image gives a pretty good idea of what the standard thermometer set-up looked like prior to about 1980. Extending back before about 1880, the systems varied ever more wildly. The thermometers are liquid-in-glass (LiG). The high-temp used mercury, the low-temp typically used methyl alcohol.
The best LiG thermometers had 1 C graduations. Some were graduated in 2 C intervals, and occasionally even 5 C intervals.
Up through about 1980, only each daily low and high were recorded; just two temperatures a day. The thermometers had floaters inside that fixed at the low or high temperature, to be read later. So, the aliasing problem you describe would not have been an issue. LiG thermometers have a time-constant of several seconds.
After 1980, the Min-Max Temperature System (MMTS) sensors gradually replaced the LiG thermometers in louvered boxes. The MMTS uses a thermistor in a gilled shield, and delivers a 10-second averaged temperature to a data logger. The daily highs and lows are taken from that.
The resolution problem reflects the minimum change the instrument can record. The instrument does not yield the effects of perturbations smaller than the resolution limit. That means, for example, temperature differences less than 0.25 C are not readable on the 1 C-graduated LiG thermometers. So, these data literally do not exist in the record.
This means the recorded temperatures will always contain an error with respect to the true air temperature, unless the true air temperature happens to be exactly at one of the 0.25 C intervals. But even then, that accidental correspondence would not be known. The only way to deal with the resolution problem is to append the resolution uncertainty to every recorded temperature.
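As an illustration of that resolution argument, here is a toy quantizer (a sketch, not anything from the historical record): with readings snapped to a 0.25 C grid, two air temperatures differing by less than the grid spacing produce identical recorded values, and each recorded value carries an irreducible error of up to ±0.125 C.

```python
def record(true_temp, resolution=0.25):
    """Simulate reading a 1 C-graduated LiG thermometer to the nearest
    quarter graduation: the recorded value is snapped to a 0.25 C grid,
    discarding anything finer."""
    return round(true_temp / resolution) * resolution

# Two air temperatures that differ by less than the resolution...
a, b = 20.31, 20.36
print(record(a), record(b))        # identical recorded values
print(record(a) == record(b))      # the 0.05 C difference is invisible
```

The sub-resolution difference literally never enters the record, so the only honest treatment is to carry the resolution uncertainty along with every recorded value.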
The MMTS thermistors can be calibrated to ±0.1 C. If they retain calibration, their temperature readings ought to be that good.
The systematic error problem with both the LiG thermometers and the MMTS sensors (apart from resolution limits), arises due to the impact of solar irradiance and wind-speed on the temperature of the air inside the louvered box or gilled shield, respectively. Too much sun or not enough wind and the recorded temperature is different from the external air temperature.
These are deterministic and systematic effects that vary from hour-to-hour, day-to-day, and season-to-season. The effects of these external impacts are average systematic errors of about ±0.45 C for the LiG thermometer record, and about ±0.35 C for the MMTS sensors.
Does that address your concern? 🙂
Pat Frank
Dec 22 2016 at 4:26pm
Hmm, the link to the picture of the LiG thermometers in the CRS shelter didn’t come out.
Here it is in long-form: http://www.wicklowweather.com/Photos/inside%20screen.jpg
Mark Bahner
Dec 28 2016 at 12:34pm
This is demonstrably false. With surface temperature measurements, there are alternatives to NASA GISS, as discussed here:
“Why so many global temperature measurements?”
And with lower tropospheric measurements, there are also multiple analyses. Remote Sensing Systems (RSS) provides one alternative to UAH (University of Alabama in Huntsville).
Then there are balloon measurements.
The simple fact is that all of these measurement types and analysis types indicate somewhat greater than 0.6 degrees Celsius of warming from 1979 to 2016. Closing one’s eyes and covering one’s ears doesn’t change that fact.
Pat Frank
Dec 28 2016 at 9:52pm
Mark Bahner, your proposed alternative surface temperature constructions are not independent. They all use the identical set of temperature measurements; all variations on the same theme.
They are all subject to the same systematic measurement error; which is not shown on any of the plots.
I’ve done the work, Mark. The systematic measurement errors are large enough to render completely moot the entire surface air temperature trend since 1850.
Someone may be closing their eyes, but it’s not me.
Mark Bahner
Jan 5 2017 at 12:16pm
That’s simply not true, as the link I provided on December 28th makes clear.
No you haven’t…as the link I provided makes clear. There are four major groups who provide assessments of global surface temperature changes: NASA GISS, NOAA NCDC, the Met Office Hadley Centre/CRU, and the Japanese Meteorological Agency. They have provided four different estimates of the global surface temperature change in the 20th and 21st centuries. You haven’t provided a fifth estimate.
What does “completely moot” mean? I’m aware of one definition of “moot” being, “open to argument or debate”…that’s a given in science. That’s why there are four different major group estimates of the surface temperature rise in the 20th and 21st century. But as the link I provided makes clear, all four assessments agree there has been substantial warming (greater than 0.8 degree Celsius) from 1900 to 2016.
And the warming at the surface has been approximately equal to the warming in the lower troposphere as measured by satellites and balloons.
Pat Frank
Jan 9 2017 at 2:43pm
Mark, all your “major groups” use the identical air temperatures measured using the identical instruments. They’re not independent.
My work is about error analysis. It shows the measured surface temperatures are not reliable to better than ±0.5 C.
My work is _not_ about providing an estimate of global air temperature. Your continued insistence that I provide one is an irrelevance, now bordering on foolishness.
The point requires understanding the meaning of measurement accuracy, and knowing about instrumental resolution.
If you don’t get those concepts, you’ll never understand why the published record is an unreliable measure of the air temperature change since 1900; no matter one group, four groups, ten groups, or a gazillion groups.
They’re all using the same error-ridden data.
Mark Bahner
Jan 11 2017 at 6:24pm
So you accept that the best estimates available of the global average surface temperature change and global average lower tropospheric temperature change from 1979 to 2016 are an increase of greater than 0.6 degrees Celsius?
Pat Frank
Jan 12 2017 at 2:56am
Why is it so hard to understand the concept of measurement error and uncertainty?
Let’s put it this way, Mark: at the 95% confidence interval, the best estimate for mid-tropospheric air temperature from satellites and balloons is 0.6±0.6 C (your number).
The 95% confidence interval applied to the global surface air temperature change from 1979 to 2016 is 0.7±1 C (GISSTemp trend).
And those are lower limit estimates of uncertainty because they only include instrumental measurement accuracy.
So, what does it mean when the minimum of uncertainty is as large as, or larger than, the measurement value?
Can you guess?
Mark Bahner
Jan 12 2017 at 12:16pm
Have you analyzed the *lower* tropospheric temperature data from satellites (i.e., TLT values) and balloons? Where is your analysis?
I don’t need to guess. I know what it means. It means the entire scientific establishment agrees that both the global average surface temperature and the global average lower tropospheric temperature increased by more than 0.6 degrees Celsius from 1979 to 2016.
The only real matter for scientific debate is what the global average surface temperature change and global average lower tropospheric temperature change will be from 2016 to…say, 2050 or 2100.
I made predictions in that regard more than a decade ago:
Long Bets #181
Pat Frank
Jan 12 2017 at 1:00pm
Mark, I’ve looked at the satellites and balloons enough to see that their temperature measurements are not more accurate than ±0.3 C.
The testimony of instruments and calibration outweigh the views of your entire scientific establishment.
Calling on establishment views, rather than on data, is just an argument from authority.
Predictions of future air temperature require an accurate climate model. There are none.
Your predictions are not predictions in any valid scientific sense. They’re just you giving us the benefit of your personal opinion.
Mark Bahner
Jan 12 2017 at 5:27pm
There is no argument, Pat.
You have not provided numbers that dispute that the best available estimate of the global average surface temperature rise from 1979 to 2016 is more than 0.6 degrees Celsius, as provided by NASA GISS, Hadley CRU, etcetera (aka, the “scientific establishment”).
And you have not provided numbers that dispute that the best available estimate of the global average lower tropospheric temperature increase from 1979 to 2016 is more than 0.6 degrees Celsius, as provided by UAH, RSS, etcetera (aka, the “scientific establishment”).
You’ve merely complained that you don’t like their numbers. You have not provided what you think are better numbers. So there is no argument.
Nonsense. That’s no more true than saying that future predictions of air temperature require an accurate assessment of future CO2 emissions. Tom Wigley and Sarah Raper published a paper in Science magazine, in which they predicted global temperatures out to the year 2100. There they are…predictions. In what is generally considered as one of the top scientific journals in the world.
Oh, really? Why don’t you point to some predictions of future emissions of CO2, and future atmospheric concentrations of CO2, and future global average temperatures that you do think are “valid” in a “scientific sense”?
Yes, sort of like Einstein giving us his personal opinion that an atomic clock would go faster on the top of a mountain than at the Dead Sea.
🙂
Again, if you don’t think my predictions are “predictions in any scientific sense” point to some predictions of future CO2 emissions, CO2 and methane atmospheric concentrations, and future average global temperature changes (preferably global average lower tropospheric temperature changes) that you think are scientifically valid.
Pat Frank
Jan 14 2017 at 11:00pm
Mark Bahner, you like the satellite and radiosonde temperatures. We begin with them:
Radiosonde air temperature measurement uncertainty: ±0.3 C:
R. W. Lenhard, Accuracy of Radiosonde Temperature and Pressure-Height Determination BAMS, 1970 51(9), 842-846.
F. J. Schmidlin, J.J. Olivero, and M.S. Nestler, Can the standard radiosonde system meet special atmospheric research needs? Geophys. Res. Lett., 1982 9(9), 1109-1112.
J. Nash Measurement of upper-air pressure, temperature and humidity WMO Publication-IOM Report No. 121, 2015.
The height resolution of modern radiosondes using radar or GPS = 15 m = ±0.1 C due to lapse rate alone.
That makes the lower limit uncertainty of modern radiosonde temperatures (inherent + height) rmse = ±0.32 C.
Satellite Microwave Sounding Units (MSU): ±0.3 C accuracy lower limit:
Christy, J.R., R.W. Spencer, and E.S. Lobl, Analysis of the Merging Procedure for the MSU Daily Temperature Time Series Journal of Climate, 1998 11(8), 2016-2041 (MSU ≈±0.3 C mean inter-satellite difference)
Mo, T., Post-launch Calibration of the NOAA-18 Advanced Microwave Sounding Unit-A IEEE Transactions on Geoscience and Remote Sensing, 2007 45(7), 1928-1937.
From Zou, C.-Z. and W. Wang, Inter-satellite calibration of AMSU-A observations for weather and climate applications. J. Geophys. Res.: Atmospheres, 2011 116(D23), D23113.
Quoting from Zou (2011) “Although inter-satellite biases have been mostly removed, however, the absolute value of the inter-calibrated AMSU-A brightness temperature has not been adjusted to an absolute truth [i.e., the accuracy]. This is because the calibration offset of the reference satellite was arbitrarily assumed to be zero [i.e., the accuracy of the satellite temperature measurements is unknown].”
The inter-satellite calibrations and bias offset corrections that are used to improve precision do not improve accuracy.
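Zou’s point, that inter-satellite calibration improves precision (agreement between satellites) but not accuracy (agreement with the truth), can be sketched with synthetic numbers. All values here are made up for illustration:

```python
# Synthetic brightness temperatures: truth plus an unknown common
# calibration offset, plus a per-satellite bias.
truth = [10.0, 11.0, 12.0, 13.0]
common_offset = 0.4   # unknown in practice; assumed here for illustration
sat_a = [t + common_offset + 0.10 for t in truth]
sat_b = [t + common_offset - 0.15 for t in truth]

# Inter-satellite calibration: shift B to match A on their overlap.
mean_diff = sum(a - b for a, b in zip(sat_a, sat_b)) / len(truth)
sat_b_cal = [b + mean_diff for b in sat_b]

# Precision improves: the two records now agree with each other...
spread = max(abs(a - b) for a, b in zip(sat_a, sat_b_cal))
# ...but accuracy does not: both remain offset from the truth.
abs_err = max(abs(a - t) for a, t in zip(sat_a, truth))
print(f"inter-satellite spread after calibration: {spread:.6f}")
print(f"remaining error vs truth: {abs_err:.2f}")
```

The relative bias vanishes while the shared offset from the truth survives untouched, which is what “the calibration offset of the reference satellite was arbitrarily assumed to be zero” implies.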
Infrared Satellite SST resolution: ±0.3 C
W. Wimmer, I.S. Robinson, and C.J. Donlon, Long-term validation of AATSR SST data products using shipborne radiometry in the Bay of Biscay and English Channel. Remote Sensing of Environment, 2012. 116, 17-31.
Land surface air temperature uncertainty,
Lower limit of measurement error: ±0.45 C (CRS LiG thermometer prior to 1980); ±0.35 C (MMTS sensor after 1980, but only in the technologically advanced countries).
Hubbard, K.G. and X. Lin, Realtime data filtering models for air temperature measurements Geophys. Res. Lett., 2002 29(10), 1425; doi: 10.1029/2001GL013191.
Huwald, H., et al., Albedo effect on radiative errors in air temperature measurements Water Resources Res., 2009 45, W08431.
P. Frank Uncertainty in the Global Average Surface Air Temperature Index: A Representative Lower Limit Energy & Environment, 2010 21(8), 969-989.
X. Lin, K.G. Hubbard, and C.B. Baker, Surface Air Temperature Records Biased by Snow-Covered Surface. Int. J. Climatol., 2005 25, 1223-1236; doi: 10.1002/joc.1184.
Sea Surface Temperature uncertainty: ±0.6-0.9 C for ship engine intakes:
C. F. Brooks, Observing Water-Surface Temperatures at Sea Monthly Weather Review, 1926 54(6), 241-253.
J. F. T. Saur A Study of the Quality of Sea Water Temperatures Reported in Logs of Ships’ Weather Observations J. Appl. Meteorol., 1963 2(3), 417-425.
SST uncertainty from buoys, including Argo: ±0.3-0.6 C:
W. J. Emery, et al., Estimating Sea Surface Temperature from Infrared Satellite and In Situ Temperature Data. Bull. Am. Meteorol. Soc., 2001 82(12), 2773-2785.
R. E. Hadfield, et al., On the accuracy of North Atlantic temperature and heat storage fields from Argo. J. Geophys. Res.: Oceans, 2007 112(C1), C01009.
T.V.S. Udaya Bhaskar, C. Jayaram, and E.P. Rama Rao, Comparison between Argo-derived sea surface temperature and microwave sea surface temperature in tropical Indian Ocean. Remote Sensing Letters, 2012 4(2), 141-150.
Those are all 1-sigma uncertainties.
Anyone who understands measurement uncertainty must conclude from the above published calibrations that the 95% lower limit uncertainty bounds for air temperatures are:
Surface air temperature: ±1 C
Radiosonde: ±0.6 C
Satellite: ±0.6 C
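The arithmetic behind these 95% bounds is root-sum-square combination of the 1-sigma components, then doubling for an approximate 2-sigma interval. A small sketch using the radiosonde figures cited above:

```python
import math

def rss(*components):
    """Combine independent 1-sigma uncertainties in quadrature."""
    return math.sqrt(sum(c * c for c in components))

# Radiosonde: inherent +/-0.3 C plus +/-0.1 C from height resolution.
one_sigma = rss(0.3, 0.1)
two_sigma = 2 * one_sigma
print(f"radiosonde 1-sigma lower limit: +/-{one_sigma:.2f} C")  # ~0.32 C
print(f"radiosonde ~95% bound: +/-{two_sigma:.1f} C")           # ~0.6 C
```

The same doubling takes the ±0.3 C satellite figure to ±0.6 C, and the roughly ±0.5 C surface figure to ±1 C.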
The climate consensus people never produce plots with physically real error bars.
The entire field runs on false precision.
Pat Frank
Jan 14 2017 at 11:55pm
Mark, you wrote, You have not provided numbers that dispute that the best available estimate of the global average surface temperature rise from 1979 to 2016 is more than 0.6 degrees Celsius, as provided by NASA GISS, Hadley CRU, etcetera (aka, the “scientific establishment”).
I have done.
In response, you’ve either not understood the impact of measurement uncertainty or just dismissed the demonstration by appeal to authority.
You wrote, “And you have not provided numbers that dispute that the best available estimate of the global average lower tropospheric temperature increase from 1979 to 2016 is more than 0.6 degrees Celsius, as provided by UAH, RSS, etcetera (aka, the “scientific establishment”).”
I have provided numbers, and have now cited published calibrations for each sort of temperature measurement. They validate the measurement uncertainty (i.e., the numbers) I provided.
Is it rude to point out that you have uncritically accepted the numbers of GISS, HADCRU, and the rest? That is, you’ve provided no reason to believe their numbers apart from a grant of authority.
For the purposes of this blog conversation, partiality in grants of authority does not make a case. In support of the notice of uncriticality, I previously pointed out that none, not one, of the GISS, HadCRU and other surface temperature compilations displays physically valid error bars (which, in stark contrast, I have done. See the above). They present their temperatures as though the measurements are perfectly accurate. Maybe they’re visiting from another universe, or something; the Platonoverse.
I wrote, “Predictions of future air temperature require an accurate climate model.”
To which you replied, “Nonsense. That’s no more true than saying that future predictions of air temperature require an accurate assessment of future CO2 emissions.”
Hmm. So, you’re saying that prediction of an experimental outcome doesn’t require a physically valid theory. Can you give an example of a scientifically valid prediction made in the absence of a valid physical theory?
Something like, ‘I predict the moon is made of green cheese‘ doesn’t count. Karl Popper called testing such guesses naïve falsifiability. Such guesses are not rigorously deduced from a physical theory, i.e., there’s no reason to think they’re true.
In any case, if one had a perfectly accurate knowledge of future CO2 emissions, one would still need an accurate climate model to convert that knowledge into a prediction of future air temperatures. Without an accurate climate model, after all, how would anyone know how CO2 emissions would affect the climate?
In that context, I’ll point out that your authoritative Wigley and Raper Science paper discussed no physically valid uncertainties. All they had is the usual PDFs having to do with model variance around a model mean: all about model precision, not physical accuracy.
You wrote, “Why don’t you point to some predictions of future emissions of CO2, and future atmospheric concentrations of CO2, and future global average temperatures that you do think are “valid” in a “scientific sense”?
Predictions of future CO2 emissions or concentrations neither I nor anyone else can hazard.
However, given various possible CO2 scenarios, one can hazard the reliability of the IPCC/consensus climate science projections of future air temperature: they’re unreliable. I’ve shown that beyond a shadow of a scintilla of a soupçon of a doubt. Also, here.
Comments are closed.