Open Mind

‘Tain’t Likely

January 10, 2009 · 78 Comments

Every year this century is among the top-10 hottest years on record. In fact the clustering of hottest years is even more lop-sided than that would indicate; by the end of 2006 it was noted that the 13 hottest years on record had all occurred since 1990 (and we’ve added a couple more to that list since then). How unlikely is that?

Precisely that question is addressed in “How unusual is the recent series of warm years?” (Zorita et al. 2008, Geophysical Research Letters, 35, L24706, doi:10.1029/2008GL036228). The conclusion can be summed up in a phrase best said with a strong Maine accent: ’tain’t likely.


It has sometimes been suggested that the warming we’ve experienced over the last century is simply due to natural variations in the climate system. After all, noise (random fluctuations) certainly exists in global temperature, and perhaps the nature of the noise is such that it can lead to warming like we’ve seen recently purely through random fluctuations, without the need to invoke an anthropogenic cause. Also, not all noise is the simplest “white noise” type, and the noise in global temperature is most certainly not of that ilk. Temperature noise exhibits the phenomenon of autocorrelation, in which nearby (in time) noise values tend to be correlated. The autocorrelation is surely positive, which means that if a given noise value is on the high (low) side then nearby noise values are likely to be high (low) also. It’s important to emphasize that the noise is still random — but it’s a random process in which nearby noise values have a tendency to be more alike than noise values separated by a greater span of time.
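To make the distinction concrete, here's a minimal sketch in Python contrasting white noise with AR(1) autocorrelated noise. The coefficient 0.8 is purely illustrative, not a value estimated from temperature data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000
alpha = 0.8  # autoregressive coefficient; purely illustrative

# White noise: every value independent of every other.
white = rng.normal(size=n)

# AR(1) "red" noise: each value is alpha times the previous value
# plus a fresh random shock, so nearby values tend to be alike.
ar1 = np.zeros(n)
for t in range(1, n):
    ar1[t] = alpha * ar1[t - 1] + rng.normal()

def lag1_autocorr(x):
    """Sample autocorrelation at lag 1."""
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

print(f"white noise lag-1 autocorrelation: {lag1_autocorr(white):+.2f}")
print(f"AR(1) noise lag-1 autocorrelation: {lag1_autocorr(ar1):+.2f}")
```

The white-noise series has a lag-1 autocorrelation near zero, while the AR(1) series comes out close to its generating coefficient: nearby values really do tend to be alike, even though every value is random.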

So Zorita et al. (hereafter referred to as Z08) calculated how unlikely it would be to have as many record warm years observed recently as is seen in the temperature record. They studied annual-mean global temperature as estimated by GISS, HadCRU, and NCDC, covering the interval 1880 through 2006, as well as regional temperatures for 26 geographical areas from 1850 to 2006 using the HadCRU gridded data set, and 11 long individual station records.

Computing the unlikelihood of a recent spate of warm years would be easy if the noise were plain old white noise. But that’s not the case, and deriving the probability for autocorrelated noise is tricky business. It also depends on the nature of the random process. Hence Z08 estimated likelihoods by Monte Carlo simulation. They also tested two models of autocorrelation behavior: an AR(1) model, which has what’s sometimes called “short-term memory,” and a fractional noise model, which shows long-term persistence.

That doesn’t mean you can’t get some pretty long-term persistence (in the colloquial rather than mathematical sense) from an AR(1) process. Each of these models depends on a parameter which gives the severity of the persistence; the AR(1) model depends on the autoregressive coefficient \alpha, and the fractional-noise model depends on the fractional differencing parameter d. If the AR(1) parameter is high enough the autocorrelation can persist for a long time, and Z08 tested some pretty high AR(1) parameters (they tested pretty high fractional differencing parameters too). But in general, autocorrelation lasts longer for fractional differencing models than for AR(1) models.
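As a rough illustration of this kind of Monte Carlo test (a simplification for this post, not Z08's actual procedure or code), one can ask how often pure AR(1) noise puts the 13 largest values of a 127-year series entirely within the final 17 years, mirroring the "13 hottest years all since 1990" observation from the opening paragraph:

```python
import numpy as np

rng = np.random.default_rng(0)

n_years = 127   # 1880 through 2006, as for the global records
alpha = 0.8     # conservative persistence, within Z08's 0.75-0.85 range
trials = 10000
k, window = 13, 17   # "13 hottest years all since 1990" in a record ending 2006
burn = 100           # spin-up discarded so each series is roughly stationary

# Generate `trials` independent AR(1) noise series, one per column.
shocks = rng.normal(size=(n_years + burn, trials))
x = np.zeros((n_years + burn, trials))
for t in range(1, n_years + burn):
    x[t] = alpha * x[t - 1] + shocks[t]
x = x[burn:]

# For each series: do the k largest values all fall in the last `window` years?
top_k = np.argsort(x, axis=0)[-k:]   # indices of the k hottest "years"
hits = np.all(top_k >= n_years - window, axis=0).sum()
print(f"estimated probability: {hits / trials:.1e}")
```

Even with this strongly persistent noise, the estimated probability comes out very small, consistent in spirit with the 10^{-5} to 10^{-3} range Z08 report for this range of \alpha.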

Results of the Monte Carlo simulations for global temperature conditions are given in their figure 1:

fig1

It’s also necessary to estimate the noise parameters under the assumption that global temperature is just noise. This doesn’t mean that temperature noise has those values, because if there’s signal rather than just noise, the signal will vastly inflate the estimates. Parameter estimation is also complicated by the fact that very modern data may “pollute” the estimates due to strong anthropogenic influence, that there’s evidence of differences in sea-surface temperature measurements during 1940-1950 (the “bucket problem”), and that the very early data are more uncertain than later data. Nonetheless, Z08 made a conservative estimate:


To be conservative, i.e., risking an overestimation, a value of \alpha in the range of 0.75 to 0.85 (auxiliary material) can be assumed. Within this range the probability of a random occurrence of event E is about 10^{-5} to 10^{-3}, a higher likelihood than under the null-hypothesis of white noise, but still an extremely rare event.

For a fractional differencing process, the result was similar:


The dependence of log(p) on the value of the fractional differencing parameter d is not so steep in the range of 0 to 0.45, but p(E) is still quite low for values of d close to 0.5, yielding a probability for event E of about 10^{-3} as well.
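A quick way to see why the pure-noise null hypothesis forces such high parameter estimates: fitting the lag-1 coefficient to a series that actually contains a trend inflates the apparent persistence. The following sketch uses synthetic data with illustrative numbers:

```python
import numpy as np

rng = np.random.default_rng(7)

def lag1(x):
    """Lag-1 autocorrelation, a simple moment estimator of the AR(1) alpha."""
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

# Pure AR(1) noise with only modest persistence...
n, alpha = 127, 0.3
noise = np.zeros(n)
for t in range(1, n):
    noise[t] = alpha * noise[t - 1] + rng.normal()

# ...versus the same noise with a linear signal mixed in.
trend = 0.04 * np.arange(n)   # arbitrary illustrative warming signal

print(f"alpha estimated from noise alone:     {lag1(noise):.2f}")
print(f"alpha estimated with signal included: {lag1(noise + trend):.2f}")
```

The second estimate lands far above the true noise coefficient, which is exactly why treating the temperature record as "all noise" yields \alpha values as high as 0.75 to 0.85.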

Turning their attention to regional averages, they note that the higher noise level in regional temperature causes fewer extremes in recent times, but that the generally lower persistence in the data series makes a given number of extremes in a limited recent time span less likely. The log-likelihood of the observed recent behavior using the AR(1) model, if the series are random noise, is displayed in their figure 2b:

fig2b

Although a number of areas show only modest unlikelihood in their records (10^{-1} to 10^{-2}), a number also showed extreme unlikelihood, in the range 10^{-5} to 10^{-4}. Under the fractional differencing model, the unlikelihoods tended not to be so severe, although several regions in Africa and Eurasia were unlikely at the 10^{-3} to 10^{-2} level (their figure 3b).

fig3b

For the individual station records, which are mostly in Europe, the signal-to-noise ratio is even smaller, so they show fewer records and smaller parameter estimates than even the regional data series. Since the two factors (fewer records and smaller parameters) tend to partly cancel, the results are comparable to those for regional data (figure 4b for the AR1 model and 4d for the fractional differencing model):

fig4bd

Their conclusion is that global, regional, and station records all show a clustering of warm years recently that is extremely unlikely if the recent fluctuations are just natural variation:


For the global mean temperature, conservative values of the lag-1 autocorrelation for global temperature yield very low likelihoods for the recent clustering of record warmth and under the long-term-persistence model this probability is even lower. The analysis of the regional records indicates that for some this likelihood is lower than for the global mean. This may seem surprising, as global records should inherently contain less internal noise and thus a higher signal-to-noise ratio. Two factors are regionally at play: a smaller number of record years and a smaller statistical persistence, rendering lower likelihoods of random clustering. This conclusion also holds for the individual station records.

Categories: Global Warming

78 responses so far ↓

  • jyyh // January 10, 2009 at 5:16 am

    Very good post, Tamino. Unfortunately I’m not competent enough in statistics to get (let alone evaluate) this one… so are my conclusions out of line?

    “How unusual is the recent series of warm years (barring anthropogenic influence)?”, asked Zorita et al., and they give an answer of 1/100000 - 1/1000 (by two methods). Is it now then that this is the _statistical_ maximum error of the uncertainty of the argument: “anthropogenic CO2 is a GHG”? Since we are almost certain (by the proxies) that no such event (as described in your post) has happened in 1000 years, the error is on the smaller side, say once in 10000 - 20000 years. On this scale of paleoclimate we have the warming that started the holocene, which was about as rapid as the one seen today (ice core records of oxygen isotopes). Now, if we want to further diminish the maximum uncertainty of AGW, we’d have to prove the rate of warming today is higher than it was at the end of the last glacial period? I’m sorry that I’m so unclear in my questions; statistics is not my strong suit, which is why I avoid any arguments involving statistics. Not a very good trait to have in chemistry; my (M.Sc.) thesis paper has such error bars you would be terrified…

    Or do I get this totally wrong?

    [Response: They didn't test the question "Is anthropogenic CO2 a greenhouse gas?" (they didn't have to, because most certainly it is). They tested the question "Is the most recent collection of hottest years possibly due to random variation?" Essentially, the answer they got is: no, it isn't.

    The start of the holocene (and deglaciations in general) was nowhere near as rapid as modern warming. In isolated areas it was -- Greenland ice cores show some very rapid shifts indeed. But for the global average, a deglaciation involves average temperature change of about 5 or 6 deg.C and takes about 5000 years or longer! That's a sustained rate of about 0.001 deg.C/yr; what we've been experiencing and expect to sustain, about 0.02 deg.C/yr, is around 20 times faster.]

  • John Finn // January 10, 2009 at 11:50 am

    Every year this century is among the top-10 hottest years on record.

    In January 1948 a very similar claim could have been made. The hottest 10 years on record all occurred in the previous 11 years - the exception being 1946 which was 14th hottest.

    In fact the clustering of hottest years is even more lop-sided than that would indicate; by the end of 2006 it was noted that the 13 hottest years on record had all occurred since 1990 (and we’ve added a couple more to that list since then).

    It was even more lop-sided in the 1930s and 1940s. Seventeen of the hottest years all occurred between 1930 and 1947 - the exception being 1926. And - we could have added a couple more to the list, i.e. 1948 and 1949, which would have made it 19 out of 20.

    Yet, we now know that by 1948 the world had begun to cool. I’m not sure, therefore, that this sort of ranking tells us very much and could actually be misleading. Even if global cooling was now underway (and I’m not saying that it is) it’s likely that the next few years will still rank in the top 12 or 15 or whatever. Barring extreme rapid change, clustering of warm or cool years is bound to happen.

    How unlikely is that?

    Perhaps not as unlikely as may first appear.

    [Response: In 1940, we'd seen "n" warmest years near the end of a span of 60 (in the record since 1880); today it's nearly 130. Statistically, that's a helluva lot different. Yes, clustering of warm years is bound to happen by accident, but not at the level we've seen over the last 2 decades. The recent behavior is every bit as unlikely as Zorita et al. have calculated, if recent warming is just natural variation.

    I surely don't buy into your "it seems just as unlikely to me" line of reasoning. Zorita et al.'s "we computed the probability by Monte Carlo simulation using two plausible stochastic processes and conservative estimates of their parameters" argument is genuine evidence; "it seems" is not.

    And don't make a straw man out of their results; they're not claiming anything other than "it's more than just natural variation."]

  • TCOisbanned? // January 10, 2009 at 11:53 am

    I have the paper. Like it because it combines things at least into an “if you believe this, you have to believe that” format. I see so much talking past each other and ignoring each other among those who want to emphasize one aspect or the other in climate science.

  • Sekerob // January 10, 2009 at 1:38 pm

    What’s more, John Finn, we know why there was a cooling phase after 1948. Does the word “Anthropogenic” strike you as having been mentioned before?

    It never ceases to amaze me how a new discussion is engaged and all information from previous discussions is conveniently ignored. Is that Denialist 101 again?

  • george // January 10, 2009 at 3:19 pm

    Nice post.

    You have often emphasized the fact that the noise is not AR1.

    How would that affect the result (if one assumes ARMA noise, for example)

    Also, I am curious if anyone has performed a similar type of analysis for several different parameters simultaneously — e.g., global temp increase over recent decades, glacier mass decrease over recent decades, arctic sea ice mass decrease (thinning and decrease in cover)…, stratospheric cooling…, etc.

    In other words, what is the probability that all of those things would occur simultaneously due to “natural variability”?

    I expect it is probably low, but it would be interesting to see an actual calculation.

    [Response: It's important to emphasize we're working with different hypotheses. I've estimated the behavior of the noise in global temperature after removing the estimated signal. Zorita et al. didn't remove the estimated signal, because they're working with the hypothesis that there is no signal at all -- it's nothing but noise. That's why they estimate vastly more autocorrelation than I've estimated. The main result of this work is to reject that hypothesis.

    Even so, it would be worthwhile to test other noise models, including ARMA models. But they were conservative enough in their parameter estimates that they've pretty well "covered the bases" in terms of high persistence in their noise models.

    One thing that would be most interesting (and Z et al. mention they hope to do so) is to reproduce these estimates assuming that the data are a combination of short-term (like AR1) and long-term (fractional differencing) noise.]

  • TCOisbanned? // January 10, 2009 at 4:33 pm

    I think with the long Euro temp records, the recent excursions are less remarkable. Just noting this, so calm down DanO.

    In general, I like this since you can decide what the implications of believing something are. If you decide that you believe the recent excursion is just chance, you have to accept a very high unlikelihood, very high autocorrelation parameters, or a very peculiar/elaborate noise model. That doesn’t mean you can’t, by the way. But it just joins the issues. I also like that Zorita et al. look at it in a few different ways and include things with different levels of implication. What I don’t like from McI is that he almost cherry-picks his criticisms or his observations. Zorita instead tries to characterize parameter space.

  • John Finn // January 10, 2009 at 10:23 pm

    Sekerob

    What’s more John Finn, we know why the cooling phase was after 1948. Does the word “Anthropogenic” strike you as been mentioned before

    I assume you’re suggesting it [the cooling] was due to aerosols. If so, answer this:

    Why did the Arctic cool at 4 times the rate of the industrialised regions during 1944-75? If you analyse the GISS dataset you’ll notice that the amount of cooling in this period was related to latitude band. The greatest cooling occurred at the more northerly latitudes and the least cooling at the equator.

    Industrial (tropospheric) aerosols are “regionally specific” [See Mann & Jones 2000]. They only last about 10 days in the atmosphere before they fall to earth or are ‘rained out’. There was actually very little change in aerosol production until the 1950s by which time most of the cooling had already occurred (as Tamino has previously shown).

    [Response: Aerosols do not rain out as quickly as you claim; the estimates I've seen in the literature range from 12 days to several months.

    If I'm not mistaken you've made this false claim before -- and been corrected before.]

  • John Finn // January 10, 2009 at 10:30 pm

    Tamino

    I’ve got no problem with the study other than to suggest care with its interpretation. To address one of your points, though:

    In 1940, we’d seen “n” warmest years near the end of a span of 60 (in the record since 1880); today it’s nearly 130. Statistically, that’s a helluva lot different

    True - unless you also consider proxy reconstructions in which case it’s at least 600 years.

    [Response: So now you want to use proxy reconstructions?

    Too bad for you -- looking at Mann et al. 2008, the highs in the 1940s were not exceptional -- but the highs in the 1990s and 2000s sure are.]

  • John Finn // January 11, 2009 at 12:23 am

    Too bad for you — looking at Mann et al. 2008, the highs in the 1940s were not exceptional —

    They were compared to the previous 600 years.

    [Response: So now you'll appeal to proxy reconstructions but only if you can leave out the part you don't like?

    And you still haven't actually run any numbers. Even using only the last 600 years I doubt they'd be as unlikely as the values for the 1990s-2000s found by Zorita et al.

    You're not trying to raise issues, ask questions, or learn more. It's abundantly clear that your only motive is to poke holes. And you're doing a lousy job of it. Troll is as troll does.]

  • BillBodell // January 11, 2009 at 4:36 am

    I read the post several times and looked up autocorrelation, and I’m still not sure I understand if this study is treating each year as an independent event (like a coin flip) or whether there is assumed to be an underlying signal (my failure to understand is all on me).

    If each year is independent, then it’s clear that having so many warm years near each other would be incredibly unlikely. But, I’m not sure who is making the argument they are refuting.

    If there is an underlying signal that is increasing (due to AGW, solar, ANSO, ENSO, aliens, or some combination thereof), then I don’t think it would be surprising to find so many high values in the vicinity of a record high.

    [Response: The hypothesis being tested is that there's no signal, but that each year is not independent. Under the hypothesis, each yearly value is pure noise -- no "signal" whatever.

    Not all noise is independent; some noise is what's called "autocorrelated" noise. In most such cases, nearby (in time) values have a tendency to be alike, even though the sequence of noise values is purely random. It has been suggested that global temperature is just random fluctuations, but that it's strongly autocorrelated so it mimics having a signal even though it doesn't. The authors are testing whether this is a realistic possibility, and conclude that it isn't.]

  • Kipp Alpert // January 11, 2009 at 6:33 am

    Recently, the Hockey Stick graph has been vindicated by scientists using the same methods and measurements for the graph. If global warming is the signal, there was a lot of noise as well. Five El Niños out of 7, in fifteen years (1985-2000). Also, unusually warm SSTs were a positive influence in the latter part of the last century. What I don’t understand is, if natural variations are warmer than global warming, why is global warming the signal and not the noise? Is it because, in spite of local phenomena, global warming is a constant positive with more significance? Kipp
    http://data.giss.nasa.gov/gistemp/2007/
    http://www.ucar.edu/news/releases/2005/ammann.shtml
    http://www.nasa.gov/vision/earth/lookingatearth/elnino_split.
    http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/ensostuff/ensoyears

  • michel // January 11, 2009 at 7:06 am

    John Finn makes a reasonable point. You may not like the implications, but it’s reasonable and it’s not trolling. The issue is, you’ve excluded the idea that over the last 150 years “global temperature is just random fluctuations, … strongly autocorrelated”.
    That’s interesting in itself.

    But it shows nothing much about how unusual the recent series of record warm years is, and still less, what the outlook is going forwards. To show how unusual, you’d have to take each year, take the last (for instance) 150 years. Then calculate how often it happens, for any 150 year period, that the last 10 years contain such a cluster. Either of warm or of cool years, because you’re testing for extremes.

    [Response: You're trying to knock down a straw man. Z08 didn't make claims about how unusual the recent warming is, and I didn't claim they did. They show that the recent warm spell is too unusual to be believed under the hypothesis of natural variations. So we can reject the hypothesis of "just natural variations" with high confidence. I'd say that both my post, and Z08, make it quite clear that that's all that's claimed.

    But both John Finn and you are trying to make something else out of it -- in particular you're trying to defend against "the recent warming is unusual." He's also trying to do so by claiming that the 1940s were just as unusual in the context of this test as the 1990s-2000s. But they weren't. And he hasn't run any numbers to support that, though he does make claims about it.

    So he brings up proxy reconstructions. When I look at 'em, they show that the 1940s were definitely not unusual in this context, but the 1990s-2000s were. He doesn't like that, so he tries to use proxy reconstructions but omit the part of it he doesn't like. That's nothing but obfuscation.]

    To show prognosis, you’d have to take the episodes with high scores and see what happened next.

    The point of Finn’s remark about 1948 is that the event being studied, a clustering of record warm observations, has occurred before, and was followed by cooling. That is historically true. How often has this happened?

    [Response: What a load of crap. The proxies I looked at show that the 1940s were not record warmth. And in the context of the instrumental data, neither one of you has run any numbers to show it could be anything other than natural variations.

    You're both just trying to cloud the issue and cast a pall over the results of Z08. Which is pathetic, because all they're saying is that the recent warming is not just natural variations.

    It seems to be because you're deathly afraid that the results imply something about prognosis. Well they do. They imply that rather than just fluctuation, the recent warming is due to forcing. And we know which way the forcing is going today -- up.]

    Your reply seems to be that the event of 1948 was more probable based on the smaller number of observations. True purely statistically. To which Finn is in effect replying, yes, that’s a problem, but not with the uniqueness of 1948. Rather, with the effort to show that 2008 is unique as compared to it. The only way to show that would be to take 2008 and some other candidate years further back - as far as 2008 has been taken back. And we can’t do that. Or not without using proxies.

    Suppose we could. Suppose we then found that it was most unusual for there to be a series of 10 warmest years in the last 15 of 150 observations? Then we’d know that at least in terms of the period studied, the present warming was unprecedented.

    Suppose we found the reverse? That it was quite common, when you look at some time interval, let’s say 150 years, for there to be 150-year segments in which 10 of the last 15 observations are the highest of the 150? Then we’d just conclude that cyclical peaks and falls are common.

    It would be an interesting project.

    Pointing that out is not trolling - it’s simply pointing out that one can draw only limited conclusions from limited data.

    Finn is also right to say that one must be careful what conclusions to draw from the article. It proves an interesting point, but its very limited in its implications.

    [Response: I neither stated nor implied anything but the limited conclusion that Z08 negate the hypothesis of natural variations. But you and Finn are dead-set on confounding the issue.]

    By the way, one can be deeply skeptical about MBH, without being skeptical about proxies in general. This is McIntyre’s point of view. There is or may be data someplace in there; it’s just that the MBH reconstructions, including the latest one, don’t get it out.

    [Response: In truly trollish fashion, you couldn't resist getting in a shot at Mann's reconstruction, including the latest one. Pathetic.]

  • TCOisbanned? // January 11, 2009 at 11:20 am

    Yank:

    Similar paper, albeit with proxy records:

    http://www.uni-giessen.de/physik/theorie/theorie3/publications/RybskiD_GRL_2006.pdf

    My notes on the paper:

    http://www.climateaudit.org/?p=712

  • TCOisbanned? // January 11, 2009 at 11:29 am

    PowerPoint by von Storch on the whole concept of hypothesis-testing temp numbers. See in particular the refs on page 4, which help with how to think about the problem:

    http://w3k.gkss.de/staff/storch/PPT/statistics/070524.goeteborg.ppt#291,1,Slide 1

  • John Finn // January 11, 2009 at 11:51 am

    Response: Aerosols do not rain out as quickly as you claim; the estimates I’ve seen in the literature range from 12 days to several months.

    If I’m not mistaken you’ve made this false claim before — and been corrected before.

    Tamino

    The majority of aerosols are only resident for 2 or 3 days in the atmosphere. Some last a bit longer. This was discovered, and is implied, in a number of studies over the years which have looked at weekly weather variation in industrialised regions. Basically it seems that Mondays and Tuesdays are sunnier than Saturdays and Sundays. The week-end also tends to be wetter. The reason is thought to be industrial pollution. The theory is that aerosols build up during the working week and peak at the week-end. This causes the less sunny, wetter weather, which then clears by Monday when the cycle repeats.

    These are newspaper reports of 2 studies, i.e.

    One from Germany here, where it’s claimed temps are 0.2 deg lower at the week-end

    http://www.telegraph.co.uk/news/uknews/1545166/Weekends-really-are-wetter-than-Mondays.html

    and this one from Barcelona

    http://www.dailymail.co.uk/sciencetech/article-1050011/The-weather-IS-miserable-weekends-say-scientists.html

    I remember a number of similar studies in the UK some years ago which reached similar conclusions. Whether these studies are valid or not I don’t know, but the assumptions in them are clear. Aerosols in the atmosphere clear pretty quickly, and are very specific in their effect, as has been acknowledged in other studies (e.g. Levitus, Mann & Jones).

    You’re not trying to raise issues, ask questions, or learn more. It’s abundantly clear that your only motive is to poke holes.

    To be honest it’s not important enough to bother me that much. I’ve already said that whatever the current status of the world’s climate (i.e. cooling or warming) it will almost certainly be possible, for at least a few more years, to make claims about the 10 hottest years, 15 hottest years or 20 hottest years and so on.

  • jlaan // January 11, 2009 at 11:59 am

    Tamino,

    I came into a discussion where someone stated that a linear trend on temperature data would say nothing, because the variability is too high.
    I think this is plain wrong, but I couldn’t find anything in my textbook to respond quickly to this.

    [Response: Whether or not a linear trend is informative depends on the size of the signal (for global temperature it's pretty small), the size and character of the noise (it's pretty big and complex), and on how much data you've got and how long they cover.

    To determine whether a trend is meaningful, you apply well-known methods of estimating the uncertainty in that trend which take into account all those factors. When you do so, it turns out that under present conditions you need a time span of about 20 years or more to get meaningful results.

    So if you're talking about trends over 10 years, or 5 years, then no it's not really very informative. Trends over 20 or 30 years do give us genuine information.]
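The kind of calculation described in the response can be sketched with a common lag-1 adjustment to a trend's standard error. The (1+r)/(1-r) effective-sample-size correction below is a rough textbook approximation, not necessarily the method used here, and the data are synthetic:

```python
import numpy as np

def trend_with_adjusted_error(t, y):
    """OLS trend plus a standard error inflated for lag-1 autocorrelation
    of the residuals, via the rough (1+r)/(1-r) effective-sample-size
    adjustment (a textbook approximation, not any specific author's code)."""
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    r = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    n_eff = len(t) * (1 - r) / (1 + r)   # effective number of independent points
    se = resid.std(ddof=2) / (t.std() * np.sqrt(n_eff))
    return slope, se

# Synthetic example: a 0.02 deg/yr trend buried in AR(1) noise.
rng = np.random.default_rng(1)
years = np.arange(30, dtype=float)
noise = np.zeros(30)
for i in range(1, 30):
    noise[i] = 0.6 * noise[i - 1] + 0.1 * rng.normal()
temps = 0.02 * years + noise

slope, se = trend_with_adjusted_error(years, temps)
print(f"trend = {slope:.3f} +/- {2 * se:.3f} deg/yr (2-sigma)")
```

With 30 years of data the trend comes out clearly distinguishable from zero; rerunning with only 5 or 10 years makes the uncertainty swamp the slope, which is the point of the response.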

  • Ray Ladbury // January 11, 2009 at 2:19 pm

    John Finn, The only contention of the study is that the current warming epoch is extremely unlikely to be due to noise, regardless of how complicated a noise model you use. That conclusion stands. The question of whether past warm spells would be due to noise is not examined. If you conducted a similar analysis and got similar conclusions, it would simply show that period was also not likely due to noise.
    The explanation for the warming in the 30s and 40s has been discussed previously here.
    Noise. Still ain’t likely, regardless of your noise model.

  • Ray Ladbury // January 11, 2009 at 5:20 pm

    Michel, Look, I think some of us understand and appreciate what you are trying to do: serve as a sort of intermediary between CA and the climate science community. The thing is that there is a lot of water (and animosity) under the bridge with the two communities. It’s going to take a lot for scientists to forgive McI et al. all those accusations of fraud and dishonesty. That’s something scientists take pretty seriously. At a minimum, the prerequisite for dialog must be an acceptance that there is a b*ttload of evidence favoring the consensus, and that the consensus model provides a reasonable and largely self-consistent interpretation of that evidence. I don’t see that happening. McI is happy preaching to the choir. I don’t think he even cares about the science that much–at least not in comparison with gratifying his ego. And for climate scientists, well, they have to ask what they would have to gain from engaging with the denialosphere, particularly given the level of abuse they would be subjected to for doing their jobs.

    Now as to your characterization of Zorita08, your approach simply isn’t viable. Zorita’s goal is very simple: how often would a broad class of autocorrelated noise models reproduce a prominent aspect of the current warming trend? Answer: not bloody often.
    If we were to find a similar set of years, all it would show is that that period also was not a product of noise. Since we know that 1) greenhouse forcing was operative in the 30s and 40s; 2) insolation had increased slightly; and 3) volcanic forcing was in a lull, that would not pose much of a problem. Again, however, we don’t yet have any evidence at all regarding how out of the ordinary the 30s and 40s were.
    So as it stands right now, you and John are arguing a hypothetical situation as if it were a reality AND exaggerating its significance even if it were a reality.

  • John Finn // January 11, 2009 at 5:21 pm

    The explanation for the warming in the 30s and 40s has been discussed previously here.

    Remind me. What was the explanation for it?

  • Ray Ladbury // January 11, 2009 at 6:09 pm

    John Finn, Memory goin’ on ya, huh? OK, a combination of decreased volcanic activity, slightly increased insolation and greenhouse forcing.

  • san quintin // January 11, 2009 at 8:26 pm

    Hi All.
    Has anyone here got any view on the news that Mauna Loa has recorded a low increase in CO2 for 2008? Some denialist sites are talking about this.
    I think the global mean is high.

  • Kipp Alpert // January 11, 2009 at 8:49 pm

    Can anyone answer my question, please? Kipp

  • Ray Ladbury // January 11, 2009 at 8:50 pm

    san quintin says “Has anyone here got any view on the news that Mauna Loa has recorded a low increase in Co2 for 2008?”

    We-ell! It was the start of a global recession and a La Nina year. And it is a SINGLE YEAR.

    He adds: “Some denialist sites are talking about this.”

    I’m shocked! Shocked!

  • P. Lewis // January 11, 2009 at 9:00 pm

    First, ESRL state, in red, that this last year’s data for Mauna Loa are preliminary. They may change, though usually not by much.

    Mauna Loa CO2
    1999 0.94
    2000 1.74
    2001 1.59
    2002 2.56
    2003 2.25
    2004 1.62
    2005 2.53
    2006 1.72
    2007 2.14
    2008 0.24

    This 2008 preliminary value is low (why? don’t know). The closest in magnitude is 1992, when there was a 0.43 reading.

    Second, ESRL also give the global value (again preliminary, in red!). The provisional 2008 global figure is not out of line with previous years’ values.

    Global CO2
    1999 1.36
    2000 1.23
    2001 1.85
    2002 2.38
    2003 2.21
    2004 1.64
    2005 2.44
    2006 1.76
    2007 2.17
    2008 1.82

  • Sekerob // January 11, 2009 at 9:03 pm

    As of November the annual increase was nearly matching the record annual ppmv hike. Cyclically December normally adds to the value. December 2008 was also warmer than December 2007.

    Update: This is what I just found:

    yyyy mm mean.. interp. trend
    2008 8 384.14 384.14 385.91
    2008 9 383.07 383.07 386.33
    2008 10 382.98 382.98 386.34
    2008 11 384.11 384.11 386.19
    2008 12 384.11 384.11 385.03

    Looks like the same mean as last month, though with a lower trend figure.
    Last month or the month before, they corrected the values after lab analysis.

    My first guess: a massive reduction in fossil-fuel burning due to the acute economic downturn. Quite a twist. Certainly, passing through several airports in the past few weeks, it was eerily quiet there.

  • tamino // January 11, 2009 at 9:04 pm

    The increase in CO2 value from MLO is not that low for 2008; the annual average is up 1.7 ppm over 2007.

    The value for December 2008 is less than expected, so that Dec. 2008 is up relative to Dec. 2007 by only 0.22 ppm. But if you take the data since 2000, remove the annual cycle and a linear trend, then study the residuals, the Dec. 2008 value is way out of line compared to all the others — about 5 standard deviations. I strongly suspect it’s simply an error.

    I guess making a fuss over a single month’s data which runs counter to a well-established long-term trend is par for the course in the denialosphere.
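For anyone who wants to try the check described above at home, here is a rough sketch: fit a trend plus annual cycle by least squares, then ask how many standard deviations the final residual sits from the rest. The numbers below are synthetic stand-ins (not real MLO data), so only the method is meant to be illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly CO2-like series (made-up numbers, NOT real MLO data):
# linear trend + annual cycle + noise, with a suspect final value injected.
n_months = 108                       # "Jan 2000 .. Dec 2008"
t = np.arange(n_months) / 12.0       # time in years
co2 = (369.0 + 1.9 * t
       + 3.0 * np.sin(2 * np.pi * t)
       + rng.normal(0, 0.3, n_months))
co2[-1] -= 2.0                       # the suspect "Dec 2008" value

# Least-squares fit of intercept + trend + annual cycle
X = np.column_stack([np.ones(n_months), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(X, co2, rcond=None)
resid = co2 - X @ coef

# Final month's residual, in units of the other residuals' std. dev.
z_last = resid[-1] / resid[:-1].std(ddof=1)
print(f"final-month residual: {z_last:.1f} sigma")
```

With a genuine data point the final residual should sit within a couple of sigma; an entry error shows up as a residual way out of line with all the others.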

  • fred // January 11, 2009 at 9:08 pm

    san quintin:
    It’s obvious that the ESRL site with the CO2 data has failed to update correctly for Dec 2008*

    Yes, I saw Watts had a post up about it. It looks to me like he knows full well it’s an error but is trying to play the implication for all it’s worth while giving himself a backdoor.

    *The graph on the ESRL site shows a seasonally corrected value for Dec 2008, but there is no monthly mean value for Dec 2008. That alone tells us that something has not updated correctly.

    Additionally, the drop from last month in the seasonally corrected value is way off the scale of believability. There’s no plausible explanation for CO2 dropping like that in a single month. Watts’s suggestion of cool sea surface temperatures causing that kind of drop is beyond ridiculous.

  • Hank Roberts // January 11, 2009 at 9:23 pm

    http://www.esrl.noaa.gov/gmd/ccgg/trends/

    Compare the several years after the USSR fell apart — remember, first their economy fell apart, their currency devalued, they were mired in a hopeless long war in Afghanistan, their government had lost its credibility, their army refused orders to control demonstrators in their own streets. Amazing sequence of events. CO2 growth shows a noticeable dip in the same years.

    And then consider looking into the statistics on, for example, shipping and fuel use generally for the past year and the current year as they become available:

    http://www.investmenttools.com/futures/bdi_baltic_dry_index.htm

    http://finance.yahoo.com/news/Judy-Chu-3rd-Quarter-2008-Gas-bw-13944942.html

  • Hank Roberts // January 11, 2009 at 9:25 pm

    Oh, when you’re looking at the Mauna Loa page, don’t neglect to read about the data set and compare it to the global:
    http://www.esrl.noaa.gov/gmd/webdata/ccgg/trends/co2_trend_gl.png

  • curious // January 11, 2009 at 9:48 pm

    Ref Tamino response to Michel comment 11 Jan 7:06 am:

    “They imply that rather than just fluctuation, the recent warming is due to forcing. And we know which way the forcing is going today — up.”

    Incontrovertible (it seems) in relation to CO2 levels. But then if the relationship is as you describe it, how come temperatures haven’t increased over the past decade? And wouldn’t the data suggest that sensitivity of temperature to CO2 increases is (for some reason) declining?

    [Response: Temperature is the combination of trend and noise. If the trend continues inexorably, that doesn't negate the existence of noise.

    The evidence suggests that CO2 forcing is proportional to the logarithm of concentration. But since the trend in CO2 increase is definitely accelerating, the trend in CO2 forcing is actually rising faster than linearly.

    You've been here for a while but apparently you haven't been paying attention. Or are you just trying to create confusion?]
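To put a number on the log-forcing point: using the common simplified approximation F = 5.35 ln(C/C0) W/m², and approximate annual-mean concentrations (the values below are rounded and purely illustrative), the forcing trend still rises over the later intervals because CO2 growth itself is speeding up:

```python
import math

def co2_forcing(c, c0=280.0):
    """Simplified CO2 radiative forcing in W/m^2: F = 5.35 ln(C/C0)."""
    return 5.35 * math.log(c / c0)

# Approximate annual-mean concentrations (ppm), for illustration only.
years = [1970, 1980, 1990, 2000, 2008]
ppm = [325.7, 338.8, 354.4, 369.6, 385.6]

forcings = [co2_forcing(c) for c in ppm]
for (y1, f1), (y2, f2) in zip(zip(years, forcings),
                              zip(years[1:], forcings[1:])):
    rate = (f2 - f1) / (y2 - y1) * 1000.0
    print(f"{y1}-{y2}: forcing rose {rate:.1f} mW/m^2 per year")
```

Even though the forcing is logarithmic in concentration, the per-year forcing increase is larger in the 2000s than in the 1970s, because the concentration growth rate has roughly doubled over that span.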

  • Ray Ladbury // January 11, 2009 at 10:24 pm

    Kipp, I presume you mean the question about why warming is the trend and not the noise. The thing is–as Ray Pierrehumbert emphasizes–noise goes up and down, up and down, while CO2 pushes things up. Thus the highs are higher, but the lows are not quite as low, either. A consistently upward trend stands out from random or oscillating signals–that’s why it’s the signal.

  • Chris // January 11, 2009 at 10:28 pm

    Hank Roberts: “Compare the several years after the USSR fell apart”

    Don’t forget cooling from Pinatubo June 91 - may at least partially explain 1992 dip?
    1991 1.02
    1992 0.43
    1993 1.35

    ftp://ftp.cmdl.noaa.gov/ccg/co2/trends/co2_gr_mlo.txt

  • Kipp Alpert // January 11, 2009 at 10:29 pm

    Hank Roberts: Could you answer this question, please? Recently, the Hockey Stick graph has been vindicated by scientists using the same methods of measurement for the graph. If global warming is the signal, there was a lot of noise as well: five El Niños out of 7 in fifteen years (1985-2000). Also, unusually warm SSTs were a positive influence in the latter part of the last century. What I don’t understand is: if natural variations are warmer than global warming, why is global warming the signal and not the noise? Is it because, in spite of local phenomena, global warming is a constant positive with more significance? Kipp
    http://data.giss.nasa.gov/gistemp/2007/
    http://www.ucar.edu/news/releases/2005/ammann.shtml
    http://www.nasa.gov/vision/earth/lookingatearth/elnino_split.
    http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/ensostuffse

  • Kipp Alpert // January 11, 2009 at 10:50 pm

    Ray: Thank you, sir.
    Now I’ll try to follow the conversation. Dive! Dive! KIPP

  • Hank Roberts // January 11, 2009 at 11:15 pm

    Kipp, Robert Grumbine started a site in April 2008 that is worth reading from the beginning.

    This topic, if you work your way through it slowly and follow each step, answers the question:

    http://moregrumbinescience.blogspot.com/2009/01/results-on-deciding-trends.html

  • John Finn // January 12, 2009 at 12:36 am

    John Finn, Memory goin’ on ya, huh? OK, a combination of decreased volcanic activity, slightly increased insolation and greenhouse forcing.

    Thanks, Ray - memory is a bit dodgy these days. Right, so we have an increase in GHGs, a lack of volcanoes, and a warmer sun. Any chance we know the respective contributions of each of these forcings (up to 1940-ish)? I assume there must be something somewhere, else we wouldn’t be able to detect the anthropogenic contribution in recent warming.

    As CO2 concentrations were only ~315 ppm in 1958, I guess they were less in the 1930s, so the GHG forcing can’t have been much more than a few tenths of a W/m2, so the contribution from the ‘other’ factors must have been significant.

    It occurs to me that, since the sun was more active in the second half of the 20th century than in the pre-1945 period and we haven’t had a major volcano for ~18 years, those particular factors may have contributed (or may still be contributing) to recent warming.

    What do you think?

  • Kipp Alpert // January 12, 2009 at 12:49 am

    Hank Roberts:Thanks,KIPP

  • John Finn // January 12, 2009 at 1:01 am

    Chris // January 11, 2009 at 10:28 pm

    Don’t forget cooling from Pinatubo June 91 - may at least partially explain

    That would be a factor. Mauna Loa also seems to be more sensitive to ENSO which, given its location, makes sense. Quickly eyeballing the data, it does look as though low growth tends to occur more often in (or following) La Niña years. I do think the Dec value is an error, though, so I’d better wait a while to see the final figure.

    Also - this strikes me as the sort of thing, which if true, would be widely known, so I could be way off the mark.

  • P. Lewis // January 12, 2009 at 1:08 am

    OMG! John Finn appears to have rewritten the scientific evidence:

    It occurs to me , that since the sun was more active in the second half of the 20th century than it was in the pre-1945 period

    Reference please, cos I see lots of this type of evidence on solar output in the C20.

    [Response: There's evidence that solar irradiance was greater in the 2nd half of the 20th century than in the first half, mainly based on the hypothesis that stronger sunspot cycles correspond to greater solar irradiance (but not everybody agrees). However, there appears to be no trend since about 1950, and the "solar increase" hypothesis doesn't really match the observed temperature behavior post-1950.]

  • Richard Steckis // January 12, 2009 at 3:57 am

    Tamino:

    “The evidence suggests that CO2 forcing is proportional to the logarithm of concentration. But since the trend in CO2 increase is definitely accelerating, the trend in CO2 forcing is actually rising faster than linearly.”

    I think the following graph of the mean annual growth rate of co2 at Mauna Loa does not support your premise. Ignore the 2008 value as it is not confirmed yet.

    http://penoflight.com/climatebuzz/wp-content/uploads/2009/01/CO2NewGraph1.jpg

    [Response: You're mistaken. In fact the graph you link to shows an upward trend in the growth rate, which indicates an accelerating trend of CO2 concentration. Yes I've run the numbers, yes CO2 growth has accelerated, and yes it's statistically significant.]

  • Phil. // January 12, 2009 at 4:11 am

    fred // January 11, 2009 at 9:08 pm

    san quintin:
    It’s obvious that the ESRL site with the CO2 data has failed to update correctly for Dec 2008*
    …….

    *The graph on the ESRL site shows a seasonally corrected value for Dec 2008, but there is no monthly mean value for Dec 2008. That alone tells us that something has not updated correctly.

    Additionally, the drop from last month in the seasonally corrected value is way off the scale of believability. There’s no plausible explanation for CO2 dropping like that in a single month. Watts’s suggestion of cool sea surface temperatures causing that kind of drop is beyond ridiculous.

    Also, this time of year is when d[CO2]/dt and d2[CO2]/dt2 are at their maximum, and so the effect of any error or fluctuation would be large.

  • Richard Steckis // January 12, 2009 at 4:17 am

    I disagree. Is not acceleration the square of the velocity (or in this case the increase in CO2 per year)? I would agree that the graph shows an increase in the rate, but not an acceleration of the rate.

    [Response: I didn't say that the rate of increase was accelerating. I said that the trend is accelerating. Perhaps I was unclear; I didn't mean to imply that the trend rate was accelerating, only that the CO2 concentration was accelerating. And it is, fast enough that the CO2 climate forcing is accelerating as well.

    Acceleration has nothing to do with the square of the velocity. It's the rate (time derivative) of change of velocity. If the rate of change of velocity is positive, we generally say it's accelerating; if the rate of change of velocity is negative we generally say it's decelerating.]

  • Richard Steckis // January 12, 2009 at 4:18 am

    Could you please provide me with the analytical procedure you used (particularly test for significance) so that I can test for myself (part of my learning process).

  • Hank Roberts // January 12, 2009 at 5:41 am

    It looks almost like a blog, lacking only an apostrophe: “… from a skeptics viewpoint …”

  • Richard Steckis // January 12, 2009 at 5:50 am

    Ok. I shouldn’t have said the square of the velocity.

  • Gus // January 12, 2009 at 10:09 am

    Please excuse me if I have misunderstood something as I have not read the article in question, but the approach does not appear to me to be a hypothesis test at all unless they specified the starting point before the temperatures were observed.

    A similar problem comes up frequently in analyses of disease clusters. People living in an area note what they believe to be an unusually high number of disease events and then calculate the likelihood of that frequency of events, given the usual background rate. Using this approach, natural geographical clustering on a larger scale can be mis-identified as unusual within the locality under consideration. Similarly, this proposed analysis could simply be the same process of people drawing circles around unusual events (in time rather than space).

    An informative hypothesis test would have been to identify the likelihood of unusually high rates of warm weather over the next ten years (2009-2018), for example.

    Please note, I am not disagreeing with the body of evidence in relation to AGW. Merely drawing attention to what appears to be a possible flaw in the example under consideration.
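Gus's point about post-hoc selection can be illustrated with a quick Monte Carlo: scanning a noise series for its warmest decade makes "unusual" decades look far more common than testing one pre-specified decade does. Everything below is synthetic white noise, and the threshold is an arbitrary choice for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n_years, window, trials = 100, 10, 2000
threshold = 0.6     # arbitrary "unusually warm decade" threshold (sd units)

pre_hits = scan_hits = 0
for _ in range(trials):
    x = rng.normal(size=n_years)                     # pure white noise
    means = np.convolve(x, np.ones(window) / window, mode="valid")
    if means[-1] > threshold:                        # pre-specified window
        pre_hits += 1
    if means.max() > threshold:                      # best window, post hoc
        scan_hits += 1

print(f"pre-specified decade 'unusual': {pre_hits / trials:.3f}")
print(f"cherry-picked decade 'unusual': {scan_hits / trials:.3f}")
```

The cherry-picked (drawn-circle) rate comes out many times higher than the pre-specified one, which is exactly the disease-cluster effect Gus describes. Note, though, that Z08's test concerns a run of records at the end of the observed series, which is closer to the pre-specified case than to free scanning.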

  • John Finn // January 12, 2009 at 11:53 am

    Tamino, P.Lewis

    Response: There’s evidence that solar irradiance was greater in the 2nd half of the 20th century than in the first half, mainly based on the hypothesis that stronger sunspot cycles correspond to greater solar irradiance (but not everybody agrees).

    I’m not sure I agree (I think the oceans are the key). But if the IPCC cite solar activity based on sunspots as an explanation for early 20th century warming, then they can’t dismiss it as having no effect in the post-1970 period.

    However, there appears to be no trend since about 1950, and the “solar increase” hypothesis doesn’t really match the observed temperature behavior post-1950.

    Hmmm. The strongest cycle (SC19) ever recorded peaked in 1958. The next 2 strongest cycles peaked in ~1980 and ~1990 respectively. To use a previous analogy - just because you’ve turned down the gas flame a bit it doesn’t mean the pot of water will stop warming.

    [Response: But: if the gas flame is constant after 1950, then the temperature won't remain constant until 1975 and then warm rapidly -- not even with a large reservoir of thermal inertia.]

  • John Finn // January 12, 2009 at 12:24 pm

    P. Lewis

    Page 2 in the following document shows the Lean TSI reconstruction since 1610.

    http://www.geo.umass.edu/faculty/bradley/lean1995.pdf

    Note the 3 strongest cycles all occur in the second half of the 20th century. It is this reconstruction on which the IPCC bases its solar contribution to 1915-44 warming. However, as Tamino says, not everyone agrees with Lean. In particular, Leif Svalgaard has suggested that the change in sunspot number (and by implication TSI) over the 400 year period is much smaller than that depicted by the Lean reconstruction.

    This would suggest that the sun’s role in ‘recent’ (last ~400 years) climatic changes is minimal - as a direct influence at least.

    This might, at first, appear to provide more support for the CO2 warming case - but not really. The IPCC would still need to explain the sudden warming trend which began ~1915 plus, of course, their impressive-looking and persuasive (to some) reconstructions of 20th century climate with and without GHGs would be wrong since they would be based on incorrect data and assumptions.

    [Response: And the moon might be made of green cheese.

    The warming from about 1915 to 1945 does not depend critically on an increase in solar output, there's also a significant volcanic lull which has a far more profound impact than apparently you believe. We've been over this before, and I'm tired of repeating myself.]

  • John Finn // January 12, 2009 at 1:42 pm

    The warming from about 1915 to 1945 does not depend critically on an increase in solar output, there’s also a significant volcanic lull which has a far more profound impact than apparently you believe. We’ve been over this before, and I’m tired of repeating myself.]

    Fine. Let’s go with the volcanic lull. However, there was an almighty volcano in 1883 and another pretty big one in 1902. That’s only 12-15 years before the warming kicked in. Pinatubo, the most recent major volcano, was 18 years ago. In terms of volcanic activity, conditions are at least as favorable for warming now - if not more so - than they were then. There has to be something else. Something beginning with ‘O’ perhaps.

    Go on, T, say it to yourself quietly so no-one will hear you. Just say it might have something to do with the O..C..E…A ….

    [Response: The only thing the oceans have to do with it, is to provide the thermal inertia which makes the impact of a volcanic lull take a long time to be fully realized. You've also mischaracterized the volcanic forcing record, which can be seen here

    That the observed volcanic lull is sufficient to explain most of the early 20th-century warming is elucidated here. A similar analysis reveals that volcanic inactivity is not sufficient to explain modern warming. And the volcanic lull is not the whole story of the early 20th century, since there's that possible (I might even say probable) increase in solar output, Dr. Svalgaard notwithstanding, and yes there is an increase in GHG forcing whether you want it to be true or not.

    The ignorance you demonstrate is astounding, but you still pontificate about what you don't know so you've definitely crossed the "dumb" threshold. You've also decided to adopt a sneering attitude in spite of astounding ignorance, which pretty well crosses the "ass" threshold. Which makes you a real -- say it loud so everyone can hear -- D...U...M...B...A...S...S.

    I'm sick of your sneering attitude and profound ignorance. I'm also done with you.]

  • Phil. // January 12, 2009 at 2:51 pm

    John Finn // January 12, 2009 at 12:24 pm

    P. Lewis

    Page 2 in the following document shows the Lean TSI reconstruction since 1610.

    http://www.geo.umass.edu/faculty/bradley/lean1995.pdf

    Note the 3 strongest cycles all occur in the second half of the 20th century. It is this reconstruction on which the IPCC bases its solar contribution to 1915-44 warming. However, as Tamino says, not everyone agrees with Lean. In particular, Leif Svalgaard has suggested that the change in sunspot number (and by implication TSI) over the 400 year period is much smaller than that depicted by the Lean reconstruction.

    As does Judith Lean herself in subsequent publication: Wang, Y.-M., Lean, J.L., Sheeley Jr, N.R., 2005, “Modeling the Sun’s Magnetic Field and Irradiance since 1713”, Astrophys. J., 625, 522–538.

  • Barton Paul Levenson // January 12, 2009 at 3:45 pm

    I regressed GISS temp. anomalies on ln CO2 for 1880-2007 and got 76% of variance accounted for. Ran it 1880-2000 with Lean’s TSI and got… 68% accounted for, and TSI insignificant (t = 1.4). Ran it 1880-2007 with Svalgaard’s TSI and got 77%, but TSI even more insignificant (t = 0.7). I know I didn’t account for the structure of the residuals here, and for all I know anomalies and CO2 are integrated, but it looks to me like TSI had no clear effect at all. What am I missing?
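For anyone wanting to reproduce the kind of regression BPL describes, here is a minimal numpy sketch of OLS with t-statistics. It uses synthetic stand-in data rather than the actual GISS/Lean/Svalgaard series, and it carries the same caveat BPL raises himself: it ignores autocorrelation in the residuals, which inflates the t-statistics:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data (NOT the actual GISS / Lean / Svalgaard series):
# temperature here responds to ln(CO2) but not to "TSI", by construction.
n = 128
lnco2 = np.log(np.linspace(290.0, 383.0, n))
tsi = 1366.0 + rng.normal(0, 0.3, n)           # trendless fake TSI
temp = -0.2 + 2.9 * (lnco2 - lnco2[0]) + rng.normal(0, 0.12, n)

# OLS with intercept; t-statistics from the usual covariance estimate.
X = np.column_stack([np.ones(n), lnco2, tsi])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
resid = temp - X @ beta
dof = n - X.shape[1]
cov = (resid @ resid / dof) * np.linalg.inv(X.T @ X)
t_stats = beta / np.sqrt(np.diag(cov))
ss_tot = ((temp - temp.mean()) ** 2).sum()
r2 = 1.0 - (resid @ resid) / ss_tot

print(f"R^2 = {r2:.2f}, t(ln CO2) = {t_stats[1]:.1f}, t(TSI) = {t_stats[2]:.1f}")
```

With data built this way, the ln CO2 coefficient comes out highly significant while the TSI coefficient does not, mirroring the pattern BPL reports.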

  • Sekerob // January 12, 2009 at 5:38 pm

    For the interlude, MLO’s CO2 values were revised, producing the same annual increase of 1.86 ppmv as 2007 contributed

    2008 12 2008.958 385.54 385.54 386.28 385.57

  • Sekerob // January 12, 2009 at 5:58 pm

    John Finn, v.v. Lean 1995, what happened to Lean 2000 and does he even support that work still as current or has he moved on to more advanced knowledge? Many seem to have since.

  • san quintin // January 12, 2009 at 6:00 pm

    Thanks for all the responses re the Mauna Loa data. I thought that it might be a consequence of La Nina plus economic downturn, but take on board the other responses too.

  • Phil. // January 12, 2009 at 7:04 pm

    Sekerob // January 12, 2009 at 5:58 pm

    John Finn, v.v. Lean 1995, what happened to Lean 2000 and does he even support that work still as current or has he moved on to more advanced knowledge? Many seem to have since.

    ‘She’ has revised her results in the light of later work:
    Wang, Y.-M., Lean, J.L., Sheeley Jr, N.R., 2005, “Modeling the Sun’s Magnetic Field and Irradiance since 1713”, Astrophys. J., 625, 522–538.

  • Sekerob // January 12, 2009 at 7:36 pm

    oops, my Avvocatesse actually prints the male title Avvocato on her business cards/stationery. Yes, ‘S’he ;>)

    Thanks for the update info. TSI’s been going down since the ’50s as I read it most all places not suffering from denialitis, so I’m thoroughly discounting any strong temperature hike as sourced from solar flux.

  • Hank Roberts // January 12, 2009 at 7:38 pm

    Check the ‘Cited By’ list
    for Wang, Lean, Sheeley 2005:

    http://www.journals.uchicago.edu/doi/abs/10.1086/429689

  • Sekerob // January 12, 2009 at 8:20 pm

    Thorough and thanks.

  • Bob North // January 12, 2009 at 11:23 pm

    Tamino - I think it might be appropriate to clarify that by “natural variation in the climate system” what is meant is “unforced” natural variation, including unexplained interannual variability (often referred to here as “noise” or “weather”) as well as variations due to ENSO events, volcanic eruptions (?), etc. As a statistical analysis, Z08 shows that it is highly unlikely that the recent string of record warm years is due to unforced variation, and that there is almost certainly some forcing that must account for it. Since it is really a pure statistical analysis, it doesn’t speak to what the forcing is.

    As for colloquialisms, I prefer (said in a southern drawl) “Your chances are slim or next to none, and Slim just left town.”

  • Lee Kington // January 13, 2009 at 1:22 am

    Tamino:
    [Response: You're mistaken. In fact the graph you link to shows an upward trend in the growth rate, which indicates an accelerating trend of CO2 concentration. Yes I've run the numbers, yes CO2 growth has accelerated, and yes it's statistically significant.]

    I disagree. The graph Richard Steckis linked to was off of my site ( I have no objections to him doing so). Your response to him gave cause for me to have a look at the data. In order to illustrate a response in a meaningful manner I have generated a thread for that purpose on My Climate Buzz.

    http://penoflight.com/climatebuzz/?p=182

    [Response: You are very, very seriously confused. Go read this post.

    What you purport to show on your post is that the slope of the trend line fit to the earlier section of the data is not as high as it is in the later part of the data. That would mean that the data you graph are not accelerating.

    But that's not the issue here, because the data you graph are claimed to be CO2 growth rate. I haven't claimed acceleration of CO2 growth rate -- I've demonstrated increase of CO2 growth rate, which is acceleration of CO2 concentration.

    Quite apart from the fact that you're trying to contradict a claim I haven't made, you don't provide any statistical justification for any of your claims. So it's not possible for us to evaluate whether any of the differences of slopes in trend lines are at all meaningful, or just random fluctuations. The graph where you claim the slope (in growth rate) is flat since 1998 -- what's the probable error in that estimated slope, plus or minus a billion?

    This is exacerbated by the fact that your graph is just too smooth -- it looks like some kind of smoothed approximation to CO2 growth rate, but I don't see any kind of clue about what method was used to generate the data, or what's the source of that graph. Smoothing exaggerates autocorrelation, increasing the chance of false belief in statistical significance if it's not compensated for.

    Finally, the results are about the increase in the CO2 growth rate (which is acceleration of CO2 concentration, not acceleration of growth rate) over a long enough time frame to give meaningful results. The error ranges (based on proper compensation for autocorrelation) demonstrate the statistical significance beyond doubt. I get the impression that you make a habit of drawing conclusions from too short a time span while disputing conclusions based on plenty of data.]

  • Lee Kington // January 13, 2009 at 3:49 am

    Tamino:

    The graph where you claim the slope (in growth rate) is flat since 1998 — what’s the probable error in that estimated slope, plus or minus a billion?
    >>>>>>>>>>>>>>>>>>>>
    Only if the Mauna Loa data is off by plus or minus a billion.

    Smoothing only alters the graphical representation; it does NOT alter the data points.

    Your OWN graph
    http://tamino.files.wordpress.com/2009/01/mloannrate.jpg?w=489&h=361

    Addresses the growth rate from 1959 to 2008 as a whole but does NOT look at the current trend. If the growth rate is not increasing, an acceleration of CO2 concentration cannot exist in the present context.

    The resolution and context of your computations and graphics are creating a false picture. Essentially you are doing the same thing that I mentioned another did with temperature: since it was warming in the past… it has to be warming now. UNTRUE.

    You cannot have warming when the temperatures continually drop.

    You cannot have acceleration of atmospheric CO2 concentrations when the annual addition is decreasing. It is impossible to fill the bucket quicker by turning the faucet down.

    Something has changed in the way the atmosphere is reacting to the total (natural and anthropogenic) amount of CO2 emissions in recent years. To me that is identifiable; it is significant. The only way to begin to understand what is occurring is to look at the relevant time period. Ignoring it makes no more sense than Mann ignoring the MWP and covering it up.

    Nations are spending massive amounts of money, altering people’s lives, and going broke. I think it is important to look at the little details that so many simply want to gloss over.

    Again, there are two very specific anomalies in the data. The way the atmosphere responds to CO2 following 1988 is different. The way it responds following 1998 is different yet again. The amount of CO2 being added to the atmosphere annually is on a decreasing trend. In the meantime, our level of emissions has increased. Let us not pull a Mann and pretend that is not real.

    [Response: It's obvious that the growth rate of CO2 shows both trend and fluctuations. But it looks like you're another one of those who doesn't believe in the existence of noise, or doesn't understand its effect on trend analysis, or both. That must be why you have this mistaken notion that you can fit a line to the "current" data and declare what the "current" trend is.

    The noise in the data makes any trend estimate uncertain. Everybody knows that the trend in growth rate since 1998 isn't plus or minus a billion. But that was your chance to say, "Oh! It's -0.015 ppm/yr/yr plus or minus 0.1 ppm/yr/yr." The long-term trend rate +0.025 is well within that range.

    It's not just possible for the slope of the "current" trend line to differ from the actual trend rate, it's impossible for it not to. Unless you don't believe in the existence of noise. Looking at the details is fine, but you're not qualified for the job.]
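The point about error bars on a short "current" trend can be made concrete: fit the same trend model to a full 50-year record and to its last decade, and compare the uncertainties. The series below is synthetic (true trend +0.025 ppm/yr/yr with an invented noise level), so the specific numbers are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(7)

def trend_and_stderr(t, y):
    """OLS slope and its standard error (white-noise residual assumption)."""
    X = np.column_stack([np.ones_like(t), t])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (len(t) - 2)
    se = np.sqrt(s2 / ((t - t.mean()) ** 2).sum())
    return beta[1], se

# Synthetic growth-rate series: true trend +0.025 ppm/yr/yr, invented noise.
years = np.arange(1959, 2009, dtype=float)
growth = 0.8 + 0.025 * (years - 1959) + rng.normal(0, 0.5, years.size)

slope_all, se_all = trend_and_stderr(years, growth)
slope_10, se_10 = trend_and_stderr(years[-11:], growth[-11:])

print(f"1959-2008 trend: {slope_all:+.3f} +/- {2 * se_all:.3f} ppm/yr/yr")
print(f"1998-2008 trend: {slope_10:+.3f} +/- {2 * se_10:.3f} ppm/yr/yr")
```

The short-window error bar comes out several times wider than the long-term trend itself, so a "flat since 1998" slope is entirely consistent with the long-term rate. (Real CO2 growth-rate noise is autocorrelated, which would widen these error bars further.)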

  • Richard Steckis // January 13, 2009 at 5:10 am

    Hank:

    “Check the ‘Cited By’ list
    for Wang, Lean, Sheeley 2005:”

    Yeah. So?

  • Sekerob // January 13, 2009 at 5:54 am

    Richard Steckis, don’t worry you missed it.

    Fun part was, Leif Svalgaard posted a link to the full Lean 2005 paper when someone asked for more current work on later data and I gave the 2008 presentation link: http://www.leif.org/research/LeanRindCauses.pdf

    Enjoy updating your knowledge.

  • Hank Roberts // January 13, 2009 at 7:20 am

    So, compare the “cited by” list for the 1995 paper.
    What changed between the 1995 and the 2005 paper?

  • Philippe Chantreau // January 13, 2009 at 9:05 am

    Lee Kington: “Nations are spending massive amounts of money, altering peoples lives, and going broke.”

    The only one I can think of right now that is really going broke is Iceland. Of course, their going broke has nothing, zero, zilch to do with any action against AGW.

    So please elaborate: Which nation is spending how much on what? Who is going broke, except Iceland? How are people’s lives being altered?

  • Dano // January 13, 2009 at 4:06 pm

    Nations are spending massive amounts of money, altering peoples lives, and going broke. I think it is important to look at the little details that so many simply want to gloss over.

    Presumably this means on environmental actions.

    Evidence plz.

    Best,

    D

  • Hank Roberts // January 13, 2009 at 4:18 pm

    > no end
    Tag team is part of the alt.syntax.tactical routine; they alternate, they change names (watch those IP numbers), they change roles.

    Remember all the kids who just hated teachers and would do anything to screw up a good teacher just for the fun of ruining a lesson?

    They’re on the Internet. They act like dogs, but you can tell if you watch them carefully.

    More show up the more you allow them to see you’re human and vulnerable to the tactic.

    Einstein’s list of the most common things in the universe - hydrogen, check; stupidity, check - left out a third: maliciousness.

    You attract the latter two to the extent you’re changing people’s minds as a good teacher.

    Killfile is a really, really, really good tool.

    killfile for Greasemonkey
    Jul 3, 2008 … Provides a killfile for certain blogs. Covers livejournal, haloscan comments, most typepad blogs, most blogspot blogs, scienceblogs.com, …
    userscripts.org/scripts/show/4107

  • Ray Ladbury // January 13, 2009 at 6:03 pm

    Lee Kington says “I think it is important to look at the little details that so many simply want to gloss over. ”

    OK, but is it too much to ask that you treat them properly?

  • Dave A // January 13, 2009 at 11:10 pm

    OK,

    So there was a rise in temperature during the early part of the 20thC which roughly matches that during the latter part of the 20thC.

    If Zorita et al had been able to do their study in, say, 1948 could they not have perhaps reached the same conclusion about temperatures then as they now have in 2008?

    And if so does this latest effort tell us very much at all, except the temperature has been rising, slowly, over the last 150 or so years?

  • David B. Benson // January 13, 2009 at 11:37 pm

    Dave A // January 13, 2009 at 11:10 pm — The temperature increase is proceeding very rapidly indeed.

  • Ray Ladbury // January 14, 2009 at 1:26 am

    Dave A., Did you even bother to read the preceding comments, where John Finn raised exactly the same point and was politely bitch-slapped by Tamino? First, as Tamino pointed out, 1880-1948 is only 68 years, and 13 out of 68 is a whole lot less impressive than 13 out of 128. Second, we also know greenhouse warming was at least partly behind the warming in the 30s and 40s (along with a lull in volcanic activity, etc.).

    Maybe go back and read it through again and see if you pick up more this time, huh?
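Ray's counting point has a simple closed form under the deliberately naive assumption that the years are i.i.d. noise with no autocorrelation: the ranks are then a uniform random permutation, so the positions of the m hottest years form a uniform random m-subset, and the chance they all land in the last k years is C(k, m) / C(n, m). This ignores autocorrelation, which is exactly what Z08 handle properly, so treat it only as the baseline behind the 13-of-68 vs 13-of-128 comparison:

```python
from math import comb

def p_top_m_in_last_k(n, k, m=13):
    """P(all m hottest of n i.i.d. years fall in the last k years).

    Under exchangeability the positions of the top m values are a
    uniform random m-subset of the n years, giving C(k, m) / C(n, m).
    """
    return comb(k, m) / comb(n, m)

# 13 hottest years all falling in the final 17 (i.e. since 1990):
print(f"128-year record: {p_top_m_in_last_k(128, 17):.2e}")
print(f" 68-year record: {p_top_m_in_last_k(68, 17):.2e}")
```

The 68-year baseline probability is thousands of times larger than the 128-year one, which is the quantitative content of "a whole lot less impressive". Autocorrelated noise raises both numbers substantially, which is why Z08's careful treatment matters.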

  • Barton Paul Levenson // January 14, 2009 at 12:33 pm

    Has anybody extended the dust veil index or some other index of volcanic activity to 2007 or 2008? The data I have ends in 2000, and I’d really like to be able to compare 1880-2007 regressions with other 1880-2007 regressions.

    [Response: I too have had difficulty finding up-to-date data on that. I've sometimes just assumed negligible impact from volcanoes since 2000 since we haven't had any major eruptions, but it'd be nice to work with actual data.]

  • Sekerob // January 14, 2009 at 3:15 pm

    Soon to add the ABC index. Maybe some annual opaqueness index, but ideally, get Triana up post haste to know what our undisputed energy balance is. Is that anywhere on the Obama science calendar?

  • Gavin's Pussycat // January 14, 2009 at 4:47 pm

    Tamino:

    [Response: It's important to emphasize we're working with different hypotheses. I've estimated the behavior of the noise in global temperature after removing the estimated signal. Zorita et al. didn't remove the estimated signal, because they're working with the hypothesis that there is no signal at all -- it's nothing but noise. That's why they estimate vastly more autocorrelation than I've estimated. The main result of this work is to reject that hypothesis.]

    I have a slightly different take on this. I read
    ( http://www.climate.unibe.ch/~stocker/papers/zorita08grl.pdf ):

    “The last decades of the 20th century are probably too strongly affected. Temperature records in the decade 1940-1950 have been found to be distorted by changes in the measuring devices of sea-surface temperatures [Thompson et al., 2008] and temperature data in the late 19th century are burdened by higher uncertainties [Brohan et al., 2006].”

    I can’t get at their supplementary material, but this appears to state that they avoid, for building their covariance model, using the same years that they are testing.

    Is it even possible in principle to distinguish mismodelling from noise if you don’t have out-of-sample info on the covariances? Serious question.

    [Response: They remove all the data which might skew their estimates, including the ending part (which might be anthropogenic), the WW2 era (inconsistent data collection), and the earliest part (reduced precision). But they treat the data that remains as *all noise*, no signal, whereas I consider it to be a mixture of signal and noise.

    By doing so, they estimate vastly more autocorrelation than one gets treating it as signal-plus-noise. But that's the point; without that much autocorrelation there's no chance at all of recent warmth; they've shown that even *with* that much, there's implausibly little chance.]
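    The distinction Tamino describes — estimating autocorrelation on the raw data (all-noise hypothesis) versus on residuals after removing a fitted trend (signal-plus-noise) — is easy to demonstrate on synthetic data. A sketch in Python with an illustrative trend and AR(1) noise; none of these numbers come from Z08:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def lag1_autocorr(x):
        """Sample lag-1 autocorrelation of a series."""
        x = x - x.mean()
        return (x[:-1] * x[1:]).sum() / (x * x).sum()

    # Synthetic series: linear trend plus AR(1) noise with modest
    # autocorrelation (phi chosen purely for illustration).
    n, phi = 128, 0.3
    noise = np.empty(n)
    noise[0] = rng.normal()
    for i in range(1, n):
        noise[i] = phi * noise[i - 1] + rng.normal()
    t = np.arange(n)
    series = 0.05 * t + noise  # over a century the trend dominates

    # All-noise hypothesis: estimate autocorrelation on the raw data.
    r_raw = lag1_autocorr(series)

    # Signal-plus-noise: remove the fitted linear trend first.
    trend = np.polyval(np.polyfit(t, series, 1), t)
    r_detrended = lag1_autocorr(series - trend)

    print(r_raw, r_detrended)  # the raw estimate is much larger
    ```

    Treating the trend as noise inflates the estimated autocorrelation toward 1, which is exactly why Z08’s noise model is so generous to the natural-variability hypothesis.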

  • Ray Ladbury // January 14, 2009 at 4:48 pm

    Sekerob, See:
    http://www.desmogblog.com/dscovr-killed-dick-cheney-nasa-insider-climate-change-satellite

    A parting shot by the ancien regime.

  • Timothy Chase // January 14, 2009 at 6:48 pm

    Other governments have offered to put DSCOVR into space for us. Free of charge. Desmog covered that too in an earlier article. Obviously somebody really didn’t want that thing leaving the atmosphere. And now we know who. I just have to wonder what would have happened had it made it aboard the shuttle.

    -6 d.

  • Timothy Chase // January 14, 2009 at 6:55 pm

    Barton Paul Levenson (January 14, 2009 at 12:33 pm) wrote:

    Has anybody extended the dust veil index or some other index of volcanic activity to 2007 or 2008? The data I have ends in 2000, and I’d really like to be able to compare 1880-2007 regressions with other 1880-2007 regressions.

    We really don’t have the data we would like…

    Hansen sent out a link earlier today to this:

    Volcanic aerosols: Colorful sunsets the past several months suggest a non-negligible stratospheric aerosol amount at northern latitudes. Unfortunately, as noted in the 2008 Bjerknes Lecture [ref. 9], the instrument capable of precise measurements of aerosol optical depth (SAGE, the Stratospheric Aerosol and Gas Experiment) is sitting on a shelf at Langley Research Center. Stratospheric aerosol amounts are estimated from crude measurements to be moderate. The aerosols from an Aleutian volcano, which is thought to be the primary source, are at relatively low altitude and high latitudes, where they should be mostly flushed out this winter. Their effect in the next two years should be negligible.

    GISS Surface Temperature Analysis
    Global Temperature Trends: 2008 Annual Summation
    Originally posted Dec. 16, 2008
    http://data.giss.nasa.gov/gistemp/2008/

  • Gavin's Pussycat // January 14, 2009 at 7:53 pm

    OK thanks Tamino. I see your point: out of sample but not “clean”.

    Isn’t the essential difference between your approach and theirs the choice of the two alternatives that one is trying to compare the relative likelihoods of?

    Your approach: it’s the likelihood of fitting an anthropogenic warming trend as against that of attributing the data to natural variability; the former, producing a drastic reduction in residual variance at the cost of just one more free parameter, is so much more likely. That’s like the idea behind AIC: replacing noise by signal, by better modelling, drives up the likelihood.

    Their approach: comparing the likelihood of natural variability producing something looking like the data we see against the much greater likelihood of it producing something not even remotely like the data we see. Testing the natural-variability explanation on its own merits, as it were, not against the AGW alternative.

    Clumsy language, but isn’t this the idea? And there is a place for both.

    [Response: I'm not *precisely* sure what you mean, but if I read you right, then I think you've got the idea.]
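    The AIC idea in the comment above — one extra free parameter buying a drastic reduction in residual variance — can be sketched on a synthetic record (all numbers made up for illustration; this is not the Z08 procedure):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic "temperature" record: a warming trend plus white noise,
    # with slope and noise level chosen purely for illustration.
    n = 128
    t = np.arange(n, dtype=float)
    y = 0.006 * t + rng.normal(scale=0.15, size=n)

    def aic_gaussian(resid, k):
        """AIC for a least-squares fit with k parameters and Gaussian
        errors: n*log(RSS/n) + 2k, with additive constants dropped."""
        m = resid.size
        rss = (resid ** 2).sum()
        return m * np.log(rss / m) + 2 * k

    # Model 0: constant mean (pure natural variability, no trend).
    aic0 = aic_gaussian(y - y.mean(), 1)

    # Model 1: linear trend -- one extra free parameter.
    coef = np.polyfit(t, y, 1)
    aic1 = aic_gaussian(y - np.polyval(coef, t), 2)

    print(aic0, aic1)  # the trend model wins despite the extra parameter
    ```

    The trend model’s far lower AIC shows the residual-variance reduction overwhelming the 2-per-parameter penalty, which is the “better modelling drives up the likelihood” point.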
