Open Mind

How Long?

December 15, 2009 · 180 Comments

Time and time again, denialists try to suggest that the last 10 years, or 9 years, or 8 years, or 7 years, or 6 years, or three and a half days of temperature data establish that the earth is cooling, in contradiction to mainstream climate science. Time and time again, they’re refuted — shown to be either utterly foolish or downright dishonest or both. Logic seems to have no effect on them.

The simple fact is that short time spans don’t give enough data to establish what the trend is; they just exhibit the behavior of the noise. Of course that raises an interesting question: how long a time span do we need to establish a trend in global temperature data? It’s sometimes stated that the required time is 30 years, because that’s the time span used most often to distinguish climate from weather. Although that’s a useful guide, it’s not strictly correct. The time required to establish a trend in data depends on many things, including how big the trend is (the size of the signal) and how big, and what type, the noise is. Let’s look at GISS data for global temperature and test how much data we need to establish the most recent trend.


To do so, we’ll need to know how the noise behaves. We can get an idea of that by modeling the data since 1975 as a linear trend plus noise. We can estimate the trend by fitting a straight line to the data using linear regression:
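
For readers who want to follow along at home, here is a minimal sketch of that step in Python (the file name and layout are hypothetical; substitute however you store the GISS monthly land+ocean anomalies):

import numpy as np

# Hypothetical input: two columns, decimal year and anomaly in deg C.
data = np.loadtxt("giss_monthly.csv", delimiter=",")
t, y = data[:, 0], data[:, 1]
keep = t >= 1975.0                       # use data from 1975 onward
t, y = t[keep], y[keep]

slope, intercept = np.polyfit(t, y, 1)   # ordinary least-squares straight line
print(f"trend since 1975: {slope:.4f} deg C/yr ({100*slope:.2f} deg C/century)")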

Then we can estimate the noise as the residuals from this model:

When we estimate the trend using linear regression, and particularly when we estimate the probable error in that trend estimate, we can’t just model the noise as “white noise” because it shows rather strong autocorrelation:

The autocorrelation means that the simple “white-noise” error estimate will be far too low — we’ll have way too much confidence in our trend estimate, so we’re likely to conclude we’ve established a trend when we really haven’t. It’s a common mistake, I’ve made it myself.
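
Continuing the sketch above (and re-using t, y, slope, and intercept from it), here is one way to compute the residuals, their sample autocorrelation, and the naive white-noise error of the slope, which is the quantity that ends up being far too small:

import numpy as np

resid = y - (slope * t + intercept)      # noise estimate: residuals from the linear fit

def acf(x, max_lag=24):
    """Sample autocorrelation of x at lags 1..max_lag."""
    x = x - x.mean()
    denom = np.sum(x * x)
    return np.array([np.sum(x[:-k] * x[k:]) / denom for k in range(1, max_lag + 1)])

rho = acf(resid)
print("lag-1 autocorrelation of the residuals:", round(rho[0], 3))

# White-noise standard error of the slope; an underestimate when the noise is autocorrelated.
n = len(t)
s2 = np.sum(resid ** 2) / (n - 2)
se_white = np.sqrt(s2 / np.sum((t - t.mean()) ** 2))
print(f"white-noise 2-sigma uncertainty: +/- {2 * se_white:.4f} deg C/yr")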

The most common method to compensate for autocorrelation in geophysics is to model the noise as an AR(1) process. But that model is also insufficient, as we can see by comparing the autocorrelation estimated from the data to that from the AR(1)-model:

A much better model is to treat the noise as an ARMA(1,1) process:

This model is a realistic approximation, so we can use it to get realistic estimates of the uncertainty of trend rates. They aren’t perfect; in fact there’s additional uncertainty because the parameters of the ARMA(1,1) model are only estimates — so the uncertainties we estimate this way are still too low because of this unaccounted-for factor. But at least such estimates can be viewed as realistic, and they’re certainly more realistic than the AR(1) or white-noise estimates.
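
Here is a sketch of that step, assuming a reasonably recent statsmodels is available. Inflating the white-noise variance of the slope by 1 + 2*sum(rho_k), where rho_k are the autocorrelations implied by the fitted ARMA(1,1) model, is one common approximation for the correction; it is not necessarily the exact procedure behind the figures shown here:

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

fit = ARIMA(resid, order=(1, 0, 1)).fit()       # ARMA(1,1) is ARIMA with d = 0
phi, theta = fit.arparams[0], fit.maparams[0]

# Theoretical ARMA(1,1) autocorrelations: rho_1, then geometric decay at rate phi.
rho1 = (1 + phi * theta) * (phi + theta) / (1 + 2 * phi * theta + theta ** 2)
rho_sum = rho1 / (1 - phi)                      # sum of rho_k over k >= 1 (requires |phi| < 1)
inflation = 1 + 2 * rho_sum                     # variance inflation relative to white noise

se_arma = se_white * np.sqrt(inflation)
print(f"ARMA(1,1)-corrected 2-sigma uncertainty: +/- {2 * se_arma:.4f} deg C/yr")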

With a reasonable error model, we can choose any starting time, estimate the trend rate from then to the present, and estimate a confidence interval for the trend rate. Here’s the result for starting times from January 1975 to November 2008 (only 1 year ago), with the trend estimate in black and the confidence interval in red:

Clearly as we start later and later, the confidence interval gets wider and wider. By the time we get to using only 1 year of data, the confidence interval is so wide that we can’t be sure the trend isn’t as high as +0.55 deg.C/yr, a whopping 55 deg.C per century! That’s dangerous warming!!! Of course, we can’t be sure the trend isn’t as low as -0.2 deg.C/yr, a dangerous cooling at 20 deg.C per century! The conclusion is obvious: using such a short time span only tells us about the noise, not the trend. Only a fool would draw conclusions on that basis.
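
A sketch of the computation behind that figure, re-using the arrays and the inflation factor from the snippets above: repeat the fit for each starting year and attach an approximate ARMA(1,1)-corrected 2-sigma interval. Re-using a single inflation factor, estimated from the full 1975-to-present residuals, for every start date is a simplification made here for brevity:

import numpy as np

def trend_with_ci(t, y, t_start, inflation):
    """Least-squares trend from t_start to the present, with approximate 2-sigma half-width."""
    m = t >= t_start
    tt, yy = t[m], y[m]
    b, a = np.polyfit(tt, yy, 1)
    r = yy - (b * tt + a)
    s2 = np.sum(r ** 2) / (len(tt) - 2)
    se = np.sqrt(s2 / np.sum((tt - tt.mean()) ** 2)) * np.sqrt(inflation)
    return b, 2 * se

for start in range(1975, 2009):
    b, half = trend_with_ci(t, y, float(start), inflation)
    note = "  <- lower limit above zero" if b - half > 0 else ""
    print(f"{start}: {b:+.4f} +/- {half:.4f} deg C/yr{note}")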

We get a better picture of the situation by expanding the y-axis:

When the lower confidence limit (the lower red line) is above zero, we have some confidence that the trend rate is definitely positive. If the upper confidence limit (the upper red line) were below zero, we’d have some confidence that the trend rate was negative — but that hasn’t happened. The last time the lower confidence limit was above zero was 1996, as we can see more clearly if we expand the x-axis:

Therefore we need at least 14 years of GISS data (from 1996 to the present) to draw a confident conclusion about the most recent trend. In fact, since we have additional unaccounted-for uncertainty (such as the parameter estimates for our ARMA(1,1) model), we actually need a bit more. Let’s say that less than 15 years of data allows no confident conclusion about whether the trend in GISS data is warming or cooling.

That does not mean that there’s been no warming trend in those 15 years — or in the last 10, or 9, or 8, or 7, or 6 years, or three and a half days. It only means that the trend cannot be established with statistical significance. Of course, it’s another common denialist theme that “there’s been no warming.” This too is a fool’s argument; any such claims are only statements about the noise, not about the trend. It’s the trend that matters, and is cause for great concern, and there’s no evidence at all that the trend has reversed, or even slowed.

Categories: Global Warming

180 responses so far ↓

  • Tom Dayton // December 15, 2009 at 2:59 pm | Reply

    This post is great! The level of explanation is just right for a range of people’s statistics backgrounds. The progression of the description is wonderful – it won’t lose readers mid-post.

  • Jim Bouldin // December 15, 2009 at 3:03 pm | Reply

    A couple other points are that (1) the stability of the estimated trend, from your 6th figure, is very high for any start date before the mid-90s, and (2) even after 1996, the probability of a positive trend is still greater than the probability of a negative trend, for perhaps 6 to 7 starting date years. And then there’s the larger point that any trend over any time period needs to be analyzed wrt known physical processes if one is going to make any statements about GHG forcing “ending” (which is what deniers often are trying to say).

  • Neven // December 15, 2009 at 3:14 pm | Reply

    Very nice, Tamino. This should convince a few people that the ‘It’s Cooling since 1998’ myth is a sign that the person who utters it either doesn’t know what he’s talking about or bluntly lies to manipulate the uninformed.

    Hopefully there will be some more warming in the coming years so the propaganda swings back like a boomerang in the face of the denialists, just as Pat Michaels warned at the recent Disinformation Conference in New York.

    These people deserve catastrophic AGW.

  • ABG // December 15, 2009 at 3:27 pm | Reply

    Thanks Tamino – I also wonder why I never hear more about ocean warming in the rebuttal to the “there’s been no recent warming” claim. Seeing as the vast majority of heating from the energy imbalance ends up in the oceans, shouldn’t this be the more appropriate metric anyway? The recent paper by Schuckmann et al. (2009) demonstrates that the ocean heat content has steadily increased all throughout this period of “no warming”.

  • KenM // December 15, 2009 at 3:43 pm | Reply

    Tamino – this is great. I was just wondering about this today.
    I read somewhere that if you pick the right interval of time (it was an 8-year span, if I recall correctly) between 2000 and today, you can show statistically significant cooling. I’m sure there are other intervals within the last 15 that can show warming as well.
    So, I understand why you chose 30 years, it’s kind of an accepted practice, but why is it accepted?
    Who decided that 30 was sufficient? Is there any kind of scientific/mathematical reasoning behind it? Why not 50, or 20, or 2000?

    If we had decided 2000 years is “climate” and anything less is “weather”, how far back would you have to go to establish a statistically significant trend (up or down)?

  • Hank Roberts // December 15, 2009 at 4:09 pm | Reply

    Hmmm, how long til the first posting from someone who hasn’t read the text and just sees an opportunity to copypaste old noise?

    “‘Senator, you have the vote of every thinking person!’
    ‘That’s not enough, madam, we need a majority!’”

  • Jim Eager // December 15, 2009 at 4:11 pm | Reply

    Unfortunately, Neven, the rest of us do not.

    Thanks for another great post, Tamino.

    It’s even more clear a presentation than Robert Grumbine’s Results on Deciding Trends
    http://moregrumbinescience.blogspot.com/2009/01/results-on-deciding-trends.html
    which I use when confronting those parroting the “it’s been cooling for the last 10 years” nonsense

  • Ray Ladbury // December 15, 2009 at 4:34 pm | Reply

    I’ve started to give up on the whole “it’s not significant” argument. Now I just emphasize that it isn’t climate. If they want to be weather-watchers, great. Fascinating hobby. Just don’t pretend it’s climate science.

  • Hvordan // December 15, 2009 at 4:40 pm | Reply

    Is there a way to quantify the probabilities of a positive vs negative trend for a particular x value? For instance, do the confidence limits represent two sigma and the probability for a given slope follows a bell curve or something of that sort?

  • george // December 15, 2009 at 4:50 pm | Reply

    I think it is worth pointing out that the upper confidence interval line is above 0.02C/yr for trends starting in every year back to 1975.

    In other words, every one of those confidence intervals back to 1975 encompasses the (rough) estimate indicated by the IPCC AR4 statement

    For the next two decades a warming of about 0.2°C per decade is projected for a range of SRES emissions scenarios.

    So, it is neither valid to claim that
    1) the IPCC “over-predicted” [sic] warming in the AR4
    NOR that
    2) the projection of “about 0.2°C per decade” is significantly different from the observed trend since 1975

    at least not based on GISS data.

  • SNRatio // December 15, 2009 at 5:00 pm | Reply

    @KenM:
    To talk about ‘trend’ in a climatically meaningful way, we must have some stability. If you look at the estimates with starting years back to 1975, you will see that they change very little as soon as you start earlier than ca 1994. If we use 30 years, we will have averaged out most shorter-term cyclicity (but not necessarily longer term!). The basic principle is that either the trend signal must dominate the fluctuations, or we must have the fluctuations averaged out. Of course there are several problems with too long periods, too – a relevant one right now is the more-than-linear increase in radiative forcings we have had during recent years.

  • MapleLeaf // December 15, 2009 at 5:03 pm | Reply

    Tamino, thank you for this.

    KenM you ask a good question about 30 years. There does not seem to be an official statement on why this is so. There are numerous reasons why one could think of 30 yrs:

    1) The time frame has to allow for climate noise and internal climate variability. Some of these modes operate at decadal time scales (e.g., PDO), so one needs a few decades to encompass a full cycle.
    2) It has to describe today’s climate, or climate that people are expected to experience for a large portion of their lifetimes.
    3) I also imagine one has to have sufficient data points to make any stats statistically significant.
    4) Maybe the sigma (std. deviation) of surface air temperatures levels off after 30 years or so?

    These were just off the top of my head. There are probably more appropriate and sophisticated reasons.

  • Dan Olner // December 15, 2009 at 5:04 pm | Reply

    Thanks for this. I’d add – for a lot of people who are led to doubt the science because of anti-AGW FUD, it might be enough just to point out something like: “remember that week in November that was a lot warmer than the one before it? Does that mean we’re going towards Spring? No. A week is too short a time to pick up the seasonal trend. We’re definitely still heading toward winter. Same with climate, just on a bigger scale.”

    It’s also a handy example for anyone saying “chaos! complexity! Can’t possibly predict!” No, we can’t predict the weather in two weeks, because it’s too complex – but yes, we can predict seasons because we know the boundary conditions: the angle of the Earth to the sun.

    It’s handy to have one or two intuitive examples on hand, as well as more statistically detailed stuff for anyone who wants to dig deeper.

  • Layman // December 15, 2009 at 5:19 pm | Reply

    Sorry that this is tangential but am not really sure where to post it. As someone who has had his fits of denial in the past, I always come back to the question – what is it that drives this denial in so many people?

    I think that one challenge, at least for me, has been trying to come to grips with the notion that something could be going on, that we are causing, that could have such dire consequences. It’s almost too fantastical to comprehend – something like a Hollywood “end of the world” movie, this stuff of glaciers melting, mass migrations and famines, sea level rising, etc. It is completely outside our everyday life experiences of getting to work, taking care of the kids, going to the supermarket, etc.

    So I look for ways to demystify this, turn the fantastical into the concrete. For what it’s worth, I like to recall a visit to the Chicago Museum of Natural History for help in this regard. There is an exhibit where you can walk through the history of the Earth. In doing so, you see that there have been, in fact, 5 mass extinctions already (some people consider the current era to be the sixth). All of them were related to significant changes in climate (be it due to continental drift, meteorites, or other catalysts).

    If you’re interested, here they are in summary format:
    http://www.fieldmuseum.org/evolvingplanet/cambrian_4.asp
    http://www.fieldmuseum.org/evolvingplanet/silurian_6.asp
    http://www.fieldmuseum.org/evolvingplanet/permian_5.asp
    http://www.fieldmuseum.org/evolvingplanet/mesozoic_5.asp
    http://www.fieldmuseum.org/evolvingplanet/mesozoic_9.asp
    http://www.fieldmuseum.org/evolvingplanet/quater_5.asp

    Thanks as always to those of you who have been so helpful with my basic questions.

  • sod // December 15, 2009 at 5:30 pm | Reply

    very good post, at exactly the right time. thanks.

  • JohnV // December 15, 2009 at 6:00 pm | Reply

    Tamino:
    I enjoyed this post. It would be interesting to see a little more detail around the confidence limit. Two questions come to mind:

    1. Is this a 95% confidence limit?
    2. If you plot just the size of confidence limit, is there a “breakpoint” where it starts to grow rapidly?

    [Response: It's a 2-sigma confidence interval, which is a good estimate of 95% confidence. As for a "breakpoint" for the start of rapid growth ... not sure -- how does one define "rapid"?]

  • Lake // December 15, 2009 at 6:17 pm | Reply

    I am open-minded but currently skeptical about AGW. I’m new to this blog, and it looks like you are making some convincing arguments. In the couple of posts that I’ve read, you seem to be trying to definitively answer the question, “Was this last decade warmer than previous decades in the last century, as predicted by climate scientists?” Is this correct?

    Anyway, I have a couple of broader questions that hopefully have relatively straightforward answers. I’m a science teacher with a technical, data-centric background, and answers to these questions with relevant links will help me make up my mind about the issue.

    Why, specifically, do you trust measures of the global average temperatures, either based on surface stations or satellite measurements? Is the variability due to external factors not greater than the measurement of the anomaly itself?

    [Response: As the time span and the number of data grow, the effective "signal-to-noise-ratio" grows. Eventually the signal becomes identifiable with confidence. That's what's happened with global temperature data.]

    Why do you think the CRU scientists were so guarded about scrutiny of their data, acting like they had something to hide? I can understand being annoyed at critics, but threatening to delete files rather than make them available to McIntyre and others seems shady, no?

    [Response: I can't speak for them, and I don't have any special insight into the situation. My best guess is that they have been accused of fraud, lying, and conspiracy so many times by denialists -- not just criticized scientifically but essentially called liars and criminals -- that their anger at the unfair accusations caused them to behave in a less-than-totally-coolheaded manner.]

    Finally, if you are convinced (are you?) that there is accelerated warming and that it is man-made, what would it take for you to doubt it? Is there something that could happen in the future, some measurement or revelation, that would cause you to reconsider? Or is there so much evidence (in your eyes) that it is now in the category of fact and therefore unquestionable?

    [Response: I'm convinced that there's warming and that it's man-made. I expect it to accelerate if greenhouse-gas emissions follow business-as-usual projections, but that remains to be seen. I don't see convincing statistical evidence of accelerated warming over the last 35 years. As for what might alter my opinion, see this.]

    I’m truly asking here, not attacking or being a ‘denialist’. Thanks for any direct and non-sardonic answers. If you’ve already spelled all of this out somewhere, please say so — I am new to this blog.

  • Lake // December 15, 2009 at 6:50 pm | Reply

    Thanks for the quick and direct answers!

    Regarding your last point and link, is this about global warming in general? I know there are plenty of people who deny that, but I’m not one. My question is more the human piece:

    What convinced you that the warming, which is certainly happening, is primarily unnatural and due to human GHG emissions?

    [Response: The warming effect of human emissions is beyond dispute. Greenhouse gases absorb infrared radiation; that they warm the planet is basic physics.

    No other climate-forcing agent (solar, cosmic rays, etc.) is on the increase, or has been since 1950.

    Murder victim + suspect on the scene + smoking gun + every other possible suspect has an alibi + suspect's fingerprints and DNA on the murder weapon and the victim. Draw your own conclusion.]

    Which sets of data show that the warming we’re seeing now is outside of the realm of normal, natural warming? Does it have to come down to proxies, or is the scientifically measured data (stations, satellites) enough to show that the warming is not natural?

    [Response: See this.]

    Sorry to keep asking fundamental questions. These are what my students want to know, and I want to do the issue justice. You seem like a reasonable person. Thanks.

  • MapleLeaf // December 15, 2009 at 7:17 pm | Reply

    Tamino, do you have plans to update the data shown in the link (provided in response to Lake)? Looks like 2009 is going to be in the top 5 warmest years (GISS). It would be nice if you could update the graphics shown in that link with each passing year, to see how things are panning out.
    Thanks for considering.

  • Hank Roberts // December 15, 2009 at 7:23 pm | Reply

    > MapleLeaf // December 15, 2009 at 5:03 pm |
    > … KenM you ask a good question about 30
    > years. There does not

    You and Ken are asking for what’s explained right at the top of the screen. Please go to the original post and read.

    • KenM // December 15, 2009 at 9:45 pm | Reply

      Actually I read it very closely – this post was of particular interest to me. Perhaps you did not fully understand my question. There’s nothing magical about 30 years, or 15 for that matter. I can find statistically significant warming AND cooling periods in the past 15 years.

      [Response: No, you can't.]

      Hell, I can name that tune in 10 years. But we all know that 10 years is weather and 30 is climate.
      In this example, Tamino gathers data from 1975 to about 1 year ago and shows you’d have to go back 15 years to get a statistically significant trend.
      So here’s the question re-worded:
      How many years do you have to go back if your start date is 2,000 years ago? In that time scale, is the last 50 years of warming a statistically significant trend or not?

      [Response: The last 50 years is statistically significant. Over the last 2000 years (or even the last 50) it can be shown that the data do not admit a "linear trend + noise" model. You can compute a trend if you want, but you can't maintain that the difference between the linear trend and the data is just noise.]

      If not, then why is 30 years an acceptable stretch of time to use? Why can’t a denialist say “30 years is weather – 2,000 years – now *that’s* climate!”? They kind of do say that already – so I’m assuming the last 50 years *is* a significant trend over all time periods we have temperature data for. If it wasn’t I’m sure we would have seen the statistical “proof” somewhere by now.

      [edit]

      • KenM // December 16, 2009 at 4:05 am

        No, you can’t

        From the link I provided that you snipped:

        • For the past 8 years (96 months), no global warming is indicated by any of the five datasets.

        • For the past 5 years (60 months), there is a statistically significant global cooling in all datasets.

        Now, before you jump to conclusions (or dhog or anyone else), I’m really curious as to what you think of that post-that-shall-not-be-linked.
        My curiosity on this point has nothing to do with AGW – it might as well be talking about widgets sold. Does that post make a valid point or not? If not, why not? I don’t even care if you delete this post in its entirety – I’m not trying debate anything or play “gotcha”- feel free to email me directly (and I appreciate your time in any case)!

        [Response: First: the claim "For the past 5 years (60 months), there is a statistically significant global cooling in all datasets" is just plain wrong. 'Tain't so.

        Second: the claim "For the past 8 years (96 months), no global warming is indicated by any of the five datasets" is absolutely no different from the statement "For the past 8 days no global warming is indicated in any of the five datasets." Is that a valid point?]

      • Deech56 // December 16, 2009 at 2:08 pm

        Ken M, I am guessing you linked to Chip K’s piece. Gavin had an in-line comment about that here. Basically, use of monthly data without taking autocorrelation into consideration makes his conclusions suspect.

      • KenM // December 16, 2009 at 3:09 pm

        Hi Deech – yes – I did – and I saw Gavin’s comment as well. Lucia did a followup that purportedly shows Gavin was mistaken. The auto-correlation was taken into account, or at the very least, when Lucia took the auto-correlation into account there was no difference between her results and Chip’s.

        “the claim ‘For the past 8 years (96 months), no global warming is indicated by any of the five datasets’ is absolutely no different from the statement ‘For the past 8 days no global warming is indicated in any of the five datasets.’”

        I think you’ve lost me here, Tamino. If my store has been open for nine years and I have eight years of widget sales data showing no significant trend, isn’t that uncertainty more, um, certain, than just looking at eight days worth? Isn’t there more information in the eight year sample?

        [Response: Suppose you have 8 days of sales data showing no significant trend. You also have 8 minutes of sales data showing no significant trend. Yes there's more information in the 8 days' data than in the 8 minutes'. But neither data set gives a useful answer about whether sales are increasing or decreasing. Those guys telling you to go out of business because sales are down over the last 8 days/hours/minutes ... you gonna trust 'em?]

        For example, do those eight days include Black Friday?
        That actually leads me to the other question. Suppose I do have 8 years of sales data with no significant trend, isn’t it possible to pull out a smaller subset of data (say the months of October and November last year, while ignoring all of the other data) and show a statistically significant upward sales trend?

        [Response: If 8 years of data shows no significant trend according to linear regression, but a small subset shows a significant trend according to linear regression (when done properly), then what you've shown is that the "true" trend (the signal not the noise) is nonlinear. If, for example, data follows a parabola + noise, then the entire set may show no linear trend while early or late subsets do show a linear trend.

        I see no statistical evidence of nonlinearity in the global temperature trend since 1975. That doesn't mean the trend is linear -- only that the statistics doesn't demonstrate otherwise.]
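
        A small simulation makes the parabola-plus-noise point concrete (illustrative Python only, not part of the analysis above); the noise here is white, so plain least-squares error bars suffice for the illustration:

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(-1, 1, 200)
        y = x ** 2 + rng.normal(scale=0.1, size=x.size)   # symmetric parabola + white noise

        def slope_and_2sigma(x, y):
            b, a = np.polyfit(x, y, 1)
            r = y - (b * x + a)
            se = np.sqrt(np.sum(r ** 2) / (len(x) - 2) / np.sum((x - x.mean()) ** 2))
            return b, 2 * se

        print("full series: slope %+.3f +/- %.3f" % slope_and_2sigma(x, y))               # no significant linear trend
        print("second half: slope %+.3f +/- %.3f" % slope_and_2sigma(x[100:], y[100:]))   # clear positive trend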

  • Eric L // December 15, 2009 at 8:03 pm | Reply

    Another good post, but it would be useful to have a definition of AR and ARMA processes, even if just links to Wikipedia pages or something.

    [Response:

    http://tamino.wordpress.com/2008/08/22/alphabet-soup-part-1-ar/
    http://tamino.wordpress.com/2008/09/11/alphabet-soup-part-2-ma/

    ]

  • MapleLeaf // December 15, 2009 at 8:10 pm | Reply

    Hank, not quite. It is my understanding that someone asked about why 30 years is used to determine trends versus the time used to define “climate”.

    Agreed, Tamino addresses the trend question. But what about KenM’s question? P Lewis provided a good reference which speaks to that question.

  • dhogaza // December 15, 2009 at 8:19 pm | Reply

    Lake:

    Why do you think the CRU scientists were so guarded about scrutiny of their data, acting like they had something to hide? I can understand being annoyed at critics, but threatening to delete files rather than make them available to McIntyre and others seems shady, no?

    Along with Tamino’s points (which I agree with) keep in mind that the researchers had every reason to believe that requests weren’t being made in good faith.

    Ignore Jones and the raw station data for a moment, consider his colleague at CRU, Keith Briffa.

    Briffa specializes in paleoclimate reconstructions using tree ring proxy data. Not using his own data (i.e. the field work – site selection and boring trees – is done by others).

    One reconstruction, the Yamal area of Siberia, has been done using data collected by Russian researchers. While making that data available to Briffa, they did not give Briffa permission to release that data to other researchers.

    McIntyre et al screamed for years that “Briffa’s hiding the data! Smoking gun! Fraud”.

    He e-mailed Briffa asking for the data. Briffa politely responded, “You’ll have to get it from the Russians”. “Hiding! Smoking gun! Fraud” was the response to this entirely reasonable and proper response.

    Later, Briffa and one of the researchers published additional work, and at that point the (Russian) researcher made the data available as required by the journal in which they published.

    McIntyre et al: “Our pressure forced Briffa to release the data!” etc etc.

    Recently, it has become apparent that McIntyre must’ve asked the Russians some years back and got the data, because it turns out he had it in his hands since 2004 – five years!

    Now, do you understand why Briffa, at least, might think that McIntyre has not been acting in good faith, that his hounding Briffa for data that 1) Briffa had no right to release and 2) McIntyre had in hand back in 2004, might be viewed as nothing more than harassment?

    This isn’t an isolated incident, but rather describes the modus operandi of McIntyre and his goons at Climate Audit.

    Now … are you aware that Ben Santer of LLNL reports having gotten his first death threat as far back as 1996, in reaction to his research into global warming? Do you understand that this might tend to color one’s opinion of those screaming fraud, misconduct, etc, that beyond the screaming there might be actual physical risk? That one might not care to consider such people as being friends?

    • Lake // December 16, 2009 at 2:35 am | Reply

      Responding to Tamino’s reason he’s convinced of AGW:

      “The warming effect of human emissions is beyond dispute. Greenhouse gases absorb infrared radiation; that they warm the planet is basic physics.”

      Yep, we’ve done this in my classroom and it’s true. But you’re completely confident in extrapolation from a simple lab setup to a massive global system which is almost entirely heated by the sun and dominated by water vapor? You may be entirely and completely convinced, but I’m not. The extrapolation is too big — what lab experiment factors in everything important to mirror this huge system?

      “No other climate-forcing agent (solar, cosmic rays, etc.) is on the increase, or has been since 1950.”

      Okay, but that seems like negative proof. You’re positive that *everything* has been exhaustively disproved, so the *only* thing left is CO2? I know a fair bit about TSI and solar cycles, but I don’t think the magnetic fields of Earth and the Sun and the plasma that permeates space in the solar system have been completely explained away. Have they? If so, can you point me to the data?

      “Murder victim + suspect on the scene + smoking gun + every other possible suspect has an alibi + suspect’s fingerprints and DNA on the murder weapon and the victim. Draw your own conclusion.”

      Meh. Extended analogies aren’t worth much in this context. I could argue that there’s no ‘murder victim’ yet because the catastrophe hasn’t happened yet, but I digress — we can stick to the facts, no?

      Responding to the CRU emails part:
      dhogaza, I have to part ways with you on this one. Friends? Harassment? This has nothing to do with science. My own background is in physics and astronomy, and the notion of withholding data that was used to publish journal articles is just anathema. If science is about truth, nothing is to be gained from hiding data, no matter who requests it. And as far as I understand it, it wouldn’t have been extra work to hand over the data (as long as the data was maintained properly). In Jones’ case, it was the specific stations used to make the graph, yes?

      Imagine the shoe was on the other foot — a known non-AGW, skeptical, ‘denialist’ scientist somehow manages to publish an article with a graph that shows that, say, X-rays from distant supernovae are truly the cause of global climate change. Wouldn’t you ask to see the data? Wouldn’t you be right in asking? And if he said, ‘forget it, we’re not friends, you’re harassing me!’, would he be right?

      Wouldn’t you agree that that would be entirely non-scientific of him?

      And as for ‘ignore Jones and the raw station data’, why? Aren’t they at the heart of this?

      Politics, tribalism, circles of friends, even death threats — these are all very human, but they mask the scientific issues; for data to be credible, it must be available, and re-processable, right? Other climate scientists have said as much. Scientists in any other field would insist on it.

      I’d agree with you if the data sets were released (especially since it is taxpayer-owned) and the harassment continued. Attacking a scientist (or anyone) personally because you disagree with them is ridiculous.

      • mark // December 19, 2009 at 2:17 pm

        What about studies such as reported in Science (4 Dec 2009): “Coupling of CO2 and ice sheet stability over major climate transitions of the last 20 million years” that support the contention that carbon dioxide is an important climate driver?

  • dhogaza // December 15, 2009 at 8:22 pm | Reply

    Hank, early on:

    Hmmm, how long til the first posting from someone who hasn’t read the text and just sees an opportunity to copypaste old noise?

    Posted a bit earlier, but not approved before Hank posted, by KenM:

    I read somewhere that if you pick the right interval of time (it was an 8-year span, if I recall correctly) between 2000 and today, you can show statistically significant cooling.

    Hank, not long ago:

    You and Ken are asking for what’s explained right at the top of the screen. Please go to the original post and read.

    Hank, I hope you use your ability to predict the future for the good of humanity! :)

  • David B. Benson // December 15, 2009 at 8:25 pm | Reply

    Tamino — Very good! (Including a link back to your ARMA page would have helped, IMHO.)

    As for “accelerated” warming, I don’t see any looking through the entire GISTEMP record of almost 130 years. What I see are some wobbles due to changes in various forcings which more or less cancel out over the 130 years. What is left is the forcing proportional to ln(CO2), which we approximate using the Arrhenius formula. From the data in
    http://bartonpaullevenson.com/Correlation.html
    one determines that the appropriate constant in the Arrhenius formula is about 2+ K for 2xCO2 as the sole forcing. This agrees rather well with various GCM studies of “transient climate response” (TCR) and a Charney equilibrium climate sensitivity of close to 3 K for 2XCO2.

    If that is not bad enough, there may be accelerating feedbacks coming forth now, primarily in the Arctic. Those are, I am under the distinct impression, so far insignificantly small, but that may well change in the next few years.
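
    A rough sketch of the ln(CO2) fit described above (the file name and layout are hypothetical; any co-registered annual CO2 and temperature series would do). The sensitivity per doubling is the fitted slope times ln(2):

    import numpy as np

    # Hypothetical input: columns are year, CO2 (ppm), temperature anomaly (deg C).
    yr, co2, temp = np.loadtxt("co2_and_gistemp_annual.csv", delimiter=",", unpack=True)

    slope, intercept = np.polyfit(np.log(co2), temp, 1)      # anomaly regressed on ln(CO2)
    print(f"sensitivity estimate: {slope * np.log(2):.2f} K per doubling of CO2 (CO2-only, transient)")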

  • Gavin's Pussycat // December 15, 2009 at 8:35 pm | Reply

    Lake, I found this thoughtful:

    http://initforthegold.blogspot.com/2008/05/falsifiability-question.html

  • Jim Bouldin // December 15, 2009 at 9:32 pm | Reply

    Hvordan, my take:

    If using strictly the global means, as done here, you could (would have to?) create a reference distribution of slopes using a permutation test on all data after a given start year, calculating a slope for each permutation and then taking the ratio of positive to negative slopes. You couldn’t just assume a normal distribution with so few data. You would not need the ARMA() model in such a case, because the test doesn’t assume independence, which is an advantage. The C.I.s are not important here.
    Tamino may disagree or have other ideas.
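
    A minimal sketch of the kind of permutation test described above (shuffle the anomalies in time, refit the line each time, and compare the observed slope to the shuffled ones). One caveat: a plain permutation treats the observations as exchangeable, so autocorrelation remains an issue, just as in the post:

    import numpy as np

    def permutation_slope_test(t, y, n_perm=5000, seed=0):
        """Observed least-squares slope and its two-sided permutation p-value."""
        rng = np.random.default_rng(seed)
        observed = np.polyfit(t, y, 1)[0]
        null = np.array([np.polyfit(t, rng.permutation(y), 1)[0] for _ in range(n_perm)])
        p_value = np.mean(np.abs(null) >= abs(observed))
        return observed, p_value

    # usage: observed, p = permutation_slope_test(years, anomalies) for any chosen start year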

  • JCH // December 15, 2009 at 9:37 pm | Reply

    If I could just be Tamino for one day.

    But I can’t. So I just go with short-term trends, which is what they apparently understand. 2009 is warmer than 2008; therefore, the theory of Global Cooling/No Warming has been killed by a short-term trend.

  • TrueSceptic // December 15, 2009 at 9:39 pm | Reply

    Excellent post Tamino, and well timed too!

    Might it be a good time to update the ‘You Bet!’ article?

  • dhogaza // December 15, 2009 at 9:56 pm | Reply

    So I just go with short-term trends

    Extrapolating from the last four days here in Portland, Oregon gives a truly frightening trend! It’s warming even though we’re still approaching the solstice! My oh my!

  • David B. Benson // December 15, 2009 at 10:06 pm | Reply

    MapleLeaf // December 15, 2009 at 8:10 pm — take a look at BPL’s latest:
    http://BartonPaulLevenson.com/30Years.html

  • Jim Bouldin // December 15, 2009 at 10:45 pm | Reply

    There’s nothing magic about the number 30 wrt climate. You can define climate at any temporal (or spatial) scale you want, depending on the question at hand. Climate to an oak leaf might be measured in days or weeks, but to a glacier, in millennia. Back when there were no computers it was nice to have a set of tables of means and variances, and they settled on 30 years for those. Now we can almost instantly define and subset any data to any length of time, for any given purpose.

  • WAG // December 15, 2009 at 10:48 pm | Reply

    The hypocrisy is amazing – deniers say that 30 years isn’t enough to separate the present warming trend from natural variability, but then they say that 7 years is enough to definitively conclude that the earth is cooling.

    I’ve been compiling a list of other examples of deniers’ hypocrisy. Up to 24 so far. Feel free to add to the list:

    http://akwag.blogspot.com/2009/12/climate-of-hypocrisy.html

  • MapleLeaf // December 15, 2009 at 11:11 pm | Reply

    David Benson, Thanks.

  • WAG // December 15, 2009 at 11:16 pm | Reply

    Tamino – along the same lines as this post, I’d suggest the next one cover why paleoclimate is not critical to the conclusion that “present warming is unusual in historical terms.” Basically, a response to the line of reasoning in this argument:

    http://wattsupwiththat.com/2009/12/12/historical-video-perspective-our-current-unprecedented-global-warming-in-the-context-of-scale/

    As a layperson, what I’ve always said is that while the present 1 degree C of warming may not be out of the ordinary, the important thing to look at is what’s coming over the next 100 years. With CO2 at its highest level in 600,000 years (and probably 15 million), and a multi-decade lag between CO2 and temperature finally catching up with the energy imbalance, it’s pretty clear that we’ve got some unusual changes in the pipeline. It’s the FUTURE temperature increases that will be unique, not the warming we’ve already experienced.

    • Mark // December 18, 2009 at 3:05 am | Reply

      WAG: “As a layperson, what I’ve always said is that while the present 1 degree C of warming may not be out of the ordinary, the important thing to look at is what’s coming over the next 100 years…”

      I couldn’t agree more! I would like to draw your attention to a recent post on Michael Tobis’s blog

      http://initforthegold.blogspot.com/2009/12/30th-anniversary-of-global-warming.html

      and the comment from climatesight:

      “It’s all a question of whether scientists 1) knew that our emissions would eventually cause warming, and then watched it happen; or 2) saw that it was warming, realized that it correlated with our emissions, and accepted it as causation.

      The scientific community knows it is 1. But most of the public thinks it is 2.”

  • David B. Benson // December 15, 2009 at 11:25 pm | Reply

    Jim Bouldin // December 15, 2009 at 10:45 pm & WAG // December 15, 2009 at 10:48 pm — For some time now I have thought 30 years too short an interval for some purposes. Do look at BPL’s latest contribution, linked in my prior.

    Based on some understanding of some ocean oscillations and intermediate ocean temperature response times, I prefer at least 90 years of data when I can obtain it.

    In any case, using all the GISTEMP data certainly supports AGW:
    http://bartonpaullevenson.com/Correlation.html

  • dhogaza // December 15, 2009 at 11:46 pm | Reply

    As a layperson, what I’ve always said is that while the present 1 degree C of warming may not be out of the ordinary, the important thing to look at is what’s coming over the next 100 years.

    Bingo. That’s one of the points of the whole MWP diversion, arguing about today vs. the MWP rather than focusing on what we need to do to prevent > 2C rise tomorrow.

  • David B. Benson // December 15, 2009 at 11:50 pm | Reply

    WAG // December 15, 2009 at 11:16 pm — It is still hard to be certain, but likely the global temperature is now higher than at any time since the Eemian interglacial, 115,000 years ago.

  • Sceptical Guy // December 16, 2009 at 12:18 am | Reply

    Another good post. And IMHO very well written this time. I’m familiar with S/N ratios from my work in electronics (so working in dB), but I admit to being hopelessly ignorant of the maths behind them.

    Can I ask why you used the zero-crossing point of the lower confidence limit to determine how many years of data you’d need? As a layperson, I expected you to define an acceptable confidence interval and use the date for the last time it was within that limit. As an over-simplified example, “the latest year where the confidence interval is within 0.x degrees is YYYY, meaning we need 2010 – YYYY years of data”. Is this because you are looking simply for +ve vs. -ve trends? Or have I grossly misunderstood?

  • Jim Eager // December 16, 2009 at 1:04 am | Reply

    Gavin’s Pussycat, thanks very much for the link to Michael Tobis’s piece on falsifiability. I somehow managed to miss it before now.

  • Hank Roberts // December 16, 2009 at 1:14 am | Reply

    “I wasn’t trying to predict the future. I was trying to prevent it.”

    – Ray Bradbury

    http://screenwritingfromiowa.wordpress.com/2009/05/31/screenwriting-quote-of-the-day-88-ray-bradbury-pt-3/

  • Jim Bouldin // December 16, 2009 at 1:22 am | Reply

    David, BPL’s idea with the stabilization of s.d. with increasing sample size is very similar to a method used in ecology when you don’t know how many samples to take to get a reliable estimate–you keep a running computation of the mean as you add each sample. When the slope of the curve has leveled off, you stop–you’re at the point of diminishing information returns. Still some subjectivity there (what defines “level”?) but it’s a lot better than just picking some arbitrary sample number out of the air.
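
    A tiny sketch of that rule of thumb (illustrative only): watch the cumulative mean as samples accumulate and stop once it has, subjectively, leveled off:

    import numpy as np

    def running_mean(x):
        """Cumulative mean after each additional sample."""
        x = np.asarray(x, dtype=float)
        return np.cumsum(x) / np.arange(1, len(x) + 1)

    # usage: plot running_mean(samples) against sample number and look for where
    # successive changes become negligibly small.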

    What WAG and BPL said made me realize that there are multiple levels of discourse on this topic. At the low end are the responses to deniers’ mindset that BPL notes in his opening paragraph – that a short-term cooling somehow invalidates something (i.e. global warming in general or AGW in particular). But at a completely separate level of discussion is the need to define temporal spans that are appropriate for the scientific research topic at hand, be that ocean circulation, glacier mass balance, carbon sequestration by trees, or any of the thousands of possible topics under the sun.

    Gavin once said that, in the context of the first type of discussion, he finds the topic not terribly interesting. I agree. But in the context of the second…well that’s what keeps us going as scientists (although I admit to a strong attraction to temporal and spatial scaling issues whereas others find it akin to eating cardboard).

  • Ray Ladbury // December 16, 2009 at 1:57 am | Reply

    Sceptical Guy,
    Most typically, one defines the most likely value and some confidence interval around it. As long as the lower bound of the confidence interval (say 95%) is positive, we can say that our quantity is positive–at least with that level of confidence (e.g. 95%). However, once the lower limit crosses the axis, zero is within our confidence interval, and we cannot say whether the quantity is positive or negative.

    • Sceptical Guy // December 16, 2009 at 3:29 am | Reply

      Thanks Ray – got that. I did misunderstand the question and I was confused ‘cos I saw that the zero-crossing point of the lower confidence limit is partly a function of the size of the trend itself (e.g. if the trend was +0.04 deg/yr, it would shift up, meaning fewer years of data required, or if the trend was +0.01 deg/yr (shifted down), we would need more data). I have re-read the 2nd para and I understand now, “The time required to establish a trend in data depends on many things, including how big the trend is (the size of the signal) and how big, and what type, the noise is.” Also, “Let’s look at GISS data for global temperature and test how much data we need to establish the most recent trend.” The key words being MOST RECENT. I originally interpreted this as “how much data is required to make a trend statistically significant”.
      D’Oh!

  • The Wonderer // December 16, 2009 at 2:06 am | Reply

    If the denialist camp were held to the same standard as climate scientists whose e-mails have been stolen, we’d be getting somewhere. But instead, I think it’s going to be awhile.

    • TrueSceptic // December 16, 2009 at 11:33 pm | Reply

      They would be laughed at (or maybe worse) wherever they went.

      Of course, that would be the ones who are merely mistaken. What should be the response to those who know they are lying?

  • Richard Steckis // December 16, 2009 at 2:23 am | Reply

    If I may be permitted. I actually enjoyed this thread and it has certainly educated me.

  • David B. Benson // December 16, 2009 at 2:25 am | Reply

    Jim Bouldin // December 16, 2009 at 1:22 am wrote “… the need to define temporal spans that are appropriate for the scientific research topic at hand, be that ocean circulation, …” that is thoughtful, but I would change that from “define” to “determine”. It is, at least for me, a perplexing issue within climatology.

    So here are a couple of paragraphs of “cardboard” for you, which I had previously written:
    No cycles in central Greenland ice core temperature proxies during the Holocene — Computing the fft spectrum of the GISP2 temperature anomalies for the interval from 10,400 years ago onwards results in a graph which is simply (close to) pink noise:
    http://en.wikipedia.org/wiki/Pink_noise
    down to the approximately 22 year period limitation of the data. There is no period which clearly stands out in this data.

    Using a technique based on the Lomb periodogram, intended for detecting quasi-periodic phenomena, it is just barely possible to detect a band from 27 to 90 years which sticks (a little) above the noise. It is conjectured that this is the result of the various ocean oscillations affecting the North Atlantic sea temperatures, but this would require some confirmation. However, nothing else even noticeable appears out to the limit of about 400 year quasi-periods.
    end of “cardboard”.

    But at these multidecadal scales it actually is not clear to me that these so-called ocean oscillations are even quasi-periodic; this might well just be pinkish noise. Not necessarily; there is some evidence that the PDO has had a 40 year quasi-period for over a thousand years; before that is unknown. On the other hand, the intensity of El Nino has varied rather remarkably during the Holocene.

    The climate system, even just considering the interactions of air and ocean, is sufficiently complex that it is unclear any constitutive approach is of use for long range studies. In any case, a combination of the above “cardboard” and some vague understanding of ocean temperature response times usually leads me to want to see at least 90 years of data.

  • Vincent van der Goes // December 16, 2009 at 2:43 am | Reply

    Great post, Tamino.

    “The time required to establish a trend in data depends on many things, including how big the trend is (the size of the signal) and how big, and what type, the noise is.”

    This is something I had been wondering about. Thanks for going into this much detail.

  • t_p_hamilton // December 16, 2009 at 3:13 am | Reply

    The Wonderer said:”If the denialist camp were held to the same standard as climate scientists whose e-mails have been stolen, we’d be getting somewhere. But instead, I think it’s going to be awhile.”

    The problem with the deniers is that they do not hold themselves to ANY standards, unlike scientists. Character and discipline come from within, not externally, as immature children think it does.

    Data fabrication is a deadly offense in science. Peer review is the minimum standard.

  • dhogaza // December 16, 2009 at 5:02 am | Reply

    If the denialist camp were held to the same standard as climate scientists whose e-mails have been stolen, we’d be getting somewhere. But instead, I think it’s going to be awhile.

    Maybe we need our own Russians …

  • Barton Paul Levenson // December 16, 2009 at 1:08 pm | Reply

    P. Lewis,

    I demonstrate how the 30-year period was chosen here:

    http://BartonPaulLevenson.com/30Years.html

    • Ed Davies // December 18, 2009 at 5:39 pm | Reply

      I’m not at all convinced by that page.

      I wrote some quick code to reproduce the standard-deviation graph, then re-ran it using data ending at earlier years (e.g., 1998 or 1988 instead of 2008) and found that the point of reversal in the graph appeared for shorter periods. In each case the point of reversal corresponded to the late 1960s/early 1970s. I think the technique that BPL has used does not determine the appropriate period for considering climate but rather the time since the start of the modern global warming era.

      The reason that the standard deviation increases on the first part of the graph is not more randomness but simply the effect of the underlying temperature trend since around 1970.

      Maybe it would help if somebody a little less statistically naive than me could have a look into it.
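
      For anyone who wants to repeat the exercise, here is a rough sketch of the kind of quick code described above, assuming the statistic on BPL’s page is the standard deviation of the last N annual anomalies ending in a chosen year (the file name and layout are hypothetical):

      import numpy as np

      # Hypothetical input: columns are year and annual anomaly (deg C).
      years, anom = np.loadtxt("gistemp_annual.csv", delimiter=",", unpack=True)

      def sd_curve(end_year, max_n=60):
          """Standard deviation of the last n anomalies ending at end_year, for n = 5..max_n."""
          y = anom[years <= end_year]
          return {n: y[-n:].std(ddof=1) for n in range(5, max_n + 1)}

      for end in (2008, 1998, 1988):
          curve = sd_curve(end)
          print(end, {n: round(curve[n], 3) for n in (10, 20, 30, 40, 50)})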

  • Barton Paul Levenson // December 16, 2009 at 1:09 pm | Reply

    Sorry, that should have been directed to Ken M. My misreading.

    • KenM // December 16, 2009 at 3:20 pm | Reply

      Thanks BPL – I may have said this before but I always enjoy your pages – very informative.
      One question though, if you have an infinite amount of data, how do statisticians know when the SDs are stable (or maybe that’s impossible)?
      See, my initial question is more subtle. You’ve covered very nicely why 45 years is a good sample size when the entire sample is 140 years. Suppose I’ve been in business for 100 years, is 100 days a good enough sample to determine a trend in sales?

  • J // December 16, 2009 at 1:23 pm | Reply

    Lake writes:

    I am open-minded but currently skeptical about AGW. I’m new to this blog, and it looks like you are making some convincing arguments. In the couple of posts that I’ve read, you seem to be trying to definitively answer the question, “Was this last decade warmer than previous decades in the last century, as predicted by climate scientists?” Is this correct?

    Anyway, I have a couple of broader questions that hopefully have relatively straightforward answers. I’m a science teacher with a technical, data-centric background, and answers to these questions with relevant links will help me make up my mind about the issue.

    By all means, keep following this blog. But I’d also encourage you to check out John Cook’s excellent site, “Skeptical Science” (http://skepticalscience.com/). Tamino’s blog and Cook’s site complement each other nicely, I think. Tamino does a great job of demonstrating statistical concepts in very clear, convincing ways. Cook specializes in reviewing the literature and pointing out the totality of the evidence for any given point in question. Two different approaches, but both immensely valuable IMHO.

  • J // December 16, 2009 at 1:27 pm | Reply

    TrueSceptic writes:

    Might it be a good time to update the ‘You Bet!’ article?

    Check out the last comment in the thread there. I redid Tamino’s calculations in October of this year, just for fun. Neither 2008 nor 2009 brought us any closer towards resolving the “bet”, since both years fall nicely inside the zone of overlap between the two claims (assuming December 2009 doesn’t suddenly veer into -100C or +100C anomalies…..)

  • Tony O'Brien // December 16, 2009 at 2:14 pm | Reply

    How long before we hear that global warming stopped in 2012?

    The flat earth society have no sense of internal consistency. But they do know how to sell. Tamino, this is great work; it makes the statistics so much more understandable, but it would sell better if you put your face to it.

    Although I can see why you might not want to.

  • Andrew Dodds // December 16, 2009 at 2:43 pm | Reply

    Lake -

    We do actually have a ‘real world’ example of CO2-only warming.

    As a classroom exercise, you can show that allowing for distance from the sun and albedo, Venus actually receives a very similar amount of sunlight to Earth, yet is several hundred degrees hotter. There is very little water vapour in the atmosphere of Venus; indeed, relative to Earth the climate is quite simple (no ice to melt or water vapour feedback).
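
    A back-of-the-envelope version of the exercise, using round textbook figures: once its very high albedo is allowed for, Venus absorbs somewhat less solar energy per square metre than Earth, yet its surface is several hundred degrees hotter:

    SIGMA = 5.67e-8            # Stefan-Boltzmann constant, W m^-2 K^-4
    S_EARTH = 1361.0           # solar constant at 1 AU, W m^-2

    def no_greenhouse_temp(distance_au, bond_albedo):
        absorbed = S_EARTH / distance_au ** 2 * (1 - bond_albedo) / 4   # sphere-averaged, W m^-2
        return absorbed, (absorbed / SIGMA) ** 0.25

    for name, d, albedo, surface in (("Earth", 1.0, 0.30, 288), ("Venus", 0.723, 0.76, 735)):
        absorbed, teq = no_greenhouse_temp(d, albedo)
        print(f"{name}: absorbs ~{absorbed:.0f} W/m^2, equilibrium ~{teq:.0f} K, actual surface ~{surface} K")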

    • Lake // December 16, 2009 at 4:55 pm | Reply

      Okay, now we’re getting into something that I know about as an astronomy instructor. I’d argue that you can’t compare Earth and Venus fairly, what with a 0.0383% CO2 composition here on Earth (and rising…) and a 96.5% CO2 comp. on Venus. Does CO2 lead to runaway warming on Venus? Absolutely. On Earth? For me, the jury is still out, but I’m happy to entertain ideas or links that are more complex and multi-factored than just CO2 = trapped heat = warming. Clearly, there is far more at work.

      [Response: Nobody seriously expects runaway warming on earth (for very good reason). But the fact remains: more CO2 means more infrared absorption means more heating. If you deny that, you've gone beyond skepticism to denialism.

      And the fact also remains: there is no other plausible explanation for observed warming.

      The fact also remains: the warming due to increased greenhouse gases was predicted before it was observed, and observation has matched prediction well.

      I'm beginning to suspect that you just want to be contentious -- that there's no amount of evidence that would convince you. Feel free to prove me wrong.]

  • Ray Ladbury // December 16, 2009 at 2:59 pm | Reply

    Richard Steckis says, “If I may be permitted. I actually enjoyed this thread and it has certainly educated me.”

    We’ll see. Perhaps you could use your newfound education to correct some of the misinformation (and no, I don’t think it is disinformation) you have spread over at RC?

    [Response: Richard complimented this post and showed no trace of contentiousness. It was gracious, I suggest a gracious response (as in, not raising contentious issues).]

  • george // December 16, 2009 at 3:00 pm | Reply

    The CI tells you about the confidence you can place in a given trend estimate.

    But so does the slope of the central line on the above graph — i.e., the rate at which the trend (through the present) changes as a function of the starting point of that trend.

    One thing that is clear on the graph is that it makes a rather substantial difference whether one starts the trend in 2000 or 2001 because the graph is fairly “steep” at that point.

    In fact the change in the (central) trend when one switches from using 2001 as the starting point to using 2000 as the starting point is just about equal to the yearly trend calculated using 1975 as the starting year (about 0.017C/yr).

    The effect of this is that the confidence interval for the trend starting in 2000 includes values up to almost 0.04C/yr while if one starts the trend just a single year later, the confidence interval “shifts” downward substantially and now includes values “only” up to about 0.025C/yr.

    Suppose one were trying to assess whether a trend of 0.035C/yr is consistent with the data “at the 95% confidence level”.

    If one used the data from 2000 onward (started the trend in 2000), one might conclude that it is consistent because 0.035C/yr falls within the confidence interval.

    But starting the trend just one year later in 2001 would tell you that the 0.035C/yr trend is not consistent with the data — not even close!

    So, what does it really mean (practically speaking) to say that one has “rejected” or “falsified” a particular projection “at 95% confidence” (because it falls outside the CI) when one starts the trend in 2001, when starting the trend just a single year earlier leads to a “failure to reject”?

    Clearly, “statistical confidence at the 95% level” is not always something you can be confident in practically speaking.

    In the latter case, I certainly wouldn’t bet the farm on it, or even the chicken.

    I think those who have claimed to have “falsified IPCC” using the beginning of 2001 as the starting point of their trend would do well to look carefully at the above graph.

    It would be interesting to see a similar graph to the one Tamino shows above for the other data sources (HADCRU and the satellite data).

    But I would guess that they all probably show a relatively large change in the trend depending on whether one starts at the beginning of 2001 or the beginning of 2000.

  • J // December 16, 2009 at 3:21 pm | Reply

    Tamino:

    I looked at something like this myself, after reading one too many comments about “warming stopped after 2000!”

    Using the annual GISS Land+Ocean data, I calculated the linear trend from 1975-2000 (inclusive). It’s +1.6C/century. That’s using the full 26 years.

    Next, I looked at the two 25-year trends, the three 24 year trends, the four 23-year trends, etc. In other words, all the possible combinations of shorter-duration trends that fall within the same 1975-2000 period.

    If you go to periods as short as 10 years, you can find one interval when the trend was negative. If you go to periods of up to 18 years, you can still find intervals when the trend was less than +0.1C/century. All of these, of course, are within a longer period when we know the actual trend was +1.6C/century.

    Of course, those negative or near-zero short-term trends included the Mt Pinatubo years. One could argue that this is an invalid comparison, since there hasn’t been a similar eruption post-2000.

    So … I went back and removed the effects of Mt Pinatubo, by replacing the annual data from 1992, 1993, and 1994 with the values for those years from the 1975-2000 linear trend. Then, for the heck of it, I similarly boosted the 1982 datum to fall on the trend line, to compensate for the El Chichon eruption. No volcanoes in these data!

    Having done that, the analysis of trends still finds a couple of 8-year negative trends, and quite a few 9-14 year trends that are at or below +0.1C/century. For comparison, recall that the actual trend over this entire period was +1.6C/century (and in my de-volcanoized version it was 1.7C/century).

    Someone else could do a much better job of this, I’m sure. But it was enough to convince me that even in the absence of a volcanic eruption, looking at 10-15 year trends in global mean surface temperature is a fool’s errand. And looking at trends of less than 10 years (say, 2001-present) is even more so.
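
    For anyone who wants to repeat this kind of exercise, a minimal R sketch (the vector 'anom' is a hypothetical stand-in for the annual GISS anomalies over 1975-2000):

    # Slopes (deg C/yr) of every sub-period of a given length within 1975-2000.
    years = 1975:2000
    sub.trends = function(anom, years, len) {
      starts = 1:(length(years) - len + 1)
      sapply(starts, function(i) {
        idx = i:(i + len - 1)
        coef(lm(anom[idx] ~ years[idx]))[2]
      })
    }
    # e.g. the most negative 10-year trend inside the 26-year warming period:
    # min(sub.trends(anom, years, 10))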

    • Gareth // December 16, 2009 at 8:36 pm | Reply

      You might enjoy playing with the Java gadget cunningly wrought by one of my readers, embedded in this post. An excellent demonstration of what happens as you change the base period for a trend analysis.

  • Hank Roberts // December 16, 2009 at 3:40 pm | Reply

    Wait, George, no need to guess, ask a statistician what happens when you change from 2000 to 2001 – you reduce the size of your sample and widen the uncertainty. You can’t just look at a picture and compare the lines and pick different starting points.

  • D. Robinson // December 16, 2009 at 3:54 pm | Reply

    Tamino,

    You state that there is no cooling trend since ~1999, 2000, 2001, whichever – you’re right.

    And it’s trumpeted by some denialists, so fine, fair enough point to dispute.

    But, the whole discussion is in reference to AGWarming. So, it’s fair to ask – is there a statistically significant warming trend for the same time period?

    [Response: The post states:

    Therefore we need at least 14 years of GISS data (from 1996 to the present) to draw a confident conclusion about the most recent trend. In fact, since we have additional unaccounted-for uncertainty (such as the parameter estimates for our ARMA(1,1) model), we actually need a bit more. Let’s say that less than 15 years of data allows no confident conclusion about whether the trend in GISS data is warming or cooling.

    ]

  • george // December 16, 2009 at 3:58 pm | Reply

    Tamino,

    I am curious what the above graph would look like if you used “least absolute deviation” rather than “least squares” to calculate the trends.

    I would suspect that it might make the central trend values more stable from year to year.

    In particular, I would expect that the peak in the above graph just before 2000 (almost certainly due to the 1998 el nino “outlier”) would be suppressed to some degree by using least absolute deviation.
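
    One way to try this (a sketch only, assuming the quantreg package -- its rq() with tau = 0.5 fits a least-absolute-deviations line -- and a hypothetical data frame 'giss' with columns 'year' and 'anom'):

    # Compare ordinary least squares with least absolute deviations
    # (median regression) for the trend from a given start year.
    library(quantreg)
    sub = subset(giss, year >= 1996)
    ols = lm(anom ~ year, data = sub)
    lad = rq(anom ~ year, tau = 0.5, data = sub)
    c(ols = coef(ols)["year"], lad = coef(lad)["year"])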

  • J // December 16, 2009 at 4:02 pm | Reply

    Yet another response to the “No warming since 2000!” crowd:

    Every year since then, except for 2008, has been above the 1975-2000 trend line.

    I think a lot of “skeptics/contrarians/deniers” (choose your term) have been misled by short-term variation, much of which is related to ENSO. The early 2000s had normal-to-negative SOI index values and were much warmer than the 1975-2000 trend. Then, the last few years had normal-to-positive SOI index values, and temperatures were around (or, for 2008, below) the 1975-2000 trend.

    In other words, we took a big step up, then a very small step down. If you look at a short enough interval, that looks like cooling. But that’s an artifact of the time frame.

    What’s wryly amusing (or, unfortunate, depending on your tolerance for dark humor) is that this apparent “leveling off” from 2001-2008 happened to coincide with the development of the blogosphere. So a lot of people got caught up in some rather foolish analysis of short-term trends. It will pass.

  • Barton Paul Levenson // December 16, 2009 at 4:06 pm | Reply

    Ken M,

    I have no idea. I don’t know anything about the statistics of marketing. There’s probably a different time scale for different phenomena.

  • george // December 16, 2009 at 5:45 pm | Reply

    Hank says

    “ask a statistician what happens when you change from 2000 to 2001 – you reduce the size of your sample and widen the uncertainty.”

    The above graph makes it clear that it’s more than just a “widening of the uncertainty” (increase of the magnitude of the “+-” about the central line) that is important when one changes the starting point of the trend from 2000 to 2001.

    The entire confidence interval is not only being widened, but is also being “shifted” (downward) so that it is “centered on” a new value (for the trend starting in 2001) that is quite different from the one it was originally centered on (for the trend starting in 2000).

    For 2000, the red upper confidence limit line lies about 0.025C above the black central trend line. For 2001, the distance of the red line above the black has increased slightly to about 0.028C. So, reducing the sample size (by one year’s data in this case) increases the uncertainty by about 0.003C (I’m eyeballing the above graph, so I might be off slightly).

    But the actual upper bound of the confidence interval has gone from about 0.038C/yr (for the trend starting in 2000) to about 0.025C/yr (for the trend starting in 2001), a decrease of about 0.013C/yr.

    So, the “range of probable trend values” (ie, the range of values falling within the confidence interval) has been primarily affected by this “shifting” of the central trend (downward) rather than by the widening of the (+-) uncertainty (increase in the width of the interval).

    In fact, when one reduces the “sample size” (uses one year less data), one might expect, based solely on the increase of the uncertainty, that the CI for the trend starting in 2001 would encompass larger trend values.

    But just the opposite is true in this particular case because the increase in the width of the uncertainty has been more than offset by a downward shift in the central line.

    Lots of statisticians talk about increase in the width of the confidence interval with decrease in sample size, but the “shifting effect” (due to the change in the central trend value) is much less frequently discussed and can make even statistical claims of “rejection at 95% confidence” questionable in some cases.

  • WAG // December 16, 2009 at 6:14 pm | Reply

    In case you’re wondering, John Cook at SkepticalScience has a post up on the WUWT video trying to show that the current warming trend is nothing unusual by historical standards:

    http://skepticalscience.com/Hockey-sticks-unprecedented-warming-and-past-climate-change.html

  • David B. Benson // December 16, 2009 at 8:30 pm | Reply

    J // December 16, 2009 at 3:21 pm — That procedure just removed the rapid response portion of the volcano eruptions forcing. There is also the slower response, with a characteristic time of about 30 years.

  • Hank Roberts // December 16, 2009 at 10:09 pm | Reply

    Gareth, I wonder if that lovely Javascript gadget could be improved with the information Tamino demonstrates here — by adding in the error bars and having those move with the slider as well?

    (usually when I wish for stuff like this programmers tell me it’s “trivial” — I ask what that means, and they say “not worth my time to do, and you’d never figure it out in a hundred years” — so I keep wishing)

    • Gareth // December 17, 2009 at 2:05 am | Reply

      The author of the gadget did intend to improve it, or so he said at the time, but whether he’s actually done so I can’t say. And I think he was in favour of keeping it relatively simple… I will ask, though.

  • Ray Ladbury // December 16, 2009 at 10:53 pm | Reply

    Tamino said, “It was gracious, I suggest a gracious response (as in, not raising contentious issues).”

    Bah, Humbug. ;-) I’m afraid I am not full of the milk of human kindness this holiday season after dealing with the fallout from James Randi’s latest brainfart on climate change and various other stupidity.

    I merely think that if Steckis found your post helpful, he ought to say so in a public forum where he demeaned your argument only recently–e.g. Realclimate. Who knows, maybe some of the recently accumulated barnacles might even come over here and learn something.

    Hell, in the spirit of the season, if Richard makes such a post, I will publicly take back half the nasty things I’ve ever said about him!

    God bless us, every one!

  • Ray Ladbury // December 17, 2009 at 12:31 am | Reply

    True Skeptic asks about climate denialists:

    “What should be the response to those who know they are lying?”

    The truth… always the truth. Their hatred for it makes it the harshest weapon we can wield.

  • Lake // December 17, 2009 at 12:52 am | Reply

    Tamino, you wrote:

    “I’m beginning to suspect that you just want to be contentious — that there’s no amount of evidence that would convince you. Feel free to prove me wrong.”

    Not intentionally contentious — I just wanted to state the case for my skepticism. You’re fully convinced, I’m just not yet. Here’s my current state:

    1. I’m convinced that the Earth is currently warming, long term. I’m not yet convinced that it is warming outside of the bounds of a natural cycle or necessarily bad and dangerous. To be convinced of this, I would need to see a trend that is severely and clearly different from the norm — not linear, but exponential. That’s what the hockey stick predicts, and it remains to be seen if it will play out like that.

    [Response: You're mistaken. First, the "hockey stick" doesn't predict anything. Expected warming doesn't necessarily call for acceleration, let alone growth that is "exponential" -- only that after a delay due to thermal inertia, warming will follow forcing. And forcing, over the last 30+ years, has not shown acceleration with statistical significance. We can't even rule out that temperature itself has accelerated over that time span, only that it too doesn't show it with statistical significance.

    The fact is that warming has followed forcing (allowing for the delay due to thermal inertia) exceptionally well. See this.

    As for "different from the norm," that is what the hockey stick (actually, a large number of hockey sticks) shows.]

    2. I’m convinced that humans have been increasing the level of CO2 in the atmosphere. I’m not convinced that this is driving the warming. I think it might be correlated — I know heating causes the oceans to release more CO2.

    [Response: Essentially you're repeating the "warming is causing CO2 increase" canard. Let's be perfectly clear: the increase in CO2 is due entirely to human emissions, nothing else. This is shown by so many lines of evidence that it really is beyond any doubt whatever. This includes the isotopic signature of atmospheric CO2, and simple accounting -- the CO2 we've emitted is nearly double the atmospheric increase. The fact is that the planet has been a net absorber of CO2 except for the human element.

    This idea is so wrong, you really need to check yourself. And whoever gave you that idea cannot be trusted about anything.]

    You have made the case for it, but you’ve been quickly dismissive of any other options, including ones not thought of yet. The conclusion is completely settled in your mind (yes?), but I think science has a rich history of turning settled facts into the next big revolution in discovery. So I’m waiting and seeing. What would convince me on that front? Again, it’s just a matter of time — there’s no small scale experiment that could be done that could fairly scale up to the massive Earth-encompassing system.

    The issue that’s getting me is certainty. You are incredibly certain about this, correct? I am just not willing to throw away my sense of questioning, which is at the root of science, for certainty that I will take to the grave, not on this issue.

    I *can* be certain in many areas of physics, because there are experiments that I can do myself that verify the equations every time. This climate issue is a one-shot deal, and there are no real ‘experiments’, just statistical predictions. So we have to see if those predictions bear out. If warming was predicted and that has happened, great, that’s a start. If it continues on the timescales that matter — decades, not months or even years — then I will be convinced. But it will take the time.

    [Response: Do you believe that cigarette smoking increases the chance of getting lung cancer? Have you done the experiments yourself?

    As for warming, it has already happened as predicted on timescales of decades already. See this.]

    One reason that I can’t be certain is the data-gathering. It has been presented as solved and solid, but it is heavily based on statistical modeling. I’m inclined, for now, to agree with William Briggs (the ‘enemy’?) from another blog, when it comes to AGW certainty expressed by climate scientists and bloggers alike:

    [Response: Briggs has shown his incompetence as a statistician here and elsewhere.]

    “Here is a list of all the sources of error, variability, and uncertainty and whether those sources—as far as I can see: which means I might be wrong, but willing to be corrected—are properly accounted for by the CRU crew, and its likely effects on the certainty we have in proxy reconstructions:

    1. Source: The proxy relationship with temperature is assumed constant through time. Accounted: No. Effects: entirely unknown, but should boost uncertainty.
    2. Source: The proxy relationship with temperature is assumed constant through space. Accounted: No. Effects: A tree ring from California might not have the same temperature relationship as one from Greece. Boosts uncertainty.
    3. Source: The proxies are measured with error (the “on average” correlation mentioned above). Accounted: No. Effects: certainly boosts uncertainty.
    4. Source: Groups of proxies are sometimes smoothed before input to models. Accounted: No. Effect: a potentially huge source of error; smoothing always increases “signal”, even when those signals aren’t truly there. Boost uncertainty by a lot.
    5. Source: The choice of the model m(). Accounted: No. Effect: results are always stated the model is true; potentially huge source of error. Boost uncertainty by a lot.
    6. Source: The choice of the model m() error term. Accounted: Yes. Effect: the one area where we can be confident of the statistics.
    7. Source: The results are stated as estimates of ß Accounted: No. Effects: most classical (frequentist and Bayesian) procedures state uncertainty results about parameters not about actual, physical observables. Boost uncertainty by anywhere from two to ten times.
    8. Source: The computer code is complex. multi-part, and multi-authored. Accounted: No. Effects: many areas for error to creep in; code is unaudited. Obviously boost uncertainty.
    9. Source: Humans with a point of view release results. Accounted: No. Effects: judging by the tone of the CRU emails, and what is as stake, certainly boost uncertainty.

    [Response: Think about how ridiculous some of his claims are. He objects to the computer code being complex? Multi-part? As for unaudited, some paleo reconstructions (the latest Mann et al. for instance) have made all of their data and code publicly available. Briggs is over-the-top ludicrous with such statements as "... results about parameters not about actual, physical observables. Boost uncertainty by anywhere from two to ten times." He just wants to rattle off as long a list as he can think of, without devoting any real work to understanding its validity. He's just talking out of his ass.

    It's not just "the CRU crew" who've done paleoclimate reconstructions. There are a dozen or more. They all disagree on the fine details -- because of course there's uncertainty, which every qualified researcher in this field emphasizes. But they agree on the big picture: modern warming is unprecedented for the last few thousand years.

    As for uncertainty, those who know what they're doing (which doesn't include Briggs) go to great lengths to account for it. Yet you take the word of a crackpot over that of dozens of investigators who have devoted their lives to the study of this. That's incredibly naive.]

    There you have it: all the potential sources of uncertainty, only one of which is accounted for in interpreting results. Like I’ve been saying all along: too many people are too certain of too many things.”

    [Response: Certainty: CO2 causes warming. That's the laws of physics at work. That's why it was predicted before it was observed. That's why it has happened as predicted.

    Neither you nor Briggs nor anybody has given a rational explanation how it's even possible for this not to happen.

    Like I've been saying all along: too many people are too opinionated about things they haven't studied in sufficient detail and they really don't understand -- while those who have devoted their entire lives are called fools. The arrogance is astounding.]

    [edit]

    So your site, his site, and many other sites are offering up the ideas that I am weighing. I am not contentious; I am trying to learn.

    [Response: You say you want to learn? Prove it.

    Read The Discovery of Global Warming. The whole thing.]

  • dhogaza // December 17, 2009 at 1:23 am | Reply

    TrueSceptic …

    Randi drinks the denialist kool-aid.

    At least … a bit.

  • Lazar // December 17, 2009 at 3:44 am | Reply

    “IF you can keep your head when all about you
    Are losing theirs and blaming it on you,
    If you can trust yourself when all men doubt you,
    But make allowance for their doubting too;
    If you can wait and not be tired by waiting,
    Or being lied about, don’t deal in lies,
    Or being hated, don’t give way to hating,
    And yet don’t look too good, nor talk too wise” — Rudyard Kipling, IF.

  • Sceptical Guy // December 17, 2009 at 4:19 am | Reply

    Would anyone mind if I went back to the Venus vs Earth thing briefly (Andrew Dodds and Lake)?

    I have heard an argument that a major player in comparing Venus, Earth, and Mars is actually atmospheric pressure. Some quick wikipedia-ing gave me this:
    Mars: mean surface level pressure = 0.6 kPa; 95% CO2; average temp = -55 degC
    Earth: mean surface level pressure = 101.3 kPa; 0.0383% CO2; average temp = 15 degC
    Venus: mean surface level pressure = 9122 kPa; 96.5% CO2; average temp = 480 degC

    I realise these are approximations, and real values vary, and please correct me if I’m an order of magnitude out. They are for rough comparison only.
    I have also seen that the surface temperature of Venus, as determined from the amount of sunlight it receives and its albedo, should be much less than it is, but the extra CO2 causes “runaway global warming” (as Andrew alluded to above). Does anyone know where I can find a similar comparison for Mars? I.e., what should its surface temp be as determined from the amount of sunlight it receives and its albedo?

    Also, Mars has thin clouds of frozen CO2, Earth has clouds of water, and Venus has thick clouds of sulfuric acid. Are these planets just far too different to even bother trying to infer anything from comparing climate?

    I don’t intend to be contentious, and I’m honestly not trolling. Apologies if it looks that way – I’m genuinely curious. I have seen this argument used by a true denialist, and I’m trying to see if it stands up to some honest scrutiny.
    If I’ve gone too far off-topic then by all means ignore me and I’ll go back to trying to find out for myself.
    Yes I am familiar with Boyle’s law.

  • Barton Paul Levenson // December 17, 2009 at 12:39 pm | Reply

    SGL Try here:

    http://BartonPaulLevenson.com/NewPlanetTemps.html

  • Gavin's Pussycat // December 17, 2009 at 1:45 pm | Reply

    > Randi drinks the denialist kool-aid.

    Sigh… an amazing lack of scepticism.

    I never liked Randi, though his take-down of Uri Geller (and others of that ilk) was useful. I suspected he was ideology-driven, which now seems to be confirmed.

    It’s not just about winning, it’s about the high ground winning.

  • J // December 17, 2009 at 2:36 pm | Reply

    Lake: Tamino responds to many of your points, and you can find much more by perusing the pages at Skeptical Science (http://skepticalscience.com/).

    However, I’d just like to address this one comment of yours:

    2. I’m convinced that humans have been increasing the level of CO2 in the atmosphere. I’m not convinced that this is driving the warming. I think it might be correlated — I know heating causes the oceans to release more CO2.

    First of all, this makes no sense. You admit that humans are adding lots of CO2 to the atmosphere, but you don’t want to admit that that’s causing the oceans to warm? You instead postulate some mysterious unknown force that’s warming the ocean and also causing it to give off CO2, while somehow canceling out the radiative effects of increased CO2 in the atmosphere?

    I’m not trying to be rude, but it sounds like you’re struggling to construct a Rube Goldberg machine to somehow avoid the simple, obvious explanation here.

    In any case, there are lots of papers out there that show a net flux of CO2 from the atmosphere to the ocean, not vice versa. In other words, we’re emitting a lot of CO2; the oceans can absorb some, but they can’t absorb it as fast as we produce it, so atmospheric CO2 increases and the planet warms.

    Takahashi et al. 2009. Climatological mean and decadal change in surface ocean pCO2, and net sea–air CO2 flux over the global oceans. Deep Sea Research Part II: Topical Studies in Oceanography, Volume 56, Issues 8-10, April 2009, Pages 554-577.

    Sabine, et al. 2004. The Oceanic Sink for Anthropogenic CO2. Science, Vol. 305. no. 5682, pp. 367 – 371.

    And lots, lots more. This is the kind of science that is completely ignored on blogs like WUWT, CA, etc. Lots of patient, time-consuming, un-controversial work by geochemists, oceanographers, glaciologists, etc., slowly piecing together the fundamentals of the climate system, the work that ultimately gets encapsulated in the IPCC reports.

  • george // December 17, 2009 at 2:51 pm | Reply

    William Briggs (quoted above by Lake) says:

    Source [of uncertainty]: Humans with a point of view release results. Accounted: No. Effects: judging by the tone of the CRU emails, and what is as stake, certainly boost uncertainty.

    So, does that mean we need to add in some uncertainty to the temperature data based on the number of times that Phil Jones, Mike Mann and others have called Steve McIntyre names over the years?

    Like this?

    Global temp anomaly = 0.56 +- 0.1 + (“McIntyre you $#@**$%#!” * N)*(0.01)

    where N= number of occurrences

    If that’s the case, then I’d say that we probably can’t be very certain about much of anything at this point — and are probably becoming less certain with each passing day.

    PS: Note that the second contribution to uncertainty in the equation above is always positive (unless you’re McIntyre, of course)

  • Jim Bouldin // December 17, 2009 at 2:55 pm | Reply

    You ought to do a post on why you favor approaches that account for autocorrelation followed by traditional statistical tests, when testing for significance, rather than just slapping a permutation test on it. I’m sort of continually puzzled why this seems to be the preference of many. For predictive modeling purposes I can understand modeling the AC structure, but for sig testing purposes, I do not.

    [Response: A simple permutation test destroys the autocorrelation structure, hence it is not valid.]

  • george // December 17, 2009 at 3:01 pm | Reply

    I realized after submitting that the sign on the second uncertainty in my above equation is wrong.

    Should be negative

    Global temp anomaly = 0.56 +- 0.1 – (“McIntyre you $#@**$%#!” * N)*(0.01)

    second uncertainty should be negative since the actual temperature might be lower than the stated value on account of systematic bias toward the high temperatures

  • Lake // December 17, 2009 at 3:36 pm | Reply

    Fair enough. You guys have given me plenty to read and to think about, which is what I was looking for. I’m just trying to get to the point of understanding this as solved and settled, which you are at. I am not, because I don’t know enough yet. I don’t think it’s ‘foolish’ to ask questions and not accept arguments from authority (on either side). Anyway, thanks for the direct, if rather forceful, response. I will read the whole thing, as you say, and check out skepticalscience.

    • Neven // December 17, 2009 at 5:09 pm | Reply

      “Rather forceful”?

      I think the response has been rather courteous (not only by the host, but several others as well). I hope you understand there have been many people showing up here for many years, not to learn or debate, but to ‘troll’, as they call it. Sometimes this leads to aggravated responses, because people just don’t have the patience for it any more.

      I personally get suspicious of someone’s intentions when I read texts like these:

      I know a fair bit about TSI and solar cycles, but I don’t think the magnetic fields of Earth and the Sun and the plasma that permeates space in the solar system has been completely explained away. Has it? If so, can you point me to the data?

      Meh. Extended analogies aren’t worth much in this context. I could argue that there’s no ‘murder victim’ yet because the catastrophe hasn’t happened yet, but I digress — we can stick to the facts, no?

      That to me sounds more like someone who has made his mind up and is not really asking questions. And Tamino is right to say the same thing. It does sound/read like you just want to be contentious — that there’s no amount of evidence that would convince you.

      If you are really looking for answers, then please understand that there were many here before you who acted like they were looking for answers but were actually denialists who left indignant after being told off for spreading nonsense or misinformation. If you really are a denialist in disguise, then please go back to WUWT and join the masses cheering for intellectual heavyweight Monckton.

      [Response: Galileo urged us to give each writer the benefit of the best possible interpretation of his writings. That can be carried to excess ... but I think it appropriate in this case.]

      • Lake // December 17, 2009 at 7:00 pm

        Yes, ‘rather forceful.’ Polite, but terse, and quite *certain*. But I do have to thank Tamino for responding, for posting my questions, and, as he quoted Galileo, giving me the benevolent assumption.

        I get it, you guys have been burned by ‘denialists in disguise’; I am truly a skeptic who is trying to dig deeper. I do have a problem with the Sun issue — I just can’t fathom that the number one source of energy on Earth might not have some other cycle or change that we don’t understand yet.

        But I’ve been pointed in the right direction, I got what I came for. I’m in the middle of reading the AIP.org history, and I’m hoping I’ll find the answers there.

        Let me ask: do you (all) like having people come and ask questions? I know that you wouldn’t want to entertain trolls (which I have tried not to be), but I imagine you’d like it when people are coming to you, trying to find the truth and your take on it… no? I mean, these CRU emails and the blowback might be incredibly annoying to all of you on the ‘other’ side, but at least some group of honest people will be looking deeper and harder for some good information, right?

        Anyway, I’ll leave you alone — Tamino, you pointed me where I wanted to go, thanks.

      • Neven // December 17, 2009 at 8:05 pm

        I just can’t fathom that the number one source of energy on Earth might not have some other cycle or change that we don’t understand yet.

        I used to read on a daily basis at WUWT (in the hope something would show up that convincingly showed the theory of AGW to be false, because of course I’d rather wish it weren’t true) and every time there was an article on sunspots lots of people proposed mechanisms by which the sun would have another influence on the Earth’s climate. Now, there is a guy over at WUWT called Leif Svalgaard (I believe he is a solar physicist who is quite respected in his area of research) who practically only comments on articles having to do with the sun. He is well-respected over at WUWT, even though he has stated many times that all those theories about the sun having that hidden influence are just fantasies in the realm of astrology. This is something that has always convinced me that, apart perhaps from Svensmark’s GCR theories, the role of the sun is pretty well understood. Of course, it is always possible that the sun is more influential, but for now there is no scientific evidence of it.

        Anyway, if this stuff interests you, you might want to keep an eye out for those comments by Leif Svalgaard on WUWT.

        All the best with your search and research, and thanks for not taking the suspicion personally.

  • Jim Bouldin // December 17, 2009 at 4:58 pm | Reply

    A simple permutation test destroys the autocorrelation structure, hence it is not valid

    Not sure I follow that. Permutation tests, by definition, always destroy the original structure in any data. That doesn’t make them invalid. It’s just a different way of assessing significance, one where temporal dependence of the data is not an issue.

    [Response: But when time series exhibit autocorrelation they have temporal dependence which is not due to any signal, only due to randomness. That's why they so often give the false conclusion of the existence of a trend. A proper test for the presence of a signal must preserve the temporal structure of the noise while eliminating the temporal structure of the signal. A simple permutation test fails to do that.

    Test it yourself. Take very strongly autocorrelated noise with no signal (hence no trend), but which gives a false indication of trend when the noise is treated as white noise. Run a linear regression and estimate the trend. Then run a permutation test to see whether or not that confirms the "trend."]
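
    A minimal R sketch of that experiment (the AR coefficient, series length, and seed are arbitrary illustrative choices):

    # Strongly autocorrelated noise with no trend at all, fit with a straight
    # line while (wrongly) treating the residuals as white noise.
    # The apparently "significant" slope that often results is pure autocorrelation.
    set.seed(42)                                     # arbitrary seed -- try several
    x = arima.sim(model = list(ar = 0.95), n = 120)  # AR(1) noise, no signal
    t = 1:120
    summary(lm(x ~ t))$coefficients["t", ]           # slope and its (invalid) p-value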

  • Lazar // December 17, 2009 at 6:10 pm | Reply

    Lake,

    What Neven and Tamino said. Many have come here and elsewhere posing as ‘open minded but skeptical’ whilst being anything but, and have wasted a huge amount of everyone’s time. Hence the wariness (and weariness) of our regulars. On the other hand there are ’skeptic’ sites where people will instantly be ‘your friend’ and praise your knowledge and wisdom. You pays your money and you takes your choice. It appears you are not taking the wariness personally, which is a good sign and a good response. I second Tamino’s suggestion of Spencer Weart’s “The Discovery of Global Warming” as a first stop. There are many good ‘introduction to atmospheric physics’ books; David G. Andrews’ “An Introduction to Atmospheric Physics” is one I can recommend. Definitely read the IPCC reports. If you want to get a handle on statistical properties of climate data, read this site and search the archives. A good companion ‘intro stats’ book which is practical and intuitive (no probability distribution theory stuff) is Kachigan’s “An Interdisciplinary Introduction to Univariate & Multivariate Methods”. For debunkings of common ’skeptic’ claims there is the aforementioned Skeptical Science, RealClimate, and many others. That’s plenty of fun reading to be getting on with.

  • David B. Benson // December 17, 2009 at 8:04 pm | Reply

    Lake // December 17, 2009 at 12:52 am — Along with all the other reading, you might care to see just how much was understood about CO2 and climate 30 years ago:
    http://books.nap.edu/openbook.php?record_id=12181&page=R1

  • Jim Bouldin // December 17, 2009 at 8:38 pm | Reply

    There’s something at a philosophical level that I cannot quite get my mind around wrt the whole AC topic in context of significance testing, and although I can and will do some tests, I don’t know that they will help. Either you or I may not have time to thoroughly explore this topic here, which is fine.

    It seems to me that a permutation test framework requires no assumptions about preserving anything other than the values taken by the original data. One is simply asking “how likely is it that the observed data (or derivatives from them, e.g. a linear regression through them, the slope of which functions strictly as a test statistic) would have arisen by chance?” AC, or existing patterns of any kind be damned, they don’t matter, because all such patterns are irrelevant to whatever test statistic one is evaluating by the permutations, in this case, the slope of a straight line. Such patterns only matter when you’re trying to use a standard statistical testing framework, with its insistence on independent, normally distributed errors, but those don’t apply in a perm. test.

    Me thinks.

    [Response: Suppose you do a linear regression and get a result which is significant when viewing the data as white noise. But you know, without any doubt at all, that the result is false -- because you created the data with a random-number generator, using an autocorrelation process.

    You then run a permutation test. It indicates that the result is strongly significant -- undeniably so -- because the vast majority of the permutations show no significant response. You suspect that's because the permutations have destroyed the temporal relationship, and conclude that the temporal relationship is a trend.

    The permutations did indeed destroy the temporal relationships. But they're not from trend, they're from autocorrelation. That's why the permutation test gives confirmation of significance. To conclude that the permutation test establishes trend is mistaken; it merely established that there's a temporal signature which is removed by the permutations. Indeed there is, but that signature is autocorrelation, not trend.

    Seriously: try it yourself with artificial data.

    And by the way, the standard statistical tests don't require normally distributed errors (the central limit theorem applies), and the whole point of correcting for autocorrelation is to abandon the assumption of independence.]

    • Tom Dayton // December 19, 2009 at 12:42 am | Reply

      Jim, I might be completely off-target with respect to what you’re asking, but…

      Maybe the “trend” you’re thinking of is different from the “trend” that Tamino is talking about. Maybe you are focusing on the trend of the numbers at face value, regardless of the cause, whereas Tamino is focusing on the “interesting” or “substantive” trend–the trend that is not due to mere autocorrelation.

      Both trends are legitimate to ask about, but they are relevant to different questions. The trend Tamino is asking about is the one that is most relevant to climate questions. I can’t think of a case in which the trend including autocorrelation would be the more relevant one, but it might exist in some domain.

  • Jim Bouldin // December 17, 2009 at 10:36 pm | Reply

    Thanks Tamino. I still don’t see it. I’ll run some tests. Do you work in R?

    [Response: Yes.]

  • David B. Benson // December 18, 2009 at 1:29 am | Reply

    One paper I found (and posted a link on open thread #16) claimed that an AR(1) process with a parameter of 0.69 produced pink (1/f) noise. However, the second of
    1/f noise: a pedagogical review
    http://arxiv.org/abs/physics/0204033
    1/f noise
    http://www.scholarpedia.org/article/1/f_noise
    seems to contradict that.

    Which brings me to a question I seem unable to find a decent answer to: given an ARMA(1,1) model, what is the power spectrum shape?
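
    For what it’s worth, a sketch of the textbook answer (up to normalisation conventions, which differ by factors of 2*pi), for x[t] = phi*x[t-1] + e[t] + theta*e[t-1] with white-noise variance sig2; the phi and theta values below are arbitrary illustrations:

    # Theoretical power spectrum of an ARMA(1,1) process:
    #   S(f) = sig2 * |1 + theta*exp(-2*pi*i*f)|^2 / |1 - phi*exp(-2*pi*i*f)|^2
    arma11.spec = function(f, phi, theta, sig2 = 1) {
      sig2 * (1 + 2*theta*cos(2*pi*f) + theta^2) /
             (1 - 2*phi*cos(2*pi*f) + phi^2)
    }
    f = seq(0.001, 0.5, length.out = 500)
    plot(f, arma11.spec(f, phi = 0.9, theta = -0.5), type = "l", log = "xy",
         xlab = "frequency (cycles per time step)", ylab = "power")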

  • Douglas Watts // December 18, 2009 at 4:19 am | Reply

    Hey! I’m learning more!

    Thanks, T.

  • Joseph // December 18, 2009 at 4:54 am | Reply

    If you go to periods as short as 10 years, you can find one interval when the trend was negative. If you go to periods of up to 18 years, you can still find intervals when the trend was less than +0.1C/century. All of these, of course, are within a longer period when we know the actual trend was +1.6C/century.

    I did something similar a while back. I looked at 118 11-year GISS trends, after detrending the whole temperature series. I found that the 11-year trends could deviate as much as (+/-) 2.7C/century from the very long term trend, and that the relative slopes are roughly normally distributed.

  • Paula Thomas // December 18, 2009 at 1:02 pm | Reply

    I recently had a friend of mine tell me that the UK’s snowfall this year disproved climate change!! UGH

  • J // December 18, 2009 at 2:20 pm | Reply

    Thanks, Joseph. That’s an interesting approach.

  • Jim Bouldin // December 18, 2009 at 5:22 pm | Reply

    So I did some sims and got some puzzling results. I set up two short, highly autocorrelated series, both with n = 9 time steps, one using a sine function, the other a cosine function, hence both purely deterministic, covering one full cycle (0 to 2pi radians, in steps of pi/4, starting at 0). I then randomly permuted the 9 trig. values 10,000 times (about a 3% sample of possibilities), calculating a least squares linear regression slope each time. I then computed the probability of the slope of the linear regression of the original data, using this distribution.

    The results differed depending on the function used. When using a sine function, the results followed exactly what you argued: standard linear regr. gave a negative slope and a p value of about 0.07, nearly significant, and clearly wrong. The permutation approach gave a mean slope very near zero, as expected, with p = 1, and the same p value (.07) for the slope of the linear regression of the original data. So clearly a big difference in trend results, apparently due to ac of the original data, as you argued.

    When using a cosine function however, both the standard linear regression model, and the permutation results, gave slopes near zero and p values near one. That is, they agreed with each other, and were both approximately correct; there was no influence of the high ac on the regression. It seems from this that there’s a factor involving the nature of the autocorrelated data used in the analysis, aside from the autocorrelation structure itself.

    As an aside, what I didn’t realize in our discussion was that the slope (trend) using standard linear regr. was computed incorrectly, which then caused the significance to be over-estimated (= p value under-estimated). I had in my mind that the slope would be calculated correctly, but that the problem lay in the p value being calculated incorrectly. Not the case: standard linear regression and permutations both give the same p value. The issue, then, is in the calculation of the slope of the regression line, not the calculation of the p value, when using standard methods.

    Anyway, interesting stuff. I’ve got the code if you want it.

    [Response: If you got a slope near zero, that's probably because of the time span chosen for your functions. For example, if you use cos(t) for a span from -T to +T, you'll necessarily get a slope of zero. And of course even with random autocorrelated noise, there's a chance of getting a near-zero slope anyway.

    Try using autocorrelated noise rather than a deterministic data set; I ran the experiment generating AR(1) noise with an AR parameter of 0.95 and got very illustrative results. A few runs gave near-zero slope (and no significance from a permutation test) just by random chance, but most gave very high significance from the permutation test despite being just random noise.]
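
    For reference, a sketch along those lines (the seed and parameters are arbitrary, and individual runs vary):

    # Trendless AR(1) noise, a least-squares slope, and a simple permutation test.
    # The permutations destroy the autocorrelation, so on most runs the purely
    # noise-driven slope looks "significant" -- which is the point being made.
    set.seed(1)                                      # arbitrary seed -- try several
    x = arima.sim(model = list(ar = 0.95), n = 120)  # no trend by construction
    t = 1:120
    obs.slope = coef(lm(x ~ t))[2]
    perm.slopes = replicate(2000, coef(lm(sample(as.numeric(x)) ~ t))[2])
    mean(abs(perm.slopes) >= abs(obs.slope))         # permutation "p-value"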

  • Arthur Smith // December 18, 2009 at 5:31 pm | Reply

    On short-term trends, an interesting metric of understanding is how well people do at guessing the future. If the underlying trend is truly positive, and the present year is cooler than the trend-line, then there’s a much better than even chance the next year will be warmer than this year. If the trend has suddenly switched from warming to cooling, though, your best bet is on cooling.

    So all the “coolers” who were so excited by the not-really-that cool but colder-than-recent-trend temperature of January 2008 had their go at predictions over at ClimateAudit in February of that year. I also put in my guesses, closer to the real average for the year than anybody. And as a bonus threw in guesses for 2009 through 2012 as well (expecting a warming trend to continue). So what are the chances, that out of the 10 guesses posted in February 2008, mine would be by far the closest, with all the others too low? :) And it looks like I’ll be just as close this year too. A few more details here.

    [Response: Given that the other guesses were from regular CA readers, the chances they'd all be too low is near certainty.]

  • Deech56 // December 18, 2009 at 5:58 pm | Reply

    James Randi corrects himself.

  • Riccardo // December 18, 2009 at 7:11 pm | Reply

    Arthur Smith,
    GISS has silently updated its numbers and the graph as well. November is at 0.68 °C, the hottest on record. The Dec-Nov average is 0.56 °C; but if you average the first 11 months you get 0.57 °C. This would push 2009 into second place.

  • george // December 18, 2009 at 7:39 pm | Reply

    When I posted my above comment

    I think those who have claimed to have “falsified IPCC” using the beginning of 2001 as the starting point of their trend would do well to look carefully at the above graph.

    I was not aware that Lucia had actually just posted the latest rendition of her now classic “IPCC projections [sic] falsified” meme.

    She shows this graph (based on GISS data) and claims

    As you can see, when the residuals from a least squares trend are assumed to be AR(1), the trend of 0.2 C/decade continues to fall outside the 95% uncertainty intervals. So, given these analytical choices, the nominal projection by the IPCC remains inconsistent with observations. FWIW, the actual multi-model mean for the AR1 models exceeds 0.2C/decade, and it also remains inconsistent with the observed data.

    Interestingly, she has toned down her “falsified” language considerably and now says “inconsistent with” (much improved language, by the way) and adds in caveats (eg about noise model).

    But what I find most interesting is this:

    Even based on the AR1 noise assumption, what she has termed the “Nominal IPCC projection for surface temperatures of about 0.2C/decade” lies just barely outside her “95% uncertainty intervals” (as she herself notes).

    A borderline case is reason enough to question the reliability of the result.

    But the result is even less reliable than it might first appear because of the “shifting” that I referred to above:

    When one goes from starting the trend in (beginning) 2001 to starting it just one year earlier, in (beginning) 2000, the entire confidence interval gets shifted upward by about 0.017C/yr.

    This “shifting” of the confidence interval due to the shift of the central trend with starting year has a direct bearing on how much “confidence” one should place in Lucia’s claim — not statistically, but practically speaking.

    Even if one assumes an AR1 noise model as she has done, starting the trend in 2000 instead of 2001 would bring the 0.02C/yr (0.2C/decade) value well within her own “95% confidence” interval.

    Tamino’s ARMA(1,1) confidence intervals are a little wider than her AR1 intervals, but that makes no material difference to the conclusion about the effect of the shift.

    That’s because the central trend (deg C/yr) gets shifted up by almost 0.017C/yr, which means that the upper bound even on the AR1 confidence interval becomes about 0.035C/yr (0.35C/decade), well above 0.02C/yr (0.2C/decade).

    In fact, even if the width of the AR1 confidence interval were only 0.004C/yr for a trend starting in 2000 (much narrower than it actually is) , 0.02C/yr would still fall within the CI about the new (central) trend value.

    So, the critical question is this:

    Just how reliable/robust is a claim of “outside the 95% uncertainty intervals” really when it is so highly dependent upon the starting year of the trend? — ie, when changing the starting year by just one year can completely change the outcome.

    I’d have to say, not very.

    It’s kind of funny: Lucia makes a big deal in the comments of her post about how “Tamino ARMA(1,1) [noise model assumption for climate data] is doo-doo”, but what she does not seem to recognize is that her own result (using her own noise model) is doo-doo!

    [Response: I think this graph shows that the AR(1) error model is "doo-doo." Her claims about the model forecasts are likewise.

    She's rather full of doo-doo.]

  • Ray Ladbury // December 18, 2009 at 8:24 pm | Reply

    Deech56,

    I read Randi’s notpology. It appears that now he’s in denial about being in denial.

    • Deech56 // December 18, 2009 at 9:06 pm | Reply

      Ray, I can see your point – apparently he didn’t know what he was writing about, didn’t have enough space to expand his lack of knowledge and he was misunderstood. Sounds like he hit the trifecta. A “nevermind…” would have been OK, too.

  • Jim Bouldin // December 18, 2009 at 10:41 pm | Reply

    Can’t figure out how to impart definite red noise to a short time series with no trend, so if you have some code for that, I could use it.

    I still think there’s something else involved besides strict ac effects, having to do with the configuration of the data.

    [Response: Here's an R script to generate ARMA noise. If you call it as

    x = armanoise(100,ar=.95)

    it will generate a 100-element time series for an AR(1) process with AR coefficient 0.95.]

    ##########################
    # generate ARMA(p,q) noise
    ##########################
    armanoise = function(n.points=1000, ar.coef=0, ma.coef=0, wn.sig=1){
      p = length(ar.coef)                    # AR order
      q = length(ma.coef)                    # MA order
      z = numeric(n.points+200)              # white-noise innovations
      x = numeric(n.points+200)              # ARMA series (with 200-point burn-in)
      n0 = max(p,q)+1
      z[1:n0] = rnorm(n0,0,1)
      x[1:n0] = z[1:n0]
      for (n in (n0+1):(n.points+200)){
        z[n] = rnorm(1,0,1)
        x[n] = z[n]
        if (p>0){
          for (j in 1:p){
            x[n] = x[n] + ar.coef[j]*x[n-j]  # autoregressive part
          }
        }
        if (q>0){
          for (j in 1:q){
            x[n] = x[n] + ma.coef[j]*z[n-j]  # moving-average part
          }
        }
      }
      x = wn.sig*x[201:(n.points+200)]       # drop burn-in, scale by white-noise sd
      x
    }

  • David B. Benson // December 18, 2009 at 11:33 pm | Reply

    Notes on ARMA Processes
    http://ees.nmt.edu/Geop/Classes/GEOP505/Docs/arma.pdf
    has the formula for the power spectrum of ARMA processes in equation 14.

  • Joseph // December 19, 2009 at 12:32 am | Reply

    Interestingly, she has toned down her “falsified” language considerably and now says “inconsistent with” (much improved language, by the way) and adds in caveats (eg about noise model).

    I bet that by 2010 Lucia will have to change her tune completely.

  • David B. Benson // December 19, 2009 at 12:54 am | Reply

    Jim Bouldin // December 18, 2009 at 10:41 pm — That generates AR noise without any MA. That’s ok, but it shouldn’t be called “arma”.

    With a large coefficient, say 0.99, an AR(1) process should give a power spectrum (practically) indistinguishable from red (1/f^2) noise except at the very lowest frequencies. With a modest coefficient, say 0.69, you should see several decades of pink (1/f) noise rolling off into red noise at the highest frequencies. With a coefficient of 0.0 one obtains white noise, of course.

    [Response: If the routine is called as

    x = armanoise(n.points,ar.coef=phi,ma.coef=theta,wn.sig=sigma)

    it will generate n data points of ARMA noise with AR coefficients phi, MA coefficients theta, and the underlying white-noise process will have std.dev sigma.]

  • David B. Benson // December 19, 2009 at 1:34 am | Reply

    Tamino — Thanks! I made the mistake of reading the comments and not the code. :-)

  • Ray Ladbury // December 19, 2009 at 1:52 am | Reply

    Deech56,
    Unfortunately, I think Randi is damaged goods now. I don’t see how he can credibly claim the skeptic mantle. At best, he’s selectively gullible.

    I don’t think it actually hurts climate science. His arguments are so piss poor that if any denialist cites him as an authority, all we have to do is quote what Randi said back to him, thereby ripping him a brand new and fully functional body orifice.

  • Barton Paul Levenson // December 19, 2009 at 12:08 pm | Reply

    I knew that about Randi a long time ago, from his silly and ignorant comments about theology. I’m grateful for the good work he’s done in arguing against ESP and exposing Uri Geller. But he’s just not competent at everything.

    • TrueSceptic // December 20, 2009 at 12:09 am | Reply

      BPL,

      I hope this doesn’t become an argument about religion. Look what happened at Deltoid, where you are missed every day.

      PS I’m an atheist.

  • Didactylos // December 19, 2009 at 12:23 pm | Reply

    Ray,
    I thought it was fairly clear that a) Randi was spouting off on a topic that he knew almost nothing about, and b) he retracted those errors when they were explained to him. Yes, his input wasn’t helpful, and he even made more errors in his retraction. I imagine he got confused by the way those deniers like to misuse the word “sceptic”.

    Eric Raymond, now – he has sunk beneath contempt. He’s like an unguided missile: useful if he happens to hit the right target, but an embarrassing disaster when he misses.

  • dhogaza // December 19, 2009 at 1:55 pm | Reply

    Eric Raymond, now – he has sunk beneath contempt.

    He is also an HIV/AIDS denier:

    “I believe, but cannot prove, that global “AIDS” is a whole cluster of
    unrelated diseases all of which have been swept under a single rug for
    essentially political reasons, and that the identification of HIV as
    the sole pathogen is likely to go down as one of the most colossal
    blunders in the history of medicine.”

  • WT // December 19, 2009 at 3:36 pm | Reply

    Tamino, in January 2008 you published a bet to decide about continuation (or not) of global warming, using the GISS data. As you know, other databanks like HadCRU and UAH are considered by many to be at least as reliable as GISS, and they have lower trends than GISS for several years now (and I am not talking about three and a half days, but for instance the period 1997-2009). Have you done the calculations with respect to the ‘betting scenario’ for HadCRU as well, and are you willing to publish those results on your website?

  • Mesa // December 19, 2009 at 7:18 pm | Reply

    Good post.

    I read it as saying that with 95% confidence we can estimate the trend as being at least .01C per year, or .1C per decade, or 1 C per century. I think not too many people would be uncomfortable with that interpretation of the data.

  • Jim Bouldin // December 20, 2009 at 7:17 pm | Reply

    Tamino, I’ve been obsessed with this issue. First, to clear up my comment earlier about getting a nearly significant slope using a sine function but not with cosine: this was not due to autocorrelation in the former, nor anything about the configuration of the data, but simply to the fact that a sine function from 0 to 2 pi really does have a negative slope, while a cosine function does not (as you referred to later in your comment about the appropriate starting points). The difference in results could not have been due to differing ac, because the two functions are identical there.

    I have now set up and run many different simulations with autocorrelated noise and various trends, and just cannot get any result that supports the common belief that ac affects the statistical significance of a trend (or the trend estimate itself).

    I’d really appreciate it if you would look at and run the following script and tell me what you think. Note that I also fuzzed up the cosine function, by adding some white noise to it, but that didn’t affect the results. I always get a linear regression trend that agrees, on average, with what I imposed, and always get a permuted p value that agrees with that given in the regression results.

    Completely boggled by these results!!!

    ### script to compare the slopes and estimated p values from standard linear models vs. permutation methods, for autocorrelated data
    ### parameters to alter: step, periods, defined.slope, perms

    step = 6 # denom. of angular step for cosine function
    periods = 2
    defined.slope = 1/25 # the “signal”
    perms = 2500

    ##create red noise data with signal, angle step and periods as defined:

    timelength = 2*periods*step + 1
    time = c(1:timelength) # timestep vector
    x = (pi/step)*c(0:(timelength - 1)) # angle vector in radians
    cosx = cos(x) # the “noise”
    model.series = defined.slope*time + cosx # time series w/ signal + noise–can fuzz this up with some additional white noise (rnorm(n,0,1)), if desired, doesn’t affect results on average.

    ## compute and retain slope of standard linear regression on original data

    linear.reg = lm(model.series~time)
    l.r.results = summary(linear.reg)$coefficients;l.r.results # linear regr. results
    l.r.slope = linear.reg$coefficients[2] #assign l.r. slope for later use as test statistic
    l.r.slope.prob = l.r.results[2,4] # p value for computed t stat of lm
    plot(time,model.series);abline(coef = l.r.results[,1]) # 1st col. of l.r.results contains intercept and slope

    ## permutations of original data

    slopes = numeric(perms) # vector to hold sample of permutes
    for (i in 1:perms){
    temp = lm(sample(model.series)~time) #temporary holder for lm results on permuted model series, sampling without replacement
    slopes[i] = temp$coefficients[2] #slope coeff. extracted and assigned
    }

    # permutation diagnostics and parameters:

    mean.permuted.slope = mean(slopes) # crude perm. integrity test, should be near zero
    CI99 = quantile(slopes, probs = c(0.005,0.995),names=F) + defined.slope # upper and lower should be about = distance from mean.permuted.slopes
    CI95 = quantile(slopes, probs = c(0.025,0.975),names=F) + defined.slope # as before
    CI90 = quantile(slopes, probs = c(0.050,0.950),names=F) + defined.slope # as before
    permuted.CIs = as.matrix(rbind(CI90,CI95,CI99));colnames(permuted.CIs)=c("lower","upper");rownames(permuted.CIs)=c("90%","95%","99%")
    mean.permuted.slope
    permuted.CIs

    ## permuted probability and comparison

    permuted.prob = length(slopes[abs(slopes) > abs(l.r.slope)])/perms # permuted probability of test.slope
    comparison = matrix(c(defined.slope,l.r.slope,l.r.slope.prob,permuted.prob),4,1)
    rownames(comparison) = c("defined slope","lin. regr. slope","lin. regr. prob","permuted prob.")
    colnames(comparison) = "value"
    comparison

  • David B. Benson // December 20, 2009 at 10:54 pm | Reply

    Jim Bouldin // December 20, 2009 at 7:17 pm — I’m not following this at all. Over the interval [0,pi] the cosine has a negative slope; the sine function first goes up and then down.

    Color me confused…

  • David B. Benson // December 20, 2009 at 11:20 pm | Reply

    Well, I wrote out the formula for the power spectrum of an AR(1) process and studied it. Other than producing an S curved shape, largest at low frequencies and smallest at high frequencies, I don’t seem to be able to elicit the claimed 1/f or 1/f^2 rolloff; might be there and I’m not seeing it.

    What I did find was the very nice
    Autoregressive Spectral Estimation for Quasi-Periodic Oscillations
    http://www.iop.org/EJ/article/1009-9271/5/5/007/chjaa_5_5_007.pdf
    from which at least look at Figure 1.

    From the solution in that paper it is clear that the ENSO QPO could be similarly simulated via an AR(2) process. This seems quite sensible to me, but various details, such as comparing the simulated spectral width against the width from observations, would have to be checked before taking the idea seriously. Still, it provides a convenient shorthand on the way to a fuller understanding of ENSO.

  • Jim Bouldin // December 20, 2009 at 11:39 pm | Reply

    David: 0 to 2 pi, not 0 to pi. A sine function gives a negative fitted slope on that interval because the shift from positive to negative values occurs at the half-way point (pi), whereas positive and negative values are balanced in a cosine function on the same interval, so it gives essentially no slope.

    The overall point is that using a sine vs cosine function to model the noise has no bearing on the analysis of whether AC affects trend significance. I had incorrectly concluded earlier that it did, thinking it supported Tamino’s argument.

  • David B. Benson // December 21, 2009 at 12:07 am | Reply

    Jim Bouldin // December 20, 2009 at 11:39 pm — I’m still confused. What can “a sine function gives a negative slope” possibly mean to you?

    For a mathematician, a differentiable function such as the sine function has a slope at every point, given by the derivative. Obviously on the interval [0,2pi] those slopes, i.e., the slopes of the tangent lines, are not all negative.

    Something still isn’t clear, but from your second paragraph perhaps it is not worth the attempt to clarify…

  • Jim Bouldin // December 21, 2009 at 12:49 am | Reply

    David, I’m talking in the context of fitting a standard linear regression line to the points created by taking the sine of a constant angular step. E.g. from time = 1 to 17 at pi/8 intervals, from 0 to 2 pi. Because all such values from 0 to pi are positive, while all from pi to 2 pi are negative, you will necessarily get a negative slope from a linear model fit. It was just a clarification of that fact.
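
    A two-line check of this, sketched in R with the pi/8 steps described above (not part of the original exchange):

    t17 = 1:17
    ang = (pi/8)*(0:16)              # 0 to 2*pi in steps of pi/8
    coef(lm(sin(ang) ~ t17))[2]      # clearly negative
    coef(lm(cos(ang) ~ t17))[2]      # essentially zero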

  • David B. Benson // December 21, 2009 at 8:33 pm | Reply

    Jim Bouldin // December 21, 2009 at 12:49 am — Thanks. Clear now.

  • David B. Benson // December 21, 2009 at 10:59 pm | Reply

    This thread shows off an ARMA(1,1) model for recent global temperatures. This piqued my interest in attempting to understand these models, and especially the power spectral properties of these and similar statistical models.

    (1) An ARMA(1,1) model has two parameters, one for the AR part and one for the MA part. If the two parameter values are equal, the ARMA(1,1) model generates white noise (a quick numerical check is sketched below). [This is easy to see from the form of the power spectrum function, but I was surprised.]

    (2) An AR(1) model cannot exhibit a quasi-periodic oscillation (QPO), but, as I previously posted, an AR(2) model can; indeed such a model can have but one QPO, I think. However, in general an ARMA(1,1) model exhibits two QPOs; these might well be quite broad.

    (3) While the NIST handbook page on time series analysis suggests treating SOI as an AR(2) process, I now suspect that is not adequate. From a power spectrum for the North Pacific one discovers a bit of power close to 2 years, more near 3.75 years, and a larger, broader peak around 7–8 years. All three somehow make up ENSO.
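
    A quick numerical check of point (1), sketched here rather than taken from David’s algebra. Note the sign convention: arima.sim uses x[t] = ar*x[t-1] + e[t] + ma*e[t-1], so “equal AR and MA parameters” in the usual (1 - phi*B)x = (1 - theta*B)e form corresponds to ma = -ar here.

    set.seed(1)
    w = arima.sim(model = list(ar = 0.5, ma = -0.5), n = 5000)
    acf(w)       # autocorrelations at nonzero lags are negligible
    spectrum(w)  # periodogram flat apart from sampling noise, i.e. white

    Algebraically, the ARMA(1,1) spectral density is proportional to (1 + theta^2 - 2*theta*cos(omega)) / (1 + phi^2 - 2*phi*cos(omega)), which is constant exactly when theta = phi, consistent with the observation above.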

  • Rattus Norvegicus // December 22, 2009 at 6:52 am | Reply

    I suspect that the 7–8 year peak constitutes the dominant mode of El Nino. It does appear as though the system can get “stuck” in periods of frequent El Ninos and conversely get stuck in periods of ENSO-neutral conditions or La Ninas. See El Nino: Storming Through the Ages, a fascinating look at this phenomenon.

  • steven mosher // December 22, 2009 at 5:57 pm | Reply

    Tamino, brilliant post.

    Can you help out by posting the parameters you used for your ARMA?

    “They aren’t perfect, in fact there’s additional uncertainty because the parameters of the ARMA(1,1) model are only estimates — so the uncertainties we estimate this way are still too low because of this unaccounted-for factor. But at least such estimates can be viewed as realistic, and they’re certainly more realistic than the AR(1) or white-noise estimates.”

  • Walter // December 22, 2009 at 6:33 pm | Reply

    Am I correct in saying that the warming rate was stable (to 1990), then declined to 2005, and then really shot up in 2006?

    [Response: No. Note that the error ranges expand greatly in recent years, so we can't place precise limits on the warming rate based on such short time spans. The most sensible hypothesis is that the warming rate has been stable since about 1975.]

    Also, does the global warming hypothesis assume the warming rate should grow over time?

    [Response: That depends on emissions. If we keep emitting greenhouse gases as we have been recently ("business as usual"), then yes warming will accelerate in coming decades.]

  • WT // December 22, 2009 at 9:04 pm | Reply

    Hansen claimed in 1988 that CO2 forcing caused global warming, based on 13 years of rising temperatures. So can someone explain why 13 years (1997-2009) of zero trend are irrelevant?
    How many years of zero trend would be considered “significant”?

    [Response: If ignorance is bliss you must be deliriously happy. At least, you're delirious.

    Hansen did not claim CO2 caused global warming based on 13 years of rising temperatures. He claimed CO2 causes global warming based on the laws of physics.]

  • Timothy Chase // December 22, 2009 at 9:42 pm | Reply

    Walter asked:

    Also, does the global warming hypothesis assume the warming rate should grow over time?

    Inline, Tamino responded:

    That depends on emissions. If we keep emitting greenhouse gases as we have been recently (“business as usual”), then yes warming will accelerate in coming decades.

    Tamino,

    Under BAU carbon dioxide emissions would grow more or less exponentially. Since temperature grows logarithmically with carbon dioxide levels, one might naively conclude that temperature will grow only linearly with time.

    I take it that what such an analysis does not take into account is the carbon cycle feedback. Shallow methane hydrate deposits along the continental shelves of Siberia. Permafrost and thermokarst lakes. Ocean saturation and higher temperatures turning the ocean from a sink into a source.

    Or are there factors beyond the carbon cycle feedbacks that I am not taking into account?
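
    The “naive” conclusion in the first paragraph can be written out in a few lines of R; this is a sketch, not Timothy’s own numbers, and the values below are placeholders for illustration only. With purely exponential CO2 and a logarithmic temperature response, the implied warming is exactly linear in time, so in this toy picture any acceleration has to come from concentrations growing faster than exponentially, from other forcings, or from feedbacks of the kind listed above.

    S = 3        # deg C per doubling of CO2 (illustrative value only)
    k = 0.005    # fractional CO2 growth per year (illustrative value only)
    C0 = 385     # ppm, roughly present-day
    t = 0:100
    C = C0*exp(k*t)                # exponential concentration
    dT = S*log2(C/C0)              # logarithmic response
    all.equal(dT, (S*k/log(2))*t)  # TRUE: warming exactly linear in t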

  • WT // December 22, 2009 at 9:54 pm | Reply

    CO2 forcing was the mechanism that he proposed. The evidence for global warming was the temperature increase 1975-1988.

    [edit]

    [Response: I'm calling you a liar.]

  • David B. Benson // December 22, 2009 at 10:46 pm | Reply

    Rattus Norvegicus // December 22, 2009 at 6:52 am — I’ve read the claim that while El Nino recurrence varies from 2 to 8 years, 4 years appears to be most common. For a different take, see the Christopher M. Moy, Geoffrey O. Seltzer, Donald T. Rodbell & David M. Anderson letter to Nature linked in a comment in the latest thread.

  • Ray Ladbury // December 22, 2009 at 10:49 pm | Reply

    WT, On the off chance that you have simply been duped by a denialist propaganda site, Arrhenius predicted that the globe would warm due to anthropogenic CO2 way back in 1896. What Hansen said in 1988 was that it looked as if we were starting to see it.
    The warming had been masked previously due to the aerosols released when burning dirty petroleum and coal fuels. With clean air legislation, the aerosols decreased and we started seeing warming in earnest. Please have a look at Spencer Weart’s excellent history of climate change. It will fill in a lot of details that are clearly blank in your mind right now.

    [Response: By 1988 we had worked out that CO2 absorption bands were not saturated, we had learned a lot about paleoclimate, and of course Hansen et al. had a fully functional computer model with which to predict the course of future warming which has since been fully vindicated.

    My guess: WT isn't "duped," he's just an outright liar.]

  • george // December 23, 2009 at 4:52 am | Reply

    steven mosher asks

    Can you help out by posting the parameters you used for your ARMA?

    Tamino already posted on those parameters (calculated from the GISS data for 1975–June 2008, or just a little over one year less than 1975–present) in Alphabet Soup, part 3b: Parameters for ARMA(1,1) (continued), including the values and the details of how he calculated them (so you could re-calculate the values including the additional data since June 2008).

    Remember? I know it was ages ago, but you did comment on that very post.

    BTW, how’s the libel of Michael Mann coming along these days? Still spreading your idiotic juvenile word play (“Piltdown Mann”) far and wide?

  • Riccardo // December 23, 2009 at 12:39 pm | Reply

    Timothy Chase,
    CO2 emissions grow exponentially, not CO2 concentration; the latter is the integral of the former times a constant representing the sinks. This will give you the expected accelerated warming even without other feedbacks. Crudely speaking.

  • David B. Benson // December 23, 2009 at 11:05 pm | Reply

    Riccardo // December 23, 2009 at 12:39 pm — exp(x) is a fixed point under the operation of taking the derivative and so is also a fixed point under the operation of taking the anti-derivative.
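
    Spelled out with a quick check (an addition here, just making the point concrete): differentiating or integrating exp(k*t) only rescales it, so exponential emissions imply an exponentially growing cumulative amount as well.

    k = 0.02
    D(expression(exp(k*t)), "t")                  # exp(k * t) * k: same exponential, rescaled
    integrate(function(s) exp(k*s), 0, 50)$value  # numerically equals (exp(k*50) - 1)/k
    (exp(k*50) - 1)/k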

  • Riccardo // December 25, 2009 at 6:07 pm | Reply

    David B. Benson,
    you’re right, as long as the exponent is exactly a first-order polynomial.

  • tristram shandy // December 26, 2009 at 6:11 pm | Reply

    george,

    I think he wanted to know the parameters used in the current post. They might be different, although tamino could clear it up easily

  • george // December 27, 2009 at 3:54 am | Reply

    Tristram says

    I think he wanted to know the parameters used in the current post. They might be different, although tamino could clear it up easily

    As I commented above

    Tamino already posted on those [ARMA(1,1)] parameters …including the values and the details of how he calculated them (so you could re-calculate the values including the additional data since June 2008)

    Surely, Mosher (a self-proclaimed computer jock at Climate Audit) can use Tamino’s detailed methods (outlined in his previous post) to recalculate the values of the parameters using the somewhat extended interval from 1975-present(!?)

    Then again, based on how much trouble he seemed to have compiling NASA GISTEMP, maybe not.

  • dhogaza // December 27, 2009 at 5:27 am | Reply

    Folks, let’s keep in mind that GISTEMP is doing something that is really very simple – taking station data and making area and zonal averages. It’s the sort of thing that’s very difficult to do “wrong” and the “Y2K” error was rather a surprise.

    It’s been a long time since I’ve read McI, who I believe to be totally dishonest, and lo and behold …

    The Y2K error had to do with data transmitted from Russia, not the GISTEMP FORTRAN program or algorithms. The quality control checks they run didn’t catch it, but that’s not what McI is dishonestly implying is the problem.

    Maybe I should spend time over there, pointing out the obvious.

    “There are some peculiar biases in the present program, but I don’t expect them to have too big an effect.”

    So at this point in time, the CA folk can’t get it to run, don’t think that understanding it by reading it is important, but McI confidently states “there are some peculiar biases in the program” without stating what they are, or which lines of code introduce it.

    “In my opinion, the main purpose of getting it to work is to see what it really does.”

    Just fucking read it, assholes. If you’re as smart as you think you are – smart enough to overturn climate science – you’ll catch the errors without running the program.

    What dreck.

    • Deech56 // December 27, 2009 at 12:45 pm | Reply

      I like this comment:

      anonymous
      Posted May 20, 2008 at 1:05 AM | Permalink | Reply

      I think GISS should be forgotten, thrown away, whatever, just use satellite data and maybe the ocean bobs. Hadcrut land seesm [sic] to be more reliable as well

      Really, so much for showing the world that all this openness will lead to great revelations.[/sarcasm]

      What a year of denial – starting with the Asher article about “recovering sea ice” to the Spencer claim that the added CO2 is natural, to all of the “global cooling” stories. Maybe it’s time to review the denialist memes and see how well they’ve held up. Hold their feet to the fire.

      Nature will win out in the end; we need to be moving in the right direction so that when our society finally gets that “Oh, $#!t” moment we will at least be on the path towards taking steps that are necessary.

  • Steve L // December 27, 2009 at 7:26 am | Reply

    Jim Bouldin, I haven’t read all of what you’re doing, but I think I know of a way to get you past this hurdle. Think of temporal autocorrelation in terms of pseudo-replication — learn a few examples of this general problem and then see how it applies. Permutation is a powerful technique, but regardless of statistical method garbage in = garbage out. In this case, problems in sampling limit the frame of reference for the result.

  • george // December 27, 2009 at 3:27 pm | Reply

    Maybe we should open a betting pool on whether Steven Mosher can himself re-calculate the values of the ARMA(1,1) parameters that Tamino used to estimate the noise for his above post using the methods that Tamino outlined in his alphabet soup part 3b post and the slightly augmented GISS data set.

    I’m betting Wiltdown Mosher can’t do it.

    or, more likely, that he will simply never take up the challenge, which will also indicate (to me at least) that he can’t do it.

  • Florifulgurator // December 27, 2009 at 11:17 pm | Reply

    Great maths. I love it.
    BUT isn’t it overkill? I mean: Isn’t it obvious from the temp chart that this “cooling since 1998” is grotesque BS? Anybody who peddles this BS either 1) has never seen a 30-year temp chart, or 2) hallucinates, or 3) is a liar. Don’t waste any more time than it takes to state this conclusion.

  • Ray Ladbury // December 28, 2009 at 1:41 am | Reply

    Florifulgurator says: “Great maths. I love it.
    BUT isn’t it overkill? I mean: Isn’t it obvious from the temp chart that this “cooling since 1998” is grotesque BS?”

    Evidently not–at least not to everyone. Of course they won’t even read this, but I think it is an excellent illustration of why “30 years is climate” is not arbitrary.

  • Florifulgurator // December 28, 2009 at 2:57 am | Reply

    (Ray, my problem is I once was trained in stochastic calculus, so for me it’s glaringly obvious.)

    I’m still quite sure it is also obvious to healthy lay persons: put on your glasses and follow the wiggles/fluctuations.

    The problem for those poor souls is not the maths or interpreting noisy graphs. It’s a psycho problem. You can’t do anything about that by doing maths at them! I meanwhile classify these challenged (and challenging) individuals as a separate species of hominids, Homo Antisapiens Antisapiens. Maths and empiricism are not theirs, and they are most easily insulted by telling them facts.

    ’nuff brainlard wasted!

  • dhogaza // December 28, 2009 at 4:35 am | Reply

    The problem for those poor souls is not the maths or interpreting noisy graphs. It’s a psycho problem.

    If you’re suggesting that American libertarians, and other far-right wingnuts (including many bible-thumpers), have a psychological barrier that makes it difficult for them to accept science …

    No one here’s going to argue with you :)

    Now as to why there are enough of these people in the US Senate to make passing a climate bill tough … that’s just reality at this point.

  • Hank Roberts // December 28, 2009 at 5:59 am | Reply

    How numerous these innumerates are.

  • Deech56 // December 28, 2009 at 1:36 pm | Reply

    RE dhogaza

    Now as to why there are enough of these people in the US Senate to make passing a climate bill tough … that’s just reality at this point.

    When ideological and business interests converge…

    As Ray noted in his excellent, eloquent post in the other thread, “Our secret weapon is the truth.” But we are dealing with people who believe they create their own reality.

  • Ray Ladbury // December 28, 2009 at 1:43 pm | Reply

    Florifulgurator,
    Actually, I would say that denial is an all too human characteristic. If we go to the doctor and are screened for cancer, and the results come back negative, we hardly ever ask for a second opinion, regardless of the false negative rate. OTOH, the first thought when we get a positive is, “Well, maybe the test is wrong.”

    Humans tend to limit their risk calculus to immediate rather than long-term consequences. Thus, the risk of facing illness, chemo, doctors’ bills, weighs more heavily than even the risk of eventual death.

    Science (and engineering) tell us that this is wrong. 90% confidence is 90% confidence, whether the subject matter is winning the jackpot or death. And in a world where we’ve mostly eliminated threats like having a leopard fall on you from out of a tree, the scientific way of objective risk calculus is the way we need to look at things.
    I think that in a way Lomborg’s insistence on discounting, even as he refuses to admit the unknown risks that arise from delayed action, constitutes a slightly more sophisticated approach to the same sort of distorted risk calculus.

    We have to understand that quantitative and objective risk analysis is a relatively new idea–certainly less than two centuries old. It is not surprising that it hasn’t caught on, especially when you have people trying to reinforce our ingrained tendencies by telling us it’s ok to believe whatever we want to believe.

  • Florifulgurator // December 28, 2009 at 2:56 pm | Reply

    Ray, I’d like to modify the cancer diagnosis picture thus: The doc shows an X-ray or CT image where the cancer is obviously visible (e.g. some big speck eating up the pancreas). And the patient already needs morphine against the pain.
    Would a mentally healthy patient deny there’s a problem?

    My point is: Elaborate statistical explanations are futile to convince denialists. The “cooling since 1998″ crowd needs attention by psychiatrists and criminologists, but not mathematicians.

    Tamino sure has better and more productive things to do. It’s such a sad waste of a great educator’s energy.

    • Gavin's Pussycat // December 28, 2009 at 3:32 pm | Reply

      Florifulgurator, I beg to differ. Yes, dyed-in-the-wool denialists are never going to be convinced. But for every denialist there are many victims, members of the public taken in by these simplistic arguments. Statistical literacy is not at all common.

      Tamino is talking, over the denialists’ shoulders, to these victims, who may be lurking quietly on this site. It’s a bit like the work of those who help deprogram people escaping from sects. In those terms I think it is a valuable effort, also because many folks don’t respond kindly to being lied to. They become our allies.

  • Ray Ladbury // December 28, 2009 at 3:50 pm | Reply

    Well, unless you want to dismiss a large proportion of the population as crazy (an arguable position, I will grant)…

    We have to realize that the human brain is even better at rationalizing than it is at rational thought. That’s precisely why we need science and statistics–to keep us from lying to ourselves. Even someone as smart as Freeman Dyson manages to fool himself–not because he doesn’t understand the physics, but because it doesn’t fit in with his rosy view of a boundless human future in technological paradise. He’s managed to convince himself that even though it is undeniable that we are changing the climate, and even if things are as bad as they say, we’ll just engineer carbon-gobbling trees.

    Our brains evolved in a big scary world that we couldn’t control, and so they developed comforting lies that made it seem as though someone or something was driving the bus–be it deities or the force of history or manifest destiny or whatever. Our brains are very good at conjuring what isn’t. We need rigorous methods to make us pay attention to what is.

    Some of us are fortunate. We tend to think in a quantitative way. But look at the “Cyclical? Not” thread. Max is convinced he sees cycles there–because he WANTS TO SEE THEM.

    Tamino has done a good job giving the A students something to chew on while he occasionally drops a post designed to reach the D and F crowd.

  • Hank Roberts // December 28, 2009 at 5:04 pm | Reply

    Yup. I’ve been wishing for something like a Tamino-moderated, more focused conversation including the RC and other real scientists, talking about the really interesting stuff (which includes much uncertainty) — for those of us who are ‘B’ and ‘C’ students to watch and learn from.

    It might need to be by invitation only, though, to keep the copypasters from taking stuff out of context and making messes elsewhere, even if they weren’t allowed to crap in the actual thread.

    A climate ‘graduate school’ by invitation only, for us camp followers (grin)

  • Eric Rasmusene // December 29, 2009 at 5:22 pm | Reply

    Hi. I wrote up a lengthy response to this good post at
    http://rasmusen1.blogspot.com/2009/12/regressions-and-global-warming.html

    My post starts like this:

    The webpost http://tamino.wordpress.com/2009/12/15/how-long/ has a nice step-by-step exposition of how to estimate whether there is a warming trend in temperature data 1975-2008, first using OLS, then using an AR-1 process, then an ARMA. The trend is significant. But the post is responding to the observation that the trend has flattened out since 2000. It doesn’t really respond to that.

    To see why, note the graph above. It has artificial temperatures that rise from 1975 to 2000 and then flatten out. If you do an OLS regression, though, YEAR comes in significant with a t-statistic of 25.33 and an R2 of .95.

    [Response: It looks like you don't understand what this post says, or what it's about.

    First: this post is about how long a time span is required to establish a non-zero trend in GISS data ending at the present day. I have in the past directly addressed the claim that "the trend has flattened out since 2000." The recurrent theme is that such a short time span contains too little information to tell. I have emphasized that those who claim to show a flattening since 2000 (or 1998, or 2001, or 2002, or whatever the cherry-point du jour) are foolish because there's no valid evidence to that effect. I repeat: such short time spans don't tell us one way or another. Under that circumstance, both Occam's razor and the laws of physics favor the continued-warming hypothesis. But I repeat: the bare statistics of data since 2000 (or whatever too-recent time) won't answer the question. Hell, that's what this post is about.

    Early in the post I stated that "short time spans don’t give enough data to establish what the trend is, they just exhibit the behavior of the noise." As for your artificial data, from your graph it looks like it's noise-free data -- certainly if there is noise it's vastly smaller than the signal -- so it hardly applies to the real world situation.

    You've also drawn conclusions about my analysis based on artificial data for which the signal is nonlinear. That raises a host of interesting issues, which might make for a good "teaching moment" if you're willing to learn. But it's not relevant to GISS data since, although the signal is almost certainly nonlinear since 1975, it's statistically indistinguishable from a linear trend plus noise.]

  • Ray Ladbury // December 29, 2009 at 6:48 pm | Reply

    Tamino, put Eric down for special needs.

  • Eric Rasmusene // December 30, 2009 at 1:56 am | Reply

    Let me put things a bit differently. You say:

    “It’s the trend that matters, and is cause for great concern, and there’s no evidence at all that the trend has reversed, or even slowed.”

    Equally, there is no evidence that the trend has continued. The data were noisy enough that the trend was hard to pick up even earlier, and now the evidence of steady warming is getting weaker over time. The evidence for warming over 1975-2000 is as strong as it’s ever been. But the data certainly can’t reject the hypothesis that warming has halted, and if we just look for the best-fitting model, I bet that’s what we’d pick.

    [Response: I bet that's what you would pick because you're too easily fooled.]

  • Kevin McKinney // December 30, 2009 at 6:01 am | Reply

    Eric, if it was just about correlations, we might.

    But it’s not–we have a physical mechanism well-validated by experimental observations. That makes “Gee, maybe it will all just go away” rather a mug’s game.

  • Rattus Norvegicus // December 30, 2009 at 4:56 pm | Reply

    It’s amazing what you can do with Excel and just enough knowledge to be dangerous.

  • Ray Ladbury // December 30, 2009 at 5:22 pm | Reply

    Eric, you are positing a much more complicated model than a simple linear fit. In effect you have two linear fits, plus the date of the switch between the two regimes. This is of course in addition to the characterization of the noise about the above “mean” behavior. So, in effect you are going from something like a 3 parameter model to a 5-7 parameter model.

    If you consider the Akaike Information Criterion, the likelihood for your fit would have to be 7.4 to 50 times better to justify the additional complexity–and that is not taking into account all of the correlations in the data. That sounds to me like a fairly tall order. Effectively, you are trying to follow in the footsteps of the Wall Street Urinal:

    http://blogs.discovermagazine.com/cosmicvariance/2007/07/13/the-best-curve-fitting-ever/
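
    For readers wondering where the “7.4 to 50 times” figure comes from, here is a sketch of the arithmetic (a reading of it, not necessarily Ray’s exact calculation): AIC = 2k - 2*ln(L), so adding dk parameters is only justified if the likelihood improves by more than a factor of exp(dk). Going from roughly 3 parameters to 5-7 means dk = 2 to 4:

    exp(2)   # about 7.4
    exp(4)   # about 55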

  • FredT34 // December 30, 2009 at 5:28 pm | Reply

    Eric, can’t you see that if you choose a short time span, any hot year will explode the “global warming stopped in 1998” meme?

    While the same hot year probably wouldn’t change the slope of the trend much over a 30-year period?
