Open Mind

Cyclical? Probably not.

December 31, 2009 · 143 Comments

It’s way too easy to look at data and think you see cycles. After all, if it went down, then up, then down, then up — it must be cyclic, right? The amount of analysis that goes into such conclusions is often limited to “Looks pretty cyclical to me.” Such “analysis” is tantamount to seeing a rock formation on Mars that looks vaguely like a face, and concluding that aliens constructed it millions of years ago as a message to future humanity.

The tendency to see cycles where there are none spurs people to extrapolate their imaginary “cycles” into the future, leading to some expectation about economic recovery, or a World Series victory for the Mets, or, all too commonly, imminent global cooling. It also enables those in denial to explain away the global warming we have observed with a wave of the hand, dismissing it as “natural cycles.”


Setting aside the folly of an overactive imagination, let’s consider the question: how do we tell whether some phenomenon is cyclic? We should first define what we mean by “cyclic.” Merriam-Webster online defines it as “of, relating to, or being a cycle” — which is no help at all! They do, however, define “cycle” thus:


1: an interval of time during which a sequence of recurring events or phenomena is completed
2: a course or series of events or operations that recur regularly and usually lead back to the starting point

Wiktionary defines cyclic this way:


Characterized by, or moving in cycles, or happening at regular intervals.
A process that returns to its beginning and then repeats itself in the same sequence.

The essence of these definitions — the essence of cyclic behavior — is that a pattern will recur. To be cyclic, it has to repeat: not just once or twice, as could happen by accident, but with sufficient regularity, and often enough, that given the most recently observed cycles we can make at least some useful prediction about the behavior during the next cycle.

The repetition doesn’t have to be perfect! Consider for example, the light curve (plot of brightness as a function of time) for the variable star Mira (magnitude is an inverted scale of brightness, so smaller numbers mean greater brightness, which is why the numbers on the y-axis go from bigger to smaller):

Clearly the ups and downs of Mira repeat, but just as clearly, each cycle is a bit different from the others; like snowflakes, no two are alike. Nonetheless, they’ve repeated often enough that we can be supremely confident they’ll continue at least into the immediate future (for the next few cycles, and more likely than not for the next several hundred), and we can certainly make a useful prediction of the course of this star’s brightness in the near future. Hence we can conclude that the brightness variations of Mira are periodic, although they’re not perfectly so.

There are numerous ways to search for, and test for, periodic behavior in time series. Probably the most common is Fourier analysis, which gives rise to the Fourier periodogram. Yet even then we have multiple versions of the Fourier periodogram, such as the Fast Fourier Transform (FFT), the Lomb-Scargle modified periodogram, the date-compensated discrete Fourier transform (DCDFT), etc. Even within a given method there are different conventions for definitions, especially about how the periodogram is scaled. In geophysics, e.g., it’s common to scale the periodogram so that the total integrated power is equal to 1, while in astronomy the custom is to scale the periodogram so that the average power level is equal to 1. Here’s a Fourier periodogram (the DCDFT using astronomical scaling) of the Mira data:

The periodogram is dominated by a tall spike at frequency 0.002987 cycles/day, corresponding to a period 334.7 days. That’s the mean period for Mira during this time span; each cycle is a bit different, and individual periods will vary, but the mean is reasonably stable (which helps us make useful short-term predictions).
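For readers who want to try this themselves, here’s a minimal sketch of a periodogram for irregularly sampled data using SciPy’s Lomb-Scargle routine. The light curve below is a synthetic stand-in (a noisy 334.7-day sinusoid), not the actual Mira observations, and the mean-power-equals-one scaling follows the astronomical convention described above:

```python
import numpy as np
from scipy.signal import lombscargle

# Synthetic stand-in for the Mira light curve: a noisy sinusoid with a
# 334.7-day period, observed at irregular times over roughly 22 years.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 8000, 600))           # irregular sampling times (days)
true_freq = 1.0 / 334.7                          # cycles per day
y = np.sin(2 * np.pi * true_freq * t) + 0.3 * rng.standard_normal(t.size)

# Scan a grid of trial frequencies (lombscargle expects angular frequencies).
freqs = np.linspace(0.0005, 0.01, 5000)          # cycles/day
power = lombscargle(t, y - y.mean(), 2 * np.pi * freqs)

# "Astronomical" scaling: normalize so the average power level equals 1.
power /= power.mean()

peak_freq = freqs[np.argmax(power)]
print(f"peak at {peak_freq:.6f} cycles/day -> period {1/peak_freq:.1f} days")
```

The DCDFT used for the figure above treats irregular sampling a bit differently, but for a signal this strong either method puts the tall spike at essentially the same frequency.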

There are ways other than Fourier analysis to compute a periodogram; one of the most ingenious is based on applying the analysis of variance (AoV), and was developed as a period-search method which is very fast to compute. Here’s the AoV periodogram for Mira:

Both Mira periodograms are dominated by a single tall peak at the genuine signal frequency, which indicates the actual periodic behavior (and enables us to estimate the frequency, and therefore the period). They also show smaller peaks at 2 and 3 times the signal frequency, indicating the presence of overtones in the spectrum, as well as a number of other small peaks which don’t indicate genuine periodic behavior at all, especially the one at very low frequency.
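The core of the AoV idea is simple enough to sketch: fold the data at each trial period, bin the phases, and ask whether the between-bin variance dominates the within-bin variance. Here’s a bare-bones illustration (my own simplified version of the method, run on a synthetic Mira-like sinusoid rather than the real data):

```python
import numpy as np

# Minimal AoV period search: at each trial period, fold the data in phase,
# bin the phases, and compute an F-like ratio of between-bin to within-bin
# variance. A true period concentrates the variance between bins.
def aov_statistic(t, y, period, nbins=10):
    phase = (t / period) % 1.0
    bins = np.minimum((phase * nbins).astype(int), nbins - 1)
    grand = y.mean()
    between = within = 0.0
    for b in range(nbins):
        yb = y[bins == b]
        if yb.size == 0:
            continue
        between += yb.size * (yb.mean() - grand) ** 2
        within += ((yb - yb.mean()) ** 2).sum()
    # Larger statistic means the fold at this trial period is "cleaner".
    return (between / (nbins - 1)) / (within / (y.size - nbins))

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 3000, 400))           # irregular sampling (days)
y = np.sin(2 * np.pi * t / 334.7) + 0.3 * rng.standard_normal(t.size)

periods = np.linspace(200, 500, 600)
aov = np.array([aov_statistic(t, y, p) for p in periods])
best = periods[np.argmax(aov)]
print(f"best period: {best:.1f} days")
```

The production versions of this method are far more cleverly optimized; the point here is only that the statistic itself is an ordinary analysis of variance applied to phase bins.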

Both periodograms also return a “power” level which we can treat as a test statistic for periodic behavior. For Fourier analysis, the power level is often viewed as proportional to a chi-square statistic with 2 degrees of freedom, and the highest power level is treated as the maximum of a number of independent chi-square statistics with 2 degrees of freedom. Unfortunately, using these statistics to conclude periodicity is a very problematic procedure. For one thing, the null hypothesis on which these statistics are based is that the data are nothing but white noise. A significant value doesn’t mean the data are probably periodic, it just means the null hypothesis is probably not true — and there are lots of ways for that to happen. For example, any nontrivial signal can lead to exaggerated power levels, especially at low frequencies. Here, for instance, is the Fourier periodogram of data which follow a perfectly straight line with no noise:

Many of the low-frequency peaks are well above the “statistical significance” level, but as I say, that only indicates that these data are not pure white noise. Which we already knew. We can take the straight-line data and add noise, in particular noise which isn’t white, giving this:

The periodogram looks like this:

Again, quite a number of the periodogram peaks are “significant,” but none of them signifies periodicity; these data are a straight-line signal plus noise. Significance doesn’t demonstrate periodicity, it just negates the null hypothesis.
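The trend effect is easy to reproduce. The sketch below (my own construction, not the exact data plotted above) runs a noise-free straight line through an FFT periodogram and counts how many values exceed the naive white-noise “significance” threshold:

```python
import numpy as np

# A noise-free straight line, evenly sampled, and its Fourier periodogram.
n = 512
t = np.arange(n)
y = 0.01 * t                                     # pure linear trend, no noise
y = y - y.mean()

power = np.abs(np.fft.rfft(y)) ** 2 / n          # one-sided periodogram
power = power[1:]                                # drop the zero-frequency term
power /= power.mean()                            # scale so mean power = 1

# Under the white-noise null, scaled power is roughly exponential; a common
# threshold for the largest of m values is ln(m / alpha).
m = power.size
threshold = np.log(m / 0.05)

print("peaks above the naive threshold:", int(np.sum(power > threshold)))
print("largest peak is at the lowest frequency:", np.argmax(power) == 0)
```

There is no cycle here at all, yet several low-frequency values sail past the threshold, because the trend concentrates power at low frequencies.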

Another problem arises when the time spacing of the data (the sampling) isn’t perfectly regular. Then, the values of the usual Fourier (or other) periodogram aren’t independent, so we can’t treat them as a set of independent statistics. Furthermore, even with regular time sampling the Fourier periodogram values are only independent at certain pre-defined frequencies. But it’s commonplace (and in fact often highly desirable) to sample the Fourier periodogram at a lot of frequencies which don’t fall into the predefined set: to oversample the periodogram. Then the Fourier values aren’t independent even if the time sampling is regular and the data are pure white noise! The bottom line is that there are many ways for the simple “test statistic” treatment of periodogram analysis to go wrong. This is especially true for low frequencies (periods which are long compared to the total time span of data).
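The non-independence from oversampling is easy to see numerically. Below is a sketch (my own construction) comparing adjacent periodogram values of pure white noise on the natural Fourier grid versus a 4x-oversampled grid:

```python
import numpy as np

# For Gaussian white noise, periodogram values at the natural Fourier
# frequencies k/n are independent, but values on an oversampled grid are
# strongly correlated with their neighbors, so they cannot be treated as
# independent test statistics.
rng = np.random.default_rng(4)
n = 512
y = rng.standard_normal(n)

def periodogram(y, freqs):
    t = np.arange(y.size)
    e = np.exp(-2j * np.pi * np.outer(freqs, t))
    return np.abs(e @ y) ** 2 / y.size

natural = np.arange(1, n // 2) / n               # the "predefined" frequencies
oversampled = np.arange(1, 2 * n) / (4 * n)      # 4x finer grid, same range

p_nat = periodogram(y, natural)
p_over = periodogram(y, oversampled)

corr_nat = np.corrcoef(p_nat[:-1], p_nat[1:])[0, 1]
corr_over = np.corrcoef(p_over[:-1], p_over[1:])[0, 1]
print(f"adjacent-value correlation: natural grid {corr_nat:.2f}, "
      f"oversampled grid {corr_over:.2f}")
```

Oversampling is still worthwhile (it keeps a peak from falling between grid points), but it wrecks the simple independent-statistics story.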

It’s my opinion that a better way to treat a Fourier periodogram is to consider the power levels of the peaks to be proportional to a chi-square statistic with three degrees of freedom. Furthermore, the constant of proportionality is not what a naive analysis would indicate; in fact it can change throughout the spectrum, so we see different regimes in frequency space. Hence it should be estimated from the “background” level of the spectrum, specifically in the neighborhood of the peak we happen to be testing. All of which makes for a very complicated procedure, with a lot of uncertainties that are hard to quantify! Welcome to period analysis.

That’s what I do in practice: compare the level of a periodogram peak to the levels of the other peaks in its neighborhood. In other words, one uses the “background” level to define the scale, and tests whether the given peak is sufficiently taller than its neighbors. After many years, one gets a good intuitive “feel” for significance, which is refined by the many times one fooled oneself into believing periodicity when there was none. Generally, one learns to be conservative, and not apply the label “periodic” without considerable confidence. As for claims about periodicity based on the “standard” treatment of test statistics from periodograms — I don’t put much stock in that. In fact, other than in the most exceptional circumstances I don’t put any stock in it at all!
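There’s no standard recipe for this, but a minimal interpretation of the neighborhood comparison might look like the following (a sketch of the idea only, not the author’s exact procedure; the function name and window size are my own choices):

```python
import numpy as np

# Scale a candidate peak by the median periodogram level in a frequency
# window around it: "how many backgrounds tall is this peak?"
def peak_vs_background(power, i_peak, halfwidth=50):
    lo = max(0, i_peak - halfwidth)
    hi = min(power.size, i_peak + 1 + halfwidth)
    neighbors = np.r_[power[lo:i_peak], power[i_peak + 1:hi]]
    return power[i_peak] / np.median(neighbors)

rng = np.random.default_rng(1)
n = 2048
t = np.arange(n)
y = np.sin(2 * np.pi * t / 128) + rng.standard_normal(n)   # signal + white noise

power = np.abs(np.fft.rfft(y - y.mean())) ** 2
i_peak = np.argmax(power[1:]) + 1

ratio = peak_vs_background(power, i_peak)
print(f"peak is {ratio:.0f}x its local background")
```

The median (rather than the mean) keeps one tall peak from inflating its own background estimate. A genuine signal towers over its neighborhood; a noise fluctuation does not.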

One also learns that very low-frequency periodogram peaks are just plain untrustworthy. If the period is longer than the time span, then of course one can’t conclude periodic behavior — there hasn’t been enough observed time for the “cycle” to repeat at all! Even if the time span is two full periods, so we’ve seen a full “repetition,” I never trust conclusions of periodicity. There are just too many other signals that look periodic on such a short time frame, and they happen far too often, for a single repetition to justify a conclusion of periodicity. Even with two full repetitions (covering three full “periods”) I tend to be skeptical — but at least then I begin to entertain the idea that genuine periodic behavior is plausible.

Not all periodic behaviors are visible in a time series plot; that’s one of the reasons we apply period analysis. Not all apparent periodic behaviors in a time series plot are genuine; that’s another reason we apply period analysis. And not all apparent significant values in a period analysis are genuine either! That’s why we apply healthy, hefty skepticism and lots of experience. Hence the title of this post. Asking whether or not your data show genuine periodic behavior is a bit like asking whether or not you can afford a Rolls-Royce automobile: if you have to ask, probably not.

Categories: Global Warming · mathematics

143 responses so far ↓

  • Andy // December 31, 2009 at 6:25 pm | Reply

    Tamino, great treatment as always.

    I have spent a lot of time analyzing the Pacific Decadal Oscillation. It’s the “O” in “PDO” that drives me bonkers. Two or three switches in sign does not an oscillation make! There was a movement a few years back to call it Pacific Decadal Variability but that didn’t stick. Alas. Enough whinging. Here’s my question: Since you are born again as a Bayesian, can you construct a better null than white noise? One that distinguishes between long-lived phenomena that follow an ARMA(p,q) model and a random pattern? The standard null model in a lot of geophysics is now an assumption of red noise – often an AR1 model with some reasonably high coeff for the first order autocorrelation. But it seems to me that you’d like to specify a null that relies on a mechanism.

    I suppose I’m Bayes-curious. How would you do it?

    [Response: I agree completely about PDO. It's an "oscillation" in the sense that it's a variation, but I don't see any firm evidence of periodicity. But the name "oscillation," whether correct or not, is stuck.

    I don't think you need to "go Bayesian" to improve the evaluation of period analysis. But it makes for some great puns.

    Alternative noise processes, and non-periodic signals, are what makes the scaling for a Fourier transform different from the "naive" (white-noise model) scaling. But they don't change the scaling for a single frequency, rather for a broad range of frequencies, so they're also what makes the scaling change from one period regime to another. That's why I advocate using the "background" level in the "neighborhood" of a spectral peak, to scale it for evaluation. Doing so accommodates the alternative behaviors you're talking about. So I think that's the right way to go about it.]
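A quick numerical sketch of the red-noise point in this exchange (my own illustration, using an AR(1) process with a first-order autocorrelation of 0.9):

```python
import numpy as np

# An AR(1) process with strong autocorrelation has no periodic component at
# all, yet its periodogram piles power into low frequencies; exactly where
# imaginary long "cycles" get reported.
rng = np.random.default_rng(2)
n, phi = 4096, 0.9
y = np.zeros(n)
for i in range(1, n):
    y[i] = phi * y[i - 1] + rng.standard_normal()

power = np.abs(np.fft.rfft(y - y.mean())) ** 2
power = power[1:] / power[1:].mean()             # mean power = 1 scaling

tenth = power.size // 10
low, high = power[:tenth].mean(), power[-tenth:].mean()
print(f"mean power, lowest tenth of frequencies:  {low:.1f}")
print(f"mean power, highest tenth of frequencies: {high:.2f}")
```

Scaling each peak by its local background, as described in the response above, automatically absorbs this sloped noise floor, which is exactly why the naive flat white-noise threshold fails here.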

  • Andy // December 31, 2009 at 7:15 pm | Reply

    Tamino, thanks for your thoughtful reply. I think I now understand how using the background level allows you to create a better test. Got it. But at the end you are still asking if some signal is periodic or not periodic, right?

    The issue in many earth processes (be they incoming shortwave radiation, ocean currents, whatever) is that autocorrelation can give the impression of cyclic behavior from a simpler model. Not knowing a helluva lot about Bayesian modeling, can you compare higher-order models that correspond to an understood _mechanism_ and have a model cook-off, rather than a comparison to a null? In some cases that would be desirable and in some cases not, I suppose, but I usually want to know why something shows a pattern and not just if something shows a pattern.

    Correct me if I’m wrong (please!).

    (PS. Regarding our innate desire to see patterns. When I took a spatial stats class in grad school the instructor asked us to draw a random pattern of dots on a screen that we then analyzed for clumping. Almost everybody in the class ended up overcorrecting and drew a uniform pattern because they saw clusters that didn’t really exist. It was an interesting lesson in our tendency to see patterns that aren’t there.)

    • Joel Shore // January 10, 2010 at 2:15 am | Reply

      Yup…This desire to see patterns is quite strong. In my work, I was involved in some simulations of the random packing of discs…and the random packing of colored discs (e.g., R, G, and B)…And, indeed, the degree of clumpiness in the random patterns did tend to make people think that they were not really random. (And, in fact, the original motivation for undertaking these simulations was to test against the real physical system where people thought clumping was occurring. We wanted to demonstrate that people would see the same thing in a pattern that we knew for a fact was random.)

  • Philippe Chantreau // December 31, 2009 at 7:35 pm | Reply

    Very interesting once again. This is really the stuff you do best, I feel enlightened :-)

  • Cugel // January 1, 2010 at 2:29 am | Reply

    Thank you so much for this post. As an old dog faced with new tricks this is exactly what I need.

  • Cugel // January 1, 2010 at 2:48 am | Reply

    Andy :

    I agree wholeheartedly about the PDO. A few “cycles” at best when first proposed, and recent behaviour does not support the hypothesis.

  • John Mashey // January 1, 2010 at 3:06 am | Reply

    Good material as usual.
    Nit: “how the peirodogram is scaled”

    [Response: Fixed, thanks.]

  • billy t // January 1, 2010 at 8:57 pm | Reply

    Don’t forget truncation effects – e.g. the apparent ‘peaks’ in your periodogram of the flat line are because you’re seeing the Fourier transform of a square (truncated line).

  • David B. Benson // January 2, 2010 at 12:06 am | Reply

    Andy // December 31, 2009 at 7:15 pm — I don’t think it is a simple binary choice between periodic and not periodic. Rather, it is a (not very well defined, for me at least) notion of predictability, forecasting with skill. So here are four stages:
    (1) periodic;
    (2) pseudoperiodic, as in sunspot cycles;
    (3) quasi-periodic oscillations, maybe ENSO qualifies;
    (4) other “oscillations”.
    I don’t (yet) see this issue as a Bayesian matter, except to the extent that the maximum likelihood ideas used during model training are Bayesian.

    I believe I am at least as skeptic as Tamino when it comes to declaring “cycles!” In
    Variability of El Niño/Southern Oscillation activity at millennial timescales during the Holocene epoch
    http://www.nature.com/nature/journal/v420/n6912/full/nature01194.html
    there is a claim of detecting an approximately 2000-year period “cycle” after seeing but 6 periods. Sorry, but from looking at the wavelet figure and the text proposing a mechanism, I view this as simply some of the (1/f^1.5) pink-red noise which happens to look sorta periodic during the Holocene, in the proxy used. [It is quite a good paper, just not on this one point.]

    However, some papers by McCoy and co-workers have proposed GARMA models to treat what are thought to be periodicities in a time series. One could use their MCMC procedure to develop GARMA and also ARMA models for the time series and then use BIC for the model cook-off.

    • AndyB // January 4, 2010 at 9:19 pm | Reply

      Looks like we have a 2-Andy problem. I’m the one from the first two comments. David, I intuitively like the approach of comparing models against each other rather than against a null. But I’m a babe in the woods when it comes to BIC vs AIC and so on. The philosophy fits my Occam’s razor approach to science in general.

  • HankHenry // January 2, 2010 at 5:50 pm | Reply

    Hey, there aren’t just faces on Mars. Don’t forget the canals.

  • Andy // January 3, 2010 at 4:11 am | Reply

    Ok, any opinions out there regarding the Atlantic Multi-decadal Oscillation? Whoever wrote the Wiki entry sees it as likely to be real, but I also recall some controversy over whether or not this was an artifact of global-warming-forced sea surface temp changes.

  • Hank Roberts // January 3, 2010 at 5:21 am | Reply

    > Andy
    > recall some controversy….

    Can you find that again?
    I spent about 20 minutes searching just now and didn’t find anything resembling that either with Google or Scholar.

  • David B. Benson // January 3, 2010 at 11:03 pm | Reply

    Andy // January 3, 2010 at 4:11 am — I am (currently) of the opinion that the AMO is just noise. See
    http://en.wikipedia.org/wiki/Atlantic_multidecadal_oscillation#Prediction_of_AMO_shifts

  • Todd Friesen // January 4, 2010 at 2:04 am | Reply

    Regarding AMO, I asked Real Climate, and Mike responded with this:

    [Response: I commented on this very issue in an earlier comment thread a few months back: "In control simulations with GCMs such as used in Delworth and Mann (2000) and Knight et al (2005), there is no need to separate the internal multidecadal variability from the forced long-term trend, because there is no change in radiative forcing, and thus no forced (including anthropogenic) large-scale trend to contaminate the estimate of internal multidecadal variability. No such luck in the real world, where both are present. In that case, one needs to use some technique for separating the multidecadal variability from the long-term trend. In many papers, this is simply done by subtracting off a linear trend and defining the residual as e.g. the "AMO". I don't particularly like that approach, because the radiatively forced temperature trend is extremely unlikely to be linear in time [this is an issue we discussed in Mann and Emanuel (2006)]. I prefer frequency-domain signal detection techniques such as the “MTM-SVD” technique (for obvious reasons) which was employed by both Delworth and Mann (2000) and Knight et al (2005).” -mike]

    Mann and Emanuel (2006) is found here:

    http://www.meteo.psu.edu/~mann/shared/articles/MannEmanuelEos06.pdf

    “obvious reasons” was hyperlinked to this:

    http://holocene.meteo.psu.edu/shared/articles/mtmsvd.pdf

    To me, linear de-trending is very suspect. With regard to AMO signal, see the Mann link above.

  • Andy // January 4, 2010 at 5:19 am | Reply

    Hank: this is embarrassing but I don’t know how to embed links in comments yet.

    Look at two Real Climate Posts: Hurricane Spin and a guest post by Crowley – NOAA: Hurricane Forecasts. Also look at Santer et al.’s paper that tries to tease out climate change impacts on N. Atlantic SST trends from the AMO.

    I thought Emanuel had a comment on the AMO somewhere but I couldn’t find it. He does mention “proponents of the AMO” (i.e. Dr. Gray) as if he wasn’t one of them.

    http://www.pnas.org/content/103/38/13905.abstract

    While most of these don’t dismiss the AMO, they seem to relegate it to a role as a cyclical SST temperature change that is perhaps an artifact of other overlapping “true oscillations”, i.e. two sets of waves that occasionally lap into one another creating an apparent third set.

    This is my opinion and I may be inserting something into these articles that isn’t there.

    The important point is: oscillations can be pulled out of global temperature analyses to try to provide a true climate trend. Does the AMO qualify as such?

  • Aslak // January 4, 2010 at 8:35 am | Reply

    ANDY: I agree with David that the term “Atlantic Multidecadal Oscillation” is founded on an extremely weak basis. I guess that is why you brought it up. How it ever gained traction is beyond my understanding.

    From wikipedia: “The AMO signal is usually defined from the patterns of SST variability in the North Atlantic once any linear trend has been removed.” This definition does not make any sense without a defined baseline period. Of course this residual contains “real signal”, and it is plausible that it is linked to Atlantic overturning, but to call it an “oscillation” is very premature given the evidence.

    Imagine how a reconstruction of past AMO would look (do a Google Scholar search). I dare you to tabulate all the periodicities used for the AMO in the literature. Most often the AMO is just used as “the low-freq variability I can not account for”.

  • Sergey // January 4, 2010 at 6:20 pm | Reply

    I am glad I found this blog on suggestion from a friend. This is exactly the issue I am struggling with, but in a different domain. I am working with spontaneous speech samples and the cyclical nature of hesitation observed in spontaneous narratives. There has been some work by Roberts and Kirsner, 2000 in Language and Cognitive Processes that uses time series analysis to demonstrate quantitatively that when we speak we tend to alternate between relatively hesitant and relatively fluent segments at regular intervals. As part of their methodology, they used power spectra and autocorrelation plots to determine if their speech data contained a periodic signal.

    I am currently trying to replicate these analyses on some of my own data but starting to wonder a bit if I am asking whether I can “afford a Rolls-Royce”. Most of the data I am working with has a very clear peak in the power spectrum, but usually at the lower end of the frequency range. I have lots of samples for each subject, so even at this low frequency I do see several repetitions. So, according to Tamino, this is a bit more trustworthy than if I saw a cycle repeating only once or twice. A typical cycle I am observing is around every 10-15 seconds in a 200-400 second speech sampled at 200 milliseconds. I tried creating a white noise signal as well as a perfectly periodic signal (using a sinusoidal function) at the same frequency as what I think I see in the real data. The real data fall right in between the two extremes. I guess what makes me feel a bit more comfortable is that I am not claiming to be able to make predictions using these observations. All I am trying to do is compare two (or more) populations with different proportions of the energy in the highest peak to the total energy in the periodogram.

    I guess I am struggling with the following question: If I find a significant difference between the proportion of the mean energy in the peak to the total mean energy in two different populations (without even assessing if the individual peaks are statistically significant), can I then claim that I have found a characteristic that is sensitive to the differences in the two populations with respect to the degree of regularity with which they hesitate in spontaneous speech? I don’t know if this question makes sense; I am very new to time series analysis, so I am just seeking some guidance. Also, I realize that my question has nothing to do with environmental science, but I think the underlying methodology for time series analysis and interpretation would be somewhat similar.

    Thanks,
    Serguei

  • Ray Ladbury // January 4, 2010 at 7:41 pm | Reply

    Serguei,
    Hmm, interesting research. A couple of questions. Is there any objective reason for such a period — e.g. the length of an average sentence in the speech? Does it persist across different speeches? Can you correlate it with other information, e.g. eye movement?

    How much does the frequency vary from one period to another? It sounds as if you might have a periodicity. A mechanism might add confidence.

    • Sergey // January 7, 2010 at 12:01 am | Reply

      Ray, thank you very much for the reply. I am not aware of any objective reason or corollary for the periods I am observing; that’s partly what I am trying to find out. The period length seems to vary across subjects; I have seen periods as long as 30 seconds. No eye movement information is available in my dataset, but I do have at least a rough idea of where the subject is looking as the speech comes from a picture description task. I also have some other linguistic information (sentence length as you mentioned, part-of-speech, syntactic phrases, etc.) that I’ll try to correlate with the periodicity of hesitations. Generally, prior work suggests that these cycles arise as a result of underlying “cognitive rhythms” in the brain, perhaps having something to do with working memory capacity. The nature and the origin of these rhythms is not known, at least not to me. In fact, there are plenty of critics of this idea, so even the very existence of these rhythms has not been clearly established. However, there seems to be some evidence that, at least behaviorally, there are temporal cycles in spontaneous speech, whether they are associated with a cognitive process or not.

      I am not sure I understand your last question about the varying frequency from one period to another. Could you please explain?

    • Ray Ladbury // January 7, 2010 at 1:48 am | Reply

      Serguei,
      Sorry, I should have been clearer. There is probably some spread in the measured period from one cycle to another (e.g. one at 28s, one at 31s…), since this is probably quasi-periodic rather than periodic.

      Have you looked at a histogram of the distributions of periodicity from subject to subject? You might expect that to be normal.

      One last thing, what do your sources of error look like? Presumably there are a variety of things that could give rise to pauses, but presumably at random intervals. This might give you an idea of the sort of noise spectrum you have and so what sort of analysis to do. Good luck.

  • David B. Benson // January 4, 2010 at 10:32 pm | Reply

    AndyB // January 4, 2010 at 9:19 pm — Actually using AIC and BIC is quite simple; try Wikipedia. Which to use is not a settled matter. For a paper I hope is finished before spring (finally obtaining some decent data after over a year of pointing out that better data was required) I will use AIC to place a standard model (of an obscure phenomenon) in the dustbin in favor of a quite unusual alternative. I’ll use AIC because it treats the standard model (having one more parameter) as gently as I can while showing that it explains far less of the variance in the residuals.

    And maybe by that time Tamino will have finished his series on AIC, so I can better explain what I am doing to an audience which hardly knows of its existence, much less its significance.

  • Matt Andrews // January 4, 2010 at 11:08 pm | Reply

    Thanks, Tamino, for another excellent post.

    If the period is no longer than the time span, then of course one can’t conclude periodic behavior

    Should that be “If the period is longer than the time span”?

    [Response: Yes indeed! Doh! Fixed.]

  • Frank // January 5, 2010 at 10:59 pm | Reply

    We know that weather and climate are often best described by mathematical relationships that show chaotic behavior. This chaotic behavior is often characterized by periods that show repeating or oscillating patterns that are occasionally interrupted by more irregular behavior. Under these circumstances, should we demand that putative oscillations meet demanding statistical standards before they are given a name and discussed in the literature? Or should we tentatively accept them as likely explanations for how Nature behaves with an appropriate qualifying term like “putative”, while being careful not to place too much confidence in their periodic behavior until they meet more demanding tests? The latter makes more sense to me.

    [Response: I agree that it may be useful to use the term "oscillation" to describe behavior which hasn't passed demanding tests of periodicity. But it's a mistake to infer from that term that past behavior is merely "natural variations" and future behavior is imminent global cooling; many do so disingenuously.

    By the way: I have no doubt that weather exhibits (mathematical) chaos, but I'm not aware of solid evidence that climate does.]

  • Hank Roberts // January 5, 2010 at 11:26 pm | Reply

    “Despite the name “Arctic Oscillation”, there’s little discernible pattern to how the pressure difference varies, or what causes it – perhaps ‘Arctic Random Fluctuation’ would be a better name.”

    ARF!

    http://www.bbc.co.uk/blogs/thereporters/richardblack/2010/01/arctic_conditions_arctic_cause.html

  • David B. Benson // January 5, 2010 at 11:39 pm | Reply

    Frank // January 5, 2010 at 10:59 pm — Strictly speaking, neither weather nor climate can exhibit mathematical chaos, as that is deterministic while our universe is fundamentally quantum mechanical (as far as we can ascertain now) and so nondeterministic.

    Even when assuming, for simplicity, deterministic continuum mechanics, when nonlinear effects are important it soon becomes the case that matters are best further simplified by assuming random components and effects. Even a billiards table with enough billiard balls ought to be enough to convince you.

    One way to attempt to describe random behaviors is via the linear ARMA processes, about which Tamino has threads explaining and using them. Each ARMA process may for a time exhibit behavior which appears “cyclical” for a while, but then the pattern goes away. This is especially noticeable for red-pink noise
    http://en.wikipedia.org/wiki/Pink_noise
    and climatological data seems to have scales demonstrating everything from white noise (wind) to pink-noise multidecadal oscillations through millennial-scale red-pink noise. So it might be best to consider the climate system as a rather complex random process, maybe even more complex than the linear ARMA processes, and not attempt to pretend we can hope to understand it via deterministic continuum mechanics, chaotic or not.

  • t_p_hamilton // January 5, 2010 at 11:50 pm | Reply

    Tamino said:”By the way: I have no doubt that weather exhibits (mathematical) chaos, but I’m not aware of solid evidence that climate does.”

    Do you know of a clear, simple math example of a chaotic system where the long-term average is not chaotic?

    Thanks.

    • Ernst K // January 6, 2010 at 6:11 pm | Reply

      t_p_hamilton: “Do you know of a clear, simple math example of a chaotic system where the long-term average is not chaotic?”

      Again, not a simple math example, but image the following:

      A large, well (not perfectly) insulated room, empty except for two objects: a small gas flame and a low power electric fan. The fan blows air over the flame, generating a circulation around the room. The fan runs and the flame burns at a constant rate.

      Just to complicate things, you could make the fan direction slowly change randomly (perhaps under the control of a computer outside the room). I only add this if you think the motion of air in the above example would be “simple”.

      Question 1: is the motion of air in the room chaotic?

      Question 2: what is the long term behaviour of the average temperature of the room?

  • Hank Roberts // January 6, 2010 at 12:46 am | Reply

    Popular question, but as to climate, the answer differs:
    http://www.google.com/search?q=a+chaotic+system+where+the+long-term+average+is+not+chaotic%3F

  • Gilles // January 6, 2010 at 6:40 am | Reply

    Hi all

    I am not so convinced by the argument that you cannot PROVE there is a multidecadal PDO (or any century-scale fancy oscillation). The question is whether you can DISPROVE it, or not. To be sure that a model is correct, it is not enough to prove it can reproduce the data: you must show that the others cannot. If the sampling is not accurate enough or the time span not large enough, you simply have a large uncertainty, and you cannot affirm that it is “very likely” that… anything.

  • Gavin's Pussycat // January 6, 2010 at 10:06 am | Reply

    I think the question cannot be answered for climate itself, but general circulation models (which approximate climate) have been found to be non-chaotic in their long-term average behaviour.

    I remember from an old quantum theory book the example of an ideal, circular billiard table. Any small uncertainty in the angle at which the ball gets kicked off will eventually produce the uniform [0,2pi) distribution for the locations where the edge is hit. The smaller the uncertainty, the longer it takes to reach this distribution, but it always gets there in the end, non-chaotically. The actual location of the ball is then completely unpredictable, and sensitively dependent on the kick-off angle, i.e., chaotic.

    There is a set of kick-off angles that will produce a strictly periodic orbit, but this set is countably infinite and of measure zero.

    Is this a useful model?

  • Aslak // January 6, 2010 at 1:03 pm | Reply

    I am curious to know what you people think of the evidence for the 1470 year Bond cycle? Is it appropriate to call it a cycle and are you convinced by the solar harmonic idea (see below)?

    A short intro to the Bond cycle: It has been observed that Dansgaard/Oeschger events seem to be triggered almost exactly on integer multiples of 1470 years on the GISP2 timescale. There are broad spectral peaks at this frequency in many proxy records (as they all have D/O signatures). I think the Bond cycle is the basis for the “unstoppable global warming” theories you can find around on the net.

    A mechanism involving the MOC has been proposed (See Braun, Nature 2005). Basically, 1470 years is a harmonic of the deVries (210 years) and Gleissberg solar cycles (~86.5 years). [1470/7 and 1470/17].

    [Response: First: the D/O events don't seem to be global warming, they're hemispheric, and while the NH warms the SH cools and vice versa, lending support to the theory that they're a switch of ocean circulation patterns, diverting heat from one hemisphere to the other ("heat piracy") but not affecting the global budget.

    Second: it's not certain that the 1470-yr "period" is actually a period. The occurrences of D/O events are consistent with a random process.

    And if the 1470-yr cycle is truly a cycle, I'm highly skeptical it's a subharmonic of the deVries or Gleissberg cycles. But then, I'm skeptical of the reality of those "cycles."]

  • Joseph // January 6, 2010 at 3:49 pm | Reply

    Do you know of a clear, simple math example of a chaotic system where the long-term average is not chaotic?

    Evolution might be one such example, but that’s probably not what you’re looking for.

    How about plate tectonics? Over the long term, you can assume there’s a fairly constant rate of plate displacement. But in the short term, the shifts can be chaotic and abrupt.

  • Jaydee // January 6, 2010 at 4:34 pm | Reply

    Do you know of a clear, simple math example of a chaotic system where the long-term average is not chaotic?

    Not a math example, but one from nature: A River

    Rivers are chaotic at both small and large scales, but they always flow downhill into the sea.

    You can compare this to the earth’s energy balance: energy from the sun goes in one side and comes out the other. Any imbalance causes heating or cooling (the trick seems to be measuring it in all the places it happens).

  • Hank Roberts // January 6, 2010 at 4:47 pm | Reply

    From that Google search result page:
    http://www.grist.org/article/chaotic-systems-are-not-predictable

  • Hank Roberts // January 6, 2010 at 4:52 pm | Reply

    zomg — hat tip to some guy who pointed it out at Stoat as proof of cooling:
    http://climate.gi.alaska.edu/ClimTrends/Change/TempChange.html

  • Gilles // January 6, 2010 at 5:10 pm | Reply

    just to remind you that no model of the Sun is able to reproduce the 11-year activity cycle, not to speak of any supermodulation. So… that definitely proves they don’t exist, doesn’t it?

    I don’t see any proxy reconstruction reliable enough to test this kind of thing. Actually they aren’t reliable enough to show the current recent warming …

  • Ray Ladbury // January 6, 2010 at 5:52 pm | Reply

    Gilles,
    I’ll agree that SOME proxy reconstructions are not reliable in the modern era, but that is not what we need them for. What is more, there are plenty of reasons why there might be changes: the current era is very different from the recent past (~2000 years, or hell, even 1,000,000 years).

    And as to solar cycles, there are models that do show flips in the heliodynamo.

    In the end it’s about evidence. A couple of “periods” isn’t enough data to establish a periodicity reliably, but it’s more than enough for the brain to think one is there, whether it is or not.

  • Gilles // January 6, 2010 at 7:16 pm | Reply

    Ray, I meant that there is no composite global indicator, based on proxies, that shows a “very different” behaviour now, compared to the last millennium.
    The only “very different” behaviour is shown by instrumental measurements, but not confirmed by proxies.

    In simple logic, it should mean that either the instruments have a problem, or the proxies have a problem, or both. In any case, you cannot infer any sensible conclusion about the comparison with the past two millennia.

    For the solar dynamo, I agree, models show cycles, but not with the right period. Climate models show cycles too. I can’t say if they have the right period, and I can’t see how one can be sure of that. I just mean that the absence of something IN A MODEL doesn’t prove anything, in general.

    Again, the problem is not whether the cycles are REAL, or not. It is about whether they can be EXCLUDED, or not. I can’t see how they could.

    • Ray Ladbury // January 6, 2010 at 8:30 pm | Reply

      Gilles, Well, while they cannot be excluded, there’s no evidence for them. I cannot exclude the existence of invisible pink unicorns in my trunk either.

      And I agree that there is something wrong with using proxies after 1960. However, there is very good agreement throughout most of the calibration period AND lots of reasonable explanations for why the current epoch is different.

  • David B. Benson // January 6, 2010 at 8:04 pm | Reply

    Aslak // January 6, 2010 at 1:03 pm — There is a difference between DO events and Bond events. DO events occur during glacials, between interglacials. Bond events are posited to occur during the Holocene:
    http://en.wikipedia.org/wiki/Bond_event
    wherein the assumed recurrence time is highly variable. Worse, IMO, neither Bond event 8 nor Bond event 5 ought to qualify, as both were the results of large proglacial lake drainages. Moreover, it seems the evidence for Bond event 1 is slight, to put it mildly.

    Now if these actually were coolings in the North Atlantic, one would certainly expect to see them in the central Greenland GISP2 ice core. None appear except the end of the Younger Dryas (up) and the 8.2 kya event (down, then right away up).

    At the relevant timescale, the climate appears to behave as red-pink noise with about a 1/f^1.5 rolloff. That implies large excursions “just happen” due to the noise. Sorry, whatever was going on, it was just noise.

  • Joseph // January 6, 2010 at 8:24 pm | Reply

    The only “very different” behaviour is shown by instrumental measurements, but not confirmed by proxies.

    That’s not exactly true. If you look at a reconstruction, say Mann & Jones (2003) – which is one I’ve looked at – ending in 1980, you still see an anomaly. The period 1968-1980 is statistically anomalous.

    It’s not necessarily unprecedented, though. But if you consider that the period 1981-2009 is much warmer instrumentally than 1968-1980, what does this tell you?

    An additional point is that the rate of temperature change probably matters as much as, if not more than, the absolute temperature. If you go back far enough, you’ll find a time that was warmer. In the Eocene optimum, temperatures were perhaps 10C higher than today, but species had tens of millions of years to adapt to that climate.

    I haven’t seen an analysis of 30-year slopes in reconstructions. That might be something interesting.
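    For anyone who wants to try the 30-year-slope analysis Joseph suggests, here is a minimal sketch on synthetic data (the 0.01 deg/yr trend and the noise level are invented for illustration, not taken from any reconstruction):

```python
import numpy as np

def rolling_slopes(years, temps, window=30):
    """OLS trend (degrees per year) in every sliding window of the given width."""
    slopes = []
    for i in range(len(years) - window + 1):
        slopes.append(np.polyfit(years[i:i + window], temps[i:i + window], 1)[0])
    return np.array(slopes)

# Synthetic demo: a series with a known 0.01 deg/yr trend plus white noise.
rng = np.random.default_rng(0)
yrs = np.arange(1850, 2010)
temps = 0.01 * (yrs - 1850) + rng.normal(0.0, 0.1, yrs.size)

s = rolling_slopes(yrs, temps)
print(s.min(), s.mean(), s.max())  # the windowed slopes scatter around 0.01
```

    The same function applied to a published reconstruction’s years and values would give the 30-year slopes Joseph asks about.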

    • Gilles // January 6, 2010 at 10:41 pm | Reply

      Joseph : anomalous with respect to WHAT ?

      I have personally done a graph of the 30-year slope for instrumental records. It IS interesting. You find almost the same value in the 1910-1940 period as in the modern one. BTW, a careful inspection of the claimed “good agreement” between the GCMs and the pre-AGW (before 1940) period shows that the agreement is not that good.

      [edit]

      [Response: Evidence please.]

    • Gilles // January 6, 2010 at 10:53 pm | Reply

      I am not arguing that solar cycles explain everything. I am just reminding you that models of complex, highly non-linear systems generally don’t capture the right amplitudes and frequencies of limit cycles, because these depend on very precise details that are generally beyond the capabilities of spatial and temporal discretisation. So the fact that cycles of a given frequency don’t appear in computations does not tell us anything, in general, about their real existence. Again, even if the cycles are not PROVED, that is not sufficient to conclude that they “probably” don’t exist.

  • dhogaza // January 6, 2010 at 8:31 pm | Reply

    The only “very different” behaviour is shown by instrumental measurements, but not confirmed by proxies.

    Uh, the instrumental measurements – remember, there’s good agreement between satellite reconstructions and thermometer data – are undoubtedly correct. You’ve got the notion of confirmation backwards.

    In simple logic, it should mean that either the instruments have a problem, or the proxies have a problem, or both.

    Not all proxies show the problem, it’s restricted to a *subset* of tree-ring chronologies. A *subset* – for instance, the western US high-altitude bristlecone pine chronologies don’t show the problem.

    This is why the problem with that *subset* of high-latitude tree-ring chronologies is called the *divergence problem*. The proxies are *diverging* from the temperature record, i.e. reality.

    And those showing the divergence problem only show it for about the last third of the instrumental record. They match reasonably well with the rest of it, and with various other proxies going back into prehistory.

    So something is different about the last few decades that’s causing this divergence in some proxies.

    One natural possibility is that some sort of temp threshold has been reached causing significant changes in precip, or timing of precip, in those regions containing the trees which show the problem, leading to drought stress. If such temps have happened in the past then the proxies might lead to temp reconstructions containing some periods of time where the reconstructed temps are too low.

    Still doesn’t make the proxies useless.

    There are also a bunch of candidates for anthropogenic causes, which mostly fall under the heading “pollution”, including stress due to acid rain …

    An anthropogenic cause doesn’t make the proxies useless, either.

    • Gilles // January 6, 2010 at 10:45 pm | Reply

      It’s true, a subset of high-latitude tree rings shows a BIG divergence, plunging after 1960 while the temperatures keep rising.

      But I repeat: NO proxy reconstruction really shows the rise in temperature of the last decades.

      [edit]

      [Response: Boreholes.]

      • Ray Ladbury // January 7, 2010 at 1:27 am

        Gilles, you have a LOT of misinformation in your posts. First, the analysis Tamino has done is on data, not model output. It is the temperature record that shows no evidence of the periodicity. Second, your characterization of proxy reconstructions is entirely a straw man. What is your source, and why do you consider it “reliable”?

  • Spaceman Spiff // January 6, 2010 at 9:46 pm | Reply

    This paper (click on PDF in the upper right corner for a copy of the full paper) is a very recent overview of the current state of the science of the cyclical and other variations in total solar irradiance.

    In summary — the current state of understanding is pretty rudimentary, although progress is being made. Keep in mind, too, that on-going solar observations are also contributing to our understanding.

    Certainly, solar variability plays various roles in Earth’s climate on a variety of time scales. However, at present we’re changing the composition of Earth’s atmosphere at a rate far more rapid than that of the growth of our physical understanding in solar variability.

    And don’t forget that what we do know of solar total irradiance variations in time is included in the climate model predictions.

  • Joseph // January 6, 2010 at 11:59 pm | Reply

    Joseph : anomalous with respect to WHAT ?

    @Gilles: With respect to 95% of all years in the reconstruction. This is after removing the MWP and the LIA by means of detrending. Details here.

    I have done personnally a graph of the 30-year slope for instrumental records. It IS interesting. You find almost the same value in the 1910-1940 period than in the modern one.

    I’m not surprised by that, and I mentioned exactly that last night at Deltoid. If you look at a logarithm of the CO2 concentration, the increase in the first part of the century is not too different to the increase in the second part. There’s a flat trend of about 20 years between 1933 and 1953 or thereabouts.

    Now, are there slopes like that before the 20th century? How common are they?

  • Jim Bouldin // January 7, 2010 at 1:21 am | Reply

    “But I repeat: NO proxy reconstruction really shows the rise in temperature of the last decades.”

    [Response: Boreholes.]

    Bud burst, leaf-out, date of first flower, and plant species latitudinal and altitudinal limits, for starters.

  • AndyB // January 7, 2010 at 4:45 am | Reply

    Hey hey! Right on target so direct:

    http://www.skepticalscience.com/1998-DIY-Statistics.html

  • Aslak // January 7, 2010 at 9:25 am | Reply

    The reason I asked about Bond cycles on this particular thread is because I am also skeptical of it really being a cycle. The primary evidence, as far as I know, is something like this figure (vertical lines are spaced by 1470 years). But the ice core time scale is not that well determined. It is an intriguing idea though.

    That said, I must add that I think the Braun solar harmonic mechanism, and the kind of behaviour he gets out of his extremely simple model, is very interesting, although I am not convinced that it has any connection to Bond events.

    @David Benson: As you can see from the title of the 1997 Bond paper, the Bond cycle is not only during the Holocene: “A Pervasive Millennial-Scale Cycle in North Atlantic Holocene and Glacial Climates”.

  • san quintin // January 7, 2010 at 10:58 am | Reply

    Gilles: glacier length records also show this. But boreholes are the proxies that clinch the argument. They show a hockey-stick shape, have nothing to do with tree rings, have global spread and can’t be contaminated by the UHI. That’s probably why sceptics rarely discuss them!

  • anarchist606 // January 7, 2010 at 12:37 pm | Reply

    Global Warming Denial Bingo – A fun game for all the family!

    Got an uncle who keeps banging on about the global warming hoax every time there is a family gathering? Does your Granddad read the Daily Express and insist on pointing out ‘sceptical’ arguments at dinner? Seen one too many online debates with the same old same-old zombie arguments that global warming is not happening / is happening but is caused by the sun, volcanoes or communists? Turn this tiresome pseudo-science into fun with Global Warming Denial Bingo!

    http://tinyurl.com/warmingbingo

  • P. Lewis // January 7, 2010 at 1:17 pm | Reply

    Borehole data from 1993! And something a bit more recent.

  • Gilles // January 7, 2010 at 3:33 pm | Reply

    So what do you read in the most recent borehole curves ?

    [Response: What you refuse to see.]

  • san quintin // January 7, 2010 at 6:35 pm | Reply

    Gilles. I read that global warming has occurred and present temps are at least higher than for the past 500 years. I also see little sign of a global LIA. Do you agree? My understanding of the science (I’m a Quaternary scientist working on palaeoclimate reconstruction) suggests that there probably isn’t a global MWP either.

    There’s also a lot of evidence for recent warming from the PACE borehole project.

    None of the objections used against the instrumental record or tree-ring record apply to boreholes. Yet boreholes show the same hockey-stick pattern. Seems rather telling to me!

  • Brian Angliss // January 7, 2010 at 6:51 pm | Reply

    I’d like to attempt to replicate the results above (or close enough), so I’ve got a technical question about how to go about it. I’m thinking of scaling the ENSO index and adding a lag manually until the error is minimized, and then doing the same thing with volcanic forcing via AOD. Is that the right basic approach, or is there a better one?

    If there’s a better approach, I’d love to be pointed toward it.

    Thanks.

  • Hank Roberts // January 7, 2010 at 9:12 pm | Reply

    > http://www.scribd.com/doc/24897388/Global-Warming-Denial-Bingo-A-fun-game-for-all-the-family

    Nice update!
    http://scienceblogs.com/deltoid/2005/04/gwsbingo.php

  • cthulhu // January 7, 2010 at 9:33 pm | Reply

    is the denialosphere getting more stupid?

    serious question. Has anyone else noticed a whole new batch of gullible morons weighing in recently?

    A year ago would commenters on denier blogs have been misinterpreting a certain study as saying co2 levels weren’t rising?

    • Ray Ladbury // January 8, 2010 at 1:13 am | Reply

      I think what we’re seeing is an all-out assault called by the denialist mothership to counter Copenhagen and climate legislation in Congress. So expect to see lots of thrice-killed zombie arguments wielded by equally brain-dead, ideologically blinkered denialobots. Yes, folks, in addition to “brains…brains”, there’s also astroturf on the menu.

  • David B. Benson // January 7, 2010 at 9:50 pm | Reply

    san quintin // January 7, 2010 at 6:35 pm — LIA made it as far as Patagonia. You could check Antarctic ice core data to see if Antarctica participated or did a polar see-saw.

  • David B. Benson // January 7, 2010 at 10:06 pm | Reply

    Aslak // January 7, 2010 at 9:25 am — Yes, but the “glacial climate” events are otherwise known as DO events, after the first discoverers:
    http://en.wikipedia.org/wiki/Dansgaard-Oeschger_event
    I have no doubt of the reality of DO events as QPO during the last (and prior) glacials. It is the extension into the Holocene about which I am quite skeptical.

  • san quintin // January 7, 2010 at 11:07 pm | Reply

    Thanks David. Yes, I’ve worked in Patagonia on the LIA… there’s lots of evidence for glacier recession from the late 19th century… not too much evidence about the onset of cooling. Without this being nailed down, it’s difficult to prove synchrony with the Northern Hemisphere. We should probably be thinking of Little Ice Ages… at different times around the world.

  • David B. Benson // January 8, 2010 at 12:11 am | Reply

    san quintin // January 7, 2010 at 11:07 pm — Limnology work seems to indicate an onset of (slight) cooling at about the same time as the European LIA. I find this quite believable due to the global impact of major volcanism and whatever cooling effect the Maunder Minimum (and the other minima) might have had.

  • deech56 // January 8, 2010 at 1:11 am | Reply

    RE cthulhu

    A year ago would commenters on denier blogs have been misinterpreting a certain study as saying co2 levels weren’t rising?

    I dunno – since the proprietor of a certain blog thought such a study was a “bombshell”, I think the quality of owner and commenters is pretty evenly matched. There are just more people commenting.

  • dhogaza // January 8, 2010 at 1:50 am | Reply

    Yeah, and last year the same proprietor thought that one of Lu’s earlier papers proved that CFCs have nothing to do with ozone depletion, because he didn’t understand that “halogenated” and “CFC” are not contradictory terms.

    And a year or so ago he thought it was very likely that there’s CO2 snow in the Antarctic.

    I think the big change over there is that very, very few rational people try to set the record straight over there any more, and his audience has grown. And the IQ of the growing audience seems to be dropping.

  • san quintin // January 8, 2010 at 9:58 am | Reply

    Hi David
    Yes, but some of the glacier behaviour is anomalous….the Soler glacier for instance advanced in the 15th century, then receded before its ‘LIA’ behaviour. The paleolimnology we have done to the east of the icefields is equivocal about LIA.

    Whether or not there is a global LIA, it’s clear that the borehole record is an invaluable proxy and something that the sceptics know they can’t attack.

  • deech56 // January 8, 2010 at 10:32 am | Reply

    RE dhogaza

    I think the big change over there is that very, very few rational people try to set the record straight over there any more, and his audience has grown. And the IQ of the growing audience seems to be dropping.

    Hmmm…That may be true. Of course, it is good to finally discover that it can get cold in winter. Growing up on the balmy shores of Lake Erie, I was only vaguely aware of that fact.

    I also notice that The Other Science Blog of the Year has given up on science and is going all Scooby Doo on the e-mails, examining every molehill they can. Readers may want to check out the December 25th Science Friday panel discussion about the biggest science stories of 2009. The panelists discussed the hack and climate denial in the context of opposition to evolution and vaccines.

  • Barton Paul Levenson // January 8, 2010 at 12:08 pm | Reply

    Gilles,

    What evidence would you accept that a cycle doesn’t exist in reality? Please tell us what evidence would falsify your hypothesis.

    [Response: Gilles has attempted to post a lot of nonsense, so I've deleted his comments. Let's not engage the troll.]

    • Gilles // January 8, 2010 at 2:58 pm | Reply

      BPL, I would be happy to answer you elsewhere than on this blog, since I am apparently not welcome here. Please let me know if you have a public email address.

  • Carel // January 8, 2010 at 12:13 pm | Reply

    Very interesting work. Could anyone direct me to a paper or any reference where I could learn more about using chi-squared statistics to test for periodic behavior? I am studying seasonal epidemic data (and other epidemic data with apparent periodic effects) and I am hopeful that a similar statistic could be derived to test for periodicity in these data. Many thanks.

    [Response: You can find an introduction to this here. That online source gives some good general references, namely:

    Anderson, T. W. (1958). The Statistical Analysis of Time Series, John Wiley & Sons.
    Hartley, H. O. (1949). Tests of significance in harmonic analysis. Biometrika, 36, 194.
    Priestley, M. B. (1981). Spectral Analysis and Time Series, Academic Press.

    Some more complicated stuff from the astronomical literature:

    Probability Distributions Related to Power Spectra,
    Groth, E. J. 1975, Astrophysical Journal Supplement Series 29, 285.

    Studies in Astronomical Time Series Analysis II: Statistical Aspects of Spectral Analysis of Unevenly Sampled Data, Scargle, J. 1982, Astrophysical Journal, 263, 835.

    Time Series Analysis by Projection. I: Statistical Properties of Fourier Analysis, Foster, G. 1996, Astronomical Journal, 111, 541.]
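    To illustrate the chi-squared idea those references develop (a sketch of the general technique, not of any particular paper’s method): for Gaussian white noise, suitably normalized periodogram ordinates are approximately exponentially distributed, which gives a simple false-alarm probability for the tallest peak.

```python
import numpy as np

def periodogram(x):
    """Periodogram normalized so white-noise ordinates are approximately Exp(1)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    p = np.abs(np.fft.rfft(x - x.mean())) ** 2 / (n * x.var())
    return p[1:-1] if n % 2 == 0 else p[1:]  # drop the zero (and Nyquist) bins

def false_alarm(zmax, m):
    """P(the tallest of m independent Exp(1) ordinates exceeds zmax) under the
    white-noise null -- the chance the 'peak' is just noise."""
    return 1.0 - (1.0 - np.exp(-zmax)) ** m

rng = np.random.default_rng(1)

x = rng.normal(size=1024)                   # pure noise: peak should flunk the test
p = periodogram(x)
print(false_alarm(p.max(), p.size))         # usually far from significant

t = np.arange(1024)
y = x + 0.5 * np.sin(2 * np.pi * t / 64.0)  # the same noise plus a genuine cycle
q = periodogram(y)
print(false_alarm(q.max(), q.size))         # essentially zero: a real peak
```

    Real climate series are red, not white, so in practice the null spectrum must be adjusted (which is where the local-background comparison discussed later in the thread comes in).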

  • george // January 8, 2010 at 2:36 pm | Reply

    Ray says

    I think what we’re seeing is an all-out assault called by the denialist mothership to counter Copenhagen and climate legislation in the Congress. So expect to see lots of thrice killed zombie arguments”

    Unfortunately, Copenhagen is itself a zombie agreement, with the essential difference being that zombies (the young ones, at least) have teeth.

    That’s not just my opinion, by the way.

    Jim Hansen recognizes it (James Hansen: Good Riddance, Copenhagen. Time for Better Ideas.) and so does Nobel Prize-winning economist Joe Stiglitz (Overcoming the Copenhagen failure).

  • Carel // January 8, 2010 at 4:07 pm | Reply

    Thanks for your prompt reply. The links you gave are very useful.

  • dhogaza // January 8, 2010 at 5:13 pm | Reply

    I also notice that The Other Science Blog of the Year has given up on science and is going all Scooby Doo on the e-mails, examining every molehill they can.

    Well, it’s been obvious from the beginning that McIntyre really has been a conspiracy theorist (the very phrase The Team, combined with all his innuendo, is enough to confirm that).

    So it’s not at all surprising that McIntyre and The Mob are devoting their attention to The Team’s (so-called) Conspiracy Against Science.

  • Dr Richard Lawson // January 9, 2010 at 7:47 pm | Reply

    Ahem. Nervous throat-clearing from a medically trained Green Party activist who has been trying to introduce some sense into deniers on the Daily Mail website for the past 6 weeks. (Someone has to do it.)

    OK, I understand that we need a lot of data before we can establish that there is a cycle operating, but on the other hand, when I look at carbon derived solar change
    http://www.globalwarmingart.com/wiki/File:Carbon_Derived_Solar_Change_png
    it is going up and down like a Weston donkey, taking round about 200 years each time.

    The line stops about 1950, but this graph shows its continuation to 2000, falling from 1950, with a slight elevation around 1980.
    http://www.globalwarmingart.com/wiki/File:Sunspot_Numbers_png

    The question is, where is the solar activity line going to go in future? It took about 200 years to fall from the Mediaeval Max to the Wolf minimum, 160 years for the fall 600-500BP, and 80 years for the fall 400-300 years BP.

    These falls in solar intensity do seem to be reflected in temperature records.
    http://greenerblog.blogspot.com/2010/01/climate-change-what-if-solar-input.html
    (Fig 2)

    My point is this: do (or should) projections factor in these long term solar trends?

    Please don’t hit me. I’m on your side.

    • guthrie // January 10, 2010 at 12:02 am | Reply

      I don’t think the projections include much information about changing solar output, because then you could just make even more projections giving various answers.

      What’s better to do is emphasise that a period of relative solar inactivity doesn’t in any way invalidate greenhouse gases, nor do away with the issue of oceanic acidification.

      IIRC, what they are expecting is a return to solar activity more like that of the late 19th / early 20th century, which might knock a mere 0.2C off the warming, which would be made up in a decade. This is of course an unscientific, off-the-cuff estimate.
      Have you looked in the IPCC reports or at the websites of the groups who do the modelling?

  • sean // January 9, 2010 at 8:38 pm | Reply

    Conspiracy has a negative sense, implying the conspirators know their actions are wrong or illegal and need to be secret to succeed. The site which famously uses the terms “the Team” and “off island” does not charge that the IPCC crowd believe themselves to be acting badly or illegally, just the lesser charge of being a conceited clique.

  • Lazar // January 9, 2010 at 9:55 pm | Reply

    Tamino writes;

    That’s what I do in practice: compare the level of a periodogram peak to the levels of the other peaks in its neighborhood. In other words, one uses the “background” level to define the scale, and tests whether the given peak is sufficiently taller than its neighbors.

    This is fascinating. For those like myself who probably lack the necessary intuition to estimate the background level — would it be reasonable to fit a SARIMA model to the data, and use the power spectrum of the ARMA component?
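    A crude toy version of the “local background” comparison quoted above (my own sketch, not Tamino’s actual procedure) is to compare an ordinate to the median of its spectral neighbours, since the median is robust to the peak itself:

```python
import numpy as np

def peak_vs_background(power, i, halfwidth=20):
    """Ratio of periodogram ordinate i to the median of its spectral neighbours.
    The median is robust to the peak itself and to other nearby peaks."""
    lo, hi = max(1, i - halfwidth), min(power.size, i + halfwidth + 1)
    neighbours = np.delete(power[lo:hi], i - lo)  # exclude the tested ordinate
    return power[i] / np.median(neighbours)

rng = np.random.default_rng(7)
t = np.arange(2048)
x = rng.normal(size=t.size) + np.sin(2 * np.pi * t / 128.0)  # true cycle at bin 16
p = np.abs(np.fft.rfft(x - x.mean())) ** 2

print(peak_vs_background(p, 16))   # the genuine peak towers over its neighbourhood
print(peak_vs_background(p, 300))  # an ordinary noise bin does not
```

    Using the neighbourhood as the yardstick automatically adapts to a red-noise background, which a global white-noise test would not.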

  • Joseph // January 9, 2010 at 10:01 pm | Reply

    when I look at carbon derived solar change
    http://www.globalwarmingart.com/wiki/File:Carbon_Derived_Solar_Change_png
    it is going up and down like a Weston donkey, taking round about 200 years each time.

    Solar irradiance is cyclical. That’s very clear in the short term, with a period of about 11 years. In the long term, I don’t doubt there are other cycles (even outside Milankovitch cycles), and I don’t think this is being disputed at all.

    Can cyclical changes in solar irradiance explain the recent observed changes in temperature? From the reconstructions I’ve seen, I don’t think so. Those are minor changes of up to 2 W/m^2 at the top of the atmosphere, which would translate to 0.3 W/m^2 of radiative forcing.

    To put it another way, the temperature of a gray body is proportional to the 1/4 power of the irradiance it receives (all else being equal). It’s not going to be the 1/3 power or the 1/5 power. You can estimate the impact of irradiance fluctuations this way.

    [Response: I'm skeptical of other cycles in solar irradiance. By the way, Milankovitch cycles aren't changes in the sun, but in the orbit and axial tilt of earth.]
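    Joseph’s estimates are easy to check with round numbers (the values below are illustrative figures, not data from the thread):

```python
# Back-of-envelope check of the 1/4-power scaling described above. The inputs
# are illustrative round numbers:
#   S ~ total solar irradiance, dS ~ its fluctuation, T ~ mean surface temp.
S, dS, T, albedo = 1361.0, 2.0, 288.0, 0.3

# Radiative forcing from a TSI change: divide by 4 (sphere vs. intercepting
# disk) and multiply by (1 - albedo) for reflected sunlight.
forcing = dS / 4.0 * (1.0 - albedo)
print(f"forcing ~ {forcing:.2f} W/m^2")  # ~0.35, close to the ~0.3 quoted above

# Gray-body response: T proportional to S^(1/4) implies dT/T = (1/4) * dS/S.
dT = T * dS / (4.0 * S)
print(f"dT ~ {dT:.2f} K")  # roughly a tenth of a degree, all else being equal
```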

  • Riccardo // January 9, 2010 at 10:40 pm | Reply

    Dr Richard Lawson,
    before anything can be taken into account, it must be well assessed and predictable. The 200-year Suess cycle is not. As far as I know there is no physical explanation of its origin, the cycle may vary from about 170 to 260 years, and its intensity is not reproducible.
    Even accepting that it’s real, it could not be considered in models for projections.

  • David B. Benson // January 9, 2010 at 10:41 pm | Reply

    Dr Richard Lawson // January 9, 2010 at 7:47 pm — That looks like red-pink noise to me. Very much more data would be required even to establish a QPO.

  • Johnmac // January 9, 2010 at 10:55 pm | Reply

    “The site which famously uses the term “the Team” and “off island” does not charge the IPCC crowd believe themselves to be acting badly or illegally, just the lesser charge of being a conceited clique.”

    It’s a very extensive “clique” then, isn’t it? Or are scientists a “clique” in such circles?

    Reality distortion devices and Orwellian language transmuters must do a roaring trade out there in Denialistville.

  • dhogaza // January 9, 2010 at 11:02 pm | Reply

    The site which famously uses the term “the Team” and “off island” does not charge the IPCC crowd believe themselves to be acting badly or illegally, just the lesser charge of being a conceited clique.

    The second “M” of “M&M” accused Briffa of scientific fraud in a Canadian newspaper just last summer.

    And the first “M” has insinuated fraud (and, indeed, has slipped up and made a direct accusation on occasion) for years.

    We have eyes, and we can read. Comments like yours are as silly as comments from people saying Watts doesn’t delete anyone’s posts at WUWT.

  • Ray Ladbury // January 10, 2010 at 12:04 am | Reply

    Richard Lawson,
    Duck!! Nah! You did not trip the stupid meter.

    We do not have a model that can currently predict solar activity. We do know that the general trend for medium-sized yellow stars like Sol is to increase in intensity over time. The variations you note–based on the imperfect metric of sunspot number–are part of the “noise” on that trend. However note the timescale on the changes–on the order of decades, right? The effects of CO2 last for centuries. So while a Grand Solar Minimum might postpone a crisis, it wouldn’t take us out of the woods. In any case the current prolonged solar minimum looks to be ending, so I rather doubt we’re seeing anything like that.

  • David B. Benson // January 10, 2010 at 1:08 am | Reply

    Lazar // January 9, 2010 at 9:55 pm — Link to something explaining SARIMA, please.

  • Rattus Norvegicus // January 10, 2010 at 5:40 am | Reply

    Ray, even with the current minimum, 2009 is still going to come in at #3 for D-N temps. I doubt even a prolonged minimum would get us off the hook if the response time to solar forcing is on the order of 1-2 years, as seems likely.

  • Dr Richard Lawson // January 10, 2010 at 12:51 pm | Reply

    Many thanks for the helpful responses to my query. I now know that the long-term pattern that I saw has been called the Suess or de Vries cycle, but that its mathematical status as a true cycle is not certain. (Ma and Vaquero 2008 conclude: “The most relevant characteristic of the periodogram is a cycle with a frequency very close to the Suess cycle, though this cycle is not significant statistically”.)

    Despite this uncertainty, it seems to me that it would be wise to include in climate projections reasonable maximum and minimum guesses as to what the sun is going to do over the next 100 years.

    Otherwise, if projections assume the sun to be constant at present levels, projections may overshoot observations, which will give more opportunity to the skeptics to muddy the waters.
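
    The phrase “not significant statistically” in the quoted conclusion refers to tests of this general kind: how big a periodogram peak could arise from noise alone? A minimal sketch against a white-noise null (illustrative series length; real proxy records have red noise, which raises the bar further):

```python
import numpy as np

rng = np.random.default_rng(1)

# Null hypothesis: pure white noise. How large does the biggest
# periodogram peak get by chance alone? A claimed "cycle" must beat this.
n = 512
freqs = np.fft.rfftfreq(n)[1:]   # drop the zero frequency
m = len(freqs)                   # number of (roughly independent) ordinates

noise = rng.standard_normal(n)
power = np.abs(np.fft.rfft(noise)[1:]) ** 2
power /= power.mean()            # normalised periodogram

# Each normalised ordinate is ~exponential(1) under the null, so the
# 95% level for the maximum of m ordinates is:
alpha = 0.05
threshold = -np.log(1 - (1 - alpha) ** (1 / m))
print(power.max(), threshold)    # the max usually falls below the threshold
```

    A peak that fails to clear such a threshold is exactly the “looks cyclical to me” trap described in the post.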

  • Lazar // January 10, 2010 at 3:05 pm | Reply

    David,

    Link to something explaining SARIMA

    This is ok. Or if you have Shumway & Stoffer 2nd ed., pp 154-163.

  • Lazar // January 10, 2010 at 3:21 pm | Reply

    Should’ve written… power spectrum of the nonseasonal ARMA component.

  • David B. Benson // January 10, 2010 at 8:43 pm | Reply

    Lazar // January 10, 2010 at 3:05 pm — Thank you. Your proposed comparison might work, but it seems (naively) simpler to remove the suspected seasonality from the data and compare the power spectra for the two time series.

  • Lazar // January 10, 2010 at 9:58 pm | Reply

    David,

    it seems (naively) simpler to remove the suspected seasonality

    But then my problem of guessing what’s seasonal and what’s noise remains. So, I can hopefully rely on using the significance of seasonal AR and MA terms to decide… if I’m reading your response correctly as saying ‘that’s not nonsense’ :-)

    Thanks!

  • David B. Benson // January 10, 2010 at 11:08 pm | Reply

    Lazar // January 10, 2010 at 9:58 pm — Well, if it doesn’t appear to have a seasonal component then the time series probably doesn’t have one. The problem arises in looking at the power spectrum from a finite interval; spurious peaks may arise. So my naive thought is to remove those from the time series by subtracting out some A·cos(ωt − phase) until the peak goes away. Then one has the power density in the vicinity of the peak.

    A warning about using an ARMA process is that it can be viewed as a linear filter applied to white noise. In particular, AR(2) processes faithfully represent damped harmonic oscillators. Generate the output of such a filter excited by white noise and notice that some AR(2) process gives the same spectrum. Isn’t that cyclic?
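
    That warning is easy to demonstrate. A small simulation (parameters chosen here, purely for illustration, to mimic a damped oscillator with a quasi-period near 20 samples):

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(2): x_t = phi1*x_{t-1} + phi2*x_{t-2} + white noise.
# Choosing phi1 = 2*r*cos(2*pi/P), phi2 = -r**2 makes the process behave
# like a damped oscillator with damping r and quasi-period P.
r, P = 0.95, 20.0
phi1 = 2 * r * np.cos(2 * np.pi / P)
phi2 = -r ** 2

n = 4096
x = np.zeros(n)
for t in range(2, n):
    x[t] = phi1 * x[t - 1] + phi2 * x[t - 2] + rng.standard_normal()

# The periodogram of this filtered noise shows a sharp peak near 1/P --
# it *looks* cyclic, yet the series contains no deterministic cycle at all.
freqs = np.fft.rfftfreq(n, d=1.0)
power = np.abs(np.fft.rfft(x - x.mean())) ** 2
peak_period = 1.0 / freqs[np.argmax(power[1:]) + 1]
print(peak_period)   # close to the quasi-period of 20
```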

    As for SARIMA versus ARMA: determine both and then subject the two to an AIC model shoot-out. Not sure that will be quite the answer you want.

    Finally, and probably my best advice, Tamino earlier suggested a number of references for those attempting to obtain a periodic signal from noisy data, if there is one at all.
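
    The “AIC model shoot-out” can be sketched for a toy case. Here a pure AR(2) series is fitted as both AR(1) and AR(2) by least squares (data and parameters are hypothetical; a real SARIMA-vs-ARMA comparison would use a statistics package):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: a stationary AR(2) process, so the AR(2) model
# should win the AIC comparison against AR(1).
n = 1000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 1.5 * x[t - 1] - 0.75 * x[t - 2] + rng.standard_normal()

def ar_aic(x, p):
    """AIC = n*log(RSS/n) + 2p for an AR(p) fit by conditional least squares."""
    y = x[p:]
    X = np.column_stack([x[p - i: len(x) - i] for i in range(1, p + 1)])
    coef, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(rss[0]) if len(rss) else float(((y - X @ coef) ** 2).sum())
    return len(y) * np.log(rss / len(y)) + 2 * p

print(ar_aic(x, 1), ar_aic(x, 2))  # the AR(2) AIC is the lower of the two
```

    The model with the lower AIC wins; the 2p term penalizes the extra parameters, so the richer model only wins if its fit improves enough.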

  • sean // January 11, 2010 at 12:16 am | Reply

    Johnmac
    The suggestion is that the data-sharing clique is a relatively small inner circle who can be trusted not to make waves for other insiders; and data and code are certainly not made available freely to all professional climate scientists, much less all scientists, or the general public.

    While certain actions are illegal and/or wrong, the insiders do not see themselves as behaving badly, as they believe the priority is to avoid any debate or expression of doubt which would allow wiggle room to avoid urgent action.

  • Ray Ladbury // January 11, 2010 at 12:49 pm | Reply

    Sean, that is absolute bullshit. Climate science has been pioneering in its openness. It has opened all its data and methodology to inspection by the National Academies, and NASA GISS has pretty much all of its code and data available on-line.

    The issue is not the unavailability of code/data, but the utter inability of denialists to do anything constructive with it.

  • dhogaza // January 11, 2010 at 3:16 pm | Reply

    Sean, that is absolute bullshit

    Worse, it’s a lie that rises close to the definition of libel.

  • Gunner // January 11, 2010 at 7:38 pm | Reply

    At the 1994 Man In Space Symposium held in Washington, D.C. , I met a scientist who was doing climate research for NASA. He didn’t know me and I didn’t know him, but he was offering CDs of data and models and was encouraging people to take them and play with the data. My friend and I both took copies. Little did I know that by doing so I have become a part of a small, exclusive clique. I’ll have to add that to my CV next time I update it.

  • johnmac // January 11, 2010 at 8:35 pm | Reply

    “Johnmac
    The suggestion is that the data-sharing clique is a relatively small inner circle who can be trusted not to make waves for other insiders; and data and code are certainly not made available freely to all professional climate scientists, much less all scientists, or the general public.”

    As the posts above make clear, this is simply untrue. You’ve provided a fine example of the standard model denialist reality distortion device in action, and it really does work well.

    I imagine you must believe this? If so, where on earth do you get your information from?

    If you are actually interested in the truth, you need to get out more and spend some time at sites frequented by scientists, like this one.

    Somehow I doubt whether you will do that, Sean.

  • sean // January 11, 2010 at 10:03 pm | Reply

    Ray,
    Which is bull? Have I mis-summarised the position taken, or is the position itself unreasonable?

    Technically it would be possible for all published papers to provide an online archive giving access to a snapshot of the exact code and data used in each and every calculation and graph in the paper.

    Apart from the time it may take to prepare the snapshot, the main argument against this is not that they cannot use this data and code constructively, rather that they can misuse the data and code. They will seize on the most irrelevant issues to attack reputations and create public doubt. It may even be an inaction-agenda plot.

  • george // January 11, 2010 at 11:08 pm | Reply

    gunner said:

    He [NASA climate scientist] didn’t know me and I didn’t know him, but he was offering CDs of data and models and was encouraging people to take them and play with the data. My friend and I both took copies.

    Just don’t let Steve McIntyre get wind of it. He’ll be hounding you for sure.

    In fact, it may already be too late. His minions monitor this blog.

    Better destroy the CD’s with a hammer just to be safe.

  • Ray Ladbury // January 11, 2010 at 11:38 pm | Reply

    Sean, most of the data–in fact all of it that can be distributed–are available. Most of the code as well. Just try doing that with the particle physics community.

    Now perhaps you are merely repeating a lie you read elsewhere, but it is still a lie. Repeating it again will remove the benefit of the doubt.

  • dhogaza // January 12, 2010 at 12:14 am | Reply

    Eh, the GHCN site is down for updating at the moment, and more maintenance will be done on Wednesday, so I can’t give Sean a link but …

    Sean … dude! Listen up!

    The roughly 95% of the data McIntyre et al scream about which *can* be made available to all, *is* available to all, at the GHCN website.

    Includes things like monthly summaries, daily data in a documented format, and even … even …

    the much discussed RAW DATA that you and your kind like to scream “isn’t available!”. They have image files of scans of the *original datasheets* filled in by hand by observers. If you want, you can wade through countless piles of these if you suspect that the GHCN people are adding “warming bias” stuff to the data as they transcribe it! You can print out the scanned copies and wallpaper your bedroom with them!

    Anything! No strings attached!

    Now let me ask you a question – did you try to *verify* the lie you are repeating before deciding it was true, and deciding to make yourself look foolish here?

    As far as code goes … it’s true that US researchers have been more open with code than the UK people, but it’s also true that the UK Met office – a government organ – is *required* to, for instance, make money using their models to run forecasts etc (or at least to try to), and therefore they are *required* to keep their modeling software proprietary.

    Blame Thatcher and her fellow conservatives for making “make money”, rather than “scientific openness”, drive the UK Met mission. The researchers themselves aren’t the decision makers …

  • Lazar // January 12, 2010 at 1:32 am | Reply

    dhogaza,

    for making “make money”, rather than “scientific openness”, drive the UK Met mission

    Good point — these entities are competing in the private sector why?

  • dhogaza // January 12, 2010 at 4:43 am | Reply

    Good point — these entities are competing in the private sector why?

    Because charging makes the private sector more competitive than giving it away free.

    Rick Santorum of Pennsylvania, before he lost, had introduced a bill that would’ve forbidden the National Weather Service from giving out forecast information for free if equivalent information could be obtained from companies.

    I.E. the NWS would’ve been (it didn’t pass) forbidden from giving out free forecasts if a company (Accuweather, in this case) was selling them.

    It’s all about using government to line the pockets of private industry. In the UK Met case, under the mantra of “make them help pay for their own budget”.

    Competition in the marketplace, blah blah blah. Of course, if the “marketplace” doesn’t include political entities concerned primarily with guiding public policy, well, so much the better!

  • SNRatio // January 12, 2010 at 12:24 pm | Reply

    In the openness issue, we must distinguish sharply between ‘valid’ and ‘relevant’. It is a sad fact that because of diverse national policies, the UK Met office being a typical example, we may for years be stuck in a situation where 100% reproducibility by outsiders of all results is not possible. To get the most precise results, most of us will want to use all available data, whether open or not.

    So, the lack of openness may, sadly, still be a valid objection – but so what? If the results are still reproducible, this problem will in most respects not be relevant at all. Typically, at least 95% of all data will be open, and if some published results depend in a crucial way on non-disclosed data, we should be suspicious towards them. Normally, results obtained with open data should show all the essential features of results from the full data sets.

    The same applies to code, but even more so. If a description of methods does not allow us to make our own implementation, it’s not good enough, and it must be improved. If it’s good enough, we don’t really need the code originally used, and we don’t need to understand it.

  • Ray Ladbury // January 12, 2010 at 1:49 pm | Reply

    SNRatio, I think you need to distinguish between the two questions of:

    “Are the results reproducible?”

    and

    “Are the results reproducible by any idiot regardless of education, expertise and ability?”

    The latter seems to be the standard favored by the denialist fringe.

  • george // January 12, 2010 at 3:50 pm | Reply

    Ray,

    The standard favored by the denialist fringe is called the “stupid button”.

    You know, the one labeled “I’m stupid. Push me…”

    It’s what these folks actually expected to see come up on the screen when they “ran” the released NASA GISTEMP code.

    Compile? WATTS up with that?

  • sean // January 12, 2010 at 10:41 pm | Reply

    I have seen the monthly data on the GHCN public ftp server, but I have to say I have not seen the daily data dhogaza talked about, so I cannot comment on how close to raw it is or how complete or up to date. I look forward to dhogaza posting the url.

  • ligne // January 13, 2010 at 4:41 pm | Reply

    here, sean, let me google that for you: http://lmgtfy.com/?q=ghcn+daily

    (protip: it’s the first result. the one called “Global Historical Climatology Network – Daily”.)

  • ligne // January 13, 2010 at 5:20 pm | Reply

    Ray Ladbury: “Most of the code as well. Just try doing that with the particle physics community.”

    i doubt your tolerance for bad FORTRAN is that high. mine certainly isn’t :-)

    the endless screaming about “seeing teh coeds” really is stupid. running the code and getting the same result doesn’t prove it’s bug-free. running the code and getting a different result doesn’t prove it’s wrong. so the whole exercise demonstrates sweet FA.

    anyway, it’s almost always easier to write your own code from the published algorithms than it is to read and understand someone else’s. especially when it’s throwaway code written by non-programmers.

    (thanks for dropping my missing-link-filled post, Tamino!)

  • Ray Ladbury // January 13, 2010 at 6:16 pm | Reply

    Ligne says: “anyway, it’s almost always easier to write your own code from the published algorithms than it is to read and understand someone else’s. especially when it’s throwaway code written by non-programmers.”

    And we have a WINNAAA! That’s what I’ve been saying all along. I have no problems sharing data. Sharing code seems to me a recipe for error propagation.

    Archiving code, fine. But let’s keep a firewall between independent groups.

  • Barton Paul Levenson // January 13, 2010 at 8:19 pm | Reply

    ligne,

    Hey, I definitely want to see the co-eds! Especially if they’re, you know, naked.

  • dhogaza // January 13, 2010 at 8:30 pm | Reply

    Sean sez …

    I have to say I have not seen the daily data dhogaza talked about, so I cannot comment on how close to raw it is or how complete or up to date. I look forward to dhogaza posting the url.

    Here’s a sample of the raw data you can get.

    Raw enough for you?

  • sean // January 13, 2010 at 11:32 pm | Reply

    dhogaza,
    I could be wrong, but the url you have given appears to be a sample showing what you can buy, if you are not in one of the DNS domains given free access under the access policy here

    http://www.ncdc.noaa.gov/oa/nndc/freeaccess.html

    Plus, clearly scanned documents like that will necessarily have been transcribed into digital form, which is easier to work with. Useful to know you can go back to check the source documentation if you suspect a transcription error on a particular day, but you would not want to retype everything.

    As far as I can see
    ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/daily or around there is more likely to be the place to start.

    Personally, I have not done this. The files are really big. If someone like dhogaza can confirm this contains the sheets after they have been typed but before adjustments, that would be very reassuring. If this is adjusted, does anyone know if and where the raw data after typing is?

  • dhogaza // January 14, 2010 at 12:47 am | Reply

    Let’s see sean move the goalposts …

    The suggestion is that the data-sharing clique is a relatively small inner circle who can be trusted not to make waves for other insiders; and data and code are certainly not made available freely to all professional climate scientists, much less all scientists, or the general public.

    OK, then it’s made clear to him that tons of data are available at GHCN.

    Then he says …

    I have seen the monthly data on the GHCN public ftp server.

    So he knew that data’s available to the public before he claimed that it’s being hidden …

    Plus, clearly scanned documents like that will necessarily have been transcribed into digital form, which is easier to work with. Useful to know you can go back to check the source documentation if you suspect a transcription error on a particular day, but you would not want to retype everything.

    But, Sean, the denialist argument isn’t that “it would be really hard to type up all the raw data”.

    The denialist argument is that “the raw data isn’t available”.

    Yes, you have to pay them for the costs of making you a DVD for the stuff you’re interested in. It’s “freely available” in the sense of not being restricted, i.e. the claim that you can’t get the data because it’s restricted to “insiders” is simply false.

    The files are really big. If someone like dhogaza can confirm this contains the sheets after they have been typed but before adjustments, that would be very reassuring. If this is adjusted, does anyone know if and where the raw data after typing is?

    And now, in typical denialist fashion, after getting me to do his work for him, he asks me to do more work because, like, the files are really big and all that.

  • ligne // January 14, 2010 at 5:55 pm | Reply

    Ray> yaaaaaay! what do i win? i hope it’s a pony. or a tiger, i’ve always wanted a tiger :-D

    but it’s the sort of thing that should be blindingly obvious to anyone, you know, *even half-qualified to read the sodding code*. make of that what you will.

    BPL> co-eds? reproduction? i’m sure there’s a joke in there somewhere. i think it might include something about models and sauce too.

    sean> you poor thing! but i completely understand — it certainly would never have occurred to *me* that decades’ worth of daily measurements across tens of thousands of stations might involve a lot of data.

    do you think that’s why they also publish the monthly averages?

  • sean // January 15, 2010 at 7:23 pm | Reply

    I can download the source to Firefox or any other GNU application. If I tried to compile it and found it was missing 5% of the code, would I be satisfied or dissatisfied? Where it depends on libraries, would I want access to those too, and all in a language for which I could find a compiler?

    The claim that “95%” is available is compatible with “where is the data?”
    The expectation on the “denier side” is that everything that is needed to reproduce a published paper’s results should be available in a form that is not needlessly obstructive to that reproduction. So if the dailies have been transcribed, it makes sense to use the transcribed data and go back to the scanned copies if QA suggests a 7 became a 9, or two days are identical, or a day is missing. The monthlies would be easier to play with, but the treatment of missing days is one of the issues.

    I use GNU applications as I have confidence in them. The source is available for those who want it. I do not actually compile any of the GNU stuff myself, but I have respect for the anoraks who actually do.

    The question is not whether there is something online. The question is whether there is enough online to regenerate the gridded series as published, and then generate a GNU-style fork which would let people play with other assumptions about correcting for urban heat islands and station moves, or include/exclude additional data. The point of GNU is that you do not have to start from zero if you do not like the existing product. Most people would say that Firefox is better than the Netscape it came from.

    The climate folks are becoming more open as the pressure increases. However, they do not appear to accept the basic principle: that it is legitimate to want to download everything, run the calculations yourself to understand the details, and pass comment.

  • sean // January 15, 2010 at 8:04 pm | Reply

    Honest, I had not read the latest US emails when I wrote that last comment.

  • dhogaza // January 15, 2010 at 8:51 pm | Reply

    I can download the source to Firefox or any other GNU application. If I tried to compile it and found it was missing 5% of the code, would I be satisfied or dissatisfied? Where it depends on libraries, would I want access to those too, and all in a language for which I could find a compiler?

    If you’re missing 5% of a program’s source, most likely you’re not going to be able to compile it.

    If you’re missing 5% of *data*, most likely it’s not going to be a big problem.

    The satellite people – temperature reconstructors and ice extent/area reconstructors both – have to deal with missing data all the time (for instance, clouds fool the instruments used to detect ice, so areas determined to be covered by clouds via other sensors are stripped out).

    GISTEMP does just fine with the 95% of the data that’s available at GHCN. You could, too.

    Incomplete data is a fact of life in much of science.

    Incomplete source code is not.

  • dhogaza // January 15, 2010 at 8:59 pm | Reply

    However they do not appear to accept the basic principle, that it is legitimate to want to download everything, run the calculations yourself to understand the details and pass comment.

    To extend the GNU principle you seem to hold in such high regard … some linux compatible devices come with proprietary drivers, and the source is not available.

    Asking CRU to give data that they have no license to distribute is akin to asking someone who’s helped write a proprietary driver under contract to violate that contract and release the source to that driver.

    No. Ain’t happening. We still have courts in this country (and in the UK) that will enforce contracts.

    No matter how much you and McI and others jump up and down and say “failure to break a contract is evidence of fraud!”

    And where is adherence to a contract evidence that one doesn’t “accept the basic principle, that it is legitimate to want to download everything”?

    It’s perfectly legitimate to want to. But just because you can download the source to linux, and want to download the source to proprietary drivers only supplied by the manufacturer in binary form, doesn’t mean you get to download the latter.

    The response to McI’s appeal of his FOI denial said explicitly that CRU’s working on getting agreements modified so this data can be released.

    This is no different than the work people have done in the software world to – at times successfully – get proprietary software made freely available in binary form also licensed and distributed under an open source license.

    But if you don’t win that fight, you don’t get the source.

    And if CRU’s not successful in getting those agreements modified, you’re not getting the data from them.

    Nowhere does libel of the “fraud” and “misconduct” kind fit in.

  • dhogaza // January 15, 2010 at 9:03 pm | Reply

    The claim that “95%” is available is compatible with “where is the data?”

    No, the claim is 95% is available IN ONE PLACE, and that the other 5% is HELD ELSEWHERE BY NATIONAL AGENCIES IN VARIOUS COUNTRIES.

    The question, “where is the data”, has been answered years ago.

    McI doesn’t like the answer. Poor, whining, McI, who cares?

  • dhogaza // January 15, 2010 at 9:13 pm | Reply

    And we have a WINNAAA! That’s what I’ve been saying all along. I have no problems sharing data. Sharing code seems to me a recipe for error propagation.

    Depends on who’s doing it, in part, and the goal of redoing it. The Clear Climate Code folk have put a fair amount of effort into comparing GISTEMP to the papers detailing the algorithms the code supposedly implements – and have stated that they have quite a bit of confidence that the algorithms are correctly implemented.

    Of course, that’s a side benefit, not a goal – they studied the papers and code together to gain confidence that they understood GISTEMP well enough to rewrite it.

    And their goal in rewriting it is nothing more than to try to come up with a more readable and more approachable implementation. They don’t pretend to be doing research (much less to be doing “debunking”), just applying their software engineering skills to come up with something they hope will be of interest.

    Along the way, they’ve found a half-dozen or so very minor bugs, none of which impact the final product.

    This won’t impact the science at all, of course, and that’s not their goal.

    On the one hand they’re reaching out to the GISTEMP folk saying, hey, folks in the software engineering community might be able to help clean up your code (and the code maintainer has accepted and incorporated the bug fixes the CCC people have made to the FORTRAN version).

    On the other hand, they’re reaching out to the skeptic community and saying, more or less, well, it might be FORTRAN, it might be ugly, but it appears to be doing what they claim it’s doing – so deal. And if you prefer our version, well, here it is.

    And of course, mostly, they’re learning :)

  • David B. Benson // January 15, 2010 at 11:10 pm | Reply

    Maybe all these recent comments belong on the current open thread? They don’t seem to have much to do with cycles.

    Except the incessant recycling of various talking points, that is.

  • Ray Ladbury // January 16, 2010 at 1:12 am | Reply

    Sean,
    No offense intended, but it seems to me that most of the people clamoring for code and data could spend their time more productively learning the basics of climate science.

    Without the theoretical basis, how do you know what the data mean? How do you even know if you have enough data? How do you test for and correct heat-island and other effects?

    I was very critical of Watts’ surface station project–not because I thought it was a bad idea to document issues with the stations, but because without at least some training it seemed more like a recipe for a poison-ivy-induced dermatitis outbreak than a campaign to ensure data quality. And in the end, it really accomplished nothing for all the fanfare certain individuals gave it.

    If you want to learn about and understand this stuff, great. However, I’d start with the basic science and work from there.

  • Didactylos // January 16, 2010 at 3:28 am | Reply

    The CCC people are doing exactly what the deniers keep screaming needs to be done, but which they adamantly refuse to do themselves (sometimes openly admitting that they are not competent to do so).

    But the CCC results are curiously ignored by the deniers. Are they wearing blinkers? I’ve heard of confirmation bias, but the self-deception that must be required hurts my brain just thinking about it.

    Sean: you can’t have it both ways. If you are competent to analyse and comment on the results, go right ahead. If you aren’t, then what exactly do you think you are doing? Even if you had all the data and all the code ever written, you would do nothing with it – and worse, none of your denier friends would, either. They already have more code and data than they can analyse, and so far they have amazed us all by doing absolutely nothing with what they already have.

  • Barton Paul Levenson // January 16, 2010 at 12:55 pm | Reply

    sean: do not appear to accept the basic principle, that it is legitimate to want to download everything, run the calculations yourself to understand the details and pass comment.

    BPL: That’s because it’s a “principle” you made up.

  • dhogaza // January 16, 2010 at 2:21 pm | Reply

    But the CCC results are curiously ignored by the deniers. Are they wearing blinkers?

    It’s not curious at all. The motivation is to make science look bad in order to foster inaction on CO2 emissions, not to do science. Sean’s demonstrating it perfectly, and you’ve summed Sean up perfectly, too.

  • deech56 // January 16, 2010 at 3:14 pm | Reply

    Dhogaza, isn’t there a quote somewhere to the effect that McI really didn’t want to do any analysis of the information he was requesting?

  • Hank Roberts // January 17, 2010 at 4:09 am | Reply

    Speaking of cycles we’d like to hope are real, what’s new from the variable star folks? I know we’ve been slowly adding more observations of more sun-type stars over the last decade. I don’t know if we’re anywhere near close to having enough observations to say anything about how regular sun-type stars are over what periods of time.
    (That’s assuming we can identify sun-type stars, which I _think_ we can assume we can do.)

    This is the sort of local observation that brings up the wish for more longterm observations of more similar stars:
    http://www.leif.org/research/Solar-Microwaves-at-23-24-Minimum.pdf
    

  • Barton Paul Levenson // January 18, 2010 at 10:45 am | Reply

    Hank,

    Yes. Stars like the sun are of spectral type G2, luminosity class V, have similar metallicity as deduced from their spectrum, and have a similar age as deduced by lithium depletion and rotation rate. “Spotted stars” are known to have sunspots–I vaguely recall 9 Ceti having a 6-7 year sunspot cycle, but I can’t cite a source offhand.

  • Sceptical Guy // January 19, 2010 at 2:33 am | Reply

    BPL,
    You sure do know your astrophysics!

    BTW – I went on holiday before I got a chance to say thanks for the link to your planetary temps calculation page – that was exactly what I was after.

    Cheers.

  • Hank Roberts // January 19, 2010 at 11:24 am | Reply

    Here’s a decent (I think, amateur reader speaking) summary on current solar thinking:

    Solar change and climate: an update in the light of the current exceptional solar minimum
    – Mike Lockwood

    http://rspa.royalsocietypublishing.org/content/466/2114/303.full
    Proc. R. Soc. A 8 February 2010 vol. 466 no. 2114 303-329

  • Barton Paul Levenson // January 19, 2010 at 1:39 pm | Reply

    Thanks, SG! Glad I could help.

  • sean // January 24, 2010 at 9:53 am | Reply

    Actually the first question before understanding is knowing what we saw. People have collected data. People have taken the data off paper records and put them into machine-readable form. This raw data should be static, as it is what you saw.
    By nature monthlies cannot be raw. Published dailies databases could in principle be, but as far as I know are “nearly raw” rather than raw, and not static. If anyone can indicate a public database with the raw data, with or without TOB adjustment, and knows the data really is raw, that would be the start. We would want to check that the 1930s and 1940s had not been revised at any time after first being published, and QA the “raw” data using easily accessible “raw” data as published by the collectors of the readings. I have seen “Central Park” used for this.

    When you see online databases with publicly accessible “dailies”, as far as I know it is “near raw” with “minor adjustments for…”. Like instant coffee, it is convenient, but you can taste the difference. The problem with “instant” data is that 21st-century problems can be, and often are, treated by retrospective reprocessing of the data, and you do not really know what you are getting.

Even recalculating the data set mean as new data come in can modify the early years through the infilling of missing days. Plus there are repeated allegations that stations have been selectively dropped even though the raw data was available. Plus the “near raw” data appears to be getting colder in the past, as 21st-century problems are treated by adjustments to 20th-century data, e.g. the Y2K bug.

So the way forward is to find the raw data and put it online. AGW is a trillion-dollar problem; we can find the time to do this. You need to make a high-visibility appeal for the records from the professionals, have the originators check and authenticate their data, add back in whatever is missing, and then sign it with MD5 keys. You will need version control, naturally, e.g. Subversion. Let folks re-run the calculations with traceable data. To bring the station numbers back up you need to make a high-visibility PUBLIC appeal for records. Measuring the temp is an activity lots of people actually do as a hobby or as part of another activity, and you will find records several decades long WITH METADATA, e.g. schools. I did it in primary school, and they keep photos of that sort of thing. Past temp being IP is clearly nonsense. It seems highly unlikely any governments or courts would uphold intellectual property claims if the IPCC and UN supported disclosure.

    [Response: No offense, God love ya, but --

    It's arrogant as hell to think that the right analysis hasn't already been done by mainstream climate scientists, by people who know how to do it a helluva lot better than you, or Steve McIntyre, or Anthony Watts or the other denialists who are standing in the way of doing something to make the future better rather than a lot worse.

    It's likewise arrogant to think that those who have done so are somehow incompetent boobs who've gotten the whole global trend wrong because they don't know what they're doing. Offensively arrogant, and rooted in astounding ignorance.

    It's not just arrogant, it's a damnable sin to suggest that they've been in any way dishonest.

    It's really, really stupid to think that the corrections which are applied (like TOB) screw things up. They make it better, not worse. They've been designed carefully by those who have devoted a lifetime of experience and expertise. I really, really doubt you'll be able to apply some insight they missed -- it's far more likely you'll screw up along the way and "cry wolf" -- ten or twenty or three hundred times.

    This whole idea that we need to "audit" the surface temperature record is really really stupid and insulting. Sure there are some minor flaws that creep through -- there always will be -- but the idea that some revelation will emerge which is going to overturn the real result (that we've already warmed the planet a lot and it's going to get worse) is a fool's wish motivated by your reluctance to pay the price for the damage we've already done.

    As for the "trillion dollar problem" -- we damn well better start spending now because if we don't, the price is gonna go up. A lot. It's like the old saying that education is extremely expensive, but the lack of it is far more so. The "way forward" is NOT to repeat the whole analysis of temperature data -- that's just a delaying tactic and you're either too foolish to see that or in cahoots. The way forward is to ACKNOWLEDGE THE TRUTH.

    But if you really want to track down all the "raw" data and process it yourself, go right ahead. It should keep you busy for years. Don't expect us to trust your ability even to do the job right, let alone to do it any better than those who have already done so. Do expect us to consider you a fool (or worse) for doubting their competence and their veracity with no substantive reason.]
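[Editorial aside: the “sign it with MD5 keys” workflow sean proposes above — publish a digest alongside each raw data file so any later copy can be verified byte-for-byte — can be sketched in a few lines. This is a minimal illustration, not part of the thread; the file names and station record are hypothetical, and SHA-256 is shown as the modern alternative to the MD5 sean names.]

```python
import hashlib
import os
import tempfile

def file_digest(path, algo="sha256"):
    """Return the hex digest of a file's contents, read in chunks."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo with a hypothetical "station record": publish its digest, then
# verify that an independently obtained copy matches byte-for-byte.
with tempfile.TemporaryDirectory() as d:
    record = b"1934-07-15,38.3\n1934-07-16,37.9\n"

    original = os.path.join(d, "station_raw.csv")
    with open(original, "wb") as f:
        f.write(record)
    published = file_digest(original)      # digest published with the data

    copy = os.path.join(d, "station_copy.csv")
    with open(copy, "wb") as f:
        f.write(record)
    assert file_digest(copy) == published  # unchanged copy verifies
```

Any single-byte change to the copy would produce a different digest, which is the whole point of publishing checksums with a static archive.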

  • Ray Ladbury // January 24, 2010 at 3:50 pm | Reply

    Sean,
    Yeah, we could assume that every human who has ever looked at climate science in the past is an idiot and go back and re-invent the wheel…
    or we could look at whether independent datasets show the same trends and whether events like melting ice, earlier springs, etc. paint a consistent picture of what is going on.

    No offense intended, but not only is your planned strategy unnecessary, arrogant and illogical, it is also astoundingly unimaginative. It is an accountant’s approach to science–but then, I guess that’s why you guys do “audits,” huh?

  • dhogaza // January 24, 2010 at 6:08 pm | Reply

    Along with what tamino and ray ladbury said …

    ” You will need version control naturally e.g. Subversion.”

    Surely using a piece of software named “Subversion” will only feed the conspiracy nuts who think climate science is a fraud designed to lead us down the path to the New World Order.

    Sean, go pay for a DVD of the scans of original station datasheets that are available at GHCN and let us know when you’ve proved that climate scientists are all frauds!

    I won’t hold my breath.

  • David B. Benson // January 24, 2010 at 10:28 pm | Reply

    sean // January 24, 2010 at 9:53 am — A far better use of your time is a careful reading of Mark Lynas’s “Six Degrees”. Climatologist David Archer has that book as required reading in the class for non-majors.

  • dhogaza // January 24, 2010 at 11:42 pm | Reply

    A far better use of your time is a careful reading of Mark Lynas’s “Six Degrees”

    Awwww, my idea keeps him away far longer :)

  • Igor Samoylenko // January 25, 2010 at 6:14 pm | Reply

    I would also recommend Archer’s own book on the history of Earth’s climate and the summary of current research: The Long Thaw: How Humans Are Changing the Next 100,000 Years of Earth’s Climate. It is excellent – very well written; I thoroughly recommend it!

  • george // January 26, 2010 at 3:38 pm | Reply

    dhogaza

    “Surely using a piece of software named ‘Subversion’ will only feed the conspiracy nuts who think climate science is a fraud”

    But wouldn’t the full title be “Subversion control”, implying you are actively trying to keep subroutine hockeystick(handle, blade, year) out of the code?

  • climatesight // January 29, 2010 at 12:29 am | Reply

    Tamino, I’ll know that I’ve chosen the right course of study when I can finally understand your posts :)

Leave a Comment