Open Mind

Cyclical? Probably not.

December 31, 2009 · 12 Comments

It’s way too easy to look at data and think you see cycles. After all, if it went down, then up, then down, then up — it must be cyclic, right? The amount of analysis that goes into such conclusions is often limited to “Looks pretty cyclical to me.” Such “analysis” is tantamount to seeing a rock formation on Mars that looks vaguely like a face, and concluding that aliens constructed it millions of years ago as a message to future humanity.

The tendency to see cycles where there are none spurs people to extrapolate their imaginary “cycles” into the future, leading to some expectation about economic recovery, or a World Series victory for the Mets, or, all too commonly, imminent global cooling. It also enables those in denial to explain away the global warming we have observed with a wave of the hand, dismissing it as “natural cycles.”


Setting aside the folly of an overactive imagination, let’s consider the question: how do we tell whether some phenomenon is cyclic? We should first define what we mean by “cyclic.” Merriam-Webster online defines it as “of, relating to, or being a cycle” — which is no help at all! They do, however, define “cycle” thus:


1: an interval of time during which a sequence of recurring events or phenomena is completed
2: a course or series of events or operations that recur regularly and usually lead back to the starting point

Wiktionary defines cyclic this way:


Characterized by, or moving in cycles, or happening at regular intervals.
A process that returns to its beginning and then repeats itself in the same sequence.

The essence of these definitions — the essence of cyclic behavior — is that a pattern will recur. To be cyclic, it has to repeat, and not just once or twice, as could happen by accident; it has to repeat with sufficient regularity, and often enough, that given the most recently observed cycles we can make at least some useful prediction about the behavior during the next cycle.

The repetition doesn’t have to be perfect! Consider, for example, the light curve (a plot of brightness as a function of time) for the variable star Mira (magnitude is an inverted scale of brightness, so smaller numbers mean greater brightness, which is why the numbers on the y-axis run from bigger to smaller):

Clearly the ups and downs of Mira repeat, but just as clearly, each cycle is a bit different from the others; like snowflakes, no two are alike. Nonetheless, they’ve repeated often enough that we can be supremely confident they’ll continue at least into the immediate future (for the next few cycles, and more likely than not for the next several hundred), and we can certainly make a useful prediction of the course of this star’s brightness in the near future. Hence we can conclude that the brightness variations of Mira are periodic, although they’re not perfectly so.

There are numerous ways to search for, and test for, periodic behavior in time series. Probably the most common is Fourier analysis, which gives rise to the Fourier periodogram. Yet even then we have multiple versions of the Fourier periodogram, such as the Fast Fourier Transform (FFT), the Lomb-Scargle modified periodogram, the date-compensated discrete Fourier transform (DCDFT), etc. Even within a given method there are different conventions for definitions, especially about how the periodogram is scaled. In geophysics, for example, it’s common to scale the periodogram so that the total integrated power is equal to 1, while in astronomy the custom is to scale the periodogram so that the average power level is equal to 1. Here’s a Fourier periodogram (the DCDFT using astronomical scaling) of the Mira data:

The periodogram is dominated by a tall spike at frequency 0.002987 cycles/day, corresponding to a period of 334.7 days. That’s the mean period for Mira during this time span; each cycle is a bit different and individual periods will vary, but the mean is reasonably stable (which helps us make useful short-term predictions).
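Not the DCDFT itself, but here’s a minimal sketch of the same basic computation using scipy’s Lomb-Scargle routine as a stand-in; the file name, column layout, and frequency grid are made-up placeholders, not the actual Mira data set behind the figure:

```python
import numpy as np
from scipy.signal import lombscargle

# Hypothetical input: observation times in days and magnitudes
# (file name and column layout are placeholders for illustration).
t, mag = np.loadtxt("mira.dat", unpack=True)

# Frequency grid in cycles/day, covering periods from roughly 50 to 2000 days.
freqs = np.linspace(1.0 / 2000.0, 1.0 / 50.0, 5000)

# scipy's lombscargle expects *angular* frequencies (radians per day).
power = lombscargle(t, mag - mag.mean(), 2.0 * np.pi * freqs)

# Astronomical scaling: divide by the average power so the mean level is 1.
power /= power.mean()

best = freqs[np.argmax(power)]
print(f"dominant frequency: {best:.6f} cycles/day  (period {1.0 / best:.1f} days)")
```

On the Mira data, the tallest peak should land at the 0.002987 cycles/day spike described above.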

There are ways other than Fourier analysis to compute a periodogram; one of the most ingenious is based on applying the analysis of variance (AoV), and was developed as a period-search method which is very fast to compute. Here’s the AoV periodogram for Mira:

Both Mira periodograms are dominated by a single tall peak at the genuine signal frequency, which indicates the actual periodic behavior (and enables us to estimate the frequency, and therefore the period). They also show smaller peaks at 2 and 3 times the signal frequency, indicating the presence of overtones in the spectrum, as well as a number of other small peaks which don’t indicate genuine periodic behavior at all, especially the one at very low frequency.
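For those who want to experiment, here’s a rough sketch of a phase-binning AoV statistic in the spirit of the method described above (fold the data at a trial frequency, then compare between-bin to within-bin variance); it follows the general idea rather than any particular published implementation, and the bin count is an arbitrary choice:

```python
import numpy as np

def aov_statistic(t, x, freq, nbins=10):
    """Phase-binning AoV statistic at one trial frequency: the ratio of
    between-bin to within-bin variance of the data folded at that frequency."""
    phase = (t * freq) % 1.0
    bins = np.minimum((phase * nbins).astype(int), nbins - 1)
    grand_mean = x.mean()
    between, within, used = 0.0, 0.0, 0
    for b in range(nbins):
        xb = x[bins == b]
        if xb.size == 0:
            continue
        used += 1
        between += xb.size * (xb.mean() - grand_mean) ** 2
        within += np.sum((xb - xb.mean()) ** 2)
    # F-like ratio: large values mean folding at this frequency sorts the
    # data into phase bins with clearly distinct means, i.e. a real cycle.
    return (between / (used - 1)) / (within / (x.size - used))

def aov_periodogram(t, x, freqs, nbins=10):
    return np.array([aov_statistic(t, x, f, nbins) for f in freqs])
```

A genuinely periodic signal drives this statistic up sharply when the trial frequency matches the true one, which is why the AoV periodogram of Mira is dominated by the same tall peak as the Fourier version.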

Both periodograms also return a “power” level which we can treat as a test statistic for periodic behavior. For Fourier analysis, the power level is often viewed as proportional to a chi-square statistic with 2 degrees of freedom, and the highest power level is treated as the maximum of a number of independent chi-square statistics with 2 degrees of freedom. Unfortunately, using these statistics to conclude periodicity is a very problematic procedure. For one thing, the null hypothesis on which these statistics are based is that the data are nothing but white noise. A significant value doesn’t mean the data are probably periodic; it just means the null hypothesis is probably not true — and there are lots of ways for that to happen. For example, any nontrivial signal can lead to exaggerated power levels, especially at low frequencies.
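To make the recipe concrete: with the astronomical scaling (mean power level 1), each periodogram value under a white-noise null behaves roughly like an exponential variate (a chi-square with 2 degrees of freedom, halved), and the “significance” threshold for the tallest of M independent values is the height exceeded with some small false-alarm probability. Here’s a sketch of that textbook calculation, shown only so the criticism is concrete, not as a recommendation:

```python
import numpy as np

def white_noise_threshold(n_independent, alpha=0.01):
    """Peak height exceeded with probability alpha under a pure white-noise
    null, assuming the periodogram has mean level 1 and that the
    n_independent frequencies behave as independent exponential variates."""
    return -np.log(1.0 - (1.0 - alpha) ** (1.0 / n_independent))

# Example: with 2500 independent frequencies, the 1% false-alarm level
print(white_noise_threshold(2500, 0.01))  # about 12.4
```

Here, for instance, is the Fourier periodogram of data which follow a perfectly straight line with no noise: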

Many of the low-frequency peaks are well above the “statistical significance” level, but as I say, that only indicates that these data are not pure white noise. Which we already knew. We can take the straight-line data and add noise, in particular noise which isn’t white, giving this:

The periodogram looks like this:

Again, quite a number of the periodogram peaks are “significant,” but none of them signify periodicity; these data are a straight-line signal plus noise. Significance doesn’t demonstrate periodicity; it just negates the null hypothesis.
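The data behind these figures aren’t reproduced here, but a rough sketch of the same kind of experiment shows the effect; the trend slope, AR(1) coefficient, and noise level are made-up values for illustration:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)

# A pure linear trend plus AR(1) ("red") noise: no periodicity anywhere.
n = 500
t = np.arange(n, dtype=float)
trend = 0.01 * t
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.8 * noise[i - 1] + rng.normal(scale=0.5)
x = trend + noise

# Periodogram scaled so the mean power level is 1.
freqs = np.linspace(1.0 / n, 0.5, 2000)
power = lombscargle(t, x - x.mean(), 2.0 * np.pi * freqs)
power /= power.mean()

# Naive white-noise threshold (1% false alarm), treating every frequency
# as independent, which it isn't; that's part of the problem.
threshold = -np.log(1.0 - 0.99 ** (1.0 / freqs.size))
print((power > threshold).sum(), "frequencies exceed the naive significance level")
```

With these parameters, a swath of low-frequency values clears the naive threshold even though the underlying process contains no periodicity whatsoever.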

Another problem arises when the time spacing of the data (the sampling) isn’t perfectly regular. Then, the values of the usual Fourier (or other) periodogram aren’t independent, so we can’t treat them as a set of independent statistics. Furthermore, even with regular time sampling the Fourier periodogram values are only independent at certain pre-defined frequencies. But it’s commonplace (and in fact often highly desirable) to sample the Fourier periodogram at a lot of frequencies which don’t fall into the predefined set: to oversample the periodogram. Then the Fourier values aren’t independent even if the time sampling is regular and the data are pure white noise! The bottom line is that there are many ways for the simple “test statistic” treatment of periodogram analysis to go wrong. This is especially true for low frequencies (periods which are long compared to the total time span of data).
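One general-purpose workaround for the non-independence (not specific to this post) is to abandon analytic significance levels altogether and build the distribution of the tallest peak by Monte Carlo, using the actual time sampling and the actual, oversampled frequency grid. A minimal sketch under a white-noise null; a red-noise or other null process could be substituted:

```python
import numpy as np
from scipy.signal import lombscargle

def max_peak_null_distribution(t, freqs, n_sim=1000, seed=0):
    """Empirical distribution of the tallest periodogram peak under a
    white-noise null, computed on the actual (possibly irregular and
    oversampled) time sampling and frequency grid, so no independence
    assumption is needed."""
    rng = np.random.default_rng(seed)
    maxima = np.empty(n_sim)
    for k in range(n_sim):
        y = rng.normal(size=t.size)
        p = lombscargle(t, y - y.mean(), 2.0 * np.pi * freqs)
        maxima[k] = (p / p.mean()).max()
    return maxima

# A peak in the real data is then judged against, say,
# np.percentile(max_peak_null_distribution(t, freqs), 99)
```

Of course this only addresses the independence problem; if the null process itself is the wrong model for the noise, the test is still answering the wrong question.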

It’s my opinion that a better way to treat a Fourier periodogram is to consider the power levels of the peaks to be proportional to a chi-square statistic with three degrees of freedom. Furthermore, the constant of proportionality is not what a naive analysis would indicate; in fact it can change throughout the spectrum, so we see different regimes in frequency space. Hence the constant of proportionality should be estimated from the “background” level of the spectrum, specifically the background in the neighborhood of the peak we happen to be testing. All of which makes for a very complicated procedure, with a lot of uncertainties that are hard to quantify! Welcome to period analysis.

That’s what I do in practice: compare the level of a periodogram peak to the levels of the other peaks in its neighborhood. In other words, one uses the “background” level to define the scale, and tests whether the given peak is sufficiently taller than its neighbors. After many years, one gets a good intuitive “feel” for significance, refined by the many times one has fooled oneself into believing periodicity when there was none. Generally, one learns to be conservative, and not to apply the label “periodic” without considerable confidence. As for claims about periodicity based on the “standard” treatment of test statistics from periodograms — I don’t put much stock in that. In fact, other than in the most exceptional circumstances I don’t put any stock in it at all!
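As a rough sketch of that kind of comparison (the window width, the exclusion zone around the peak, and the use of a median are arbitrary illustrative choices, not a prescription):

```python
import numpy as np

def peak_to_background(power, peak_index, half_width=50, exclude=3):
    """Ratio of a periodogram peak to its local 'background' level, estimated
    as the median power in a window around the peak, excluding the peak
    itself and its immediate shoulders."""
    lo = max(0, peak_index - half_width)
    hi = min(power.size, peak_index + half_width + 1)
    left = power[lo:max(lo, peak_index - exclude)]
    right = power[min(hi, peak_index + exclude + 1):hi]
    return power[peak_index] / np.median(np.concatenate([left, right]))

# Example: rate the tallest peak against its surroundings
# i = np.argmax(power); print(peak_to_background(power, i))
```

How much taller than its local background a peak must be before it earns the label “periodic” is exactly the judgment call that experience (and a healthy history of having fooled oneself) eventually calibrates.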

One also learns that very low-frequency periodogram peaks are just plain untrustworthy. If the period is no shorter than the time span, then of course one can’t conclude periodic behavior — there hasn’t been enough observed time for the “cycle” to repeat at all! Even if the time span is two full periods, so we’ve seen a full “repetition,” I never trust conclusions of periodicity. There are just too many other signals that look periodic on such a short time frame, and they happen far too often, for a single repetition to justify a conclusion of periodicity. Even with two full repetitions (covering three full “periods”) I tend to be skeptical — but at least then I begin to entertain the idea that genuine periodic behavior is plausible.

Not all periodic behaviors are visible in a time series plot; that’s one of the reasons we apply period analysis. Not all apparent periodic behaviors in a time series plot are genuine; that’s another reason we apply period analysis. And not all apparent significant values in a period analysis are genuine either! That’s why we apply healthy, hefty skepticism and lots of experience. Hence the title of this post. Asking whether or not your data show genuine periodic behavior is a bit like asking whether or not you can afford a Rolls-Royce automobile: if you have to ask, probably not.

Categories: Global Warming · mathematics

12 responses so far

  • Andy // December 31, 2009 at 6:25 pm

    Tamino, great treatment as always.

    I have spent a lot of time analyzing the Pacific Decadal Oscillation. It’s the “O” in “PDO” that drives me bonkers. Two or three switches in sign does not an oscillation make! There was a movement a few years back to call it Pacific Decadal Variability but that didn’t stick. Alas. Enough whinging. Here’s my question: Since you are born again as a Bayesian, can you construct a better null than white noise? One that distinguishes between long-lived phenomena that follow an ARMA(p,q) model and a random pattern? The standard null model in a lot of geophysics is now an assumption of red noise – often an AR1 model with some reasonably high coeff for the first order autocorrelation. But it seems to me that you’d like to specify a null that relies on a mechanism.

    I suppose I’m Bayes-curious. How would you do it?

    [Response: I agree completely about PDO. It's an "oscillation" in the sense that it's a variation, but I don't see any firm evidence of periodicity. But the name "oscillation," whether correct or not, is stuck.

    I don't think you need to "go Bayesian" to improve the evaluation of period analysis. But it makes for some great puns.

    Alternative noise processes, and non-periodic signals, are what makes the scaling for a Fourier transform different from the "naive" (white-noise model) scaling. But they don't change the scaling for a single frequency, rather for a broad range of frequencies, so they're also what makes the scaling change from one period regime to another. That's why I advocate using the "background" level in the "neighborhood" of a spectral peak, to scale it for evaluation. Doing so accommodates the alternative behaviors you're talking about. So I think that's the right way to go about it.]

  • Andy // December 31, 2009 at 7:15 pm

    Tamino, thanks for your thoughtful reply. I think I now understand how using the background level allows you to create a better test. Got it. But at the end you are still asking if some signal is periodic or not periodic, right?

    The issue in many earth processes (be they incoming shortwave radiation, ocean currents, whatever) is that autocorrelation can give the impression of cyclic behavior from a simpler model. Not knowing a helluva lot about Bayesian modeling, can you compare some higher-order models that correspond to an understood _mechanism_ and have a model cook-off rather than a comparison to a null? In some cases that would be desirable and in some cases not, I suppose, but I usually want to know why something shows a pattern and not just if something shows a pattern.

    Correct me if I’m wrong (please!).

    (PS. Regarding our innate desire to see patterns: when I took a spatial stats class in grad school the instructor asked us to draw a random pattern of dots on a screen that we then analyzed for clumping. Almost everybody in the class ended up overcorrecting and drew a uniform pattern because they saw clusters that didn’t really exist. It was an interesting lesson in our tendency to see patterns that aren’t there.)

  • Philippe Chantreau // December 31, 2009 at 7:35 pm

    Very interesting once again. This is really the stuff you do best, I feel enlightened :-)

  • Cugel // January 1, 2010 at 2:29 am

    Thank you so much for this post. As an old dog faced with new tricks this is exactly what I need.

  • Cugel // January 1, 2010 at 2:48 am

    Andy :

    I agree wholeheartedly about the PDO. A few “cycles” at best when first proposed, and recent behaviour does not support the hypothesis.

  • John Mashey // January 1, 2010 at 3:06 am

    Good material as usual.
    Nit: “how the peirodogram is scaled”

    [Response: Fixed, thanks.]

  • billy t // January 1, 2010 at 8:57 pm

    Don’t forget truncation effects – e.g. the apparent ‘peaks’ in your periodogram of the flat line are because you’re seeing the Fourier transform of a square (truncated line).

  • David B. Benson // January 2, 2010 at 12:06 am

    Andy // December 31, 2009 at 7:15 pm — I don’t think it is a simple binary choice between periodic and not periodic. Rather, it is a (not very well defined, for me at least) notion of predictability, forecasting with skill. So here are four stages:
    (1) periodic;
    (2) pseudoperiodic, as in sunspot cycles;
    (3) quasi-periodic oscillations, maybe ENSO qualifies;
    (4) other “oscillations”.
    I don’t (yet) see this issue as a Bayesian matter, except to the extent that the maximum likelihood ideas used during model training are Bayesian.

    I believe I am at least as skeptical as Tamino when it comes to declaring “cycles!” In
    Variability of El Niño/Southern Oscillation activity at millennial timescales during the Holocene epoch
    http://www.nature.com/nature/journal/v420/n6912/full/nature01194.html
    there is a claim of detecting an approximately 2000-year period “cycle” after seeing but 6 periods. Sorry, but from looking at the wavelet figure and the text proposing a mechanism, I view this as simply some of the (1/f^1.5) pink-red noise which happens to look sorta periodic during the Holocene, in the proxy used. [It is quite a good paper, just not on this one point.]

    However, some papers by McCoy and co-workers have proposed GARMA models to treat what are thought to be periodicities in a time series. One could use their MCMC procedure to develop GARMA and also ARMA models for the time series and then use BIC for the model cook-off.

  • HankHenry // January 2, 2010 at 5:50 pm

    Hey, there aren’t just faces on Mars. Don’t forget the canals.

  • Andy // January 3, 2010 at 4:11 am

    Ok, any opinions out there regarding the Atlantic Multi-decadal Oscillation? Whoever wrote the Wiki entry sees it as likely to be real, but I also recall some controversy over whether or not this was an artifact of global-warming-forced sea surface temp changes.

  • Hank Roberts // January 3, 2010 at 5:21 am

    > Andy
    > recall some controversy….

    Can you find that again?
    I spent about 20 minutes searching just now and didn’t find anything resembling that either with Google or Scholar.

  • David B. Benson // January 3, 2010 at 11:03 pm

    Andy // January 3, 2010 at 4:11 am — I am (currently) of the opinion that the AMO is just noise. See
    http://en.wikipedia.org/wiki/Atlantic_multidecadal_oscillation#Prediction_of_AMO_shifts
