Open Mind

Western Sizzlin’

December 2, 2008 · 46 Comments

I saw a blog post last week attempting to ridicule global warming by pointing out that there was a cold spell in the U.S. (the lower 48 states). Never mind that the cold spell was only about a week old, or that the lower-48 states cover less than 2% of the globe; the author made a huge deal out of the fact that record cold events (record low temperature for the given date, and record low max temperature for the given date) outnumbered record hot events (record high temperature for the given date, and record high min temperature for the given date) by a sizeable margin.

This week things are different. Record hot events over the last week outnumber record cold events by 506 to 110:

  • High Temperatures: 140
  • Highest Min Temperatures: 366
  • Low Temperatures: 84
  • Lowest Max Temperatures: 26

    What a difference a week makes!


    The data come from a fascinating site, Hamweather. They also include a map showing the locations of record events:

    [Map from Hamweather showing the locations of the week’s record temperature events]

    We can see clearly that although the eastern U.S. has lately experienced colder-than-usual conditions, the western U.S. has experienced hotter-than-usual conditions — far more so than the cold in the east. There’s also a “pocket” of hot events along the Gulf coast straddling the Texas-Louisiana border.

    By no means does this prove global warming is correct. In fact it only underscores two important things to keep in mind: first, that there are strong regional differences in weather conditions so if we want to study global warming we should pay more attention to the globe than to less than 2% of it; second, that global warming is about climate, and that even if climate changes we’re still gonna have weather.

    The real sign of global warming is in the trends, not the momentary hot or cold spells. The real danger of global warming is that it brings with it fundamental changes in a great many environmental variables, including one that is fundamental to human survival: water. That’s the topic of a recent paper by Barnett et al. (2008, Science, 319, 1080), Human-Induced Changes in the Hydrology of the Western United States.


    Observations have shown that the hydrological cycle of the western United States changed significantly over the last half of the 20th century. We present a regional, multivariable climate change detection and attribution study, using a high-resolution hydrologic model forced by global climate models, focusing on the changes that have already affected this primarily arid region with a large and growing population. The results show that up to 60% of the climate-related trends of river flow, winter air temperature, and snow pack between 1950 and 1999 are human-induced. These results are robust to perturbation of study variates and methods. They portend, in conjunction with previous work, a coming crisis in water supply for the western United States.

    The authors studied three climate variables important to the water cycle: the daily minimum temperature Tmin from January through March, the timing of snowmelt runoff (as indicated by the center timing CT of river flows), and the amount of snowpack (expressed as SWE, or “snow water equivalent”) in mountain ranges (actually the snowpack-to-precipitation ratio SWE/P).

    [Figure 2 from Barnett et al. (2008); caption below]

    Fig. 2. Observed time series of selected variables (expressed as unit normal deviates) used in the multivariate detection and attribution analysis. Taken in isolation, seven of nine SWE/P, seven of nine JFM Tmin, and one of the three river flow variables have statistically significant trends.

    The observed trends are exactly what would be expected from global warming: higher Tmin, earlier river flow due to earlier snowmelt runoff, and reduced snowpack. But could these changes be due to natural variation in the climate system? To investigate that issue, the authors studied the behavior of a regional hydrological model when it was fed with data from two global climate models: the parallel climate model (PCM), which had been used previously in hydrological studies of the western United States and realistically portrays important features of observed climate and the amplitude of natural internal variability, and the Model for Interdisciplinary Research on Climate (MIROC), selected from the current Intergovernmental Panel on Climate Change (IPCC) AR4 set of global runs because it had many 20th-century ensemble members with daily data available, and because it offered a high degree of realism in representing the Pacific Decadal Oscillation (PDO).

    Model runs for 1600 years without anthropogenic forcing were used to gauge the amount of natural variability in the system. Model runs with anthropogenic forcing were used to define the response to human-induced changes, a “fingerprint” of humanity’s effect. The observed data were then compared with both results to see whether or not the recent changes are compatible with anthropogenic causation (answer: yes) and whether or not they’re compatible with natural variation (answer: no):


    The observed signal falls outside the range expected from natural variability with high confidence (P < 0.01). In separate analyses for PCM and MIROC, the likelihood that the model signal arises from natural internal variability is between 0.01 and 0.001 (20). The different downscaling methods have little impact on these results. We conclude that natural internal climate variability alone cannot explain either the observed or simulated changes in SWE/P, JFM Tmin, and CT in response to anthropogenic forcing.
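
    (For readers who want to see the shape of such a test, here’s a minimal Python sketch of the basic detection logic: compare an observed trend against the spread of trends generated by an unforced control run. The white-noise control series and the “observed” slope below are made-up stand-ins, not the paper’s data or method.)

        import numpy as np

        rng = np.random.default_rng(0)

        # Stand-in for a long unforced control run: white noise here; a real
        # analysis would use the climate model's own simulated internal variability.
        control = rng.normal(size=1600)

        def segment_trends(series, window):
            """Least-squares slope of each non-overlapping segment of length `window`."""
            t = np.arange(window)
            return np.array([np.polyfit(t, series[i:i + window], 1)[0]
                             for i in range(0, len(series) - window + 1, window)])

        null_slopes = segment_trends(control, 50)  # trends natural variability produces
        observed_slope = 0.04                      # hypothetical observed 50-year trend

        # One-sided p-value: how often does unforced variability reach the observation?
        p = np.mean(null_slopes >= observed_slope)
        print(f"p = {p:.3f} (small p: the signal falls outside the natural range)")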

    The conclusion is that climate in the western U.S. is changing, it’s not just natural variation, and it’s having a significant impact on the water cycle.


    Our results are not good news for those living in the western United States. The scenario for how western hydrology will continue to change has already been published using one of the models used here [PCM (2)] as well as in other recent studies of western U.S. hydrology [e.g., (15)]. It foretells water shortages, lack of storage capability to meet seasonally changing river flow, transfers of water from agriculture to urban uses, and other critical impacts. Because PCM performs so well in replicating the complex signals of the last half of the 20th century, we have every reason to believe its projections and to act on them in the immediate future.

  • Categories: Global Warming

    46 responses so far ↓

    • Sekerob // December 2, 2008 at 10:30 pm

      Just visit Rutgers to see where precipitation translates to snow, or not… and from the big rush here in Europe, little seems left:

      Day 335
      http://climate.rutgers.edu/snowcover/images/legend_daily_dn.gif

    • Sekerob // December 2, 2008 at 10:32 pm

      Sorry, legend and day 336

      http://climate.rutgers.edu/snowcover/png/daily_dn/2008336.png

    • Jim Arndt // December 2, 2008 at 11:16 pm

      Snow pack seems to follow the PDO quite well: higher during a negative PDO and lower during a positive PDO. This may be due to changes in atmospheric wind patterns and not so much due to climate change. Here is also a link that shows river flow following the sunspot cycle.
      Here is NASA’s take on rainfall and river flow.
      http://climate.gsfc.nasa.gov/research/solar_radiation.php

      [Response: The link you gave doesn't even hint that river flow follows the sunspot cycle. Do you have a valid reference for that claim?

      If there is a connection between PDO and snowpack (do you have a reference?), might it be related to precipitation? The authors of the Barnett et al. paper studied the ratio of SWE to seasonal precipitation, specifically to remove the impact of precipitation variations and more closely isolate the temperature influence on snowpack.]

    • geoxzi // December 2, 2008 at 11:21 pm

      Just watch the yearly ice levels at the poles at http://www.globalboiling.org

      [Response: This is certainly off topic for this post, and there are lots of posts on this blog about polar ice. There's even an open thread for comments not related to any post topic.]

    • P. Lewis // December 3, 2008 at 12:24 am

      Sunspots and river flow in New Scientist (contains the PRL ref for the paper).

    • Richard Steckis // December 3, 2008 at 12:43 am

      Just for your information: here in Perth, Western Australia, we had one of the warmest Octobers on record and the coolest November in over 35 years.

      Weather is just weather.

    • Ray Ladbury // December 3, 2008 at 1:21 am

      And climate is climate, and ne’er shall the skeptics learn the difference.

    • Jim Arndt // December 3, 2008 at 1:53 am

      Hi Tamino

      Thank you for your time. Here is a link; unfortunately they use an outdated TSI, but the shape of the graph remains the same, the only difference being that TSI doesn’t vary that much (look up the data from TIM on SORCE). This is very interesting and shows at least a solar link to at least parts of the climate. There may also be a link between intense hurricanes and UV, but I’ll leave that alone since it’s OT for the moment.
      Graph
      http://ks.water.usgs.gov/pubs/report/paclim99.fig3.gif
      Summary
      http://ks.water.usgs.gov/pubs/reports/paclim99.html#HDR4
      Nile River
      http://www.cig.ensmp.fr/~iahs/redbooks/a051/051003.pdf
      Rainfall
      http://www.nsf.gov/news/news_summ.jsp?cntn_id=109789

    • Richard Steckis // December 3, 2008 at 3:06 am

      Wrong Ray. Climate is average weather over time.

    • Hank Roberts // December 3, 2008 at 3:52 am

      http://www.worldclimate.com/about.htm

    • Richard Steckis // December 3, 2008 at 4:30 am

      http://www.worldclimate.com/about.htm says in part:

      “Climate data are historical weather averages, showing what the weather was typically like each month, averaged over a range of years. ”

      Isn’t that what I said Hank?

    • Gavin's Pussycat // December 3, 2008 at 4:57 am

      > Wrong Ray. Climate is average weather over time.
      …and over the ensemble, of which we live in only one realization. Only for ergodic processes are they the same.
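
      (In symbols: a process is ergodic when the time average of a single realization converges to the ensemble average,

          \lim_{T \to \infty} \frac{1}{T} \int_0^T x(t)\,dt = \mathbb{E}[x(t)],

      which is what licenses treating our one observed climate history as representative of the whole ensemble.)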

    • TCOisbanned? // December 3, 2008 at 9:31 am

      Tammy:

      “Model runs” were used to determine variability and to look at AGW versus non-AGW. So, I wonder if it is circular to look at hydrology as an indicator of AGW. It would be more telling to look at something like Nile River data, where there’s actual data for several centuries, and see whether the recent 50-year behavior is somehow markedly different from the historical.

    • Andrew Dodds // December 3, 2008 at 11:29 am

      Gavin’s Moggy -

      ‘Ergodic’? Is that a word?

      Ray/Richard -

      According to the Skeptic definition, warmer-than-average temperatures are weather regardless of timescale, and colder-than-average temperatures are climate regardless of timescale. It’s quite simple…

      The only observation I can make is that in the UK, we seem to be having fairly average weather for the time of year, which is catching people out as we’ve got used to very mild winters now.

    • Ray Ladbury // December 3, 2008 at 1:49 pm

      Andrew Dodds,
      1) Ergodic is indeed a word. Look it up.

      As to your characterization of skeptics’ views on climate and weather, you seem to be saying that in order to remain a skeptic, one must preserve one’s ignorance.

    • Ray Ladbury // December 3, 2008 at 1:53 pm

      Richard Steckis: Climate is average weather over time.

      -5 points for vagueness. Need to either specify what time or discuss the relation between time and confidence. Weather averaged over a microsecond is still weather.

    • TCOisbanned? // December 3, 2008 at 2:02 pm

      Yup… Dodds, you peg the Skeptics well there. Or (in similar vein), they bluster about long term persistence and Hurst and crap (trying to say that the Earth’s climate system inherently has centennial scale oscillations like we’ve seen in the 20th century) while simultaneously coming out with nutter crap like claiming that ten years of flat temps disprove IPCC AGW predictions. (And don’t even come here and give me some semantic debate about what IPCC said, Lucia… that’s cheap ass shit.)

      Of course, I think it’s funny as shit to watch warmers spazz over every hot day and colders over every cold one. (And no, I don’t think much of the warmer, “we’re not saying this is a sign of AGW, just that things will be like this when it happens” prevarication.)

    • dhogaza // December 3, 2008 at 9:55 pm

      (And no, I don’t think much of the warmer, “we’re not saying this is a sign of AGW, just that things will be like this when it happens” prevarication.)

      In other words, you don’t think much of honesty.

      Big effing surprise.

    • S2 // December 3, 2008 at 11:01 pm

      Jim Arndt’s third link (on Lake Nyasa) above appears to date from 1958….

      I’m intrigued by the graphs above from the Barnett et al. paper. I don’t recall seeing graphs before with standard deviations as the Y-axis. Is this a common approach?

      I’m struggling with working out how to interpret them - are they saying that the standard deviations are changing with time?
      And I know I’m a novice at maths, but I can’t see how you can get a standard deviation less than zero?

      I’m missing something, obviously.

      [Response: They're not plotting standard deviations. Instead, they've taken the data, subtracted the mean, then divided by the standard deviation to generate a "normalized" data set. The axis says "standard deviations" because that's the units for the y-axis. Values that were originally above average end up being positive, values that were originally below average end up being negative.

      I could take a temperature series, for example, subtract the average from every value, then divide every value by the standard deviation, to get a new series where the average is zero (because I've subtracted the average from the original data) and the standard deviation is one (because I've divided the result by the standard deviation). In this way, if I want to compare two different series with very different means and standard deviations, I've brought them onto the same "scale." Without this "normalization," a graph of two data sets might have one varying from 10.992 to 11.008 while the other varies from 215 to 712, which would make comparison of their changes difficult to see.]
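
      (A short Python illustration of that normalization, using made-up numbers on very different scales:)

          import numpy as np

          x = np.array([10.992, 10.995, 11.001, 11.008])  # small numbers, tiny variations
          y = np.array([215.0, 340.0, 520.0, 712.0])      # big numbers, big variations

          def standardize(v):
              """Subtract the mean, then divide by the standard deviation:
              the result has mean 0 and standard deviation 1."""
              return (v - v.mean()) / v.std(ddof=1)

          # Both series are now in "standard deviation" units, directly comparable.
          print(standardize(x))
          print(standardize(y))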

    • S2 // December 4, 2008 at 12:18 am

      Thanks for a clear and lucid explanation.

      I’ve learnt something new, and I’m grateful.

    • Otto Kakashka // December 4, 2008 at 12:52 am

      “The axis says “standard deviations” because that’s the units for the y-axis” Tamino

      I’ve seen plots of this sort with different y-axis labels - “Std. Deviations” or “Normalized Units” or “Standardized Units” or “Z”. Is there a particular label that is more correct or widely used?

      [Response: I don't think there's any "standard" label for standardized units (which is a bit ironic).]

    • TCOisbanned? // December 4, 2008 at 1:32 am

      I don’t think much of snippety little yap dogs. But Jolliffe loves me. And I have looked at the enemy through cross hairs.

    • Richard Steckis // December 4, 2008 at 4:03 am

      Ray Ladbury:

      “-5 points for vagueness. Need to either specify what time or discuss the relation between time and confidence. Weather averaged over a microsecond is still weather.”

      Ray. Grow up. If you want a definition of climate down to the minutiae, then go somewhere else. Otherwise, understand my concise definition as just that.

    • Ray Ladbury // December 4, 2008 at 12:47 pm

      Richard Steckis, actually, the relation between the confidence we can have in a trend and the length of time is critically important. It is not that climatic effects do not manifest on timescales less than 30 years, merely that we cannot confidently distinguish them from noise on such short timescales.

    • Bob North // December 4, 2008 at 3:03 pm

      Reviewing the charts, it appears that there was a step-change in the SWE/P in the late 70s. Tamino - would something like a Mann-Whitney U test, or some other test, be appropriate for determining if in fact there was a statistically significant step-change sometime between about 1978 and 1980?

      [Response: That will tell you whether or not the distribution post-1979 is different from that pre-1979, but it won't tell you whether the difference is due to a step change, or a linear trend, or some other pattern. To compare those two ideas, I'd suggest fitting a linear trend model, and fitting a step-change model, then using the Akaike Information Criterion (or Bayesian Information Criterion) to compare the quality of the models. Note that when using AIC or BIC on step-change models, the time of the step change is one of the parameters, so the linear model has two parameters (slope and intercept) while the step-change model has three (average1, average2, time of change).]
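
      (A minimal Python sketch of that comparison, with synthetic data standing in for the SWE/P series; the change time is scanned over as one of the step-change model’s parameters, as described above:)

          import numpy as np

          rng = np.random.default_rng(1)
          t = np.arange(1950, 2000)
          # Synthetic stand-in for the SWE/P series, built with a step in 1979.
          y = np.where(t < 1979, 0.3, -0.3) + rng.normal(scale=0.5, size=t.size)

          def aic(rss, n, k):
              """Gaussian AIC up to an additive constant: n*log(RSS/n) + 2k."""
              return n * np.log(rss / n) + 2 * k

          n = t.size

          # Linear-trend model: 2 parameters (slope and intercept).
          slope, intercept = np.polyfit(t, y, 1)
          rss_linear = np.sum((y - (slope * t + intercept)) ** 2)

          # Step-change model: 3 parameters (mean before, mean after, change time),
          # with the change time found by scanning every candidate year.
          rss_step, change_year = min(
              (np.sum((y - np.where(t < c, y[t < c].mean(), y[t >= c].mean())) ** 2), c)
              for c in t[1:-1]
          )

          print("AIC, linear trend:", aic(rss_linear, n, 2))
          print("AIC, step change:", aic(rss_step, n, 3), "best change year:", change_year)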

    • Richard Steckis // December 4, 2008 at 5:10 pm

      Ray. Rubbish. Clear trends can often be seen inside the IPCC-sanctioned 30 years. A consistent trend in either direction over a decade can fall outside the range of just noise.

      I have a problem with the term “noise”. Noise is a statistical or mathematical construct. What you are calling noise is weather (it is not a statistical abstraction). The average regional weather over periods of years, decades and centuries constitutes climate. There is no “set in stone” time period that defines climate despite the IPCC’s definition (which is not universally accepted).

    • Dano // December 4, 2008 at 5:13 pm

      RE: step-change argumentation:

      The issue is whether there has been some sort of regime change that has dampened these cycles. That is: the 1976-ish change was periodic and understandable, so why is it difficult to see the next one? Where is it?

      —–

      Let us note the New Scientist usage: yet another early speculative question, grabbed onto and squeezed hard by some. Some things never change, do they?

      Best,

      D

    • Gavin's Pussycat // December 4, 2008 at 6:10 pm

      Tamino, any pointers on how to do AIC on correlated data? You have to take correlation into account, right?

      [Response: Outstanding question. My first instinct is that by using an "effective number" of degrees of freedom, one could adequately compensate for autocorrelation when applying AIC/BIC. Time to go search the literature...]

    • David B. Benson // December 4, 2008 at 7:53 pm

      Richard Steckis // December 4, 2008 at 5:10 pm — Actually it is the WMO which defines climate as at least 30 years of weather. You could research when this definition first appeared, but I suggest in the 1930s, when it first became clear just how to do sensible (that is, significant) statistics on weather data.

      In any case, the definition of 30 years is universally accepted by meteorologists; when the weather data is given as above or below ‘normal’, the ‘normal’ is some previous thirty year interval; paleoclimatologists prefer the ‘or longer’ part.

    • Ray Ladbury // December 4, 2008 at 7:57 pm

      GP and Tamino, I recently confronted the same problem–AIC on correlated data. In my case, I used a model that imposed a relation on the means of the two variables, but still fit for a width (or rather lognorm SD; I also looked at using a Weibull). In effect, this removed one DOF.

    • dhogaza // December 5, 2008 at 8:05 am

      I have a problem with the term “noise”. Noise is a statistical or mathematical construct.

      Do you honestly think anyone cares whether or not you have a problem with accepted terminology used to describe physical phenomena?

      The statistical understanding of noise postdates the recognition of noise in physical systems. It is not merely an abstract concept as you suggest.

    • Barton Paul Levenson // December 5, 2008 at 1:01 pm

      Richard Steckis writes:

      There is no “set in stone” time period that defines climate despite the IPCC’s definition (which is not universally accepted).

      The World Meteorological Organization (not the IPCC) defines climate as mean regional or global weather over a period of 30 years or more. Deal with it. They didn’t pluck that number out of the air. There’s a statistical reason for it.

    • Bob North // December 5, 2008 at 2:43 pm

      BPL and David Benson - Technically, Steckis is right. The thirty-year period is not set in stone. WMO does not use 30 years in its formal definition of climate in its guidance document (I have linked to this before when this issue arose and am not going to this time) but, iirc, refers to long periods of time (decades or more). Admittedly, WMO does refer to the 30 years as the “classic” timeframe for climate on its FAQ page, however.

      Interestingly, in the draft update to their guidance on climatological practices, they note that the reason for the thirty year period was basically because that was what they had when the first guidance came out in the 1930s (?). In other words, it does seem they plucked the number out of the air (admittedly, it does work pretty well statistically). Finally, the draft suggests that different averaging periods may be more appropriate for different climate variables.

    • Ray Ladbury // December 5, 2008 at 3:21 pm

      Bob North, Thirty years is not a magic number. Really, what we’re talking about is how our confidence in the reality of a particular signal evolves over time in the presence of noise. For robust, consistent forcers, 30 years is a good number, in part because it is longer than the duration of most “noise” in the system. I wouldn’t want to draw conclusions about Milankovitch cycles based on 30 years’ worth of climate data, though.
      One way I would justify the 30 year definition is that it represents a good portion of 3 solar cycles (~11 years each), and we all know that the number 3 does indeed have magical properties wrt statistical inference of variability.

    • Bob North // December 5, 2008 at 6:05 pm

      Ray - I agree that 30 years isn’t a magic number and really don’t have any issue with the use of 30 years as a standard averaging time for climatic variables. It just really bugs me when people keep on repeating that the 30 years (or longer) is part of the definition of climate, when it isn’t.

      BTW, I am not sure that the 30 years was intended as a time period for recognizing an underlying signal but more just for averaging purposes. Trends may become apparent and differentiable from “noise” on either a much shorter or much longer timeframe.

      [Response: It depends on the signal-to-noise ratio, and while the noise level appears to be stable, the signal level isn't. At the present rate of warming, 15 years or more is necessary to detect a trend reliably, and 30 years is a good choice for characterizing the trend with reasonable precision.]
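
      (A toy Monte Carlo illustration of that point, in Python; the trend and noise values below are assumptions for the sketch, not measured quantities:)

          import numpy as np

          rng = np.random.default_rng(2)
          trend = 0.018  # assumed warming rate, deg C per year (illustrative)
          sigma = 0.10   # assumed year-to-year noise, deg C (illustrative white noise)

          for years in (10, 15, 30):
              detections = 0
              for _ in range(1000):
                  t = np.arange(years)
                  y = trend * t + rng.normal(scale=sigma, size=years)
                  coeffs, cov = np.polyfit(t, y, 1, cov=True)
                  # Crude detection criterion: slope exceeds 2 standard errors.
                  if coeffs[0] / np.sqrt(cov[0, 0]) > 2:
                      detections += 1
              print(f"{years:2d} years: trend detected in {detections / 10:.0f}% of trials")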

    • David B. Benson // December 5, 2008 at 11:04 pm

      “Averaged separately for both hemispheres, 2006 surface temperatures for the northern hemisphere (0.58°C above 30-year mean of 14.6°C/58.28°F) are likely to be the fourth warmest and for the southern hemisphere (0.26°C above 30-year mean of 13.4°C/56.12°F), the seventh warmest in the instrumental record from 1861 to the present.”

      from

      http://www.wmo.ch/web/Press/PR_768_English.doc

      “The classical period is 30 years, as defined by the World Meteorological Organization (WMO). ”

      from

      http://en.wikipedia.org/wiki/Climate

      quoting from IPCC. The article also states “Climate (from Ancient Greek klima, meaning inclination) is commonly defined as the weather averaged over a long period of time.[2] The standard averaging period is 30 years,[3] but other periods may be used depending on the purpose.”

      Reference [3] is

      http://www.metoffice.gov.uk/climate/uk/averages/

      which states “The World Meteorological Organization (WMO) requires the calculation of averages for consecutive periods of 30 years, with the latest covering the 1961-1990 period. However, many WMO members, including the UK, update their averages at the completion of each decade. Thirty years was chosen as a period long enough to eliminate year-to-year variations.”

      So I fear Bob North is mistaken.

    • Gareth // December 6, 2008 at 12:00 am

      The standard definition of climate is fine as far as it goes, but during periods of rapid change it might not be entirely helpful. I did read (last year? - lost the refs) that someone at the Hadley Centre was working on combining the last 15 years data with model runs for the next 15 years to give a better picture of the statistical likelihood of a given weather phenomenon today.

      If your 30-year baseline is 1970-1999, and average temperatures in a given climate zone are increasing at (say) 0.2C per decade, then the probability distribution of warm and cold days will be shifting continuously. That might have some significance, for instance, for issuing forecasts of damaging early or late frosts.

    • Steven Earl Salmony // December 8, 2008 at 3:41 pm

      WHAT IS GALILEO DOING TONIGHT?

      I find it irresistible not to at least take a moment to wonder aloud about what Galileo is doing tonight. My hope would be that the great man is resting in peace and that his head is not spinning in his grave. How, now, can Galileo possibly find peace when so many top-rank scientists refuse to speak out clearly, loudly and often regarding whatsoever they believe to be true about the distinctly human-induced, global predicament presented to the family of humanity in our time by certain unbridled “overgrowth” activities of the human species from which global challenges visibly issue now and loom ominously on the far horizon?

      Where are the thousands of scientists who have a responsibility to stand up with those who developed virtual mountains of good scientific research regarding overconsumption, overproduction and overpopulation activities of the human species that are now overspreading and threatening to engulf the Earth?

      Perhaps there is something in the great and everlasting work of many silent scientists that will give Galileo a moment of peace in our time.

      What would the world we inhabit look like if scientists like Galileo adopted a code of silence, speaking only about scientific evidence which was politically convenient, economically expedient, religiously condoned and socially correct?

      Steven Earl Salmony

      AWAREness Campaign on The Human Population,

      established 2001

      http://sustainabilityscience.org/content.html?contentid=1176

    • Gavin's Pussycat // December 8, 2008 at 8:19 pm

      Tamino:

      My first instinct is that by using an “effective number” of degrees of freedom, one could adequately compensate for autocorrelation when applying AIC/BIC.

      OK thanks Tamino. Yes. I suppose you mean the (1 - rho1) / (1 + rho1) factor.

      Now that I have your ear ;-) the problem I have is that I have residuals (from a three parameter fit), and am supposed to show that the third parameter was a good idea… “residuals” are not “errors”. And I have only eight of them… any good ideas? Pearson’s r is 99%, up from 95% when adding the third param. rho1 from the residuals turns out negative. Ouch.

      [Response: Do I read you right, that you only have a sample size of N=8? OUCH! Then fitting a 3-parameter model, you have only 5 degrees of freedom left. DOUBLE OUCH!!

      The N(eff) will of course be less than this, but it won't just be a factor (1-rho1)/(1+rho1) unless the residuals follow an AR(1) process. To be sure, you'd need to study the autocorrelations to larger lags, to test whether or not they behave as rho(n) = [rho(1)]^n like an AR(1) process should. But with only N=8 data points, you really can’t hope to answer that question because the probable errors of the autocorrelation estimates are too large.

      It’s likely that although the *estimated* rho(1) is negative, the actual rho(1) isn’t. Are you using the Yule-Walker autocorrelation estimate (I think that’s what you get using R)? A least-squares estimate? Bear in mind that all estimates of autocorrelation are biased, and although the bias vanishes as N goes to infinity, for N=8 it can be quite large. Also, most estimates are biased *low*, i.e., the expected estimate is less than the actual value.

      My advice, 1: since the estimated rho(1) of the residuals is <0, assume it’s approximately =0 and the residuals are white noise. Then you can apply the AIC (or BIC) without modification. 2: Accept the fact that with such a small sample size, it’s next to impossible to have much confidence in the result. If you’re publishing the results, include both model results and all applicable caveats, and let the reader decide where to place confidence.]
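
      (For concreteness, a small Python sketch of the two quantities under discussion, the lag-1 autocorrelation estimate and the AR(1) effective sample size; with N=8 the estimate wanders wildly from draw to draw, which is the point:)

          import numpy as np

          def rho1(v):
              """Lag-1 autocorrelation estimate (biased low for small samples)."""
              v = v - v.mean()
              return np.sum(v[:-1] * v[1:]) / np.sum(v * v)

          def n_eff(n, r):
              """Effective sample size under an AR(1) assumption."""
              return n * (1 - r) / (1 + r)

          rng = np.random.default_rng(3)
          for trial in range(5):
              x = rng.normal(size=8)  # white noise: the true rho(1) is 0
              r = rho1(x)
              # Following the advice above, a negative estimate is treated as zero.
              print(f"estimated rho(1) = {r:+.2f}, N_eff = {n_eff(8, max(r, 0.0)):.1f}")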

    • Gavin's Pussycat // December 8, 2008 at 9:11 pm

      OK Tamino, thanks. I was a bit afraid of that.

      The eight sample values are the result of smoothing. I could smooth less, getting 24 sample values, but they are much more noisy — probably still good enough though. Then, the rho1 of the residuals — computed as ave(v(i)*v(i+1))/ave(v(i)^2), homebrewed — becomes 0.26, giving me N_eff = 14. That’s still assuming AR(1).

      [Response: As a general (but not ironclad) rule, I only use smoothing to gain insights about what *might* be going on. For estimating a trend, or testing for the significance of a pattern, I stick to the unsmoothed data. After all, smoothing reduces noise but it can also remove part of the signal, and of course it introduces artificial autocorrelation (making the real autocorrelation that much harder to characterize).

      Even 24 is a small sample size. Good luck!]

    • traktor7 // December 8, 2008 at 10:02 pm

      This winter in Russia (Moscow) is warm (+5°C) and rainy so far; instead of snow we have just constant rain :(
      Btw, I heard on TV that the point of no return in global warming has passed; is that true?

    • David B. Benson // December 8, 2008 at 11:19 pm

      traktor7 // December 8, 2008 at 10:02 pm — No return for a very, very long time. See David Archer’s “The Long Thaw”.

      Unless, of course, we find the resolve to remove unwanted carbon dioxide faster than we generate it.

    • JCH // December 9, 2008 at 12:54 am

      DBB,

      I’m dubious about technologies that lack this or that - like hydrogen cars and 300-mile batteries.

      Still, your persistence has me reading all this useless information about all these darn rock formations! And this waste of my time is your fault.

    • Ray Ladbury // December 9, 2008 at 2:03 am

      GP, FWIW, I would agree with Tamino: you would probably be better off using the raw data even if it is noisier, but again, isn’t this something AIC or BIC/SIC would tell you? Also, with such small samples, wouldn’t you want to use corrected forms of the above ICs?

    • David B. Benson // December 9, 2008 at 2:05 am

      JCH // December 9, 2008 at 12:54 am — I’m a long-time (almost 50 years now) amateur geologist.

      And the faults are in the earth’s crust, not in me.

      :-)

    • Steven Earl Salmony // December 9, 2008 at 7:11 pm

      On the need for scientific education regarding the human overpopulation of Earth in these early years of Century XXI………..

      [edit]

      [Response: your comment is, essentially, spam. This blog is about climate science and mathematics, and while I sympathize (strongly) with the cause of sustainability, that doesn't mean that the blog is here for sustainability spam any more than it's here for fossil-fuel industry spam.]

    Leave a Comment