Open Mind

Key Messages

June 22, 2009 · 119 Comments

The synthesis report from the Copenhagen conference on climate change gives a dire warning of the consequences of inaction on global warming. It emphasizes six key messages, each with its own chapter:


KEY MESSAGE 1: CLIMATIC TRENDS
Recent observations show that greenhouse gas emissions and many aspects of the climate are changing near the upper boundary of the IPCC range of projections. Many key climate indicators are already moving beyond the patterns of natural variability within which contemporary society and economy have developed and thrived. These indicators include global mean surface temperature, sea level rise, global ocean temperature, Arctic sea ice extent, ocean acidification, and extreme climatic events. With unabated emissions, many trends in climate will likely accelerate, leading to an increasing risk of abrupt or irreversible climatic shifts.

KEY MESSAGE 2: SOCIAL AND ENVIRONMENTAL DISRUPTION
The research community provides much information to support discussions on “dangerous climate change”. Recent observations show that societies and ecosystems are highly vulnerable to even modest levels of climate change, with poor nations and communities, ecosystem services and biodiversity particularly at risk. Temperature rises above 2°C will be difficult for contemporary societies to cope with, and are likely to cause major societal and environmental disruptions through the rest of the century and beyond.

KEY MESSAGE 3: LONG-TERM STRATEGY: GLOBAL TARGETS AND TIMETABLES
Rapid, sustained, and effective mitigation based on coordinated global and regional action is required to avoid “dangerous climate change” regardless of how it is defined. Weaker targets for 2020 increase the risk of serious impacts, including the crossing of tipping points, and make the task of meeting 2050 targets more difficult and costly. Setting a credible long-term price for carbon and the adoption of policies that promote energy efficiency and low-carbon technologies are central to effective mitigation.

KEY MESSAGE 4: EQUITY DIMENSIONS
Climate change is having, and will have, strongly differential effects on people within and between countries and regions, on this generation and future generations, and on human societies and the natural world. An effective, well-funded adaptation safety net is required for those people least capable of coping with climate change impacts, and equitable mitigation strategies are needed to protect the poor and most vulnerable. Tackling climate change should be seen as integral to the broader goals of enhancing socioeconomic development and equity throughout the world.

KEY MESSAGE 5: INACTION IS INEXCUSABLE
Society already has many tools and approaches – economic, technological, behavioural, and managerial – to deal effectively with the climate change challenge. If these tools are not vigorously and widely implemented, adaptation to the unavoidable climate change and the societal transformation required to decarbonise economies will not be achieved. A wide range of benefits will flow from a concerted effort to achieve effective and rapid adaptation and mitigation. These include job growth in the sustainable energy sector; reductions in the health, social, economic and environmental costs of climate change; and the repair of ecosystems and revitalisation of ecosystem services.

KEY MESSAGE 6: MEETING THE CHALLENGE
If the societal transformation required to meet the climate change challenge is to be achieved, a number of significant constraints must be overcome and critical opportunities seized. These include reducing inertia in social and economic systems; building on a growing public desire for governments to act on climate change; reducing activities that increase greenhouse gas emissions and reduce resilience (e.g., subsidies); and enabling the shifts from ineffective governance and weak institutions to innovative leadership in government, the private sector and civil society. Linking climate change with broader sustainable consumption and production concerns, human rights issues and democratic values is crucial for shifting societies towards more sustainable development pathways.

Categories: Global Warming

119 responses so far

  • David B. Benson // June 22, 2009 at 10:33 pm | Reply

    So far, US Congress isn’t getting it very well.

    Anyway, nice summary.

  • Deep Climate // June 22, 2009 at 11:00 pm | Reply

    If anyone has trouble with the above link, try this one (works better for me):

    http://www.pik-potsdam.de/news/press-releases/files/synthesis-report-web.pdf/view

  • Dano // June 22, 2009 at 11:20 pm | Reply

    I’m a glass half-full kinda guy (albeit full of groundwater tainted by Big Ag), but this list of what we need to do [reformatted and emphases added]:

    o reducing inertia in social and economic systems;

    o building on a growing public desire for governments to act on climate change;

    o reducing activities that increase greenhouse gas emissions and reduce resilience (e.g., subsidies);

    o and enabling the shifts from ineffective governance and weak institutions to innovative leadership in government, the private sector and civil society.

    Linking climate change with broader sustainable consumption and production concerns, human rights issues and democratic values is crucial for shifting societies towards more sustainable development pathways.

    Wowie.

    That is a very clear course to change course for the ships of state.

    Too bad we didn’t begin 30 years ago. We don’t move that fast.

    Sigh.

    Best,

    D

  • Deep Climate // June 22, 2009 at 11:53 pm | Reply

    I have something of a quibble with key message # 1.

    The temperature figure used visually suggests that temperature observations are running at or above the mid-point of the IPCC range of projections. But that depends on which set of projections one examines (and how they are baselined).

    The supporting figure is updated from the Rahmstorf et al 2007 comparison of observations to IPCC TAR (Third Assessment Report) projections (this was discussed by Tamino a year ago, IIRC).

    IPCC AR4 (Fourth Assessment Report) projections are somewhat higher than TAR, at least when using the AR4 baseline of 1980-99 average “hindcast”.

    Here is figure 3, as updated by Rahmstorf, extracted from “Key message # 1”:
    http://deepclimate.files.wordpress.com/2009/06/rahmstorf-2007-update.pdf

    Here is my attempt to compare AR4 projections with observations, both baselined to 1980-99.
    http://deepclimate.files.wordpress.com/2009/05/ar4-smooth.gif

    The full post:
    http://deepclimate.org/2009/06/03/ipcc-ar4-projections-and-observations-part-1/

    Don’t get me wrong – I support all six key messages. But I would have preferred some sort of reference to AR4 as well as TAR projections.

    Having said all that, it is certainly true, as the report states, that temperature trends do clearly lie within IPCC projections, whether TAR or AR4: “Nevertheless, the long-term trend of increasing temperature is clear and the trajectory of atmospheric temperature at the Earth’s surface is proceeding within the range of IPCC projections.”

    [Response: According to Rahmstorf at RealClimate,

    ... the report cites a published study here: our 2007 Science paper, where we were looking at how the TAR projections (starting in 1990) compared to observations. For a meaningful comparison you need enough data, and we thought 16 years was enough. The AR4 projections start in 2000 (see Fig. SPM5 of the AR4). Around the year 2016 it may be worth redoing such a comparison, to see how the more recent projections have held up. -stefan

    ]

  • Terry // June 23, 2009 at 12:39 am | Reply

    I believe we need to change our habits to stop Global Warming. My father is a stout Republican who believes this is nonsense. His main argument is: if he fills a glass with ice and fills the rest with water, why doesn’t it overflow the glass as the ice melts?

    I am not sure how to address this counter-argument. Could you help me?

    [Response: His argument is irrelevant. We all know that when sea ice melts it doesn't have much effect on sea level -- Al Gore even highlighted that fact in his movie. It's the land ice that's the problem; this includes alpine glaciers all over the world, and the Greenland and Antarctic ice sheets. And to top it off, sea level rises because of thermal expansion of sea water.

    Tell him to fill a bowl with ice and watch the water level.]
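
    A quick back-of-the-envelope check of the floating-ice point, using Archimedes’ principle and textbook densities (a toy calculation for fresh water; sea water and land ice change the story):

        # A floating ice cube displaces its own weight of water, which is exactly
        # the volume of water it turns into when it melts -- so the level is unchanged.
        RHO_WATER = 1000.0  # kg/m^3 (fresh water, for simplicity)
        RHO_ICE = 917.0     # kg/m^3 (not needed for the level argument, shown for context)

        mass_ice = 1.0  # kg of floating ice (arbitrary)

        displaced_volume = mass_ice / RHO_WATER   # water pushed aside while floating (m^3)
        meltwater_volume = mass_ice / RHO_WATER   # water produced once melted (m^3)

        print(displaced_volume, meltwater_volume)  # identical, hence no change in level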

  • Deep Climate // June 23, 2009 at 1:51 am | Reply

    Stefan Rahmstorf’s explanation of why AR4 temperature projection evaluation is premature is now at RC:

    “[T]he report cites a published study here: our 2007 Science paper, where we were looking at how the TAR projections (starting in 1990) compared to observations. For a meaningful comparison you need enough data, and we thought 16 years was enough. The AR4 projections start in 2000 (see Fig. SPM5 of the AR4). Around the year 2016 it may be worth redoing such a comparison, to see how the more recent projections have held up.”

    http://www.realclimate.org/index.php/archives/2009/06/a-warning-from-copenhagen/#comment-127592

    The original Rahmstorf et al Science paper is here:
    http://www.pik-potsdam.de/~stefan/Publications/Nature/rahmstorf_etal_science_2007.pdf

  • TCO // June 23, 2009 at 3:08 am | Reply

    I have even more skepticism of the AGW predictions of social and economic effects than I do of the scientific stuff. And I think the science guys are themselves not astute about economics and social science. And yes, I know they have some specialists in there. But the whole thing is all commingled. And someone like Tammy who is skilled in statistics is not skilled in econ, so how can he even judge this stuff? My vote: stick to science and stop mixing physical science/social effects/policy. Feynman was very astute about the different nature of technical issues, management issues and policy issues, for instance with the Shuttle Crash investigation, and rightfully fought to keep his investigation from making last-minute policy statements.

  • MikeN // June 23, 2009 at 6:27 am | Reply

    Just read your link to the Rahmstorf paper. How do you compute an 11 year non-linear trend for such a short data series?

  • B Buckner // June 23, 2009 at 2:20 pm | Reply

    The synthesis report references Rahmstorf and states “…comparing the IPCC projections of 1990 with observations show that some climate change indicators are changing near the upper end of the range indicated by the projections or, in the case of sea level rise (Figure 1), at even greater rates than indicated by IPCC projections.”

    But the figure shows the projections and observations diverged from the very beginning in 1990. Does this not indicate that the model is wrong, rather than that climate trends are worse than projected (worse than we thought a few years ago)? How was the divergence between observations and projections dealt with in the TAR, which was published in 2001, when it was obvious that the projections were wrong?

  • Dano // June 23, 2009 at 3:07 pm | Reply

    “Around the year 2016 it may be worth redoing such a comparison, to see how the more recent projections have held up.”

    This is a key component of adaptive management and scenario analysis (from which projections are derived). So the quoted point is obvious.

    Sadly, the IPCC’s reliance on projections has met with a deaf public, as there have been scant few efforts made to help societies understand scenarios and projections. This void has allowed some to exploit this lack of knowledge and context to try and demonize the IPCC and the process.

    Best,

    D

  • Timothy Chase // June 23, 2009 at 3:38 pm | Reply

    Deep Climate wrote:

    Stefan Rahmstorf’s explanation of why AR4 temperature projection evaluation is premature is now at RC:

    “[T]he report cites a published study here: our 2007 Science paper, where we were looking at how the TAR projections (starting in 1990)…”

    Thank you for that little bit of background. I was trying to follow the conversation, but it was a little unclear. I figured that TAR was much earlier than AR4, but didn’t really know by how much. (At this point I take it to be by about 10 years — as the simulations/papers AR4 is based off of date to 2000.)

    Hitting Google on TAR I kept getting RankExploits — which (after search RealClimate) I gather is about as reliable as Watts. Anyway, thanks again.

  • Timothy Chase // June 23, 2009 at 3:42 pm | Reply

    PS

    That should have been “after searching RealClimate…” A little distracted this morning. Putting together a database (nothing serious) and taking a sick cat to the vet.

  • lucia // June 23, 2009 at 4:25 pm | Reply

    Timothy,
    I figured that TAR was much earlier than AR4, but didn’t really know by how much. (At this point I take it to be by about 10 years — as the simulations/papers AR4 is based off of date to 2000.)

    Your estimate of the publication date of the TAR is a bit off. The full title of the TAR, published in 2001, is “Climate Change 2001: The Third Assessment Report”. It is available for download here.

  • Deep Climate // June 23, 2009 at 7:17 pm | Reply

    Timothy,
    TAR projections are based on climate model simulations that start in 1990, while AR4 projections are for 2000 on. So, yes, the projections start 10 years earlier in TAR.

    The actual publication dates are 2001 (TAR) and 2007 (AR4).

  • MikeN // June 23, 2009 at 7:29 pm | Reply

    Timothy, I just got this from rankexploits, so perhaps you’ll ignore it, but the old Rahmstorf paper and the one in the synthesis report do not match. If you calculate an 11-year smoothing with an update for 2007 and 2008, there should be a flattening at the end.

    [Response: "11-year smoothing" doesn't necessarily mean 11-year moving averages. Better methods are less sensitive to noise fluctuations.]

  • Timothy Chase // June 23, 2009 at 7:30 pm | Reply

    lucia wrote:

    Your estimate of the publication date of the TAR is a bit off. The full title of the TAR, published in 2001, is “Climate Change 2001: The Third Assessment Report”. It is available for download here.

    Yes, that is when it was published (2001), but the projections start in 1990, whereas those for AR4 (2007) start in 2000. Ten years’ difference.

    Please see:

    Stefan Rahmstorf’s explanation [inline here] of why AR4 temperature projection evaluation is premature is now at RC:

    “[T]he report cites a published study here: our 2007 Science paper, where we were looking at how the TAR projections (starting in 1990) compared to observations. For a meaningful comparison you need enough data, and we thought 16 years was enough. The AR4 projections start in 2000 (see Fig. SPM5 of the AR4). Around the year 2016 it may be worth redoing such a comparison, to see how the more recent projections have held up.”

    -Deep Climate

    *
    You should keep in mind the fact that a few years isn’t sufficient to establish a trend in global average temperature.

    Climate oscillations introduce a great deal of noise over the short-run. But it is just noise. They can’t get rid of the heat, they just move it around, from the surface to the ocean depths and back again. To get rid of the heat as far as the climate system is concerned, it has to be radiated into space. Conduction and convection just won’t cut it.

    But if you increase the level of greenhouse gases, you make the atmosphere more opaque to thermal radiation.

    You can see it here:

    Measuring Carbon Dioxide from Space with the Atmospheric Infrared Sounder
    http://airs.jpl.nasa.gov/story_archive/Measuring_CO2_from_Space/

    … and the following might look oddly familiar:

    CO2 bands in Earth’s atmosphere
    http://sci.esa.int/science-e/www/object/index.cfm?fobjectid=37190
    *
    If energy is entering the climate system at a constant rate (or actually slightly declining since 1961) but is leaving the climate system at a reduced rate, by the conservation of energy we know that the amount of energy in the system has to increase. And it will continue to increase until the temperature rises enough that the system radiates energy (according to the Stefan-Boltzmann equation) at a high enough rate (proportional to T⁴) that it balances the rate at which energy is entering the system. (That’s called “radiation balance theory.”)

    But the absolute humidity of the atmosphere roughly doubles for every ten degrees, and water vapor is itself a greenhouse gas. So when everything is said and done, we are looking at a climate sensitivity of about 3°K. (I could point you to some papers if it would help.)
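
    A rough numerical sketch of the radiation-balance argument, using standard textbook values for the solar constant, planetary albedo and the Stefan-Boltzmann constant (illustrative numbers, not taken from the report):

        # Zero-dimensional balance: absorbed solar flux = emitted thermal flux (sigma * T^4).
        SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
        S0 = 1361.0       # solar constant, W m^-2
        ALBEDO = 0.3      # planetary albedo

        absorbed = S0 * (1 - ALBEDO) / 4.0      # ~238 W m^-2, averaged over the sphere
        T_eff = (absorbed / SIGMA) ** 0.25      # ~255 K effective radiating temperature
        print(round(T_eff, 1))

        # If greenhouse gases make the atmosphere more opaque, the surface radiates to
        # space less efficiently (crudely, an effective emissivity eps < 1), so the
        # surface must warm until eps * sigma * T^4 again balances the absorbed flux.
        eps = 0.61                              # crude value that yields roughly 288 K
        T_surface = (absorbed / (eps * SIGMA)) ** 0.25
        print(round(T_surface, 1))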

  • Timothy Chase // June 23, 2009 at 7:39 pm | Reply

    PS

    Given the noise that exists in the climate system, anyone who tries to establish the trend in global average temperature with much less than fifteen years data is — in my view — either particularly ignorant of the science, or what is more likely, some sort of flim-flam artist, a bit like the psychic surgeons that James Randi exposes who “remove” tumors without opening up the “patient.” (And in the case of psychic surgery, the patient often dies only a few years later — after the cancer metastasizes.)

    [The phrase "tries to establish the trend in global average temperature with much less than fifteen years data" describes the vast majority of those who deny the reality and severity of global warming. Imagine that.]

  • MikeN // June 23, 2009 at 9:05 pm | Reply

    What is the source for the chart of emissions scenarios, which has A1B growing at 2.42% per year and A1FI growing at 2.71%? Based on the tables in the TAR, I get growth rates of 3.16 and 2.02 for those scenarios in terms of total CO2. How is this growth rate supposed to be calculated?
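
    For what it’s worth, an average annual growth rate is normally computed as a compound rate between two endpoint values; a sketch with made-up numbers, since the scenario tables aren’t reproduced here:

        def avg_annual_growth(start_value, end_value, n_years):
            """Compound average growth rate, in percent per year."""
            return ((end_value / start_value) ** (1.0 / n_years) - 1.0) * 100.0

        # Hypothetical endpoints for illustration only (not the SRES table values):
        # emissions rising from 8 to 29 GtC over 50 years.
        print(round(avg_annual_growth(8.0, 29.0, 50), 2))  # about 2.6 % per year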

  • lucia // June 23, 2009 at 9:08 pm | Reply

    Tamino–

    [Response: “11-year smoothing” doesn’t necessarily mean 11-year moving averages. Better methods are less sensitive to noise fluctuations.]

    I agree; that’s why I used a simple example of moving averages, and my post says “Different smoothing methods weigh the data inside the smoothing region differently.”

    Timothy,
    I agree the graphs in the TAR show simulation results beginning in 1990. However, the AOGCM models used and the “simple models” used to create the graph in the TAR had not even been developed in 1990. The simulations run by relatively new models were ‘projecting’ the past which is generally easier than ‘projecting’ into the future.

    PS. I haven’t been trying to “establish” any trends in any blog posts.

  • george // June 23, 2009 at 10:47 pm | Reply

    Lucia says

    “the AOGCM models used and the “simple models” used to create the graph in the TAR had not even been developed in 1990. The simulations run by relatively new models were ‘projecting’ the past which is generally easier than ‘projecting’ into the future.”

    It may be true generally that ‘projecting’ the past is easier than ‘projecting’ into the future, but is it true in the specific case of the TAR projections?

    Rahmstorf et al indicate in their paper that the “model projections [from 2001 TAR] are essentially independent from the observed climate data since 1990”.

    The IPCC scenarios and projections start in the year 1990, which is also the base year of the Kyoto protocol, in which almost all industrialized nations accepted a binding commitment to reduce their greenhouse gas emissions. Although published in 2001, these model projections are essentially independent from the observed climate data since 1990: Climate models are physics-based models developed over many years that are not “tuned” to reproduce the most recent temperatures, and global sea-level data were not yet available at the time.

  • luminous beauty // June 23, 2009 at 11:25 pm | Reply

    The simulations run by relatively new models were ‘projecting’ the past which is generally easier than ‘projecting’ into the future.

    ‘What’?

    PS. I haven’t been trying to “establish” any trends in any blog posts.

    Or anything at all “meaningful”, either.

  • Timothy Chase // June 23, 2009 at 11:34 pm | Reply

    lucia wrote:

    I agree the graphs in the TAR show simulation results beginning in 1990. However, the AOGCM models used and the “simple models” used to create the graph in the TAR had not even been developed in 1990. The simulations run by relatively new models were ‘projecting’ the past which is generally easier than ‘projecting’ into the future.

    Lucia, your criticism might carry some weight — if climate models were simply instances of curve fitting.

    However, climate models don’t just have to match up with a given set of data — such as temperature for a “hindcast” over a given set of years. Climate models aren’t based upon curve-fitting. They are based upon our empirical, scientific understanding of the natural world.

    This is really part of the beauty of climate models – and at an abstract level, one of the most fundamental principles of climate modeling itself. It argues from general principles, typically fundamental principles of physics – although not strictly – as when the responses of organisms (e.g., certain representative species of vegetation) are incorporated into the models.

    It doesn’t allow the arbitrary element of curve-fitting. Such an approach would have little grounds for regarding its conclusions as applying to anything outside of what the curve was based on – and given the complexity of what we are dealing with, it would quickly evolve into a Rube Goldberg device which no one could understand the basis for – but which we would adopt merely like superstitious rats which are randomly rewarded – dancing about in the belief that some increasingly complex set of motions determines whether or not they get the reward.

    Their foundation consists of the principles of physics: radiation transfer theory, thermodynamics, fluid dynamics and so on. They don’t tinker with the model each time to make it fit the phenomena they are trying to model. With that sort of curve-fitting, tightening the fit in one area would loosen the fit in others. They might improve the physics by including a more detailed analysis of a given phenomenon, but because it is actual physics, tightening the fit in one area almost inevitably means tightening the fit in numerous others.

    Since climate models are based upon our scientific understanding of the world, we have every reason to believe that the more detailed the analysis, the more factors we take into account, the better the models will do at forecasting the behavior of climate systems. A climate model is not some sort of black box. If we see that the predictions are not matching up, we can investigate the phenomena more closely, whether it is in terms of fluid dynamics, spectral analysis, chemistry or what have you and see what we are leaving out and properly account for it.

    This would seem to be the only rational approach that climatologists can take, and if this general approach did not work, this would seem to imply that natural science is a failed project, that its success up until this point has simply been some sort of illusion, and that the world simply doesn’t make sense.

    To the extent that we can incorporate the relevant physics, we are able to base our projections upon a far larger body of knowledge than just a few points on a graph – things as tried and tested as the laws of thermodynamics, the laws governing fluid motion, chemistry, the study of radiation in terms of blackbody radiation, absorption and reemission – and even quantum mechanics.
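
    A toy contrast between the two approaches, with made-up parameter values rather than anything from an actual GCM: a zero-dimensional energy-balance model steps forward from physical quantities that have independent estimates, while a statistical fit to the same numbers carries no physics and says nothing reliable outside the fitted interval.

        import numpy as np

        # Physics-style toy model: C * dT/dt = F(t) - LAMBDA * T
        C = 8.4e8              # heat capacity, J m^-2 K^-1 (roughly a 200 m ocean mixed layer)
        LAMBDA = 1.3           # feedback parameter, W m^-2 K^-1
        DT = 365.25 * 86400.0  # one-year time step, s

        years = np.arange(1900, 2010)
        forcing = 0.03 * (years - 1900)   # idealized ramp in radiative forcing, W m^-2

        T = np.zeros(years.size)
        for i in range(1, years.size):
            T[i] = T[i - 1] + DT * (forcing[i - 1] - LAMBDA * T[i - 1]) / C

        # Statistical curve fit to the same numbers: it reproduces them, but the
        # coefficients mean nothing physically and extrapolation is unconstrained.
        poly = np.polyfit(years - 1900, T, deg=3)

        print(round(T[-1], 2), np.round(poly, 6))
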
    *
    lucia continued:

    PS. I haven’t been trying to “establish” any trends in any blog posts.

    I didn’t say that you had. However, given your denial I decided to check.

    I found the following essay:

    IPCC Projections Overpredict Recent Warming.
    10 March, 2008 (07:38) | global climate change Written by: lucia
    http://rankexploits.com/musings/2008/ipcc-projections-overpredict-recent-warming/

    In that essay you give a chart titled “Global Mean Temperature,” then state:

    I estimate the empirical trend using data, show with a solid purple line. Not[e] it is distinctly negative with a slope of -1.1 C/century.

    In the chart itself it says that it is based upon a “Cochrane Orcutt fit to data: 2001-now & uncertainty bands.” 2001 to 2008 is seven years, less than half of fifteen. Would you claim that you weren’t trying to establish what the trend actually and empirically is but merely arguing against the trend that the IPCC claimed to exist on the basis of a larger span of years?

    If the latter, then I must ask: how much validity is there to such an argument against the IPCC’s conclusion if it can in no way support a conclusion about the actual, empirical trend itself? And if it can in fact be used to argue against the IPCC’s conclusion then surely it can support a conclusion about the trend in global average temperature. And if you maintain that it nevertheless cannot, did you make such unsupportable “hairsplitting” clear to those who visit your blog? Or did you simply let them assume what must logically follow from your argument?
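
    For readers unfamiliar with the term: a Cochrane-Orcutt fit is ordinary least squares with a correction for lag-1 autocorrelation in the residuals. A bare-bones, single-pass sketch on synthetic data (not a reconstruction of the chart in question):

        import numpy as np

        np.random.seed(2)

        # Synthetic monthly anomalies over about 7 years: small trend plus AR(1) noise.
        n = 84
        t = np.arange(n) / 12.0
        noise = np.zeros(n)
        for i in range(1, n):
            noise[i] = 0.6 * noise[i - 1] + np.random.normal(0, 0.08)
        y = 0.018 * t + noise

        def ols(x, z):
            A = np.vstack([x, np.ones_like(x)]).T
            return np.linalg.lstsq(A, z, rcond=None)[0]  # (slope, intercept)

        # Step 1: ordinary least squares.
        slope0, icept0 = ols(t, y)
        resid = y - (slope0 * t + icept0)

        # Step 2: estimate the lag-1 autocorrelation of the residuals.
        rho = np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1] ** 2)

        # Step 3: quasi-difference both variables and re-fit (the Cochrane-Orcutt transform).
        y_star = y[1:] - rho * y[:-1]
        t_star = t[1:] - rho * t[:-1]
        slope1, _ = ols(t_star, y_star)

        print(round(slope0, 4), round(rho, 2), round(slope1, 4))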

  • Zeke Hausfather // June 24, 2009 at 12:38 am | Reply

    Let’s be honest here; they aren’t completely independent. After all, modelers use the relative skill in hindcasting to assess model validity, so there is at least some selective pressure to exclude or tweak models that perform poorly in hindcasting the 1990-2000 period.

    Granted, there aren’t too many model parameters that can be tweaked, so the general ability of models to hindcast is in itself a sign that they are getting a fair bit right.

  • MikeN // June 24, 2009 at 1:36 am | Reply

    >[Response: "11-year smoothing" doesn't necessarily mean 11-year moving averages. Better methods are less sensitive to noise fluctuations.]

    There was a flattening with just the 2007 data. You posted it in March of last year.

    http://tamino.wordpress.com/2008/03/26/recent-climate-observations-compared-to-ipcc-projections/

    It looks like the graph is different in the report.

    [Response: The word "flattening" implies some trend -- not just a noise fluctuation. On that basis, there's no flattening.]

  • Ray Ladbury // June 24, 2009 at 1:41 am | Reply

    TCO says, “I have even more skepticism of the AGW predictions of social and economic effects than I do of the scientific stuff.”

    Maybe you can explain why you guys always assume that any uncertainty means you can stop worrying. The uncertainties actually push the risk higher rather than lower.

  • MikeN // June 24, 2009 at 4:46 am | Reply

    [Response: The word "flattening" implies some trend -- not just a noise fluctuation. On that basis, there's no flattening.]

    OK, then what should that be called?

    My point is that this ‘not a flattening’ isn’t in the version that is in the Copenhagen report, so the smoothing is confusing. The noise is on the low end, so I would expect the version with 2008 data included to have more ‘not a flattening’ than in the chart you posted last year.

  • MikeN // June 24, 2009 at 5:23 am | Reply

    >— if climate models were simply instances of curve fitting…

    That all sounds good, and the models are wonderful creations, but in the end they are curve fitting. Reading through the documentation for the MIT IGSM that you highlighted in ‘It’s Going to Get Worse,’ they built a detailed economic model to couple with the other sub-models. However, they indeed did engage in curve-fitting to observed temperature measurements to get values for key parameters of the model. From Sokolov & Stone:
    http://globalchange.mit.edu/files/document/MITJPSPGC_Rpt124.pdf
    Specifically, Forest et al. (2002) used the climate component of the IGSM (Sokolov & Stone, 1998; Prinn et al., 1999) to produce probability distributions for the climate sensitivity, the rate of heat uptake by the deep oceans, and the net forcing due to aerosols by comparing observed temperature changes over the 20th century with results of the simulations in which these model parameters were varied.

    I’ve worked with some of these climate models, and indeed changes in these climate variables make a huge difference in climate sensitivity estimates. In 1999, Prinn wrote that it has twice as much of an effect as changes in carbon emissions. I haven’t read their latest work in full, but from the descriptions it appears to me they engaged in more curve fitting to estimate climate parameters, as well as increasing estimates for carbon emissions, a change I think is valid.

    [Response: I'm afraid you have crossed the "stupid threshold." Models are NOT curve-fitting, and your abysmally ignorant portrayal of them as such proves that you are nowhere near sufficiently informed to offer any useful opinion. Unfortunately you're an example of "a LITTLE knowledge is a dangerous thing," so it'll probably take decades for you to realize not only how wrong, but how warped your beliefs are.

    As for the claim that you "worked with some of these climate models" perhaps you'll understand if I doubt the "work" you did had any merit, or imparted the understanding necessary to evaluate them.]

  • Deep Climate // June 24, 2009 at 5:36 am | Reply

    MikeN,
    Jean S and Lucia appear to be suggesting that Rahmstorf may have changed his smoothing process to minimize the so-called “flattening” effect of the 2007-8 trough. They speculate that the end-point processing may have changed or that a larger window (more smoothing points) was used than in the original. Hence the provocative title of Lucia’s post “Fishy odors surrounding Figure 3 from ‘The Copenhagen (Synthesis) Report’”.

    However, if one looks carefully at Rahmstorf’s original chart (a higher resolution version taken from a presentation) and Jean S’s supposed replication with the same data to 2006, there are already subtle but important differences.

    http://deepclimate.files.wordpress.com/2009/06/rahmstorf-original-20072.jpg

    http://rankexploits.com/musings/wp-content/uploads/2009/06/2006_m_5_rahmstor.jpg

    Note that the smoothed observations coincide exactly with the HadCRU temp for 2005 in the original chart, but are noticeably below in the replication. Also, Jean S has more divergence between GISS and HadCRU around 1998, but less at the end point. These differences suggest to me that Jean S’s replication may have used different weightings within the smoothing window (and for all I know the entire algorithm might be very different). In particular, Jean S’s algorithm appears to give more weight at the point of estimation and less to the influence of neighbouring points.

    Now look at the updated chart (I’ve isolated the last few years, ending in 2008). Notice that recent years have all been lowered somewhat under the effect of a relatively cool 2008.

    http://deepclimate.files.wordpress.com/2009/06/rahmstorf-2007-update-detail.jpg

    I don’t see any evidence whatsoever that Rahmstorf changed his smoothing procedure between his two charts. Does that answer the question you are really asking?

    There’s a lot of other misinformation in that post. Maybe I’ll comment more when I’m a little calmer.

  • luminous beauty // June 24, 2009 at 12:16 pm | Reply

    OK, then what should that be called?

    Noise fluctuation.

  • george // June 24, 2009 at 1:36 pm | Reply

    Zeke Hausfather said

    Let’s be honest here; they aren’t completely independent. After all, modelers use the relative skill in hindcasting to assess model validity, so there is at least some selective pressure to exclude or tweak models that perform poorly in hindcasting the 1990-2000 period.

    To be perfectly honest, I think hindcasting for these models is usually done over multiple decades, not just one. The graphs shown in Rahmstorf et al actually indicate as much.

    If that were not the case and selective pressure was at work specifically “to exclude or tweak models that perform poorly in hindcasting the 1990-2000 period”, why wouldn’t there actually be a better match between the IPCC projections and the GISS and HADCRU temperature trends for that period?

    As is clear from this graph, the observed (GISS and HADCRU) temperature trends actually follow the “upper bound” of the “projection envelope” for the latter half of that 1990-2000 period (after about 1996 or so).

  • MikeN // June 24, 2009 at 8:18 pm | Reply

    >why wouldn’t there actually be a better match between the IPCC projections and the GISS and HADCRU temperature trends for that period?

    Because as Tamino is saying, this isn’t curve fitting. The models are too complicated to get an exact match.

    >Models are NOT curve-fitting, and your abysmally ignorant portrayal of them as such proves that you are nowhere near sufficiently informed

    Well, perhaps this is a definition issue again; however, the people who created the model said what they did in their paper.

    You’re right that my work on the models doesn’t give me a detailed understanding, as I only changed some inputs to get an output of temperature, but this gave me an idea of how much small changes in input parameters change the output.

    The reason I say this is curve fitting is because the model gets tweaked to adjust to observed phenomena. Values for cloud sensitivity, ocean sensitivity, aerosols, volcanoes, etc. These get changed until you get the results that best match observed data.

  • MikeN // June 24, 2009 at 8:29 pm | Reply

    Deep, you are right that the flattening appears to be smoother in the update than Jean’s replication. The changes start at least as far back as 2003.
    So maybe it is the same flattening method after all.
    Interesting that adding 2007 alone showed more flattening like Jean’s replication, but throwing in 2008 lowered the overall slope.

  • MikeN // June 24, 2009 at 9:10 pm | Reply

    >I’m afraid you have crossed the “stupid threshold.”

    Oh no! Pull me back!
    I’m still trying to figure out why the A1FI number is so high compared to A1B, different from the TAR description of the emissions scenarios.

  • Zeke Hausfather // June 24, 2009 at 9:15 pm | Reply

    george,

    I realize that hindcasting is done over multiple decades, and that there is no reason why 1990 to 2000 is necessarily more important to assessing model hindcasting strength than any prior decade. My only point was that models produced in 2000 are not completely independent of observations from 1990 to 2000, so their relative strength in hindcasting over that period is not as effective an evaluation of model predictions as is comparing projections to observed temperatures after the models were created.

    That said, the latter is rather difficult to do due to the relatively short period of observations post-model creation and the degree of annual variability in the climate.

  • Deep Climate // June 24, 2009 at 10:12 pm | Reply

    MikeN:
    Interesting that adding 2007 alone showed more flattening like Jean’s replication, but throwing in 2008 lowered the overall slope.

    You are assuming that Tamino’s updated graph was created with exactly the same smoothing as Rahmstorf’s graph. I don’t know whether or not that is the case (and neither do you nor anyone else at the Blackboard at this point).

    But, in any event (and more to the point), it appears that you now acknowledge that suggestions that Rahmstorf changed the smoothing parameters are most likely completely unfounded.

  • Ray Ladbury // June 24, 2009 at 11:43 pm | Reply

    There seem to be lots of misapprehensions about “tweaking” of climate models. To call this process “curve fitting” is misleading at best and possibly perverse.

    First, these are not statistical models. There’s no “curve” you are fitting. Rather, you are trying to make the model look as Earthlike as possible while keeping the parameters within their independently determined confidence intervals. Moreover, the form of the model itself is dictated by physics.

    Now, the individual parameters may well be fit to data, but these data are independent of the criteria used for verification.

    Finally, as I said, there’s no curve. There are all the measurable variables–temperature, precipitation, possibly wind and other fields as well.

    Also, here’s a hint: as far as “trends” go–if your conclusions depend on details like the smoothing algorithm, the dataset, weighting and starting and ending years, then you aren’t really talking about trends, are you?

  • David B. Benson // June 24, 2009 at 11:59 pm | Reply

    Well, the model I work on (now and then) is certainly based on physics. That part is not so-called curve fitting; it is what I call parameter estimation. All would be done and finished except that there are two functions with unknown shape and parameters. There are reasons why the shape should be thus-and-so or this-and-that. The multiplicative parameters on each term of these functions are unknown, but a likely range is known.

    I have some data and use parameter estimation methods to (1) find the best-fitting parameter values and (2) determine whether, with the best-fitting parameter values, the function shaped thus-and-so is better, the same, or worse than the function shaped this-and-that.

    I have a hard time imagining that something of the same sort is not occurring for AOGCMs. There are sub-scale parameterizations for a variety of processes such as clouds and aerosols. If I were working on climate models (which I am not) I’d use some of the available data for ‘training’, i.e., parameter estimation, and the remainder for validation studies. Unfortunately for my (occasional) problem, I have nothing left for validation studies. Different problem.
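
    In the same spirit as the procedure described above, a generic sketch of bounded parameter estimation for two candidate function shapes (synthetic data and invented shapes, not the actual model):

        import numpy as np
        from scipy.optimize import curve_fit

        # Two candidate shapes for an unknown term, each with one multiplicative
        # parameter confined to a plausible range.
        def shape_a(x, a):   # "thus-and-so": exponential relaxation
            return a * (1 - np.exp(-x / 5.0))

        def shape_b(x, b):   # "this-and-that": saturating hyperbola
            return b * x / (x + 5.0)

        # Synthetic observations standing in for the real data.
        rng = np.random.default_rng(3)
        x = np.linspace(0, 20, 40)
        obs = shape_a(x, 2.0) + rng.normal(0, 0.1, x.size)

        results = {}
        for name, f in [("thus-and-so", shape_a), ("this-and-that", shape_b)]:
            popt, _ = curve_fit(f, x, obs, p0=[1.0], bounds=(0.5, 5.0))  # likely range
            rss = np.sum((obs - f(x, *popt)) ** 2)
            results[name] = (round(float(popt[0]), 3), round(float(rss), 3))

        print(results)  # best-fit parameter and misfit for each candidate shape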

  • dhogaza // June 25, 2009 at 12:34 am | Reply

    I have a hard time imagining that something of the same sort is not occurring for AOGCMs. There are sub-scale parameterizations for a variety of processes such as clouds and aerosols.

    Yes, but what they’re looking for is reasonable parameter values that match the empirical data available for the process being parameterized.

    They’re not “curve fitting” in the sense of twiddling a bunch of dials until the model spits out a good fit to past climate conditions.

    Hadley has (or had) a nice high-level overview of such things in their documentation for HadCM4 but I didn’t bookmark it, and google’s not being helpful at the moment. I don’t remember how I found it in the first place, unfortunately.

    MikeN, another thing that ought to be obvious … the fact that you can download the code, compile it, and tweak parameters randomly and get bizarre stuff out doesn’t mean that experimenters are also just tweaking parameters randomly getting random garbage out. As Ray says, the parameter values are informed by physics and whatever empirical data is available …

    Also, I’m surprised that no one has called MikeN out on his bait-and-switch:

    [I]n the end they are curve fitting. Reading through the documentation for the MIT IGSM that you highlighted in ‘It’s Going to Get Worse,’ they built a detailed economic model to couple with the other sub-models. However, they indeed did engage in curve-fitting…

    He’s using the fact that the MIT economic model is (apparently, if he’s correct) a statistical model to “prove” that NASA and the Hadley Centre do the same sort of curve fitting as the MIT team did.

    Michael, that’s a bit like saying “since Soyuz doesn’t have wings, the space shuttle doesn’t have wings”.

  • Douglas Watts // June 25, 2009 at 2:49 am | Reply

    “Maybe you can explain why you guys always assume that any uncertainty means you can stop worrying. The uncertainties actually push the risk higher rather than lower.”

    This is what those in the Atlantic salmon conservation biz call the Precautionary Principle. It is the fundamental basis of stock management under the North Atlantic Salmon Conservation Organization (NASCO), which is comprised of all the nations that touch on the coast of the North Atlantic, where Atlantic salmon live.

  • MikeN // June 25, 2009 at 3:03 am | Reply

    Um, no, the existence of the economic model has nothing to do with the curve fitting. The description you give is in line with what I was saying. Curve-fitting may be the wrong term, but it was Tamino who first used that to refer to what Lucia was saying.

    [Response: ???]

  • Timothy Chase // June 25, 2009 at 3:49 am | Reply

    dhogaza wrote:

    Yes, but what they’re looking for is reasonable parameter values that match the empirical data available for the process being parameterized.

    They’re not “curve fitting” in the sense of twiddling a bunch of dials until the model spits out a good fit to past climate conditions.

    In essence, to perform calculations, they have to break space, time, spectra, soil types and the like into grids — to perform discrete calculations, much like breaking the area under a curve into finite boxes — and then apply parameterizations to account for the sub-grid physics — including turbulence. Such an approach is more or less mandated by the fact that computer resources are, like everything else, finite. And as I pointed out above, they employ certain representative species — now that they are trying to incorporate the biosphere.

    dhogaza continued:

    Hadley has (or had) a nice high-level overview of such things in their documentation for HadCM4 but I didn’t bookmark it, and google’s not being helpful at the moment. I don’t remember how I found it in the first place, unfortunately.

    Is this what you were thinking of?

    Unified Model
    User Guide
    http://ncas-cms.nerc.ac.uk/html_umdocs/UM55_User_Guide/UM_User_Guide.html

    This is along the lines of what you are mentioning regarding parameterization.

    Here is something a little more introductory that I have brought up before:

    The latest Hadley Centre model, HadGEM1 (which is typical of current state-of-the-art models), uses 135km boxes with 38 levels in the vertical, and includes all of the complexity of the climate system outlined above.

    Models ‘key to climate forecasts’
    Page last updated at 00:23 GMT, Friday, 2 February 2007
    By Dr Vicky Pope
    UK Met Office’s Hadley Centre
    http://news.bbc.co.uk/1/hi/sci/tech/6320515.stm

    Incidentally, the 38 levels are levels of atmosphere. The ocean is divided into 40 different levels. Resolution with respect to this particular model is 135 km by 135 km, according to the article.

    Time is typically divided into roughly 15 minute intervals with the more advanced models. However, resolution is increased in certain parts of the model as needed for the purpose of modelling, e.g., the polar vortex requires calculations with intervals of roughly 30 seconds — if I remember correctly. But that’s NASA GISS, not Hadley — although Hadley may do something similar. Likewise, the radiation transfer theory built into the most recent NASA GISS models includes non-local thermodynamic equilibria calculations.

    In any case, all of this is well outside of my area of expertise — but I find it fascinating.
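
    As a toy illustration of the grid-and-time-step idea, here is a one-dimensional diffusion column broken into boxes and stepped forward in time (nothing like real GCM numerics, just the break-space-and-time-into-pieces principle):

        import numpy as np

        N_BOXES = 38   # echoing the 38 vertical levels mentioned above
        DX = 1.0       # box size (arbitrary units)
        DT = 0.1       # time step; explicit scheme needs DT <= DX**2 / (2 * K) for stability
        K = 1.0        # diffusivity

        temp = np.zeros(N_BOXES)
        temp[0] = 1.0  # heat the bottom box and hold it there

        for _ in range(1000):
            flux = K * np.diff(temp) / DX           # fluxes between neighbouring boxes
            temp[1:-1] += DT * np.diff(flux) / DX   # update the interior boxes
            temp[0] = 1.0                           # fixed boundary condition

        print(np.round(temp[:5], 3))  # heat gradually diffusing up the column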

  • dhogaza // June 25, 2009 at 4:14 am | Reply

    The MIT people, whatever they’re doing, Ouija board, curve fitting, killing virgins, have NOTHING AT ALL to do with what NASA or Hadley are doing.

    Must suck to be the lying you …

  • Barton Paul Levenson // June 25, 2009 at 10:47 am | Reply

    MikeN writes:

    That all sounds good, and the models are wonderful creations, but in the end they are curve fitting.

    That is an illiterate statement. The models use physical laws. They are not statistical models. You just don’t know what you’re talking about. Why don’t you actually look up the code of a climate model and read it? NASA GISS’s Model E has the source code available on-line.

  • Barton Paul Levenson // June 25, 2009 at 10:49 am | Reply

    MikeN writes:

    The reason I say this is curve fitting is because the model gets tweaked to adjust to observed phenomena. Values for cloud sensitivity, ocean sensitivity, aerosols, volcanoes, etc.

    Sensitivity is an OUTPUT of the model, not an INPUT.

  • george // June 25, 2009 at 2:11 pm | Reply

    Zeke Hausfather said:

    “My only point was that models produced in 2000 are not completely independent of observations from 1990 to 2000, so their relative strength in hindcasting over that period is not as effective an evaluation of model predictions as is comparing projections to observed temperatures after the models were created.”

    I don’t debate the issue that in general, a correct hindcast may not mean as much as a correct forecast, for some of the very reasons that you mentioned.

    I actually stated that in my original challenge of Lucia’s claim:

    It may be true generally that ‘projecting’ the past is easier than ‘projecting’ into the future, but is it true in the specific case of the TAR projections?

    But we’re talking about a very specific case here and quite frankly, I don’t know enough of the details to say for sure.

    That’s why I questioned Lucia’s claim above (and why I continue to question yours): it does not seem consistent with what the authors of the Rahmstorf paper said: “model projections [from 2001 TAR] are essentially independent from the observed climate data since 1990”.

    I honestly don’t know when the models used to produce the projections included in the 2001 IPCC TAR were made or what data they used for validation.

    Because of my lack of knowledge in this case, perhaps mistakenly, I was going on the claim I read in the Rahmstorf et al paper.

    Since we’re talking about a specific case here (the IPCC 2001 projections) and not some general rule, I guess my comments to you (and to Lucia) really boil down to 3 questions:

    1) Do you know when the models that were used for the 2001 IPCC projections were created?

    2) Do you know what data (which years and which data sets) were used to validate them?

    3) Do you know for a fact that for the specific case under discussion “there [was] at least some selective pressure to exclude or tweak models that perform poorly in hindcasting the 1990-2000 period.”?

    These are not merely debating questions.

    If you want me to be honest here I will: I think there is far too much of what I would term “idle speculation” that goes on with regard to the work of climate scientists.

    Some of it may be correct, but in the absence of evidence for the specific case, I don’t think it is either fair or accurate to say for the specific case at hand that

    Let’s be honest here; they aren’t completely independent. After all, modelers use the relative skill in hindcasting to assess model validity, so there is at least some selective pressure to exclude or tweak models that perform poorly in hindcasting the 1990-2000 period.

  • Deep Climate // June 25, 2009 at 4:06 pm | Reply

    Zeke, george,

    In any discussion of IPCC projections and hindcasts, we should be very careful to distinguish between the two. A hindcast is not simply a “past projection.” Rather, in general, a hindcast incorporates estimates of natural forcings (volcanic, solar) and anthropogenic forcings (GHGs, aerosols), at least to the extent supported by the particular model. Projections, on the other hand, assume neutral natural forcings and various scenarios for the anthropogenic forcings.

    My reading of TAR chapter 9 is that all model runs incorporated were projections, not hindcasts, in the above sense from 1990 on.

    The actual date of the model run is not particularly relevant to this distinction, much less the date of publication or date of incorporation into a synthesis report like the TAR.

    It’s true that in TAR there were longer lags between the projection start and model runs/publication than in AR4 (or planned for AR5). But I don’t see any evidence that there were “hindcasts” for 1990-2000, much less that they were used to “exclude or tweak models that perform poorly.”

    The situation is that much clearer for AR4. The projections start at the beginning of 2000 (not 2001 or 2007), despite various claims in the blogosphere to the contrary.

  • David B. Benson // June 25, 2009 at 10:23 pm | Reply

    dhogaza // June 25, 2009 at 12:34 am — Please read the “What is tuning?” section of
    http://www.realclimate.org/index.php/archives/2008/11/faq-on-climate-models/

    Other than a different term, “tuning”, this is what anybody who wants to do sensible, physics-based, parameter estimation is going to have to do. So I think I described the situation correctly in my earlier post.

    By the way, I’m not considering what people do in so-called economic models. Those are certainly not based on physics.

  • Dave A // June 25, 2009 at 10:48 pm | Reply

    Timothy Chase,

    “The latest Hadley Centre model, HadGEM1 (which is typical of current state-of-the-art models), uses 135km boxes with 38 levels in the vertical, and includes all of the complexity of the climate system outlined above.”

    Last week the UK Met Office launched a site purporting to show future climate for the UK based on 25km gridded squares.

    Today in Nature, Myles Allen, recent co-author of the ‘Trillionth tonne’ paper, which made the front cover of Nature, is quoted as saying:

    “Current climate science might support projections on a scale of a couple of thousand kilometres, but anything smaller than that is uncharted territory”

  • dhogaza // June 25, 2009 at 11:56 pm | Reply

    So I think I described the situation correctly in my earlier post.

    It’s not you I’m worried about…

  • Timothy Chase // June 26, 2009 at 12:21 am | Reply

    From the same article:

    Most projections, such as those produced by the Intergovernmental Panel on Climate Change (IPCC), offer information on likely climate effects at a subcontinental scale of around 300 km2. Regional decadal-scale projections have recently been produced for the United States at a resolution of 25 km2 (see ‘Hot times ahead for the Wild West’). But the UKCIP’s approach takes projections to a new level, covering long-term climate change for the whole nation at the scale of 25 km2 and, in some cases, resolving weather patterns down to a scale of 5 km2.

    “It’s one step better than what you get from the IPCC at the global scale,” says Jacqueline McGlade, executive director of the Copenhagen-based European Environment Agency. “We’re getting more discrimination now between the south and the north and there are distinct differences.”

    UK climate effects revealed in finest detail yet
    Source: Copyright 2009, Nature
    Date: June 19, 2009
    Byline: Olive Heffernan
    http://forests.org/shared/reader/welcome.aspx?linkid=130562

    … and the full quote is:

    DEFRA’s chief scientist Bob Watson says that he expects the approach “will be taken up by other regions and highlighted by the IPCC in their next report”.

    “Current climate science might support projections on a scale of couple of thousand kilometres, but anything smaller than that is uncharted territory.’

    In other words, he regards it as something of an achievement. But by clipping what is in fact an expression of admiration to something shorter that sounds disapproving, you engaged in what is known in evolutionary circles as “quote-mining.” And you did so with a skill that would make a young earth creationist envious.

    My hat is off to you, sir!

  • Timothy Chase // June 26, 2009 at 12:33 am | Reply

    PS

    That was in response to Dave A., who wrote in response to me (or rather to my quote from Dr. Vicky Pope):

    Today in Nature, Myles Allen, recent co-author of the ‘Trillionth tonne’ which made the front cover of Nature, is quoted as saying -

    “Current climate science might support projections on a scale of a couple of thousand kilometres, but anything smaller than that is uncharted territory”

    The rocket went up without its crew: I hit a return in one of the one-line submission fields and the webpage took that as equivalent to hitting the “Submit” button.

    I am now about to click “Submit.” Wish me luck…

  • Timothy Chase // June 26, 2009 at 1:19 am | Reply

    David B. Benson originally wrote:

    Well, the model I work on (now and then) is certainly based on physics. That part is not so-called curve fitting, what I call parameter estimation. All would be done and finished except that there are two functions with unknown shape and parameters….

    I have a hard time imagining that something of the same sort is not occurring for AOGCMs. There are sub-scale parameterizations for a variety of processes such as clouds and aerosols.

    Dhogaza responded:

    Yes, but what they’re looking for is reasonable parameter values that match the available empirical data available for the process being parameterized.

    They’re not “curve fitting” in the sense of twiddling a bunch of dials until the model spits out a good fit to past climate conditions.

    David Benson responded:

    dhogaza // June 25, 2009 at 12:34 am — Please read the “What is tuning?” section of
    http://www.realclimate.org/index.php/archives/2008/11/faq-on-climate-models/

    I don’t think that dhogaza was disagreeing with you, but rather trying to forestall a misunderstanding that some might walk away with. He wished to emphasize (or “emphasise,” in UK spelling), or “bring to the foreground,” something which in his view was simply in the background as you presented things and might otherwise be missed.
    *
    David Benson then wrote of parameter-based estimation (or what is sometimes technically referred to as “tuning”):

    Other than a different term, “tuning”, this is what anybody who wants to do sensible, physics-based, parameter estimation is going to have to do. So I think I described the situation correctly in my earlier post.

    This is certainly how I understand it.

    The difference between curve-fitting as it is commonly understood and the sort of parameterization which is made use of in climate models is quite important — and relatively easy to explain — so I hope you don’t mind if I spell it out (in a little more detail than I have above) for the benefit of those who may be less knowledgeable than yourself.

    Models use parameterizations because they are necessarily limited (in one form or another) to finite difference calculations. There will be individual cells, perhaps a degree in latitude by a degree in longitude. These cells will be of a certain finite height, such that the atmosphere is broken into layers — with perhaps the troposphere and stratosphere sharing a total of forty atmospheric layers. Likewise, calculations are performed in sweeps, such that the entire state of the climate system for a given run is recalculated perhaps every ten minutes of model time.

    Now physics provides the foundation for these calculations, but because we are speaking of finite differences, the calculations will tend to have trouble with turbulent flow due to moist air convection or wind speed, where differences between neighboring cells are particularly steep. Thus when the flow is particularly turbulent, such as around the Polar Vortex, a cell-by-cell calculation based on finite differences lacks the means to tell how, for example, the momentum, mass, moisture and heat leaving a cell will be split up and transferred to the neighboring cells. To handle this you need some form of parameterization. Standard stuff as far as modeling is concerned, or so I would presume.

    Parameterization is a form of curve-fitting. But it is local curve-fitting, concerned with local conditions, local chemistry and local physics, and backed up by the study of local phenomena, e.g., what you are able to get out of labs or “in the field” studies. It is not curve-fitting that adjusts the models to specifically replicate the trend in global average temperature or other aggregate and normalized measures of the climate system.
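    To make that distinction a bit more concrete, here is a toy sketch in Python (purely illustrative: the numbers, the functional form, and the constant C_CONV are all invented for the example, and no real GCM is anywhere near this simple). The resolved part of the update is plain finite-difference diffusion between cells; the sub-grid part is a parameterized convective flux with a single tunable constant of the kind that gets estimated against local observations rather than against the global temperature record:

    import numpy as np

    N_CELLS = 40             # "atmospheric layers" in a single column
    DT = 600.0               # time step: ten minutes of model time, in seconds
    DZ = 500.0               # layer thickness in metres
    KAPPA = 20.0             # resolved (large-scale) diffusivity, m^2/s
    C_CONV = 0.05            # tunable sub-grid convection constant (invented)
    CRITICAL_LAPSE = 0.0098  # K/m, roughly the dry adiabatic lapse rate

    def step(temp):
        """Advance the column one time step: resolved diffusion plus a
        parameterized sub-grid convective adjustment."""
        new = temp.copy()
        # Resolved part: explicit diffusion between neighbouring cells.
        lap = np.zeros_like(temp)
        lap[1:-1] = (temp[2:] - 2.0 * temp[1:-1] + temp[:-2]) / DZ**2
        new += DT * KAPPA * lap
        # Parameterized part: where the lapse rate exceeds the threshold,
        # move heat upward at a rate set by the tunable constant C_CONV.
        lapse = -(temp[1:] - temp[:-1]) / DZ
        excess = np.clip(lapse - CRITICAL_LAPSE, 0.0, None)
        flux = C_CONV * excess            # K/s transferred at each interface
        new[:-1] -= DT * flux
        new[1:] += DT * flux
        return new

    temperature = 288.0 - 0.012 * DZ * np.arange(N_CELLS)  # unstable column
    for _ in range(144):                                    # one model day
        temperature = step(temperature)
    print(temperature[:5])

    The point of the sketch is only that C_CONV lives inside the local formula for the sub-grid flux; nothing in the loop ever looks at a global mean temperature.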

    Hopefully I have just done both your views (or rather — as I see it — your common view) justice.

  • dhogaza // June 26, 2009 at 1:25 am | Reply

    The Nature article’s behind a paywall, and I haven’t found which denialism site Dave A has cut-and-pasted this from, but wanna guess that Myles Allen is talking about GCM model outputs for the globe, and not when run with finer-sized grids over a smaller area like the UK?

    And do you wanna bet that the Hadley Centre knows that this research they’re doing is research, in other words exploration into relatively uncharted territory?

  • Zeke Hausfather // June 26, 2009 at 2:19 pm | Reply

    george, Deep,

    I’ll agree that I’m largely speculating here, as I have not personally worked in climate model development, and my thoughts should be taken simply as such.

    My point was that while the parametrization of any given individual model may be completely independent (and not adjusted post-facto), the next generation of models will, all things being equal, tend to build off those models which best reflected the past real world climate. There is likely at least some selection at work for the models that perform best in hindcasting. Rahmstorf is likely correct in stating that “model projections [from 2001 TAR] are essentially independent from the observed climate data since 1990″, though the same is probably not true for the models used in the AR4.

  • Deep Climate // June 26, 2009 at 6:35 pm | Reply

    Zeke, you said:

    Rahmstorf is likely correct in stating that “model projections [from 2001 TAR] are essentially independent from the observed climate data since 1990″, though the same is probably not true for the models used in the AR4.

    I’m not sure what your point is. The AR4 model-derived projections are from 2000 on, and are relative to a baseline of 1980-99. So the equivalent claim would be that they are “essentially independent from the observed climate data since 2000.” Do you agree or disagree with that statement? It’s hard to tell.

  • Mark // June 26, 2009 at 7:37 pm | Reply

    Better yet, Terry, tell your dad to fill a glass with water, put a lollipop stick over the glass, put an ice cube on the stick and see what happens when it melts.

  • Zeke Hausfather // June 26, 2009 at 7:53 pm | Reply

    Deep,

    I agree that, as far as I know, the AR4 models are “essentially independent from the observed climate data since 2000.” You’ve convinced me that there is some validity in evaluating models based on their predictions even if these predictions begin from a date prior to model formation.

    However, the argument that the TAR projections are more useful than AR4 projections in validating model performance over the last two decades still seems a tad odd, given that AR4 models take into account an increased understanding of climate science in the interim. Couldn’t we just run the AR4 models with projections out from 1990?

  • Dave A // June 26, 2009 at 9:04 pm | Reply

    Dhogaza,

    The “denialism site” was my copy of Nature. I too was surprised to find the item was behind a paywall when I looked for a link.

  • Dave A // June 26, 2009 at 9:07 pm | Reply

    Timothy Chase,

    So are you saying that Myles Allen doesn’t know what he is talking about?

  • dhogaza // June 26, 2009 at 9:21 pm | Reply

    Timothy Chase:

    I don’t think that dhogaza was disagreeing with you, but rather trying to forestall a misunderstanding that some might walk away with.

    Yes, exactly, thanks. That’s why I pointed out I wasn’t worried about David (i.e. his understanding). I could’ve been a bit more verbose, I guess.

  • dhogaza // June 26, 2009 at 9:35 pm | Reply

    Time is typically divided into roughly 15 minute intervals with the more advanced models. However, resolution is increased in certain parts of the model as needed for the purpose of modelling, e.g., the polar vortex requires calculations with intervals of roughly 30 seconds — if I remember correctly.

    Yeah, the time interval is a consequence of grid size and the rate at which modeled phenomena can traverse the grid. With a small grid and a time interval that’s too long, something generated in one cell could cross several cells within a single step, so the results won’t propagate cell by cell correctly.
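    Roughly speaking that’s what’s usually called the CFL (Courant) condition. A back-of-the-envelope sketch in Python (the grid sizes and the 100 m/s wind are just illustrative numbers, not taken from any particular model):

    def max_stable_timestep(cell_size_m, max_speed_m_s, courant=1.0):
        """Largest time step (s) that keeps the fastest resolved motion
        from crossing more than one grid cell per step."""
        return courant * cell_size_m / max_speed_m_s

    # A ~135 km grid box with jet-stream-like winds of ~100 m/s:
    print(max_stable_timestep(135_000, 100))   # 1350 s, about 22 minutes

    # Shrink the cells to ~5 km (regional scale) with the same winds:
    print(max_stable_timestep(5_000, 100))     # 50 s

    That’s why refining the grid forces a shorter time step as well, and why the cost of higher resolution climbs so quickly.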

  • dhogaza // June 26, 2009 at 9:37 pm | Reply

    So are you saying that Myles Allen doesn’t know what he is talking about?

    My guess is that you’re applying his statement incorrectly.

    Can you post something from the news brief that suggests his resolution factor was in regard to regional modeling using a small grid and not to the large grid size used when modeling the climate for the entire globe?

  • Deep Climate // June 26, 2009 at 10:25 pm | Reply

    Zeke,
    Thanks for that – that clarifies your position.

    My first response is that we should avoid the word “prediction” and use “projection”. As I pointed out before, the projections do not (and obviously cannot) take into account natural variations that might have an effect on the temperature record. For instance, the longer-than-usual solar minimum and the relative predominance of La Nina conditions have made the 2000s a little cooler than they would have been under neutral forcing, as has been pointed out by me and many others (including Rahmstorf).

    Couldn’t we just run the AR4 models with projections out from 1990?

    I suppose that could be done, but might be an unjustifiable use of scarce resources.

    There simply hasn’t been enough time to properly evaluate the projections (only 10 years). But, of course, that hasn’t stopped the blogosphere from trying, which is why we’re here.

    Still, at the decadal level, it’s clear that there continues to be considerable warming and that warming is well within confidence intervals (when properly calculated).

    But if you insist on evaluating AR4 projections, here’s another way to look at it. Unlike TAR, the AR4 projections depend very much on the hindcast in the pre-projection period, since the projections are given relative to the 1980-99 baseline.

    So it’s possible to analyze deviations from the observations, and possible model error, as having two parts, namely hindcast error (pre-2000) and projection error (post-2000).

    In fact, if you look at the A1B model ensemble, there is a huge downtick at Pinatubo (more than in the observations), and a correspondingly very large upward trend afterwards. This is so even though some of the models don’t even include volcanoes! The result is that the smoothed observations start out in 2000 already below the projections (by at least 0.05 deg.).

    None of this will make much difference in the long run, obviously, which is one reason it would have been better for bloggers to wait before jumping all over the AR4 projections.

    But it does suggest that baselining AR4 projections to the smoothed observations at the start of projection in 2000 (as was done by Rahmstorf for corresponding TAR start in 1990) would allow us to focus on the projections themselves. What do you think?
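    For concreteness, the mechanics of that re-baselining are just an additive shift. A rough sketch in Python (the series here are synthetic stand-ins, not the actual AR4 ensemble mean or the observed record, and the running-mean smooth is only a placeholder for whatever smoothing one prefers):

    import numpy as np

    years = np.arange(1980, 2010)
    rng = np.random.default_rng(0)
    obs = 0.02 * (years - 1980) + rng.normal(0.0, 0.1, years.size)  # fake observations
    proj = 0.02 * (years - 1980) + 0.05                             # fake ensemble mean

    def smooth(series, window=11):
        """Crude centred running mean, just as a placeholder smooth."""
        kernel = np.ones(window) / window
        return np.convolve(series, kernel, mode="same")

    # Shift the projection so it matches the smoothed observations in 2000,
    # instead of anchoring both to the 1980-99 mean.
    i2000 = int(np.where(years == 2000)[0][0])
    proj_rebased = proj + (smooth(obs)[i2000] - proj[i2000])

    # Post-2000 deviations now reflect the projections themselves,
    # not the hindcast portion.
    print((obs - proj_rebased)[years >= 2000].round(3))

    Done that way, any hindcast error before 2000 drops out of the comparison by construction.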

  • Timothy Chase // June 26, 2009 at 10:33 pm | Reply

    Dave A wrote:

    Timothy Chase,

    So are you saying that Myles Allen doesn’t know what he is talking about?

    No. I quoted from the article:

    DEFRA’s chief scientist Bob Watson says that he expects the approach “will be taken up by other regions and highlighted by the IPCC in their next report”.

    “Current climate science might support projections on a scale of couple of thousand kilometres, but anything smaller than that is uncharted territory.’

    UK climate effects revealed in finest detail yet
    Source: Copyright 2009, Nature
    http://forests.org/shared/reader/welcome.aspx?linkid=130562

    Then I pointed out that by simply quoting the part that begins, “Current climate science might support projections…” you made it sound like Myles Allen thought that climate science couldn’t support conclusions on the scale on which the people at Hadley were attempting to work. But what he was in fact saying — as is made clear by the earlier paragraph — is that what Hadley is doing is cutting-edge, it is in a sense the beginning of the next generation of climate science.

    As I said:

    In other words, he regards it as something of an achievement. But by clipping what is in fact an expression of admiration to something shorter that sounds disapproving, you engaged in what is known in evolutionary circles as “quote-mining.” And you did so with a skill that would make a young earth creationist envious.

    My hat is off to you, sir!

    It would be as if you quoted me as saying that you did things with “skill… that would make [others] envious,” and then said that I expressed admiration for your achievement — which would be the exact reverse of my intention.
    *
    dhogaza had stated:

    The Nature article’s behind a paywall, and I haven’t found which denialism site Dave A has cut-and-pasted this from, but wanna guess that Myles Allen is talking about GCM model outputs for the globe, and not when run with finer-sized grids over a smaller area like the UK?

    Dave A. responded:

    The “denialism site” was my copy of Nature. I too was surprised to find the item was behind a paywall when I looked for a link.

    dhogaza was giving you the benefit of the doubt. He had assumed that you were not so dishonest that you yourself had engaged in quote-mining, but that you had instead simply parroted someone else who had quote-mined the article in Nature.

    Now, for a variety of reasons, we know better: among them the fact that I searched the denialist blogs for that deliberate misquote and came up empty, as well as your continued performance today.

    However, when I originally responded to you I had left certain things implicit that I later regretted not having made explicit. By continuing as you have, you have given me the opportunity to correct my earlier mistake. For that I am grateful.

  • David B. Benson // June 26, 2009 at 11:48 pm | Reply

    Timothy Chase // June 26, 2009 at 1:19 am — Thank you for the effort, but finding numerical methods to best approximate Navier-Stokes is not ordinarily said to be parameter estimation, although it is considered “tuning” in the faq. There is a large literature on parameter estimation methods (and IMO climate modelers ought to use this phrase rather than the overly general, short and snappy “tuning”).

    A typical problem, and it appears to be the case in climate modeling “tuning”, is that one does not know the appropriate value for some parameters. For gravity, I use g = 9.80665 m/s^2, which is more than good enough for my problem even though I haven’t corrected for geodesy. If climate models need it (and maybe for the ocean part they do), the geodetic corrections for everywhere on the globe are known and accessible; one just plugs in the appropriate value. Can’t do that yet for aerosols, so one has a range of potentially valid approximations. Similarly for the remainder of the half dozen parameters to be estimated in climate models, according to the faq. (By the way, simultaneous estimation of 6 parameters is actually quite difficult!) In
    http://www.realclimate.org/index.php/archives/2008/11/faq-on-climate-models/
    there are several examples of parameter estimations: gravity wave drag and threshold relative humidity are specifically mentioned.

    The simplest parameter estimations to understand are “the tuning in a single formula in order for that formula to best match the observed values of that specific relationship.” This is bread-n-butter parameter estimation based upon data (not necessarily the global temperature, in the case of climate models).
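    A concrete (and entirely made-up) illustration of that bread-n-butter case, in Python: one formula, one unknown constant, estimated by minimizing the misfit to observations of that specific relationship and nothing else.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Pretend lab/field measurements of some quantity y as a function of x.
    x_obs = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
    y_obs = np.array([0.05, 0.18, 0.33, 0.49, 0.66])

    def formula(x, alpha):
        """The single-formula relationship with one tunable parameter."""
        return alpha * x**1.1

    def misfit(alpha):
        return np.sum((formula(x_obs, alpha) - y_obs) ** 2)

    best = minimize_scalar(misfit, bounds=(0.0, 5.0), method="bounded")
    print(best.x)   # the estimated parameter, fit to this one relationship

    Estimating half a dozen such parameters simultaneously is much harder, because the misfit surface is then six-dimensional and the parameters can trade off against one another.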

    Finally, I certainly think that you, dhogaza and I are all in essential agreement; the only differences seem to be terminological ones given our differing backgrounds.

  • dhogaza // June 27, 2009 at 12:44 am | Reply

    the only differences seem to be terminological ones given our differing backgrounds.

    Well, and I was intentionally using the terminology used by our cut-and-paste friends’ sources. The “knob-turning” argument is frequently made, conjuring an image of mad scientists at some kind of climate synthesizer console, spinning knobs until they’re pleased with the results.

    Etc …

    Though I don’t have a formal background in modeling, I certainly had no problem following the documentation for HadCM4 referenced above when TCO earlier was saying the same crap about models as we’ve seen in this thread.

    TCO, at least, ended up admitting he knows nothing in detail as to how the models work when confronted with detailed info showing that he was blowin’ it out his rear.

    Our DaveAs and MattInSeattles and the like don’t have the integrity to admit it.

  • Timothy Chase // June 27, 2009 at 1:05 am | Reply

    David B. Benson wrote:

    So I think I described the situation correctly in my earlier post.

    Dhogaza responded:

    It’s not you I’m worried about…

    I had written in response to Benson’s post:

    I don’t think that dhogaza was disagreeing with you, but rather trying to forestall a misunderstanding that some might walk away with.

    Now Dhogaza has responded:

    Yes, exactly, thanks. That’s why I pointed out I wasn’t worried about David (i.e. his understanding). I could’ve been a bit more verbose, I guess.

    Actually I hadn’t yet seen your post. Although you posted before I did, it was late at night, and well… But I believe your post was more than sufficient. I just like “bringing things together” as it were — which at times I suppose can be a little much.

    Then again, I also like to think that each of us has his own style — and as such each of us will be able to speak to different people. So maybe it isn’t such a bad thing.

  • David B. Benson // June 27, 2009 at 1:57 am | Reply

    Timothy Chase // June 27, 2009 at 1:05 am — Certainly a good thing!

  • Timothy Chase // June 27, 2009 at 2:38 am | Reply

    David B. Benson wrote:

    Thank you for the effort, but finding numerical methods to best approximate Navier-Stokes is not ordinarily said to be parameter estimation, although it is considered “tuning” in the faq…

    Thank you for the correction.

    Believe it or not, I had actually written that without the benefit of the faq — I had done some reading a while back simply because I wanted to understand a little more about how the models worked, and, well, that is the example I hit upon in trying to explain what I had picked up. Poorly chosen, and I should have read the faq first, but at least I would have ended up in the same spot, so apparently not all is for naught. Well, sort of…
    *
    David B. Benson continued:

    A typical problem, and appears to be the case in climate modeling “tuning”, is that one does not know the appropriate value for some parameters…. Can’t do that yet for aerosols, so one has a range of potentially valid approximations. Similarly for the remainder of the half dozen parameters to be estimated in climate models, according to the faq.

    Sounds like another place where I should do some digging.
    *
    David B. Benson continued:

    (By the way, simultaneous multi-parameter estimation of 6 parameters is actually quite difficult!)

    “Intuitively” (although I am not sure that is quite the right word) it seems this would more or less have to be the case — due to combinatorial issues.
    *
    From the FAQ, this sounds familiar:

    The threshold relative humidity for making clouds is tuned often to get the most realistic cloud cover and global albedo.

    I actually ran across this in the documentation I believe Dhogaza was referring to:

    Unified Model
    User Guide
    http://ncas-cms.nerc.ac.uk/html_umdocs/UM55_User_Guide/UM_User_Guide.html

    The bit about it being tuned to get a realistic “global albedo” (which according to the text proceeds top-down) is a sort of counter-example to tuning being used simply in order to account for sub-grid physics — or so it at first seems. However, as cloud structure is largely fractal and albedo will be a function of an opacity which is not a constant or even smoothly varying… I will have to give that a little more thought.
    *
    David B. Benson continued:

    The simplest parameter estimations to understand are “the tuning in a single formula in order for that formula to best match the observed values of that specific relationship.” This is bread-n-butter parameter estimation based upon data (not necessarily global temperature in case of climate models).

    This is actually what I would normally consider to be “tuning.”
    *
    David B. Benson continued:

    Finally, I certainly think that you, dhogaza and I are all in essential agreement; the only differences seem to be terminological ones given our differing backgrounds.

    Well, as you can tell, I don’t have much of a technical background. My understanding is uncomfortably close to what Hank would call “poetry.” However, I like to think that there are different degrees of understanding, that both an electrical engineer and an eight year old child understand how a light switch works, but in different ways and to different degrees.

    In philosophy there are skeptics who sometimes like to argue that science has completely invalidated our common understanding of what “a solid object” or “perception” is, but it is by means of our common understanding that we ultimately arrive at advanced scientific understanding. And even once we have arrived at that more advanced understanding, we depend upon perceptual judgements and everyday reasoning simply in order to deal with scientific instruments. That sort of “science-based” skepticism is self-defeating.

    In my view, while more advanced science may contradict some of what we commonly take to be true (such as when it appears that the sun moves rather than the earth), much of what we commonly take to be true is in fact preserved by our more advanced scientific understanding and language — but simply expressed in a different and more exact way. A bit like the correspondence principle between Galilean and Special Relativity, Newtonian gravitational theory and General Relativity, or Newtonian physics and Quantum Mechanics.

    In any case, if you need someone to analyze a dusty philosophy text for you in a hurry please keep me in mind.

  • Zeke Hausfather // June 27, 2009 at 3:52 am | Reply

    Deep,

    My fault on the projections vs. predictions mix-up. I recall reading a rather emphatic passage (perhaps somewhere in the TAR or AR4?) on the difference between the two a while back.

    As far as the interest in comparing projections with observations goes, I’m mostly curious whether the range of natural variability in the recent past (in a time with no major eruptions) is within the range projected by the envelope of various model scenarios and, if not, what this means. I had a rather interesting discussion a while back with Lucia about whether higher-than-expected natural variability would imply a climate sensitivity on the high side. As far as her work goes, I don’t really have the statistics background necessary to evaluate its legitimacy, though she seems open to reasonable critiques. After all, the arguments about climate sensitivity and the effectiveness of models to date (which are good, but certainly not perfect) are much more interesting (and valid) than the usual straw men of the global warming debate.

    Baselining AR4 projections to the smoothed observations at the start of projection in 2000 may indeed be interesting, though it might also be worth doing a brief sensitivity analysis on various baselines to make sure the choice itself does not bias the results obtained.
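    Something along these lines would do it, I think. A rough sketch in Python (the two series below are synthetic stand-ins rather than the real observed record or the AR4 ensemble; only the bookkeeping matters here):

    import numpy as np

    years = np.arange(1900, 2010)
    rng = np.random.default_rng(1)
    obs = 0.007 * (years - 1900) + rng.normal(0.0, 0.12, years.size)  # fake obs
    proj = 0.007 * (years - 1900) + 0.03                              # fake projection

    def post2000_gap(baseline):
        """Mean observation-minus-projection difference after 2000,
        with both series expressed as anomalies from the chosen baseline."""
        lo, hi = baseline
        sel = (years >= lo) & (years <= hi)
        obs_anom = obs - obs[sel].mean()
        proj_anom = proj - proj[sel].mean()
        return (obs_anom - proj_anom)[years >= 2000].mean()

    for baseline in [(1980, 1999), (1961, 1990), (1900, 1950)]:
        print(baseline, round(post2000_gap(baseline), 3))

    If the apparent agreement or disagreement swings around as the baseline changes, that itself is worth knowing before drawing conclusions.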

  • Dave A // June 27, 2009 at 8:28 pm | Reply

    dhogaza,

    The Nature ‘News in Brief’ item runs to 25 lines. After the Myles Allen quote it goes on to say he was part of a “review committee commissioned to check the report’s methodology”.

    It does not go into any more detail about his comment or the Met Office report.

  • Dave A // June 27, 2009 at 9:00 pm | Reply

    Timothy Chase,

    My posts related to the ‘News in Brief’ item in Nature and I reported accurately Myles Allen’s comment therein.

    You quote from an expanded version of the news brief, but I note your comment

    “what Hadley is doing is cutting-edge, it is in a sense the beginning of the next generation of climate science.”

    You may be right that this is cutting-edge science but it does not negate Allen’s implicit criticism. (And he is hardly a sceptic.) The UK Met Office, it seems to me, is overegging the pudding in the run-up to Copenhagen in December.

    And I would have thought that an approach like that is actually counterproductive.

  • Deep Climate // June 28, 2009 at 12:05 am | Reply

    I had a rather interesting discussion a while back with Lucia about whether higher-than-expected natural variability would imply a climate sensitivity on the high side.

    Doesn’t this also depend on what time scales we’re talking about? But, anyway, if you’re asking whether individual model realizations show variability similar to that observed, the answer is yes AFAIK.

    Baselining AR4 projections to the smoothed observations at the start of projection in 2000 may indeed be interesting, though it might also be worth doing a brief sensitivity analysis on various baselines to make sure the choice itself does not bias the results obtained.

    If one chooses a longer and/or earlier baseline period (rather than 1980-99) from the hindcast, the projections are even more in line with observations, as far as I can see. For example, notice that the observations to 2005 (very end of black curve) are clearly above the projections in IPCC fig. 9-5 (baseline 1900-50).

    http://www.ipcc.ch/graphics/graphics/ar4-wg1/jpg/fig-9-5.jpg

    Baselining from smoothed observations in 2000 gives a result somewhere in between (some day I’ll get around to blogging on that).

    If nothing else, doesn’t that demonstrate that accusations of IPCC manipulation of projections, based on supposed knowledge of the first few years of observations, are completely and utterly unfounded? I’m fed up with the constant misinformation and innuendo coming from the contrarian bloggers, and you should be too.

  • luminous beauty // June 28, 2009 at 1:55 pm | Reply

    …but it does not negate Allen’s implicit criticism. (And he is hardly a sceptic.)

    The inference that Allen’s statement was a criticism of this model, rather than a statement about the limits of the present state of the art and explicit support for why this approach is deserving of attention in his capacity as an expert referee on its publication, is entirely, completely, and utterly a confabulation of a paranoid mind.

    Of that one can be perfectly certain.

  • Dave A // June 28, 2009 at 9:25 pm | Reply

    luminous beauty,

    You have a very good way with words that commands some respect, but answer me two points.

    First, where has this report and that of the ‘review committee’ of which Myles Allen was a part been published?

    Second, why in such a short News in brief article would Nature have bothered to mention Myles Allen’s comment if he was actually being supportive of the Met Office?

  • t_p_hamilton // June 28, 2009 at 10:33 pm | Reply

    DaveA asks:”First, where has this report and that of the ‘review committee’ of which Myles Allen was a part been published?”

    I imagine Nature has it. It was part of the review process.

    “Second, why in such a short News in brief article would Nature have bothered to mention Myles Allen’s comment if he was actually being supportive of the Met Office?”

    That is the way Nature rolls. The news article is about an article in the very same journal, one which would not have been published if Allen’s committee had criticized the methodology!

  • Timothy Chase // June 28, 2009 at 10:38 pm | Reply

    Dave A wrote:

    Timothy Chase,

    My posts related to the ‘News in Brief’ item in Nature and I reported accurately Myles Allen’s comment therein.

    You quote from an expanded version of the news brief, but I note your comment,

    “what Hadley is doing is cutting-edge, it is in a sense the beginning of the next generation of climate science.”

    Apologies are in order: mine.

    You are right — Myles Allen is skeptical — he does believe that Hadley is stretching things. The more optimistic passage was from Bob Watson.

    I looked more closely at the news release, and what I saw was not the “fuller” quote, but two separate quotes by two different people. The passage:

    DEFRA’s chief scientist Bob Watson says that he expects the approach “will be taken up by other regions and highlighted by the IPCC in their next report.”

    UK climate effects revealed in finest detail yet
    Source: Copyright 2009, Nature
    Date: June 19, 2009
    Byline: Olive Heffernan
    http://forests.org/shared/reader/welcome.aspx?linkid=130562

    … expresses the view of Bob Watson — who is part of the project.

    The passage which immediately follows:

    “Current climate science might support projections on a scale of couple of thousand kilometres, but anything smaller than that is uncharted territory.’

    Myles Allen

    It was the formatting that threw me off.

    If I had read further:

    Originally slated for release in November 2008, the projections were delayed by an independent review commissioned late last year to check the methodology. University of Oxford climatologist Myles Allen, who was on the review committee, worries that the results are “stretching the ability of current climate science”.

    His main concern is that the UKCIP report goes well beyond what the IPCC considers possible in terms of the spatial and temporal scales at which climate effects can be reliably resolved. “Current climate science might support projections on a scale of couple of thousand kilometres, but anything smaller than that is uncharted territory,” says Allen.

    … this would have been clarified.
    *
    Dave A wrote:

    You may be right that this is cutting-edge science but it does not negate Allen’s implicit criticism. (And he is hardly a sceptic.) The UK Met Office, it seems to me, is overegging the pudding in the run-up to Copenhagen in December.

    And I would have thought that an approach like that is actually counterproductive.

    Is this new approach a mistake? I can’t say. I am not a climatologist, and at the same time I understand some of the worries expressed by those climatologists who are skeptical of this approach. I would say that it is interesting — and I will be interested in how it pans out.

    In any case, my apologies. I should have looked further. I am used to dealing with denialists and clearly jumped the gun, both in terms of my reading of the news piece and my estimation of you. I think that the criticisms you pointed us to are well worth considering. And looking things over, I would presently (from my entirely non-expert point of view) grant them roughly as much weight as the predictive approach itself. Moreover, I strongly appreciate your having seen this through — as this pushed me to look at things more closely and gave me the opportunity to correct myself.

  • Timothy Chase // June 29, 2009 at 3:13 am | Reply

    Re DEFRA’s 25km2 projections…

    Unfortunately I have not been able to find the reviews — I am not sure that they have been made public. However, it appears that I found the report itself online. One can order copies, but apparently that isn’t necessary if you don’t mind surfing from page to page.

    Here it is:

    Climate change projections
    June 2009 ISBN 978-1-906360-02-3
    http://ukclimateprojections.defra.gov.uk/content/view/824/517/index.html

    All of the reviewers (first stage and the eleventh hour international team) are listed here:

    http://ukclimateprojections.defra.gov.uk/content/view/824/517/index.html

  • Timothy Chase // June 29, 2009 at 6:14 am | Reply

    More on predicting the next El Niño…

    John Mashey wrote a post on August 14, 2007 at 6:24 am that included the following:

    http://aos.princeton.edu/WWWPUBLIC/gphlder/bams_predict200.pdf
    “How predictable is El Nino” says it isn’t.

    The paper is still there and this is a bit of an oversimplification — at least with respect to timing…

    From the conclusion:

    Thus far, attempts to forecast El Niño have not been very successful (Landsea and Knaff 2000, Barnston et al 1999). However, the factors that cause the irregularity of the Southern Oscillation – random atmospheric disturbances whose influence depends on the phase of the oscillation – are such that the predictability of specific El Niño events is inevitably limited. That is especially true of the intensity of El Niño. For example, the occurrence of an event in 1997 was predictable on the basis of information about the phase of the Southern Oscillation, but the amplitude of the event could not have been anticipated because it depended on the appearance of several wind bursts in rapid succession.

    How Predictable Is El Niño? (Nov 2002)
    A.V. Fedorov, S.L. Harper, S.G. Philander, B. Winter, and A. Wittenberg
    Atmospheric and Oceanic Sciences Program, Department of Geosciences, Princeton University
    Sayre Hall, P.O. Box CN710, Princeton, NJ 08544, USA
    http://aos.princeton.edu/WWWPUBLIC/gphlder/bams_predict200.pdf

  • Timothy Chase // June 29, 2009 at 6:17 am | Reply

    SORRY

    Posted the above to the wrong thread.

  • t_p_hamilton // June 29, 2009 at 2:12 pm | Reply

    DaveA,

    You were right about Myles Allen expressing worry about extending models too far. I had this situation (a non-peer-reviewed study) confused with a peer-reviewed article in the same Nature issue (where comments by fellow scientists explain in more general terms what the article is about, and it’s always wonderful work :).

  • Dave A // June 30, 2009 at 8:38 pm | Reply

    Timothy Chase,

    Thanks for the Defra links, but the site is a mess and there seems to be no access at all to the ‘technical pages’.

    I did eventually manage to access their FAQ. In response to a question about whether the models are reliable, it says:

    “Yes. Defra organised an expert review of the methodologies used in the probabilistic projections, marine projections and the weather generator. A workshop with the reviewers followed this in January 2009. They confirmed that the methodologies used in UKCP09 (sic) were credible and represented a large step beyond UKCIP02, whilst noting that the techniques used are complex and require explanation and guidance”

    Well, that doesn’t seem to be the way Myles Allen reads it!! I have been unable to find anything about the January reviewers’ workshop.

  • MikeN // July 1, 2009 at 5:15 pm | Reply

    >it appears that you now acknowledge that suggestions that Rahmstorf changed the smoothing parameters are most likely completely unfounded.

    No in fact, Dr. Rahmstorf has admitted that he changed the smoothing parameters.

  • Deep Climate // July 1, 2009 at 8:58 pm | Reply

    Mike N said:
    No in fact, Dr. Rahmstorf has admitted that he changed the smoothing parameters.

    Right you are. I’ve been aware of this since yesterday (hat tip to TCO) and I am planning to go over to Lucia’s Blackboard and acknowledge it. (Today is Canada Day and the Michaels-Carlin plagiarism EPA kerfuffle is letting up, so I should get to it soon).

    In fact, I should have realized it, because of the flattening of the curve in the latest version.

    But, I do maintain that Jean S’s emulation didn’t match Rahmstorf’s (maybe it does now). That’s not an excuse for my mistake, but still it would be good to see the effect of changing the parameters of the exact smoothing that Rahmstorf used.

    I have some other thoughts about smoothing (esp. lowess) and windows and such. So I’ll repeat my request to Tamino for a tutorial/discussion on the subject. Anyone else interested?

  • Phil Scadden // July 1, 2009 at 9:57 pm | Reply

    After so much discussion of lowess for smoothing, I have just coded a lowess for use on my own data. (I don’t have the option of using R.) I’m doing a lot of playing to understand it and compare it with S-G. I would love to see a detailed discussion from Tamino.
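    For anyone else playing along who does have Python available, both smooths are only a few lines with standard libraries; a quick sketch on synthetic data (window and fraction choices are purely illustrative):

    import numpy as np
    from scipy.signal import savgol_filter
    from statsmodels.nonparametric.smoothers_lowess import lowess

    years = np.arange(1970, 2010)
    rng = np.random.default_rng(2)
    temps = 0.018 * (years - 1970) + rng.normal(0.0, 0.12, years.size)

    # Lowess: 'frac' is the share of the data used in each local regression.
    smoothed_lw = lowess(temps, years, frac=11 / years.size, return_sorted=False)

    # Savitzky-Golay: odd window length, low-order local polynomial.
    smoothed_sg = savgol_filter(temps, window_length=11, polyorder=2)

    for y, a, b in zip(years[-5:], smoothed_lw[-5:], smoothed_sg[-5:]):
        print(y, round(a, 3), round(b, 3))

    Comparing the two near the ends of the series is where the interesting differences show up.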

  • lucia // July 1, 2009 at 10:28 pm | Reply

    Deep–
    JeanS and I were both aware of these small differences long ago when JeanS first replicated using 2007 data only. The reason JeanS’s graph does not perfectly emulate Rahmstorf’s is that both Hadley and GISS update historic data and JeanS only had access to currently posted data. In contrast, Stefan used data posted at the time he did his analysis.

    MikeN conveyed your concerns, and JeanS explained this at my blog. I don’t have files back to 2006, but based on my oldest version, I can confirm that both groups’ annual average anomalies are not 100% stable as much as 2 years later. (Due to their method, GISS’s are never entirely stable, because new data can propagate back into the baseline or even further.)

    Because the method of smoothing requires extrapolating into the future to compute the most recent smoothed temperatures, changes to the recorded temperatures for 2005 or 2006 (individually or together) would result in slight changes in the location of the 11-year smoothed curve near 2005 and 2006, even without the addition of 2007. Needless to say, such changes would also affect the precise location of the temperature anomalies for 2005 and 2006 on the graph.
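    To illustrate the general point (not Rahmstorf’s specific algorithm; this uses a plain running mean with a linear extrapolation of the last few points as the end treatment, and a synthetic series), here is how a small revision to one recent value moves the smoothed endpoint:

    import numpy as np

    def end_padded_smooth(y, window=11, fit_pts=11):
        """Centred running mean after padding the end of the series with a
        linear extrapolation of the last fit_pts points (one common, simple
        end treatment; real choices vary)."""
        half = window // 2
        t = np.arange(len(y))
        slope, intercept = np.polyfit(t[-fit_pts:], y[-fit_pts:], 1)
        pad = intercept + slope * np.arange(len(y), len(y) + half)
        padded = np.concatenate([y, pad])
        kernel = np.ones(window) / window
        return np.convolve(padded, kernel, mode="same")[: len(y)]

    rng = np.random.default_rng(3)
    base = 0.02 * np.arange(30) + rng.normal(0.0, 0.1, 30)   # synthetic anomalies

    revised = base.copy()
    revised[-2] += 0.1       # revise the second-to-last "year" upward a bit

    print(end_padded_smooth(base)[-3:].round(3))
    print(end_padded_smooth(revised)[-3:].round(3))   # the smoothed end shifts

    The interior of the curve barely moves, but the last few smoothed values do, which is all my point amounts to.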

    I have to admit, I was a bit perplexed when you wrote this:

    Timothy

    Would you claim that you weren’t trying to establish what the trend actually and empirically is but merely arguing against the trend that the IPCC claimed to exist on the basis of a larger span of years?

    I am arguing that the trend the IPCC projected in the AR4 is inconsistent with observations. Others may disagree with my conclusion, but that is my argument. I think most who read the words and figures surrounding the snippet you quoted would notice that I discuss the uncertainty intervals, which are large. So, I would never and have never claimed the correct magnitude of the trend had been “established”.

    Since you just provided a long-winded explanation of how AOGCMs work, I am surprised you then suggest the projections in the AR4 were made “on the basis of a larger span of years”. The projections in the AR4 were not made by curve-fitting over a larger span of years.

    For what it’s worth, my argument about why the projections in the TAR were not frozen in 1990 is not based on any notion that AOGCMs are curve-fitting exercises. I’ve never thought nor said they were.

    I could discuss AOGCMs further, but it’s not particularly helpful in figuring out when the projections from the TAR were frozen.

    To identify the relevant information about when the method used to develop the projections in the TAR was frozen, you might want to read the TAR. Focus on the discussion of how those projections were made. You will discover the TAR projections were created by a simple model which contains 6 tuning parameters. That simple model did not even exist in 1990.

    Moreover, the 6 tuning parameters used in the TAR were certainly not selected back in 1990. The choice is documented in a peer-reviewed article cited as published in 2001. The parameters themselves were down-selected from a subset of AOGCMs that existed at the time, meaning that some possible choices of tuning parameters from some models were deemed inappropriate.

    So, the specific model used was developed after 1990, the choice of method to create projections was made long after 1990, and the six tuning parameters used to drive that model were selected long after 1990. The precise date when the methodology was “frozen” is difficult to establish, but it is certainly closer to 2000 than to 1990.

    Since the authors of the SAR, published in 1995, used a different method and different tuning parameters, I think it’s safe to suggest that the methodology for creating the TAR projections cannot be thought to pre-date 1995.
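    To give a sense of what “a simple model with a handful of tuning parameters” can look like in practice, here is a generic zero-dimensional energy-balance sketch in Python. This is not the model used for the TAR projections and the parameter values are invented; it only illustrates how a couple of tuned constants (a climate feedback parameter and an effective ocean heat capacity) turn a forcing scenario into a temperature projection:

    import numpy as np

    LAMBDA = 1.25   # climate feedback parameter, W/m^2 per K (tuned; invented value)
    C_EFF = 8.0e8   # effective heat capacity, J/m^2 per K (tuned; invented value)
    SECONDS_PER_YEAR = 3.15576e7

    def project(forcing_w_m2, dt_years=1.0):
        """Step a zero-dimensional energy balance model through a forcing series."""
        temps = np.zeros(len(forcing_w_m2))
        t = 0.0
        for i, f in enumerate(forcing_w_m2):
            dT_dt = (f - LAMBDA * t) / C_EFF            # K per second
            t += dT_dt * dt_years * SECONDS_PER_YEAR
            temps[i] = t
        return temps

    # A made-up scenario: forcing ramps from 0 to 4 W/m^2 over a century.
    forcing = np.linspace(0.0, 4.0, 100)
    print(project(forcing)[-1])   # warming at year 100 under these tuned constants

    Change LAMBDA or C_EFF and the whole projection changes, which is exactly why it matters when those constants were chosen and from which AOGCMs they were derived.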

    If you want to think otherwise, fine. But the argument has nothing to do with anyone believing AOGCMs are curve-fitting exercises.

  • Deep Climate // July 1, 2009 at 11:46 pm | Reply

    Lucia –
    OK, I’ll check out those threads on smoothing.

    ===============
    Lucia wrote:

    I have to admit, I was a bit perplexed when you wrote this:

    Timothy

    but I think she meant:

    Timothy

    I have to admit, I was a bit perplexed when you wrote this:

    etc. etc.

    In other words, the rest of the comment is directed to Timothy, not to me. FWIW, part of the problem seems to be a misunderstanding of what Timothy actually meant, but I’ll let him respond.

  • Timothy Chase // July 2, 2009 at 12:49 am | Reply

    Deep Climate wrote:

    In other words, the rest of the comment is directed to Timothy, not to me. FWIW, part of the problem seems to be a misunderstanding of what Timothy actually meant, but I’ll let him respond.

    Well, clearly Lucia is in no rush. After seven days she might have to wait a couple at this point.

  • MikeN // July 2, 2009 at 4:15 am | Reply

    >In fact, I should have realized it, because of the flattening of the curve in the latest version.

    That’s what I said, and you said my concerns were unfounded.

  • Timothy Chase // July 3, 2009 at 9:28 pm | Reply

    Deep Climate wrote:

    Doesn’t this also depend what time scales we’re talking about? But, anyway, if you’re asking whether individual model realizations show similar variability to that observed, the answer is yes AFAIK.

    For the most part I tend to agree, and my apologies for not commenting on this earlier, but I found an unpublished paper by Kravtsov and Tsonis that attributes two-thirds of recent warming to climate variability.

    Please see:

    Rapid multi-decadal climate change, at the rate consistent with that actually observed during 1980-2005, is due, in our statistical model of global surface temperature (GST) evolution, to a combination of a linear trend – presumably associated with human-induced warming – and amplifying warming phases of intrinsic multi-decadal variability. The above interpretation of the observed global warming and ensuing statistical analysis results in that the estimate of human-induced warming rate during 1980-2005 is only about two thirds the actual rate observed; the remainder of the trend is due to intrinsic multi-decadal climate variability. In other words, the recent increased warming rate is interpreted as partly the consequence of intrinsic dynamics of the climate system, rather than “most up-to-date” estimate of the anthropogenic climate change.

    How Much of Global Warming is due to Natural Climate Variability?
    S. Kravtsov and A. A. Tsonis (unpublished from 2008)
    http://www.uwm.edu/~kravtsov/downloads/GW&NV_JCLI.pdf

    They make the claim that current climate models cannot explain the natural variability found in the climate system.

    Please see:

    The latter difference in interpretations may be one of the key reasons for enormous future warming seen in the general circulation models (GCMs). A related property of most GCM forecasts of GST evolution is that their ensemble-means are consistent with the notion that potentially important intrinsic climate modes are most likely to be strongly damped and the modeled climatological change is a mere linear response to amplifying radiative forcing. Our statistical results motivate experimentation with global climate GCMs in less viscous parameter regimes than currently used for global change forecasts. Indeed, Santer et al. (2006) show that the level of intrinsic variability in the current generation of climate models is in general insufficient to explain, by itself, the observed multi-decadal evolution; see, however, Knight et al. (2006) and Zhang et al. (2007) for contrary examples.

    ibid.

    Given this I was naturally interested in what Santer et al had to say. I may be wrong, but it would appear that Kravtsov and Tsonis focused on what Santer et al said regarding the Atlantic Cyclogenesis Region (ACR) rather than both the Atlantic and the Pacific (PCR).

    Please see:

    Model performance in simulating variability on decadal and longer timescales is of most interest here, because this constitutes the background noise against which any slowly evolving forced signal must be detected (Fig. 4C). SST data were low-pass filtered to isolate variability on these timescales (see Supporting Text). In the ACR, the standard deviations of the filtered SST data are systematically lower in models than in observations, pointing to possible biases in model low-frequency variability. Only 5 of the 22 models have 20CEN realizations with standard deviations close to or exceeding observed values. In the PCR, of 22 models produce 20CEN realizations with greater than observed low-frequency SST variability. The implications of these results are discussed below.

    Santer, B. D., Wigley, T. M. L., Gleckler, P. J., Bonfils, C., et al., 2006: Forced and unforced ocean temperature changes in Atlantic and Pacific tropical cyclogenesis regions. Proc. Natl. Acad. Sci., doi:10.1073/PNAS.0602861103.
    http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1599886

    Santer et al also state that external forcing would appear to explain (with a probability of 84%) at least two-thirds of the increases in SST in the Atlantic and Pacific Cyclogenesis Regions for roughly the last century.

    Please see:

    On the basis of our F1 results for the period 1906–2005, there is an 84% chance that external forcing explains at least 67% of observed SST increases in the ACR and PCR. In both regions, model simulations with external forcing by combined natural and anthropogenic effects are broadly consistent with observed SST increases. The PCM experiments suggest that forcing by well mixed greenhouse gases has been the dominant influence on century-timescale SST increases. We also find clear evidence of a volcanic influence on observed SST variability in the ACR and PCR.

    ibid.

  • Timothy Chase // July 3, 2009 at 9:29 pm | Reply

    Oscillations and Trends, Part II

    Now obviously there are some caveats on both sides of the argument to be made at this point.

    First, when Tsonis et al state that they believe roughly two-thirds of the warming that has taken place from 1980 to 2005 is due to anthropogenic forcing, this does not in any way contradict what Santer et al have to say regarding external forcing explaining at least two-thirds of the observed SST increases for the period from 1906 to 2005. After all, the modern period of global warming has certainly seen warming take place at a higher rate than either the 20th century as a whole or even the early twentieth century warming, e.g., 1900-1940. But unless I am wrong, the fact that Tsonis et al point to Santer et al as supporting the view that climate models have difficulty reproducing the level of variability found in the climate system certainly gives one pause — as this is true of the Atlantic Cyclogenesis Region, not the Pacific.

    Second, the tropical cyclogenesis regions are neither the climate system nor the ocean as a whole, and as such attempting to infer the internal variability of the climate system from the variability of these two regions would be a mistake, since any variability in those regions is necessarily damped by the climate system as a whole, e.g., the rest of the Atlantic and the rest of the Pacific — not to mention the fact that the Atlantic and the Pacific will not be in sync with one another in their expression of variability.

    Likewise, although the positive phases of the Pacific Decadal Oscillation have tended to be in phase with the periods of 20th century global warming, those phases were named after the warming that takes place along the Pacific Northwest, not within the Pacific as a whole. And in fact it is during the periods of global warming that the Pacific Decadal Oscillation as a whole tends towards a cool phase, not a warm one.

    Please see for example:

    On the Relationship between the Pacific Decadal Oscillation (PDO) and the Global Average Mean Temperature
    3 Aug 2008
    http://atmoz.org/…/pdo...

    Likewise, I remember that for much of the 20th century Greenland tended to be out of sync with global trends, and it was only during the latter part of the modern period of global warming that it synced up with them.

    Third, to forestall a possible misinterpretation on the part of laymen, even if Tsonis et al were correct in attributing one-third of the warming for the period 1980-2005 to internal variability, this would not support the view that Charney climate sensitivity to CO2 is two-thirds of what we believe it to be (e.g., 2°C rather than 3°C), as it may simply mean that the rate at which the climate system approaches equilibrium is slower than we thought while the total amount of warming (realized and yet to be realized) remains the same (see the sketch at the end of this comment). Likewise, we have independent paleoclimate evidence that climate sensitivity to carbon dioxide is roughly 3°C — for example, the relatively recent meta-study by Royer et al examining the past 420 million years, which combined the results of 47 different studies.

    Please see:

    Here we estimate long-term equilibrium climate sensitivity by modelling carbon dioxide concentrations over the past 420 million years and comparing our calculations with a proxy record. Our estimates are broadly consistent with estimates based on short-term climate records, and indicate that a weak radiative forcing by carbon dioxide is highly unlikely on multi-million-year timescales. We conclude that a climate sensitivity greater than 1.5°C has probably been a robust feature of the Earth’s climate system over the past 420 million years, regardless of temporal scaling.

    Royer DL, Berner RA, Park J. 2007.
    Climate sensitivity constrained by CO2 concentrations over the past 420 million years.
    Nature, 446: 530-532.

    They show the best fit climate sensitivity as being roughly 2.8°C.

    Please see:

    The best fit between the standard version of the model and proxies occurs for δT(2X)=2.8°C (blue curve in Fig. 2a), which parallels the most probable values suggested by climate models (2.3-3.0°C) (Fig. 2b).

    ibid.
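    Here is the sketch I referred to above: a toy zero-dimensional energy balance calculation (in Python, with invented numbers) showing that two climates with the same equilibrium sensitivity but different effective ocean heat capacities realize very different fractions of their warming by a given date. In other words, attributing more of the observed short-term rate to internal variability need not imply a lower equilibrium sensitivity; it may just mean a slower approach to equilibrium:

    import numpy as np

    FORCING_2XCO2 = 3.7        # W/m^2, standard value for doubled CO2
    SENSITIVITY = 3.0          # K per doubling, the same in both cases
    LAMBDA = FORCING_2XCO2 / SENSITIVITY   # feedback parameter, W/m^2 per K
    SECONDS_PER_YEAR = 3.15576e7

    def warming_after(years, c_eff, forcing=FORCING_2XCO2):
        """Euler-step a one-box energy balance model under constant forcing."""
        t = 0.0
        for _ in range(years):
            t += (forcing - LAMBDA * t) / c_eff * SECONDS_PER_YEAR
        return t

    shallow = 4.0e8   # J/m^2 per K, fast-responding case (invented)
    deep = 1.6e9      # J/m^2 per K, slow-responding case (invented)

    for c in (shallow, deep):
        print(round(warming_after(50, c), 2), "K after 50 yr;",
              "equilibrium is", SENSITIVITY, "K in both cases")

    The equilibrium warming is identical; only the pace differs.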

  • Timothy Chase // July 3, 2009 at 9:30 pm | Reply

    Oscillations and Trends, Part III

    I find the views of Tsonis interesting. For example, there is the recent paper:

    This paper provides an update to an earlier work that showed specific changes in the aggregate time evolution of major Northern Hemispheric atmospheric and oceanic modes of variability serve as a harbinger of climate shifts. Specifically, when the major modes of Northern Hemisphere climate variability are synchronized, or resonate, and the coupling between those modes simultaneously increases, the climate system appears to be thrown into a new state, marked by a break in the global mean temperature trend and in the character of El Nino/Southern Oscillation variability.

    Swanson, K. L., and A. A. Tsonis (2009), Has the climate recently shifted?, Geophys. Res. Lett., 36, L06711, doi:10.1029/2008GL037022.

    … where the authors argue for the existence of networks between climate modes and where at crucial points these networks undergo a process of re-organization that affects the rate at which warming takes place, perhaps through the existence of plateaus. However, Tsonis seems to be of the view that whatever rate of warming is explained by climate oscillations is necessarily subtracted from the rate of warming due to external forcings. But there is of course a different view — first suggested I believe in the following…

    A crucial question in the global-warming debate concerns the extent to which recent climate change is caused by anthropogenic forcing or is a manifestation of natural climate variability. It is commonly thought that the climate response to anthropogenic forcing should be distinct from the patterns of natural climate variability. But, on the basis of studies of nonlinear chaotic models with preferred states or ‘regimes’, it has been argued, that the spatial patterns of the response to anthropogenic forcing may in fact project principally onto modes of natural climate variability. Here we use atmospheric circulation data from the Northern Hemisphere to show that recent climate change can be interpreted in terms of changes in the frequency of occurrence of natural atmospheric circulation regimes. We conclude that recent Northern Hemisphere warming may be more directly related to the thermal structure of these circulation regimes than to any anthropogenic forcing pattern itself. Conversely, the fact that observed climate change projects onto natural patterns cannot be used as evidence of no anthropogenic effect on climate. These results may help explain possible differences between trends in surface temperature and satellite-based temperature in the free atmosphere.

    Signature of recent climate change in frequencies of natural atmospheric circulation regimes
    S. Corti, F. Molteni, and T. N. Palmer
    Nature 398, 799-802 (29 April 1999)
    http://www.nature.com/nature/journal/v398/n6730/abs/398799a0.html

    Of the two views expressed in this passage, the one that is now more or less mainstream is not that climate modes (oscillations) are, on relatively short temporal scales, an alternative explanation of climate change. Rather, changes in the frequency, strength or phase of climate modes tend to be the way in which forcing of the climate system (whether due to solar variability, cyclical variations in the earth’s orbit or, more recently, anthropogenic greenhouse gases) is first expressed. And one may easily argue that to a first approximation forcing is forcing, whether solar or anthropogenic, and that climate modes can be expected to respond in roughly the same manner regardless of the nature of the forcing.

    But there are differences.

    For example, increases in solar irradiance should result in warming in both the stratosphere and the troposphere, but increases due to an enhanced greenhouse effect should result in a warming of the troposphere and a cooling of the stratosphere. Likewise, warming due to solar variability should first express itself predominantly in daytime temperatures rather than nighttime temperatures, while warming due to an enhanced greenhouse effect will be expressed predominantly in nighttime temperatures. And likewise, warming due to an enhanced greenhouse effect will be stronger during the winter and at higher latitudes.

    But setting such differences aside, the view that anthropogenic forcing is projected onto climate modes and networks would suggest that, contrary to the views expressed by Tsonis and co-authors, the rate of warming due to climate modes and the rate of warming due to anthropogenic forcing are not to be added together to arrive at the overall rate of warming. Rather, on all but the shorter temporal scales they are two different ways of viewing the same warming, where the proximate cause of warming consists of variability due to climate modes, but the variability of climate modes is itself expressive of external forcing — and in the case of the 20th century, predominantly of anthropogenic causes.

  • Timothy Chase // July 3, 2009 at 11:08 pm | Reply

    CORRECTION

    The following:

    For the most part I tend to agree, and my apologies for not commenting on this earlier, but I found an unpublished paper by Kravtsov and Tsonis that attributes two-thirds of recent warming to climate variability.

    … should have ended, “attributes one-third of recent warming to climate variability,” or alternatively, as I was also thinking of it, “attributes only two-thirds of recent warming to anthropogenic forcing and the other one-third to internal variability.”

    Of course that much is made clear by part of the passage that I quote immediately afterwards:

    The above interpretation of the observed global warming and ensuing statistical analysis results in that the estimate of human-induced warming rate during 1980-2005 is only about two thirds the actual rate observed; the remainder of the trend is due to intrinsic multi-decadal climate variability.

  • Timothy Chase // July 3, 2009 at 11:11 pm | Reply

    PS

    I made that same mistake repeatedly… and only now have caught it. That’s what happens when you are in a hurry to get something out I suppose.

  • Timothy Chase // July 4, 2009 at 8:14 am | Reply

    Trends, Part I

    Section I

    Lucia quotes me:

    Would you claim that you weren’t trying to establish what the trend actually and empirically is but merely arguing against the trend that the IPCC claimed to exist on the basis of a larger span of years?

    … then she responds:

    I am arguing trend the IPCC projected to exist in the AR4 is inconsistent with observations. Others may disagree with my conclusion, but that is my argument. I think most who read the words and figures surrounding the snippet you quoted would notice that I discuss the uncertainty intervals which are large. So, I would never and have never claimed the correct magnitude of the trend had been “established”.

    For the moment I will postpone responding to this, but will return to it towards the end.

    Section II

    Lucia states:

    Since you just provided a longwinded explanation of how AOGCMs work I am surprised you then suggest the projection in the AR4 were made “on the basis of a larger span of years”. The projections in the AR4 were not made by curve-fitting over a larger span of years

    It helps to quote entire sentences — particularly when “debating” someone online. In this case my sentence was:

    Would you claim that you weren’t trying to establish what the trend actually and empirically is but merely arguing against the trend that the IPCC claimed to exist on the basis of a larger span of years?

    Now clearly I couldn’t simply be referring to the trend that you had graphed — as I was referring to the IPCC making use of a span of years larger than what you had graphed. Likewise I couldn’t have been simply referring to a projection — as I had used the words “IPCC claimed to exist on the basis of a larger span of years.” I am not referring to models at this point — but to the actual empirical data. And in this case I refer you to the AR4 WG1 Summary for Policy Makers, pg. 5:

    The updated 100-year linear trend (1906 to 2005) of 0.74°C [0.56°C to 0.92°C] is therefore larger than the corresponding trend for 1901 to 2000 given in the TAR of 0.6°C [0.4°C to 0.8°C]. The linear warming trend over the last 50 years (0.13°C [0.10°C to 0.16°C] per decade) is nearly twice that for the last 100 years.

    They give the warming trend for the past century (1906-2005) as roughly 0.74°C per century and for the past fifty years as roughly 1.3°C per century.

    Section III

    The summary states on page 12:

    Since IPCC’s first report in 1990, assessed projections have suggested global average temperature increases between about 0.15°C and 0.3°C per decade for 1990 to 2005. This can now be compared with observed values of about 0.2°C per decade, strengthening confidence in near-term projections.

    This is a period of fifteen years. And this isn’t the first time that I’ve mentioned fifteen years.

    If you will remember, I stated previously:

    Given the noise that exists in the climate system, anyone who tries to establish the trend in global average temperature with much less than fifteen years data is — in my view — either particularly ignorant of the science, or what is more likely, some sort of flim-flam artist, a bit like the psychic surgeons that James Randi exposes who “remove” tumors without opening up the “patient.” (And in the case of psychic surgery, the patient often dies only a few years later — after the cancer metastasizes.)

    If you are arguing against the IPCC AR4 temperature trends, particularly in March of 2008, you should at least be familiar with their summary for policy makers and the fact that it gives a fifteen year trend of roughly 0.2°C per decade. And fifteen years is the shortest period one should use for calculating the rate of change in global average temperature.

    Here I quote William M. Connolley:

    Pick up the HadCRU temperature series from here. Compute 5, 10 and 15 year trends running along the data since 1970 and get (black lines data, thicker black same but smoothed, thin straight lines non-sig trends; thick straight blue lines sig trends): [graphs]

    From which you can see (I hope) that the series is definitely going up; that 15 year trends are pretty well all sig and all about the same; that about 1/2 the 10 year trends are sig; and that very few of the 5 year trends are sig.
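
    For anyone who wants to try the exercise Connolley describes, here is a minimal sketch in Python (not his actual script, and using a synthetic anomaly series rather than the HadCRU data he links to) which computes running 5-, 10- and 15-year least-squares trends and counts how many are statistically significant; note that the simple p-value test below ignores autocorrelation, so it is only illustrative:

    import numpy as np
    from scipy import stats
    years = np.arange(1970, 2009)
    # placeholder anomalies: a 0.018 C/yr trend plus white noise (not real data)
    anoms = 0.018 * (years - 1970) + np.random.normal(0.0, 0.1, years.size)
    def running_trends(years, anoms, window):
        # OLS slope (deg C per year) and p-value for every window of the given length
        out = []
        for i in range(len(years) - window + 1):
            x, y = years[i:i + window], anoms[i:i + window]
            slope, intercept, r, p, stderr = stats.linregress(x, y)
            out.append((x[0], x[-1], slope, p))
        return out
    for w in (5, 10, 15):
        trends = running_trends(years, anoms, w)
        n_sig = sum(1 for (_, _, s, p) in trends if p < 0.05)
        print(f"{w:2d}-yr windows: {n_sig}/{len(trends)} significant at p < 0.05")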

    Section IV

    Previously I had stated in relation to your trend analysis:

    2001 to 2008 is seven years, less than half of fifteen.

    However, what you were actually going off of were the years 2001 to 2007 — since as of March 2008 the year was less than four months old.

    You had stated back in 2008 on the basis of six years of data:

    I estimate the empirical trend using data, show with a solid purple line. Not[e] it is distinctly negative with a slope of -1.1 C/century.

    IPCC Projections Overpredict Recent Warming.
    10 March, 2008 (07:38) | global climate change Written by: lucia
    http://rankexploits.com/musings/2008/ipcc-projections-overpredict-recent-warming/

  • Timothy Chase // July 4, 2009 at 8:23 am | Reply

    Trends, Part II

    Section I

    Is -1.1°C per century that unusual or unexpected for a period of six years even given a fifteen year trend of 2.0°C per century?

    Using NASA GISS, for the period from 1990 to 2005 I have a linear trend of 2.27°C per century. But for the period from 1980 to 1986 I have a trend of -2.00°C per century — which would appear to be distinctly more negative than your -1.1°C per century. For 1987-1993 I have -2.04°C, and for 1988-1994 I have -2.18°C. But what of the period from 2001 to 2007 — the period that you based your analysis on? I get a linear trend of 0.96°C per century, which is distinctly positive.

    Going to Hadley’s global temperature anomaly, I have a linear trend from 1990 to 2005 of 1.27°C per century. But looking only at the six-year trends, I have 1.43°C per century for 1979-1985, -1.76°C for 1980-1986, -1.25°C for 1987-1993, -0.77°C for 1988-1994, and -0.41°C per century for 1990-1996. For the period from 2001 to 2007 I show a negative six-year trend, but it is only -0.31°C per century. Moreover the previous six-year trend, from 2000 to 2006, was 2.09°C per century — which should have strongly indicated that something was amiss.

    The fact that you managed to wring a trend of -1.1°C per century out of the years 2001-2007 is somewhat impressive. Perhaps it has something to do with your inclusion of UAH or RSS? You were using UAH and RSS TLT as a proxy for lower troposphere temperature, I presume?

    I won’t bother with asking you how exactly you arrived at that figure. But the simple fact that such short-term negative trends are evident earlier in the modern period of global warming should have indicated to you that even a six-year negative trend of -1.1°C per century is nothing out of the ordinary, even when the fifteen-year trend exceeds 2.0°C per century. Suffice it to say, given the internal variability of the climate system, using anything less than fifteen years is a mistake at best. Adding more “points” by including monthlies or other strongly correlated series will do next to nothing to overcome the short-term trend uncertainties that are implicit in that variability.
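
    To make the comparison above easy to reproduce, here is a minimal sketch in Python (with a synthetic annual series standing in for the actual GISS or Hadley anomalies) that computes every six-year least-squares slope in a record and sets them beside a longer-term trend:

    import numpy as np
    def ols_slope(x, y):
        # ordinary least-squares slope, converted to deg C per century
        return 100.0 * np.polyfit(np.asarray(x, float), np.asarray(y, float), 1)[0]
    years = np.arange(1979, 2008)
    # placeholder anomalies: a 0.018 C/yr trend plus white noise (not real data)
    anoms = 0.018 * (years - 1979) + np.random.normal(0.0, 0.1, years.size)
    long_trend = ols_slope(years[-16:], anoms[-16:])       # last fifteen years (16 annual values)
    six_year = [ols_slope(years[i:i + 7], anoms[i:i + 7])  # every six-year (7-point) window
                for i in range(len(years) - 6)]
    print(f"15-yr trend: {long_trend:+.2f} C/century")
    print(f"6-yr slopes range from {min(six_year):+.2f} to {max(six_year):+.2f} C/century")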

    Section II

    Lucia states:

    I could discuss AOGCM’s further, but it’s not particularly meaningful to figuring out when the projections from the TAR were frozen.

    It is irrelevant in any case, as you were arguing against a fifteen-year empirical trend of 2°C per century on the basis of six years of data “showing” a trend of -1.1°C per century. It is irrelevant insofar as such negative short-term trends crop up numerous times in the modern period of global warming. The chart showed TAR, but here you were arguing empirical trends, and if you were familiar with the summary for policy makers (rather than just the chapter 10 that your chart referenced) you would have known better.

    If you were the least bit familiar with this period you would have known about the internal variability. If you had seen a chart displaying the temperature anomalies for more than just the period from 2001 to 2007 you would have known about the internal variability. Simply looking at the period 2000-2006 for Hadley temperatures would have shown you a “six-year trend” of more than 2°C per century — and the period from 1999 to 2005 would have shown you a six-year trend of nearly 3.5°C per century. If you were the least bit familiar with why 1998 stuck out — due to the El Nino — and why starting with 1998 would be an instance of cherry-picking — you would have known better.

    I have to conclude that either you were unbelievably uninformed — or you knew better.

    And looking at your chart I am struck by how you extend the “IPCC Short Term Projection,” as you call it, clear out from 2000-1 to 2020, and your so-called fit beyond that. This creates a misleading impression that the so-called deviation between the “IPCC Short Term Projection” and your “Cochrane Orcutt fit” is far greater than it would be even if one were to grant that your calculations were error-free.

    Five years would be a quarter of twenty, and the six years you were basing your calculations on were only a year beyond that. Extending the “IPCC Short Term Projection” line only as far as your six years would have shown that it was well within the bounds of natural variability — simply given the meandering of the temperature anomaly for the years 1900-2000 shown in the IPCC’s graph that you had modified for your essay.

    Section III

    Lucia states:

    If you want to think otherwise, fine. But the argument has nothing to do with anyone believing AOGCM’s are curve fitting exercises.

    When earlier you stated:

    However, the AOGCM models used and the “simple models” used to create the graph TAR had not even been developed in 1990. The simulations run by relatively new models were ‘projecting’ the past which is generally easier than ‘projecting’ into the future.

    … that most certainly did sound like you view atmosphere ocean general circulation models as instances of curve-fitting rather than as models based upon the principles of physics.

    However, my criticism of your IPCC Projections essay was independent of that argument — as suggested by the fact that I brought that essay up in a separate section of my post that was specifically responding to your claim:

    PS. I haven’t been trying to “establish” any trends in any blog posts.

    … with the words:

    I didn’t say that you had. However, given your denial I decided to check.

    I found the following essay: …

    … and then it is at that point that I specifically mention the trend that the IPCC claims to exist on the basis of a larger span of years — a trend supported not merely by the projection of some simple model over the same years as what you included in your calculation but by empirical data from 1990 to 2005 — for example.

    It is in this context that I stated:

    Would you claim that you weren’t trying to establish what the trend actually and empirically is but merely arguing against the trend that the IPCC claimed to exist on the basis of a larger span of years?

    If the latter, then I must ask: how much validity is there to such an argument against the IPCC’s conclusion if it can in no way support a conclusion about the actual, empirical trend itself? And if it can in fact be used to argue against the IPCC’s conclusion then surely it can support a conclusion about the trend in global average temperature. And if you maintain that it nevertheless cannot, did you make such unsupportable “hairsplitting” clear to those who visit your blog? Or did you simply let them assume what must logically follow from your argument?

    In the context of your essay you write as if the only reason why the IPCC speaks of a trend of 2°C per century is the model simulations that produced the graph in chapter 10. But this in itself is misleading — and even a quick read of the summary at the very beginning of the IPCC AR4 WG1 would have been more than enough to show you as much.

    Section IV

    Now at the beginning of this response I quoted you as stating:

    I am arguing trend the IPCC projected to exist in the AR4 is inconsistent with observations. Others may disagree with my conclusion, but that is my argument. I think most who read the words and figures surrounding the snippet you quoted would notice that I discuss the uncertainty intervals which are large. So, I would never and have never claimed the correct magnitude of the trend had been “established”.

    Think about this: if you are arguing that the trend that the IPCC projected into the future or claims to already exist on the basis of past data is inconsistent with observations, then you must be claiming on the basis of those observations that there are bounds on what the actual trend may be. This is in fact what your uncertainty intervals indicate — that the evidence places probabilistic limitations on what the real trend may be. If you are stating that the trend supported by the IPCC is inconsistent with observations, you are stating that their trend is out of bounds. If you state that the trend they support is too high, this is equivalent to claiming that the evidence implies that the actual trend is lower.

    Qualitatively the fact that you claim the uncertainty intervals are large does nothing to change this. If you claim that the evidence excludes the trend supported by the IPCC as too high, implicit in this claim is the view that the actual trend must be lower. To argue otherwise is the very unsupportable “hairsplitting” I spoke of in the passage from my earlier post that I have just quoted above.

  • Frank O'Dwyer // July 7, 2009 at 3:42 pm | Reply

    Timothy Chase,

    “This is in fact what [Lucia's] uncertainty intervals indicate — that the evidence places probabilistic limitations on what the real trend may be.”

    Doesn’t it? I thought that this was the point of the uncertainty intervals.

    Are you saying that given some short term data then statistical methods alone are not sufficient to even put (albeit very large) bounds on the trend?

    I understand the point about the noise overwhelming the trend in short-term data, but again isn’t this the point of the uncertainty bounds? Shouldn’t those reflect the noise? Such uncertainty intervals must include a lot (including zero) but they must also exclude many values (10C per hour, to take a silly example). But you imply that short term data “can in no way support a conclusion about the actual, empirical trend itself”?

    Or is your point specific to temperature data, i.e. that there are other (physical) reasons to reject statistical measures on short term data?

    (Please note, I’m not disagreeing – just trying to understand this point)

  • Timothy Chase // July 8, 2009 at 3:00 am | Reply

    Towards the end, Frank O’Dwyer states:

    (Please note, I’m not disagreeing – just trying to understand this point.)

    Not a problem.
    *
    Frank O’Dwyer quotes from the middle of the second to last paragraph of Part II, section 4 of “Trends”:

    This is in fact what [Lucia's] uncertainty intervals indicate — that the evidence places probabilistic limitations on what the real trend may be.

    Then he asks:

    Doesn’t it? I thought that this was the point of the uncertainty intervals.

    So it would — if she performed her math right. But at this point I am not arguing whether it is valid to make a claim on the basis of such a short time span (6 years, versus the 15 years the IPCC had used to arrive at its estimate of the trend). I am merely making the logical point that she was in fact claiming to have some knowledge of what the real trend is: one cannot declare the trend as calculated by the IPCC out of bounds without at least implicitly claiming some knowledge of what the trend actually is — namely the probabilistic knowledge that it lies within the uncertainty interval.

    Earlier Lucia had written:

    PS. I haven’t been trying to “establish” any trends in any blog posts.

    … then later she stated:

    I am arguing trend the IPCC projected to exist in the AR4 is inconsistent with observations. Others may disagree with my conclusion, but that is my argument. I think most who read the words and figures surrounding the snippet you quoted would notice that I discuss the uncertainty intervals which are large. So, I would never and have never claimed the correct magnitude of the trend had been “established”.

    … but implicit in her argument was the claim that she had established the correct magnitude of the trend within the bounds of uncertainty that she claimed as the result of her argument. And at that point that was all I intended to prove. In this sense my argument was more a matter of logic than of statistics. Her position was incoherent.
    *
    Frank O’Dwyer wrote:

    Are you saying that given some short term data then statistical methods alone are not sufficient to even put (albeit very large) bounds on the trend?

    Simply as a matter of logic I can place limits on the trend: it is a real number existing somewhere between negative infinity and positive infinity. Obviously it is possible to place some sort of limits on the trend based upon a smaller sample than fifteen years — but there is the question of how meaningful it is to do so and how wide those limits have to be in order to account for the natural variability — including climate oscillations such as El Nino which I mentioned in my post.

    There is the record of variability that we have seen throughout the modern period of global warming. There have been numerous times at which we have seen “negative six-year trends” that were more extreme (more negative) than the -1.1°C per century that she claimed to have discovered. However it would have been a mistake to claim, as she did, that such “negative six-year trends,” which were within the range of natural variability, somehow invalidated a trend based upon a much larger number of years.

    Assuming all we had to go on was statistics (as opposed to detailed knowledge of the absorption spectra of CO2 and how they result from states of molecular excitation described in detail by quantum mechanics and so forth), one year would be sufficient to “statistically invalidate” even the claim that the world is still warming, but that one year would have to exist well outside the bounds of variability that exist along the trendline — where properly the bounds of variability would be based upon at least fifteen years of data.

    To see how, check out the following which actually uses a little over thirty years of data:

    You Bet!
    January 31, 2008
    http://tamino.wordpress.com/2008/01/31/you-bet/

    … particularly the second chart and those which come after it:

    http://tamino.files.wordpress.com/2008/01/bet1.jpg

    … and note that the choice of NASA GISS, HadCru or NCDC will have very little to do with it — as shown in the first chart:

    http://tamino.files.wordpress.com/2008/01/trend2.jpg

    I will let you read the post to see how one year would be sufficient.

    However, plotting a linear least squares fit trendline from 1975 to 2007, and upper and lower bounds paralleling this trendline but two standard deviations out from it as a range of uncertainty, one finds that every year has lain within those two standard deviations — which would seem to suggest that the trend has been more or less constant for that entire period. Moreover, when one plots 2007 one finds that it lies almost right on top of the trendline itself, dead center of the range of uncertainty. One also finds that three out of the last six years were no more than a quarter of a standard deviation below the linear trendline and the other three were 1/2 to 1 standard deviations above it.

    Given this do you think that there is any validity to the claim that those six years demonstrated that the linear trendline is an overestimate of the actual trend? That the likelihood of the rate of warming being as much as or above half that linear trendline is less than 2.5%? That it is more than likely negative rather than positive?

    Such was the nature of what Lucia was claiming with her “95% confidence interval” and estimate of the “empirical trend.”
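
    For anyone wanting to reproduce that kind of check, here is a minimal sketch in Python (not Tamino’s actual code, and with placeholder anomalies in place of the actual GISS, HadCru or NCDC series): fit a least-squares line to the 1975-2007 annual values, then see whether each year lies within two standard deviations of the residual scatter about that line.

    import numpy as np
    years = np.arange(1975, 2008, dtype=float)
    # placeholder anomalies: a 0.018 C/yr trend plus white noise (not real data)
    anoms = 0.018 * (years - 1975) + np.random.normal(0.0, 0.1, years.size)
    slope, intercept = np.polyfit(years, anoms, 1)
    fitted = slope * years + intercept
    sigma = np.std(anoms - fitted, ddof=2)      # residual standard deviation about the fit
    inside = np.abs(anoms - fitted) <= 2.0 * sigma
    print(f"trend: {100 * slope:+.2f} C/century, residual sigma: {sigma:.3f} C")
    print(f"{inside.sum()}/{inside.size} years lie within +/- 2 sigma of the trend line")
    print("last six years, in sigmas from the line:",
          np.round((anoms[-6:] - fitted[-6:]) / sigma, 2))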

  • george // July 10, 2009 at 5:16 am | Reply

    It seems to me that since there is always uncertainty associated with any result (obtained with linear regression or any other method), one can never “establish” what the “true” trend is (or even if there indeed is such a trend).

    One can only really attempt to determine with some confidence (“establish” is still too strong a word) what the trend is NOT.

    And to do the latter, one has to specify error bars.

    Unfortunately, as tamino has pointed out many times, over the short term, the actual noise model that one assumes can have a significant effect on what those error bars turn out to be.

    Under one noise model assumption (including assumptions about autocorrelation), certain values of the trend may be “ruled out” (at some confidence level, say 95%), while under another noise model assumption, the very same values may still be included (and at the very same confidence level!)

    If that is not bad enough, for the short term, slight changes in the length of (or even a shift in) the time period under consideration, as well as a change in the data set(s), can also impact the inclusion/exclusion of certain trend values (again, at the same confidence level).

    So, my question is this:

    What does it mean that one excludes certain trend values (eg, 0.2C/decade) at “95% confidence level” using one set of assumptions (and possibly data) and does not exclude them using another set?

    Tamino’s method of specifying the trend over a relatively long (climatologically meaningful) period (eg, from 1975-2000) and then seeing if recent temperatures fall within 2 sigma of the extended trend line would seem to be a better method for determining whether recent temperatures are consistent with (or possibly beginning to deviate from) the long term trend (0.18C/decade, or “about 0.2C/decade”, over the period 1975-2000). It is less dependent on the issues mentioned above.
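
    To illustrate how much the assumed noise model matters, here is a minimal sketch in Python (a synthetic seven-year monthly series, not any actual data set) comparing the naive white-noise confidence interval for an OLS slope with a crude AR(1) adjustment via an effective sample size; the same slope can exclude a given trend value under one assumption and not under the other:

    import numpy as np
    from scipy import stats
    months = np.arange(84)                                 # seven years of monthlies
    # placeholder anomalies: a small trend plus white noise (not real data)
    anoms = 0.0015 * months + np.random.normal(0.0, 0.12, months.size)
    slope, intercept, r, p, se_white = stats.linregress(months, anoms)
    resid = anoms - (slope * months + intercept)
    rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]         # lag-1 autocorrelation of residuals
    n_eff = len(resid) * (1 - rho) / (1 + rho)             # effective sample size
    se_ar1 = se_white * np.sqrt(len(resid) / max(n_eff, 2.0))  # crudely inflated standard error
    for label, se in (("white noise    ", se_white), ("AR(1) adjusted ", se_ar1)):
        lo, hi = slope - 1.96 * se, slope + 1.96 * se
        print(f"{label}: 95% CI {120 * lo:+.2f} to {120 * hi:+.2f} C/decade")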

  • Frank O'Dwyer // July 10, 2009 at 1:46 pm | Reply

    Timothy Chase,

    Thank you for the very detailed response – makes sense.

    I have a slight quibble re this:
    … but implicit in her argument was the claim that she had established the the correct magnitude of the trend within the bounds of uncertainty that she claimed as the result of her argument.

    I agree – but in fairness to her I think this is actually the substance of the argument she is trying to make. I don’t think she makes anything of the central trend she comes up with being negative or otherwise, I believe her point is that the error bars she gets exclude 0.2C/decade.

    This is not to say that I agree with her point (I don’t, and I think her presentation of it is highly misleading) but I would think a sympathetic reading of her ‘not trying to “establish” any trends’ to mean she is not trying to establish any particular alternative number to 0.2. Her claim is just that 0.2C/decade is inconsistent with observations since 2001 or so, because (according to her math) it lies outside the 95% CI of the trend in the observations.

    As I say, I don’t think this conclusion is right. Still, as an intellectual exercise it doesn’t seem unreasonable to work out trend and error bars on the data on observations made after a prediction was made. But to me her exercise is like asking the question ‘if this information – obs since 2001 – was all we had, what would we say the trend was and with what certainty?’. [Of course, that premise is counterfactual - it's not all the information we have - we have more obs and also some physical understanding underlying the obs].

    So for me Lucia’s stuff doesn’t raise questions about the trend, but it does make me wonder about the statistical tool being used. It is surprising to me at least that the error bars *do not* include 0.2C. Shouldn’t they more than likely do so, assuming the math is done correctly? Or is this really a case of there is a 1 in 20 chance that the trend lies outside the CI, and it actually does?

  • Timothy Chase // July 10, 2009 at 3:35 pm | Reply

    george wrote:

    It seems to me that since there is always uncertainty associated with any result (obtained with linear regression or any other method), one can never “establish” what the “true” trend is (or even if there indeed is such a trend).

    One can only really attempt to determine with some confidence (“establish” is still too strong a word) what the trend is NOT.

    George, I would prefer to leave statistical problems to someone with a background in statistics, e.g., Tamino, but obviously not Lucia, since as I have subsequently discovered her arguments have been thoroughly picked clean by others until nothing was left except bone.

    However, in matters of degree there are almost inevitably error bars. Think about it. How tall are you? What is your weight? Where did you park your car?

    Can you give an “exact” answer to any of these questions?

    Obviously NOT. Can you only say what these things are NOT? Well I should think the answer to this question as well is obviously NOT.

    Even in terms of language (although at times language itself is a poor guide) when we state that a thing is between such-and-such a value (“He is between five foot nine inches and five foot ten inches”) there is a statement about what a thing IS. We have bounded it on both sides. And I would argue that even if we bounded it on one side but not the other, we could still make statements about what a thing IS, such as how tall a given person IS.

    Anyway, I would like to address some other points in your comment, but I am a little short on time at the moment — about to run out the door, actually — so I will have to wait. I have an appointment at 9 AM, and it will take a little bit to get there. But do you agree with me so far, and if not, why not?

  • george // July 10, 2009 at 7:17 pm | Reply

    Timothy Chase: “in matters of degree there are almost inevitably error bars. ”

    I may not have been clear, but that is precisely one of the points I was trying to make.

    Practically speaking, it’s impossible to “establish” anything with absolute certainty.

    For better or worse, I think this case really comes down to how one interprets the word “establish”.

    I read that as essentially “nailing it down” (if not exactly, at least within some fairly narrow error bands about a central value)

    I question Lucia’s claims about having “falsified IPCC at the 95% level” for some of the reasons that I mentioned above, but I would nonetheless agree in general that trying to “exclude” or “reject” certain trend values at some confidence level (95%) is not the same as trying to “establish” the “actual” trend.

    For the case at hand, you can have fairly broad error bands about some central trend value and STILL effectively exclude/reject some trend values (eg, 5C/decade) with pretty high confidence.

    In fact, there are lots (effectively an infinite number) of trend values (eg > 5C/decade and < -5C/decade) that can essentially be "ruled out" in this case (ie, with high statistical confidence).

    Saying what the trend is not is actually different and easier, in some regards, than “establishing” the trend (ie, saying what the trend “actually” is within some narrow range).

    The real problem I have accepting Lucia’s claims about having effectively rejected 0.2C/decade at the 95% confidence level is that acceptance/rejection actually depends on which noise model you use, and on the data set as well.

    The problem as I see it is that the value 0.2C/decade is “close” to the central value +/- error bands, or at least close enough to make the rejection/acceptance questionable, given the issues surrounding the noise over the short term (and the noise model assumption).

    That was actually the source of my question above:

    What does it mean that one excludes certain trend values (eg, 0.2C/decade) at “95% confidence level” using one set of assumptions (and possibly data) and does not exclude them using another set?

    If she were talking about rejecting 5C/decade at 95% or better, I'd have little problem accepting that claim.

  • Lazar // July 10, 2009 at 8:08 pm | Reply

    frank o’dwyer,

    It is surprising to me at least that the error bars *do not* include 0.2C. Shouldn’t they more than likely do so, assuming the math is done correctly?

    … probably when using short periods the error model mischaracterizes / does not fully capture low frequency variability… enso, solar variation… so if short periods must be used, probably something like this is good… although that doesn’t include solar variance…
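
    To put numbers on that, here is a minimal sketch in Python (illustrative parameters only, no actual temperature data): generate short series with a known trend plus AR(1) noise, compute the naive white-noise 95% interval for each, and count how often the true trend is missed; it comes out well above the nominal 5%.

    import numpy as np
    from scipy import stats
    rng = np.random.default_rng(0)
    true_slope = 0.0017            # deg C per month, i.e. roughly 0.2 C/decade (assumed)
    rho, scale = 0.6, 0.1          # assumed AR(1) autocorrelation and innovation size
    n_months, n_trials, misses = 72, 2000, 0
    for _ in range(n_trials):
        noise = np.zeros(n_months)
        for t in range(1, n_months):               # red (AR(1)) noise
            noise[t] = rho * noise[t - 1] + rng.normal(0.0, scale)
        y = true_slope * np.arange(n_months) + noise
        slope, _, _, _, se = stats.linregress(np.arange(n_months), y)
        if abs(slope - true_slope) > 1.96 * se:    # naive white-noise 95% interval
            misses += 1
    print(f"true trend missed in {100 * misses / n_trials:.1f}% of trials (nominal: 5%)")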

  • Timothy Chase // July 11, 2009 at 12:13 am | Reply

    Frank O’Dwyer wrote:

    Thank you for the very detailed response – makes sense.

    Thank you.
    *
    Frank O’Dwyer wrote:

    I have a slight quibble re this:

    … and then quotes where I wrote:

    … but implicit in her argument was the claim that she had established the the correct magnitude of the trend within the bounds of uncertainty that she claimed as the result of her argument

    Frank O’Dwyer wrote:

    I agree – but in fairness to her I think this is actually the substance of the argument she is trying to make.

    Was it? She claimed that she had not established any trends.

    lucia wrote:

    PS. I haven’t been trying to “establish” any trends in any blog posts.

    Later on she “corrects” herself (if you wish to call it that) and states:

    I am arguing trend the IPCC projected to exist in the AR4 is inconsistent with observations. Others may disagree with my conclusion, but that is my argument. I think most who read the words and figures surrounding the snippet you quoted would notice that I discuss the uncertainty intervals which are large. So, I would never and have never claimed the correct magnitude of the trend had been “established”.

    So there she states that she has, “…never claimed the correct magnitude of the trend had been ‘established’.” Likewise she emphasizes that the uncertainty intervals “are large.” But would the IPCC argue that the “correct magnitude” had been “established”?

    What does “correct magnitude” mean in this context? The “exact magnitude”? What does “established” mean in this context? “Beyond any reasonable doubt”?

    Is it the “largeness” of her uncertainty interval which differentiates her claim from that of the IPCC? How large does the uncertainty interval have to be before one is no longer “establishing” the “correct magnitude” of the “trend”?

    Her objective isn’t to identify the trend. Her objective is to deny that the IPCC has. She isn’t interested in identification but denial.
    *
    Frank O’Dwyer wrote:

    I don’t think she makes anything of the central trend she comes up with being negative or otherwise, I believe her point is that the error bars she gets exclude 0.2C/decade.

    Doesn’t she?

    Looking at her chart and using a straight edge, I put the lower bound at -0.33°C per decade. That is from 3.5°C at 1900 to -0.5°C in 2020. Looking at the upper bound I have 0.067°C per decade. That is from 0.2°C in 2000 to 0.87°C in 2100. The lower bound is -0.33°C per decade, the upper bound is 0.067°C, one fifth the absolute magnitude of the lower bound, and in her own words:

    I estimate the empirical trend using data, show with a solid purple line. Not[e] it is distinctly negative with a slope of -1.1 C/century.

    (emphasis added)

    It would appear that she is making something of the central trend being negative. But as far as I can see, only as a means of denying that the IPCC has identified what the trend is, not as a means of identifying it herself.

    But perhaps this is what you mean by “…, I believe her point is that the error bars she gets exclude 0.2C/decade.”
    *
    Frank O’Dwyer wrote:

    This is not to say that I agree with her point (I don’t, and I think her presentation of it is highly misleading) but I would think a sympathetic reading of her ‘not trying to “establish” any trends’ to mean she is not trying to establish any particular alternative number to 0.2.

    “Highly misleading”? But was it deliberately, highly misleading, or to be more concise, “dishonest,” and if so, does she deserve such “sympathy”? I myself have never really been inclined towards “sympathetic” or “charitable” interpretations. I believe in what I think of as “rational interpretations.” You try to identify errors and their causes, whether they be due to honest mistakes, ignorance, or something less benign.

    I have every reason to believe that she knew that the IPCC didn’t simply have a toy model projecting a trend of 0.2°C per decade into the future but decades of evidence supporting that figure. Likewise I believe she knows about climate oscillations and the effect that either an El Nino or La Nina will have upon the climate system.

    As for her “… not trying to establish any particular alternative number to 0.2,” so long as it is considerably lower, I most certainly agree, but I view this as much less benign.
    *
    Frank O’Dwyer wrote:

    But to me her exercise is like asking the question ‘if this information – obs since 2001 – was all we had, what would we say the trend was and with what certainty?’. [Of course, that premise is counterfactual - it's not all the information we have - we have more obs and also some physical understanding underlying the obs].

    Did she say that this was merely an intellectual exercise, and that as part of her “intellectual exercise” we were going to pretend that the only evidence we had was the last few years? Did she actually say that no one should take any of this seriously?

    I am sorry, but at this point you are whitewashing her intellectual dishonesty. Is this what you mean by sympathetic interpretation?

    Frank O’Dwyer wrote:

    It is surprising to me at least that the error bars *do not* include 0.2C. Shouldn’t they more than likely do so, assuming the math is done correctly? Or is this really a case of there is a 1 in 20 chance that the trend lies outside the CI, and it actually does?

    Remember, according to her the 95% confidence range is roughly from -0.33°C per decade to 0.067°C and centered on -0.11°C per decade. As such there would be only a 2.5% chance of the trend being above roughly 0.067°C, which, assuming a normal distribution, puts that bound about two standard deviations above the center. On that basis 0.2°C per decade would be roughly three standard deviations out from the center, where the probability has already dropped to about 1 in a 1000.
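
    The arithmetic can be checked in a few lines (a sketch that simply takes the chart-derived interval above at face value and assumes a normal sampling distribution):

    from scipy.stats import norm
    lower, upper, center = -0.33, 0.067, -0.11   # deg C per decade, as read off the chart
    sigma = (upper - lower) / (2 * 1.96)         # half-width of a 95% interval is 1.96 sigma
    z = (0.2 - center) / sigma
    print(f"sigma ~ {sigma:.3f} C/decade; 0.2 C/decade sits {z:.1f} sigma above the center")
    print(f"implied chance of a trend that high or higher: {norm.sf(z):.4f}")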

    As I pointed out earlier, I would strongly recommend:

    You Bet!
    January 31, 2008
    http://tamino.wordpress.com/2008/01/31/you-bet/

    … and as I commented:

    … plotting a linear least squares fit trendline from 1975 to 2007, and upper and lower bounds paralleling this trendline but two standard deviations out from it as a range of uncertainty, one finds that every year has lain within those two standard deviations — which would seem to suggest that the trend has been more or less constant for that entire period. Moreover, when one plots 2007 one finds that it lies almost right on top of the trendline itself, dead center of the range of uncertainty. One also finds that three out of the last six years were no more than a quarter of a standard deviation below the linear trendline and the other three were 1/2 to 1 standard deviations above it.

    There is no reason to think that the trend has changed over this period. At this point we have no reason to think that the warming trend has slowed, reversed itself, or for that matter accelerated. However, physics suggests that in the long-run it will accelerate if we choose to continue with business as usual.

  • Frank O'Dwyer // July 11, 2009 at 9:38 am | Reply

    Timothy Chase,

    “How large does the uncertainty interval have to be before one is no longer “establishing” the “correct magnitude” of the “trend”?”

    I think it is more fuzzy than that and in fact to answer the ‘how large’ question requires knowing the intended application of the result. For example if the CI included 0 that would be very different than if it didn’t – it would mean failure to establish (with that data) whether it was warming or cooling. That CI could be quite narrow and failure to exclude 0 would still be a problem.

    The CI could also be very large, and still successfully excluding 0 could be useful information. For example the Lancet 1 survey of excess deaths in Iraq had a huge CI and didn’t ‘establish’ the number of deaths with any precision, but it clearly excluded 0 meaning things got worse.

    You could also say that strictly speaking nothing is ever ‘established’ as there are always error bars, but there is still a difference between a WAG and measuring something to a number of significant digits. I think this kind of difference is what people are generally getting at when colloquially using words like ‘established’ and ‘accurate’.

    “I myself have never really been inclined towards “sympathetic” or “charitable” interpretations. I believe in what I think of as “rational interpretations.” You try to identify errors and their causes, whether they be due honest mistakes, ignorance, or something less benign.”

    You’re reading too much into what I wrote there. I am just referring to that one statement – I don’t believe she meant that statement to be construed as (for example) ‘the world is cooling and here’s the evidence’. I don’t think she even believes or claims the world is cooling – it’s a subtler form of denialism than that.

    I don’t know why you would take that to mean I’m whitewashing it. There’s plenty of stuff to criticise on that blog, and there’s a reason denialists like it. I just don’t think that particular statement is the best one to pick. It is close to demolishing a strawman argument – but there’s no need as her real argument is weak enough.

    “Did she say that this was merely an intellectual excerise and that as part of her “intellectual exercise” we were going to pretend that the only evidence we had was merely going to be the last few years? Did she actually say that no one should take any of this seriously?”

    Actually some of the detailed content of the posts comes very close to that. (Or at least they did when she started; I stopped reading it early on, in 2008 or so.) But caveats like that hardly matter under screaming headlines whose purpose seems to be little more than to put IPCC and ‘false’/‘falsified’ in the same sentence.

  • Frank O'Dwyer // July 11, 2009 at 9:47 am | Reply

    Timothy Chase,

    “Remember, according to her the 95% confidence range is roughly from -0.33°C per decade to 0.067°C and centered on -0.11°C per decade. As such there would be only a 2.5% chance of the trend being above roughly 0.067°C, which, assuming a normal distribution, puts that bound about two standard deviations above the center. On that basis 0.2°C per decade would be roughly three standard deviations out from the center, where the probability has already dropped to about 1 in a 1000.”

    Forgot to comment on this part. This must be a more recent claim as I hadn’t seen this version – I only saw this stuff when it first appeared and stopped following it shortly afterwards. At that time I don’t recall any result that would have made cooling appear more likely than warming. This does sound like a completely nonsensical claim – it is also interesting that with additional data the estimate gets crazier!

    It might be interesting to do an animation of the month by month estimates from there as they must have swung all over the place – and presumably will now start swinging the other direction.

  • Barton Paul Levenson // July 11, 2009 at 11:12 am | Reply

    A note about recent temperature trends:

    http://BartonPaulLevenson.com/VV.html

    • KenM // July 12, 2009 at 4:57 pm | Reply

      BPL – First of all, my thanks for putting up these pages. I find them very straightforward and easy to understand.
      Now I have a question. I’ve read that the satellite readings should outpace the surface readings by 1.5 times. Looking at a 30-year comparison between UAH and GISS however, it appears that GISS is increasing faster than UAH! I know that 15 years ago or so a similar discrepancy was noted, and in the effort to figure out why, it was discovered that the satellite measurements were slightly off. That has since been rectified.
      If my math is correct (it probably is not :)), UAH should be recording a temperature increase of 0.024 degrees per year, rather than the 0.012 we are currently seeing.
      Are there any current theories as to why? Or perhaps am I misunderstanding something simple here? The difference between what’s predicted and what’s been observed appears to be huge…

      [Response: First of all, the "1.5 times" isn't for the globe as a whole, it's for the *tropics*, and the figure I've read is 1.4 (although it's still considerably bigger than 1).

      Second, the UAH data are considered highly unreliable by many (including myself). I have good reason to suspect that the RSS satellite record is much more reliable.

      Third, if you look at the trends for the *tropics* from 1979 to 2008, GISS gives 0.0104 deg.C/yr and RSS gives 0.0146 deg.C/yr, a figure 1.4 times higher. But UAH gives only 0.0055 deg.C/yr, a little bit more than half that of GISS. If we take the RSS record as more reliable (which I do), then the tropospheric trend in the tropics is just about what's expected, compared to the GISS trend in the tropics.

      It should also be noted that the uncertainties in the trends are quite large, especially when restricting to the tropics (because the scatter is larger for tropical data than for global).]

      • KenM // July 12, 2009 at 11:52 pm

        Right you are, tamino. I had skimmed the doc for the number and missed that 1.5 was referring to the tropics. For the globe, that link says:

        Globally, the troposphere should warm about 1.2 times more than the surface

        Globally over 30 years, the GISS temperature increase is 0.0159°C per year. This means RSS (or UAH) should show a 0.019°C per year increase, but we see 0.0152 for RSS and 0.012 for UAH.

        [Response: Indeed. But bear in mind that the estimated trends have associated errors, so the difference between RSS and GISS doesn't really contradict the purported tropospheric warming rate.

        We should also bear in mind that RSS (and UAH) are not direct measures of the temperature under consideration (mid-troposphere): the lower-troposphere estimates (what you usually see plotted) aren't direct measurements but estimates inferred from other channels, and the actual measurements (for different channels) are for very thick slabs of the atmosphere, all of them contaminated by stratospheric cooling. That's one of the reasons for other reductions of satellite MSU data (by Fu et al. and by Vinnikov & Grody), which show considerably more warming than either RSS or UAH.

        All in all, I'd say the rate of tropospheric warming can't be constrained by observations sufficiently well to test the theoretical result of enhanced mid-troposphere warming.]
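
        If it helps, the amplification arithmetic in this exchange can be laid out explicitly (a sketch using the trend values quoted above and the 1.2 global amplification figure from the linked page; the large uncertainties Tamino mentions are not represented here):

        surface_trend = 0.0159                      # GISS, deg C per year over ~30 years (as quoted)
        amplification = 1.2                         # expected global troposphere/surface ratio
        satellite = {"RSS": 0.0152, "UAH": 0.012}   # deg C per year, as quoted above
        expected = amplification * surface_trend
        print(f"expected tropospheric trend: {expected:.4f} C/yr")
        for name, trend in satellite.items():
            print(f"{name}: observed {trend:.4f} C/yr, ratio to surface {trend / surface_trend:.2f}")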

  • Timothy Chase // July 11, 2009 at 4:20 pm | Reply

    Frank O’Dwyer wrote:

    Forgot to comment on this part. This must be a more recent claim as I hadn’t seen this version – I only saw this stuff when it first appeared and stopped following it shortly afterwards. At that time I don’t recall any result that would have made cooling appear more likely than warming. This does sound like a completely nonsensical claim – it is also interesting that with additional data the estimate gets crazier!

    It might be interesting to do an animation of the month by month estimates from there as they must have swung all over the place – and presumably will now start swinging the other direction.

    Well, you aren’t the only one who failed to notice things. Much farther down in the piece that I had been referencing she makes some qualifications, backtracks, and in essence says she doesn’t really mean it and makes somewhat less outrageous claims. But as far as I can tell, this is her gimmick.

    Frank O’Dwyer wrote in the previous post (that I have now just seen):

    Actually some of the detailed content of the posts comes very close to that. (Or at least they did when she started; I stopped reading it early on, in 2008 or so.) But caveats like that hardly matter under screaming headlines whose purpose seems to be little more than to put IPCC and ‘false’/‘falsified’ in the same sentence.

    Exactly!

    Anyway, thank you for the discussion and pointing out where I had missed some things. I will check out your previous post a little more carefully and see whether I have anything intelligent to say, but I wanted to get this out sooner rather than later.

    PS Is she a one-trick pony?

  • Timothy Chase // July 11, 2009 at 4:30 pm | Reply

    PS

    By “gimmick,” I mean a trick that she uses repeatedly over time. This isn’t quite the same as being a one-trick pony in that the latter would suggest that it is her only big gimmick. In any case, she is dishonest, but more along the lines of a Pielke than a Watts. But ethically I put these two at roughly the same level so in that sense there isn’t much of a difference.

  • george // July 11, 2009 at 7:21 pm | Reply

    If I have learned anything at all about statistics over the years it is that if you get a different answer regarding a hypothesis (eg that 0.2C/decade lies outside the 95% confidence interval for global temperature development over the past 8 years) using different statistical tests, different assumptions about noise, different data sets, slightly different time period, etc, it’s a warning:

    “Danger! Sharp rocks below the water surface. You may want to think twice before you commit to jumping off”

    In other words, be very careful not to draw conclusions that are only marginally supported (at best): “Hmm, it’s probably safe if I jump over there since I don’t see any rocks”.

    Actually, where I first learned about this was in a college biology course for which I did basic experiments every week. For each experiment, I was expected to formulate a null hypothesis, collect data, do a statistical analysis of the results and then discuss what I had found (basically, use the scientific method).

    I will never forget the first time I used statistics because I did an analysis using two different tests (at the suggestion of the TA) and one test rejected the null hypothesis while the other did not.

    It being my first exposure to statistics and all, at first, I was shocked. But then, after a few minutes, it naturally occurred to me that the “best” thing to do was select the test that allowed me to show what I wanted to show. So I did.

    Just kidding. Actually, I asked my TA what to do because I was honestly quite puzzled by the whole affair. Naive fool that I was back then (some would say still am), I thought statistics were like an “8 ball” that always gave you the “correct” answer. (That was before I learned about lies, damned lies and the lying bastar…I mean statisticians who tell them. )

    So, anyway, I asked my TA and he said “select the test that allows you to show what you want to show.”

    Just kidding.

    What he actually said was “Watch what you conclude, especially for this experiment, since I’m the one grading it!”

    The real problem was that I did not have enough data. I was pretty much at the bare minimum of what was needed for either statistical test. The fact that I got different answers was likely a direct result of that. That’s what I said in my discussion … and my TA gave me an F.

    (I think it was actually a B)

    Though it was probably one of the best courses I had in college (taught by homing pigeon navigation expert William Keeton), I must admit that (30 years later) I have forgotten most of what I learned.

    But I will never forget the statistics lesson.

    The lesson applied to short term temperature trends is obvious, to me at least:

    Use extreme caution when drawing conclusions/making claims about the temperature trend over the past decade or so — even about “what the trend is probably not” — especially when different assumptions (about noise, for example) and/or different data sets and/or slightly different time periods give you different results.

  • Timothy Chase // July 11, 2009 at 8:22 pm | Reply

    Frank O’Dwyer quotes me as stating:

    How large does the uncertainty interval have to be before one is no longer “establishing” the “correct magnitude” of the “trend”?

    … then begins his second-to-last post, which I found well thought out, detailed, and well said:

    I think it is more fuzzy than that and in fact to answer the ‘how large’ question requires knowing the intended application of the result. For example if the CI included 0 that would be very different than if it didn’t – it would mean failure to establish (with that data) whether it was warming or cooling. That CI could be quite narrow and failure to exclude 0 would still be a problem.

    Can’t really find a single point that I would disagree with here. Not that it should be the primary focus of the discussion in any case.

    Incidentally, yes, her denialism is more subtle than for example that of Watts or your say a ditto-head, and this is part of what I mean by stating that she is more like Pielke than Watts. But part of the reason why I think of the two being ethically similar is that given what Pielke would presumably know as the result of his greater degree of understanding, he should know better.

    I would presume that the reason why he is more subtle and less provocative, play-acting the role of a man-in-the-middle whose objectivity consists of being able to see the truth in both sides (when truth is almost entirely on one side, not the other) is simply a matter of his “more delicate” sensibilities. Nevertheless, both profit from being in the spotlight, although I believe that the primary incentive that each pursues has more to do with the ego than with any financial considerations.

    As for who will be responsible for more devastation, I would say that is anyone’s guess. And I suspect it always will be.

  • Timothy Chase // July 11, 2009 at 8:28 pm | Reply

    PS Two quick corrections.

    First, where I said, ” your say a ditto-head” in my second paragraph I should have said either “your typical ditto-head” or “say a ditto-head” and preferably the former rather than the latter. Second, at the end of that paragraph said, “he should know better” when I meant to say “he should still know better.”

    My apologies.

  • Timothy Chase // July 11, 2009 at 9:53 pm | Reply

    george wrote:

    In other words, be very careful not to draw conclusions that are only marginally supported (at best): “Hmm, it’s probably safe if I jump over there since I don’t see any rocks”.

    Agreed — and I like the analogy, particularly since it stresses the possible consequences.

    As for being careful, particularly in the context of different tests where the null hypothesis passes one but is rejected by the other, this is what Tamino would mean by the results not being “robust.” Given my background in philosophy I might refer to it as a borderline problem, but likewise stress the need for more evidence.

    Actually, where I first learned about this was in a college biology course for which I did basic experiments every week. For each experiment, I was expected to formulate a null hypothesis, collect data, do a statistical analysis of the results and then discuss what I had found (basically, use the scientific method).

    Unfortunately I have never taken a course in statistics or been formally exposed to it in school. What little I have learned has almost entirely been in an informal context, but I still try to be useful and learn where I can.

    I remember at one point looking at how probability density calculus deals with mutually exclusive alternatives and then exploring the possibility of a continuous probability calculus where rather than mutually exclusive alternatives one is dealing with mutually independent alternatives — and discovered that I had more or less found a different way of talking about the Poisson distribution. But not particularly useful. Likewise I remember digging into quantum mechanics and discovering that the probability density operator could be viewed as continuous complex number truth values between arrays of statements, and that the mathematics behind quantum mechanics could largely be viewed as the translation of one set of statements and their truth values (e.g., description in terms of position space) into another set of statements and their truth values (e.g., description in terms of momentum space). A disjointed and scattered self-education.

    Likewise I was digging into pseudo-Riemannian geometry and general relativity before I had actually taken a course in Euclidean geometry. Jumping ahead, but not with much follow-through. My interests are far-ranging, but my science background is actually quite limited. And while I taught myself calculus, it would take a while for me to get it back and without a great deal of work I doubt that I would ever be as good as I once was — which was good, but not good enough to actually teach a first year course in it.

    george wrote:

    It being my first exposure to statistics and all, at first, I was shocked. But then, after a few minutes, it naturally occurred to me that the “best” thing to do was select the test that allowed me to show what I wanted to show. So I did.

    Just kidding. Actually, I asked my TA what to do because I was honestly quite puzzled by the whole affair. Naive fool that I was back then (some would say still am), I thought statistics were like an “8 ball” that always gave you the “correct” answer. (That was before I learned about lies, damned lies and the lying bastar…I mean statisticians who tell them. )

    There have been times when Tamino has referred to it as an “art.” I suspect that at a fundamental level it is that way with all disciplines.

    With regard to motivation, I believe that one’s primary, most fundamental motive should always be understanding. In this sense identification should always (at least logically) precede evaluation. And it is out of what one discovers regarding the world that one’s other motives should arise.

    Likewise, in discourse one should begin with the assumption that this is what motivates others — and that their arguments are likewise an attempt at identification — until sufficient evidence accumulates to the contrary. Of course there are those who argue that one may never know what motivates another. But this is incoherent insofar as the employment of language itself tacitly assumes that we are capable of knowing what motivates one another, that is, what ideas they wish to communicate, what they wish to get across.

  • Dave A // July 11, 2009 at 10:00 pm | Reply

    Timothy Chase,

    I’ve finally managed to track down a link to the ‘peer review’ of the UK Met Office’s UKCP09 25km gridded squares

    http://ukclimateprojections.defra.gov.uk/images/stories/Other_images/UKCP09_Review.pdf

    But guess what? The review document is very brief, although if you read between the lines it is lukewarm on what the Met Office has done. It is also brief because the detailed points raised in the ‘Annexe’ to this report are not available online.

  • Timothy Chase // July 11, 2009 at 11:36 pm | Reply

    Dave A wrote:

    I’ve finally managed to track down a link to the ‘peer review’ of the UK Met Office’s UKCP09 25km gridded squares

    Thank you for sharing it with us. I am not particularly happy with the results I have seen so far. For one thing they are projecting roughly 70 years into the future, but the sweet spot of projection is supposed to be in the neighborhood of 30-40 years into the future, where the consequences of differences in emission scenarios are negligible.

    But I suppose that part of what they are trying to do is show the difference between emission scenarios in order to motivate people to lower emissions. A bit at odds with preparing for the costs associated with climate change — assuming one has to choose between projections thirty years or seventy years into the future rather than performing both sets.

    You are of course right that their reception of the new approach is lukewarm. I see terms like “moderate confidence” and mention of the desire to present the results of a more traditional approach alongside the new, as well as the importance of clearly stating the assumptions behind the new approach.

    At the same time, they also state:

    The focus on UK-scale climate change information should not obscure the fact that the skill of the global climate model is of over-whelming importance. Errors in it, such as the limited current ability to represent European blocking, cannot be compensated by any downscaling or statistical procedures, however complex, and will be reflected in uncertainties on all scales.

    … where “downscaling” and “statistical procedures” are currently the less desirable alternatives we have to increasing the resolution of the global climate model. This would seem to suggest that they believe this approach should continue to be pursued even if they are somewhat skeptical of the results so far.
