Open Mind

Key Messages

June 22, 2009 · 57 Comments

The synthesis report from the Copenhagen conference on climate change gives a dire warning of the consequences of inaction on global warming. It emphasizes six key messages, each with its own chapter:


KEY MESSAGE 1: CLIMATIC TRENDS
Recent observations show that greenhouse gas emissions and many aspects of the climate are changing near the upper boundary of the IPCC range of projections. Many key climate indicators are already moving beyond the patterns of natural variability within which contemporary society and economy have developed and thrived. These indicators include global mean surface temperature, sea level rise, global ocean temperature, Arctic sea ice extent, ocean acidification, and extreme climatic events. With unabated emissions, many trends in climate will likely accelerate, leading to an increasing risk of abrupt or irreversible climatic shifts.

KEY MESSAGE 2: SOCIAL AND ENVIRONMENTAL DISRUPTION
The research community provides much information to support discussions on “dangerous climate change”. Recent observations show that societies and ecosystems are highly vulnerable to even modest levels of climate change, with poor nations and communities, ecosystem services and biodiversity particularly at risk. Temperature rises above 2C will be difficult for contemporary societies to cope with, and are likely to cause major societal and environmental disruptions through the rest of the century and beyond.

KEY MESSAGE 3: LONG-TERM STRATEGY: GLOBAL TARGETS AND TIMETABLES
Rapid, sustained, and effective mitigation based on coordinated global and regional action is required to avoid “dangerous climate change” regardless of how it is defined. Weaker targets for 2020 increase the risk of serious impacts, including the crossing of tipping points, and make the task of meeting 2050 targets more difficult and costly. Setting a credible long-term price for carbon and the adoption of policies that promote energy efficiency and low-carbon technologies are central to effective mitigation.

KEY MESSAGE 4: EQUITY DIMENSIONS
Climate change is having, and will have, strongly differential effects on people within and between countries and regions, on this generation and future generations, and on human societies and the natural world. An effective, well-funded adaptation safety net is required for those people least capable of coping with climate change impacts, and equitable mitigation strategies are needed to protect the poor and most vulnerable. Tackling climate change should be seen as integral to the broader goals of enhancing socioeconomic development and equity throughout the world.

KEY MESSAGE 5: INACTION IS INEXCUSABLE
Society already has many tools and approaches – economic, technological, behavioural, and managerial – to deal effectively with the climate change challenge. If these tools are not vigorously and widely implemented, adaptation to the unavoidable climate change and the societal transformation required to decarbonise economies will not be achieved. A wide range of benefits will flow from a concerted effort to achieve effective and rapid adaptation and mitigation. These include job growth in the sustainable energy sector; reductions in the health, social, economic and environmental costs of climate change; and the repair of ecosystems and revitalisation of ecosystem services.

KEY MESSAGE 6: MEETING THE CHALLENGE
If the societal transformation required to meet the climate change challenge is to be achieved, a number of significant constraints must be overcome and critical opportunities seized. These include reducing inertia in social and economic systems; building on a growing public desire for governments to act on climate change; reducing activities that increase greenhouse gas emissions and reduce resilience (e.g., subsidies); and enabling the shifts from ineffective governance and weak institutions to innovative leadership in government, the private sector and civil society. Linking climate change with broader sustainable consumption and production concerns, human rights issues and democratic values is crucial for shifting societies towards more sustainable development pathways.

Categories: Global Warming

57 responses so far ↓

  • David B. Benson // June 22, 2009 at 10:33 pm | Reply

    So far, US Congress isn’t getting it very well.

    Anyway, nice summary.

  • Deep Climate // June 22, 2009 at 11:00 pm | Reply

    If anyone has trouble with the above link, try this one (works better for me):

    http://www.pik-potsdam.de/news/press-releases/files/synthesis-report-web.pdf/view

  • Dano // June 22, 2009 at 11:20 pm | Reply

    I’m a glass half-full kinda guy (albeit full of groundwater tainted by Big Ag), but this list of what we need to do [reformatted and emphases added]:

    o reducing inertia in social and economic systems;

    o building on a growing public desire for governments to act on climate change;

    o reducing activities that increase greenhouse gas emissions and reduce resilience (e.g., subsidies);

    o and enabling the shifts from ineffective governance and weak institutions to innovative leadership in government, the private sector and civil society.

    Linking climate change with broader sustainable consumption and production concerns, human rights issues and democratic values is crucial for shifting societies towards more sustainable development pathways.

    Wowie.

    That is a very clear course to change course for the ships of state.

    Too bad we didn’t begin 30 years ago. We don’t move that fast.

    Sigh.

    Best,

    D

  • Deep Climate // June 22, 2009 at 11:53 pm | Reply

    I have something of a quibble with key message # 1.

    The temperature figure used visually suggests that temperature observations are running at or above the mid-point of the IPCC range of projections. But that depends which set of projections one examines (and how they are baselined).

    The supporting figure is updated from the Rahmstorf et al. (2007) comparison of observations to IPCC TAR (Third Assessment Report) projections (this was discussed by Tamino a year ago, IIRC).

    IPCC AR4 (Fourth Assessment Report) projections are somewhat higher than TAR, at least when using the AR4 baseline of 1980-99 average “hindcast”.

    Here is figure 3, as updated by Rahmstorf, extracted from “Key message # 1”:
    http://deepclimate.files.wordpress.com/2009/06/rahmstorf-2007-update.pdf

    Here is my attempt to compare AR4 projections with observations, both baselined to 1980-99.
    http://deepclimate.files.wordpress.com/2009/05/ar4-smooth.gif

    The full post:
    http://deepclimate.org/2009/06/03/ipcc-ar4-projections-and-observations-part-1/

    Don’t get me wrong – I support all six key messages. But I would have preferred some sort of reference to AR4 as well as TAR projections.

    Having said all that, it is certainly true, as the report states, that temperature trends do clearly lie within IPCC projections, whether TAR or AR4: “Nevertheless, the long-term trend of increasing temperature is clear and the trajectory of atmospheric temperature at the Earth’s surface is proceeding within the range of IPCC projections.”

    [Response: According to Rahmstorf at RealClimate,

    ... the report cites a published study here: our 2007 Science paper, where we were looking at how the TAR projections (starting in 1990) compared to observations. For a meaningful comparison you need enough data, and we thought 16 years was enough. The AR4 projections start in 2000 (see Fig. SPM5 of the AR4). Around the year 2016 it may be worth redoing such a comparison, to see how the more recent projections have held up. -stefan

    ]
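
A quick illustration of the baselining point Deep Climate raises: two anomaly series quoted against different reference periods can look offset from each other even when their trends agree, and putting both on a common 1980-99 baseline removes that apparent offset. This is only a sketch with made-up numbers, not the actual GISS/HadCRU or model data:

```python
import numpy as np

def rebaseline(years, anomalies, base_start=1980, base_end=1999):
    """Shift an anomaly series so its mean over the base period is zero."""
    years = np.asarray(years)
    anomalies = np.asarray(anomalies, dtype=float)
    in_base = (years >= base_start) & (years <= base_end)
    return anomalies - anomalies[in_base].mean()

# Two toy series with identical trends but different reference periods,
# so they start out offset from each other by a constant:
yrs = np.arange(1970, 2009)
obs = 0.017 * (yrs - 1970) - 0.30    # e.g. quoted relative to 1961-1990
proj = 0.017 * (yrs - 1970) - 0.15   # e.g. quoted relative to 1980-1999

# After rebaselining both to 1980-1999, the apparent offset disappears:
print(np.allclose(rebaseline(yrs, obs), rebaseline(yrs, proj)))   # True
```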

  • Terry // June 23, 2009 at 12:39 am | Reply

    I believe we need to change our habits to stop global warming. My father is a staunch Republican who believes this is nonsense. His main argument is: if he fills a glass with ice and fills the rest with water, why doesn’t the glass overflow as the ice melts?

    I am not sure how to address this counter-argument. Could you help me?

    [Response: His argument is irrelevant. We all know that when sea ice melts it doesn't have much effect on sea level -- Al Gore even highlighted that fact in his movie. It's the land ice that's the problem; this includes alpine glaciers all over the world, and the Greenland and Antarctic ice sheets. And to top it off, sea level also rises because of thermal expansion of sea water.

    Tell him to fill a bowl with ice and watch the water level.]
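
For readers passing this along, here is the arithmetic behind that response, as a minimal sketch (the densities and the ice mass are illustrative numbers, not from the comment). A floating block displaces its own weight of water, so melting it changes the level very little; in a glass of fresh water the change is exactly zero.

```python
# Archimedes' principle applied to floating ice (illustrative numbers only).
RHO_FRESH = 1000.0   # kg/m^3, density of meltwater (and of the water in the glass)
RHO_SEA = 1025.0     # kg/m^3, density of sea water

ice_mass = 1.0e6     # kg of floating ice, an arbitrary amount

# While floating, the ice displaces a volume of water whose weight equals its own:
displaced_in_glass = ice_mass / RHO_FRESH  # floating in fresh water
displaced_in_sea = ice_mass / RHO_SEA      # floating in the ocean

melt_volume = ice_mass / RHO_FRESH         # volume of the ice once it has melted

print(f"glass (fresh water): change = {melt_volume - displaced_in_glass:.1f} m^3")   # exactly 0
print(f"ocean (salt water) : change = {melt_volume - displaced_in_sea:.1f} m^3 (~2.5%)")
```

Land ice is different: meltwater from glaciers and the Greenland and Antarctic ice sheets is new water added to the ocean, and warming sea water also expands, which is why sea level rises anyway.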

  • Deep Climate // June 23, 2009 at 1:51 am | Reply

    Stefan Rahmstorf’s explanation of why AR4 temperature projection evaluation is premature is now at RC:

    “[T]he report cites a published study here: our 2007 Science paper, where we were looking at how the TAR projections (starting in 1990) compared to observations. For a meaningful comparison you need enough data, and we thought 16 years was enough. The AR4 projections start in 2000 (see Fig. SPM5 of the AR4). Around the year 2016 it may be worth redoing such a comparison, to see how the more recent projections have held up.”

    http://www.realclimate.org/index.php/archives/2009/06/a-warning-from-copenhagen/#comment-127592

    The original Rahmstorf et al Science paper is here:
    http://www.pik-potsdam.de/~stefan/Publications/Nature/rahmstorf_etal_science_2007.pdf

  • TCO // June 23, 2009 at 3:08 am | Reply

    I have even more skepticism of the AGW predictions of social and economic effects than I do of the scientific stuff. And I think the science guys are themselves not astute in economics and social science. And yes, I know they have some specialists in there. But the whole thing is all commingled. And someone like Tammy who is skilled in statistics is not skilled in econ, so how can he even judge this stuff? My vote: stick to science and stop mixing physical science/social effects/policy. Feynman was very astute about the different nature of technical issues, management issues and policy issues, for instance with the Shuttle crash investigation, and rightfully fought to keep his investigation from making last-minute policy statements.

  • MikeN // June 23, 2009 at 6:27 am | Reply

    Just read your link to the Rahmstorf paper. How do you compute an 11 year non-linear trend for such a short data series?

  • B Buckner // June 23, 2009 at 2:20 pm | Reply

    The synthesis report references Rahmstorf and states “…comparing the IPCC projections of 1990 with observations show that some climate change indicators are changing near the upper end of the range indicated by the projections or, in the case of sea level rise (Figure 1), at even greater rates than indicated by IPCC projections.”

    But the figure shows the projections and observations diverged from the very beginning in 1990. Does this not indicate that the model is wrong, rather than that climate trends are worse than projected (worse than we thought a few years ago)? How was the divergence between observations and projections dealt with in the TAR, which was published in 2001, when it was obvious that the projections were wrong?

  • Dano // June 23, 2009 at 3:07 pm | Reply

    “Around the year 2016 it may be worth redoing such a comparison, to see how the more recent projections have held up.”

    This is a key component of adaptive management and scenario analysis (from which projections are derived). So the quoted point is obvious.

    Sadly, the IPCC’s reliance on projections has met with a deaf public, as few efforts have been made to help societies understand scenarios and projections. This void has allowed some to exploit that lack of knowledge and context to try to demonize the IPCC and the process.

    Best,

    D

  • Timothy Chase // June 23, 2009 at 3:38 pm | Reply

    Deep Climate wrote:

    Stefan Rahmstorf’s explanation of why AR4 temperature projection evaluation is premature is now at RC:

    “[T]he report cites a published study here: our 2007 Science paper, where we were looking at how the TAR projections (starting in 1990)…”

    Thank you for that little bit of background. I was trying to follow the conversation, but it was a little unclear. I figured that TAR was much earlier than AR4, but didn’t really know by how much. (At this point I take it to be by about 10 years — as the simulations/papers AR4 is based off of date to 2000.)

    Hitting Google on TAR I kept getting RankExploits — which (after search RealClimate) I gather is about as reliable as Watts. Anyway, thanks again.

  • Timothy Chase // June 23, 2009 at 3:42 pm | Reply

    PS

    That should have been “after searching RealClimate…” A little distracted this morning. Putting together a database (nothing serious) and taking a sick cat to the vet.

  • lucia // June 23, 2009 at 4:25 pm | Reply

    Timothy,
    I figured that TAR was much earlier than AR4, but didn’t really know by how much. (At this point I take it to be by about 10 years — as the simulations/papers AR4 is based off of date to 2000.)

    Your estimate of the publication date of the TAR is a bit off. The full title of the TAR, published in 2001, is “Climate Change 2001: The Third Assessment Report”. It is available for download here.

  • Deep Climate // June 23, 2009 at 7:17 pm | Reply

    Timothy,
    TAR projections are based on climate model simulations that start in 1990, while AR4 projections are for 2000 on. So, yes, the projections start 10 years earlier in TAR.

    The actual publication dates are 2001 (TAR) and 2007 (AR4).

  • MikeN // June 23, 2009 at 7:29 pm | Reply

    Timothy, I just got this from rankexploits, so perhaps you’ll ignore it, but the old Rahmstorf paper and the one in the synthesis report do not match. If you calculate an 11 year smoothing with an update for 2007 and 2008, there should be a flattening at the end.

    [Response: "11-year smoothing" doesn't necessarily mean 11-year moving averages. Better methods are less sensitive to noise fluctuations.]

  • Timothy Chase // June 23, 2009 at 7:30 pm | Reply

    lucia wrote:

    Your estimate of the publication date of the TAR is a bit off. The full title of the TAR, published in 2001, is “Climate Change 2001: The Third Assessment Report”. It is available for download here.

    Yes, that is when it was published (2001), but the data was from 1990, whereas the data for AR4 (2007) was based off 2000. Ten years’ difference.

    Please see:

    Stefan Rahmstorf’s explanation [inline here] of why AR4 temperature projection evaluation is premature is now at RC:

    “[T]he report cites a published study here: our 2007 Science paper, where we were looking at how the TAR projections (starting in 1990) compared to observations. For a meaningful comparison you need enough data, and we thought 16 years was enough. The AR4 projections start in 2000 (see Fig. SPM5 of the AR4). Around the year 2016 it may be worth redoing such a comparison, to see how the more recent projections have held up.”

    -Deep Climate

    *
    You should keep in mind the fact that a few years isn’t sufficient to establish a trend in global average temperature.

    Climate oscillations introduce a great deal of noise over the short-run. But it is just noise. They can’t get rid of the heat, they just move it around, from the surface to the ocean depths and back again. To get rid of the heat as far as the climate system is concerned, it has to be radiated into space. Conduction and convection just won’t cut it.

    But if you increase the level of greenhouse gases, you make the atmosphere more opaque to thermal radiation.

    You can see it here:

    Measuring Carbon Dioxide from Space with the Atmospheric Infrared Sounder
    http://airs.jpl.nasa.gov/story_archive/Measuring_CO2_from_Space/

    … and the following might look oddly familiar:

    CO2 bands in Earth’s atmosphere
    http://sci.esa.int/science-e/www/object/index.cfm?fobjectid=37190
    *
    If energy is entering the climate system at a constant rate (or actually a slightly declining rate since 1961) but is leaving the climate system at a reduced rate, by the conservation of energy we know that the amount of energy in the system has to increase. And it will continue to increase until the temperature rises enough that the system radiates energy (according to the Stefan-Boltzmann equation) at a high enough rate, proportional to T⁴, that it balances the rate at which energy is entering the system. (That’s called “radiation balance theory.”)

    But the absolute humidity of the atmosphere roughly doubles for every ten degrees of warming, and water vapor is itself a greenhouse gas. So when everything is said and done, we are looking at a climate sensitivity of about 3 K. (I could point you to some papers if it would help.)
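
The numbers in that argument can be checked on the back of an envelope. The sketch below is the standard zero-dimensional radiation balance, not anything from a GCM; the 3.7 W/m² figure for doubled CO₂ is the commonly quoted estimate, and the final tripling by feedbacks is the rough factor the comment refers to.

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1366.0       # solar constant, W m^-2
ALBEDO = 0.30     # planetary albedo

# Radiation balance: absorbed sunlight = emitted thermal radiation (sigma * T^4)
absorbed = (1.0 - ALBEDO) * S0 / 4.0          # ~239 W m^-2
T_eff = (absorbed / SIGMA) ** 0.25            # effective emitting temperature
print(f"effective emitting temperature: {T_eff:.0f} K")          # ~255 K

# If added greenhouse gases reduce outgoing radiation by a forcing dF,
# temperature must rise until sigma*T^4 catches up.  Linearizing:
#   dF = 4 * sigma * T^3 * dT   =>   dT = dF / (4 * sigma * T^3)
dF_2xCO2 = 3.7                                 # W m^-2 for doubled CO2
dT_no_feedback = dF_2xCO2 / (4.0 * SIGMA * T_eff**3)
print(f"no-feedback warming for doubled CO2: {dT_no_feedback:.1f} K")   # ~1 K

# Water vapour and other feedbacks roughly triple this, which is where the
# ~3 K sensitivity mentioned above comes from.
```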

  • Timothy Chase // June 23, 2009 at 7:39 pm | Reply

    PS

    Given the noise that exists in the climate system, anyone who tries to establish the trend in global average temperature with much less than fifteen years data is — in my view — either particularly ignorant of the science, or what is more likely, some sort of flim-flam artist, a bit like the psychic surgeons that James Randi exposes who “remove” tumors without opening up the “patient.” (And in the case of psychic surgery, the patient often dies only a few years later — after the cancer metastasizes.)

    [The phrase "tries to establish the trend in global average temperature with much less than fifteen years data" describes the vast majority of those who deny the reality and severity of global warming. Imagine that.]

  • MikeN // June 23, 2009 at 9:05 pm | Reply

    What is the source for the chart of emissions scenarios that has A1B growing at 2.42% per year and A1FI growing at 2.71%? Based on the tables in the TAR, I get growth rates of 3.16 and 2.02 for those scenarios in terms of total CO2. How is this growth rate supposed to be calculated?
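
One mundane source of discrepancies like this is the convention used to compute the rate. The usual choice is a compound (geometric) annual rate between two endpoint values, and the answer depends on which years and which quantity (total CO₂, fossil CO₂, all GHGs) are used as endpoints. A minimal sketch of the arithmetic, with made-up numbers rather than the actual SRES table values:

```python
def compound_growth_rate(start_value, end_value, n_years):
    """Average annual growth rate implied by two endpoint values."""
    return (end_value / start_value) ** (1.0 / n_years) - 1.0

# Hypothetical emissions endpoints, purely for illustration:
# a scenario rising from 8.0 to 28.0 GtC/yr over 50 years.
rate = compound_growth_rate(8.0, 28.0, 50)
print(f"implied growth rate: {rate * 100:.2f}% per year")   # ~2.54%
```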

  • lucia // June 23, 2009 at 9:08 pm | Reply

    Tamino–

    Response: “11-year smoothing” doesn’t necessarily mean 11-year moving averages. Better methods are less sensitive to noise fluctuations.]

    I agree; that’s why I used a simple example of moving averages, and my post says “Different smoothing methods weigh the data inside the smoothing region differently.”

    Timothy,
    I agree the graphs in the TAR show simulation results beginning in 1990. However, the AOGCM models used and the “simple models” used to create the graph in the TAR had not even been developed in 1990. The simulations run by relatively new models were ‘projecting’ the past which is generally easier than ‘projecting’ into the future.

    PS. I haven’t been trying to “establish” any trends in any blog posts.

  • george // June 23, 2009 at 10:47 pm | Reply

    Lucia says

    “the AOGCM models used and the “simple models” used to create the graph in the TAR had not even been developed in 1990. The simulations run by relatively new models were ‘projecting’ the past which is generally easier than ‘projecting’ into the future.”

    It may be true generally that ‘projecting’ the past is easier than ‘projecting’ into the future, but is it true in the specific case of the TAR projections?

    Rahmstorf et al. indicate in their paper that the “model projections [from 2001 TAR] are essentially independent from the observed climate data since 1990”.

    “The IPCC scenarios and projections start in the year 1990, which is also the base year of the Kyoto protocol, in which almost all industrialized nations accepted a binding commitment to reduce their greenhouse gas emissions. Although published in 2001, these model projections are essentially independent from the observed climate data since 1990: Climate models are physics-based models developed over many years that are not “tuned” to reproduce the most recent temperatures, and global sea-level data were not yet available at the time.”

  • luminous beauty // June 23, 2009 at 11:25 pm | Reply

    The simulations run by relatively new models were ‘projecting’ the past which is generally easier than ‘projecting’ into the future.

    ‘What’?

    PS. I haven’t been trying to “establish” any trends in any blog posts.

    Or anything at all “meaningful”, either.

  • Timothy Chase // June 23, 2009 at 11:34 pm | Reply

    lucia wrote:

    I agree the graphs in the TAR show simulation results beginning in 1990. However, the AOGCM models used and the “simple models” used to create the graph in the TAR had not even been developed in 1990. The simulations run by relatively new models were ‘projecting’ the past which is generally easier than ‘projecting’ into the future.

    Lucia, your criticism might carry some weight — if climate models were simply instances of curve fitting.

    However climate models don’t just have to match up with a given set of data — such as temperature for a “hindcast” of a given set of years. Climate models aren’t based upon curve-fitting. They are based upon our empirical, scientific understanding of the natural world.

    This is really part of the beauty of climate models – and at an abstract level, one of the most fundamental principles of climate modeling itself. It argues from general principles, typically fundamental principles of physics – although not strictly – as when the responses of organisms (e.g., certain representative species of vegetation) are incorporated into the models.

    It doesn’t allow the arbitrary element of curve-fitting. Such an approach would have little grounds for regarding its conclusions as applying to anything outside of what the curve was based on – and given the complexity of what we are dealing with, it would quickly evolve into a Rube Goldberg device which no one could understand the basis for – but which we would adopt merely like superstitious rats which are randomly rewarded – dancing about in the belief that some increasingly complex set of motions determines whether or not they get the reward.

    Their foundation consists of the principles of physics: radiation transfer theory, thermodynamics, fluid dynamics and so on. They don’t tinker with the model each time to make it fit the phenomena they are trying to model. With that sort of curve-fitting, tightening the fit in one area would loosen the fit in others. They might improve the physics by including a more detailed analysis of a given phenomenon, but because it is actual physics, tightening the fit in one area almost inevitably means tightening the fit in numerous others.

    Since climate models are based upon our scientific understanding of the world, we have every reason to believe that the more detailed the analysis, the more factors we take into account, the better the models will do at forecasting the behavior of climate systems. A climate model is not some sort of black box. If we see that the predictions are not matching up, we can investigate the phenomena more closely, whether it is in terms of fluid dynamics, spectral analysis, chemistry or what have you and see what we are leaving out and properly account for it.

    This would seem to be the only rational approach that climatologists can take, and if this general approach did not work, this would seem to imply that natural science is a failed project, that its success up until this point has simply been some sort of illusion, and that the world simply doesn’t make sense.

    To the extent that we can incorporate the relevant physics, we are able to base our projections upon a far larger body of knowledge than just a few points on a graph – things as tried and tested as the laws of thermodynamics, the laws governing fluid motion, chemistry, the study of radiation in terms of blackbody radiation, absorption and reemission – and even quantum mechanics.
    *
    lucia continued:

    PS. I haven’t been trying to “establish” any trends in any blog posts.

    I didn’t say that you had. However, given your denial I decided to check.

    I found the following essay:

    IPCC Projections Overpredict Recent Warming.
    10 March, 2008 (07:38) | global climate change Written by: lucia
    http://rankexploits.com/musings/2008/ipcc-projections-overpredict-recent-warming/

    In that essay you give a chart titled “Global Mean Temperature,” then state:

    I estimate the empirical trend using data, show with a solid purple line. Not[e] it is distinctly negative with a slope of -1.1 C/century.

    In the chart itself it says that it is based upon a “Cochrane Orcutt fit to data: 2001-now & uncertainty bands.” 2001 to 2008 is seven years, less than half of fifteen. Would you claim that you weren’t trying to establish what the trend actually and empirically is but merely arguing against the trend that the IPCC claimed to exist on the basis of a larger span of years?

    If the latter, then I must ask: how much validity is there to such an argument against the IPCC’s conclusion if it can in no way support a conclusion about the actual, empirical trend itself? And if it can in fact be used to argue against the IPCC’s conclusion then surely it can support a conclusion about the trend in global average temperature. And if you maintain that it nevertheless cannot, did you make such unsupportable “hairsplitting” clear to those who visit your blog? Or did you simply let them assume what must logically follow from your argument?
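
The fifteen-year point can be made concrete with a small simulation. The sketch below uses an assumed underlying warming of 1.8 °C/century and white noise of roughly the observed interannual scale (real data are autocorrelated, which makes short trends even less reliable), and asks how often a short ordinary-least-squares trend comes out negative even though the true trend is positive:

```python
import numpy as np

rng = np.random.default_rng(1)

TRUE_TREND = 0.018   # assumed warming, deg C per year (1.8 C/century)
NOISE_SD = 0.12      # deg C, rough scale of year-to-year variability
N_SIM = 10_000

def simulated_slopes(window_years):
    """OLS trend slopes (C/century) fitted to short windows of trend + noise."""
    t = np.arange(window_years)
    slopes = np.empty(N_SIM)
    for k in range(N_SIM):
        y = TRUE_TREND * t + rng.normal(0.0, NOISE_SD, window_years)
        slopes[k] = np.polyfit(t, y, 1)[0] * 100.0
    return slopes

for window in (7, 15, 30):
    s = simulated_slopes(window)
    print(f"{window:2d}-year windows: slope spread = {s.std():.2f} C/century, "
          f"negative {np.mean(s < 0):.0%} of the time")
```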

  • Zeke Hausfather // June 24, 2009 at 12:38 am | Reply

    Let’s be honest here; they aren’t completely independent. After all, modelers use the relative skill in hindcasting to assess model validity, so there is at least some selective pressure to exclude or tweak models that perform poorly in hindcasting the 1990-2000 period.

    Granted, there aren’t too many model parameters that can be tweaked, so the general ability of models to hindcast is in itself a sign that they are getting a fair bit right.

  • MikeN // June 24, 2009 at 1:36 am | Reply

    >[Response: "11-year smoothing" doesn't necessarily mean 11-year moving averages. Better methods are less sensitive to noise fluctuations.]

    There was a flattening with just 2007 data. You posted it in March of last year.

    http://tamino.wordpress.com/2008/03/26/recent-climate-observations-compared-to-ipcc-projections/

    It looks like the graph is different in the report.

    [Response: The word "flattening" implies some trend -- not just a noise fluctuation. On that basis, there's no flattening.]

  • Ray Ladbury // June 24, 2009 at 1:41 am | Reply

    TCO says, “I have even more skepticism of the AGW predictions of social and economic effects than I do of the scientific stuff.”

    Maybe you can explain why you guys always assume that any uncertainty means you can stop worrying. The uncertainties actually push the risk higher rather than lower.

  • MikeN // June 24, 2009 at 4:46 am | Reply

    [Response: The word "flattening" implies some trend -- not just a noise fluctuation. On that basis, there's no flattening.]

    OK, then what should that be called?

    My point is that this ‘not a flattening’ isn’t in the version that is in the Copenhagen report, so the smoothing is confusing. The noise is on the low end, so I would expect the version with 2008 data included to have more ‘not a flattening’ than in the chart you posted last year.

  • MikeN // June 24, 2009 at 5:23 am | Reply

    >— if climate models were simply instances of curve fitting…

    That all sounds good, and the models are wonderful creations, but in the end they are curve fitting. Reading thru the documentation for the MIT IGSM that you highlighted in ‘It’s Going to Get Worse,’ they built a detailed economic model to couple with the other sub-models. However, they indeed did engage in curve-fitting to observed temperature measurements to get values for key parameters of the model. From Sokolov & Stone
    http://globalchange.mit.edu/files/document/MITJPSPGC_Rpt124.pdf
    “Specifically, Forest et al. (2002) used the climate component of the IGSM (Sokolov & Stone, 1998; Prinn et al., 1999) to produce probability distributions for the climate sensitivity, the rate of heat uptake by the deep oceans, and the net forcing due to aerosols by comparing observed temperature changes over the 20th century with results of the simulations in which these model parameters were varied.”

    I’ve worked with some of these climate models, and indeed changes in these climate variables make a huge difference in climate sensitivity estimates. In 1999, Prinn wrote that it has twice as large an effect as changes in carbon emissions. I haven’t read their latest work in full, but from the descriptions it appears to me they engaged in more curve fitting to estimate climate parameters, as well as increasing estimates for carbon emissions, a change I think is valid.

    [Response: I'm afraid you have crossed the "stupid threshold." Models are NOT curve-fitting, and your abysmally ignorant portrayal of them as such proves that you are nowhere near sufficiently informed to offer any useful opinion. Unfortunately you're an example of "a LITTLE knowledge is a dangerous thing," so it'll probably take decades for you to realize not only how wrong, but how warped your beliefs are.

    As for the claim that you "worked with some of these climate models" perhaps you'll understand if I doubt the "work" you did had any merit, or imparted the understanding necessary to evaluate them.]

  • Deep Climate // June 24, 2009 at 5:36 am | Reply

    MikeN,
    Jean S and Lucia appear to be suggesting that Rahmstorf may have changed his smoothing process to minimize the so-called “flattening” effect of the 2007-8 trough. They speculate that the end-point processing may have changed or that a larger window (more smoothing points) was used than in the original. Hence the provocative title of Lucia’s post “Fishy odors surrounding Figure 3 from ‘The Copenhagen (Synthesis) Report’”.

    However, if one looks carefully at Rahmstorf’s original chart (a higher resolution version taken from a presentation) and Jean S’s supposed replication with the same data to 2006, there are already subtle but important differences.

    http://deepclimate.files.wordpress.com/2009/06/rahmstorf-original-20072.jpg

    http://rankexploits.com/musings/wp-content/uploads/2009/06/2006_m_5_rahmstor.jpg

    Note that the smoothed observations coincide exactly with the HadCRU temp for 2005 in the original chart, but are noticeably below in the replication. Also, Jean S has more divergence between GISS and HadCRU around 1998, but less at the end point. These differences suggest to me that Jean S’s replication may have used different weightings within the smoothing window (and for all I know the entire algorithm might be very different). In particular, Jean S’s algorithm appears to give more weight at the point of estimation and less to the influence of neighbouring points.

    Now look at the updated chart (I’ve isolated the last few years, ending in 2008). Notice that recent years have all been lowered somewhat under the effect of a relatively cool 2008.

    http://deepclimate.files.wordpress.com/2009/06/rahmstorf-2007-update-detail.jpg

    I don’t see any evidence whatsoever that Rahmstorf changed his smoothing procedure between his two charts. Does that answer the question you are really asking?

    There’s a lot of other misinformation in that post. Maybe I’ll comment more when I’m a little calmer.

  • luminous beauty // June 24, 2009 at 12:16 pm | Reply

    OK, then what should that be called?

    Noise fluctuation.

  • george // June 24, 2009 at 1:36 pm | Reply

    Zeke Hausfather said

    Let’s be honest here; they aren’t completely independent. After all, modelers use the relative skill in hindcasting to assess model validity, so there is at least some selective pressure to exclude or tweak models that perform poorly in hindcasting the 1990-2000 period.

    To be perfectly honest, I think hindcasting for these models is usually done over multiple decades, not just one. The graphs shown in Rahmstorf et al. actually indicate as much.

    If that were not the case and selective pressure was at work specifically “to exclude or tweak models that perform poorly in hindcasting the 1990-2000 period”, why wouldn’t there actually be a better match between the IPCC projections and the GISS and HADCRU temperature trends for that period?

    As is clear from this graph, the observed (GISS and HADCRU) temperature trends actually follow the “upper bound” of the “projection envelope” for the latter half of that 1990-2000 period (after about 1996 or so).

  • MikeN // June 24, 2009 at 8:18 pm | Reply

    >why wouldn’t there actually be a better match between the IPCC projections and the GISS and HADCRU temperature trends for that period?

    Because as Tamino is saying, this isn’t curve fitting. The models are too complicated to get an exact match.

    >Models are NOT curve-fitting, and your abysmally ignorant portrayal of them as such proves that you are nowhere near sufficiently informed

    Well perhaps this is a definition issue again, however the people who created the model said what they did in their paper.

    You’re right that my work on the models doesn’t give me a detailed understanding, as I only changed some inputs to get an output of temperature, but this gave me an idea of how much small changes in input parameters change the output.

    The reason I say this is curve fitting is because the model gets tweaked to adjust to observed phenomena. Values for cloud sensitivity, ocean sensitivity, aerosols, volcanoes, etc. These get changed until you get the results that best match observed data.

  • MikeN // June 24, 2009 at 8:29 pm | Reply

    Deep, you are right that the flattening appears to be smoother in the update than in Jean’s replication. The changes start at least as far back as 2003.
    So maybe it is the same flattening method after all.
    Interesting that adding 2007 alone showed more flattening like Jean’s replication, but throwing in 2008 lowered the overall slope.

  • MikeN // June 24, 2009 at 9:10 pm | Reply

    >I’m afraid you have crossed the “stupid threshold.”

    Oh no! Pull me back!
    I’m still trying to figure out why the A1FI number is so high compared to A1B, different from the TAR description of the emissions scenarios.

  • Zeke Hausfather // June 24, 2009 at 9:15 pm | Reply

    george,

    I realize that hindcasting is done over multiple decades, and that there is no reason why 1990 to 2000 is necessarily more important to assessing model hindcasting strength than any prior decade. My only point was that models produced in 2000 are not completely independent of observations from 1990 to 2000, so their relative strength in hindcasting over that period is not as effective an evaluation of model predictions as is comparing projections to observed temperatures after the models were created.

    That said, the latter is rather difficult to do due to the relatively short period of observations post-model creation and the degree of annual variability in the climate.

  • Deep Climate // June 24, 2009 at 10:12 pm | Reply

    MikeN:
    Interesting that adding 2007 alone showed more flattening like Jean’s replication, but throwing in 2008 lowered the overall slope.

    You are assuming that Tamino’s updated graph was created with exactly the same smoothing as Rahmstorf’s graph. I don’t know whether or not that is the case (and neither do you nor anyone else at the Blackboard at this point).

    But, in any event (and more to the point), it appears that you now acknowledge that suggestions that Rahmstorf changed the smoothing parameters are most likely completely unfounded.

  • Ray Ladbury // June 24, 2009 at 11:43 pm | Reply

    There seem to be lots of misapprehensions about “tweaking” of climate models. To call this process “curve fitting” is misleading at best and possibly perverse.

    First, these are not statistical models. There’s no “curve” you are fitting. Rather, you are trying to make the model look as Earthlike as possible while keeping the parameters within their independently determined confidence intervals. Moreover, the form of the model itself is dictated by physics.

    Now, the individual parameters may well be fit to data, but these data are independent of the criteria used for verification.

    Finally, as I said, there’s no curve. There are all the measurable variables–temperature, precipitation, possibly wind and other fields as well.

    Also, here’s a hint:

    And as far as “trends” go: if your conclusions depend on details like the smoothing algorithm, the dataset, the weighting, and the starting and ending years, then you aren’t really talking about trends, are you?

  • David B. Benson // June 24, 2009 at 11:59 pm | Reply

    Well, the model I work on (now and then) is certainly based on physics. That part is not so-called curve fitting, but rather what I call parameter estimation. All would be done and finished except that there are two functions with unknown shapes and parameters. There are reasons why the shape should be thus-and-so or this-and-that. The multiplicative parameters on each term of these functions are unknown, but a likely range is known.

    I have some data and use parameter estimation methods to (1) find the best-fitting parameter values and (2) determine whether, with the best-fitting parameter values, the function shaped thus-and-so is better, the same, or worse than the function shaped this-and-that.

    I have a hard time imagining that something of the same sort is not occurring for AOGCMs. There are sub-scale parameterizations for a variety of processes such as clouds and aerosols. If I were working on climate models (which I am not), I’d use some of the available data for ‘training’, i.e., parameter estimation, and the remainder for validation studies. Unfortunately for my (occasional) problem, I have nothing left for validation studies. Different problem.
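
For readers unfamiliar with the kind of procedure Benson is describing, here is a minimal, hypothetical sketch: the candidate functional forms come from theory, only their few free parameters are estimated from data, and the two shapes are then compared by how well each can fit. None of the functions or numbers below come from his model or from any AOGCM.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(7)

# Toy "observations" of some process (purely illustrative).
t = np.linspace(0.0, 10.0, 40)
observed = 2.0 * (1.0 - np.exp(-t / 3.0)) + rng.normal(0.0, 0.1, t.size)

# Two candidate shapes ("thus-and-so" vs "this-and-that"), each with a small
# number of unknown parameters to be estimated within plausible ranges.
def shape_a(t, amp, tau):      # saturating exponential
    return amp * (1.0 - np.exp(-t / tau))

def shape_b(t, amp):           # square-root growth
    return amp * np.sqrt(t)

params_a, _ = curve_fit(shape_a, t, observed, p0=[1.0, 1.0])
params_b, _ = curve_fit(shape_b, t, observed, p0=[1.0])

rss_a = np.sum((observed - shape_a(t, *params_a)) ** 2)
rss_b = np.sum((observed - shape_b(t, *params_b)) ** 2)

print("shape A best-fit parameters:", np.round(params_a, 2), " RSS:", round(rss_a, 3))
print("shape B best-fit parameters:", np.round(params_b, 2), " RSS:", round(rss_b, 3))
```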

  • dhogaza // June 25, 2009 at 12:34 am | Reply

    I have a hard time imagining that something of the same sort is not occurring for AOGCMs. There are sub-scale parameterizations for a variety of processes such as clouds and aerosols.

    Yes, but what they’re looking for is reasonable parameter values that match the empirical data available for the process being parameterized.

    They’re not “curve fitting” in the sense of twiddling a bunch of dials until the model spits out a good fit to past climate conditions.

    Hadley has (or had) a nice high-level overview of such things in their documentation for HadCM4 but I didn’t bookmark it, and google’s not being helpful at the moment. I don’t remember how I found it in the first place, unfortunately.

    MikeN, another thing that ought to be obvious … the fact that you can download the code, compile it, and tweak parameters randomly and get bizarre stuff out doesn’t mean that experimenters are also just tweaking parameters randomly and getting random garbage out. As Ray says, the parameter values are informed by physics and whatever empirical data is available …

    Also, I’m surprised that no one has called MikeN out on his bait-and-switch:

    [I]n the end they are curve fitting. Reading thru the documentation for the MIT IGSM that you highlighted in ‘It’s Going to Get Worse,’ they built a detailed economic model to couple with the other sub-models. However, they indeed did engage in curve-fitting

    He’s using the fact that the MIT economic model is (apparently, if he’s correct) a statistical model to “prove” that NASA and the Hadley Centre do the same sort of curve fitting as the MIT team did.

    Michael, that’s a bit like saying “since Soyuz doesn’t have wings, the space shuttle doesn’t have wings”.

  • Douglas Watts // June 25, 2009 at 2:49 am | Reply

    “Maybe you can explain why you guys always assume that any uncertainty means you can stop worrying. The uncertainties actually push the risk higher rather than lower.”

    This is what those in the Atlantic salmon conservation biz call the Precautionary Principle. It is the fundamental basis of stock management under the North Atlantic Salmon Conservation Organization (NASCO), which is comprised of all the nations that touch on the coast of the North Atlantic, where Atlantic salmon live.

  • MikeN // June 25, 2009 at 3:03 am | Reply

    Um, no, the existence of the economic model has nothing to do with the curve fitting. The description you give is in line with what I was saying. Curve-fitting may be the wrong term, but it was Tamino who first used that term to refer to what Lucia was saying.

    [Response: ???]

  • Timothy Chase // June 25, 2009 at 3:45 am | Reply

    dhogaza wrote:

    Yes, but what they’re looking for is reasonable parameter values that match the empirical data available for the process being parameterized.

    They’re not “curve fitting” in the sense of twiddling a bunch of dials until the model spits out a good fit to past climate conditions.

    In essence, to perform calculations, they have to break space, time, spectra, soil types and the like into grids — to perform discrete calculations, much like breaking the area under a curve into finite boxes — and then apply parameterizations to account for the sub-grid physics — including turbulence. Such an approach is more or less mandated by the fact that computer resources are, like everything else, finite. And as I pointed out above, the employ certain representative species — now that they are trying to incorporate the biosphere.

    Hadley has (or had) a nice high-level overview of such things in their documentation for HadCM4 but I didn’t bookmark it, and google’s not being helpful at the moment. I don’t remember how I found it in the first place, unfortunately.

    Is this what you were thinking of?

    Unified Model
    User Guide
    http://ncas-cms.nerc.ac.uk/html_umdocs/UM55_User_Guide/UM_User_Guide.html

    Here is something a little more introductory that I have brought up before:

    dhogaza continued:

    The latest Hadley Centre model, HadGEM1 (which is typical of current state-of-the-art models), uses 135km boxes with 38 levels in the vertical, and includes all of the complexity of the climate system outlined above.

    This is along the lines of what you are mentioning regarding parameterization.

    Models ‘key to climate forecasts’
    Page last updated at 00:23 GMT, Friday, 2 February 2007
    By Dr Vicky Pope
    UK Met Office’s Hadley Centre
    http://news.bbc.co.uk/1/hi/sci/tech/6320515.stm

    Incidentally, the 38 levels are levels of atmosphere. The ocean is divided into 40 different levels. Resolution with respect to this particular model is 135 km by 135 km, according to the article.

    Time is typically divided into roughly 15 minute intervals with the more advanced models. However, resolution is increased in certain parts of the model as needed for the purpose of modelling, e.g., the polar vortex requires calculations with intervals of roughly 30 seconds — if I remember correctly. But that’s NASA GISS, not Hadley — although Hadley may do something similar. Likewise, the radiation transfer theory built into the most recent NASA GISS models include non-local thermodynamic equilibria calculations — but we are just beginning to realistically model glaciers. Clouds are being modeled more realistically nowadays. Aerosols will require more work

    In any case, all of this is well outside of my area of expertise — but I find it fascinating.

  • Timothy Chase // June 25, 2009 at 3:49 am | Reply

    FINAL CORRECTION (sorry)

    dhogaza wrote:

    Yes, but what they’re looking for is reasonable parameter values that match the empirical data available for the process being parameterized.

    They’re not “curve fitting” in the sense of twiddling a bunch of dials until the model spits out a good fit to past climate conditions.

    In essence, to perform calculations, they have to break space, time, spectra, soil types and the like into grids — to perform discrete calculations, much like breaking the area under a curve into finite boxes — and then apply parameterizations to account for the sub-grid physics — including turbulence. Such an approach is more or less mandated by the fact that computer resources are, like everything else, finite. And as I pointed out above, they employ certain representative species — now that they are trying to incorporate the biosphere.

    dhogaza continued:

    Hadley has (or had) a nice high-level overview of such things in their documentation for HadCM4 but I didn’t bookmark it, and google’s not being helpful at the moment. I don’t remember how I found it in the first place, unfortunately.

    Is this what you were thinking of?

    Unified Model
    User Guide
    http://ncas-cms.nerc.ac.uk/html_umdocs/UM55_User_Guide/UM_User_Guide.html

    This is along the lines of what you are mentioning regarding parameterization.

    Here is something a little more introductory that I have brought up before:

    The latest Hadley Centre model, HadGEM1 (which is typical of current state-of-the-art models), uses 135km boxes with 38 levels in the vertical, and includes all of the complexity of the climate system outlined above.

    Models ‘key to climate forecasts’
    Page last updated at 00:23 GMT, Friday, 2 February 2007
    By Dr Vicky Pope
    UK Met Office’s Hadley Centre
    http://news.bbc.co.uk/1/hi/sci/tech/6320515.stm

    Incidentally, the 38 levels are levels of atmosphere. The ocean is divided into 40 different levels. Resolution with respect to this particular model is 135 km by 135 km, according to the article.

    Time is typically divided into roughly 15 minute intervals with the more advanced models. However, resolution is increased in certain parts of the model as needed for the purpose of modelling, e.g., the polar vortex requires calculations with intervals of roughly 30 seconds — if I remember correctly. But that’s NASA GISS, not Hadley — although Hadley may do something similar. Likewise, the radiation transfer theory built into the most recent NASA GISS models include non-local thermodynamic equilibria calculations.

    In any case, all of this is well outside of my area of expertise — but I find it fascinating.
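
Some rough bookkeeping for the grid sizes quoted above (135 km boxes, 38 atmospheric levels, roughly 15-minute time steps) shows why sub-grid parameterizations are unavoidable: even a state-of-the-art model resolves the atmosphere into about a million cells, each far larger than an individual cloud, and steps through tens of thousands of time steps per simulated year. Order-of-magnitude only:

```python
# Order-of-magnitude bookkeeping for the resolution figures quoted above.
EARTH_SURFACE_KM2 = 5.1e8   # Earth's surface area, km^2
BOX_KM = 135.0              # horizontal grid spacing
LEVELS_ATMOS = 38           # atmospheric levels
MINUTES_PER_STEP = 15.0     # typical time step

columns = EARTH_SURFACE_KM2 / BOX_KM**2
cells = columns * LEVELS_ATMOS
steps_per_year = 365.25 * 24 * 60 / MINUTES_PER_STEP

print(f"~{columns:,.0f} surface columns, ~{cells:,.0f} atmospheric grid cells")
print(f"~{steps_per_year:,.0f} time steps per simulated year")
```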

  • dhogaza // June 25, 2009 at 4:14 am | Reply

    The MIT people, whatever they’re doing, ouija board, curve fitting, killing virgins, have NOTHING AT ALL to do with what NASA or Hadley are doing.

    Must suck to be the lying you …

  • Barton Paul Levenson // June 25, 2009 at 10:47 am | Reply

    MikeN writes:

    That all sounds good, and the models are wonderful creations, but in the end they are curve fitting.

    That is an illiterate statement. The models use physical laws. They are not statistical models. You just don’t know what you’re talking about. Why don’t you actually look up the code of a climate model and read it? NASA GISS’s Model E has the source code available on-line.

  • Barton Paul Levenson // June 25, 2009 at 10:49 am | Reply

    MikeN writes:

    The reason I say this is curve fitting is because the model gets tweaked to adjust to observed phenomena. Values for cloud sensitivity, ocean sensitivity, aerosols, volcanoes, etc.

    Sensitivity is an OUTPUT of the model, not an INPUT.

  • george // June 25, 2009 at 2:11 pm | Reply

    Zeke Hausfather said:

    “My only point was that models produced in 2000 are not completely independent of observations from 1990 to 2000, so their relative strength in hindcasting over that period is not as effective an evaluation of model predictions as is comparing projections to observed temperatures after the models were created.”

    I don’t debate the issue that in general, a correct hindcast may not mean as much as a correct forecast, for some of the very reasons that you mentioned.

    I actually stated that in my original challenge of Lucia’s claim:

    It may be true generally that ‘projecting’ the past is easier than ‘projecting’ into the future, but is it true in the specific case of the TAR projections?

    But we’re talking about a very specific case here and quite frankly, I don’t know enough of the details to say for sure.

    That’s why I questioned Lucia’s claim above (and why I continue to question yours), because it does not seem consistent with what the authors of the Rahmstorf paper said: “model projections [from 2001 TAR] are essentially independent from the observed climate data since 1990”.

    I honestly don’t know when the models used to produce the projections included in the 2001 IPCC TAR were made or what data they used for validation.

    Because of my lack of knowledge in this case, perhaps mistakenly, I was going on the claim I read in the Rahmstorf et al. paper (quoted above).

    Since we’re talking about a specific case here (the IPCC 2001 projections) and not some general rule, I guess my comments to you (and to Lucia) really boil down to 3 questions:

    1) Do you know when the models that were used for the 2001 IPCC projections were created?

    2) Do you know what data (which years and which data sets) were used to validate them?

    3) Do you know for a fact that for the specific case under discussion “there [was] at least some selective pressure to exclude or tweak models that perform poorly in hindcasting the 1990-2000 period.”?

    These are not merely debating questions.

    If you want me to be honest here I will: I think there is far too much of what I would term “idle speculation” that goes on with regard to the work of climate scientists.

    Some of it may be correct, but in the absence of evidence for the specific case, I don’t think it is either fair or accurate to say for the specific case at hand that

    Let’s be honest here; they aren’t completely independent. After all, modelers use the relative skill in hindcasting to assess model validity, so there is at least some selective pressure to exclude or tweak models that perform poorly in hindcasting the 1990-2000 period.

  • Deep Climate // June 25, 2009 at 4:06 pm | Reply

    Zeke, george,

    In any discussion of IPCC projections and hindcasts, we should be very careful to distinguish between the two. A hindcast is not simply a “past projection.” Rather, in general, a hindcast incorporates estimates of natural forcings (volcanic, solar) and anthropogenic forcings (GHGs, aerosols), at least to the extent supported by the particular model. Projections, on the other hand, assume neutral natural forcings and various scenarios for the anthropogenic forcings.

    My reading of TAR chapter 9 is that all model runs incorporated were projections, not hindcasts, in the above sense from 1990 on.

    The actual date of the model run is not particularly relevant to this distinction, much less the date of publication or date of incorporation into a synthesis report like the TAR.

    It’s true that in the TAR there were longer lags between the projection start and model runs/publication than in AR4 (or planned for AR5). But I don’t see any evidence that there were “hindcasts” for 1990-2000, much less that they were used to “exclude or tweak models that perform poorly.”

    The situation is that much clearer for AR4. The projections start at the beginning of 2000 (not 2001 or 2007), despite various claims in the blogosphere to the contrary.

  • David B. Benson // June 25, 2009 at 10:23 pm | Reply

    dhogaza // June 25, 2009 at 12:34 am — Please read the “What is tuning?” section of
    http://www.realclimate.org/index.php/archives/2008/11/faq-on-climate-models/

    Other than a different term, “tuning”, this is what anybody who wants to do sensible, physics-based parameter estimation is going to have to do. So I think I described the situation correctly in my earlier post.

    By the way, I’m not considering what people do in so-called economic models. Those are certainly not based on physics.

  • Dave A // June 25, 2009 at 10:48 pm | Reply

    Timothy Chase,

    “The latest Hadley Centre model, HadGEM1 (which is typical of current state-of-the-art models), uses 135km boxes with 38 levels in the vertical, and includes all of the complexity of the climate system outlined above.”

    Last week the UK Met Office launched a site purporting to show future climate for the UK based on 25 km gridded squares.

    Today in Nature, Myles Allen, recent co-author of the ‘Trillionth tonne’ which made the front cover of Nature, is quoted as saying -

    “Current climate science might support projections on a scale of a couple of thousand kilometres, but anything smaller than that is uncharted territory”

  • dhogaza // June 25, 2009 at 11:56 pm | Reply

    So I think I described the situation correctly in my earlier post.

    It’s not you I’m worried about…

  • Timothy Chase // June 26, 2009 at 12:21 am | Reply

    From the same article:

    Most projections, such as those produced by the Intergovernmental Panel on Climate Change (IPCC), offer information on likely climate effects at a subcontinental scale of around 300 km2. Regional decadal-scale projections have recently been produced for the United States at a resolution of 25 km2 (see ‘Hot times ahead for the Wild West’). But the UKCIP’s approach takes projections to a new level, covering long-term climate change for the whole nation at the scale of 25 km2 and, in some cases, resolving weather patterns down to a scale of 5 km2.

    “It’s one step better than what you get from the IPCC at the global scale,” says Jacqueline McGlade, executive director of the Copenhagen-based European Environment Agency. “We’re getting more discrimination now between the south and the north and there are distinct differences.”

    UK climate effects revealed in finest detail yet
    Source: Copyright 2009, Nature
    Date: June 19, 2009
    Byline: Olive Heffernan
    http://forests.org/shared/reader/welcome.aspx?linkid=130562

    … and the full quote is:

    DEFRA’s chief scientist Bob Watson says that he expects the approach “will be taken up by other regions and highlighted by the IPCC in their next report”.

    “Current climate science might support projections on a scale of a couple of thousand kilometres, but anything smaller than that is uncharted territory.”

    In other words, he regards it as something of an achievement. But by clipping what is in fact an expression of admiration down to something shorter that sounds disapproving, you engaged in what is known in evolutionary circles as “quote-mining.” And you did so with a skill that would make a young-earth creationist envious.

    My hat is off to you, sir!

  • Timothy Chase // June 26, 2009 at 12:33 am | Reply

    PS

    That was in response to Dave A., who, replying to me (or rather to my quote from Dr. Vicky Pope), wrote:

    Today in Nature, Myles Allen, a recent co-author of the ‘Trillionth tonne’ paper, which made the front cover of Nature, is quoted as saying:

    “Current climate science might support projections on a scale of a couple of thousand kilometres, but anything smaller than that is uncharted territory”

    The rocket went up without its crew: I hit a return in one of the one-line submission fields and the webpage took that as equivalent to hitting the “Submit” button.

    I am now about to click “Submit.” Wish me luck…

  • Timothy Chase // June 26, 2009 at 1:19 am | Reply

    David B. Benson originally wrote:

    Well, the model I work on (now and then) is certainly based on physics. The part that is not is so-called curve fitting, what I call parameter estimation. All would be done and finished except that there are two functions with unknown shape and parameters….

    I have a hard time imagining that something of the same sort is not occurring for AOGCMs. There are sub-scale parameterizations for a variety of processes such as clouds and aerosols.

    Dhogaza responded:

    Yes, but what they’re looking for is reasonable parameter values that match the empirical data available for the process being parameterized.

    They’re not “curve fitting” in the sense of twiddling a bunch of dials until the model spits out a good fit to past climate conditions.

    David Benson responded:

    dhogaza // June 25, 2009 at 12:34 am — Please read the “What is tuning?” section of
    http://www.realclimate.org/index.php/archives/2008/11/faq-on-climate-models/

    I don’t think that dhogaza was disagreeing with you, but rather trying to forestall a misunderstanding that some might walk away with. He wished to emphasize (“emphasise,” for UK readers) or “bring to the foreground” something which, in his view, was simply in the background as you presented things and might otherwise be missed.
    *
    David Benson then wrote of parameter estimation (or what is sometimes technically referred to as “tuning”):

    Other than a different term, “tuning”, this is what anybody who wants to do sensible, physics-based, parameter estimation is going to have to do. So I think I described the situation correctly in my earlier post.

    This is certainly how I understand it.

    The distinction between curve-fitting as it is commonly understood and the sort of parameterization used in climate models is quite important, and relatively easy to draw, so I hope you don’t mind if I explain it (in a little more detail than I have above) for the benefit of those who may be less knowledgeable than yourself.

    Models use parameterizations because they are necessarily limited (in one form or another) to finite-difference calculations. The globe is divided into individual cells, perhaps a degree in latitude by a degree in longitude. These cells also have a finite height, so the atmosphere is broken into layers, with perhaps the troposphere and stratosphere sharing a total of forty atmospheric layers. Likewise, calculations are performed in sweeps, such that the entire state of the climate system for a given run is recalculated perhaps every ten minutes of model time.
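
    As a rough illustration of that kind of discretization, here is a toy Python sketch. The grid dimensions, the ten-minute sweep, and the do-nothing “physics” update are placeholders chosen only to mirror the description above, not code from any actual AOGCM:

        import numpy as np

        # Toy grid: 1-degree latitude/longitude cells and 40 vertical layers.
        NLAT, NLON, NLEV = 180, 360, 40
        DT_MINUTES = 10                     # one sweep every ten minutes of model time

        temperature = np.full((NLEV, NLAT, NLON), 250.0)   # placeholder initial state (K)

        def physics_update(state, dt_minutes):
            # Stand-in for the physically based tendencies (radiation, dynamics, ...)
            # that a real model would compute cell by cell on each sweep.
            return state + 0.0 * dt_minutes

        for step in range((24 * 60) // DT_MINUTES):         # one model day
            temperature = physics_update(temperature, DT_MINUTES)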

    Now physics provides the foundation for these calculations, but because we are speaking of finite differences, the calculations will tend to have problems with turbulent flow due to moist-air convection or wind, where gradients between neighboring cells are particularly steep. Thus when the flow is particularly turbulent, such as around the polar vortex, a cell-by-cell calculation based on finite differences lacks the means to tell how, for example, the momentum, mass, moisture, and heat leaving a cell will be split up and transferred to the neighboring cells. To handle this you need some form of parameterization. Standard stuff as far as modeling is concerned, or so I would presume.
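
    A crude sketch of what such a parameterization can look like: the unresolved turbulent transfer between neighboring cells is represented as simple down-gradient mixing governed by a single coefficient. The closure form, the coefficient value, and the toy row of cells are all assumptions for illustration, not any particular model’s scheme:

        import numpy as np

        def subgrid_mixing(field, k_eddy=0.1):
            # Sub-grid closure: transfer between neighboring cells is taken to be
            # proportional to the local gradient, with k_eddy standing in for the
            # unresolved turbulence (periodic wraparound is just for the toy case).
            flux_from_right = k_eddy * (np.roll(field, -1) - field)
            flux_from_left  = k_eddy * (np.roll(field,  1) - field)
            return field + flux_from_right + flux_from_left

        row_of_cells = np.array([1.0, 4.0, 2.0, 0.5])   # made-up values along one row
        print(subgrid_mixing(row_of_cells))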

    Parameterization is a form of curve-fitting. But it is local curve-fitting, concerned with local conditions, local chemistry, and local physics, and backed up by the study of local phenomena, e.g., what you are able to get out of laboratory or field studies. It is not curve-fitting that adjusts the models to specifically replicate the trend in global average temperature or other aggregate, normalized measures of the climate system.
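
    And the local curve-fitting described above, essentially the physics-based parameter estimation (or “tuning”) that David Benson and dhogaza were discussing, amounts to something like the following. The “observations” are invented and real parameter estimation is far more involved, but the point of the sketch is where the constraint on a coefficient like the k_eddy above comes from: process-level measurements, not the global temperature record:

        import numpy as np

        # Hypothetical process-level measurements (lab or field): turbulent flux
        # observed at several values of the local gradient that drives it.
        local_gradient = np.array([0.5, 1.0, 2.0, 3.0])
        observed_flux  = np.array([0.06, 0.11, 0.19, 0.31])

        # The closure assumes flux ~ k_eddy * gradient, so estimate k_eddy by
        # least squares against the local observations of the process itself.
        solution, *_ = np.linalg.lstsq(local_gradient[:, None], observed_flux, rcond=None)
        print("estimated k_eddy:", solution[0])

        # What is deliberately absent here: no step that nudges k_eddy until the
        # full model reproduces the observed global-mean temperature curve.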

    Hopefully I have just done both your views (or rather — as I see it — your common view) justice.

  • dhogaza // June 26, 2009 at 1:25 am | Reply

    The Nature article’s behind a paywall, and I haven’t found which denialism site Dave A has cut-and-pasted this from, but wanna guess that Myles Allen is talking about GCM outputs for the globe, and not about models run with finer grids over a smaller area like the UK?

    And do you wanna bet that the Hadley Centre knows that this research they’re doing is research, in other words exploration into relatively uncharted territory?

  • Zeke Hausfather // June 26, 2009 at 2:19 pm | Reply

    george, Deep,

    I’ll agree that I’m largely speculating here, as I have not personally worked in climate model development, and my thoughts should be taken simply as such.

    My point was that while the parametrization of any given individual model may be completely independent (and not adjusted after the fact), the next generation of models will, all things being equal, tend to build on those models that best reflected the past real-world climate. There is likely at least some selection at work for the models that perform best in hindcasting. Rahmstorf is likely correct in stating that “model projections [from 2001 TAR] are essentially independent from the observed climate data since 1990”, though the same is probably not true for the models used in the AR4.

  • Deep Climate // June 26, 2009 at 6:35 pm | Reply

    Zeke, you said:

    Rahmstorf is likely correct in stating that “model projections [from 2001 TAR] are essentially independent from the observed climate data since 1990”, though the same is probably not true for the models used in the AR4.

    I’m not sure what your point is. The AR4 model-derived projections are from 2000 on, and are relative to a baseline of 1980-99. So the equivalent claim would be that they are “essentially independent from the observed climate data since 2000.” Do you agree or disagree with that statement? It’s hard to tell.
