Open Mind

Open Thread #13

April 28, 2009 · 266 Comments

The previous open thread is getting quite large … so here’s a new one. Lucky 13.

Categories: Global Warming

266 responses so far

  • BBP // April 28, 2009 at 3:40 pm | Reply

    Way back over in Open Thread 12, TCO says “For instance, if Pons and Fleishman can make neutrons come out of their palladium rod….well good for them. Of course they can’t and all they have is minutia of noise in calorimetry and detections of particles at the boundary of instrumental error. They’re shit iow [sic]. But it’s easy to set falsifiable boundaries for what it would take for them not to be shit. N’est pas?”

    This is actually, in my opinion, a case where there should be skepticism on both sides. Over the years there have been multiple reports of neutron detection and excess heat above noise levels. See for example http://www.spectrum.ieee.org/mar09/8407
    or do a search in Google Scholar for CR-39 and LENR

    At this point, I don’t know if there is enough good evidence to say that ‘cold fusion’ is real, but I think there is enough so that we can’t simply dismiss it. Pons and Fleischmann botched the original announcement, but that doesn’t mean they were completely wrong, and we shouldn’t let that color how we look at all the subsequent experiments by other researchers.

  • michel // April 28, 2009 at 5:44 pm | Reply

    Ray, it was a logical point. If, if it failed to warm or even cooled for the next 30 years, however unlikely that may be, if it happened, we would not abandon quantum mechanics or CO2 as a GHG. No way. But we would start to think maybe there was something wrong with AGW theory. We would not start doubting basic physics though. If it carried on for 50 years, I think we’d probably say it was not just doubtful, but that it had been falsified. Still would not doubt basic physics.

    I won’t live to find out, but maybe some of you guys will. Not totally sure that I envy you!

  • Geoff Beacon // April 28, 2009 at 6:03 pm | Reply

    I spoke to an ex-chief scientist last week and asked him about biochar and Prof Salter’s proposal for thickening clouds. He said biochar did not have the capacity to make a difference and that thickening clouds did not address the most important problem … the acidification of the sea.

    Are his judgements about right?

  • dhogaza // April 28, 2009 at 7:46 pm | Reply

    But we would start to think maybe there was something wrong with AGW theory.

    Or we would look for another cause. AGW theory does not, for instance, claim that a drastic drop in the sun’s output would not be a cooling forcing. Etc etc.

    Oh, and again, repeating your lies regarding hybrid technology won’t make them true. You’ve been corrected several times on this blog. I’m not wasting my time correcting you again.

  • JCH // April 28, 2009 at 9:45 pm | Reply

    I don’t really believe in the miracle of 100 mpg Hummers, but it can’t be an accident that people who want to move size at a high mpg reach for hybrid technology.

  • Hank Roberts // April 28, 2009 at 11:04 pm | Reply

    > biochar

    http://www.google.com/search?q=EGU+biochar

    > thickening clouds did not address the most
    > important problem … the acidification of the sea.

    http://www.google.com/search?q=EGU+ocean+acidification

  • Ray Ladbury // April 28, 2009 at 11:12 pm | Reply

    BBP says: “At this point, I don’t know if there is enough good evidence to say that ‘cold fusion’ is real, but I think there is enough so that we can’t simply dismiss it. ”

    I have absolutely no trouble dismissing it. There’s no theory that would explain such a reaction, and there are no convincing results to dismiss. I have yet to see a single independent reproduction of a single result. The only reason it attracts significant attention at all is because it would have a huge effect if it were true.

    Fer chrissake, consider the reaction D + T –> He + n + E. Hell, if Pons and Fleischmann had been right they’d be dead from radiation exposure!
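    For reference, the energetics behind that remark (standard textbook numbers, not part of Ray’s comment): the reaction he cites releases about 17.6 MeV per event, most of it carried away by a fast neutron,

    $$\mathrm{D} + \mathrm{T} \;\rightarrow\; {}^{4}\mathrm{He}\,(3.5\ \mathrm{MeV}) + n\,(14.1\ \mathrm{MeV}),$$

    which is why any appreciable rate of genuine fusion in a benchtop cell would announce itself as a dangerous neutron flux rather than as gentle excess heat.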

  • David B. Benson // April 28, 2009 at 11:17 pm | Reply

    michel // April 28, 2009 at 5:44 pm — What we do notice is an increased ability to come to terms with the more minor aspects such as, for instance, aerosols; ABC aerosols cause a slight global cooling.

    Geoff Beacon // April 28, 2009 at 6:03 pm — Biochar has the capacity to make an important difference. Ocean acidification can only be addressed by removal of CO2.

  • TCO // April 29, 2009 at 12:20 am | Reply

    BBP: failure to show convincing results despite 10 years of trying… arguing about how the man is keeping them down, founding their own journals, socializing excessively. All are characteristics of CF that make me skeptical of it (in addition to the energy barrier of nuclear binding energy >> chemical).

    It distresses me to see the many similarities between AGW skepticism and CF. No, McI is not as bad or silly as Pons and Fleischmann. But there are some disturbing similarities.

    Only thing that gives me hope is Mann being a phys-math dropout and the local oompa here being a red toolbox user, not a Bessel function user.

    Net/net: I still worry that my side is sillier…but the other side having some flaws gives faint hope…

  • Hank Roberts // April 29, 2009 at 1:21 am | Reply

    That EGU biochar link will find Nature’s Climate blog, which has a very recent survey post on the subject, with many good links. One recommended there: http://heliophage.wordpress.com/2009/03/30/the-biochar-backlash/#jump

  • Hank Roberts // April 29, 2009 at 1:41 am | Reply

    Oops. Dang!
    http://www.newscientist.com/article/dn17042-geothermal-explosion-rocks-green-energy-hopes.html

    Oboy!
    http://www.e360.yale.edu/content/feature.msp?id=2144

    Oh, wait …
    http://cameochemicals.noaa.gov/chemical/4510

  • dhogaza // April 29, 2009 at 1:42 am | Reply

    Only thing that gives me hope is Mann being a phys-math dropout and the local oompa here being a red toolbox user

    TCO, I never realized that your general superiority over the rest of mankind included such a talent for a comic turn!

    Mann could be convicted of fraud tomorrow and it wouldn’t affect the body of work put together by a very large number of scientists, nor the basic physics of CO2 infrared absorption.

    The “local oompa” you deride is just a blogger who doesn’t work in climate science.

    If that’s all that gives you hope, I’ll lend you a rope and a branch on my apple tree to hang it from. Just remember, strangulation by dangling’s less pleasant than a clean drop and a broken neck…

    Or maybe you might want to reconsider what might give you hope. Some larger straws to grasp at …

  • David B. Benson // April 29, 2009 at 2:15 am | Reply

    Biochar review:

    http://terrapreta.bioenergylists.org/node/578

  • Ray Ladbury // April 29, 2009 at 12:29 pm | Reply

    TCO, when you stop trying to make progress yourself and count on your opponents’ flaws, you’ve already lost the game. Not that you were ever in it.

  • Kevin McKinney // April 29, 2009 at 12:39 pm | Reply

    There’s a certain commonality to many criticisms one hears: “biochar isn’t the answer,” “hybrid technology isn’t the answer,” “conservation isn’t the answer,” “wind power isn’t the answer.”

    They are all true, and all a bit beside the point. There isn’t a silver bullet, but there is a role to be played by all the above and more.

    In so far as there is “an answer,” it’s the design, development and deployment of efficient systems integrating multiple technologies and systems in complementary fashion.

  • george // April 29, 2009 at 2:03 pm | Reply

    Kevin McKinney:

    There’s a certain commonality to many criticisms one hears: “biochar isn’t the answer,” “hybrid technology isn’t the answer,” “conservation isn’t the answer,” “wind power isn’t the answer.”

    You are completely correct that there is no one answer and that “design, development and deployment of efficient systems integrating multiple technologies and systems in complementary fashion” are critical.

    But I think the unstated question “How can we use technology to supply the necessary energy to maintain (or even “grow”) our current lifestyle?” is not the critical one.

    The really important questions (or what should be, at least) in the case of climate change and energy use in general are more philosophical than technological.

    What kind of world do we wish to leave for our grandchildren and their grandchildren?

    Is our current (consumptive, wasteful) lifestyle worth maintaining?

    At what cost to present and future generations?

    What would it take in the way of energy and resource use to support a truly sustainable world economy?

    How do we get there from where we are right now?

    Trying to produce enough energy to maintain our current economy (to say nothing of one that is a much bigger version of the same) might not be possible even with an efficient combination of alternative technologies, but it might be with a vastly different economy (and no, I’m not talking about one where we all have to sit freezing in a cold dark room in the dead of winter).

  • Ray Ladbury // April 29, 2009 at 2:25 pm | Reply

    Kevin McKinney,
    Exactly!!! There isn’t ONE answer, there are about 6.5 billion sets of answers–one for every person. It’s up to each of us to take effective steps and not wait for gummint or industry or the market to lead.
    It may well be that even the social constructs needed to deal with this crisis do not yet exist. OK, we can invent social constructs. The corporation is only a century old. Democracy is less than 2 and a half centuries old and science only a century and a half more.
    The greatest challenge we face is to embrace the challenge. If we do that there is a chance we can make a decent start on leaving our progeny with a civilization we would at least recognize as civilized.

  • Chris // April 29, 2009 at 3:42 pm | Reply

    The game is to try and keep an open mind

  • george // April 29, 2009 at 3:44 pm | Reply

    Only thing that gives me hope is Mann being a phys-math dropout and the local oompa here being a red toolbox user

    Nature does not give a damn about what degrees you have or about the color of your “toolbox” (or even if you have one)

    Nature also does not give a damn if you give a damn.

    And “hope” of what?

    “Winning”?

    The idea that climate science is like some high school (or even college) “debate” with two sides pitting their wits and rhetorical skills against one another is more than a little silly (absurd, really).

  • BBP // April 29, 2009 at 5:26 pm | Reply

    Ray, have you looked at the recent state of the ‘cold fusion’ research? You might try http://en.wikipedia.org/wiki/Cold_fusion#DOE2004r for a summary. Yes, there is no theory that would explain it; but if this is new physics rather than widespread experimental error, you wouldn’t expect current theory to explain it. The fact that Pons and Fleischmann oversold (to put it mildly) their results should not discredit current work by other researchers.

    TCO, I’m skeptical as well, for some of the reasons you listed, but not completely dismissive. There is still work being published/presented in the scientific mainstream, and while I suspect it won’t pan out in the long run, I don’t think we should rule it out yet.

  • Hank Roberts // April 29, 2009 at 6:42 pm | Reply

    Yeah. Climatologists are “pro” global warming like Pasteur was “pro” smallpox. This is not understood clearly by those who question their honesty.

  • Hank Roberts // April 29, 2009 at 7:17 pm | Reply

    Speaking of Pasteur — a good heartfelt plea about information hygiene, written apropos the current influenza, but painfully apt for climate science:

    http://nielsenhayden.com/makinglight/archives/011248.html#011248

    Excerpt follows:

    … The author then does a very creditable job of explaining how to judge the reliability of an online source. He begins:

    So when you’re sending around something you read about the flu, please, stop and think. Don’t forward wildly speculative ideas about government conspiracies to your friends or to the world. When someone proposes an idea, think about whether it makes sense.

    He gives a list of suggested questions to ask about a source, using as his example a wildly irresponsible article by Paul Joseph Watson: Medical Director: Swine Flu Was “Cultured In A Laboratory”. It appeared at InfoWars.com—a site which, as Jim would put it, frequently wanders into the Tinfoil Hat Mountains, gets trapped by snow, and eats its own dead.

    As usual, I recommend you read “False Rumors” in its entirety.

    I’ve been seeing some strange reactions—elsewhere not on Making Light—from people who apparently want to deny that it’s possible to have general, reasoned, public discussions of emotionally charged current events. I don’t know what their problem is, but they are wrong, and I defy them.

    We are reasonable human beings. We can seek out reliable information, and have useful, reasonable conversations about issues that matter to us. So what if we’re not all experts on every possible subject? We can help each other understand. If it’s not our turn to be the expert this time, it may be our turn next week; and a good discussion makes everyone smarter. …”

  • t_p_hamilton // April 29, 2009 at 7:31 pm | Reply

    BBP:”Ray have you looked at the recent state of the ‘cold fusion’ research? You might try http://en.wikipedia.org/wiki/Cold_fusion#DOE2004r for a summary. Yes there is no theory that would explain it;”

    This document gives no reason to think that there is an “it” to explain. Cold fusion is in the same boat as when Pons and Fleischmann proposed it, and for the very same reasons.

  • Ray Ladbury // April 29, 2009 at 7:31 pm | Reply

    BBP, It is horse puckey. Every objective study of it has concluded it is horse puckey. By all means rule it out.

  • t_p_hamilton // April 29, 2009 at 8:28 pm | Reply

    michel said:”Ray, it was a logical point. If, if it failed to warm or even cooled for the next 30 years, however unlikely that may be, if it happened, we would not abandon quantum mechanics or CO2 as a GHG. No way. But we would start to think maybe there was something wrong with AGW theory.”

    AGW is what MUST result from well-established physics. I think you don’t comprehend what goes into these models, how they are validated, or their accuracy.

    The only way someone is going to change the forecast (and produce the same hindcast predictions which validate the models) is to propose one of several things:

    1) which well established physics have been neglected (probability nil, although it is amusing to see deniers make ignorant statements which show they have no clue about the models)

    2) which well established physics is wrong (probability zero)

    3) numerical approximations are too crude (which of course deniers are incapable of doing) (probability now nil)

    Arrhenius captured the essence of the AGW phenomenon from CO2 increase. How can we know this? Because every time the models have become more “realistic”, the same global result keeps coming out. This same result (minimum 2 degree Celsius sensitivity to CO2 doubling) is independently verified by paleoclimate data.

    Anybody who entertains the notion FOR NO PHYSICAL REASON that temperatures “might” show a turn away from AGW is precisely the same as a gambler who thinks that, because they had a run at roulette, they “might” be lucky now. The correct response is to stop gambling – because the rules say you must lose in the long run.
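    As a rough guide to the numbers behind that comment, the commonly used simplified expression for CO2 forcing (a standard approximation in the literature, not quoted above) is logarithmic in concentration,

    $$\Delta F \approx 5.35\,\ln\!\frac{C}{C_0}\ \mathrm{W\,m^{-2}}, \qquad \Delta T_{\mathrm{eq}} \approx S\,\frac{\ln(C/C_0)}{\ln 2},$$

    so a doubling of CO2 gives roughly 3.7 W/m^2 of forcing, and a sensitivity S of 2 °C per doubling, the minimum t_p_hamilton cites, maps any concentration ratio onto an estimate of the equilibrium warming.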

  • Deech56 // April 29, 2009 at 8:30 pm | Reply

    This week’s Nature appears to have some interesting climate-related articles.

  • TCO // April 29, 2009 at 10:18 pm | Reply

    I honestly believe that climatology is politicized from both the left and right. Remember that Mann’s site was supposed to be non-political and then they disappeared their policy to do an interview with DKos. Similarly Tammy used this climate blog for Obama touting, but refused to allow counter-Obama comments.

    On the other side, my side, you have guys who essentially cherry pick their criticisms (McI, etc.).

  • Hank Roberts // April 29, 2009 at 10:31 pm | Reply

    PS — the above excerpt is from a blog post written about, and quoting from, this original article:

    http://canonical.org/~kragen/costs-lives.html

  • dhogaza // April 30, 2009 at 12:20 am | Reply

    I honestly believe that climatology is politicized from both the left and right.

    And you also honestly believe that climate science is a load of hooey, and that you’re the brightest guy in the blogosphere.

    Shows what your honest beliefs are worth, eh?

  • Ray Ladbury // April 30, 2009 at 12:44 am | Reply

    BBP, Sorry to be flippant–I was in a hurry. I would say that the best evidence that cold fusion is a load of fetid dingo’s kidneys is to be found in the fact that none of the experimenters has been killed by his experiment. Any reasonable fusion reaction gives off some really nasty high-energy neutrons, so if enough fusion reactions were occurring to separate the effect from background, somebody would be dead. Let me know when you see something published in Phys. Rev. Lett. posthumously.
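    A back-of-envelope estimate of what “somebody would be dead” means quantitatively (a rough sketch, not Ray’s calculation): for D+D fusion, the reaction relevant to a palladium/heavy-water cell, about half the reactions yield a 2.45 MeV neutron and the average energy release is roughly 3.6 MeV per reaction, so one watt of genuinely fusion-generated excess heat would require

    $$\frac{1\ \mathrm{W}}{3.6\ \mathrm{MeV}\times 1.6\times10^{-13}\ \mathrm{J/MeV}} \approx 1.7\times10^{12}\ \mathrm{reactions/s} \;\Rightarrow\; \sim\!10^{12}\ \mathrm{neutrons/s},$$

    an unshielded source that would deliver a hazardous dose to anyone standing at the bench within hours, not something that could go unnoticed over weeks of experiments.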

  • TCO // April 30, 2009 at 1:28 am | Reply

    Is BBP one of y’all’s or one of mine? Want to know who takes shame of the nutter.

  • Lazar // April 30, 2009 at 8:39 am | Reply

    TCO,

    I’ve been talking with Lucia about short term trends, she’s claiming that sensitivity analysis to inclusion/exclusion of a single year or to changing temperature datasets should not be done, because of a high probability of type II errors in short data series with high noise. I eventually understood what she’s saying and agreed then took half a step back… I think she goes too far. Care to have a look?

  • Hank Roberts // April 30, 2009 at 2:46 pm | Reply

    http://images.ucomics.com/comics/db/2009/db090430.gif

  • BBP // April 30, 2009 at 5:28 pm | Reply

    I had assumed that by using ‘cold fusion’ in quotes, by referencing LENR and new physics, as well as the link to the Wikipedia article, it would be apparent that I was not talking about Pons and Fleischmann’s original hypothesis – which I agree is untenable. I apologize if this was not clear.
    My point is simply that there are a large number of experiments showing excess heat (and the 2004 DOE review members were split on whether this was ‘compelling’), and also several reports of nuclear fusion, and to quote the wikipedia article directly, “Two-thirds of the reviewers…did not feel the evidence was conclusive for low energy nuclear reactions, one found the evidence convincing, and the remainder indicated they were somewhat convinced”. There is also the recent SPAWAR report of high energy neutron detection. There are 2 basic possibilities:
    1 – Widespread experimental error, or
    2 – Something new and interesting going on

    At this point it is still up to the proponents to prove their claims with better experiments, but I don’t think the second option is so unlikely as to be dismissed out of hand. That being said, even if the second option is correct, it doesn’t mean that this would be an effective energy source.

    TCO – it will probably make you happy to know that I think the probability of point 2 above is higher than the probability of ‘your side’ being right about AGW :)

  • TCO // April 30, 2009 at 5:33 pm | Reply

    Lazar:

    Do you want me to actually read and parse that Lucia posting? I find it rather tedious to do so, since there is no synthesis and since I end up basically having to go back and read her entire blog full of evolving analyses.

    My general (honest) take on Lucia is that she is bright enough to do decent work, but that you have to watch her. She will sometimes put her opponents to unfair tests. Often when pinned down, she devolves to a semantic argument.

    I don’t think she has a serious argument to say that recent temp trends are sufficient to invalidate GCMs (the converse can not be validated either, however.) If she did have such an argument she could publish it. Instead, she doesn’t seem to really engage on what the data actually tells us versus models, but the semantic arguments of what she thinks opponents said.

    Sorry…I realize that I did not actually engage on the topic of pinning with a single year. Do you really require me to exert skull sweat on it?

  • Hank Roberts // April 30, 2009 at 5:36 pm | Reply

    Zing!
    http://energycommerce.house.gov/Press_111/20090429/aceshearing.pdf

  • BBP // April 30, 2009 at 6:18 pm | Reply

    Sorry – brain burp, in my previous post I meant several reports of nuclear reactions, not nuclear fusion.

  • michel // April 30, 2009 at 7:11 pm | Reply

    Well, we have news in the area of the electric car in the UK.

    We can now buy a Citroen C1 electric. It seems to sell for £16,850 and they expect to sell around 500 in the next 12 months and between 2,000 and 4,000 vehicles in 2010.

    It gets ‘up to’ 70 miles on a charge – presumably in winter with a heater on, this would be at least halved. But 35 on a charge will do a lot of the trips we presently do.

    The C1 one liter gasoline version gets 63 to the imperial gallon and costs roughly half what the electric version goes for. Hybrids are about the same mileage as this.

    What are we to make of the general issue of electric cars, lifestyle changes, and lowering emissions in the light of this?

    First, that probably we could live our current lifestyles very little changed with an electric C1. We’d have to accept smaller cars, and less long distance travel by car, but the basic drive and shop lifestyle could continue, at least technically.

    It probably could not continue economically however. If we roughly doubled the price of cars, we’d have quite a lot of the population simply unable to afford them. This would lead to quite dramatic lifestyle changes, including the extinction of the out of town shopping center. Double the price of cars in real terms, and the world will change. Yes, they will cost less to run – though you have to consider changing the batteries every so often as well, so they will not be as much cheaper as at first appears. But the capital cost will put them out of reach for a lot of people whose business is now an essential ingredient in the mall culture.

    Second, if you think 65 per imperial gallon is about right, and that we can live with that level of emissions from private transport, and about the present use of cars, then it’s simple. All we have to do is make cars with a minimum of 60mpg the only ones saleable. If Citroen can produce a perfectly decent small car with these specs for this price, there is no reason, if we are confronting a genuine species emergency, for us not to enact laws to make everyone drive such a car. If it is going to make the difference.

    Third, the forecast take-up is tiny. This will make no difference to UK emissions. Notice what else has just happened in the UK. They propose building 4 more coal-fired power stations. With carbon capture of course. I read, correct me if I’m wrong, that what will be captured is 25% of the emissions.

    So, we will sell a couple thousand electric cars, we’ll discontinue home insulation grants, we’ll pay people a couple thousand to buy new gasoline-driven cars so as to ‘kick start the economy’ and ‘save jobs’, and we’ll do the equivalent of building three instead of four new power stations.

    And we will claim to believe that this is the greatest threat to humanity since records began, and that we the British are doing our part.

    It can’t be true, can it? This is just not how people who believe themselves to be under imminent threat of catastrophe behave.
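    For readers more used to metric or US units, the 63 miles per imperial gallon quoted above works out (roughly) as

    $$63\ \tfrac{\mathrm{mi}}{\mathrm{imp\ gal}} = \frac{63 \times 1.609\ \mathrm{km}}{4.546\ \mathrm{L}} \approx 22\ \mathrm{km/L} \approx 4.5\ \mathrm{L}/100\,\mathrm{km} \approx 52\ \mathrm{mpg\ (US)},$$

    which is consistent with michel’s remark that current hybrids return roughly the same figure.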

  • dhogaza // April 30, 2009 at 10:32 pm | Reply

    The C1 one liter gasoline version gets 63 to the imperial gallon and costs roughly half what the electric version goes for. Hybrids are about the same mileage as this.

    No. The Toyota Prius is a four-door sedan, considerably larger than the C1.

    Put hybrid technology in the C1 and its mileage would go up by the same 30-plus percent that the Prius gets over the equivalently-sized non-hybrid Toyota sedan.

    You continuously compare apples with oranges.

    This is just not how people who believe themselves to be under imminent threat of catastrophe behave.

    True enough. And you’re doing your bit with your own personal disinformation campaign to keep it that way.

  • Phil Scadden // April 30, 2009 at 10:44 pm | Reply

    Dave A. Just looking at a colleague’s paper which will appear in Science this week. Ties down timing on NZ glacial advance/retreat cycles pretty well. So if what we see now is recovery from the LIA, then how come it’s warming here and yet we didn’t have a LIA?

  • TCO // April 30, 2009 at 11:36 pm | Reply

    HAHA!!!

    You all have a cold fusion nutter. We only have creationist nutters.

  • Lazar // May 1, 2009 at 12:14 am | Reply

    TCO,

    semantic argument

    I noticed.

    She’s rephrased to make it clear that she doesn’t object to sensitivity analysis per se. So we’re all cool now.

  • Ray Ladbury // May 1, 2009 at 12:18 am | Reply

    BBP
    I’ll take:

    1 – Widespread experimental error,

    for $100, please.

    Seriously, though. I’ll entertain the possibility if you can explain to me:
    1) How you overcome the degenerate electron pressure as the electron clouds of the hydrogen start to penetrate. (these are atoms, not ions, after all)
    2) How you overcome the coulomb repulsion of the nuclei

    3) How you get the nuclei within a femtometer or so of each other

    4) How you produce enough energy to heat water without the high-energy neutrons killing you.

    That’ll do for starters. The thing is that such a “cold fusion” or LENR or whatever not only requires “new physics,” it also conflicts with well accepted old physics. As to the DOE reviewers, I remember a press conference at Fermilab to announce the discovery of the top quark. They had some under-secretary talking about how this discovery would somehow usher in a new era of efficient lighting. (Huh?!) So, you’ll forgive me if I’m not too impressed with a DOE reviewer. American Physical Society has pretty much eviscerated every claim.

    It’s a truism that extraordinary claims require extraordinary evidence. LENR definitely qualifies as an extraordinary claim.
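    To put rough numbers on objections 2 and 3 above (a standard textbook estimate, not Ray’s own figures): the Coulomb barrier between two deuterons at nuclear-contact distances is of order

    $$V(r) = \frac{e^2}{4\pi\varepsilon_0 r} \approx \frac{1.44\ \mathrm{MeV\cdot fm}}{2\ \mathrm{fm}} \approx 0.7\ \mathrm{MeV},$$

    while room-temperature thermal energies are about kT ≈ 0.025 eV, some seven orders of magnitude smaller. That gap is what any “cold” mechanism has to bridge, whether by enormous temperatures, quantum tunnelling at useful rates, or some exotic screening effect.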

  • dhogaza // May 1, 2009 at 3:55 am | Reply

    You all have a cold fusion nutter. We only have creationist nutters.

    At least the cold fusion nutters are trying to measure and experiment.

    What do your creationist nutters have? 6,000 years vs. 4.5 billion years on what lab experiments subjected to what CIs?

    One side at least tries to do science, no matter how wrong they are.

    Which side is that, TCO? Your side with its creationists and Anthony Watts and (chuckle chuckle) Stephen Goddard from The Register (who now posts regularly as the apparent “Head Scientist” at WUWT)?

  • michel // May 1, 2009 at 8:52 am | Reply

    The issue with hybrids in the UK is a bit different from the one dhogaza raises.

    It is quite true that the Prius is a larger car than the C1, but gets about the same mileage on the tests. It also costs three times as much to buy.

    This would matter if we were writing for Consumer Reports. But the question is not a sort of consumer reports question about which one to buy. The question is whether it is ecologically and economically feasible to continue the shop and drive model of living, and what role hybrids and electrics play in the evolution of that issue.

    It makes no difference to that question, that the Prius gets the same mileage as the smaller C1. The issue is that 65mpg (imperial) is not enough to avert the necessity for major changes to the shop and drive social model. The issue is also that even if it were, paying three times as much (or twice as much) for cars would in any event doom that model. The issue is also that we could do it, if it were all that’s needed, and fairly cheaply with existing non-hybrid technology and a bit of coercion. But it is not all that is needed.

    Face it, the current model of shop and drive, of spread out suburbs dependent on the car, and shopping malls as recreation, is doomed. Hybrids will not save it.

    The sooner we get to grips with the social implications of this, and in particular how to manage the transition without destroying the standard of living of the less well off, and the sooner we stop fooling ourselves that minor adjustments to auto mileage at huge capital cost will do the trick, the better. This is not disinformation. This is just thinking straight.

    If you want to argue this point, there are three numbers that you need to produce. The first is what average mileage is necessary to meet the emission reductions that are required. The second is what the selling price of the vehicles that achieve this mileage can be. The third is, what if any reduction in annual miles driven is envisaged.

    At the moment it seems that dhogaza and others are arguing that what we have to do is double or triple the price of cars, get mileage to about 65mpg (though why we have to triple the price of cars and use new complicated technology to do this is a total mystery) and by some miracle, total miles driven will stay about the same, and we can all keep on shopping.

    Like I say, this is denialism. You can see how much difference purchase costs make by seeing the effects of the $3,000 car purchase credit in Europe, and the forecasts for the credit in the UK. There is no way that the world is going to carry on the way it is, but just move to hybrids. It’s a completely crazy idea.

  • Philippe Chantreau // May 1, 2009 at 9:23 am | Reply

    Michel, if it were up to me, I’d get the Citroen that does this:

  • Lazar // May 1, 2009 at 11:01 am | Reply

    TCO (and anyone who cares),

    I won’t ask you to read through the post at Lucia’s, but if you can spare the time… The gist of the argument is;

    Eight years of observed temperature data are compared with modelled trends; the test rejects at alpha = .05. But censoring the last year of data, the test fails to reject.

    1) The series is short, so the noise-to-signal ratio is high, which gives a high probability of type II errors. Shortening the series by one year increases noise-to-signal and increases the chance that the failure to reject is a type II error. Therefore, suggesting that the rejection with eight years’ data is a type I error, based on the failure to reject with seven years’ data, increases the probability of a type II error.

    2) Alternatively, if one states based on eight years’ data that the null hypothesis is rejected with 95% confidence, then it is shown that the result depends on one year of data (2008) and that year is somewhat below the trend line… I feel that should increase the probability of the result occurring by chance. That, combined with a large number of people testing a diverse number of short-term trends using different tests, for a number of years ever since “global warming stopped in 1998” became currency.

    What do you think? The argument is based on Lucia’s response to this post on RealClimate.

    2008 compared to the previous four years is in a cool phase of ENSO, combined with an abnormal TSI low. This observed trend is being compared with models that a) do not predict ENSO b) do not adequately capture ENSO variability c) do not model TSI variability. And this is meant to cast doubt on their ability to predict long-term trends? Sounds silly to me.
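    To make the sensitivity check Lazar describes concrete, here is a minimal sketch (synthetic data with made-up parameters; not Lucia’s actual code and not the real GISS or HadCRUT series) of fitting an eight-year trend and then refitting with the final year censored:

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_trend(y, dt=1/12):
    """OLS slope (deg C per year) and its naive standard error."""
    t = np.arange(len(y)) * dt
    X = np.column_stack([np.ones_like(t), t])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1], np.sqrt(cov[1, 1])

# synthetic "observations": a 0.02 C/yr trend plus 0.1 C monthly white noise
n = 8 * 12
y = 0.02 * np.arange(n) / 12 + rng.normal(0.0, 0.1, n)

slope8, se8 = ols_trend(y)        # all eight years
slope7, se7 = ols_trend(y[:-12])  # last year censored

print(f"8-year trend: {slope8:+.3f} +/- {se8:.3f} C/yr")
print(f"7-year trend: {slope7:+.3f} +/- {se7:.3f} C/yr")
```

    With noise of that size the two slopes, and their naive standard errors (which ignore autocorrelation and so understate the real uncertainty), can differ noticeably, which is the crux of the type I versus type II question in the comment above.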

  • Ray Ladbury // May 1, 2009 at 1:13 pm | Reply

    Michel, I wonder why you are fixated on hybrids. No one is saying that they are “the solution”. However, they could be part of a solution. Not everybody wants to drive a C1. Not everyone is so constrained by cost that they have to. Should not there be green solutions for the wealthy, too, or should we let them continue to drive Hummers? This is not a sprint. It’s a marathon. The way to win a marathon is not to sprint from the start, but to start running at the gun so you can sprint to the finish.
    Anything that decreases carbon emissions NOW is part of the solution.

  • TCO // May 1, 2009 at 1:16 pm | Reply

    Lazar: seems a little labored. Would think that if you are doing significance testing properly the chance of a “bad year 8” ought to already be included, but we still get an alpha of 0.05.

    I would worry a little about the opportunism associated with the initial year (including even the decision to bring things up only for the 5% of the time that things break down, touting those times).

    Another way to think about it is, let’s say we are apolitical mathematical Martians who just want to understand things: is the recent temp record versus models significantly insightful? The sort of thing you would write a paper about since you’ve now learned something more about systems (not because you want to make models look good/bad.)

  • TCO // May 1, 2009 at 1:18 pm | Reply

    I think intuitively the lack of ENSO predictions means that you can’t really evaluate the models based on such a short run. Would have to wade through some math to really engage on it though.

  • george // May 1, 2009 at 2:14 pm | Reply

    Lazar on short period global temperature trends

    Sounds silly to me.

    I think that about sums it up: “Silly.”

    Short trends can (and do) go up and down when you change the time period slightly –by a year either way (minus or plus), for example.

    In some cases, subtracting/adding a single year to an 8 year period can even change (and has changed) the apparent global temperature trend by an amount on par with the trend over the past few decades (about 0.02 deg. C/yr).

    But the calculated trend is only apparent (in fact, we can’t know what it actually is) and the change resulting from the slight time period adjustment really means nothing, because the change lies within the considerable uncertainty attached to such short (8 or 9 year) trends (about 0.03 deg C/yr for 8 year trends, for example).

    Tamino has laid most of this bare in this post, including the reasons why some people tend to go so wrong in their conclusions about the recent global temperature trend.

    It’s really a waste of time to debate the “meaning” of short temperature trends, because they have none to speak of.
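    A quick way to see why short trends carry uncertainties of that size is to simulate them. The sketch below (synthetic AR(1) noise with purely illustrative parameters, not fitted to any real temperature series) generates many 8-year realizations around a fixed 0.02 C/yr trend and looks at the spread of the fitted slopes, which comes out comparable to the underlying trend itself:

```python
import numpy as np

rng = np.random.default_rng(1)
n_months, n_sims = 8 * 12, 2000
phi, sigma = 0.7, 0.12      # assumed AR(1) coefficient and innovation s.d. (deg C)
true_trend = 0.02           # deg C per year

t_years = np.arange(n_months) / 12
slopes = np.empty(n_sims)
for i in range(n_sims):
    # build AR(1) "weather" noise around the fixed trend
    noise = np.zeros(n_months)
    eps = rng.normal(0.0, sigma, n_months)
    for k in range(1, n_months):
        noise[k] = phi * noise[k - 1] + eps[k]
    y = true_trend * t_years + noise
    slopes[i] = np.polyfit(t_years, y, 1)[0]

print(f"1-sigma spread of fitted 8-year trends: {slopes.std():.3f} C/yr")
```

    With parameters in this range the spread of fitted 8-year slopes is of the same order as the multi-decadal trend, which is george’s point: such short trends carry very little information.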

  • JCH // May 1, 2009 at 2:56 pm | Reply

    If a model “fails” to predict when a La Nina will happen, does it follow that the model does not capture ENSO variability?

  • Barton Paul Levenson // May 1, 2009 at 3:33 pm | Reply

    It’s not just creationists on the denialist side. I’ve run across one who accepts Velikovskian astronomy, one who insists that the sun is a supernova remnant and is not powered by fusion, and at least five who think the greenhouse effect violates the second law of thermodynamics.

  • george // May 1, 2009 at 5:18 pm | Reply

    “I’ve run across … at least five who think the greenhouse effect violates the second law of thermodynamics.”

    Only five?

    I don’t think there are enough numbers in my calculator to count all the ones I’ve run across.

  • Ray Ladbury // May 1, 2009 at 5:34 pm | Reply

    Lazar,
    I think by definition, if your result is sample-dependent, you can’t claim a statistically significant trend. If you had some dynamical reason for your start and end points, that would be something different, but picking 8 years out of the record at random in a noisy system ought to get you laughed off the stage. And no, I don’t consider the fact that it’s the last 8 years justification.

  • dhogaza // May 1, 2009 at 5:46 pm | Reply

    It is quite true that the Prius is a larger car than the C1, but gets about the same mileage on the tests. It also costs three times as much to buy.

    In part because it’s a larger car, in part because it’s a somewhat higher-end car, but also very much because Japanese cars are subject to duty while cars built in the EU aren’t. The hybrid bit makes it about 10% more expensive than an equivalent non-hybrid car, no more.

    The new redesigned-from-scratch Honda Insight is going to sell for a bit under $20K here, base price. Lifetime ownership costs will be considerably cheaper than for an equivalent gas IC powered automobile.

    As I’ve posted several times, the State of Oregon Motor Pool has found that actual operating costs for hybrids over the first 100,000 miles are about 40% less than for an equivalent non-hybrid automobile. That more than covers the 10% or so higher purchase price.

    The issue is also that even if it were, paying three times as much (or twice as much) for cars would in any event doom that model.

    How about a 10% more initial investment, with a total lifetime cost over 100,000 miles actually being considerably less than for the non-hybrid alternative?

    No one is arguing in favor of continuing to build sprawling communities that force an auto-centric lifestyle on people.

    But most people are going to own cars and drive some, no matter what. There are autos in the cramped and inconvenient old center of Amsterdam. More bikes, but plenty of cars. There are a huge number of autos in downtown areas of Spanish cities like Madrid, Valencia and Seville. I’ve even seen a few in London. Quite a few, actually. Seville’s quite bike-friendly, but a bike commute in much of London or Madrid risks an early death, though both have good mass transit systems.

    A multi-pronged approach is necessary.

    Oregon (my home) has the highest per-capita use of hybrid automobiles in the US, due to their extreme popularity in Portland.

    Portland also has the highest percentage of bike commuters in the country (over 10% of downtown trips during the warmer, drier months are by bicycle).

    We’ve also been heavily investing in light rail over the past couple of decades. Not just building it, but using zoning regulations and financial incentives to get support services and housing clustered around light rail stations, so people who mostly drive to commute and shop can switch to light rail for their commute, and walk to shop, if they purchase in one of the new communities.

    We’re opening up a new light rail line this month or next.

    We’ve also built tram lines in the city center, and just got approval (federal money) for another across the river from the center.

    Oregon and Washington have each invested in track upgrades on the Portland-Seattle route so we can run medium-speed Spanish Talgo trains, and Obama recently recognized this corridor as being a target for further speed upgrades.

    And Portland was the first US city to meet its share of the US Kyoto target.

    So, believe me, we understand the need for a multi-prong approach to limit our carbon emissions. Not only do we understand it, but we’ve been spending our tax dollars on it since before Kyoto.

    Hybrids are one prong of that multi-pronged effort …
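    A back-of-envelope version of the purchase-premium versus operating-cost argument above; the base price and per-mile cost here are hypothetical placeholders, and only the 10% premium and 40% operating saving over 100,000 miles come from the comment:

```python
# Hypothetical numbers for illustration; only the 10% purchase premium and the
# 40% operating-cost saving over 100,000 miles are taken from the comment above.
base_price = 20_000      # assumed non-hybrid purchase price ($)
cost_per_mile = 0.25     # assumed non-hybrid operating cost ($/mile)
miles = 100_000

conventional = base_price + cost_per_mile * miles
hybrid = 1.10 * base_price + 0.60 * cost_per_mile * miles

print(f"conventional: ${conventional:,.0f}")
print(f"hybrid:       ${hybrid:,.0f}")
print(f"difference:   ${conventional - hybrid:,.0f} in the hybrid's favour")
```

    On these made-up numbers the 10% premium is recovered several times over, which is the shape of the Oregon fleet result dhogaza describes; the actual figures would of course depend on the real prices and per-mile costs.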

  • Ian // May 1, 2009 at 5:51 pm | Reply

    Lazar,

    I think what you have is reasonable. Discussing “short trend significance” reminds me of the problem of how to correct for multiple comparisons – no one takes the most conservative approach or the most liberal (e.g., .05 alpha = 5 errors per 100 tests). In terms of wider meaning and scientific findings, the lack of a formal solution to this question shows how narrowly focused it is. Interpreting the sig test, under any range of criteria, needs more info to take it past the point of just an interesting suggestion.

    So, I’m agreeing with your overall point.

  • michel // May 1, 2009 at 7:12 pm | Reply

    How about a 10% more initial investment, with a total lifetime cost over 100,000 miles actually being considerably less than for the non-hybrid alternative?

    Yes, that would be a no-brainer. I’d buy one. I don’t think we’ll be buying an all-electric C1 for £17k though.

    Dedicated cycle ways seem to be the key to bike usage. That is what makes Amsterdam, or any Dutch city, possible. London is problematic. If you can trace out a back-street route, it’s OK. The classic accident in London is on a two or three lane fast one-way route, left-turning truck, cyclist on the inside. Usually fatal.

    Holland could actually drop car ownership by 75% and not suffer too much. They have maintained all the infrastructure and policies from the pre-car era largely intact, small local rail lines, bike parking places, dedicated bike ways.

    Oregon sounds like it is making real progress. And I agree, fleet experience is a good, robust indicator of real world operating costs.

  • David B. Benson // May 1, 2009 at 8:53 pm | Reply

    JCH // May 1, 2009 at 2:56 pm — Huh?

    Yes.

  • Dave A // May 1, 2009 at 8:55 pm | Reply

    Phil Scadden,

    Obviously I am not privy to your pal’s forthcoming paper but haven’t some glaciers in NZ been growing whilst others have been declining?

    As for the LIA, is not the earth an interconnected system? If it was relatively cool in the NH for a few hundred years, would this not also have affected the SH to some degree?

  • BBP // May 1, 2009 at 9:12 pm | Reply

    Ray,
    I think we’ve probably covered this about as much as it warrants in this thread. I agree it’s an extraordinary claim that doesn’t have the evidence required yet – but I’m not willing to bet that the evidence won’t be developed, and I think there is enough evidence to warrant further investigation.
    I also came across this paper, http://www.newenergytimes.com/v2/library/2006/2006Widom-UltraLowMomentumNeutronCatalyzed.pdf which would seem to cover your three theoretical objections. I don’t have any condensed matter expertise and Google Scholar doesn’t turn up many other papers referencing it, so it could all be bullsh**. It doesn’t help that at http://physicsandphysicists.blogspot.com/2009/04/aps-did-not-endorse-scientist-in-cbss.html the co-author says “According to our theoretical work, weak-interaction LENRs appear to be better than fusion. That is potentially revolutionary. However, LENRs also gore many long-standing sacred cows and threaten a myriad of vested scientific and commercial interests” – that sort of comment always makes me suspicious.

  • dhogaza // May 1, 2009 at 9:32 pm | Reply

    Dedicated cycle ways seem to be the key to bike usage. That is what makes Amsterdam, or any Dutch city, possible

    I’ve jarred my rear on plenty of cobblestone streets in the old center of A’dam. However, auto traffic is at a very low level and yes, along more busy thoroughfares there are provisions for bikes.

    I’ve been told, though, that what really makes it safe is that if you hit a cyclist, you’re pretty much assumed guilty unless overwhelming evidence convinces the authorities that you aren’t. Bikes can essentially do no wrong.

    It’s a real treat to go to a small town in Holland by train, rent a bike, and cruise along the dedicated bikeways through the country.

    In Portland, money has been spent to make some of the bridges across the river that bisects the city very bike-friendly. Marked bike lanes help. Putting up a bunch of “bike staples” (U-shaped sidewalk features that you can lock your bike to), bike shelters (it rains here, as bad as the UK), etc help, too. Recently in my neighborhood, the city built bike shelters with the walls at the end painted with detailed walking and biking maps, making clear which streets are easiest/safest to bike along. A bicycle commuter map is also available for sale.

    All these things help. The city used to have a separate office for the promotion of bicycle commuting, but budget cuts forced the city to fold it back in with the normal transportation people. However, the city still promotes bicycle commuting heavily.

  • TCO // May 1, 2009 at 9:39 pm | Reply

    In general the right has way more anti-science nutters than the left (creationists, etc.)

    There are a few places where the left has the refusal to face science (GMOs, racial differences).

  • Lazar // May 1, 2009 at 10:05 pm | Reply

    Thanks for the responses.

    JCH,

    If a model “fails” to predict when a La Nina will happen, does it follow that the model does not capture ENSO variability?

    … no, capturing variability would be showing a similar pattern of ups and downs in terms of variance, autocorrelation, spatial patterns etc., not a specific prediction of when, how much and for how long an up or down will occur.

    TCO,

    Another way to think about it is, let’s say we are apolitical mathematical Martians who just want to understand things: is the recent temp record versus models significantly insightful? The sort of thing you would write a paper about since you’ve now learned something more about systems (not because you want to make models look good/bad.)

    Agreed, I think the lack of saying anything about long term change is the killer argument… nor does it say what the results mean short term… if they can mean anything, as george suggests they can’t.

  • David B. Benson // May 1, 2009 at 11:34 pm | Reply

    Dave A // May 1, 2009 at 8:55 pm — In general SH follows NH in the ups and downs, but moderated by the much larger proportion that is ocean. So perfectly possible that during LIA there was little or no detectable effect (essentially none in Antarctica) although there was cooling in Patagonia.

  • Hank Roberts // May 2, 2009 at 1:42 am | Reply

    > refusal to face the science

    Oh, but you also have to look at the research _on_ the science, before claiming sides:
    http://scholar.google.com/scholar?q=%2Bresearch+%2Bfunding+%2Bsource+%2Bresult

  • Ray Ladbury // May 2, 2009 at 2:06 am | Reply

    BBP, A quick scan of the Widom paper didn’t yield anything very convincing. The sorts of reactions they are talking about would be very low cross-section and energetically unfavorable. In general, especially at low atomic mass, nuclei like to keep the same number of protons and neutrons.

    The paper is very speculative–more along the lines of “well, if it’s happening, maybe this could explain it.” The more parsimonious hypothesis is that it ain’t happening. I didn’t see anything that really answered the question, and I didn’t see anything that explained how being in a metal hydride really catalyzed the process (nothing convincing anyway). There is a reason why you only get nucleosynthesis in stellar interiors–it ain’t easy. I’m afraid this is a field that just pegs my bullshit meter. Whether you call it cold fusion or LENR, it still doesn’t add up.

  • Ray Ladbury // May 2, 2009 at 2:12 am | Reply

    BBP,
    Another indicator of bullshit: Look at the references. About 25% of them are graduate-level texts and he managed to work in a gratuitous citation of Feynman. Not an indication of a field where intelligence is bubbling to the surface.

  • Rattus Norvegicus // May 2, 2009 at 3:28 am | Reply

    I just did an informal experiment on the effect of the location of a bbq on a nearby temperature sensor, and I think the results are interesting.

    Conditions:

    1) Ambient temperature 44F measured by both my electronic sensor and a standard liquid thermometer at a distance of about 12 feet from the electronic sensor. Distance of the electronic sensor from the bbq 1 foot.

    2) Heat bbq to 550F.

    3) Observe electronic sensor, temp goes up to 56F from 44F. Distance from electronic sensor 1 foot. Temp at other thermometer 44F.

    4) No wind.

    5) Conclusion, WUWT is full of s**t.

  • Rattus Norvegicus // May 2, 2009 at 3:30 am | Reply

    This is at least an attempt to quantify the huge effect which WUWT postulates. A sensor which is very close to the BBQ is affected, a sensor which is not so close is not affected.

  • Lee // May 2, 2009 at 4:18 am | Reply

    TCO: “racial differences”

    Care to offer a biologically coherent definition of race that applies to humans, and detail for us the races and their characteristics that derive from that definition?

  • TCO // May 2, 2009 at 11:06 am | Reply

    No.

  • Barton Paul Levenson // May 2, 2009 at 12:01 pm | Reply

    TCO writes:

    There are a few places where the left has the refusal to face science (GMOs, racial differences).

    What racial differences? In IQ? You’ve been reading The Bell Curve, I take it, and mistook that for “science?”

  • TCO // May 2, 2009 at 12:35 pm | Reply

    Yes.

  • luminous beauty // May 2, 2009 at 4:01 pm | Reply

    Poor dear. Bless your tiny heart.

  • luminous beauty // May 2, 2009 at 4:48 pm | Reply

    I don’t think even the loony left has any problem with admitting the science of genetic engineering, or its promise, so much as lamenting the narrow, dim and short-sightedness of its existing technological and economic application.

  • dhogaza // May 2, 2009 at 6:41 pm | Reply

    so much as lamenting the narrow, dim and short-sightedness of its existing technological and economic application.

    “Trust us. Nothing possibly could go wrong”.

    “Oh, then you won’t complain if we require extensive third-party evaluations, monitoring, etc to make sure everything works out the way you promise?”

    “ANTI-PROGRESS LUDDITE!!!!”

  • michel // May 2, 2009 at 7:16 pm | Reply

    t_p_hamilton writes “AGW is what MUST result from well-established physics”.

    In that case, were we to find that it did not happen in the presence of increasing levels of GHGs, we would conclude that well established physics had been falsified.

    Is that really your view? That the arrival or not of increased global temperatures represents a critical experiment which will confirm or falsify well-established physics?

    I simply do not believe it. If over the next 30 years it does not warm, we will conclude we were wrong, but not about well established physics. Someone remarked earlier that in order to convince him that AGW would not happen, one would have to show quantum mechanics was false. Again, I do not believe that were it not to happen, our reaction would be to abandon quantum mechanics. We’d decide we were wrong, but not about that.

    I am not, let’s emphasize this, saying it will or will not happen. I am only talking about the logical implications of hypotheses and how strongly we have confirmed some versus others, and am not arguing that warming will not happen. Just arguing about what the structure of evidence and knowledge is on this and other points, which translates into a hierarchy of confirmations.

    If we would not abandon basic physics in the absence of warming, then it is clear that some other propositions of lesser certainty are involved, whether explicitly or not, in the hypothesis, and that it does not follow from basic physics alone.

  • David B. Benson // May 2, 2009 at 10:17 pm | Reply

    For comparison.

    Dave Occam’s SAR & TAR predictions vs data (1.28 std. dev.):
    http://i161.photobucket.com/albums/t231/Occam_bucket/IPCCTempPredictions.jpg

    Gavin Schmidt’s recent AR4:
    http://www.realclimate.org/images/comp_monck1.jpg
    http://www.realclimate.org/images/comp_monck3.jpg

  • Robert P. // May 2, 2009 at 10:32 pm | Reply

    BPL wrote on May 1:
    “It’s not just creationists on the denialist side. I’ve run across one who accepts Velikovskian astronomy, one who insists that the sun is a supernova remnant and is not powered by fusion, and at least five who think the greenhouse effect violates the second law of thermodynamics.”

    A significant figure in the early days of AGW denialism (and more generally, denialism about just about all of environmental science) was the late Petr Beckmann, a retired Electrical Engineering professor who was better known for arguing against Special Relativity. Beckmann promulgated his views in the 1980’s via his newsletter “Access to Energy”. After Beckmann died, his newsletter was taken over by Arthur Robinson (of the “Oregon Petition”). Robinson himself showed some interest in HIV denialism; he invited Duesberg to speak at a conference sponsored by OISM in 1995. Other speakers included Sallie Baliunas, Jane Orient, and Edward Teller.

  • dhogaza // May 2, 2009 at 10:33 pm | Reply

    If we would not abandon basic physics in the absence of warming, then it is clear that some other propositions of lesser certainty are involved, whether explicitly or not, in the hypothesis, and that it does not follow from basic physics alone.

    I love the way Michel argues.

    1. He doesn’t believe that those who post here who are physicists or have a physics background could possibly believe that “were we to find that it did not happen in the presence of increasing levels of GHGs, we would conclude that well established physics had been falsified.”

    2. He doesn’t believe it, therefore claims that the AGW argument does not follow from basic physics.

    Dude … this is seriously lame.

  • Robert P. // May 3, 2009 at 12:01 am | Reply

    Ray Ladbury, as you probably know there *is* in fact one experimentally verified form of “cold fusion” – muon-catalyzed fusion, discovered in the 1950’s. After P&F’s announcement, some very smart but overly enthusiastic theorists came up with the idea that since a muon acts in some ways like a “heavy electron”, and since transport of electrons in metals is often described in terms of an “effective mass” that can be wildly different from the physical mass (it can even be negative), perhaps electrons with large effective masses in Pd were behaving somewhat like neutrons. A few minutes of thought shows that this is really bad idea – the “effective mass” describes long range transport and has nothing to do with how an electron can help two nuclei get together. It was a classic error of what my advisor used to call “argument from nomenclature”. Nevertheless some people got publications out of it.

    Spring 1989 was a crazy time.

  • TCO // May 3, 2009 at 12:05 am | Reply

    I find a lot of flaws in both the left and right (or denialist and alarmist) commenters on these blogs. I’m not even asking that they be up to speed on all the literature, but that they know how to take ideas apart and test both sides. There are a few that are good (laz, Phil, couple on the other side). But generally most are wastoids. What’s that quote about I wish all my subjects had one neck for me to cut?

  • Ray Ladbury // May 3, 2009 at 12:58 am | Reply

    Michel, You are misinterpreting what TP is saying. Everything we know about physics says it must warm, so
    1) either there is some new physics that is unique to our time and invalidates the greenhouse effect of CO2 (not likely),
    2) or there is something particularly stable about our current temperature range
    3) or there is another forcing out there that behaves exactly as we believe CO2 behaves
    4) or our current climate model is completely wrong.

    So, to disprove anthropogenic causation, either find some new physics or find a better climate model that gives little or no role to CO2. Of course, you’d then have to explain why CO2 didn’t act like a ghg, but if you can do the former…

  • dhogaza // May 3, 2009 at 1:11 am | Reply

    I find a lot of flaws in both the left and right (or denialist and alarmist) commenters on these blogs.

    Cool. Wake us up when you find a flaw in the underlying science that leads to “alarmism”.

    I’m not even asking that they be up to speed on all the literature, but that they know how to take ideas apart and test both sides.

    Let’s see … what to ask of TCO …

    Stuff the hypocrisy. Practice what you preach. Let ye who are frequently “wastoided” not cast the first stone …

  • Rattus Norvegicus // May 3, 2009 at 2:51 am | Reply

    Oh hell, I’ll jump on the racial differences red herring.

    How about skin color, body type, height, hair color and texture? There are all kinds of racial differences. Just not intelligence. I’m a bird watcher and the biological definition of race for birds is strictly based on appearance. Human races are also defined by appearance. If some idiot out there thinks otherwise, I would point to our president who is, IMHO, probably the most intelligent president we have had in quite a while (and I’m including Clinton in that one).

    I made the mistake of reading The Bell Curve and although he tried to correct for SES, I’m not sure that captured the problems among the lower-class, inner-city blacks who tended to dominate the problems he claimed to have exposed. It is hard to eliminate the effects of 400 years of oppression.

  • TCO // May 3, 2009 at 2:52 am | Reply

    I tried listening to some of the Heartland speeches. It was painful. Could not bear to even listen past all the self assurance (of a bunch of guys talking to each other and not at a real science meeting). I swear there is a huge social component to denialism. They all want to have a dogpile in the sauna or something.

  • George D // May 3, 2009 at 5:11 am | Reply

    What they [“wild greens”] really mean is that they want ordinary families and kids to become extinct, leaving space for the Green elite to run the planet and enjoy exclusive bird-watching excursions while feasting on the bones of six year olds who’d earlier been sold to Asian brothels.

    So, which ones of you plan to eat Asian children?

    It’s a suggestion almost too vile to bear repeating, but it shows just how deranged the wild imaginations of those who would seize on any tendentious argument can be. Tom Semmens explains just why Wishart appears to believe such madness.

  • Lazar // May 3, 2009 at 7:10 am | Reply

    Michel,

    If we would not abandon basic physics in the absence of warming, then it is clear that some other propositions of lesser certainty are involved, whether explicitly or not, in the hypothesis, and that it does not follow from basic physics alone.

    How people react, both initially and long term, and the content of the hypothesis are two different things…
    I throw a ball and hypothesize it will fall due to gravity “alone”. I would not proclaim to have disproven gravity if it did not fall. I might suspect some additional weird effect (perhaps I’m hallucinating!). Because of my potential reactions, would you claim that the ball falling due to gravity alone was not the hypothesis? Are you claiming that unstated, unknown effects are included in hypotheses that are being tested?

    There’s a hidden clause in any hypothesis that says “this will occur, provided there are no unknown effects”. Unknown unknowns can never be ruled out… doing otherwise is foolish. But using “unknown effects”, and an unwillingness to jump to conclusions on physics like gravity or radiative transfer that are very well established both experimentally and theoretically, to conclude that AGW is not based on basic physics alone, or that a ball dropping is not based on gravity alone, is just semantics… you could say the same thing about any hypothesis anywhere, anytime… it’s not what people normally mean (under the normal understanding of what constitutes a hypothesis).

  • michel // May 3, 2009 at 8:01 am | Reply

    It’s a point about the logical structure of our knowledge. The basic laws of physics are among the hardest things to falsify. This is not because they are certain, but because they are at the center of what we think we know, and so we revise them with great reluctance.

    To do that is going to require very controlled experiments with very specific outcomes. This isn’t a point about, for instance, quantum mechanics. It’s a point about how our knowledge is structured. It’s another way of saying we will be very reluctant to abandon basic laws of physics even in the face of apparently contrary evidence. Or what can be interpreted as contrary evidence.

    Any physicist who really thinks we would modify our assumptions about basic laws because the climate did not behave as the IPCC reports say it will probably knows a great deal about physics. But not enough about how science has advanced in an historical context.

    It is not going to happen. If it cools over the next 30 years in the presence of rising GHGs, we are not going to start revisiting quantum mechanics. The idea is completely mad. This is about the last place we will start looking for our mistake.

    Let me ask you. We find what appears to be a perpetual motion machine. Do we start thinking that the laws of physics have been wrong all those years? Of course not. We start from the view that there is something about the machine that we don’t understand. Not that we have got the laws of physics wrong. This is what I am suggesting our reaction will be IF, repeat IF, the earth fails to warm with increasing levels of GHGs in the atmosphere. We will conclude that there are things about climate that we did not understand, but that quantum mechanics is safe.

    This observation leads logically to the conclusion that the AGW hypothesis is not one that follows solely from well known laws of physics. There must be some other premises.

    There is nothing wrong with this. In this respect, AGW is like most hypotheses about the nature of the world we live in. We made many discoveries and changed our ideas often between Newton and Einstein without feeling the need to revise the premises of Newtonian physics. This, if it happens, will be no different.

  • michel // May 3, 2009 at 10:28 am | Reply

    Well, I recently read

    http://www.nsstc.uah.edu/atmos/christy_pubs.html

    As far as I can tell, the argument is that the planet is not warming and will not warm, due to CO2, at the rates that the consensus form of the AGW hypothesis says it should. The reasons given seem to be that the warming pattern is different from what this would require, and it’s suggested that feedback is different from the IPCC forecasts. Maybe this is something to do with how clouds behave?

    My point is not whether this is right or wrong. It is rather that we have a couple of tenured scientists asserting that warming is lower than some have forecast, and not due to CO2, but they have felt no impulse, or none they refer to, to doubt quantum mechanics or revise any well accepted laws of physics.

    They may be right or wrong, but it supports my argument. And if you read the discussion of what empirical factors may be involved, that too bears out my argument. The chain of reasoning and observation is so complex that there is no way that failure of climate to follow the models can be a falsifying event for basic physics.

    The converse then is true: it does not, cannot, follow simply from well understood laws of physics. It does not, by the way, need to, to be either true or very probable, so I cannot understand why people keep on saying it does. It’s starting to remind one a bit of MBH – desperate defences of some article of faith or propaganda which are peripheral to the hypothesis, but which have come to be included in the catechism and now cannot be abandoned for purely psychological reasons.

    It does not make the hypothesis any more believable to say, obviously wrongly, that it follows directly from basic laws of physics, and would require us to abandon them if we were to find it false. Any more than to doubt whether Trajan was such a great general as the Roman historians say requires us to doubt the existence of the city of Rome. And we do not increase the credibility of Trajan as a general by saying it does.

  • Ray Ladbury // May 3, 2009 at 12:06 pm | Reply

    Michel,
    Part of the problem is that the problem as stated (e.g. no warming despite increased ghg) is too vague. In considering how to respond to a model that appears broken, it is essential to understand HOW IT BROKE.

    Do you toss out the model and start from scratch? If so, what are the implications for known physics? GCM are dynamical models based in large part on first-principles physics.

    Do you try to fix the model? If so, you are likely to concentrate on the physics you don’t know well–e.g. aerosols and clouds–rather than that which is well understood–e.g. greenhouse gasses. So, you look at what is different about the periods where the model worked and where it didn’t. If you find an effect that seems to explain the difference, it doesn’t mean we can all rest easy–the lack of warming could be more short-lived than the effects of CO2. That is precisely what happened with the lack of warming from the 40s to the 70s:
    1) No warming despite increasing ghg
    2) Postulate that aerosols from fossil fuel consumption are decreasing light reaching the ground.
    3) Clean Air Act limits sulfur
    4) Warming kicks back in within years
    5) Computer models improve enough to verify the effect in GCMs

    So, thirty years of no warming, but it didn’t mean the end of anthropogenic global warming.

    Denialists keep wanting to pin the blame on “new physics” with causes we don’t understand, like galactic cosmic rays (GCR). Independent of the lack of change in GCR flux, there is also the problem that this cause does nothing to explain patterns in the paleoclimate unless it has nearly exactly the same time dependence as CO2!

    Even if we found a negative feedback like Lindzen’s iris, it is likely that it would not match the effect of increased CO2 indefinitely. It might keep us from seeing warming for awhile but then fail and lead to extremely rapid rise in temperature.

    If warming were to stop now, and we were to see none for 30 years, 50 years, whatever, you would have myriad hypotheses of how to fix the GCM. Some might imply decreased risk of catastrophic climate change. Others might even imply a greater concern in the future. And we wouldn’t have sufficient data and modeling power to decide between them for decades.

    That is why I say that the only way to dismiss the current concern over climate change is to come up with another climate theory that explains paleoclimate and modern climate at least as well as the consensus model AND assigns a CO2 sensitivity well under 1 degree per doubling.

    As to giving up fundamental physics, have you read about the history of the neutrino? When it appeared that energy and momentum were not conserved in beta decay, Heisenberg and Bohr started working on a theory in which energy and momentum were conserved only on average in the quantum world. Pauli almost apologetically posited the existence of an effectively unobservable particle—massless, electrically neutral. Fermi dubbed it the “neutrino” or little neutral particle. While Bohr and Heisenberg abandoned their attempts, the neutrino wasn’t seen until a couple of decades later.

    A new theory means a lot more than twiddling a couple of knobs.

  • TCO // May 3, 2009 at 1:39 pm | Reply

    I find myself needing to agree with Michel. On this point.

    I think there is a lot of confusion as to basic physics and then the detailed workings and predictions of global climate models.

    I mean if we find new things out about hominids (as has happened), we don’t alter our belief in evolution, in the descent of man, or in the general rational approach to biology. We recognize that that is a field with significant uncertainty, that the constructs that were set up are capable of revision based on new learning and were not iron clad to begin with.

    Given that GCMs can’t make explicit forecasts (because of chaos, etc.), given that they can’t predict general regional impacts, given that they lack a record of decisive out-of-sample validation (please don’t bring up Pinatubo…it’s not sufficient for validating the models’ ability to forecast CO2 warming….and is far from some sort of eka-silicon of Mendeleev), given that they are incredibly detailed and a single person is challenged to really check all their math and code…given all that…is it so inconceivable that they might not predict properly? That the system might have aspects (either forcings or feedbacks) that they don’t incorporate well enough?

    And, Laz, when one talks about validation or disproof…it seems evident to me that we are really talking about the long-term (say 100 year) impact of CO2 rise, globally. Not to disproving some fundamental physical law. But just to saying that these models, really this ensemble of them and how we view that ensemble…could have been off.

    P.s. None of this is to say that I think they are wrong. My Bayesian bet is that the model mean is the rational bet for the impact of CO2. I just want to have a discussion about science and hypotheses and what it means to think about something.

  • luminous beauty // May 3, 2009 at 2:11 pm | Reply

    michel,

    Physics and observation both tell us that the enhanced greenhouse effect is real. As real as gravity. It is not an hypothetical. By itself, the empirical evidence of the last century suggests that temperatures should continue to increase. Should, in the near future, the already well demonstrated increase in energy available to the weather system not express itself as a temperature increase, then what effects on the weather might one expect?

    The expression “out of the frying pan, into the fire” comes to mind.

    Einstein did revise basic premises of Newtonian physics. The underlying geometry of time and space and the fundamental nature of light to be precise. The differences are quite small at less than astronomical scales, however. Such is the kind of marginal adjustment we might expect from an advance in the physics of complex systems.

  • Lazar // May 3, 2009 at 5:12 pm | Reply

    A random diversion on teleconnections and GCMs / practice for improving clarity in communication :-)

    The Eastern Mediterranean teleconnection pattern (EMP) is a recently discovered (2006) dipole teleconnection. A teleconnection is a correlation between climate variables separated by large (>1000 km) distances (I don’t know the precise definition of “large” here, but I think it roughly means beyond and separated from the region of local climate correlations which surround a given point :-)). The correlation can be negative. A dipole is a teleconnection between two poles, and the poles are defined at the points which have maximum correlation, though the teleconnection itself is spread over a large area. If the correlation is negative the effect is like a pair of mechanical scales or a see-saw… a positive anomaly in one climate variable at one pole is associated with a negative anomaly at the other pole. Of course the climate variables need not measure the same thing.

    Geopotential height is altitude with a slight adjustment, usually of the order of a couple of meters, to account for variations in the gravitational field. Geopotential height and altitude can be considered equivalent for most meteorological applications. An online conversion calculator can be found here.

    When the atmospheric pressure increases this is equivalent to the altitude (geopotential height) of a given pressure level (say 700 hPa) increasing. A high pressure region creates anticyclonic circulation, which is clockwise rotation of the air mass in the northern hemisphere, or anticlockwise in the southern hemisphere. Surface winds generally blow from high to low pressure, whilst in the upper troposphere winds blow from low to high. This image may help.

    Hatzaki et al. (2007) found geopotential heights in the northeastern Atlantic and eastern Mediterranean are negatively correlated during winter, with a maximum correlation (poles) at 500 hPa (about 5-6 km or the mid-upper troposphere) at [52.5 N, 25 W] and [32.5 N, 22.5 E]. The correlation strength decreases at different pressure levels, reaching zero at the surface. The poles were found by rotated principal component analysis of gridded average winter (DJF) geopotential heights for the 500 hPa level, over the years 1958-2003, using data from NCEP-NCAR reanalysis.

    A standardized index is used to measure the state of a teleconnection, to study its time-varying characteristics and climatic effects. It is a somewhat arbitrary function that combines the teleconnection climate variables at each pole. When a teleconnection is in a negative or positive phase this corresponds to the index being respectively below or above its mean value. The index, then, is the measure of the teleconnection.

    Hatzaki et al. defined an EMP index as:
    EMPI = Z(52.5 N, 25 W) – Z(32.5 N, 22.5 E)
    where Z is “the mean winter geopotential height at 500 hPa of the grid point that forms each pole” (Hatzaki et al. 2009). The standardized index is the EMPI minus its mean, divided by the standard deviation of values recorded over the period 1958-2003. A positive EMPI phase is defined as a standardized value > 0.5, and similarly a negative phase is < -0.5. Hatzaki et al. (2007) found that negative EMPI phase values correspond to an eastern Mediterranean pole that has a higher geopotential height relative to the northeastern Atlantic pole, hence a low pressure (cyclonic) system over the Atlantic and a high pressure system over the Mediterranean, resulting in high-level winds that blow northward.

    Hatzaki et al. (2009) follow up their previous work by examining climatological impacts of the EMP using regularized canonical correlation analysis (RCCA). I don’t know the details (mathematics) of RCCA, but it finds spatial patterns of climate variables (e.g. temperature, precipitation) which have a maximum correlation with the EMPI and which also explain a maximum amount of variance within their respective region. A description of the method is contained in the book by Von Storch, H., and Zwiers, F.W., Statistical Analysis in Climate Research.
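
    For concreteness, here is a minimal sketch in Python of the standardized index described above, assuming the winter-mean 500 hPa heights at the two poles have already been extracted; the function names and the use of the full record as the standardization base period are illustrative assumptions, not taken from the papers.

    import numpy as np

    def standardized_empi(z_atlantic, z_emed):
        """Standardized EMP index from winter-mean 500 hPa geopotential heights.

        z_atlantic : one value per winter at the NE Atlantic pole (52.5 N, 25 W)
        z_emed     : one value per winter at the E Mediterranean pole (32.5 N, 22.5 E)
        """
        z_atlantic = np.asarray(z_atlantic, dtype=float)
        z_emed = np.asarray(z_emed, dtype=float)
        empi = z_atlantic - z_emed                   # raw index: difference of the two poles
        return (empi - empi.mean()) / empi.std()     # standardize over the record

    def phase(index):
        """Phase classification: > 0.5 positive, < -0.5 negative, otherwise neutral."""
        return np.where(index > 0.5, "positive",
                        np.where(index < -0.5, "negative", "neutral"))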

    It’s also worth mentioning a validation of a GCM representation of teleconnections by Hatzaki et al. (2009), where they use the Hadley Center AM3P model to study the effects of climate change driven by IPCC emissions scenarios on the nature of the EMP. Validation is conducted again by a rotated principal component analysis of 500 hPa geopotential heights in the model simulation of present climate, and the locations of observed and modelled poles are compared as well as the strengths of the EMP correlation (the amount of variance explained by the PC that contains the EMP). Does the AM3P contain the teleconnection? If so, where are the poles? How do the strengths of the teleconnection compare?

    References

    The Impact of the Eastern Mediterranean Teleconnection Pattern on the Mediterranean Climate
    Hatzaki, M. et al.
    Journal of Climate, 22, 4, pp. 977-992 (2009)
    DOI: 10.1175/2008JCLI2519.1

    The eastern Mediterranean teleconnection pattern: identification and definition
    Hatzaki, M. et al.
    International Journal of Climatology, 27, 6, pp. 727-737 (2007)
    DOI: 10.1002/joc.1429

    Study of future climatic variations of a teleconnection pattern affecting Eastern Mediterranean
    Hatzaki, M. et al.
    Global NEST Journal, 8, 3, pp. 195-203 (2006)

  • Lazar // May 3, 2009 at 5:34 pm | Reply

    TCO,

    And, Laz, when one talks about validation or disproof…it seems evident to me that we are really talking about the long-term (say 100 year) impact of CO2 rise, globally. Not to disproving some fundamental physical law. But just to saying that these models, really this ensemble of them and how we view that ensemble…could have been off.

    Agreed, the hypothesis that temperature anomalies will follow GCM projections over a given number of years is based on all the assumptions which go into GCMs. But a hypothesis predicting the sign of change at equilibrium (i.e. the world will get warmer eventually) is a prediction of basic physics alone. I thought that by AGW Michel meant the latter, but if I was mistaken (Michel?) then apologies.

  • Ray Ladbury // May 3, 2009 at 5:41 pm | Reply

    Michel,
    I’m sorry, that argument is simple BS. Christy is simply saying, “The models are wrong.” He is making no attempt to fix the model, and without a detailed understanding of the forcings, and given the very long-term effects of CO2, it’s impossible to say that there is no concern going into the future.
    It is not enough to simply say “The model’s broken.” Unless you have some idea of what is needed to fix it, you have no idea whether the consequences are trivial or profound.

    The things that could be wrong with the models include:

    1) Unknown physics–but the time dependence this would have to have to permanently reduce concern over anthropogenic climate change makes this unlikely.

    2) Something fundamentally wrong with something fundamental–e.g. radiative physics, Clausius-Clapeyron equation.

    3) Some special feedback that makes our current temperature range exceptionally stable–which strikes me as special pleading.

    Christy is only half doing science–he’s saying the models are wrong, but remaining silent on how they could be wrong. He’s also being rather selective in his data. Neither he, nor any of the other “skeptical” climate scientists have contributed anything that advances the understanding of climate.

  • TCO // May 3, 2009 at 6:12 pm | Reply

    Stop appealing to Clausius Clapeyron, Ray. You don’t know as much chemistry as I or Eli do. And people in AGW blogo-land throw that around like a bromide. The strict conditions for Clausius Clapeyron (system states, etc.) are not met. It’s like a slightly worse level of blather from my side on second law of thermo.

    Certainly, my intuitive hunch is that relative humidity will stay at the same percentage. But it’s not a thermodynamic requirement. We are talking about an incredibly complex, dynamic system, not at equilibrium.

    I’m sure there is some useful insight…but citing C-C the way you guys do is silly.

  • TCO // May 3, 2009 at 6:14 pm | Reply

    Ray, you left out a whole bunch of other places the models could be wrong. Various aspects of parameterizations, training and subroutines. Not to mention gridscale effects, clouds, aerosols. Note, I’m not saying any of them ARE wrong. But since I haven’t checked them all, I think it’s possible.

  • TCO // May 3, 2009 at 6:17 pm | Reply

    “Christy is only half doing science–he’s saying the models are wrong, but remaining silent on how they could be wrong. He’s also being rather selective in his data. Neither he, nor any of the other “skeptical” climate scientists have contributed anything that advances the understanding of climate.”

    I think this is a significant indictment and mostly right. I do think there are some that have contributed, though, depending on how you define skeptic and contribution. Landsea, Von Storch, Kosin, etc. Heck, even Gray. Although I think it’s relevant that he is out of his depth on recent global stuff…and is old. But still, he did get something done at one time or another that advanced understanding in the field.

  • Lazar // May 3, 2009 at 6:22 pm | Reply

    TCO,

    given that they are incredibly detailed and a single person is challenged to really check all their math and code…given that…is it so inconceiveable that they might not predict properly?

    Not at all. Thinking of the huge number of complex feedbacks… e.g. evaporation -> surface moisture -> SAT -> convection -> atmospheric circulation -> clouds -> temperature and precipitation -> stream runoff -> ocean salinity -> ocean circulation…
    But I think complexity can cut both ways for/against models. Looking at the large number of comparisons between models and observations that compare spatial and time-varying patterns of different metrics… the representations may not be perfect, often they’re either pretty good or sketchy, but rarely ‘wrong’/‘opposite’. Given that these are all emergent effects, not explicitly programmed for, and that they result from the extremely complex interactions as above, I tend to think the models have got most of the physics down… enough to make a Bayesian bet on their SAT predictions. There’s a totally intuitive, non-quantified argument for you.

  • Lazar // May 3, 2009 at 7:15 pm | Reply

    TCO,

    it seems evident to me that we are really talking about the long-term (say 100 year) impact of CO2 rise, globally.

    I don’t think it is evident. I think people need to really carefully define what they mean when they say falsifying ‘the AGW hypothesis’. There are choices between looking at equilibrium versus transient responses, measuring a response at a regional versus global average scale, or choosing some metric other than temperature, or any combination of those three. I don’t think GCMs are very useful for equilibrium estimates, which can be obtained from one-dimensional models, paleoclimate, and studies of the transient response, right? Those methods and GCMs agree pretty well with the Charney central tendency of 3.0 deg. C, which doesn’t look like changing any time soon. GCMs are useful for estimating the impacts of the transient response and its global distribution; there, global average temperature is no more important than spatial patterns or effects on the fluid flow (riffing on sommat Michael Tobis said recently). Failing to predict a 30-year or 100-year temp. trend would throw those impact predictions into the bin, but if it passed, you’d still want to look at other metrics.

  • David B. Benson // May 3, 2009 at 7:55 pm | Reply

    Iris effect, if any, must be quite small, for certainly interglacial 2 (the Eemian) was warmer than now and possibly interglacial 4 was also.

  • Ray Ladbury // May 3, 2009 at 11:33 pm | Reply

    TCO, C-C is shorthand for water vapor feedback, so feel free to substitute that. I do realize that more than C-C goes into that estimation, but it would be no less fundamental if it failed, since the same physics has been used for decades for weather.

    The “falsification” paradigm is rather a narrow one that doesn’t really apply to very complicated theories that are based on many hypotheses with varying degrees of support. It would be more correct to say “demonstration of incompleteness” rather than “falsification”. That is why I say you have to specify how the model breaks if you expect to get reasonable answers.

  • Barton Paul Levenson // May 3, 2009 at 11:46 pm | Reply

    Rattus — that’s not all that was wrong with the Bell Curve. They did a lot of charts with lines going up or down as predicted, without telling the audience that the regression line charted had R^2 = 5-15% and a huge confidence interval.

  • David B. Benson // May 3, 2009 at 11:57 pm | Reply

    “All models are wrong; some are useful.”

    — statistician George Box

  • Lazar // May 4, 2009 at 12:04 am | Reply

    Good discussion, heated, fair, few ad homs. Open Mind rocks.

  • Lazar // May 4, 2009 at 12:20 am | Reply

    How climate scientists work in the real world…

    Models (GCM’s) predict a regional increased storminess, the prediction is tested against observations.

    A negative test is reported, peer reviewed and published.

    AMS news link (pdf);

    An international team of researchers, led by Dr Edward Hanna from the University of Sheffield’s Department of Geography, has discovered that the intensity of windstorms around the British Isles has not increased due to global warming.
    The research findings, published in the December 15, 2008 American Meteorological Society’s Journal of Climate, contradict some climate model predictions by showing little sign of overall increased storminess since the mid-to-late nineteenth century.

    Article abstract…

    The authors present initial results of a new pan-European and international storminess since 1800 as interpreted from European and North Atlantic barometric pressure variability (SENABAR) project. This first stage analyzes results of a new daily pressure variability index, dp(abs)24, from long-running meteorological stations in Denmark, the Faroe Islands, Greenland, Iceland, the United Kingdom, and Ireland, some with data from as far back as the 1830s. It is shown that dp(abs)24 is significantly related to wind speed and is therefore a good measure of Atlantic and Northwest European storminess and climatic variations. The authors investigate the temporal and spatial consistency of dp(abs)24, the connection between annual and seasonal dp(abs)24 and the North Atlantic Oscillation Index (NAOI), as well as dp(abs)24 links with historical storm records. The results show periods of relatively high dp(abs)24 and enhanced storminess around 1900 and the early to mid-1990s, and a relatively quiescent period from about 1930 to the early 1960s, in keeping with earlier studies. There is little evidence that the mid- to late nineteenth century was less stormy than the present, and there is no sign of a sustained enhanced storminess signal associated with “global warming.” The results mark the first step of a project intending to improve on earlier work by linking barometric pressure data from a wide network of stations with new gridded pressure and reanalysis datasets, GCMs, and the NAOI. This work aims to provide much improved spatial and temporal coverage of changes in European, Atlantic, and global storminess.

    New Insights into North European and North Atlantic Surface Pressure Variability, Storminess, and Related Climatic Change since 1830
    Hanna, E. et al.
    Journal of Climate, 21, 24, pp. 6739-6766 (2008)
    DOI: 10.1175/2008JCLI2296.1
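
    For readers curious about the index itself, here is a minimal Python sketch of one plausible reading of dp(abs)24, assuming it is simply the absolute difference between station pressure readings 24 hours apart, averaged over the period of interest; the paper gives the exact definition, so treat this as an illustration only.

    import numpy as np

    def dp_abs_24(pressure_hpa, obs_per_day=8):
        """Mean absolute 24-hour pressure tendency for a station pressure series.

        pressure_hpa : evenly spaced barometric pressure observations (hPa)
        obs_per_day  : number of observations per day (8 for 3-hourly data)
        """
        p = np.asarray(pressure_hpa, dtype=float)
        dp24 = np.abs(p[obs_per_day:] - p[:-obs_per_day])   # |p(t) - p(t - 24 h)|
        return dp24.mean()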

  • Barton Paul Levenson // May 4, 2009 at 12:27 am | Reply

    TCO writes:

    I’m sure there is some useful insight…but citing C-C the way you guys do is silly.

    C-C predicts water vapor in the air should rise with temperature, assuming EITHER saturation OR fixed relative humidity.

    Observations seem to confirm fixed relative humidity.

    Water vapor has risen with temperature (e.g. Brown 2007).

    What more do you want?
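
    The numbers behind that claim can be sketched with the Magnus approximation to the Clausius-Clapeyron relation; the constants below are one commonly used set, and at fixed relative humidity the same fractional increase applies to the actual vapour pressure. A minimal Python illustration:

    import numpy as np

    def e_sat(T_celsius):
        """Saturation vapour pressure in hPa (August-Roche-Magnus approximation)."""
        return 6.112 * np.exp(17.62 * T_celsius / (243.12 + T_celsius))

    T = 15.0
    print((e_sat(T + 1.0) - e_sat(T)) / e_sat(T))   # ~0.065, i.e. roughly 6-7% more water vapour per K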

  • David B. Benson // May 4, 2009 at 1:21 am | Reply

    The simple rule

    higher relative humidity => more precipitation => less clouds, lower relative humidity

    lower relative humidity, less clouds => more evaporation => higher relative humidity

    appears to approximately work on a global average scale.

  • Paul Middents // May 4, 2009 at 2:01 am | Reply

    Lazar,

    Your grasp of the literature and consistently interesting cites and comments are a main reason for returning to Tamino’s open threads.

    Another is the sometimes entertaining, mostly exasperating and frequently inebriated and misspelled comments from The Climate Obsk. He is one of a zooful of your workaday denialists and delayers–but a lot more entertaining than most.

    Paul

  • dhogaza // May 4, 2009 at 5:01 am | Reply

    It would be more correct to say “demonstration of incompleteness” rather than “falsification”. That is why I say you have to specify how the model breaks if you expect to get reasonable answers.

    This really captures it, though I’m sure it goes right over TCO’s inebriated head.

    It’s exactly how the more serious amongst the denialists attack mainstream science.

    “It is GCRs not CO2!” – incompleteness.

    Falsification is a much higher bar, and other than the woot-woot woot’s-WT types, largely ignored. And wootwoot-WT is an exercise in statistical and numerical parody, though apparently they’re not aware of it.

  • Ray Ladbury // May 4, 2009 at 12:23 pm | Reply

    dhogaza,
    I think TCO and I have been shouting past each other. He seems to think I am saying theory trumps observation. I’m not.

    What I’m saying is that observation has to be interpreted, and theory provides the only language we have for interpreting observation. Thus, even if a theory is flawed, if it’s the only theory we have, we rely on it to interpret the observation and tell us HOW the theory is flawed.
    The theory in this case is that of Earth’s climate, and anthropogenic climate change is a prediction of that theory. Falsification of this prediction consists of a lot more than just not seeing warming. After all, we had 30 years of no warming from ~1944-1974. Rather than falsifying anthropogenic warming, that epoch actually strengthened the case for it, since the fact that we didn’t cool despite large sulfate emissions argues for a higher CO2 forcing.

    So the theory of the 1950s-60s was incomplete. And not surprisingly, the incompleteness lay with the forcing that was uncertain (aerosols), rather than the forcing we knew well (CO2).

  • george // May 4, 2009 at 2:01 pm | Reply

    I think a big part of the problem in these discussions is that people throw about terminology like “rejection of the AGW model” without properly/precisely defining what it is they mean.

    Then, after being challenged on (or asked to clarify) their empty claim, they say something like

    “Of course, by ‘rejection of the AGW model’ I did not mean that “CO2 does not absorb IR” and “Of course, (isn’t it self-evident?) that I meant something else instead: [insert something here, preferably that can not be further criticized]“.

    Of course.

    It’s really just a waste of time trying to engage this stuff on a scientific level (or even common everyday level, for that matter)

  • michel // May 4, 2009 at 3:36 pm | Reply

    If I understand Spencer and Christy correctly, they are saying that all the physics is just the way we thought it was, but that clouds, and the feedbacks from clouds and water vapor on planet Earth, are different from what the models assume, and therefore that climate sensitivity is different from what they say.

    Now, this could be right or wrong – I am still not arguing about whether it is either a right or a meritorious point of view.

    It is however obvious that this is one way in which, if they were right, it could turn out that the IPCC estimates of climate sensitivity to CO2 rises could be wrong, but all the generally accepted laws of physics and chemistry be unaffected.

    Or, are you making the argument that the behaviour of clouds, water vapor and specifically feedbacks to IPCC parameters can be deduced from well known physics and indisputable observations of initial conditions? Is that really what is being argued? With sufficient certainty that should they not behave as expected, we would start looking for errors in C-C or similar well established physics?

    In response to george: if you had to state it in one sentence, is it not that climate sensitivity to CO2 doubling, or an equivalent rise in other GHGs, is between 2 and 4 degrees C? That is, higher by some appreciable margin than the effect from IR absorption by CO2.

    Skepticism about it could take the form of thinking that climate sensitivity is lower or negligible, and is usually accompanied by a belief that despite the current rise in atmospheric CO2, there is little or no additional warming.

    Surely one thing that AGW definitely predicts is that the planetary temperature cannot remain flat, or fall, over the next 30 years, if CO2 or other GHG levels rise. Absent something quite dramatic happening to particulates, at least.

    Well, maybe this is not what everyone else means by it?

  • gmo // May 4, 2009 at 3:54 pm | Reply

    Especially as the point has been made multiple times recently that the question needs to be better and more precisely posed, I think it is worthwhile to repeat what started the line of discussion on the topic. On April 22 Terry asked,
    “A simple question for everyone: What would have to happen for you to reverse your currently held position on AGW?”

    Such “simple” questions often are not so simple, as george has just pointed out. The words I see begging for further interpretation are “reverse” and “position”.

    Granted the visitors here determine how it goes, but I think it is quite telling that the discussion evolved to the point of some people saying, “this is basic physics, it ain’t being reversed”, and some people saying that _if_ there was a decades-long period of no warming that would indicate there is something else not being factored in.

    Nobody (as far as I can tell, and of course only talking about here) is giving consideration to any far-fringe ideas like the claim that there is no such thing as a greenhouse effect. The “position”, it seems, is ~2C-4.5C warming for doubling CO2, and the “reverse” would be accepting that a currently unknown/unaccepted forcing or feedback pushes that value down to something like 1C/doubling. My guess would be that Terry would not have defined those terms in that limited way.

    The vaguely worded question people here are really talking about appears to be something more like, “what magnitude & duration of ‘non-warming’ would make you think that current theory is missing a factor of significant importance?”

  • TCO // May 4, 2009 at 4:19 pm | Reply

    george: that’s true. But then there is also impreciseness on the alarmist side, when talking about “consensus”, “the science”, etc.

  • george // May 4, 2009 at 5:47 pm | Reply

    RE: Bell curve

    Carnegie Mellon statistician Cosma Shalizi has some interesting things to say on the subject in “g, a Statistical Myth”.

    According to Cosma Shalizi:

    To summarize what follows below (”shorter sloth”, as it were), the case for g rests on a statistical technique, factor analysis, which works solely on correlations between tests. Factor analysis is handy for summarizing data, but can’t tell us where the correlations came from; it always says that there is a general factor whenever there are only positive correlations. The appearance of g is a trivial reflection of that correlation structure. A clear example, known since 1916, shows that factor analysis can give the appearance of a general factor when there are actually many thousands of completely independent and equally strong causes at work. Heritability doesn’t distinguish these alternatives either. Exploratory factor analysis being no good at discovering causal structure, it provides no support for the reality of g.

    //end Cosma Shalizi quotes

  • TCO // May 4, 2009 at 6:38 pm | Reply

    It’s not just necessarily that the “current theory” is missing a factor of importance. Realize that GCMs aren’t readable in an evening, have training, are judged in ensembles, etc. etc. We might, for instance, find that some alternate EBM-based approach is a better way to understand the system than a dynamic weather model.

  • george // May 4, 2009 at 6:40 pm | Reply

    Gmo says Nobody (as far as I can tell and of course only talking about here) is giving consideration to any far-fringe ideas like that there is no such thing as a greenhouse effect.

    That may be the case (or might not be). Unless people spell it out, it’s really not possible to tell (and unfortunately, as you allude to, there are lots of places on the internet where people do mean “the greenhouse effect is bunk” when they say “AGW has been falsified”) .

    The “position”… is ~2C-4.5C warming for doubling CO2, and the “reverse” would be accepting that a currently unknown/unaccepted forcing or feedback pushes that value down to something like 1C/doubling.

    I agree that should be the focus of any further debate.

    TCO: While I agree that everyone (myself included) needs to be more careful in general to define what they mean, one has to be particularly careful when one talks about “rejection of” or “falsification of” a scientific hypothesis, model, theory, etc.

    The standard goes way up for that because that has a very particular meaning in the scientific sense. It essentially begs to be tested and in order to do so, it has to be very precisely defined. Otherwise, it means nothing.

  • David B. Benson // May 4, 2009 at 9:13 pm | Reply

    michel // May 4, 2009 at 3:36 pm — That seems to be Spencer’s position; I don’t know about Christy. It is certainly wrong based on paleoclimate; at least one interglacial was measurably warmer than at present, requiring that clouds behave about as currently supposed.

    Further work on cloud formation and the effects of aerosols continues. This doesn’t remarkably affect the effects of known forcings, being down in the 5% (or less) range.

  • TCO // May 4, 2009 at 10:23 pm | Reply

    McI is annoying me with some of his blathering. He’s got a halfway decent post on boreholes (nothing definitive, but teeing up an issue for investigation wrt the algorithm for solving). But then he pre-loads it with off-topic Wahl and Ammann kvetching. And he also refers to Tammy’s nice intro-to-boreholes post from 2007 and dwells on a location error (who cares) without noting the nice parts of Tammy’s post.

    And then he’s got some anti-Chu stuff. It’s just so disappointing to see someone capable of good work, just doing internet listserve whatever is hot on the denialist circuit stuff.

  • Ray Ladbury // May 4, 2009 at 11:46 pm | Reply

    TCO says, “Realize that GCMs aren’t readible in an evening, have training, are judged in ensembles, etc. etc”

    And yet, Arrhenius got about the same value for CO2 sensitivity with pencil and paper. TCO, in effect, you are saying we shouldn’t put in the physics we know. An energy-balance model is a big step backwards from a GCM.
    The sorts of “errors” you are talking about aren’t subtle. They probably wouldn’t result in a model that “almost worked”. Moreover, there are a couple of dozen independent GCMs that give more or less consistent results, and they don’t look at all Earthlike if you make the sorts of changes that would make anthropogenic climate change go away.

    What is more, the globe is warming. It hasn’t paused. It hasn’t noticeably slowed down (decade on decade). Without some sort of model, we’re flying blind.

  • TCO // May 5, 2009 at 1:05 am | Reply

    How can Arrhenius get the answer with pencil and paper, but an EBM is insufficient? Do you even think about what you say?

    Tammy is so much smarter than you…

  • dhogaza // May 5, 2009 at 4:17 am | Reply

    And Ray is so much smarter than you, so where are you left?

    It’s just so disappointing to see someone capable of good work, just doing internet listserve whatever is hot on the denialist circuit stuff.

    Someday, will you just admit to us that McI doesn’t give a [edit] about the science, but is just into presenting enough FUD to toss a FUD-based monkey-wrench into the machinery?

    You’ve admitted to us, repeatedly, that you’re the smartest human on the planet since Feynman, but you can’t figure this out for yourself?

    Pffft.

  • michel // May 5, 2009 at 6:42 am | Reply

    Ray, doesn’t Hans Erren’s analysis show that Arrhenius got the magnitude of the CO2 effect far too high? I looked up the piece some while ago and it seemed quite definitive. I had earlier got the impression by misreading Weart that Arrhenius did not include feedbacks, which he did, but if Erren is right, Arrhenius got the CO2 effect wrong, way too high, and so way overstated the total effect. Not that this means anything one way or the other for the validity of modern estimates of climate sensitivity.

    It’s another case of wanting to give the argument extra force by appealing to something which doesn’t have much, and is anyway mistaken. As in ‘this is just 200 year old physics’, or ‘it’s been known perfectly well since Arrhenius’.

    It isn’t ‘just physics’, and it hasn’t, at least in its current form, been known since Arrhenius. But whether it has or not is irrelevant to its validity. The question is what the evidence is. How long people have thought it, or who has thought it, is simply not evidence for it. These things however become part of the catechism, and their effect is to diminish the credibility of AGW, not increase it.

    As when your local Mac fanatic claims that his Mac is made of better components, when you can see for yourself that it’s generic memory, Samsung drives and low-end graphics cards… But Macs could still be better choices. Sometimes fanatically pushing bad arguments is a worse tactic than no argument at all.

  • Barton Paul Levenson // May 5, 2009 at 11:05 am | Reply

    TCO writes:

    How can Arhenius get the answer with pencil and paper, but an EBM is insufficient? Do you even think about what you say?

    Arrhenius got about the right answer with pencil and paper FOR THE VALUE OF CO2 CLIMATE SENSITIVITY.

    An EBM is insufficient TO CORRECT THE GCM ERRORS UNDER DISCUSSION.

    Apples and oranges.

    Do you even read what you respond to?

  • Ray Ladbury // May 5, 2009 at 11:43 am | Reply

    TCO,
    So the Reading Comprehension Class didn’t work out for you, huh?

    Are you really so obtuse (or drunk) that the only thing you got out of my missive was “Arhenius”?

    Here’s an experiment. I typed the same sentence you got into Alicebot, and it got as much out of it as you did. Transcript follows:

    Human: Arhennius got about the same value for CO2 sensitivity with pencil and paper.

    God: Only Arhennius got about the same value for CO2 sensitivity with pencil and paper? You are quite mature.

    Congratulations, TCO, you just failed a Turing test! Your mother must be proud!

    Dude, energy balance is in GCM–it’s part of the physics that is already there. What you are asking them to do is dumb it down so you can understand it! You are asking them to take out verified physics–physics that doesn’t have adjustable parameters and makes the model more Earthlike (or skillful) just because YOU don’t understand it.

    I suspect the problem is that you haven’t done much dynamical modeling and so distrust it. Now an adult response might be to actually learn something about it. Instead you expect the modelers to stick to techniques you understand. Oh well, “To a man with only a hammer in his toolbox, every problem looks like a nail.”

    A little more advice: You really need to work on your ad homs. If you want to insult someone’s intelligence, it’s not very effective to say, “You’re dumber than [insert Tamino, Einstein, Feynman or somebody else smart]”.

    Now, here is the point I was making again. See if you can actually read ALL of the words I write:

    Arrhenius managed to get roughly the same ballpark figure for CO2 sensitivity as GCMs do. That is sufficient to demonstrate that climate change presents a credible threat–no complicated GCM required. If you want to build an EBM to demonstrate the same thing, knock yourself out.

    However, there is a whole helluva lot more we need to know about climate than CO2 sensitivity. That is why we need skillful models like GCMs. There. Got that? Or did I use too many big words?

  • michel // May 5, 2009 at 1:17 pm | Reply

    “Arrhenius got about the right answer with pencil and paper FOR THE VALUE OF CO2 CLIMATE SENSITIVITY.”

    BPL, I really don’t think he did. If you read Erren’s stuff, he really did get the CO2 IR absorption numbers wrong, and persisted with his wrong estimate until quite late.

    Not that it bears much or at all on subsequent research on this topic, but we should stop saying he was right when he apparently was not. It harms rather than helps the case.

    [Response: Of course he got the absorption numbers wrong; our understanding of the interaction of light and matter was still evolving, we hadn't even touched quantum mechanics yet. But his estimate was within an order of magnitude -- which (if you're familiar with the history of science) for an initial foray is a startling success. His success was due to the fundamental correctness of his approach; you should stop implying that it wasn't, it harms rather than helps your credibility.]

  • Ray Ladbury // May 5, 2009 at 1:24 pm | Reply

    Michel,
    Arrhenius’s figure was high–not way too high, but high. And he later got a whole lot closer:
    From Wikipedia:
    “Arrhenius estimated that halving of CO2 would decrease temperatures by 4 – 5 °C (Celsius) and a doubling of CO2 would cause a temperature rise of 5 – 6 °C[3]. In 1906 Arrhenius adjusted the value downwards to 1.6 °C (including water vapour feedback: 2.1 °C). Recent (2007) estimates from IPCC say this value (the Climate sensitivity) is likely to be between 2 and 4.5 °C. It is remarkable that Arrhenius came so close to the most recent IPCC estimate.”

    It shows that you can ballpark the sensitivity with relatively simple methods.
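
    For what it’s worth, that kind of ballpark can be sketched in a few lines using the widely used simplified forcing expression dF ≈ 5.35 ln(C/C0) W/m^2; the 0.8 K per W/m^2 sensitivity parameter below is an illustrative assumption roughly matching the consensus central estimate, not a derived quantity.

    import math

    dF_2x = 5.35 * math.log(2.0)   # forcing for doubled CO2, ~3.7 W/m^2
    lam = 0.8                      # assumed sensitivity parameter, K per (W/m^2), feedbacks included
    print(dF_2x, lam * dF_2x)      # ~3.7 W/m^2 and ~3.0 K per doubling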

  • TCO // May 5, 2009 at 2:18 pm | Reply

    Ray, you can bluster all you want. If a paper-and-pen calculation shows the climate sensitivity, then an EBM should also. If it doesn’t, then you’re just kind of being a documentarian and appealing to a big name. If it doesn’t, then Arrhenius’ pen and paper had holes in it (even if the answer is “right”, that does not mean his calculation could not have contained major errors.)

    The issue with the GCMs is that you have to use an ensemble, and they are ginormous pieces of code…such that they’re not really readable and checkable like a math theorem. They also have various places for decisions to be made…various “handles” that can be handled wrong. In addition, they are dynamic projections which we KNOW are wrong because of chaos and initial conditions, said chaos impacting even year-to-year projections (not being damped by seasons, for instance).

    I mean GCMs MAY BE the right way to think about the problem and MAY BE correct.

    But if we come back and find the wrong answer…I’m going to be a lot more likely to look at over-modelling, at the issue of modellers not having enough real world validation, of assumptions and group think…than I will be to assume some radical departure in physics.

  • TCO // May 5, 2009 at 2:19 pm | Reply

    “However, there is a whole helluva lot more we need to know about climate than CO2 sensitivity. That is why we need skillful models like GCM. There. Got that? Or did I use too many big words?”

    But they can’t even tell you regional effects!

  • george // May 5, 2009 at 2:53 pm | Reply

    ray

    If you want to insult someone’s intelligence, it’s not very effective to say, “You’re dumber than [Insert Tamino, Einstein, Feynmann or somebody else smart].’

    I agree. It gets back to precisely defining what one means.

    What, precisely, does “Tammy is so much smarter than you” mean?

    Let’s break it down:

    1) What measure of “smartness” should we use? (“g”? )

    2) Just how smart is “Tammy” [Wynette? Wikipedia says she's a cosmetologist in addition to being a singer and General Relativity is not easy by any means so she's probably pretty bright by most measures. I wonder what her take is on the cosmetological constant]

    3) How much is “so much”?

    Thisssssssssssssssssssssssssss much?

    or Thissssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss
    sssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss
    sssssssssssssssssssssssssssssssss…
    much

  • Ray Ladbury // May 5, 2009 at 4:35 pm | Reply

    TCO, What you are doing is talking out your ass. You have no idea how GCM work. You have no idea how they are developed. You have no idea how teams make a decision to include new physics and how it is validated before it is included.

    Instead you make vague insinuations of “groupthink” and “overmodeling,” with zero evidence that this might be the case. Now maybe you can explain to us how you distinguish this sort of damning by innuendo from what McI does.

  • dhogaza // May 5, 2009 at 4:57 pm | Reply

    If a paper and pen calculation shows the climate sensitivity, then an EBM should also. If they don’t, then you’re just kind of being a documentarian and appealing to a big name. If they don’t, then Arrhenius pen and paper had holes in it (even if the answer is “right”, that does not mean his calculation could not have contained major errors.)

    Of course GCMs are trying to do more than just give you one number (CO2 sensitivity).

    But they can’t even tell you regional effects!

    Define “region”. They’ve done a good job with polar amplification, including the fact that the NH will show more than the SH.

    The problem is one of resolution, which follows from practical limitations on computing power.

  • dhogaza // May 5, 2009 at 5:09 pm | Reply

    Here’s an abstract of a paper that discusses the use of GCMs to explore regional effects of climate change.

    Climate features of a spatial scale of less than 200-300 km (gridscale in typical GCMs) cannot be represented at all (and physical processes that operate at such a scale will be poorly or not represented). In addition, many features of short time scale climatic variability, such as occurrence of extreme rainfall events and some extreme weather systems such as tropical cyclones cannot be adequately represented at the spatial resolution of a GCM.

    The inability to run GCMs with finer-scale resolution is NOT a failure or weakness of the models themselves. It’s a consequence of practical limitations on computing power.

    Computers continue to get faster and faster; as they do, you’ll see that 200-300 km resolution become finer, and as a result finer-scale climate features and smaller “regions” will be modeled with greater accuracy.

    Without changing a line of code …

  • David B. Benson // May 5, 2009 at 7:27 pm | Reply

    Patagonia is a good example of a region, being a province of Argentina with boundaries which agree well with climate changes to north and south. A study (for all of Argentina) concluded that most of Patagonia will become drier with increased global warming, “continuing the trends seen in the 20th century”.

    Now that was done using the Hadley Centre main GCM, but even I (without a GCM) could come to this same conclusion for Patagonia based on historical records and paleoclimate proxies.

    I hasten to add that other regions are much more difficult to reach conclusions about, the Eastern Mediterranean lands being a good example.

  • Hank Roberts // May 5, 2009 at 8:04 pm | Reply

    Food for thought:
    http://arstechnica.com/science/news/2008/11/elsevier-beyond-the-pale-of-scientific-respectability.ars

    “There doesn’t seem to be a valid economic justification for Elsevier’s pricing structure, other than the inherent value of their journals.

    So, are they worth it? Lets take a look at some examples …

  • TCO // May 5, 2009 at 10:30 pm | Reply

    Ray: I lack knowledge of how GCMs work…I agree. I’m not making an assertion. Do you capisce?

    Dhog: Those were thoughtful replies.

  • dhogaza // May 5, 2009 at 11:48 pm | Reply

    This is also interesting (from a 2007 presentation I dug up via Google) :

    Spatial discretization evolves with computer technology. Was 5 degree squares in 1990. Today 2.5 degrees. Usually 20 layers in atmosphere.

    Number of layers in the ocean also evolves
    In order to resolve weather and even averaged climate for local regions would need 1-degree (lat-long) resolution. Requires ~ 50-fold increase in supercomputer speed.

    It’s going to be awhile before we see that 50-fold increase in supercomputer speed …

    And BTW the 2.5 degree square falls within the 200-300km resolution figure I quoted earlier, so it seems both sources are on the same page.
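
    The rough arithmetic behind estimates like that can be sketched under the common assumption that cost scales with the number of horizontal grid cells times the number of time steps, with the time step shrinking in proportion to the grid spacing (the CFL condition) and everything else held fixed; the quoted 50-fold figure presumably also folds in extra vertical levels or other overheads. A minimal Python sketch:

    def horizontal_refinement_cost(res_old_deg, res_new_deg):
        """Rough compute-cost multiplier for refining horizontal resolution.

        Two horizontal dimensions plus a proportionally shorter time step give
        roughly a cubic dependence on the refinement factor.
        """
        r = res_old_deg / res_new_deg
        return r ** 3

    print(horizontal_refinement_cost(2.5, 1.0))   # ~15.6x going from 2.5 deg to 1 deg
    print(horizontal_refinement_cost(5.0, 2.5))   # ~8x for the 1990-to-today step mentioned above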

  • dhogaza // May 5, 2009 at 11:56 pm | Reply

    And here’s an explanation as to where parameterizations come in (and perhaps explains the source of the denialsphere myth that GCMs “ignore clouds”):

    Sub-grid scale processes are those that have dimensions smaller than the model resolution. Certainly cloud microphysical processes are in this category, therefore they need to be “parameterised”, i.e. the aggregate effect of the clouds on the resolved scale (in terms of changes in the radiation fluxes or moisture and mass transport, etc) is calculated.

    Parameterizations are empirical approximations based on large-scale (resolved) variables. Global models do not resolve cumulus clouds (even thunderstorms), so their presence and effects are parameterised: for instance, when the atmosphere is conditionally unstable and (gridscale) moisture convergence occurs, thunderstorms are assumed, which stabilise the atmosphere and deposit rain.

    This makes perfect sense as individual cumulus clouds or thunderstorms are far smaller than 250-300km^2, therefore the models will be “blind” to them (unable to resolve such a fine feature). So such features are parameterized.

    Parameterizations may have a theoretical justification but always need to be tested experimentally. For instance, one can assume that the surface albedo depends solely on surface temperature (i.e. the likelihood of ice), or that the planetary albedo is simply related to cloud amount. All state-of-the-art models somehow parameterise atmospheric radiation, sub-gridscale motion, chemistry, and cloud physics.
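
    As an illustration of the kind of parameterization the quoted passage describes, here is a toy temperature-dependent surface albedo of the sort used in simple energy-balance models; the threshold temperatures and albedo values are illustrative assumptions, not taken from any actual GCM.

    def surface_albedo(T_kelvin, a_ice=0.62, a_warm=0.30, T_ice=260.0, T_warm=290.0):
        """Toy parameterization: surface albedo depends only on surface temperature.

        Fully ice-covered (high albedo) below T_ice, ice-free (low albedo) above
        T_warm, with a linear ramp in between.
        """
        if T_kelvin <= T_ice:
            return a_ice
        if T_kelvin >= T_warm:
            return a_warm
        icy_fraction = (T_warm - T_kelvin) / (T_warm - T_ice)
        return a_warm + icy_fraction * (a_ice - a_warm)

    print(surface_albedo(255.0), surface_albedo(275.0), surface_albedo(295.0))   # 0.62, 0.46, 0.30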

  • dhogaza // May 6, 2009 at 12:36 am | Reply

    Here’s another link TCO might find interesting, and it backs my “without changing a single line of code” comment above.

    The atmospheric model used by HadCM3 (coupled-model 3) is exactly the same used to make weather predictions you see on TV or in print for the UK.

    The difference? Resolution (size of the grids). Why? For UK weather prediction well you only need to run it for the UK :) Obvious, right?

    Then there’s a separate ocean model used for climate work, coupled to the atmospheric model (thus the “coupled model” moniker).

    For weather forecasting they just give the atmospheric model a snapshot of current conditions and run it out a few days rather than couple it to the ocean model (presumably because ocean conditions change much more slowly therefore you don’t really need to deal with it for short-term forecasts).

  • Ray Ladbury // May 6, 2009 at 12:56 am | Reply

    No, you aren’t making an assertion. You are casting vague aspersions against a whole discipline of science and all its practitioners. And if you make the aspersions vague enough, no one can defend themselves because they aren’t even sure if they’re the ones being attacked. It’s very reminiscent of McI’s MO, isn’t it?

  • Michael hauber // May 6, 2009 at 1:31 am | Reply

    Could aerosols be again cooling the climate?

    With such a large boom in China of late I have been curious about this possibility. I can’t seem to find much on the internet about recent aerosol forcings; GISS Model E seems to apply a curiously constant aerosol forcing since about 2000.

    I made a crude attempt to guesstimate what aerosol forcings may have been doing lately. The Carbon Dioxide Information Analysis Center publishes estimates of CO2 emissions by year. My assumptions to estimate aerosol concentrations (a rough sketch of the bookkeeping follows at the end of this comment):

    Aerosol emissions are proportional to CO2 emissions.
    Every year atmospheric concentrations are reduced by 40%, and the year’s emissions are added. Sensitivity analysis shows that I can vary this parameter from around 25% to 80% and get the same conclusion. I guessed this figure from the fact that volcanoes impact climate for about 3 years.

    I assume a pre-industrial CO2 of 275 ppm, a CO2 forcing of 2.9 in 2000 (which actually includes other GHGs), an aerosol forcing of -2.1 in 2000, and a linear relationship between aerosol and CO2 concentrations and forcing. I obtained CO2 concentrations from Mauna Loa measurements.

    The result is a reduction in forcings over recent years. With an aerosol decay rate of 20%/annum I get a peak in 2003, and a negligible -0.01 reduction since then. With a decay rate of 40%, I get a peak in 2003 and a -0.07 reduction in forcing; with a decay rate of 80% I get a peak in 2002, and a -0.11 reduction in forcing.

    Compared with the 0.15 forcing for N2O quoted in the GISS Model E forcings, this looks to be on the threshold of significance.

    However, if forcings have reduced a little in the last 5 years, would this lead to a cooling? Considering that climate models predict significant further warming even if forcings remain constant in the future, I would guess not. Also, if Chinese emissions were leading to a short-term cooling, I would predict that such cooling would be strongest in the northern hemisphere and strongest over land. However, looking at recent data from GISS, the last 5-10 years have seen a warmer land compared to sea, and a warmer SH compared to NH, so I seriously doubt this factor has been dominant over the last 5-10 years.

    Also, following some of the aerosol discussion at RealClimate, the recent China boom has been leading to an increase in black carbon, which causes warming, so perhaps there isn’t even any temporary reduction in anthropogenic climate forcings.
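
    To make the bookkeeping above concrete, here is a rough sketch in Python. Every number in it is a placeholder chosen only to show the mechanics: real emissions would come from CDIAC and real concentrations from the Mauna Loa record, and the steady-state spin-up of the aerosol burden is my own shortcut rather than part of the description above.

    ```python
    # A sketch of the back-of-the-envelope aerosol-forcing bookkeeping
    # described above.  All inputs are placeholders purely for illustration.
    import math

    emissions = {2000: 6.7, 2001: 6.9, 2002: 7.0, 2003: 7.4,
                 2004: 7.8, 2005: 8.1, 2006: 8.4}      # emissions proxy (made up)
    co2_ppm   = {2000: 369.5, 2006: 381.9}             # placeholder concentrations

    decay      = 0.40    # fraction of the airborne aerosol removed each year
    co2_preind = 275.0   # ppm
    f_ghg_2000 = 2.9     # W/m^2 in 2000 (CO2 + other GHGs, per the assumptions above)
    f_aer_2000 = -2.1    # W/m^2 in 2000

    # Aerosol burden proxy: decay last year's burden, add this year's emissions
    # (aerosol emissions taken as proportional to CO2 emissions).  Starting
    # from a steady-state burden for the year-2000 emission rate is my own
    # spin-up shortcut, not part of the original description.
    burden, b = {}, emissions[2000] / decay
    for year in sorted(emissions):
        b = b * (1.0 - decay) + emissions[year]
        burden[year] = b

    # Aerosol forcing scales linearly with the burden; GHG forcing scales with
    # ln(CO2 / pre-industrial).  Both are anchored to their year-2000 values.
    f_aer = lambda y: f_aer_2000 * burden[y] / burden[2000]
    f_ghg = lambda y: f_ghg_2000 * math.log(co2_ppm[y] / co2_preind) \
                                 / math.log(co2_ppm[2000] / co2_preind)

    for y in (2000, 2006):
        print(y, round(f_ghg(y), 2), round(f_aer(y), 2), "W/m^2 (toy numbers)")
    ```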

  • Hank Roberts // May 6, 2009 at 3:41 am | Reply

    Excellent radio program on at the moment:
    http://www.itsyourworld.org/wac/Radio.asp?SnID=641514496

    05-05
    Three challenges in one: the economy, energy, and the environment

    Jane C.S. Long, Associate Director, Energy and Environment, Lawrence Livermore National Laboratory
    Dan Reicher, Director for Climate Change and Energy Initiatives, Google.org
    David Victor, Director, Program on Energy and Sustainable Development, Stanford University

    Really, really good. Contact info on the above page if you want your local station to carry it.

    It should be in their archive in a week or so:
    http://wacsf.vportal.net/

  • dhogaza // May 6, 2009 at 4:21 am | Reply

    Aerosol emissions are proportional to CO2 emissions.

    Really?

  • dhogaza // May 6, 2009 at 4:23 am | Reply

    Could aerosols be again cooling the climate?

    First of course, it would have to be cooling, in excess of what one expects in a La Niña situation.

  • dhogaza // May 6, 2009 at 4:47 am | Reply

    Ray: I lack knowledge of how GCMs work…I agree. I’m not making an assertion. Do you capisce?

    Dhog: Those were thoughtful replies.

    This morning, I knew nothing at all about how GCMs really work (you don’t really need to know anything about the real science to know that the ideologues on the other side are full of it).

    So my “thoughtful replies”, and my follow-ups, and the additional knowledge which I’ve (strategically) chosen not to reveal yet, come from an hour or so of targeted googling and time spent quietly reading.

    TCO, you could’ve done so yourself.

    Why didn’t you?

  • Lazar // May 6, 2009 at 9:45 am | Reply

    A caution about how certain sites are extending Santer et al. (2008).

    The paper is largely a response to Douglass et al. 2007, and contains two methodologies for comparing models with observations. d*1 is an extension of the Douglass et al. test for consistency between modelled and observed trends to include uncertainty in the observed trend. d*1 is an examination of the methodology of Douglass et al., not Santer et al.’s optimal choice for comparing models and observations.

    … for which it would be entirely inappropriate…

    Each model predicts a range of trends using multiple realizations/runs, differences are due to sensitivity to initial conditions (chaos). The range of trends is not included in the estimate of the ensemble mean trend uncertainty in d*1 (equations 9 and 12 of Santer et al.)… which is calculated simply as the variance of model mean trends averaged over all runs/realizations. Santer et al. put in a big caveat…

    There are three underlying assumptions in the d∗1 test. The first assumption (which was also made by DCPS07 [Douglass et al.]) is that the uncertainty in <> [the ensemble mean trend] is entirely due to inter-model differences in forcing and response, and not due to differences in variability and ensemble size.

    Santer et al. make it clear that d*1 does not test the ability of models to reproduce the observed trend (this is tested by the first methodology in Santer et al.). Instead it is a wilfully simple test of the consistency of one trend (the ensemble mean) with the observed trend (a schematic of the statistic is sketched at the end of this comment, after the references)…

    Under H2, we seek to determine whether the model-average signal is consistent with the trend in φo(t) (the signal contained in the observations).

    Certain sites are describing d*1 as the ‘Santer et al.’ method and eliding the Douglass et al. provenance. They are also claiming or implying that the test shows models are failing with long term trend tests or using more recent data. None of the sites are listing the above caveat even as they list others contained in Santer et al.

    In both methodologies Santer et al. assume AR1 noise. One site claims the test is equally likely to be too stringent versus too liberal if noise is not AR1. Santer et al…

    Experiments with synthetic data reveal that the use of an AR-1 model for calculating ne tends to overestimate the true effective sample size (Zwiers and von Storch, 1995). This means that our d test is too liberal, and is more likely to indicate that there are significant differences between modelled and observed trends, even when significant differences do not actually exist. It should therefore be easier for us to confirm DCPS07’s finding that modelled and observed trends are inconsistent.

    References

    Consistency of modelled and observed temperature trends in the tropical troposphere
    Santer et al.
    International Journal of Climatology 28, 13, pp. 1703-1722 (2008)
    doi:10.1002/joc.1756

    A comparison of tropical temperature trends with model predictions
    Douglass et al.
    International Journal of Climatology 28, 13, pp. 1693-1701 (2007)
    doi: 10.1002/joc.1651
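
    For readers who want to see the shape of the statistic being discussed, here is a schematic of a d*1-style test as summarized above: the ensemble-mean trend is compared with the observed trend, with the ensemble-mean uncertainty taken from the inter-model spread of model-mean trends only, plus the observed trend’s own standard error. This is a sketch of the idea as described in this thread, not a transcription of equations 9 and 12 of Santer et al.; the normalization of the inter-model term, in particular, is an assumption on my part and should be checked against the paper.

    ```python
    # Schematic of a d*1-style consistency test, as described above (NOT the
    # paper's exact equations): compare the multi-model ensemble-mean trend
    # with the observed trend, using only the inter-model spread of model-mean
    # trends (plus the observed trend's standard error) as the uncertainty.
    import math

    def d1_star(model_mean_trends, obs_trend, obs_trend_se):
        M = len(model_mean_trends)
        ens_mean = sum(model_mean_trends) / M
        # inter-model spread of the model-mean trends ...
        s2 = sum((b - ens_mean) ** 2 for b in model_mean_trends) / (M - 1)
        # ... converted to a standard error of the ensemble mean (this
        # normalization is assumed; check eqs. 9 and 12 of Santer et al. 2008)
        se_ens = math.sqrt(s2 / M)
        return (ens_mean - obs_trend) / math.sqrt(se_ens ** 2 + obs_trend_se ** 2)

    # Note what is *missing*, per the caveat quoted above: nothing here accounts
    # for each model's own internal (realization-to-realization) variability.
    ```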

  • michel // May 6, 2009 at 2:27 pm | Reply

    http://home.casema.nl/errenwijlens/co2/arrhweart.htm

    According to Erren, if this is a correct reading of what he is saying, Arrhenius was estimating 8 degrees K in 1903 for a tripling of CO2 not allowing for water vapor feedback. This was despite having shown evidence of knowing the correct numbers from work by Angstrom.

    He may have been right about the direction, but he was wrong about the distance.

  • TCO // May 6, 2009 at 2:36 pm | Reply

    Dhog: I’m lazy and I like to fart around on the net as opposed to doing research. Will look at that stuff and respond.

    Ray: No. They’re not vague aspersions, they are vague concerns or just even confessions of my own lack of knowledge. You are politicized and simple-minded…and thus miss subtleties. That’s why I say you are not as smart as Tammy. Or Gavin. Or Eli.

  • JCH // May 6, 2009 at 4:04 pm | Reply

    “First of course, it would have to be cooling, in excess of what one expects in a La Niña situation. …”

    Are aerosol emissions continuing to diminish in North America and Western Europe? If so, wouldn’t the proposed Chinese effect also have to leap that?

  • Dave A // May 6, 2009 at 8:45 pm | Reply

    dhogaza,

    “This morning, I knew nothing at all about how GCMs really work “

    How does this sound?

    Models are less than perfect representations of reality and thus are an imperfect method of describing and predicting climatic change. This is partly because scientists do not yet fully understand how the atmosphere works and interacts with the oceans and biosphere, and partly because no computer currently available is powerful enough to carry out all the necessary calculations. Some elements, therefore, are missing altogether from the models, while others are represented by assumptions rather than facts, or are included in an over-simplified form. The major deficiencies in climate modelling are the behaviour of oceans and the role of clouds.

    Does that sound about right?

  • David B. Benson // May 6, 2009 at 8:53 pm | Reply

    AFAIK, ABC aerosols from Asia vastly outweigh reductions in Europe; North America remains about constant. The result globally is a slight cooling effect, but in some regions (Arctic, for example) a warming effect.

    These various effects are less than 5% of the various forcings.

  • dhogaza // May 6, 2009 at 9:32 pm | Reply

    The major deficiencies in climate modelling are the behaviour of oceans and the role of clouds.

    Does that sound about right?

    Not at all clear regarding the “behavior of oceans” part, where did you read that?

    Uncertainty regarding cloud feedbacks has been acknowledged all along by climate modelers. The fact that sensitivity estimates for doubling CO2 range from 2 to 4.5 C is a reflection of that (and other) uncertainties.

  • dhogaza // May 6, 2009 at 9:45 pm | Reply

    I need to read more closely.

    Some elements, therefore, are missing altogether from the models

    No. Nothing known to be significant is “missing altogether”.

    , while others are represented by assumptions rather than facts

    No. The parameterizations that are required due to spatial resolution limitations aren’t “represented by assumption”. Theory and empirical evidence, not assumption.

    , or are included in an over-simplified form.

    True. One out of three, not bad.

  • Ray Ladbury // May 7, 2009 at 12:15 am | Reply

    Dave A., I wouldn’t even give you a passing grade on that if it were a 3rd grade book report. It’s pointless to even try to correct it as there is no information there to start with.

  • Ray Ladbury // May 7, 2009 at 12:22 am | Reply

    So, TCO, maybe you can explain some of these “subtleties”. For instance, when did it become an overtly political act to align oneself with the overwhelming preponderance of the evidence as published in peer-reviewed journals and the thinking of the overwhelming majority of experts in the field?

    And I’m still not quite clear on the difference between your casting aspersions of “groupthink” and “overmodeling” at climate modelers as a whole and McI’s technique of insinuating fraud or massive incompetence just vaguely enough that there’s no chance of rebutting an actual accusation. I mean both techniques seem pretty clever to me–gutless, but clever.

  • Phil Scadden // May 7, 2009 at 12:46 am | Reply

    Dave A – so your corollary is that because of these uncertainties, things won’t be that bad. A few things. First, uncertainty cuts both ways: it could be worse. Uncertainty can be quantified in many cases; it still looks bad. The uncertainty is mostly in the feedbacks. The sensitivity from the straight GH effect puts a lower bound on the basis of a simple heat balance. Still doesn’t look good.

    Now perhaps you want to look at risk management rather than hoping for the best?

  • Michael hauber // May 7, 2009 at 5:59 am | Reply

    Michael: Aerosol emissions are proportional to CO2 emissions.

    dhogaza: Really?

    Michael: probably not, but it’s my best guess after trying my google skills to find some real data on aerosols. From what I remember of this search, GISS Model E aerosol forcings are calculated based on assumptions from fuel consumption statistics, and the flat line after about 2000 feels like ‘fill in’ rather than real data.

  • TCO // May 7, 2009 at 12:33 pm | Reply

    Ray: I’m going to choose not to explain the subtleties.

  • Ray Ladbury // May 7, 2009 at 2:35 pm | Reply

    Didn’t think you would, but then, silence, too, can speak volumes.

    “You always become the thing you hate.”–Dorothy Parker

  • luminous beauty // May 7, 2009 at 3:48 pm | Reply

    TCO,

    Wise decision. Better to be quiet and suspected of being a fool than opening one’s mouth and removing all doubt.

  • Steve A // May 7, 2009 at 4:46 pm | Reply

    Speaking about aerosols, do we really know they have a net cooling effect? I’m a little behind on the subject because the last I heard we still didn’t know what their exact role in the atmosphere was, due to the large uncertainties. I wouldn’t be surprised if there is better data now, since it has been a couple of years. Thanks

  • Ray Ladbury // May 7, 2009 at 5:58 pm | Reply

    Steve A., The question is “which aerosols”? Some cool, some warm and some it depends on where they are, etc. Sulfate aerosols, though, definitely cooling.

  • Lazar // May 7, 2009 at 8:01 pm | Reply

    Appropriateness of the d*1 test from Santer et al. (2008) in testing modelled-observed consistency…

    A quick and helpful response from Ben Santer (pers comm) (paraphrasing); using a paired trends test is preferable over d*1 as the former includes uncertainty due to interannual variability.

    Interannual variability can be seen as a spread of modelled trends among multiple realizations (due to different occurrences of El Niños etc). d*1 takes the average of those trends and treats that average as an exact estimate of the real-world trend.

    It is very strange to find sites which use d*1 to seriously test modelled and observed trends but do not mention this issue that is explicitly mentioned in Santer et al. :-)

  • Lazar // May 7, 2009 at 8:42 pm | Reply

    The Santer et al. paired trends test is nice and simple to understand… it is a test of the difference between two trends that assumes errors between series are uncorrelated. Each of the forty-nine modelled trends minus the observed trend is tested at alpha=.05 for a significant difference from zero. Even if models and observations were consistent, about two or three rejections (0.05*49=2.45) would be expected by chance; in this case there was one. Figure 3 (a) of Santer et al. shows nicely how modelled and observed trends for the tropical troposphere overlap between models and between realizations.
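
    Since the paired-trends idea is simple enough to write down, here is a minimal sketch of the procedure as just described: each modelled trend is differenced against the observed trend, the difference is normalized by the combined standard errors (errors between series assumed uncorrelated), and rejections at alpha = 0.05 are counted. The normal approximation and any input numbers you feed it are my simplifications; see the paper for the real treatment (including the effective-sample-size adjustment for autocorrelation).

    ```python
    # Minimal sketch of a paired-trends consistency check, as described above.
    # Each modelled trend is compared with the observed trend; errors are
    # assumed uncorrelated between series, so the standard error of the
    # difference is sqrt(se_model^2 + se_obs^2).  Inputs are placeholders.
    import math

    def count_rejections(model_trends, model_ses, obs_trend, obs_se, z_crit=1.96):
        rejections = 0
        for b_m, se_m in zip(model_trends, model_ses):
            z = (b_m - obs_trend) / math.sqrt(se_m ** 2 + obs_se ** 2)
            if abs(z) > z_crit:          # two-tailed test at alpha = 0.05
                rejections += 1
        return rejections

    # With 49 models, about 0.05 * 49 = 2.45 rejections are expected by chance
    # even if models and observations are consistent; finding only one rejection
    # is therefore no evidence of inconsistency.
    ```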

  • Dave A // May 7, 2009 at 9:34 pm | Reply

    Dhogaza, Ray & Phil,

    Those words were actually from a book published in 1989 ( The Greenhouse Effect, S Boyle and J Ardill, New English Library, ISBN 0-450-50638-X, p35)

    A few problems with them undoubtedly but still substantially correct 20 years later !

  • Dave A // May 7, 2009 at 9:45 pm | Reply

    Ray,

    As I’ve said before, your constant denigration of people who disagree with you is boring and only reflects badly upon yourself.

  • t_p_hamilton // May 8, 2009 at 1:08 am | Reply

    DaveA said:”Those words were actually from a book published in 1989 ( The Greenhouse Effect, S Boyle and J Ardill, New English Library, ISBN 0-450-50638-X, p35)

    A few problems with them undoubtedly but still substantially correct 20 years later !”

    Given that supercomputers are 100,000 times faster now, that means you are 0.001% correct.

  • Ray Ladbury // May 8, 2009 at 1:14 am | Reply

    Wow, Dave A., a post that is factually wrong, mostly illiterate, and plagiarized. Your mother must be proud!

    And I don’t denigrate those who disagree with me–only those for whom the learning curve shows no positive slope. Prove me wrong, Dave. I’d love for you to prove me wrong. Show me just one thing you’ve learned since you started haunting this board.

  • dhogaza // May 8, 2009 at 3:34 am | Reply

    A few problems with them undoubtedly but still substantially correct 20 years later
    Well, no. Models, and our physical understanding, have greatly improved in the last 20 years.

    It is up YOU to show us that those who write the models are LYING when they say the limitations you state are still true today.

    20 years ago you were living in Bill Gates’ vision of a world where NO ONE needed more than 640KB of RAM, while today 1GB or 2GB laptops (for god’s sake) are common.

    The story with model limitations over the last 20 years is similar (because model limitations are largely driven by available computing power).

  • dhogaza // May 8, 2009 at 3:36 am | Reply

    As I;ve said before your constant denigration of people who disagree with you is boring and only reflects badly upon yourself.

    On the other hand, constantly posting your ignorance is even more boring and only reflects badly upon yourself.

    Posting a twenty-year old description of climate models to “prove” that today’s models have the same limitations just shows that you’re a ****ing idiot.

  • dhogaza // May 8, 2009 at 3:38 am | Reply

    For instance, DaveA, using your analogy …

    consumer digital cameras don’t exist …

    … because a literature survey conducted 20 years ago would reveal that no consumer digital cameras were available for purchase.

    Not to mention HDTVs.

  • Phil Scadden // May 8, 2009 at 4:04 am | Reply

    Dave A. Your point? The uncertainties in feedback have reduced substantially in 20 years. The direct CO2 effect is unchanged. What I am claiming is that you are reading into the quote doubts that are not there. As a matter of interest, what did the book think temperatures would be 20 years on?

  • michel // May 8, 2009 at 7:18 am | Reply

    A simple but puzzling question.

    I came upon the interesting site woodfortrees, run by a chap whose approach to these things is rather close to my own, in that he thinks lowering emissions would be a good thing regardless of whether it will make any difference to warming. Anyway, he has the following graph of the UAH lower troposphere anomaly readings:

    http://www.woodfortrees.org/plot/wti/mean:12/plot/wti/trend

    Here is the question. We seem, if you take the linear trend values as the start and end points, to have increased temperature by less than half a degree C in around 30 years. If this carried on, we’d raise temperature by under 1.5 degrees in the next 70 years.

    This does not seem terribly alarming. So are the UAH temperature estimates wrong? Is there real reason to think the pace is going to pick up (which it doesn’t seem to be doing lately)? Or what else?

    His site is very nice, you can pick just about anything you want and get it charted, if you haven’t already seen it. Certainly beats downloading and hacking away at it yourself.

  • michel // May 8, 2009 at 8:03 am | Reply

    The curious case of UK transport.

    It seems clear that if the UK as an example is to lower its carbon emissions from transport of people, more travel will have to take place on public transport. So we would expect a government committed to a green agenda, emission reductions, electric/hybrid cars and so on to have made public transport cheaper and car transport dearer.

    In fact, they have done the reverse as a recent UK Independent article shows.

    Official figures, seen by The Independent, show that the cost of motoring has fallen by 13 per cent in real terms since 1997, while bus and coach fares have increased by 17 per cent above inflation. Rail fares have risen by 7 per cent extra above inflation.

    You can see the political difficulties – most people with cars, which is to say 90% of the UK rural population, think that the government is engaged in a war against drivers. This is partly because of the government’s efforts to impose mandatory rather than advisory speed limits, which are anathema to the freedom-to-kill brigade, but it’s also because they have been raising tax on gasoline faster than inflation. Not enough faster, however, apparently.

    What to conclude? Simple really. Governments limited by what is pragmatically politically acceptable will never succeed in making changes to emissions that are as large as the experts say are necessary. One or the other has to give – emissions policy, or political acceptability. You are not going to get both.

    Dhogaza suggested recently that maybe the reason Holland is so good on pedestrian and cycle deaths is not just segregation of traffic, but also legal policy. Possibly so. Another possible contributor comes up in an article in today’s Times. A report has concluded that

    20mph zones enforced by road humps reduced collisions involving pedestrians by 63 per cent and cyclists by 29 per cent.

    Not to mention that when collisions occur at 20mph, 90% of the victims live; at 40mph, 90% die. It’s not linear. But this also is politically very problematic.

  • Gavin's Pussycat // May 8, 2009 at 12:19 pm | Reply

    Michel:

    Here is the question. We seem, if you take the linear trend values as the start and end points, to have increased temperature by less than half a degree C in around 30 years. If this carried on, we’d raise temperature by under 1.5 degrees in the next 70 years.

    This does not seem terribly alarming. So are the UAH temperature estimates wrong? Is there real reason to think the pace is going to pick up (which it doesn’t seem to be doing lately)? Or what else?

    There is nothing (badly) wrong with the UAH temperatures. It’s the assumption of linear behaviour that is too simple. Yes, the pace will pick up (under a BAU scenario) because CO2 emissions are then growing exponentially on a 30-year doubling time scale.

    Mathematically it gets a bit complicated because the excess CO2, which also grows exponentially, builds on top of the natural background (pre-industrial) concentration — and the forcing is proportional to the logarithm of the sum of those. What you need to study is the function

    F = k ln(DC0*exp(t/tau)+C0)

    where

    DC0 is the excess CO2 at time t=0

    C0 = pre-industrial CO2

    tau = time constant (some 40 years for base-e)

    To spill the beans, the behaviour is at first almost exponential, i.e., the pace picking up fast; then, around when CO2 passes 2x pre-industrial, it will gradually level off to linear. But that’s still some time away. (A small numerical sketch of this function follows at the end of this comment.)

    …and any wiggles you see on decadal or less time scales, you can conveniently ignore.

    Hope this helps.
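
    To make the behaviour of F easy to see, here is a small numerical sketch of the function above. The parameter values are round numbers of my own choosing, purely for illustration: C0 = 280 ppm pre-industrial, DC0 = 105 ppm of excess CO2 today, tau = 40 years, and k = 5.35 W/m² (the commonly quoted CO2 coefficient). Differencing against the pre-industrial value only shifts F by a constant, so the shape is the same as the commenter’s F.

    ```python
    # Numerical sketch of the forcing function described above:
    #   C(t) = DC0 * exp(t / tau) + C0,   F(t) = k * ln(C(t))
    # Constants are illustrative round numbers (my choices, not the
    # commenter's).  Subtracting the pre-industrial value of F only shifts
    # the curve by a constant, so the shape is unchanged.
    import math

    C0, DC0, tau, k = 280.0, 105.0, 40.0, 5.35

    def forcing_above_preindustrial(t_years):
        C = DC0 * math.exp(t_years / tau) + C0
        return k * math.log(C / C0)

    for t in (0, 20, 40, 60, 80, 100):
        F = forcing_above_preindustrial(t)
        C = DC0 * math.exp(t / tau) + C0
        print(f"t = {t:3d} yr   C = {C:6.0f} ppm   F = {F:5.2f} W/m^2")
    # The 20-year increments in F grow at first (the pace picking up), and
    # only approach a constant (k * 20 / tau, about 2.7 W/m^2 per 20 years)
    # once the excess CO2 dwarfs the pre-industrial background.
    ```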

  • Ray Ladbury // May 8, 2009 at 1:07 pm | Reply

    dhogaza,
    No, Dave A. has an answer for that. The digital cameras and plasma TVs appeared magically one day, brought by the same fairies who will magically transform our energy infrastructure just in time to stave off disaster!

  • Ray Ladbury // May 8, 2009 at 1:14 pm | Reply

    Tamino, there is a poor benighted savage on Realclimate who desperately needs your tutelage. A couple of excerpts from a recent post:

    “You are assuming that all “noise” is random, and be averaged out. Which is true if it’s ideal “band limited white noise”. Unfortunately not all noise is “white” but can have periods of “non-randomness” due to what ever.”

    “Just using statistics done not seem to give as direct an insight that the Fourier analysis does. By adjusting the “kernel” or filter, I can easily look, or for, single or multiple waves or repeatable occurrences. This method does a better job then more classical signal processing, in that phase delays are reduced.”

    And the kicker:

    “As far as statistics go, I have no problems with that. I have been through enough 3-sigma performance specs to last a lifetime. However I think that the Fourier method give a more direct insight as to what is going on. As I stated in the earlier post, using the Fourier analysis, showed a peak or slight down trend in current global temps. Tamino’s, use of averages, keep right on going up after 2000. Which analysis is closer to reality?”

    Does this joker really think he’s learned everything there is to know about probability and statistics from a frickin’ Six-Sigma course?!

  • Lazar // May 8, 2009 at 1:52 pm | Reply

    Michel,

    So are the UAH temperature estimates wrong?

    Dunno… but the UAH analysis is an outlier compared to all others. Tamino compares the five satellite analyses and one of surface air temperature (GISTEMP) here, with discussion on this page.

  • dhogaza // May 8, 2009 at 2:44 pm | Reply

    If this carried on, we’d raise temperature by under 1.5 degrees in the next 70 years.

    That’s the global average. Temps over larger land masses – North America, Eurasia – would rise considerably more. Double? Something like that.

    Given that a 5C global temperature drop would trigger an ice age, on what basis do you claim that a 2C rise seems harmless?

  • Dave A // May 8, 2009 at 8:01 pm | Reply

    t_p_hamilton,

    Oh, so the supercomputers are now 100,000 times faster – but they still rely on the quality of the information input to them!

    Obviously there has been considerable progress since 1989, but in the fundamental aspects – the relatively poor modelling of ocean processes (e.g. PDO, AMO, ENSO) and cloud effects – the fundamental problems remain the same.

  • Dave A // May 8, 2009 at 8:19 pm | Reply

    Ray,

    My mother is proud of me, I think, but leave her out of this.

    I was having a bit of fun, albeit with a serious undercurrent. There are considerable problems with the models – go read the Stainforth paper I referenced in the previous Open Post.

    You, and others, may choose to ignore these problems but that doesn’t mean that they are not there.

    And one thing, at least, I have learned on this blog is that you, Ray, are a very contemptuous person with an inflated sense of your self-worth.

    [Response: You, and others, may choose to ignore the successes and solid physical foundation of the models, but that doesn't mean that they are not there.

    Your portrayal of computer models is vastly more one-sided and unrealistic than anybody else's.]

  • luminous beauty // May 8, 2009 at 8:38 pm | Reply

    Dave,

    “Le mieux est l’ennemi du bien.” (“The best is the enemy of the good.”)

    Comprende vous?

  • dhogaza // May 8, 2009 at 9:25 pm | Reply

    Obviously there has been considerable progress since 1989 but in the fundamental aspects of the relatively poor modelling of ocean processes (eg, PDO, AMO, ENSO)

    Actually ENSO-like features arise as an emergent property of modern GCMs, which just goes to show you don’t know what the **** you’re talking about.

  • dhogaza // May 8, 2009 at 9:28 pm | Reply

    “Le mieux est l’ennemi du bien.”

    Dave’s just playing another variant of the “we don’t know everything therefore we know nothing” game.

    Tamino:

    Your portrayal of computer models is vastly more one-sided and unrealistic than anybody else’s.

    Unfortunately, DaveA’s portrayal is typical within the denialsphere.

  • Ray Ladbury // May 8, 2009 at 9:49 pm | Reply

    Dave A., given that you haven’t bothered to learn the science, remind me again why I should listen to you. You certainly work very hard convincing yourself that you don’t need to do anything. Might I suggest that you devote even a tiny fraction of that effort to actually understanding the science. Just a suggestion.

    The fact of the matter is that the climate models are very good at reproducing the outlines and many of the details of what climate looks like on Earth. If you were to change them in such a way that the climate change threat would go away, you would get something very different from Earth. Now you may say that the models are all wrong, but no one – least of all the denialists – has proposed anything different.

    We can go with science or we can go against science. There’s no middle ground. Science or anti-science. You’ve already picked, so…Bye.

  • TCO // May 8, 2009 at 10:25 pm | Reply

    I had several funny off color posts deleted by Tammy. You all can wonder what they said…

  • TCO // May 8, 2009 at 11:01 pm | Reply

    What kind of message board is this with having pre-censoring?

  • Philippe Chantreau // May 9, 2009 at 12:34 am | Reply

    Dave A, are you sure that the comment on inflated sense of self worth does not also apply to you?

    You showed a heavy reading comprehension problem when looking at the Ramanathan black carbon Asian studies. It would be quite easy to infer from it that you have no business commenting on scientific matters and that your comments here, on work by people who are experts in their field, are nothing more than indulging your inflated sense of self. At least Ray really knows what he’s talking about when it comes to radiation physics.

  • t_p_hamilton // May 9, 2009 at 1:53 am | Reply

    DaveA the modeling expert: “Oh, so the supercomputers are now 100,000 times faster – but they still rely on the quality of the information input to them! … Obviously there has been considerable progress since 1989, but in the fundamental aspects – the relatively poor modelling of ocean processes (e.g. PDO, AMO, ENSO) and cloud effects – the fundamental problems remain the same.”

    Actually, the quality of the input is not an issue.

    What is obvious is that you are completely clueless on how models are judged, and how much progress they have made in 20 years. Your fundamental problem remains the same – ignorance.

  • dhogaza // May 9, 2009 at 5:34 am | Reply

    What kind of message board is this with having pre-censoring?

    Since in the past you’ve admitted that you’re an asshole, while at the same time telling us you’re smarter than all of us, smarter than everyone since Feynman, while at the same time screaming that GCMs are wrong, while afterwards admitting that you don’t know how they work …

    Wait … they’re wrong? But you don’t know how they work? Why are they wrong? Because you dislike the political implications.

    I’m going to nominate myself for a Godwin’s Law banning …

    You reject modern science for the same reason Hitler rejected modern physics – the political implications.

    “Jewish physics” … right or wrong? Ask those who survived Hiroshima.

    TCO, you play the same role of the ideologically-driven anti-science fanatic …

  • dhogaza // May 9, 2009 at 5:37 am | Reply

    You showed a heavy reading comprehension problem when looking at the Ramanathan black carbon Asian studies. It would be quite easy to infer from it that you have no business commenting on scientific matters and that your comments here, on work by people who are experts in their field, are nothing more than indulging your inflated sense of self.

    Well, at least DaveA isn’t nearly as deluded as Watts, and also no one pays attention to him.

    And unlike Watts, when proven to be an idiot here, he doesn’t simply retreat and refuse to engage.

    He just digs himself deeper into the ignorant pit of stupidity he was apparently born into.

  • Gavin's Pussycat // May 9, 2009 at 7:37 am | Reply

    About
    http://www.woodfortrees.org/
    yes, a great site. The only thing I regret is that it doesn’t allow least-squares with proper errors. But then, that would be hard to get correct for autocorrelated data.

  • TCO // May 9, 2009 at 1:01 pm | Reply

    dhog: I love you.

  • TCO // May 9, 2009 at 1:54 pm | Reply

    dhog: I didn’t claim to be smart.

  • Dave A // May 9, 2009 at 8:09 pm | Reply

    dhogaza,

    “Actually ENSO-like features arise as an emergent property of modern GCMs, which just goes to show you don’t know what the **** you’re talking about.”

    Here’s what Stainforth has to say about ‘model inadequacy’

    “In cases where the reliability of the forecasting system cannot be confirmed, the ability of our models to reproduce the past observations in detail gives us some hope that the model forecast may provide valuable guidance for the real world. Climate models fail this test.

    First, they do not include many processes which are known to be important for climate change on decadal to centennial time scales, e.g. the carbon cycle, atmospheric and oceanic chemistry, and stratospheric circulation. Second, limitations due to grid resolution lead to some processes, which we would expect to result from the physics represented by the model, being represented poorly, if at all. The models are known to have non-trivial shortcomings; examples include hurricanes, the diurnal cycle of tropical precipitation…, many characteristics of the El Niño Southern Oscillation (ENSO) and the Inter-Tropical Convergence Zone.

    Model inadequacy is a serious barrier to the interpretation of model results for decision support …”

  • luminous beauty // May 10, 2009 at 12:58 pm | Reply

    Dave A,

    Completing the last sentence of your quote:

    Model inadequacy is a serious barrier to the interpretation of model results for decision support, especially since, at present, there are no indications that the ‘missing’ processes would substantially ameliorate the climate change response, but an increasing number of plausible mechanisms which could make it worse than projected (Andreae et al. 2005; Walter et al. 2006)

  • michel // May 10, 2009 at 2:34 pm | Reply

    I started out being interested in what it will take in practice to make CO2 reductions of the sort said to be needed.

    This has led to the question: if those reductions were in fact made, what would the effects on temperature be?

    I’ve come across two recent accounts of this, both from sources that frequent commentators here will probably regard as tainted – these links are the first I know of them however, and the arguments stand or fall on their own merits.

    One quantification gives the number

    1,767,250

    And says that if you divide the number of gigatonnes of reduction per year by this number, it yields the lowering in degrees C that the reduction will deliver per year. This is from World Climate Report. It of course turns out that every reduction proposed so far delivers a tiny amount of temperature lowering.

    Another argument along similar lines, but using the MAGICC model, concludes that the US reducing its emissions by 83% would make very little difference to temperatures. What might make a difference is if Asia were to do it. This is on

    http://masterresource.org/?p=2367

    There was a reply on RealClimate about this, but it did not seem to dispute the numbers; it just argued the ethics of refusing to do things that do not work by themselves, but which may set an example and make a contribution, however small. Different issue; I’m initially worried about the numbers.

    But I am left wondering even more about the little island we live on. Will anything we could conceivably do make any difference?

    Let alone that we have a government apparently determined to lead the world in a uniquely British combination of the proclamation of ambitious goals coupled with actions almost equally ambitiously designed not to achieve them!

  • Gavin's Pussycat // May 10, 2009 at 4:04 pm | Reply

    Dave, from the conclusions of the article:

    ( http://rsta.royalsocietypublishing.org/content/365/1857/2145.full.pdf )

    There is much to be done but information from today’s climate models is already useful. The range of possibilities highlighted for future climate at all scales clearly demonstrates the urgency for climate change mitigation measures and provides non-discountable ranges which can be used by the impacts community (e.g. Stern 2006). Most organizations are very familiar with uncertainty of many different kinds and even qualitative guidance can have substantial value in the design of robust adaptation strategies which minimize vulnerability to both climate variability and change. Accurate communication of the information we have is critical to providing valuable guidance to society.

    …sounds to me like some folks consider them worth taking seriously…

  • dhogaza // May 10, 2009 at 4:15 pm | Reply

    I’ll stand on what I’ve said regarding DaveA’s inadequacy …

  • dhogaza // May 10, 2009 at 6:40 pm | Reply

    Watts has the PDF of his “Is the US Surface Temperature Record Reliable” report up on his site, now that they’ve “surveyed” 70% of the US stations.

    I’ve not read it, but obviously his conclusion is “no, it isn’t” because he comments:

    Congratulations are not in order yet, data analysis still has to be done to determine the magnitude of the siting effects on the temperature record. When that is complete, and that published, then will be the time for congratulations or denigrations.

    I’m sure he’s going to “prove” that the US Surface Station record actually shows cooling, not warming, in contradiction of the satellite record :)

    Anyone want to bet against me?

  • t_p_hamilton // May 10, 2009 at 8:10 pm | Reply

    DaveA reads Stainforth but does not understand it.

    Stainforth’s point about deficiencies of models is that they are not adequate to say which areas will receive greater rainfall, which less rainfall, etc, so that decision-making for adaptation specific to a locality is not yet possible.

    In other words the uncertainty means even the most horrific results from global climate models must be considered possible. There are no good ones, where warming magically disappears.

  • Lazar // May 10, 2009 at 10:31 pm | Reply

    When will anthropogenic precipitation changes be detectable?

    A long-running climate modelling prediction (Manabe, I don’t have the reference) is of GHG-induced warming causing a general drying of drier regions and moistening of wetter regions around the globe. A recent paper in GRL (Giorgi and Bi, 2009) identifies precipitation hotspots using a multi-model ensemble run under three IPCC forcing scenarios (B1, A1B, B2). Figure 1 identifies regions where there are changes in precipitation of at least +/- 20% relative to the 1981-2000 mean and which should, with around 84% probability, be detectable within the next hundred years, with many being detectable within the next eleven to fifty years. Comparing Figure 1 with global average precipitation reiterates the general modelling prediction of dry->drier and wet->wetter conditions. Particularly concerning are rainfall reductions in the water-stressed regions of southern Europe, northern and southern Africa, south Australia, and the southwestern US.

    Giorgi and Bi compare twenty-year running mean predictions of winter and summer precipitation to a model baseline of 1981-2000. Uncertainty estimates include systematic bias between models and interannual variability. (A toy illustration of this kind of time-of-emergence calculation appears at the end of this comment, after the references.)

    Changes under the A1B scenario…

    Before 2020; precipitation increases across northern Europe, northern Asia, and northern north America during winter (NEU-OM, NAS-OM, NAM-OM)

    Between 2020 and 2055; precipitation decreases across the Mediterranean during summer and winter (MED-AS, MED-OM), in southern Africa during winter (SAF-AS), and the Caribbean during summer (CAR-AS). A drying signal in south Australia, subject of much recent commentary at Open Mind, does not emerge above the noise level before 2100. A possible reason is unforced interannual variability due to effects of the Indian Ocean Dipole, which is suggested as the leading cause of drought epochs from 1880 to the present (Ummenhofer et al., 2009). Precipitation increases are predicted for China and India during the monsoon season (CHN-AS, IND-AS).

    continued and increasing GHG-forced global warming is expected to modify many features of the general circulation, which in turn would affect precipitation patterns across the globe. Examples of such features include changes in storm tracks, positioning of the Inter-Tropical Convergence Zone, characteristics of monsoon circulations, vertical atmospheric stability and atmospheric water vapor content.

    One notable uncertainty in the detection process is the accounting of the effects of anthropogenic black carbon aerosols on monsoon precipitation;

    intensification of these monsoon rain systems must be taken with care due to the lack or crude representation of black carbon effects in the models [Meehl et al., 2008]

    Black carbon aerosols (soot) are emitted from the combustion of fossil fuels, biomass, and biofuels. In large concentrations they form ABCs (atmospheric brown clouds — Ramanathan) which have a residence time of a couple of weeks and impact the atmospheric circulation in a complex manner. There is a good wiki article here;

    Black carbon is a potent climate forcing agent, estimated to be the second largest contributor to global warming after carbon dioxide

    [...]

    The largest sources of black carbon are Asia, Latin America, and Africa. China and India account for 25-35% of global black carbon emissions. Black carbon emissions from China doubled from 2000 to 2006.

    [...]

    Black carbon emissions “peak close to major source regions and give rise to regional hotspots of black carbon-induced atmospheric solar heating.” Such hotspots include “the Indo-Gangetic plains in South Asia; eastern China; most of Southeast Asia…”

    [...]

    Approximately 20% of black carbon is emitted from burning biofuels, 40% from fossil fuels, and 40% from open biomass burning, according to Ramanathan.

    There have been a number of studies (Ramanathan) since 2000 on the impacts of black carbon aerosols. One of the most recent is a modelling study by Krishnamurti et al., 2009. Black carbon aerosols impact the atmospheric circulation by several mechanisms. The aerosol first and second direct effects are, respectively, absorption in the lower troposphere of solar shortwave radiation, which leads to atmospheric heating, and a corresponding cooling at the surface due to the reduction of incoming solar shortwave. The aerosol first and second indirect effects result from the fact that aerosols serve as cloud condensation nuclei (CCN): higher CCN concentrations lead to higher concentrations of water droplets (first effect) and a corresponding decrease in average droplet size (second effect), which combine to affect the formation and lifetime of clouds and the amount of precipitation. Krishnamurti et al. study the effects of black carbon pollution emitted from the Bombay/Pune region on the northwest coast of India, which occur during the “winter” monsoon season (the post-monsoon season, the months DJF). In particular they studied events known as Bombay Plume (BP) events, where high aerosol concentrations combined with the prevailing circulation carry the ABC southwest from Bombay/Pune into the center of the Arabian Sea, where it coalesces. The BP has a residence time of about five days until the particulates are washed down with rain, with weather impacts extending over ten days. Krishnamurti et al. found that BP events increase precipitation over the Arabian Sea and, even more interestingly, reduce precipitation over the southeast coast of India and the Bay of Bengal by a teleconnection effect. Atmospheric heating from precipitation and shortwave absorption creates an ascending air mass (low pressure) region over the Arabian Sea. Divergence in the Arabian Sea upper troposphere creates convergence over the Bay of Bengal. This convergence creates a descending motion, a high pressure region and a corresponding low-level air flow from east to west (high to low).

    I think Krishnamurti et al. leave scope for further work comparing modelled precipitation effects with observations. While they show that average precipitation over the Bay of Bengal during a BP event, during the day following peak aerosol optical depth, is below the 30-year average for the same calendar date, BP events are only studied for the period 2000-2004, so it is necessary to know whether recent annual or seasonal mean precipitation is below the 30-year average, in which case some other cause may be present. Since BP events and their predicted weather consequences are tightly defined, it also should be possible to observe the effect on precipitation by creating averaged time series from BP events.

    Despite the serious implications and the sad context of the above research and more, I still think the climate is a beautiful system.

    References

    Time of emergence (TOE) of GHG-forced precipitation change hot-spots
    Giorgi, F. and Bi, X.
    Geophysical Research Letters, 36, L06709 (2009)
    doi:10.1029/2009GL037593

    Impact of Arabian Sea pollution on the Bay of Bengal winter monsoon rains
    Krishnamurti, T.N. et al.
    Journal of Geophysical Research, 114, D06213 (2009)
    doi:10.1029/2008JD010679

    What causes southeast Australia’s worst droughts?
    Ummenhofer, C. et al.
    Geophysical Research Letters, 36, L04706 (2009)
    doi:10.1029/2008GL036801
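
    As a toy illustration of the kind of “time of emergence” calculation described above (my own simplified construction, not Giorgi and Bi’s actual procedure or uncertainty treatment): generate a synthetic precipitation series with a forced trend plus interannual noise, take 20-year running means, and record the first window whose change from the 1981-2000 baseline exceeds a simple noise-based threshold. The trend, noise level, and 2-sigma threshold are all made up for illustration.

    ```python
    # Toy "time of emergence" calculation, loosely following the description
    # above (synthetic data and a simple 2-sigma threshold; NOT Giorgi & Bi's
    # actual method).
    import random
    random.seed(0)

    years = list(range(1981, 2101))
    trend_per_year = -0.15      # % change per year (a drying region) -- made up
    noise_sd = 3.0              # interannual variability in % -- made up

    # synthetic precipitation anomaly, in percent of the 1981-2000 mean
    anom = [trend_per_year * (y - 2000) + random.gauss(0.0, noise_sd) for y in years]

    def running_mean(x, n=20):
        return [sum(x[i:i + n]) / n for i in range(len(x) - n + 1)]

    rm = running_mean(anom)                  # 20-yr means; the first is 1981-2000
    baseline = rm[0]
    threshold = 2.0 * noise_sd / (20 ** 0.5) # ~2-sigma on a 20-yr mean (toy choice)

    emergence = next((years[i] + 19 for i, v in enumerate(rm)
                      if abs(v - baseline) > threshold), None)
    print("signal emerges by about", emergence)
    ```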

  • dhogaza // May 11, 2009 at 4:13 am | Reply

    Another argument along similar lines but using the MAGICC model concludes that for the US to reduce its emissions by 83% will make very little difference to temperatures.

    Right. Chip claims that since action by the US alone won’t be sufficient, the US should take no action.

    China and India claim that if the US doesn’t take action, why should they?

    You can go around the globe with this argument.

    It’s going to take worldwide action. This is not supportive of the fossil fuel industry argument that nothing should be done (masterresource is funded by the fossil fuel industry).

    Now, why do you think an argument that the US acting alone won’t be sufficient means the US should do nothing is convincing, given that (say) Europe’s ahead of us in taking action, and even China is talking about a carbon tax?

    It’s a strawman, and its release is timed to try to monkeywrench the current modest Waxman cap and trade legislation.

    It will probably succeed. Do you have any kids, Michel? I don’t. I’m getting closer and closer to just saying “f*** the future generations, they’re mostly begotten by denialists here in the US”

  • Gavin's Pussycat // May 11, 2009 at 4:27 am | Reply

    > But I am left wondering even more about the little island we live
    > on. Will anything we could conceivably do make any difference?
    With ‘little island’ you apparently mean Airstrip One, but it could just as well have meant the whole Western world. What remains unspoken also in Knappenberger’s study is that, come 2050, the US, and not even the whole of the OECD, will no longer be the superpower that it is today.
    Yes, by that time the torch will have passed to China and India. But due to the long delays in the system, it is our responsibility to start what must be done.
    It’s about leadership by example. What Knappenberger (intentionally?) mixes up is a necessary vs. a sufficient condition. The West, and especially also Britain, being responsible for a huge amount of very old emissions, must be seen to lead the way for anything to start happening.

  • Philippe Chantreau // May 11, 2009 at 5:30 am | Reply

    Dhogaza, I’m watching to see if Watts will release all data and code so that others can check the results, in true “skeptic” fashion :-)

    If he does not, then we can all go on a cut and paste spree drawing from his own comments on others’ works…

  • Gavin's Pussycat // May 11, 2009 at 6:12 am | Reply

    luminous, I missed that… you have a paranoid streak ;-)

    BTW here’s another one on the climateprediction.net work, also by Stainforth and friends:

    http://www.atm.ox.ac.uk/user/das/pubs/nature_first_results.pdf

    Look especially at Figure 2a. It seems easy to get higher sensitivities, but there’s a ‘floor’ of 2C which is much harder to beat…

    I would sign up for a guaranteed sensitivity of 3.4C in a heartbeat… “two degrees if we’re lucky, eight or more if we’re not”, now that’s scary.

    Well, fortunately there’s paleo ruling that one out. And the “probabilities” in the histogram are not real probabilities of course, being contingent upon the expert priors used. James Annan would have something to say about that.

  • michel // May 11, 2009 at 7:21 am | Reply

    You should read the Surface Stations piece before ridiculing it. Usually a good idea.

    [edit]

    [Response: Bullshit. Time and time again, Anthony Watts and his crew have been proved to be incompetent. Grossly incompetent. Listening to them is a waste of time; advertising them isn't gonna happen here.]

  • naught101 // May 11, 2009 at 11:10 am | Reply

    Michel: How many times would you try to pat a dog that keeps biting you?

  • Kevin McKinney // May 11, 2009 at 2:13 pm | Reply

    Dave A’s editing of the Stainforth piece rather reminds me of Inhofe’s–or rather, Morano’s–editing of Dr. Joanna Simpson’s letter: the bits about uncertainty get trumpeted, the parts which say, “We really need to take action,” mysteriously vanish.

    Then, of course, the cut-and-paste brigade keeps it recirculating forever. (Of course, for all I know, Dave could actually be part of the cut-and-paste brigade; I’m not about to go Googling to find out.)

  • dhogaza // May 11, 2009 at 2:24 pm | Reply

    It’s about leadership by example. What Knappenberger (intentionally?) mixes up is a necessary vs. a sufficient condition. The West, and especially also Britain, being responsible for a huge amount of very old emissions, must be seen to lead the way for anything to start happening.

    I’m sure it’s intentional on his part. He’s too smart to be ignorant (he is no Watts).

    His position also ignores the fact that you can’t reach your goal unless you take your first step. The fact that your first step doesn’t take you to the goal is irrelevant.

    The current legislation is a start, not the end point, to taking effective action.

  • dhogaza // May 11, 2009 at 2:27 pm | Reply

    Listening to them is a waste of time; advertising them isn’t gonna happen here.

    Unfortunately, Tamino, I believe that his report is going to be seized upon by the inactivists in Congress. He’s had the data for months. The release of the report in the midst of the hearings on Waxman is not likely to be due to coincidence.

    So someone, somewhere, with a suitably high profile is going to have to spend the time to do a takedown. Just another example of time being wasted due to obstructionism on the part of denialists like Watts.

  • michel // May 11, 2009 at 4:56 pm | Reply

    “Right. Chip claims that since action by the US alone won’t be sufficient, the US should take no action.”

    Well, no. What he is saying is that action by the US alone will make almost no difference. I would be interested in people’s comments about whether his numbers are right or wrong.

    It is quite different to say that it will make almost no difference from saying that it will not be sufficient in itself. If the US reducing its emissions were to take us, let’s say, 30-50% of the way, the point would be valid. It’s not enough, but it’s a start; it’s part of the solution.

    If the US can only take us 5% of the way, there is a very serious problem, and there are real questions about the cost/benefit ratios of making the kinds of draconian reductions which will be necessary to even deliver so little.

    But, question: are the numbers right? I’m less worried at the moment about the rights and wrongs and implications; what I want to know is whether the numbers on how many tonnes of emissions have to be taken out to have any considerable effect are right.

  • michel // May 11, 2009 at 5:03 pm | Reply

    Tamino, the problem with Watts is not advertising the site. No-one is doing that, and indeed it does not need it, it has soaring readership.

    The issue is the arguments. You cannot refuse to discuss the arguments any more. They are going to get traction. They are persuasive – in their own terms. You can of course ban all comments on the controversy, but it just means it will go on without you. You’ll find yourself the loser by keeping on doing that.

    Dhogaza is right. “someone, somewhere, with a suitably high profile is going to have to spend the time to do a takedown”. Or with no kind of profile. If not, Watts will win the argument by default.

    [Response: I've discussed the "arguments," dissected the carrion, and shown with zero doubt just how "stupid is as stupid does" describes WUWT, so many times I'm getting sick of it. The refutation of their nonsense and demonstration of their gross incompetence is here and elsewhere for all to see. The only folks who stand by Watts are those who are blind because they will not see. As for banning comments on "the controversy" -- there is no controversy. Your saying so, your believing so, doesn't make it so.

    I'd prefer to spend my time on those who at least have their eyes open.]

  • dhogaza // May 11, 2009 at 6:41 pm | Reply

    Well, no. What he is saying is that action by the US alone will make almost no difference.

    And then goes on to suggest that Waxman shouldn’t be made law. In other words, do nothing.

  • dhogaza // May 11, 2009 at 6:43 pm | Reply

    They are persuasive – in their own terms.

    Their own terms – innumerate scientific illiteracy.

    That’s Anthony and 99% of those who support him in a nutshell.

    Unfortunately, yes, he’ll get traction among that demographic.

    But that demographic shows no sign of being capable of learning.

  • Zeke Hausfather // May 11, 2009 at 6:53 pm | Reply

    Just update http://www.opentemp.org/main/ using the newest surfacestation.org results, and I bet you’d find that Class 1, 2, and 3 stations pretty closely track GISS.

    The real question is: if the data turns out to be “inconvenient”, will Watts still publish and promote it?

  • Gavin's Pussycat // May 11, 2009 at 7:45 pm | Reply

    michel, yeah the numbers look about right. You can check for yourself (and I would if the matter bugged me and I wanted to learn, and the numbers came from someone like Knappenberger; the MAGICC software is freely available, all you need is a Windows license, grrr, to run it on).

    What surprises me is that it surprises you. It has always been known that this is how a ‘commons’ works: it can only be preserved if everyone is on board. The US alone doesn’t cut it without China or India; but then, China alone cannot do it either, or India alone, or Europe alone. We’re in the same boat. We’ll hang together or we’ll hang separately.

    Every individual nation has the power for bad, but only together — all of them, or even most — the power for good. It is a kind of challenge we haven’t seen before. The good news is that also in China and India there is an awareness that this problem affects them too and something must be done — the growing middle classes in these countries finally have something to lose. But without leadership by example, moral leadership, from the nations that created the present state of the problem, failure is guaranteed.

    If you find the numbers large: yes they are. They weren’t always this large. One other thing that has ‘always’ been known is that fossil fuel based development couldn’t last, that it could at best only be a transitional phase on the way to a more sustainable energy economy. Had we taken this seriously one emissions doubling time ago, it would have been relatively easy to get started building the institutions needed and gain experience. Now, we suddenly find ourselves in a hurry.

    But I suppose that’s human beings for you. Don’t believe doggie bites until bitten.

    Is it hopeless? No. Challenging, yes. As for the costs, IPCC WG3 (which studies these things; you and I are not the first to think about them you know…) give guesstimates of a few percent of global GDP even for fairly aggressive scenarios. IOW, the kind of money nations spend on military defense, what they manage to cough up in the face of a perceived existential threat — which this is too. A lot of dollars and cents, but nothing to drive us bankrupt. There is a fair amount of uncertainty in this — economics is a less exact science than climatology ;-) — but we better get started, and things will become more precise along the way.

  • Dave A // May 11, 2009 at 10:06 pm | Reply

    dhogaza, t-p H, & GP,

    If you take the whole of the Stainforth et al paper into consideration, the model inadequacies are far more prominent than the refs to usefulness for decision taking.

    None of you have addressed the deficiencies that the paper identifies (and t-p they are not just about local rainfall)

  • t_p_hamilton // May 11, 2009 at 11:46 pm | Reply

    DaveA said:”

    dhogaza, t-p H, & GP,

    If you take the whole of the Stainforth et al paper into consideration, the model inadequacies are far more prominent than the refs to usefulness for decision taking.

    None of you have addressed the deficiencies that the paper identifies (and t-p they are not just about local rainfall)”

    Apparently your ignorance extends to what etc. means.

    Models are good enough to tell us all we need to know about the global problem, but their deficiencies do not enable us to predict how we will need to adapt to local conditions. These are two separate things, neither one of which is prominent.

  • Lazar // May 11, 2009 at 11:52 pm | Reply

    Michel,

    You cannot refuse to discuss the arguments any more. They are going to get traction.

    I refuse to discuss arguments of Bozo the Clown. But I would recommend people read the report, because it’s so jaw-droppingly bad it’s funny. Ridiculous aspersions, conclusions not demonstrated… ooh, look, a photo of a cow, therefore the global temperature record is wrong, why, because we say so… such and such adjustment is wrong, because it just is… Remember when they censored John V., refused to link his work? They said it was ‘premature’? So after years, they finally report… nothing that compares CRN1 or CRN12 with GISTEMP. Michel, if you take this report as a serious, valid, scientific analysis, I’m seriously going to killfile you.

  • Lazar // May 12, 2009 at 12:03 am | Reply

    Tamino,

    I’ve discussed the “arguments,” dissected the carrion, and shown with zero doubt just how “stupid is as stupid does” describes WUWT

    Yep, ridicule, but do not respond. It’s irrelevant timewasting PR junk which will have no policy impact… none.

  • Ray Ladbury // May 12, 2009 at 12:37 am | Reply

    Dave A. says “If you take the whole of the Stainforth et al paper into consideration,…”

    Damn, Dude, you can’t even accurately quote a whole sentence from the paper without distorting it.

  • michel // May 12, 2009 at 8:33 am | Reply

    Lazar, who you killfile is entirely up to you.

    As to Watts and the report. It’s obvious that some material posted on WUWT has been simply silly. It’s clear also that what you can conclude from what has been published so far of the surface stations project, even if you accept the factual parts (the classification of the stations) as correct, is limited.

    What I simply do not understand however is the venom directed at them. What else were they to do? They just went and looked at stations to see if they complied with a published standard. What is so terrible about doing that? No-one else was going to do it.

    I also simply don’t understand why, if they are right, and if 70% or so of the stations are in the high error bands, this is something that can have no bearing at all on the historical record they have been used to measure.

    I do find one considerable oddity in the WUWT position on this general topic, and that is that on some occasions they seem to rely on the reported data, when reporting that, for example, temperatures are not rising, but on the other hand on other occasions suggest that the same record hasn’t a resolution sufficiently fine to measure swings of the same magnitude in the other direction. Having your cake and eating it.

    But anyway, it’s not my purpose to defend WUWT and all its works. I’m usually a specific-issue person, and the specific issue here that interests me is the number and proportion of stations with high error ratings. Sorry if that bothers you.

  • Igor Samoylenko // May 12, 2009 at 12:37 pm | Reply

    Dave A said:

    If you take the whole of the Stainforth et al paper into consideration, the model inadequacies are far more prominent than the refs to usefulness for decision taking.

    Model inadequacies? More prominent? Hmm…

    If you actually read the paper carefully, you’d see that “model inadequacy” is very carefully and clearly defined there:

    “…models of natural systems are inadequate in the sense that they cannot be isomorphic to the real system.”

    This “model inadequacy” relates to a more general problem with verification and validation of any numerical models of natural systems – see for example Oreskes et al (1994) (the first of the references provided by Stainforth et al in their paper). It is not unique to climate modelling.

    As Stainforth et al note in the paper, model inadequacy is just another, albeit “… less familiar form of uncertainty”, in addition to uncertainties relating to forcings, initial conditions and model imperfections.

    Model results are always checked for robustness: Do they appear in multiple models? Do they have theoretical and observational support? Does the magnitude of the effects in the models/theory match observations? To suggest, as you seem to, that the mere existence of uncertainties in climate models makes them useless or unfit for decision support is just ludicrous.

    You just seem to scan papers for words that confirm your personal biases, so that you can go around posting: “Stainforth et al said climate models are inadequate!”, without even trying to understand what that actually means beyond the colloquial and naive use of the words. And that does not help your credibility much, to put it mildly…

  • dhogaza // May 12, 2009 at 2:51 pm | Reply

    What I simply do not understand however is the venom directed at them.

    I do find one considerable oddity in the WUWT position on this general topic, and that is that on some occasions they seem to rely on the reported data, when reporting that, for example, temperatures are not rising, but on the other hand on other occasions suggest that the same record hasn’t a resolution sufficiently fine to measure swings of the same magnitude in the other direction. Having your cake and eating it.

    You have posted enough information to understand the venom …

    They – Anthony Watts in particular – have no interest in objective science. Watts has an agenda to promote and is willing to throw science under the bus to achieve it.

    And he’s a pompous idiot, to boot.

    The venom’s not hard to understand.

  • luminous beauty // May 12, 2009 at 3:37 pm | Reply

    I think what Dave A means by venom is what most people mean by sarcasm.

    It must hurt to be so stupid.

  • luminous beauty // May 12, 2009 at 3:39 pm | Reply

    Oops! I mean michel. But, hey! What’s the difference?

  • Kevin McKinney // May 12, 2009 at 4:06 pm | Reply

    Further to dhogaza’s post, for me the cause for venom is that, at least as I see it, Watts and his ilk threaten the continued existence of the culture which it is my life’s work (as a creative artist and as an educator) to perpetuate and advance, and that the modality of this threat involves extensive and systematic lying, misdirection, deceit, and manipulation of both data and emotions, such that it is very difficult for me to believe that Watts possesses any shred of personal integrity.

    You may not share those perceptions of Watts, and that’s fine; reasonable people (and I trust I still count, regardless of the vehemence of what I just wrote) can differ. But given those perceptions, I think some venom is eminently reasonable.

  • michel // May 12, 2009 at 6:52 pm | Reply

    Venom – I mean expressions of extreme anger of the sort we have just been reading – is never productive. What is productive is calm and simple and clear refutation of mistaken ideas. Attack the idea not the man, attack with reason not rhetoric or emotion. It works better. It’s better for you.

    Kevin, I do not know Mr Watts. To me he is just a guy with a point of view. Sometimes right, sometimes wrong. I am not sure exactly how he ‘threatens the continued existence of the culture…’. I am not at all sure that offering arguments, mistaken or not, in public forums is the sort of thing that can do that. Am I, for example, threatening the continued existence of the culture? You vastly overemphasize the power of the expression of what you believe to be mistaken ideas. In a democracy where there is vigorous open debate, this is not generally a very threatening thing to do.

    Generally the more public and the more explicit, the quicker mistaken ideas are refuted.

    I also don’t know on what grounds you conclude that Watts doesn’t possess any shred of personal integrity. That is a rather strong remark. All he has done, as far as I am aware, is take a different point of view on AGW. I know of no evidence that he is lacking in integrity in his business dealings, as a family man, as a citizen, a friend.

    Learn to tolerate disagreement. You will be happier for it.

  • dhogaza // May 12, 2009 at 9:04 pm | Reply

    What is productive is calm and simple and clear refutation of mistaken ideas. Attack the idea not the man, attack with reason not rhetoric or emotion. It works better. It’s better for you.

    Thank you, your worship, for your insightful comment expressing ideas that have never occurred to anyone who’s dealt with Anthony Watts.

    Now, imagine many people over some years patiently following your advice, patiently trying to teach Watts some of the basics of the scientific way of life, and statistics, and afterwards finding that Watts then repeats the same lying crap again … and again … and again … and again.

    Well, you don’t have to imagine. You can find it in the historical record on this site, for instance, and others.

    Pontificate all you want, but the fact remains, that if someone’s comfortable being a lying sack of shit in order to promote their political ideology, suggestions that people “be polite” etc ain’t going to change squat.

    I also don’t know on what grounds you conclude that Watts doesn’t possess any shred of personal integrity. That is a rather strong remark.

    History.

    All he has done, as far as I am aware, is take a different point of view on AGW.

    “Climate scientists are guilty of fraud and AGW is a hoax”. You are correct, that’s a different point of view on AGW than a person with integrity who studies the science would reach.

  • dhogaza // May 12, 2009 at 9:08 pm | Reply

    I know of no evidence that he is lacking in integrity in his business dealings, as a family man, as a citizen, a friend.

    I’m going Godwin on you …

    Hitler was a vegetarian who thought that killing animals for food was gross, loved nature, his dog Blondi, and children.

    Yet he didn’t have a speck of personal integrity in dealing with his enemies, domestic and foreign.

    One can have personal integrity in one area of life while showing an absolute lack of integrity in others. I don’t know Anthony so can’t refute your claim that he acts with integrity in his business, with his family, etc.

    However when it comes to climate science he’s a lying sack of it.

  • Lazar // May 12, 2009 at 9:38 pm | Reply

    Michel,

    As to Watts and the report. It’s obvious that some material posted on WUWT has been simply silly.

    … what about the report?… are the way conclusions are drawn, and therefore the conclusions themselves, justified or not?

    It’s clear also that what you can conclude [...] is limited

    and what do you conclude?

    What I simply do not understand however is the venom directed at them.

    The report is PR junk… that is not venom.

    What else were they to do? They just went and looked at stations

    You have read the report, you know that is not “just” what they did…

    Attack the idea not the man

    Any fool can serve up nonsense indefinitely, and only a fool would waste their time explaining why each new serving is nonsense.

  • Ian Forrester // May 12, 2009 at 9:54 pm | Reply

    Michel, it has got nothing to do with “opinion” but everything to do with honesty. Honesty is the number one character parameter required by a scientist. It is ridiculous to imagine that scientists can have “opinions” such as “some scientists say that apples fall to the ground, but my opinion is that they rise from the ground; how else could they have got there?”

    Honesty is completely lacking in Watts and those lemmings who infest his blog.

    It is no wonder that scientists react with “venom” when their life’s work is treated in such a dishonest and disrespectful manner by the deniers.

    You are either honest or tell lies, opinion is not involved in AGW.

    So, Michel, which camp do you fall into: the honesty camp, or do you consider yourself to be dishonest? It is easy for us to see, but it would be nice if you at least admitted to yourself how you behave.

  • Dave A // May 12, 2009 at 10:31 pm | Reply

    Ian Forrester,

    “Honesty is the number one character parameter required by a scientist. “

    Quite right, so why does the following pertain?

    Phil Jones will not release data related to the Hadcru temperature index so that it can be independently verified. Even the Hadley Centre, which has been given the data, has only received it under condition they cannot pass it on to third parties.

    Why do you think this is? Does it seem to you that Jones is behaving in an open and honest manner? If his data is robust why is he apparently afraid of making it available? Remember his data is the main series used by the IPCC.

  • Lazar // May 12, 2009 at 10:41 pm | Reply

    Michel,

    I also simply don’t understand why [...] this is something that can have no bearing

    Of course ‘it’ “can” have a bearing (an effect). “Can” means speculation: taking photos, forming categories, and speculating about how those variables “can” bias regional average temperature trends. Conclusions require demonstrating effects. John V. did just that, which to Watts was ‘premature’; but when the ‘right’ time came, Watts skipped the demonstration bit and jumped to conclusions based on speculation, whilst one conclusion was not based on speculation at all, just pulled out of thin air. You understand, Michel, that ninety-nine times out of one hundred this report would not get past peer review, and that it is not science, it’s PR.

  • Dave A // May 12, 2009 at 10:52 pm | Reply

    Igor,

    ““…models of natural systems are inadequate in the sense that they cannot be isomorphic to the real system.”

    They continue

    “Nevertheless, such models might conceivably include or simulate to some degree of subjective adequacy, all the processes believed to be important in the system under study given the aims of the study.”

    Hardly reassuring, is it? And this leads directly on to the section I originally quoted, which ends with “Climate models fail this test”.

  • Ian Forrester // May 12, 2009 at 11:18 pm | Reply

    Dave A, withholding something is not being dishonest. If you asked me for money and I said “NO”, would you consider that dishonest of me?

    You are pathetic.

    Why would any responsible scientist give out their information to any Tom, Dick or Harry (sorry, that should be any Steve, Anthony or Dave), knowing full well that they will distort it and use it to discredit the scientist?

  • dhogaza // May 12, 2009 at 11:23 pm | Reply

    You understand, Michel, that ninety-nine times out of one hundred this report would not get past peer review, and that it is not science, it’s PR.

    I think your 1% chance of acceptance suggestion is an extreme overstatement …

  • Zeke Hausfather // May 13, 2009 at 3:23 am | Reply

    Now now Lazar, I expect an excellent paper by Watts et al to appear in the next issue of Energy and Environment. :-p

  • Philippe Chantreau // May 13, 2009 at 6:23 am | Reply

    There is ample evidence that Watts will not hesitate to misrepresent an article if that can serve his purpose. Recent examples are the GCR/Ozone post and the stratospheric temps/particle detection post. In both cases, Watts completely misrepresents what the article says in order to attack an aspect of science that he dislikes. The misrepresentation is so gross that it seems he hasn’t even read or understood so much as the abstract. When asked directly why he chose the title of the GCR/ozone post the way he did, he evaded the question.

    How can that be interpreted? Why should I or anyone else bother with Watts at all?

  • michel // May 13, 2009 at 7:08 am | Reply

    Lazar, you ask “and what do you conclude?”

    I did post what I concluded, but our host clipped it for what seemed like good reasons to him. It would be impolite to repeat it.

    Ian F, I fall into the camp of those who think that it is possible to be honest, sincere and informed and still not be convinced of the truth of all of the elements of the AGW hypothesis. I do understand that people think Watts wrong about many subjects, but I do not see why they think he cannot be simply mistaken, as opposed to dishonest. On both sides of this controversy we find people indulging in pointless rhetoric, and comparing Watts to Hitler is an example of this. Referring to people who differ from one as “a lying sack of shit” is another example.

    My point is not that this is impolite, but that it is ineffective as a debating tactic in a culture in which open debate happens.

    One of the most dangerous notions around nowadays is the equation of the expression of thoughts with actions. That is, people like Watts are ‘deniers’, which is to say, they are not sincere or informed in their views; they more or less know they are false, but assert them anyway – for political reasons, Dhogaza seems to think. Thus, when they express these views, they are not really expressing views, they are actively doing damage to… the world, the culture, the environment. So they can be vilified for this, and some have even suggested that they are guilty of crimes for doing it. This point of view is dangerous and in the end anti-democratic.

    I also fall into the camp, rather a small one, of those who think that action to curb emissions, particularly from internal combustion engines and industrialized farming, would be a dramatic improvement to our quality of life, regardless of whether they affect the temperature.

    Finally, I am not a believer in symbolic action but in effective action. If it is true, as Knappenberger says, that

    “[impact of] 83% reduction [of US emissions] by 2050 was close to nil. Or more precisely, about 0.05°C (0.09°F) by the year 2050, expanding to maybe 0.1°C–0.2°C by the end of the century”.

    Then I have serious doubts about the wisdom of doing it in order to reduce global temperatures, because it doesn’t seem to actually reduce them. If we are trying to reduce temperatures, probably our efforts should be directed at things which do actually reduce them.

    Yes, as part of a global program perhaps, one would accept the argument that it has to be done to get others to do their much larger part. But as a standalone venture? It looks like a tough one.

    Nor do I think this opinion requires any explanation in terms of dishonesty…etc etc. I have arrived at it as a rational conclusion from the facts as presented.

  • Gavin's Pussycat // May 13, 2009 at 11:57 am | Reply

    Yes, as part of a global program perhaps, one would accept the argument that it has to be done to get others to do their much larger part. But as a standalone venture? It looks like a tough one.

    No, it has to be done because it is the right thing to do — for everyone. You have a surprisingly instrumental view of this. Think of it as like the Geneva Conventions, the Nuclear Test Ban, or the Montreal Treaty. Not something you choose to be a part of, but that you are a part of by virtue of being civilized.

    It’s only Chip Knappenberger who proposes this as a standalone venture — a strawman. And you go for it hook, line and sinker. Not very smart, michel.

    You go voting, don’t you? Why? Your vote won’t make any difference. One in forty million voters…

  • jr // May 13, 2009 at 12:33 pm | Reply

    To me it looks like the WUWT report is missing the part where they justify their “policy implications and recommendations” section.

    I certainly don’t see any justification for the strength of the language they use in the rest of the report.

  • Ray Ladbury // May 13, 2009 at 12:45 pm | Reply

    Michel, there are two possible hypotheses that could explain Watts’ position: 1) he’s an idiot; 2) he’s a liar. They are not mutually exclusive. However, he has often gone out of his way to mischaracterize papers and positions of scientists to justify his preconceived notions.

    As to Knappenberger’s analysis, here is a man who has spent the last decade trying to block any sort of international action and now tells us that unilateral action will be inadequate. In doing so, he has gone beyond the usual example of chutzpah – that is, killing one’s parents and begging for mercy because you are an orphan. Not satisfied with this, Knappenberger has killed his parents, asked for mercy due to his orphan status, and is now trying to collect on his parents’ life insurance policies!

    On Realclimate, I said that we need to make sure we are playing the right game – it is not zero-sum. The game we are playing right now is Chicken. Chicken is a “less than or equal to zero-sum” game, since when the competitors collide, both lose. The question is whether we try to save civilization or abandon it now.

  • luminous beauty // May 13, 2009 at 12:52 pm | Reply

    Dave A,

    The first party is the PRC. They’re very paranoid about treaty agreements. McIntyre should take his request to the Chinese consulate. I don’t believe they have any FOIA.

    michel,

    An honest man admits his mistakes.

    Watts doesn’t own up to his mistakes and neither do you.

    What should one conclude?

    The US is not in a position of going it alone. The US is in the position of being the last industrialized country to put a carbon reduction policy in place.

  • bluegrue // May 13, 2009 at 1:36 pm | Reply

    Dave A
    remember the tantrum Watts and his followers threw over the corrupt data at NSIDC, which was due to instrument failure, and how they could not post a correction fast enough? Or the “update” frenzy over the NOAA SWPC Solar Cycle graphics, which is clearly a server issue?

    Compare this to his silly Roo in the Snow entry. The caption claims the photo is a few days old, i.e. from this season’s Australian summer/autumn. Snow, now? Funny. I tried my luck with Google, and on the first page for “Australia snow” you see this image, related to a July 2007 article. I posted this info 12 hours ago; in the meantime the photographer has confirmed that the image was taken in June 2007, i.e. Australian winter(!), in the mountains near Canberra, 1350 meters above sea level. All replies on WUWT are moderated; how can they not be aware of this?

    Yet Watts’ caption as of this moment reads:

    sends this photo along taken a few days ago in Australia from a colleague that “returned there for the summer”.

    On another topic on WUWT: Spencer’s claim

    But has been discussed elsewhere, a change in ocean biological activity (or vegetation on land) has a similar signature…so the C13 change is not a unique signature of fossil fuel source.

    a claim which relies on regressing one trend line against another, and which has been pointed out to Watts, goes unchallenged.
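
    The statistical point is easy to demonstrate: two series that share nothing but a trend will correlate strongly, so regressing one trend line against another tells you next to nothing about attribution. A minimal sketch with purely synthetic data (this is not the actual C13 or emissions data, just two independent made-up series):

```python
# Two independent series that share nothing but a linear trend still show
# a high raw correlation; detrend them and the relationship collapses.
import numpy as np

rng = np.random.default_rng(0)
n = 50                       # e.g. 50 "years"
t = np.arange(n)

x = 0.5 * t + rng.normal(0, 2, n)    # trend up, independent noise
y = -1.2 * t + rng.normal(0, 3, n)   # trend down, independent noise

r_raw = np.corrcoef(x, y)[0, 1]

# Remove each series' own linear trend and correlate the residuals.
rx = x - np.polyval(np.polyfit(t, x, 1), t)
ry = y - np.polyval(np.polyfit(t, y, 1), t)
r_detrended = np.corrcoef(rx, ry)[0, 1]

print(f"raw correlation:       {r_raw:+.2f}")        # large in magnitude
print(f"detrended correlation: {r_detrended:+.2f}")  # near zero
```

    Detrend both series and the apparent relationship disappears, which is exactly the objection to drawing conclusions from one trend line regressed against another.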

  • Barton Paul Levenson // May 13, 2009 at 2:55 pm | Reply

    michel writes:

    I do understand that people think Watts wrong about many subjects, but I do not see why they think he cannot be simply mistaken, as opposed to dishonest.

    He says wrong stuff. People point out that it’s wrong and patiently explain why it’s wrong. He says the same stuff again. People point out that it’s wrong and patiently explain why it’s wrong. He says the same stuff again…

    Now do you get the picture?

  • dhogaza // May 13, 2009 at 3:53 pm | Reply

    Just in case Michel thinks bluegrue is blowing smoke, here is a June 26th, 2007 blog post with the roo-in-the-snow-in-winter photograph Anthony claims was taken “a few days ago”.

    Now, Michel, do you still not understand why we think Watts is a lying sack of it?

  • michel // May 13, 2009 at 4:00 pm | Reply

    I do not know anything about Mr Knappenberger, and it’s only the facts he cites which are of interest. They are apparently not in dispute, which is excellent in terms of clarity.

    We need to ask, is it the right thing to do, and if so why, for a country to reduce its emissions by 83% when this will reduce global temperatures by around 0.1C in the year 2100?

    Ethics is not an emotional discipline. It is subject to rational analysis. If we are saying that the reason it is right is that if the US does it, others will follow, and if it does not, they will not, then this is a valid argument. The only question is, what is the evidence that if the US does it, the East will follow? Because this matter of fact is now what the issue turns on.

    If they will not follow, I cannot see that it is right at all. That is, it is not right if the objective is to lower temperatures.

    It may well be right if its objective is to change lifestyles to more agreeable ones. Ones which are ethically better, because they will perhaps foster simplicity and attention to humanity over what seems to many of us the wasteland of frustration and emptiness that our current devotion to consumer goods produces.

    Why should we vote? An old fashioned answer would be, because it is our duty. We have duties, and they are obligations, though in particular cases considerations of consequences may weigh against performing them. We have duties of fidelity, respect for parents, consideration for others….and so on. We do not have to look after our children because it makes us happy, or because we like them. Whether that is true is irrelevant. We have to because it is our duty.

    But to argue that reducing CO2 emissions by 83%, regardless of the consequences, is a duty, is very odd. It is like arguing that it is my duty to start walking east on 14th St because it’s important for me to get to Europe. If walking east on 14th St is not a way of getting to a plane or boat, or to Europe, it’s hard to see why it’s so important to do it, or how it could possibly be a duty.

    Lowering emissions seems to be something whose merits should be assessed in terms of consequences, and what Knappenberger seems to be showing is that material consequences are a lot harder to deliver than one had thought, and almost impossible for the US to deliver. Possible however for China and India together to deliver.

    [Response: I'm really sick of the "let's not do the right thing because others won't" argument. It's morally bankrupt.]

  • Chris S. // May 13, 2009 at 4:09 pm | Reply

    Bluegrue:

    Nice detective work. Unfortunately my caption “No surprise, Watts caught lying again” got snipped for insulting the host. I also tried to post a link to the Perisher ski resort in the Snowy Mountains of Australia, but I’m sure that won’t be allowed either. After all, we can’t let reality get in the way of a good meme (there’s no snow in Oz…) can we?

    • Igor Samoylenko // May 13, 2009 at 4:58 pm | Reply

      Chris S says: “After all, we can’t let reality get in the way of a good meme (there’s no snow in Oz…) can we?”
    Especially when there are >180 comments from his hoi polloi already.

      BTW, he did publish your posts. There is also a reply from Watts himself: “Chris S, I think perhaps you are reading way to much into it. This is a photo caption contest, for fun. Note the tag “fun stuff”.”

      This is despite still claiming in his OP, that “WUWT reader David Summers sends this photo along taken a few days ago in Australia from a colleague that “returned there for the summer”.”

      What would you call that michel? An “honest” mistake?

  • dhogaza // May 13, 2009 at 4:51 pm | Reply

    We do not have to look after our children because it makes us happy, or because we like them. Whether that is true is irrelevant. We have to because it is our duty.

    And, yet, you continuously argue against performing an important part of this duty, doing our best to make sure the world we leave them is as habitable as the world we were raised in.

    Tch-tch. Just sayin’.

  • Ray Ladbury // May 13, 2009 at 5:46 pm | Reply

    Game theory: The proper model here is repeated rounds of the Prisoners’ Dilemma. The winning strategy is to 1) do the right thing on the first trial and 2) reward those who also do the right thing and punish the hell out of those who do not. This has been shown to lead to behavior approximating the Golden Rule based entirely on self-interest.

    Instead, Michel is proposing a game of Global Chicken–as I said a “Less-Than-or-Equal-To-Zero” sum game. And in the case of climate change, we can safely eliminate the equal to outcome.

    In the former case we have a chance of winning. Prisoners’ Dilemma, anywon?
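
    For anyone who wants to see the game-theory point in action, here is a toy iterated Prisoners' Dilemma in the spirit of Axelrod's tournaments. The payoff numbers are the conventional textbook values, chosen purely for illustration:

```python
# Toy iterated Prisoners' Dilemma: "cooperate first, then reciprocate"
# does far better against itself than universal defection does.
PAYOFF = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(my_hist, their_hist):
    """Cooperate first, then copy the opponent's last move."""
    return "C" if not their_hist else their_hist[-1]

def always_defect(my_hist, their_hist):
    return "D"

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print("TFT vs TFT:          ", play(tit_for_tat, tit_for_tat))      # (300, 300)
print("TFT vs AlwaysDefect: ", play(tit_for_tat, always_defect))    # (99, 104)
print("Defect vs Defect:    ", play(always_defect, always_defect))  # (100, 100)
```

    Mutual reciprocity scores far better than mutual defection, and a defector gains almost nothing by exploiting a reciprocator; that "reward cooperation, punish defection" dynamic is what produces the Golden-Rule-like behavior from nothing but self-interest.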

  • michel // May 13, 2009 at 6:00 pm | Reply

    “Response: I’m really sick of the “let’s not do the right thing because others won’t” argument. It’s morally bankrupt.”

    That is not the argument. The argument is that it cannot be the right thing if it does not have the effects which would make it right.

    [edit]

    [Response: I'm sure Judas told himself the same thing before betraying Jesus.]

  • Gavin's Pussycat // May 13, 2009 at 6:26 pm | Reply

    michel, preventing the destruction of our planetary home is our duty, individually and jointly. Nobody is entitled to a free ride.
    Nature doesn’t ask how we get our act together; she is a vengeful goddess who gives short shrift to lame excuses.

  • Hank Roberts // May 13, 2009 at 7:26 pm | Reply

    Michel’s moving the goalposts again, to avoid noticing that cooperation in protecting a future isn’t a socialist conspiracy.

    > a country to reduce its emissions by 83%
    > will reduce global temperatures …
    > 0.1C in the year 2100?

    Not “a” country; countries
    Over-precision in details
    Short view ignoring extent of concern

    Just because splashing and bailing only reduces the rate at which your bathtub is filling up is no reason to give up and drown in it, Michel.

    Someone may find the valve and turn off the flow that’s threatening to drown you in the tub, given a bit more time to work at it.

  • bluegrue // May 13, 2009 at 9:05 pm | Reply

    As of now the caption reads

    WUWT reader David Summers sends this photo along taken a few days ago in 2007 in Australia from a colleague that “returned there for the summer”. I thought it might make a fun photo caption exercise.

    Note: This photo as represented to me in email, was supposedly recent.
    Thanks to alert WUWT reader “snow captain of queanbeyan”, obviously now that is not the case. So much for trusting friendly emails from people. The photo was originally taken in 2007 and you can see the details here.
    Still, as originally intended, feel free to make a fun photo caption.

    Watts has changed the text twice. Note how he still avoids the word WINTER.

    Instead, I get this reply in the comments section

    bluegrue (11:36:16) :
    Anthony,
    the caption still implies that the photo was taken in summertime (“returned there for the summer”). As it was taken in June, that’s Australian winter, high in the mountains at an elevation of about 1350m / 4500 ft above sea level as Allan (23:48:08) reported.
    REPLY: I know this, I’m not unaware. He was in the USA, when that “is” summer. I’m just passing on what was in the email, which was pun intended. This is why it is in quotes. Gosh who knew so many people could make such a fuss over a funny photo. What next, threats? ;-) – Anthony

    I guess Anthony believes in telepathy, so he does not have to give out all relevant data to his readers right away.

  • Philippe Chantreau // May 13, 2009 at 9:08 pm | Reply

    Michel says: “The only question is, what is the evidence that if the US does it, the East will follow? ”

    Obviously, it is not the only question. Next is: Is there a chance at all that the East will take any action if the West does not? Followed by: Will any action be taken if the West does not initiate it?

    So, by pulling on this string of yours, Michel, we get the following possible sequences:

    - Because it would not make any significant difference, the West does nothing; as a result, the East does nothing and BAU proceeds to the full extent of the experiment. It’s very understandable that, if you subscribe to that, you’d try to find all possible comfort from sources saying that it’s not going to be all that bad.

    - Because it’s the only way anything will be done, the West takes action. The East does not follow and the benefits are virtually non-existent despite significant costs to the West. Is this really possible in the long run? How viable are the Eastern economies outside of a true collaboration with the Western ones? If disposable income decreases in the West, what will happen to the Eastern economies relying only on that to pull them up?

    - Because that’s the only way anything will be done, the West takes action. The East follows and the world economy undergoes a transformation toward sustainability. Even if the chance of getting that last scenario is remote, that’s the one I’d prefer to try.

    As for myself, I don’t mind so much being part of the morally correct choice, even if success proves elusive. I am currently involved in an individual situation just like that: very costly action, very limited chance of success, but no other rational AND moral choice possible.

    It seems that you prefer the first sequence above. Not only that but you seem bent on having everyone else share into it.

    Being in the US, I routinely see people at the grocery store so obese they’re unable to walk. So they use little electric scooters to go on piling up boxes of donuts (sweetened with high-fructose corn syrup). Your message is kinda like saying this: we can’t get these people back on their feet because they’re too far gone anyway, it would be too hard, it would likely fail, and it would be bad for donut and electric scooter makers. So let’s just give up.

  • bluegrue // May 13, 2009 at 9:10 pm | Reply

    ‘Ere I forget, here’s a webcitation of Anthony’s original post:
    http://www.webcitation.org/5gj9KYM7Y

  • David B. Benson // May 13, 2009 at 9:26 pm | Reply

    How much slower would sea level rise be if we avoided an extra 0.1 K of global temperature rise? What are the projected costs, for the USA alone, of each 10 cm of sea level rise?

    And SLR looks now to be continuing for many centuries unless we can all start putting back the excess CO2…

  • dhogaza // May 13, 2009 at 9:40 pm | Reply

    What next, threats? ;-) – Anthony

    Hell, Anthony – lying’s a sin.

  • Dave A // May 13, 2009 at 10:14 pm | Reply

    Ian Forrester,

    Your argument does not stand up. Even GISS has now made access to its data available.

    Jones’ data is the bedrock of the IPCC but is the only temp data series that is not available. Considering the supposed stakes involved that is incredible!

    Now we can speculate why this might be. Some say it might indicate ‘misrepresentation’, others that the early data is not now in a suitably recoverable form.

    Either way, Jones needs to come into the 21st Century.

  • Ian Forrester // May 13, 2009 at 11:44 pm | Reply

    Dave A, why give anything to dishonest crooks? Why do you agree with the dishonest tactics of the AGW deniers? Surely you can come up with your own (maybe more honest) arguments?
