Open Mind

Exogenous Factors

December 31, 2009 · 64 Comments

Apparently Lucia thinks that my “estimation of uncertainty intervals without treating the effect of volcanic eruptions like Pinatubo as exogeneous is very misleading.” I’ve come to expect such foolishness from her; whenever she approaches the trend in temperature data, she reeks of desperation.

But if we do model some of the exogenous factors, we might get smaller uncertainties in our trend estimates. Yay! Let’s give that a try.


One thing we can tell right off the bat: treating volcanic forcing as an exogenous factor won’t give Lucia the “falsification” of recent warming trends she so desperately craves. That’s because there hasn’t been enough volcanic activity to have a sizeable climate impact since about 1992 (Pinatubo). Which makes you wonder whether she’s thought this through very clearly.

What might actually be “very misleading” is to cherry-pick your exogenous factors. After all, there’s another one we’re all well aware of — el Nino/la Nina — and it has shown enough activity to have a sizeable climate impact, as recently as last year. So, let’s estimate the impact of volcanic forcing and el Nino/la Nina on global temperature, then remove it to generate an “adjusted” global temperature time series. After all, it’s an “adjustment” so it should really drive ‘em crazy (even if they insisted on it). Then we can subject the adjusted data to the same analysis I did here.

For volcanic forcing I'll use Ammann et al. 2003 (Ammann, C.M., G.A. Meehl, W.M. Washington, and C. S. Zender, 2003, A monthly and latitudinally varying volcanic forcing dataset in simulations of 20th century climate, Geophysical Research Letters, 30, 1657), area-weighted to get an approximation of global volcanic forcing. For estimates after 2000 I'll assume no volcanic forcing, which will err on the side of making it harder to establish a warming trend (we won't be able to blame any post-2000 cooling on volcanism). For el Nino/la Nina I'll use the MEI (Multivariate ENSO Index). Then I can do a multiple regression of global temperature since 1975 against volcanic forcing, MEI, and a time trend, allowing for a lag in the impact of volcanism and MEI. Here's the result:
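
(A minimal R sketch of this kind of regression, for illustration; it is not the exact code behind this post. The series names are placeholders, and the lags used are the best-fit values of 4 and 9 months quoted in the comments below.)

```r
# Sketch only; not the code used for this post.
# Assumes three monthly series covering 1975 onward are already loaded:
#   temp : global temperature anomaly (e.g. GISTEMP land+ocean)
#   volc : area-weighted volcanic forcing (Ammann et al. 2003, zero after 2000)
#   mei  : Multivariate ENSO Index
n        <- length(temp)
time     <- (seq_len(n) - 1) / 12     # time in years since the start of 1975
lag_mei  <- 4                         # best-fit lags reported in the comments
lag_volc <- 9

# shift each exogenous series forward by its lag
mei_lag  <- c(rep(NA, lag_mei),  mei[1:(n - lag_mei)])
volc_lag <- c(rep(NA, lag_volc), volc[1:(n - lag_volc)])

fit <- lm(temp ~ time + volc_lag + mei_lag)   # trend + exogenous factors
summary(fit)
```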

As expected, volcanic aerosols cause cooling, sometimes quite dramatically (1992-93). Also as expected, el Nino warms while la Nina cools, sometimes dramatically (1998). Naturally, the fit isn't perfect; there's still noise in there. This simple multiple regression doesn't model the impact of either of these exogenous factors perfectly, but it does give us a good first approximation of their effect, which enables us to remove the estimated volcanic/el Nino influence from the temperature time series, creating our adjusted temperature record:
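
(Continuing the sketch above, creating the adjusted series amounts to subtracting the fitted exogenous terms; again an illustration, not the exact procedure behind the figures.)

```r
# Subtract the estimated volcanic and ENSO contributions, keeping the
# linear trend and the residual "noise" in the adjusted series.
b        <- coef(fit)
exog     <- b["volc_lag"] * volc_lag + b["mei_lag"] * mei_lag
temp_adj <- temp - exog               # NA for the first few months (lag start-up)
```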

Now we can apply the same analysis we used for the unadjusted data:

In one sense, the result isn't any different. There's still no evidence of any recent cooling trend; in fact, there's no evidence that the recent trend is any different from the trend that has persisted since 1975. The data since 1975 are still indistinguishable from a linear trend plus random noise.

But in another sense, we do have a different result. The probable error is smaller. Because of that, we're able to get a significant result (i.e., a trend which is significantly different from zero) with less data. Without adjustment, we needed 14 years of data to get a significant result, and since there's uncertainty in the parameter estimates we actually need a bit more. But with adjustment, we get a significant result with only 10 years of data. So the quantitative result is indeed different, but the qualitative result — no evidence of any trend change — is the same.
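
(A rough sketch of this kind of start-year test, using the adjusted series from the sketches above and a simple lag-1 autocorrelation inflation of the trend standard error; the analysis behind the figures may differ in detail.)

```r
# For each candidate start year, fit a linear trend to the adjusted data,
# inflate the standard error for lag-1 autocorrelation of the residuals,
# and report the trend, its standard error, and the t-ratio.
trend_test <- function(y, t, start) {
  ok    <- t >= start & !is.na(y)
  model <- lm(y[ok] ~ t[ok])
  r1    <- acf(residuals(model), plot = FALSE)$acf[2]   # lag-1 autocorrelation
  se    <- summary(model)$coefficients[2, 2] * sqrt((1 + r1) / (1 - r1))
  slope <- unname(coef(model)[2])
  c(trend = slope, se = se, t_ratio = slope / se)
}

years <- 1975 + time                   # approximate calendar time of each month
sapply(1995:2005, function(s) trend_test(temp_adj, years, s))
```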

One last point may be of interest. If we compute annual averages of the adjusted data, we get this (the average for 2009 is incomplete because I don’t have December data yet):

The interesting thing is that, using the adjusted data, the warmest year on record is 2009! Of course that’s just “so far,” we’ll have to wait for December data before we can compute a complete annual average.
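
(A small sketch of the annual-averaging step, flagging years such as 2009 that are missing months; illustrative only.)

```r
# Annual means of the adjusted series; years with fewer than 12 months
# of data (such as 2009, lacking December) should be treated as provisional.
yr       <- floor(years)
ann_mean <- tapply(temp_adj, yr, mean, na.rm = TRUE)
n_months <- tapply(!is.na(temp_adj), yr, sum)
ann_mean[n_months < 12]               # incomplete years, interpret with caution
```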

It’s intriguing that, when we account for the two best-known exogenous factors, the previous conclusion about trends is unchanged, but we do seem to be headed for a new “hottest year ever” — this year.

Categories: Global Warming

64 responses so far ↓

  • JohnV // December 31, 2009 at 5:51 pm | Reply

    What would it look like with an adjustment for the solar cycle as well? The most accepted value seems to be about 0.1C peak-to-trough (and probably a little more with this current extended trough of solar activity). Add that to 2009 and it was a *very* warm year.

    • FurryCatHerder // January 4, 2010 at 2:50 am | Reply

      What would it look like with an adjustment for the solar cycle as well? The most accepted value seems to be about 0.1C peak-to-trough (and probably a little more with this current extended trough of solar activity). Add that to 2009 and it was a *very* warm year.

      Yup.

      Perhaps some of the RealClimate crowd who are slow on the GCR-related cooling biz might like to rethink whether quiet solar cycles are worth examining. Because if quiet solar cycles have anything to do with the “pause” in warming, we’re royally screwed in 10 or 20 years.

  • Ray Ladbury // December 31, 2009 at 6:26 pm | Reply

    Ohhh, Snap!!! I can’t wait to see the gyrations when the denialist mothership starts hearing about this!

  • cogito // December 31, 2009 at 7:32 pm | Reply

    @Ray: Is that all you are interested in?

  • Chad // December 31, 2009 at 7:36 pm | Reply

    Hey Tamino,
    I recently did a similar analysis and it's nice to see that we've got similar results (meaning I didn't screw up!). However, I included solar effects. What were your lags for MEI and the volcanic forcing? I used 4 and 11 months, respectively. It's also worth noting that taking these effects into account seriously reduces the lag-1 serial correlation, which in turn reduces the inflation adjustment to the standard errors.

    Also, how do you make your graphics look so neat and fresh? What format/size do you use? I've been using png() in R with fairly high resolution, but they don't look sufficiently pretty.

    [Response: I selected the best-fit lags, which turned out to be 4 months for MEI and 9 for volcanic forcing.

    I did the graphics in R, expanding them to full screen and saving as jpg files, then let wordpress crunch them to smaller size. I do expand the axis markers and labels to make them more legible.]
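
(An illustrative check of Chad's point about serial correlation, reusing the hypothetical fit from the sketches in the post above; this is not Chad's or the post's actual code.)

```r
# Lag-1 autocorrelation of residuals, and the implied inflation factor for
# the trend standard error, for a trend-only fit versus the fit that
# includes the lagged volcanic and MEI terms.
rho     <- function(x) acf(x, plot = FALSE)$acf[2]
inflate <- function(r) sqrt((1 + r) / (1 - r))

r_trend_only <- rho(residuals(lm(temp ~ time)))
r_with_exog  <- rho(residuals(fit))
c(trend_only = inflate(r_trend_only), with_exogenous = inflate(r_with_exog))
```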

  • Ray Ladbury // December 31, 2009 at 8:40 pm | Reply

    Cogito, Hell no, but you gotta admit that it's fun to watch 'em react to an own goal.

    I don't attach a lot of importance to analyses like this. We already knew the "negative trend" trumpeted by the denialosphere had ENSO stamped all over it. This merely brings it into focus.
    My guess is that somebody in the denialosphere will start throwing every connection they can think of at the data and use it to explain the Universe.

    Cue McI in 5, 4, 3, 2…

  • Cthulhu // January 1, 2010 at 2:57 am | Reply

    A recent paper that looks at removing natural noise like ENSO and volcanic effects from the HadCRUT record:

    http://ams.allenpress.com/perlserv/?request=get-abstract&doi=10.1175%2F2009JCLI3089.1

    http://www.atmos.colostate.edu/ao/ThompsonPapers/ThompsonWallaceJonesKennedy_JClimate2009.pdf

  • Lamont // January 2, 2010 at 6:30 am | Reply

    What about MEI vs. ERSSTv3b vs. global SSTs, etc? Are there other timeseries that would be a better ‘fit’ to modelling heat transfer from ocean activity to the atmosphere?

    Deniers will also probably want this analysis done with UAH rather than GISS for surface temperature (just pointing that one out).

    And the 4 month lag from ENSO conditions to atmospheric temperature response makes sense and I've come across that before — but what's the physical explanation for the 9-to-11 month lag for volcanic forcing (not doubting it, but I've just never come across that before)?

  • David B. Benson // January 2, 2010 at 9:14 pm | Reply

    Lamont // January 2, 2010 at 6:30 am — That 3/4 to one year lag also appears as the response to changes in TSI. The upper 10 meters or so of the ocean rapidly mixes due to wave action, so delay in the response seems to me to be due to the time required to heat about that much water.
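
(A back-of-envelope check of this mixed-layer argument, using my own assumed numbers: a 10 m layer, standard seawater properties, and a climate feedback parameter of about 1.5 W/m^2/K.)

```r
# Time constant of a well-mixed surface layer: tau = rho * c_p * depth / lambda
rho_sw <- 1025      # seawater density, kg/m^3
c_p    <- 3990      # specific heat of seawater, J/(kg K)
depth  <- 10        # wave-mixed layer depth, m (as suggested above)
lambda <- 1.5       # assumed climate feedback parameter, W/(m^2 K)

tau_sec    <- rho_sw * c_p * depth / lambda
tau_months <- tau_sec / (86400 * 30.44)
tau_months          # roughly 10 months, consistent with a 3/4-to-1-year lag
```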

  • Todd Friesen // January 2, 2010 at 9:42 pm | Reply

    I've been doing my own modeling of climate for a couple of years, and I agree that, adjusted for natural variability, 2009 is the warmest year on record. Using the GISS data, 2005 was the warmest year, and I would apply a -0.04C adjustment for solar and a -0.05C adjustment for ENSO in going from 2005 to 2009. From the data, I see that 2009 is roughly 0.05C cooler than 2005. Based on my anthropogenic+natural model (adjusted for ENSO, solar, volcanoes), I would have expected both years to be roughly the same. (Anthropogenic forcings, i.e. GHGs, aerosols, various albedos, etc., suggest about 0.09C of warming, which is roughly offset by ENSO and solar.)

    The model has a standard deviation of about 0.06C between the modeled and actual annual global anomalies. 2005 was warmer than the model by about +0.06C (+1 s.d.), and 2009 is warmer by a little over +0.01C.

    Prospectively, I expect 2010 will be +0.08C warmer than 2005 actual, with a land+ocean anomaly of +0.71C. Anthropogenic forcings will contribute about +0.10C of this +0.08C, solar: -0.03C, ENSO: +0.07C. (This doesn’t add up to +0.08C because 2005 actuals were +0.06C warmer than model).

    Of course, these sorts of models don’t control for every global weather variable, just the most material ones.

    Interestingly, my model would have expected 2003 to be the warmest year, but in this case, actual was below model by 0.06C (-1 s.d.).

    Given no model bias, the chance of 2010 not breaking a new record is quite slim. A single-tail probability of less than -1.2 s.d. would be required (< 10%). The current El Nino (assuming it pans out for a few more months as per NOAA's predictions) is plenty strong enough to overcome any decline in solar irradiance relative to 2005. The 2010 El Nino impact is expected to be roughly the same as that for 1998. Not to say that the 1997-1998 El Nino was weaker (it won't be), but that much of the Nino impact from the 97-98 Nino was during the latter half of 1997.

  • Todd Friesen // January 2, 2010 at 9:54 pm | Reply

    Lamont,

    With UAH and RSS being more sensitive to Ninos, it will be interesting to see what their response will be in 2010 when the impacts of the 2009-2010 Nino fully hit us. It's convenient when you end with 2008 (a La Nina), but it won't be nearly as convenient when they are quoting through 2010. But as has been suggested, deniers tend only to make Nino adjustments when it suits them.

  • Todd Friesen // January 3, 2010 at 3:41 am | Reply

    Here is my reconstruction of 1975-2009 GISS data adjusted for the sun, volcanoes, and ENSO. Looks pretty similar to Tamino’s. Also shows 2009 as the hottest year. (I estimated 2009 with my modeled December in-month projection at +0.63C).

    http://4.bp.blogspot.com/_PwOeFv7HIQE/S0ARx7eWURI/AAAAAAAAABc/URfsFDgfAIQ/s1600-h/GISS+Adjusted+(1975-2009).JPG

  • Todd Friesen // January 3, 2010 at 3:43 am | Reply

    Here is my model vs GISS record for 1880-2009.

    http://3.bp.blogspot.com/_PwOeFv7HIQE/SxYFkUJh1zI/AAAAAAAAAA8/pTF7BFWo5z0/s1600-h/Actual+vs+Model+(smoothed).JPG

  • Lamont // January 3, 2010 at 3:48 am | Reply

    I’m sure they’ll just move the goal posts around…

    It’ll be emotionally fun to beat on them, but they’ll come up with more rationalizations…

    But now that you mention it, you brought up another issue that I haven’t had time to read up on — why is UAH/RSS more sensitive to ENSO?

  • jyyh // January 3, 2010 at 4:39 am | Reply

    Lamont, the effects of ENSO are more notable in the tropics, and the two series do not account for the Arctic.

    On the solar variation, the issue of lag is not clear to me. If the standard lag used with it is 4 months, and the wave effect is the only one used to explain this, how can we be sure there are no longer lags in the system? One might want to allow very long lags for the unaccounted slow downwelling and such.

  • jyyh // January 3, 2010 at 4:45 am | Reply

    sorry, the 3/4 to one year lag…

  • Todd Friesen // January 3, 2010 at 6:44 am | Reply

    Lamont,

    I don't know the answer, just that apparently there are sensitivity differences between the surface and the lower troposphere (4-10 km above the surface). Because of the irregular shape of Nino events, statistical fitting is a fairly credible way to detect the Earth's sensitivity, and my statistical fitting shows that LT sensitivity is higher by about 70% compared to the surface. For example, the 2010 Nino might influence global temperatures by +0.10C on the GISS dataset (similar for HadCRU). But this would be +0.17C on the satellite datasets.

    Could it be because of the greenhouse blanket? The stratosphere doesn't show as much sensitivity to Nino. This would suggest that the heat released into space is slowed down by the greenhouse blanket, and it thus would concentrate at higher altitudes (provided they are below the blanket).

  • Slioch // January 3, 2010 at 11:47 am | Reply

    Tamino: “using the adjusted data, the warmest year on record is 2009! Of course that’s just “so far,” ”

    Not only that, but every year from 2001 to 2009 is warmer than 1998.

  • David B. Benson // January 3, 2010 at 10:55 pm | Reply

    jyyh // January 3, 2010 at 4:45 am — The 2/4 to 1 year lag is the phase delay of the fast response portion. There is also a 30 year characteristic time for the rest of the ocean done to the main thermocline.

  • David B. Benson // January 4, 2010 at 12:10 am | Reply

    Corrections:
    “2/4″ –> “3/4″
    “done” –> “down”

  • FurryCatHerder // January 4, 2010 at 2:57 am | Reply

    Not only that, but every year from 2001 to 2009 is warmer than 1998.

    And, assuming you believe in solar min / max as an exogenous factor, every year during SC22 becomes much cooler, SC23 becomes somewhat cooler, and the past few years of SC24 become a LOT warmer.

    All this means is that when SC25 starts in 13 or 14 years, we’re in big trouble.

  • jyyh // January 4, 2010 at 3:28 am | Reply

    Thank you David B. Benson, there's something to think about, draw some in-depth cross-sections, more to think about, and then for someone else to calculate… incidentally, can someone say whether I remember correctly that photosynthesis can only occur at depths shallower than about 100 m, since water quenches enough of the frequencies that the most sensitive algae need to do their job?

  • David Jones // January 4, 2010 at 11:35 am | Reply

    MEI is just an abstract index; when you perform a regression using it, isn’t there some sort of linearity assumption?

    Why is it reasonable to regress anything against MEI?

  • J // January 8, 2010 at 6:31 pm | Reply

    This is a great post, Tamino. Thanks for doing it.

  • jyyh // January 9, 2010 at 5:27 am | Reply

    From MEI description page:
    “…by basing the Multivariate ENSO Index (MEI) on the six main observed variables over the tropical Pacific. These six variables are: sea-level pressure (P), zonal (U) and meridional (V) components of the surface wind, sea surface temperature (S), surface air temperature (A), and total cloudiness fraction of the sky (C). These observations have been collected and published in COADS for many years.”
    I think that's rather real stuff they're using to calculate the index, especially since it can be overcast and windy, or overcast with stagnant air.

  • Hank Roberts // January 9, 2010 at 3:03 pm | Reply

    jyyh: you can narrow it down from here;
    http://scholar.google.com/scholar?q=ocean+depth+photosynthesis
    http://scholar.google.com/scholar?q=ocean+plankton+organisms+mixing+upper+layer

  • george // January 9, 2010 at 3:16 pm | Reply

    From Lucia’s comment:

    “Why a trained statistician would present distortions of this kind is mystifying.

    Of course, he’s developed a following of people who know no better– but that’s less mystifying.”

    hey, at least we (the "following") understand that drawing conclusions about what the climate is doing based on very short time periods (just 7 years when Lucia set off on her expedition to climb "Mount IPCCfalsified") is a fool's game. Though Lucia has had this pointed out to her time and again, all indications are that she will continue the game until her short period becomes a long one (with updates every month until then).

    But I’d say the primary difference between Tamino’s and Lucia’s “following” (eg, Steven Mosher) is that the former are not conspiracy theorists seeing Piltdown Men behind the major discoveries of climate science.

  • george // January 9, 2010 at 5:09 pm | Reply

    It is interesting to me that on the last graph there seems to be a qualitative change in both the stability of the central curve and the width of the uncertainty interval as one moves from the trend beginning in 2000 to those starting later.

    The central curve starts to jump around almost randomly, by rather large amounts (relative to the size of the trend) both up and down, and the uncertainty interval seems to expand almost exponentially as one moves past 2000 as a starting time for the trend.

    Interestingly, as pointed out in the post, 2000 is also the latest starting year that allows one to conclude that the trend is nonzero.

  • parallel // January 9, 2010 at 6:15 pm | Reply

    Can you find errors in this?

    Multi-Run Mean AR4 Projections: Statistically Significant from Observations from ‘50, ‘60, ‘70, ‘00 and ‘01.

    http://rankexploits.com/musings/2010/multi-run-mean-ar4-projections-statistically-significant-from-observations-from-50-6070-00-and-01/

    Seems to me that Lucia has answered your criticisms

  • Lazar // January 9, 2010 at 8:05 pm | Reply

    parallel,

    No, there's a stochastic component to climate which affects trends and which she (still!) doesn't understand or recognize…

    The graph below shows uncertainty in the trend computed based on the difference between the multi-run mean computed from a series of AR4 models used for projections and the observations from Hadley. If the multi-model run consisted of the average over runs that perfectly captured the earth’s climate response to externally applied forcings, we would expect the mean trend [probably meant to say 'the difference in mean trends' -- Lazar] to be equal to zero, or at least the value of zero would lie within the uncertainty intervals.

    … this isn’t worth anyone wasting one minute on.

  • Lazar // January 9, 2010 at 8:07 pm | Reply

    ’stochastic’ here means ‘unforced’.

  • george // January 9, 2010 at 8:11 pm | Reply

    Parallel says: "Seems to me that Lucia has answered your criticisms"

    Lucia has never answered Tamino's primary criticism: that drawing conclusions about what the climate is doing (or not doing) based on very short time periods (just 7 years when Lucia began her IPCC-falsified theme) is a fool's game.

    Most of the time, it's actually rather difficult to even decipher what it is she is claiming, because her "stream of consciousness" posts are an ongoing exercise in "claim adjustment" (she would probably do well in the insurance industry).

    More often than not, rather than answer criticisms, she merely changes what it is she is claiming — or more precisely, changes her terminology in order to make it appear that she was originally claiming something different than what she actually did. Her “IPCC Projections falsified” is a perfect example of the latter.

    I’m not sure why anyone should actually wade through all the constantly evolving claims on her blog in the hopes of finding something valid — or even stationary.

  • Gavin's Pussycat // January 9, 2010 at 8:44 pm | Reply

    So I made the mistake of following Tamino’s link to Pielke’s blog… and tell me again why we should take this guy’s scientific utterings seriously?

    What he makes of Gavin Schmidt’s model-data comparisons… Sigh. Just sigh.

  • Ray Ladbury // January 9, 2010 at 9:57 pm | Reply

    Question: How often do you really learn something from one of Lucia’s posts?

    How often do you really learn something from Tamino?

    IMO, that constitutes the real difference, and it speaks volumes to their motivations and their methodology.

  • David B. Benson // January 9, 2010 at 10:43 pm | Reply

    Gavin’s Pussycat // January 9, 2010 at 8:44 pm — Which Pielke?

  • Didactylos // January 10, 2010 at 1:34 am | Reply

    Okay, so I don’t normally waste my time reading Lucia’s drivel. But curiosity got the better of me, and I clicked. Like all deniers, Lucia seems to have difficulty saying what she actually means.

    But as far as I can tell, all her post shows is that if she is right (given a sensibly large trend interval) the AR4 models come very close to the observed trends. So, absolutely no surprise there, then!

    The bottom line, though, is I just don’t trust Lucia. I don’t think I ever did. It’s curious: if by chance she made a valid criticism, would anyone notice? Aesop wrote on this subject….

  • george // January 10, 2010 at 3:46 pm | Reply

    Ray,

    I think you are right.

    Lucia is not interested in learning.

    She is interested in being "right" or at least perceived as being right.

    She seems to crave recognition, though she will not do what is required to actually get it: publish in the climate science related journals.

    People like Lucia, McIntyre and RP Jr. have similar mentalities. They are more interested in scoring rhetorical points and "winning" (even minor) arguments than they are in advancing the science and actually learning something about the way nature works.

  • parallel // January 10, 2010 at 5:11 pm | Reply

    Lazar,
    What stochastic components are you writing about, that Lucia does not recognize? (I’m glad you explained your definition of the word later.)

    george,
    Lucia has explained the reasons for her choice of time period many times, if you had been following her blog. She has also done a comparison of results for all the time periods one could imagine in the past.

    The rest of the following comments were just ad hominems without any real content that I could see.

    [Response: We know the reason for Lucia's choices: it's the only way she can delude herself into achieving "falsification."

    If she can't do it with 10 years, she'll use 7. If the data "as is" contradict her tricks, she'll insinuate that I'm being misleading by not treating some influences as exogenous -- but only the influences that favor her. When an ARMA(1,1) error model fails, she'll argue that it's not valid and we should revert to AR(1) -- which is as idiotic as it gets. Proof positive she's not just mistaken, she's dishonest. Subject closed.]

  • Gavin's Pussycat // January 10, 2010 at 5:33 pm | Reply

    David, the younger (cf. Tamino’s link). Although the elder has been ‘going pielke’ too :-(

  • Lazar // January 10, 2010 at 6:44 pm | Reply

    parallel,

    What stochastic components are you writing about, that Lucia does not recognize?

    Why do scientists create multiple runs for each model+forcing pair? Why do the runs differ?

    • parallel // January 10, 2010 at 9:50 pm | Reply

      Lazar,

      You didn't answer the question, so I'll try again.

      What stochastic components are you writing about, that Lucia does not recognize?

      • Lazar // January 10, 2010 at 10:15 pm

        You didn’t answer the question

        did too :-)

      • Lazar // January 10, 2010 at 11:28 pm

        Ah, apologies, ok I see that she's using an ARMA model to estimate the unforced variability instead of the model runs distribution… my bad… that may be ok, depending on how she's accounting for ENSO and forcings. It needs more detail; I see she's going to provide future posts. It would be better to provide the detail first / alongside the results… just the results leaves everyone hanging. Also could edit "we would expect the mean [difference] trend to be equal to zero"… we wouldn't really… it approaches zero in the limit.

      • Lazar // January 10, 2010 at 11:33 pm

        also whether she’s included uncertainty in the estimate of the model mean… is unclear.

  • george // January 10, 2010 at 7:01 pm | Reply

    "She has also done a comparison of results for all the time periods one could imagine in the past."

    I don't really have any interest in her latest post because just by its title, "Multi-Run Mean AR4 Projections: Statistically Significant from Observations from '50, '60, '70, '00 and '01.", it is clear that she still considers the 2000-present and even 2001-present time period long enough to draw "statistically significant" conclusions regarding the IPCC projections.

    And by that very same title, it also appears that she has excluded the period starting in ‘80.

    Why is that, parallel?

    parallel continues:

    “The rest of the following comments were just ad hominems without any real content that I could see.”

    That depends…

    on whether you believe her analysis has any content.

    I don’t.

    Ad Hom: you are stupid so anything you say is stupid.

    Not ad hom: most of what you have said so far is stupid, so the chance is very good that the very next thing will be as well…

    So why should I bother with it?

    Incidentally, I actually have looked into some of the stuff that Lucia has done in the past and commented on it in detail on this very blog.

    One of the things that I have noticed is that Lucia has a pronounced tendency to resort to stupid word games when she is cornered on something.

    She shares that habit with RP, Jr. (ask Eli Rabett)

  • george // January 10, 2010 at 7:10 pm | Reply

    and parallel :

    By the title, she left out the ‘90 starting date as well.

    Finally,
    There are IPCC projections that begin in 1990 (TAR)

    There are also ones that begin in 2001 (AR4)

    I am not aware of any that begin in 1950, ‘60, or ‘70.

    Perhaps you can point me to the relevant IPCC document?

  • David B. Benson // January 10, 2010 at 8:47 pm | Reply

    Gavin's Pussycat // January 10, 2010 at 5:33 pm — Thanks. (I don't ever follow links to either Pielke's site.) The younger is not a physical scientist and seems to be a delay advocate, which is obviously not what needs doing, as even economists can figure out.

  • parallel // January 10, 2010 at 11:32 pm | Reply

    parallel wrote: “You didn’t answer the question”

    Lazar wrote: “did too :-)”

    Another question is not an answer. I didn’t think you would be able to answer it.

    • parallel // January 11, 2010 at 1:02 pm | Reply

      Lazar wrote: “Ah, apologies, ok I see that she’s using an ARMA model to estimate the unforced variability instead of the model runs distribution… my bad… that may be ok

      Apology accepted, but it should really be to Lucia.

  • Glenn Tamblyn // January 11, 2010 at 8:55 am | Reply

    Ray Ladbury (last year)

    "I don't attach a lot of importance to analyses like this. We already knew the "negative trend" trumpeted by the denialosphere had ENSO stamped all over it. This merely brings it into focus."

    Agreed. The denialosphere won't/can't grasp this. However, for the still substantial number of non-scientifically-literate people who are unsure, the 'it's cooling, AGW has stopped' meme is still powerful, in a paralysing sort of way – cue some Aussie politicians I could name.

    So any level of analysis that can tip the balance of their uncertainty the other way is actually profoundly important. We don’t really have 5-6 years to wait for the next solar cycle to peak to galvanise people.

    And looking at the combined atmospheric and ocean record, I would actually say that it may have a triple impact written all over it: ENSO/solar cycle/aerosols.

    What was the old saying about Los Angeles – 'I Like To See The Air I Breathe'. Well, now that is more apt for Shanghai, Delhi etc.

    Given that one of the focuses for the next IPCC report is decade-level predictions, these are perhaps some of the most important areas of study – unless we can suddenly get a much better understanding of clouds, which seems to be in the 'that's really hard' basket.

    As an aside to anyone reading, including any of the RC guys, does anyone know how many of the major climate modelling groups out there currently include the solar cycle in their models – even if it is a simple sinusoid, amplitude X W/m^2, period 11 years?
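
(A sketch of the kind of simple sinusoidal solar-cycle forcing Glenn describes; the amplitude and phase are placeholders, the "X" in his question, not values used by any modelling group.)

```r
# Simple sinusoidal approximation to the solar-cycle forcing.
solar_cycle <- function(t_years, amplitude = 0.1, period = 11, phase = 0) {
  amplitude * sin(2 * pi * (t_years - phase) / period)   # W/m^2 about the mean
}

# e.g. monthly values over 1975-2009:
f_solar <- solar_cycle(seq(1975, 2009, by = 1 / 12))
```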

  • Barton Paul Levenson // January 11, 2010 at 4:05 pm | Reply

    Glenn,

    I imagine there are probably quite a few that use historical TSI data when making climate retrodiction runs. Does anybody know?

  • parallel // January 11, 2010 at 7:15 pm | Reply

    [edit]

    [Response: No more advertising for Lucia's garbage. Take it elsewhere.]

  • Gavin's Pussycat // January 12, 2010 at 10:27 am | Reply

    > (I don’t ever follows links to either Pielke’s site.)

    Wise. I didn’t either, just innocently followed a link provided by Tamino :-(

  • Chad // January 12, 2010 at 7:19 pm | Reply

    Barton and Glenn,
    I have a post up on solar forcing in climate models. For the most part, the solar irradiance is completely flat post-20C3M. They don’t even include an 11-yr cycle.

  • Slioch // January 12, 2010 at 8:12 pm | Reply

    I’ve recently come across this:

    http://reallyrealclimate.blogspot.com/2010/01/another-inconvient-truth-for-agw.html

    which shows four graphs of HADCRUT3 raw data and purportedly "HADCRUT3 ENSO adjusted" data for differing time periods, e.g.:

    http://1.bp.blogspot.com/__VkzVMn3cHA/S0tmTPVPyyI/AAAAAAAAAGg/Tnjp_G3tBRE/s1600-h/Easterling+%26+Wehner+3.bmp

    The ENSO adjusted data is said to have been “generated for this paper by D. Thompson et al.”, though the “this paper” link doesn’t work, so I don’t know to what it refers. It was said to be stored on the Real Climate web site.

    The author of this blog clearly believes that the ENSO adjusted data does not include any El Nino/La Nina influence.

    However, what is curious about the “ENSO adjusted data” is that it does not remove the El Nino/La Nina peaks and troughs – it merely mutes them somewhat.

    It is surely the case that if a temperature series is adjusted to remove the effects of El Nino/La Nina, then the peaks and troughs caused by such episodes should be completely removed from the series. In other words, if a temperature peak such as 1998 is the result of an El Nino, then the adjusted data should not necessarily show any peak in 1998 at all.

    I would be interested to read any comments both about this particular data series and about ENSO adjusted data series in general, and whether my above supposition is correct.

  • Deep Climate // January 12, 2010 at 9:02 pm | Reply

    Chad’s link is:

    http://treesfortheforest.wordpress.com/2010/01/04/another-brief-look-at-climate-model-solar-forcing/

    (for later when it falls off the front page).

    I don’t think it’s a huge problem to leave out the 11-year cycle post-2000, since it’s swamped by simulated ENSO. It might have helped the short-term fit to observations a little, though. How did that work out for GISS?

  • Slioch // January 12, 2010 at 10:37 pm | Reply

    re. Slioch above: The “this paper” link now works.
    It is:

    http://www.atmos.colostate.edu/ao/ThompsonPapers/Thompson_etal_Nature2008.pdf

    “A large discontinuity in the mid-twentieth century in observed global-mean surface temperature” Thompson et al Nature Letters 29 May 2008

  • dhogaza // January 13, 2010 at 5:16 am | Reply

    Slioch:

    I would be interested to read any comments both about this particular data series and about ENSO adjusted data series in general, and whether my above supposition is correct.

    Why would you spend any time looking at crap of this sort?

    I’m sure the flat earth society, or at least their fellow travelers, have round-earth killing “information” on the web, too, if you look hard enough.

    And Answers in Genesis “proves” the world is only 6,000 years old, despite what evil scientists say.

    They have a very convincing website.

    Why should any of us bother refuting *crap*?

    Really?

  • Slioch // January 13, 2010 at 12:56 pm | Reply

    dhogaza

    “Why should any of us bother refuting *crap*?”

    Because if you don’t bother then the *crap* continues to fester and spread.

    What is most evident from these sorts of sites (WUWT etc.) is the increasing absence of any refutation of their garbage from scientifically literate individuals. Perhaps many share your view that it isn’t worth their time to do so. The result is an increasing polarisation of comments and fewer and fewer people exposed to counter arguments.

    If this was simply a matter of scientific debate, or if effective actions to tackle the problems of warming were being taken, then that might not matter very much, but that is not the case. Every person who becomes convinced that AGW is a scam has the same vote as you do, the same right to lobby his representative and a determination to undermine whatever actions might be proposed to counter AGW.

    Do I need to mention the global impact of the lack of action on AGW thus far from the US? The root cause of much of that is the scientific ignorance of your population. Much the same criticism can be made elsewhere of course.

    What, thus far, has been the response of the UK scientific community to the "climategate" debacle, for example, with respect to trying to educate the public? Precious little. 1700 UK scientists managed to find two minutes to sign a petition of support for CRU. But have there been letters to papers, comments in blogs etc. to stem the huge tide of ignorance that it has unleashed? Very, very few. The overwhelming view now expressed in mainstream media comments is that AGW is a scam designed to … well, you've heard it all before.

    You don’t have to tell me that wading through all this *crap* is dirty, tedious and repetitive: I do it all the time, because I do believe in trying to educate people and counter the forces of darkness and greed.

    But when a modestly scientifically literate non-specialist like myself turns to “Open Mind” for a bit of advice, I think I can be forgiven for looking for a bit more than your dismissive and wholly unhelpful response.

  • Ray Ladbury // January 13, 2010 at 2:11 pm | Reply

    Slioch,
    Personally, I think our strategy ought to be to herd all the wackaloons into a WUWT convention and build a fricking wall around it. WUWT already serves the useful function of an asylum on the Internet. After all, if someone is too dumb to spot the transparent stupidity of that site, do we really want them on our side? WUWT is self refuting to anyone with two working brain cells.

    The first problem with trying to refute the nonsense on such websites is that the signal-to-noise ratio is effectively zero. The second problem is that people who go to those websites aren’t looking for truth or understanding. They are looking for reassurance that they can go on believing what they want to believe. That’s not the most conducive environment to a “teachable moment”.

    I’m willing to engage in an atmosphere where there’s at least a chance people are willing to learn. If there aren’t enough people willing to learn, well then, it looks like human intelligence isn’t all it is cracked up to be from an evolutionary standpoint.

    WRT your particular example. The blog post is clearly the work of an idiot. The SST anomaly actually helps to resolve a difficulty the consensus model faced–how to account for the warming from 1910-1945. If it turns out that some of that warming was due to this anomaly, it becomes MUCH easier.

    So, as usual the denialists score an own goal because they don’t know how to interpret data and they don’t understand the science.

  • Kevin McKinney // January 13, 2010 at 6:43 pm | Reply

    dhog/slioch/ray–

    I follow a news site in order to swat down the “rebunks.” (Which is mostly all there ever is.)

    You know you’re not going to convince your interlocutors, but there are lots of bystanders who may well have “teachable moments.” From that perspective, every zombie argument is an opportunity.

  • Ray Ladbury // January 13, 2010 at 7:17 pm | Reply

    Kevin McKinney,
    While I admire your stamina, I’m afraid I don’t share your optimism. If someone goes to WUWT, they aren’t looking to be educated, they’re looking to be reassured (that is, lied to).

    My strategy would be to simply make a note of some of the absolute howlers the site had perpetrated, and end it with: This is what happens when you try to get your science from people who don’t understand science. Find out what the scientists say:

    http://www.realclimate.org
    http://tamino.wordpress.org
    http://www.skepticalscience.org

    etc.

  • Slioch // January 13, 2010 at 8:53 pm | Reply

    Ray

    “This is what happens when …” etc

    And you thereby widen the divide. Simply being told that nice Mr Watts ("who looks just like my favourite uncle Jo and is never rude, etc.") doesn't understand science is not going to win hearts and minds.

    I do not accept your wholly pessimistic view of people visiting WUWT and similar sites. When errors in articles are pointed out and explained, some people do take notice.

    In little over a couple of years the U.S. will be deciding between Obama and someone who very likely will be telling them that AGW is a scam to steal their tax dollars. You’re going to need everyone on your side: dismissing people because they are “too dumb” will be music to the Sarah Palins of this world.

  • Ray Ladbury // January 14, 2010 at 12:25 am | Reply

    Slioch,
    If people are trying to learn climate science from Anthony Watts or McI, they do not realize that expertise has value. Since this also suggests that they have no expertise themselves, then I hold out little hope that they would appreciate the truth even if you spoonfed it to them.

    This is humanity's midterm. It consists of one question: Will you use your brain to comprehend physical reality rationally, or will you use it to rationalize yourself into wishful thinking, complacency and ultimately oblivion? Unfortunately, most of humanity hasn't even cracked the textbook. If all our brains are good for is rationalization, then they represent an evolutionary burden with no commensurate benefit to survival.

    The people who will vote for Palin would not listen to me anyway. I have an IQ above the 40th percentile, and that automatically makes me suspect in their eyes. If she gets elected in 2012, that will be my signal to give up on the species.
