Today is a big day for cranks in two separate areas, but the interesting thing is the similarity of the responses.
First we have Casey Luskin of the “top think tank” the Discovery Institute (wow, they must be right up there with Cato and CEI!) blathering about how paleontologists don’t know anything, all because of the self-correcting nature of science.
After this latest find, one researcher realized its implications and was quick to quash any doubts this may spark regarding human evolution, stating: “All the changes to human evolutionary thought should not be considered a weakness in the theory of evolution, Kimbel said. Rather, those are the predictable results of getting more evidence, asking smarter questions and forming better theories, he said.”
I’m all for “asking smarter questions and forming better theories,” and it logically follows that I therefore must also favor abandoning theories that aren’t working. The aforementioned Harvard biological anthropologist, Daniel Lieberman, apparently did not get the memo about refraining from making statements that might lead to doubts about evolution: he stated in the New York Times that these latest fossil finds regarding habilis:
“show ‘just how interesting and complex the human genus was and how poorly we understand the transition from being something much more apelike to something more humanlike.’” (emphasis added)
Indeed, as explained here, the first true members of Homo were “significantly and dramatically different” from our alleged ape-like ancestors, the australopithecines. So far, the data isn’t doing a very good job of explaining precisely from what, if anything, did our genus Homo evolve.
Well soooorrrry for actually looking for answers rather than stopping at “a magic man done it.” Or rather, “I see design, therefore a magic man done it!” It’s really tiresome when denialist cranks like Luskin attack science and scientists because we’re self-correcting and willing to revise theories based on new evidence. That’s science, people. I would hardly say looking for “smarter questions” involves dropping evolutionary theory, which, as PZ has noted, is unaffected by this result, to go searching for a magic man instead.
Anyway, that leads me to the second group of cranks dancing around a new result today. In this case it is global warming denialists like Steven Milloy, Tim Blair, Joseph D’Aleo at Icecap, NewsBusters (It’s a scandal!), etc., jumping up and down because of an error found in a dataset of US temperatures, the correction of which revises the record to show that 1934 was actually hotter than 1998. The chart and more below the fold.
One wonders how long it will take them to figure out that the record is for the US, not the globe, and that the correction does very little to the trends. But again, we’re running into the same problem here as we are with the creationists. A change is made to the record (found by McIntyre, who to his credit is at least looking at the data, however cranky he is), and all the denialists hoot and holler and say “gotcha,” as if one revision is somehow a scandal that overturns every other observation that has been made. How is this different from Luskin’s cheering of a revision in the fossil record? It’s not a matter of an interest in pursuing the truth and getting it right; it’s about trying to put egg on the face of a branch of science, ineffectually I might add.
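To see why one flipped ranking says nothing about the trend, here’s a toy back-of-the-envelope sketch. The numbers are entirely made up for illustration (this is not the GISS series, and the 0.03 °C tweak is a hypothetical stand-in for the correction):

```python
# Toy illustration with synthetic numbers (not the GISS data): a small
# correction can swap which single year ranks hottest while leaving the
# century-scale trend essentially untouched.
import numpy as np

years = np.arange(1900, 2007)
rng = np.random.default_rng(0)

# Synthetic anomalies: a gentle warming trend plus noise, with two
# stand-in "record" years a few hundredths of a degree apart.
anoms = 0.006 * (years - 1900) + rng.normal(0, 0.2, years.size)
anoms[years == 1934] = 1.50   # hypothetical near-record year
anoms[years == 1998] = 1.52   # hypothetical record year

def trend_per_century(y):
    """Least-squares slope, expressed per 100 years."""
    return np.polyfit(years, y, 1)[0] * 100

before = trend_per_century(anoms)

# Apply a hypothetical 0.03 C downward correction to the recent years,
# just enough to flip the 1934/1998 ranking.
corrected = anoms.copy()
corrected[years >= 1998] -= 0.03
after = trend_per_century(corrected)

print(f"hottest year before correction: {years[np.argmax(anoms)]}")
print(f"hottest year after correction:  {years[np.argmax(corrected)]}")
print(f"trend before: {before:.3f} C/century, after: {after:.3f} C/century")
```

The ranking flips; the slope barely moves. That’s the whole “scandal.”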
Between this and von Storch and Bray’s release of the results of their 2003 survey, it’s a field day for cranks.
Now, Lambert still doesn’t like their methods, and I agree that the open solicitation likely led to contamination from cranks. Further, the questions and the rating system are far too non-specific in most instances, especially with regard to the models, which take a drubbing. The survey made a splash a few years ago because of one chart in particular.
But even so, it couldn’t be too pleasing to the deniers, given several of the survey’s other graphs (note in particular the leftward shift in responses from 1996 to 2003):

[survey charts]
I’m sure they’ll find a nice way to quote-mine it all the same, but even with potential crank contamination from distribution to the climate skeptics (200 potential spoilers out of an overall survey n = 557, according to Lambert), there is nothing close to a majority for any of the skeptic positions. So take it with a hefty grain of salt; even with contamination from freeping, the signal emerges from the noise to show that the scientists in the field overwhelmingly think global warming is real, greenhouse gases are important, and we should do something about it.
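To put rough numbers on that, here’s a quick sanity check using only Lambert’s figures quoted above, plus the deliberately pessimistic assumption that every potential spoiler answered the skeptic way:

```python
# Back-of-the-envelope check on the contamination worry, using the
# figures from Lambert quoted above: n = 557 respondents, of whom up
# to 200 may be "spoilers" from the skeptics' lists. Assume the worst
# case: every spoiler votes for the skeptic position.
n_total = 557
n_spoilers = 200                       # pessimistic assumption
n_mainstream = n_total - n_spoilers    # 357 remaining respondents

majority = n_total // 2 + 1            # bare majority: 279 responses
needed_from_mainstream = majority - n_spoilers

print(f"bare majority: {majority} of {n_total}")
print(f"even if all {n_spoilers} spoilers vote skeptic, a skeptic position "
      f"still needs {needed_from_mainstream} of the remaining "
      f"{n_mainstream} respondents ({needed_from_mainstream / n_mainstream:.0%}) "
      f"to reach a majority")
```

In other words, the contamination alone can’t manufacture a majority for a skeptic position; it can only inflate its apparent support, which is exactly why the absolute percentages deserve that grain of salt.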
Finally, for those interested in improvements in climate modeling, Science has an article today on a model (with forecasts for the next 10 years) that is initialized with data on current conditions, which the authors show greatly reduces uncertainty.
For a century or more, meteorologists have known the secret to weather forecasting: To glimpse tomorrow’s weather, one must know today’s. And lately they have realized that the same precept applies to predicting climate years or decades ahead. Stirrings in the North Atlantic Ocean today that have nothing to do with the strengthening greenhouse–just natural jostlings of the climate system–could lead to drought in Africa’s Sahel in a decade or two, they recognized. Ignore today’s ocean conditions, and your 2020 global-warming forecast could be a bust. And such natural variability can be far-reaching. In a recent study, researchers found that when the Atlantic Ocean swung from one state to another, it apparently helped trigger a decade-long climate shift in the late 1960s that sprang from the Atlantic and reached as far as Australia.
But until now, climate forecasters who worry about what greenhouse gases could be doing to climate have ignored what’s happening naturally. Most looked 100 years ahead, far enough so that they could safely ignore what’s happening now. No more. In this week’s issue, researchers take their first stab at forecasting climate a decade ahead with current conditions in mind. The result is a bit disquieting. Natural climate variability driven by the ocean appears to have held greenhouse warming at bay the past few years, but the warming, according to the forecast, should come roaring back before the end of the decade.
“This is a very valuable step forward,” says meteorologist Rowan Sutton of the University of Reading, U.K. “It’s precisely on the decadal time scale and on regional scales that natural variability and anthropogenic effects have comparable magnitudes.” So improved climate forecasting of the next few decades could help decision-makers focus on where and when the most severe climate change will be happening. Or, conversely, they could recognize when the looming threat of global warming will be masked–temporarily–by natural variability.
…
Appreciating the power and reach of natural climate variations is a major step. To put that information to use, however, climate forecasters must find a way to model the future course of the variations themselves, starting from current conditions. Climate researchers from the Hadley Centre, led by Douglas Smith, are the first to try that, as they report on page 796.
The Hadley group tested the usefulness of their new prediction model by “hindcasting” the climate of two past decades. Starting from the observed distribution of ocean heat content, the model outperformed its own forecasts that lacked observed initial conditions. Errors in predicting global temperature declined by 20% or 36%, depending on the type of error. The model successfully predicted the warming of El Niño and the effect of unusually warm or cold waters around the world. An actual forecast starting in June 2005 correctly predicted that natural variability–the appearance of cooler water in the tropical Pacific and a resistance to warming in the Southern Ocean–would offset greenhouse warming until now. But beyond 2008, warming sets in with a vengeance. “At least half of the 5 years after 2009 are predicted to be warmer than 1998, the warmest year currently on record,” the Hadley Centre group writes.
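The core idea here, that a forecast initialized with the observed state beats one started from climatology whenever part of the variability is persistent, can be illustrated with a minimal toy sketch. This is a stand-in AR(1) model of “ocean memory,” not the Hadley Centre’s actual system, and the persistence, noise, and trend parameters are invented for illustration:

```python
# Minimal toy sketch of why initialization helps (a stand-in AR(1)
# "ocean memory" model, not the Hadley Centre system): when part of
# the climate signal is persistent natural variability, a forecast
# started from today's observed state beats one that ignores it.
import numpy as np

rng = np.random.default_rng(1)
phi, noise_sd, trend = 0.9, 0.1, 0.02  # invented persistence/noise/forcing
n_years, lead = 200, 5                 # hindcast 5 years ahead, many starts

# "Truth": a forced warming trend plus persistent natural variability x.
x = np.zeros(n_years)
for t in range(1, n_years):
    x[t] = phi * x[t - 1] + rng.normal(0, noise_sd)
truth = trend * np.arange(n_years) + x

init_sqerr, uninit_sqerr = [], []
for t0 in range(n_years - lead):
    target = truth[t0 + lead]
    forced = trend * (t0 + lead)
    # Initialized forecast: carry today's (assumed perfectly observed)
    # natural anomaly forward, damped by the persistence phi.
    init_fc = forced + phi ** lead * x[t0]
    # Uninitialized forecast: the forced trend alone.
    uninit_fc = forced
    init_sqerr.append((init_fc - target) ** 2)
    uninit_sqerr.append((uninit_fc - target) ** 2)

print(f"RMSE, initialized:   {np.sqrt(np.mean(init_sqerr)):.3f}")
print(f"RMSE, uninitialized: {np.sqrt(np.mean(uninit_sqerr)):.3f}")
```

In this toy the initialized hindcasts come out with noticeably lower error, which is the same qualitative result the Hadley group reports from starting their model at the observed ocean heat content.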
Milloy, of course, waves the whole thing off with the classic “there is no Global Mean Temperature” canard, but what can you do?