Anti-GMO study is appropriately dismissed as biased, poorly performed

The anti-GMO study released late last week has raised so many bad science red flags that I’m losing count. Orac and Steve Novella have both discussed fatal flaws in the research, and the New Scientist discussed the researchers’ history of inflating insignificant results into hysterical headlines. All this new paper seems to prove is that these researchers have become more savvy at manipulating press coverage. The result of this clever manipulation of the press embargo, and of news-release stenography by the press, is predictable. The internet food crackpot army has a bogus paper to flog eternally, with Mike Adams predicting the end of humanity and Joe Mercola hailing this as the bestest study of GMO Evar. Lefty publications susceptible to this nonsense, like Mother Jones, have run largely uncritical coverage and repeated the researchers’ bogus talking points. It’s a wonder Mark Bittman, organic food booster and anti-GMO half-wit, hasn’t used it for his assertion that the evidence against GMO is “damning”. He substantiates that claim, by the way, by linking to an article without a single scientific citation, just links to crankier and crankier websites.
Orac and Steve Novella do a good job dissecting many of the methodological flaws of this paper. Similarly, my read (or reads, since this paper is unnecessarily obtuse in its data presentation) is that this paper is so flawed as to be meaningless.
First, and critically, the rates of tumor formation in this rat strain are well established from the pre-GMO era. This paper is exceptional for a low rate of tumor formation in the controls compared to historical controls and existing knowledge of tumor formation in this strain.
Second, the sample groups were small and the number of parameters measured was large, almost guaranteeing that false-positive events would outnumber true-positive events. Take a data set like the one they generated, perform subgroup analysis, and false-positive yet statistically-significant events are going to jump out at you like mad. The researchers then indeed seem to engage in this behavior, selecting a single time point to present their measurements of various biomarkers, rather than showing them over time. This is particularly notable in figure 5 and table 3. This is a sign of sloppy thinking, sloppy experimental design, and a failure to understand Bayesian probabilities. If you study 100 variables at random, you are likely to find false-positive statistically significant events about 5 percent of the time, even though there is no actual difference between groups. The pre-test probability of an effect being meaningful should determine whether a test should be performed and reported, and this “fishing trip” kind of experiment should only be the beginning of the process. It’s simply not possible to know the relevance of any of these ostensibly significant results found by subgroup analysis until they are subsequently studied as primary endpoints of a study.
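To make the multiple-comparisons point concrete, here is a minimal simulation (illustrative only, not a re-analysis of the paper's data): two groups of 10 animals drawn from the very same distribution, compared over and over. By construction there is no real difference, yet roughly 5 percent of comparisons come out "significant" at p < 0.05.

```python
import random
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

random.seed(42)
n_comparisons = 2000
hits = 0
for _ in range(n_comparisons):
    # "Control" and "treated" come from the SAME distribution, so any
    # "significant" difference is a false positive by construction.
    control = [random.gauss(0, 1) for _ in range(10)]
    treated = [random.gauss(0, 1) for _ in range(10)]
    if abs(welch_t(control, treated)) > 2.101:  # ~p < 0.05 for df near 18
        hits += 1

false_positive_rate = hits / n_comparisons
print(f"false positive rate: {false_positive_rate:.3f}")
```

Measure enough endpoints per group and some spuriously "significant" results are virtually guaranteed, which is exactly why subgroup findings need to be confirmed as primary endpoints of a new study.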
The histology in figure 3 is demonstrative of nothing, and the scary rat tumor pictures notably lack a control rat – and we know the controls make tumors too. So why aren’t any control tumors shown? With the concern for bias throughout this paper I find the entire figure to be of no value, since it’s purely qualitative and highly susceptible to bias. Histology slides should be used to show something meaningful in terms of big qualitative effects, unusual structure, or a specific pathology. If one is to make claims about differences between groups by histology, you still have to subject it to rigorous and blinded analysis. I’ve done it, published on it, etc. It can be done. Worse, we know the controls have tumors, and that in this strain tumors are frequent. Why are the control samples always completely normal if not for biased selection of samples? Don’t show me one kidney, show me all the kidneys. Don’t show me one control slide, show me ten, or ideally the results of a blinded quantitative evaluation for tumors or histopathologic grade.
Similarly with figure 4, I don’t see a significant difference between the fields examined, and looking back at previous papers from the same group, none of their ultrastructural evaluations of glyphosate-exposed or glyphosate-resistant-feed-exposed cells and animals appears consistent or convincing. I don’t think many people have exposure to EM as an assay anymore, but having performed it, it’s very hard to say anything quantitative or meaningful with it. You’re going to find something in every grid, and it largely serves as a qualitative evaluation of cellular ultrastructure. I’m very wary of someone saying, upon presentation of a couple of EM slides, that two groups of cells are “different”, and I’m confused by the assertion that the areas they describe represent aggregates of glycogen. What is the significance of glycogen being more dispersed in one cell versus another? You found some residual bodies; so what? They’re everywhere. Is this really a consistent effect? Show me numbers – summaries from 10 grids. Is there any clinical significance of such a change? The answer is no. If I were a reviewer I would have told them to junk the figure unless they wanted it to provide evidence of no difference between the cells.
In general, the paper is confusing and poorly written. Others have pointed out that Figure 1 is unnecessarily complex and that a better representation of the same data shows no consistent pattern of effect. I would say, given the sample sizes and effect sizes, that the likelihood is the researchers are studying noise. There simply is no signal there; if there were, there would be a consistent dose-response effect, rather than, in many cases, the “low dose” group having more tumors than the “high dose” groups. Without error bars it’s hard to be sure, but my read of figure 1, in particular the inset panels, is that there really is no difference between any of the groups in terms of tumor formation.
We also have to consider that, in the end, this whole idea is kind of dumb. Is there really a plausible explanation for how eating feed containing an enzyme that’s resistant to glyphosate generates more tumors in rats, and so does exposure to glyphosate itself? Why would this protein be tumorigenic? If the Roundup Ready crop may have residual levels of glyphosate on it, and that’s the explanation for the similarity between groups, then aren’t you just admitting you’ve done a completely uncontrolled analysis of exposure to the compound? Couldn’t and shouldn’t this have been assayed? Isn’t this whole study kind of crap?
This paper should not have passed peer review, and its publication represents a failure by the editors and reviewers to adequately vet it.

Katie Couric Picking Up Where Oprah Left Off

Gawker reports that on the first day of Katie Couric’s new show, Sheryl Crow discusses her theory that cell phone use caused her to have a brain tumor.
Update: The Chronicle reports that the show is just a celebrity infomercial, with softball questions, and no critical discussion:

You would be forgiven for mistakenly thinking you’d tuned in to an infomercial for Weight Watchers in the first half hour of Katie Couric’s new syndicated talk show, “Katie,” which premiered Monday afternoon…

$15 to Turn Off "Special Offers": Bravo, Amazon.com!

With the announcement of the Kindle Fire HD, some users were upset to learn that Amazon was going to stuff “special offers” on the device. But the company quickly retreated, and now offers the option to turn off the ads for a mere $15.
This is a good development for consumers. We should have the choice to move away from ad-supported business models. As I explain with my co-author Jan Whittington, there is a cost to free business models. “Free,” ad-supported services are packed with hidden costs to privacy and other consumer interests.
While the ads are gone, there is still no word on whether Amazon will reduce tracking of Kindle users. Without backing off on tracking, this is not a pure privacy play.
And an interesting data point: how is it that Amazon is willing to give up these special offers for only $15, given that “customers love our special offers”?

How Did You Get My Facebook?

Facebook watchers are reporting that the service is about to launch a new feature that will allow merchants to target ads to users based upon users’ email addresses and phone numbers. That’s a little confusing. Let me explain with a hypo:
As I understand it, it might work like this: ABC Corp. has an extensive database of consumer email addresses, but is concerned that no one is reading the company’s spam. So ABC uploads its consumer email database to Facebook, which identifies Facebook members who are customers of ABC. ABC Corp can then send its marketing through Facebook so that it lands in the Facebook Feeds of its existing customers.
The service has some privacy safeguards, because some hashing will be in place to stop Facebook from just copying the customer databases held by merchants (too bad they don’t do this for address book scanning!), and because the targeting will be based upon phone numbers and email addresses already in possession of the merchant. Thus, the idea is that this is marketing only to people with a business relationship with the advertiser.
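As a rough sketch of how hashed matching like this can work (purely illustrative; the scheme, field names, and addresses below are assumptions, not Facebook's actual implementation): both sides hash normalized email addresses, and only the hashes are compared, so the platform never receives the merchant's raw customer list.

```python
import hashlib

def email_hash(email: str) -> str:
    """Hash a normalized email address; only the hash is shared."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Merchant's side: hash the customer list before uploading.
merchant_upload = {email_hash(e) for e in ["Alice@Example.com ", "bob@example.com"]}

# Platform's side: hash its own users' emails and intersect.
platform_users = {
    "alice@example.com": "user_1001",
    "carol@example.com": "user_1003",
}
matched_ids = sorted(
    uid for email, uid in platform_users.items()
    if email_hash(email) in merchant_upload
)
print(matched_ids)
```

Note that hashing common identifiers is a fairly weak safeguard: anyone who already holds a list of email addresses can hash them and test for membership, which is precisely why it only limits bulk copying rather than matching.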
This is a great model for businesses trying to communicate with their existing customers. It lets them reach customers through a new channel (Facebook) that is very popular. It avoids the hassle of telemarketing and possibly the regulatory regime associated with email marketing.
The Enhancement Problem
But here’s the catch: two core privacy assumptions are flawed. Merchants have difficulty getting phone numbers and email addresses from customers. Sometimes, instead of asking customers for personal information, they find ways to trick consumers into providing it, or they simply buy email addresses, phone numbers, or home addresses for a customer based upon whatever data they already possess. This practice is known as data enhancement: a company links more information about consumers to an existing database.
A recent case explored this practice at Williams-Sonoma: “After acquiring this information [zip code from Jessica Pineda at the register], the Store used customized computer software to perform reverse searches from databases that contain millions of names, e-mail addresses, residential telephone numbers and residential addresses, and are indexed in a manner that resembles a reverse telephone book. The Store’s software then matched Pineda’s now-known name, zip code or other personal information with her previously unknown address, thereby giving the Store access to her name and address.” That’s how you end up with dead trees in your mailbox.
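Mechanically, the “reverse search” the case describes is just a database join: a fragment collected at the register (name plus zip) is matched against a commercial reference database to pull up the rest of the record. A toy sketch, with an invented reference database and a hypothetical address:

```python
# Illustrative only: the reference records and field names are made up.
reference_db = [
    {"name": "Jessica Pineda", "zip": "92101", "address": "123 Hypothetical St"},
    {"name": "Sam Doe", "zip": "10001", "address": "456 Example Ave"},
]

def enhance(name, zip_code):
    """Match a name + zip collected at the register to a full record."""
    for record in reference_db:
        if record["name"] == name and record["zip"] == zip_code:
            return record["address"]
    return None  # no match in the reference database

print(enhance("Jessica Pineda", "92101"))
```

Scale the reference database up to millions of indexed records, and two innocuous-seeming data points become a home address.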
The whole point of data enhancement is to get information about the consumer that she is otherwise unwilling to provide. It’s really sneaky and it contravenes transparency and fairness principles. Enhancement obviates many attempts to protect privacy through selective revelation.
How Did They Get My Facebook?
There’s a second problem here. Many people do not want to be contacted by the companies that they frequent. In a recent survey, my colleagues and I found that 74 percent of Americans thought that a merchant should not be able to call them, even if they gave their phone number to the merchant! Consumers want specific permission controls over direct marketing.
Finding a new channel to contact people may be great for advertisers, but for users, contact through some new, unexpected channel can be unwelcome.
A Fix?
Perhaps Facebook could correct this problem by requiring merchants using this new service to guarantee that they collected email addresses and phone numbers directly from the consumer, with the consumer’s consent that the information be used for marketing. Otherwise, this new service will create incentives for companies to engage in more enhancement, and it will further junk up Facebook.