Anti-GMO writers show profound ignorance of basic biology and now Jane Goodall has joined their ranks

It’s a sad day for the reality-based community. Within the critiques of Jane Goodall’s new book ‘Seeds of Hope’ we find that, in addition to plagiarism and sloppiness with facts, she’s fallen for anti-GMO crank Jeffrey Smith’s nonsense.

When asked by The Guardian whom she most despised, Goodall responded, “The agricultural company Monsanto, because I know too much about GM organisms and crops.” She might know too much, but what if what she knows is completely wrong?
Many of the claims in Seeds of Hope can also be found in Genetic Roulette: The Documented Health Risks of Genetically Engineered Foods, a book by “consumer advocate” Jeffrey Smith. Goodall generously blurbed the book (“If you care about your health and that of your children, buy this book, become aware of the potential problems, and take action”) and in Seeds of Hope cites a “study” on GMO conducted by Smith’s “think tank,” the Institute for Responsible Technology.
Like Goodall, Smith isn’t a genetic scientist. According to New Yorker writer Michael Specter, he “has no experience in genetics or agriculture, and has no scientific degree from any institution” but did study “business at the Maharishi International University, founded by the Maharishi Mahesh Yogi.” (In Seeds of Hope, Goodall also recommends a book on GM by Maharishi Institute executive vice president Steven M. Druker, who also has no scientific training). As Professor Bruce Chassy, an emeritus food scientist at the University of Illinois, told Specter, “His only professional experience prior to taking up his crusade against biotechnology is as a ballroom-dance teacher, yogic flying instructor, and political candidate for the Maharishi cult’s natural-law party.” Along with fellow food scientist Dr. David Tribe, Chassy runs an entire website devoted to debunking Smith’s pseudoscience.
And it apparently escaped Goodall’s notice that Smith’s most recent book—the one that she fulsomely endorsed—features a foreword by British politician Michael Meacher, who, after being kicked out of Tony Blair’s government in 2003, has devoted a significant amount of time to furthering 9/11 conspiracy theories.

Goodall is, of course, not the first scientist of fame and repute to fall for crankery and pseudoscience. From Linus Pauling to Luc Montagnier, even Nobel Prize-winning scientists have fallen for pseudoscientific theories. Still, we should be saddened whenever yet another famous scientist decides to go emeritus and abandon the reality-based community.
There always seem to be a couple of factors at play when this happens. For one, such scientists appear to have reached such a status that it becomes very difficult for others to criticize them. It’s like a state of ultra-tenure, in which you practically have to insult the intelligence of an entire continent before people will object to your misbehavior. The second common factor is that they start operating in a field in which they lack expertise, while assuming that their expertise in other, unrelated fields should carry over. This appears to be the case with Goodall, as even someone with rudimentary knowledge of molecular biology should be able to see the gaping holes in the anti-GMO movement’s logic.
For example, let’s start with the easy pickings at Natural News. A recent article by Jon Rappoport entitled “Brand new GMO food can rewire your body: more evil coming” is a perfect example of how the arguments made against GMO foods rest on a fundamentally unsound understanding of biology. The author writes:

It’s already bad. Very bad. For the past 25 years, the biotech Dr. Frankensteins have been inserting DNA into food crops.
The widespread dangers of this technique have been exposed. People all over the world, including many scientists and farmers, are up in arms about it.
Countries have banned GMO crops or insisted on labeling.
Now, though, the game is changing, and it’ll make things even more unpredictable. The threat is ominous and drastic, to say the least.
GM Watch reports the latest GMO innovation: designed food plants that make new double-stranded (ds) RNA. What does the RNA do? It can silence a gene. It can activate a gene that was silent.
If you imagine the gene structure as a board covered with light bulbs, in the course of living some genes light up (activation) and some genes go dark (silent) at different times. This new designed RNA can change that process. No one knows how.
No one knows because no safety studies have been done. If you have genes lighting up and going dark in unpredictable ways, the functions of a plant or a body can change randomly.

Pinball, roulette, use any metaphor you want to; this is playing with the fate of the human race. Walk around with designer-RNA in your body, and who knows what effects will follow.

At this point, I think anyone familiar with the science of RNA interference (RNAi) has slapped themselves in the forehead. (For anyone who wants a decent introduction, the Wikipedia article does a pretty good job.) It’s clear that the author is projecting his own ignorance of RNAi onto the rest of us. Briefly, until about 20 years ago, the so-called “central dogma of molecular biology” was a one-way road: DNA is transcribed into RNA, which is then translated into a functional protein. Even this is a pretty gross simplification, but it’s fair to say that, prior to the discovery of RNAi, RNA was thought to be little more than a messenger in the cell, serving as an intermediary between the DNA code and the protein function. Yes, we knew that some RNA had enzymatic function, was incorporated into some proteins, etc., but it wasn’t seen as much of a regulatory molecule.
Then, after a few intriguing findings in plants, Fire and Mello discovered that RNA itself could control the translation of other genes in C. elegans. Almost by accident, they found that if you introduced a double-stranded RNA molecule corresponding to an RNA transcript, that transcript would be degraded and the protein it encoded wouldn’t be expressed. It was a surprising finding. One would have thought the effective agent would be the antisense strand of RNA, which would bind the sense strand and somehow inhibit its entry into the ribosomal machinery, ultimately interfering with translation. Instead, they found that double-stranded RNA had a function all its own, with a previously unknown cellular machinery specifically purposed for processing dsRNA and inhibiting gene function through an entirely different mechanism. Subsequently, we’ve found that RNAi not only can directly regulate the levels of RNA transcripts, but can also regulate gene suppression and activation directly at promoter sequences on the DNA itself.
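The knockdown logic described above can be caricatured in a toy sketch (purely illustrative; all the names are hypothetical, and real RNAi involves Dicer, siRNAs, and the RISC complex rather than exact string matching): a dsRNA whose sequence corresponds to a transcript marks that transcript for degradation, so its protein is never made.

```python
# Toy model of RNAi-mediated knockdown (illustrative only; real RNAi
# uses Dicer to cleave dsRNA into siRNAs, which guide RISC to degrade
# complementary mRNA). Here we just model the net outcome: a transcript
# matching an introduced dsRNA is silenced and not translated.

def transcribe(gene_seq):
    """DNA coding strand -> mRNA (T becomes U)."""
    return gene_seq.replace("T", "U")

def expressed_proteins(genes, dsRNAs):
    """Return the names of genes whose transcripts escape silencing."""
    silenced = set(dsRNAs)  # dsRNA sequences targeting transcripts
    expressed = []
    for name, seq in genes.items():
        mRNA = transcribe(seq)
        if mRNA not in silenced:  # no matching dsRNA -> gets translated
            expressed.append(name)
    return expressed

genes = {"geneA": "ATGGCT", "geneB": "ATGTTT"}
# Introduce a dsRNA matching geneA's transcript:
print(expressed_proteins(genes, {"AUGGCU"}))  # ['geneB']
```

The point of the toy is only that silencing is sequence-specific: the dsRNA does nothing to transcripts it doesn’t match, which is also why the RNAs we eat every day don’t randomly rewire our genomes.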
It’s amazing: decades after the discovery of RNA and an understanding of its primary function, we found this new and incredibly complex layer of genetic regulation by RNA molecules, involved in everything from development to disease. But what does that mean for us? Should we be worried about gene-regulating RNA molecules in our food?
Of course not! RNAi is an intrinsic function of most eukaryotes. Just about every food you’ve ever eaten in your entire life is chock-full of RNA molecules, including the double-stranded inhibitory RNAs involved in the normal biological processes occurring within the cell. If other organisms could affect us by poisoning us with RNA, we wouldn’t last a minute. Weirdly, however, in GMO-paranoia world, whatever we consume has the potential to take over our bodies: the basic molecules of all life, present in everything we eat, take on new powers once handled by human scientists. The article hinted at as evidence of this risk (but, of course, never actually cited by the author), which suggested miRNAs may have “cross-kingdom” effects, is a great example of crank cherry-picking, since the evidence suggesting the finding may be an artifact is, of course, not mentioned. And we shouldn’t be surprised; it would be a pretty extraordinary hole in our defenses if other organisms could so easily modify our gene expression.
One of the great limitations of gene therapy has been that it’s extremely difficult to introduce genes, or specifically regulate them, with external vectors. If it were as simple as feeding us RNA, that would be something. For better or worse (likely better), your body is extremely resistant to other organisms tinkering with its DNA or cellular machinery.
Ok, but then you say, “Hey, that’s Natural News; we know they’re morons.” Fine, how about Claire Cummings in Common Dreams, panic-posting this week about the GMO threat to our water supply? It’s great evidence that “progressive” is no insulation against “anti-science”:

Today is World Water Day. The United Nations has set aside one day a year to focus the world’s attention on the importance of fresh water. And rightly so, as we are way behind in our efforts to protect both the quantity and quality of the water our growing world needs today.

And now, there is a new form of water pollution: recombinant genes that are conferring antibiotic resistance on the bacteria in the water.
Researchers in China have found recombinant drug resistant DNA, molecules that are part of the manufacturing of genetically modified organisms, in every river they tested.
Genetically engineered organisms are manufactured using antibiotic resistant genes. And these bacteria are now exchanging their genetic information with the wild bacteria in rivers. As the study points out, bacteria already present in urban water systems provides “advantageous breeding conditions for the(se) microbes.”
Antibiotic resistance is perhaps the number one threat to public health today.

Transgenic pollution is already common in agriculture. U.C. Berkeley Professor Ignacio Chapela was the first scientist to identify the presence of genetically engineered maize in local maize varieties in Mexico. He is an authority on transgenic gene flow. He says it is alarming that “DNA from transgenic organisms have escaped to become an integral component of the genome of free-living bacteria in rivers.” He adds that “the transgenic DNA studied so far in these bacteria will confer antibiotic resistance on other organisms, making many different species resistant to the antibiotics we use to protect ourselves from infections.”

Our expensive attempts to filter and fight chemicals with other chemicals are only partially effective. Our attempts to regulate recombinant DNA technology has failed to prevent gene pollution. The only way to assure a sustainable source of clean water is to understand water for what it is: a living system of biotic communities, not a commodity. It is a living thing and as such it deserves our respect, as does the human right to have abundant fresh clean water for life.

You heard it: now they’re making up a whole new category of pollution, “gene pollution.”
Let’s go back to some of the basic science here, so again we can see just how silly and uninformed these Chicken Littles are. When molecular biologists wish to produce large quantities of a DNA sequence or a protein, they usually insert the sequence into an easy-to-grow organism like E. coli, yeast, or some other cell, and then have the biological machinery of those cells produce it for us. This is one of the simplest forms of genetic modification, and we use it for everything from making plasmid DNA in the lab to producing recombinant human insulin for diabetics. To make sure your organism is making your product of interest, you include a gene that encodes resistance to an antibiotic (in bacteria, most commonly ampicillin), so that when you grow your bug with that antibiotic in the mix, you can be sure the only cells growing are the ones working for you. Other resistance genes we use are often for antibiotics we don’t use systemically in humans, like hygromycin, or neomycin, which is nephrotoxic if injected (but also poorly absorbed).
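The selection step described above is easy to sketch as a toy simulation (hypothetical and heavily simplified; the names and data structure are mine, not a real lab protocol): only cells that took up the plasmid, and therefore carry the resistance marker, survive growth on the antibiotic, so every surviving colony is working for you.

```python
# Toy sketch of antibiotic selection after a plasmid transformation
# (illustrative only; cell records and names are hypothetical).
# The plasmid carries both the gene of interest and a resistance
# marker (e.g. beta-lactamase for ampicillin), so plating on the
# antibiotic kills every cell that failed to take up the plasmid.

def grow_on_antibiotic(cells, antibiotic):
    """Keep only the cells whose plasmid confers resistance."""
    return [c for c in cells if antibiotic in c.get("resistance", set())]

culture = [
    {"name": "transformed", "resistance": {"ampicillin"}, "insert": "insulin"},
    {"name": "untransformed", "resistance": set()},
    {"name": "untransformed", "resistance": set()},
]

survivors = grow_on_antibiotic(culture, "ampicillin")
print([c["name"] for c in survivors])  # ['transformed']
```

Note that the marker does no work beyond this filtering step; it’s a lab convenience, not a superpower conferred on the bug.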
“That’s terrible!”, you say, “how could we teach so many bacteria to be resistant to antibiotics! Surely this will kill us all!”
Um, no. For one, the resistance genes we use aren’t novel or made de novo by humans; they existed before a single human was ever treated with an antibiotic. The first antibiotic discovered, penicillin, is a natural product, an ancient agent in an ongoing war between microorganisms. The antidote to penicillin and related molecules was actually discovered at about the same time as penicillin itself. Beta-lactamase, which breaks open the structure of the penicillins and abolishes their antibiotic effect, was around long before humans figured out how to harness antibiotics for our own purposes. The gene, which we clone into plasmids to make our GMO bacteria work for us, came from nature too. Now, if we were growing bacteria in vancomycin or linezolid, yeah, I’d be pissed, but that’s not what’s happening. And even though we still use older penicillins clinically, it’s with full knowledge that resistance has been around for decades, and they are used for infections that we know never become resistant to these drugs, like group B strep (or syphilis). The war over penicillin is over. We lost. Any bug that’s going to become resistant to penicillin already is.
The antibiotic resistance that plagues our ICUs and hospitals doesn’t come from GMOs being taught to fight ampicillin; it comes from overuse of more powerful antibiotics in humans. The genes providing resistance even to beta-lactamase-resistant antibiotics like the carbapenems or methicillin are the result of a more classic form of genetic modification: natural selection.
So what is the risk to humans from the DNA encoding a wimpy beta-lactamase or whatever being detected in water? Zilch. Nada. Zip.
The paranoia over recombinant DNA has persisted for decades despite there being no rational basis for a threat to humans or other living things. The continued panic over rDNA is a sign that the GMO paranoids get their science from bad movies, not from textbooks or any serious knowledge of the risks and benefits of this technology. rDNA is why we have an unlimited supply of insulin; it’s how we have virtually all of our knowledge of molecular biology; it’s even how we understand how things like antibiotic resistance work. It’s been around since the ’70s, and how many times have you heard of it actually hurting a person?
This is the state of the argument over genetically modified organisms. To the uninitiated, this stuff sounds like it might be kind of scary, but with any real understanding of the molecular mechanisms of these technologies, the plausibility of the risk drops to zero. Sadly, Goodall has not only shown a pretty poor level of scholarship with this new book, but has also fallen in with cranks promoting implausible risks of this biotechnology. It’s unfortunate, because she deserves respect for her previous work as an environmentalist and a conservationist. This is what is so annoying about anti-GMO paranoia: it makes environmentalists look like idiots, and it distracts from actual threats to the environment with invented threats and irrational fears of biotech. I’m sure I’ll now be accused of being in the pocket of big ag, as I am in every thread on GMOs, but I assure you, I have no financial interest in, or any dealings with, these companies, ever. I’m irritated with the anti-GMO movement because it’s an embarrassment. It’s Luddism and ignorance masquerading as environmentalism. It’s bad biology. It’s the progressive equivalent of creationism or global-warming denial. It’s classic anti-science, and we shouldn’t tolerate it.

Fixing the Chargemaster Problem for the Uninsured

For those disturbed by the evils of the hospital chargemaster as exposed by Brill’s piece in Time, Uwe E. Reinhardt’s proposed solution is a must-read.
While hospitals are never going to voluntarily charge the uninsured the same rate they charge Medicare (and will probably be less forgiving the more they think they can get out of you), that’s no reason we can’t force them to with state law. Apparently that’s what Reinhardt had them do in New Jersey:

In the fall of 2007, Gov. Jon Corzine of New Jersey appointed me as chairman of his New Jersey Commission on Rationalizing Health Care Resources. On a ride to the airport at that time I learned that the driver and his family did not have health insurance. The driver’s 3-year-old boy had had pus coming out of a swollen eye the week before, and the bill for one test and the prescription of a cream at the emergency room of the local hospital came to more than $1,000.
By circuitous routes I managed to get that bill reduced to $80; but I did not leave it at that. As chairman of the commission, I put hospital pricing for the uninsured on the commission’s agenda.
After some deliberation, the commission recommended initially that the New Jersey government limit the maximum prices that hospitals can charge an uninsured state resident to what private insurers pay for the services in question. But because the price of any given service paid hospitals or doctors by a private insurer in New Jersey can vary by a factor of three or more across the state (see Chapter 6 of the commission’s final report), the commission eventually recommended as a more practical approach to peg the maximum allowable prices charged uninsured state residents to what Medicare pays (see Chapter 11 of the report).
Five months after the commission filed its final report, Governor Corzine introduced and New Jersey’s State Assembly passed Assembly Bill No. 2609. It limits the maximum allowable price that can be charged to uninsured New Jersey residents with incomes up to 500 percent of the federal poverty level to what Medicare pays plus 15 percent, terms the governor’s office had negotiated with New Jersey’s hospital industry.

Reinhardt also makes clear, as I did in my original piece, that the problem of excess cost is not the chargemaster or hospital profits, which are not so extraordinary (at least compared to excess drug costs, insurance administration, inefficiently delivered services, unnecessary services, etc.). But the injustice of the uninsured facing these inflated bills, which are designed for negotiating with large payers like health insurance companies, should be addressed. You can’t bleed a radish, and hospitals should stop trying to when it comes to the uninsured. Since they won’t without government encouragement, such legislation should be considered at the state and national levels.
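The pricing rule in Assembly Bill No. 2609, as Reinhardt describes it, is simple arithmetic: for uninsured residents under the income threshold, the allowable charge is capped at the Medicare rate plus 15 percent. A minimal sketch of that rule (the dollar amounts and poverty-level figure below are made up for illustration):

```python
# Sketch of the A-2609 pricing cap described above (illustrative;
# the dollar amounts and FPL figure here are hypothetical, not the
# actual federal poverty guidelines for any year).

def max_allowable_charge(chargemaster_price, medicare_rate,
                         income, fpl,
                         income_cap_multiple=5.0, markup=0.15):
    """Cap the bill at Medicare + 15% for qualifying uninsured patients."""
    if income <= income_cap_multiple * fpl:   # within 500% of the FPL
        cap = medicare_rate * (1 + markup)
        return min(chargemaster_price, cap)
    return chargemaster_price                 # no cap applies

# Hypothetical ER visit: $1,000 chargemaster price, $200 Medicare rate.
print(max_allowable_charge(1000.0, 200.0, income=40000, fpl=12000))
# caps the $1,000 bill at ~$230
```

The point is how mechanical the fix is: no price controls on insurers, just a ceiling pegged to a rate the hospital already accepts from Medicare.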

New homebirth statistics show it's way too dangerous, and Mike Shermer on liberal denialism

Two links today for denialism blog readers, both pretty thought-provoking. The first, from Amy Tuteur, is on the newly released statistics on homebirth in Oregon. It seems her crusade to have midwives share their mortality data is justified: when they were forced to release the data in Oregon, planned homebirth was about 7 to 10 times more likely to result in neonatal mortality than planned hospital birth.
I’m sure Tuteur won’t mind me stealing her figure and showing it here (the original source of the data is Judith Rooks’s testimony):

Oregon homebirth neonatal mortality statistics, from the Skeptical OB.

Armed with data such as these, obstetricians and midwives need to make it a point of discussion that out-of-hospital births carry a dramatically higher neonatal mortality, and that outcomes are worse with midwives who lack nursing training (DEMs, or direct-entry midwives). It’s the mother’s body and her decision, but this information should be crucial in informing women whether or not to take the risk. It also reflects only neonatal mortality; one can assume it speaks to higher rates of morbidity as well, as longer distances and poorer recognition of fetal distress and complications will lead to worse outcomes even when the child survives. It should be noted that these data are also consistent with nationwide CDC data on homebirth DEMs, and are actually better than the midwife data for some states, like Colorado.
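The comparison behind figures like Tuteur’s is a plain relative risk: deaths per birth in each group, then the ratio. A minimal sketch with made-up counts (these are not the Oregon numbers, which are in Rooks’s testimony):

```python
# Relative risk of neonatal mortality between two groups of births.
# The counts below are hypothetical, for illustration only; see
# Judith Rooks's testimony for the actual Oregon data.

def relative_risk(deaths_a, births_a, deaths_b, births_b):
    """Ratio of the mortality rate in group A to the rate in group B."""
    rate_a = deaths_a / births_a
    rate_b = deaths_b / births_b
    return rate_a / rate_b

# E.g., 8 deaths per 1,000 planned homebirths vs. 1 per 1,000
# planned hospital births:
print(relative_risk(8, 1000, 1, 1000))  # ~8.0
```

A ratio like this says nothing about absolute risk, which is why the discussion above stresses informed choice rather than a ban: even an 8-fold increase on a small baseline is a risk some women may knowingly accept.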
The second article worth pointing out today (even though it’s old) is Michael Shermer’s piece in Scientific American on the liberal war on science. Regular readers know I believe there isn’t really a difference between left- and right-wing ideology in acceptance of science; each side just rejects different findings that collide with its ideology.

The left’s war on science begins with the stats cited above: 41 percent of Democrats are young Earth creationists, and 19 percent doubt that Earth is getting warmer. These numbers do not exactly bolster the common belief that liberals are the people of the science book. In addition, consider “cognitive creationists”—whom I define as those who accept the theory of evolution for the human body but not the brain. As Harvard University psychologist Steven Pinker documents in his 2002 book The Blank Slate (Viking), belief in the mind as a tabula rasa shaped almost entirely by culture has been mostly the mantra of liberal intellectuals, who in the 1980s and 1990s led an all-out assault against evolutionary psychology via such Orwellian-named far-left groups as Science for the People, for proffering the now uncontroversial idea that human thought and behavior are at least partially the result of our evolutionary past.
There is more, and recent, antiscience fare from far-left progressives, documented in the 2012 book Science Left Behind (PublicAffairs) by science journalists Alex B. Berezow and Hank Campbell, who note that “if it is true that conservatives have declared a war on science, then progressives have declared Armageddon.” On energy issues, for example, the authors contend that progressive liberals tend to be antinuclear because of the waste-disposal problem, anti–fossil fuels because of global warming, antihydroelectric because dams disrupt river ecosystems, and anti–wind power because of avian fatalities. The underlying current is “everything natural is good” and “everything unnatural is bad.”
Whereas conservatives obsess over the purity and sanctity of sex, the left’s sacred values seem fixated on the environment, leading to an almost religious fervor over the purity and sanctity of air, water and especially food.

I’m worried that Shermer has confused liberal Luddism with denialism, and I would argue some anti-technology skepticism is healthy and warranted. While I agree that the anti-GMO movement delves into denialist waters with regularity, these are not good examples to have chosen. One needs to be cautious with technology, and it’s a faith-based assumption that technology can solve all ills. I’m with Evgeny Morozov on this one: the assumption that there is (or should be) a technological fix for every problem has become almost a religious belief system. Appropriately including the potential perils of a technology in its cost-benefit analysis is not a sign of being anti-science. Even overblowing specific risks because of individual values isn’t really anti-science either. It might be anti-human to put birds before human needs, as with wind turbines, but no one is denying that wind turbines generate electricity. And while liberals may be overestimating the risk of, say, nuclear waste generation relative to carbon waste generation (guess which is a planet-wide problem!), that doesn’t mean they think nuclear power doesn’t work or isn’t real. They just have an arguably skewed risk perception, which is an established problem in cases of ideological conflict with science or technology. There is also reasonable debate to be had over the business practices of corporations (Monsanto, in his example), which need and deserve strong citizen push-back and regulation to prevent anti-competitive or abusive behavior.
Anti-science requires the specific rejection of data, the scientific method, or strongly supported scientific theory because of an ideological conflict, not because one possesses superior data or new information. I don’t think Shermer actually listed very good examples of this among liberals. If you’re going to talk about GMO denialism, don’t complain about people fighting with Monsanto; talk about how anti-GMO advocates make up crazy claims about the foods themselves (see Natural News, for example), such as that they cause autism or cancer. And even then it’s difficult to call this a completely liberal form of denialism, as Kahan’s work again shows a pretty even ideological divide on GMOs.
I agree that liberals are susceptible to anti-science, and the mechanism is the same: ideological conflict with scientific results. However, the liberal tendency toward skepticism of technology is healthy in moderation, and anti-corporatism is not automatically anti-science. In an essay striving to say we must be less ideological and more pragmatic, Shermer has wrongly lumped technological skepticism and anti-corporatism in with science denial.

Lead Industry & the Deck of Cards

Helen Epstein has an interesting review of Lead Wars: The Politics of Science and the Fate of America’s Children by Gerald Markowitz and David Rosner, in the current New York Review of Books. The review is worth reading to better understand the public policy problem of lead in products and the environment. But I cannot help but point out that the article could be used to provide more footnotes to the Denialists’ Deck of Cards:

… The lead companies also paid scientists who produced flawed studies casting doubt on the link between lead exposure and child health problems. When University of Pittsburgh professor Herbert Needleman first showed that even children with relatively modest lead levels tended to have lower intelligence and more behavioral problems than their lead-free peers, some of these industry-backed researchers claimed that his methods were sloppy and accused him of scientific misconduct (he has since been exonerated).
The companies also hired a public relations firm to influence stories in The Wall Street Journal and other conservative news outlets, which characterized Needleman as part of a leftist plot to increase government spending on housing and other social programs…

The Good, Not So Good, and Long View on Bmail

Denialism blog readers, especially those at academic institutions that have outsourced email or are considering doing so, may be interested in my essay on UC Berkeley’s migration to Gmail.  This is cross-posted from the Berkeley Blog.
Many campuses have decided to outsource email and other services to “cloud” providers.  Berkeley has joined in by migrating students and faculty to bMail, operated by Google.  In doing so, it has raised some anxiety about privacy and autonomy in communications.  In this post, I outline some advantages of our outsourcing to Google, some disadvantages, and how we might improve upon our IT outsourcing strategy, especially for sensitive or especially valuable materials.
Why outsourcing matters
Many of us welcome possible alternatives to CalMail, which experienced an embarrassing, protracted outage in fall 2011.  Many of us welcomed the idea of migrating to Gmail, because we use it personally, have found it user-friendly and reliable, and because it is provided by a hip company that all of our students want to work for.
But did we really look before we leaped?  Did we really consider the special context of higher education, one that requires us to protect both students and faculty from outside meddling and university-specific security risks?  Before deciding to outsource, we have to be sure that there are service providers that understand our obligations, norms, and the academic context.
In part because of the university’s particular role, our email is important and can be unusually sensitive to a variety of threats.  Researchers at Berkeley are conducting clinical trials with confidential data and patient information.  We are developing new drugs and technologies that are extremely valuable.  Some of us perform research that is classified, export-controlled, or otherwise could, if misused, cause great harm.  Some of us consult to Fortune 500 companies, serve as lawyers with duties of confidentiality, or serve as advisors to the government.  Some of us are the targets of extremist activists who try to embarrass us or harm us physically.  Some of us are critical of companies and repressive governments.  These entities are motivated to find out the identities of our correspondents and our strategic thinking, through either legal or technical means.  And not least, our email routinely contains communications with students about their progress, foibles, and other sensitive information, including information protected by specific privacy laws, such as the Federal Educational Rights and Privacy Act (FERPA). We have both legal and ethical duties to protect this information.
Our CalMail operators know these things, and as I understand it, they have been very careful in protecting the privacy of campus communications. Outsourcing providers such as Google however, may be far less likely to be familiar with our specific duties, norms, and protocols, or to have in place procedures to implement them. Outsource providers may be motivated to provide services that they can develop and serve “at scale” and that do not require special protocols. As described below, this seems to have been the case with Google’s contracts with academic institutions.
Finally, communications platforms are powerful.  They are the focus of government surveillance and control because those who control communications can influence how people think and how they organize.  Universities have historically experienced periodic pressures to limit research, publication, teaching, and speech. Without communications confidentiality, integrity, and availability, the quality of our freedom and the role we play in society suffers.  And thus the decision to entrust the valuable thoughts of our community to outsiders requires some careful consideration.
The Good
There are some clear benefits to outsourcing to Google.  They include:

  • An efficient, user-friendly communications system with a lot of storage.  The integration of Google Apps, such as Calendar, is particularly appealing, given the experience we have had with CalAgenda.  Google Drive is a pleasure compared to the awkward AFS.
  • Our communications may in some senses be more securely stored in the hands of Google.  Google has some of the best information security experts in the world.  They are experienced in addressing sophisticated, state-actor-level attacks against their network.  To its credit, Google has been more transparent about these attacks than other companies.
  • Although it is not implemented at Berkeley, Google offers two-factor authentication.  This is an important security benefit not offered by CalMail that could reduce the risk that our accounts are taken over by others.  Those of us using sensitive data, or who are at risk of retaliation by governments, hackers, activists, etc., should use two-factor authentication.
  • As a provider of services to the general public, Google is subject to a key federal communications privacy law.  This law imposes basic obligations on Google when data are sought by the government or private parties.  It is not clear that this law binds the operations of colleges and universities generally.  However, this factor is not very important with respect to Berkeley’s adoption of bMail, as we have adopted a strong electronic communications policy protecting emails systemwide.
  • Google recently announced that it will require government agents to obtain a probable cause warrant for user content.  This is important, because other providers release “stale” (that is, over 180 days old) data to government investigators with a mere subpoena.  A subpoena is very easy to obtain, whereas a probable cause warrant standard requires the involvement of a judge, an important check against overzealous law enforcement.  Google’s position protects us from the problem that our email archives can be obtained by many government officials who need only fill out and stamp a one-page form.

The Not So Good
Still, there are many reasons why outsourcing in general, and outsourcing to Google in particular, creates new risks.  While our IT professionals did an in-depth analysis of Google and Microsoft, it seems that the decision to outsource was made before the reality of the alternatives available to us was evaluated.

  • We must consider issues around contract negotiations and whether the services provided fulfill the requirements I set forth above.  In initial negotiations, Google treated Berkeley IT professionals like ordinary consumers, presenting take-it-or-leave-it contracts.  Google was resistant to, though it eventually accepted, assuming obligations under FERPA, a critical concession for colleges and universities.  Google also used a gag clause in its negotiations with schools.  This made it difficult for our IT professionals to learn from other campuses about the nuances of outsourcing to Google.  As a result, much of what we know about how other campuses protected the privacy of their students and faculty is rumor that cannot be invoked, as doing so implicitly violates the gag clause.
  • On the most basic level, we should pause to consider that both companies the campus considered for outsourcing are the subject of 20-year consent decrees for engaging in deceptive practices surrounding privacy and/or security.  Google in particular, with its maximum transparency ideology, does not seem to have a corporate culture that appreciates the special context of professional secrecy.  The company is not only a fountainhead of privacy gaffes but also benefits from shaping users’ activities towards greater disclosure.
  • As discussed above, UC and Berkeley routinely handle very sensitive information, and many of us on campus have special obligations or particularized vulnerabilities.  Companies with valuable secrets do not place crown jewels in clouds.  When they do outsource, they typically buy “single-tenant” clouds, computers where a single client’s data resides on the machine.  Google’s service is a “multi-tenant” cloud, and thus Berkeley data will only be separated from others on a logical level.  Despite the contract negotiation, Google’s is a consumer-level service and our contract has features of that type of service.  There is a rumor that one state school addressed this issue by negotiating to be placed in Google’s government-grade cloud service, but because of the secrecy surrounding Google’s negotiations, I cannot verify this.
  • Third parties are a threat to communications privacy, but so are first parties: the communications providers themselves.  While we may perceive cloud services as being akin to a locker that the user secures, in reality these are services where the provider can open the door to the locker.  In some cases there is a technical justification for this; in other cases, companies have a business justification, such as targeting advertising or engaging in analysis of user data.
  • It is rumored that some campuses understood this risk, and negotiated a “no data mining clause.”  This would guarantee that Google would not use techniques to infer knowledge about users’ relationships with others or the content of messages.  Despite our special responsibilities to students to protect their information and our research and other requirements, we lack this guarantee.
  • Despite the good news about Google’s warrant requirement, we still need to consider intelligence agency monitoring of our data.  Any time data leaves the country, our government (and probably others) captures it at the landing stations and at repeater stations deep under the ocean.  And the bad news is our contract does not keep Berkeley data in the U.S.  Even while stored in the country, there are risks.  For instance, the government could issue a national security letter to Google, demanding access to hundreds or even thousands of accounts while prohibiting notice to university counsel.  Prior to outsourcing, those demands would have to be delivered to university officials because our IT professionals had the data.  Again, to its credit, Google is one of the most forthcoming companies on the national security letter issue, and its reporting on the topic indicates that some accounts have been subject to such requests.
  • Google represented that its service meets the SAS 70 standard in response to security concerns, but it is not clear to me that this certification is even relevant.  SAS 70 speaks to the internal controls of an organization, and specifically to data integrity in the financial services context.  The University's concerns are broader (confidentiality and availability are key elements) and apply to both external and internal controls, as well as to the University's rights to monitor and verify.  There are notable examples of SAS 70 compliant cloud services with extreme security lapses, such as Epsilon (confidentiality) and AWS (availability).  Moreover, SAS 70 allows the company, which is the client of the auditor, and the auditor itself to agree upon which controls are to be assured.
  • Google will have few if any incentives to develop privacy-enhancing technologies for our communications platform, such as a workable encryption infrastructure.  As it stands, the contract creates no incentives or requirements for development of such technologies, and in fact, such development runs counter to Google’s interests.
  • In the end, CalMail was being very effectively maintained by only a few employees. It is not clear to me that an outsourced solution—which, in order for the security and other issues to be managed properly, requires Berkeley personnel to interface with the system and with Google—is necessarily less costly. This is especially concerning in light of the fact that we appear to have lost the connection to IT personnel who understand the sensitivity of the data we handle, and moved to a much more consumer-oriented product.

The Long View
Looking ahead, we should carefully consider how we could assume the best posture for outsourcing. Instead of experimenting with Google, we would be better served by an evaluation of the campus needs that includes regulatory and ethical obligations and that captures the norms and values of our mission.  Provider selection should be broader than choosing between Google and Microsoft.
As a first step, we should charge our IT leadership with forming formal alliances with other institutions to jointly share information and negotiate with providers.  Google's gag provision harmed our ability both to recognize risks and to address them.
We need to be less infatuated with “the cloud,” which to some extent is a marketing fad.  Many of the putative benefits of the cloud are disclaimed in these services’ terms of service.  For instance, a 2009 survey of 31 contracts found that, “…In effect, a number of providers of consumer-oriented Cloud services appear to disclaim the specific fitness of their services for the purpose(s) for which many customers will have specifically signed up to use them.”  The same researchers found that providers’ business models were related to the generosity of terms.  This militates towards providers that charge some fee for service as opposed to “free” ones that monetize user data.
We should charge our IT professionals with the duty of documenting problems with outsourced services.  To more objectively understand the cloud phenomenon, we should track the real costs associated with outsourcing, including outages, the costs of managing the relationship with Google, and the technical problems that users experience.  Outsourcing is not costless.  We could learn that employees have simply been transferred from the operation of CalMail to the management of bMail.  We should not assume that outsourced systems mean fewer people—they may appropriately require meaningful staffing to fulfill our needs.  As the expiration date of the systemwide Google contract approaches in June 2015, these metrics will help us make an economical decision.
Finally, there are technical approaches that, if effective, could blunt, but not completely eliminate, the privacy problems created by cloud services.  Encryption tools, such as CipherCloud, exist to mask data from Google itself.  This can help hide the content of messages, reduce data mining risks from Google, and force the government to come to Berkeley officials to gain access to content.  The emergence of these services indicates that there is a shared concern about storing even everyday emails in cloud services.  These services cost real money, but if we continue to think we can save money by handing over our communications systems to data mining companies, we are likely to end up paying in other ways.

Bittman Changes His Tune on Sugar Study, While Mother Jones Doubles Down

There’s been an interesting edit in Mark Bittman’s sugar post: he has changed his tune on the PLoS One sugar study, and now acknowledges that obesity is important too. That was big of him; it is, after all, the most important factor. Maybe my angry letter to the editor had an effect, but he has grudgingly changed this statement:

In other words, according to this study, obesity doesn’t cause diabetes: sugar does.

To:

In other words, according to this study, it’s not just obesity that can cause diabetes: sugar can cause it, too, irrespective of obesity. And obesity does not always lead to diabetes.

The second sentence is totally unnecessary. Of course obesity doesn’t always cause diabetes, or heart attacks, or whatever. Nor do cigarettes always cause lung cancer. Nor does sugar intake always lead to obesity or diabetes. But obesity is the primary cause of type 2 diabetes, just as cigarettes are the primary cause of lung cancer, and who knows what sugar is doing.
Mother Jones, sadly, has decided to double down, calling the PLoS One study the “Best. Diet. Study. Ever.” It’s not, of course. It’s merely interesting and suggestive of an effect; it is not nearly proof of causation. They also laud the Mediterranean diet study (maybe it was supposed to be the Best. Study. Ever.?). However, they again show they’re not actually reading these papers, because if you read our coverage of the study you’d know it didn’t actually study the Mediterranean diet! In a case of the blind leading the blind, they quote Bittman’s misinformed piece on the Mediterranean diet study:

Let’s cut to the chase: The diet that seems so valuable is our old friend the “Mediterranean” diet (not that many Mediterraneans actually eat this way). It’s as straightforward as it is un-American: low in red meat, low in sugar and hyperprocessed carbs, low in junk. High in just about everything else — healthful fat (especially olive oil), vegetables, fruits, legumes and what the people who designed the diet determined to be beneficial, or at least less-harmful, animal products; in this case fish, eggs and low-fat dairy.
This is real food, delicious food, mostly easy-to-make food. You can eat this way without guilt and be happy and healthy. Unless you’re committed to a diet big on junk and red meat, or you don’t like to cook, there is little downside

Except for one critical fact. The subjects assigned to the Mediterranean diet did not have lower consumption of red meat, sugar and hyperprocessed carbs, or other junk! If you look at the supplementary data, you see that the subjects took the positive recommendations of the diet (olive oil, nuts, fish), and more or less ignored the negative recommendations (less meat, less spreadable fats/butter, less baked goods). If you look at figures like supplementary S6, the study groups did not change their diets in these categories relative to the controls, so the effects on their cardiovascular events relative to controls aren’t likely to be from the diet recommendations. When there were changes relative to baseline, even when statistically significant, the changes were tiny.
The participants in this study actually had a very high fat intake, about 35-40% of calories across all groups. And while there was a statistically significant decrease in cardiovascular events like stroke and heart attack in both study groups (Med + olive oil, Med + nuts), only one arm of the so-called Mediterranean diet (Med + olive oil) had a non-significant decrease in mortality, while the other arm (Med + nuts) had a curve similar to the “do nothing” control. My interpretation of this, and it’s fine to be critical of it, is that this isn’t that meaningful. If anything, the only variable correlating with a decrease in mortality was excess olive oil consumption (> 4 tbsp/day), not the Mediterranean diet. Either that, or eating nuts cancels out the beneficial effects of the diet on mortality.
This is why people always dump on nutrition science when it appears to change every 10 years. Results get overblown, and when the inevitable regression toward the mean occurs, we get blamed for it. The reality is that press coverage of science is extremely poor, offering little critical analysis when presenting results to the audience.

Don't Switch to the Mediterranean Diet Just Yet

The New York Times made big news with reports that the New England Journal of Medicine study on the beneficial effects of the Mediterranean diet showed it could dramatically reduce the rates of heart attack and stroke. But this study has major issues that bear directly on whether or not physicians should make new recommendations about dietary intake of fats like olive oil, or whether patients should adopt the diet as a whole. Let’s talk about the trial.
First of all, this was a randomized, controlled trial in which 7447 men and women between 55 and 80 years of age, all with major risk factors for cardiovascular disease (diabetes, obesity, smoking, hyperlipidemia, etc.), were divided evenly among three groups: one received recommendations on a “low fat” diet, and two received extensive counseling on the Mediterranean diet combined with either a free, ready supply of extra-virgin olive oil or a variety of nuts.
The primary end point was the combined number of heart attacks, strokes, and deaths; over the course of about 5 years of study, 288 such events occurred. If you break these down by group, 96 occurred in the “Mediterranean diet with extra-virgin olive oil” group (3.8% of that group), 83 in the “Mediterranean diet with nuts” group (3.4%), and 109 in the control group (4.4%).
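As a sanity check, the crude event rates can be recomputed from those counts. A minimal sketch, assuming the per-arm enrollment figures (2543, 2454, 2450) from the published NEJM report, which are not stated in the text:

```python
# Crude primary-endpoint rates in the three trial arms.
# Event counts (96, 83, 109) are from the text; the per-arm enrollment
# figures are an assumption taken from the published trial report.
arms = {
    "Med + olive oil": (96, 2543),
    "Med + nuts":      (83, 2454),
    "Low-fat control": (109, 2450),
}

rates = {name: events / n * 100 for name, (events, n) in arms.items()}
for name, rate in rates.items():
    print(f"{name}: {rate:.1f}%")  # 3.8%, 3.4%, 4.4% -- matching the text

# Note: the crude relative reduction vs. control is only about 15%; the
# widely quoted ~30% figure comes from the trial's multivariable-adjusted
# hazard ratios, not from the raw event rates.
rrr = 1 - rates["Med + olive oil"] / rates["Low-fat control"]
print(f"Crude relative risk reduction: {rrr:.0%}")
```

The gap between the ~15% crude reduction and the headline "30%" is itself worth noticing when evaluating the press coverage.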
But before anyone takes these results to heart, we have to recognize major flaws in the study design and in the populations that comprised these three groups. First, the rate of primary events was surprisingly low for such a high-risk group, and because the study was stopped early, absurdly for “ethical reasons”, the number of events is quite low. For the life of me I can’t think of what that ethics committee was thinking; these results are not that dramatic. Further, the “low fat” diet was very ineffectually enforced and counseled, to the point that midway through the study the authors revised the protocol to include more counseling sessions. Evaluating the supplementary data, specifically table S7, you see this control group was in no way on a low fat diet: they were still consuming 37-39% of their calories from fat! A “low fat” diet should have 10-15% of calories from fat, so basically, everyone ignored the diet. Further, all of the groups consumed a similar amount of total fat, monounsaturated and polyunsaturated fats, and even used olive oil as their main culinary fat. All groups consumed (see table S5) a similar amount of red meat (forbidden from all diets), butter, soda, baked goods, etc. The places where there seemed to be more dramatic differences were olive oil consumption (about 50% of controls had > 4 tbsp a day, vs. 80% of the “nuts” group and about 90% of the “extra-virgin olive oil” group), wine consumption (modest, at about 30% in the diet groups vs. 25% in the “low fat” control), and nuts (crazy high at 90% in the nuts group, vs. 40% and 20% in “olive oil” and “low fat”), as well as modest elevations in fish, fruit, and vegetable intake in the Mediterranean groups. Finally, some of these differences, such as the consumption of alcohol, fruits, and vegetables, were higher in the Mediterranean groups at baseline (notice no mean change in table S6), so the groups may have started out in different places.
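The “percent of calories from fat” figures follow from the standard conversion of 9 kcal per gram of fat. A minimal sketch with hypothetical intake numbers (the actual gram values are not given in the text):

```python
# Percent of calories from fat = (fat grams * 9 kcal/g) / total kcal * 100.
# The intake values below are hypothetical, chosen only to illustrate why
# ~90 g of fat on a ~2200 kcal diet is nowhere near "low fat".
KCAL_PER_GRAM_FAT = 9

def pct_calories_from_fat(fat_grams: float, total_kcal: float) -> float:
    return fat_grams * KCAL_PER_GRAM_FAT / total_kcal * 100

# A control-group participant at roughly the 37% level reported in table S7:
print(round(pct_calories_from_fat(90, 2200), 1))   # ~36.8

# What a genuinely "low fat" diet (10-15% of calories) would require
# at the same total intake:
print(round(pct_calories_from_fat(30, 2200), 1))   # ~12.3
```

In other words, the control group would have needed to cut fat intake by roughly two-thirds to meet a real low-fat target.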
What does this mean? First of all, we have to reject the notion that this study compared the Mediterranean diet to a “low fat” diet. This was a study of basically no diet intervention versus increasing your intake of fish, nuts, and/or olive oil. There didn’t appear to be compliance with the negative suggestions of the Mediterranean diet: to decrease red meat intake, baked goods, dairy, etc. The participants basically took the recommended items and increased them in their diets, but didn’t exclude any of the “discouraged” items. This is very interesting, but to call it the “Mediterranean diet” is misleading. In reality, it’s diet supplementation with olive oil, nuts, and fish.
Second, the final results, while they sound impressive (a 30% reduction in combined primary end points!), are actually not as important as some of the less-emphasized findings. For this we have to evaluate the secondary end point, which happens to be the one we really care about: all-cause mortality. They could not show a difference in mortality! So while you might be less likely to have a heart attack or stroke, you’re no less likely to die. This is why I’m so confused that they ended the study early; this is really the only end point that matters, and it was unchanged at the interval at which the ethics committee decided the study had to be stopped for efficacy. Why did they do this? The evidence is suggestive that with more participants, the Mediterranean diet + olive oil arm might have diverged a bit and shown a benefit compared to the do-nothing “low fat” control, but this didn’t reach significance.
What have we learned? Compared to other Spanish folks between the ages of 55 and 80, all with cardiovascular risk factors, those that added olive oil, nuts, and fish to their diet had fewer cardiovascular events, but no difference in their mortality compared with people that did nothing to change their diet.
Why did this make the front page of the New York Times? Let’s show a little bit more critical analysis of findings, and not just swallow the PR.