Thursday, October 21st, 2010
William Neuman’s article in today’s NY Times, Helping chickens go calmly to slaughter, raises interesting questions about how we produce poultry in the U.S. and the shift to more humane practices of meat production:
Two premium chicken producers, Bell & Evans in Pennsylvania and Mary’s Chickens in California, are preparing to switch to a system of killing their birds that they consider more humane. The new system uses carbon dioxide gas to gently render the birds unconscious before they are hung by their feet to have their throats slit, sparing them the potential suffering associated with conventional slaughter methods.
…Anglia Autoflow, the company that is building the knock-out systems for the two processors, calls the process “controlled atmosphere stunning” but Mr. Pitman said his company is considering the phrase “sedation stunning” for use on its packages. Also on the short-list: “humanely slaughtered,” “humanely processed” or “humanely handled.”
…Mr. Sechler said the system he chose, after years of research, was better than similar gas-stunning systems currently used in Europe. Those systems, he says, often deprive birds of oxygen too quickly, which may cause them to suffer. They are also designed to kill the birds rather than simply knock them out, something that Mr. Sechler is not comfortable with.
“I don’t want the public to say we gas our chickens,” he said.
Animal welfare ethicists and activists have long criticized the suffering animals endure during slaughter. So does this new approach help allay some of those concerns?
…The animal rights group People for the Ethical Treatment of Animals has been pushing chicken processors for years to switch to gas stunning systems, in part because it doesn’t believe that [commonly used method of] electrical stunning works.
How big an impact will these two companies have on total poultry production?
Bell & Evans said it would begin selling chickens slaughtered using the new technology in April. The company, which processes about 840,000 birds a week, distributes its chickens nationwide.
Mary’s, which distributes in several Western states, expects to install the technology in June. The company processes about 200,000 birds a week.
By comparison, a single plant run by a large processor like Tyson Foods may handle more than 1 million birds a week.
As Michael Pollan and others have demonstrated in earlier essays, maximizing food production at a minimal cost is a primary reason why the current mode of industrial agriculture evolved. Farmers will tell you that humane treatment adds cost to their products, making them more difficult to sell if customers only care about cheap food. What’s the prospect of winning hearts, minds, and stomachs here?
The gas technology is expensive. Each company said it would cost about $3 million to convert its operations, and more over time to run the systems. That makes it a hard sell in a commodity-oriented industry that relies on huge volumes and low costs to turn narrow margins into profits.
Mr. Sechler predicted that consumers would come to demand birds slaughtered in the new way, which would force the industry to gradually switch over.
Photo credit: antiguan_life
Wednesday, October 20th, 2010
Matt Nisbet has an excellent new post, Investing in Civic Education about Climate Change: What Should Be the Goals?, highlighting some of the next-generation approaches to helping people engage climate change.
Why don’t people engage climate change?
Wednesday, October 20th, 2010
In his latest blog post in Time Magazine, Bryan Walsh laments the fact that—six months after the Gulf oil spill—it appears no lessons have been learned:
…We all wanted to find the “lessons of the spill”—even while the oil was still flowing. (Look back at that first story I did—it was written during the first week of May, more than 2 months before BP’s blown well was capped.) But we haven’t gotten smarter since the spill. We’ve gotten stupider.
…It’s now six months to the day after the Deepwater Horizon exploded, and it’s safe to say that the BP spill will not be remembered as the modern green movement’s march on Washington. Climate legislation is dead in the Senate, and if the midterm polls are accurate, next year’s Congress will be even less inclined to act on global warming—or even believe it. President Obama—under constant pressure from the same Gulf Coast states that were drenched in oil—lifted his moratorium on deepwater drilling earlier this month, before the initial deadline of Nov. 30 and before investigations into the true cause of the accident were complete. The government response to the disaster, while heroic at times, was deeply problematic, with evidence that Washington kept the public in the dark for weeks about the true size of the spill. The response on the ground was marred by obstructionism on the part of BP, to the point where off-duty cops in Louisiana seemed to be acting as hired muscle for the oil company that—let’s not forget—was chiefly responsible for the spill in the first place. The legacy is a climate of distrust and paranoia in the Gulf—academic researchers and government scientists quarreling over underwater oil, conspiracy theories about BP burning sea animals, and anger along the Gulf Coast among those who feel they’ve been left behind, as the rest of the country has moved on.
Forget energy reform—the biggest change in the Gulf seems to be the flood of money from BP, as part of its $20 billion promise to “make this right,” as former CEO Tony Hayward put it.
…It’s not exactly a clean energy revolution.
None of this is surprising. It’s what I predicted at the beginning:
Thursday, October 14th, 2010
At the 2009 meeting of the American Geophysical Union, renowned climate scientist Richard Alley (Penn State) gave a keynote address, The Biggest Control Knob: Carbon dioxide in Earth’s Climate History, in which he used a variety of paleoclimatological proxy data to show how CO2 changes over much of Earth history have exerted a strong influence on global temperatures.
In this week’s issue of Science, Andrew Lacis and colleagues published an article, Atmospheric CO2: Principal control knob governing Earth’s temperature (abstract only; subscription required), following up on this theme. Unlike Alley’s talk, which mainly focused on the role of CO2, this team starts by going after water vapor and confronting a widely held perception that it is the dominant greenhouse gas:
It often is stated that water vapor is the chief greenhouse gas (GHG) in the atmosphere. For example, it has been asserted that “about 98% of the natural greenhouse effect is due to water vapour and stratiform clouds with CO2 contributing less than 2%”. If true, this would imply that changes in atmospheric CO2 are not important influences on the natural greenhouse capacity of Earth, and that the continuing increase in CO2 due to human activity is therefore not relevant to climate change. This misunderstanding is resolved through simple examination of the terrestrial greenhouse.
Water vapor is a main reason why the world has a pleasant and life-sustaining average surface temperature of about 15 degrees C. Based on Earth’s distance from the Sun, physics tells us that the planet should be about 0 degrees F (roughly -18 degrees C)—a giant snowball hurtling through space. The reason we are warmer than this is the natural envelope of greenhouse gases, including water vapor and CO2, which absorb longwave heat radiating from the surface. This warms the surface of the planet just like a thick blanket keeps your body heat near your skin on a cold night.
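That no-greenhouse baseline is a standard back-of-the-envelope calculation: set absorbed sunlight equal to emitted thermal radiation (the Stefan-Boltzmann balance) and solve for temperature. The solar constant and albedo below are common textbook values:

```python
# Effective (no-greenhouse) temperature of Earth from radiative balance:
#   absorbed sunlight per m^2 = emitted thermal radiation per m^2
#   S * (1 - albedo) / 4 = sigma * T^4
S = 1361.0        # solar constant, W/m^2 (textbook value)
albedo = 0.3      # fraction of incoming sunlight reflected back to space
sigma = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4

T_eff = (S * (1 - albedo) / (4 * sigma)) ** 0.25
print(f"{T_eff:.0f} K")            # about 255 K
print(f"{T_eff - 273.15:.1f} C")   # about -18.6 C, i.e. near 0 F
```

The roughly 33-degree gap between this value and the observed surface average is exactly the greenhouse effect the Lacis paper is apportioning among water vapor, clouds, and CO2.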
In round numbers, water vapor accounts for about 50% of Earth’s greenhouse effect, with clouds contributing 25%, CO2 20%, and the minor GHGs and aerosols accounting for the remaining 5%.
So water vapor and clouds make up about 75% of the greenhouse effect, which sounds like the definition of “the dominant greenhouse gas” to most of us. How does one show that CO2 really is more important than water vapor as a primary greenhouse gas driving temperature change when it looks like water is so important?
Wednesday, October 13th, 2010
In a fascinating new article in PLOS One (open access), Daniel Nettle asks why we see social gradients in preventative health behaviors:
People of lower socioeconomic position have been found to smoke more, exercise less, have poorer diets, comply less well with therapy, use medical services less, adopt fewer safety measures, ignore health advice more, and be less health-conscious overall, than their more affluent peers. Some of these behaviors can simply be put down to financial constraints, as healthy diets, for example, cost more than unhealthy ones, but socioeconomic gradients are found even where the health behaviors in question would cost nothing, ruling out income differences as the explanation.
Socioeconomic gradients in health behavior are not easily abolished by providing more information. Informational health campaigns tend to lead to greater voluntary behavior change in people of higher socio-economic position, and thus can actually increase socioeconomic inequalities in health, even whilst improving health overall. Thus, we are stuck with what we might call the exacerbatory dynamic of poverty: the people in society who face the greatest structural adversity, far from mitigating this by their lifestyles, behave in such ways as to make it worse, even when they are provided with the opportunity to do otherwise.
What are some of the possible explanations for this pattern, and are they sufficient?
Underlying socioeconomic differences in health behavior are differences in attitudinal and psychological variables. People of lower socioeconomic position have been found to be more pessimistic, have stronger beliefs in the influence of chance on health, and give a greater weighting to present over future outcomes, than people of higher socioeconomic position. These explanations seem clear.
However, they immediately raise the deeper question: why should pessimism, belief in chance, and short time perspective be found more in people of low socioeconomic position than those of high socioeconomic position? These deeper questions are at the level which behavioral ecologists call ultimate, as opposed to proximate, causation.
To develop more of an ultimate explanation, Nettle hypothesized that lower socioeconomic groups are exposed to greater environmental hazard, or at least perceive their lives as more hazardous, and that this, in turn, discourages healthy behavior.
To test this hypothesis, he developed a mathematical model of the probability of dying in a given year, which combines extrinsic risks that people cannot control with intrinsic risks that they can reduce through healthier behavior. People who take the time to pursue healthier options thus lower their mortality risk. There is a tradeoff, however: the more time people devote to healthy behavior, the less time is left over for leisure and everything else in life.
Overall survival is therefore a combination of all of these factors, which can easily be modeled by assuming a range of values for time spent on health vs. other activities to see what kinds of mortality outcomes arise.
Here are the interesting results he found…
Wednesday, October 13th, 2010
In the wake of Ryan Lizza’s provocative piece, As the World Burns, published in the New Yorker last week comes a renewed call by folks like Shellenberger and Nordhaus emphasizing the need to make clean energy cheap rather than dirty energy expensive.
See the latest in today’s NY Times.
Tuesday, October 12th, 2010
In 40 years, there will be about 3 billion additional people living on the Earth (~9.5 billion total). With all of these new folks, it’s easy to think about the added demands of energy, food, and water required to sustain their lifestyles. And in terms of climate warming, it’s hard to escape the fact that significantly greater energy consumption will lead to rising rates of carbon emissions, unless there’s a shift to decarbonize the economy.
In this week’s early Edition of the Proceedings of the National Academy of Sciences (open access), Brian O’Neill and colleagues note that emissions are not just controlled by the sheer size of the human population but also by important demographic changes.
For example, how might an aging or more urban population affect emissions? How about changes in household size? Modelers of carbon emissions don’t usually ask these kinds of questions, so the conventionally projected emissions might be off if these additional demographic details matter.
The researchers developed a global economic model (Population-Environment-Technology, or PET) in which they specified relationships between demographic factors like household size, age, and urban/rural residency and economic factors like the demand for consumer goods, wealth, and the supply of labor. Here’s a bit more on how this works:
In the PET model, households can affect emissions either directly through their consumption patterns or indirectly through their effects on economic growth in ways that up until now have not been explicitly accounted for in emissions models. The direct effect on emissions is represented by disaggregating household consumption for each household type into four categories of goods (energy, food, transport, and other) so that shifts in the composition of the population by household type produce shifts in the aggregate mix of goods demanded. Because different goods have different energy intensities of production, these shifts can lead to changes in emissions rates. To represent indirect effects on emissions through economic growth, the PET model explicitly accounts for the effect of (i) population growth rates on economic growth rates, (ii) age structure changes on labor supply, (iii) urbanization on labor productivity, and (iv) anticipated demographic change (and its economic effects) on savings and consumption behavior.
Although there are some exceptions, households that are older, larger, or more rural tend to have lower per capita labor supply than those that are younger, smaller, or more urban. Lower-income households (e.g., rural households in developing countries) spend a larger share of income on food and a smaller share on transportation than higher-income households. Although labor supply and preferences can be influenced by a range of nondemographic factors, our scenarios focus on capturing the effects of shifts in population across types of households.
To project these demographic trends, we use the high, medium, and low scenarios of the United Nations (UN) 2003 Long-Range World Population Projections combined with the UN 2007 Urbanization Prospects extended by the International Institute for Applied Systems Analysis (IIASA) and derive population by age, sex, and rural/urban residence for the period of 2000–2100.
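The "direct effect" described in that passage can be sketched with a toy calculation. Every number below is invented for illustration (none comes from the PET model): household types have different spending mixes, goods have different emissions intensities, so shifting population between household types shifts aggregate emissions even at constant total spending.

```python
# Toy version of the PET model's "direct effect" (all values made up):
# kg CO2 per dollar of output for each good category
intensity = {"energy": 2.5, "food": 0.8, "transport": 1.5, "other": 0.4}

# spending shares by household type (hypothetical)
shares = {
    "young_urban": {"energy": 0.10, "food": 0.15, "transport": 0.35, "other": 0.40},
    "older_rural": {"energy": 0.20, "food": 0.35, "transport": 0.15, "other": 0.30},
}

def emissions_per_dollar(population_mix):
    """Aggregate emissions intensity for a given mix of household types."""
    total = 0.0
    for htype, frac in population_mix.items():
        for good, share in shares[htype].items():
            total += frac * share * intensity[good]
    return total

# Same total spending, different demographic composition:
print(emissions_per_dollar({"young_urban": 0.7, "older_rural": 0.3}))
print(emissions_per_dollar({"young_urban": 0.3, "older_rural": 0.7}))
```

With these (made-up) numbers the two mixes differ in emissions per dollar, which is the point: a model that ignores household composition would miss that shift entirely.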
What did they find?
Sunday, October 10th, 2010
Thomas Rogers at Salon.com has a review of Devra Davis’ new book, “Disconnect: The Truth About Cell Phone Radiation, What the Industry Has Done to Hide It, and How to Protect Your Family”.
The full article is worth reading for the apparent bottom line on cell phone safety. Below are a few excerpts of the review and Rogers’ interview with Davis:
In “Disconnect,” Devra Davis, a scientist and National Book Award finalist for “When Smoke Ran Like Water,” looks at the connection between cellphones and health problems, with some disturbing results. Recent studies have tied cellphone use to rises in brain damage, cheek cancer and malfunctioning sperm. She reveals the unsettling fact that many new cellphones now come with the small-print warning that they are to be kept at least one inch from the ear (presumably for safety reasons) and many insurance companies refuse to insure cellphone companies against health-related claims. Most troubling of all, science has shown that children and teenagers are particularly susceptible to cellphone radiation, raising questions about its effects on coming generations.
What to you is the most compelling evidence that links cellphones to brain cancer?
The brain cancer connection is in fact a very complicated one. Cancer can take a long time to develop. After the Hiroshima bomb fell, there was no increase in brain cancer for 10 years, even 20 years afterward. Forty years later, there was a significant increase in brain cancer in people who survived the bombing. Now, for studies of people who have been heavy cellphone users (defined as someone who has made a half-hour call a day for 10 years), there is a 50 percent increase in brain cancer overall. And among the heaviest users there’s a two- to fourfold increased risk.
We’ve only really been using cellphones for 10 years. Isn’t it a bit early to be drawing these kinds of conclusions?
Well, that’s actually not true. Heavy use of cellphones in the United States is a very recent phenomenon for the general population. In the year 2000, fewer than half of us regularly used cellphones. Now almost all of us do. If there’s a 10-year latency, we still have to wait another five years in the United States to see any general population impacts.
You have to look at all of the evidence and not simply wait for proof of human harm or sick people or dead people. If the debate becomes, “Do we have sufficient proof of human harm?” that means we’re waiting another 20 years. That means we will potentially have an epidemic before we act to prevent harm. Now, some people could be very cynical and say, look, brain cancer is relatively rare so even if it doubles or quadruples it’s still rare. But it’s also, at this point, mostly incurable.
Why are young people so much more at risk?
Their brains are not fully protected with myelin. Myelin is a kind of fatty sheath that goes around neurons [brain cells] and helps to enhance judgment and a whole bunch of other things, like impulse control. Their skulls are also thinner, and a thinner skull admits more radiation. We now know that the young brain doesn’t mature until the mid-20s, later in boys than girls. We need to be much more vigilant about protecting the young brain because it is more vulnerable. We know that from work that’s been done on lead and a number of other agents.
If this research is really as convincing as it seems to be, then why hasn’t it created a widespread uproar?
Well, it has in France. Bills passed both houses of the French national government this spring that ban the marketing and creation of phones uniquely for children. It’s also had an impact in Israel, a country that is very sophisticated in its use of radar and microwaves, and Finland, both of which have issued warnings.
But think about the fine print warning that comes with the BlackBerry Torch. It says, If you keep the phone in your pocket, it can exceed the FCC exposure guidelines. What’s that supposed to tell you? It sounds like that phone cannot safely be put in your pocket — well, where do they expect people to keep them?
…The book also describes the aggressive push-back by people affiliated with the cellphone industry against scientists whose findings point to safety concerns — including, in one case, a campaign to discredit someone’s findings by accusing them of manufacturing evidence. It’s pretty explosive stuff.
I think it might have started out as nothing more than companies wanting to make profits, and wanting to keep their products in a positive light. Companies are allowed to make profits; I’m not opposed to that. And I imagine people genuinely thought these kinds of dangers from radiation weren’t possible, because the physics paradigm [at the time] said it wasn’t. But it has since been morphed into something worse. Now even the insurance industry is listening to scientists. Many companies are no longer providing coverage for health damage from cellphones.
We need to be more sophisticated as a society in using experimental data where we have it. We have experimental data on sperm counts. We have experimental data on brain cell damage. We have experimental data on biological markers that we know increase the risk of cancer. These are the same debates that went out over passive smoking, over active smoking, over asbestos, over benzene, over vinyl chloride. They said we don’t have enough sick or dead people. The consequence was to continue exposing people. Is there anybody in the world who believes we should have waited as long as we did?
Photo credit: liber
Thursday, October 7th, 2010
There’s a new paper in this week’s issue of Science suggesting that planting a landscape with a mix of genetically modified (GM) Bt corn and non-GM hybrid corn varieties can be mutually beneficial to all corn farmers.
Why? The authors argue that the Bt corn knocks down populations of insect pests enough that, at the landscape scale, the effect spills over to nearby farms growing non-GM corn, raising their yields and profits:
[W]e estimate that cumulative benefits for both Bt and non-Bt maize growers during the past 14 years were almost $6.9 billion in the five-state region (18.7 million ha in 2009)—more than $3.2 billion in Illinois, Minnesota, and Wisconsin, and $3.6 billion in Iowa and Nebraska. Of this $6.9 billion total, cumulative suppression benefits to non-Bt maize growers resulting from O. nubilalis [European corn borer] population suppression in non-Bt maize exceeded $4.3 billion—more than $2.4 billion in Illinois, Minnesota, and Wisconsin, and $1.9 billion in Iowa and Nebraska—or about 63% of the total benefits.
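A quick arithmetic check shows the quoted regional figures are internally consistent (small gaps reflect the "almost"/"more than" rounding in the quote):

```python
# Consistency check of the quoted benefit figures (billions of dollars)
total = 3.2 + 3.6    # IL/MN/WI + IA/NE, all growers ("almost $6.9 billion")
non_bt = 2.4 + 1.9   # suppression benefits to non-Bt growers ("exceeded $4.3 billion")
share = non_bt / total

print(total)                # ~6.8
print(non_bt)               # ~4.3
print(round(share * 100))   # ~63, matching "about 63% of the total benefits"
```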
They also suggest that the non-GM corn benefits the Bt corn farmers, because it maintains a pool of Bt-susceptible insects, helping slow the evolution of herbivores resistant to Bt corn.
These results are interesting and—if they hold—could be an example of how GM crops bring environmental and social benefits. A good outcome for all.
However, there are a couple of important things to consider:
(1) The notion of mixing crop types to minimize herbivory is one of the fundamental tenets of traditional agroecology and organic agriculture, but instead of relying on GM crops, it could be done with a mix of hybrid crop varieties that doesn’t risk the potential environmental side effects of Bt corn or other unexpected outcomes of GM crops. This is a major value judgment. Does having one GM crop and a few dominant corn varieties count as diversity when the Midwest becomes a giant sea of maize? As I explain in #2 below, probably not. Could we achieve the same kind of insect pest management using a diversity of non-GM crops? Yes—it happens all the time on midwestern organic farms. Multi-crop organic farming is often more labor intensive than industrial agriculture, making the food produced more expensive. But do we only care about cheap food?
(2) I’ve lived in southern Minnesota, where it’s a giant rotating monoculture of corn and soybeans. If you look at Figure 1 in this paper, you will see that 50-75% (or more) of the corn grown in many regions of states like Iowa, Nebraska, and Minnesota is Bt corn. When so much of your landscape is Bt corn, the evolution of resistance to Bt is most likely inevitable, as we saw in a previous post on Roundup Ready crops like soybeans, which are often grown in rotation with Bt corn in these regions. Acknowledging this fact of life, the EPA mandates mixing GM and non-GM corn in an effort to delay the evolution of resistance, not prevent it:
To delay evolution of resistance, the U.S. Environmental Protection Agency (EPA) mandated that a minimum 20 to 50% of total onfarm maize be planted as non-Bt maize within 0.8 km of Bt fields as a structured refuge for susceptible O. nubilalis. Use of non-Bt maize refugia is an important element of long-term insect resistance management.
…Sustained economic and environmental benefits of this technology, however, will depend on continued stewardship by producers to maintain non-Bt maize refugia to minimize the risk of evolution of Bt resistance in crop pest species, and also on the dynamics of Bt resistance evolution at low pest densities and for variable pest phenotypes.
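The refuge logic can be illustrated with a standard single-locus population-genetics toy model. This is my own sketch with assumed parameters, not anything from the Hutchison et al. paper: Bt resistance is typically recessive, so rare resistance alleles mostly sit in heterozygotes, which die on Bt corn, while the refuge lets susceptible alleles persist and dilute them.

```python
# Toy model: only homozygous-resistant (RR) insects survive on Bt corn;
# all genotypes survive in the refuge. Fitness of each genotype is its
# survival probability averaged over the landscape. (Assumed parameters.)

def generations_to_resistance(refuge_fraction, p=0.001, threshold=0.5, cap=100_000):
    """Generations until the resistance allele frequency p exceeds threshold."""
    for gen in range(1, cap + 1):
        q = 1 - p
        w_RR = 1.0               # resistant homozygotes survive everywhere
        w_Rr = refuge_fraction   # heterozygotes survive only in the refuge
        w_rr = refuge_fraction   # susceptibles survive only in the refuge
        mean_w = p * p * w_RR + 2 * p * q * w_Rr + q * q * w_rr
        # standard allele-frequency recursion under viability selection
        p = (p * p * w_RR + p * q * w_Rr) / mean_w
        if p > threshold:
            return gen
    return cap

print(generations_to_resistance(0.20))  # EPA-style 20% refuge
print(generations_to_resistance(0.05))  # smaller refuge: resistance arrives sooner
```

Shrinking the refuge shortens the time to resistance roughly in proportion, which is why the mandated 20-50% non-Bt acreage matters: it buys time, but in either case the model eventually reaches resistance, echoing the point that refuges delay rather than prevent it.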
Hutchison, W., Burkness, E., Mitchell, P., Moon, R., Leslie, T., Fleischer, S., Abrahamson, M., Hamilton, K., Steffey, K., Gray, M., Hellmich, R., Kaster, L., Hunt, T., Wright, R., Pecinovsky, K., Rabaey, T., Flood, B., & Raun, E. (2010). Areawide suppression of European corn borer with Bt maize reaps savings to non-Bt maize growers. Science, 330(6001), 222-225. DOI: 10.1126/science.1190242
Photo credit: Ian Hayhurst
Wednesday, October 6th, 2010
Check out this video from a family that built a balloon-mounted video camera and launched it 100,000 ft (~30 km) into the atmosphere (about halfway into the stratosphere). The ascent, eventual balloon burst, and descent are great to watch. A nice and unusual way to experience the planet. This kid definitely gets an A on his science fair project.