Wednesday, September 29th, 2010
Water security is making a bit of a splash this week. CNBC ran this story on the water crisis in the western U.S., a region possibly closing in on a day of reckoning, as described by Felicity Barringer in the NY Times, one that is creating a climate of pessimism among some western water managers.
The scientific community is also weighing in. C.J. Vörösmarty and colleagues published a review paper in this week’s issue of Nature in which they evaluate the worldwide risk of water security and threats to aquatic biodiversity (edited slightly to remove citations and statistics):
We find that nearly 80% (4.8 billion) of the world’s population (for 2000) lives in areas where either incident human water security or biodiversity threat exceeds the 75th percentile. Regions of intensive agriculture and dense settlement show high incident threat, as exemplified by much of the United States, virtually all of Europe (excluding Scandinavia and northern Russia), and large portions of central Asia, the Middle East, the Indian subcontinent and eastern China. Smaller contiguous areas of high incident threat appear in central Mexico, Cuba, North Africa, Nigeria, South Africa, Korea and Japan. The impact of water scarcity accentuates threat to drylands, as is apparent in the desert belt transition zones across all continents (for example, Argentina, Sahel, Central Asia, Australian Murray–Darling basin).
What is the disparity in risk between rich and poor nations?
Most of Africa, large areas in central Asia and countries including China, India, Peru, or Bolivia struggle with establishing basic water services like clean drinking water and sanitation, and emerge here as regions of greatest adjusted human water security threat. Lack of water infrastructure yields direct economic impacts. Drought- and famine-prone Ethiopia, for example, has 150 times less reservoir storage per capita than North America and its climate and hydrological variability takes a 38% toll on gross domestic product (GDP). The number of people under chronically high water scarcity, many of whom are poor, is 1.7 billion or more globally, with 1.0 billion of these living in areas with high adjusted human water security threat.
They also argue that as a nation's wealth increases, its apparent ability to deal with water security issues improves, creating the perception that the threat level is declining:
Contrasts between incident and adjusted human water security threat are striking when considered relative to national wealth. Incident human water security threat is a rising but saturating function of per capita GDP, whereas adjusted human water security threat declines sharply in affluent countries in response to technological investments. The latter constitutes a unique expression of the environmental Kuznets curve, which describes rising ambient stressor loads during early-to-middle stages of economic growth followed by reduced loading through environmental controls instituted as development proceeds. The concept applies well to air pollutants that directly expose humans to health risks, and which can be regulated at their source. The global investment strategy for human water security shows a distinctly different pattern. Rich countries tolerate relatively high levels of ambient stressors, then reduce their negative impacts by treating symptoms instead of underlying causes of incident threat.
Biodiversity threats from river use appear to be significant globally:
The worldwide pattern of river threats documented here offers the most comprehensive explanation so far of why freshwater biodiversity is considered to be in a state of crisis. Estimates suggest that at least 10,000–20,000 freshwater species are extinct or at risk, with loss rates rivalling those of previous transitions between geological epochs like the Pleistocene-to-Holocene.
And what about future prospects?
We remain off-pace for meeting the Millennium Development Goals for basic sanitation services, a testament to the lack of societal resolve, when one considers that a century of engineering know-how is available and returns on investment in facilities are high. For Organisation for Economic Co-operation and Development (OECD) and BRIC (Brazil, Russia, India and China) countries alone, 800 billion US dollars per year will be required in 2015 to cover investments in water infrastructure, a target likely to go unmet. The situation is even more daunting for biodiversity. International goals for its protection lag well behind expectation and global investments are poorly enumerated but likely to be orders of magnitude lower than those for human water security, leaving at risk animal and plant populations, critical habitat and ecosystem services that directly underpin the livelihoods of many of the world’s poor.
…with a not-so-comforting conclusion:
Left unaddressed, these linked human water security–biodiversity water challenges are forecast to generate social instability of growing concern to civil and military planners.
Vörösmarty, C., McIntyre, P., Gessner, M., Dudgeon, D., Prusevich, A., Green, P., Glidden, S., Bunn, S., Sullivan, C., Liermann, C., & Davies, P. (2010). Global threats to human water security and river biodiversity Nature, 467 (7315), 555-561 DOI: 10.1038/nature09440
Photo credit: suburbanbloke
Monday, September 27th, 2010
Genetically modified organisms (GMOs) are back in the news. A few days ago, NPR featured a couple of blog posts (here and here) considering whether the new GMO “supersized” salmon will be harmful to aquatic ecosystems.
A concern with GMOs is that—like the early adoption of pesticides—the risks are being borne by the environment and consumers as we experiment with new species. GMOs hold a lot of promise, and I hope they all end up being harmless. But there are downsides too that we cannot assess very well at this point. And we may be creating problems that we are not even aware of yet.
As more data come in, it’s not always an encouraging outlook. A couple of recent examples:
Case #1: We saw a few months ago how weeds that were supposed to be eliminated by the agricultural herbicide, Roundup, are now evolving resistance to the chemical, meaning that Roundup-ready soybeans and other crops no longer work as designed.
Case #2: In this week’s Early Edition of the Proceedings of the National Academy of Sciences, Jennifer Tank and colleagues examined what happens to transgenic corn residue (old crop parts left on fields that are not harvested). One of the main transgenic varieties of corn is known as “Bt corn.” Bt stands for the name of a microbe—Bacillus thuringiensis—that makes a protein toxin that destroys gut function in some insects. Scientists have figured out how to move the Bt gene, and hence the capacity to manufacture Bt toxin, from the bacteria to corn plants, thereby conferring general resistance to insect herbivores on this crop (the main pest being the European corn borer).
This team asked: What happens when corn stalks, cobs, and leaves end up in streams and rivers throughout the Midwest? Their answer is eye-opening:
Widespread planting of maize throughout the agricultural Midwest may result in detritus entering adjacent stream ecosystems, and 63% of the 2009 US maize crop was genetically modified to express insecticidal Cry proteins derived from Bacillus thuringiensis. Six months after harvest, we conducted a synoptic survey of 217 stream sites in Indiana to determine the extent of maize detritus and presence of Cry1Ab protein in the stream network. We found that 86% of stream sites contained maize leaves, cobs, husks, and/or stalks in the active stream channel. We also detected Cry1Ab protein in stream-channel maize at 13% of sites and in the water column at 23% of sites. We found that 82% of stream sites were adjacent to maize fields, and Geographical Information Systems analyses indicated that 100% of sites containing Cry1Ab-positive detritus in the active stream channel had maize planted within 500 m during the previous crop year. Maize detritus likely enters streams throughout the Corn Belt; using US Department of Agriculture land cover data, we estimate that 91% of the 256,446 km of streams/rivers in Iowa, Illinois, and Indiana are located within 500 m of a maize field. Maize detritus is common in low-gradient stream channels in northwestern Indiana, and Cry1Ab proteins persist in maize leaves and can be measured in the water column even 6 mo after harvest. Hence, maize detritus, and associated Cry1Ab proteins, are widely distributed and persistent in the headwater streams of a Corn Belt landscape.
Who cares? Streams and rivers are breeding grounds for many insect species, including dragonflies, mayflies, and damselflies. If toxins that are good at killing insects are floating in these aquatic ecosystems, there is a risk of disrupting food webs, with potential knock-on effects for birds as well as for many important recreational and sport fish that dine on insects:
Once maize detritus enters stream channels, this carbon source degrades rapidly via a combination of microbial decomposition, physical breakdown, and invertebrate consumption, and that energy may fuel stream food webs. Maize detritus in agricultural streams decomposes in ∼66 d …. Therefore, the material that we found during our synoptic survey had entered these streams relatively recently. Maize detritus is rapidly colonized by stream-dwelling invertebrates, and growth rates of invertebrates feeding on nontransgenic decomposing maize are comparable to those feeding on the deciduous leaf litter commonly found in forested streams
Perhaps this means that the Bt toxins might break down quickly and pose less harm? Doesn’t look like it:
Our data demonstrate that long after harvest, Cry1Ab is present in submerged Bt maize detritus; thus, stream organisms may be exposed to Cry1Ab for several months.
It’s also interesting to learn that low-till or no-till conservation tillage practices may exacerbate corn residue inputs, because the greater amount of material left on fields is susceptible to washing away:
The dried detritus left on fields after harvest, as part of conservation tillage, enters headwater streams as a result of surface runoff and/or wind events occurring throughout the year. During heavy precipitation, overland flow is the likely mechanism transporting this material to stream channels.
It may not even be a matter of leaving less residue; the toxins also appear to be draining through the soils:
Our results from tile drains indicate that tiles may be a mechanism by which Cry1Ab leached from detritus on fields or from soils can be transported to streams.
Cry1Ab released from root exudates or decaying maize detritus moves vertically through soils and can be detected at the base of 15-cm-long soil profiles for up to 9 h.
Their conclusion? An illustration of how little we know at this point:
The question of whether the concentrations of Cry1Ab protein we report in this study have any effects on nontarget organisms merits further study.
Jennifer L. Tank, Emma J. Rosi-Marshall, Todd V. Royer, Matt R. Whiles, Natalie A. Griffiths, Therese C. Frauendorf, and David J. Treering (2010). Occurrence of maize detritus and a transgenic insecticidal protein (Cry1Ab) within the stream network of an agricultural landscape Proceedings of the National Academy of Sciences : 10.1073/pnas.1006925107
Photo credit: snake.eyes
Sunday, September 26th, 2010
The NY Times and Huffington Post are running a story by Kim Severson, Told to Eat Its Vegetables, America Orders Fries, lamenting how hard it is to get people to eat healthy.
The thing that struck me about this article, as its title suggests, is how nutrition in America is often pitched top-down. A strategy is bound to fail when it consists simply of government experts making recommendations about nutrition, as one of the folks interviewed notes:
“It is disappointing,” said Dr. Jennifer Foltz, a pediatrician who helped compile the report. She, like other public health officials dedicated to improving the American diet, concedes that perhaps simply telling people to eat more vegetables isn’t working.
…The government keeps trying, too, to get its message across. It now recommends four and a half cups of fruits and vegetables (that’s nine servings) for people who eat 2,000 calories a day. Some public health advocates have argued that when the guidelines are updated later this year, they should be made even clearer. One proposal is to make Americans think about it visually, filling half the plate or bowl with vegetables.
The article explores the usual things claimed to be preventing people from eating better—convenience and cost:
“The moment you have something fresh you have to schedule your life around using it,” Mr. Balzer said.
In the wrong hands, vegetables can taste terrible. And compared with a lot of food at the supermarket, they’re a relatively expensive way to fill a belly.
“Before we want health, we want taste, we want convenience and we want low cost,” Mr. Balzer said.
Melissa MacBride, a busy Manhattan resident who works for a pharmaceuticals company, would eat more vegetables if they weren’t, in her words, “a pain.”
“An apple you can just grab,” she said. “But what am I going to do, put a piece of kale in my purse?”
“It’s just like any other bad habit,” he said. “Part of it is just that vegetables are a little intimidating. I’m not afraid of zucchinis, but I just don’t know how to cook them.”
The solution is presented as a problem of overcoming access to good food:
But clear guidance probably isn’t enough. Health officials now concede that convincing a nation that shuns vegetables means making vegetables more affordable and more available.
I’m a fan of nutritional literacy, as I am with environmental literacy, but only as one of several approaches in a portfolio of strategies for improving the quality of life and the environment. Nutritionists and climate change educators should team up in this regard because they face the same challenge—winning hearts and minds (or, in this case, stomachs) and changing behavior.
The problem is that a top-down nutritional literacy approach, by itself, is woefully inadequate (more information, alone, simply won’t change behavior), and access to good food is only part of the challenge.
If you want engagement, then nutrition needs to be turned into a bottom-up venture. It’s not simply a matter of food pyramids and access to good food. People need to experience growing and cooking their own food. They need to be engaged with how good it can be, how it can be grown cheaply, and how plant-based diets are easy to prepare.
There are several ways to begin accomplishing this:
1. Start early. Make gardening and cooking a part of the elementary school experience. All kids should take an active role in planting, tending, and harvesting food. Then they should take part in preparing the foods they have grown in ways that are appealing to eat. The power of this should not be underestimated. The only thing I remember from kindergarten is making bread and butter from scratch.
2. Diffuse this knowledge to home or community gardens. When kids are taught how to prepare healthy, tasty food, they can bring what they learn home, starting home gardens and helping make dinner by showing parents what they learned in school (maybe accompanied by some kind of creative incentive from parents). People can see for themselves that it is often less expensive to grow healthy food, especially if communities team up and share their bounties, than it is to buy the junk food that makes up much of their diet.
3. Involve the community in a contest to generate a list of the most popular recipes for different fruits and vegetables. Perhaps engage the help of local chefs for fun. I have a 100% whole fruit smoothie recipe that most kids would mistake for dessert.
4. Disperse these recipes widely and incorporate them into school education programs and lunches, as Alice Waters is accomplishing in California.
5. Not only should farmers markets accept SNAP (food stamps), there should be classes/demos to show people how to prepare foods. Also, having samples and recipes that are tasty and convenient would be helpful. People should be convinced, by seeing with their own eyes and taste buds, that they can do this and that it’s worth their time.
And that’s part of the larger problem: overcoming the psychological barrier that fresh food prep is time consuming:
“The moment you have something fresh you have to schedule your life around using it.”
Although I see the point here, I think it’s a poor reason for not eating healthy. People schedule time around education, sleeping, exercising, soccer practice, vacation, being with friends, spirituality, and visits to the doctor/dentist because these things are considered necessary to living well. Is preparing healthy food not a similarly meaningful part of our lives? Is it really impossible for families to set aside 30-45 minutes to prepare meals? Should leisure time or other competing interests really carry that high an opportunity cost?
Perhaps that’s one lesson: So long as Americans treat preparing and enjoying healthy meals as a tradeoff with leisure time or other activities, American diets will suffer. No amount of top-down government nutrition guidelines will overcome that.
Related news: Bill Clinton now eats vegan
Photo credit: hellochris
Friday, September 24th, 2010
Mike Berners-Lee and Duncan Clark at The Guardian have a recent post in their series examining the carbon footprints of daily life activities. Their post asks how much carbon is emitted, directly and indirectly, in building a car.
The carbon footprint of making a car is immensely complex. Ores have to be dug out of the ground and the metals extracted. These have to be turned into parts. Other components have to be brought together: rubber tyres, plastic dashboards, paint, and so on. All of this involves transporting things around the world. The whole lot then has to be assembled, and every stage in the process requires energy. The companies that make cars have offices and other infrastructure with their own carbon footprints, which we need to somehow allocate proportionately to the cars that are made.
….The best we can do is use so-called input-output analysis to break up the known total emissions of the world or a country into different industries and sectors, in the process taking account of how each industry consumes the goods and services of all the others. If we do this, and then divide the total emissions of the auto industry by the total amount of money spent on new cars, we reach a footprint of 720kg CO2e per £1000 spent.
This is only a guideline figure, of course, as some cars may be more efficiently produced than others of the same price. But it’s a reasonable ballpark estimate, and it suggests that cars have much bigger footprints than is traditionally believed. Producing a medium-sized new car costing £24,000 may generate more than 17 tonnes of CO2e – almost as much as three years’ worth of gas and electricity in the typical UK home.
17 (metric) tons is 17,000 kg or about 37,400 pounds. The U.S. EPA estimates that the average passenger vehicle in the U.S. emits 5-5.5 metric tons CO2e per year, assuming 12,000 miles driven per year.
If you do the math, this means the embodied CO2e emissions to make a car amount to about 3-3.5 years’ worth of tailpipe emissions from driving. Assuming that most people own their cars for longer than three years, this figure doesn’t jibe with what the authors claim:
The upshot is that – despite common claims to the contrary – the embodied emissions of a car typically rival the exhaust pipe emissions over its entire lifetime. Indeed, for each mile driven, the emissions from the manufacture of a top-of-the-range Land Rover Discovery that ends up being scrapped after 100,000 miles may be as much as four times higher than the tailpipe emissions of a Citroen C1.
If people held onto their cars for 10 years (assuming 120,000 miles), tailpipe emissions would equal 50 metric tons of CO2e, and embodied emissions would be about 34% of tailpipe emissions. If people drove their cars for 20 years (assuming 240,000 miles), the exhaust emissions would rise to 100 metric tons CO2e, with embodied emissions dropping to 17% of tailpipe emissions.
While most folks generally agree with the notion of driving their vehicle into the ground (as my recently dead 16-yr-old truck illustrates), you’d have to be driving a Toyota Prius to get a lifetime tailpipe emission that equals the embodied emissions of building it (assuming that a Prius achieves three times the mpg of a typical car, which would drop CO2e tailpipe emissions from 5 to 1.7 metric tons CO2e per year, making a 10-year total tailpipe emission of 17 metric tons reasonable).
Thus, if you drive an average car for 10 years, your lifetime tailpipe emissions (50 metric tons) will be a lot larger than the embodied emissions to build the car (17 metric tons) (for a total emission of 67 metric tons). If you drive a hyper-efficient vehicle for 10 years, tailpipe and embodied emissions may be comparable (17 metric tons each, 34 metric tons total). This means you could buy a new Prius every three years, and the embodied emissions from all of these purchases plus tailpipe emissions would roughly equal a normal car driven for 10 years.
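The arithmetic above can be checked with a quick back-of-envelope script. This is only a sketch using the figures already quoted in this post (the £24,000 car price, the 720 kg CO2e per £1,000 footprint, and the EPA's 5 metric tons CO2e per year of driving); none of the numbers are new.

```python
# Back-of-envelope check of the embodied vs. tailpipe emission figures quoted above.

T_CO2E_PER_K_GBP = 0.72   # Guardian estimate: 720 kg CO2e per 1,000 GBP spent on a new car
CAR_PRICE_K_GBP = 24      # medium-sized new car, 24,000 GBP
embodied_t = T_CO2E_PER_K_GBP * CAR_PRICE_K_GBP   # ~17.3 t, matching "more than 17 tonnes"

TAILPIPE_T_PER_YR = 5.0   # EPA estimate, assuming ~12,000 miles driven per year

# Embodied emissions expressed as years of average driving
print(f"embodied = {embodied_t:.1f} t = {embodied_t / TAILPIPE_T_PER_YR:.1f} yr of driving")

# Ownership scenarios for an average car (using the rounded 17 t embodied figure)
for years in (10, 20):
    tailpipe_t = TAILPIPE_T_PER_YR * years
    print(f"{years} yr: tailpipe {tailpipe_t:.0f} t, embodied = {17 / tailpipe_t:.0%} of tailpipe")

# Hyper-efficient car at ~3x the mpg: tailpipe drops to ~1.7 t/yr, so ten years
# of driving emits ~17 t, comparable to the embodied emissions of building it.
prius_10yr_tailpipe_t = 1.7 * 10
```

Running this reproduces the post's numbers: embodied emissions equal roughly 3.5 years of average driving, 34% of a 10-year tailpipe total, and 17% of a 20-year total.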
This raises an important question: What matters here? If the goal is to reduce total emissions, the best thing is to buy a car with a very high fuel efficiency and drive it for its full life, as the above examples illustrate.
Photo credit: atomicshark
Friday, September 24th, 2010
It’s always pleasing to run across stunning photos of our world. The Chartered Institution of Water and Environmental Management (CIWEM), a UK-based NGO, sponsors an annual contest for environmental photographer of the year. Here’s a story on this year’s competition, with a few excerpts and several of the award-winning photos below:
A picture of an unprecedented congregation of Munkiana Devil Rays in Baja California Sur has won Florian Schulz the prestigious 2010 title of The Environmental Photographer of the Year. And 20 year old Bulgarian Radoslav Radoslavov Valkov has gained the title of the Young Environmental Photographer of the Year with his macro photograph of a fly.
Organised by the Chartered Institution of Water and Environmental Management (CIWEM), the Environmental Photographer of the Year has exceeded all expectations, receiving over 4,500 entries from photographers in 97 countries in just its fourth year. That’s a record-breaking rise in entries of 93 percent from 2009, with this year seeing the first entries from countries such as Tajikistan, the Democratic Republic of Congo, Mongolia, Swaziland, Palestine, Latvia and Bolivia.
The Environmental Photographer of the Year is an international showcase for the very best in environmental photography, honouring amateur and professional photographers who use their ability to raise awareness of environmental and social issues. The categories are Mott MacDonald’s Changing Climates; The Natural World; Quality of Life; Innovation in the Environment (New for 2010); The Underwater World (New for 2010); A View From the Western World (New for 2010); and the Young Environmental Photographer of the Year (Under 16 & Under 21).
Wednesday, September 22nd, 2010
The rise in global mean temperature of about 0.9 degrees C over the 20th century is one of the most well-known trends in the science of global change. Several modeling and empirical studies suggest that some (~0.3 degrees C) of this warming is due to natural causes like increased solar intensity and decreased vulcanism (which reduces cloud-forming aerosols). Most warming attributed to these factors occurred up to about 1950. The rest of the warming—about 0.6 degrees C, most of which occurred after 1960—can only be explained by the rise in greenhouse gases.
If warming is happening, what about that strange cooling dip from 1940-1970? This has often been attributed to the rise in aerosols from pollution (e.g., power plant smokestacks), which—like volcanoes—form clouds, block solar radiation, and cool temperatures. Once clean air legislation kicked in during the 1970s, these pollution aerosols declined, allowing more solar radiation to reach the earth’s surface. Conventional wisdom therefore suggests that cleaning up the atmosphere may have contributed somewhat to climate warming.
A new study in this week’s issue of Nature by David Thompson and colleagues, An abrupt drop in Northern Hemisphere sea surface temperature around 1970, challenges this idea, suggesting that oceans, rather than aerosols, may be the driver of this multi-decadal cooling blip:
The twentieth-century trend in global-mean surface temperature was not monotonic: temperatures rose from the start of the century to the 1940s, fell slightly during the middle part of the century, and rose rapidly from the mid-1970s onwards. The warming–cooling–warming pattern of twentieth-century temperatures is typically interpreted as the superposition of long-term warming due to increasing greenhouse gases and either cooling due to a mid-twentieth century increase of sulphate aerosols in the troposphere, or changes in the climate of the world’s oceans that evolve over decades (oscillatory multidecadal variability). Loadings of sulphate aerosol in the troposphere are thought to have had a particularly important role in the differences in temperature trends between the Northern and Southern hemispheres during the decades following the Second World War. Here we show that the hemispheric differences in temperature trends in the middle of the twentieth century stem largely from a rapid drop in Northern Hemisphere sea surface temperatures of about 0.3 °C between about 1968 and 1972. The timescale of the drop is shorter than that associated with either tropospheric aerosol loadings or previous characterizations of oscillatory multidecadal variability. The drop is evident in all available historical sea surface temperature data sets, is not traceable to changes in the attendant metadata, and is not linked to any known biases in surface temperature measurements. The drop is not concentrated in any discrete region of the Northern Hemisphere oceans, but its amplitude is largest over the northern North Atlantic.
This is an interesting development in understanding ocean-atmosphere dynamics. The authors did not go very far in speculating why they thought this observed pattern of North Atlantic cooling occurred, so it’s not yet clear what this means. They offer the following:
The suddenness of the drop in Northern Hemisphere SSTs is reminiscent of ‘abrupt climate change’, such as has been inferred from the palaeoclimate record.
The timescale of the drop is important, because it is considerably shorter than that typically associated with either tropospheric aerosol forcing or oscillatory multidecadal SST variability.
The timing of the drop corresponds closely to a rapid freshening of the northern North Atlantic in the late 1960s/early 1970s (the ‘great salinity anomaly’).
So that potentially rules out mechanisms like the North Atlantic Oscillation, Atlantic Multidecadal Oscillation, or Arctic Oscillation, and suggests instead that freshwater inputs such as glacial thaw may induce North Atlantic cooling, though likely on a much smaller scale than this.
Thompson, D., Wallace, J., Kennedy, J., & Jones, P. (2010). An abrupt drop in Northern Hemisphere sea surface temperature around 1970 Nature, 467 (7314), 444-447 DOI: 10.1038/nature09394
Image credit: http://commons.wikimedia.org/wiki/File:Instrumental_Temperature_Record_%28NASA%29.svg
Sunday, September 19th, 2010
Laura Miller at Salon reviews a new book out this week by Judy Pasternak titled, “Yellow Dirt: An American Story of a Poisoned Land and a People Betrayed.” A few excerpts from Miller’s analysis:
In the summer of 1979, an earthen dam above the town of Church Rock, New Mexico, broke, flooding the arroyo below and then the bed of the Rio Puerco (an intermittent stream) on the southern border of the Navajo Nation. It was a small flood, but a dangerous one. It burned the feet of a boy who stepped into it, and caused sheep and crops along the banks to drop dead. That’s because the pond it came from had been used by a nearby uranium mine to store the tailings (residue) of its excavations — the water kept the radioactive dust from blowing away. The 93 million gallons of contaminated water that poured into the Rio Puerco remains the largest accidental release of radioactive material in U.S. history, bigger than the notorious Three Mile Island reactor meltdown that occurred 14 weeks later.
The Church Rock flood is only one incident among many in the “slow-motion disaster” investigative journalist Judy Pasternak comprehensively recounts in her chilling new book, “Yellow Dirt: An American Story of a Poisoned Land and a People Betrayed.” Based on a prize-winning four-part series she wrote for the Los Angeles Times, “Yellow Dirt” begins during World War II, when secretive government surveyors first appeared on the remote reservation, supposedly looking for deposits of an ore called vanadium, used to strengthen steel needed for the war effort. Uranium was the real prize, and after the bombings of Hiroshima and Nagasaki and the ramping up of the Cold War, the American demand for the radioactive substance boomed.
The Navajo Nation and the area around it contained some of the richest deposits of uranium ore in the world, and certainly the most conveniently located. For about a decade, various corporations and government agencies reaped 1.4 million tons of uranium ore from the Monument Valley region alone; Pasternak makes a single mine there, known as Monument No. 2, her primary focus. The mining operations were relatively rudimentary, and by ordination of the tribal government, worked almost entirely by Navajo men. Even the cheapest and most elementary safety practices, such as wetting down blast areas to keep the miners from breathing toxic dust, were neglected in the rush to satisfy the Atomic Energy Commission’s insatiable appetite for uranium.
By the 1960s, the need tapered off, and the mining companies blithely abandoned the sites, leaving piles of radioactive tailings lying around for Navajo kids to play on and their parents to scavenge for conveniently sized rocks with which to build houses, ovens and cisterns. The dust and gravel made seemingly excellent concrete for floors. Monument No. 2, once a mesa, had been nearly leveled, its uranium-laced innards exposed to the open air, reduced to what Pasternak characterizes as a “radioactive pit.” Old quarries filled up with rain- and groundwater, new “lakes” from which local residents watered their herds and gratefully drank.
The next boom, unsurprisingly, was in cancer rates (previously so low among the Navajo that they were thought to be miraculously immune to the disease), and in a birth defect, christened “Navajo neuropathy,” that caused children’s fingers to fuse together and curl into claws. Still, it took decades for the cause to be fully recognized and even longer for it to be addressed; it wasn’t until 2008, and under the lashing of Rep. Henry Waxman, that the federal government made serious efforts to clean up the mine sites, purify water supplies and relocate families living in houses built from radioactive materials.
Read the rest of the review here.
Photo Credit: Christopher Isherwood
Sunday, September 19th, 2010
In the current issue of Population and Environment, Aaron McCright authors an article, The effects of gender on climate change knowledge and concern in the American public, in which he examines whether women and men perceive climate warming differently:
This study tests theoretical arguments about gender differences in scientific knowledge and environmental concern using 8 years of Gallup data on climate change knowledge and concern in the US general public. Contrary to expectations from scientific literacy research, women convey greater assessed scientific knowledge of climate change than do men. Consistent with much existing sociology of science research, women underestimate their climate change knowledge more than do men. Also, women express slightly greater concern about climate change than do men, and this gender divide is not accounted for by differences in key values and beliefs or in the social roles that men and women differentially perform in society. Modest yet enduring gender differences on climate change knowledge and concern within the US general public suggest several avenues for future research, which are explored in the conclusion.
McCright shares additional insights in a Michigan State University news story covering the article:
“Men still claim they have a better understanding of global warming than women, even though women’s beliefs align much more closely with the scientific consensus,” said McCright, an associate professor with appointments in MSU’s Department of Sociology, Lyman Briggs College and Environmental Science and Policy Program.
The study is one of the first to focus in-depth on how the genders think about climate change. The findings also reinforce past research that suggests women lack confidence in their science comprehension.
“Here is yet another study finding that women underestimate their scientific knowledge – a troubling pattern that inhibits many young women from pursuing scientific careers,” McCright said.
Understanding how the genders think about the environment is important on several fronts, said McCright, who calls climate change “the most expansive environmental problem facing humanity.”
“Does this mean women are more likely to buy energy-efficient appliances and hybrid vehicles than men?” he said. “Do they vote for different political candidates? Do they talk to their children differently about global warming?”
McCright analyzed eight years of data from Gallup’s annual environment poll that asked fairly basic questions about climate change knowledge and concern. He said the gender divide on concern about climate change was not explained by the roles that men and women perform such as whether they were homemakers, parents or employed full time.
Instead, he said the gender divide likely is explained by “gender socialization.” According to this theory, boys in the United States learn that masculinity emphasizes detachment, control and mastery. A feminine identity, on the other hand, stresses attachment, empathy and care – traits that may make it easier to feel concern about the potential dire consequences of global warming, McCright said.
“Women and men think about climate change differently,” he said. “And when scientists or policymakers are communicating about climate change with the general public, they should consider this rather than treating the public as one big monolithic audience.”
McCright, A. (2010). The effects of gender on climate change knowledge and concern in the American public Population and Environment, 32 (1), 66-87 DOI: 10.1007/s11111-010-0113-1
Photo Credit: BostonBill