Thursday, November 11th, 2010
In the Policy Forum of today’s issue of Science, a research team that includes recent Nobel laureate Elinor Ostrom issued a call for innovative interdisciplinary approaches to confronting major environmental challenges:
Tremendous progress has been made in understanding the functioning of the
Earth system and, in particular, the impact of human actions. Although this
knowledge can inform management of specific features of our world in transition, societies need knowledge that will allow them to simultaneously reduce global environmental risks while also meeting economic development goals. For example, how can we advance science and technology, change human behavior, and influence political will to enable societies to meet targets for reductions in greenhouse gas emissions to avoid dangerous climate change? At the same time, how can we meet needs for food, water, improved health and human security, and enhanced energy security? Can this be done while also meeting the United Nations Millennium Development Goals of eradicating extreme poverty and hunger and ensuring ecosystem integrity?
They identified what they call five grand challenges:
(1) Improve the usefulness of forecasts of future environmental conditions and their consequences for people.
(2) Develop, enhance, and integrate observation systems to manage global and regional environmental change.
(3) Determine how to anticipate, avoid, and manage disruptive global environmental change.
(4) Determine institutional, economic, and behavioral changes to enable effective steps toward global sustainability.
(5) Encourage innovation (and mechanisms for evaluation) in technological, policy, and social responses to achieve global sustainability.
And their concluding message resonates with much of what I have been writing about at Global Change (emphasis mine):
These grand challenges provide an overarching research framework to mobilize the international scientific community around a focused decade of research to support sustainable development in the context of global environmental change. … Research dominated by the natural sciences must transition toward research involving the full range of sciences and humanities. A more balanced mix of disciplinary and interdisciplinary research is needed that actively involves stakeholders and decision-makers.
Reid, W., Chen, D., Goldfarb, L., Hackmann, H., Lee, Y., Mokhele, K., Ostrom, E., Raivio, K., Rockstrom, J., Schellnhuber, H., & Whyte, A. (2010). Earth System Science for Global Sustainability: Grand Challenges Science, 330 (6006), 916-917 DOI: 10.1126/science.1196263
From the Environmental Literacy in Higher Education series:
From the Why Don’t People Engage Climate Change? series:
Image credit: woodleywonderworks
Wednesday, November 10th, 2010
There have been several critiques of geoengineering as a climate mitigation tool. Two of the most incisive, in my opinion, come from science and ethics.
The first is a 2007 paper in PNAS by Matthews and Caldeira showing that if we establish aerosol clouds or space reflectors while doing nothing to reduce carbon emissions, we run the risk of catastrophic rates of warming (2-4 degrees C per decade) if these systems were to fail.
The second is a recent piece in Slate by my colleague, Dale Jamieson, who argued that no one holds the moral and legal authority to decide how, when, and how much geoengineering should be deployed.
One proposed geoengineering tool is fertilizing the world’s oceans with iron. The premise behind this idea was developed around 1990 by John Martin, who is often quoted as saying something like, “Give me a tanker of iron, and I’ll give you an ice age.” In much of the open ocean, micronutrients like iron and zinc limit phytoplankton growth far more severely than the nutrients we typically associate with common fertilizers, like nitrogen and phosphorus. Dumping iron into the oceans has been shown to stimulate algal blooms, and the creation of this biomass consumes CO2 from the surface waters and atmosphere, thereby helping to mitigate rising CO2 from fossil fuels. In theory, some of this biomass should sink to the deep ocean, where it would be sequestered for centuries, but this has yet to be shown definitively at a wide scale.
In a forthcoming paper in the Proceedings of the National Academy of Sciences, Mary Silver and colleagues show that there is another potential risk of geoengineering resulting from ocean iron fertilization…
Tuesday, November 9th, 2010
In an interesting new article in Climatic Change, Christopher Doughty and colleagues at Stanford consider whether raising crop albedo (reflectivity) could decrease solar absorption at the Earth’s surface and cool regional climates. One might consider this a kind of climate “bio”engineering.
How could you do this, and would it work?
Monday, November 8th, 2010
When CO2 from fossil fuels accumulates in the atmosphere, some of it dissolves into the oceans, where it reacts with water to form carbonic acid (H2CO3), a weak acid that lowers seawater pH and makes it increasingly difficult for corals and other calcifying organisms to form their calcium carbonate (CaCO3) skeletons.
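The underlying carbonate chemistry, written here as the standard seawater equilibria (not spelled out in the paper itself), looks like this:

```latex
\mathrm{CO_2 + H_2O \rightleftharpoons H_2CO_3 \rightleftharpoons H^+ + HCO_3^-}
\qquad\qquad
\mathrm{H^+ + CO_3^{2-} \rightleftharpoons HCO_3^-}
```

The first chain releases hydrogen ions, which lowers pH; those extra hydrogen ions then tie up carbonate ions as bicarbonate, leaving less CO3^2- available for building CaCO3 skeletons.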
A new study in the Proceedings of the National Academy of Sciences by Rebecca Albright and colleagues suggests that the negative effects of ocean acidification don’t stop with adult organisms. The colonization and establishment of juvenile corals appear to be severely impacted. They studied a common coral found in the Caribbean—Acropora palmata (elkhorn coral, which is not the same as the staghorn coral species pictured above).
A snapshot of their results:
This is potentially very bad news because if you shut down the capacity for new corals to establish, you reduce the ability of coral reef systems to persist in the face of disturbances like hurricanes, wave action, nutrient pollution, bleaching, and disease.
Rebecca Albright, Benjamin Mason, Margaret Miller, and Chris Langdon (2010). Ocean acidification compromises recruitment success of the threatened Caribbean coral Acropora palmata Proceedings of the National Academy of Sciences
Wednesday, October 20th, 2010
In his latest blog post at Time Magazine, Bryan Walsh laments that, six months after the Gulf Oil Spill, it appears no lessons have been learned:
…We all wanted to find the “lessons of the spill”—even while the oil was still flowing. (Look back at that first story I did—it was written during the first week of May, more than 2 months before BP’s blown well was capped.) But we haven’t gotten smarter since the spill. We’ve gotten stupider.
…It’s now six months to the day after the Deepwater Horizon exploded, and it’s safe to say that the BP spill will not be remembered as the modern green movement’s march on Washington. Climate legislation is dead in the Senate, and if the midterm polls are accurate, next year’s Congress will be even less inclined to act on global warming—or even believe it. President Obama—under constant pressure from the same Gulf Coast states that were drenched in oil—lifted his moratorium on deepwater drilling earlier this month, before the initial deadline of Nov. 30 and before investigations into the true cause of the accident were complete. The government response to the disaster, while heroic at times, was deeply problematic, with evidence that Washington kept the public in the dark for weeks about the true size of the spill. The response on the ground was marred by obstructionism on the part of BP, to the point where off-duty cops in Louisiana seemed to be acting as hired muscle for the oil company that—let’s not forget—was chiefly responsible for the spill in the first place. The legacy is a climate of distrust and paranoia in the Gulf—academic researchers and government scientists quarreling over underwater oil, conspiracy theories about BP burning sea animals, and anger along the Gulf coast among those who feel they’ve been left behind, as the rest of the country has moved on.
Forget energy reform—the biggest change in the Gulf seems to be the flood of money from BP, as part of its $20 billion promise to “make this right,” as former CEO Tony Hayward put it.
…It’s not exactly a clean energy revolution.
None of this is surprising. It’s what I predicted at the beginning:
Thursday, October 14th, 2010
At the 2009 meeting of the American Geophysical Union, renowned climate scientist Richard Alley (Penn State) gave a keynote address, The Biggest Control Knob: Carbon dioxide in Earth’s Climate History, in which he used a variety of paleoclimatological proxy data to show how CO2 changes over much of Earth history have exerted a strong influence on global temperatures.
In this week’s issue of Science, Andrew Lacis and colleagues published an article, Atmospheric CO2: Principal control knob governing Earth’s temperature (abstract only; subscription required), following up on this theme. Unlike Alley’s talk, which mainly focused on the role of CO2, this team starts by going after water vapor and confronting a widely held perception that it is the dominant greenhouse gas:
It often is stated that water vapor is the chief greenhouse gas (GHG) in the atmosphere. For example, it has been asserted that “about 98% of the natural greenhouse effect is due to water vapour and stratiform clouds with CO2 contributing less than 2%”. If true, this would imply that changes in atmospheric CO2 are not important influences on the natural greenhouse capacity of Earth, and that the continuing increase in CO2 due to human activity is therefore not relevant to climate change. This misunderstanding is resolved through simple examination of the terrestrial greenhouse.
Water vapor is a main reason why the world has a pleasant and life-sustaining average temperature of about 15 degrees C. Based on the distance of Earth from the Sun, physics tells us that Earth should be a giant snowball hurtling through space at about -18 degrees C (roughly 0 degrees F). We are warmer than this because of the natural envelope of greenhouse gases, including water vapor and CO2, which absorb longwave heat radiating from the surface. This warms the surface of the planet just like a thick blanket keeps your body heat near your skin on a cold night.
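That “snowball” figure falls out of a simple radiative-balance calculation. Here is a minimal sketch in Python, using standard textbook values for the solar constant and planetary albedo (these numbers are mine, not from the Lacis paper):

```python
# Effective (no-greenhouse) temperature of Earth:
# absorbed sunlight = emitted heat  =>  S * (1 - A) / 4 = sigma * T^4
S = 1367.0       # solar constant at Earth's orbit, W/m^2 (textbook value)
A = 0.30         # planetary albedo: fraction of sunlight reflected back to space
sigma = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

T_eff = (S * (1 - A) / (4 * sigma)) ** 0.25  # in kelvin
print(round(T_eff - 273.15))  # about -18 degrees C without greenhouse gases
```

The ~33 degree gap between this and the observed ~15 degrees C average is the natural greenhouse effect that the authors then partition among water vapor, clouds, CO2, and the minor gases.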
In round numbers, water vapor accounts for about 50% of Earth’s greenhouse effect, with clouds contributing 25%, CO2 20%, and the minor GHGs and aerosols accounting for the remaining 5%.
So water vapor and clouds make up about 75% of the greenhouse effect, which sounds like the definition of “the dominant greenhouse gas” to most of us. How does one show that CO2 really is more important than water vapor as a primary greenhouse gas driving temperature change when it looks like water is so important?
Tuesday, October 5th, 2010
When we think of human population change and resource use, it’s easy to assume that more people will consume more resources, such as water, energy, and food. An important corollary is that resource limitations will limit population growth. Thomas Malthus was perhaps the most influential proponent of this idea.
However, several factors complicate this story:
(1) Affluence is a multiplier such that more people in a wealthy, high-consumption society lead to a disproportionate use of resources compared to people in poor countries. As my recent article on global change in Nature Knowledge shows,
the populations of China and India are roughly 1.32 and 1.14 billion people, respectively — about four times that of the US. However, the energy consumption per person in the US is six times larger than that of a person in China, and 15 times that of a person in India. Because the demand for resources like energy is often greater in wealthy, developed nations like the US, this means that countries with smaller populations can actually have a greater overall environmental impact. Over much of the past century, the US was the largest greenhouse gas emitter because of high levels of affluence and energy consumption. In 2007, China overtook the US in terms of overall CO2 emissions as a result of economic development, increasing personal wealth, and the demand for consumer goods, including automobiles.
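The multiplier logic here is just population times per-capita consumption. A quick sketch using the ratios quoted above (totals normalized to the US; the population figures are round numbers, and the result is illustrative only):

```python
# Total consumption = population x per-capita rate (US per-capita rate = 1.0).
US_POP = 0.31e9  # ~310 million people (2010, round number)

totals = {
    "US":    US_POP * 1.0,
    "China": 1.32e9 * (1 / 6),   # per-capita energy use ~1/6 of the US
    "India": 1.14e9 * (1 / 15),  # per-capita energy use ~1/15 of the US
}
for country, total in totals.items():
    print(country, round(total / totals["US"], 2))
# Despite roughly 4x the population, total energy use in China and India
# still comes out below the US's on these per-capita ratios.
```

CO2 emissions are a different ledger, of course: China’s coal-heavy energy mix is part of how its total emissions overtook the US’s in 2007 even at far lower per-capita energy use.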
(2) Interestingly, resource limitations may actually inhibit our ability to slow population growth. Yes, you read that right. A new paper by John DeLong and colleagues in this week’s PLOS One (open access) argues exactly this. Here’s why:
Influential demographic projections suggest that the global human population will stabilize at about 9–10 billion people by mid-century. These projections rest on two fundamental assumptions. The first is that the energy needed to fuel development and the associated decline in fertility will keep pace with energy demand far into the future. The second is that the demographic transition is irreversible such that once countries start down the path to lower fertility they cannot reverse to higher fertility. Both of these assumptions are problematic and may have an effect on population projections. Here we examine these assumptions explicitly. Specifically, given the theoretical and empirical relation between energy-use and population growth rates, we ask how the availability of energy is likely to affect population growth through 2050. Using a cross-country data set, we show that human population growth rates are negatively related to per-capita energy consumption, with zero growth occurring at ~13 kW, suggesting that the global human population will stop growing only if individuals have access to this amount of power. Further, we find that current projected future energy supply rates are far below the supply needed to fuel a global demographic transition to zero growth, suggesting that the predicted leveling-off of the global population by mid-century is unlikely to occur, in the absence of a transition to an alternative energy source. Direct consideration of the energetic constraints underlying the demographic transition results in a qualitatively different population projection than produced when the energetic constraints are ignored. We suggest that energetic constraints be incorporated into future population projections.
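The scale of the gap the abstract describes is easy to check with back-of-the-envelope arithmetic. In this sketch, the ~13 kW threshold comes from the abstract; the projected population and current global power supply are round figures I’ve supplied for illustration:

```python
# If zero population growth requires ~13 kW per person, how much total power
# would a stabilized mid-century population need?
THRESHOLD_KW = 13.0  # per-capita power at which growth hits zero (per the abstract)
POPULATION = 9e9     # projected mid-century population (round number)

needed_tw = THRESHOLD_KW * POPULATION / 1e9  # kW -> TW (1 TW = 1e9 kW)
current_tw = 17.0                            # rough global primary power supply, ~2010

print(needed_tw)               # 117.0 TW
print(needed_tw / current_tw)  # roughly 7x today's entire supply
```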
I love these kinds of unexpected outcomes that make us think more critically about simplified assumptions when it comes to the drivers and impacts of global change.
DeLong, J., Burger, O., & Hamilton, M. (2010). Current Demographics Suggest Future Energy Supplies Will Be Inadequate to Slow Human Population Growth PLoS ONE, 5 (10) DOI: 10.1371/journal.pone.0013206
Photo credit: wili_hybrid
Wednesday, September 29th, 2010
Water security is making a bit of a splash this week. CNBC ran this story on the water crises in western U.S. states, where, as Felicity Barringer describes in the NY Times, the region may be closing in on a day of reckoning, creating a climate of pessimism among some western water managers.
The scientific community is also weighing in. C.J. Vörösmarty and colleagues published a review paper in this week’s issue of Nature in which they evaluate the worldwide risk of water security and threats to aquatic biodiversity (edited slightly to remove citations and statistics):
We find that nearly 80% (4.8 billion) of the world’s population (for 2000) lives in areas where either incident human water security or biodiversity threat exceeds the 75th percentile. Regions of intensive agriculture and dense settlement show high incident threat, as exemplified by much of the United States, virtually all of Europe (excluding Scandinavia and northern Russia), and large portions of central Asia, the Middle East, the Indian subcontinent and eastern China. Smaller contiguous areas of high incident threat appear in central Mexico, Cuba, North Africa, Nigeria, South Africa, Korea and Japan. The impact of water scarcity accentuates threat to drylands, as is apparent in the desert belt transition zones across all continents (for example, Argentina, Sahel, Central Asia, Australian Murray–Darling basin).
What is the disparity of risk between rich vs. poor nations?
Most of Africa, large areas in central Asia and countries including China, India, Peru, or Bolivia struggle with establishing basic water services like clean drinking water and sanitation, and emerge here as regions of greatest adjusted human water security threat. Lack of water infrastructure yields direct economic impacts. Drought- and famine-prone Ethiopia, for example, has 150 times less reservoir storage per capita than North America and its climate and hydrological variability takes a 38% toll on gross domestic product (GDP). The number of people under chronically high water scarcity, many of whom are poor, is 1.7 billion or more globally, with 1.0 billion of these living in areas with high adjusted human water security threat.
They also argue that as wealth increases in a nation, the apparent ability to deal with water security issues improves, leading to the perception that threat level is declining:
Contrasts between incident and adjusted human water security threat are striking when considered relative to national wealth. Incident human water security threat is a rising but saturating function of per capita GDP, whereas adjusted human water security threat declines sharply in affluent countries in response to technological investments. The latter constitutes a unique expression of the environmental Kuznets curve, which describes rising ambient stressor loads during early-to-middle stages of economic growth followed by reduced loading through environmental controls instituted as development proceeds. The concept applies well to air pollutants that directly expose humans to health risks, and which can be regulated at their source. The global investment strategy for human water security shows a distinctly different pattern. Rich countries tolerate relatively high levels of ambient stressors, then reduce their negative impacts by treating symptoms instead of underlying causes of incident threat.
Biodiversity threats from river use appear to be significant globally:
The worldwide pattern of river threats documented here offers the most comprehensive explanation so far of why freshwater biodiversity is considered to be in a state of crisis. Estimates suggest that at least 10,000–20,000 freshwater species are extinct or at risk, with loss rates rivalling those of previous transitions between geological epochs like the Pleistocene-to-Holocene.
And what about future prospects?
We remain off-pace for meeting the Millennium Development Goals for basic sanitation services, a testament to the lack of societal resolve, when one considers that a century of engineering know-how is available and returns on investment in facilities are high. For Organisation for Economic Co-operation and Development (OECD) and BRIC (Brazil, Russia, India and China) countries alone, 800 billion US dollars per year will be required in 2015 to cover investments in water infrastructure, a target likely to go unmet. The situation is even more daunting for biodiversity. International goals for its protection lag well behind expectation and global investments are poorly enumerated but likely to be orders of magnitude lower than those for human water security, leaving at risk animal and plant populations, critical habitat and ecosystem services that directly underpin the livelihoods of many of the world’s poor.
…with a not-so-comforting conclusion:
Left unaddressed, these linked human water security–biodiversity water challenges are forecast to generate social instability of growing concern to civil and military planners.
Vörösmarty, C., McIntyre, P., Gessner, M., Dudgeon, D., Prusevich, A., Green, P., Glidden, S., Bunn, S., Sullivan, C., Liermann, C., & Davies, P. (2010). Global threats to human water security and river biodiversity Nature, 467 (7315), 555-561 DOI: 10.1038/nature09440
Photo credit: suburbanbloke
Friday, September 24th, 2010
Mike Berners-Lee and Duncan Clark at The Guardian have a recent post in their series examining the carbon footprints of daily life activities. This one asks how much carbon is emitted, directly and indirectly, in building a car.
The carbon footprint of making a car is immensely complex. Ores have to be dug out of the ground and the metals extracted. These have to be turned into parts. Other components have to be brought together: rubber tyres, plastic dashboards, paint, and so on. All of this involves transporting things around the world. The whole lot then has to be assembled, and every stage in the process requires energy. The companies that make cars have offices and other infrastructure with their own carbon footprints, which we need to somehow allocate proportionately to the cars that are made.
….The best we can do is use so-called input-output analysis to break up the known total emissions of the world or a country into different industries and sectors, in the process taking account of how each industry consumes the goods and services of all the others. If we do this, and then divide the total emissions of the auto industry by the total amount of money spent on new cars, we reach a footprint of 720kg CO2e per £1000 spent.
This is only a guideline figure, of course, as some cars may be more efficiently produced than others of the same price. But it’s a reasonable ballpark estimate, and it suggests that cars have much bigger footprints than is traditionally believed. Producing a medium-sized new car costing £24,000 may generate more than 17 tonnes of CO2e – almost as much as three years’ worth of gas and electricity in the typical UK home.
17 (metric) tons is 17,000 kg or about 37,400 pounds. The U.S. EPA estimates that the average passenger vehicle in the U.S. emits 5-5.5 metric tons CO2e per year, assuming 12,000 miles driven.
If you do the math, this means the embodied CO2e emissions of making a car are about 3-3.5 years’ worth of tailpipe emissions from driving. Assuming that most people own their cars for longer than three years, this figure doesn’t jibe with what the authors claim:
The upshot is that – despite common claims to contrary – the embodied emissions of a car typically rival the exhaust pipe emissions over its entire lifetime. Indeed, for each mile driven, the emissions from the manufacture of a top-of-the-range Land Rover Discovery that ends up being scrapped after 100,000 miles may be as much as four times higher than the tailpipe emissions of a Citroen C1.
If people held onto their cars for 10 years (assuming 120,000 miles), tailpipe emissions would equal 50 metric tons of CO2e, and embodied emissions would be about 34% of tailpipe emissions. If people drove their cars for 20 years (assuming 240,000 miles), the exhaust emissions would rise to 100 metric tons CO2e, with embodied emissions dropping to 17% of tailpipe emissions.
Most folks would generally agree with the notion of driving their vehicle into the ground (as my recently dead 16-yr-old truck illustrates). Even so, you’d have to be driving a Toyota Prius to get a lifetime tailpipe emission that merely equals the embodied emissions of building it (assuming that a Prius achieves three times the mpg of a typical car, which would drop CO2e tailpipe emissions from 5 to about 1.7 metric tons per year, making a 10-year total tailpipe emission of 17 metric tons reasonable).
Thus, if you drive an average car for 10 years, your lifetime tailpipe emissions (50 metric tons) will be a lot larger than the embodied emissions to build the car (17 metric tons) (for a total emission of 67 metric tons). If you drive a hyper-efficient vehicle for 10 years, tailpipe and embodied emissions may be comparable (17 metric tons each, 34 metric tons total). This means you could buy a new Prius every three years, and the embodied emissions from all of these purchases plus tailpipe emissions would roughly equal a normal car driven for 10 years.
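The arithmetic in the last few paragraphs can be collected in a few lines. The embodied and tailpipe figures are the round numbers used above; treating a Prius’s embodied emissions as the same 17 tonnes as an average car is an assumption:

```python
# Embodied vs. tailpipe CO2e (metric tons) over a car's period of ownership.
EMBODIED_T = 17.0         # tonnes CO2e to build a ~£24,000 car (Guardian estimate)
AVG_TAILPIPE = 5.0        # tonnes CO2e/yr for an average car (EPA, ~12,000 mi/yr)
PRIUS_TAILPIPE = 5.0 / 3  # ~1.7 t/yr, assuming roughly 3x the mpg of an average car

def total_emissions(tailpipe_per_yr, years, cars_bought=1):
    """Embodied emissions for every car purchased plus cumulative tailpipe."""
    return EMBODIED_T * cars_bought + tailpipe_per_yr * years

print(total_emissions(AVG_TAILPIPE, 10))                   # 67.0 t: average car, 10 years
print(total_emissions(PRIUS_TAILPIPE, 10))                 # ~33.7 t: one Prius, 10 years
print(total_emissions(PRIUS_TAILPIPE, 10, cars_bought=3))  # ~67.7 t: a new Prius every ~3 years
```

The last line is the “new Prius every three years” scenario, which indeed lands near the 67-tonne total for a normal car kept 10 years.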
This raises an important question: What matters here? If the goal is to reduce total emissions, the best thing is to buy a car with a very high fuel efficiency and drive it for its full life, as the above examples illustrate.
Photo credit: atomicshark
Wednesday, September 22nd, 2010
The rise in global mean temperature of about 0.9 degrees C over the 20th century is one of the most well-known trends in the science of global change. Several modeling and empirical studies suggest that some of this warming (~0.3 degrees C) is due to natural causes like increased solar intensity and decreased volcanism (which reduces cloud-forming aerosols). Most of the warming attributed to these factors occurred before about 1950. The rest of the warming, about 0.6 degrees C, most of which occurred after 1960, can only be explained by the rise in greenhouse gases.
If warming is happening, what about that strange cooling dip from 1940-1970? This has often been attributed to the rise in aerosols from pollution (e.g., power plant smokestacks), which, like volcanic aerosols, seed clouds, block solar radiation, and cool temperatures. Once clean air legislation kicked in during the 1970s, these pollution aerosols declined, allowing more solar radiation to reach the earth’s surface. Conventional wisdom therefore suggests that cleaning up the atmosphere may have contributed somewhat to climate warming.
A new study in this week’s issue of Nature by David Thompson and colleagues, An abrupt drop in Northern Hemisphere sea surface temperature around 1970, challenges this idea, suggesting that oceans, rather than aerosols, may be the driver of this multi-decadal cooling blip:
The twentieth-century trend in global-mean surface temperature was not monotonic: temperatures rose from the start of the century to the 1940s, fell slightly during the middle part of the century, and rose rapidly from the mid-1970s onwards. The warming–cooling–warming pattern of twentieth-century temperatures is typically interpreted as the superposition of long-term warming due to increasing greenhouse gases and either cooling due to a mid-twentieth century increase of sulphate aerosols in the troposphere, or changes in the climate of the world’s oceans that evolve over decades (oscillatory multidecadal variability). Loadings of sulphate aerosol in the troposphere are thought to have had a particularly important role in the differences in temperature trends between the Northern and Southern hemispheres during the decades following the Second World War. Here we show that the hemispheric differences in temperature trends in the middle of the twentieth century stem largely from a rapid drop in Northern Hemisphere sea surface temperatures of about 0.3 C between about 1968 and 1972. The timescale of the drop is shorter than that associated with either tropospheric aerosol loadings or previous characterizations of oscillatory multidecadal variability. The drop is evident in all available historical sea surface temperature data sets, is not traceable to changes in the attendant metadata, and is not linked to any known biases in surface temperature measurements. The drop is not concentrated in any discrete region of the Northern Hemisphere oceans, but its amplitude is largest over the northern North Atlantic.
This is an interesting development in understanding ocean-atmosphere dynamics. The authors did not go very far in speculating why they thought this observed pattern of North Atlantic cooling occurred, so it’s not yet clear what this means. They offer the following:
The suddenness of the drop in Northern Hemisphere SSTs is reminiscent of ‘abrupt climate change’, such as has been inferred from the palaeoclimate record.
The timescale of the drop is important, because it is considerably shorter than that typically associated with either tropospheric aerosol forcing or oscillatory multidecadal SST variability.
The timing of the drop corresponds closely to a rapid freshening of the northern North Atlantic in the late 1960s/early 1970s (the ‘great salinity anomaly’).
So that potentially rules out things like the North Atlantic Oscillation, Atlantic Multidecadal Oscillation, or Arctic Oscillation, and instead suggests that freshwater inputs from glacial thaw may induce North Atlantic cooling, though likely on a much smaller scale than this.
Thompson, D., Wallace, J., Kennedy, J., & Jones, P. (2010). An abrupt drop in Northern Hemisphere sea surface temperature around 1970 Nature, 467 (7314), 444-447 DOI: 10.1038/nature09394
Image credit: http://commons.wikimedia.org/wiki/File:Instrumental_Temperature_Record_%28NASA%29.svg