Is it possible that, before the atmosphere reaches a new thermal equilibrium, the CO2 that humanity puts out every year could instead be brought into equilibrium by the world's plants, which may respond to the additional atmospheric CO2 by growing, reproducing and photosynthesizing at a faster rate?
Yes, with 2 caveats.
More CO2 encourages plant growth, but only up to the local plants' limit. Beyond that limit the plants are saturated and the extra CO2 simply accumulates in the air. If you intend to prevent atmospheric CO2 buildup, you're better off supplying CO2 gradually rather than in large surges.
The other caveat concerns dead trees and other plants, which should be harvested or logged to make way for new growth (while providing lumber, fuel and employment).
Celebrating The Nobel Prizes
Editor's note: This story was originally posted in the July 2007 issue, and has been reposted to highlight the long history of Nobelists publishing in Scientific American.
For a scientist studying climate change, “eureka” moments are unusually rare. Instead progress is generally made by a painstaking piecing together of evidence from every new temperature measurement, satellite sounding or climate-model experiment. Data get checked and rechecked, ideas tested over and over again. Do the observations fit the predicted changes? Could there be some alternative explanation? Good climate scientists, like all good scientists, want to ensure that the highest standards of proof apply to everything they discover.
And the evidence of change has mounted as climate records have grown longer, as our understanding of the climate system has improved and as climate models have become ever more reliable. Over the past 20 years, evidence that humans are affecting the climate has accumulated inexorably, and with it has come ever greater certainty across the scientific community in the reality of recent climate change and the potential for much greater change in the future. This increased certainty is starkly reflected in the latest report of the Intergovernmental Panel on Climate Change (IPCC), the fourth in a series of assessments of the state of knowledge on the topic, written and reviewed by hundreds of scientists worldwide.
The panel released a condensed version of the first part of the report, on the physical science basis of climate change, in February. Called the “Summary for Policymakers,” it delivered to policymakers and ordinary people alike an unambiguous message: scientists are more confident than ever that humans have interfered with the climate and that further human-induced climate change is on the way. Although the report finds that some of these further changes are now inevitable, its analysis also confirms that the future, particularly in the longer term, remains largely in our hands—the magnitude of expected change depends on what humans choose to do about greenhouse gas emissions.
The physical science assessment focuses on four topics: drivers of climate change, changes observed in the climate system, understanding cause-and-effect relationships, and projection of future changes. Important advances in research into all these areas have occurred since the IPCC assessment in 2001. In the pages that follow, we lay out the key findings that document the extent of change and that point to the unavoidable conclusion that human activity is driving it.
Drivers of Climate Change
Atmospheric concentrations of many gases—primarily carbon dioxide, methane, nitrous oxide and halocarbons (gases once used widely as refrigerants and spray propellants)—have increased because of human activities. Such gases trap thermal energy (heat) within the atmosphere by means of the well-known greenhouse effect, leading to global warming. The atmospheric concentrations of carbon dioxide, methane and nitrous oxide remained roughly stable for nearly 10,000 years, before the abrupt and rapidly accelerating increases of the past 200 years. Growth rates for concentrations of carbon dioxide have been faster in the past 10 years than over any 10-year period since continuous atmospheric monitoring began in the 1950s, with concentrations now roughly 35 percent above preindustrial levels (which can be determined from air bubbles trapped in ice cores). Methane levels are roughly two and a half times preindustrial levels, and nitrous oxide levels are around 20 percent higher.
How can we be sure that humans are responsible for these increases? Some greenhouse gases (most of the halocarbons, for example) have no natural source. For other gases, two important observations demonstrate human influence. First, the geographic differences in concentrations reveal that sources occur predominantly over land in the more heavily populated Northern Hemisphere. Second, analysis of isotopes, which can distinguish among sources of emissions, demonstrates that the majority of the increase in carbon dioxide comes from combustion of fossil fuels (coal, oil and natural gas). Methane and nitrous oxide increases derive from agricultural practices and the burning of fossil fuels.
Climate scientists use a concept called radiative forcing to quantify the effect of these increased concentrations on climate. Radiative forcing is the change that is caused in the global energy balance of the earth relative to preindustrial times. (Forcing is usually expressed as watts per square meter.) A positive forcing induces warming; a negative forcing induces cooling. We can determine the radiative forcing associated with the long-lived greenhouse gases fairly precisely, because we know their atmospheric concentrations, their spatial distribution and the physics of their interaction with radiation.
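As a rough illustration of how such a forcing is estimated, the CO2 rise quoted above can be plugged into the simplified logarithmic expression commonly used in the climate literature. The formula and the 280 ppm preindustrial concentration are standard textbook values, not figures from this article:

```python
import math

def co2_forcing(c_now, c_preindustrial):
    """Approximate radiative forcing (W/m^2) from a CO2 increase,
    using the widely quoted simplified expression
    dF = 5.35 * ln(C / C0)."""
    return 5.35 * math.log(c_now / c_preindustrial)

# Concentrations roughly 35 percent above preindustrial levels
# (preindustrial CO2 was about 280 parts per million):
print(round(co2_forcing(280 * 1.35, 280), 2))  # about 1.61 W/m^2
```

This is a sketch, not the IPCC's full calculation, but it shows why the forcing can be pinned down well once concentrations are known.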
Climate change is not driven just by increased greenhouse gas concentrations; other mechanisms—both natural and human-induced—also play a part. Natural drivers include changes in solar activity and large volcanic eruptions. The report identifies several additional significant human-induced forcing mechanisms—microscopic particles called aerosols, stratospheric and tropospheric ozone, surface albedo (reflectivity) and aircraft contrails—although the influences of these mechanisms are much less certain than those of greenhouse gases.
Investigators are least certain of the climatic influence of something called the aerosol cloud albedo effect, in which aerosols from human origins interact with clouds in complex ways and make the clouds brighter, reflecting sunlight back to space. Another source of uncertainty comes from the direct effect of aerosols from human origins: How much do they reflect and absorb sunlight directly as particles? Overall these aerosol effects promote cooling that could offset the warming effect of long-lived greenhouse gases to some extent. But by how much? Could it overwhelm the warming? Among the advances achieved since the 2001 IPCC report is that scientists have quantified the uncertainties associated with each individual forcing mechanism through a combination of many modeling and observational studies. Consequently, we can now confidently estimate the total human-induced component. Our best estimate is some 10 times larger than the best estimate of the natural radiative forcing caused by changes in solar activity.
This increased certainty of a net positive radiative forcing fits well with the observational evidence of warming discussed next. These forcings can be visualized as a tug-of-war, with positive forcings pulling the earth to a warmer climate and negative ones pulling it to a cooler state. The result is no contest: we know the strength of the competitors better than ever before. The earth is being pulled to a warmer climate and will be pulled increasingly in this direction as the “anchorman” of greenhouse warming continues to grow stronger and stronger.
Observed Climate Changes
The many new or improved observational data sets that became available in time for the 2007 IPCC report allowed a more comprehensive assessment of changes than was possible in earlier reports. Observational records indicate that 11 of the past 12 years are the warmest since reliable records began around 1850.
The odds of such warm years happening in sequence purely by chance are exceedingly small. Changes in three important quantities—global temperature, sea level and snow cover in the Northern Hemisphere—all show evidence of warming, although the details vary. The previous IPCC assessment reported a warming trend of 0.6 ± 0.2 degree Celsius over the period 1901 to 2000. Because of the strong recent warming, the updated trend over 1906 to 2005 is now 0.74 ± 0.18 degree C. Note that the 1956 to 2005 trend alone is 0.65 ± 0.15 degree C, emphasizing that the majority of 20th-century warming occurred in the past 50 years. The climate, of course, continues to vary around the increased averages, and extremes have changed consistently with these averages—frost days and cold days and nights have become less common, while heat waves and warm days and nights have become more common.
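The quoted century-scale and half-century trends imply an accelerating warming rate, which a quick back-of-the-envelope calculation makes explicit (a sketch using only the figures above):

```python
# Per-decade warming rates implied by the two trends quoted above.
trend_1906_2005 = 0.74 / 10   # 0.74 degree C spread over 10 decades
trend_1956_2005 = 0.65 / 5    # 0.65 degree C spread over 5 decades

print(round(trend_1906_2005, 3))  # 0.074 degree C per decade
print(round(trend_1956_2005, 2))  # 0.13 degree C per decade, nearly double the century-long rate
```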
The properties of the climate system include not just familiar concepts of averages of temperature, precipitation, and so on but also the state of the ocean and the cryosphere (sea ice, the great ice sheets in Greenland and Antarctica, glaciers, snow, frozen ground, and ice on lakes and rivers). Complex interactions among different parts of the climate system are a fundamental part of climate change—for example, reduction in sea ice increases the absorption of heat by the ocean and the heat flow between the ocean and the atmosphere, which can also affect cloudiness and precipitation.
A large number of additional observations are broadly consistent with the observed warming and reflect a flow of heat from the atmosphere into other components of the climate system. Spring snow cover, which decreases in concert with rising spring temperatures in northern midlatitudes, dropped abruptly around 1988 and has remained low since. This drop is of concern because snow cover is important to soil moisture and water resources in many regions.
In the ocean, we clearly see warming trends, which decrease with depth, as expected. These changes indicate that the ocean has absorbed more than 80 percent of the heat added to the climate system: this heating is a major contributor to sea-level rise. Sea level rises because water expands as it is warmed and because water from melting glaciers and ice sheets is added to the oceans. Since 1993 satellite observations have permitted more precise calculations of global sea-level rise, now estimated to be 3.1 ± 0.7 millimeters per year over the period 1993 to 2003. Some previous decades displayed similarly fast rates, and longer satellite records will be needed to determine unambiguously whether sea-level rise is accelerating. Substantial reductions in the extent of Arctic sea ice since 1978 (2.7 ± 0.6 percent per decade in the annual average, 7.4 ± 2.4 percent per decade for summer), increases in permafrost temperatures and reductions in glacial extent globally and in Greenland and Antarctic ice sheets have also been observed in recent decades. Unfortunately, many of these quantities were not well monitored until recent decades, so the starting points of their records vary.
Hydrological changes are broadly consistent with warming as well. Water vapor is the strongest greenhouse gas; unlike other greenhouse gases, it is controlled principally by temperature. It has generally increased since at least the 1980s. Precipitation is very variable locally but has increased in several large regions of the world, including eastern North and South America, northern Europe, and northern and central Asia. Drying has been observed in the Sahel, the Mediterranean, southern Africa and parts of southern Asia. Ocean salinity can act as a massive rain gauge. Near-surface waters of the oceans have generally freshened in middle and high latitudes, while they have become saltier in lower latitudes, consistent with changes in large-scale patterns of precipitation.
Reconstructions of past climate—paleoclimate—from tree rings and other proxies provide important additional insights into the workings of the climate system with and without human influence. They indicate that the warmth of the past half a century is unusual in at least the previous 1,300 years. The warmest period between A.D. 700 and 1950 was probably A.D. 950 to 1100, which was several tenths of a degree C cooler than the average temperature since 1980.
Attribution of Observed Changes
Although confidence is high both that human activities have caused a positive radiative forcing and that the climate has actually changed, can we confidently link the two? This is the question of attribution: Are human activities primarily responsible for observed climate changes, or is it possible they result from some other cause, such as some natural forcing or simply spontaneous variability within the climate system? The 2001 IPCC report concluded it was likely (more than 66 percent probable) that most of the warming since the mid-20th century was attributable to humans. The 2007 report goes significantly further, upping this to very likely (more than 90 percent probable).
The source of the extra confidence comes from a multitude of separate advances. For a start, observational records are now roughly five years longer, and the global temperature increase over this period has been largely consistent with IPCC projections of greenhouse gas–driven warming made in previous reports dating back to 1990. In addition, changes in more aspects of the climate have been considered, such as those in atmospheric circulation or in temperatures within the ocean. Such changes paint a consistent and now broadened picture of human intervention. Climate models, which are central to attribution studies, have also improved and are able to represent the current climate and that of the recent past with considerable fidelity. Finally, some important apparent inconsistencies noted in the observational record have been largely resolved since the last report.
The most important of these was an apparent mismatch between the instrumental surface temperature record (which showed significant warming over recent decades, consistent with a human impact) and the balloon and satellite atmospheric records (which showed little of the expected warming). Several new studies of the satellite and balloon data have now largely resolved this discrepancy&mdashwith consistent warming found at the surface and in the atmosphere.
An experiment with the real world that duplicated the climate of the 20th century with constant (rather than increasing) greenhouse gases would be the ideal way to test for the cause of climate change, but such an experiment is of course impossible. So scientists do the next best thing: they simulate the past with climate models.
Two important advances since the last IPCC assessment have increased confidence in the use of models for both attribution and projection of climate changes. The first is the development of a comprehensive, closely coordinated ensemble of simulations from 18 modeling groups around the world for the historical and future evolution of the earth’s climate. Using many models helps to quantify the effects of uncertainties in various climate processes on the range of model simulations. Although some processes are well understood and well represented by physical equations (the flow of the atmosphere and ocean or the propagation of sunlight and heat, for example), some of the most critical components of the climate system are less well understood, such as clouds, ocean eddies and transpiration by vegetation. Modelers approximate these components using simplified representations called parameterizations. The principal reason to develop a multimodel ensemble for the IPCC assessments is to understand how this lack of certainty affects attribution and prediction of climate change. The ensemble for the latest assessment is unprecedented in the number of models and experiments performed.
The second advance is the incorporation of more realistic representations of climate processes in the models. These processes include the behavior of atmospheric aerosols, the dynamics (movement) of sea ice, and the exchange of water and energy between the land and the atmosphere. More models now include the major types of aerosols and the interactions between aerosols and clouds.
When scientists use climate models for attribution studies, they first run simulations with estimates of only “natural” climate influences over the past 100 years, such as changes in solar output and major volcanic eruptions. They then run models that include human-induced increases in greenhouse gases and aerosols. The results of such experiments are striking. Models using only natural forcings are unable to explain the observed global warming since the mid-20th century, whereas they can do so when they include anthropogenic factors in addition to natural ones. Large-scale patterns of temperature change are also most consistent between models and observations when all forcings are included.
Two patterns provide a fingerprint of human influence. The first is greater warming over land than ocean and greater warming at the surface of the sea than in the deeper layers. This pattern is consistent with greenhouse gas–induced warming by the overlying atmosphere: the ocean warms more slowly because of its large thermal inertia. The warming also indicates that a large amount of heat is being taken up by the ocean, demonstrating that the planet’s energy budget has been pushed out of balance.
A second pattern of change is that while the troposphere (the lower region of the atmosphere) has warmed, the stratosphere, just above it, has cooled. If solar changes provided the dominant forcing, warming would be expected in both atmospheric layers. The observed contrast, however, is just that expected from the combination of greenhouse gas increases and stratospheric ozone decreases. This collective evidence, when subjected to careful statistical analyses, provides much of the basis for the increased confidence that human influences are behind the observed global warming. Suggestions that cosmic rays could affect clouds, and thereby climate, have been based on correlations using limited records; they have generally not stood up when tested with additional data, and their physical mechanisms remain speculative.
What about at smaller scales? As spatial and temporal scales decrease, attribution of climate change becomes more difficult. This problem arises because natural small-scale temperature variations are less “averaged out” and thus more readily mask the change signal. Nevertheless, continued warming means the signal is emerging on smaller scales. The report has found that human activity is likely to have influenced temperature significantly down to the continental scale for all continents except Antarctica.
Human influence is discernible also in some extreme events such as unusually hot and cold nights and the incidence of heat waves. This does not mean, of course, that individual extreme events (such as the 2003 European heat wave) can be said to be simply “caused” by human-induced climate change—usually such events are complex, with many causes. But it does mean that human activities have, more likely than not, affected the chances of such events occurring.
Projections of Future Changes
How will climate change over the 21st century? This critical question is addressed using simulations from climate models based on projections of future emissions of greenhouse gases and aerosols. The simulations suggest that, for greenhouse gas emissions at or above current rates, changes in climate will very likely be larger than the changes already observed during the 20th century. Even if emissions were immediately reduced enough to stabilize greenhouse gas concentrations at current levels, climate change would continue for centuries. This inertia in the climate results from a combination of factors. They include the heat capacity of the world’s oceans and the millennial timescales needed for the circulation to mix heat and carbon dioxide throughout the deep ocean and thereby come into equilibrium with the new conditions.
To be more specific, the models project that over the next 20 years, for a range of plausible emissions, the global temperature will increase at an average rate of about 0.2 degree C per decade, close to the observed rate over the past 30 years. About half of this near-term warming represents a “commitment” to future climate change arising from the inertia of the climate system response to current atmospheric concentrations of greenhouse gases.
The long-term warming over the 21st century, however, is strongly influenced by the future rate of emissions, and the projections cover a wide variety of scenarios, ranging from very rapid to more modest economic growth and from more to less dependence on fossil fuels. The best estimates of the increase in global temperatures range from 1.8 to 4.0 degrees C for the various emission scenarios, with higher emissions leading to higher temperatures. As for regional impacts, projections indicate with more confidence than ever before that these will mirror the patterns of change observed over the past 50 years (greater warming over land than ocean, for example) but that the size of the changes will be larger than they have been so far.
The simulations also suggest that the removal of excess carbon dioxide from the atmosphere by natural processes on land and in the ocean will become less efficient as the planet warms. This change leads to a higher percentage of emitted carbon dioxide remaining in the atmosphere, which then further accelerates global warming. This is an important positive feedback on the carbon cycle (the exchange of carbon compounds throughout the climate system). Although models agree that carbon-cycle changes represent a positive feedback, the range of their responses remains very large, depending, among other things, on poorly understood changes in vegetation or soil uptake of carbon as the climate warms. Such processes are an important topic of ongoing research.
The models also predict that climate change will affect the physical and chemical characteristics of the ocean. The estimates of the rise in sea level during the 21st century range from about 30 to 40 centimeters, again depending on emissions. More than 60 percent of this rise is caused by the thermal expansion of the ocean. Yet these model-based estimates do not include the possible acceleration of recently observed increases in ice loss from the Greenland and Antarctic ice sheets. Although scientific understanding of such effects is very limited, they could add an additional 10 to 20 centimeters to sea-level rises, and the possibility of significantly larger rises cannot be excluded. The chemistry of the ocean is also affected, as the increased concentrations of atmospheric carbon dioxide will cause the ocean to become more acidic.
Some of the largest changes are predicted for polar regions. These include significant increases in high-latitude land temperatures and in the depth of thawing in permafrost regions and sharp reductions in the extent of summer sea ice in the Arctic basin. Lower latitudes will likely experience more heat waves, heavier precipitation, and stronger (but perhaps less frequent) hurricanes and typhoons. The extent to which hurricanes and typhoons may strengthen is uncertain and is a subject of much new research.
Some important uncertainties remain, of course. For example, the precise way in which clouds will respond as temperatures increase is a critical factor governing the overall size of the projected warming. The complexity of clouds, however, means that their response has been frustratingly difficult to pin down, and, again, much research remains to be done in this area.
We are now living in an era in which both humans and nature affect the future evolution of the earth and its inhabitants. Unfortunately, the crystal ball provided by our climate models becomes cloudier for predictions out beyond a century or so. Our limited knowledge of the response of both natural systems and human society to the growing impacts of climate change compounds our uncertainty. One result of global warming is certain, however. Plants, animals and humans will be living with the consequences of climate change for at least the next thousand years.
At its core, the issue of ocean acidification is simple chemistry. There are two important things to remember about what happens when carbon dioxide dissolves in seawater. First, the pH of seawater gets lower as it becomes more acidic. Second, this process binds up carbonate ions and makes them less abundant—ions that corals, oysters, mussels, and many other shelled organisms need to build shells and skeletons.
A More Acidic Ocean
This graph shows rising levels of carbon dioxide (CO2) in the atmosphere, rising CO2 levels in the ocean, and decreasing pH in the water off the coast of Hawaii. (NOAA PMEL Carbon Program)
Carbon dioxide is naturally in the air: plants need it to grow, and animals exhale it when they breathe. But, thanks to people burning fuels, there is now more carbon dioxide in the atmosphere than at any time in the past 15 million years. Most of this CO2 collects in the atmosphere and, because it absorbs heat from the sun, creates a blanket around the planet, warming it. But some 30 percent of this CO2 dissolves into seawater, where it doesn't remain as floating CO2 molecules. A series of chemical changes break down the CO2 molecules and recombine them with others.
When water (H2O) and CO2 mix, they combine to form carbonic acid (H2CO3). Carbonic acid is weak compared to some of the well-known acids that break down solids, such as hydrochloric acid (the main ingredient in gastric acid, which digests food in your stomach) and sulfuric acid (the main ingredient in car batteries, which can burn your skin with just a drop). The weaker carbonic acid may not act as quickly, but it works the same way as all acids: it releases hydrogen ions (H+), which bond with other molecules in the area.
Seawater that has more hydrogen ions is more acidic by definition, and it also has a lower pH. In fact, the definitions of acidification terms—acidity, H+, pH—are interlinked: acidity describes how many H+ ions are in a solution; an acid is a substance that releases H+ ions; and pH is the scale used to measure the concentration of H+ ions.
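These interlinked definitions boil down to one relation: pH is the negative base-10 logarithm of the hydrogen ion concentration. A minimal sketch of the conversion in both directions:

```python
import math

def ph_from_h(h):
    """pH is the negative base-10 log of the H+ concentration (mol/L)."""
    return -math.log10(h)

def h_from_ph(ph):
    """H+ concentration (mol/L) recovered from a pH value."""
    return 10 ** (-ph)

print(round(ph_from_h(1e-7), 2))                 # 7.0: neutral water
print(round(h_from_ph(6.0) / h_from_ph(7.0), 1))  # 10.0: each pH unit is a tenfold change in H+
```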
The lower the pH, the more acidic the solution. The pH scale goes from extremely basic at 14 (lye has a pH of 13) to extremely acidic at 1 (lemon juice has a pH of 2), with a pH of 7 being neutral (neither acidic nor basic). The ocean itself is not actually acidic in the sense of having a pH less than 7, and it won’t become acidic even with all the CO2 that is dissolving into the ocean. But the changes in the direction of increasing acidity are still dramatic.
So far, ocean pH has dropped from 8.2 to 8.1 since the industrial revolution, and is expected to fall another 0.3 to 0.4 pH units by the end of the century. A drop in pH of 0.1 might not seem like a lot, but the pH scale, like the Richter scale for measuring earthquakes, is logarithmic. For example, pH 4 is ten times more acidic than pH 5 and 100 times (10 times 10) more acidic than pH 6. If we continue to add carbon dioxide at current rates, the ocean’s acidity (its concentration of hydrogen ions) may increase another 120 percent by the end of this century as pH drops to 7.8 or 7.7, creating an ocean more acidic than any seen for the past 20 million years or more.
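Because the scale is logarithmic, the percentage changes quoted above follow directly from the pH differences. A short sketch using only the pH values in the text:

```python
def acidity_increase(ph_start, ph_end):
    """Percent increase in hydrogen ion concentration when pH falls
    from ph_start to ph_end (each pH unit is a factor of 10)."""
    return (10 ** (ph_start - ph_end) - 1) * 100

print(round(acidity_increase(5, 4)))       # 900: pH 4 is ten times more acidic than pH 5
print(round(acidity_increase(8.2, 8.1)))   # 26: the change since the industrial revolution
print(round(acidity_increase(8.2, 7.85)))  # 124: consistent with the ~120 percent figure
```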
Why Acidity Matters
The acidic waters from the CO2 seeps can dissolve shells and also make it harder for shells to grow in the first place. (Laetitia Plaisance)
Many chemical reactions, including those that are essential for life, are sensitive to small changes in pH. In humans, for example, normal blood pH ranges between 7.35 and 7.45. A drop in blood pH of 0.2-0.3 can cause seizures, comas, and even death. Similarly, a small change in the pH of seawater can have harmful effects on marine life, impacting chemical communication, reproduction, and growth.
The building of skeletons in marine creatures is particularly sensitive to acidity. One of the molecules that hydrogen ions bond with is carbonate (CO3^2-), a key component of calcium carbonate (CaCO3) shells. To make calcium carbonate, shell-building marine animals such as corals and oysters combine a calcium ion (Ca^2+) with carbonate (CO3^2-) from surrounding seawater, releasing carbon dioxide and water in the process.
Like calcium ions, hydrogen ions tend to bond with carbonate—but they have a greater attraction to carbonate than calcium does. When a hydrogen ion bonds with carbonate, a bicarbonate ion (HCO3-) is formed. Shell-building organisms can't extract the carbonate ion they need from bicarbonate, preventing them from using it to grow new shell. In this way, hydrogen essentially binds up the carbonate ions, making it harder for shelled animals to build their homes. Even if animals are able to build skeletons in more acidic water, they may have to spend more energy to do so, taking away resources from other activities like reproduction. If there are too many hydrogen ions around and not enough molecules for them to bond with, they can even begin breaking existing calcium carbonate molecules apart—dissolving shells that already exist.
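The carbonate-to-bicarbonate shift described above can be sketched with the Henderson-Hasselbalch relation for the second dissociation of carbonic acid. The pK2 value below is an illustrative assumption (roughly right for seawater), not a figure from this article, and it conveniently cancels out of the relative comparison:

```python
def carbonate_to_bicarbonate_ratio(ph, pk2=8.9):
    """Ratio [CO3^2-]/[HCO3^-] from the Henderson-Hasselbalch relation
    pH = pK2 + log10([CO3^2-]/[HCO3^-]).
    pk2=8.9 is an assumed, approximate seawater value for illustration."""
    return 10 ** (ph - pk2)

today = carbonate_to_bicarbonate_ratio(8.1)   # today's surface-ocean pH
future = carbonate_to_bicarbonate_ratio(7.8)  # a projected end-of-century pH
print(round(1 - future / today, 2))  # 0.5: about half the relative carbonate availability is lost
```

Because the assumed pK2 cancels in the ratio, the roughly 50 percent loss depends only on the 0.3-unit pH drop, not on the exact equilibrium constant.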
This is just one process that extra hydrogen ions—caused by dissolving carbon dioxide—may interfere with in the ocean. Organisms in the water, thus, have to learn to survive as the water around them has an increasing concentration of carbonate-hogging hydrogen ions.
Sonoma and San Diego test the waters for a new source of renewable energy.
Three Ways Andrew Wheeler Can Help Restore the EPA’s Dignity and Mission
Scott Pruitt is out—but can the new EPA chief escape Pruitt’s shadow of endless scandals, incompetence, and corruption?
How the Energy Grid Works
Fun fact: In most of the country, there’s a daily auction to sell energy into our power grids—with the least expensive sources winning. Also noteworthy: Coal’s not cheap.
The Forecast for Jobs in Appalachian Ohio Looks Sunny and Bright
A 400-megawatt solar farm could make this region a hub for clean power and big companies looking to cut carbon.
Like Wind Before It, Minnesota’s Solar Industry Is Booming
Rapid development and strong support in the state may help projects all across the country.
Energy Efficiency: The Clean Facts
Here’s what you need to know about energy efficiency and how you can help save the environment—and money—at the same time.
Why We Must Protect Canada’s Boreal Forest
Not only is this forest home to millions of indigenous people and endangered species, it’s also indispensable in helping us win the fight against climate change.
Blotting Out the Sun to Save the Earth? Seriously?
The prospect of geoengineering freaks us out. And it should—it signifies the lateness of our climate hour.
These Southern Cities Are Going 100 Percent Clean (Energy)
Here’s how they’re getting started.
On the Horizon in the Cowboy State: Wind Turbines
Wyoming, the country’s top coal producer, is wrangling support for wind power—and not a moment too soon.
Reinventing the Wind Turbine
We need wind farms to produce renewable power, and new solutions can keep the giant turbines from also harming birds and bats.
California Makes a Clean Break—from Carbon Pollution
What does it mean for a state that’s bigger and wealthier than many countries to commit itself to clean energy? It could end up meaning the world.
Whales With a Dam Problem
Orcas in the Pacific Northwest are struggling to boost their numbers. Could dams have something to do with it?
The Race to Develop Offshore Wind—and Protect Endangered Whales
As Northeast states make impressive commitments to this clean energy source, scientists are helping them stay out of the way of marine mammals.
Should You Go Solar?
Harnessing power generated by the sun reduces your reliance on fossil fuels, but it can come with a price tag. How to decide if it’s worth it to you.
The Carbon Cycle
The carbon cycle describes how carbon transfers between different reservoirs located on Earth. This cycle is important for maintaining a stable climate and carbon balance on Earth.
Quinault River Rainforest
Full of living entities, and the formerly living, the temperate rainforest at the Quinault River in Olympic Peninsula, Washington, and places like it are rich reservoirs of carbon.
Carbon is an essential element for all life forms on Earth. Whether these life forms take in carbon to help manufacture food or release carbon as part of respiration, the intake and output of carbon is a component of all plant and animal life.
Carbon is in a constant state of movement from place to place. It is stored in what are known as reservoirs, and it moves between these reservoirs through a variety of processes, including photosynthesis, burning fossil fuels, and simply releasing breath from the lungs. The movement of carbon from reservoir to reservoir is known as the carbon cycle.
Carbon can be stored in a variety of reservoirs, including plants and animals, which is why they are considered carbon life forms. Carbon is used by plants to build leaves and stems, which are then digested by animals and used for cellular growth. In the atmosphere, carbon is stored in the form of gases, such as carbon dioxide. It is also stored in oceans, captured by many types of marine organisms. Some organisms, such as clams or coral, use the carbon to form shells and skeletons. Most of the carbon on the planet is contained within rocks, minerals, and other sediment buried beneath the surface of the planet.
Because Earth is a closed system, the amount of carbon on the planet never changes. However, the amount of carbon in a specific reservoir can change over time as carbon moves from one reservoir to another. For example, some carbon in the atmosphere might be captured by plants to make food during photosynthesis. This carbon can then be ingested and stored in animals that eat the plants. When the animals die, they decompose, and their remains become sediment, trapping the stored carbon in layers that eventually turn into rock or minerals. Some of this sediment might form fossil fuels, such as coal, oil, or natural gas, which release carbon back into the atmosphere when the fuel is burned.
The carbon cycle is vital to life on Earth. Nature tends to keep carbon levels balanced, meaning that the amount of carbon naturally released from reservoirs is equal to the amount that is naturally absorbed by reservoirs. Maintaining this carbon balance allows the planet to remain hospitable for life. Scientists believe that humans have upset this balance by burning fossil fuels, which has added more carbon to the atmosphere than usual and led to climate change and global warming.
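The closed-system bookkeeping described above can be sketched in a few lines of code. The reservoir names echo the text, but the carbon stocks and flux amounts below are illustrative placeholders, not measured values:

```python
# A minimal sketch of the carbon cycle as transfers between reservoirs.
# Stocks and fluxes are illustrative placeholders, not measured values.

reservoirs = {
    "atmosphere": 850.0,          # illustrative carbon stocks
    "plants_and_animals": 550.0,
    "oceans": 38000.0,
    "rocks_and_sediment": 100000.0,
}

def transfer(stocks, source, sink, amount):
    """Move carbon between reservoirs; the planetary total is conserved."""
    stocks[source] -= amount
    stocks[sink] += amount

total_before = sum(reservoirs.values())

# Photosynthesis draws carbon out of the air; respiration and decay
# return it; burning fossil fuels moves buried carbon into the air.
transfer(reservoirs, "atmosphere", "plants_and_animals", 120.0)  # photosynthesis
transfer(reservoirs, "plants_and_animals", "atmosphere", 120.0)  # respiration/decay
transfer(reservoirs, "rocks_and_sediment", "atmosphere", 10.0)   # fossil fuel burning

total_after = sum(reservoirs.values())
print(total_before == total_after)   # True: Earth is a closed system
print(reservoirs["atmosphere"])      # 860.0: only the atmospheric stock grew
```

The conservation check captures the text's central point: the planet's total carbon never changes, but burning fossil fuels shifts carbon from the sediment reservoir into the atmosphere faster than natural processes remove it.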
MIT Engineers Have Discovered a Completely New Way of Generating Electricity
MIT engineers have discovered a way to generate electricity using tiny carbon particles that can create an electric current simply by interacting with an organic solvent in which they’re floating. The particles are made from crushed carbon nanotubes (blue) coated with a Teflon-like polymer (green). Credit: Jose-Luis Olivares, MIT. Based on a figure courtesy of the researchers.
Tiny Particles Power Chemical Reactions
A new material made from carbon nanotubes can generate electricity by scavenging energy from its environment.
MIT engineers have discovered a new way of generating electricity using tiny carbon particles that can create a current simply by interacting with the liquid surrounding them.
The liquid, an organic solvent, draws electrons out of the particles, generating a current that could be used to drive chemical reactions or to power micro- or nanoscale robots, the researchers say.
“This mechanism is new, and this way of generating energy is completely new,” says Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT. “This technology is intriguing because all you have to do is flow a solvent through a bed of these particles. This allows you to do electrochemistry, but with no wires.”
In a new study describing this phenomenon, the researchers showed that they could use this electric current to drive a reaction known as alcohol oxidation — an organic chemical reaction that is important in the chemical industry.
Strano is the senior author of the paper, which appears today (June 7, 2021) in Nature Communications. The lead authors of the study are MIT graduate student Albert Tianxiang Liu and former MIT researcher Yuichiro Kunai. Other authors include former graduate student Anton Cottrill, postdocs Amir Kaplan and Hyunah Kim, graduate student Ge Zhang, and recent MIT graduates Rafid Mollah and Yannick Eatmon.
The new discovery grew out of Strano’s research on carbon nanotubes — hollow tubes made of a lattice of carbon atoms, which have unique electrical properties. In 2010, Strano demonstrated, for the first time, that carbon nanotubes can generate “thermopower waves.” When a carbon nanotube is coated with a layer of fuel, moving pulses of heat, or thermopower waves, travel along the tube, creating an electrical current.
That work led Strano and his students to uncover a related feature of carbon nanotubes. They found that when part of a nanotube is coated with a Teflon-like polymer, it creates an asymmetry that makes it possible for electrons to flow from the coated to the uncoated part of the tube, generating an electrical current. Those electrons can be drawn out by submerging the particles in a solvent that is hungry for electrons.
To harness this special capability, the researchers created electricity-generating particles by grinding up carbon nanotubes and forming them into a sheet of paper-like material. One side of each sheet was coated with a Teflon-like polymer, and the researchers then cut out small particles, which can be any shape or size. For this study, they made particles that were 250 microns by 250 microns.
When these particles are submerged in an organic solvent such as acetonitrile, the solvent adheres to the uncoated surface of the particles and begins pulling electrons out of them.
“The solvent takes electrons away, and the system tries to equilibrate by moving electrons,” Strano says. “There’s no sophisticated battery chemistry inside. It’s just a particle and you put it into solvent and it starts generating an electric field.”
“This research cleverly shows how to extract the ubiquitous (and often unnoticed) electric energy stored in an electronic material for on-site electrochemical synthesis,” says Jun Yao, an assistant professor of electrical and computer engineering at the University of Massachusetts at Amherst, who was not involved in the study. “The beauty is that it points to a generic methodology that can be readily expanded to the use of different materials and applications in different synthetic systems.”
The current version of the particles can generate about 0.7 volts of electricity per particle. In this study, the researchers also showed that they can form arrays of hundreds of particles in a small test tube. This “packed bed” reactor generates enough energy to power a chemical reaction called an alcohol oxidation, in which an alcohol is converted to an aldehyde or a ketone. Usually, this reaction is not performed using electrochemistry because it would require too much external current.
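A rough sense of why arrays matter can be sketched with back-of-the-envelope arithmetic. The series-stacking assumption and the particle count here are ours; the article reports only about 0.7 volts per particle and "hundreds" of particles per tube:

```python
# Back-of-the-envelope estimate for a packed bed of the particles.
# NOTE: the series-stacking assumption and the particle count are
# illustrative; the article reports only ~0.7 V per particle.

volts_per_particle = 0.7
n_particles = 300            # hypothetical, within the reported "hundreds"

# If the particles behaved like battery cells stacked in series,
# their voltages would add:
stack_voltage = volts_per_particle * n_particles
print(f"{stack_voltage:.0f} V under the series assumption")
```

Whatever the real wiring inside the packed bed, the point stands that many small particles together can supply enough electrical energy to drive a reaction that a single particle could not.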
“Because the packed bed reactor is compact, it has more flexibility in terms of applications than a large electrochemical reactor,” Zhang says. “The particles can be made very small, and they don’t require any external wires in order to drive the electrochemical reaction.”
In future work, Strano hopes to use this kind of energy generation to build polymers using only carbon dioxide as a starting material. In a related project, he has already created polymers that can regenerate themselves using carbon dioxide as a building material, in a process powered by solar energy. This work is inspired by carbon fixation, the set of chemical reactions that plants use to build sugars from carbon dioxide, using energy from the sun.
In the longer term, this approach could also be used to power micro- or nanoscale robots. Strano’s lab has already begun building robots at that scale, which could one day be used as diagnostic or environmental sensors. The idea of being able to scavenge energy from the environment to power these kinds of robots is appealing, he says.
“It means you don’t have to put the energy storage on board,” he says. “What we like about this mechanism is that you can take the energy, at least in part, from the environment.”
Reference: “Solvent-induced electrochemistry at an electrically asymmetric carbon Janus particle” by Albert Tianxiang Liu, Yuichiro Kunai, Anton L. Cottrill, Amir Kaplan, Ge Zhang, Hyunah Kim, Rafid S. Mollah, Yannick L. Eatmon and Michael S. Strano, 7 June 2021, Nature Communications.
We're All Gonna Die: Climate Change Apocalypse by 2050
Man-made climate change (now dubbed "climate crisis" by The Guardian's editors) poses potentially serious risks for humanity in this century. But acknowledging the hazard is not enough for a growing claque of meteorological apocalypse porn peddlers who insist that if their prescriptions for solving the problem are not followed, civilization will soon come to an end.
Recent hawkers of fast approaching climate doom include David Wallace-Wells in his book The Uninhabitable Earth: Life After Warming, Cumbria University professor Jem Bendell's "Deep Adaptation" paper, and environmental activist Bill McKibben's Falter: Has the Human Game Begun to Play Itself Out? (my review is forthcoming).
In his foreword to the Breakthrough Centre's 8-page sketch of purported climate calamity, retired Australian admiral Chris Barrie asserts that it lays "bare the unvarnished truth about the desperate situation humans, and our planet, are in, painting a disturbing picture of the real possibility that human life on earth may be on the way to extinction, in the most horrible way."
To justify their alarm, the authors of the paper, David Spratt and Ian Dunlop, are basically channeling Harvard economist Martin Weitzman's dismal theorem. In deriving his dismal theorem, Weitzman probed what it would mean if equilibrium climate sensitivity (ECS)—conventionally defined as global average surface warming following a doubling of carbon dioxide concentrations—exceeded the likely range of 1.5–4.5°C.
Weitzman outlined a low probability-high consequence scenario in which ECS could be as high as 10°C. Such a case would indeed be catastrophic considering that the temperature difference between now and the last ice age is about 5°C and it took several thousand years for that increase to occur.
So just how likely is such an extremely high ECS? In its Fifth Assessment Report, the United Nations Intergovernmental Panel on Climate Change (IPCC) noted that the "equilibrium climate sensitivity (ECS) is likely in the range 1.5°C to 4.5°C, extremely unlikely less than 1°C, and very unlikely greater than 6°C." More reassuringly, a 2018 article in Climate Dynamics calculated a relatively low climate sensitivity range of between 1.1°C and 4.05°C (median 1.87°C).
The Breakthrough Centre paper rejects conventional cost-benefit analysis in favor of sketching out a "hothouse earth" scenario that relies on projections in a 2017 Proceedings of the National Academy of Sciences article that average global temperature will exceed 3°C by 2050. They devise their scenario with the aim of alerting policymakers to the idea that climate change could turn out to be worse than current climate model projections suggest.
In their scenario, sea level rises by about 20 inches by 2050 and by 6 to 10 feet by 2100. Fifty-five percent of the world's population is subjected annually to more than 20 days of heat "beyond the threshold of human survivability." Wildfire, heatwaves, drought, and inundating storms proliferate. Ecosystems collapse including the Amazon rainforest, coral reefs, and the Arctic. Global crop production falls by at least 20 percent. Unbearable heat, along with food and water shortages would force billions of people to migrate. The result of this "hothouse earth" scenario would be "a high likelihood of human civilization coming to an end."
On the basis of their scenario, the authors assert that human civilization can be protected only by "a massive global mobilization of resources ... in the coming decade to build a zero-emissions industrial system and set in train the restoration of a safe climate. This would be akin in scale to the World War II emergency mobilization."
Before racing to embrace their scenario, let's consider what is known about the current rate of climate change. According to relatively uncontroversial data, average global surface temperatures have increased by 0.9°C since 1880. Getting to an increase of 3°C above the pre-industrial level by 2050 would mean that temperatures would have to increase at the rate of about 0.7°C per decade from now on.
The State of the Climate in 2017 report issued last year by the American Meteorological Society cites weather balloon and satellite datasets indicating that, since 1979, the increase of global average temperature in the lower troposphere is proceeding at the rate of between 0.13°C and 0.19°C per decade. According to NASA's Earth Observatory, the rate of temperature increase since 1975 as measured by thermometers at the surface is roughly 0.15–0.20°C per decade. Basically, the rate of global temperature increase would have to triple in order to destroy civilization in the Breakthrough Centre scenario.
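The arithmetic behind these rate comparisons is simple enough to check directly. This sketch re-derives the figures quoted in the text (the inputs are the article's own numbers, not independent data):

```python
# Re-deriving the warming-rate figures quoted in the text.

warming_so_far = 0.9    # degrees C since 1880, as quoted
scenario_level = 3.0    # degrees C above pre-industrial by 2050 in the scenario
decades_left = 3        # roughly three decades remaining to 2050

required_rate = (scenario_level - warming_so_far) / decades_left
print(f"required: {required_rate:.2f} C per decade")

observed_rate = 0.20    # upper end of the quoted 0.15-0.20 C/decade range
print(f"ratio: {required_rate / observed_rate:.1f}x")  # roughly a tripling
```

Even taking the fastest observed rate, the scenario requires warming to accelerate by a factor of about 3.5; at the slower observed rates the required acceleration is larger still.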
The flaw with constructing scenarios is that they give free rein to our all-too-easy propensity to imagine disaster. Scenario building, with the goal of advising policymakers and the public on how to govern the globe in the context of environmental and economic policy, has failed its practitioners spectacularly.
Probably the best example of scenario building gone badly awry is Stanford biologist Paul Ehrlich's 1968 The Population Bomb: Population Control or Race to Oblivion?. Ehrlich sketched out three dismal scenarios in which hundreds of millions of people died in the 1970s from pandemic disease, thermonuclear war, and massive cancer epidemics sparked by exposure to synthetic pesticides. In his most hopeful scenario, the "major die-back" of hundreds of millions of people starving to death in India, China, Latin America, and Africa would end by 1985. Thereafter, the world population would be managed downward by the remaining enlightened countries to just 2 billion by 2025 and 1.5 billion by 2100. "Our only choices are a lower birth rate or a bigger death rate," Ehrlich declared.
Much like the Breakthrough Centre policy researchers, Ehrlich proposed sweeping plans to solve what he viewed as a desperate global problem. At home, he recommended that "a federal Department of Population and Environment (DPE) should be set up with the power to take whatever steps are necessary to establish a reasonable population size in the United States and to put an end to the steady deterioration of our environment." Although Ehrlich acknowledged that it would be politically impossible at the time when he wrote, he noted that "many of my colleagues feel that some sort of compulsory birth regulation would be necessary to achieve such control. One plan often mentioned involves the addition of temporary sterilants to water supplies or staple food. Doses of the antidote would be carefully rationed by the government to produce the desired population size."
Outside our borders, the United States and other developed countries would have to practice "triage." Food exports and aid must be denied to hopelessly overpopulated countries such as India since that would only delay and worsen the famines that must eventually pare back their excess citizenry.
Fifty years after Ehrlich outlined his gloomy scenarios, world population is at 7.7 billion and global average life expectancy has increased from 57 years to over 72 years. Ehrlich totally missed the scenario that actually unfolded, in which the prosperity that results from the spread of economic freedom and the rule of law lowers fertility and limits population growth, functioning as a kind of invisible hand of population control.
Nevertheless, despite the wrongheadedness of trying to use worst-case scenarios to guide policy, I have also noted that the projections of the climate and econometric models could be way underestimated.
The future trajectory of man-made climate change is not certain. Consequently, hedge fund manager Bob Litterman sensibly argues that climate change is an undiversifiable risk that would command a higher risk premium. Litterman likens climate change risk to the systemic risk that investors face in the stock market. It is hard to hedge when unknown unknowns can cause the prices of all assets to decline at once. Litterman's analysis suggests that some policies—perhaps a revenue-neutral carbon tax—could help mitigate climate risk.
More fantasy than fact, the Breakthrough Centre scenario fails to persuade that an impending climate apocalypse threatens human extinction. Worst-case scenarios mislead far more than they enlighten. Given what is known about the rate of global temperature increase, my best judgment is that it is not yet time to panic about the imminent end of civilization.
Reduction of conventional pollutants associated with energy use
The archetypal symbol of the industrial age was the smokestack and, in many developing countries, large energy facilities continue to represent modernity and economic opportunity. However, with increasing affluence and a better understanding of the adverse environmental and human health impact of most conventional air pollutants, the public’s willingness to accept dirty technologies has declined, especially during the last 30 years. The result has been a clear link in many countries between rising incomes and an increase in emphasis on environmental performance. Over time, energy end-use technologies (e.g., cooking stoves, automobiles) and energy conversion technologies (e.g., power plants) have become progressively cleaner, at least with respect to visible, local and immediately harmful pollutants.
In fact, the energy technology that has the most potential to immediately improve human health and well-being in many developing countries is relatively simple: the improved cooking stove. The use of such traditional fuels as wood and dung for cooking is inefficient and generates extremely high levels of indoor pollution. Accelerating the transition to more expensive but far cleaner kerosene, liquefied petroleum gas (LPG), or electric stoves would dramatically reduce exposure to unhealthy levels of particulate pollution in many developing countries, particularly among women and children. Other sectors that offer great opportunities to reduce emissions of conventional air pollutants and to improve public health are transport and electricity production. More stringent pollution control requirements for automobiles, heavy-duty vehicles and equipment and power plants, in particular, would substantially improve air quality.
In some cases, technology improvements that reduce the emissions of conventional air pollutants (such as sulfur dioxide, nitrogen oxides, hydrocarbons and particulate matter) can be expected to also reduce emissions of greenhouse gases. A good example is the use of natural gas for the production of electricity. This became increasingly common in the United States in the 1990s. One reason is that natural gas plants do not require the same pollution controls that coal-fired plants do (e.g., electrostatic precipitators, sulfur dioxide scrubbers, etc.). This has helped them to become competitive with coal-fired power stations in many countries that regulate conventional pollutant emissions. Some conventional pollutants, such as black carbon, directly contribute to global warming. In those cases, conventional emission controls can provide automatic climate co-benefits. In other cases, the relationship is more complicated. For example, sulfur particles have a cooling effect on the atmosphere. In general, most post-combustion conventional-pollutant control technologies do not reduce the emissions of carbon dioxide, the chief greenhouse gas. Moreover, agreements to reduce or control emissions that could disrupt global climate systems have proved to be difficult to negotiate.
Devising effective policy responses to a problem that is truly global and multi-generational in scale presents a challenge that is both unprecedented in the history of environmental regulation and daunting to developed and developing countries alike. The challenge for developing countries is greatly complicated by the need to expand access to essential energy services and to simultaneously provide low-cost energy for economic development.
President Barack Obama seems more concerned with appeasing environmental extremists in his administration than he is with the lost jobs of poor Americans. He's letting the environmentalists run wild with long pent-up schemes to force a change in the American way of life that includes small cars, small apartments and, for many, a return to an idealized 19th century lifestyle. It's not China that's responsible for American job losses; it's Washington's fault for shutting down whole industries and preventing new jobs from being created.
What's happened is that Obama has given the environmental extremists the power to make some of their wish list come true. Modern measurement techniques allow scientists to measure tiny concentrations of parts per million; much of this technology did not exist when the Clean Air Act was last amended in 1990. Using these new techniques, environmentalists are able to impose their fantasies upon American business and labor. For industry, removing the last parts per million is prohibitively costly. For instance, technology which could have removed the Gulf of Mexico oil spill was prohibited by the Environmental Protection Agency (EPA) because the discharged ocean water would still contain more than 15 parts per million of oil.
When the American economy was growing fast these EPA job killers were not so damaging. Now, in slower times, they are proving deadly.
Below are eight areas where the environmental extremists hope to wreak havoc on the American economy.
Carbon Dioxide. Human activity accounts for less than 4 percent of global CO2 emissions and CO2 itself accounts for only 10 or 20 percent of the greenhouse effect; water vapor accounts for most of the rest. The actual quantity of CO2 in the Earth's atmosphere is about 0.0387 percent, or 387 parts per million. The Christian Science Monitor recently published an excellent analysis of how the EPA's plans for reducing carbon dioxide could cause the loss of over a million jobs and raise every family's energy costs by over $1,200.
Factory boilers. The EPA wants new, more stringent limits on soot emissions from industrial and factory boilers. This would cost $9.5 billion according to the EPA, or over $20 billion according to the American Chemistry Council. A study released by the Council of Industrial Boiler Owners says the new rules would put 300,000 to 800,000 jobs at risk as industries opted to close plants rather than pay the expensive new costs. The ruling includes boilers used in manufacturing, processing, mining, and refining, as well as shopping malls, laundromats, apartments, restaurants, and hotels.
Home Remodeling. Some contractors are refusing to work on houses built before 1979 (when lead paint use was discontinued) because of stringent new EPA permitting required for lead paint removal. Lead paint in powdered or edible form can hurt growing children. It was once used in the hard gloss paint for wood surfaces, but has been painted over with non-lead-based paint during the past 30 years. The new fines of $37,000 per day are ruinous for smaller contractors and individual workers. Many jobs will therefore not be created as smaller contractors stop replacing window frames or turn down other work where lead paint may be present.
Ground Level Ozone. AutoBlog reports that the EPA has asked the U.S. government to enact draconian new smog regulations for ground-level ozone. The request to cut levels to 0.060 to 0.070 parts per million comes less than two years after the standard was set at 0.075 parts per million. As AutoBlog notes, "That doesn't sound like a very big change, but the New York Times reports that the agency quotes the price tag of such a change at between $19 billion and $100 billion per year by 2020. Oil manufacturers, manufacturing and utility companies are the main source of air pollution and they will have to spend heavily to meet the proposed regulation."
The Arctic National Wildlife Refuge (ANWR). The Fish and Wildlife Service is drawing up plans that define more parts of ANWR as "wilderness" thereby permanently removing any possibility for oil drilling in the vast field. The full Alaskan nature reserve is the size of South Carolina while the proposed drilling area would be the size of Dulles Airport.
Alaska Oil. Interior Secretary Ken Salazar has prohibited all off-shore drilling until further notice, although Shell Oil and others' proposed sites are in less than 150 feet of water and use fixed drilling platforms, not the floating kind used for deep water in the Gulf of Mexico. Potentially vast oil fields and the accompanying jobs are therefore on hold.
Cement Kiln Regulations. Sen. James Inhofe (R-Okla.), who led the fight to expose so-called man-made global warming, warns of a new EPA job-killing plan. "EPA's new cement kiln regulation could shut down 18 plants, threatening 1,800 direct jobs and 9,000 indirect jobs," he writes. "According to an analysis of EPA's rule by King's College (London) Professor Ragnar Lofstedt, EPA could send 28 million tons of U.S. cement production offshore, mainly to China."
The above are all large-scale restrictions. There are also many smaller, mostly unreported new regulations. A Heritage Foundation study describes 43 such restrictions imposed during 2010 and totaled up their cost as well over $26 billion. As Sen. Blanche Lincoln (D-Ark.) complained before her defeat, farmers, ranchers, and foresters "are increasingly frustrated and bewildered by vague, overreaching, and unnecessarily burdensome EPA regulations, each of which will add to their costs, making it harder for them to compete."
Gulf of Mexico Oil. While Salazar ostensibly lifted his illegal and unnecessary suspension of all oil drilling in the Gulf of Mexico, we don't yet know if he has put up interminable, cost-wrecking regulations in the ban's place. Just one of his changes, allowing government bureaucrats 90 days instead of the prior 30 days to issue every decision, may be enough to ruin future oil drilling. The big floating rigs rent for over half a million dollars a day to operate. Just the threat of non-decisions along the chain of government command may be fatal and do to oil drilling what the environmentalists did to nuclear energy—namely, shutting down all new plants by making the costs and risks prohibitive. Michael Bromwich, Salazar's director of the Bureau of Ocean Energy Management, said that there were only 10 new well permits pending, but according to The Washington Post there were 69 unapproved exploration and development plans sitting in his office. Even simple, continued drilling in already producing oil sands, where the geological conditions are measured and known, has been suspended.
Salazar also suspended shallow-well drilling from fixed platforms in water less than 500 feet deep. Washington issued only 13 such shallow-well permits in the seven months since the Macondo blowout in April. Before that, it was issuing about 13 shallow-well permits per month. As is often the case with Washington's heavy-handed regulators, it is the smaller companies, doing less costly drilling closer to shore, that are bankrupted or driven out of business by these costly and burdensome rules. All this comes after 40 years of successful drilling without a major blowout or spill.
Government restrictions and environmentalist lawsuits also affect other mining activity. For example, there is currently a shortage of rare earth elements from China, which are essential to a number of technologies, including hard drives and environmentalist-friendly hybrid-car batteries. Yet despite an abundance of rare earth reserves in the U.S., domestic production has been essentially shut down by the president's allies.
It's time for Congress to investigate what the EPA and its reckless agenda are costing American workers, businesses, and taxpayers.
Jon Basil Utley is associate publisher of The American Conservative. He was a foreign correspondent for Knight Ridder newspapers and former associate editor of The Times of the Americas. For 17 years, he was a commentator for the Voice of America. In the 1980s, he owned and operated a small oil drilling partnership in Pennsylvania.
Solar-to-Electricity Energy Efficiency.
The first step of the solar-to-feed/food process is the conversion of solar energy to electricity. We calculated the energy efficiency of this process, ηpv, using available information on 628 utility-scale (>1 ha) PV solar farms (including 347 from the United States, 73 from Japan, 35 from France, and 28 from China; Dataset S1A):

ηpv = Eout / (Iin × A),
where Eout is the annual electrical energy output of a solar farm, A is the total area of the solar farm, and Iin is the local annual irradiance energy incident per unit area. Electrical output and solar farm size were provided by Wiki-Solar as annual design output and, when available, by the US Energy Information Administration as average annual electrical output (Dataset S1A). Annual irradiance, for which we used Global Horizontal Irradiance, was determined using the Solargis Prospect tool (63). The median ηpv was 4.9%, a figure that implicitly incorporates several loss factors. We defined the lower and upper bounds of ηpv as the 30th and 70th percentiles: 4.1% and 5.6%. We note that these relatively low ηpv values, compared with ≈20% solar cell efficiency, are mainly attributable to the ≈50% solar panel ground coverage ratio (used to prevent interrow shading) (64, 65).
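As a minimal sketch, the per-farm efficiency calculation defined above can be written as follows (the farm figures below are illustrative placeholders, not values from Dataset S1A):

```python
def eta_pv(e_out_kwh: float, irradiance_kwh_m2_y: float, area_m2: float) -> float:
    """Solar-to-electricity efficiency: eta_pv = E_out / (I_in * A)."""
    return e_out_kwh / (irradiance_kwh_m2_y * area_m2)

# Hypothetical 10 ha farm producing 8.8 GWh/y under 1,800 kWh m^-2 y^-1:
efficiency = eta_pv(8.8e6, 1800.0, 1e5)  # ~0.049, close to the reported median
```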
The median energetic efficiency of concentrated solar power is 4.1% (Dataset S1B), lower than that of photovoltaic farms; this technology was therefore not considered further.
Electricity-to-Electron Donor Energy Efficiency.
The energetic efficiency of converting electrical energy to chemical energy stored in an electron donor molecule (i.e., microbial feedstock) is described by ηec:

ηec = YED × ΔH°ED,
where YED is the electron donor yield per unit of input electrical energy and ΔH°ED is the electron donor heat of combustion on a lower-heating-value basis (reflecting stored energy). ηec^H, ηec^F, and ηec^M correspond, respectively, to the energy efficiency of producing hydrogen, formate, and methanol. Based on a literature survey, 65% < ηec^H < 75% (66), 30% < ηec^F < 50% (16, 67–69), and 50% < ηec^M < 60% (24, 70). We note that ηec^H and ηec^M also include peripheral energy requirements, such as heating, cooling, pumping, compression, and airflow. For formate electrosynthesis (ηec^F), no such data on peripheral energy requirements are available; hence, we used the relatively wide range given above for its energetic efficiency.
In each case, water supplies the electrons needed to produce the electron donors:

2 H2O → 2 H2 + O2 (hydrogen)
2 H2O + 2 CO2 → 2 HCOOH + O2 (formic acid)
2 H2O + CO2 → CH3OH + 1.5 O2 (methanol)
For hydrogen and formate synthesis, 1 mol of water is required per mole of electron donor produced. Methanol synthesis requires two steps. In the first, 3 mol of H2O are electrolyzed to produce 3 mol of H2 and 1.5 mol of O2. The H2 stream is then channeled to a separate catalytic reactor, where it reacts with CO2 to generate 1 mol of methanol while regenerating 1 mol of H2O. In the overall reaction, 2 mol of water are required to produce 1 mol of methanol.
Electron Donor-to-Biomass Energy Efficiency.
The efficiency with which microorganisms convert the chemical energy stored in electron donors (i.e., microbial feedstock) into biomass is described by ηbio:

ηbio = YB × ΔH°B / ΔH°ED,
where YB is the biomass yield per mole of electron donor consumed, ΔH°B is the heat of combustion of bacterial biomass (i.e., the energy stored in biomass), taken as 20 MJ · kg-dw⁻¹ based on experimental results (71), and ΔH°ED is as previously defined in Eq. 2. YB values were taken from a recent study (25). For the lower and upper bounds of ηbio, we took the 30th and 70th percentiles of the values for each combination of electron donor and assimilation pathway.
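A sketch of this step, using the standard lower heating value of hydrogen; the biomass yield below is a placeholder for illustration only, since the actual YB values come from ref. 25:

```python
H2_LHV_KJ_PER_MOL = 242.0    # lower heating value of H2 (standard value)
DH_BIOMASS_KJ_PER_G = 20.0   # heat of combustion of biomass (20 MJ/kg-dw)

def eta_bio(y_b_g_dw_per_mol: float, dh_ed_kj_per_mol: float) -> float:
    """Fraction of the donor's combustion energy conserved in biomass (Eq. 3)."""
    return y_b_g_dw_per_mol * DH_BIOMASS_KJ_PER_G / dh_ed_kj_per_mol

# Placeholder yield of 6 g-dw per mol H2 (illustrative, not a value from ref. 25):
example = eta_bio(6.0, H2_LHV_KJ_PER_MOL)  # ~0.50
```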
Biomass-to-Food Product Efficiency.
The final step of food-grade SCP production is the conversion of wet biomass into protein by discarding all other cellular components, a step whose energy efficiency we denote ηfilter. We assume that the filtering process retains 100% of the protein. We calculated ηfilter as follows:

ηfilter = ρ × ΔH°P / ΔH°B,
where ρ is the fraction of usable protein in biomass on a gram per gram cell dry weight basis, ΔH°P is the heat of combustion of protein in kJ per gram protein, and ΔH°B is as previously defined in Eq. 3. ρ is taken from literature and falls within a range of 55 to 75% (11, 72). ΔH°P is taken as 16.7 MJ · kg −1 (73). The energetic costs required to extract protein from biomass are included in the microbial cultivation energy, as described below. Note that ηfilter does not appear in the feed production scenario, since, in that case, all cellular components are retained in the final product.
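Since every quantity in this step is given above, the ηfilter bounds can be reproduced directly:

```python
def eta_filter(rho: float, dh_protein: float = 16.7, dh_biomass: float = 20.0) -> float:
    """Eq. 4: energy retained after discarding non-protein cell components."""
    return rho * dh_protein / dh_biomass

# Using the literature range rho = 0.55 to 0.75:
bounds = (eta_filter(0.55), eta_filter(0.75))  # ~ (0.46, 0.63)
```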
Effective Electricity Use Efficiency, η*.
This represents the fraction of electricity that is used in the electrochemical process for the generation of the electron donor compound (i.e., microbial feedstock). The rest of the electricity produced is distributed among several supporting processes: DAC of CO2, provision of macronutrients for microbial cultivation, bioreactor operation, and biomass downstream processing.
We collected available information regarding the energetic demand of DAC of CO2 using multiple technologies. Dataset S1C shows all values; 6 and 9 MJ · kg-CO2⁻¹ are the 30th and 70th percentiles. We converted these values to represent the energy demand per kg of biomass. We assumed that all CO2 released from the bioreactor (e.g., from formate and methanol oxidation to provide cellular energy) is directly recycled without additional energetic cost. In this case, the energy demand for CO2 capture is directly proportional to the carbon assimilated into the microbial biomass. As the weight fraction of carbon in CO2 is 27% and in biomass is 48% [assuming a biomass formula of CH1.77O0.49N0.24 (74)], we obtained an energy demand for CO2 capture between 11 and 16 MJ · kg-dw⁻¹. As the combustion energy of biomass is 20 MJ · kg-dw⁻¹, the normalized energy demand for CO2 capture, θDAC, ranges between 0.5 and 0.8. Low-temperature solid-sorbent DAC methods may also capture water in humid climates at a stoichiometry of 2 to 5 mol H2O per mol CO2 (19). This water could contribute to the water requirements of the electrochemical and cultivation processes.
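The θDAC arithmetic above can be reproduced directly:

```python
C_IN_CO2 = 12.0 / 44.0                     # ~0.27 g carbon per g CO2
C_IN_BIOMASS = 0.48                        # from the biomass formula CH1.77O0.49N0.24
CO2_PER_BIOMASS = C_IN_BIOMASS / C_IN_CO2  # = 1.76 kg CO2 per kg-dw biomass
DH_BIOMASS_MJ_PER_KG = 20.0

def theta_dac(dac_mj_per_kg_co2: float) -> float:
    """DAC energy demand normalized by the combustion energy of biomass."""
    return dac_mj_per_kg_co2 * CO2_PER_BIOMASS / DH_BIOMASS_MJ_PER_KG

lo, hi = theta_dac(6.0), theta_dac(9.0)  # ~0.53 and ~0.79, i.e., the 0.5-0.8 range
```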
The energy requirement for the supply of macronutrients—ammonium, phosphate, and sulfate—was calculated, per kg of dry weight biomass, based on life cycle assessment literature and cell stoichiometry (Dataset S1D). The approximate stoichiometry for producing 1 kg of dw biomass was taken as 1.76 kg CO2 (calculated in Dataset S1C), 0.112 kg NH3 (12), 0.02 to 0.03 kg H3PO4 (75), and 0.01 kg H2SO4 (76). The largest energy input is for provision of green NH3 (29.6 to 39.7 MJ per kg NH3) (77). The total energy cost for the provision of nutrients accounts for 4.8 to 6.6 MJ · kg-dw biomass −1 (Dataset S1D). Normalizing these values by the energy of combustion of 1 kg biomass gives θnut, which lies in the range of 0.24 to 0.33.
Energy demand for the operation of the bioreactor, mainly stirring and cooling, was calculated to lie in the range of 7.7 to 15.6 MJ · kg-dw⁻¹ (Dataset S1D and Methods). This wide range, derived from the fermentation industry, allows us to consider a broad variety of possible reactor configurations and their associated stirring and cooling requirements. These requirements may differ greatly depending on the substrates used and their solubility; while a detailed treatment is beyond the scope of this study, the range considered encompasses such variation. Normalizing the 7.7 to 15.6 MJ · kg-dw⁻¹ by the energy of combustion of 1 kg of biomass gives θbioreactor, which lies in the range of 0.39 to 0.78.
The energy demand for biomass downstream processing (i.e., converting the wet biomass into the final product) was calculated differently for feed and food production. In the case of feed production, which includes centrifugation and spray drying, the energy demand was calculated to be 8.4 to 9.1 MJ · kg-dw −1 (Dataset S1D). The energy demand for food production, which includes the additional steps of bead milling and microfiltration, was calculated to be 10.5 to 21 MJ · kg-dw −1 (Dataset S1D). Normalizing these values by the energy of combustion of 1 kg biomass gives θdsp, which lies in the range of 0.42 to 0.46 for feed production and 0.52 to 1.1 for food production.
We note that the overall energy demand for provision of macronutrients, bioreactor operation, and biomass downstream processing is 21 to 31 MJ · kg-dw −1 for feed production and 23 to 43 MJ · kg-dw −1 for food production, which generally agrees with a previous study reporting energy demand of 28 to 32 MJ · kg-dw −1 (78).
The effective electricity use efficiency η* is defined as the fraction of electrical energy used for electrochemistry out of the total amount used in all processes. In order to produce 1 kg of biomass, the energy required for electrochemistry is ΔH°B / (ηec × ηbio), and the energy required for all other processes is ΔH°B × (θDAC + θnut + θbioreactor + θdsp). Therefore:

η* = (ΔH°B × ηec⁻¹ × ηbio⁻¹) / [ΔH°B × ηec⁻¹ × ηbio⁻¹ + ΔH°B × (θDAC + θnut + θbioreactor + θdsp)] = 1 / [1 + ηec × ηbio × (θDAC + θnut + θbioreactor + θdsp)].
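A sketch of the η* calculation; the input efficiencies and θ values below are illustrative mid-range numbers, not the paper's bound estimates:

```python
def eta_star(eta_ec: float, eta_bio: float, thetas) -> float:
    """Share of electricity going to electrochemistry:
    eta* = 1 / (1 + eta_ec * eta_bio * (theta_DAC + theta_nut + theta_bioreactor + theta_dsp))."""
    return 1.0 / (1.0 + eta_ec * eta_bio * sum(thetas))

# Placeholder inputs: eta_ec = 0.70, eta_bio = 0.40,
# thetas = [DAC, nutrients, bioreactor, downstream processing]
example = eta_star(0.70, 0.40, [0.65, 0.28, 0.58, 0.44])  # ~0.65
```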
Overall Solar-to-Feed/Food Energy Efficiency.
The overall energy efficiency of the solar-to-feed/food process, ηscp, is calculated as the product of the above-defined efficiencies. For the production of feed, the formula is:

ηscp = ηpv × ηec × ηbio × η*.
For the production of food, the extra step of microfiltration discards calorie-containing biomass. Hence, this additional factor is multiplied into the production chain, reducing the overall efficiency:

ηscp = ηpv × ηec × ηbio × ηfilter × η*.
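The two production chains differ only in the ηfilter factor, which a single helper can capture (all input values below are illustrative placeholders):

```python
def eta_scp(eta_pv: float, eta_ec: float, eta_bio: float,
            eta_star: float, eta_filter: float = 1.0) -> float:
    """Overall solar-to-feed/food efficiency; eta_filter defaults to 1
    for feed production, where no protein filtering step is applied."""
    return eta_pv * eta_ec * eta_bio * eta_filter * eta_star

feed = eta_scp(0.049, 0.70, 0.40, 0.65)        # ~0.0089
food = eta_scp(0.049, 0.70, 0.40, 0.65, 0.55)  # ~0.0049
```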
Estimation of Error.
To estimate the confidence interval for ηscp in each scenario, we used standard propagation of uncertainty, assuming all variables are independent. The uncertainty of each independent variable was taken as the difference between the 30th and 70th percentiles in our collected datasets (as shown in Table 1). The final uncertainty is the square root of the sum, over all variables, of the squared product of each variable's uncertainty and the partial derivative of ηscp with respect to that variable.
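A minimal numerical sketch of this propagation, assuming independent variables and using central differences for the partial derivatives (the uncertainties below are placeholders, not the Table 1 values):

```python
import math

def propagate(f, values, sigmas, h=1e-6):
    """First-order propagation of uncertainty: sqrt(sum((df/dx_i * sigma_i)^2)),
    with each partial derivative estimated by a central difference."""
    var = 0.0
    for i, (v, s) in enumerate(zip(values, sigmas)):
        up, dn = list(values), list(values)
        up[i], dn[i] = v + h, v - h
        var += (((f(up) - f(dn)) / (2 * h)) * s) ** 2
    return math.sqrt(var)

# Example: a product of three efficiencies, as in the overall chain
f = lambda x: x[0] * x[1] * x[2]
u = propagate(f, [0.05, 0.7, 0.4], [0.008, 0.05, 0.05])  # ~0.0030
```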
Solar-to-Electricity Efficiency Correction Function.
We identified a statistically significant negative correlation between annual irradiance and solar farm energy efficiency (P < 0.0001, n = 628), probably reflecting the known fact that solar cells become less efficient at higher temperatures (32). We fitted a regression of solar farm efficiency, PVR, against irradiance while systematically discarding outliers. This was performed iteratively: at each iteration, data points whose measured efficiency differed from the predicted one by more than 0.03 were discarded, and the regression was recalculated. The resulting regression function is:

PVR = 0.077 − I / (59,000 kWh · m⁻² · y⁻¹),
where I is the annual irradiance in the range of 700 to 2,700 kWh · m⁻² · y⁻¹. We divided PVR by the median solar farm efficiency to quantify the deviation from the median as a function of irradiance. The resulting equation is the solar correction function, fC:

fC = 1.6 − I / (2,800 kWh · m⁻² · y⁻¹).
At low irradiance (I < 1,680 kWh · m −2 · y −1 ), fC is greater than 1, and at high irradiance, fC is less than 1.
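Eq. 7 and its crossover point can be checked directly:

```python
def f_c(irradiance_kwh_m2_y: float) -> float:
    """Eq. 7: solar correction function, valid for I in 700-2,700 kWh m^-2 y^-1."""
    return 1.6 - irradiance_kwh_m2_y / 2800.0

crossover = f_c(1680.0)  # = 1.0: below this irradiance f_C > 1, above it f_C < 1
```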
Yield of Food Energy and Protein.
The yield of nutritional calories per land area and time, Ycal, is given by:

Ycal = I × fC × ηscp,
where I is irradiance ranging from 700 to 2,700 kWh m −2 y −1 , ηscp is as in Eq. 6, and fC is as in Eq. 7.
In the case of food production, the energy contained in the final product, Ycal, reflects only the energy in the protein fraction of the biomass. Hence, the SCP system yield in terms of protein mass for food production, Yprot, is determined by dividing the food energy yield by the energy content per unit protein:

Yprot = Ycal / ΔH°P,
where Yprot is the yield of protein in g · m −2 · y −1 and ΔH ° P is as defined in Eq. 4.
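The two yield equations above can be chained as a sketch (the irradiance and ηscp values below are illustrative placeholders):

```python
def yield_cal(irradiance_kwh_m2_y: float, eta_scp: float) -> float:
    """Y_cal = I * f_C * eta_scp, in kWh m^-2 y^-1, with f_C = 1.6 - I/2800."""
    fc = 1.6 - irradiance_kwh_m2_y / 2800.0
    return irradiance_kwh_m2_y * fc * eta_scp

def yield_protein_g(y_cal_kwh_m2_y: float, dh_protein_mj_per_kg: float = 16.7) -> float:
    """Y_prot = Y_cal / dH_P, converted to g m^-2 y^-1 (1 kWh = 3.6 MJ)."""
    return y_cal_kwh_m2_y * 3.6 / dh_protein_mj_per_kg * 1000.0

# Placeholder food scenario: I = 1,800 kWh m^-2 y^-1, eta_scp = 0.005
yc = yield_cal(1800.0, 0.005)  # ~8.6 kWh m^-2 y^-1
yp = yield_protein_g(yc)       # ~1,860 g m^-2 y^-1
```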
Yield of Food/Feed Energy for SCP Grown on Sucrose Extracted from Sugar Beet.
The production of SCP from sucrose extracted from sugar beet requires two plots of land. In one plot, sugar beet is cultivated. In the other, PV arrays are placed to generate the electricity needed for the extraction of sucrose and cultivation of microbes.
Upon derivation (described in the SI Appendix), we obtain the energy yield of SCP produced via sugar beet as follows:

YSB-SCP = (Yscal × ηbio × ηfilter) / {[Yscal × ((θnut + θbioreactor + θdsp) × ηbio + θsx + θscult)] / (I × ηpv × fC) + 1}.
Yscal is the energetic yield of sucrose derived from sugar beet, calculated from a sugar beet yield of 6.5 kg · m⁻² · y⁻¹ (Dataset S1E) multiplied by 16%, the characteristic extractable sucrose content per fresh weight of sugar beet (79), and converted to units of energy (using a combustion energy of 16.7 MJ · kg-sucrose⁻¹) to give 4.8 kWh · m⁻² · y⁻¹. θscult, the energy cost of sugar beet cultivation (11 to 28 GJ · ha⁻¹) normalized by the sucrose yield and the energy of combustion of sucrose (16.7 MJ · kg⁻¹), ranges between 0.13 and 0.19 (Dataset S1H). θsx is the energy cost of sucrose extraction (0.2 MJ per kg of sugar beet (80), which converts to 1.25 MJ · kg-sucrose⁻¹) normalized to the combustion energy of sucrose, which gives 0.07. θnut, θbioreactor, θdsp, and fC are as defined above. We note that the contribution of θsx, and thus the energy required to extract sucrose, is negligible compared with the energy required for microbial cultivation. Note that in the case of feed production, the term ηfilter is omitted from the numerator, which increases the overall yield.
To calculate the protein yield of SCP produced via sugar beet in units of mass per land area and time, we divide YSB-SCP by the combustion energy of protein (16.7 MJ · kg −1 ).
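A sketch of the sugar beet route under the yield equation above; Yscal = 4.8 kWh · m⁻² · y⁻¹ is taken from the text, while the remaining parameters are mid-range placeholders:

```python
def y_sb_scp(y_scal_kwh: float, eta_bio: float, eta_filter: float,
             theta_cult_sum: float, theta_sx: float, theta_scult: float,
             irr_kwh: float, eta_pv: float) -> float:
    """SCP energy yield via sugar beet per unit of total land (beet plot + PV plot).
    theta_cult_sum = theta_nut + theta_bioreactor + theta_dsp."""
    fc = 1.6 - irr_kwh / 2800.0
    # PV land needed (per m^2 of beet land) to power extraction and cultivation:
    pv_land = (y_scal_kwh * (theta_cult_sum * eta_bio + theta_sx + theta_scult)
               / (irr_kwh * eta_pv * fc))
    return y_scal_kwh * eta_bio * eta_filter / (pv_land + 1.0)

# Placeholder mid-range parameters; Y_scal = 4.8 kWh m^-2 y^-1 from the text
y = y_sb_scp(4.8, 0.4, 0.55, 1.5, 0.07, 0.16, 1800.0, 0.049)  # ~1.0 kWh m^-2 y^-1
```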
The average kcal intake is 2,150 kcal per person per day (48), 15% of which should be protein (81). Hence, ≈80 g protein is the assumed protein mass consumed per person per day, or about 30 kg · y −1 .
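This per-capita arithmetic can be verified in a few lines (assuming the standard ≈4 kcal per gram of protein):

```python
kcal_per_day = 2150                                   # average intake (48)
protein_g_per_day = 0.15 * kcal_per_day / 4.0         # 15% of calories / 4 kcal per g -> ~81 g
protein_kg_per_year = protein_g_per_day * 365 / 1000  # ~29 kg, i.e., about 30 kg/y
```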