Earth system models (ESMs) are key tools for providing climate projections under different scenarios of human-induced forcing. ESMs include a large number of additional processes and feedbacks, such as biogeochemical cycles, that traditional physical climate models do not consider. Yet some processes, such as cloud dynamics and ecosystem functional response, still carry fairly high uncertainties. In this article, we present an overview of climate feedbacks for Earth system components currently included in state-of-the-art ESMs and discuss the challenges of evaluating and quantifying them. Uncertainties in feedback quantification arise from the interdependencies of biogeochemical matter fluxes and physical properties, the spatial and temporal heterogeneity of processes, and the lack of long-term continuous observational data to constrain them. We present an outlook for promising approaches that can help to quantify and constrain the large number of feedbacks in ESMs in the future. The target group for this article includes generalists with a background in natural sciences and an interest in climate change, as well as experts working in interdisciplinary climate research (researchers, lecturers, and students). This study updates and significantly expands upon the last comprehensive overview of climate feedbacks in ESMs, which was produced 15 years ago (NRC, 2003).
This study uses dynamical and statistical methods to understand end-of-century mean changes to Sierra Nevada snowpack. Dynamical results reveal that mid-elevation watersheds experience considerably more rain than snow during winter, leading to substantial snowpack declines by spring. Despite some high-elevation watersheds receiving slightly more snow in January and February, the warming signal still dominates across the wet season and leads to notable declines by springtime. A statistical model is created to mimic the dynamical results for April 1 snowpack, allowing for an efficient downscaling of all available general circulation models (GCMs) from the Coupled Model Intercomparison Project Phase 5. For all GCMs and emissions scenarios, dramatic April 1 snowpack loss occurs at elevations below 2500 meters, despite increased precipitation in many GCMs. Only 36% (±12%) of the historical April 1 total snow water equivalent volume remains at the century's end under a "business-as-usual" emissions scenario, with 70% (±12%) remaining under a realistic "mitigation" scenario.
In recent years, an evaluation technique for Earth System Models (ESMs) has arisen—emergent constraints (ECs)—which rely on strong statistical relationships between aspects of current climate and future change across an ESM ensemble. Combining the EC relationship with observations could reduce uncertainty surrounding future change. Here, we articulate a framework to assess ECs, and provide indicators whereby a proposed EC may move from a strong statistical relationship to confirmation. The primary indicators are verified mechanisms and out-of-sample testing. Confirmed ECs have the potential to improve ESMs by focusing attention on the variables most relevant to climate projections. Looking forward, there may be undiscovered ECs for extremes and teleconnections, and ECs may help identify climate system tipping points.
Differences among climate models in equilibrium climate sensitivity (ECS; the equilibrium surface temperature response to a doubling of atmospheric CO2) remain a significant barrier to the accurate assessment of societally important impacts of climate change. Relationships between ECS and observable metrics of the current climate in model ensembles, so-called emergent constraints, have been used to constrain ECS. Here a statistical method (including a backward selection process) is employed to achieve a better statistical understanding of the connections between four recently proposed emergent constraint metrics and individual feedbacks influencing ECS. The relationship between each metric and ECS is largely attributable to a statistical connection with shortwave low cloud feedback, the leading cause of intermodel ECS spread. This result bolsters confidence in some of the metrics, which had assumed such a connection in the first place. Additional analysis is conducted with a few thousand artificial metrics that are randomly generated but are well correlated with ECS. The relationships between the contrived metrics and ECS can also be linked statistically to shortwave cloud feedback. Thus, any proposed or forthcoming ECS constraint based on the current generation of climate models should be viewed as a potential constraint on shortwave cloud feedback, and physical links with that feedback should be investigated to verify that the constraint is real. In addition, any proposed ECS constraint should not be taken at face value since other factors influencing ECS besides shortwave cloud feedback could be systematically biased in the models.
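The backward-selection step described above can be illustrated with a small sketch: regress ECS on a set of candidate feedback predictors and repeatedly drop the predictor whose removal costs the least explained variance. Everything below—the ensemble size, predictor names, coefficients, and tolerance—is hypothetical and only meant to show the mechanics, not the study's actual data or procedure.

```python
import numpy as np

def r2(X, y):
    # Coefficient of determination for an OLS fit of y on the columns of X
    # (an intercept column is added internally).
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - resid.var() / y.var()

def backward_select(X, y, names, tol=0.02):
    """Drop, one at a time, the predictor whose removal costs the least R^2;
    stop when any further removal would lose more than `tol`."""
    keep = list(range(X.shape[1]))
    while len(keep) > 1:
        full = r2(X[:, keep], y)
        costs = [full - r2(X[:, [j for j in keep if j != i]], y) for i in keep]
        k = int(np.argmin(costs))
        if costs[k] > tol:
            break
        keep.pop(k)
    return [names[i] for i in keep]

# Hypothetical 30-model ensemble in which shortwave low-cloud feedback
# dominates the ECS spread and two other feedbacks are irrelevant.
rng = np.random.default_rng(0)
n = 30
sw_cloud = rng.normal(0.4, 0.3, n)   # made-up feedback spreads (W m-2 K-1)
lapse = rng.normal(-0.5, 0.1, n)
albedo = rng.normal(0.3, 0.05, n)
ecs = 3.0 + 2.5 * sw_cloud + rng.normal(0.0, 0.1, n)

X = np.column_stack([sw_cloud, lapse, albedo])
selected = backward_select(X, ecs, ["sw_cloud", "lapse_rate", "surface_albedo"])
```

In this toy setup the selection collapses to the shortwave cloud term, mirroring the paper's statistical finding that the metrics' link to ECS runs through shortwave low-cloud feedback.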
Mediterranean climate regimes are particularly susceptible to rapid shifts between drought and flood—of which, California’s rapid transition from record multi-year dryness between 2012 and 2016 to extreme wetness during the 2016–2017 winter provides a dramatic example. Projected future changes in such dry-to-wet events, however, remain inadequately quantified, which we investigate here using the Community Earth System Model Large Ensemble of climate model simulations. Anthropogenic forcing is found to yield large twenty-first-century increases in the frequency of wet extremes, including a more than threefold increase in sub-seasonal events comparable to California’s ‘Great Flood of 1862’. Smaller but statistically robust increases in dry extremes are also apparent. As a consequence, a 25% to 100% increase in extreme dry-to-wet precipitation events is projected, despite only modest changes in mean precipitation. Such hydrological cycle intensification would seriously challenge California’s existing water storage, conveyance and flood control infrastructure.
This study investigates temperature impacts to snowpack and runoff-driven flood risk over the Sierra Nevada during the extremely wet year of 2016–2017, which followed the extraordinary California drought of 2011–2015. By perturbing near-surface temperatures from a 9-km dynamically downscaled simulation, a series of offline land surface model experiments explore how Sierra Nevada hydrology has already been impacted by historical anthropogenic warming and how these impacts evolve under future warming scenarios. Results show that historical warming reduced 2016–2017 Sierra Nevada snow water equivalent by 20% while increasing early-season runoff by 30%. An additional one-third to two-thirds loss of snowpack is projected by the end of the century, depending on the emissions scenario, with middle elevations experiencing the most significant declines. Notably, the number of days in the future with runoff exceeding 20 mm nearly doubles under a mitigation emissions scenario and triples under a business-as-usual scenario. A smaller snow-to-rain ratio, rather than increased snowmelt, is found to be the primary mechanism of temperature impacts to Sierra snowpack and runoff. These findings are consequential for the prevalence of early-season floods in the Sierra Nevada. In the Feather River Watershed, historical warming increased runoff by over one third during the period of heaviest precipitation in February 2017. This suggests that historical anthropogenic warming may have exacerbated the runoff conditions underlying the Oroville Dam spillway overflow that occurred that month. As warming continues in the future, the potential for runoff-based flood risk may rise even higher.
Snow albedo feedback (SAF) behaves similarly in the current and future climate contexts; thus, constraining the large intermodel variance in SAF will likely reduce uncertainty in climate projections. To better understand this intermodel spread, structural and parametric biases contributing to SAF variability are investigated. We find that structurally varying snowpack, vegetation, and albedo parameterizations drive most of the spread, while differences arising from model parameters are generally smaller. Models with the largest SAF biases exhibit clear structural or parametric errors. Additionally, despite widespread intermodel similarities, model interdependency has little impact on the strength of the relationship between SAF in the current and future climate contexts. Furthermore, many models now feature a more realistic SAF than in the prior generation, but shortcomings from two models limit the reduction in ensemble spread. Lastly, preliminary signs from ongoing model development are positive and suggest a likely reduction in SAF spread among upcoming models.
A highly uncertain aspect of anthropogenic climate change is the rate at which the global hydrologic cycle intensifies. The future change in global‐mean precipitation per degree warming, or hydrologic sensitivity, exhibits a threefold spread (1–3%/K) in current global climate models. In this study, we find that the intermodel spread in this value is associated with a significant portion of variability in future projections of extreme precipitation in the tropics, extending also into subtropical atmospheric river corridors. Additionally, there is a very tight intermodel relationship between changes in extreme and nonextreme precipitation, whereby models compensate for increasing extreme precipitation events by decreasing weak‐moderate events. Another factor linked to changes in precipitation extremes is model resolution, with higher resolution models showing a larger increase in heavy extremes. These results highlight ways various aspects of hydrologic cycle intensification are linked in models and shed new light on the task of constraining precipitation extremes.
Emergent constraints use relationships between future and current climate states to constrain projections of climate response. Here we introduce a statistical, hierarchical emergent constraint (HEC) framework in order to link future and current climates with observations. Under Gaussian assumptions, the mean and variance of the future state are shown analytically to be a function of the signal-to-noise ratio between current-climate uncertainty and observation error, and of the correlation between future and current climate states. We apply the HEC framework to the climate-change snow-albedo feedback, which is related to the seasonal cycle of the feedback in the Northern Hemisphere. We obtain a snow-albedo feedback prediction interval of (−1.25, −0.58)%/K. The critical dependence on signal-to-noise ratio and correlation shows that neglecting these terms can lead to bias and underestimated uncertainty in constrained projections. The flexibility of using HEC under general assumptions throughout the Earth system is discussed.
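The closed-form Gaussian update underlying such a framework can be sketched directly: shrink toward the observation by a factor set by the squared signal-to-noise ratio, and scale the transfer to the future state by the ensemble correlation. The formula below is a generic Gaussian emergent-constraint update, and the numbers are illustrative, not the paper's data.

```python
import math

def hec_constrain(mu_x, sd_x, mu_y, sd_y, rho, x_obs, sd_obs):
    """Gaussian emergent-constraint update (sketch).

    x: current-climate quantity across the ensemble, N(mu_x, sd_x^2)
    y: future quantity, correlated with x at level rho
    x_obs: observation of x with error sd_obs
    Returns the constrained mean, sd, and a 95% interval for y."""
    snr2 = (sd_x / sd_obs) ** 2            # squared signal-to-noise ratio
    kappa = snr2 / (1.0 + snr2)            # shrinkage toward the observation
    mu = mu_y + rho * (sd_y / sd_x) * kappa * (x_obs - mu_x)
    var = sd_y ** 2 * (1.0 - rho ** 2 * kappa)
    sd = math.sqrt(var)
    return mu, sd, (mu - 1.96 * sd, mu + 1.96 * sd)

# Illustrative numbers only: a well-observed current-climate predictor
# (signal-to-noise ratio of 3) strongly correlated with the future quantity.
mu, sd, ci = hec_constrain(mu_x=-0.8, sd_x=0.3, mu_y=-0.85, sd_y=0.3,
                           rho=0.9, x_obs=-1.0, sd_obs=0.1)
```

Setting rho to 0 or letting sd_obs grow without bound recovers the unconstrained ensemble (mu_y, sd_y), which is the point the abstract makes: neglecting the signal-to-noise ratio or correlation biases the center and understates the width of the constrained interval.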
This paper describes ESM-SnowMIP, an international coordinated modelling effort to evaluate current snow schemes, including snow schemes that are included in Earth system models, in a wide variety of settings against local and global observations. The project aims to identify crucial processes and characteristics that need to be improved in snow models in the context of local- and global-scale modelling. A further objective of ESM-SnowMIP is to better quantify snow-related feedbacks in the Earth system. Although it is not part of the sixth phase of the Coupled Model Intercomparison Project (CMIP6), ESM-SnowMIP is tightly linked to the CMIP6-endorsed Land Surface, Snow and Soil Moisture Model Intercomparison (LS3MIP).
California’s Sierra Nevada is a high-elevation mountain range with significant seasonal snow cover. Under anthropogenic climate change, amplification of the warming is expected to occur at elevations near snow margins due to snow albedo feedback. However, climate change projections for the Sierra Nevada made by global climate models (GCMs) and statistical downscaling methods miss this key process. Dynamical downscaling simulates the additional warming due to snow albedo feedback. Ideally, dynamical downscaling would be applied to a large ensemble of 30 or more GCMs to project ensemble-mean outcomes and intermodel spread, but this is far too computationally expensive. To approximate the results that would occur if the entire GCM ensemble were dynamically downscaled, a hybrid dynamical–statistical downscaling approach is used. First, dynamical downscaling is used to reconstruct the historical climate of the 1981–2000 period and then to project the future climate of the 2081–2100 period based on climate changes from five GCMs. Next, a statistical model is built to emulate the dynamically downscaled warming and snow cover changes for any GCM. This statistical model is used to produce warming and snow cover loss projections for all available CMIP5 GCMs. These projections incorporate snow albedo feedback, so they capture the local warming enhancement (up to 3°C) from snow cover loss that other statistical methods miss. Capturing these details may be important for accurately projecting impacts on surface hydrology, water resources, and ecosystems.
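The hybrid step described above—train a cheap statistical emulator on a handful of dynamically downscaled GCMs, then apply it to the full ensemble—can be sketched as a simple regression. The warming values below are invented for illustration; the study's actual statistical model and predictors are more elaborate.

```python
import numpy as np

# Hypothetical training set: for 5 dynamically downscaled GCMs we know the
# coarse GCM warming (predictor) and the fine-scale warming at one grid cell
# that the dynamical model produced, including snow-albedo amplification.
gcm_warming  = np.array([2.9, 3.4, 3.8, 4.2, 4.6])   # K, coarse scale
fine_warming = np.array([3.6, 4.3, 4.9, 5.5, 6.1])   # K, 3-km dynamical result

# Fit a linear emulator, fine ~ a + b * coarse, on the 5 downscaled models.
A = np.column_stack([np.ones_like(gcm_warming), gcm_warming])
(a, b), *_ = np.linalg.lstsq(A, fine_warming, rcond=None)

# Apply the emulator to a full (hypothetical) CMIP5 ensemble.
all_gcms = np.array([2.5, 3.0, 3.1, 3.6, 4.0, 4.4, 4.9])
emulated = a + b * all_gcms
```

In this toy fit the slope exceeds one, standing in for the local warming enhancement from snow cover loss that the dynamical simulations resolve and that coarse GCMs and other statistical methods miss.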
Sierra Nevada climate and snowpack are simulated during the period of extreme drought from 2011 to 2015 and compared to an identical simulation except for the removal of twentieth-century anthropogenic warming. Anthropogenic warming reduced average snowpack levels by 25%, with middle-to-low elevations experiencing reductions between 26% and 43%. In terms of event frequency, return periods associated with anomalies in 4-year 1 April snow water equivalent are estimated to have doubled, and possibly quadrupled, due to past warming. We also estimate the effects of future anthropogenic warming on snowpack during a drought similar to that of 2011–2015. Further snowpack declines of 60–85% are expected, depending on the emissions scenario. The return periods associated with future snowpack levels are estimated to range from millennia to much longer. Therefore, past human emissions of greenhouse gases are already negatively impacting statewide water resources during drought, and much more severe impacts are likely inevitable.
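The return-period arithmetic can be made concrete with a Gaussian sketch: fix a low-snowpack level, and compare its return period before and after warming shifts the mean snowpack downward. The z-values and shift below are illustrative only, not the study's estimates.

```python
import math

def return_period_years(z):
    """Return period (years) of snowpack at or below standardized level z,
    assuming Gaussian interannual variability: 1 / P(Z <= z)."""
    p = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return 1.0 / p

# A fixed low-snowpack level sits at z = -2 in an unwarmed climate.
rp_natural = return_period_years(-2.0)   # roughly a 1-in-44-year event
# If warming lowers mean snowpack by 0.5 standard deviations, the same
# absolute level now sits at z = -1.5 and recurs far more often.
rp_warmed = return_period_years(-1.5)    # roughly a 1-in-15-year event
```

Read the other way, a snowpack level that the warmed climate reaches routinely would have been a much rarer event under natural variability alone, which is the sense in which warming lengthens the return period attached to an observed anomaly.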
High-resolution gridded datasets are in high demand because they are spatially complete and include important fine-scale details. Previous assessments have been limited to two or three gridded datasets or have analyzed the datasets only at the station locations. Here, eight high-resolution gridded temperature datasets are assessed in two ways: at the stations, by comparing with Global Historical Climatology Network–Daily data; and away from the stations, using physical principles. This assessment includes six station-based datasets, one interpolated reanalysis, and one dynamically downscaled reanalysis. California is used as a test domain because of its complex terrain and coastlines, features known to differentiate gridded datasets. As expected, climatologies of station-based datasets agree closely with station data. However, away from stations, spread in climatologies can exceed 6°C. Some station-based datasets are very likely biased near the coast and in complex terrain due to inaccurate lapse rates. Many station-based datasets have large unphysical trends (>1°C decade⁻¹) due to unhomogenized or missing station data—an issue that has been fixed in some datasets by homogenization algorithms. Meanwhile, reanalysis-based gridded datasets have systematic biases relative to station data. Dynamically downscaled reanalysis has smaller biases than interpolated reanalysis, as well as more realistic variability and trends. Dynamical downscaling also captures snow–albedo feedback, which station-based datasets miss. Overall, these results indicate that 1) gridded dataset choice can be a substantial source of uncertainty, and 2) some datasets are better suited than others for certain applications.
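One simple physical-principles check of the kind usable away from stations is lapse-rate consistency: extrapolating a nearby station reading through the elevation difference gives the temperature a grid cell should roughly show, and large departures flag suspect interpolation. The function and numbers below are illustrative, not the paper's actual evaluation procedure.

```python
def expected_temp(t_ref, elev_ref, elev, lapse_rate=-6.5):
    """Temperature (deg C) expected at elevation `elev` (m), extrapolated
    from a reference reading using a constant lapse rate (K per km)."""
    return t_ref + lapse_rate * (elev - elev_ref) / 1000.0

# A station at 500 m reads 20 deg C; with a -6.5 K/km lapse rate, a grid
# cell at 2500 m should then sit near 7 deg C. A gridded value of, say,
# 15 deg C there would imply an implausibly weak lapse rate.
t_cell = expected_temp(20.0, 500.0, 2500.0)
```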
Using hybrid dynamical–statistical downscaling, 3-km-resolution end-of-twenty-first-century runoff timing changes over California’s Sierra Nevada for all available global climate models (GCMs) from phase 5 of the Coupled Model Intercomparison Project (CMIP5) are projected. All four representative concentration pathways (RCPs) adopted by the Intergovernmental Panel on Climate Change’s Fifth Assessment Report are examined. These multimodel, multiscenario projections allow for quantification of ensemble-mean runoff timing changes and an associated range of possible outcomes due to both intermodel variability and choice of forcing scenario. Under a “business as usual” forcing scenario (RCP8.5), warming leads to a shift toward much earlier snowmelt-driven surface runoff in 2091–2100 compared to 1991–2000, with advances of as much as 80 days projected in the 35-model ensemble mean. For a realistic “mitigation” scenario (RCP4.5), the ensemble-mean change is smaller but still large (up to 30 days). For all plausible forcing scenarios and all GCMs, the simulated changes are statistically significant, so that a detectable change in runoff timing is inevitable. Even for the mitigation scenario, the ensemble-mean change is approximately equivalent to one standard deviation of the natural variability at most elevations. Thus, even when greenhouse gas emissions are curtailed, the runoff change is climatically significant. For the business-as-usual scenario, the ensemble-mean change is approximately two standard deviations of the natural variability at most elevations, portending a truly dramatic change in surface hydrology by the century’s end if greenhouse gas emissions continue unabated.
The response to warming of tropical low-level clouds including both marine stratocumulus and trade cumulus is a major source of uncertainty in projections of future climate. Climate model simulations of the response vary widely, reflecting the difficulty the models have in simulating these clouds. These inadequacies have led to alternative approaches to predict low-cloud feedbacks. Here, we review an observational approach that relies on the assumption that observed relationships between low clouds and the “cloud-controlling factors” of the large-scale environment are invariant across time-scales. With this assumption, and given predictions of how the cloud-controlling factors change with climate warming, one can predict low-cloud feedbacks without using any model simulation of low clouds. We discuss both fundamental and implementation issues with this approach and suggest steps that could reduce uncertainty in the predicted low-cloud feedback. Recent studies using this approach predict that the tropical low-cloud feedback is positive mainly due to the observation that reflection of solar radiation by low clouds decreases as temperature increases, holding all other cloud-controlling factors fixed. The positive feedback from temperature is partially offset by a negative feedback from the tendency for the inversion strength to increase in a warming world, with other cloud-controlling factors playing a smaller role. A consensus estimate from these studies for the contribution of tropical low clouds to the global mean cloud feedback is 0.25 ± 0.18 W m⁻² K⁻¹ (90% confidence interval), suggesting it is very unlikely that tropical low clouds reduce total global cloud feedback. Because the prediction of positive tropical low-cloud feedback with this approach is consistent with independent evidence from low-cloud feedback studies using high-resolution cloud models, progress is being made in reducing this key climate uncertainty.
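The cloud-controlling-factor prediction has a simple arithmetic core: the feedback is a sum, over factors, of an observed cloud sensitivity times that factor's projected change per degree of warming. The values below are illustrative stand-ins chosen to echo the qualitative picture in the abstract (a dominant positive temperature term, a partially offsetting inversion-strength term), not published estimates.

```python
# Hypothetical observed sensitivities of the low-cloud radiative effect to
# each controlling factor (W m-2 per unit of the factor) ...
sensitivity = {"sst": 0.45, "inversion_strength": -0.4, "subsidence": 0.05}
# ... and hypothetical projected changes in each factor per degree of
# global warming (units of the factor per K).
change_per_K = {"sst": 1.0, "inversion_strength": 0.5, "subsidence": 0.1}

# Low-cloud feedback (W m-2 K-1) as the sum over controlling factors.
feedback = sum(sensitivity[k] * change_per_K[k] for k in sensitivity)
```

With these stand-in numbers the sum lands near the quoted consensus value purely by construction; the substance of the approach lies in estimating each sensitivity from observations and each per-degree change from climate projections.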
Biases in climate model simulations introduce biases in subsequent impact simulations. Therefore, bias correction methods are operationally used to post-process regional climate projections. However, many problems have been identified, and some researchers question the very basis of the approach. Here we demonstrate that a typical cross-validation is unable to identify improper use of bias correction. Several examples show the limited ability of bias correction to correct and to downscale variability, and demonstrate that bias correction can cause implausible climate change signals. Bias correction cannot overcome major model errors, and naive application might result in ill-informed adaptation decisions. We conclude with a list of recommendations and suggestions for future research to reduce, post-process, and cope with climate model biases.
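To ground the discussion, here is what a typical bias-correction step looks like in its simplest form—empirical quantile mapping—along with a failure mode of the kind the abstract warns about: the mapping can remove historical bias yet says nothing about whether the model's change signal is credible, and it degrades outside the calibrated range. This is a generic textbook sketch, not the specific methods the paper evaluates.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, future):
    """Empirical quantile mapping: map each future value through the
    quantile it occupies in the historical model distribution to the
    corresponding quantile of the observations."""
    model_sorted = np.sort(model_hist)
    q = np.searchsorted(model_sorted, future) / len(model_sorted)
    q = np.clip(q, 0.0, 1.0)
    return np.quantile(np.sort(obs_hist), q)

# Toy setup: the model runs 2 K too warm in the historical period and
# projects a uniform +3 K change.
obs = np.linspace(0.0, 10.0, 101)   # "observed" historical temperatures
model = obs + 2.0                   # biased historical simulation
future = model + 3.0                # future simulation

corrected = quantile_map(model, obs, future)
# Interior values recover roughly obs + 3; values beyond the historical
# model range saturate at the observed maximum.
```

Note how the largest future values saturate at the observed historical maximum: new extremes beyond the calibration range are exactly where naive application can mislead.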
How tropical low clouds change with climate remains the dominant source of uncertainty in global warming projections. An analysis of an ensemble of CMIP5 climate models reveals that a significant part of the spread in the models’ climate sensitivity can be accounted for by differences in the climatological shallowness of tropical low clouds in weak-subsidence regimes: models with shallower low clouds in weak-subsidence regimes tend to have a higher climate sensitivity than models with deeper low clouds. The dynamical mechanisms responsible for the model differences are analyzed. Competing effects of parameterized boundary-layer turbulence and shallow convection are found to be essential. Boundary-layer turbulence and shallow convection are typically represented by distinct parameterization schemes in current models—parameterization schemes that often produce opposing effects on low clouds. Convective drying of the boundary layer tends to deepen low clouds and reduce the cloud fraction at the lowest levels; turbulent moistening tends to make low clouds more shallow but affects the low-cloud fraction less. The relative importance different models assign to these opposing mechanisms contributes to the spread of the climatological shallowness of low clouds and thus to the spread of low-cloud changes under global warming.
In this study we developed and examined a hybrid modeling approach that integrates physically based equations and statistical downscaling to estimate fine-scale daily-mean surface turbulent fluxes (i.e., sensible and latent heat fluxes) for a region of southern California that is extensively covered by varied vegetation types over complex terrain. The selection of model predictors is guided by the physical parameterizations of surface flux used in land surface models and by analysis showing that net shortwave radiation is a major source of variability in the surface energy budget. Through a sequence of multivariable regressions, together with near-surface wind estimates from a previous study, we successfully reproduce dynamically downscaled 3-km-resolution surface flux data. The overall error in our estimates is less than 20% for both sensible and latent heat fluxes, with slightly larger errors in high-altitude regions. The major sources of error include the limited information provided by coarse reanalysis data, the accuracy of the near-surface wind estimates, and the neglect of the nonlinear diurnal cycle of surface fluxes when using daily-mean data. However, with reasonable and acceptable errors, this hybrid modeling approach provides promising fine-scale products of surface fluxes that are much more accurate than reanalysis data, without performing intensive dynamical simulations.