High-resolution gridded datasets are in high demand because they are spatially complete and include important finescale details. Previous assessments have been limited to two to three gridded datasets or analyzed the datasets only at the station locations. Here, eight high-resolution gridded temperature datasets are assessed two ways: at the stations, by comparing with Global Historical Climatology Network–Daily data; and away from the stations, using physical principles. This assessment includes six station-based datasets, one interpolated reanalysis, and one dynamically downscaled reanalysis. California is used as a test domain because of its complex terrain and coastlines, features known to differentiate gridded datasets. As expected, climatologies of station-based datasets agree closely with station data. However, away from stations, spread in climatologies can exceed 6°C. Some station-based datasets are very likely biased near the coast and in complex terrain, due to inaccurate lapse rates. Many station-based datasets have large unphysical trends (>1°C decade⁻¹) due to unhomogenized or missing station data—an issue that has been fixed in some datasets by using homogenization algorithms. Meanwhile, reanalysis-based gridded datasets have systematic biases relative to station data. Dynamically downscaled reanalysis has smaller biases than interpolated reanalysis, and has more realistic variability and trends. Dynamical downscaling also captures snow–albedo feedback, which station-based datasets miss. Overall, these results indicate that 1) gridded dataset choice can be a substantial source of uncertainty, and 2) some datasets are better suited for certain applications.
Hybrid dynamical–statistical downscaling is used to project 3-km-resolution end-of-twenty-first-century runoff timing changes over California’s Sierra Nevada for all available global climate models (GCMs) from phase 5 of the Coupled Model Intercomparison Project (CMIP5). All four representative concentration pathways (RCPs) adopted by the Intergovernmental Panel on Climate Change’s Fifth Assessment Report are examined. These multimodel, multiscenario projections allow for quantification of ensemble-mean runoff timing changes and an associated range of possible outcomes due to both intermodel variability and choice of forcing scenario. Under a “business as usual” forcing scenario (RCP8.5), warming leads to a shift toward much earlier snowmelt-driven surface runoff in 2091–2100 compared to 1991–2000, with advances of as much as 80 days projected in the 35-model ensemble mean. For a realistic “mitigation” scenario (RCP4.5), the ensemble-mean change is smaller but still large (up to 30 days). For all plausible forcing scenarios and all GCMs, the simulated changes are statistically significant, so that a detectable change in runoff timing is inevitable. Even for the mitigation scenario, the ensemble-mean change is approximately equivalent to one standard deviation of the natural variability at most elevations. Thus, even when greenhouse gas emissions are curtailed, the runoff change is climatically significant. For the business-as-usual scenario, the ensemble-mean change is approximately two standard deviations of the natural variability at most elevations, portending a truly dramatic change in surface hydrology by the century’s end if greenhouse gas emissions continue unabated.
The response to warming of tropical low-level clouds, including both marine stratocumulus and trade cumulus, is a major source of uncertainty in projections of future climate. Climate model simulations of the response vary widely, reflecting the difficulty the models have in simulating these clouds. These inadequacies have led to alternative approaches to predict low-cloud feedbacks. Here, we review an observational approach that relies on the assumption that observed relationships between low clouds and the “cloud-controlling factors” of the large-scale environment are invariant across timescales. With this assumption, and given predictions of how the cloud-controlling factors change with climate warming, one can predict low-cloud feedbacks without using any model simulation of low clouds. We discuss both fundamental and implementation issues with this approach and suggest steps that could reduce uncertainty in the predicted low-cloud feedback. Recent studies using this approach predict that the tropical low-cloud feedback is positive mainly due to the observation that reflection of solar radiation by low clouds decreases as temperature increases, holding all other cloud-controlling factors fixed. The positive feedback from temperature is partially offset by a negative feedback from the tendency for the inversion strength to increase in a warming world, with other cloud-controlling factors playing a smaller role. A consensus estimate from these studies for the contribution of tropical low clouds to the global mean cloud feedback is 0.25 ± 0.18 W m⁻² K⁻¹ (90% confidence interval), suggesting it is very unlikely that tropical low clouds reduce total global cloud feedback. Because the prediction of positive tropical low-cloud feedback with this approach is consistent with independent evidence from low-cloud feedback studies using high-resolution cloud models, progress is being made in reducing this key climate uncertainty.
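The cloud-controlling-factor approach described above can be illustrated with a minimal sketch: regress observed low-cloud radiative anomalies on the controlling factors, then combine the fitted sensitivities with assumed per-degree changes in those factors under warming. All data and coefficients below are synthetic and purely illustrative, not values from the studies reviewed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly anomalies standing in for observations (illustrative only):
# sst = local sea surface temperature anomaly (K)
# eis = estimated inversion strength anomaly (K)
# cre = low-cloud shortwave radiative effect anomaly (W m^-2)
n = 240
sst = rng.normal(0.0, 0.5, n)
eis = rng.normal(0.0, 0.4, n)
cre = 1.0 * sst - 1.2 * eis + rng.normal(0.0, 0.3, n)  # assumed "true" sensitivities

# Multiple linear regression for the sensitivities dCRE/dSST and dCRE/dEIS
X = np.column_stack([sst, eis, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, cre, rcond=None)
dcre_dsst, dcre_deis = coef[0], coef[1]

# Hypothetical GCM-predicted changes in the controlling factors per kelvin
# of global-mean warming (not taken from any particular model)
dsst_dT = 1.0   # K of local SST per K of global warming
deis_dT = 0.2   # K of EIS per K of global warming

# Timescale-invariance assumption: apply the observed sensitivities
# to the forced, long-term changes in the controlling factors
feedback = dcre_dsst * dsst_dT + dcre_deis * deis_dT  # W m^-2 K^-1
print(f"Estimated low-cloud feedback: {feedback:+.2f} W m^-2 K^-1")
```

With these made-up numbers, the warming term dominates the opposing inversion-strength term, giving a net positive feedback—mirroring the qualitative balance the abstract describes.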
Biases in climate model simulations introduce biases in subsequent impact simulations. Therefore, bias correction methods are operationally used to post-process regional climate projections. However, many problems have been identified, and some researchers question the very basis of the approach. Here we demonstrate that a typical cross-validation is unable to identify improper use of bias correction. Several examples show the limited ability of bias correction to correct and to downscale variability, and demonstrate that bias correction can cause implausible climate change signals. Bias correction cannot overcome major model errors, and naive application might result in ill-informed adaptation decisions. We conclude with a list of recommendations and suggestions for future research to reduce, post-process, and cope with climate model biases.
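One widely used bias correction method is empirical quantile mapping. The sketch below, on synthetic data, shows both sides of the argument above: the method removes the historical distributional bias, but when model and observed variances differ it also rescales the model's climate change signal—one way bias correction can produce implausible signals.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_fut):
    """Empirical quantile mapping: map each future model value through its
    quantile in the historical model distribution to the corresponding
    observed quantile. Values outside the historical range are clamped."""
    mh = np.sort(model_hist)
    oh = np.sort(obs_hist)
    q = np.searchsorted(mh, model_fut, side="right") / len(mh)
    return np.quantile(oh, np.clip(q, 0.0, 1.0))

rng = np.random.default_rng(1)
obs_hist = rng.normal(15.0, 3.0, 1000)    # "observed" temperatures (synthetic)
model_hist = rng.normal(13.0, 4.0, 1000)  # biased model: too cold, too variable
model_fut = model_hist + 2.0              # model projects a uniform +2 K shift

corrected_hist = quantile_map(model_hist, obs_hist, model_hist)
corrected_fut = quantile_map(model_hist, obs_hist, model_fut)

bias_after = corrected_hist.mean() - obs_hist.mean()      # ~0: bias removed
raw_signal = model_fut.mean() - model_hist.mean()         # +2.0 by construction
bc_signal = corrected_fut.mean() - corrected_hist.mean()  # ~+1.5: signal rescaled
print(f"residual historical bias: {bias_after:+.2f} K")
print(f"raw change signal:        {raw_signal:+.2f} K")
print(f"bias-corrected signal:    {bc_signal:+.2f} K")
```

Because the model's variance is 4/3 of the observed, quantile mapping shrinks the +2 K shift toward roughly +1.5 K—a change in the projected signal that an apparently successful historical cross-validation would never reveal.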
How tropical low clouds change with climate remains the dominant source of uncertainty in global warming projections. An analysis of an ensemble of CMIP5 climate models reveals that a significant part of the spread in the models’ climate sensitivity can be accounted for by differences in the climatological shallowness of tropical low clouds in weak-subsidence regimes: models with shallower low clouds in weak-subsidence regimes tend to have a higher climate sensitivity than models with deeper low clouds. The dynamical mechanisms responsible for the model differences are analyzed. Competing effects of parameterized boundary-layer turbulence and shallow convection are found to be essential. Boundary-layer turbulence and shallow convection are typically represented by distinct parameterization schemes in current models—parameterization schemes that often produce opposing effects on low clouds. Convective drying of the boundary layer tends to deepen low clouds and reduce the cloud fraction at the lowest levels; turbulent moistening tends to make low clouds more shallow but affects the low-cloud fraction less. The relative importance different models assign to these opposing mechanisms contributes to the spread of the climatological shallowness of low clouds and thus to the spread of low-cloud changes under global warming.
In this study we developed and examined a hybrid modeling approach integrating physically-based equations and statistical downscaling to estimate fine-scale daily-mean surface turbulent fluxes (i.e., sensible and latent heat fluxes) for a region of southern California that is extensively covered by varied vegetation types over complex terrain. The selection of model predictors is guided by physical parameterizations of surface flux used in land surface models and by analysis showing that net shortwave radiation is a major source of variability in the surface energy budget. Through a sequence of multivariable regression steps, using near-surface wind estimates from a previous study, we successfully reproduce dynamically-downscaled 3-km-resolution surface flux data. The overall error in our estimates is less than 20% for both sensible and latent heat fluxes, with slightly larger errors in high-altitude regions. The major sources of error include the limited information provided by coarse reanalysis data, the accuracy of the near-surface wind estimates, and the neglect of the nonlinear diurnal cycle of surface fluxes when using daily-mean data. However, with reasonable and acceptable errors, this hybrid modeling approach provides promising fine-scale surface flux products that are much more accurate than reanalysis data, without performing computationally intensive dynamical simulations.
In this study, we evaluate the ability of the Weather Research and Forecasting model to simulate surface energy fluxes in the southeast Pacific stratocumulus region. A total of 18 simulations are performed for the period of October to November 2008, with various combinations of boundary layer, microphysics, and cumulus schemes. Simulated surface energy fluxes are compared to those measured during VOCALS-REx. Using a process-based model evaluation, errors in surface fluxes are attributed to errors in cloud properties. Net surface flux errors are mostly traceable to errors in cloud liquid water path (LWPcld), which produce biases in downward shortwave radiation. Two mechanisms controlling LWPcld are diagnosed. One involves microphysics schemes, which control LWPcld through the production of raindrops. The second mechanism involves boundary layer and cumulus schemes, which control moisture available for cloud by regulating boundary layer height. In this study, we demonstrate that when parameterizations are appropriately chosen, the stratocumulus deck and the related surface energy fluxes are reasonably well represented. In the most realistic experiments, the net surface flux is underestimated by about 10 W m⁻². This remaining low bias is due to a systematic overestimation of the total surface cooling due to sensible and latent heat fluxes in our simulations. There does not appear to be a single physical reason for this bias. Finally, our results also suggest that inaccurate representation of boundary layer height is an important factor limiting further gains in model realism.
Future snowfall and snowpack changes over the mountains of Southern California are projected using a new hybrid dynamical–statistical framework. Output from all general circulation models (GCMs) in phase 5 of the Coupled Model Intercomparison Project archive is downscaled to 2-km resolution over the region. Variables pertaining to snow are analyzed for the middle (2041–60) and end (2081–2100) of the twenty-first century under two representative concentration pathway (RCP) scenarios: RCP8.5 (business as usual) and RCP2.6 (mitigation). These four sets of projections are compared with a baseline reconstruction of climate from 1981 to 2000. For both future time slices and scenarios, ensemble-mean total winter snowfall loss is widespread. By the mid-twenty-first century under RCP8.5, ensemble-mean winter snowfall is about 70% of baseline, whereas the corresponding value for RCP2.6 is somewhat higher (about 80% of baseline). By the end of the century, however, the two scenarios diverge significantly. Under RCP8.5, snowfall sees a dramatic further decline; 2081–2100 totals are only about half of baseline totals. Under RCP2.6, only a negligible further reduction from midcentury snowfall totals is seen. Because of the spread in the GCM climate projections, these figures are all associated with large intermodel uncertainty. Snowpack on the ground, as represented by 1 April snow water equivalent, is also assessed. Because of enhanced snowmelt, the loss seen in snowpack is generally 50% greater than that seen in winter snowfall. By midcentury under RCP8.5, warming-accelerated spring snowmelt leads to snow-free dates that are about 1–3 weeks earlier than in the baseline period.
In this study, uncoupled and coupled ocean–atmosphere simulations are carried out for the California Upwelling System to assess the dynamic ocean–atmosphere interactions, namely, the ocean surface current feedback to the atmosphere. The authors show that the current feedback, by modulating the energy transfer from the atmosphere to the ocean, controls the oceanic eddy kinetic energy (EKE). For the first time, it is demonstrated that the current feedback has an effect on the surface stress and a counteracting effect on the wind itself. The current feedback acts as an oceanic eddy killer, reducing the surface EKE by half and the depth-integrated EKE by 27%. On one hand, it reduces the coastal generation of eddies by weakening the surface stress and hence the nearshore supply of positive wind work (i.e., the work done by the wind on the ocean). On the other hand, by inducing a surface stress curl opposite to the current vorticity, it deflects energy from the geostrophic current into the atmosphere and dampens eddies. The wind response counteracts the surface stress response. It partly reenergizes the ocean in the coastal region and decreases the offshore return of energy to the atmosphere. Eddy statistics confirm that the current feedback dampens the eddies and reduces their lifetime, improving the realism of the simulation. Finally, the authors propose an additional energy element in the Lorenz diagram of energy conversion: namely, the current-induced transfer of energy from the ocean to the atmosphere at the eddy scale.
The climate warming effects of accelerated urbanization along with projected global climate change raise an urgent need for sustainable mitigation and adaptation strategies to cool urban climates. Our modeling results show that historical urbanization in the Los Angeles and San Diego metropolitan areas has increased daytime urban air temperature by 1.3 °C, in part due to a weakening of the onshore sea breeze circulation. We find that metropolis-wide adoption of cool roofs can meaningfully offset this daytime warming, reducing temperatures by 0.9 °C relative to a case without cool roofs. Residential cool roofs were responsible for 67% of the cooling. Nocturnal temperature increases of 3.1 °C from urbanization were larger than daytime warming, while nocturnal temperature reductions from cool roofs of 0.5 °C were weaker than corresponding daytime reductions. We further show that cool roof deployment could partially counter the local impacts of global climate change in the Los Angeles metropolitan area. Assuming a scenario in which there are dramatic decreases in greenhouse gas emissions in the 21st century (RCP2.6), mid- and end-of-century temperature increases from global change relative to current climate are similarly reduced by cool roofs from 1.4 °C to 0.6 °C. Assuming a scenario with continued emissions increases throughout the century (RCP8.5), mid-century warming is significantly reduced by cool roofs from 2.0 °C to 1.0 °C. The end-century warming, however, is significantly offset only in small localized areas containing mostly industrial/commercial buildings where cool roofs with the highest albedo are adopted. We conclude that metropolis-wide adoption of cool roofs can play an important role in mitigating the urban heat island effect, and offsetting near-term local warming from global climate change. Global-scale reductions in greenhouse gas emissions are the only way of avoiding long-term warming, however. We further suggest that both climate mitigation and adaptation can be pursued simultaneously using “cool photovoltaics.”
In the current generation of climate models, the projected increase in global precipitation over the 21st century ranges from 2% to 10% under a high‐emission scenario. Some of this uncertainty can be traced to the rapid response to carbon dioxide (CO2) forcing. We analyze an ensemble of simulations to better understand model spread in this rapid response. A substantial amount is linked to how the land surface partitions a change in latent versus sensible heat flux in response to the CO2‐induced radiative perturbation; a larger increase in sensible heat results in a larger decrease in global precipitation. Model differences in the land surface response appear to be strongly related to the vegetation response to increased CO2, specifically, the closure of leaf stomata. Future research should thus focus on evaluation of the vegetation physiological response, including stomatal conductance parameterizations, for the purpose of constraining the fast response of Earth's hydrologic cycle to CO2 forcing.
We examine the tropical inversion strength, measured by the estimated inversion strength (EIS), and its response to climate change in 18 models associated with phase 5 of the Coupled Model Intercomparison Project (CMIP5). While CMIP5 models generally capture the geographic distribution of observed EIS, they systematically underestimate it off the west coasts of continents, due to a warm bias in sea surface temperature. The negative EIS bias may contribute to the low bias in tropical low-cloud cover in the same models. Idealized perturbation experiments reveal that anthropogenic forcing leads directly to EIS increases, independent of “temperature-mediated” EIS increases associated with long-term oceanic warming. This fast EIS response to anthropogenic forcing is strongly impacted by nearly instantaneous continental warming. The temperature-mediated EIS change has contributions from both uniform and nonuniform oceanic warming. The substantial EIS increases in uniform oceanic warming simulations are due to warming with height exceeding the moist-adiabatic lapse rate in tropical warm pools. EIS also increases in fully coupled ocean–atmosphere simulations where CO2 concentration is instantaneously quadrupled, due to both fast and temperature-mediated changes. The temperature-mediated EIS change varies with tropical warming in a nonlinear fashion: the EIS change per degree of tropical warming is much larger in the early stage of the simulations than in the late stage, due to delayed warming in the eastern parts of the subtropical oceans. Given the importance of EIS in regulating tropical low-cloud cover, this suggests that the tropical low-cloud feedback may also be nonlinear.
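For reference, EIS is commonly defined in the literature (following Wood and Bretherton) in terms of the lower-tropospheric stability (LTS) with a moist-adiabatic correction; the abstract does not spell out the formula, so this is the standard formulation rather than one specific to the study:

```latex
\mathrm{LTS} = \theta_{700} - \theta_{0},
\qquad
\mathrm{EIS} = \mathrm{LTS} - \Gamma_m^{850}\,\bigl(z_{700} - \mathrm{LCL}\bigr),
```

where $\theta_{700}$ and $\theta_{0}$ are the potential temperatures at 700 hPa and the surface, $\Gamma_m^{850}$ is the moist-adiabatic lapse rate evaluated at 850 hPa, $z_{700}$ is the height of the 700 hPa level, and LCL is the lifting condensation level. The subtraction of the moist-adiabatic term is what makes EIS less sensitive than LTS to overall tropospheric warming, which is relevant to the warm-pool result described above.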
A physically-based statistical modeling approach to downscale coarse-resolution reanalysis near-surface winds over a region of complex terrain is developed and tested in this study. Our approach is guided by physical variables and meteorological relationships that are important for determining near-surface wind flow. Preliminary fine-scale winds are estimated by correcting the coarse-to-fine grid resolution mismatch in roughness length. Guided by the physics shaping near-surface winds, we then formulate a multivariable linear regression model that uses near-surface micrometeorological variables and the preliminary estimates as predictors to calculate the final wind products. The coarse-to-fine grid resolution ratio is approximately 10 to 1 for our study region of southern California. A validated 3-km-resolution dynamically-downscaled wind dataset is used to train and validate our method. Winds from our statistical modeling approach accurately reproduce the dynamically-downscaled near-surface wind field, with wind speed and wind direction errors of less than 1.5 m s⁻¹ and 30°, respectively. This approach can greatly accelerate the production of near-surface wind fields that are much more accurate than reanalysis data, while limiting the amount of computationally and time-intensive dynamical downscaling. Future studies will evaluate the ability of this approach to downscale other reanalysis data and climate model outputs with varying coarse-to-fine grid resolutions and domains of interest.
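The regression step of such a hybrid approach can be sketched in a few lines: a multivariable linear model is trained against dynamically downscaled winds, using the roughness-corrected preliminary estimate and coarse-scale meteorological variables as predictors. The predictors, coefficients, and data below are synthetic stand-ins, not the study's actual variables.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic training data standing in for a dynamically downscaled wind field
# (all variables and coefficients here are illustrative, not from the study)
n = 500
u_prelim = rng.normal(5.0, 2.0, n)  # preliminary wind after roughness-length correction (m/s)
t2m = rng.normal(288.0, 5.0, n)     # hypothetical near-surface temperature predictor (K)
pgrad = rng.normal(0.0, 1.0, n)     # hypothetical coarse-scale pressure-gradient predictor
u_fine = (0.9 * u_prelim + 0.05 * (t2m - 288.0) + 0.8 * pgrad
          + rng.normal(0.0, 0.5, n))  # "dynamically downscaled" target wind

# Multivariable linear regression: fine-scale wind from physical predictors
X = np.column_stack([u_prelim, t2m - 288.0, pgrad, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, u_fine, rcond=None)

# Apply the trained model; in practice this step replaces further
# computationally expensive dynamical downscaling
u_hat = X @ beta
rmse = np.sqrt(np.mean((u_hat - u_fine) ** 2))
print(f"training RMSE: {rmse:.2f} m/s")
```

Once trained, the regression is essentially free to evaluate, which is the source of the computational savings relative to running a dynamical model for every input dataset.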
A new hybrid statistical–dynamical downscaling technique is described to project mid- and end-of-twenty-first-century local precipitation changes associated with 36 global climate models (GCMs) in phase 5 of the Coupled Model Intercomparison Project archive over the greater Los Angeles region. Land-averaged precipitation changes, ensemble-mean changes, and the spread of those changes for both time slices are presented. It is demonstrated that the results are similar to what would be produced if expensive dynamical downscaling techniques were instead applied to all GCMs. Changes in land-averaged ensemble-mean precipitation are near zero for both time slices, reflecting the region’s typical position in the models at the node of oppositely signed large-scale precipitation changes. For both time slices, the intermodel spread of changes is only about 0.2–0.4 times as large as natural interannual variability in the baseline period. A caveat to these conclusions is that interannual variability in the tropical Pacific is generally regarded as a weakness of the GCMs. As a result, there is some chance the GCM responses in the tropical Pacific to a changing climate and associated impacts on Southern California precipitation are not credible. It is subjectively judged that this GCM weakness increases the uncertainty of regional precipitation change, perhaps by as much as 25%. Thus, the possibility cannot be excluded that significant regional adaptation challenges related to either a precipitation increase or decrease could arise. However, the most likely downscaled outcome is a small change in local mean precipitation compared to natural variability, with large uncertainty on the sign of the change.
In this study (Part I), the mid-twenty-first-century surface air temperature increase in the entire CMIP5 ensemble is downscaled to very high resolution (2 km) over the Los Angeles region, using a new hybrid dynamical–statistical technique. This technique combines the ability of dynamical downscaling to capture finescale dynamics with the computational savings of a statistical model to downscale multiple GCMs. First, dynamical downscaling is applied to five GCMs. Guided by an understanding of the underlying local dynamics, a simple statistical model is built relating the GCM input and the dynamically downscaled output. This statistical model is used to approximate the warming patterns of the remaining GCMs, as if they had been dynamically downscaled. The full 32-member ensemble allows for robust estimates of the most likely warming and uncertainty resulting from intermodel differences. The warming averaged over the region has an ensemble mean of 2.3°C, with a 95% confidence interval ranging from 1.0° to 3.6°C. Inland and high elevation areas warm more than coastal areas year round, and by as much as 60% in the summer months. A comparison to other common statistical downscaling techniques shows that the hybrid method produces similar regional-mean warming outcomes but demonstrates considerable improvement in capturing the spatial details. Additionally, this hybrid technique incorporates an understanding of the physical mechanisms shaping the region’s warming patterns, enhancing the credibility of the final results.
Using the hybrid downscaling technique developed in part I of this study, temperature changes relative to a baseline period (1981–2000) in the greater Los Angeles region are downscaled for two future time slices: midcentury (2041–60) and end of century (2081–2100). Two representative concentration pathways (RCPs) are considered, corresponding to greenhouse gas emission reductions over coming decades (RCP2.6) and to continued twenty-first-century emissions increases (RCP8.5). All available global climate models from phase 5 of the Coupled Model Intercomparison Project (CMIP5) are downscaled to provide likelihood and uncertainty estimates. By the end of century under RCP8.5, a distinctly new regional climate state emerges: average temperatures will almost certainly be outside the interannual variability range seen in the baseline. Except for the highest elevations and a narrow swath very near the coast, land locations will likely see 60–90 additional extremely hot days per year, effectively adding a new season of extreme heat. In mountainous areas, a majority of the many baseline days with freezing nighttime temperatures will most likely not occur. According to a similarity metric that measures daily temperature variability and the climate change signal, the RCP8.5 end-of-century climate will most likely be only about 50% similar to the baseline. For midcentury under RCP2.6 and RCP8.5 and end of century under RCP2.6, these same measures also indicate a detectable though less significant climatic shift. Therefore, while measures reducing global emissions would not prevent climate change at this regional scale in the coming decades, their impact would be dramatic by the end of the twenty-first century.
Spatial and temporal variability of nearshore winds in eastern boundary current systems is affected by orography, coastline shape, and air–sea interaction. These factors lead to a weakening of the wind close to the coast: the so-called wind drop-off. In this study, regional atmospheric simulations over the US West Coast are used to characterize the monthly behavior of the wind drop-off and assess the mechanisms controlling it. Using a long-term simulation, we show that the wind drop-off varies spatially and seasonally in both its offshore extent and intensity: the offshore extent ranges from around 10 to 80 km from the coast, and the wind reduction from 10% to 80%. We show that mountain orography has its biggest influence on the wind drop-off where it is combined with the coastline shape of a cape. The primary associated processes are orographically induced vortex stretching and the surface drag related to turbulent momentum flux divergence, which has an enhanced drag coefficient over land. Orographically induced tilting/twisting can also be locally significant in the vicinity of capes. The land–sea drag difference acts as a barrier to encroachment of the wind onto the land through turbulent momentum flux divergence; it turns the wind parallel to the shore and slightly reduces it close to the coast. Another minor factor is the sharp coastal sea surface temperature front associated with upwelling, which can weaken the surface wind in the coastal strip by shallowing the marine boundary layer and decoupling it from the overlying troposphere.
Changes to mean and extreme wet season precipitation over California on interannual time scales are analyzed using twenty-first-century precipitation data from 34 global climate models. Models disagree on the sign of projected changes in mean precipitation, although in most models the change is very small compared to historical and simulated levels of interannual variability. For the 2020/21–2059/60 period, there is no projected increase in the frequency of extremely dry wet seasons in the ensemble mean. Wet extremes are found to increase to around 2 times the historical frequency, which is statistically significant at the 95% level. Stronger signals emerge in the 2060/61–2099/2100 period. Across all models, extremely dry wet seasons are roughly 1.5 to 2 times more common, and wet extremes generally triple in their historical frequency (statistically significant). Large increases in precipitation variability in most models account for the modest increases in dry extremes. Increases in the frequency of wet extremes can be ascribed to equal contributions from increased variability and increases in the mean. These increases in the frequency of interannual precipitation extremes will create severe water management problems in a region where coping with large interannual variability in precipitation is already a challenge. Evidence from models and observations is examined to understand the causes of the low precipitation associated with the 2013/14 drought in California. These lines of evidence all strongly indicate that the low 2013/14 wet season precipitation total can be very likely attributed to natural variability, in spite of the projected future changes in extremes.
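The way shifts in the mean and in variability jointly inflate the frequency of wet extremes can be illustrated with an idealized Gaussian sketch. The distributional assumption and all numbers below are hypothetical, chosen only to mimic the qualitative result that a modest mean increase plus larger variability can roughly triple the exceedance frequency of the historical 95th percentile.

```python
from math import erf, sqrt

def exceed_prob(threshold, mu, sigma):
    """P(X > threshold) for X ~ Normal(mu, sigma)."""
    z = (threshold - mu) / (sigma * sqrt(2.0))
    return 0.5 * (1.0 - erf(z))

# Idealized wet-season precipitation climatology (units arbitrary, values made up)
mu0, sigma0 = 500.0, 150.0
wet_thresh = mu0 + 1.645 * sigma0  # ~95th percentile of the historical climate

p_hist = exceed_prob(wet_thresh, mu0, sigma0)            # ~0.05 by construction
p_mean = exceed_prob(wet_thresh, mu0 * 1.1, sigma0)      # mean increases by 10%
p_var = exceed_prob(wet_thresh, mu0, sigma0 * 1.3)       # variability up by 30%
p_both = exceed_prob(wet_thresh, mu0 * 1.1, sigma0 * 1.3)

for name, p in [("historical", p_hist), ("mean shift only", p_mean),
                ("more variability only", p_var), ("both", p_both)]:
    print(f"{name:>22s}: {p / p_hist:.1f}x historical frequency")
```

With these illustrative numbers the mean shift and the variability increase each roughly double the exceedance frequency on their own, and together they roughly triple it—the same decomposition into "equal contributions" described in the abstract.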
Regional information on climate change is urgently needed but often deemed unreliable. To achieve credible regional climate projections, it is essential to understand underlying physical processes, reduce model biases and evaluate their impact on projections, and adequately account for internal variability. In the tropics, where atmospheric internal variability is small compared with the forced change, advancing our understanding of the coupling between long-term changes in upper-ocean temperature and the atmospheric circulation will help most to narrow the uncertainty. In the extratropics, relatively large internal variability introduces substantial uncertainty, while exacerbating risks associated with extreme events. Large ensemble simulations are essential to estimate the probabilistic distribution of climate change on regional scales. Regional models inherit atmospheric circulation uncertainty from global models and do not automatically solve the problem of regional climate change. We conclude that the current priority is to understand and reduce uncertainties on scales greater than 100 km to aid assessments at finer scales.
The area burned by Southern California wildfires has increased in recent decades, with implications for human health, infrastructure, and ecosystem management. Meteorology and fuel structure are universally recognized controllers of wildfire, but their relative importance, and hence the efficacy of abatement and suppression efforts, remains controversial. Southern California's wildfires can be partitioned by meteorology: fires typically occur either during Santa Ana winds (SA fires) in October through April, or during warm and dry periods in June through September (non-SA fires). Previous work has not quantitatively distinguished between these fire regimes when assessing economic impacts or climate change influence. Here we separate five decades of fire perimeters into those coinciding with and without SA winds. The two fire types contributed almost equally to burned area, yet SA fires were responsible for 80% of cumulative 1990–2009 economic losses ($3.1 billion). The damage disparity was driven by fire characteristics: SA fires spread three times faster, occurred closer to urban areas, and burned into areas with greater housing values. Non-SA fires were comparatively more sensitive to age-dependent fuels, often occurred in higher elevation forests, lasted for extended periods, and accounted for 70% of total suppression costs. An improved distinction of fire type has implications for future projections and management. The area burned in non-SA fires is projected to increase 77% (±43%) by the mid-21st century with warmer and drier summers, and the SA area burned is projected to increase 64% (±76%), underscoring the need to evaluate the allocation and effectiveness of suppression investments.