Similar Literature
20 similar documents found.
1.
This paper examines different concepts of a ‘warming commitment’, which is often used in various ways to describe or imply that a certain level of warming is irrevocably committed to over time frames such as the next 50 to 100 years, or longer. We review and quantify four different concepts, namely (1) a ‘constant emission warming commitment’, (2) a ‘present forcing warming commitment’, (3) a ‘zero emission (geophysical) warming commitment’ and (4) a ‘feasible scenario warming commitment’. While a ‘feasible scenario warming commitment’ is probably the most relevant one for policy making, it depends centrally on key assumptions as to the technical, economic and political feasibility of future greenhouse gas emission reductions. This issue is of direct policy relevance when one considers that the 2002 global mean temperatures were 0.8 ± 0.2 °C above the pre-industrial (1861–1890) mean and the European Union has a stated goal of limiting warming to 2 °C above the pre-industrial mean: what is the risk that we are committed to overshoot 2 °C? Using a simple climate model (MAGICC) for probabilistic computations based on the conventional IPCC uncertainty range for climate sensitivity (1.5 to 4.5 °C), we found that (1) a constant emission scenario is virtually certain to overshoot 2 °C, with a central estimate of 2.0 °C by 2100 (4.2 °C by 2400). (2) For present radiative forcing levels it seems unlikely that 2 °C will be overshot (central warming estimate 1.1 °C by 2100 and 1.2 °C by 2400, with ~10% probability of overshooting 2 °C). However, the risk of overshooting increases rapidly if radiative forcing is stabilized much above 400 ppm CO2 equivalence (1.95 W/m2) in the long term. (3) From a geophysical point of view, if all human-induced emissions ceased tomorrow, it seems ‘exceptionally unlikely’ that 2 °C will be overshot (central estimate: 0.7 °C by 2100; 0.4 °C by 2400). (4) Assuming future emissions according to the lower end of published mitigation scenarios (350 ppm CO2eq to 450 ppm CO2eq), the central temperature projections are 1.5 to 2.1 °C by 2100 (1.5 to 2.0 °C by 2400), with a risk of overshooting 2 °C of between 10 and 50% by 2100 and 1–32% in equilibrium. Furthermore, we quantify the ‘avoidable warming’ to be 0.16–0.26 °C for every 100 GtC of avoided CO2 emissions, based on a range of published mitigation scenarios.
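The overshoot-risk question above amounts to propagating a climate-sensitivity distribution through an equilibrium-warming relation. Below is a minimal Monte Carlo sketch of that kind of calculation, assuming an illustrative lognormal sensitivity distribution and the common 3.7 W m−2 forcing for doubled CO2; it is not the MAGICC model, and the numbers are placeholders rather than the paper's results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative lognormal prior roughly spanning the IPCC 1.5-4.5 degC
# range for equilibrium climate sensitivity (not the paper's pdf).
S = rng.lognormal(mean=np.log(3.0), sigma=0.35, size=100_000)  # K per 2xCO2

F_2x = 3.7       # radiative forcing of doubled CO2, W m-2 (common value)
F_stab = 1.95    # forcing stabilized at ~400 ppm CO2-eq, W m-2 (see text)

# Equilibrium warming for a forcing held fixed indefinitely.
dT_eq = S * F_stab / F_2x

print(f"median equilibrium warming: {np.median(dT_eq):.2f} K")
print(f"P(overshoot 2 degC): {np.mean(dT_eq > 2.0):.1%}")
```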

2.
From multiple ensembles of climate simulations using the Community Climate System Model version 3, global climate changes have been investigated, focusing on long-term responses to stabilized anthropogenic forcings. In addition to the standard forcing scenarios for the current international assessment, an overshoot scenario, in which radiative forcings are decreased from one stabilized level to another, is also considered. The globally averaged annual surface air temperature increases during the twenty-first century by 2.58 and 1.56°C for increased forcings under the two future scenarios denoted A1B and B1, respectively. These changes continue, but at much slower rates, in later centuries under forcings stabilized at year-2100 levels. The overshoot scenario provides a different pathway to the lower B1 level by way of the greater A1B level. This scenario results in a surface climate similar to that of the B1 scenario within 100 years after the forcing reaches the B1 level. In contrast to the surface changes, responses in the ocean are significantly delayed. Linear response theory estimates that, for three ocean layers (surface to 100 m, 100–500 m, and 500 m to the bottom), temperature changes under stabilized forcings from the twenty-first century to a final equilibrium state in the A1B (B1) scenario are factors of 0.3–0.4, 0.9, and 17 (0.3, 0.6, and 11), respectively, of the changes during the twenty-first century. Although responses in the lower ocean layers imply nonlinear behavior, the ocean temperatures in the overshoot and B1 scenarios are likely to converge in their final equilibrium states.

3.
RCP4.5: a pathway for stabilization of radiative forcing by 2100
Representative Concentration Pathway (RCP) 4.5 is a scenario that stabilizes radiative forcing at 4.5 W m−2 in the year 2100 without ever exceeding that value. Simulated with the Global Change Assessment Model (GCAM), RCP4.5 includes long-term, global emissions of greenhouse gases, short-lived species, and land use and land cover in a global economic framework. RCP4.5 was updated from earlier GCAM scenarios to incorporate the historical emissions and land cover information common to the RCP process, and follows a cost-minimizing pathway to reach the target radiative forcing. The imperative to limit emissions in order to reach this target drives changes in the energy system, including shifts to electricity, to lower-emissions energy technologies, and to the deployment of carbon capture and geologic storage technology. In addition, the RCP4.5 emissions price also applies to land use emissions; as a result, forest lands expand from their present-day extent. The simulated future emissions and land use were downscaled from the regional simulation to a grid to facilitate transfer to climate models. While there are many alternative pathways to achieve a radiative forcing level of 4.5 W m−2, the application of RCP4.5 provides a common platform for climate models to explore the climate system response to stabilizing the anthropogenic components of radiative forcing.

4.
Uncertainties in climate stabilization
The atmospheric composition, temperature and sea level implications out to 2300 of new reference and cost-optimized stabilization emissions scenarios produced using three different Integrated Assessment (IA) models are described and assessed. Stabilization is defined in terms of radiative forcing targets for the sum of gases potentially controlled under the Kyoto Protocol. For the most stringent stabilization case (“Level 1”, with CO2 concentration stabilizing at about 450 ppm), peak CO2 emissions occur close to today, implying (in the absence of a substantial CO2 concentration overshoot) a need for immediate CO2 emissions abatement if we wish to stabilize at this level. In the extended reference case, CO2 stabilizes at about 1,000 ppm in 2200, but even achieving this target requires large and rapid CO2 emissions reductions over the twenty-second century. Future temperature changes for the Level 1 stabilization case differ noticeably between the IA models even when a common set of climate model parameters is used (largely a result of different assumptions for non-Kyoto gases). For the Level 1 stabilization case, there is a probability of approximately 50% that warming from pre-industrial times will be less than (or more than) 2°C. For one of the IA models, warming in the Level 1 case is actually greater out to 2040 than in the reference case, due to the effect of decreasing SO2 emissions that occur as a side effect of the policy-driven reduction in CO2 emissions. This effect is less noticeable for the other stabilization cases, but still leads to policies having virtually no effect on global-mean temperatures out to around 2060. Sea level rise uncertainties are very large: for the Level 1 stabilization case, for example, increases range from 8 to 120 cm over 2000 to 2300.

5.
Probabilistic climate change projections using neural networks
Anticipated future warming of the climate system increases the need for accurate climate projections. A central problem is the large uncertainty associated with these model projections, and the fact that uncertainty estimates are often based on expert judgment rather than objective quantitative methods. Further, important climate model parameters are still given as poorly constrained ranges that are partly inconsistent with the observed warming during the industrial period. Here we present a neural-network-based climate model substitute that increases the efficiency of large climate model ensembles by at least an order of magnitude. Using the observed surface warming over the industrial period and estimates of global ocean heat uptake as constraints for the ensemble, this method estimates ranges for climate sensitivity and radiative forcing that are consistent with observations. In particular, negative values for the uncertain indirect aerosol forcing exceeding −1.2 W m−2 can be excluded with high confidence. A parameterization to account for the uncertainty in the future carbon cycle is introduced, derived separately from a carbon cycle model. This allows us to quantify the effect of the feedback between oceanic and terrestrial carbon uptake and global warming on global temperature projections. Finally, probability density functions for the surface warming until year 2100 for two illustrative emission scenarios are calculated, taking into account uncertainties in the carbon cycle, radiative forcing, climate sensitivity, model parameters and the observed temperature records. We find that warming exceeds the surface warming range projected by IPCC for almost half of the ensemble members. Projection uncertainties are only consistent with IPCC if a model-derived upper limit of about 5 K is assumed for climate sensitivity.
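A toy illustration of the surrogate-modelling idea follows, assuming a synthetic stand-in "climate model" and scikit-learn's MLPRegressor; the paper's actual network, inputs and training ensemble are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Stand-in for an expensive climate model: warming in 2100 as a
# nonlinear function of sensitivity S and aerosol forcing F_aer.
def slow_model(S, F_aer):
    return S * (2.6 + F_aer) / 3.7 + 0.05 * S**2

# A modest training ensemble of "full model" runs.
X = rng.uniform([1.5, -1.5], [4.5, 0.0], size=(2000, 2))
y = slow_model(X[:, 0], X[:, 1])

# The neural-network substitute, trained on the ensemble.
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                         random_state=0).fit(X, y)

# The surrogate can now screen a far larger parameter ensemble cheaply,
# e.g. for constraining parameters against observed warming.
X_big = rng.uniform([1.5, -1.5], [4.5, 0.0], size=(500_000, 2))
warming = surrogate.predict(X_big)
print(f"surrogate ensemble median warming: {np.median(warming):.2f} K")
```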

6.
The question of the appropriate timing and stringency of future greenhouse gas (GHG) emission reductions remains an issue in the discussion of mitigation responses to the climate change problem. It has been argued that our near-term action should be guided by a long-term vision for the climate, possibly a long-term temperature target. In this paper, we review proposals for long-term climate targets to avoid ‘dangerous’ climate change. Using probability estimates of climate sensitivity from the literature, we then generate probabilistic emissions scenarios that satisfy temperature targets of 2.0, 2.5, and 3.0°C above pre-industrial levels with no overshoot. Our interest is in the implications of these targets for abatement requirements over the next 50 years. If we allow global industrial GHG emissions to peak in 2025 at 14 GtCeq, and wish to achieve a 2.0°C target with at least 50% certainty, we find that the low sensitivity estimate in the literature suggests our industrial emissions must fall to 9 GtCeq by 2050, equal to the level in 2000. However, the average literature sensitivity estimate suggests the level must be less than 2 GtCeq; and in the high sensitivity case, the target is simply unreachable unless we allow for overshoot. Our results suggest that, in light of the uncertainty in our knowledge of the climate sensitivity, a long-term temperature target (such as the 2.0°C target proposed by the European Commission) provides only limited guidance on near-term mitigation requirements.

7.
The radiative flux perturbations and subsequent temperature responses associated with the eruption of Mount Pinatubo in 1991 are studied in the ten general circulation models incorporated in the Coupled Model Intercomparison Project, phase 3 (CMIP3), that include a parameterization of volcanic aerosol. Models and observations show decreases in global mean temperature of up to 0.5 K, in response to radiative perturbations of up to 10 W m−2 averaged over the tropics. The time scale representing the delay between radiative perturbation and temperature response is determined by the slow ocean response, and is estimated to be centered around 4 months in the models. Although the magnitude of the temperature response to a volcanic eruption has previously been used as an indicator of equilibrium climate sensitivity in models, we find these two quantities to be only weakly correlated. This may partly be because the size of the volcano-induced radiative perturbation varies among the models. The magnitude of the modelled radiative perturbation is found to increase with decreasing climate sensitivity, with the exception of one outlying model. Therefore, we scale the temperature perturbation by the radiative perturbation in each model, and use the ratio between the integrated temperature perturbation and the integrated radiative perturbation as a measure of sensitivity to volcanic forcing. This ratio is found to be well correlated with the model climate sensitivity, more sensitive models having a larger ratio. Further, if this correspondence between “volcanic sensitivity” and sensitivity to CO2 forcing is a feature not only of the models but also of the real climate system, the linear relation can be used to estimate the real climate sensitivity. The observational value of the ratio signifying volcanic sensitivity is hereby estimated to correspond to an equilibrium climate sensitivity, i.e. the equilibrium temperature increase due to a doubling of the CO2 concentration, of between 1.7 and 4.1 K. Several sources of uncertainty reside in the method applied, and it is pointed out that additional model output related to ocean heat storage and radiative forcing could refine the analysis, as could reduced uncertainty in the observational record, of temperature as well as of forcing.
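The ratio diagnostic can be written as the integrated temperature response per integrated radiative perturbation over the post-eruption period. A schematic with synthetic time series follows; the 0.3 K per W m−2 response and the 4-month lag are placeholders echoing the text, not CMIP3 output.

```python
import numpy as np

t = np.arange(36)                      # months since eruption (synthetic)
dF = -4.0 * np.exp(-t / 12.0)          # radiative perturbation, W m-2
dT = np.zeros_like(dF)                 # temperature response, K
for i in range(1, t.size):             # one-box response with ~4-month lag
    dT[i] = dT[i - 1] + (0.3 * dF[i] - dT[i - 1]) / 4.0

# "Volcanic sensitivity": integrated temperature response per integrated
# radiative perturbation (uniform monthly spacing cancels in the ratio).
ratio = dT.sum() / dF.sum()
print(f"volcanic sensitivity ratio = {ratio:.3f} K per W m-2")
```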

8.
Understanding the historical and future response of the global climate system to anthropogenic emissions of radiatively active atmospheric constituents has become a timely and compelling concern. At present, however, there are uncertainties in: the total radiative forcing associated with changes in the chemical composition of the atmosphere; the effective forcing applied to the climate system after its (temporary) reduction via ocean heat uptake; and the strength of the climate feedbacks that subsequently modify this forcing. Here a set of analyses derived from atmospheric general circulation model simulations is used to estimate the effective and total radiative forcing of the observed climate system due to anthropogenic emissions over the last 50 years of the twentieth century. They are also used to estimate the sensitivity of the observed climate system to these emissions, as well as the expected change in global surface temperatures once the climate system returns to radiative equilibrium. Results indicate that estimates of the effective radiative forcing and the total radiative forcing associated with historical anthropogenic emissions differ across models, as do estimates of the historical sensitivity of the climate to these emissions. However, the results suggest that the variations in climate sensitivity and total climate forcing are not independent: the two vary inversely with respect to one another. As such, expected equilibrium temperature changes, which are given by the product of the total radiative forcing and the climate sensitivity, are relatively constant between models, particularly in comparison with results in which the total radiative forcing is assumed constant. Implications of these results for projected future climate forcings and the subsequent responses are also discussed.

9.
Troy Masters, Climate Dynamics, 2014, 42(7–8): 2173–2181
Climate sensitivity is estimated based on 0–2,000 m ocean heat content and surface temperature observations from the second half of the 20th century and the first decade of the 21st century, using a simple energy balance model and the change in the rate of ocean heat uptake to determine the radiative restoration strength over this time period. The relationship between this 30–50 year radiative restoration strength and longer-term effective sensitivity is investigated using an ensemble of 32 model configurations from the Coupled Model Intercomparison Project phase 5 (CMIP5), suggesting a strong correlation between the two. The mean radiative restoration strength over this period for the CMIP5 members examined is 1.16 W m−2 K−1, compared to 2.05 W m−2 K−1 from the observations. This suggests that temperature in these CMIP5 models may be too sensitive to perturbations in radiative forcing, although this depends on the actual magnitude of the anthropogenic aerosol forcing in the modern period. The potential change in the radiative restoration strength over longer timescales is also considered, resulting in a likely (67%) range of 1.5–2.9 K for equilibrium climate sensitivity, and a 90% confidence interval of 1.2–5.1 K.
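The estimate hinges on the energy-balance relation N = F − λΔT: given forcing F, a top-of-atmosphere imbalance N (approximated by the ocean heat uptake rate) and warming ΔT, the radiative restoration strength λ follows, and ECS = F_2x/λ. A minimal numeric sketch with invented values (not the paper's data):

```python
# Illustrative observational estimates over the analysis period
# (invented numbers, not the paper's values).
dT = 0.50    # surface warming, K
F = 1.60     # radiative forcing change, W m-2
N = 0.45     # ocean heat uptake rate (TOA imbalance proxy), W m-2

# Energy balance: N = F - lambda * dT  =>  lambda = (F - N) / dT
lam = (F - N) / dT    # radiative restoration strength, W m-2 K-1
ecs = 3.7 / lam       # equilibrium climate sensitivity, K per 2xCO2
print(f"lambda = {lam:.2f} W m-2 K-1, ECS ~ {ecs:.1f} K")
```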

10.
Article 2 of the United Nations Framework Convention on Climate Change (UNFCCC) calls for stabilization of greenhouse gas (GHG) concentrations at levels that prevent dangerous anthropogenic interference (DAI) in the climate system. However, some of the recent policy literature has focused on dangerous climatic change (DCC) rather than on DAI. DAI is a set of increases in GHG concentrations that has a non-negligible possibility of provoking changes in climate that in turn have a non-negligible possibility of causing unacceptable harm, including harm to one or more of ecosystems, food production systems, and sustainable socio-economic systems, whereas DCC is a change of climate that has actually occurred or is assumed to occur and that has a non-negligible possibility of causing unacceptable harm. If the goal of climate policy is to prevent DAI, then the determination of allowable GHG concentrations requires three inputs: the probability distribution function (pdf) for climate sensitivity, the pdf for the temperature change at which significant harm occurs, and the allowed probability (“risk”) of incurring harm previously deemed to be unacceptable. If the goal of climate policy is to prevent DCC, then one must know the correct climate sensitivity (along with the harm pdf and the risk tolerance) in order to determine allowable GHG concentrations. DAI from elevated atmospheric CO2 also arises through its impact on ocean chemistry as the ocean absorbs CO2. The primary chemical impact is a reduction in the degree of supersaturation of ocean water with respect to calcium carbonate, the structural building material for coral and for the calcareous phytoplankton at the base of the marine food chain. Here, the probability of significant harm (in particular, impacts violating the subsidiary conditions in Article 2 of the UNFCCC) is computed as a function of the ratio of total GHG radiative forcing to the radiative forcing for a CO2 doubling, using two alternative pdfs for climate sensitivity and three alternative pdfs for the harm temperature threshold. The allowable radiative forcing ratio depends on the probability of significant harm that is tolerated, and can be translated into allowable CO2 concentrations given some assumption concerning the future change in total non-CO2 GHG radiative forcing. If future non-CO2 GHG forcing is reduced to half of the present non-CO2 GHG forcing, then the allowable CO2 concentration is 290–430 ppmv for a 10% risk tolerance (depending on the chosen pdfs) and 300–500 ppmv for a 25% risk tolerance (assuming a pre-industrial CO2 concentration of 280 ppmv). For future non-CO2 GHG forcing frozen at the present value, and for a 10% risk threshold, the allowable CO2 concentration is 257–384 ppmv. The implications of these results are that (1) emissions of GHGs need to be reduced as quickly as possible, not in order to comply with the UNFCCC, but in order to minimize the extent and duration of non-compliance; (2) we do not have the luxury of trading off reductions in emissions of non-CO2 GHGs against smaller reductions in CO2 emissions; and (3) preparations should begin soon for the creation of negative CO2 emissions through the sequestration of biomass carbon.
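The DAI calculation described here combines a climate-sensitivity pdf with a harm-threshold pdf and a risk tolerance. A minimal sketch of that integral follows, assuming illustrative lognormal and normal pdfs; the paper's alternative pdfs are not reproduced.

```python
import numpy as np
from scipy import stats

# Illustrative pdfs (the paper uses several alternatives).
sens = stats.lognorm(s=0.35, scale=3.0)   # climate sensitivity, K per 2xCO2
harm = stats.norm(loc=2.5, scale=0.8)     # harm temperature threshold, K

def p_harm(ratio, n=200_000, seed=0):
    """Probability of significant harm for a given ratio of total GHG
    forcing to the forcing of a CO2 doubling."""
    rng = np.random.default_rng(seed)
    dT_eq = sens.rvs(n, random_state=rng) * ratio  # equilibrium warming
    return harm.cdf(dT_eq).mean()                  # P(threshold below dT)

for r in (0.5, 0.75, 1.0):
    print(f"forcing ratio {r:.2f}: P(harm) = {p_harm(r):.1%}")
```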

11.
Multi-gas Emissions Pathways to Meet Climate Targets
So far, climate change mitigation pathways have focused mostly on CO2 and a limited number of climate targets. Comprehensive studies of emission implications have been hindered by the absence of a flexible method to generate multi-gas emissions pathways that are user-definable in shape and climate target. The presented method, ‘Equal Quantile Walk’ (EQW), is intended to fill this gap, building upon and complementing existing multi-gas emission scenarios. The EQW method generates new mitigation pathways by ‘walking along equal quantile paths’ of the emission distributions derived from existing multi-gas IPCC baseline and stabilization scenarios. Considered emissions include those of CO2 and all other major radiative forcing agents (greenhouse gases, ozone precursors and sulphur aerosols). Sample EQW pathways are derived for stabilization at 350 ppm to 750 ppm CO2 concentrations and compared to WRE profiles. Furthermore, the ability of the method to analyze emission implications in a probabilistic multi-gas framework is demonstrated. The probability of overshooting a 2 °C climate target is derived using different sets of EQW radiative forcing peaking pathways. If this probability is not to exceed 30%, it seems necessary to peak CO2-equivalence concentrations around 475 ppm and return to lower levels after peaking (below 400 ppm). EQW emissions pathways can be applied in studies relating to Article 2 of the UNFCCC, and for the analysis of climate impacts, adaptation and emission control implications associated with certain climate targets. See for EQW-software and data.
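A toy version of the 'equal quantile walk' idea: find the quantile of a CO2 pathway within an ensemble, year by year, and read the other gases off at the same quantile. The ensemble here is random placeholder data rather than the IPCC scenario database.

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(2005, 2105, 5)

# Placeholder ensemble: 40 scenarios of CO2 and CH4 emissions per year.
co2 = rng.normal(10, 3, size=(40, years.size)).cumsum(axis=1)
ch4 = rng.normal(300, 60, size=(40, years.size)).cumsum(axis=1)

def equal_quantile_walk(co2_path):
    """Map a CO2 pathway to a CH4 pathway at the same ensemble quantile."""
    ch4_path = np.empty_like(co2_path)
    for j, value in enumerate(co2_path):
        q = (co2[:, j] < value).mean()           # quantile of CO2 pathway
        ch4_path[j] = np.quantile(ch4[:, j], q)  # same quantile for CH4
    return ch4_path

target_co2 = np.quantile(co2, 0.2, axis=0)       # a low-emissions pathway
print(equal_quantile_walk(target_co2)[:5])
```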

12.
The RCP greenhouse gas concentrations and their extensions from 1765 to 2300
We present the greenhouse gas concentrations for the Representative Concentration Pathways (RCPs) and their extensions beyond 2100, the Extended Concentration Pathways (ECPs). These projections include all major anthropogenic greenhouse gases and are the result of a multi-year effort to produce new scenarios for climate change research. We combine a suite of atmospheric concentration observations and emissions estimates for greenhouse gases (GHGs) through the historical period (1750–2005) with harmonized emissions projected by four different Integrated Assessment Models for 2005–2100. As concentrations are somewhat dependent on the future climate itself (due to climate feedbacks in the carbon and other gas cycles), we emulate median response characteristics of models assessed in the IPCC Fourth Assessment Report using the reduced-complexity carbon cycle climate model MAGICC6. Projected “best-estimate” global-mean surface temperature increases (using, inter alia, a climate sensitivity of 3°C) range from 1.5°C by 2100 for the lowest of the four RCPs, called both RCP3-PD and RCP2.6, to 4.5°C for the highest one, RCP8.5, relative to pre-industrial levels. Beyond 2100, we present the ECPs, which are simple extensions of the RCPs based on the assumption of either smoothly stabilizing concentrations or constant emissions: for example, the lower RCP2.6 pathway represents a strong mitigation scenario and is extended by assuming constant emissions after 2100 (including net negative CO2 emissions), leading to CO2 concentrations returning to 360 ppm by 2300. We also present the GHG concentrations for one supplementary extension, which illustrates the stringent emissions implications of attempting to go back to ECP4.5 concentration levels by 2250 after emissions during the 21st century followed the higher RCP6 scenario. Corresponding radiative forcing values are presented for the RCPs and ECPs.

13.
Recent work with energy balance climate models and oceanic general circulation models has assessed the potential role of the world ocean in climatic changes on decadal to secular time scales. This scientific challenge is illustrated by estimating the response of the global temperature to changes in trace gas concentrations from the pre-industrial epoch to the middle of the next century. A simple energetic formulation is given to estimate the effect on global equilibrium temperature of a fixed instantaneous radiative forcing and of a time-dependent radiative forcing. An atmospheric energy balance model coupled to a box-advection-diffusion ocean model is then used to estimate the past and future global climatic transient response to trace gas concentration changes. The time-dependent radiative perturbation is estimated from a revised approximate radiative parameterization, and the recent reference set of trace gas scenarios proposed by Wuebbles et al. (1984) is adopted as the standard scenario. Similar computations for the past and future have recently been undertaken by Wigley (1985), but using a purely diffusive ocean and slightly different trace gas scenarios. The skill of the so-called standard experiment is finally assessed by examining the model's sensitivity to different parameters such as the equilibrium surface air temperature change for a doubled CO2 concentration [T_ae(2×CO2)], the heat exchange with the deeper ocean, and the trace gas scenarios. For T_ae(2×CO2) between 1 K and 5 K, the following main results are obtained: (i) for a pre-industrial CO2 concentration of 270 ppmv, the surface air warming between 1850 and 1980 ranges between 0.4 and 1.4 K (if a pre-industrial CO2 concentration of 290 ppmv is chosen, the range is between 0.3 and 1 K); (ii) by comparison with the instantaneous equilibrium computations, the deeper ocean's inertia induces a delay which amounts to between 6 years [for lower T_ae(2×CO2)] and 23 years [for higher T_ae(2×CO2)] in 1980; (iii) for the standard future CO2 and other trace gas scenarios of Wuebbles et al., the surface air warming between 1980 and 2050 is calculated to range between 0.9 and 3.4 K, with a delay amounting to between 7 years and 32 years in 2050 when compared to equilibrium computations.
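As a rough illustration of the kind of model described (an energy-balance atmosphere over a diffusive ocean responding to time-dependent forcing), the sketch below integrates a one-column analogue; all parameter values are invented for illustration and do not reproduce the paper's calibration.

```python
import numpy as np

# One-column energy-balance atmosphere over a diffusive ocean.
lam = 1.2            # climate feedback, W m-2 K-1 (~3 K per 2xCO2)
kappa = 1e-4         # vertical ocean diffusivity, m2 s-1
rho_cp = 4.2e6       # volumetric heat capacity of seawater, J m-3 K-1
dz, nz = 50.0, 40    # 40 layers of 50 m (a 2000 m ocean)
dt = 30 * 86400.0    # one-month time step, s

T = np.zeros(nz)     # temperature anomaly per layer, K
for month in range(12 * 150):                 # 150-year integration
    F = min(4.0, 4.0 * month / (12 * 70))     # forcing ramp over 70 yr
    q = -kappa * rho_cp * np.diff(T) / dz     # downward diffusive heat flux
    dTdt = np.zeros(nz)
    dTdt[0] = (F - lam * T[0] - q[0]) / (rho_cp * dz)    # surface layer
    dTdt[1:-1] = (q[:-1] - q[1:]) / (rho_cp * dz)        # interior layers
    dTdt[-1] = q[-1] / (rho_cp * dz)                     # bottom layer
    T += dt * dTdt

print(f"surface warming after 150 yr: {T[0]:.2f} K "
      f"(equilibrium would be {4.0 / lam:.2f} K)")
```

The transient surface warming lags the equilibrium value because the deep layers keep absorbing heat, which is the delay effect quantified in the abstract.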

14.
A new earth system climate model of intermediate complexity has been developed and its climatology compared to observations. The UVic Earth System Climate Model consists of a three-dimensional ocean general circulation model coupled to a thermodynamic/dynamic sea-ice model, an energy-moisture balance atmospheric model with dynamical feedbacks, and a thermomechanical land-ice model. In order to keep the model computationally efficient, a reduced-complexity atmosphere model is used. Atmospheric heat and freshwater transports are parametrized through Fickian diffusion, and precipitation is assumed to occur when the relative humidity is greater than 85%. Moisture transport can also be accomplished through advection if desired. Precipitation over land is assumed to return instantaneously to the ocean via one of 33 observed river drainage basins. Ice and snow albedo feedbacks are included in the coupled model by locally increasing the prescribed latitudinal profile of the planetary albedo. The atmospheric model includes a parametrization of water vapour/planetary longwave feedbacks, although the radiative forcing associated with changes in atmospheric CO2 is prescribed as a modification of the planetary longwave radiative flux. A specified lapse rate is used to reduce the surface temperature over land where there is topography. The model uses prescribed present-day winds in its climatology, although a dynamical wind feedback is included which exploits a latitudinally varying empirical relationship between atmospheric surface temperature and density. The ocean component of the coupled model is based on the Geophysical Fluid Dynamics Laboratory (GFDL) Modular Ocean Model 2.2, with a global resolution of 3.6° (zonal) by 1.8° (meridional) and 19 vertical levels, and includes an option for brine-rejection parametrization. The sea-ice component incorporates an elastic-viscous-plastic rheology to represent sea-ice dynamics and various options for the representation of sea-ice thermodynamics and thickness distribution. The systematic comparison of the coupled model with observations reveals good agreement, especially when moisture transport is accomplished through advection.

Global warming simulations conducted using the model to explore the role of moisture advection reveal a climate sensitivity of 3.0°C for a doubling of CO2, in line with other, more comprehensive coupled models. Moisture advection, together with the wind feedback, leads to a transient simulation in which the meridional overturning in the North Atlantic initially weakens, but is eventually re-established to its initial strength once the radiative forcing is held fixed, as found in many coupled atmosphere General Circulation Models (GCMs). This is in contrast to experiments in which moisture transport is accomplished through diffusion, whereby the overturning is re-established at a strength greater than its initial value.

When applied to the climate of the Last Glacial Maximum (LGM), the model obtains tropical cooling (30°N-30°S), relative to the present, of about 2.1°C over the ocean and 3.6°C over land. These simulated temperatures are generally cooler than CLIMAP estimates, but not as cool as in some other reconstructions. This moderate cooling is consistent with alkenone reconstructions and a low-to-medium climate sensitivity to perturbations in radiative forcing. An amplification of the cooling occurs in the North Atlantic due to the weakening of North Atlantic Deep Water formation. Concurrent with this weakening is a shallowing of, and a more northward penetration of, Antarctic Bottom Water.

Climate models are usually evaluated by spinning them up under perpetual present-day forcing and comparing the model results with present-day observations. Implicit in this approach is the assumption that the present-day observations are in equilibrium with the present-day radiative forcing. The comparison of a long transient integration (starting at 6 KBP), forced by changing radiative forcing (solar, CO2, orbital), with an equilibrium integration reveals substantial differences. Relative to the climatology from the present-day equilibrium integration, the global mean surface air and sea surface temperatures (SSTs) are 0.74°C and 0.55°C colder, respectively. Deep ocean temperatures are substantially cooler and southern hemisphere sea-ice cover is 22% greater, although the North Atlantic conveyor remains remarkably stable in all cases. The differences are due to the long-timescale memory of the deep ocean for the climatic conditions which prevailed throughout the late Holocene. It is also demonstrated that a global warming simulation that starts from an equilibrium present-day climate (a 'cold start') underestimates the global temperature increase at 2100 by 13% when compared to a transient simulation under historical solar, CO2 and orbital forcing that is also extended out to 2100. This difference (13% compared to 9.8%) is larger than that from an analogous transient experiment which does not include historical changes in solar forcing. These results suggest that groups that do not account for solar forcing changes over the twentieth century may slightly underestimate (~3% in our model) the projected warming by the year 2100.
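A toy analogue of the reduced-complexity moisture treatment described above (Fickian diffusion of specific humidity, with precipitation wherever relative humidity exceeds the 85% threshold); the one-dimensional grid and all numbers are invented, not UVic parameters.

```python
import numpy as np

nlat = 36
q = np.full(nlat, 8.0)                  # specific humidity, g kg-1
q_sat = np.linspace(20.0, 4.0, nlat)    # saturation humidity, equator->pole
evap = 0.02                             # uniform evaporation source per step
D = 0.1                                 # Fickian diffusion coefficient

for step in range(5000):
    q += evap
    q[1:-1] += D * (q[2:] - 2 * q[1:-1] + q[:-2])  # Fickian diffusion
    rh = q / q_sat
    # Rain out any moisture above the 85% relative-humidity threshold.
    precip = np.where(rh > 0.85, q - 0.85 * q_sat, 0.0)
    q -= precip

print(f"equatorial RH: {q[0] / q_sat[0]:.2f}, polar RH: {q[-1] / q_sat[-1]:.2f}")
```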

15.
Guo Zhun & Zhou Tianjun, Advances in Atmospheric Sciences, 2013, 30(6): 1758–1770
To understand the strengths and limitations of a low-resolution version of the Flexible Global Ocean-Atmosphere-Land-Sea-ice model (FGOALS-gl) in simulating the climate of the last millennium, the energy balance, climate sensitivity and absorption feedback of the model are analyzed. The simulation of last-millennium climate was carried out by driving the model with natural (solar radiation and volcanic eruptions) and anthropogenic (greenhouse gases and aerosols) forcing agents. The model's feedback factors for (i.e., its sensitivity to) different forcings were calculated. The results show that the system feedback factor is about 2.5 W m−2 K−1 in the pre-industrial period, but 1.9 W m−2 K−1 in the industrial era. Thus, the model's sensitivity to natural forcing is weak, which explains why it reproduces a weak Medieval Warm Period. The relatively reasonable simulation of the Little Ice Age is caused by both the specified radiative forcing and an unforced linear cold drift. The model's sensitivity in the industrial era is higher than that in the pre-industrial period. A negative net cloud radiative feedback operates throughout the millennial simulation and reduces the model's sensitivity to the specified forcing. The negative net cloud radiative feedback under natural forcing in the period prior to 1850 is due to the underestimation (overestimation) of the response of cloudiness (in-cloud water path). In the industrial era, the strong tropospheric temperature response enlarges the effective radius of ice clouds and reduces the fractional ice content within clouds, resulting in a weak negative net cloud feedback in the industrial period. The water vapor feedback in the industrial era is also stronger than that in the pre-industrial period. Both favor higher model sensitivity and thus a reasonable simulation of twentieth-century global warming.

16.
The potential effects of a dynamic ocean on climate change are assessed by comparing a simulation from 1880 into the future by the CSIRO (Mark 2) coupled atmosphere–ocean general circulation model with equilibrium results from a mixed-layer ocean (MLO) version of the model. At 2082, when the effective CO2 is tripled, the global warming in the coupled model is barely half the 3×CO2 MLO result, largely because of oceanic heat uptake, as diagnosed using an effective heat capacity. The effective ocean depth continues to increase during a further 1700 years with stabilized tripled CO2, by which time the mean ocean warming reaches the upper-ocean value. Some reduction of the coupled model warming is due to the effective sensitivity (for 2×CO2), determined from the radiative response to the forcing, being persistently 0.2 K lower than the MLO model value. A regional energy and feedback analysis shows that this is largely due to an overall equatorward oceanic heat transport anomaly, which reduces the high-latitude warming in the coupled model. The global warming at 3800 is around 95% of the anticipated equilibrium value, which is matched by the result of a simple energy balance model for the approach to equilibrium. The geographical effect of the oceanic heat transport is confirmed using a mixed-layer model with perturbed oceanic heat convergence. The eastern equatorial Pacific warming is enhanced by over 1 K, and rainfall is perturbed in an ENSO-like pattern.

17.
A problem for climate change studies with coupled ocean-atmosphere models has been how to incorporate observed initial conditions into the ocean, which holds most of the 'memory' of anthropogenic forcing effects. The first difficulty is the lack of comprehensive three-dimensional observations of the current ocean temperature (T) and salinity (S) fields to which to initialize. The second problem is that directly imposing observed T and S fields on the model results in rapid drift back to the model climatology, with the corresponding loss of the observed information. Anthropogenic forcing scenarios therefore typically initialize future runs by starting with pre-industrial conditions. However, if the future climate depends on the details of the present climate, then initializing the model to observations may provide more accurate forecasts. Also, this ~130-year spin-up imposes substantial overhead if only a few decades of predictions are desired. A new technique to address these problems is presented. In lieu of observed T and S, assimilated ocean data were used. To reduce model drift, an anomaly coupling scheme was devised. This consists of letting the model's climatological (pre-industrial) oceanic and atmospheric heat contents and transports balance each other, while adding on the (much smaller) changes in heat content since the pre-industrial era as anomalies. The result is a model drift of no more than 0.2 K over 50 years, significantly smaller than the forced response of 1.0 K. An ensemble of runs with these assimilated initial conditions is then compared to a set spun up from pre-industrial conditions. No systematic differences were found; i.e., the model simulation of the ocean temperature structure in the late 1990s is statistically indistinguishable from the assimilated observations. However, a model with a worse representation of the late-20th-century climate might show significant differences if initialized in this way.
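A schematic of the anomaly-coupling initialization: the model keeps its own balanced climatology, and only the assimilated change since pre-industrial times is imposed as an anomaly. The fields below are random placeholders, not assimilation output.

```python
import numpy as np

rng = np.random.default_rng(3)
shape = (19, 90, 180)                    # (depth, lat, lon) placeholder grid

T_model_clim = rng.normal(10, 5, shape)  # model's pre-industrial climatology
T_obs_clim = T_model_clim + rng.normal(0, 2, shape)  # biased "observed" clim
T_obs_now = T_obs_clim + 0.5             # assimilated present-day state

# Anomaly coupling: impose only the observed *change* since pre-industrial,
# so the model is not dragged away from its own balanced climatology.
anomaly = T_obs_now - T_obs_clim
T_init = T_model_clim + anomaly

imposed = np.abs(T_init - T_model_clim).mean()
print(f"mean imposed anomaly: {imposed:.2f} K (vs ~2 K climatology bias)")
```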

18.
The response of the ocean's meridional overturning circulation (MOC) to increased greenhouse gas forcing is examined using a coupled model of intermediate complexity that includes a dynamic 3-D ocean subcomponent. The parameters varied are the increase in CO2 forcing (with stabilization after a specified time interval) and the model's climate sensitivity. In this model, the cessation of deep sinking in the north "Atlantic" (hereinafter, a "collapse"), as indicated by changes in the MOC, behaves like a simple bifurcation. The final surface air temperature (SAT) change, which is closely predicted by the product of the radiative forcing and the climate sensitivity, determines whether a collapse occurs. The initial transient response in SAT is largely a function of the forcing increase, with higher-sensitivity runs exhibiting delayed behavior; accordingly, high-CO2, low-sensitivity scenarios can be assessed as a recovering or collapsing circulation shortly after stabilization, whereas low-CO2, high-sensitivity scenarios require several hundred additional years to make such a determination. We also systematically examine how the rate of forcing, for a given CO2 stabilization level, affects the ocean response. In contrast with previous studies based on simpler ocean models, we find that, except for a narrow range of marginally stable to marginally unstable scenarios, the forcing rate has little impact on whether the run collapses or recovers. In this narrow range, however, increasing the forcing on the time scale of slow ocean advective processes results in weaker declines in overturning strength and can permit a run to recover that would otherwise collapse.

19.
Climate policies must consider radiative forcing from Kyoto greenhouse gases as well as from other forcing constituents, such as aerosols and tropospheric ozone, that result from air pollutants. Non-Kyoto forcing constituents contribute negative as well as positive forcing, and overall increases in total forcing result in increases in global average temperature. Non-Kyoto forcing modeling is a relatively new component of climate management scenarios. This paper describes and assesses current non-Kyoto radiative forcing modeling within five integrated assessment models. The study finds negative forcing from aerosols masking (offsetting) approximately 25% of positive forcing in the near term in reference non-climate-policy projections. However, masking is projected to decline rapidly, to 5–10% by 2100, with increasing Kyoto emissions and assumed reductions in air pollution, with the latter declining to as much as 50% and 80% below today's levels by 2050 and 2100, respectively. Together these imply a declining importance of non-Kyoto forcing over time. There are, however, significant uncertainties and large differences across models in projected non-Kyoto emissions and forcing. A look into the modeling reveals differences in base conditions, relationships between Kyoto and non-Kyoto emissions, pollution control assumptions, and other fundamental modeling choices. In addition, under climate policy scenarios, we find air pollution and the resulting non-Kyoto forcing reduced to levels below those produced by air pollution policies alone (e.g., China's sulfur emissions fall an additional 45–85% by 2050). None of the models actively manages non-Kyoto forcing for its climate implications. Nonetheless, non-Kyoto forcing may be influencing mitigation results, including allowable carbon dioxide emissions, and further evaluation is merited.
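The "masking" percentages quoted above follow from a simple ratio of negative to positive forcing; a minimal worked example with invented forcing values:

```python
# Fraction of positive (Kyoto GHG) forcing offset by negative non-Kyoto
# forcing, with invented near-term values in W m-2.
f_kyoto = 2.6          # positive forcing from Kyoto gases
f_non_kyoto = -0.65    # net negative forcing from aerosols etc.

masking = -f_non_kyoto / f_kyoto
print(f"masking: {masking:.0%} of positive forcing is offset")
```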

20.
The Global Warming Potential (GWP) index is currently used to create CO2-equivalent emission totals for multi-gas greenhouse targets. While many alternatives have been proposed, it is not possible to uniquely define a metric that captures the different impacts of emissions of substances with widely disparate atmospheric lifetimes, which leads to a wide range of possible index values. We examine the sensitivity of emissions and climate outcomes to the value of the index used to aggregate methane emissions using a technologically detailed integrated assessment model. The methane index is varied between 4 and 70, with a central value of 21, which is the 100-year GWP value currently used in policy contexts. We find that the sensitivity to index value is, at most, 10–18 % in terms of methane emissions but only 2–3 % in terms of the maximum total radiative forcing change, with larger regional emissions differences in some cases. The choice of index also affects estimates of the cost of meeting a given end of century forcing target, with total two-gas mitigation cost increasing by 7–9 % if the index is increased, and increasing in most scenarios from 4 to 23 % if the index is lowered, with a slight (1 %) decrease in total cost in one case. We find that much of the methane abatement occurs as the induced effect of CO2 abatement rather than explicit abatement, which is one reason why climate outcomes are relatively insensitive to the index value. We also find that the near-term climate benefit of increasing the methane index is small.
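The index sensitivity examined here rests on simple CO2-equivalence arithmetic; a minimal sketch follows, with invented emissions values and only the methane index varied over the paper's 4–70 range.

```python
# CO2-equivalent aggregation under different methane index values
# (illustrative annual emissions: GtCO2 and MtCH4).
co2 = 35.0     # annual CO2 emissions, Gt
ch4 = 350.0    # annual CH4 emissions, Mt

for gwp_ch4 in (4, 21, 70):   # index range examined in the paper
    total = co2 + ch4 * gwp_ch4 / 1000.0   # GtCO2-eq
    print(f"CH4 index {gwp_ch4:>2}: {total:.1f} GtCO2-eq")
```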
