This article simulates deep decarbonization pathways for a small open economy that lacks the usual avenues for large CO2 reductions: heavy industry and power generation. A computable general equilibrium model is used to assess the energy and economic impacts of the transition to only one ton of CO2 emissions per capita in 2050. This represents a 76% reduction with respect to 1990 levels, while the population is expected to be 46% larger and GDP to increase by 90%. The article discusses several options and scenarios that are compatible with this emissions target and compares them with a reference scenario that extrapolates already-decided climate and energy policy instruments. We show that the ambitious target is attainable at moderate welfare costs, even though it requires very high carbon prices, and that these costs are lower when either CO2 can be captured and sequestered or electricity consumption can be taxed sufficiently to stabilize it.
Policy relevance
In the context of COP 21, all countries must propose intended contributions that involve deep decarbonization of their economies over the coming decades. This article defines and analyses such pathways for Switzerland, taking into account existing energy demand and supply as well as already-defined climate policies. It outlines several scenarios that are compatible with a target of 1 ton of CO2 emissions per capita in 2050. This objective is very challenging, especially given the nuclear phase-out decided after the Fukushima disaster and the political decision to balance electricity trade. Nevertheless, it is possible to design several feasible pathways based on different options. The economic cost is significant but affordable for the Swiss economy. The insights are relevant not only for Switzerland but also for other industrialized countries when defining their INDCs.
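The figures quoted in the abstract (a 1 tCO2-per-capita target in 2050, a population 46% larger than in 1990, a 76% cut relative to 1990 totals) jointly imply the 1990 per-capita emissions level. A minimal sketch of that arithmetic; the 1990 per-capita value is derived here, not taken from the article:

```python
# Consistency check of the abstract's figures. Identity used:
#   target_2050 * pop_2050 = (1 - cut) * E_1990
# so the implied 1990 per-capita level is target_2050 * (pop_2050/pop_1990) / (1 - cut).

def implied_1990_per_capita(target_2050=1.0, pop_growth=0.46, cut=0.76):
    """Per-capita CO2 emissions in 1990 implied by the 2050 target (tons)."""
    pop_ratio = 1.0 + pop_growth              # pop_2050 / pop_1990
    total_2050_per_1990_capita = target_2050 * pop_ratio
    return total_2050_per_1990_capita / (1.0 - cut)

print(round(implied_1990_per_capita(), 2))    # ~6.08 t per capita in 1990
```

The derived value of roughly 6 tons per capita is consistent with the stated 76% reduction being measured against total, not per-capita, 1990 emissions.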
This study focuses on the spatial distribution of mean annual and monthly precipitation on a small island (1128 km2), Martinique, located in the Lesser Antilles. Only 35 meteorological stations are available on the territory, which has a complex topography. Using a digital elevation model (DEM), 17 covariates likely to explain precipitation were built. Several interpolation methods, including three regression-kriging variants and kriging with an external drift, were tested using a cross-validation procedure. For the regression methods, predictors were chosen by established techniques, whereas a new approach is proposed to select external drifts in kriging, based on stepwise model selection by the Akaike Information Criterion (AIC). The prediction accuracy was assessed at validation sites with three different skill scores. Results show that methods with no predictors, such as inverse distance weighting or universal kriging, are inappropriate in such a territory. Kriging with an external drift appears to outperform the regression methods on every criterion, and selecting predictors with the proposed approach improves the prediction of mean annual precipitation compared with kriging using elevation as the only drift. Finally, the predictive performance was also studied by varying the size of the training set, leading to less conclusive results for kriging with an external drift. Nevertheless, the proposed method seems to be a good way to improve the mapping of climatic variables on a small island.
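The drift-selection step described above can be illustrated with a generic forward stepwise search driven by AIC. This is a minimal sketch with plain OLS standing in for the regression stage; the covariate names and synthetic data are illustrative, not the paper's:

```python
import numpy as np

def aic_ols(X, y):
    """AIC of an OLS fit with Gaussian errors (k = n_coefs + 1 for sigma^2)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    n, k = X.shape
    return n * np.log(rss / n) + 2 * (k + 1)

def forward_stepwise_aic(covariates, y):
    """Greedily add the covariate that lowers AIC most; stop when none does."""
    n = len(y)
    X = np.ones((n, 1))                       # intercept-only start
    best = aic_ols(X, y)
    selected, pool = [], list(covariates)
    while pool:
        aic, cand = min((aic_ols(np.column_stack([X, covariates[c]]), y), c)
                        for c in pool)
        if aic >= best:
            break
        best, X = aic, np.column_stack([X, covariates[cand]])
        selected.append(cand)
        pool.remove(cand)
    return selected, best

# Synthetic demo: precipitation driven by elevation, plus irrelevant covariates.
rng = np.random.default_rng(42)
elev = rng.normal(500.0, 200.0, 200)
slope = rng.normal(10.0, 3.0, 200)
aspect = rng.normal(0.0, 1.0, 200)
precip = 1500.0 + 1.2 * elev + rng.normal(0.0, 50.0, 200)
selected, aic = forward_stepwise_aic(
    {"elevation": elev, "slope": slope, "aspect": aspect}, precip)
print(selected[0])  # elevation dominates the signal, so it is picked first
```

In the paper the selected covariates then enter kriging as external drifts; the AIC penalty keeps uninformative DEM derivatives out of the drift model.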
This paper discusses predicted evolution patterns of present-day changes in ice thickness, surface elevation, and bedrock elevation over the Greenland and Antarctic continents. These were obtained from calculations with dynamic 3-D ice sheet models coupled to a visco-elastic solid Earth model. The experiments were initialized over the last two glacial cycles and subsequently averaged over the last 200 years to obtain the current evolution. The calculations indicate that the Antarctic Ice Sheet is still adjusting to the last glacial-interglacial transition, yielding a decreasing ice volume and a rising bedrock elevation of the order of several centimetres per year. The Greenland Ice Sheet was found to be close to a stationary state, with a mean thickness change of only a few millimetres per year, but the calculations revealed large spatial differences. Predicted patterns over Greenland are characterized by slight thickening over the ice sheet interior and general thinning in the ablation area. In Antarctica, almost all of the predicted changes are concentrated in the West Antarctic Ice Sheet, which is still retreating at both the Weddell and Ross Sea margins. Over most of both ice sheets, the model indicates that the surface elevation trend is dominated by ice thickness changes rather than by bedrock elevation changes.
Atmospheric aerosols (sea salt, crustal dust, and biogenic aerosols) are the primary source of dissolved species in rainwater as well as one of the sources of dissolved species in river water. Chemical weathering studies require quantification of this atmospheric input. The crustal component of atmospheric input can have various origins, both distant and local. The proportions of the various inputs (marine, distant, or local) are determined in this study.

Strontium isotope ratios and Ca, Na, K, Mg, Al, Cl, SO4, NO3, and Sr concentrations were measured in rainwater samples collected in the Massif Central (France) over a period of one year. Each sample, collected automatically, represents a monthly series of rain events. The chemical composition of the rainwater samples varied considerably, and the 87Sr/86Sr ratios ranged between 0.709198 and 0.713143.

Using Na as an indicator of marine origin and Al as an indicator of crustal input in the rain samples, the proportions of marine and crustal elements were estimated from elemental ratios. A marine origin was determined for 4 to 100% of Cl, 0.6 to 20% of SO4, <1 to 10% of Ca, <1 to 40% of K, 4 to 100% of Mg, and 1 to 44% of Sr.

Strontium isotopes were used to characterize the crustal sources. The 87Sr/86Sr ratios of the crustal sources varied considerably, from 0.7092 to 0.71625, indicating the occurrence of multiple sources for the crustal component in the analysed rainwaters.
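The elemental-ratio apportionment described above can be sketched as follows: Na is treated as a purely marine tracer, so the sea-salt fraction of element X in a sample is f_marine(X) = Na_sample × (X/Na)_seawater / X_sample. The seawater mass ratios below are standard literature values; the sample concentrations in the demo are invented, not the paper's data:

```python
# Approximate seawater mass ratios of element X to Na (standard literature values).
SEAWATER_X_TO_NA = {"Cl": 1.80, "SO4": 0.25, "Mg": 0.12, "K": 0.037, "Ca": 0.038}

def marine_fraction(element, conc_sample, na_sample, ratios=SEAWATER_X_TO_NA):
    """Fraction of `element` attributable to sea salt (capped at 1.0)."""
    return min(na_sample * ratios[element] / conc_sample, 1.0)

# Hypothetical rain sample, concentrations in mg/L.
print(round(marine_fraction("Cl", 2.0, 1.0), 2))   # 0.9   -> mostly marine Cl
print(round(marine_fraction("Ca", 1.5, 1.0), 3))   # 0.025 -> mostly crustal Ca
```

The crustal fraction is then estimated analogously from Al, and the residual is attributed to other (e.g. anthropogenic or biogenic) sources.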
Three‐dimensional receiver ghost attenuation (deghosting) of dual‐sensor towed‐streamer data is straightforward, in principle. In its simplest form, it requires applying a three‐dimensional frequency–wavenumber filter to the vertical component of the particle motion data, to correct for the amplitude reduction on the vertical component of non‐normal‐incidence plane waves, before combining with the pressure data. More elaborate techniques apply three‐dimensional filters to both components before summation, for example for ghost‐wavelet dephasing and for mitigating noise of different strengths on the individual components in optimum deghosting. The problem with all these techniques is, of course, that it is usually impossible to transform the data into the crossline wavenumber domain because of aliasing. Hence, a two‐dimensional version of deghosting is usually applied to the data in the frequency–inline wavenumber domain. We investigate going down the "dimensionality ladder" one more step, to a one‐dimensional weighted summation of the records of the collocated sensors, to create an approximate deghosting procedure. We specifically consider amplitude‐balancing weights computed via a standard automatic gain control before summation, reminiscent of a diversity stack of the dual‐sensor recordings. This technique is independent of the actual streamer depth and insensitive to variations in the sea‐surface reflection coefficient. The automatic gain control weights serve two purposes: (i) to approximately correct for the geometric amplitude loss of the Z data and (ii) to mitigate noise strength variations on the two components. Here, Z denotes the vertical component of the velocity of particle motion scaled by the seismic impedance of the near‐sensor water volume. The weights are time‐varying and can also be made frequency‐band dependent, adapting better to frequency variations of the noise.
The investigated process is a very robust, almost fully hands‐off, approximate three‐dimensional deghosting step for dual‐sensor data, requiring no spatial filtering and no explicit estimates of noise power. We argue that this technique performs well in terms of ghost attenuation (albeit not exact ghost removal) and in balancing the signal‐to‐noise ratio in the output data. For instances where full three‐dimensional receiver deghosting is the final product, the proposed technique is appropriate for efficient quality control of the acquired data and for aiding the parameterisation of the subsequent deghosting processing.
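The one-dimensional weighted summation can be sketched as below. This is a minimal illustration, not the paper's exact scheme: the AGC gain is a simple inverse sliding-window RMS, and the two balanced traces are averaged with equal weight:

```python
import numpy as np

def agc_gain(trace, window=101, eps=1e-12):
    """Inverse of the sliding-window RMS amplitude (a basic automatic gain control)."""
    pad = window // 2
    padded = np.pad(trace.astype(float), pad, mode="edge")
    kernel = np.ones(window) / window
    rms = np.sqrt(np.convolve(padded ** 2, kernel, mode="valid"))  # same length as trace
    return 1.0 / (rms + eps)

def deghost_1d(p, z, window=101):
    """Amplitude-balance collocated P and Z records with AGC gains, then stack."""
    return 0.5 * (p * agc_gain(p, window) + z * agc_gain(z, window))

# Synthetic collocated records: Z carries the same signal at reduced amplitude,
# mimicking the geometric amplitude loss the AGC weights are meant to correct.
t = np.arange(1000) / 500.0
p = np.sin(2 * np.pi * 20 * t)
z = 0.4 * np.sin(2 * np.pi * 20 * t)
out = deghost_1d(p, z)
```

After gain application both components contribute with comparable strength, which is the amplitude-balancing behaviour (purpose (i) above); noise-dependent weighting (purpose (ii)) would additionally down-weight the noisier component.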
Daily rainfall is a complex signal exhibiting alternation of dry and wet states, seasonal fluctuations, and irregular behavior at multiple scales that cannot be preserved by stationary stochastic simulation models. In this paper, we investigate strategies for preserving these features by comparing two recent algorithms for stochastic rainfall simulation. The first is the modified Markov model, belonging to the family of Markov-chain-based techniques, which introduces non-stationarity in the chain parameters to preserve the long-term behavior of rainfall. The second is direct sampling, based on multiple-point statistics, which aims at simulating a complex statistical structure by reproducing the data patterns found in a training data set. The two techniques are compared by simulating first a synthetic daily rainfall time series showing a highly irregular alternation of two regimes and then a real rainfall data set. This comparison allows us to analyze the efficiency of the different elements characterizing the two techniques, such as the application of a variable time dependence, adaptive kernel smoothing, or the use of low-frequency rainfall covariates. The results suggest, under different data-availability scenarios, which of these elements are more appropriate to represent the rainfall amount probability distribution at different scales, the annual seasonality, the dry-wet temporal pattern, and the persistence of rainfall events.
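A stationary two-state (dry/wet) Markov chain is the baseline that the non-stationary "modified Markov model" extends. A minimal sketch; the transition probabilities and the exponential wet-day amounts are invented for illustration:

```python
import numpy as np

def simulate_daily_rain(n_days, p_wd=0.3, p_ww=0.6, mean_wet=8.0, seed=0):
    """Two-state Markov-chain rainfall generator.

    p_wd: P(wet today | dry yesterday); p_ww: P(wet today | wet yesterday).
    Wet-day amounts are drawn from an exponential distribution (mm).
    """
    rng = np.random.default_rng(seed)
    rain = np.zeros(n_days)
    wet = False
    for t in range(n_days):
        wet = rng.random() < (p_ww if wet else p_wd)
        if wet:
            rain[t] = rng.exponential(mean_wet)
    return rain

series = simulate_daily_rain(3650)  # ten synthetic years
# Empirical wet-day frequency, close to the stationary probability
# p_wd / (p_wd + 1 - p_ww) = 0.3 / 0.7 ≈ 0.43.
print(round((series > 0).mean(), 2))
```

Because the chain parameters here are constant, the generator reproduces dry-wet persistence but not seasonality or low-frequency variability, which is precisely the limitation the two compared algorithms address.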
Categorical parameter distributions consisting of geologic facies with distinct properties, for example, high-permeability channels embedded in a low-permeability matrix, are common at contaminated sites. At these sites, low-permeability facies store solute mass, acting as secondary sources to higher-permeability facies and sustaining concentrations for decades while increasing risk and cleanup costs. Parameter estimation is difficult in such systems because the discontinuities in the parameter space complicate the inverse problem. This paper presents a novel approach based on Traveling Pilot Points (TRIPS) and an iterative ensemble smoother (IES) to solve the categorical inverse problem. Groundwater flow and solute transport in a hypothetical aquifer with a categorical parameter distribution are simulated using MODFLOW 6. Heads and concentrations are recorded at multiple monitoring locations. IES is used to generate posterior ensembles assuming a TRIPS prior and an approximate multi-Gaussian prior. The ensembles are used to predict solute concentrations and mass into the future. The evaluation also includes an assessment of how the number of measurements and the choice of the geological prior determine the characteristics of the posterior ensemble and the resulting predictions. The results indicate that IES was able to efficiently sample the posterior distribution and showed that, even with an approximate geological prior, a high degree of parameterization and history matching could lead to parameter ensembles that are useful for making certain types of predictions (heads, concentrations). However, the approximate geological prior was insufficient for predicting mass. The analysis demonstrates how decision-makers can quantify uncertainty and make informed decisions with an ensemble-based approach.
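The core operation that IES iterates is the ensemble-smoother update X_a = X_f + C_xd (C_dd + R)^(-1) (d_obs + e - d_f). A minimal single-update sketch on a linear toy problem, not on MODFLOW 6 or a TRIPS prior:

```python
import numpy as np

def es_update(X, D, d_obs, obs_std, seed=0):
    """One ensemble-smoother update.

    X: (n_par, n_ens) parameter ensemble; D: (n_obs, n_ens) simulated data;
    d_obs: observed data; obs_std: observation-error standard deviation.
    """
    rng = np.random.default_rng(seed)
    n_ens = X.shape[1]
    Xa = X - X.mean(axis=1, keepdims=True)           # parameter anomalies
    Da = D - D.mean(axis=1, keepdims=True)           # data anomalies
    C_xd = Xa @ Da.T / (n_ens - 1)                   # cross-covariance
    C_dd = Da @ Da.T / (n_ens - 1)                   # data covariance
    R = (obs_std ** 2) * np.eye(len(d_obs))          # observation-error covariance
    K = C_xd @ np.linalg.inv(C_dd + R)               # Kalman-like gain
    perturbed = d_obs[:, None] + rng.normal(0.0, obs_std, D.shape)
    return X + K @ (perturbed - D)

# Toy demo: one parameter, linear forward model d = G m, truth m = 2.0.
rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, (1, 200))                   # prior ensemble
G = np.array([[1.0], [2.0]])
d_obs = np.array([2.0, 4.0])                         # noise-free data from truth
X_post = es_update(X, G @ X, d_obs, obs_std=0.1)
print(round(float(X_post.mean()), 1))                # ensemble mean pulled to ~2.0
```

IES repeats this update with a rescaled R (or a Gauss-Newton step) so that nonlinear models such as the flow-and-transport problem above are matched gradually rather than in one linear step.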
Atmospheric dust is an integral component of the Earth system with major implications for the climate, biosphere and public health. In this context, identifying and quantifying the provenance and the processes generating the various types of dust found in the atmosphere is paramount. Isotopic signatures of Pb, Nd, Sr, Zn, Cu and Fe are commonly used as sensitive geochemical tracers. However, their combined use is limited by the lack of (a) a dedicated chromatographic protocol to separate the six elements of interest for low‐mass samples and (b) specific reference materials for dust. Indeed, our work shows that USGS rock reference materials BHVO‐2, AGV‐2 and G‐2 are not applicable as substitute reference materials for dust. We characterised the isotopic signatures of these six elements in dust reference materials ATD and BCR‐723, representatives of natural and urban environments, respectively. To achieve this, we developed a specific procedure for dust, applicable in the 4–25 mg mass range, to separate the six elements using a multi‐column ion‐exchange chromatographic method and MC‐ICP‐MS measurements.
Stochastic Environmental Research and Risk Assessment - Multiple-point statistics (MPS) is a simulation technique for generating images that reproduce the spatial features present in a...