We present the first statistical analysis of 27 Ultraviolet Optical Telescope (UVOT) optical/ultraviolet light curves of gamma-ray burst (GRB) afterglows. Through analysis of the light curves in the observer's frame, we have found that a significant fraction of the light curves rise in the first 500 s after the GRB trigger; that all light curves decay after 500 s, typically as a power law with a relatively narrow distribution of decay indices; and that the brightest optical afterglows tend to decay the quickest. The rise could be produced either physically, by the start of the forward shock as the jet begins to plough into the external medium, or geometrically, where an off-axis observer sees a rising light curve as an increasing amount of emission enters the observer's line of sight, which occurs as the jet slows. We find that at 99.8 per cent confidence there is a correlation, in the observed frame, between the apparent magnitude of the light curves at 400 s and the rate of decay after 500 s. However, in the rest frame, a Spearman rank test shows only a weak correlation of low statistical significance between luminosity and decay rate. A correlation should be expected if the afterglows were produced by off-axis jets, suggesting instead that the jet is viewed from within the half-opening angle θ or within a core of uniform energy density θc. We also produced logarithmic luminosity distributions for three rest-frame epochs. We find no evidence for bimodality in any of the distributions. Finally, we compare our sample of UVOT light curves with the X-ray Telescope (XRT) canonical light-curve model. The range in decay indices seen in UVOT light curves at any epoch is most similar to the range in decay of the shallow decay segment of the XRT canonical model. However, in the XRT canonical model there is no indication of the rising behaviour observed in the UVOT light curves.
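The Spearman rank test used above can be illustrated with a minimal sketch. The light-curve values below are hypothetical, and this bare-bones implementation does not handle tied ranks:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: the Pearson correlation of the ranks.
    (Illustrative only; ties in the data are not handled.)"""
    rx = np.argsort(np.argsort(x)).astype(float)  # ranks of x
    ry = np.argsort(np.argsort(y)).astype(float)  # ranks of y
    rx -= rx.mean()
    ry -= ry.mean()
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

# Hypothetical afterglows: log-luminosity at a fixed rest-frame epoch
# versus temporal decay index alpha. Brighter bursts decaying faster
# would show up as a strong positive rank correlation.
log_lum = [46.1, 45.3, 46.8, 44.9, 45.7]
alpha   = [1.2,  0.9,  1.5,  0.7,  1.0]
rho = spearman_rho(log_lum, alpha)
```

In practice one would also compute a p-value to assess significance, since a small sample can produce a large rho by chance.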
Fluctuations of glaciers during the 20th century in Garibaldi Provincial Park, in the southern Coast Mountains of British Columbia, were reconstructed from historical documents, aerial photographs, and fieldwork. Over 505 km², or 26%, of the park was covered by glacier ice at the beginning of the 18th century. Ice cover decreased to 297 km² by 1987–1988 and to 245 km² (49% of the early 18th century value) by 2005. Glacier recession was greatest between the 1920s and 1950s, with typical frontal retreat rates of 30 m/a. Many glaciers advanced between the 1960s and 1970s, but all glaciers retreated over the last 20 years. Times of glacier recession coincide with warm and relatively dry periods, whereas advances occurred during relatively cold periods. Rapid recession between 1925 and 1946, and since 1977, coincided with the positive phase of the Pacific Decadal Oscillation (PDO), whereas glaciers advanced during its negative phase (1890–1924 and 1947–1976). The record of 20th century glacier fluctuations in Garibaldi Park is similar to that in southern Europe, South America, and New Zealand, suggesting a common, global climatic cause. We conclude that global temperature change in the 20th century explains much of the behaviour of glaciers in Garibaldi Park and elsewhere.
Although agriculture could contribute substantially to European emission reductions, its mitigation potential remains largely untapped. Market-based instruments could be pivotal in incentivizing cost-effective abatement. However, sector specificities in transaction costs, leakage risks and distributional impacts impede their implementation. The significance of such barriers critically hinges on the dimensions of policy design. This article synthesizes the work on emissions pricing in agriculture together with the literature on the design of market-based instruments. To structure the discussion, an options space is suggested to map policy options, focusing on three key dimensions of policy design. More specifically, it examines the role of policy coverage, instruments and transfers to farmers in overcoming the barriers. First, the results show that a significant proportion of agricultural emissions and mitigation potential could be covered by a policy targeting large farms and few emission sources, thereby reducing transaction costs. Second, whether an instrument is voluntary or mandatory influences distributional outcomes and leakage. Voluntary instruments can mitigate distributional concerns and leakage risks but can lead to subsidy lock-in and carbon price distortion. Third, the impact on transfers resulting from the interaction of the Common Agricultural Policy (CAP) with emissions pricing will play a key role in shaping political feasibility and has so far been underappreciated.
POLICY RELEVANCE
Following the 2015 Paris Agreement, European climate policy is at a crossroads. Achieving the 2030 and 2050 European targets cost-effectively requires all sectors to reduce their emissions. Yet the cornerstone of European climate policy, the European Union Emissions Trading System (EU ETS), covers only about half of European emissions. Major sectors have so far been largely exempted from carbon pricing, in particular transport and agriculture. While transport has been increasingly under the spotlight as a possible candidate for an EU ETS sectoral expansion, policy discussions on pricing agricultural emissions have been virtually absent. This article attempts to fill this gap by investigating options for market-based instruments to reduce agricultural emissions while taking barriers to implementation into account.
Climate policy uncertainty significantly hinders investments in low-carbon technologies, and the global community is behind schedule to curb carbon emissions. Strong actions will be necessary to limit the increase in global temperatures, and continued delays create risks of escalating climate change damages and future policy costs. These risks are system-wide, long-term and large-scale and thus hard to diversify across firms. Because of its unique scale, cost structure and near-term availability, Reducing Emissions from Deforestation and forest Degradation in developing countries (REDD+) has significant potential to help manage climate policy risks and facilitate the transition to lower greenhouse gas emissions. ‘Call’ options contracts in the form of the right but not the obligation to buy high-quality emissions reduction credits from jurisdictional REDD+ programmes at a predetermined price per ton of CO2 could help unlock this potential despite the current lack of carbon markets that accept REDD+ for compliance. This approach could provide a globally important cost-containment mechanism and insurance for firms against higher future carbon prices, while channelling finance to avoid deforestation until policy uncertainties decline and carbon markets scale up.
Key policy insights
Climate policy uncertainty discourages abatement investments, exposing firms to an escalating systemic risk of future rapid increases in emission control expenditures.
This situation poses the risk of an abatement ‘short squeeze’, paralleling the sharp price jumps in financial markets when investors rush to close out positions they have sold ‘short’, that is, assets they have bet against and promised to deliver later in anticipation of falling prices.
There is likely to be a willingness to pay for mechanisms that hedge the risks of abruptly rising carbon prices, in particular for ‘call’ options, the right but not the obligation to buy high-quality emissions reduction credits at a predetermined price, due to the significantly lower upfront capital expenditure compared to other hedging alternatives.
Establishing rules as soon as possible for compliance market acceptance of high-quality emissions reductions credits from REDD+ would facilitate REDD+ transactions, including via options-based contracts, which could help fill the gap of uncertain climate policies in the short and medium term.
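The ‘call’ options described in these insights rest on the standard payoff of a call: the holder exercises only when the market carbon price exceeds the predetermined strike. A minimal sketch, with all prices and volumes hypothetical:

```python
def call_payoff(carbon_price, strike, n_credits=1.0):
    """Payoff at exercise of a call option on emissions-reduction credits:
    the holder buys at the strike only when the market price per tCO2 is
    higher; otherwise the option expires worthless."""
    return max(carbon_price - strike, 0.0) * n_credits

# Hypothetical hedge: options on 1 million REDD+ credits at a $30/tCO2
# strike. If the carbon price rises to $50, the hedge is worth $20m;
# if it stays at $20, only the upfront premium is lost.
hedge_value = call_payoff(50.0, 30.0, 1e6)
```

This asymmetry is why the upfront capital outlay is low relative to buying the credits outright: the buyer pays only the option premium while retaining protection against abruptly rising carbon prices.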
Using the database provided by the Reviewed Event Bulletins (REBs) for the first 2.5 years of the Group of Scientific Experts Technical Test-3 (GSETT-3) experiment, we compiled mislocation vectors for both arrays and selected three-component stations of the primary network from the published slowness and azimuth information gained through f-k and polarization analysis. By imposing constraints such as a minimum signal-to-noise ratio (SNR) and a minimum number of defining phases, we aim to eliminate location bias, since the hypocentral parameters are taken from the REBs. Results from 14 arrays with apertures from about 1 km to more than 20 km, as well as from 18 three-component stations, are presented; they indicate that the mislocation vectors can in many cases improve location accuracy considerably. If these mislocation vectors are compiled to cover a sufficient portion of the slowness domain, these empirical corrections can easily be applied prior to location processing. In the context of the Comprehensive Nuclear Test-Ban Treaty (CTBT), these mislocation patterns could be essential for providing accurate locations of suspicious low-magnitude events, as these location parameters will be used to pinpoint the area in which to conduct an on-site inspection.
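The empirical corrections described can be pictured as vector differences in the horizontal slowness plane. A minimal sketch, with the function names and the slowness/azimuth conventions (azimuth clockwise from north) assumed here for illustration:

```python
import numpy as np

def slowness_vector(slowness, azimuth_deg):
    """Horizontal slowness vector (east, north) from a scalar slowness
    and an azimuth measured clockwise from north (convention assumed)."""
    az = np.radians(azimuth_deg)
    return np.array([slowness * np.sin(az), slowness * np.cos(az)])

def mislocation_vector(obs_slow, obs_az, ref_slow, ref_az):
    """Observed minus theoretical slowness vector; applied with opposite
    sign to correct f-k measurements before location processing."""
    return slowness_vector(obs_slow, obs_az) - slowness_vector(ref_slow, ref_az)

# Hypothetical example: an array measures 9.0 s/deg due north where
# theory predicts 8.0 s/deg, giving a 1.0 s/deg northward bias.
mv = mislocation_vector(9.0, 0.0, 8.0, 0.0)
```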
Minimal Detectable Biases (MDBs), or Minimal Detectable Outliers, for the Expectation Maximization (EM) algorithm based on the variance-inflation and the mean-shift model are determined for an example. A Monte Carlo method is applied with no outlier and with one, two and three randomly chosen outliers. The outliers introduced are recovered, and the corresponding MDBs are almost independent of the number of outliers. The results are compared to the MDB derived earlier by the author. This MDB approximately agrees with the MDB for one outlier of the EM algorithm. The MDBs for two and three outliers are considerably larger than the MDBs of the EM algorithm.
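The Monte Carlo idea can be sketched for the simplest case, a mean-shift outlier in repeated observations of a single mean. The sample size, test threshold, and trial count below are illustrative assumptions, not the models of the paper; an MDB would be the smallest shift detected with a required power (e.g. 80%):

```python
import numpy as np

rng = np.random.default_rng(0)

def detection_rate(shift, n_obs=50, sigma=1.0, threshold=3.29, trials=2000):
    """Monte Carlo: fraction of trials in which a mean-shift outlier of the
    given size exceeds a test threshold on the standardized residual.
    threshold=3.29 corresponds to a two-sided 0.1% significance level."""
    hits = 0
    for _ in range(trials):
        e = rng.normal(0.0, sigma, n_obs)
        e[0] += shift                 # inject a mean-shift outlier
        res = e - e.mean()            # residuals after estimating the mean
        if abs(res[0]) / res.std(ddof=1) > threshold:
            hits += 1
    return hits / trials
```

Scanning `shift` upward until `detection_rate` crosses the chosen power gives a Monte Carlo estimate of the MDB for this toy model.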
Quantities like tropospheric zenith delays or station coordinates are repeatedly measured at permanent VLBI or GPS stations, so that a time series is obtained for each quantity at each station. The covariances of these quantities can be estimated in a multivariate linear model; they are needed for computing the uncertainties of results derived from these quantities. Because the covariance matrix for many permanent stations becomes large, the need may arise to simplify it, on the condition that the uncertainties of derived results still agree. This is accomplished by assuming that the different time series of a quantity, such as the station height, for each permanent station can be combined into one time series. The covariance matrix then follows from the estimates of the auto- and cross-covariance functions of the combined time series. A further approximation is found if compactly supported covariance functions are fitted to an estimated autocovariance function, in order to obtain a covariance matrix that is representative of different kinds of measurements. The simplification of a covariance matrix estimated in a multivariate model is investigated here for the coordinates of grid points measured repeatedly by a laser scanner. The approximations are checked by determining the uncertainty of the sum of distances to the points of the grid. To obtain a realistic value for this uncertainty, the covariances of the measured coordinates have to be considered. Three different measurement setups are analyzed, and a covariance matrix is found that is representative of all three setups. Covariance matrices for laser scanner measurements can therefore be determined in advance, without estimating them for each application.
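The estimation step can be sketched for a single combined series. The biased (1/n) autocovariance estimator shown is a common choice (assumed here), and a stationary covariance matrix then follows by placing the estimates on the Toeplitz diagonals:

```python
import numpy as np

def autocovariance(x, max_lag):
    """Biased sample autocovariance:
    C(h) = (1/n) * sum_t (x_t - xbar) * (x_{t+h} - xbar)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    xc = x - x.mean()
    return np.array([xc[:n - h] @ xc[h:] / n for h in range(max_lag + 1)])

def toeplitz_covariance(acov):
    """Covariance matrix of a stationary series: entry (i, j) is C(|i-j|)."""
    m = len(acov)
    return np.array([[acov[abs(i - j)] for j in range(m)] for i in range(m)])
```

The 1/n normalization (rather than 1/(n-h)) keeps the resulting Toeplitz matrix positive semi-definite, which matters when it is used as a weight matrix in later adjustments.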