We designed a new seismic source model for Italy to be used as an input for country-wide probabilistic seismic hazard assessment (PSHA) in the frame of the compilation of a new national reference map.
We started off by reviewing existing models available for Italy and for other European countries, then discussed the main open issues in the current practice of seismogenic zoning.
The new model, termed ZS9, is largely based on data collected in the past 10 years, including historical earthquakes and instrumental seismicity, active faults and their seismogenic potential, and seismotectonic evidence from recent earthquakes. This information allowed us to propose new interpretations for poorly understood areas where the new data are in conflict with assumptions made in designing the previous and widely used model ZS4.
ZS9 comprises 36 zones where earthquakes with Mw ≥ 5 are expected. It also assumes that earthquakes with Mw up to 5 may occur anywhere outside the seismogenic zones, although with rather low probability. Special care was taken to ensure that each zone samples a large enough number of earthquakes for reliable earthquake production rates to be computed.
Although it was drawn following criteria that are standard practice in PSHA, ZS9 is also innovative in that every zone is further characterised by its mean seismogenic depth (the depth of the crustal volume that will presumably release future earthquakes) and its predominant focal mechanism (the most likely rupture mechanism of future earthquakes). These properties were determined from instrumental data; only in a limited number of cases did we resort to geologic constraints and expert judgment to cope with a lack of data or conflicting indications. These attributes allow ZS9 to be used with more accurate, regionalized, depth-dependent attenuation relations, and are ultimately expected to increase significantly the reliability of seismic hazard estimates.
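The earthquake production rates mentioned above are typically derived from a Gutenberg–Richter frequency–magnitude relation fitted to each zone's catalogue. The abstract does not specify the fitting method, so the following is only an illustrative sketch using the standard Aki maximum-likelihood b-value estimator on a hypothetical zone catalogue:

```python
import math

def aki_b_value(mags, m_c, dm=0.1):
    """Maximum-likelihood b-value (Aki, 1965) for a catalogue complete
    above magnitude m_c, with the usual correction for dm-wide bins."""
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - (m_c - dm / 2.0))

def annual_rate_above(n_events, years, m, m_c, b):
    """Annual rate of events with magnitude >= m, extrapolated from the
    observed rate above completeness magnitude m_c via the
    Gutenberg-Richter relation N(>=m) = N(>=m_c) * 10**(-b*(m - m_c))."""
    return (n_events / years) * 10.0 ** (-b * (m - m_c))

# Hypothetical zone catalogue: Mw >= 4.5 events observed over 100 years
mags = [4.5, 4.6, 4.5, 4.8, 5.0, 4.7, 4.5, 5.3, 4.6, 4.9]
b = aki_b_value(mags, m_c=4.5)
rate_m5 = annual_rate_above(len(mags), years=100.0, m=5.0, m_c=4.5, b=b)
```

A zone with too few events yields an unstable mean magnitude and hence an unreliable b-value, which is why the zoning deliberately avoids small-sample zones.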
The imbalance between incoming and outgoing salt causes salinization of soils and sub-soils, which increases the salinity of stream flows and agricultural land. This salinization is a serious environmental hazard, particularly in semi-arid and arid lands. To estimate the magnitude of the hazard posed by salinity, it is important to understand and identify the processes that control salt movement from the soil surface through the root zone to the groundwater and stream flows. For the present study, the Malaprabha sub-basin (up to the dam site) was selected; it has two distinct climatic zones, sub-humid (upstream of Khanapur) and semi-arid (downstream of Khanapur). Upstream, both surface water and groundwater are used for irrigation, whereas downstream mostly groundwater is used. Both soils and groundwaters are more saline in the downstream parts of the study area. In this study we characterized the soil salinity and groundwater quality in both areas. An attempt is also made to model the distribution of potassium concentration in the soil profile in response to varying irrigation conditions using the SWIM (Soil Water Infiltration and Movement) model. Fair agreement was obtained between predicted and measured results, indicating the applicability of the model.
MODFLOW is a groundwater modeling program whose source code can be modified and extended for practical applications. Because of its modular structure and fixed data format, MODFLOW can be integrated with Geographic Information Systems (GIS) technology for water resource management. The North China Plain (NCP), the political, economic and cultural center of China, is facing water resource shortages and water pollution. Groundwater is the main water resource for industrial, agricultural and domestic use, so it is necessary to evaluate the groundwater resources of the NCP as an entire aquifer system. With the development of computer and internet information technology, it is also necessary to integrate the groundwater model with GIS technology. Because the geological and hydrogeological data for the NCP were mainly in MAPGIS format, the powerful spatial-data processing and analysis functions of GIS, together with computer languages such as Visual C and Visual Basic, were used to define the relationship between the original data and the model data. After analyzing the geological and hydrogeological conditions of the NCP, a numerical groundwater-flow model was constructed with MODFLOW. On the basis of GIS, a dynamic evaluation system for groundwater resources operating over the internet was completed. During construction of the groundwater model, a water budget was analyzed, which showed a negative budget for the NCP. The simulation period was from 1 January 2002 to 31 December 2003. During this period, the total recharge of the groundwater system was 49,374 × 10⁶ m³ and the total discharge was 56,530 × 10⁶ m³; the budget deficit was −7,156 × 10⁶ m³. In this integrated system, the original data, including graphs and attribute data, could be stored in the database. When the process of evaluating and predicting groundwater flow was started, these data were transformed into files that the core program of MODFLOW could read. The calculated water levels and drawdowns could be displayed and reviewed online.
The Cu–Co–Ni Texeo mine has been the most important source of Cu in NW Spain since Roman times; today, approximately 40,000 m³ of wastes from mining and metallurgical operations, containing average concentrations of 9,263 mg kg⁻¹ Cu, 1,100 mg kg⁻¹ As, 549 mg kg⁻¹ Co, and 840 mg kg⁻¹ Ni, remain on-site. Since the cessation of activity, the abandoned works, facilities and waste piles have posed a threat to the environment through the release of toxic elements. To assess the potential environmental pollution caused by the mining operations, a sequential sampling strategy was undertaken covering wastes, soil, surface water and groundwater, and sediments. First, field screening tools were used to identify hotspots before formal sampling strategies were defined; in the areas where anomalies were detected in the first sampling stage, a second, more detailed sampling campaign was undertaken. Metal concentrations in the soils are well above the local background, reaching up to 9,921 mg kg⁻¹ Cu, 1,373 mg kg⁻¹ As, 685 mg kg⁻¹ Co, and 1,040 mg kg⁻¹ Ni, among others. Copper concentrations downstream of the mine works reach values up to 1,869 μg l⁻¹ in surface water and 240 mg kg⁻¹ in stream sediments. A computer-based risk assessment for the site indicates a carcinogenic risk associated with the presence of As in surface waters and soils, and a health risk for long exposures; the concentrations of these elements are high enough, relative to trigger levels, to warrant further investigation.
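The abstract does not detail the risk-assessment model used. As a generic illustration only, screening-level carcinogenic risk from soil ingestion is often estimated in the EPA-style framework as a chronic daily intake multiplied by a contaminant-specific slope factor. The soil As concentration below is taken from the abstract, but every exposure parameter is a hypothetical placeholder, not site data:

```python
def chronic_daily_intake(conc_mg_kg, ingest_kg_day, freq_days_yr,
                         exp_years, body_kg, avg_days):
    """Chronic daily intake (mg per kg body weight per day) for
    incidental soil ingestion: CDI = C * IR * EF * ED / (BW * AT)."""
    return (conc_mg_kg * ingest_kg_day * freq_days_yr * exp_years
            / (body_kg * avg_days))

# Hypothetical screening inputs (NOT the parameters used at Texeo)
cdi = chronic_daily_intake(
    conc_mg_kg=1373.0,       # maximum soil As, from the abstract
    ingest_kg_day=100e-6,    # 100 mg soil ingested per day
    freq_days_yr=350.0,      # exposure frequency
    exp_years=30.0,          # exposure duration
    body_kg=70.0,            # adult body weight
    avg_days=70.0 * 365.0)   # averaging time over a 70-year lifetime
oral_slope_factor_as = 1.5   # (mg/kg/day)^-1, widely cited value for As
cancer_risk = cdi * oral_slope_factor_as
```

With these placeholder inputs the computed risk exceeds the common 10⁻⁶ to 10⁻⁴ screening range, consistent with the abstract's conclusion that the As levels warrant further investigation.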
Many different runout prediction methods can be applied to estimate the mobility of future debris flows during hazard assessment. The present article reviews empirical, analytical, simple flow-routing and numerical techniques. All of these techniques were applied to back-calculate a debris flow that occurred in 1982 in the La Guingueta catchment, in the Eastern Pyrenees. A sensitivity analysis of the input parameters was carried out, with special attention paid to the influence of the rheological parameters. We used the Voellmy fluid rheology for our analytical and numerical modelling, since this flow-resistance law coincided best with field observations. The simulation results indicated that the "basal" friction coefficient mainly affects the runout distance, while the "turbulence" term mainly influences flow velocity. A comparison of the velocities computed on the fan showed that the analytical model calculated values similar to the numerical ones. The values of our rheological parameters calibrated at La Guingueta agree with data back-calculated for other debris flows. Empirical relationships represent another method of estimating total runout distance. The results confirmed that they carry considerable uncertainty and are strictly valid only for the conditions that formed the basis of their development. With regard to the simple flow-routing algorithm, this method could satisfactorily simulate the total area affected by the 1982 debris flow, but it was not able to directly calculate total runout distance and velocity. Finally, a suggestion is presented on how the different runout prediction methods can be applied to generate debris-flow hazard maps. Taking into account the definitions of hazard and intensity, the best choice would be to divide the resulting hazard maps into two types: "final hazard maps" and "preliminary hazard maps".
Only the use of numerical models provided final hazard maps, because these models could incorporate different event magnitudes and supplied output values for intensity calculation. In contrast, empirical relationships and flow-routing algorithms, or a combination of both, could be applied to create preliminary hazard maps. The present study focused only on runout prediction methods. Other tasks necessary to complete the hazard assessment can be looked up in the "Guidelines for landslide susceptibility, hazard and risk zoning" included in this Special Issue.
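The division of roles reported above (basal friction governing runout, turbulence governing velocity) follows directly from the form of the Voellmy resistance law, in which the basal shear stress combines a velocity-independent Coulomb term with a velocity-squared turbulent term. A minimal sketch with illustrative parameter values, not the calibrated La Guingueta values:

```python
import math

def voellmy_resistance(rho, g, h, slope_deg, v, mu, xi):
    """Voellmy basal flow resistance (Pa):
    tau = mu * rho * g * h * cos(theta)   (Coulomb friction term)
        + rho * g * v**2 / xi             (turbulent term)."""
    theta = math.radians(slope_deg)
    coulomb = mu * rho * g * h * math.cos(theta)
    turbulent = rho * g * v ** 2 / xi
    return coulomb + turbulent

# Illustrative values: 2 m flow depth, 15 deg slope, bulk density 2000 kg/m^3
tau_slow = voellmy_resistance(2000.0, 9.81, 2.0, 15.0, v=1.0, mu=0.1, xi=500.0)
tau_fast = voellmy_resistance(2000.0, 9.81, 2.0, 15.0, v=10.0, mu=0.1, xi=500.0)
```

Because the Coulomb term does not depend on velocity, it persists as the flow slows and thus controls where the flow stops (runout), whereas the v² term dominates the resistance of a fast-moving flow and thus caps its velocity.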
This paper reviews the methods of seismic hazard analysis currently in use, analyzing the strengths and weaknesses of the different approaches. The review is performed from the perspective of a user of the results of seismic hazard analysis for different applications, such as the design of critical and general (non-critical) civil infrastructures and technical and financial risk analysis. A set of criteria is developed for, and applied to, an objective assessment of the capabilities of the different analysis methods. It is demonstrated that traditional probabilistic seismic hazard analysis (PSHA) methods have significant deficiencies that limit their practical applicability. These deficiencies have their roots in the use of inadequate probabilistic models and an insufficient understanding of modern concepts of risk analysis, as revealed in some recent large-scale studies. They result in an inability to treat dependencies between physical parameters correctly and, ultimately, in an incorrect treatment of uncertainties. As a consequence, the results of PSHA studies have been found to be unrealistic in comparison with empirical information from the real world. Attempts to compensate for these problems through the systematic use of expert elicitation have, so far, not improved the situation. It is also shown that scenario earthquakes developed by disaggregation from the results of a traditional PSHA may not be conservative with respect to energy conservation and should not be used for the design of critical infrastructures without validation. Because the assessment of technical as well as financial risks associated with potential earthquake damage requires a risk analysis, the current method remains based on a probabilistic approach with its unresolved deficiencies.
Traditional deterministic or scenario-based seismic hazard analysis methods provide a reliable and generally robust design basis for applications such as the design of critical infrastructures, especially when combined with systematic sensitivity analyses based on validated phenomenological models. Deterministic seismic hazard analysis incorporates uncertainties through safety factors, which are derived from experience as well as from expert judgment. Deterministic methods associated with high safety factors may lead to overly conservative results, especially if applied to generally short-lived civil structures. Scenarios used in deterministic seismic hazard analysis have a clear physical basis: they are related to seismic sources discovered by geological, geomorphological, geodetic and seismological investigations, or derived from historical records. Scenario-based methods can be expanded for risk analysis applications with an extended data analysis providing the frequency of seismic events. Such an extension provides a better-informed risk model that is suitable for risk-informed decision making.
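Both the probabilistic and the scenario-based framings ultimately feed design decisions through exceedance probabilities. As a generic illustration (not the specific method criticized or advocated in the review), a PSHA result is usually expressed as an annual rate λ of exceeding a ground-motion level, converted to a probability over a design life t under a Poisson assumption, P = 1 − exp(−λt):

```python
import math

def exceedance_probability(annual_rate, years):
    """Probability of at least one exceedance in `years`, assuming
    exceedances follow a Poisson process with the given annual rate."""
    return 1.0 - math.exp(-annual_rate * years)

# The common "10% in 50 years" design level corresponds to a
# 475-year return period, i.e. an annual exceedance rate of 1/475.
p = exceedance_probability(1.0 / 475.0, 50.0)
```

The Poisson assumption itself (memoryless, independent events) is one of the probabilistic modelling choices the review questions for dependent or clustered seismicity.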