Similar Documents (20 results)
1.
Least-squares fitting of marine seismic refraction data
Summary. An iterative procedure is presented for fitting waveform data from a marine seismic refraction experiment. The wavefunction from the explosive source is known and the crustal structure is refined using the damped least squares procedure. The damping parameter serves the dual purpose of stabilizing an under-constrained inversion and improving the linearity by suppressing high frequencies. The synthetic seismograms and their model differentials are calculated using the WKBJ seismogram method. Both the synthetic seismograms and the linear algebra are sufficiently straightforward that the computations can be performed on an array processor. The inversion procedure is then sufficiently rapid that interactive computations are possible. The technique is illustrated using the FF2 refraction data from the Fanfare cruise of the Scripps Institution of Oceanography. These data had been interpreted previously by trial-and-error using the reflectivity method. Starting from two different, simple models, the inversion procedure obtains essentially one unique model. Its features are very similar to the previous model.
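As an illustration of the damped least-squares update described in this abstract, here is a minimal Python sketch; the notation (Jacobian G, residual vector, scalar damping) and the toy linear data are assumptions for illustration, not the authors' code or data.

```python
import numpy as np

def damped_least_squares_step(G, residual, model, damping):
    """One Gauss-Newton update with a damping term that stabilises an
    under-constrained inversion (illustrative sketch, assumed notation).

    G        : Jacobian of the synthetic data w.r.t. the model parameters
    residual : observed minus synthetic data
    damping  : scalar damping parameter
    """
    n = G.shape[1]
    lhs = G.T @ G + damping * np.eye(n)   # damped normal equations
    rhs = G.T @ residual
    return model + np.linalg.solve(lhs, rhs)

# Hypothetical toy example: fit a 3-parameter linear model to noisy data.
rng = np.random.default_rng(0)
G = rng.normal(size=(50, 3))
true_model = np.array([1.0, -0.5, 2.0])
data = G @ true_model + 0.01 * rng.normal(size=50)

model = np.zeros(3)
for _ in range(10):
    model = damped_least_squares_step(G, data - G @ model, model, damping=0.1)
```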

2.
The accuracy of rockfall trajectory simulations depends to a large extent on the calculation of the rebound of falling boulders on different parts of a slope where rockfalls could occur. The models commonly used for rebound calculation are based on restitution coefficients, which can only be calibrated subjectively in the field. To come up with a robust and objective procedure for rebound calculation, a stochastic impact model associated with an objective field data collection method was developed and tested in this study. The aims of this work were to assess the adequacy of this approach and to evaluate the minimum amount of field data required to obtain simulation results with a satisfactory level of predictability. To achieve these objectives, the rebound calculation procedure developed was integrated into a three-dimensional rockfall simulation model, and the simulated results were compared with those obtained from field rockfall experiments. For rocky slopes, the simulations satisfactorily predict the experimental results. This approach is advantageous because it combines precise modelling of the mechanisms involved in the rebound and of their related variability with an objective field data collection procedure which basically only requires collecting the mean size of soil rocks. The approach proposed in this study therefore constitutes an excellent basis for the objective probabilistic assessment of rockfall hazard.

3.
Additional Samples: Where They Should Be Located
Information for mine planning requires closer sample spacing than the grid used for exploration and resource assessment. The additional samples collected during quasi-mining are usually located in the same pattern as the original diamond drillhole net, only more closely spaced. This procedure is not the best, in a mathematical sense, for selecting a location: the impact of additional information in reducing uncertainty about the parameter being modeled is not the same everywhere within the deposit, and some locations are more effective than others in reducing local and global uncertainty. This study introduces a methodology to select additional sample locations based on stochastic simulation. The procedure takes into account data variability and spatial location. Multiple equally probable models representing a geological attribute are generated via geostatistical simulation; these models share essentially the same histogram and the same variogram obtained from the original data set. At each block of the model, a value is obtained from each of the n simulations, and their combination allows one to assess local variability. Variability is measured using a proposed uncertainty index, which is used to map zones of high variability. A value extracted from a given simulation is added to the original data set in a zone identified as erratic in these maps. The process of adding samples and simulating is repeated, and the benefit of the additional sample is evaluated in terms of local and global uncertainty reduction. The procedure proved to be robust and theoretically sound, mapping the zones where additional information is most beneficial. A case study in a coal mine using coal seam thickness illustrates the method.
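A minimal sketch of the kind of per-block uncertainty index that can be computed from multiple equally probable simulations, as described above; the coefficient-of-variation form and the toy data are assumptions, not the paper's exact definition.

```python
import numpy as np

def uncertainty_index(simulations):
    """Per-block variability across n equally probable simulations.

    simulations : array of shape (n_simulations, n_blocks), e.g. simulated
                  coal seam thickness at each block of the grid.
    Returns a coefficient-of-variation-like index; high values flag
    erratic zones where an additional sample is most beneficial.
    """
    mean = simulations.mean(axis=0)
    std = simulations.std(axis=0)
    return std / np.where(mean != 0, np.abs(mean), 1.0)

# Hypothetical usage: pick the block with the highest index as the
# candidate location for the next additional sample.
sims = np.random.default_rng(1).lognormal(mean=0.5, sigma=0.3, size=(100, 500))
index = uncertainty_index(sims)
next_sample_block = int(np.argmax(index))
```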

4.
This paper describes an inductive modelling procedure integrated with a geographical information system for analysis of pattern within spatial data. The aim of the modelling procedure is to predict the distribution within one data set by combining a number of other data sets. Data set combination is carried out using Bayes' theorem. Inputs to the theorem, in the form of conditional probabilities, are derived from an inductive learning process in which attributes of the data set to be modelled are compared with attributes of a variety of predictor data sets. This process is carried out on random subsets of the data to generate error bounds on inputs for analysis of error propagation associated with the use of Bayes' theorem to combine data sets in the GIS. The statistical significance of model inputs is calculated as part of the inductive learning process. Use of the modelling procedure is illustrated through the analysis of the winter habitat relationships of red deer in Grampian Region, north-east Scotland. The distribution of red deer in Deer Management Group areas in Gordon and in Kincardine and Deeside Districts is used to develop a model which predicts the distribution throughout Grampian Region; this is tested against red deer distribution in Moray District. Habitat data sets used for constructing the model are accumulated frost and altitude, obtained from maps, and land cover, derived from satellite imagery. Errors resulting from the use of Bayes' theorem to combine data sets within the GIS and introduced in generalizing output from 50 m pixel to 1 km grid squares resolution are analysed and presented in a series of maps. This analysis of error trains is an integral part of the implemented analytical procedure and provides support to the interpretation of the results of modelling. Potential applications of the modelling procedure are discussed.
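A small sketch of combining predictor layers with Bayes' theorem for a single cell, assuming conditional independence of the predictors; the function name and probability values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def bayes_combine(prior, likelihoods_present, likelihoods_absent):
    """Combine several predictor layers with Bayes' theorem under a
    conditional-independence assumption (a naive-Bayes style sketch).

    prior               : prior probability of presence (e.g. red deer observed)
    likelihoods_present : P(attribute value | present) for each predictor layer
    likelihoods_absent  : P(attribute value | absent) for each predictor layer
    """
    p = prior * np.prod(likelihoods_present)
    q = (1.0 - prior) * np.prod(likelihoods_absent)
    return p / (p + q)

# Hypothetical cell with three predictor layers
# (land cover, altitude class, accumulated frost class).
posterior = bayes_combine(0.2, [0.6, 0.7, 0.5], [0.3, 0.4, 0.5])
```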

5.
A NOAA AVHRR data set covering Senegal and parts of the surrounding countries during the period from 1987 to present is under construction and improvement in an ongoing collaboration between Centre de Suivi Écologique (CSE), Dakar, Senegal, and the Institute of Geography, University of Copenhagen (IGUC). This paper details the entire processing chain from raw images to a properly calibrated, geometrically rectified and cloud-masked time-series tracking a number of well-defined variables. Two vital aspects of this time series, the cloud masking procedure and the geometrical rectification, are evaluated in detail. A two-step, multi-criteria cloud masking procedure requiring manual input does not consistently improve the quality of the data set compared to the simpler, automated procedure. With respect to geometrical rectification, an accuracy on the order of 1 km is obtained. Finally, suggestions for further improving the data set are put forward.

6.
Concentration estimates of components present in a sample mixture can be obtained using matrix mathematics. In the past, the condition number of the calibration matrix has been used to give an amplification factor by which uncertainties in data can work through to errors in the concentration estimates. This paper explores an additional interpretation of condition numbers with regard to significant figures and rounding errors. A procedure is suggested which will always give the most accurate concentration estimates provided the calibration matrix is not too ill-conditioned. Condition numbers have also been used by analytical chemists to discuss the error bounds for concentration estimates. Unfortunately, only one representative error bound can be approximated for all the components. This paper will show how to compute bounds for individual concentration estimates obtained as solutions to a system of m equations and n unknowns. The procedure is appropriate when calibration data and sample responses are inaccurate.
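A brief Python illustration of how the condition number bounds the amplification of measurement uncertainty into concentration-estimate error for an over-determined calibration system; the matrix, responses and the 1% uncertainty are made-up values, and the single bound shown is the classical worst-case bound rather than the per-component bounds derived in the paper.

```python
import numpy as np

# Hypothetical calibration matrix A (responses of pure components at three
# wavelengths) and measured mixture response b; values are illustrative only.
A = np.array([[0.90, 0.10],
              [0.20, 0.80],
              [0.05, 0.30]])
b = np.array([0.55, 0.52, 0.17])

# Least-squares concentration estimate for the over-determined system.
c, *_ = np.linalg.lstsq(A, b, rcond=None)

# Condition number: worst-case factor by which relative errors in the
# measured responses are amplified into relative errors in the estimates.
kappa = np.linalg.cond(A)
rel_error_b = 0.01                       # assumed 1% measurement uncertainty
worst_case_rel_error_c = kappa * rel_error_b
```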

7.
A novel procedure to analyse the uncertainty associated with the output of GIS-based models is presented. The procedure can handle models of any degree of complexity that accept any kind of input data. Two important aspects of spatial modelling are addressed: the propagation of uncertainty from model inputs and model parameters to the model output (uncertainty analysis); and the assessment of the relative importance of the sources of uncertainty in the output uncertainty (sensitivity analysis). Two main applications are proposed. The procedure allows implementation of a GIS-based model whose output can reliably support the decision process with an optimized allocation of resources for spatial data acquisition. This is possible with a low-cost strategy based on numerical simulations on a small prototype of the GIS-based model. Furthermore, the procedure provides an effective model building tool to choose, from a group of alternative models, the best one in terms of cost–benefit analysis. A comprehensive case study is described. It concerns the implementation of a new GIS-based hydrologic model whose goal is to provide near real-time flood forecasting.
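A compact sketch of a Monte Carlo uncertainty and sensitivity analysis of the kind described above; the correlation-based sensitivity measure and the toy rainfall–runoff model are assumptions for illustration, not the paper's procedure.

```python
import numpy as np

def monte_carlo_uncertainty(model, input_samplers, n_runs=1000, seed=0):
    """Propagate input/parameter uncertainty through a model and rank the
    inputs by a simple correlation-based sensitivity measure (a sketch,
    not the paper's exact procedure).

    model          : callable taking a dict of sampled input values
    input_samplers : dict name -> callable(rng) returning one sample
    """
    rng = np.random.default_rng(seed)
    draws = {k: np.array([s(rng) for _ in range(n_runs)])
             for k, s in input_samplers.items()}
    outputs = np.array([model({k: v[i] for k, v in draws.items()})
                        for i in range(n_runs)])
    sensitivity = {k: abs(np.corrcoef(v, outputs)[0, 1])
                   for k, v in draws.items()}
    return outputs.std(), sensitivity

# Hypothetical toy rainfall-runoff model with two uncertain inputs.
out_std, sens = monte_carlo_uncertainty(
    lambda x: x["rain"] * x["runoff_coeff"],
    {"rain": lambda r: r.gamma(2.0, 10.0),
     "runoff_coeff": lambda r: r.uniform(0.2, 0.6)},
)
```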

8.
Li Nan, Cao Rui, Ye HuiShou, Li Qiang, Wang Yitian, Lv Xiping, Guo Na, Su Yuanxiang, Hao Jianrui, Yin Shitao & Chu Wenkai. Natural Resources Research, 2022, 31(4): 2129–2161.

The mineral system modeling approach for prospectivity mapping is an efficient and economic method to assess undiscovered mineral potential quantitatively. It is a procedure of modeling, acquiring, and coupling the proxies of footprints of mineral systems at multiple scales (e.g., regional, district, and deposit scales). The critical issue across these scales is that the data collected are asymmetrical, from the superficial to the deep and from the mine to its brownfields, which makes them hard to employ and integrate. In this study, firstly, multi-tactic 3D geological modeling methods, including explicit, implicit, and inversion modeling, were used to build geological models under asymmetrical datasets at the deposit and district scales. Secondly, indicators acquired in drill-intensive fields from multisource datasets composed of geology, geochemistry, geophysics and alteration data were transferred to studies in deep and brown fields. Finally, deep (~1,100 m) and circumjacent potentials of the mine were targeted in the Haoyaoerhudong gold deposit, situated in the Urad Middle Banner area, Inner Mongolia, one of the largest black-rock-series-type gold mines in China. The proposed procedure is more visual, clear, intuitive, and transferable for driving the mineral system approach to exploration discovery than previous GIS-based studies.

9.
Four dates of Landsat Thematic Mapper data from 1993, April 9, July 30, August 15, and September 16, were used to assess temporal and spatial patterns of lake area and dimensions of suspended sediment concentration in Tuttle Creek Reservoir, Kansas. In 1993, excessive precipitation in the Big Blue River Basin, and throughout much of the Upper Middle West, led to widespread flooding. Rains produced substantial erosion, sediment movement down the stream network, and a runoff volume that filled Tuttle Creek Reservoir, a U.S. Army Corps of Engineers flood control structure. The April 9 data are from before the flood, the July 30 data are from the time of maximum pool size and use of the emergency spillway, and the August and September data document the declining pool sizes. Three separate analyses were performed on each of the four dates of Thematic Mapper data. One set of analyses involved applying an existing physical model that uses at-satellite reflectance for TM Band 3 to estimate variations in suspended sediment, turbidity, and Secchi depth throughout the reservoir. Maps of estimated parameters of water quality for the four individual dates were compared and analyzed to document spatial and temporal changes. The second research method involved unsupervised classification (ERDAS ISODATA algorithm) of the data from the Tuttle Creek Reservoir. Water areas were grouped into coherent classes for further spatial analysis using a two-step or layered classification procedure for each date. The third analysis used a GIS overlay technique to compare the area of the water surface for each of the four dates with the flood pool as marked on U.S.G.S. 7-1/2 minute quadrangles. Comparisons document the major change in lake area between April and July, the high levels of suspended sediment in mid-summer, and the decline in pool size and concentrations of suspended sediment by mid-September. The study illustrates the advantages of using remote sensing to assist in documenting a relatively short-term environmental hazard. This study also demonstrates the value of Landsat Thematic Mapper data for use in mapping geographic variations in water area and quality in conjunction with a major flood event.

10.
Urbanization is an important issue concerning diverse scientific and policy communities. Computational models quantifying locations and quantities of urban growth offer numerous environmental and socioeconomic benefits. Traditional urban growth models are based on a single-algorithm fitting procedure and thus restricted on their ability to capture spatial heterogeneity. Accordingly, a GIS-based modeling framework titled multi-network urbanization (MuNU) model is developed that integrates multiple neural networks. The MuNU model enables a filtering approach where input data patterns are automatically reallocated into appropriate neural networks with targeted accuracies. We hypothesize that observations classified by individual neural networks share greater homogeneity, and thus modeling accuracy will increase with the integration of multiple targeted algorithms. Land use and land cover data sets of two time snapshots (1977 and 1997) covering the Denver Metropolitan Area are used for model training and validation. Compared to a single-step algorithm – either a stepwise logistic regression or a single neural network – several improvements are evident in the visual output of the MuNU model. Statistical validations further quantify the superiority of the MuNU model and support our hypothesis of effective incorporation of spatial heterogeneity.

11.
This paper presents an analysis of the dune field dynamics of El Fangar Spit in the Ebro Delta (Spain), relating them to the internal structure of the dunes as determined with ground-penetrating radar and supported by topographic DGPS data. These analyses are of great importance for ascertaining the state of the internal structure of dunes, an important element in their stability and, therefore, in their evolution. The internal structure shows accretion and progradation sequences of dunes over beach deposits, which depend on dune morphology (height, crest orientation) and location, as well as the processes acting on them.

12.
A global estimate of the absolute oceanic general circulation from a geostrophic inversion of in situ hydrographic data is tested against and then combined with an estimate obtained from TOPEX/POSEIDON altimetric data and a geoid model computed using the JGM-3 gravity-field solution. Within the quantitative uncertainties of both the hydrographic inversion and the geoid estimate, the two estimates derived by very different methods are consistent. When the in situ inversion is combined with the altimetry/geoid scheme using a recursive inverse procedure, a new solution, fully consistent with both hydrography and altimetry, is found. There is, however, little reduction in the uncertainties of the calculated ocean circulation and its mass and heat fluxes because the best available geoid estimate remains noisy relative to the purely oceanographic inferences. The conclusion drawn from this is that the comparatively large errors present in the existing geoid models now limit the ability of satellite altimeter data to improve directly the general ocean circulation models derived from in situ measurements. Because improvements in the geoid could be realized through a dedicated spaceborne gravity recovery mission, the impact of hypothetical much better future geoid estimates on the circulation uncertainty is also quantified, showing significant hypothetical reductions in the uncertainties of oceanic transport calculations. Full ocean general circulation models could better exploit both existing oceanographic data and future gravity-mission data, but their present use is severely limited by the inability to quantify their error budgets.

13.
A new algorithm is presented for the integrated 2-D inversion of seismic traveltime and gravity data. The algorithm adopts the 'maximum likelihood' regularization scheme. We construct a 'probability density function' which includes three kinds of information: information derived from gravity measurements; information derived from the seismic traveltime inversion procedure applied to the model; and information on the physical correlation among the density and the velocity parameters. We assume a linear relation between density and velocity, which can be node-dependent; that is, we can choose different relationships for different parts of the velocity–density grid. In addition, our procedure allows us to consider a covariance matrix related to the error propagation in linking density to velocity. We use seismic data to estimate starting velocity values and the position of boundary nodes. Subsequently, the sequential integrated inversion (SII) optimizes the layer velocities and densities for our models. The procedure is applicable, as an additional step, to any type of seismic tomographic inversion.
We illustrate the method by comparing the velocity models recovered from a standard seismic traveltime inversion with those retrieved using our algorithm. The inversion of synthetic data calculated for a 2-D isotropic, laterally inhomogeneous model shows the stability and accuracy of this procedure, demonstrates the improvements to the recovery of true velocity anomalies, and proves that this technique can efficiently overcome some of the limitations of both gravity and seismic traveltime inversions, when they are used independently.
An interpretation of field data from the 1994 Vesuvius test experiment is also presented. At depths down to 4.5 km, the model retrieved after an SII shows a more detailed structure than the model obtained from an interpretation of seismic traveltimes only, and yields additional information for a further study of the area.
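One way the three kinds of information described above might enter a single objective can be sketched as follows; the symbols and covariance weights are illustrative assumptions, not the paper's notation:

$$
\Phi(\mathbf{v},\boldsymbol{\rho}) =
\left\lVert \mathbf{t}_{\mathrm{obs}}-\mathbf{t}(\mathbf{v}) \right\rVert^{2}_{C_t^{-1}}
+ \left\lVert \mathbf{g}_{\mathrm{obs}}-\mathbf{g}(\boldsymbol{\rho}) \right\rVert^{2}_{C_g^{-1}}
+ \left\lVert \boldsymbol{\rho}-(a+b\,\mathbf{v}) \right\rVert^{2}_{C_{\rho}^{-1}},
$$

where the first term is the seismic traveltime misfit, the second the gravity misfit, and the third ties density to velocity through an assumed linear relation; minimizing such an objective corresponds to maximizing a joint probability density, and the coefficients a, b (and the associated covariance) can be made node-dependent to allow different velocity–density relationships in different parts of the grid.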

14.
This paper demonstrates the utility of the iterative proportional fitting procedure (IPF) in generating disaggregated spatial data from aggregated data and evaluates the performance of the procedure. Estimates of individual level data created by IPF using data of equal-interval categories are reliable, but the performance of the estimation can be improved by increasing sample size. The improvement usually is enough to offset the increase in error created by other factors. If the two variables defining the cross-classification have a significant interaction effect and the number of categories in each variable is larger than two, then IPF is preferred over an independent model.
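A generic sketch of the iterative proportional fitting procedure itself, rescaling a seed cross-classification until its margins match known row and column totals; the function and the numbers below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ipf(seed_table, row_targets, col_targets, tol=1e-8, max_iter=100):
    """Iterative proportional fitting: alternately rescale rows and columns
    of a seed cross-classification until its margins match the targets."""
    table = seed_table.astype(float).copy()
    for _ in range(max_iter):
        table *= (row_targets / table.sum(axis=1))[:, None]   # match row margins
        table *= (col_targets / table.sum(axis=0))[None, :]   # match column margins
        if (np.abs(table.sum(axis=1) - row_targets).max() < tol and
                np.abs(table.sum(axis=0) - col_targets).max() < tol):
            break
    return table

# Hypothetical example: aggregate margins are known for the study area,
# while the cell pattern comes from a small sample.
seed = np.array([[10.0, 5.0],
                 [4.0, 11.0]])
estimated = ipf(seed,
                row_targets=np.array([120.0, 80.0]),
                col_targets=np.array([90.0, 110.0]))
```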

15.
In this paper, the method of small perturbations is applied to the ray and energy transport equations in an investigation of the effect of weak inhomogeneities on the propagation of seismic rays through a layer of fixed thickness. Just as Aki et al. have used travel-time residuals to infer first-order velocity perturbations in their block-modelling procedure, it is proposed that surface slowness and amplitude data may be used to give additional information about the structure of the velocity perturbations beneath the observer.

16.
Cross-validatory estimation of the bilinear model based on principal components is reviewed and Krzanowski's modification of Wold's procedure is described. Two different types of residuals useful for checking model adequacy are defined and indices measuring the influence of each observed unit on the estimates of the parameters are discussed. A method for the selection of variables derived from Procrustes analysis is described. Results arising from the study of two sets of enological data are given.

17.
Summary. Two late Mesozoic dolerite sills, situated near Agardhbukta on the east coast of Vestspitsbergen and dated radiometrically at 100 ± 4 Myr BP, have been sampled in five localities and subjected to detailed mineralogical and rock magnetic studies to determine the direction and origin of their magnetization. Although the sills lie outside the Tertiary orogenic belt, one locality (no. 4) has undergone strong hydrothermal alteration and a small part of another locality (no. 3) has also been affected. A conventional procedure based on examination of Zijderveld diagrams, applied to specimens demagnetized by alternating fields and thermally, yielded similar remanence directions at all five localities, except at the altered part of locality 3. Using a least squares computer method of analysis of step demagnetization data, comparable directions were isolated from all localities, including the altered part of locality 3. Except in this last case, all directions were reversed. The adjusted mean direction obtained from this analysis is D = 159.0°, I = 62.2°, α95 = 9.0°, yielding a palaeomagnetic pole situated at 225.0°, 54.3°N, comparable with pole positions obtained from other late Mesozoic igneous rocks on Spitsbergen and distinct from palaeopoles derived from Mesozoic rocks in North America and Eurasia. This suggests that during the late Mesozoic Svalbard existed as a semi-independent microplate.

18.
1 Introduction  In June 1992, the world summit organized by the United Nations, with participants including national leaders from around the world, concluded with Agenda 21 (United Nations, 1992), the Rio Declaration on Environment and Development, in Rio de Janeiro. The declaration prompted countr…

19.
Land cover mapping plays an important role in a wide spectrum of applications ranging from climate modeling to food security. However, it is common for several, partially conflicting land cover products to be available at the same time over the same area, each suffering from specific limitations and a lack of accuracy. In order to take advantage of the best features of each product while attenuating their respective weaknesses, this paper proposes a methodology that allows the user to combine these products within a general framework involving maximum entropy/minimum divergence principles, Bayesian data fusion and Bayesian updating. First, the information brought by each land cover product is coded in terms of inequality constraints so that a first estimate of product quality can be computed from a maximum entropy/minimum divergence principle. Information from the various land cover products is then fused in a Bayesian framework, leading to a single map with an associated measure of uncertainty. Finally, it is shown how the additional information brought by control data can help improve this fused map through a Bayesian updating procedure. The first part of the paper briefly presents the most important theoretical results, while the second part illustrates the use of the approach for a specific area in Belgium, where five different land cover products are at hand. The benefits and limitations of the approach are finally discussed in the light of the results for this case study.
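A minimal sketch of the Bayesian fusion step for a single pixel under a conditional-independence assumption; the confusion matrices, class priors and function name are illustrative assumptions, and the maximum-entropy quality estimation and Bayesian updating steps of the full framework are not shown.

```python
import numpy as np

def fuse_land_cover(priors, confusion_matrices, observed_labels):
    """Fuse several land cover products for one pixel with Bayes' rule,
    assuming the products are conditionally independent given the true class.

    priors             : prior probability of each class, shape (K,)
    confusion_matrices : list of (K, K) arrays, P(product reports j | true class i)
    observed_labels    : class label reported by each product for this pixel
    """
    posterior = priors.astype(float).copy()
    for conf, label in zip(confusion_matrices, observed_labels):
        posterior *= conf[:, label]          # multiply in each product's evidence
    return posterior / posterior.sum()       # normalise to a probability vector

# Hypothetical pixel: two products, three classes; product 1 says class 0,
# product 2 says class 1.
priors = np.array([0.5, 0.3, 0.2])
conf = np.array([[0.8, 0.1, 0.1],
                 [0.2, 0.7, 0.1],
                 [0.1, 0.2, 0.7]])
posterior = fuse_land_cover(priors, [conf, conf], [0, 1])
```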

20.
Water marketing, which involves the purchase or transfer of water supplies or rights between a willing buyer and seller, represents one strategy for redistributing water resources among competing users. Most frequently, municipalities purchase agricultural water rights to augment their existing supply and help meet projected water demand. In Texas, the most active water market is in the lower Rio Grande Valley, where the cities of Brownsville, Harlingen, and McAllen have acquired surface water rights to convert water from agricultural to municipal and industrial uses. The existence of the Rio Grande Watermaster simplifies the procedure for transferring surface water rights, helps address problems such as the maintenance of instream flow and the protection of senior water rights holders, and serves as an administrative model for water resource management.
