Similar Literature
20 similar documents found.
1.
Summary. The operation of a digital image analysis system in a limestone quarry is described. The calibration of the system, required to obtain moderately reliable fragmentation values, is done from muckpile sieving data by tuning the image analysis software settings so that the fragmentation curve obtained matches the sieving data as closely as possible. The sieving data have also been used to extend the measured fragment size distribution curves to sizes below the system’s optical resolution, and to process the results in terms of fragmented rock, discounting the material coming from a loose overburden (natural fines) that is cast together with the fragmented rock. Automatic and manual operation modes of the image analysis are compared. The total fragmentation of a blast is obtained from the analysis of twenty photographs; a criterion for the elimination of outlier photographs has been adopted using a robust statistic. The limitations of the measurement system due to sampling, image processing and fines corrections are discussed, and the errors estimated whenever possible. An analysis of the consistency of the results based on the known amount of natural fines is made. Blasts with large differences in the amount of fines require differentiated treatment, as fine sizes tend to be increasingly underestimated by the image analysis as they become more abundant; this has been accomplished by means of a variable fines adjustment factor. Despite the unavoidable errors and the large dispersion always associated with large-scale rock blasting data, the system is sensitive to relative changes in fragmentation.
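The abstract does not say which robust statistic is used for outlier rejection; a minimal sketch of one common choice, screening per-photograph median fragment sizes against a median ± k·MAD band (all names and data below are illustrative, not the authors' procedure), might look like this:

```python
import numpy as np

def reject_outlier_photos(x50_per_photo, k=3.0):
    """Flag photographs whose median fragment size (x50) deviates
    from the blast-wide median by more than k robust standard
    deviations, estimated via the median absolute deviation (MAD)."""
    x = np.asarray(x50_per_photo, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    sigma = 1.4826 * mad  # MAD -> sigma under a normal assumption
    keep = np.abs(x - med) <= k * sigma
    return x[keep], np.where(~keep)[0]

# Example: invented x50 values (mm) from twenty muckpile photographs
x50 = [212, 198, 225, 207, 530, 201, 190, 215, 208, 199,
       222, 205, 210, 196, 204, 218, 95, 209, 200, 213]
kept, dropped = reject_outlier_photos(x50)
print("rejected photo indices:", dropped)  # -> [ 4 16]
```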

2.
Fragmentation measurements in the form of sieve passing and mass fraction data were used to test the capability of three different distributions to fit the observed data over a wide range in fragment size and mass. These distributions were based on Rosin-Rammler, lognormal and simple sigmoidal (S-shaped) functions, having 2 input parameters for the single-component versions and 5 input parameters for the two-component versions. Provided convergence was achieved in the non-linear curve-fitting technique, the two-component versions always provided superior fits to the observed data. However, these versions were very sensitive to variations in the values chosen for the input parameters. In this particular regard, the two-component sigmoidal function was the most robust. The present results also show that the two-component lognormal function provided the best fit to the fragmentation data in a general sense, and the two-component Rosin-Rammler function provided the worst fit. However, there was not a significant difference between any of the three methods.
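As a hedged illustration of the kind of non-linear curve fitting described (not the authors' code), a single-component Rosin-Rammler cumulative passing curve, P(x) = 1 − exp(−(x/x_c)^n), can be fitted to sieve data with SciPy; the sieve sizes and passing fractions below are invented for the example:

```python
import numpy as np
from scipy.optimize import curve_fit

def rosin_rammler(x, xc, n):
    """Cumulative fraction passing sieve size x (Rosin-Rammler form)."""
    return 1.0 - np.exp(-(x / xc) ** n)

# Illustrative sieve sizes (mm) and observed fractions passing
sizes = np.array([1, 2, 4, 8, 16, 32, 64, 128, 256])
passing = np.array([0.04, 0.08, 0.15, 0.27, 0.45, 0.66, 0.85, 0.96, 0.99])

# Initial guesses matter: non-linear fits of this kind can be
# sensitive to starting values, as the study notes.
popt, pcov = curve_fit(rosin_rammler, sizes, passing,
                       p0=(20.0, 1.0), bounds=(1e-6, np.inf))
xc, n = popt
print(f"characteristic size xc = {xc:.1f} mm, uniformity n = {n:.2f}")
```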

3.
Four image analysis systems for measuring rock fragmentation (FragScan, PowerSieve®, Split and WipFrag) have been compared under conditions necessary to provide an objective, though limited, assessment of their capabilities. The analysis of results is based on a sample of ten photographs taken from a series of photographs of controlled artificial muckpiles. These were created by dumping a blended mixture of sieved samples of limestone aggregate, in order to create a range of near-perfect Rosin-Rammler sieve size distributions. Results from the various systems are compared with sieved results using both histogram and cumulative forms, with and without fines corrections in the case of Split and WipFrag. Statistical indicators are evaluated to examine the match between system predictions and sieving values. Commentaries on the results by the inventors of each system have been incorporated. All four systems were found to perform well in some cases and poorly in others. From a detailed examination of the results, some insight into the strengths and weaknesses of the various systems is presented.

4.
Undiscovered oil and gas assessments are commonly reported as aggregate estimates of hydrocarbon volumes. Potential commercial value and discovery costs are, however, determined by accumulation size, so engineers, economists, decision makers, and sometimes policy analysts are most interested in projected discovery sizes. The lognormal and Pareto distributions have both been used to model exploration target sizes. This note contrasts the outcomes of applying these alternative distributions to the play-level assessments of the U.S. Geological Survey's 1995 National Oil and Gas Assessment. Using the same numbers of undiscovered accumulations and the same minimum, median, and maximum size estimates, substitution of the shifted truncated lognormal distribution for the shifted truncated Pareto distribution reduced assessed undiscovered oil by 16% and gas by 15%. Nearly all of the volume differences resulted because the lognormal had fewer large fields relative to the Pareto. The lognormal also resulted in a smaller number of small fields relative to the Pareto. For the Permian Basin case study presented here, reserve addition costs were 20% higher with the lognormal size assumption.
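A rough sketch of why the tail assumption matters: sampling field sizes from a truncated Pareto versus a truncated lognormal over the same support (the shift is omitted, and all parameter values are invented rather than taken from the assessment) shows the Pareto concentrating far more volume in its largest fields:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(42)
xmin, xmax, n_fields = 1.0, 500.0, 10_000  # illustrative units only

# Truncated Pareto (shape a) via inverse-CDF sampling
a = 0.8
u = rng.uniform(size=n_fields)
pareto = xmin / (1.0 - u * (1.0 - (xmin / xmax) ** a)) ** (1.0 / a)

# Truncated lognormal with the same support, sampled on the log scale
mu, sigma = np.log(5.0), 1.2
lo, hi = (np.log(xmin) - mu) / sigma, (np.log(xmax) - mu) / sigma
lognorm = np.exp(truncnorm.rvs(lo, hi, loc=mu, scale=sigma,
                               size=n_fields, random_state=rng))

# The heavier Pareto tail concentrates volume in a few large fields
print(f"total volume  Pareto: {pareto.sum():,.0f}  lognormal: {lognorm.sum():,.0f}")
print(f"share in top 1%  Pareto: {np.sort(pareto)[-100:].sum()/pareto.sum():.1%}  "
      f"lognormal: {np.sort(lognorm)[-100:].sum()/lognorm.sum():.1%}")
```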

5.
The physical mechanism behind the fragment size distribution of rock broken under explosive loading is analyzed. The analysis shows that a lognormal distribution of rock fragment sizes is associated with multiple fracturing of the material. In the case of a fully confined explosion, this distribution describes the fragment sizes near the charge center, where the material is in a hydrostatic stress state, the strain rate is very high, and failure occurs by multiple fracturing. The Rosin-Rammler distribution, by contrast, mainly describes the fragment size distribution farther from the charge center, where rock failure is dominated by radial cracks induced by circumferential (hoop) tension and single fracturing prevails.

6.
Estimation of Pearson’s correlation coefficient between two time series, in the evaluation of the influences of one time-dependent variable on another, is an often used statistical method in climate sciences. Data properties common to climate time series, namely non-normal distributional shape, serial correlation, and small data sizes, call for advanced, robust methods to estimate accurate confidence intervals to support the correlation point estimate. Bootstrap confidence intervals are estimated in the Fortran 90 program PearsonT (Mudelsee, Math Geol 35(6):651–665, 2003), where the main intention is to obtain accurate confidence intervals for correlation coefficients between two time series by taking the serial dependence of the data-generating process into account. However, Monte Carlo experiments show that the coverage accuracy of the confidence intervals for smaller data sizes can be substantially improved. In the present paper, the existing program is adapted into a new version, called PearsonT3, by calibrating the confidence interval to increase the coverage accuracy. Calibration is a bootstrap resampling technique that performs a second bootstrap loop (it resamples from the bootstrap resamples). It offers, like the non-calibrated bootstrap confidence intervals, robustness against the data distribution. Pairwise moving block bootstrap resampling is used to preserve the serial dependence of both time series. The calibration is applied to standard error-based bootstrap Student's t confidence intervals. The performance of the calibrated confidence interval is examined with Monte Carlo simulations and compared with the performance of confidence intervals without calibration. The coverage accuracy is evidently better for the calibrated confidence intervals, where the coverage error is already acceptably small (i.e., within a few percentage points) for data sizes as small as 20.
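A simplified sketch of the pairwise moving block bootstrap idea (using a plain percentile interval rather than the calibrated Student's t interval that PearsonT3 actually implements; block length and other settings below are illustrative):

```python
import numpy as np

def block_bootstrap_corr_ci(x, y, block_len=4, n_boot=2000,
                            alpha=0.05, seed=0):
    """Percentile CI for Pearson's r using a pairwise moving block
    bootstrap: (x_i, y_i) pairs stay together, and serial dependence
    is preserved within each resampled block."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    starts = np.arange(n - block_len + 1)
    n_blocks = int(np.ceil(n / block_len))
    r_boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = np.concatenate([np.arange(s, s + block_len)
                              for s in rng.choice(starts, n_blocks)])[:n]
        r_boot[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    lo, hi = np.quantile(r_boot, [alpha / 2, 1 - alpha / 2])
    return np.corrcoef(x, y)[0, 1], (lo, hi)

# Example with two serially correlated series sharing a common signal
t = np.arange(100)
x = np.sin(t / 10) + np.random.default_rng(1).normal(0, 0.3, 100)
y = np.sin(t / 10) + np.random.default_rng(2).normal(0, 0.3, 100)
print(block_bootstrap_corr_ci(x, y))
```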

7.
This study compares kriging and maximum entropy estimators for spatial estimation and monitoring network design. For second-order stationary random fields (a subset of Gaussian fields) the estimators and their associated interpolation error variances are identical. Simple lognormal kriging differs from the lognormal maximum entropy estimator, however, in both mathematical formulation and estimation error variances. Two numerical examples are described that compare the two estimators. Simple lognormal kriging yields systematically higher estimates and smoother interpolation surfaces compared to those produced by the lognormal maximum entropy estimator. The second empirical comparison applies kriging and entropy-based models to the problem of optimizing groundwater monitoring network design, using six alternative objective functions. The maximum entropy-based sampling design approach is shown to be the more computationally efficient of the two.
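For orientation, a minimal simple-kriging sketch under an assumed exponential covariance model (illustrative only; this is ordinary Gaussian-scale simple kriging, not the lognormal formulation compared in the paper):

```python
import numpy as np

def simple_kriging(coords, values, target, mean, sill=1.0, rng_len=10.0):
    """Simple kriging with a known mean and exponential covariance
    C(h) = sill * exp(-h / rng_len). Returns estimate and error variance."""
    coords = np.asarray(coords, float)
    h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C = sill * np.exp(-h / rng_len)                 # data-to-data covariances
    h0 = np.linalg.norm(coords - np.asarray(target, float), axis=-1)
    c0 = sill * np.exp(-h0 / rng_len)               # data-to-target covariances
    w = np.linalg.solve(C, c0)                      # kriging weights
    est = mean + w @ (np.asarray(values, float) - mean)
    var = sill - w @ c0                             # estimation error variance
    return est, var

est, var = simple_kriging([(0, 0), (1, 0), (0, 1)], [1.2, 0.8, 1.0],
                          target=(0.5, 0.5), mean=1.0)
print(f"estimate = {est:.3f}, error variance = {var:.3f}")
```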

8.
Gladstone, Phillips & Sparks, Sedimentology, 1998, 45(5): 833–843
Laboratory experiments show that the propagation and sedimentation patterns of particle-laden gravity currents are strongly influenced by the size of the suspended particles. The main series of experiments consisted of fixed-volume releases of dilute mixtures containing two sizes of silicon carbide particles (25 μm and 69 μm mean diameter) within a 6-m flume. Polydisperse experiments involved mixtures of five different particle sizes and variation of the amounts of the finest and coarsest particles. All variables apart from the initial relative proportions of particles were identical in the experiments. The effect of mixing different proportions of fine and coarse particles is markedly non-linear. Adding small amounts of fine sediment to a coarse-grained gravity current has a much larger influence on flow velocity, run-out distance and sedimentation patterns than adding a small amount of coarse sediment to a fine-grained gravity current. The experiments show that adding small amounts of fine particles to a coarse-grained current results in enhanced flow velocities because the fine sediment remains suspended and maintains an excess current density for a much longer time. Thus, the distance to which coarse particles are transported increases substantially as the proportion of fines in the flow is increased. Our experiments suggest that sandy turbidity currents containing suspended fines will be much more extensive than turbidity currents composed of clean sand.

9.
The effect of grain size distribution on the unconfined compressive strength (UCS) of bio-cemented granular columns is examined. Fine and coarse aggregates were mixed in various percentages to obtain five different grain size distributions. A four-phase percolation strategy was adopted in which a bacterial suspension and a cementation solution (urea and calcium chloride) were percolated sequentially. The results show that a gap-graded particle size distribution can improve the UCS of bio-cemented coarser granular materials. A maximum UCS of approximately 575 kPa was achieved with a particle size distribution containing 75% coarse aggregate and 25% fine aggregate. Furthermore, even the minimum UCS obtained is useful where mitigation of excessive bulging of stone/sand columns, and of possible slumping during their installation, is needed. The findings also imply that the number of biochemical treatments can be reduced by adding fine aggregate to coarse aggregate, achieving effective bio-cementation within the pore matrix of the coarse-aggregate column and substantially reducing the cost of the bio-cementation process. Scanning electron microscopy results confirm that adding fine aggregate to coarse aggregate provides more bridging contacts (connected by calcium carbonate precipitation) between coarse-aggregate particles; hence, the maximum UCS achieved was not necessarily associated with the maximum calcium carbonate precipitation.

10.
Application of fuzzy comprehensive evaluation to geochemical anomaly assessment (Cited: 2; self-citations: 1; citations by others: 2)
张晓常, 《物探与化探》, 2003, 27(2): 106–109
Fuzzy comprehensive evaluation was applied to 20 geochemical anomalies delineated by a 1:50,000 soil survey in the middle and upper reaches of the Yuanjiang River, Yunnan Province. The correct evaluation rate for Class A anomalies was 100%, and field verification of several other anomalies with high comprehensive evaluation values revealed mineral deposits (occurrences) of various sizes, providing a relatively reliable method for screening and evaluating anomalies in the study area.
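A minimal sketch of the fuzzy comprehensive evaluation arithmetic, B = W · R, with invented weights, membership values and grade labels (the paper's actual factors and membership functions are not given in the abstract):

```python
import numpy as np

# Hypothetical factor weights for one anomaly (e.g., element association,
# intensity, zoning, geological setting); names are illustrative only.
W = np.array([0.35, 0.30, 0.20, 0.15])

# Membership matrix R: rows = factors, columns = grades
# (ore-induced, uncertain, non-ore), from expert membership functions.
R = np.array([[0.7, 0.2, 0.1],
              [0.6, 0.3, 0.1],
              [0.5, 0.3, 0.2],
              [0.4, 0.4, 0.2]])

B = W @ R                      # weighted-average composition operator
B /= B.sum()                   # normalize memberships
grades = ["ore-induced", "uncertain", "non-ore"]
print(dict(zip(grades, B.round(3))))
print("evaluated grade:", grades[int(np.argmax(B))])  # max-membership rule
```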

11.
Grain size is a fundamental property of sediments and is commonly used to describe sedimentary facies and classify sedimentary environments. Among the various conventional techniques utilized to determine grain‐size frequency distributions, sieving is the most widely applied procedure. The accuracy of such analyses is, among other factors, strongly dependent on the sieving time. However, despite a substantial amount of research in this field, optimal sieving times for different types of sediments have, to date, not been established. In this article, the influence of sieving time on grain‐size analyses of medium‐grained microtidal and mesotidal beach and dune sands has been determined. To assess the precision of important textural parameters, such as median grain size, sorting, skewness and kurtosis, an error analysis was carried out for different sieving times (2, 5, 10, 15 and 20 minutes). After calibrating the analytical and sampling methodologies, significant deviations were registered when sieving time was less than 10 minutes. For sieving times of 10 minutes and longer, however, deviations were very small and grain‐size distributions remained almost identical, with relative errors as low as 0% in some cases.
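For context, the textural parameters named above are classically computed from percentiles of the cumulative curve; a sketch using the Folk and Ward (1957) graphic measures, with an invented cumulative curve, follows (the paper's own computation may differ):

```python
import numpy as np

def folk_ward(phi, cum):
    """Folk & Ward (1957) graphic grain-size parameters from a
    cumulative curve in phi units (cumulative weight fraction,
    increasing with phi)."""
    p = np.array([5, 16, 25, 50, 75, 84, 95]) / 100.0
    f5, f16, f25, f50, f75, f84, f95 = np.interp(p, cum, phi)
    mean = (f16 + f50 + f84) / 3.0
    sorting = (f84 - f16) / 4.0 + (f95 - f5) / 6.6
    skewness = ((f16 + f84 - 2 * f50) / (2 * (f84 - f16))
                + (f5 + f95 - 2 * f50) / (2 * (f95 - f5)))
    kurtosis = (f95 - f5) / (2.44 * (f75 - f25))
    return dict(median=f50, mean=mean, sorting=sorting,
                skewness=skewness, kurtosis=kurtosis)

# Illustrative cumulative curve for a medium sand
phi = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
cum = np.array([0.02, 0.10, 0.30, 0.55, 0.80, 0.95, 0.99])
print(folk_ward(phi, cum))
```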

12.
It is generally agreed that particle size distributions of sediments tend ideally to approximate the form of the lognormal probability law, but there is no single widely accepted explanation of how sedimentary processes generate the form of this law. Conceptually, and in its simplest form, sediment genesis involves the transformation of a parent rock mass into a particulate end product by processes that include size reduction and selection during weathering, transportation, and deposition. The many variables that operate simultaneously during this transformation can be shown to produce a distribution of particle sizes that approaches asymptotically the lognormal form when the effect of the variables is multiplicative. This was first shown by Kolmogorov (1941). Currently available models combine breakage and selection in differing degrees, but are similar in treating the processes as having multiplicative effects on particle sizes. The present paper, based on careful specification of the initial state, the nth breakage rule and the nth selection rule, leads to two stochastic models for particle breakage, and for both models the probability distributions of particle sizes are obtained. No attempt is made to apply these models to real world sedimentary processes, although this topic is touched upon in the closing remarks.
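The multiplicative mechanism is easy to demonstrate numerically: if each breakage round multiplies a particle's size by an independent random fraction, log-size becomes a sum of independent terms, and the size distribution approaches lognormal. A sketch with invented breakage fractions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Kolmogorov-style multiplicative breakage: after n rounds, a particle's
# size is the initial size times a product of independent random
# breakage fractions, so log(size) is a sum of independent terms.
n_particles, n_rounds = 50_000, 12
fractions = rng.uniform(0.1, 0.9, size=(n_particles, n_rounds))
sizes = 1000.0 * fractions.prod(axis=1)  # initial size 1000 (arbitrary units)

# Near-normality of log(size) is the lognormal check
logs = np.log(sizes)
ks = stats.kstest(logs, "norm", args=(logs.mean(), logs.std()))
print(f"KS statistic on log-sizes: {ks.statistic:.4f} (small -> near-lognormal)")
```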

13.
Suffusion involves the migration of fine particles within the matrix of the coarse fraction under seepage flow, and usually occurs in the gap-graded materials of dams and levees. Key factors controlling soil erodibility include confining pressure (p′) and fines content (Fc), whose coupled effect on suffusion remains contradictory across studies that each considered only a narrow range of these factors. For this reason, a systematic numerical simulation covering a relatively wide range of p′ and Fc was performed with a coupled discrete element method and computational fluid dynamics approach. Two distinct macro-responses of soil suffusion to p′ were revealed: for a given hydraulic gradient i = 2, an increase in p′ intensifies the suffusion of soil with fines overfilling the voids (e.g., Fc = 35%), but has a negligible effect on the suffusion of gap-graded soil containing fines underfilling the voids (e.g., Fc = 20%). The micromechanical analyses, including force chain buckling and strain energy release, reveal that when the fines overfilled the voids between coarse particles (e.g., Fc = 35%) and participated heavily in load-bearing, the erosion of fines under high i could cause the collapse of the original force transmission structure. The release of greater strain energy within samples under higher p′ accelerated particle movement and intensified suffusion. Conversely, where the fines underfilled the voids between coarse particles (e.g., Fc = 20%), the selective erosion of fines had little influence on the force network, and high p′ in this case prevented suffusion.

14.
Sorted circles, polygons, and stripes are reported from Alaska, Greenland, Baffin Island, Antarctica, and New Hampshire. From these studies and key references, all cases are found to have: (1) a mixed parent material, commonly till, composed of a wide range of clast sizes unsorted below the frost table; (2) gutter depressions containing the largest stones and carrying summer drainage; and (3) tabular stones on edge in the gutters showing expansion-squeezing from the sides. The size of the unit cells, gutter to gutter, is a function of mean maximum clast size: the smallest chips make forms 10 cm across and the largest make forms 20 m across. The slope determines the shape: polygons and nets form on slopes up to 2 to 4°, depending upon the amount of water and fines; ellipses form on 3 to 6° slopes; and stripes form on 4 to 11° slopes. Clearly, shape is an effect of solifluction. Lastly, time involves seasons of sporadic sorting until there is a stable end form with lichen-covered stone gutters and tundra-covered soil centers. The up-and-out mechanism described by Corté is the best known for the primary sorting. Larger sorted forms (2–20 m in diameter) are reported almost exclusively where nearly continuous permafrost exists; they form where the mean annual temperature is below −4°C. Former permafrost is indicated where lichen and turf are dense and not overturned and where measured motion is nil. Small forms (under 1 m in diameter) are generated in a year or two where there is only deep annual freezing (0.1–2 m) but no permafrost.

15.
During coalbed methane (CBM) development, the aggregation and settling of coal fines can block gas migration pathways and cause accidents such as pump sticking and pump burial. To determine the aggregation and settling characteristics of coal fines of different particle sizes, fines in three size ranges, >140 mesh (<106 μm), >70–140 mesh (106–<212 μm) and >50–70 mesh (212–<300 μm), were selected for aggregation and settling experiments in deionized water. The behaviour of each size fraction was characterized through visual observation of aggregation and settling, the coal-fines content of the suspension, and the particle size distribution of the suspended fines. The results show that with increasing standing time the colour of every suspension lightened to a different degree and stratification gradually appeared, with the >140 mesh suspension stratifying first. The smaller the particle size, the more fines floated at the top of the suspension; the larger the particle size, the more fines settled to the bottom. The coal-fines content of each suspension decreased with standing time, falling fastest within 3 min of stirring being stopped, and the >70–140 mesh suspension retained the highest fines content. Based on the particle size distribution curves of the suspensions, the aggregation and settling process can be divided into three stages: a single-peak to double-peak stage (rapid floating and settling), a double-peak to single-peak stage (rapid aggregation and settling), and a single-peak stage (slow settling). The >140 mesh fines reached the slow-settling stage first and the >70–140 mesh fines reached it last. The mechanism of aggregation and settling is discussed in terms of the forces on the particles, extended DLVO theory, and the organic molecular structure of the coal: the coal contains abundant hydrophobic groups such as aliphatic and aromatic hydrocarbons, so it is strongly hydrophobic and poorly wettable; as particle size decreases, the specific surface area increases markedly and large amounts of air adsorb on particle surfaces, forming gas films; meanwhile, mutual adsorption and aggregation between particles creates many internal micropores, so that finer particles tend to aggregate and float on the surface of the suspension. The observed floating, settling and suspension behaviour of coal fines of different sizes provides a basis for coal-fines control measures in subsequent CBM development.

16.
For national or global resource estimation of frequencies of metals, a lognormal distribution has sometimes been assumed but never adequately tested. Tests of the frequencies of Cu, Zn, Pb, Ag, Au, Mo, Re, Ni, Co, Nb2O3, REE2O3, Cr2O3, Pt, Pd, Ir, Rh, and Ru contents in over 3000 well-explored mineral deposits display a poor fit to the lognormal distribution. Neither a lognormal distribution nor a power law is an adequate model of the metal contents across all deposits. When these metals are grouped into 28 geologically defined deposit types, only nine of the more than 100 tests fail to be fitted by the lognormal distribution, and most of those failures are in two deposit types, suggesting problems with those types. Significant deviations from lognormal distributions of most metals when deposit types are ignored demonstrate that there is no global lognormal or power law equation for these metals. Mean and standard deviation estimates of each metal within deposit types provide a basis for modeling undiscovered resources. When tracts of land permissive for specific deposit types are delineated, deposit density estimates and contained metal statistics can be used in Monte Carlo simulations to estimate total amounts of undiscovered metals with associated explicit uncertainties, as demonstrated for undiscovered porphyry copper deposits in the Tibetan Plateau of China.
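A hedged sketch of the Monte Carlo step described at the end (all parameter values, including the deposit-count probabilities and within-type lognormal parameters, are invented rather than taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical within-type lognormal parameters (natural-log mean/sd of
# contained Cu in tonnes) and a deposit-number distribution for one tract.
mu, sigma = np.log(3.0e5), 1.4
n_deposits_pmf = {0: 0.2, 1: 0.3, 2: 0.3, 3: 0.15, 4: 0.05}

n_trials = 50_000
counts = rng.choice(list(n_deposits_pmf), p=list(n_deposits_pmf.values()),
                    size=n_trials)
totals = np.array([rng.lognormal(mu, sigma, k).sum() for k in counts])

# Explicit uncertainty: report quantiles of total undiscovered metal
q = np.quantile(totals, [0.1, 0.5, 0.9])
print(f"P90/P50/P10 contained Cu (t): {q[0]:,.0f} / {q[1]:,.0f} / {q[2]:,.0f}")
```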

17.
Empirical approaches for predicting fragmentation from blasting continue to play a significant role in the mining industry in spite of a number of inherent limitations associated with such methods. These methods can be successfully applied provided the users understand or recognize their limitations. Arguably, the most successful empirically based fragmentation models have been those applicable to surface blasting (e.g., Kuz-Ram/Kuznetsov-based models). With the widespread adoption of fragmentation assessment technologies in underground operations, an opportunity has arisen to extend and further develop these types of approaches for underground production blasting.

This paper discusses the development of a new fragmentation modelling framework for underground ring blasting applications. The approach is based on the back-analysis of geotechnical, blasting and fragmentation data gathered at the Ridgeway sublevel caving (SLC) operation, in conjunction with experiences from a number of surface blasting operations.

The bases of the model are: relating a peak particle velocity (PPV) breakage threshold to a breakage uniformity index; modelling the coarse end of the size distribution with the Rosin-Rammler distribution; and modelling the generation of fines with a newly developed approach that allows prediction of the volume of crushing around blastholes (a hedged sketch of a two-component curve of this general kind follows this abstract).

Preliminary validations of the proposed model have shown encouraging results. Further testing and validation of the proposed model framework continue, and the approach is currently being incorporated into an underground blast design and analysis software package to facilitate its application.
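A highly simplified sketch of a two-component size distribution of the general kind described, blending a Rosin-Rammler coarse component with a fines component (the functional form, parameter names and values here are assumptions, not the authors' model):

```python
import numpy as np

def two_component_passing(x, xc, n, f_fines, xf, nf):
    """Hedged sketch of a two-component fragmentation curve: a
    Rosin-Rammler coarse component blended with a mass fraction
    f_fines of crushing-derived fines (also RR-shaped here)."""
    coarse = 1.0 - np.exp(-(x / xc) ** n)
    fines = 1.0 - np.exp(-(x / xf) ** nf)
    return f_fines * fines + (1.0 - f_fines) * coarse

x = np.logspace(-1, 3, 9)  # 0.1 mm .. 1000 mm
print(two_component_passing(x, xc=300.0, n=1.2,
                            f_fines=0.15, xf=5.0, nf=0.8).round(3))
```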

18.
An electrical sensing-zone particle size analyser has been calibrated for use with sands. A 10:90 saline/glycerol electrolyte is used with a concentration of 0.1–0.5 g/l of suspended sand. The calibration error is ±1.6% (±0.02 φ). Comparison of counter and sieving results shows close agreement. Advantages of the machine are rapidity (a prepared sample can be analysed in 60 s) and the small size of the sample required.

19.
Turbidite bed thickness distributions are often interpreted in terms of power laws, even when there are significant departures from a single straight line on a log–log exceedence probability plot. Alternatively, these distributions have been described by a lognormal mixture model. Statistical methods used to analyse and distinguish the two models (power law and lognormal mixture) are presented here. In addition, the shortcomings of some frequently applied techniques are discussed, using a new data set from the Tarcău Sandstone of the East Carpathians, Romania, and published data from the Marnoso‐Arenacea Formation of Italy. Log–log exceedence plots and least squares fitting by themselves are inappropriate tools for the analysis of bed thickness distributions; they must be accompanied by the assessment of other types of diagrams (cumulative probability, histogram of log‐transformed values, q–q plots) and the use of a measure of goodness‐of‐fit other than R2, such as the chi‐square or the Kolmogorov–Smirnov statistics. When interpreting data that do not follow a single straight line on a log–log exceedence plot, it is important to take into account that ‘segmented’ power laws are not simple mixtures of power law populations with arbitrary parameters. Although a simple model of flow confinement does result in segmented plots at the centre of a basin, the segmented shape of the exceedence curve breaks down as the sampling location moves away from the basin centre. The lognormal mixture model is a sedimentologically intuitive alternative to the power law distribution. The expectation–maximization algorithm can be used to estimate the parameters and thus to model lognormal bed thickness mixtures. Taking into account these observations, the bed thickness data from the Tarcău Sandstone are best described by a lognormal mixture model with two components. Compared with the Marnoso‐Arenacea Formation, in which bed thicknesses of thin beds have a larger variability than thicknesses of the thicker beds, the thinner‐bedded population of the Tarcău Sandstone has a lower variability than the thicker‐bedded population. Such differences might reflect contrasting depositional settings, such as the difference between channel levées and basin plains.
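The EM fitting of a lognormal mixture mentioned above can be sketched by running a Gaussian mixture on log-transformed bed thicknesses (synthetic data standing in for field measurements; component counts and parameters are illustrative):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Synthetic turbidite bed thicknesses (cm): a two-component lognormal
# mixture standing in for field measurements.
thin = rng.lognormal(mean=1.0, sigma=0.4, size=400)
thick = rng.lognormal(mean=3.0, sigma=0.7, size=200)
beds = np.concatenate([thin, thick])

# EM on log-transformed thicknesses fits the lognormal mixture
logt = np.log(beds).reshape(-1, 1)
gm = GaussianMixture(n_components=2, random_state=0).fit(logt)
for w, m, v in zip(gm.weights_, gm.means_.ravel(),
                   gm.covariances_.ravel()):
    print(f"weight={w:.2f}  log-mean={m:.2f}  log-sd={np.sqrt(v):.2f}")
```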

20.
Maximum likelihood estimation of joint size from trace length measurements (Cited: 5; self-citations: 1; citations by others: 5)
Summary. Usually, rock joints are observed in outcrops and excavation walls only as traces. Under some assumptions about the shapes of the joints and the nature of their size distributions, the underlying joint size distribution can be estimated from trace length measurements. However, the interpretation of trace length distributions from line mapping data should be approached with caution. The data are always length-biased and furthermore, the semi-trace length, the trace length, and the underlying joint size may have different distributional forms. Semi-trace length distributions are monotonic decreasing functions not sensitive to changes in the real trace length distributions. Experimental full trace length distributions are shown to have lognormal distributions and to be insensitive to major changes in the underlying joint size distributions. Under the assumptions of joint convexity and circularity, a parametric model for the three-dimensional distribution of joint sizes is developed. A maximum likelihood estimation of the distribution of joint diameters, which best reflects the observed joint trace data and corrects simultaneously for joint censoring, truncation and size bias, is developed. The theory is illustrated with numerical examples using data collected from five field sites.
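One small, self-contained piece of this machinery is the length-bias correction: if trace lengths are lognormal, length-biased sampling shifts the log-mean by σ² while leaving σ unchanged, so the bias can be undone after a standard lognormal MLE. The sketch below illustrates only that correction, not the paper's full censoring/truncation likelihood or the trace-to-diameter inversion:

```python
import numpy as np

rng = np.random.default_rng(5)

# True underlying lognormal trace-length distribution (invented values)
mu, sigma = 0.5, 0.6

# Length-biased sampling: the size-biased version of LN(mu, sigma^2)
# is again lognormal, with log-mean shifted to mu + sigma^2.
observed = rng.lognormal(mu + sigma**2, sigma, size=2000)

# MLE of a lognormal uses moments of the logs; then undo the bias
log_obs = np.log(observed)
sigma_hat = log_obs.std(ddof=0)
mu_hat = log_obs.mean() - sigma_hat**2  # length-bias correction
print(f"true mu={mu}, sigma={sigma}; est mu={mu_hat:.3f}, sigma={sigma_hat:.3f}")
```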
