Similar documents
19 similar documents retrieved (search time: 109 ms)
1.
Contraband target enhancement based on CBS images (Total citations: 1; self: 1, others: 0)
Building on an analysis of CBS personnel security-screening images, this paper proposes a contraband segmentation algorithm based on morphological dilation accumulation and dual-threshold segmentation. The morphological dilation-accumulation step effectively suppresses noise and increases target energy; connectivity analysis then yields candidate contraband regions; finally, the resulting data are used to highlight contraband targets effectively.
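The dilation-accumulation and dual-threshold idea can be sketched as follows. This is a minimal numpy illustration, not the paper's implementation: the roll-based dilation, the structuring-element sizes, and the threshold values are all assumptions chosen for the toy image.

```python
import numpy as np

def dilate(img, r=1):
    """Grayscale morphological dilation with a (2r+1)x(2r+1) square structuring
    element, implemented as a sliding maximum over shifted copies."""
    out = img.copy()
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out = np.maximum(out, shifted)
    return out

def dual_threshold(img, t_low, t_high):
    """Dual-threshold segmentation: keep pixels whose value lies between
    the low and high thresholds (inclusive)."""
    return (img >= t_low) & (img <= t_high)

# Toy "screening image": a bright 3x3 target embedded in low-level noise.
rng = np.random.default_rng(0)
img = rng.uniform(0, 20, size=(16, 16))
img[6:9, 6:9] = 200.0

# Dilation accumulation: summing several dilations boosts target energy
# relative to isolated noise before thresholding.
acc = sum(dilate(img, r) for r in (1, 2))
mask = dual_threshold(acc, 150, acc.max())
```

Candidate regions would then be extracted from `mask` by connected-component labeling, as the abstract describes.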

2.
A study of multi-threshold methods for image segmentation (Total citations: 3; self: 0, others: 3)
Objective: Medical image segmentation is the basis of subsequent operations such as 3D reconstruction of CT images, and segmentation accuracy is critical to clinical judgment. This work studies threshold segmentation algorithms in depth and develops a practical software tool for multi-threshold segmentation of medical images. Methods: The software was developed with the VC++ programming environment. Its main functional module uses the input-file facilities of MITK, a medical image processing and analysis toolkit, to extract and process the input data by pixel value. Conclusion: Compared with single-threshold segmentation, multi-threshold segmentation offers many advantages.
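A minimal sketch of multi-threshold selection in the Otsu spirit, choosing the thresholds that minimize total within-class variance by exhaustive search. The paper's software is a VC++/MITK tool, so this numpy version is only illustrative of the principle, not of its implementation.

```python
import numpy as np
from itertools import combinations

def multi_otsu(values, n_thresholds=2):
    """Exhaustive multi-threshold selection: pick the thresholds that
    minimize the total within-class variance (equivalent to maximizing
    between-class variance, as in Otsu's method)."""
    values = np.asarray(values, dtype=float)
    levels = np.unique(values)
    best, best_score = None, np.inf
    for ts in combinations(levels[:-1], n_thresholds):
        edges = (-np.inf,) + ts + (np.inf,)
        within = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            cls = values[(values > lo) & (values <= hi)]
            if cls.size:
                within += cls.size * cls.var()
        if within < best_score:
            best, best_score = ts, within
    return best

# Three well-separated intensity clusters -> two thresholds between them.
data = [10, 11, 12, 50, 51, 52, 90, 91, 92]
t1, t2 = multi_otsu(data)
```

The exhaustive search is only feasible for small numbers of gray levels and thresholds; practical implementations use histogram-based recursions instead.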

3.
To extract quantitative information from FMI data, a key step is segmenting the FMI image data, i.e., separating from the raw FMI data the sub-images that reflect vugs and fractures, then analyzing the segmented sub-images with appropriate methods to extract the corresponding parameters. After investigating a large number of image segmentation algorithms, this paper identifies the singular-point multi-threshold segmentation algorithm, the transition-region-based segmentation algorithm, the Hopfield network method, and the inter-image fuzzy-divergence thresholding algorithm as effective for FMI image segmentation, and uses them to separate useful geological targets from the background in FMI images. To obtain shape parameters of vugs and fractures, the paper also studies contour tracing for edge labeling and a fill algorithm based on the labeled edges, and, on this basis, methods for computing parameters such as target length, width, circularity, fracture density, and vug density from the targets' edge coordinates. All of the above methods were implemented on the GeoFrame platform on SUN workstations; comparison of the processing results for eight wells, including LX45, against core-analysis data shows good agreement.
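As a hedged illustration of the shape parameters mentioned above, the circularity 4πA/P² of a traced contour can be computed from ordered boundary coordinates via the shoelace area and polygonal perimeter. The function and the square example below are illustrative, not the paper's GeoFrame implementation.

```python
import math

def shape_params(boundary):
    """Area (shoelace formula), perimeter, and circularity 4*pi*A/P^2 from
    an ordered list of boundary coordinates, as used for describing the
    shape of vugs and fractures (circularity is 1.0 for a circle)."""
    n = len(boundary)
    area = 0.0
    perim = 0.0
    for i in range(n):
        x0, y0 = boundary[i]
        x1, y1 = boundary[(i + 1) % n]
        area += x0 * y1 - x1 * y0
        perim += math.hypot(x1 - x0, y1 - y0)
    a = abs(area) / 2.0
    return a, perim, 4 * math.pi * a / perim ** 2

# A 10x10 square: circularity = pi/4, noticeably below a circle's 1.0.
square = [(0, 0), (10, 0), (10, 10), (0, 10)]
area, perim, circ = shape_params(square)
```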

4.
Fast image segmentation based on chaotic particle swarm optimization and the 2D Otsu method (Total citations: 1; self: 1, others: 0)
To address the heavy computational cost of the 2D Otsu method in finding the optimal threshold, and the tendency of particle swarm optimization to fall into local optima and converge slowly, an image segmentation method combining chaotic particle swarm optimization with the 2D Otsu method is proposed. Chaotic particle swarm optimization is used to perform a fast global search for the two-dimensional threshold vector. Experiments show that the algorithm achieves good segmentation of complex images and strong real-time processing capability.

5.
For ICT (industrial CT) image sequences, sub-voxel surface detection algorithms based on the Facet model and on moments are studied. An Otsu-based threshold-segmentation preprocessing step is introduced, which greatly reduces the number of voxels to be processed and substantially speeds up the original algorithms. Experiments on simulated aero-engine blade data compare the two algorithms; the results show that both achieve detection accuracy within 1/5 of a pixel, and that the preprocessing step speeds up the original algorithms by roughly a factor of four.

6.
To address the inefficiency of current artillery-fuze inspection methods and their reliance on inspectors with extensive expertise, this paper takes the 榴-5 fuze as the study object and presents an automatic inspection algorithm based on ICT scan images. Exploiting a pronounced change in the circular groove on the relevant cross-sections before and after the fuze is armed, the algorithm locates the circular-groove region in the reconstructed images of two cross-sections, computes each region's centroid, and compares the two centroids for equality to determine whether the fuze is in the safe state. Experimental results show that the proposed algorithm overcomes the shortcomings of traditional inspection methods and achieves high detection accuracy.
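The centroid-comparison test described above can be sketched as follows; the threshold, tolerance, and toy slices are illustrative placeholders, not values from the paper.

```python
import numpy as np

def region_centroid(mask):
    """Centroid (row, col) of the True pixels in a binary region mask."""
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def fuze_is_safe(slice_a, slice_b, thresh, tol=0.5):
    """Segment the bright circular-groove region in two reconstructed
    cross-sections by thresholding, then compare region centroids;
    centroids equal within tol -> 'safe' state."""
    ca = np.array(region_centroid(slice_a > thresh))
    cb = np.array(region_centroid(slice_b > thresh))
    return bool(np.linalg.norm(ca - cb) <= tol)

# Toy slices: same groove location -> safe; shifted groove -> not safe.
a = np.zeros((8, 8)); a[2:4, 2:4] = 100.0
b = np.zeros((8, 8)); b[2:4, 2:4] = 100.0
c = np.zeros((8, 8)); c[5:7, 2:4] = 100.0
safe = fuze_is_safe(a, b, thresh=50)
moved = fuze_is_safe(a, c, thresh=50)
```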

7.
A reverse-engineering software platform based on ICT tomographic images (Total citations: 2; self: 0, others: 2)
This paper presents a reverse-engineering software platform based on ICT tomographic images. Using image- and graphics-processing algorithms and techniques such as edge detection, threshold segmentation, boundary thinning, vectorization, and format conversion, the software converts a workpiece's three-dimensional tomographic ICT data into editable CAD drawings. The workflow is simple, clear, and easy to use.

8.
To improve the accuracy and computational efficiency of change detection in multi-temporal remote sensing images, this paper proposes a change detection algorithm based on the Contourlet transform and independent component analysis (ICA). Exploiting the multi-scale, multi-directional, and anisotropic properties of the Contourlet transform, the image data are decomposed at multiple scales; ICA is then applied to the decomposed data, with an improved fixed-point ICA algorithm based on Newton iteration separating mutually independent data components. The separated components are converted back into image components, and change detection is finally achieved by threshold segmentation of the change image component. Experimental results show that, compared with three existing change detection algorithms based on PCA, on ICA, and on the wavelet transform combined with ICA, the proposed algorithm separates change information effectively, reduces computational complexity, produces change images of higher accuracy, and is more robust to the background.

9.
Industrial CT images have complex structure, various artifacts, and regionally varying gray-level distributions, which makes accurate segmentation thresholds hard to find. A segmentation algorithm suited to industrial CT images is therefore proposed: the outer-ring artifacts are first handled with the maximum between-class variance (Otsu) method and standard image-processing operations, and the central air region is then handled with an iterative clustering method, yielding the region of interest. Experimental results show that, given prior knowledge, the algorithm accurately extracts the region of interest from industrial tomographic images.
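The "iterative clustering" step can be illustrated with the classic two-class iterative (ISODATA-style) threshold, which alternates between splitting the gray levels at a threshold and resetting the threshold to the average of the two class means; whether this matches the paper's exact clustering scheme is an assumption.

```python
import numpy as np

def iterative_threshold(values, t0=None, max_iter=100):
    """Two-class iterative clustering threshold: split the data at t,
    then move t to the midpoint of the two class means, until convergence."""
    v = np.asarray(values, dtype=float)
    t = v.mean() if t0 is None else float(t0)
    for _ in range(max_iter):
        lo, hi = v[v <= t], v[v > t]
        if lo.size == 0 or hi.size == 0:
            break
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < 1e-6:
            break
        t = t_new
    return t

# Air (~0) vs. material (~100): the threshold settles between the modes.
vals = [0, 1, 2, 98, 99, 100]
t = iterative_threshold(vals)
```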

10.
For dual-energy CT reconstruction from incomplete data based on image segmentation, this paper identifies the image segmentation step and the solution of the system of equations as the two key points for optimizing the algorithm, and gives preliminary optimization methods. Preliminary experimental results show that the choices made for these two factors directly affect reconstructed image quality; improving them according to the demands of the application should further raise the reconstruction performance of segmentation-based dual-energy CT incomplete-data reconstruction.
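The equation-solving step can be illustrated with the simplest two-basis-material model, in which each pixel's low- and high-energy attenuation values form a 2x2 linear system; the coefficient matrix below is made up for illustration and is not a calibrated spectrum from the paper.

```python
import numpy as np

# Illustrative dual-energy decomposition: the two measured attenuation
# values (mu_low, mu_high) are modeled as a linear mix of two basis
# materials. The coefficients here are invented placeholder numbers.
A = np.array([[0.4, 0.2],   # mu_low  per unit of material 1, material 2
              [0.3, 0.1]])  # mu_high per unit of material 1, material 2

def decompose(mu_low, mu_high):
    """Solve A @ x = [mu_low, mu_high] for the two material amounts x."""
    return np.linalg.solve(A, np.array([mu_low, mu_high]))

# A synthetic pixel containing 2 units of material 1 and 3 of material 2:
mu = A @ np.array([2.0, 3.0])
x1, x2 = decompose(mu[0], mu[1])
```

Ill-conditioning of this system is exactly why the abstract flags the equation-solving step as quality-critical: small noise in `mu` is amplified when `A` is nearly singular.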

11.
Noise suppression or signal-to-noise ratio enhancement is often desired for better processing results from a microseismic dataset. In this paper, a polarization-linearity and time-frequency-thresholding-based approach is used for denoising waveforms. A polarization-linearity filter is initially applied to preserve the signal intervals and suppress the noise amplitudes. This is followed by time-frequency thresholding for further signal-to-noise ratio enhancement in the S-transform domain. The parameterisation of both the polarization filter and the time-frequency thresholding is also discussed. Finally, real microseismic data examples demonstrate the improvements in processing results when denoised waveforms are used in the workflow. The results indicate that the denoising approach effectively suppresses the background noise while preserving the vector fidelity of the signal waveforms. Consequently, the quality of event detection, arrival-time picking, and hypocenter location improves.
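A common way to compute polarization linearity (one plausible form; the paper's exact definition may differ) uses the eigenvalues of the 3x3 covariance matrix of a three-component waveform window:

```python
import numpy as np

def linearity(window):
    """Polarization linearity of an (n x 3) three-component window:
    1 - (lam2 + lam3) / (2 * lam1), with lam1 >= lam2 >= lam3 the
    eigenvalues of the 3x3 covariance matrix. Near 1 for rectilinear
    particle motion (body-wave arrivals), lower for unpolarized noise."""
    cov = np.cov(window.T)
    lam = np.sort(np.linalg.eigvalsh(cov))[::-1]
    return 1.0 - (lam[1] + lam[2]) / (2.0 * lam[0])

# Purely rectilinear signal: all three components share one direction.
t = np.linspace(0, 1, 200)
sig = np.sin(2 * np.pi * 5 * t)
rectilinear = np.column_stack([sig, 0.3 * sig, -0.2 * sig])
lin = linearity(rectilinear)

# Isotropic random noise: eigenvalues roughly equal, linearity drops.
rng = np.random.default_rng(1)
lin_noise = linearity(rng.standard_normal((200, 3)))
```

A filter in this spirit would weight each sample by the linearity of a sliding window around it, attenuating intervals that look like noise.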

12.
In medical CT images, accurate segmentation of the lung parenchyma is the basis of pulmonary nodule detection and is of great significance for the clinical diagnosis of lung disease. This paper first reviews lung-parenchyma segmentation algorithms for medical CT images, then details the main steps of lung-parenchyma segmentation and examines the segmentation results of several typical algorithms, including a comparative analysis of the lung parenchyma and the airways. On this basis, combining several common segmentation algorithms with improvements, a practical and robust lung-parenchyma segmentation algorithm is proposed.

13.
Microseismic monitoring is an effective means of directly evaluating the hydraulic fracturing process and its outcome, and identifying microseismic events is its first step. For monitoring data with a low signal-to-noise ratio, however, conventional identification methods rarely give satisfactory results. Exploiting the sparsity of microseismic events in the time-frequency domain, this paper proposes using the Rényi entropy to measure the time-frequency sparsity of microseismic monitoring data and, with a time-frequency distance constraint, builds an objective function whose decision threshold is the number of low-entropy traces. The method identifies microseismic events while also recovering relatively clean event waveforms. Numerical experiments and tests on field monitoring data show that the method performs well on low-SNR microseismic data.
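The Rényi entropy of a time-frequency power distribution can be computed as below; the order alpha = 3 and the normalization are illustrative choices, but the behaviour shown (low entropy for sparse, event-like energy; high entropy for diffuse noise) is exactly the property the method exploits.

```python
import numpy as np

def renyi_entropy(tf_power, alpha=3.0):
    """Order-alpha Renyi entropy of a time-frequency power distribution,
    normalized to sum to 1. Sparse distributions (energy concentrated in
    few cells, e.g. a microseismic event) give low entropy; diffuse noise
    spread over many cells gives high entropy."""
    p = np.asarray(tf_power, dtype=float).ravel()
    p = p / p.sum()
    p = p[p > 0]
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

# One dominant time-frequency cell vs. a flat spectrum of the same size.
sparse = np.zeros(64); sparse[10] = 1.0
flat = np.ones(64)
h_sparse = renyi_entropy(sparse)
h_flat = renyi_entropy(flat)
```

Traces whose entropy falls below a threshold would be flagged as containing an event, per the abstract's low-entropy-trace criterion.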

14.
The ensemble Kalman filter (EnKF), a Monte Carlo sequential data assimilation method, has emerged as a promising tool for subsurface media characterization over the past decade. Owing to the high computational cost of large ensembles, EnKF is limited to small ensemble sizes in practice. This produces spurious correlations in the covariance structure, leading to incorrect updates or possible divergence of the updated realizations. In this paper, a universal/adaptive thresholding method is presented to remove and/or mitigate the spurious-correlation problem in the forecast covariance matrix; the method is then extended to regularize the Kalman gain directly. Four thresholding functions are considered for the forecast covariance and gain matrices: hard, soft, lasso, and Smoothly Clipped Absolute Deviation (SCAD). Three benchmarks are used to evaluate these methods: a small 1D linear model and two 2D water-flooding cases (petroleum reservoirs) with different levels of heterogeneity/nonlinearity. Besides adaptive thresholding, standard distance-dependent localization and bootstrap Kalman gain are also implemented for comparison. Each setup is assessed with different ensemble sizes to investigate each method's sensitivity to ensemble size. The results indicate that thresholding the forecast covariance is more reliable than thresholding the Kalman gain, and that among the thresholding functions SCAD is the most robust for both covariance and gain estimation. The analyses emphasize that not all assimilation cycles require thresholding; it should be applied judiciously during the early cycles. The proposed adaptive thresholding scheme outperforms the other methods for subsurface characterization of the benchmarks.
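The hard, soft, and SCAD thresholding functions named above have standard closed forms, sketched here in numpy (SCAD with the customary a = 3.7; for scalar coefficients the lasso rule coincides with soft thresholding):

```python
import numpy as np

def hard_threshold(x, t):
    """Hard thresholding: zero out entries with |x| <= t, keep the rest."""
    return np.where(np.abs(x) > t, x, 0.0)

def soft_threshold(x, t):
    """Soft thresholding: shrink every surviving entry toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def scad_threshold(x, t, a=3.7):
    """SCAD thresholding: soft for |x| <= 2t, a linear transition up to
    a*t, and identity beyond a*t, which reduces the bias that plain soft
    thresholding puts on large entries."""
    ax = np.abs(x)
    return np.where(ax <= 2 * t, soft_threshold(x, t),
           np.where(ax <= a * t, ((a - 1) * x - np.sign(x) * a * t) / (a - 2),
                    x))

# Compare the three rules on small, medium, and large coefficients.
c = np.array([0.5, 1.5, 5.0])
hard_out = hard_threshold(c, 1.0)
soft_out = soft_threshold(c, 1.0)
scad_out = scad_threshold(c, 1.0)
```

Note how SCAD leaves the large coefficient 5.0 untouched while soft thresholding shrinks it to 4.0, which is the bias-reduction property the abstract credits for SCAD's robustness.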

15.
Denoising of full-tensor gravity-gradiometer data must retain detailed information from field sources, especially when the data are mixed with high-frequency random noise. We present a denoising method based on the translation-invariant wavelet with mixed thresholding and an adaptive threshold to remove the random noise while retaining the data details. The mixed thresholding approach is devised to filter the random noise based on the energy distribution of the wavelet coefficients corresponding to the signal and to the random noise. The translation-invariant wavelet suppresses pseudo-Gibbs phenomena, and mixed thresholding separates the wavelet coefficients better than traditional thresholding. An adaptive Bayesian threshold is used to process the wavelet coefficients according to their specific characteristics at each decomposition scale. A two-dimensional discrete wavelet transform is used to denoise gridded data for better computational efficiency. Denoising results on model and real data suggest that, compared with a Gaussian regional filter, the proposed method suppresses white Gaussian noise while preserving the high-frequency information in gravity-gradiometer data. Satisfactory denoising is achieved with the translation-invariant wavelet.

16.

The present study demonstrates a new approach for delineating accurate flood-hazard footprints in urban regions. The methodology transforms Landsat Thematic Mapper (TM) imagery into a three-dimensional feature space (brightness, wetness and greenness); a change detection technique is then used to identify the areas affected by the flood. Efficient thresholding of the normalized difference image generated during change detection shows promising results in identifying flood extents, including standing flood water, sediment-laden water and wetness caused by the flood. Prior to the wetness transformation, dark-object subtraction is applied in the lower wavelengths to avoid errors due to scattering in urban areas. The approach eliminates most of the problems associated with mapping urban flooding, such as misclassification due to the presence of asphalt, scattering in lower wavelengths and delineating mud surges. The methodology was tested on the 2010 Memphis flood event and validated on the 2011 Queensland floods. A comparative analysis was carried out against the widely used technique of delineating flood extents by thresholding near-infrared imagery; the comparison demonstrated that the present approach is more robust to errors of omission in flood mapping. Moreover, it involves less manual effort and is simpler to use.
Editor Z.W. Kundzewicz; Associate editor A. Viglione

17.
Flood season segmentation, which partitions an entire flood season into multiple subseasons, is an important water resources management task, and the risks associated with the candidate segmentation schemes should be evaluated. Preliminary analysis in this study used the principal-component-based outlier detection (PCOut) algorithm to identify possible outlying observations and thereby reduce the uncertainty involved in flood season segmentation. A quantitative measure, the seasonal exceedance probability (SEP), was then proposed to evaluate alternative segmentation schemes; the SEP quantifies the risk that the maximum observation occurs outside the main flood season. Several findings were derived from a case study of China's Three Gorges Reservoir (TGR) and daily streamflow records (1882–2010). (1) The PCOut algorithm was effective in identifying outliers, and the estimation uncertainty of the segmentation evaluation due to outliers decreased when the end date of the main flood season (EDMFS) was postponed. (2) The proposed SEP measure supports quantitative evaluation of flood-season segmentation schemes. (3) The current segmentation scheme, based on an EDMFS of 10 September, is sufficiently safe for the TGR. These findings could assist the proper operation of the TGR.
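An empirical version of the SEP, the fraction of years whose annual maximum falls outside the main flood season, can be sketched as follows; the day-of-year convention and the toy data are assumptions for illustration, not the study's estimator.

```python
import numpy as np

def seasonal_exceedance_probability(daily_flows, main_season):
    """Empirical SEP: fraction of years whose annual maximum daily flow
    falls outside the main flood season.
    daily_flows: dict mapping year -> array of 365 daily values;
    main_season: (start_doy, end_doy), inclusive, 0-based day-of-year."""
    start, end = main_season
    outside = 0
    for flows in daily_flows.values():
        doy_max = int(np.argmax(flows))
        if not (start <= doy_max <= end):
            outside += 1
    return outside / len(daily_flows)

# Three toy years: peaks on days 180 and 200 (inside) and day 300
# (outside) a main flood season spanning days 150-250.
years = {}
for y, peak in [(2000, 180), (2001, 200), (2002, 300)]:
    f = np.zeros(365); f[peak] = 1.0
    years[y] = f
sep = seasonal_exceedance_probability(years, (150, 250))
```

A later EDMFS widens the main season and so lowers this empirical SEP, matching the trade-off the study evaluates.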

18.
Automatic feature detection from seismic data is a demanding task in today's interpretation workstations. Channels are important stratigraphic features in seismic data, due both to their reservoir potential and to the drilling hazard they can pose. The shearlet transform, as a multi-scale and multi-directional transformation, can detect anisotropic singularities in two- and higher-dimensional data. Channels appear as edges in seismic data, which can be detected by maximizing the shearlet coefficients across all sub-volumes at the finest decomposition scale; the detected edges may require further refinement through a thinning step. In this study, a three-dimensional, pyramid-adapted, compactly supported shearlet transform was applied to synthetic and real channelised three-dimensional post-stack seismic data, decomposing the data into different scales and directions for channel boundary detection. A thresholding scheme is required to compare the shearlet-based edge detection results with well-known gradient-based edge detectors such as Sobel and Canny. In both the synthetic and the real data examples, the three-dimensional shearlet edge detection algorithm outperformed the Sobel and Canny operators, even in the presence of Gaussian random noise.

19.
Application of X-ray imaging to tire edge detection (Total citations: 1; self: 1, others: 0)
X-ray quality inspection of radial tires is an important topic. Tire quality inspection by image processing and matching is a complex system in which edge detection is an essential basis for image-analysis tasks such as image segmentation, texture feature extraction and shape feature extraction. Several edge detection methods are compared in Matlab experiments, and an edge detection method suitable for radial-tire quality inspection is identified.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司) 京ICP备09084417号