Similar Documents
19 similar documents found (search time: 109 ms)
1.
Reducing the dose in computed tomography (CT) is essential for lowering radiation risk in clinical applications, and the rapid development and broad adoption of deep learning have opened new directions for low-dose CT imaging algorithms. Unlike most existing prior-driven algorithms, which rely on hand-crafted prior functions or supervised learning schemes, this paper learns the prior of normal-dose CT with a deep energy-based model; in the iterative reconstruction stage, data consistency is integrated as a conditional term into the iterative generative model for low-dose CT, and the trained prior is updated iteratively via Langevin dynamics to reconstruct the low-dose image. Comparative experiments demonstrate that the proposed method has excellent noise-suppression and detail-preservation ability.
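A minimal sketch of the iterative generation described above, assuming a trained network score_model(x, sigma) that approximates the gradient of the learned normal-dose log-prior, and hypothetical projection/back-projection operators A and At; the annealing schedule and step sizes are illustrative, not the paper's settings.

```python
import numpy as np

def langevin_reconstruct(y, A, At, score_model, sigmas,
                         n_steps=20, step_scale=1e-5, lam=1.0):
    """y: low-dose sinogram; returns the reconstructed image."""
    x = At(y)                                  # back-projection as initial guess
    for sigma in sigmas:                       # noise levels, coarse to fine
        step = step_scale * (sigma / sigmas[-1]) ** 2
        for _ in range(n_steps):
            grad_prior = score_model(x, sigma)      # learned prior score
            grad_data = lam * At(A(x) - y)          # data-consistency term
            noise = np.sqrt(2 * step) * np.random.randn(*x.shape)
            x = x + step * (grad_prior - grad_data) + noise
    return x
```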

2.
To improve the quality of low-dose CT reconstruction, an improved non-local prior model based on projection symmetry is proposed on the basis of the traditional non-local prior. A Bayesian reconstruction algorithm is built on this prior model and applied to denoising low-dose CT projection data, after which the image is reconstructed by filtered back-projection. Simulation results show that, compared with reconstruction algorithms based on traditional prior models, the proposed algorithm strikes a better balance between noise removal and edge preservation.
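A rough sketch of the idea, assuming a full-scan sinogram so that each ray has a conjugate p(theta + pi, -s): non-local weighted averaging whose search region is augmented with the window around the symmetric ray. Patch and search sizes and the smoothing parameter h are illustrative, not the paper's Bayesian prior model.

```python
import numpy as np

def patch_at(padded, i, j, patch):
    return padded[i:i + patch, j:j + patch]

def nonlocal_denoise(sino, patch=3, search=5, h=0.05):
    """sino: (n_theta, n_s) projections, angles covering [0, 2*pi)."""
    n_theta, n_s = sino.shape
    pad, half = patch // 2, search // 2
    padded = np.pad(sino, pad, mode='wrap')
    out = np.zeros_like(sino)
    for i in range(n_theta):
        for j in range(n_s):
            ref = patch_at(padded, i, j, patch)
            i_sym = (i + n_theta // 2) % n_theta    # conjugate-ray row
            j_sym = n_s - 1 - j                     # detector mirrored
            w_sum = acc = 0.0
            for ci, cj in [(i, j), (i_sym, j_sym)]:  # local + symmetric windows
                for di in range(-half, half + 1):
                    for dj in range(-half, half + 1):
                        ii = (ci + di) % n_theta
                        jj = min(max(cj + dj, 0), n_s - 1)
                        cand = patch_at(padded, ii, jj, patch)
                        w = np.exp(-np.sum((ref - cand) ** 2) / h ** 2)
                        w_sum += w
                        acc += w * sino[ii, jj]
            out[i, j] = acc / w_sum
    return out
```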

3.
Reconstruction from incomplete angular data has long been a central and difficult problem in CT image reconstruction. Current incomplete-angle methods are mostly spatial-domain iterative methods, but because forward projection and back-projection are computationally expensive, these methods are time-consuming and demand substantial hardware resources. This paper proposes an extrapolation-based iterative neighboring-grid algorithm with total-variation constraints (INNG-TV). First, parallel-beam data are mapped into the frequency domain via the Fourier transform and spline interpolation. During iteration, the known part of the Fourier-space projection data is kept fixed, the missing part is filled in by INNG extrapolation from the reconstructed image, and priors and optimization constraints such as non-negativity and total-variation minimization are applied to the reconstructed image in the image domain.
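A minimal sketch of the alternating structure described above, assuming the known Fourier samples F_known and their boolean known_mask have already been produced by gridding/interpolation: enforce measured frequencies, return to image space, apply non-negativity and one total-variation descent step, and repeat.

```python
import numpy as np

def tv_gradient(x, eps=1e-8):
    # gradient of isotropic TV via forward differences (illustrative)
    dx = np.roll(x, -1, 0) - x
    dy = np.roll(x, -1, 1) - x
    mag = np.sqrt(dx**2 + dy**2 + eps)
    div = (dx / mag - np.roll(dx / mag, 1, 0)
           + dy / mag - np.roll(dy / mag, 1, 1))
    return -div

def inng_tv(F_known, known_mask, n_iter=100, tv_step=0.1):
    F = np.where(known_mask, F_known, 0)
    for _ in range(n_iter):
        x = np.fft.ifft2(F).real                 # back to image space
        x = np.maximum(x, 0)                     # non-negativity prior
        x -= tv_step * tv_gradient(x)            # one TV-minimization step
        F = np.fft.fft2(x)
        F[known_mask] = F_known[known_mask]      # keep measured samples fixed
    return x
```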

4.
A simulation method for volumetric CT projection data   (Total citations: 7, self: 3, others: 4)
Volumetric CT is a current research focus and the development direction of both medical and industrial CT. In studying volumetric CT reconstruction algorithms, simulated projection data are indispensable. This paper proposes a method for simulating volumetric CT projection data that is accurate and widely applicable: the simulated projections faithfully capture the detail of every slice, so they can supply reliable projection data for subsequent volumetric CT reconstruction algorithms and, when required, can also selectively simulate distorted projection data. We used this method to simulate CT projection data of a human brain and of an industrial workpiece, reconstructed the simulated data with filtered back-projection, and compared the reconstructed slices with the original images; the good reconstruction results further verify the feasibility of the simulation method.
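Since the abstract does not spell out the scheme, the sketch below shows one common way to simulate parallel-beam projections of a volume slice-by-slice, by rotating each slice and summing along columns (line integrals); it is a stand-in, not the paper's method.

```python
import numpy as np
from scipy.ndimage import rotate

def simulate_projections(volume, angles_deg):
    """volume: (nz, ny, nx) phantom; returns (nz, n_angles, nx) sinograms."""
    sinos = []
    for z in range(volume.shape[0]):
        rows = []
        for a in angles_deg:
            rotated = rotate(volume[z], a, reshape=False, order=1)
            rows.append(rotated.sum(axis=0))    # line integrals along columns
        sinos.append(np.stack(rows))
    return np.stack(sinos)
```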

5.
A survey of limited-angle CT image reconstruction algorithms   (Total citations: 1, self: 1, others: 1)
This paper surveys ideas and methods for limited-angle CT image reconstruction. Limited-angle reconstruction belongs to the category of incomplete-data reconstruction: because the data-completeness condition is not satisfied, exact reconstruction is impossible. The methods divide roughly into two classes: transform-based iterative-analytic reconstruction algorithms and series-expansion-based iterative algebraic/statistical reconstruction algorithms. Limited-angle reconstruction is also equivalent to inverting an ill-conditioned matrix, so suitable constraints, prior knowledge, and regularization factors are very important for improving reconstructed image quality.

6.
Because of natural conditions and human factors, field seismic data often contain missing traces, which complicates subsequent processing and interpretation, so the data must be reconstructed. Traditional reconstruction methods typically suffer from results constrained by prior assumptions, hyperparameter choices requiring manual intervention, and a low degree of automation. Attention has therefore turned to the rapidly developing field of deep learning, and many deep learning methods have by now been applied to seismic data reconstruction to address these problems. This paper analyzes representative deep learning reconstruction methods, based respectively on convolutional neural networks, recurrent neural networks, convolutional autoencoders, and generative adversarial networks, and examines their strengths and weaknesses through residual comparisons and signal-to-noise analyses of the reconstruction results. It further discusses the state of research, the advantages, the open problems, and the future trends of deep learning seismic data reconstruction, summarizing current methods and offering an outlook.

7.
For CT reconstruction from limited-angle scans, an iterative reconstruction method based on model fusion is proposed, the model being taken from the patient's earlier DICOM images. The limited-angle projection data are first reconstructed with a statistical iterative algorithm to obtain a pre-reconstructed image; this image is fused with the model, the fused image is re-projected to fill in the missing portions of the original projections, and an intermediate result is reconstructed from the completed projection data. The projection, fusion, and reconstruction steps are repeated until a termination condition is met. Simulations show that the algorithm reconstructs the whole object completely, preserving the original object's features while improving reconstruction quality from small-angle projection data.
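A minimal sketch of the outer loop, with sirt (a statistical iterative reconstructor) and forward_project as hypothetical helpers, and a simple weighted average standing in for the paper's fusion step.

```python
def fusion_reconstruct(proj_limited, angle_mask, prior_model, all_angles,
                       sirt, forward_project, n_outer=10, alpha=0.5):
    x = sirt(proj_limited, all_angles[angle_mask])     # pre-reconstruction
    for _ in range(n_outer):
        fused = alpha * x + (1 - alpha) * prior_model  # fuse with prior model
        proj_full = forward_project(fused, all_angles) # re-project
        proj_full[angle_mask] = proj_limited           # keep measured views
        x = sirt(proj_full, all_angles)                # reconstruct completed data
    return x
```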

8.
To mitigate the ionizing-radiation hazard of X-rays in current CT, a sparse-angle reconstruction method based on low-dose CT projection data is proposed to reduce the radiation dose. The projection data are first denoised with penalized weighted least squares (PWLS), and CT images are then reconstructed from the denoised data at sparse angles. Reconstructions of a Shepp-Logan phantom simulation and of real experimental data show that the proposed method effectively suppresses noise and streak artifacts and achieves sparse-angle low-dose CT reconstruction.
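A minimal sketch of PWLS sinogram denoising by gradient descent, minimizing (q - y)^T W (q - y) / 2 + beta * R(q) with statistical weights W (e.g. derived from measured counts, an assumption here) and a quadratic neighborhood roughness penalty R; the optimizer and step rule are illustrative.

```python
import numpy as np

def pwls_denoise(y, weights, beta=0.1, n_iter=200, step=0.5):
    """y: noisy sinogram; weights: statistical weights, same shape as y."""
    q = y.copy()
    scale = weights.max() + 4.0 * beta          # crude step normalization
    for _ in range(n_iter):
        fidelity = weights * (q - y)            # gradient of the data term
        # quadratic penalty gradient: 4*q minus the 4-neighborhood sum
        rough = 4 * q - (np.roll(q, 1, 0) + np.roll(q, -1, 0)
                         + np.roll(q, 1, 1) + np.roll(q, -1, 1))
        q -= step * (fidelity + beta * rough) / scale
    return q
```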

9.
Sparse-angle sampling and reduced X-ray tube current effectively lower the radiation dose of multi-energy spectral CT, but they leave the projection data insufficient and heavily noisy, so reconstructed images degrade severely. To address this, this paper generalizes conventional total nuclear variation (TNV) regularization and, exploiting the low-rank property of the Jacobian matrix formed from non-local gradient vectors, proposes non-local total nuclear variation (NLTNV) regularization. With a single regularization term the method jointly models three kinds of prior information about spectral CT images, namely structural similarity, gradient-domain sparsity, and non-local self-similarity; it can recover structural features when sparse-angle projections carry heavy noise (low dose), and it effectively alleviates the proliferation of regularization parameters caused by modeling the different priors with multiple regularizers. Moreover, the NLTNV reconstruction model is convex, which guarantees the stability and convergence of the algorithm. Experiments show that the method significantly improves overall reconstruction quality compared with TNV regularization.
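A minimal sketch of the nuclear-norm shrinkage at the core of TNV-type regularizers: stack the per-channel finite-difference gradients into a per-pixel Jacobian and soft-threshold its singular values. The non-local part (building the Jacobian from non-local rather than adjacent differences) is omitted for brevity.

```python
import numpy as np

def tnv_shrink(channels, tau=0.1):
    """channels: (C, H, W) multi-energy image, C >= 2; returns shrunk Jacobians."""
    gx = np.roll(channels, -1, axis=2) - channels
    gy = np.roll(channels, -1, axis=1) - channels
    J = np.stack([gx, gy], axis=-1)             # (C, H, W, 2) gradient entries
    J = J.transpose(1, 2, 0, 3)                 # (H, W, C, 2) per-pixel Jacobians
    U, S, Vt = np.linalg.svd(J, full_matrices=False)
    S = np.maximum(S - tau, 0)                  # singular-value soft threshold
    return U @ (S[..., None] * Vt)              # rebuild low-rank Jacobians
```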

10.
Owing to many factors, seismic data are usually sampled sparsely along the spatial direction, causing serious spatial aliasing. This paper proposes an anti-aliasing seismic data regularization method. A weighted-norm band-limited reconstruction in the Fourier transform domain recovers the low-frequency data, with an adaptive spectrum-weighted norm regularization term constraining the solution so that the bandwidth and spectral shape of the seismic data act as prior information, which gives good low-frequency reconstruction behavior. The equations are solved with a conjugate-gradient algorithm; the reconstructed low-frequency information is then used to reconstruct the high frequencies by band extension, the unknown high-frequency band being built from the reconstructed low-frequency band. While regularizing the seismic data, the method effectively removes spatial aliasing. Both synthetic models and field-data processing show that the proposed anti-aliasing regularization method is effective and feasible.
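A minimal 1D sketch of the low-frequency step, assuming unitary FFTs: solve the weighted-norm normal equations with conjugate gradients, where band_weights encodes the assumed bandwidth/spectral-shape prior and sample_mask marks the known traces. Variable names and the regularization weight eps are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def bandlimited_reconstruct(d, sample_mask, band_weights, eps=1e-2, maxiter=200):
    """d: traces with gaps (zeros allowed); sample_mask: True at known traces."""
    n = sample_mask.size
    def normal_op(m):                            # Hermitian normal equations
        x = np.fft.ifft(m, norm='ortho')         # Fourier coefficients -> space
        x = x * sample_mask                      # restrict to known traces
        back = np.fft.fft(x, norm='ortho')       # adjoint of the sampling
        return back + eps * band_weights**2 * m  # weighted-norm regularizer
    A = LinearOperator((n, n), matvec=normal_op, dtype=complex)
    rhs = np.fft.fft(np.where(sample_mask, d, 0), norm='ortho')
    m, _ = cg(A, rhs, maxiter=maxiter)
    return np.fft.ifft(m, norm='ortho').real
```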

11.
Three-dimensional gravity inversion is an important means for geologists to understand the deep structure of the Earth. By type of inversion cell, 3D gravity inversion divides into discrete polyhedron (Discrete) inversion and grid-node (Voxels) inversion. Because discrete polyhedron inversion readily absorbs prior geological information, its theoretical field fits the observed field well, and it is therefore preferred in practical gravity inversion. However, the methods currently used to build the initial model for discrete polyhedron inversion are varied and inconsistent, which greatly limits practical application. Following the principle of fully exploiting prior information and gravity observations to obtain rich and reliable inversion results, this paper improves the modeling process on the basis of discrete polyhedron inversion. In building the initial model, it draws on the strengths of Bayesian methods, uses a hidden Markov chain to improve the classification performance of naive Bayes, and solves by maximum likelihood; it then applies model order reduction, fixing the shape or density of the bodies in the model so as to reduce the dimensionality among the five parameters of body geometry (x, y, z), density (σ), and gravity value (g), thereby lowering the high-dimensional uncertainty and the cost of forward modeling. The inverted body densities and extents are accordingly more accurate, making it easier to reproduce the structure of the gravity model. Simulations with a unit sphere and with arbitrarily shaped bodies, together with a 3D gravity inversion of the Nihe ore district in Anhui Province, yield densities and gravity values very close to reality, substantially improving the accuracy and efficiency of 3D gravity inversion and showing the method to be effective and practical.

12.
This study uses elliptical copulas and transition probabilities for uncertainty modeling of categorical spatial data. It begins by discussing the expressions of the cumulative distribution function and probability density function of two major elliptical copulas: the Gaussian copula and the t copula. The basic form of the spatial copula discriminant function is then derived from Bayes' theorem; it consists of three parts: the prior probability, the conditional marginal densities, and the conditional copula density. Finally, three parameter estimation methods are discussed: maximum likelihood estimation, inference functions for margins, and canonical maximum likelihood (CML). To avoid making assumptions about the form of the marginal distributions, the CML approach is adopted in the real-world case study. Results show that the occurrence probability maps generated by the two elliptical copulas are similar to each other; however, the prediction map interpolated by the Gaussian copula has a somewhat higher classification accuracy than that of the t copula.
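For reference, a small sketch of the bivariate Gaussian copula density, written from its standard definition c(u, v; ρ) = φ₂(Φ⁻¹(u), Φ⁻¹(v); ρ) / (φ(Φ⁻¹(u)) · φ(Φ⁻¹(v))) rather than from the paper:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def gaussian_copula_density(u, v, rho):
    """u, v: arrays of marginal probabilities in (0, 1); rho: correlation."""
    x, y = norm.ppf(u), norm.ppf(v)              # map to normal scores
    cov = [[1.0, rho], [rho, 1.0]]
    joint = multivariate_normal([0.0, 0.0], cov).pdf(np.column_stack([x, y]))
    return joint / (norm.pdf(x) * norm.pdf(y))   # copula density
```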

13.
A probabilistic description of the inversion of noisy data   (Total citations: 5, self: 4, others: 1)
Based on Bayesian theory, this paper gives a concrete workflow and method for handling noisy geophysical data, consisting mainly of likelihood-function estimation and posterior-probability computation. We extend the notion of a data vector to a set of data vectors and, by introducing a confidence measure in data space, transfer the data noise onto the probability density function in model space, thereby obtaining a likelihood function that reflects the uncertainty of the data themselves. Because the method avoids manual intervention in data space during processing, it guarantees that the probability density in model space reflects the data noise alone, offering high information fidelity and retention of feasible solutions. To obtain a posterior distribution that incorporates prior information, a probabilistic analysis method using weighting matrices is proposed; it introduces geological information directly in model space and strongly constrains the inversion non-uniqueness caused by noise. The whole workflow is demonstrated on magnetotelluric inversion.

14.
Dual-energy CT can reconstruct the atomic number and electron density of a material and is an effective material-discrimination technique. Estimating the X-ray spectrum is a prerequisite for dual-energy reconstruction, and the accuracy of the spectrum estimate directly affects the reconstruction results; however, no systematic study has yet examined how spectrum-estimation errors affect dual-energy image quality. Starting from dual-effect-decomposition dual-energy reconstruction, this paper quantitatively analyzes the influence of spectrum-estimation error on the reconstruction results, gives a theoretical method for computing the error propagation, identifies the key factors affecting accurate dual-energy reconstruction, and on that basis defines a spectrum-error measure suited to dual-energy imaging analysis and a notion of equivalent spectrum for dual-energy imaging. Preliminary experiments verify the validity of these theories.

15.
Probability theory as logic (or Bayesian probability theory) is a rational inferential methodology that provides a natural and logically consistent framework for source reconstruction. This methodology fully utilizes the information provided by a limited number of noisy concentration data obtained from a network of sensors and combines it in a consistent manner with the available prior knowledge (mathematical representation of relevant physical laws), hence providing a rigorous basis for the assimilation of this data into models of atmospheric dispersion for the purpose of contaminant source reconstruction. This paper addresses the application of this framework to the reconstruction of contaminant source distributions consisting of an unknown number of localized sources, using concentration measurements obtained from a sensor array. To this purpose, Bayesian probability theory is used to formulate the full joint posterior probability density function for the parameters of the unknown source distribution. A simulated annealing algorithm, applied in conjunction with a reversible-jump Markov chain Monte Carlo technique, is used to draw random samples of source distribution models from the posterior probability density function. The methodology is validated against a real (full-scale) atmospheric dispersion experiment involving a multiple point source release.

16.
Knowledge about saturation and pressure distributions in a reservoir can help in determining an optimal drainage pattern, and in deciding on optimal well designs to reduce risks of blow-outs and damage to production equipment. By analyzing time-lapse PP AVO or time-lapse multicomponent seismic data, it is possible to separate the effects of production-related saturation and pressure changes on seismic data. To be able to utilize information about saturation and pressure distributions in reservoir model building and simulation, information about uncertainty in the estimates is useful. In this paper we present a method to estimate changes in saturation and pressure from time-lapse multicomponent seismic data using a Bayesian estimation technique. Results of the estimations will be probability density functions (pdfs), giving immediate information about both parameter values and uncertainties. Linearized rock-physical models are linked to the changes in saturation and pressure in the prior probability distribution. The relationship between the elastic parameters and the measured seismic data is described in the likelihood model. By assuming Gaussian-distributed prior uncertainties, the posterior distribution of the saturation and pressure changes can be calculated analytically. Results from tests on synthetic seismic data show that this method produces more precise estimates of changes in effective pressure than a similar methodology based on only PP AVO time-lapse seismic data. This indicates that additional information about S-waves obtained from converted-wave seismic data is useful for obtaining reliable information about the pressure change distribution.
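A minimal sketch of the analytic linear-Gaussian machinery such an estimation rests on: with a Gaussian prior m ~ N(μ₀, C_m), a linearized forward operator G, and Gaussian noise with covariance C_d, the posterior over the saturation/pressure changes is Gaussian and available in closed form. The symbols are generic, not the paper's notation.

```python
import numpy as np

def gaussian_posterior(G, d, mu0, C_m, C_d):
    """Closed-form posterior mean and covariance for d = G m + noise."""
    Cd_inv = np.linalg.inv(C_d)
    Cm_inv = np.linalg.inv(C_m)
    # posterior covariance: (G^T C_d^{-1} G + C_m^{-1})^{-1}
    C_post = np.linalg.inv(G.T @ Cd_inv @ G + Cm_inv)
    # posterior mean blends the data misfit and the prior mean
    mu_post = C_post @ (G.T @ Cd_inv @ d + Cm_inv @ mu0)
    return mu_post, C_post
```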

17.
In previous work, we presented a method for estimation and correction of non-linear mathematical model structures, within a Bayesian framework, by merging uncertain knowledge about process physics with uncertain and incomplete observations of dynamical input-state-output behavior. The resulting uncertainty in the model input-state-output mapping is expressed as a weighted combination of an uncertain conceptual model prior and a data-derived probability density function, with weights depending on the conditional data density. Our algorithm is based on the use of iterative data assimilation to update a conceptual model prior using observed system data, and thereby construct a posterior estimate of the model structure (the mathematical form of the equation itself, not just its parameters) that is consistent with both physically based prior knowledge and with the information in the data. An important aspect of the approach is that it facilitates a clear differentiation between the influences of different types of uncertainties (initial condition, input, and mapping structure) on the model prediction. Further, if some prior assumptions regarding the structural (mathematical) forms of the model equations exist, the procedure can help reveal errors in those forms and how they should be corrected. This paper examines the properties of the approach by investigating two case studies in considerable detail. The results show how, and to what degree, the structure of a dynamical hydrological model can be estimated with little or no prior knowledge (or under conditions of incorrect prior information) regarding the functional forms of the storage–streamflow and storage–evapotranspiration relationships. The importance and implications of careful specification of the model prior are illustrated and discussed.

18.
Studies have illustrated the performance of at-site and regional flood quantile estimators. For realistic generalized extreme value (GEV) distributions and short records, a simple index-flood quantile estimator performs better than two-parameter (2P) GEV quantile estimators with probability weighted moment (PWM) estimation using a regional shape parameter and at-site mean and L-coefficient of variation (L-CV), and full three-parameter at-site GEV/PWM quantile estimators. However, as regional heterogeneity or record lengths increase, the 2P-estimator quickly dominates. This paper generalizes the index flood procedure by employing regression with physiographic information to refine a normalized T-year flood estimator. A linear empirical Bayes estimator uses the normalized quantile regression estimator to define a prior distribution which is employed with the normalized 2P-quantile estimator. Monte Carlo simulations indicate that this empirical Bayes estimator does essentially as well as or better than the simpler normalized quantile regression estimator at sites with short records, and performs as well as or better than the 2P-estimator at sites with longer records or smaller L-CV.
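A minimal sketch of the linear empirical Bayes combination, with the regression-based estimate acting as the prior and inverse-variance weighting; both variances are treated as given inputs here rather than derived as in the paper.

```python
def empirical_bayes_quantile(q_regression, var_regression, q_atsite, var_atsite):
    """Precision-weighted blend of a regression prior and an at-site estimate."""
    w = var_atsite / (var_atsite + var_regression)   # weight on the prior
    return w * q_regression + (1 - w) * q_atsite
```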

19.
A new method for designing filter functions in CT tomographic reconstruction   (Total citations: 1, self: 0, others: 1)
The filter function plays a very important role in CT image reconstruction and directly determines reconstructed image quality. To improve CT reconstruction quality, this paper starts from the idea of weighted averaging and, on the theoretical basis of the FBP reconstruction algorithm, proposes a new approach to designing filter functions, analyzing the performance of a five-point weighted-average filter function. Finally, for Shepp-Logan phantom data and real conch projection data, a new filter function is designed and its reconstructions are compared with those of the S-L and R-L filters. The comparison shows that images reconstructed with the new filter perform best overall, with improved density resolution in some local regions. The paper thus offers a new idea for filter-function design.
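A minimal sketch of filter design by weighted averaging, blending the standard R-L (ramp) and S-L (ramp shaped by a sinc window) frequency responses; the paper's five-point weighting scheme is not reproduced, and alpha is an illustrative blend weight.

```python
import numpy as np

def design_filter(n, d=1.0, alpha=0.5):
    freqs = np.fft.fftfreq(n, d=d)
    rl = np.abs(freqs)                           # R-L: pure ramp
    sl = rl * np.sinc(freqs * d)                 # S-L: ramp * sinc window
    return alpha * rl + (1 - alpha) * sl         # weighted-average design

def fbp_filter_sinogram(sino, alpha=0.5):
    """sino: (n_angles, n_det); filters each projection row in frequency space."""
    H = design_filter(sino.shape[1], alpha=alpha)
    return np.fft.ifft(np.fft.fft(sino, axis=1) * H, axis=1).real
```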
