Similar documents (20 results)
1.
Electrical resistivity tomography is a non-linear and ill-posed geophysical inverse problem that is usually solved through gradient-descent methods. This strategy is computationally fast and easy to implement but impedes accurate uncertainty appraisals. We present a probabilistic approach to two-dimensional electrical resistivity tomography in which a Markov chain Monte Carlo algorithm is used to numerically evaluate the posterior probability density function that fully quantifies the uncertainty affecting the recovered solution. The main drawback of Markov chain Monte Carlo approaches is related to the considerable number of sampled models needed to achieve accurate posterior assessments in high-dimensional parameter spaces. Therefore, to reduce the computational burden of the inversion process, we employ the differential evolution Markov chain, a hybrid method between non-linear optimization and Markov chain Monte Carlo sampling, which exploits multiple and interactive chains to speed up the probabilistic sampling. Moreover, the discrete cosine transform reparameterization is employed to reduce the dimensionality of the parameter space by removing the high-frequency components of the resistivity model that are not sensitive to the data. In this framework, the unknown parameters become the series of coefficients associated with the retained discrete cosine transform basis functions. First, synthetic data inversions are used to validate the proposed method and to demonstrate the benefits provided by the discrete cosine transform compression. To this end, we compare the outcomes of the implemented approach with those provided by a differential evolution Markov chain algorithm running in the full, un-reduced model space. Then, we apply the method to invert field data acquired along a river embankment. The results yielded by the implemented approach are also benchmarked against a standard local inversion algorithm.
The proposed Bayesian inversion provides posterior mean models in agreement with the predictions achieved by the gradient-based inversion, but it also provides model uncertainties, which can be used for penetration depth and resolution limit identification.
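The discrete cosine transform compression at the heart of this approach can be sketched in a few lines. The grid size, the smooth resistivity anomaly and the 8 × 8 block of retained coefficients below are illustrative assumptions, not the settings used in the paper:

```python
import numpy as np
from scipy.fft import dctn, idctn

# Hypothetical 2-D resistivity model (log10 ohm-m): a smooth background
# plus a Gaussian anomaly, standing in for an inverted tomogram.
ny, nx = 32, 64
y, x = np.mgrid[0:ny, 0:nx]
model = 2.0 + 0.5 * np.exp(-((x - 40) ** 2 + (y - 16) ** 2) / 50.0)

# Forward DCT; keep only a small block of low-frequency coefficients,
# which become the unknowns of the reduced inverse problem.
coeffs = dctn(model, norm="ortho")
k = 8                                  # retained coefficients per axis
compressed = np.zeros_like(coeffs)
compressed[:k, :k] = coeffs[:k, :k]    # 64 unknowns instead of 2048

# The inverse transform reconstructs a smoothed version of the model.
recon = idctn(compressed, norm="ortho")
rel_err = np.linalg.norm(recon - model) / np.linalg.norm(model)
print(f"{k * k} of {model.size} coefficients, relative error {rel_err:.3f}")
```

Because resistivity models recovered from surface data are smooth, discarding the high-order coefficients loses little information while greatly shrinking the space the Markov chains must explore.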

2.
Seismic rock physics bridges the rock moduli and the physical properties of hydrocarbon reservoirs. Prestack seismic inversion is an important method for the quantitative characterization of the elasticity, physical properties, lithology and fluid content of subsurface reservoirs. In this paper, a high-order approximation of a rock-physics model for clastic rocks is established, and a seismic AVO reflection equation characterized by the high-order approximation (Jacobian and Hessian matrices) of the rock moduli is derived. The contributions of porosity, shale content and fluid saturation to the AVO reflectivity are analysed, and the feasibility of the proposed AVO equation for the direct estimation of rock physical properties is discussed. On this basis, a probabilistic AVO inversion based on a differential evolution-Markov chain Monte Carlo stochastic model is proposed, under the premise that the model parameters obey a Gaussian-mixture prior. The stochastic model combines the global optimization characteristics of the differential evolution algorithm with the uncertainty-analysis capability of Markov chain Monte Carlo. Through the parallel interaction of multiple Markov chains, multiple stochastic solutions of the model parameters can be obtained simultaneously, and the posterior probability density distribution of the model parameters can be simulated effectively. The posterior mean is treated as the optimal solution of the inverse problem, while the variance and confidence intervals are used to evaluate the uncertainties of the estimates, thereby realizing the simultaneous estimation of reservoir elasticity, physical properties, discrete lithofacies and the dry-rock skeleton. The validity of the proposed approach is verified by synthetic tests and a field application in eastern China.
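A minimal sketch of the differential evolution-Markov chain sampler that both of the studies above rely on, run here on a toy two-dimensional Gaussian posterior; the target density, number of chains and jump scale are illustrative assumptions rather than the papers' settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(x):
    # Toy posterior: standard bivariate Gaussian, standing in for the
    # AVO likelihood combined with the Gaussian-mixture prior.
    return -0.5 * np.sum(x ** 2)

n_chains, n_dim, n_iter = 8, 2, 3000
gamma = 2.38 / np.sqrt(2 * n_dim)       # standard DE-MC jump scale
X = rng.normal(size=(n_chains, n_dim))  # current state of every chain
samples = []

for _ in range(n_iter):
    for i in range(n_chains):
        # The difference of two other chains drives the proposal,
        # so the jump adapts to the shape of the posterior.
        a, b = rng.choice([j for j in range(n_chains) if j != i], 2, replace=False)
        prop = X[i] + gamma * (X[a] - X[b]) + rng.normal(0.0, 1e-4, n_dim)
        if np.log(rng.uniform()) < log_post(prop) - log_post(X[i]):
            X[i] = prop
    samples.append(X.copy())

draws = np.concatenate(samples[n_iter // 2:])   # discard burn-in
print("posterior mean ~", draws.mean(axis=0), "std ~", draws.std(axis=0))
```

The chains interact only through the difference vector, which is what lets DE-MC tune its proposal to the posterior's scale and orientation without hand-set step sizes.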

3.
Markov chain Monte Carlo algorithms are commonly employed for accurate uncertainty appraisals in non-linear inverse problems. The downside of these algorithms is the considerable number of samples needed to achieve reliable posterior estimations, especially in high-dimensional model spaces. To overcome this issue, the Hamiltonian Monte Carlo algorithm has recently been introduced to solve geophysical inversions. Different from classical Markov chain Monte Carlo algorithms, this approach exploits the derivative information of the target posterior probability density to guide the sampling of the model space. However, its main downside is the computational cost of the derivative computation (i.e. the computation of the Jacobian matrix around each sampled model). Possible strategies to mitigate this issue are the reduction of the dimensionality of the model space and/or the use of efficient methods to compute the gradient of the target density. Here we focus on the estimation of elastic properties (P-, S-wave velocities and density) from pre-stack data through a non-linear amplitude versus angle inversion in which the Hamiltonian Monte Carlo algorithm is used to sample the posterior probability. To decrease the computational cost of the inversion procedure, we employ the discrete cosine transform to reparametrize the model space, and we train a convolutional neural network to predict the Jacobian matrix around each sampled model. The training data set for the network is also parametrized in the discrete cosine transform space, thus allowing for a reduction of the number of parameters to be optimized during the learning phase. Once trained, the network can be used to compute the Jacobian matrix associated with each sampled model in real time.
The outcomes of the proposed approach are compared and validated against the predictions of Hamiltonian Monte Carlo inversions in which a computationally expensive but accurate finite-difference scheme is used to compute the Jacobian matrix, and against those obtained by replacing the Jacobian with a matrix operator derived from a linear approximation of the Zoeppritz equations. Synthetic and field inversion experiments demonstrate that the proposed approach dramatically reduces the cost of the Hamiltonian Monte Carlo inversion while preserving an accurate and efficient sampling of the posterior probability.
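One Hamiltonian Monte Carlo update with leapfrog integration can be sketched as follows. The toy Gaussian target and its analytic gradient stand in for the posterior whose derivatives the paper's trained network supplies; the step size and trajectory length are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(x):
    return -0.5 * np.sum(x ** 2)   # toy Gaussian target

def grad_log_post(x):
    # Analytic gradient; in the paper this role is played by the
    # network-predicted Jacobian of the forward model.
    return -x

def hmc_step(x, eps=0.1, n_leap=20):
    p = rng.normal(size=x.shape)               # refresh the momentum
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * eps * grad_log_post(x_new)  # leapfrog: half step in p
    for _ in range(n_leap - 1):
        x_new += eps * p_new
        p_new += eps * grad_log_post(x_new)
    x_new += eps * p_new
    p_new += 0.5 * eps * grad_log_post(x_new)  # final half step in p
    # Metropolis accept/reject on the total energy (Hamiltonian).
    h_old = -log_post(x) + 0.5 * np.sum(p ** 2)
    h_new = -log_post(x_new) + 0.5 * np.sum(p_new ** 2)
    return x_new if np.log(rng.uniform()) < h_old - h_new else x

x = np.zeros(3)
draws = []
for _ in range(2000):
    x = hmc_step(x)
    draws.append(x.copy())
draws = np.asarray(draws[500:])    # discard burn-in
print("mean ~", draws.mean(axis=0), "std ~", draws.std(axis=0))
```

Each update calls the gradient once per leapfrog step, which is exactly why a cheap surrogate for the Jacobian pays off so dramatically.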

4.
Parameter estimation in nonlinear environmental problems
Popular parameter estimation methods, including least squares, maximum likelihood, and maximum a posteriori (MAP), solve an optimization problem to obtain a central value (or best estimate) followed by an approximate evaluation of the spread (or covariance matrix). A different approach is the Monte Carlo (MC) method, and particularly Markov chain Monte Carlo (MCMC) methods, which allow sampling from the posterior distribution of the parameters. Though available for years, MC methods have only recently drawn wide attention as practical ways of solving challenging high-dimensional parameter estimation problems. They have a broader scope of applications than conventional methods and can be used to derive the full posterior pdf, but can be computationally very intensive. This paper compares a number of different methods and presents improvements, using a nonlinear DNAPL source dissolution and solute transport model as a case study. This depth-integrated semi-analytical model approximates dissolution from the DNAPL source zone using nonlinear empirical equations with partially known parameters. It then calculates the DNAPL plume concentration in the aquifer by solving the advection-dispersion equation with a flux boundary. The comparison is among the classical MAP method and several computer-intensive Monte Carlo methods, including the Metropolis–Hastings (MH) method and the adaptive direction sampling (ADS) method.
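The contrast drawn above between optimization-based estimation (a central value plus an approximate covariance) and full posterior sampling can be illustrated with a MAP fit of a toy nonlinear model; the saturating-exponential forward model, noise level and synthetic data below are invented for illustration and are not the paper's DNAPL model:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Toy nonlinear forward model y = a * (1 - exp(-b t)): a hypothetical
# stand-in for the DNAPL dissolution equations.
t = np.linspace(0.5, 10.0, 40)
true_a, true_b = 2.0, 0.6
y = true_a * (1.0 - np.exp(-true_b * t)) + rng.normal(0.0, 0.05, t.size)

def neg_log_post(theta, sigma=0.05):
    a, b = theta
    r = y - a * (1.0 - np.exp(-b * t))
    return 0.5 * np.sum(r ** 2) / sigma ** 2   # flat prior: MAP = MLE

res = minimize(neg_log_post, x0=[1.0, 1.0], method="BFGS")
map_est = res.x
# Quadratic (Laplace) approximation of the spread around the MAP point.
approx_std = np.sqrt(np.diag(res.hess_inv))
print("MAP estimate:", map_est, "approximate std:", approx_std)
```

The quadratic approximation is cheap but only locally valid; an MCMC run on the same `neg_log_post` would instead map the full, possibly skewed posterior, at far greater cost.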

5.
In glacial studies, properties such as glacier thickness and the basement permeability and porosity are key to understand the hydrological and mechanical behaviour of the system. The seismoelectric method could potentially be used to determine key properties of glacial environments. Here we analytically model the generation of seismic and seismoelectric signals by means of a shear horizontal seismic wave source on top of a glacier overlying a porous basement. Considering a one-dimensional setting, we compute the seismic waves and the electrokinetically induced electric field. We then analyse the sensitivity of the seismic and electromagnetic data to relevant model parameters, namely depth of the glacier bottom, porosity, permeability, shear modulus and saturating water salinity of the glacier basement. Moreover, we study the possibility of inferring these key parameters from a set of very low noise synthetic data, adopting a Bayesian framework to pay particular attention to the uncertainty of the model parameters mentioned above. We tackle the resolution of the probabilistic inverse problem with two strategies: (1) we compute the marginal posterior distributions of each model parameter solving multidimensional integrals numerically and (2) we use a Markov chain Monte Carlo algorithm to retrieve a collection of model parameters that follows the posterior probability density function of the model parameters, given the synthetic data set. Both methodologies are able to obtain the marginal distributions of the parameters and estimate their mean and standard deviation. The Markov chain Monte Carlo algorithm performs better in terms of numerical stability and number of iterations needed to characterize the distributions. The inversion of seismic data alone is not able to constrain the values of porosity and permeability further than the prior distribution. 
In turn, the inversion of the electric data alone, and the joint inversion of seismic and electric data, are useful to constrain these parameters as well as other glacial system properties. Furthermore, the joint inversion reduces the uncertainty of the model parameter estimates and provides more accurate results.

6.
Research on an MCMC-based prestack seismic inversion method
The Markov chain Monte Carlo (MCMC) method is a heuristic global optimization algorithm [1]. Within a Bayesian framework, it uses existing data as constraints, so that the optimal solution honours the statistical properties of the parameters while the incorporated prior information improves the accuracy of the solution; the search can escape local optima and reach the global optimum. With the MCMC method, a large number of samples drawn from the posterior probability distribution can be obtained, yielding not only an estimate of every unknown parameter but also…

7.
Probability theory as logic (or Bayesian probability theory) is a rational inferential methodology that provides a natural and logically consistent framework for source reconstruction. This methodology fully utilizes the information provided by a limited number of noisy concentration data obtained from a network of sensors and combines it in a consistent manner with the available prior knowledge (mathematical representation of relevant physical laws), hence providing a rigorous basis for the assimilation of these data into models of atmospheric dispersion for the purpose of contaminant source reconstruction. This paper addresses the application of this framework to the reconstruction of contaminant source distributions consisting of an unknown number of localized sources, using concentration measurements obtained from a sensor array. To this purpose, Bayesian probability theory is used to formulate the full joint posterior probability density function for the parameters of the unknown source distribution. A simulated annealing algorithm, applied in conjunction with a reversible-jump Markov chain Monte Carlo technique, is used to draw random samples of source distribution models from the posterior probability density function. The methodology is validated against a real (full-scale) atmospheric dispersion experiment involving a multiple point source release.

8.
Characterization of groundwater contaminant source using Bayesian method
Contaminant source identification in groundwater systems is critical for implementing remediation strategies, including gathering further samples for analysis and implementing and evaluating different remediation plans. Such problems are usually solved with the aid of groundwater modelling, which is subject to considerable uncertainty, e.g. in the hydraulic conductivity, the measurement variance and the model structure. Monte Carlo simulation of the flow model propagates the input uncertainty onto the model predictions of concentration at the monitoring sites, and the Bayesian approach provides a natural way to update the estimates. This paper presents an application of a dynamic framework, coupled with a three-dimensional groundwater modelling scheme, to contaminant source identification in groundwater. Markov chain Monte Carlo (MCMC) is applied to infer the possible location and magnitude of the contamination source, with the uncertainty in the heterogeneous hydraulic conductivity field explicitly considered in evaluating the likelihood function. Unlike inverse approaches that provide a single, possibly incorrect, solution, the MCMC algorithm yields probability distributions over the estimated parameters. Results from the algorithm offer a probabilistic inference of the location and concentration of the released contamination, and the convergence analysis of the MCMC reveals the effectiveness of the proposed algorithm. Further investigations to extend this study are also discussed.

9.
Selection of a flood frequency distribution and an associated parameter estimation procedure is an important step in flood frequency analysis. This is however a difficult task due to problems in selecting the best-fit distribution from a large number of candidate distributions and parameter estimation procedures available in the literature. This paper presents a case study with flood data from Tasmania in Australia, which examines four model selection criteria: Akaike Information Criterion (AIC), Akaike Information Criterion—second order variant (AICc), Bayesian Information Criterion (BIC) and a modified Anderson–Darling Criterion (ADC). It has been found from the Monte Carlo simulation that ADC is more successful in recognizing the parent distribution correctly than the AIC and BIC when the parent is a three-parameter distribution. On the other hand, AIC and BIC are better in recognizing the parent distribution correctly when the parent is a two-parameter distribution. Of the seven probability distributions examined for Tasmania, two-parameter distributions are found to be preferable to three-parameter ones, with the Log Normal appearing to be the best selection. The paper also evaluates the three most widely used parameter estimation procedures for the Log Normal distribution: method of moments (MOM), method of maximum likelihood (MLE) and Bayesian Markov chain Monte Carlo method (BAY). It has been found that the BAY procedure provides better parameter estimates for the Log Normal distribution, which results in flood quantile estimates with smaller bias and standard error as compared to the MOM and MLE. The findings from this study would be useful in flood frequency analyses in other Australian states and other countries, in particular when selecting an appropriate probability distribution from a number of alternatives.
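The information-criterion comparison described above can be sketched with scipy; the synthetic flood series and the three candidate distributions below are illustrative assumptions, not the Tasmanian data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical annual-maximum flood series (m^3/s) drawn from a
# log-normal parent; this is not the Tasmanian data set.
flows = stats.lognorm.rvs(s=0.5, scale=300.0, size=60, random_state=rng)

def aic_bic(dist, data):
    params = dist.fit(data)                    # maximum likelihood fit
    ll = np.sum(dist.logpdf(data, *params))
    k, n = len(params), len(data)
    return 2 * k - 2 * ll, k * np.log(n) - 2 * ll

candidates = {"lognorm": stats.lognorm, "gamma": stats.gamma,
              "gumbel_r": stats.gumbel_r}
scores = {name: aic_bic(dist, flows) for name, dist in candidates.items()}
best_aic = min(scores, key=lambda name: scores[name][0])
for name, (aic, bic) in scores.items():
    print(f"{name:9s} AIC={aic:8.1f}  BIC={bic:8.1f}")
print("lowest AIC:", best_aic)
```

Both criteria penalize the log-likelihood by the number of fitted parameters, BIC more heavily for larger samples, which is why they tend to favour two-parameter families unless a third parameter buys a substantial likelihood gain.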

10.
The Markov chain Monte Carlo (MCMC) method is a heuristic global optimization algorithm that can be used to solve probabilistic inversion problems. MCMC-based inversion does not depend on an accurate initial model and can incorporate arbitrarily complex prior information; by sampling the prior probability density function, a large number of samples from the posterior distribution are obtained, and the search for the optimal solution can escape local optima and reach the global optimum. Because of its enormous computational cost, the MCMC method is difficult to apply in practice, and in geo…

11.
Parameter uncertainty in hydrologic modeling is crucial to flood simulation and forecasting. The Bayesian approach allows one to estimate parameters according to prior expert knowledge as well as observational data about model parameter values. This study assesses the performance of two popular uncertainty analysis (UA) techniques, i.e., generalized likelihood uncertainty estimation (GLUE) and the Bayesian method implemented with the Markov chain Monte Carlo sampling algorithm, in evaluating model parameter uncertainty in flood simulations. These two methods were applied to the semi-distributed topographic hydrologic model (TOPMODEL), which includes five parameters. A case study was carried out for a small humid catchment in southeastern China. The performance assessment of the GLUE and Bayesian methods was conducted with advanced tools suited for probabilistic simulations of continuous variables such as streamflow. Graphical tools and scalar metrics were used to test several attributes of the simulation quality of selected flood events: deterministic accuracy and the accuracy of the 95% prediction probability uncertainty band (95PPU). Sensitivity analysis was conducted to identify sensitive parameters that largely affect the model output results. Subsequently, the GLUE and Bayesian methods were used to analyze the uncertainty of the sensitive parameters and to produce their posterior distributions. Based on these posterior parameter samples, TOPMODEL simulations and the corresponding UA were conducted. Results show that the form of exponential decline in conductivity and the overland flow routing velocity were sensitive parameters of TOPMODEL in our case. Small changes in these two parameters would lead to large differences in flood simulation results. Results also suggest that, for both UA techniques, most of the streamflow observations were bracketed by the 95PPU, with a containing ratio larger than 80%.
In comparison, GLUE gave narrower prediction uncertainty bands than the Bayesian method. It was found that the mode estimates of the parameter posterior distributions yield better deterministic performance than the 50% percentiles for both the GLUE and Bayesian analyses. In addition, the simulation results calibrated with the Rosenbrock optimization algorithm show a better agreement with the observations than the UA 50% percentiles but slightly worse than the hydrographs from the mode estimates. The results clearly emphasize the importance of using model uncertainty diagnostic approaches in flood simulations.
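The core of the GLUE procedure, Monte Carlo sampling of the prior followed by a behavioural threshold on a likelihood measure, can be sketched with a toy one-parameter linear-reservoir model standing in for TOPMODEL; the rainfall series, efficiency threshold and parameter range below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate(k, rain):
    # Toy single-parameter linear reservoir: storage S drains as S/k.
    # A hypothetical stand-in for TOPMODEL, not the model of the study.
    S, q = 10.0, []
    for r in rain:
        S = S + r - S / k
        q.append(S / k)
    return np.asarray(q)

rain = rng.uniform(0.0, 5.0, 50)
obs = simulate(8.0, rain) + rng.normal(0.0, 0.05, 50)   # "observed" flows

# GLUE: sample the prior, keep the "behavioural" parameter sets whose
# Nash-Sutcliffe efficiency (NSE) exceeds a threshold.
ks = rng.uniform(2.0, 20.0, 5000)
sse_tot = np.sum((obs - obs.mean()) ** 2)
nse = np.array([1.0 - np.sum((simulate(k, rain) - obs) ** 2) / sse_tot
                for k in ks])
behavioural = ks[nse > 0.8]
lo_b, hi_b = np.percentile(behavioural, [2.5, 97.5])
print(f"{behavioural.size} behavioural sets; 95% band for k: "
      f"[{lo_b:.2f}, {hi_b:.2f}]")
```

The spread of the behavioural ensemble is what produces GLUE's prediction bands; the choice of threshold is subjective, which is one root of the GLUE-versus-Bayesian differences the study reports.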

12.
Exposure estimation using repeated blood concentration measurements
Physiologically based toxicokinetic (PBTK) modeling has been well established for studying the distribution of chemicals in target tissues. In addition, the hierarchical Bayesian statistical approach using Markov chain Monte Carlo (MCMC) simulations has been applied successfully for parameter estimation. The aim was to estimate the (assumed constant) inhalation exposure concentration from repeated measurements in venous blood using a PBTK model, so that exposures could be reconstructed. By treating the constant exterior exposure as an unknown parameter of a four-compartment PBTK model, we applied MCMC simulations to estimate the exposure within a hierarchical Bayesian approach. A dataset on 16 volunteers exposed to 100 ppm (≅0.538 mg/L) trichloroethylene vapors for 4 h was reanalyzed as an illustration. Cases of time-dependent exposure with a constant mean were also studied via 100 simulated datasets. The posterior geometric mean of 0.571, with a narrow 95% posterior confidence interval (CI) of (0.506, 0.645), estimated the true trichloroethylene inhalation concentration (0.538 mg/L) with very high precision. The proposed method also estimated the overall constant mean of the simulated time-dependent exposure scenarios well, with slightly wider 95% CIs. The method demonstrates numerically, on a real dataset and in simulation studies, the accuracy of exposure estimation from biomonitoring data using a PBTK model and MCMC simulations, providing a starting point for future applications in occupational exposure assessment.

13.
Improved Monte Carlo inversion of surface wave data
Inversion of surface wave data suffers from solution non-uniqueness and is hence strongly biased by the initial model. The Monte Carlo approach can handle this non-uniqueness by evidencing the local minima, but it is inefficient for high-dimensionality problems and makes use of subjective criteria, such as misfit thresholds, to interpret the results. If a smart sampling of the model parameter space, which exploits the scale properties of the modal curves, is introduced, the method becomes more efficient, and with respect to traditional global search methods it avoids the subjective use of control parameters that are barely related to the physical problem. The results are interpreted by drawing inference by means of a statistical test that selects an ensemble of feasible shear wave velocity models according to data quality and model parameterization. Tests on synthetic data demonstrate that the application of scale properties concentrates the sampling of the model parameter space in high probability density zones and makes it poorly sensitive to the initial boundary of the model parameters. Tests on synthetic and field data, where boreholes are available, prove that the statistical test selects final results that are consistent with the true model and sensitive to data quality. The implemented strategies make the Monte Carlo inversion efficient for practical applications and able to effectively retrieve subsoil models even in complex and challenging situations such as velocity inversions.

14.
A new uncertainty estimation method, which we recently introduced in the literature, allows for the comprehensive search of model posterior space while maintaining a high degree of computational efficiency. The method starts with an optimal solution to an inverse problem, performs a parameter reduction step and then searches the resulting feasible model space using prior parameter bounds and sparse-grid polynomial interpolation methods. After misfit rejection, the resulting model ensemble represents the equivalent model space and can be used to estimate inverse solution uncertainty. While parameter reduction introduces a posterior bias, it also allows for scaling this method to higher dimensional problems. The use of Smolyak sparse-grid interpolation also dramatically increases sampling efficiency for large stochastic dimensions. Unlike Bayesian inference, which treats the posterior sampling problem as a random process, this geometric sampling method exploits the structure and smoothness in posterior distributions by solving a polynomial interpolation problem and then resampling from the resulting interpolant. The two questions we address in this paper are 1) whether our results are generally compatible with established Bayesian inference methods and 2) how our method compares in terms of posterior sampling efficiency. We accomplish this by comparing our method, for two electromagnetic problems from the literature, with two commonly used Bayesian sampling schemes: Gibbs' and Metropolis-Hastings. While both the sparse-grid and Bayesian samplers produce compatible results in both examples, the sparse-grid approach has a much higher sampling efficiency, requiring an order of magnitude fewer samples, suggesting that sparse-grid methods can significantly improve the tractability of inference solutions for problems in high dimensions or with more costly forward physics.

15.
In this study, we focus on a hydrogeological inverse problem specifically targeting monitoring soil moisture variations using tomographic ground penetrating radar (GPR) travel time data. Technical challenges exist in the inversion of GPR tomographic data for handling non-uniqueness, nonlinearity and high-dimensionality of unknowns. We have developed a new method for estimating soil moisture fields from crosshole GPR data. It uses a pilot-point method to provide a low-dimensional representation of the relative dielectric permittivity field of the soil, which is the primary object of inference: the field can be converted to soil moisture using a petrophysical model. We integrate a multi-chain Markov chain Monte Carlo (MCMC)–Bayesian inversion framework with the pilot point concept, a curved-ray GPR travel time model, and a sequential Gaussian simulation algorithm, for estimating the dielectric permittivity at pilot point locations distributed within the tomogram, as well as the corresponding geostatistical parameters (i.e., spatial correlation range). We infer the dielectric permittivity as a probability density function, thus capturing the uncertainty in the inference. The multi-chain MCMC enables addressing high-dimensional inverse problems as required in the inversion setup. The method is scalable in terms of number of chains and processors, and is useful for computationally demanding Bayesian model calibration in scientific and engineering problems. The proposed inversion approach can successfully approximate the posterior density distributions of the pilot points, and capture the true values. The computational efficiency, accuracy, and convergence behaviors of the inversion approach were also systematically evaluated, by comparing the inversion results obtained with different levels of noises in the observations, increased observational data, as well as increased number of pilot points.

16.
Finding an operational parameter vector is always challenging in the application of hydrologic models, with over-parameterization and limited information from observations leading to uncertainty about the best parameter vectors. Thus, it is beneficial to find every possible behavioural parameter vector. This paper presents a new methodology, called the patient rule induction method for parameter estimation (PRIM-PE), to define where the behavioural parameter vectors are located in the parameter space. The PRIM-PE was used to discover all regions of the parameter space containing an acceptable model behaviour. This algorithm consists of an initial sampling procedure to generate a parameter sample that sufficiently represents the response surface with a uniform distribution within the "good-enough" region (i.e., performance better than a predefined threshold) and a rule induction component (PRIM), which is then used to define regions in the parameter space in which the acceptable parameter vectors are located. To investigate its ability in different situations, the methodology is evaluated using four test problems. The PRIM-PE sampling procedure was also compared against a Markov chain Monte Carlo sampler known as the differential evolution adaptive Metropolis (DREAMZS) algorithm. Finally, a spatially distributed hydrological model calibration problem with two settings (a three-parameter calibration problem and a 23-parameter calibration problem) was solved using the PRIM-PE algorithm. The results show that the PRIM-PE method captured the good-enough region in the parameter space successfully using 8 and 107 boxes for the three-parameter and 23-parameter problems, respectively. This good-enough region can be used in a global sensitivity analysis to provide a broad range of parameter vectors that produce acceptable model performance. Moreover, for a specific objective function and model structure, the size of the boxes can be used as a measure of equifinality.

17.
Many civil infrastructures are located near the confluence of two streams, where they may be subject to inundation by high flows from either stream or both. These infrastructures, such as highway bridges, are designed to meet specified performance objectives for floods of a specified return period (e.g. the 100 year flood). Because the flooding of structures on one stream can be affected by high flows on the other stream, it is important to know the relationship between the coincident exceedence probabilities on the confluent stream pair in many hydrological engineering practices. Currently, the National Flood Frequency Program (NFF), which was developed by the US Geological Survey (USGS) and based on regional analysis, is probably the most popular model for ungauged site flood estimation and could be employed to estimate flood probabilities at the confluence points. The need for improved infrastructure design at such sites has motivated a renewed interest in the development of more rigorous joint probability distributions of the coincident flows. To accomplish this, a practical procedure is needed to determine the crucial bivariate distributions of design flows at stream confluences. In the past, the copula method provided a way to construct multivariate distribution functions. This paper aims to develop the Copula-based Flood Frequency (COFF) method at the confluence points with any type of marginal distributions via the use of Archimedean copulas and dependent parameters. The practical implementation was assessed and tested against the standard NFF approach by a case study in Iowa's Des Moines River. Monte Carlo simulations proved the success of the generalized copula-based joint distribution algorithm. Copyright © 2009 John Wiley & Sons, Ltd.
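The copula construction can be illustrated with the Clayton family, one member of the Archimedean class; the dependence parameters and the 100-year marginal quantiles below are illustrative assumptions, not values from the study:

```python
import numpy as np

def clayton_cdf(u, v, theta):
    # Clayton copula, an Archimedean family:
    # C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta), theta > 0.
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def joint_exceedance(u, v, theta):
    # P(U > u, V > v) by inclusion-exclusion on the copula CDF.
    return 1.0 - u - v + clayton_cdf(u, v, theta)

# 100-year event on each confluent stream (non-exceedance prob 0.99).
u = v = 0.99
print(f"independence: P(both exceed) = {0.01 ** 2:.6f}")
for theta in (0.5, 2.0, 8.0):          # weak -> strong dependence
    p = joint_exceedance(u, v, theta)
    print(f"theta={theta}: P(both exceed) = {p:.6f}")
```

Stronger dependence between the confluent streams makes the coincident 100-year exceedance markedly more likely than the 10⁻⁴ that independence would suggest, which is exactly why the joint distribution matters for design at confluences.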

18.
Kil Seong Lee  Sang Ug Kim 《水文研究》2008,22(12):1949-1964
This study employs the Bayesian Markov Chain Monte Carlo (MCMC) method with the Metropolis–Hastings algorithm and maximum likelihood estimation (MLE) using a quadratic approximation of the likelihood function for the evaluation of uncertainties in low flow frequency analysis using a two-parameter Weibull distribution. The two types of prior distributions, a non-data-based distribution and a data-based distribution using regional information collected from neighbouring stations, are used to establish a posterior distribution. Eight case studies using the synthetic data with a sample size of 100, generated from two-parameter Weibull distribution, are performed to compare with results of analysis using MLE and Bayesian MCMC. Also, Bayesian MCMC and MLE are applied to 36 years of gauged data to validate the efficiency of the developed scheme. These examples illustrate the advantages of Bayesian MCMC and the limitations of MLE based on a quadratic approximation. From the point of view of uncertainty analysis, Bayesian MCMC is more effective than MLE using a quadratic approximation when the sample size is small. In particular, Bayesian MCMC method is more attractive than MLE based on a quadratic approximation because the sample size of low flow at the site of interest is mostly not enough to perform the low flow frequency analysis. Copyright © 2007 John Wiley & Sons, Ltd.

19.
A new bivariate pseudo-Pareto distribution is proposed, and its distributional characteristics are investigated. The parameters of this distribution are estimated by the method of moments, the maximum likelihood method and the Bayesian method. Point estimators of the parameters are presented for different sample sizes. Asymptotic confidence intervals are constructed, and the parameter modeling the dependency between the two variables is checked. The performance of the different estimation methods is investigated using the bootstrap method. A Markov chain Monte Carlo simulation is conducted to estimate the Bayesian posterior distribution for different sample sizes. For illustrative purposes, a real set of drought data is investigated.

20.
A genetic algorithm for the joint inversion of velocity structure and hypocentre locations in the Beijing-Tianjin-Tangshan-Zhangjiakou region
Joint inversion of source parameters and velocity structure is a typical nonlinear multi-parameter optimization problem. Conventional locally linearized inversion methods tend to become trapped in local extrema and depend strongly on the choice of the initial model. The genetic algorithm, which simulates biological evolution, is by contrast a simple and efficient global search method.
