20 similar documents found (search time: 15 ms)
1.
Performance of three types of Stokes's kernel in the combined solution for the geoid [Total citations: 8 (self: 6, others: 2)]
When regional gravity data are used to compute a gravimetric geoid in conjunction with a geopotential model, it is sometimes
implied that the terrestrial gravity data correct any erroneous wavelengths present in the geopotential model. This assertion
is investigated. The propagation of errors from the low-frequency terrestrial gravity field into the geoid is derived for
the spherical Stokes integral, the spheroidal Stokes integral and the Molodensky-modified spheroidal Stokes integral. It is
shown that error-free terrestrial gravity data, if used in a spherical cap of limited extent, cannot completely correct the
geopotential model. Using a standard norm, it is shown that the spheroidal and Molodensky-modified integration kernels offer
a preferable approach. This is because they can filter out a large amount of the low-frequency errors expected to exist in
terrestrial gravity anomalies and thus rely more on the low-frequency geopotential model, which currently offers the best
source of this information.
Received: 11 August 1997 / Accepted: 18 August 1998
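For orientation, the three kernels compared in this abstract have well-known series forms (standard textbook expressions, not reproduced from the paper itself; the modification coefficients are denoted generically):

```latex
% spherical Stokes kernel
S(\psi) = \sum_{n=2}^{\infty}\frac{2n+1}{n-1}\,P_{n}(\cos\psi)

% spheroidal kernel of degree M: the low degrees, which are taken
% from the geopotential model, are removed
S^{M}(\psi) = \sum_{n=M+1}^{\infty}\frac{2n+1}{n-1}\,P_{n}(\cos\psi)

% Molodensky-type modification over a cap of radius \psi_0: the
% coefficients t_k are chosen to minimize the truncation error of
% the integral outside the cap (generic form, assumed here)
S^{M}_{\mathrm{mod}}(\psi) = S^{M}(\psi)
   - \sum_{k=2}^{M}\frac{2k+1}{2}\,t_{k}\,P_{k}(\cos\psi)
```

Removing the low degrees from the kernel is what filters the long-wavelength terrestrial errors that the abstract refers to.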
2.
The probability distribution of the GPS baseline for a class of integer ambiguity estimators [Total citations: 12 (self: 2, others: 10)]
P. J. G. Teunissen, Journal of Geodesy (1999) 73(5): 275-284
In current global positioning system (GPS) ambiguity resolution practice there is not yet a rigorous procedure in place to
diagnose its expected performance and to evaluate the probabilistic properties of the computed baseline. The necessary theory
to bridge this gap is presented. Probabilistic statements about the 'fixed' GPS baseline can be made once its probability
distribution is known. This distribution is derived for a class of integer ambiguity estimators. Members of this class are
the ambiguity estimators that follow from 'integer rounding', 'integer bootstrapping' and 'integer least squares', respectively.
It is also shown how this distribution differs from the one which is usually used in practice. The approximations involved
are identified and ways of evaluating them are given. In this comparison the precise role of GPS ambiguity resolution is clarified.
Received: 3 August 1998 / Accepted: 4 March 1999
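Two of the estimator classes named in this abstract can be sketched in a few lines (a minimal illustration, assuming a float ambiguity vector `a_float` with covariance `Q`; integer least squares is omitted since it requires a search):

```python
import numpy as np

def integer_rounding(a_float):
    """Round each float ambiguity to the nearest integer, ignoring correlations."""
    return np.rint(a_float).astype(int)

def integer_bootstrapping(a_float, Q):
    """Sequential conditional rounding: fix the first ambiguity, condition the
    remaining float ambiguities on that integer, and repeat."""
    a = np.array(a_float, dtype=float)
    Q = np.array(Q, dtype=float)
    z = np.zeros(a.size, dtype=int)
    for i in range(a.size):
        z[i] = int(round(a[i]))
        r = a[i] - z[i]                       # conditioning residual
        if i + 1 < a.size:
            a[i + 1:] -= Q[i + 1:, i] / Q[i, i] * r
            # conditional covariance of the not-yet-fixed block
            Q[i + 1:, i + 1:] -= np.outer(Q[i + 1:, i], Q[i + 1:, i]) / Q[i, i]
    return z
```

With a diagonal covariance the two estimators coincide; it is the correlations between ambiguities that make bootstrapping, and the full integer least-squares search, differ from plain rounding.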
3.
This paper aims at a comparative study of several measures to compensate for gross errors in kinematic orbit data. It starts with a simulation study on the influence of a single outlier in the orbit data on the gravity field solution. It is shown that even a single outlier can degrade the resulting gravity field solution considerably. To compensate for outliers, two different strategies are investigated: wavelet filters, which detect and eliminate gross errors, and robust estimators, which, through iterative downweighting, gradually ignore observations that lead to large residuals. Both methods are applied in the analysis of a 2-year kinematic CHAMP (Challenging Minisatellite Payload) orbit data set. In various real-data studies, robust estimators outperform wavelet filters in terms of the resolution of the derived gravity field solution. This superior performance comes at the cost of computational load, as robust estimators are implemented iteratively and require solving large systems of linear equations several times.
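The iterative-downweighting idea can be sketched with a generic Huber-type M-estimator (an IRLS sketch, not the paper's implementation; the tuning constant k = 1.345 and the MAD scale estimate are standard choices, assumed here, and the line-fit data are made up):

```python
import numpy as np

def robust_lsq(A, y, k=1.345, iters=50):
    """Iteratively reweighted least squares with Huber weights: observations
    whose residuals are large relative to a robust (MAD) scale estimate are
    gradually downweighted."""
    A = np.asarray(A, dtype=float)
    y = np.asarray(y, dtype=float)
    x = np.linalg.lstsq(A, y, rcond=None)[0]             # ordinary LS start
    for _ in range(iters):
        r = y - A @ x
        s = 1.4826 * np.median(np.abs(r - np.median(r)))  # MAD scale
        if s == 0:
            s = 1.0
        u = np.abs(r) / s
        w = np.minimum(1.0, k / np.maximum(u, 1e-12))     # Huber weight function
        x = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y))
    return x

# straight line y = 1 + 2x with one gross outlier (hypothetical numbers)
xs = np.arange(6.0)
A = np.column_stack([np.ones(6), xs])
y = 1.0 + 2.0 * xs
y[-1] = 100.0
coef = robust_lsq(A, y)
```

The outlier's weight shrinks toward zero over the iterations, so the fit stays close to the uncontaminated line; note that each iteration re-solves a weighted normal-equation system, which is exactly the computational cost the abstract mentions.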
4.
P. J. G. Teunissen 《Journal of Geodesy》1997,71(9):541-551
In this contribution we analyse in a qualitative sense for the geometry-free model the dependency of the location, the size
and the shape of the ambiguity search space on different factors of the stochastic model. For this purpose a rather general
stochastic model is used. It includes time-correlation, cross-correlation, satellite elevation dependency and the use of an
a priori weighted ionospheric model, having the ionosphere-fixed model and the ionosphere-float model as special cases. It
is shown that the location is invariant for changes in the cofactor matrix of the phase observables. This also holds true
for the cofactor matrix of the code observables in the ionosphere-float case. As for time-correlation and satellite elevation
dependency, it is shown that they only affect the size of the search space, but not its shape and orientation. It is also
shown that the least-squares ambiguities, their variance matrix and its determinant, for, respectively, the ionosphere-fixed
model, the ionosphere-float model and the ionosphere-weighted model, are all related through the same scalar weighted mean,
the weight of which is governed by the variance ratio of the ionospheric delays and the code observables. A closed-form expression
is given for the area of the search space in which all contributing factors are easily recognized. From it one can infer by
how much the area gets blown up when the ionospheric spatial decorrelation increases. This multiplication factor is largest
when one switches from the ionosphere-fixed model to the ionosphere-float model, in which case it is approximately equal to
the ratio of the standard deviation of phase with that of code. The area gives an indication of the number of grid points
inside the search space.
Received: 11 November 1996 / Accepted: 21 March 1997
5.
P. J. G. Teunissen 《Journal of Geodesy》1997,71(6):320-336
The present contribution is the first of four parts. It considers the precision of the floated and the fixed baseline. A
measure is introduced for the gain in baseline precision which is experienced when the carrier phase double-differenced ambiguities
are treated as integers instead of as reals. The properties of this measure are analyzed, and it is shown by means of principal
angles how it relates to the change over time of the relative receiver-satellite geometry. We also present canonical forms
of the baseline variance matrices for different measurement scenarios. These canonical forms make the relation between the
various variance matrices transparent and thus present a simple way of studying their relative merits.
Received: 16 July 1996; Accepted: 14 November 1996
6.
Diphone-based unit selection in TTS systems [Total citations: 1 (self: 0, others: 1)]
To find a synthesis-unit scheme that better handles coarticulation within and between syllables, this paper proposes using diphones, similar to those used in English text-to-speech systems, as the synthesis units, and further adapts the diphone approach to Chinese by exploiting the fact that Mandarin contains only 410 syllables. Experimental results show that the scheme captures all the transitional acoustic features of continuous speech, making the transitions in the concatenated synthetic speech fluent and natural.
7.
Robust estimation theory for correlated geodetic observations [Total citations: 21 (self: 4, others: 21)]
Outlier diagnosis and quality control for correlated observations is one of the pressing unsolved problems in geodetic data processing. This paper studies quality-control theory and methods for correlated observations from two starting points: a variance-inflation model and a correlated-weight-element compression model. Error influence functions are given; a variance-inflation function and a weight-factor shrinkage function are constructed; and computational methods for quality control of correlated observations are discussed using equivalent covariance and equivalent weight matrices of the observations. These equivalent matrices not only preserve the symmetry of the original covariance and weight matrices but also leave the correlation structure of the original covariance matrix unchanged. Numerical results show that both the variance-inflation and the equivalent-weight methods effectively control the influence of anomalous observations on the parameter estimates.
8.
A wide-angle airborne laser ranging system has been developed for the determination of relative heights of ground-based benchmarks
in regional-scale networks (typically 100 laser reflectors spread over 100 km²). A first prototype demonstrated a 1–2 mm accuracy in radial distance measurement in a ground-based experiment in 1995. The
first aircraft experiment was conducted in 1998, over a small area (1 km²) equipped with a network of 64 benchmarks. The instrument was modified before that experiment, in order to minimize echo
superimposition due to the high density of benchmarks. New data processing algorithms have been developed, for the deconvolution
of strongly overlapped echoes and a high a priori uncertainty in the aircraft flight path, and for the estimation of benchmark
coordinates. A special methodology has been developed for the parameterization of these algorithms and of outlier detection
tests. From a total of 2×10⁴ pseudo-range measurements, acquired from two flights composed of 30 legs each, only 3×10³ remain after outlier detection. A positioning accuracy of 1.5 cm in the vertical coordinate (2.1 cm in the difference between
the two flights) has been achieved. It is shown that the errors are normally distributed, with a nearly zero mean, and are
consistent with the a posteriori uncertainty. It is also shown that the accuracy is limited mainly by the sensitivity of the
photodetector used for this experiment (due to reduced response time). Another limiting factor is the effect of aircraft attitude
changes during the measurements, which produces additional uncertainties in absolute distance measurements. It is planned
to test new photodetectors with high internal gains. These should provide, in future experiments with smaller benchmark density,
an improvement in signal-to-noise ratio of a factor of 5–10, leading to sub-centimeter vertical positioning accuracy.
Received: 19 June 2001 / Accepted: 3 January 2002
9.
One of the most serious practical limitations of boundary element methods for gravity field determination is that they cannot
make efficient use of existing satellite geopotential models. Three basic approaches to solving the problem are developed:
(1) alternative representation formulas; (2) modified kernel functions of classical representation formulas; and (3) modified
trial and test spaces. The three methods are tested and compared for the altimetry–gravimetry II boundary value problem. It
is shown that there is in fact a significant improvement when compared to the pure boundary element solution. Most promising
is the method of multiscale trial and test spaces which, in addition, yields sparse system matrices.
Received: 7 September 1998 / Accepted: 16 June 1999
10.
The least-squares ambiguity decorrelation adjustment: its performance on short GPS baselines and short observation spans [Total citations: 9 (self: 2, others: 9)]
The least-squares ambiguity decorrelation adjustment is a method for fast GPS double-difference (DD) integer ambiguity estimation.
The performance of the method will be discussed, and although it is stressed that the method is generally applicable, attention
is restricted to short-baseline applications in the present contribution. With reference to the size and shape of the ambiguity
search space, the volume of the search space will be introduced as a measure for the number of candidate grid points, and
the signature of the spectrum of conditional variances will be used to identify the difficulty one has in computing the integer
DD ambiguities. It is shown that the search for the integer least-squares ambiguities performs poorly when it takes place
in the space of original DD ambiguities. This poor performance is explained by means of the discontinuity in the spectrum
of conditional variances. It is shown that through a decorrelation of the ambiguities, transformed ambiguities are obtained
which generally have a flat and lower spectrum, thereby enabling a fast and efficient search. It is also shown how the high
precision and low correlation of the transformed ambiguities can be used to scale the search space so as to avoid an abundance
of unnecessary candidate grid points. Numerical results are presented on the spectra of conditional variances and on the statistics
of both the original and transformed ambiguities. Apart from presenting numerical results which can typically be achieved,
the contribution also emphasizes and explains the impact on the method's performance of different measurement scenarios, such
as satellite redundancy, single vs dual-frequency data, the inclusion of code data and the length of the observation time
span.
Received: 31 October 1995 / Accepted: 21 March 1997
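The decorrelation step can be illustrated on a 2×2 covariance matrix of strongly correlated DD ambiguities (illustrative numbers; the full method also reorders and conditions the ambiguities, which is not shown here):

```python
import numpy as np

def decorrelate(Q):
    """Reduce ambiguity correlation with integer (unimodular) Gauss
    transformations; Z has integer entries and |det Z| = 1, so integer
    ambiguities stay integer under z = Z a."""
    Q = np.array(Q, dtype=float)
    Z = np.eye(2)
    for _ in range(32):                       # bounded loop; breaks early
        mu = round(Q[0, 1] / Q[1, 1])         # reduce ambiguity 1 with 2
        nu = round(Q[0, 1] / Q[0, 0])         # reduce ambiguity 2 with 1
        if mu == 0 and nu == 0:
            break
        if abs(mu) >= abs(nu):
            G = np.array([[1.0, -mu], [0.0, 1.0]])
        else:
            G = np.array([[1.0, 0.0], [-nu, 1.0]])
        Q = G @ Q @ G.T
        Z = G @ Z
    return Z, Q

# strongly correlated ambiguity covariance (correlation about 0.95)
Q = np.array([[6.290, 5.978], [5.978, 6.292]])
Z, Qz = decorrelate(Q)
```

On this example the correlation coefficient drops from about 0.95 to below 0.2 while the determinant (and hence the search-space volume) is preserved, which illustrates the flattening of the conditional-variance spectrum the abstract describes.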
11.
Fast integer least-squares estimation for GNSS high-dimensional ambiguity resolution using lattice theory [Total citations: 4 (self: 0, others: 4)]
GNSS ambiguity resolution is the key issue in high-precision relative geodetic positioning and navigation applications.
It is a problem of integer programming plus integer quality evaluation. Different integer search estimation methods have been
proposed for the integer solution of ambiguity resolution, but a slow rate of convergence is the main obstacle for the existing
methods when tens of ambiguities are involved. Here, integer search estimation for GNSS ambiguity resolution based on lattice
theory is proposed. It is shown mathematically that the closest-lattice-point problem is equivalent to the integer least-squares
(ILS) estimation problem and that lattice reduction speeds up the search process. Three integer search strategies have been
implemented: Agrell–Eriksson–Vardy–Zeger (AEVZ), a modification of Schnorr–Euchner enumeration (M-SE) and a modification of
Viterbo–Boutros enumeration (M-VB). The methods have been evaluated numerically in several simulated examples under different
scenarios, with over 100 independent runs each. In all simulations, a decorrelation step (a unimodular transformation) is first
used to transform the original ILS problem into a new one, and the different search algorithms are then applied to the transformed
problem. The numerical simulations show that AEVZ, M-SE and M-VB are about 320, 120 and 50 times faster than LAMBDA, respectively,
for a search space of dimension 40; these factors grow to about 350, 160 and 60 for dimension 45. AEVZ is also shown to be faster
than MLAMBDA by a factor of 5. Similar conclusions hold when the proposed algorithms are applied to real GPS data.
12.
The Nature and Classification of Unlabelled Neurons in the Use of Kohonen's Self-Organizing Map for Supervised Classification [Total citations: 2 (self: 0, others: 2)]
Kohonen's Self-Organizing Map (SOM) is a neural network procedure in which a layer of neurons is initialized with random weights and subsequently organized by inspection of the data to be analyzed. The organization procedure uses progressive adjustment of weights based on data characteristics and lateral interaction, such that neurons with similar weights tend to cluster spatially in the neuron layer. When the SOM is used for supervised classification, a majority-voting technique is usually used to associate these neurons with training-data classes. This technique, however, cannot guarantee that every neuron in the output layer will be labelled, and thus causes unclassified pixels in the final map. This problem is similar to, but fundamentally different from, the problem of dead units that arises in unsupervised SOM classification (neurons which are never organized by the input data). In this paper we specifically address the problem and nature of unlabelled neurons in the use of the SOM for supervised classification. Through a case study it is shown that unlabelled neurons are associated with unknown image classes and, most particularly, with mixed pixels. It is also shown that an auxiliary algorithm proposed here for assigning classes to unlabelled neurons performs as well as Maximum Likelihood classification.
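The majority-voting step, and how unlabelled neurons arise from it, can be sketched as follows (hypothetical best-matching-unit assignments and class names; the SOM training itself is not shown):

```python
from collections import Counter

def label_neurons(bmu_of_sample, class_of_sample, n_neurons):
    """Majority-vote labelling of SOM neurons from training data.
    Neurons that are never the best-matching unit (BMU) of any training
    sample stay unlabelled (None), which is the problem the abstract
    discusses."""
    votes = [Counter() for _ in range(n_neurons)]
    for bmu, cls in zip(bmu_of_sample, class_of_sample):
        votes[bmu][cls] += 1
    return [v.most_common(1)[0][0] if v else None for v in votes]
```

Any pixel whose BMU carries the label `None` ends up unclassified in the final map, regardless of how well the SOM itself was organized.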
13.
Four integral-based methods for the inversion of gravity disturbances, derived from airborne gravity measurements, into the disturbing potential on the Bjerhammar sphere and the Earth's surface are investigated and compared with least-squares (LS) collocation. The performance of the methods is numerically investigated using noise-free and noisy observations, which have been generated using a synthetic gravity field model. It is found that advanced interpolation of gravity disturbances at the nodes of higher-order numerical integration formulas significantly improves the performance of the integral-based methods. This is preferable to the commonly used one-point composed Newton–Cotes integration formulas, which intrinsically imply a piecewise constant interpolation over a patch centered at the observation point. It is shown that the investigated methods behave similarly for noise-free observations, but differently for noisy observations. The best results in terms of root-mean-square (RMS) height-anomaly errors are obtained when the gravity disturbances are first downward continued (inverse Poisson integral) and then transformed into potential values (Hotine integral). The latter has a strong smoothing effect, which damps high-frequency errors inherent in the downward-continued gravity disturbances. An integral method based on the single-layer representation of the disturbing potential shows a similar performance. This representation has the advantage that it can be used directly on surfaces with non-spherical geometry, whereas classical integral-based methods require an additional step if gravity field functionals have to be computed on non-spherical geometries. It is shown that defining the single-layer density on the Bjerhammar sphere gives results with the same quality as obtained when using the Earth's topography as support for the single-layer density.
A comparison of the four integral-based methods with LS collocation shows that the latter method performs slightly better in terms of RMS height-anomaly errors.
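For reference, the two integrals of the preferred two-step method have standard spherical forms (textbook expressions; the paper's discretization and interpolation refinements are not shown):

```latex
% Poisson integral for the harmonic function r\,\delta g (inverted
% numerically for downward continuation); l is the distance between
% the computation point at radius r and the moving point on the
% sphere of radius R
\delta g(r,\Omega) = \frac{R^{2}\,(r^{2}-R^{2})}{4\pi r}
   \iint_{\sigma}\frac{\delta g(R,\Omega')}{l^{3}}\,d\sigma'

% Hotine integral: disturbing potential from gravity disturbances
T(\Omega) = \frac{R}{4\pi}\iint_{\sigma} H(\psi)\,\delta g(R,\Omega')\,d\sigma',
\qquad
H(\psi) = \frac{1}{\sin(\psi/2)} - \ln\!\left(1+\frac{1}{\sin(\psi/2)}\right)
```

The 1/sin(ψ/2) singularity of the Hotine kernel at ψ = 0 concentrates its weight near the computation point, which is also the source of the smoothing behaviour the abstract mentions.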
14.
15.
Sylvie Le Hégarat-Mascle, Cyrille André, ISPRS Journal of Photogrammetry and Remote Sensing (2009) 64(4): 351-366
In this study, we propose an automatic detection algorithm for clouds and shadows in remote sensing optical images. It is based on physical properties of clouds and shadows, namely, for a cloud and its associated shadow: both are connected objects of similar shape and area, and they are related by their relative locations. We show that these properties can be formalized in a Markov Random Field (MRF) framework at two levels: one MRF over the pixel graph to model connectedness, and one MRF over the graph of objects (clouds and shadows) to model their relationships. We then show that, after an image pre-processing step (channel inter-calibration) specific to cloud detection, local optimization of the proposed MRF models leads to a rather simple image-processing algorithm involving only six parameters. Using a 39-image database, performance is evaluated and discussed, in particular in comparison with the Marked Point Process approach.
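The connected-object notion underlying the pixel-level MRF can be sketched with plain 4-connected component labelling (the MRF optimization and the cloud-shadow pairing themselves are not reproduced):

```python
from collections import deque

def connected_components(mask):
    """Label 4-connected components in a binary mask, as a first step
    toward treating each cloud or shadow as one object."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not labels[i][j]:
                count += 1                     # new object found
                labels[i][j] = count
                q = deque([(i, j)])
                while q:                       # flood fill by BFS
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = count
                            q.append((ny, nx))
    return labels, count
```

Once each cloud and shadow mask is reduced to labelled objects, their areas, shapes and relative positions can be compared, which is the information the object-level MRF in the abstract operates on.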
16.
Reliability analysis is inseparably connected with the formulation of failure scenarios, and common test statistics are based on specific assumptions. This is easily overlooked when processing observation differences. Poor failure identification performance and misleading pre-analysis results, mainly meaningless minimum detectable biases and external reliability measures, are the consequence. A reasonable failure scenario for use with differenced GNSS observations is formulated which takes into account that individual outliers in the original data affect more than one processed observation. The proper test statistics and reliability indicators are given for use with correlated observations and both batch processing and Kalman filtering. It is also shown that standardized residuals and redundancy numbers fail completely when used with double-differenced observations.
Andreas Wieser. Phone: +43-316-8736323; Fax: +43-316-8736820
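The redundancy numbers that such reliability measures build on can be computed for correlated observations as the diagonal of Q_v Q⁻¹, where Q_v is the residual covariance (a generic sketch using standard adjustment formulas; the paper's point is precisely that applying these naively to differenced observations is misleading):

```python
import numpy as np

def redundancy_numbers(A, Q):
    """Local redundancy numbers r_i = (Qv Q^-1)_ii for design matrix A and
    observation covariance Q; they sum to the total redundancy n - u."""
    Qinv = np.linalg.inv(Q)
    N = A.T @ Qinv @ A                        # normal matrix
    Qv = Q - A @ np.linalg.inv(N) @ A.T       # covariance of the residuals
    return np.diag(Qv @ Qinv)

# toy example: four uncorrelated observations of a single parameter
A = np.ones((4, 1))
r = redundancy_numbers(A, np.eye(4))
```

Here each observation carries redundancy 3/4 and the numbers sum to n - u = 3; with a covariance matrix of differenced observations the same formulas still run, but, as the abstract stresses, the resulting numbers no longer describe single-observation failures in the original data.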
17.
The global positioning system (GPS) model is distinctive in the way that the unknown parameters are not only real-valued,
the baseline coordinates, but also integers, the phase ambiguities. The GPS model therefore leads to a mixed integer–real-valued
estimation problem. Common solutions are the float solution, which ignores the ambiguities being integers, or the fixed solution,
where the ambiguities are estimated as integers and then fixed. Confidence regions, so-called HPD (highest posterior density)
regions, for the GPS baselines are derived by Bayesian statistics. They account for the integer character of the phase ambiguities
but still treat them as unknown parameters. Estimating these confidence regions leads to a numerical integration problem
which is solved by Monte Carlo methods. This is computationally expensive so that approximations of the confidence regions
are also developed. In an example it is shown that for a high confidence level the confidence region consists of more than
one region.
Received: 1 February 2001 / Accepted: 18 July 2001
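A grid-based sketch of an HPD region for a bimodal baseline posterior (entirely hypothetical numbers: a mixture of two Gaussians standing in for two candidate integer ambiguity fixes) shows how the region can split into more than one piece at a high confidence level, as in the paper's example:

```python
import math

def hpd_intervals(density, xs, level=0.95):
    """Grid-based HPD region: keep the highest-density cells until 'level'
    of the total mass is covered, then merge them into contiguous
    intervals."""
    dx = xs[1] - xs[0]
    p = [density(x) for x in xs]
    total = sum(p) * dx
    order = sorted(range(len(xs)), key=lambda i: -p[i])
    mass, keep = 0.0, set()
    for i in order:
        keep.add(i)
        mass += p[i] * dx / total
        if mass >= level:
            break
    intervals, start = [], None
    for i in range(len(xs)):
        if i in keep and start is None:
            start = xs[i]
        elif i not in keep and start is not None:
            intervals.append((start, xs[i - 1]))
            start = None
    if start is not None:
        intervals.append((start, xs[-1]))
    return intervals

def posterior(b):
    """Hypothetical bimodal baseline posterior: two integer candidates map
    to baseline values 0.0 and 1.0 with weights 0.6 and 0.4."""
    g = lambda m: math.exp(-0.5 * ((b - m) / 0.05) ** 2)
    return 0.6 * g(0.0) + 0.4 * g(1.0)

xs = [-0.5 + i * 0.002 for i in range(1001)]
regions = hpd_intervals(posterior, xs)
```

At the 95% level neither mode alone carries enough probability mass, so the HPD region consists of two disjoint intervals, one around each candidate baseline.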
18.
A general formula for maximum likelihood estimation of variance-covariance components [Total citations: 6 (self: 1, others: 6)]
Starting from a generalized adjustment function model, a general formula for variance component estimation, applicable to all adjustment function models, is derived under the maximum likelihood estimation principle; the two formulas derived by K. Kubik and K. R. Koch are both special cases of it.
19.
Javad Saberian, Mohammad Reza Malek, Stephan Winter, Majid Hamrah, Transactions in GIS (2014) 18(5): 767-782
In this article we define inverse line graphs of directed graphs as a new framework for solving some classical network analysis problems. The extraction method and the underlying theory of inverse line graphs are explained. It is shown that by changing the analysis space from the original directed graph to the inverse line graph, complex problems can be turned into simpler ones. We demonstrate the usefulness of the proposed framework in two particular applications: shortest-path computation and the more general route planning. Considering the implementation results, we expect that this framework can be used in many more network analysis problems.
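The forward line-graph construction that the inverse operation undoes can be written in a few lines (a sketch of the standard definition only; the article's inverse extraction method is not reproduced):

```python
def line_graph(edges):
    """Directed line graph L(G): each arc (u, v) of G becomes a node, and
    there is an arc from (u, v) to (v, w) whenever the head of the first
    matches the tail of the second."""
    nodes = list(edges)
    arcs = [(e1, e2) for e1 in edges for e2 in edges if e1[1] == e2[0]]
    return nodes, arcs

# small road-network-like example: arcs 0 -> 1, 1 -> 2, 1 -> 3
nodes, arcs = line_graph([(0, 1), (1, 2), (1, 3)])
```

Costs attached to arc-to-arc transitions in G (turn penalties, for instance) become ordinary costs on the nodes and arcs of L(G), which is the kind of simplification that changing the analysis space makes possible.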
20.
Klaus-Peter Schwarz, Journal of Geodesy (1974) 48(2): 171-186
When combining satellite and terrestrial networks, covariance matrices are used which have been estimated from previous data.
It can be shown that the least-squares estimator of the unknown parameters using such an estimated covariance matrix is not
necessarily the best. There are a number of cases where a more efficient estimator can be obtained in a different way. The
problem occurs frequently in geodesy, since in least-squares adjustment of correlated observations estimated covariance matrices
are often used.
If the general structure of the covariance matrix is known, results can often be improved by a method called covariance adjustment.
The statistical model used in least-squares collocation leads to a type of covariance matrix which fits into this framework.
It is shown in which way improvements can be made using a modified approach of principal component analysis.
As a numerical example the combination of a satellite and a terrestrial network has been computed with varying assumptions
on the covariance matrix. It is shown which types of matrices are critical and where the usual least-squares approach can
be applied without hesitation. Finally, a simplified representation of covariances for spatial networks by means of a suitable
covariance function is suggested.
Paper presented at the International Symposium on Computational Methods in Geometrical Geodesy, Oxford, 2–8 September 1973.