Similar Documents
20 similar documents retrieved.
1.
The multivariate total least-squares (MTLS) approach aims at estimating a matrix of parameters, Ξ, from a linear model (Y − E_Y = (X − E_X) · Ξ) that includes an observation matrix, Y, another observation matrix, X, and matrices of randomly distributed errors, E_Y and E_X. Two special cases of the MTLS approach are the standard multivariate least-squares approach, where only the observation matrix Y is perturbed by random errors, and the data least-squares approach, where only the coefficient matrix X is affected by random errors. In a previous contribution, the authors derived an iterative algorithm to solve the MTLS problem by using the nonlinear Euler–Lagrange conditions. In this contribution, new lemmas are developed to analyze the iterative algorithm, modify it, and compare it with a new 'closed form' solution that is based on the singular-value decomposition. As an application, the total least-squares approach is used to estimate the affine transformation parameters that convert cadastral data from the old to the new Israeli datum. Technical aspects of this approach, such as scaling the data and fixing the columns in the coefficient matrix, are investigated. This case study illuminates the issue of "symmetry" in the treatment of two sets of coordinates for identical point fields, a topic already emphasized by Teunissen (1989, Festschrift to Torben Krarup, Geodetic Institute Bull no. 58, Copenhagen, Denmark, pp 335–342). The differences between the standard least-squares and the TLS approach are analyzed in terms of the estimated variance component and a first-order approximation of the dispersion matrix of the estimated parameters.
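A minimal sketch of such an SVD-based closed-form TLS solution (the generic textbook form in NumPy; the sizes, variable names, and noise model are illustrative assumptions, not the authors' algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of the generic SVD closed form for multivariate TLS:
# stack C = [X  Y], take the SVD, and read Xi off the last d right
# singular vectors. Sizes and noise level are illustrative only.
m, n, d = 50, 3, 2                       # observations, columns of X, columns of Y
X = rng.standard_normal((m, n))
Xi_true = rng.standard_normal((n, d))
Y = X @ Xi_true + 0.01 * rng.standard_normal((m, d))

C = np.hstack([X, Y])                    # augmented data matrix [X  Y]
V = np.linalg.svd(C)[2].T                # right singular vectors as columns
V12, V22 = V[:n, n:], V[n:, n:]          # blocks of the d trailing singular vectors
Xi_tls = -V12 @ np.linalg.inv(V22)       # closed-form MTLS estimate
```

With small, well-conditioned noise the estimate lands close to the generating parameters; the same partitioning reduces to the familiar single-column TLS solution when d = 1.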

2.
Summary  The system of normal equations for the adjustment of a free network is singular, so a number of coordinates has to be fixed to remove the rank defect of the matrix. The mean square errors and the error ellipses of such an adjustment depend on this choice. This paper gives a simple, direct method for the adjustment of free networks in which no coordinates need to be fixed. This is done by minimizing not only the sum of the squares of the weighted errors, V^T P V = minimum, but also the Euclidean norm of the vector X, X^T X = minimum, and the trace of the covariance matrix Q, trace(Q) = minimum. This last condition is crucial for geodetic problems of this type.
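The minimum-norm idea can be illustrated with the pseudoinverse, which returns exactly the least-squares solution of smallest Euclidean norm for a singular system (a toy NumPy sketch with unit weights, P = I; the numbers are invented):

```python
import numpy as np

# A rank-deficient "levelling-like" design: only differences between the
# three unknowns are observable, so the normal equations A^T A x = A^T l
# are singular and any common shift c*[1,1,1] leaves the fit unchanged.
A = np.array([[ 1.0, -1.0,  0.0],
              [ 0.0,  1.0, -1.0],
              [ 1.0,  0.0, -1.0]])
l = np.array([0.9, 1.1, 2.0])

x = np.linalg.pinv(A) @ l   # least-squares solution with minimum x^T x
# The pseudoinverse solution is orthogonal to the null space [1,1,1],
# so no coordinate has to be held fixed and sum(x) = 0 here.
```

Any other least-squares solution differs from this one only by a vector in the null space, which is exactly the datum freedom of the free network.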

3.
A general formula giving Molodenskii coefficients Q_n of the truncation errors for the geoidal height is introduced in this paper. A relation between Q_n and q_n, Cook's truncation function, is also obtained. Cook (1951) treated the truncation errors for the deflection of the vertical in the Vening Meinesz integration. Molodenskii et al. (1962) also derived truncation error formulas for the deflection of the vertical. It is proved in this paper that these two formulas are equivalent.

4.
In November 1968, a marine geodetic control point was established in the Pacific Ocean at a water depth of 6,200 feet. The control point (reference point) consists of three underwater acoustic transponders, two of which are powered with lead-acid batteries and the third with an underwater radioisotope power source ("URIPS") with a 10- to 20-year life expectancy. Four independent measuring techniques (LORAC airborne line-crossing, satellite, ship inertial, and acoustic techniques) were used to measure and determine the coordinates of the control point. Preliminary analysis of the acoustic and airborne data indicates that high accuracies can be achieved in the establishment of geodetic reference points at sea. Geodetic adjustment by the method of variation of coordinates yielded a standard point error of ±50 to ±66 feet in determining the unknown ship station. The original location of the ship station as determined by shipboard navigation equipment was off by about 1,600 feet. Paper previously published in the Proceedings of the Second Marine Geodesy Symposium of the Marine Technology Society.

5.
Summary  The probability of finding an error vector within multiples of the Helmert–Maxwell–Boltzmann point error σ²δ_ij (δ_ij: Kronecker symbol) is calculated. The probability is found to be 39% for σ, 86% for 2σ and 99% for 3σ in two dimensions, and 20% for σ, 74% for 2σ and 97% for 3σ in three dimensions. The fundamental Maxwell–Boltzmann distribution is tabulated from 0.02 to 4.50 in steps of 0.02.
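These percentages follow directly from the Rayleigh (two-dimensional) and Maxwell–Boltzmann (three-dimensional) cumulative distributions for a circular/spherical normal error; a short verification sketch:

```python
import math

# P(|error vector| < k*sigma) for an isotropic normal error distribution.
def p2(k):
    # two dimensions: Rayleigh CDF
    return 1.0 - math.exp(-k * k / 2.0)

def p3(k):
    # three dimensions: Maxwell-Boltzmann CDF
    return math.erf(k / math.sqrt(2.0)) - math.sqrt(2.0 / math.pi) * k * math.exp(-k * k / 2.0)

for k in (1, 2, 3):
    print(k, round(100 * p2(k)), round(100 * p3(k)))
# -> 39/20 for sigma, 86/74 for 2*sigma, 99/97 for 3*sigma, as quoted
```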

6.
From periodic variations of the orbital inclinations of three artificial satellites, 1959 Alpha 1, 1960 Iota 2, and 1962 Beta Mu 1, Love's number of the earth and the time lag of the bodily tide due to friction are determined as 0.29±0.03 and (10±5) minutes in time, respectively. While the previous paper on the determination of Love's number of the earth (Kozai, 1967) was in press, a minor error was discovered in the Differential Orbit Improvement program (DOI) of the Smithsonian Astrophysical Observatory (SAO). Since the analysis was based on time-variations of the orbital inclinations which were derived by the DOI from precisely reduced Baker-Nunn observations, it is likely that the results in the previous paper were affected by the error in the DOI. Therefore, the analysis is repeated using the revised DOI. Three satellites, 1959 Alpha 1 (Vanguard 2), 1960 Iota 2 (rocket of Echo 1), and 1962 Beta Mu 1 (Anna) (see Table 1), are adopted for determining Love's number in the present paper. The satellite 1959 Eta, which was used in the previous paper, is not adopted here, since its inclination shows unexplained irregular variations. Instead of 1959 Eta, 1962 Beta Mu 1 is adopted, since orbital elements from precisely reduced Baker-Nunn observations have become available for a long interval of time for this satellite.

7.
A set of 2261 5°×5° mean anomalies was used, alone and with satellite-determined harmonic coefficients of the Smithsonian Institution, to determine the geopotential expansion to various degrees. The basic adjustment was carried out by comparing a terrestrial anomaly to an anomaly determined from an assumed set of coefficients. The (14, 14) solution was found to agree within ±3 m with a detailed geoid in the United States computed using 1°×1° anomalies for an inner area and satellite-determined anomalies in an outer area. Additional comparisons were made to the input anomaly field to assess the accuracy of various harmonic coefficient solutions. A by-product of this investigation was a new γ_E = 978.0463 gals in the Potsdam system, or 978.0326 gals in an absolute system if −13.7 mgals is taken as the Potsdam correction. Combining this value of γ_E with f = 1/298.25 and KM = 3.9860122·10²² cm³/sec², the consistent equatorial radius was found to be 6,378,143 m.

8.
Summary  Within the potential theory of the Poisson–Laplace equation, the boundary value problem of physical geodesy is classified as free and nonlinear. For solving this typical nonlinear boundary value problem, four different types of nonlinear integral equations corresponding to singular density distributions within single and double layers are presented. The characteristic problem of free boundaries, the problem of free surface integrals, is exactly solved by metric continuation. Even in the linear approximation of the fundamental relations of physical geodesy, the basic integral equations become nonlinear because of the special features of free surface integrals.

9.
In order to achieve GPS solutions of first-order accuracy and integrity, carrier phase observations as well as pseudorange observations have to be adjusted with respect to a linear/linearized model. Here the problem of mixed integer-real valued parameter adjustment (IRA) is met. Indeed, integer cycle ambiguity unknowns have to be estimated and tested. We first review the three concepts for dealing with IRA: (i) DDD or triple difference observations are produced by a properly chosen difference operator and choice of basis, and are thereby free of integer-valued unknowns; (ii) the real-valued unknown parameters are eliminated by a Gauss elimination step, while the remaining integer-valued unknown parameters (initial cycle ambiguities) are determined by quadratic programming; and (iii) an RA substitute model is implemented first (real-valued estimates of initial cycle ambiguities), and then a minimum distance map is designed which maps the real-valued approximation of the integers onto the integer data in a lattice. This is where the integer Gram-Schmidt orthogonalization by means of the LLL algorithm (modified LLL algorithm) is applied, illustrated by four examples. In particular, we prove that in general it is impossible to transform an oblique base of a lattice to an orthogonal base by Gram-Schmidt orthogonalization whose matrix entries are integers. The volume-preserving Gram-Schmidt orthogonalization operator constrained to integer entries produces "almost orthogonal" bases which, in turn, can be used to produce the integer-valued unknown parameters (initial cycle ambiguities) from the LLL algorithm (modified LLL algorithm). Systematic errors generated by "almost orthogonal" lattice bases are quantified by A. K. Lenstra et al. (1982) as well as M. Pohst (1987).
The solution point of integer least squares generated by the LLL algorithm is ẑ = (L′)⁻¹[L′x̂] ∈ ℤᵐ, where L is the lower triangular Gram-Schmidt matrix rounded to nearest integers, [L], and [L′x̂] are the nearest integers of L′x̂, x̂ being the real-valued approximation of z ∈ ℤᵐ, the m-dimensional lattice space Λ. Indeed, due to the "almost orthogonality" of the integer Gram-Schmidt procedure, the solution point is only suboptimal, only close to "least squares." © 2000 John Wiley & Sons, Inc.
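The final rounding step of such a minimum-distance map can be sketched in miniature with Babai-style rounding in a given lattice basis (the basis, noise, and names below are invented for illustration; this is not the paper's LLL pipeline):

```python
import numpy as np

# With a (near-)orthogonal lattice basis B, rounding the real-valued
# estimate expressed in basis coordinates recovers the integer vector;
# with a very oblique basis this simple rounding can fail, which is why
# an LLL-reduced ("almost orthogonal") basis is computed first.
B = np.array([[1.0, 0.2],
              [0.0, 1.0]])                    # near-orthogonal basis (columns)
z_true = np.array([3, -2])                    # hidden integer ambiguities
x_hat = B @ z_true + np.array([0.05, -0.04])  # noisy real-valued float solution

z = np.rint(np.linalg.solve(B, x_hat)).astype(int)  # round in basis coordinates
```

Here the rounded solution coincides with the true integers; the quality of the reduced basis controls how large the noise may be before rounding becomes suboptimal.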

10.
Abstract

A version of the Presidential Address given at the British Cartographic Society Annual Symposium in Manchester, September 2006.

11.
Summary  The discrepancy between precision and accuracy in astronomical determinations is usually explained in two ways: on the one hand by ostensibly large refraction anomalies, and on the other hand by variable instrumental errors which are systematic over a certain interval of time and are mainly influenced by temperature. In view of the research of several other workers and the authors' own investigations, the authors are of the opinion that the large night-errors of astronomical determinations are caused by variable, systematic instrumental errors dependent on temperature. The influence of refraction anomalies is estimated to be smaller than 0″.1 for most of the field stations. The possibility of determining the anomalous refraction from observations by the programme given by Prof. Pavlov and Anderson has also been investigated. The precision of the determination of the anomalous refraction is good as long as no other systematic error acting in a similar way is present. The results, which are interpreted as an effect of the anomalous refraction by Pavlov and Sergijenko, could also be interpreted as a systematic instrumental error. It is furthermore maintained that the latitude and longitude of a field station can be determined in a few hours of one night if the premisses given in [3, p. 68] are kept. It is deplored that the determination of the azimuth has not been given the necessary attention; it is therefore proposed to intensify research on this problem. The profession is called upon to acquaint itself better with the valuable possibilities of astronomical determinations and to apply them in a useful and appropriate manner. At the same time, attention is called to the possibility of improving astronomical determinations with regard to accuracy as well as effectiveness.

12.
Abstract

A cartographic symposium held at Ulm in November 1982 was stimulated by the 500th anniversary of the printing of Ptolemy's 'Geographica' in that city, and was attended by an international gathering of historians of cartography.

13.
Survey Review, 2013, 45(83): 224–230
Abstract

Mr. A. J. Morley has contributed a series of articles to the Review (E.S.R., iv, 23, 16; iv, 25, 136; and vi, 40, 76) on the adjustment of trigonometrical levels and the evaluation of the coefficient of terrestrial refraction, with a view to ascertaining how other Colonies and Dominions deal with these problems. This object is very commendable, as several problems concerning both the observational and theoretical sides arise in height determinations, regarding which there is not much guidance in the usual treatises on the subject.

14.
The development of lasers, new electro-optic light modulation methods, and improved electronic techniques have made possible significant improvements in the range and accuracy of optical distance measurements, thus providing not only improved geodetic tools but also useful techniques for the study of other geophysical, meteorological, and astronomical problems. One of the main limitations at present to the accuracy of geodetic measurements is the uncertainty in the average propagation velocity of the radiation due to inhomogeneity of the atmosphere. Accuracies of a few parts in ten million or even better now appear feasible, however, through the use of the dispersion method, in which simultaneous measurements of optical path length at two widely separated wavelengths are used to determine the average refractive index over the path and hence the true geodetic distance. The design of a new instrument based on this method, which utilizes wavelengths of 6328 Å and 3681 Å and 3 GHz polarization modulation of the light, is summarized. Preliminary measurements over a 5.3 km path with this instrument have demonstrated a sensitivity of 3×10⁻⁹ in detecting changes in optical path length for either wavelength using 1-second averaging, and a standard deviation of 3×10⁻⁷ in corrected length. The principal remaining sources of error are summarized, as is progress in other laboratories using the dispersion method or other approaches to the problem of refractivity correction.
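The dispersion principle reduces to simple algebra: each measured optical path is L_i = n_i·D, so differencing the two paths isolates the refractivity and gives D = L₁ − (n₁ − 1)(L₁ − L₂)/(n₁ − n₂). A sketch with invented round-number refractivities (not the instrument's actual constants):

```python
# Two-colour dispersion correction in miniature. The group refractive
# indices below are invented illustrative values; in practice the
# dispersion ratio (n1 - 1)/(n1 - n2) is nearly independent of
# atmospheric conditions, which is what makes the method work.
n1 = 1.0002830          # assumed group index at the red wavelength
n2 = 1.0003050          # assumed group index at the UV wavelength
D_true = 5300.0         # true geometric distance in metres

L1 = n1 * D_true        # measured optical path lengths at each wavelength
L2 = n2 * D_true

D = L1 - (n1 - 1.0) * (L1 - L2) / (n1 - n2)   # refraction-corrected distance
```

Substituting L_i = n_i·D_true shows the correction is exact: L₁ minus (n₁ − 1)·D_true leaves D_true, independently of which wavelength has the larger index.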

15.
An investigation was made of the behaviour of the variable x_ij (where the ρ_ij are the discrepancies between the direct and reverse measurements of the height of consecutive bench marks and the R_ij are their distances apart) in a partial net of the Italian high-precision levelling with a total length of about 1,400 km. The methods of analysis employed were in general non-parametric individual and cumulative tests; in particular, randomness, normality and asymmetry tests were carried out. The computers employed were IBM 7094/7040. From the results, evidence was obtained of the existence of an asymmetry with respect to zero of the x_ij, confirming the well-known results first given by Lallemand. A new result was obtained from the tests of randomness, which revealed trends in the mean values of the x_ij and explained some anomalous behaviours of the cumulative discrepancy curves. The extension of this investigation to a broader net, possibly covering other national nets, would be very useful to gain deeper insight into the behaviour of the errors in high-precision levelling. Ad hoc programs for electronic computers are available to accomplish this job quickly. Presented at the 14th International Assembly of Geodesy (Lucerne, 1967).
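One such non-parametric check, a sign test for asymmetry about zero, can be sketched on simulated discrepancies (the data are random, and the ρ/√R normalization used here is a common convention assumed for this sketch, not quoted from the paper):

```python
import math
import random

random.seed(42)

# Simulated section discrepancies rho (with a small positive bias) and
# section lengths R; dividing by sqrt(R) makes sections of different
# length comparable, since random levelling error grows with distance.
rho = [random.gauss(0.3, 1.0) for _ in range(500)]
R = [random.uniform(1.0, 4.0) for _ in range(500)]
x = [r / math.sqrt(d) for r, d in zip(rho, R)]

# Sign test: under symmetry about zero, the number of positive x is
# Binomial(n, 1/2); a large standardized excess signals asymmetry.
n = len(x)
n_pos = sum(1 for v in x if v > 0)
z_stat = (n_pos - n / 2.0) / math.sqrt(n / 4.0)
```

With the simulated bias the standardized excess comes out large, rejecting symmetry, which is the qualitative pattern Lallemand's results describe.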

16.
Abstract

From 1966 to 2006, digital map generalization underwent forty years of development. This paper examines those first forty years and offers an outlook; the emphasis is on theoretical and technical developments.

17.
Abstract

This paper describes the range of topographic maps produced by the Danish Geodetic Institute and some of the production methods, together with plans for the future.

18.
Crustal data of surface elevations and depths of Moho (and densities) can be utilized to form model-earth anomalies. These model-anomalies can closely approximate the free-air anomaly field of the earth, and could thus be used to predict the latter. A review of several such models is presented, with some elaboration on model developments, procedures, data analysis and accuracies. One of the models approaches a prediction accuracy of ±10 mgal for 5°×5° mean free-air anomalies, whose r.m.s. value was about 30% higher.

19.
The Cartographic Journal, 2013, 50(4): 342–350
Abstract

This overview of the history of jigsaw puzzles highlights their connection with maps and geographical education. The illustrations have been selected from the 125 slides which originally accompanied a talk presented at the BCS Symposium in Manchester, September 2006.

20.
Abstract

The alphabet is the best tool that humankind has for storing thoughts, ideas and instructions until they can be employed, acted upon or communicated to others. This paper presents a necessarily brief and selective summary of the development of the alphabet in Europe from pre-Christian days to the digital/laser technology in use today.
