20 similar records found (search time: 31 ms)
1.
Warren G. Heller 《Journal of Geodesy》1981,55(4):354-369
The state of current and proposed moving-base gravity gradiometer instruments is briefly reviewed. The review perspective is directed toward their deployment as a source of additional gravimetric data during inertial surveys. In such gradiometer-aided surveys, the additional gravity gradient information could be used to:
- Improve surveyed gravity vector accuracy
- Extend the interval between zero velocity update stops
- Accomplish varying combinations of the above.
2.
A sequential adjustment procedure is proposed for the direct estimation of point velocities in deformation analysis networks. At any intermediate stage of the adjustment, the up-to-date covariance matrix of those velocities tells the evolving story of the network in terms of solvability and reliability. A pre-zero-epoch covariance matrix is utilized for a smooth and flexible treatment of two characteristic problems of deformation analysis:
- high turnover of points in the network,
- processing of variable and generally incomplete observational batches.
A small numerical example is presented at the end as an illustration.
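The sequential procedure summarized above can be illustrated with a Kalman-type update of an estimate and its covariance as each observation batch arrives; a diffuse prior plays the role of the pre-zero-epoch covariance matrix. A minimal sketch with illustrative numbers and variable names (not taken from the paper):

```python
import numpy as np

def sequential_update(x, Q, A, l, R):
    """One sequential least-squares update step.
    x, Q : current estimate and its covariance
    A, l, R : design matrix, observation batch, observation covariance
    Returns the updated estimate and covariance."""
    S = A @ Q @ A.T + R              # innovation covariance
    K = Q @ A.T @ np.linalg.inv(S)   # gain
    x_new = x + K @ (l - A @ x)
    Q_new = (np.eye(len(x)) - K @ A) @ Q
    return x_new, Q_new

# toy example: estimate a scalar velocity from two observation batches
x = np.array([0.0])
Q = np.array([[100.0]])              # diffuse "pre-zero-epoch" covariance
for obs in (1.9, 2.1):
    x, Q = sequential_update(x, Q, np.array([[1.0]]),
                             np.array([obs]), np.array([[0.5]]))
```

After each batch the covariance Q shrinks, which is exactly the "evolving story" of solvability and reliability the abstract refers to.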
3.
J. J. Levallois 《Journal of Geodesy》1983,57(1-4):312-331
The French astronomer Jean Picard (1620–1682) was certainly one of the leading scientists of his time. A friend of Huygens, Hevelius and Oldenburg, the master of Römer, and an indefatigable traveller, he played a very important part in the development of positional astronomy and geodesy.
- He was the first to have the idea of comparing length units to a reproducible physical quantity, namely the length of the one-second pendulum at Paris, and he measured that length.
- He conceived the first cross-wire telescopes and adapted them to geodetic and astronomical instruments of his own design, used for a century until about 1780.
- He obtained the first really reliable value of the Earth's radius from his famous measurement of the meridional arc Paris–Amiens, which became the original cell of the French triangulations.
The following article is devoted to a recomputation and evaluation of the accuracy of that work, as compared with later operations, and concludes that this achievement gave the necessary impulse to the development of geodesy in France and probably abroad.
4.
R. R. Navalgund, V. Jayaraman, A. S. Kiran Kumar, Tara Sharma, Kurien Mathews, K. K. Mohanty, V. K. Dadhwal, M. B. Potdar, T. P. Singh, R. Ghosh, V. Tamilarasan, T. T. Medhavy 《Journal of the Indian Society of Remote Sensing》1996,24(4):207-237
Although data from various earth observation systems have been routinely used in many resource applications, there have been gaps, and the data needs of applications at different levels of detail have not been met. There is a growing demand for data at higher repeat frequency, at higher spatial resolution, and in more and narrower spectral bands. Some of the thrust areas of application, particularly in the Indian context, are:
- Management of natural resources to ensure a sustainable increase in agricultural production;
- Studying the state of the environment, monitoring it, and assessing the impact of various development actions on the environment;
- Updating and generation of large-scale topographical maps;
- Exploration and exploitation of marine and mineral resources; and
- Operational meteorology and studying various land and oceanic processes to understand and predict global climate change.
To meet these needs, the following observation capabilities are identified:
- Moderate spatial resolution (150–300 m), high repeat frequency (2 days), a minimum set of spectral bands (VIS, NIR, MIR, TIR), full coverage.
- Moderate to high spatial resolution (20–40 m), high repeat frequency (4–6 days), spectral bands (VIS, NIR, MIR, TIR), full coverage.
- High spatial resolution (5–10 m) multispectral data with provision for selecting specific narrow bands (VIS, NIR, MIR), viewing from different angles.
- Synthetic aperture radar operating in at least two frequencies (C, X, Ku), two incidence angles/polarizations, moderate to high spatial resolution (20–40 m), high repeat frequency (4–6 days).
- Very high spatial resolution (1–2 m) data in a panchromatic band to provide terrain details at cadastral level (1:10,000).
- Stereo capability (1–2 m height resolution) to help planning and execution of development plans.
- A moderate-resolution sensor operating in VIS, NIR and MIR on a geostationary platform for observations at different sun angles, necessary for the development of canopy reflectance inversion models.
- Diurnal (at least two, i.e. pre-dawn and noon) temperature measurements of the earth's surface.
- An ocean colour monitor with daily coverage.
- Multi-frequency microwave radiometer, scatterometer, altimeter, atmospheric sounder, etc.
5.
M. G. Sideris 《Journal of Geodesy》1996,70(8):470-479
Spectral methods have been a standard tool in physical geodesy applications over the past decade. Typically, they have been used for the efficient evaluation of convolution integrals, utilizing homogeneous, noise-free gridded data. This paper answers the following three questions:
- Can data errors be propagated into the results?
- Can heterogeneous data be used?
- Is error propagation possible with heterogeneous data?
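The "efficient evaluation of convolution integrals" referred to in this abstract rests on the convolution theorem: on a grid, a spectral (FFT) evaluation reproduces the direct summation at a fraction of the cost. A minimal 1-D sketch, with a synthetic kernel and synthetic data standing in for, e.g., a discretised Stokes-type kernel and gridded anomalies:

```python
import numpy as np

# Circular convolution of a gridded signal with a kernel, evaluated both
# directly and via the FFT; the two must agree to machine precision.
rng = np.random.default_rng(0)
data = rng.standard_normal(64)          # stand-in for gridded gravity anomalies
kernel = np.exp(-np.arange(64) / 8.0)   # stand-in for a discretised kernel

# direct O(N^2) summation
direct = np.array([sum(data[(n - k) % 64] * kernel[k] for k in range(64))
                   for n in range(64)])

# spectral O(N log N) evaluation: multiply the spectra, transform back
spectral = np.fft.ifft(np.fft.fft(data) * np.fft.fft(kernel)).real
```

The same identity, applied in 2-D, is what makes gridded geoid and terrain-effect integrals fast; error propagation then amounts to applying the same spectral machinery to the data covariances.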
6.
Geological studies of the area around Katta, in the southern part of the Ratnagiri District of Maharashtra, were carried out with the help of visual remote sensing techniques using Landsat imagery at 1:250,000 scale and aerial photographs at 1:60,000 scale. The major stratigraphic units represented in the area under study are the Archean Complex, Kaladgi Supergroup, Deccan Trap, Laterite and Alluvium. The Kaladgis unconformably overlie the Archean metasediments and in places also exhibit faulted contacts with the latter. The major part of the area is covered by thick evergreen vegetation. The interpretation, followed by field and laboratory work, revealed the following:
- The different lithologic units could be delineated on the aerial photographs.
- Different lineaments marked on the imagery were found to be due either to faults or fracture zones. Some of the older faults appear to have been rejuvenated after the formation of the laterites.
- Some of the lithologic horizons can be identified on the Landsat imagery by virtue of their spatial signatures.
7.
Georges Blaha 《Journal of Geodesy》1982,56(4):281-299
The present paper deals with the least-squares adjustment in which the design matrix A is rank-deficient. The adjusted parameters x̂ as well as their variance-covariance matrix Σ_x̂ can be obtained as in the "standard" adjustment, where A has full column rank, supplemented with constraints Cx̂ = w, where C is the constraint matrix and w is sometimes called the "constant vector". In this analysis only the inner adjustment constraints are considered, where C has full row rank equal to the rank deficiency of A, and AC^T = 0. Perhaps the most important outcome points to three kinds of results:
- A general least-squares solution, where both x̂ and Σ_x̂ are indeterminate, corresponds to w = an arbitrary random vector.
- The minimum-trace (least-squares) solution, where x̂ is indeterminate but Σ_x̂ is determined (and trace Σ_x̂ = minimum), corresponds to w = an arbitrary constant vector.
- The minimum-norm (least-squares) solution, where both x̂ and Σ_x̂ are determined (and norm of x̂ = minimum, trace Σ_x̂ = minimum), corresponds to w = 0.
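The minimum-norm case can be checked numerically: for a rank-deficient A, the pseudoinverse solution coincides with the solution of the normal equations bordered by inner constraints C satisfying AC^T = 0. A small sketch with an invented 4×3 design matrix (not from the paper):

```python
import numpy as np

# Rank-deficient design matrix: column 2 equals column 0 + column 1.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0],
              [2.0, 1.0, 3.0]])
l = np.array([1.0, 2.0, 3.0, 5.0])

# Minimum-norm least-squares solution via the pseudoinverse (the w = 0 case)
x_pinv = np.linalg.pinv(A) @ l

# Same solution from inner constraints: C spans the null space of A
# (so A @ C.T == 0), and we solve the bordered normal-equation system.
C = np.array([[1.0, 1.0, -1.0]])
N = np.block([[A.T @ A, C.T],
              [C, np.zeros((1, 1))]])
rhs = np.concatenate([A.T @ l, np.zeros(1)])
x_con = np.linalg.solve(N, rhs)[:3]
```

Both routes give the same x̂, and Cx̂ = 0 holds because the minimum-norm solution lies in the row space of A, orthogonal to its null space.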
8.
Johannes Ihde 《Journal of Geodesy》1981,55(2):99-110
The investigations refer to the compartment method using mean terrestrial free-air anomalies only. Three main error influences of remote areas (distance from the fixed point > 9°) on height anomalies and deflections of the vertical are considered:
- The prediction errors of the mean terrestrial free-air anomalies have the greatest influence, amounting to about ±0″.2 in each component for deflections of the vertical and to ±3 m for height anomalies;
- The error of the compartment method, which originates from converting the integral formulas of Stokes and Vening-Meinesz into summation formulas, can be neglected if the anomalies for points and gravity profiles are compiled into 5°×5° mean values;
- The influences of the mean gravimetric correction terms of Arnold (estimated for important mountains of the Earth by means of an approximate formula) may amount to 1–2 m for height anomalies and to 0″.05–0″.1 for deflections of the vertical, and therefore have to be taken into account in exact calculations.
9.
Observable quantities in satellite gradiometry
Martin Vermeer 《Journal of Geodesy》1990,64(4):347-361
In deriving the observables for satellite gravity gradiometry, several workers have identified the invariants under spatial rotation of the gravitation gradient tensor, in order to obtain quantities insensitive to the precise (unrecoverable) attitude of the satellite. Extending this work we show:
- Considering that an approximate (not precise) attitude recovery is to be expected for these three-axes-stabilised satellites, one can identify three independent invariants instead of two.
- Besides studying gradient tensor invariants at one observation time, one should also study (as with GPS observables) first and second differences between successive tensor component values in time. Bias and trend patterns in the measured tensor components caused by satellite rotation uncertainty, and by attitude uncertainty in some cross components, are shown to cancel. Information thus obtained is exclusively high-frequency, however.
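For a symmetric 3×3 gradient tensor, the rotation invariants in question are the trace, the sum of the principal 2×2 minors, and the determinant; rotating the tensor leaves all three unchanged while the individual components change. A sketch with an invented trace-free tensor (the trace vanishes for a harmonic potential):

```python
import numpy as np

def invariants(T):
    """The three rotation invariants of a symmetric 3x3 tensor:
    trace, sum of principal 2x2 minors, determinant."""
    i1 = np.trace(T)
    i2 = 0.5 * (np.trace(T) ** 2 - np.trace(T @ T))
    i3 = np.linalg.det(T)
    return i1, i2, i3

# an invented symmetric, trace-free "gravity gradient" tensor
T = np.array([[ 1.0, 0.3,  0.2],
              [ 0.3, 0.5,  0.1],
              [ 0.2, 0.1, -1.5]])

# rotate the tensor about the z-axis: components change, invariants do not
a = 0.7
R = np.array([[np.cos(a), -np.sin(a), 0.0],
              [np.sin(a),  np.cos(a), 0.0],
              [0.0,        0.0,       1.0]])
T_rot = R @ T @ R.T
```

Because Laplace's equation forces the first invariant to zero in vacuum, only two invariants carry gravitational information in the fully attitude-free case, which is why an approximate attitude recovery adds a third usable quantity.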
10.
Combining consecutive short arcs into long arcs for precise and efficient GPS Orbit Determination
G. Beutler, E. Brockmann, U. Hugentobler, L. Mervart, M. Rothacher, R. Weber 《Journal of Geodesy》1996,70(5):287-299
The final products of the CODE Analysis Center (Center for Orbit Determination in Europe) of the International GPS Service for Geodynamics (IGS) stem from overlapping 3-day arcs. Until 31 December 1994 these long arcs were computed from scratch, i.e. by processing three days of observations from about 40 stations (by mid-1995 about 60 stations were used) of the IGS Global Network in our parameter estimation program GPSEST. Because one-day arcs have to be produced first (for the purpose of error detection etc.), the actual procedure was rather time-consuming. In the present article we develop the mathematical tools necessary to form long arcs based on the normal equation systems of consecutive short arcs (one-day solutions in the case of CODE). The procedure in its simplest version is as follows:
- Each short arc is described by six initial conditions and a number of dynamical orbit parameters (e.g. radiation pressure parameters). The resulting long arc in turn is based on n consecutive short arcs and described by six initial conditions and the same number of dynamical parameters as each short arc.
- By requiring position and velocity to be continuous at the boundaries of the short arcs, we obtain a long arc defined by one set of initial conditions and n sets of dynamical parameters (if n short arcs are combined).
- By requiring the dynamical parameters to be identical in consecutive short arcs, the resulting long arc is characterized by exactly the same number of orbit parameters as each of the short arcs.
- This procedure is not yet optimized, because formally all n sets of orbit parameters have to be set up and solved for in the long-arc solution (although they are not independent). To allow for an optimized solution we derive all relations necessary to eliminate the unnecessary parameters in the combination. Each long arc is then characterized by the actual number of independent orbit parameters, and the resulting procedure is very efficient.
11.
Error analysis of the NGS’ surface gravity database
Jarir Saleh, Xiaopeng Li, Yan Ming Wang, Daniel R. Roman, Dru A. Smith 《Journal of Geodesy》2013,87(3):203-221
Are the National Geodetic Survey's surface gravity data sufficient to support the computation of a 1 cm-accurate geoid? This paper attempts to answer this question by deriving a few measures of accuracy for these data and estimating their effects on the US geoid. We use a data set comprising ~1.4 million gravity observations collected in 1,489 surveys. Comparisons to GRACE-derived gravity and geoid are made to estimate the long-wavelength errors. Crossover analysis and K-nearest-neighbor predictions are used to estimate local gravity biases and high-frequency gravity errors, and the corresponding geoid biases and high-frequency geoid errors are evaluated. Results indicate that 244 of all 1,489 surface gravity surveys have significant biases >2 mGal, with geoid implications that reach 20 cm. Some of the biased surveys are large enough in horizontal extent to be reliably corrected by satellite-derived gravity models, but many others are not. In addition, the results suggest that the data are contaminated by high-frequency errors with an RMS of ~2.2 mGal. This causes high-frequency geoid errors of a few centimeters in and to the west of the Rocky Mountains and in the Appalachians, and a few millimeters or less everywhere else. Finally, long-wavelength (>3°) surface gravity errors at the sub-mGal level but with large horizontal extent are found. All of the south and southeast of the USA is biased by +0.3 to +0.8 mGal and the Rocky Mountains by −0.1 to −0.3 mGal. These small but extensive gravity errors lead to long-wavelength geoid errors that reach 60 cm in the interior of the USA.
12.
Yi Bin Yao, Bao Zhang, Shun Qiang Yue, Chao Qian Xu, Wen Fei Peng 《Journal of Geodesy》2013,87(5):439-448
We can map zenith wet delays onto precipitable water with a conversion factor, but in order to calculate the exact conversion factor, we must precisely calculate its key variable, the weighted mean temperature T_m. Yao et al. (J Geod 86:1125–1135, 2012, doi:10.1007/s00190-012-0568-1) established the first generation of the global T_m model (GTm-I) with ground-based radiosonde data, but due to the lack of radiosonde data at sea, the model appears to be abnormal in some areas. Given that sea surface temperature varies less than that on land, and that the GPT model and the Bevis T_m–T_s relationship are accurate enough to describe the surface temperature and T_m, this paper capitalizes on the GPT model and the Bevis T_m–T_s relationship to provide simulated T_m at sea, as compensation for the lack of data. Combined with the T_m from radiosonde data, we recalculated the GTm model coefficients. The results show that this method not only improves the accuracy of the GTm model significantly at sea but also improves it on land, making the GTm model more stable and practically applicable.
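The conversion factor in question maps zenith wet delay to precipitable water through the refractivity constants and T_m. A common closed form (constants as published by Bevis et al. 1992, used here purely for illustration), combined with the Bevis T_m–T_s relationship, yields dimensionless values near 0.15:

```python
RHO_W = 1000.0   # density of liquid water, kg/m^3
R_V = 461.5      # specific gas constant of water vapour, J/(kg K)
K2P = 22.1       # refractivity constant k2', K/hPa (Bevis et al. 1992)
K3 = 3.739e5     # refractivity constant k3, K^2/hPa (Bevis et al. 1992)

def bevis_tm(ts):
    """Bevis T_m-T_s relationship: weighted mean temperature from
    surface temperature, both in kelvin."""
    return 70.2 + 0.72 * ts

def conversion_factor(tm):
    """Dimensionless factor mapping zenith wet delay to precipitable water.
    The constants in K/hPa are converted to K/Pa (divide by 100)."""
    return 1.0e6 / (RHO_W * R_V * ((K3 / tm) + K2P) / 100.0)

pi_factor = conversion_factor(bevis_tm(288.15))  # surface temperature 15 degC
```

A 1 mm error in precipitable water thus corresponds to roughly 6–7 mm of zenith wet delay, which is why the accuracy of T_m matters so much.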
13.
Yehuda Bock 《Journal of Geodesy》1983,57(1-4):294-311
The estimation of crustal deformations from repeated baseline measurements is a singular problem in the absence of prior information. One often applied solution is a free adjustment, in which the singular normal matrix is augmented with a set of inner constraints. These constraints impose no net translation or rotation on the estimated deformations X, which may not be physically meaningful for a particular problem. The introduction of an available geophysical model, from which an expected deformation vector X̄ and its covariance matrix Σ_X̄ can be computed, will direct X to a physically more meaningful solution. Three possible estimators are investigated for estimating deformations from a combination of baseline measurements and geophysical models.
14.
Well-credited and widely used ionospheric models, such as the International Reference Ionosphere (IRI) or NeQuick, describe the variation of the electron density with height by means of a piecewise profile tied to the F2-peak parameters: the electron density, NmF2, and the height, hmF2. Accurate values of these parameters are crucial for retrieving reliable electron density estimations from those models. When direct measurements of these parameters are not available, the models compute them using the so-called ITU-R database, which was established in the early 1960s. This paper presents a technique aimed at routinely updating the ITU-R database using radio occultation electron density profiles derived from GPS measurements gathered by low Earth orbit satellites. Before being used, these radio occultation profiles are validated by fitting an electron density model to them. A re-weighted least-squares algorithm is used for down-weighting unreliable measurements (occasionally, entire profiles) and for retrieving NmF2 and hmF2 values, together with their error estimates, from the profiles. These values are used to update the database monthly; the database consists of two sets of ITU-R-like coefficients that can easily be implemented in the IRI or NeQuick models. The technique was tested with radio occultation electron density profiles delivered to the community by the COSMIC/FORMOSAT-3 mission team. Tests were performed for solstice and equinox seasons under high and low solar activity conditions. The global mean error of the resulting maps, estimated by the least-squares technique, is between 0.5×10^10 and 3.6×10^10 el/m^3 for the F2-peak electron density (equivalent to 7% of the value of the estimated parameter) and from 2.0 to 5.6 km for the height (~2%).
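The re-weighted least-squares idea used above for down-weighting unreliable measurements can be sketched with a generic IRLS loop, here with weights inversely proportional to the residual magnitude (an L1-type scheme). This is an illustration of the re-weighting principle only, not the paper's actual electron-density profile model:

```python
import numpy as np

def irls(A, y, n_iter=20, eps=1e-6):
    """Re-weighted least squares: after each solve, weights are set to
    1/(|residual| + eps), so unreliable measurements are down-weighted."""
    w = np.ones(len(y))
    for _ in range(n_iter):
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
        r = y - A @ x
        w = 1.0 / (np.abs(r) + eps)
    return x, w

# straight line y = 2t + 1 with one gross outlier at t = 4
t = np.arange(10, dtype=float)
y = 2.0 * t + 1.0
y[4] += 50.0                       # corrupted sample
A = np.column_stack([t, np.ones_like(t)])
x, w = irls(A, y)
```

The fit converges to the uncorrupted line while the outlier's weight collapses, which is the mechanism that lets entire unreliable profiles be effectively discarded.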
15.
M-estimation with probabilistic models of geodetic observations
Z. Wiśniewski 《Journal of Geodesy》2014,88(10):941-957
The paper concerns M-estimation with probabilistic models of geodetic observations, here called M_P estimation. Special attention is paid to M_P estimation that includes the asymmetry and the excess kurtosis, which are the basic anomalies of empirical distributions of errors of geodetic or astrometric observations (in comparison to Gaussian errors). It is assumed that the influence function of M_P estimation is given by the differential equation that defines the system of Pearson distributions. The central moments μ_k, k = 2, 3, 4, are the parameters of that system and thus also the parameters of the chosen influence function. The M_P estimation that includes the Pearson type IV and VII distributions (the M_PD(l) method) is analyzed in great detail from a theoretical point of view as well as through numerical tests. The chosen distributions are leptokurtic with asymmetry, which matches the general characteristics of empirical distributions. Considering M-estimation with probabilistic models, the Gram–Charlier series are also applied to approximate the models in question (the M_G-C method). The paper shows that M_P estimation with the application of probabilistic models belongs to the class of robust estimations; the M_PD(l) method is especially effective in that case. It is suggested that even in the absence of significant anomalies the method in question should be regarded as robust against gross errors, with its robustness controlled by the pseudo-kurtosis.
16.
Sridevi Jade, Malay Mukul, V. K. Gaur, Kireet Kumar, T. S. Shrungeshwar, G. S. Satyal, Rakesh Kumar Dumka, Saigeetha Jagannathan, M. B. Ananda, P. Dileep Kumar, Souvik Banerjee 《Journal of Geodesy》2014,88(6):539-557
We present new insights on the time-averaged surface velocities and the convergence and extension rates along arc-normal transects in the Kumaon, Garhwal and Kashmir–Himachal regions of the Indian Himalaya from 13 years of high-precision Global Positioning System (GPS) time series (1995–2008), derived from GPS data at 14 permanent and 42 campaign stations between 29.5–35°N and 76–81°E. The GPS surface horizontal velocities vary significantly from the Higher to the Lesser Himalaya and are of the order of 30 to 48 mm/year NE in the ITRF 2005 reference frame, and 17 to 2 mm/year SW in an India-fixed reference frame, indicating that this region is accommodating less than 2 cm/year of the India–Eurasia plate motion (~4 cm/year). The total arc-normal shortening varies between ~10–14 mm/year along the different transects of the northwest Himalayan wedge, between the Indo-Tsangpo suture to the north and the Indo-Gangetic foreland to the south, indicating high strain accumulation in the Himalayan wedge. This convergence is being accommodated differentially along the arc-normal transects: ~5–10 mm/year in the Lesser Himalaya and 3–4 mm/year in the Higher Himalaya south of the South Tibetan Detachment. Most of the convergence in the Lesser Himalaya of Garhwal and Kumaon is accommodated just south of the Main Central Thrust fault trace, indicating high strain accumulation in this region, consistent with its high seismic activity. In addition, for the first time an arc-normal extension of ~6 mm/year has been observed in the Tethyan Himalaya of Kumaon. Inverse modeling of the GPS-derived surface deformation rates in the Garhwal and Kumaon Himalaya using a single dislocation indicates that the Main Himalayan Thrust is locked from the surface to a depth of ~15–20 km over a width of 110 km, with an associated slip rate of ~16–18 mm/year.
These results indicate that the arc-normal rates in the Northwest Himalaya have a complex deformation pattern involving both convergence and extension, and rigorous seismo-tectonic models in the Himalaya are necessary to account for this pattern. In addition, the results also give an estimate of the co-seismic and post-seismic motion associated with the 1999 Chamoli earthquake, which is modeled to derive the slip and geometry of the rupture plane.
17.
G. Wöppelmann, M. Marcos, A. Coulomb, B. Martín Míguez, P. Bonnetain, C. Boucher, M. Gravelle, B. Simon, P. Tiphaneau 《Journal of Geodesy》2014,88(9):869-885
This paper describes historical sea level data that we have rescued from a tide gauge originally devised especially for geodesy. This gauge was installed in Marseille in 1884 with the primary objective of defining the origin of the height system in France. Hourly values for 1885–1988 have been digitized from the original tidal charts. They are supplemented by hourly values from an older tide gauge record (1849–1851) that was rediscovered during a survey in 2009. Both recovered data sets have been critically edited for errors and their reliability assessed. The hourly values are thoroughly analysed for the first time since their original recording. A consistent high-frequency time series is reported, notably increasing the length of one of the few European sea level records in the Mediterranean Sea spanning more than one hundred years. Changes in sea level are examined, and previous results are revisited with the extended time series. The rate of relative sea level change for the period 1849–2012 is estimated to have been 1.08 ± 0.04 mm/year at Marseille, a value slightly lower than, but in close agreement with, that of the longest time series, at Brest, over the common period (1.26 ± 0.04 mm/year). Data from a permanent global positioning system station installed on the roof of the solid tide gauge building suggest a remarkable stability of the ground (−0.04 ± 0.25 mm/year) since 1998, confirming the choice made by our predecessor geodesists in the nineteenth century regarding this site selection.
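A rate such as "1.08 ± 0.04 mm/year" comes from a least-squares linear trend fit to the sea level series. The sketch below reproduces that computation on synthetic data with the paper's trend built in; the noise level and time span are illustrative, not the actual Marseille record:

```python
import numpy as np

# synthetic annual-mean sea level series, mm, with a 1.08 mm/year trend
rng = np.random.default_rng(1)
years = np.arange(1849.0, 2013.0)
sea_level = 1.08 * (years - years[0]) + rng.normal(0.0, 20.0, len(years))

# least-squares fit of trend + offset, and the formal error of the trend
A = np.column_stack([years - years[0], np.ones_like(years)])
coef, res, *_ = np.linalg.lstsq(A, sea_level, rcond=None)
dof = len(years) - 2
sigma2 = res[0] / dof                     # a posteriori variance of unit weight
cov = sigma2 * np.linalg.inv(A.T @ A)
trend, trend_se = coef[0], np.sqrt(cov[0, 0])
```

The long time span is what drives the small formal error: the trend uncertainty shrinks roughly with the 3/2 power of the record length, which is why century-scale records like Marseille and Brest are so valuable.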
18.
A. Lannes 《Journal of Geodesy》2013,87(4):323-335
The LLL algorithm, introduced by Lenstra et al. (Math Ann 261:515–534, 1982), plays a key role in many fields of applied mathematics. In particular, it is used as an effective numerical tool for preconditioning the integer least-squares problems arising in high-precision geodetic positioning and Global Navigation Satellite Systems (GNSS). In 1992, Teunissen developed a method for solving these nearest-lattice-point (NLP) problems. This method is referred to as Lambda (for Least-squares AMBiguity Decorrelation Adjustment). The preconditioning stage of Lambda corresponds to its decorrelation algorithm. From an epistemological point of view, the latter was devised through an innovative statistical approach completely independent of the LLL algorithm. Recent papers have pointed out some similarities between the LLL algorithm and the Lambda decorrelation algorithm. We try to clarify this point in the paper. We first introduce a parameter measuring the orthogonality defect of the integer basis in which the NLP problem is solved: the LLL-reduced basis of the LLL algorithm, or the Λ-basis of the Lambda method. With regard to this problem, the potential qualities of these bases can then be compared. The Λ-basis is built by working at the level of the variance-covariance matrix of the float solution, while the LLL-reduced basis is built by working at the level of its inverse. As a general rule, the orthogonality defect of the Λ-basis is greater than that of the corresponding LLL-reduced basis; these bases are, however, very close to one another. To specify this tight relationship, we present a method that provides the dual LLL-reduced basis of a given Λ-basis. As a consequence of this basic link, all the recent developments made on the LLL algorithm can be applied to the Lambda decorrelation algorithm.
This point is illustrated in a concrete manner: we present a parallel Λ-type decorrelation algorithm derived from the parallel LLL algorithm of Luo and Qiao (Proceedings of the fourth international C* conference on computer science and software engineering, ACM Press, pp 93–101, 2012).
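The orthogonality defect used to compare these bases is commonly defined as the product of the basis-vector norms divided by the absolute value of the determinant; it is at least 1, with equality exactly for an orthogonal basis, and reduction (LLL or Λ-type) drives it down. A 2-D sketch with an invented lattice basis:

```python
import numpy as np

def orthogonality_defect(B):
    """Product of column norms over |det B|; equals 1 iff the columns
    of B are mutually orthogonal."""
    norms = np.linalg.norm(B, axis=0)
    return float(np.prod(norms) / abs(np.linalg.det(B)))

B = np.array([[1.0, 6.0],
              [0.0, 1.0]])          # skewed lattice basis, det = 1
U = np.array([[1.0, -6.0],
              [0.0, 1.0]])          # unimodular transformation (size reduction)
B_red = B @ U                       # same lattice, near-orthogonal basis
```

Because U is unimodular (integer entries, |det U| = 1), B and B_red generate the same lattice; only the quality of the basis, i.e. its defect, changes.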
19.
Deformations of radio telescopes used in geodetic and astrometric very long baseline interferometry (VLBI) observations belong to the class of systematic error sources which require correction in data analysis. In this paper we present a model for all path length variations in the geometrical optics of radio telescopes which are due to gravitational deformation. The Effelsberg 100 m radio telescope of the Max Planck Institute for Radio Astronomy, Bonn, Germany, has been surveyed by various terrestrial methods, so all information needed to model the path length variations is available. Additionally, a ray tracing program has been developed which uses the parameters of the measured deformations as input to produce an independent check of the theoretical model. In this program, as well as in the theoretical model, the illumination function plays an important role because it serves as the weighting function for the individual path lengths depending on the distance from the optical axis. For the Effelsberg telescope, the biggest contribution to the total path length variations is the bending of the main beam located along the elevation axis, which partly carries the weight of the paraboloid at its vertex. The difference in total path length is almost −100 mm when comparing observations at 90° and at 0° elevation angle. The impact of the path length corrections is validated in a global VLBI analysis. The application of the correction model leads to a change in the vertical position of +120 mm. This is more than the maximum path length, but the effect can be explained by the shape of the correction function.
20.
Heiner Denker, Ludger Timmen, Christian Voigt, Stefan Weyers, Ekkehard Peik, Helen S. Margolis, Pacôme Delva, Peter Wolf, Gérard Petit 《Journal of Geodesy》2018,92(5):487-516
The frequency stability and uncertainty of the latest generation of optical atomic clocks is now approaching the one part in 10^18 level. Comparisons between earthbound clocks at rest must account for the relativistic redshift of the clock frequencies, which is proportional to the corresponding gravity (gravitational plus centrifugal) potential difference. For contributions to international timescales, the relativistic redshift correction must be computed with respect to a conventional zero-potential value in order to be consistent with the definition of Terrestrial Time. To benefit fully from the uncertainty of the optical clocks, the gravity potential must be determined with an accuracy of about 0.1 m²/s², equivalent to about 0.01 m in height. This contribution focuses on the static part of the gravity field, assuming that temporal variations are accounted for separately by appropriate reductions. Two geodetic approaches are investigated for the derivation of gravity potential values: geometric levelling and the Global Navigation Satellite Systems (GNSS)/geoid approach. Geometric levelling gives potential differences with millimetre uncertainty over shorter distances (several kilometres), but is susceptible to systematic errors at the decimetre level over large distances. The GNSS/geoid approach gives absolute gravity potential values, but with an uncertainty corresponding to about 2 cm in height. For large distances, the GNSS/geoid approach should therefore be better than geometric levelling. This is demonstrated by results from practical investigations related to three clock sites in Germany and one in France. The estimated uncertainty of the relativistic redshift correction at each site is about 2 × 10^−18.
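To first order, the relativistic redshift discussed above is Δf/f = ΔW/c², so the quoted 0.1 m²/s² potential accuracy translates directly into a fractional frequency uncertainty of about 1 × 10^−18, and 1 m of height near the Earth's surface into about 1 × 10^−16. A short numerical check (the nominal surface gravity value is an assumption for illustration):

```python
C = 299_792_458.0       # speed of light, m/s (exact)
G_SURFACE = 9.81        # nominal surface gravity, m/s^2 (illustrative)

def redshift(delta_w):
    """First-order fractional frequency shift for a gravity-potential
    difference delta_w in m^2/s^2."""
    return delta_w / C**2

# 1 m of height difference near the surface: Delta W ~ g * Delta h
per_metre = redshift(G_SURFACE * 1.0)

# the 0.1 m^2/s^2 potential accuracy quoted in the abstract
target = redshift(0.1)
```

This is why clock comparisons at the 10^−18 level demand centimetre-level knowledge of the gravity potential, and conversely why optical clocks are attractive as "chronometric levelling" sensors.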