Similar Articles
20 similar articles retrieved (search time: 31 ms)
1.
The state of current and proposed moving-base gravity gradiometer instruments is briefly reviewed. The review perspective is directed toward their deployment as a source of additional gravimetric data during inertial surveys. In such gradiometer-aided surveys, the additional gravity gradient information could be used to:
  1. Improve surveyed gravity vector accuracy
  2. Extend the interval between zero velocity update stops
  3. Accomplish varying combinations of the above.
The paper examines potential survey improvements associated with gradiometers having noise levels observed in laboratory prototypes. The additional improvements possible with future gradiometers are also discussed. These results are interpreted in light of present and likely future inertial survey system technology.

2.
A sequential adjustment procedure is proposed for the direct estimation of point velocities in deformation analysis networks. At any intermediate stage of the adjustment, the up-to-date covariance matrix of those velocities tells the evolving story of the network in terms of solvability and reliability. A pre-zero-epoch covariance matrix is utilized for a smooth and flexible treatment of two characteristic problems of deformation analysis:
  • high turnover of points in the network
  • processing of variable and generally incomplete observational batches.
A small numerical example is presented at the end as an illustration.
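A minimal sketch of such a sequential least-squares update, with invented numbers (a one-dimensional three-point network and a loose pre-zero-epoch prior) standing in for a real deformation network:

```python
import numpy as np

def sequential_update(N, b, A, P, l):
    """Fold one observation batch l = A v + e (weight matrix P) into the normals."""
    return N + A.T @ P @ A, b + A.T @ P @ l

rng = np.random.default_rng(0)
n = 3                                    # velocities of three network points (toy)
v_true = np.array([2.0, -1.0, 0.5])      # mm/yr

# Pre-zero-epoch covariance: a loose prior that keeps the normals invertible
# (and hence the velocity covariance computable) from the very first batch
sigma0 = 100.0
N = np.eye(n) / sigma0**2
b = np.zeros(n)

# Variable, generally incomplete batches: each one observes only two points
for idx in [(0, 1), (1, 2), (0, 2), (0, 1)]:
    A = np.zeros((2, n))
    A[0, idx[0]] = A[1, idx[1]] = 1.0
    l = A @ v_true + rng.normal(0.0, 0.1, 2)
    P = np.eye(2) / 0.1**2
    N, b = sequential_update(N, b, A, P, l)

v_hat = np.linalg.solve(N, b)
cov_v = np.linalg.inv(N)   # up-to-date covariance: the solvability/reliability story
```

At any stage, `cov_v` can be inspected before the next batch arrives, which is the point of the sequential formulation.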

    3.
The French astronomer Jean Picard (1620–1682) was certainly one of the leading scientists of his time. A friend of Huygens, Hevelius and Oldenburg, teacher of Römer, and an indefatigable traveller, he played a very important part in the development of positional astronomy and geodesy.
  • He was the first to have the idea of comparing length units to a reproducible physical quantity, namely the length of the one-second pendulum at Paris, and he measured that length.
  • He conceived the first cross-wire telescopes and fitted them to geodetic and astronomical instruments of his own design, which were used for a century, until about 1780.
  • He obtained the first really reliable value of the Earth's radius in his famous measurement of the meridional arc Paris–Amiens, which became the original cell of the French triangulations.
The following article is devoted to a recomputation and evaluation of the accuracy of that work, compared with later operations, and independently concludes that this achievement gave the necessary impulse to the development of geodesy in France and probably abroad.

    4.
Although data available from various earth observation systems have been routinely used in many resource applications, there have been gaps, and the data needs of applications at different levels of detail have not been met. There is a growing demand for data at higher repetitivity, at higher spatial resolution, in more and narrower spectral bands, etc. Some of the thrust areas of application, particularly in the Indian context, are:
    1. Management of natural resources to ensure a sustainable increase in agricultural production;
    2. Study of the state of the environment, its monitoring, and assessment of the impact of various development actions on the environment;
    3. Updating and generation of large-scale topographical maps;
    4. Exploration/exploitation of marine and mineral resources; and
    5. Operational meteorology and the study of various land and oceanic processes to understand/predict global climate change.
Each of these thrust areas of application has many components, related to basic resource areas such as agriculture, forestry, water resources, minerals, marine resources, etc., and to the field of cartography. Observational requirements for the major applications are summarized below. Monitoring vegetation health from space remains the most important observational parameter, with applications in agriculture, forestry, environment, hydrology, etc. Vegetation extent, quantity and temporal change are the three main requirements, and they are not fully met with the RS data available; vegetation productivity, forest biomass, canopy moisture status and canopy biogeochemistry are some examples. Crop production forecasting is an important application area. Remotely sensed data have been used for the identification of crops and the estimation of their acreage. Fragmented holdings, a large spread in crop calendars and differing management practices continue to pose a challenge to remote sensing. Remotely sensed data at much higher spatial resolution than hitherto available, as well as at greater repetitivity, are required to meet this need. Non-availability of cloud-free data in the kharif season is one of the serious problems in the operational use of remote sensing for crop inventory; synthetic aperture radar data at X- and Ku-bands are necessary to meet this demand. Nutrient stress/disease detection requires observations in narrow spectral bands. For forestry applications, multispectral data at a high spatial resolution of the order of 5 to 10 metres are required to make working plans at the forest compartment level. Observations from space for deriving tree height are required for volume estimation. Observations in the middle infrared region would greatly enhance the capability of satellite remote sensing in forest fire detection.
Temporal, spatial and spectral observational requirements in the various applications of vegetation viewing are diverse, as they address processes at different spatial and time scales. Hence it is worthwhile to address this issue in three broad categories: (a) full coverage, moderate spatial resolution, high repetitivity (drought, large-scale deforestation, forest phenology, ...); (b) full coverage, moderate to high spatial resolution, high repetitivity (crop forecasting, vegetation productivity); (c) selected viewing at high spatial resolution, moderate to high repetitivity, with new dimensions of imaging (narrow spectral bands, different viewing angles). A host of agrometeorological parameters need to be measured from space for their effective use in the development of yield models. Estimation of root-zone soil moisture is an important area requiring radar measurements from space. Surface meteorological observation from space at the desired spatial and temporal distributions has not developed, because of the heavy demands placed on the sensors as well as on analytical operational models. Agrometeorology not only provides quantitative inputs to other applications such as crop forecasting and hydrological models, but could also be used for farmer advisory services by local bodies. Mineral exploration requires information on geological structures, geomorphology and lithology. Surface manifestation over localized regions requires large-scale mapping, while lithology can be deciphered from specific narrow bands in the visible, NIR, MIR and TIR regions. The sensors identified for mapping/cartography, in conjunction with an imaging spectrometer, would seem to cover the requirements of this application. Narrow spectral bands in the shortwave regions, which provide diagnostics of the relevant geological phenomena, are necessary for mineral exploration. Thermal inertia measurements help in better discrimination of different rock units.
Measurements from synthetic aperture radar, which provide information on geological structures and geomorphology, are necessary for mineral exploration. Applications related to the marine environment fall into three major areas: (i) ocean colour and productivity, biological resources; (ii) the land–ocean interface, which includes coastal landforms, bathymetry, littoral transport processes, etc.; and (iii) physical oceanography: sea surface temperature, winds, wave spectra, and energy and mass exchange between atmosphere and ocean. Accurate daily measurement of chlorophyll concentration, sea surface temperature with an accuracy of 0.5 K, and information on current patterns are required for developing better fishery forecast models. Improved spatial resolution data are desirable for studying sediment and other coastal processes. Cartography is another important application area. The major problems encountered in topographic map updating are location and geometric accuracy and information content. The two most important requirements for this application are high spatial resolution data of 1 to 2 metres and a stereo capability providing a vertical resolution of 1 metre. This requirement places stringent demands on sensor specifications, geometric processing, platform stability and automated digital cartography. The requirements for future earth observation systems, based on the different application needs, can be summarized as follows:
    1. Moderate spatial resolution (150-300 m), high repetitivity (2 days), minimum set of spectral bands (VIS, NIR, MIR, TIR), full coverage.
    2. Moderate to high spatial resolution (20-40 m), high repetitivity (4-6 days), spectral bands (VIS, NIR, MIR, TIR), full coverage.
    3. High spatial resolution (5-10 m) multispectral data with provision for selecting specific narrow bands (VIS, NIR, MIR), viewing from different angles.
    4. Synthetic aperture radar operating in at least two frequencies (C, X, Ku), two incidence angles/polarizations, moderate to high spatial resolution (20-40 m), high repetitivity (4-6 days).
    5. Very high spatial resolution (1-2 m) data in a panchromatic band to provide terrain details at cadastral level (1:10,000).
    6. Stereo capability (1-2 m height resolution) to help the planning/execution of development plans.
    7. Moderate-resolution sensor operating in VIS, NIR and MIR on a geostationary platform for observations at different sun angles, necessary for the development of canopy reflectance inversion models.
    8. Diurnal (at least two, i.e. pre-dawn and noon) temperature measurements of the earth's surface.
    9. Ocean colour monitor with daily coverage.
    10. Multi-frequency microwave radiometer, scatterometer, altimeter, atmospheric sounder, etc.

    5.
    Spectral methods have been a standard tool in physical geodesy applications over the past decade. Typically, they have been used for the efficient evaluation of convolution integrals, utilizing homogeneous, noise-free gridded data. This paper answers the following three questions:
    1. Can data errors be propagated into the results?
    2. Can heterogeneous data be used?
    3. Is error propagation possible with heterogeneous data?
The answer to all three questions is yes, as illustrated for the case of two input data sets and one output. First, a solution is obtained in the frequency domain using the theory of a two-input, single-output system. The assumption here is that both the input signals and their errors are stochastic variables with known power spectral densities (PSDs). The solution depends on the ratios of the error PSD and the signal PSD, i.e., the noise-to-signal ratios of the two inputs. It is shown that, when the two inputs are partially correlated, this solution is equivalent to stepwise collocation. Second, a solution is derived in the frequency domain by a least-squares adjustment of the spectra of the input data. The assumption is that only the input errors are stochastic variables with known PSDs. It is shown that the solution depends on the ratio of the noise PSDs. In both cases there remains the non-trivial problem of estimating the input noise PSDs, given that only the error variances of the data are available. An effective but non-rigorous way of overcoming this problem in practice is to approximate the noise PSDs by simple stationary models.
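The second, least-squares solution can be sketched for the simplest case of two inputs with white noise, where the per-frequency weights collapse to inverse noise variances; the signal and noise levels below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512
t = np.arange(n)
signal = np.sin(2 * np.pi * t / 64) + 0.5 * np.sin(2 * np.pi * t / 16)

# Two heterogeneous inputs observing the same signal with different noise levels
s1, s2 = 0.3, 0.4
l1 = signal + rng.normal(0.0, s1, n)
l2 = signal + rng.normal(0.0, s2, n)

# Least-squares combination of the spectra: per-frequency weights inversely
# proportional to the noise PSDs; only their ratio matters. For white noise the
# PSDs are flat, so the weights reduce to 1/sigma^2 at every frequency.
L1, L2 = np.fft.rfft(l1), np.fft.rfft(l2)
w1, w2 = 1.0 / s1**2, 1.0 / s2**2
X = (w1 * L1 + w2 * L2) / (w1 + w2)
x_hat = np.fft.irfft(X, n)

rms = lambda e: np.sqrt(np.mean(e**2))
```

With coloured noise the weights `w1`, `w2` would become frequency-dependent arrays built from the (estimated) noise PSDs, which is exactly where the stationary-model approximation mentioned above enters.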

    6.
Geological studies of the area around Katta, in the southern part of the Ratnagiri District of Maharashtra, were carried out with the help of visual remote sensing techniques, using Landsat imagery at 1:250,000 scale and aerial photographs at 1:60,000 scale. The major stratigraphic units represented in the area under study are the Archean Complex, the Kaladgi Supergroup, the Deccan Trap, laterite and alluvium. The Kaladgis unconformably overlie the Archean metasediments and in places exhibit faulted contacts with the latter. The major part of the area is covered by thick evergreen vegetation. The interpretation, followed by field and laboratory work, revealed the following:
    1. The different lithologic units could be delineated on the aerial photographs.
    2. Different lineaments marked on the imagery were found to be due either to faults or fracture zones. Some of the older faults appear to have been rejuvenated after the formation of the laterites.
    3. Some of the lithologic horizons can be identified on the Landsat imagery by virtue of their spatial signatures.
These studies indicate that even in areas covered with thick vegetation, aerospace imagery in an appropriate band and at an appropriate data scale can provide significant geological information.

    7.
The present paper deals with the least-squares adjustment where the design matrix A is rank-deficient. The adjusted parameters \(\hat x\) as well as their variance-covariance matrix \(\sum _{\hat x} \) can be obtained as in the “standard” adjustment where A has full column rank, supplemented with constraints \(C\hat x = w\), where C is the constraint matrix and w is sometimes called the “constant vector”. In this analysis only the inner adjustment constraints are considered, where C has full row rank equal to the rank deficiency of A, and \(AC^{T} = 0\). Perhaps the most important outcome points to three kinds of results:
    1. The general least-squares solution, where both \(\hat x\) and \(\sum _{\hat x} \) are indeterminate, corresponds to w = an arbitrary random vector.
    2. The minimum-trace (least-squares) solution, where \(\hat x\) is indeterminate but \(\sum _{\hat x} \) is determined (and trace \(\sum _{\hat x} \) is minimum), corresponds to w = an arbitrary constant vector.
    3. The minimum-norm (least-squares) solution, where both \(\hat x\) and \(\sum _{\hat x} \) are determined (and norm \(\hat x\) and trace \(\sum _{\hat x} \) are minimum), corresponds to w = 0.
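The minimum-norm case can be illustrated with a small rank-deficient system: the pseudoinverse solution coincides with the inner-constraint solution for w = 0 (the matrices below are invented for illustration):

```python
import numpy as np

# Rank-deficient design: the third column is the sum of the first two
A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 2.],
              [2., 1., 3.]])
l = np.array([1., 2., 3., 4.])

# Minimum-norm least-squares solution via the pseudoinverse
x_mn = np.linalg.pinv(A) @ l

# Equivalent: augment the singular normals with inner constraints C x = 0,
# where the row of C spans the null space of A (so that A @ C.T = 0)
C = np.array([[1., 1., -1.]])
N = A.T @ A
K = np.block([[N, C.T], [C, np.zeros((1, 1))]])
rhs = np.concatenate([A.T @ l, [0.]])
x_c = np.linalg.solve(K, rhs)[:3]
```

Both routes give the same estimate, and the constraint C x̂ = 0 is satisfied automatically by the pseudoinverse solution, since it lies in the row space of A.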

    8.
The investigations refer to the compartment method using mean terrestrial free-air anomalies only. Three main error influences of remote areas (distance from the fixed point >9°) on height anomalies and deflections of the vertical are considered:
    1. The prediction errors of mean terrestrial free-air anomalies have the greatest influence, amounting to about ±0″.2 in each component for deflections of the vertical and to ±3 m for height anomalies;
    2. The error of the compartment method, which originates from converting the integral formulas of Stokes and Vening Meinesz into summation formulas, can be neglected if the anomalies for points and gravity profiles are compiled into 5°×5° mean values;
    3. The influences of the mean gravimetric correction terms of Arnold, estimated for the important mountains of the Earth by means of an approximate formula, may amount to 1–2 m for height anomalies and 0″.05–0″.1 for deflections of the vertical, and therefore have to be taken into account in exact calculations.
The computations of errors are carried out using a global covariance function of point free-air anomalies.
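The conversion of the Stokes integral into a compartment summation, the method whose error is analysed above, can be sketched as follows; the anomaly values, spherical distances and compartment sizes are invented, while the closed-form Stokes function is the standard one:

```python
import numpy as np

def stokes_kernel(psi):
    """Closed-form Stokes function S(psi), psi in radians (psi > 0)."""
    s = np.sin(psi / 2)
    return (1 / s - 6 * s + 1 - 5 * np.cos(psi)
            - 3 * np.cos(psi) * np.log(s + s**2))

def geoid_height(dg, psi, dOmega, R=6371000.0, gamma=9.81):
    """Compartment summation replacing the Stokes integral:
    N = R / (4 pi gamma) * sum_k S(psi_k) * dg_k * dOmega_k."""
    return R / (4 * np.pi * gamma) * np.sum(stokes_kernel(psi) * dg * dOmega)

# Toy set of 5x5-degree mean free-air anomaly compartments (values assumed)
psi    = np.radians(np.array([10., 20., 30., 45.]))   # spherical distances
dOmega = np.full(4, np.radians(5.0)**2)               # approximate solid angles
dg     = np.array([20e-5, -10e-5, 5e-5, 15e-5])       # anomalies in m/s^2

N0 = geoid_height(dg, psi, dOmega)
```

The summation is linear in the mean anomalies, which is why prediction errors of the anomalies propagate directly into the height anomalies.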

    9.
    Observable quantities in satellite gradiometry
In deriving the observables for satellite gravity gradiometry, several workers have identified the invariants under spatial rotation of the gravitation gradient tensor, in order to obtain quantities insensitive to the precise (unrecoverable) attitude of the satellite. Extending this work we show:
    1. Considering that an approximate (not precise) attitude recovery for these three-axes-stabilised satellites is to be expected, one can identify three independent invariants instead of two.
    2. Besides studying gradient tensor invariants for one observation time, one should also study (as with GPS observables) first and second differences between successive tensor component values in time. Bias and trend patterns in the measured tensor components caused by satellite rotation uncertainty, and by attitude uncertainty in some cross components, are shown to cancel. Information thus obtained is exclusively high-frequency, however.
Observation equations for gradiometry are derived taking three satellite attitude angles into account. Various alternatives for the satellite's nominal attitude law are discussed.
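The rotation invariants of the symmetric gradient tensor can be sketched numerically; the tensor values and the random attitude below are invented for illustration:

```python
import numpy as np

def invariants(T):
    """The three rotation invariants of a symmetric 3x3 tensor:
    trace, sum of principal 2x2 minors, and determinant."""
    i1 = np.trace(T)
    i2 = 0.5 * (np.trace(T)**2 - np.trace(T @ T))
    i3 = np.linalg.det(T)
    return i1, i2, i3

# A trace-free symmetric tensor, as the gravitation gradient tensor is in vacuum
T = np.array([[ 1.0,  0.2, -0.3],
              [ 0.2, -2.5,  0.1],
              [-0.3,  0.1,  1.5]])

# Arbitrary (unknown) attitude: rotate by a random orthogonal matrix
rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
T_rot = Q @ T @ Q.T
```

Whatever the attitude matrix Q, the invariants of `T_rot` equal those of `T`, which is what makes them attractive observables when the attitude is only approximately known.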

    10.
The final products of the CODE Analysis Center (Center for Orbit Determination in Europe) of the International GPS Service for Geodynamics (IGS) stem from overlapping 3-day arcs. Until 31 December 1994 these long arcs were computed from scratch, i.e. by processing three days of observations of about 40 stations (by mid-1995 about 60 stations were used) of the IGS Global Network in our parameter estimation program GPSEST. Because one-day arcs have to be produced first (for the purpose of error detection, etc.), this procedure was rather time-consuming. In the present article we develop the mathematical tools necessary to form long arcs from the normal equation systems of consecutive short arcs (one-day solutions in the case of CODE). The procedure in its simplest version is as follows:
    • Each short arc is described by six initial conditions and a number of dynamical orbit parameters (e.g. radiation pressure parameters). The resulting long arc in turn shall be based on n consecutive short arcs and described by six initial conditions and the same number of dynamical parameters as the short arcs.
    • By requiring position and velocity to be continuous at the boundaries of the short arcs, we obtain a long arc which is actually defined by one set of initial conditions and n sets of dynamical parameters (if n short arcs are combined).
    • By requiring the dynamical parameters to be identical in consecutive short arcs, the resulting long arc is characterized by exactly the same number of orbit parameters as each of the short arcs.
    • This procedure is not yet optimized, because formally all n sets of orbit parameters have to be set up and solved for in the long-arc solution (although they are not independent). To allow for an optimized solution we derive all relations necessary to eliminate the redundant parameters in the combination. Each long arc is then characterized by the actual number of independent orbit parameters, and the resulting procedure is very efficient.
From the point of view of the result, the new procedure is completely equivalent to an actual re-evaluation of all observations pertaining to the long arc. It is much more efficient and flexible, however, because it allows us to construct 2-day arcs, 3-day arcs, etc. from the previously stored daily normal equation systems without requiring much additional CPU time. The theory is developed in the first four sections. Technical aspects are dealt with in appendices A and B. The actual implementation in the Bernese GPS Software system and test results are given in section 5.
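The core idea, combining stored normal equation systems instead of re-processing the observations, can be sketched in miniature; the toy "arcs" below share all their parameters, i.e. the case where the dynamical parameters are held identical (all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
m = 4                                     # shared orbit parameters (toy example)
x_true = rng.normal(size=m)

def day_normals(n_obs):
    """Observations of one short arc; returns its stored normal equation system."""
    A = rng.normal(size=(n_obs, m))
    l = A @ x_true + rng.normal(0.0, 0.01, n_obs)
    return A.T @ A, A.T @ l, A, l

# Stack the stored daily NEQs instead of re-processing all observations
N1, b1, A1, l1 = day_normals(30)
N2, b2, A2, l2 = day_normals(30)
x_long = np.linalg.solve(N1 + N2, b1 + b2)

# Reference: re-evaluating all observations of the long arc in one batch
A = np.vstack([A1, A2])
l = np.concatenate([l1, l2])
x_ref = np.linalg.lstsq(A, l, rcond=None)[0]
```

The stacked solution is algebraically identical to the batch solution, which is the "completely equivalent" property claimed above; the paper's additional machinery handles the harder case where only some parameters are common between arcs.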

    11.
    Error analysis of the NGS’ surface gravity database
Are the National Geodetic Survey’s surface gravity data sufficient for supporting the computation of a 1 cm-accurate geoid? This paper attempts to answer this question by deriving a few measures of accuracy for this data and estimating their effects on the US geoid. We use a data set which comprises ${\sim }1.4$ million gravity observations collected in 1,489 surveys. Comparisons to GRACE-derived gravity and geoid are made to estimate the long-wavelength errors. Crossover analysis and $K$ -nearest neighbor predictions are used for estimating local gravity biases and high-frequency gravity errors, and the corresponding geoid biases and high-frequency geoid errors are evaluated. Results indicate that 244 of all 1,489 surface gravity surveys have significant biases ${>}2$  mGal, with geoid implications that reach 20 cm. Some of the biased surveys are large enough in horizontal extent to be reliably corrected by satellite-derived gravity models, but many others are not. In addition, the results suggest that the data are contaminated by high-frequency errors with an RMS of ${\sim }2.2$  mGal. This causes high-frequency geoid errors of a few centimeters in and to the west of the Rocky Mountains and in the Appalachians and a few millimeters or less everywhere else. Finally, long-wavelength ( ${>}3^{\circ }$ ) surface gravity errors on the sub-mGal level but with large horizontal extent are found. All of the south and southeast of the USA is biased by +0.3 to +0.8 mGal and the Rocky Mountains by $-0.1$ to $-0.3$  mGal. These small but extensive gravity errors lead to long-wavelength geoid errors that reach 60 cm in the interior of the USA.

    12.
We can map zenith wet delays onto precipitable water with a conversion factor, but in order to calculate the exact conversion factor we must precisely calculate its key variable $T_\mathrm{m}$. Yao et al. (J Geod 86:1125–1135, 2012. doi:10.1007/s00190-012-0568-1) established the first generation of the global $T_\mathrm{m}$ model (GTm-I) with ground-based radiosonde data, but due to the lack of radiosonde data at sea, the model appears to be abnormal in some areas. Given that sea surface temperature varies less than that on land, and that the GPT model and the Bevis $T_\mathrm{m}$–$T_\mathrm{s}$ relationship are accurate enough to describe the surface temperature and $T_\mathrm{m}$, this paper capitalizes on the GPT model and the Bevis $T_\mathrm{m}$–$T_\mathrm{s}$ relationship to provide simulated $T_\mathrm{m}$ at sea, as compensation for the lack of data. Combined with the $T_\mathrm{m}$ from radiosonde data, we recalculated the GTm model coefficients. The results show that this method not only improves the accuracy of the GTm model significantly at sea but also improves it on land, making the GTm model more stable and practically applicable.
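The underlying conversion from zenith wet delay to precipitable water via $T_\mathrm{m}$ can be sketched with the Bevis constants; the sample temperature and delay are invented for illustration:

```python
# Conversion of zenith wet delay (ZWD) to precipitable water (PW) via the
# weighted mean temperature Tm. Constants follow Bevis et al. (1992);
# the Tm-Ts relation below is the Bevis relationship cited in the abstract.

RHO_W = 1000.0      # density of liquid water, kg/m^3
R_V   = 461.5       # specific gas constant of water vapour, J/(kg K)
K2P   = 0.221       # k2' = 22.1 K/hPa, converted to K/Pa
K3    = 3.739e3     # k3 = 3.739e5 K^2/hPa, converted to K^2/Pa

def tm_from_ts(ts):
    """Bevis Tm-Ts relationship (temperatures in kelvin)."""
    return 70.2 + 0.72 * ts

def pw_from_zwd(zwd_m, tm):
    """Precipitable water (metres) from zenith wet delay (metres)."""
    pi = 1.0e6 / (RHO_W * R_V * (K3 / tm + K2P))   # dimensionless, ~0.15
    return pi * zwd_m

tm = tm_from_ts(288.15)          # surface temperature Ts = 15 degC (assumed)
pw = pw_from_zwd(0.200, tm)      # a 200 mm wet delay (assumed)
```

An error in $T_\mathrm{m}$ propagates almost linearly into PW, which is why the GTm model's accuracy at sea matters for this conversion.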

    13.
The estimation of crustal deformations from repeated baseline measurements is a singular problem in the absence of prior information. One often-applied solution is a free adjustment in which the singular normal matrix is augmented with a set of inner constraints. These constraints impose no net translation or rotation on the estimated deformations X, which may not be physically meaningful for a particular problem. The introduction of an available geophysical model, from which an expected deformation vector \(\bar X\) and its covariance matrix \(\sum _{\bar X} \) can be computed, will direct X to a physically more meaningful solution. Three possible estimators are investigated for estimating deformations from a combination of baseline measurements and geophysical models.
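One natural combined estimator augments the singular normals with the model information \(\bar X\), \(\sum _{\bar X} \). The abstract does not specify the paper's three estimators, so the sketch below is a generic example with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
x_bar = np.array([1.0, -0.5, 0.2])       # expected deformation from the model
Sx = 0.2**2 * np.eye(n)                  # its covariance (assumed)

# Baseline observations measure only differences, so the datum is undefined
# and the normal matrix A^T P A is singular
A = np.array([[1., -1.,  0.],
              [0.,  1., -1.],
              [1.,  0., -1.]])
x_true = x_bar + rng.normal(0.0, 0.1, n)
l = A @ x_true + rng.normal(0.0, 0.05, 3)
P = np.eye(3) / 0.05**2

# Combination: the prior regularizes the singular normals and directs the
# estimate toward a physically meaningful solution
Sx_inv = np.linalg.inv(Sx)
x_hat = np.linalg.solve(A.T @ P @ A + Sx_inv,
                        A.T @ P @ l + Sx_inv @ x_bar)
```

The baselines fix the shape of the deformation field while the geophysical model fixes its absolute level, exactly the role the inner constraints would otherwise play.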

    14.
Well-credited and widely used ionospheric models, such as the International Reference Ionosphere or NeQuick, describe the variation of the electron density with height by means of a piecewise profile tied to the F2-peak parameters: the electron density, $N_m \mathrm{F2}$, and the height, $h_m \mathrm{F2}$. Accurate values of these parameters are crucial for retrieving reliable electron density estimates from those models. When direct measurements of these parameters are not available, the models compute them using the so-called ITU-R database, which was established in the early 1960s. This paper presents a technique aimed at routinely updating the ITU-R database using radio occultation electron density profiles derived from GPS measurements gathered by low Earth orbit satellites. Before being used, these radio occultation profiles are validated by fitting an electron density model to them. A re-weighted least-squares algorithm is used for down-weighting unreliable measurements (occasionally, entire profiles) and for retrieving $N_m \mathrm{F2}$ and $h_m \mathrm{F2}$ values, together with their error estimates, from the profiles. These values are used to update the database monthly; it consists of two sets of ITU-R-like coefficients that can easily be implemented in the IRI or NeQuick models. The technique was tested with radio occultation electron density profiles delivered to the community by the COSMIC/FORMOSAT-3 mission team. Tests were performed for solstice and equinox seasons under high and low solar activity conditions. The global mean error of the resulting maps, estimated by the least-squares technique, is between $0.5\times 10^{10}$ and $3.6\times 10^{10}$ el/m $^{3}$ for the F2-peak electron density (equivalent to 7% of the value of the estimated parameter) and from 2.0 to 5.6 km for the height ( $\sim $ 2%).

    15.
    M-estimation with probabilistic models of geodetic observations
The paper concerns \(M\) -estimation with probabilistic models of geodetic observations, called \(M_{\mathcal {P}}\) estimation. Special attention is paid to \(M_{\mathcal {P}}\) estimation that includes the asymmetry and the excess kurtosis, which are the basic anomalies of empirical distributions of errors of geodetic or astrometric observations (in comparison to Gaussian errors). It is assumed that the influence function of \(M_{\mathcal {P}}\) estimation is equal to the differential equation that defines the system of Pearson distributions. The central moments \(\mu _{k},\, k=2,3,4\), are the parameters of that system and thus also the parameters of the chosen influence function. The \(M_{\mathcal {P}}\) estimation that includes the Pearson type IV and VII distributions (the \(M_{\mathrm{PD(l)}}\) method) is analyzed in great detail, both theoretically and in numerical tests. The chosen distributions are leptokurtic with asymmetry, which matches the general characteristics of empirical distributions. Considering \(M\) -estimation with probabilistic models, the Gram–Charlier series are also applied to approximate the models in question (the \(M_{\mathrm{G-C}}\) method). The paper shows that \(M_{\mathcal {P}}\) estimation with probabilistic models belongs to the class of robust estimations; the \(M_{\mathrm{PD(l)}}\) method is especially effective in that case. It is suggested that even in the absence of significant anomalies the method in question should be regarded as robust against gross errors, its robustness being controlled by the pseudo-kurtosis.
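A generic robust M-estimation loop, iteratively reweighted least squares with a Huber influence function, illustrates the class of estimators involved. This is not the paper's Pearson-based \(M_{\mathcal {P}}\) scheme, and all data below are invented:

```python
import numpy as np

def huber_irls(A, l, k=1.5, iters=50):
    """Robust M-estimation by iteratively reweighted least squares
    (illustrative Huber influence function, not the M_P influence function)."""
    x = np.linalg.lstsq(A, l, rcond=None)[0]
    for _ in range(iters):
        r = l - A @ x
        s = 1.4826 * np.median(np.abs(r)) + 1e-12     # robust scale via MAD
        w = np.clip(k * s / np.maximum(np.abs(r), 1e-12), None, 1.0)
        x = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * l))
    return x

rng = np.random.default_rng(5)
A = np.column_stack([np.ones(20), np.arange(20.0)])   # line fit: intercept, slope
x_true = np.array([1.0, 0.5])
l = A @ x_true + rng.normal(0.0, 0.05, 20)
l[3] += 5.0                                           # one gross error

x_ols = np.linalg.lstsq(A, l, rcond=None)[0]          # pulled by the outlier
x_rob = huber_irls(A, l)                              # downweights it
```

The \(M_{\mathcal {P}}\) approach replaces the Huber weight by one derived from the Pearson-system influence function, so that asymmetry and excess kurtosis of the error distribution shape the downweighting.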

    16.
We present new insights on the time-averaged surface velocities and the convergence and extension rates along arc-normal transects in the Kumaon, Garhwal and Kashmir–Himachal regions of the Indian Himalaya from 13 years of high-precision Global Positioning System (GPS) time series (1995–2008), derived from GPS data at 14 permanent and 42 campaign stations between 29.5–35°N and 76–81°E. The GPS surface horizontal velocities vary significantly from the Higher to the Lesser Himalaya and are of the order of 30 to 48 mm/year NE in the ITRF 2005 reference frame, and 17 to 2 mm/year SW in an India-fixed reference frame, indicating that this region is accommodating less than 2 cm/year of the India–Eurasia plate motion (~4 cm/year). The total arc-normal shortening varies between ~10–14 mm/year along the different transects of the northwest Himalayan wedge, between the Indo-Tsangpo suture to the north and the Indo-Gangetic foreland to the south, indicating high strain accumulation in the Himalayan wedge. This convergence is accommodated differentially along the arc-normal transects: ~5–10 mm/year in the Lesser Himalaya and 3–4 mm/year in the Higher Himalaya south of the South Tibetan Detachment. Most of the convergence in the Lesser Himalaya of Garhwal and Kumaon is accommodated just south of the Main Central Thrust fault trace, indicating high strain accumulation in this region, which is also consistent with its high seismic activity. In addition, for the first time an arc-normal extension of ~6 mm/year has been observed in the Tethyan Himalaya of Kumaon.
Inverse modeling of the GPS-derived surface deformation rates in the Garhwal and Kumaon Himalaya using a single dislocation indicates that the Main Himalayan Thrust is locked from the surface to a depth of ~15–20 km over a width of 110 km, with an associated slip rate of ~16–18 mm/year. These results indicate that the arc-normal rates in the Northwest Himalaya have a complex deformation pattern involving both convergence and extension, and rigorous seismo-tectonic models of the Himalaya are necessary to account for this pattern. The results also give an estimate of the co-seismic and post-seismic motion associated with the 1999 Chamoli earthquake, which is modeled to derive the slip and geometry of the rupture plane.

    17.
This paper describes the historical sea level data that we have rescued from a tide gauge originally devised especially for geodesy. This gauge was installed in Marseille in 1884 with the primary objective of defining the origin of the height system in France. Hourly values for 1885–1988 have been digitized from the original tidal charts. They are supplemented by hourly values from an older tide gauge record (1849–1851) that was rediscovered during a survey in 2009. Both recovered data sets have been critically edited for errors and their reliability assessed. The hourly values are thoroughly analysed for the first time since their original recording. A consistent high-frequency time series is reported, notably increasing the length of one of the few European sea level records in the Mediterranean Sea spanning more than one hundred years. Changes in sea levels are examined, and previous results revisited with the extended time series. The rate of relative sea level change for the period 1849–2012 is estimated to have been \(1.08\pm 0.04\)  mm/year at Marseille, a value that is slightly lower than, but in close agreement with, the longest time series of Brest over the common period ( \(1.26\pm 0.04\)  mm/year). The data from a permanent global positioning system station installed on the roof of the solid tide gauge building suggest a remarkable stability of the ground ( \(-0.04\pm 0.25\)  mm/year) since 1998, confirming the choice made by our predecessor geodesists in the nineteenth century regarding this site selection.
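A rate such as \(1.08\pm 0.04\) mm/year comes from a linear trend fit of the kind sketched below; the synthetic monthly series (trend and noise level) is invented for illustration:

```python
import numpy as np

def trend_mm_per_year(t_years, sl_mm):
    """Linear relative sea-level trend and its formal standard error."""
    A = np.column_stack([np.ones_like(t_years), t_years])
    x = np.linalg.lstsq(A, sl_mm, rcond=None)[0]
    r = sl_mm - A @ x
    s2 = r @ r / (len(sl_mm) - 2)            # a-posteriori variance factor
    cov = s2 * np.linalg.inv(A.T @ A)
    return x[1], np.sqrt(cov[1, 1])

# Synthetic monthly means over 100 years: an assumed 1.08 mm/yr trend plus
# tide-gauge-like scatter (values illustrative only)
rng = np.random.default_rng(6)
t = 1885 + np.arange(100 * 12) / 12.0
sl = 1.08 * (t - t[0]) + rng.normal(0.0, 50.0, t.size)

rate, sigma = trend_mm_per_year(t, sl)
```

The formal error shrinks with record length much faster than with sampling density, which is why rescuing the 1849–1851 and 1885–1988 charts matters so much for the trend estimate.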

    18.
    The LLL algorithm, introduced by Lenstra et al. (Math Ann 261:515–534, 1982), plays a key role in many fields of applied mathematics. In particular, it is used as an effective numerical tool for preconditioning the integer least-squares problems arising in high-precision geodetic positioning and Global Navigation Satellite Systems (GNSS). In 1992, Teunissen developed a method for solving these nearest-lattice-point (NLP) problems. This method is referred to as Lambda (for Least-squares AMBiguity Decorrelation Adjustment). The preconditioning stage of Lambda corresponds to its decorrelation algorithm. From an epistemological point of view, the latter was devised through an innovative statistical approach completely independent of the LLL algorithm. Recent papers have pointed out some similarities between the LLL algorithm and the Lambda decorrelation algorithm. We try to clarify this point in the paper. We first introduce a parameter measuring the orthogonality defect of the integer basis in which the NLP problem is solved: the LLL-reduced basis of the LLL algorithm, or the $\Lambda$-basis of the Lambda method. The potential qualities of these bases for this problem can then be compared. The $\Lambda$-basis is built by working at the level of the variance–covariance matrix of the float solution, while the LLL-reduced basis is built by working at the level of its inverse. As a general rule, the orthogonality defect of the $\Lambda$-basis is greater than that of the corresponding LLL-reduced basis; these bases are, however, very close to one another. To specify this tight relationship, we present a method that provides the dual LLL-reduced basis of a given $\Lambda$-basis. As a consequence of this basic link, all the recent developments made on the LLL algorithm can be applied to the Lambda decorrelation algorithm. This point is illustrated in a concrete manner: we present a parallel $\Lambda$-type decorrelation algorithm derived from the parallel LLL algorithm of Luo and Qiao (Proceedings of the fourth international C$^*$ conference on computer science and software engineering. ACM Int Conf P Series. ACM Press, pp 93–101, 2012).
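The orthogonality defect used above to compare bases has a simple closed form: the product of the basis-vector norms divided by the absolute determinant of the basis, equal to 1 exactly when the basis is orthogonal. A small sketch (the 2-D basis and the unimodular transform are illustrative choices, not taken from the paper):

```python
import numpy as np

def orthogonality_defect(B):
    """Orthogonality defect of a full-rank lattice basis whose columns
    are the basis vectors: prod ||b_i|| / |det B|.  Equals 1 for an
    orthogonal basis and grows as the vectors become more correlated."""
    norms = np.linalg.norm(B, axis=0)
    return float(np.prod(norms) / abs(np.linalg.det(B)))

# A strongly correlated 2-D basis ...
B = np.array([[1.0, 0.9],
              [0.9, 1.0]])
# ... and an equivalent basis of the same lattice, obtained with the
# unimodular transform Z = [[1, -1], [0, 1]] (determinant unchanged)
Z = np.array([[1, -1],
              [0, 1]])
B_reduced = B @ Z
```

Both decorrelation (Lambda) and reduction (LLL) aim to drive this defect toward 1 by unimodular transforms; the abstract's point is that they do so from dual representations of the same problem.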

    19.
    Deformations of radio telescopes used in geodetic and astrometric very long baseline interferometry (VLBI) observations belong to the class of systematic error sources which require correction in data analysis. In this paper we present a model for all path length variations in the geometrical optics of radio telescopes which are due to gravitational deformation. The Effelsberg 100 m radio telescope of the Max Planck Institute for Radio Astronomy, Bonn, Germany, has been surveyed by various terrestrial methods. Thus, all necessary information that is needed to model the path length variations is available. Additionally, a ray tracing program has been developed which uses as input the parameters of the measured deformations to produce an independent check of the theoretical model. In this program as well as in the theoretical model, the illumination function plays an important role because it serves as the weighting function for the individual path lengths depending on the distance from the optical axis. For the Effelsberg telescope, the biggest contribution to the total path length variations is the bending of the main beam located along the elevation axis which partly carries the weight of the paraboloid at its vertex. The difference in total path length is almost \(-100\) mm when comparing observations at \(90^\circ\) and at \(0^\circ\) elevation angle. The impact of the path length corrections is validated in a global VLBI analysis. The application of the correction model leads to a change in the vertical position of \(+120\) mm. This is more than the maximum path length, but the effect can be explained by the shape of the correction function.
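The role of the illumination function described above is that of a radial weight: each ray's path-length change is averaged over the aperture, weighted by how strongly the feed illuminates that radius. A sketch under a simple quadratic-taper illumination model (the taper law, edge level, and deformation pattern are illustrative assumptions; the actual Effelsberg illumination function differs):

```python
import numpy as np

def weighted_path_length(delta_l, r, taper_db=-12.0):
    """Illumination-weighted mean of ray path-length changes delta_l(r)
    over a circular aperture of radius r[-1].  The weight is a quadratic
    taper whose edge level is `taper_db` decibels below the centre."""
    edge = 10.0 ** (taper_db / 20.0)
    x = r / r[-1]
    w = edge + (1.0 - edge) * (1.0 - x**2)   # illumination weight
    dr = np.gradient(r)
    num = np.sum(w * delta_l * r * dr)       # area element ~ 2*pi*r*dr
    den = np.sum(w * r * dr)                 # (the 2*pi cancels)
    return num / den

r = np.linspace(0.0, 50.0, 1001)             # 100 m dish -> 50 m radius
delta_l = 1.0 - (r / 50.0) ** 2              # hypothetical deformation (mm)
tapered = weighted_path_length(delta_l, r)          # centre-weighted mean
uniform = weighted_path_length(delta_l, r, 0.0)     # flat illumination
```

Because the taper down-weights the rim, deformations near the vertex contribute more to the total path length than those at the edge, which is why `tapered` exceeds `uniform` for this centre-peaked deformation pattern.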

    20.
    The frequency stability and uncertainty of the latest generation of optical atomic clocks is now approaching the one part in \(10^{18}\) level. Comparisons between earthbound clocks at rest must account for the relativistic redshift of the clock frequencies, which is proportional to the corresponding gravity (gravitational plus centrifugal) potential difference. For contributions to international timescales, the relativistic redshift correction must be computed with respect to a conventional zero potential value in order to be consistent with the definition of Terrestrial Time. To benefit fully from the uncertainty of the optical clocks, the gravity potential must be determined with an accuracy of about \(0.1\,\hbox {m}^{2}\,\hbox {s}^{-2}\), equivalent to about 0.01 m in height. This contribution focuses on the static part of the gravity field, assuming that temporal variations are accounted for separately by appropriate reductions. Two geodetic approaches are investigated for the derivation of gravity potential values: geometric levelling and the Global Navigation Satellite Systems (GNSS)/geoid approach. Geometric levelling gives potential differences with millimetre uncertainty over shorter distances (several kilometres), but is susceptible to systematic errors at the decimetre level over large distances. The GNSS/geoid approach gives absolute gravity potential values, but with an uncertainty corresponding to about 2 cm in height. For large distances, the GNSS/geoid approach should therefore be better than geometric levelling. This is demonstrated by the results from practical investigations related to three clock sites in Germany and one in France. The estimated uncertainty for the relativistic redshift correction at each site is about \(2 \times 10^{-18}\).
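The numbers in this abstract follow from the first-order redshift relation \(\Delta f/f = \Delta W/c^{2}\): a gravity-potential uncertainty of \(0.1\,\hbox {m}^{2}\,\hbox {s}^{-2}\) maps to a fractional frequency uncertainty near \(10^{-18}\). A minimal check of that arithmetic (the nominal surface gravity value is an illustrative assumption):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s (exact by definition)
G_SURF = 9.81      # nominal surface gravity, m/s^2 (assumed round value)

def redshift(delta_w):
    """First-order fractional frequency shift for a gravity-potential
    difference delta_w in m^2 s^-2: delta_f/f = delta_w / c^2."""
    return delta_w / C**2

# 0.1 m^2/s^2 target accuracy -> about 1.1e-18 in fractional frequency
shift = redshift(0.1)
# equivalently, about 1 cm in height near the Earth's surface
height_equiv_m = 0.1 / G_SURF
```

This is why a 2 cm height uncertainty from the GNSS/geoid approach corresponds to a redshift-correction uncertainty of about \(2 \times 10^{-18}\), the figure quoted for the clock sites.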
