Similar Literature
1.
Zenith wet delays can be mapped onto precipitable water with a conversion factor, but calculating the exact conversion factor requires a precise value of its key variable $T_\mathrm{m}$. Yao et al. (J Geod 86:1125–1135, 2012. doi:10.1007/s00190-012-0568-1) established the first-generation global $T_\mathrm{m}$ model (GTm-I) from ground-based radiosonde data, but owing to the lack of radiosonde data at sea, the model behaves abnormally in some areas. Given that sea surface temperature varies less than temperature on land, and that the GPT model and the Bevis $T_\mathrm{m}$–$T_\mathrm{s}$ relationship describe the surface temperature and $T_\mathrm{m}$ accurately enough, this paper uses the GPT model and the Bevis $T_\mathrm{m}$–$T_\mathrm{s}$ relationship to provide simulated $T_\mathrm{m}$ at sea, compensating for the lack of data. Combined with the $T_\mathrm{m}$ from radiosonde data, we recalculated the GTm model coefficients. The results show that this method not only improves the accuracy of the GTm model significantly at sea but also improves it on land, making the GTm model more stable and practically applicable.
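For orientation, the following minimal Python sketch shows how a conversion factor built from a Bevis-type $T_\mathrm{m}$–$T_\mathrm{s}$ relationship maps a zenith wet delay onto precipitable water vapor. It illustrates the general conversion only, not the GTm model itself; the refractivity constants and the Bevis coefficients are the values commonly quoted in the GPS meteorology literature and are assumptions, not numbers taken from this paper.

```python
import numpy as np

# Constants commonly used for the ZWD -> PWV conversion (assumed values,
# as usually quoted in the GPS meteorology literature).
RHO_W = 1000.0      # density of liquid water [kg m^-3]
R_V = 461.5         # specific gas constant of water vapour [J kg^-1 K^-1]
K2_PRIME = 22.1     # refractivity constant k2' [K hPa^-1]
K3 = 3.739e5        # refractivity constant k3  [K^2 hPa^-1]

def tm_bevis(ts_kelvin):
    """Weighted mean temperature from surface temperature (Bevis-type relation)."""
    return 70.2 + 0.72 * ts_kelvin

def pwv_from_zwd(zwd_m, ts_kelvin):
    """Convert a zenith wet delay [m] to precipitable water vapour [m]."""
    tm = tm_bevis(ts_kelvin)
    # k2' and k3 are per hPa; divide by 100 to work in SI pressure units (Pa).
    denom = RHO_W * R_V * (K3 / tm + K2_PRIME) / 100.0
    pi_factor = 1.0e6 / denom          # dimensionless conversion factor (~0.15)
    return pi_factor * zwd_m

if __name__ == "__main__":
    zwd = 0.20                          # 20 cm zenith wet delay
    ts = 288.15                         # 15 degC surface temperature
    print(f"PWV = {pwv_from_zwd(zwd, ts) * 1000:.1f} mm")
```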

2.
A neural network model for predicting weighted mean temperature
Maohua Ding, Journal of Geodesy, 2018, 92(10): 1187–1198
Water vapor is an important element of the Earth’s atmosphere, and most of it is concentrated at the bottom of the troposphere. Measuring water vapor with Global Navigation Satellite Systems (GNSS) is an important direction of GNSS research. In particular, when the zenith wet delay is converted to precipitable water vapor, the weighted mean temperature \(T_\mathrm{m}\) is a key parameter that must be determined in this conversion. The purpose of the study is to obtain a more accurate \(T_\mathrm{m}\) model for global users by combining two different characteristics of \(T_\mathrm{m}\): its seasonal variations and its relationships with surface meteorological elements. The modeling was carried out using neural network techniques, and a multilayer feedforward neural network model (the NN) was established. The NN model requires measurements of only the surface temperature \(T_\mathrm{S}\). The NN was validated and compared with four other published global \(T_\mathrm{m}\) models; the results show that the NN performed better than any of the four compared models on the global scale.
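To make the feedforward architecture concrete, here is a minimal sketch of a one-hidden-layer network trained to map surface temperature and day-of-year terms to \(T_\mathrm{m}\). The synthetic training data, the network size and the learning rate are illustrative assumptions; they are not the design or the data set used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data (illustrative only): Tm driven by surface
# temperature plus a weak seasonal term, with random noise added.
n = 2000
ts = rng.uniform(250.0, 310.0, size=n)                 # surface temperature [K]
doy = rng.uniform(0.0, 365.25, size=n)                 # day of year
tm = 70.2 + 0.72 * ts + 2.0 * np.cos(2 * np.pi * doy / 365.25) + rng.normal(0, 1.5, n)

X = np.column_stack([ts, np.cos(2 * np.pi * doy / 365.25),
                     np.sin(2 * np.pi * doy / 365.25)])
y = tm.reshape(-1, 1)

# Standardize inputs and target for stable training.
Xm, Xs = X.mean(0), X.std(0)
ym, ys = y.mean(), y.std()
Xn, yn = (X - Xm) / Xs, (y - ym) / ys

# One hidden layer with tanh activation, trained by full-batch gradient descent.
h, lr = 16, 0.05
W1 = rng.normal(0, 0.5, (Xn.shape[1], h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.5, (h, 1));           b2 = np.zeros(1)

for epoch in range(3000):
    a1 = np.tanh(Xn @ W1 + b1)          # hidden layer
    yhat = a1 @ W2 + b2                 # linear output layer
    err = yhat - yn
    # Backpropagation of the mean-squared-error loss.
    g_out = 2.0 * err / n
    gW2, gb2 = a1.T @ g_out, g_out.sum(0)
    g_hid = (g_out @ W2.T) * (1.0 - a1 ** 2)
    gW1, gb1 = Xn.T @ g_hid, g_hid.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def predict_tm(ts_k, doy_val):
    """Predict Tm [K] from surface temperature [K] and day of year."""
    x = np.array([ts_k, np.cos(2 * np.pi * doy_val / 365.25),
                  np.sin(2 * np.pi * doy_val / 365.25)])
    a1 = np.tanh((x - Xm) / Xs @ W1 + b1)
    return float((a1 @ W2 + b2).item() * ys + ym)

print(f"Predicted Tm at Ts=288 K, DOY=120: {predict_tm(288.0, 120.0):.1f} K")
```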

3.
Well-credited and widely used ionospheric models, such as the International Reference Ionosphere or NeQuick, describe the variation of the electron density with height by means of a piecewise profile tied to the F2-peak parameters: the electron density, $N_m \mathrm{F2}$, and the height, $h_m \mathrm{F2}$. Accurate values of these parameters are crucial for retrieving reliable electron density estimations from those models. When direct measurements of these parameters are not available, the models compute them from the so-called ITU-R database, which was established in the early 1960s. This paper presents a technique aimed at routinely updating the ITU-R database using radio occultation electron density profiles derived from GPS measurements gathered from low Earth orbit satellites. Before being used, these radio occultation profiles are validated by fitting an electron density model to them. A re-weighted least-squares algorithm is used to down-weight unreliable measurements (occasionally, entire profiles) and to retrieve $N_m \mathrm{F2}$ and $h_m \mathrm{F2}$ values, together with their error estimates, from the profiles. These values are used to update the database monthly; the database consists of two sets of ITU-R-like coefficients that could easily be implemented in the IRI or NeQuick models. The technique was tested with radio occultation electron density profiles delivered to the community by the COSMIC/FORMOSAT-3 mission team. Tests were performed for solstice and equinox seasons under high and low solar activity conditions. The global mean error of the resulting maps, estimated by the least-squares technique, is between $0.5\times 10^{10}$ and $3.6\times 10^{10}$ elec/m$^{3}$ for the F2-peak electron density (equivalent to 7 % of the value of the estimated parameter) and from 2.0 to 5.6 km for the height ($\sim $2 %).
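The down-weighting idea can be illustrated with a generic iteratively re-weighted least squares loop. The sketch below fits a simple linear model and suppresses outliers with Huber-type weights; it stands in for, rather than reproduces, the profile-fitting and error-estimation procedure described above, and the data and tuning constant are illustrative assumptions.

```python
import numpy as np

def irls(A, y, n_iter=20, k=1.345):
    """Iteratively re-weighted least squares with Huber-type weights.

    A : (m, p) design matrix, y : (m,) observations.
    Observations with large standardized residuals get weights < 1,
    so unreliable measurements are progressively down-weighted.
    """
    w = np.ones(len(y))
    x = np.zeros(A.shape[1])
    scale = 1.0
    for _ in range(n_iter):
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
        r = y - A @ x
        scale = 1.4826 * np.median(np.abs(r - np.median(r)))  # robust sigma (MAD)
        u = np.abs(r) / max(scale, 1e-12)
        w = np.where(u <= k, 1.0, k / u)                       # Huber weights
    # Error estimates from the weighted normal matrix.
    cov = np.linalg.inv(A.T @ np.diag(w) @ A) * scale**2
    return x, np.sqrt(np.diag(cov))

# Example: straight-line fit with a few gross outliers injected.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 60)
y = 2.0 + 0.5 * t + rng.normal(0, 0.2, t.size)
y[::15] += 5.0                                 # gross errors
A = np.column_stack([np.ones_like(t), t])
x_hat, sig = irls(A, y)
print("estimates:", x_hat, "sigmas:", sig)
```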

4.
Fast error analysis of continuous GNSS observations with missing data
One of the most widely used methods for the time-series analysis of continuous Global Navigation Satellite System (GNSS) observations is Maximum Likelihood Estimation (MLE), which in most implementations requires $\mathcal{O}(n^3)$ operations for $n$ observations. Previous research by the authors has shown that this number of operations can be reduced to $\mathcal{O}(n^2)$ for observations without missing data. In the current research we present a reformulation of the equations that preserves this low number of operations, even in the common situation of having some missing data. Our reformulation assumes that the noise is stationary to ensure a Toeplitz covariance matrix. However, most GNSS time series exhibit power-law noise, which is weakly non-stationary. To overcome this problem, we present a Toeplitz covariance matrix that provides an approximation for power-law noise that is accurate for most GNSS time series. Numerical results are given for a set of synthetic data and a set of International GNSS Service (IGS) stations, demonstrating a reduction in computation time by a factor of 10–100 compared to the standard MLE method, depending on the length of the time series and the amount of missing data.
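The computational benefit of a stationary (Toeplitz) covariance can be demonstrated with SciPy's Levinson-based solver, which works from the first column of the matrix and avoids the cubic cost of a general dense solve. The simple autoregressive-plus-white autocovariance used below is only a stand-in for the power-law approximation developed in the paper.

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

n = 1000
lags = np.arange(n)

# Illustrative stationary autocovariance: first-order autoregressive decay
# plus white noise on the diagonal (assumed, not the paper's approximation).
autocov = 0.9 ** lags
autocov[0] += 0.25

rng = np.random.default_rng(2)
residuals = rng.normal(size=n)

# O(n^2) Levinson-Durbin solve using only the first column of the
# Toeplitz covariance matrix ...
x_fast = solve_toeplitz(autocov, residuals)

# ... versus the O(n^3) general solve on the explicitly built matrix.
C = toeplitz(autocov)
x_slow = np.linalg.solve(C, residuals)

print("max difference between the two solutions:",
      np.max(np.abs(x_fast - x_slow)))
```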

5.
The well-known least squares collocation model (I) $$\ell = Ax + \begin{bmatrix} O \\ I \end{bmatrix}^T \begin{bmatrix} s \\ s' + n \end{bmatrix}$$ is compared with the model (II) $$\ell = Ax + \begin{bmatrix} R \\ I \end{bmatrix}^T \begin{bmatrix} s \\ n \end{bmatrix}$$ The basic differences between these two models in the framework of physical geodesy are pointed out by analyzing the validity of the equation $$s' = Rs$$ that transforms one model into the other, for different cases. For clarification purposes, least squares filtering, prediction and collocation are discussed separately. In filtering problems the coefficient matrix R becomes the unit matrix, and thus the two models become identical. For prediction and collocation problems the relation $s' = Rs$ is only fulfilled in the global limit, where s becomes either a continuous function on the earth or an infinite set of spherical harmonic coefficients. Applying Model (II), we see that for any finite dimension of s the operator equations of physical geodesy are approximated by a finite matrix relation, whereas in Model (I) the operator equations are applied in their correct form on a continuous, approximate function \(\tilde s\).
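As a small numerical illustration of least squares collocation in the prediction setting discussed above, the sketch below estimates trend parameters and then predicts the signal at unobserved points from a Gaussian covariance function. The covariance model and the data are illustrative assumptions and do not come from the paper.

```python
import numpy as np

def gauss_cov(t1, t2, var=1.0, corr_len=0.5):
    """Illustrative Gaussian signal covariance between two sets of points."""
    d = t1[:, None] - t2[None, :]
    return var * np.exp(-(d / corr_len) ** 2)

rng = np.random.default_rng(3)
t_obs = np.linspace(0.0, 5.0, 40)                 # observation points
t_new = np.linspace(0.0, 5.0, 200)                # prediction points

# Simulated observations: linear trend + smooth signal + noise.
signal = np.sin(2.0 * t_obs)
noise_var = 0.05 ** 2
ell = 1.0 + 0.3 * t_obs + signal + rng.normal(0, np.sqrt(noise_var), t_obs.size)

A = np.column_stack([np.ones_like(t_obs), t_obs])                # deterministic part
C_ll = gauss_cov(t_obs, t_obs) + noise_var * np.eye(t_obs.size)  # signal + noise cov.
C_sl = gauss_cov(t_new, t_obs)                                   # new points vs obs.

# Least squares collocation: estimate the trend, then predict the signal.
Cinv = np.linalg.inv(C_ll)
x_hat = np.linalg.solve(A.T @ Cinv @ A, A.T @ Cinv @ ell)
s_hat = C_sl @ Cinv @ (ell - A @ x_hat)

print("estimated trend parameters:", x_hat)
```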

6.
Canadian gravimetric geoid model 2010
A new gravimetric geoid model, Canadian Gravimetric Geoid 2010 (CGG2010), has been developed to upgrade the previous geoid model CGG2005. CGG2010 represents the separation between the reference ellipsoid of GRS80 and the Earth’s equipotential surface of $W_0=62{,}636{,}855.69~\mathrm{m}^2\mathrm{s}^{-2}$. The Stokes–Helmert method has been re-formulated for the determination of CGG2010 by a new Stokes kernel modification. It reduces the effect of the systematic error in the Canadian terrestrial gravity data on the geoid to below 2 cm, from about 20 cm with other existing modification techniques, and renders a smooth spectral combination of the satellite and terrestrial gravity data. The long-wavelength components of CGG2010 include the GOCE contribution contained in a combined GRACE and GOCE geopotential model, GOCO01S, which ranges from $-20.1$ to 16.7 cm with an RMS of 2.9 cm. Improvement has also been achieved through the refinement of the geoid modelling procedure and the use of new data. (1) The downward continuation effect has been accounted for accurately, ranging from $-22.1$ to 16.5 cm with an RMS of 0.9 cm. (2) The geoid residual from the Stokes integral is reduced to 4 cm in RMS by the use of an ultra-high-degree spherical harmonic representation of a global elevation model for deriving the reference Helmert field in conjunction with a derived global geopotential model. (3) The Canadian gravimetric geoid model is published for the first time with associated error estimates. In addition, CGG2010 includes the new marine gravity data, ArcGP gravity grids, and the new Canadian Digital Elevation Data (CDED) 1:50K. CGG2010 is compared to GPS-levelling data in Canada. The standard deviations are estimated to vary from 2 to 10 cm, with the largest errors in the mountainous areas of western Canada. We demonstrate its improvement over the previous models CGG2005 and EGM2008.

7.
M-estimation with probabilistic models of geodetic observations
The paper concerns \(M\)-estimation with probabilistic models of geodetic observations, called \(M_{\mathcal {P}}\) estimation. Special attention is paid to \(M_{\mathcal {P}}\) estimation that includes the asymmetry and the excess kurtosis, which are the basic anomalies of empirical distributions of errors of geodetic or astrometric observations (in comparison to Gaussian errors). It is assumed that the influence function of \(M_{\mathcal {P}}\) estimation is equal to the differential equation that defines the system of Pearson distributions. The central moments \(\mu _{k},\, k=2,3,4\), are the parameters of that system and thus also the parameters of the chosen influence function. The \(M_{\mathcal {P}}\) estimation that includes the Pearson type IV and VII distributions (the \(M_{\mathrm{PD(l)}}\) method) is analyzed in great detail from a theoretical point of view as well as through numerical tests. The chosen distributions are leptokurtic with asymmetry, which reflects the general character of empirical distributions. Considering \(M\)-estimation with probabilistic models, the Gram–Charlier series are also applied to approximate the models in question (the \(M_{\mathrm{G-C}}\) method). The paper shows that \(M_{\mathcal {P}}\) estimation with the application of probabilistic models belongs to the class of robust estimations; the \(M_{\mathrm{PD(l)}}\) method is especially effective in that case. It is suggested that even in the absence of significant anomalies the method in question should be regarded as robust against gross errors, while its robustness is controlled by the pseudo-kurtosis.

8.
A terrestrial survey, called the Geoid Slope Validation Survey of 2011 (GSVS11), encompassing leveling, GPS, astrogeodetic deflections of the vertical (DOV) and surface gravity, was performed in the United States. The general purpose of the survey was to evaluate the current accuracy of gravimetric geoid models and to determine the impact of introducing new airborne gravity data from the ‘Gravity for the Redefinition of the American Vertical Datum’ (GRAV-D) project. More specifically, the GSVS11 survey was performed to determine whether or not the GRAV-D airborne gravimetry, flown at 11 km altitude, can reduce differential geoid error to below 1 cm in a low, flat, gravimetrically uncomplicated region. GSVS11 comprises a 325 km traverse from Austin to Rockport in southern Texas, and includes 218 GPS stations ($\sigma _{\Delta h}= 0.4$ cm over any distance from 0.4 to 325 km) co-located with first-order spirit-leveled orthometric heights ($\sigma _{\Delta H}= 1.3$ cm end-to-end), including new surface gravimetry, and 216 astronomically determined vertical deflections ($\sigma _{\mathrm{DOV}}= 0.1^{\prime \prime }$). The terrestrial survey data were compared in various ways to specific geoid models, including analysis of RMS residuals between all pairs of points on the line, direct comparison of DOVs to geoid slopes, and a harmonic analysis of the differences between the terrestrial data and various geoid models. These comparisons showed conclusively that, in this type of region (low and flat), geoid models computed using existing terrestrial gravity, combined with digital elevation models (DEMs) and GRACE and GOCE data, currently achieve a differential geoid accuracy of 1 to 3 cm (1$\sigma$) over distances from 0.4 to 325 km. However, the addition of a contemporaneous airborne gravity data set, flown at 11 km altitude, brought the estimated differential geoid accuracy down to 1 cm over nearly all distances from 0.4 to 325 km.

9.
The LLL algorithm, introduced by Lenstra et al. (Math Ann 261:515–534, 1982), plays a key role in many fields of applied mathematics. In particular, it is used as an effective numerical tool for preconditioning the integer least-squares problems arising in high-precision geodetic positioning and Global Navigation Satellite Systems (GNSS). In 1992, Teunissen developed a method for solving these nearest-lattice point (NLP) problems. This method is referred to as Lambda (for Least-squares AMBiguity Decorrelation Adjustment). The preconditioning stage of Lambda corresponds to its decorrelation algorithm. From an epistemological point of view, the latter was devised through an innovative statistical approach completely independent of the LLL algorithm. Recent papers pointed out some similarities between the LLL algorithm and the Lambda-decorrelation algorithm. We try to clarify this point in the paper. We first introduce a parameter measuring the orthogonality defect of the integer basis in which the NLP problem is solved, the LLL-reduced basis of the LLL algorithm, or the $\Lambda$-basis of the Lambda method. With regard to this problem, the potential qualities of these bases can then be compared. The $\Lambda$-basis is built by working at the level of the variance-covariance matrix of the float solution, while the LLL-reduced basis is built by working at the level of its inverse. As a general rule, the orthogonality defect of the $\Lambda$-basis is greater than that of the corresponding LLL-reduced basis; these bases are however very close to one another. To specify this tight relationship, we present a method that provides the dual LLL-reduced basis of a given $\Lambda$-basis. As a consequence of this basic link, all the recent developments made on the LLL algorithm can be applied to the Lambda-decorrelation algorithm. This point is illustrated in a concrete manner: we present a parallel $\Lambda$-type decorrelation algorithm derived from the parallel LLL algorithm of Luo and Qiao (Proceedings of the fourth international C$^*$ conference on computer science and software engineering. ACM Int Conf P Series. ACM Press, pp 93–101, 2012).
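The orthogonality defect referred to above has a simple closed form: the product of the basis-vector lengths divided by the absolute determinant of the basis. The sketch below evaluates it for a made-up, skewed basis; it only illustrates the quantity being compared and implements neither the LLL nor the Lambda decorrelation algorithm.

```python
import numpy as np

def orthogonality_defect(B):
    """Orthogonality defect of a lattice basis given by the columns of B.

    Defined as prod(||b_i||) / |det(B)|; it equals 1 for an orthogonal
    basis and grows as the basis becomes more skewed.
    """
    col_norms = np.linalg.norm(B, axis=0)
    return float(np.prod(col_norms) / abs(np.linalg.det(B)))

# Illustrative (made-up) basis, e.g. a triangular factor associated with
# a float-ambiguity variance-covariance matrix.
B = np.array([[1.0, 0.9, 0.8],
              [0.0, 0.4, 0.3],
              [0.0, 0.0, 0.2]])

print("orthogonality defect:", orthogonality_defect(B))
```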

10.
11.
In order to move the polar singularity of an arbitrary spherical harmonic expansion to a point on the equator, we rotate the expansion around the y-axis by \(90^{\circ }\) such that the x-axis becomes the new pole. The expansion coefficients are transformed by multiplying them by a special value of the Wigner D-matrix and a normalization factor. The transformation matrix is unchanged whether the coefficients are \(4 \pi \) fully normalized or Schmidt quasi-normalized. The matrix is recursively computed by the so-called X-number formulation (Fukushima in J Geodesy 86:271–285, 2012a). As an example, we obtained the \(2190\times 2190\) coefficients of the rectangular rotated spherical harmonic expansion of EGM2008. A proper combination of the original and the rotated expansions will be useful in (i) integrating the polar orbits of artificial satellites precisely and (ii) synthesizing/analyzing the gravitational/geomagnetic potentials and their derivatives accurately in the high-latitude regions, including the arctic and antarctic areas.

12.
Design and validation of broadcast ephemeris for low Earth orbit satellites
Low Earth orbit (LEO) constellations have the potential to augment global navigation satellite systems for better service performance. The prerequisite is to provide broadcast ephemerides that meet the accuracy requirements for navigation and positioning. In this study, the Kepler ephemeris model is chosen as the basis of the LEO broadcast ephemeris design for backward compatibility and simplicity. To eliminate the singularity caused by the smaller eccentricity of LEO satellites compared to MEO satellites, non-singular elements are introduced for curve fitting of the parameters and then transformed back to Kepler elements, so that the user's ephemeris-computation algorithm remains unchanged. We analyze the variation characteristics of LEO orbital elements and establish suitable broadcast ephemeris models considering fit accuracy, number of parameters, fit interval, and orbital altitude. The results of the fit accuracy for different fit intervals and orbital altitudes suggest that the optimal parameter selections are \((Crs3,Crc3)\), \((Crs3,Crc3,\dot{a},\dot{n})\) and \((Crs3,Crc3,\dot{a},\dot{n},\ddot{i},\ddot{a})\), i.e., adding two, four or six parameters to the GPS 16-parameter ephemeris. When adding four parameters, the fit accuracy can be improved by about one order of magnitude compared to the GPS 16-parameter ephemeris model, and fit errors of less than 10 cm can be achieved with a 20-min fit interval for a 400–1400 km orbital altitude. In addition, the effects of the number of parameters, fit interval, and orbital altitude on fit accuracy are discussed in detail. Validation with four LEO satellites in orbit also confirms the effectiveness of the proposed models.
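One common choice of non-singular elements for near-circular orbits replaces eccentricity, argument of perigee and mean anomaly with \(e\cos \omega\), \(e\sin \omega\) and the mean argument of latitude. The round-trip conversion sketched below illustrates that idea; it is an assumed, generic parameterization and not necessarily the exact one adopted in the paper.

```python
import math

def kepler_to_nonsingular(a, e, i, raan, argp, mean_anom):
    """One common non-singular set for small eccentricity:
    (a, ex, ey, i, raan, u) with ex = e*cos(argp), ey = e*sin(argp)
    and u = argp + mean_anom (mean argument of latitude)."""
    ex = e * math.cos(argp)
    ey = e * math.sin(argp)
    u = (argp + mean_anom) % (2.0 * math.pi)
    return a, ex, ey, i, raan, u

def nonsingular_to_kepler(a, ex, ey, i, raan, u):
    """Inverse transformation back to classical Kepler elements."""
    e = math.hypot(ex, ey)
    argp = math.atan2(ey, ex) % (2.0 * math.pi)
    mean_anom = (u - argp) % (2.0 * math.pi)
    return a, e, i, raan, argp, mean_anom

# Round-trip check for a LEO-like orbit (semi-major axis in metres, angles in rad).
kep = (6_878_000.0, 0.001, math.radians(97.4), 1.0, 0.3, 2.0)
ns = kepler_to_nonsingular(*kep)
back = nonsingular_to_kepler(*ns)
print("round-trip errors:", [abs(x - y) for x, y in zip(kep, back)])
```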

13.
The present paper deals with the least-squares adjustment where the design matrix (A) is rank-deficient. The adjusted parameters \(\hat x\) as well as their variance-covariance matrix \(\Sigma _{\hat x}\) can be obtained as in the “standard” adjustment where A has full column rank, supplemented with constraints \(C\hat x = w\), where C is the constraint matrix and w is sometimes called the “constant vector”. In this analysis only the inner adjustment constraints are considered, where C has full row rank equal to the rank deficiency of A, and \(AC^T = 0\). Perhaps the most important outcome points to three kinds of results (illustrated numerically in the sketch after this list):
  1. A general least-squares solution, where both \(\hat x\) and \(\Sigma _{\hat x}\) are indeterminate, corresponds to w = an arbitrary random vector.
  2. The minimum-trace (least-squares) solution, where \(\hat x\) is indeterminate but \(\Sigma _{\hat x}\) is determined (and trace \(\Sigma _{\hat x}\) is minimum), corresponds to w = an arbitrary constant vector.
  3. The minimum-norm (least-squares) solution, where both \(\hat x\) and \(\Sigma _{\hat x}\) are determined (and norm \(\hat x\) and trace \(\Sigma _{\hat x}\) are minimum), corresponds to \(w \equiv 0\).
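To make the distinction concrete, the sketch announced above builds inner constraints C satisfying \(AC^T = 0\) for a made-up rank-deficient design matrix, solves the bordered (constrained) normal equations with w = 0, and checks that the result coincides with the minimum-norm solution given by the pseudoinverse.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(4)

# Rank-deficient design matrix: 3 parameters, but only rank 2
# (the third column is the sum of the first two).
A = rng.normal(size=(8, 2))
A = np.column_stack([A, A[:, 0] + A[:, 1]])
ell = rng.normal(size=8)

# Inner constraints: rows of C span the null space of A, so A @ C.T = 0.
C = null_space(A).T                      # shape (rank deficiency, 3)

# Bordered (constrained) normal equations with w = 0.
N = A.T @ A
k = C.shape[0]
M = np.block([[N, C.T], [C, np.zeros((k, k))]])
rhs = np.concatenate([A.T @ ell, np.zeros(k)])
x_constrained = np.linalg.solve(M, rhs)[:3]

# Minimum-norm least-squares solution via the pseudoinverse.
x_pinv = np.linalg.pinv(A) @ ell

print("constrained solution  :", x_constrained)
print("pseudoinverse solution:", x_pinv)
print("A @ C.T close to zero :", np.allclose(A @ C.T, 0.0))
```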

14.
We analyze the high-resolution dilatation data for the October 2013 \(M_w\) 6.2 Ruisui, Taiwan, earthquake, which occurred at a distance of 15–20 km from a Sacks–Evertson dilatometer network. Based on well-constrained source parameters (\(\hbox {strike}=217^\circ \), \(\hbox {dip}=48^\circ \), \(\hbox {rake}=49^\circ \)), we propose a simple rupture model that explains the permanent static deformation and the dynamic vibrations at short period (\(\sim \)3.5–4.5 s) at most of the four sites, with discrepancies of less than 20 %. This study represents a first attempt at simultaneously modeling the dynamic and static crustal strain using dilatation data. The results illustrate the potential for strain recordings of high-frequency seismic waves in the near field of an earthquake to add constraints on the properties of seismic sources.

15.
Large-scale mass redistribution in the terrestrial water storage (TWS) leads to changes in the low-degree spherical harmonic coefficients of the Earth’s surface mass density field. Studying these low-degree fluctuations is an important task that contributes to our understanding of continental hydrology. In this study, we use global GNSS measurements of vertical and horizontal crustal displacements that we correct for atmospheric and oceanic effects, and use a set of modified basis functions similar to Clarke et al. (Geophys J Int 171:1–10, 2007) to perform an inversion of the corrected measurements in order to recover changes in the coefficients of degree-0 (hydrological mass change), degree-1 (centre of mass shift) and degree-2 (flattening of the Earth) caused by variations in the TWS over the period January 2003–January 2015. We infer from the GNSS-derived degree-0 estimate an annual variation in total continental water mass with an amplitude of \((3.49 \pm 0.19) \times 10^{3}\) Gt and a phase of \(70^{\circ } \pm 3^{\circ }\) (implying a peak in early March), in excellent agreement with corresponding values derived from the Global Land Data Assimilation System (GLDAS) water storage model that amount to \((3.39 \pm 0.10) \times 10^{3}\) Gt and \(71^{\circ } \pm 2^{\circ }\), respectively. The degree-1 coefficients we recover from GNSS predict annual geocentre motion (i.e. the offset change between the centre of common mass and the centre of figure) caused by changes in TWS with amplitudes of \(0.69 \pm 0.07\) mm for GX, \(1.31 \pm 0.08\) mm for GY and \(2.60 \pm 0.13\) mm for GZ. These values agree with GLDAS and estimates obtained from the combination of GRACE and the output of an ocean model using the approach of Swenson et al. (J Geophys Res 113(B8), 2008) at the level of about 0.5, 0.3 and 0.9 mm for GX, GY and GZ, respectively. Corresponding degree-1 coefficients from SLR, however, generally show higher variability and predict larger amplitudes for GX and GZ. The results we obtain for the degree-2 coefficients from GNSS are slightly mixed, and the level of agreement with the other sources heavily depends on the individual coefficient being investigated. The best agreement is observed for \(T_{20}^C\) and \(T_{22}^S\), which contain the most prominent annual signals among the degree-2 coefficients, with amplitudes amounting to \((5.47 \pm 0.44) \times 10^{-3}\) and \((4.52 \pm 0.31) \times 10^{-3}\) m of equivalent water height (EWH), respectively, as inferred from GNSS. Corresponding agreement with values from SLR and GRACE is at the level of or better than \(0.4 \times 10^{-3}\) and \(0.9 \times 10^{-3}\) m of EWH for \(T_{20}^C\) and \(T_{22}^S\), respectively, while for both coefficients, GLDAS predicts smaller amplitudes. Somewhat lower agreement is obtained for the order-1 coefficients, \(T_{21}^C\) and \(T_{21}^S\), while our GNSS inversion seems unable to reliably recover \(T_{22}^C\). For all the coefficients we consider, the GNSS-derived estimates from the modified inversion approach are more consistent with the solutions from the other sources than corresponding estimates obtained from an unconstrained standard inversion.

16.
We present new insights on the time-averaged surface velocities, convergence and extension rates along arc-normal transects in the Kumaon, Garhwal and Kashmir–Himachal regions of the Indian Himalaya from 13 years of high-precision Global Positioning System (GPS) time series (1995–2008) derived from GPS data at 14 permanent and 42 campaign stations between $29.5{-}35^{\circ }\hbox {N}$ and $76{-}81^{\circ }\hbox {E}$. The GPS surface horizontal velocities vary significantly from the Higher to the Lesser Himalaya and are of the order of 30 to 48 mm/year NE in the ITRF 2005 reference frame, and 17 to 2 mm/year SW in an India-fixed reference frame, indicating that this region is accommodating less than 2 cm/year of the India–Eurasia plate motion (${\sim }4~\hbox {cm/year}$). The total arc-normal shortening varies between ${\sim }10{-}14~\hbox {mm/year}$ along the different transects of the northwest Himalayan wedge, between the Indo-Tsangpo suture to the north and the Indo-Gangetic foreland to the south, indicating high strain accumulation in the Himalayan wedge. This convergence is being accommodated differentially along the arc-normal transects; ${\sim } 5{-}10~\hbox {mm/year}$ in the Lesser Himalaya and 3–4 mm/year in the Higher Himalaya south of the South Tibetan Detachment. Most of the convergence in the Lesser Himalaya of Garhwal and Kumaon is being accommodated just south of the Main Central Thrust fault trace, indicating high strain accumulation in this region, which is also consistent with its high seismic activity. In addition, an arc-normal extension of ${\sim }6~\hbox {mm/year}$ has been observed for the first time in the Tethyan Himalaya of Kumaon. Inverse modeling of GPS-derived surface deformation rates in the Garhwal and Kumaon Himalaya using a single dislocation indicates that the Main Himalayan Thrust is locked from the surface to a depth of ${\sim }15{-}20~\hbox {km}$ over a width of 110 km, with an associated slip rate of ${\sim }16{-}18~\hbox {mm/year}$. These results indicate that the arc-normal rates in the Northwest Himalaya have a complex deformation pattern involving both convergence and extension, and rigorous seismo-tectonic models in the Himalaya are necessary to account for this pattern. In addition, the results also provide an estimate of the co-seismic and post-seismic motion associated with the 1999 Chamoli earthquake, which is modeled to derive the slip and geometry of the rupture plane.

17.
For science applications of the Gravity Recovery and Climate Experiment (GRACE) monthly solutions, the GRACE estimates of \(C_{20}\) (or \(J_{2}\)) are typically replaced by the value determined from satellite laser ranging (SLR) due to an unexpectedly strong, clearly non-geophysical, variation at a period of \(\sim \)160 days. This signal has sometimes been referred to as a tide-like variation since the period is close to the perturbation period on the GRACE orbits due to the spherical harmonic coefficient pair \(C_{22}/S_{22}\) of the S2 ocean tide. Errors in the S2 tide model used in GRACE data processing could produce a significant perturbation to the GRACE orbits, but they cannot contribute to the \(\sim \)160-day signal appearing in \(C_{20}\). Since the dominant contribution to the GRACE estimate of \(C_{20}\) is from the Global Positioning System tracking data, a time series of 138 monthly solutions up to degree and order 10 (\(10\times 10\)) was derived along with estimates of ocean tide parameters up to degree 6 for eight major tides. The results show that the \(\sim \)160-day signal remains in the \(C_{20}\) time series. Consequently, the anomalous signal in GRACE \(C_{20}\) cannot be attributed to aliasing from the errors in the S2 tide. A preliminary analysis of the cross-track forces acting on GRACE and the cross-track component of the accelerometer data suggests that a temperature-dependent systematic error in the accelerometer data could be a cause. Because a wide variety of science applications relies on the replacement values for \(C_{20}\), it is essential that the SLR estimates are as reliable as possible. An ongoing concern has been the influence of higher-degree even zonal terms on the SLR estimates of \(C_{20}\), since only \(C_{20}\) and \(C_{40}\) are currently estimated. To investigate whether a better separation between \(C_{20}\) and the higher-degree terms could be achieved, several combinations of additional SLR satellites were investigated. In addition, a series of monthly gravity field solutions (\(60\times 60\)) was estimated from a combination of GRACE and SLR data. The results indicate that the combination of GRACE and SLR data might benefit the resonant orders in the GRACE-derived gravity fields, but it appears to degrade the recovery of the \(C_{20}\) variations. In fact, the results suggest that the poorer recovery of \(C_{40}\) by GRACE, where the annual variation is significantly underestimated, may be affecting the estimates of \(C_{20}\). Consequently, it appears appropriate to continue using the SLR-based estimates of \(C_{20}\), and possibly also \(C_{40}\), to augment the existing GRACE mission.

18.
19.
The consistent estimation of terrestrial reference frames (TRF), celestial reference frames (CRF) and Earth orientation parameters (EOP) is still an open subject and offers a large field of investigations. Until now, source positions resulting from Very Long Baseline Interferometry (VLBI) observations have not been routinely combined at the level of normal equations in the same way as is common practice for station coordinates and EOPs. The combination of source positions based on VLBI observations is now integrated in the IVS combination process. We present the studies carried out to evaluate the benefit of the combination compared to individual solutions. At the level of source time series, improved statistics regarding weighted root mean square have been found for the combination in comparison with the individual contributions. In total, 67 stations and 907 sources (including 291 ICRF2 defining sources) are included in the consistently generated CRF and TRF covering 30 years of VLBI contributions. The rotation angles \(A_1\), \(A_2\) and \(A_3\) relative to ICRF2 are \(-12.7\), 51.7 and 1.8 \(\upmu\)as, the drifts \(D_\alpha \) and \(D_\delta \) are \(-67.2\) and 19.1 \(\upmu\)as/rad, and the bias \(B_\delta \) is 26.1 \(\upmu\)as. The comparison of the TRF solution with the IVS routinely combined quarterly TRF solution shows no significant impact on the TRF when the CRF is estimated consistently with the TRF. The root mean square value of the post-fit station coordinate residuals is 0.9 cm.

20.
This paper describes the historical sea level data that we have rescued from a tide gauge originally devised especially for geodesy. This gauge was installed in Marseille in 1884 with the primary objective of defining the origin of the height system in France. Hourly values for 1885–1988 have been digitized from the original tidal charts. They are supplemented by hourly values from an older tide gauge record (1849–1851) that was rediscovered during a survey in 2009. Both recovered data sets have been critically edited for errors and their reliability assessed. The hourly values are thoroughly analysed for the first time since their original recording. A consistent high-frequency time series is reported, notably increasing the length of one of the few European sea level records in the Mediterranean Sea spanning more than one hundred years. Changes in sea levels are examined, and previous results revisited with the extended time series. The rate of relative sea level change for the period 1849–2012 is estimated to have been \(1.08\pm 0.04\) mm/year at Marseille, a value that is slightly lower than, but in close agreement with, the longest time series, that of Brest, over the common period (\(1.26\pm 0.04\) mm/year). The data from a permanent Global Positioning System station installed on the roof of the solid tide gauge building suggest a remarkable stability of the ground (\(-0.04\pm 0.25\) mm/year) since 1998, confirming the choice made by our predecessor geodesists in the nineteenth century regarding this site selection.
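A relative sea level trend of the kind quoted above is commonly estimated by a least squares fit of a linear rate plus annual and semi-annual harmonics. The sketch below applies such a fit to a synthetic monthly series; the data, noise level and model choices are illustrative assumptions rather than the procedure actually used for the Marseille record.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic monthly mean sea level series (in mm) over 50 years:
# a 1.1 mm/yr trend plus an annual cycle and noise.
t_years = np.arange(0.0, 50.0, 1.0 / 12.0)
sea_level = (1.1 * t_years
             + 40.0 * np.cos(2 * np.pi * t_years)
             + rng.normal(0.0, 25.0, t_years.size))

# Design matrix: offset, trend, annual and semi-annual harmonics.
A = np.column_stack([
    np.ones_like(t_years),
    t_years,
    np.cos(2 * np.pi * t_years), np.sin(2 * np.pi * t_years),
    np.cos(4 * np.pi * t_years), np.sin(4 * np.pi * t_years),
])

coeffs, *_ = np.linalg.lstsq(A, sea_level, rcond=None)

# Formal uncertainty of the trend from the a posteriori variance factor.
resid = sea_level - A @ coeffs
dof = len(sea_level) - A.shape[1]
sigma0_sq = resid @ resid / dof
cov = sigma0_sq * np.linalg.inv(A.T @ A)
print(f"trend = {coeffs[1]:.2f} +/- {np.sqrt(cov[1, 1]):.2f} mm/yr")
```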
