Similar articles
20 similar articles found (search time: 31 ms).
1.
Fast error analysis of continuous GNSS observations with missing data (cited: 3; self-citations: 0; by others: 3)
One of the most widely used methods for the time-series analysis of continuous Global Navigation Satellite System (GNSS) observations is Maximum Likelihood Estimation (MLE), which in most implementations requires $\mathcal{O}(n^3)$ operations for $n$ observations. Previous research by the authors has shown that this number of operations can be reduced to $\mathcal{O}(n^2)$ for observations without missing data. In the current research we present a reformulation of the equations that preserves this low number of operations, even in the common situation of having some missing data. Our reformulation assumes that the noise is stationary to ensure a Toeplitz covariance matrix. However, most GNSS time-series exhibit power-law noise, which is weakly non-stationary. To overcome this problem, we present a Toeplitz covariance matrix that provides an approximation for power-law noise that is accurate for most GNSS time-series. Numerical results are given for a set of synthetic data and a set of International GNSS Service (IGS) stations, demonstrating a reduction in computation time of a factor of 10–100 compared to the standard MLE method, depending on the length of the time-series and the amount of missing data.
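To make the computational idea concrete, the following is a minimal sketch (not the authors' implementation) of trend estimation by generalized least squares under a stationary power-law plus white noise model, exploiting the Toeplitz structure of the covariance via scipy.linalg.solve_toeplitz instead of a full $\mathcal{O}(n^3)$ factorization. The ARFIMA(0,d,0)-type autocovariance used here to approximate power-law noise, and the parameter names d, sigma_pl and sigma_wh, are illustrative assumptions.

```python
# Hedged sketch: GLS trend estimation with a Toeplitz (stationary) power-law
# plus white noise covariance, solved with Levinson recursion instead of a
# full matrix inversion.  Stationarity requires d < 0.5.
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.special import gammaln

def powerlaw_autocov(n, d, sigma_pl):
    """First column of a stationary power-law (ARFIMA(0,d,0)) covariance."""
    k = np.arange(n)
    # log of the gamma-function ratio for numerical stability
    log_rho = gammaln(1 - d) + gammaln(k + d) - gammaln(d) - gammaln(k + 1 - d)
    rho = np.exp(log_rho)
    var0 = sigma_pl**2 * np.exp(gammaln(1 - 2 * d) - 2 * gammaln(1 - d))
    return var0 * rho

def gls_trend(t, y, d=0.4, sigma_pl=1.0, sigma_wh=1.0):
    """Offset and rate via GLS with a Toeplitz power-law + white covariance."""
    n = len(y)
    c = powerlaw_autocov(n, d, sigma_pl)
    c[0] += sigma_wh**2                      # white noise adds to the diagonal
    A = np.column_stack([np.ones(n), t])     # offset + linear rate
    Ci_A = solve_toeplitz(c, A)              # C^{-1} A, O(n^2) via Levinson
    Ci_y = solve_toeplitz(c, y)
    N = A.T @ Ci_A
    x = np.linalg.solve(N, A.T @ Ci_y)
    return x, np.linalg.inv(N)

# toy usage with synthetic daily positions
rng = np.random.default_rng(0)
t = np.arange(1000) / 365.25
y = 0.5 + 2.0 * t + rng.normal(0, 2, t.size)
x, cov_x = gls_trend(t, y)
print("offset, rate:", x, "rate sigma:", np.sqrt(cov_x[1, 1]))
```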

2.
We can map zenith wet delays onto precipitable water with a conversion factor, but in order to calculate the exact conversion factor, we must precisely calculate its key variable $T_\mathrm{m}$. Yao et al. (J Geod 86:1125–1135, 2012. doi:10.1007/s00190-012-0568-1) established the first-generation global $T_\mathrm{m}$ model (GTm-I) with ground-based radiosonde data, but due to the lack of radiosonde data at sea, the model appears to be abnormal in some areas. Given that sea surface temperature varies less than that on land, and that the GPT model and the Bevis $T_\mathrm{m}$–$T_\mathrm{s}$ relationship are accurate enough to describe the surface temperature and $T_\mathrm{m}$, this paper capitalizes on the GPT model and the Bevis $T_\mathrm{m}$–$T_\mathrm{s}$ relationship to provide simulated $T_\mathrm{m}$ at sea, to compensate for the lack of data. Combined with the $T_\mathrm{m}$ from radiosonde data, we recalculated the GTm model coefficients. The results show that this method not only improves the accuracy of the GTm model significantly at sea but also improves it on land, making the GTm model more stable and practically applicable.
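As background for the conversion factor mentioned above, here is a minimal sketch of the standard mapping from zenith wet delay to precipitable water through $T_\mathrm{m}$, using the Bevis $T_\mathrm{m}$–$T_\mathrm{s}$ relation. The refractivity constants and the Bevis coefficients below are commonly quoted literature values taken as assumptions; the sketch does not reproduce the GTm model itself.

```python
# Hedged sketch: ZWD -> PW conversion through the dimensionless factor Pi,
# whose key variable is the weighted mean temperature Tm.  Constants are
# approximate literature values (assumptions), not the GTm model.
RHO_W = 1000.0      # density of liquid water, kg m^-3
RV    = 461.5       # specific gas constant of water vapour, J kg^-1 K^-1
K2P   = 22.1        # k2' in K hPa^-1   (approximate)
K3    = 3.739e5     # k3  in K^2 hPa^-1 (approximate)

def tm_bevis(ts_kelvin):
    """Bevis Tm-Ts relation: surface temperature -> weighted mean temperature."""
    return 70.2 + 0.72 * ts_kelvin

def zwd_to_pw(zwd_m, tm_kelvin):
    """Convert zenith wet delay [m] to precipitable water [m]."""
    # express k2', k3 per Pa so that Pi = 1e6 / (rho_w * R_v * (k3/Tm + k2'))
    k2p_si = K2P / 100.0
    k3_si = K3 / 100.0
    pi = 1.0e6 / (RHO_W * RV * (k3_si / tm_kelvin + k2p_si))  # dimensionless, ~0.15
    return pi * zwd_m

ts = 288.15                    # 15 degC surface temperature
tm = tm_bevis(ts)              # ~277.7 K
print(tm, zwd_to_pw(0.20, tm)) # ~0.03 m of PW from a 0.20 m ZWD
```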

3.
For science applications of the gravity recovery and climate experiment (GRACE) monthly solutions, the GRACE estimates of \(C_{20}\) (or \(J_{2}\)) are typically replaced by the value determined from satellite laser ranging (SLR) due to an unexpectedly strong, clearly non-geophysical, variation at a period of \(\sim\)160 days. This signal has sometimes been referred to as a tide-like variation, since the period is close to the perturbation period on the GRACE orbits due to the spherical harmonic coefficient pair \(C_{22}/S_{22}\) of the S2 ocean tide. Errors in the S2 tide model used in GRACE data processing could produce a significant perturbation to the GRACE orbits, but they cannot contribute to the \(\sim\)160-day signal appearing in \(C_{20}\). Since the dominant contribution to the GRACE estimate of \(C_{20}\) is from the Global Positioning System tracking data, a time series of 138 monthly solutions up to degree and order 10 (\(10\times 10\)) was derived along with estimates of ocean tide parameters up to degree 6 for eight major tides. The results show that the \(\sim\)160-day signal remains in the \(C_{20}\) time series. Consequently, the anomalous signal in GRACE \(C_{20}\) cannot be attributed to aliasing from the errors in the S2 tide. A preliminary analysis of the cross-track forces acting on GRACE and the cross-track component of the accelerometer data suggests that a temperature-dependent systematic error in the accelerometer data could be a cause. Because a wide variety of science applications relies on the replacement values for \(C_{20}\), it is essential that the SLR estimates are as reliable as possible. An ongoing concern has been the influence of higher-degree even zonal terms on the SLR estimates of \(C_{20}\), since only \(C_{20}\) and \(C_{40}\) are currently estimated. To investigate whether a better separation between \(C_{20}\) and the higher-degree terms could be achieved, several combinations of additional SLR satellites were investigated. In addition, a series of monthly gravity field solutions (\(60\times 60\)) was estimated from a combination of GRACE and SLR data. The results indicate that the combination of GRACE and SLR data might benefit the resonant orders in the GRACE-derived gravity fields, but it appears to degrade the recovery of the \(C_{20}\) variations. In fact, the results suggest that the poorer recovery of \(C_{40}\) by GRACE, where the annual variation is significantly underestimated, may be affecting the estimates of \(C_{20}\). Consequently, it appears appropriate to continue using the SLR-based estimates of \(C_{20}\), and possibly also \(C_{40}\), to augment the existing GRACE mission.

4.
Missing or incorrect consideration of the azimuthal asymmetry of troposphere delays is a considerable error source in space geodetic techniques such as Global Navigation Satellite Systems (GNSS) or Very Long Baseline Interferometry (VLBI). So-called horizontal troposphere gradients are generally utilized for modeling such azimuthal variations and are particularly required for observations at low elevation angles. Apart from estimating the gradients within the data analysis, which has become common practice in space geodetic techniques, it is also possible to determine the gradients beforehand from data sources other than the actual observations. Using ray-tracing through Numerical Weather Models (NWMs), we determined discrete gradient values, referred to as GRAD, for VLBI observations, based on the standard gradient model by Chen and Herring (J Geophys Res 102(B9):20489–20502, 1997. https://doi.org/10.1029/97JB01739) and also for new, higher-order gradient models. These gradients are produced on the same data basis as the Vienna Mapping Functions 3 (VMF3) (Landskron and Böhm in J Geod, 2017. https://doi.org/10.1007/s00190-017-1066-2), so they can also be regarded as the VMF3 gradients, as they are fully consistent with each other. From VLBI analyses with the Vienna VLBI and Satellite Software (VieVS), it becomes evident that baseline length repeatabilities (BLRs) are improved on average by 5% when using the a priori gradients GRAD instead of estimating the gradients. The reason for this improvement is that the gradient estimation yields poor results for VLBI sessions with a small number of observations, while the GRAD a priori gradients are unaffected by this. We also developed a new empirical gradient model applicable for any time and location on Earth, which is included in the Global Pressure and Temperature 3 (GPT3) model. Although it describes only the systematic component of azimuthal asymmetry and no short-term variations at all, even this empirical a priori gradient model slightly reduces (improves) the BLRs with respect to the estimation of gradients. In general, this paper shows that a priori horizontal gradients are actually more important for VLBI analysis than previously assumed, as both the discrete model GRAD and the empirical model GPT3 are indeed able to refine and improve the results.
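For reference, the sketch below implements the first-order gradient model of Chen and Herring on which the GRAD products are based: the azimuth-dependent delay contribution $m_g(e)\,[G_N\cos a + G_E\sin a]$ with the gradient mapping function $m_g(e) = 1/(\sin e \tan e + C)$. The value $C \approx 0.0032$ is a commonly adopted choice assumed here; the higher-order GRAD terms and the GPT3 expansion are not reproduced.

```python
# Hedged sketch of the standard (first-order) horizontal-gradient model;
# C ~ 0.0032 is a commonly adopted constant, taken here as an assumption.
import numpy as np

def gradient_delay(elev_deg, azim_deg, g_north_m, g_east_m, c=0.0032):
    """Azimuthal-asymmetry contribution to the slant troposphere delay [m]."""
    e = np.radians(elev_deg)
    a = np.radians(azim_deg)
    m_g = 1.0 / (np.sin(e) * np.tan(e) + c)          # gradient mapping function
    return m_g * (g_north_m * np.cos(a) + g_east_m * np.sin(a))

# a 1 mm north gradient maps to roughly 9 cm of slant delay at 5 deg elevation
print(gradient_delay(5.0, 0.0, 1.0e-3, 0.0))
```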

5.
Well-credited and widely used ionospheric models, such as the International Reference Ionosphere or NeQuick, describe the variation of the electron density with height by means of a piecewise profile tied to the F2-peak parameters: the electron density, $N_m\mathrm{F2}$, and the height, $h_m\mathrm{F2}$. Accurate values of these parameters are crucial for retrieving reliable electron density estimations from those models. When direct measurements of these parameters are not available, the models compute them using the so-called ITU-R database, which was established in the early 1960s. This paper presents a technique aimed at routinely updating the ITU-R database using radio occultation electron density profiles derived from GPS measurements gathered from low Earth orbit satellites. Before being used, these radio occultation profiles are validated by fitting an electron density model to them. A re-weighted least-squares algorithm is used for down-weighting unreliable measurements (occasionally, entire profiles) and for retrieving $N_m\mathrm{F2}$ and $h_m\mathrm{F2}$ values, together with their error estimates, from the profiles. These values are used to update the database monthly; the database consists of two sets of ITU-R-like coefficients that could easily be implemented in the IRI or NeQuick models. The technique was tested with radio occultation electron density profiles that are delivered to the community by the COSMIC/FORMOSAT-3 mission team. Tests were performed for solstice and equinox seasons under high and low solar activity conditions. The global mean error of the resulting maps, estimated by the least-squares technique, is between $0.5\times 10^{10}$ and $3.6\times 10^{10}$ elec/m$^{3}$ for the F2-peak electron density (which is equivalent to 7% of the value of the estimated parameter) and from 2.0 to 5.6 km for the height ($\sim$2%).
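The down-weighting step can be illustrated with a generic iteratively re-weighted least-squares loop, sketched below. The design matrix A stands in for the (linearized) electron density profile model, which is not reproduced here, and the Huber-type weight function with tuning constant k is an assumption.

```python
# Hedged sketch of re-weighted least squares: weights are recomputed from the
# residuals at each iteration so that unreliable samples are down-weighted.
import numpy as np

def reweighted_lsq(A, y, n_iter=10, k=1.5):
    w = np.ones(len(y))
    for _ in range(n_iter):
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
        r = y - A @ x
        s = 1.4826 * np.median(np.abs(r - np.median(r)))   # robust scale (MAD)
        u = np.abs(r) / max(s, 1e-12)
        w = np.where(u <= k, 1.0, k / u)                    # Huber-type weights
    cov_x = np.linalg.inv(A.T @ np.diag(w) @ A) * s**2      # rough error estimate
    return x, cov_x, w

# toy usage: a straight-line fit contaminated by a few gross outliers
rng = np.random.default_rng(3)
A = np.column_stack([np.ones(100), np.linspace(0, 1, 100)])
y = A @ np.array([2.0, -1.0]) + rng.normal(0, 0.1, 100)
y[::17] += 3.0
x, cov_x, w = reweighted_lsq(A, y)
print(x)   # close to [2, -1]
```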

6.
Fast error analysis of continuous GPS observations (cited: 4; self-citations: 1; by others: 3)
It has been generally accepted that the noise in continuous GPS observations can be well described by a power-law plus white noise model. Using maximum likelihood estimation (MLE) the numerical values of the noise model can be estimated. Current methods require calculating the data covariance matrix and inverting it, which is a significant computational burden. Analysing 10 years of daily GPS solutions of a single station can take around 2 h on a regular computer such as a PC with an AMD Athlon™ 64 X2 dual-core processor. When one analyses large networks with hundreds of stations, or hourly instead of daily solutions, the long computation times become a problem. If the signal only contains power-law noise, the MLE computations can be simplified considerably as a function of the number of observations N. For the general case of power-law plus white noise, we present a modification of the MLE equations that allows us to reduce the number of computations within the algorithm from a cubic to a quadratic function of the number of observations when there are no data gaps. For time-series of three and eight years, this means in practice a reduction factor of around 35 and 84 in computation time, without loss of accuracy. In addition, this modification removes the implicit assumption that there is no environmental noise before the first observation. Finally, we present an analytical expression for the uncertainty of the estimated trend if the data only contain power-law noise. Electronic supplementary material: The online version of this article (doi: ) contains supplementary material, which is available to authorized users.
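For orientation, the quantity being minimized in such noise analyses is the negative Gaussian log-likelihood of the residuals under the assumed covariance. The straightforward Cholesky-based evaluation sketched below is the O(n^3) baseline that the modified equations accelerate; the fast algorithm itself is not reproduced, and the function and variable names are illustrative.

```python
# Hedged sketch: negative Gaussian log-likelihood for residuals r and a
# covariance C = sigma_pl^2 * C_powerlaw + sigma_wh^2 * I.  The Cholesky
# factorization is the O(n^3) step that fast methods avoid.
import numpy as np

def neg_log_likelihood(r, C):
    n = len(r)
    L = np.linalg.cholesky(C)                  # O(n^3) step
    z = np.linalg.solve(L, r)                  # whitened residuals
    log_det = 2.0 * np.sum(np.log(np.diag(L)))
    return 0.5 * (log_det + z @ z + n * np.log(2.0 * np.pi))

# toy usage with an AR(1)-like correlation plus white noise
n = 500
C = np.fromfunction(lambda i, j: 0.9 ** np.abs(i - j), (n, n)) + np.eye(n)
r = np.random.default_rng(4).multivariate_normal(np.zeros(n), C)
print(neg_log_likelihood(r, C))
```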

7.
Estimation of variance and covariance components (cited: 3; self-citations: 2; by others: 3)

8.
M-estimation with probabilistic models of geodetic observations (cited: 1; self-citations: 1; by others: 0)
The paper concerns \(M\)-estimation with probabilistic models of geodetic observations, which is called \(M_{\mathcal{P}}\) estimation. Special attention is paid to \(M_{\mathcal{P}}\) estimation that includes the asymmetry and the excess kurtosis, which are basic anomalies of empirical distributions of errors of geodetic or astrometric observations (in comparison to the Gaussian errors). It is assumed that the influence function of \(M_{\mathcal{P}}\) estimation is equal to the differential equation that defines the system of the Pearson distributions. The central moments \(\mu_{k},\ k=2,3,4\), are the parameters of that system and thus they are also the parameters of the chosen influence function. The \(M_{\mathcal{P}}\) estimation that includes the Pearson type IV and VII distributions (\(M_{\mathrm{PD(l)}}\) method) is analyzed in great detail from a theoretical point of view as well as by applying numerical tests. The chosen distributions are leptokurtic with asymmetry, which reflects the general characteristics of empirical distributions. Considering \(M\)-estimation with probabilistic models, the Gram–Charlier series are also applied to approximate the models in question (\(M_{\mathrm{G-C}}\) method). The paper shows that \(M_{\mathcal{P}}\) estimation with the application of probabilistic models belongs to the class of robust estimations; the \(M_{\mathrm{PD(l)}}\) method is especially effective in that case. It is suggested that even in the absence of significant anomalies the method in question should be regarded as robust against gross errors, while its robustness is controlled by the pseudo-kurtosis.

9.
This paper presents new variants of the Hodges–Lehmann estimates, which belong to the class of $R$-estimates. The new approach to this method arises from the need to take into account differences in the accuracy of geodetic measurements, which is not possible with traditional $R$-estimates. The theoretical assumptions of the conventional Hodges–Lehmann estimates are supplemented with information about the accuracy of the observations, and two new variants of the estimates in question are derived by applying the principles proposed by Hodges and Lehmann; hence they are called Hodges–Lehmann weighted estimates. The main properties of the new estimates follow from this approach, and from a practical point of view the most important seems to be their robustness against outliers. Since the first estimate proposed is a natural estimator of the shift between two samples, it can be applied in deformation analysis to estimate point displacements. The paper presents two numerical examples that show the properties as well as possible applications of the new estimates.
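For context, the classical (unweighted) Hodges–Lehmann estimates that the paper generalizes are sketched below: the one-sample location estimate is the median of all pairwise (Walsh) averages, and the two-sample shift estimate, the natural displacement estimator mentioned above, is the median of all pairwise differences. The weighted variants proposed in the paper are not reproduced.

```python
# Hedged sketch of the classical Hodges-Lehmann estimates (unweighted form).
import numpy as np
from itertools import combinations_with_replacement

def hodges_lehmann_location(x):
    """Median of all pairwise (Walsh) averages."""
    walsh = [(xi + xj) / 2.0 for xi, xj in combinations_with_replacement(x, 2)]
    return np.median(walsh)

def hodges_lehmann_shift(x, y):
    """Median of all pairwise differences: robust shift between two samples."""
    return np.median([yj - xi for xi in x for yj in y])

epoch1 = np.array([10.002, 10.004, 10.001, 10.003, 10.950])   # one gross outlier
epoch2 = np.array([10.012, 10.013, 10.011, 10.014, 10.012])
print(hodges_lehmann_location(epoch1))     # close to 10.003 despite the outlier
print(hodges_lehmann_shift(epoch1, epoch2))  # ~0.010 m displacement estimate
```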

10.
The estimation of crustal deformations from repeated baseline measurements is a singular problem in the absence of prior information. One often-applied solution is a free adjustment in which the singular normal matrix is augmented with a set of inner constraints. These constraints impose no net translation or rotation on the estimated deformations X, which may not be physically meaningful for a particular problem. The introduction of an available geophysical model, from which an expected deformation vector \(\bar X\) and its covariance matrix \(\Sigma_{\bar X}\) can be computed, will direct X to a physically more meaningful solution. Three possible estimators are investigated for estimating deformations from a combination of baseline measurements and geophysical models.
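One simple way to see how a geophysical model can direct the solution is to treat \(\bar X\) and \(\Sigma_{\bar X}\) as pseudo-observations, as in the hedged sketch below; this only illustrates the idea and is not necessarily identical to any of the three estimators investigated in the paper.

```python
# Hedged sketch: combining baseline observations with a geophysical model by
# adding the model prediction as prior information, which also removes the
# rank defect of the normal matrix.
import numpy as np

def combined_estimate(A, P, l, xbar, Sigma_xbar):
    """A, P, l: design matrix, weights and reduced baseline observations;
    xbar, Sigma_xbar: expected deformations and covariance from the model."""
    W_prior = np.linalg.inv(Sigma_xbar)
    N = A.T @ P @ A + W_prior              # prior information regularizes N
    rhs = A.T @ P @ l + W_prior @ xbar
    x_hat = np.linalg.solve(N, rhs)
    return x_hat, np.linalg.inv(N)
```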

11.
Large-scale mass redistribution in the terrestrial water storage (TWS) leads to changes in the low-degree spherical harmonic coefficients of the Earth's surface mass density field. Studying these low-degree fluctuations is an important task that contributes to our understanding of continental hydrology. In this study, we use global GNSS measurements of vertical and horizontal crustal displacements that we correct for atmospheric and oceanic effects, and use a set of modified basis functions similar to Clarke et al. (Geophys J Int 171:1–10, 2007) to perform an inversion of the corrected measurements in order to recover changes in the coefficients of degree-0 (hydrological mass change), degree-1 (centre of mass shift) and degree-2 (flattening of the Earth) caused by variations in the TWS over the period January 2003–January 2015. We infer from the GNSS-derived degree-0 estimate an annual variation in total continental water mass with an amplitude of \((3.49 \pm 0.19) \times 10^{3}\) Gt and a phase of \(70^{\circ } \pm 3^{\circ }\) (implying a peak in early March), in excellent agreement with corresponding values derived from the Global Land Data Assimilation System (GLDAS) water storage model, which amount to \((3.39 \pm 0.10) \times 10^{3}\) Gt and \(71^{\circ } \pm 2^{\circ }\), respectively. The degree-1 coefficients we recover from GNSS predict annual geocentre motion (i.e. the offset change between the centre of common mass and the centre of figure) caused by changes in TWS with amplitudes of \(0.69 \pm 0.07\) mm for GX, \(1.31 \pm 0.08\) mm for GY and \(2.60 \pm 0.13\) mm for GZ. These values agree with GLDAS and with estimates obtained from the combination of GRACE and the output of an ocean model using the approach of Swenson et al. (J Geophys Res 113(B8), 2008) at the level of about 0.5, 0.3 and 0.9 mm for GX, GY and GZ, respectively. Corresponding degree-1 coefficients from SLR, however, generally show higher variability and predict larger amplitudes for GX and GZ. The results we obtain for the degree-2 coefficients from GNSS are slightly mixed, and the level of agreement with the other sources heavily depends on the individual coefficient being investigated. The best agreement is observed for \(T_{20}^C\) and \(T_{22}^S\), which contain the most prominent annual signals among the degree-2 coefficients, with amplitudes amounting to \((5.47 \pm 0.44) \times 10^{-3}\) and \((4.52 \pm 0.31) \times 10^{-3}\) m of equivalent water height (EWH), respectively, as inferred from GNSS. Corresponding agreement with values from SLR and GRACE is at the level of or better than \(0.4 \times 10^{-3}\) and \(0.9 \times 10^{-3}\) m of EWH for \(T_{20}^C\) and \(T_{22}^S\), respectively, while for both coefficients GLDAS predicts smaller amplitudes. Somewhat lower agreement is obtained for the order-1 coefficients, \(T_{21}^C\) and \(T_{21}^S\), while our GNSS inversion seems unable to reliably recover \(T_{22}^C\). For all the coefficients we consider, the GNSS-derived estimates from the modified inversion approach are more consistent with the solutions from the other sources than corresponding estimates obtained from an unconstrained standard inversion.

12.
Homogeneous reprocessing of GPS, GLONASS and SLR observations (cited: 3; self-citations: 2; by others: 1)
The International GNSS Service (IGS) provides operational products for the GPS and GLONASS constellations. Homogeneously processed time series of parameters from the IGS are only available for GPS. Reprocessed GLONASS series are provided only by individual Analysis Centers (i.e., CODE and ESA), making it difficult to fully include the GLONASS system in a rigorous GNSS analysis. In view of the increasing number of active GLONASS satellites and a steadily growing number of GPS+GLONASS-tracking stations available over the past few years, Technische Universität Dresden, Technische Universität München, Universität Bern and Eidgenössische Technische Hochschule Zürich performed a combined reprocessing of GPS and GLONASS observations. SLR observations to GPS and GLONASS are also included in this reprocessing effort. Here, we show only SLR results from a GNSS orbit validation. In total, 18 years of data (1994–2011) have been processed from altogether 340 GNSS and 70 SLR stations. The use of GLONASS observations in addition to GPS has no impact on the estimated linear terrestrial reference frame parameters. However, daily station positions show an RMS reduction of 0.3 mm on average for the height component when additional GLONASS observations can be used for the time series determination. Analyzing satellite orbit overlaps, the rigorous combination of GPS and GLONASS neither improves nor degrades the GPS orbit precision. For GLONASS, however, the quality of the microwave-derived GLONASS orbits improves due to the combination. These findings are confirmed using independent SLR observations for a GNSS orbit validation. In comparison to previous studies, mean SLR biases for the satellites GPS-35 and GPS-36 could be reduced in magnitude from \(-35\) and \(-38\) mm to \(-12\) and \(-13\) mm, respectively. Our results show that the remaining SLR biases depend on the satellite type and the use of coated or uncoated retro-reflectors. For Earth rotation parameters, the increasing number of GLONASS satellites and tracking stations over the past few years leads to differences between GPS-only and GPS+GLONASS combined solutions which are most pronounced in the pole rate estimates, with a maximum of 0.2 mas/day in magnitude. At the same time, the difference between GLONASS-only and combined solutions decreases. The derived GNSS orbits are used to estimate combined GPS+GLONASS satellite clocks, with first results presented in this paper. Phase observation residuals from a precise point positioning are at the level of 2 mm and particularly reveal poorly modeled yaw-maneuver periods.

13.
Proper understanding of how the Earth's mass distributions and redistributions influence the Earth's gravity-field-related functionals is crucial for numerous applications in geodesy, geophysics and related geosciences. Calculations of the gravitational curvatures (GC) have been proposed in geodesy in recent years. In view of future satellite missions, the sixth-order developments of the gradients are becoming requisite. In this paper, a set of 3D integral GC formulas of a tesseroid mass body has been provided using spherical integral kernels in the spatial domain. Based on the Taylor series expansion approach, the numerical expressions of the 3D GC formulas are provided up to sixth order. Moreover, numerical experiments demonstrate the correctness of the 3D Taylor series approach for the GC formulas up to sixth order. Analogous to other gravitational effects (e.g., gravitational potential, gravity vector, gravity gradient tensor), it is found numerically that the very-near-area problem and the polar singularity problem exist in the GC east–east–radial, north–north–radial and radial–radial–radial components in the spatial domain, and that, compared to the other gravitational effects, the relative approximation errors of the GC components are larger, owing not only to the influence of the geocentric distance but also to the influence of the latitude. This study shows that the magnitude of each term for the nonzero GC functionals with a grid resolution of 15\(^{\prime}\) \(\times\) 15\(^{\prime}\) at GOCE satellite height can reach about 10\(^{-16}\) m\(^{-1}\) s\(^{-2}\) for zero order, 10\(^{-24}\) or 10\(^{-23}\) m\(^{-1}\) s\(^{-2}\) for second order, 10\(^{-29}\) m\(^{-1}\) s\(^{-2}\) for fourth order and 10\(^{-35}\) or 10\(^{-34}\) m\(^{-1}\) s\(^{-2}\) for sixth order, respectively.
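To give a feel for the underlying integrals, the sketch below evaluates only the zero-order functional, the Newtonian potential of a tesseroid, by straightforward Gauss–Legendre quadrature; the GC (third-order derivative) formulas themselves are not reproduced, and the density, grid size and 3-point rule are illustrative assumptions.

```python
# Hedged sketch: Newtonian potential of a tesseroid (spherical volume element)
# by 3D Gauss-Legendre quadrature in r, latitude and longitude.
import numpy as np

G = 6.674e-11  # m^3 kg^-1 s^-2

def tesseroid_potential(r, lat, lon, r1, r2, lat1, lat2, lon1, lon2,
                        rho=2670.0, order=3):
    """Potential at (r [m], lat, lon [rad]) of a tesseroid of density rho."""
    x, w = np.polynomial.legendre.leggauss(order)
    rr = 0.5 * (r2 - r1) * x + 0.5 * (r2 + r1)
    pp = 0.5 * (lat2 - lat1) * x + 0.5 * (lat2 + lat1)
    ll = 0.5 * (lon2 - lon1) * x + 0.5 * (lon2 + lon1)
    jac = 0.125 * (r2 - r1) * (lat2 - lat1) * (lon2 - lon1)
    V = 0.0
    for wi, ri in zip(w, rr):
        for wj, pj in zip(w, pp):
            for wk, lk in zip(w, ll):
                cos_psi = (np.sin(lat) * np.sin(pj)
                           + np.cos(lat) * np.cos(pj) * np.cos(lk - lon))
                dist = np.sqrt(r**2 + ri**2 - 2.0 * r * ri * cos_psi)
                V += wi * wj * wk * ri**2 * np.cos(pj) / dist
    return G * rho * jac * V

# 1 deg x 1 deg, 10 km thick tesseroid evaluated at a GOCE-like height (~255 km)
R = 6371e3
print(tesseroid_potential(R + 255e3, 0.0, 0.0,
                          R - 10e3, R, np.radians(-0.5), np.radians(0.5),
                          np.radians(-0.5), np.radians(0.5)))
```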

14.
We combine the publicly available GRACE monthly gravity field time series to produce gravity fields with reduced systematic errors. We first compare the monthly gravity fields in the spatial domain in terms of signal and noise. Then, we combine the individual gravity fields with comparable signal content but diverse noise characteristics. We test five different weighting schemes: equal weights, non-iterative coefficient-wise, order-wise, or field-wise weights, and iterative field-wise weights applying variance component estimation (VCE). The combined solutions are evaluated in terms of signal and noise in the spectral and spatial domains. Compared to the individual contributions, they in general show lower noise. If the noise characteristics of the individual solutions differ significantly, the weighted means are less noisy than the arithmetic mean: the non-seasonal variability over the oceans is reduced by up to 7.7% and the root mean square (RMS) of the residuals of mass change estimates within Antarctic drainage basins is reduced by 18.1% on average. The field-wise weighting schemes in general perform better than the order- or coefficient-wise weighting schemes. The combination of the full set of considered time series results in lower noise levels than the combination of a subset consisting of the official GRACE Science Data System gravity fields only: the RMS of coefficient-wise anomalies is smaller by up to 22.4% and the non-seasonal variability over the oceans by 25.4%. This study was performed in the framework of the European Gravity Service for Improved Emergency Management (EGSIEM; http://www.egsiem.eu) project. The gravity fields provided by the EGSIEM scientific combination service (ftp://ftp.aiub.unibe.ch/EGSIEM/) are combined based on the weights derived by VCE as described in this article.
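The iterative field-wise VCE weighting can be illustrated with a simplified scheme in which each input field receives a single variance component that is re-estimated from its residuals against the current weighted mean. This is a didactic sketch, not the EGSIEM combination software, and the redundancy formula assumes statistically independent input fields.

```python
# Hedged sketch of iterative field-wise variance component estimation for
# combining m solutions of the same coefficient vector.
import numpy as np

def combine_vce(X, n_iter=20):
    """X: (m, n) array, m individual solutions of n coefficients each."""
    m, n = X.shape
    sig2 = np.ones(m)
    for _ in range(n_iter):
        w = 1.0 / sig2
        W = w.sum()
        x_comb = (w[:, None] * X).sum(axis=0) / W     # weighted mean field
        V = X - x_comb                                 # residual fields
        redundancy = n * (1.0 - w / W)
        sig2 = np.einsum('ij,ij->i', V, V) / redundancy
    return x_comb, sig2

# toy example: three 'solutions' of the same field with different noise levels
rng = np.random.default_rng(1)
truth = rng.normal(size=500)
X = np.vstack([truth + rng.normal(0, s, 500) for s in (0.5, 1.0, 2.0)])
x_comb, sig2 = combine_vce(X)
print(np.sqrt(sig2))   # recovers roughly 0.5, 1.0, 2.0
```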

15.
The quality of the links between the different space geodetic techniques (VLBI, SLR, GNSS and DORIS) is still one of the major limiting factors for the realization of a unique global terrestrial reference frame that is accurate enough to allow the monitoring of the Earth system, i.e., of processes like sea level change, postglacial rebound and silent earthquakes. According to the specifications of the Global Geodetic Observing System of the International Association of Geodesy, such a reference frame should be accurate to 1 mm over decades, with rates of change stable at the level of 0.1 mm/year. The deficiencies arise from inaccurate or incomplete local ties at many fundamental sites as well as from systematic instrumental biases in the individual space geodetic techniques. Frequently repeated surveys and the continuous monitoring of antenna heights and of the geometrical mount stability (Lösler et al. in J Geod 90:467–486, 2016. https://doi.org/10.1007/s00190-016-0887-8) have not provided evidence for insufficient antenna stability. Therefore, we have investigated variations in the respective system delays caused by electronic circuits, which are not adequately captured by the calibration process, either because of subtle differences in the circuitry between geodetic measurement and calibration, because of high temporal variability, or because of insufficient resolving bandwidth. The measured system delay variations in the electronic chains of both VLBI and SLR systems reach the order of 100 ps, which is equivalent to 3 cm of path length. Most of this variability is usually removed by the calibrations, but by far not all of it. This paper focuses on the development of new technologies and procedures for co-located geodetic instrumentation in order to identify and remove systematic measurement biases within and between the individual measurement techniques. A closed-loop optical time and frequency distribution system and a common inter-technique reference target provide the possibility to remove variable system delays. The main motivation for the newly established central reference target, locked to the station clock, is the combination of all space geodetic instruments at a single reference point at the observatory. On top of that, it provides the unique capability to perform a closure measurement based on the observation of time.

16.
A sequential adjustment procedure is proposed for the direct estimation of point velocities in deformation analysis networks. At any intermediate stage of the adjustment, the up-to-date covariance matrix of those velocities tells the evolving story of the network in terms of solvability and reliability. A pre-zero-epoch covariance matrix is utilized for a smooth and flexible treatment of two characteristic problems of deformation analysis:
- high turnover of points in the network;
- processing of variable and generally incomplete observational batches.
A small numerical example is presented at the end as an illustration; a minimal sketch of the sequential update is also given below.
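In such a scheme, each observational batch simply updates the normal equations, the pre-zero-epoch covariance enters as prior information, and the inverse of the accumulated normal matrix provides the up-to-date velocity covariance at any intermediate stage. The class and variable names below are illustrative, and the handling of point turnover is not reproduced.

```python
# Hedged sketch of sequential least-squares accumulation for point velocities.
import numpy as np

class SequentialVelocityAdjustment:
    def __init__(self, sigma0):
        """sigma0: pre-zero-epoch covariance of the velocity vector
        (a zero prior velocity is assumed here)."""
        self.N = np.linalg.inv(sigma0)     # prior information matrix
        self.b = np.zeros(sigma0.shape[0])

    def add_batch(self, A, P, l):
        """A: batch design matrix w.r.t. velocities, P: weights, l: observations."""
        self.N += A.T @ P @ A
        self.b += A.T @ P @ l

    def solution(self):
        cov = np.linalg.inv(self.N)        # tells solvability / reliability
        return cov @ self.b, cov
```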

17.
The present paper deals with the least-squares adjustment where the design matrix (A) is rank-deficient. The adjusted parameters \(\hat x\) as well as their variance-covariance matrix \(\Sigma_{\hat x}\) can be obtained as in the “standard” adjustment, where A has full column rank, supplemented with constraints, \(C\hat x = w\), where C is the constraint matrix and w is sometimes called the “constant vector”. In this analysis only the inner adjustment constraints are considered, where C has full row rank equal to the rank deficiency of A, and \(AC^T = 0\). Perhaps the most important outcome is the distinction between three kinds of results:
1. The general least-squares solution, where both \(\hat x\) and \(\Sigma_{\hat x}\) are indeterminate, corresponds to w = an arbitrary random vector.
2. The minimum-trace (least-squares) solution, where \(\hat x\) is indeterminate but \(\Sigma_{\hat x}\) is determined (and the trace of \(\Sigma_{\hat x}\) is minimized), corresponds to w = an arbitrary constant vector.
3. The minimum-norm (least-squares) solution, where both \(\hat x\) and \(\Sigma_{\hat x}\) are determined (and the norm of \(\hat x\) and the trace of \(\Sigma_{\hat x}\) are minimized), corresponds to w ≡ 0 (see the numerical sketch below).
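The minimum-norm case (result 3) can be verified numerically: bordering the singular normal matrix with inner constraints \(C\hat x = 0\), where the rows of C span the null space of A (so that \(AC^T = 0\)), reproduces the pseudoinverse (minimum-norm) solution. The small design matrix below is an arbitrary illustrative example.

```python
# Hedged sketch: minimum-norm least squares via pseudoinverse vs. inner constraints.
import numpy as np

rng = np.random.default_rng(2)
B = rng.normal(size=(8, 2))
A = np.column_stack([B, B.sum(axis=1)])        # rank-deficient: col3 = col1 + col2
l = rng.normal(size=8)

# minimum-norm solution via the pseudoinverse (result 3 above)
x_pinv = np.linalg.pinv(A) @ l

# same solution via inner constraints: the row of C spans the null space of A
C = np.array([[1.0, 1.0, -1.0]])
K = np.block([[A.T @ A, C.T],
              [C, np.zeros((1, 1))]])
x_constr = np.linalg.solve(K, np.concatenate([A.T @ l, [0.0]]))[:3]

print(np.allclose(x_pinv, x_constr))           # True
```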

18.
The study areas, the Tikovil and Payappara sub-watersheds of the Meenachil river, cover 158.9 and 111.9 km², respectively. These watersheds are part of the Western Ghats, which is an ecologically sensitive region. The drainage network of the sub-watersheds was delineated from SOI topographical maps at 1:50,000 scale using the ArcGIS software. The stream orders were calculated using the method proposed by Strahler (1964). The drainage network shows that the terrain exhibits a dendritic to sub-dendritic drainage pattern. Stream order ranges from the fifth to the sixth order. Drainage density varies between 1.69 and 2.62 km/km². The drainage textures of the drainage basins are 2.3 km⁻¹ and 6.98 km⁻¹, categorized as coarse to very fine texture. Stream frequency is low in the case of the Payappara sub-watershed (1.78 km⁻²). The Payappara sub-watershed has the higher constant of channel maintenance, 0.59, indicating fewer structural disturbances and lower runoff. The form factor varies between 0.42 and 0.55, suggesting an elongated shape for the Payappara sub-watershed and a rather more circular shape for the Tikovil sub-watershed. The mean bifurcation ratio (3.5) indicates that both sub-watersheds are within the natural stream system. Hence, from the study it can be concluded that GIS techniques prove to be a competent tool in morphometric analysis.
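The standard Horton/Strahler parameters quoted above follow from simple ratios, as in the sketch below; the input numbers are illustrative placeholders, not the published measurements.

```python
# Hedged sketch of common morphometric parameters (Horton/Strahler definitions).
def morphometry(area_km2, total_stream_length_km, n_streams, basin_length_km,
                streams_per_order):
    dd = total_stream_length_km / area_km2            # drainage density, km/km^2
    fs = n_streams / area_km2                         # stream frequency, km^-2
    c  = 1.0 / dd                                     # constant of channel maintenance
    ff = area_km2 / basin_length_km**2                # form factor
    rb = [streams_per_order[i] / streams_per_order[i + 1]
          for i in range(len(streams_per_order) - 1)] # bifurcation ratios
    return dd, fs, c, ff, sum(rb) / len(rb)

# illustrative numbers only
print(morphometry(111.9, 200.0, 199, 16.0, [150, 35, 10, 3, 1]))
```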

19.
For the following problems:
- estimating the statistical parameters of precise levelling,
- adjusting the primary levelling networks, and
- estimating vertical crustal movements,
mathematical models are sketched out. Results obtained in evaluating primary relevellings in the G.D.R. are reported.

20.
Precise transformation between celestial reference frames (CRF) and terrestrial reference frames (TRF) is needed for many purposes in Earth and space sciences. According to the Global Geodetic Observing System (GGOS) recommendations, the accuracy of positions and the stability of reference frames should reach 1 mm and 0.1 mm year\(^{-1}\), and thus the Earth Orientation Parameters (EOP) should be estimated with similar accuracy. Different realizations of TRFs, based on the combination of solutions from four different space geodetic techniques, and of CRFs, based on a single technique only (VLBI, Very Long Baseline Interferometry), might cause a slow degradation of the consistency among EOP, CRFs, and TRFs (e.g., because of differences in geometry, orientation and scale) and a misalignment of the current conventional EOP series, IERS 08 C04. We empirically assess the consistency among the conventional reference frames and EOP by analyzing the record of VLBI sessions since 1990 with varied settings, to reflect the impact of changing frames or other processing strategies on the EOP estimates. Our tests show that the EOP estimates are insensitive to CRF changes, but sensitive to TRF variations and to unmodeled geophysical signals at the GGOS level. The differences between the conventional IERS 08 C04 and other EOP series computed with distinct TRF settings exhibit biases and even non-negligible trends in cases where no differential rotations should appear, e.g., a drift of about 20 \(\upmu\)as year\(^{-1}\) in \(y_{\mathrm{pol}}\) when the VLBI-only frame VTRF2008 is used. Likewise, different strategies for station position modeling give rise to scatters larger than 150 \(\upmu\)as in the terrestrial pole coordinates.
