1.
Missing or incorrect consideration of azimuthal asymmetry of troposphere delays is a considerable error source in space geodetic techniques such as Global Navigation Satellite Systems (GNSS) or Very Long Baseline Interferometry (VLBI). So-called horizontal troposphere gradients are generally utilized for modeling such azimuthal variations and are particularly required for observations at low elevation angles. Apart from estimating the gradients within the data analysis, which has become common practice in space geodetic techniques, it is also possible to determine the gradients beforehand from data sources other than the actual observations. Using ray-tracing through Numerical Weather Models (NWMs), we determined discrete gradient values, referred to as GRAD, for VLBI observations, based on the standard gradient model by Chen and Herring (J Geophys Res 102(B9):20489–20502, 1997.  https://doi.org/10.1029/97JB01739) as well as on new, higher-order gradient models. These gradients are produced on the same data basis as the Vienna Mapping Functions 3 (VMF3) (Landskron and Böhm in J Geod, 2017.  https://doi.org/10.1007/s00190-017-1066-2), so they can also be regarded as the VMF3 gradients, with which they are fully consistent. VLBI analyses with the Vienna VLBI and Satellite Software (VieVS) show that baseline length repeatabilities (BLRs) improve on average by 5% when the a priori gradients GRAD are used instead of estimating the gradients. The reason for this improvement is that gradient estimation yields poor results for VLBI sessions with a small number of observations, whereas the GRAD a priori gradients are unaffected by this. We also developed a new empirical gradient model applicable for any time and location on Earth, which is included in the Global Pressure and Temperature 3 (GPT3) model. 
Although it describes only the systematic component of azimuthal asymmetry and no short-term variations at all, even this empirical a priori gradient model slightly reduces (improves) the BLRs with respect to the estimation of gradients. In general, this paper shows that a priori horizontal gradients are more important for VLBI analysis than previously assumed, as both the discrete model GRAD and the empirical model GPT3 are indeed able to refine and improve the results.
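The standard gradient model referred to in the abstract can be sketched as follows; this is a minimal illustration of the Chen and Herring (1997) form, with C = 0.0032 as one value commonly quoted for hydrostatic gradients (the GRAD products themselves are not reproduced here):

```python
import math

def gradient_delay(elev_deg, azim_deg, g_north, g_east, C=0.0032):
    """Azimuthal gradient contribution to the slant delay (same units as
    the gradients), following the Chen & Herring (1997) form
    m_g(e) * [G_N cos(a) + G_E sin(a)], with m_g(e) = 1/(sin e tan e + C).
    C = 0.0032 is a value commonly used for hydrostatic gradients."""
    e = math.radians(elev_deg)
    a = math.radians(azim_deg)
    m_g = 1.0 / (math.sin(e) * math.tan(e) + C)
    return m_g * (g_north * math.cos(a) + g_east * math.sin(a))

# a 1 mm north gradient observed at 5 degrees elevation, due north:
d = gradient_delay(5.0, 0.0, 1.0e-3, 0.0)   # roughly 9 cm of slant delay
```

The rapid growth of the gradient mapping function towards the horizon is why the abstract stresses that gradients are particularly required at low elevation angles.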

2.
López et al. (Reg Sci Urban Econ 40(2–3):106–115, 2010) introduce a nonparametric test of spatial dependence, called SG(m). The test is claimed to be consistent and asymptotically Chi-square distributed. Elsinger (Reg Sci Urban Econ 43(5):838–840, 2013) raises doubts about the two properties. Using a particular counterexample, he shows that the asymptotic distribution of the SG(m) test may be far from the Chi-square family; the property of consistency is also questioned. In this note, the authors want to clarify the properties of the SG(m) test. We argue that the cause of the conflict is in the specification of the symbolization map. The discrepancies can be solved by adjusting some of the definitions made in the original paper. Moreover, we introduce a permutational bootstrapped version of the SG(m) test, which is powerful and robust to the underlying statistical assumptions. This bootstrapped version may be very useful in an applied context.
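A permutational version of a test, as proposed here for SG(m), replaces the asymptotic Chi-square reference distribution with an empirical null built by recomputing the statistic on permuted data. A minimal generic sketch, using a toy lag-1 autocorrelation statistic rather than the actual SG(m) symbolization:

```python
import numpy as np

def permutation_pvalue(values, statistic, n_perm=999, seed=0):
    """Permutation ('bootstrapped') test: recompute the statistic on
    randomly permuted data to build its null distribution, instead of
    relying on an asymptotic Chi-square approximation."""
    rng = np.random.default_rng(seed)
    t_obs = statistic(values)
    t_null = np.array([statistic(rng.permutation(values)) for _ in range(n_perm)])
    # pseudo p-value with the +1 correction usual in permutation tests
    return (1 + np.sum(t_null >= t_obs)) / (n_perm + 1)

# toy statistic: lag-1 autocorrelation along a line of sites
stat = lambda v: float(np.corrcoef(v[:-1], v[1:])[0, 1])
trend = np.arange(50, dtype=float)      # strongly autocorrelated data
p = permutation_pvalue(trend, stat)     # small p: dependence detected
```

Because the null distribution is generated from the data themselves, the procedure is robust to departures from the distributional assumptions that trouble the asymptotic version.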

3.
We propose an approach for calibrating the horizontal tidal shear components (differential extension (\(\gamma _1\)) and engineering shear (\(\gamma _2\))) of two Sacks–Evertson (in Pap Meteorol Geophys 22:195–208, 1971) SES-3 borehole strainmeters installed in the Longitudinal Valley in eastern Taiwan. The method is based on the waveform reconstruction of the Earth and ocean tidal shear signals through linear regressions on strain gauge signals, with variable sensor azimuth. This method allows us to derive the orientation of the sensor without any initial constraints and to calibrate the shear strain components \(\gamma _1\) and \(\gamma _2\) against the \(M_2\) tidal constituent. The results illustrate the potential of tensor strainmeters for recording horizontal tidal shear strain.
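The azimuth dependence exploited by the calibration can be illustrated with the standard rotation of the shear strain pair: as components of a deviatoric 2-D tensor, \(\gamma _1\) and \(\gamma _2\) rotate with twice the sensor azimuth. A minimal sketch (sign conventions vary; this uses one common choice):

```python
import math

def rotate_shear(g1, g2, azimuth_deg):
    """Express the shear strain pair (g1 = differential extension,
    g2 = engineering shear) in a frame rotated by the given azimuth.
    The pair rotates with twice the rotation angle."""
    t = math.radians(2.0 * azimuth_deg)
    return (g1 * math.cos(t) + g2 * math.sin(t),
            -g1 * math.sin(t) + g2 * math.cos(t))

# at 45 degrees azimuth, pure differential extension appears as pure shear:
g = rotate_shear(1.0, 0.0, 45.0)
```

This double-angle dependence is what makes the sensor azimuth recoverable from regressions of the gauge signals on the predicted tidal shear.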

4.
The correction of tropospheric influences via so-called path delays is critical for the analysis of observations from space geodetic techniques like the very long baseline interferometry (VLBI). In standard VLBI analysis, the a priori slant path delays are determined using the concept of zenith delays, mapping functions and gradients. The a priori use of ray-traced delays, i.e., tropospheric slant path delays determined with the technique of ray-tracing through the meteorological data of numerical weather models (NWM), serves as an alternative way of correcting the influences of the troposphere on the VLBI observations within the analysis. In the presented research, the application of ray-traced delays to the VLBI analysis of sessions in a time span of 16.5 years is investigated. Ray-traced delays have been determined with program RADIATE (see Hofmeister in Ph.D. thesis, Department of Geodesy and Geophysics, Faculty of Mathematics and Geoinformation, Technische Universität Wien. http://resolver.obvsg.at/urn:nbn:at:at-ubtuw:1-3444, 2016) utilizing meteorological data provided by NWM of the European Centre for Medium-Range Weather Forecasts (ECMWF). In comparison with a standard VLBI analysis, which includes the tropospheric gradient estimation, the application of the ray-traced delays to an analysis, which uses the same parameterization except for the a priori slant path delay handling and the used wet mapping factors for the zenith wet delay (ZWD) estimation, improves the baseline length repeatability (BLR) at 55.9% of the baselines at sub-mm level. If no tropospheric gradients are estimated within the compared analyses, 90.6% of all baselines benefit from the application of the ray-traced delays, which leads to an average improvement of the BLR of 1 mm. The effects of the ray-traced delays on the terrestrial reference frame are also investigated. 
A separate assessment of the RADIATE ray-traced delays is carried out by comparison to the ray-traced delays from the National Aeronautics and Space Administration Goddard Space Flight Center (NASA GSFC) (Eriksson and MacMillan in http://lacerta.gsfc.nasa.gov/tropodelays, 2016) with respect to analysis performance in terms of BLR results. If tropospheric gradient estimation is included in the analysis, 51.3% of the baselines benefit from the RADIATE ray-traced delays at sub-mm difference level. If no tropospheric gradients are estimated within the analysis, the RADIATE ray-traced delays deliver a better BLR at 63% of the baselines compared to the NASA GSFC ray-traced delays.

5.
Large-scale mass redistribution in the terrestrial water storage (TWS) leads to changes in the low-degree spherical harmonic coefficients of the Earth’s surface mass density field. Studying these low-degree fluctuations is an important task that contributes to our understanding of continental hydrology. In this study, we use global GNSS measurements of vertical and horizontal crustal displacements that we correct for atmospheric and oceanic effects, and use a set of modified basis functions similar to Clarke et al. (Geophys J Int 171:1–10, 2007) to perform an inversion of the corrected measurements in order to recover changes in the coefficients of degree-0 (hydrological mass change), degree-1 (centre of mass shift) and degree-2 (flattening of the Earth) caused by variations in the TWS over the period January 2003–January 2015. We infer from the GNSS-derived degree-0 estimate an annual variation in total continental water mass with an amplitude of \((3.49 \pm 0.19) \times 10^{3}\) Gt and a phase of \(70^{\circ } \pm 3^{\circ }\) (implying a peak in early March), in excellent agreement with corresponding values derived from the Global Land Data Assimilation System (GLDAS) water storage model that amount to \((3.39 \pm 0.10) \times 10^{3}\) Gt and \(71^{\circ } \pm 2^{\circ }\), respectively. The degree-1 coefficients we recover from GNSS predict annual geocentre motion (i.e. the offset change between the centre of common mass and the centre of figure) caused by changes in TWS with amplitudes of \(0.69 \pm 0.07\) mm for GX, \(1.31 \pm 0.08\) mm for GY and \(2.60 \pm 0.13\) mm for GZ. These values agree with GLDAS and estimates obtained from the combination of GRACE and the output of an ocean model using the approach of Swenson et al. (J Geophys Res 113(B8), 2008) at the level of about 0.5, 0.3 and 0.9 mm for GX, GY and GZ, respectively. 
Corresponding degree-1 coefficients from SLR, however, generally show higher variability and predict larger amplitudes for GX and GZ. The results we obtain for the degree-2 coefficients from GNSS are slightly mixed, and the level of agreement with the other sources heavily depends on the individual coefficient being investigated. The best agreement is observed for \(T_{20}^C\) and \(T_{22}^S\), which contain the most prominent annual signals among the degree-2 coefficients, with amplitudes amounting to \((5.47 \pm 0.44) \times 10^{-3}\) and \((4.52 \pm 0.31) \times 10^{-3}\) m of equivalent water height (EWH), respectively, as inferred from GNSS. Corresponding agreement with values from SLR and GRACE is at the level of or better than \(0.4 \times 10^{-3}\) and \(0.9 \times 10^{-3}\) m of EWH for \(T_{20}^C\) and \(T_{22}^S\), respectively, while for both coefficients, GLDAS predicts smaller amplitudes. Somewhat lower agreement is obtained for the order-1 coefficients, \(T_{21}^C\) and \(T_{21}^S\), while our GNSS inversion seems unable to reliably recover \(T_{22}^C\). For all the coefficients we consider, the GNSS-derived estimates from the modified inversion approach are more consistent with the solutions from the other sources than corresponding estimates obtained from an unconstrained standard inversion.
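Annual amplitudes and phases like those quoted above are typically obtained by least-squares fitting a seasonal sinusoid to each recovered coefficient time series. A minimal sketch (the phase convention here is illustrative, not necessarily the one used in the study):

```python
import numpy as np

def annual_fit(t_years, y):
    """Least-squares fit of y(t) = a cos(2 pi t) + b sin(2 pi t) + c.
    Returns the amplitude sqrt(a^2 + b^2) and the phase atan2(b, a)
    in degrees (y peaks where 2 pi t equals the phase)."""
    w = 2.0 * np.pi * np.asarray(t_years, float)
    A = np.column_stack([np.cos(w), np.sin(w), np.ones_like(w)])
    (a, b, c), *_ = np.linalg.lstsq(A, np.asarray(y, float), rcond=None)
    return float(np.hypot(a, b)), float(np.degrees(np.arctan2(b, a))) % 360.0

t = np.linspace(0.0, 12.0, 600)       # 12 years of samples
y = 3.4 * np.cos(2 * np.pi * t - np.radians(70.0)) + 0.1
amp, phase = annual_fit(t, y)         # recovers ~3.4 and ~70 degrees
```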

6.
In order to move the polar singularity of an arbitrary spherical harmonic expansion to a point on the equator, we rotate the expansion around the y-axis by \(90^{\circ }\) such that the x-axis becomes a new pole. The expansion coefficients are transformed by multiplying by a special value of the Wigner D-matrix and a normalization factor. The transformation matrix is unchanged whether the coefficients are \(4 \pi \) fully normalized or Schmidt quasi-normalized. The matrix is recursively computed by the so-called X-number formulation (Fukushima in J Geodesy 86: 271–285, 2012a). As an example, we obtained the \(2190\times 2190\) coefficients of the rectangular rotated spherical harmonic expansion of EGM2008. A proper combination of the original and the rotated expansions will be useful in (i) integrating the polar orbits of artificial satellites precisely and (ii) synthesizing/analyzing the gravitational/geomagnetic potentials and their derivatives accurately in high-latitude regions, including the Arctic and Antarctic areas.

7.
Griliches’ knowledge production function has been increasingly adopted at the regional level where location-specific conditions drive the spatial differences in knowledge creation dynamics. However, the large majority of such studies rely on a traditional regression approach that assumes spatially homogenous marginal effects of knowledge input factors. This paper extends the authors’ previous work (Kang and Dall’erba in Int Reg Sci Rev, 2015. doi: 10.1177/0160017615572888) to investigate the spatial heterogeneity in the marginal effects by using nonparametric local modeling approaches such as geographically weighted regression (GWR) and mixed GWR with two distinct samples of the US Metropolitan Statistical Area (MSA) and non-MSA counties. The results indicate a high degree of spatial heterogeneity in the marginal effects of the knowledge input variables, more specifically for the local and distant spillovers of private knowledge measured across MSA counties. On the other hand, local academic knowledge spillovers are found to display spatially homogenous elasticities in both MSA and non-MSA counties. Our results highlight the strengths and weaknesses of each county’s innovation capacity and suggest policy implications for regional innovation strategies.
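GWR, the core tool here, fits a separate weighted regression at each location, with weights that decay with distance from that location. A minimal sketch with a Gaussian kernel (the bandwidth and kernel choice are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def gwr_coefficients(coords, x, y, bandwidth):
    """Geographically weighted regression: at every location, solve a
    weighted least-squares fit whose Gaussian kernel weights decay with
    distance from that location, giving spatially varying coefficients."""
    coords = np.asarray(coords, float)
    X1 = np.column_stack([np.ones(len(x)), x])      # intercept + input
    betas = []
    for p in coords:
        d = np.linalg.norm(coords - p, axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)     # Gaussian kernel
        # weighted normal equations: (X^T W X) beta = X^T W y
        beta = np.linalg.solve((X1 * w[:, None]).T @ X1, X1.T @ (w * y))
        betas.append(beta)
    return np.array(betas)                          # one (b0, b1) per site

coords = np.column_stack([np.linspace(0, 1, 40), np.zeros(40)])
x = np.linspace(0, 1, 40)
y = 1.0 + 2.0 * x          # spatially homogeneous relation
betas = gwr_coefficients(coords, x, y, bandwidth=0.2)
```

With homogeneous data all local slopes coincide; heterogeneity of the kind reported for private knowledge spillovers would show up as systematic variation of the local coefficients across counties.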

8.
This work is an investigation of three methods for regional geoid computation: Stokes’s formula, least-squares collocation (LSC), and spherical radial base functions (RBFs) using the spline kernel (SK). It is a first attempt to compare the three methods theoretically and numerically in a unified framework. While Stokes integration and LSC may be regarded as classic methods for regional geoid computation, RBFs may still be regarded as a modern approach. All methods are theoretically equal when applied globally, and we therefore expect them to give comparable results in regional applications. However, it has been shown by de Min (Bull Géod 69:223–232, 1995. doi: 10.1007/BF00806734) that the equivalence of Stokes’s formula and LSC does not hold in regional applications without modifying the cross-covariance function. In order to make all methods comparable in regional applications, the corresponding modification has also been introduced in the SK. Ultimately, we present numerical examples comparing Stokes’s formula, LSC, and SKs in a closed-loop environment using synthetic noise-free data, to verify their equivalence. All agree at the millimeter level.
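For reference, Stokes's formula uses the classic closed-form kernel, which can be evaluated directly and checked against its spectral definition (the LSC covariance and spline kernels are not reproduced here; this is only the textbook Stokes function):

```python
import numpy as np

def stokes_kernel(psi):
    """Closed form of Stokes's function S(psi) (Heiskanen & Moritz),
    the kernel that maps gravity anomalies to geoid heights:
    S = 1/s - 6s + 1 - 5 cos(psi) - 3 cos(psi) ln(s + s^2), s = sin(psi/2).
    Its spherical harmonic spectrum is (2n+1)/(n-1) for n >= 2 and has
    no degree-0 or degree-1 part."""
    s = np.sin(psi / 2.0)
    return (1.0 / s - 6.0 * s + 1.0
            - 5.0 * np.cos(psi)
            - 3.0 * np.cos(psi) * np.log(s + s * s))
```

A quick spectral check: projecting S onto Legendre polynomials over the sphere should give 0 for degrees 0 and 1 and \((2n+1)/(n-1) \cdot 2/(2n+1) = 2/(n-1)\) for \(n=2\), i.e. 2.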

9.
As a precursor study for the upcoming combined Earth Gravitational Model 2020 (EGM2020), the Experimental Gravity Field Model XGM2016, parameterized as a spherical harmonic series up to degree and order 719, is computed. XGM2016 shares the same combination methodology as its predecessor model GOCO05c (Fecher et al. in Surv Geophys 38(3): 571–590, 2017. doi: 10.1007/s10712-016-9406-y). The main difference between these models is that XGM2016 is supported by an improved terrestrial data set of \(15^\prime \times 15^\prime \) gravity anomaly area-means provided by the United States National Geospatial-Intelligence Agency (NGA), resulting in significant upgrades over existing combined gravity field models, especially in continental areas such as South America, Africa, parts of Asia, and Antarctica. A combination strategy of relative regional weighting provides improved performance in near-coastal ocean regions, including regions where the altimetric data are mostly unchanged from previous models. Comparing cumulative height anomalies from both EGM2008 and XGM2016 at degree/order 719 yields differences of 26 cm in Africa and 40 cm in South America. These differences result from including additional satellite data, as well as from the improved ground data in these regions. XGM2016 also yields a smoother Mean Dynamic Topography with significantly reduced artifacts, indicating improved modeling of the ocean areas.

10.
Oil spill pollution is a major environmental concern because of its hazardous effects on the marine environment. Periodic monitoring that detects oil spills and tracks their movement helps in efficient clean-up and recovery operations. Over the past few years, Synthetic Aperture Radar (SAR) based remote sensing has received considerable attention for monitoring and detecting oil spills due to its unique capability of wide-area observation in all weather conditions. However, the interpretation of marine SAR imagery is often ambiguous, since it is difficult to separate oil spills from look-alike features. The objective of our study was to extract probable oil spill candidates automatically from SAR imagery containing oil spill incidences, using a new method based on an over-segmentation and amalgamation approach. The methodology over-segments the entire image based on its statistics and then amalgamates relevant segments to represent the actual dark features as probable oil spill candidates. Relying on SAR imagery alone, the approach does not separate look-alike features; this can be addressed subsequently by considering associated synchronous external data sources such as optical data, wind and ocean parameters (Zhao et al. in Opt Express 22(11):13755–13772, 2014; Espedal and Wahl in Int J Remote Sens 20(1):49–65, 1999). The approach was applied to a set of RISAT-1 imagery containing oil spill incidences, and the extracted oil spill areas agree well with the visually interpreted output, with a kappa coefficient greater than 0.70 and an overall classification accuracy greater than 80%.

11.
Global Navigation Satellite Systems (GNSS) have become a powerful tool used in surveying and mapping, air and maritime navigation, ionospheric/space weather research and other applications. However, in some cases, their maximum efficiency cannot be attained due to some uncorrelated errors associated with the system measurements, caused mainly by the dispersive nature of the ionosphere. The ionosphere is represented by the total number of electrons along the signal path at a particular height, known as the Total Electron Content (TEC). There are many methods to estimate TEC, but their outputs are not uniform, which could be due to the peculiarities in characterizing the biases inside the observables (measurements) and sometimes to the influence of the mapping function. Errors in TEC estimation could lead to wrong conclusions, which is critical in safety-of-life applications. This work investigated the performance of Ciraolo's and Gopi's GNSS-TEC calibration techniques during 5 geomagnetically quiet and disturbed conditions in October 2013, at grid points located in low and middle latitudes. The data used were obtained from GNSS ground-based receivers located at Borriana in Spain (40\(^{\circ }\)N, 0\(^{\circ }\)E; middle latitude) and Accra in Ghana (5.50\(^{\circ }\)N, −0.20\(^{\circ }\)E; low latitude). The results of the calibrated TEC are compared with the TEC obtained from the European Geostationary Navigation Overlay Service Processing Set (EGNOS PS) TEC algorithm, which is considered as reference data. The TEC derived from Global Ionospheric Maps (GIM) through the International GNSS Service (IGS) was also examined at the same grid points. 
The results obtained in this work show that Ciraolo's calibration technique (based on carrier-phase measurements only) estimates TEC better at middle latitudes than Gopi's technique (based on code and carrier-phase measurements). Gopi's calibration, in turn, was found to be more reliable at low latitudes than Ciraolo's technique. In addition, the TEC derived from IGS GIM appears to be more reliable in the middle-latitude than in the low-latitude region.
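The uncalibrated TEC observable underlying such techniques is the dual-frequency geometry-free combination. A minimal sketch for GPS code measurements (inter-frequency biases are ignored here; removing them is precisely what the compared calibration techniques are for):

```python
def slant_tec(p1, p2, f1=1575.42e6, f2=1227.60e6):
    """Slant TEC in TECU from dual-frequency pseudoranges p1, p2 (metres)
    via the geometry-free combination. f1, f2 are the GPS L1/L2 carrier
    frequencies; 40.3 is the first-order ionospheric refraction constant
    and 1 TECU = 1e16 electrons/m^2. Receiver/satellite biases ignored."""
    k = 40.3
    stec = (f1**2 * f2**2) / (k * (f1**2 - f2**2)) * (p2 - p1)  # el/m^2
    return stec / 1e16   # TECU

# ~0.105 m of P2 - P1 range difference corresponds to about 1 TECU:
tec = slant_tec(0.0, 0.105)
```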

12.
Graph theory is useful for analyzing time-dependent model parameters estimated from interferometric synthetic aperture radar (InSAR) data in the temporal domain. Plotting acquisition dates (epochs) as vertices and pair-wise interferometric combinations as edges defines an incidence graph. The edge-vertex incidence matrix and the normalized edge Laplacian matrix are factors in the covariance matrix for the pair-wise data. Using empirical measures of residual scatter in the pair-wise observations, we estimate the relative variance at each epoch by inverting the covariance of the pair-wise data. We evaluate the rank deficiency of the corresponding least-squares problem via the edge-vertex incidence matrix. We implement our method in a MATLAB software package called GraphTreeTA available on GitHub (https://github.com/feigl/gipht). We apply temporal adjustment to the data set described in Lu et al. (J Geophys Res Solid Earth 110, 2005) at Okmok volcano, Alaska, which erupted most recently in 1997 and 2008. The data set contains 44 differential volumetric changes and uncertainties estimated from interferograms between 1997 and 2004. Estimates show that approximately half of the magma volume lost during the 1997 eruption was recovered by the summer of 2003. Between June 2002 and September 2003, the estimated rate of volumetric increase is \((6.2 \, \pm \, 0.6) \times 10^6~\mathrm{m}^3/\mathrm{year} \). Our preferred model provides a reasonable fit that is compatible with viscoelastic relaxation in the five years following the 1997 eruption. Although we demonstrate the approach using volumetric rates of change, our formulation in terms of incidence graphs applies to any quantity derived from pair-wise differences, such as range change, range gradient, or atmospheric delay.
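The temporal-adjustment idea can be sketched as a least-squares inversion over the edge-vertex incidence matrix (a minimal version; GraphTreeTA's variance estimation and rank analysis are not reproduced):

```python
import numpy as np

def temporal_adjustment(n_epochs, pairs, dv):
    """Invert pair-wise differences for per-epoch values: build the
    edge-vertex incidence matrix of the interferogram graph (one row
    per pair: -1 at the earlier epoch, +1 at the later one) and solve
    by least squares, fixing the first epoch to zero to remove the
    rank deficiency of a connected graph."""
    A = np.zeros((len(pairs), n_epochs))
    for r, (i, j) in enumerate(pairs):
        A[r, i], A[r, j] = -1.0, 1.0
    A = A[:, 1:]                      # datum: epoch 0 held at 0
    v, *_ = np.linalg.lstsq(A, np.asarray(dv, float), rcond=None)
    return np.concatenate([[0.0], v])

# three epochs, three consistent pair-wise differences
vals = temporal_adjustment(3, [(0, 1), (1, 2), (0, 2)], [1.0, 2.0, 3.0])
```

As the abstract notes, the same machinery applies to any pair-wise-differenced quantity, since only the incidence structure and the observed differences enter the inversion.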

13.
Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, \(1/f^{\alpha }\) with frequency, f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi: 10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
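The filter construction alluded to at the end can be sketched with the standard recursion for power-law noise (Kasdin-type; shown here only to illustrate that combining noise processes reduces to combining filters rather than covariance matrices):

```python
import numpy as np

def powerlaw_filter(alpha, n):
    """Impulse response h of the filter that turns unit white noise into
    1/f^alpha power-law noise: h[0] = 1, h[k] = h[k-1]*(k-1+alpha/2)/k.
    alpha = 0 gives white noise (identity filter), alpha = 1 flicker
    noise, alpha = 2 random walk (cumulative sum)."""
    h = np.empty(n)
    h[0] = 1.0
    for k in range(1, n):
        h[k] = h[k - 1] * (k - 1 + alpha / 2.0) / k
    return h

# alpha = 2 (random walk): the filter is all ones, i.e. a running sum
h_rw = powerlaw_filter(2.0, 6)
```

Because the data covariance is then that of white noise passed through a single combined filter, its inversion simplifies without requiring the Toeplitz structure assumed in the approximate approach.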

14.
Detecting communities in large networks has become a common practice in socio-spatial analyses and has led to the development of numerous dedicated mathematical algorithms. Nowadays, however, researchers face a deluge of data and algorithms, and great care must be taken regarding methodological questions such as the values of the parameters and the geographical characteristics of the data. We aim here at testing the sensitivity of multi-scale modularity optimized by the Louvain method to the value of the resolution parameter (introduced by Reichardt and Bornholdt (Phys Rev Lett 93(21):218701, 2004.  https://doi.org/10.1103/PhysRevLett.93.218701) and controlling the size of the communities) and to a number of spatial issues such as the inclusion of internal loops and the delineation of the study area. We compare the community structures with those found by another well-known community detection algorithm (Infomap), and we further interpret the final results in terms of urban geography. Sensitivity analyses are conducted for commuting movements in and around Brussels. Results reveal slight effects of spatial issues (inclusion of the internal loops, definition of the study area) on the partition into job basins, while the resolution parameter plays a major role in the final results and their interpretation in terms of urban geography. Community detection methods seem to reveal a surprisingly strong spatial effect of commuting patterns: Similar partitions are obtained with different methods. This paper highlights the advantages and sensitivities of the multi-scale Louvain method and more particularly of defining communities of places. Despite these sensitivities, the method proves to be a valuable tool for geographers and planners.
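The resolution parameter enters the modularity objective as a prefactor on the null-model term. A minimal sketch of resolution-adjusted modularity for a given partition (Louvain's optimization loop itself is not reproduced):

```python
import numpy as np

def modularity(A, labels, gamma=1.0):
    """Newman modularity with the Reichardt-Bornholdt resolution
    parameter: Q = (1/2m) * sum over same-community pairs of
    [A_ij - gamma * k_i * k_j / (2m)]. gamma > 1 favours smaller
    communities, gamma < 1 larger ones."""
    A = np.asarray(A, float)
    k = A.sum(axis=1)                       # node degrees
    two_m = k.sum()                         # 2 * number of edges
    same = np.equal.outer(labels, labels)   # same-community mask
    B = A - gamma * np.outer(k, k) / two_m  # modularity matrix
    return float(B[same].sum() / two_m)

# two disconnected triangles: the natural 2-community split
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    A[i, j] = A[j, i] = 1.0
q = modularity(A, [0, 0, 0, 1, 1, 1])   # 0.5 at gamma = 1
```

Raising gamma penalizes large communities: for the same partition, the score drops as gamma grows, which is why the partition that maximizes it fragments into smaller basins.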

15.
GNSS observations provided by the global tracking network of the International GNSS Service (IGS, Dow et al. in J Geod 83(3):191–198, 2009) play an important role in the realization of a unique terrestrial reference frame that is accurate enough to allow a detailed monitoring of the Earth’s system. Combining these ground-based data with GPS observations tracked by high-quality dual-frequency receivers on-board low earth orbiters (LEOs) is a promising way to further improve the realization of the terrestrial reference frame and the estimation of geocenter coordinates, GPS satellite orbits and Earth rotation parameters. To assess the scope of the improvement on the geocenter coordinates, we processed a network of 53 globally distributed and stable IGS stations together with four LEOs (GRACE-A, GRACE-B, OSTM/Jason-2 and GOCE) over a time interval of 3 years (2010–2012). To ensure fully consistent solutions, the zero-difference phase observations of the ground stations and LEOs were processed in a common least-squares adjustment, estimating all the relevant parameters such as GPS and LEO orbits, station coordinates, Earth rotation parameters and geocenter motion. We present the significant impact of the individual LEOs and of a combination of all four LEOs on the geocenter coordinates. The formal errors are reduced by around 20% due to the inclusion of one LEO into the ground-only solution, while in a solution with four LEOs LEO-specific characteristics are significantly reduced. We compare the derived geocenter coordinates w.r.t. LAGEOS results and external solutions based on GPS and SLR data. We found good agreement in the amplitudes of all components; however, the phases in the x- and z-directions do not agree well.

16.
The quality of the links between the different space geodetic techniques (VLBI, SLR, GNSS and DORIS) is still one of the major limiting factors for the realization of a unique global terrestrial reference frame that is accurate enough to allow the monitoring of the Earth system, i.e., of processes like sea level change, postglacial rebound and silent earthquakes. According to the specifications of the global geodetic observing system of the International Association of Geodesy, such a reference frame should be accurate to 1 mm over decades, with rates of change stable at the level of 0.1 mm/year. The deficiencies arise from inaccurate or incomplete local ties at many fundamental sites as well as from systematic instrumental biases in the individual space geodetic techniques. Frequently repeated surveys, the continuous monitoring of antenna heights and the geometrical mount stability (Lösler et al. in J Geod 90:467–486, 2016.  https://doi.org/10.1007/s00190-016-0887-8) have not provided evidence for insufficient antenna stability. Therefore, we have investigated variations in the respective system delays caused by electronic circuits, which are not adequately captured by the calibration process, either because of subtle differences in the circuitry between geodetic measurement and calibration, high temporal variability, or a lack of resolving bandwidth. The measured system delay variations in the electric chain of both VLBI and SLR systems reach the order of 100 ps, which is equivalent to 3 cm of path length. Most of this variability is usually removed by the calibrations, but by far not all. This paper focuses on the development of new technologies and procedures for co-located geodetic instrumentation in order to identify and remove systematic measurement biases within and between the individual measurement techniques. 
A closed-loop optical time and frequency distribution system and a common inter-technique reference target provide the possibility to remove variable system delays. The main motivation for the newly established central reference target, locked to the station clock, is the combination of all space geodetic instruments at a single reference point at the observatory. On top of that it provides the unique capability to perform a closure measurement based on the observation of time.

17.
The study areas, the Tikovil and Payppara sub-watersheds of the Meenachil river, cover 158.9 and 111.9 km², respectively. These watersheds are part of the Western Ghats, an ecologically sensitive region. The drainage network of the sub-watersheds was delineated from SOI topographical maps on 1:50,000 scale using ArcGIS software. Stream orders were assigned using the method proposed by Strahler (1964). The drainage network shows that the terrain exhibits a dendritic to sub-dendritic drainage pattern. Stream order ranges from the fifth to the sixth order. Drainage density varies between 1.69 and 2.62 km/km². The drainage textures of the two basins are 2.3 and 6.98 km⁻¹, categorized as coarse to very fine. Stream frequency is low in the case of the Payappara sub-watershed (1.78 km⁻²). Payappara sub-watershed has the higher constant of channel maintenance (0.59), indicating fewer structural disturbances and less runoff. The form factor varies between 0.42 and 0.55, suggesting an elongated shape for the Payappara sub-watershed and a somewhat more circular shape for the Tikovil sub-watershed. The mean bifurcation ratio (3.5) indicates that both sub-watersheds are within the natural stream system. The study thus shows that GIS techniques are a competent tool for morphometric analysis.
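The morphometric indices reported here follow standard Horton-Strahler definitions; a minimal sketch with illustrative input values (note that the reciprocal of drainage density reproduces the constant of channel maintenance, e.g. 1/1.69 ≈ 0.59 as reported for Payappara):

```python
def morphometry(area_km2, total_stream_length_km, n_streams, basin_length_km):
    """Standard morphometric indices: drainage density Dd = L/A,
    stream frequency Fs = N/A, constant of channel maintenance C = 1/Dd,
    form factor Ff = A/Lb^2 (Horton/Schumm definitions)."""
    dd = total_stream_length_km / area_km2
    return {
        "drainage_density": dd,                    # km / km^2
        "stream_frequency": n_streams / area_km2,  # km^-2
        "channel_maintenance": 1.0 / dd,           # km^2 / km
        "form_factor": area_km2 / basin_length_km ** 2,
    }

# illustrative basin: 100 km^2, 200 km of streams, 150 segments, 15 km long
m = morphometry(100.0, 200.0, 150, 15.0)
```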

18.

Background

Carbon accounting in forests remains a large area of uncertainty in the global carbon cycle. Forest aboveground biomass is therefore an attribute of great interest for the forest management community, but the accuracy of aboveground biomass maps depends on the accuracy of the underlying field estimates used to calibrate models. These field estimates depend on the application of allometric models, which often have unknown and unreported uncertainties outside of the size class or environment in which they were developed.

Results

Here, we test three popular allometric approaches to field biomass estimation, and explore the implications of allometric model selection for county-level biomass mapping in Sonoma County, California. We test three allometric models: Jenkins et al. (For Sci 49(1): 12–35, 2003), Chojnacky et al. (Forestry 87(1): 129–151, 2014) and the US Forest Service’s Component Ratio Method (CRM). We found that the Jenkins and Chojnacky models perform comparably, but that at both the field plot level and the total county level there was a ~ 20% difference between these estimates and the CRM estimates. Further, we show that discrepancies are greater in high biomass areas with high canopy covers and relatively moderate heights (25–45 m). The CRM models, although on average ~ 20% lower than Jenkins and Chojnacky, produce higher estimates in the tallest forest samples (> 60 m), while Jenkins generally produces higher estimates of biomass in forests < 50 m tall. Discrepancies do not continually increase with increasing forest height, suggesting that inclusion of height in allometric models is not primarily driving discrepancies. Models developed using all three allometries underestimate high biomass and overestimate low biomass, as expected with random forest biomass modeling. However, these deviations were generally larger using the Jenkins and Chojnacky allometries, suggesting that the CRM approach may be more appropriate for biomass mapping with lidar.
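The Jenkins-type allometries compared here share a simple log-linear form, biomass = exp(b0 + b1 ln dbh). A sketch with placeholder coefficients (illustrative values, not the published species-group ones), showing that a constant ~20% discrepancy between two allometries is just a fixed offset on the log scale:

```python
import math

def allometric_biomass_kg(dbh_cm, b0, b1):
    """Jenkins-type allometry: aboveground biomass (kg) =
    exp(b0 + b1 * ln(dbh)), dbh in cm. The coefficients used below are
    illustrative placeholders, not published species-group values."""
    return math.exp(b0 + b1 * math.log(dbh_cm))

bm_a = allometric_biomass_kg(30.0, -2.5, 2.4)   # one allometry
bm_b = 0.8 * bm_a                               # a second one, 20% lower
```

Because the models are multiplicative in dbh, a systematic ratio between two allometries propagates unchanged from individual trees to plot- and county-level totals, consistent with the ~20% gap reported at both scales.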

Conclusions

These results confirm that allometric model selection considerably impacts biomass maps and estimates, and that allometric model errors remain poorly understood. Our finding that allometric model discrepancies are not explained by lidar-derived heights suggests that allometric model form does not drive these discrepancies. A better understanding of the sources of allometric model errors, particularly in high-biomass systems, is essential for improved forest biomass mapping.
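The Jenkins-type models compared above share a simple log-linear form, biomass = exp(b0 + b1 · ln(dbh)). A minimal sketch of that form follows; the coefficients `B0` and `B1` below are illustrative placeholders, not values from this study or from the published species-group tables:

```python
import math

def jenkins_biomass_kg(dbh_cm, b0, b1):
    """Jenkins-form allometry: aboveground biomass (kg) predicted from
    diameter at breast height (cm) as bm = exp(b0 + b1 * ln(dbh))."""
    return math.exp(b0 + b1 * math.log(dbh_cm))

# Illustrative coefficients only (placeholders, not fitted values).
B0, B1 = -2.5, 2.5
print(round(jenkins_biomass_kg(30.0, B0, B1), 1))
```

Because the form is a power law, doubling the diameter multiplies predicted biomass by 2^b1, which is why small coefficient differences between allometries compound into the ~20% plot- and county-level discrepancies reported above.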

19.
Automatic building extraction is an important topic for many applications such as urban planning, disaster management, 3D building modeling and updating GIS databases. Approaches mainly depend on two data sources: light detection and ranging (LiDAR) point clouds and aerial imagery, each of which has its own advantages and disadvantages. In this study, in order to benefit from the advantages of each data source, LiDAR and image data were combined. The building boundaries were then extracted with an automated active contour algorithm implemented in MATLAB. The active contour algorithm uses initial contour positions to segment an object in the image. Initial contour positions were detected without user interaction by a series of image enhancements, band ratios and morphological operations. Four test areas with varying building and background levels of detail were selected from the ISPRS benchmark Vaihingen and Istanbul datasets. Vegetation and shadows were removed from all datasets by band ratio to improve segmentation quality. Subsequently, the LiDAR point cloud data were converted to raster format and added to the aerial imagery as an extra band. The resulting merged image and initial contour positions were given to the active contour algorithm to extract building boundaries. In order to assess the contribution of LiDAR to the proposed method, the building boundaries were extracted from the input image both before and after adding the LiDAR data as a layer. Finally, the extracted building boundaries were smoothed by the Awrangjeb (Int J Remote Sens 37(3): 551–579,  https://doi.org/10.1080/01431161.2015.1131868, 2016) boundary regularization algorithm. Correctness (Corr), completeness (Comp) and quality (Q) metrics were used to assess the accuracy of the segmented building boundaries by comparing them with manually digitized building boundaries. 
The proposed approach shows promising results, with over 93% correctness, 92% completeness and 89% quality.
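The vegetation masking and automatic seed detection steps described above can be sketched as follows. This is not the authors' MATLAB implementation: the NDVI-style band ratio, the threshold values, and the toy rasters are all illustrative assumptions.

```python
import numpy as np

def vegetation_mask(nir, red, thresh=0.3):
    """NDVI-style band ratio; pixels above `thresh` are flagged as
    vegetation and excluded before segmentation (threshold is illustrative)."""
    ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)
    return ndvi > thresh

def initial_building_mask(ndsm, veg, height_min=2.5):
    """Seed regions for the active contour: elevated, non-vegetation pixels.
    `ndsm` is the rasterized LiDAR normalized surface model in metres,
    i.e. the extra band added to the aerial image."""
    return (ndsm > height_min) & ~veg

# Toy 4x4 scene: left half is tall vegetation-free... no, left half is
# vegetated ground, right half is an elevated non-vegetated 'building'.
nir = np.array([[0.8, 0.8, 0.2, 0.2]] * 4)
red = np.array([[0.1, 0.1, 0.3, 0.3]] * 4)
ndsm = np.array([[0.0, 0.0, 6.0, 6.0]] * 4)
seeds = initial_building_mask(ndsm, vegetation_mask(nir, red))
print(int(seeds.sum()))  # 8 seed pixels on the 'building' half
```

In the actual pipeline, such a seed mask would supply the user-interaction-free initial contour positions, and the active contour would then evolve them against the merged image-plus-LiDAR layers.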

20.
The Doppler orbitography and radiopositioning integrated by satellite (DORIS) system was originally developed for precise orbit determination of low Earth orbiting (LEO) satellites. Beyond that, it is well suited for modeling the distribution of electrons within the Earth's ionosphere. It measures on two frequencies in the L-band with a relative frequency ratio close to 5. Since the terrestrial ground beacons are distributed quite homogeneously and several LEOs are equipped with modern receivers, good applicability for global vertical total electron content (VTEC) modeling can be expected. This paper investigates the capability of DORIS dual-frequency phase observations for deriving VTEC and the contribution of these data to global VTEC modeling. The DORIS preprocessing is performed similarly to commonly used global navigation satellite system (GNSS) preprocessing. However, the absolute DORIS VTEC level is taken from global ionospheric maps (GIMs) provided by the International GNSS Service (IGS), as the DORIS data contain no absolute information. DORIS-derived VTEC values show good consistency with IGS GIMs, with an RMS between 2 and 3 total electron content units (TECU) depending on solar activity, which can be reduced to less than 2 TECU when using only observations with elevation angles higher than \(50^\circ \). The combination of DORIS VTEC with data from other space-geodetic measurement techniques improves the accuracy of global VTEC models significantly. If DORIS VTEC data are used to update IGS GIMs, an improvement of up to 12% can be achieved. The accuracy directly beneath the DORIS satellites' ground-tracks ranges between 1.5 and 3.5 TECU, assuming a precision of 2.5 TECU for the altimeter-derived VTEC values used for validation.
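Dual-frequency phase data yield only relative TEC, which is why the absolute level must be taken from IGS GIMs. A minimal sketch of the standard first-order geometry-free relation, assuming the nominal DORIS carriers of 2036.25 and 401.25 MHz and ignoring sign conventions, ambiguities and inter-frequency biases:

```python
# Slant TEC change from a change in the geometry-free (L1 - L2) phase
# combination expressed in metres:
#   dSTEC = f1^2 * f2^2 / (40.3 * (f1^2 - f2^2)) * dL4
# This is a sketch of the textbook first-order ionospheric model only.
F1 = 2036.25e6  # DORIS ~2 GHz carrier (Hz)
F2 = 401.25e6   # DORIS ~400 MHz carrier (Hz); F1/F2 is "close to 5"

def dstec_tecu(d_l4_m):
    """TEC change in TECU (1 TECU = 1e16 el/m^2) per change of the
    geometry-free phase combination in metres."""
    return F1**2 * F2**2 / (40.3e16 * (F1**2 - F2**2)) * d_l4_m
```

With these frequencies, roughly 2.4 m of geometry-free phase change corresponds to 1 TECU, so the relative TEC profile is well resolved along a pass, while the constant offset (the absolute level) remains unobservable and is supplied by the GIMs.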
