Similar Documents
20 similar documents found (search time: 125 ms).
1.
In general, to reconstruct the accurate shape of buildings, we need at least one stereomodel (two photographs) for each building. In most cases, however, only a single non-metric photograph is availabl...

2.
A structure function based on the Householder transformation and the norm in the L2 vector space of linear algebra is introduced, and edge enhancement for remote sensing images is realized with it. Experimental results are compared with traditional Laplacian and Sobel edge enhancement, showing that the new method performs better than the traditional algorithms.
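The abstract does not detail the Householder-based operator itself; as a point of reference, here is a minimal sketch of the two traditional baselines it is compared against (Sobel and Laplacian edge enhancement), using SciPy's standard filters on a single-band image:

```python
import numpy as np
from scipy import ndimage

def sobel_edges(img):
    # Gradient magnitude from horizontal and vertical Sobel responses.
    gx = ndimage.sobel(img, axis=1, output=float)
    gy = ndimage.sobel(img, axis=0, output=float)
    return np.hypot(gx, gy)

def laplacian_enhance(img, weight=1.0):
    # Classic Laplacian sharpening: subtract the Laplacian from the band.
    return img.astype(float) - weight * ndimage.laplace(img.astype(float))
```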

3.
We address the problem of estimating the carrier-to-noise density ratio (C/N0) in weak signal conditions. There are several environments, such as forested areas, indoor locations and urban canyons, where high-sensitivity global navigation satellite system (HS-GNSS) receivers are expected to work under these reception conditions. The acquisition of weak signals from the satellites requires the use of post-detection integration (PDI) techniques to accumulate enough energy to detect them. However, due to the attenuation suffered by these signals, estimating their C/N0 becomes a challenge. Measurements of C/N0 are important in many applications of HS-GNSS receivers, such as the determination of a detection threshold or the mitigation of near-far problems. For this reason, different techniques have been proposed in the literature to estimate the C/N0, but they only work properly in the high-C/N0 region, where coherent integration alone is enough to acquire the satellites. We derive four C/N0 estimators that are specially designed for HS-GNSS snapshot receivers and use only the output of a PDI technique to perform the estimation. We consider four PDI techniques, namely non-coherent PDI, non-quadratic non-coherent PDI, differential PDI and truncated generalized PDI, and we obtain the corresponding C/N0 estimator for each of them. Our performance analysis shows a significant advantage of the proposed estimators with respect to other C/N0 estimators available in the literature, in terms of both estimation accuracy and computational resources.
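The four estimators derived in the paper are not reproduced in the abstract. As an illustration of the general idea — estimating C/N0 purely from PDI outputs — here is a minimal moment-style sketch for plain non-coherent PDI, assuming the peak search cell holds signal plus noise and the remaining cells are noise-only; the function names and the estimator itself are illustrative, not the paper's:

```python
import numpy as np

def noncoherent_pdi(y):
    """Plain non-coherent PDI: y has shape (N_nc, n_cells) of complex
    coherent correlator outputs; blocks are accumulated in power."""
    return np.sum(np.abs(y)**2, axis=0)

def cn0_estimate_dbhz(Z, T_coh):
    """Moment-style C/N0 from a PDI search grid Z (one value per
    code/Doppler cell): peak cell ~ signal + noise, the rest ~ noise only."""
    k = int(np.argmax(Z))
    noise = np.median(np.delete(Z, k))           # robust noise-floor estimate
    snr_coh = max(Z[k] - noise, 1e-12) / noise   # post-correlation SNR per block
    return 10.0 * np.log10(snr_coh / T_coh)      # C/N0 ~= SNR / T_coh
```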

4.
The TOPEX/Poseidon (T/P) satellite altimeter mission marked a new era in determining the geopotential constant W0. On the basis of T/P data during 1993–2003 (cycles 11–414), long-term variations in W0 have been investigated. The rounded value W0 = (62 636 856.0 ± 0.5) m² s⁻² has already been adopted by the International Astronomical Union for the definition of the constant LG = W0/c² = 6.969290134 × 10⁻¹⁰ (where c is the speed of light), which is required for the realization of the relativistic atomic time scale. The constant LG, based on the above value of W0, is also included in the 2003 International Earth Rotation and Reference Systems Service conventions. It has also been suggested that W0 be used to specify a global vertical reference system (GVRS). W0 ensures consistency with the International Terrestrial Reference System: after adopting W0, along with the geocentric gravitational constant (GM), the Earth's rotational velocity (ω) and the second zonal geopotential coefficient (J2) as primary constants (parameters), the ellipsoidal parameters (a, α) can be computed and adopted as derived parameters. The scale of the International Terrestrial Reference Frame 2000 (ITRF2000) has also been specified with the use of W0 to be consistent with geocentric coordinate time. As an example of using W0 for a GVRS realization, the geopotential difference between the adopted W0 and the geopotential at the Rimouski tide-gauge point, which specifies the North American Vertical Datum 1988 (NAVD88), has been estimated.
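The defining relation is simple enough to check directly; a few lines (values taken from the abstract, with the exact SI speed of light) reproduce the quoted LG:

```python
W0 = 62636856.0           # m^2 s^-2, adopted zero-level geopotential (from the abstract)
c = 299792458.0           # m/s, exact SI speed of light
LG = W0 / c**2
print(f"L_G = {LG:.9e}")  # -> ~6.969290134e-10, matching the IAU-adopted value
```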

5.
6.
This article presents the results and potential of using volunteered geographic information (VGI) in heritage detection. The research was completed under the project entitled "Laser Discoverers – non-invasive examination and documentation of archeological and historical objects in the Świętokrzyskie Voivodeship", carried out as part of the Ministry of Science and Higher Education program entitled "The Paths of Copernicus". Within the project, strong emphasis was placed on promotional and awareness-raising activities to involve as many voluntary users as possible. Project participants had at their disposal a web application providing access to a digital terrain model (DTM) on which they identified possible heritage objects. Each data sample was additionally available in eight illumination variants, based on simulated sunlight from eight directions at a constant sun angle. In total, 5,989 elementary areas with dimensions of 100 × 100 m were used for the project. After conducting a field inventory, Internet users together with specialists were able to recognize several thousand potential archaeological and historical objects. During the project, approximately 10% of those features were verified through non-invasive (field survey) work, with 75% success.
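The multi-directional hillshading the volunteers worked with is straightforward to reproduce from a DTM. A minimal sketch using the standard Lambertian hillshade formula, with eight azimuths at a fixed sun altitude (the altitude value and aspect convention are illustrative, not the project's):

```python
import numpy as np

def hillshade(dtm, azimuth_deg, altitude_deg=35.0, cellsize=1.0):
    """Standard hillshade: cosine of the angle between the surface normal
    and the sun direction, derived from DTM slope and aspect."""
    zen = np.radians(90.0 - altitude_deg)
    az = np.radians(azimuth_deg)
    dy, dx = np.gradient(dtm, cellsize)
    slope = np.arctan(np.hypot(dx, dy))
    aspect = np.arctan2(-dx, dy)       # common GIS aspect convention
    return (np.cos(zen) * np.cos(slope)
            + np.sin(zen) * np.sin(slope) * np.cos(az - aspect))

# Eight variants, one per compass direction, at a constant sun altitude.
dtm = np.random.rand(100, 100)         # stand-in terrain
shades = [hillshade(dtm, a) for a in range(0, 360, 45)]
```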

7.
Rod Bryant, GPS Solutions, 2002, 6(3): 138–148
A key requirement for emergency call location (e.g. E911), for robust operation of location-based m-commerce systems, and for telematics systems is that the location technology be able to operate in urban canyons and inside buildings. We start from a definition of the target environments, which includes multi-level parking garages, office buildings and homes, but not underground parking garages or tunnels. Based on experience in these target environments and an understanding of typical applications, we derive specific requirements for sensitivity and acquisition speed. The primary problems associated with weak-signal operation are as follows. (1) In conventional GPS receivers, sampling at the correlator output typically occurs at a sampling interval of the order of 1 ms; with weak signals, however, the signal-to-noise ratio of these samples is too low to support lock-in of a phase-locked or frequency-locked loop. (2) With weak signals, the signal-to-noise ratio is too low to support extraction of the 50 bps navigation message from the signal, so aiding data is required from an external source. (3) Because the data cannot be extracted, the receiver cannot synchronize to the incoming bits, words or subframes, and therefore cannot construct pseudoranges without prior information.

The paper describes Sigtec Navigation's subATTO technology, which provides sensitivity down to −185 dBW (19 dB-Hz, assuming a noise figure of 1.5 dB and no other implementation loss). This is 5 dB below an attowatt (10⁻¹⁸ W) and has been shown to provide reliable positioning inside buildings, in multi-level parking garages and in urban canyons without any aiding at all. The paper describes the patented signal-processing scheme, how ambiguity resolution and time synchronization are achieved, the wireless assistance technique, the acquisition strategy and the use of scanning channels. Results are presented from trials in a multi-level parking garage; the results obtained in most parking garages are similar to these in terms of availability of fixes, signal strengths received and location accuracy achieved, and the performance in multi-level parking garages is rarely worse.

One of the major impediments to practical application of weak-signal processing schemes is the limited dynamic range imposed by the GPS C/A code signal structure. This problem is discussed along with the problem of multipath distortion in the context of telematics operation in urban canyons, and a realistic urban accuracy goal of 20 m for 95% of fixes is proposed based on experience with GPS and dead reckoning. Enhancements under development will provide sensitivity of −188 dBW, which will provide continuous availability within a broader range of indoor environments; for practical applications, this will require the use of modern 'search engine' hardware for acceptable acquisition speed. As the paper shows, this sensitivity is near the practical limit of sensitivity with acceptable acquisition times and dynamic capability.
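The sensitivity figures quoted above are tied together by a simple link budget: C/N0 equals the received carrier power minus the thermal noise density raised by the receiver noise figure. A sketch, noting that the resulting dB-Hz value depends on the assumed system noise temperature:

```python
import math

K_BOLTZMANN = 1.380649e-23  # J/K

def cn0_dbhz(p_rx_dbw, nf_db, t_kelvin=290.0):
    """C/N0 = P_rx - (10*log10(k*T) + NF); k*T is ~ -204 dBW/Hz at 290 K."""
    n0_dbw_hz = 10.0 * math.log10(K_BOLTZMANN * t_kelvin) + nf_db
    return p_rx_dbw - n0_dbw_hz

# With the abstract's figures (-185 dBW, NF 1.5 dB) this lands in the high
# teens of dB-Hz; the exact value depends on the assumed noise temperature.
print(round(cn0_dbhz(-185.0, 1.5), 1))
```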

8.
The paper presents a method of estimating parameters in two competitive functional models. The models considered here concern the same observation set and are based on the assumption that an observation may result from a realization of either of two different random variables. These variables differ from one another at least in their main characteristic (for example, outliers can be realizations of one of the variables). A quantity that describes the possibility of assigning a single observation to one of the random variables is assumed to be known. That quantity, called the elementary split potential, is strictly related to the amount of information that an observation can provide about the two competing assumptions concerning the observation's distribution. Parameter assessments that maximize the global elementary split potential (concerning all observations) are called Msplit estimators. A generalization of Msplit estimation presented in the paper refers to the theoretical foundations of M-estimation.
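The abstract does not spell out the estimator equations. For the flavor of the approach, here is a minimal sketch of the squared (q = 2) Msplit idea for two competing means: each parameter is re-estimated with weights given by the squared residuals with respect to the other model, so observations explained well by one model barely influence the other. This is a toy illustration, not the paper's general formulation:

```python
import numpy as np

def msplit_two_means(x, iters=100):
    """Alternating estimates of two competing means: observations far from
    model b get high weight when estimating model a, and vice versa."""
    a, b = np.percentile(x, [25, 75])   # crude initial split
    for _ in range(iters):
        wa = (x - b)**2                 # weights for a = squared residuals w.r.t. b
        a = np.sum(wa * x) / np.sum(wa)
        wb = (x - a)**2
        b = np.sum(wb * x) / np.sum(wb)
    return a, b

x = np.concatenate([np.random.normal(0, 1, 80), np.random.normal(6, 1, 20)])
print(msplit_two_means(x))              # two estimates, one per competing model
```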

9.
The Renault Company has developed a system for the mathematical definition of complex surfaces known as the Unisurf method. In association with this method, close range photogrammetry is used to provide the digital data required for the analysis of a surface. Photogrammetric techniques are of value in various fields of application, including the styling of new models of cars, design modifications to existing models, car accident investigations and statistical studies on human bodies.

10.
Big Data, Linked Data, Smart Dust, Digital Earth, and e-Science are just some of the names for research trends that have surfaced over recent years. While all of them address different visions and needs, they share a common theme: how do we manage massive amounts of heterogeneous data and derive knowledge out of them instead of drowning in information, and how do we make our findings reproducible and reusable by others? In a network of knowledge, topics span scientific disciplines, and the idea of domain ontologies as common agreements seems like an illusion. In this work, we argue that these trends require a radical paradigm shift in ontology engineering: away from a small number of authoritative, global ontologies developed top-down, to a large number of local ontologies that are driven by application needs and developed bottom-up out of observation data. Just as the early Web was replaced by a social Web in which volunteers produce data instead of purely consuming it, the next generation of knowledge infrastructures has to enable users to become knowledge engineers themselves. Surprisingly, existing ontology engineering frameworks are not well suited for this new perspective. Hence, we propose an observation-driven ontology engineering framework, show how its layers can be realized using specific methodologies, and relate the framework to existing work on geo-ontologies.

11.
12.
Errors in high-frequency ocean tide models alias to low frequencies in time-variable gravity solutions from the Gravity Recovery and Climate Experiment (GRACE). We conduct an observational study of apparent gravity changes at a period of 161 days, the alias period of errors in the S2 semidiurnal solar tide. We examine this S2 alias in the release 4 (RL04) reprocessed GRACE monthly gravity solutions for the period April 2002 to February 2008, and compare it with that in the release 1 (RL01) GRACE solutions. One of the major differences between RL04 and RL01 is the ocean tide model. In RL01, the alias is evident at high latitudes, near the Filchner-Ronne and Ross ice shelves in Antarctica, and in regions surrounding Greenland and Hudson Bay. RL04 shows significantly lower alias amplitudes in many of these locations, reflecting improvements in the ocean tide model. However, RL04 shows continued alias contamination between the Ronne and Larsen ice shelves, somewhat larger than in RL01, indicating a need for further tide model improvement in that region. For unknown reasons, the degree-2 zonal spherical harmonic coefficients (C20) of the RL04 solutions show significantly larger S2 aliasing errors than those from RL01.
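The aliasing mechanism itself is the classical one: a periodic signal sampled too sparsely reappears at a low beat frequency. A toy illustration under a hypothetical regular revisit interval (reproducing GRACE's actual 161-day S2 alias requires the real orbit geometry, which determines when each location is revisited):

```python
def alias_period(signal_period, sampling_interval):
    """Apparent (alias) period of a periodic signal sampled at a fixed
    interval: the frequency folds to within +/- f_s/2 of a multiple of f_s."""
    fs = 1.0 / sampling_interval
    f = 1.0 / signal_period
    f_alias = abs(f - round(f / fs) * fs)
    return float('inf') if f_alias == 0 else 1.0 / f_alias

# S2 tide (12.00 h = 0.5 d) sampled at a hypothetical daily revisit:
print(alias_period(0.5, 1.0))    # -> inf: exactly harmonic, folds to DC
print(alias_period(0.5, 0.99))   # slightly offset sampling -> long alias period
```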

13.
The p-median problem (PMP) is one of the most widely applied location problems in urban and regional planning. As an NP-hard problem, the PMP remains challenging to solve optimally, especially for large problem instances. A number of heuristics have been developed to obtain PMP solutions quickly. Among them, the Teitz and Bart (TB) algorithm has been found effective for finding high-quality solutions. In this article, we present a spatial-knowledge-enhanced Teitz and Bart (STB) heuristic method for solving PMPs. The STB heuristic prioritizes the candidate facility sites to be examined in the solution set based on the spatial distribution of demand and service provision. Tests on a range of PMPs demonstrate the effectiveness of the STB heuristic. This new algorithm can be incorporated into current commercial GIS packages to solve a wide range of location-allocation problems.
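The STB prioritization scheme is not detailed in the abstract; for reference, here is a compact sketch of the plain Teitz–Bart vertex-substitution heuristic it builds on: swap a candidate into the solution whenever that lowers the total (optionally demand-weighted) distance, and repeat until no swap helps:

```python
import numpy as np

def teitz_bart(dist, p, seed=0):
    """Plain Teitz-Bart for the p-median problem.
    dist: (n_demand, n_candidates) matrix of (optionally demand-weighted)
    distances from each demand point to each candidate facility site."""
    rng = np.random.default_rng(seed)
    n = dist.shape[1]
    sol = list(rng.choice(n, size=p, replace=False))
    cost = dist[:, sol].min(axis=1).sum()
    improved = True
    while improved:                       # repeat passes until no swap helps
        improved = False
        for cand in range(n):
            if cand in sol:
                continue
            for i in range(p):            # try substituting cand for sol[i]
                trial = sol[:i] + [cand] + sol[i + 1:]
                c = dist[:, trial].min(axis=1).sum()
                if c < cost - 1e-12:
                    sol, cost, improved = trial, c, True
    return sorted(sol), cost

pts = np.random.default_rng(1).random((200, 2))
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
print(teitz_bart(d, p=5))
```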

14.
15.
An artificial neural network (ANN) based chlorophyll-a algorithm was developed to estimate chlorophyll-a concentration using OCEANSAT-I Ocean Colour Monitor (OCM) satellite data. A multi-layer perceptron (MLP) type neural network was trained using simulated reflectances (~60,000 spectra) with known chlorophyll-a concentration, corresponding to the first five spectral bands of OCM. The correlation coefficient (r²) and RMSE for the log-transformed training data were found to be 0.99 and 0.07, respectively. The performance of the developed ANN-based algorithm was tested with the global SeaWiFS Bio-optical Algorithm Mini Workshop (SeaBAM) data (~919 spectra); 0.86 and 0.13 were observed as the r² and RMSE for the test data set. The algorithm was further validated with the in-situ bio-optical data collected in the northeastern Arabian Sea (~215 spectra); the r² and RMSE were 0.87 and 0.12 for this regional data set. Chlorophyll-a images were generated by applying the weight and bias matrices obtained during training to the normalized water-leaving radiances (nLw) obtained from the OCM data after atmospheric correction. The chlorophyll-a images generated using the ANN-based algorithm and the global Ocean Chlorophyll-4 (OC4) algorithm were compared. Chlorophyll-a estimated using the two algorithms showed good correlation in open-ocean regions; in coastal waters, however, the ANN algorithm estimated relatively lower concentrations than OC4.
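The exact MLP architecture is not given in the abstract. A minimal sketch of the general recipe — train a small MLP on band reflectances against log-transformed chlorophyll-a, then evaluate with r² and RMSE in log space — using synthetic stand-in data (the architecture, data and band relationship are all illustrative):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 0.06, size=(6000, 5))       # stand-in reflectances, 5 OCM bands
chl = 10**(0.5 + 30*X[:, 2] - 25*X[:, 4]         # synthetic band-ratio-like signal
           + rng.normal(0, 0.05, 6000))
y = np.log10(chl)                                 # train in log space, as in the abstract

mlp = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
mlp.fit(X[:5000], y[:5000])
pred = mlp.predict(X[5000:])
print(r2_score(y[5000:], pred),                   # r^2 on held-out data
      np.sqrt(mean_squared_error(y[5000:], pred)))  # RMSE in log space
```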

16.
In order to better understand the movement of an object with respect to a region, we propose a formal model of the evolving spatial relationships that transition between local topologies with respect to a trajectory and a region, and we develop a querying mechanism to analyze movement patterns. We summarize 12 types of local topologies built on trajectory-region intersections and derive their transition graph; we then capture and model evolving local topologies with two types of trajectory-region strings, a movement string and a stop-move string. The stop-move string additionally encodes stop information during a trajectory beyond what the movement string captures. Such a string-format expression of trajectory-region movement, although conceptually simple, carries rich information for effectively interpreting how trajectories move with respect to regions. We also design the corresponding finite state automata for the movement string and the stop-move string, which are used not only to recognize the language of trajectory-region strings, but also to deal effectively with trajectory-region pattern queries. When annotated with the time information of stops and intersections, a trajectory-region movement snapshot and its evolution during a time interval can be inferred, and even the relationships among trajectories with respect to the same region can be explored.
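The paper's 12 local topologies and its automata are not reproduced in the abstract. As a toy version of the idea, here is a movement string over a hypothetical three-symbol alphabet (Outside/Boundary/Inside of a disk region), with an "enter" pattern query expressed as a regular expression — regular languages being exactly what finite state automata recognize:

```python
import re
import numpy as np

def classify(p, center=(0.0, 0.0), r=1.0, eps=1e-6):
    """Hypothetical 3-symbol alphabet for a disk region:
    O = outside, B = on the boundary, I = inside."""
    d = np.hypot(p[0] - center[0], p[1] - center[1])
    return 'B' if abs(d - r) < eps else ('I' if d < r else 'O')

def movement_string(traj):
    syms = [classify(p) for p in traj]
    out = [syms[0]]
    for s in syms[1:]:
        if s != out[-1]:
            out.append(s)    # collapse repeats: only relation *changes* matter
    return ''.join(out)

traj = [(-2, 0), (-1.5, 0), (-0.5, 0), (0, 0), (0.5, 0)]
s = movement_string(traj)
print(s, bool(re.search(r'OB?I', s)))  # 'enter': outside -> (boundary) -> inside
```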

17.
If sites, cities, and landscapes are captured at different points in time using technology such as LiDAR, large collections of 3D point clouds result. Their efficient storage, processing, analysis, and presentation constitute a challenging task because of limited computation, memory, and time resources. In this work, we present an approach to detect changes in massive 3D point clouds based on an out-of-core spatial data structure that is designed to store data acquired at different points in time and to efficiently attribute 3D points with distance information. Based on this data structure, we present and evaluate different processing schemes optimized for performing the calculation on the CPU and GPU. In addition, we present a point-based rendering technique adapted for attributed 3D point clouds, to enable effective out-of-core real-time visualization of the computation results. Our approach enables conclusions to be drawn about temporal changes in large, highly accurate 3D geodata sets of a captured area at reasonable preprocessing and rendering times. We evaluate our approach with two data sets captured at different points in time for the urban area of a city, describe its characteristics, and report on applications.
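The out-of-core data structure and the GPU scheme are the paper's contribution and are not specified in the abstract. The core per-point operation — attributing each point with its distance to the other epoch — can be sketched in-core with a k-d tree (the threshold value is illustrative and data-dependent):

```python
import numpy as np
from scipy.spatial import cKDTree

def attribute_change(pts_t0, pts_t1, threshold=0.25):
    """Attach to every epoch-t1 point its distance to the nearest epoch-t0
    point; large distances flag candidate changes."""
    dist, _ = cKDTree(pts_t0).query(pts_t1, k=1)
    return dist, dist > threshold

rng = np.random.default_rng(0)
t0 = rng.random((10000, 3))
t1 = np.vstack([t0 + rng.normal(0, 0.005, t0.shape),   # mostly unchanged points
                rng.random((200, 3)) + [0, 0, 2.0]])   # new structure above
d, changed = attribute_change(t0, t1)
print(changed.sum(), "points flagged as changed")
```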

18.
The rapid development of urban retail companies brings new opportunities to the Chinese economy. Due to the spatiotemporal heterogeneity of different cities, selecting a business location in a new area has become a challenge. The application of multi-source geospatial data makes it possible to describe human activities and urban functional zones at a fine scale. We propose a knowledge transfer-based model named KTSR to support citywide business location selection at the land-parcel scale. This framework can optimize customer scores and uncover the patterns of business location selection for chain brands. First, we extract the features of each urban land parcel and study the similarities between them. Then, singular value decomposition is used to build a knowledge-transfer model between similar urban land parcels in different cities. The results show that: (1) compared with the actual scores, the estimation deviation of the proposed model decreased by more than 50%, and the Pearson correlation coefficient reached 0.84 or higher; (2) the decomposed features were good at quantifying and describing high-level commercial operation information, which has a strong relationship with urban functional structures. In general, our method can support selecting business locations and estimating sales volumes and user evaluations.
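The abstract leaves the transfer step at "singular value decomposition ... between different cities". One plausible minimal sketch of that idea: embed parcels of both cities in a shared latent space via an SVD of the source city's parcel-by-feature matrix, then transfer scores from the most similar source parcels. All names and the scheme itself are illustrative, not the paper's model:

```python
import numpy as np

def svd_score_transfer(F_src, scores_src, F_tgt, k=10, n_sim=5):
    """Embed parcels via the top-k right singular vectors of the source
    city's parcel-by-feature matrix, then estimate each target parcel's
    score as the mean over its n_sim most similar source parcels."""
    _, _, Vt = np.linalg.svd(F_src, full_matrices=False)
    Z_src = F_src @ Vt[:k].T
    Z_tgt = F_tgt @ Vt[:k].T
    Zs = Z_src / (np.linalg.norm(Z_src, axis=1, keepdims=True) + 1e-12)
    Zt = Z_tgt / (np.linalg.norm(Z_tgt, axis=1, keepdims=True) + 1e-12)
    sim = Zt @ Zs.T                        # cosine similarity, target x source
    top = np.argsort(sim, axis=1)[:, -n_sim:]
    return scores_src[top].mean(axis=1)

rng = np.random.default_rng(0)
F_a, F_b = rng.random((500, 40)), rng.random((300, 40))  # stand-in parcel features
scores_a = rng.random(500) * 5.0                          # e.g. customer ratings
print(svd_score_transfer(F_a, scores_a, F_b)[:5])
```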

19.
20.
Social media networks allow users to post what they are involved in, together with location information, in real time. It is therefore possible to collect large amounts of information related to local events from existing social networks. Mining this abundant information can provide users and organizations with the situational awareness needed to make responsive plans for ongoing events. Although a number of studies have been conducted to detect local events using social media data, the event content is often not efficiently summarized and/or the correlation between abnormal neighboring regions is not investigated. This article presents a spatial-temporal-semantic approach to local event detection using geo-social media data. Geographical regularities are first measured to extract spatio-temporal outliers, whose corresponding tweet content is automatically summarized using a topic modeling method. The correlation between outliers is subsequently examined by investigating their spatial adjacency and semantic similarity. A case study on the 2014 Toronto International Film Festival (TIFF) is conducted using Twitter data to evaluate our approach. It reveals that up to 87% of the detected events are correctly identified when compared with the official TIFF schedule. This work helps authorities keep track of urban dynamics and supports building smart cities by providing new ways of detecting what is happening in them.
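The pipeline's first stage — flagging spatio-temporal outliers against geographic regularities — can be sketched minimally as a per-cell z-score on gridded tweet counts. The paper's full approach adds topic-model summarization and spatial/semantic correlation of the flagged cells, omitted here:

```python
import numpy as np

def spatiotemporal_outliers(hist_counts, now_counts, z_thresh=3.0):
    """hist_counts: (n_periods, ny, nx) tweet counts per grid cell over past
    comparable time windows; now_counts: (ny, nx) counts for the current
    window. Cells whose current count exceeds the historical mean by more
    than z_thresh standard deviations are candidate local events."""
    mu = hist_counts.mean(axis=0)
    sd = hist_counts.std(axis=0) + 1e-9      # avoid division by zero
    z = (now_counts - mu) / sd
    return np.argwhere(z > z_thresh), z

rng = np.random.default_rng(0)
hist = rng.poisson(5.0, size=(30, 20, 20))   # stand-in history: 30 windows
now = rng.poisson(5.0, size=(20, 20))
now[12, 7] += 40                             # inject a burst (an "event")
cells, z = spatiotemporal_outliers(hist, now)
print(cells)  # the injected burst at (12, 7) is flagged; a few noise cells may appear
```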
