Similar Literature
20 similar records found.
1.
The continually increasing size of geospatial data sets poses a computational challenge for interactive visual analytics with conventional desktop-based visualization tools. In recent decades, improvements in parallel visualization using state-of-the-art computing techniques have significantly enhanced our capacity to analyse massive geospatial data sets. However, only a few strategies have been developed to maximize the utilization of parallel computing resources in support of interactive visualization. In particular, most existing parallel visualization frameworks lack an efficient visualization intensity prediction component. In this study, we propose a data-driven, view-dependent visualization intensity prediction method, which dynamically predicts the visualization intensity from the distribution patterns of spatio-temporal data. The predicted results are used to schedule the allocation of visualization tasks. We integrated this strategy with a parallel visualization system deployed in a compute unified device architecture (CUDA)-enabled graphics processing unit (GPU) cloud. To evaluate the flexibility of this strategy, we performed experiments using dust storm data sets produced by a regional climate model. The experiments showed that the proposed method yields stable and accurate predictions with acceptable computational overheads under different types of interactive visualization operations, and that intensity-based scheduling improves the overall visualization efficiency.
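A minimal sketch of the scheduling idea in Python: predicted intensity is proxied by the share of records that fall inside the requested view extent, and GPU workers are allocated in proportion to it. The function names, the 2-D histogram proxy and the proportional allocation rule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def predict_intensity(counts, view_bounds, grid_bounds):
    """Proxy the visualization intensity of a view by the fraction of
    spatio-temporal records inside it; `counts` is a 2-D density
    histogram over the full data domain."""
    x0, x1, y0, y1 = view_bounds
    gx0, gx1, gy0, gy1 = grid_bounds
    ny, nx = counts.shape
    i0 = max(int((y0 - gy0) / (gy1 - gy0) * ny), 0)
    i1 = int(np.ceil((y1 - gy0) / (gy1 - gy0) * ny))
    j0 = max(int((x0 - gx0) / (gx1 - gx0) * nx), 0)
    j1 = int(np.ceil((x1 - gx0) / (gx1 - gx0) * nx))
    return counts[i0:i1, j0:j1].sum() / counts.sum()

def schedule_gpus(intensity, n_gpus_total, min_gpus=1):
    """Allocate GPU workers in proportion to predicted intensity."""
    return max(min_gpus, round(intensity * n_gpus_total))

rng = np.random.default_rng(0)
pts = rng.normal([5.0, 5.0], 1.0, (10000, 2))        # dust-storm-like cloud
counts, _, _ = np.histogram2d(pts[:, 1], pts[:, 0], bins=64,
                              range=[[0, 10], [0, 10]])
w = predict_intensity(counts, (4, 6, 4, 6), (0, 10, 0, 10))
print(w, schedule_gpus(w, n_gpus_total=16))
```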

2.
Robust estimates of magnetotelluric and geomagnetic response functions are determined using the coherency and expected uniformity of the magnetic source field as quality criteria. The method is applied to data sets from three simultaneously recording sites. For data acquisition we used a new generation of geophysical equipment (S.P.A.M. MkIII), which incorporates novel concepts of parallel computing and networked digital data transmission. The data-processing results show that the amount of noise on the horizontal components of the magnetic field varies considerably in time, between sites, and across the frequency range. Removing such contaminated data beforehand is essential for most data-processing schemes, as the magnetic channels are usually assumed to be noise-free. The standard remote reference method aims to reduce bias in response function estimates; however, as our results clearly show, it does not necessarily improve their precision. With our method, on the other hand, we can filter out source-field irregularities, thereby providing suitable working conditions for the robust algorithm, and eventually obtain considerably improved results. Contrary to previous practice, we suggest rejecting as much data as feasible in order to concentrate on the remaining high-quality observations.
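The coherency-based pre-selection can be sketched compactly: segments whose local and remote horizontal magnetic channels are incoherent are rejected before any response-function estimation. This is a schematic of the quality criterion only (the threshold, segment layout and names are assumptions), not the S.P.A.M. MkIII processing chain.

```python
import numpy as np

def coherency(x, y):
    """Magnitude-squared coherency between two complex spectral segments."""
    sxy = np.mean(x * np.conj(y))
    return np.abs(sxy) ** 2 / (np.mean(np.abs(x) ** 2) * np.mean(np.abs(y) ** 2))

def keep_uniform_segments(local_h, remote_h, threshold=0.9):
    """Keep only segments where local and remote magnetic fields are
    coherent, i.e. the source field is plausibly uniform and the
    channels noise-free; the rest is rejected before robust estimation."""
    return [i for i, (x, y) in enumerate(zip(local_h, remote_h))
            if coherency(x, y) >= threshold]

rng = np.random.default_rng(0)
clean = [rng.standard_normal(64) + 1j * rng.standard_normal(64) for _ in range(5)]
local_h = clean
remote_h = [c + 0.05 * rng.standard_normal(64) for c in clean[:3]] + \
           [rng.standard_normal(64) + 1j * rng.standard_normal(64) for _ in range(2)]
print(keep_uniform_segments(local_h, remote_h))   # -> [0, 1, 2]
```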

3.
Measurement of dispersed vitrinite reflectance in organic sediments provides one of the few regional data sets for placing bounds on the thermal history of a sedimentary basin. Reflectance data are important when complementary information such as high-quality seismic data is unavailable to constrain subsidence history, and in locations where uplift is an important part of the basin history. The attributes that make vitrinite reflectance a useful data set are the relative ease of measurement and the availability of archived well cores and cuttings in state, provincial, and federal facilities. To fully utilize vitrinite data for estimating the temperature history of a basin, physically based methods are required to calibrate an equivalent reflectance computed from a modelled temperature history against measured data. The most common method for calculating a numerical vitrinite reflectance from a temperature history is EASY%Ro, which we show systematically underestimates measured data. We present a new calculated-reflectance model and an adjustment to EASY%Ro that place the correlation between measured and calculated vitrinite values on a physical footing and make it more useful for constraining thermal models. We then show that calibrating the thermal history to vitrinite on a constant-age surface (e.g., top Cretaceous), rather than in depth, removes the heating-rate component from the reflectance calculation and makes thermal history calibration easier to understand and more directly related to heat flow. Finally, we use bounds on the vitrinite-temperature relationship on a constant-age surface to show that significant uncertainty exists in the vitrinite data reported in most data sets.
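The kinetic core of an EASY%Ro-style calculation is compact enough to sketch: vitrinite maturation is modelled as parallel first-order reactions integrated over the temperature history, with Ro = exp(-1.6 + 3.7 F) after Sweeney and Burnham (1990). The uniform activation-energy weights below are placeholders for the published weight table, and the paper's proposed adjustment is not reproduced, so treat this as illustrative only.

```python
import numpy as np

def easy_ro_like(times_ma, temps_c, energies=None, weights=None):
    """EASY%Ro-style reflectance from a temperature history via parallel
    first-order reactions (after Sweeney & Burnham, 1990):
        Ro = exp(-1.6 + 3.7 * F),  F = weighted fraction converted.
    The energy grid and uniform weights are illustrative placeholders."""
    R = 1.987e-3                       # kcal mol^-1 K^-1
    A = 1.0e13 * 3.15576e13            # 1e13 s^-1, expressed per Ma
    if energies is None:
        energies = np.arange(34.0, 74.0, 2.0)          # kcal/mol
    if weights is None:
        weights = np.full(len(energies), 1.0 / len(energies))
    t = np.asarray(times_ma, dtype=float)
    T = np.asarray(temps_c, dtype=float) + 273.15      # kelvin
    k = A * np.exp(-np.asarray(energies)[:, None] / (R * T[None, :]))
    # time integral of k(T(t)) for each activation energy (trapezoid rule)
    I = np.sum(0.5 * (k[:, 1:] + k[:, :-1]) * np.diff(t)[None, :], axis=1)
    F = np.sum(weights * (1.0 - np.exp(-I)))
    return np.exp(-1.6 + 3.7 * F)

# burial heating from 20 to 120 degC over 100 Ma:
print(easy_ro_like(np.linspace(0.0, 100.0, 201), np.linspace(20.0, 120.0, 201)))
```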

4.
The availability of spatial data on an unprecedented scale, together with advances in analytical and visualization techniques, gives researchers the opportunity to study complex problems over large urban and regional areas. Nevertheless, few individual data sets provide both the spatial and the temporal observational frequency needed to truly facilitate detailed investigations. Some data are collected frequently over time but only at a few geographic locations (e.g., weather stations). Similarly, other data are collected with a high level of spatial resolution but not at regular or frequent time intervals (e.g., satellite data). The purpose of this article is to present an interpolation approach that leverages the relative temporal richness of one data set and the relative spatial richness of another to fill in the gaps. Because some interpolation techniques suit particular types of data better than others, we propose a space–time interpolation approach whereby two interpolation methods – one for the temporal and one for the spatial dimension – are used in tandem to increase accuracy.

We call our ensemble approach the space–time interpolation environment (STIE). The primary steps within this environment are a spatial interpolation processor, a temporal interpolation processor, and a calibration processor, which enforces phenomenon-related behavioral constraints. The specific interpolation techniques used within the STIE can be chosen on the basis of their suitability for the data and application at hand. In this article, we first describe the STIE conceptually, including the data input requirements, the output structure, the details of the primary steps, and the mechanism for coordinating the data within those steps. We then describe a case study focusing on urban land cover in Phoenix, Arizona, using our working implementation. Our empirical results show that our approach estimated urban land cover more accurately than any single interpolation technique.
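A schematic of the tandem idea under the simplest choices for each processor: linear interpolation in time at each station, inverse-distance weighting in space, and clamping as the calibration constraint. The STIE is technique-agnostic, so these specific choices and names are assumptions for illustration.

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Inverse-distance-weighted spatial interpolation."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-12) ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

def stie_like(station_xy, station_t, station_vals, grid_xy, t_query,
              vmin=None, vmax=None):
    """Tandem space-time interpolation sketch:
    1. temporal processor: linear interpolation of each station series;
    2. spatial processor: IDW of the time-interpolated station values;
    3. calibration processor: clamp to phenomenon-admissible bounds."""
    at_t = np.array([np.interp(t_query, station_t, sv) for sv in station_vals])
    est = idw(station_xy, at_t, grid_xy)
    return np.clip(est, vmin, vmax) if vmin is not None else est

xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])     # station locations
t = np.array([0.0, 1.0, 2.0])                           # observation times
vals = [np.array([1.0, 2.0, 3.0]), np.array([2.0, 3.0, 4.0]),
        np.array([0.0, 1.0, 2.0])]                      # one series per station
print(stie_like(xy, t, vals, np.array([[0.5, 0.5]]), t_query=1.5,
                vmin=0.0, vmax=10.0))                   # -> [2.5]
```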

5.

An increasing number of social media users disseminate their activities through geotagged posts. The massive volume of available geotagged posts enables the collection of users' footprints over time and offers effective opportunities for mobility prediction. Using geotagged posts for the spatio-temporal prediction of future locations, however, is challenging: previous studies either focus on next-place prediction or rely on dense data sources such as GPS data. This article introduces a novel method for predicting the future locations of individuals based on geotagged social media data. The method employs hierarchical density-based clustering with adaptive parameter selection to identify the regions frequently visited by a social media user. A multi-feature weighted Bayesian model is then developed to forecast users' spatio-temporal locations by combining multiple factors that affect human mobility patterns. Further, an updating strategy is designed to efficiently adjust the model over time to the dynamics of users' mobility patterns. On two real-life datasets, the proposed approach outperforms a state-of-the-art method in prediction accuracy by up to 5.34% and 3.30%, respectively. Tests show that prediction reliability is high for quality predictions but low in the identification of erroneous locations.
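A rough two-stage sketch, assuming scikit-learn 1.3 or later for its HDBSCAN implementation. The two toy features (overall visit frequency and hour-of-day affinity) and their weights stand in for the paper's multi-feature weighted Bayesian model, and the data are synthetic.

```python
import numpy as np
from sklearn.cluster import HDBSCAN   # scikit-learn >= 1.3

def frequent_regions(latlon, min_cluster_size=10):
    """Cluster a user's geotagged posts into frequently visited regions;
    label -1 marks noise points outside any region."""
    return HDBSCAN(min_cluster_size=min_cluster_size).fit_predict(latlon)

def predict_region(labels, hours, query_hour, w_freq=0.5, w_time=0.5):
    """Toy weighted score per region: visit frequency combined with
    hour-of-day affinity (visits within 2 h of the query hour)."""
    scores = {}
    for c in set(labels) - {-1}:
        mask = labels == c
        circ = np.abs(((hours[mask] - query_hour + 12) % 24) - 12)
        scores[c] = w_freq * mask.mean() + w_time * (circ < 2).mean()
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
home = rng.normal([40.00, -74.00], 0.001, (60, 2))   # evening posts
work = rng.normal([40.10, -74.10], 0.001, (40, 2))   # daytime posts
pts = np.vstack([home, work])
hours = np.r_[rng.integers(18, 24, 60), rng.integers(8, 18, 40)]
labels = frequent_regions(pts)
print(predict_region(labels, hours, query_hour=21))  # the "home" cluster id
```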

6.
A new fast algorithm for hyperspectral thermal infrared atmospheric radiative transfer modelling was designed based on radial basis function (RBF) neural networks. Simulation experiments were used to determine the neural network training samples and the input and output data, and multilayer neural networks were trained for typical bands to rapidly compute top-of-atmosphere radiance in the hyperspectral thermal infrared. Networks were trained for the 9 μm, 10 μm and 12 μm wavelengths. The results show that the algorithm not only achieves high computational accuracy, but also computes the brightness temperature at each wavelength over 100 times faster than the 4A model, while offering greater flexibility in band selection.
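As a sketch of the surrogate idea, SciPy's RBFInterpolator can play the role of the RBF network: it is fitted to (atmospheric state, top-of-atmosphere brightness) pairs that would, in the paper, come from the reference 4A model, and is then evaluated at new states far faster than the full radiative-transfer computation. The synthetic training data and the linear "truth" below are stand-ins; this is an interpolation analogue, not the paper's trained neural network.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
# Hypothetical training set: rows are compressed atmospheric state vectors
# (layer temperatures, water vapour, ...); targets are top-of-atmosphere
# brightness temperatures (K) for one channel, precomputed offline.
X_train = rng.normal(size=(500, 8))
truth = rng.normal(size=8)
y_train = 280.0 + X_train @ truth + rng.normal(scale=0.1, size=500)

surrogate = RBFInterpolator(X_train, y_train, kernel="thin_plate_spline")
X_new = rng.normal(size=(3, 8))
print(surrogate(X_new))            # fast forward evaluations
print(280.0 + X_new @ truth)       # reference values for comparison
```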

7.
Polygon intersection is an important spatial data-handling process on which many spatial operations are based. However, this process is computationally intensive because it involves the detection and calculation of polygon intersections. We addressed this computational issue from two perspectives. First, we improved a method called boundary algebra filling to efficiently rasterize the input polygons; polygon intersections are subsequently detected in the cells of the raster. Owing to the use of a raster data structure, this method offers reduced task dependence and improved performance. On this basis, we developed parallel strategies for the different procedures in terms of workload decomposition and task scheduling, so that the workload across different parallel processes can be balanced. The results suggest that our method effectively accelerates polygon intersection: for a dataset with 1,409,020 groups of overlapping polygons, it reduced the total execution time from 987.82 to 53.66 s, an optimal speedup ratio of 18.41, while consistently balancing the workloads. We also tested the effect of task scheduling on parallel efficiency, showing that it is effective in reducing the total runtime, especially for a lower number of processes. Finally, the good scalability of the method is demonstrated.
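A simplified sketch of the filter-then-intersect pattern in Python: coarse grid binning (standing in for the boundary algebra filling rasterization) limits exact intersection tests to polygons that share at least one cell. The cell size and names are illustrative; the parallel decomposition is omitted.

```python
from itertools import combinations
from shapely.geometry import box

def candidate_pairs(polygons, cell=1.0):
    """Bin polygons by the grid cells their bounding boxes touch; only
    polygons sharing a cell become candidate intersection pairs."""
    grid = {}
    for i, p in enumerate(polygons):
        x0, y0, x1, y1 = p.bounds
        for gx in range(int(x0 // cell), int(x1 // cell) + 1):
            for gy in range(int(y0 // cell), int(y1 // cell) + 1):
                grid.setdefault((gx, gy), []).append(i)
    pairs = set()
    for ids in grid.values():
        pairs.update(combinations(sorted(ids), 2))
    return pairs

def intersections(polygons, cell=1.0):
    """Exact intersection geometry, computed only for filtered candidates."""
    return {(i, j): polygons[i].intersection(polygons[j])
            for i, j in candidate_pairs(polygons, cell)
            if polygons[i].intersects(polygons[j])}

polys = [box(0, 0, 2, 2), box(1, 1, 3, 3), box(10, 10, 11, 11)]
print(intersections(polys, cell=2.0))   # only the first two polygons overlap
```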

8.
Conventionally, a raster operation that needs to scan the entire image employs only one scanning order (a single scanning order, SSO), usually running from upper left to lower right, row by row. We explore the idea of alternately applying multiple scanning orders (MSO) to raster operations that are based on local direction, using the flow accumulation (FA) calculation as an example. We constructed several FA methods based on MSO and compared them with widely used methods. Our comparison includes experiments on digital elevation models (DEMs) of different landforms and of different resolutions. For each DEM, we calculated both single-direction FA (SD-FA) and multi-direction FA (MD-FA). On the theoretical side, we derived the time complexity of an MSO sequential algorithm (MSOsq) for FA based on empirical equations in hydrology. The findings from the experiments include the following: (1) an MSO-based method is generally superior to its counterpart SSO-based method. (2) The advantage of MSO is more significant in the SD-FA calculation than in the MD-FA calculation. (3) For SD-FA, the best of the compared methods is the one combining MSOsq with the depth-first algorithm; it surpasses the commonly recommended dependency-graph algorithm in both speed and memory use. (4) The differences between the compared methods are not sensitive to specific landforms. (5) For SD-FA, the advantage of MSO-based methods is more pronounced at higher DEM resolutions, but this does not apply to MD-FA.
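The MSO idea can be illustrated for single-direction flow accumulation: sweep a D8 flow-direction grid in alternating scan orders until the values stop changing. Each sweep propagates counts along every flow path aligned with that order, so alternating orders typically converge in a few passes. This sketch shows the principle only, not the paper's MSOsq algorithm or its complexity bound.

```python
import numpy as np

# D8 codes 0-7 -> (row, col) offset of the single downslope neighbour
OFFS = [(0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1)]

def fa_mso(fdir):
    """SD flow accumulation by repeated sweeps in alternating scan orders."""
    ny, nx = fdir.shape
    acc = np.ones((ny, nx))
    ups = {(i, j): [] for i in range(ny) for j in range(nx)}  # upstream cells
    for i in range(ny):
        for j in range(nx):
            di, dj = OFFS[fdir[i, j]]
            t = (i + di, j + dj)
            if 0 <= t[0] < ny and 0 <= t[1] < nx:
                ups[t].append((i, j))
    orders = [lambda: ((i, j) for i in range(ny) for j in range(nx)),
              lambda: ((i, j) for i in range(ny - 1, -1, -1)
                              for j in range(nx - 1, -1, -1))]
    changed, sweep = True, 0
    while changed:
        changed = False
        for cell in orders[sweep % 2]():
            v = 1 + sum(acc[u] for u in ups[cell])
            if v != acc[cell]:
                acc[cell], changed = v, True
        sweep += 1
    return acc

fdir = np.zeros((1, 5), dtype=int)   # every cell drains east
print(fa_mso(fdir))                  # -> [[1. 2. 3. 4. 5.]]
```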

9.
Yin, X., Liu, Q., Pan, Y., Huang, X., Wu, J. & Wang, X. (2021). Natural Resources Research, 30(2), 1795–1815.

Rockburst is a common dynamic geological hazard that severely restricts the development and utilization of underground space and resources. As excavation and mining depths increase, rockbursts tend to occur more frequently; hence, it is necessary to study rockburst prediction. Because of the nonlinear relationship between rockburst and its influencing factors, artificial intelligence was introduced. However, the collected data were typically imbalanced, and single algorithms trained on such data have low recognition rates for minority classes. To handle this problem, this paper employed the stacking technique of ensemble learning to establish rockburst prediction models. In total, 246 sets of data were collected. In the preprocessing stage, three data mining techniques (principal component analysis, local outlier factor, and the expectation maximization algorithm) were used for dimension reduction, outlier detection, and outlier substitution, respectively. The pre-processed data were then split into a training set (75%) and a test set (25%) with stratified sampling. Based on four classical single intelligent algorithms, namely k-nearest neighbors (KNN), support vector machine (SVM), deep neural network (DNN) and recurrent neural network (RNN), four ensemble models (KNN–RNN, SVM–RNN, DNN–RNN and KNN–SVM–DNN–RNN) were built by stacking. The prediction performance of the eight models was evaluated, and the differences between single models and ensemble models were analyzed. Additionally, a sensitivity analysis was conducted, revealing the importance of the input variables to the models. Finally, the impact of class imbalance on prediction accuracy and model fit was quantitatively discussed. The results show that the stacking technique of ensemble learning provides a new and promising way to predict rockbursts, with particular advantages when using imbalanced data.
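A compact stacking sketch with scikit-learn: KNN, SVM and a feed-forward network as base learners combined through a logistic-regression meta-learner with 5-fold stacking. The MLP stands in for the paper's DNN/RNN learners, and make_classification generates imbalanced stand-in data in place of the 246 rockburst records.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Imbalanced four-class stand-in for the rockburst intensity grades.
X, y = make_classification(n_samples=246, n_classes=4, n_informative=6,
                           weights=[0.5, 0.25, 0.15, 0.1], random_state=0)

stack = StackingClassifier(
    estimators=[
        ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
        ("mlp", make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000))),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5)
stack.fit(X, y)
print(stack.score(X, y))   # training accuracy of the stacked ensemble
```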


10.
We present a complete ray theory for the calculation of surface-wave observables from anisotropic phase-velocity maps. Starting with the surface-wave dispersion relation in an anisotropic earth model, we derive practical dynamical ray-tracing equations. These equations allow the calculation of the observables phase, arrival angle and amplitude in a ray-theoretical framework. Using perturbation theory, we also obtain approximate expressions for these observables. We assess the accuracy of the first-order approximations by using both theories to make predictions on a sample anisotropic phase-velocity map; a comparison of the two methods illustrates the size and type of the errors introduced by perturbation theory. Perturbation-theory phase and arrival-angle predictions agree well with the exact calculation, but amplitude predictions are poor. Many previous studies have modelled surface-wave propagation using only isotropic structure, not allowing for anisotropy. We present hypothetical examples to simulate isotropic modelling of surface waves that pass through anisotropic material. Synthetic data sets of phase and arrival angle are produced by exact ray tracing on anisotropic phase-velocity maps. The isotropic models obtained by inverting the synthetic anisotropic phase data sets produce deceptively high variance reductions, because the effects of anisotropy are mapped into short-wavelength isotropic structure. Inversion of the synthetic arrival-angle data sets for isotropic models results in poor variance reductions and poor recovery of the isotropic part of the anisotropic input map. Therefore, successful anisotropic phase-velocity inversions of real data require the inclusion of both phase and arrival-angle measurements.
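The anisotropic phase-velocity maps involved here are commonly parameterized with the weak-anisotropy azimuthal expansion of Smith and Dahlen (1973); the paper's exact dispersion relation may differ, but this form shows where the anisotropic terms that isotropic inversions fold into short-wavelength structure enter:

```latex
% theta = local propagation azimuth of the surface wave
c(\omega,\theta) \simeq c_0(\omega)
  + A_1(\omega)\cos 2\theta + A_2(\omega)\sin 2\theta
  + A_3(\omega)\cos 4\theta + A_4(\omega)\sin 4\theta
```

Setting the A terms to zero recovers the purely isotropic modelling that the synthetic tests emulate.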

11.
In spatial data sets, gaps or overlaps among features are frequently found in spatial tessellations owing to non-abutting edges between adjacent features. These non-abutting edges in loose tessellations are also called inconsistent boundaries or slivers; polygons containing at least one inconsistent boundary are called inconsistent polygons or sliver polygons. Existing algorithms for resolving topological inconsistencies in sliver polygons suffer from one or more of three major issues: the determination of tolerances, excessive CPU processing time for large data sets, and loss of vertex history. In this article, we introduce a new algorithm that mitigates all three issues. Our algorithm efficiently searches a given spatial data set for features with inconsistent polygons and logically partitions them among adjacent features. It employs constrained Delaunay triangulation to generate labelled triangles, from which inconsistent polygons with gaps and overlaps are identified using label counts. These inconsistent polygons are then partitioned using the straight skeleton method, and each partitioned gap or overlap is distributed among the adjacent features to improve the topological consistency of the data set. We verified our algorithm experimentally using a real land cadastre data set. The comparison results show that the proposed algorithm is four times faster than the existing algorithm on data sets with 200,000 edges.
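Sliver gaps and overlaps can be located with plain set operations, sketched below with shapely. This stands in for the paper's labelled constrained Delaunay triangulation (the label-count detection and straight-skeleton partitioning steps are not reproduced); note that the buffer-based closing reintroduces exactly the kind of tolerance the paper's method avoids.

```python
from shapely.geometry import box
from shapely.ops import unary_union

def gaps_and_overlaps(parcels, eps=0.01):
    """Detect sliver gaps and overlaps in a loose tessellation.
    Gaps narrower than ~2*eps are recovered by morphological closing
    (buffer out, then in); overlaps are pairwise intersections with
    nonzero area."""
    union = unary_union(parcels)
    gaps = union.buffer(eps).buffer(-eps).difference(union)
    overlaps = [((i, j), parcels[i].intersection(parcels[j]))
                for i in range(len(parcels))
                for j in range(i + 1, len(parcels))
                if parcels[i].intersection(parcels[j]).area > 0]
    return gaps, overlaps

# Two parcels separated by a 0.001-wide sliver, plus one overlapping parcel.
parcels = [box(0, 0, 1, 1), box(1.001, 0, 2, 1), box(1.8, 0, 3, 1)]
gaps, overlaps = gaps_and_overlaps(parcels)
print(round(gaps.area, 4), [ij for ij, _ in overlaps])  # ~0.001, [(1, 2)]
```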

12.
The analysis of interaction between movement trajectories is of interest in various domains wherever the movement of multiple objects is concerned. Interaction often includes a delayed response, which makes it difficult to detect with current methods that compare movement at fixed time intervals. We propose analyses and visualizations, on local and global scales, of delayed movement responses, where an action is followed by a reaction over time, for trajectories recorded simultaneously. We developed a novel approach to compute the global delay in subquadratic time using a fast Fourier transform (FFT). Central to our local analysis of delays is the computation of a matching between the trajectories in a so-called delay space, which encodes the similarities between all pairs of points of the trajectories. In the visualization, the edges of the matching are bundled into patches, such that the shape and color of a patch help to encode changes in an interaction pattern. To evaluate our approach experimentally, we implemented it as a prototype visual analytics tool and applied the tool to three two-dimensional data sets. For this we used various measures to compute the delay space, including the directional distance, a new similarity measure that captures more complex interactions by combining directional and spatial characteristics. We compare matchings produced by various methods of computing similarity between trajectories, and we compare various procedures for computing the matching in the delay space, specifically the Fréchet distance, dynamic time warping (DTW), and edit distance (ED). Finally, we demonstrate how to validate the consistency of pairwise matchings by computing matchings between more than two trajectories.
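The global-delay computation is a textbook case for the FFT: cross-correlating the coordinate signals of the two trajectories over all time shifts costs O(n log n) rather than O(n^2). The sketch below assumes equal, regular sampling and a single dominant delay; function and variable names are illustrative.

```python
import numpy as np

def global_delay(a, b):
    """Estimate one global delay between two simultaneously recorded
    trajectories (arrays of shape (n, dims)) by FFT cross-correlation
    of each mean-removed coordinate signal."""
    n = len(a)
    m = 2 * n                         # zero-padding avoids wrap-around
    score = np.zeros(m)
    for dim in range(a.shape[1]):
        x = a[:, dim] - a[:, dim].mean()
        y = b[:, dim] - b[:, dim].mean()
        fx, fy = np.fft.rfft(x, m), np.fft.rfft(y, m)
        score += np.fft.irfft(np.conj(fx) * fy, m)
    lags = np.concatenate([np.arange(0, n), np.arange(-n, 0)])
    return lags[np.argmax(score)]     # >0: b reacts after a; <0: a after b

rng = np.random.default_rng(1)
a = rng.standard_normal((500, 2)).cumsum(axis=0)   # random-walk trajectory
b = np.roll(a, 7, axis=0)                          # b follows a by 7 steps
print(global_delay(a, b))                          # -> 7
```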

13.
To understand the residential clustering of contemporary immigrants and other ethnic minorities in urban areas, it is important first to identify where they are clustered. In recent years, increasing attention has been given to local statistics as a tool for locating racial/ethnic residential clusters. However, since many existing local statistics were developed primarily for epidemiological studies, where clustering is associated with relatively rare events, their application in studies of residential segregation may not always yield satisfactory results. This article proposes an optimisation clustering method for delineating the boundaries of ethnic residential clusters. The proposed approach uses a modified greedy algorithm to find the most likely extent of clusters and employs total within-group absolute deviations as the clustering criterion. To demonstrate the effectiveness of the method, we applied it to a set of synthetic landscapes and to two empirical data sets in Auckland, New Zealand. The results show that the proposed method detects ethnic residential clusters effectively and has potential for use in other disciplines, as it can detect large, arbitrarily shaped clusters.
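A modified-greedy sketch of the cluster-growing step: starting from a seed areal unit, the cluster repeatedly absorbs the neighbouring unit that keeps the total within-group absolute deviation (around the medians of the inside and outside groups) lowest, stopping when no absorption improves the criterion. This illustrates the criterion and the greedy loop only, not the published algorithm.

```python
import numpy as np

def twad(values, inside):
    """Total within-group absolute deviation for an inside/outside split."""
    inn = np.array([values[i] for i in inside])
    out = np.array([values[i] for i in range(len(values)) if i not in inside])
    total = np.abs(inn - np.median(inn)).sum()
    if out.size:
        total += np.abs(out - np.median(out)).sum()
    return total

def grow_cluster(values, adjacency, seed):
    """Greedy delineation of one spatially contiguous cluster."""
    inside = {seed}
    current = twad(values, inside)
    while True:
        frontier = {n for i in inside for n in adjacency[i]} - inside
        if not frontier:
            return inside
        best = min(frontier, key=lambda n: twad(values, inside | {n}))
        if twad(values, inside | {best}) >= current:
            return inside
        inside.add(best)
        current = twad(values, inside)

# Six tracts on a line; tracts 2 and 3 carry high minority proportions.
vals = [1, 1, 9, 9, 1, 1]
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(grow_cluster(vals, adj, seed=2))   # -> {2, 3}
```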

14.
Contemporary variants of the lichenometric dating technique depend upon statistical correlations between surface age and maximum lichen sizes rather than an understanding of lichen biology. To date three terminal moraines of an Alaskan glacier, we used a new lichenometric technique in which surfaces are dated by comparing lichen population distributions with the predictions of ecological demography models that have explicit rules for the biological processes governing lichen populations: colonization, growth, and survival. These rules were inferred from size–frequency distributions of lichens on calibration surfaces, but they could equally be taken directly from biological studies. Working with two lichen taxa, we used multinomial-based likelihood functions to compare model predictions with measured lichen populations, using only the thalli in the largest 25% of the size distribution. Joint likelihoods combining the results of both species gave estimated moraine ages of AD 1938, 1917, and 1816. Ages predicted by Rhizocarpon alone were older than those predicted by P. pubescens. The predicted ages are geologically plausible and reveal glacier terminus retreat after a Little Ice Age maximum advance around AD 1816, with accelerated retreat starting in the early to mid twentieth century. Importantly, our technique permits calculation of prediction and model uncertainty. We attribute the large confidence intervals for some dates to the use of the biologically variable Rhizocarpon subgenus, small sample sizes, and high inferred lichen mortality, and we suggest the need for improved demographic models. A primary advantage of our technique is that a process-based approach to lichenometry allows the direct incorporation of ongoing advances in lichen biology.
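The model-comparison step reduces to a multinomial likelihood: given demographic-model bin probabilities for each candidate surface age, score the observed size-frequency counts of the largest thalli against them. The counts, bins and candidate probabilities below are invented for illustration; a joint likelihood over two taxa is simply the sum of the per-taxon log-likelihoods.

```python
import numpy as np
from scipy.stats import multinomial

def age_logliks(observed_counts, probs_by_age):
    """Multinomial log-likelihood of binned lichen sizes (largest 25% of
    thalli) under model-predicted bin probabilities per candidate age."""
    n = observed_counts.sum()
    return {age: multinomial.logpmf(observed_counts, n, p)
            for age, p in probs_by_age.items()}

obs = np.array([12, 30, 41, 17])                 # thalli per size bin
cand = {1938: np.array([0.10, 0.32, 0.40, 0.18]),
        1917: np.array([0.05, 0.22, 0.45, 0.28])}
ll = age_logliks(obs, cand)
print(ll, max(ll, key=ll.get))                   # most likely age: 1938
```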

15.

Crime often clusters in space and time. Near-repeat patterns improve our understanding of crime communicability and its space–time interactions. Near-repeat analysis requires extensive computing resources to assess the statistical significance of space–time interactions: a computationally intensive Monte Carlo simulation-based approach is used to evaluate the significance of the space–time patterns underlying near-repeat events. Currently available software for identifying near-repeat patterns does not scale to large crime datasets. In this paper, we show how parallel spatial programming can help to leverage spatio-temporal simulation-based analysis on large datasets. A parallel near-repeat calculator was developed, and a set of experiments was conducted to compare the new software with an existing implementation, assess the performance gain due to parallel computation, test the software's scalability to large crime datasets, and assess its utility for real-world crime data analysis. Our experimental results suggest that efficiently designed parallel algorithms that leverage high-performance computing, together with performance optimization techniques, can yield software that scales to large datasets and provides solutions for computationally intensive statistical simulation-based approaches in crime analysis.
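The statistical core is a Knox-style count of event pairs that are close in both space and time, with significance from Monte Carlo permutations of the event times; since each permutation is independent, the simulation parallelizes trivially. A minimal multiprocessing sketch with synthetic events and illustrative thresholds, not the paper's near-repeat calculator:

```python
import numpy as np
from multiprocessing import Pool

def knox_stat(args):
    """Count event pairs within both the spatial and temporal bands."""
    xy, t, ds, dt = args
    d_space = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
    d_time = np.abs(t[:, None] - t[None, :])
    return int(np.triu((d_space <= ds) & (d_time <= dt), 1).sum())

def one_simulation(args):
    """One Monte Carlo replicate: permute event times, recount pairs."""
    xy, t, ds, dt, seed = args
    rng = np.random.default_rng(seed)
    return knox_stat((xy, rng.permutation(t), ds, dt))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xy = rng.uniform(0, 10, (300, 2))       # event locations (km)
    t = rng.uniform(0, 365, 300)            # event times (days)
    ds, dt = 0.5, 7.0                       # near-repeat bands
    observed = knox_stat((xy, t, ds, dt))
    with Pool() as pool:                    # permutations run in parallel
        sims = pool.map(one_simulation,
                        [(xy, t, ds, dt, s) for s in range(999)])
    p = (1 + sum(s >= observed for s in sims)) / (1 + len(sims))
    print(observed, p)                      # pair count and pseudo p-value
```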

16.
We have developed a new geodetic inversion method for the space–time distribution of fault slip velocity with time-varying smoothing regularization, in order to reconstruct accurate time histories of aseismic fault slip transients. We introduce a temporal smoothing regularization on slip and slip velocity through a Bayesian state space approach in which the strength of regularization (the temporal smoothness of slip velocity) is controlled by a hyperparameter. The time-varying smoothing regularization is realized by treating the hyperparameter as a time-dependent stochastic variable and adopting a hierarchical Bayesian state space model, in which a prior distribution on the hyperparameter is introduced in addition to the conventional Bayesian state space model. We tested this inversion method on two synthetic data sets generated from simulated aseismic slip transients. The results show that our method reproduces both rapid changes of slip velocity and steady-state velocity without significant oversmoothing or undersmoothing, which is hard to achieve with the conventional Bayesian approach and time-independent smoothing regularization. Application of the method to the transient deformation caused in 2002 by a silent earthquake off the Boso peninsula, Japan, shows similar advantages over the conventional approach.
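Schematically, the hierarchy is a linear-Gaussian state space augmented with a random-walk model for the smoothing hyperparameter; the specific matrices and priors are defined in the paper, so this is only the generic form:

```latex
\begin{align*}
\mathbf{y}_t &= H_t\,\mathbf{x}_t + \mathbf{e}_t, & \mathbf{e}_t &\sim \mathcal{N}(\mathbf{0},\, R_t)\\
\mathbf{x}_t &= F\,\mathbf{x}_{t-1} + \mathbf{w}_t, & \mathbf{w}_t &\sim \mathcal{N}(\mathbf{0},\, \alpha_t^{2} Q)\\
\log\alpha_t &= \log\alpha_{t-1} + \eta_t, & \eta_t &\sim \mathcal{N}(0,\, \tau^{2})
\end{align*}
```

Here x_t collects slip and slip velocity, y_t the geodetic observations, and the time-varying alpha_t controls how strongly slip velocity is smoothed at each epoch; a fixed alpha recovers the conventional approach.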

17.
Hybrid terrain models combine large regular data sets with high-resolution irregular meshes (triangulated irregular networks, TINs) for topographically and morphologically complex terrain features such as man-made microstructures or cliffs. In this paper, a new method to generate and visualize this kind of 3D hybrid terrain model is presented. The method can integrate geographic data sets from multiple sources without a remeshing process to combine the heterogeneous data of the different models. At the same time, the original data sets are preserved without modification, so TIN meshes can be easily edited and replaced, among other benefits. Specifically, our approach uses the external edges of convexified TINs as the fundamental primitive for tessellating the space between the two types of meshes. Our proposal is eminently parallel, requires only a minimal preprocessing phase, and minimizes storage requirements compared with previous proposals.

18.
We have developed a new array method combining conventional migration with a slowness–backazimuth deviation weighting scheme. All seismic traces are shifted according to the theoretical traveltime of the scattered wave from specific gridpoints in a 3-D volume. Observed slowness and backazimuth are calculated for each raypath and compared with theoretical values in order to estimate slowness and backazimuth deviations. The stacked energy calculated by conventional migration is then weighted by these deviations to suppress any arriving energy whose slowness and backazimuth are inconsistent with the expected theoretical values. The new method was applied to two P-wave data sets comprising (1) underside reflections at the 410- and 660-km mantle discontinuities and (2) D″ reflections, as well as to their corresponding synthetic data sets. The results show that the weighting scheme dramatically increases the resolution of the migrated images and enables us to obtain well-constrained, focused images, making upper-mantle discontinuities and D″ reflections more distinct by reducing the surrounding energy.
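The weighting idea can be sketched in a few lines: each trace's contribution to the stack is down-weighted according to how far its observed slowness and backazimuth deviate from the theoretical values for the target gridpoint. The Gaussian penalty and its widths below are assumptions for illustration; the abstract does not specify the functional form.

```python
import numpy as np

def weighted_stack(traces, shifts, slo_dev, baz_dev, sig_slo=0.5, sig_baz=5.0):
    """Shift-and-stack with slowness-backazimuth deviation weighting.
    traces: (n_traces, n_samples); shifts: integer moveout corrections;
    slo_dev / baz_dev: observed-minus-theoretical deviations per trace."""
    n, m = traces.shape
    idx = (np.arange(m)[None, :] - shifts[:, None]) % m   # apply time shifts
    aligned = traces[np.arange(n)[:, None], idx]
    w = np.exp(-0.5 * (slo_dev / sig_slo) ** 2
               - 0.5 * (baz_dev / sig_baz) ** 2)          # per-trace weights
    stack = (w[:, None] * aligned).sum(axis=0) / w.sum()
    return stack ** 2                                     # stacked energy

rng = np.random.default_rng(0)
traces = rng.standard_normal((24, 400))
shifts = rng.integers(0, 50, 24)
slo_dev = rng.normal(0.0, 0.4, 24)    # s/deg, say
baz_dev = rng.normal(0.0, 4.0, 24)    # degrees
print(weighted_stack(traces, shifts, slo_dev, baz_dev).shape)   # (400,)
```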

19.
Digital data on the position and characteristics of river networks and catchments are important for the analysis of pressures and impacts on water resources. GIS tools allow the combined analysis of digital elevation data and environmental parameters to derive this kind of information. This article presents a new approach that uses medium-resolution digital elevation data (250-m grid cell size) together with information on climate, vegetation cover, terrain morphology, soils, and lithology to derive river networks and catchments over extended areas.

In general, methods for extracting channel networks at small scale use a constant threshold for the critical contributing area, independent of widely varying landscape conditions. As a consequence, the resulting drainage network does not reflect the natural variability in drainage density. To overcome this limitation, a classification of the landscape is proposed. The various data available are analysed in an integrated approach to characterise the terrain with respect to its ability to develop lower or higher drainage densities, resulting in five landscape types. For each landscape type, the slope–area relationship is derived and the critical contributing area is determined. In the subsequent channel extraction, a dedicated critical contributing area threshold is used for each landscape type.

The described methodology has been developed and tested for the territory of Italy. The results have been validated by comparing the derived data with river and catchment data sets from other sources and at varying scales. Good agreement could be demonstrated in terms of both river superimposition and drainage density.
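The landscape-dependent extraction rule itself is one comparison per cell, sketched below: every cell whose contributing area exceeds the critical threshold of its landscape type becomes a channel cell. The five thresholds and the synthetic rasters are illustrative, not the calibrated Italian values.

```python
import numpy as np

def extract_channels(flow_acc, landscape, crit_area):
    """Channel cells exceed the critical contributing area of their
    landscape type, replacing the single global threshold of
    conventional channel extraction."""
    lut = np.array([crit_area[c] for c in range(len(crit_area))])
    return flow_acc >= lut[landscape]

rng = np.random.default_rng(0)
landscape = rng.integers(0, 5, (100, 100))       # five landscape types
flow_acc = rng.exponential(150.0, (100, 100))    # contributing area (cells)
crit_area = {0: 30.0, 1: 60.0, 2: 120.0, 3: 250.0, 4: 500.0}
channels = extract_channels(flow_acc, landscape, crit_area)
print(channels.mean())    # fraction of cells mapped as channel
```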

20.
Most multiple-flow-direction (MFD) algorithms use a flow-partition coefficient (exponent) to determine the fractions draining to all downslope neighbours. Commonly used MFD algorithms often employ a fixed exponent over an entire watershed, but this fixed-coefficient strategy cannot effectively model the impact of local terrain conditions on the dispersion of local flow. This paper addresses the problem based on the idea that the dispersion of local flow varies over space owing to the spatial variation of local terrain conditions, so the flow-partition exponent of an MFD algorithm should also vary over space. We present an adaptive approach for determining the flow-partition exponent from the local topographic attribute that controls local flow partitioning. In our approach, the influence of local terrain on flow partitioning is modelled by a flow-partition function based on the local maximum downslope gradient (we refer to this approach as MFD based on the maximum downslope gradient, MFD-md for short). With this new approach, steep terrain, which induces convergent flow, is modelled using a large flow-partition exponent; similarly, gentle terrain is modelled using a small exponent. MFD-md is quantitatively evaluated using four types of mathematical surfaces and their theoretical 'true' values of specific catchment area (SCA). The root mean square error (RMSE) shows that the error of the SCA computed by MFD-md is lower than that of the SCA computed by the widely used single-flow-direction (SFD) and MFD algorithms. Application of the new approach to a real DEM of a watershed in Northeast China shows that the flow accumulation computed by MFD-md adapts better to terrain conditions, based on visual judgement.
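The adaptive partitioning can be sketched directly: the flow-partition exponent grows with the maximum downslope gradient, so a steep cell routes flow convergently while a gentle cell disperses it. The linear ramp and the exponent bounds below are illustrative stand-ins for the paper's flow-partition function.

```python
import numpy as np

def mfd_md_fractions(slopes_down, e_max, p_low=1.1, p_high=10.0):
    """Fractions of a cell's flow routed to each downslope neighbour.
    slopes_down: gradients toward the downslope neighbours;
    e_max: the cell's maximum downslope gradient, which sets the
    flow-partition exponent f(e) (assumed linear ramp, clipped at 1)."""
    f = p_low + (p_high - p_low) * min(e_max, 1.0)
    w = np.maximum(slopes_down, 0.0) ** f
    return w / w.sum() if w.sum() > 0 else w

# Gentle terrain: flow spreads; steep terrain: flow converges on the
# steepest neighbour.
print(mfd_md_fractions(np.array([0.05, 0.04, 0.02]), e_max=0.05))
print(mfd_md_fractions(np.array([0.90, 0.30, 0.10]), e_max=0.90))
```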
