Similar Articles
20 similar articles found.
1.
ABSTRACT

The aim of site planning based on multiple viewshed analysis is to select the minimum number of viewpoints that maximize visual coverage over a given terrain. However, increasingly high-resolution terrain data mean that the number of terrain points grows rapidly, leading to rapidly increasing computational requirements for multiple viewshed site planning. In this article, we propose a fast Candidate Viewpoints Filtering (CVF) algorithm for multiple viewshed site planning to lay a foundation for viewpoint optimization selection. First, terrain feature points are selected as candidate viewpoints. These candidate viewpoints are then clustered, and those belonging to each cluster are sorted according to the index of viewshed contribution (IVC). Finally, candidate viewpoints with relatively low viewshed contribution are removed gradually by the CVF algorithm, so that viewpoints with high viewshed contribution are preserved and the number of preserved viewpoints can be controlled by the number of clusters. To evaluate the effectiveness of the CVF algorithm, we compare it with the Region Partitioning for Filtering (RPF) and Simulated Annealing (SA) algorithms. Experimental results show that the CVF algorithm offers a substantial improvement in both computational efficiency and total viewshed coverage rate.
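A minimal Python sketch of the kind of contribution-based filtering the abstract describes: candidates are grouped by cluster and the members with the largest marginal viewshed contribution are kept. The clustering step, the IVC definition and the `viewshed_of` helper are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of candidate-viewpoint filtering by viewshed contribution.
# `viewshed_of` is an assumed helper returning the set of visible cell indices
# for a candidate viewpoint; the paper's actual IVC and clustering may differ.
from typing import Callable, Dict, List, Set

def filter_candidates(clusters: Dict[int, List[int]],
                      viewshed_of: Callable[[int], Set[int]],
                      keep_per_cluster: int = 1) -> List[int]:
    kept: List[int] = []
    covered: Set[int] = set()
    for _, members in sorted(clusters.items()):
        # Rank the members of this cluster by marginal viewshed contribution.
        ranked = sorted(members,
                        key=lambda v: len(viewshed_of(v) - covered),
                        reverse=True)
        for v in ranked[:keep_per_cluster]:
            kept.append(v)
            covered |= viewshed_of(v)
    return kept
```

The number of viewpoints retained is governed by the number of clusters and `keep_per_cluster`, mirroring the control knob described in the abstract.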

2.
As increasingly large-scale and higher-resolution terrain data have become available, for example from airborne and space-borne sensors, the volume of these datasets exposes scalability problems in existing GIS algorithms. To address this problem, a serial algorithm was developed to generate viewsheds on large grid-based digital elevation models (DEMs). We first divide the whole DEM into rectangular blocks in the row and column directions (block partitioning), then process these blocks along four axes followed by four sectors sequentially. When processing a particular block, we adopt the 'reference plane' algorithm to calculate the visibility of target points in the block, and adjust the calculation sequence according to the spatial relationship between the block and the viewpoint, since the viewpoint is not always inside the DEM. By adopting the reference plane algorithm and using block partitioning to segment and load the DEM dynamically, viewsheds can be generated efficiently in PC-based environments. Experiments showed that each block should be loaded whole into main memory when it is processed, and that the suggested approach retains the accuracy of the reference plane algorithm and has near-linear computational complexity.
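A rough sketch of the block-partitioning idea, assuming a NumPy DEM array; the axis/sector processing order and the reference-plane test inside each block are omitted, so this only illustrates how blocks could be enumerated and loaded around the viewpoint.

```python
import numpy as np

def iter_blocks(dem: np.ndarray, block_size: int, viewpoint: tuple):
    """Yield DEM blocks ordered by distance from the viewpoint's block.

    Simplified: the paper processes axes and sectors in a fixed order and
    applies the reference-plane test inside each block; here only the block
    partitioning and a plausible loading order are shown.
    """
    rows, cols = dem.shape
    vr, vc = viewpoint[0] // block_size, viewpoint[1] // block_size
    blocks = []
    for br in range(0, rows, block_size):
        for bc in range(0, cols, block_size):
            dist = abs(br // block_size - vr) + abs(bc // block_size - vc)
            blocks.append((dist, br, bc))
    for _, br, bc in sorted(blocks):
        # In an out-of-core setting, only this slice would be resident in memory.
        yield (br, bc), dem[br:br + block_size, bc:bc + block_size]
```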

3.
A rapid and flexible parallel approach for viewshed computation on large digital elevation models is presented. Our work focuses on the implementation of a derivative of the R2 viewshed algorithm. Emphasis has been placed on input/output (IO) efficiency, which is achieved through memory segmentation and coalesced memory access. An implementation of the parallel viewshed algorithm on the Compute Unified Device Architecture (CUDA), which exploits the high parallelism of the graphics processing unit, is presented; this version is referred to as r.cuda.visibility. The accuracy of our algorithm is compared with the r.los R3 algorithm (integrated into the open-source Geographic Resources Analysis Support System GIS environment) and other IO-efficient algorithms. Our results demonstrate that the proposed implementation of the R2 algorithm is faster and more IO-efficient than previously presented IO-efficient algorithms, and that it achieves moderate calculation precision compared with the R3 algorithm. Thus, to the best of our knowledge, the algorithm presented here is the most efficient viewshed approach, in terms of computational speed, for large datasets.
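For orientation, a plain-Python (non-CUDA) sketch of an R2-style viewshed: one ray is cast per boundary cell, and a running maximum elevation angle along the ray decides visibility. Cell size is taken as 1, and earth curvature and the paper's IO optimisations are ignored.

```python
import numpy as np

def r2_viewshed(dem: np.ndarray, vp: tuple, observer_h: float = 1.7) -> np.ndarray:
    """Simplified CPU sketch of an R2-style viewshed (the paper's version runs on CUDA)."""
    rows, cols = dem.shape
    vis = np.zeros_like(dem, dtype=bool)
    vr, vc = vp
    z0 = dem[vr, vc] + observer_h
    vis[vr, vc] = True
    border = ([(0, c) for c in range(cols)] + [(rows - 1, c) for c in range(cols)] +
              [(r, 0) for r in range(rows)] + [(r, cols - 1) for r in range(rows)])
    for tr, tc in border:
        steps = max(abs(tr - vr), abs(tc - vc))
        if steps == 0:
            continue
        max_tan = -np.inf
        for s in range(1, steps + 1):
            r = int(round(vr + (tr - vr) * s / steps))
            c = int(round(vc + (tc - vc) * s / steps))
            tan = (dem[r, c] - z0) / np.hypot(r - vr, c - vc)
            if tan > max_tan:          # this cell rises above every cell passed so far
                max_tan = tan
                vis[r, c] = True       # cells that do not exceed the maximum stay invisible
    return vis
```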

4.
Visibility modelling calculates what an observer could theoretically see in the surrounding region based on a digital model of the landscape. In some cases, it is not necessary, nor desirable, to compute the visibility of an entire region (i.e. a viewshed), but instead it is sufficient and more efficient to calculate the visibility from point to point, or from a point to a small set of points, such as computing the intervisibility of predators and prey in an agent-based simulation. This paper explores how different line-of-sight (LoS) sample ordering strategies increase the number of early target rejections, where the target is considered to be obscured from view, thereby improving the computational efficiency of the LoS algorithm. This is of particular importance in dynamic environments where the locations of the observers, targets and other surface objects are being frequently updated. Trials were conducted in three UK cities, demonstrating a robust fivefold increase in performance for two strategies (hop, divide and conquer). The paper concludes that sample ordering methods do impact overall efficiency, and that approaches which disperse samples along the LoS perform better in urban regions than incremental scan methods. The divide and conquer method minimises elevation interception queries, making it suitable when elevation models are held on disk rather than in memory, while the hopping strategy was equally fast, algorithmically simpler, with minimal overhead for visible target cases.
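A small sketch of the idea, assuming pre-sampled (fraction, elevation) pairs along the sight line: a midpoint-first ordering disperses the tests along the LoS so that a blocking sample is usually found early. The published hop and divide-and-conquer strategies may differ in detail.

```python
def midpoint_first_order(n: int) -> list:
    """Visit indices 0..n-1 midpoint-first (a divide-and-conquer style ordering)."""
    order, queue = [], [(0, n - 1)]
    while queue:
        lo, hi = queue.pop(0)
        if lo > hi:
            continue
        mid = (lo + hi) // 2
        order.append(mid)
        queue.extend([(lo, mid - 1), (mid + 1, hi)])
    return order

def target_visible(samples: list, z_observer: float, z_target: float) -> bool:
    """samples: (t, z) pairs with t in (0, 1) along the sight line.
    Returns False as soon as one sample blocks the line (early rejection)."""
    for i in midpoint_first_order(len(samples)):
        t, z = samples[i]
        if z >= z_observer + t * (z_target - z_observer):
            return False
    return True
```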

5.
Viewshed analysis remains one of the most popular GIS tools for assessing visibility, despite the recognition of several limitations when quantifying visibility from a human perspective. The visual significance of terrain is heavily influenced by the vertical dimension (i.e. slope, aspect and elevation) and by distance from the observer, neither of which is adjusted for in standard viewshed analyses. Based on these limitations, this study aimed to develop a methodology that extends the standard viewshed to represent the visible landscape as it is more realistically perceived by a human, called the 'Vertical Visibility Index' (VVI). This method is intended to overcome the primary limitations of the standard viewshed by calculating the vertical degrees of visibility between the eye level of a human and the top and bottom points of each visible cell in a viewshed. The validity of the VVI was then assessed using two comparison methods: (1) the known proportion of vegetation visible, as assessed through imagery, for 10 locations; and (2) standard viewshed analysis for 50 viewpoints in an urban setting. While positive, significant correlations were observed between the VVI values and both comparators, the correlation was strongest between the VVI values and the image-verified, known values (r = 0.863, p = 0.001). The validation results indicate that the VVI is a valid method that improves on standard viewshed analyses for the accurate representation of landscape visibility from a human perspective.
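A hedged sketch of the per-cell vertical-angle computation the VVI appears to be built on; the distance metric, eye height and the way per-cell angles are aggregated into the index are assumptions.

```python
import math

def vertical_visibility_degrees(dist: float, eye_z: float,
                                cell_base_z: float, cell_top_z: float) -> float:
    """Vertical angular extent (degrees) subtended by one visible cell,
    measured from the observer's eye level. Summing this over all visible
    cells gives one plausible reading of the VVI idea."""
    top = math.degrees(math.atan2(cell_top_z - eye_z, dist))
    bottom = math.degrees(math.atan2(cell_base_z - eye_z, dist))
    return top - bottom

# e.g. a 10 m tree 50 m away with its base at eye level subtends
# vertical_visibility_degrees(50, 1.6, 1.6, 11.6) ≈ 11.3 degrees
```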

6.
Abstract

In most documentation of geographical information systems (GIS) it is very rare to find details of the algorithms used in the software, yet alternative formulations of the same process may produce different results. In this research several alternatives in the design of viewshed algorithms are explored. Three major features of viewshed algorithms are examined: how elevations in the digital elevation model are inferred, how the viewpoint and target are represented, and the mathematical formulation of the comparison. It is found that the second of these produces the greatest variability in the viewable area (up to 50 per cent over the mean viewable area), while the last produces the least. The same test data are run through a number of different GIS implementations of the viewshed operation, and smaller, but still considerable, variability in the viewable area is observed. The study highlights three issues: the need for standards and/or empirical benchmark datasets for GIS functions; the desirability of publishing the algorithms used in GIS operations; and the fallacy of the binary representation of a complex GIS product such as the viewshed.
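To make the first design choice concrete, two common ways of inferring elevation at a non-grid point are sketched below (nearest cell versus bilinear interpolation). These are illustrative options rather than the specific variants tested in the paper, and boundary handling is omitted.

```python
import numpy as np

def elev_nearest(dem: np.ndarray, x: float, y: float) -> float:
    """Elevation by nearest-cell lookup (one possible design choice)."""
    return dem[int(round(y)), int(round(x))]

def elev_bilinear(dem: np.ndarray, x: float, y: float) -> float:
    """Elevation by bilinear interpolation of the four surrounding cells
    (another choice; the two can disagree enough to flip a visibility test)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    z00, z10 = dem[y0, x0], dem[y0, x0 + 1]
    z01, z11 = dem[y0 + 1, x0], dem[y0 + 1, x0 + 1]
    return (z00 * (1 - fx) * (1 - fy) + z10 * fx * (1 - fy) +
            z01 * (1 - fx) * fy + z11 * fx * fy)
```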

7.
Regionalization is an important part of the spatial analysis process, and the solution should be contiguity-constrained in each region. In general, several objectives need to be optimized in practical regionalization, such as the homogeneity of regions and the heterogeneity among regions. Therefore, multi-objective techniques are more suitable for solving regionalization problems. In this paper, we design a multi-objective particle swarm optimization algorithm for solving regionalization problems. Towards this goal, a novel particle representation for regionalization is proposed, which can be expressed in continuous space and has flexible constraints on the number of regions. In the process of optimization, a contiguous-region method is designed that satisfies the constraints and improves the efficiency. The decision solution is selected in the Pareto set based on a trade-off between the objective functions, and the number of regions can be automatically determined. The proposed method outperforms six regionalization algorithms in terms of both the number and the quality of the solutions.
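A minimal sketch of the Pareto bookkeeping such a multi-objective optimiser needs; the particle representation, velocity update and contiguity handling from the paper are not reproduced here.

```python
def dominates(a: tuple, b: tuple) -> bool:
    """True if objective vector `a` Pareto-dominates `b` (all objectives are
    minimised here; flip signs for objectives that should be maximised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions: list) -> list:
    """Keep the non-dominated solutions; each entry is (objectives, partition)."""
    front = []
    for obj, part in solutions:
        if not any(dominates(other, obj) for other, _ in solutions if other != obj):
            front.append((obj, part))
    return front
```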

8.
A new algorithm for the topographic index ln(α/tanβ) in TOPMODEL   (cited 4 times: 0 self-citations, 4 by others)
In TOPMODEL, the topographic index ln(α/tanβ) is used to approximate the spatial distribution of runoff source areas and groundwater levels within a catchment. The most widely used method for computing the topographic index is the multiple flow direction algorithm (FD8). This paper first reviews the principle of the multiple flow direction algorithm and, on that basis, proposes a geometric cone-surface inscribed-circle method for accurately computing the effective contour length used when distributing flow accumulation. It also improves the traditional calculation of α, the upslope contributing area per unit contour length, and strengthens the multiple flow direction algorithm's handling of anomalous grid cells in the DEM. The improved topographic index algorithm was compared with the traditional multiple flow direction algorithm on two different catchments and on DEMs of different resolutions. The results show that the new algorithm better reflects the physical meaning of the topographic index and produces more accurate results in practice. The new algorithm is of practical value for analysing catchment hydrological processes and for quantitative land surface process research.
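The index itself is straightforward to compute once the specific catchment area and slope are known; a minimal sketch is given below (this is the standard definition, not the paper's improved FD8 variant).

```python
import math

def topographic_index(upslope_area_per_contour: float, slope_rad: float) -> float:
    """Topographic index ln(a / tan(beta)), where `a` is the specific catchment
    area (upslope contributing area per unit contour length, in m) and `beta`
    is the local slope angle in radians."""
    return math.log(upslope_area_per_contour / math.tan(slope_rad))

# e.g. a = 120 m and a 5-degree slope:
# topographic_index(120, math.radians(5)) ≈ 7.2
```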

9.
10.
Conventional methods have difficulties in forming optimal paths when raster data are used and multiple objectives are involved. This paper presents a new method that uses ant colony optimization (ACO) to solve optimal path-covering problems on unstructured raster surfaces. The novelty of the proposed ACO lies in the incorporation of two distinct features that are not present in classical ACO. A new component, the direction function, is used to represent 'visibility' during path exploration; this function guides an ant toward the final destination more efficiently. Moreover, a utility function is proposed to reflect the multiple objectives in planning applications. Experiments have shown that classical ACO cannot be used to solve this type of path optimization problem. The proposed ACO model can generate near-optimal solutions on hypothetical data for which the optimal solutions are known. The model can also find near-optimal solutions for a real data set with a good convergence rate, and it yields much higher utility values than other common conventional models.
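A sketch of how a direction function can be folded into a classical ACO transition rule; the function form and the `alpha`/`beta` weights are assumptions for illustration, and the paper's utility function is not reproduced.

```python
import math
import random

def direction_factor(cell: tuple, neighbor: tuple, goal: tuple) -> float:
    """Heuristic that favours neighbours lying toward the destination
    (a stand-in for the paper's direction function)."""
    gain = math.dist(cell, goal) - math.dist(neighbor, goal)
    return max(gain + 1.0, 0.1)   # > 1 when the move closes distance to the goal

def choose_next(cell: tuple, neighbors: list, pheromone: dict,
                goal: tuple, alpha: float = 1.0, beta: float = 2.0) -> tuple:
    """Classical ACO transition rule with the direction factor as the heuristic term."""
    weights = [(pheromone[n] ** alpha) * (direction_factor(cell, n, goal) ** beta)
               for n in neighbors]
    total = sum(weights)
    return random.choices(neighbors, weights=[w / total for w in weights])[0]
```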

11.
To address the difficulty that traditional map-query methods have in locating complex address objects, and to meet the needs of geographic-grid-based spatio-temporal big data analysis, this paper proposes a GeoSOT grid positioning method for the semantic locations of areal features, combining the characteristics of areal features with semantic analysis. First, the feature's Minimum Bounding Rectangle (MBR) is used to split it into nine blocks in a 3 x 3 layout, and the GeoSOT grid code set of each block is computed separately. The name and spatial relation are then extracted from the location statement; depending on the context, the target is either queried directly or its position is computed from the grid codes. Finally, semantics determine whether a set-difference operation is applied to the coded result, yielding the GeoSOT grid extent of the target location. Experiments show that, compared with existing map services, the method can resolve locations expressed in more complex contexts and is both feasible and effective. The inference-based positioning it provides for unknown features can effectively supplement and extend existing areal-feature data sets, providing gridded data resources for subsequent spatio-temporal big data analysis of areal regions.
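A sketch of the nine-block MBR split described above; the GeoSOT encoding of each block and the semantic parsing are not reproduced here.

```python
def split_mbr_3x3(xmin: float, ymin: float, xmax: float, ymax: float) -> list:
    """Split a feature's minimum bounding rectangle into a 3x3 grid of blocks,
    as in the nine-block partition step. Each block would then be encoded
    into GeoSOT grid codes separately (not shown)."""
    xs = [xmin + (xmax - xmin) * i / 3 for i in range(4)]
    ys = [ymin + (ymax - ymin) * j / 3 for j in range(4)]
    return [(xs[i], ys[j], xs[i + 1], ys[j + 1])
            for j in range(3) for i in range(3)]
```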

12.
Spatial structure of field soil water content and determination of the number of samples   (cited 18 times: 0 self-citations, 18 by others)
杨诗秀, 雷志栋. 《地理学报》 (Acta Geographica Sinica), 1993, 48(5): 447-456
Based on field experimental data, this paper studies the spatial variability and spatial structure of field soil water content. Statistical, autocorrelation and semivariogram analyses were performed on the water content at different depths within the 1 m soil layer and on the mean water content of the 1 m layer, showing that field soils exhibit spatial variability. The coefficient of variation (Cv) of the mean water content of the 1 m layer is smaller than the Cv values of the water content at individual depths. The data also show that the correlation of water content across horizontal positions is not significant. Treating field water content as a spatially variable random variable, the number of samples required to meet a given confidence level and estimation precision can be calculated. The sampling-number tables provided in this paper can serve as a reference for determining a reasonable number of samples.
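As a worked illustration of the sampling-number idea, a standard normal-approximation formula n = (z * Cv / d)^2 is sketched below; the paper's own tables may be derived differently (for example with a t-distribution), so treat this only as an approximation.

```python
import math
from statistics import NormalDist

def required_samples(cv: float, rel_precision: float, confidence: float = 0.95) -> int:
    """Number of samples needed so the mean is estimated within `rel_precision`
    (e.g. 0.05 = 5%) of its value at the given confidence level, using the
    normal approximation n = (z * Cv / d)^2."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2.0)
    return math.ceil((z * cv / rel_precision) ** 2)

# e.g. Cv = 0.10, 5% precision, 95% confidence -> about 16 samples
```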

13.
A multi-scale regionalization method for spatial units   (cited 20 times: 0 self-citations, 20 by others)
Traditional regionalization of spatial units is usually based on attribute data alone, with little consideration of the spatial dependence between units. Building on scale-space theory, this paper proposes a multi-scale regionalization method that takes the attribute information of spatial units into account while also incorporating their mutual spatial dependence, so that as the spatial scale increases, spatial units with a high degree of spatial dependence are merged, producing regionalizations at different spatial scales. An experiment on regionalizing the level of socio-economic development of Jiangsu Province was carried out using 18 years of socio-economic data from 1978 to 1995; the result agrees well with the actual distribution of development levels.
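A simplified sketch of one contiguity-constrained merge step; repeating it yields partitions at successively coarser scales, although the paper's scale-space formulation is more involved. The data structures (`values`, `regions`, `adjacency`) are assumptions.

```python
import numpy as np

def merge_once(values: dict, regions: list, adjacency: dict) -> list:
    """Fuse the most similar pair of adjacent regions.

    `values` maps unit -> attribute vector, `regions` is a list of sets of
    units, `adjacency` maps unit -> set of neighbouring units."""
    def centroid(region):
        return np.mean([values[u] for u in region], axis=0)
    def adjacent(r1, r2):
        return any(n in r2 for u in r1 for n in adjacency[u])
    best, pair = np.inf, None
    for i in range(len(regions)):
        for j in range(i + 1, len(regions)):
            if adjacent(regions[i], regions[j]):
                d = np.linalg.norm(centroid(regions[i]) - centroid(regions[j]))
                if d < best:
                    best, pair = d, (i, j)
    if pair:
        i, j = pair
        regions[i] |= regions[j]   # merge the more similar neighbour in place
        del regions[j]
    return regions
```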

14.
Territory or zone design processes entail partitioning a geographic space, organized as a set of areal units, into different regions or zones according to a specific set of criteria that are dependent on the application context. In most cases, the aim is to create zones of approximately equal sizes (zones with equal numbers of inhabitants, same average sales, etc.). However, some of the new applications that have emerged, particularly in the context of sustainable development policies, are aimed at defining zones of a predetermined, though not necessarily similar, size. In addition, the zones should be built around a given set of seeds. This type of partitioning has not been sufficiently researched; therefore, there are no known approaches for automated zone delimitation. This study proposes a new method based on a discrete version of the adaptive additively weighted Voronoi diagram that makes it possible to partition a two-dimensional space into zones of specific sizes, taking both the position and the weight of each seed into account. The method consists of repeatedly solving a traditional additively weighted Voronoi diagram, so that each seed's weight is updated at every iteration. The zones are geographically connected using a metric based on the shortest path. Tests conducted on the extensive farming system of three municipalities in Castile-La Mancha (Spain) have established that the proposed heuristic procedure is valid for solving this type of partitioning problem. Nevertheless, these tests confirmed that the given seed position determines the spatial configuration the method must solve and this may have a great impact on the resulting partition.
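A compact sketch of a discrete adaptive additively weighted Voronoi assignment, with seed weights nudged toward the target zone sizes at each iteration; the shortest-path connectivity repair used in the paper is omitted, and the weight-update rule here is an illustrative choice.

```python
import numpy as np

def weighted_zones(cells: np.ndarray, seeds: np.ndarray, targets: list,
                   iters: int = 100, lr: float = 0.5) -> np.ndarray:
    """Assign each cell (n x 2 array of centres) to one of m seeds (m x 2)
    so zone sizes approach `targets` (desired cells per seed)."""
    weights = np.zeros(len(seeds))
    assign = np.zeros(len(cells), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(cells[:, None, :] - seeds[None, :, :], axis=2)
        assign = np.argmin(d - weights[None, :], axis=1)   # additively weighted rule
        sizes = np.bincount(assign, minlength=len(seeds))
        # Grow undersized zones and shrink oversized ones.
        weights += lr * (np.asarray(targets) - sizes) / len(cells) * d.mean()
    return assign
```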

15.
Spatial transformation of population statistics   (cited 11 times: 2 self-citations, 9 by others)
In economic and social research, data are often unavailable for the regions under study and must be derived from the data of regions for which they are known; that is, statistical data must be spatially transformed, which usually involves areal interpolation. This paper examines how to solve the population interpolation problem from a GIS perspective, arguing that areal interpolation is equivalent to overlay analysis in GIS. Building on traditional areal interpolation methods, an areal interpolation method based on the actual distribution of population is proposed and its formulas are derived. A recursive algorithm for population density is also proposed: residential areas are divided into sparsely and densely populated zones; once the population density of the sparse zones is estimated, the density of the dense zones can be derived; the dense zones are then split again into new sparse and dense zones, and the process is repeated until a population model approximating the true population distribution is obtained.
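For reference, the conventional area-weighted baseline that the recursive density method refines is sketched below, using Shapely geometries (an assumed dependency); the uniform-density assumption inside each source zone is exactly what the paper's method tries to relax.

```python
from shapely.geometry import Polygon

def areal_interpolate(source_zones: list, target_zone: Polygon) -> float:
    """Area-weighted estimate of a target zone's population.

    `source_zones` is a list of (Polygon, population) pairs; population is
    assumed to be uniformly distributed inside each source zone."""
    total = 0.0
    for poly, pop in source_zones:
        overlap = poly.intersection(target_zone).area
        if overlap > 0:
            total += pop * overlap / poly.area
    return total
```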

16.
An origin-destination (OD) flow can be defined as the movement of objects between two locations. These movements must be determined for a range of purposes, and strong interactions can be represented visually by clustering OD flows. Identifying such clusters can be useful in urban planning, traffic planning and logistics management research. However, few methods can identify arbitrarily shaped flow clusters. Here, we present a spatial scan statistical approach based on ant colony optimization (ACO) for detecting arbitrarily shaped clusters of OD flows (AntScan_flow). In this study, an OD flow cluster is defined as a region pair with a significant log-likelihood ratio (LLR), and the ACO is employed to detect the clusters with maximum LLRs in the search space. Simulation experiments comparing AntScan_flow and SaTScan_flow show that AntScan_flow achieves higher accuracy but at a greater computational cost. Finally, a case study of the morning commuting flows of Beijing residents was conducted. The AntScan_flow results show that the regions associated with medium- and long-distance commuting OD flow clusters are highly consistent with the city's subway lines and highways, while the regions of short-distance commuting OD flow clusters are more likely to exhibit 'residential-area to work-area' patterns.
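A sketch of a Kulldorff-style Poisson log-likelihood ratio, the kind of score the abstract says AntScan_flow maximises over candidate region pairs; the exact likelihood model in the paper may differ.

```python
import math

def flow_llr(c: float, mu: float, total: float) -> float:
    """Log-likelihood ratio for a candidate cluster: `c` observed flows inside,
    `mu` expected inside under the null, `total` observed flows overall.
    Returns 0 when the cluster has no excess of flows."""
    if c <= mu:
        return 0.0
    rest, rest_mu = total - c, total - mu
    llr = c * math.log(c / mu)
    if rest > 0:
        llr += rest * math.log(rest / rest_mu)
    return llr
```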

17.
18.
This paper presents a new, intelligent approach to discovering transition rules for geographical cellular automata (CA) based on bee colony optimisation (BCO–CA), which can perform complex tasks through the cooperation and interaction of bees. The artificial bee colony miner algorithm is used to discover the transition rules. In BCO–CA, a food-source position is defined by its upper and lower thresholds for each attribute, and each bee searches for the best upper and lower thresholds of each attribute as a zone. A transition rule is formed when the zones of the individual attributes are connected by the operator 'AND' and linked to a cell-status value. The transition rules are expressed as logical 'IF-THEN' statements, which are explicit and easy to understand. Bee colony optimisation is better able to avoid becoming trapped in local optima through local and global searching in the iterative process, and it does not require the discretisation of attribute values. Finally, the BCO–CA model is employed to simulate urban development in the Xi'an–Xianyang urban area in China. Preliminary results suggest that this BCO approach is effective in capturing complex relationships between spatial variables and urban dynamics. Experimental results indicate that the BCO–CA model achieves higher accuracy than the NULL and ACO–CA models, which demonstrates the feasibility and applicability of the model for simulating complex urban dynamic change.
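A minimal sketch of evaluating mined IF-THEN threshold rules on a cell. The rule structure (per-attribute lower/upper bounds linked to a new cell state) follows the abstract, while the dict layout and rule-ordering policy are assumptions.

```python
def fires(rule: dict, cell_attrs: dict) -> bool:
    """True if every attribute of the cell falls inside the rule's zone:
    IF lower_i <= attr_i <= upper_i for all i THEN new_state."""
    return all(lo <= cell_attrs[a] <= hi for a, (lo, hi) in rule['zones'].items())

def next_state(rules: list, cell_attrs: dict, current):
    """Apply the first matching rule; keep the current state otherwise."""
    for rule in rules:
        if fires(rule, cell_attrs):
            return rule['state']
    return current
```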

19.
Pedestrian navigation at night should differ from daytime navigation due to the psychological safety needs of pedestrians. For example, pedestrians may prefer better-illuminated walking environments, shorter travel distances, and greater numbers of pedestrian companions. Route selection at night is therefore a multi-objective optimization problem. However, multi-objective optimization problems are commonly solved by combining multiple objectives into a single weighted-sum objective function. This study extends the artificial bee colony (ABC) algorithm by modifying several strategies, including the representation of the solutions, the limited neighborhood search, and the Pareto front approximation method. The extended algorithm can be used to generate an optimal route set for pedestrians at night that considers travel distance, the illumination of the walking environment, and the number of pedestrian companions. We compare the proposed algorithm with the well-known Dijkstra shortest-path algorithm and discuss the stability, diversity, and dynamics of the generated solutions. Experiments within a study area confirm the effectiveness of the improved algorithm. This algorithm can also be applied to solving other multi-objective optimization problems.
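One way to score candidate routes on the three stated objectives so that a Pareto filter can be applied afterwards instead of a single weighted sum; the edge attribute names (`length`, `lux`, `companions`) are hypothetical.

```python
def route_objectives(route_edges: list) -> tuple:
    """Objective vector for one candidate route: (total length, negative
    length-weighted mean illumination, negative length-weighted mean
    companion count), so all three can be minimised together."""
    length = sum(e['length'] for e in route_edges)
    lux = sum(e['lux'] * e['length'] for e in route_edges) / length
    comp = sum(e['companions'] * e['length'] for e in route_edges) / length
    return (length, -lux, -comp)
```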

20.

This paper proposes a new approach to the mining exploration drillholes positioning problem (DPP) that incorporates both geostatistical and optimization techniques. A metaheuristic was developed to solve the DPP taking into account an uncertainty index that quantifies the reliability of the current interpretation of the mineral deposit. The uncertainty index was calculated from multiple deposit realizations obtained by truncated Gaussian simulations conditional to the available drillholes samplings. A linear programming model was defined to select the subset of future drillholes that maximizes coverage of the uncertainty. A Tabu Search algorithm was developed to solve large instances of this set partitioning problem. The proposed Tabu Search algorithm is shown to provide good quality solutions approaching 95% of the optimal solution in a reasonable computing time, allowing close to optimal coverage of uncertainty for a fixed investment in drilling.

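A greedy coverage baseline for the same objective, useful as a sanity check against metaheuristic results; the `coverage` mapping from candidate holes to high-uncertainty block ids is an assumed input, and this is not the paper's LP/Tabu Search method.

```python
def greedy_drillholes(coverage: dict, budget: int):
    """Repeatedly pick the candidate hole covering the most remaining
    high-uncertainty blocks, up to `budget` holes."""
    chosen, covered = [], set()
    for _ in range(budget):
        best = max(coverage, key=lambda h: len(coverage[h] - covered), default=None)
        if best is None or not (coverage[best] - covered):
            break
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered
```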
