Similar Articles
20 similar articles found (search time: 31 ms)
1.
The Cartographic Journal, 2013, 50(2): 130-140
Abstract


The cartogram, or value-by-area map, is a popular technique for cartographically representing social data. Such maps visually equalize a basemap before mapping a social variable by adjusting the size of each enumeration unit by a second, related variable. However, to scale the basemap units according to an equalizing variable, cartograms must distort the shape and/or topology of the original geography. Such compromises reduce the effectiveness of the visualisation for elemental and general map-reading tasks. Here we describe a new kind of representation, termed a value-by-alpha map, which visually equalizes the basemap by adjusting the alpha channel, rather than the size, of each enumeration unit. Although not without its own limitations, the value-by-alpha map is able to circumvent the compromise inherent to the cartogram form, perfectly equalizing the basemap while preserving both shape and topology.
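The alpha-equalization step described above can be sketched in a few lines. This is an illustrative reading of the idea, not the authors' implementation: the linear (optionally gamma-adjusted) transfer function, the field names, and the `value_by_alpha` helper are all assumptions.

```python
def value_by_alpha(units, equalizer_key, gamma=1.0):
    """Map each unit's equalizing variable (e.g. population) to an
    alpha value in [0, 1]. The largest unit stays fully opaque;
    smaller units fade out, visually equalizing the basemap without
    distorting shape or topology."""
    max_val = max(u[equalizer_key] for u in units)
    return {u["id"]: (u[equalizer_key] / max_val) ** gamma for u in units}

# Hypothetical enumeration units
counties = [
    {"id": "A", "population": 1_000_000},
    {"id": "B", "population": 250_000},
    {"id": "C", "population": 500_000},
]
alphas = value_by_alpha(counties, "population")
# "A" is fully opaque (1.0); "B" and "C" are proportionally faded
```

The resulting alpha would then modulate each unit's fill when the social variable is rendered as its colour.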

2.
The Cartographic Journal, 2013, 50(3): 250-256
Abstract


Lenticular visualisation methods are innovative advancements of modern presentation media in cartography. Owing to people's predominantly three-dimensional perception, this technique, in print as well as on screen, offers the possibility of introducing autostereoscopic, i.e. three-dimensional, views into cartography. On the basis of true 3D, it has already found its way into the visualisation of relief. Moreover, it also opens perspectives for the more widespread products of thematic cartography (thematic maps). Multi-layered representation becomes possible on the basis of three-dimensionally or sequentially differentiated depictions of spatial phenomena. Thus, several parameters or dimensions of cartographic content can be displayed at the same time. This essay discusses, on a theoretical basis, some potential applications of the lenticular foil technique for thematic cartography.

3.
ABSTRACT

Understanding the characteristics of tourist movement is essential for tourist behavior studies, since these characteristics underpin how the tourism industry selects strategies ranging from attraction planning to commercial product development. However, conventional tourism research methods are neither scalable nor cost-efficient for discovering underlying movement patterns in massive datasets. With advances in information and communication technology, social media platforms provide big datasets generated by millions of people from different countries, all of which can be harvested cost-efficiently. This paper introduces a graph-based method to detect tourist movement patterns from Twitter data. First, collected geo-tagged tweets are cleaned to filter out those not published by tourists. Second, a DBSCAN-based clustering method is adapted to construct tourist graphs whose vertices are tourist attractions and whose edges are movements between them. Third, network analytical methods (e.g. betweenness centrality, the Markov clustering algorithm) are applied to detect tourist movement patterns, including popular attractions, centric attractions, and popular tour routes. New York City in the United States is selected to demonstrate the utility of the proposed methodology. The detected tourist movement patterns can inform business and government activities such as tour product planning, transportation, and the development of shopping and accommodation centers.
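The graph-construction step, with attractions as vertices and movements as weighted edges, can be sketched as follows. The paper clusters geo-tagged tweets with DBSCAN and applies betweenness centrality and Markov clustering; this simplified stand-in assumes attractions have already been identified and uses raw edge weight as a popularity proxy, with hypothetical data and helper names.

```python
from collections import Counter

def build_tourist_graph(checkins):
    """Build a directed, weighted attraction graph from per-tourist,
    chronologically ordered check-in sequences.

    checkins: {tourist_id: [attraction, attraction, ...]}
    Returns edge weights {(src, dst): count} over observed moves."""
    edges = Counter()
    for visits in checkins.values():
        for src, dst in zip(visits, visits[1:]):
            if src != dst:  # ignore repeated check-ins at the same place
                edges[(src, dst)] += 1
    return edges

def popular_routes(edges, top=3):
    """Most frequently travelled attraction-to-attraction moves."""
    return [route for route, _ in edges.most_common(top)]

trips = {
    "u1": ["Times Square", "Central Park", "MoMA"],
    "u2": ["Times Square", "Central Park"],
    "u3": ["Central Park", "MoMA"],
}
edges = build_tourist_graph(trips)
# ("Times Square", "Central Park") is the heaviest edge (weight 2)
```

A full reproduction would replace the hand-listed attractions with DBSCAN clusters of tweet coordinates and rank vertices by betweenness centrality instead of edge weight.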

4.
5.
The Cartographic Journal, 2013, 50(4): 323-329
Abstract


Traffic delays can be used to quantify traffic congestion on the road. However, raw traffic delay data does not directly convey useful information about congestion. The aim of this research is to develop a multi-scale visualisation of traffic delay (excess travel time in minutes per kilometre) derived from automatic number plate reading (ANPR) data, enabling commuting patterns to and from Central London to be understood and analysed further.
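The delay metric defined above, excess travel time in minutes per kilometre, is a one-line computation once observed and free-flow travel times are known for a link; the helper name and example figures below are illustrative, not taken from the paper.

```python
def excess_travel_time(observed_min, free_flow_min, length_km):
    """Excess travel time in minutes per kilometre over a road link:
    how much slower than free-flow the link was, normalised by length."""
    return (observed_min - free_flow_min) / length_km

# A hypothetical 5 km link driven in 12 min against a 6 min free-flow time
delay = excess_travel_time(12.0, 6.0, 5.0)  # 1.2 min/km
```

In an ANPR setting, `observed_min` would come from matched number-plate timestamps between two cameras and `length_km` from the road network between them.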

6.
The Cartographic Journal, 2013, 50(4): 372-386
Abstract

For decades, uncertainty visualisation has attracted attention in disciplines such as cartography and geographic visualisation, scientific visualisation and information visualisation. Most of this research deals with the development of new approaches to depict uncertainty visually; only a small part is concerned with empirical evaluation of such techniques. This systematic review aims to summarize past user studies and describe their characteristics and findings, focusing on the field of geographic visualisation and cartography and thus on displays containing geospatial uncertainty. From a discussion of the main findings, we derive lessons learned and recommendations for future evaluation in the field of uncertainty visualisation. We highlight the importance of user tasks for successful solutions and recommend moving towards task-centered typologies to support systematic evaluation in the field of uncertainty visualisation.

7.
Abstract


The cartogram is a technique for visualising the geographical distribution of spatial data. It has two main types: the distance cartogram and the area cartogram. An area cartogram is a transformed map in which areas are resized in proportion to an attribute value. A number of techniques have been developed for generating area cartograms. Some researchers consider the cartogram a very effective technique for visualising spatial data, while others doubt its effectiveness because of the possible distortion of shape and/or disconnection of topology. This study aims to evaluate the effectiveness of the area cartogram for visualising spatial data. Two comparative experiments were conducted: one compared thematic maps with cartograms, and the other compared different types of area cartogram. Two datasets with different characteristics were used: 2005 China population data and 1996 US election data. Results show that the cartogram is more effective for representing the 1996 US election data, which is qualitative (binary or nominal), whereas the thematic map is far more effective for representing the 2005 China population data, which is quantitative (classed or ordinal). It was also found that, among the different types of area cartogram, the pseudo-cartogram is the most preferred technique and the Dorling cartogram the least preferred.
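The core constraint of any area cartogram, that each unit's area become proportional to its attribute value, can be expressed as a per-unit linear scale factor. A minimal sketch with hypothetical field names, assuming the total map area is preserved (actual algorithms then iterate to satisfy these targets while managing shape and topology):

```python
import math

def target_scale_factors(units, total_area=None):
    """For each unit, the linear scale factor that would make its area
    proportional to its attribute value while keeping total map area
    fixed. Factor < 1 means the unit should shrink; > 1, grow."""
    if total_area is None:
        total_area = sum(u["area"] for u in units)
    total_value = sum(u["value"] for u in units)
    factors = {}
    for u in units:
        target_area = total_area * u["value"] / total_value
        factors[u["id"]] = math.sqrt(target_area / u["area"])
    return factors

# Two equal-area units with unequal attribute values
states = [
    {"id": "X", "area": 100.0, "value": 10},
    {"id": "Y", "area": 100.0, "value": 30},
]
f = target_scale_factors(states)
# X shrinks (factor sqrt(0.5)), Y grows (factor sqrt(1.5))
```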

8.
ABSTRACT

Eighty percent of big data are associated with spatial information, and thus constitute Big Spatial Data (BSD). BSD provides great new opportunities to rework problems in urban and environmental sustainability with advanced BSD analytics. To fully leverage its advantages, BSD is integrated with conventional data (e.g. remote sensing images) and improved methods are developed. This paper introduces four case studies: (1) detection of polycentric urban structures; (2) evaluation of urban vibrancy; (3) estimation of population exposure to PM2.5; and (4) urban land-use classification via deep learning. The results provide evidence that integrated methods can harness the advantages of both traditional data and BSD, and can also improve the effectiveness of big data itself. Finally, this study makes three key recommendations for the development of BSD with regard to data fusion, data and predictive analytics, and theoretical modeling.

9.
ABSTRACT

Geovisualisation is a knowledge-intensive art in which both providers and users need to possess a wide range of knowledge. Current syntactic approaches to presenting visualisation information lack semantics on the one hand, and are too bespoke on the other. Such limitations impede the transfer, interpretation, and reuse of geovisualisation knowledge. In this paper, we propose a knowledge-based approach to formally represent geovisualisation knowledge in a semantically enriched and machine-readable manner using Semantic Web technologies. Specifically, we represent knowledge regarding cartographic scale, data portrayal and geometry source, three key aspects of geovisualisation in the contemporary web mapping era, by coupling ontologies with semantic rules. The knowledge base enables inference to derive the corresponding geometries and portrayals for visualisation under different conditions. A prototype system is developed in which geospatial linked data serve as the underlying data, and some geovisualisation knowledge is formalised into a knowledge base to visualise the data and provide rich semantics to users. The proposed approach can partially form the foundation for the vision of a web of knowledge for geovisualisation.

10.
ABSTRACT

In this opinion paper, we, a group of scientists from environmental, geo-, ocean- and information science, argue that visual data exploration should become a common analytics approach in Earth system science because of its potential for the analysis and interpretation of large and complex spatio-temporal data. We discuss challenges such as synthesising heterogeneous data from various sources, reducing the amount of information, and facilitating multidisciplinary, collaborative research. We argue that to fully exploit the potential of visual data exploration, several bottlenecks and challenges must be addressed: providing efficient data management and an integrated modular workflow; developing and applying suitable visual exploration concepts and methods with the help of effective, tailored tools; and raising awareness of visual data exploration through education. We are convinced that visual data exploration is worth the effort, since it significantly facilitates insight into environmental data and the derivation of knowledge from it.

11.
Abstract

A significant issue in Geographic Information Science (GIS) is spatial autocorrelation, a pressing question in the extraction of information from the statistical analysis of georeferenced data. At present, spatial autocorrelation offers two types of measures: continuous and discrete. Is it possible to use Moran's I and the Moran scatterplot with continuous data? Is it possible to use the same methodology with discrete data? A particular and cumbersome problem is the choice of the spatial-neighborhood matrix (W) for point data. This paper addresses these issues by introducing the concept of covariogram contiguity, where each weight is based on the variogram model for the particular dataset: (1) the variogram, whose range equals the distance with the highest Moran's I value, defines the weights for points separated by less than the estimated range, and (2) weights equal zero for points separated by more than that range. After the W matrix is computed, the Moran location scatterplot is created in an iterative process. Across various lag distances, Moran's I proves to be a good search factor for the optimal neighborhood area. Uncertainty/transition regions are also emphasized. In addition, a new Exploratory Spatial Data Analysis (ESDA) tool, the Moran variance scatterplot, is developed, since the conventional Moran scatterplot is not sensitive to neighbor variance. This computer-mapping framework allows the study of spatial patterns, outliers, changeover areas, and trends in an ESDA process. All these tools were implemented in SAKWeb©, a free web e-learning program for quantitative geographers (in the near future, myGeooffice.org).
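Moran's I itself is straightforward to compute once the weight matrix W is fixed. The sketch below uses simple binary distance-threshold weights as a crude stand-in for the covariogram-contiguity weights proposed in the paper (weight 1 within the range, 0 beyond it); the function name and example data are assumptions.

```python
import math

def morans_i(values, coords, max_range):
    """Moran's I with binary weights: w_ij = 1 when points i and j are
    within max_range of each other, 0 otherwise.
    I = (n / S0) * sum_ij(w_ij * z_i * z_j) / sum_i(z_i^2),
    where z_i are deviations from the mean and S0 = sum of weights."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    num = w_sum = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if math.dist(coords[i], coords[j]) <= max_range:
                w_sum += 1.0
                num += dev[i] * dev[j]
    denom = sum(z * z for z in dev)
    return (n / w_sum) * (num / denom)

# Perfectly clustered values give I = +1; perfectly alternating, I = -1
clustered = morans_i([1, 1, 10, 10], [(0, 0), (1, 0), (10, 0), (11, 0)], 2.0)
alternating = morans_i([1, 10, 1, 10], [(0, 0), (1, 0), (2, 0), (3, 0)], 1.5)
```

Sweeping `max_range` over candidate lag distances and keeping the value that maximises I mirrors the paper's use of Moran's I as a search factor for the optimal neighborhood area.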

12.
Abstract

Many of the traditional data visualization techniques, which proved to be supportive for exploratory analysis of datasets of moderate sizes, fail to fulfil their function when applied to large datasets. There are two approaches to coping with large amounts of data: data selection, when only a portion of data is displayed, and data aggregation, i.e. grouping data items and considering the groups instead of the original data. None of these approaches alone suits the needs of exploratory data analysis, which requires consideration of data on all levels: overall (considering a dataset as a whole), intermediate (viewing and comparing collective characteristics of arbitrary data subsets, or classes), and elementary (accessing individual data items). Therefore, it is necessary to combine these approaches, i.e. build a tool showing the whole set and arbitrarily defined subsets (object classes) in an aggregated way and superimposing this with a representation of arbitrarily selected individual data items.

We have achieved such a combination of approaches by modifying the technique of the parallel coordinate plot. These modifications are described and analysed in the paper.
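The intermediate (class-level) view the authors describe, showing aggregated characteristics of subsets instead of every polyline, can be illustrated by computing a per-class envelope for each axis. The (min, median, max) summary below is a simplification for illustration, not the authors' exact aggregation, and the field names are hypothetical.

```python
from statistics import median

def aggregate_axes(records, class_key, axes):
    """Per-class (min, median, max) envelope for each parallel-coordinate
    axis, replacing one polyline per record with one band per class."""
    groups = {}
    for r in records:
        groups.setdefault(r[class_key], []).append(r)
    return {
        cls: {
            ax: (min(r[ax] for r in rows),
                 median(r[ax] for r in rows),
                 max(r[ax] for r in rows))
            for ax in axes
        }
        for cls, rows in groups.items()
    }

records = [
    {"cls": "a", "x": 1, "y": 10},
    {"cls": "a", "x": 3, "y": 30},
    {"cls": "b", "x": 5, "y": 50},
]
summary = aggregate_axes(records, "cls", ["x", "y"])
```

Individually selected records would then be drawn as ordinary polylines on top of these class bands, giving access to all three levels (overall, intermediate, elementary) at once.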

13.
The Cartographic Journal, 2013, 50(3): 240-246
Abstract

The use of computer-generated perspective views, often called three-dimensional (3D) maps, is growing. These terrain visualisations should be more understandable for users without cartographic education who are not familiar with contour lines. Within the study, two eye-tracking experiments and an online questionnaire were used to investigate the difference between user cognition of classical two-dimensional (2D) visualisation with contour lines and a perspective 3D view. The questionnaire focused on map understandability, suitability and aesthetics; its results show that the majority of participants prefer 3D visualisation. The first eye-tracking experiment presented a pair of maps in one stimulus: one in 2D, the other in 3D. No significant differences between user preferences for 2D and 3D visualisation were found, but the results were influenced by the order of the maps in the stimuli. A second experiment was therefore designed in which each stimulus contained only one of the two visualisations (2D or 3D). ScanPath comparison of the results confirmed that users adopt different strategies for the cognition of 2D and 3D visualisations, although a statistically significant difference between the two types was found only in the ScanPath length metric.

14.
ABSTRACT

Turning Earth observation (EO) data consistently and systematically into valuable global information layers is an ongoing challenge for the EO community. Recently, the term 'big Earth data' emerged to describe massive EO datasets that confront analysts and their traditional workflows with a range of challenges. We argue that these altered circumstances must be actively met by an evolution of EO to revolutionise its application in various domains. The disruptive element is that analysts and end-users increasingly rely on Web-based workflows. In this contribution we study selected systems and portals, place them in the context of challenges and opportunities, and highlight selected shortcomings and possible future developments that we consider relevant for the imminent uptake of big Earth data.

15.
Abstract

The vision of a Digital Earth calls for more dynamic information systems, new sources of information, and stronger capabilities for their integration. Sensor networks have been identified as a major information source for the Digital Earth, while Semantic Web technologies have been proposed to facilitate integration. So far, sensor data are stored and published using the Observations & Measurements standard of the Open Geospatial Consortium (OGC) as data model. With the advent of Volunteered Geographic Information and the Semantic Sensor Web, work on an ontological model gained importance within Sensor Web Enablement (SWE). In contrast to data models, an ontological approach abstracts from implementation details by focusing on modeling the physical world from the perspective of a particular domain. Ontologies restrict the interpretation of vocabularies toward their intended meaning. The ongoing paradigm shift to Linked Sensor Data complements this attempt. Two questions have to be addressed: (1) how to refer to changing and frequently updated data sets using Uniform Resource Identifiers, and (2) how to establish meaningful links between those data sets, that is, observations, sensors, features of interest, and observed properties? In this paper, we present a Linked Data model and a RESTful proxy for OGC's Sensor Observation Service to improve integration and inter-linkage of observation data for the Digital Earth.

16.
Survey Review, 2013, 45(94): 349-361
Abstract

A recent investigation into the flatness of Multiplex diapositive slides has shown that flatness errors occur ranging from 0 to 0·03–0·04 mm. referred to the flat projector stage. In a first attempt to ascertain the effects of errors of this kind on Multiplex bridges, the flatness values of two sets of nine diapositives each were measured using a simple interferometric method, and the vertical (wants of correspondence) and horizontal parallaxes introduced by these flatness errors were subjected to computational bridging. The resulting height errors at the end of the two strips proved to be of noticeable size, as large as +0·6 and −0·9 mm. respectively. Indicative as these figures may already be, it seems useful to abstract the investigation from the vagaries of the individual case and to put it on a more general footing.

17.
Abstract

Digital Earth essentially consists of 3D (and higher-dimensional) models with attached semantic information (attributes). Techniques for generating such models efficiently are urgently required. Reality-based 3D modelling using images as the prime data source plays an important role in this context. Images contain a wealth of information that can be used to advantage for model generation, and are increasingly available from satellite, aerial and terrestrial platforms. This contribution briefly describes some of the problems encountered if the process of model generation is to be automated. With the help of examples from Digital Terrain Model generation, Cultural Heritage and 3D city modelling, we show briefly what can be achieved. Special attention is directed towards the use of model helicopters for image data acquisition. Some problems with interactive visualisation are discussed, and issues surrounding R&D, professional practice and education are also addressed.

18.
Abstract

At the end of the 1980s, the computer experts who had been in the vanguard of cartographic development lost their position as computers became democratised. This may be ascribed to the 'Macintosh' effect. It in turn led cartographic companies back to the core of their professional know-how: it is cartographers themselves who now develop the scope of their profession, utilising all the resources provided by the new computer technology. But if cartographers want to keep playing a major role in the geographic information arena, they have to determine and develop the specific elements of their discipline: if technology mobilises all forces to the detriment of theory, the discipline progressively weakens and ends up being swallowed by another discipline.

It is the cartographers' task to transform spatial information from its verbal, social and numerical form into visual form for visual thinking; this visualisation serves cognitive, communication, decision-support and social functions. In order for maps to perform these functions, cartographers should continue, now with digital tools, to safeguard data quality by monitoring the compilation stage, during which they must ensure that the heterogeneous datasets in databases are made comparable in terms of geometry, semantics, currency and completeness. This aspect of the cartographer's job can be called its engineering part. The other main aspect remains map design, which leads to proper communication of the spatial information. Both aspects will remain the cartographer's domain if he or she succeeds in providing a theoretical basis for this work.

19.
Abstract

After the establishment of spatial data infrastructures (SDI) and national information infrastructures (NII) in many countries, the provision of geo-services became one of the most important and attractive tasks. With the integration of the global positioning system (GPS), geographic information systems (GIS) and remote sensing (RS), we can, in principle, answer any geospatial question: what object changed, where, when, and in what way? An intelligent geo-service agent could provide end-users with the most necessary information in the shortest time and at the lowest cost. Unfortunately, there is still a long way to go to achieve such goals. The central component of such geo-services is the integration of the spatial information system with a computing grid via wired and wireless communication networks. This paper discusses grid technology and its integration with spatial information technology, expounding potential problems and possible solutions. A novel categorisation of information grids in the context of geo-spatial information is proposed: generalised and specialised spatial information grids.

20.
The Cartographic Journal, 2013, 50(4): 315-322
Abstract


In the area of volunteered geographical information (VGI), the issue of spatial data quality is a clear challenge. The data that are contributed to VGI projects do not comply with standard spatial data quality assurance procedures, and the contributors operate without central coordination and strict data collection frameworks. However, similar to the area of open source software development, it is suggested that the data hold an intrinsic quality assurance measure through the analysis of the number of contributors who have worked on a given spatial unit. The assumption that as the number of contributors increases so does the quality is known as ‘Linus’ Law’ within the open source community. This paper describes three studies that were carried out to evaluate this hypothesis for VGI using the OpenStreetMap dataset, showing that this rule indeed applies in the case of positional accuracy.
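The intrinsic quality measure behind 'Linus' Law', the number of distinct contributors per spatial unit, can be sketched by bucketing edits into grid cells; the cell size, data layout and helper name below are assumptions for illustration, not the paper's actual spatial units.

```python
from collections import defaultdict

def contributors_per_cell(edits, cell_size=0.01):
    """Count distinct contributors per grid cell.

    edits: iterable of (user_id, lon, lat) records, e.g. from an
    OSM-style edit history. Returns {cell: contributor_count}."""
    cells = defaultdict(set)
    for user, lon, lat in edits:
        cell = (int(lon // cell_size), int(lat // cell_size))
        cells[cell].add(user)
    return {cell: len(users) for cell, users in cells.items()}

edits = [
    ("alice", -0.1281, 51.5074),  # central London
    ("bob",   -0.1285, 51.5071),
    ("alice", -0.1283, 51.5075),
    ("carol",  2.3522, 48.8566),  # central Paris
]
counts = contributors_per_cell(edits)
# London cell: 2 distinct contributors; Paris cell: 1
```

Under the hypothesis the paper tests, cells with higher counts would be expected to show better positional accuracy.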
