Similar Documents
1.
This paper presents the first application of spatially correlated neutral models to the detection of changes in mortality rates across space and time using the local Moran's I statistic. Sequential Gaussian simulation is used to generate realizations of the spatial distribution of mortality rates under increasingly stringent conditions: 1) reproduction of the sample histogram, 2) reproduction of the pattern of spatial autocorrelation modeled from the data, 3) incorporation of regional background obtained by geostatistical smoothing of observed mortality rates, and 4) incorporation of smooth regional background observed at a prior time interval. The simulated neutral models are then processed using two new spatio-temporal variants of the Moran's I statistic, which allow one to identify significant changes in mortality rates above and beyond past spatial patterns. Last, the results are displayed using an original classification of clusters/outliers tailored to the space-time nature of the data. Using this new methodology, the space-time distribution of cervix cancer mortality rates recorded over all US State Economic Areas (SEA) is explored for 9 time periods of 5 years each. Incorporation of spatial autocorrelation leads to fewer significant SEA units than obtained under the traditional assumption of spatial independence, confirming earlier claims that Type I errors may increase when tests using the assumption of independence are applied to spatially correlated data. Integration of regional background into the neutral models yields substantially different spatial clusters and outliers, highlighting local patterns which were blurred when local Moran's I was applied under the null hypothesis of constant risk. This research was funded by grants R01 CA92669 and 1R43CA105819-01 from the National Cancer Institute and R43CA92807 under the Innovation in Biomedical Information Science and Technology Initiative at the National Institutes of Health. The views stated in this publication are those of the authors and do not necessarily represent the official views of the NCI. The authors also thank three anonymous reviewers for their comments that helped improve the presentation of the methodology.
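The workhorse statistic here, local Moran's I, is simple to compute from a rate vector and a spatial weights matrix. Below is a minimal Python sketch under simplified assumptions (a toy chain of units with row-standardized binary weights); it is not the authors' implementation, and in the paper significance is judged against geostatistically simulated neutral models rather than analytical formulas:

```python
import numpy as np

def local_morans_i(rates, W):
    """Local Moran's I for each areal unit.

    rates : (n,) array of mortality rates
    W     : (n, n) row-standardized spatial weights matrix
    Positive values flag clusters of similar rates; negative
    values flag spatial outliers.
    """
    z = (rates - rates.mean()) / rates.std()
    return z * (W @ z)

# Toy example: 5 units on a line; unit 3 is a high-rate outlier.
n = 5
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i + 1):
        if 0 <= j < n:
            W[i, j] = 1.0
W /= W.sum(axis=1, keepdims=True)  # row-standardize

rates = np.array([2.0, 2.1, 1.9, 8.5, 2.0])
print(local_morans_i(rates, W))  # strongly negative I at index 3
```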

2.
A thorough assessment of human exposure to environmental agents should incorporate mobility patterns and temporal changes in human behaviors and concentrations of contaminants; yet the temporal dimension is often under-emphasized in exposure assessment endeavors, due in part to insufficient tools for visualizing and examining temporal datasets. Spatio-temporal visualization tools are valuable for integrating a temporal component, thus allowing for examination of continuous exposure histories in environmental epidemiologic investigations. An application of these tools to a bladder cancer case-control study in Michigan illustrates continuous exposure life-lines and maps that display smooth, continuous changes over time. Preliminary results suggest increased risk of bladder cancer from combined exposure to arsenic in drinking water (>25 μg/day) and heavy smoking (>30 cigarettes/day) in the 1970s and 1980s, and a possible cancer cluster around automotive, paint, and organic chemical industries in the early 1970s. These tools have broad application for examining spatially- and temporally-specific relationships between exposures to environmental risk factors and disease. This study was supported by grant R01 CA96002-10, Geographic-Based Research in Cancer Control and Epidemiology, from the National Cancer Institute. Development of the STIS™ software was funded by grants R43 ES10220 from the National Institutes of Environmental Health Sciences and R01 CA92669 from the National Cancer Institute. Access to cancer case records was provided by the Michigan Cancer Surveillance Program within the Division for Vital Records and Health Statistics, Michigan Department of Community Health. The authors thank the Michigan Public Health Institute for conducting the telephone interview and Stacey Fedewa and Lisa Bailey for entering written surveys into a database. The authors thank three anonymous reviewers for their helpful comments.

3.
This paper deals with the extension of internet-based geographic information systems with functionality for exploratory spatial data analysis (ESDA). The specific focus is on methods to identify and visualize outliers in maps for rates or proportions. Three sets of methods are included: extreme value maps, smoothed rate maps, and the Moran scatterplot. The implementation is carried out by means of a collection of Java classes to extend the GeoTools open source mapping software toolkit. The web-based spatial analysis tools are illustrated with applications to the study of homicide rates and cancer rates in U.S. counties. This research was supported in part by a number of grants from the US National Science Foundation: NSF Grants SBR-9410612 and BCS-9978058, to the Center for Spatially Integrated Social Science (CSISS), and a grant from the National Consortium on Violence Research (NCOVR is supported under grant SBR-9513040 from the National Science Foundation). In addition, support was provided by grant R01 CA 95949-01 from the National Cancer Institute. Special thanks to Dr. Eugene J. Lengerich of the Pennsylvania State Cancer Institute for providing the data on colon cancer diagnoses.
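Of the three method sets, the Moran scatterplot is the easiest to reproduce outside the Java/GeoTools implementation described above. A hypothetical Python/matplotlib sketch, assuming a row-standardized weights matrix so that the slope of the fitted line equals global Moran's I:

```python
import numpy as np
import matplotlib.pyplot as plt

def moran_scatterplot(values, W, ax=None):
    """Plot standardized values against their spatial lag.

    Upper-right/lower-left quadrants indicate spatial clusters;
    upper-left/lower-right indicate potential spatial outliers.
    Returns the slope of the fit, which equals global Moran's I
    when W is row-standardized.
    """
    z = (values - values.mean()) / values.std()
    lag = W @ z
    slope = (z @ lag) / (z @ z)
    ax = ax or plt.gca()
    ax.scatter(z, lag)
    ax.axhline(0, lw=0.5); ax.axvline(0, lw=0.5)
    xs = np.array([z.min(), z.max()])
    ax.plot(xs, slope * xs)
    ax.set_xlabel("standardized rate z")
    ax.set_ylabel("spatial lag of z")
    return slope

# Toy example: 4 units, row-standardized chain weights.
W = np.array([[0, 1, 0, 0], [.5, 0, .5, 0],
              [0, .5, 0, .5], [0, 0, 1, 0]])
vals = np.array([1.0, 1.2, 0.9, 5.0])
print("Moran's I =", moran_scatterplot(vals, W))
plt.show()
```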

4.
When assessing maps consisting of comparable regional values, it is of interest to know whether the peak, or maximum value, is higher than it would likely be by chance alone. Peaks on maps of crime or disease might be attributable to random fluctuation, or they might be due to an important deviation from the baseline process that produces the regional values. This paper addresses the situation where a series of such maps are observed over time, and it is of interest to detect statistically significant deviations between the observed and expected peaks as quickly as possible. The Gumbel distribution is used as a model for the statistical distribution of extreme values; this distribution does not require the underlying distributions of regional values to be either normal, known, or identical. Cumulative sum surveillance methods are used to monitor these Gumbel variates, and these methods are also extended for use when monitoring smoothed regional values (where the quantity monitored is a weighted sum of values in the immediate geographical neighborhood). The new methods are illustrated by using data on breast cancer mortality for the 217 counties of the northeastern United States, and prostate cancer mortality for the entire United States, during the period 1968-1998. The research assistance of Ikuho Yamada is gratefully acknowledged. I am also grateful for the support of Grant 1R01 ES09816-01 from the National Institutes of Health, the support of National Cancer Institute Grant R01 CA92693-0, and the helpful comments made by the referees.
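The monitoring scheme pairs an extreme-value null model with sequential change detection. A hedged sketch of one plausible reading of that combination; the Gumbel parameters, slack k, and threshold h below are illustrative, not the paper's calibrated values:

```python
import numpy as np

def gumbel_cusum(maxima, mu, beta, k=0.5, h=4.0):
    """One-sided CUSUM on map maxima, standardized via a Gumbel
    null model (location mu, scale beta, e.g. estimated from
    historical maps). Returns the CUSUM path and the first period
    (0-based) at which it crosses the decision threshold h."""
    # Under the null, (x - mu)/beta is standard Gumbel.
    z = (np.asarray(maxima) - mu) / beta
    s, path, alarm = 0.0, [], None
    for t, zt in enumerate(z):
        s = max(0.0, s + zt - k)  # accumulate departures above slack k
        path.append(s)
        if alarm is None and s > h:
            alarm = t
    return np.array(path), alarm

# Yearly maxima of county-level rates (illustrative numbers only).
maxima = [3.1, 2.9, 3.3, 3.0, 4.1, 4.4, 4.9]
path, alarm = gumbel_cusum(maxima, mu=3.0, beta=0.3)
print(path.round(2), "first signal at period:", alarm)  # signals at 5
```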

5.
Spatial autocorrelation analysis was used to identify spatial patterns of 1991 Gulf War (GW) troop locations in relationship to subsequent postwar diagnosis of chronic multisymptom illness (CMI). Criteria for the diagnosis of CMI include reporting from at least two of three symptom clusters: fatigue, musculoskeletal pain, and mood and cognition. A GIS-based methodology was used to examine associations between potential hazardous exposures or deployment situations and postwar health outcomes, using troop location data as a surrogate. GW veterans from the Devens Cohort Study were queried about specific symptoms approximately four years after the 1991 deployment to the Persian Gulf. Global and local statistics were calculated using the Moran's I and G statistics for six time periods chosen a priori to mark important GW-service events or exposure scenarios among 173 members of the cohort. Global Moran's I statistics did not detect global spatial patterns at any of the six specified time periods, thus indicating that there is no significant spatial autocorrelation of locations over the entire Gulf region for veterans meeting criteria for severe postwar CMI. However, when applying local G* and local Moran's I statistics, significant spatial clusters (primarily in the coastal Dammam/Dhahran and the central inland areas of Saudi Arabia) were identified for several of the selected time periods. Further study using GIS techniques, coupled with epidemiological methods, to examine spatial and temporal patterns with larger sample sizes of GW veterans is warranted to ascertain whether the observed spatial patterns can be confirmed.
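For reference, the local G*_i hot-spot statistic used alongside local Moran's I has a compact closed form. A minimal sketch following the standard Getis-Ord z-score formula; the toy weights and values are invented, not the Devens cohort data:

```python
import numpy as np

def getis_ord_gstar(x, W):
    """Getis-Ord G*_i statistics, returned as z-scores.

    W must include self-neighbors (nonzero diagonal), per the G*
    convention. Large positive values mark hot spots; large
    negative values mark cold spots.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar, s = x.mean(), x.std()
    wsum = W.sum(axis=1)             # sum of weights per location
    w2sum = (W ** 2).sum(axis=1)     # sum of squared weights
    num = W @ x - xbar * wsum
    den = s * np.sqrt((n * w2sum - wsum ** 2) / (n - 1))
    return num / den

# Toy example: 5 locations on a line, binary weights with self-links.
n = 5
W = np.eye(n)
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
print(getis_ord_gstar([1.0, 1.1, 0.9, 4.0, 4.2], W))  # hot spot at the end
```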

6.
Geographic Information System (GIS) software is constrained, to a greater or lesser extent, by a static world view that is not well-suited to the representation of time (Goodchild 2000). Space Time Intelligence System (STIS) software holds the promise of relaxing some of the technological constraints of spatial-only GIS, making possible visualization approaches and analysis methods that are appropriate for temporally dynamic geospatial data. This special issue of the Journal of Geographical Systems describes some recent advances in STIS technology and methods, with an emphasis on applications in public health and spatial epidemiology. The STIS expert workshops were funded in part by grants R01CA092669 and R01CA096002 from the National Cancer Institute, and by grants R43-ES010220 and R44-ES010220 from the National Institute of Environmental Health Sciences. Gillian AvRuskin provided cheerful editorial assistance. We thank the participants at the workshops for providing invaluable expertise and critical insights.

7.
Present methodological research on geographically weighted regression (GWR) focuses primarily on extensions of the basic GWR model, while ignoring well-established diagnostic tests commonly used in standard global regression analysis. This paper investigates multicollinearity issues surrounding the local GWR coefficients at a single location and the overall correlation between GWR coefficients associated with two different exogenous variables. Results indicate that the local regression coefficients are potentially collinear even if the underlying exogenous variables in the data-generating process are uncorrelated. Based on these findings, applied GWR research should practice caution in substantively interpreting the spatial patterns of local GWR coefficients. An empirical disease-mapping example is used to motivate the GWR multicollinearity problem. Controlled experiments are performed to systematically explore coefficient dependency issues in GWR. These experiments specify global models that use eigenvectors from a spatial link matrix as exogenous variables. This study was supported by grant number 1 R01 CA95982-01, Geographic-Based Research in Cancer Control and Epidemiology, from the National Cancer Institute. The author thanks the anonymous reviewers and the editor for their helpful comments.
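A common way to expose this local collinearity problem is to inspect the condition number of the geographically weighted design matrix at each regression point. A minimal sketch under assumed choices (a Gaussian kernel, a fixed bandwidth, and the rough rule of thumb that condition numbers above ~30 signal trouble); this is not the paper's experimental design:

```python
import numpy as np

def gwr_local_condition_numbers(X, coords, bandwidth):
    """Condition number of the locally weighted design matrix at each
    regression point, a diagnostic for collinearity among local GWR
    coefficients. X is an (n, p) design matrix with an intercept
    column; coords is (n, 2)."""
    n = len(X)
    kappas = np.empty(n)
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)   # Gaussian kernel
        Xw = X * np.sqrt(w)[:, None]              # weighted design matrix
        sv = np.linalg.svd(Xw, compute_uv=False)
        kappas[i] = sv[0] / sv[-1]
    return kappas

# Even globally uncorrelated regressors can be locally collinear.
rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(50, 2))
x1, x2 = rng.normal(size=50), rng.normal(size=50)
X = np.column_stack([np.ones(50), x1, x2])
print(gwr_local_condition_numbers(X, coords, bandwidth=2.0).max())
```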

8.
The North American Datum of 1983: Project methodology and execution
A new adjustment of the geodetic control networks in North America has been completed, resulting in a new continental datum—the North American Datum of 1983 (NAD 83). The establishment of NAD 83 was the result of an international project involving the National Geodetic Survey of the United States, the Geodetic Survey of Canada, and the Danish Geodetic Institute (responsible for surveying in Greenland). The geodetic data in Mexico and Central America were collected by the Inter American Geodetic Survey and validated by the Defense Mapping Agency Hydrographic/Topographic Center. The fundamental task of NAD 83 was a simultaneous least squares adjustment involving 266,436 stations in the United States, Canada, Mexico, and Central America. The networks in Greenland, Hawaii, and the Caribbean islands were connected to the datum through Doppler satellite and Very Long Baseline Interferometry (VLBI) observations. The computations were performed with respect to the ellipsoid of the Geodetic Reference System of 1980. The ellipsoid is positioned in such a way as to be geocentric, and its axes are oriented by the Bureau International de l'Heure Terrestrial System of 1984. The mathematical model for the NAD readjustment was the height-controlled three-dimensional system. The least squares adjustment involved 1,785,772 observations and 928,735 unknowns. The formation and solution of the normal equations were carried out according to the Helmert block method. [Authors' note: This article is a condensation of the final report of the NAD 83 project. The full report (Schwarz, 1989) contains a more complete discussion of all the topics.]
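At its core, the simultaneous adjustment forms and solves weighted least squares normal equations; NAD 83 did this for 1.8 million observations via Helmert blocking, but the algebra is the same at toy scale. A minimal sketch on an invented three-station leveling-style network (the observations and weights are illustrative only, not NAD 83 data):

```python
import numpy as np

# Toy network: three unknown heights h1..h3 above a fixed benchmark,
# five observed height differences (meters).
A = np.array([[ 1,  0,  0],   # h1 - benchmark
              [-1,  1,  0],   # h2 - h1
              [ 0, -1,  1],   # h3 - h2
              [ 0,  0,  1],   # h3 - benchmark
              [ 0,  1,  0]],  # h2 - benchmark
             dtype=float)
l = np.array([10.01, 5.02, -2.00, 13.05, 15.00])  # observations
P = np.diag([1.0, 1.0, 1.0, 0.5, 0.5])            # observation weights

N = A.T @ P @ A               # normal-equation matrix
t = A.T @ P @ l
x = np.linalg.solve(N, t)     # adjusted parameters
v = A @ x - l                 # residuals
print("adjusted heights:", x.round(3))
print("residuals:", v.round(3))
```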

9.
Space-time data are becoming more abundant as time goes by, and hands-on interest in them is becoming more prevalent. These data have a very sensitive ordering in space and time, one that the simplest of recording errors can scramble. These data are also complex, containing both spatial and temporal autocorrelation coupled with their interaction. One goal of many researchers is to disentangle and account for these autocorrelation components in a parsimonious way. This article presents three competing model specifications to achieve this end. In addition, it outlines seven best practices for vetting space-time datasets. The article cites a publicly available corrupt rabies dataset (containing at least errors of omission) to illustrate how a large volume of potentially valuable data can be rendered meaningless. In addition, it exemplifies postulated contentions about the United States National Cancer Institute Surveillance, Epidemiology, and End Results Program's 1969-2018 population-by-county dataset, a collection of population counts held in high esteem. One major empirical finding is that this particular dataset exhibits traits that may merit remedial revision. A key conceptual finding is a suggested set of best practices for space-time data proofreading. These two findings contribute to an ultimate goal of a large collection of certified open-access space-time datasets supporting repeatable and replicable scientific analyses.
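The abstract does not enumerate the seven best practices, but their spirit is mechanical checks on ordering, completeness, and plausibility. A hypothetical pandas sketch of three such checks (the column names, thresholds, and toy data are invented for illustration, not the article's protocol):

```python
import pandas as pd

def vet_space_time(df, id_col="fips", time_col="year", val_col="pop"):
    """Three illustrative space-time proofreading checks."""
    report = {}
    # 1. Duplicate (place, time) keys scramble the ordering.
    report["duplicate_keys"] = int(df.duplicated([id_col, time_col]).sum())
    # 2. Gaps in each place's time series (errors of omission).
    spans = df.groupby(id_col)[time_col].agg(["min", "max", "count"])
    report["places_with_gaps"] = int(
        (spans["max"] - spans["min"] + 1 != spans["count"]).sum())
    # 3. Implausible year-over-year jumps, e.g. sudden drops to zero.
    df = df.sort_values([id_col, time_col])
    pct = df.groupby(id_col)[val_col].pct_change().abs()
    report["jumps_gt_50pct"] = int((pct > 0.5).sum())
    return report

df = pd.DataFrame({
    "fips": ["001"] * 4 + ["003"] * 3,
    "year": [1969, 1970, 1972, 1973, 1969, 1970, 1971],
    "pop":  [1000, 1010, 1020, 0, 500, 505, 510],
})
print(vet_space_time(df))
# {'duplicate_keys': 0, 'places_with_gaps': 1, 'jumps_gt_50pct': 1}
```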

10.
This article introduces a software package named GeoSurveillance that combines spatial statistical techniques and GIS routines to perform tests for the detection and monitoring of spatial clustering. GeoSurveillance provides both retrospective and prospective tests. While retrospective tests are applied to spatial data collected for a particular point in time, prospective tests attempt to incorporate the dynamic nature of spatial patterns via analyzing time-series data to detect emergent clusters as quickly as possible. This article will outline the structure of GeoSurveillance as well as describe the statistical cluster detection methods implemented in the software. It concludes with an illustration of the use of the software to analyze the spatial pattern of low birth weights in Los Angeles County, California.

11.
Hydrographic networks form an important data foundation for cartographic base mapping and for hydrologic analysis. Drainage density patterns for these networks can be derived to characterize local landscape, bedrock and climate conditions, and further inform hydrological and geomorphological analysis by indicating areas where too few headwater channels are represented. Natural drainage density patterns are not consistently available in existing hydrographic databases for the United States because compilation and capture criteria historically varied, along with climate, during the period of data collection over the various terrain types throughout the country. This paper demonstrates an automated workflow that is being tested in a high-performance computing environment by the U.S. Geological Survey (USGS) to map natural drainage density patterns at the 1:24,000-scale (24K) for the conterminous United States. Hydrographic network drainage patterns may be extracted from elevation data to guide corrections for existing hydrographic network data. The paper describes three stages in this workflow: data pre-processing, natural channel extraction, and generation of drainage density patterns from extracted channels. The workflow is implemented in parallel fashion by simultaneously executing procedures on multiple subbasin watersheds within the U.S. National Hydrography Dataset (NHD). Pre-processing defines parameters needed for the extraction process. Extraction proceeds in standard fashion: filling sinks, then developing flow-direction and weighted flow-accumulation rasters, as sketched below. Drainage channels with assigned Strahler stream order are extracted within a subbasin and simplified. Drainage density patterns are then estimated at 100-m resolution and subsequently smoothed with a low-pass filter. The extraction process is found to be of better quality in higher-slope terrain. Concurrent processing in the high-performance computing environment is shown to facilitate and refine the choice of drainage density extraction parameters and to improve extraction procedures more readily than conventional processing.
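The extraction stage rests on standard flow-direction/flow-accumulation logic. A minimal D8 sketch on a tiny invented DEM (no sink filling, parallelism, or Strahler ordering; just the core accumulation step):

```python
import numpy as np

def d8_flow_accumulation(dem):
    """Minimal D8 flow accumulation. Assumes a depression-free DEM;
    a production workflow would fill sinks first, as the paper's
    pre-processing stage does. Each cell drains to its steepest-
    descent neighbor; the result counts cells draining through each
    cell (including itself)."""
    rows, cols = dem.shape
    acc = np.ones_like(dem, dtype=float)  # each cell contributes itself
    # Visit cells from highest to lowest so upstream donors finish first.
    for idx in np.argsort(dem, axis=None)[::-1]:
        r, c = np.unravel_index(idx, dem.shape)
        best_drop, target = 0.0, None
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols:
                    drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                    if drop > best_drop:
                        best_drop, target = drop, (rr, cc)
        if target is not None:
            acc[target] += acc[r, c]
    return acc

dem = np.array([[9., 8., 7.],
                [8., 6., 5.],
                [7., 5., 3.]])
print(d8_flow_accumulation(dem))  # outlet cell (2, 2) accumulates all 9
```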

12.
In this study, we develop a new method using self-organizing maps (SOMs) for the selection step in hydrographic model generalization. The most suitable attributes of the stream objects are used as input variables to the SOM. The attributes were weighted using Pearson's chi-square independence test. We used the Radical Law to determine how many features should be selected, and an incremental approach was developed to determine which clusters should be selected from the SOM. Two drainage patterns (dendritic and modified basic) were obtained from the National Hydrography Dataset of the United States Geological Survey at 1:24,000-scale (high resolution) and used to derive stream networks at 1:100,000-scale (medium resolution). The 1:100,000-scale stream networks derived with the proposed approach are similar to those in the original maps in both quantitative and visual respects. Stream density and pattern were maintained in each subunit, and continuous and semantically correct networks were obtained.
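The Radical Law (Töpfer's law) referenced above reduces to one formula: n_f = n_a · sqrt(M_a / M_f), where n_a is the source feature count and M_a, M_f are the source and target scale denominators. A worked sketch for the 1:24,000 to 1:100,000 derivation in this study (the source count of 1,200 streams is invented):

```python
import math

def radical_law(n_source, scale_source, scale_target):
    """Töpfer's Radical Law: features to retain when generalizing to
    a smaller scale. Scales are denominators (1:24,000 -> 24000)."""
    return round(n_source * math.sqrt(scale_source / scale_target))

# From 1:24,000 to 1:100,000, keep roughly half the streams:
print(radical_law(1200, 24000, 100000))  # -> 588
```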

13.
Multi-representation databases (MRDBs) are used in several Geographical Information System applications for different purposes. MRDBs are mainly obtained through model and cartographic generalization. Model generalization is essentially achieved with the selection/elimination process, in which a decision must be made to include or exclude each object at the target level. In this study, support vector machines (SVMs) were used, for the first time, for the selection/elimination process in stream network generalization. Within this context, the attributes to be used as input data in the SVM method were determined and weighted according to the associations determined in a chi-squared independence test. 1:100,000-scale (medium resolution) stream networks were derived from two 1:24,000-scale (high resolution) stream networks with different patterns in the United States Geological Survey National Hydrography Dataset. The derived stream networks were quite similar to the original 1:100,000-scale stream networks in both qualitative and visual aspects.
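A minimal sketch of the selection/elimination step as binary SVM classification with scikit-learn; the attribute names, toy values, and labels below are invented stand-ins for the chi-squared-weighted attributes the study actually uses:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical attribute table: one row per stream segment.
X = np.array([
    # [strahler_order, length_km, upstream_count]  (illustrative)
    [1, 0.4, 0], [1, 0.7, 0], [2, 2.1, 3], [3, 5.8, 9],
    [1, 0.3, 0], [2, 1.5, 2], [4, 9.2, 21], [3, 4.4, 7],
])
y = np.array([0, 0, 1, 1, 0, 1, 1, 1])  # 1 = retain at 1:100,000

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)
print(clf.predict([[2, 1.8, 2]]))  # classify an unseen segment
```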

14.
This paper describes a workflow for automating the extraction of elevation-derived stream lines using open source tools with parallel computing support and testing the effectiveness of procedures in various terrain conditions within the conterminous United States. Drainage networks are extracted from the US Geological Survey 1/3 arc-second 3D Elevation Program elevation data having a nominal cell size of 10 m. This research demonstrates the utility of open source tools with parallel computing support for extracting connected drainage network patterns and handling depressions in 30 subbasins distributed across humid, dry, and transitional climate regions and in terrain conditions exhibiting a range of slopes. Special attention is given to low-slope terrain, where network connectivity is preserved by generating synthetic stream channels through lake and waterbody polygons. Conflation analysis compares the extracted streams with a 1:24,000-scale National Hydrography Dataset flowline network and shows that similarities are greatest for second- and higher-order tributaries.

15.
Density-based clustering algorithms such as DBSCAN have been widely used for spatial knowledge discovery as they offer several key advantages compared with other clustering algorithms. They can discover clusters with arbitrary shapes, are robust to noise, and do not require prior knowledge (or estimation) of the number of clusters. The idea of using a scan circle centered at each point with a search radius Eps to find at least MinPts points as a criterion for deriving local density is easily understandable and sufficient for exploring isotropic spatial point patterns. However, there are many cases that cannot be adequately captured this way, particularly if they involve linear features or shapes with a continuously changing density, such as a spiral. In such cases, DBSCAN tends to either create an increasing number of small clusters or add noise points into large clusters. Therefore, in this article, we propose a novel anisotropic density-based clustering algorithm (ADCN). To motivate our work, we introduce synthetic and real-world cases that cannot be handled sufficiently by DBSCAN (or OPTICS). We then present our clustering algorithm and test it with a wide range of cases. We demonstrate that our algorithm can perform equally as well as DBSCAN in cases that do not benefit explicitly from an anisotropic perspective, and that it outperforms DBSCAN in cases that do. Finally, we show that our approach has the same time complexity as DBSCAN and OPTICS, namely O(n log n) when using a spatial index and O(n²) otherwise. We provide an implementation and test the runtime over multiple cases.
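ADCN itself is not in standard libraries, but the Eps/MinPts criterion it generalizes is easy to demonstrate with scikit-learn's DBSCAN. A small sketch of the isotropic baseline case on synthetic data (blobs plus uniform noise):

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Two dense blobs plus background noise -- the isotropic case that
# the scan-circle criterion handles well; anisotropic shapes such
# as spirals are where ADCN is claimed to do better.
blob1 = rng.normal([0, 0], 0.3, size=(100, 2))
blob2 = rng.normal([5, 5], 0.3, size=(100, 2))
noise = rng.uniform(-2, 7, size=(30, 2))
pts = np.vstack([blob1, blob2, noise])

labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(pts)
print("clusters:", len(set(labels) - {-1}),
      "noise points:", int((labels == -1).sum()))
```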

16.
Disaster resilience is a major societal challenge. Cartography and GIS can contribute substantially to this research area. This paper describes a cyberinfrastructure for disaster resilience assessment and visualization for all counties in the United States. Aided by Application Programming Interface-enabled web mapping and component-oriented web tools, the cyberinfrastructure is designed to better serve US communities with comprehensive resilience information. The resilience assessment tool is based on the resilience inference measurement model. This web application delivers the resilience assessment tool to users through applets. It provides an interactive tool for users to visualize historical natural hazard exposure and damage in the areas of their interest, compute the resilience indices, and produce on-the-fly maps and statistics. The app could serve as a useful tool for decision makers. This app placed among the top 10 runners-up in the Environmental Systems Research Institute (ESRI) Climate Resilience App Challenge 2014 and the top 5 in the scientific section of the ESRI Global Disaster App Challenge 2014.

17.
Spatial data quality is a paramount concern in all GIS applications. Existing spatial data accuracy standards, including the National Standard for Spatial Data Accuracy (NSSDA) used in the United States, commonly assume the positional error of spatial data is normally distributed. This research has characterized the distribution of the positional error in four types of spatial data: GPS locations, street geocoding, TIGER roads, and LIDAR elevation data. The positional error in GPS locations can be approximated with a Rayleigh distribution, the positional error in street geocoding and TIGER roads can be approximated with a log-normal distribution, and the positional error in LIDAR elevation data can be approximated with a normal distribution of the original vertical error values after removal of a small number of outliers. For all four data types considered, however, these solutions are only approximations, and some evidence of non-stationary behavior resulting in lack of normality was observed in all four datasets. Monte-Carlo simulation of the robustness of accuracy statistics revealed that the conventional 100% Root Mean Square Error (RMSE) statistic is not reliable for non-normal distributions. Some degree of data trimming is recommended through the use of 90% and 95% RMSE statistics. Percentiles, however, are not very robust as single positional accuracy statistics. The non-normal distribution of positional errors in spatial data has implications for spatial data accuracy standards and error propagation modeling. Specific recommendations are formulated for revisions of the NSSDA.
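The recommended trimmed RMSE statistics are straightforward to compute. A minimal Monte-Carlo sketch of why the 100% RMSE is fragile under non-normal error with a few blunders (the Rayleigh scale and blunder magnitudes are invented, not the article's datasets):

```python
import numpy as np

rng = np.random.default_rng(42)

def rmse(err, keep=1.0):
    """RMSE of positional errors; keep < 1 trims the largest errors
    first (e.g. keep=0.95 gives a 95% RMSE)."""
    e = np.sort(np.abs(np.asarray(err, dtype=float)))
    e = e[: int(round(keep * len(e)))]
    return np.sqrt(np.mean(e ** 2))

# Rayleigh-distributed horizontal error, as found for GPS locations,
# contaminated with a handful of gross outliers.
err = rng.rayleigh(scale=2.0, size=1000)
err[:5] = 80.0  # five blunders

print("100% RMSE:", round(rmse(err), 2))        # inflated by blunders
print(" 95% RMSE:", round(rmse(err, 0.95), 2))  # far more stable
```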

18.
This paper reports on generalization and data modeling to create reduced-scale versions of the National Hydrography Dataset (NHD) for dissemination through The National Map, the primary data delivery portal for USGS. Our approach distinguishes local differences in physiographic factors, to demonstrate that knowledge about varying terrain (mountainous, hilly or flat) and varying climate (dry or humid) can support decisions about algorithms, parameters, and processing sequences to create generalized, smaller-scale data versions that preserve distinct hydrographic patterns in these regions. We work with multiple subbasins of the NHD that provide a range of terrain and climate characteristics. Specifically tailored generalization sequences are used to create simplified versions of the high-resolution data, which was compiled for 1:24,000-scale mapping. Results are evaluated cartographically and metrically against a medium-resolution benchmark version compiled for 1:100,000-scale mapping, developing coefficients of linear and areal correspondence.
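One plausible reading of the coefficient of areal correspondence used in this evaluation: buffer both line networks and take the ratio of intersection area to union area. A hedged sketch with shapely (the buffer width and stand-in geometries are illustrative; the paper's exact evaluation protocol may differ):

```python
from shapely.geometry import LineString
from shapely.ops import unary_union

def areal_correspondence(lines_a, lines_b, buffer_m=50):
    """Intersection-over-union of buffered line networks: 1.0 means
    perfect areal agreement, 0.0 means no overlap."""
    a = unary_union([ln.buffer(buffer_m) for ln in lines_a])
    b = unary_union([ln.buffer(buffer_m) for ln in lines_b])
    return a.intersection(b).area / a.union(b).area

# Stand-ins for a generalized network and the 1:100,000 benchmark.
generalized = [LineString([(0, 0), (100, 100)])]
benchmark = [LineString([(0, 5), (100, 105)])]
print(round(areal_correspondence(generalized, benchmark), 3))
```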

19.
As this article is published, the U.S. Census Bureau is completing work for the twenty-third decennial census of the United States. Once again, the MAF/TIGER system served as the geospatial infrastructure supporting numerous census operations and data collection, tabulation, and dissemination activities. From data collection to data dissemination, we trace the recent activities of the 2010 Decennial Census of the United States to illustrate the role maps and geospatial data play in an increasing variety of public- and private-sector activities across the nation. To ensure a successful 2010 Census, millions of maps had to be created. This article gives an overview of the automated mapping system designed to create these maps, including a discussion of the associated software and the variety of map types that were developed. Finally, future map production and geospatial activities at the Census Bureau are discussed.

20.
To better integrate statistical and geospatial information, the United Nations established the Expert Group on the Integration of Statistical and Geospatial Information, with the goal of advancing implementation of the Global Statistical Geospatial Framework (GSGF). The GSGF provides countries with common standards and methods to facilitate the integration of statistical and geospatial information, improving the availability of spatially enabled statistical data so that statistics can better serve decision-making. This paper introduces the five principles that constitute the framework and relevant international experience, and analyzes the basic conditions for, favorable factors in, and difficulties of implementing the framework in China. It recommends that the National Bureau of Statistics follow the GSGF principles in future censuses and statistical surveys to strengthen the integration of statistical and geographic information, and it proposes concrete measures to that end.
