We propose a deep-learning framework based on generative adversarial networks for ground-roll attenuation in land seismic data. To account for the non-stationary properties of seismic data and the associated ground-roll noise, we create training labels using the local time–frequency transform and regularized non-stationary regression. The basic idea is to train the network on a few shot gathers so that it learns the weights associated with noise attenuation for those gathers. We then apply the learned weights to shot gathers that are not part of the training input to obtain the desired signal. This approach gives results similar to the local time–frequency transform and regularized non-stationary regression but at a significantly reduced computational cost. It also automates ground-roll attenuation, requiring no manual parameter picking for each shot gather beyond the training data. Tests on field-data examples verify the effectiveness of the proposed approach.
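The train-then-apply workflow described above can be sketched as follows. As a stand-in for the paper's GAN, a simple least-squares filter is learned from a few labelled gathers and applied to an unseen one; the synthetic data, the low-frequency "ground roll" model, and the linear denoiser are all illustrative assumptions, not the authors' network:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_gather(n_traces=16, n_samples=128):
    """Synthetic 'noisy' shot gather plus its 'clean' label (illustrative)."""
    t = np.linspace(0, 1, n_samples)
    clean = np.tile(np.sin(2 * np.pi * 30 * t), (n_traces, 1))
    amp = rng.uniform(0.3, 0.7, size=(n_traces, 1))   # varying noise strength
    noise = amp * np.sin(2 * np.pi * 5 * t)           # low-frequency "ground roll"
    return clean + noise, clean

# Training: learn filter weights from a few labelled gathers.
noisy_list, clean_list = zip(*(make_gather() for _ in range(3)))
X = np.vstack(noisy_list)                     # (traces * gathers, samples)
Y = np.vstack(clean_list)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)     # weights mapping noisy -> clean

# Inference: apply the learned weights to an unseen gather.
noisy_test, clean_test = make_gather()
denoised = noisy_test @ W
err = np.linalg.norm(denoised - clean_test) / np.linalg.norm(clean_test)
print(f"relative error on held-out gather: {err:.3e}")
```

The point of the sketch is the division of labour in the abstract: weights are fitted once on labelled gathers, then reused on gathers outside the training set with no per-gather parameter picking.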
A model integrating geo-information and the self-organizing map (SOM) was established to explore a database of soil environmental surveys. The dataset of 5 heavy metals (As, Cd, Cr, Hg, and Pb) was built by regular grid sampling in Hechi, Guangxi Zhuang Autonomous Region, in southern China. Auxiliary datasets were collected throughout the study area to help interpret the potential causes of pollution. The main findings are as follows: (1) Soil samples of the 5 elements exhibited strong variation and high skewness, and high pollution risk existed in the study area, especially for Hg and Cd. (2) As and Pb had similar topological distribution patterns, meaning they behaved similarly in the soil environment, whereas Cr behaved differently from the other 4 elements. (3) From the U-matrix of the SOM networks, 3 levels of SEQ were identified, and 11 high-risk areas of soil heavy-metal contamination were found throughout the study area, mostly near rivers, factories, and ore zones. (4) The contamination index (CI) followed the trend construction land (1.353) > forestland (1.267) > cropland (1.175) > grassland (1.056), which suggests that decision makers should focus on soil pollution surrounding industrial and mining enterprises and farmland.
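The SOM step can be illustrated with a minimal sketch. The 5-dimensional samples below are synthetic stand-ins for the Hechi measurements, and the map size, learning-rate schedule, and neighbourhood radius are arbitrary assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic "pollution regimes" in 5 dimensions (As, Cd, Cr, Hg, Pb).
low = rng.normal(0.2, 0.05, size=(50, 5))
high = rng.normal(0.8, 0.05, size=(50, 5))
data = np.vstack([low, high])

grid = 4                                   # a small 4 x 4 map
weights = rng.random((grid * grid, 5))
coords = np.array([(i, j) for i in range(grid) for j in range(grid)], float)

for epoch in range(200):
    lr = 0.5 * (1 - epoch / 200)           # decaying learning rate
    sigma = 2.0 * (1 - epoch / 200) + 0.5  # decaying neighbourhood radius
    for x in rng.permutation(data):
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2 * sigma ** 2))                 # neighbourhood kernel
        weights += lr * h[:, None] * (x - weights)

# Distinct geochemical regimes should map to distinct regions of the grid,
# which is what the U-matrix inspection in the study exploits.
bmu_low = np.argmin(((weights - low.mean(0)) ** 2).sum(axis=1))
bmu_high = np.argmin(((weights - high.mean(0)) ** 2).sum(axis=1))
print(bmu_low, bmu_high)
```

In practice one would also compute the U-matrix (average distance between neighbouring units' weight vectors) to delineate cluster boundaries, as the abstract describes.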
We introduce a concept of generalized blending and deblending, develop its models, and accordingly establish a method of deblended-data reconstruction using these models. The generalized models can handle real situations by including random encoding in the generalized operators in both the space and time domains, at both the source and receiver sides. We consider an iterative optimization scheme using a closed-loop approach with the generalized blending and deblending models, in which the former serves as the forward modelling and the latter as the inverse modelling in the closed loop. We applied our method to existing real data acquired in Abu Dhabi. The results show that our method succeeded in fully reconstructing deblended data even from fully generalized, and thus quite complicated, blended data. We discuss the effect of the complexity of the blending properties on deblending performance. In addition, we discuss the applicability to time-lapse seismic monitoring, as the method ensures high repeatability of the surveys. In conclusion, blended data can be acquired and deblended data reconstructed without serious problems, while retaining the benefits of blended acquisition.
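The core idea of reconstructing deblended data from time-encoded blended records can be illustrated with a toy linear inversion. This is not the authors' closed-loop scheme: the record lengths, the fixed firing delays (chosen so the delay differences are distinct and the system is invertible), and the use of an explicit least-squares solve are all made-up assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 128                       # samples per source record
n_src, n_exp = 2, 3           # sources per blend, blended experiments
max_shift = 40
# Firing-time delays per experiment and source (fixed, distinct differences).
shifts = np.array([[0, 7], [13, 2], [5, 31]])

truth = rng.normal(size=(n_src, n))   # the unknown deblended records

# Explicit blending matrix A: each blended sample is a sum of time-shifted
# source samples ("random encoding" of firing times in the time domain).
m = n + max_shift                      # blended record length
A = np.zeros((n_exp * m, n_src * n))
for e in range(n_exp):
    for s in range(n_src):
        for t in range(n):
            A[e * m + shifts[e, s] + t, s * n + t] += 1.0

blended = A @ truth.ravel()            # forward modelling: blend
est, *_ = np.linalg.lstsq(A, blended, rcond=None)  # inverse: deblend
err = np.linalg.norm(est - truth.ravel()) / np.linalg.norm(truth.ravel())
print(f"relative reconstruction error: {err:.2e}")
```

Because the stacked blending operator has full column rank here, least squares recovers the deblended records exactly; real deblending instead relies on iterative schemes with coherence or sparsity constraints, as in the closed-loop approach the abstract describes.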
With recent advances in remote sensing, location-based services, and other related technologies, the production of geospatial information has increased exponentially in recent decades. To facilitate discovery of and efficient access to such information, spatial data infrastructures were promoted and standardized, with the recognition that metadata are essential for describing data and services. Standardization bodies such as the International Organization for Standardization have defined well-known metadata models such as ISO 19115. However, current metadata assets exhibit heterogeneous quality levels because they are created by different producers with different perspectives. To address quality-related concerns, several initiatives have attempted to define a common framework and test the suitability of metadata through automatic controls. Nevertheless, these controls focus on interoperability, testing only the format of the metadata and a set of controlled elements. In this paper, we propose a methodology for testing the quality of metadata that considers aspects beyond interoperability. The proposal adapts ISO 19157 to the metadata case and has been applied to a corpus of the Spanish Spatial Data Infrastructure. The results demonstrate that our quality check helps identify different types of errors for all metadata elements and can be almost completely automated to enhance the significance of metadata.
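A sketch of how such quality controls might be automated beyond format validation is shown below. The element names and rules are hypothetical illustrations of ISO 19157-style completeness, domain-consistency, and logical-consistency tests, not the paper's actual test suite:

```python
import re

# Hypothetical quality rules for a few metadata elements.
RULES = {
    # Completeness/omission: the element must be present and non-empty.
    "title": lambda v: bool(v and v.strip()),
    # Domain consistency: dates must follow ISO 8601 (YYYY-MM-DD).
    "date": lambda v: bool(v and re.fullmatch(r"\d{4}-\d{2}-\d{2}", v)),
    # Logical consistency: a WGS84 bounding box must be ordered and in range.
    "bbox": lambda v: (v is not None and len(v) == 4
                       and -180 <= v[0] <= v[2] <= 180
                       and -90 <= v[1] <= v[3] <= 90),
}

def check_record(record):
    """Return the metadata elements of one record that fail their rule."""
    return [name for name, rule in RULES.items()
            if not rule(record.get(name))]

records = [
    {"title": "Land cover map", "date": "2015-06-01",
     "bbox": [-9.5, 36.0, 3.3, 43.8]},
    {"title": "", "date": "June 2015",
     "bbox": [3.3, 36.0, -9.5, 43.8]},       # swapped longitudes
]
report = [check_record(r) for r in records]
print(report)  # per-record lists of failing elements
```

Running such rule sets over a whole metadata corpus yields per-element error statistics, which is the kind of largely automated, element-level reporting the abstract claims.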