81.
The continental shelf off central Chile is subject to strong seasonal coastal upwelling and has been recognized as an important outgassing area for, amongst others, N2O, an important greenhouse gas. Several physical and biogeochemical variables, including N2O, were measured in the water column from August 2002 to January 2007 at a time series station in order to characterize its temporal variability and elucidate the physical and biogeochemical mechanisms affecting N2O levels. This 4-year time series of N2O levels reveals seasonal variability associated basically with hydrographic and oceanographic regimes (i.e., upwelling and non-upwelling). However, a noteworthy temporal evolution of both the vertical distribution and N2O levels was observed repeatedly throughout the entire study period, allowing us to distinguish three stages: winter/early spring (Stage I), mid-spring/mid-summer (Stage II), and late summer/early autumn (Stage III). Stage I presents low N2O, the lowest surface saturation registered (from 64% saturation), in a period of high O2 and a water column homogenized by strong wind; this distribution is explained by physical and thermodynamic mechanisms. Stage II, with increasing N2O concentrations, coincides with the appearance of upwelling-favourable wind stress and a strong influence of oxygen-poor, nutrient-rich equatorial subsurface waters (ESSW). The N2O build-up creates a “hotspot” (up to 2426% N2O saturation) and enhanced concentrations of (up to 3.97 μM) and (up to 4.6 μM) at the oxycline (4-28 μM) (∼20-40 m depth). Although the dominant N2O sources could not be determined, denitrification (mainly below the oxycline) appears to be the dominant process in N2O accumulation. Stage III, with diminishing N2O concentrations from mid-summer to early autumn, was accompanied by low N/P ratios. During this stage, strong bottom N2O consumption (from 40% saturation) was suggested to be driven mainly by benthic denitrification. Consistent with the evolution of N2O in the water column over time, the estimated air-sea N2O fluxes were low or negative in winter (−9.8 to 20 μmol m−2 d−1, Stage I) and higher in spring and summer (up to 195 μmol m−2 d−1, Stage II), after which they declined (Stage III). In spite of the occurrence of ESSW and upwelling events throughout Stages II and III, the N2O behaviour appears to be a response to the biogeochemical evolution associated with biological productivity and the concomitant O2 levels in the water column and even in the sediments. The results presented herein confirm that the study area is an important source of N2O to the atmosphere, with a mean annual N2O flux of 30.2 μmol m−2 d−1; however, the interannual variability could not yet be properly characterized.
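For reference, air-sea N2O fluxes such as those quoted above are usually estimated from a bulk formula, F = k_w (C_water − C_equilibrium). The sketch below is a minimal illustration of that calculation; the quadratic wind-speed transfer velocity and the Schmidt number value are assumptions of this example, not necessarily the parameterization used in the study.

```python
# Minimal sketch of an air-sea gas flux estimate, F = k_w * (C_water - C_equilibrium).
# The Wanninkhof-style transfer velocity (quadratic in wind speed, Schmidt-number scaled)
# and Sc = 1700 are illustrative assumptions, not the study's parameterization.

def n2o_air_sea_flux(c_water_nM, c_equilibrium_nM, wind_speed_ms, schmidt_number=1700.0):
    """Return flux in umol m^-2 d^-1 (positive values indicate outgassing)."""
    k_cm_per_h = 0.31 * wind_speed_ms**2 * (660.0 / schmidt_number) ** 0.5
    k_m_per_day = k_cm_per_h * 24.0 / 100.0            # convert cm/h -> m/d
    delta_c_umol_m3 = c_water_nM - c_equilibrium_nM    # nmol/L is numerically umol/m^3
    return k_m_per_day * delta_c_umol_m3

# Example: supersaturated surface water during an upwelling event
print(n2o_air_sea_flux(c_water_nM=40.0, c_equilibrium_nM=8.0, wind_speed_ms=7.0))
```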
82.
Full waveform inversion (FWI) is one of the most challenging procedures for obtaining quantitative information about the subsurface. For elastic inversions, in which both compressional and shear velocities have to be inverted, the algorithmic issue also becomes a computational challenge due to the high cost of modelling elastic rather than acoustic waves. This shortcoming has been moderately mitigated by using high-performance computing to accelerate 3D elastic FWI kernels. Nevertheless, there is room in FWI workflows for large speedups at the cost of proper grid pre-processing and data decimation techniques. In the present work, we show how, by making full use of frequency-adapted grids, composite shot lists and a novel dynamic offset control strategy, we can reduce the compute time by several orders of magnitude while improving the convergence of the method in the studied cases, regardless of the forward and adjoint compute kernels used.
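As a rough illustration of two of the decimation ideas named in this abstract (frequency-adapted grids and reduced shot lists), the sketch below picks a grid spacing per inversion frequency band and randomly decimates the shot list per iteration. The points-per-wavelength rule, the 25% shot fraction, and the velocity and frequency values are illustrative assumptions, not the authors' workflow.

```python
import numpy as np

def frequency_adapted_spacing(v_min, f_max, points_per_wavelength=5):
    """Grid spacing that keeps a fixed number of points per minimum wavelength."""
    return v_min / (f_max * points_per_wavelength)

def decimated_shot_list(all_shots, fraction, rng):
    """Randomly keep a fraction of the shots for the current FWI iteration."""
    n_keep = max(1, int(round(fraction * len(all_shots))))
    return sorted(rng.choice(all_shots, size=n_keep, replace=False))

rng = np.random.default_rng(0)
for f_max in (3.0, 6.0, 12.0):                       # multiscale frequency bands (Hz)
    dx = frequency_adapted_spacing(v_min=1500.0, f_max=f_max)
    shots = decimated_shot_list(np.arange(400), fraction=0.25, rng=rng)
    print(f"f_max={f_max:5.1f} Hz  dx={dx:6.1f} m  shots used={len(shots)}")
```

Coarser grids at the low frequencies and fewer shots per iteration are where the bulk of the cost reduction comes from in such workflows; the dynamic offset control mentioned in the abstract is not reproduced here.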
83.
Seismic wavefield reconstruction is posed as an inversion problem where, from inadequate and incomplete data, we attempt to recover the data we would have acquired with a denser distribution of sources and receivers. A minimum weighted norm interpolation method is proposed to interpolate prestack volumes before wave-equation amplitude versus angle imaging. Synthetic and real data were used to investigate the effectiveness of our wavefield reconstruction scheme when preconditioning seismic data for wave-equation amplitude versus angle imaging.
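A minimal 1D sketch of the minimum weighted norm interpolation idea follows: the missing samples are recovered by finding Fourier coefficients of small weighted norm that reproduce the recorded samples. The spectral weights estimated from the decimated data and the plain conjugate-gradient solver are illustrative choices; the paper's actual operators, weights and multidimensional implementation are not specified in the abstract.

```python
import numpy as np

def mwni_1d(trace, mask, n_iter=80, eps=1e-3):
    """Recover missing samples of `trace` (mask == 0 marks dead traces)."""
    n = trace.size
    spec = np.abs(np.fft.fft(trace * mask)) + eps        # crude spectral weights from decimated data
    w = spec / spec.max()
    d = (trace * mask).astype(complex)

    forward = lambda m: mask * np.fft.ifft(w * m)        # Fourier-domain model -> sampled traces
    adjoint = lambda res: w * np.fft.fft(mask * res) / n # exact adjoint of `forward`

    # conjugate gradients on the normal equations A^H A m = A^H d
    m = np.zeros(n, dtype=complex)
    r = adjoint(d)
    p, rs_old = r.copy(), np.vdot(r, r).real
    for _ in range(n_iter):
        Ap = adjoint(forward(p))
        alpha = rs_old / (np.vdot(p, Ap).real + 1e-30)
        m += alpha * p
        r -= alpha * Ap
        rs_new = np.vdot(r, r).real
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return np.real(np.fft.ifft(w * m))

t = np.linspace(0.0, 1.0, 256, endpoint=False)
clean = np.sin(2 * np.pi * 12 * t) + 0.5 * np.sin(2 * np.pi * 30 * t)
mask = (np.random.default_rng(1).random(256) > 0.5).astype(float)   # roughly half the samples missing
recon = mwni_1d(clean, mask)
err = (recon - clean) * (1.0 - mask)
print("rms error on the missing samples:", np.sqrt(np.mean(err**2)))
```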
84.
Convolution of a minimum-phase wavelet with an all-pass wavelet provides a means of varying the phase of the minimum-phase wavelet without affecting its amplitude spectrum. This observation leads to a parametrization of a mixed-phase wavelet in terms of a minimum-phase wavelet and an all-pass operator. The Wiener–Levinson algorithm allows the minimum-phase wavelet to be estimated from the data. It is known that the fourth-order cumulant preserves the phase information of the wavelet, provided that the underlying reflectivity sequence is a non-Gaussian, independent and identically distributed process. This property is used to estimate the all-pass operator from the data that have been whitened by the deconvolution of the estimated minimum-phase wavelet. Wavelet estimation based on a cumulant-matching technique is dependent on the bandwidth-to-central-frequency ratio of the data. For the cumulants to be sensitive to the phase signatures, it is imperative that the ratio of bandwidth to central frequency is at least greater than one, and preferably close to two. Pre-whitening of the data with the estimated minimum-phase wavelet helps to increase the bandwidth, resulting in a more favourable bandwidth-to-central-frequency ratio. The proposed technique makes use of this property to estimate the all-pass wavelet from the prewhitened data. The paper also compares the results obtained from both prewhitened and non-whitened data. The results show that the use of prewhitened data leads to a significant improvement in the estimation of the mixed-phase wavelet when the data are severely band-limited. The proposed algorithm was further tested on real data, followed by a test in which a 90°-phase-rotated wavelet was introduced and then recovered; this test was successful.
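The core identity in this abstract, that an all-pass operator changes the phase of a wavelet without affecting its amplitude spectrum, can be checked numerically. In the sketch below, a simple decaying wavelet stands in for the Wiener–Levinson minimum-phase estimate and a constant phase rotation stands in for the all-pass operator; both are illustrative choices, not the paper's estimated wavelets.

```python
import numpy as np

def reference_wavelet(n=64, f0=0.1):
    """A simple decaying wavelet standing in for the minimum-phase estimate."""
    t = np.arange(n)
    return np.exp(-0.15 * t) * np.cos(2 * np.pi * f0 * t)

def constant_phase_rotation(x, degrees):
    """All-pass operation: rotate every frequency component by the same phase."""
    n = x.size
    X = np.fft.fft(x)
    sign = np.sign(np.fft.fftfreq(n))          # +1 for positive, -1 for negative frequencies
    if n % 2 == 0:
        sign[n // 2] = 0.0                     # leave the (real-valued) Nyquist bin untouched
    return np.real(np.fft.ifft(X * np.exp(-1j * np.deg2rad(degrees) * sign)))

w_ref = reference_wavelet()
w_mixed = constant_phase_rotation(w_ref, 90.0)  # 90-degree phase-rotated (mixed-phase) version

amp_ref = np.abs(np.fft.rfft(w_ref))
amp_mix = np.abs(np.fft.rfft(w_mixed))
print("max amplitude-spectrum difference:", np.max(np.abs(amp_ref - amp_mix)))
```

The printed difference is at machine-precision level, which is exactly the property the cumulant-matching step relies on: only the phase carries the remaining unknown.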
85.
Tensor algebra provides a robust framework for multi-dimensional seismic data processing. A low-rank tensor can represent a noise-free seismic data volume, and additive random noise will increase the rank of the tensor. Hence, tensor rank-reduction techniques can be used to filter random noise. Our filtering method adopts the Candecomp/Parafac decomposition to approximate an N-dimensional seismic data volume via the superposition of rank-one tensors. Similar to the singular value decomposition for matrices, a low-rank Candecomp/Parafac decomposition can capture the signal and exclude random noise in situations where a low-rank tensor can represent the ideal noise-free seismic volume. The alternating least squares method is adopted to compute the Candecomp/Parafac decomposition with a provided target rank. This method involves solving a series of highly over-determined linear least-squares subproblems. To improve the efficiency of the alternating least squares algorithm, we uniformly sample the equations of the linear least-squares subproblems at random, significantly reducing the size of each problem. The computational overhead is further reduced by avoiding unfolding and folding large dense tensors. We investigate the applicability of the randomized Candecomp/Parafac decomposition for incoherent noise attenuation via experiments conducted on a synthetic dataset and field data seismic volumes. We also compare the proposed algorithm (randomized Candecomp/Parafac decomposition) against multi-dimensional singular spectrum analysis and classical prediction filtering. We conclude that the proposed approach can achieve slightly better denoising performance, in terms of signal-to-noise ratio enhancement, than the traditional methods, at a lower computational cost.
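The sketch below illustrates, under stated assumptions, the main ingredients of this approach on a small synthetic 3-way volume: CP (Candecomp/Parafac) factors updated by alternating least squares, with each over-determined subproblem reduced by uniform random sampling of its equations. The rank, sample count and iteration settings are illustrative, and this is not the authors' implementation (which also avoids explicit unfoldings).

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product, shape (J*K, R)."""
    J, R = B.shape
    K = C.shape[0]
    return (B[:, None, :] * C[None, :, :]).reshape(J * K, R)

def randomized_cp_als(X, rank, n_iter=30, n_samples=500, seed=0):
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(n_iter):
        for mode in range(3):
            # unfold the tensor along `mode` and build the Khatri-Rao design matrix
            if mode == 0:
                M, rhs = khatri_rao(B, C), X.reshape(I, J * K)
            elif mode == 1:
                M, rhs = khatri_rao(A, C), np.moveaxis(X, 1, 0).reshape(J, I * K)
            else:
                M, rhs = khatri_rao(A, B), np.moveaxis(X, 2, 0).reshape(K, I * J)
            # uniformly sample a subset of the highly over-determined equations
            idx = rng.choice(M.shape[0], size=min(n_samples, M.shape[0]), replace=False)
            sol, *_ = np.linalg.lstsq(M[idx], rhs[:, idx].T, rcond=None)
            if mode == 0:
                A = sol.T
            elif mode == 1:
                B = sol.T
            else:
                C = sol.T
    # rebuild the (approximately noise-free) volume from the rank-R factors
    return np.einsum('ir,jr,kr->ijk', A, B, C)

rng = np.random.default_rng(1)
clean = np.einsum('ir,jr,kr->ijk', rng.standard_normal((20, 3)),
                  rng.standard_normal((25, 3)), rng.standard_normal((30, 3)))
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
denoised = randomized_cp_als(noisy, rank=3)
snr = lambda est: 10 * np.log10(np.sum(clean**2) / np.sum((est - clean)**2))
print("SNR gain (dB):", snr(denoised) - snr(noisy))
```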
86.
Reweighting strategies in seismic deconvolution
Reweighting strategies have been widely used to diminish the influence of outliers in inverse problems. In a similar fashion, they can be used to design the regularization term that must be incorporated to solve an inverse problem successfully. Zero-order quadratic regularization, or damped least squares (pre-whitening), is a common procedure used to regularize the deconvolution problem. This procedure entails the definition of a constant damping term which is used to control the roughness of the deconvolved trace. In this paper I examine two different regularization criteria that lead to an algorithm in which the damping term is adapted to successfully retrieve a broad-band reflectivity. Synthetic and field data examples are used to illustrate the ability of the algorithm to deconvolve seismic traces.
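A minimal sketch of the adaptive-damping idea follows: instead of a single constant pre-whitening term, the diagonal regularizer is rebuilt at each iteration from the current reflectivity estimate. The Cauchy-type weights and all numerical settings below are common illustrative choices, not necessarily the two regularization criteria examined in the paper.

```python
import numpy as np

def reweighted_deconvolution(trace, wavelet, n_irls=10, mu=0.1, eps=1e-2):
    n = trace.size
    W = np.zeros((n, n))
    for k, wk in enumerate(wavelet):                   # lower-triangular convolution matrix
        idx = np.arange(n - k)
        W[idx + k, idx] = wk
    WtW, Wtd = W.T @ W, W.T @ trace
    r = np.linalg.solve(WtW + mu * np.eye(n), Wtd)     # constant-damping (pre-whitened) start
    for _ in range(n_irls):
        q = 1.0 / (r**2 + eps**2)                      # Cauchy-type adaptive damping weights
        r = np.linalg.solve(WtW + mu * np.diag(q), Wtd)
    return r

rng = np.random.default_rng(2)
true_r = np.zeros(200)
true_r[[30, 80, 130, 170]] = [1.0, -0.7, 0.5, 0.8]
wavelet = np.exp(-0.2 * np.arange(30)) * np.cos(0.6 * np.arange(30))
trace = np.convolve(true_r, wavelet)[:200] + 0.02 * rng.standard_normal(200)
est = reweighted_deconvolution(trace, wavelet)
print("largest recovered reflectivity samples:", np.sort(np.argsort(np.abs(est))[-4:]))
```

Because the weights penalize small-amplitude samples more strongly at each pass, the damping adapts to the trace and the recovered reflectivity stays spiky and broad-band rather than being uniformly smoothed.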
87.
Data quality control in geochemistry constitutes a fundamental problem that has yet to be solved through the application of statistics and computation. We used refined Monte Carlo simulations of 10,000 replications and 190 independent experiments for sample sizes of 5 to 100. Statistical contaminations of 1 to 4 observations were used to compare 9 statistical parameters (4 central tendency estimates—mean, median, trimean, and Gastwirth mean—and 5 dispersion estimates—standard deviation, median absolute deviation, \(S_n\), \(Q_n\), and \( \widehat{\sigma}_n \)). The presence of discordant observations in the data arrays caused the outlier-based and robust parameters to disagree with each other. However, when the mean and standard deviation (outlier-based parameters) were estimated from censored data arrays obtained after the identification and separation of outlying observations, they generally provided a better estimate of the population than the robust estimates obtained from the original data arrays. This inference is contrary to the general belief, and therefore, reasons for the better performance of the outlier-based methods as compared to the robust methods are suggested. However, when all parameters were estimated from censored arrays and the precise and accurate correction factors put forth in this work were applied, all of them became fully consistent, i.e., the mean agreed with the median, trimean and Gastwirth mean, and the standard deviation with the median absolute deviation, \(S_n\), \(Q_n\), and \( \widehat{\sigma}_n \). An example of inter-laboratory chemical data for the Hawaiian reference material BHVO-1 included sample sizes from 5 to 100 and showed that small samples of up to 20 provide inconsistent estimates, whereas larger samples of 20–100, especially >40, were more appropriate for estimating statistical parameters through robust or outlier-based methods. Although all statistical estimators provided consistent results, our simulation study shows that it is better to use the censored sample mean and population standard deviation as the best estimates.
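For reference, the sketch below computes the central-tendency and dispersion estimators compared in this study on a small contaminated sample, using the usual textbook definitions with asymptotic consistency constants only; the sample-size-dependent correction factors proposed in the paper are not reproduced here.

```python
import numpy as np
from itertools import combinations

def trimean(x):
    q1, q2, q3 = np.percentile(x, [25, 50, 75])
    return (q1 + 2 * q2 + q3) / 4

def gastwirth_mean(x):
    p33, p50, p67 = np.percentile(x, [100 / 3, 50, 200 / 3])
    return 0.3 * p33 + 0.4 * p50 + 0.3 * p67

def mad(x):
    return 1.4826 * np.median(np.abs(x - np.median(x)))   # scaled to sigma for normal data

def sn(x):
    diffs = np.abs(x[:, None] - x[None, :])
    return 1.1926 * np.median(np.median(diffs, axis=1))   # Rousseeuw-Croux Sn, asymptotic constant

def qn(x):
    n = x.size
    pair_diffs = np.sort([abs(a - b) for a, b in combinations(x, 2)])
    h = n // 2 + 1
    k = h * (h - 1) // 2                                  # k-th smallest pairwise difference
    return 2.2219 * pair_diffs[k - 1]

rng = np.random.default_rng(3)
sample = rng.normal(loc=10.0, scale=2.0, size=30)
sample[:2] = [30.0, -15.0]                                # two discordant observations
print("mean / median / trimean / Gastwirth:",
      np.mean(sample), np.median(sample), trimean(sample), gastwirth_mean(sample))
print("std / MAD / Sn / Qn:",
      np.std(sample, ddof=1), mad(sample), sn(sample), qn(sample))
```

Running this with and without the two injected outliers makes the abstract's point concrete: the mean and standard deviation shift markedly, while the robust estimators barely move.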
88.
Although the body size of consumers may be a determining factor in structuring food webs, recent evidence indicates that body size may fail to fully explain differences in the resource use patterns of predators in some situations. Here we compared the trophic niche of three sympatric and sexually dimorphic air-breathing marine predators (the South American sea lion, Otaria flavescens, the South American fur seal, Arctocephalus australis, and the Magellanic penguin, Spheniscus magellanicus) in three areas of the Southwestern Atlantic Ocean (Río de la Plata and adjoining areas, Northern Patagonia and Southern Patagonia), in order to assess the importance of body size and mouth diameter in determining resource partitioning. Body weight and palate/bill breadth were used to characterize the morphology of each sex and species, whereas the trophic niche was assessed through stable isotope ratios of carbon and nitrogen. The quantitative method Stable Isotope Bayesian Ellipses in R (SIBER) was used to compute the area of the Bayesian ellipses and the overlap of the isotopic niches. The results showed that morphological similarity was significantly correlated with isotopic distance between groups within the δ13C–δ15N bi-plot space in the Río de la Plata area, but not in Northern and Southern Patagonia. Furthermore, resource partitioning between groups changed regionally, and some morphologically distinct groups exhibited a large trophic overlap in certain areas, as in the case of male penguins and male sea lions in Southern Patagonia. Conversely, female sea lions always overlapped with the much larger males of the same species, but never overlapped with the morphologically similar male fur seals. These results indicate that body size and mouth diameter are just two of the factors involved in resource partitioning within the guild of air-breathing predators considered here, and that under certain environmental conditions other factors are more important than morphology.
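The isotopic-niche metric behind SIBER is the standard ellipse area (SEA) of a group's δ13C–δ15N values. The sketch below shows that quantity in its simple frequentist form together with the small-sample correction SEAc; SIBER itself estimates these areas and their overlap in a Bayesian framework in R, and the isotope values used here are invented for illustration only.

```python
import numpy as np

def standard_ellipse_area(d13c, d15n):
    """Area of the standard (~40%) ellipse and its small-sample corrected version (SEAc)."""
    cov = np.cov(np.vstack([d13c, d15n]))
    eigvals = np.linalg.eigvalsh(cov)                 # semi-axis lengths are sqrt(eigenvalues)
    sea = np.pi * np.sqrt(eigvals[0] * eigvals[1])
    n = len(d13c)
    seac = sea * (n - 1) / (n - 2)                    # correction for small sample size
    return sea, seac

rng = np.random.default_rng(4)
d13c = rng.normal(-17.0, 0.8, size=25)                # hypothetical d13C values (per mil)
d15n = rng.normal(18.0, 1.1, size=25)                 # hypothetical d15N values (per mil)
print("SEA, SEAc (per mil squared):", standard_ellipse_area(d13c, d15n))
```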
89.
Elemental geochemistry of ferromanganese nodules (crusts) from the South China Sea
Based on chemical analyses of 11 ferromanganese nodule (crust) samples from the South China Sea, this paper examines the elemental geochemistry of Fe, Mn, Cu, Co, Ni, Pb, Zn, Cr, K, Na, Ca, Mg, Si, P, Al, Ti, Sr, Ba and ∑REE in the nodules (crusts). The results show that: (1) the nodules (crusts) are characterized mainly by high Fe and ∑REE and low Mn, Cu, Co and Ni contents; (2) Fe and Mn show no clear mutual correlation, Fe is weakly positively correlated with ∑REE, ∑Ce and ∑Y, Mn is clearly positively correlated with ∑REE, ∑Ce and ∑Y, and in the crusts Fe and Mn are negatively correlated with Si, Al and Cu+Co+Ni; (3) in the nodules (crusts), Mn/Fe is negatively correlated with Cu/Ni and Ce/La, and the Mn/Fe ratio is controlled mainly by Mn; (4) the Fe, ∑REE and other elements in the nodules (crusts) are derived mainly from the weathering, leaching and deposition of terrigenous intermediate to acidic rocks around the South China Sea.
90.