Similar Documents
20 similar records found (search time: 15 ms)
1.
2.
In conventional waterflooding of an oil field, feedback-based optimal control technologies may enable higher oil recovery than a conventional reactive strategy, in which producers are closed based on water breakthrough. To compensate for the inherent geological uncertainties in an oil field, robust optimization has been suggested to improve and robustify optimal control strategies. In robust optimization of an oil reservoir, the water injection and production borehole pressures (bhp) are computed such that the predicted net present value (NPV) of an ensemble of permeability field realizations is maximized. In this paper, we consider both an open-loop optimization scenario, with no feedback, and a closed-loop optimization scenario. The closed-loop scenario is implemented in a moving horizon manner, and feedback is obtained using an ensemble Kalman filter that estimates the permeability field from the production data. For open-loop implementations, previous test case studies presented in the literature show that a traditional robust optimization strategy (RO) gives a higher expected NPV with lower NPV standard deviation than a conventional reactive strategy. We present and study a test case where the opposite happens: the reactive strategy gives a higher expected NPV with a lower NPV standard deviation than the RO strategy. To improve the RO strategy, we propose a modified robust optimization strategy (modified RO) that can shut in uneconomical producer wells. This strategy inherits the features of both the reactive and the RO strategy. Simulations reveal that the modified RO strategy results in operations with larger returns and less risk than the reactive strategy, the RO strategy, and the certainty equivalent strategy. The returns are measured by the expected NPV, and the risk is measured by the standard deviation of the NPV.
In closed-loop optimization, we investigate and compare the performance of the RO strategy, the reactive strategy, and the certainty equivalent strategy. The certainty equivalent strategy is based on a single realization of the permeability field. It uses the mean of the ensemble as its permeability field. Simulations reveal that the RO strategy and the certainty equivalent strategy give a higher NPV compared to the reactive strategy. Surprisingly, the RO strategy and the certainty equivalent strategy give similar NPVs. Consequently, the certainty equivalent strategy is preferable in the closed-loop situation as it requires significantly less computational resources than the robust optimization strategy. The similarity of the certainty equivalent and the robust optimization based strategies for the closed-loop situation challenges the intuition of most reservoir engineers. Feedback reduces the uncertainty and this is the reason for the similar performance of the two strategies.
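As a rough Python illustration of the expected-NPV objective these strategies maximize (a sketch only: the prices, costs, and discount rate are placeholder assumptions, not values from the paper):

```python
import numpy as np

def npv(oil_rates, water_inj_rates, water_prod_rates, dt_days,
        oil_price=80.0, water_inj_cost=5.0, water_prod_cost=10.0,
        annual_discount=0.10):
    """Discounted net present value of a single reservoir realization.

    Rates are per-time-step arrays in barrels/day; dt_days is the step
    length in days. Prices and costs ($/bbl) are illustrative placeholders.
    """
    t = np.cumsum(np.full(len(oil_rates), dt_days))        # days elapsed
    discount = (1.0 + annual_discount) ** (-t / 365.0)     # per-step factor
    cash = (oil_price * np.asarray(oil_rates)
            - water_inj_cost * np.asarray(water_inj_rates)
            - water_prod_cost * np.asarray(water_prod_rates)) * dt_days
    return float(np.sum(cash * discount))

def ensemble_npv(realizations, **kwargs):
    """Expected NPV (return) and its std deviation (risk) over an ensemble
    of permeability-field realizations, each given as a rate-series tuple."""
    values = [npv(*r, **kwargs) for r in realizations]
    return float(np.mean(values)), float(np.std(values))
```

The reactive, RO, and modified RO strategies differ in how the rate and bhp schedules are chosen; shown here is only the evaluation of return (mean NPV) and risk (NPV standard deviation) over the ensemble.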

3.
4.
The random finite element method (RFEM) provides a rigorous tool to incorporate spatial variability of soil properties into reliability analysis and risk assessment of slope stability. However, it is commonly criticized for requiring extensive computational effort and lacking efficiency, particularly at small probability levels (e.g., slope failure probability Pf < 0.001). To address this problem, this study integrates RFEM with an advanced Monte Carlo simulation (MCS) method called "Subset Simulation (SS)" to develop an efficient RFEM (i.e., SS-based RFEM) for reliability analysis and risk assessment of soil slopes. The proposed SS-based RFEM expresses the overall risk of slope failure as a weighted aggregation of slope failure risk at different probability levels and quantifies the relative contributions of slope failure risk at different probability levels to the overall risk of slope failure. Equations are derived for integrating SS with RFEM to evaluate the probability (Pf) and risk (R) of slope failure. These equations are illustrated using a soil slope example. It is shown that Pf and R are evaluated properly using the proposed approach. Compared with the original RFEM with direct MCS, the SS-based RFEM significantly improves the computational efficiency of evaluating Pf and R. This enhances the applications of RFEM in the reliability analysis and risk assessment of slope stability. With the aid of the improved computational efficiency, a sensitivity study is also performed to explore the effects of the vertical spatial variability of soil properties on R. It is found that the vertical spatial variability affects the slope failure risk significantly.

5.
Severe land subsidence due to groundwater extraction may occur in multiaquifer systems where highly compressible aquitards are present. The highly compressible nature of the aquitards leads to nonlinear consolidation in which the groundwater flow parameters are stress-dependent. The case is further complicated by the heterogeneity of the hydrogeologic and geotechnical properties of the aquitards. The effect of realistic vertical heterogeneity of hydrogeologic and geotechnical parameters on the consolidation of highly compressible aquitards is investigated by means of one-dimensional Monte Carlo numerical simulations in which the lower boundary represents the effect of an instant drop in hydraulic head due to groundwater pumping. Two thousand realizations are generated for each of the following parameters: hydraulic conductivity (K), compression index (Cc), void ratio (e) and m (an empirical parameter relating hydraulic conductivity and void ratio). The correlation structure, mean and variance of each parameter were obtained from a literature review of field studies in the lacustrine sediments of Mexico City. The results indicate that, among the parameters considered, random K has the largest effect on the ensemble average behavior of the system when compared to a nonlinear consolidation model with deterministic initial parameters. The deterministic solution underestimates the ensemble average of total settlement when initial K is random. In addition, random K leads to the largest variance (and therefore largest uncertainty) of total settlement, groundwater flux and time to reach steady-state conditions.

6.
The clustering and classification of fracture orientation data are crucial tasks in geotechnical engineering and rock engineering design. The explicit simulation of fracture orientations is commonly applied to compensate for the lack of direct measurements over the entire rock mass. In this study, a single-step approach based on the theory of finite mixture models, in which the component distributions are Fisher distributions, is proposed for automatic clustering and simulation of fracture orientation data. In the proposed workflow, the spherical K-means algorithm is applied to select the initial cluster centers, and the component-wise expectation-maximization algorithm using the minimum message length criterion is used to automatically determine the optimal number of fracture sets. An additional advantage of the proposed method is the representation of orientation data on the full sphere, instead of the conventional hemispherical characterization. The full spherical representation effectively resolves the clustering of fractures with high dip angles, and it also simplifies calculation of the mean direction. The effectiveness of the model-based clustering method is tested on a complicated artificial data set and two real-world data sets. Cluster validity measures are used to evaluate the clustering results, and two other clustering algorithms are presented for comparison. The results demonstrate that the proposed method can successfully detect the optimal number of clusters and that the parameters of the distributions are well estimated. Moreover, the proposed method exhibits good computational performance.
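The spherical K-means seeding step described above can be sketched as follows (a minimal illustration with farthest-point initialization; in the actual workflow these centers seed the component-wise EM fit of a Fisher mixture, which is not shown):

```python
import numpy as np

def spherical_kmeans(X, k, n_iter=20):
    """Cluster unit vectors (e.g. fracture poles on the full sphere)
    by cosine similarity. Farthest-point seeding is an assumption made
    here for determinism, not necessarily the paper's initializer."""
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    centers = [X[0]]                      # farthest-point initialization
    for _ in range(k - 1):
        sim = np.max(np.stack([X @ c for c in centers]), axis=0)
        centers.append(X[int(np.argmin(sim))])
    centers = np.array(centers)
    for _ in range(n_iter):
        labels = np.argmax(X @ centers.T, axis=1)   # nearest by cosine
        for j in range(k):
            members = X[labels == j]
            if len(members):
                m = members.sum(axis=0)             # resultant vector
                centers[j] = m / np.linalg.norm(m)  # mean direction
    return labels, centers
```

Because orientations live on the full sphere, the mean direction of each cluster is simply the normalized resultant vector, which is the simplification the abstract alludes to.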

7.
Multistage fracturing of horizontal wells is recognized as the main stimulation technology for shale gas development. The hydraulic fracture geometry and stimulated reservoir volume (SRV) are typically interpreted using microseismic mapping technology. In this paper, we used a computerized tomography (CT) scanning technique to reveal the fracture geometry created by laboratory fracturing in naturally bedded shale (a cubic block of 30 cm × 30 cm × 30 cm). Experimental results show that partially opened bedding planes help increase fracture complexity in shale. However, they tend to dominate fracture patterns for vertical stress difference Δσv ≤ 6 MPa, which decreases the number of vertical fractures and results in the minimum SRV. A uniformly distributed complex fracture network requires induced hydraulic fractures that can connect the pre-existing fractures as well as pulverize the continuum rock mass. In typical shale with a narrow (<0.05 mm) and closed natural fracture system, complex fractures are likely to form for horizontal stress difference Δσh ≤ 6 MPa and simple transverse fractures for Δσh ≥ 9 MPa. However, highly naturally fractured shale with a wide open (>0.1 mm) natural fracture system does not follow the rule that low Δσh favors the uniform creation of a complex fracture network in the stimulated zone. In such cases, a moderate Δσh from 3 to 6 MPa is favorable for both the growth of new hydraulic fractures and the activation of the natural fracture system. Shale bedding, natural fractures, and geostress are formation conditions that cannot be changed; fracture complexity can only be maximized by controlling the engineering design of fluid viscosity, flow rate, and well completion type. Variable-flow-rate fracturing with low-viscosity slickwater fluid of 2.5 mPa·s proved to be an effective treatment to improve the connectivity of induced hydraulic fractures with pre-existing fractures. Moreover, simultaneous fracturing can effectively reduce the stress difference and increase the fracture number, making it possible to generate a large-scale complex fracture network even for high Δσh from 6 to 12 MPa.

8.
The random finite element method (RFEM) combines random field theory and the finite element method in the framework of Monte Carlo simulation. It has been applied to a wide range of geotechnical problems such as slope stability, bearing capacity and the consolidation of soft soils. When the RFEM was first developed, direct Monte Carlo simulation was used. If the probability of failure (pf) is small, direct Monte Carlo simulation requires a large number of simulations. Subset simulation is one of the most efficient variance reduction techniques for simulating small pf, and it has recently been proposed as a replacement for direct Monte Carlo simulation in RFEM. It is noted, however, that subset simulation requires calculation of the factor of safety (FS), whereas direct Monte Carlo requires only a check of failure or non-failure. The search for the FS in RFEM can be a tedious task. For example, finding the FS of a slope by the strength reduction method (SRM) usually requires much more computational time than a failure/non-failure check. In this paper, subset simulation is combined with RFEM, but the need to search for the FS is eliminated: the value of the yield function in an elastoplastic finite element analysis is used to measure the safety margin instead of the FS. Numerical experiments show that the proposed approach gives the same level of accuracy as the traditional subset simulation based on FS, but the computational time is significantly reduced. Although only slope stability examples are given, the proposed approach should work generally for other types of geotechnical applications.
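The subset simulation estimator itself is generic: the failure probability is accumulated as a product of conditional level probabilities, with Markov chains repopulating each nested level. Below is a minimal sketch for P(margin(x) ≤ 0) with standard normal inputs; in the paper the margin would be the elastoplastic yield-function value from an FE run, while here any cheap analytic function stands in for it:

```python
import numpy as np

def subset_simulation(margin, d, n=1000, p0=0.1, seed=0):
    """Estimate a small failure probability P(margin(x) <= 0), x ~ N(0, I_d),
    by subset simulation with a crude whole-vector Metropolis sampler.
    A sketch, not a production implementation."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, d))
    g = np.array([margin(xi) for xi in x])
    pf, nc = 1.0, int(n * p0)
    for _ in range(20):                    # at most 20 levels
        idx = np.argsort(g)[:nc]          # worst (smallest-margin) samples
        thr = g[idx[-1]]                  # empirical p0-quantile threshold
        if thr <= 0:                      # failure region reached
            return pf * float(np.mean(g <= 0))
        pf *= p0
        chains_x, chains_g = [], []
        for xi, gi in zip(x[idx], g[idx]):
            for _ in range(n // nc):      # grow a chain from each seed
                cand = xi + 0.8 * rng.standard_normal(d)
                # Metropolis acceptance against the standard normal density
                if rng.random() < np.exp(0.5 * (xi @ xi - cand @ cand)):
                    gc = margin(cand)
                    if gc <= thr:         # stay inside the current subset
                        xi, gi = cand, gc
                chains_x.append(xi.copy())
                chains_g.append(gi)
        x, g = np.array(chains_x), np.array(chains_g)
    return pf * float(np.mean(g <= 0))
```

The paper's point is that `margin` can be any monotone safety measure, so the cheap yield-function value replaces the expensive strength-reduction FS search without changing this estimator.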

9.
A three-dimensional (3D) nuclear magnetic resonance (NMR) spectrum can simultaneously provide distributions of longitudinal relaxation time (T1), transverse relaxation time (T2), and diffusivity (D); thus, it greatly improves the capacity for fluid identification, typing, and quantitative evaluation. However, several challenges significantly hinder the widespread application of this technique. The primary challenges are the high time and memory costs associated with current 3D NMR inversion algorithms; in addition, an activation sequence optimization method for 3D NMR inversions has not been developed. In this paper, a novel inversion method for 3D NMR spectra and a detailed optimization method for activation sequences and acquisition parameters are proposed. The novel method, namely the randomized singular value decomposition (RSVD) inversion algorithm, reduces memory requirements while ensuring computational efficiency and accuracy. The window averaging (WA) technique was also adopted in this study to further increase computational speed. The optimization method for pulse sequences is mainly based on projections of the 3D NMR spectra onto the two-dimensional (2D) and one-dimensional (1D) domains; these projections can identify missing NMR properties of different fluids. Because of the efficiency and stability of the novel algorithm and the optimized strategy, the proposed methods could further promote the widespread application of 3D NMR.
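A randomized SVD of the kind named above can be sketched in a few lines (a Halko-style random projection scheme; the actual NMR inversion kernel is replaced here by an arbitrary dense matrix):

```python
import numpy as np

def randomized_svd(A, rank, n_oversample=10, n_power_iter=2, seed=0):
    """Truncated SVD via random projection: sample the range of A,
    orthonormalize, then do a small exact SVD. Oversampling and power
    iterations (assumed defaults) improve accuracy for decaying spectra."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Q = A @ rng.standard_normal((n, rank + n_oversample))
    for _ in range(n_power_iter):          # sharpen the captured spectrum
        Q = A @ (A.T @ Q)
    Q, _ = np.linalg.qr(Q)                 # orthonormal range basis
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank]
```

The memory and time savings come from never factoring the full matrix: only matrix-vector products with a thin random block and an SVD of a (rank + oversample)-row matrix are needed.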

10.
In open pit mining, the cutoff grade is one of the most important factors in production planning; it is simply defined as the grade that discriminates between ore and waste. It is also a sensitive parameter that can have a major impact on the net present value (NPV) and cash flow of a project. Dilution, in turn, is one of the most important and sensitive parameters in mining projects and is closely related to the cutoff grade. Choosing the optimum cutoff grade is of considerable importance, since it has a significant impact on mining operations. One of the most popular algorithms for determining the optimum cutoff grade is Lane's algorithm, but in Lane's algorithm, mining dilution and its cost are not considered during cutoff grade optimization. In this paper, the effects of dilution on the cutoff grade are studied using Lane's theory: dilution and its cost are inserted directly into the cutoff grade optimization process. The cutoff grades obtained using the suggested method are more realistic than those obtained with the original form of Lane's formulation. Results of the study show that as dilution increases, the average grade decreases and consequently the cutoff grade increases. As a result of dilution, the quantity Qm increases and the quantities Qc and Qr decrease. Therefore, the annual profit and NPV of the project are significantly reduced.
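A minimal sketch of a Lane-style breakeven cutoff grade with a simple dilution adjustment (the symbols and the (1 − dilution) grade correction are illustrative assumptions, not the paper's exact formulation):

```python
def breakeven_cutoff(price, refining_cost, processing_cost, recovery,
                     dilution=0.0):
    """Lane-style breakeven cutoff grade: the in-situ grade at which a
    tonne of ore just pays for its processing. With mining dilution D,
    the mill receives grade g*(1 - D) (an assumed simple dilution model),
    so the required in-situ cutoff grade rises by 1/(1 - D)."""
    g = processing_cost / ((price - refining_cost) * recovery)
    return g / (1.0 - dilution)
```

This reproduces the qualitative result of the abstract: increasing dilution lowers the delivered average grade, so the cutoff grade that separates ore from waste must increase, shrinking the ore quantities (Qc, Qr) and growing the mined quantity (Qm).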

11.
Despite advanced developments in computational techniques, adequately calibrating a model and minimizing the misfit between system properties and corresponding measurements remains a challenging task in groundwater modeling. Two important features of the groundwater regime, hydraulic conductivity (k) and specific yield (Sy), which control aquifer dynamics, vary spatially within an aquifer system due to geologic heterogeneity. This paper provides the first attempt at using an advanced swarm-intelligence-based optimization algorithm (the cuckoo optimization algorithm, COA) coupled with a distributed hydrogeologic model (MODFLOW) to calibrate aquifer hydrodynamic parameters (Sy and k) over an arid groundwater system in east Iran. The optimization was posed in a single-objective manner, with the sum of absolute errors as the objective function. The COA yielded both hydraulic conductivity and specific yield parameters with high performance and the least error. Estimation of depth to the water table revealed skillful prediction for cells located in the middle of the aquifer system, but unskillful prediction at the headwater due to frequent water storage changes at the inflow boundary. Groundwater depth decreased from east toward the west and southwest parts of the aquifer because of extensive pumping activities, which smoothed the shape of the simulated head curve. The results demonstrate a clear need to optimize arid aquifer parameters and to compute the groundwater response across an arid region.

12.
We study the applicability of a model order reduction technique to the solution of transport of passive scalars in homogeneous and heterogeneous porous media. Transport dynamics are modeled through the advection-dispersion equation (ADE), and we employ Proper Orthogonal Decomposition (POD) as a strategy to reduce the computational burden associated with the numerical solution of the ADE. Our application of POD relies on solving the governing ADE for selected times, termed snapshots, which are then employed to achieve the desired model order reduction. We introduce a new technique, termed the Snapshot Splitting Technique (SST), which allows enriching the dimension of the POD subspace and damping the temporal growth of the modeling error. Coupling SST with a modeling strategy that alternates, over diverse time scales, the solution of the full numerical transport model with its reduced counterpart extends the benefit of POD over a prolonged temporal window, so that the salient features of the process can be captured at a reduced computational cost. The selection of the time scales across which the solutions of the full and reduced models are alternated is linked to the Péclet number (Pe), representing the interplay between the advective and dispersive processes taking place in the system. Thus, the method is adaptive in space and time across the heterogeneous structure of the domain through the combined use of POD and SST and by way of alternating the solution of the full and reduced models. We find that the width of the time scale within which the POD-based reduced model provides accurate results tends to increase with decreasing Pe. This suggests that local-scale dispersive processes help the POD method capture the salient features of the system dynamics embedded in the selected snapshots. Since the dimension of the reduced model is much lower than that of the full numerical model, the proposed methodology enables one to accurately simulate transport at a markedly reduced computational cost.
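The basic POD construction from snapshots can be sketched as follows (plain POD via an SVD of the snapshot matrix; the paper's SST enrichment and full/reduced alternation are not reproduced here):

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """POD basis from a snapshot matrix (n_dof x n_snapshots): the left
    singular vectors retaining the requested fraction of snapshot energy
    (sum of squared singular values)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1   # smallest rank reaching target
    return U[:, :r]

def reduce_operator(A, Phi):
    """Galerkin projection of a full discrete operator onto the POD subspace,
    yielding the small system actually advanced in time."""
    return Phi.T @ A @ Phi
```

The reduced system has dimension r (a few modes) instead of the number of grid unknowns, which is where the computational saving over the full ADE solve comes from.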

13.
Studies of mobile pastoralist livelihoods have shown that a variety of socio-technical practices have been developed to achieve reliable outputs from livestock in variable arid and semi-arid environments. This paper builds upon the concept of pastoralists as high-reliability seekers rather than risk-averse actors and makes a case for understanding Mongolian herders as well adapted to livestock production in highly variable climatic conditions within a certain threshold of risk and uncertainty. This system fails, however, during instances of high uncertainty and covariate risk, such as the natural hazard dzud, which requires individual households to make significant cash investments in risk management. The paper advances the idea that investing in local government (soum and bag level) administrative capacity and infrastructure is needed to build system resilience to covariate risk. Based on ethnographic research in rural Bayankhongor, this paper interrogates how dzud interfaces with socio-economic factors amongst pastoralists in central west Mongolia.

14.
The recent capability of measuring full-field deformations using advanced imaging techniques provides the opportunity to improve the predictive ability of computational soil mechanics. This paper investigates the effects of imperfect initial specimen geometry, platen-soil and apparatus compliance, and material heterogeneity on the constitutive model calibration process from triaxial tests with nonlubricated platens. The technique of 3D-Digital Image Correlation (3D-DIC) was used to measure, from digital images, full-field displacements over sand specimen surfaces throughout triaxial compression tests, as well as actual specimen initial shape, and deformations associated with platen and apparatus compliance and bedding settlement. The difference between predicted and observed 3D specimen surface deformations served to quantify an objective function in the optimization algorithm. Four different three-dimensional finite element models (FEMs), each allowing varying degrees of material variability in the solution of the inverse problem, were used to study the effect of material heterogeneity. Results of the parametric study revealed that properly representing the actual initial specimen geometry significantly improves the optimization efficiency, and that accounting for boundary compliance can be critical for the accurate recovery of the full-field experimental displacements. Allowing for nonsymmetric material variability had the most significant impact on predicted behavior. A relatively high coefficient of variation in model parameters was found among a statistical ensemble of tests, underscoring the importance of conducting multiple tests for proper material characterization. Copyright © 2009 John Wiley & Sons, Ltd.

15.
The Soil Conservation Service curve number (SCS-CN) method, also known as the Natural Resources Conservation Service curve number (NRCS-CN) method, is popular for computing the volume of direct surface runoff for a given rainfall event. The performance of the SCS-CN method, developed from large rainfall (P) and runoff (Q) datasets of United States watersheds, is evaluated using a large dataset of natural storm events from 27 agricultural plots in India. On the whole, the CN estimates from the National Engineering Handbook (Chapter 4) tables do not match those derived from the observed P and Q datasets. As a result, runoff prediction using the former CNs was poor for the data of 22 (out of 24) plots. However, the match was a little better for higher CN values, consistent with the general notion that the existing SCS-CN method performs better for high rainfall-runoff (high CN) events. Infiltration capacity (fc) was the main explanatory variable for runoff (or CN) production in the study plots, as it exhibited the expected inverse relationship between CN and fc. The plot-data optimization yielded initial abstraction coefficient (λ) values from 0 to 0.659 for the ordered dataset and 0 to 0.208 for the natural dataset (with 0 as the most frequent value). Mean and median λ values were, respectively, 0.030 and 0 for the natural rainfall-runoff dataset and 0.108 and 0 for the ordered rainfall-runoff dataset. Runoff estimation was very sensitive to λ, and it improved consistently as λ changed from 0.2 to 0.03.
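The standard SCS-CN relations being evaluated above are compact enough to state directly (with S in mm and λ the initial abstraction coefficient):

```python
def scs_cn_runoff(P, CN, lam=0.2):
    """Direct runoff Q (mm) from event rainfall P (mm) by the SCS-CN method:
        S  = 25400/CN - 254          (potential maximum retention, mm)
        Ia = lam * S                 (initial abstraction)
        Q  = (P - Ia)^2 / (P - Ia + S)   for P > Ia, else 0.
    lam = 0.2 is the handbook default; the study finds ~0.03 fits better."""
    S = 25400.0 / CN - 254.0
    Ia = lam * S
    return (P - Ia) ** 2 / (P - Ia + S) if P > Ia else 0.0
```

Lowering λ shrinks the initial abstraction, so the same storm produces more computed runoff, which is the direction of the sensitivity reported in the abstract.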

16.
A robust configuration of pilot points in the parameterisation step of a model is crucial for obtaining satisfactory model performance. However, the recommendations on pilot-point use provided by the majority of recent researchers are considered somewhat impractical. In this study, a practical approach is proposed for selecting pilot-point properties (i.e. number, distance and distribution method) in the calibration step of a groundwater model. For the first time, the relative distance–area ratio (d/A) and the head-zonation-based (HZB) method are introduced to assign pilot points to the model domain by incorporating a user-friendly zone ratio. This study provides some insights into the trade-off between maximising and restricting the number of pilot points, and offers a relative basis for selecting the pilot-point properties and distribution method in the development of a physically based groundwater model. The grid-based (GB) method is found to perform comparably better than the HZB method in terms of model performance and computational time. When using the GB method, this study recommends a distance–area ratio of 0.05, a distance–x-grid length ratio (d/Xgrid) of 0.10, and a distance–y-grid length ratio (d/Ygrid) of 0.20.

17.
Understanding methane emissions from natural sources is becoming increasingly important with future climatic uncertainty. Wetlands are the single largest natural source of methane; however, little attention has been given to how biota, and interactions between aboveground and belowground communities, may affect methane emission rates in these systems. To investigate the effects of vegetative disturbance and biota-induced belowground biogeochemical alterations on methane emissions in situ, we manipulated densities of Littoraria irrorata (marsh periwinkle snails) and Geukensia granosissima (gulf ribbed mussels) inside fenced enclosures within a Spartina alterniflora salt marsh and measured methane emissions and sediment extracellular enzyme activity (phosphatase, β-glucosidase, cellobiohydrolase, N-acetyl-β-D-glucosaminidase, peroxidase, and phenol oxidase) over the course of a year. Changes in snail density did not affect methane emission; however, increased densities of ribbed mussels significantly increased methane emission. Sediment extracellular enzyme activities for phosphatase, cellobiohydrolase, N-acetyl-β-D-glucosaminidase, and phenol oxidase were correlated with methane emission, and none of the enzymes assayed were affected by the snail and mussel density treatments. While methane emissions from salt marsh ecosystems are lower than those from freshwater systems, the high degree of variability in emission rates and the potential for interactions with naturally occurring biota that increase emissions warrant further investigation into salt marsh methane dynamics.

18.
Measurement of barometric efficiency (BE) from open monitoring wells or loading efficiency (LE) from formation pore pressures provides valuable information about the hydraulic properties and confinement of a formation. Drained compressibility (α) can be calculated from LE (or BE) in confined and semi-confined formations and used to calculate specific storage (Ss). Ss and α are important for predicting the effects of groundwater extraction and therefore for sustainable extraction management. However, in low hydraulic conductivity (K) formations or large diameter monitoring wells, time lags caused by well storage may be so long that BE cannot be properly assessed in open monitoring wells in confined or unconfined settings. This study demonstrates the use of packers to reduce monitoring-well time lags and enable reliable assessments of LE. In one example from a confined, high-K formation, estimates of BE in the open monitoring well were in good agreement with shut-in LE estimates. In a second example, from a low-K confining clay layer, BE could not be adequately assessed in the open monitoring well due to time lag. Sealing the monitoring well with a packer reduced the time lag sufficiently that a reliable assessment of LE could be made from a 24-day monitoring period. The shut-in response confirmed confined conditions at the well screen and provided confidence in the assessment of hydraulic parameters. A short (time-lag-dependent) period of high-frequency shut-in monitoring can therefore enhance understanding of hydrogeological systems and potentially provide hydraulic parameters to improve conceptual/numerical groundwater models.
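A common way to estimate BE from an open-well record is to regress water-level changes on barometric-pressure changes. A minimal sketch follows (real records also need detrending, Earth-tide correction, and attention to the well-storage time lag that is the focus of the paper):

```python
import numpy as np

def barometric_efficiency(well_head, baro_head):
    """Estimate BE as the (negated) regression slope of water-level change
    on barometric-pressure change, both expressed as head in the same
    units. In a confined formation LE = 1 - BE. A sketch only: assumes
    the response is instantaneous (no time lag)."""
    dh = np.diff(np.asarray(well_head, dtype=float))
    db = np.diff(np.asarray(baro_head, dtype=float))
    slope = np.sum(dh * db) / np.sum(db * db)   # least squares through origin
    return -slope   # water level falls when barometric pressure rises
```

When well-storage time lag distorts this regression, shutting the well in with a packer (as in the study) lets the formation loading efficiency be read directly from the pore-pressure response instead.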

19.
Timing of highly stable millisecond pulsars provides the possibility of independently verifying terrestrial time scales on intervals longer than a year. An ensemble pulsar time scale is constructed based on pulsar timing data obtained on the 64-m Parkes telescope (Australia) in 1995–2010. Optimal Wiener filters were applied to enhance the accuracy of the ensemble time scale. The run of the time-scale difference PTens − TT(BIPM2011) does not exceed 0.8 ± 0.4 μs over the entire studied time interval. The fractional instability of the difference PTens − TT(BIPM2011) over 15 years is σ_z = (0.6 ± 1.6) × 10^-15, which corresponds to an upper limit on the energy density of the gravitational-wave background of Ω_g h² ~ 10^-10 and on variations in the gravitational potential of ~10^-15 at the frequency 2 × 10^-9 Hz.

20.
This work concerns linearization methods for efficiently solving Richards' equation, a degenerate elliptic-parabolic equation which models flow in saturated/unsaturated porous media. The discretization of Richards' equation is based on backward Euler in time and Galerkin finite elements in space. The most valuable linearization schemes for Richards' equation, i.e. the Newton method, the Picard method, the Picard/Newton method and the L-scheme, are presented and their performance is comparatively studied. The convergence, the computational time and the condition numbers of the underlying linear systems are recorded. The convergence of the L-scheme is theoretically proved and the convergence of the other methods is discussed. A new scheme is proposed, the L-scheme/Newton method, which is more robust and quadratically convergent. The linearization methods are tested on illustrative numerical examples.
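The L-scheme iteration can be illustrated on the scalar nonlinear equation arising from one backward-Euler step (a toy reduction of a Richards-type problem to θ(ψ) + a·ψ = b; the stabilization parameter L must dominate the Lipschitz constant of θ):

```python
def l_scheme(theta, b, a, L, psi0, tol=1e-12, max_iter=500):
    """Solve theta(psi) + a*psi = b by the L-scheme fixed-point iteration
        psi_{j+1} = psi_j + (b - theta(psi_j) - a*psi_j) / (L + a),
    which is robust (linearly convergent) whenever L >= sup theta'.
    A scalar sketch of the scheme; the paper applies it per FE time step."""
    psi = psi0
    for _ in range(max_iter):
        residual = b - theta(psi) - a * psi
        psi += residual / (L + a)          # damped update, no derivatives
        if abs(residual) < tol:
            break
    return psi
```

Unlike Newton's method, no Jacobian of the (possibly degenerate) saturation curve is needed, which is the source of the L-scheme's robustness; its price is linear rather than quadratic convergence, which the paper's L-scheme/Newton hybrid aims to recover.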
