Data acquisition programs, such as surveillance and pilot projects, play an important role in reservoir management and are crucial for minimizing subsurface risk and improving decision quality. However, these projects often involve significant cost in terms of both capital investment and production downtime. To maximize “the bang for the buck”, it is imperative to optimize the program design before the data come in, and to rapidly integrate and interpret the data after they arrive.
Before the data come in, a value of information (VOI) study is traditionally performed to estimate program effectiveness and justify the investment. These VOI studies often rely on heuristically estimated “data reliability”, so the results can be subjective and unreliable. After the data come in, simulation models are first calibrated through history matching and then used in probabilistic forecasting to update the prediction S-Curve. This two-step approach is often time-consuming.
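To make the role of the heuristic reliability concrete, here is a minimal Python sketch of a traditional decision-tree VOI calculation; all probabilities and dollar values are hypothetical placeholders, not results from any actual study.

```python
# Hypothetical numbers throughout; the point is only to show where the
# heuristic "data reliability" enters a traditional decision-tree VOI study.
p_good = 0.4          # prior P(favorable reservoir)
rel = 0.8             # heuristic reliability: P(positive data | favorable)
fp = 0.2              # heuristic false-positive rate: P(positive data | unfavorable)
v_good, v_poor = 500.0, -200.0   # develop-case outcomes, $MM

# Without the data: develop only if the prior expected value is positive.
ev_no_data = max(p_good * v_good + (1 - p_good) * v_poor, 0.0)

# With the data: Bayes-update the probability on each outcome, then decide.
p_pos = p_good * rel + (1 - p_good) * fp
p_good_pos = p_good * rel / p_pos
p_good_neg = p_good * (1 - rel) / (1 - p_pos)
ev_pos = max(p_good_pos * v_good + (1 - p_good_pos) * v_poor, 0.0)
ev_neg = max(p_good_neg * v_good + (1 - p_good_neg) * v_poor, 0.0)
ev_with_data = p_pos * ev_pos + (1 - p_pos) * ev_neg

voi = ev_with_data - ev_no_data   # entirely driven by the assumed rel/fp values
print(f"VOI = ${voi:.0f}MM")
```

Because the assumed reliability and false-positive numbers drive the entire result, the computed VOI is only as objective as those heuristics.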
In this talk I will share lessons learned from our recent development of the so-called reliable forecast technology, a model-based, data-driven method. It exploits the direct statistical relationship between the objective function and the measurement, derived from an ensemble of reservoir simulations. Before the data come in, this relationship quantifies the expected uncertainty reduction and feeds the decision-tree evaluation, enabling a more objective VOI quantification; because it is grounded in simulation results, it is more rigorous and objective than the traditional heuristic approach. After the data come in, the method rapidly updates the prediction cumulative distribution function (i.e., the S-Curve) from the actual observed data, without the traditional history-matching process. It thus offers a faster alternative to the two-step history-matching-then-probabilistic-forecast workflow and can substantially reduce turnaround time.
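As a rough illustration of the idea, the sketch below conditions the objective function on a measurement using ensemble statistics under a linear-Gaussian assumption. The ensemble here is synthetic and all names and values are illustrative; the actual reliable forecast technology may use a more sophisticated statistical model than this simple Gaussian update.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Synthetic stand-in for an ensemble of reservoir simulations: each prior
# model i yields a simulated measurement d[i] (e.g., a pilot response) and
# an objective-function value h[i] (e.g., cumulative oil).
n = 500
h = rng.normal(100.0, 15.0, n)           # objective function across the ensemble
d = 0.8 * h + rng.normal(0.0, 5.0, n)    # measurement, correlated with h

# Direct statistical relationship from the ensemble (linear-Gaussian form).
sig_err = 2.0                             # assumed measurement-error std. dev.
var_h = np.var(h, ddof=1)
var_d = np.var(d, ddof=1) + sig_err**2
cov_hd = np.cov(h, d, ddof=1)[0, 1]

# BEFORE the data come in: the expected uncertainty reduction is known in
# advance, since the posterior variance does not depend on the observed value.
var_h_post = var_h - cov_hd**2 / var_d
print(f"expected variance reduction: {1 - var_h_post / var_h:.1%}")

# AFTER the data come in: shift the mean by the Gaussian update and rebuild
# the prediction S-Curve (CDF) directly, with no history matching.
d_obs = 85.0                              # hypothetical observed measurement
mean_h_post = h.mean() + cov_hd / var_d * (d_obs - d.mean())
p = np.linspace(0.01, 0.99, 99)
s_curve = norm.ppf(p, loc=mean_h_post, scale=np.sqrt(var_h_post))
```

The same ensemble statistics serve both stages: the conditional variance gives the pre-acquisition VOI input, and plugging in the observed value gives the post-acquisition S-Curve in one step.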