Reservoir Simulation on Azure: HPC for Oil & Gas


Introduction

Reservoir simulation is a key computational workload in the Oil & Gas (O&G) industry. Because predicting the availability and production of hydrocarbons is fundamental to the business, O&G operators and service companies employ high-fidelity models and sophisticated algorithms to support critical business and engineering decisions with high confidence. As a result, reservoir simulation, along with seismic imaging, is among the most computationally demanding workloads in the industry. This motivates applying high performance computing (HPC) to these workflows to minimize the time to solution within the constraints of technology and available resources (personnel, financial).


At a high level, reservoir simulation models the flow of reservoir fluids (oil, gas, and water) in the sub-surface. This is accomplished by mathematically modeling fluid flow through the porous sub-surface rock formations. The resulting differential equations are discretized numerically over a spatial grid that describes the sub-surface, and the numerical model is solved for the flow and physical properties over time. HPC allows larger models to be simulated with higher fidelity, capturing greater detail and matching historical and field data as closely as possible.
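As a minimal illustration of the kind of equation involved (single-phase flow only; the black-oil model solved by Flow adds separate conservation equations per phase plus saturation, relative-permeability, and PVT relations), combining conservation of mass with Darcy's law gives:

```latex
% Single-phase flow in a porous medium: mass conservation plus Darcy's law.
% phi: porosity, rho: fluid density, k: permeability, mu: viscosity,
% p: pressure, g: gravitational acceleration, z: depth, q: source/sink (wells).
\frac{\partial (\phi \rho)}{\partial t}
  = \nabla \cdot \left[ \frac{\rho\, k}{\mu} \left( \nabla p - \rho g \nabla z \right) \right] + q
```

Equations of this form are discretized in space (typically with finite volumes over the grid describing the reservoir) and in time, and in Flow's case the resulting nonlinear system is solved fully implicitly at each time step.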


OPM

Open Porous Media (OPM) is an open source project for developing applications aimed at modeling and simulating porous media processes. Two popular applications to come out of this project are Flow and ResInsight: Flow is a fully implicit, black-oil reservoir simulator, while ResInsight offers 3D, interactive visualization of the simulation results. A popular commercial analogue to Flow is INTERSECT from Schlumberger. In this blog post, we offer a recipe for building OPM and present performance scaling results from running Flow on a number of three-phase, black-oil cases across multiple Azure HB VMs.


The instructions for installing OPM (on OPM's site) are incomplete, especially when building from source: there are multiple dependencies, and the instructions for building those dependencies are missing. Also, when building for a specific platform (such as Azure HB), we want to leverage a toolchain optimized for that platform. Hence, we build OPM from source on the CentOS-HPC VM OS image optimized for HPC on Azure.


The recipe for this optimized build is available on GitHub, along with instructions to install and run it. The distributed, parallel Flow and all underlying dependencies are built with gcc 8.2 and Open MPI 4 for optimal performance.
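For orientation, the sketch below shows the general shape of such a source build: each DUNE/OPM module is configured with CMake and built in dependency order. The module list, directory layout, and CMake flags here are assumptions for illustration only; the GitHub recipe linked above is the authoritative set of steps.

```python
import os
import subprocess

# Hypothetical sketch of an OPM source build. Module names, paths, and
# CMake flags are assumptions -- follow the published GitHub recipe for
# the exact, tested steps.
MODULES = [
    "dune-common", "dune-geometry", "dune-grid", "dune-istl",   # DUNE prerequisites
    "opm-common", "opm-material", "opm-grid", "opm-models",     # OPM libraries
    "opm-simulators",                                           # provides the 'flow' binary
]

PREFIX = os.path.expanduser("~/opm")              # install prefix (assumption)
CMAKE_FLAGS = [
    "-DCMAKE_BUILD_TYPE=Release",
    f"-DCMAKE_INSTALL_PREFIX={PREFIX}",
    "-DCMAKE_C_COMPILER=gcc",                     # gcc 8.2 toolchain on CentOS-HPC
    "-DCMAKE_CXX_COMPILER=g++",
]

def build(module: str) -> None:
    """Configure, build, and install one module out-of-source."""
    src = os.path.expanduser(f"~/src/{module}")
    bld = os.path.join(src, "build")
    os.makedirs(bld, exist_ok=True)
    subprocess.run(
        ["cmake", src, f"-DCMAKE_PREFIX_PATH={PREFIX}", *CMAKE_FLAGS],
        cwd=bld, check=True)
    subprocess.run(["make", "-j", str(os.cpu_count()), "install"],
                   cwd=bld, check=True)

if __name__ == "__main__":
    for m in MODULES:
        build(m)
```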


Results

Simulations are run for three different three-phase, black-oil models, with cell counts of 0.5M, 1M, and 6M, to test scaling. The 0.5M model is the open data set from the Norne field operated by Equinor. The platform chosen for these runs is the AMD EPYC based Azure HB VM, with 60 physical CPU cores, 260 GB/s of memory bandwidth, and a 100 Gb/s EDR InfiniBand RDMA network. The runs are performed on a varying number of nodes (VMs) and with a varying number of processes (MPI ranks) per node. All of these results are plotted together for comparison below. Note that each node (n) in the plot implies access to the whole VM's memory bandwidth, networking, etc., even when the processes per node (ppn) are fewer than 60; hence scaling is understood as a function of nodes (n).


Figure: Comparison of the runtime for the three cases (0.5M, 1M, and 6M) across combinations of nodes and processes per node.
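A sketch of how such a node/ppn sweep might be launched is shown below. The mpirun options use standard Open MPI syntax; the host file, deck name, and output locations are placeholders for illustration rather than the exact commands used for these runs.

```python
import subprocess

# Hypothetical sweep over node count (n) and MPI ranks per node (ppn) for
# OPM Flow. The deck path and hostfile are placeholders; --map-by ppr:N:node
# is standard Open MPI 4 syntax for placing N ranks on each node.
DECKS = {"norne-0.5M": "NORNE_ATW2013.DATA"}      # model label -> ECLIPSE-style deck
HOSTFILE = "hosts.txt"                            # one Azure HB VM hostname per line

for label, deck in DECKS.items():
    for nodes in (1, 2, 4, 8):
        for ppn in (15, 30, 60):
            nranks = nodes * ppn
            cmd = [
                "mpirun", "-np", str(nranks),
                "--hostfile", HOSTFILE,
                "--map-by", f"ppr:{ppn}:node",    # place 'ppn' ranks per node
                "flow", deck,
                f"--output-dir=out/{label}_n{nodes}_ppn{ppn}",
            ]
            print(" ".join(cmd))
            subprocess.run(cmd, check=True)
```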

Overall, for these relatively small cases, Flow scales reasonably at low node counts (up to 480 cores' worth of resources). Increasing the processes per node (ppn) does decrease the runtime, but far from linearly, and the effect is worse for the smaller models on a per-node basis. In fact, for some models there appears to be a penalty associated with increasing ppn, seen as an increase in the time taken by the linear solver while the setup time remains similar (not shown here). This work did not determine which of the linear solvers available in OPM was used during these runs, nor what could be done to optimize this step through the various solver-related runtime parameters in Flow.
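One way to follow up would be to inspect Flow's runtime parameters and re-run a case with different linear-solver settings. The sketch below illustrates the idea only; the specific option names are assumptions and should be confirmed against the output of flow --help for the build in use.

```python
import subprocess

# Hypothetical follow-up: list Flow's runtime parameters, then re-run a case
# with varied linear-solver settings. The option names used below
# (--linear-solver-reduction, --linear-solver-max-iter) are assumptions --
# verify them against 'flow --help' before relying on them.
subprocess.run(["flow", "--help"])

base = ["mpirun", "-np", "60", "flow", "NORNE_ATW2013.DATA"]   # placeholder deck
for reduction in ("1e-2", "1e-3", "1e-4"):
    subprocess.run(base + [
        f"--linear-solver-reduction={reduction}",   # target residual reduction per solve
        "--linear-solver-max-iter=200",             # cap on linear iterations
        f"--output-dir=out/solver_red{reduction}",
    ], check=True)
```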


During the course of this exercise, some runtime issues (#1, #2) with Flow were identified. The OPM documentation does not make clear whether the hybrid MPI/OpenMP mode is operational; in multiple tests across various cases and run configurations, varying 'threads-per-process' for an MPI run had no perceivable effect on runtime.

With this experience, further scaling tests with larger models were deferred for the time being, until comparative benchmarking data is available from elsewhere (say, a production build of OPM). Given the observed trends, the expectation is that larger models should scale well.


Conclusion

The objective of this exercise was not to demonstrate best-in-class performance scalability of the HPC VMs; that has been shown before (HB and HC), scaling past 20,000 cores for a single tightly coupled HPC job. Rather, this is intended as a starting guide to building and running the open source OPM as an example of an Oil & Gas HPC application on Azure. You can get started with Batch or CycleCloud; the scripts at AzureHPC help with an end-to-end setup to run the Flow simulations on a Linux cluster and visualize the results with ResInsight on a Windows GPU VM.

You're welcome to further optimize this recipe, run larger models at larger node counts, and stress-test the scalability of OPM Flow.
