Development of a Platform for Parallel Reservoir Simulators
Hui Liu
University of Calgary
Advisor: Dr. Zhangxing (John) Chen
Outline
• Introduction
• Grid Management
• Linear Solver
• Pre-processing
• Visualization
• Numerical Experiments
Introduction: Motivation
• Desktop/workstation
- Efficient for small cases
- Large cases: tens of millions of grid cells
- Large numbers of wells
- Multi-core, OpenMP
Our Methods
• Cluster
• OpenMP
- Multi-core, easy to use, limited scalability
• MPI
- Message-passing communication
- Parallel IO with MPI-IO
• C language
Typical Simulator
[Diagram: modules of a typical simulator: Grid, Data, Solver, IO, Memory, Communication, Keywords, Visualization, and Physics, built around the SIM core]
Development Goals
• Common platform for various simulators
- Grid, data, linear solver and IO
- Pre- and post-processing
• Hundreds of millions of grid cells (or more)
• Hundreds of processors (or more)
Grid Management
• The most critical module
- Type, choice of numerical methods
- Partition, workload, communication
- Data structure, info and data distribution
- Numbering, bandwidth, input (see the numbering sketch below)
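As a concrete illustration of cell numbering, here is a minimal sketch of natural (lexicographic) ordering on a structured grid; the helper name cell_id and the sample indices are hypothetical, not part of the platform's API.

    /* Natural (lexicographic) numbering on a structured nx x ny x nz grid:
       cells are numbered first along x, then y, then z. cell_id is a
       hypothetical helper for illustration only. */
    #include <stdio.h>

    static long cell_id(long i, long j, long k, long nx, long ny)
    {
        return i + nx * (j + ny * k);
    }

    int main(void)
    {
        /* Cell (3, 2, 1) on a 250x250x250 grid. */
        printf("%ld\n", cell_id(3, 2, 1, 250, 250));  /* prints 63003 */
        return 0;
    }

With this ordering, neighbours along y differ by nx in the global index and neighbours along z by nx*ny, so the matrix bandwidth is governed by the grid dimensions.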
Grid Type
• Hexahedron
• Structured (FD and FV)
• Unstructured (to be implemented)
Grid Partition
• Topological
- Connections, dual graph
- METIS, ParMETIS (see the sketch below)
• Geometric
- Location info: cell (centroid) coordinates
- Zoltan
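A minimal sketch of topological partitioning with METIS, assuming the dual graph of a small 2x3 structured grid has already been assembled in CSR form; the grid, arrays, and part count are illustrative only.

    #include <stdio.h>
    #include <metis.h>

    int main(void)
    {
        /* Dual graph of a 2x3 structured grid: vertices are the 6 cells,
           edges connect face neighbours, stored in CSR form. */
        idx_t nvtxs = 6, ncon = 1, nparts = 2, objval;
        idx_t xadj[]   = {0, 2, 5, 7, 9, 12, 14};
        idx_t adjncy[] = {1, 3, 0, 2, 4, 1, 5, 0, 4, 1, 3, 5, 2, 4};
        idx_t part[6];

        /* Partition into 2 parts, minimizing the edge cut, i.e. the
           number of neighbour connections crossing processors. */
        if (METIS_PartGraphKway(&nvtxs, &ncon, xadj, adjncy,
                                NULL, NULL, NULL, &nparts,
                                NULL, NULL, NULL, &objval, part) == METIS_OK)
            for (idx_t i = 0; i < nvtxs; i++)
                printf("cell %d -> part %d\n", (int)i, (int)part[i]);
        return 0;
    }

ParMETIS exposes the same model in distributed form: each process passes only its local slice of the CSR arrays plus a vertex distribution array.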
Grid Partition: ParMETIS
Linear Solver
• PDE → F(x) = 0 → Ax = b
• Newton-Raphson iteration (see the toy sketch after this list)
• Linear solvers
- Krylov solvers: GMRES, BICGSTAB
- Algebraic multi-grid solvers
- Preconditioners
• The most time-consuming module, about 60% of total runtime
• Efficient linear solvers are important
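To make the Newton-Raphson step concrete, here is a toy scalar sketch; F(x) = x*x - 2 is an illustrative choice, not a reservoir residual. Each iteration linearizes F and solves the resulting linear system J*dx = -F, which in the scalar case reduces to a division.

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double x = 1.0;                      /* initial guess */
        for (int it = 0; it < 20; it++) {
            double F = x * x - 2.0;          /* residual F(x) */
            if (fabs(F) < 1e-12) break;      /* converged */
            double J = 2.0 * x;              /* Jacobian F'(x) */
            double dx = -F / J;              /* "linear solve" J*dx = -F */
            x += dx;                         /* Newton update */
            printf("it %d: x = %.12f\n", it, x);
        }
        return 0;
    }

In the simulator the same loop appears with F and J assembled from the discretized PDE, and the division replaced by a preconditioned Krylov solve.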
Parallel BLAS
• Distributed matrix
• Distributed vector
• Matrix and vector management
• Parallel matrix & vector operations (BLAS 1/2)
• Global communication modules
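A minimal sketch of one such BLAS-1 kernel, the dot product of vectors distributed across processes: each process reduces its local entries, then one MPI_Allreduce produces the global value. The function name pdot and the toy data are illustrative.

    #include <mpi.h>
    #include <stdio.h>

    /* Dot product of distributed vectors: local partial sums followed
       by a single global reduction. */
    double pdot(const double *x, const double *y, int nloc, MPI_Comm comm)
    {
        double local = 0.0, global = 0.0;
        for (int i = 0; i < nloc; i++)
            local += x[i] * y[i];
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, comm);
        return global;
    }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        double x[4], y[4];                   /* each rank owns 4 entries */
        for (int i = 0; i < 4; i++) { x[i] = 1.0; y[i] = 2.0; }
        double d = pdot(x, y, 4, MPI_COMM_WORLD);
        if (rank == 0) printf("dot = %f\n", d);  /* 8 * #processes */
        MPI_Finalize();
        return 0;
    }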
Linear Solvers
• Solvers
- GMRES(m), ORTHOMIN(m)
- BICGSTAB, CG, CGS
- AMG (from Hypre)
• Preconditioners
- Domain decomposition preconditioner (see the block-Jacobi sketch below)
- ILU(k), ILUT and direct methods
- AMG
- Constrained pressure residual (CPR)
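A minimal sketch of the domain decomposition idea in its simplest block-Jacobi form: each process solves only its own diagonal block, so applying the preconditioner needs no communication. The tridiagonal local block and the Thomas solve here are stand-ins for the platform's actual ILU/direct local solvers.

    #include <stdlib.h>

    /* Thomas algorithm: solves a tridiagonal system in place.
       a = sub-diagonal (a[0] unused), b = diagonal, c = super-diagonal
       (c[n-1] unused), d = right-hand side, overwritten with the solution. */
    static void thomas(const double *a, const double *b, const double *c,
                       double *d, int n)
    {
        double *cp = malloc(n * sizeof *cp), *dp = malloc(n * sizeof *dp);
        cp[0] = c[0] / b[0];
        dp[0] = d[0] / b[0];
        for (int i = 1; i < n; i++) {
            double m = b[i] - a[i] * cp[i - 1];
            cp[i] = (i < n - 1) ? c[i] / m : 0.0;
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m;
        }
        d[n - 1] = dp[n - 1];
        for (int i = n - 2; i >= 0; i--)
            d[i] = dp[i] - cp[i] * d[i + 1];
        free(cp); free(dp);
    }

    /* Block-Jacobi apply z = M^{-1} r: purely local, no MPI calls. */
    void precond_apply(const double *a, const double *b, const double *c,
                       const double *r, double *z, int nloc)
    {
        for (int i = 0; i < nloc; i++) z[i] = r[i];  /* copy residual */
        thomas(a, b, c, z, nloc);                    /* local block solve */
    }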
Pre-processing
• Define keyword and well files
• Choose the model
• Set up reservoir and well data
• Set up initial and boundary conditions
• Set up parameters
Pre-processing
• Set initial conditions, read from file
- Saturation
- Porosity
- Permeability, etc.
• Parallel read via MPI-IO (see the sketch below)
• Data distributed among processors
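A minimal sketch of such a parallel property read with MPI-IO, assuming the field is stored as raw doubles in a flat binary file; the file name poro.bin, the global size, and the block layout are illustrative only.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* n global porosity values, split into contiguous blocks. */
        MPI_Offset n = 1000000;
        MPI_Offset chunk = n / size, start = rank * chunk;
        if (rank == size - 1) chunk = n - start;  /* last rank takes remainder */

        double *poro = malloc(chunk * sizeof(double));
        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "poro.bin", MPI_MODE_RDONLY,
                      MPI_INFO_NULL, &fh);
        /* Collective read: every rank reads its own block at its offset. */
        MPI_File_read_at_all(fh, start * (MPI_Offset)sizeof(double),
                             poro, (int)chunk, MPI_DOUBLE, MPI_STATUS_IGNORE);
        MPI_File_close(&fh);

        printf("rank %d read %lld values\n", rank, (long long)chunk);
        free(poro);
        MPI_Finalize();
        return 0;
    }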
Visualization
• Display grids, wells and physical properties
• Output formats for structured and unstructured grids
• Parallel read/write operations
• Graphics engine, rendering
Visualization: VTK
• VTK toolkit as the graphics engine
• VTK file formats
• Parallel write via MPI-IO (see the sketch below)
• ParaView displays the VTK files
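A minimal sketch of a parallel legacy-VTK write with MPI-IO, assuming a structured-points grid and fixed-width ASCII records so every rank can compute its own file offset; the grid size and the dummy pressure values are illustrative only.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define REC 14  /* fixed-width record: "%13.6e\n" = 14 bytes */

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        long nx = 20, ny = 20, nz = 20, n = nx * ny * nz;
        long chunk = n / size, start = rank * chunk;
        if (rank == size - 1) chunk = n - start;

        /* Every rank builds the same header so all offsets agree. */
        char hdr[512];
        int hlen = snprintf(hdr, sizeof hdr,
            "# vtk DataFile Version 3.0\npressure\nASCII\n"
            "DATASET STRUCTURED_POINTS\nDIMENSIONS %ld %ld %ld\n"
            "ORIGIN 0 0 0\nSPACING 1 1 1\n"
            "POINT_DATA %ld\nSCALARS pressure double 1\n"
            "LOOKUP_TABLE default\n", nx, ny, nz, n);

        /* Format local values as fixed-width ASCII records. */
        char *buf = malloc(chunk * REC + 1);
        for (long i = 0; i < chunk; i++)
            sprintf(buf + i * REC, "%13.6e\n", 1.0e5 + start + i);

        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "pressure.vtk",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
        if (rank == 0)  /* header once, then collective data write */
            MPI_File_write_at(fh, 0, hdr, hlen, MPI_CHAR, MPI_STATUS_IGNORE);
        MPI_File_write_at_all(fh, hlen + (MPI_Offset)start * REC,
                              buf, (int)(chunk * REC), MPI_CHAR,
                              MPI_STATUS_IGNORE);
        MPI_File_close(&fh);
        free(buf);
        MPI_Finalize();
        return 0;
    }

The resulting pressure.vtk opens directly in ParaView.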
Visualization: Example
[Figure: numerical (num) and analytical (ana) solutions compared side by side]
Visualization: Example
Numerical Experiments
• The Parallel cluster (University of Calgary), WestGrid
• 528 standard nodes
- Two 6-core Intel Xeon E5649 processors
- 24 GB memory
• InfiniBand 4X QDR, 40 Gbit/s
Exp 1: 15M Case
• GMRES
• Grid: 250x250x250
• Unknowns: 15 million
# processors          2       4       8      16     32     64
Grid time (s)     44.93   21.74   10.63    5.36   2.81   1.59
Overall time (s) 122.68   59.82   27.48   13.54   6.90   3.49
Exp 2: 125M Case
• GMRES
• Grid: 500x500x500
• Unknowns: 125 million
# processors         16      32      64     128
Grid time (s)     49.19   24.18   11.84    5.45
Overall time (s) 1258.55  662.10  338.68  166.54
Exp 3: 200M Case
• GMRES
• Grid: 585x585x585
• Unknowns: 200 million
# processors         16       32      64     128
Grid time (s)     80.54    38.93   19.82    9.52
Overall time (s) 2471.98  1286.08  670.83  346.07
Exp 4: Very large Cases
• GMRES
• Grid size: billions of cells (B = billion)
# processors          64      128      160      200
Grid size            1 B      1 B      2 B      3 B
Grid time (s)     115.76    57.56    94.74   123.62
Overall time (s) 4141.05  2029.93  3229.67  4060.92
Conclusion
• A parallel platform for reservoir simulation has been developed.
• The platform supports both finite difference and finite volume methods.
• The platform can handle very large problems, up to billions of grid cells.
Future Work
• More extensive tests using more processors
• Performance optimization
• Develop reservoir simulators on the platform
• Develop physics-based preconditioners
Sponsors