POWER SYSTEMS CONFERENCE

Bringing High Performance Computing to the Power Grid

Bruce Palmer, Pacific Northwest National Laboratory

Panel: High-Performance Computing for Future Power and Energy Systems


Outline
• Background
• Problem Statement
• Methodology
• Results and Discussions
• Conclusion and Future Work

Background

Power grid modeling challenges:
• Faster time to solution for operations
• Large number of scenarios to account for uncertainties and increasing variability
• More complicated algorithms and more detailed models
• Optimization problems are increasing in size and complexity

Background

What did we do in the past to improve performance?
• Improve code efficiency
  – Reduce page faults and cache misses
  – Eliminate redundant or unnecessary operations
  – Improve algorithms
• Buy a new computer every two years

History

Problem Statement

Does HPC have a role to play in simulating the power grid?
• Pluses
  – More flops
  – More memory
• Minuses
  – New communication cost
  – Greater program complexity

Communication Cost

[Figure: Time to solution vs. number of processors. Large problems stay computation bound out to many processors; small problems become communication bound quickly.]

HPC and the Power Grid

• Networks are small
  – 15,000–50,000 buses (nodes)
  – Comparable graphs in continuum problems can contain billions of nodes
• Number of contingencies is large
  – O(N) for N-1
  – O(N²) for N-2

Methodology

Goal: Reduce the overhead of creating parallel power grid simulations
• Create high-level abstractions for common programming motifs
• Encapsulate high-performance math libraries
• Compartmentalize functionality and promote reuse of software components
• Hide communication and indexing

GridPACK™ Framework

• GridPACK is written in C++ and is highly customizable
  – Inheritance
  – Software templates
• Runs on Linux-based clusters and workstations
• Wide variety of solvers and parallel linear algebra available through the PETSc suite of software
• Prebuilt modules
  – Power flow
  – State estimation
  – Dynamic simulation
  – Dynamic state estimation (Kalman filter)
• Robust support for task-based execution

GridPACK Software Stack

[Diagram: A power grid application sits on top of GridPACK (bus class, branch class, algebraic equations, algorithms, matrix operations and solvers), which builds on HPC math libraries: PETSc, MPI, Global Arrays, and ParMETIS.]

GridPACK Functionality

• What you supply
  – Bus and branch classes that define your power system application
  – High-level application driver describing the solution procedure
• What you get
  – Parallel network distribution and setup
  – Data exchanges between processors
  – Parallel matrix builds and projections
  – I/O of distributed data
  – Parallel solvers and linear algebra operations
  – Distributed task management
  – Application modules for use in more complex workflows

Power Flow

typedef BaseNetwork<PFBus,PFBranch> PFNetwork;
Communicator world;
shared_ptr<PFNetwork> network(new PFNetwork(world));

PTI23_parser<PFNetwork> parser(network);
parser.parse("network.raw");
network->partition();

typedef BaseFactory<PFNetwork> PFFactory;
PFFactory factory(network);
factory.load();
factory.setComponents();
factory.setExchange();

network->initBusUpdate();
factory.setYBus();
factory.setSBus();

factory.setMode(RHS);
BusVectorMap<PFNetwork> vMap(network);
shared_ptr<Vector> PQ = vMap.mapToVector();
factory.setMode(Jacobian);
FullMatrixMap<PFNetwork> jMap(network);
shared_ptr<Matrix> J = jMap.mapToMatrix();

shared_ptr<Vector> X(PQ->clone());

double tolerance = 1.0e-6;
int max_iteration = 100;
ComplexType tol = 2.0*tolerance;
LinearSolver solver(*J);
int iter = 0;

// Solve matrix equation J*X = PQ
solver.solve(*PQ, *X);
tol = X->normInfinity();

while (real(tol) > tolerance && iter < max_iteration) {
  factory.setMode(RHS);
  vMap.mapToBus(X);
  network->updateBuses();
  vMap.mapToVector(PQ);
  factory.setMode(Jacobian);
  jMap.mapToMatrix(J);
  solver.solve(*PQ, *X);
  tol = X->normInfinity();
  iter++;
}

Results: Contingency Analysis

[Figure: Time (seconds) vs. number of processors, log-log scale. Static contingency analysis of the WECC system using one processor per contingency for 3,638 contingencies.]

Dynamic Simulation

[Diagram: GridPACK dynamic simulation architecture. The dynamic simulation bus class exposes generator, governor, exciter, relay, and load interfaces to the corresponding models; the dynamic simulation branch class exposes a relay interface; both plug into the framework through the bus and branch interfaces.]

WECC Simulation

• WECC system of 17,000 buses with detailed dynamic models, 20 seconds of simulation; results compared with PowerWorld.
• Achieves faster-than-real-time simulation with 16 cores.

No. of Cores | Total Solution Time (seconds)
1            | 72.92
2            | 45.04
4            | 30.96
8            | 22.95
16           | 19.57

DS Timing Profile

[Figure: Time (sec), log scale, vs. number of processors (1–16) for total time, power flow, linear solver, make Norton, and check security.]

Faster-than-real-time simulation requires:
• A very fast serial solver
• Parallelizing all non-solver functions
• A fast computer (high memory bandwidth)

Dynamic Contingency Analysis

[Figure: Time (seconds) vs. number of processors for solve, multiply, and total. Simulation of 16 contingencies on a 16,351-bus WECC network, exploiting two levels of parallelism.]

Future Directions

• Optimization
  – Planning: long-term investment decisions
  – Operations: resiliency and economic efficiency
• Co-simulation
  – Integration of different codes into parallel workflows
  – Adapt commercial codes for use on Linux systems
• Improving HPC usability for assessment of large-scale grid resiliency and reliability

Summary

• Open-source software for running HPC power grid simulations
• Written in C++ and designed to run on Linux platforms with MPI
• Many applications already available
  – Power flow
  – Dynamic simulation
  – State estimation
  – Kalman filters
• Can be reused to develop your own applications
• Download and documentation at www.gridpack.org
• Contact us at [email protected]

Acknowledgements

• US Department of Energy (DOE) Office of Electricity (OE) Advanced Grid Modeling (AGM) Program
• US Department of Energy (DOE) Grid Modernization Laboratory Consortium (GMLC)
• US Department of Energy (DOE) Office of Advanced Scientific Computing Research (ASCR)
• US Department of Energy (DOE) Advanced Research Projects Agency-Energy (ARPA-E)
• Bonneville Power Administration (BPA) Technology Innovation Program

Questions?

• Download GridPACK at www.gridpack.org
• Contact us at [email protected]

Matrix Contributions

[Diagram: Example network with buses 1–12; several buses and branches are marked as making no matrix contribution.]

MatVecInterface

// Implemented on buses
virtual bool matrixDiagSize(int *isize, int *jsize) const
virtual bool matrixDiagValues(ComplexType *values)

// Implemented on branches
virtual bool matrixForwardSize(int *isize, int *jsize) const
virtual bool matrixReverseSize(int *isize, int *jsize) const
virtual bool matrixForwardValues(ComplexType *values)
virtual bool matrixReverseValues(ComplexType *values)

Distribute Component Contributions

Distributed Power Flow Jacobian

16,351-bus WECC system

Methods to Achieve Faster-than-Real-Time State Estimation Using HPC

Michael S. Mazzola, Ph.D., P.E.
Energy Production and Infrastructure Center (EPIC)
University of North Carolina at Charlotte

Panel: High-Performance Computing for Future Power and Energy Systems

Outline
• Background
• Problem Statement
• Methodology
• Results and Discussions
• Conclusion and Future Work

Acknowledgements
• This work was originally performed at the High Performance Computing Collaboratory (HPC²) at Mississippi State University under the sponsorship of the Department of Defense, Idaho Bailiff Program.

Collaborators:
• Brian Sullivan, Strategic Planning Engineer, Transmission Planning, Entergy Services Inc.
• Dr. Jian Shi, Assistant Professor, University of Houston

Background

Faster simulations for large-scale power systems are necessary for many purposes (planning, forecasting, real-time analysis and control, contingency analysis, etc.).

Real-time and look-ahead analysis is a long-term goal recognized by the power and energy society.

Faster-than-real-time simulation is a long-term goal that can benefit many aspects of system analysis, including planning, design, operator training, real-time response, look-ahead contingency analysis, hardware-in-the-loop simulation, and more.

Problem Statement

For dynamic simulation:
• Scale of the system to be simulated (size) trade-off
• Requirements for high-fidelity simulation, e.g. the integration of distributed energy resources and loads (complexity)
• Can result in hundreds of thousands of DAEs to be solved every time step
• Solving systems of DAEs at this scale is the motivation for this work

How to address these challenges:
• Hardware: advanced computing platforms (e.g. high-performance computing platforms)
• Algorithm design: parallelizing the process during algorithm design and implementation
• Software: proper mapping between software programs and hardware architectures

Modeling Power System Dynamics

• In general form, the power system dynamic problem can be represented by a set of first-order DAEs:

  ẋ = f(x, V, t)
  0 = g(x, V, t)

• In general, generators and loads can be represented by unique sets of DAEs describing their dynamic behaviors, while they are coupled through a large sparse linear algebraic network equation of the form:

  [I₁]   [Y₁₁ ⋯ Y₁ₙ] [V₁]
  [ ⋮ ] = [ ⋮  ⋱  ⋮ ] [ ⋮ ]
  [Iₙ]   [Yₙ₁ ⋯ Yₙₙ] [Vₙ]

Methodology

Two general methods to solve (e.g., [1]):
• Partitioned method / Alternating Solution Method (ASM)
• Simultaneous method / Direct Solution Method (DSM)

Relaxation methods can be applied to ASM or DSM. ASM has the most potential for parallelization, and many commercial solvers utilize ASM. Most works that obtain faster-than-real-time performance use ASM.

[1] B. Stott, "Power system dynamic response calculations," Proceedings of the IEEE, vol. 67, no. 2, pp. 219–241, 1979.

Methodology (cont.)

Parallel General Norton with Multiport Equivalent (PGNME)
• Mississippi State University novel preconditioning algorithm to enhance stability and convergence of an existing algorithm
• Classical model
• Trapezoidal rule
• ASM
• Coarse grained (can also utilize fine-grained methods within)
• Parallel in space (with the possibility of utilizing parallel in time)

J. Shi, B. Sullivan, M. Mazzola, B. Saravi, U. Adhikari, T. Haupt, "A Relaxation-based Network Decomposition Algorithm for Parallel Transient Stability Simulation with Improved Convergence," IEEE Transactions on Parallel and Distributed Systems, vol. 29, no. 3, pp. 496–511, March 2018. doi: 10.1109/TPDS.2017.2769649

B. Sullivan, J. Shi, M. Mazzola, B. Saravi, "Faster-than-real-time power system transient stability simulation using Parallel General Norton with Multiport Equivalent (PGNME)," 2017 IEEE Power & Energy Society General Meeting, Chicago, IL, 2017, pp. 1–5. doi: 10.1109/PESGM.2017.8274433

Methodology (cont.)

PGN represents the basic iterative method driving PGNME.

PGN is known from the literature; the iterative method derives from work done in Russia in the 1990s, where it was applied to a two-partition system for EMTP simulation.

This work adds to the existing literature by reporting, for the first time, the performance of this method when extended to many partitions. The extension is called Parallel General Norton with Multiport Equivalent (PGNME).

[Diagram: decomposition of the network into sub-problems (sub-problem q shown).]

Simulator Development

Based on these principles, a scalable solver was developed in C++ and tested on realistic power systems (9-bus, 118-bus, and ~45k-bus).

HPC Hardware: Shadow II

Hardware configuration:
• Cray CS300-LC
• 110 computational nodes
• 2 Intel Xeon E5-2680 v2 10C 2.8 GHz processors per node (2,200 total cores)
• 2 Intel Xeon Phi 5110P coprocessors
• 512 GB RAM per node (56.3 TB total)
• 80 GB SSD per node
• FDR InfiniBand
• FDR InfiniBand director-class switch

Case Study

• A 45,552-bus model of the Eastern Interconnect
  – Consisting of 7,167 generators
  – Largest-scale system reported in the literature to run in real time
• Each generator modeled with the classical DAE model
• Loads are constant impedance

[Diagram: PGNME decomposes the coupled linear network equation; component dynamic equations remain local.]

Results

• Trapezoidal rule with a 5 ms time step yields a 1.0076-second solution time for a 10-second simulation
  – That is 10x faster than real time with a 5 ms time step on a 45k-bus system
• Trapezoidal rule with a 1 ms time step yields a 9.0086-second solution time for a 10-second simulation
  – FTRT achieved with the smallest time step on the largest system among the surveyed literature
• Highlight of the work
  – Same modeling strategies: classical model, positive-sequence balanced
  – 10x FTRT results on a 45k-bus system

Discussion: Comparison to Other Work

Literature                | Year      | # of buses | # of gens | Time step (ms) | FTRT-R*
Ye et al. [1]             | 2008      | 1923       | 173       | 20             | 1.21
Soykan et al. [2]         | 2012      | 7935       | 2135      | 10             | 2.60
Jalili-Marandi et al. [3] | 2012      | 7020       | 1800      | 10             | 1.11
Jin et al. [4][5]         | 2013/2017 | 16072      | 2361      | 5              | 4.26
PGNME                     | 2017      | 45552      | 7167      | 5              | 9.92

*FTRT-R = faster-than-real-time ratio = simulated time / wall-clock time

[1] J. Ye, Z. Liu, L. Zhu, "The Implementation of a Spatial Parallel Algorithm for Transient Stability Simulation on PC Cluster," 2nd IEEE Conference on Industrial Electronics and Applications, Harbin, 2008.
[2] G. Soykan, A. J. Flueck, H. Dag, "Parallel-in-space implementation of transient stability analysis on a Linux cluster with InfiniBand," North American Power Symposium (NAPS), Champaign, IL, 2012.
[3] V. Jalili-Marandi, F. J. Ayres, E. Ghahremani, J. Belanger, V. Lapointe, "A real-time dynamic simulation tool for transmission and distribution power systems," 2012 IEEE Power and Energy Society General Meeting, San Diego, CA, 2012, pp. 1–7.
[4] S. Jin, Z. Huang, R. Diao, D. Wu, Y. Chen, "Parallel implementation of power system dynamic simulation," 2013 IEEE Power & Energy Society General Meeting, Vancouver, BC, 2013.
[5] S. Jin, Z. Huang, R. Diao, D. Wu, Y. Chen, "Comparative Implementation of High Performance Computing for Power System Dynamic Simulations," IEEE Transactions on Smart Grid, vol. 8, no. 3, pp. 1387–1395, May 2017.

Conclusions and Future Work

• HPC or specialized computing hardware appears to be the most attractive way to achieve real-time results on a system of significant size.
• PGNME appears to be an attractive method for FTRT transient stability simulation in an HPC environment.
• PGNME still has room for improvement: partitioning optimization could enhance performance and ensure lower iteration counts.
• PGNME can be explored for EMTP and investigated in conjunction with temporal decomposition.

High Performance Computing Applications in Power Grid: Past, Present, and Future

Shrirang Abhyankar
Power Systems Scientist, Energy Systems Division
Argonne National Laboratory

Panel: High-Performance Computing for Future Power and Energy Systems

IEEE PES HPC WG

• PES working group on HPC applications
• Active for 10+ years
• Over 100 members
• Panel sessions every year
• http://sites.ieee.org/pes-hpcgrid/
• Special edition in IEEE Transactions on Smart Grid (17 papers)

High Performance Computing (HPC) in Power Systems

• Computation on more than one processing unit
• HPC is the prevalent term in the supercomputing world, but supercomputing ≠ HPC
• HPC in power systems since the 1990s
  – D. Falcao: High performance computing in power systems
  – R. Green: Applications and trends of HPC in electric power systems
• Inherently parallel applications (scenario-based)
  – Contingency analysis
  – Monte Carlo-based analysis
• Master-slave
  – SCOPF, SCUC
• Difficult to parallelize
  – Load flow, transient stability, electromagnetic transients simulation

Microprocessor Trends

Evolution of computing architectures

Power Grid HPC Applications: Past

Drivers: lack of memory/computing; exploring parallel methods/algorithms

[Timeline, 1979–2000: transient stability (Alvarado 1979, "Parallel Solution of Transient Problems by Trapezoidal Integration"), small-signal stability (1990), EMTP (1994), contingency analysis (1996), and state estimation, power flow, OPF, and SCOPF (by 2000).]

Power Grid HPC Applications: Present

Drivers: renewable generation; new architectures; HPC technology maturation

[Timeline, 2000–2018: transient stability, contingency analysis, EMTP, state estimation and dynamic state estimation, power flow and probabilistic power flow, OPF and SCOPF/SCUC, distribution (2009), and ML for OPF.]

Power Grid HPC Applications: Future

• Uncertainty quantification
• Transmission-distribution(-communication?) co-simulation
• Infrastructure interdependencies
• Large-scale real-time electromagnetics simulation
• Data analytics, machine learning, and AI
• Large-scale combinatorial problems (quantum computing)

Open-Source Co-Simulation Platform for Transmission-Distribution-Communication Simulations

[Diagram: IGMS architecture combining the best of existing tools. FESTIV (ISO markets, UC & AGC) with a runtime plug-in and MATPOWER (transmission/bulk AC power flow, Volt/VAr) couple through bus aggregators and the IGMS-Interconnect (MPI, ZMQ) to many GridLAB-D instances (distribution power flow, home and appliance physics) and alternate distribution models (timeseries, etc.) over HTTP, with scenario automation spanning the ISO, transmission, distribution, building, and appliance layers. Use-case requirements from FNCS/GridLAB-D, FSKit/GridDyn, and IGMS/FESTIV feed a new platform design ("TDC Tool") covering end-use control, markets, communication, distribution, and transmission.]

https://github.com/GMLC-TDC/HELICS-src
Sponsored by the Grid Modernization Laboratory Consortium (GMLC) effort

HPC @ ANL

• Supercomputing machines
• Open-source HPC numerical solvers: TAO, PIPS, DSP
• Applications: power system dynamics, stochastic optimization, power flow

[Figure: case21k scalability. Execution time (s) vs. number of cores (1–16) for alternating RK2/RK3 and implicit CN, ROSW2–4, and RK2/RK3 variable-step schemes; 20 s marks the boundary between slower- and faster-than-real-time. Real-time simulation speed achieved using ROSW + GMRES + overlapping Schwarz preconditioner (overlap = 3).]


QUESTIONS?

Factors to Consider Before Porting to HPC

• What type of parallelism can the application exploit?
  – Scenarios, spatial, temporal?
• What is the problem size?
  – Rule of thumb: ~5K variables per processor
• How much scalability does the hardware offer?
  – STREAMS benchmark (UFlorida benchmark)
• What numerical problem are you trying to solve?
  – Linear, nonlinear, time-stepping, optimization?
  – Dense, sparse?
  – Spectral structure (eigenvalues)?
• Which programming language?
  – C, C++, Fortran, Python
• Operating system?
  – Windows, Unix, Linux?

Xeon CPU, Nvidia GPU, and Intel Xeon Phi Comparison

Electrical Digital Twin

Joseph Canterino, Siemens Power Technologies International

Panel: High-Performance Computing for Future Power and Energy Systems

Outline
• Background
• Problem Statement
• Methodology
• Results and Discussions
• Conclusion and Future Work


Handling Uncertainties via Situational Intelligence

G. Kumar Venayagamoorthy, PhD, MBA, FIET, FSAIEE
Duke Energy Distinguished Professor &
Director & Founder of the Real-Time Power and Intelligent Systems Laboratory
The Holcombe Department of Electrical & Computer Engineering
Clemson University, SC 29634, USA
E-mail: [email protected]
http://people.clemson.edu/~gvenaya
http://rtpis.org

September 5, 2018
NSF: IIP #1312260, EFRI #1238097, and ECCS #1231820

August 14, 2003 Blackout

[Images: night-time satellite views, a regular night vs. August 14, 2003.]

• > 60 GW of load lost
• > 50 million people affected
• Import of ~2 GW caused reactive power to be consumed
• Eastlake 5 unit tripped
• Stuart-Atlanta 345 kV line tripped
• MISO was in the dark
• A possible load loss (up to 2.5 GW)
• Inadequate situational awareness

Situational Awareness in Control Centers

• When a disturbance happens, the operator is thinking:
  – Received a new alert!
  – Is any limit in violation? If so, how bad?
  – Where is the problem located?
  – What is the cause?
  – Is any immediate corrective or mitigative action possible? What is the action?
  – Immediate implementation, or can it wait?
  – Has the problem been addressed?
  – Is any follow-up action needed?
• SA aims to look into a complex system from many different perspectives in a holistic manner.
• Local regions are viewed microscopically and the entire system macroscopically.

Situational Intelligence (SI)

• Integrate historical and real-time data to implement near-future situational awareness:
  Intelligence (near future) = function(history, current status, some predictions)
• Predict security and stability limits
  – Real-time operating conditions
  – Dynamic models
  – Forecast load
  – Predict/forecast generation
  – Contingency analysis
• Advanced wide-area visualization
  – Integrate all applications
  – Topology updates and geographical influence (PI and GIS, e.g. Google Earth tools) for predictions

Prediction is critical for real-time monitoring.

Big Data

Volume, Velocity, Variety, Veracity & Value

Visualization

Data is mission-critical.

Cellular Computational Network

A cellular computational network (CCN) consists of computational units connected to each other in a distributed manner. CCNs are suited to modeling systems with temporal and spatial dynamics.

[Equation: each unit's output Oᵢ(k) is a function fᵢ of the outputs O₁(k), …, Oₙ(k) of the connected units and a local input uᵢ(k).]

Multi-Dimensional, Multi-Layered CCN

Smart grid sensors (PMUs, speed, voltage, etc.) feed a stack of CCN layers (1, 2, 3, …, D−1, D) estimating:
• Modes of oscillation
• Speed deviations
• Power losses / power flows
• Bus voltages
• Transient stability margins
• Voltage stability margins
• Frequency
• Load shedding index
• Network system security index

Patent: US #9,507,367, SA/SI System and Method for Analyzing, Monitoring, Predicting and Controlling Electric Power Systems

Scalable CCN-Based System Monitoring

[Diagram: a 16-cell CCN (C1–C16) overlaid on the power system.]

Luitel B, Venayagamoorthy GK, "Decentralized Asynchronous Learning in Cellular Neural Networks," IEEE Transactions on Neural Networks, vol. 23, no. 11, pp. 1755–1766, November 2012.

CCN-Based System Monitoring

CCN: speedNet

Real-Time Power and Intelligent Systems Lab (http://rtpis.org)

[Diagram: the 16-cell network (C1–C16) configured as speedNet.]

Computational Network for Generator G10

[Diagram: the cells feeding generator G10's computational unit, C10, including C1, C2, C3, and C11–C16.]


Generator G10 Responses

Thank You!

G. Kumar Venayagamoorthy
Director and Founder of the Real-Time Power and Intelligent Systems Laboratory &
Duke Energy Distinguished Professor of Electrical and Computer Engineering
Clemson University, Clemson, SC 29634
http://rtpis.org
[email protected]

September 7th, 2018