Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing

Berkeley, CA, USA

Alexandru Iosup, Nezih Yigitbasi, Dick Epema
Parallel and Distributed Systems Group, Delft University of Technology, The Netherlands

Simon Ostermann, Radu Prodan, Thomas Fahringer
Distributed and Parallel Systems, University of Innsbruck, Austria


TRANSCRIPT

Page 1: Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing


Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing

Berkeley, CA, USA

Alexandru Iosup, Nezih Yigitbasi, Dick Epema
Parallel and Distributed Systems Group, Delft University of Technology, The Netherlands

Simon Ostermann, Radu Prodan, Thomas Fahringer

Distributed and Parallel Systems, University of Innsbruck, Austria

Page 2: Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing

About the Team

• Team’s recent work in performance:
• The Grid Workloads Archive (Nov 2006)
• The Failure Trace Archive (Nov 2009)
• The Peer-to-Peer Trace Archive (Apr 2010)
• Tools: GrenchMark workload-based grid benchmarking, other monitoring and performance evaluation tools

• Speaker: Alexandru Iosup
• Systems work: Tribler (P2P file sharing), Koala (grid scheduling), POGGI and CAMEO (massively multiplayer online gaming)
• Grid and peer-to-peer workload characterization and modeling

Page 3: Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing

Many-Tasks Scientific Computing

• Jobs comprising many tasks (1,000s) necessary to achieve some meaningful scientific goal
• Jobs submitted as bags-of-tasks or over short periods of time
• High-volume users over long periods of time
• Common in grid workloads [Ios06] [Ios08]
• No practical definition (from “many” to “10,000/h”)

Page 4: Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing

Cloud Futures Workshop 2010 – Cloud Computing Support for Massively Social Gaming

The Real Cloud

“The path to abundance”
• On-demand capacity
• Cheap for short-term tasks
• Great for web apps (EIP, web crawl, DB ops, I/O)

VS

“The killer cyclone”
• Not-so-great performance for scientific applications1 (compute- or data-intensive)
• Long-term performance variability2

http://www.flickr.com/photos/dimitrisotiropoulos/4204766418/ Tropical Cyclone Nargis (NASA, ISSS, 04/29/08)

1- Iosup et al., Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing, (under submission).
2- Iosup et al., On the Performance Variability of Production Cloud Services, Technical Report PDS-2010-002. [Online] Available: http://pds.twi.tudelft.nl/reports/2010/PDS-2010-002.pdf

Page 5: Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing

Research Question and Previous Work

Do clouds and many-tasks scientific computing fit well, performance-wise?

• Virtualization overhead
• Loss below 5% for computation [Barham03] [Clark04]
• Loss below 15% for networking [Barham03] [Menon05]
• Loss below 30% for parallel I/O [Vetter08]
• Negligible for compute-intensive HPC kernels [You06] [Panda06]

• Cloud performance evaluation
• Performance and cost of executing scientific workflows [Dee08]
• Study of Amazon S3 [Palankar08]
• Amazon EC2 for the NPB benchmark suite [Walker08] or selected HPC benchmarks [Hill08]

Theory: just use the virtualization overhead results. Practice?

Page 6: Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing


Agenda

1. Introduction & Motivation
2. Proto-Many Task Users
3. Performance Evaluation of Four Clouds
4. Clouds vs Other Environments
5. Take Home Message

Page 7: Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing

Proto-Many Task Users

MTC user
• At least J jobs in B bags-of-tasks

Trace-based analysis
• 6 grid traces, 4 parallel production environment traces
• Various criteria (combinations of values for J and B)

Results
• “number of BoTs submitted ≥ 1,000 & number of tasks submitted ≥ 10,000”
• Easy to grasp + dominate most traces (jobs and CPUTime) + 1-CPU jobs

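The threshold criterion above can be sketched as a simple filter over per-user trace statistics. A minimal sketch, assuming per-user counts have already been extracted from a trace; the users and counts below are hypothetical, not taken from the actual traces:

```python
# Classify trace users as proto-MTC users using the thresholds from the
# slide: at least B_MIN bags-of-tasks and at least J_MIN tasks submitted.
B_MIN = 1_000    # minimum number of BoTs submitted
J_MIN = 10_000   # minimum number of tasks submitted

def is_proto_mtc_user(num_bots: int, num_tasks: int) -> bool:
    """Return True if a user meets both MTC thresholds."""
    return num_bots >= B_MIN and num_tasks >= J_MIN

# Hypothetical per-user counts (num_bots, num_tasks).
users = {
    "u1": (1_500, 42_000),   # many bags AND many tasks -> proto-MTC
    "u2": (30, 250_000),     # many tasks, too few bags -> not proto-MTC
    "u3": (2_000, 8_000),    # many bags, too few tasks -> not proto-MTC
}
mtc_users = [u for u, (b, j) in users.items() if is_proto_mtc_user(b, j)]
print(mtc_users)  # -> ['u1']
```

Both conditions must hold at once, which is what makes the criterion "easy to grasp" while still singling out the high-volume bag-of-tasks users.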

Page 8: Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing


Agenda

1. Introduction & Motivation
2. Proto-Many Task Users
3. Performance Evaluation of Four Clouds
   1. Experimental Setup
   2. Selected Results
4. Clouds vs Other Environments
5. Take Home Message

Page 9: Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing

Experimental Setup: Environments

Four commercial IaaS clouds (NIST definitions)
• Amazon EC2
• GoGrid
• Elastic Hosts
• Mosso

No Cluster instances (not released in Dec ’08–Jan ’09)

Page 10: Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing

Experimental Setup: Experiment Design

Principles
• Use complete test suites
• Repeat 10 times
• Use defaults, not tuning
• Use common benchmarks
• Compare results with results for other systems

Types of experiments
• Resource acquisition and release
• Single-Instance (SI) benchmarking
• Multiple-Instance (MI) benchmarking


Page 11: Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing

Resource Acquisition: Can Matter

• Can be significant
• For single instances (GoGrid)
• For multiple instances (all)

• Short-term variability can be high (GoGrid)
• Slow long-term growth

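A minimal sketch of how such acquisition measurements can be summarized as the slide does (a mean plus a short-term variability measure); the sample latencies below are hypothetical, not the measured data:

```python
# Summarize resource-acquisition latencies: mean plus coefficient of
# variation (CoV = stdev / mean). A high CoV signals the kind of
# short-term variability the slide reports for GoGrid.
from statistics import mean, stdev

acquisition_s = [62, 58, 71, 240, 65, 60, 59, 310, 63, 66]  # hypothetical, seconds

avg = mean(acquisition_s)
cov = stdev(acquisition_s) / avg  # dimensionless spread relative to the mean
print(f"mean {avg:.0f}s, CoV {cov:.2f}")
```

The two outliers (240 s and 310 s) dominate the CoV even though most acquisitions finish in about a minute, which is exactly why a mean alone understates the user-visible behavior.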

Page 12: Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing

Single Instances: Compute Performance Lower Than Expected

• ECU = 4.4 GFLOPS (at 100% efficient code) = 1.1 GHz 2007 Opteron x 4 FLOPS/cycle (full pipeline)
• In our tests: 0.6-0.8 GFLOPS
• Sharing of the same physical machines (working set)
• Lack of code optimizations beyond –O3 –funroll-loops
• Metering requires more clarification

• Instances with excellent float/double addition performance may have poor multiplication performance (c1.medium, c1.xlarge)

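As a sanity check, the ECU arithmetic on this slide can be reproduced directly; the script below only restates the slide's own numbers:

```python
# The ECU peak quoted on the slide: a 1.1 GHz 2007 Opteron core retiring
# 4 floating-point operations per cycle with a full pipeline.
clock_ghz = 1.1
flops_per_cycle = 4
peak_gflops = clock_ghz * flops_per_cycle  # 4.4 GFLOPS per ECU

# Measured range from the tests: 0.6-0.8 GFLOPS.
for measured in (0.6, 0.8):
    efficiency = measured / peak_gflops
    print(f"{measured} GFLOPS -> {efficiency:.0%} of peak")
# 0.6 GFLOPS -> 14% of peak
# 0.8 GFLOPS -> 18% of peak
```

So "lower than expected" here means reaching only roughly a seventh to a fifth of the advertised per-ECU peak.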

Page 13: Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing

Multi-Instance: Low Efficiency in HPL

Peak Performance
• 2 x c1.xlarge (16 cores) @ 176 GFLOPS; HPCC-227 (Cisco, 16c) @ 102, HPCC-286 (Intel, 16c) @ 180
• 16 x c1.xlarge (128 cores) @ 1,408 GFLOPS; HPCC-224 (Cisco, 128c) @ 819, HPCC-289 (Intel, 128c) @ 1,433

Efficiency
• Cloud: 15-50% even for small (<128) instance counts
• HPC: 60-70%
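The efficiency figures follow from dividing achieved HPL performance (Rmax) by theoretical peak (Rpeak). The Rpeak below is the slide's 16 x c1.xlarge configuration; the achieved values are hypothetical placeholders chosen to land on the 15% and 50% endpoints of the slide's cloud range:

```python
# HPL efficiency: achieved performance over theoretical peak.
def hpl_efficiency(rmax_gflops: float, rpeak_gflops: float) -> float:
    return rmax_gflops / rpeak_gflops

rpeak = 1_408.0  # 16 x c1.xlarge, 128 cores (from the slide)
for rmax in (211.2, 704.0):  # hypothetical achieved values
    print(f"Rmax {rmax} GFLOPS -> {hpl_efficiency(rmax, rpeak):.0%} efficiency")
# Rmax 211.2 GFLOPS -> 15% efficiency
# Rmax 704.0 GFLOPS -> 50% efficiency
```

By the same measure, the HPC systems in the comparison sustain 60-70% of their peak, which is the gap the slide is pointing at.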

Page 14: Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing


Cloud Performance Variability

• Performance variability of production cloud services
• Infrastructure: Amazon Web Services
• Platform: Google App Engine

• Year-long performance information for nine services
• Finding: about half of the cloud services investigated in this work exhibit yearly and daily patterns; the impact of performance variability depends on the application.

A. Iosup, N. Yigitbasi, and D. Epema, On the Performance Variability of Production Cloud Services, (under submission).

[Chart: Amazon S3, GET US HI operations]
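A minimal sketch of how a daily pattern can be exposed in such a year-long trace: bin the samples by hour of day and compare the per-hour means. The trace below is synthetic (a sine wave over a flat baseline), not the measured data:

```python
# Detect a daily pattern by binning hourly samples by hour-of-day.
import math
from collections import defaultdict

# Synthetic year-long trace: one sample per hour, with an injected
# 24-hour cycle (amplitude 20 around a baseline of 100).
samples = [(h, 100 + 20 * math.sin(2 * math.pi * h / 24))
           for day in range(365) for h in range(24)]

by_hour = defaultdict(list)
for hour, value in samples:
    by_hour[hour].append(value)

hourly_mean = {h: sum(v) / len(v) for h, v in by_hour.items()}
spread = max(hourly_mean.values()) - min(hourly_mean.values())
print(f"peak-to-trough daily spread: {spread:.1f}")  # ~40.0 for this trace
```

A spread well above the trace's noise level across hour-of-day bins is the signature of the daily patterns the finding refers to; the same binning by month exposes yearly patterns.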

Page 15: Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing


Agenda

1. Introduction & Motivation
2. Proto-Many Task Users
3. Performance Evaluation of Four Clouds
4. Clouds vs Other Environments
5. Take Home Message

Page 16: Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing

Clouds vs Other Environments

• Trace-based simulation with the DGSim grid simulator
• Compute-intensive workloads, no data I/O

• Source environment vs cloud with source-like performance vs cloud with real (measured) performance
• Slowdown for sequential jobs: 7 times; for parallel jobs: 1-10 times

• Results
• Response time 4-10 times higher in real clouds
• Good for short-term, deadline-driven projects

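As a back-of-the-envelope illustration of the 4-10x result, assuming a hypothetical 30-minute baseline response time in the source environment:

```python
# Apply the measured slowdown range (response time 4-10x higher in real
# clouds) to a source-environment job. The baseline is a hypothetical
# example, not a value from the traces.
baseline_min = 30.0  # response time in the source environment, minutes
for slowdown in (4, 10):
    cloud_min = baseline_min * slowdown
    print(f"slowdown {slowdown}x -> {cloud_min:.0f} min in the cloud")
# slowdown 4x -> 120 min in the cloud
# slowdown 10x -> 300 min in the cloud
```

A job that still finishes before its deadline after such a stretch is what makes clouds attractive for short-term, deadline-driven projects despite the lower raw performance.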

Page 17: Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing


Take Home Message

• Many-Tasks Scientific Computing
• Quantitative definition: J jobs and B bags-of-tasks
• Extracted proto-MT users from grid and parallel production environments

• Performance Evaluation of Four Commercial Clouds
• Amazon EC2, GoGrid, Elastic Hosts, Mosso
• Resource acquisition, single- and multi-instance benchmarking
• Low compute and networking performance

• Clouds vs Other Environments
• An order of magnitude better performance needed for clouds
• Clouds already good for short-term, deadline-driven scientific computing

Page 18: Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing


Potential for Collaboration

• Other performance evaluation studies of clouds
• The new Amazon EC2 instance type: Cluster Compute
• Other clouds?
• Data-intensive benchmarks

• General logs
• Failure Trace Archive
• Grid Workloads Archive

• …

Page 19: Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing


Thank you! Questions? Observations?

More Information:
• The Grid Workloads Archive: gwa.ewi.tudelft.nl
• The Failure Trace Archive: fta.inria.fr
• The GrenchMark perf. eval. tool: grenchmark.st.ewi.tudelft.nl
• Cloud research: www.st.ewi.tudelft.nl/~iosup/research_cloud.html
• See the PDS publication database at: www.pds.twi.tudelft.nl/

email: [email protected]

Big thanks to our collaborators: U. Wisc.-Madison, U Chicago, U Dortmund, U Innsbruck, LRI/INRIA Paris, INRIA Grenoble, U Leiden, Politehnica University of Bucharest, Technion, …