Transcript
Page 1

© Jesper Larsson Träff, WS16/17

Introduction to Parallel Computing „Paralleles Rechnen“

Jesper Larsson Träff, [email protected]

Parallel Computing, 184-5, Favoritenstrasse 16, 3rd floor

Office hours („Sprechstunde“): by email appointment

Page 2

Parallel Computing

Parallel computers are around and everywhere

(it was not always like that, although…) Why is that? What are they good for? What is a parallel computer?

Page 4

Parallel Computing

What is a parallel computer? A first picture:

[Figure: processors p0, p1, p2, …, pi connected to a common shared memory]
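To make the picture concrete, here is a minimal, hedged C/OpenMP sketch (not part of the slides; the compile command is an assumption) in which each thread plays the role of one processor pi and writes to a common shared array:

/* shared.c -- hedged illustration: p threads sharing one array.
   Assumed build: gcc -fopenmp shared.c -o shared */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void)
{
    int p = omp_get_max_threads();           /* number of "processors" p0, p1, ..., p(p-1) */
    int *shared = calloc(p, sizeof(int));    /* one array in shared memory, visible to all threads */

    #pragma omp parallel num_threads(p)
    {
        int i = omp_get_thread_num();        /* this thread acts as processor pi */
        shared[i] = i * i;                   /* each thread writes its own cell of the shared array */
    }

    for (int i = 0; i < p; i++)
        printf("p%d wrote %d\n", i, shared[i]);
    free(shared);
    return 0;
}

The point of the picture: all threads read and write the same memory; doing this correctly and efficiently is the topic of the shared-memory part of the lecture.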

Page 5

Parallel Computing

Parallel computers are around and everywhere

(it was not always like that, although…)

How to use them?

• Efficiently?
• In practice?

How do they look?

Algorithms, Languages, Interfaces, (Applications)

Architecture, Models

Why is that? What are they good for? What is a parallel computer?

Page 6

Parallel Computing: Prerequisites

Some basics on:

• Programming, programming languages (we will use C…)
• Algorithms and data structures; asymptotic, worst-case analysis of algorithms, O(f(n))
• Computer architecture
• Operating systems

Interest in solving problems faster and better in theory and practice…
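As a hedged reminder of the O(f(n)) notation assumed above (not spelled out on the slide): a worst-case running time T(n) is in O(f(n)) if it is eventually bounded by a constant multiple of f(n),

T(n) \in O(f(n)) \iff \exists\, c > 0,\ n_0 \ge 0 : T(n) \le c \cdot f(n) \ \text{for all}\ n \ge n_0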

Page 7

This VU: Introduction to parallel computing

• Introduction: Aims, motivation, basics, history (Amdahl's Law, Moore's „Law“, …); examples
• Shared memory parallel computing
  • Concrete languages: OpenMP, pthreads, Cilk
• Distributed memory parallel computing
  • Concrete interface: MPI (Message-Passing Interface)
• New architectures, new languages (GPU, CUDA, OpenCL)
• Other languages, paradigms

Theory and PRACTICE: learning by doing (the project)
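Since Amdahl's Law is listed above, a hedged reminder of its usual form (the derivation follows later in the lecture): if a fraction s of the work is inherently sequential and the remaining 1 - s parallelizes perfectly over p processors, the speedup is bounded by

S(p) = \frac{T(1)}{T(p)} = \frac{1}{s + (1 - s)/p} \le \frac{1}{s}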

Page 8

Introduction to parallel computing

Focus on principles: parallel algorithms, (architectures), languages and interfaces

Standard, paradigmatic, current, widely used languages and interfaces (MPI, OpenMP, pthreads/C threads, Cilk)

Lots of approaches, languages, and interfaces will not be treated (Java threads, TBB, UPC, CUDA, OpenACC, …).

Follow up later: Bachelor's thesis, project, Master's thesis, Master's lectures, seminars. See us!

Page 9

Introduction to parallel computing: Elements

Lectures: more or less every Monday during the semester

Projects: where you learn; Q&A sessions will be offered

Exam: based on the project, but covers all material

This is the last time the lecture is offered in the winter semester (WS). The next round will be in SS 2019 (as a mandatory course in “Software and Information Engineering”).

Page 10

Lecturer, personnel

• Prof. Dr. Jesper Larsson Träff, lecture, examination, everything

• Dr. Sascha Hunold, projects, accounts, systems
• Markus Spreitzer, systems

Parallel Computing group: www.par.tuwien.ac.at; course page: www.par.tuwien.ac.at/teaching/2017w/184.710.html

Office hours („Sprechstunde“) by email appointment: [email protected]

Page 11

Lectures, projects, exam

• Monday, 10:00-12:00 MANDATORY
• Occasionally: Thursday, 10:00-12:00 (also MANDATORY)

MONDAY: Freihaus Hörsaal 7 (FH7)
THURSDAY: EI 5 Hochenegg (Gusshausstrasse 25)

Project work: ON YOUR OWN. There will be Q&A sessions (Thursday and Monday slots).

You can start early and complete the project before the end of the lectures; discussion/examination is at the end of the semester (late January, early February).

Page 12

Lectures, exercises, projects

Capacity?

The lecture was originally planned for 50+ students…

We have only two parallel systems and limited manpower…

SIGN UP in TISS. SIGN OFF in TISS if you decide not to follow the lecture; it makes administration easier.

Page 13

Requirements, credit (4 hours/week, 6 ECTS)

• Lecture attendance MANDATORY
• Active participation during lectures
• Presentation of project work (exam) MANDATORY
• Hand-in of project work MANDATORY:
  1. Short write-up
  2. Program code
  3. Results

Practical project work: Should be done in groups of 2

NOTE: See me („Sprechstunde“, by email appointment) in case of problems with the schedule (unable to finish the project in time).

Page 14

GRADE: Based on project presentation (exam) and hand-in

Page 15

NOTE: Solutions to the project exercises can possibly be found somewhere. Don't cheat yourself!

Page 16

NOTE: Solutions to the project exercises can possibly be found somewhere. Don't cheat us: be open about what you took from others; plagiarism will automatically result in a Failing Grade (6)!

Page 17

Grading: DON’T OPTIMIZE

Active participation in lectures…

Written project solution, quality of programs (correctness, performance, readability), oral explanation, knowledge of course material

Each project will consist of 3 parts; deliberately, not everything is said explicitly (but enough should be said)

Very roughly:
• 1-2: all parts solved, performance/speed-up achieved, everything can be explained
• 2-3: 2 out of 3…
• Fail: less than 1 out of three

Page 18

Grading: DON’T OPTIMIZE

Groups of two: stand or fall as a group; ideally both get the same grade.

This means: both group members should have contributed to, and feel responsible for, all parts of the solution.

Page 19

ECTS breakdown

• Planning, intro („Vorbesprechung“): 1h
• Lectures: 15 x 2h = 30h
• Preparation: 45h
• OpenMP: 20h
• Cilk: 20h
• MPI: 20h
• Write-up: 10h
• Presentation, including preparation: 4h

Total: 150h = 6 ECTS
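As a check of the arithmetic (the usual conversion of 25 h per ECTS credit is an assumption here, not stated on the slide):

1 + 30 + 45 + 20 + 20 + 20 + 10 + 4 = 150\,\mathrm{h}, \qquad \frac{150\,\mathrm{h}}{25\,\mathrm{h/ECTS}} = 6\ \mathrm{ECTS}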

Page 20

Projects

• Programming projects using the three main languages/interfaces covered in the lecture (OpenMP/pthreads, Cilk, MPI). Each project will explore the same problem in all three paradigms

• There will be 4 possible projects. Select and solve 1

• Focus on achieving and documenting improved performance (good benchmarking)

• Correctness first!

• (Some) room for creativity
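For the performance goal above, a hedged reminder of the usual measures (defined properly later in the lecture): speedup over a good sequential implementation and parallel efficiency,

S_p = \frac{T_{\mathrm{seq}}}{T_p}, \qquad E_p = \frac{S_p}{p},

where T_seq is the running time of the sequential reference and T_p the running time with p cores or processes.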

Page 21

Project exercises, credit

Project exercises in groups of two

Document the solution with code and a short report

Code: readable, compilable, correct

Report:
• IN ENGLISH (as far as possible), DEUTSCH ERLAUBT (German is allowed)
• State the problem and hypothesis; explain (briefly) the solution, implementation details and issues, and the state of the solution (correct, what works, what not); describe the testing and benchmarking approach; document performance (plots or tables)
• Compare/comment on the paradigms
• 8-15 pages per exercise, including performance plots

Page 22

Schedule, TENTATIVE

9.10: Overview/plan („Vorbesprechung“). Motivation, definition
19.10 (Thursday): Concepts, speed-up, Amdahl's law
23.10: Parallelization paradigms, performance notions. Merging
30.10: HPC, Moore's „law“. Prefix-sums
6.11: Shared memory, pthreads
13.11: OpenMP
20.11: OpenMP
27.11: Cilk
4.12: Cilk
11.12: Distributed memory, MPI
18.12: MPI
8.1.18: Advanced MPI
15.1: Performance

Project work runs alongside the lectures

15.1.2018: Project hand-in
9.11 (Thursday): Presentation of the projects (indeed, some Thursdays are used)

All you need for the project is covered by these lectures

Page 23

Schedule, TENTATIVE

Idea: all basics and all 3 interfaces (OpenMP, Cilk, MPI) covered before Christmas (some Thursdays may be necessary).
January: other architectures and interfaces; project work

Project hand-in: 15.1.2018
Exams: 22.1-26.1.2018

IF there are problems with any of these dates, contact me in advance! Later hand-in of projects is NOT possible (a later or earlier examination may be, with good reason).

Per group sign-up in TISS

Strict deadline, no extensions

Page 24

After the lecture

• EXAM

• Feedback appreciated (e.g., in TISS)

• Constructive criticism always helpful (sometimes things change)

• Other related courses in master curriculum

Page 25

Literature, course material

Slides in English. Will be made available at

www.par.tuwien.ac.at/teaching/2017w/184.710.html

Look here and in TISS for information (cancelled lectures, change of plans, …). I will try to keep it up to date and timely (but lectures will not be ready much in advance…).

No script; the slides should be enough for doing the project work, and additional material can be found easily. Read a book!

Page 26

Organizational

TUWEL is used for:

• Forming the groups
• Getting accounts
• Your discussions?
• Uploading code/reports

Register in groups of 2 now (until 6.11.17)!

First exercise: apply for an account (ssh key) via TUWEL until 6.11.17

Page 27

Key deadlines (no extensions)

• Getting accounts, TUWEL: 6.11.17
• Registering in groups, TISS: 6.11.17
• Presentation of the projects: 9.11.17 (Thursday)
• Project hand-in: 15.1.2018 (Monday, evening)
• Exam: 22.1-26.1.2018

Page 28

Literature: general

• Thomas Rauber, Gudula Rünger: Parallel Programming for Multicore and Cluster Systems. Springer, 2nd edition, 2013
• Randal E. Bryant, David R. O'Hallaron: Computer Systems. Prentice-Hall, 2011
• Grama, Gupta, Karypis, Kumar: Introduction to Parallel Computing. Second edition. Pearson, 2003
• Michael J. Quinn: Parallel Programming in C with MPI and OpenMP. McGraw-Hill, 2004
• Calvin Lin, Lawrence Snyder: Principles of Parallel Programming. Addison-Wesley, 2008
• Peter Pacheco: An Introduction to Parallel Programming. Morgan Kaufmann, 2011

Page 29

Literature: general

Encyclopedia of Parallel Computing. David Padua (editor). Springer, 2011.

Handbook of Parallel Computing. Rajasekaran/Reif (editors). Chapman & Hall, 2008

Page 30

Literature: OpenMP, MPI, CUDA

• Chandra, Dagum et al.: Parallel Programming in OpenMP. Morgan Kaufmann, 2001
• Barbara Chapman, Gabriele Jost, Ruud van der Pas: Using OpenMP. MIT Press, 2008
• MPI: A Message-Passing Interface Standard. Version 3.1. Message Passing Interface Forum, June 4, 2015. www.mpi-forum.org/docs/docs.html
• William Gropp, Ewing Lusk, Anthony Skjellum: Using MPI. MIT Press, 1999
• David B. Kirk, Wen-mei Hwu: Programming Massively Parallel Processors. Morgan Kaufmann, 2010

Page 33

Systems, hardware

OpenMP, Cilk: 48-core AMD-based shared-memory cluster, „Saturn“ (saturn.par.tuwien.ac.at)

MPI: 36-node InfiniBand AMD-based 2x8-core cluster = 576 processor cores, „Jupiter“ (jupiter.par.tuwien.ac.at)

Access via ssh (instructions to follow); program at home/at the TU. No actual lab.

Page 34

Jupiter: small InfiniBand cluster, AMD processors

Saturn: AMD-based, shared-memory system

Page 35

Other systems at the TU Wien Parallel Computing group

• Pluto: 16-core Ivy Bridge system + 2x NVidia K20x GPU + 2x Intel Xeon Phi 60-core accelerator
• Mars: 80-core Intel Westmere system, 1TB shared memory
• Ceres: 64-core Oracle/Fujitsu shared-memory system, SPARC-based with HW support for 512 threads, 1TB shared memory

Research systems for bachelor, master and PhD work…

Page 36

The Austrian Top500 HPC system: www.vsc.ac.at

Page 37

Access to TU Wien systems saturn and jupiter

Login via ssh

Get an account from Sascha Hunold via the TUWEL exercise.

Some information on how to log in and use the systems is in TUWEL.

Page 38

Using the systems

• Saturn: for shared-memory project part (Cilk and OpenMP): interactive access

• Jupiter: for distributed memory project part (MPI): access via SLURM scheduler (description in TUWEL)

START EARLY on the projects!
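For orientation before the MPI lectures, a minimal, hedged MPI sketch in C of the kind you will run on jupiter (the mpicc wrapper is the usual one; the actual compilation and SLURM submission workflow is described in TUWEL):

/* hello_mpi.c -- hedged sketch; assumed build: mpicc hello_mpi.c -o hello_mpi */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* id of this process: 0..size-1 */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();                         /* shut down the MPI runtime */
    return 0;
}

Run with several processes (e.g. mpirun -np 4 ./hello_mpi, or via the SLURM workflow described in TUWEL); each process prints its rank.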

Page 39

Follow-up

Bachelor:
• VU Parallel Computing, 6 ECTS
• Bachelor thesis

Master:
• VU Parallel Algorithms, 3 ECTS: PRAM; scheduling for parallel computing
• VU Advanced Multiprocessor Programming, 4.5 ECTS: lock-free algorithms and data structures
• VU High Performance Computing, 4.5 ECTS
• SE Topics in Parallel Programming Models, Languages, Algorithms, Architectures
• Master's thesis, project

Page 40

Research Group Parallel Computing

Some information at www.par.tuwien.ac.at

Favoritenstrasse 16, 3rd floor

Next to U1, Taubstummengasse, exit Floragasse

