|department||Department of Computer Science and Automation|
|ID of group||2252 (Computer Graphics)|
|module leader||Prof. Dr. Patrick Mäder|
|term||summer term only|
|on-campus program (h)||45|
|exam||examination consisting of multiple partial performances|
|details of the certificate||The module Parallel Computing (examination number 220424) concludes with the following performances:|
- oral examination of 20 minutes, weighted 60% (examination number: 2200630)
- alternative examination completed during the semester, weighted 40% (examination number: 2200631)
Details for partial performance 1:
- oral exam after the lecture period, with appointments arranged during the final lectures
Details for partial performance 2:
- one or multiple assignment projects to be solved at home and submitted via Moodle by a due date announced with the task
- assignments are accompanied by either a short in-person oral presentation and discussion in front of the peer group OR a short video presentation; students will be informed of the selected form when assignment topics are announced
- students must register via thoska for this exam, typically in the 3rd or 4th week of the semester
|signup details for alternative examinations|
|maximum number of participants|
|previous knowledge and experience|
- basic programming skills in C are beneficial
|learning outcome||Professional Competence mostly gained in lectures and evaluated in the oral exam:|
- Students have knowledge about the fundamental concepts and terminology of parallel systems.
- Students have knowledge about different taxonomies to classify parallel hardware and the advantages and disadvantages per class.
- Students know different methodologies for decomposing, agglomerating, and mapping a given problem into a set of parallel executable tasks.
- Students know and can apply different synchronization techniques for parallel programs.
- Students have knowledge about different metrics for evaluating parallelization success and are informed about best practices and problems when profiling parallel software.
Methodological Competence mostly gained in seminars and evaluated in the aPl (assignments):
- Students gained the ability to implement parallel programs on different hardware platforms, including the ability to analyze and decompose a given problem for parallel computing.
- Students are able to independently develop individual parallel implementations for a given problem and are able to judge and compare their quality and success in terms of parallelization.
- Students gained the ability to evaluate and troubleshoot parallel programs.
- Students gained the ability to use development tools and computational resources (e.g., cloud computing instances) for developing parallel programs.
Social Competence gained through lectures and seminars:
- Students can discuss advantages and disadvantages of different parallelization approaches among each other and with their lecturers.
|content||The goal of this master-level course is to give a structured introduction to the concepts of parallel programming. Students will learn fundamental concepts of parallelization and will be able to judge the correctness, performance, and construction of parallel programs using different parallelization paradigms (e.g., task parallelism, data parallelism) and mechanisms (e.g., threads, tasks, locks, communication channels). The course also provides an introduction to the concepts and practical aspects of programming massively parallel systems and cloud computing applications (using Amazon AWS). At the end of the course, students shall be able to design and implement working parallel programs using shared-memory programming on the CPU (using Pthreads and OpenMP) and the GPU (using CUDA) as well as the distributed-memory programming model (using MPI). The concepts conveyed in lectures are deepened by practical programming exercises.|
The following topics will be covered through lecture and seminar:
- Fundamentals of parallel algorithms
- Decomposition, Communication, Agglomeration, and Mapping of parallel tasks
- Styles of parallel programs
- Shared-memory programming
- Processes, threads, and synchronisation
- Hardware architecture for parallel computing
- Shared and distributed memory
- Flynn's Taxonomy
- Interconnection networks and routing
- Distributed-memory programming
- Message passing programming
- Analytical program models
- Amdahl's law, etc.
- Parallel algorithms
- Massively parallel systems and CUDA programming
- Warehouse-scale computing
|media of instruction|
- Lecture and seminar slide decks through Moodle
- Tutorials, white-papers and scientific papers
- Development tools
- Extracts of development projects
- Assignments managed through Moodle
- Amazon AWS compute instances to perform assignment and seminar work (requires students' personal computers)
|literature / references|
- Introduction to Parallel Computing, Zbigniew J. Czech, Cambridge University Press (2017)
- Introduction to Parallel Computing (Second Edition), Ananth Grama, Anshul Gupta, George Karypis, Vipin Kumar, Addison Wesley (2003), ISBN 0-201-64865-2
- Programming Massively Parallel Processors: A Hands-on Approach, D.B. Kirk and W.W. Hwu, Morgan Kaufmann, 2nd Ed. (2012)
- Parallelism in Matrix Computations, E. Gallopoulos, B. Philippe, A.H. Sameh, Springer
- Parallel Programming, T. Rauber and G. Rünger, Springer (2013)
|evaluation of teaching|