Technische Universität Ilmenau

Parallel Computing - Modultafeln of TU Ilmenau

The module lists provide information on the degree programmes offered by the TU Ilmenau.

Please refer to the respective study and examination rules and regulations for the legally binding curricula (Annex Curriculum).

You can find all details on planned lectures and classes in the electronic university catalogue.

Information and guidance on the maintenance of module descriptions by the module officers are provided at Module maintenance.

Please send information on missing or incorrect module descriptions directly to

module properties Parallel Computing in degree program Master Informatik 2013
module number: 200003
examination number: 220424
department: Department of Computer Science and Automation
ID of group: 2252 (Computer Graphics)
module leader: Prof. Dr. Patrick Mäder
term: summer term only
credit points: 5
on-campus program (h): 45
self-study (h): 105
obligation: elective module
exam: examination performance with multiple performances
details of the certificate: The module Parallel Computing with examination number 220424 concludes with the following performances:
  • oral examination of 20 minutes with a weighting of 60% (examination number: 2200630)
  • alternative examination completed during the semester with a weighting of 40% (examination number: 2200631)

Details on completing partial performance 1:
  • oral exam after the lecture period with appointments negotiated during the final lectures

Details on completing partial performance 2:
  • one or multiple assignment projects to be solved at home and turned in via Moodle by a defined due date announced with the task
  • assignments are accompanied by a short in-person oral presentation and discussion in front of the peer group OR a short video presentation; students will be informed about the selected form when the assignment topics are announced
  • students must register via thoska for this exam, typically in the 3rd or 4th week of the semester
signup details for alternative examinations
maximum number of participants
previous knowledge and experience
  • basic programming skills in C are beneficial
learning outcome: Professional Competence mostly gained in lectures and evaluated in the oral exam:


  • Students have knowledge about the fundamental concepts and terminology of parallel systems.
  • Students have knowledge about different taxonomies to classify parallel hardware and the advantages and disadvantages of each class.
  • Students know different methodologies for decomposing, agglomerating, and mapping a given problem into a set of parallel executable tasks.
  • Students know and can apply different synchronization techniques for parallel programs.
  • Students have knowledge about different metrics for evaluating parallelization success and are informed about best practices and problems when profiling parallel software.
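As a purely illustrative sketch of the synchronization techniques named above (the course itself covers these in C, e.g. with pthread_mutex_t; the Python version below is an assumption for brevity, not course material), a mutex-protected shared counter looks like this:

```python
# Illustrative sketch only: mutual exclusion around a shared counter.
# Without the lock, the read-modify-write on `counter` could interleave
# across threads and lose updates.
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    """Add 1 to the shared counter n times, guarding each update with the lock."""
    global counter
    for _ in range(n):
        with lock:  # critical section: at most one thread updates at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: no increment is lost
```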


Methodological Competence mostly gained in seminars and evaluated in the aPl (assignments):
  • Students gained the ability to implement parallel programs on different hardware platforms, including the ability to analyze and decompose a given problem for parallel computing.
  • Students are able to independently develop individual parallel implementations for a given problem and are able to judge and compare the quality and success in terms of parallelization.
  • Students gained the ability to evaluate and troubleshoot parallel programs.
  • Students gained the ability to use development tools and computational resources (e.g., cloud computing instances) for writing parallel programs.

Social Competence gained through lectures and seminars:
  • Students can discuss advantages and disadvantages of different parallelization approaches among each other and with their lecturers.
content: The goal of this master-level course is to give a structured introduction to the concepts of parallel programming. Students will learn fundamental concepts of parallelization and will be able to judge the correctness, performance, and construction of parallel programs using different parallelization paradigms (e.g. task parallelism, data parallelism) and mechanisms (e.g. threads, tasks, locks, communication channels). The course also provides an introduction to the concepts and practical aspects of programming massively parallel systems and cloud computing applications (using Amazon AWS). At the end of the course, students shall be able to design and implement working parallel programs, using shared-memory programming on CPUs (using pThreads and OpenMP) and GPUs (using CUDA) as well as distributed-memory programming (using MPI) models. The concepts conveyed in lectures are deepened by practical programming exercises.
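The decompose-map-agglomerate pattern behind data parallelism can be sketched in a few lines. This is an illustrative example only, not course material (the course exercises use C with pThreads and OpenMP); the function names are hypothetical:

```python
# Illustrative sketch of the data-parallel style: decompose the input into
# chunks, map the chunks onto worker threads, agglomerate the partial results.
from concurrent.futures import ThreadPoolExecutor

def chunk_sum(chunk):
    # Each task operates on its own chunk; there is no shared mutable state.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(chunk_sum, chunks))

# The parallel result agrees with the serial computation.
print(parallel_sum_of_squares(list(range(1000))))
```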

The following topics will be covered through lecture and seminar:
  • Fundamentals of parallel algorithms
    • Decomposition, Communication, Agglomeration, and Mapping of parallel tasks 
    • Styles of parallel programs
  • Shared-memory programming
    • Processes, threads, and synchronisation
    • pThreads
    • OpenMP
  • Hardware architecture for parallel computing
    • Shared and distributed memory
    • Flynn's Taxonomy
    • Cache
    • Interconnection networks and routing
  • Distributed-memory programming
    • Message passing programming
    • MPI
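The message-passing style can be illustrated with a small sketch. This is an assumption-laden stand-in, not course material: the course uses MPI in C, and the queues below merely play the role of MPI's explicit communication channels (conceptually like MPI_Send/MPI_Recv between ranks):

```python
# Illustrative sketch of message passing: two workers communicate through
# explicit channels instead of shared memory. The queues stand in for the
# point-to-point channels MPI provides between ranks.
import queue
import threading

def worker(inbox, outbox):
    data = inbox.get()                 # blocking receive, like MPI_Recv
    outbox.put([x * 2 for x in data])  # send the result back, like MPI_Send

to_worker, from_worker = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(to_worker, from_worker))
t.start()
to_worker.put([1, 2, 3])
result = from_worker.get()
t.join()
print(result)  # [2, 4, 6]
```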
  • Analytical program models 
    • Amdahl's law, etc.
    • Metrics
    • Profiling
  • Parallel algorithms
  • Programming massively parallel systems
    • GPU and CUDA programming
    • OpenCL
  • Warehouse-scale computing
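Amdahl's law from the "Analytical program models" topic above admits a short worked example (an illustrative sketch, not part of the official module description): with parallelizable fraction p and n processors, the speedup is S(n) = 1 / ((1 - p) + p / n), bounded above by 1 / (1 - p).

```python
# Worked example of Amdahl's law: the serial fraction (1 - p) limits the
# achievable speedup no matter how many processors are added.
def amdahl_speedup(p, n):
    """Speedup with parallelizable fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# If 90% of the program parallelizes, 8 processors give about 4.7x,
# and even arbitrarily many processors can never exceed 10x.
print(round(amdahl_speedup(0.9, 8), 2))     # 4.71
print(round(amdahl_speedup(0.9, 10**6), 2)) # 10.0 (approaching the 1/(1-p) limit)
```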
media of instruction
  • Lecture and seminar slide decks through Moodle
  • Tutorials, white-papers and scientific papers
  • Development tools
  • Extracts of development projects
  • Assignments managed through Moodle
  • Amazon AWS compute instances to perform assignment and seminar work (requires the student's personal computer)
literature / references
  • Zbigniew J. Czech, Introduction to Parallel Computing, Cambridge University Press (2017)
  • Ananth Grama, Anshul Gupta, George Karypis, Vipin Kumar, Introduction to Parallel Computing (Second Edition), Addison Wesley (2003), ISBN 0-201-64865-2
  • D. B. Kirk and W. W. Hwu, Programming Massively Parallel Processors: A Hands-on Approach, Morgan Kaufmann, 2nd ed. (2012)
  • E. Gallopoulos, B. Philippe, A. H. Sameh, Parallelism in Matrix Computations, Springer (2015)
  • T. Rauber and G. Rünger, Parallel Programming, Springer (2013)
evaluation of teaching