Technische Universität Ilmenau

Parallel Computing - Module Tables (Modultafeln) of TU Ilmenau


Module information for Parallel Computing in the degree program Master Informatik 2013
Module number: 200003
Exam number: 220424
Faculty: Fakultät für Informatik und Automatisierung (Faculty of Computer Science and Automation)
Group number: 2252 (Data-intensive Systems and Visualization)
Module coordinator: Prof. Dr. Patrick Mäder
Cycle: summer semester
Language: German/English
Credit points: 5
Contact hours (h): 45
Self-study (h): 105
Obligation: elective module
Completion: examination consisting of several partial assessments
Completion details: The module Parallel Computing with exam number 220424 concludes with the following assessments:
  • oral examination of 20 minutes, weighted 60% (exam number: 2200630)
  • alternative assessment completed during the semester, weighted 40% (exam number: 2200631)

Completion details for component 1:
  • oral exam after the lecture period, with appointments arranged during the final lectures

Completion details for component 2:
  • one or more assignment projects to be solved at home and submitted via Moodle by a due date announced together with the task
  • assignments are accompanied by a short in-person oral presentation and discussion in front of the peer group OR a short video presentation; students will be informed about the selected form when the assignment topics are announced
  • students must register for this exam via thoska, typically during the 3rd and 4th week of the semester
Registration modalities for alternative examination (PL) or coursework (SL)
Maximum number of participants
Prerequisites
  • basic programming skills in C are beneficial
Learning outcomes and acquired competencies
Professional Competence, mostly gained in lectures and evaluated in the oral exam:
  • Students have knowledge about the fundamental concepts and terminology of parallel systems.
  • Students have knowledge about different taxonomies to classify parallel hardware and the advantages and disadvantages per class. 
  • Students know different methodologies for decomposing, agglomerating, and mapping a given problem into a set of tasks that can be executed in parallel.
  • Students know and can apply different synchronization techniques for parallel programs.
  • Students have knowledge about different metrics for evaluating parallelization success and are informed about best practices and problems when profiling parallel software.

Methodological Competence, mostly gained in seminars and evaluated in the aPL (assignments):
  • Students gained the ability to implement parallel programs on different hardware platforms, including the ability to analyze and decompose a given problem for parallel computing.
  • Students are able to independently develop individual parallel implementations for a given problem and are able to judge and compare their quality and success in terms of parallelization.
  • Students gained the ability to evaluate and troubleshoot parallel programs.
  • Students gained the ability to use development tools and computational resources (e.g., cloud computing instances) for developing parallel programs.

Social Competence gained through lectures and seminars:
  • Students can discuss the advantages and disadvantages of different parallelization approaches among each other and with their lecturers.
Content: The goal of this master-level course is to give a structured introduction to the concepts of parallel programming. Students will learn fundamental concepts of parallelization and will be able to judge the correctness, performance, and construction of parallel programs using different parallelization paradigms (e.g., task parallelism, data parallelism) and mechanisms (e.g., threads, tasks, locks, communication channels). The course also introduces the concepts and practical aspects of programming massively parallel systems and cloud computing applications (using Amazon AWS). At the end of the course, students shall be able to design and implement working parallel programs using shared-memory programming on the CPU (with pThreads and OpenMP) and the GPU (with CUDA) as well as distributed-memory programming (with MPI). The concepts conveyed in the lectures are deepened by practical programming exercises.
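
As an illustration of the shared-memory model mentioned above (not part of the official module text), a minimal OpenMP sketch in C that parallelizes a dot product with a reduction clause; compile with, e.g., gcc -fopenmp:

    #include <stdio.h>
    #include <omp.h>

    #define N 1000000

    int main(void) {
        static double a[N], b[N];
        double sum = 0.0;

        /* initialize the input vectors */
        for (int i = 0; i < N; i++) {
            a[i] = 1.0;
            b[i] = 2.0;
        }

        /* data-parallel loop: iterations are distributed across threads,
           and the reduction clause combines the per-thread partial sums */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++) {
            sum += a[i] * b[i];
        }

        printf("dot product = %f (max threads: %d)\n", sum, omp_get_max_threads());
        return 0;
    }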

The following topics will be covered through lecture and seminar:
  • Fundamentals of parallel algorithms
    • Decomposition, Communication, Agglomeration, and Mapping of parallel tasks 
    • Styles of parallel programs
  • Shared-memory programming
    • Processes, threads, and synchronisation
    • pThreads
    • OpenMP
  • Hardware architecture for parallel computing
    • Shared and distributed memory
    • Flynn's Taxonomy
    • Cache coherence
    • Interconnection networks and routing
  • Distributed-memory programming
    • Message passing programming
    • MPI (see the sketch after this topic list)
  • Analytical program models 
    • Amdahl's law, etc. (see the worked example after this topic list)
    • Metrics
    • Profiling
  • Parallel algorithms
  • Programming massively parallel systems
    • GPU and CUDA programming
    • OpenCL
  • Warehouse-scale computing
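
For orientation (not part of the official module text), the analytical-models item above includes Amdahl's law: if a fraction f of a program can be parallelized and n processors are used, the achievable speedup is bounded by S(n) = 1 / ((1 - f) + f / n); for example, f = 0.9 and n = 8 give S ≈ 4.7, which is why the serial fraction quickly limits scaling.

Likewise, a minimal, hypothetical MPI sketch in C illustrating the message-passing style named under distributed-memory programming (values and output are made up for illustration; compile with, e.g., mpicc and run with mpirun -np 4 ./a.out):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* id of this process */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

        /* each process contributes one value; MPI_Reduce combines them on rank 0 */
        int local = rank + 1;
        int total = 0;
        MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0) {
            printf("sum over %d processes: %d\n", size, total);
        }

        MPI_Finalize();
        return 0;
    }
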
Media formats
  • Lecture and seminar slide decks through Moodle
  • Tutorials, white-papers and scientific papers
  • Development tools
  • Extracts of development projects
  • Assignments managed through Moodle
  • Amazon AWS compute instances to perform assignment and seminar work (requires the student's personal computer)
Literature
  • Introduction to Parallel Computing: Zbigniew J. Czech, Cambridge University Press (2017)
  • Introduction to Parallel Computing (Second Edition): Ananth Grama, Anshul Gupta, George Karypis, Vipin Kumar, Addison Wesley (2003), ISBN 0-201-64865-2
  • Programming Massively Parallel Processors: A Hands-on Approach, D.B. Kirk and W.W. Hwu, Morgan Kaufmann, 2. Ed. (2012)
  • Parallelism in Matrix Computations, E. Gallopoulos, B. Philippe, A.H. Sameh, Springer (2015)
  • Parallel Programming, T. Rauber and G. Rünger, Springer (2013)
Teaching evaluation