| Course Name | Parallel Programming |
| Course Code | 23CSE466 |
| Program | B. Tech. in Computer Science and Engineering (CSE) |
| Credits | 3 |
| Campus | Amritapuri, Coimbatore, Bengaluru, Amaravati, Chennai |
Unit I
Introduction: From serial to parallel programming - Hardware and software paradigms, Shared infrastructure, Parallel computer organization, Pipelining and throughput, Latency and latency hiding, Memory organization, Inter-process communication.
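As a quick orientation to the serial-to-parallel transition and the shared-memory organization named above, the following minimal C/OpenMP sketch (illustrative only, not part of the prescribed syllabus) contrasts a serial array sum with its shared-memory parallel equivalent.

```c
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void) {
    static double a[N];
    for (int i = 0; i < N; i++) a[i] = 1.0;

    /* Serial version: a single thread walks the whole array. */
    double serial_sum = 0.0;
    for (int i = 0; i < N; i++) serial_sum += a[i];

    /* Shared-memory parallel version: the iteration space is split across
       threads, and per-thread partial sums are combined by a reduction. */
    double parallel_sum = 0.0;
    #pragma omp parallel for reduction(+:parallel_sum)
    for (int i = 0; i < N; i++) parallel_sum += a[i];

    printf("serial = %f, parallel = %f, threads available = %d\n",
           serial_sum, parallel_sum, omp_get_max_threads());
    return 0;
}
```

Compiled with an OpenMP-enabled compiler (e.g. `gcc -fopenmp`), both sums produce the same result; only the work distribution differs.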
Unit II
Basic Parallel Algorithmic Techniques: Pointer jumping, Divide-and-conquer, Partitioning, Pipelining, Accelerated cascading, Symmetry breaking; Synchronization (lock-based, lock-free), Deadlock, Race conditions. Parallelism in Modern Computing: Multi-core and many-core architectures, Parallel programming for distributed systems and cloud computing, Heterogeneous computing and accelerators.
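The synchronization, lock-free, and race-condition topics in this unit can be previewed with a small C/OpenMP sketch (again illustrative, not prescribed material): the same shared counter is updated with no synchronization, under a lock, and with an atomic operation.

```c
#include <stdio.h>
#include <omp.h>

int main(void) {
    const int N = 100000;
    long racy = 0, locked = 0, atomic_cnt = 0;
    omp_lock_t lock;
    omp_init_lock(&lock);

    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        racy++;                 /* data race: unsynchronized read-modify-write,
                                   updates from different threads can be lost */

        omp_set_lock(&lock);    /* lock-based synchronization: correct but serialized */
        locked++;
        omp_unset_lock(&lock);

        #pragma omp atomic      /* lock-free atomic update */
        atomic_cnt++;
    }
    omp_destroy_lock(&lock);

    printf("racy = %ld (often < %d), locked = %ld, atomic = %ld\n",
           racy, N, locked, atomic_cnt);
    return 0;
}
```

Running it with several threads typically shows the racy counter falling short of N while the locked and atomic counters are exact, which is the behavioural signature of a race condition.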
Unit III
Parallel Algorithms: Sorting algorithms, Algorithms for broadcast/reduction and collective operations, Scalability. Distributed Parallel Applications: Matrix multiplication, Interconnection topologies, Fault tolerance, Domain decomposition, Communication-to-computation ratio, Load balancing.
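The broadcast/reduction and collective-operation topics above lend themselves to a short MPI sketch in C (illustrative only; the calls follow the standard MPI API): rank 0 broadcasts a seed value and every rank contributes to a sum reduction.

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Collective 1: rank 0 broadcasts a value to all ranks. */
    int seed = (rank == 0) ? 42 : 0;
    MPI_Bcast(&seed, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* Each rank computes a local contribution. */
    int local = seed + rank;

    /* Collective 2: sum-reduce all local values onto rank 0. */
    int total = 0;
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks = %d\n", size, total);

    MPI_Finalize();
    return 0;
}
```

Built with `mpicc` and launched with `mpirun -np 4`, the example shows the two collective patterns (broadcast and reduction) that larger distributed applications such as parallel matrix multiplication are built from.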
Case study: CUDA, OpenMP
Pre-requisite(s): 23CSEXXX Computer Architecture
Course Objectives
Course Outcomes
CO1: Understand the key parallel computational models and the message-passing and shared-memory paradigms.
CO2: Understand the basic principles of performance modeling and optimization, and apply memory-system optimization techniques.
CO3: Analyze communication and coordination issues in parallel computing.
CO4: Apply parallel programming models for accelerator-enhanced computation.
CO-PO Mapping
| CO | PO1 | PO2 | PO3 | PO4 | PO5 | PO6 | PO7 | PO8 | PO9 | PO10 | PO11 | PO12 | PSO1 | PSO2 |
|----|-----|-----|-----|-----|-----|-----|-----|-----|-----|------|------|------|------|------|
| CO1 | 2 | 1 | – | – | 1 | – | – | – | – | 3 | – | – | 3 | 2 |
| CO2 | 3 | 2 | 2 | 2 | 2 | – | – | – | – | 3 | 2 | – | 3 | 2 |
| CO3 | 2 | 1 | – | – | 1 | – | – | – | – | 3 | 2 | – | 3 | 2 |
| CO4 | 3 | 2 | 2 | 2 | 2 | – | – | – | – | 3 | 2 | – | 3 | 2 |
Evaluation Pattern: 70:30
| Assessment | Internal | End Semester |
|------------|----------|--------------|
| Midterm Exam | 20 | |
| Continuous Assessment – Theory (*CAT) | 10 | |
| Continuous Assessment – Lab (*CAL) | 40 | |
| **End Semester | | 30 (50 marks; 2-hour exam) |
*CAT – Can be Quizzes, Assignments, and Reports
*CAL – Can be Lab Assessments, Project, and Report
**End Semester can be a theory examination, lab-based examination, or project presentation
Textbook(s)
Peter S. Pacheco, “An Introduction to Parallel Programming”, Morgan Kaufmann, 2011.
Reference(s)
DE Culler, A Gupta and JP Singh, “Parallel Computer Architecture: A Hardware/Software Approach”, Morgan-Kaufmann, 1998.
Marc Snir, Steve W. Otto, Steven Huss-Lederman, David W. Walker, and Jack Dongarra, “MPI – The Complete Reference, Volume 1: The MPI Core”, Second Edition, MIT Press, 1998.
William Gropp, Ewing Lusk, and Anthony Skjellum, “Using MPI: Portable Parallel Programming with the Message-Passing Interface”, 3rd Ed., MIT Press, 2014.
A Grama, A Gupta, G Karypis, and V Kumar, “Introduction to Parallel Computing”. 2nd Ed., Addison-Wesley, 2003.