
CMSC 287: High Performance Scientific Computing

Links to Course Resources, including the course syllabus.


Instructor: John Dougherty

Semester & Year: Spring 2017

Schedule:

Text: Wilkinson, B., and Allen, M. Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers, second edition. Prentice-Hall, Upper Saddle River, NJ, 2005 [ISBN 0-13-140563-2].
Please note, there are other texts in the CS Teaching Lab/Lounge that supplement the course as well -- JD

Additional Materials:

  • MPI: The Complete Reference, by M. Snir, S. Otto, S. Huss-Lederman, D. Walker, and J. Dongarra. (online & in CS Teaching Lab)
  • Parallel Scientific Computing in C++ (in CS Teaching Lab)
  • OpenMP online resources
  • Unix Network Programming, by R. Stevens et al. (in CS Teaching Lab)

Requirements: Two exams, a term project, programming labs, and homework assignments.

    Learning Accommodations: Haverford College is committed to supporting the learning process for all students. Please contact me as soon as possible if you are having difficulties in the course. If you think you may need accommodations because of a disability, please visit the Office of Disabilities Services and contact hc-ods@haverford.edu. If you have already been approved to receive academic accommodations and would like to request accommodations in this course, please meet with me privately at the beginning of the semester (within the first two weeks if possible) with your verification letter.

    Department of Computer Science Policy on Collaboration

    Prerequisites: CMSC 106: Data Structures; suggested: CMSC 240: Computer Organization and CMSC 355: Operating Systems

    Description: The goal of this course is to introduce the student to the challenges involved in solving computationally demanding problems in the sciences and economics. The course covers the potential gains and costs of exploiting concurrency in parallel and distributed systems, and also covers the basics of networking. Concepts will be supported by lab work on the Linux workstations using MPI and OpenMP. Students will be expected to understand principles, as well as to implement parallel applications (including installing software, benchmarking, and conducting experiments).

    The foundations of high performance scientific computing are presented, including:

    Architectures
    • message passing
    • shared memory
    • distributed shared memory
    • clusters/Beowulf
    • grids/cloud(s)
    • multi-core
    Algorithms
    • "brain-dead" parallelism
    • divide and conquer
    • pipelining and vector processing
    • data parallel
    • sorting
    • searching and optimization
    Implementation Issues
    • correctness and debugging
    • models and metrics for performance
    • load balancing and scheduling
    • termination detection
    • real-time
    • networking and IPC
    • dependability and performability
    Scientific Applications
    • Numerical/Matrix Computations
    • Monte Carlo Methods
    • N-body
    • Bioinformatics/Genomics
    • Simulation
    • Image Processing

    Haverford College Page maintained by John Dougherty.
    Computer Science Department, Haverford College.