IMPORTANT: The data files are now at
Shared-memory programming with Cilk
Shared-memory programming with threads
Message passing with MPI
Parallel and high-performance computer architectures:
Implementations of shared-memory
Introduction to parallel algorithms:
Dense matrix algorithms (multiplication, solution of linear systems of equations)
The fast Fourier transform
C programming, Linear Algebra, Computer Structure
I plan to give a home exam during a 14-day period to be decided with the students. That is, the grade will be based on an exam that is part programming and experimentation and part theory. Students will have 14 days to complete the exam. The exam is individual.
There will also be exercises during the course, probably 4 programming exercises. They are also individual and mandatory.
Running Cilk programs:
Exercise 1: sort.cilk (instructions inside)
Exercise 3: Parallel matrix-vector multiplication. You also need several files: lab3.c, lab3-orig, airfoil.grid, airfoil_map.p16, pwt.grid, pwt_map.p16. See below for instructions on running MPI programs under PBS.
Lecture notes for October 29 (and probably for the next lecture)
Chapters 1 and 2 of the lecture notes (these are somewhat old, so they focus mostly on MPI; see the next items for material on Cilk).
Chapter 3 of the lecture notes (together with the MPI material in chapter 1, this is the material for the December 17 lecture).
Chapter 4 of the lecture notes (for the January 7 lecture).
A Minicourse on Multithreaded Programming, by Charles E. Leiserson and Harald Prokop.
Running MPI Programs under PBS
We run MPI programs using a job-submission system called PBS. The programs can be compiled on nova or on any of the plab computers (plab-02 up to plab-34; some may be down). The plab computers also serve as Linux workstations in the lab in room 004. Jobs are submitted from nova only (not from the plabs), and they run on the plab computers. Here are instructions that explain how to compile MPI programs and how to use PBS.
I recommend that you download the sample files below on a Unix or Linux machine, rather than downloading them on a Windows machine and transferring them to your account. In one case I have seen, downloading the samples on a Windows machine inserted hidden control characters into the files, which prevented PBS from running them properly.
To compile an MPI program, use the command
To run a program, you submit a script to PBS and wait until it completes. You can use PBS only on nova!
Before you try to use PBS, make sure that you can use rsh to and from any of the plabs without typing your password; otherwise PBS and MPI will not work. To ensure that rsh works, copy
Here is a sample PBS script called
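The script file itself is not reproduced in this copy of the page. As a rough, hypothetical illustration of what such a PBS script looks like (the job name, node count, and program name are my assumptions; the mail and CPU-time lines correspond to the options described further below):

```shell
#!/bin/sh
# Hypothetical sample PBS script; all names and limits are assumptions.
#PBS -N myjob                 # job name
#PBS -m abe                   # mail when the job aborts, begins, or exits
#PBS -l cput=0:05:00          # 5 minutes of CPU time, total over all processors
#PBS -l nodes=4               # request 4 nodes
cd $PBS_O_WORKDIR             # run in the directory the job was submitted from
mpirun -np 4 myprog
```

The `#PBS` lines are directives read by the job-submission system; the rest is an ordinary shell script that PBS runs on the allocated nodes.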
We run the program using the command
We can find out whether the job is running, and what other jobs are running, using the command
You can find out more details about your job using
You can cancel the job, whether it is still waiting or already running, using the command
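The specific commands are missing from this copy of the page; on a standard PBS installation the usual client commands are the following (the script and job names here are made up):

```shell
qsub myjob.pbs        # submit the script; PBS prints a job id such as 123.nova
qstat                 # list queued and running jobs
qstat -f 123.nova     # full details about one job
qdel 123.nova         # cancel the job, whether waiting or running
```

All of these must be run on nova, since PBS is only usable from there.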
When the job completes, its standard output and standard
error are copied into the directory that contains the script under the names
PBS will send you mail when the job starts running, exits,
or is aborted. (Due to the line
The script requests 5 minutes CPU time (total over all
processors) using the line
The graphical tool
Books and Links
The following books are in the library:
Parallel Computer Architecture: A Hardware/Software Approach, by Culler and Singh.
Using MPI: Portable Parallel Programming with the Message-Passing Interface, by Gropp et al.
Parallel Programming with MPI, by Pacheco.
Introduction to Parallel Computing: Design and Analysis of Algorithms, by Kumar et al.
Operating Systems, by Sivan Toledo, in Hebrew (only the chapter on programming with threads)
The following links should prove useful:
The MPICH homepage (a widely-used implementation of the standard)
HPCU, Israel's supercomputing center
The Top 500 computers in the world
Last updated on Sunday, March 03, 2002