
CS 180: Data Structures
Fall 2009

Erin Chambers
Contact Info: echambe5 - at - slu.edu
Office: 011 Ritter Hall
Office Hours: Monday 11-12am, Wednesday 2-3pm, or by appointment

The table below lists the programming assignments and associated dates.

Lab   | Topic                      | Class Date                     | Due                                  | Collaboration Policy
lab00 | A First Glance at C++      | Thursday, August 27, 2009      | Sunday, August 30, 2009, 11:59pm     | individual
lab01 | Copier Reductions          | Thursday, September 3, 2009    | Sunday, September 6, 2009, 11:59pm   | pair
lab02 | Speed Limit                | Thursday, September 10, 2009   | Sunday, September 13, 2009, 11:59pm  | pair
lab03 | Symmetric Order            | Thursday, September 17, 2009   | Sunday, September 20, 2009, 11:59pm  | pair
lab04 | Doubles                    | Wednesday, September 23, 2009  | Sunday, September 27, 2009, 11:59pm  | pair
lab05 | Tanning Salon              | Wednesday, October 7, 2009     | Sunday, October 11, 2009, 11:59pm    | pair
lab06 | Symmetric Order revisited  | Thursday, October 15, 2009     | Sunday, October 18, 2009, 11:59pm    | pair
lab07 | Overflowing Bookshelf      | Thursday, October 22, 2009     | Sunday, October 25, 2009, 11:59pm    | pair
lab08 | Booklet                    | Thursday, October 29, 2009     | Sunday, November 1, 2009, 11:59pm    | pair
lab09 | Tree grafting              | Thursday, November 5, 2009     | Sunday, November 8, 2009, 11:59pm    | pair


Information About Lab Assignments

ACM's International Collegiate Programming Contest

This year, each of our lab assignments will be a problem taken from a past offering of the ACM International Collegiate Programming Contest (ICPC). In that contest, students work in teams of three to solve as many problems as possible within a five-hour period.

Hundreds of regional contests are held each Fall, involving thousands of teams across the world. The top 100 teams from the regionals qualify for the World Finals held in the Spring. More information can be found on the official ICPC site. Also, if you have any interest in participating on SLU's teams, please let me know next Fall.

General Problem Format

Each problem is computational in nature: the goal is to compute a specific output from some given input parameters. Each problem defines a clear and unambiguous format for the expected input and desired output, and relevant bounds on the size of the input are clearly specified. To be successful, a program must complete within 60 seconds on the given machine (so efficiency can be important for certain problems).

Each problem description offers a handful of sample inputs, along with the expected output for those trials, as a demonstration. Behind the scenes, the judges often have hundreds of additional tests. Submitted programs are "graded" by literally running them on all of the judges' tests, capturing the output, and checking whether that output is identical (character-for-character) to the expected output.
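
To make that character-for-character comparison concrete, here is a minimal sketch of a solution to a hypothetical problem (not one of our actual labs) that asks for the sum of two integers. Notice that the program prints only the required answer; any prompt or extra text would make the output differ from the judges' expected file.

    #include <iostream>
    using namespace std;

    int main() {
        // Hypothetical problem: read two integers and print their sum.
        // No prompts or labels -- the judge compares the output
        // character-for-character against the expected answer.
        int a, b;
        cin >> a >> b;
        cout << a + b << endl;
        return 0;
    }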

If the test is successful, the team gets credit for completing the problem. If the test fails, the team is informed of the failure and allowed to resubmit (with a slight penalty applied). However, the team receives very little feedback from the judges: in essence, they are told that the submission failed, but given no explanation of the cause, nor even the data set that triggered the failure.

Actually, the feedback is slightly more informative. Upon submitting a program, the team formally receives one of the standard ICPC judging responses:

- Correct (the submission is accepted)
- Wrong Answer
- Presentation Error (the output differs from the expected output only in spacing or formatting)
- Time Limit Exceeded
- Run-Time Error
- Compile Error

Important Conventions

Because of the automated nature of the judging, it is important that programs follow these conventions:

- Read all input from standard input (e.g., cin); do not prompt the user or read from a file unless the problem says otherwise.
- Write all output to standard output (e.g., cout), producing nothing beyond what the problem asks for.
- Match the expected output format exactly, including spacing and line breaks, since the comparison is character-for-character.

Please note as well that the format of most problems is designed so that the judges can specify multiple tests as part of a single execution. Typically the input format has each trial begin with some initial parameters, with a special value (such as 0 or #) designating the end of the trials. Therefore, most programs will need an outer loop that iterates through the trials, as in the sketch below. It is also important that relevant data structures be re-initialized for each trial, so that earlier trials do not affect later ones.
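
For example, a hypothetical problem (again, not one of our actual labs) might give, for each trial, a count n followed by n integers, with n = 0 marking the end of the input. The program structure might look like the following, with the container declared inside the loop so that one trial's data cannot leak into the next:

    #include <iostream>
    #include <set>
    using namespace std;

    int main() {
        int n;
        // Outer loop: keep processing trials until the sentinel value 0.
        while (cin >> n && n != 0) {
            // Declared inside the loop, so it is re-initialized for each trial.
            set<int> distinct;
            for (int i = 0; i < n; i++) {
                int x;
                cin >> x;
                distinct.insert(x);
            }
            // Report how many distinct values appeared in this trial.
            cout << distinct.size() << endl;
        }
        return 0;
    }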

Testing Your Implementation

Formal Submission

Although the online tools give you a way to test your implementation, you must submit your code to the instructor via email in order to get credit for the lab. Please keep in mind that we will award half credit for any sincere attempt at a lab, even if you are unable to succeed with the automated testing. However, you will get zero credit if you never submit your attempt.

Also, make sure to include the names of all team members in comments at the top of the submitted source code.