The table below lists the programming labs and associated dates.
Lab | Topic | Class Date | Due | Collaboration Policy |
---|---|---|---|---|
Lab 1 | Compiling and running a C++ program | Jan. 17 | Jan. 19 by 11:59pm | Individual |
Lab 2 | copier.cpp | Jan. 23 | Jan. 26 by 11:59pm | Individual |
Lab 3 | order.cpp | Jan. 30 | Feb. 2 by 11:59pm | Individual |
Lab 4 | speed.cpp | Feb. 6 | Feb. 9 by 11:59pm | Individual |
Lab 5 | order.cpp (again) | Feb. 13 | Feb. 16 by 11:59pm | Individual |
Lab 6 | doubles.cpp (again) | Feb. 27 | March 2 by 11:59pm | Individual |
Lab 7 | phone.cpp | March 6 | March 9 by 11:59pm | Pair |
Lab 8 | anagrams.cpp | March 20 | March 23 by 11:59pm | Pair |
Lab 9 | graft1.cpp | April 9 | April 13 by 11:59pm | Pair |
Lab 10 | graft2.cpp | April 17 | April 27 by 11:59pm | Pair |
This year, each of our lab assignments will be a problem taken from a past offering of the ACM International Collegiate Programming Contest (ICPC). In the actual contest, students work in teams of three to solve as many problems as possible in a five-hour period.
Hundreds of regional contests are held each Fall, involving thousands of teams across the world. The top 100 teams from the regionals qualify for the World Finals held in the Spring. More information can be found on the official ICPC site. Also, if you have any interest in participating on SLU's teams, please let me know next Fall.
Each problem is computational in nature: the goal is to compute a specific output based on some input parameters. Each problem defines a clear and unambiguous format for the expected input and desired output, and relevant bounds on the size of the input are clearly specified. To be successful, a program must complete within 60 seconds on the given machine (thus, efficiency can be important for certain problems).
Each problem description offers a handful of sample inputs and the expected output for those trials as a demonstration. Behind the scenes, the judges often have hundreds of additional tests. Submitted programs are "graded" by literally running them on all of the judges' tests, capturing the output, and checking whether it is identical (character-for-character) to the expected output.
If the test is successful, the team gets credit for completing the problem. If the test fails, the team is informed of the failure and allowed to resubmit (with a slight penalty applied). However, the team receives very little feedback from the judges. In essence, they are told that the submission failed, but given no explanation of the cause, nor even the data set that triggered the failure.
Actually, the feedback is slightly more informative. Upon submitting a program, the team formally receives one of the following responses:
- Success
- Submission Error: reported if the submitted program does not compile properly, is not named properly, or is clearly an attempt at a different problem.
- Run-time Error: reported if the program crashes during execution.
- Wrong Answer: the program ran to completion, but the content of its output does not match the expected results.
- Presentation Error: the output is wrong not because the computations were wrong, but because of a superficial problem in formatting the output. This can occur when words are misspelled, punctuation is incorrect or missing, capitalization is wrong, there are too few or too many spaces or extra blank lines, or the wrong number of significant digits appears in numeric output (a brief example of controlling numeric formatting appears below).
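To illustrate that last point, here is a minimal C++ sketch of how stream manipulators control the number of digits printed; the value and the two-decimal format are purely hypothetical, not taken from any particular problem:

```cpp
#include <iostream>
#include <iomanip>
using namespace std;

int main() {
    double price = 3.14159;
    cout << price << endl;                              // default formatting prints 3.14159
    cout << fixed << setprecision(2) << price << endl;  // a problem demanding two decimals expects 3.14
    return 0;
}
```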
Because of the automated nature of the judging, it is important that programs follow these conventions:
- The source code solving a problem must be self-contained in a single file.
- Each problem statement will clearly designate a precise filename that must be used for the source code (e.g., copier.cpp).
- Please put a comment at the top of your source code indicating the names of the contributing team members.
- All input should be read from standard input (i.e., using cin in C++).
- All output should be written to standard output (i.e., using cout in C++).
Please note as well that the format of most problems is designed so that the judges can specify multiple tests as part of a single execution. Typically, the input format has initial parameters read for each trial, with a special value (such as 0 or #) designating the end of the trials. Therefore, most programs will need to iterate through the trials with an outer loop. It is also important that relevant data structures be reinitialized for each trial, so that earlier trials do not affect later ones; a minimal sketch of this structure appears below.
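As a sketch of that structure, here is a minimal C++ program for a hypothetical problem that reads pairs of integers until a leading 0 and echoes their sum for each trial; the sentinel value and the computation are illustrative only:

```cpp
#include <iostream>
using namespace std;

int main() {
    int a, b;
    // Read the first value of each trial; a value of 0 marks the end of the input.
    while (cin >> a && a != 0) {
        cin >> b;
        // Reset or reinitialize any per-trial data structures here so that
        // earlier trials cannot affect later ones.
        cout << a + b << endl;
    }
    return 0;
}
```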
Testing on your own
Since each problem specification includes at least a few sample cases of input and the expected output, there is no reason to submit your program to the judges until you are confident that it succeeds on those sample inputs.
Although the program is formally written to accept input from standard input, it can be quite tiring (not to mention error-prone) to type the test input each time you want to run your program. For some problems, the test input is also quite large.
It is possible to redirect input from a text file to the standard input without needing to rewrite your source code. On our system, this can be done with the following syntax:
    ./myProgram < myInput.txt

This behaves just as if someone were to type the characters from the file myInput.txt on the keyboard while the program executes.
Testing on the judges' data (on turing)
When you have the program working on the sample data and you wish to test it on the judges' hidden data, you may execute the following command from your turing account.
    /public/chambers/180/labs/judge myProgram.cpp

(of course, using the actual name of your source code file). Our automated program is not quite as professional as the real judges, but it will do. In particular, it does not automatically terminate after 60 seconds elapse; in fact, it never terminates. It will tell you when it starts executing, and if too much time has passed, you may press Ctrl-C to kill the process yourself.
Also, a correct distinction between a "wrong answer" and a "presentation error" is difficult to automate. We've made a decent attempt to look for common presentation errors, but in case of doubt, we will report it as a "wrong answer".
Testing on the judges' data (web-based)
If you get really competitive, there is an official website that archives all of the past ICPC problems and allows you to submit your source code on their system. They also keep track of statistics and have a leaderboard for those who have the most efficient solutions for a given problem.
The site is at
In order to submit attempts on that website, you must register and get an ID used for tracking. You should also be aware that the compiler they use on that system is not necessarily configured the same as ours on turing. So it is possible that you have code that compiles and executes properly on turing, but which results in a compile error on that website. If this happens, you may indicate that you want to receive the full compile errors in an email message.
Although the online tools give you a way to test your implementation, you must submit to the instructor via email in order to get credit for the lab. Please keep in mind that we will award half-credit for any sincere attempt at a lab, even if you are unable to succeed with the automated testing. However, you will get zero credit if you do not ever submit your attempt.
Also, make sure to include the names of all team members in comments at the top of the submitted source code.