I often stumble upon two misconceptions.
The first one regards grading. A grade is not a statement about one's worth. It is only (hopefully) objective feedback about a given performance. It is also crude feedback, in the sense that it does not go into much detail. Handing out detailed feedback to a medium or large group is no trivial task, but you can always come get finer feedback. That said, please only come when the feedback will be helpful. A very low grade reflects that some misunderstanding must be tackled first. Also, do not come to nitpick. If there is a gross error, we will correct it. But coming to try to raise the grade over details is unbecoming and/or hints at the second misconception. Anyway, if you do come, bear in mind that grading (i) is imperfect and (ii) is about the performance; do not take it personally.
The second misconception is that the goal of the course is to pass the exam, whereas, in actuality, we meet for another objective: learning. Assignments are opportunities to gain practical knowledge and delve into specific topics. The final exam is a way to see how you fare with your new knowledge. Moreover, the best way to score well is to learn, so let's just aim for that.
Report and code.
Although some parts of the report cannot be completed before the code (empirical studies, typically), the optimal way to do the assignment is to answer the theoretical questions prior to coding.
Writing the pseudo-code first helps you focus on the algorithmic part rather than on language-specific details, whereas writing the code first and transforming it into pseudo-code often results in hard-to-read, over-detailed gibberish.
Knowing the complexity of an algorithm can help spot implementation mistakes. While the converse is also true, it is much easier to double-check and be confident about a complexity analysis than about a piece of code.
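For instance, one way to exploit a complexity analysis is to instrument the code with an operation counter and compare the count against the expected order of growth. The sketch below is a hypothetical example, not taken from any assignment: a memoized Fibonacci whose call count should stay linear, so an exponential count would immediately reveal broken memoization.

```c
#include <assert.h>
#include <stddef.h>

static size_t calls;  /* number of recursive calls in the last run */

static long fib_memo(int n, long *memo) {
    calls++;
    if (memo[n] >= 0)                 /* already computed: O(1) lookup */
        return memo[n];
    long r = (n < 2) ? n : fib_memo(n - 1, memo) + fib_memo(n - 2, memo);
    memo[n] = r;
    return r;
}

/* Returns fib(n) for 0 <= n <= 63 and stores the number of calls made
   in *ncalls. With working memoization, *ncalls is 2n - 1 for n >= 1
   (linear); a broken memo table would make it grow exponentially. */
long fib(int n, size_t *ncalls) {
    long memo[64];
    for (int i = 0; i <= n; i++)
        memo[i] = -1;                 /* -1 marks "not computed yet" */
    calls = 0;
    long result = fib_memo(n, memo);
    *ncalls = calls;
    return result;
}
```

A single assertion such as `ncalls == 2 * n - 1` then checks the implementation against the analysis, which is exactly the kind of cross-check described above.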
Testing and responsibility.
You should regard your submission as production-ready code. As such, it is a tacit claim that the code does what it is supposed to, and, as for any claim, it is the responsibility of the maker to prove it. This implies that
- the automatic feedback of the submission platform is not a debugging facility;
- overfitting the public tests might result in an unexpectedly low grade;
- imagine if the feedback were erroneous;
- the improvement cycle stretches because of the delay in getting the feedback;
- you are consuming a lot more resources and delaying others;
- when some automatic tests fail, we will not help you figure out what is wrong with your code if you cannot explain how you tested it on your side.
More generally, anticipating how to test your code rigorously is a good way of ensuring you understand the statement well. For group assignments, it is also a good way to split the workload while still working on everything.
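As an illustration of what "testing rigorously" can look like (the function below is a hypothetical example, not part of any particular assignment), a minimal assert-based test suite for a binary search probes both nominal cases and the edge cases an automatic test is likely to hit:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical example: returns the index of key in the sorted array a
   of length n, or -1 if key is absent. */
static long find(const int *a, size_t n, int key) {
    size_t lo = 0, hi = n;                 /* invariant: key can only be in [lo, hi) */
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;   /* avoids overflow of lo + hi */
        if (a[mid] == key)
            return (long)mid;
        if (a[mid] < key)
            lo = mid + 1;
        else
            hi = mid;
    }
    return -1;
}

/* Nominal case, boundaries (first/last element), absent key, empty array. */
void test_find(void) {
    int a[] = {1, 3, 5, 7, 9};
    assert(find(a, 5, 5) == 2);   /* middle element */
    assert(find(a, 5, 1) == 0);   /* first element */
    assert(find(a, 5, 9) == 4);   /* last element */
    assert(find(a, 5, 4) == -1);  /* absent key */
    assert(find(a, 0, 1) == -1);  /* empty array */
}
```

Writing down the edge cases first, before coding, is precisely the exercise that reveals whether you have understood the statement.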
Help during assignments
As teaching assistants, we are there to help you master the content of the course, especially via the assignments. It is not our role to do the projects in your stead, however. As such, our willingness to help tends to dwindle somewhat under some circumstances. Namely:
- when googling the question would have solved the problem;
- when being asked, a few days before the deadline, to clarify a statement handed out several weeks before;
- when receiving questions about the automatic testing while it is apparent that your code is not rigorously tested on your side.
The following list compiles the main criteria used for grading projects.
Consult the relevant course pages for information about lateness, missing assignments, plagiarism, etc.
- Present pseudo-code
- Analyze complexities
- Answer questions
- Present results and analyze them
When you are asked to analyze something, it is not sufficient to spot and describe it. We expect you to come up with an explanation of why the phenomenon occurs. Providing the correct answer without justification will, more often than not, result in a zero.
After all, it is the explanation which turns a correct belief into knowledge (at least for Socrates).
- Overall structure easy to follow --> follow the one we give you
- Correct spelling and grammar: together, they ease the reading and the understanding
- Clarity and concision; there is usually some elegance in simple yet efficient designs --> stick to the number of pages we indicate. Do not include a table of contents.
- Use appealing formatting, or rather, do not use ugly formatting --> use LaTeX for the report and one of its algorithmic packages for the pseudo-code (you can use the same one as the course)
- Regarding pseudo-code:
- Provide a specification for every pseudo-code function you write! It is hard to grade code we do not understand...
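As a sketch of what such a specification can look like (a hypothetical toy function, written with the algpseudocode-style commands of a LaTeX algorithmic package, as suggested above):

```latex
\begin{algorithm}
\caption{$\textsc{Max}(A, n)$}
\begin{algorithmic}[1]
  \Require an array $A[1..n]$ of $n \ge 1$ comparable elements
  \Ensure returns the largest element of $A$
  \State $m \gets A[1]$
  \For{$i \gets 2$ \textbf{to} $n$}
    \If{$A[i] > m$}
      \State $m \gets A[i]$
    \EndIf
  \EndFor
  \State \Return $m$
\end{algorithmic}
\end{algorithm}
```

The \Require/\Ensure lines are the specification: they tell the grader what the function assumes and what it promises, independently of how the body achieves it.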
The grades corresponding to the code are split into three main categories: correctness, language, and style. We consider that writing good-quality code (language and style) is as important as writing correct code.
- Misleading feedback from the automatic testing cannot constitute any form of excuse/pretext. It might indeed happen that the tests are wrong; when it does, it usually occurs at an early stage, when the testing scripts are not robust enough. Most of the time, however, the bug does lie in the student's code. Before reporting an issue, first check that you cannot reproduce the error on your side; if you indeed cannot, report it and we will adapt as soon as possible.
- The tests alone do not determine the grade you will get.
- Not all tests are visible. We encourage you to develop your own tests. Thinking about how to test a piece of code is usually as formative as writing it.
- If automatic testing is set up, we will make no exceptions regarding the critical mistakes below; reading the feedback is enough to prevent such problems.
- The submission is in the wrong format
- Some files are missing/empty
- There are some compilation errors
- Warnings at compile time --> use the additional flags to get all warnings when you develop your code
- The code is not following the specification
- The code does not work (segfault, infinite loop, etc.)
- Illegal access to memory
- Memory leaks
- Non-acceptable slowness (e.g. wrong class of complexity)
- Useless or missing inclusions
- Usage of global variables or macros
- Missing static in the signatures of local functions
- Badly structured code (e.g. monolithic code, uselessly complex code, avoidable redundancy, etc.)
- Uselessly limited code (e.g. size fixed at compile time)
- Misplaced variable declarations (e.g. declaring a variable far from where it is used, not following C99 conventions, etc.)
- Inadequate types (e.g. not using size_t for array lengths or iteration variables, etc.)
- Inadequate control structures (e.g. using while to iterate over an array when for is more natural, using for with a complex condition when while would be clearer, using several if statements when a switch is more coherent, etc.)
- Not using shortcuts (e.g. a += b instead of a = a + b, not using the ternary operator, etc.)
- Not verifying output values (e.g. the value returned by malloc)
- Using non-informative names for variables, functions, etc. (except for the course algorithm)
- Using non-conforming variable/function/structure names
- Indenting inconsistently
- Using inconsistent convention (spacing around operators, spacing in signatures, brackets layout, etc.)
- Inappropriate use of comments
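To make some of these points concrete, here is a small hypothetical function (not from any assignment) illustrating several of the conventions above: a static file-local helper, size_t for sizes and indices, declarations next to first use, a checked malloc, and consistent spacing:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Returns a freshly allocated copy of the first n elements of src,
   or NULL on allocation failure. */
static int *copy_array(const int *src, size_t n) {
    int *dst = malloc(n * sizeof *dst);
    if (dst == NULL)                     /* always verify the returned value */
        return NULL;
    for (size_t i = 0; i < n; i++)       /* size_t index, declared at first use */
        dst[i] = src[i];
    return dst;
}
```

None of the choices here are course-mandated in themselves; the point is that each of them addresses one of the style items listed above.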