A stray thought about exam questions

Having just finished the final units I need to qualify for an undergraduate degree, the topic of examinations is still fresh in my mind. Generally these fall into two formats, open-book and closed-book, with two major categories of question: multiple-choice and short-answer.

The exact mix of open/closed and MC/SA will vary from professor to professor and course to course. It can also vary based on the nature of the field and the ratio of teaching staff to students. As a law student I faced a common theme of open-book short-answer exams. During an “intro to psych” unit, all exams were multiple choice — there were 600 students in the course and two lecturers.

But all of these formats have one thing in common: the exam questions are secrets.

Much of the efficacy of the exam is tied up with protecting the questions from disclosure. In a sense this is a bit like relying on a secret key for the effectiveness of a cipher: as soon as the key is revealed, the cipher is no longer effective.

Why have a secrecy requirement? Consider the opposite case where the questions are simply reused every year. The problem is that the student can simply memorise “the” answers. This is generally considered unacceptable because the potential set of questions is always going to be too small to properly determine the student’s mastery of the subject.

What this reveals is that exams are basically an attempt at statistical sampling: some quasi-random subset of all possible questions is selected. The student’s performance on that subset is taken as a meaningful proxy of their overall mastery of the subject.

So far so good. But note that I said it’s a quasi-random subset. Why does that subset have to be created from scratch each year? Because of the secrecy-of-questions requirement.

But what if, instead of creating new questions each year, there was instead some portfolio of (say) 1,000 questions that is reused each year? The student is then examined on (say) 10 of these in the final exam.

At no point are the questions secret. Students may study and review them whenever and however they please. They simply will not know in advance which of the questions will be asked of them. Some set of questions will be randomly selected immediately before the exam papers are printed. It could even be made double-blind, with lecturers not knowing which questions will be asked.
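The selection step described above amounts to sampling without replacement from a fixed bank. A minimal sketch in Python, using the illustrative figures from this post (1,000 questions, 10 drawn per exam); the names here are hypothetical:

```python
import random

# Hypothetical question bank: the 1,000-question figure from the post.
question_bank = [f"Question {i}" for i in range(1, 1001)]

def draw_exam(bank, n=10, seed=None):
    """Sample n distinct questions uniformly at random from the bank.

    Recording the seed makes the draw auditable after the exam is set,
    which supports the double-blind variant: nobody chooses the questions.
    """
    rng = random.Random(seed)
    return rng.sample(bank, n)

exam = draw_exam(question_bank, n=10, seed=2024)
```

Because `random.sample` draws without replacement, no question can appear twice on the same paper.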

I imagine that one of three things could happen:

  1. Students could devise and memorise answers to all, or a large subset of, the questions. In which case, won’t they have had to learn the subject matter? Even the act of rote memorisation can lead to pre-conscious synthesis of key principles as a basis for future reasoning.
  2. Students with prodigious memory or trained in mnemonic techniques will do better; but they do so already.
  3. Some students will not be motivated and will simply fail under the new scheme. Again, no change.

Therefore I hypothesise that this approach – the “question portfolio” – would provide a better method of examination than the current approach.

Additional benefits:

  • Questions are already linked with learning outcomes — students could be told what the link is.
  • Students can precisely calibrate their current understanding by taking randomised tests when it suits them.
  • Questions can receive much higher investment, as they will not be discarded each year.

Potential drawbacks:
  • High initial cost of developing a large corpus of questions.
  • Ongoing costs of “managing the portfolio” to reflect improvements, changes in the subject, etc.
  • It’s unusual and may face resistance or bureaucratic inertia. For instance, it may not be compatible with university rules.

Of course this is all mere speculation on my part. I am not an expert in education; but with the greatest possible respect, neither are my professors.

At the very least, we could put this to the test. Develop a corpus of questions for (say) 20 subjects. Then, at the beginning of the semester, randomly select 10 of them to be taught with open questions and 10 to be taught with secret questions. Compare the average performance of those two sets with historical performances. That should give a fuzzy feel for whether it works better or not. I’m sure Andrew Leigh would know a better way to do it, but that’s my gut sense of how it might work.
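The random assignment in that rough trial design is easy to sketch. This assumes the illustrative figures above (20 subjects, split 10/10); the subject names are placeholders:

```python
import random

# Hypothetical trial: 20 subjects, half taught with an open question
# portfolio, half with conventional secret questions.
subjects = [f"Subject {i}" for i in range(1, 21)]

rng = random.Random(1)  # fix a seed so the allocation is reproducible
open_group = sorted(rng.sample(subjects, 10))
secret_group = sorted(s for s in subjects if s not in open_group)
```

The two groups partition the subject list, so every subject is taught under exactly one regime.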


This entry was posted in Education, Geeky Musings.

9 Responses to A stray thought about exam questions

  1. Patrick says:

    That’s a bit confusing. I am not entirely sure what you would gain, apart from recycling exam questions (which is already common enough over a three-to-five-year cycle).

    And don’t ‘secret’ questions measure students’ ability to rapidly synthesise information and apply the relevant principles – isn’t this at least less obvious in the ‘open’ question format? After all, in real life, there is certainly not a list of questions that your client has to choose from!

  2. James Farrell says:

    Interesting post, Jacques.

    You distinguish between MC and short answer questions, but in fact these are quite similar in that they tend to be mechanical questions with cut-and-dried answers. In both cases, but especially that of MC, many students would opt for the memorisation route even if there were hundreds of questions (thousands is a bit unrealistic). This is time consuming, but a lot of students don’t mind putting in a lot of hours as long as you don’t ask them to actually think. The effect would be to divert their time from reading, thinking and discussing — too bad if these happen to be the cherished aims of the instructor. Class attendance would be low if the answers could be found elsewhere.

    I think your scheme would work better for essay questions, which are much more open ended, and require students to include and integrate a range of material. This is hard to do convincingly without understanding the topic, and provides an incentive to read with curiosity and get involved in class.

    I use your scheme in one course, with quite complex essay questions. But, far from a bank of a thousand questions, there are just twelve — one corresponding to each lecture topic and reading assignment. The students know from the beginning that the exam will consist of four of these, chosen at random.

    Incidentally, what’s with the Yankspeak? The other day we had Don going on about ‘math’, and now all of your instructors are ‘professors’ (maybe they really are). I suppose resistance to globalisation is futile.

  3. Jacques Chester says:

    How long have you used the 12-questions scheme, James? How does it compare?

    I default to “professor” as an honorific — most of them are associate this and deputy under that. If they want to correct me they do. I know most of my professors on a “first name” basis.

  4. Tel_ says:

    But what if, instead of creating new questions each year, there was instead some portfolio of (say) 1,000 questions that is reused each year? The student is then examined on (say) 10 of these in the final exam.

    You are not supposed to notice but if you go back through 10 years of past exams you will almost certainly find a portfolio being reused regardless of the subject. That’s what got me through *shrug*.

    The New South Wales RTA driver’s exam runs in a similar manner…


    There’s a test screen in RTA offices, you can sit and practice all day. I presume they figure the worst thing that could happen is a few people actually learn something about driving. Not sure exactly how big the database is, presumably it’s an ongoing thing.

    If anyone wants to explain how merging works in this state it would be useful.

  5. Jacques Chester says:

    Are you in WA? Because the basic theory of merging in this state seems to be “if I drive precisely parallel with the other driver, he or she will give way”. Curiously, this doesn’t scale.

  6. James A says:

    UWA recently inflated all its level descriptors so nearly everyone is a professor of some sort – before that it went {associate,,senior} lecturer, {associate,} professor and professorial fellow.

  7. SJ says:

    I don’t think it’s generally the case that the questions are kept secret. In my experience as a lecturer and as a student, students expect the range of questions to be very limited, and get really cheesed off if they aren’t.

    James’s 12 question format seems to be quite common, though the number varies, like 6 to 20. You have to be able to do something like 4 of the questions in the exam. If you can memorise the proofs and write them out, then that in itself is sufficient evidence that you’ve at least put in some effort. Note that these sorts of questions don’t have simple answers like say “67”.

    Note also that being told in advance the exact wording of every question in the exam doesn’t necessarily give any advantage. If you know for a fact that question 1 in the exam is going to be “Describe the doctrine of estoppel, giving its history in English and Australian law, and at least one example where its usage might be critical in a future case”, there’s no standard answer to that question that you can memorise.

    Anyway, Jacques, on another point, have you come to some sort of decision about what you’re going to do next year?

  8. Edward Mariyani-Squire says:

    In my experience with 1st year economics students:

    1. With respect to multiple-choice and short-answer questions:
    Giving students a bank of questions ahead of the test versus giving them no bank (with only the instruction that the questions will be based on material in chapters X, Y, Z of the textbook) results in no substantial change in the distribution of marks. It does substantially decrease the level of whining and crying on the part of students both before and after the test.

    2. With respect to exam essays:
    I’ve tried three scenarios. Telling students before the test
    (a) only the topic area of each question (e.g. “there’s a question about externalities”).
    (b) the exact wording of x questions that will be in the test, of which they have to select x-y to answer.
    (c) something similar to James’ suggestion (exact wording of x possible questions, of which a random x-y number of questions will be in the test).
    In my experience, there has not been a substantial difference in the quality of the answers between (b) and (c). The quality of answers was much lower in the case of (a). The worry with options (b) and (c) is that some students may collaborate before the test, whereas others (the introverted, the shy, and the misanthropic) may not – the latter thereby being at a disadvantage. Then again, this used to be countered to some extent by the fact that students had been organised into teams for other tasks, and so everyone was likely to naturally engage in collaborative study and preparation for the exam anyway (which can be quite a good thing).

  9. conrad says:

    I have an alternative, which, in many areas, is to get rid of exams altogether. It seems to me all exams do is (a) test the ability of students to do things quickly (that might be good in some areas); (b) give students something they won’t complain about doing on a feedback sheet (they’re habituated to doing them); and (c) give the marker something quick and easy to mark.
    My belief is that students learn by far the most from doing assignments — we get students that have done these “pre-entry” courses that cover a lot of material but have no real assignments, and they’re awful in terms of skills.

Comments are closed.