Julia Gillard has announced that the new national website for schools will include average NAPLAN scores. Principals hate the idea, as do some education academics. The Minister has responded to the criticisms by being uncharacteristically evasive. She invokes ‘transparency’, but always uses less controversial examples than the NAPLAN scores to illustrate the merits of transparency.
Much of the noise in the media has been generated by the political sideshow in New South Wales, where parliament passed legislation, introduced by the Greens, prohibiting the publication of school league tables; the Government then tried to revoke the new law but was blocked by the Opposition. In an act of phony bravado, the SMH last week defied the legislation by publishing NAPLAN scores for three government schools — enough to breach the letter of the law, but not really the spirit. No-one is likely to prosecute. It’s obviously not viable legislation, and will be overhauled one way or another eventually. George Williams thinks it may even violate the constitution.
In fact every state has its own sorry history of back-pedalling and meaningless commitments.
But the real issue is whether education departments, whether federal or state, should be publishing school-level data on external tests. Is it good for society if schools compete for customers on the basis of their students’ performance on standardised exams?
Opponents think that comparing schools’ performance by average test scores is as silly as comparing hospitals using a measure of their patients’ average health scores. Apart from being meaningless, the tables humiliate low-ranking schools, ‘distort priorities’ toward narrow exam coaching, and disrupt children’s lives as their parents shift them between schools in search of illusory quality. (Andrew Norton thinks parents deserve more credit.)
Supporters of league tables believe they give enough indication of performance to be genuinely useful to parents in choosing schools, and that people have a right to information that’s useful to them. To the most enthusiastic supporters, school education is like any other consumer product, and the more information the better. Performance in external exams reveals the quality of the school’s product. The best performers will attract the most students, command the highest fees, earn the biggest profits, and encourage the others to emulate them — to the benefit of all.
But the standard consumer model is not self-evidently applicable. National policy on releasing information should be based on answers to some fundamental questions: How does the information affect families’ choices? What pressures and incentives does this create for schools? Do league tables really contain information that prospective students can use to advantage? Does society benefit on balance from the way that schools and parents respond to the information? If some individuals benefit, but society in aggregate is harmed, by the publication of school-level data, does the public still have a right to see data at some particular level of aggregation? No-one, for example, seems to be arguing that parents have a fundamental right to class-level NAPLAN scores to help them choose among the teachers at a given school. So the cut-off point for disaggregating the data is somewhat arbitrary.
These questions don’t apply only to NAPLAN — schools have been competing for years on the basis of various measures of HSC performance, with major consequences. It’s anecdotally well established that families move suburbs just to be in the catchment for a public school that ranks highly on the newspapers’ league tables. The mania about school-level HSC data is due to the HSC being the main rationing device for scarce places in university courses, which in turn confers labour market advantages.
Once papers begin publishing NAPLAN league tables, parents will inevitably become just as obsessed with them as they are with HSC league tables. They will assume that good NAPLANs predict good HSC scores, and, as with HSC results, that their child will do better if he goes to a higher ranked school.
It hardly needs pointing out that this assumption is not necessarily true: the school may register a high average score simply because its pupils are of higher average ability. Furthermore, if the high average scores attract more high-ability applicants, this will push the scores higher still in a self-fuelling cycle, which has nothing to do with the quality of teaching in that school. In this extreme case, both the individual families and the schools are worse off, thanks to unnecessary travel costs and the emotional trauma of moving children from school to school in a futile chase for higher scores.
Alternatively, it may be true that children perform better individually when surrounded by clever kids. In this case it’s definitely in their individual interest to attend schools with higher average scores, even if — again — the high scores have nothing to do with the quality of teaching. But this is a zero sum game: every time a bright child shifts to a ‘better’ school, this worsens the environment for the classmates he leaves behind at his old school. Thus there is no net social gain, only a streaming process that benefits stronger students at the expense of weaker ones. So the transport costs and disruption are still not justified.
Of course if we change the premise, and suppose that students benefit from streaming, that changes the outcome. In the jostling for places in the best schools, children will be arranged by an invisible hand into schools populated by children of similar ability, allowing them to be taught more efficiently. But if this is the real philosophy underlying the system, it’s based on manipulation. One would think that an explicit policy of streaming, going beyond the current selective school system, would be more transparent — to use the buzzword of choice.
But now let’s suppose that schools can in fact influence their students’ performance in the tests. From the individual student’s point of view, assuming his main goal is to get into a scarce university place, it’s clearly in his interest to go to the school that achieves the best results by preparing pupils for tests. For the individual, the knowledge is valuable. However, the services of top-performing schools are valuable to society as a whole only if the same skills and knowledge useful in exams are also useful in work and life. If the schools merely ‘teach the test’ — rather than provide a rounded education, impart an appetite for learning, teach critical thinking, and so on — then the students are collectively worse off.
These arguments can’t be dismissed as mere fear of the unknown: if British experience is anything to go by, teachers don’t warm to these schemes once they’ve seen them in action. Ken Boston, the former Director General of Education NSW who later ran the UK’s ‘Qualifications and Curriculum Authority’, said in August that the problem in Britain was not so much ‘teaching to the test’, but rather
…drilling practice tests. In spring term in England in key stage two at the end of primary school, there are many schools which spend 70 per cent of the term simply doing practice tests, to the neglect of music and outdoor education and physical education and history and geography and all those things which are so important in a full and rich curriculum for primary aged children. The – an important lesson for Australia is to keep NAPLAN for the purpose for which it is designed, and not bolt onto it a host of other functions as well which’ll make it terribly high stakes and potentially then distort teaching practice and curriculum in schools.
Though one might expect private schools to be the worst offenders when it comes to distorting priorities, it seems that state education departments are already responding to the incentive.
Boston prefers ‘rich reports which explain why a school may be performing less well, not just simplistic league tables. Don’t massage the data, no jiggery pokery, no smoke or mirrors, just present the data as it is. My belief is this would offer greater public accountability than league tables.’
Perhaps Boston’s solution is the most practical. Don’t withhold the unhelpful information: just swamp it with better information. This might include ‘value added’ data, although Boston himself thinks it’s probably too hard to devise a widely acceptable measure of the improvement in a child’s results that’s due to his school’s input.
But there remains the basic philosophical question: does this issue involve a ‘right to information’? Governments withhold information in the interests of safety and privacy, but what about situations where a case can be made that the information will cause people to act in a manner whose aggregate outcome is socially detrimental? I confess I’m having trouble thinking of analogous cases and precedents.