Performance pay for teachers is in the news at the moment, what with federal Education Minister Julie Bishop in Darwin today for a meeting with her State and Territory counterparts. Apparently she intends blustering and bullying the States about performance pay, despite an unpublished Australian Council of Educational Research report that debunks Bishop’s proposal to give school principals the power to hire, fire and promote teachers.
It doesn’t take very long to conclude that this is a particularly stupid idea. Moreover, designing fair and workable performance pay systems is very difficult, a lesson I learned by managing a medium-sized legal firm in Darwin during the 1980s and early 90s. At Andrew Leigh’s blog, commenter Anna Hough succinctly summarised the problems in the context of teaching when this issue raised its head last year:
I can see difficulties with all the types of measurements I can think of …
– “value added” (student improvement over a given time frame) – difficulties for teachers of special needs or less able students, danger that teachers may only “teach to the test” or “spoonfeed” students;
– student assessment of teachers – unworkable in primary schools, danger of malicious student assessment in high schools (e.g. stricter teachers may get unfair assessments);
– principal or other teachers assessing teachers – subjective, may create a tense and competitive, rather than cooperative, environment in schools, and the presence of other staff in a classroom affects student and teacher performance, so may not lead to a true assessment of the teacher’s ability.
There’s also a serious risk that giving principals that sort of power will result in bias and cronyism. In addition, giving principals powers over their staff equivalent to those of a small business manager in the private sector makes no sense in a situation where they do not have available to them any meaningful “marketplace” information and are not subject to the inherent discipline of the marketplace.
That isn’t to deny that better career paths for teachers, and tangible recognition of genuine merit and excellence, should be pursued. Teachers’ current pay scales plateau after about 10 years in the classroom, resulting in excellent, experienced teachers seeking professional advancement elsewhere. A performance-based promotion system to provide a viable career path for more experienced teachers would reward post-graduate specialist qualifications, acquisition of professional development in-service certificates and other reasonably objective measures of professional excellence. A career path that provides tangible encouragement to ongoing professional development will help to combat the staleness, rigidity and lack of imagination too often evident in teachers after years of the classroom grind.
However, that isn’t the main focus of this post. There are a couple of underlying issues that I want to explore. At least some of the impetus towards performance pay (with both Labor and the Coalition embracing varying versions of such proposals) flows from a perception that levels of literacy/competence have fallen over time, in both teachers and their students. But is this actually true? My own amateur investigations suggest not. A significant part of the problem seems to lie with somewhat misleading characterisations of research by ANU economist and blogger Andrew Leigh. Some of the confusion may even result from Andrew’s own words.
Let’s start with the perception that student literacy and numeracy have fallen over time. First, this isn’t borne out by international comparative studies like PISA, which consistently show Australia in the top handful of countries in both literacy and numeracy. Misunderstandings about a 2002 ACER paper by Sheldon Rothman seem to have provided the genesis of a contrary perception. Moreover, that perception seems in part to have been fuelled by populist MSM articles by Andrew Leigh:
Quiz time. How much do you think the literacy and numeracy standards of Australian 15-year-olds improved from 1975 to 1998? Those who answered “a little” or “a lot”, stay back after class. The correct answer is: they fell. A report by Sheldon Rothman of the Australian Council of Educational Research shows the literacy and numeracy standards of 15-year-old Australians were lower at the end of the 1990s than in the mid-1970s.
This claim is just plain wrong. For a start, Rothman’s study looked at 14 year olds, not 15 year olds. But even with that correction Andrew’s claim is still rather misleading. Rothman’s study analysed literacy and numeracy results between 1975 and 1998 (the most recent period for which test results comparable to 1975 are available). It provided results for:
- kids who were 14 years old or in year 9 at school (“the whole cohort”) (in fact, to be more accurate, the “whole cohort” samples in earlier years included a very few kids who weren’t 14 but a much wider spread of school years, while the samples in later years included only kids in year 9 but a much larger number of kids who weren’t 14);
- kids who were 14 years old irrespective of their school year level; and
- kids who were both 14 years old and in year 9.
Only for the last of these sub-groups was there a clear but small drop in literacy and numeracy over the 23 year period. There was no such clear drop either for the whole group/cohort studied or for the sub-group of 14 year olds irrespective of school year. In literacy there was a small but statistically significant overall drop (about 2%) for boys, but no change at all for girls. In numeracy, there was a small but statistically insignificant overall improvement for both boys and girls. And there was a significant improvement (around 3%) for students from a non-English speaking background. Rothman observes:
The results reported here indicate that the achievements of Australian 14-year-olds in reading comprehension and mathematics have remained constant during the period. For some groups, there has been improvement, most notably for students from language backgrounds other than English. For other groups, however, results indicate a significant achievement gap. The most significant gap is between Indigenous Australian students and all other students in Australian schools.
Clearly there is a small but significant literacy problem with boys, a reasonably well known phenomenon not confined to Australia. As ACER comments in relation to international comparative PISA tests: “girls outperformed boys in all aspects of reading in Australia, as in all other countries in the survey.” But why the anomalous result for the sub-group of 14 year olds in Year 9? Rothman concludes that it does not connote worse educational outcomes or stupider students, but merely reflects changes over time in school policies on promoting kids from one school year to the next. In 1975 schools tended to make much greater use of forcing lower-achieving kids to repeat a school year, whereas by 1998 schools were forcing fewer kids to repeat a year. The change is significant: around 8-9%. Thus, in 1975 the results of 14 year olds in Year 9 were inflated by comparison with 1998, because fewer of the lower-achieving 14 year olds had even reached Year 9. Accordingly, the change in aggregate results of this sub-group over time says little or nothing meaningful about overall changes in literacy and numeracy. As Rothman comments:
[T]he group of 14-year-olds in Year 9 in the earlier cohorts may have been of higher ability, because of school-entry and grade retention policies and practices; the decline in scores noted here are more likely a reflection of changing enrolment and promotion practices in individual States and Territories than of changing achievement levels in reading comprehension and mathematics.
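The selection effect Rothman describes is easy to demonstrate with a toy simulation. The numbers below are purely illustrative (not Rothman’s data): we generate one fixed distribution of test scores for a cohort of 14 year olds, then compare the mean score of the “14 year olds in Year 9” sub-group under a high-retention regime (roughly 9% of the weakest students held back, as in 1975) against a low-retention regime (almost everyone promoted, as in 1998). The sub-group average falls even though the cohort’s underlying ability hasn’t changed at all:

```python
import random

random.seed(0)

# One hypothetical cohort of 14-year-olds; the same ability
# distribution is used for both "years", so any difference in
# sub-group means is pure composition, not changed achievement.
cohort = [random.gauss(50, 10) for _ in range(100_000)]

def year9_mean(scores, retention_rate):
    """Mean score of 14-year-olds who reached Year 9, assuming
    the lowest-scoring fraction were held back a year."""
    held_back = int(len(scores) * retention_rate)
    promoted = sorted(scores)[held_back:]
    return sum(promoted) / len(promoted)

overall   = sum(cohort) / len(cohort)
mean_1975 = year9_mean(cohort, 0.09)  # ~9% repeating, as in 1975
mean_1998 = year9_mean(cohort, 0.01)  # almost all promoted by 1998

print(f"whole cohort mean:                {overall:.1f}")
print(f"Year 9 sub-group, high retention: {mean_1975:.1f}")
print(f"Year 9 sub-group, low retention:  {mean_1998:.1f}")
```

Running this shows the high-retention Year 9 mean sitting above the low-retention one, which in turn sits just above the whole-cohort mean: exactly the pattern that makes the 1975-to-1998 sub-group comparison misleading while the whole-cohort comparison remains valid.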
It’s unfortunate that Andrew’s MSM articles did not make this critical distinction clear. This confusion may have contributed to this entire discussion being conducted on a false premise that student literacy and numeracy levels have declined when there is no evidence of that. In fact, one can plausibly argue that keeping school literacy and numeracy levels constant is a fairly impressive achievement of Australia’s education system given the enormous social and demographic challenges schools have faced over the last 30 years. As Rothman comments:
In 1975, when students in the first cohort in this report were tested, the war in Vietnam ended. In May of that year, post-war refugees began arriving from that and neighbouring countries in greater numbers than had arrived previously. Since then, immigration from South East Asian and other countries where English is not the main language spoken has increased dramatically. Between 1986, when data were first collected, and 2000, the last year for which data are available, the number of students from non-English-speaking backgrounds enrolled in New South Wales government schools rose by 60 per cent (NSW Department of School Education, 1993a; NSW Department of Education and Training, 2000). In that time, the proportion of students from language backgrounds other than English rose from 15.2 per cent to 23.7 per cent of all enrolments.
Other evident challenges to literacy over the period since 1975 include the large increase in divorce and the number of single parent families (no-fault divorce was only introduced in 1975), and the widespread use in the home of colour TV and cable TV, computers and the Internet (all of which might be expected to operate as a distraction from kids developing consistent reading habits).
Andrew’s somewhat jaundiced take on changes in literacy and numeracy over time seems subsequently to have led him (together with fellow ANU academic Chris Ryan) to examine changes in teacher “aptitude” and “quality” over time as a possible explanation for the (essentially non-existent) drop in student literacy and numeracy. Leigh and Ryan define “aptitude” by reference to ACER literacy and numeracy test scores of new teachers, and find that this has fallen by about 8 percentage points between 1983 and 2003. They conclude:
We believe that both the fall in average teacher pay, and the rise in pay differentials in non-teaching occupations are responsible for the decline in the academic aptitude of new teachers over the past two decades.
I don’t have a problem with that statement; indeed I agree with it. However, Leigh and Ryan also appear at least to an extent to conflate teacher “aptitude” (i.e. ACER test scores from the teachers’ own school days) with teacher “quality” (their paper is titled ‘How and Why has Teacher Quality Changed in Australia?’). Andrew acknowledges the distinction in passing in an MSM article about their research, but fails to explore it:
Teacher performance also may be amenable to development through effective training.
In fact I suggest that is precisely what has occurred. In 1975, and to a considerable extent even in 1983, there were lots of 2 and 3 year-trained teachers. My ex-wife was one of them. She upgraded her teaching qualification from 2 years to 3 in 1982 and from 3 to 4 in 1986-7 as a result of Education Department expectations/requirements. She was only one of many. Nowadays teachers in just about every State and Territory must be 4 year-trained as a prerequisite to teacher registration.
I certainly wouldn’t dispute that entry-level “aptitude” may have fallen over the last 20-30 years, for reasons ranging from declining comparative salaries to the need to recruit many more teachers in total because of greatly increased student Year 12 retention rates and the drive to reduce class sizes. But education authorities responded to those pressures by requiring aspiring teachers to spend longer at university. One could only seriously entertain the proposition that teacher “quality” or “performance” has fallen if one believes that an additional year or two of higher education has made no positive difference to professional outcomes. The fact that student literacy and numeracy levels have not fallen over the last 20 years, despite major social and demographic challenges, suggests that a fall in average teacher performance is somewhat implausible.