Are we going easy on foreign students in order to get more revenue?

Of course we are, but convincing the outside world of it has required someone to collect the data on the grades given to foreign students and analyse it. Gigi Foster of UNSW has done just that in a study of the marks of students from different backgrounds in different classes. The variation she works with is the performance of international students relative to domestic ones in courses (at UniSA and UTS) with varying mixes of the two groups. She consistently finds that the internationals do worse but, because courses are graded ‘on the curve’ (the distribution of marks is almost mandatorily the same across large courses in a university), international students do better when there are fewer domestics in the course to pinch the higher marks. Grades within tutorials within a course are lower for domestics who share their tutorials with more international students, which is quite strong evidence of a dumbing down in those tutorials, which Foster argues is due to the poor language skills of the average international student.

The Australian clearly thought the study was courageous in that a large revenue stream for universities was deemed to have a negative effect on the standards of courses, which in turn negatively affects domestic students. It brings into focus a trade-off between the revenue stream from fee-paying foreign students and the educational quality enjoyed by Australian students.

Educational quality is notoriously hard to improve by administrative means because, in the absence of market forces, administrations manned by people with short-run incentives have little cause to increase quality and every cause to decrease it. Two ways to go are then to either seek some kind of outside quality signals (a kind of University inspectorate) or else to allow more competition between universities so that it starts to make sense for universities to offer higher quality.

conrad
13 years ago

I thought it was a bit of a beat-up actually — the effects are tiny.

From the Australian:

“Her main statistical findings are that international students from non-English-speaking backgrounds underperform domestic students based on mean marks by four points on a 100 grade scale.”

4 points on a 100-point scale is diddly for things like marks.

“with the stunning finding that classes comprised entirely of international students would on average be 6.5 points higher than those courses comprised solely of domestic students.”

This is a problem of grade inflation — perhaps 6.5 might sound like a lot, but if you compared, say, honours theses from 30 years ago, where a small percentage got H1 marks, to now, where some universities give 100% of people H1 marks, it is clearly small compared to any long-term trends. There has also been massive inflation of undergraduate marks (which universities are almost obliged to produce, incidentally, as it makes their students more competitive for things like scholarships).

“She also found that for every 1 per cent increase in the number of international students from non-English-speaking backgrounds in tutorials, the marks of domestic students in the tutorials fell by 0.0134 points.”

This is my favourite. First, it ignores the money international students bring in. More importantly, even if you ignore this, then if you went from having 0% of your population being international students to 50%, the marks of domestic students would decrease by 0.67 points (i.e., 0.0134 * 50). I doubt that’s even reliable, let alone something worth reporting.
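(Purely as a back-of-the-envelope check of that arithmetic, here is a tiny Python sketch that takes the reported coefficient of 0.0134 marks per percentage point at face value; the shares looped over are just illustrative.)

```python
# Back-of-the-envelope check of the quoted coefficient: -0.0134 marks per
# 1 percentage-point rise in the international-NESB share of a tutorial.
COEF = -0.0134

for share in (10, 25, 50, 100):
    change = COEF * share
    print(f"{share:>3}% international share -> {change:+.2f} marks for domestics")

# At a 50% share the implied change is about -0.67 marks on a 100-point scale,
# which is the 0.67-point figure in the comment above.
```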

“He notes that last year, out of a class of 110 students, more than 80 were international and, of these, 40 had such bad English that he felt it was compromising their performance.”

Of course, if your data is so weak, then why not use an anecdote?

Paul Frijters
13 years ago

I think 6.5 is quite a lot, as it is a within-university comparison – not over time or some other unfair comparison.
The 0.0134 is the direct effect of merely being in the same tutorial, which is a tiny part of the whole course experience, hence you wouldn’t expect the magnitude to be huge. That you find a significant effect from the interaction in the tutorial alone is already an accomplishment.

Ross
13 years ago

I cannot comment on how the result may impact on overall result curves, but when I did my original degree back in the 1990s, the fact that international students were being ‘looked after’ at my then-institution was so blatant it simply wasn’t funny. Fail a subject? Not a problem – automatic supplementary exam. But a ‘domestic’ had to move heaven and earth to get one. Do I have a problem with international students per se? Nope. Do I believe in a fair go for someone with English as a second language? You betcha. But at the same time, the same standards of achievement should apply to all, regardless of whether or not you are full fee-paying. Sadly, my experience was that internationals were looked after, with revenue being the only possible justification, far more so than domestic applicants, ultimately to the detriment of us all in the long run. Of course, if government put more money into tertiary education, this need to court international full-fee-paying students would not be necessary.

Gigi Foster
13 years ago

Thanks for the write-up Paul.

Conrad, if you are saying that the data i’m using are weak, then i would challenge you to come up with better Australian data to look at this question. If you use a broad-based survey approach, where you directly ask a large number of university lecturers whether these effects exist, then do you really think they will tell you the honest truth? The parallel opinion piece in the Australian today by Tony English, where he provides many anecdotes about soft marking that he has amassed by assuring his interviewees of strict anonymity in the context of one-on-one interviews with him, tells you the answer. It would be hard (read: expensive, and of doubtful reliability) to generate a large enough and unbiased enough sample in this way that you could feel confident believing its results. My data are administrative, so by definition no lying is possible, and N is large.

These are to my knowledge the first multi-institutional student-level panel data we have ever had access to in this country. I have 75,000 observations on students taking courses, which provides the variance required to estimate the effects on marks of own background plus spillovers at both the tutorial and course level, controlling for heaps of things that most other data sets that could be used to address this sort of question simply don’t offer. What i most hope comes out of this is more policymakers seeing the value of tapping such data to address important concerns in the sector – not just about international students but about low SES students, mature-age students, students in particular types of programs, and so forth.

In terms of effect sizes, everything’s relative. As a sidebar in the current analysis, since i control for gender i can see that women robustly outperform men, to the tune of 1.8 mark points on average. This effect is highly statistically significant, but at half the size of the international student effect, is it “big”? The international educational community has often made a fuss about average effects of lesser absolute size.

Chris Lloyd
13 years ago

It is brave of this author to publish her findings, as her employer (the ASB) is almost entirely dependent on fee paying OS students.

A real problem is marking to a curve. According to uni rules, you could put 50 chimpanzees in a course and they would get a mean of 75. This does get gamed (by the smarter students). My school has specific policies of imposing grading curves across different cohorts to try and stop this kind of thing. But there is no obvious way I can see of doing it for OS students.
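(For illustration only, here is a minimal Python sketch of that kind of mean-targeting curve, assuming the mean of 75 mentioned above; the target spread and the sample marks are made up.)

```python
import numpy as np

def curve_to_target(raw_marks, target_mean=75.0, target_sd=10.0):
    """Rescale raw marks so the cohort ends up with a fixed mean and spread.

    Illustrative only: real scaling policies differ across schools.
    """
    raw = np.asarray(raw_marks, dtype=float)
    z = (raw - raw.mean()) / raw.std()        # standardise within the cohort
    curved = target_mean + target_sd * z      # map onto the target distribution
    return np.clip(curved, 0, 100)            # keep marks on the 0-100 scale

# Even a uniformly weak cohort ends up centred on 75 after curving.
weak_cohort = [35, 42, 48, 50, 55, 58, 61, 66]
print(curve_to_target(weak_cohort).mean())    # ~75
```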

Constance
13 years ago

Paul, the answer to your opening question is YES. Domestic students are losing out and international students are just seen as cash cows, to the detriment of quality education. The VET international students especially are mainly seeking permanent residency, and dodgy colleges have been set up for just this reason. Many education providers accept international students regardless of their level of English.

conrad
13 years ago

Gigi,

First of all, I should say that I’ve only read The Australian article, which could be entirely unrepresentative of what the report is really about, so I could be complaining about The Australian and not you (that would be no surprise). In addition, I’m not disputing that your data is good (it may be the best data on the issue in Aus); I’m basically saying that the results reported are sensationalized in the typical The Australian way and don’t appear to concur with what you said, making them less worrying than they appear.

For example, the first quote under your photo says: “Gigi Foster believes international students from non-English-speaking backgrounds are being allowed to underperform”. However, as far as I can tell, your data doesn’t suggest this. The main finding of interest appears to be that if you have more full-fee-paying students, then the _overall_ mark of the class goes up 6 points. So what you’ve found is another factor contributing to grade inflation (which has been going on for donkey’s years), unless you are claiming that the 4 points less that OS students get needs to be hidden, and that this is done by increasing overall marks 6 points, which isn’t really hiding it anyway (cf. simply giving higher marks to students in courses where you know many pay).

I could name 20 other factors in grade inflation and more serious problems, but Tony English has beaten me to it — though most of these have nothing to do with overseas students. Things like the overuse and abuse of all of those satisfaction surveys, combined with non-random sampling, are far worse — I don’t even know who the OS students are in my courses, for example, but I do know that if I catch 6 students for plagiarism who all give me 1 on all of the categories (as they will), this will bring my average down massively, since only 15 will bother to fill it in. I will then have to explain to a committee why my average was low, and I also won’t have any chance of promotion.

In the end, are we to worry about grade inflation? Maybe, but if we worried too much, most of us would have died of stress in the last two decades. It’s also clearly different from hiding the under-performance of groups.

Now the second paragraph of content actually about the students says:

“At the same time, these poor English skills weigh on the results of domestic students in the same tutorials.”

Yet the effect size here is so trivially small that it’s hardly worth mentioning in this sort of article (I’m not saying it’s worthless to know, incidentally). If you were the only English-speaking student in a class of 100, your mark would be 1 point less than if you were in a class of only English-speaking students. This is even ignoring the fact that your class probably has 15 people instead of 30 because of these students. If this is “weighing” on the results of domestic students, it isn’t weighing very heavily, at least compared to many other things, like many of them having to work too much.

Gigi Foster
13 years ago

For Conrad, and anyone else wishing to find out what i actually do in this paper (rather than what The Australian says i do), find the paper here: http://ssrn.com/abstract=1756829

I am the last person to claim that the international student “effect” is the largest one impacting students’ marks. Neither do i think that grade inflation is the most important factor determining a given academic’s professional satisfaction or career advancement. The takeaway should instead be that the data exist in university databanks to answer many of the questions implicit in the higher-ed policy debates we see around us, and it’s only our own laziness and/or fear that keeps us from using these data to examine issues head-on in an objective and rigorous fashion and trying to address them, for the good of the sector and the students it serves. Pointing to my results and saying, “Hey! The difference between intl and domestic students isn’t that big anyway – only four marks. Nothing to worry about in the grand scheme!” at least acknowledges that we’ve made progress: we actually know something about an important issue, based on a large-N sample of microdata on students, controlling for lots of things the economists say we should control for.

The tragedy of all this is that the institutions that were brave enough to submit their data for analysis in the first instance may now feel sufficiently freaked out by the media attention that they clam up. If the sector backs away from the rigorous use of hard empirical data to answer important higher education questions, then my efforts in this endeavor will have been in vain.

Martin
13 years ago

I think that we need to be careful about extrapolating from the two universities to all universities, as well as being careful about causation. I’ve taught in two Melbourne universities since the mid-90s, in both IT and Business, and have taught many international students, so some comments:

1) I’ve never been pressured by anyone to change a mark for an international student, and never been pressured in any way to provide special treatment for an international student. Now I know that this goes on, and I don’t doubt Ross’s account, but I doubt that it is consistent across all universities and I doubt that it is even consistent within most universities. I hear far more allegations of this than I hear actual cases.

2) Until the ESOS Act came in, I never knew who was an international or a domestic student, and even now I only know for some students because I’m actively involved in the administration of the degree and we have specific rules that we must follow for international students, laid down by the Department of Immigration – rules which for the most part are negative in effect for international students. I also learnt very early on not to assume who is or is not an Australian student. Having a non-Australian accent doesn’t mean that you aren’t an Australian, and having an Australian accent doesn’t mean that you are one; I’ve been surprised both ways.

3) Yes, there is an English problem for some international students, but I know that the university I teach at has a wide range of programs and assistance available to all students who have English issues, which often includes Australian students. I’m currently teaching in the IT area, and assuming that because a piece of work has poor English its author must be an international student is a far from valid assumption. But, yes, having poor English clearly reduces the chance and level of success in studying at uni.

4) The assumption that all international students should have wonderful levels of English is invalid. We set a reasonable level of English for our international students to achieve, a level which allows many of them to go on to have very successful university careers. But as it is a broad standard that needs to be applied to individuals, there are a number of individuals who, despite meeting the standard, in their particular case need to be better at English to succeed. Perhaps they are not quite as bright as the average, perhaps they have a more difficult time adjusting to life in Australia, perhaps their support network amongst their friends is not as good, perhaps they don’t work hard enough at university. One or more of these, combined with the basic level of English, might put them on the road to failure, but it’s pretty clear that an international student with an IELTS score of 6.5 in many cases has adequate English to perform well in a business or IT degree. It has often been proposed to raise the level of English required of international students – Bob Birrell has argued this – but that would prevent many international students from joining our degrees and successfully completing them.

5) A lot of the paper assumes that results are graded on a curve. This may be common at UTS and UniSA, I’m not sure, though I do know from colleagues at both those institutions that it is definitely not universal there; but in 16 years of university teaching, I’ve never marked on a curve and never been asked to mark on a curve. I’ve had a tutorial where I awarded 50% HDs, 40% DIs and 10% CRs and there was no grade inflation – they were just an amazing group of students who all ended up in the same tute together and really pushed everyone up. On the other hand, I had a tute where I failed 50% of them and no one got an HD. In neither case did anyone say a word. I’ve seen whole subjects with fail rates of 50%, and I know colleagues in degrees at universities with low ENTERs who have failed 70% of the students, semester after semester. So when academics say ‘We have to mark on a curve’ I’m always sceptical: I’ve never seen any university policy insisting on marking on a curve, and I’ve taught at 4 of the Victorian universities. The one time I taught in a degree where there was any grading on a curve, I indicated that I didn’t see the need for it in my subject and that I was happy with the mark distribution; that was fine and no pressure was applied to me to change my mind.

6) Gigi, how do you know that the courses were graded on a curve? This doesn’t strike me as the sort of data the university would be likely to store administratively. As well, your quote “In particular, it is well known both from anecdote and in the education literature that many university courses are marked on a curve.” implies to me that you don’t actually have data on this and are making an assumption that grading to a curve is prevalent. Criterion-referenced assessment, as you mention in footnote 2, is nothing new; I saw it in the late 80s as a student and it’s been commonly used in both business and IT degrees since I started lecturing in 1995. My personal perception is that grading on a curve is common in the United States, but far less common in Australia.

7) One of the reasons that I’m doubtful about the results is that I’ve analysed the data in the degree I run from 2001 to 2007, covering many thousands of course results (not as large a sample as Gigi’s, and focussed on just one degree, but still a respectable sample size), and I saw very little in the way of significantly different results between international and domestic students. Some nationalities were higher on average than domestics, some lower, with only one nationality being statistically significantly lower at the 0.05 level, and I don’t think that is going to hold up when I add the 2008-2010 data.

8) In terms of grade inflation, my perception is that it is no easier to get an HD in the degree I’m involved in than it was 20-plus years ago when I was doing my undergraduate study in a similar degree. In fact, I can say with certainty that in our degree there has been grade deflation over the last five years, which has nothing to do with international students specifically.

Finally, let me say that though I have my doubts about the interpretations of the results that have been presented by Gigi and others, I think this is a great piece of research and she has my congratulations for doing this sort of work. Far too little work is done on actually analysing the mounds of data we gather about our students’ performance, so work like this needs to be done more often and encouraged. I also think that UTS and UniSA should be congratulated for making the data available, and that we shouldn’t jump on any bandwagon attacking them for being soft on international students, as I doubt that they are any different from the rest of us, whatever the interpretation or result.

conrad
13 years ago

“The tragedy of all this is that the institutions that were brave enough to submit their data”

If they want data leaning the other way, then they should submit data from other faculties. For example, some of the engineering people I know find the opposite, since the OS students can actually do maths.

Gigi Foster
13 years ago

Martin,

I agree with you that it’s likely the findings based on the data I have are not going to hold for every institution-by-discipline in the Australian higher ed landscape. Wouldn’t it be great to actually *know* where the effects are greatest, so we could do something about it?

In regard to your question about grading on a curve, what i have are administrative data. If you read the paper you will see that what i have done is calculate the overall percentage of international NESB students in a course, and used that as a regressor (along with heaps of other things) in an equation predicting marks. The effect is positive and significant. When i look at exactly who is feeling this positive effect, it’s international students themselves. Effectively, they are free-riding on each other within the course. Of interest, this effect is NOT in evidence at the tutorial level, so the explanation is unlikely to be about international students helping each other out when they represent a larger fraction of the class. Now, is this a case of marking standards adjustment, or something else? If you have a better interpretation, shoot.
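(For readers curious what such an exercise looks like mechanically, here is a minimal sketch — not Foster’s actual specification, data, or variable names, and with far fewer controls than the paper uses — of a regression of individual marks on the course-level share of international NESB students. All column names below are hypothetical.)

```python
# Minimal sketch of a course-level-share regression; column names are
# hypothetical and the control set is far smaller than in the actual paper.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("student_course_marks.csv")  # hypothetical blinded extract

# mark: final percentage mark in the course
# intl_nesb_share_course: % of the course's enrolment that is international NESB
# is_intl_nesb, female, age: illustrative student-level controls
model = smf.ols(
    "mark ~ intl_nesb_share_course * is_intl_nesb + female + age",
    data=df,
)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["course_id"]})
print(result.summary())

# A positive coefficient on intl_nesb_share_course (and on its interaction with
# is_intl_nesb) would mirror the course-level effect described above: the boost
# accrues mainly to the international students themselves.
```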

Andrew Norton
13 years ago

I think Gigi’s paper is a very useful one, but for the broader ‘soft marking’ debate I draw largely opposite conclusions to those appearing in newspaper reports. The results are consistent with grading on a curve inflating the results of classes that happen to have lots of weaker students, but not with any systematic policy of soft marking international students.

None of the papers have reported the actual average marks – 57% for internationals, 62% for domestics – which hardly look like the product of a regime of soft marking. The histograms on p.8 of Gigi’s paper show large fail rates and that very few students get high marks. That most marks are mediocre or worse is the major concern I have after reading this paper.

One data source that has not been utilised in this – Gigi’s paper aside – anecdote-driven debate is the pass rates published by DEEWR. One reason I had been sceptical of soft-marking claims on internationals is that there had been no upsurge in the pass rate for internationals over time. However, recently some institutions have shown strange increases that are worth further investigation.

Martin
13 years ago

Gigi,

I don’t know what the reason for the effect is; sometimes the data just doesn’t provide enough information to draw a reasonable interpretation. Your interpretation is based on the assumption that international students’ marks are being increased by a grading-to-a-curve effect, but that’s a fairly big assumption and your data provides little if any evidence to support it. I lecture in a business faculty and I can tell you that marking to the curve is not common here, nor in my experience elsewhere in Australia. Maybe my experience is atypical, but before I made your interpretation, I would be looking for more solid grounds to support it. I agree that the result you found is very interesting and, at an effect size of 6.5%, large enough to make a real difference; it should be investigated further.

You may be right: it could be that informal grading to the curve is going on, in that lecturers are not using criterion-referenced marking to any extent and are just roughly grading to the curve in their heads without even realising it, but again, I’d like to see some evidence of that. Do you see nice normal curves in the grade distributions of the subjects? If that were common, it might be evidence of some form of marking to the curve, even if there is no official marking to the curve. In that case, of course, it’s likely that the effect has nothing to do with the level of international students, even though they benefit from it, but is just a general tendency on the part of lecturers to kowtow to the normal curve. Personally I often see bimodal distributions (Pass-Distinction or Credit-High Distinction), even occasionally trimodal distributions (Fail – Credit – High Distinction), so that, combined with my criterion-referenced assessment, means I’m pretty sure that I’m not grading to a curve, even subconsciously.

In terms of “actually knowing”, you are 100% correct. I’ve done similar (though simpler) analyses of the data from the degree I manage and it has driven some useful learning in terms of running the degree. What amazes me, however, is how you got this past the Ethics Committee at UNSW; at my institution it’s been made very clear that we can’t use this sort of data to publish unless we get specific permission from the students, which of course is basically impossible for this type of data. I’m very surprised (pleased as well) that you were allowed to use it by UNSW, UTS and UniSA. Was this an issue and, if it was, how did you address it? If it wasn’t an issue, maybe I can get the UNSW ethics committee to approve my ethics applications :-)

Andrew, I don’t know about in general, but I know that in the business faculty I’m a part of, the performance of international students is a concern and we do a lot to help both them and the domestic students to succeed at university, with mentoring programs, additional tutoring programs, study-skill programs, counselling services and English support programs. Of course, assuming a relatively equal skill set for domestic and international students, it’s not surprising that the internationals do a bit worse. Studying in a non-native language (regardless of how competent you are) and studying in a foreign country is fundamentally a pretty stressful task, so it’s no surprise that some international students don’t cope, and no surprise that the proportion of them who don’t cope is larger than among domestic students, leading to lower average marks, because the stresses on them are greater.

cheers

Martin

Gigi Foster
13 years ago

Hi Martin,

Whether you call it “marking to a curve” or “relaxed standards” to me boils down to semantics, since they both translate as “relative marking.” It’s true that the former implies a shift in the entire distribution, and for that reason it is not a perfect linguistic fit to this situation, where the action is in the low to mid range of the distribution.

Naturally i had to have a water-tight Ethics situation for this study. What happens as a general protocol is that UNSW’s Ethics board is in charge of the project as a whole, so they approve the project (and, incidentally, each survey instrument that i use – i have matched survey data too, i just didn’t use them in this paper), and UNSW’s approval is then ratified by the Ethics committees of UniSA and UTS.

Any notion that researchers should not use student data to learn things that could help students and the sector (as well as being publishable eventually, in order to encourage good researchers to put their effort behind it) is disgraceful, in my view. I’m not against some shielding – i never identify individual students, and i don’t even report analysis for either institution on its own – but running away from data is puerile.

Laurence
13 years ago

Access to the relevant ‘working paper’ would be very helpful. Also, are the data available to others in university researchers?

Laurence
13 years ago

..or would “…others engaged in research” be better (poor English is not confined to NESBs)?

trackback

[…] paper by Gigi Foster has received massive attention in the Australian and also some publicity at Club Troppo (cross posted at Core Economics) and Andrew Norton. GIGI Foster knows her disturbing research […]

Andrew Norton
13 years ago

Laurence – Gigi gives the link at comment 9.

trackback

[…] renewed debate about ‘soft marking’ (Club Troppo here, my original post here, Catallaxy here), I have revisited a post from last year about pass rates […]

Brendan
13 years ago

Gigi, can you release the data set?

Martin
13 years ago

Gigi,

this is where we have a misunderstanding: marking to a curve has quite a specific meaning, and it is quite different to just having relaxed standards. Marking to a curve implies having a fixed distribution of grades in every class, e.g. 10% HDs, 20% DIs, 40% CRs, 20% PAs and 10% Fails, or some other distribution. A lot of people who mark to a curve like to use the normal distribution. This doesn’t imply at all that marks for anyone go up; it can quite easily mean that overall marks go down from the raw scores. Personally, I’m not in favour of this at all, except in certain restricted circumstances: those being where you have multiple markers handling different sections of the subject and you can’t put in the time and effort to ensure that each of the markers is marking to a similar standard. A US professor back in the dot.com boom pointed out to me that he had many sections in one of their subjects at the time, they were desperate for tutors/lecturers and basically had to take whoever they could, which meant that the quality of lecturers/tutors taking the subject varied dramatically; in such a case marking to the curve does have an element of fairness.
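(Purely as an illustration of that strict sense of the term, here is a toy Python sketch that forces the 10/20/40/20/10 grade split mentioned above onto any set of raw marks; the helper and the example class are made up.)

```python
import numpy as np

# Fixed grade shares in the strict "marking to a curve" sense described above.
GRADE_SHARES = [("HD", 0.10), ("DI", 0.20), ("CR", 0.40), ("PA", 0.20), ("FL", 0.10)]

def grade_on_curve(raw_marks):
    """Assign grades by rank so every class gets the same grade distribution."""
    raw = np.asarray(raw_marks, dtype=float)
    order = np.argsort(-raw)                      # best raw mark first
    counts = [int(round(s * len(raw))) for _, s in GRADE_SHARES]
    counts[-1] = len(raw) - sum(counts[:-1])      # absorb rounding in the last band
    grades = np.empty(len(raw), dtype=object)
    start = 0
    for (grade, _), n in zip(GRADE_SHARES, counts):
        grades[order[start:start + n]] = grade
        start += n
    return grades

# A uniformly weak class still receives 10% HDs and only 10% fails;
# a uniformly strong class would likewise still have 10% fails.
print(grade_on_curve([30, 32, 35, 38, 40, 41, 43, 45, 47, 49]))
```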

So if your result is generated by marking to the curve, then I don’t really think it says anything about international students at all, except that on average they are a bit weaker than domestic students, a result that is no surprise. Tutes with high proportions of international students just happen to benefit from the marking to the curve in the same way a tute with a high proportion of weak domestic students would, and as has been pointed out, such tutes happen.

On the other hand, relaxed marking standards are very different: they mean that we give a mark we feel the student doesn’t actually deserve, for some external reason, with an obvious external reason being monetary for the university. This is far more serious, as it represents a real erosion of the quality of the educational system at university. I really think the terminology you are using is confusing, and if you really mean relaxed standards and you feel your data supports that conclusion, then you should call a spade a spade :-)

In terms of the data availability, the argument goes, the National Statement on Ethical Conduct in Human Research talks extensively about consent and to the relevant ethics committees, the students who provided the personal data (i.e. UNISA and UTS students) did not give consent and therefore you can’t use their data :-(. It’s a massive leap from a statement which was primarily developed for medical research and ignores several qualifications on the whole idea of consent, but there are many hard-liners on ethics committees, I know I’ve argued with them about this many times. It makes doing any research on your actual teaching either very difficult or much less valid.

Martin
13 years ago

Laurence,

highly unlikely that the data would be available; I’m surprised that Gigi was able to get it.

cheers

Martin

conrad
13 years ago

“In terms of the data availability, the argument goes, the National Statement on Ethical Conduct in Human Research talks extensively about consent and to the relevant ethics committees, the students who provided the personal data (i.e. UNISA and UTS students) did not give consent and therefore you can’t use their data…there are many hard-liners on ethics committees”

I must admit, whilst I’m personally not fussed about this data set being used (mainly because it is so large), this would never have got through my university ethics. If it did, and if someone complained to my governing body about it (or even if they found out), we would potentially be in for serious trouble (be glad it went through economics!). Even if someone complained to the university that information about them was put into the public domain without consent (even in the form of means or distributions), I imagine there still could be problems — whoever gave the ethics permission would be in for trouble and no doubt the university lawyers would be working overtime to make sure nothing else happened.

Paul Frijters
13 years ago

Martin, Conrad,

I am a bit confused as to where you are going with your privacy arguments. No individual students are named, the ethics processes have been followed, and the type of analysis done on their data is entirely of a population type, just as in 90% of applied work in economics. Are you seriously suggesting that we shouldn’t publish ‘averages of student information’, or that any sensible ethics committee should block it? Like the number of students at universities, the pass rate in different courses, the number of international students at different unis, and a hundred and one other aggregate numbers brought into the public domain by unis? Or do you mean any analysis that might be embarrassing to particular organisations?

Ethics should not become a fig-leaf for killing unwelcome research.

conrad
13 years ago

“Are you seriously suggesting that we shouldn’t publish ‘averages of student information’, or that any sensible ethics committee should block it?”

I’m not suggesting that (did I mention the word sensible?) — I’m just pointing out what is typical of ethics requirements in many universities (including my own — and my guys are hard and pedantic, but not nearly the worst I’ve heard of for ludicrous blocking of research), and hence what you may have to deal with one day (sooner rather than later, I’d bet). In general, either deception (even if entirely harmless) or using data without consent is exceptionally hard to get approved, and that includes if you are only going to publish means. For example, it violates the NH&MRC guidelines, which mean that at the very least you need special permission (i.e., you have to justify harm vs. gain). On this note, I was surprised at some of the studies that Andrew Leigh got away with before he became a politician, so I would think that your university is currently on the easy end of consent. (I don’t doubt many good studies never get run because of this, incidentally.)

It’s easy to see why. For example, if somebody who was in this study complained that they didn’t know their data was being used and wouldn’t have allowed it had they known, I imagine the university would be in trouble, since standard guidelines of outside ethics organizations have been broken. They could also complain that this study damaged their career prospects etc. (it’s not like frivolous complaints are unknown), which would complicate things even further. I imagine the best you would get away with in this case is a few committees of high-ranking people having to make some judgment about it and an apology letter.

The other problem is that once you allow this sort of thing without consent, your ethics committee would be obliged to allow anyone else to run studies like this (broadly defined — i.e., using large data sets obtained without consent), otherwise it would be workplace bullying (i.e., one person is allowed to do something but another isn’t, for no reason). It seems to me that it is unlikely any university is going to want to expose itself to these types of problems time and time again.

Martin
13 years ago

Paul,

not my argument at all, but it is the argument I and others face all the time, and sadly it has considerably reduced the amount of research I do in this area, as it’s hard to get any research of this kind past our ethics committee. To give you an example, one of my PhD students, who happens to be on staff in our school, is looking at students’ perceptions of and attitudes to a particular aspect of our curriculum (an aspect which students have particular difficulties with). She’s planning to mix qualitative and quantitative methods, and the initial work is a series of interviews with individual students. The initial proposal from the committee was that this staff member commit to *never* teaching any of the students that she interviewed ever again. We managed, with some effort, to get that changed to: any of the interviewed students can request that someone other than this particular staff member mark their work where she is teaching them.

Another PhD student I know who was conducting an anonymous survey with businesses about an aspect of their business was initially instructed that she should phone each of the businesses to ascertain whether they would permit her to send the survey. Another colleague who was conducting research on a classroom intervention to determine its effect was told that she had to get permission from all students individually to use their results in the class in her analysis and if she didn’t have their permission, she could not use that data in any way. Even down to the fact that she was not to look at the demographics of the non-responders and determine if the non-responders were different as a group to the responders.

In my field there has been a multi-national, multi-university study ongoing over the last five years to analyse student responses to certain types of exam questions, to try to determine which types of questions actually assess the relevant knowledge best and what common misconceptions students have about these areas of the syllabus. I have the data sitting in my room, and I have the relevant demographic data sitting in the student database. I’m not allowed to use it, as I didn’t get permission from the students, and when I discussed whether I could do this from now on, it was thought that because I’m in a position of power relative to the students, it wouldn’t be ethical for me to do so; I’m going to have to wait until I’m no longer teaching the relevant subject. Our ethics committee also considers it unethical to get permission after the event.

So, you can see why I was surprised that Gigi’s research got through the ethics committees of the three universities. Personally I congratulate the universities involved, it will only benefit them, their students and the rest of the university community in the long run to allow this sort of research, but try telling many of the people on ethics committees that!

My solution is that on enrolment, students give the university permission to use their educational data in a non-identifying way in research, i.e. marks, assignments, exams etc. with analysis at the group level being fine.

cheers

Martin

Gigi Foster
13 years ago

All,

Martin claims above that “the students who provided the personal data (i.e. UNISA and UTS students) did not give consent and therefore you can’t use their data.” This is an issue that i addressed very early on in this project, knowing that universities in Australia were/are unused to making large-N dumps of their student data for research purposes. Let me quote directly from a letter i wrote to the UTS Ethics committee in January 2008, having already obtained the approval of the UniSA Ethics committee to undertake the project:

“Finally, the committee may note that while data are to be extracted that cover the entire cohort of enrolled students in the Business Faculty, permission to extract and match these data to students’ survey responses is only requested from those students who elect to respond to the survey. In my experience both at and at overseas institutions, it is common practice to treat the institution holding vast quantities of individual administrative records as the party responsible for granting permission to use those records for approved research purposes. This protocol is followed in the education sector (at the tertiary and lower levels), the health sector, and the government sector, and without it many critical research projects would not be possible. This is partly because once a threshold number of individuals is reached, it is no longer feasible to request separate permission from each individual and stay within a reasonable research budget.”

This protocol was approved by both participating institutions through their formal Ethics Committee channels. The confidentiality of student records is my primary ethics-related concern, as reflected throughout the many Ethics documents associated with this project over the years. Moreover, because I am using data from two institutions in this study and others, it is possible for either institution to claim that the results are driven by the other institution: neither is directly isolated. This is by design.

Can i let others see the data? Let me quote my original ARC application:
“Subject to institutional permissions, the longitudinal data set amassed during the course of this project will be fully blinded and made available for use through the CI for other economics and education researchers investigating peer effects and other research questions in higher education.” In other words, if the institutions involved were to agree, then i would be delighted to share the blinded data. I haven’t asked them, but if you are interested in the data, please feel free to approach them for permission.

As a final note to add some perspective, let me add that educational institutions overseas have a record of making their student-level data available to researchers in economics. Public school systems in New York City, North Carolina, Texas, and Florida have all made blinded student-level data available to researchers who perform objective academic research in order to help both the school systems and their own academic careers. This is a win-win situation: the schools receive policy-relevant advice that then feeds through to better outcomes for students, and the academics involved fulfill their intellectual interests and their professional mandate to publish. Several overseas universities have also provided blinded student-level data to academic researchers. The University of Maryland, Berea College, Dartmouth College, and Williams College have all provided such data to economists doing educational research for the good of the institution and the progress of our understanding of educational phenomena. These phenomena range from the persistence of disadvantage to the effects of peers, race, and gender on university performance. It is industry-standard practice for the institutions providing the data to be identified while the confidentiality of individual students’ records is maintained. Once again, this is a win-win situation for all parties. I want to congratulate UniSA and UTS yet again for modelling international best-practice in this regard.

conrad
13 years ago

Gigi,

your ethics committee is still giving you permission in breach of NH&MRC guidelines (which is good, because any legal responsibility is now theirs!). For example, if you read the section on when you can use data without consent, there is 2.3.2: “there is no known or likely reason for thinking that participants would not have consented if they had been fully aware of what the research involved.” I also personally don’t see why, as Martin pointed out, you simply couldn’t have got the students to tick a box, in which case you could have just deleted those that didn’t want to participate; so it’s clear there was an obvious alternative to using data without consent.

It’s worth noting that how important consent is differs depending on the country you are in, so your examples about what happens in the rest of the world (mainly the US) are not especially relevant. One of the reasons that animal research has had its day in the UK, for example, is because you can’t get animals to consent — so now it’s possible to stick an electric pulse through a human brain but not a monkey’s. Alternatively, there are some Nordic countries where you can get quite personal details about people without consent that you could never get here (it’s where some of the best social science data sets come from).

Martin
13 years ago

Gigi,

I don’t actually disagree – I don’t think there are really any ethical issues with using the data as you have – but that is not the position taken by many Human Ethics Committees around Australia. I can tell you that your explanation would not cut any ice at the two universities I’ve worked at, and you would not have been allowed to use the data, which is a real problem for educational research.

I don’t understand one aspect of what you’ve indicated in your letter to the UTS HEC:

permission to extract and match these data to students’ survey responses is only requested from those students who elect to respond to the survey

Does this mean that you actually surveyed the individual students for permission? In the paper you state:

To create the data set, information from the enrolment systems of each institution was merged with data from students’ applications to university, resulting in a final data set that includes detailed demographics (such as age, gender, and other observable characteristics, including international student status and whether the student speaks English in the home) as well as detailed information about which courses and tutorials each student took in each covered semester, and what final percentage marks were achieved in each.

So, how many of the 12,000-plus students responded to the survey, and which survey are you talking about? And what percentage of the population of business students was included? I had the impression that it was 100%, but now I’m confused.

On the second point, I’m even happier that the blinded data is potentially available to other researchers; that’s great.

cheers

Martin

Martin
13 years ago

Conrad,

my suggestion is that it be a condition of enrolment to agree to have blinded panel data used in educational research. If you want to be fuzzier about it, you could put the tickbox on the enrolment form/screen and allow them to opt out, but that was clearly not an option for Gigi: getting the permission of 12,000-plus students to access their enrolment/results data is a nightmare unless it is built into the enrolment system. As well, once you allow students to opt out, it’s clear that those opting out won’t be a random selection of the students, and this will immediately introduce significant selection bias into the sample.

cheers

Martin

conrad
13 years ago

“my suggestion is that it is a condition of enrolment”

That would be considered coercion so it would never (and shouldn’t) pass.