Extrinsic incentives in schools

Unintended Consequences of Rewards for Student Attendance: Results from a Field Experiment in Indian Classrooms
by Sujata Visaria, Rajeev Dehejia, Melody M. Chao, Anirban Mukhopadhyay – #22528 (CH DEV ED)

In an experiment in non-formal schools in Indian slums, a reward
scheme for attending a target number of school days increased average
attendance when the scheme was in place, but had heterogeneous
effects after it was removed. Among students with high baseline
attendance, the incentive had no effect on attendance after it was
discontinued, and test scores were unaffected. Among students with
low baseline attendance, the incentive lowered post-incentive
attendance, and test scores decreased. For these students, the
incentive was also associated with lower interest in school material
and lower optimism and confidence about their ability. This suggests
incentives might have unintended long-term consequences for the very
students they are designed to help the most.

This entry was posted in Economics and public policy, Education. Bookmark the permalink.
2 Comments
paul frijters
5 years ago

Certainly an interesting headline, but when I read the paper, it failed to convince me.

Overall, they find no effects of the scheme, which consisted only of rewarding kids for attending primary school with 2 pencils and an eraser — which even in Indian slums is pretty much nothing (I doubt they could sell it on!). When they disaggregate, they find that the kids who were absent before attendance started being measured for the reward were the ones with the subsequent drop in attendance and test scores, but that group was small (a little over 100 kids), whilst there is a slightly opposite effect for the bigger group of kids who were present when measurement started. It is hard to know what to make of this, particularly as there is no good reason to expect a drop in scores from such a scheme 3 months afterwards. That suggests we are looking at either an accident or a selection effect (out of the non-attendees, only the worst performers remain).

Overall, you just have to say they found no marked effects of a very small intervention. Only by slicing their data into enough smaller subgroups can they find a small group where things look negative. You need both a bigger sample and a clearer idea beforehand of what you are going to look for.