"My Teacher Liked My Essay: The Algorithm Didn't"
by Leslie Wilson
For years, teachers of writing and English have bemoaned the time-intensive task of evaluating student writing. The job is huge: it involves assessing and providing feedback on every aspect of a written piece, including grammar, syntax, spelling, content, the placement of adjectives and adverbs; the list goes on. The gap between submitting a final paper and receiving feedback and a grade is long. Students could do a great deal more writing during that interval, but it would be done without the benefit of knowing how they fared on their last piece.
Barbara Chow of the Hewlett Foundation discussed the findings of a Hewlett-sponsored study of the automated essay scoring engines now commercially available (New York Times, Sunday, June 10, 2012). The researchers found that the machine-generated scores were virtually identical to those of human graders. The thought of machines scoring student writing makes me uneasy. State end-of-year writing assessments are typically evaluated by two people, and the machine is set to replace only one of those two humans. Still, Chow notes that people are not necessarily the best reviewers; for one thing, they spend an average of only three minutes per essay.
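How do researchers decide that machine scores are "virtually identical" to human ones? Studies like this typically report an inter-rater agreement statistic. Below is a minimal Python sketch of quadratic weighted kappa, the agreement measure used in the Hewlett-sponsored scoring competition; the human and machine scores in the example are invented for illustration.

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, min_score, max_score):
    """Agreement between two sets of scores, weighted so that large
    disagreements (a 1 vs. a 6) cost more than near-misses (a 3 vs. a 4).
    Returns 1.0 for perfect agreement, about 0.0 for chance-level."""
    n = max_score - min_score + 1
    # Confusion matrix of observed score pairs, normalized to proportions
    observed = np.zeros((n, n))
    for a, b in zip(rater_a, rater_b):
        observed[a - min_score, b - min_score] += 1
    observed /= observed.sum()
    # Expected matrix if the two raters had scored independently
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    # Quadratic penalty grows with the distance between the two scores
    weights = np.array([[(i - j) ** 2 for j in range(n)] for i in range(n)],
                       dtype=float) / (n - 1) ** 2
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

# Hypothetical scores from a human rater and a scoring engine, 1-6 scale
human = [4, 3, 5, 2, 4, 6, 3, 4]
machine = [4, 3, 5, 3, 4, 6, 3, 5]
print(round(quadratic_weighted_kappa(human, machine, 1, 6), 3))
```

A kappa near 1.0 means the engine disagrees with a human grader about as rarely, and as mildly, as a second human would.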
Not only does the software provide an efficient and accurate scoring process, it has also reignited states' interest in end-of-year writing assessments. Before this development, more and more states were abandoning this important student learning artifact for staffing and budget reasons. According to Mark D. Shermis, dean of the college of education at the University of Akron, Ohio, the current $2 to $3 cost of grading each essay could theoretically be eliminated by the automated system. And as the automation becomes more sophisticated, it could be used at the classroom level, expanding students' opportunities for writing assignments.
An automated scoring system could mark up a document and identify its problems. It could also provide full explanations and practice sessions that remediate those problems, and then assess those as well. Ideally, the teacher would continue to assess the content and other crucial goals of the writing, still providing that rich feedback to the student.
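To make concrete what classroom-level markup might look like, here is a deliberately minimal Python sketch. It is not any vendor's actual engine; it simply flags a few mechanical issues with line numbers, the kind of per-problem feedback a student could act on while waiting for the teacher's deeper review of content.

```python
import re

def markup_essay(text):
    """Toy feedback pass: flags a few mechanical issues with line numbers.
    Real engines model syntax, organization, and content far more deeply."""
    issues = []
    for num, line in enumerate(text.splitlines(), start=1):
        # Repeated words, e.g. "the the"
        for match in re.finditer(r"\b(\w+)\s+\1\b", line, flags=re.IGNORECASE):
            issues.append((num, f"repeated word: '{match.group(1)}'"))
        # Very long sentences are a common readability flag
        for sentence in re.split(r"[.!?]", line):
            if len(sentence.split()) > 40:
                issues.append((num, "very long sentence; consider splitting"))
        # Lowercase letter starting a sentence after a period
        if re.search(r"\.\s+[a-z]", line):
            issues.append((num, "sentence may not start with a capital letter"))
    return issues

essay = "The essay improved a lot. the the conclusion was strong."
for line_no, message in markup_essay(essay):
    print(f"line {line_no}: {message}")
```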
Jason Tigg, a London-based member of the team that developed the automated essay-grading system, said that their software uses small data sets and runs on ordinary PCs. Hence the additional infrastructure costs for schools would be essentially nonexistent.
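Tigg's claim is plausible because an essay-scoring engine can be as lightweight as regression over text features. The sketch below, a bag-of-words ridge regression built with scikit-learn (my choice of tooling, not necessarily the team's), trains in moments on a commodity PC. The essays and scores are invented; a real prompt-specific training set would hold a few hundred human-scored essays, which still fits comfortably on a desktop machine.

```python
# A minimal essay-scoring pipeline: TF-IDF text features plus ridge regression.
# Assumes scikit-learn is installed; the training data here is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

train_essays = [
    "The experiment demonstrates a clear causal relationship ...",
    "I think it was good because it was good and stuff ...",
    "The author develops the argument with specific evidence ...",
    "it was about a thing that happened ...",
]
train_scores = [5, 2, 6, 1]  # human-assigned scores on a 1-6 scale

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(train_essays, train_scores)

new_essay = "The evidence presented supports a clear and specific argument ..."
print(f"predicted score: {model.predict([new_essay])[0]:.1f}")
```

The design point is that nothing here requires server farms: the features are word counts, the model is a linear fit, and the whole pipeline runs on the same PC a teacher already has.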
Three points jump out at me from this development. They dovetail with the revenue-positive findings of our Project RED work.
1) Using technology for productive, efficient, and accurate feedback to learners saves time, money, and human effort on this task.
2) As the software evolves and costs fall, more schools will use it, and students will do dramatically more writing across all content areas.
3) Teachers will have more time to focus on learners' higher-order skills and deeper knowledge development.
This is a good example of how technology can facilitate the changing role of teachers, for the better, with a closer focus on students’ skill and knowledge acquisition.
Leslie Wilson, CEO-One-to-One Institute, www.one-to-oneinstitute.org
Co-author: Project RED www.projectred.org
Co-author: A Guidebook for Change