Quality Over Quantity: Computer-Assisted Grading Revisited

A Reuters report describes recent efforts to create computer software that could scan and grade common errors in student essays.  Mark Shermis, Dean of the College of Education at the University of Akron, is supervising a contest created by the William and Flora Hewlett Foundation that would award $100,000 to the programmer who creates effective automated grading software.

Shermis argues that if teachers weren’t swamped by so many student papers in need of grading, they would assign more writing and students would greatly improve their written communication skills.  He sees this new technology as an aid to the overworked writing teacher rather than a potential replacement.

Steve Graham, a professor at Vanderbilt who has conducted research on essay grading techniques, argues, in contrast, that the replacement of writing teachers by grading software is not only “inevitable” but also desirable, as “the reality is humans aren’t very good at doing this.”

As the writer of the Reuters article notes, talk about paper-grading software is not new.  It began in the 1960s.  Now, however, technology has reached a level where such grading is not only possible but also probable.  But the question still remains:  Is it a good idea?

Leaving aside for a moment the question of faculty employment, machine grading sidesteps a question more important than how to get students to write more and how to grade that writing effectively.  Namely: what is writing, and who is responsible for teaching it?

In too many schools writing is viewed as the “problem” of the English department.  Students are sent to writing classes to learn essay structure, research techniques, and grammar.  Only the last of these is universal; the other two skill sets are discipline-specific.  I guess that explains why, to my students, everything they read is a novel and every paper a literary analysis.  They’ve been taught, after all, that writing equals English.

If we really want students to learn not just writing but effective communication, parents, teachers, and administrators need to spread the responsibility for this instruction across the curriculum.  Some schools already do this, but most are content to leave communication training to literary scholars.  Machines won’t change this.  They will be programmed to evaluate whatever curriculum is currently in place.  Until the curriculum is changed, the machine will not only replicate the error but multiply it.

Moving on to the issue of employment, part of my unease with a machine that grades papers is that it would most likely put me out of a job.  I have 48 student essays in need of grading staring at me right now as I pen this post.  Of course, the curricular changes I suggest would more than likely have the same effect, with or without machine assistance.  The way to counter this, however, is to lower class sizes.

This is the other aspect of the issue that is completely ignored by most research.  If class sizes are reduced, not only will more teachers have employment, but writing will become a less onerous task to teach and evaluate.  It could also then be meaningfully integrated into the entire curriculum and not remain the purview of the English department.

Would such changes cost a lot of money?  Yes.  But it is a good investment.  Far better than the money we’ve wasted in Iraq and Afghanistan and the even larger sums of money we spend incarcerating drug offenders.  It’s even better, dare I say, than the cost of a certain software currently being designed to solve all my problems.


  1. #1 by VanessaVaile on April 3, 2012 - 2:03 pm

    And then there is composing software, human free writing, no doubt to be read and assessed by grading software…

  2. #2 by VanessaVaile on April 3, 2012 - 1:12 pm

    TOEFL has been using machine grading for the essay section of the test for some years now. As I recall from articles read at the time (around 2003-4 or so), ETS had hoped to use it for SAT writing but met with strong resistance. About the same time, a text-analysis software program that machine grading is based on was still in beta and publicly available. I tried it in a few classes. We were not impressed. A few found that the programs helped them self-monitor for revision ~ but not as much as going all Elbow.

    Alas, I doubt higher ed writing instruction will avoid the embrace of the machine indefinitely. As with other developments, the bottom-line temptation will be too strong. Back in the mid-90s at UC Davis, I advised a grad cohort that one day the mark of an elite education would be having been taught by humans. They thought I was joking.

  1. “Ethel…I think we’re fighting a losing game.” « More or Less Bunk