Preparing for the singularity: How machine grading challenges our humanity

Imagine the following scenario: you submit a 10-page essay to Blackboard at midnight, and by 12:01 a.m. you have your grade back, complete with commentary. No human reader could evaluate an essay at this pace, but robot readers certainly can, and without tiring. Programs like this already exist and are being employed by real educators. Take Pearson Education Inc.’s Intelligent Essay Assessor, for example, which its designers claim “automatically evaluates the meaning of text, not just grammar, style and mechanics.”

There’s research to back up these claims, too. In 2012, researchers at the University of Akron conducted a study comparing grades assigned by human readers and by robotic readers across more than 16,000 student essays. What did they find? That the robots graded essays as well as, if not better than, the human graders.
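
How do you measure “as well as, if not better than”? Studies like this one typically report an agreement statistic between the human and machine scores. Below is a minimal sketch of one common statistic, quadratic weighted kappa, run on invented scores; the data, the 1–6 scale, and the function name are illustrative assumptions, not details from the Akron study.

```python
# Sketch: measuring human-machine agreement with quadratic weighted kappa.
# The scores below are made up for illustration; they are not the study's data.
from collections import Counter

def quadratic_weighted_kappa(human, machine, min_s=1, max_s=6):
    n_ratings = max_s - min_s + 1
    n = len(human)
    # Observed matrix of (human score, machine score) pairs
    observed = [[0.0] * n_ratings for _ in range(n_ratings)]
    for h, m in zip(human, machine):
        observed[h - min_s][m - min_s] += 1
    # Expected matrix built from the two marginal score distributions
    hist_h, hist_m = Counter(human), Counter(machine)
    expected = [[hist_h[i + min_s] * hist_m[j + min_s] / n
                 for j in range(n_ratings)] for i in range(n_ratings)]
    # Quadratic penalty: the further apart two scores are, the more they count
    num = den = 0.0
    for i in range(n_ratings):
        for j in range(n_ratings):
            w = (i - j) ** 2 / (n_ratings - 1) ** 2
            num += w * observed[i][j]
            den += w * expected[i][j]
    return 1 - num / den

human_scores   = [4, 3, 5, 2, 4, 6, 3, 5]  # invented for illustration
machine_scores = [4, 3, 4, 2, 5, 6, 3, 5]
print(round(quadratic_weighted_kappa(human_scores, machine_scores), 3))  # ~0.917
```

A kappa near 1 means the machine’s scores track the humans’ almost exactly, which is the kind of result that lets researchers claim parity.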

Researchers call this innovation “fast, accurate, and cost-effective,” claiming that machine grading allows teachers to spend less time poring over stacks of essays and more time engaging with students. For testing organizations, this technology offers tempting possibilities for cheap and efficient mass turnaround. But many writing educators aren’t sold on this argument for efficiency. I attended a writing conference this weekend where the keynote speaker, Dr. Chris Anson of North Carolina State University, made a case against machine grading. Written texts are made to interact with readers, he argued, and without a human response, we lose touch with that purpose.

Anson argues that these robotic readers are programmed with algorithms that seek out big words and complex sentence structure. During his presentation, he showed the audience an utterly nonsensical essay that received a 6/6 from a machine grader simply by using words like “efficacious” and “myriad.”
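
To see how easy such a system is to game, consider a deliberately naive scorer that rewards only surface features: long words, long sentences, and a sprinkling of impressive vocabulary. This is a hypothetical caricature sketched for illustration, not Pearson’s or any vendor’s actual algorithm, but it shows why a nonsense sentence stuffed with words like “efficacious” can earn a top mark.

```python
# A deliberately naive essay scorer that looks only at surface features.
# Hypothetical caricature for illustration; not any real grading product.
import re

FANCY_WORDS = {"efficacious", "myriad", "plethora", "paradigm", "juxtapose"}

def naive_score(essay: str, max_score: int = 6) -> int:
    words = re.findall(r"[a-zA-Z']+", essay.lower())
    if not words:
        return 0
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]

    avg_word_len = sum(len(w) for w in words) / len(words)          # "big words"
    avg_sent_len = len(words) / max(len(sentences), 1)              # "complex sentences"
    fancy_ratio = sum(w in FANCY_WORDS for w in words) / len(words) # buzzword density

    # Weighted sum of surface features; nothing here checks whether the
    # essay is accurate, logical, or even coherent.
    raw = 0.5 * avg_word_len + 0.15 * avg_sent_len + 40 * fancy_ratio
    return min(max_score, round(raw))

nonsense = ("The efficacious paradigm of myriad juxtaposed considerations "
            "irrefutably substantiates the multifarious ramifications thereof.")
print(naive_score(nonsense))  # prints 6, despite the essay saying nothing
```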

Seemingly, computers are incapable of assessing the stuff we should really care about: accuracy of evidence, logical reasoning, clarity of ideas, clear organization, emotional appeal, ethics, and originality of the idea expressed. Good grammar does not guarantee original ideas, the fodder for true intellectualism.

Once they’re able to do all of these things as, well, humanly as a human, I imagine we’ll have another set of problems on our hands.

Many disciplines do seem to lend themselves well to machine grading. Take math, for example: students are asked to think critically, but ultimately they are expected to come to an objective conclusion. If you know the answer to a question in such a field, you can feel a tangible rightness, released cathartically when you darken the scantron bubble with graphite.

Writing, on the other hand, draws upon our subjectivity, asking us to engage in creative problem-solving, to provide a fresh perspective on a topic. Writing asks us to use our supremely human faculties: interpretation, analysis, innovation, audience understanding, and art. As a writing major myself, I’ll admit that I have a personal investment in this issue. But I also believe that there are larger and far more ominous implications to passing writing education into the hands of robots. Are not these subjective skills the very things that make us human? To me, the very suggestion of forking our subjectivity over to robot graders is offensive not only to our students, but also to our species. Marc Bousquet of the Chronicle of Higher Education argues that it is not machine grading which lies at the heart of this issue, but our style of education. “The fact is,” he says, “machines can reproduce human essay-grading so well because human essay-grading practices are already mechanical.”

If any of you remember writing Core Democratic Values essays in high school history, you know what I’m talking about. We ask students to write these bullshit, five-paragraph essays that do not promote critical thinking, but rather the ability to regurgitate what is expected. Bousquet argues that this has created a generation of students who “have no flipping idea of the purpose of academic and professional writing, which is generally to make a modest original contribution to a long-running, complicated conversation.”

If Bousquet is right, then robots are not at the heart of the issue, but rather our cultural and educational emphasis on rigid, mechanical thinking. If this is true and this trend continues, we may not have any reason to fear the singularity. We will have turned into robots ourselves.

If you would like more information or would like to sign a petition against machine grading, visit www.humanreaders.org.