Effective instructor? Agree or disagree

Carly Simpson

The Universal Student Evaluation of Teaching Implementation (USETI) task force, charged with researching student evaluations at Grand Valley State University, has begun to share its findings with faculty.

USETI has several concerns with student ratings at GVSU. First, 25 different instruments are used across the university. Most of these evaluations are homemade and have not been tested for validity or reliability.

According to USETI, a professor at GVSU received the comment, “She was cool. I’d like to set her up with my dad.”

Another received, “The entire structure of this class is messed up! The 10 minute quizzes were bull***t. She taught this class like it was a SWS class but it’s not SWS. Therefore she needs to change that. This final exam is bull***t. We have never been tested on the material yet.”

“Sometimes the comments are useful and they help me improve,” said mathematics professor Edward Aboufadel, chair of USETI. “Other times, the comments make you feel bad but aren’t helpful. Of course, as faculty we also get a lot of positive comments.”

USETI is also concerned that student ratings, which are subjective, are often treated as an objective measure.

“We have a merit evaluation system, and every year, each faculty member receives an overall rating based on teaching, scholarship and service,” Aboufadel said. “Student evaluation reports factor into the overall evaluation, and it is difficult for someone with below average evaluations to get the highest overall rating, exemplary.”

Student evaluations also factor into contract renewal, tenure and promotion decisions for faculty.

“I believe that the evaluation of faculty should take into account the opinions of all who interact or are affected by a professor, which includes students, other faculty, administrators and staff,” Aboufadel said. “I also count on faculty to read student evaluations carefully and with the proper context.”

Student ratings of teaching are one of the most studied aspects of college teaching, with more than 3,000 citations on the subject. While going through this research, the task force found several factors that are associated with lower numerical ratings of professors. These include:

  • Teachers of math and science courses are rated lower than teachers of humanities courses.
  • Teachers of lower-level courses are rated lower than teachers of higher-level courses.
  • Teachers of required classes are rated lower than teachers of elective classes.

The task force also listed several factors that are not strongly related to ratings, including the gender of the student or instructor and the time of day the class is taught. The group also found that assigning higher grades doesn’t result in better evaluations. In fact, course difficulty has a positive relationship with ratings, which Aboufadel said came as a surprise to him.

GVSU isn’t the only public university that lacks consistency when it comes to student evaluations, though.

“Most (public universities) do not have a universal form used by all faculty,” Aboufadel said. “The ones that do have created ‘home-brew’ forms that haven’t been studied for validity or reliability.”

Michigan State University has a common evaluation form; however, it allows departments to use other forms. MSU also sequesters grades until students complete the evaluations.

USETI was created in April 2014 by the Executive Committee of the Senate. It was charged with three tasks: to find a common student evaluation form, to recommend guidelines concerning the use of the form and to provide an implementation procedure. USETI’s findings are due to the ECS in December 2014.

For more information regarding the work of the task force, visit gvsu.edu/useti.