Gender, Teacher Evaluations, and Key Words Part 2: Understanding Omitted Variables

An example of a performance evaluation form used by students to rate their courses and instructors.

In my prior post, Gender, Professor Evaluations, and Key Word Descriptions - Part I, I described an interesting website that can be used to show how male and female professors differ in their evaluations for any key word across more than two dozen disciplines. That post ended with this cliffhanger question: Can we interpret all this information to tell us that male professors are more boring, but female professors are more frequently incompetent and beautiful?

When we attempt to understand differences along a key dimension (such as whether a professor is rated as "boring" more often depending on gender), we need to consider whether the relationship we are observing captures the whole story, or whether some other confounding factor could explain what we are observing. In this example, to determine whether students rate professors differently depending on whether they are male or female, we need to make sure we are comparing professors who are identical along all other dimensions. Among economists and statisticians, drawing a conclusion about the relationship between two factors while ignoring a key alternative factor is known as omitted variable bias.

It won't surprise you that a range of factors could go into a student's subjective evaluation of a professor. Just off the top of my head, I came up with a few examples:

  • How difficult was the class and subject material?
  • Did the student get an A in the class?
  • How much homework did the student receive?
  • Was the class at a time that college students might not like (such as an 8 AM Friday class)?
  • How accessible was the professor?
  • Was the class an introductory course, a course required for a major, or an elective class?

Now, the fact that a number of factors could ultimately determine student ratings is still not enough to draw a conclusion. The important question is whether any of these "other factors" vary systematically by gender. Take an extreme example: assume that in the Math department, female professors were always assigned to teach the much harder required calculus class, whereas male professors were assigned to teach the very popular statistics elective. If we observed significantly more negative ratings for female professors, that could be driven simply by the fact that female professors were disproportionately teaching the difficult, less popular class that students were more prone to rate negatively in end-of-course evaluations.

To be clear, this simple example does not rule out that students rate professors differently depending on gender--it just means that, from a statistical perspective, we can't draw that conclusion from these simple observed differences alone. The short simulation below makes the point concrete.
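Here is a minimal simulation sketch in Python of the extreme example above. Everything in it is hypothetical: the assignment probabilities, the rating scale, and the size of the difficulty penalty are made-up numbers chosen only to illustrate the mechanism. By construction, ratings depend solely on course difficulty and not at all on gender.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical population: 1 = female professor, 0 = male professor
female = rng.integers(0, 2, n)

# The omitted variable: female professors are disproportionately
# assigned the harder required course (70% vs. 30%, made-up numbers)
hard_course = rng.random(n) < np.where(female == 1, 0.7, 0.3)

# By construction, ratings depend ONLY on course difficulty, not gender
rating = 4.0 - 0.8 * hard_course + rng.normal(0, 0.5, n)

# Naive comparison: looks like a gender gap in ratings...
naive_gap = rating[female == 1].mean() - rating[female == 0].mean()
print(f"naive female-male gap: {naive_gap:.2f}")  # roughly -0.32

# ...but comparing within course type, the gap essentially vanishes
for hard in (False, True):
    mask = hard_course == hard
    gap = rating[mask & (female == 1)].mean() - rating[mask & (female == 0)].mean()
    print(f"gap within {'hard' if hard else 'easy'} course: {gap:.2f}")  # ~0.00
```

The naive comparison shows a sizable gap only because gender and course difficulty are correlated; comparing within course type (the last loop) makes the gap essentially disappear, which is exactly what "controlling for" the omitted variable accomplishes in a regression.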

And, as an aside, for an interesting view on how effective student evaluations are at assessing professor performance, I point you to this blog post from Berkeley Statistics Professor Philip Stark.

 

Gender, Professor Evaluations, and Key Word Descriptions - Part I

As a former professor who used to get end-of-semester evaluations, and a labor economist who studies gender differences in the labor market, I found this article from the New York Times Upshot (@upshotNYT) blog fascinating: Is the Professor Bossy or Brilliant? Much Depends on Gender

So I looked up the original website from Professor Ben Schmidt (@benmschmidt), which allows the user to enter any word and see how frequently it appears in Rate My Professors reviews for male and female professors across more than two dozen disciplines.

For example, I entered the word "boring" into Professor Schmidt's website and got the following graph, which shows that in most disciplines, male professors were described as boring more often than female professors.

Then I tried something slightly different--"incompetent." In Engineering, female professors were described as "incompetent" slightly more than 18 times per million words of text, whereas male professors were described that way only about 12 times per million words.
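As an aside on the metric itself: a rate "per million words" is just a normalized count, which makes rates comparable across corpora of different sizes. Here is a hypothetical sketch in Python; the counts are made up for illustration, not taken from Schmidt's data.

```python
def per_million_words(count: int, total_words: int) -> float:
    """Normalize a raw word count to occurrences per million words."""
    return count / total_words * 1_000_000

# Hypothetical numbers for illustration only (not Schmidt's actual counts):
# the word appears 90 times in a 5-million-word corpus of reviews.
print(per_million_words(90, 5_000_000))  # 18.0
```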

Finally, I tried a word that I did not expect would come up in teacher evaluations--"beautiful." Female professors were described that way far more frequently than male professors in every academic discipline studied.

I am not the only person fascinated by this exercise--there have been many interesting and thought-provoking articles and blog posts on this topic. A sampling:

This article from Slate about a North Carolina State study in which a set of identical online classes was taught by the same professor, with one group of students told the professor was male and the other told the professor was female. According to the study, the class with the "female professor" was consistently rated lower.

This article from the blog Inequality by Interior Design (@tristanbphd), which provides a thoughtful perspective on gender differences in teaching evaluations based on sociological research and the blog writer's own experiences.

The Washington Post's Monkey Cage blog ran a five-part symposium on Gender Wage Gaps; this article on gender differences in professor teaching evaluations is of particular interest.

In Part 2 of this blog, I'll explain some of the statistical challenges in isolating the effects of gender on teaching evaluations.  Can we interpret all this information to tell us that male professors are more boring, but female professors are more frequently incompetent and beautiful?   Not quite...more on this soon.

Data Visualization, Legos, and the Super Bowl

This story from the American Journalism Review describes a creative way that journalism professors are teaching students how to think about data visualization using Legos.

Update

One of the most striking illustrations I have seen so far on the use of Legos comes from the Count Bayesie blog, in which Lego blocks are used to illustrate the rationale behind Bayes' Theorem. Bayes' Theorem is the basis for Bayesian probability--the idea that as you gain additional knowledge, you can systematically update the expected probability of an event occurring. Count Bayesie does a great job of explaining the idea, so check out the post here.
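For a flavor of what that updating looks like numerically, here is a minimal sketch of Bayes' Theorem in Python. The diagnostic-test numbers are made up for illustration and are not taken from the Count Bayesie post.

```python
# A minimal sketch of Bayes' Theorem with made-up numbers:
# P(A | B) = P(B | A) * P(A) / P(B)

prior = 0.01           # P(disease): belief before seeing any evidence
sensitivity = 0.95     # P(positive test | disease)
false_positive = 0.05  # P(positive test | no disease)

# Total probability of a positive test (law of total probability)
p_positive = sensitivity * prior + false_positive * (1 - prior)

# Posterior: the updated belief after observing a positive test
posterior = sensitivity * prior / p_positive
print(f"P(disease | positive test) = {posterior:.3f}")  # ~0.161
```

The posterior is just the prior reweighted by how well the evidence fits each hypothesis; with a rare condition, even a positive result from an accurate test leaves the updated probability well below certainty.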