Follow up: Opt-out of project to develop ML/AI based summary of teaching evaluation comments


This is a follow-up to last week’s e-mail (included below).  We received a small handful of opt-outs, which was a handful more than I expected.  Given the limitations of e-mail, I feel it is important to further explain the project, its goals, and its potential outcomes.

First, I want to emphasize that this is an exploration into new ways to manage and process course evaluation data that we already collect.  Second, there will be no change in the actual course surveys.  Third, if it ultimately proves useful, this will only be an addition to the way survey results are communicated to the faculty.  And finally, it is an exploration that will need careful assessment and evaluation by numerous groups before there is any roll-out.  Ultimately, we just want to develop more effective ways to communicate data in forms that are useful and actionable to those teaching the courses.

The current work is essentially a “re-analysis” of old data in appropriate support of a core academic responsibility of the college.  But being sensitive to individual faculty privacy concerns, we are reaching out to offer the opportunity to opt out of the training and test data that will be used for evaluating the potential and appropriateness of this approach.

We are keenly aware of the limitations, biases, participation rates, and other issues related to on-line course evaluations.  But these are data that we need to continue collecting, and they do have value to faculty as they continuously improve their teaching.  This project is simply an attempt to enable faculty to make more effective use of the data.

Two final notes.  Kathy and Madeleine are working to ensure the data is anonymized as effectively as possible before being shared with a few select students as training data.  And results of this study will only be shared within the MTEI group at this time.

================= Last week’s e-mail =====================

Student evaluations of courses are an important element in our “continuous improvement process”.  While the questions give easily quantified data, the numbers don’t always capture the full breadth of students’ experience.  For that, the comments are commonly far more valuable.

For a small class, it is not difficult to read all the comments and get a reasonably good perspective.  But for large classes, the sheer number of comments can be overwhelming and often there is insufficient time to fully digest the data.

Kathy Dimiduk and Madeleine Udell are exploring development of a supporting tool that would provide a “summary” of the comments, looking for common threads and prioritizing the positives and the concerns.  Ultimately, such a tool might provide a bulleted list of actionable summaries that would facilitate improvements, as well as help guide a more comprehensive reading of the comments.

A group of students this semester will be working to develop first prototypes of this tool.  Past course evaluations, anonymized to remove faculty names and course numbers, will be used as the training data and to assess the effectiveness of the program.  Kathy and I met with the IRB (Institutional Review Board) to confirm that this would not be classified as a “human participatory research” project.

However, there is the potential for some personally identifiable data making it through the anonymizing filters (badly misspelled names for example).  As course evaluations can at times be sensitive issues for faculty, we wanted to give you the opportunity to opt-out and not have your prior evaluations included in the training data.  To opt-out, please fill out the Qualtrics survey at

I believe that this project has the potential to be extremely valuable to all of us, providing actionable recommendations from the evaluations, and helping us do our jobs well with less effort.  I hope you agree and are willing to have your data included in the training and evaluation.  We will keep you posted with results of the initial studies.