Measuring Growth, Part 2: Self-Evaluation in Compass

Dec 4, 2012   //   by Dimitri   //   Blog, Physics Education, Teaching Philosophy

[Figure: a hummingbird, one student's symbol of growth] Caption: One student symbolized their growth with a hummingbird for three reasons. (1) According to the student, the hummingbird is “special in its ability to hover,” which “embodies the moments over the course in which I felt . . . ‘stuck’ in my studies, unable to propel myself forward.” (2) The hummingbird can also “fly in any direction, including backwards. . . . The second midterm, for example, was my fall backward.” (3) On a more positive note, the student compared their growth to a hummingbird that “elevates and flies over ground, ascending over obstacles and any problems.”

Introduction

This post is the second part of a two-part story about how Compass encourages students to measure their growth according to a rubric of qualitative skills (which can be found here). The rubric that Compass uses is an adaptation of two rubrics developed by Jon Bender. How Jon developed his rubrics is the subject of the first part of this story, which can be found here. Compass’s rubric is very similar to Jon’s, and a helpful description was given in Part 1:

These rubrics include a bunch of qualitative behaviors and skills, e.g., persistence, communication, skepticism, and self-compassion. They can be used in two ways: (1) by teachers to provide feedback to students, and (2) by students to evaluate themselves. Jon has taken both approaches, whereas Compass uses the rubrics primarily in the context of student self-evaluation.

To give students a rough understanding of the skills, each skill is accompanied by a list of defining questions. For example, one of the questions that accompanies “persistence” is: what do you do when you’re frustrated? A particular student’s proficiency can be ranked as beginning, developing, or succeeding according to the rubric. The stages of proficiency are described through qualitative statements. For instance, the rubric characterizes the beginning stage of persistence with statements like “I tend to try one or two things” and “I give up more easily than I should.” Succeeding at persistence, on the other hand, is characterized by “look[ing] for new ways to think about a problem.”

In this post, I discuss how and why Compass started using Jon’s self-evaluation rubrics in our courses, and I describe how we adapted the rubrics to serve our needs.

Getting frustrated with students’ frustration with failure

As an experimental atomic physicist, I know that the things I build won’t work on the first try. Or on the second try. Or the third. And so on, ad infinitum (or so it seems).

Failure is definitely frustrating. I can’t even count how many times I’ve fixed the “last” problem with a piece of electronics and plugged it in only for it to start smoking about two minutes later. All I can say is that closing my eyes for a few seconds, drawing a deep breath, and deciding that it’s time to take a break and go for a walk around Memorial Glade has become part of my weekly (maybe even daily) routine as an experimentalist.

But despite my frustration, I know that each failure is a learning experience. After troubleshooting faulty equipment, I’ve learned something practical (e.g., don’t use arbitrary colors of wire when building electronics), and after each failed experiment, I gain new insight into atomic physics (e.g., how the intensity of a laser affects the width of spectral lines). To me, frustration is an essential part of learning, and I know that it can take days, weeks, or months of tedious tinkering before I get my equipment or code working properly.

So why the heck are my students giving up on physics because their problem sets are frustrating?

(Note: With only 38% of freshmen who are interested in science going on to complete a science degree, attrition is an ongoing problem. I don’t want to over-simplify the reasons people leave physics, which include a lack of positive role models, harsh grading practices, unapproachable faculty, overpacked curricula, and frustration with conceptual difficulties. See here and here for in-depth discussions by Elaine Seymour.)

It was only after lots of long conversations with my colleagues in Compass that I came to understand a perverse truth about our culture: there seems to be a pervasive belief that “smart” people “just get it” without even trying, and therefore people who struggle or spend a lot of time trying to learn something new must be “stupid.” I started to better understand what many of my students are likely experiencing: they probably think that their frustration is evidence of their stupidity. No wonder students give up in the face of frustration–why bother struggling with physics if you think you’re too stupid to be a physicist?

In the education community, this pattern of thought is related to Dweck’s “fixed mindset,” which Jon discussed in Part 1 of this story. Like Jon, I wanted to know how to get students to abandon the fixed mindset. I wanted them to interpret failure as integral to the learning process and to believe that trying hard and persevering through frustration results in deeper understanding. Essentially, I wanted students to adopt a worldview similar to Dweck’s “growth mindset.” (You can read more about these mindsets here.)

Fortunately, the Compass community was on board with trying to change students’ beliefs about failure, and Jon gave us some tools to help make that happen.

Measuring success, failure, and growth

In Fall 2011, the Compass community decided to address issues related to failure and growth by pioneering a new course in the freshman sequence: a two-unit spring semester course analogous to the one that already existed in the fall. This new course, ultimately called Intro to Measurement but often referred to as “Spring 98,” was taught for the first time in 2012 by Geoff Iwata (who was a senior undergraduate at the time) and me. I’ll focus on the growth aspect of the course in this post, although its other major goal was learning how to collect, analyze, and interpret scientific data.

Although Geoff and I taught the inaugural Spring 98, we were definitely not the only people who contributed to its design. For instance, Angie Little helped shape its learning goals and student projects. She also introduced me to Jon, whose contributions to the course (e.g., the rubrics) have since taken on a bigger role within Compass. Angie, Jon, and I talked a lot about the role of failure and the importance of compassion in the learning process, conversations which had a profound impact on the character of Spring 98. In its early stages, Angie referred to our discussions as the “Grades-Are-Whack Collaboration.”

Why did Geoff and I name the Spring 98 course Intro to Measurement, and why did Angie call our collaboration “Grades-Are-Whack”? What do measurement and grades have to do with failure and growth? The answer is pretty simple: teachers routinely use grades as measurements of failure (F’s) and success (A’s), and some teachers, like Jon and many people in Compass, use qualitative rubrics to measure growth as a complement to grades.

One of our goals for Spring 98 was to get students to view failure as part of learning. To do that, we had students use the rubrics as a tool for monitoring their growth. The hope was that they would start to view the concept of “growth” as a framework to interpret their grades. For example, a “C” on a physics exam may indicate any number of things, including a gap in conceptual understanding, difficulty with mathematical procedures, or a lack of proficiency with skills like organization and communication. By using the rubrics as a complement to grades, students can move away from an overly-simple interpretation of a “C” as an indicator that they are “bad at physics” and that they need to “study better.” The rubrics empower students to say, “I need a better understanding of the concepts covered on this exam, and to make that happen, I need to get organized and work on my communication skills.” (Of course, this type of response requires skill with self-compassion, i.e., the ability to act kindly towards oneself and view failure as a learning experience.)

The evolving role of the Self-Evaluation Rubric in Compass

Geoff and I introduced Jon’s rubrics to Spring 98, making very few modifications to the documents themselves in the process. For homework, we asked students to use the rubrics to evaluate their growth in a math or science class of their choosing. Each week, students submitted a written self-evaluation in which they discussed their proficiency with various skills from the rubrics, such as persistence, courage, collaboration, and self-compassion, using specific examples from class as appropriate.

It’s worth mentioning that Geoff and I also used the rubrics to evaluate ourselves. Geoff was monitoring his growth as a student in an upper-division physics laboratory course, and I was evaluating myself in the context of my PhD research. We made our weekly self-evaluations available to the whole class, the idea being that our evaluations could model for students what a “good” evaluation looked like. Personally, I found this experience challenging because I wasn’t always excited about admitting my shortcomings either to myself or my students. However, I also found it incredibly helpful to be able to give a name to those shortcomings (e.g., I often needed to work on persistence and self-compassion because, as I said earlier in this post, research can be frustrating and I sometimes feel “stupid” and want to give up).

Students’ self-evaluations were never graded; full credit was given for turning something in, even if it was off topic. Instead of assigning a letter grade or other score, Geoff and I provided students with qualitative feedback on their self-evaluations. Sometimes this meant pointing students to additional campus resources (like tutoring services at the Student Learning Center), while other times it meant congratulating them on their growth. Most often, though, our feedback took the form of a series of follow-up questions aimed at encouraging students’ self-awareness and helping guide them towards monitoring (and working on) a few specific traits or skills.

At the end of the semester, we had students reread their self-evaluations and look over their grades. We asked them to describe their growth according to these two measures (evaluations and grades) and to discuss whether they told the same story. Following Angie’s suggestion, we also asked students to create a visual representation of their growth. The resulting projects were incredibly creative: one student visualized their growth as a hummingbird (see figure above). Other visualizations included a seed growing into a budding flower that was supported by stilts, a person walking up a staircase where each stair represented a stage of growth, a Tumblr full of animated GIFs symbolizing different emotional responses to the phases of growth, and more. Geoff and I were quite impressed by the level of introspection, honesty, and thoughtfulness in our students’ final projects.

Both Geoff and I were also mentors for Compass’s mentoring program. Coincidentally, some of our mentees were also our students. We found that the weekly self-evaluations made us better mentors because the rubrics gave us (and our mentees) a shared vocabulary that we could use to talk about growth. Geoff and I felt that all mentor/mentee pairs could benefit from using Jon’s rubrics in this way, and spent a lot of time talking and thinking about how this might work.

Over the summer, the Compass community addressed two issues: (1) how to turn the fall/spring semester courses into a true sequence where students in the spring build on what they learned in the fall, and (2) how to introduce structure into the mentoring program in order to better support mentor/mentee pairs. One common thread in both these endeavors was Jon’s rubrics. We decided that the rubrics should become a staple of both the fall/spring sequence and the mentoring program, and we devised a way for students in the courses to be able to share their weekly self-evaluations with their mentors. John Haberstroh and Joel Corbo, the current teachers of the fall course, are piloting that sharing system now. This practice is similar, but not identical, to Jon’s implementation of peer-to-peer collaboration where his students “help one another draw conclusions about the development of their scientific habits.” The main difference between our approaches is that, because mentors are often graduate students, they are not necessarily the students’ direct peers.

The road ahead: Future use of the rubrics

Next spring, Jesse Livezey and Punit Gandhi are going to continue the practices established by John and Joel this fall. Beyond that, I’m not sure what the future holds.

I think it would be awesome if we established a tradition of teachers and mentors evaluating themselves, too, and sharing their evaluations with their students and mentees. This would show students that growth never really ends and that we are all in a perpetual struggle to improve ourselves.

To end this post, I’ll frame the rubrics in the context of Compass’s broader mission. The skills and character traits articulated in Jon’s rubrics are about more than succeeding on a particular assignment or in a particular course. Indeed, the larger arc of college is itself a project whose completion requires persistence, courage, self-compassion, and many other skills. In this sense, I see the rubrics as tools to help Compass achieve its mission of making sure that nobody who is interested in science leaves the field because they feel “stupid” and want to give up.

Growth is change, and change is never easy. Compass has always placed an emphasis on supporting students as they grow into professionals, and self-evaluation is an essential ingredient for success. The most important support for any learner comes from within.

Edit: As is true of all Compass endeavors, a large number of people helped shape the use of self-evaluation in the Compass classroom. One of those people is Daniel Reinholz, a PhD student in the Graduate School of Education at UC Berkeley who studies student assessment and self-assessment in mathematics. I am grateful to him for his invaluable input, which was often offered over Indian food at Biryani House.
