Standards-Based Grading: Science – Quantitative (5 of 7)

This is the fifth in a series on the implementation of standards-based grading in specific disciplines. A lot has already been written on this subject, and I hope to expand on it. A bit of motivation before moving on: Ask yourself, “Does my gradebook mean anything to anyone other than myself? How can I use assessment tools to better communicate with students about their progress?” Finally, can we use assessment to empower students to control their own formative behavior in the classroom?

I can feel it coming in the air tonight, oh science. (fast-forward to 3:40, seriously) Science, you’re up for the SBG juggernaut, and I can’t wait. This is what I do. Quantitative Science is what I’m trained most specifically for, and I can’t wait to tell all of you about how Standards-Based Grading has changed my room for the better.

To be clear, when I say “Quantitative Science,” I mean Physics, Chemistry, and any other science-y class that uses numbers more often than not.

These classes, Physics and Chemistry, are built for SBG. They have hundreds of interconnected facets that build into large, hulking standards, standards that often underpin the entirety of scientific study. Concepts like Energy Conservation, Dimensional Analysis, Force, and Measurement are all interwoven through these curricula in a way that allows for mathematical treatment. Many other courses treat these concepts qualitatively, and do so well, but we’re hitting the calculators here with a fury.

Unsheathe thine weapon!
[Image: Many students’ weapon of choice]

My Physics classroom looks like this:

  • Inquiry-Based (Specifically, the Coupled-Inquiry Cycle)
  • Centered on presenting science as a descriptor (instead of a controller)1
  • Power Tools
  • Mathematical Rigor

We move our focus from topic to topic through my list of district-mandated standards, but really we never leave anything behind. All topics are used as we develop new topics, and SBG helps me assess a student’s development as we do this. I felt criminal when I didn’t use SBG; knowing that a kid could perform a reduced chi-squared hypothesis test a month later, but having their first data analysis quiz hold them down, made my heart hurt.

My goal (yours too?) is to create a gradebook that reflects the standards that you actually care about. The entries in my gradebook are not broken up by assignment; they are broken up by concept. A kid can have an A+ in Statistical Analysis of Data and an F- in Momentum Conservation, even though they may have just done an investigation dealing with momentum. This tells the students what they do and do not know as they prepare for a summative exam. It also tells me what to reteach.

So how does a typical unit look? On the first day I present a question for the students to work with (a guided investigation). For instance, the first day’s question is: “Is there a connection between the circumference and the diameter of a circle?” Kids come up with all sorts of hand-waving statements, which generally distill down to, “If one goes up, so does the other.” My response is, “Oh yeah? How much? Prove it.” Then they get going. They run around the school measuring any circle they find by any method they can devise. This then leads into a lesson about standard deviation and reduced chi-squared hypothesis testing.

The understanding of those two statistical methods is what I want the kids to learn, because those skills will show up the rest of the year and will help them in a myriad of other situations later. Do I want them to know about Pi? Sure, but I wouldn’t consider it a Standard. This is what precipitates into my gradebook as a result of the first unit:

  1. Standard Deviation: Student can perform the mathematics necessary to conduct a standard deviation test on appropriate sets of data. (-/10)
  2. Standard Deviation: Student understands the meaning and implications of the standard deviation test. (-/10)
  3. Hypothesis Testing: Student can perform the mathematics necessary to conduct a hypothesis test. (-/10)
  4. Hypothesis Testing: Student understands the meaning of a hypothesis test and how to apply it to appropriate data. (-/10)
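For the number-crunching half of those four standards, here is a minimal Python sketch of what the students are doing by hand: the sample standard deviation of their measured C/d ratios, and a reduced chi-squared test against the hypothesis C = πd. All measurements and uncertainties are invented for illustration, not real student data.

```python
import math

# Hypothetical circle measurements: (circumference, diameter, uncertainty), in cm.
# These numbers are invented for illustration.
data = [
    (31.5, 10.0, 0.5),
    (15.8,  5.1, 0.5),
    (62.9, 20.1, 0.5),
    ( 9.3,  3.0, 0.5),
]

# Standards 1-2: sample standard deviation of the measured C/d ratios
ratios = [c / d for c, d, _ in data]
mean = sum(ratios) / len(ratios)
std = math.sqrt(sum((r - mean) ** 2 for r in ratios) / (len(ratios) - 1))

# Standards 3-4: reduced chi-squared for the hypothesis C = pi * d
# (one fitted parameter, so N - 1 degrees of freedom)
chi2 = sum((c - math.pi * d) ** 2 / sigma ** 2 for c, d, sigma in data)
chi2_reduced = chi2 / (len(data) - 1)

print(f"mean ratio = {mean:.3f}, std dev = {std:.3f}")
print(f"reduced chi-squared = {chi2_reduced:.2f}")
```

A reduced chi-squared near 1 says the data are consistent with C = πd given the stated uncertainties; much larger than 1 says the hypothesis (or the error bars) is in trouble.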

The traditional gradebook would have this:

  1. Circles Lab (-/40)

I might assess the skills by having the students hand in their work from that day, or I might give a quiz with simulated data in Excel. No matter how I assess it, I’m going to break it into those 4 separate grades. (But that sounds like extra work! Yeah, like 3 extra seconds; stop whining.) What I, and the students, gain is a better recorded understanding of their abilities. It’s very hard to remember the current state of every student’s learning, so I farm that task out to my gradebook with as fine a resolution as I can handle.

Here’s the formative part: These statistical tests (or whatever other standard) show up for the rest of the semester. So, whenever a kid demonstrates improvement (or regression), so does that grade. It makes kids more cautious, which is nice. You and I both know that the specific content in Physics and Chemistry isn’t the most important aspect, and this is a fantastic way to eliminate that smoke screen. The students may do 20 labs/investigations throughout the semester, but they’re always practicing and showing me the same basic set of things.

I do add standards for specific content, but in general the standards that address experimental procedure never change throughout the semester. The students always know what they’re trying to show me; they have to think about how this experiment aligns with the major pillars of science. Now I’m getting choked up and misty-eyed…

As mentioned above, there are standards that relate to specific science content, for instance: “Student understands the difference between velocity and acceleration.” It is a very common misconception that velocity and acceleration are the same thing, and the distinction is quite fundamental, so I make it a standard. This concept might be assessed on the Kinematics quiz, or the F=ma quiz, but the understanding of it is reported in a much more fundamental (and informative) way.

Reassessment in quantitative science can also work a lot like it does in math. Kids can easily stop by during off hours and attempt another problem. This is nice for the computation-related standards. It can require you to have alternative questions ready in the holster, and some of you may be uncomfortable with that. My solution is to make a problem up without knowing the answer and be willing to work it after the student does. This way I get to do a think-aloud through the problem. I can often develop empathy for the student a bit better, especially on the rare occasion that I’ve accidentally created an unsolvable problem. You can also, in the right circumstances, have the student design and solve their own problem.

To finish out a cycle, I then allow the students to choose a topic for an investigation within the parameters of our current unit, usually whatever interests them. I control this through a grant-writing process (I plan to post about this in more detail later). They then present their own, more unique findings via a “Lab Report.” We then go over the concepts through mini-lectures, and the students can choose how much of the copious “Homework” I assign to complete, based on their own self-diagnostic.

Lab Reports:

The hallmark of many science courses is the lab report. I’m not totally sure why, but I’ve done a lot of lab reports in high school, undergrad, and graduate school. They seem to be some vague attempt at hitting communication standards, thinly veiled with the reasoning that, “you’ll need to write reports as a professional scientist in order to get published.” I’m just sayin’, but less than 2% of your kids will be science majors…

The goal here is obviously the communication portion, and it’s a good standard indeed, but the rigid scientific paper limits students unnecessarily. Let them present to you the data, and you be the judge of its communicative fidelity.

Through the use of SBG, lab reports have transformed in my room from concrete papers into an amorphous medium consisting of PowerPoints, movies, seminars, and sometimes interpretive dances (not kidding). The goal of these varied reports is to strike back at the core standards that the kids have been aware of (via gradebook) since day 1. This has worked nicely.

No matter what, there’s a spectrum, and I’m beginning to build a database of all of this student work. I’ve had some really awful ‘reports,’ the kind where it’s almost painful to sift through the layers of misconceptions. On the other hand, I’ve had some truly transcendent reports, where I feel that, by allowing the students this metered freedom, I’ve gotten a picture of their understanding that I could never have attained with a quiz.

What’s it all mean, Basil?

Separate your gradebook into the things you actually want kids to learn (and work on). Stop putting in “Force Quiz”; they don’t know what that is or how many little ideas it contains. One quiz might become 3 different grades (maybe “Straightforward F=ma,” “Vector F=ma,” and “F=ma combined with Kinematics”), and so be it. The minuscule amount of extra work is worth it for the change in student behavior.
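To make that last hypothetical grade concrete, a single “F=ma combined with Kinematics” quiz item might run like this (all numbers invented for illustration):

```python
import math

# A 2.0 kg cart starts from rest and is pushed by a constant 6.0 N net
# force; how fast is it moving after traveling 4.0 m?
m, F, d = 2.0, 6.0, 4.0      # mass (kg), net force (N), distance (m)
a = F / m                    # Newton's second law: F = ma
v = math.sqrt(2 * a * d)     # kinematics: v^2 = v0^2 + 2*a*d, with v0 = 0
print(f"a = {a:.1f} m/s^2, v = {v:.2f} m/s")
```

Grading this under one lump “Force Quiz” entry hides whether the kid stumbled on the F=ma step or the kinematics step; two separate standards make that visible.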

1: This is one of my soapbox points. Many teachers and students view Science as a generator of action, which is simply not true. The heavens were spinning long before anyone attempted to describe them using science. Science is a descriptor, so please stop saying, “…and that’s why you move when pushed, F = ma!” Who are you, Mr. Wizard? Show some humility.