ThinkThankThunk

Dealing with the fear of being a boring teacher.

To Drop or Not to Drop?

It’s been a wacky week in the world of Cornally: hospitals, psychotically long programming assignments, thunderstorms with a bit of extra wrath, you know, Iowa on a good day.

Also, I’ve been busy. The SBGradeBook fall-semester launch is mere days away! I’ve migrated to my own servers so that I can 99% ensure your data’s fidelity and FERPAtitude. I know, I’ve never been so excited for a piece of grading software either…

So, until then, I’ll leave you with this assessment nugget:

An unnamed instructor in an unnamed land decides to drop the lowest quiz score from each of his/her students’ final grade calculation.

Attack or defend this assessment practice in the comments. I’ll mail bacon to whoever first guesses how I feel about it.

Some discussion starters:

  1. Why did the instructor give the quiz if s/he was going to just ignore the information it yielded?
  2. The dropped quiz could conceivably be a different assessment for each student; is that fair?
  3. How does this help a student who has yet to assess less than perfect?
  4. How does this help a student who has yet to assess at a proficient level?
  5. Does this “raising of the grade” help, hurt, confuse, or otherwise bewilder students?
  6. What does this communicate to students about points vs. understanding?
  7. Does this actually raise the student’s grade?
  8. What the hell does a grade mean, then, if it can be raised on a whim?
  9. Oops, perhaps I’ve shown my cards. There’s still bacon in it for the most articulate of you.

Hamlet is my favorite play because it’s the first book for which a teacher ever took the time to find out whether I actually understood it. Assessmenteffingmatters.

Shawn Cornally • July 25, 2010



Comments

  1. Think Thank Thunk » To Drop or Not to Drop: A Well-Tempered Discussion
  2. Jen MacDonald August 8, 2010 - 2:20 pm

    I hate the idea of dropping the lowest grade. I’ve dropped grades once in my – not yet long – career and that was when I felt that I was the one who screwed up in writing the test.

    If we want our students to learn everything in the curriculum (and hopefully we are teaching it because it is worth learning), then it should be included in their assessment. Having just now discovered and jumped head first into reading all of these blogs on SBG, I think I might be in love. There is no worry over a “bad test day” because students can show their understanding later – and they know what they need to improve upon, since grades are broken down into topic areas.

  3. hillby August 4, 2010 - 11:03 am

    Thank you GasStationWithoutPumps, that was wonderfully clarifying.

    I think how that final grade gets calculated should depend on how it’s going to be used. If you’ve got standards grades, it doesn’t seem to make sense to remove outliers – because my assumption is that there have been multiple assessment opportunities.

    I know that once they leave my classroom, that grade is pretty much going to be a ranking. The best that I can do is decide how I rank my students – whether I agree with ranking or not.

  4. John Golden July 29, 2010 - 9:03 am

    The most interesting thing about all these comments is the peek they allow into how people think about what problem assessment is trying to solve. That makes sense, since we try to make it serve many purposes. Several people seem to feel it’s mostly about accurately measuring student performance with a single number. My predisposition is that the assessment is data for me about understanding – which I get whether it’s in the grade or not. The grade, for me, is feedback to the student. I don’t care what grade they get, except as it correlates with understanding. So what matters most to me about grades is what they tell the student.

  5. gasstationwithoutpumps July 29, 2010 - 8:46 am

    Hillby points out the main problem with averaging: that it assumes we are looking at nearly independent measures of the same object, when in reality our assessments are measures of many different dimensions. The point of SBG is to report the different dimensions separately, as a large vector.

    But when you provide a summation at the end of the course (required in most schools and universities), you are often restricted to a single dimension. The challenge then is how to reduce the multi-dimensional measurements to a single dimension. This is usually done by projecting onto a line (a dot product with a vector defining the line, also known as weighted averaging).

    Once you’ve decided how you are going to project your multi-dimensional data onto a line, it makes sense (to the extent that single-dimensional grading makes sense) to use standard data-cleaning methods to get the best possible estimate of where on that line the point should be placed. Hence the use of dropping outliers, repeating measurements of outliers, and other commonly used approaches in computing grades (a small sketch of this projection appears at the end of this comment).

    The problem is not with the dropping of grades or re-testing, it is with the reduction of multi-dimensional data to a single dimension by projection.

    There are other ways to reduce dimensionality, such as taking the minimum or maximum over (weighted) grades. The ones I know of tend to be even less informative about student performance than averaging, since they are more affected by a student’s unusual performances than by their usual work.

    Note: for the past 24 years, I’ve worked in a system that required detailed narrative evaluation of students. Over the years, the narrative has become less and less important to the students and the faculty, from being the only thing on the transcript, to narrative plus optional grade, to narrative plus required grade, to grade plus optional narrative. It seems that the people who use the transcripts really want the teacher to do the reduction to a single dimension: they don’t want to have to do it themselves and they can’t deal with multi-dimensional data.
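
    A minimal sketch of the projection-plus-data-cleaning idea described above, with invented standards scores and weights (none of these numbers come from a real gradebook):

    ```python
    # Hypothetical illustration: reduce a vector of standards scores to one grade
    # by projecting onto a weight vector (a weighted average), optionally with a
    # "drop the lowest" cleaning step. All scores and weights are invented.

    def weighted_grade(scores, weights):
        """Dot product of scores with the weight vector, normalized by total weight."""
        return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

    def weighted_grade_drop_lowest(scores, weights):
        """Same projection, but with the single lowest score (and its weight) removed."""
        lowest = min(range(len(scores)), key=lambda i: scores[i])
        kept = [(s, w) for i, (s, w) in enumerate(zip(scores, weights)) if i != lowest]
        return weighted_grade([s for s, _ in kept], [w for _, w in kept])

    scores  = [92, 85, 40, 88, 95]   # one score per standard (invented)
    weights = [1, 1, 1, 2, 1]        # relative importance of each standard (invented)

    print(round(weighted_grade(scores, weights), 1))               # 81.3
    print(round(weighted_grade_drop_lowest(scores, weights), 1))   # 89.6
    ```

    The point of the sketch is only that “dropping the lowest” is one particular data-cleaning choice applied after the projection (the weights) has already been decided.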

  6. Jason Buell July 26, 2010 - 11:58 pm

    Send HILLBY some bacon. He gets averaging. To quote every single assessment book I’ve ever read: Averaging assumes no learning has occurred.

    btw, impressive gallery of commenters. Even got Dan Greene out here. Hey Dan! Holla!

  7. hillby July 26, 2010 - 7:57 pm

    Re: GASSTATIONWITHOUTPUMPS

    You say that assessment is noisy, and people average noisy data. But when you average data from an experiment, you are averaging several measurements of the same object. This is an acknowledgement that the property of the object isn’t changing with time.
    However, you’re talking about averaging and dropping outliers on tests that assess completely different topics! That’s like averaging the mass of an apple and the glucose content of an orange.

    And furthermore, if the average of your assessment is 50%, then you must be trying to measure the center, not students’ understanding of the topics that you’ve supposedly set them up to learn.

    I have never taken an engineering class, but the persistent rumors are that the purpose of those assessments is to cut out or demoralize students so class sizes are smaller later. Is that what professors are trying to measure? Which students weren’t “worthy” of engineering? Or perhaps the need for reassuring assessments is created by the demoralizing nature of a weed-out assessment.

  8. hillby July 26, 2010 - 7:42 pm

    This is such a meaty question. Bad-day forgiveness, I can buy that. But why does it have to be forgive-and-forget by dropping the grade? (Assume SBG doesn’t exist.) Within the traditional grading system, it seems that the better policy would be to allow one re-take. It acknowledges the frailty of humans while maintaining accountability.
    It just dawned on me why almost everyone doing SBG allows retakes. You’ve upped the accountability, and now you have to forgive the bad day because random points aren’t available.
    So I think I just used that “drop one grade” policy to provide another context for re-testing in SBG.

  9. David Fleming July 26, 2010 - 6:47 pm

    Love the post, love the comments, love the thoughts!

    I see the argument that “everyone has a bad day.” But if your final grade as a student is supposed to reflect the content you have mastered, how does ignoring the fact that you did terribly on a trigonometry unit in grade ten math honestly show what you have mastered?

    Of course, the next problem comes in grade 11, when the student takes trigonometry and doesn’t have any background knowledge at all… but that student’s teacher basically said, “Don’t worry about trigonometry, kid, it’s not THAT important.” If it’s important enough to teach, it should be reflected in the year-end mark.

  10. TEACHING|chemistry» Blog Archive » Unhelpful Grading Practices (Part 1 of ??)
  11. gasstationwithoutpumps July 26, 2010 - 11:24 am

    Assessment is noisy. There are many ways to reduce the noise: averaging many samples and dropping outliers are both common techniques for reducing noise in experimental data. To avoid bias, the teacher should drop the top grade as well as the bottom grade (a trimmed mean; see the sketch at the end of this comment).

    Of course, most assessment is deliberately not designed for measuring, but for reassuring students. If it were really designed for measuring, the average score would be around 50%, and the standard deviation around 15–20%. Some engineering professors do design tests like this—I certainly have.

    If the purpose of assessment is to reassure students, then most of Mr. Cornally’s questions are irrelevant.
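
    A small illustration of the symmetric-trimming point above, with invented quiz scores: dropping both the top and the bottom score is just a trimmed mean, a standard noise-reduction technique, whereas dropping only the lowest biases the estimate upward.

    ```python
    # Invented quiz scores for one student; the 31 plays the role of a "bad day" outlier.
    quizzes = [78, 85, 31, 82, 90, 80]

    def mean(xs):
        return sum(xs) / len(xs)

    def trimmed_mean(xs, trim=1):
        """Drop `trim` scores from each end before averaging (symmetric trimming)."""
        xs = sorted(xs)
        return mean(xs[trim:len(xs) - trim])

    print(round(mean(quizzes), 2))              # 74.33 -- plain average, dragged down by the outlier
    print(round(mean(sorted(quizzes)[1:]), 2))  # 83.0  -- drop only the lowest (raises the grade)
    print(round(trimmed_mean(quizzes), 2))      # 81.25 -- drop lowest AND highest (less biased)
    ```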

  12. Russ Goerend July 26, 2010 - 9:45 am

    1. Dropping the grade doesn’t mean the info was ignored. Just means that grade was dropped.

    2. I think that’s precisely what makes it fair. Fair in this context at least. Fair isn’t always equal (Wormeli, 2006)

    3. How does it hurt them? The feedback (assumption) hasn’t been removed. Only the points. Assuming they’re averaged, nothing changes.

    4. It helps by raising their grade. Am I to assume the score on the quiz is the only feedback the students are getting?

    5. The traditional student? Help. Someone like you? Irritate.

    6. I would hope it communicates that points/grades are a necessary evil in college, so we’ll supplement them with timely feedback. Doesn’t sound like this was the case though.

    7. You’re the math guy. It sure seems like it would, but I really don’t know. Run some simulations. (One quick simulation is sketched after this comment.)

    8. A grade means: did you want to have a cord draped over your shoulder at graduation? This will help.

    9. I watched Food, Inc. last night. I’m passing on the bacon for a few days, at least.

    :)
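
    Since #7 punts to simulation, here is one quick sketch with purely invented random scores. It suggests (and a little algebra confirms) that dropping the lowest score can raise an unweighted average or leave it unchanged, but never lower it:

    ```python
    import random

    # Quick simulation for question 7: can dropping the lowest score ever lower the average?
    # Scores are purely invented uniform random integers, 8 quizzes per simulated student.
    random.seed(0)

    def avg(xs):
        return sum(xs) / len(xs)

    raised = unchanged = lowered = 0
    for _ in range(10_000):
        scores = [random.randint(50, 100) for _ in range(8)]
        before = avg(scores)
        after = avg(sorted(scores)[1:])  # drop the single lowest score
        if after > before:
            raised += 1
        elif after == before:
            unchanged += 1
        else:
            lowered += 1

    print(raised, unchanged, lowered)  # lowered should be 0 every run
    ```

    The algebra behind it: the remaining average is (sum - min)/(n - 1), which is at least sum/n because sum >= n * min, with equality only when every score equals the minimum.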

  13. David Cox July 26, 2010 - 8:43 am

    Dropping scores either invalidates the assignment, shows a recognition that traditional grading doesn’t work, or both. If it’s the assignments, then come up with something better. If it’s worth assigning, then it should be worth learning. Don’t waste students’ time. If it’s the grading system, then…we already know the system is stupid. If it’s both, come up with a valid skill/concept list and get on board already. Regardless, dropping scores is stupid.

  14. Colin July 26, 2010 - 7:54 am

    Forgot to check “Notify me of followup comments via e-mail”.

    Maybe this misstep in understanding how the commenting system works would be taken to heart if I bothered to reassess my click-happiness. Or should I just “drop” this failure, ignore it, and pretend it didn’t happen? Hmmm.

  15. Colin July 26, 2010 - 7:52 am

    Dropping grades is a half-attempt at SBG without the SB. It is a recognition that students are not at the top of their game 100% of the time and that scores are just that: scores. It fails, however, to follow through with reassessment. Because it fails to reassess, it reinforces to students that points are points and further disconnects students from learning and assessment.

    It’s like car accidents: you don’t ignore that one crash and shrug it off; you stop and try to understand what caused the accident. You do not learn to avoid the second accident by ignoring the first.

  16. Rachael July 26, 2010 - 7:41 am

    Based on the tone of your questions, you don’t agree with dropping the lowest grade, and I believe I can make at least some argument both for and against it. (We like bacon.)

    For: Kids have bad days, too. It shows up in their grades sometimes, and dropping the lowest acknowledges that. I’ve done it to give my students a “pass”, thus not dooming them b/c of one bad performance. For my more perfectionist students, it eases the pressure and they often perform better across the board than otherwise (less stress = better performance), and for my less studious students … well… it eases their stress also. It says to my students that I recognize the game that grades are, and I’m trying to play in their favor, even if I don’t always know the best way to do that. If the material won’t ever be covered again, then the fact that a student goofed on the quiz is a moot point.

    Against: It makes grades into even more of a game than before. It potentially says, “I, the teacher, run the game, make the rules, and change the rules when I want to.” It says, “That you didn’t *get* the material is no biggy; I don’t care if you’ve learned anything or not, I only care that I don’t have too many low grades in my class.” I can think of so many other things this practice can say to students (now that I’m thinking about it on a deeper level), and I don’t like it!

    My conclusion: I’ve done it; I dropped lowest quiz/test grades with the best of intentions. But as I really think about it, I don’t like the other ideas it potentially communicates. It’s not that every student hears/thinks these negative things when I drop a lowest grade, but it does instill these kinds of assumptions. Bah — another screw-up on my part. :(

  17. Tim Erickson July 26, 2010 - 7:28 am

    I think the usual justification is as others have commented: everybody bombs once in a while; we recognize this and give you a break. (But bomb twice and you’re toast!)

    The underlying, pernicious assumption, alas, is that the “average” is a suitable indication of overall student understanding.

    Let us ask this, from a SBG point of view: suppose Aloysius demonstrates deep understanding of 39 of the 40 standards you have proposed, but somehow can’t for the life of him demonstrate more than cluelessness about, say, #17. Do you average them so Aloysius still gets an A? Do you drop one standard so he gets an A+? Do you say, hmm, I must have mis-factored my standards, that shouldn’t be possible? Do you have Aloysius tested for 17-deficit-disorder? Or do you look at that and say, what the hell? I’m using SBG but at the end I have to average because the system demands a summative grade?

    It’s a problem. As a stats teacher, I call this the “Tyranny of the Center.” (Don’t steal that title!) We have a cultural blind spot that lets us substitute a single measure (usually a measure of center such as mean or median) for the whole set of data and use it for comparison. Your class average is better than the class average of another teacher? You feel good; you feel that you’re better. The median home price in your ZIP code went up; you feel richer. We grade kids: Aloysius gets 39/40 and Penelope gets 37/40. Our knees jerk, and we say that Aloysius understood more than Penelope. In fact, the system demands the simplification and sets up the comparison, when the truth is the actual constellation of understandings.

  18. Ellena Bethea July 26, 2010 - 1:42 am

    This has to be a fairly common practice, as “drop lowest score(s)” seems to be a feature of every gradebook system I’ve encountered so far. It is a product of a points grading system, as the only goal is to raise a grade. (Generally, lowest scores are only dropped when they don’t hurt.)

    In practice, I think teachers choose it because it gives their students a “freebie.” If they are taking SATs and cramming for another class, or just having a rotten day, they have one quiz that they can skip studying for without fear that it will affect their grade. At the college level (where whole test grades may be dropped), it may be an adjustment issue, where a student may not know what to expect on their first test.

    Above all, it lets the teacher off the hook. No need to meet with a student and explain material that you won’t be going over again, when you can just say “don’t worry, your lowest grade is dropped.” It takes away incentive for the teacher to evaluate effectiveness and accuracy of the assessment itself. And it minimizes the embarrassment or hassle that may result from having too many low grades in the class.

    Dropping a lowest score in a traditional grading system deems X% of the course material unimportant. It is grade inflation intended to mask the inherent unfairness/inaccuracy of our traditional points grading system.

  19. Z. Shiner July 26, 2010 - 1:07 am

    The problem with the drop-the-lowest-score approach is that it values inconsistency. The student who benefits the most is the one who gets the lowest grade on a quiz. When I was a student I would use it as an opportunity to skip a class (or not learn specific material). From that end of the spectrum it was great. From this side I can see how it negates the meaning of assessment.

  20. Dan Greene July 26, 2010 - 12:34 am

    Students learn pretty quickly that bombing a major assessment in a traditionally graded classroom early on can lead to a low grade in the class, no matter how much work is put in afterward. The drop-the-lowest-grade idea might help give students hope, so that they don’t give up early on. That being said, if the assessment policies of the class tend to induce fear in the students, instead of helping them learn, then the teacher probably needs to rethink things a bit more deeply. Drop-the-lowest is just a small band-aid on a giant affective gash.

  21. John Golden July 25, 2010 - 11:47 pm

    What I like about it is that it is an attempt at approximation. In math there is a typical attitude that you either have it or you don’t. (In general, too, but here referring to a particular technique or topic.)

    However, that’s not how people learn. We need time to muck around with something. This, at least, might let the students know the teacher doesn’t expect perfection. (Just perfection -1.)

    If the teacher has found it effective, I’d want to know: effective in what way or by what measure, how it affects student learning (if that wasn’t the measure), and what other ways there are to address that concern.
