Tuesday, October 30, 2012
Monday, October 1, 2012
- Data taken before and after some intervention,
- A model that uses pre-intervention data, possibly along with other factors, to predict the post-intervention data, and
- An interpretation of any differences between the post-intervention data and the model.
The VAM doesn't simply score students based on their end-of-year scores, but looks for growth from the beginning of the year to the end of the year. So if a group of students starts 5th grade reading at the 3rd grade level, and finishes the year reading at the 4th grade level, the teacher is supposed to get credit for a year of growth.
Half-Baked Objection 2: Students aren't plants, and teachers aren't fertilizer.
Of course they aren't. But by itself, this objection says "You can't measure anything." And while measuring teachers badly hurts the profession, claiming that what we do can't be measured doesn't help either.
- Longitudinal Consistency: Teachers change over time, but not necessarily that much in any given year. So unless we have evidence that a teacher is taking substantial steps to improve his or her practice, or strong evidence that something has come unhinged, we would expect teacher scores to stay roughly the same from one year to the next. If teacher scores fluctuate wildly, that casts doubt on whether the score is really measuring something that the teacher is doing.
- External Validity: There are research-based strategies for exemplary teaching; that is, people have actually compiled lists that describe what teachers need to do to be effective. One such model is Charlotte Danielson's Framework for Teaching, but it's not the only one. Because these strategies are themselves validated by research demonstrating their impact on student learning, we would expect that, in general, teachers who are doing the things on these lists would score highly on the value-added metric, and that teachers who are not doing these things would score poorly. Of course, there's no canonical list that we need treat as gospel: it's possible that, over time, our views of what constitutes good teaching will evolve, and that this evolution will be informed by the results of the metric itself.
- Fairness: We don't want our measurement system to treat one group of teachers differently from another, and it should be mostly immune to sabotage or "gaming" by malevolent or savvy administrators and teachers.
- Appropriate Incentives: Peter Drucker's maxim "What is measured, improves," has a corollary: make sure you measure the things that you want to improve. In an era when almost any fact can be Googled, when the phrase "21st Century Skills" has gone from a war cry to a banality, we need to be careful that our metric creates incentives for teachers to teach the skills, concepts, and habits that we want kids to learn. We also want to ensure that the metric doesn't create perverse incentives for teachers to skip over crucial content, revert to large-scale rote memorization, or avoid teaching certain students. For example, the current NCLB regime has the well-documented "Bubble Effect": it's to a teacher's advantage to concentrate on those students who are near the proficiency borderline, to the exclusion of students who are so far from proficiency that a single year's work is unlikely to make the difference.
Thursday, September 20, 2012
You can tell these people by how they argue about the metric. For example, "If the kids start out the year behind, how is it fair to penalize the teacher for the fact that they end the year behind?" (It wouldn't be, but the "added" part of the value-added metric means that the metric is trying to describe change, not just absolute performance at the end of the year.) Or "If the class starts out at 80% and then ends at 85%, the teacher's responsible for the other 5%." (The value-added model doesn't compare average scores directly, which is good, because we would hope that students grow over the year anyway.)
I'm no fan of value-added metrics in teacher evaluations, but we won't get anywhere arguing about them if we don't even know what we're arguing about. So this blog is a sort of crib sheet for teachers and education people who haven't gotten totally immersed in the statistics and psychometrics stuff.
To make this description go, we'll apply it to a situation where a VAM might actually be useful. Say you want to determine whether a particular fertilizer treatment makes plants grow faster. If this were your fourth-grade science fair project, you'd just take two groups of plants, compute the mean height of each group, and then treat one group with fertilizer. At the end of the experiment, you'd compute the mean heights again. If the fertilized plants have a higher mean height, then the fertilizer works. Right?
Wrong. Computing means of groups doesn't tell you much about what happens to the individual plants. For example, in the (totally cooked-up-to-make-this-point) dataset below, at the start of the experiment, the two groups of plants have mean heights of 3.89cm (fertilized) and 4.05cm (unfertilized). At the end of the experiment, the fertilized plants have mean height 5.97cm, while the unfertilized plants have mean height 6.05cm. So the fertilized plants have it, by a whisker: unfertilized plants grew an average of 2cm, while fertilized plants grew by 2.08cm. Problem is, in my model, all I did was assume that the below-average-height fertilized plants grew 4 cm, while the above-average-height fertilized plants didn't grow at all. All unfertilized plants grew by 2cm.
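The arithmetic here is easy to reproduce yourself. Below is a minimal Python sketch with made-up heights that mirror the structure of the cooked-up dataset (below-average fertilized plants grow 4cm, above-average ones don't grow at all, every unfertilized plant grows 2cm); the numbers are illustrative, not the actual data from the post:

```python
def mean(xs):
    return sum(xs) / len(xs)

# (start_height, end_height) pairs in cm -- illustrative values only
fertilized   = [(2.0, 6.0), (3.0, 7.0), (5.0, 5.0), (6.0, 6.0)]
unfertilized = [(3.0, 5.0), (4.0, 6.0), (5.0, 7.0), (6.0, 8.0)]

def mean_growth(plants):
    # The "value-added" view: average each plant's individual growth,
    # rather than comparing the two group means directly.
    return mean([end - start for start, end in plants])

print(mean_growth(fertilized))    # 2.0 -- but half the plants did all the growing
print(mean_growth(unfertilized))  # 2.0 -- every plant grew exactly 2cm
```

Comparing only the means would hide the fact that, in the fertilized group, all of the growth is concentrated in the initially short plants.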
We can see these differences clearly in the scatterplots below, comparing fertilized (left) with unfertilized (right). In both graphs, the blue line is y = x, representing no growth at all. Points above the blue line represent plants that grew; points on the line represent plants that didn't grow, and points below the line (of which there aren't any) would represent plants that actually shrank.
[Scatterplot: Heights of Unfertilized Plants]
Monday, July 9, 2012
Math and science can be hard to learn—and that’s OK. The proper job of a teacher is not to make it easy, but to guide students through the difficulty by getting them to practice and persevere. “Some of the best basketball players on Earth will stand at that foul line and shoot foul shots for hours and be bored out of their minds,” says Williams. Math students, too, need to practice foul shots: adding fractions, factoring polynomials. And whether or not the students are bright, “once they buy into the idea that hard work leads to cool results,” Williams says, you can work with them.
And, as Dan points out,
Dan's right on target on both points, but I don't think he goes far enough.
- Drills aren't a basketball player's first, only, or most prominent experience with basketball. Drills come after a student has been sufficiently enticed by the game of basketball — either by watching it or playing it on the playground — to sign up for a more dedicated commitment. If a player's first, only, or most prominent experience with basketball is hours of free-throw and perimeter drills, she'll quit the first day — even if she's six foot two with a twenty-eight inch vertical and enormous potential to excel at and love the game.
- Basketball players aren't bored shooting foul shots. Long before "math teacher" was on my resume, I was a lanky high school basketball player trying to get his foul shooting above 50%. I'd shoot for hours but I wouldn't get bored, as Williams suggests I must have been. That's because I knew my practice had a purpose. I knew where that practice would eventually be situated. I knew it would pay off in a game where I'd be called to the line for a shot that had consequences.
- Our (national) approach to teaching math is to avoid doing anything requiring actual thought or creativity until we've convinced as many students as possible that there's nothing worth thinking about in math; eventually, the few "survivors" get to do actual mathematics. If we taught English that way, it would be all grammar and spelling until senior year, when a lucky few would get to read actual poetry. Right now, the problem isn't that the U.S. curriculum doesn't have enough skill practice; it's that it doesn't consist of much besides skill practice.
- As Dan suggests, what *makes* skills important is their placement within the big picture of doing actual mathematics. Being able to multiply accurately isn't worth a darn--especially in the age of calculators--if you don't have good ideas about when and what to multiply (and when and what not to). We can be excited when kids know their times tables, the way we might be excited about a kid being able to spell really well, or lift something really heavy, but by itself, multiplying is not a really useful skill except in the context of multiplication tests.
If we taught athletes the way we teach mathematics, there would be no Kobe Bryant, although there would be a handful of strikingly eccentric bodybuilders who would get together to run around, lift heavy things, and engage in odd activities that make no sense to the rest of us couch potatoes.
Tuesday, June 26, 2012
Konstantin Kakaes's argument essentially boils down to three elements:
- Many math teachers who use technology do so ineffectively.
- Many students who are taught mathematics with calculators don't have a good grasp of basic arithmetic, or other "traditional" mathematics.
- A few teachers teach math successfully without calculators to some students, where "successfully" is defined as "according to traditional criteria."
Therefore, teaching math with calculators results in students who can't do math.
In fact, I would argue that calculators have made possible one of the great sea changes in mathematics education in the western world. In 1960, there was a dropout rate of 27%, and of the 73% of U.S. students who graduated from high school, very few took any math beyond geometry or trigonometry, which was still a course offered at many colleges. In 2009, there was a dropout rate of 8.1%, and of the 91.9% of U.S. students who graduated from high school, something like 50% (77% in 2004) had trigonometry or higher. Put differently: we are now in a world in which about half of U.S. students are expected to learn substantial amounts of advanced algebra and trigonometry before graduating from high school. Technology makes it possible, as great teachers like my friends John Benson and Natalie Jakucyn, to name two, have shown, to increase students' access to higher mathematics. With technology, it's possible for a student who doesn't know how to add fractions to learn what a derivative is, what it means, and what you can do with it--and how to let a computer do the computations that he needs to use the derivative in an actual application.
If you learn how to multiply 37 by 41 using a calculator, you only understand the black box. You’ll never learn how to build a better calculator that way.
Besides the inaccurate alarmism of his example--even calculator-active elementary school curricula like Trailblazers and Connected Math expect that students will be able to multiply two-digit numbers by hand (and explain their computations, a higher cognitive skill than was demanded in my day)--he proves too much. If it were necessary to teach everyone a skill to ensure the supply of programmers able to create machines in the future, we would presumably also teach the following:
- Computation of decimal approximations of square roots, using the "two digits at a time" method found in old textbooks, or continued fractions, or the Babylonian method.
- Approximations of transcendental functions using Taylor series.
- Approximations of trigonometric functions using matrix multiplication (faster and better for most angles, actually).
- Approximations of transcendental functions using tables and linear interpolation.
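Of the methods in that list, the Babylonian method is the easiest to sketch. Here's a rough Python version (the function name and stopping tolerance are my own choices, not from any textbook): start with any positive guess and repeatedly average it with n divided by the guess.

```python
def babylonian_sqrt(n, rel_tol=1e-12):
    # Babylonian (Heron's) method: the iterate converges quadratically
    # to sqrt(n) for any positive n and positive starting guess.
    x = n if n >= 1 else 1.0
    while abs(x * x - n) > rel_tol * n:
        x = (x + n / x) / 2.0   # average the guess with n / guess
    return x

print(babylonian_sqrt(2.0))   # roughly 1.41421356...
```

Each iteration roughly doubles the number of correct digits, which is why the "two digits at a time" pencil method lost out to this one inside actual calculators.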
Kakaes does raise some valid points. Technology by itself isn't the sole indicator of high-quality math instruction: there's lots of low-quality math instruction with technology (just as there's lots of low-quality math instruction without technology). Promethean boards do not raise outcomes by themselves. And (as Sugata Mitra says, in his argument for why technology can be transformative for the poorest children), for kids in affluent districts (which, by his standards, is much of the U.S.), the marginal impact of any given new technology might be quite low. But as Zalman argues, the question is never "should we teach students to use technology?", but "which technologies should we teach students to use?" That question--not this fake "can Johnny learn math with a calculator?" question--is where the discussion should start.
Update: A version of this article is now a posted response on Slate!
Monday, June 25, 2012
- Except for testing days, each day's class has an objective: something students are to know, understand, or be able to do that they didn't know or understand, or weren't able to do nearly as well the previous day. Content is not simply repeated from year to year or even day to day.
- The day's objective is clearly related to overall course goals, to local and national standards, and to what the students already know.
- Assessment is frequent and individual: at least a couple of times per week, students' work is collected (or assessed in class) individually to find out what they know, to give them feedback on what they need to improve, and to adjust instruction. Assessment tasks are nontrivial, especially on formative assessments.
- The mathematics presented each day is correct.
- The mathematics is presented each day in a reasonably logical order. When asked, a teacher can explain the motivation for each step, not just what the step is.
- The time allocated to mathematics is spent actually doing mathematics, not graduation practice, watching a non-math movie, or taking a break. (I don't make these up, but please don't ask me to name names.)
- The time allocated to mathematics is spent with the students either (a) doing mathematics, (b) listening to brief explanations about how to do mathematics, or (c) asking each other or the teacher questions about mathematics. ["Will this be on the test?" is not a question about mathematics.]
Tuesday, June 19, 2012
- If everyone already knows it, we don't cover it. If most but not everyone knows it, we don't cover it as a class; I provide an opportunity to review or relearn the idea either as a pull-out, or as part of a larger task, or as one option among many activities. If a few people know it, I give them something else to do while the rest of the class learns.
- I figure out ahead of time and at the time how many people can already do what I want them to do, and how well, so I can do item #1.
- I help students connect each day's lesson to course themes and to material from other courses (and also to real life). I make sure they know why that day's lesson is important.
- Class time is for work that can't be done at home: because it involves high-level problem-solving, demands that they share ideas, requires higher-level thinking that they can't do independently, or because they need guided practice or reinforcement that isn't available online or with a worksheet with answers.
- Class time is not for watching movies, reading, lecture, or even whole-class discussion, unless I expect ideas to build on each other, students to critique each others' ideas, etc. In particular, we don't "report out" results unless there's something to do or discuss from the reports. Time I spend talking is, as far as I can tell, mostly time wasted.
- When the assigned work is done, I always have more math for students to work on, so that the ones who get done early don't sit around getting bored. This strategy also decreases the incentive for students to rush through the material without thinking carefully.
- "Who here knows about Gardner's Multiple Intelligences? [or Bloom's Taxonomy, or the Common Core Standards for Mathematical Practice, or ... ]" The teachers are all over the place, but it's hard to tell exactly what each teacher knows, because "Who knows about ____ ?" is not exactly a fine-grained assessment.
- "Everyone do this worksheet reviewing the different Intelligences [levels of Bloom's//Standards for Mathematical Practice//etc.]." Now there's no opportunity for choice or differentiation. When you're done, you just wait around until everyone else is finished. There's no immediate follow-up task.
- "Let's watch this TED talk about ___ ". Or: "Read this article about ___ " I could have done this at home. In fact, I love watching TED talks at home, so I'd be happier watching it at home and using the class time productively. Also, what am I supposed to get out of the TED talk or reading? Why not tell me up front? Occasionally, the TED talk actually shows a process or strategy that would be hard to summarize, like this one by Dan Meyer.
- "Let me tell you about ... " What is my take-away? What do I need to get out of this? Could I just read what you're planning to say, and then spend group time doing some task related to the take-away?
- "Well, we can wind up at many different places with this ... " Obviously, we're all professionals, and so it's hard to tell someone they're flat-out wrong. But it is important to have standards and to communicate them clearly. If the point of the activity is to rewrite a textbook activity to achieve a certain aim, and the proposed rewrite doesn't achieve it, then whom does it help to let the activity slide by?
In this area, I think we teachers are our own worst enemies. In my classes, one norm is that everyone is wrong at least sometimes, and that correcting an error or misconception is an important job for everyone. But how often do we sit in PD and watch someone say something that is clearly incorrect without challenging it? Maybe one reason why in-school or departmental PD is more effective (at least for me) than inter-school PD is that we're only willing to challenge people we know well and trust.
- What is the purpose of this lesson?
- Why is this important to learn?
- In what ways am I challenged to think in this lesson?
- How will I apply, assess, or communicate what I've learned?
- How will I know how good my work is and how I can improve it?
- Do I feel respected by other students in this class?
- Do I feel respected by the teacher in this class?
Update: In this morning's PD, taking my own maxim to heart, I challenged a teacher who said that you have to go over every homework problem and every answer to in-class tasks. I said that what I see is that when the teacher "goes over" problems and answers, the energy level and engagement drop dramatically, and that time spent going over homework is mostly wasted. Immediately another teacher said "Where do you teach? Oh, Payton." I stuck to my guns, pointing out--as we've discussed on this blog--that no matter what high school we're at, if more than 20-30% of the students can't do a particular homework problem, then that problem probably wasn't appropriate for independent work, and that if that's the situation for many problems on the assignment, then the assignment itself was too hard. But they'd already stopped listening....
Tuesday, May 15, 2012
I'm at Intel ISEF, the world's largest HS math, science, and engineering fair, as part of a team from Chicago trying to increase the number of students doing math research in high school. On Monday, I walked down the math aisle and talked to the five or six kids I found setting up their projects--cool ideas, like using fractal dimension to quantify the distinction between cancerous and noncancerous cells, or linking quadratic residues to the number of digits in the base b expansion of 1/p. And I asked them three questions: Do you do a lot of math contests? Have you ever done a summer math program? Are you part of a math circle? All eighteen answers: no.
Now I like to think that doing these extracurricular math activities makes kids more interested in math and more likely to investigate mathematical ideas on their own, but this makes me wonder. Some hypotheses in search of more data...let me know what you think and I'll report back after more extensive conversations on Thursday.
- Maybe the kids who do math research are doing it because they don't have any other outlets for their math interest, as a sort of last resort.
- Maybe the kids who do lots of other math stuff simply don't have the time or energy to do math research, because the other math stuff they do consumes all that time and energy.
- Maybe the kids who do lots of other math stuff are also the kids driven (literally) to lots of other "Race to Nowhere" activities, so that they don't have time or energy to explore and play, not because of the math they do, but because of everything they do.
- Maybe the kids who are driven to do high-quality research are exactly the kinds of curious loners unlikely to be attracted to math contests and summer math programs (ugh! other people!) in the first place.
Sunday, April 15, 2012
This question assumes that you teach by asking interesting questions and allowing students to figure out the math. If you do not teach in this mode, I am not sure this question applies. I am also not sure I have much to offer, because I think good teaching starts by recognizing that our job is to ask interesting questions and to help students figure out the math behind the questions. I think a teacher helps by watching, listening and letting the students do the work. Telling a student how it works does not work.
But I have talked about that before. This idea is new, I think. P.J. reminded me of this yesterday so I thought I would write about it before he did.
It is our job.
Sunday, March 18, 2012
As we worked through chapter one with the students, we both learned a lot about how to make calculus meaningful and understandable to our students. We had decided to collaboratively write tests, and so we did. The first test covered the ideas in chapter 1 rather well, we thought, and we were eager to see how students performed.
To say the first test was a disaster would be an understatement. There was not one student who even tried to work all of the problems. Many students left three or four blank. Ron and I looked at the test, and it measured what we thought was important, but because of the difficulty it measured little or nothing and created considerable discontent among our students. And these were the best students we had. We adjusted the grading scale on the test, admitting that we had totally failed to create a fair test, and promised that we would do better for the next chapter.
In considering how to fix the problem, we had several ideas. One was to break the chapters into two tests. We rejected that because it would mean giving up too much instructional time for formal assessment; we would spend the year writing and grading tests. Another idea was to only test the easy stuff. We rejected that approach as not being in the best interests of our students. We were committed to making the class a rich mathematical experience that matched the wonderful way Ostebee and Zorn were allowing the course to unfold. Then one morning Ron came to me with one of the best ideas I have encountered. And I resisted at first. I offered reasons why it was a bad idea. After all of that, I agreed to try it. I have never looked back.
Ron's idea was to make a collaborative problem part of the test. The original plan was that, the day before the test, we would give each student a collaborative problem. This collaborative problem would consist of several parts and would encompass the main ideas of the chapter. The problem would allow us to address some of the subtle concepts or more complicated aspects of the material covered in the chapter. Each student would be required to work with at least two other people, and each person would turn in one copy of the team's perfect, well-organized, well-written solution when the student came in to take the in-class part. It would count as about one seventh of the test grade.
After a couple of tries, we modified the conditions a bit. In particular: we gave students the collaborative problem several days before the test, and always so it would be in their hands over a weekend. We posted the collaborative problem on our websites to allow absent students to access it. We helped shy students find collaborators. We encouraged collaboration with students in the other sections of BC Calculus.
The collaborative problem turned into one of the best educational experiences in my career. Most of the work was correct, making the solutions easy to grade. The in-class part was now manageable, but we were assessing all of the material. More importantly, students were learning mathematics while taking a test. What an amazing experience! Ron and I listened to their conversations as they worked the problems in the math lab, we heard them talking at the beginning of class, and we flat-out asked them about their experiences with the collaborative problem. All of what we heard was exciting. There was an outcry when we did not offer a collaborative problem for a test we gave on a half chapter. We had found a way to help them consolidate their ideas before they took the in-class part, so the in-class tests were also done better.
We began to notice that the collaborative groups became entities in themselves. The students started to get together just to study. Some of them met during a common free period every day in the math lab or the cafeteria and went over homework questions. Parents praised us for the learning they saw taking place in their homes as students gathered. Collaborative groups compared results with other collaborative groups. Students made friends and learned that learning is not an isolated activity.
I also taught a class in Multivariable Calculus and another in Linear Algebra for those students who had finished BC Calculus and had not graduated. There was a clamor for a collaborative problem in that class, so I happily agreed to their demands.
One moral to the story: when you try something new, it rarely works the way you intended. But the thing to do is not to throw it out--then you wind up doing the same things you were dissatisfied with before, the ones that led you to make the change. Rather, you need to try to identify what's not working and fix that piece. By iterating several times, you come up with a new strategy that does accomplish your goals--and even gets you places you hadn't realized you wanted to be!
By the way, our students performed better than ever on the AP test and have ever since. It is nice when one's observations are validated by an outside source.
Monday, February 6, 2012
An idea I took from George Milauskas takes the "only two points for a correct answer" one step further. George would give his students "magic dots" (= the circles punched out from paper with a 3-hole punch) which they could redeem--in combination with a single point--for any formula required on a test. That is, a student could ask "What's the midpoint formula?" (even though requests for that particular formula make me cringe) and get it, for a mere one point. George's reasoning, which persuaded me instantly, was as follows:
- If a student just writes down the correct formula--but does no other work--he or she will usually get one point of "partial" credit. Most of the problem consists in using or thinking about the formula, not regurgitating it. (That leads to a whole nother conversation, about problems that are just about formulas.)
- Giving a student the formula allows the student to demonstrate what he or she can do with it.
- Leaving the student stranded without the formula means that he or she can't demonstrate anything.
Monday, January 30, 2012
At the top of every paper I intend to grade is the sentence:
"You must show enough work so that I can reproduce your results."
I have found that this phrase solves a lot of practical problems about how much work a student needs to show. It also helps when a student solves a problem in an unexpected way. It allows a student to use technology intelligently as long as I am given enough information so that I can get the same result using the same technology. It enables me to effectively evaluate the error a student has made and give the appropriate amount of credit for the work.
This leads to another aspect of grading a student's work, one that developed carefully over the thousands of problems I graded. I give points for correct mathematics. I do not take off points. When a student looks at the number of points earned for a particular problem, the student will see a +10, not a -2. The student got the 10 for doing several things correctly that would have led to the correct answer. Unfortunately, the student made an arithmetic error when computing part of the answer and so did not earn the 2 points allotted for determining the correct answer. It should be noted that one consequence of this grading policy is that a bald answer without supporting work will get 2 of 12 possible points.
Consequently, it takes a long time to grade a set of tests. The effects, however, make it worth the effort. I have learned a lot of mathematics by following the work of a student who took an unexpected path. But more importantly, grading lots of papers teaches the grader what sort of misunderstandings students have, and that in turn enables the grader to try to find ways to eliminate those errors next time around. At the very least, one learns to warn students about a mistake students often make on a particular kind of problem. At the most, the teacher can modify the problems used to teach the concept so the class will make the mistakes early on, exposing and hopefully eliminating the potential for those mistakes to occur.
Another consequence of grading this way is that the grader learns a lot about the particular habits of mind of the student. This information in turn is very helpful when the student or the parent wishes to know what can be done to improve. The teacher is aware of a lack of organization, or not checking work, or poor computation skills, or careless marking of a diagram, or any of the other habits that interfere with success, and can communicate those habits to the student and parent.
Comments to the student congratulating a clever move will have more impact than criticism about a bad move. A teacher can take the opportunity to point out specifically to a student what might have resolved the error. The feedback is up-close and personal and has impact.
But perhaps the most important aspect of grading papers this way is that it conveys a sense of value to students. It implicitly tells them that the important part of mathematical work is the process. They will get points for failed attempts if those attempts are appropriate and reasonable. They will get more points for a clever observation than for remembering a few steps from a previous problem. The students will learn that mathematics is about logical reasoning and making connections more than mathematics is about remembering rules and following them carefully. And that is a big deal.
Wednesday, January 11, 2012
Perhaps the worst part is that frequently grades become the goal and overshadow learning. Teachers sometimes use grades to coerce students to comply. Teachers do things like not giving credit for math work done in pen, even if the work is exemplary. Students use grades as an excuse for not doing work. Students will not investigate a problem further because they know it will not be tested.
And yet, grades persist. I think they can play an important role in education. Students and parents need feedback about their accomplishments, they need advice about how to improve, and they benefit from legitimate praise and criticism.
Allow me to share two very personal experiences that are vivid memories of my grade school days. For two years in a row, I was told by the music teacher that I could not sing. (She was right.) I was then instructed to mouth the words as the other children sang. As I grew up, I realized that I love music, but I can't sing, so I don't try. I have been told by several people that anyone can learn to sing. I don't believe them, because I was told at a very young age that if I sang, it would interfere with the singing process in class. I can't do singing. The second experience happened in eighth grade when my social studies teacher was trying to explain inflation. I raised my hand and asked a question about the consequences of inflation. He looked at me and said, "You must be really good at math." Fifty years later, I remember that moment in class; I have devoted my life's work to mathematics.
Both teachers were assessing my work. Neither assessment had anything to do with grades. My grades in school were never very good because I frequently didn't comply with the teacher's wishes about how to do the work (I really liked doing math with a pen, for example). But I did learn and so consider myself to have had a good education in spite of all those C grades.
When I started teaching, I had the good fortune to be in situations where I had the freedom to decide how grades were going to be given. I spent a lot of time thinking about it, tried many plans, and eventually hit on one or two that worked. I think my grading schemes helped me become a better teacher. I would like to share some of my thoughts about grades in the next few blogs.
I have always thought that in order to earn a good grade, a student should demonstrate knowledge of the subject and the ability to apply that knowledge in a variety of situations. That means that each assessment should include some routine exercises to see if the student has learned the basic material, problems right out of the book with different numbers. The students should also be expected to do problems similar to some of the really hard problems that we did in class. And the student who wishes to earn an A ought to demonstrate the ability to apply the information from the unit, as well as from the entire course studied so far, to a new situation. So, my tests are usually about 50% routine problems, 25% difficult problems similar to problems they have worked on, and 25% original problems. I give them one period to work on the problems unless they are legally entitled to more time. I carefully look at their work and give credit for correct mathematics relevant to the problem.
Solving a difficult problem takes time, and there is often a certain amount of luck involved. A promising approach may lead to a dead end through no fault of the problem solver, while an equally promising approach may work just right. If we intend to assess our students' success as problem-solvers, we must ask them to solve problems on tests, not just do exercises. That in turn influences how we associate a grade with work done.
I would like to know who decided that 95% was the benchmark for excellent work, and what sort of work they were thinking of. Nothing I can think of that is reasonably difficult can be done correctly 95% of the time. The best baseball players who ever lived were considered successful if they could get on base 40% of the time. Most players don't even come close, because hitting a baseball is very hard to do. A 30% success rate is outstanding.
One of the national standards of excellence in the U.S., the Advanced Placement test, gives only five grades: 5, 4, 3, 2, or 1. In order to get a 5, a student needs to get approximately 72% of the test correct. That level of excellence will often earn college credit for the course in question.
P.J. reminded me about Dr. Paul Sally's rubric: "If you're getting 50%, you're doing well." Dr. Sally taught Honors Analysis at the University of Chicago. No one ever accused Dr. Sally of having low standards.
That brings me to another thought about grades. My first two years I computed grades two ways. I kept track of total points earned by students, and I also assigned a letter grade to each assessment and then used the letter grades to determine a final grade. It became apparent that the letter grade method was far superior in two respects. First, students always seemed to have a feeling for where they stood. Second and more important, the letter system was fairer: the final grade was less influenced by one really bad test, and I could assign points to problems without having to make the total come out to a pre-specified number.
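To make the contrast concrete, here is a minimal sketch of the letter-grade method. The point values and cutoffs are hypothetical, my own stand-ins rather than the scheme I actually used; the idea is just that averaging letter grades dampens the effect of one bad test.

```python
# A hypothetical sketch of the letter-grade method: each assessment
# gets a letter, letters are averaged on a point scale, and the
# average is mapped back to a final letter. Values are illustrative.

GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

def final_grade(letter_grades):
    """Average per-assessment letter grades and map back to a letter."""
    avg = sum(GRADE_POINTS[g] for g in letter_grades) / len(letter_grades)
    # Hypothetical cutoffs for the final letter.
    for letter, cutoff in [("A", 3.5), ("B", 2.5), ("C", 1.5), ("D", 0.5)]:
        if avg >= cutoff:
            return letter
    return "F"

# One bad test ("D") among mostly A's still averages to a B,
# so a single off day doesn't sink the whole semester.
print(final_grade(["A", "A", "B", "A", "D"]))  # -> B
```

Under a raw-points total, that one "D" test could drag a strong student below an arbitrary cutoff; under the letter system it is just one data point among several.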
Another principle I followed without exception: every problem was worth the same number of points. I didn't want students to have to worry about how much time to spend on this one or that one because one problem was worth more points than another. I did want my students to look the problems over and work the ones they were most confident about first. Since I established the cutoff points, this was a very good and fair policy.
The point is that the elephant is there, and it matters. Grades influence our effectiveness as teachers, and we must spend considerable time and effort working out systems that enhance our teaching, emphasize the things we think are important, inform parents and students about the quality of their work, and are even-handed and fair.
Next time I will share with you some specific things that worked for me. Until then, please reflect on the grading policy that you are using and how it alters your ability to teach mathematics. No matter what you think, it does make a difference.