Questions arise from time to time in my classes about how I assess the work students submit to me. My practices are at variance with many others’, as I well know, and the divergence sometimes leads to confusion. Some explication therefore seems worth offering. Accordingly, I describe my grading practices below, followed by comments on their practical and philosophical underpinnings, a short conclusion, and appended notes.
The remarks below are provisional and represent my thoughts at the time of writing. I will doubtlessly return to them in the future; I expect that, if and as I continue to teach, my opinions of how to assess my students’ work will change. They should, certainly, if I am paying attention to things as I ought to be.
Grading Practices
Most of my classes assess student performance on assigned writing tasks. There are some few assignments that follow different forms, usually quizzes to ascertain whether or not students have done the assigned reading and paid attention to prevailing classroom discussions. Even those, however, are usually written–or at least involve writing–rather than being only or even primarily multiple-choice, completion, or fill-in-the-blank.
Assessing writing is always problematic, given the demands and expectations of students, programs, faculty in other programs, and other stakeholders. Although I acknowledge problems in doing so, I tend to apply explicit rubrics to my grading, identifying a number of categories in which I mean to assess students and assigning different weights thereto; I also offer representative questions that indicate what I mean in noting each category, trying to make explicit the expectations I have of my students’ work. The individual categories will vary by the assignment, although a few are relatively consistent across courses and tasks. For example, because I often operate under programmatic requirements for page length and word count, I explicitly note students’ adherence to those quantities. I also generally look at whether students have followed formatting standards I make explicit to the students and whether their usage adheres to a particular style manual–almost always that of the Modern Language Association of America, given my membership therein and disciplinary commonplaces.
More fluid categories focus on informational content and quality, explanatory thoroughness, organizational principles, and the like; the phrasing and standards of each category depend on the specific assignment. Different assignments work in different genres, and different genres carry different conventions they are expected to follow. They should therefore be assessed differently, and I work to reflect that in the categories I include in the rubrics I use to assess papers.
A category I try to include in assignments is one I label as “Engagement Developed.” I offer it as a sort of extra-credit component of my assignments, one that is admittedly subjective (even more so than most writing assessments). I typically define it as identifying whether the paper offers something unusually compelling or innovative for the level of class being taught (so lower-division classes are more likely to see it awarded than upper-division or graduate courses, given appropriately different expectations of performance based on prior training and experience), although I am relatively open about what that “compelling or innovative” can be, and I tend to reward a sincere attempt even if it is not entirely successful.Note
Each category–standing, fluid, or extra-credit-like–is framed in a binary of sufficient proficiency and its lack; I tend to err on the side of success when I have questions about whether or not it has been achieved, particularly in earlier assignments and earlier versions of later assignments. Success or failure in each category results in an adjustment of the grade by a number of “steps,” as judged against a common grading scale I use throughout my classes. On that scale, I start all papers at a grade of C, assuming base-line competence from my students and asserting that base-line competence as a criterion-referenced average performance. The final grade for each paper–or each component of an assignment, as sometimes happens–results from the total number of steps changed, as outlined in the table below.
Reported Grade | Steps Change      | Numerical/Percentile Equivalent
A+             | +7 or more (Note) | 98
A              | +6                | 95
A-             | +5                | 92
B+             | +4                | 88
B              | +3                | 85
B-             | +2                | 82
C+             | +1                | 78
C              | +0                | 75
C-             | -1                | 72
D              | -2                | 65
F              | -3 or more        | 55
0 (Zero)       | Special (Note)    | 0
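The mapping above can be sketched in code. This is a minimal illustration of the step-counting scheme as described, not the author’s actual tooling; the function name and data structure are hypothetical, and the special grade of zero (reserved for non-submission or academic-integrity violations) is handled outside the step scale.

```python
# Illustrative sketch of the steps-to-grade scale described above.
# Each entry: (minimum net steps from the C baseline, reported grade,
# numerical/percentile equivalent). Names here are hypothetical.
GRADE_SCALE = [
    (7, "A+", 98),
    (6, "A", 95),
    (5, "A-", 92),
    (4, "B+", 88),
    (3, "B", 85),
    (2, "B-", 82),
    (1, "C+", 78),
    (0, "C", 75),   # every paper starts here, at zero steps
    (-1, "C-", 72),
    (-2, "D", 65),
]


def grade_from_steps(steps: int) -> tuple[str, int]:
    """Map a net step adjustment to a (letter, number) grade.

    A paper begins at C (0 steps); category successes add steps and
    failures subtract them. Grades of zero are a special case handled
    separately and are not produced by this scale.
    """
    for minimum, letter, number in GRADE_SCALE:
        if steps >= minimum:
            return letter, number
    return "F", 55  # -3 steps or worse


print(grade_from_steps(0))   # a paper with no net adjustment: ('C', 75)
print(grade_from_steps(4))   # four net successes: ('B+', 88)
print(grade_from_steps(-5))  # well below baseline: ('F', 55)
```

Note how the scale is capped rather than open-ended downward: any net change of -3 or worse collapses to the same F, just as the table indicates.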
Category scores are not reported in isolation; giving only indications of success or failure is not helpful for students who seek to improve the quality of their work, whether out of a sincere desire for betterment or out of a local and immediate desire for higher grades. On my formal assignments, those which prompt formal rubrics, I offer not only an indication of whether the category has been successfully addressed, but notes about why I have arrived at my assessment thereof. I also offer overall comments at the end of the assessment rubric, a filled-out copy of which I append to each student’s work as it is returned. The comments with each category address issues specific to it, while those at the end of the rubric encapsulate my more readerly responses to the work. (I flatter myself a good reader after three degrees in English.) Students therefore receive comments at multiple levels of readership, which they can then use to improve their future writing if they are inclined to do so.
Return to top.
Practical Reasoning
My grading practices have developed as a result of the institutional pressures under which I have worked since beginning to teach college classes in 2006. I attest to some of that history on “About Geoffrey B. Elliott,” here, but what does not show up clearly on that combination of resume and CV is the amount of work done in each position. As a graduate student on campus, I taught one or two classes each term–while taking my own. Once off campus, in New York City, I taught five classes a term until my promotion to full-time status, at which point I began to teach six or more–sometimes for as many as twenty-four hours of coursework–in addition to working on my doctoral dissertation; most of my classes started with thirty or more students in them. In Oklahoma, I have carried a 4/3 teaching load, supplemented by a fair bit of outside labor. In brief, I have done more classroom work than many others who teach at the collegiate level, and it is to such pressures that my grading has responded.
Because I have carried the teaching loads I have, I have had to learn to compress my grading. Rubrics facilitate that compression, although, as I note above, they do have problems.Note Similarly facilitating is the reduction of categories to acceptable or unacceptable completion, although I admit that reduction is also a problem. Marvell’s comment to his coy mistress would apply here, however, and I have never had world enough or time. The practices do, however, have the advantage of being easy to understand. Calling attention to specific categories allows for targeted effort and improvement, and identifying successful completion is something that registers decently enough for the students with whom I have had experience over years of teaching. Too, such things tend to read well with administration outside my own teaching areas, and while there are certainly problems with accountability cultures, multiple audiences are involved in any communicative act, and those that are known or can be guessed at should be addressed as much as can be done.
Some students have noted that my category-specific comments are not always helpful in that they are not exact. This is particularly true for those comments treating adherence to standards of usage articulated in whatever style manual prevails in the class. And it is true that I do not perform a line-by-line proofreading of student papers, which is what the students who make such comments generally reference and expect. There are instructional reasons I abstain from doing so. When I have done so, students have tended to address only those things explicitly marked, “fixing” their papers at a surface level without revising for the more important concerns of structure and content noted; I see no point in “correcting” words that I expect to be changed or removed. Too, the students who attend to the comments I leave inevitably ask questions about specifics, even when I leave more detailed line-item comments; since they will come to me in any event–which is preferable, in all honesty–I see no point in laying out an initial effort that will be repeated for those who seek to benefit therefrom, or offering it to those who will not respond favorably. Finally, if I do all of the work of proofreading my students’ papers, they will not learn how to do so for themselves; they have not yet in years of having others proofread their papers for them, as I see in their work and as many have told me mouth to ear.
It will likely be noted also that the grading scale I use assigns numbers ending in 8 to -plus grades. That is, a C+ translates to a 78, a B+ to 88, and an A+ to 98. (I have always regarded D+ as an oddity, and F+ seems inane.) This is, in part, to minimize arguments. Were I to assign numbers ending in 9 to -plus grades, I have no doubt that I would be inundated with requests for “just one more point”; in the past, when I have graded on a point-build system, I have gotten such requests from students earning 59, 69, 79, 89, and 99. The answer was almost always “no,” but having to handle the requests took up time that could have been better spent on other things–such as helping students to improve their performance rather than the rating assigned to performance already completed and observed. With -plus grades ending in 8, however, such requests are vastly reduced, freeing up time for lesson planning, assessment, reflection, and the work I do outside the classroom in the hopes of excelling inside it.
Return to top.
Philosophical Reasoning
That I give some thought to the principles underlying my pedagogical practice is, I think, a good thing. It is also something I have discussed before, as attested here and in the reports of course surveys I post in this webspace (here, here, here, here, and here as of this writing). More targeted discussion of those principles seems in order, hence what appears below.
The most important idea undergirding my practice is that I mean to help my students. My own educational background and classroom experiences tell me that students benefit from having some explicit guidance, which my grading practice provides. It does not prescribe in detail what students are expected to do, however, allowing them room to try approaches I had not considered previously, which is good, as well as obliging them to consider critically what they must do in addressing the tasks I set before them, which is also good. And if I do grade somewhat strictly, as a binary system tends towards having happen, I also maintain that if there is no challenge, there is no reason to improve–and improvement is eminently desirable.
Something else to consider is the purpose to which education is directed. The present document does not admit of enough space to treat the many, many arguments about what that purpose is or ought to be. Those I have seen tend to push for education to prepare students for the workforce or for active and engaged citizenship. My grading practices serve to help prepare students to face either case. Workplace writing does tend to work in terms of success or failure, and common genres of workplace writing do fairly narrowly prescribe what documents should look like and contain. Active and engaged citizenship demands that people attend closely to forms and figure out what is being asked of them, much as my grading tends to do. So if education is directed toward either of those ends, the way I assess student work befits the end goal.Note
It will be noted also that the regular grading scale in my classes (as distinct from that imposed by the institutions that employ me) caps at A+, which I tend to define as 98 points on a 100-point scale. That the number ends in an 8 is simply an artifact of my usual grading pattern, put in place because grades ending in 9 tend to prompt pleas for “just one more point.” That the number is not 100, however, has attracted some comment and so bears a bit of explanation.
In the classes I have taught and continue to teach (as of this writing), most of the grade comes from writing. There are some few other assignments given, usually completion grades of one sort or another, but the bulk of grading derives from what I see in the writing my students do. At the beginning of each term in my more writing-intensive classes (such as Composition I at Oklahoma State University and Composition II at both Oklahoma State University and Northern Oklahoma College), I make the comment to my students that writing can always be improved. Typically, I do so with a joking reference to Shakespeare; the Bard always plays well in English classes. But even couched in jest, the core idea holds: Writing can always be better. Those of us who write professionally struggle with the idea continually; the writing arrives at a point of “good enough to send off” rather than an actual “good enough,” and even pieces that are published to great acclaim are often viewed later by their writers as deficient in one way or another.
Because the writing can always be improved, it is necessarily not perfect. To my mind, a grade of 100 out of 100 signifies perfection. Since no writing can be perfect, no writing can earn a grade that signifies perfection; to award one would be inaccurate at best and, at worst, a diminishment of the value of perfection. This does not mean that the writing cannot be excellent, for which reason I offer a grade of A+ to my students despite what standard grading scales at my institutions allow, but there is a difference between excellence and perfection. And in such a case, the 100 remains in place as an ever-elusive goal, something towards which to strive despite its unattainability, asymptotically approached but never actually encountered–because getting better is a big part of the point of it all, if not the whole of it.
Return to top.
Conclusion
I am aware that the way I work is idiosyncratic, emerging from my specific circumstances of work and background over many years. (Indeed, some of the underpinnings of how I assess students’ work now can be found in notes I took and projects I submitted during my undergraduate years, when I sought teaching certification.) My methods may well not work for others; I have, in fact, received complaints about them, largely based upon their differences from the practices of others. But they work for me, allowing me to look over student work and identify areas where students need support and additional reflection, as well as areas where they are doing well, so that they can address the former and enhance the latter. My practice does offer me something to use when institutional pressures act upon me, as they do upon most who teach at one point or another, but it does more to help those students who want to do more than go through the motions of credentialing, and that benefit is what matters.
Return to top.
Notes
One example that comes to mind is a student who wrote an Evaluation Essay for Oklahoma State University’s Composition I class as I taught it. The student’s paper looked at articles treating gun control issues, and the student framed the discussion through a target-practice metaphor, ultimately identifying the focal article as on target but outside the grouping of the other articles’ shots. The framing is perhaps awkward, but it still represents a sincere and thoroughgoing attempt to unify a paper via a consistent and thematically appropriate metaphor. It received points for developing engagement. Return to text.
The “or more” arises from a fluke in an earlier grading rubric, under which students could earn more than seven steps above C. Return to text.
Grades of zero (0) are awarded only for non-submission or violations of prevailing academic integrity principles. Return to text.
The same can be said, of course, for any practice. Each is a human product, and so each is necessarily flawed. The issue becomes one of negotiating the problems more or less successfully, whatever the practice. Return to text.
If the end goal is not one of the two noted, as it may well not be, then I am still confident that my practice will address what it needs to. How it would do so is beyond the scope of the current discussion, however. Return to text.
Return to top.