Class Reports: ENGL 1213, Sections 015, 023, and 040–4 March 2016

After addressing questions from the previous class meeting, discussion continued to explicate concerns supporting the Infog and rehearsed materials treating the giving and receiving of feedback in advance of the next due date.

Students are reminded of the following due dates:

  • Infog PV (in hard copy as class begins on 7 March 2016)
  • Infog RV (via D2L before class begins on 11 March 2016)
  • Infog FV (via D2L before class begins on 25 March 2016)

Regarding meetings and attendance:

  • Section 015 met as scheduled, at 1030 in Classroom Building Room 217. The class roster showed 16 students enrolled, unchanged since the previous report. All attended, verified through a brief written exercise. Student participation was good.
  • Section 023 met as scheduled, at 1130 in Classroom Building Room 121. The class roster showed 17 students enrolled, unchanged since the previous report. Fourteen attended, verified through a brief written exercise. Student participation was adequate.
  • Section 040 met as scheduled, at 0830 in Morrill Hall Room 206. The class roster showed 15 students enrolled, a decline of one since the previous report. Ten attended, verified through a brief writing exercise. Student participation was restrained.
  • Three students attended office hours, which were abbreviated in favor of another appointment.

Sample Annotated Bibliography: Why Not Have a Rhetoric Requirement among UL Lafayette PhD Students in English?

What follows is an annotated bibliography such as my students are asked to write for the AnnBib assignment during the Spring 2016 instructional term at Northern Oklahoma College. As is expected of student work, it treats an issue of its writer’s curriculum. It also adheres to the length requirements expressed to students (they are asked for a two-paragraph introduction that contextualizes the project and outlines the methods for selecting materials, as well as six annotative entries, exclusive of heading, title, and page numbers; the sample below meets those requirements), although its formatting will necessarily differ from student submissions due to the differing medium. How the medium influences reading is something well worth considering as a classroom discussion, especially for those students going into particularly writing- or design-intensive fields.

Please note that the bibliography below treats the same topic addressed in earlier sample assignments written throughout the Spring 2016 instructional term; it is, in effect, an expanded version of the T&S assignment required of students at Oklahoma State University, for which a sample assignment has been provided (here). Some materials will be duplicated from the earlier version.

I hold a doctorate in English from the University of Louisiana at Lafayette (ULL). Earning it obliged me to take many hours of coursework, draft and defend a dissertation, and sit for a battery of comprehensive exams. Those exams are described by the ULL English Department as helping to prepare students for teaching and research–but most of the teaching that I have done since leaving ULL has been in rhetoric and composition, and the training the exams promote and assess did not require me to make much if any formal study of that area of English studies. That a combination of logistical and disciplinary factors contributes to the lack of a rhetoric requirement in a battery of generalist English exams seems likely, but more investigation is needed to ascertain whether it is so.

Conducting such an investigation suggests looking at discussions of comprehensive exams, generally, as well as of the disciplines in which the specific exams being discussed might exist. Those discussions are easily found in a number of disciplinary-education journals, such as are available through the Oklahoma State University library and through subscriptions to publications of organizations invested in English education, such as the National Council of Teachers of English. A few prominent results of searches through such materials are related below; they, and other sources yet, argue for a dominant format of comprehensive exams and a view of the field into which graduates of the ULL English PhD program will enter, highlighting some of the disconnections between how the program prepares its students and the career paths they are likely to follow.


Hassel, Holly, and Joanne Baird Giordano. “Occupy Writing Studies: Rethinking College Composition for the Needs of the Teaching Majority.” CCC 65.1 (September 2013): 117-39. Print.

The article argues against perceptions among writing scholars that devalue the work done by most writing teachers, who work in two-year and open-admission institutions. After defining a number of its terms, the authors note that studies of such teachers are not proportionate to the work they do. They continue with discussions of the two-year teaching environment, the focus of writing scholarship on four-year and elite institutions and the concomitant problems associated with community colleges, and what benefits would accrue to teachers and scholars from a reconsideration of such positions as they outline. The article concludes with a few recommendations of how to proceed, namely the support of research by and about two-year and open-admission institutions.

Of particular importance in the article is a quotation from a Chronicle of Higher Education article by Schmidt, one noting that non-tenure-track faculty account for more than three quarters of teaching positions (119). While it does not discuss the comprehensive exam as a topic in itself, it does point towards the ubiquity of writing instruction by those with graduate degrees in English, irrespective of their specialization; it is a point the article reiterates. As such, it helps provide context and support for the need for graduate students in English to take exams and concomitant training in rhetoric, since it is from rhetoric that the practice of teaching writing emerges.


Nolan, Robert E. “How Graduate Students of Adult and Continuing Education Are Assessed at the Doctoral Level.” Journal of Continuing Higher Education 50.3 (Fall 2002): 38-43. PDF file.

The article encourages discussion of the forms comprehensive examinations in doctoral coursework should take to increase completion rates and more accurately reflect the expectations placed on those who pursue advanced graduate study. After explicating then-current demographic data among graduate students, the piece states its purpose and summarizes previous studies of the topic. It then details its methods–noting the group surveyed and describing the survey used. Findings follow, identifying major trends about the timing, format, and intentions of comprehensive exams. The article concludes with notes that indicate no consensus among programs about how to hold comprehensive exams and what they ought to do.

The article may suffer somewhat from concerns of age, and repeated mentions of what various things “presumably” do weaken some of the rhetorical force of the piece. The brevity of the piece may also be of some concern. The article does, however, provide a useful summary of tendencies in how examinations have been conducted at the doctoral level across disciplines. In that regard, the article offers a useful starting point for discussion of any topic treating comprehensive exams at the doctoral level. As background material for framing investigation of the comprehensive exam, then, it is worth reading.


Palmquist, Mike, and Sue Doe. “Contingent Faculty: Introduction.” College English 73.4 (March 2011): 353-55. Print.

Introducing a special issue of College English they edit, Palmquist and Doe note the centennial of the National Council of Teachers of English, the quarter-century anniversary of the Wyoming Resolution (one of the major statements regarding contingent faculty, those members of college and university faculties with the least protection), and the many statements made by scholarly societies calling for improvements to the working conditions contingent faculty face. They then lay out the contents of the special issue of the journal, summarizing three articles and three discussion forums that occupy the following pages.

Of particular note in the piece are cited comments from the American Association of University Professors and a committee of the Modern Language Association of America. Combined, the comments speak to the prevailing conditions faced by those who will teach English. Most postsecondary teaching positions are contingent, and most composition teaching is done by contingent faculty. The chance that a graduate of any English PhD program will teach composition off of the tenure track is therefore substantial, making preparation for that work all the more important–and its lack all the more curious.


Ponder, Nicole, Sharon E. Beatty, and William Foxx. “Doctoral Comprehensive Exams in Marketing: Current Practices and Emerging Perspectives.” Journal of Marketing Education 26.3 (December 2004): 226-35. PDF file.

The authors identify and explain then-current and -emerging practices regarding doctoral comprehensive exams in United States marketing programs. After offering a general introduction to the topic, the authors review available literature on the topic, focusing largely on Bloom’s taxonomy. Methodology follows, with a survey described and the process of its dissemination, completion, and interpretation articulated. Results detailing the perceived purposes of doctoral comprehensive exams, structures of those exams, and changes to the latter are presented, and less traditional emergent structures–an “original papers” approach, an “extended take-home,” a “specialist,” and a “no exam–no paper” approach–are explicated. Results are discussed, and a conclusion suggesting that the traditional closed-book format of comprehensive exams will be less common in marketing schools finishes the article.

Although Ponder, Beatty, and Foxx discuss marketing, specifically, many of their assertions are likely applicable to other fields. Despite common perceptions of advanced education as liberal and socially deconstructive, academia tends to remain wedded to older structures, so the “traditional” examination structures discussed in the article are likely to be represented in other fields and programs entirely. If such points of correspondence are in place, then others may also be, making the conclusions reached by the article at least provisionally applicable to other areas of advanced education. Also notable in the article is the concern voiced by some faculty that changes to traditional exam structures “are depriving students of the opportunity to integrate a broad range of knowledge at a deeper level than they will ever have an opportunity to achieve again” (234), offering an unusual perspective on the comprehensive exam that may well bear examination.


Schafer, Joseph A., and Matthew J. Giblin. “Doctoral Comprehensive Exams: Standardization, Customization, and Everywhere in Between.” Journal of Criminal Justice Education 19.2 (July 2008): 275-89. PDF file.

The authors describe general tendencies regarding treatment of comprehensive exams by programs awarding doctoral degrees in criminal justice. The need for systematic study of criminal justice programs is articulated before the doctoral comprehensive exam is contextualized. Exam procedures are described and historicized. Study methods–largely focused on conducting surveys and interviews–are described and findings articulated, the latter focusing largely on the forms the exams take. Findings are subsequently discussed, identifying and commenting on the patterns that emerge from the study and treating relative merits of several exam formats. The article concludes with questions about the ongoing utility of curricular standards to both the discipline and the broader community the discipline serves.

Although Schafer and Giblin treat the discipline of criminal justice, specifically, they ground their article in information deriving from studies of other fields–notably including rhetoric–and assert that their own discipline largely follows the structures of others. The conclusions they reach about their own field can therefore plausibly be generalized back to those other fields, so that what they say about comprehensive exams can be applied to areas other than their own. Additionally, their relatively recent (to this writing) article allows their conclusions to be taken as more timely, and their relatively extensive bibliography offers useful insights as to further reading.


Scott Shields, Sara. “Like Climbing Jacob’s Ladder: An Art-Based Exploration of the Comprehensive Exam Process.” Arts & Humanities in Higher Education 14.2 (April 2015): 206-27. PDF file.

Following an epigraph taken from Scripture, Scott Shields explains that her piece is a reflection on the experience of doctoral comprehensive exams. The reflection is framed in terms of the general shape and purpose of the doctoral exam, described as having ritual aspects that are not clear to graduate students who will soon take such tests; the author notes desiring to explicate the ritual through narration in reflection. Excerpts of exam questions and answers, as well as visual and verbal materials taken from personal journal entries relating to the exam experience, follow; reflections on individual exam components accompany each set of materials. Ultimately, the author arrives at the notion that the value of the comprehensive exam is in its facilitation of individual focus on personal growth leading to shared experiences.

While the piece is unconventional, it is of value in that it offers an inside perspective on comprehensive exams; most treatments of the subject look at them from the perspective of having long completed them. The anecdotal and idiosyncratic nature of the article may read to some as lessening the effectiveness of the piece as a whole, but that same individualistic narration does much to remind readers of the deeply personal nature of the comprehensive exam. It bespeaks the overall engagement with subject matter inherent in the comprehensive exam, making it all the more important that the exercise is directed to good effect.

Class Report: ENGL 1213 at NOC, 2 March 2016

After addressing questions from previous classes, discussion asked after student impressions of the Explore, of which the FV was to have been submitted electronically before class began. Class then proceeded to lengthy discussion of the AnnBib.

A report of results from the recent survey is available here.

Students are reminded of the following due dates:

  • AnnBib RV (due online before class begins on 23 March 2016)
  • AnnBib FV (due online before class begins on 30 March 2016)
  • ResPpr RV (due online before class begins on 13 April 2016; note that materials for the assignment are not yet developed as of this writing)

The section met as scheduled, at 1300 in North Classroom Building Room 311. The roster listed seven students enrolled, a (surprising) loss of one since the previous class meeting. All attended, verified by a brief written exercise. Student participation was reasonably good.

No students attended office hours since the previous class meeting.

Class Reports: ENGL 1213, Sections 015, 023, and 040–2 March 2016

After addressing questions from the previous class meeting, discussion continued to examine the Infog and concerns supporting it. Noted was the presence of a sample Infog, available here.

A report of results from the recent survey is available here.

Students are reminded of the following due dates:

  • Infog PV (in hard copy as class begins on 7 March 2016)
  • Infog RV (via D2L before class begins on 11 March 2016)
  • Infog FV (via D2L before class begins on 25 March 2016)

Regarding meetings and attendance:

  • Section 015 met as scheduled, at 1030 in Classroom Building Room 217. The class roster showed 16 students enrolled, unchanged since the previous report. Fifteen attended, verified through a brief written exercise. Student participation was good.
  • Section 023 met as scheduled, at 1130 in Classroom Building Room 121. The class roster showed 17 students enrolled, unchanged since the previous report. Fourteen attended, verified through a brief written exercise. Student participation was adequate.
  • Section 040 met as scheduled, at 0830 in Morrill Hall Room 206. The class roster showed 16 students enrolled, unchanged since the previous report. Twelve attended, verified through a brief written exercise. Student participation was adequate.
  • No students attended office hours.

Report of Results from the Spring 2016 Week 7 Surveys

Following up on practices identified as useful during the Fall 2015 term, I asked students after their impressions of the course near the middle of the Spring 2016 instructional term at both Oklahoma State University and Northern Oklahoma College, where I teach as of this writing. Students were asked to fill out a survey administered anonymously online via Google, one offering a grade reward to encourage participation. Initial announcements of the event are here for Oklahoma State University students and here for Northern Oklahoma College students, and the surveys were open 22-29 February 2016, so that students had ample time to address them. The surveys were substantially similar, asking the same questions but about assignments specific to the individual courses.

During the survey period, 49 students were enrolled in my sections of ENGL 1213: Composition II at Oklahoma State University: 16 in Section 015, 17 in Section 023, and 16 in Section 040. Eight were enrolled in the section of ENGL 1213: Composition II I teach at Northern Oklahoma College. Recorded were a total of 40 responses: 12 from Section 015, 14 from Section 023, seven from Section 040, and seven from the section at Northern Oklahoma College. While sample sizes are relatively small (particularly in the case of Section 040), they do represent large portions of the classes surveyed, making them useful in adjudging overall student impressions of how the teaching is received.
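As a rough check on the figures above, the response rates can be computed directly. The following is a minimal sketch: the section labels are mine, the per-section counts are those reported above, and the percentages are derived arithmetic rather than figures from the survey instrument itself.

```python
# Enrollment and survey-response counts per section, as reported above.
# (Labels are shorthand; percentages are computed, not reported.)
sections = {
    "OSU Section 015": (16, 12),
    "OSU Section 023": (17, 14),
    "OSU Section 040": (16, 7),
    "NOC section": (8, 7),
}

for name, (enrolled, responded) in sections.items():
    print(f"{name}: {responded}/{enrolled} = {responded / enrolled:.0%}")

total_enrolled = sum(e for e, _ in sections.values())
total_responded = sum(r for _, r in sections.values())
print(f"Overall: {total_responded}/{total_enrolled} "
      f"= {total_responded / total_enrolled:.0%}")
```

By this reckoning, roughly seven in ten enrolled students responded overall, which supports the claim that the samples represent large portions of the classes surveyed.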

As in the survey conducted at the midpoint of the Fall 2015 term at Oklahoma State University, University students were asked to identify the section of enrollment before answering open-ended questions about events in the class. Students at Northern Oklahoma College were only asked the open-ended questions, the fact of a single section being taught negating the need to ask after the section of enrollment. The specific questions posed to University and College students, as well as summaries of the answers provided, appear below, followed by comments about impressions and implications thereof.

The report makes use of nomenclature common to classroom discussion and documentation. Reference thereto can be verified on the course syllabi and calendars for the courses surveyed, available for the University here and the College here.

Oklahoma State University

The questions posed to my students at Oklahoma State University were

Answers to the First Question

The T&S and its components registered with respondents as the most helpful assignments. Nine students indicated that the whole T&S was of use, while three each indicated that the PV and RV thereof were particularly useful. Reported reasons repeatedly attest to how informative the assignment is, not only in terms of constructing an annotated bibliography, but also in terms of helping students learn more about their fields of study and the professions they hope to enter.

The StratRdg and its components also attracted favorable comment. Five students indicated that the StratRdg RV was particularly helpful, with three more commenting on the value of the StratRdg FV, and one each noting the PV and the overall assignment as of use. Reasons reported include a feeling of marked increase in writing ability and an increased attention to the details of reading and reading processes.

Additionally, three students indicated that both RVs were of benefit, citing (along with others who identified individual RVs and who are not included in this count) the access to instructor feedback as useful. Two each indicated that all assignments have been helpful and that none of them have been. Answers to the former cite remediation of lingering deficiency and focused insight into a field of study; answers to the latter cite assignments as being unhelpful for language development and as being of disfavored writing types.


Answers to the Second Question

The StratRdg and its components attracted no small amount of negative comment, as well, with seven identifying the assignment as a whole as problematic. Five thought the PV was particularly bad, three the RV, one condemned both, and one other the FV. Reasons reported include lack of experience with the instructor and the type of assignment, as well as difficulty in selecting appropriate readings.

The T&S also attracted negative attention. Eight respondents disdained the PV, three the assignment as a whole. Reported reasons include difficulty in finding appropriate sources. It is notable that many respondents neglected to offer reasons for their answers when abjuring the T&S.

Further, four students reported that no assignment stood out as least helpful, with more than one noting that all are useful and informative; another notes difficulty with assignment sheets across all assignments. One other student cites both PVs as of little value, joining comments on the individual PVs in noting that feedback derived from other students is of limited or no utility, whether due to lack of expertise or unwillingness to critique peer work.


Answers to the Third Question

Oddly, twelve responses indicate no perceived deficiencies in instructional performance, although one seemingly jokingly notes a desire to have doughnuts brought to class. One student commented on the exceptional quality of instruction, and one commented that instruction is “detailed and entertaining.”

Ten students reported a desire for more detailed explanation, whether on assignment sheets or in comments on returned assignments. One such asks for better explanations in terms of simpler language (and two other responses speak to a desire for less elevated vocabulary in the class).

Additionally, four responses ask for some form of workshopping, whether in the sense of displaying examples of current students’ work in class or using other examples as classroom models. One response each speaks to a desire to have class notes posted online, a desire for more attentiveness to student questions, a desire for more gradual scaffolding of assignments, and a desire for more systematic use of the course textbook.


Answers to the Fourth Question

Also oddly, fifteen responses indicate no desire to see any specific practices discontinued. Several comment favorably about the quality of instruction, with one noting the instructor has “done an excellent job as [the students’] teacher.”

Three responses each treat grading, classroom tangents, and vocabulary issues. Comments about the first note both the perceived harshness of grading and the unusual nature of the grading scale applied in class. The second sees mixed commentary, with one student noting overall enjoyment of the tangents but annoyance at the distraction they represent; another joins a different comment (regarding a desire not to be held in class for the full scheduled meeting) in remarking that early release would be possible without the tangents. The third sees expressions of annoyance at word choice perceived as a deliberate move to increase the class’s difficulty.

Two students speak to perceived impoliteness (whether in terms of bluntness or in terms of ridicule of students), with a possible third moving that way. The third remarks upon yelling in class, but the response does not elaborate on whether it is thought impolite or merely an annoyance.

A few unique responses were also offered. One student comments that an assumption of good prior teaching seems to underlie instructional practice. Another notes that a repeated attendance question becomes difficult to answer from day to day. Yet another complains of spending the majority of class time treating assigned readings and student progress on papers rather than offering and discussing examples of the kind of work to be done on assignments.


Answers to the Fifth Question

Eight responses cite instructor humor as something worth having continue in class. Notably, one of the responses to the question came signed, with a student self-identifying in the explanatory comments.

Seven other responses note the level of instructor involvement with the goings-on in class as something to maintain. Reported reasons include the “prodding” that helps students open up and thereby get direction, as well as an expressed appreciation for examples and early indications of assignment materials.

Another six responses attest to the desirability of continuing the explanatory practices at work in the course. Responses report appreciation of the detail included on assignment documents, as well as the organization and layout thereof.

Three more students express appreciation for the examples provided by the instructor. Another two comment favorably on the alignment of the major assignments around a single project, with yet another two noting that discussions in the class should keep happening. One student each cites provision of completion grades, lecture notes given in type rather than script, peer review, and elevated vocabulary as desirable to see continued. Yet one more appears to have answered the wrong question, speaking to a practice to see begun rather than one to be maintained.


Northern Oklahoma College

The questions posed to my students at Northern Oklahoma College were

Answers to the First Question

Three responses note the utility of the drafts, whether combined (one) or focused on the Explore Draft (two). The latter was cited as being helpful in terms of how the course’s ongoing research project was clarified by the exercise.

One response noted the usefulness of the Prop RV, although it did not elaborate on reasons. Another noted that no assignment was “helpful,” although each has been instructive.

Further, one response commended practices but did not address the question posed. Another appears to have addressed a question posed for another survey entirely.


Answers to the Second Question

No patterns of answers to the question emerge, as each of the seven responses differs. The closest thing to a pattern is that three responses comment aspersively on the drafts; one each rebukes the Explore Draft, the Prop Draft, and the drafts as a unit. The first is not explained, but the second cites a lack of understanding early in the class, and the third notes the unhelpfulness of peer comments.

Three responses move away from those solicited. One notes difficulty with the assignment sheets given, although it acknowledges that students’ “questions are always answered.” Another explicitly repeats an assertion from the first question, that no assignment is “helpful,” but that all are instructive. A third expresses annoyance with illegible comments. Finally, one answer notes that the FV has no value besides the grade issued for it.


Answers to the Third Question

Interestingly, five students note having nothing to identify as an unmet need in the class. The remaining two ask for more organized lectures and notes, focusing on providing and clarifying data rather than querying student opinion on and response to the material.


Answers to the Fourth Question

Also interestingly, five responses identify no practice as needing to be discontinued from the classroom. The remaining two responses must be discarded due to a survey formatting error.


Answers to the Fifth Question

Two responses indicate a general approval of current practices. Another indicates that time for outside-of-class discussion of work is good to see. Yet another indicates that the humor deployed in the classroom is appreciated. Still another asks for continued detailed discussion of assignments. A response approves of remediation, and a final response notes that the development of rapport is a beneficial thing.


Impressions and Implications

Relatively high completion rates suggest that the responses provided have value for the classes as they are currently being taught. There is always room for error, of course, and the classes have proceeded since the surveys were drafted and opened; were the surveys administered again now, answers might well differ. But the data provided reads as reliable enough to use, at least for now.

I have worked to address some of the complaints that have been raised. Those about grading, in particular, have received attention in “Some Remarks about Grading.” (Indeed, some of the comments made in the document are responses to early reviews of survey responses.) Problems with tangents were identified previously, and I have worked on them, but it seems that I still have more work to do. Problems with “overshooting,” whether in harshness of grading or difficulty of working, I have also addressed, as noted in the Conclusions and Implications section of the Fall 2015 midterm report linked earlier in this report.

Yet other complaints confuse me. As noted above, several students ask for more detailed explanations of assignments. My assignment sheets already run several thousand words in length each, well in excess of the word-counts requested of students’ assignments. They already divide major writing tasks into series of smaller pieces of work to do. How much more detail can be provided in such circumstances is not clear to me–although I will note that on many of the occasions when I ask if there are any questions, I am met with silence from my students.

Many at both the University and the College, however, seem generally to approve of how their class is conducted, which seems to argue in favor of what I do in the classroom, as well as arguing against earlier online commentaries and more formal assessment instruments that have treated my teaching. Why this would be is unclear to me; I have not made much adjustment to the way I do things in the classroom in the past few terms, although I have been sure to obey the dictates expressed to me by those in authority over me. Whatever the reason, however, I appreciate that I seem to be appreciated by my current students; it is not something that is often true for those at the fronts of classrooms. I hope to be able to continue to do well through the rest of the term and into future work.


Sample Infographic Portfolio Assignment: Context to Answer a Question about the Comprehensive Exams for UL Lafayette PhD Students in English

What follows is an infographic portfolio such as students are asked to write for the Infog assignment during the Spring 2016 instructional term at Oklahoma State University. As is expected of student work, it treats an issue of its writer’s curriculum. It also adheres to the length requirements expressed to students. (They are asked for a statement of goals and purposes of approximately 500 words, exclusive of heading, title, and page numbers, as well as for a hand-drawn and a digital-original version of an infographic; the statement below is 498 words long when judged by those standards.) Its formatting, however, will necessarily differ from student submissions due to the differing medium. How the medium influences reading is something well worth considering as a classroom discussion, especially for those students going into particularly writing- or design-intensive fields.

The sample below treats a question voiced in the earlier “Sample Developing a Topic and Locating Sources Assignment: Questions about the Comprehensive Exams for UL Lafayette PhD Students in English,” continuing the same project being treated for the benefit of students in classes at both Oklahoma State University and Northern Oklahoma College. Because it is a continuation of the same project, some phrasing will likely be similar to that of sample assignments written for both sets of students.

Earning a doctorate in English from the University of Louisiana at Lafayette (ULL) obliged me to sit for a series of comprehensive exams. Like my contemporaries, I had to take four five-hour tests over such areas of inquiry within English studies as English languages and literatures to 1500, early modern English literature, American literature to approximately 1900, and contemporary fantasy literatures in the US and UK. The idea behind such exams, according to how the ULL Department of English describes them on its website as of 23 September 2015, is to facilitate both teaching and continued research, ensuring that students who complete the program adequately reflect the generalist orientation of the program. Yet while the expressed requirements for the exams note that coursework across literary areas and periods must be taken, many of the students in the program are not obliged to range outside of literary studies, and most graduates will teach outside those areas–namely in rhetoric and composition. Why this is so is unclear; such a lack of clarity merits investigation.

Finding out why the PhD program in English at ULL acts as it does prompts looking at that school's peer institutions. The most immediate peers are the other members of the University of Louisiana system, given that they are unified at their higher administrative levels and therefore operate under many of the same constraints and restrictions that affect ULL. The problem with doing so is that ULL is the only institution in the University of Louisiana system that offers a PhD in English. A slightly less immediate set of peers is the membership of the Sun Belt Conference, the schools against which ULL competes in athletic events; conferences tend to be organized to bring comparable schools into competition, so the member institutions are likely to be fairly comparable. Too, more Sun Belt schools offer PhD programs in English than do schools in the University of Louisiana system, providing more useful data for comparison.

In composing the infographic, I sought to connect my presentation to the materials being presented. Consequently, I strove initially to mimic the color scheme and typefaces used by my focal material, the description of the comprehensive exams hosted by the ULL English Department; the idea therein was that the treatment of the subject would reflect the subject itself. The infographic title was put at the top center to foreground it most prominently. Information was positioned to move from broad to narrow, leading readers down the page along a single center line. Alignment and grouping were handled via tables, into which the increasingly narrow data sets were placed for ease of reading; while the focal information was placed near the bottom of the infographic, the plethora of earlier information serves to balance the document. Citations follow typical online practice, providing URLs for the materials collected and grouping them under a single heading at the bottom of the page; they need to be present but unobtrusive, and the placement and smaller typeface conduce to that end.

Raw-Form Infographic

G. Elliott Spring 2016 ENGL 1213 Infog Raw Form

Digital-Original Infographic

G. Elliott Spring 2016 ENGL 1213 Infog Digital Original Form

Class Report: ENGL 1213 at NOC, 29 February 2016

After addressing questions from previous classes, discussion treated continued work on the Explore, of which the RV was returned to students electronically over the weekend. It also treated concerns of the assigned readings and citation.

The survey noted last week is closed. Results will be reviewed and reported.

Students are reminded of the following due dates:

  • Explore FV (due online before class begins on 2 March 2016)
  • AnnBib RV (due online before class begins on 23 March 2016)
  • AnnBib FV (due online before class begins on 30 March 2016)

The section met as scheduled, at 1300 in North Classroom Building Room 311. The roster listed eight students enrolled, unchanged since the previous class meeting. Six attended, verified by a brief written exercise. Student participation was good.

No students attended office hours since the previous class meeting.

Class Reports: ENGL 1213, Sections 015, 023, and 040–29 February 2016

After addressing questions from the previous class meeting, discussion turned to further consideration of the Infog, as well as carrying over talk of previously assigned readings and materials meant to supplement them.

The survey noted last week has closed. Results will be reviewed and reported.

Students are reminded of the following due dates:

  • Infog PV (in hard copy as class begins on 7 March 2016)
  • Infog RV (via D2L before class begins on 11 March 2016)
  • Infog FV (via D2L before class begins on 25 March 2016)

Regarding meetings and attendance:

  • Section 015 met as scheduled, at 1030 in Classroom Building Room 217. The class roster showed 16 students enrolled, unchanged since the previous report. All attended, verified through a brief written exercise. Student participation was good.
  • Section 023 met as scheduled, at 1130 in Classroom Building Room 121. The class roster showed 17 students enrolled, unchanged since the previous report. Twelve attended, verified through a brief written exercise. Student participation was adequate.
  • Section 040 met as scheduled, at 0830 in Morrill Hall Room 206. The class roster showed 16 students enrolled, unchanged since the previous report. Fourteen attended, verified through a brief writing exercise. Student participation was marginal.
  • No students attended office hours.

Some Remarks about Grading

Questions arise from time to time in my classes about the way in which I assess work submitted to me by students. My practices are at variance with many others’, as I well know, and the divergence sometimes leads to confusion. Some explication therefore suggests itself as worth conducting. As such, I describe my grading practices below, with comments about the practical and philosophical underpinnings and a short conclusion following. Notes are also appended.

The remarks below are provisional and represent my thoughts at the time of writing. I will doubtlessly return to them in the future; I expect that, if and as I continue to teach, my opinions of how to assess my students’ work will change. They should, certainly, if I am paying attention to things as I ought to be.

Grading Practices

Most of my classes assess student performance on assigned writing tasks. There are some few assignments that follow different forms, usually quizzes to ascertain whether or not students have done the assigned reading and paid attention to prevailing classroom discussions. Even those, however, are usually written–or at least involve writing–rather than being only or even primarily multiple-choice, completion, or fill-in-the-blank.

Assessing writing is always problematic, given the demands and expectations of students, programs, faculty in other programs, and other stakeholders. Although I acknowledge problems in doing so, I tend to apply explicit rubrics to my grading, identifying a number of categories in which I mean to assess students and assigning different weights thereto; I also offer representative questions that indicate what I mean in noting each category, trying to make explicit the expectations I have of my students’ work. The individual categories will vary by the assignment, although a few are relatively consistent across courses and tasks. For example, because I often operate under programmatic requirements for page length and word count, I explicitly note students’ adherence to those quantities. I also generally look at whether students have followed formatting standards I make explicit to the students and whether their usage adheres to a particular style manual–almost always that of the Modern Language Association of America, given my membership therein and disciplinary commonplaces.

More fluid categories focus on informational content and quality, explanatory thoroughness, organizational principles, and the like; the phrasing and standards of each category depend on the specific assignment. Different assignments act in different genres, and different genres have different conventions they are expected to follow. They should be assessed differently therefore, and I work to reflect that in the categories I include in the rubrics I use to assess papers.

A category I try to include in assignments is one I label as “Engagement Developed.” I offer it as a sort of extra-credit component of my assignments, one that is admittedly subjective (even more so than most writing assessments). I typically define it as identifying whether the paper offers something unusually compelling or innovative for the level of class being taught (so lower-division classes are more likely to see it awarded than upper-division or graduate courses, given appropriately different expectations of performance based on prior training and experience), although I am relatively open about what that “compelling or innovative” can be, and I tend to reward a sincere attempt even if it is not entirely successful. (Note)

Each category–standing, fluid, or extra-credit-like–is framed in a binary of sufficient proficiency and its lack; I tend to err on the side of success when I have questions about whether or not it has been achieved, particularly in earlier assignments and earlier versions of later assignments. Success or failure in each category results in an adjustment of the grade by a number of “steps,” as judged against a common grading scale I use throughout my classes. On that scale, I start all papers at a grade of C, assuming base-line competence from my students and asserting that base-line competence as a criterion-referenced average performance. The final grade for each paper–or each component of an assignment, as sometimes happens–results from the total number of steps changed, as outlined in the table below.

Reported Grade    Steps Change         Numerical/Percentile Equivalent
A+                +7 or more (Note)    98
A                 +6                   95
A-                +5                   92
B+                +4                   88
B                 +3                   85
B-                +2                   82
C+                +1                   78
C                 +0                   75
C-                -1                   72
D                 -2                   65
F                 -3 or more           55
0 (Zero)          Special (Note)       0
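For readers who prefer a procedural statement of the scale, the step-based system described above can be sketched as a short Python function. This is a sketch only: the scale values come from the table, while the category names in the example call are hypothetical illustrations rather than an actual rubric.

```python
# Sketch of the step-based grading scale: each rubric category is judged
# as a pass/fail binary; passes add a step, failures remove one, and the
# net total maps onto the letter scale, starting from a baseline C.

# (letter grade, minimum net steps, percentile equivalent), highest first
SCALE = [
    ("A+", 7, 98), ("A", 6, 95), ("A-", 5, 92),
    ("B+", 4, 88), ("B", 3, 85), ("B-", 2, 82),
    ("C+", 1, 78), ("C", 0, 75), ("C-", -1, 72),
    ("D", -2, 65), ("F", -3, 55),
]

def grade_from_steps(steps: int) -> tuple[str, int]:
    """Map a net step count onto a letter grade and percentile."""
    for letter, minimum, percent in SCALE:
        if steps >= minimum:
            return letter, percent
    return "F", 55  # three or more steps down is an F

def grade_paper(category_results: dict[str, bool]) -> tuple[str, int]:
    """Every paper starts at C (+0); each category adds or removes one step."""
    steps = sum(1 if passed else -1 for passed in category_results.values())
    return grade_from_steps(steps)

# Hypothetical example: two categories passed, one failed -> one net step up.
print(grade_paper({"length": True, "formatting": True, "usage": False}))
# → ('C+', 78)
```

Zero grades are handled outside this mapping, since they mark non-submission or academic-integrity violations rather than assessed performance.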

Category scores are not reported in isolation; giving only indications of success or failure is not helpful for students who seek to improve the quality of their work, whether out of a sincere desire for betterment or out of a local and immediate desire for higher grades. On my formal assignments, those which prompt formal rubrics, I offer not only an indication of whether the category has been successfully addressed, but notes about why I have arrived at my assessment thereof. I also offer overall comments at the end of the assessment rubric, a filled-out copy of which I append to each student’s work as it is returned. The comments with each category address issues specific to it, while those at the end of the rubric encapsulate my more readerly responses to the work. (I flatter myself a good reader after three degrees in English.) Students therefore receive comments at multiple levels of readership, which they can then use to improve their future writing if they are inclined to do so.

Return to top.

Practical Reasoning

My grading practices have developed as a result of the institutional pressures under which I have worked since beginning to teach college classes in 2006. I attest to some of that history on “About Geoffrey B. Elliott,” here, but what does not show up clearly on that combination of resume and CV is the amount of work done in each position. As a graduate student on campus, I taught one or two classes each term–while taking my own. Once off campus, in New York City, I taught five classes a term until my promotion to full-time status, at which point I began to teach six or more–sometimes for as many as twenty-four hours of coursework–in addition to working on my doctoral dissertation; most of my classes started with thirty or more students in them. In Oklahoma, I have carried a 4/3 teaching load, supplemented by a fair bit of outside labor. In brief, I have done more classroom work than many others who teach at the collegiate level, and it is to such pressures that my grading has responded.

Because I have carried the teaching loads I have, I have had to learn to compress my grading. Rubrics facilitate that compression, although, as I note above, they do have problems. (Note) Similarly facilitating is the reduction of categories to acceptable or unacceptable completion, although I admit that the reduction is also a problem. Marvell’s comment to his coy mistress would apply here, however; I have never had world enough or time. The practices do, however, have the advantage of being easy to understand. Calling attention to specific categories allows for targeted effort and improvement, and identifying successful completion registers decently enough with the students I have taught over years of teaching. Too, such things tend to read well with administrators outside my own teaching areas, and while there are certainly problems with accountability cultures, multiple audiences are involved in any communicative act, and those that are known or can be guessed at should be addressed as much as can be done.

Some students have noted that my category-specific comments are not always helpful in that they are not exact. This is particularly true for those comments treating adherence to standards of usage articulated in whatever style manual prevails in the class. And it is true that I do not perform a line-by-line proofreading of student papers, which is what the students who make such comments generally reference and expect. There are instructional reasons I abstain from doing so. When I have done so, students have tended to address only those things explicitly marked, “fixing” their papers at a surface level without revising for the more important concerns of structure and content noted; I see no point in “correcting” words that I expect to be changed or removed. Too, the students who attend to the comments I leave inevitably ask questions about specifics, even when I leave more detailed line-item comments; since they will come to me in any event–which is preferable, in all honesty–I see no point in laying out an initial effort that will be repeated for those who seek to benefit therefrom, or offering it to those who will not respond favorably. Finally, if I do all of the work of proofreading my students’ papers, they will not learn how to do so for themselves; they have not yet in years of having others proofread their papers for them, as I see in their work and as many have told me mouth to ear.

It will likely be noted also that the grading scale I use assigns numbers ending in 8 to -plus grades. That is, a C+ translates to a 78, a B+ to 88, and an A+ to 98. (I have always regarded D+ as an oddity, and F+ seems inane.) This is, in part, to minimize arguments. Were I to assign numbers ending in 9 to -plus grades, I have no doubt that I would be inundated with requests for “just one more point”; in the past, when I have graded on a point-build system, I have gotten such requests from students earning 59, 69, 79, 89, and 99. The answer was almost always “no,” but having to handle the requests took up time that could have been better spent on other things–such as helping students to improve their performance rather than the rating assigned to performance already completed and observed. With -plus grades ending in 8, however, such requests are vastly reduced, freeing up time for lesson planning, assessment, reflection, and the work I do outside the classroom in the hopes of excelling inside it.

Return to top.

Philosophical Reasoning

That I give some thought to the principles underlying my pedagogical practice is, I think, a good thing. It is also something I have discussed before, as attested here and in the reports of course surveys I post in this webspace (here, here, here, here, and here as of this writing). More targeted discussion of those principles seems in order, hence what appears below.

The most important idea undergirding my practice is that I mean to help my students. My own educational background and classroom experiences tell me that students benefit from having some explicit guidance, which my grading practice provides. It does not prescribe in detail what students are expected to do, however, allowing them room to try approaches I had not considered previously, which is good, as well as obliging them to consider critically what they must do in addressing the tasks I set before them, which is also good. And if I do grade somewhat strictly, as a binary system tends to make happen, I also maintain that if there is no challenge, there is no reason to improve–and improvement is eminently desirable.

Something else to consider is the purpose to which education is directed. The present document does not admit of enough space to treat the many arguments about what that purpose is or ought to be. Those I have seen tend to push for education to prepare students for the workforce or for active and engaged citizenship. My grading practices help prepare students for either case. Workplace writing does tend to work in terms of success or failure, and common genres of workplace writing fairly narrowly prescribe what documents should look like and contain. Active and engaged citizenship demands that people attend closely to forms and figure out what is being asked of them, much as my grading tends to do. So if education is directed toward either of those ends, the way I assess student work befits the end goal. (Note)

It will be noted also that the regular grading scale in my classes (as distinct from that imposed by the institutions that employ me) caps at A+, which I tend to define as 98 points on a 100-point scale. That the number ends in an 8 is simply an artifact of my usual grading pattern, put in place because grades ending in 9 tend to prompt pleas for “just one more point.” That the number is not 100, however, has attracted some comment and so bears a bit of explanation.

In the classes I have taught and continue to teach (as of this writing), most of the grade comes from writing. There are some few other assignments given, usually completion grades of one sort or another, but the bulk of grading derives from what I see in the writing my students do. At the beginning of each term in my more writing-intensive classes (such as Composition I at Oklahoma State University and Composition II at both Oklahoma State University and Northern Oklahoma College), I make the comment to my students that writing can always be improved. Typically, I do so with a joking reference to Shakespeare; the Bard always plays well in English classes. But even couched in jest, the core idea holds: Writing can always be better. Those of us who write professionally struggle with the idea continually; the writing arrives at a point of “good enough to send off” rather than an actual “good enough,” and even pieces that are published to great acclaim are often viewed later by their writers as deficient in one way or another.

Because the writing can always be improved, it is necessarily not perfect. To my mind, a grade of 100 out of 100 signifies perfection. Since no writing can be perfect, no writing can earn a grade that signifies perfection; awarding one would be inaccurate at best and diminish the value of perfection at worst. This does not mean that the writing cannot be excellent, for which reason I offer a grade of A+ to my students despite what standard grading scales at my institutions allow, but there is a difference between excellence and perfection. And in such a case, the 100 remains in place as an ever-elusive goal, something towards which to strive despite its unattainability, asymptotically approached but never actually reached–because getting better is a big part of the point of it all, if not the whole of it.

Return to top.

Conclusion

I am aware that the way I work is idiosyncratic, emerging from my specific circumstances of work and background over many years. (Indeed, some of the underpinnings of how I assess students’ work now can be found in notes I took and projects I submitted during my undergraduate years, when I sought teaching certification.) My practices may well not work for others; I have, in fact, received complaints about my methods, largely based upon their differences from the practices of others. But they work for me, allowing me to look over student work and identify areas where students need support and additional reflection, as well as areas where they are doing well, so that they can address the former and enhance the latter. My practice does offer me something to use when institutional pressures act upon me, as they do upon most who teach at one point or another, but it does more to help those students who want to do more than go through the motions of credentialing, and that benefit is what matters.

Return to top.

Notes

One example that comes to mind is a student who wrote an Evaluation Essay for Oklahoma State University’s Composition I class as I taught it. The student’s paper looked at articles treating gun control issues, and the student framed the discussion through a target-practice metaphor, ultimately identifying the focal article as on target but outside the grouping of the other articles’ shots. The framing is perhaps awkward, but it still represents a sincere and thoroughgoing attempt to unify a paper via a consistent and thematically appropriate metaphor. It received points for developing engagement. Return to text.

The “or more” arises from a fluke in an earlier grading rubric, under which students could earn more than seven steps above C. Return to text.

Grades of zero (0) are awarded only for non-submission or violations of prevailing academic integrity principles. Return to text.

The same can be said, of course, for any practice. Each is a human product, and so each is necessarily flawed. The issue becomes one of negotiating the problems more or less successfully, whatever the practice. Return to text.

If the end goal is not one of the two noted, as it may well not be, then I am still confident that my practice will address what it needs to. How it would do so is beyond the scope of the current discussion, however. Return to text.

Return to top.

Class Reports: ENGL 1213, Sections 015, 023, and 040–26 February 2016

After addressing questions from earlier classes, discussion asked after student thoughts on the T&S, which was due online before class began. Assigned readings and the Infog were treated thereafter.

The survey noted in class on 22 February 2016 can still be found here: http://goo.gl/forms/IWo6IHesrq. Students who confirm completion before the beginning of class time on Monday, 29 February 2016, will receive an A grade on the relevant assignment.

Students are reminded of the following due dates:

  • Infog PV (in hard copy as class begins on 7 March 2016)
  • Infog RV (via D2L before class begins on 11 March 2016)
  • Infog FV (via D2L before class begins on 25 March 2016)

Regarding meetings and attendance:

  • Section 015 met as scheduled, at 1030 in Classroom Building Room 217. The class roster showed 16 students enrolled, unchanged since the previous report. Fourteen attended, verified through a reading quiz. Student participation was adequate following the quiz.
  • Section 023 met as scheduled, at 1130 in Classroom Building Room 121. The class roster showed 17 students enrolled, unchanged since the previous report. Twelve attended, verified through a brief written exercise. Student participation was good, if distracted.
  • Section 040 met as scheduled, at 0830 in Morrill Hall Room 206. The class roster showed 16 students enrolled, unchanged since the previous report. Eight attended, verified informally; students in attendance were awarded bonus points. Student participation was good.
  • Two students attended office hours.

Attendance figures from 24 February 2016 have been recorded, as well. The efforts of Prof. Michael J. Beilfuss are appreciated.