Common Assignments and Evaluation Tools in College Composition and 11th Grade English

Elizabeth Johnston, MCC;  Scott Nash, Rush-Henrietta Senior High; Maria Brandt, MCC

Background

We are a collaborative team of English teachers.  Two of us teach College Composition and one of us teaches 11th grade English.  We are all concerned by our students’ lack of both the motivation and the skills required to read complex texts.  Deteriorating writing and reading skills are a nationwide concern, as described in a recent College Board report.  In Rochester, the issue is especially grave; see the New York Times report on a national study indicating that only 5% of RCSD graduates are considered college-ready.  In coming together to design common assignments and common evaluation tools, we hoped to address this concern at a microcosmic level (in our individual classrooms) and at a macrocosmic level (by aligning our curricula).

We chose to focus our study on teaching summary, since the ability to identify and articulate the main claim and subclaims of an essay is foundational to student success in a writing course. To ensure that we were using a common vocabulary, we co-designed a rubric to assess students’ ability to identify and communicate the main ideas of texts in formal summaries.

After doing preliminary research in rubric design and the Common Core, we chose to redesign a rubric Elizabeth Johnston was already using in her composition classrooms.  The new design includes a checklist that elaborates upon the terminology of the rubric itself (see appendix), and we aligned its language with that of the Common Core.  We also chose to give our students the rubric when we assigned the summary assignment and to have them reflect on the rubric as they wrote their summaries.  We planned to collect students’ summaries, as well as their self-evaluations of their progress, and determined we would identify recommendations for future teaching practices based on the research we collected.

To ensure that we were on the same page in using the rubric to measure student success, we met to calibrate.  We looked at several sample student summaries, scored them individually with the rubric, and then compared notes.  We were pleased to discover that we shared the same expectations.

We decided to use different essays for the students to summarize, largely because of the different contexts of our courses.  However, the essays we chose for the students to summarize were all college-level texts of about the same complexity in thought and language.

We met several times throughout the course of the semester to discuss our students’ progress.  What follows is an individual description of the action research as it came to fruition in each of the three classrooms.

Research Narratives

Classroom 1, College Composition (Online), Instructor Elizabeth Johnston

Pre-Research Reflection

As English 101 course coordinator, I have noticed over the course of the last few years that students’ writing abilities seem to be decreasing.  Working closely with our developmental studies department, I’ve also come firmly to believe that low writing ability is linked to low reading ability.  A few semesters ago, I radically revised my English 101 course to take a source-based approach; experiencing much success with it, I then led the charge to revise our curriculum along the same lines.  When I joined the CCTE, I was interested in how working with high school English teachers to create more of a sense of sequencing in our assignments might also impact the success and retention of students in college composition.

Inquiry Question

How does teaching integrated reading and writing, particularly through source-based writing approaches, impact students?

Summary 1: Rationale and Description of Assignment

I began the semester by assigning students chapters from Writing from Sources, 8th ed.  These chapters introduced students to critical reading, reviewing how to annotate, take notes, identify main claims and subclaims, and write a summary.  We also discussed why writing a summary is essential to any academic or career field.  We spent about two weeks practicing in class through group discussion and analysis of sample essays, including “The High Cost of Being Poor,” by Barbara Ehrenreich, “A Question of Degree,” by Blanche D. Blank, “Cuss Time,” by Jill McCorkle, “When Altruism Isn’t Moral,” by Sally Satel, and “The Dirt on Clean,” by Katherine Ashenburg.  Students also practiced with an exercise that provided them with paragraphs, about which they then wrote comprehensive (2-3 sentence) summaries.  Then they read Jonathan Malesic’s essay “How Dumb Do They Think We Are?” and, again in discussion, posted attempts at comprehensive summaries. After they had all attempted a summary and had discussed these attempts, I provided them with a sample “A” summary written by a student from the class.

In week 4, I assigned them Nicholas Carr’s article, “Is Google Making Us Stupid?,” with an assignment to write a paragraph summary of it.  I again provided them with a rationale for summarizing.  I provided them with the assignment; the rubric I would use to grade it (which used language now familiar to them after two weeks of practice); a worksheet to help them revise their first draft; a set of transitions they could use to create flow between sentences; and a checklist to use with the rubric (see appendix).  I asked students to submit a self-evaluation (see appendix) along with the summary.

The rationale for the self-evaluation is to move students toward understanding themselves as writers: to re-imagine writing as a process and themselves as active thinkers in that process. I ask students to identify where they think they fell on the rubric and how much they used the rubric to help them make choices about their drafts.  To motivate students both to revise and to complete the self-evaluation, I grade their effort (10% of the overall course grade).

Students had two class periods to complete this work.

Summary 1 Outcomes

Nineteen students completed the work.  Of these, five received A’s (exceeds), seven received B’s (approaches exceeds), four received C’s (meets), one received a D (approaches meets), and three received F’s (fails).

I was happy with the results of the summary, which were higher than in past classes, with the majority of the class meeting expectations; however, the students in this course came in writing at a higher level of ability.  Two, in fact, already possessed undergraduate degrees. It’s not clear, then, that the use of the rubric and process is the sole reason for this success.

I was also fairly happy with their revision process for a first assignment. Most of the students had worked to revise their drafts at least once.  The majority of these students made revisions largely to content and style; however, several worked diligently to revise at the levels of content and organization as well.

Summary as Part of Rhetorical Analysis

After students turned in the summary, they wrote a rhetorical analysis of Carr’s essay.  We spent a few more weeks on this essay.  This time, I provided them an opportunity to discuss Carr’s essay, something I wish I had done the first time around.

They turned in their analyses during the seventh week.  They had been encouraged to use the graded summary rubric and my comments to rethink their understanding of the essay.  Once again, I evaluated their ability to summarize (though now summary was one category of evaluation among many).

Fifteen students turned in an analysis. Of these, ten fell into the “exceeds” category, four fell into “approaches exceeds,” and one fell into “meets.” This was a marked improvement.  Of course, we had now spent several more weeks working on critical reading, and they were also able to use my feedback to improve their summaries.

Although I again asked students to provide a self-reflection, I did not ask them specifically about their summarizing process, so I have not included the results of the self-reflection here.

Summary as Part of Synthesis

The next assignment asked students to synthesize five articles on a topic.  I chose the articles for them and we discussed them in class.  The rubric this time did not assess summary specifically, but instead assessed content, with effective summary of five articles being a key component of effective content.

Sixteen students turned in the multiple-source essay.  Although not as many excelled at the content/summary portion, this is to be expected, as the stakes were raised: they were now working with several articles and integrating summaries throughout a five- to six-page essay.  Four students “exceeded,” nine “approached exceeds,” and one “met.”  Again, all students had at least met expectations, which suggests to me that they were retaining knowledge about how to summarize.

As with the analysis, I asked students to provide a self-reflection and I graded them on revision.  However, since they did not reflect on summarizing, I have not included those results here.

Final Summary Assessment

The next assignment for the class was a research project which asked students to choose a topic, research it, identify credible sources, and integrate those sources.  Because I had not read the sources the students were using, I did not want to assess their ability to summarize them.  Instead, I gave students a final exam, part of which asked them to summarize an article.  Here is the question from the exam: “1. Summary (40 points).  Please read the sample student essay on pp. 468-483 and provide a comprehensive summary of her argument.  Note: I will use the same rubric to grade this summary that I used to grade your first summary.  Please look at where you scored on the first summary rubric and concentrate on improving areas of weakness.  Your summary should be roughly half a page (one paragraph).  Please write it in the space below.”

Sixteen students took the final exam.  Scores were lower, with five students receiving an A (exceeds), four receiving a B (approaches exceeds), four receiving a C (meets), two receiving a D (approaches meets), and one receiving an F (fails).

Although these results are still fairly good, with thirteen of sixteen students meeting expectations, there were some dips in grades.  Several competing factors can explain this, however:

  1. I gave them an essay we had not read or discussed in class.
  2. The summary was worth only 4% of their overall grade, so they may not have given it as much attention.
  3. The summary was part of a longer exam, which again may have diluted their attention.
  4. The exam was one of two final grades, and students may have spent more time worrying about the research project, worth 25% of their final grade.

Students did not turn in a self-evaluation with the exam.  Although they did turn in one with the final project, it did not ask them about summarizing.

Overall Impressions

For a comparison of Summary 1 to the Final Summary, please see E. Johnston, “Comparison of Summary 1 to Final Summary.”

Overall improvement: nine students did not improve between Summary 1 and the Final Summary; six did.

Summary of Student Self-Reflections

Summary 1:  The majority of students said they used the rubric “a little,” but more than half said it helped “a lot.”  There appears to be a conceptual disconnect between what the rubric is designed to do and how the students are using it.  They talked about their uses of the rubric vaguely.

Summary 2:  This time, I didn’t ask them how useful the rubric was, but students often referenced using the rubric to revise, and they used terminology associated with revision more frequently and with more depth.

It’s difficult to draw any concrete conclusions about the summary assignment and the tools used with it based solely on this research.  Only fourteen students completed the class, so the sample is small.  Further, there is no “control” against which to compare these results.  Students were more successful in this class than they’ve been in past online classes, but whether this was because of the assignments used, because the students came in more prepared, or because I used other new assignments and practices is unclear.

In retrospect, I wish I had used a summary as the diagnostic entry to the class (I typically ask students to write an essay, which is what these students did).  I did, however, ask students in my current composition course (Spring 2013) to do a diagnostic summary (their initial skills, however, are much lower than those of students in this “test” course).  I will compare the results of their first assignment (a summary of Carr’s article) to their diagnostic, and I expect to see the improvement I saw between the summary and the rhetorical analysis in this class.

I also wish that I had had students reflect on summarizing throughout the semester.  I may try to create exercises that isolate the skill of summarizing so that students can reflect on it over the course of several assignments.  Given the many skills they need to learn in college composition, however, and the limited time within which to teach those skills, this may not be feasible.

The successes of this approach to teaching summary as a base for all other source-based writing assignments seem to be the following:

  • Students are reading more critically with practice, as is clear from the fact that the majority continued to meet or exceed expectations even as the assignments grew more demanding.
  • Students are retaining what they’re learning, possibly through repetitive practice of these skills.  The scaffolding of these assignments appears useful.
  • Students’ revision skills improve over the course of the semester.  This seems to be in response to one or more of the following factors: they are motivated to revise because their revisions are graded; they are motivated to revise because they can see improvement in concrete categories of skill; they are learning to revise by reading my comments about their strengths and weaknesses with respect to revision.
  • Students ranked improved revision skills as one of the most important skills learned in this course.
  • Students ranked learning how to read critically as one of the most important skills learned in this course.
  • Students are able to identify concrete connections between the skills learned in this course and their academic and/or career pathways.
  • In their final self-evaluations, students seem to better understand writing as a process;  students are using the language of critical reading, referencing and employing terminology from the rubric to describe their own writing.

Classroom 2, High School, Instructor Scott Nash

Pre-Research Reflection

In a high school ELA environment, using rubrics to evaluate writing is fairly standard; my inquiry question, therefore, was: how does students’ reflection on their work affect their writing and effort?

The first reading that I used with my students was Nicholas Carr’s article “Is Google Making Us Stupid?” I “tested” this out during summer school with both sophomores and seniors.  Oddly, the three sophomores who completed summaries seemed to have a greater level of comprehension than the two seniors who completed the assignment.  In an informal discussion about the reading after completing the summary, the students (at both grade levels, including those who did not complete the written summary) complained about the article’s length (nine pages) but stated that the information contained in the article was not above their reading level (with the exception of one senior – who did not complete the summary and also did not pass summer school – who did identify the article as being too “confusing to read”).

My English 11R (Regents-level) students read the article at the very beginning of the school year (during the first class session for one of my two classes).  The students had a full 80-minute class to read the article and write a summary of it.  This first summary assignment was meant to serve as a “pre-assessment” to gauge their knowledge and skills; as such, there was no instruction prior to or during the completion of the summary.

Pre-Assessment Reflection (September)

Of my fifty-eight students, fifty-five completed the pre-assessment (to some degree): two students were absent and one student wrote me a note about not feeling well. Of the fifty-five who completed the assignment, eighteen did not finish in one class period. [I was only present to administer the pre-assessment to one of my two classes; because I had to go to court for the district on the second day of school, a substitute gave out the assignment to the other class.]

Before scoring the pre-assessments, I had informal discussions with both of my classes regarding the reading and the assignment.  The majority of my students stated that they had written summaries before, but most said that those summaries were of novels they had read for class (book report-style).  A few students maintained that they had never had to write a summary before and that they were “lost” trying to do it without any direction or instruction.  A lot of students (more than half, by my rough estimate) complained about the length of the reading.  There were also some comments (including from the Special Education teacher who “pushes in” to one of my classes) about the difficulty (or text complexity) of the article.

I was disappointed by the summaries I read.  Many students were not able even to identify Carr’s main point/claim, and they frequently identified “interesting” details as supporting claims.  Most students did not have a clue how to present MLA documentation of the article, and more than half did not even mention the title or author within the text of their summaries.  Only one student revised/edited her summary, and she said it was because it was “sloppy,” not necessarily to correct any errors in grammar or usage.  The best student summaries did identify the main point and demonstrated some sense of understanding.  The worst were so error-filled that it was not possible even to evaluate the student’s understanding of Carr’s article.  Based on these results and our discussions, I decided to use the students’ explorations of recently published “news” articles (where they were in search of research topics) as their next summary assignment, and then to use the summaries that they would write for their (eventual) annotated bibliographies as the “final” assessment of their ability to summarize.

Self-Selected (Research) Articles Reflection (October)

Before assigning this summary, I returned (copies of) their pre-assessments with completed rubrics and explanatory checklists.  I went over a few of the comments on the checklist, and discussed the documentation required for these summaries.  I instructed students to look over their summaries, the rubric, and the checklist.

Day One of our research included formal instruction, with formulas and models, on how to correctly cite sources using MLA style (a sample entry follows below).  Students needed to find, read, cite, and summarize two recent articles about a subject (or subjects) they might be interested in researching.  Then they needed to select one of their two summaries to hand in for evaluation.
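
For illustration (this sample entry is offered here as an example, not as one of the actual handouts students received), an MLA-style works-cited entry for the Carr article my students had already read would look something like this:

Carr, Nicholas. “Is Google Making Us Stupid?” The Atlantic. Atlantic Media Company, July-Aug. 2008. Web. [date of access].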

Day Two of the students’ re-SEARCH began with some instruction about how to read informational texts, with a focus on identifying main ideas.  [This was modeled using an excerpt from Bill McKibben’s article “Global Warming’s Terrifying New Math.”]  Then students were given instruction on how to take the information we had highlighted and annotated within the article and write an effective summary.

Of my fifty-five students (schedule changes had reduced the size of my classes), thirty-one completed and handed in summaries.  In general, these summaries were shorter than the ones my students had written for the pre-assessment, but they were, by and large, not much better.  Students still struggled to identify the main point/claim of their articles, and their summaries again appeared to be first drafts in desperate need of proofreading and editing.  Documentation was slightly improved – more students listed their article titles in-text – but most summaries were missing essential information and did not include the required MLA documentation.

I was at a real loss as to the reason for the “disconnect” between the summaries being produced (now from self-selected articles, of which students provided me copies) and the summaries discussed and modeled through the instruction and rubrics/checklists that the students had received.  [I have taught students how to write summaries before, and I even used some of the same approaches (if not the same materials) for that instruction.  In the past, my students did not seem to struggle to this degree – although I do not have hard data to prove this.]  Because students were reading different articles, it was difficult to judge whether the chosen article affected the quality of the summaries being produced.  Based on that realization, I decided that it wasn’t such a good idea to make the annotated bibliographies the final assessment of their ability to summarize; or, at least, if I did continue down that path, I would also need to find an additional assignment to use as a more standard evaluation.

Change in Plans (November)

The annotated bibliography option died a slow death.  I decided that students would still produce one as part of their research projects, but the bibliographies would not be completed in time to use for my action research.  Students have been really struggling, in general, with many “basic” concepts this year, and as such, we have been slow to move forward with our research projects.

Teacher-Selected “Final” Assessment (December)

Instead of another self-selected article (for their annotated bibliography), I selected four articles, grouped into two levels of complexity.  I then assigned the articles based on the students’ ability levels.  Students were instructed to read both articles that they were given and then select one to summarize.  Students were again given copies of my summarizing overview sheet, the checklist, and the rubric.  Additionally, I gave students the self-evaluation/feedback sheet and copies of two different model summaries (though students did not receive copies of the articles about which those summaries were written).

Only fifteen students have completed their “final” summaries at this point.  [I expect a handful more to trickle in, since it is the end of the marking period and the last chance for students to receive any credit for the assignment.]  The students who did turn in the assignment generally fell into two groups: my high achievers and my academically struggling ESL and IEP students.  As such, the results gathered from the final summary are a bit disjointed.  From these limited numbers, there seems to be an overall improvement, though still not to a degree with which I am satisfied.

Overall Impressions

Any conclusions drawn from these assignments would be of doubtful validity – only fifteen students completed all three summary assignments (though I also included results from eight more students who are likely to complete the final assessment before the end of the quarter).  In addition, the comments from the self-reflections are vague and do not seem to reference the rubric or checklist (and most do not demonstrate a connection between students’ expectations – they were asked to complete a self-evaluation rubric for the final assessment – and my assessment, which consistently does reference the checklist).

Self-reflection as a learning tool, my initial interest, did not seem to demonstrate any benefits here.  [Though, I have found some value in it this year and am continuing to employ it more frequently in my teaching.]  I believe that the rubric we used was too vague without also directly using the items on the checklist, and the length of the checklist kept many students from referencing it.

I do have one of the weakest groups (if not the weakest group) of students, as a whole, that I have ever had in terms of basic skills and work ethic.  During this research project, I tried not to offer any comments or additional one-on-one instruction in summarizing, in an effort to keep the research as “controlled” as possible – though using the self-selected articles (as a “solution” to the comments about the difficulty and length of the pre-assessment article) probably negated any standardization anyway.  My students still need further instruction in summarizing before they write their annotated bibliographies.  I plan on revising the rubric using many of the details from the checklist; I hope that having a single, shorter document will make it more accessible for my high school audience.  I will continue to use self-reflection, adding a component in which students must use language directly from the rubric in discussing their own approaches to completing assignments.

Classroom 3, College Composition (Face-to-Face), Instructor Maria Brandt

In Maria Brandt’s ENG 101 class at Monroe Community College, students began the semester reading two short essays, Linda Hogan’s “Walking” and Rebecca Solnit’s “The Orbits of Earthly Bodies.” After reading each essay, students reviewed the essays together in class, discussing each author’s audience/purpose and summarizing each essay’s main ideas. Then, each student selected one of the two essays and typed a 250-word formal summary that argued the essay’s apparent thesis, outlined its main ideas, and argued its apparent audience/purpose. The average grade for these summaries was a 77.5. More than half the students scored “exceeds” or “approaches exceeds” on their content and their prose; more than half scored “meets” on their transitions and their grammar/documentation. Then, students typed a five-paragraph rhetorical analysis of the essay they had summarized.

As the semester progressed, students built on this original assignment by typing annotated bibliographies for six research sources, then using these sources to defend a thesis. Although not part of our shared action research for CCTE, these annotated bibliographies were structured and evaluated the same way as the original summary assignments, with the goal of reinforcing both the process and the value of summarizing texts before writing about them.

Finally, at the end of the semester, students read two longer essays, Henry David Thoreau’s “Walking” and Edward O. Wilson’s “The Ethics of Biodiversity.” Again, students reviewed these essays together in class. But, it’s worth noting that these essays are significantly longer and more complex than the essays they read at the beginning of the semester, which means that class discussion didn’t reach the same level of mutual certainty about each essay’s content. It’s also worth noting that the rubric for these final summaries was slightly different from the one used for the original assignment (see appendix). Nonetheless, each student still typed a formal 250-word summary of both essays. The average grade for these summaries was a 79.1. This time, however, more than half the students scored “meets” or “approaches meets” on their content, their organization, and their grammar. An equal number of students scored “exceeds” or “approaches exceeds” as scored “meets” or “approaches meets” on their style. And, more than half scored “exceeds” or “approaches exceeds” on documentation.

These results puzzle us. In anecdotal terms, the writers in this class achieved stronger final essays than in almost any class that Professor Brandt can remember. Brandt also noted significant improvements during the semester regarding students’ abilities to read and analyze texts, to discuss their writing, and to execute substantive revisions. Yet, students’ scores on the above assignments seem to reflect the opposite. Brandt explains this in part by pointing out that the texts for the second assignment were far more difficult and that the rubric was different. She also points out that she spent much more time outlining—and workshopping outlines—in this class than she had in the past, which is independent of our action research but nonetheless would have impacted her students. She also posits that she may have graded her students more strictly as they progressed successfully through the course. Whatever the cause, it seems that Professor Brandt’s use of these rubrics didn’t help us adequately measure what actually happened in that classroom.

Indeed, what Professor Brandt learned is independent of the rubrics. She learned that spending more time reading critically and summarizing objectively encourages students to write more accurately. She learned that students feel a stronger sense of engagement with their own writing when they are more engaged with someone else’s writing first. And, she learned that building a productive community of writers involves building a productive community of readers. She certainly can argue that she “knew” all of this before, but whatever happened this semester affirmed this knowledge and will ensure that Professor Brandt continues to emphasize summarizing sources as part of the writing process.

Reflections and Future Practice, Overall

It is difficult to make any conclusive statements about the research conducted in our classrooms.  Unlike in a controlled laboratory experiment, we were unable to control many variables.  At the high school level, in particular, administrative guidelines and restrictions often dictate what can (and can’t) be achieved in a classroom.  Johnston and Brandt learned from observing high school classrooms what Nash already knew going into the project: administrative pressure on teachers to pass students whose work is not up to par is keenly felt and skews the results of almost any kind of evaluation method.  Further, because of various policies in the school districts, students are not penalized if they don’t turn in work.  Thus, Nash was working with a very small sample of students who completed the work.

It’s also important to note that the students in our college classrooms at MCC are reading and writing at a variety of levels upon entry.  For example, in one of Johnston’s College Composition courses this semester, based on Accuplacer testing results, the range of reading ability extends from the 3rd grade level to beyond the 12th.  Students with weak reading skills may grow over the course of a semester, but that growth may not be measurable within fifteen short weeks.  Of course, time spent trying to work with students with such low abilities necessarily cuts into time spent with those reading at or above expectations.

Both Brandt and Johnston attest that teaching summary as part of a larger curriculum focused on critical reading and writing is, indeed, useful for students.  Though the numbers here may not reflect that improvement, both can point to anecdotal evidence of student successes this semester not seen in semesters when summaries were not assigned.

There is no clear evidence that using the rubrics aided student success.  Although Johnston found that students expressed liking the rubrics, they also admitted to not using them while writing their essays.  Nash found the same result.  Johnston did find that students were using the vocabulary associated with the rubrics with much more consistency by the end of the semester, but the self-evaluation tool she used also changed over the course of the semester to prompt that vocabulary use.

Conclusions

Ultimately, what seems to have been most useful about this action research is that it confirmed many of the things we already know:

  1. Students struggle with reading.
  2. Because students struggle with reading, they struggle with writing.
  3. Time spent discussing readings in class, together with assignments that have students summarize readings, improves reading skills.
  4. Scaffolded assignments (like the summary, followed by the analysis) help students retain skills and possibly improve them.
  5. Documentation skills tended to improve or, when the assignment became more complex, to stabilize with practice.

We also discovered some things that will impact future practice. The rubrics as recommended by the literature review conducted last semester were not successful.  The long checklist attached to the rubric proved cumbersome for students, and students admitted to not using the rubric or checklist while writing.  Those of us who plan to continue using rubrics will simplify them.

Perhaps most important, however, are the conversations that occurred between instructors at the high school and instructors at the college.  As teachers, we all feel somewhat “under fire,” as the public complains about increasing school taxes, teacher pay, and “greedy” unions in the face of decreasing test scores.  The tendency has been to blame the teachers and to throw money at the problem of “bad” teaching.  Even though we are all teachers, it is easy for us to get caught up in these arguments and push the blame backwards: those teaching composition at the four-year colleges blame “bad” community college teachers; community college teachers blame “bad” high school teachers; and high school teachers blame “bad” middle school teachers.  Yet what has become immediately clear is that the problem is not “bad” teachers, but bad policies.

Conversations with a high school teacher and the chance to observe high school classes have broadened Johnston’s and Brandt’s understanding of the challenges of teaching high school readers and writers, as well as strengthened their respect and admiration for those working in the proverbial trenches.  New insights have helped Johnston and Brandt to better understand where the gaps lie (and they are decidedly not with instruction, but instead with administrative problems).  Understanding the culture of high school has better equipped these college professors to anticipate and troubleshoot the expectations and behaviors of incoming freshmen.

Of course, changing culture is hard, and to change policy, we need to change culture.  It is easier to blame bad teachers: to fire teachers and shut down schools, or to throw money at professional training opportunities designed to make them “better” teachers.  But though these actions make the public happy and provide a sense that, yes, something is being accomplished, little actually is.  We can have award-winning teachers boasting numerous diplomas, with stacks of certificates on their walls, and these same teachers can have award-winning assignments, but they might as well throw it all out the window if they walk into a high school or college classroom where students are reading at a third-grade level, where students cannot be held accountable for work due, where parents can demand that students be allowed to keep their cellphones on in class, where administrators can scuttle a week’s worth of lessons because a new standardized test is being required, or where there is nowhere to send disruptive students and no outside support for students struggling with basic concepts.

Ultimately, this experience has been positive insofar as it has allowed us to connect with each other and broaden our understanding of the differences between high school and college culture.  It has also strengthened our relationship with area high schools and opened up opportunities to talk about and align curriculum.  We hope that continued opportunities to do so evolve out of this project and that the dialogue between high school and college instructors likewise continues.  We are convinced that the spirit of the Common Core, with its emphasis on reading and source-based writing, is a movement in the right direction, but we worry that misguided policies and politics will get in the way of real change.

Resources

Miller, Bert. “Developing Successful Rubrics.” Arkansas State University. Web. May 2012.
“English Language Arts Standards.” Common Core State Standards Initiative. CCSSO and NGA, 2012. Web.

Appendix

Summary Rubric