Category Archives: Assessment

“High-Impact” Teaching and the Role of Literature

This post is by NCTE member Cody Miller.

I was recently named “one of the highest impact teachers” in my state. This title was bestowed on public school teachers by the Florida Department of Education for high value-added model (VAM) rankings. Like many fellow teachers and teacher educators across the country, I find the emphasis on standardized assessments misguided. Yet I must navigate teaching in a post-NCLB environment daily.

I teach at the University of Florida’s laboratory school, P.K. Yonge Developmental Research School. Our school’s mission includes designing and implementing instructional practices that will help all learners achieve. Our population, mandated by state law, must reflect the demographics of Florida. Our student body makeup is important because we know that too often students from middle- and upper-class families receive robust literature instruction, while the remainder are left with a narrow curriculum that prevents them from exploring the important topics raised by literature. English classes are detracked at my school, meaning that all students are enrolled in my English Honors course, regardless of test scores. This situation requires me to constantly think about differentiating instruction while maintaining a high standard for quality work. It also means that my class discussions are rich because of the diverse perspectives my students bring. All students read at least seven books throughout the year, ranging from YAL to AP texts. My focus on literature may seem notable in the age of Common Core, but it shouldn’t be.

It’s important to note that the Common Core’s call for texts to be 70 percent nonfiction by high school includes reading across all content areas. I often hear of teachers feeling that their ability to teach literature has been diminished since the introduction of the standards. I do not blame teachers for substituting nonfiction texts for literary works in order to secure their professional livelihoods. Indeed, I believe teachers should help students understand standardized assessments as a unique genre. However, I am suggesting that a literature-rich curriculum prepares students for annual standardized assessments. More importantly, this type of curriculum helps prepare students to be critical and empathetic citizens. For example, my students make connections to current social movements like marriage equality when reading Romeo and Juliet. They read across texts and write from multiple perspectives; they analyze and synthesize in their writing. In short, they read the word and the world by engaging in literature, poetry, media, and art. And they still succeed on the standardized tests because they’re engaged in more challenging work throughout the year.

When I received the notification that I had been named a “high-impact” teacher, I breathed a sigh of relief. Teachers, regardless of their feelings about mandated assessments, feel the pressure of the results. It is an unfortunate feeling, but it cannot be ignored. I feel strongly that quality instruction that goes beyond the demands of the tests will result in students reaching the mandated benchmark. But more importantly, quality literature instruction will cultivate a sense of justice within students, and that is the greatest impact a teacher can have.

Cody Miller teaches ninth-grade English Language Arts at P.K. Yonge Developmental Research School, the University of Florida’s affiliated K–12 laboratory school. He can be reached at cmiller@pky.ufl.edu or on Twitter @CodyMillerPKY.

Insights from edTPA Implementation

Like any significant change to a major system, edTPA (a new performance-based assessment for licensing teachers) presents challenges and tensions for those who must accommodate this change. Those tensions were recently documented in the November issue of Language Arts by Amy Johnson Lachuk and Karen Koellner, two teacher educators in an elementary education program that offers degrees leading to initial teacher certification. In their article “Performance-Based Assessment for Certification: Insights from edTPA Implementation,” Lachuk and Koellner describe efforts to adjust programming in light of their state’s recent adoption of edTPA.

While the edTPA is new, the tensions it has brought are familiar to many teachers. One is the tension between wanting our students to learn things for themselves through inquiry and wanting to give them the answers. Lachuk and Koellner write:

As teacher educators, we aim to offer teacher candidates opportunities to reflect upon and inquire into their practices. We also aim to help them experience the complexities of teaching, so that they can grow in their practices. However, a formal, performative assessment such as the edTPA makes managing the tension between telling and growing even more complicated (cf. Berry, 2008); Berry questions: “What would motivate prospective teachers to seek their own solutions to teaching problems when their formal assessment is at stake?”

Lachuk and Koellner also found that the edTPA required candidates to think about teaching in ways the preparation program had not previously pushed them to:

For example, writing and using supporting evidence about their planning, teaching, and assessment practices are how candidates are evaluated on their ability to engage in the assess-plan-teach cycle. . . . Several candidates were very skilled in writing retrospective reflective narratives about their teaching, yet when it came time to structure these reflections as academic arguments in which they used evidence to support their claims, they struggled.

The new reality meant teaching new skills, but it also meant eliminating some lessons. “[B]ecause edTPA is a time- and labor-intensive examination, we need to accommodate the process by requiring fewer assignments as part of the student teaching course.”

An even more significant tension may be that between wanting to give their students accurate, reliable information and wanting to be perceived as sufficiently knowledgeable. Lachuk and Koellner write:

Because the edTPA was a new examination for faculty, too, we wanted to project to teacher candidates that we had a firm grasp on what it was asking them to do, when in fact we did not. For instance, we created a series of face-to-face workshops and hosted several drop-in sessions for teacher candidates who were submitting and preparing their edTPA portfolios. These support workshops and drop-in sessions were intended to coach teacher candidates throughout the process, adhering to the guidelines for faculty support provided by Pearson publishing (the publisher of edTPA).

Participating in these face-to-face workshops was particularly difficult for Amy, who was concerned about unintentionally giving teacher candidates misinformation that would negatively impact their performance on the examination. Although she was familiar with the examination, Amy felt uncertain about her interpretation of the edTPAese, or the way certain concepts (such as finding a central focus for writing) were defined and interpreted in the examination. At the same time, however, for the sake of candidates’ peace of mind, she felt that she needed to present herself as knowledgeable and confident about the examination. Throughout the time she was helping to support teacher candidates with preparing their edTPA portfolios, Amy felt herself confronting this tension between appearing knowledgeable and confident while actually feeling rather uncertain.

But Lachuk and Koellner do feel confident that all these various tensions will lessen over time. “[C]andidates will be more familiar with the requirements and will have experienced more of the supports throughout our program (rather than only during their student teaching semester when they take the exam).” As with any change, the tensions felt now will shape our adjustments to that change and will ensure that, down the road, tensions will ease.

 

Read the complete article, “Performance-Based Assessment for Certification: Insights from edTPA Implementation.”

2016 Convention Proposal FAQ

Are you considering submitting a proposal for the 2016 NCTE Annual Convention? You should!

We’ve been getting some questions about the process and we thought it would be a good idea to address the most frequently asked ones below.

Check out this video in which Jason Griffith, one of our proposal coaches, shares his insights. You can also read his 6 tips for crafting a proposal here.

Is the proposal system live?

Yes! The proposal system went live on December 18 and can be accessed here. Full details on the word counts and fields you will have to fill in can be found here.

What if my session idea doesn’t have anything to do with advocacy?

First of all, all session proposal ideas are welcome for consideration, and we are confident your proposal does involve advocacy of some sort.

A central argument of the 2016 convention theme, Faces of Advocacy, is that the very act of being a teacher is an act of advocacy. The work we do every day in making the best choices for our students and our profession involves advocating for what we know is right.

So if you have a session on a great new strategy for doing close reading, or apps that help teach about argumentation, you’re advocating for an approach. And if you have a session on infusing social justice themes into teacher preparation programs, that’s advocacy, too.

Think about the theme of the Convention less as a defined set of activities and more as a lens through which to view the important power and potential of our profession.

Still worried your session might not fit?

Consider this broad range of topics of emphasis the selection committee is looking for:

  • Advocacy
  • Argumentation
  • Assessment
  • Community/Public Literacy Efforts
  • Composition/Writing
  • Content Area Literacies/Writing across the Curriculum
  • Digital and Media Literacies
  • Early Literacies
  • Equity and Social Justice
  • Informational Text
  • Literature
  • Multilingualism
  • Narrative
  • Oral Language
  • Reading
  • Rhetoric
  • Teacher Education and Professional Development

What are the criteria for selecting sessions?

You can read all about the criteria here. But here are some guiding ideas to help you:

  • Be clear and thoughtful. The more specific you are, the easier it will be for reviewers to imagine what this session might be like.
  • Think engagement. Susan Houser, NCTE president-elect and conference chair for 2016, has been clear from the start that she wants more sessions that are active and engaging and fewer that are driven by information delivery alone. How might you foster conversation and interactive learning as part of your session?
  • Make it relevant. There is so much going on in education right now that it’s likely any of your ideas will fit in, but bear in mind that attendees come from all over the country, from classrooms of every shape and size. Think about how what you’re thinking and doing in your local context could resonate with folks from lots of different contexts.

The NCTE offices will be closed December 24-January 1. We’ll make sure to answer any additional questions as soon as we get back. 

Asking the Right Questions in Teacher Assessment

In the November 2015 Language Arts, Rachael Gabriel examines problems with how teachers are evaluated. Her research team reviewed the Measures of Effective Teaching (MET) project, an extensive study of the techniques of 3,000 teachers that sought to determine which practices correlated with high VAMs (value-added measures). While the project identified many useful activities, Gabriel argues it has been used to support teacher evaluation rubrics that err by measuring quantity, not quality. She writes:

The major challenge of performance assessment via observation is that indicators are counted as if their presence or absence indicates quality. For example, one feature of classroom discourse that is often included in commercially available rubrics for observations is the use of open-ended and/or higher-order questions. Though the presence of higher-order questions . . . has been associated with increased engagement and achievement, its absence does not indicate lack of quality. . . . When analyzing MET project videos, we found higher-order questions in low-performing classrooms on every measure of the MET study, and high-scoring classrooms that had no evidence of higher-order questions.

Other examples of this abound:

When it comes to opportunities to develop literacy, it isn’t the fact of allotted time for independent reading or writing, but rather the nature and use of that time that determines its value as a practice.

For example, several videos of MET project classrooms included time spent writing for five minutes or more, but the writing tasks often involved filling in blanks of a formulaic paragraph structure or copying notes from the board into a graphic organizer. Neither of these tasks involves a robust opportunity to develop literacy because students are not generating original language, employing a writing strategy, writing for a purpose, or writing to an audience. However, in observation, especially brief observation, it may appear that students are all engaged in writing, and this instrumental engagement may be viewed as evidence of effectiveness because students are quietly complying with a writing-based activity.

Why does a rubric of activities fail to indicate quality?

It could be that every observable feature or “best practice” involves a compromise and thus cannot be viewed in isolation as evidence of effectiveness or not. For example, calling on an equal number of boys and girls may extend the length of discussion and limit time for independent practice. Similarly, pursuing a back-and-forth discussion to support a student’s understanding might limit other students’ participation. A teacher could invest in one indicator of effectiveness at the expense of another. Thus, effective teaching may be about managing the dynamic balance of certain features of instruction rather than simply displaying such features.

She concludes:

At best, rubrics are filled with actions that are sometimes associated with effectiveness, not foolproof indicators of effectiveness. This leaves evaluators in the unenviable position of attempting to come up with feedback on a teacher’s performance based on a set of indicators that may not indicate anything. Given the importance of some features, the assumption may be that more is better, thus teachers are encouraged to ask more open-ended questions, engage students in more meaningful conversations, or encourage more participation. The inclusion of such indicators to mark the highest levels of proficiency on a rubric may inspire instrumental compliance rather than thoughtful integration. Unfortunately, encouraging participation for participation’s sake may not deepen or extend learning opportunities. But, considering how participation could contribute to the goal of the lesson (how is this effective?) or how participation has been attempted (how does the teacher encourage participation?) is likely to generate useful feedback aimed at improving or expanding effective practices.

 

Read Rachael Gabriel’s complete article, “Not Whether, but How: Asking the Right Questions in Teacher Performance Assessment.”

Q & A with Les Perelman

Renowned scholar Les Perelman has dedicated his career to the support of powerful and authentic writing instruction. He has been an outspoken opponent of assessment practices he sees as counterproductive to developing strong writers. We wanted to learn more about the role he sees teachers playing in both advocacy for better assessments and the development of them.

  1. What role, if any, should teachers play in the scoring of assessments? Why?

Teachers should be involved in all phases of the assessment process to ensure that the assessment instruments test what should be taught, not what is easy to assess. However, they should not be the only people at the table. The design team should also include assessment professionals, representatives of school boards and education departments, parents, and, I would propose, one or two high-scoring recent graduates. [These former students] should be part of the teams designing both the general formats of assessments and each specific test.

The selection of training samples for each scoring session should be done primarily by teachers, with input from assessment professionals. The actual scoring sessions, however, should be run and staffed by teachers. With Internet technology, tables of teachers can be situated anywhere, trained on the same samples, and able to grade papers from almost anywhere. The system can be designed so that teachers will not be grading the papers of their own students. The composition of the teachers grading these tests should be as diverse as possible, and students should see pictures of these groups to diminish the effect of stereotype threat on students of color.

There are two very powerful arguments that can be used in proposing teacher grading of high-stakes tests. First, it would encourage teacher buy-in and eliminate the current, almost adversarial situation felt by many teachers, in which their best teaching practices and their own expertise are at odds with testing companies and professionals who design and grade the tests but never enter a classroom. Second, the monies spent on the grading sessions would serve a double purpose. Not only would they fund the scoring of essays, but they would fund extremely effective professional development.

During my whole career in writing program administration, I observed that if we didn’t have opportunities for teacher grading sessions, we would have to invent them. Getting teachers in a room and engaging them in conversations about [. . .] the key features in effective and intellectually adept student writing is one of the best venues for professional development. Moreover, it is an excellent method for engaging teachers in other fields who may be involved in writing-in-the-disciplines initiatives.

  2. What advice do you have for people who want to take action to improve the way we assess writing in this country but fear for their jobs if they do?

This is a very difficult question. People have families and obligations, and it is not my place to tell someone to risk their job. There are, however, two strategies I can suggest that may be useful in some situations.

First, use the strategies presented in Linda Adler-Kassner’s excellent book The Activist WPA for reframing the conversation. Accept the general goals presented to you, but then argue, correctly, that the current implementation actually subverts those goals. Propose intellectually and pedagogically honest strategies to achieve those goals that reinforce best teaching practices. Use the same vocabulary, such as “college readiness,” but define those terms in relation to effective teaching and the document jointly developed by NCTE, WPA, and NWP, the Framework for Success in Postsecondary Writing. In some contexts these strategies will not work, but in others, they have a chance.

The second strategy is for teachers who are also parents. While it may be dangerous to oppose mindless testing where you work, as a parent and a taxpayer, you have a constitutionally protected right to do so in the district where you live. As a teacher, you can help lead the opposition by building alliances with other parents. Meanwhile, if the teachers who live in the school district where you work take similar actions, we are all helping each other, and most important of all, our children and students.

  3. In light of all you’ve learned in your research and through tools like the BABEL Generator, where do you find hope in the future of assessment? What’s the bright spot in a landscape that looks pretty bleak?

Fortunately, I do not think the landscape is that bleak. What the BABEL Generator proved was how stupid these machines are. They do little more than count obvious and often trivial features. It took my team just a few weeks to build our first prototype, which we expected to fail with at least some machines. I still remember sitting in my office with my three students and trying out the alpha version of the BABEL Generator. We were amazed that it received top scores on all four machines we tested it on.

The most effective argument against automated essay scoring does not rely on abstract rhetorical theory. It is simply that these machines do not work; they do not do what they claim to be doing. Students can be easily taught to generate essays that receive high scores but that are atrocious pieces of writing.

These are arguments that everyone can grasp quickly, and the BABEL Generator is a tool that people can use to make these points in dramatic and unambiguous demonstrations.

Be the little boy who cries that the emperor has no clothes!