Tag Archives: Assessment

Mrs. Stuart Goes to Washington: Policy Brief, Assessment and Back to School

Project: Policy Brief

Oh, the wonderful world of writing policy briefs. This week was spent working on a piece for the National Council of Teachers of English. We are trying to uncover who is teaching English and how these educators feel about a range of topics. There is data on teachers in general, but not a lot on teachers of English specifically. As an organization, we wanted to learn about this critical group of educators. Here are some questions that arose during my research:

  1. What might the race and gender of our teachers tell us about the ways we connect with our students?
  2. Why is it important to look at the levels of education a teacher has achieved?
  3. In a post–Common Core world, have levels of job satisfaction changed?
  4. What are the professional learning needs of teachers of English?

The most glaring fact so far has been the lack of current research on teachers. The U.S. Department of Education is set to release the next set of data at the end of summer or the beginning of fall.

You know what they say about assessment …

I took a break from writing and research to meet with Miah Daughtery, the Director of English Language Arts and Literacy at Achieve. A fellow Wolverine, Miah and I discussed everything from ideas for getting kids to read (The Reading Minute by Kelly Gallagher was new to me!) to understanding why standardized assessments are so long (if you don’t know what a psychometrician is, then you probably don’t know the answer). One thing that was suggested in my district was that we write our own district-wide benchmarks. It was such a casual comment to our little department of 10 teachers that it seemed like a simple idea. Miah and I discussed how complicated it is to write an assessment, especially one you are going to use to make claims about student achievement. Miah has a presentation called “The Top 25 Ways a Test Item Can Be Flawed.” There are more than 25? The moral of the story is, folks, we need to revisit our benchmark plan. Miah suggested checking out the assessments from Achieve the Core, so I’m going to start there.

Oh, and I’m back in the classroom on Monday

When my eyes got tired of looking at data, I turned my attention to my classroom. I have 6th and 8th graders reporting to me, excited yet sleepy, on Monday. As most teachers do, I have grand plans for the year. These include, but are not limited to:

  • infusing global education in all of my units, and building a website for my students to interact with me while I am on my study abroad, probably on Facebook or a page on my personal site
  • getting my kids to enjoy reading (also, getting them to actually read)—this means getting my classroom library in order, which terrifies me
  • doing daily read-alouds and maybe the Reading Minute
  • pairing contemporary texts with my mandated curriculum

I’m open to ideas!

 

Seriously, need to get it together! What a mess.

Form Doesn’t Mean Formulaic: Finding Creative Approaches to ESSA Assessment Plan Design

I was always a free-verse guy back when I was serious about writing poetry. In creative writing classes, however, my teachers challenged me to express myself within the rules of forms—the villanelle, the sestina—that I would not have chosen on my own. This imposition of structure, even when it felt arbitrary, often forced me to be creative in new ways, to consider economy, focus, and balance in ways that enriched my less-constrained compositions.

The recently passed Every Student Succeeds Act (ESSA) offers states a great deal more flexibility in designing assessment systems than they had under No Child Left Behind and the waiver system it spawned. For the first time in over a decade, states will be genuine authors of their plans to assess student learning and diagnose inequities in outcomes. But the law does impose a form on those plans, prescribing a set of components and characteristics that each system must include. For example, school rankings must consider not only the still-mandatory yearly test results aligned with state standards but also a “school quality” indicator, such as a measure of school climate or student engagement. Those of us who might prefer free-verse assessment are out of luck.

In the debate over the bill, there was much disagreement about whether these requirements were the right ones or, indeed, whether the federal government ought to be setting requirements at all. This debate will certainly continue, but the immediate challenge for states and districts is to craft assessment plans that best serve their students within the rules set by the law. As this process unfolds, states could look for the paths of least resistance to comply with the law, an approach that was common under the waiver system. Alternatively, like poets writing within a form, they could use the constraints within the law to generate creative approaches that might significantly improve on the status quo.

A recent ESSA assessment design competition hosted by the Thomas B. Fordham Institute in Washington gives me hope that the second approach will prevail. Loosely modeled on American Idol, the competition included ten finalists who gave brief presentations about possible designs for state assessment systems within the rules of ESSA. A panel of judges asked questions, and then both judges and audience members voted on how much they liked each proposed system. In the spirit of ESSA itself, which assumes that different approaches will work best for different states, no winner was declared.

Like the judges, I didn’t find any one of the proposals a perfect solution. However, I was impressed by the thoughtfulness and creativity of many of the entries. Those imagining a more effective assessment system to increase equity seem to have benefited from having to work within the rules of the law.

Proposals included intriguing ideas that go well beyond path-of-least-resistance compliance. Here are some examples:

  • Sherman Dorn of Arizona State University proposed a citizen “grand jury” structure for determining which schools are in need of improvement, juxtaposed against the prevailing “algorithmic” systems that use a rigid quantitative formula for identifying underperforming schools.
  • The BE Foundation proposed the use of student digital portfolios to track student success and school quality, representing learning both in and out of school indexed to competencies and providing information relevant to students, parents, and community stakeholders.
  • Bellwether Education Partners’ proposal suggested that states should over-identify schools in need of improvement based on the blunt instruments of student achievement and growth measures and then choose the schools in which to trigger intervention based on a rigorous inspection process conducted by outside experts.
  • Separate proposals from America Succeeds and Education First both argued for allowing districts some choice of indicators within a larger state-set framework to encourage innovation and improvement in areas targeted by the local community.
  • A thoroughly impressive group of high school students from Kentucky who serve on the Prichard Committee Student Voice Team argued for measuring school climate based on student surveys, an idea echoed in proposals from the University of Southern California and Teach Plus.
  • Several proposals considered incorporation of measures of social-emotional competencies, such as persistence and relationship skills.

Not all these ideas are likely to be good ones. For example, as Bill Penuel of the University of Colorado at Boulder pointed out in the lively Twitter conversation during the competition, using existing noncognitive measures in high-stakes assessments runs into potentially serious problems of validity and gaming the system. However, these proposals do demonstrate that even within the form set by ESSA, there are opportunities to innovate in ways that have the potential to provide better information to states, districts, schools, teachers, and parents about how to better prepare students for successful and happy lives. Let’s work to ensure states capitalize on the opportunity.


Insights from edTPA Implementation

Like any significant change to a major system, edTPA (a new performance-based assessment for licensing teachers) presents challenges and tensions for those who must accommodate this change. Those tensions were recently documented in the November issue of Language Arts by Amy Johnson Lachuk and Karen Koellner, two teacher educators in an elementary education program that offers degrees leading to initial teacher certification. In their article “Performance-Based Assessment for Certification: Insights from edTPA Implementation,” Lachuk and Koellner describe efforts to adjust programming in light of their state’s recent adoption of edTPA.

While the edTPA is new, the tensions it has brought are familiar to many teachers. One is the tension between wanting our students to learn things for themselves through inquiry and wanting to give them the answers. Lachuk and Koellner write:

As teacher educators, we aim to offer teacher candidates opportunities to reflect upon and inquire into their practices. We also aim to help them experience the complexities of teaching, so that they can grow in their practices. However, a formal, performative assessment such as the edTPA makes managing the tension between telling and growing even more complicated (cf. Berry, 2008); Berry questions: “What would motivate prospective teachers to seek their own solutions to teaching problems when their formal assessment is at stake?”

Lachuk and Koellner also found that the edTPA required candidates to think about teaching in ways the preparation program had not previously felt the need to push:

For example, writing and using supporting evidence about their planning, teaching, and assessment practices are how candidates are evaluated on their ability to engage in the assess-plan-teach cycle. . . . Several candidates were very skilled in writing retrospective reflective narratives about their teaching, yet when it came time to structure these reflections as academic arguments in which they used evidence to support their claims, they struggled.

The new reality meant teaching new skills, but it also meant eliminating some lessons. “[B]ecause edTPA is a time- and labor-intensive examination, we need to accommodate the process by requiring fewer assignments as part of the student teaching course.”

An even more significant tension may be the one between wanting to give their students accurate, reliable information and wanting to be perceived as sufficiently knowledgeable. Lachuk and Koellner write:

Because the edTPA was a new examination for faculty, too, we wanted to project to teacher candidates that we had a firm grasp on what it was asking them to do, when in fact we did not. For instance, we created a series of face-to-face workshops and hosted several drop-in sessions for teacher candidates who were submitting and preparing their edTPA portfolios. These support workshops and drop-in sessions were intended to coach teacher candidates throughout the process, adhering to the guidelines for faculty support provided by Pearson publishing (the publisher of edTPA).

Participating in these face-to-face workshops was particularly difficult for Amy, who was concerned about unintentionally giving teacher candidates misinformation that would negatively impact their performance on the examination. Although she was familiar with the examination, Amy felt uncertain about her interpretation of the edTPAese, or the way certain concepts (such as finding a central focus for writing) were defined and interpreted in the examination. At the same time, however, for the sake of candidates’ peace of mind, she felt that she needed to present herself as knowledgeable and confident about the examination. Throughout the time she was helping to support teacher candidates with preparing their edTPA portfolios, Amy felt herself confronting this tension between appearing knowledgeable and confident while actually feeling rather uncertain.

But Lachuk and Koellner do feel confident that all these various tensions will lessen over time. “[C]andidates will be more familiar with the requirements and will have experienced more of the supports throughout our program (rather than only during their student teaching semester when they take the exam).” As with any change, the tensions felt now will shape our adjustments to that change and will ensure that, down the road, tensions will ease.

 

Read the complete article, “Performance-Based Assessment for Certification: Insights from edTPA Implementation.”

Asking the Right Questions in Teacher Assessment

In the November 2015 Language Arts, Rachael Gabriel examines problems with how teachers are evaluated. Her research team reviewed the Measures of Effective Teaching (MET) project, an extensive work that studied the techniques of 3,000 teachers to determine which correlated with high VAMs (value-added measures). While the project identified many useful activities, Gabriel argues it has been used to support teacher evaluation rubrics that err by measuring quantity, not quality. She writes:

The major challenge of performance assessment via observation is that indicators are counted as if their presence or absence indicates quality. For example, one feature of classroom discourse that is often included in commercially available rubrics for observations is the use of open-ended and/or higher-order questions. Though the presence of higher-order questions . . . has been associated with increased engagement and achievement, its absence does not indicate lack of quality. . . . When analyzing MET project videos, we found higher-order questions in low-performing classrooms on every measure of the MET study, and high-scoring classrooms that had no evidence of higher-order questions.

Other examples of this abound:

When it comes to opportunities to develop literacy, it isn’t the fact of allotted time for independent reading or writing, but rather the nature and use of that time that determines its value as a practice.

For example, several videos of MET project classrooms included time spent writing for five minutes or more, but the writing tasks often involved filling in blanks of a formulaic paragraph structure or copying notes from the board into a graphic organizer. Neither of these tasks involves a robust opportunity to develop literacy because students are not generating original language, employing a writing strategy, writing for a purpose, or writing to an audience. However, in observation, especially brief observation, it may appear that students are all engaged in writing, and this instrumental engagement may be viewed as evidence of effectiveness because students are quietly complying with a writing-based activity.

Why does a rubric of activities fail to indicate quality?

It could be that every observable feature or “best practice” involves a compromise and thus cannot be viewed in isolation as evidence of effectiveness or not. For example, calling on an equal number of boys and girls may extend the length of discussion and limit time for independent practice. Similarly, pursuing a back-and-forth discussion to support a student’s understanding might limit other students’ participation. A teacher could invest in one indicator of effectiveness at the expense of another. Thus, effective teaching may be about managing the dynamic balance of certain features of instruction rather than simply displaying such features.

She concludes:

At best, rubrics are filled with actions that are sometimes associated with effectiveness, not foolproof indicators of effectiveness. This leaves evaluators in the unenviable position of attempting to come up with feedback on a teacher’s performance based on a set of indicators that may not indicate anything. Given the importance of some features, the assumption may be that more is better, thus teachers are encouraged to ask more open-ended questions, engage students in more meaningful conversations, or encourage more participation. The inclusion of such indicators to mark the highest levels of proficiency on a rubric may inspire instrumental compliance rather than thoughtful integration. Unfortunately, encouraging participation for participation’s sake may not deepen or extend learning opportunities. But, considering how participation could contribute to the goal of the lesson (how is this effective?) or how participation has been attempted (how does the teacher encourage participation?) is likely to generate useful feedback aimed at improving or expanding effective practices.

 

Read Rachael Gabriel’s complete article, “Not Whether, but How: Asking the Right Questions in Teacher Performance Assessment.”

Innovations in Assessment Chat

A few weeks ago, NCTE held an online conversation about innovations in assessment. On the video, you can hear the questions and comments from moderator Darren Cambridge and his panel of education experts. What’s not evident in the video is an online chat room that ran concurrently, in which educators responded with some valuable thoughts.

Here are some highlights:

When asked to define “innovation” in assessment, participant Kathryn Mitchell Pierce replied, “I think innovation is when teachers have an opportunity to design experiences that help them get to know what their students are learning. . . . When an assessment experience helps us understand our students better, helps us understand our craft better, and helps our students grow DURING the assessment event, then I think we have innovation!”

Later, Cambridge asked teachers to describe innovations they had seen at the classroom level that deserved more attention. The question drew several noteworthy ideas.

Michael Rifenburg offered, “College-level writing teachers grading a student paper with the student present. And talking about how they came to the grade with the student sitting right there. I have never done it, but have thought about the pros and cons for quite a bit.”

Maria C posted, “I work at a school that has transitioned to a STEM school. As part of our model, we use problem-based learning in all of our classes. Students are posed a real-world problem, they research and propose a solution, and then propose their solutions to a panel of community members and experts. This allows us to integrate all of their literacy skills, as well as their collaborative and problem-solving skills. I think this demonstrates to our kids that the skills they are learning and practicing in school are not isolated, but rather must be practiced together to be meaningful.”

Cambridge himself chimed in, “One simple assessment practice that was perhaps innovative at the time I began using it in my own teaching was providing audio feedback to students. Students said they felt it was more personal—sometimes too personal!—and were more likely to respond to what I had said, whether or not they took my advice.”

Barbara 1 offered another activity: “Students pick out one sentence in the writing of another student and tell why that sentence works well. I’ve heard so many discussions branch out from the one sentence to a larger segment of the writing, but starting with one sentence provides a nonformidable beginning.”

Later, the conversation turned to the value of having students keep journals as an assessment tool, and participant Ali G offered this insight:

If the goal of assessment is to improve learning rather than “audit” learning, then the socioemotional aspects are essential. Journals are applicable for every subject area, and having students write/reflect on what they learned and point out their own connections and how it was relevant for them personally requires students to transfer knowledge, make connections, conceptualize important ideas, and reflect on their own learning, which is great for self-monitoring and metacognition.

The discussion around all these issues did not end when the chatroom closed. A week later, Rifenburg gave us this observation:

Darren’s second question has stuck with me since: can assessment be innovative if it only works for one classroom? In other words, does “innovation” necessarily involve malleability, the opportunity to transplant that assessment technique from one learning environment to the other?

I ventured an answer via Twitter and through the Blackboard Collaborate chat function. I answered “no” and suggested the opposite, that maybe innovation necessitates a grounding in the specific context.

I’m not in love with that answer, partially because I don’t know what “innovation” really means. Almost 5 days later, I don’t have a better answer. But the important thing is that I am still thinking about it.