
Asking the Right Questions in Teacher Assessment

In the November 2015 Language Arts, Rachael Gabriel examines problems with how teachers are evaluated. Her research team reviewed the Measures of Effective Teaching (MET) project, an extensive work that studied the techniques of 3,000 teachers to determine which correlated with high value-added measures (VAMs). While the project identified many useful activities, Gabriel argues it has been used to support teacher evaluation rubrics that err by measuring quantity, not quality. She writes:

The major challenge of performance assessment via observation is that indicators are counted as if their presence or absence indicates quality. For example, one feature of classroom discourse that is often included in commercially available rubrics for observations is the use of open-ended and/or higher-order questions. Though the presence of higher-order questions . . . has been associated with increased engagement and achievement, its absence does not indicate lack of quality. . . . When analyzing MET project videos, we found higher-order questions in low-performing classrooms on every measure of the MET study, and high-scoring classrooms that had no evidence of higher-order questions.

Other examples of this abound:

When it comes to opportunities to develop literacy, it isn’t the fact of allotted time for independent reading or writing, but rather the nature and use of that time that determines its value as a practice.

For example, several videos of MET project classrooms included time spent writing for five minutes or more, but the writing tasks often involved filling in blanks of a formulaic paragraph structure or copying notes from the board into a graphic organizer. Neither of these tasks involves a robust opportunity to develop literacy because students are not generating original language, employing a writing strategy, writing for a purpose, or writing to an audience. However, in observation, especially brief observation, it may appear that students are all engaged in writing, and this instrumental engagement may be viewed as evidence of effectiveness because students are quietly complying with a writing-based activity.

Why does a rubric of activities fail to indicate quality?

It could be that every observable feature or “best practice” involves a compromise and thus cannot be viewed in isolation as evidence of effectiveness or not. For example, calling on an equal number of boys and girls may extend the length of discussion and limit time for independent practice. Similarly, pursuing a back-and-forth discussion to support a student’s understanding might limit other students’ participation. A teacher could invest in one indicator of effectiveness at the expense of another. Thus, effective teaching may be about managing the dynamic balance of certain features of instruction rather than simply displaying such features.

She concludes:

At best, rubrics are filled with actions that are sometimes associated with effectiveness, not foolproof indicators of effectiveness. This leaves evaluators in the unenviable position of attempting to come up with feedback on a teacher’s performance based on a set of indicators that may not indicate anything. Given the importance of some features, the assumption may be that more is better, thus teachers are encouraged to ask more open-ended questions, engage students in more meaningful conversations, or encourage more participation. The inclusion of such indicators to mark the highest levels of proficiency on a rubric may inspire instrumental compliance rather than thoughtful integration. Unfortunately, encouraging participation for participation’s sake may not deepen or extend learning opportunities. But, considering how participation could contribute to the goal of the lesson (how is this effective?) or how participation has been attempted (how does the teacher encourage participation?) is likely to generate useful feedback aimed at improving or expanding effective practices.

 

Read Rachael Gabriel’s complete article, “Not Whether, but How: Asking the Right Questions in Teacher Performance Assessment.”

Innovations in Assessment Chat

A few weeks ago, NCTE held an online conversation about innovations in assessment. On the video, you can hear the questions and comments from moderator Darren Cambridge and his panel of education experts. What’s not evident in the video is an online chat room that ran concurrently, in which educators responded with some valuable thoughts.

Here are some highlights:

When asked to define “innovation” in assessment, participant Kathryn Mitchell Pierce replied, “I think innovation is when teachers have an opportunity to design experiences that help them get to know what their students are learning. . . . When an assessment experience helps us understand our students better, helps us understand our craft better, and helps our students grow DURING the assessment event, then I think we have innovation!”

Later, Cambridge asked teachers to describe innovations they had seen at the classroom level that deserved more attention. The question drew several noteworthy ideas.

Michael Rifenburg offered, “College-level writing teachers grading a student paper with the student present. And talking about how they came to the grade with the student sitting right there. I have never done it, but have thought about the pros and cons for quite a bit.”

Maria C posted, “I work at a school that has transitioned to a STEM school. As part of our model, we use problem-based learning in all of our classes. Students are posed a real-world problem, they research and propose a solution, and then propose their solutions to a panel of community members and experts. This allows us to integrate all of their literacy skills, as well as their collaborative and problem-solving skills. I think this demonstrates to our kids that the skills they are learning and practicing in school are not isolated, but rather must be practiced together to be meaningful.”

Cambridge himself chimed in, “One simple assessment practice that was perhaps innovative at the time I began using it in my own teaching was providing audio feedback to students. Students said they felt it was more personal—sometimes too personal!—and were more likely to respond to what I had said, whether or not they took my advice.”

Barbara 1 offered another activity: “Students pick out one sentence in the writing of another student and tell why that sentence works well. I’ve heard so many discussions branch out from the one sentence to a larger segment of the writing, but starting with one sentence provides a nonformidable beginning.”

Later, the conversation turned to the value of having students keep journals as an assessment tool, and participant Ali G offered this insight:

If the goal of assessment is to improve learning rather than “audit” learning, then the socioemotional aspects are essential. Journals are applicable for every subject area, and having students write/reflect on what they learned and point out their own connections and how it was relevant for them personally requires students to transfer knowledge, make connections, conceptualize important ideas, and reflect on their own learning, which is great for self-monitoring and metacognition.

The discussion around all these issues did not end when the chatroom closed. A week later, Rifenburg gave us this observation:

Darren’s second question has stuck with me since: can assessment be innovative if it only works for one classroom? In other words, does “innovation” necessarily involve malleability, the opportunity to transplant that assessment technique from one learning environment to the other?

I ventured an answer via Twitter and through the Blackboard Collaborate chat function. I answered “no” and suggested the opposite, that maybe innovation necessitates a grounding in the specific context.

I’m not in love with that answer, partially because I don’t know what “innovation” really means. Almost 5 days later, I don’t have a better answer. But the important thing is that I am still thinking about it.

Using Formative Assessment to Improve Practice

As Connected Educator Month continues, we offer this advice from Dan Fowler, Assistant Principal at Eastern Elementary School, Washington County, Maryland.

 

Using the Formative Assessment Process with Teachers to Improve Instructional Practice

The Maryland Department of Education defines “formative assessment” as “a process used by teachers and students during instruction that provides feedback to adjust ongoing teaching and learning to improve students’ achievement of intended instructional outcomes.”

Following the “5 Critical Attributes of Formative Assessment” can truly transform a teacher’s practice and can be transformational for students as well.

 

5 Critical Attributes of Formative Assessment

Learning Progressions, Success Criteria, Feedback, Self-Assessment, and Collaboration

As a school administrator, I have watched teachers who participated in our FAME (Formative Assessment for Maryland Educators) Cohort transform their practice and implement these critical attributes. I have also observed how powerful the process of formative assessment is in their classrooms and its impact on student growth.

As I went through the FAME Cohort with my teachers, I too began to transform my practice as an administrator and instructional leader. Collaborating with teachers through formative assessment allowed me to provide more descriptive feedback. It also allowed me to support teachers in their own learning to improve instructional practice.

“How can I use the formative assessment process with teachers not only to improve instructional practice but also to impact student learning?”

The Solution

When teachers develop SLOs (Student Learning Objectives), they identify a specific learning goal and a specific measure of student learning used to track progress toward that goal. Teachers must also identify the professional development, materials, and resources that will support their instruction and assist students in meeting their growth target.

My vision this school year is to work with teachers to develop an instructional goal, connected to the teacher’s SLO, that will help shape their overall approach to instruction and ensure students learn.

From the teacher’s instructional goal, I collaborate with individual teachers to develop a Criteria for Success that identifies key instructional strategies that are specific, concrete, and descriptive of what success looks like.

This Criteria for Success will assist teachers in:

  • Clarifying expectations within instructional practices to help them meet their instructional goals,
  • Obtaining feedback around their instructional practices related to their Student Learning Objective,
  • Providing descriptive and specific feedback that encourages reflection,
  • Developing next steps to refine instructional practice related to the teacher’s instructional goal, and
  • Promoting teacher self-assessment.

As an instructional leader, I feel it is necessary to use the formative assessment process to engage in dialogue, descriptive feedback, and reflection around instruction. By utilizing the 5 Critical Attributes of Formative Assessment in a coaching method with teachers, I can provide feedback that will not only promote the improvement of instructional practice but also impact student learning. In turn, I hope this process will promote self- and peer-assessment and lead to a collaborative culture where we are all partners in improving instruction. Furthermore, this process will help teachers feel comfortable taking instructional risks and become empowered to take ownership of their own learning.

 

Connected Educator Month, The Challenge

Throughout the month of October, NCTE has been a theme leader, covering “Innovations in Assessment.” We have also been issuing a challenge: How can we re-envision assessments for accountability and equity?

In the debate over the reauthorization of the Elementary and Secondary Education Act this past year, many teacher groups have come out strongly against continued yearly standardized testing of all students, noting its often disastrous impact on the learning environment in schools and the inability of testing results to provide a complete picture of student learning. Many members of the public have also expressed dissatisfaction with overtesting, with families across the country (over 20% in New York) opting their children out of state tests this year. However, many civil rights organizations, natural and traditional allies of teachers, have vehemently opposed any reduction in testing or in states’ accountability to act on the inequities that tests uncover. They argue that if we don’t test all students every year, we have no way of knowing whether students in certain communities or in vulnerable and traditionally underserved groups (such as students in poverty, students of color, students with disabilities, or English language learners) are learning what they need to know to be successful in adult life. Addressing these inequities is a crucial civil rights issue, and unless inequities are measured, they are easy for policymakers and district leaders to ignore.

Holding leaders accountable for their responsibility to ensure that ALL students have access to a high-quality education and graduate with the skills they need to be successful in college, careers, and civic life was a driving force behind the debate about ESEA fifty years ago. Ensuring equity is a key civil rights issue. However, relying on yearly standardized testing as the sole measure of success is a deeply flawed approach to addressing the issue. Challenges in the current debate over reauthorization of ESEA reveal a lack of understanding about alternative ways to meet this imperative.

We invite you to join us in the challenge to envision what an accountability system of the future might look like, one that:

  • Engages the need for equity head-on while also ensuring that evidence of student learning is gathered in ways that are consistent with good instructional practice
  • Mirrors the ways that educators themselves effectively use evidence to improve instruction
  • Measures the full range of important contributions to student learning and development, providing a more holistic view of student progress

It is also essential that any new system focus on holding the whole educational system (including state policymakers) accountable for its results, not individual teachers or their students.

We’re inviting people with innovative ideas in this arena, particularly those who are already experimenting with new approaches, to share them through a series of online discussions during Connected Educator Month. NCTE will continue to explore this challenge beyond October, and we hope your contributions during CEM will launch deeper collaboration with us in the coming months. If you are interested in working with us on this issue but can’t commit to doing anything in October, please get in touch anyway. Have an approach to share? Let us know! Contact Darren Cambridge, NCTE director of policy research and development, at dcambridge@ncte.org or +1-202-270-5224.

3 Ways to Support Formative Assessment

At NCTE, October is Connected Educator Month. We’ve been asking our members to reflect on innovations in assessment, and you have. Lisa Lienemann, an ELA teacher at Lockerman Middle School in Caroline County, MD, offered the following tips for principals:


3 Important Ways Leaders Can Support the Formative Assessment Process

 

  1. Shift the Conversation

In many places, the relationship between teachers and principals is still rooted in a punitive tradition: principals wield their pens in opposition to teacher efforts when what they see doesn’t fit the traditional boxes and molds. Formative assessment, when implemented as a process that informs teaching, is a complex approach that takes time to hone and polish. If principals want teachers to get good at it, they need to invite teachers to a new conversation, one that shifts from “Here’s what I didn’t see. . . .” to “Why did you . . . ?” If what we want to see in our students is a growth mindset, don’t we need to encourage the same in our teachers?

 

  2. Let Them Get Messy

Try. Experiment. Attempt. Jump in. Take a stab at it. Give it a go. These are some key messages leaders need to send to teachers. Teachers, as a rule, like things to be planned and outcomes to be predictable. They like the tried and true. In the 21st century, we’ve got to let go of this a little. Advancement demands it, but unless leaders give teachers permission to use their classrooms as one might a science lab, with a well-thought-out experimentation plan and a means for analyzing and evaluating the results, they will continue teaching the way they always have. We already know that isn’t reaching kids at the levels it needs to, and kids are becoming increasingly disaffected and disengaged from the learning process as school looks less and less like real life by the minute. Derail this train and put teachers on a new path by giving them permission to jump the tracks.

 

  3. Connect Efforts to Tools Already in Use

Do you have a dusty evaluation tool in your toolbox that you’re not leveraging for teacher and student success? We did. In my district, the SLO [student learning objective], a misunderstood and maligned creature, was seen by teachers mostly as a distasteful task that had to be completed before they started doing their real jobs. By bringing the SLO more directly into the light, key leaders came to understand its rationale better, as a tool to encourage teacher growth and reflection, not as a caught’ya. Instead of reinventing the wheel, look around for tools already in use that might be reevaluated and reconsidered in light of the new conversation and approach.