How PLCs Use Assessments
We received a series of questions from a school grappling with developing common assessments. I summarized the questions and attempted to provide a brief response for each.
1. In establishing the essential learnings, should we begin with the intended learnings for the entire year or just create the essential learnings unit by unit as we proceed through the year?
It is best to “begin with the end in mind.” A team should agree on what they want students to know and be able to do as a result of taking the course. Once they are clear on that, they can begin thinking in terms of units. You won’t have a guaranteed and viable curriculum for your students until the team can answer the question, “What is it we want them to learn in this course?” Then they break it down into, “What is it we want them to learn in this unit?”
2. When common assessments are created, does it matter if the point totals are the same? As an example, this past summer we talked about how teachers can always enrich material beyond the essential learning of the course so long as they are teaching the essential learning and administering the common assessments created by the team. There are other teams that want to give 100 percent identical tests, but because of other differences in what they do, members weight their tests differently (e.g., it might be worth 20 points in one class for a teacher who gives fewer total points, but 40 points in another class for a teacher who gives more). Am I correct to think such differences are permissible so long as we are being true to the spirit and process of PLCs?
A common assessment might include questions that are unique to a teacher, so total points may not be the same. The issue the team must consider is “How did each student do on each of the skills or concepts we taught?” So, the team should be more interested in how each student did on the eight questions dealing with this skill than on how many points a student earned on this test.
I’m not sure why teachers are using different point values for the same tests, and you may want to have some conversations with this team about what a grade represents. It sounds as if students who demonstrate the same proficiency in the same course could receive different grades at the end of the course depending on the teacher to whom they are assigned, and that is troubling.
3. When attempting to create common assessments, there also seems to be debate on the prioritization of formative versus summative assessments. People who are brand new to the process in my building (including myself) seem more comfortable in first establishing common summative assessments as a means of “getting their feet wet” in the process.
In our school, we asked teams to develop the common summative assessment they would use as the final exam immediately after they determined their essential outcomes. This again helped them begin the course with the end in mind—they knew what all students must learn and how they were going to demonstrate their learning. But teams then immediately created common formative assessments for each unit. So, I don’t think this is a case of summative or formative—both need to be created.
4. While on the topic of formative assessments, there has also been discussion about what constitutes a formative assessment. We have been told that formative assessments are basically quizzes.
Formative assessments are part of any assessment process used to (1) help the teacher (and the individual student) identify which students are struggling to achieve the intended outcome, (2) provide those students with prompt intervention and support for the specific problem they are having, and (3) give students an additional opportunity to demonstrate that they have learned. So formative assessments can take many forms:

- A teacher who says, “Work on this problem while I come around the room to check on you,” is engaged in formative assessment.
- A teacher who assigns a paper and provides feedback on how to improve the draft as part of the writing process is engaged in formative assessment.
- Informal observations and exit slips can certainly be formative.
- Preassessments can be formative if they are intended to “inform” (hence formative) the teacher about the existing knowledge and skills of students. If students demonstrate they already know a skill, the teacher adjusts instruction and doesn’t devote time to teaching something they have already mastered.
- Even a team-developed end-of-unit exam can be used formatively if the school has a plan to provide systematic intervention and gives students another chance to take the exam after they have demonstrated, in the intervention setting, that they have learned.
In other words, it is a big mistake to say that quizzes are formative and tests are summative. It is not the assessment itself, but how you use the assessment that determines if it is formative. The same test could be summative for one team and formative for another. It is what happens after the assessment that determines if it is summative or formative. Formative assessments can take many forms, but team-developed common formative assessments must be part of the assessment process in a PLC.
5. There has also been discussion about minimum benchmarks for how many common assessments should exist in a course or in a given grading period.
Monitoring of student learning is most effective when it is frequent. It would be a mistake for teams to teach for long stretches without checking for student learning as a team through common assessments. Again, good teachers are checking for student understanding all the time, minute by minute as they are teaching. But teams should also be checking for understanding on a regular basis. I would question any team that goes more than three or four weeks without checking for student learning through a common assessment. Typically, once teams see the benefit of team-developed common assessments, they increase the frequency of the assessments.
6. Lastly, I took away from the conference in Washington, DC, that data analysis should not be done by looking at total raw scores (e.g., an average score of 32/40 on a test) but rather at individual items (e.g., 40 percent of students missed question #4, which prompts a discussion of why that might be).
When looking at results, the team should first look student by student, skill by skill, rather than looking at individual items. The initial question should be, “What percentage of our students were able to demonstrate proficiency on this essential skill?” The result should allow a school to know, by name and by need, the students who require additional support and the precise skill or concept where they struggle. The team then can turn its attention to individual members who seek help in teaching a particular skill. Finally, the team can identify a problem area where no one seems to be getting good results and develop a strategy for improving as a team.
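For teams that keep item-level results in a spreadsheet, the student-by-student, skill-by-skill analysis described above can be automated. The sketch below is a hypothetical illustration only: the skill tags, student names, proficiency cutoff, and data layout are all invented for the example, and a team would substitute its own items and standards.

```python
# Hypothetical sketch: tally skill-by-skill proficiency from item-level
# results on a common assessment. Assumes each test item has been tagged
# with the skill it assesses; all names and numbers are invented.
from collections import defaultdict

# Which skill each item assesses (item number -> skill name).
ITEM_SKILLS = {1: "fractions", 2: "fractions", 3: "decimals", 4: "decimals"}

# Each student's item results: 1 = correct, 0 = incorrect.
results = {
    "Student A": {1: 1, 2: 1, 3: 0, 4: 0},
    "Student B": {1: 1, 2: 1, 3: 1, 4: 1},
    "Student C": {1: 0, 2: 1, 3: 1, 4: 1},
}

# Assumed cutoff: a student is proficient on a skill if they answer
# at least 75 percent of that skill's items correctly.
PROFICIENCY_CUTOFF = 0.75

def proficiency_by_skill(results, item_skills, cutoff=PROFICIENCY_CUTOFF):
    """Return (% proficient per skill, students needing support per skill)."""
    percent_proficient = {}
    needs_support = defaultdict(list)
    for skill in set(item_skills.values()):
        items = [i for i, s in item_skills.items() if s == skill]
        proficient = 0
        for student, answers in results.items():
            score = sum(answers[i] for i in items) / len(items)
            if score >= cutoff:
                proficient += 1
            else:
                # This student needs intervention on this specific skill.
                needs_support[skill].append(student)
        percent_proficient[skill] = 100 * proficient / len(results)
    return percent_proficient, dict(needs_support)

percent, support = proficiency_by_skill(results, ITEM_SKILLS)
```

The output answers both of the team's first questions at once: what percentage of students reached proficiency on each essential skill, and which students, by name and by need, require additional support.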
Remember that in a PLC, collaborative teams use evidence of student learning in three ways:
- To identify students who need intervention
- To identify an area where I as an individual teacher can use some support to improve a problem area in my teaching
- To identify an area where we as a team need to improve
Only then should the team turn to individual items. I have seen teams focus exclusively on items and watched their meetings turn into critiques of the assessment rather than discussions of student needs and how to improve instruction. Don’t let that happen in your school.