Collaborative teams engage in professional learning when they focus on the results of their own efforts. In a professional learning community, data from team-developed common assessments serve as the linchpins of success.
Too often, however, teams are bogged down by data: the data set is too big, the opportunities for gathering the data are too sparse (just one or two common assessments in a quarter), the organization of the data is too time-consuming, the meeting time to discuss the results is too short, and so on. For these reasons, teams often confess they spend more time planning their work than examining its results. Planning isn’t bad; it just isn’t sufficient in a professional learning community. Healthy and productive teams always examine the impact of their best-laid plans.
As an alternative, try having data moments in every meeting rather than just awaiting the more formal but infrequent data meetings. Bring small sample sizes of data or even single artifacts of student work to explore together during the first five minutes of your meeting time.
Significant conversations and meaningful next steps can happen during data moments as teams explore questions such as the following:
Data moments with single artifacts:
- What score(s) would we give this student based on this work? Do we have inter-rater reliability? If not, what could we do to become more consistent?
- What are the student’s strengths? Has the student demonstrated growth over time (e.g., compared with prior assessments)?
- What are the student’s opportunities for growth?
- What instruction have we planned and/or delivered that should have addressed the challenges this student is still facing?
- What could we still do to help this student move forward?
- Do we have the right criteria? The right performance descriptors?
- Is there anything we should change about our assessments and attendant tools (rubrics, scales, scoring guides, etc.)?
Data moments with small sample sizes:
- What score(s) would we give these students based on the provided work? Do we have inter-rater reliability?
- What has everyone mastered within the sample size?
- What errors and misconceptions are evident in the sample size?
- Based on our collective expertise, is this sample of evidence representative of the larger group? If so, how do we know? If not, what additional evidence might we need to bring to the table at the next meeting?
- Are there artifacts in the sample set that we could use anonymously in our individual classrooms (using work from someone else’s classroom, so no teacher’s own students are exposed) to help students understand strong and weak work?
- How else could we use these few artifacts to help all of our students improve?
- What program improvements (curriculum, instruction, and assessment) do the samples suggest we might need to consider?
- What are plausible next steps for the students represented here, and would those next steps be beneficial to the larger group as well?
It’s surprising how much a team can learn in five short minutes of mining data evidence and artifacts. As teams become experienced with data moments, they also become more adept in their formal data meetings. A steady stream of data moments can provide more focus and clarity for a team’s planning efforts in the rest of the team meeting.