Prerequisites for Standards-Based Reporting

As we work with schools and teams implementing professional learning community concepts and practices, we are often asked, “Don’t we need to develop a standards-based report card?” This is a natural question that eventually flows from doing the work of a PLC. The more deeply teams embed PLC practices in their day-to-day work, the more they find their current report card is no longer a fit. The problem that can arise is this—as teams urge district leaders to develop a “standards-based” report card, they often bypass the prerequisite work necessary for effective standards-based reporting. They view standards-based grading as a noun—a report card—rather than a process of which the report card is merely the final piece.

In short, district and school leaders, working in tandem with each collaborative team, must do the prerequisite work associated with standards-based reporting or teachers will simply be giving a new name to their traditional practices. In this scenario, reporting of pupil progress doesn’t stand a chance of being either accurate or meaningful.

Just what is the prerequisite work associated with standards-based grading? We offer these suggestions. First, begin with the outcome in mind—that is, what is essential for each student to learn? The standards for which students must demonstrate proficiency must be crystal clear to teams. Such clarity can only result from deep, rich discussions within each team.

Clarifying Each Standard

We often use the analogy of sharpening a pencil when speaking of clarifying essential outcomes for students. Highly effective teams simply keep “grinding” away, making the standards sharper and sharper, not only in their own minds, but also in the minds of students and parents. This “sharpening” involves tackling a number of questions, such as:

  • Which of the standards are essential (i.e., which are the “power standards”)?
  • What does each standard mean?
  • How can each standard be broken down into learning targets and stated in student-friendly language?
  • How can the teaching and learning of each standard be paced within the school calendar?

Clarifying what is essential for each student to know and be able to do requires that collaborative teams literally become “students of the standards.” Rather than each teacher trying to figure out what the standards mean on his or her own, teachers study the standards with their colleagues!

Of course, even though teams will identify a number of essential outcomes or power standards, the amount of time students will need to learn each essential outcome will vary, depending on the complexity of each standard. Given that the amount of time allotted to schools each year is fixed, teams must “chunk out” the year, making sure adequate time is allocated to the teaching and learning of each standard. While the term pacing guide has acquired a negative connotation in many circles (primarily because such guides often become too prescriptive and rigid), collaborative teams must engage in discussions about time allocation—regardless of what it is called.

Adding Meaning to Standards

Standards-based reporting that is both meaningful and accurate requires collaborative teams to drill deeper into the meaning of each standard. They continue to “grind away,” sharpening the focus of student learning outcomes. They do this by tackling questions such as:

  • What level of proficiency must a student demonstrate in order to meet each standard?
  • How many data points will be needed to determine proficiency?
  • What would each standard, if met, look like in student work?

Clarifying each standard, adding meaning to it, and identifying the required level of proficiency, while necessary, are not in and of themselves adequate. Teams must also view the standards through the eyes of students. They must reach a common understanding about the kind of work they are looking for from students—what it should look like and the standard of quality it must meet. Importantly, discussions of standards of quality should also address the issue of data points—that is, how much data will be needed to assess proficiency, and when should this data be gathered? For example, will one assessment with a significant number of questions be preferable, or would shorter, more numerous formative assessments be better? Are we looking at progress over time, or a single, summative assessment?

Scoring the Standards

We had this experience while recently working with a high school team. The team had worked to identify what students should learn. They had all the relevant documents at the table when doing their work. They were absolutely clear about what standards were most important in the unit of study they would be assessing. They paid attention to district, state, and national standards, as well as state test item specifications. They worked to create common formative assessments to monitor the learning of students along the way. They collaboratively crafted questions in which the cognitive complexity of each question matched what students would face on state and other high-stakes assessments, and they agreed on a rough time frame regarding when they would teach the unit and administer the formative assessments. In other words, they were doing the right work of a collaborative team.

Here’s the glitch! They discovered two things that greatly impacted how they graded student work and eventually how they reported student progress. When collaboratively analyzing student work, they learned that individual teachers frequently gave different points for different answers and parts of answers. They also found that some teachers gave formative assessments under different conditions. For example, some teachers spent an entire class period reviewing for the assessments, while others spent little or no time reviewing. Some teachers allowed students to use class notes, while others did not. In short, this team quickly discovered they needed to collaboratively develop common scoring rubrics. It is simply impossible to develop effective standards-based reporting without addressing the issue of common scoring of student work.

Deep, rich discussion around the issue of common scoring ultimately leads to valuable and critical discussions about a number of related topics. For example, how will each question or parts of questions be weighted? How much weight will be given to homework or class projects? Will students be allowed, or even required, to redo work? If so, under what conditions and how much weight will be given to work that has been redone? These are but a few of the questions highly effective teams tackle.

The Folly of Perfection

Districts, schools, and teams should constantly seek to get better—to improve the quality of their work in order to positively impact student learning. They must also understand that perfection is an elusive goal—it’s always out there, but never quite within reach. This is particularly true of standards-based reporting. There simply is no perfect way to report pupil progress. Rather than perfection, the appropriate question is this: how do we improve the way we report student learning by making our processes and procedures more meaningful and accurate? We must realize that improvement is, indeed, a process, not an event. While perfection may ultimately be impossible, continuous improvement is not.

Our advice for moving to a more standards-based approach to reporting pupil learning levels is much the same as our advice for implementing virtually all PLC practices—that is, start with a smaller “guiding coalition” that collaboratively gains shared knowledge by seeking out best practice. This smaller guiding coalition should engage in action research, trying out various approaches and practices on a smaller scale. The key is always the same—get started, then get better, never forgetting to focus on results.

Effective teams that tackle the issue of standards-based reporting ultimately focus on the right questions—are the changes we are making helping us report student learning in more meaningful and accurate ways? Do they improve student and parent understanding? Are they better than what is currently in place? And, importantly, how can we make them even better?


Linsey Hope

Very informative - wondering if you have examples of what that looks like when it has been started/worked on by teacher teams. Mike Mattos has referred to an 'Essential learning chart' with the columns (student-friendly language, proficiency, prerequisites, pacing, extensions), and I'd love another example of what that looks like.



I appreciate you all sharing your thoughts here. This gives me several great ideas to share with the teachers and administrators at my school. We are moving into Common Core Standards right now and I am really hoping this proves to be an effective shift for our teachers and students. This should provide some much needed clarity on what teachers should focus on in order to improve student learning.



Thanks for the great post. You've given me something to think about in terms of clarifying which standards are essential or "power standards".


Staff at

@sl10426 - Wow! Thanks for sharing this detailed explanation! I think it will be very helpful to others interested in implementing standards-based grading in their schools.



I could not agree more with your thoughts on the universal standard and the measuring of success vs. potential.



My school has recently developed and implemented standards based report cards. The student report cards are several pages long. The front page contains their grades for each six weeks and then each six weeks has its own page with all benchmarks listed for each subject. The students receive the following codes for each benchmark based on how well they did on the benchmark test: AC (accomplished), IP (in progress), or NE (not evident). Parents like the fact that each benchmark the student is required to meet for the six weeks is listed on the report card and they can see which benchmarks have been accomplished and which benchmarks the student still needs to work on.

We determine benchmarks for the six weeks based on the standards we teach. Each and every benchmark is put on the report card with the standard code. On the front page of the report card we have come up with 9 core benchmarks we feel each student should meet before moving on to the next grade. To determine if a student has met a benchmark, we give benchmark tests at the end of each six weeks for each subject area. The benchmark tests consist of 5 to 10 multiple-choice questions for each benchmark. We as teachers made up each test and are in the process of redoing the benchmark tests to ensure each question is truly assessing the standards. We do have some teachers who teach to the benchmark test with heavy review of the exact test and some who just give the test. We are currently discussing ways to ensure the tests are not reviewed with the students before they are given.

We now give the benchmark tests as scantron tests and the results are submitted to the district. As teachers we can now analyze the data not only in our class, but across the district as a whole.


Where can I find serious work being done to automate student assessment so it's done continuously as a 'sampling' technique to determine the most useful of alternative presentation styles? Although a universal standard can give a measure of success in how well a child is doing as compared to his or her peers, it fails to address the most important measure: how well the student is doing in relation to their potential.

Measuring a child's likely potential at the moment wouldn't seem much different than correlating their test scores at any point in time with a register of known over- and underachievers at the same age with similar backgrounds. A correlation, not causation, analysis would seem to infer what modifications in the lessons presented would produce the best results for the individual. It would seem personalized learning can be a reality, but only if we accept our inability to provide it in a group setting.
