Data-Informed Versus Data-Driven PLC Teams
One of the three big ideas of a professional learning community calls for us to be data driven. A recent blog post raised an interesting perspective, suggesting that we should be more concerned with being data informed. Is this merely a matter of semantics, or does the apparent distinction deserve our attention?
On his informative blog, Larry Ferlazzo (2009) cites Ted Appel, who contends, “If schools are data driven, they might make decisions like keeping students who are borderline between algebra and a higher level of math in algebra so that they do well on the algebra state test. Or, in English, teachers might focus a lot of energy on teaching a ‘strand’ that is heavy on the tests—even though it might not help the student become a lifelong reader. In other words, the school can tend to focus on its institutional self-interest instead of what’s best for the students. In schools that are data informed, test results are just one more piece of information that can be helpful in determining future directions.”
Indeed, the data that school-based teams focus on should be derived from common formative assessments. Because assessment is intended to inform teaching, there is some justification for thinking of the data as informing. So, if, as Ted Appel argues, “test results are just one more piece of information,” what should teams look for to enhance their conversations so they lead to greater levels of student achievement? Data generated from common formative assessments capture a single point in time. We also need to pay attention to trends and patterns, and those do not always emerge from common assessments, state test scores, benchmark assessments, or standardized assessments alone.
Professor John Hattie writes that teachers should see learning through the eyes of their students, and that students should become their own teachers. In both cases, the actions each takes are based on information about student learning, attitudes, and dispositions, among many other attributes. Numbers or data generated from common assessments and state tests are largely necessary but entirely insufficient for helping teachers and students make decisions about where they are going, how they are going, and where to next in the student’s learning (Hattie, 2008). Answering these three questions will surely require more information than can be obtained from assessment data alone. It has to involve conversations with students, teachers, and the other adults who intervene in the learning equation.
The extent to which schools ask questions of their students determines how deeply they understand the impact each adult is making. After all, how well we think we are giving feedback to students about their learning is best judged by the students’ reactions to that feedback. How often do we really get a student’s opinion about what works best for him or her? So, as Hattie suggests in his book Visible Learning for Teachers, we need to gather data about student perceptions of what works best for their learning. Data about their levels of engagement, their desire to learn, their need for appropriate feedback, and their perspective on the quality of teaching should all be added to the scores students earn on common formative assessments. This additional information will inform teachers not only about test results but also about students’ dispositions to learn.
What is interesting to me is that we could align our data gathering from students with the very questions PLC teams are encouraged to ask themselves:
What is it we expect students to learn?
How will we know when they have learned it?
How will we respond when they do not learn?
How will we respond when they already know it?
The questions Hattie (2012) encourages us to be mindful of for students again are: Where am I going? How am I going? Where to next?
In every case, we can gather data not only on student behavior but also on the adult actions that elicited the student responses. This, I suggest, will give PLC teams more appropriate and complete data from which to proceed in the classroom and in the school.
So, for me, data informed versus data driven is not merely a matter of semantics; it is clearly more complicated than that. We must be purposeful, attentive, and professional in examining all aspects that contribute to success, or the lack thereof, in student learning and instructional practice. As Hattie contends, teachers and school leaders must constantly examine their impact on student learning and make informed decisions on the basis of a range of evidence. We would be wise to take heed of his advice.
Ferlazzo, L. (2009, August 26). Data-driven versus data-informed. Retrieved from http://larryferlazzo.edublogs.org/2009/08/26/data-driven-versus-data-informed
Hattie, J. (2008). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. New York: Routledge.
Hattie, J. (2012). Visible learning for teachers: Maximizing impact on learning. New York: Routledge.
Meier, D. (2009, March 5). Data informed, not data driven. Retrieved from http://blogs.edweek.org/edweek/Bridging-Differences/2009/03/dear_diane_sometime_i_imagine_1.html