Through the Looking Glass: Examining Data to Inform Instruction

Jamie Williamson, EdS

The debate about the importance of data in education is far from new. For years, educators have been working to better leverage data for student outcomes, but when we examine overall reading proficiency in the U.S., it’s clear that we are losing that battle. As I discussed in the Head Lines article in The Beacon Spring 2020 issue, the National Assessment of Educational Progress (NAEP) scores in reading have barely moved in the last 20 years: “The most recent report, released at the end of 2019, shows that only 35% of fourth graders and 34% of eighth graders are at or above proficient in reading, which was a decline from the 2017 report. If you dig a little deeper into the data, you’ll find that the most impacted readers, or readers who scored in the bottom 10th percentile, have not improved at all since 1992 (NAEP, 2019).” The impact of the COVID-19 pandemic has only heightened the need to support students’ literacy efforts, as data will almost certainly show that these scores have sunk even lower since 2019. 

To compound the achievement issue, schools are struggling to implement effective practices for using data to inform instruction. The results are that students are losing too much instructional time to poorly designed and deployed assessment programs, schools are becoming overly focused on the wrong types of data, and teachers are being shamed or penalized for outcomes without receiving the proper support or training to leverage the information they collect. There is surely a better way. To educators who follow research-based methodologies and practices, the science provides a clear guidepost: “Data use is critical to alignment: it should drive instructional improvement, differentiate resources for students, and strengthen collaboration among teachers, families, and administrators” (Kauerz & Coffman, 2013). Assessment then becomes a critical piece in mapping effective systematic change. 

The natural follow-up question is: What assessment are you using? I have been fielding and redirecting this question for the better part of two decades, and every time I encounter it, I reframe it with a few often overlooked but incredibly important questions: What are you trying to assess? And why? How is the data going to be used? If there is no clarity and transparency around these questions, then I would argue that the process is doomed from the start. The answer is not necessarily more data, but rather better data. So, what does it mean to acquire better data? It begins with addressing the three questions above. 

“The answer is not necessarily more data, but rather better data.”

What Are You Trying to Assess? 
This key question sets the context for the next two questions, and it will help us home in on the real priorities. (Hint: The answer is not reading, math, social studies, or academic achievement in a broad sense.) Since Windward’s program is focused on remediating language-based learning disabilities, I will keep our focus on reading and approach this question from a somewhat granular perspective. 
First and foremost, it helps to clarify whether we are assessing students’ understanding of content or acquisition of skills. We cannot simply stop at the broad category of reading. We need to break it down to the specific set of skills that we are targeting for improvement. A solid starting point, elucidated by the National Reading Panel in 2000 after its extensive, three-year study, as well as the decades of supporting research encompassed in the Science of Reading, is the Big Five components of reading: the alphabetic principle, phonemic awareness, fluency, vocabulary, and comprehension (National Reading Panel, 2000). An additional factor is the need for these components to be assessed in a developmentally appropriate way. For example, I would not be assessing or monitoring the reading comprehension of Grade K-1 students, because they are still primarily working on acquiring the alphabetic principle and phonemic awareness skills necessary to break the code and begin reading. Once specific skills are identified for assessment, we need to address the why. 
The Why: What Is the Purpose of the Assessment? 
For the last decade, the notion of Big Data has been at the forefront of conversations around how organizations can best serve their customers or constituents. Corporations collect and aggregate massive data sets to precisely target consumers’ needs. In the education sector, the Big Data conversation has centered around utilizing standardized tests to inform decisions at the school, district, and state levels, encompassing a broad range of initiatives such as curriculum planning, staffing, interventions, and benchmarks tied to students’ grade-level advancement. 
The problem with translating standardized test scores into instructional practice is that “big data produces measurements about schools and students after the learning process has taken place. [That is,] it is great at answering the question of which students need extra support; however, it’s too broad to give any indication of how those students can be best helped” (The Graide Network, 2018). In order to best target skills requiring remediation in real time, educators must also take snapshots of student progress at regular intervals, as well as periodic diagnostic checks of individual students’ strengths and weaknesses. Incorporating this “small data” enables educators to identify and address specific skill deficits as they occur; further, this process allows the student to become a true partner in their educational journey. Ideally, these three approaches—summative, formative, and diagnostic assessments—are represented in a comprehensive assessment plan, complementing one another to form a complete picture of a student’s academic progress. 

“In order to best target skills requiring remediation in real time, educators must also take snapshots of student progress at regular intervals, as well as periodic diagnostic checks of individual students’ strengths and weaknesses.” 

Summative assessments include the types of high-stakes tests that many people visualize when they hear the word assessment. These are typically administered at the conclusion of a unit or grade, are often heavily weighted, and offer insights into students’ overall knowledge or proficiency in a subject area (Yale Poorvu Center for Teaching and Learning, 2021). Because these types of assessments evaluate the success of instruction, they can be incredibly useful as an aerial view of student learning. For example, summative assessments in the form of standardized tests will show which students are at grade level for reading and which students require additional supports to meet grade-level benchmarks. However, as these assessments occur after learning has taken place, they lack the immediacy and flexibility of an assessment that occurs in the midst of instruction. 

Formative assessments, on the other hand, are designed to target learning gaps throughout the instructional process so that students and their instructors can respond in real time. “It can include students assessing themselves, peers, or even the instructor, through writing, quizzes, conversation, and more” (Theal and Franklin, 2010). Because the goal is to improve learning and not to earn a specific mark, this shift in perspective can bolster both students’ confidence and their ability to take ownership of their learning (Trumbull and Lash, 2013). “[Well-designed] formative assessment strategies improve teaching and learning simultaneously. Instructors can help students grow as learners by actively encouraging them to self-assess their own skills and knowledge retention, and by giving clear instructions and feedback” (Yale Poorvu Center for Teaching and Learning, 2021). 

Diagnostic assessments, also known as pre-assessments, can serve as a barometer that gauges what students already know about a topic. In tandem with formative assessments, employing periodic diagnostic checks can help instructors gain deeper insights into their students. Not only can these types of assessments inform a teacher’s lesson plans and learning objectives, but also, and most importantly, they can help determine patterns of each student’s strengths and weaknesses. As diagnostic assessments reveal areas that may need more (or fewer) instructional minutes, they become a useful tool for both teachers and students to differentiate instruction. 

Thoughtfully combining summative, formative, and diagnostic assessments in a strategic assessment plan allows for a balance between assessment and instruction, yielding timely, actionable data that is responsive to student needs. The final piece of the puzzle, then, is addressing precisely how we use the data we collect. 

How Is the Data Going to Be Used? 
Without a comprehensive plan tying student data to instructional objectives, administering all these assessments would be an exercise in futility. Sadly, we have seen the result of focusing on Big Data in the absence of infrastructural supports: a punitive environment for many educators, wherein teacher pay, promotion, and even district or school funding is threatened when student outcomes don’t align with statewide requirements. However, we do know that “when school leaders set goals for data use, create the infrastructure, and provide teachers the tools to use data, the results can mean greatly enhanced student performance outcomes” (Balow, 2017). 
To avoid teacher fear and the potential for teacher shaming, we first need to build a culture of data informed by continuous improvement. This type of culture begins with transparency grounded in clarity and purpose. That is, data should never be used for teacher evaluation, but rather for teacher support. When we shift the focus from student learning to high-stakes teacher evaluation, the incentive model shifts, and we run the serious risk of creating a system that rewards shortcuts and manipulation: one case in point is the Chicago Public School System’s high-stakes testing program. Fundamentally, using data to inform instruction is about impacting student learning. That means giving teachers the training, resources, and support to facilitate that process. 

“Fundamentally, using data to inform instruction is about impacting student learning.”

At Windward, providing a framework for teachers to utilize assessment data in the classroom is doubly important, as remediating students’ skill deficits is at the core of our academic program. With this in mind, I have outlined the considerations below for educational programs to leverage data for maximum impact. 

At the heart of it, putting infrastructure in place around the use of data to inform instruction is about communication. “The real shift occurs when everyone in [our] educational community starts to change what they talk about and how they respond to conversational outcomes” (Burroughs, 2020). As we have seen at Windward, when there is buy-in at all levels of the organization, with all members striving toward a common goal, the effect on student learning can be transformative.