Fall is “assessment season” in schools across the state. Teachers are engaged in informal assessment of their students, from administering Informal Reading Inventories (IRIs), to asking students to complete interest and motivation surveys, to taking anecdotal notes that show the many ways their students are growing as readers and writers. The goal of these assessments is to assist teachers in planning for instruction and meeting the diverse needs of students present in any given classroom.
However, teachers are also inundated with another type of data at this time of year – standardized test scores. These may include statewide testing and screening assessments, such as AIMSweb or DIBELS, meant to identify children who are at risk for reading difficulty. Our next series of posts will deal with how to make sense, in a meaningful way, of all the data piling up in the fall. What do various forms of assessment offer the classroom teacher? This month, we’ll share a few “Dos and Don’ts” of using standardized test results.
- DO look for trends in student performance. Standardized test scores can be a useful tool to identify gaps in a school’s literacy instruction. Did a large percentage of students struggle with inferential thinking questions? This could be an area to study with teams of teachers across grades. It remains important, however, to remember that standardized test scores should never stand on their own when making high stakes decisions. Caldwell (2007) reminds us that, while standardized test scores, viewed collectively, can provide information about potential areas for improvement in a school’s literacy curriculum, these assertions should be followed up by ongoing formative assessment and progress monitoring throughout the school year. Moreover, although scores from high stakes tests can provide an overall snapshot of student performance on a range of tasks, they don’t provide sufficient information to adequately guide teacher decision-making or to identify conditions under which struggling students can be successful. Classroom-based, formative assessments, on the other hand, are intended to guide teaching and learning.
- DO involve teachers in exploration of test scores. Armstrong and Anthes (2001) studied the characteristics of school districts that had effectively used data to improve student achievement. They found that, when shared collaboratively with all stakeholders, including teachers and administrators, data from test scores were instrumental in changing teachers’ minds about instructional practices. This study of trend data works best when scores are shared by the literacy professional in a routine and systematic way. A Professional Learning Community (PLC) or team meeting might be devoted to this purpose at the start of the school year. These meetings can also be a place for ongoing training and for working with teachers to “unpack” and interpret test scores.
- DON’T publicly compare scores across classrooms. As we talk with colleagues in various school districts, we’re saddened that standardized test scores are still often shared in a faculty meeting format with specific teacher names attached to student scores. This kind of public comparison can lead to an environment of mistrust and fear of retribution (Afflerbach, 2005). We need our schools to be open and collaborative environments with a shared vision of how to improve student learning. Comparing teachers in a public forum undermines this message.
- DON’T make decisions based on a single point of data. A number or a score rarely, if ever, shows the complexity of a reader. After further analysis of students who scored similarly on the Washington Assessment of Student Learning (WASL), Buly and Valencia (2002) explained that, while readers may score similarly on a standardized test, they can have quite different reader profiles. Students who receive a similar score on an assessment may have very different needs in terms of comprehension, fluency, vocabulary, and word identification. Thus, it’s necessary to follow up with more diagnostic testing when any student has been “flagged” as underperforming by a standardized test or screening exam.
These are our takeaways from “assessment season.” What are yours? How does your district filter all the data showing up at the start of the year? We’d love to hear what works for you. Join the conversation!
References cited in this month’s post
Afflerbach, P. (2005). National Reading Conference policy brief: High stakes testing and reading assessment. Journal of Literacy Research, 37(2), 151.

Armstrong, J., & Anthes, K. (2001). How data can help. American School Board Journal, 188(11), 38-41.

Buly, M. R., & Valencia, S. W. (2002). Below the bar: Profiles of students who fail state reading assessments. Educational Evaluation and Policy Analysis, 24(3), 219-239.

Caldwell, J. S. (2007). Reading assessment: A primer for teachers and coaches. Solving Problems in the Teaching of Literacy. Guilford Press.