In response to the growing number of K-12 public schools in the United States shifting to distance learning to slow the spread of COVID-19, the federal government began issuing waivers for the Every Student Succeeds Act 2019-2020 spring assessments. Even though the U.S. Department of Education has waived the requirement, states are understandably reluctant to scrap the assessments altogether given the significant time and money spent developing them.

Though moving spring summative testing to the fall is one option for states to consider, administering a spring 2019-2020 assessment in the fall of the 2020-2021 academic year is not as simple as it may seem. As states figure out how to move forward, they will need to consider what information is needed from fall assessments and how to redesign their summative assessments, if possible, for those purposes. To do that, states will need to consider certain factors:

  • Should assessments capture students’ current standing on the previous grade’s content?
  • Should assessments capture students’ standing on the current grade’s content?
  • How will states use testing results to guide instruction?
  • How will scores be evaluated and reported?

Assessing on Content

Previous Grade Content

States must decide what grade-level test to give to each student. One option is to give students the end-of-year test from their previous grade (e.g., fourth-graders are given the end-of-year third-grade test). This would allow teachers, schools and parents to understand where students stand on previous-grade content and identify content areas where students are struggling (via sub-scores).

The interpretation of scores generated by fall testing will be quite different from what it would have been had the tests been administered in the spring. For example, when third-grade math is assessed in the spring, the scores capture students’ knowledge of third-grade math and, therefore, whether they are prepared for fourth-grade math. If these tests are given in the fall, however, then what is being assessed is the distance learning from the spring and any additional learning students gained from summer activities, or the learning gaps resulting from lost instruction during the spring and summer. Under the current circumstances, the assessments are likely to measure some of each, depending on students’ experiences.

If the fall assessment shows that students are not prepared for the next grade’s curriculum, schools could hold students back a grade or adjust the school’s overall curricular scope and sequence. It’s unknown, however, whether testing vendors could deliver fall data quickly enough to make those critical decisions and adjustments in time. It’s also unlikely there would be a strong appetite for holding students back a grade, as learning loss, and thus preparedness for the next grade, will vary based on factors outside students’ control, such as the quality of distance learning, online access and parent engagement. (For a deeper look into the implications for student promotions, see the third blog in this series, to be posted on Friday.) A more likely scenario is that teachers will be expected to help students who experienced large learning losses “catch up.”

For example, if students are performing low on specific third-grade content, fourth-grade teachers could revisit the third-grade curriculum to fill in those gaps before moving on to on-grade material. Or, teachers could determine which topic areas from third grade are critical for learning fourth-grade content and concentrate on shoring up only those gaps.

Given the likely learning loss from decreased in-person instruction this spring and summer, fall assessments cannot effectively evaluate students’ end-of-year knowledge of the previous grade’s content. If they are administered in this manner, expect to see less growth, no growth, or a decline in student scores compared to what would have been expected without the instructional interruption. Treating these scores as if they were spring scores would misrepresent state education trends.

Current Grade Content

A second option is for states to give students the end-of-year test for their current grade (e.g., incoming fourth-graders are given the end-of-year fourth-grade test). While this is an unconventional option, it would highlight where students are performing well in on-grade areas and where they need instruction, giving teachers a roadmap of what content to focus on throughout the year. This pre-test and post-test type design also would allow for a direct evaluation of next year’s learning gains.

While this approach would provide an easier path forward for teachers, it neglects to recognize that there could be prior grade content that students either did not effectively learn or lost due to school closures, challenges with distance learning and/or summer learning loss. With testing on current grade material, educators also wouldn’t glean as much information on the impact of school closures on learning loss or the success of online learning.

If gathering this information is a priority for states, they could administer a blended on-grade and previous-grade assessment. Specifically, testing vendors could prepare cross-grade diagnostic tests from the item pools to assess prerequisites from the prior grade and content about to be addressed. This would allow teachers to remediate as needed and cull what isn’t necessary. This option is likely to address the needs of teachers and schools much better than administering existing spring assessments. However, this approach would require more effort from testing vendors and states.

Reporting Scores

In the spring, summative assessment scores are reported for individual students and then aggregated at the classroom, school, district and state levels. These aggregated reports allow classroom teachers, as well as school, district and state staff, to evaluate students’ end-of-year comprehension of on-grade standards after those standards have been taught for a full year. That aggregate-level information is then used to evaluate how well a school is doing through accountability metrics.

Because fall scores will be influenced by learning loss and gains, much of which is outside the control of teachers and schools, they will not provide the same information and should not be used in the same way. This raises the question of whether reporting aggregate scores in the fall has any value. If fall tests are primarily used to help teachers evaluate students’ current standing on content and to guide instruction, then student-level and aggregated classroom-level score reports would provide the most benefit. This level of reporting could give teachers information to evaluate, and quickly act upon, student learning needs.

Use Purpose to Determine Path

Overall, fall assessments may provide schools and teachers with useful information to guide instruction for the 2020-2021 school year. Before states go all-in on administering fall assessments, they should carefully consider the value of fall testing, what purpose it should serve, and whether modified spring assessments can meet those needs.

Summative spring assessments were not designed to provide diagnostic information, so if the intention is to use fall assessments to identify specific content areas where learning loss occurred, existing spring assessments will likely have little value. It may be possible to leverage states’ item pools to build a statewide assessment that would identify content areas requiring remediation and better meet the needs of teachers and schools. Once states have clearly defined their intentions for fall testing, they can make informed decisions about the score reports educators will need and appropriately evaluate their fall testing options.

This article is the first in a three-part education series by HumRRO’s Education, Research and Evaluation team. Read more on the effects of canceled assessments in part 2 and part 3 of the series.

About the Author:

Bethany H. Bynum, Ph.D.

Manager