1 Introduction
During the 2019–2020 academic year, the Dynamic Learning Maps® (DLM®) Alternate Assessment System offered assessments of student achievement in English Language Arts (ELA), mathematics, and science for students with the most significant cognitive disabilities in grades 3–8 and high school. Due to differences in the development timeline for science, separate technical manual updates were prepared for ELA and mathematics (see Dynamic Learning Maps Consortium, 2020a, 2020b).
The purpose of the DLM system is to improve academic experiences and outcomes for students with the most significant cognitive disabilities by setting high, actionable academic expectations and providing appropriate and effective supports to educators. Results from the DLM alternate assessment are intended to support interpretations about what students know and are able to do and to support inferences about student achievement in the given subject. Results provide information that can guide instructional decisions as well as information for use with state accountability programs.
The DLM Alternate Assessment System is based on the core belief that all students should have access to challenging, grade-level content. Online DLM assessments give students with the most significant cognitive disabilities opportunities to demonstrate what they know in ways that traditional paper-and-pencil, multiple-choice assessments cannot. A year-end assessment is administered in the spring, and results from that assessment are reported for state accountability purposes and programs.
A complete technical manual was created after the first operational administration in 2015–2016. After each annual administration, a technical manual update is provided to summarize updated information. The current technical manual provides updates for the 2019–2020 administration. Only sections with updated information are included in this manual. For a complete description of the DLM assessment system, refer to previous technical manuals, including the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017).
1.1 Impact of COVID-19 on the Administration of DLM Assessments
The COVID-19 pandemic significantly impacted administration of the spring 2020 DLM assessment. Beginning in March 2020, in response to the pandemic, many states and local school districts closed in an effort to slow the spread of the virus, as recommended by the Centers for Disease Control and Prevention (CDC; 2020a, 2020b). During school closures, students across the country were unable to complete their spring assessments, including the DLM alternate assessments. As a result, on March 20, 2020, the United States Secretary of Education used her authority under the Elementary and Secondary Education Act of 1965 (ESEA, 1965), as amended by the Every Student Succeeds Act (ESSA, 2015), to invite states to submit one-year waivers of the assessment and accountability requirements. All 50 states, the District of Columbia, the Commonwealth of Puerto Rico, and the Bureau of Indian Education applied for and received these waivers (Recommended Waiver Authority Under Section 3511(d)(4) of Division A of the Coronavirus Aid, Relief, and Economic Security Act ["CARES Act"], 2020).
Due to the cancellation of spring assessment administration, very few students participating in the DLM assessment completed their assessments as intended. Thus, summative results were not provided in 2019–2020, as the available data were not an accurate and complete reflection of students’ knowledge, skills, and understandings. Instead, limited results were optionally provided to state education agencies to inform instructional decisions in the subsequent school year. For a summary of the results provided in 2019–2020, see Chapter 7 of this manual. The consortium governance board met with Accessible Teaching, Learning, and Assessment Systems (ATLAS) staff to discuss the level of score reporting that was appropriate and technically defensible. This information was also shared with the Technical Advisory Committee, which supported the approach.
This manual presents evidence for the limited results that were provided in 2019–2020, as well as other administration, test development, and research activities that occurred in 2019–2020 and were unaffected by the COVID-19 pandemic.
1.2 Background
In 2019–2020, DLM assessments were available to students in 20 states and one Bureau of Indian Education school: Alaska, Arkansas, Colorado, Delaware, District of Columbia, Illinois, Iowa, Kansas, Maryland, Missouri, New Hampshire, New Jersey, New Mexico, New York, North Dakota, Oklahoma, Rhode Island, Utah, West Virginia, Wisconsin, and Miccosukee Indian School.
Two DLM Consortium partners, Colorado and Utah, administer assessments only in ELA and mathematics.
In 2019–2020, ATLAS at the University of Kansas continued to partner with the Center for Literacy and Disability Studies at the University of North Carolina at Chapel Hill and the Center for Research Methods and Data Analysis at KU. The project was also supported by a Technical Advisory Committee.
1.3 Technical Manual Overview
This manual provides evidence collected during the 2019–2020 administration of science assessments.
Chapter 1 provides a brief overview of the assessment and administration for the 2019–2020 academic year and a summary of contents of the remaining chapters. While subsequent chapters describe the individual components of the assessment system separately, key topics such as validity are addressed throughout this manual.
Chapter 2 provides an overview of the purpose of the Essential Elements (EEs) for science, including their intended coverage of the Framework for K–12 Science Education: Practices, Crosscutting Concepts, and Core Ideas (National Research Council, 2012) and the Next Generation Science Standards (NGSS; NGSS Lead States, 2013). For a full description of the process by which the EEs were developed, see the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017).
Chapter 3 outlines evidence related to test content collected during the 2019–2020 administration, including a description of test development activities, external review of content, and the operational and field test content available.
Chapter 4 provides an update on test administration during the 2019–2020 year. The chapter provides a summary of the Instruction and Assessment Planner for assigning instructionally embedded assessments, and a description of new data extracts for monitoring assessment administration.
Chapter 5 provides a brief summary of the psychometric model used in scoring DLM assessments. This chapter includes a summary of 2019–2020 calibrated parameters. For a complete description of the modeling method, see the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017).
Chapter 6 was not updated for 2019–2020; no changes were made to the cut points used in scoring DLM assessments. See the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017) for a description of the methods, preparations, procedures, and results of the standard-setting meeting and the follow-up evaluation of the impact data. For a description of the changes made to the cut points used in scoring DLM assessments for grade 3 and grade 7 during the 2018–2019 administration, see the 2018–2019 Technical Manual Update—Science (Dynamic Learning Maps Consortium, 2019a).
Chapter 7 provides descriptions of changes to score reports and data files during the 2019–2020 administration to reflect the impact of COVID-19 on the assessment administration.
Chapter 8 was not updated for 2019–2020 due to a limited and non-representative sample of completed assessments as a result of COVID-19. For a complete description of the reliability background and methods, see the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017).
Chapter 9 describes additional validity evidence collected during the 2019–2020 administration not covered in previous chapters. The chapter provides evidence collected for four of the five critical sources of evidence: test content, response processes, internal structure, and relations to other variables.
Chapter 10 describes updates to required training and the professional development offered across the DLM Consortium in 2019–2020, including participation rates and evaluation results.
Chapter 11 summarizes the contents of the previous chapters. It also provides future directions to support operations and research for DLM assessments.