Ed Tech Learning Outcomes: The impact of learning tools on disadvantaged students

Executive Summary

Education technology (ed tech) has been broadly championed as a key component of education reform in U.S. public schools. There are now thousands of tools, materials, and services on the ed tech market, many promising to help raise the level of U.S. K-12 education. While many of these tools are underpinned by solid theories of change and pedagogy, the vast majority have not been rigorously examined, evaluated, or validated.

Student learning products face a series of substantial barriers to accurate and quantitative impact assessment. The barriers include:

  1. Most of the data currently collected and publicly available is based on user perceptions rather than on documented learning improvements. User perception is a notoriously unreliable measure of efficacy.

  2. The use of real-time data for product development and improvement has been limited, even though ed tech products are often connected to digital platforms with robust data.

  3. There is a scarcity of validated metrics that measure the skills critical to education and career readiness.

  4. Assessments focus on achievement scores rather than learning growth or skills competency, and therefore fail to account for or explain a product's effect on a student's learning over time.

Boundless Impact Investing partnered with BrightBytes, a learning analytics and research firm, to study the impact of student learning products, focusing particularly on the effects of these products on learning gains in students from low-income communities. The study sought to integrate learning data from numerous service agencies, districts, and schools in order to assess the return on learning (ROL) of specific products. The research team captured and analyzed student data pertaining to user perception, usage frequency and depth, and the impact of each ed tech product on student test scores.

BrightBytes and Boundless focused on how specific educational tools affect student performance in math and reading in Title I schools (schools with high numbers or high percentages of children from low-income families) versus non-Title I schools. The study sought to answer the following questions:

  1. What is the impact of ed tech tool usage levels on learning gains as reflected in student test scores?

  2. When usage is equal, do disadvantaged students experience the same, smaller, or greater learning gains on tests compared to non-disadvantaged students?

The resulting framework aimed to identify the factors that best account for such learning gains with a particular focus on disadvantaged youth and the impact of digital learning on their achievement. The framework was applied to a dataset of close to 12,000 unique students, with nearly 8,000 attending schools classified as Title I, and approximately 4,000 students attending schools that do not receive Title I funding.
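The core comparison in this framework can be sketched in code: group students by Title I status and usage level, then compare average test-score gains across groups. The field names, usage tiers, and records below are hypothetical illustrations for the sketch, not the study's actual schema or data.

```python
# Minimal sketch of the learning-gain comparison, assuming a hypothetical
# per-student record with a Title I flag, a usage tier, and pre/post scores.
from collections import defaultdict
from statistics import mean

def mean_gain_by_group(records):
    """Average (post - pre) score gain for each (title_i, usage_tier) group."""
    groups = defaultdict(list)
    for r in records:
        gain = r["post_score"] - r["pre_score"]
        groups[(r["title_i"], r["usage_tier"])].append(gain)
    return {group: mean(gains) for group, gains in groups.items()}

# Illustrative records only -- not data from the study.
sample = [
    {"title_i": True,  "usage_tier": "high", "pre_score": 60, "post_score": 72},
    {"title_i": True,  "usage_tier": "high", "pre_score": 55, "post_score": 65},
    {"title_i": False, "usage_tier": "high", "pre_score": 70, "post_score": 75},
]

print(mean_gain_by_group(sample))
# → {(True, 'high'): 11, (False, 'high'): 5}
```

Holding the usage tier fixed while varying Title I status is what lets the framework address the second research question above: whether equal usage yields equal gains across school types.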

A statistical analysis based on schools' Title I status found that:

  1. There is no benefit to increased usage for any of the math or reading products studied when Title I status is ignored.

  2. Students attending Title I schools made significant and greater learning gains in both math and reading test scores than their non-Title I counterparts with similar usage levels on the same tools.

  3. Usage patterns differed over the year for Title I and non-Title I students. Students at Title I schools started using tools early in the school year, but their usage waned midway through the year. They were also more likely to use the tools on weekdays than on weekends, suggesting limited access to computers and/or the internet at home.

The study's scope is small but meaningful: the results point to interventions that could more precisely target the needs of students from low-income communities, and the study produced a tenable framework for quantitatively assessing the educational impact of student learning products. The hope is that this study gives impact investors guidance on what questions to ask when assessing the actual learning outcomes of ed tech products for disadvantaged students, making companies, investors, and other stakeholders more accountable to the youth they serve.

Christian Hodgson