A More Rigorous Approach to Determining Intervention Effectiveness

A nearly $900K grant from IES supports Ethan Van Norman’s work to develop a quantitatively rigorous and user-friendly measure of the effectiveness of interventions for students with disabilities and learning difficulties.

Story by Kelly Hochbein

Schools provide interventions for students with disabilities and learning difficulties to help those students overcome academic challenges and succeed in the classroom. But how does a teacher or researcher know whether an intervention is working for an individual child?

Applied researchers—those who investigate academic interventions in schools—and practitioners such as school psychologists and special education teachers traditionally use visual analysis to monitor individual student improvement, examining a graph based on student performance before, during and after an intervention.

“It makes sense just to monitor growth, but there are a lot of statistical variables and statistical problems that go into modeling growth for an individual student because most of our available techniques rely upon hundreds or thousands of students to model growth,” explains Ethan Van Norman, an assistant professor of school psychology.

Quantitative researchers, or statisticians, use these techniques to conduct large-scale analyses via studies of thousands of students to determine the effectiveness of an intervention. Van Norman hopes to harness the power of large-scale analysis for use with individual students. “So how do you extract methods to make growth defensible with the flexibility of monitoring an individual student? … How can we give teachers actionable data to determine whether they can change the intervention or keep things the same?” he asks.

Ideally, applied researchers could use the complex measurement tools available today to make such decisions. However, these tools require the ability to manipulate syntax in statistical programs—programs that many researchers might not even own or know how to use. In addition, applied researchers do not always have the capacity to recruit and intervene with thousands of students for a single study.

“There’s something to be said about statistical literacy: knowing what you’re measuring and knowing what you’re doing,” Van Norman explains. “But for some people, I would say their efforts are better used doing the interventions or doing the studies out at the schools. [Quantitative researchers] can kind of try to meet them and support them.”

Van Norman seeks a “meeting in the middle,” through which complex statistical tools are made accessible to practitioners who may not have been trained to use them. He and his colleagues at the University of Texas at Austin plan to develop a free-to-use website that calculates or measures intervention effects for single-case designs, which are studies in which the subject serves as his or her own control—in this case, an individual student or a small group of students. The team includes a “cross-section” of researchers: Van Norman, who works on both quantitative and applied research, is joined by a statistician as well as researchers who work extensively with single-case design. A three-year, $899,769 grant from the Institute of Education Sciences (IES) supports their work.
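To make the idea of a single-case design concrete, the brief Python sketch below shows one student's baseline scores serving as that same student's own control. It is illustrative only: the scores, the measure and the simple mean-difference summary are hypothetical and are not the team's planned methodology.

```python
# Illustrative sketch of a single-case AB design (hypothetical data).
# The student's own baseline phase acts as the control condition for
# that student's intervention phase.

baseline = [12, 14, 13, 15, 14]        # weekly scores before the intervention
intervention = [16, 18, 19, 21, 22]    # scores for the same student during the intervention

def mean(scores):
    return sum(scores) / len(scores)

# A very simple descriptive summary: how much did the phase mean shift?
effect = mean(intervention) - mean(baseline)
print(f"Baseline mean: {mean(baseline):.1f}")
print(f"Intervention mean: {mean(intervention):.1f}")
print(f"Raw phase-mean difference: {effect:.1f} points")
```

A practitioner doing visual analysis would graph those two phases and judge the change by eye; the project's aim is to put a statistically defensible estimate of the effect, and of its uncertainty, behind that kind of judgment.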

“Basically, the idea is: How can we [develop] a quantitatively rigorous measure of treatment effects without putting the burden on the applied researchers to figure out how to do it?” says Van Norman, who serves as the principal investigator of the project.

Pinpointing Intervention Effectiveness

Van Norman and his colleagues, David A. Klingbeil and James E. Pustejovsky of the University of Texas at Austin, want to be able to determine whether an intervention is working for a child at a given point in time. They also seek the best ways to combine the outcomes of multiple single-case designs to evaluate academic interventions. They hope to do this by applying complex statistical analyses to obtain better estimates of treatment effects.

“We’re really good at identifying the average effect [of an intervention] across a large number of students,” he explains. “What we’re not good at is understanding the variability of that effect. So in what context is this given intervention more or less effective?”

First, Van Norman and his colleagues will conduct an exhaustive literature review to extract data from previously published—and possibly even unpublished—studies. Then, in developing their statistical tool, the team will use Bayesian statistics, an alternative approach to statistical estimation that allows for better estimates of variability among students—“the crux of individual differences,” he says.

Bayesian statistics has become computationally feasible only in the past 20 to 30 years because of the complex algorithms and computer simulation methods it requires. The approach allows for informed priors—baseline assumptions about what a treatment effect might look like—which can provide a more precise prediction.

“Instead of relying upon large sample theory or large groups of students, Bayesian analysis gives you some flexibility [so] you can make reasonable assumptions about what your outcomes might look like, to give you more horsepower with fewer students or less data,” Van Norman explains.
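As a rough illustration of how an informed prior can sharpen an estimate drawn from only a few data points, the sketch below works through a textbook normal-normal conjugate update in Python. The numbers, the assumed known observation variance and the model itself are hypothetical simplifications, not the project's actual statistical machinery.

```python
# Illustrative Bayesian update with an informed prior (hypothetical numbers).
# A prior belief about the treatment effect, e.g. drawn from past studies,
# is combined with a handful of observations from a single student.

prior_mean, prior_var = 4.0, 9.0       # expect roughly a 4-point gain, but with wide uncertainty

observed_effects = [6.0, 5.0, 7.0]     # a few session-level gains for one student
obs_var = 4.0                          # observation variance, assumed known here for simplicity

n = len(observed_effects)
sample_mean = sum(observed_effects) / n

# Standard conjugate normal-normal update: precisions (1/variance) add.
post_precision = 1 / prior_var + n / obs_var
post_var = 1 / post_precision
post_mean = post_var * (prior_mean / prior_var + n * sample_mean / obs_var)

print(f"Posterior estimate of the effect: {post_mean:.2f} (variance {post_var:.2f})")
```

Even with only three observations, the posterior blends the prior expectation with the student's own data, which is the kind of “horsepower with fewer students or less data” Van Norman describes.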

The team believes its work will enable researchers to determine not only whether an intervention is effective, but also the conditions under which it works. They have three years to “stress test” their tool, on both the statistical and applied sides, with applied researchers providing feedback on its usability and how it might be improved.

Van Norman says his broader ambition within school psychology is to bridge the divide between quantitative and applied educational researchers by “doing quantitatively, methodologically rigorous work, but work that also speaks to day-to-day classroom practice.”

That’s not so easy to do, he says, but it’s critical.

“Some of my colleagues have made the argument that if a teacher goes to the research literature, finds an evidence-based intervention, and then it doesn’t work in their school setting, that actually just furthers the divide between research and practice. So if you can get a better estimate of when it’s likely to work and not work, just beyond whether it works, you’re going to get more momentum and more buy-in with these evidence-based interventions, instead of people just kind of punting on the issue.”
