
Getting Started


Using Existing Evidence-Based Clearinghouses


Ohio does not endorse or require the use of any specific evidence-based clearinghouse; districts may use the clearinghouses or stand-alone research reviews they find most useful in terms of content and usability. Existing clearinghouses and stand-alone research reviews include, but are not limited to, the following.

Please note: Under ORC 3313.6028(C), materials in grades prekindergarten through 5 may not use the three-cueing approach to teach students to read.

Blueprints for Healthy Youth Development


What does it provide? Blueprints for Healthy Youth Development provides a registry of evidence-based positive youth development programs designed to promote the health and well-being of children and teens.

How does Blueprints evaluate evidence? Blueprints programs are rated as Promising, Model, or Model Plus. Promising programs meet the minimum standard of effectiveness. Model and Model Plus programs meet a higher standard and provide greater confidence in the program’s capacity to change behavior and developmental outcomes. See more information in their Criteria Factsheet.

How does Blueprints align with ESSA’s evidence levels?*

The crosswalk below shows each Blueprints rating, its criteria, and its alignment with the Every Student Succeeds Act evidence tiers.
Model Plus Programs
  • At least two high-quality RCTs or one RCT and one QED.
  • Significant sustained positive impact on intended outcomes.
  • No evidence of negative effects.
  • Intervention specificity, outcomes, risk/protective factors, and logic model all specifically described.
  • Results have been independently replicated.

If large/multisite sample = Strong Evidence (Level 1).

If no sample size information is available or sample is not large/multisite = Promising Evidence (Level 3).

Model Programs
  • At least two high-quality RCTs or one RCT and one QED.
  • Significant sustained positive impact on intended outcomes.
  • No evidence of negative effects.
  • Intervention specificity, outcomes, risk/protective factors, and logic model all specifically described.

If large/multisite sample = Strong Evidence (Level 1).

If no sample size information is available or sample is not large/multisite = Promising Evidence (Level 3).

Promising Programs
  • One high-quality RCT or two high-quality QEDs.
  • Significant positive impact on intended outcomes.
  • No evidence of negative effects.
  • Intervention specificity, outcomes, risk/protective factors, and logic model all specifically described.

If large/multisite sample and RCT = Strong Evidence (Level 1).

If large/multisite sample and 2 QEDs = Moderate Evidence (Level 2).

If no sample size information is available or sample is not large/multisite = Promising Evidence (Level 3).

Effective Outcomes
  • Strong methodological rigor.
  • Short-term favorable outcome with a substantial effect favoring the treatment group.

If RCT and large/multisite sample = Strong Evidence (Level 1).

If only QEDs and large/multisite sample = Moderate Evidence (Level 2).

If no large/multisite sample = Promising Evidence (Level 3).


*Source: REL Midwest “Aligning Evidence-based Clearinghouses with the ESSA Tiers of Evidence” https://ies.ed.gov/ncee/edlabs/regions/midwest/pdf/eventhandout/ESSA-Clearinghouse-Crosswalk-Jan2018-508.pdf
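
For readers who want the decision rules above in executable form, here is a minimal Python sketch of the Blueprints-to-ESSA crosswalk covering the three named Blueprints ratings (the function and parameter names are illustrative, not part of the REL Midwest crosswalk or the Blueprints registry):

    def blueprints_to_essa(rating, large_multisite, includes_rct=True):
        """Map a Blueprints program rating to an ESSA evidence tier.

        rating: "Model Plus", "Model", or "Promising"
        large_multisite: True only if the sample is known to be large/multisite
        includes_rct: for Promising programs, whether the evidence includes an
                      RCT (False means the rating rests on two QEDs)
        """
        if rating in ("Model Plus", "Model"):
            # Large/multisite sample -> Level 1; unknown or smaller -> Level 3.
            return ("Strong Evidence (Level 1)" if large_multisite
                    else "Promising Evidence (Level 3)")
        if rating == "Promising":
            if not large_multisite:
                return "Promising Evidence (Level 3)"
            # With a large/multisite sample, the tier depends on the designs.
            return ("Strong Evidence (Level 1)" if includes_rct
                    else "Moderate Evidence (Level 2)")
        raise ValueError("Unknown Blueprints rating: " + rating)

For example, a Model program without sample size information maps to Promising Evidence: blueprints_to_essa("Model", large_multisite=False) returns "Promising Evidence (Level 3)".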

CrimeSolutions


What does it provide? The National Institute of Justice’s CrimeSolutions.gov assesses the strength of the evidence about whether programs and practices achieve criminal justice, juvenile justice, and crime victim services outcomes, in order to inform practitioners and policymakers about what works, what doesn’t, and what’s promising.

How does CrimeSolutions evaluate evidence? CrimeSolutions programs are rated as Effective, Promising, Inconclusive Evidence, or No Effects. Promising or Effective entries meet the criteria to be included in the Ohio Evidence-Based Clearinghouse.

  1. Rated as Effective: Programs and practices have strong evidence to indicate they achieve criminal justice, juvenile justice, and victim services outcomes when implemented with fidelity.
  2. Rated as Promising: Programs and practices have some evidence to indicate they achieve criminal justice, juvenile justice, and victim services outcomes. Included within the promising category are new, or emerging, programs for which there is some evidence of effectiveness.
  3. Inconclusive Evidence: Programs and practices that made it past the initial review but for which, during the full review process, the available evidence was determined to be insufficient for a rating to be assigned. Interventions are not categorized as inconclusive because of identified or specific weaknesses in the interventions themselves; rather, reviewers determined that the available evidence did not support assigning a rating.
  4. Rated as No Effects: Programs have strong evidence indicating that they had no effects or had harmful effects when implemented with fidelity.

How does CrimeSolutions align with ESSA’s evidence levels?*

The crosswalk below shows the CrimeSolutions ratings, their criteria, and their alignment with the Every Student Succeeds Act evidence tiers.

Promising/Effective**
  • Some evidence to indicate intended outcomes were achieved.

If the evidence includes an RCT and a large/multisite sample = Strong Evidence (Level 1).

If the evidence includes only QEDs and a large/multisite sample = Moderate Evidence (Level 2).

If no large/multisite sample = Promising Evidence (Level 3).

*Source: REL Midwest “Aligning Evidence-based Clearinghouses with the ESSA Tiers of Evidence” https://ies.ed.gov/ncee/edlabs/regions/midwest/pdf/eventhandout/ESSA-Clearinghouse-Crosswalk-Jan2018-508.pdf

**The crosswalk for Effective CrimeSolutions interventions was created by OERC.

Evidence for ESSA


What does it provide? Evidence for ESSA provides information on programs and practices that meet each of the top three ESSA evidence standards in a given subject and grade level. The site includes reading programs and math programs in grades K-12.

How does Evidence for ESSA evaluate evidence? The website uses the four levels of evidence recognized by the Every Student Succeeds Act:

  1. Strong evidence: At least one well-designed and well-implemented experimental (i.e., randomized) study.
  2. Moderate evidence: At least one well-designed and well-implemented quasi-experimental (i.e., matched) study.
  3. Promising evidence: At least one well-designed and well-implemented correlational study with statistical controls for selection bias.
  4. Demonstrates a rationale: The activity, strategy or intervention has a rationale, based on high-quality research findings or positive evaluation, that it is likely to improve student outcomes or other relevant outcomes.

How does Evidence for ESSA align with ESSA’s evidence levels?*

The table below shows each rating and the criteria a program must meet to receive it.
Strong: A program is placed in “strong” if it has a statistically significant positive effect on at least one major measure (e.g., state test or national standardized test) analyzed at the proper level of clustering (class/school or student). Programs with one significantly positive study are not listed as “strong” if there is also at least one study with a significantly negative effect.

Moderate: A program is placed in “moderate” if it meets all standards for “strong” stated above, except that, instead of using a randomized design, qualifying studies are quasi-experiments (i.e., matched studies).

Promising: Programs with at least one correlational study with controls for inputs may be placed in the “promising” category. Also, programs that would have qualified for “strong” or “moderate” but failed to account for clustering (while obtaining significantly positive outcomes at the student level) may qualify for “promising” if there are no significant negative effects.

*Source: “Evidence for ESSA Standards and Procedures”

https://www.evidenceforessa.org/wp-content/uploads/2024/02/FINAL-Standards-and-Proc-02-12-24.pdf
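
As a rough illustration of how these placement rules fit together, here is a simplified Python sketch. It reflects our own reading of the standards above; the names are invented, and the real review also weighs outcome measures, clustering details, and overall study quality:

    def evidence_for_essa_category(design, sig_positive, any_sig_negative,
                                   proper_clustering, student_level_positive=False):
        """Simplified placement logic for a program under Evidence for ESSA.

        design: "randomized", "quasi-experimental", or "correlational"
        sig_positive: significant positive effect on at least one major measure
        any_sig_negative: any study found a significantly negative effect
        proper_clustering: analyzed at the proper level of clustering
        student_level_positive: significantly positive at the student level
                                (relevant when clustering was not accounted for)
        """
        if any_sig_negative:
            return None  # a significantly negative study blocks placement
        if sig_positive and proper_clustering:
            if design == "randomized":
                return "Strong"
            if design == "quasi-experimental":
                return "Moderate"
        if design == "correlational" and sig_positive:
            return "Promising"  # correlational study with controls for inputs
        if (design in ("randomized", "quasi-experimental")
                and not proper_clustering and student_level_positive):
            return "Promising"  # clustering not handled, but positive at student level
        return None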

National Technical Assistance Center on Transition: The Collaborative (NTACT:C)


What does it provide? The National Technical Assistance Center on Transition: The Collaborative (NTACT:C) is a Technical Assistance Center co-funded by the U.S. Department of Education’s Office of Special Education Programs (OSEP) and the Rehabilitation Services Administration (RSA). NTACT:C provides information, tools, and supports to assist multiple stakeholders in delivering effective services and instruction for secondary students and out-of-school youth with disabilities. Note: You will need to create an account with NTACT:C to access resources in this clearinghouse.

How does NTACT:C evaluate evidence? NTACT:C evaluates interventions that teach skills to secondary students and youth with disabilities based on the amount, type, and quality of the research conducted, and labels interventions as “evidence-based,” “research-based,” or “promising.” Read more about the NTACT Criteria for Levels of Evidence.

How does NTACT:C align with ESSA’s evidence levels?*

The crosswalk below shows each NTACT:C rating, its criteria by study design, and its alignment with the Every Student Succeeds Act evidence tiers.
Evidence-Based Practice

Group Experimental Design

  • Two methodologically sound group comparison studies with random assignment to groups, demonstrating positive effects, including at least 60 total participants across studies
  • OR
  • Four methodologically sound group comparison studies with non-random assignment to groups, demonstrating positive effects, including at least 120 total participants across studies
  • No negative effects

Single Case Design

  • Five methodologically sound studies, demonstrating a functional relation (positive effects) and at least 20 total participants across studies;
  • No negative effects
  • Studies are conducted by at least three research teams with no overlapping authorship at three different institutions.

Quasi-Experimental Design

  • Two methodologically sound a priori studies using propensity score modeling/matching which demonstrate consistent significant correlations between predictor and outcome variables
  • Studies must calculate effect size or report data that allows for calculation;
  • No evidence from a methodologically sound a priori study demonstrating negative correlations between predictor and outcome variables

Mix of Group Experimental, Single Case Designs, Correlational Designs

  • Meet at least 50% of criteria for group experimental, single-case designs, and/or quasi-experimental correlational design as described

Group Experimental Design may align with Level 1 if the studies use random assignment, and sample includes 350 or more students or 50 or more groups with 10 or more students.

Group Experimental Design may align with Level 2 if the studies use non-random assignment, and sample includes 350 or more students or 50 or more groups with 10 or more students.

Group Experimental Design may align with Level 3 if studies use either random or non-random group assignment without a large/multi-site sample.

Single Case Design may align with Level 3 or Level 4, depending upon the study design.

Quasi-Experimental Design may align with Level 2 if sample includes 350 or more students or 50 or more groups with 10 or more students or Level 3 with a smaller sample.

Research-Based Practice

Group Experimental Design

  • One methodologically sound group comparison study with random assignment to groups, demonstrating positive effects
  • OR
  • Two or three methodologically sound group comparison studies with non-random assignment to groups, demonstrating positive effects
  • No negative effects

Single Case Design

  • Two to four methodologically sound studies, demonstrating a functional relation (positive effects)
  • No negative effects
  • Studies are conducted by at least three research teams with no overlapping authorship at three different institutions

Quasi-Experimental Design

  • Two methodologically sound a priori studies which demonstrate consistent significant correlations between predictor and outcome variables
  • Studies must calculate effect size or report data that allows for calculation;
  • There are more methodologically sound a priori studies demonstrating positive correlations than methodologically sound a priori studies demonstrating negative correlations

Mix of Group Experimental, Single Case Designs, Correlational Designs

  • Meet at least 50% of criteria for group experimental, single-case designs, and/or quasi-experimental correlational design

Group Experimental Design may align with Level 1 if the study uses random assignment, and sample includes 350 or more students or 50 or more groups with 10 or more students.

Group Experimental Design may align with Level 2 if the studies use non-random assignment, and sample includes 350 or more students or 50 or more groups with 10 or more students.

Group Experimental Design may align with Level 3 if studies use either random or non-random group assignment without a large/multi-site sample.

Single Case Design may align with Level 3 or Level 4, depending upon the study design.

Quasi-Experimental Design may align with Level 2 if sample includes 350 or more students or 50 or more groups with 10 or more students or Level 3 with a smaller sample.

Promising Practice

Group Experimental Design

  • One methodologically sound group comparison study with non-random assignment to groups, demonstrating positive effects
  • No negative effects
  • OR
  • One or more methodologically sound studies conducted with negative effects, as long as methodologically sound studies with negative effects do not outnumber methodologically sound studies with positive effects.

Single Case Design

  • One methodologically sound single case study demonstrating positive effects
  • OR
  • Two or more single case studies demonstrating positive effects using methodologically weak designs
  • AND
  • The ratio of methodologically sound studies with positive effects to methodologically sound studies with neutral/mixed effects is less than 2:1;
  • OR
  • One or more methodologically sound studies conducted with negative effects, as long as methodologically sound studies with negative effects do not outnumber methodologically sound studies with positive effects.

Quasi-Experimental Design

  • One methodologically sound a priori study with consistent significant correlations between predictor and outcome variables
  • OR
  • Two methodologically sound exploratory (no specific hypothesis) studies with significant correlations between predictor and outcome

Mix of Group Experimental, Single Case Designs, Correlational Designs

  • Meet at least 50% of criteria for group experimental, single-case designs, and/or quasi-experimental correlational design

Group Experimental Design may align with Level 2 if the sample includes 350 or more students or 50 or more groups with 10 or more students, with no overriding negative effects.

Group Experimental Design may align with Level 3 if studies use either random or non-random group assignment and without a large/multi-site sample.

Single Case Design may align with Level 3 or Level 4, depending upon the study design.

Quasi-Experimental Design may align with Level 3 or Level 4, depending on study design.
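
The alignment notes above apply the same sample-size test over and over: 350 or more students, or 50 or more groups with 10 or more students each. Here is a short Python sketch of that test and of the group-design mapping (the names are ours, and this is an approximation of the crosswalk, not NTACT:C’s own tooling):

    def is_large_multisite(total_students=0, num_groups=0, min_group_size=0):
        """Sample-size test used throughout the alignment notes above:
        350+ students, or 50+ groups with 10+ students each."""
        return total_students >= 350 or (num_groups >= 50 and min_group_size >= 10)

    def group_design_alignment(random_assignment, **sample):
        """Possible ESSA alignment for a group experimental design:
        Level 1 (random assignment) or Level 2 (non-random) with a
        large/multi-site sample; Level 3 otherwise."""
        if is_large_multisite(**sample):
            return "Level 1" if random_assignment else "Level 2"
        return "Level 3"

For example, group_design_alignment(True, total_students=400) returns "Level 1", while the same studies with 200 students return "Level 3".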

Top Tier Evidence


What does it provide? Top Tier Evidence identifies social programs that have been rigorously studied through well-conducted randomized controlled trials and have findings that demonstrate strong evidence of effectiveness on important outcomes.

How does Top Tier Evidence evaluate evidence? The Top Tier Evidence Initiative reviews programs to determine if they meet the “Top Tier” or “Near Top Tier” evidence standards.

  • “Top Tier” includes: Interventions shown in well-designed and implemented randomized controlled trials, preferably conducted in typical community settings, to produce sizable, sustained benefits to participants and/or society. This standard includes a requirement for replication – i.e., demonstration of effectiveness in at least two well-conducted trials or, alternatively, one large multi-site trial.
  • “Near Top Tier” includes: Interventions shown to meet almost all elements of the Top Tier standard (i.e., well-conducted randomized controlled trials… showing sizable, sustained effects), and which only need one additional step to qualify. This category includes, for example, interventions that meet all elements of the standard in a single site and just need a replication trial to confirm the initial findings and establish that they generalize to other sites. The purpose of this category is to help increase the number of Top Tier interventions by enabling policy officials and others to identify particularly strong candidates for replication trials whose results, if positive, would provide the final element needed for Top Tier.

How does Top Tier Evidence align with ESSA’s evidence levels?*

The crosswalk below shows each Top Tier Evidence rating, its criteria, and its alignment with the Every Student Succeeds Act evidence tiers.

Top Tier
  • Well-designed, well-implemented RCTs in replicable settings.
  • Large, sustained effects.
  • Must be multisite.

If sample size is large = Strong Evidence (Level 1).

If sample size is not large = Promising Evidence (Level 3).

Near Top Tier
  • Meet most Top Tier standards; only need one additional step to qualify (such as replication).

Promising Evidence (Level 3)

*Source: REL Midwest “Aligning Evidence-based Clearinghouses with the ESSA Tiers of Evidence” https://ies.ed.gov/ncee/edlabs/regions/midwest/pdf/eventhandout/ESSA-Clearinghouse-Crosswalk-Jan2018-508.pdf

What Works Clearinghouse


What does it provide? The What Works Clearinghouse (WWC) reviews the existing research on different programs, products, practices and policies in education. WWC includes topics such as literacy, mathematics, science, behavior, children and youth with disabilities, the path to graduation, and early childhood.

How does the What Works Clearinghouse evaluate evidence? WWC uses a systematic review process to evaluate research studies, determining the quality of the research and the strength of the evidence it produces. This infographic illustrates the WWC rating process.

How does the What Works Clearinghouse align with ESSA’s evidence levels?*

The crosswalk below shows each WWC study rating, its criteria, and its alignment with the Every Student Succeeds Act evidence tiers.

Meets standards without reservations
  • Well-designed, well-implemented experimental study with low attrition.
  • Well-designed, well-implemented Regression Discontinuity Design (RDD).

If positive or potentially positive effectiveness rating with large multisite sample = Strong Evidence (Level 1).

If positive or potentially positive effectiveness rating without large multisite sample = Promising Evidence (Level 3).

Meets standards with reservations
  • Well-designed, well-implemented quasi-experimental design with baseline equivalence (or an RCT with high attrition that can be reviewed as a quasi-experimental design).

If positive or potentially positive effectiveness rating with large multisite sample = Moderate Evidence (Level 2).

If positive or potentially positive effectiveness rating without large multisite sample = Promising Evidence (Level 3).

You can also use this REL Midwest Step-by-step guide for navigating the WWC to understand how to choose evidence-based strategies using the WWC.

*Source: REL Midwest “Aligning Evidence-based Clearinghouses with the ESSA Tiers of Evidence” https://ies.ed.gov/ncee/edlabs/regions/midwest/pdf/eventhandout/ESSA-Clearinghouse-Crosswalk-Jan2018-508.pdf
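
Because the WWC crosswalk turns on just two facts, it can be read as a small lookup table. A Python sketch follows (it assumes, as the crosswalk does, a positive or potentially positive effectiveness rating; the dictionary and its keys are our own shorthand):

    # (WWC standards rating, large multisite sample) -> ESSA evidence tier,
    # assuming a positive or potentially positive effectiveness rating.
    WWC_TO_ESSA = {
        ("without reservations", True):  "Strong Evidence (Level 1)",
        ("without reservations", False): "Promising Evidence (Level 3)",
        ("with reservations", True):     "Moderate Evidence (Level 2)",
        ("with reservations", False):    "Promising Evidence (Level 3)",
    }

    print(WWC_TO_ESSA[("with reservations", True)])  # Moderate Evidence (Level 2)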

Evidence Reviews


Evidence-Supported Interventions Associated with Black Students' Educational Outcomes - A REL Midwest Product


The list of interventions identified in this report may not be exhaustive. Only studies that explicitly mention Black students in the abstract, keywords, or descriptors were eligible for this review. Studies of other interventions that included separate analyses of Black and White subgroups but neglected to mention racial differences in the abstract, keywords, or descriptors were not identified in the searches. This is a REL Midwest product.

The studies and interventions identified in this review have the following characteristics:

  • One intervention was implemented at the state level, three at the district level, five at the school level, and seven at the classroom level. The other four interventions either were implemented at the student level or were tangentially related to the education system.
  • Interventions could show positive associations with multiple outcomes. Eleven of the 20 interventions (55 percent) were positively associated with ELA achievement, 13 (65 percent) were positively associated with math achievement, two (10 percent) were negatively associated with high school dropout rates, and one (5 percent) was positively associated with high school graduation rates.
  • Five of the 20 interventions had positive associations for students in elementary school (grades K–5), three for students in middle school (grades 6–8), five for students in high school (grades 9–12), and seven for students at multiple school levels.

The report describes the interventions and the studies that provide the promising evidence.


National Dropout Prevention Center - Model Programs


The National Dropout Prevention Center’s Model Programs Database is a searchable database of research-based programs and information. Schools, organizations, and other programs can review the database for opportunities to implement specific model programs, to enhance existing programs, or for inspiration in creating new initiatives for dropout prevention, intervention, or reentry/recovery. The rating scale for the programs selected for the database is based on the evaluation literature of specific prevention, intervention, and recovery programs.


Proving Ground - Chronic Absenteeism Interventions: Overview


Proving Ground, a program operated by Harvard University's Center for Education Policy Research, reviewed evidence associated with chronic absenteeism interventions. This overview captures information on those interventions that had a positive impact on attendance and, further, met the ESSA definition of Level 1 evidence.


School Leadership Interventions Under the Every Student Succeeds Act: Evidence Review


This report was updated in January 2017 to include Appendix C and again in December 2017 to include Appendix D. The reauthorization of the U.S. Elementary and Secondary Education Act, referred to as the Every Student Succeeds Act (ESSA), emphasizes evidence-based initiatives while providing new flexibilities to states and districts with regard to the use of federal funds, including funds to promote effective school leadership. This report describes the opportunities for supporting school leadership under ESSA, discusses the standards of evidence under ESSA, and synthesizes the research base with respect to those standards. The information can guide federal, state, and district education policymakers on the use of research-based school leadership interventions; help them identify examples of improvement activities that should be allowable under ESSA; and support the rollout of such interventions. This report updates an earlier version and incorporates nonregulatory guidance from the U.S. Department of Education, analysis of tier IV evidence, and reviews of additional studies.

FAQs


About Ohio's Evidence-Based Clearinghouse


Ohio’s Evidence-Based Clearinghouse is intended to empower Ohio’s districts with the knowledge, tools and resources that will help them identify, select and implement evidence-based strategies for improving student success.

What are the benefits to using Ohio’s Evidence-Based Clearinghouse?

Ohio’s districts, schools and educators are not required to use Ohio’s evidence-based clearinghouse as they identify and select evidence-based strategies; they may opt to work directly with other clearinghouses, evidence-reviews or other resources.

Benefits to using Ohio's Evidence-Based Clearinghouse include:

  • Ohio’s Evidence-Based Clearinghouse brings together in one place resources from across multiple clearinghouses;
  • Every evidence-based strategy included in Ohio’s Evidence-Based Clearinghouse is labeled as meeting the criteria for ESSA’s Level 1, Level 2 or Level 3 evidence-based definitions; and
  • The search function within Ohio's Evidence-Based Clearinghouse allows practitioners to find evidence-based strategies aligned with the content focus areas (e.g., Curriculum, Instruction and Assessment, School Climate and Supports) emphasized within the Ohio Improvement Process.

What is included in the initial September 2018 release of Ohio’s Evidence-Based Clearinghouse?

When first released to the public in September 2018, Ohio’s Evidence-Based Clearinghouse will connect practitioners to evidence-based strategies that have already been reviewed by existing clearinghouses, including Blueprints for Healthy Youth Development, CrimeSolutions, Evidence for ESSA, Top Tier Evidence and the What Works Clearinghouse.

The evidence-based strategies included in Ohio’s Evidence-Based Clearinghouse are not an exhaustive list of evidence-based strategies; the Clearinghouse will continue to grow to meet practitioners’ needs. A team of subject matter experts and researchers will continue to curate evidence-based strategies with a focus on quality and relevance to Ohio’s specific priorities and needs.

What can Ohio’s districts, schools, and educators expect from Ohio’s Evidence-Based Clearinghouse in the future?

The Clearinghouse will be a dynamic, growing resource that is practitioner-focused and responsive to changing needs among Ohio’s educators.

As Ohio’s Evidence-Based Clearinghouse continues to evolve, future phases of development will focus on:

  • Expanding the universe of evidence-based strategies included in Ohio’s Evidence-Based Clearinghouse, with a focus on connecting practitioners to evidence-based strategies that address their local needs;
  • Developing a research review team that will be responsible for reviewing the evidence base associated with strategies that have not yet been reviewed by existing clearinghouses and determining whether to include those strategies in Ohio’s Evidence-Based Clearinghouse;
  • Providing practitioners with additional resources related to Building up the Evidence Base, including real-life examples of districts implementing Level 4 evidence-based strategies; and
  • Highlighting the success stories of Ohio districts, schools and educators who are using evidence-based strategies to the benefit of their students, staff and communities.

What are evidence-based strategies?


Evidence-based strategies are programs, practices or activities that have been evaluated and proven to improve student outcomes. Districts can have confidence that the strategies are likely to produce positive results when implemented.

The term "evidence-based" is not new. It has been used in the field of medicine since 1996 and is defined among medical professionals as “the conscientious, explicit and judicious use of current best evidence in making decisions about the care of the individual patient." 1 When thinking about the term from an education perspective, "patient" could be substituted with “student."

“Evidence-based” has been catapulted into the education arena by the Every Student Succeeds Act (ESSA). Federally, this shift emphasizes the importance of making decisions that are based upon a rigorous evaluation. Prior to ESSA, the Elementary and Secondary Education Act (ESEA) consistently used “research based” when describing strategies. No Child Left Behind (NCLB) used “scientifically-based research” as its threshold. “Evidence-based” represents a higher expectation.

Note that resources created prior to enactment of ESSA (before July 2016) might have references to being “evidence-based,” but that does not necessarily mean they meet ESSA’s definition of “evidence-based.”


1 Dr. David Sackett, 1996: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2349778/

Why do evidence-based strategies matter?


An educator’s top priority is success for each and every student. Fulfilling this priority means that selected strategies must yield maximum return. This is especially important as educators support students with diverse needs, and administrators are faced with limited resources.

But using evidence to inform the selection of a strategy is not the only thing that matters. To achieve greatest impact on student outcomes, districts should carefully consider strategies that are:

  • Aligned with the district’s or school’s specific needs and educational context. In other words, district or school leaders have given careful thought to deploying the strategy in a way that recognizes the district’s or school’s unique local characteristics;
  • Part of a cohesive improvement plan. This means the strategy is integrated into the district’s or school’s systemic improvement plan and complements its efforts;
  • Implemented with fidelity. This means that the district or school is committed to the long-term follow through of the strategy and is careful to understand its intent and preserve the integrity of its design.

Evidence of a successful strategy is determined through rigorous research and evaluation. If such evidence does not yet exist, districts should be prepared to evaluate the effectiveness of their selected strategies.

Using evidence to determine the most effective strategy — coupled with a systemic improvement plan and sustained implementation — goes a long way to enable success for each and every student.

How does the use of evidence-based strategies fit in with an overall cycle of continuous improvement?


Selecting an evidence-based strategy is one important part of an effective cycle of continuous improvement. The cycle also should include:

  • An initial needs assessment to help ensure the strategies are sensitive to the district’s or school’s specific needs;
  • An alignment test to ensure the strategy is working in service of the district’s or school’s systemic continuous improvement plan;
  • Local data analysis and evaluation to determine if the strategy is working as intended.

What are the different levels of evidence defined in the Every Student Succeeds Act (ESSA)?


ESSA (Section 8002) and the U.S. Department of Education’s Non-Regulatory Guidance: Using Evidence to Strengthen Education Investments outline four levels of evidence. Level 1 represents the strongest level of evidence and, therefore, the strongest level of confidence that a strategy will work. The table below includes ESSA’s definition for each of the four levels, along with a practical interpretation of each level.

Level 1
ESSA definition: Strong evidence from at least one well-designed and well-implemented experimental study.
What does it mean? Experimental studies have demonstrated that the strategy improves a relevant student outcome (e.g., reading scores, attendance rates). Experimental studies (randomized controlled trials) are those in which students are randomly assigned to treatment or control groups, allowing researchers to speak with confidence about the likelihood that a strategy causes an outcome. Well-designed and well-implemented experimental studies meet the What Works Clearinghouse (WWC) evidence standards without reservations. The research studies use large, multi-site samples.

Level 2
ESSA definition: Moderate evidence from at least one well-designed and well-implemented quasi-experimental study.
What does it mean? Quasi-experimental studies have found that the strategy improves a relevant student outcome (e.g., reading scores, attendance rates). Quasi-experimental studies (e.g., matched studies) are those in which students have not been randomly assigned to treatment or control groups, but researchers use statistical matching methods that allow them to speak with confidence about the likelihood that a strategy causes an outcome. Well-designed and well-implemented quasi-experimental studies meet the What Works Clearinghouse (WWC) evidence standards with reservations. The research studies use large, multi-site samples. No other experimental or quasi-experimental research shows that the strategy negatively affects the outcome. Researchers have found that the strategy improves outcomes for the specific student subgroups that the district or school intends to support with the strategy.

Level 3
ESSA definition: Promising evidence from at least one well-designed and well-implemented correlational study with statistical controls for selection bias.
What does it mean? The strategy likely improves a relevant student outcome (e.g., reading scores, attendance rates). The studies do not have to be based on large, multi-site samples. No other experimental or quasi-experimental research shows that the strategy negatively affects the outcome. A strategy that would otherwise be considered Level 1 or Level 2, except that it does not meet the sample size requirements, is considered Level 3.

Level 4
ESSA definition: Demonstrates a rationale based on high-quality research findings or positive evaluation that such activity, strategy or intervention is likely to improve student outcomes or other relevant outcomes.
What does it mean? Based on existing research, the strategy cannot yet be defined as Level 1, Level 2 or Level 3. However, there is good reason to believe, based on existing research and data, that the strategy could improve a relevant student outcome. Before using a Level 4 strategy, districts should:
  • Explore Existing Research: Why do we believe this strategy will meet our needs?
  • Develop a Logic Model: How will the strategy improve student outcomes?
  • Plan to Evaluate: How will we know that the strategy is improving student outcomes?
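
The table above can be summarized as a rough decision rule based on the strongest available study design and sample size. Here is a hedged Python sketch (our simplification; an actual determination also reviews study quality, outcomes, and any negative findings):

    def essa_level(best_design, large_multisite=False, has_rationale=False):
        """Approximate ESSA evidence level from the strongest available study.

        best_design: "experimental", "quasi-experimental", "correlational",
                     or None if no qualifying study exists yet
        """
        if best_design == "experimental":        # well-designed, well-implemented RCT
            return 1 if large_multisite else 3   # smaller samples drop to Level 3
        if best_design == "quasi-experimental":  # e.g., a matched study
            return 2 if large_multisite else 3
        if best_design == "correlational":       # with statistical controls
            return 3
        # No qualifying study yet: Level 4 requires a research-based rationale,
        # a logic model, and a plan to evaluate.
        return 4 if has_rationale else None

    # For example: essa_level("experimental", large_multisite=False) -> 3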

How do local characteristics and needs factor into the levels of evidence?


While a strategy may have been proven to work for the general student population, we cannot assume that the same strategy will have the same effect on specific student subgroups.

A strategy can only be considered a Level 1 or Level 2 strategy for a district or school if the research shows that the strategy improves student outcomes for the student subgroup that the district or school intends to support. If, for example, a district or school has identified a need to offer additional supports to their students with disabilities, a Level 1 or Level 2 strategy for that district will be one that has been proven to work for students with disabilities.

Considering the unique needs of specific student subgroups is valuable regardless of the level of evidence associated with a strategy. There may be cases where Ohio will require districts to take those unique needs into consideration when using Level 3 or Level 4 options for school improvement or grant opportunities. These cases will be identified and detailed on a case-by-case basis.

Are there other important considerations to keep in mind while selecting evidence-based strategies?


Beyond the technical definitions of levels, there are other important considerations to keep in mind while selecting evidence-based strategies, including:

  • How much will the strategy cost to implement? Cost of implementation is not directly factored into the technical definitions of evidence-based strategies. However, start-up and sustainability cost is certainly a factor that districts can and should consider when deciding which evidence-based strategies are best suited to meet their districts’ needs.
  • Can the strategy be implemented with fidelity? Evidence-based strategies may be less effective if they are not carried out as intended. Is there a measure for fidelity of implementation for a selected evidence-based strategy? How might you consider measuring fidelity of implementation?
  • Does the strategy align with Ohio’s Learning Standards? Alignment between standards, curriculum and instruction is a critical factor in ensuring student success. If the evidence-based strategies you are using are not aligned with Ohio’s Learning Standards, you may find that they do not improve student outcomes as expected.

What is the role of Level 4?


Level 4 enables districts to innovate and explore new strategies that have strong potential for improving student outcomes. Often, the most promising innovations in education bubble up from the local level.

While there will be circumstances where districts will be required to use strategies identified with strong (Level 1), moderate (Level 2) or promising (Level 3) evidence, there also will be opportunities for districts to leverage Level 4 strategies. Options for using Level 4 strategies to address school improvement requirements or grant opportunities will be identified and detailed on a case-by-case basis.

Before using a Level 4 strategy, districts should:

  • Explore Existing Research: Why do we believe this strategy will meet our needs?
  • Develop a Logic Model: How will the strategy improve student outcomes?
  • Plan to Evaluate: How will we know the strategy is improving student outcomes?

What is the difference between "evidence-based" and "research-based"?


The terms "evidence-based" and "research based" are frequently used interchangeably, but they are different — and it is important to understand the difference.

A strategy that is evidence-based likely also is research-based; however, the reverse is not always true. A program or strategy, especially if it is newly developed, may be research-based but not meet the formal definitions of evidence-based.

For a strategy to be considered “evidence-based,” its efficacy must have been intentionally evaluated to determine the degree to which it affects outcomes as anticipated. The design and outcome of the evaluation(s) will determine what, if any, level of evidence the strategy meets.

While generally there is research that goes into the development of a strategy, it must be evaluated for efficacy, as outlined by ESSA, to fulfill Ohio’s state or federal requirements related to evidence-based strategies.