Ohio does not endorse or require the use of any specific evidence-based clearinghouse; districts may use the clearinghouses or stand-alone research reviews they find most useful in terms of content and usability. Existing clearinghouses and stand-alone research reviews include, but are not limited to, the following.
What does it provide? Blueprints for Healthy Youth Development provides a registry of evidence-based positive youth development programs designed to promote the health and well-being of children and teens.
How does Blueprints evaluate evidence? Blueprints programs are rated as Promising, Model, or Model Plus. Promising programs meet the minimum standard of effectiveness. Model and Model Plus programs meet a higher standard and provide greater confidence in the program’s capacity to change behavior and developmental outcomes. See more information in their Criteria Factsheet.
How does Blueprints align with ESSA’s evidence levels?*
Study/program ratings | Criteria | Alignment with Every Student Succeeds Act evidence tiers |
---|---|---|
Model+ Programs | | If large/multisite sample = Strong Evidence (Level 1). If no sample size information is available or sample is not large/multisite = Promising Evidence (Level 3). |
Model Programs | | If large/multisite sample = Strong Evidence (Level 1). If no sample size information is available or sample is not large/multisite = Promising Evidence (Level 3). |
Promising Programs | | If large/multisite sample and RCT = Strong Evidence (Level 1). If large/multisite sample and 2 QEDs = Moderate Evidence (Level 2). If no sample size information is available or sample is not large/multisite = Promising Evidence (Level 3). |
Effective Outcomes | | If RCT and large/multisite sample = Strong Evidence (Level 1). If only QEDs and large/multisite sample = Moderate Evidence (Level 2). If no large/multisite sample = Promising Evidence (Level 3). |
*Source: REL Midwest “Aligning Evidence-based Clearinghouses with the ESSA Tiers of Evidence” https://ies.ed.gov/ncee/edlabs/regions/midwest/pdf/eventhandout/ESSA-Clearinghouse-Crosswalk-Jan2018-508.pdf
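For districts that want to apply this crosswalk at scale, for example when screening a list of candidate programs, the decision rules in the table above can be expressed as a short script. The sketch below is illustrative only: the function name, parameters and rating strings are assumptions for this example, not part of Blueprints or the REL Midwest crosswalk, and any case the table does not cover defaults conservatively.

```python
from typing import Optional

# Illustrative sketch of the REL Midwest crosswalk for Blueprints ratings.
# All names are hypothetical; consult the crosswalk PDF for authoritative rules.

def blueprints_to_essa(rating: str, large_multisite: Optional[bool],
                       rct: bool = False, qed_count: int = 0) -> str:
    """Map a Blueprints program rating to an approximate ESSA evidence tier.

    rating: "Model+", "Model", or "Promising".
    large_multisite: True/False, or None if no sample-size information exists.
    rct / qed_count: study designs behind a "Promising" rating.
    """
    if not large_multisite:  # covers both None (no information) and False
        return "Promising Evidence (Level 3)"
    if rating in ("Model+", "Model"):
        return "Strong Evidence (Level 1)"
    if rating == "Promising":
        if rct:
            return "Strong Evidence (Level 1)"
        if qed_count >= 2:
            return "Moderate Evidence (Level 2)"
        return "Promising Evidence (Level 3)"  # case not covered by the table
    raise ValueError(f"unknown Blueprints rating: {rating}")

print(blueprints_to_essa("Model", large_multisite=True))              # Level 1
print(blueprints_to_essa("Promising", True, rct=False, qed_count=2))  # Level 2
```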
What does it provide? The National Institute of Justice’s CrimeSolutions.gov assesses the strength of the evidence about whether programs and practices achieve criminal justice, juvenile justice, and crime victim services outcomes in order to inform practitioners and policy makers about what works, what doesn't, and what's promising.
How does CrimeSolutions evaluate evidence? CrimeSolutions programs are rated as Effective, Promising, Inconclusive Evidence, or No Effect. Promising or Effective entries meet the criteria to be included in the Ohio Evidence-Based Clearinghouse.
How does CrimeSolutions align with ESSA’s evidence levels?*
Study/program ratings | Criteria | Alignment with Every Student Succeeds Act evidence tiers |
---|---|---|
Promising/Effective** | Some evidence to indicate intended outcomes were achieved. | If includes RCT and large/multisite sample = Strong Evidence (Level 1). If includes only QEDs and large/multisite sample = Moderate Evidence (Level 2). If no large/multisite sample = Promising Evidence (Level 3). |
*Source: REL Midwest “Aligning Evidence-based Clearinghouses with the ESSA Tiers of Evidence” https://ies.ed.gov/ncee/edlabs/regions/midwest/pdf/eventhandout/ESSA-Clearinghouse-Crosswalk-Jan2018-508.pdf
**The crosswalk for Effective CrimeSolutions interventions was created by the OERC.
What does it provide? Evidence for ESSA provides information on programs and practices that meet each of the top three ESSA evidence standards in a given subject and grade level. The site includes reading programs and math programs in grades K-12.
How does Evidence for ESSA evaluate evidence? The website rates programs against the evidence levels recognized by the Every Student Succeeds Act:
Study/program ratings | Criteria |
---|---|
Strong | A program is placed in “strong” if it has a statistically significant positive effect on at least one major measure (e.g., state test or national standardized test) analyzed at the proper level of clustering (class/school or student). Programs with one significantly positive study are not listed as “strong” if there is also at least one study with a significantly negative effect. |
Moderate | A program is placed in “moderate” if it meets all standards for “strong” stated above, except that, instead of using a randomized design, qualifying studies are quasi-experiments (i.e., matched studies). |
Promising | Programs with at least one correlational study with controls for inputs may be placed in the “promising” category. Also, programs that would have qualified for “strong” or “moderate” but did not qualify because they failed to account for clustering (but did obtain significantly positive outcomes at the student level) may qualify for “promising” if there are no significant negative effects. |
*Source: “Evidence for ESSA Standards and Procedures”
https://www.evidenceforessa.org/wp-content/uploads/2024/02/FINAL-Standards-and-Proc-02-12-24.pdf
What does it provide? The National Technical Assistance Center on Transition: The Collaborative (NTACT:C) is a Technical Assistance Center co-funded by the U.S. Department of Education’s Office of Special Education Programs (OSEP) and the Rehabilitation Services Administration (RSA). NTACT:C provides information, tools, and supports to assist multiple stakeholders in delivering effective services and instruction for secondary students and out-of-school youth with disabilities. Note: You will need to create an account with NTACT:C to access resources in this clearinghouse.
How does NTACT:C evaluate evidence? NTACT:C evaluates interventions that teach skills to secondary students and youth with disabilities, considering the amount, type, and quality of the research conducted, and labels interventions as “evidence-based,” “research-based,” or “promising.” Read more about the NTACT Criteria for Levels of Evidence.
How does NTACT:C align with ESSA’s evidence levels?

Study/program ratings | Criteria | Alignment with Every Student Succeeds Act Evidence Tiers |
---|---|---|
Evidence-Based Practice | Group Experimental Design; Single Case Design; Quasi-Experimental Design; Mix of Group Experimental, Single Case, and Correlational Designs | Group Experimental Design may align with Level 1 if the studies use random assignment and the sample includes 350 or more students or 50 or more groups with 10 or more students. Group Experimental Design may align with Level 2 if the studies use non-random assignment and the sample includes 350 or more students or 50 or more groups with 10 or more students. Group Experimental Design may align with Level 3 if studies use either random or non-random group assignment without a large/multi-site sample. Single Case Design may align with Level 3 or Level 4, depending on the study design. Quasi-Experimental Design may align with Level 2 if the sample includes 350 or more students or 50 or more groups with 10 or more students, or Level 3 with a smaller sample. |
Research-Based Practice | Group Experimental Design; Single Case Design; Quasi-Experimental Design; Mix of Group Experimental, Single Case, and Correlational Designs | Group Experimental Design may align with Level 1 if the study uses random assignment and the sample includes 350 or more students or 50 or more groups with 10 or more students. Group Experimental Design may align with Level 2 if the studies use non-random assignment and the sample includes 350 or more students or 50 or more groups with 10 or more students. Group Experimental Design may align with Level 3 if studies use either random or non-random group assignment without a large/multi-site sample. Single Case Design may align with Level 3 or Level 4, depending on the study design. Quasi-Experimental Design may align with Level 2 if the sample includes 350 or more students or 50 or more groups with 10 or more students, or Level 3 with a smaller sample. |
Promising Practice | Group Experimental Design; Single Case Design; Quasi-Experimental Design; Mix of Group Experimental, Single Case, and Correlational Designs | Group Experimental Design may align with Level 2 if the sample includes 350 or more students or 50 or more groups with 10 or more students, with no overriding negative effects. Group Experimental Design may align with Level 3 if studies use either random or non-random group assignment without a large/multi-site sample. Single Case Design may align with Level 3 or Level 4, depending on the study design. Quasi-Experimental Design may align with Level 3 or Level 4, depending on the study design. |
What does it provide? Top Tier Evidence identifies social programs that have been rigorously studied through well-conducted randomized controlled trials and have findings that demonstrate strong evidence of effectiveness on important outcomes.
How does Top Tier Evidence evaluate evidence? The Top Tier Evidence Initiative reviews programs to determine if they meet the “Top Tier” or “Near Top Tier” evidence standards.
How does Top Tier Evidence align with ESSA’s evidence levels?*
Study/program ratings | Criteria | Alignment with Every Student Succeeds Act Evidence Tiers |
---|---|---|
Top Tier | | If sample size is large = Strong Evidence (Level 1). If sample size is not large = Promising Evidence (Level 3). |
Near Top Tier | | Promising Evidence (Level 3) |
*Source: REL Midwest “Aligning Evidence-based Clearinghouses with the ESSA Tiers of Evidence” https://ies.ed.gov/ncee/edlabs/regions/midwest/pdf/eventhandout/ESSA-Clearinghouse-Crosswalk-Jan2018-508.pdf
What does it provide? The What Works Clearinghouse (WWC) reviews the existing research on different programs, products, practices and policies in education. WWC includes topics such as literacy, mathematics, science, behavior, children and youth with disabilities, the path to graduation, and early childhood.
How does the What Works Clearinghouse evaluate evidence? WWC uses a systematic review process to evaluate research studies to determine the quality of the research and the strength of the evidence produced by research. This infographic illustrates the WWC rating process.
How does the What Works Clearinghouse align with ESSA’s evidence levels?*
Study/program ratings | Criteria | Alignment with Every Student Succeeds Act evidence tiers |
---|---|---|
Meets standards without reservations | Well-designed, well-implemented randomized controlled trial with low sample attrition. | If positive or potentially positive effectiveness rating with large multisite sample = Strong Evidence (Level 1). If positive or potentially positive effectiveness rating without large multisite sample = Promising Evidence (Level 3). |
Meets standards with reservations | Well-designed, well-implemented quasi-experimental design with baseline equivalence (or an RCT with high attrition that can be reviewed as a quasi-experimental design). | If positive or potentially positive effectiveness rating with large multisite sample = Moderate Evidence (Level 2). If positive or potentially positive effectiveness rating without large multisite sample = Promising Evidence (Level 3). |

You can also use this REL Midwest step-by-step guide for navigating the WWC to understand how to choose evidence-based strategies using the WWC.
*Source: REL Midwest “Aligning Evidence-based Clearinghouses with the ESSA Tiers of Evidence” https://ies.ed.gov/ncee/edlabs/regions/midwest/pdf/eventhandout/ESSA-Clearinghouse-Crosswalk-Jan2018-508.pdf
The list of interventions identified in this report may not be exhaustive. Only studies that explicitly mention Black students in the abstract, keywords, or descriptors were eligible for this review. Studies of other interventions that included separate analyses of Black and White subgroups but neglected to mention racial differences in the abstract, keywords, or descriptors were not identified in the searches. This is a REL Midwest product.
The report describes the characteristics of the studies and interventions identified in the review and summarizes the promising evidence those studies provide.
The National Dropout Prevention Center’s Model Programs Database is a searchable database of research-based programs and information. The database is available for schools, organizations, and other programs to review for opportunities to implement specific model programs, to enhance existing programs, or for inspiration on creating new initiatives for dropout prevention, intervention, or reentry/recovery. The rating scale for the programs selected for the database is based on the evaluation literature of specific prevention, intervention, and recovery programs.
Proving Ground, a program operated by the Center for Education Policy Research at Harvard University, reviewed evidence associated with chronic absenteeism interventions. This overview captures information on those interventions that had a positive impact on attendance and that met the ESSA definition of Level 1 (strong) evidence.
This report was updated in January 2017 to include Appendix C and again in December 2017 to include Appendix D. The reauthorization of the U.S. Elementary and Secondary Education Act, referred to as the Every Student Succeeds Act (ESSA), emphasizes evidence-based initiatives while providing new flexibilities to states and districts with regard to the use of federal funds, including funds to promote effective school leadership. This report describes the opportunities for supporting school leadership under ESSA, discusses the standards of evidence under ESSA, and synthesizes the research base with respect to those standards. The information can guide federal, state, and district education policymakers on the use of research-based school leadership interventions; help them identify examples of improvement activities that should be allowable under ESSA; and support the rollout of such interventions. This report updates an earlier version and incorporates nonregulatory guidance from the U.S. Department of Education, analysis of tier IV evidence, and reviews of additional studies.
Ohio’s Evidence-Based Clearinghouse is intended to empower Ohio’s districts with the knowledge, tools and resources that will help them identify, select and implement evidence-based strategies for improving student success.
Ohio’s districts, schools and educators are not required to use Ohio’s Evidence-Based Clearinghouse as they identify and select evidence-based strategies; they may opt to work directly with other clearinghouses, evidence reviews or other resources.
Benefits to using Ohio's Evidence-Based Clearinghouse include:
When first released to the public in September 2018, Ohio’s Evidence-Based Clearinghouse will connect practitioners to evidence-based strategies that have already been reviewed by existing clearinghouses, including Blueprints for Healthy Youth Development, CrimeSolutions, Evidence for ESSA, Top Tier Evidence and the What Works Clearinghouse.
The evidence-based strategies included in Ohio’s Evidence-Based Clearinghouse are not an exhaustive list of evidence-based strategies; the Clearinghouse will continue to grow to meet practitioners’ needs. A team of subject matter experts and researchers will continue to curate evidence-based strategies with a focus on quality and relevance to Ohio’s specific priorities and needs.
The Clearinghouse will be a dynamic, growing resource that is practitioner-focused and responsive to changing needs among Ohio’s educators.
As Ohio’s Evidence-Based Clearinghouse continues to evolve, future phases of development will focus on:
Evidence-based strategies are programs, practices or activities that have been evaluated and proven to improve student outcomes. Districts can have confidence that the strategies are likely to produce positive results when implemented.
The term "evidence-based" is not new. It has been used in the field of medicine since 1996 and is defined among medical professionals as “the conscientious, explicit and judicious use of current best evidence in making decisions about the care of the individual patient." 1 When thinking about the term from an education perspective, "patient" could be substituted with “student."
“Evidence-based” has been catapulted into the education arena by the Every Student Succeeds Act (ESSA). Federally, this shift emphasizes the importance of making decisions that are based upon a rigorous evaluation. Prior to ESSA, the Elementary and Secondary Education Act (ESEA) consistently used “research based” when describing strategies. No Child Left Behind (NCLB) used “scientifically-based research” as its threshold. “Evidence-based” represents a higher expectation.
Note that resources created prior to enactment of ESSA (before July 2016) might have references to being “evidence-based,” but that does not necessarily mean they meet ESSA’s definition of “evidence-based.”
1 Dr. David Sackett, 1996: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2349778/
An educator’s top priority is improving outcomes for students, and evidence can help identify the strategies most likely to do that.
But using evidence to inform the selection of a strategy is not the only thing that matters. To achieve greatest impact on student outcomes, districts should carefully consider strategies that are:
Evidence of a successful strategy is determined through rigorous research and evaluation. If such evidence does not yet exist, districts should be prepared to evaluate the effectiveness of their selected strategies.
Using evidence to determine the most effective strategy, coupled with a systemic improvement plan and sustained implementation, goes a long way toward enabling success for each and every student.
Selecting an evidence-based strategy is one important part of an effective cycle of continuous improvement. The cycle also should include:
ESSA (Section 8002) and the U.S. Department of Education’s Non-Regulatory Guidance: Using Evidence to Strengthen Education Investments outline four levels of evidence. Level 1 represents the strongest level of evidence and, therefore, the strongest level of confidence that a strategy will work. The table below includes ESSA’s definition for each of the four levels, along with a practical interpretation of each level.
LEVEL | ESSA DEFINITION | WHAT DOES IT MEAN? |
---|---|---|
Level 1 | Strong evidence from at least one well-designed and well-implemented experimental study. | Experimental studies have demonstrated that the strategy improves a relevant student outcome (reading scores, attendance rates). Experimental studies (randomized controlled trials) are those in which students are randomly assigned to treatment or control groups, allowing researchers to speak with confidence about the likelihood that a strategy causes an outcome. Well-designed and well-implemented experimental studies meet the What Works Clearinghouse (WWC) evidence standards without reservations. The research studies use large, multi-site samples. |
Level 2 | Moderate evidence from at least one well-designed and well-implemented quasi-experimental study. | Quasi-experimental studies have found that the strategy improves a relevant student outcome (reading scores, attendance rates). Quasi-experimental studies (for example, regression discontinuity or matched-comparison designs) are those in which students have not been randomly assigned to treatment or control groups, but researchers use statistical matching methods that allow them to speak with confidence about the likelihood that a strategy causes an outcome. Well-designed and well-implemented quasi-experimental studies meet the What Works Clearinghouse (WWC) evidence standards with reservations. The research studies use large, multi-site samples. No other experimental or quasi-experimental research shows that the strategy negatively affects the outcome. Researchers have found that the strategy improves outcomes for the specific student subgroups that the district or school intends to support with the strategy. |
Level 3 | Promising evidence from at least one well-designed and well-implemented correlational study with statistical controls for selection bias. | The strategy likely improves a relevant student outcome (reading scores, attendance rates). The studies do not have to be based on large, multi-site samples. No other experimental or quasi-experimental research shows that the strategy negatively affects the outcome. A strategy that would otherwise be considered Level 1 or Level 2, except that it does not meet the sample size requirements, is considered Level 3. |
Level 4 | Demonstrates a rationale based on high-quality research findings or positive evaluation that such activity, strategy or intervention is likely to improve student outcomes or other relevant outcomes. | Based on existing research, the strategy cannot yet be defined as a Level 1, Level 2 or Level 3. However, there is good reason to believe, based on existing research and data, that the strategy could improve a relevant student outcome. Before using a Level 4 strategy, districts should ensure the strategy is supported by a well-specified logic model informed by research or evaluation and should plan ongoing efforts to examine whether the strategy improves the intended outcomes. |
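To make the decision logic in this table easier to scan, here is a minimal sketch that restates it in code. The function and its parameters are illustrative assumptions, not an official classification tool; real determinations also depend on study-quality judgments such as WWC ratings and, for Level 3, statistical controls for selection bias.

```python
# Illustrative restatement of the ESSA evidence-level logic summarized above.
# Parameter names are hypothetical; actual determinations require expert review.

def essa_level(design: str, large_multisite: bool,
               negative_findings: bool, has_logic_model: bool) -> int:
    """Approximate ESSA evidence level for a strategy.

    design: "experimental" (randomized controlled trial), "quasi-experimental",
        "correlational", or "none" (no qualifying study yet).
    large_multisite: the qualifying studies use large, multi-site samples.
    negative_findings: other experimental or quasi-experimental research shows
        the strategy negatively affects the outcome (rules out Levels 2 and 3).
    has_logic_model: a well-specified logic model informed by research exists.
    """
    if design == "experimental" and large_multisite:
        return 1  # strong evidence
    if design == "quasi-experimental" and large_multisite and not negative_findings:
        return 2  # moderate evidence
    if (design in ("experimental", "quasi-experimental", "correlational")
            and not negative_findings):
        return 3  # promising evidence (e.g., smaller or single-site samples)
    if has_logic_model:
        return 4  # demonstrates a rationale; plan to study the effects
    raise ValueError("does not meet any ESSA evidence level")

print(essa_level("experimental", True, False, True))    # 1
print(essa_level("correlational", False, False, True))  # 3
```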
While a strategy may have been proven to work for the general student population, we cannot assume that the same strategy will have the same effect on specific student subgroups.
A strategy can only be considered a Level 1 or Level 2 strategy for a district or school if the research shows that the strategy improves student outcomes for the student subgroup that the district or school intends to support. If, for example, a district or school has identified a need to offer additional supports to their students with disabilities, a Level 1 or Level 2 strategy for that district will be one that has been proven to work for students with disabilities.
Considering the unique needs of specific student subgroups is valuable regardless of the level of evidence associated with a strategy. There may be cases where Ohio will require districts to take those unique needs into consideration when using Level 3 or Level 4 options for school improvement or grant opportunities. These cases will be identified and detailed on a case-by-case basis.
Beyond the technical definitions of levels, there are other important considerations to keep in mind while selecting evidence-based strategies, including:
Level 4 enables districts to innovate and explore new strategies that have strong potential for improving student outcomes. Often, the most promising innovations in education bubble up from the local level.
While there will be circumstances where districts will be required to use strategies identified with strong (Level 1), moderate (Level 2) or promising (Level 3) evidence, there also will be opportunities for districts to leverage Level 4 strategies. Options for using Level 4 strategies to address school improvement requirements or grant opportunities will be identified and detailed on a case-by-case basis.
Before using a Level 4 strategy, districts should:
- Ensure the strategy is supported by a well-specified logic model informed by research or evaluation;
- Plan ongoing efforts to examine whether the strategy improves the intended outcomes.
The terms "evidence-based" and "research based" are frequently used interchangeably, but they are different — and it is important to understand the difference.
A strategy that is evidence-based likely also is research based; however, the reverse is not always true. A program or strategy — especially if it is newly developed — may be research based but not meet the formal definitions of evidence-based.
For a strategy to be considered “evidence-based,” its efficacy must have been intentionally evaluated to determine the degree to which it affects outcomes as anticipated. The design and outcome of the evaluation(s) will determine what, if any, level of evidence the strategy meets.
While generally there is research that goes into the development of a strategy, it must be evaluated for efficacy, as outlined by ESSA, to fulfill Ohio’s state or federal requirements related to evidence-based strategies.