How do you work in evidence-based policymaking?

The premise of effective governance isn't simply having good intentions; it’s ensuring that the resources taxpayers entrust to government yield the best possible societal returns. This pursuit leads directly to evidence-based policymaking (EBPM), an approach centered on using facts, data, and rigorous research to inform decisions rather than relying solely on ideology, anecdote, or political inertia. [2][10] EBPM is not about replacing judgment; rather, it is about giving objective evidence a definitive seat at the decision-making table. [2] To understand how one works within this system, we must examine the core cycle of action, the supporting infrastructure required, and the mindset of continuous improvement that underpins the entire endeavor.

# Policy Basis

Evidence-based policymaking is an approach in which policy choices are informed by credible, relevant evidence. [4] The ultimate goal of this approach, recognized by researchers and practitioners across the ideological spectrum, is twofold: first, to apply what is already known from program evaluation to current decisions, and second, to actively build new knowledge to inform future choices. [2] This iterative focus on outcomes makes the process inherently centered on effectiveness in social interventions and efficiency in resource use. [2] A key expectation is that government, much like an individual choosing a home or a smartphone, should use the best available, reliably presented information to meet its stated goals. [10] The Family Options Study, which examined interventions for homeless families using randomized assignment, demonstrated this in action: its evidence—that permanent housing subsidies worked better than specialized psychosocial services for most families—directly influenced how policymakers understood housing policy. [10]

# Evidence Cycle

Working within EBPM means engaging in a continuous cycle, often described through five key components that guide government action from initial review through to sustained measurement. [1][6] This systematic approach is designed to create a "virtuous cycle of knowledge building" where learning drives future action. [2]

# Program Review

The cycle begins with Program Assessment, which involves systematically reviewing existing public programs to understand the evidence base supporting their continued operation. [1][6] This requires looking beyond activity reports and assessing the program’s actual impact relative to the status quo. [2] For instance, a municipality might inventory all youth services to map where gaps exist in the service continuum. [1] Government units must build and compile rigorous evidence, often requiring impartial evaluations like randomized controlled trials (RCTs) or well-designed quasi-experimental studies, to establish a valid foundation of what achieves desired outcomes, such as improved health or economic mobility. [2] Crucially, this must also include analyzing Costs and Benefits to determine the cost-efficiency per outcome achieved. [2]
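
To make the cost-efficiency idea concrete, here is a minimal sketch in Python. The program names, costs, and comparison-group rates are hypothetical placeholders, not figures from any cited evaluation; a real assessment would rest on the RCT or quasi-experimental evidence described above.

```python
# Minimal sketch: comparing a program's results to the status quo and
# computing cost per outcome. All figures and program names are
# hypothetical placeholders, not data from the cited studies.

programs = [
    # name, annual cost, participants served, participants achieving the
    # target outcome, and the outcome rate observed in a comparison group
    {"name": "Job Training A", "cost": 900_000, "served": 600,
     "successes": 270, "comparison_rate": 0.35},
    {"name": "Job Training B", "cost": 1_200_000, "served": 500,
     "successes": 210, "comparison_rate": 0.40},
]

for p in programs:
    outcome_rate = p["successes"] / p["served"]
    # Impact relative to the status quo: successes beyond what the
    # comparison-group rate would predict for the same participants.
    expected_without_program = p["comparison_rate"] * p["served"]
    added_successes = p["successes"] - expected_without_program
    cost_per_added_success = (
        p["cost"] / added_successes if added_successes > 0 else float("inf")
    )
    print(f'{p["name"]}: outcome rate {outcome_rate:.0%}, '
          f'~{added_successes:.0f} added successes, '
          f'${cost_per_added_success:,.0f} per added success')
```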

# Funding Decisions

Once programs are assessed, evidence directly informs Budget Development. This step moves beyond simply funding programs because they have always existed; instead, it means incorporating evidence of program effectiveness into budgetary choices, prioritizing funding for those delivering a high return on public investment. [6][9] States have pursued strategies like setting funding thresholds that favor proven programs. [1] This necessitates using tools like cost-benefit analysis to compare the relative benefits of spending across different policy areas. [2] A commitment to this principle requires a willingness to Redirect Funds away from programs conclusively shown to be ineffective or inefficient. [2]
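
The sketch below shows how a funding-threshold rule of this kind might be operationalized; the benefit-cost cutoff, evidence categories, and program figures are illustrative assumptions rather than prescribed values.

```python
# Minimal sketch of a funding-threshold rule: rank hypothetical programs
# by benefit-cost ratio and evidence level, then flag candidates for
# prioritized funding or for redirection of funds.

programs = [
    {"name": "Home Visiting",   "benefits": 4_500_000, "costs": 1_500_000, "evidence": "rct"},
    {"name": "Mentoring Pilot", "benefits": 600_000,   "costs": 500_000,   "evidence": "pilot"},
    {"name": "Legacy Outreach", "benefits": 300_000,   "costs": 700_000,   "evidence": "none"},
]

BCR_THRESHOLD = 1.0                      # benefits must at least cover costs
PROVEN_LEVELS = {"rct", "quasi-experimental"}

for p in sorted(programs, key=lambda x: x["benefits"] / x["costs"], reverse=True):
    bcr = p["benefits"] / p["costs"]
    if bcr >= BCR_THRESHOLD and p["evidence"] in PROVEN_LEVELS:
        decision = "prioritize for funding"
    elif bcr >= BCR_THRESHOLD:
        decision = "fund contingent on further evaluation"
    else:
        decision = "candidate to redirect funds"
    print(f'{p["name"]}: BCR {bcr:.2f} -> {decision}')
```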

# Execution Oversight

The third component addresses the gap between what should happen and what does happen: Implementation Oversight. It is not enough to fund an evidence-based program; its delivery must be monitored to ensure fidelity—that services are delivered as intended in terms of quantity and quality. [2][6] If a new program replicates a successful model from elsewhere, monitoring ensures it adheres to the core intentions of that original model. [2] Furthermore, successful oversight requires understanding the surrounding context, which is where Implementation Science becomes indispensable (discussed further below). [7]

# Performance Tracking

To close the loop, Outcome Monitoring requires the routine measurement and reporting of outcome data to verify if programs are achieving their intended results. [1][6] This involves defining key program components and tracking inputs, activities, outputs, and outcomes through performance management systems. [2] For example, tracking housing retention rates over time for participants in a housing program is a form of outcome monitoring. [7] This step ensures basic accountability and provides the raw data necessary for continuous learning. [2]
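
As a concrete illustration of outcome monitoring, the sketch below computes housing retention rates at follow-up milestones from a handful of hypothetical participant records; the field names and the 30-day month approximation are simplifying assumptions.

```python
# Minimal sketch of outcome monitoring: housing retention rates at
# 6- and 12-month milestones, computed from hypothetical records.

from datetime import date

# Each record: participant id, move-in date, and exit date (None = still housed).
participants = [
    {"id": 1, "move_in": date(2023, 1, 10), "exit": None},
    {"id": 2, "move_in": date(2023, 2, 1),  "exit": date(2023, 7, 15)},
    {"id": 3, "move_in": date(2023, 3, 5),  "exit": None},
    {"id": 4, "move_in": date(2023, 4, 20), "exit": date(2024, 1, 2)},
]

def retention_rate(records, months, as_of=date(2024, 12, 31)):
    """Share of participants still housed `months` after move-in,
    among those whose follow-up window has fully elapsed."""
    window_days = months * 30
    eligible = [r for r in records
                if (as_of - r["move_in"]).days >= window_days]
    retained = [r for r in eligible
                if r["exit"] is None
                or (r["exit"] - r["move_in"]).days >= window_days]
    return len(retained) / len(eligible) if eligible else None

for months in (6, 12):
    print(f"{months}-month retention: {retention_rate(participants, months):.0%}")
```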

# Learning New Methods

The final, and perhaps most forward-looking, component is Targeted Evaluation and the related principle of encouraging Innovation. [1][2] Even as governments fund what works, they must reserve resources to rigorously test new, innovative approaches, especially in areas where the evidence base is thin. [2] This can take the form of tiered-evidence grant programs, where a tier is dedicated to piloting unproven, creative solutions, with funding contingent on building an evidence base. [2] When programs have run their course or proved unsuccessful, this final component informs the systematic De-implementation of ineffective practices—a crucial but often overlooked part of efficiency. [7]

# Data Systems

The operational success of this cycle hinges on the availability of high-quality, accessible data. In the modern governance environment, this is heavily supported by Integrated Data Systems (IDS). [7] An IDS is more than just technology; it is a formalized system for routine sharing and integration of administrative data across sectors (e.g., health, justice, housing) through record linkage. [7]

The immediate benefit of an IDS is efficiency. Negotiating data-sharing agreements for every single project can take months or years, particularly with sensitive information. Establishing a legally vetted IDS governance process dramatically increases efficiency, enabling quicker, more time-sensitive analyses. [7] For example, a child welfare agency can rapidly link its data with homeless management data to understand longitudinal needs without starting a new legal agreement each time. [7]
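
The core mechanic behind such a linkage can be sketched simply. The example below joins two hypothetical agency extracts on a hashed identifier so no raw personal identifiers travel between systems; the field names and hashing choice are illustrative assumptions, and production IDS linkage typically relies on vetted, often probabilistic, matching methods.

```python
# Minimal sketch of the record-linkage step behind an Integrated Data
# System: joining two hypothetical agency extracts on a pseudonymous key.

import hashlib

def link_key(first, last, dob):
    """Derive a pseudonymous key from identifying fields (sketch only)."""
    raw = f"{first.lower()}|{last.lower()}|{dob}"
    return hashlib.sha256(raw.encode()).hexdigest()

child_welfare = [
    {"first": "Ana", "last": "Reyes", "dob": "2010-05-02", "open_case": True},
]
homeless_mgmt = [
    {"first": "Ana", "last": "Reyes", "dob": "2010-05-02", "shelter_nights": 42},
]

# Index one extract by the hashed key, then join the other against it.
cw_by_key = {link_key(r["first"], r["last"], r["dob"]): r for r in child_welfare}
linked = []
for r in homeless_mgmt:
    match = cw_by_key.get(link_key(r["first"], r["last"], r["dob"]))
    if match:
        linked.append({"open_case": match["open_case"],
                       "shelter_nights": r["shelter_nights"]})

print(linked)   # [{'open_case': True, 'shelter_nights': 42}]
```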

Moreover, IDS builds internal capacity. Agency staff often know more about data quality issues (like missing entries) in their administrative records than external researchers, making in-house analysis more effective. [7] This capacity allows governments to conduct rapid evaluations, even employing embedded RCTs, by using existing administrative data rather than incurring the time and expense of new data collection. [7]

It is important to recognize that building an IDS is an adaptive challenge, requiring consensus among diverse stakeholders on governance, ethics, and legal use. [7] My observation here is that the move toward data integration is directly linked to the goal of stopping ineffective programs. If a jurisdiction cannot easily track the long-term outcomes of a service across multiple siloed administrative systems, the political and administrative friction to de-implement that service becomes prohibitively high. A functioning IDS provides the necessary longitudinal proof, both for scaling success and for justifying the termination of practices that are not yielding results. [7]

# Practice Fidelity

While IDS addresses the data infrastructure, Implementation Science (IS) addresses the practice gap—the known challenge that even proven, evidence-based practices (EBPs) often fail when rolled out in real-world contexts. [7] IS studies the mechanisms that determine whether front-line workers and institutions can successfully adopt EBPs. [7]

A crucial aspect of IS is its focus on Context. A policy that works in one jurisdiction may fail in another due to differences in culture, legislative structures, or political climate. [7] IS offers conceptual tools, like the Consolidated Framework for Implementation Research (CFIR), to systematically analyze these external and internal factors rather than viewing context as merely a hindrance. [7]

IS also complements standard quality improvement (QI) efforts. While QI focuses on timely, incremental process improvements, IS adds conceptual rigor and mixed methods to study longer-term, theory-driven change. [7] For instance, a quality improvement team might streamline licensing paperwork, but an IS study, perhaps using the Exploration, Preparation, Implementation, Sustainment (EPIS) framework, would investigate why certain facility types pursue licensure while others do not, ensuring the improvements are sustained long-term. [7]

Furthermore, when evaluation is necessary alongside implementation, IS supports Hybrid Trials. These trials simultaneously study if an intervention works (effectiveness) and how it is being adopted (implementation). [7] This is vital because, in many human service areas, it is not ethically feasible to withhold an intervention from a control group, as a standard RCT would require. [7] By focusing on implementation outcomes like acceptability, fidelity, and sustainability, agencies can ensure that the best evidence gets into the hands of the community in a way that actually lasts. [7]

# Government Mandate

In the United States, the shift toward EBPM has been formalized, particularly at the federal level, through legislation like the Foundations for Evidence-Based Policymaking Act of 2018 (Evidence Act). [3][8] This law mandates that federal agencies manage and use the information they collect more strategically, emphasizing agency coordination and data linkage. [3]

To implement this, agencies must create specific deliverables, including an Evidence-Building Plan within their strategic plan and an Evaluation Plan alongside their performance plan. [3] Crucially, the Act also requires that data be "open by default," contingent on privacy protections, necessitating the creation of an open data plan and a central inventory of agency data assets. [3]

The Act establishes formal roles to drive this work:

  • Chief Data Officer (CDO) [3]
  • Evaluation Officer (EO): designated to lead evaluation and Learning Agenda activities, based on expertise [3][5]
  • Statistical Official [3]

These official structures, coupled with the requirement to establish Learning Agendas—which outline key questions agencies need answers to—signal a top-down commitment to integrating evidence into daily operations, moving beyond simply retroactively justifying past decisions. [8][10]

# Skills Required

Working successfully in this environment requires a diverse skill set that bridges the worlds of scientific research and practical governance. [5] The U.S. Office of Personnel Management (OPM) has mapped out competencies for Federal Program Evaluators, confirming that the work is inherently multidisciplinary. [5]

Policymakers and program staff need to develop strong general competencies such as Planning and Evaluating, Influencing/Negotiating, Organizational Awareness, and Decision Making. [5] However, the technical competencies are what truly define the EBPM practitioner:

  • Data Analysis: Knowing how to use both quantitative and qualitative methods, including advanced statistics or software like R, SAS, or Stata (a minimal sketch of this kind of analysis appears after this list). [5]
  • Evaluation: Understanding evaluation theory, logic models, and developing sound evaluation designs. [5]
  • Research: Applying scientific principles to study design, collection, analysis, and reporting. [5]
  • Stakeholder Management: The ability to collaborate across different agencies, researchers, and community members to define relevant questions and ensure findings are used. [5]
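
To ground the quantitative side of these competencies, the sketch below produces a regression-adjusted estimate of a hypothetical program effect on simulated data, a rough Python analogue of what an evaluator might run in R, SAS, or Stata; every variable name and effect size is invented for illustration.

```python
# Minimal sketch of a regression-adjusted program-effect estimate on
# simulated data (all names and effect sizes are hypothetical).

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 800
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),          # program participation flag
    "baseline_need": rng.normal(0.0, 1.0, n),  # pre-program covariate
})
# Simulated binary outcome: stably housed at 12 months (true effect ~ +0.15).
p = 0.45 + 0.15 * df["treated"] - 0.05 * df["baseline_need"]
df["housed_12mo"] = (rng.random(n) < p).astype(int)

# Linear probability model adjusting for baseline need.
fit = smf.ols("housed_12mo ~ treated + baseline_need", data=df).fit()
est = fit.params["treated"]
lo, hi = fit.conf_int().loc["treated"]
print(f"Estimated effect on 12-month housing: {est:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```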

This leads to what I consider a vital, cross-cutting skill: Knowledge Brokerage. While the sources detail the need to compile evidence in clearinghouses and the need for writing and oral communication, [2][5] the experience section for senior roles highlights the ability to translate technical concepts and findings for programmatic and non-technical partners. [5] This translation is not mere reporting; it is the act of transforming complex statistical findings into actionable intelligence that a decision-maker can directly apply to budget allocations or program redesign. This skill is what prevents high-quality evidence from dying in a formal report, making it a necessary talent for anyone seeking to work in the system rather than just produce data for it. [5]

# Continuous Refinement

Ultimately, working in evidence-based policymaking demands adopting a mindset of Continual Learning. [2][5] The goal is not just to find a perfect policy, but to establish the mechanisms—the data infrastructure (IDS) and the implementation rigor (IS)—that allow the organization to constantly check its assumptions. [7] If an evaluation reveals that an intervention is ineffective, the process requires the political and administrative will to stop funding it and test a new approach. [2] This commitment to learning, supported by formal governmental structures and specialized professional skills, is what transforms good intentions into demonstrable public benefit.

Written by Mark Torres