Chapter 9

Data & Evaluation

Measuring LEAD Outcomes

With the increased attention to evidence-based decision-making in public, nonprofit, and philanthropic sectors, social service programs are often asked to define and measure progress toward their goals. This question can be useful and appropriate, but in working with LEAD sites, we think it’s important to begin by asking a more fundamental question:

What problems are you trying to solve with LEAD?

Are you trying to…

  • reduce incidence of crimes like trespass, shoplifting, open-air drug use, prostitution, and disorderly conduct and improve outcomes for people arrested for these offenses in a specific police precinct?
  • reduce use of punitive responses for offenses which drive high rates of racial disparity in the criminal legal system?
  • reduce homeless encampments, problematic drug use, and rates of overdose by increasing resources (such as intensive, hotel-based case management and low-barrier harm-reduction services) within a specific neighborhood or business district?
  • reduce burdens on criminal legal partners (law enforcement, prosecutors, public defenders, courts) by developing alternatives to arrest for LEAD-eligible charges in a jurisdiction with high rates of arrest for these charges as compared to other local municipalities?
  • develop a system of coordinated, community-based care to provide ongoing, street-based case management to complement your crisis-intervention or co-responder efforts?
  • handle a jail or court capacity crisis by diverting high-volume, low-level offenses?

Defining and building collective agreement on the “why” of your LEAD initiative should help your site determine relevant goals and metrics.

The most important data can be boiled down to this: Which problems do all stakeholders agree need a better solution? No matter what the numbers say, whatever most stakeholders feel is a significant problem in need of a better response is an opportunity to shift the paradigm. Propose a solution to that problem – listening carefully to understand why each potential partner cares about it, and what they need from any proposed solution.

Data and Evaluation Planning

The collection and review of data are essential to efficient operations, to fulfilling LEAD’s deepest intentions, and to assessing the project’s individual and systemic outcomes. LEAD evaluation and data plans should be spearheaded and shepherded by the PCG and implemented under the day-to-day management of the project manager.

The PCG is responsible for overseeing the initiative’s data protocols, data sharing agreements and legal compliance, and data reporting; on a day-to-day basis, the project manager is responsible for ensuring that these policies are operationalized. In addition to approving protocols related to the participant data gathered within the project itself, the PCG also should define the data sets and protocols required of each of the project’s systems-level partners. To accomplish this task, the PCG often forms a data and evaluation committee.

Identifying Relevant Outcomes

Many people think of LEAD as a direct-service program; this is understandable, since LEAD strives to create a new system of response and care for people living with intensely complex problems.

But it’s important to remember that, more fundamentally, LEAD is a public safety initiative that uses human services tools (among others) to better address the challenges that can stem from unmet behavioral health needs and poverty. LEAD exists because communities are seeking strategies to advance health, safety, and equity by developing alternatives to the criminalization of behavioral health needs. Evaluations of LEAD initiatives should be rooted in this understanding.

Thus, it is important that efforts to evaluate LEAD focus on three levels: participant-level outcomes, program-level outcomes and systemic outcomes (if there is agreement on system change goals, which may often be the case).

Participant-Level Outcomes
  • improvements in individual participants’ health and well-being
  • reduced utilization of the criminal legal system
  • reductions in harmful or illicit behaviors
Program-Level Outcomes
  • higher rates of public or partner satisfaction with LEAD as compared to system-as-usual
  • increased use of diversion on LEAD-eligible offenses compared to booking and prosecution
  • increased funding for LEAD
  • expansion to other geographic regions or diversion-eligible charges
  • scaling toward capacity to take all appropriate priority referrals
  • decreased rates of arrest or incarceration for the LEAD cohort as compared to other similarly situated people
Systemic Outcomes
  • reduction in use of jail and prosecution for divertible offenses system-wide (not just for participants) as alternative approaches gain support and credibility
  • increased collective capacity for culturally responsive harm-reduction services
  • improved racial equity in access to and engagement with community-based services and resources
  • reduced racial disparities in people incarcerated for LEAD-eligible offenses across the jurisdiction
  • improved community satisfaction with public safety and order

Data Types and Sources

Generally speaking, there are two sources of data relevant to LEAD: operational data and administrative data.

  • Operational data are gathered specifically for LEAD and the people referred into it, and should be collected from referral through enrollment and throughout the participant’s time in LEAD. Sites should ensure that their data and evaluation plans are supported by and reflected in the initiative’s daily operational processes: Each element of data gathering, inputting, extraction, and analysis should be built into standardized forms, processes, and technological data systems.
  • Administrative data are gathered for purposes beyond LEAD. Administrative data include program-specific and larger community-level information, including: law enforcement data (police stops, arrests, demographics), jail data (bookings, referral charges), court data (prosecutions), and public systems data (health system, homeless systems, emergency and psychiatric services).

During outreach and exploration, you may have opportunities to gather quantitative information that can help you define and refine the problem you want to solve with LEAD.

External Large Scale Data

Because LEAD sits at the intersection of safety, health, and equity, it’s easy to imagine the huge array of data sources that could inform the planning process. But in most places, these data sets sit in a variety of data systems – hospitals, emergency rooms, police departments, sheriff’s offices, community-based service providers – that do not talk to one another. Furthermore, much of the data may not be readily available to people outside each agency. Finally, many communities don’t have the resources to conduct a thorough data analysis to illuminate where a community might focus its LEAD efforts. But these realities don’t have to stand in your way – you can begin to illuminate needs and priorities by examining smaller sets of data.

Smaller Scale Data

Gathering and analyzing data even on a smaller scale can still meaningfully inform your project planning. For example, you could ask your police department to provide a year’s worth of misdemeanor and drug arrest data, and then analyze it to identify highly discretionary charges that seem likely to be associated with behavioral health issues and poverty. Check your analysis with law enforcement: Which of the charges that commonly produce arrests do they believe often involve people with behavioral health problems or income instability? Which would they prefer to have another response for? From this data, you can determine how many times people were arrested for each charge, per-person rates of repeat arrest, how many arrests resulted in jail bookings, the average length of those bookings, the total jail time associated with them, and what proportion of arrests resulted in prosecution. All of this can be analyzed using basic Excel capacities.
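
The same tallies can also be scripted. The sketch below is illustrative only: it assumes a hypothetical one-year arrest export with columns such as person_id, charge, booked, jail_days, and prosecuted, so the file name and column names would need to be adapted to whatever extract your police department actually provides.

    import pandas as pd

    # Hypothetical one-year arrest export; file and column names are illustrative only.
    # Assumed columns: person_id, charge, booked (0/1), jail_days, prosecuted (0/1)
    arrests = pd.read_csv("misdemeanor_drug_arrests.csv")

    # Arrests per charge, with booking, jail-time, and prosecution measures
    by_charge = arrests.groupby("charge").agg(
        total_arrests=("person_id", "size"),
        jail_bookings=("booked", "sum"),
        avg_jail_days=("jail_days", "mean"),
        total_jail_days=("jail_days", "sum"),
        prosecutions=("prosecuted", "sum"),
    )
    by_charge["prosecution_rate"] = by_charge["prosecutions"] / by_charge["total_arrests"]

    # Per-person repeat-arrest rate: how often the same people cycle through
    arrests_per_person = arrests.groupby("person_id").size()
    repeat_rate = (arrests_per_person > 1).mean()

    print(by_charge.sort_values("total_arrests", ascending=False))
    print(f"Share of people with more than one arrest: {repeat_rate:.1%}")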

A site’s focus can include people exposed to the legal system, not just those already being pulled in. In recent years, police and legal system capacity has been severely strained in many communities; the problem may no longer be just that the system is pulling too many people in for punitive responses to health and poverty issues, but that all systems have increasingly abandoned people with complex needs in our streets and public spaces.

LEAD can also be an effective response when there is public pressure for enforcement action but the enforcement and criminal legal systems are not actually equipped to provide that response, and know it isn’t the best road anyway. Data on system utilization and on people actually being booked into jail and charged in court aren’t the only data relevant to the need for LEAD services and coordination. There are also people adrift in the community who are having a significant impact, and whose situations could catalyze a backlash against a paradigm shift toward community-based care if we don’t respond to their needs.

Approaches to Evaluating LEAD

LEAD evaluation may focus as much on system impact and process change as on individual outcomes. Process, formative, and developmental evaluations are particularly useful to understand whether the site is implementing LEAD with fidelity to the evidence-based model.

Framing the Question

Because LEAD originated as an alternative to an approach that centered on incarceration, prosecution, and punishment, the evaluation of demonstration project outcomes in Seattle focused on how individuals were doing compared to members of a control group that had been jailed and prosecuted as usual. As these evaluations showed, the LEAD group did better in almost every respect.

However, as the project continued and the background conditions moved away from using jail and prosecution as the norm (in part because of the impact of LEAD), Seattle LEAD partners began to ask whether the evaluation framework itself needed to change.

Instead of asking, “Is this intervention producing better results than jail and conviction?” partners concluded that future evaluation should focus on a different question of shared interest: “What are the gaps in the support available to participants; what do participants need in order to thrive and heal?” This question generated very different insights.

Thus, in designing a LEAD evaluation plan, it is important to recognize that focusing on whether LEAD produces better outcomes than a system that was never designed to foster recovery may miss the point. That comparison can provide misleading confirmation that the intervention offers sufficient (“better than status quo”) support when participants are still, objectively, deprived of resources they need to be secure and safe.

The particular focus and methods used for a LEAD evaluation will vary based on local needs and the stage in LEAD’s development, but sites might consider which type of evaluation will help them assess the progress most relevant to their local contexts: developmental, formative, or summative.

Developmental Evaluation of LEAD

Developmental evaluations focus on the initial development of an initiative and its adaptation in dynamic environments and across contexts. In a project’s early years, this type of evaluation can help LEAD stakeholders understand their initiative’s context and learn more about how it’s developing. 

Developmental evaluation is generally time- and resource-intensive; sometimes in a developmental evaluation, the evaluator is embedded within a project team, spanning the boundary between researcher and project staff. In this circumstance, the research partner works closely with a project team to develop strategies for collecting information about issues as they emerge and providing frequent ongoing feedback to the team.

Questions guiding developmental evaluations could include:

  • What cultural, socioeconomic, and political factors influence the design and implementation of LEAD? How and why do these factors influence progress?
  • What systems is LEAD attempting to affect and what factors may influence changes in those systems?
  • To what extent and in what ways does LEAD tap into the strengths and assets of the community(ies)?
Formative Evaluation of LEAD

Formative evaluations focus on whether the initiative is being implemented as intended, and what pieces seem to be working or not. This type of evaluation can be helpful for examining and improving internal practices and processes, particularly in an initiative’s early to middle years.

Some formative evaluation questions for LEAD could include:

  • To what extent is LEAD implemented with fidelity to the LEAD model?
  • To what extent is LEAD being utilized by law enforcement?
  • To what extent is LEAD being utilized by the community as an alternative to calling the police?
  • Is case management capacity sufficient to meet the needs of participants being referred?
  • To what extent is LEAD reaching its intended populations, such as people apprehended for drug possession and/or coming into frequent contact with law enforcement due to unmet behavioral health needs or extreme poverty?
  • To what extent is LEAD reaching people from groups disproportionately arrested, prosecuted, and incarcerated for drug possession and other LEAD-aligned charges?
  • To what extent are participants engaging with LEAD?
  • What is the perspective of participants on the services provided and the impact on their lives? What works, what doesn’t?
  • What does information from case management reveal regarding local service resources and gaps? What is needed to go from “doing better than if this person had gone to jail” to “this person is doing really well?”
  • To what extent is LEAD contributing to increased cross-sector coordination?
Summative Evaluation

Summative evaluations focus on the extent to which an initiative is achieving its intended outcomes. It’s likely that many stakeholders will want to assess LEAD’s long-term outcomes or impacts – bearing in mind, at all times, that LEAD is about shifting systemic policies and practices, just as much as it is devoted to supporting positive change for participants. 

Summative questions might include things like:

  • To what extent does LEAD result in decreased criminal legal system involvement for the people served, and how does that compare with similarly situated people outside of the LEAD catchment area?
  • To what extent has LEAD shifted the system-wide approach to public safety?

Outcomes questions that could be answered within a shorter period of time (depending on the availability of data) might include:

  • To what extent is LEAD contributing to improved mental and physical health for participants? What is the point of view of participants on this question?
  • To what extent are LEAD participants reducing harmful behaviors?
  • To what extent are systems stakeholders and LEAD participants satisfied with LEAD?

Developing an Evaluation Plan

Evaluations are an important tool in sustaining LEAD in any site, allowing the initiative to demonstrate impact and value. Further, evaluations support LEAD sites in adhering to their stated goals and core principles, meeting desired outcomes, assessing efficacy of systems change, and improving the lives of participants.

Find an Evaluation Partner

Finding, developing, and maintaining a relationship with an evaluation partner can support LEAD sites in tracking progress toward their key objectives, identifying areas for improvement, and telling the story of LEAD. While some initiatives have internal capacity for monitoring and evaluation, others may find that they need support, such as:

  • Developing a site-specific logic model/theory of change
  • Determining relevant metrics and data sources
  • Data management and analysis
  • Reporting and communications
Identify Essential Metrics

All LEAD sites should develop and implement an evaluation plan that measures the metrics of greatest importance to the local site’s stakeholders. Identifying these metrics is not a cookie-cutter process. Sites are advised to convene multiple stakeholders – including potential evaluation partners – to engage in robust, searching conversations about the specific problems that the site is trying to address. Local executive and legislative staff analysts are particularly valuable consumers of evaluation data and should be consulted on the research design; if they are not persuaded, the evaluation effort may not drive significant investment or commitment at the local level, even if results appear positive.

Conduct a Preliminary Data Analysis

Undertaking a preliminary data analysis can be immensely valuable in informing this conversation. Pulling arrest data (including demographics) for a given jurisdiction over a defined period can help sites identify high-prevalence, low-level conduct that burdens law enforcement, provides little benefit to the community or change in outcome for arrested individuals, and diverts resources and opportunities by unnecessarily bypassing more effective, less costly, less harmful community services. This preliminary analysis can also help quantify the number and nature of arrests that could be diverted and identify the familiar faces, whether two dozen or 400, whose suffering and disruptive conduct are apparent for all to see.
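
As a starting point, a short script can surface both the volume of potentially divertible arrests and the people who appear most often. The sketch below is a rough illustration under assumptions: it uses a hypothetical export with columns such as person_id, charge, race, and arrest_date, and an example list of divertible charges that each site would need to define for itself.

    import pandas as pd

    # Hypothetical jurisdiction-wide arrest export; names are illustrative only.
    # Assumed columns: person_id, charge, race, arrest_date
    arrests = pd.read_csv("jurisdiction_arrests.csv", parse_dates=["arrest_date"])

    # Example set of low-level, potentially divertible charges (site-specific)
    divertible = {"trespass", "shoplifting", "drug possession", "disorderly conduct"}
    div = arrests[arrests["charge"].str.lower().isin(divertible)]

    # Volume and demographic distribution of potentially divertible arrests
    print(f"Potentially divertible arrests: {len(div)} of {len(arrests)}")
    print(div["race"].value_counts(normalize=True))

    # "Familiar faces": people with the most repeat arrests on divertible charges
    familiar = (
        div.groupby("person_id")
        .agg(arrest_count=("charge", "size"), charges=("charge", "unique"))
        .sort_values("arrest_count", ascending=False)
    )
    print(familiar.head(25))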

Define Primary Goals

Preliminary data analysis can help a site begin to understand its primary intention: Is it to reduce community complaints and subsequent law enforcement response for people arrested more than six times in the prior year on trespass, vagrancy, or shoplifting? Is it to reduce the challenges faced by business owners struggling with open-air drug use in a particular district? Is it to establish harm-reduction, street-based case management and services to reduce risk of overdose, reduce ER use, and increase reported quality of life for people in a community that previously has offered only clinic-based abstinence treatment?

To this end, it is recommended that sites engage potential evaluators early in the planning phase. Doing so will support sites in specifying and finding consensus on the problems they are trying to solve with LEAD and provide evaluators with the opportunity to understand the local priorities and how those goals might be tracked.