I once held a job titled "Senior Technical Advisor: Research, Evidence, and Influence." When I started, I believed what a lot of people believe: get the facts right, and influence over decisions will follow.
That assumption didn't survive contact with reality. Take what's happening in America right now with the deployment of the National Guard in cities like Los Angeles, DC, and Chicago. Federal and local officials looked at the same evidence about violence and crime and reached opposite decisions about action. Federal officials cited videos of violence (some from 2020) to justify deploying the National Guard and Marines, while local officials pointed to geolocation data, arrest records, and existing capacity to argue that escalation wasn't necessary.
In fact, many have dubbed our current era "post-truth": one in which objective truths are contested or dismissed in favor of ideas grounded in emotion, ideology, or power. Ten years ago, the conversation was about post-truth politics, but this dynamic has increasingly permeated other sectors. In healthcare, vaccine skepticism and conspiracy theories have fueled (or been fueled by?) the rise of MAHA, where social media influencers seem to hold as much sway over health decisions as medical doctors.
Does this mean evidence is dead? No. But it does mean that producing rigorous research and assuming it will speak for itself is naive. In a contested information environment, evidence needs strategic communications to cut through the noise.
This article is about how those of us in knowledge-generating jobs (like research and evaluation), and those who use our work, can navigate this era.
Decision-makers interpret evidence through their existing frameworks.
Facts don’t speak for themselves.
Decision-makers filter evidence through their values, mandates, institutional incentives, and political goals. Two stakeholders looking at identical research findings, or the same facts on the ground, can reach opposite conclusions because they're operating from different frameworks about what matters and what success looks like.

This plays out across social impact work. Evidence most often gets used as a tool to justify positions, not as a north star to determine them.
The question then is how to account for this political and socio-psychological reality.
Rigorous research is non-negotiable, but only half the equation.
Without proper research fundamentals (clear questions, valid measurement, ethical data collection, sound inference, acknowledgement of bias and limitations), all you really have is opinion masquerading as evidence.
Bad research can’t be salvaged by good communications. If you design poor data collection tools, your enumerators introduce bias, your measurements aren’t valid, or your inference is flawed, no amount of strategic messaging will make your findings credible.
As a foundation, research must be designed to the highest methodological standards.
But it’s usually not enough for research to drive decisions on its own. Influence requires understanding how that evidence gets interpreted, by whom, and in what context.
Don’t believe me? Consider vaccine skepticism.
We have mountains of methodologically sound evidence that vaccines work and that their risks are very small. The body of research is robust and conclusive, yet millions of people still refuse vaccination.
The frustration is real. This scene from HBO's The Pitt captures what it feels like to work with evidence in a world where expertise is dismissed (in this case, by "Dr. Google").
The problem isn’t the quality of the evidence. The problem is that evidence alone doesn’t change behavior or beliefs when people are filtering information through fear, distrust, identity, or conflicting values.
Strategic communications is the bridge from evidence to influence.
Once you have methodologically sound research, the work of influence begins. This means understanding the political, institutional, and social-psychological context in which your findings will be interpreted, and then communicating those findings in ways that penetrate decision-makers' existing frameworks.
Strategic communications requires knowing some fundamentals:
- What values drive your stakeholders?
- What institutional mandates constrain their choices?
- What political pressures shape their interpretation of evidence?
- What social-psychological biases affect how they process information?
- And critically, what will make them see your evidence as relevant to the decisions they’re trying to make right now?
Credible evidence needs to be packaged into narratives and delivered through communication channels tailored to specific audiences. It's the difference between handing a donor a 40-page evaluation report and walking into their office with three bullet points that directly address their current funding dilemma. Same evidence, different way of communicating it.
Relevance, rigor, and communications: The three ingredients for influence.
Designing applied research and evaluation that influences decisions requires three elements working together: relevance, rigor, and communications.
Relevance is the starting point for influence. This is standard practice in evaluation and applied research (as distinct from basic research, which tends to be theory- and hypothesis-driven and follows disciplinary canons). Professional evaluation frameworks explicitly recommend stakeholder mapping before writing evaluation questions. Understanding who will use your findings, and for what decisions, should inform which research questions you investigate.
Yet, if you are an evaluator, how often have you actually had the opportunity to do stakeholder mapping in advance of receiving the evaluation questions?
If your answer is "rarely" or "never," the cause is likely systemic. Many organizations commission evaluations to check a box (i.e., because they're required), not to influence decisions (this is the old "command-and-control" paradigm). The evaluation questions arrive pre-written, disconnected from who needs to use the findings. It's no wonder that so much evaluation and applied research sits in a drawer.
In applied research and evaluation, questions should be informed by understanding what decisions stakeholders actually face. If your primary audience is legislative staff making funding decisions, you might design a study that examines cost-effectiveness or scalability. If your audience is program implementers, you might focus research questions on what implementation factors predict success across different contexts. If your audience is boards evaluating organizational strategy, you might investigate long-term sustainability or comparative advantage.
Understanding stakeholder needs enables asking relevant questions.
Rigor remains non-negotiable. Once you've chosen the relevant questions, you design and execute the study according to research standards. You follow the evidence wherever it leads. You don't cherry-pick findings or set up studies to prove predetermined points. Without rigor, you lose credibility (another critical ingredient for influence).
What changes is how you communicate findings. The same study might generate a 2-page policy brief for legislative staff focused on cost per beneficiary, a technical memo for implementers focused on learning from adaptation, and a strategic brief for boards focused on implications for organizational positioning. Same evidence, different framing based on what each audience needs to make their specific decisions.
Communications enables uptake. As with evaluation practices, professional communications frameworks also emphasize engagement throughout the research process, not just at dissemination. Involving stakeholders in defining research questions, interpreting preliminary findings, and reviewing draft products increases both the relevance and the uptake of evidence.
Here’s what that looks like in practice:
- Map your stakeholder audiences with specificity. Not “donors” but which program officers at which agencies making which decisions on what timeline. Not “policymakers” but which legislative aides, ministry officials, or executive staff facing which policy windows. Professional audience analysis identifies each audience’s information needs, preferred formats, existing knowledge gaps, and decision-making constraints.
- Understand timing. Research released during budget cycles reaches different audiences than research released mid-implementation. Understanding policy windows, organizational planning cycles, and decision-maker availability determines when your evidence can actually land.
- Develop tailored communication products. The research report is one product. Strategic communications plans specify multiple formats: policy briefs for time-constrained decision-makers, infographics for rapid comprehension and social sharing, executive summaries for senior leadership, webinars or briefings for technical audiences, op-eds for public discourse, and presentations designed for specific venues and contexts.
The bottom line.
Decision-makers interpret data through values, mandates, political incentives, and social-psychological drivers, which means rigorous research without strategic communications often becomes expensive documentation that nobody acts on.
The solution starts with accepting that facts don't speak for themselves in decision-making. This doesn't negate the need for rigorous and credible evidence; it simply acknowledges that rigor alone is insufficient for influencing decisions.
Influence requires taking rigorous, relevant information and packaging the content for specific audiences in ways they can hear.
Most organizations budget extensively for data collection and analysis. They allocate almost nothing for strategic communications. If influence is the goal, the communications plan deserves the same investment as the methodology.
Want help designing evaluations or applied research that donors, boards, and policymakers actually act on? Book a call with me; I'd love to connect with you.
