Advocating for human rights is tough. Frontline activists bring years of dedication and commitment to the deeply held belief that all people deserve a life of dignity. Yet for large human rights and advocacy organizations, it’s rare that we can be sure that positive changes in people’s lives are a direct or causal result of human rights activism. It may take years of tireless organizing to see legislative change, and even more years before those policies create real impact in affected communities. And even when observing the positive outcomes of a successful campaign, can an organization know whether the change would have occurred without its intervention?

This is a particular challenge for human rights advocacy, where the goals are prescriptive and driven by a moral imperative, which can be at odds with what is measurable. Because the desired impact is transformative, planning and monitoring for a specific result can mask necessary nuance and risk oversimplifying the work. There is also a lack of standardized definitions for human rights indicators, so that when qualifiers like “widespread” or “prevalent” are used, it can be difficult to track concrete progress toward long-term change.

As an internal evaluator at Amnesty International USA, I build staff capacity to design evidence-informed strategies and set measurable outcomes, oversee evaluations of AIUSA’s programming, develop knowledge management processes, and lead workshops and trainings that equip staff with evaluative thinking skills. I have come across many methods that help to assess the impact of advocacy, from policy tracking and media framing analysis to public polling and stakeholder interviews.

But even though human rights organizations have made progress, much of the guidance is difficult to put into practice, and antiquated perceptions of evaluation persist. Evaluators know that traditional scientific methods are not well suited to assessing advocacy, yet the belief remains that qualitative evidence lacks methodological rigor, and the expectation of conclusive attribution or proof of contribution endures. And while internal evaluators are taught to factor uncertainty into their processes and to consider context (because evidence related to advocacy is often subjective and rarely definitive), these guiding principles are typically met with skepticism, especially from staff.

Read the full article about human rights advocacy evaluation strategies by Zehra Mirza at Stanford Social Innovation Review.