Most operational charities should not (be asked to) evaluate themselves because:

  1. They have the wrong incentive. Their incentive is (obviously!) to make themselves look as good as possible – evaluations are used to compete for funding – so they are motivated to rig the research to be flattering and/or to bury research that doesn’t flatter them.
  2. They lack the necessary skills in evaluation. Most operational charities are specialists in, say, supporting victims of domestic violence, delivering first aid training, or distributing cash in refugee camps. These are completely different skills from causal research, and there is no reason to expect evaluation expertise to be co-located with them.
  3. They often lack the funding to do evaluation research properly. One major problem is that a good experimental evaluation may involve gathering data on a control group, which does not get the programme or gets a different one, and few operational charities have access to such a group of people.
  4. They’re too small. Specifically, their programmes are too small: the sample sizes involved are not large enough for evaluations of their programmes alone to produce statistically meaningful results (a rough sense of the numbers is sketched below).

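To put rough numbers on that last point, here is a minimal sketch of the standard sample-size formula for comparing two proportions (assuming Python with SciPy; the baseline rate and effect size are purely illustrative, not drawn from any particular programme).

```python
from math import ceil
from scipy.stats import norm

def n_per_group(p_control, p_treatment, alpha=0.05, power=0.80):
    """Approximate participants needed per arm to detect a difference in two
    proportions with a two-sided z-test (standard textbook formula)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the two-sided test
    z_beta = norm.ppf(power)            # quantile corresponding to the desired power
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    effect = p_treatment - p_control
    return ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# Hypothetical illustration: a programme hoped to lift a 50% baseline
# outcome rate to 55% (a 5-percentage-point effect).
print(n_per_group(0.50, 0.55))  # ~1,562 participants per arm
```

Even this moderate five-percentage-point effect calls for roughly 1,500 participants per arm – over 3,000 people in total – a larger group than many small programmes serve.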
Read the full article about measuring impact at small organizations by Caroline Fiennes at Giving Evidence.