Methodology

The evaluation methodology drew on the evaluation frameworks developed for advocacy and rehabilitation interventions in the humanitarian sector and the field of human rights, such as those practised by UNDP.[1]

Inception report

The Team Leader wrote an inception report on the evaluation methodology prior to the mission.[2] She presented the evaluation methodology to RCT and the AHRC during the briefing meeting in Hong Kong.

Qualitative and quantitative approach

The evaluation used a ‘classical’ qualitative evaluation approach. Quantitative information was used to assess achievements vis-à-vis intended results. Quantitative sources were treated with care: statistics on human rights violations based on reported cases are an unreliable measure, as survivors may have a wide range of justifiable reasons not to report.[3]

Interviews and sources

Interviews and group discussions were held with: AHRC project staff, RCT project staff, staff and representatives of the five partner organisations, torture survivors, stakeholders and professionals involved in the project, and other key informants working on human rights in Sri Lanka. A list of documents consulted is attached to the report as an annex.

Partner Workshop Sri Lanka

A workshop was organised with the partner organisations in Sri Lanka.

The objectives of the workshop were:

  1. Presentation of findings to the partners,
  2. Presentation of the evaluation methodology,
  3. Joint mapping of project outcomes, clarification of attribution issues, and joint assessment of enabling and hindering actors and factors,
  4. For the evaluators: insight into the dynamics of the partner network.

The workshop introduced participatory methods and made use of visual and physical exercises. Participants perceived the workshop as successful.

Questionnaire

A questionnaire was prepared for workshop participants. Questions covered expectations, perceived outcomes, indicators for outcomes, cooperation with the AHRC, the perceived need for change, the partner network, perceptions of success within the project, the organisational set-up, and the future of the project.

Survivors’ perspectives

The evaluation mission included meetings with over 100 survivors of torture (victims).

The time factor

The emphasis of this final project evaluation is on outcome rather than impact, as it is generally agreed in evaluation practice that impact assessment is only meaningful once a certain period of time has passed since the finalisation of the project.

Output, Outcome, Impact

In the results chain ‘input–output–outcome–impact’, the evaluation of human rights projects typically situates itself on the line between output and outcome, and to some extent touches on impact.

In the past, projects and interventions were evaluated primarily at the level of inputs and outputs. Output evaluations are suitable for traditional project evaluation purposes, but their scope is limited: the conclusions are restricted to expected ‘tangible’ outputs, the kind of output governed by SMART indicators, whereas project objectives usually go beyond outputs. Today, the focus of evaluations is increasingly on outcomes, because this level of results reveals more about how effective actions are in achieving development changes. A focus on outcomes captures credible linkages between the action and the eventual effect within a relatively short timeframe.

Human Rights projects aim at non-tangible, ‘soft’ outcomes that can be achieved only in interaction with other development interventions and in collaboration with other actors.

The same is true for all other projects aiming at political and societal change, such as peace-building projects and projects aiming at gender equity, democracy, and so on. For these kinds of interventions, with their large components of advocacy, awareness raising and capacity building, outcome assessments tend to be more suitable than output assessments.

The success of interventions aiming at the prevention of torture (or the prevention of human rights violations, transforming gender inequalities, conflict transformation…) proves itself at the level of outcomes, not at the level of outputs.

The attribution factor

In assessing the outcome and impact of interventions aiming at the prevention of torture, the attribution factor is paramount. If, during a torture prevention project, there is a major breakthrough in the area of prevention of torture, can this outcome be attributed to the project?[4] Likewise, if there is no breakthrough, can the project be ‘blamed’ for that?

What is the impact of a human rights intervention in a country during a period of gradual erosion of human rights standards and a systematic increase in abuses and human rights violations? In times of shrinking space, the likelihood that the overall project objectives can be achieved is limited. Whereas a project under favourable conditions may be expected to have multiplier effects (ripple effects), in times of war and state violence the impact of the same project may have to be measured in terms of ‘little victories’. When assessing impact under such conditions, we may confine ourselves to the ‘inner circle’ of impact. Perhaps the maximum attainable is to ask: did the project activities meet the expected results?

There seems to be a consensus among human rights evaluators that an assessment in terms of ‘achievements vis-à-vis intended results’ is valid, but that an assessment of impact is highly questionable and ‘a bridge too far’ for an end-of-project evaluation.[5] However, evaluations may assess project components contributing to the likelihood of sustained impact and highlight indications of a wider, sustainable impact.

Dilemmas in Human Rights Impact Assessments

Is there a way out of this dilemma? One way out is to focus on outcome evaluation.[6]

Outcome evaluation works backwards from the outcome. It takes the outcome as its point of departure and then assesses a number of variables. The variables include the following:

  1. whether an outcome has been achieved or progress made towards it,
  2. how, why, and under what circumstances the outcome has changed (factors affecting the outcome),
  3. the contribution of the implementing organisation to the achievement of the outcome,
  4. its partnership strategy in pursuing the outcome.

In this evaluation report (See Chapter 5: Outcome assessment: public acknowledgement of police torture) the following model has been used:

Assessment of outcome: “x”

1. Outcome: “x”

2. Indicators for this outcome: x

3. Analysis of actors and factors contributing to this outcome:

– AHRC and partners

– Other actors and factors, national and international

4. Conclusion: whether or not it is justified to conclude that this outcome can be attributed to the project.

Methodologies used in this evaluation

This evaluation uses a combination of methodological approaches, each of which has, in a different way, proven valid in the assessment of human rights interventions.

1. An assessment of project achievements vis-à-vis intended results. Basically, this is the level of monitoring and assessment that may be expected from an implementing organisation.

2. Assessment of issues related to management, organisational development and partnership.

3. Outcome assessment:

–  taking the outcome as a point of departure and assessing the likely contribution of the project to the outcome; this includes

–  an assessment of the relative weight of the various project components in contributing to the outcome; and

–  an assessment of the relative contribution of other actors working towards the same or similar outcome.

4. This includes an assessment of the wider context including the various enabling and hindering factors and actors.

5. The changes in the context over time may be captured by constructing a timeline differentiating the dynamics among the various actors and factors (e.g. the overall role of the state vis-à-vis civil society, the legal framework, war and conflict, poverty, or other factors found relevant); these may then be summarised in a graph. This may justify conclusions on how the interaction of contextual actors and factors with the project influenced the project outcome, and on the extent to which the outcome may be attributed to the project.

6. This may then also include reflections on the likelihood of sustained impact.

The outcome assessment (3-6) will constitute the main part of the evaluation.

7. In addition, a process approach: a focus on lessons learned rather than end results.

8. A perception approach: what is the perceived impact/outcome, in particular in the perceptions of torture survivors? This can be related to the role of the project vis-à-vis the emancipation process (empowerment) of victims. It could be argued that the perceptions of the victims matter most. The report will include perceptions and opinions of the victims (case studies, quotations) so as to “make their voices heard”.

9. Case studies of exemplary cases, including extrapolations of generic factors (provided the report has sufficient space).

Properties of indicators

The indicators used have SMART or SPICED properties, depending on which is most appropriate.[7]

Norms and Standards

This evaluation mission adheres to the ALNAP Evaluation Principles, the UNEG Norms and Standards, and anthropological ethical frameworks.[8]

Geographical coverage and phasing

The evaluation was carried out in three phases: Phase I, a desk study; Phase II, at the premises of the AHRC in Hong Kong; and Phase III, in Sri Lanka.

Debriefing

The findings of the evaluation were presented to project partners at the evaluation workshop in Sri Lanka. A debriefing was organised in Denmark for RCT and the AHRC.



[1] All major international development institutions and humanitarian institutions have their ‘corporate’ strategies on evaluation and impact assessment, for example all UN organisations, WB, OECD/DAC, EU, and the larger INGOs. These corporate strategies have a lot in common.

UNDP: Handbook on Monitoring and Evaluation for Results, UNDP Evaluation Office, New York, 2002; UNDP: RBM in UNDP: Technical Note; UNDP Evaluation Office: Guidelines for Outcome Evaluations: Monitoring and Evaluation Companion Series 1, 2002. www.undp.org/eo/

[2] Unfortunately, due to time limits the inception report could not be presented in Hong Kong.

[3] See W. Koekebakker: Report on a Field Mission to Kutum, North Darfur, August 2006.

[4] E.g. Erik Wendt, RCT, November 2009: “RCT will claim the anti-torture bill in the Philippines as an impact indicator”. “Ecuador has just ratified CAT after many years of campaigning by our partner organisation. We congratulate our partner with that achievement. But is it really their achievement?”

[5] For different positions in the discourse on methodologies of human rights evaluations, see Andersen, E. A. and H. O. Sano: Human Rights Indicators at Programme and Project Level: Guidelines for Defining Indicators, Monitoring and Evaluation. Copenhagen, The Danish Institute for Human Rights, 2006; Berggren, B. and P. Jotun: Democracy and Human Rights: An Evaluation of Sida’s Support to Five Projects in Georgia. SIDA, Stockholm, 2001; DANIDA: Evaluation of Danish Support to Promotion of Human Rights and Democratisation 1990–1998: Synthesis Report. Copenhagen, Chr. Michelsen Institute, 2000.

[6] This is the approach followed by UNDP. UNDP is widely involved in interventions in the area of governance, including human rights and gender equality. See: UNDP: Handbook on Monitoring and Evaluation for Results, UNDP Evaluation Office, 2002. See also: UNDP Evaluation Office: Guidelines for Outcome Evaluators. Monitoring and Evaluation Companion Series #1, New York, 2002.


[7] See Chris Roche: Impact Assessment for Development Agencies: Learning to Value Change. Oxford, OXFAM, 2000. On value-based indicators: the choice of indicators depends on the approach adopted, and establishing appropriate indicators is a critical part of planning. In ‘conventional’ planning, the acronym SMART is used to describe the properties of indicators (Specific, Measurable, Attainable, Relevant, Time-bound). Alternative indicators may be characterised by SPICED properties: Subjective, Participatory, Interpreted, Cross-checked, Empowering, Diverse and Disaggregated.

In the discourse on Human Rights Impact Assessment different properties of indicators have been proposed.

[8] UNDP: Ethical Code of Conduct for UNDP Evaluation; UNDP: Evaluation Report: Deliverable Description (note derived from Standards for Evaluation in the UN System);

UNEG, United Nations Evaluation Group: Standards for Evaluation in the UN System, April 2005. http://www.uneval.org/indexAction.cfm?module=Library&action=GetFile&DocumentAttachmentID=1496

UNEG, United Nations Evaluation Group: Norms for Evaluation in the UN System, April 2005. http://www.uneval.org/indexAction.cfm?module=Library&action=GetFile&DocumentAttachmentID=1491