
Advances in Social Sciences Research Journal – Vol. 12, No. 1

Publication Date: January 25, 2025

DOI: 10.14738/assrj.121.18197

Kern, J., Ahmed, H., & Prosperi, V. (2025). Opportunities and Limitations for Adolescent Participation in Research – Lessons Learned from the End Child Marriage Flagship Evaluation in Ethiopia. Advances in Social Sciences Research Journal, 12(1), 166-177.

Services for Science and Education – United Kingdom

Opportunities and Limitations for Adolescent Participation in Research – Lessons Learned from the End Child Marriage Flagship Evaluation in Ethiopia

Johanna Kern
Center for Evaluation and Development, Mannheim, Germany

Haithar Ahmed
United Nations Children's Fund Ethiopia Country Office, Addis Ababa, Ethiopia

Valentina Prosperi
United Nations Children's Fund Ethiopia Country Office, Addis Ababa, Ethiopia

ABSTRACT

Despite the increased importance of downward accountability and the inclusion of

program participants in all stages of program cycle management, the application of

participatory research methods in program evaluations is still limited. This paper

discusses the trade-offs between non-participatory and participatory evaluation

approaches in international development cooperation and explores how program

evaluations can meaningfully engage program participants while also adhering to

established standards of academic rigor and pragmatic feasibility. The paper draws

lessons and shares learnings from the End Child Marriage Flagship Evaluation,

which integrated ‘conventional’ evaluation approaches and participatory research

to meaningfully include adolescent program participants. Finally, the paper

compares experiences of the evaluation to other participatory program evaluations

and links lessons learned to a broader discussion about prerequisites and trade-offs of applying participatory evaluation approaches, and calls for re-imagining conventional evaluation standards to enable program participants to engage in a meaningful way.

Keywords: participatory evaluation, applied research, adolescent participation in research, methodological paper, qualitative evaluation.

INTRODUCTION

The inclusion of program participants in the different stages of program cycle management has

increasingly gained traction in international development cooperation. Today, the human

rights-based approach and its principles of meaningful and inclusive participation of and

accountability towards rights holders have been solidly anchored within a vast framework of

laws, norms, standards, and principles enshrined in international core human rights treaties

and declarations. At the same time, practitioners have learned more and more to embrace the

complexity of program environments and the need for localized, context-specific solutions [1]

[2]. This has put pressure on duty bearers such as program donors and implementers to


meaningfully include program participants in their actions, including the monitoring and

evaluation (M&E) of development programs [3].

However, commissioners and consultants of program evaluations have yet to catch up when it

comes to mainstreaming participatory research methods. While innovative evaluation

approaches such as the Participatory Impact Assessment and Learning Approach (PIALA) and

Systemic Action Research (SAR) have been piloted successfully [4] [5], participatory research

– however ‘enduring and evolving’ – remains firmly situated at the margins of mainstream

evaluations [2]. One explanation for this is that counterfactual-based approaches using (quasi-)experimental methods are firmly established as gold standards for program impact

evaluations. Accordingly, critics of participatory evaluations underline their lack of statistical

rigor and objectivity [4]. Proponents of participatory research, on the other hand, condemn conventional evaluation approaches that limit the participation of program participants to consultation during data collection [3] as nominal, disingenuous, and tokenistic [6] [7] [8]. This leaves the impression that evaluations can either be participatory, and thus fail standards of academic rigor and pragmatic feasibility, or conventional, and thereby pseudo-participatory.

This paper discusses the trade-offs between conventional and participatory evaluation

approaches and explores how program evaluations can ensure adherence to ethical and human

rights standards while also adhering to established standards of academic rigor and pragmatic

feasibility. The paper first introduces modes and categorizations of participatory research in

evaluations. It then describes the application of different participatory research methods in the

End Child Marriage Flagship Program (ECM Flagship) Evaluation which integrated

conventional and participatory evaluation methods. We then share lessons learned related to

the opportunities and limitations of applying participatory approaches during the different

phases of the evaluation.

We found that, within the constraints of academic rigor and pragmatic feasibility, there are more opportunities for enabling participation during the data collection and dissemination phases and fewer during the planning and analysis phases of the evaluation. Observations during data collection also indicated that participatory research, even in its ‘lowest’ form, consultation, has the potential to empower program participants. Experience also underlined the importance of considering and testing whether specific participatory research methods are sensitive to the socio-cultural context and established power dynamics, in order to engage program participants meaningfully in the evaluation.

Finally, the paper compares experiences of the evaluation to those of other participatory

evaluations and links lessons learned to the broader discussion on prerequisites and trade-offs

for applying participatory evaluation approaches and calls to re-imagine evaluation standards

to enable program participants to engage in a meaningful way.

MODES AND PRACTICES OF PARTICIPATORY RESEARCH IN PROGRAM EVALUATIONS

The term ‘participatory evaluation’ can cover a wide range of different modes of participation,

which can differ in terms of what is understood by ‘participation’, whose participation is sought,

what it is that those people are involved in, and how. Using participatory approaches in

program evaluations generally means involving stakeholders in specific aspects of the

evaluation process, in particular, program participants or those affected by the program [8]. In


this paper, we limit the focus of the participatory approach to the involvement of program participants, often called beneficiaries in international development cooperation.¹ Program participants can be involved in different roles and at different stages of a participatory evaluation: from planning (evaluation design) through data collection, analysis, review, and revision, to dissemination and utilization of findings. The mode of participation, depending on the extent of their roles and responsibilities within the evaluation, can be categorized into consultative, collaborative, and program participant-led approaches.²

To this day, conventional evaluations tend to reduce the participation of program participants to a consultative approach during data collection, where they act as informants on the program within a predetermined, externally defined, and often standardized framework for measuring success, established by external experts who are expected to provide a detached, impartial assessment of the program [3]. Critics have branded this approach tokenistic or even manipulative [6] [7]

[8]. However, participatory approaches can also significantly expand the role of program

participants. They can co-plan and manage the evaluation process, support or lead the

development of the evaluation design and methodology, data collection, analysis and

dissemination, and subsequent action. In addition, participatory evaluation planning and

design tends to be inductive and adaptive; the collection and analysis process is often iterative, prioritizing qualitative judgments over quantitative indicators [3]. Table 1 describes the key characteristics of conventional and fully participatory monitoring and evaluation (M&E) approaches as described by Guijt [3].

Table 1: Key characteristics of conventional and participatory M&E

| | Conventional M&E | Participatory M&E |
|---|---|---|
| Who plans and manages the process | Senior managers or outside experts | Local people, project staff, managers, and other stakeholders, often helped by a facilitator |
| Role of ‘primary stakeholders’ (program participants) | Provide information only | Design and adapt the methodology, collect and analyze data, share findings, and link them to action |
| How success is measured | Externally defined, mainly quantitative indicators | Internally defined indicators, including more qualitative judgments |
| Research approach | Predetermined | Adaptive |

Source: table adapted from Guijt [3]

Evaluation commissioners and managers need to consider a number of trade-offs when

deciding on the mode and extent of participation in program evaluations. Ethical

considerations, rigor, and feasibility are three key factors that determine opportunities and

limitations for participatory program evaluations.

Ethical considerations include the weight the evaluation places on adherence to a human rights-based approach, which recognizes program participants as key actors in their own development rather than as passive recipients of aid and generally considers participation, empowerment, and bottom-up processes as good programming practices [10].

¹ While the article draws comparisons and conclusions for the involvement of program participants in evaluations in general, it focuses on depicting the experiences and lessons learned from the ECM Flagship evaluation related to efforts around the inclusion of the program’s primary target group: adolescent girls.

² Categorization adapted from UNICEF [9].

On the other hand, evaluations need to apply a rigorous research approach to ensure the reliability and validity of evaluation results. Impact evaluations are still dominated by standards of statistical rigor and conventional concepts of validity and reliability, which leave little room for participation. Accordingly, critics of participatory evaluations stress their subjectivity and their lack of guidance and quality control to ensure rigorous research. However, the gold standard of counterfactual-based approaches using (quasi-)experimental methods has itself been challenged by critics for its reductionist focus on attributable impacts and its difficulties in working in complex environments.

Finally, pragmatic considerations also determine the feasibility of participatory program evaluations. These include the technical capacity and interest of program participants, as well as the available time and budget. In general, it is understood that

participatory approaches to evaluations require more time and money than conventional

approaches [9] [15].

The ECM Flagship evaluation tried to navigate ethical and pragmatic considerations and academic rigor by combining conventional and participatory evaluation approaches within the different phases of the evaluation.

DEVELOPING A PARTICIPATORY DESIGN FOR THE ECM FLAGSHIP EVALUATION

In 2022, the United Nations Children's Fund (UNICEF) commissioned an external evaluation to assess the organization’s End Child Marriage (ECM) Flagship Result Program. The program is implemented by government partners (as duty bearers) and targets adolescent girls aged 11 to 19 (as rights holders and primary target group). It aims to contribute to a society free of child marriage by 2025, where girls use their potential, enjoy their rights, and thrive in life.

The formative evaluation aimed to provide evidence of the program’s achievements and share

learnings and recommendations to inform future actions of UNICEF and its partners. The

evaluation used an embedded mixed-method design, where quantitative secondary data was

included to answer research questions within a predominantly qualitative evaluation.

Qualitative research included primary data collection among program participants and

implementers through key informant interviews, focus group discussions, and in-depth

interviews and was complemented by secondary qualitative data analysis of program

documents and other literature on child marriage. The evaluation was also expected to apply participatory methods throughout its different stages.

Purpose of the Participatory Research Approach

To determine the scope of the participatory research approach, it is important to reflect on its

purpose for the evaluation [8]. In the case of the ECM Flagship Evaluation, the purpose of the

participatory approach was strongly rooted in ethical considerations, in particular UNICEF’s

adherence to principles of accountability and human rights, which mandate that key stakeholders, including girls and boys, be engaged at relevant stages of the evaluation [10]. In addition,


both UNICEF and the external evaluators³ acknowledged that the inclusion of adolescent girls

would improve the robustness and reliability of evaluation findings. Based on this

understanding, a methodology was developed to meaningfully include adolescent girls in the

program evaluation within its time and budgetary constraints.

Scope and Methodology of the Participatory Research

The evaluation merged conventional and more progressive approaches to participation by applying a mix of consultative and cooperative research methods throughout the different phases of the evaluation. Graphic 1 below depicts the mix of methods used throughout the different evaluation phases.

Graphic 1: Conventional and participatory methods in the ECM Flagship Evaluation

As with conventional evaluations, the overall planning and management of the evaluation was

led by a UNICEF evaluation specialist and an external evaluation team, who (in consultation

with other stakeholders) also decided on the overall framework for the evaluation. A

consultative approach was adopted during the planning phase by including a reference group

of young female activists to review and provide feedback on the evaluation design.⁴ Considering that the evaluation was formative by nature, UNICEF also opted against a quantitative design measuring attributable impact in favor of a predominantly qualitative design that allows more room for program participants to individually define and judge the relevance and effectiveness of the program.

³ The evaluation was conducted by the Center for Evaluation and Development (C4ED).

⁴ The consultation consisted of two feedback rounds at different stages of the planning phase that included a presentation of the draft evaluation design, followed by a semi-structured focus group discussion.

For the data collection, the evaluation team decided on a mix of consultative and cooperative research methods. While focus group discussions with adolescent girls⁵ can be considered a conventional, consultative approach toward participatory evaluations, the collection of life histories attempted to implement a complementary, collaborative approach. As a research method, life histories can be used to shift the power imbalance between the researcher and the researched by empowering research participants to narrate their own stories in their own time and to provide their own interpretation of their lives. In a collaborative approach, life histories merge the processes of data collection, analysis, and sense-making. As Söderström [11] describes: “Telling your life history creates meaning in itself and therefore it becomes part of the meaning-making process we as researchers are interested in.”

⁵ Mirroring the design of the program, focus group discussions were held separately with two different age cohorts, married and unmarried girls, and in-school and out-of-school girls.

Analysis of interviews and secondary data was conducted in a conventional way, that is, by the

external evaluators. In addition, the validation and sense-making process included a review and

discussion of findings by the reference group of young female activists. While the activists provided a detailed report with policy and program recommendations for UNICEF and its partners, the

evaluation team also discussed how findings could be meaningfully disseminated among and

potentially used by the adolescent program participants. The evaluation team decided to share

the girls' accounts of child marriage, presented in animated videos showcasing their life stories.

These videos can be utilized by the program to enhance participant feedback, amplify their

voices, and encourage peer-to-peer learning.

IMPLEMENTING PARTICIPATORY EVALUATION: LESSONS LEARNED

In applying the agreed-upon participatory research methods, the evaluation team experienced a number of expected and unexpected challenges and achievements throughout

the different phases of the evaluation. Limitations and opportunities for participatory research,

as experienced by the evaluation team, are described below.

Participatory Evaluation Planning

Unlike conventional evaluations, which limit decisions on evaluation design and methodology

to evaluation commissioners and external evaluators, the ECM Flagship evaluation tried to

involve program participants in the planning phase but still encountered several challenges to

meaningfully involving them. Adolescent girls who participated in the program tended to live

in remote rural areas with no access to the internet or phones. This made access to program

participants during the evaluation’s planning phase a challenge considering time and budget

constraints. As a possible solution, UNICEF decided to explore involving young female activists

from Addis Ababa in the evaluation. Those female activists were studying or had recently

graduated from university and were engaged in promoting gender equality and women’s

empowerment. This provided the opportunity for young Ethiopian women interested in female empowerment to meaningfully contribute to the evaluation design, taking into account their knowledge, capacities, and interests as well as the evaluation’s budgetary constraints. In addition, UNICEF explored their interest in and buy-in to the program and its


evaluation to potentially mobilize them for the dissemination of findings and further action. We

found that the contributions of young female activists were most valuable for ethical

considerations and determining practical steps for field work, such as identifying safe spaces

and stratification criteria for girls’ focus groups in light of the ‘do no harm’ principle and potential negative effects of the research.

However, the involvement of female activists also had several limitations. The activists did not

necessarily share the same characteristics and experiences as adolescent program participants, who included uneducated girls, often from lower socio-economic strata and from diverse socio-cultural backgrounds different from those of the activists. Accordingly, the mobilization of young

activists could not be considered a good proxy for the participation of adolescent program

participants. We also found that the interest and contributions of young female activists were

limited regarding the overall framework and design of the evaluation. One explanation can be

that the overall evaluation framework was predetermined and theory-based, aiming to test the

program’s existing Theory of Change (ToC). As such, the evaluation design was rather abstract and seemed to make a certain level of scientific knowledge and interest a prerequisite for becoming meaningfully involved in the fine-tuning of the research approach during the later stages of the

planning process.

Participatory Data Collection

Effects of Participant Consultation:

As expected, the evaluation team found that focusing on consulting adolescent girls increased

the validity and reliability of research findings. Focus group discussions provided an unfiltered

insight into the perceived relevance and effectiveness of the ECM Flagship from the perspective

of the program’s primary stakeholders and enabled the research team to properly triangulate

information. As an example, interviews with adolescents found that adolescent girls were more

often active decision-makers regarding marriage than adult community members and program implementers had assumed. The inductive nature of the qualitative research also opened the evaluation up to adapting initial indicators and measures based on program participants’

feedback during data collection. This enabled the evaluation team to explore unintended

program effects, identify and challenge underlying program assumptions, and discover external

factors that undermined program effectiveness. In very few cases, researchers observed limited

interest of program participants in the evaluation. This was particularly true when they were

coping with ongoing emergencies. In those circumstances, interview respondents tended to

veer away from the evaluation’s scope of research, elaborating on their more immediate needs,

such as drought management and the need to acquire water.

In addition to strengthening the validity and reliability of evaluation findings, the evaluation

team also made observations that indicated that their approach towards participatory research

contributed to the empowerment of adolescent girls within their own communities. The

research challenged prevalent gender roles in several ways. The evaluation openly prioritized

interviews with adolescent girls over those of other (male and adult) community members,

thus publicly validating the importance of the experiences and views of adolescent girls within their communities. While there is no evidence that this approach indeed had any effect on

visited communities, informal interviews did confirm that the approach transcended current

socio-cultural norms and practices where adolescent girls rarely spoke out in public or were


approached outside of their families to be asked for their experiences and opinions, including

recommendations on policies and public service provision.

Researchers also observed a sense of enjoyment and pride among interviewed girls for

speaking out publicly during the focus group discussions. Girls visibly enjoyed demonstrating

their increased self-efficacy (a reported effect of the program’s gender clubs), showing

researchers their public speaking capabilities, and contrasting them to their low self-esteem

and timidity before the program. In this sense, one could say that the evaluation promoted

social behavior change and contributed to girls’ empowerment in the same way the program

under evaluation did.

Finally, some evidence suggests that the predominantly female research team

acted as role models for adolescent girls. Informal interviews and observations confirmed that

female researchers were highly esteemed among the visited rural communities, being

associated with higher education, money, and power. As an example, when asked about her

wishes for the future of her daughter, one interviewed mother stated that she wanted her

daughter to become like the female researcher who interviewed her.

These observations show that, despite their bad reputation among participatory research practitioners, consultative research approaches are not by default tokenistic and may even contribute to the empowerment of and behavior change among program participants. However, it needs to be stressed that these findings and observations are merely indicative and would require further research to validate. In addition, promoting the emancipation of program participants was not an objective of the participatory evaluation design and can best be described as a positive side effect of the research.

Opportunities and Limitations of Life Histories:

The evaluation team needed to consider different trade-offs for including life histories in the

evaluation. On the one hand, as a collaborative research approach, the life histories permitted

a higher level of participation for program participants. On the other hand, collecting life

histories tends to be a time-consuming process, and the applicability of individuals’ stories to

broader contexts is limited. To mitigate the latter, the evaluation team decided to add life

histories as a complementary source of data in addition to focus group discussions with

adolescent girls. This way, it was ensured that evaluation questions could be answered even if

it turned out that life histories had little to contribute to the evaluation’s quest for more general

truths. To work within the time and budgetary constraints of the evaluation, it was decided to

limit the life histories to a one-time interview, leaving room for the interview to span several

hours.

Throughout the data collection, the evaluation team struggled to apply the method of life

histories. Challenges revolved around the technical capacities of the researchers, the available

time for data collection, and the compatibility of the research method with existing norms and

power dynamics. The question of compatibility was raised early on, during the piloting of the

evaluation methods. The evaluation team originally intended to use the drawing exercise of a

lifeline, which was supposed to facilitate the elicitation and interpretation of information.

However, in several instances, both researchers and respondents reported feeling uncomfortable with the drawing exercise. Having identified this participatory visual method as more of a barrier than a facilitator for the research, the evaluation team decided to abandon

the drawing exercise.

More challenges were encountered during data collection. Despite training experienced qualitative researchers, it was observed that during data collection both interviewers and respondents tended to slip back into established patterns where the researcher asked and the respondent answered interview questions. There can be several explanations for this. While the evaluation did employ experienced qualitative Ethiopian researchers, their experience was limited to conventional consultative research approaches. Difficulties in accessing remote areas also put more time constraints on the data collection than anticipated, leaving researchers with less time to collect life histories than planned. At the same time, researchers found it difficult to encourage adolescent respondents to the point where they would take over and lead the interview. The most plausible explanation for this can be found in prevailing cultural norms and practices as well as certain evaluation design choices. In communities that traditionally provide limited incentives for girls to speak their minds in public or toward perceived authorities, and which have established hierarchies and power dynamics between women and men, children and adults, donors and beneficiaries, the evaluation team found it difficult, if not impossible, to break through established patterns in which the researcher, adult, or perceived donor leads the discussion and the respondent, child, or beneficiary follows and answers questions. The training provided for the researchers and the available time to conduct life histories turned out to be insufficient to break these patterns within a one-off research assignment.

The school setting in which many interviews were conducted seemed to cement a dynamic

where researchers acted as teachers who tested students on what they had learned from the

program under evaluation. The experience underlined the challenge of finding safe spaces that

could help empower girls to lead the research. In this sense, the evaluation did not manage to

fully apply the research method of life histories, as they were only partially participant-led.

Analysis, Dissemination, and Utilization of Findings

Because of qualitative research’s greater openness towards program participants’ value

judgments and internally defined measurements, data analysis needed to consider the limited generalizability of findings, applying the concept of transferability instead of external

validity. While analysis of data was conducted by the external evaluation team, validation and

sense-making of findings included a variety of stakeholders. As in the planning stage, research findings were presented to and discussed with the young female activists. During this process, the evaluation team observed only moderate interest among the activists in the evaluation findings, reflected in the moderate number of participants and limited feedback. Options for

further engagement with program participants were also discussed but seemed unrealistic,

considering that the activists’ engagement tended to be localized to their immediate surroundings and that they ultimately lacked the time, resources, and interest for broader

outreach.

Finally, the evaluation team discussed the feasibility of the dissemination of findings among

program participants and the potential use of findings for further action. It was difficult to

determine what findings were relevant for program participants, considering that evaluation questions focused strongly on program-internal processes and considering that interviewed

girls seemed most interested in receiving additional goods and services (additional training,

school material, meeting hall, etc.). As adolescent girls had also reported they consulted their

peers (in particular ever-married girls) as valuable sources of information on child marriage,

the evaluation team decided to disseminate experiences with child marriage that girls had

shared through their life histories in the form of animated videos. By doing so, the evaluation

team identified relevant information and tailored it in a child-friendly way to feed back to

adolescent girls. This not only amplified adolescent girls’ voices and promoted peer-to-peer

learning but also fulfilled the evaluators’ obligation to share evaluation findings with adolescent

girls in a meaningful and ethical way. Videos are meant to be disseminated within the program,

with adolescent girls as their primary target audience.

CONCLUSION AND DISCUSSION

Subscribing to the conventional understanding of evaluations, one can quickly dismiss human

rights principles of participation, accountability, and inclusion as idealist standards that create

unnecessarily complex evaluations that either fail to live up to standards of causal inference or

hit up against the real-life constraints of pragmatic practice. However, some collaborative and

action-based research approaches, such as the Participatory Impact Assessment and Learning

Approach (PIALA), and Systemic Action Research (SAR) have emerged that successfully tackle

the problem of impact attribution by challenging conventional concepts of validity and

reliability [4] [5] [14]. Still, those approaches require evaluation commissioners’ buy-in to reframing academic rigor and their commitment to providing the necessary additional

funding.

In the absence of evaluation commissioners’ commitment to iterative, participant-led research,

the ECM Flagship evaluation provides a promising practice that combines conventional

evaluation methods with different participatory research approaches. By doing so, the

formative evaluation successfully walks the line between academic qualitative rigor, ethics, and

feasibility. The evaluation’s ‘mix and match’ approach towards participatory research supports

Aston and Apgar’s [12] conclusions that intentionally combining components of relevant

methods can make evaluations more complexity-aware and ultimately more effective.

Experience from the ECM Flagship evaluation also suggests that participation should neither be treated as an all-or-nothing approach and a goal in itself, nor should conventional consultative research approaches be dismissed by default as pseudo-participatory. This experience mirrors

guidance and opinions that evaluators should not cling to idealist concepts of participation but

rather make thoughtful choices in their evaluation design [4] [8].

At the same time, the ECM Flagship evaluation confirms that careful conceptualization and

planning of participatory research are crucial to avoid inefficiency and pseudo-participation [8]

[13]. Learnings showed that, apart from the commitment and financial support of evaluation commissioners, participants’ capacities and interests play a major role in determining their

meaningful participation. The evaluation team also experienced that donors’ and participants’

interests do not always align. Accordingly, the evaluation found it most challenging to involve

program participants in the highly conceptual planning and analysis phases. During dissemination, it became obvious that the interests of participants and donors did not fully match.

Still, the evaluation found creative ways to consider downward accountability towards

program participants alongside conventional dissemination approaches, which ensured

upward accountability.


Observations during data collection provided indications of the added value of participatory

research, which, even in its ‘lowest’ form, consultation, may have the potential to empower

program participants. Experience when conducting life histories also uncovered the influence

of existing power dynamics on participatory research and the importance of testing whether

specific research methods are sensitive towards the socio-cultural context and established

hierarchies [8] [13].

Overall, the ECM Flagship Evaluation can be considered a step in the right direction, opening

up to participatory design while still maintaining conventional standards for qualitative

program evaluations. Still, it should be stressed that the scope of participation was determined

by the evaluation’s funding and framework, which had decision-makers opt for a one-time

evaluation conducted by external consultants. The approach certainly does not fully respond to

movements and calls to localize M&E [1]. It stands for a tentative handshake with rather than a

full embrace of the participatory evaluation process. To further unlock the potential of

participatory research in program evaluations, it is crucial that program donors and evaluation

commissioners not only rethink what constitutes rigorous data collection and analysis [2] but

also be comfortable with giving up control and not knowing everything before they start [14].

References

1. Kindler, Becka, Giovanna Voltolina, and Fernanda Pinheiro Sequeira. "Facilitating change: Localising monitoring, evaluation and learning." (2022). See https://www.itad.com/article/facilitating-change-localising-monitoring-evaluation-and-learning/

2. Apgar, Marina. "Participatory evaluation: design, bricolage and paying attention to rigour." (2023). See https://www.ids.ac.uk/opinions/participatory-evaluation-design-bricolage-and-paying-attention-to-rigour/

3. Guijt, Irene, Mae Arevalo, and Kiko Saladores. "Participatory monitoring and evaluation." PLA Notes 31

(1998): 28. See https://www.ids.ac.uk/download.php?file=files/dmfile/PB12.pdf

4. Van Hemelrijck, Adinda, and Irene Guijt. "Balancing inclusiveness, rigour and feasibility: Insights from

participatory impact evaluations in Ghana and Vietnam." (2016). See

https://opendocs.ids.ac.uk/opendocs/handle/20.500.12413/8888

5. Apgar, Marina. "Participatory evaluation: design, bricolage and paying attention to rigour." (2023). See https://www.ids.ac.uk/opinions/participatory-evaluation-design-bricolage-and-paying-attention-to-rigour/

6. Arnstein, Sherry R. "A ladder of citizen participation." Journal of the American Institute of Planners 35.4 (1969): 216-224.

7. Cornwall, Andrea. "Unpacking ‘Participation’: models, meanings and practices." Community Development Journal 43.3 (2008): 269-283.

8. Guijt, Irene. "Participatory approaches." Methodological Briefs: Impact Evaluation 5.5 (2014): 2. See https://www.participatorymethods.org/sites/participatorymethods.org/files/Participatory_Approaches_ENG%20Irene%20Guijt.pdf

9. UNICEF. "UNICEF guidance note. Adolescent participation in UNICEF Monitoring and Evaluation." (2018). See https://www.unicef.org/evaluation/media/2746/file/UNICEF%20ADAP%20guidance%20note-final.pdf


10. OECD, and World Bank. Integrating Human Rights into Development: Donor Approaches, Experiences, and Challenges. The World Bank, 2013. See https://elibrary.worldbank.org/doi/abs/10.1596/978-0-8213-9621-6

11. Söderström, Johanna. "Life diagrams: a methodological and analytical tool for accessing life histories."

Qualitative Research 20.1 (2020): 3-21. See

https://journals.sagepub.com/doi/epub/10.1177/1468794118819068.

12. Aston, Thomas, and Marina Apgar. "The Art and Craft of Bricolage in Evaluation." (2022). See

https://opendocs.ids.ac.uk/opendocs/handle/20.500.12413/17709.

13. White, Sarah C. "Depoliticising development: the uses and abuses of participation." Development in Practice 6.1 (1996): 6-15.

14. Burns, Danny. "Assessing impact in dynamic and complex environments: Systemic action research and

participatory systemic inquiry." (2014). See

https://opendocs.ids.ac.uk/opendocs/handle/20.500.12413/4387.

15. Spooner, Catherine, Saul Flaxman, and Colleen Murray. "Participatory research in challenging circumstances: lessons with a rural Aboriginal program." Evaluation Journal of Australasia 8.2 (2008): 28-34.