The sexual health of adolescents is a growing global public health issue. In developing countries, approximately 60% of new HIV infections occur among 15–24-year-olds,1 and a similar proportion of pregnancies and births to adolescents are unintended.2 In industrialized countries, the incidence of STDs among youth is increasing,3 and adolescent pregnancy is associated with poor social outcomes.4 These and other statistics have drawn attention to adolescents as a group facing distinct issues.
Peer-led sexual health education is one means of addressing deficiencies in adolescent sexual health. Defined as the “teaching or sharing of information, values, and behaviors by members of similar age or status group,”5 peer-led sexual health education has been developed on the basis of two observations. First, the health beliefs and habits formed during childhood and adolescence are carried into adulthood;6 second, teenagers influence each other’s attitudes and behavior.7 According to theory, peer educators may influence social behavior through their role as credible role models8 or as innovators.9 Moreover, peer-led education may be an approach by which young people, through partnerships, can define and tackle their own health needs.9,10
Although several systematic reviews of adolescent sexual health interventions have been conducted,11,12 to our knowledge only one has focused on peer-delivered health promotion interventions. Harden and colleagues reviewed outcome and process evaluations of peer-led interventions (half of them involving sexual health) from randomized and quasi-randomized controlled trials published through September 1998.13
Harden and colleagues also appraised the methodology of the trials. In systematic reviews, methodological appraisals generally evaluate four areas of potential systematic bias: selection bias (differences in comparison groups), performance bias (differences in the care provided, apart from the intervention being evaluated), attrition bias (differences in withdrawals from trials) and detection bias (differences in outcome assessment). The rationale and content of criteria for the assessment of these types of bias are similar across reviews.12–17
While they were unable to reach robust conclusions regarding the effectiveness of peer-led interventions for young people, Harden and colleagues made a number of recommendations for the development and evaluation of such interventions.13 Although other guidelines for the development and evaluation of complex sexual health interventions have been devised,18 those of Harden and colleagues are, as far as we are aware, the only ones specifically developed to guide the development and evaluation of peer-led interventions for young people. Their criteria for intervention development were informed by the following perspectives: that young people should actively participate in meeting their own health needs,19 that adolescents are not a homogeneous group with uniform needs7 and that peer-led health promotion is best delivered in the context of wider sociocultural and economic health promotion strategies.20 In addition, the authors highlighted the importance of understanding the contribution that peer-led education can offer to wider health promotion strategies.
The recommendations developed by Harden and colleagues were supported by findings from their review. First, echoing other authors,21 Harden and colleagues recommended that the health needs and views of the target group be assessed; they provided examples of how specific programs in their review used this information to tailor interventions to a particular context. Second, given the challenges that they identified within peer-led education interventions—including resource constraints, conflicting value systems and constraints on young people’s autonomy (especially in schools)—they recommended that the specific boundaries of working partnerships with young people be established prior to project implementation, such that the roles of researchers and youth are clearly defined. Third, they noted that evidence suggests that the beneficiaries of peer-led sex education include the peer educators themselves; thus, they recommended the evaluation of the effects that peer education has on peer educators, and of reciprocal education, in which each member of a target population alternates between being an educator and a recipient. Fourth, they recommended that both quantitative and qualitative methods (and if possible, an integration of the two) be used to evaluate outcomes and processes.* Fifth, they concluded that the important characteristics of peer educators are unclear and recommended that to help elucidate the matter, authors describe how peer educators were recruited and selected. Finally, they recommended that young people’s views regarding the intervention, including negative views, be fully reported.
We conducted a systematic review and methodological appraisal of randomized and quasi-randomized controlled trials of peer-led sex education interventions. We also evaluated the extent to which Harden and colleagues’ recommendations for the development and evaluation of peer-led interventions have been addressed in studies published since 1998.
Eligibility Criteria and Methodological Appraisal
We examined all randomized and quasi-randomized controlled trials that evaluated interventions to promote adolescent sexual health using peer educators and that were published in 1998–2005. Any peer-led intervention intended to promote sexual health in any setting (e.g., health center, youth group, local extracurricular center, school) in high-, middle- or low-income countries was eligible. For inclusion in the review, studies had to have intervention and control groups, include adolescents aged 10–19 and be published in English.
In addition, studies were required to meet four methodological criteria: The studies had to include a control or comparison group whose social and demographic characteristics were similar to those of the intervention group, provide preintervention data for all groups, provide postintervention data for all groups and report all outcomes. Primary outcomes of interest were the occurrence of pregnancy or STDs, age at first sex, number and types of sexual partnerships, condom use and contraceptive use. Relevant secondary outcomes were measures of knowledge of sexual health or contraceptive services; behavioral intentions regarding sex or contraceptive use; and attitudes about sex, sexual health or contraceptives.
To identify relevant studies, we searched the following databases: EMBASE, ERIC, PubMed, International Bibliography of Social Science, PsycINFO, specialized bibliographic registers, DoPHER and the Cochrane Central Register of Controlled Trials. We used the search terms “peers,” “adolescents,” “education” and “health promotion” in combination with the search strategy detailed in the Cochrane Reviewers’ Handbook.14 In addition, we contacted researchers, searched reference lists and hand searched all issues of the journals Health Education and Behavior and Health Education Research published in 1998–2005. The electronic records were scanned to identify potentially eligible studies. Because of resource constraints, we omitted unpublished works.
Given the similarity among existing guidelines for the methodological appraisal of randomized and quasi-randomized controlled trials,12,14–17 we used the criteria developed by the Evidence for Policy and Practice Information and Coordinating Centre,15 with additional criteria based on the Cochrane review guidelines.14 In total, we examined 10 criteria. For each study, we determined whether the authors provided a clear statement of aims, whether the description of the study design provided sufficient detail to allow replication, whether there was a randomization process for allocation to different groups (even with quasi-randomized studies), whether the numbers of participants in the intervention and control groups were provided, whether preintervention and postintervention data for each group were provided, whether losses to follow-up were reported and whether outcome reporting related to the study aims.** We also examined whether all outcomes were evaluated for all participants and whether adjustments for cluster sampling were made in clustered studies.
For the randomized controlled trials, we determined whether randomization and allocation concealment met the criteria of Jüni et al.22 Adequate approaches for randomization included use of a table of random numbers, coin tossing or computer generation of random numbers; inadequate strategies included systematic allocation, such as alternating the intervention assignment of clinic attenders. Allocation is generally considered adequately concealed if neither investigators nor subjects can foresee the latter’s assignments; however, the nature of interventions reviewed here precluded the blinding of those providing or receiving interventions. Instead, we ascertained the blinding of the researchers who assessed outcomes. When necessary, we contacted authors for clarification of these and other study details.
We also assessed the extent to which Harden and colleagues’ recommendations for the development and evaluation of peer-led interventions were addressed in study reports. For intervention development, we determined whether adolescents’ health needs and views were assessed in the initial phases of intervention development, whether young people took an active role in developing the intervention, whether subgroups of adolescents provided input, whether the research looked beyond individual change and took the community into account, and whether the authors provided a clear statement regarding working boundaries between the youth and adults.
Similarly, we examined whether the studies fulfilled Harden and colleagues’ recommendations for the evaluation of peer-led interventions. Specifically, we assessed whether the study used both qualitative and quantitative methods, whether it provided details about how peer leaders were recruited and selected, whether young people’s views were prioritized, whether the applicability of the peer-led method to the study population was critically examined, whether the relative contribution of the intervention to a broader health promotion strategy was explored, whether reciprocal peer education and the intervention’s effects on peer leaders were examined, whether the quantitative and qualitative work were integrated and whether researchers engaged in skills sharing (e.g., whether those working on outcome evaluations used data from process evaluations to explain findings).
We performed a narrative analysis, describing studies according to the use, training and recruitment of peers, composition of target population, intervention site, intervention components, theoretical basis and outcome findings. We used random effects meta-analysis to estimate pooled effects when four or more studies used the same outcome measure; odds ratios were calculated using results from final follow-up. For cluster randomized controlled trials that did not provide an intraclass correlation coefficient (a measure of consistency for a data set that has multiple groups), the results were adjusted using a conservative value of 0.05 for rho. Analyses were conducted using STATA 9 and Review Manager 4.2.
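The clustering adjustment and random-effects pooling described above can be sketched in code. The following is a minimal illustration, not the authors’ actual analysis (which used STATA 9 and Review Manager 4.2): it applies the design-effect formula DE = 1 + (m − 1)ρ with the review’s conservative ρ of 0.05, and pools log odds ratios with the DerSimonian–Laird random-effects estimator. The function names and example inputs are hypothetical.

```python
import math

def design_effect(cluster_size, rho=0.05):
    # Variance inflation for cluster sampling: DE = 1 + (m - 1) * rho,
    # where m is the average cluster size and rho the intraclass
    # correlation coefficient (0.05 is the review's conservative value).
    return 1 + (cluster_size - 1) * rho

def random_effects_pool(odds_ratios, variances):
    """DerSimonian-Laird random-effects pooling of log odds ratios.

    variances are the within-study variances of the log odds ratios
    (after any design-effect inflation for clustered trials).
    Returns (pooled OR, lower 95% CI, upper 95% CI).
    """
    logs = [math.log(o) for o in odds_ratios]
    w = [1.0 / v for v in variances]
    k = len(logs)
    fixed = sum(wi * li for wi, li in zip(w, logs)) / sum(w)
    # Cochran's Q and the between-study variance tau^2
    q = sum(wi * (li - fixed) ** 2 for wi, li in zip(w, logs))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Re-weight each study by 1 / (within-study var + tau^2)
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * li for wi, li in zip(w_star, logs)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se),
            math.exp(pooled + 1.96 * se))
```

For example, a trial with an average cluster size of 21 and ρ = 0.05 has a design effect of 2.0, so its effective sample size is halved before pooling.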
We examined heterogeneity by using the I2 test for consistency (an I2 value of 75% or greater indicates that variability across studies is due to heterogeneity rather than chance).23 We explored the effect of methodological diversity on heterogeneity by examining variation across studies in methodological quality, in adherence to Harden and colleagues’ methodological and reporting recommendations, and in characteristics such as setting, session length and peer responsibilities.14 Finally, to examine effects of smaller studies and explore the possible presence of publication bias, we created a funnel plot.†14,24
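The I2 statistic is derived from Cochran’s Q: I2 = max(0, (Q − df)/Q) × 100%, where df is the number of studies minus one. A minimal sketch (hypothetical function names, not from the source):

```python
import math

def i_squared(odds_ratios, variances):
    """I^2: percentage of total variability across studies attributable
    to heterogeneity rather than chance, computed from Cochran's Q
    under a fixed-effect (inverse-variance) model."""
    logs = [math.log(o) for o in odds_ratios]
    w = [1.0 / v for v in variances]
    pooled = sum(wi * li for wi, li in zip(w, logs)) / sum(w)
    q = sum(wi * (li - pooled) ** 2 for wi, li in zip(w, logs))
    df = len(logs) - 1
    return max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
```

Identical study estimates give I2 = 0 (as in the peer-recruitment subgroup reported below), while strongly conflicting estimates push I2 toward 100%.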
The combined searches yielded 4,500 electronic records. We screened these records for eligibility and obtained 33 articles for further assessment. Thirteen of the 33 articles met all four core methodological inclusion criteria and were included in the review;25–37 nine were quasi-experimental and four were randomized controlled trials (Table 1). Modes of evaluation were questionnaire, interview, survey and (in the studies testing for STDs) vaginal swab testing (not shown). Eight of the studies were conducted in developed countries, mainly the United Kingdom and the United States, and three of the five remaining studies were set in Africa. Nine studies were conducted in school settings, and four were community-based (e.g., in a health center, youth center or village).
Peer leaders ranged in age from 14 to 26. Several studies tried to balance the proportion of male and female peer educators, but those that relied on volunteer peer educators tended to have more females than males (not shown).
Only three studies met all 10 of the quality criteria.30,31,37 Two met nine of the criteria, in both cases failing to provide an intraclass correlation coefficient.34,35 Two studies did not provide a clear definition of aims,25,29 and two others lacked sufficient description of the study design and intervention to allow replication.27,36 Random allocation was noted in all but three trials.29,32,33 Ten studies evaluated all participants; the others assessed outcomes in a random sample of youth.26,28,36 Three studies did not report attrition by group.28,32,36 All of the studies reported results for each outcome measure and provided the number of participants per condition as well as preintervention and postintervention data.
Cluster sampling was used in all 13 studies. Effects of cluster sampling were taken into account in seven studies: Three used intraclass correlation coefficients,27,32,33 while the other four mentioned adjustment for clustering effects but did not provide the coefficient.29–31,37
Development and Evaluation Criteria
None of the studies fulfilled all of Harden and colleagues’ recommendations for intervention development (Table 2, page 148). Seven reported assessing youth health needs and views,26,27,30–32,35,37 and the same number involved young people in the development process25,27,28,30,32,35,37 or took context into account.25,28,30,31,33,35,37 All of the studies reported targeting the individual as well as broader sociocultural or economic factors. Four studies sought input from subgroups of youth.25,31,34,37 Establishment of boundaries between peers, adults and research staff was described in only one study.28
Similarly, no study met all nine of Harden and colleagues’ recommendations for evaluation of interventions (Table 3, page 148). Six studies used a mix of quantitative and qualitative methods,26–28,30,35,37 and an identical number detailed the peer recruitment and training process.27,31,32,35–37 Four studies referred to the effects of being a peer educator,29,35–37 though none examined reciprocal peer education. Four studies reported prioritizing the views of young people.26,28,31,37 Applicability of peer education to high-risk groups was discussed in eight studies.25,28,30–33,35,37 All of the studies discussed the relative contribution to community well-being that sexual health education would make. Four studies integrated quantitative and qualitative methods;27,28,30,37 none utilized skills sharing.
Eleven of the reviewed studies assessed contraceptive use, in most cases condom use; several also examined other behavioral outcomes. Some of the studies measured outcomes at multiple time points; we focused on results reported at final follow-up.
Eight studies measured condom use at last intercourse.25–28,30,35–37 For the cluster studies that did not provide an intraclass correlation coefficient, we adjusted the odds ratios reported by the authors. However, the report by Aarons and colleagues25 did not provide the number of participants of each gender; without that information, we could not adjust the odds ratio for clustering.§ For the remaining seven studies, the unadjusted pooled odds ratio for condom use at last intercourse was 1.06 (95% confidence interval, 0.92–1.21; not shown). After adjustment for clustering, the effect estimate was 1.0 (Table 4). The I2 value for these studies was 77% (not shown). The funnel plot for the data is nearly symmetrical (Figure 1); the symmetry would be more apparent if the plot included more data points. Nevertheless, the figure shows the larger studies (represented by the three leftmost points) falling close to the midline, while the smaller studies (the remaining points) are farther from the midline. This indicates that the small studies did not show a greater treatment effect than the larger, more precise studies.
The heterogeneity of studies did not fall below 75% when they were examined in subgroups according to methodological quality criteria or to most of Harden’s criteria (not shown). The exception was for the subgroup of studies that had detailed the recruitment and selection process of the peer leaders; these had an I2 value of 0.
Three studies reported findings regarding consistent condom use (not shown).29,30,35 None of these three showed statistically significant effects, although the 95% confidence intervals for the odds ratios were very wide.
Other behavioral measures assessed in the studies were number of partners, sexual activity and incidence of STDs (not shown). One study showed a clear reduction in the risk of testing positive for chlamydia (odds ratio, 0.17; 95% confidence interval, 0.03–0.92),30 but another found no impact on STD incidence.35 Aarons and colleagues reported an increase in the odds that female adolescents had never had sex (1.88, 1.02–3.47); no effect was observed among males.25 Other studies failed to show clear evidence of benefit in reducing the number of regular or casual partners,26 recent partners31 or unintended pregnancies.37
All 13 studies assessed knowledge, attitudes or intentions (not shown). Twelve of the studies measured knowledge of the information provided by peer educators, including information about STD symptoms, types of contraceptives, how to use condoms, and means of HIV transmission and prevention;25–28,30–37 all but two32,33 of these studies showed statistically significant improvements in knowledge. Moreover, all 10 of the studies that assessed attitudes and intentions reported positive effects.25–29,31–33,35,37
This article provides an overview of peer-led sex education interventions published in 1998–2005. Overall, we found no clear evidence that peer-led sex education promotes condom use or reduces the odds of pregnancy or of having a new partner. However, study results were highly heterogeneous, suggesting that there may be real differences in the effects of interventions included in the review. One study reported a statistically significant reduction in chlamydia incidence,30 and another showed an increase in the odds that female participants had never had sex.25 Both studies were randomized controlled trials and fulfilled all but one of the methodological criteria (blinding of the outcome assessors was not reported).
Most of the studies found positive effects on measures of knowledge, attitudes and intentions. These results should be viewed cautiously, however, as it was not always clear how many variables were measured and whether the length of time between intervention exposure and outcome assessment was consistent among studies.
Another reason for caution is that the methodological quality of studies was generally poor. Only 13 of 33 potentially eligible studies fulfilled the four basic methodological inclusion criteria. Even among the studies included in the review, just three met all 10 of the methodological quality criteria.30,31,37 The low methodological quality suggests the potential for bias in the study results.
No study addressed all of Harden and colleagues’ recommendations for the development and evaluation of peer-led sexual health interventions. Although nearly all of the studies examined the interventions’ applicability to high-risk groups and their relative contribution to broader health strategies, each of the remaining evaluation criteria was met by fewer than half of the studies.
In general, the high level of heterogeneity across studies was not reduced by subgroup analysis according to quality criteria or fulfillment of Harden and colleagues’ recommendations, perhaps because different sources of heterogeneity were acting in different directions. However, in the analysis by selection and recruitment process for peer educators, four studies showed homogeneity (I2=0).27,35–37 The peer recruitment methods used in these four studies varied, so it is not clear why the studies would be homogeneous. The finding could be a statistical artifact and should be reassessed in future systematic reviews.
Our review had a number of limitations. In making adjustments for clustering, we used a conservative value of rho; however, this did not influence the main findings, as both the adjusted and the unadjusted 95% confidence intervals of the odds ratio for condom use at last sex included 1.0. Because of resource constraints, data in this review were extracted by only one reviewer; use of a second reviewer would have reduced the risk of error and subjectivity. Most authors did not respond to requests for missing information, and therefore the review reflects the information provided in published reports; the degree to which these reports correspond to how the studies were conducted cannot be ascertained. The assessment of whether studies addressed Harden and colleagues’ recommendations was inevitably subjective for some factors and represents a limitation in using these recommendations to evaluate peer-led sexual health interventions. Only published studies were reviewed, which could bias the results toward interventions that have shown changes in outcome. Finally, there were insufficient studies using the same outcome measures to use regression to robustly assess the impact of study quality and bias, intervention components or adherence to Harden’s recommendations on efficacy.
Implications and Recommendations
Fulfillment of the four core methodological criteria should be a standard in future research and program development. Although the randomized controlled trial design, which can show a causal association, is the “most rigorous way to evaluate the effectiveness of an intervention,”38 it is vital that such trials be of high methodological quality. Cluster sampling calculations should be part of the trial design, and analysis should demonstrate similarity among the groups assessed.
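The cluster sampling calculation referred to above inflates an individually randomized sample size by the same design effect used in the analysis. A minimal sketch under the review’s conservative assumption of ρ = 0.05 (the function name and inputs are illustrative):

```python
import math

def cluster_sample_size(n_individual, cluster_size, rho=0.05):
    """Sample size required under cluster randomization:
    n_cluster = n_individual * (1 + (m - 1) * rho),
    where m is the average cluster size (e.g., class or school size)
    and rho the assumed intraclass correlation coefficient."""
    return math.ceil(n_individual * (1 + (cluster_size - 1) * rho))
```

For instance, a trial that would need 400 individually randomized participants needs twice that number when randomizing classes of about 21 students with ρ = 0.05, which is why cluster inflation must be planned at the design stage rather than patched in at analysis.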
Harden and colleagues’ recommendations emerged from an analysis of the key issues, problems and gaps in peer-led interventions and their evaluation. Authors should address these recommendations or explain why they have not done so. For example, assessment of youth health needs and views would be a rational and essential component of the initial phase of project development, as identifying youths’ needs would help in determining whether peer delivery is an appropriate method. Yet only half of the studies in this review assessed youth health needs and views. Similarly, only a few studies have evaluated how interventions affect peer leaders.39,40
Although we focused on outcome evaluations in this review, process evaluations are also important. Process evaluations can assess the full impact of interventions on adolescents and peer leaders,38 and can be instrumental in developing these programs by ascertaining youth health needs and views. Process and outcome evaluations are complementary and should be implemented in all evaluations in order to attain a full overview of a program’s effects.
Finally, although most of the studies examined in this review did not find unambiguous support for peer-led interventions, we believe that this approach should not be abandoned but rather fine-tuned. Because the peer-led approach seemed to hold so much promise, interventions were sometimes designed without much thought to the details. Given that the jump from theory to practice was not quite successful, researchers should look back on the shortcomings of prior studies, and place greater emphasis on details of intervention design, when creating, implementing and evaluating future peer-led programs.