
Using Randomized Designs to Evaluate Client-Centered Programs to Prevent Adolescent Pregnancy

Dennis McBride and Anne Gienapp


Abstract / Summary

Context: Interventions to prevent adolescent pregnancy (primarily curriculum-based programs) have not produced convincing evidence as to their success. Moreover, many evaluation approaches have been inadequate to assess program effectiveness. Therefore, rigorous evaluation of different kinds of interventions may help identify potentially effective strategies to prevent adolescent pregnancy.

Methods: An experimental design, in which clients were randomized to treatment and control groups, was used to evaluate the effects of a "client-centered" approach to reducing pregnancy among high-risk young people in seven communities in Washington State. Four projects served 1,042 youth (clients aged 9-13), and three served 690 teenagers (primarily clients aged 14-17). Projects offered a wide variety of services tailored to individual clients' needs, including counseling, mentoring and advocacy.

Results: On average, clients in the treatment group at youth sites received 14 hours of service, and their teenage counterparts received 27 hours; controls received only 2-5 hours of service. At one youth site, clients were less likely to intend to have intercourse after the intervention than before; at another, they became less likely to intend to use substances. Clients at one teenage project reported reduced sexual behavior and improved contraceptive use after receiving services; teenagers at another site reported reduced sexual intentions and drug use, and a greater intention to use contraceptives. The programs showed no other effects on factors that place young people at risk of becoming pregnant, including their sexual values and educational aspirations, communication with their parents (measured at youth sites only), and sexual and contraceptive behavior (assessed for teenagers only).

Conclusions: High-risk clients likely need considerably more intervention time and more intensive services than programs normally provide. Rigorous evaluation designs allow continued assessment that can guide program modifications to maximize effects.

Family Planning Perspectives, 2000, 32(5):227-235

In 1993, concerns about the social and financial costs of teenage pregnancy and parenthood led the Washington State legislature to pass a bill authorizing the state health department to fund community-based teenage pregnancy prevention projects, family planning services for teenagers and a statewide media campaign. Funding for the pregnancy prevention projects was provided through a competitive process that was open to health departments, schools, family planning agencies, churches and youth organizations. Of the 50 agencies that applied, 11 (with a total of 13 sites) received funding, in amounts ranging from $40,000 to $50,000 per year. In 1995, the state health department contracted with the Washington Institute, a research and training institute affiliated with the University of Washington, to conduct an evaluation of the community-based projects.

The health department, evaluators and state legislators were aware that a lack of strong program evaluations had resulted in limited knowledge about effective teenage pregnancy prevention approaches. Hence, despite the relatively small amount of funding for projects, the intent of the 1993 legislation was to conduct rigorous evaluations to test potentially effective pregnancy prevention strategies and determine their impact on teenagers' sexual behavior. In keeping with this commitment, the health department required that projects, with the assistance of the evaluation team, develop strong evaluation designs, preferably employing randomized assignment or at least using matched comparisons. Of the 11 funded projects, eight have randomized designs. One of these is being assessed as part of a national evaluation; the remaining seven are the focus of this article.

These projects are distinctive in that they use a "client-centered" intervention approach, which combines education and skills-building with a broad array of individualized services. This article, based on results of a four-year evaluation of Washington's client-centered adolescent pregnancy prevention projects, highlights the challenges experienced in the implementation of rigorous evaluation designs in small programs and the benefits of such designs for policy decisions.

Evaluations in Perspective

Since the early 1990s, the rate of teenage pregnancy has been declining nationally and in Washington State.1 Nevertheless, the problems and consequences of teenage sexual activity and pregnancy, which are well known, continue to be widespread.

Of concern are the many negative outcomes associated with teenage pregnancy—for teenage mothers and fathers, their children and society in general. For example, compared with women who give birth at ages 20-21, those who become mothers at age 17 or younger are less likely to complete school and more likely to have a subsequent pregnancy and to be a single parent. Their children receive less health care than the children of older mothers, and they have lower cognitive scores, more difficulty in school, poorer health, less-stimulating and less-supportive home environments, and higher rates of incarceration and adolescent childbearing.2

Of further concern are rates of sexually transmitted diseases (STDs) among teenagers. Every year, three million teenagers acquire an STD. This total represents 25% of sexually active teenagers and 13% of all teenagers. In 1995, 10-29% of sexually experienced adolescent women were infected with chlamydia, and nearly 175,000 teenagers had gonorrhea.3

The costs associated with the consequences of early sexual activity—including pregnancy, childbirth and STDs—are enormous. Direct program costs for mothers delivering at age 17 or younger are estimated to be nearly $7 billion more than those for women delivering when they are 20-21 years of age. This figure rises even further when other, associated costs are considered.4

Investment in the prevention of early sexual activity and teenage pregnancy clearly is warranted. However, the effectiveness of many teenage pregnancy prevention interventions remains unknown or uncertain because of a lack of carefully conducted outcome evaluations.5 A definitive review of more than two decades' worth of evaluations found only 27 meeting criteria that are hardly "rigorous" by evaluation standards—namely, the evaluation had to have been published, and its design had to have included at least a comparison group.6

Despite the weaknesses and short supply of evaluations of teenage pregnancy prevention programs, there is evidence that some interventions—primarily curriculum-based programs that provide the same basic services to each client, generally in a school setting—have an effect on primary outcome measures such as sexual behavior and pregnancy.7 Yet the magnitude of these interventions' effect on teenage pregnancy rates remains uncertain, and more evaluation is needed.8 At the same time, rigorous evaluation of other kinds of programs may add to current knowledge regarding potentially effective interventions.

Research and evaluations suggest that in addition to curriculum-based activities, access to family planning services is an important factor for reducing teenage pregnancy.9 In particular, efforts to facilitate and promote teenagers' use of contraceptives seem warranted. Although the majority of teenagers who engage in sexual intercourse report using contraceptives, 25% of those aged 15-17 and 16% of those aged 18-19 use no method.10 Moreover, even among teenagers who use a method to prevent STDs or pregnancy, incorrect or inconsistent use increases the likelihood that the method will be ineffective. In addition to community-based prevention programs, family planning services aimed at teenagers yield opportunities to provide reinforcement regarding teenagers' consistent and proper use of contraceptives, even when sexual encounters are unplanned.11

Additionally, prevention programs that offer broad services and address an array of risk factors for early pregnancy may have more potential to influence teenagers' behavior than simplistic programs or interventions that address only one risk factor.12

The Client-Centered Approach

Background

Programs based on theoretical models appear to be the most effective at changing behavior and provide opportunities for strong evaluations.13 However, none of the projects described in this article is explicitly based on a clearly identified theoretical model. Instead, the "client-centered" model is an approach developed primarily by service providers and is based upon their conclusions about why teenagers become involved in risky sexual behaviors and pregnancies. According to providers, many teenagers lack "real" information about sexual activity and its consequences; lack adults and peers they can trust and confide in; lack positive coping skills to manage stress, sadness and anger; and lack consistent emotional support and positive guidance.14 Providers believe that addressing these needs is key to helping teenagers avoid risky behaviors and pregnancy.

Washington's community-based teenage pregnancy prevention projects utilize an approach that is more comprehensive than typical curriculum-based models. They address a wide range of issues and behaviors associated with early pregnancy, including values and attitudes about teenage sexual activity and pregnancy; alcohol and drug use; delay of sexual activity; prevention of STDs; enhancement of coping skills, life planning and goal-setting; and support for youth and their families. Interventions are intended to be flexible and tailored to each client's needs and risk level. Although many projects incorporate sexuality education—some use popular curricula such as Postponing Sexual Involvement, Sex Can Wait or Reducing the Risk—they modify educational messages according to teenagers' individual or community circumstances. They also provide individualized support services, including advocacy, counseling or mentorship; links to clinical family planning services; and opportunities for clients to participate in social or recreational activities.

The Projects

Six of the seven projects described in this article deliver their services in local middle and high schools (Table 1). Three are run by family planning organizations, three by local health departments and one by a mental health agency. Project staff include trained sexuality educators, social workers and counselors.

Four projects serve youth (those aged 9-13), and three serve teenagers (primarily 14-17-year-olds*). We distinguish in this evaluation between "youth" and "teenage" projects because while all projects have the objective of reducing adolescent pregnancy, their strategies differ according to their clients' age-group: Projects that provide services to older teenagers address sexual behavior directly; those serving younger clients address factors thought to increase the risk of too-early pregnancy.

Teenagers served by Washington's community-based projects are referred by school counselors, family planning clinics and other social service agencies. Clients are often referred because they are perceived to be at high risk of becoming involved in premature sexual activity or pregnancy. A summary of several items from the evaluation instrument that correlate with early sexual behavior (Table 2) confirms that teenagers who participate in the community-based projects are at elevated risk. For instance, 22% of clients reported that their mother did not finish high school; by contrast, the proportion was 16% in a study conducted among a general school population of the same age.15 Additionally, 17% of clients at youth sites and 13% of those at teenage sites reported getting mostly Ds and Fs in school, compared with 5-6% of students of similar ages in the general population.16 Low levels of maternal education and school achievement are associated with too-early sexual activity.17

Implementing Evaluations

Establishing rigorous evaluation protocols for adolescent pregnancy prevention projects—or for any social and health service program—is a difficult process. We were originally attracted to evaluating these projects because the state health department was willing to require rigorous evaluation despite myriad commonly heard reasons why such evaluations cannot be done. Although the evaluation is now well established, many challenges and barriers to implementation arose during the first year.

At the outset, most project staff were not accustomed to conducting program evaluation, let alone following the protocol of a randomized design. Therefore, they faced a steep learning curve with regard to identifying "treatment" and "control" clients (i.e., those who would receive the intensive, client-centered services under study vs. those who would receive no services or only the services typically provided at the site), collecting and tracking data, and adopting systematic practices for documenting program services and activities. During the first year, staff resisted evaluation because they felt that the time required to learn and then conduct it took away from their ability to provide services to clients. Additionally, some project staff and community stakeholders viewed the evaluation protocol as ethically questionable in cases where, for comparison purposes, services were not provided to certain clients, or different services were provided to different groups.

Another challenge during the first year was that state law required all projects to obtain active parental consent before clients younger than 14 could participate in the evaluation. Obtaining active parental consent was extremely challenging and time-consuming. Other barriers included sites' ability to attract and maintain clients, staff turnover and community resistance to sexuality education activities.

To overcome these barriers, the evaluation team, project personnel and the state health department collaborated closely. To foster good working relationships, the evaluators and health department personnel conducted statewide and regional workshops, arranged regular site visits and had frequent phone contact with each project. In addition, health department personnel and the evaluation team met regularly to assess challenges as they arose, suggest solutions, discuss each project's progress and troubleshoot problems.

Gradually, solutions to problems were found. Project staff discovered ways to obtain parental consent via outreach or small incentives (e.g., coupons for pizza or movies). They also capitalized on word of mouth as more youth participated in the project and had positive experiences. Staff refined consent forms to ensure that clients have a clear understanding of the project and the evaluation, as well as to simplify procedures (e.g., forms for clients' agreement to participate and parents' consent were originally separate but were combined into one). Eventually, obtaining consent became integrated into the everyday process of conducting the projects.

Originally, we assumed that gaining staff's acceptance of randomized treatment and control groups would be the most difficult aspect of implementing the evaluation. Fortunately, this was not the case, for several reasons. First, the original request for proposals distributed by the state health department stated explicitly that a rigorous, preferably randomized design was required for funding. Hence, the expectation was established from the outset.

Second, the funding agent supported the requirement for rigorous designs throughout. In cases where projects opted to use weaker designs—e.g., because of the difficulty in implementing a rigorous design—the funding agent supported the evaluator in dissuading the projects from doing so. Instead, we worked together to overcome barriers and maintain the stronger designs. This was accomplished not through coercion but by fostering an atmosphere of mutual trust and compromise among the three partners.

Finally, and most importantly, we were able to help stakeholders understand the value of rigorous designs. While project staff were at first uncomfortable with the idea of not providing services to certain clients, they came to see that their programs cannot serve all youth in their community. Currently, projects provide comprehensive services to as many youth as resources allow, and collect information for comparison purposes from additional youth. Most staff have come to view the randomized design not as "withholding" services from some youth, but as a rigorous test of their interventions.

In addition, our message as evaluators has consistently been that the primary goal of the evaluation is not simply to determine whether the program is effective, but to understand how well it is working so that it can be modified to maximize the effectiveness of services for clients. Strong designs give better information for program decision-making than do weaker designs. Hence, stakeholders have become more comfortable with and, consequently, more supportive of evaluation activities as they have begun to see how information feedback can be used to improve interventions.

Over the four years of the project, client numbers at each site have risen. The increases can probably be attributed to strengthened partnerships and participation agreements with schools and other collaborating agencies, improved recruitment and referral processes, and project staff's increased experience with both program implementation and evaluation activities.

Process Evaluation

Service Delivery

Most clients are involved with projects for 1-2 years. The number of hours they spend receiving services varies: Education services typically are provided in a fixed number of hours, but the amount of time clients spend in other project components (e.g., meetings with advocates or mentors) differs according to their individual needs. Thus, at youth sites, participants in the treatment group received an average of 14 hours of services per year, and those in the control group received five hours of services. At teenage sites, the average was 27 hours for those in the treatment group and two hours for controls (Table 3).

In three of the youth sites, because staff were uncomfortable providing no project services to the control group, they provided some services—education and skills-building—to all clients. From an evaluation design standpoint, it would have been better if the control group had received no services. Nonetheless, controls did not receive the individualized services that treatment group clients did (i.e., counseling, advocacy or mentoring).

Focus Groups

In 1997-1999, 17 focus groups were conducted at the seven project sites† to explore participants' program experiences, validate and obtain further insights into the client-centered approach, substantiate clients' risk levels and clarify factors that may influence outcomes, such as the attractiveness of the intervention, participants' level of engagement and potential implementation issues. The discussions revealed issues such as teenagers' emotional instability, involvement in risky behaviors and destructive coping methods, as well as difficulties developing meaningful relationships. Many participants lacked resources for obtaining support and guidance. Some spoke of substance use; had experienced abuse or neglect; and appeared to be lonely, isolated or angry.

Teenagers' comments suggested that client-centered programs provide an attractive environment for learning and skills-building with respect to topics related to pregnancy prevention. Participants generally described programs as fun, helpful, supportive and educational. Several teenagers expressed appreciation that information they had received through the programs was so "real" or mentioned that it was more straightforward than any information they had gotten through school, parents or other sources of sexuality education. Teenagers often alluded to feelings of isolation and a lack of consistent family or peer relationships and support; many said that the support and attention they received and the relationships they developed with project staff, mentors or other program participants were especially meaningful to them. They also identified the development of positive attitudes about sexuality and self as a program benefit.

Clients noted that programs were generally better than they had expected and said that offering more service hours could improve them. Participants identified confidentiality and trust as critical program features. Teenagers' experiences with the programs appeared to be enhanced when their sense of trust and safety was high, when their relationship with staff was strong, and when education about sexuality and contraception was reinforced through discussion, individual counseling or advocacy.

Outcome Evaluation

Methods and Procedures

Participants in project evaluations were randomly assigned to a treatment or control group, typically on the basis of whether their birth date was an odd or an even number. Appropriate consent had to be obtained for participation: For clients younger than 14, both active parental consent and client assent were required; for those 14 or older, only the client's consent was needed. The evaluation is based on results of pretests administered to clients before the start of the intervention (and generally before assignment to treatment or control groups) and posttests administered upon its completion. Data were typically collected from clients in group settings; participants who were absent for the initial test were surveyed later.
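As a concrete illustration of the assignment and consent rules just described (not part of the original protocol documents), the following Python sketch encodes them. Using the day of the month for the odd/even rule, and every name in the code, are assumptions of this sketch.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Client:
    client_id: str
    birth_date: date
    age: int

def assign_group(client: Client) -> str:
    """Assign by birth-date parity. The article says assignment was 'typically'
    based on an odd or even birth date; using the day of the month here is an
    assumption of this sketch."""
    return "treatment" if client.birth_date.day % 2 == 1 else "control"

def consent_rule(client: Client) -> str:
    """Consent requirements as described in the evaluation protocol."""
    if client.age < 14:
        return "active parental consent + client assent"
    return "client consent only"

# Example: a 12-year-old born on an even day goes to the control group
# and requires active parental consent plus assent.
c = Client("A-001", date(1987, 5, 14), 12)
print(assign_group(c), "|", consent_rule(c))
```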

The basis for data collection was the Teenage Pregnancy Prevention Computerized Information System (TPPCIS),‡ which is used to monitor and evaluate a wide range of teenage pregnancy prevention programs. The data system was modified to fit the specific requirements of each project, but where possible, the same information was gathered for all sites to enhance comparability. TPPCIS was designed to capture three types of variables: demographic, risk and outcome. It included items assessing teenagers' educational aspirations, the importance they attach to future education, their communication with their parents, teenagers' and parents' values concerning sexuality, and teenagers' sexual intention and sexual behavior (in both cases, including contraceptive use).
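To make the three variable types concrete, here is a purely illustrative Python sketch of what a TPPCIS client record might look like; the field names are this sketch's assumptions, not published TPPCIS specifications.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TPPCISRecord:
    """Illustrative record layout only; actual TPPCIS fields are assumed."""
    # Demographic variables
    age: int
    gender: str
    ethnicity: str
    # Risk variables (the kinds of indicators summarized in Table 2)
    mother_finished_high_school: Optional[bool] = None
    mostly_ds_and_fs: Optional[bool] = None
    repeated_grade: Optional[bool] = None
    # Outcome variables: pretest and posttest scale scores
    sexual_intention_pre: Optional[int] = None
    sexual_intention_post: Optional[int] = None
    substance_use_pre: Optional[int] = None
    substance_use_post: Optional[int] = None
```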

Interventions were conducted within the school year, but began at slightly different times because of differences in schools' agendas. Consequently, for the seven projects covered in this article, the interval between the pretest and posttest was 5-9 months and averaged seven months (see Table 4). Considerable emphasis was placed upon obtaining adequate follow-up. Attempts were made to obtain information from clients remaining in the project as well as those who did not continue. Most teenagers who were lost to follow-up had left the state or transferred to other schools.

We compared the demographic and risk variables shown in Table 2 between participants with follow-up data and those who were lost to follow-up. The only statistically significant difference (p<.05, two-tailed) across all sites was gender: A smaller proportion of clients in the group who were not followed up than in the followed-up group were female (61% vs. 72%). For youth sites, only two indicators were statistically significant. In site C, 50% of those lost to follow-up were female, compared with 48% of those followed up. In site D, 33% of those lost to follow-up reported receiving mostly Ds and Fs, compared with 14% of those who were followed up.

For teenage sites, the differences were more pronounced. Overall, clients who were lost to follow-up were at higher risk than those who were followed up. Their mothers were less likely to have a high school education (23% vs. 32%), they were more likely to have mostly Ds and Fs (21% vs. 13%) and they were more likely to have repeated a grade (26% vs. 14%). This bias occurred within each teenage site.

Further tests were conducted to determine if this bias occurred between treatment and control groups for those lost to follow-up. For all sites combined, only one factor was statistically significant: Clients in the treatment group were more likely than those in the control group to have mothers who were not high school graduates (40% vs. 24%). No statistically significant differences were observed for demographic and risk indicators within sites for those lost to follow-up.
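The article does not specify which test was used for these attrition comparisons; one standard choice for comparing proportions across follow-up groups is a chi-square test, sketched below in Python with hypothetical counts scaled to the reported 72% vs. 61% female proportions.

```python
import numpy as np
from scipy import stats

# Hypothetical counts scaled to the reported proportions:
# 72% female among clients followed up, 61% among those lost to follow-up.
table = np.array([[720, 280],   # followed up: female, male
                  [122,  78]])  # lost to follow-up: female, male

# Chi-square test of independence between follow-up status and gender;
# p < .05 would flag differential attrition on this indicator.
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")
```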

A perplexing problem has to do with "diffusion" (or "contamination").18 Since each project's clients, whether assigned to the treatment or the control group, attended the same school or community-based service agency and were of similar ages, information may have been diffused via communication and interaction between the groups. While this may be a problem, we would rather use randomization and deal with potential diffusion than apply weaker designs and deal with their deficiencies. Furthermore, if diffusion occurs, we expect information that clients obtained in the project to "rub off" on clients in the control group. However, the individualized and intensive nature of the intervention should overshadow the effects of diffusion, enabling us to test the hypotheses despite the occurrence of some diffusion. Hence, while this issue is of concern, we do not expect it to be detrimental to this evaluation.§

Measures and Hypotheses

We combined pertinent items from the TPPCIS to form constructs, or scales, which we tested for reliability using Cronbach's alpha, a common measure of internal consistency of scale items. (We consider an alpha of .70 or higher to indicate a reliable scale.) The items for each scale and corresponding alphas are shown in Table 5. Most of the scales have moderate or strong alpha coefficients. The exception is the drug use scale, especially for youth. There was too little use of illicit drugs to attain reliability. Notably, marijuana use did not correlate with use of harder drugs for either teenagers or youth, but it correlated with alcohol and tobacco use.

Individual items are scored either on a five-point Likert scale (with scores ranging from one to five) or, if they are dichotomous, on a two-point scale (with zero indicating a negative response and one indicating a positive response). Scores for individual items are summed to yield a score for the overall construct. Thus, for example, the sexual intention construct for youth sites consists of four items whose scores may range from one to five; therefore, the score for the scale may range from four to 20.
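The two preceding paragraphs describe scale construction and reliability checking; below is a minimal Python sketch of both, using simulated responses to a hypothetical four-item, five-point scale. The data are synthetic, and the .70 threshold follows the text.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) matrix of item scores:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated responses: 50 clients, four 1-5 Likert items sharing a common component
rng = np.random.default_rng(0)
trait = rng.integers(1, 6, size=(50, 1))
items = np.clip(trait + rng.integers(-1, 2, size=(50, 4)), 1, 5)

# The scale score is the sum of the items; four 1-5 items give a 4-20 range
scale_scores = items.sum(axis=1)
print(f"score range: {scale_scores.min()}-{scale_scores.max()}")
print(f"alpha = {cronbach_alpha(items):.2f}")  # >= .70 treated as reliable here
```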

Since youth and teenage projects have slightly different focuses, we expect them to have slightly different outcomes. In the youth sites, we hypothesize that after participating in the project, clients in the treatment group will be more likely than controls to express a decreased intent to engage in sexual behavior, increased values to delay sexual and other risky behaviors, increased communication with parents about sexuality,** increased educational aspirations, decreased substance use and decreased intent to use substances. In the teenage sites, we expect project participation to result in decreased intent to engage in sexual behavior, decreased sexual behavior, increased intent to use contraceptives,†† increased contraceptive behavior, increased educational aspirations, decreased substance use and increased values to delay sexual and other risk behaviors.

Statistical Tests

In testing the hypotheses, we compared the randomly assigned treatment and control groups by using a covariance adjustment model.‡‡ With this model, differences in adjusted mean scores between the treatment and control group at posttest indicate the effect, or lack of effect, of the intervention on each variable. (For instance, in Table 6, the adjusted means for sexual intention are 7.3 for the treatment group and 7.5 for the control group. The difference is not statistically significant, indicating no effect of the intervention across sites on this variable.) Statistical tests were done using the GLM feature in SPSS, version 10.5. Hypothesis tests are considered significant at p<.05, one-tailed.
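The analyses were run in SPSS; as an illustration only, the same covariance adjustment model can be expressed as an ordinary least-squares regression of the posttest score on a treatment indicator plus the pretest score. The Python sketch below uses simulated data, and the effect values in it are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated pretest/posttest scores for one site (all values hypothetical);
# treat = 1 for the treatment group, 0 for controls.
rng = np.random.default_rng(1)
n = 80
pre = rng.normal(8.0, 2.0, n)
treat = np.repeat([1, 0], n // 2)
post = 0.6 * pre - 0.5 * treat + rng.normal(0, 1.5, n)
df = pd.DataFrame({"pre": pre, "post": post, "treat": treat})

# Covariance adjustment: regress the posttest on group, adjusting for pretest.
# The coefficient on `treat` is the adjusted treatment-control difference.
fit = smf.ols("post ~ pre + treat", data=df).fit()
effect = fit.params["treat"]
p_two = fit.pvalues["treat"]
# One-tailed p in the hypothesized (negative) direction, as in the article
p_one = p_two / 2 if effect < 0 else 1 - p_two / 2
print(f"adjusted difference = {effect:.2f}, one-tailed p = {p_one:.3f}")
```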

Sample sizes vary considerably within sites for tests of different hypotheses. This variation is due partly to survey modification over time. We used a core set of questions at startup, and as time progressed, we added or modified items on the basis of feedback from clients, requests from staff and analysis of surveys. Variations in sample size are also due to conditional relationships (e.g., questions that concern contraceptive use at last intercourse apply only to sexually active clients) and missing data.

Equivalence between treatment and control groups at baseline was tested for each site using independent t-tests. The variables considered were age, gender, ethnicity, grade repetition, grades received and mother's education. In nearly all cases (40 out of 42), equivalence of treatment and control groups is supported for both teenage and youth sites (p>.05, two-tailed). An exception occurs in one youth site for gender and in one teenage site for age. In site B, there is a larger proportion of females in the treatment group (61%) than in the control group (41%). Site F has slightly older clients in the treatment group (15.1 years) than in the control group (14.7).
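A sketch of one such equivalence check, assuming an independent two-sample t-test on age; the simulated means echo site F's reported 15.1 vs. 14.7 years, but the data and group sizes are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical ages for treatment and control clients at one site
rng = np.random.default_rng(2)
age_treat = rng.normal(15.1, 1.0, 60)
age_ctrl = rng.normal(14.7, 1.0, 55)

# Independent two-sample t-test; p > .05 is read as baseline equivalence
t_stat, p_val = stats.ttest_ind(age_treat, age_ctrl)
print(f"t = {t_stat:.2f}, two-tailed p = {p_val:.3f}")
```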

Results of Hypothesis Testing

Youth sites. We found only minimal support for any of our hypotheses regarding the youth sites. For clients' intention to have sexual intercourse, possible scores range from four to 20, with higher scores indicating a greater likelihood of intending to engage in sexual intercourse. In site C, the mean, adjusted for pretest score,§§ is 7.5 for the treatment group and 8.4 for the control group, and the difference is statistically significant (Table 6). Thus, results for this site support the hypothesis that youth in the project will have lowered intentions to engage in sexual intercourse. However, none of the other sites show support for this hypothesis, nor do the sites taken together show support.

Similarly, one site (B) showed a statistically significant effect in the hypothesized direction for intention to use substances. Possible scores for this construct range from three to 15. The adjusted means for both groups of youth are low, but the treatment group scores slightly lower (4.2) than controls (5.1), indicating a lower likelihood of intending to use substances. However, the power (i.e., the probability of correctly rejecting the null hypothesis when it is false) is low.
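The article reports low power for several tests but does not show the calculations; as a rough illustration (not the evaluation's own computation), the sketch below derives post-hoc power for a hypothetical small standardized effect with modest group sizes.

```python
from statsmodels.stats.power import TTestIndPower

# Effect size (Cohen's d) and group sizes here are hypothetical stand-ins.
analysis = TTestIndPower()
power = analysis.power(effect_size=0.3, nobs1=40, alpha=0.05, ratio=1.0,
                       alternative='larger')  # one-tailed, as in the article
print(f"power = {power:.2f}")  # well below the conventional .80 target

# Sample size per group needed to reach .80 power for the same effect
n_needed = analysis.solve_power(effect_size=0.3, power=0.8, alpha=0.05,
                                ratio=1.0, alternative='larger')
print(f"n per group for .80 power = {n_needed:.0f}")
```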

Teenage sites. The first hypothesis, regarding clients' intent to engage in sexual intercourse, is tested for sites E and G only.*** The sexual intention scale has two items, and scores for the scale can range from two to 10. The adjusted mean for site E is lower for the treatment group (6.5) than for the controls (7.0); although the power is low, the difference is statistically significant (Table 7). The two sites combined also show a statistically significant difference, but the difference is clearly due to the impact of the project at site E, since scores for the treatment and control groups were virtually identical at site G.

The hypothesis that sexual behavior will be lower among treatment clients than among controls is not supported in site E or G, but is strongly supported in site F. At posttest, 50% of the treatment group at that site and 83% of the controls said that they had intercourse within the last month; this difference is statistically significant and has ample power. The effect is carried over to the test of the sites combined.

Results for sites F and G do not indicate any effect of the project on clients' intention to use contraceptives. However, in site E, the mean score was significantly higher for the treatment group than for controls.

Support for the hypothesis that project participation will be associated with an increase in contraceptive behavior is also evident in site F. At posttest, 77% of clients in the treatment group said that they had used a contraceptive at last intercourse, compared with 24% of those in the control group; the difference is statistically significant with strong power. Additionally, 47% of the treatment group and 11% of controls said that they always use a contraceptive; this difference, too, is statistically significant, but with lower power. Of concern here is that since the contraceptive questions were asked only of clients who had been sexually active in the past month, the sample sizes for these tests are small. This concern notwithstanding, there is partial support for the hypothesis.

There is no support for hypotheses that treatment and control groups would differ with respect to educational aspirations, substance use or sexual values. Site E showed some positive difference in reported drug use at posttest, but the other two sites showed differences in the opposite direction from what was expected: Treatment clients reported a higher incidence of drug use than controls. In all cases, however, the amount of illicit drug use by both treatment and control group clients is very small.

Conclusion

It has taken four years to get a solid test of these hypotheses. While one project consistently shows positive differences between treatment and control groups, and some isolated effects occur in other projects, the interventions show little or no effect across most of the projects. So, where do we go from here? Obviously, one answer would be to conclude that these interventions do not work, cut their funding and start over. Unfortunately, cutting project funds is the option usually taken by funding agencies, but in our opinion, it is a mistake. The better option is to begin modifying these interventions, and to the state health department's credit, it has concurred.

The reasons to maintain the projects are compelling. First, they have strong evaluation components. Second, we have a series of measures that are highly reliable and appear to be valid. Third, the project assumptions and orientations appear to fit well with the populations being served, and the programs are appealing to clients. Fourth, each project has had success in overcoming barriers to implementation, attracting and keeping clients, and gaining acceptance in the communities in which they operate. Fifth, sites are using evaluation data to modify and improve their programs. Hence, we have the processes in place to detect improvements when they occur. On the basis of information obtained through evaluations, modifications of the client-centered interventions began in 1999.

One issue that seems to warrant close assessment is service dosage. In site F, which showed consistent support for both sexual and contraceptive behavior hypotheses, treatment group clients spent the greatest number of hours in project activities—31, on average, compared with 14 for clients in youth sites and 27 for those in teenage sites overall. The question arises as to whether this level of service provision is sufficient to allow the interventions to have an impact on attitudes and behavior, especially given participants' risk factors. A study that measured outcomes of several health education curricula presented to youth in grades 4-7 found that program effects were limited when exposure time totaled less than 15 hours.19 Significant improvement in program effects was noted when exposure exceeded 20 hours, but approximately 40-50 hours was required to effect changes in general health attitudes and practices. Although individualized interventions are more intensive than school-based curricula and thus may require less time to affect attitudes or behaviors, an increase in service hours may be necessary for the Washington community-based projects to generate expected effects across all sites.

Many projects have also begun to look closely at whether the services they offer are focused specifically enough on changing sexual behavior and intent. While programs aim to provide services that are tailored to clients' individual needs, focus-group findings suggested that this flexible approach may lead some programs to steer services to address clients' present crises. As programs become more and more focused on "crisis management," the emphasis on sexuality education and pregnancy prevention is likely to lessen. While discussing or assisting clients with a range of risk issues is doubtless important, a more specific focus on sexuality and the behaviors associated with pregnancy may be critical to the success of pregnancy prevention programs. The issue of whether program services are linked tightly enough to the evaluation hypotheses so that positive impacts can be reasonably expected in a relatively short time has begun to receive attention.

To address the issues of service quantity and intensity, the state health department responded to evaluators' recommendations and required that all projects increase their "service dose" to a minimum of 20 hours per client. Some sites, acting on the focus-group results indicating that clients want more participation time, are increasing exposure even more.

Another issue that bears on service quantity and intensity is that higher-risk clients are likely to need not only more intervention time, but also more intensive interventions. And projects may not have adequate resources to serve some very high risk teenagers, who are the most likely to drop out of programs. Teenagers with many complicated issues (including mental and emotional health issues) may not be able to integrate prevention education messages and may require services that are outside the scope of the community-based projects. While many projects initially expressed a desire to serve all teenagers, regardless of need, projects have begun to see that this may not be an effective strategy. To target their interventions effectively, programs may need to develop a "hierarchy of need." Ideally, they will tailor interventions specifically on the basis of information about clients' risk levels. And given the resource limitations of the community-based projects, some teenagers may be best served by even more intensive case-management programs.

Using evaluation data, we hope to discover what quantity, intensity and mix of services project clients need. We will continue to evaluate and monitor the progress of these interventions until we identify the most promising strategies for addressing the difficult issues surrounding sexual behavior that affect our youth.

Footnotes

*Teenage projects were open to clients aged 12-17, but because of consent issues, they served mainly clients 14 and older.

†Participants were 105 teenagers who were receiving services from the programs, and the focus groups were conducted during regular program meeting times. Semistructured questions were used to guide discussions. Focus-group data were analyzed via a process of ethnographic description and structured analysis using the software package NU*DIST 4. Transcripts were independently coded, then categorized to reflect major substantive themes.

‡TPPCIS was developed in part by the lead author. Some of the core data items are published in: Card JJ, ed., Evaluating Programs Aimed at Preventing Teenage Pregnancies, Palo Alto, CA: Sociometrics, 1989; and Card JJ, ed., Handbook of Adolescent Sexuality and Pregnancy: Research and Evaluation Instruments, Newbury Park, CA: Sage Publications, 1993.

§Similarly, when this issue was discussed in a workshop of the National Campaign to Prevent Teen Pregnancy, the consensus was that while diffusion is a problem, it is not a "paralyzing" one. (Source: National Campaign to Prevent Teen Pregnancy, Evaluating Abstinence-Only Interventions, Washington, DC: National Campaign to Prevent Teen Pregnancy, 1998, p. 13.)

**The following dichotomous item measured communication with parents: "I can go to my parents with questions about sex."

††Contraceptive intent was measured by one item: "How likely is it that you will use an effective form of birth control in the next year?" Possible responses ranged from "definitely will not" (scored as one) to "definitely will" (five).

‡‡This model is an alternative to the more traditional repeated-measures approach. Both tests use only clients for whom we have both pretest and posttest data. However, the covariance adjustment model includes regression adjustment for the baseline value on the dependent variable in an analysis of covariance on the posttest data, not a comparison of change scores, the strategy of the repeated-measures technique. This is usually a more powerful approach than repeated measures. (Source: Murray D and Wolfinger R, Analysis issues in the evaluation of community trials: progress towards solutions in SAS/STAT MIXED, Journal of Community Psychology, 1994, CSAP Special Issue:140-154.)

§§Covariate adjustments were also done for pertinent demographic and risk variables (e.g., gender, age, ethnicity, mother's education, low grades and whether the client repeated a grade). As expected, because of the random assignment, including these additional adjustments did not have a significant effect on any of the hypothesized outcomes. Hence, only the pretest adjustment is used in the results reported in Tables 6 and 7.

***Site F is omitted because a slightly different item was used to measure sexual intent there. The item asked "How likely is it that you will have sexual intercourse in the next year?" Adjusted mean scores among 71 youth in the treatment group and 58 in the control group were nearly identical: 3.0 and 3.1, respectively.

References

1. Moore K et al., Adolescent Pregnancy Prevention Programs: Interventions and Evaluations, Washington, DC: Child Trends, 1995; and Washington State Department of Health, Center for Health Statistics, Washington State Pregnancy and Induced Abortion Statistics 1997, Olympia, WA: Washington State Department of Health, 1998.

2. Maynard R, ed., Kids Having Kids: A Robin Hood Foundation Special Report on the Costs of Adolescent Childbearing, New York: The Robin Hood Foundation, 1996.

3. Kirby D, No Easy Answers: Research Findings on Programs to Reduce Teen Pregnancy, Washington, DC: National Campaign to Prevent Teen Pregnancy, 1997; and Trussell J, Card JJ and Hogue CJR, Adolescent sexual behavior, pregnancy, and childbearing, in: Hatcher R et al., eds., Contraceptive Technology, 17th rev. ed., New York: Ardent Media, 1998, pp. 701-744.

4. Kirby D, 1997, op. cit. (see reference 3); and Maynard R, 1996, op. cit. (see reference 2).

5. Brown S and Eisenberg L, eds., The Best Intentions: Unintended Pregnancy and the Well-Being of Children and Families, Washington, DC: National Academy Press, 1995; Moore K et al., 1995, op. cit. (see reference 1); Philliber S and Namerow P, Trying to maximize the odds: using what we know to prevent pregnancy, paper prepared for the Teen Pregnancy Prevention Program, Division of Reproductive Health, National Center for Chronic Disease and Prevention, Centers for Disease Control and Prevention, Atlanta, Dec. 13-15, 1995; Kirby D, 1997, op. cit. (see reference 3); and Miller B et al., Preventing Adolescent Pregnancy: Model Programs and Evaluations, Newbury Park, CA: Sage, 1992.

6. Brown S and Eisenberg L, 1995, op. cit. (see reference 5).

7. Frost JJ and Forrest JD, Understanding the impact of effective teenage pregnancy prevention programs, Family Planning Perspectives, 1995, 27(5):188-195; Moore K et al., 1995, op. cit. (see reference 1); Webster C and Weeks G, Teenage Pregnancy: A Summary of Prevention Program Evaluation Results, Olympia, WA: Washington State Institute for Public Policy, 1995; and Miller B et al., 1992, op. cit. (see reference 5).

8. Brown S and Eisenberg L, 1995, op. cit. (see reference 5).

9. Miller B et al., 1992, op. cit. (see reference 5); and Moore K et al., 1995, op. cit. (see reference 1).

10. The Alan Guttmacher Institute (AGI), Sex and America's Teenagers, New York: AGI, 1994.

11. Kirby D, 1997, op. cit. (see reference 3).

12. Ibid.; and Moore K et al., 1995, op. cit. (see reference 1).

13. Miller B et al., 1992, op. cit. (see reference 5); and Moore K et al., 1995, op. cit. (see reference 1).

14. Gienapp A, Greef E and Paulsen L, Interviews with the Washington State Department of Health's community-based teenage pregnancy prevention program coordinators, 1996.

15. McBride D, Aronson B and Malloy C, Preliminary evaluation of the Washington State abstinence education program, paper presented at the workshop Evaluating Title V Abstinence Education Programs, Bethesda, MD, July 24, 2000.

16. RMC Research Corp., Washington State Survey of Adolescent Health Behaviors, 1998: Analytic Report, Portland, OR: RMC Research Corp., 1998.

17. Kirby D, 1997, op. cit. (see reference 3).

18. Cook T and Campbell D, Quasi-Experimentation: Design & Analysis Issues for Field Settings, Chicago: Rand McNally, 1979, p. 54.

19. Connell D, Turner R and Mason E, Summary findings of the school health education evaluation: health promotion effectiveness, implementation, and cost, Journal of School Health, 1985, 55(8):316-321.

Authors' Affiliations

Dennis McBride is a senior research associate at the Washington Institute for Mental Illness Research and Training, University of Washington, Tacoma. Anne Gienapp is a research associate at Organizational Research, Seattle. The State of Washington Department of Health provided funding for this project. The authors thank Melinda Harmon and the staff of the State of Washington Department of Health, Division of Community and Family Health, for ongoing support and collaboration on this project.

Disclaimer

The views expressed in this publication do not necessarily reflect those of the Guttmacher Institute.