
Consistency in Women's Reports of Sensitive Behavior in an Interview Mode Experiment, São Paulo, Brazil

Barbara S. Mensch, Paul C. Hewett, Heidi E. Jones, Carla Gianni Luppi, Sheri A. Lippman, Adriana A. Pinho and Juan Diaz


Abstract / Summary
CONTEXT

Inaccurate reporting of sexual behavior creates a misleading picture of individuals' risk of acquiring STIs. Despite a substantial body of U.S. research on the consistency of self-reports of sensitive behavior, only a few such studies have been conducted in developing countries.

METHODS

Consistency in the reporting of sexual activity and other sensitive behaviors was assessed among 818 women aged 18–40 who enrolled in 2004 in a study examining STI screening and diagnosis in São Paulo, Brazil. Participants were randomized into face-to-face interview and audio computer-assisted self-interview (audio-CASI) groups, and a six-week follow-up interview was conducted using audio-CASI for all participants. Differences between groups were assessed using t tests, and logistic regression analyses were used to estimate the likelihood of inconsistency within the enrollment interview and between the enrollment and follow-up interviews.

RESULTS

Consistency in reporting at the enrollment interview was higher in the face-to-face group than in the audio-CASI group, likely because interviewers prompted women to reconcile discrepant responses, whereas the audio-CASI program did not enforce logical consistency. However, consistency between enrollment and follow-up was significantly lower in the face-to-face group for abortion, marijuana use, transactional sex, coerced sex and number of lifetime sexual partners, because of increased reporting at follow-up using audio-CASI.

CONCLUSION

Although the analysis of internal consistency at enrollment suggests that computerized interviewing may increase random measurement error, it appears to reduce social desirability bias and encourage higher reporting of sensitive behaviors.

International Family Planning Perspectives, 2008, 34(4):169–176

Collecting accurate information on sexual behavior is vital for monitoring HIV and other STI risk and for evaluating interventions to reduce disease transmission. The use of audio computer-assisted self-interviewing (audio-CASI) provides more privacy than interviewer-administered questionnaires, and therefore may offer a means of improving measurement of sensitive or stigmatized behaviors. Numerous U.S. studies have found that significantly higher levels of sensitive and, in some cases, illegal behaviors (e.g., obtaining an abortion, engaging in same-gender sex, injection drug use or violent behavior) were reported using audio-CASI than in face-to-face interviews or paper-and-pencil self-administered questionnaires.1–5

Studies using audio-CASI have been conducted in a number of developing countries, including Kenya,6,7 Malawi,8 Zimbabwe,9 Thailand,10,11 India,12 Vietnam,13 Brazil14,15 and Mexico.16 While computerized interviewing generally yielded higher reporting of risky behaviors than did standard face-to-face interviews in these studies, the findings were not always as compelling as those from studies conducted in the United States. In the developing world, effective use of computers for reporting sensitive behavior appears to depend in part on the types of questions asked, the setting and the study population.

In addition to the many studies that have examined the effect of interview mode on reporting of sensitive behavior, a substantial body of research has assessed the consistency of self-reports in U.S. surveys, both within a single interview and, more commonly, across interviews in longitudinal surveys. These analyses revealed that discrepancies in reporting were not random. Rates of inconsistency varied by respondent characteristics, such as gender, race, education and cognitive ability, as well as by the sensitivity of, or stigma associated with, the activity being reported;17–20 that is, more stigmatized behaviors were associated with higher rates of inconsistency.19,21–23 Few studies have been conducted on the consistency of self-reports in developing countries,23,24 and we are aware of only one that has explored the effect of interview mode on consistency of reporting.6

We conducted a randomized study of audio-CASI versus face-to-face interview reporting of reproductive behavior, sexual behavior, contraceptive use, prior STI infection, and alcohol and drug use among women in São Paulo, Brazil. In a previous analysis, we found that audio-CASI produced higher reporting of risky sexual behavior than did face-to-face interviews at the enrollment visit. Moreover, stronger associations between risky behavior and STI infection were observed in data collected via the audio-CASI mode, with STI-positive women being more likely to underreport risky behavior in the face-to-face interview.25,26 The present study compares the consistency of reporting of sensitive behavior within the enrollment interview and the six-week follow-up interview, as well as consistency between the enrollment and follow-up interviews, by interview mode group.

METHODS

The current analysis—an ancillary experiment to a randomized study that evaluated home- versus clinic-based screening and rapid diagnosis for STIs among women in a low-income neighborhood of São Paulo, Brazil27—assessed whether the use of computerized interviewing yielded more accurate reporting of sexual and other sensitive behaviors.

Procedures

From April to November 2004, 818 women were enrolled in the study at the Centro de Saúde Escola Dr. Alexandre Vranjac, Barra Funda, a health center run by the Santa Casa Faculty of Medical Sciences. Women participating in the clinic's family planning, cervical cancer screening, mother's group, pediatric care and general services were invited to attend study recruitment sessions, as were women from the clinic catchment area who were approached at community businesses and samba schools (clubs that rehearse and produce presentations for Carnaval parades).

The sampling strategy was dictated in part by the main objective of the parent study, which was to investigate whether women were more likely to be screened for STIs when they were given a home kit versus a clinic appointment. We wanted to ensure that a substantial proportion of participants were not habitual clinic attendees, as those who visit the clinic regularly are likely to adhere to clinic appointments, whereas the goal of home sampling was to increase the number of women screened for STIs by reaching those who did not normally come to the clinic. As a result of our sampling strategy, approximately one-third of all participants had not been previously enrolled at the health center. Recruitment sessions at the clinic included discussion of STI diagnosis and prevention, as well as a discussion of study procedures. To be eligible for the study, women had to be aged 18–40, self-identify as literate (to ensure they would be able to follow instructions for self-collection and testing procedures for the main study),* and not require immediate care for a gynecologic problem.

At enrollment, women were randomly assigned to either a face-to-face interview or audio-CASI. Stratification and block randomization methods ensured that equal numbers of respondents using each interview mode were included in the experimental (home) and control (clinic) groups. After providing informed consent, participants assigned to the face-to-face mode were interviewed by trained research staff in a private room of the clinic. Respondents randomized to audio-CASI were assigned to use one of three computers that were isolated from each other and the main clinic room by protective screens. Both the audio-CASI and face-to-face interviews were conducted in Portuguese.
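To make the allocation concrete, the sketch below shows one standard way to implement block randomization within a stratum in Python. It is an illustration under our own assumptions (the block size, function name and participant IDs are invented), not the study's actual allocation procedure.

```python
import random

def block_randomize(participants, block_size=4, seed=42):
    """Assign interview modes within fixed-size blocks of one stratum.

    Within each block, half the women are assigned to face-to-face
    interviews and half to audio-CASI, so the two interview modes
    remain balanced within the stratum (e.g., the home or clinic arm).
    """
    rng = random.Random(seed)
    assignments = {}
    for start in range(0, len(participants), block_size):
        block = participants[start:start + block_size]
        modes = ["face-to-face", "audio-CASI"] * ((len(block) + 1) // 2)
        modes = modes[:len(block)]
        rng.shuffle(modes)
        for pid, mode in zip(block, modes):
            assignments[pid] = mode
    return assignments

# Example: randomize eight women from a hypothetical "home" stratum.
print(block_randomize([f"W{i:03d}" for i in range(1, 9)]))
```

Running the same procedure separately for the home and clinic strata yields equal numbers of each interview mode within both arms, as described above.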

Audio-CASI respondents received instructions on how to enter responses using a mini-keypad connected to a laptop computer. Although some keys were color-coded to simplify tasks (e.g., moving to the next question, replaying the audio and repeating the previous question), respondents were also required to enter numeric responses to answer questions (e.g., 1 for yes, 2 for no). Respondents listened to instructions and questions through headphones while reading the corresponding text on the computer screen. The audio-CASI program did not enforce logical consistency in the respondent's answers either within or between interviews; that is, there were no data checks (e.g., women who reported never having had sex could report having had at least one sex partner). The interviewing software, developed by the Population Council using Microsoft Visual Basic 6.0 and Access 97, was pretested among 13 women; as a result of the pretest, the speed at which the text was read aloud was increased.
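To illustrate this design decision, here is a minimal sketch of an audio-CASI item loop that stores raw keypad entries without any cross-item validation. It is our own toy example in Python, not the study's Visual Basic 6.0 program, and the two items shown are hypothetical.

```python
# Hypothetical two-item questionnaire; the study's instrument was far longer.
QUESTIONS = [
    ("ever_had_sex", "Have you ever had sex? (1 = yes, 2 = no)"),
    ("n_partners", "How many sexual partners have you had in your lifetime?"),
]

def run_interview():
    """Collect responses exactly as entered, with no consistency checks."""
    responses = {}
    for key, text in QUESTIONS:
        # In the study, each question was played over headphones while the
        # same text appeared on screen; here we simply print the text.
        answer = input(text + " ")
        # The raw entry is stored as given: a respondent who answers "2"
        # (no) to ever_had_sex can still enter n_partners > 0, because no
        # logical consistency is enforced within or between interviews.
        responses[key] = answer
    return responses

if __name__ == "__main__":
    print(run_interview())
```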

The enrollment interview asked basic questions about participants' background (age, education, family income, employment, marital status, skin color, home ownership, and housing and daily living attributes), reproductive behavior (number of births and pregnancies, any abortions), sexual behavior (number of partners, oral or anal sex, transactional sex), condom use, STI infections, experience of intimate partner violence (physical abuse or coerced sex), and alcohol and drug use. It also included questions about sexual activity and condom use with participants' last three sexual partners. EPI Info 6.0 was used to double-enter data from the face-to-face interviews, and all study data were then analyzed using Stata 8.0.

The six-week follow-up interview, which was conducted using audio-CASI for all women, included many of the same questions on sexual behavior, partners, condom use, births, pregnancies and induced abortion. Women received up to three reminders (by phone or letter) to encourage them to return for the six-week visit. This interval gave women sufficient time to either visit the clinic (clinic group) or return their home testing kits (home group), as well as time for the laboratory to process the specimens.

Analyses

To assess women's consistency in reporting within each of the two interviews, we compared answers to logically related questions about sexual behavior and contraceptive use asked at different points during the same interview. Consistency between the enrollment interview and the six-week interview was assessed using a subset of questions asked at both interviews. For the latter comparison, the analyses differed by whether the behaviors examined could have changed over the six-week interval. Thus, for women whose pregnancy status did not change over this period, we examined consistency in their reports of the number of children ever born, the number of pregnancies and whether they had ever had an induced abortion. Because we did not ask about risky behaviors in the interval between interviews, for those behaviors that might have changed (e.g., drug and alcohol use in the last six months), we looked at the percentage of women who altered their responses in the direction of increased risk behavior. Because of the stratification and randomization methods used, we expected reported changes in behavior over time to be equal across the two interview mode groups, assuming no reporting biases.
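The sketch below shows how such consistency flags might be computed. The variable names and toy data are ours, purely for illustration; the study's actual variables and coding differ.

```python
import pandas as pd

# Toy data with hypothetical variable names (1 = yes, 0 = no).
df = pd.DataFrame({
    "ever_had_sex":        [1, 1, 0, 1],
    "n_partners_lifetime": [3, 0, 2, 1],
    "abortion_enroll":     [0, 0, 0, 1],  # enrollment response
    "abortion_followup":   [1, 0, 0, 1],  # six-week response
})

# Within-interview inconsistency: e.g., reporting never having had sex
# while also reporting at least one lifetime partner.
df["inconsistent_within"] = (
    (df["ever_had_sex"] == 0) & (df["n_partners_lifetime"] > 0)
)

# Between-interview inconsistency for a behavior that cannot be undone:
# changing an ever-had-an-abortion report in either direction.
df["inconsistent_between"] = df["abortion_enroll"] != df["abortion_followup"]

# Direction of change: "no" at enrollment but "yes" at follow-up.
df["no_to_yes"] = (df["abortion_enroll"] == 0) & (df["abortion_followup"] == 1)

# Proportion of women flagged on each measure.
print(df[["inconsistent_within", "inconsistent_between", "no_to_yes"]].mean())
```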

Differences in demographic characteristics and reporting by interview mode were assessed using t tests for proportions or means for two independently drawn samples; tests were two-tailed for demographic characteristics because the assignment to group was random, and one-tailed for reporting because we assumed that audio-CASI would generate greater reporting of sensitive behavior. Logistic regression was used to calculate odds ratios for inconsistent outcomes in three analyses: inconsistency in reporting within the enrollment and six-week interviews, inconsistency between the two interviews and changes in behavior reported across the interviews. In addition to interview mode, we included the background characteristics measured at enrollment to adjust for differences between interview groups, because prior research in the United States has suggested that socioeconomic status and minority group membership frequently affect the reliability of reporting on sensitive behaviors.17–20
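As a rough illustration of these two steps, the following Python sketch runs a two-sample test of proportions (a close analogue of the paper's t test for proportions) and a logistic regression on simulated data. The counts, covariate and odds are invented; the study itself used Stata 8.0, not the statsmodels library shown here.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.proportion import proportions_ztest

# Invented counts of women giving an inconsistent response per group.
inconsistent = np.array([30, 12])  # audio-CASI, face-to-face
n = np.array([409, 409])           # group sizes

# Two-sample test of proportions, one-tailed where a direction is
# hypothesized (here: more inconsistency under audio-CASI).
z, p = proportions_ztest(inconsistent, n, alternative="larger")
print(f"z = {z:.2f}, p = {p:.4f}")

# Logistic regression of inconsistency on interview mode plus an
# enrollment covariate (a single stand-in covariate, age).
rng = np.random.default_rng(0)
mode = np.repeat([1, 0], n)                        # 1 = audio-CASI
y = np.concatenate([rng.binomial(1, 0.07, n[0]),   # simulated outcomes
                    rng.binomial(1, 0.03, n[1])])
age = rng.uniform(18, 40, n.sum())
X = sm.add_constant(np.column_stack([mode, age]))
result = sm.Logit(y, X).fit(disp=0)
print(np.exp(result.params))  # odds ratios: intercept, mode, age
```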

RESULTS

Sample Characteristics

Women in the two interview groups were similar in mean age (27–28 years), family income, marital status, rate of home ownership, and level of computer and ATM use, and also had an average of nine years of schooling (Table 1), equivalent to completing one year of high school (about 10% had 0–4 years of school and another 10% had more than 11 years—not shown). About 40% of participants self-identified as white, and the same proportion said they were of mixed race. Despite randomization to interview mode, the two groups differed significantly on several characteristics (although not enough to affect our conclusions): Women in the face-to-face interview group were more likely than those in the audio-CASI group to work for cash (73% vs. 51%) and to have internal plumbing in their homes (90% vs. 84%), but less likely to live in houses made of brick or cement (11% vs. 18%). Twenty women in each of the interview groups dropped out of the study before the six-week interview, leaving 778 women for the analysis of reporting consistency at enrollment and follow-up. Women who dropped out were not significantly different from those who completed both interviews.

Reporting of sensitive behaviors at enrollment was generally higher in the group that used audio-CASI than in the group that used face-to-face interviews. For example, one-third of the audio-CASI group reported having had anal sex in the last six months, compared with one-quarter of the face-to-face interview group (Table 2). Furthermore, higher proportions of women in the audio-CASI group reported ever having had an abortion (27% vs. 17%, among those who had ever been pregnant), alcohol use in the last month (64% vs. 56%) and transactional sex (9% vs. 3%). In addition, Table 2 indicates the percentage of missing responses for each measure; while the percentage missing is higher with audio-CASI than with face-to-face interviews, the proportion is still extremely low. (A prior interview mode experiment conducted in Kenya also found that women who were interviewed by computer were more likely to fail to answer all questions; in that study, 15% of audio-CASI respondents refused to answer at least one question, whereas all face-to-face respondents answered all the questions.6)

Consistency Within and Between Interviews

Overall, the proportion of women giving internally inconsistent responses at enrollment or at the six-week follow-up was low for both groups, yet the proportion of inconsistent answers was slightly higher in the audio-CASI than in the face-to-face interview group for all seven conditions at enrollment and all three conditions examined at follow-up; five of the seven differences at enrollment were significant (Table 3).

Among women whose pregnancy status did not change between the two interviews, the levels of inconsistent reporting of their pregnancies and births across the two interviews were not significantly different by study group (Table 4). However, women interviewed face-to-face at enrollment had a higher level of inconsistency across interviews than those interviewed via audio-CASI when asked whether they had ever had an induced abortion (11% vs. 5%), which is legal in Brazil only in the case of rape or if a woman's life is in danger. Moreover, the direction of change among women who provided different responses at the two interviews was as expected: Only 2% of those initially interviewed via audio-CASI changed their responses from "no" to "yes," compared with 10% of those initially interviewed face-to-face.

Because participants were randomly assigned to interview groups, we would expect equal percentages of women in each group to report changes in behavior across the two interviews if there were no association between interview mode and reported behaviors. However, in 15 of the 16 reported behaviors, greater levels of inconsistency were found among women initially interviewed face-to-face, and five of these were significant; this suggests that switching from a less confidential interview method to a more confidential one elicits higher reporting of sensitive or stigmatized behaviors. The greatest difference was in women's reporting ever having had coerced sex: Nine percent of those interviewed face-to-face at enrollment changed their response from "no" to "yes" at the six-week interview, while only 3% of those interviewed via audio-CASI at enrollment did so. In addition, significantly more women in the face-to-face group than in the audio-CASI group changed their responses from "no" to "yes" for ever having had transactional sex (6% vs. 3%), having used marijuana in the last six months (4% vs. 2%) and having had a partner who was drunk in the past month (22% vs. 17%). Women initially interviewed face-to-face were also more likely than those initially interviewed by audio-CASI to report an increased number of lifetime sexual partners at the six-week follow-up (22% vs. 16%).

Multivariate Analysis

Logistic regression analyses were conducted for six of the 10 outcomes in Table 3 and for all 21 outcomes in Table 4 (not shown). The five outcomes presented in Table 5 were selected because they are examples of the different forms of reporting inconsistency, and because interview mode was a strong and significant predictor of inconsistent reporting when social and demographic covariates were included in these models.

At the enrollment interview, women in the audio-CASI group were more than six times as likely as those in the face-to-face interview group to say that they had had no sexual partners in the last six months but that they had had sex in that period (odds ratio, 6.4); while consistent with the Table 3 finding, this result was only marginally significant. Findings on the other outcome variables were consistent with results given in Table 4, and indicated that women who used audio-CASI at both interviews were less likely to be inconsistent or to report changes in behavior than those who were initially interviewed face-to-face. For example, women in the audio-CASI group had lower odds than women in the face-to-face group of giving inconsistent responses about abortion (0.2–0.4), reporting transactional sex after initially denying such behavior at enrollment (0.3) and reporting a greater number of lifetime sex partners (0.6), presumably because they had reported truthfully when first interviewed using audio-CASI.

While none of the social or demographic covariates were significant in all five analyses, some patterns emerged in both these and the other 22 models that were estimated. For example, between the enrollment and follow-up interviews, mixed-race or indigenous women were more likely than white women to change their responses and report that they had engaged in transactional sex (odds ratios, 2.8 and 10.5, respectively), while women who lived in households with more durable assets were less inclined to change their responses on such behavior (0.7). Women's age was also important: Older women were more likely to provide inconsistent responses regarding three sensitive behaviors (1.1–1.2), independent of interview mode.

DISCUSSION

Our results indicate that the type of interview—face-to-face or audio-CASI—affected the consistency of reporting both at enrollment and between enrollment and the six-week follow-up. Although overall consistency was quite high in both groups, consistency at enrollment was higher within face-to-face interviews than within audio-CASI interviews, which we suspect resulted from interviewers reconciling discrepant responses even though they had been instructed not to. Furthermore, because participants may sometimes have faulty recall or misunderstand questions, in the absence of consistency checks within the computer program, audio-CASI is likely to generate more inconsistent data than face-to-face interviews. A study of response inconsistency within a single interview among unmarried Kenyan adolescent girls revealed that while audio-CASI produced higher reporting of the most stigmatized behaviors, reporting of sexual activity was more inconsistent in computerized interviews than in face-to-face interviews.6 To the extent that we could explore consistency at the six-week interview, it did not vary significantly between the two groups, which is not surprising since all women were interviewed using audio-CASI at the follow-up.

While the overall level of missing responses was quite low at the enrollment interview, it was higher in the audio-CASI group than in the face-to-face group. Although missing data are clearly not desirable from an analytic point of view, the lower levels in the face-to-face interviews suggest that respondents may have felt pressure to answer questions, despite informed consent processes that emphasized the voluntary nature of participation. An alternative interpretation is that respondents may have been more inclined to skip difficult questions in the absence of an interviewer who would have encouraged them to answer.

Even in this sample of women, who provided fairly reliable responses to questions about sensitive behaviors, our results suggest that the more confidential the interview process, the more inclined women were to disclose illegal, embarrassing or stigmatized behaviors. For example, consistency in reporting on abortions between enrollment and the follow-up was higher in the audio-CASI group. Similarly, increases in reporting of sexual behavior and drug use between the two interviews were greater in the face-to-face interview group, presumably because of greater willingness to report sensitive behaviors when using audio-CASI at the follow-up.

Several studies have investigated the extent to which reporting inconsistency is affected by familiarity with the interviewer and by mode of interview. In longitudinal studies of drug use, repeated contact with the same interviewer decreased the reporting of sensitive behaviors and led respondents to retract previously reported behaviors in subsequent interviews, likely because of an unwillingness to disclose socially undesirable activities to someone the respondent had gotten to know.19,28 This finding, as well as our current findings, is consistent with the conventional notion in survey research that the more anonymous the interview process, the more likely the respondent is to divulge stigmatized or embarrassing behaviors. However, a consensus on this issue does not exist in the public health literature. A small study of black and Latina teenagers attributed the high reliability of sexual reports among adolescents to the interviewers' recruitment of participants and development of close rapport.29 Moreover, an analysis of response consistency in a longitudinal household survey in rural Kenya revealed that answers were more consistent when the interviewer was familiar, a finding that challenges the view that respondents in all settings will be more honest with interviewers who are unknown to them.30

Despite our sample being fairly homogeneous, we observed that the level of reporting inconsistency between enrollment and the six-week follow-up was inversely related to respondents' number of household assets, although this was significant only for transactional sex. These results are consistent with research on the reporting of sensitive behavior in the United States, which suggests that individuals of lower socioeconomic or minority status are more suspicious of the interview process.17,19,20,28 If such respondents feel less apprehensive with audio-CASI, then replacing face-to-face interviews with this method may improve our ability to monitor the health of those who are most in need.

This study has several limitations. First, one of the eligibility criteria for the parent study was that participants be literate; although the audio-CASI program did not require that women be able to read, they did have to be able to recognize numbers. Second, although women were randomized at enrollment, it is possible that some of the differences in reporting between the interview groups may reflect true differences in behavior rather than differences influenced by interview mode. Because the two groups were similar in makeup, however, it is difficult to imagine that all the differences observed could be attributed to actual behavioral differences. Additionally, although higher rates of reporting of sensitive behaviors suggest more honest reporting, it is possible that some respondents overreported certain behaviors. Yet a previous analysis of these data showed that audio-CASI reports were better predictors of STI prevalence than face-to-face reports, suggesting that increased reporting of sensitive behaviors is in fact more accurate reporting.26 Finally, because we did not ask about changes in behavior in the interval between interviews, we cannot distinguish between actual changes in behaviors and inconsistency in reporting. However, as noted above, because participants were randomly assigned to the interview groups, we would expect the degree of change in behavior to be similar in the two groups.

This research contributes to a growing literature on the effect of interview mode on the reporting of sensitive behaviors in developing countries. Although our analysis of internal consistency at enrollment suggests that, in the absence of data checks, computerized interviewing might increase random measurement error, particularly in populations unfamiliar with the technology, the results from this study in Brazil—as well as those from other studies in the developing world—suggest that it reduces social desirability bias and improves the validity of the data.

Footnotes

*Of the 1,038 women screened for the main study, only 26 said they could not read or write.

†For four outcomes at enrollment in Table 3, no respondents in the face-to-face interview group provided inconsistent reports.

‡For the 25 outcomes for which we expected the interview mode to influence the reporting of inconsistency (we had no expectations regarding the numbers of births or pregnancies), the sign for the interview mode variable was in the expected direction for 21 outcomes, even if not significant.

References

1. Fu H et al., Measuring the extent of abortion underreporting in the 1995 National Survey of Family Growth, Family Planning Perspectives, 1998, 30(3):128–138.

2. Hewitt M, Attitudes toward interview mode and comparability of reporting sexual behavior by personal interview and audio computer-assisted self-interviewing: analyses of the 1995 National Survey of Family Growth, Sociological Methods and Research, 2002, 31(1):3–26.

3. Tourangeau R and Smith TW, Asking sensitive questions: the impact of data collection mode, question format, and question context, Public Opinion Quarterly, 1996, 60(2):275–304.

4. Turner CF et al., Adolescent sexual behavior, drug use and violence: increased reporting with computer survey technology, Science, 1998, 280(5365):867–873.

5. Turner CF, Miller HG and Rogers SM, Survey measurement of sexual behavior: problems and progress, in: Bancroft J, ed., Researching Sexual Behavior: Methodological Issues, Bloomington, IN, USA: Indiana University Press, 1997, pp. 37–60.

6. Hewett PC, Mensch BS and Erulkar AS, Consistency in the reporting of sexual behavior by adolescent girls in Kenya: a comparison of interviewing methods, Sexually Transmitted Infections, 2004, 80(Suppl. 2):ii43–ii48.

7. Mensch BS, Hewett PC and Erulkar AS, The reporting of sensitive behavior by adolescents: a methodological experiment in Kenya, Demography, 2003, 40(2):247–268.

8. Mensch BS et al., Sexual behavior and STI/HIV status among adolescents in rural Malawi: an evaluation of the effect of interview mode on reporting, Studies in Family Planning, 2008, 39(4):321–334.

9. Minnis AM et al., Audio computer-assisted self-interviewing in reproductive health research: reliability assessment among women in Harare, Zimbabwe, Contraception, 2007, 75(1):59–65.

10. Rumakom P et al., Obtaining accurate responses to sensitive questions among Thai students: a comparison of two data collection techniques, in: Jejeebhoy S, Shah I and Thapa S, eds., Sex Without Consent: Young People in Developing Countries, London: Zed Books, 2005, pp. 318–322.

11. Paz-Bailey G et al., Risk factors for sexually transmitted diseases in northern Thai adolescents: an audio-computer-assisted self-interview with noninvasive specimen collection, Sexually Transmitted Diseases, 2003, 30(4):320–326.

12. Potdar R and Koenig MA, Does audio-CASI improve reports of risky behavior? evidence from a randomized field trial among young urban men in India, Studies in Family Planning, 2005, 36(2):107–116.

13. Le LC et al., A pilot of audio computer-assisted self-interview for youth reproductive health research in Vietnam, Journal of Adolescent Health, 2006, 38(6):740–747.

14. Simões AA et al., A randomized trial of audio computer and in-person interview to assess HIV risk among drug and alcohol users in Rio de Janeiro, Brazil, Journal of Substance Abuse Treatment, 2006, 30(3):237–243.

15. Simões AA et al., Acceptability of audio computer-assisted self-interview (ACASI) among substance abusers seeking treatment in Rio de Janeiro, Brazil, Drug and Alcohol Dependence, 2006, 82(Suppl. 1):S103–S107.

16. Lara D et al., Measuring the prevalence of induced abortion in Mexico City: comparison of four methodologies, paper presented at the 24th Population Conference of the IUSSP, Salvador, Brazil, Aug. 18–24, 2001.

17. Upchurch DM et al., Inconsistencies in reporting the occurrence and timing of first intercourse among adolescents, Journal of Sex Research, 2002, 39(3):197–206.

18. Lauritsen JL and Swicegood CG, The consistency of self-reported initiation of sexual activity, Family Planning Perspectives, 1997, 29(5):215–221.

19. Fendrich M and Vaughn CM, Diminished lifetime substance use over time: an inquiry into differential underreporting, Public Opinion Quarterly, 1994, 58(1):96–123.

20. Bachman JG and O'Malley PM, When four months equal a year: inconsistencies in student reports of drug use, in: Singer E and Presser S, eds., Survey Research Methods, Chicago, IL, USA, and London: University of Chicago Press, 1989, pp. 173–186.

21. Rodgers JL, Billy JOG and Udry JR, The rescission of behaviors: inconsistent responses in adolescent sexuality data, Social Science Research, 1982, 11(3):280–296.

22. Alexander CS et al., Consistency of adolescents' self-report of sexual behavior in a longitudinal study, Journal of Youth and Adolescence, 1993, 22(5):455–471.

23. Bignami-Van Assche S, Are we measuring what we want to measure? individual consistency in survey response in rural Malawi, Demographic Research, 2003, Special Collection 1, Article 3, <http://www.demographic-research.org/special/1/3/default.htm>, accessed Feb. 28, 2007.

24. Lagarde E, Enel C and Pison G, Reliability of reports of sexual behavior: a study of married couples in rural West Africa, American Journal of Epidemiology, 1995, 141(12):1194–1200.

25. Luppi C et al., Does audio computer-assisted self-interviewing improve reporting on sensitive behaviors? findings from Brazil, paper presented at the 16th Biennial Meeting of the International Society of Sexually Transmitted Diseases Research, Amsterdam, July 11–14, 2005.

26. Hewett PC et al., Using sexually transmitted infection biomarkers to validate reporting of sexual behavior within a randomized, experimental evaluation of interviewing methods, American Journal of Epidemiology, 2008, 168(2):202–211.

27. Lippman SA et al., Home-based self-sampling and self-testing for sexually transmitted infections: acceptable and feasible alternatives to provider-based screening in low-income women in São Paulo, Brazil, Sexually Transmitted Diseases, 2007, 34(7):421–428.

28. Mensch BS and Kandel DB, Underreporting of substance use in a national longitudinal youth cohort: individual and interviewer effects, Public Opinion Quarterly, 1988, 52(1):100–124.

29. Hearn KD, O'Sullivan LF and Dudley CD, Assessing reliability of early adolescent girls' reports of romantic and sexual behavior, Archives of Sexual Behavior, 2003, 32(6):513–521.

30. Weinreb AA, The limitations of stranger-interviewers in rural Kenya, American Sociological Review, 2006, 71(6):1014–1039.

Authors' Affiliations

Barbara S. Mensch is senior associate, and Paul C. Hewett is associate, both at the Population Council, New York. Heidi E. Jones is research associate, Department of Obstetrics and Gynecology, Columbia University Medical Center, New York. Carla Gianni Luppi collaborates with the Centro de Estudos Augusto Leopoldo Ayrosa Galvão, Department of Social Medicine, Santa Casa Medical School, São Paulo, Brazil. Sheri A. Lippman is a doctoral candidate, Division of Epidemiology, University of California, Berkeley, CA, USA. At the time this study was conducted, Adriana A. Pinho was a fellowship recipient at the Centro de Estudos Augusto Leopoldo Ayrosa Galvão. Juan Diaz is consulting senior associate, Population Council, stationed in Campinas, Brazil.

Acknowledgments

The authors thank Maria Amelia Veras, Manoel Ribeiro, Rute de Oliveira, Cristhiane Herold de Jesus, Diana Careaga and Janneke van de Wijgert for collaboration on study design and implementation, Lidia Rodrigues de Oliveira Silva e Paulo for assistance with audio-CASI during the fieldwork and Barbara Miller for excellent administrative and editorial support. Implementation of this project was funded by the Office of Population and Reproductive Health, Bureau for Global Health, U.S. Agency for International Development, under award HRN-A-00-99-00010. The first two authors acknowledge support for the analysis and write-up from the National Institute of Child Health and Human Development (R01-HD047764) and the William and Flora Hewlett Foundation. The opinions expressed herein are those of the authors and not necessarily those of the funders.

Disclaimer

The views expressed in this publication do not necessarily reflect those of the Guttmacher Institute.