Measuring Family Planning Service Quality Through Client Satisfaction Exit Interviews

Timothy Williams, Médecins Sans Frontières; Jessie Schutt-Aine; Yvette Cuca


Abstract / Summary

Because of the widely recognized importance of quality of care in the provision of family planning and sexual and reproductive health services, there is a great need to develop simple means of evaluating quality of care. Of particular interest are approaches that take into account clients' satisfaction with their care.


A model client exit interview developed by the International Planned Parenthood Federation, Western Hemisphere Region, was used to measure levels of client dissatisfaction with various components of quality. From 1993 through 1996, 89 surveys of more than 15,000 clients were conducted in eight Latin American and Caribbean countries.


The areas of quality that most often received more than 5% negative response from clients (termed negative response cases) were waiting time (mentioned in 70% of surveys, with a mean dissatisfaction level of 20%), ease of reaching the clinic (in 54%, with an average dissatisfaction level of 12%) and price of services (47% and 10%, respectively). Using the survey results, participating family planning associations made changes to improve quality in these areas, ranging from improving appointment systems to relocating clinics to implementing sliding fee scales. Results from 16 subsequent follow-up surveys showed a decline in each country in the number of negative response cases, as well as in the mean level of dissatisfaction. For example, in Brazil, the mean number of negative response cases per survey declined from 2.7 to 2.2, and the mean level of dissatisfaction among them fell from 19% to 11%.


Well-known problems of measuring client satisfaction may be addressed by focusing on a low threshold of dissatisfaction as a way to uncover shortcomings in service quality. Although declines in dissatisfaction cannot be attributed entirely to the changes made as a result of the use of the questionnaires, client surveys can provide a quick and inexpensive way of determining areas of service where quality could be improved. These kinds of improvements will be necessary if service providers hope to become more sustainable and if they are to help clients meet their reproductive health needs.

Improved quality of care is an increasingly important goal of international family planning programs, for a variety of compelling reasons. From a human welfare perspective, all clients, no matter how poor, deserve courteous treatment, correct information, safe medical conditions and reliable products. It also has been argued that providing such quality services will lead to increased service utilization by more committed users, eventually resulting in higher contraceptive prevalence and lower fertility.1

Finally, there is growing recognition that quality makes sense from an economic perspective. If improved quality leads to increased demand for services, then it should have a positive net effect on service providers' income. Although some quality improvements are costly and therefore may not seem feasible in a period of declining donor resources, many others (such as more courteous attention) can be implemented at little or no cost. Conversely, failing to address quality may be more costly than most service improvements would be, as has been argued in the literature related to total quality management and customer satisfaction in private-sector companies.2 This is likely true for nonprofit family planning settings as well, at least where clients are asked to pay a portion of service costs.3

The increased interest in service quality has been accompanied by a similar increase in efforts to monitor and evaluate it. Judith Bruce's well-known quality-of-care framework4 provides an excellent starting point for the development of evaluation tools and indicators based on six central elements of quality: choice of contraceptive methods, information given to clients, technical competence, interpersonal relations, mechanisms to encourage continuity and appropriate constellation of services.

Based on this framework, a number of useful methodologies have been developed to evaluate some or all of the six elements, with situation analysis5 among the best known. A new methodology developed by MEASURE-Evaluation and the Monitoring and Evaluation Subcommittee of USAID's Maximizing Access and Quality (MAQ) Initiative, known as the Quick Investigation of Quality (QIQ) approach, also promises to contribute richly to the field of quality monitoring.6 Other approaches, such as COPE7 and Continuous Quality Improvement,8 include data collection as well as quality improvement components.

While all of these methodologies have proven useful in different settings, they may be too complex, time-consuming or expensive for small service providers to carry out on their own. To begin the process of quality evaluation in such settings, a simpler, more practical methodology may be called for. To address this need at selected family planning associations in Latin America and the Caribbean, the International Planned Parenthood Federation (IPPF), Western Hemisphere Region, developed in 1993 a simple exit interview focused on client satisfaction.*

This focus was meant to help family planning associations tailor their services to client needs and to prepare them for a future in which a greater portion of their operating costs would need to be covered by client fees. We viewed client satisfaction as a key outcome of quality of care, as well as a key component of sustainability. Thus, measuring client satisfaction can be a useful way of evaluating certain aspects of quality, and increases in satisfaction may indicate improved quality (from the clients' perspective) and better prospects for sustainability.

Tools that empower organizations to improve quality as well as to measure it can be especially useful to achieving these desirable outcomes. In this article, we discuss the rationale for focusing on client satisfaction, describe the methodology used to evaluate and improve it, and present results of surveys carried out by IPPF Western Hemisphere Region between 1993 and 1996. Finally, we discuss the methodology's potential usefulness and limitations.


Any conceptualization of quality encompasses both objective and subjective components. Objectively, products or services should meet or surpass standards of safety, proper function, cleanliness and general excellence. This is often referred to as quality control, quality assurance or "medical" quality, and it depends mainly on providers' perspectives. Not long ago, most efforts to improve quality focused on these medical issues.

In recent years, however, the subjective side of quality has also been recognized as vital, and clients' opinions—particularly their degree of satisfaction—are seen as essential to understanding it. A similar trend in evaluation has increased efforts to measure the subjective side of quality. Given the importance of client satisfaction both as an outcome and as an indicator, simple methodologies to measure satisfaction can play an important role in broader efforts to evaluate quality of care.

Private-sector companies in developed countries (whether health-related or not) have long recognized that a focus on customer satisfaction makes good business sense. Satisfied clients make repeat purchases, spend more per purchase, produce positive word of mouth and become loyal to a particular brand.9 Conversely, dissatisfied clients may tell twice as many contacts about their negative experiences as satisfied clients tell about theirs, and are far less likely to return to buy the product or service in the future. Further, fewer than 30% of clients who experience quality-related problems complain directly to the provider of the product or service, and only 1-5% of complaints reach the headquarters level.10 Other studies have supported the hypothesis that clients in health settings are reluctant to express dissatisfaction with their service when questioned using exit interviews.11

These findings show not only the importance of client satisfaction, but also how difficult it is to assess accurately. Thus, we decided that a focus on client satisfaction would be a practical way for clinics to assess certain aspects of quality and to use the results to serve client needs more effectively. It would have to be done, however, in a way that avoided traditional measurement difficulties. We hypothesized that such a client focus would lead service providers to improve services, produce higher client satisfaction and eventually enhance institutional sustainability.

We chose exit interviews as the optimal evaluation methodology because they are simpler than other possible choices (such as household interviews and focus groups), are more practical, are less expensive to carry out and allow for the most rapid feedback. In particular, if feedback is provided in a meaningful and timely way, client satisfaction exit interviews can serve not only as a way to monitor certain aspects of quality, but also as a management tool to improve program performance and sustainability.

The main challenge with using exit interviews in this way is to overcome the well-known problem of "courtesy bias." (Clients may be reluctant to express negative opinions of services, especially while they are still at the service site.12) This difficulty has frustrated researchers in the past, as clients often claim to be satisfied even when they are not. We sought to diminish this problem by focusing on areas for improvement, as opposed to absolute levels of satisfaction, and by recognizing the importance of even very small levels of dissatisfaction.

How do quality, access, client satisfaction and sustainability all relate to each other? Access must be included in the discussion because, along with quality, it strongly affects client satisfaction. For example, clinic hours, clinic location, fees and (to an extent) waiting time are probably more related to access than quality, but all certainly influence satisfaction. Access determines whether a client "reaches the door" of the service provider, while quality is normally thought of as the set of conditions that the client confronts once she is "inside the door."13

Yet client satisfaction, and eventually sustainability, depends on both quality and access. These are normally evaluated as service outputs of programs, while client satisfaction and sustainability are evaluated as outcomes. Client satisfaction is key to clients' decisions to use and to continue using services, and is essential to long-term sustainability. Ultimately, client-focused services that meet peoples' needs and provide them with satisfying experiences should help clients achieve their reproductive intentions.

Very simply, our view of the relationship between these concepts is that client satisfaction has the central role in translating access and quality into positive outcomes such as program sustainability and achievement of reproductive intentions. In this model, clients' perceptions of program characteristics (access and quality) determine the extent to which they are satisfied with services. This in turn influences their decision whether to return and whether to recommend the service to other potential users. If the number of new and continuing users increases as a result of favorable perceptions, program sustainability is enhanced. Likewise, satisfied clients who use methods more effectively have a higher likelihood of achieving their reproductive intentions.


The surveys were conducted at clinics operated by family planning associations affiliated with IPPF in eight countries in Latin America and the Caribbean—Sociedade Civil Bem-Estar Familiar no Brasil (BEMFAM) in Brazil; Asociación Chilena de Protección de la Familia (APROFA) in Chile; Asociación Pro-Bienestar de la Familia Colombiana (PROFAMILIA) in Colombia; Fundación Mexicana para la Planeación Familiar (MEXFAM) in Mexico; Centro Paraguayo de Estudios de Población (CEPEP) in Paraguay; Instituto Peruano de Paternidad Responsable (INPPARES) in Peru; Family Planning Association of Trinidad and Tobago (FPATT); and Asociación Uruguaya de Planificación Familiar (AUPF) in Uruguay.

The original, single-page model questionnaire developed to assess client satisfaction during exit interviews contained 24 mostly yes-no questions, and took 3-5 minutes to complete, on average. According to the methodology,14 family planning associations in each country were responsible for selecting the clinics in which to carry out the survey, with a general target of one clinic every three months. From late 1993, when the methodology was first implemented, through December 1996, 89 surveys, including 25 follow-up surveys, were carried out at 64 clinics in eight countries. A total of 15,657 clients were interviewed (Table 1), with an average sample size of 176. PROFAMILIA in Colombia, with its high-volume clinics, interviewed an average of 578 clients per survey. Most other family planning associations averaged slightly more than the suggested sample size of 100.

Clients were interviewed at the end of their visit by trained interviewers who were not members of the clinic's staff. All clinic visitors were interviewed over a one-week period, in order to cover all days of the week and all hours of each day. The desired minimum sample size was 100, though some smaller clinics did not have that many visits during the one-week period. For larger clinics (those with more than 500 visits per week), quota samples by time of day were used in some cases. (For example, 20-25 interviews might be conducted each morning, lunch hour, afternoon and evening.) This approach kept the sample size manageable and limited the need for additional interviewers. So that clients would feel free to speak openly about aspects of the clinic that they felt could be improved, they were interviewed in a private area out of earshot of clinic personnel.
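The quota allocation for high-volume clinics described above can be sketched in a few lines. The period names and interview target here are illustrative assumptions drawn from the example in the text, not actual study parameters.

```python
# Allocate a fixed number of exit interviews across time-of-day periods,
# as in the quota samples used at clinics with more than 500 visits/week.
def quota_allocation(periods, total_interviews=100):
    """Split the interview target evenly across the given periods,
    assigning any remainder to the earliest periods in the list."""
    base, extra = divmod(total_interviews, len(periods))
    return {p: base + (1 if i < extra else 0) for i, p in enumerate(periods)}

# Hypothetical four-period day, matching the 20-25 interviews per period
# mentioned in the text:
print(quota_allocation(["morning", "lunch", "afternoon", "evening"]))
# {'morning': 25, 'lunch': 25, 'afternoon': 25, 'evening': 25}
```

In practice the quotas would be weighted by each period's actual client volume; an even split is shown only to keep the sketch minimal.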

The main difference between this methodology and other ways of assessing client satisfaction was its emphasis on dissatisfaction and the identification of "areas for improvement." We defined an area for improvement as any item in the questionnaire about which at least 5% of respondents expressed dissatisfaction. We called such instances "negative response cases," because the main questions of analysis were worded so that an answer of "no" would always signify dissatisfaction.

The 5% threshold for identifying dissatisfaction was loosely based on observed results of earlier client satisfaction surveys, and was meant to flag a manageable number of areas for improvement with each survey. Though assigned arbitrarily, it appears to have succeeded both in identifying a workable number of problem areas and in drawing attention to client concerns that might have been overlooked if we had used traditional means of assessing client satisfaction.
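The flagging rule behind negative response cases can be expressed directly. The question labels and response counts below are hypothetical, invented only to illustrate the 5% threshold; they are not data from the study.

```python
# Flag "negative response cases": questionnaire items on which at least 5%
# of respondents gave the answer signifying dissatisfaction ("no").
THRESHOLD = 0.05  # the 5% cutoff described in the text

def negative_response_cases(responses):
    """responses maps each question label to a list of 'yes'/'no' answers.
    Returns {question: dissatisfaction rate} for items at or above the cutoff."""
    cases = {}
    for question, answers in responses.items():
        rate = answers.count("no") / len(answers)
        if rate >= THRESHOLD:
            cases[question] = rate
    return cases

# Hypothetical survey of 100 clients (labels and counts are assumptions):
survey = {
    "wait_time_acceptable": ["no"] * 20 + ["yes"] * 80,
    "clinic_easy_to_reach": ["no"] * 12 + ["yes"] * 88,
    "treated_respectfully": ["no"] * 2 + ["yes"] * 98,
}
print(negative_response_cases(survey))
# {'wait_time_acceptable': 0.2, 'clinic_easy_to_reach': 0.12}
```

Here waiting time (20%) and ease of reaching the clinic (12%) are flagged as areas for improvement, while courteous treatment (2%) falls below the 5% threshold and is not reported.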

For every negative response case identified through the survey, we required family planning associations and clinics to propose and implement actions that addressed each area for improvement. In this sense, the focus was heavily on using the results to improve quality, with much less emphasis on actual levels of satisfaction. Except for questions for which dissatisfaction was above the 5% threshold, these actual levels of satisfaction were not even reported by the family planning associations.

Throughout the study period, we treated the original questionnaire as a model that family planning associations could expand or adapt to fit local needs. This flexibility was meant to give family planning associations a greater sense of ownership, and increase the likelihood that the results would be used to bring about improvements. Some family planning associations made substantive changes in the questions and format, but all kept within the broad confines of the questionnaire.

At the central IPPF Western Hemisphere Region level, a running tabulation was maintained of all surveys carried out, the negative response cases identified and the proposed actions for improvement. From this record, we determined which questions had generated the most negative response cases and which had revealed the highest levels of dissatisfaction. We also analyzed the content of the proposed improvements, to assess how appropriate they were to the negative response cases they were meant to address.

Besides studying the potential of client satisfaction surveys to identify areas for improvement and encourage family planning associations and clinics to address them, we also examined the extent to which improvements consequently resulted in greater client satisfaction. Thus, in clinics where additional surveys were carried out after the actions for improvement were implemented, we analyzed the change in satisfaction between the initial survey and the follow-up survey, to investigate the improvements' possible impact on quality. We analyzed changes in dissatisfaction by clinic, by family planning association and by type of negative response case or area for improvement.

The analysis had to be limited to surveys in which identical questions appeared in both the initial and the follow-up surveys. Because the emphasis of this approach was on practical applicability and action rather than research, however, no protocol was established at the outset for carrying out comparable follow-up surveys. Thus, for a number of variables, we could assess the change in satisfaction levels only at sites where identical questions were asked in both surveys.

We began with 25 pairs of initial surveys and follow-up surveys, and listed all of the negative response cases identified in the initial surveys. To be included in the analysis, the question that resulted in a negative response case had to have been asked in an identical manner in both surveys. Excluding follow-up surveys with no comparable negative response cases eliminated six pairs of surveys from the analysis (five from Mexico and one from Peru). In three other cases (two from Mexico and one from Trinidad and Tobago), family planning associations carried out two follow-up surveys at the same clinic. In such cases, we eliminated the first follow-up survey, on the grounds that the main point of interest was the final level of client satisfaction, and compared the initial and final surveys. This left 16 pairs of surveys for analysis.


Levels of Dissatisfaction

Clients generally were highly satisfied with the services they received. For the large majority of questions, more than 95% of respondents said they were pleased with the services they received. Despite the high reported levels of satisfaction, substantial numbers of clients were willing to express dissatisfaction. In the 89 surveys carried out between 1993 and 1996, 281 negative response cases were identified—about 3.2 per survey. As each negative response case represented an area for improvement, clinics were presented with numerous concrete opportunities to address clients' needs better.

Of the 11 core satisfaction areas on the model questionnaire, the issue that was mentioned by far the most frequently as requiring improvement was long waiting times (Table 2). This was identified as a negative response case in 70% of surveys in which that question was asked. (Because local family planning associations could choose what to include in their questionnaire, not all questions were asked in all surveys.) Other common complaints concerned difficulty in reaching the clinic (54%), service fees (47%), clinic hours (24%) and information on other contraceptive methods (22%).

On the other hand, privacy and cleanliness were identified as negative response cases far less frequently—in 10% and 2%, respectively, of surveys. Personal courtesy ("Were you treated in a friendly and respectful way?") was never identified as an area for improvement. (This does not necessarily mean that all clients were satisfied with their treatment; it is simply that personal courtesy was never mentioned as a problem by 5% or more of respondents.)

When an area was identified as a negative response case, the mean level of dissatisfaction was also calculated.§ Waiting time was again the issue with the highest average level of dissatisfaction per negative response case (20%), followed by information on other methods (17%), a clear explanation of method use (16%), an opportunity to ask questions (15%), ease of reaching the clinic (12%) and clinic hours (10%).

Actions Taken

The actions that family planning associations and their clinics took to improve the areas identified were the primary outcome we expected from this effort.** Table 2 shows that an impressive variety of approaches were attempted. Because proposed actions were often duplicated among family planning associations, those shown in Table 2 are only the most common or most innovative.

The many strategies that were implemented for reducing long waiting times generally fall into two categories: those that encourage more clients to come during off-peak hours, and those designed to manage client flow better during peak times. Among the first category, the most common strategy was the use of individual or group appointment systems. Other specific examples included offering promotional discounts to clients who visit during off-peak hours (MEXFAM) and encouraging clients to call in advance to find out how many clients are waiting at a given time (FPATT).

Strategies to accommodate high volumes of clients better at peak times include separating the areas or processes for family planning clients and reproductive health clients to improve client flow. In Peru, INPPARES opened additional consultation spaces at three clinics and hired additional medical staff. In some of its clinics, BEMFAM made group information sessions optional rather than mandatory for continuing clients. CEPEP and PROFAMILIA enforced physicians' schedules more strictly; CEPEP also expanded and renovated the waiting room so as to accommodate clients better.

Almost all actions proposed to improve satisfaction regarding clinic hours involved keeping the clinic open longer than it was at the time of the survey, especially on nights and weekends. Most of BEMFAM's clinics, for example, originally closed for lunch; the association improved staff rotations so that services could be provided throughout the entire day, and managed staff assignments so that more doctors and other personnel were available at the most desired times.

Most family planning associations modified the questionnaire to ask not only whether clinic hours were convenient, but also what times would be most convenient in the future. The associations' efforts to make clinic hours more acceptable to clients represent an important shift toward more client-focused services. Traditionally, hours were set largely to suit the availability and convenience of medical personnel. Finding cost-effective ways of making hours more convenient to clients, while perhaps more related to access than quality, is nevertheless a potentially vital key to increasing their satisfaction and retaining them as ongoing clients in the future.

Many family planning associations also reported more than 5% negative response for the question, "Was it easy to get to the clinic?" Some of these "negative" responses may actually indicate the high quality of the provider, because the clients were willing to travel a long way to use services. Further, addressing this situation is often difficult or out of the providers' hands, as it is determined principally by clinic location and available transportation. Although clinic location is vital to client accessibility and satisfaction, and in theory is subject to change over the long term, such changes require a substantial investment in time and resources, and can be accomplished only after much thought and preparation.

The family planning associations made some successful efforts to address this issue. In Trinidad and Tobago, for example, FPATT strengthened outreach activities to serve those living far from the urban clinic better and to inform those using temporary methods of the locations of community-based distribution posts closer to their homes. An outreach coordinator position was created to expand FPATT's partnership with government health centers. BEMFAM and MEXFAM added larger signs at certain clinics to draw greater attention to them, as well as directional signs on nearby streets. Clinics in Brazil, Colombia and Mexico were actually relocated following initial surveys, although the surveys themselves may have played only a small role in those decisions.

Not surprisingly, high fees represent another main area of dissatisfaction for many clients. Setting fees is a major challenge for family planning associations, given their mission to serve low-income clients and the apparently conflicting mandate to achieve financial self-sufficiency. One of the primary ways in which family planning associations have tried to address this issue is through flexible fee scales; most now have some degree of flexibility in their pricing policy. In addition, almost all have carried out some form of market studies to understand better their clients' willingness and ability to pay. BEMFAM went further, conducting surveys of dropouts and continuing users to determine the extent to which prices were affecting their demand for services.

Regarding information on contraceptive methods, PROFAMILIA, INPPARES and AUPF all carried out refresher training courses for counselors or hired new counselors. APROFA increased the amount of educational material available at clinics. Strategies to address most of the other negative response cases focused on improving counseling as well as adding staff and facilities.

Follow-Up Surveys

There were 16 pairs of surveys in which questions that resulted in negative response cases in the initial survey could be reasonably compared with responses to identical questions in the follow-up (Table 3).†† The total number of clients interviewed was slightly smaller in the follow-ups (2,789) than in the initial surveys (3,335). Therefore, the average sample was somewhat smaller at follow-up, although the differential was greater for Colombia than for the other countries.

The variables for which at least 5% of clients indicated dissatisfaction were the areas that we expected would show increased satisfaction in the follow-up surveys.‡‡ The mean level of dissatisfaction among these negative response cases ranged from 10% in Mexico to 22% in Paraguay (Table 4).

The results of the follow-up surveys indicate that in all five countries, both the mean number of negative response cases and the mean level of dissatisfaction among those cases decreased. This suggests a beneficial impact of the improvements implemented between the two surveys. The percentage decrease in dissatisfaction ranged from 28% in Trinidad and Tobago to 76% in Paraguay.

It is important to note here that the follow-up analysis was confined to questions that were negative response cases in the first survey. Thus, the number of negative response cases per survey (those included in the follow-up column) could at most equal the number in the initial survey. However, it was possible for questions that were not negative response cases in the first survey to have become negative response cases in the second, if the negative response level increased from less than to more than 5%. Indeed, in the majority of follow-up surveys, at least one new negative response case appeared.

Likewise, the mean level of dissatisfaction among negative response cases in the follow-up surveys included here refers to dissatisfaction associated with the negative response cases identified in the initial surveys only. If all negative response cases in the follow-up surveys were included, the mean level of dissatisfaction would be higher or lower, depending on the level associated with the areas that were not negative response cases in the first survey. If the questions that became negative response cases in the second survey were also included in the levels of dissatisfaction for the first survey, those levels would decrease, and the percentage change in dissatisfaction would be weaker than is shown in Table 4. Our rationale for not including those questions in the analysis is that because they were not identified as areas for improvement in the first survey, no action was proposed. Since we are attempting here to determine whether the implemented actions had positive effects on the areas identified for improvement, we chose to confine the analysis to negative response cases from initial surveys only.
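The paired comparison described above, confined to the initial survey's negative response cases, can be sketched as follows. The question labels and dissatisfaction rates are hypothetical illustrations, not the study's data.

```python
# Compare mean dissatisfaction between paired surveys, confined to the
# questions flagged as negative response cases in the initial survey.
def dissatisfaction_change(initial, follow_up, threshold=0.05):
    """initial/follow_up map question -> proportion of dissatisfied clients.
    Returns (initial mean, follow-up mean, percentage decrease), computed
    only over questions at or above the threshold in the initial survey."""
    flagged = [q for q, rate in initial.items() if rate >= threshold]
    mean_initial = sum(initial[q] for q in flagged) / len(flagged)
    mean_follow = sum(follow_up[q] for q in flagged) / len(flagged)
    pct_decrease = 100 * (mean_initial - mean_follow) / mean_initial
    return mean_initial, mean_follow, pct_decrease

# Hypothetical paired results (proportions answering "no"):
initial = {"wait_time": 0.20, "easy_to_reach": 0.12, "privacy": 0.03}
follow_up = {"wait_time": 0.10, "easy_to_reach": 0.06, "privacy": 0.07}
before, after, drop = dissatisfaction_change(initial, follow_up)
print(round(before, 2), round(after, 2), round(drop, 1))
# 0.16 0.08 50.0
```

Note that "privacy" is excluded from the comparison even though it rose above 5% at follow-up, matching the rationale given in the text: no action was proposed for it after the first survey, so it cannot be used to judge the implemented improvements.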

We also broke down the comparison of the initial and follow-up surveys by area for improvement (Table 5). (The "other" category includes questions for which there was only one comparable negative response case among all the surveys.) Strong decreases in dissatisfaction are evident for each of the variables, again suggesting that improvements implemented by the clinics had a positive effect. Especially strong decreases were seen for insufficient time in consultation (64%) and not enough opportunity to ask questions and clarify doubts (60%), suggesting that the improvements associated with those variables (better control over doctors' schedules, more doctors and consultation rooms, and refresher training for counselors) may have been particularly effective.

While appearing impressive, these results are aggregated from 16 sites and do not mean that client satisfaction necessarily improved for all variables at all sites. In Brazil, for example, client dissatisfaction with waiting time decreased at one site after the family planning association decided to keep the clinic open during lunch hours. However, dissatisfaction with clinic hours increased, apparently because BEMFAM simultaneously decided to close the clinic earlier in the afternoon (a decision that was subsequently reversed). Similar results at other sites show that one cannot expect satisfaction to improve following every single intervention.

Further, although aggregate dissatisfaction decreased strongly for all variables, the average level of dissatisfaction remained greater than 5% in four of the seven areas for improvement. Items that stayed above the threshold remained negative response cases, even though satisfaction levels improved, and the family planning association and clinic were still expected to propose further actions to address them. This is meant to be a process of continuous improvement, one that does not stop with the application of one or two surveys. Indeed, many family planning associations found the methodology sufficiently useful that they continued to use it beyond the study period.


Our findings suggest that exit interviews using short, simple questionnaires among a small sample of clients can successfully identify areas of dissatisfaction among clients. Moreover, the results seem to show that efforts to address those concerns can lead to higher satisfaction. This approach offers several important advantages over other methodologies:

Ease and cost of application. The questionnaires and interview guidelines are easy to use and require minimal training. They can usually be conducted by existing staff (if central-level staff are used as interviewers) or by outside interviewers, hired as needed. They are less costly to carry out than many other quality-evaluation tools. Reporting results is also easy and systematic, and managers receive rapid feedback in an easy-to-understand format.

Practicality. In addition to the generic attributes listed above, the approach described here addresses some of the limitations of traditional methods of evaluating client satisfaction. Most important is focusing improvement efforts on areas with a negative response of at least 5%. This gives program managers something tangible to work with in analyzing results, and allows them to use results to bring about positive change. Ultimately, the methodology may be more useful as an impetus for quality improvement than as a strict evaluation device.

Client orientation and empowerment. Client exit interviews are one of the few tools that provide quantifiable data on clients' perceptions. They can also provide information on clients' knowledge about such matters as how to use the method they received. This contrasts with direct observation, another useful tool for assessing quality. Although observation is preferred as a means of evaluating provider skills and what information is conveyed and how, it does not reveal how well the information was received or understood. Further, clients at many service sites may still be so disempowered that they feel they must accept whatever quality they are offered. The mere act of asking for their opinion through a simple 3-5-minute survey, and communicating to them that their opinion makes a difference, is one small way to increase their feelings of empowerment. In terms of broader service delivery issues, a substantial added benefit is that it forces service providers to be more attentive to clients' needs and opinions, and to develop services that best address these. Eventually, making service more client-focused in these ways should also contribute to enhanced sustainability.

While these advantages are substantial, exit interviews are clearly not the best methodology for all situations where quality is being evaluated. Among the limitations of this approach, the following ones are particularly important to bear in mind:

Courtesy bias and validity. The main disadvantage in client satisfaction surveys is a tendency toward overly positive results caused by courtesy bias. In an evaluation, this poses validity problems, since stated satisfaction levels fail to measure true client perceptions. More important, from a practical perspective, artificially positive results can appear to indicate that service quality is so good that no corrective action is needed. Instead of being instruments for change, then, client satisfaction surveys can become a rationale for maintaining the status quo.15

The approach described in this article addresses this potential limitation of client satisfaction surveys by focusing on levels of dissatisfaction and by choosing a low threshold of dissatisfaction to indicate a quality shortcoming. Consistent with total quality management theory, we reasoned that any dissatisfied client represented a potential discontinuing user who would be likely to speak negatively about her experience, leading to fewer future clients through personal recommendations. Because a single negative response probably stands for a larger number who feel the same way without saying so, small numbers of such responses should be carefully heeded. Any question with more than a small level of dissatisfaction should be viewed as an indication that some aspect of services should be improved.
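The flagging rule just described is simple enough to express directly: any question on which more than 5% of respondents express dissatisfaction is marked as a negative response case requiring action. The following sketch assumes hypothetical question names and counts; only the 5% threshold comes from the methodology itself.

```python
# Sketch of the negative-response-case rule: flag any question whose
# dissatisfaction rate exceeds the 5% threshold. Data are hypothetical.

THRESHOLD = 0.05

def negative_response_cases(responses, threshold=THRESHOLD):
    """Return {question: dissatisfaction_rate} for rates above threshold.

    `responses` maps each question to (dissatisfied_count, total_answered).
    """
    cases = {}
    for question, (dissatisfied, total) in responses.items():
        rate = dissatisfied / total
        if rate > threshold:
            cases[question] = rate
    return cases

survey = {
    "waiting_time":   (20, 100),  # 20% dissatisfied -> flagged
    "ease_of_access": (12, 100),  # 12% -> flagged
    "staff_courtesy": (3, 100),   # 3% -> below threshold, not flagged
}
print(negative_response_cases(survey))
```

Note that the threshold is deliberately low: because each stated complaint likely stands for several unstated ones, the rule errs on the side of flagging.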

By using this type of focus, we emphasized that even high-quality care can be improved. Adopting such a client-focused attitude can lead family planning associations and clinics to begin moving toward a culture of continuous quality improvement.

Perhaps more serious than courtesy bias in general is the potential for differential courtesy bias. This would arise if clients found it harder to express dissatisfaction for certain types of questions than for others. In practice, we found that higher levels of dissatisfaction were expressed about matters related to access (waiting time, hours, price and ease of reaching the clinic) than about those concerning interpersonal relations. It seems plausible that the more personal the question, the more reluctant clients would be to complain. Other authors have suggested that specific and detailed questions are more likely to elicit true client responses than more general ones.16 The extent to which such differential bias affects client responses clearly needs to be considered when survey results are interpreted.

Medical quality and technical competence. In general, exit interviews are not an appropriate methodology for assessing service providers' technical competence, or indeed any component of what is considered "medical" quality. Clients are not normally familiar enough with medical techniques to judge the expertise of a service provider, nor should they be expected to. Although the model questionnaire has been modified by PROFAMILIA and INPPARES to include "technical competence" questions, these are limited to testing clients' knowledge of their current method, as a means of evaluating providers' abilities to transmit correct information. Other methodologies, such as direct observation, review of client records, provider interviews or competency tests, are far more appropriate for evaluating this extremely important component of quality.

Limitation to clinic-based services. The methodology reported here is most appropriate for clinical settings. MEXFAM and INPPARES have modified it in order to evaluate quality in community-based distribution, but to do so requires substantial alterations to the questionnaire and additional costs. To apply the methodology to community-based distribution programs, it would be necessary to interview a sample of clients in their homes, as opposed to conducting exit interviews at one central location.

Sample size. We believe that this approach works best in relatively large clinics (those with at least 100 family planning clients per week). If the methodology's guidelines are strictly followed, the cost of carrying out the survey in clinics with smaller caseloads increases substantially, since the interviewer must spend additional time on site in order to accumulate 100 interviews. On the other hand, if a smaller sample is used (as happens commonly in practice), the reliability of the results is diminished. This is a potentially serious drawback if the negative response cases identified are not the main areas clients feel need to be improved.

Family planning associations are now encouraged to take larger samples if possible, so that they can analyze more than simply the number of negative response cases. However, if the additional resources needed to conduct a survey over a longer period would preclude an association from carrying one out at all, a less-reliable survey is preferable to none. In general, the number and efficacy of the practical improvements that the methodology encourages, rather than its reliability or validity, are what most justify its use.

Causes of dissatisfaction. Although the methodology can point a family planning association or clinic toward areas that need improvement, the model questionnaire does not specify the cause of client dissatisfaction. For example, clients may feel they must wait too long for services, but the survey itself does not reveal where the problem is or what needs to be done. In some cases, the cause of the problem may be clear; in others, adding one or more questions to the model questionnaire may provide useful additional information. It is quite likely, however, that family planning associations could benefit by complementing the exit interviews with other methodologies, such as client-flow analysis, focus groups, provider interviews or a second series of more in-depth exit interviews that focus on the specific area of dissatisfaction.

Methodological considerations. The above limitations apply to client satisfaction exit interviews in general. Additionally, a number of characteristics of this specific methodology need to be considered for its successful application. One concerns the flexibility we allowed family planning associations in adapting the model questionnaire to their local needs: They were permitted to change the wording of individual questions, add new ones or change the way questions were asked (for example, using scales instead of yes-no responses). Although this practice had positive aspects overall, it reduced the number of follow-up surveys with comparable negative response cases.

Additionally, the associations were allowed to choose clinics of interest to them in which to conduct the follow-up surveys; thus, the sampling was primarily purposive or convenience-based. This weakens comparability with the overall results, but because most family planning associations in the study conducted follow-up surveys at most or all of their clinics, it is less of a concern. We have no reason to believe that the clinics chosen were of better or worse quality than those that were not. Even so, future applications of the methodology may benefit from using probability samples when selecting clinics to include in the survey.

Family planning associations also had substantial freedom in terms of when to carry out follow-up surveys; as a result, the length of time between surveys or between interventions and follow-up surveys varied widely. This can affect interpretation of the results, as dissatisfaction would presumably decrease rapidly soon after an improvement was made, but would then be subject to change over time as a result of other factors. Such factors, including other actions taken by the family planning association or clinic between surveys, may have been implemented for reasons completely separate from the initial client satisfaction survey results (for example, staffing changes, fee increases or new services, among others).

Another issue relates to causality between the implemented improvements and subsequent changes in satisfaction. For a number of reasons, one cannot assume that the improvements in satisfaction demonstrated here necessarily result from the use of client satisfaction surveys and resulting interventions by clinics.

The main reason for this is the nonequivalence of clients: Clients in an initial survey who were dissatisfied are more likely than satisfied users to discontinue coming to the clinic before the follow-up survey can be carried out. If this is the case, then dissatisfaction levels seen in follow-up surveys would fall from earlier levels, even if no improvements were made. Follow-up clients have been shown to express higher levels of satisfaction than first-time visitors.17 Reported satisfaction may be higher simply because the remaining clients are more willing to accept existing levels of quality. Similarly, high levels of dropout due to poor quality in certain components of services could conceivably lead to higher levels of client satisfaction in other variables, simply due to smaller caseloads. In an extreme example, poor quality in some aspect of service delivery could lead to such a decline in clientele that waiting time is no longer an issue.

This problem can be controlled for in part by looking at trends in the number of visits over time. If volume decreases sharply, for example, while satisfaction levels rise, one might question the reasons for the satisfaction results more than if volume increased over the same period. In our case, a number of trends—the continuous influx of new clients, the fact that improvements have indeed been made and the preponderance of evidence suggesting that satisfaction is changing in the right direction—lend credence to the idea that improvements are having a positive effect. Nevertheless, to avoid some of these possible misinterpretations, it would be useful to analyze trends in client volume further, to investigate satisfaction levels among former clients as well as current users, and to analyze the reasons for their discontinuation.

Another reason why we cannot assume that program improvements were due entirely to the results of the client satisfaction surveys is that the decision to implement such changes is based on many complex factors. Among these, the survey results may have played a relatively minor role. Nevertheless, the results may have contributed to decision-making by providing managers with some objective data on clients' perspectives on service quality. As such, they may have justified worthwhile new actions, even if they were not the main impetus for their implementation.

Overall, we believe that the advantages of this methodology outweigh the limitations. Nevertheless, in order to gain the most from the application of this kind of evaluation of client satisfaction, it is important to understand the main limitations and to control for them to the greatest possible extent. By keeping these points in mind, service providers should be able to use this kind of study to identify clients' areas of concern, and to develop real quality improvements that will address them.


Client satisfaction exit interviews should always be considered to be just one part of an overall quality evaluation effort. They should be used in conjunction with other quality evaluation instruments, such as direct observation, provider surveys, site inventories, reviews of client records or focus groups. Quality is a broad concept that no single approach adequately and fully measures. Alone, any one of these approaches can address only a piece of the total quality picture.

Within well-defined limits, however, our experiences suggest that client exit interviews can play a useful role in measuring many aspects of client satisfaction with family planning and reproductive health services. They can contribute to our understanding of how clients perceive certain subjective aspects of quality of care and access to services that may be difficult to assess with other evaluation methodologies. Indeed, the simple act of asking the client her views, and obligating the service provider to listen to them, is perhaps the most important outcome from the application of this methodology. Exit interviews can also help providers assess clients' knowledge and choice of method. Moreover, in the long term, a focus on client satisfaction should help make services more sustainable and should help clients achieve their reproductive health goals.

Timothy Williams is senior evaluation advisor with John Snow International, Arlington, VA, USA. Jessie Schutt-Ainé is an independent consultant. Yvette Cuca is evaluation officer with the International Planned Parenthood Federation (IPPF), Western Hemisphere Region. At the time the research described here was conducted, Timothy Williams was senior project analyst, Jessie Schutt-Ainé was evaluation officer and Yvette Cuca was consultant for the Transition Project, IPPF, Western Hemisphere Region. The development of the client satisfaction exit interview methodology was carried out as part of the Transition Project, a cooperative agreement between the U.S. Agency for International Development and IPPF, Western Hemisphere Region. The authors thank the family planning associations that used the methodology for providing feedback on how it could be improved. In particular, they thank BEMFAM, PROFAMILIA, MEXFAM and FPATT (the IPPF affiliates in Brazil, Colombia, Mexico and Trinidad and Tobago) for pretesting the original questionnaires. Finally, the authors thank Victoria Ward for her critical review of the manuscript and Inés Escandón for her review and editing of the text and tables.


*The family planning associations were all part of the Transition Project, a cooperative agreement between the U.S. Agency for International Development (USAID) and IPPF/Western Hemisphere Region. The general objective was to help selected family planning associations in Latin America and the Caribbean become more sustainable in the face of the withdrawal of USAID funding. One specific objective was to improve family planning associations' service quality within the context of sustainability.

†Based on feedback from family planning associations over a three-year period, the original model questionnaire was revised. It now has 28 questions, 12 of which are yes-no in format.

‡Initially, the sample size of 100 was chosen arbitrarily, to allow even small clinics to collect and analyze some data in a reasonable time period. The true required sample size for estimating proportions may be smaller or larger, depending on the degree of accuracy desired and the assumptions about the expected proportion of the population demonstrating dissatisfaction. One possible interpretation (see Fisher A et al., Handbook for Family Planning Operations Research, New York: Population Council, 1991) is the following: n = z²pq/d² = (1.96)²(0.05)(0.95)/(0.05)² ≈ 73, where n = the desired sample size, z = the standard normal deviate (set for a 95% confidence level), p = the proportion of the target population demonstrating the characteristic under study (dissatisfaction), q = 1-p and d = the degree of accuracy desired. To analyze subpopulations or perform cross-tabulations, the sample size would need to be increased. For the purposes of this article, since we are not attempting to test statistical significance, we have included all survey results with samples larger than 73.
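As a rough check, the sample-size formula quoted above can be computed directly. This is only a sketch of the standard formula for estimating a proportion; the parameter values are the ones given in the footnote (z = 1.96, p = 0.05, d = 0.05).

```python
import math

# Standard sample-size formula for estimating a proportion:
#   n = z^2 * p * q / d^2, with q = 1 - p.
# z = 1.96 gives a 95% confidence level.

def required_sample_size(p, d, z=1.96):
    """Smallest whole sample size meeting the accuracy target d."""
    q = 1.0 - p
    return math.ceil(z**2 * p * q / d**2)

print(required_sample_size(p=0.05, d=0.05))  # 73, as in the footnote
```

With the more conservative assumption p = 0.5, the same formula yields 385, which illustrates how sensitive the required sample is to the assumed proportion.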

§If fewer than 5% of respondents expressed dissatisfaction, the question was not a negative response case, and therefore was not reported to us. Thus, in most instances, we did not know dissatisfaction levels when they were less than 5%. Therefore, we calculated average levels of dissatisfaction per negative response case, but not overall dissatisfaction per question.

**While there was no systematic external verification of whether improvements were implemented, family planning associations' reports indicate that all actions listed in the table were taken. In some cases, site visits were made, and in each such instance the reports were corroborated.

††The reason that some family planning associations carried out more follow-up surveys than others related mainly to the number of clinics in the association's network and the number of surveys carried out per quarter. In theory, family planning associations were supposed to carry out one survey per quarter in a clinic of their choice. However, many did more than one per quarter, and some did fewer, especially in the beginning. Some were phased out of the project before they could do follow-up surveys. Thus, family planning associations like BEMFAM and MEXFAM, which carried out two or more surveys per quarter and which had relatively few clinics, did most of the follow-up surveys. PROFAMILIA, with more than 40 clinics, would have needed to carry out several surveys per quarter to achieve a reasonable number of follow-ups along with sufficient coverage of their clinic network. They chose to carry out one follow-up survey at a clinic that changed locations, mainly to see whether clients found the new location more convenient.

‡‡It is important to note that because some questions were eliminated from the analysis even if the survey was included, these figures do not include all negative response cases in the initial surveys. In Mexico, for example, more than eight negative response cases were identified through the six surveys, but only the eight negative response cases listed in Table 4 had equivalent questions in the follow-up surveys.


1. Jain AK, Fertility reduction and the quality of family planning services, Studies in Family Planning, 1989, 20(1):1-16; Kumar S, Jain AK and Bruce J, Assessing the Quality of Family Planning Services in Developing Countries, Programs Division Working Papers, New York: Population Council, 1989, No. 2; Bruce J, Fundamental elements of the quality of care: a simple framework, Studies in Family Planning, 1990, 21(2):61-91; Huntington D et al., Assessing the relation between the quality of care and the utilization of family planning services in Côte d'Ivoire, paper presented at the annual meeting of the American Public Health Association, Washington, DC, Nov. 8-12, 1992; and Mensch B, Arends-Kuenning M and Jain A, The impact of the quality of family planning services on contraceptive use in Peru, Studies in Family Planning, 1996, 27(2):59-75.

2. Davidow WH and Uttal B, Total Customer Service, New York: Harper Perennial, 1989; Creech B, The Five Pillars of TQM, New York: Truman Talley Books/Dutton, 1994; and Barsky JD, World-Class Customer Satisfaction, New York: Richard D. Irwin, 1995.

3. Hardee K and Gould BJ, A process for quality improvement in family planning services, International Family Planning Perspectives, 1993, 19(4):147-152; Management Sciences for Health (MSH), Using CQI to strengthen family planning programs, The Family Planning Manager, 1993, II(1):1-20; and Williams T and Townsend M, Sustainability: what it means and how to evaluate it—preliminary results from IPPF WHR's Transition Project, paper presented at the annual meeting of the American Public Health Association, Washington, DC, Oct. 30-Nov. 3, 1994.

4. Bruce J, 1990, op. cit. (see reference 1).

5. Fisher A et al., Guidelines and Instruments for a Family Planning Situation Analysis Study, New York: Population Council, 1992.

6. Measure Evaluation Project and the Monitoring and Evaluation Subcommittee of the Maximizing Access and Quality (MAQ) Initiative, Quick Investigation of Quality (QIQ): A User's Guide for Monitoring Quality of Care, University of North Carolina, Chapel Hill, NC, USA: Carolina Population Center, Feb. 18, 2000.

7. AVSC International, COPE: Client-Oriented Provider-Efficient Services, New York: AVSC International, 1995.

8. MSH, 1993, op. cit. (see reference 3).

9. Barsky JD, 1995, op. cit. (see reference 2).

10. Technical Assistance Research Programs Institute, Consumer complaint handling in America: an update study, executive summary, Washington, DC: Technical Assistance Research Programs Institute, 1986.

11. Avis M, Bond M and Arthur A, Questioning patient satisfaction: an empirical investigation in two outpatient clinics, Social Science and Medicine, 1997, 44(1):85-92; Kenny D, Determinants of patient satisfaction with the medical consultation, Psychology and Health, 1995, 10(5):427-437; and Simmons R and Elias C, The study of client-provider interactions: a review of methodological issues, Studies in Family Planning, 1994, 25(1):1-17.

12. Brown L et al., Quality of care in family planning services in Morocco, Studies in Family Planning, 1995, 26(3):154-168; and Avis M, Bond M and Arthur A, 1997, op. cit. (see reference 11).

13. Bertrand JT et al., Access, quality of care and medical barriers in family planning programs, International Family Planning Perspectives, 1995, 21(2):64-69 & 74.

14. Williams T, Cuca Y and Schutt-Ainé J, Client Satisfaction Surveys for Improved Family Planning Service: A User's Manual, New York: International Planned Parenthood Federation, Western Hemisphere Region, 1998.

15. Hull VJ, Improving Quality of Care in Family Planning: How Far Have We Come? South and East Asia Regional Working Paper, New York: Population Council, 1996, No. 5.

16. Simmons R and Elias C, 1994, op. cit. (see reference 11); and Avis M, Bond M and Arthur A, 1997, op. cit. (see reference 11).

17. Kenny D, 1995, op. cit. (see reference 11).


The views expressed in this publication do not necessarily reflect those of the Guttmacher Institute.