Main

Although surveys of practicing physicians are a valuable source of data to help guide health care policy, they typically have poor response rates (Cummings et al, 2001; Kellerman and Herold, 2001; VanGeest et al, 2007). Specifically, the average response rate of physicians to mailed surveys has traditionally been demonstrated to be only 54% to 58% (Martin, 1974; Asch et al, 1997; Cook et al, 2009) – 14% lower than that of non-physicians – and appears to be getting worse in the era of modern multimedia communications (Cull et al, 2005; Cook et al, 2009). In addition to their potential impact on population-based health care decisions, physician surveys often form the cornerstone of quality improvement efforts, as such efforts cannot take place without a reliable assessment of providers’ current practices and attitudes. If physician non-response leads to survey bias, resulting policy and practice decisions may not accurately represent the views and practices of the target population being sampled.

In oncology, as in other areas of medicine, physician surveys are increasingly being undertaken to assess patterns and quality of cancer care. Recent oncology-related physician surveys published in high-profile journals include an assessment of attitudes of American vs Canadian oncologists as to the cost-effectiveness of new cancer drugs (response rate 59% (Berry et al, 2010)), primary care physicians’ (PCPs’)(Del Giudice et al, 2009) and oncologists’ (Greenfield et al, 2009) views of appropriate follow-up care for cancer survivors (response rate 52% for the former and 36% for the latter), an assessment of surgeons’, medical oncologists’ and radiation oncologists’ involvement in clinical trials (response rate 61% (Klabunde et al, 2011)) and a survey of oncologists’ views regarding communicating the costs of chemotherapy to patients (response rate 31.5% (Schrag and Hanger, 2007)). As can be seen, physician response rates in these studies vary widely; clearly, those with higher response rates have the potential to be much more influential in informing these diverse areas of cancer-related health care policy.

In their comprehensive literature review, Edwards et al (2009) found that successful methods for increasing response rates to postal surveys include monetary incentives, use of shorter questionnaires, follow-up contact, and reply envelopes that contain a stamp rather than metered postage. Responses to electronic surveys were found to be increased by non-monetary incentives, shorter surveys, a lottery with instant notification of results, and exclusion of the word ‘survey’ from the email invitation subject line. Although the analysis by Edwards et al (2009) is informative, <10% of the 513 survey studies reviewed included physician respondents. Indeed, relatively few studies have specifically examined strategies to increase physicians’ responses to surveys (Field et al, 2002), and the last major review of the literature was published in 2007 (VanGeest et al, 2007).

Our goal was to review the current literature on obtaining high physician-survey response rates, with an eye towards improving such efforts in oncology-related research. We then aimed to present our own experience in increasing the response rate of a survey of primary care physicians’ behaviours with respect to referral for suspected haematologic cancers. We specifically sought to present our data to an audience of clinicians and investigators likely to undertake such surveys as part of their clinical quality improvement efforts or cancer-related health services research.

Methods

Structured literature review

English language experimental studies and literature reviews of methods to improve physician response rates were identified through searches of PubMed and PsycINFO databases, focusing on the years 2000 to 2010. We chose this time frame because we felt that prior reviews had sufficiently examined older work, and because we wanted to focus our analysis on more recent studies that would be most likely to assess both postal and electronic approaches.

Keywords used in binary combinations included: ‘physician survey’, ‘response rate’, ‘improved’, ‘questionnaire’, ‘incentives’, ‘Internet’, ‘web’, ‘mail’, and ‘postal’. Eight prior review articles regarding physician surveys (Cummings et al, 2001; Kellerman and Herold, 2001; McColl et al, 2001; Field et al, 2002; Braithwaite et al, 2003; Cull et al, 2005; VanGeest et al, 2007; Cook et al, 2009) were also assessed to identify additional primary papers. A review of relevant abstracts from the primary and secondary searches revealed 38 experimental studies specifically examining factors that affect physician response rates; the full texts of these were obtained for further review (Table 1). Although many surveys of physicians both within and outside of cancer medicine have achieved excellent response rates, we rejected articles that did not specifically compare methods for improving response rates using the same survey and physician sample, because we did not feel we could rigorously compare the methods used across different analyses, given the many disparate topics and samples studied.

Table 1 Studies assessing interventions to improve physician response rates, 2000–2010*

Physician survey case study

From April to August 2010, we surveyed PCPs in Massachusetts regarding their practice patterns with respect to the diagnosis and referral of patients with suspected haematologic malignancy. Our survey was designed to determine the approximate number of patients seen in the last year whom PCPs suspected might have haematologic malignancy, the frequency of formal specialty referral for those patients, and the frequency of informal curbside consultation. PCPs were also queried about the factors that influence their choice of specialist, and about the information exchanged with the specialist.

The names of 6836 Massachusetts physicians were obtained from the American Medical Association; 375 of these were randomly selected for inclusion in the survey. We then searched the Massachusetts Board of Registration in Medicine online directory to verify that the physicians met the study's eligibility criteria: (1) currently in practice in Massachusetts; (2) graduated from medical school in 2005 or earlier; (3) a listed specialty or board certification in internal medicine, general medicine, family medicine or geriatrics; and (4) no non-primary care subspecialty listed. Investigating each name on the initial list took approximately 3.6 min, for a total of 22.5 h spent on cleaning procedures. The final pre-contact eligible sample consisted of 250 physicians. Of these, 60 reported upon contact that they did not engage in primary care and were reclassified as ineligible. The final eligible sample thus included 190 PCPs.
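As a worked check on the cleaning effort: 375 names × 3.6 min per name = 1350 min, or 22.5 h of investigator time.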

Initial recruitment

Each physician received a package delivered to her/his office using FedEx courier services, identifying the study physician-investigator (GAA) as the sender. The package included a letter inviting the physician to participate, a printed survey, an opt-out card, and a pre-paid, self-addressed return envelope. The letter directed participants either to fill out and return the paper survey or to log on to a secure website to complete the survey over the Internet. The opt-out card allowed physicians to report that they either declined to participate or were ineligible because they did not engage in primary care. Two weeks later, reminder postcards were sent to those physicians who had not yet completed the survey. Three weeks after that, a second package containing the same materials and instructions was sent to all physicians who had not responded. Physicians who responded to any of these first three solicitations were termed ‘early responders’.

Telephone recruitment

Seven weeks after the initial package was sent, the study's principal investigator (GAA), a medical oncologist, telephoned each physician who had not yet responded. If a potential physician respondent was not available, the study physician either left a message asking for the call to be returned or, if directed to a voicemail system, left a more detailed message regarding the survey itself. Potential physician respondents who were not reached during the first round of telephone calls were called again approximately 2–3 weeks after the initial call. Physicians who responded after the telephone calls were termed ‘late responders’. Regardless of recruitment method, those who completed the survey received a $100 VISA gift card by mail.

Analysis

After recruitment was complete, we assessed the overall response rate, as well as the response rates before and after the follow-up telephone calls. Next, using χ2 analysis or Fisher's exact test, depending on how many subjects were available in each category, we analysed whether there were differences in self-reported characteristics between early and late responders (gender, age, race, ethnicity, years post residency and practice type), and whether there were differences in characteristics obtained from the Massachusetts Board of Registration in Medicine website (gender, practice type, medical school location and years since graduation) between responders and non-responders.
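For readers wishing to replicate this kind of comparison, the following is a minimal sketch in Python using scipy. The 2 × 2 table and its counts are hypothetical placeholders, not our study data, and the expected-count rule of thumb shown is one common basis for choosing between the two tests; we do not claim it reproduces our exact procedure.

```python
# Sketch of an early- vs late-responder comparison: chi-square test when
# expected cell counts are adequate, Fisher's exact test otherwise.
# All counts below are illustrative placeholders, not the study's data.
from scipy.stats import chi2_contingency, fisher_exact

table = [[40, 20],   # hypothetical counts: female early vs late responders
         [43, 31]]   # hypothetical counts: male early vs late responders

# chi2_contingency also returns the table of expected counts, the usual
# basis for deciding between the two tests (any expected count < 5).
chi2, p, dof, expected = chi2_contingency(table)
if (expected < 5).any():
    odds_ratio, p = fisher_exact(table)
    print(f"Fisher's exact test: P = {p:.3f}")
else:
    print(f"Chi-square test: chi2 = {chi2:.2f}, P = {p:.3f}")
```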

Results

Structured literature review

We found that studies of physician response rates generally have tested the effects of the mode of survey, type of incentive, or other interventions (Table 1). The interventions examined varied greatly, but monetary incentives were generally effective (9/11 positive studies), and paper surveys engendered more responses than surveys delivered in other formats such as email (7/8 positive studies). Interestingly, one study demonstrated that response rates were even better with a mailed survey that had an option to respond by email, a so-called ‘mixed methods’ approach (Seguin et al, 2004).

When using an incentive, the studies suggested that it is better to ‘pre-pay’ by sending the incentive with the survey itself than to ‘post-pay’ after completion (Leung et al, 2002), and that cash is preferable to a gift (such as a pen (Clark et al, 2001b)). In addition, a personalised cover letter stressing the importance of that individual physician's reply was shown to result in a better response rate (Leece et al, 2006). Data on the use of enrolment in a lottery as an incentive were more mixed. One study suggested that enrolment in a lottery in exchange for completing a survey ($500 Canadian) was better than nothing at all (Baron et al, 2001), but another found that even a small incentive given to all ($2 US upfront) was better than the chance of enrolment in a lottery for a bigger prize ($250; Tamayo-Sarver and Baker, 2004).

Interestingly, some factors that one might assume would lead to a better response rate did not always help and could even be detrimental; for example, one study demonstrated that the addition of a letter featuring the endorsement of the survey by an expert led to significantly lower primary response rates (Bhandari et al, 2003). Other factors that may have a positive effect included shorter survey word length (Jepson et al, 2005) and inclusion of a stamped return envelope vs a business reply envelope (Streiff et al, 2001). This final analysis was the only one to present methodological data from a study of haematologists or oncologists (a mailed survey of 3000 members of the American Society of Hematology to assess their approach to the diagnosis and treatment of polycythemia vera; the response rate was 38% with the stamped envelope).

Physician survey case study

In our own survey, follow-up telephone calls from the physician investigator increased physician response rates from 43.7% to 70.5%. In total, these phone calls took approximately 20 h, for an average of 23.5 min of physician effort required to recruit each additional participant. Early and late responders did not differ in age, race, ethnicity, years since residency or practice type (Table 2; all P>0.05). In contrast, female physicians were more likely to be early responders (P<0.01, Table 2).
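As a worked check with the 190 eligible PCPs: the telephone calls yielded roughly 0.705 × 190 − 0.437 × 190 ≈ 134 − 83 = 51 additional respondents, and 20 h × 60 min per h ÷ 51 ≈ 23.5 min of calling per additional respondent.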

Table 2 Percentage (number) of early and late responders as a function of demographic variables in a survey of PCPs as to referral patterns for haematologic malignancya

Comparing responders to non-responders, we found similar proportions trained in foreign medical schools (24% for responders vs 27% for non-responders; χ2=0.17, ns) and a similar distribution between the two groups of family medicine, general practice, and internal medicine practices (χ2=4.45, ns). The proportions of males and females were reversed between responders and non-responders (χ2=6.62, P<0.05), such that 40% of responders were female and 60% were male, whereas non-responders were 61% female and 39% male. Finally, those who had graduated from medical school within the past 10 years were significantly more likely to respond (91%) than those who had graduated 11 or more years before (65% to 67% for those 11 to 20 years, 21 to 30 years, or 31+ years post graduation; χ2=8.02, P<0.05).

Discussion

Our literature review revealed that several factors have the potential to increase response rates to physician surveys, such as the inclusion of monetary incentives and the use of paper vs web or email formats. Several other items – from shorter survey word length to the use of a personalised cover letter – were also demonstrated to help. In addition, our case study suggested that telephone calls made by a physician investigator to potential physician respondents may greatly increase response rates among initial non-responders.

We found little difference between early and late responders with respect to most socio-demographic dimensions in our survey. This finding is reassuring, as it suggests that medical peer follow-up calls may not greatly change the characteristics of those who ultimately respond. On the other hand, although effective, personal calls by physician researchers are costly and are unlikely to be feasible for the large samples sometimes encountered in oncology-related health services research. Unfortunately, whether a follow-up telephone call by a research assistant or other non-peer clinician can capture some of this benefit remains unclear.

Two older studies (published before 2000 and thus not included in our literature review above) assessed the effect of direct follow-up contact from a medical peer on physician survey response rates. The first found that follow-up telephone calls by investigating physicians to PCPs improved response rates from 62% to 92% (Bostick et al, 1992). In the other – a study of PCPs regarding their oncology consultation practices – response rates increased from 44% to 78% after follow-up telephone calls from a medical peer to initial non-responders (Heywood et al, 1995). Our case study demonstrated a slightly smaller increase in response rate (27% vs 30% and 34% in the earlier studies); nevertheless, we may conclude that despite the modern milieu of email, text messaging and social media, a follow-up peer-to-peer telephone call still has an important role in assuring high physician response rates. Our results also accord with the broader survey literature, which suggests that follow-up contact is essential (Edwards et al, 2009).

We found that, overall, respondents (early and late) were more likely to be recent graduates and also to be male. The former finding is consistent with prior studies – perhaps because, as more recent licensees, younger graduates are more likely to have accurate specialty and contact information in public sources (Kellerman and Herold, 2001; Barclay et al, 2002; Cull et al, 2005) – but these same studies have shown that female physicians are generally more likely to respond. On the other hand, our gender results are consistent with another large survey that used the American Medical Association physician file (McFarlane et al, 2007), which suggests that our source of respondents may have had a role.

Aside from one analysis (Streiff et al, 2001), we found no examples of methodological studies specifically assessing how to increase response rates for surveys of oncology specialty physicians. Although oncology-related surveys of PCPs can make use of the general literature on surveying physicians (as we ourselves did in our case example), additional strategies may be important to increase response rates from oncology specialists. With respect to the latter, empirical research is clearly needed (e.g., focus groups, key informant interviews or even surveys of oncologists). Possible strategies that may emerge include having the survey endorsed by an oncology specialty society (ours was not) or administered at a national oncology meeting (ours was not). Still, it may be that a ‘one size fits all’ strategy will not be the answer in oncology, and that tailoring the approach to the specific target physician population and investigative aims will dictate the best method.

Our own survey experience illustrates the importance of using a ‘clean’ sample, in which attempts to verify eligibility are made before contacting potential respondents. Indeed, despite our extensive efforts, we still contacted some physicians who were ultimately ineligible. Although such sample cleaning is time consuming and expensive, it is necessary to ensure a pool of respondents representative of the target population, irrespective of the sample size. Cleaning can also improve the ultimate response rate: unless an ineligible physician's status is confirmed, his or her non-response must be counted in the response rate denominator, lowering the computed rate. Our study also speaks to the utility of the mixed-methods approach (both postal and electronic options for reply), which may be the best way to obtain a high response rate from physicians (Beebe et al, 2007; Sprague et al, 2009), especially as Internet-only (Leece et al, 2004) and email-only (McMahon et al, 2003) approaches have yielded lower response rates than mailed surveys.
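Our own figures illustrate this denominator effect: the 70.5% response rate corresponds to roughly 134 of the 190 confirmed-eligible PCPs (134/190 ≈ 70.5%); had the 60 ineligible physicians simply not responded without their status being confirmed, the same 134 completed surveys would have produced an apparent response rate of only 134/250 ≈ 54%.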

We recognise limitations to our work. First, it is conceivable that some analyses of factors that affect physician survey response rates were missed in our structured literature review. Indeed, our search terms were broad, and deciding which studies to include as primarily ‘methodological’ was necessarily subjective. Second, in our case study, the principal investigator was an oncologist telephoning PCPs, and the increase in response rates might have differed if he were also a PCP (possibly better) or if he were telephoning fellow oncologists (possibly better). Third, it was not possible to determine whether the increase in response rate observed after the follow-up telephone calls was statistically significant (although its magnitude suggests it was), because the post-call respondent group included the early responders, and thus the two proportions were not independent groups for which an appropriate statistical test exists. Finally, Massachusetts is a state with universal health care and a dense network of hospitals and physicians; follow-up calls from a study physician may have different effects on physician response rates in states or countries with a different health care environment.

In summary, as the landscape of clinical practice, health insurance and health care policy evolves, physicians are likely to be solicited more often to complete surveys. The use of survey methods that include physicians will also likely increase in cancer medicine, a field with many health services issues ripe for study using such methods. Our work yields several recommendations for the oncology-focused physician survey. First, a mailed survey makes sense (delivered by a courier company such as FedEx where possible), with an option to complete it via email or the Internet. Second, we recommend a personalised letter including an upfront monetary incentive if possible. Third, attention to details such as shorter survey word length and a stamped return envelope vs a business reply envelope may be important. Finally, follow-up contact should proceed on a regular schedule, and a follow-up call by a peer physician-investigator, when feasible, may be a particularly effective tool.