Research

Measuring continuity of care: psychometric properties of the Nijmegen Continuity Questionnaire

Annemarie A Uijen, Henk J Schers, François G Schellevis, Henk GA Mokkink, Chris van Weel and Wil JHM van den Bosch
British Journal of General Practice 2012; 62 (600): e949-e957. DOI: https://doi.org/10.3399/bjgp12X652364
Author roles: Annemarie A Uijen, GP; Henk J Schers, GP; François G Schellevis, GP, professor of general practice; Henk GA Mokkink; Chris van Weel, GP, professor of primary care; Wil JHM van den Bosch, GP, professor of primary care.

Abstract

Background Recently, the Nijmegen Continuity Questionnaire (NCQ) was developed. It aims to measure continuity of care from the patient perspective across primary and secondary care settings. Initial pilot testing proved promising.

Aim To further examine the validity, discriminative ability, and reliability of the NCQ.

Design A prospective psychometric instrument validation study in primary and secondary care in the Netherlands.

Method The NCQ was administered to patients with a chronic disease recruited from general practice (n = 145) and hospital outpatient departments (n = 123) (response rate 76%). A principal component analysis was performed to confirm three subscales that had been found previously. Construct validity was tested by correlating the NCQ score to scores of other scales measuring quality of care, continuity, trust, and satisfaction. Discriminative ability was tested by investigating differences in continuity subscores of different subgroups. Test–retest reliability was analysed in 172 patients.

Results Principal factor analysis confirmed the previously found three continuity subscales — personal continuity, care provider knows me; personal continuity, care provider shows commitment; and team/cross-boundary continuity. Construct validity was demonstrated through expected correlations with other variables and discriminative ability through expected differences in continuity subscores of different subgroups. Test–retest reliability was high (the intraclass correlation coefficient varied between 0.71 and 0.82).

Conclusion This study provides evidence for the validity, discriminative ability, and reliability of the NCQ. The NCQ can be of value to identify problems in continuity of care.

  • continuity of patient care
  • factor analysis, statistical
  • healthcare surveys
  • questionnaires
  • reproducibility of results

INTRODUCTION

Continuity of care is an important characteristic of good health care. It has a positive impact on the health of people and populations and reduces medical errors.1–4 Continuity of care is, nowadays, considered a multidimensional concept.5–8 It comprises providers’ knowledge of the patient as a person, the development of an ongoing relationship (personal continuity), and communication and collaboration between care providers to connect care. For this last dimension, slightly different concepts of informational continuity, management continuity, or team/cross-boundary continuity are used in the literature, although they have proven difficult to differentiate for patients.9

Measuring continuity allows for the identification of problems and the evaluation of interventions or changes in healthcare systems aimed at improving continuity of care. To explore and improve health care, it is especially important to measure continuity of care from the patients’ perspective, particularly for patients with multimorbidity.10 Disease-specific instruments cannot be used for this purpose.

Recently, the Nijmegen Continuity Questionnaire (NCQ), a generic questionnaire that aims to measure patients’ perceptions of personal, team, and cross-boundary continuity, regardless of morbidity and care setting, was developed.11 In a preliminary study, the NCQ proved to be promising for use with patients in primary care; however, further testing of reliability and validity is needed before it can be more widely implemented. The aim of this study, therefore, was to assess the psychometric properties (validity and reliability) of the NCQ.

METHOD

Participants and design

In The Netherlands, every patient is registered with a GP. The GP functions as a gatekeeper for secondary care, which is provided in a hospital/outpatient department. In a previous pilot study, the NCQ was distributed among patients with a chronic disease recruited from general practice.11 As the NCQ had been changed during pilot testing, and because the aim was to test it in both primary and secondary care, a new sample of participants was recruited.

In January 2010, 19 GP trainees working in practices in the eastern part of The Netherlands were asked to distribute 20 NCQs each to patients with ≥1 chronic disease. For these patients, continuity is particularly important. At the same time, six medical specialists (oncologist, internist, cardiologist, lung specialist, psychiatrist, and rheumatologist) working in an academic hospital in Nijmegen were asked to distribute 30 questionnaires each to patients attending the outpatient department.

Patients aged <18 years or those who were unable to speak or read Dutch were excluded. Patients filled in the NCQ at home and could send it back to the researchers. The GP trainees and specialists registered age, sex, and type of chronic disease(s) of participating patients; GP trainees completed some questions on the type of practice in which they worked.

How this fits in

The Nijmegen Continuity Questionnaire (NCQ) was developed to measure patients' experienced personal, team, and cross-boundary continuity of care, regardless of morbidity and across multiple care settings. In a previous study, the NCQ proved promising in primary care; this study provides evidence to support the validity, discriminative ability, and reliability of the NCQ. The NCQ may be valuable for: detecting problems in the care continuum; comparing continuity experiences for different diseases and multimorbidity patterns; and evaluating interventions or changes in healthcare systems aimed at improving continuity of care.

Ethical approval for the study was granted by the ethics committee Arnhem-Nijmegen.

Measurement instruments

The NCQ (available from the authors on request) consists of 28 items within the following three subscales:

  • personal continuity: care provider knows me (five items each for two different providers);

  • personal continuity: care provider shows commitment (three items each for two different providers); and

  • team/cross-boundary continuity (four items each for three different groups of providers).

Items were scored on a five-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree), with an additional option to choose ‘?’ (‘I do not know’). Patients recruited by the specialist also filled in questions about their GP and vice versa.
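
As an illustration of this scoring (a minimal sketch only; the item labels below are hypothetical and the questionnaire itself is available from the authors), the subscale layout and the handling of the ‘?’ option as a missing value could be represented as follows:

    import numpy as np

    # Hypothetical item labels mirroring the structure described above (28 items in total).
    NCQ_SUBSCALES = {
        # personal continuity: care provider knows me (5 items x 2 providers)
        "knows_me_gp":           [f"gp_knows_{i}" for i in range(1, 6)],
        "knows_me_specialist":   [f"spec_knows_{i}" for i in range(1, 6)],
        # personal continuity: care provider shows commitment (3 items x 2 providers)
        "commitment_gp":         [f"gp_commit_{i}" for i in range(1, 4)],
        "commitment_specialist": [f"spec_commit_{i}" for i in range(1, 4)],
        # team/cross-boundary continuity (4 items x 3 groups of providers)
        "team_general_practice": [f"team_gp_{i}" for i in range(1, 5)],
        "team_hospital":         [f"team_hosp_{i}" for i in range(1, 5)],
        "team_gp_specialist":    [f"team_cross_{i}" for i in range(1, 5)],
    }

    def recode_response(raw: str) -> float:
        """Map an answer to its 1-5 Likert value; '?' (I do not know) becomes missing."""
        return np.nan if raw == "?" else float(raw)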

As well as the NCQ, all patients were asked to complete the following:

  • The ‘care suits patient’ subscale of the Consumer Quality Index General Practice Care questionnaire.12 The questionnaire measures patients’ experiences with general practice care and the subscale measures whether that care suits the patient, corresponding to their experiences with continuity. The subscale consists of nine items, which are scored on a four-point scale ranging from ‘never’ to ‘always’ (Appendix 2 available from the author on request). The option ‘not applicable’ was available for some items. The Consumer Quality Index has been shown to be a valid and reliable instrument for measuring the quality of general practice; the higher the score, the better the primary care experience.

  • The Continuity of Care from the Client Perspective (VCC) questionnaire.13 This Dutch questionnaire was developed in 1998 to measure continuity of care from the perspective of patients with a chronic disease living at home. The questionnaire comprises items concerning patients’ experiences with their GP, medical specialist, physiotherapist, occupational therapist, dietitian, speech therapist, podiatrist, and home care. Patients were asked to answer only the questions concerning their GP (13 items) and their medical specialist (14 items) (Appendix 3 available from the author on request). Items are scored on a five-point scale, with an additional option of ‘not applicable’. Initial testing has shown that this questionnaire is a valid instrument with medium reliability.

  • Two questions on satisfaction (‘I am satisfied with the care I receive from my GP/specialist’).

  • Two questions about trust (‘I have trust in my GP/specialist’).

Validity

The following hypotheses were generated to assess construct validity:

  1. Principal component analysis would confirm the previously found three continuity subscales11 — personal continuity: care provider knows me, personal continuity: care provider shows commitment, and team/cross-boundary continuity;

  2. Higher scores on the three subscales about general practice (personal continuity: GP knows me, personal continuity: GP shows commitment, team/cross-boundary continuity between care providers within general practice) would be highly positively associated with scores on the Consumer Quality Index ‘care suits patient’ subscale, the VCC general practice subscale, and GP trust and satisfaction scores.

  3. Scores on the three subscales about hospital/outpatient department care (personal continuity: specialist knows me, personal continuity: specialist shows commitment, team/cross-boundary continuity between care providers within hospital/outpatient department) would be highly positively associated with scores on the VCC subscale specialist care and specialist trust and satisfaction scores.

  4. Scores on the team/cross-boundary continuity between GP and specialist subscale would be at least moderately positively associated with scores on the Consumer Quality Index care suits patient subscale, the VCC subscales, and GP and specialist trust and satisfaction scores.

Discriminative ability

The discriminative ability was tested by examining differences in the NCQ subscores for different subgroups. The following hypotheses were generated:

  1. Patients recruited from general practice would experience greater continuity in general practice than in hospital/outpatient departments, whereas patients recruited from hospital/outpatient departments would experience greater continuity in hospital/outpatient departments. It was expected that, in general, patients recruited from general practice would contact the hospital/outpatient department less frequently than patients recruited from hospital/outpatient departments and vice versa. Contacting a department infrequently was likely to diminish the levels of continuity experienced by the patient.

  2. Patients who were psychiatrically ill would experience less continuity than those who were somatically ill, for example, those with diabetes mellitus. This hypothesis is supported by the literature.1,14,15

  3. Patients registered in a general practice in a small town (<40 000 inhabitants) would experience more continuity in general practice than patients registered in a general practice in a large town. Patients from a small town are often found to be healthier than their counterparts in larger towns,16 so they probably see fewer care providers; they also tend to move less often to another general practice,17 both of which increase experienced continuity of care.

  4. Patients who contacted one provider in the previous year would experience more personal continuity — be that in general practice or in hospital/outpatient departments — than patients who contacted >1 provider.

Reliability

Test–retest reliability was assessed by having participants complete the NCQ a second time, 2 weeks after their initial completion. No intervention took place in these 2 weeks. In the first questionnaire, participants were asked whether they were willing to fill in one more questionnaire after 2 weeks; if so, they had to write down their address so the retest could be sent by mail. No reminder was sent when participants did not respond.

Analyses

SPSS (version 16.0) was used to analyse the data. Item means, total subscale scores, and the percentage of responders with the highest and lowest subscale scores (ceiling and floor effect) were assessed. The subscale scores were calculated as the mean of the items in each subscale. To calculate this score, cases that were missing more than one question within a subscale were excluded.
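
A minimal sketch of this scoring rule (using pandas rather than the SPSS syntax used in the study; column names are illustrative):

    import numpy as np
    import pandas as pd

    def subscale_score(items: pd.DataFrame) -> pd.Series:
        """Mean of the item columns; missing when >1 item is missing for a respondent."""
        n_missing = items.isna().sum(axis=1)
        score = items.mean(axis=1, skipna=True)
        return score.where(n_missing <= 1, np.nan)

    # Example: a four-item subscale for three respondents (values 1-5; NaN = missing or '?')
    answers = pd.DataFrame({"q1": [4, 5, np.nan], "q2": [4, np.nan, np.nan],
                            "q3": [5, 4, 3], "q4": [4, 4, 5]})
    print(subscale_score(answers))  # third respondent has two missing items -> NaN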

Confirmatory principal component analysis with varimax rotation was used to verify the three continuity subscales that had been found previously.11 Principal component analysis was performed on the eight items about the patient–provider relationship across the multiple care settings and per care setting separately. In the first analysis, several observations from one patient are included (observations from the provider in general practice and in hospital/outpatient departments); in the second analysis, the observations are all independent. Principal component analysis was also performed on the four items regarding the collaboration and information exchange between the groups of providers across the multiple care settings and per care setting separately.
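
A minimal sketch of this extraction step (the study used SPSS 16; the version below uses numpy/scikit-learn with an explicit varimax routine, since PCA alone does not rotate, and the input matrix X is illustrative):

    import numpy as np
    from sklearn.decomposition import PCA

    def varimax(loadings: np.ndarray, gamma: float = 1.0,
                max_iter: int = 100, tol: float = 1e-6) -> np.ndarray:
        """Orthogonal varimax rotation of an (items x factors) loading matrix."""
        p, k = loadings.shape
        rotation = np.eye(k)
        objective = 0.0
        for _ in range(max_iter):
            rotated = loadings @ rotation
            # Gradient step for the varimax criterion
            gradient = loadings.T @ (rotated ** 3 - (gamma / p) * rotated
                                     @ np.diag((rotated ** 2).sum(axis=0)))
            u, s, vt = np.linalg.svd(gradient)
            rotation = u @ vt
            new_objective = s.sum()
            if new_objective < objective * (1 + tol):
                break
            objective = new_objective
        return loadings @ rotation

    # X: complete-case matrix of respondents x 8 relationship items (Likert 1-5)
    # pca = PCA(n_components=2).fit(X)
    # loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
    # rotated_loadings = varimax(loadings)
    # variance_explained = pca.explained_variance_ratio_.sum()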

Validity was determined by examining the correlations between the total scores for all scales using Pearson’s product moment correlations. A correlation of 0.3 to <0.5 was considered moderate and a correlation of ≥0.5 high.18,19 To calculate the total score for each scale, cases were excluded if more than one question within each scale was missing.
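
A hedged sketch of this step (the score vectors below are made up; in the study they are complete-case scale totals):

    import numpy as np
    from scipy.stats import pearsonr

    def correlation_strength(r: float) -> str:
        """Label a correlation using the thresholds adopted above."""
        if abs(r) >= 0.5:
            return "high"
        if abs(r) >= 0.3:
            return "moderate"
        return "weak"

    ncq_gp_knows_me = np.array([4.2, 3.8, 4.6, 2.9, 3.5, 4.1])
    cqi_care_suits_patient = np.array([3.4, 3.1, 3.8, 2.2, 2.9, 3.5])
    r, p = pearsonr(ncq_gp_knows_me, cqi_care_suits_patient)
    print(f"r = {r:.2f} ({correlation_strength(r)}), P = {p:.3f}")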

Independent Student’s t-tests were conducted to determine the discriminative ability. To determine test–retest reliability, the intraclass correlation coefficient (ICC; two-way random effects model, absolute agreement) was calculated for the subscale scores. Reliability was considered good with an ICC of >0.70.20 Furthermore, the consistency of measurements was verified using the method described by Bland and Altman:21 the mean difference between the two measurements and the 95% limits of agreement (mean ±1.96 standard deviations of the differences) were calculated for each subscale score, and the consistency of measurement was visualised for one subscale in a Bland-Altman plot.
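
A minimal sketch of these reliability calculations (numpy, not the SPSS procedures used in the study), covering ICC(2,1), the two-way random effects, absolute agreement, single-measurement model, and the Bland-Altman mean difference with 95% limits of agreement:

    import numpy as np

    def icc_2_1(scores: np.ndarray) -> float:
        """ICC(2,1) for a complete-case (subjects x measurements) matrix."""
        n, k = scores.shape
        grand = scores.mean()
        row_means = scores.mean(axis=1)   # per-subject means
        col_means = scores.mean(axis=0)   # per-occasion means (test, retest)
        ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)
        ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)
        residual = scores - row_means[:, None] - col_means[None, :] + grand
        ms_error = np.sum(residual ** 2) / ((n - 1) * (k - 1))
        return (ms_rows - ms_error) / (
            ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n)

    def bland_altman(test: np.ndarray, retest: np.ndarray):
        """Mean difference and 95% limits of agreement (mean +/- 1.96 SD of differences)."""
        diff = retest - test
        mean_diff, sd_diff = diff.mean(), diff.std(ddof=1)
        return mean_diff, (mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff)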

RESULTS

In total, 14 GP trainees and six specialists participated; they asked 192 and 162 patients, respectively, to fill in the questionnaire, of which 145 (76%) and 123 (76%) were returned. In total, 268 patients participated.

Patient characteristics

Table 1 shows patients’ characteristics and their medical care. Responders and non-responders did not differ in age (P = 0.34). Responders were more likely than non-responders to be male (P = 0.006).

Table 1

Characteristics and medical care of responders and non-responders

Item and subscale analyses

Table 2 shows item means, total subscale scores, the percentage of patients with the highest and lowest subscale scores, and Cronbach’s alpha for the subscales. The percentage of patients with the highest or lowest subscale score was low (<7.5%), so the NCQ does not show a ceiling or floor effect. Internal consistency (Cronbach’s alpha) ranged from 0.86 to 0.96.
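
A minimal sketch of how these item and subscale statistics can be computed (illustrative only; the published values in Table 2 come from the SPSS analyses):

    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """Internal consistency for one subscale (complete-case respondents x items)."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)
        total_variance = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_variances.sum() / total_variance)

    def floor_ceiling(scores: pd.Series, low: float = 1.0, high: float = 5.0):
        """Percentage of respondents at the lowest/highest possible subscale score."""
        scores = scores.dropna()
        return 100 * (scores == low).mean(), 100 * (scores == high).mean()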

Table 2

Item means and total subscale score

Principal component analysis confirmed the three factors that were found in a previous study (hypothesis one, construct validity);11 it was performed on the eight items about the GP and the most important medical specialist together, and for each care setting separately. Table 3 shows the factor loadings of the combined analysis, which resulted in the same two factors as the separate analyses — personal continuity: care provider knows me; personal continuity: care provider shows commitment — explaining 73.8% of the overall variance. A principal component analysis was also performed on the four items about collaboration between the groups of providers across the multiple care settings and per care setting separately. This resulted in the same single factor (team/cross-boundary continuity), explaining 88.8% of total variance.

Table 3

Results of confirmatory principal component analysis

Construct validity

Table 4 shows the correlations between the NCQ and patient variables. As in hypothesis two, high correlations were found between the three subscales on general practice and the care suits patient subscale of the Consumer Quality Index (r = 0.57–0.75, P<0.01), the general practice subscale of the VCC questionnaire (r = 0.58–0.61, P<0.01), and GP trust (r = 0.59–0.64, P<0.01) and satisfaction (r = 0.63–0.67, P<0.01) scores.

Table 4

Correlations among factors and patient variables

High correlations were also found between the three subscales on hospital/outpatient department care and the specialist care subscale of the VCC questionnaire (r = 0.56–0.73, P<0.01, hypothesis three). Two subscales (personal continuity: specialist knows me; personal continuity: specialist shows commitment) were highly correlated to specialist trust (r = 0.56–0.59, P<0.01) and satisfaction (r = 0.54–0.59, P<0.01) scores, whereas one subscale (team/cross-boundary continuity between care providers within hospital/outpatient departments) was moderately correlated to specialist trust (r = 0.46, P<0.01) and satisfaction (r = 0.48, P<0.01) scores (hypothesis three).

The team/cross-boundary continuity between GP and specialist subscale was at least moderately associated with the care suits patient subscale of the Consumer Quality Index (r = 0.47, P<0.01), VCC subscales (r = 0.56–0.65, P<0.01), GP trust (r = 0.30, P<0.01), GP satisfaction (r = 0.38, P<0.01), and specialist satisfaction (r = 0.33, P<0.01) scores. It was weakly correlated with specialist trust scores (r = 0.27, P<0.01) (hypothesis four).

Discriminative ability

Table 5 shows the continuity subscores of different subgroups. As outlined in hypothesis one, patients recruited from general practice experienced significantly more personal and team continuity in general practice, whereas patients recruited from hospital/outpatient departments experienced more personal continuity in hospital/outpatient departments. The score of team continuity in hospital/outpatient departments did not differ between these subgroups.

Table 5

Continuity subscores of different subgroups

In agreement with the second hypothesis, patients who were psychiatrically ill experienced significantly less personal continuity from their GP than those with diabetes. No difference was found in other continuity subscores.

As suggested in hypothesis three, patients registered in a general practice in a small town experienced more personal and team continuity in general practice than patients registered in a general practice in a larger town.

Patients who saw one provider in the previous year experienced more personal continuity in general practice than patients who saw more providers (hypothesis four). No differences in personal continuity in hospital/outpatient departments were found (hypothesis four).

Reliability

In total, 184 patients (69%) agreed to fill in a repeat questionnaire, of whom 172 (93%) returned the retest. Patients who agreed to participate did not differ from those who did not agree in terms of age (P = 0.87), sex (P = 0.47), number of chronic diseases (P = 0.76), or NCQ subscale scores (P = 0.18 to 0.96).

Table 2 shows the ICC per subscale for each provider or group of providers; ICCs varied between 0.71 and 0.82. Table 2 also shows the mean difference between the two measurements and the limits of agreement for each subscale. The mean difference between the two measurements varied between –0.10 and 0.11. The limits of agreement were smallest for the personal continuity: care provider knows me subscale. Figure 1 shows the Bland-Altman plot of the personal continuity: GP knows me subscale; it shows the mean difference between test and retest (0.07) with its 95% limits of agreement (–0.82 to 0.97) plotted against the mean of the sum scores.

Figure 1

Bland-Altman plot of the personal continuity: GP knows me subscale. Intra-individual differences between test and retest responses plotted against the mean of the sum scores. The solid line represents the mean of the intra-individual differences. The dashed lines define the 95% limits of agreement. Mean of the difference ± 1.96 standard deviation.
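
A hedged matplotlib sketch of a plot like Figure 1 (the test and retest arguments are assumed to be paired subscale scores; this is not the software used to produce the published figure):

    import matplotlib.pyplot as plt
    import numpy as np

    def bland_altman_plot(test, retest, ax=None):
        """Differences (retest minus test) against the mean of the paired scores."""
        test, retest = np.asarray(test, float), np.asarray(retest, float)
        means, diffs = (test + retest) / 2, retest - test
        md, sd = diffs.mean(), diffs.std(ddof=1)
        ax = ax or plt.gca()
        ax.scatter(means, diffs, s=15)
        ax.axhline(md, linestyle="-")               # mean difference
        ax.axhline(md + 1.96 * sd, linestyle="--")  # upper 95% limit of agreement
        ax.axhline(md - 1.96 * sd, linestyle="--")  # lower 95% limit of agreement
        ax.set_xlabel("Mean of test and retest subscale score")
        ax.set_ylabel("Retest minus test")
        return ax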

DISCUSSION

Summary

This study provides evidence for the validity, discriminative ability, and reliability of the NCQ as a generic questionnaire that measures patients’ experiences of continuity of care as a multidimensional concept, regardless of care setting and morbidity.

Building on previous research,11 this study provides further evidence of its construct validity through the results of the confirmatory principal component analysis and the hypothesised correlations found between the NCQ and other scales measuring quality of care, continuity of care, trust, and satisfaction. Evidence for the discriminative ability of the NCQ is provided by (hypothesised) differences in continuity subscores of different subgroups. The reliability is further supported with the results of the test–retest and the internal consistencies of the subscales.

Strengths and limitations

One limitation is that only patients with a chronic disease were included, which reduces generalisability of the tool. However, patients with ≥1 chronic disease were purposively selected as, for these patients, continuity is particularly important.22–24

Another limitation is possible recruitment bias. Providers could decide for themselves which patients to ask to participate, and GP trainees, more often than medical specialists, approached fewer patients than requested. Data on patients who met the inclusion criteria but were not approached by their provider were not available.

A final limitation is the finding that, as hypothesised, patients experienced more continuity in the setting from which they were recruited. This may partly reflect a tendency to express satisfaction with the care organisation in which the questionnaire is handed out. The study tried to reduce this potential bias by asking patients to fill in the NCQ at home and return it to the researchers.

A Cronbach’s alpha of >0.90 was found for the team/cross-boundary subscale, which can imply item redundancy. However, this subscale includes only four items, so it was not considered necessary to shorten the questionnaire.

The NCQ does not show a floor or ceiling effect, so it may be capable of detecting changes in continuity scores over time (responsiveness). This will need further testing.

A strength of this study is that patients from both primary and secondary care were included. The sample size and response rate of participants were high, which strengthens the results and conclusion of this study.

Comparison with existing literature

In a previous study, the NCQ proved to be promising for use with patients in primary care.11 In this preliminary study, the internal consistencies of the subscales and the interscale correlations also provided evidence of a reliable and valid questionnaire with good discriminant abilities.

Implications for practice

Nowadays, an increasing number of providers are involved in the care of patients, especially patients with a chronic disease; this can threaten the continuity of patients’ care. The NCQ can identify problems in continuity and could be used to evaluate interventions or changes in healthcare systems aimed at improving continuity of care. This will be an important area for further research, given that poor continuity is suspected to have a negative impact on the health of people and populations and to increase medical errors.1–4 Moreover, the NCQ can be used to compare continuity experiences for different diseases and multimorbidity patterns.

Generalisability to other countries

The questionnaire was developed and tested in The Netherlands, a country where the GP acts as a gatekeeper. It is likely to be easily applicable in other countries with the same type of care system, such as the UK. In countries with a different care system, the GP could perhaps be replaced by another provider, making the questionnaire applicable there as well. More research is needed on the generalisability of the tool to other countries.

Notes

Funding

Funding for this study was supplied by the Frans Huygen Foundation, Nijmegen, The Netherlands.

Ethical approval

Ethical approval was granted by the Medical Ethics Committee Arnhem-Nijmegen.

Provenance

Freely submitted; externally peer reviewed.

Competing interests

The authors have declared no competing interests.

Discuss this article

Contribute and read comments about this article on the Discussion Forum: http://www.rcgp.org.uk/bjgp-discuss

  • Received November 7, 2011.
  • Revision received November 28, 2011.
  • Accepted January 24, 2012.
  • © British Journal of General Practice 2012

REFERENCES

1. Adair CE, McDougall GM, Mitton CR, et al. (2005) Continuity of care and health outcomes among persons with severe mental illness. Psychiatr Serv 56(9):1061–1069.
2. Hänninen J, Takala J, Keinänen-Kiukaanniemi S (2001) Good continuity of care may improve quality of life in type 2 diabetes. Diabetes Res Clin Pract 51(1):21–27.
3. Moore C, Wisnivesky J, Williams S, McGinn T (2003) Medical errors related to discontinuity of care from an inpatient to an outpatient setting. J Gen Intern Med 18(8):646–651.
4. Stange KC, Ferrer RL (2009) The paradox of primary care. Ann Fam Med 7(4):293–299.
5. Gulliford M, Cowie L, Morgan M (2011) Relational and management continuity survey in patients with multiple long-term conditions. J Health Serv Res Policy 16(2):67–74.
6. Haggerty JL, Reid RJ, Freeman GK, et al. (2003) Continuity of care: a multidisciplinary review. BMJ 327(7425):1219–1221.
7. Reid R, Haggerty J, McKendry R (2002) Defusing the confusion: concepts and measures of continuity of health care. Prepared for the Canadian Health Services Research Foundation, the Canadian Institute for Health Information and the Advisory Committee on Health Services of the Federal/Provincial/Territorial Deputy Ministers of Health. http://www.chsrf.ca/Migrated/PDF/ResearchReports/CommissionedResearch/cr_contcare_e.pdf (accessed 6 Jun 2012).
8. Uijen AA, Schers HJ, Schellevis FG, van den Bosch WJ (2011) How unique is continuity of care? A review of continuity and related concepts. Fam Pract [Epub ahead of print].
9. Gulliford M, Naithani S, Morgan M (2006) Continuity of care in type 2 diabetes: patients’, professionals’ and carers’ experiences and health outcomes. Research summary. National Co-ordinating Centre for NHS Service Delivery and Organisation Research and Development, London.
10. Uijen AA, Schers HJ, van Weel C (2010) Continuity of care preferably measured from the patients’ perspective. J Clin Epidemiol 63(9):998–999.
11. Uijen AA, Schellevis FG, van den Bosch WJ, et al. (2011) Nijmegen Continuity Questionnaire: development and testing of a questionnaire that measures continuity of care. J Clin Epidemiol 64(12):1391–1399.
12. Meuwissen LE, de Bakker DH (2009) [‘Consumer quality’-index ‘General practice care’ measures patients’ experiences and compares general practices with each other]. Ned Tijdschr Geneeskd 153:A180.
13. Casparie AF, Foets M, Raaijmakers MF, et al. (1998) Onderzoeksprogramma Kwaliteit van Zorg: vragenlijst continuïteit van zorg vanuit cliëntperspectief VCC: handleiding en vragenlijsten. NIVEL, Utrecht, the Netherlands.
14. Chien CF, Steinwachs DM, Lehman A, Fahey M, Skinner EA (2000) Provider continuity and outcomes of care for persons with schizophrenia. Ment Health Serv Res 2(4):201–211.
15. Uijen AA, Bischoff EWMA, Schellevis FG, et al. (2012) COPD patients’ experienced continuity in different care modes and its relation to quality of life. Br J Gen Pract, DOI: 10.3399/bjgp12X649115.
16. Verheij RA (1996) Explaining urban-rural variations in health: a review of interactions between individual and environment. Soc Sci Med 42(6):923–935.
17. Schellevis FG, Jabaaij L (2006) [Continuiteit en verhuizende patienten]. Huisarts Wet 2:104.
18. Burns N, Grove SK (2001) The practice of nursing research, 4th edn. W.B. Saunders, Philadelphia, US.
19. Cohen J (1988) Statistical power analysis for the behavioral sciences, 2nd edn. Lawrence Erlbaum Associates, Mahwah, NJ.
20. Terwee CB, Bot SD, de Boer MR, et al. (2007) Quality criteria were proposed for measurement properties of health status questionnaires. J Clin Epidemiol 60(1):34–42.
21. Bland JM, Altman DG (1986) Statistical methods for assessing agreement between two methods of clinical measurement. Lancet 1(8476):307–310.
22. Cheraghi-Sohi S, Hole AR, Mead N, et al. (2008) What patients want from primary care consultations: a discrete choice experiment to identify patients’ priorities. Ann Fam Med 6(2):107–115.
23. Gerard K, Salisbury C, Street D, et al. (2008) Is fast access to general practice all that should matter? A discrete choice experiment of patients’ preferences. J Health Serv Res Policy 13(Suppl 2):3–10.
24. Turner D, Tarrant C, Windridge K, et al. (2007) Do patients value continuity of care in general practice? An investigation using stated preference discrete choice experiments. J Health Serv Res Policy 12(3):132–137.