British Journal of General Practice
Intended for Healthcare Professionals
Debate & Analysis

What can science contribute to quality improvement in general practice?

Martin Marshall, Maureen Baker, Imran Rafi and Amanda Howe
British Journal of General Practice 2014; 64 (622): 254-256. DOI: https://doi.org/10.3399/bjgp14X679877
Martin Marshall
Professor of Healthcare Improvement and Lead, Improvement Science London, UCL, London.
Maureen Baker
Chair, Royal College of General Practitioners, London.
Imran Rafi
Chair, Clinical Innovation and Research Centre, Royal College of General Practitioners, London.
Amanda Howe
Vice-Chair, Royal College of General Practitioners, London.

INTRODUCTION

Over many decades practitioners and academics working in general practice have built a strong reputation for their commitment to developing innovative approaches to improving the quality of patient care. The orientation of these initiatives has evolved over the years. Until the 1980s, providing high quality patient care was primarily reliant on the personal motivation of individual doctors to achieve explicit standards of practice.1 These standards were maintained through a commitment to education and training, and their attainment rewarded by peer recognition, for example by membership or fellowship of professional bodies.

From the 1980s until the early part of this century, the emphasis shifted from individuals to teams and the narrative changed from the relatively static orientation of attaining standards to the more dynamic one of continuous improvement.2 Guidelines were developed that had to be delivered by multidisciplinary teams, multiprofessional audit was encouraged, team-based significant event analysis became common, methods of process improvement developed in the manufacturing sector were introduced into practices, and team-based financial incentives were designed and implemented at scale.

Over the past decade the focus has shifted again, this time to levers for improvement that operate at a health system level and which place the locus of control more with policymakers and system leaders than with individual professionals or clinical teams. Performance management, competition, transparency, regulation, and legislation have been introduced as ways of potentiating the established professionally-led methods, or replacing them when they are deemed to be failing.

In this article we describe what we see as the next phase of the ‘improvement journey’ for general practice, a phase that builds on the strengths of approaches currently being used, and attempts to address their deficiencies. We propose that this will place a stronger emphasis on the role of science — the science of improvement — in guiding improvement. We suggest that within the next decade the concept of ‘Evidence-Informed Improvement’ is likely to benefit how health services are organised and delivered in the same way as the evolving evidence-based medicine movement has influenced clinical practice over the past two decades.3 Evidence-Informed Improvement will both encourage and result in a closer relationship between practitioners (both clinicians and managers) who work in the health service and health service researchers working in the academic world.

THE CURRENT NATURE OF IMPROVEMENT

A wide range of approaches are in regular use to improve the quality of care, some utilised primarily by front-line teams (the ‘clinical microsystem’), while others operate at a larger organisational level or on the wider health system (Figure 1).

Figure 1. Approaches to improving the quality of care.

While most of these approaches have been subject to formal evaluation,4 the choice and implementation of improvement practices on the ground are often not based on best evidence. Improvement is frequently seen by managers and clinicians as more of an art than a science, and the potential effectiveness, costs, and risks are rarely considered in a systematic way. As a recent commentary reflected: ‘The irony is that such work risks producing exactly the opposite of improvement: resources can be wasted, energy and enthusiasm dissipated, the side-effects of interventions ignored, and in the end little demonstrable positive change may be seen’.5

THE ADDED VALUE OF SCIENCE

The suggestion that science should play a more prominent role in improvement may not go down well with those of a more practical persuasion. Criticisms of the evidence-based medicine movement from within the general practice community suggest that some people find positivist-based science to be incompatible with an interpretivist epistemological view of the world.6 But placing a stronger emphasis on scientific principles, particularly the social sciences, does not ignore the reality that improving care for patients utilises many different forms of knowledge, including political pragmatism, intuition, personal experience, and ideology. This is why calls for ‘evidence-based management’ and ‘evidence-based policy-making’ so often fall on deaf ears3 and why it is important to see improvement as a social activity as well as a technical one.

Nevertheless, by bringing theory, systematic methods, and an ability to generalise to an endeavour, science can help to make improvement more predictable and less wasteful. The rationalism of science and the messiness of practice do not have to be incompatible. Improvement science can give practitioners the tools to plan, implement, evaluate, and embed new approaches more effectively into practice. Building improvement initiatives around three key empirical questions and applying the scientific method to address these questions helps to make better use of limited resources.

The first question, ‘does it work?’, is an important one to avoid wasting time doing something that has no impact. An improvement study examining the management of domestic violence in primary care provides an example of how this question might be addressed (Box 1).

Box 1. Does it work? Improving care for victims of domestic violence

A group of primary, community, and social care practitioners working in east London wanted to address the challenge of domestic violence in their community.11 They recognised that the problem is common, has significant health and socioeconomic consequences, and is poorly recognised and inadequately managed within general practice. Drawing on academic expertise within the team, they designed an initiative that aimed to improve the identification of victims of domestic violence, and to increase the referral rate of victims to community-based specialist services. They hypothesised that the problem was fundamentally an educational one: practitioners did not have the relevant knowledge or skills to manage domestic violence. They therefore based their intervention and implementation plans on learning theory and identified an evidence-based training intervention from the literature. Committed to rigorous evaluation and led by local clinical academics, they designed an experimental study that cluster-randomised 48 practices to receive either the training intervention or to act as controls. They were able to demonstrate a threefold increase in identification rates and a 22-fold increase in referral rates. The study was published in a reputable journal and the model promoted proactively and widely among commissioning groups. As a consequence, within 2 years of publication, nearly 20 commissioning groups are now using an evidence-based approach to promoting care for victims of domestic violence (personal communication, G Feder, 2014).
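The arithmetic behind headline results like those in Box 1 is straightforward. As a minimal sketch, using entirely hypothetical counts (the trial's actual data are reported in reference 11), a rate ratio between intervention and control arms might be computed as follows.

```python
# Illustrative only: hypothetical counts, not the IRIS trial's actual data.
# A crude rate ratio comparing identification of domestic violence
# between intervention and control practices.

def rate(cases, person_years):
    """Events per 1000 person-years of registered patients."""
    return 1000 * cases / person_years

# Hypothetical aggregate figures for the two trial arms
intervention = {"identified": 180, "person_years": 120_000}
control = {"identified": 60, "person_years": 120_000}

ratio = (rate(intervention["identified"], intervention["person_years"])
         / rate(control["identified"], control["person_years"]))

print(f"Identification rate ratio: {ratio:.1f}")  # 3.0 with these made-up numbers
```

A full cluster-randomised analysis would, of course, adjust for clustering by practice and for baseline differences; this sketch shows only the headline comparison.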

The second question, ‘how does it work?’, addresses the reality that complex social interventions rarely either simply work or fail to work. The most common answer to whether an intervention is effective is ‘it depends’: on the way the initiative is implemented, the skills of the people involved, the resources available, and many other factors. Understanding the mechanisms underpinning how an intervention works, and the facilitators and barriers to change, enables it to be replicated more effectively in other organisations or at other times.7 This question can be illustrated with a study exploring why some people with diabetes have problems controlling their blood sugar levels (Box 2).

Box 2. How does it work? Understanding compliance in people with diabetes

The treatment of type 2 diabetes mellitus provides an example of the need to understand mechanisms of action if an improvement intervention is to be effective. Robust guidelines based on strong empirical evidence have a significant impact on clinical behaviours. Considerable resource is expended on educating practice nurses, GPs, and patients themselves to ensure that the guidance is followed. Nevertheless, a significant minority of patients still have poorly-controlled blood sugar levels. Efforts to rectify this, using financial incentives, benchmarking, and more intensive educational programmes have had some impact but have not resolved the problem. So why do some patients have such poor blood glucose control? One answer can be found in a study which explored the health beliefs of people with diabetes.12 Based on in-depth interviews with 46 people, the researchers discovered that while some were highly focused on preventing complications, and therefore worked hard to maintain glycaemic control, the behaviour of others was determined primarily by how they ‘felt’. They rationalised their non-adherence to medical advice by believing that if there were no symptoms then there could not be any problems. This research provided new insights to improve the design and implementation of diabetes improvement efforts.

Since most planned improvement efforts can have some impact, a more pragmatic question than ‘does it work?’ may be the third empirical question, ‘how do we make it work better?’. Here we move into the most applied end of the improvement science spectrum, where practitioners and researchers work in close partnership, utilising a growing range of participative approaches.8,9 The emphasis is on embedding academic expertise alongside the expertise of practitioners and through this negotiating a common understanding of the challenges of improvement rather than imposing views of the ‘best’ way of achieving change. An improvement project using benchmarking of performance data in general practice to facilitate improvement in clinical outcomes illustrates this question (Box 3).

Box 3. How do we make it work better? Using comparative data to facilitate improvement

The work of the Clinical Effectiveness Group based at Queen Mary University of London13 provides a useful example of this pragmatic question. The team uses its academic expertise to support about 150 practices in socioeconomically deprived boroughs in east London to use clinical data more effectively to improve patient care. Focusing on health inequalities and the management of long-term conditions, a team of GP clinical leads, facilitators, analysts, and researchers work with front-line teams to co-design evidence-based guidelines and to support their implementation through the development of software tools, the analysis and benchmarking of data, and the use of practice-based facilitation services. Significant improvements in the care of patients with atrial fibrillation, diabetes, and chronic obstructive airways disease have been demonstrated. In addition, the group have used their academic credibility and local contacts to persuade local specialists to change their prescribing practices, thereby reducing the pressure on community prescribing budgets. As a consequence of a close partnership between academics and practitioners, and despite the challenging demographic profile, participating general practices in east London can now demonstrate some of the best guideline adherence in the country.14
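Benchmarking of the kind described in Box 3 rests on simple comparative statistics. The sketch below is a hypothetical illustration (the indicator, practice names, and figures are all invented): each practice's achievement on an indicator is compared with the distribution across its peers, and outliers are flagged for facilitation.

```python
# Hypothetical sketch of practice-level benchmarking; all data invented.
from statistics import mean, pstdev

# Achievement on one indicator (e.g. % of patients with atrial fibrillation
# receiving anticoagulation) for a handful of imaginary practices.
achievement = {
    "Practice A": 82.0,
    "Practice B": 64.0,
    "Practice C": 75.0,
    "Practice D": 91.0,
    "Practice E": 70.0,
}

peer_mean = mean(achievement.values())
peer_sd = pstdev(achievement.values())

# Flag practices more than one standard deviation below the peer mean,
# which a facilitation team might then prioritise for support.
for practice, score in sorted(achievement.items(), key=lambda kv: kv[1]):
    flag = "  <- below peer range" if score < peer_mean - peer_sd else ""
    print(f"{practice}: {score:.0f}% (peer mean {peer_mean:.0f}%){flag}")
```

Real benchmarking services typically also adjust for case mix and list size; the point of the sketch is only that comparative feedback is, at its core, a small and reproducible calculation.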

REFLECTIONS ON THE NEXT STEPS

In this article we present science-based improvement as the next phase in general practice’s long-term commitment to improving the quality of care for patients and communities. Using practical examples, we show that practitioners who adopt a more scientific approach to improving care tend to demonstrate a number of characteristics: they clearly define their objectives and utilise theories and conceptual frameworks to describe how they plan to achieve them; they use the published literature to guide decisions about their intervention, their methods of implementation, and the environment within which they are working; and, finally, they are committed to both formative and summative evaluation of their improvement work in order to create new knowledge to influence future improvement efforts.

Using the science of improvement to guide everyday practice will increase the chances of improvement activities benefiting patients and reduce the risk of wasted time, effort, and money on the part of practitioners and the wider health system. However, a number of significant challenges need to be overcome before Evidence-Informed Improvement becomes the norm.

First, there is a need to build a stronger evidence base for improvement activities in community settings. Much of the evidence used to guide decisions about the organisation and delivery of health services is derived from research undertaken in the hospital sector. General practice is very different from the practice of specialist medicine in terms of values, purpose, structures, and methods. The degree of practitioner autonomy, the resources available to support systematic improvement, the holistic and multimorbid nature of clinical practice, the management of uncertainty, and many other factors are likely to impact on the effectiveness of improvement interventions, and on their associated costs and risks. Only by investing in more health services research in general practice settings will the effectiveness of improvement work be understood and optimised.

Second, the need for a closer relationship between practitioners and health service researchers is not best served by the current reward systems in the higher education sector. Researchers are more likely to be valued by their institutions for the scientific quality of their publications, their ability to win research grants, and for supervising doctoral students, than for the impact that their work might have on patient care. While this challenge is slowly being addressed by policymakers and by funding bodies, there is still a long way to go before the potential contribution of the academic community to service-based improvement is fully realised.

Third, people who work in primary care will need to be well trained in the science of improvement. Practitioners will increasingly find that expertise in the clinical sciences, the clinical method, and the patient–doctor relationship is no longer enough if they want to deliver the highest quality of care for their patients. In the future, they will need a working understanding of the published evidence relating to how health services are organised and delivered, the ability to apply this evidence in practice, the use of theory and conceptual models to guide change, and a commitment to rigorously evaluating the impact of their work. They will also want to know how to work with the public, health service managers, and academic colleagues to deliver improved care. This is a significant commitment, but there is growing evidence that those organisations which invest in building the capacity and capability of their workforce for systematic improvement achieve better results than those which fail to do so.10

By engaging with the science of improvement, general practice has an opportunity to further build its reputation as a leader in the field of quality improvement, and to more effectively influence the ways in which the health service is structured and delivers patient care.

Notes

Provenance

Freely submitted; not externally peer reviewed.

  • © British Journal of General Practice 2014

REFERENCES

  1. Royal College of General Practitioners (1985) What sort of doctor? Report 23 (RCGP, London).
  2. Grol R, Wensing M, Eccles M, Davis D (2013) Improving patient care: the implementation of change in health care, 2nd edn (Wiley-Blackwell, Oxford).
  3. Walshe K, Rundall TG (2001) Evidence-based management: from theory to practice in health care. Milbank Q 79:429–457.
  4. Cochrane Effective Practice and Organisation of Care Group. http://epoc.cochrane.org/ (accessed 7 Apr 2014).
  5. Marshall M, Pronovost P, Dixon-Woods M (2013) Promotion of improvement as a science. Lancet 381:419–421.
  6. Freeman AC, Sweeney K (2001) Why general practitioners do not implement evidence: qualitative study. BMJ 323(7321):1100–1102.
  7. Parry GJ, Carson-Stevens A, Luff DF, et al. (2013) Recommendations for evaluation of healthcare improvement initiatives. Acad Pediatr 13(6):S23–S30.
  8. Van De Ven AH (2007) Engaged scholarship: a guide for organisational and social research (Oxford University Press, Oxford).
  9. Marshall MN (2014) Bridging the ivory towers and the swampy lowlands; increasing the impact of health services research. Int J Qual Health Care 26(1):1–5. doi:10.1093/intqhc/mzt076.
  10. Bohmer R (2011) The four habits of high-value health care organisations. N Engl J Med 365(22):2045–2047.
  11. Feder G, Davies RA, Baird K, et al. (2011) Identification and Referral to Improve Safety (IRIS) of women experiencing domestic violence with a primary care training and support programme: a cluster randomised controlled trial. Lancet 378:1788–1795.
  12. Murphy E, Kinmonth AL (1995) No symptoms, no problem? Patients’ understanding of non-insulin dependent diabetes. Fam Pract 12(2):184–192.
  13. Blizard Institute. Clinical Effectiveness Group. http://blizard.qmul.ac.uk/research-groups/253-clinical-effectiveness-group.html (accessed 7 Apr 2014).
  14. Hull S, Chowdhury TA, Mathur R, Robson J (2014) Improving outcomes for patients with type 2 diabetes using general practice networks: a quality improvement project in east London. BMJ Qual Saf 23(2):171–176. doi:10.1136/bmjqs-2013-002008.