The impact of PPI on service users, researchers and communities


Today we feature a joint post from one of our established bloggers and a colleague with whom she has been working on her local Patient and Public Involvement (PPI) group.

Sarah Knowles is the established blogger; she’s a Research Fellow at the NIHR School for Primary Care Research in Manchester. Ailsa Donnelly is the new addition to our blogging team; she’s the lay Chair of the PRIMER group and a firm believer in active patient/carer/public/lay involvement in health research.

No matter how complicated the research, or how brilliant the researcher, patients and the public always offer unique, invaluable insights. Their advice when designing, implementing and evaluating research invariably makes studies more effective, more credible and often more cost effective as well.
– Dame Sally Davies, Chief Medical Officer

PPI is now often expected in health research, with funding bodies such as the National Institute for Health Research requesting PPI information on grant applications and asking lay reviewers for a patient perspective. (Not all funding bodies require PPI, however; surprisingly, this includes some charities.)

In a shocking display of Elvish insolence (not ‘Elvis’ insolence as Ailsa first heard!), we’re going to disagree with Dame Sally, arguing that we still don’t know if PPI makes research “more effective”, as methods for measuring and reporting PPI are relatively new and under-used. The evidence base for PPI is limited, often reported in rose-tinted hindsight, and there are calls for more formal empirical assessment of PPI, especially its impact on research (Petit-Zeman and Locock 2013). The hope is that better measurement of “impact” will tell us whether PPI does actually make research more effective and more credible, as claimed.

Brett and colleagues have contributed to the evidence base in their latest paper, taking a slightly different angle. In this sister paper to a review of the impact of involvement on research (Brett et al. 2014) they looked at the effect of participating in PPI on the people doing it. As well as being important in its own right, they suggest that understanding this may explain why some PPI projects work and others don’t. Rather than just questioning what PPI is done and what impact it has, we need to examine how it affects all those involved during the PPI process.

PPI feels intuitively right, but where’s the evidence that it actually makes research “more effective”?

Methods

  • The team searched the major databases for studies between 1995 and 2012. They also hand searched specialist journals and contacted experts in the field to find any studies overlooked.
  • They included all studies, published and unpublished, reporting data on the involvement of adults in health or social care research. They assessed the quality of studies using the CASP checklists, and conducted a narrative synthesis to draw out key themes across all the studies.
  • In a nice example of “walking the walk”, 3 service users were involved in designing the study, and the team also held a PPI workshop with 24 PPI partners to discuss the study synthesis.

Results

  • 65 papers measuring impact on service users and 35 measuring impact on researchers were included in the review. The majority were qualitative, including case studies, cross-sectional studies and other reviews. Studies were predominantly from the UK, with others from North America, Australia and Scandinavia.
  • The authors separated their results into “Impact on Service Users”, “Impact on Researchers” and “Impact on the Community under research.” Each situation reported both positive and negative impacts.
  • For service users:
    • Positive impacts: gaining skills and knowledge, feeling listened to and being part of a team
    • Negative impacts: feeling marginalised, ignored and underprepared
  • For researchers:
    • Positive impacts: gaining insight
    • Negative impacts: tension and scepticism over what PPI partners could contribute, with the additional burdens of finding extra funding for PPI or managing new roles and expectations
  • “Impact on the community”:
    • Positive impacts: greater trust was reported, which gave the research more credibility (gold star for Dame Sally!). Better dissemination, due to community ‘ownership’ or through users becoming advocates, was seen as very helpful
    • Negative impacts: some communities reported exposure of conflicts and difficulty in ensuring sufficient diversity in involvement
The reviewers reported on the positive and negative impact of participating in PPI from the service user, researcher and community perspective.

Conclusions

The authors concluded:

The evidence reported highlights that PPI impacts these different groups in different ways, and may be linked to differing motivations and values, less often explored in studies of impact. … A common theme identified in this review is the potential for challenging impacts which can result from colliding worlds, where the values and assumptions researchers have meet with the needs and aspirations of users and the community as a whole, and do not necessarily mesh well.

The issue of culture clashes is one often talked about in PPI but rarely acknowledged formally; it’s good to see this getting attention, and we hope it leads to more honest discussion, including suggestions for managing and overcoming it.

Ailsa in particular noted that impact on service users was reported very much in terms of personal development, for example impact on self-esteem (positive or negative!) and feelings about their own roles. Impact on researchers was much less individual and was couched more in terms of ‘culture’ and ‘professional development’, with very little about ‘personal journeys’. This would have been interesting to consider; we can imagine, for example, that mental health research could have enormous emotional consequences for a researcher. Perhaps acknowledging these ‘personal’ as well as ‘professional’ impacts on researchers could be key to navigating the culture clash?

Finally, it was clear that much ‘positive’ PPI was about managing expectations on both sides right from the start, and then maintaining good two-way communication, which rings true with both of us!

Clearly managed expectations and good two-way communication are vital components of any PPI work.

Limitations

  • The paper doesn’t specify search terms and doesn’t define “involvement”. This is crucial as various definitions exist (“involvement” is the preferred term in the UK but other countries may use different terminologies; would they then be excluded from the review?). There are also different levels of involvement for service users, ranging from consultation on an idea to active participation as collaborators or even project leaders. It would be useful to know whether impacts were consistently different between types of involvement or if specific impacts were linked to specific types of involvement.
  • It was not explained whether social care involvement and health care involvement were comparable and could be assessed and reported in equivalent ways.
  • The synthesis collates the issues into those affecting different groups (service users, researchers, the community), but was quite descriptive. For example there was no attempt to consider whether positive or negative impacts were more often reported in studies with particular groups or methods.
  • It was also unclear whether positive and negative impacts were exclusive or could occur in the same study, or whether studies where researchers were more positive also reported more positive community impacts, and so on. The authors do include some suggestions in their discussion (for example, noting that poor planning and unclear roles seem to contribute to negative impacts). Sarah thought that formally mapping out these potential relationships could have given us a deeper understanding of how these impacts occurred, and some ideas of how to manage them.
  • It’s great to see service users involved in the paper, but we both noticed there were no details of any support offered. For example, did they have previous experience of helping reviews or were they given any training?
  • The authors don’t mention if any consistent impact assessment tools were reported in the studies, or if impact tended to be reported ad hoc (and if so, how it was collected in each study). The discussion acknowledges the problem of poor reporting and notes that a better conceptualisation of impact is necessary, but this does perhaps miss an opportunity to examine what the review itself tells us about how this could be conceptualised and what aspects of impact should be regularly assessed.
  • Ailsa wanted to know in the qualitative studies who had interviewed whom. Were they all conducted by researchers or did service users do some? Following on from this, it would be interesting to compare reports from studies that included PPI co-authors to those written or completed by researchers alone.
Do you have first-hand experience of PPI work? Please share your story and tell us what you think of this review.

Links

Brett J, Staniszewska S, Mockford C, Herron-Marx S, Hughes J, Tysall C, Suleman R. (2014) Mapping the impact of patient and public involvement on health and social care research: a systematic review. Health Expect 17(5): 637–50. doi: 10.1111/j.1369-7625.2012.00795.x. [PubMed abstract]

Petit-Zeman S, Locock L. (2013) Health care: bring on the evidence. Nature 501(7466): 160–61. doi: 10.1038/501160a.
