Strong the Force is…

A premiere for Star Wars last week and a premiere for my blog this week; I’m not sure which is more exciting…

This post follows on from the theme of our last group blog, which reflected on a discussion during our reading group with the University of Western Ontario in Canada about the current status of frameworks, theories and models in implementation science.

I’m relatively new to the field of implementation science: my background is in systems-science research. However, I’ve recently had experience of using the integrated Promoting Action on Research Implementation in Health Services (i-PARIHS) framework to inform the evaluation of a capacity-building programme. The programme relied heavily upon the facilitation of academic mentors to educate and support healthcare analysts in developing modelling skills over a 12-month period. The framework proved useful in explaining the data I’d captured, but my nagging thought was: if I’d used the framework prospectively, would this have further enabled the healthcare projects, and could the healthcare analysts use the framework themselves to support their proposals for change?

A colleague flagged up the Quality Enhancement Research Initiative (QUERI) Implementation Network, which recently hosted a one-hour online seminar on the framework. This was delivered very well by Jeffrey Smith, and I really valued feeling part of a conversation within a community much larger than my normal habitat.

He reflected on the differences between the original PARIHS framework (Kitson et al 2008) and the ‘integrated’ i-PARIHS (Harvey and Kitson 2016). The differences relate predominantly to the construct of ‘facilitation’, which is now considered the most influential construct in achieving successful implementation. This is reflected in their equation:

Successful Implementation (SI) = Facilitation (Innovation x Recipient x Context)
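Purely as an illustration of how I read this heuristic (and not anything proposed by Harvey and Kitson), placing facilitation outside the bracket means it amplifies, or caps, whatever the other constructs contribute. A minimal sketch in Python, using entirely hypothetical 0–1 ratings for each construct:

def successful_implementation(facilitation, innovation, recipients, context):
    # Toy reading of the heuristic: facilitation acts on the combined
    # contribution of the other constructs rather than adding to them.
    return facilitation * (innovation * recipients * context)

# Hypothetical ratings between 0 and 1: weak facilitation caps the result
# even when innovation, recipients and context all look promising.
print(successful_implementation(0.2, 0.9, 0.9, 0.9))  # roughly 0.15
print(successful_implementation(0.9, 0.9, 0.9, 0.9))  # roughly 0.66

The point of the sketch is simply that, in this multiplicative reading, no amount of polish on the innovation, recipients or context compensates for absent facilitation.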

Jeffrey touched on the prospective use of i-PARIHS and acknowledged there was limited evidence of its use in this way.

During my induction into the literature and field of implementation science, it strikes me that the force to create more frameworks is strong, whereas testing and using those already in existence, for different purposes and in different contexts, seems less attractive, or perhaps less achievable.

This seems a familiar academic situation, and one I recognise from previous work with another slippery multi-dimensional concept! So my question is: why, as researchers, do we not seek to test conceptual frameworks through a more applied form of research? The methodology to develop a new tool or measure requires research to demonstrate its sensitivity and validity for an intended context. Can implementation frameworks be viewed in this light, to become validated tools which can support frontline healthcare staff to design and implement change? Should we be aiming to demonstrate the retrospective and prospective validity and sensitivity of only a few frameworks, to enable their applied use in the field by non-implementation scientists?

Listening to the seminar, it seems that considerable efforts have been made to develop i-PARIHS as a toolkit to enable users to understand the facilitation process and role in a practical and pragmatic sense (Harvey and Kitson 2016). The supplementary material to the Harvey and Kitson 2016 paper also illustrates the difference between facilitation as a role and as a process at the macro, meso and micro levels of the system. This approach, which takes the academic insights and creates a practical toolkit, seems admirable in this vast field of implementation science literature and evidence. I’m unsure at this stage how implementation science informs the transfer of its own body of knowledge. Do co-design or user-centred design principles inform the development of implementation tools and frameworks, and are these intended to be used by non-academics? If prospective use of these frameworks is one of the objectives, then end-user engagement and evaluation of carefully selected aims would seem essential to demonstrate the usability and validity of a framework. This is easy to write, but in reality perhaps much harder to achieve, and I do not underestimate the challenge.

In summary, I have two nagging questions:

  1. Do or should implementation science frameworks practically help those at the frontline implement change more successfully?
  2. Should research efforts be focused towards evaluating implementation science frameworks to support the development of implementation tools for the non-academic audience?

 

Kitson, A. L., Rycroft-Malone, J., Harvey, G., McCormack, B., Seers, K. and Titchen, A. (2008). “Evaluating the successful implementation of evidence into practice using the PARiHS framework: theoretical and practical challenges.” Implementation Science 3(1): 1.

Harvey, G. and Kitson, A. (2016). “PARIHS revisited: from heuristic to integrated framework for the successful implementation of knowledge into practice.” Implementation Science 11(1): 33.

Implementation barriers in dementia care: the Machine Trick

My colleague Jo Thompson Coon and I were invited to attend the Alzheimer’s Society Annual Research Conference last week and give a workshop on implementation in dementia research – a topic in which we have a particular interest and on which the Society is funding us to do some work.

[Image: ‘Myee’ chaff cutter, from the Powerhouse Museum]

We ran the session based around a shortened version of Howie Becker’s “Machine Trick”; the trick involves coming to understand a social problem better by imagining that you have to design a machine that would produce the situation you have observed: in this case, the failures of knowledge mobilisation around dementia research and the practice of dementia care. (A future post will go into more detail on Becker’s trick and how it can be used in implementation workshops.)

The workshop was attended by just over 40 people, a mix of researchers and research network volunteers – that is, the lay people who review and advise on Alzheimer’s Society research projects. After a brief introduction, we challenged those in the room to split into small groups and identify the components that they thought our machine should have. Each of the six groups then fed back the three most important bits they had thought of, and a few people shouted out other things they thought important at the end.

The components the workshop participants identified are listed below. Those that came up more than once are marked “x2”.

• Use of different languages by different parts of the machine x2
• Lack of understanding of barriers at outset of (research) project x2
• No patient and public involvement x2
• Lack of willingness to change or accept innovation
• Research not grounded in or exposed to reality
• Lack of leadership
• Poor quality research
• Lack of polish in presentation of findings to wider audience
• Reactive workplace with no time to plan – both for researchers and practitioners
• Poor communication and/or excessive communication between parties
• No use of experience or prior learning from previous work – for both researchers and practitioners
• Lack of appreciation of time it takes to evaluate something
• Lack of trust between parties
• Kudos and benefits only accrue to one side (typically researchers)
• Priorities geared towards immediate clinical care – lack of time and resources to think about research
• Funding geared towards finding out new stuff rather than implementing or disseminating – and no time in grants to think about implementation and dissemination
• Stifling of innovation and creativity – no time or attention available for new things
• Lack of understanding of politics – both with a small ‘p’ and a big ‘P’

These components are pretty clear and need little interpretation: the problems inherent in them, if we wanted to fix or correct the machine, are readily apparent. They also cover a lot of ground and capture some of the complications and complexities inherent in implementing healthcare research – and I use that term purposefully, since most if not all of them could be applied to many care situations, not just to dementia. In a longer workshop we might have gone on to explore how the challenges represented in the machine could be overcome or negotiated; as it was, I think the format was useful in bringing researchers and non-researchers together to think about, discuss, and identify the challenges of implementation.