Strong the Force is…

A premiere for Star Wars last week and my premiere blog this week; I’m not sure which is more exciting…

This post follows on from the theme of our last group blog, which reflected on a discussion during our reading group with the University of Western Ontario in Canada. That discussion focused on the current status of frameworks, theories and models in implementation science.

I’m relatively new to the field of implementation science: my background is in systems-science research. However, I’ve recently had experience of using the integrated Promoting Action on Research Implementation in Health Services (i-PARIHS) framework to inform the evaluation of a capacity-building programme. The programme relied heavily upon the facilitation of academic mentors to educate and support healthcare analysts in developing modelling skills over a 12-month period. The framework proved useful in explaining the data I’d captured, but my nagging thought was: if I’d used the framework prospectively, would this have further enabled the healthcare projects, and could the healthcare analysts use the framework themselves to support their proposals for change?

A colleague flagged up the QUERI Implementation Network, which recently hosted a one-hour online seminar on the framework. It was delivered very well by Jeffrey Smith, and I really valued feeling part of a conversation within a community much larger than my normal habitat.

He reflected on the differences between the original PARIHS framework (Kitson et al 2008) and the ‘integrated’ PARIHS (Harvey and Kitson 2016). The differences predominantly relate to the construct of ‘facilitation’, which is now considered the most influential construct for successful implementation. This is reflected in their equation:

Successful Implementation (SI) = Fac^n (Innovation + Recipients + Context)

where Fac^n positions facilitation as the active ingredient acting on the other three constructs.

Jeffrey touched on the prospective use of i-PARIHS and acknowledged there was limited evidence of its use in this way.

During my induction into the literature and field of implementation science, it has struck me that the force to create more frameworks is strong, whereas the testing and use of those already in existence, for different purposes and in different contexts, seems less attractive, or perhaps less achievable.

This seems a familiar academic situation, and one I recognise from previous work with another slippery multi-dimensional concept! So my question is: why, as researchers, do we not seek to test conceptual frameworks through a more applied form of research? The methodology for developing a new tool or measure requires research to demonstrate its sensitivity and validity for an intended context. Can implementation frameworks be viewed in this light, to become validated tools which can support frontline healthcare staff to design and implement change? Should we be aiming to demonstrate the retrospective and prospective validity and sensitivity of only a few frameworks, to enable their applied use in the field by non-implementation scientists?

Listening to the seminar, it seems that considerable efforts have been made to develop i-PARIHS as a toolkit to enable users to understand the facilitation process and role in a practical and pragmatic sense (Harvey and Kitson 2016). The supplementary material to the Harvey and Kitson 2016 paper also illustrates the difference between facilitation as a role and as a process at the macro, meso and micro levels of the system. This approach, which takes the academic insights and creates a practical toolkit, seems admirable in this vast field of implementation science literature and evidence.

I’m unsure at this stage how implementation science informs the transfer of its own body of knowledge. Do co-design or user-centred design principles inform the development of implementation tools and frameworks, and are these intended to be used by non-academics? If prospective use of these frameworks is one of the objectives, then end-user engagement and evaluation against carefully selected aims would seem essential to demonstrate the usability and validity of a framework. This is easy to write but in reality perhaps much harder to achieve, and I do not underestimate the challenge.

In summary, I have two nagging questions:

  1. Do, or should, implementation science frameworks practically help those at the frontline implement change more successfully?
  2. Should research efforts be focused on evaluating implementation science frameworks to support the development of implementation tools for a non-academic audience?


Kitson, A. L., Rycroft-Malone, J., Harvey, G., McCormack, B., Seers, K. and Titchen, A. (2008). “Evaluating the successful implementation of evidence into practice using the PARiHS framework: theoretical and practical challenges.” Implementation Science 3(1): 1.

Harvey, G. and Kitson, A. (2016). “PARIHS revisited: from heuristic to integrated framework for the successful implementation of knowledge into practice.” Implementation Science 11(1): 33.