Strong the Force is…

A premiere for Star Wars last week and my premiere blog this week; I’m not sure which is more exciting…

This post follows on from the theme of our last group blog, which reflected on a discussion during our reading group with the University of Western Ontario in Canada about the current status of frameworks, theories and models in implementation science.

I’m relatively new to the field of implementation science: my background is in systems-science research. However, I’ve recently had experience in using the integrated Promoting Action on Research Implementation in Health Services (i-PARIHS) framework to inform the evaluation of a capacity-building programme. The programme relied heavily upon the facilitation of academic mentors to educate and support healthcare analysts in developing modelling skills over a 12-month period. The framework proved useful in explaining the data I’d captured, but my nagging thought was: if I’d used the framework prospectively, would this have further enabled the healthcare projects, and could the healthcare analysts have used the framework themselves to support their proposals for change?

A colleague flagged up the QUERI Implementation Network, which recently hosted a one-hour online seminar on the framework. It was delivered very well by Jeffrey Smith, and I really valued feeling part of a conversation within a community much larger than my normal habitat.

He reflected on the differences between the original PARIHS framework (Kitson et al. 2008) and the ‘integrated’ PARIHS (Harvey and Kitson 2016), which relate predominantly to the construct of ‘facilitation’. Facilitation is now considered the most influential construct for successful implementation, as reflected in their equation:

Successful Implementation (SI) = Facilitationⁿ (Innovation + Recipients + Context)
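For those who prefer the compact notation of the 2016 paper, here is my rendering in LaTeX (the exact symbols are my transcription, so treat them as an approximation of the published form rather than a verbatim quote):

$$ \mathrm{SI} \;=\; \mathrm{Fac}^{\,n}\,(\mathrm{I} + \mathrm{R} + \mathrm{C}) $$

Here I is the innovation, R the recipients, and C the context. The superscript n conveys that facilitation is the ‘active ingredient’: it is applied, with varying intensity, to activate the other three constructs, rather than simply sitting alongside them as a fourth factor.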

Jeffrey touched on the prospective use of i-PARIHS and acknowledged there was limited evidence of its use in this way.

During my induction into the field and literature of implementation science, it has struck me that the force to create new frameworks is strong, whereas the testing and use of those already in existence, for different purposes and in different contexts, seems less attractive, or perhaps less achievable.

This seems a familiar academic situation, and one I recognise from previous work with another slippery multi-dimensional concept! So my question is: why, as researchers, do we not seek to test conceptual frameworks through a more applied form of research? The methodology for developing a new tool or measure requires research to demonstrate its sensitivity and validity for an intended context. Can implementation frameworks be viewed in this light, to become validated tools which can support frontline healthcare staff to design and implement change? Should we be aiming to demonstrate the retrospective and prospective validity and sensitivity of only a few frameworks, to enable their applied use in the field by non-implementation scientists?

Listening to the seminar, it seemed that considerable effort has been made to develop i-PARIHS as a toolkit that enables users to understand the facilitation process and role in a practical, pragmatic sense (Harvey and Kitson 2016). The supplementary material to that paper also illustrates the difference between facilitation as a role and as a process at the macro, meso, and micro levels of the system. This approach, which takes the academic insights and creates a practical toolkit, seems admirable in this vast field of implementation science literature and evidence. I’m unsure at this stage how implementation science informs the transfer of its own body of knowledge. Do co-design or user-centred design principles inform the development of implementation tools and frameworks, and are these intended to be used by non-academics? If prospective use of these frameworks is one of the objectives, then end-user engagement and evaluation against carefully selected aims would seem essential to demonstrate the usability and validity of a framework. This is easy to write but, in reality, perhaps much harder to achieve, and I do not underestimate the challenge.

In summary, I have two nagging questions:

  1. Do implementation science frameworks practically help those at the frontline implement change more successfully, and should they?
  2. Should research efforts be focused on evaluating implementation science frameworks so as to support the development of implementation tools for a non-academic audience?

 

Kitson, A. L., Rycroft-Malone, J., Harvey, G., McCormack, B., Seers, K. and Titchen, A. (2008). “Evaluating the successful implementation of evidence into practice using the PARiHS framework: theoretical and practical challenges.” Implementation Science 3(1): 1.

Harvey, G. and Kitson, A. (2016). “PARIHS revisited: from heuristic to integrated framework for the successful implementation of knowledge into practice.” Implementation Science 11(1): 33.

Implementation barriers in dementia care: the Machine Trick

My colleague Jo Thompson Coon and I were invited to attend the Alzheimer’s Society Annual Research Conference last week and give a workshop on implementation in dementia research, a topic in which we have a particular interest and on which the Society is funding us to do some work.

[Image: ‘Myee’ chaff cutter, from the Powerhouse Museum]

We ran the session based around a shortened version of Howie Becker’s “Machine Trick”; the trick involves coming to understand a social problem better by imagining that you have to design a machine that would produce the situation you have observed: in this case, the failures of knowledge mobilisation around dementia research and the practice of dementia care. (A future post will go into more detail on Becker’s trick and how it can be used in implementation workshops.)

The workshop was attended by just over 40 people, a mix of researchers and research network volunteers – that is, the lay people who review and advise on Alzheimer’s Society research projects. After a brief introduction we challenged those in the room to split into small groups and identify the components that they thought our machine should have. Each of the six groups then fed back the three most important components they had thought of, and a few people shouted out other things they thought important at the end.

The components the workshop participants identified are listed below. Those that came up more than once are marked “x2”.

• Use of different languages by different parts of the machine x2
• Lack of understanding of barriers at outset of (research) project x2
• No patient and public involvement x2
• Lack of willingness to change or accept innovation
• Research not grounded in or exposed to reality
• Lack of leadership
• Poor quality research
• Lack of polish in presentation of findings to wider audience
• Reactive workplace with no time to plan – both for researchers and practitioners
• Poor communication and/or excessive communication between parties
• No use of experience or prior learning from previous work – for both researchers and practitioners
• Lack of appreciation of time it takes to evaluate something
• Lack of trust between parties
• Kudos and benefits only accrue to one side (typically researchers)
• Priorities geared towards immediate clinical care – lack of time and resources to think about research
• Funding geared towards finding out new stuff rather than implementing or disseminating – and no time in grants to think about implementation and dissemination
• Stifling of innovation and creativity – no time or attention available for new things
• Lack of understanding of politics – both with a small ‘p’ and a big ‘P’

These components are pretty clear and need little interpretation: the problems inherent in them, if we wanted to fix or correct the machine, are readily apparent. They also cover a lot of ground and capture some of the complications and complexities inherent in implementing healthcare research – and I use that term purposefully, since most if not all of them could be applied to many care situations, not just to dementia. In a longer workshop we might have gone on to explore how the challenges represented in the machine could be overcome or negotiated; as it was, I think the format was useful in bringing researchers and non-researchers together to think about, discuss, and identify the challenges of implementation.

On the Battle of Poitiers (1356) as a failure of implementation

There were three noteworthy English victories over France in the Hundred Years War. The best-known is the final one, the Battle of Agincourt (1415), but the earlier battles are just as historically and strategically interesting.

The second of these was the Battle of Poitiers (1356), in which a combined English and Gascon army led by Edward, Prince of Wales (later known as the Black Prince) defeated a much larger French force. The French had a number of apparent advantages: they were on home territory, they had many more men (probably around 16,000, twice the size of Edward’s force of around 8,000), and they were eager to drive the English out of France because English forces had been at large for years and had pillaged and killed widely. Yet the French lost, and lost badly: their King, Jean II, was taken prisoner and the Oriflamme, the sacred French battle standard, was captured. The defeat was met with surprise across France and Europe and marked a turning point in the status and authority of the French nobility.

Historians have proposed a number of reasons for this unexpected loss. I think that our current understandings of implementation can be used to explain some of the failures of the French army. I suggest three implementation issues were involved, and in relation to each we can see how Edward’s forces succeeded in making beneficial changes that the French army failed to enact.

[Image: Battle of Poitiers (1356)]

First, the English made longbowmen central to their army. Archers were an important contributor to the English victory at Poitiers, first firing upon the French cavalry head-on and then, when the knights’ armour proved too tough to penetrate, moving to one side and felling the horses with an attack on their flanks. The successful implementation here lay first in the English recognition of the power of the longbow and second in ensuring that the archers were effectively deployed in practice. The French also had archers and knew their power: they had suffered under the fire of English longbows at the Battle of Crécy ten years earlier. But they failed to integrate the archers into their fighting force as the English did, a failure that Barbara Tuchman ascribes to established social and cultural norms on the part of the French nobles: the French archers “were never properly combined in action with knights and men-at-arms, because French chivalry scorned to share its dominance of the field with commoners.” (153) On the English side this attitude was less dominant, and they were able to benefit from the ranged power of the longbow.

A second factor played out in the tactics adopted by the French during the battle. The English force was very short of water and had dug in on a hill. Marshal Clermont, an experienced general and one of the senior French nobles present, proposed blockading the English and starving them out. Edward feared that the French would try this, and the approach would have had an excellent chance of success. However, King Jean opposed the idea because it was at odds with the rules of chivalry. He chose instead to engage the English, and Clermont was among those killed in the fighting that followed.

Third, Edward had been able to organise his forces in a new way, with some semblance of what we might recognise as a military hierarchy: soldiers answerable to officers, and officers to more senior commanders (this is not strictly true, but it captures the general idea). The French had no such structure, and their commanders were at risk, as was often the case in medieval armies, from the fact that individual nobles and their followers might decide at any point that they had had enough and make a unilateral decision to leave the field of battle. With no notion of military discipline, and with troops loyal in the first instance to their feudal overlords, the turning tide of the battle eventually became a rout, with surviving French nobles and foot soldiers fleeing before the rampaging English.

These were not the only things that contributed to the French defeat but they were important. The French lost, in part, because: their prevailing culture did not permit the effective implementation of a new technology (longbows); sociocultural factors prevented them from acting in a tactically beneficial way in reaction to the course of the battle; and they were tied to a harmful and outmoded organisational structure.

If we turn to contemporary writing on facilitators and barriers to implementation, we find similar barriers recognised. Implementation science has been described as a discipline that focuses, in part, on “the discovery and identification of social, organizational, and cultural factors affecting the uptake of evidence-based practices and policies” (Luke 2012). The “evidence” available in the fourteenth century was not the type of evidence we might want to inform policy decisions today, but it seems clear that social, organisational and cultural factors were key aspects of the French failure to implement new ways of thinking and acting, and that this failure was a major contributor to a French military disaster. Then, as now, social, organisational and cultural factors are central elements of what we must recognise, consider, and address when implementing new practices or ways of doing things.

Plus ça change, plus c’est la même chose… (the more things change, the more they stay the same).

REFERENCES

My understanding of this topic has been informed by Barbara Tuchman’s outstanding A Distant Mirror: The Calamitous 14th Century (Knopf, 1978).

Luke, D. A. (2012). “Viewing Dissemination and Implementation Research through a Network Lens.” In Brownson, R. C., Colditz, G. A. and Proctor, E. K. (eds.), Dissemination and Implementation Research in Health: Translating Science to Practice. Oxford: Oxford University Press.
Laura Pickup
Research Fellow at the University of Exeter