Connecting Evidence and Policy: Some Tips on How To Do It… Wrong

I was recently invited to peer review an evidence synthesis produced by McMaster’s Health Forum for BC’s Ministry of Health. The review itself was very much in line with what one would expect: it followed all the usual processes, checklists and good practices. But the further I progressed in my reading, the more struck I was by how utterly useless such a document is doomed to be. Contrary to the opinion that producing such evidence synthesis documents is an effective way to connect policy and evidence, I believe it is mostly a mix of ritualized behaviour and snake-oil production.

Evidence Isn’t What You Think It Is

A big part of the literature discussing the connection between policy and scientific knowledge is centred on the concept of “evidence”. But despite using the term every two sentences, the field of knowledge translation pays far too little attention to the definition of that concept.

Obviously, there are plenty of insightful texts analyzing the way science can, under some circumstances, come to conclusive evidence about the nature of things or about the causal relations affecting the world (Arendt, 1967; Greenhalgh, 2010; Polanyi, 1974). But in its day-to-day practice, the field mostly rests on a simplistic view of evidence as some form of self-standing truth.

However, scientific evidence is only “evidence” as long as its validity and truthfulness can be questioned and defended. That is, the term evidence belongs to the world of science. As soon as a given scientifically derived piece of “evidence” enters debates outside of the narrow scientific field where it was produced, it changes in nature and becomes information. Its validity and truthfulness become a credo, not something that can be questioned and defended according to the principles of science.

In other words, the concept of evidence is highly context dependent. And this isn’t a relativist statement at all; it does not touch the ontological and epistemological debates upon which science rests. A piece of evidence is context dependent because one needs a significant level of expertise in the field it comes from to figure out its actual nature and applicability. And evidence is never self-standing because its very nature makes it almost totally contingent on the rest of the scientific field it comes from (Polanyi, 1974).

However, the mainstream perspective on evidence synthesis – which I would label the alembic view of evidence synthesis – disregards most of what is known about scientific evidence. Its foundation is a reification of the concept of evidence as some self-standing and portable entity, present in low concentration in the scientific literature. The synthesis effort is then conceived as the way in which this raw material is processed, the unwanted residues triaged out, and the pure evidence eventually extracted in the form of a synthesis document.

Not only is the synthesis process built on a misguided reification of what evidence is, it is also conceived as a highly mechanical and technical business. There are simple rules and checklists and processes that ought to be dutifully followed. And the more the synthesis process is conceived as being squarely governed by clear guidelines and rules, the easier it is to view it as a technical skill that does not require much content expertise. Some push this view to the point of arguing that anybody trained to conduct evidence synthesis ought to be able to synthesize evidence on any topic. And in the McMaster document I reviewed, the view was definitely pushed all the way.

Mary Poppins and Organizational Slack

What is then to be done with that nectar of pure evidence brings us to the second problem with the use of evidence-synthesis documents subcontracted outside of the policy-making environment. It almost feels as if the underlying model of evidence use were one where you only had to open the bottle of alembic-derived evidence nectar, sprinkle a few drops on the policy decision table, and the magic would operate.

Unfortunately, I have never heard of any instance of this Mary-Poppins-model-of-evidence-use occurring in real life. On the contrary, a very significant amount of skill, expertise and time is needed to build a meaningful connection between science-derived knowledge and policy-making (Cairney & Oliver, 2017; Prewitt, Schwandt, & Straf, 2012). To figure this out, it is important to understand that policy choices deal with various types of uncertainty, many of which are not amenable to being resolved by scientific knowledge (Peterson, 1995). But more than anything, almost all policy-making deals with complex systems. Any choice made will connect to and impact numerous phenomena, actors and outcomes. Because of this complexity, even when there is reliable science-derived information to work with, the use of that evidence isn’t straightforward (Dobrow, Goel, & Upshur, 2004). It will require navigating the political games involved, excellent forecasting skills regarding the behaviour of interdependent systems, the capacity to adapt and adjust along the way, and so on.

And what is fascinating is that a lot of the sophisticated and complex knowledge needed to do so will often have been deliberately discarded during the synthesis process. All the nitty-gritty details that one needs to understand and integrate to make sense of the science are also invaluable for building the connection between science and policy. At a very practical level, I believe that the resources needed to connect science and policy come down to the presence of a sufficient amount of organizational slack (Bourgeois, 1981) and the availability of internal expertise. Those are certainly not things that a few pages of a not-too-good summary written by someone without any content expertise are going to provide. Actually, most of the time, even an amazing summary written by a true expert wouldn’t cut it either.

I would venture a rule-of-thumb principle here. An organization that doesn’t have the human resources (expertise and time) needed to conduct an in-house review of the available science-derived knowledge won’t have the human resources (expertise and time) needed to put to use any evidence synthesis outsourced to an external entity. The corollary is that, when it comes to evidence synthesis, if you can’t do it in-house there is no point in having someone else do it for you.

Policy Capacity and Outsourcing

And if you think that there must be “good evidence” supporting the current organizational fondness for evidence-synthesis documents, think again. The proof that such an approach benefits policy-making processes is anything but conclusive.

The general opinion, however, is that outsourced evidence synthesis can only help. The best-case scenario is that those documents provide some scientifically sound insights; the worst-case scenario is that they won’t make any difference. And this is where I disagree. I fear that our acceptance of outsourced evidence synthesis as a decent way to inform policy-making can have very real and detrimental effects on policy capacity.

First, there is a risk of policy being based on superficial understandings and misconceptions. That’s really the Dunning-Kruger effect at play. Secondly, in the long run, any organization or governmental agency that comes to accept that it is OK to have no organizational slack and very little in-house technical and specialized expertise, because it can always contract out according to its needs, is doomed. Yes, this might sound harsh, but there is plenty of good research in implementation science (the public administration strand), organizational science and political science supporting this view. And I might also add that all this only holds if one actually believes that evidence syntheses are meant for instrumental use in the first place. If the evidence-synthesis document was always meant to be a decoy to be used in a tactical or political way (Weiss, 1979), then there never was much hope for better policies to start with.

Initially, academics (like myself) were the ones who developed the idea that evidence-synthesis documents are a coherent way to foster “evidence-based” policy-making. It was then the selfsame academics who proceeded to study how effective their pet solution was. Many of them are also involved in institutional-scale retail of the stuff. And although this shouldn’t matter if the evaluation study designs were robust, one can still find lots of performativity in the field. But what really drives me mad is the fear that academics working on practical ways to connect policy-making and scientific knowledge might actually have legitimized a perspective that makes scientifically sound policies even harder to achieve.

 

Arendt, H. (1967, February 25). Truth and Politics. The New Yorker. Reprinted in The Portable Hannah Arendt (Penguin, 2000).

Bourgeois, L. J., III. (1981). On the Measurement of Organizational Slack. The Academy of Management Review, 6(1), 29-39.

Cairney, P., & Oliver, K. (2017). Evidence-based policymaking is not like evidence-based medicine, so how far should you go to bridge the divide between evidence and policy? Health Research Policy and Systems, 15(35).

Dobrow, M. J., Goel, V., & Upshur, R. E. G. (2004). Evidence-based health policy: context and utilisation. Social Science & Medicine, 58(1), 207-217.

Greenhalgh, T. (2010). What Is This Knowledge That We Seek to “Exchange”? The Milbank Quarterly, 88(4), 492–499.

Peterson, M. A. (1995). How Health Policy Information Is Used in Congress. In T. E. Mann & N. J. Ornstein (Eds.), Intensive Care: How Congress Shapes Health Policy (pp. 79-125). Washington, DC: American Enterprise Institute.

Polanyi, M. (1974). Personal Knowledge. Chicago: The University of Chicago Press.

Prewitt, K., Schwandt, T. A., & Straf, M. L. (Eds.). (2012). Using Science as Evidence in Public Policy. Committee on the Use of Social Science Knowledge in Public Policy, National Research Council. Washington, DC: The National Academies Press.

Weiss, C. H. (1979). The Many Meanings of Research Utilization. Public Administration Review, 39(5), 426-431.
