Billions are spent discovering new medicines every year – even with the knowledge that most projects (and even many vastly expensive Phase 3 trials) will fail. But the aim is noble, the prize for success high, and despite debatable return on capital, pharmaceutical R&D has grown exponentially over the last three decades.
The presumption behind this is that we need new medicines to improve clinical outcomes for patients. This, in turn, assumes that we make something approaching the best use of the medicine cabinet we already have.
Is there any evidence that we do that? After all, clinical trial evidence addresses only very narrow questions – did a particular group of patients do better when treated with a drug than with the standard of care, for example? But that provides no information about whether the group that should receive the drug should be bigger, smaller or altogether different, whether from a clinical outcome perspective or a cost-effectiveness perspective.
Even the additional layer of cost-effectiveness assessment applied by bodies such as NICE and IQWiG in Europe, or the payors in the US, asks a broader question only in so far as it compares different medicines with a focus on cost versus benefit – and it can still only work with the available clinical trial evidence.
The problem is simple: complexity. Treating any disease is a complex process. And complex means more than just difficult to understand: complex is also a technical term for a system whose behaviour cannot be predicted from the behaviour of its sub-processes. In other words, you cannot work out the best way to treat people with a particular disease by optimizing individually the diagnostic and therapeutic components of that treatment.
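To see what that means in practice, consider a toy sketch (the benefit function and every number in it are invented purely for illustration, not a model of any real disease) in which two interacting decisions are tuned first one at a time, then jointly:

```python
# Toy illustration: when two decisions interact, optimizing each in
# isolation (the reductionist approach) lands short of the joint
# optimum. The benefit function and all numbers are invented.

def benefit(diag, treat):
    # Hypothetical benefit of a diagnostic threshold and a treatment
    # intensity; the 6*diag*treat term is their interaction.
    return 2*diag + 2*treat + 6*diag*treat - 4*diag**2 - 4*treat**2

grid = [i / 100 for i in range(101)]   # search each decision over [0, 1]

# Reductionist: tune each component with the other held at its default
diag_local = max(grid, key=lambda d: benefit(d, 0.0))
treat_local = max(grid, key=lambda t: benefit(diag_local, t))

# Global: tune both decisions together
diag_best, treat_best = max(
    ((d, t) for d in grid for t in grid),
    key=lambda pair: benefit(*pair),
)

print(f"component-wise optimum: {benefit(diag_local, treat_local):.2f}")  # ~1.02
print(f"joint optimum:          {benefit(diag_best, treat_best):.2f}")    # 2.00
```

With the interaction term present, optimizing each component against a fixed default captures barely half the benefit available when the two decisions are optimized together.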
Put simply, the reductionist approach of clinical trials means we have, for the most part, no idea how efficiently we use the medicines we already have.
And when we dig a little deeper, the information that is available suggests precisely the opposite: that we are strikingly inefficient at using the tools available today, from both an outcome and a cost perspective.
DrugBaron draws a parallel between the complexity of the human organism and that of the healthcare system, and asks whether the tools of systems biology could and should be applied to healthcare delivery. The result of such a paradigm shift could be dramatically improved clinical outcomes from a lower healthcare spend – something that research into new medicines has consistently failed to deliver.
One of DrugBaron's favourite interview tools for prospective science undergraduates applying to Cambridge University was an electron micrograph of an atherosclerotic plaque. Its principal advantage was that it was virtually impossible to know what it was (unless you work on the cellular ultrastructure of vascular diseases which, it's safe to say, most school leavers do not). Working out what it is from the visual clues is a proper intellectual challenge. The reason it is so hard, of course, is that if you look at a complex object with too much resolution you completely lose sight of the bigger picture.
Science more generally suffers from the same problem: the paradigm that dominated 20th Century science was reductionism – the principle that you could learn about the properties of a complex system by reducing it to its component parts and processes and then understand those in isolation.
For some things, this reductionist approach works quite well. You can understand how your washing machine works by understanding the function of each of its components: the motor drives the drum, the drum mixes the water with the clothes and the detergent, the control panel regulates the motor and the valves and so on.
But some things are just too complicated to be understood this way. Human biology is one of them. From the 1980s onwards, it became clear that many systems could never be understood from even a perfect description of their component processes – a discovery dubbed “chaos theory”. Over time, the term “complex system” took on a specific definition, referring not just to a complicated system but one that could not be understood by taking a reductionist approach.
Out of this realization that reductionism had its limits grew the field of systems biology. The “omics” revolution has been built on the insight that a proper understanding of the behaviour of a complex system, whether a single cell or an entire organism, requires measurement of all (or at least as many as possible) of its components simultaneously.
Systems biology became possible in part because the tools (such as gene chips, 2D gel electrophoresis, multiplexed ELISAs and LC-MS) became available to make many parallel measurements simultaneously at an acceptable cost; but also through the development of statistical methodology to cope with such large datasets.
When faced with a complex system, first you have to realize that reductionism won't work, then you have to find ways to acquire global datasets, and finally you have to adopt tools that query the resulting large datasets to provide meaningful answers.
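To give a flavour of that final step, here is a minimal sketch, with random numbers standing in for a real omics matrix and principal component analysis standing in for the wider multivariate toolkit:

```python
# Minimal sketch of querying a 'global' dataset: principal component
# analysis (via SVD) compresses thousands of simultaneous measurements
# per patient into a few axes of variation. The data here are random,
# purely to show the mechanics.
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_analytes = 100, 5_000    # hypothetical omics-scale matrix
X = rng.normal(size=(n_patients, n_analytes))
X -= X.mean(axis=0)                    # centre each measurement

U, S, Vt = np.linalg.svd(X, full_matrices=False)
explained = S**2 / (S**2).sum()

scores = U[:, :2] * S[:2]              # each patient reduced to 2 numbers
print(f"top 2 components explain {explained[:2].sum():.1%} of variance")
```

The point is not the particular method but the shape of the exercise: thousands of parallel measurements, interrogated together rather than one at a time.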
DrugBaron has already commented on the potential (and the pitfalls) for adopting these ‘big data’ approaches in clinical trials – and stressed that if you want to understand the effect of a drug on a disease you need to abandon the sacred symbol of reductionism, the primary end-point, and instead measure as many responses to the intervention as possible.
But why stop there? Turn the lens and reduce the resolution one more step, and you realize that the clinical trial itself is a reductionist paradigm.
A clinical trial answers only a very limited question: did the intervention benefit the kind of people entered into the trial? Even very large Phase 3 trials treat only a selected population. Worse still, few trials compare different treatment options to determine which would be best – most compare either to placebo or to a standard of care that may have been established a decade ago. Perhaps most worrying of all, when medical and surgical options co-exist for treatment of the same condition, it is even more difficult to decide the best choice for any given patient.
To illustrate the problem, consider the prevention of heart attacks (which, together with cancer, represents the major public health challenge for Western healthcare systems). The big issue here is that most of the people at risk are undiagnosed, and therefore not actively engaged with the healthcare infrastructure (at least not for cardiovascular problems).
For sure, the system identifies high-risk sub-populations (such as those with diabetes or hereditary dyslipidemias, those with symptoms such as angina or shortness of breath, and even those who have already had a heart attack). These people can be at very high risk of having a heart attack, and so properly require interventions.
But what fraction of all heart attacks occurs in these sub-groups? The answer, worryingly, is less than half.
More than 50% of all heart attacks occurring in East Anglia (the region of eastern England, with about 5 million inhabitants, that we study) occur in patients who have never seen a doctor about their heart.
In effect, our healthcare system misses more patients with heart disease than it catches.
By definition, treatments such as statins (which, clinical trials tell us, do cut the risk of a heart attack by 30-40%) can only be used on those who have been identified as needing them. Which means that any improvement in the therapeutic performance of interventions like statins will have no effect at all on the majority of heart attack sufferers.
Even this simple example highlights something you cannot tell from the clinical trial data: improving our diagnostic capability, so that people at risk of a heart attack can be identified and treated with the effective medicines we already have, will yield a far greater reduction in heart attack rates than an equivalent improvement in the efficacy of our therapies.
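A back-of-envelope calculation makes the point concrete. Treating population-level benefit as the product of diagnostic coverage and treatment efficacy, and taking illustrative numbers that loosely echo the figures above (under half of at-risk patients identified, a 35% relative risk reduction from statins) plus a hypothetical 50%-effective 'better drug':

```python
# Back-of-envelope model (illustrative numbers only, not the East
# Anglia data): prevented events = events x coverage x efficacy.
EXPECTED_EVENTS = 1_000   # heart attacks in the population, untreated

def prevented(coverage, relative_risk_reduction):
    return EXPECTED_EVENTS * coverage * relative_risk_reduction

baseline = prevented(coverage=0.45, relative_risk_reduction=0.35)

# Scenario A: a (costly) better drug, efficacy 35% -> 50%, same coverage
better_drug = prevented(0.45, 0.50)

# Scenario B: better diagnosis, coverage 45% -> 90%, same old statins
better_diagnosis = prevented(0.90, 0.35)

print(f"baseline:         {baseline:.0f} events prevented")   # ~158
print(f"better drug:      {better_drug:.0f}")                 # 225
print(f"better diagnosis: {better_diagnosis:.0f}")            # 315
```

On these invented numbers, doubling the diagnostic reach of existing statins prevents more heart attacks than a substantially better drug confined to the same minority of patients.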
Such global analyses also throw up some surprises when we look at cost-effective use of the available resources. Bypass grafting and stenting are both effective interventions to decrease the risk of heart attack in high-risk patients (so clinical trials tell us). But they are expensive procedures – it is difficult to extract the real cost of performing the procedure in either the UK or the US, but the price paid for a coronary artery stenting procedure in the US is about $8,000.
In isolation (that is, considering just the patients referred to a tertiary centre for possible stent placement) they are even cost-effective interventions. They reduce the later health care costs for those patients by a greater amount than the cost of the procedures themselves.
But in the bigger picture, where the aim of the healthcare system is to reduce heart attacks in the population as a whole, you could get twice as much benefit from closing down the angiography suites, performing no stenting procedures, and using the money saved to provide generic statins to the whole populace.
The real shock is that patients who undergo angiography and are found to have severe blockages in all three major coronary arteries are at only ten-fold higher risk of having a heart attack in the next three years than the average man in the street of the same age with no symptoms whatsoever.
Of course, 10-fold excess risk sounds like a lot (and it is, if you are one of them!) but only a tiny fraction of the population are ever referred for angiography – and only a portion of those turn out to have severe disease. This small number of high-risk individuals then consume a disproportionately high fraction of the resources used to prevent and treat cardiovascular disease. Even if the interventions they receive are completely effective they can only prevent a small fraction of the heart attacks that occur across the population as a whole.
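As a rough sense-check of that trade-off, consider the budget arithmetic (the $8,000 stent price is the figure quoted above; the generic statin cost per patient-year and the procedure volume are assumed round numbers for illustration):

```python
# Rough budget comparison (illustrative only). The stent price comes
# from the figure quoted above; the statin cost and procedure volume
# are assumptions, not figures from the article.
STENT_PRICE = 8_000            # USD per procedure (quoted above)
STATIN_PER_PATIENT_YEAR = 50   # USD, assumed generic price

stents_forgone = 1_000         # hypothetical annual procedure volume
budget = stents_forgone * STENT_PRICE

patient_years = budget // STATIN_PER_PATIENT_YEAR
print(f"${budget:,} funds {patient_years:,} patient-years of statins")
# $8,000,000 funds 160,000 patient-years of statins
```

Whether that swap really delivers twice the benefit depends on the real event rates in each group; the sketch only shows the orders of magnitude in play.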
If closing down angiography suites world-wide and dishing out generic statins like Smarties sounds too radical (and it probably is), that should not distract from the central point: examining the efficacy of each diagnostic test and intervention independently leads to a grossly inefficient system if the objective is to reduce heart attacks to the maximum extent possible.
The reality is that doctors and patients are met with myriad decisions about the choice and order of tests and interventions to apply in response to any given set of circumstances. These decisions have to encompass safety, tolerability, efficacy and cost (at a minimum). Once you start thinking about trying to make all these decisions rationally for the whole population simultaneously, the sheer magnitude of the task becomes apparent.
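Even a crude count conveys the scale. With just a dozen candidate tests and interventions (an arbitrary, illustrative number), the space of ordered care pathways runs to billions, before safety, tolerability and cost are even weighed:

```python
# Counting ordered care pathways: choose any non-empty subset of n
# candidate tests/interventions and apply them in some order.
# Illustrative only; real pathways have far more degrees of freedom.
from math import factorial

n = 12   # hypothetical number of available tests and interventions
pathways = sum(factorial(n) // factorial(n - k) for k in range(1, n + 1))
print(f"{pathways:,} possible ordered sequences")   # 1,302,061,344
```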
Let's be clear, then: evidence-based medicine is a wish, not a reality.
The evidence we do have relates to the performance of tests and interventions assessed in isolation, derived from clinical trials based on a reductionist paradigm. That gives a warm, comforting feeling – because we know that whatever step the doctor takes is "locally optimal". What we do not really know is whether the big picture of how healthcare is delivered to the population as a whole is anything close to the "global optimum". The example taken from our cardiovascular disease dataset suggests that it is a very long way from perfect.
There is a solution. We urgently need to bring the skillsets of systems biology to the management of healthcare delivery – to christen the era of 'healthcare-omics'. In reality, even biology was slow to exploit the power of multivariate statistics – process engineers and financial whizzkids got there first. It's not surprising that big data methodology found its first applications in predicting stock market trends, providing its pioneers with the easiest way to monetize their developing capabilities. Similarly, using big data methodology to optimize factory operations yielded tangible financial gains.
But healthcare is a massive component of national budgets – and it's growing rapidly. Surely, then, it is ripe for a dose of global optimization?
Undoubtedly, it's the biggest sector of the economy yet to be reformed by system-wide thinking. Undoubtedly, the gains are massive, both in terms of seizing control of runaway costs and in improving clinical outcomes for patients. The tools are there, so why the reluctance to apply them?
The primacy of the doctor.
In no other industry is the incumbent manager given such power to control the delivery of the process in which they participate. It is laughable to assume that the manager of a chemical plant producing feedstocks for the paint industry, for example, would be allowed to retain current practice if, demonstrably, there was a way to cut costs in half while improving the purity of the product.
But there is a presumption that no ‘mere mathematician’ armed with the same computer models that revolutionized financial markets and manufacturing industries alike could possibly know better than even a humble primary care physician as to how to treat your grandmother.
The sad truth, though, is that doctors resemble the interviewee staring blankly at the electron micrograph: they have a resolution problem. Doctors have a deep and valuable knowledge of the specific – but almost no appreciation of the general. That is not meant as a criticism: it is simply inevitable. When systems are complex (in the technical sense), it's just not possible to understand how they work by understanding the component processes.
Unless and until both the public at large and the medical profession embrace the concept of global optimization (and admit that the geeks with the computer models are not to be derided as 'pen-pushers', somehow inferior to the mighty 'scalpel-pushers'), its huge economic and clinical benefits will be denied to us all.
And that represents a huge challenge. We have all benefited, through a rising standard of living over the past several decades, as financial and manufacturing industries have been optimized using these tools. But we benefited without understanding, or for the most part even knowing, what was going on. Healthcare is different: no-one will permit radical changes driven by computer models they neither trust nor understand. Establishing 'healthcare-omics' as the guiding principle behind healthcare delivery is going to require a lot of education for the masses and the medics alike.
Make no mistake, though. It will come. And when it comes, it will deliver bigger gains in the health of the nation than all the billions spent on pharmaceutical R&D worldwide in a decade – at a price lower than we pay today. Not to mention huge financial rewards for those who deliver these efficiency gains to the industry. Put like that, it makes you wonder what we are waiting for!