On January 11th, David Shaywitz posted an excellent article on Forbes entitled “Are we the problem?”, in which he attributes many of the difficulties in pharmaceutical R&D to the gap between how much we know about biology and how much we THINK we know. In other words, to the hubris of researchers and managers across the industry.
DrugBaron agrees. Indeed, his piece chimed so well with DrugBaron’s view of the world that it triggered the assembly of a number of different ways in which hubris is killing pharma R&D returns.
But first, why is there such hubris? After all, returns on pharma R&D (or on investments in biopharma companies) across the sector as a whole have been disappointing compared to more mundane “old economy” businesses for more than a decade. Hardly the basis for assuming you are doing a great job.
The critical message when considering the huge investment in late stage development is “listen to the data, not the key opinion leaders”
The answer must lie in the noble nature of the endeavor: finding treatments that cure disease and improve people’s lives. More importantly, drug development is, and by its very nature always will be, a “hits driven business”. Those hits give a patina of success that rubs off on every participant in the marketplace.
The assumption that we understand more about the system than we do underpins the concept that extensive experimentation de-risks an asset at every step along the way. Spending money to learn things about the candidate seems like a good idea. But if we really can’t use that information to predict the outcome of later stage trials, then the spend was actually wasted. Once you accept that failure is the norm, and embrace it as an essential part of pharma R&D, you are on the way to reducing costs and improving returns.
At Index Ventures, we have built mathematical models of the early stage development process (up to proof of clinical concept). One of the biggest structural factors that determines the behavior of those models is the fraction of the risk in a project that can be discharged prior to the definitive test (in the case of early stage projects, the proof of clinical concept study in man). Typically, due to hubris, market participants assume that most of the risk in the project is knowable, and can be removed sequentially at lower cost than the definitive final study.
Such a model instructs you to optimize for better decision making at each small step along the way, gathering as much de-risking data as possible so long as it improves these step-wise decisions.
The poor return on capital derives not from a lack of hits, but from the knee-deep pile of failures on the cutting-room floor
But what if the majority of the risk in a discovery programme is not only unknown but actually unknowable (ahead of the final definitive test)? The difference between unknown and unknowable is vast and important – spending money allows you to learn the unknown, but the unknowable always remains out of reach.
Once you make that leap, there are some very interesting changes in the behavior of the model. Now it emphasizes turning the clinical card as cheaply as possible as the dominant factor dictating returns. Attempting to de-risk sequentially (elevating cost and timelines to supposedly improve decision making down the line) not only fails to improve returns, but actually harms them.
Hubris, then, harms returns: believing you understand the system better than you actually do makes you behave in the wrong way.
Sequential de-risking gives a feeling of comfort, because it takes away the “knowable” risk. However, if the “knowable” risk is less than half of the total risk, spending more to incrementally improve the removal of the “knowable” risk kills the returns. It would have been better to jump to the final (relatively expensive) definitive test in the cheapest way compatible with ethics and patient safety.
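The trade-off can be sketched in a few lines of arithmetic. To be clear, this is a hypothetical illustration, not Index’s actual model: the success probability, the de-risking spend and the cost of the definitive study are invented numbers, chosen only to show how the break-even point moves with the “knowable” fraction of the risk.

```python
# Hypothetical sketch of the sequential de-risking trade-off.
# p_good:   probability the asset ultimately passes the definitive test.
# knowable: fraction of the failure risk that cheap early work can detect.
# Costs are illustrative (say, $M); none of these numbers come from real data.

def cost_per_success_sequential(p_good, knowable, derisk_cost, definitive_cost):
    """Pay for de-risking on every asset; knowably flawed assets are killed
    early, and only the survivors go on to pay for the definitive study."""
    p_survive_derisk = p_good + (1 - knowable) * (1 - p_good)
    expected_cost = derisk_cost + p_survive_derisk * definitive_cost
    return expected_cost / p_good

def cost_per_success_jump(p_good, definitive_cost):
    """Skip de-risking and run the definitive test on every asset."""
    return definitive_cost / p_good

# If 80% of the risk is knowable, sequential de-risking wins:
print(cost_per_success_sequential(0.10, 0.8, 5.0, 20.0))  # lower cost per success
print(cost_per_success_jump(0.10, 20.0))

# If only 20% of the risk is knowable, the de-risking spend is wasted
# and jumping straight to the definitive test is cheaper per success:
print(cost_per_success_sequential(0.10, 0.2, 5.0, 20.0))
```

Under these assumptions, sequential de-risking beats jumping exactly when the de-risking spend is less than `knowable * (1 - p_good) * definitive_cost` – so the lower the knowable fraction, the more the “comforting” intermediate studies drag down returns.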
“What we need most is an approach that acknowledges our ignorance, relies more on empiricism, is cheap enough to allow more shots on goal, and fast enough to permit iterative learning” on Forbes
Working this way needs a change of mindset in entrepreneurs and project managers, and in the people who invest in them. If most early stage projects fail because they contained large amounts of “unknowable” risk that could not be mitigated prior to the definitive test, then failure becomes the expected outcome.
And then managers have to differentiate between “acceptable failure” (where the team did all the right things to an asset that was flawed for “unknowable” reasons) and “unacceptable failure” (where the project failed because someone did the wrong thing). DrugBaron has posted a detailed discussion of the challenges that distinction presents to senior management and investors alike.
Changing the mindset in early stage development is key for investors like Index who focus on taking preclinical assets to clinical proof of concept (and it would help big pharma too). But the failure to embrace this “humble” model of low-cost early development doesn’t even move the needle when it comes to explaining why the return on investment for pharma R&D is so poor (and the recent analysis of the class of 2012 FDA approvals does nothing to suggest that the return is going to start to improve any time soon). The single biggest issue for the industry is the failed Phase 3 programmes for drug classes like CETP inhibitors and anti-amyloids.
Here, billions in capital are poured into programmes that not only fail, but were always doomed to fail. This second, more pernicious, late stage failure has the same issue at its heart: hubris. Managers believe the hype around a mechanism more than they believe the early stage clinical data. DrugBaron called these “idea bubbles”: the belief around a mechanism becomes self-sustaining no matter what the data say.
So the critical message when considering the huge investment in late stage development is “listen to the data, not the key opinion leaders”. Hubris – the idea that we understand and can explain away any unattractive wrinkles in the data set because we think we know more than we do – now puts billions at very high risk.
Once you accept that failure is the norm, and embrace it as an essential part of pharma R&D, you are on the way to reducing costs and improving returns
But it goes beyond simple culture – that was the focus of the Forbes piece. There are some technical issues too. Despite their size and experience, big pharma are not that great at clinical trial design. There is a lot of room for improvement, particularly on the key Phase 2a proof of clinical concept trials, as DrugBaron noted previously.
The problem here is that pharma teams (most particularly in large companies) tend to get too seduced by the regulatory requirements for approval. As a result, regulation drives the process, rather than regulating the process. Phase 2a studies should be “safe, well-designed experiments in humans” not “miniature Phase 3 trials”. They do not need (and should not have) “primary end-points”. Primary end-points are needed for definitive (Phase 3) proof of efficacy, but they are the most pernicious concept in early stage development.
The arguments for change in Ph2a design are strong, but the changes are mostly not happening (yet). Why not? Back, I suspect, to the same problem: hubris. Pharma teams think they are better at clinical trial design than they actually are.
Frankly, it’s amazing that all this hubris pervades a business sector that has under-delivered on return on capital for a decade or more. How can everyone continue to believe they are so good, when objectively they are under-performing? Simply because drug development is a “hits driven business”, and the hits are far more visible than the many failures.
But the poor return on capital derives not from a lack of hits, but from the knee-deep pile of failures on the cutting-room floor. Realizing that failure is the norm, and keeping the capital wasted on eventual failures to a minimum, is the key to turning around pharma R&D – not by spending more to “improve decision making”, but by spending less: focusing only on those factors that really determine the potential for success of a programme, and only “back-filling” the rest (which is where most of the capital goes) once commercial (and not just regulatory) success is assured.
What if the majority of the risk in a discovery programme is not only unknown but also actually unknowable ahead of the final definitive test?
If you still think you understand biology and drug development after reading that, it’s time to go and take a cold shower followed by a long, hard look in the mirror.