DrugBaron’s mantra is simple: drug development is too expensive to make economic sense. Because it’s impossible (not just difficult) to know whether a given asset is going to make it to the next value inflection point, the numbers only add up if you can ask the value-adding questions as cheaply as possible.
Paying for more information to guide decisions only makes sense if it materially increases your chances of reaching that pay-out. And since most additional activities make you more comfortable without making success more certain, they should be eliminated from the plan – or, rather, delayed until a positive outcome (approval and sales) is more certain.
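To make that logic concrete, here is a stylized expected-value sketch of the argument (the symbols and the break-even condition are an illustrative framing, not figures from DrugBaron):

```latex
% A stylized expected-value sketch (illustrative framing, not from the article).
% Let p be the probability the asset reaches the next value inflection point,
% V the value realised at that point, and C the cost of getting there.
\[
  \mathrm{E}[\text{return}] = pV - C
\]
% Spending an extra amount c on "comfort" data that leaves p unchanged
% strictly lowers the expected return:
\[
  pV - (C + c) \;<\; pV - C
\]
% The extra spend only pays for itself if it lifts the success probability
% by at least c/V:
\[
  (p + \Delta p)\,V - (C + c) \;>\; pV - C
  \quad\Longleftrightarrow\quad
  \Delta p > \frac{c}{V}
\]
```

On this framing, any activity that adds comfort but leaves the probability of success essentially unchanged is pure drag on the expected return – which is exactly why it belongs after the key read-outs, not before them.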
Understanding the right WAY to cut costs is as important as embracing the need to cut costs in the first place
Driving costs out of the business in this way is central to the asset-centric investing paradigm born at Index Ventures and espoused by DrugBaron.
But watching different management teams operating in the asset-centric world, under extreme pressure to minimize costs, has revealed two very different strategies for achieving those cost reductions – one of which works, and one which definitely doesn’t.
Drug development is a complex process with a large number of separate tasks, both in series and in parallel, which come together to deliver an approvable drug product. There are, therefore, many different paths through this maze of tasks. Some things have to be done before others, but there is considerable latitude in the order and priority (and, in some cases, whether to do certain tasks at all).
Broadly speaking, the costs are graded from small at the beginning, when the risk is highest, to large at the end, when almost all the risk has (supposedly) been discharged.
But the costs do not increase smoothly. Two points mark a step change in the cost of the tasks that follow: entering the clinic and embarking on registration studies. Each of these decision points marks at least an order-of-magnitude increase in costs.
Today’s drug development paradigms were honed in a time of “gold rush” economics – optimized for speed not cost
As a result, the gatekeepers for these decisions are the key components of the drug development “machine”. As DrugBaron has noted before, these are the preclinical evidence for safety and efficacy, and the Phase 2a proof-of-clinical concept studies that provide confidence (at least in theory) that a full-blown registration study will be successful.
The importance of good design and sound interpretation of Phase 2 studies has occupied these pages previously. Good preclinical evidence of safety and efficacy is no less important, though. While the gold standard would be prior clinical evidence (for previously validated targets), or genetic evidence from human “experiments of nature”, the reality is that such evidence is often lacking for first-in-class targets.
Well-designed and well-executed animal studies of safety and efficacy are therefore critical. The value of such studies is often questioned, but the cold, hard reality is that, despite their limitations, there is little alternative to relying on them.
If these studies are the critical gatekeepers of the whole process, then they deserve appropriate deference when assembling an operating plan. As DrugBaron has noted previously, the tendency should be to collect MORE data in these studies rather than less. Phase 2a proof-of-clinical-concept studies absolutely should not resemble mini-scale Phase 3 studies with a single primary end-point. Animal efficacy studies should incorporate as much safety data as possible (and as many alternative measures of efficacy as possible). This extra data really does improve the quality of decision-making at these critical interfaces between cheap(er) and more expensive development phases.
If preclinical safety and efficacy data and proof of clinical concept in man are the twin pillars underpinning a successful programme, then they should be protected from the cost-cutting axe. In fact, the drive to acquire additional data to make the judgments based on these studies even more secure might lead to an increase in costs (although, as DrugBaron has noted, clever study designs can often deliver more data even at lower cost).
Under extreme pressure to cut costs, then, there are two very different ways to behave
The sensible option is to preserve expenditure on these key activities – choose the optimum design and work out how much that will cost you (as opposed to allocating a budget for the activity and then designing studies to match). Then cut all other costs as savagely as possible. The only other studies that should survive the knife are those that are absolutely essential to permit the key studies to be performed. Anything else that can be delayed until the key studies read out should be delayed (and therefore cut from your budget).
This “all or nothing” cutting strategy focuses the resources on the critical studies that drive the key decisions.
The alternative is to cut more gently across all the tasks on the list, irrespective of their importance.
Watching management teams take this approach quickly reveals the problem: critical studies soon become unfit for purpose. Animal models with too few animals per group, end-points chosen for their cheapness rather than their reproducibility, and inclusion criteria for early clinical trials broadened merely to simplify recruitment are all common examples of cost-saving measures that do more harm than good.
All of these follies save a small amount of cash, but cost a great deal in terms of the quality of the key decision points. Studies that have to be repeated, or followed by another study to clarify something that could have been properly understood the first time, inflate rather than shrink the budget.
Rigorous application of the right kind of selective cost-cutting can yield high quality proofs of clinical concept for $20 million a shot
The real challenge for investors (in small companies) and senior R&D leaders (in larger ones) is how to drive down costs without denting quality. In other words, how to induce selective cost-cutting by eliminating (or, more likely, delaying) whole studies whose contribution to de-risking the programme is negligible, while maintaining (or even increasing) spend on the few decision-critical studies in animal models or proof-of-concept clinical trials.
The solution – or at least part of it – is to ask hard questions of your drug developers. Make them justify why each piece of information they want to generate will materially improve the quality of the key decisions. If the justification is not there, cut the whole study from the plan.
Critically, forbid the following justifications for any study:
“But the regulators need to see that data”. Regulators regulate our process; they do not specify what we must do. If there is a clear justification for delaying a particular study (or not doing it at all), then make the case to the regulator. In early development, the safety of trial subjects is the only regulatory concern – convincing yourself, not the regulator, of the likelihood of success in later trials is the only purpose of early stage development. Leave the box-ticking to the end-game, when the risk is (or should be) close to zero.
“We always do it that way”. Culture is a powerful tool for constraining behavior in large organizations. But it also slows the implementation of much-needed change. Unfortunately, the way we do drug development (particularly in large organizations) grew up in the gold-rush economics of the 1980s and early 90s. Finding great drugs was easy (just like digging up gold in California a century earlier), and the important thing was doing it quickly rather than doing it cheaply. As a result, the development paradigm became optimized for speed rather than cost. Old habits die hard, and the remnants of those days are still deeply embedded in pharma culture.
“If I don’t do that, I will look stupid if it all goes wrong later”. Fear of failure, and the perceived need to do everything possible to mitigate risk, lead to a great deal of waste. The damage is done when investors or senior managers perform a post mortem after a failed study and, with the benefit of 20/20 hindsight, find something that could have predicted the bad outcome had it only been done earlier. Such retro-analysis drives development teams to play it safe and do things whose real value in de-risking the programme never justifies the cost; rather than be criticized afterwards, they do it “just in case”.
DrugBaron has already stressed that investors and senior managers need to value “doing the right thing” rather than “getting a positive outcome”. In this instance, the right thing is avoiding expenditure on items that do not add enough value in de-risking the key decision points, and teams should be praised for eliminating such costs (by eliminating or delaying the associated studies) even if the asset fails in one of the key studies that remained.
The “all or nothing” cutting strategy focuses the resources on the critical studies that drive the key decisions
These changes can have big implications for the cost of development. Rigorous application of the right kind of selective cost-cutting can yield high quality proofs of clinical concept for $20 million a shot. At that price, investing heavily in early stage drug discovery and development projects just might prove attractive once again, and the flight to late stage that threatens the viability of the entire industry may yet be reversed before it’s too late.
Getting costs down to that kind of level is not easy. Doing it without damaging the critical decisions that underpin success is even more challenging. But understanding the right WAY to cut costs is as important as embracing the need to cut those costs in the first place.