The pharma industry, with decade-long timelines and multi-billion-dollar R&D budgets, is particularly sensitive to the cost of progressing projects that should have been killed.
Occasionally failure is perfectly obvious, and the decision to kill a project is trivial. But much more often, there is no certainty whether you are killing a future blockbuster or saving millions in wasted resources. “Kill or continue” is a critical decision that often has to be made with little hard evidence – and made multiple times over on each project as new material data is accumulated.
The efficiency of this decision – recognizing impending failure as early as possible – is key to improving capital efficiency in the pharmaceutical industry. Yet structural factors, as much as the quality of the team or the data that’s available, unwittingly bias this key decision in favour of continuing – often with disastrous consequences.
The key to improving capital efficiency in the pharmaceutical industry is to minimize the asymmetries between a decision to kill a project and a decision to continue investing in it
A “kill” decision, crystallizing a loss (of resource and of face), is inherently less attractive than continuing. Even if the outlook is weakening, there is always a possibility things can be turned around – and even if the project fails eventually, that will probably happen on someone else’s watch. Moreover, a “kill” decision requires active bravery, while continuing is usually the default.
Eliminating these systemic biases, making the “kill” and “continue” options feel equally attractive, and making “continue” an active decision like “kill”, is therefore the top priority for management in this industry, whether in small biotechs or global pharma companies. Progress is being made, but it is a tough problem and radical changes are still required.
Failure is a fact of life in the drug development industry. Each step of the tortuous path from lab bench to market approval is designed to de-risk the steps that follow, but by definition there is never a certainty of success until the finishing line is crossed. Were it possible to be sure in Phase 2, for example, that a drug would work safely in Phase 3, then Phase 3 would be superfluous. Throughout the life of the project, the goal is to eliminate as much risk as possible for the fewest dollars, so that the final lengthy and costly studies, which expose the largest amount of capital to risk, are as safe as possible.
This sequential de-risking paradigm is therefore built around a series of “kill or continue” decisions every time new material data is obtained
While it is obvious that the accuracy of these decisions will depend on the quality of the data (and DrugBaron has recently highlighted the particular importance of Phase 2a study design in this respect), there is another more insidious factor at play: the structure and culture of the organization can introduce a major bias into the “kill or continue” decision irrespective of the data quality.
The power of these cultural factors is all the greater precisely because these decisions typically have to be taken with very incomplete datasets. And because the decision is always grey, the deeply hard-wired human response to avoid crystallizing losses – in the hope that more investment can recover the already-sunk costs – leads to projects continuing when the unbiased decision would have been to take the loss.
The pressure to avoid the kill (or at least kick the decision into the long grass) is different in small and large companies. In a small biotech, particularly one with only a single programme, the consequences of a kill decision for the decision-maker are likely to be swift and severe. Closure of the company, with loss of job and financial security, would undoubtedly be painful. Others will be affected too – colleagues who have shown you complete loyalty and support will lose their jobs as well. They will see you as a failure. It’s hard to think of a harder decision to take, short of turkeys voting for Christmas. As long as the failure is not absolute, and there is a slim chance to rescue the situation, the easier decision is likely to be to continue.
In larger companies, the pressure is less immediate but only slightly less intense. The widely held perception is that promotion and prestige come from association with successful projects. Seeing your project killed will be an indelible stain on your record. Being the author of your own doom, therefore, seems like a poor career move. After all, in larger companies in particular, staff move from one project to another every couple of years – the objective has to be to leave the project ostensibly in rude health (without worrying if there are a few too many skeletons in the cupboard; they will be someone else’s problem). The incentive structure therefore heavily favours continuing as long as there is any plausible way to do so.
But progressing these “zombie projects” (projects that should have been dead a long while ago, but are still mysteriously shambling forward) kills productivity by consuming capital that is ultimately unproductive. And because the costs of a drug development programme are steeply graded from small to large, any delay in killing a doomed project translates directly into declining return on capital.
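A toy calculation makes the point concrete. The sketch below uses entirely illustrative phase costs (hypothetical numbers, not industry figures), but because spend is steeply graded, every phase of delay in killing a doomed project multiplies the capital written off:

```python
# Toy model: the cost of delaying a "kill" decision by one phase.
# All figures are illustrative assumptions, not industry data.

# Spend per stage is steeply graded from early to late ($M).
phase_costs = {"preclinical": 5, "phase1": 15, "phase2": 50, "phase3": 250}

def capital_wasted(kill_after):
    """Total capital sunk into a doomed project killed after stage `kill_after`."""
    phases = list(phase_costs)
    return sum(phase_costs[p] for p in phases[: phases.index(kill_after) + 1])

for phase in phase_costs:
    print(f"Doomed project killed after {phase}: ${capital_wasted(phase)}M wasted")
# killed after preclinical:  $5M wasted
# killed after phase3:     $320M wasted
```

Under these assumptions, letting a zombie project shamble one phase further does not add a little to the loss – it can multiply it several-fold.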
Making the “continue” decision carry as much weight for the individual as it does in an asset-centric biotech company also requires a change to the compensation structure in large pharma companies
The major task for R&D management, then, is how to remove these external biases from the critical “kill or continue” decision. And this was the central topic of discussion of the panel on ‘Organizing Research and Development’ at the recent Index Forum in Florence in August 2012.
There are two parts to the equation: how to make “kill” and “continue” feel equally painless to the decision maker is only one half of the solution. The other side is to make a decision to “continue” an active decision with consequences for the individual in just the same way that a “kill” decision clearly has. Listening to the panel at the Forum, it was clear that organizations, big and small, are making progress dealing with the first issue but, for large organizations at least, the second issue remains problematic. Finding a solution may require radical change.
Making “kill” a less painful call requires a culture shift – the organization needs to embrace failure (without, of course, actively encouraging it). Critically, though, all failures are most definitely not acceptable. Embracing failure, then, requires a strategy to distinguish “acceptable failure” from “unacceptable failure”. Acceptable failures need to be praised rather than criticized, while unacceptable failures should still fall with full force on the shoulders of the perpetrators.
What is an “acceptable failure”? In this context, it is a project where the underlying technology turns out not to have the properties hoped for it, but the team did all the right things to demonstrate that quickly and cheaply. The project may have failed, but the team behind it did well. In marked contrast, “unacceptable failure” is when the team chose the wrong clinical trial design that yielded equivocal data when a properly designed trial would have given a clear signal – one way or the other. The difference is between asset failure versus execution failure.
“You take the risk, I will take the blame” – Paul Janssen, founder of Janssen Pharmaceutica (now part of J&J) and one of the true visionaries of the 20th-century pharmaceutical industry
It’s perfectly obvious that the team operating on an asset do not deserve censure for discovering early that the asset lacks the potential it was hoped to have when virtually nothing was known about it. The problem is that differentiating between the two types of failure is much harder than it looks. Biology is particularly complex, and there are almost always “excuses” for poor execution – scientists in particular are good at constructing biological explanations for human frailties, and R&D management have a hard task distinguishing asset failure (which always gets the blame) from execution failures within the team.
Large companies are trying hard to implement a culture of recognizing individuals for taking difficult “kill” decisions. Ultimately, they aim to make acceptable failure not merely tolerated but actually a badge of honour. It’s almost like a medal for bravery. And because people are less likely to lose their jobs when a project fails (since the organization usually recycles the project team members into other new projects), such a culture change is all that is actually required to make “kill” as attractive as “continue”.
In small biotech companies, though, a posthumous medal is poor recompense for suddenly finding yourself out of a job. That’s why Index Ventures have gone a step further: Index Drug Developers (IDDs) are part of the platform, able to shelter safely under the Index umbrella when a project fails, until they can be recycled into the next opportunity. This takes away the threat of losing financial security, and empowers the decision-makers to take tough decisions. It also stops them “throwing the baby out with the bathwater” – just because an asset failed to live up to its promise, why would you want to lose a team that did everything correctly? This approach takes away the pain from calling time on an “acceptable” failure and reduces the asymmetry between the “kill” and “continue” options.
Optimally implemented, these changes make “kill” and “continue” equally palatable to the decision maker – and ensure that the decision is based only on the available data.
Acceptable failure: where the underlying technology turns out not to have the properties hoped for it when virtually nothing was known about it, but the team did all the right things to demonstrate that quickly and cheaply
But these changes do not deal with the other asymmetry between the options: “kill” is an active decision with immediate consequences, while “continue” is a passive, default option with big consequences for the organization (if large, expensive and ultimately doomed late stage trials are eventually initiated) but few if any consequences for the decision maker themselves.
This second asymmetry, however, is one of the driving forces that led to the development of the Index “asset-centric” investing model. By giving each decision maker only a single asset to work on, the decision to “continue” suddenly acquires a much greater significance. Since the real financial upside of working in a small company comes with success (however that is defined, whether as an exit sale to a larger company, an IPO or a product launch), there is now a real opportunity cost for the individual in progressing an asset.
As soon as an asset falls below a threshold of attractiveness – the person best placed to judge it now sees its chances of success to be lower than the chances for a new, poorly characterized asset – then they are incentivized to shout “kill” loudly and quickly. The IDD platform takes away the immediate negative impact of the decision, while opening up the possibility of working on a new asset with a greater chance of success. The result is better alignment between the impact of the decision on the individual and on the investor.
There cannot be the same “skin” in the decision if that decision-maker is simultaneously operating on multiple assets, as in a conventional “pipeline play” biotech company. That individual is better off keeping three or four arrows in his quiver even if one of them has only a small chance of success – it might work out OK and earn him a big prize, but if it doesn’t, all the loss is borne by the investor (and the arrow is now occupying a ‘slot’ in the quiver, keeping out a straighter arrow). The incentive now is very clearly on the side of “continue” as long as there remains a plausible path for doing so.
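A back-of-the-envelope expected-value sketch makes the asymmetry concrete. All probabilities and payoffs below are hypothetical, chosen only to illustrate the shape of the incentive:

```python
# Why the "pipeline play" decision maker and the investor rationally disagree.
# All numbers are illustrative assumptions.

p_success = 0.05            # slim chance the weak asset pays off
prize_to_individual = 1.0   # personal upside on success (arbitrary units)
return_to_investor = 200.0  # investor's return on success
cost_to_investor = 50.0     # capital burned on failure

# The individual captures upside but bears almost none of the downside...
ev_individual = p_success * prize_to_individual
# ...while the investor funds the losses.
ev_investor = p_success * return_to_investor - (1 - p_success) * cost_to_investor

print(f"EV to individual: {ev_individual:+.2f}")  # +0.05 -> "continue"
print(f"EV to investor:   {ev_investor:+.2f}")    # -37.50 -> "kill"
```

However small the individual’s expected upside, it is positive – so “continue” always wins for him, even while the investor is funding a losing bet.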
If the asset-centric model promises to improve return on capital in the Venture space, how might you gain a similar competitive advantage in a large company environment?
Many global pharmaceutical companies are attempting to mimic the best parts of the asset-centric model, dividing their R&D activities into SWAT teams each focused on a single product candidate. In many ways, these teams resemble a venture-backed asset-centric company in composition, with the large parent organization providing the same “umbrella” as the Index IDD platform (ensuring teams struck by “acceptable failures” can be recycled painlessly, retaining good talent to work on multiple sequential assets).
But without more radical change, the decision to “continue” remains too passive, and lacks consequences for the individual decision maker. One approach being adopted by forward-thinking companies is to tie the individual to their project for the lifetime of the project. This introduces an “opportunity cost” for career progress – attach yourself to an ultimately unproductive asset and the chances of shining brightly, and hence progressing rapidly up the career ladder, are diminished. And because looking good can only occur when an external validation event occurs, which may take years, the individual thinks twice about continuing a programme that will eventually fail to cross that line – with them still at the helm.
But making the “continue” decision carry as much weight for the individual as it does in an asset-centric biotech also requires a change to the compensation structure. In many large companies, too much of the reward derives from the financial success of the organization as a whole (success that was mostly built on R&D activities a decade or more previously in any case), through share option deals and generous salaries and benefits. A better model takes another leaf out of the biotech book and pays a “milestone bonus” to the team as each key de-risking milestone is passed.
Now the individual decision maker has to make the same active decision to “continue” a project as his biotech doppelganger. He is part of only one project team, so he has only one chance to earn a “milestone bonus” – which could be very significant at a personal level. With such an opportunity cost, he needs to be confident that the project really has a better than average chance of making it to the next milestone.
Such an arrangement has an additional benefit: if moving to the next phase triggers payments to the team, the managers will look more stringently at the decision to continue. Anything that adds stringency to the assessment of the decision to continue will pay dividends (perhaps literally) in improved capital productivity for the organization as a whole.
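With purely hypothetical numbers again, the opportunity cost created by a per-milestone bonus is easy to sketch: the decision maker’s best personal strategy flips to “kill” as soon as his honest estimate for the current asset drops below the base rate for a fresh one:

```python
# Sketch: a per-milestone bonus makes "continue" an active bet.
# Probabilities and bonus size are hypothetical assumptions.

milestone_bonus = 1.0   # personal bonus if the project passes the next gate
p_weak_asset = 0.10     # decision maker's honest read on the current asset
p_fresh_asset = 0.30    # average chance for a new, uncharacterized asset

ev_continue = p_weak_asset * milestone_bonus   # stay at the helm
ev_kill = p_fresh_asset * milestone_bonus      # recycle onto a fresh asset

print("continue" if ev_continue > ev_kill else "kill")  # -> "kill"
```

This is exactly the alignment the asset-centric model achieves for the Index Drug Developers: the threshold that matters to the investor now matters to the individual too.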
The structure and culture of an organization are as important as data quality in the efficiency of the “kill or continue” decision
And the downside of such a radical shift in compensation model for pharma companies? Surely, such an arrangement just incentivizes everyone in the organization to call intermediate milestones positive, thereby triggering payment of valuable bonuses. At best that just increases costs, and at worst it actually hampers the “kill” decision. Not at all. The key is to make the final decision to progress entirely separate from the recommendation of the team. In biotech, that split occurs naturally: the decision is taken by an external organization – a larger company (or public investors) must actually buy the company to trigger a return to the team. Ultimately, the full benefits of the asset-centric biotech model can only be realized by pharma companies if they create an internal market for their own projects.
None of this is easy – but the rewards for the first organizations to get it right will be enormous. The big experiment with the asset-centric model in venture investing has been running for five or six years, and we will soon have the evidence for its impact. The next big challenge for pharma is to realize these same advantages in their organizations – before the painfully low capital productivity of R&D under their current model kills the weakest.
This article is based loosely on the ideas discussed at the ‘Organizing Research and Development’ panel, moderated by DrugBaron, at the recent Index Forum (an invitation-only gathering of senior figures in the biotech and pharmaceutical industry), held in Florence in August 2012.