IDEA Collider

IDEA Collider | David Grainger

Episode Summary

David Grainger is a biotech entrepreneur and pharma R&D blogger (@sciencescanner on Twitter). He is a co-founder and the Chief Scientific Advisor at Medicxi, and currently Chairman/CEO of several Medicxi portfolio companies (including Morphogen-IX, Divide & Conquer, Critical Pressure, Methuselah and Z-Factor) and Co-Founder of The Foundation Institute for 21st Century Medicine. David also led Medicxi's investment in Padlock Therapeutics (acquired by Bristol-Myers Squibb) and co-founded XO1 (acquired by Janssen Pharmaceuticals). Prior to Medicxi, David was a Venture Partner with Index Ventures for four years, having joined the life sciences team in 2012. Before joining Index, David founded Funxional Therapeutics and the outsourced drug developers Total Scientific and RxCelerate. In addition, he led an internationally recognised research group in Cambridge University's Department of Medicine, where he published more than 80 first-author papers in leading journals including Nature, Science and Nature Medicine. David is the Chair of the Translation Awards Committee of the British Heart Foundation. He ran a widely read blog on life sciences-related topics, DrugBaron, before moving his writing to Forbes.com, where he is a Contributor. David has over 150 patents and patent applications in his name, and holds MA and PhD degrees in Natural Sciences from the University of Cambridge. (Apologies for the AV challenges on this one… The Mevo camera is a remarkable unit, but there was some operator error here! A couple of audio spikes, and phone interruptions… But this was so wonderful a conversation that I didn’t want to go back and ‘polish’.)

Episode Notes

0:00 Defining innovation

01:00 Incremental innovation vs big changes

01:45 On designing back from the unmet need, and introducing innovation

(Interruption by a phone call)

02:54 Problem backwards vs solution forwards

03:55 On the ‘guided random walk’ and adoption of agility/serendipity (low validity environments in pharma)

04:45 Prediction, hubris and certainty in process

06:50 The stopping rule in drug development (07:30 interruption by a phone call)

07:50 Zombie projects

08:00 The ‘Keytruda story’ as ‘the biggest poison in our industry’

08:50 On ‘busters’ vs blockbusters

09:20 On breadth of exploration in Discovery / ‘pick the winners’ / ‘kill the losers’

11:05 The misaligned incentives that lead to decisions to continue - the ‘legions of zombies’

11:50 Spreading resource too broadly without good filters

12:20 On the development of better filters, and too much resource in the ecosystem

13:00 Does constraining resource lead to better outcomes?

15:00 On ‘Follow the Science’

16:30 On giving people the benefit of the doubt… Now what…?

(17:40 One more phone interruption - sorry! Leads to some audio spiking from here…)

19:00 On a disease like Alzheimer’s - pinning a tail on a large donkey

19:50 On ‘value signals’ in development

20:45 On hubris in selection of models

22:15 On allowing ‘the whole market’ to distort clinical development

25:00 How important are measures of innovation? The role of the incentive structure

25:30 On decision quality (and the distraction of ‘resources’)

26:30 How does more data improve decision quality?

27:00 On being successful or not being blamed for failure

29:00 On the feedback loop and its utility in pharma

30:20 What would a better incentive structure look like?

31:00 What do we mean by failure?

32:00 On the misattribution of error

33:30 The way we misuse language, biases, and the impact of language on ‘failure’

34:40 What are the most important lessons you’ve learned over time?

34:55 On the power of dissociating asset from infrastructure, idea from process

37:20 On the ‘organisation’ problem - separate nodes with a ‘project pilot’

38:20 On the translation of success in one therapeutic area into another - ‘process structures are not transferable’

39:15 On ‘retrenchment’ in major pharma, into fewer therapeutic areas

40:50 On the ‘nonsense’ of product profiling too early

42:30 ‘Instead of recognising you’re pinning the tail on a donkey, you think you’re aiming’

42:50 What drives David Grainger?

44:30 What is the role of tech and AI in early development?

45:00 What problem is AI solving?

45:30 Better predictions in a low validity environment

46:30 What kind of ‘training data’ would we use?

47:00 Unknown vs unknowable data

48:30 On which books David would recommend

50:30 What are David’s ambitions?

52:30 Does 2019 look very different than 1999?

Episode Transcription

https://www.youtube.com/watch?v=A4savqANzY8&t=8s

 

IDEA Pharma: IDEA Collider David Grainger

 

 

Mike Rea:              So, David, tell me a little bit more about -- one thing which I think is fundamental to all of these conversations, which is the definition of innovation. How do you define innovation?

 

David Grainger:      Well, I was fortunate, I think, to have a rather unusual education. I did both sciences and classics right through my school days. 

 

Mike Rea:              A proper old British education. 

 

David Grainger:      Yeah. Well, definitely from the days, many years before -- I was one of the last bastions of a classics education for a scientist. When I came to Cambridge University I really couldn't decide, initially, whether I wanted to do natural sciences or classics. So, with that background, innovation -- from the Latin novus -- clearly anything new would fall under the broadest definition of innovation. But much of that I've, perhaps rather pejoratively, called incremental innovation. So, just changing something a little: making it a little cheaper, a little bit more efficient. And I tend to exclude that from the definition of what you might call true innovation -- a step change, something completely new as a solution to a problem. And that's my definition of innovation. A big change. 

 

Mike Rea:              Yeah, a big change. A step change. And one thing that I think you mentioned is this kind of introduction of novelty to a system. How important is the introduction part of that? Because you know, I come from the commercial world. If you come up with an invention, that's nice. But unless you do something with it, you haven't innovated. 

 

David Grainger:      Absolutely. And that's absolutely right. Scientists in particular are very guilty of assuming that the technical component of the entire process is the only one. If we can solve the technical problem then we're fine. But it doesn't work like that. Unless we get that out into the hands of the users, unless we persuade them to use it, unless we demonstrate that it's actually useful in their hands, the innovation cycle isn't complete. The inventor who first thought, "Oh, I wonder if we could invent a new vacuum cleaner with no moving parts --" and worked out how to do that, actually hasn't solved the greatest problem at all, which is persuading people -- whether they needed a new one, whether they were prepared to pay for a new type of vacuum cleaner, whether it did actually perform better under real-world usage circumstances.

 

Mike Rea:              So, do you think it's important then that you're almost applying this kind of design of solving from the problem back into the solution?

 

David Grainger:      Yes, absolutely. I mean, I've written extensively about the difference between problem backwards versus where-we-are-today forwards. The danger with where we are today -- "I know how to do this, what can I do with it?" -- is that that does lead to a rather incrementalistic kind of approach. If you're going to make a step change you have to leave behind where we currently are, look at what problems need to be solved, and then scan the horizon for the kinds of capabilities that you might bring together, perhaps in totally different ways to the people who are developing them right now. The person with that technology in a university or in a small company might be thinking that it was useful for one disease, but in actual fact you can see that it could be the technological solution to a much bigger problem sitting over here.

 

Mike Rea:              Yeah, for sure. Actually, that's a sort of side tangent: I think a lot of the more interesting drugs on the market are the products of serendipity. Either they turned left at some point in development, or they were on [mark] and then found a more interesting niche. How important is that kind of agility and serendipity within the drug discovery and development process?

 

David Grainger:      Oh, I think it's absolutely essential. You can't just -- It's not a linear pathway, it's a random walk; a guided random walk would be a better way of looking at it. I mean, the key insight here really came from Daniel Kahneman, the Nobel Prize-winning sort of psychological economist, who described situations like drug development as low validity environments, which essentially means the data available to you doesn't allow you to predict with any confidence what the right way forward is. So, you have to keep trying things and abandoning things that don't look promising, and moving through --

 

Mike Rea:              Yeah. Which is interesting isn't it? Because I've long been of the view that prediction is hard, especially when it comes to this environment. But actually, a lot of our processes are currently set up based on prediction.

 

David Grainger:      Correct. Because that's the natural -- it's human hubris. People don't like to admit that actually they've got next to no clue what they're actually doing. You don't get very far in this world if you stand up in front of the boss and go, "Yeah, I'm running an early stage discovery and development program, and yeah, like everyone else I know probably one percent of the biology I need to know in order to be able to intervene with this system. I'm going to do some fairly random things and hopefully out of that something good will come up." And the next guy stands up and goes, "Well, I've got this process and I'm going to do this and then I'm going to do that and I'm going to do the other." Accountants and people from old economy businesses are much more comfortable with a process. They can be sold on the idea that there is much more predictive power than there is. And as a result, they design whole companies around those kinds of rigid, siloed operations in which you assume that if I do [inaudible 00:05:42] very, very well and I do toxicology very, very well and I do each node of activity very, very well in isolation, that somehow out of that I'm bound to end up at the right place. And because in drug development you never know whether you've actually been successful until the very end -- and the very end no longer means "Did I get it approved?" but "Is anybody buying and using this thing?", which may be not even seven or eight years from inception, it might even be 15 years from inception -- and because you've got no feedback at all about your relative success, you can keep kidding yourself that your predictive steps and your powerful nodes are actually delivering you an outcome, when in actual fact it was serendipity and random chance that eventually led to the blockbuster.

 

Mike Rea:              Yeah. The good drugs happen almost by accident, despite the companies [inaudible 00:06:31] because of them. I mean, we've written before about this problem with these stage [gate] -- these toll gate processes which are built on predictability. They're based on probabilities [of] technical success being typically 70 percent or 80 percent, because if they're lower, they don't get moved forward. But the record of success is way lower than that. So, there is a feedback loop which you know is wrong, currently, but no one's changing the system to reflect that.

 

David Grainger:      Well, I think it's reflected -- I think that the major issue with those kinds of systems is that they only stop things when there is powerful evidence to stop, as opposed to setting them up so that you only continue if there is a powerful reason to continue. And it sort of relates to the sunk cost fallacy. Where we are now, it probably took months to organize the team of people who were going to work on this, you've spent time developing [inaudible 00:07:26] plan, persuading the board to give you money. And now we're six months in and actually, things don't look terribly promising. But you think, "Well, it took a lot of effort --" and the beauty of biology is its infinite degrees of freedom. I can probably switch things around a little bit: "Maybe if we do this again; maybe if you do that." And that results in this overwhelming tendency for projects to shamble along. I've called them zombie projects; I think you've called them, "Yeah, I'm dead." They're carrying on despite the fact that nobody is terribly enthusiastic about the prospects of success.

 

Mike Rea:              And sometimes good things happen there. That was what happened to [inaudible 00:08:05] in essence, wasn't it? It hit the market despite the best efforts of a company that put it on the shelf for a long time.

 

David Grainger:      But those stories, though, are the biggest poison in our industry. Because you start hearing the one-in-a-thousand occasion when a false negative -- that is, something that should have been killed -- actually shambled through and turned out to be successful, as somehow a justification for keeping things going that don't look very promising. Actually, the vast majority of things that don't look very promising really are not very promising. And that mentality is super, super dangerous. Because what kills us in drug development efficiency is not the things that we didn't bother with, it's actually the things that we did bother with that turn out to be useless. If I get a drug approved -- I've called these busters rather than blockbusters -- because if I spend all that money getting a drug approved and then nobody wants to buy it, that is absolutely catastrophic. If I had something that was actually going to be useful but I just decided to leave it on the table in preclinical development, as long as there's an infinite supply of plausible things to try, I've actually not damaged my efficiency at all. 

 

Mike Rea:              Absolutely. That's interesting. So, one of the things that often doesn't get done right is the breadth of exploration in the early phase. Do you feel that pharma could be doing more to characterize what they have in their hands early?

 

David Grainger:      Well, it's interesting to think about this problem, because I think years ago there wasn't enough breadth. Because, if you believe in the predictability model, then you would have something which I think people have called 'pick the winners' -- which is you'd scout around, you'd have people looking at things, thinking very sagely, and then coming to the conclusion, "This is the best project. This one is going to yield a really good drug, so we're going to put all our resources on that and we're going to work really hard on that." And people tried that and discovered that actually, due to the fact that we can't really predict what's going on, because all the unknowns are so great, even the smartest sages couldn't pick the right things. Then you move to the other extreme, which I've called 'kill the losers', which is essentially that you would start everything -- at each stage all you eliminate are the things that are self-evidently rubbish. So, people present you a project that's based on incorrect data or incorrect assumptions, or they just didn't know something, and you know that that can't work -- kill it. Otherwise, everything else that looks plausible, give it a little bit of resource, move it forward a little bit and keep going. Keep pruning out the losers till eventually there's only one strand left, and hopefully you've got the strongest plant out of the collection. And pharma, I think, and [VCs] and biotech sort of moved to that model. Probably over the last decade we've seen a lot cheaper innovation. But the trouble is that unless you deal with the misaligned incentives that result in the kill-or-continue decision being biased towards continue, what you'll create is actually a legion of zombies. And in fact, that's what a lot of the early stage pharma opportunities are in danger of doing. If we look at -- I'm not singling them out specially, because this is true across the board pretty much, but something like the Johnson & Johnson innovation centers and JLabs are fantastic opportunities for increasing the breadth, because they rightly see that you've got to increase the breadth. But unless you have some powerful mechanism for then pruning out the things that you're starting, you're very soon overwhelmed in a tsunami of data that you have to try and analyze in real time to pick out the signal from the noise. So, yes, the signal's gone up, but so has the noise. So, your filter has to get better. And I think we're now in a phase of refinement in which people are accepting that the bread-on-the-water approach of just tossing a little bit of resource to a million things doesn't work unless you're in a framework in which killing everything is the default. 

 

Mike Rea:              And do you think our filters are getting better or are you seeing evidence of better filters around the industry?

 

David Grainger:      Unfortunately, we've entered a period of something of a macroeconomic boom, particularly in the U.S. over the last several years. And what happens is --

 

Mike Rea:              You said, "Unfortunately we've --"

 

David Grainger:     I say unfortunate from an innovation perspective. Because that has just resulted in, in essence, a return to the 'pick the winners' kind of approach. We're seeing the "get a really good-name team of people who have pronounced, with all the predictive powers that they have, that this is going to be a wonderful thing to do, and then we'll pile a hundred-million-dollar Series A into it." So, I actually think there's been something of a reversal. I think that the economic crash of 2007, 2008 actually favored the "let's try a little bit of this" approach. And then, of course, things are going to die because there weren't enough resources. If you take a garden and fertilize it heavily, everything will grow, weeds and all. We did see progress towards my utopia, and I think we've seen a heavy retrenchment of that over the last couple of years.

 

Mike Rea:              So, you're a believer in the idea of necessity being the mother of invention and --

 

David Grainger:      Yeah. I definitely am. Keeping people resource-limited makes them think much, much more carefully about what they're doing, and how that fits into the global objective. And usually, the objective for a scientist can be kept close to where they currently are. So, you're saying, "You'll get some more resources when you've achieved X -- the next little thing right in front of you." And that way you can lead a horse to water. You can essentially -- if you've got a solution backwards view of where you're trying to head, and you've got a team who are really talented at moving one step at a time forwards, by using resource limitation you can actually help guide them in a productive direction. Whereas, if you just load them up with money then it's somewhat unclear where they'll -- well, they'll end up in a different place. I'm not saying that there isn't room in the global innovation ecosystem for both kinds of models, but they have different outcomes. 

 

Mike Rea:              Yeah, for sure. Just want to loop back to this idea -- [basically you're] a scientist with this kind of mantra of 'follow the science' in discovery and development. How do you feel about pronouncements like that -- that that's the goal of an R&D organization -- to follow the science?

 

David Grainger:      Well, I don't think it is, actually. I think it's one of the pillars that you must have there, because high quality science is a sine qua non. If you can't do the science properly and you can't interpret results and you can't design experiments, you aren't going to get very far. But, to make that the driving force of an R&D organization misses the fact that the technological piece is only one of the pillars in order to be successful. 

 

Mike Rea:              Yeah, absolutely. And that would [accord] with my view. I think innovation actually is a process which involves a lot of people who don't think they're part of an innovation process -- like the manufacturing, like the distribution, like the marketing folks and a bunch of others involved in pricing and access and so on. That it takes a lot of people to get something introduced that's successful. 

 

David Grainger:     But I would say, interestingly, that part of the reason for that is very much because of the overemphasis of the technological hurdle in the early parts of the innovation process. And I would like to think that the services that you've perhaps provided to large companies, helping them with market access and commercialization strategy, shouldn't even be necessary. The way I approach early stage innovation is, I ask a question. If somebody comes to present to a [VC] an early pitch for a discovery stage project, for example, the first thing I'm doing is I'm just giving them the complete benefit of the doubt. I'm assuming that everything they say they can do, they'll be able to do. So, let's take all the technical and give them it. Now, that may sound bonkers, but: "Okay. Yeah. You can find a molecule that will do exactly that. Let's assume that it really does in humans what you say it will do in these animals that you've studied. Let's assume that your regulatory strategy is going to be approved and everything is going to work as well as you said." Now, does anybody want that thing? And if they do, how much would they be prepared to pay for it? And you'd be shocked how often people have got this complex technological solution that, when you grant them every technological success to a hundred percent of the degree that's possible, it's still not worth having when you've finished. And then, because people are doing that without applying that filter rigorously at the beginning, those things do get through to the end. And now, they need, you know, a lot of help in order to try and seek a home, a commercially successful home, for this rather unlovely thing that they created in the first place. Now of course, we may end up there because [inaudible 00:17:38] will not reach a hundred percent. It's obviously not going to be true that everything they said they could do they'll be able to do. This thing might work to 70 percent. I might not quite be able to remove that wrinkle, so the thing still tastes horrible, or is bright green with pink spots on it. It will not be the perfect entity that the discovery stage people described. And I'm still going to then need commercial support in order to get this rather unlovely thing I've created out. But if I start on a project where, having given you all the technology, the thing looks only marginally saleable -- and that's all the truer in early stage development, because I might be seven years away from the market. I mean, it might just beat the current standard of care; it might just be a little bit cheaper than we're selling things for now. But the world won't look anything like what it looks like now when any discovery stage project reaches the market. So, if I wasn't miles above the threshold, I can't take into account the fact that what I actually produce will be worse than I hoped. And of course, the bar of innovation that I've got to cross in order to be commercially successful will be higher than it is now. And so many times, those two pathways have intersected long before the thing ever gets to the end, and people aren't even monitoring that. So, at the moment when the inversion occurs: "Oh no. No. We'll spend another $10 million on more Phase 2 trials for this thing that is already not competitive with where we're going."

 

Mike Rea:              Yeah, for sure. And how do you feel that translates into this almost gravitational pull of, say, a therapeutic area like Alzheimer's? Which is so poorly described in terms of its biology, so cruelly described in terms of its pathology, or even the way that the disease segments. But it remains this kind of vision on the horizon for a lot of companies. How do you feel that that influences or distracts or pollutes the environment?

 

David Grainger:      I think it depends on your goal. The objective, for me, would be to create value in a non-predictable world. So, I need some intermediate value signals, and there are none in something like Alzheimer's. I would liken working in a disease that does have those more predictable intermediates, to show I'm on the right path, to building a company. You're taking risks, but you sort of know where you're going. Whereas [going] for Alzheimer's is rather like playing the lottery. I mean, the prize is huge, but what's the chances that the six random numbers that I just put into a pill are essentially going to be the right six random numbers? And today, the approach to Alzheimer's, I think, is largely 'pin the tail on the donkey' -- on a very, very, very large donkey. 

 

Mike Rea:              And faith in some biology that, well actually, probably isn't terribly well characterized in terms of its -- how much of a role intervention plays in the biology.

 

David Grainger:     I think humans, again, are rather poor. It's hubris, again, sadly. And I'm not suggesting I don't suffer from hubris like everyone else; I'm sure I do. But I can also see the problem there, and [inaudible 00:20:51] just beautiful. Was there ever a need for the damn things? No. Were they ever going to likely be able to deliver incremental value over what was already there? No. And [inaudible 00:21:02] hasn't -- we're proven right, now. But as I was saying, just because it's a beautiful scientific and biological story -- and that's where you go back to your point about, do we want to let science -- should R&D be a scientifically driven phenomenon? Absolutely not. Because you'll get a beautifully scientifically elegant, utterly pointless thing if you do that. 

 

Mike Rea:              Yes. Again, not breaching any confidentiality, but we spoke to one of the companies with a PCSK9 at the time, and the idea that you would use it for something that other drugs can't do -- that was an interesting hypothesis. Using them for something that was defined by the market size, which is really where they wanted this -- they saw the [inaudible 00:21:47] as a kind of place ready for diving into. And of course, a lot of what they did in the clinic was then based on that. 

 

David Grainger:      Absolutely, it was. Yeah. 

 

Mike Rea:              Which is, what money does [inaudible 00:21:58] leave on the table? It wasn't based on medical and [unmet] need, if you like. 

 

David Grainger:      No, no. 

 

Mike Rea:              Because there's a bunch of things you could do with PCSK9 that are interesting medically. None of which have been studied yet. And I think that the likelihood is that they will continue [inaudible 00:22:11]

 

David Grainger:     But there are endless examples of that. What [inaudible 00:22:13] have done with [inaudible 00:22:14] is a case in point. They've gone for the whole market because [inaudible 00:22:19] was a fantastically successful drug with a huge market. In reality, there are 20 percent of people who are not well served by [inaudible 00:22:29] because it's actually a prodrug which is activated by a particular [CYP] enzyme that 20 percent of the population have a low-activity isoform variant of. And had [inaudible 00:22:39] decided that they would get 100 percent of the 20 percent by doing the clinical trial appropriately, in my belief it would be a far bigger drug in terms of area-under-the-curve sales than it has been attempting to somehow be an expensive version of [inaudible 00:22:53] with, at best, marginal benefits. And they've been running 22,000-person trials trying to prove a 0.1 percent superiority in benefit event rates versus bleeding risks. That may be there, just about, but certainly isn't enough to sustain a tenfold differential in price versus a generic. Yet, within the subset there is a real and genuine benefit. And PCSK9s again -- if you're starting to [look in] people with -- there were subsets of people for whom [inaudible 00:23:28] were not the right answer, and strong clinical trial evidence there would have generated a much greater market uptake. And actually, it's easier to expand a market which exists than it is to just create a large one. If I want to create a new restaurant brand, I'm not going to take on McDonald's by simply -- even if I had all the money in the world -- just opening up a new venue in every location with my new brand. It would be vastly easier to find the right formula in Derby and iterate on the branding, the service offering, till I've got that right. And once I've conquered Derby, well then everywhere else sort of falls --

 

Mike Rea:              For sure. Well, we've discussed the Lipitor case study, for example. Lipitor came fifth in its class, also explored unmet need areas -- clinical unmet need areas -- and it was cheaper than the ones that were already on the market, and clearly better. Which they'd largely discovered after the fact rather than before the fact -- it wasn't necessarily set up to be better at all the things that everyone else was good at. 

 

David Grainger:      Well, the same thing has happened exactly with Eliquis, the factor Xa inhibitors. Initially set up as a straight head-to-head comparison with warfarin, it actually doesn't look like a viable drug. It's not sufficiently better across the entire indication where warfarin is used to justify the price differential. But over time, with use, with further trials, people have now defined ways to use these Xa inhibitors that do generate a sufficient clinical benefit to justify the price, and then we've got an $8 billion a year drug as a consequence.

 

Mike Rea:              Yeah. So how important, David, do you think definitions and measures of innovation are? Do you think about them in your day-to-day work, or are they a byproduct, if you like, of an approach to work that you apply?

 

David Grainger:      So, I'd definitely go about it the other way around. Which is, I believe that what happens is pretty much down to the incentive structure that you put in place. So, I focus on the model, the general approaches we take, to ensure better decision quality and better, more efficient use of resources. Because those are the two principal categories. People say, "Why have we not got more innovation?" If you did a quick straw poll, most people would say, "There aren't enough resources." Which, in a sense, is a silly answer, because there are never enough resources; there could always be more. And innovation has to be in competition with things like paying pensions to the elderly and looking after the sick and disabled. There's a constant competition for resources, so there'll only ever be so much resource for innovation. So, the question is not whether there is enough, it's how efficiently do I use whatever is available to me? And therefore, we are looking for frameworks to generate efficient use of resources: high quality decisions with the least amount of money spent. So, when we ask, "Shall we generate some more data?" -- well, more data will probably mostly improve our decision quality. But that shouldn't be taken for granted. As you do more experiments and get more information, if they're not carefully targeted, the noise could go up faster than the signal, and more data might actually result in poorer decision-making quality. But even if it is data that will improve your decision, depending on how much it costs you may still be better off not paying for that information before you take the decision. Which, in turn, comes from, "What am I trying to achieve? Am I more interested in whether I can be successful, or in not being blamed for failure?" And therefore, my approach is not a metric measurement, but trying to set incentive structures that cause people to behave in the right way. Because I actually believe that most pharmaceutical R&D organizations are not set up in a way that incentivizes the right kind of behavior. 
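
A rough sketch of the "paying for data" point above -- a back-of-the-envelope expected-value-of-information calculation, with every number invented purely for illustration: a piece of data can genuinely improve the advance/kill decision and still not be worth buying.

```python
def value_of_information(p_good=0.3, value_good=100.0, cost_bad=40.0,
                         test_accuracy=0.8, test_cost=15.0):
    """Net benefit of buying extra data before an advance/kill decision.

    Advancing a good project earns value_good; advancing a bad one wastes
    cost_bad. A test with the given accuracy can be bought first for
    test_cost. All numbers are illustrative assumptions only.
    """
    # Best you can do without the test: advance everything, or nothing.
    ev_no_test = max(p_good * value_good - (1 - p_good) * cost_bad, 0.0)

    # With the test: advance only on a positive result.
    p_pos = p_good * test_accuracy + (1 - p_good) * (1 - test_accuracy)
    p_good_given_pos = p_good * test_accuracy / p_pos
    ev_if_pos = p_good_given_pos * value_good - (1 - p_good_given_pos) * cost_bad
    ev_with_test = p_pos * max(ev_if_pos, 0.0)   # on a negative result, stop

    return ev_with_test - ev_no_test - test_cost  # net benefit of buying the data

print(round(value_of_information(), 2))                 # positive: worth buying
print(round(value_of_information(test_cost=40.0), 2))   # same data, now too expensive
```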

 

Mike Rea:              Yeah. And I'm going to come back to that. So, talk to me a little bit about -- you use the phrase 'decision quality', which I've written about before. What do you mean by decision quality, and how do you look for a higher quality decision?

 

David Grainger:      Well, the problem, of course, is -- this is why metrics are hopeless -- because by definition I'm only going to find out whether I was right, in a long multi-step process, on the fraction of those where I took the decision to keep it going. Wherever I take a decision to kill something, I never find out whether that was a good quality decision or not. I published the Monte Carlo simulations five or six years ago, because the only way to decide whether the decision quality is improving is at the global level. Am I doing better? Am I producing more good stuff? -- whatever your definition of good stuff is. For a pharma company that's going to be sales per dollar of R&D spend. The reality is, they're producing slightly increasing sales on the back of massively increasing R&D budgets. So, the decision quality is deteriorating by an objective measure of the system as a whole. And if you run Monte Carlo simulations of the system as a whole using "Did the system produce more good stuff per dollar?" and then you start to vary some of the parameters related to decision quality, you can start to understand the relationship between decision quality and expenditure.
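
A minimal sketch of the kind of whole-system Monte Carlo simulation being described here -- the stage costs, base rate of genuinely good projects, payoff and decision-accuracy parameter below are all invented for illustration, not taken from the published simulations:

```python
import random

def simulate_pipeline(n_projects=10_000, decision_accuracy=0.6, seed=42):
    """Toy Monte Carlo model of a multi-stage R&D pipeline.

    Each project is either truly 'good' or 'bad', which the decision maker
    cannot observe directly. After each stage a kill/continue decision is
    taken; decision_accuracy is the probability that the decision matches
    the truth. All numbers are illustrative assumptions only.
    """
    rng = random.Random(seed)
    stage_costs = [1, 5, 25, 100]   # cost of each successive stage (arbitrary units)
    p_truly_good = 0.05             # assumed base rate of genuinely good projects
    payoff_good = 600               # assumed value of a good project that reaches market

    total_spend = 0.0
    total_value = 0.0
    for _ in range(n_projects):
        good = rng.random() < p_truly_good
        alive = True
        for cost in stage_costs:
            if not alive:
                break
            total_spend += cost
            correct = rng.random() < decision_accuracy
            # A correct decision continues good projects and kills bad ones.
            alive = good if correct else not good
        if alive and good:
            total_value += payoff_good
    return total_value / total_spend    # "good stuff per dollar"

# Vary decision quality and watch output per unit of spend change.
for acc in (0.5, 0.6, 0.7, 0.8, 0.9):
    print(acc, round(simulate_pipeline(decision_accuracy=acc), 3))
```

Varying the decision-quality parameter and watching the "good stuff per dollar" figure move is the in-silico version of the global measure being discussed.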

 

Mike Rea:              And do you think that the feedback loop is useful as it currently stands? Or, as you described it, you've got this -- if it's too long between outcome and --

 

David Grainger:      Yeah. [Why don't they] do it in silico? I can't use the feedback loop of -- I couldn't take over a large pharma company, change the way that R&D is run and then use real metrics of sales per dollar of R&D spend, because the supertanker is only going to turn around 15 years from now. So, I have to believe -- it's about predictability. But do I believe that in silico modelling of the whole system is going to give me a better guide of how to design that R&D operation than a gut belief in predictability and the importance of science and having the very best [inaudible 00:29:45] unit and the very best toxicology unit? So, it comes down to a belief. Can I evangelize that idea better than the guy who says, "No, no, no; you need to spend more money getting better [inaudible 00:29:59] people and better toxicology people because you don't make a mistake there"? I can't prove I'm right. I can model it. I can talk to you about it. It would take 15 years to --

 

Mike Rea:              So, given the importance of the incentive structure you mentioned, we must incentivize people correctly to do the right things. Because I think people are working against the incentives that they have today within their organizations. What would be some of the things that you would do to change the incentive structure, either at a small scale or a larger scale of development?

 

David Grainger:      Well, I wrote a piece on the DrugBaron blog entitled Managing Failure is the Key to Success. And one of the biggest problems, from an incentive perspective, is what we do when something fails. There is a tendency to attribute the failure to the people who were actually doing the experiments that proved that it doesn't work. In reality -- a good example is in venture capital. If we look at a project and I think, "Yeah, this is great. I'm going to put in the money," and the people -- give me an [inaudible 00:31:14] "Yeah, that looks like a sensible operating plan." And then they do it and then it doesn't work. And in a very short period of time, they've shown me that the whole idea was absolute rubbish. Who's going to suffer here? It was my faulty decision, because I took the decision [inaudible 00:31:31] trying; I agreed that the operating plan was correct. They went away and executed like geniuses, and now they're left with no job and a reputation as people who were working on something that doesn't actually work. And I get to carry on investing in the next thing, as long as I make a reasonable number of successes. 

 

Mike Rea:              As long as one of them comes [off]. Yeah.

 

David Grainger:      And we have to change that mentality. We have to recognize [that] a crack team who's capable of showing that the thing which you thought was marvelous isn't, is actually what we're searching for. And we mustn't attribute the failure to them. So, it's a misattribution of the decision quality error. Which decision was wrong -- mine? Or did they choose to do the wrong [inaudible 00:32:15] discover that the design of their experiments was all wrong, and they've wasted the money and we haven't proven whether that thing worked? We might have discovered that they're making up the data. We might have discovered that they just make stupid mistakes and keep dropping the cell culture flask. There are lots of ways in which you can fail. And we don't, at the moment, pretty much distinguish between a small biotech company that went under because they had a whole load of ham-fisted individuals, or one which went under because the entire concept on which they were working was wrong. But they are worlds apart. And we have to find ways -- and those ways can be in terms of the recognition that's given to people, in terms of our use of language. These are small nudges, but just using the same word, failure, to describe those two outcomes results in -- people act according to those things. We don't need to change how we pay them. But with just simple changes in the language and the way in which we assess that, we can celebrate good failure.

 

Mike Rea:              And it's interesting, because I've noticed that as well in the way that these things get reported. "Phase 2 trial flops", for example. Or it might have done exactly what it was intended to, which is to find out if it works in indication A versus B. But that language isn't even subtle in many cases, is it?

 

David Grainger:      No. And I mean, this is a problem for the 21st-century media -- [words] do drive certain behaviors. And I think in the worlds of diversity and equality, we've become very aware of the power of language to illustrate the inherent biases that we all carry around with us. For some reason, we're not recognizing some much, much more glaring misuses of language in the media, and indeed, we help them. We might not use the word flop, but the press release might say, "This has failed to meet its objective." Well, no it didn't; it failed to meet its p-value. That doesn't mean it failed to meet its objective. It's determined whether this worked [in that] or not. And I think if we don't get it right, what hope is there that the people who are then reporting on our ecosystem and amplifying those words might get it right? 

 

Mike Rea:              Definitely. So, over the time that you've worked in biotech, what have been the standout things that you've learned, with a view to applying those things more rigorously in future?

 

David Grainger:     I think the biggest learning was the power of dissociating asset from infrastructure, idea from process. We're sitting here at our accelerator because five, six years ago we saw the problems associated with having infrastructure in a company. If you've got a drug development platform that's expensive and difficult to build -- you had to recruit some good people, you had to buy some equipment, you had to get a rental lease with somebody or other that might have five years in it -- huge effort involved. And once you've made all that effort, you're much less likely to go, "Actually, you know what, I'm not really keen on this," after six months of data coming through. So, you are further reinforcing that kill/continue decision in favor of continue. And then I learned, having seen that simple idea, that in fact there are other huge advantages to outsourcing or separating the infrastructure from the asset. We've already discussed one: it makes it clear who got it wrong -- the process provider or the idea provider. So the blame, in a sense -- and all the benefit -- is appropriately allocated. It also makes the cost of every action you take explicit. If I'm using my own infrastructure that I'm paying for anyway, then if I make an animal study more complicated, add an extra couple of groups to it, there's no -- I can't tell how much extra that's costing. It's just that when they're all too busy, then we'll recruit another 20 people and my costs just go out of control. Once I'm outsourcing it, I'm actually asking the process provider, "Well, how much does it cost if I add this?" And then I'd stop to think, "Good god, really? Another £10,000 just for that? That information is in no way going to improve my decision quality by £10,000 worth, so I'm just not going to do it." So, all of those factors have taken this sort of initial seed of an idea -- that having the concept and the process separate would be beneficial -- and proved that it's true in many ways that I probably hadn't anticipated when we started. We're still learning. 

 

Mike Rea:              It's interesting because I think there's a hypothesis in The Innovation Illusion book which talks about the organization itself. Because organizations are set up to organize current sets of activity, not necessarily the right kinds of activity. And that's sort of what you're describing here, which is that as soon as you start building structures you end up with what the structure was designed to achieve.

 

David Grainger:      Absolutely. It concretes it in. And that's why it's actually better to have those nodes that we were describing -- sort of an anatomy and a toxicology expertise, whatever -- and have them separate, like different pieces of a jigsaw puzzle. And then you just have to have what we would call a project pilot, essentially somebody in a helicopter at 10,000 feet who is reassembling those capabilities in the appropriate order and in the appropriate way, with the appropriate set of instructions, for this particular asset. And that way, there's nothing concrete about the path that has to be taken. You're making the decision on the most efficient, the lowest cost for the best decision quality, at every stage. Because every possibility is open to you. You've just got this constant array of pieces and you can pick and choose from them as you want. 

 

Mike Rea:              It's interesting, because I think the one thing that we've seen recently in pharma has been this emergence of companies that have become successful in one therapeutic area suddenly believing that they can apply the same fundamental beliefs or decision structures to another, and finding that they wallow, or they flounder, or they just [don't] bring anything through.

 

David Grainger:      And bizarrely, they've responded to that by essentially retrenching. So that even the world's largest pharmaceutical companies are now essentially operating in only two or three therapeutic areas. And that's because they've realized that process structures are not transferable. And instead of just getting rid of them and effectively becoming an open structure that doesn't impose a structure, they've said, "Okay, this structure actually works in oncology, so we're an oncology company."

 

Mike Rea:              So, what do you think about retrenchment along the lines of, say, what GSK have done in saying, "Well look, we're going to focus on these therapeutic areas to the exclusion of all else." I mean, that's this year anyway. What they say next year might well be very different. Do you think that's a sensible approach or the opposite of a sensible approach?

 

David Grainger:      Well, I suppose it depends -- like everything, it depends on something. It depends on your future vision for the company. If you believe that you are going to do what I think most big pharma companies should do -- which is admit that they are becoming a late stage development and commercialization organization -- then it makes sense to have your areas of expertise. That's where you've got your marketing forces, that's where you understand the market dynamics, you understand patients in those areas. That becomes a real competitive advantage in those circumstances. If you want to really innovate, if you want to discover drugs any longer and do early stage development, then it's absolute madness. Because you want to be working on the best projects, the best opportunities that you come across, wherever they are. And actually, how I would go about discovering or developing a drug in the early stages is exactly the same no matter what the indication is. I don't need to -- And because I should be looking for a project, if I'm starting discovery, that is so far above the current cutoff -- this idea that I might need my marketing organization to tell me whether this early stage development thing is going to be competitive is comical. I mean, they'll have me generate some kind of product profile or something for this thing that doesn't exist yet. I could write anything I like into the product profile, and then they'll do some kind of analysis and say, "Oh well, it would sell for $10,000." "Oh no, that project isn't worth carrying out." If I need to go through that exercise -- if that's how my filter works -- if I might need to ring somebody up and ask them about the market dynamics in disease X, then this is no good. It's already gone. 

 

Mike Rea:              Now, it's interesting, because I think, as you say, there are so many myths written down onto those pieces of paper which just get baked into the whole system of the eNPV nonsense and everything that goes with it. Even though we know it's wrong all the time, it's a bit like that old famous general quote, isn't it? "I know they're wrong, but they're useful."

 

David Grainger:      Well, they are. Yes, it is. It's this idea of process, again, and analysis based on data that is either meaningless or has such wide error bars on it that the outcome could range from -- the value of any discovery project ranges from minus its development costs to an almost infinite number. And nothing you can do now, while you don't know what the properties of the thing are, is useful. But people feel comfortable if they can carry out some of the kinds of --

 

Mike Rea:              People knowing what they're doing next week. Yeah, absolutely.

 

David Grainger:      And the danger then is, those metrics become -- they actually become part of the decision tree and then you start making bad decisions. Because instead of recognizing you're pinning a tail on the donkey, you think you're actually aiming. 

 

Mike Rea:              Yes. What a fantastic quote. So, tell me, in terms of you and your history and the choices that you've made, what is it that drives you personally within this space?

 

David Grainger:      In many ways now, I think, it's become an opportunity to understand the innovation mechanism in R&D. In other words, not just within the companies to make them successful, but I've got a menagerie; I've got different companies running in different ways.

 

Mike Rea:              So, you’ve got some experiments running?

 

David Grainger:      And so, it's actually an experiment running where there wasn't a pre-existing framework, like there would be in a large pharma -- "This is how we do things here" -- that gradually might evolve. Because there was nothing and we've planted these seeds in an open field, we are refining the model, and we're getting things wrong and we're getting things right.

 

Mike Rea:              As you should,  yeah.

 

David Grainger:      And learning all the time. So, I think the fact that I serendipitously arrived in a situation where I've probably got more scope to address some of these questions that we're theorizing about in practice than many people have --

 

Mike Rea:              So, you've [arrived] in the kind of opposite of a monoculture in terms of companies that you work with and ideas that you play with?

 

David Grainger:      Yeah. 

 

Mike Rea:              Okay. Interesting diversity of thought. And --

 

David Grainger:      But in the end, to bring better medicines -- that is the final goal.

 

Mike Rea:              And do you think, just as a sidestep, about this kind of fundamental belief on the West Coast of America in AI and other things as a solution to all of this -- that suddenly you get better at disease prediction and targeting and so on? Where do you see the role of tech and AI in your world?

 

David Grainger:      I think it's a hype that we're going to see a lot of. There are a few very specific applications that will work, where it will provide incremental improvement, but you have to think: what kind of problems is AI solving? Because AI makes it sound grander than it is. I mean, it's more A than I. So, they're essentially just multivariate data analysis approaches. They're essentially tools to make better predictions. And if you're in a low validity environment in which the predictions can't be made because we don't actually know what's happening, then you're not going to do a very good job. If you're in a data-rich, highly predictive environment -- so if I've got lots of customers coming to my website and some of them are buying and some of them are not, these kinds of multivariate models can make an excellent job of refining my website to make it better. And then for the people who do that and have been successful in it to then come into our industry and think that, at least at any kind of global level, they can make a contribution -- no. They are going to be working within these nodes and silos. They can make [inaudible 00:46:10] better, they could make toxicology better, they could make in silico drug design better, for sure. Can they resolve the kinds of questions we've spent the last 40 minutes talking about?

                             No. In fact, they'll make it worse.
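
A toy illustration of the low-validity point, using the standard library only and invented numbers: whatever the algorithm, its ceiling is set by how much of the outcome the measurable data can explain.

```python
import random
import statistics   # statistics.correlation requires Python 3.10+

def achievable_skill(observed_weight, n=5_000, seed=1):
    """Correlation between a measurable feature and an outcome that is a mix
    of that feature and unobservable factors.

    observed_weight stands in for the 'validity' of the environment: how much
    of the outcome the data you can collect actually explains. All numbers
    are illustrative assumptions only.
    """
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n):
        x = rng.gauss(0, 1)        # what we can measure today
        hidden = rng.gauss(0, 1)   # the biology we don't yet know
        xs.append(x)
        ys.append(observed_weight * x + (1 - observed_weight) * hidden)
    return statistics.correlation(xs, ys)   # best case for any model built on x

# High-validity (website A/B-testing-like) vs low-validity (early drug R&D-like).
for w in (0.9, 0.5, 0.1):
    print(f"validity weight {w}: achievable correlation ~ {achievable_skill(w):.2f}")
```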

 

Mike Rea:              Because [your] training data [are] weak. 

 

David Grainger:      Weak. Absolutely. [Won't] matter how good your algorithm is, you're not going to, you know.

 

Mike Rea:              Yeah. And we've got very little, actually -- even awareness of what you would say was successful in the past, never mind unsuccessful --

 

David Grainger:      Well, we don't have anything to train it on, for all the reasons we described, because we're comparing apples with oranges with pears. Every project is different, has different reasons why it didn't work. Every place operating it was different. Learning algorithms require the idea that you iterated something which is comparable under different circumstances. And that just isn't true. But more importantly, the vast majority of what isn't known in a drug development project isn't unknown because I wasn't smart enough or I haven't bothered to measure it; it actually is unknowable at that point. I'm going to have to do things in a particular order. I'm going to have to make decisions ahead of the availability of the data needed to make those decisions. Now, of course, the AI people can respond and say, "Well David, there's something tautological about what you just said. If you think you can do it better, then clearly whatever it is you're doing, the machine could do." I think in the end, that's absolutely true. Everything I do and everything I am could be produced in silico, given enough time and resources. We will get there. But what we are calling AI today, and what we are introducing into the drug development process today, is multivariate statistics with a posh tag and a high price.

 

Mike Rea:              Exactly. Yeah. No, I think that, as you say, some of these things are just fundamentally intractable problems. As in, they are problems that take time, they take resolve. Instead of asking Google what the time is and it taking 24 hours to tell you, some of these things just take time to work their way through. So, I know you're an avid reader. Are there books that you would recommend to anyone within either the innovation world or the pharma R&D space?

 

David Grainger:      Well I think the books that have had the greatest impact on me haven't been about pharma R&D. My favorites are things like Freakonomics and Malcolm Gladwell's Outliers. These are about realizing how incentives alter people's behavior. Those kind of books had a huge impact on me.

 

Mike Rea:              Yeah. I completely concur with both of those suggestions. And I think that perspective is what's often missing from this almost unilateral view of the way that pharma and discovery and development works, which is this purely scientific hypothetical approach. 

 

David Grainger:      Yeah. But I think in terms of the rest, it's mostly odd bits out of many different books.

                             Black Swan, I find rather annoying in the tone of its writing, but it contains some very insightful thoughts. Nassim Nicholas Taleb clearly has some very interesting thoughts that maybe I would have communicated slightly differently. Everyone should read that book as well. But actually, you can go onto the internet these days and there are interesting nuggets, paragraphs, all spread across social media, all spread across blogs. It's become much more diverse. Five years ago, one tended to read books. Even the books I described -- Freakonomics, Outliers -- they're actually a collection of tales. They're small, unrelated points, if you like, that have been gathered together. And it is the weight of reading all of them that suddenly makes you realize the common connection. Rather than books, which often get written as an attempt to be a synthesis somehow. 

 

Mike Rea:              Yeah. [Like] textbooks [subjective] --

 

David Grainger:      And I think that the answer is, because we don't know how to do this -- we're discussing a live and dynamic problem. How would you structure innovation in order to get a better outcome? Because there isn't an answer to that, I can't write a book that is somehow -- not just me, no one can write a book that somehow represents an answer. You can only look at anecdotes and discuss their relative merits, in some ways much as we've done --

 

Mike Rea:              So just to sign off, because I could carry on for another few hours, what are your ambitions for the next five years? Let's give you a five-year horizon.

 

David Grainger:      I think it comes right back to where we started, which is: you are essentially nowhere if you are sitting in a corner on your own and know the answer to life, the universe and everything. It's one thing me satisfying myself that I can understand how to do these things better; I need to share that, to evangelize, to convince other people, to see changes in behaviors and in cultures resulting from the lessons which I've learned. So yeah, five years from now we'll carry on innovating and incrementally innovating. But how much will we -- because this isn't just me; lots of the thoughts I've described [with you] today belong to a wider ecosystem. We're spending a lot of time thinking about these things, learning about it. [inaudible 00:51:53] thinking about it all of the time. The entire concept of separating asset from infrastructure -- Francesco [De Rubertis] and Kevin Johnson introduced me to that 10, nearly 15, years ago, and we've together been working on that ever since. So, it's sort of like the Bloomsbury set -- there's a group of people, and to a large extent I'd count you among them, thinking in similar ways. Can we collectively get out there and change things? Because actually, we haven't changed much right now. I think we've learnt a lot, but if you look at the way the US VC market is operating, the way pharma companies are operating, does it look very different from 1999? Are we any different from just before the biotech and dot com crash? No. I would say we're headed for history repeating itself, and that represents a failure, even if I know why it's going wrong now. Unless we successfully persuade people to change and then see an objective response to that, we're nothing. So that's the five years to come. 

 

Mike Rea:              Okay. I think we can all sign up to that one. Well, thank you so much. And if folks want to know more about you -- you are very public out there at @sciencescanner and the blog. 

 

David Grainger:      Yeah, Drug Baron.

 

Mike Rea:              Do you still write [anything in that]?

 

David Grainger:      Yes.

 

Mike Rea:              Less frequent than you used to, is it?

 

David Grainger:      Less frequently than it used to be. Yes. But mainly because I feel I've said most of what needs to be said. I don't feel I want to blog just to say the same things again. 

 

Mike Rea:              Okay, fantastic. Thank you so much, David.

 

David Grainger:      Pleasure. Thanks, Mike.