I talked about rational drug design in an earlier post, which is where chemists try to improve upon a promising compound by adding structure to it in a planned way, using a model of the target site to predict the effect of such changes. I want to expand upon this subject in this post.
Historically, a lot of drugs on the market have been found by a combination of serendipity and systematic screening. To be glib: an initial screen found a compound, the chemistry department went to town on it, changing whatever they could and seeing what happened, and at the end of the process out came the drug candidate.
Along came rational drug design, in which a target in the body is identified and the effect of an agonist or antagonist on that target is evaluated. Then, given the 3D structure of that target, a compound is made to fit the pocket and make all the right interactions, yielding the drug candidate. In theory, much more focussed and targeted and, well, rational.
The only trouble is that the reality is something of a combination of the two. An initial hit is still often found by screening. The chemist might use the structural information available to make rational choices about what changes to make to the molecule, but then he will also very likely make a bunch of much less rational changes, just to see what happens.
I am a believer in a rational approach. I don’t think it is important to make a whole lot of compounds, just the right ones to expand and test our knowledge of the system. The computer models of active sites are very useful for doing this. But it has been my experience so far that they are much better at providing the explanation for a result after you have it than at predicting it in the first place.
Of course there are many good reasons for that – often the model is itself a work in progress, especially if the actual x-ray structure is not available. Even if there is an x-ray, it is a static snapshot of a dynamic system. Add in variation in binding mode and physicochemical factors, and the predicted subnanomolar inhibitor can quickly lose efficacy.
Even if you do get the potency in the enzyme, taking that into real life and animal models is perilous. That is not unique to rational drug design by any means, but if pharmacokinetics don’t get you, you might find selectivity does – you may even find that the effect you are having is not due to the inhibition of that kinase after all, but to the inhibition of a completely different kinase. Selectivity between kinases is a can of worms for another blog post on another day, but the basic point is that you can be all rational and design your molecule and still get this kind of result.
Like a lot of other useful techniques in science, the computer model is a tool, not a crutch. Certainly, it has had successes. The newer wave of fragment-based drug design has put together some interesting drug candidates, and that is still a developing field. You can use it to guide your discovery efforts, but you need to keep a dose of reality nearby – it is not the be-all and end-all.