6.2k words, 29 minutes reading time
www.owlposting.com
This is great: an interesting essay on some very useful work.
Summary:
Most pharma companies don't really care about discovering every off-target effect of whatever drug they are pushing through clinical trials. Why would they? Figuring that out takes resources away from the only thing they really care about (and profit from): the drug actually working. Everything is secondary to that! So, yes, safety-related off-target effects get explored, since those can derail approval, but everything else is largely ignored. If a drug binds to ten other receptors unrelated to its intended use, and those bindings don't obviously cause toxicity or regulatory delays, nobody in industry is going to spend time or money mapping them out.
But learning what those ten receptors are would likely be useful for a lot of things! For example: drug repurposing, designing multi-target drugs (e.g. Ozempic follow-ups), and supplying validation data for chemical machine-learning models. What if this could be done for every FDA-approved drug, across the entire human proteome? What if there's an immense amount of low-hanging fruit there? But until a few years ago, nobody had done this, because it sat in a weird position: too fuzzy a value proposition for industry to justify and too expensive for academia to prioritize.
Several years back, EvE Bio spun up, funded largely through philanthropic dollars, to do exactly this: map the off-target effects of every FDA-approved drug. As of today, they have created dose-dependent agonism/antagonism curves for 56 human GPCRs (G-protein-coupled receptors) and 29 human NRs (nuclear receptors) across 1,600 FDA-approved drugs, releasing the data under a CC-NC license. This is basically the only dataset of its kind out there, and they have already found potential drug-repurposing indications and ML companies interested in the data. Over the course of their existence, they plan to cover a select set of the 200 GPCRs and all 48 NRs. In time, they hope to also expand to tyrosine kinases, failed drugs, and tool chemicals.
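To make "dose-dependent agonism/antagonism curves" concrete: for each drug-receptor pair, the response is measured across a range of concentrations and then usually summarized by fitting a four-parameter Hill curve (bottom, top, EC50, Hill slope). Here is a minimal sketch of that kind of fit on synthetic data; this is just a generic illustration, not EvE Bio's actual assay pipeline or data format.

```python
# Minimal sketch: fitting a four-parameter Hill (dose-response) curve,
# the kind of summary typically extracted from agonism/antagonism assays.
# All data points below are synthetic and purely illustrative.
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, bottom, top, ec50, hill_slope):
    """Four-parameter logistic: response as a function of dose."""
    return bottom + (top - bottom) / (1.0 + (ec50 / dose) ** hill_slope)

# Synthetic 10-point dose series (molar) with a noisy sigmoidal response.
doses = np.logspace(-9, -4, 10)  # 1 nM .. 100 uM
rng = np.random.default_rng(0)
responses = hill(doses, 5, 95, 1e-6, 1.0) + rng.normal(0, 3, doses.size)

# Fit EC50 and friends; bounds keep the optimizer in a sane region.
params, _ = curve_fit(
    hill, doses, responses,
    p0=[0, 100, 1e-6, 1.0],
    bounds=([-20, 50, 1e-10, 0.1], [20, 150, 1e-3, 5.0]),
)
bottom, top, ec50, hill_slope = params
print(f"EC50 ~ {ec50:.2e} M, Hill slope ~ {hill_slope:.2f}")
```

An antagonism curve looks the same, just with the response falling rather than rising as the dose increases.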
This essay walks through all of this in a lot more detail, including how they managed to achieve such immense data-generation scale, what the data is useful for, why nobody else has created something like it, and a lot more!