Historically, the field of Artificial Intelligence (AI) has gone through several cycles of initial excitement, intense hype, optimism, and promises of revolution – dubbed AI summers – only to be followed by periods of disappointment, aptly named AI winters, in which expectations failed to materialise and government and research funding slowly moved on to other prospects (Chauvet, 2018).
We currently seem to be in a new AI summer, with several influential breakthroughs following the emergence of deep learning, a sub-field of machine learning based on artificial neural networks. Over the last decade, deep learning has steadily surpassed the performance of existing statistical and classical machine learning techniques.
The current success of deep learning stems from engineering and technological advancements, which eased existing bottlenecks and opened a new period of research and development, leading to many of the innovations we are now witnessing. The rapid growth of this field has had – and continues to have – an impact on society through the development of various products and applications.
One industry that has seen particularly interesting developments is the pharmaceutical industry, especially in the application of deep learning to drug development, notably through multi-task deep neural networks and generative models.
Drug discovery is the process by which new candidate medications are identified. This process has been reshaped repeatedly over the years: from identifying active ingredients in traditional remedies, to screening small molecules in intact cells for substances with the desired therapeutic effect, to phenotypic screening and target-based drug discovery, to name just a few approaches.
Historically, drug development has been a time-consuming, complex, and expensive process, with a low rate of successful transition from an initial phase-I clinical trial to approval. The outcome depends largely on how drug molecules interact with the body's proteins. There has therefore been a strong incentive within the pharmaceutical industry to harness machine learning to predict the degree of interaction between these two components, and thereby reduce development cost and time at all stages of the drug discovery pipeline.
A broad survey by Vamathevan et al. (2019) shows numerous novel applications of machine learning at specific stages of drug discovery, namely the identification and validation of a 'target' molecular structure in the body where the drug would act, the design and optimisation of small molecules, predictive biomarkers, and computational pathology.
In this blog post, we focus on how small-molecule design and optimisation has been explored using classical machine learning and deep learning. Small molecules are pharmaceutical compounds with a low molecular weight that can effectively block or activate a target protein of interest. The initial discovery of such candidates involves extensive virtual and experimental screening of large compound libraries. Promising candidates can then be refined to improve target specificity and selectivity, along with optimised pharmacodynamic, pharmacokinetic, and toxicological properties. Initial attempts to improve the virtual screening process relied on substructural analysis and statistical machine learning techniques, such as k-Nearest Neighbours or Support Vector Machines, to compute the probability that a compound is active (Gillet, 2013).
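To make the classical approach concrete, here is a minimal sketch of ligand-based virtual screening with a Support Vector Machine. The fingerprints and activity labels below are random, synthetic stand-ins (real screens would use structural fingerprints such as ECFP computed from actual compound libraries); the workflow – fit a classifier on labelled compounds, then rank the unscreened library by predicted probability of activity – is the part being illustrated.

```python
# Sketch of classical virtual screening: an SVM scores compounds by
# probability of activity. All data here are synthetic stand-ins.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_compounds, n_bits = 500, 256

# Hypothetical binary substructure fingerprints, one row per compound.
X = rng.integers(0, 2, size=(n_compounds, n_bits)).astype(float)
w = rng.normal(size=n_bits)                 # hidden "activity" direction
y = (X @ w > np.median(X @ w)).astype(int)  # 1 = active, 0 = inactive

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)

# Probability that each screened compound is active, used to rank the library.
p_active = clf.predict_proba(X_te)[:, 1]
ranking = np.argsort(-p_active)             # most promising compounds first
print(f"top candidate probability: {p_active[ranking[0]]:.2f}")
```

In a real campaign the ranked list, not the hard predictions, is what matters: only the top-scoring fraction of a multi-million-compound library is forwarded to experimental assays.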
More recent approaches employing multi-task deep learning have shown significant improvement over their classical counterparts: these models are much stronger at inferring the properties and activities of small molecules, and can therefore more effectively identify lead compounds with similar chemical structures. Even newer techniques, such as 'one-shot learning', have been used to further reduce the computational cost of predicting the readout of a molecule from a large dataset.
Hybrid models have also yielded interesting results for other optimisations. Monte-Carlo tree search was jointly employed with deep neural networks to increase the efficiency of planning chemical synthesis routes by a factor of 30 (Segler et al., 2018). Generative models have been used for feature extraction (Kadurin et al., 2017), and proved valuable when combined with reinforcement learning to design compounds with ideal values for pharmacokinetic properties, solubility, and other parameters (Olivecrona et al., 2017).
However, deep learning models come with issues of their own. They are significantly less interpretable than classical machine learning algorithms. This is particularly problematic in the life sciences, where trust plays an important part in decisions that directly affect people, and it further complicates diagnosing and correcting a faulty model on new data. Reproducing results consistent with past iterations has also been a persistent challenge in deep learning, owing to the very large number of parameters, some of which must be randomly initialised.
Furthermore, deep models require substantially more data samples than their classical counterparts, each of which needs to be consistently and correctly labelled with a high degree of confidence. Many such datasets currently exist only in human-readable form and cannot be processed directly by machines, creating another significant bottleneck. In some cases, re-collecting the data from scratch in a machine-readable format may prove the better path.
The data representation of chemical structures is another area to assess critically. It is particularly relevant to small-molecule design, where the optimal representation (a SMILES string, a molecular graph, or a structural fingerprint, among other options) is still an open question.
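To see why representation is a design decision in itself, consider one small molecule (ethanol) expressed in three common machine-readable forms. The encodings below are deliberately minimal – hand-written rather than produced by a cheminformatics toolkit such as RDKit – but each corresponds to a family of models: string encodings feed sequence models, graphs feed graph neural networks.

```python
# The same molecule (ethanol) in three common machine-readable forms.
import numpy as np

smiles = "CCO"  # 1. SMILES string: a linear text encoding of the molecule

# 2. One-hot character encoding of the SMILES string, the kind of input
#    a sequence model (e.g. an RNN-based generative model) consumes.
vocab = sorted(set(smiles))                  # ['C', 'O']
one_hot = np.zeros((len(smiles), len(vocab)))
for i, ch in enumerate(smiles):
    one_hot[i, vocab.index(ch)] = 1

# 3. Molecular graph: heavy atoms as nodes, bonds as an adjacency
#    matrix, the input format of graph neural networks. (Hydrogens
#    left implicit, as is conventional.)
atoms = ["C", "C", "O"]
adjacency = np.array([[0, 1, 0],
                      [1, 0, 1],
                      [0, 1, 0]])

print(smiles, one_hot.shape, int(adjacency.sum()) // 2, "bonds")
```

The same chemistry yields tensors of very different shapes and symmetries, and model performance can hinge on which of these views is chosen.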
It is currently hard to imagine pharmaceutical companies letting a neural network decide which small molecules to focus research and development on without significant clarity about the factors under consideration and the model's reasoning. However, should further innovations reduce this uncertainty, improve interpretability, and yield medical results and financial pay-offs, widespread adoption of deep learning approaches in this industry may well be foreseeable in the near future.
Chauvet, J. M. (2018). The 30-year cycle in the ai debate. arXiv preprint arXiv:1810.04053.
Costa, P. R., Acencio, M. L., & Lemke, N. (2010, December). A machine learning approach for genome-wide prediction of morbid and druggable human genes based on systems-level data. In BMC genomics (Vol. 11, No. S5, p. S9). BioMed Central.
Gillet, V. (2013). Ligand-Based and Structure-Based Virtual Screening. Presentation, The University of Sheffield.
Jeon, J., Nim, S., Teyra, J., Datti, A., Wrana, J. L., Sidhu, S. S., … & Kim, P. M. (2014). A systematic approach to identify novel cancer drug targets using machine learning, inhibitor design and high-throughput screening. Genome medicine, 6(7), 57.
Kadurin, A., Nikolenko, S., Khrabrov, K., Aliper, A., & Zhavoronkov, A. (2017). druGAN: an advanced generative adversarial autoencoder model for de novo generation of new molecules with desired molecular properties in silico. Molecular pharmaceutics, 14(9), 3098-3104.
Olivecrona, M., Blaschke, T., Engkvist, O., & Chen, H. (2017). Molecular de-novo design through deep reinforcement learning. Journal of cheminformatics, 9(1), 48.
Segler, M. H., Preuss, M., & Waller, M. P. (2018). Planning chemical syntheses with deep neural networks and symbolic AI. Nature, 555(7698), 604-610.
Vamathevan, J., Clark, D., Czodrowski, P., Dunham, I., Ferran, E., Lee, G., … & Zhao, S. (2019). Applications of machine learning in drug discovery and development. Nature Reviews Drug Discovery, 18(6), 463-477.