The MPRINT Webinar Series: "Interpretable AI: Data Driven Mechanistic Modeling for Chemical Toxicity and Drug Safety Evaluations"
Hao Zhu, PhD
Professor, Center for Biomedical Informatics & Genomics
School of Medicine
Tulane University
Addressing the safety aspects of new chemicals has historically been undertaken through animal testing studies, which are expensive and time-consuming. Computational toxicology is a promising alternative approach that utilizes machine learning (ML) and deep learning (DL) techniques to predict the toxicity potential of chemicals. Although the application of ML- and DL-based computational models to chemical toxicity prediction is attractive, many toxicity models are “black box” in nature and difficult for toxicologists to interpret, which hampers chemical risk assessment using these models. Recent progress in interpretable ML (IML) in the computer science field addresses this urgent need by helping to unveil underlying toxicity mechanisms and elucidate the domain knowledge embedded in toxicity models. In this new modeling framework, the toxicity feature data, the model interpretation methods, and the use of toxicity knowledgebases in IML development advance the application of computational models in chemical risk assessment. The challenges and future directions of IML modeling in toxicology are strongly driven by heterogeneous big data and newly revealed toxicity mechanisms. Big data mining, analysis, and mechanistic modeling using IML methods will advance artificial intelligence in the big data era, paving the road to future computational chemical toxicology and having a significant impact on risk assessment procedures and drug safety.
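
To make the framework concrete, below is a minimal, hypothetical sketch of the kind of interpretable pipeline the abstract describes: a random forest toxicity classifier trained on chemical descriptor features, with permutation importance used as a simple model-agnostic interpretation method to show which descriptors drive the predictions. The descriptor names, synthetic data, and model choice are illustrative assumptions, not material from the webinar.

# Illustrative sketch (not from the webinar): a toxicity classifier on
# hypothetical precomputed chemical descriptors, interpreted with
# permutation importance (a simple model-agnostic IML method).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical dataset: rows are chemicals, columns are molecular descriptors;
# labels are toxic (1) / non-toxic (0).
feature_names = ["logP", "mol_weight", "n_aromatic_rings", "structural_alert"]
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permutation importance measures how much held-out accuracy drops when each
# descriptor is shuffled, giving toxicologists a ranked view of which features
# the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")

In practice, the descriptors would come from cheminformatics toolkits and the interpretation step could be replaced by other attribution methods; the point of the sketch is only to show how a predictive model and an interpretation method fit together in an IML workflow.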