
Investigating ADR mechanisms with Explainable AI: a feasibility study with knowledge graph mining

Abstract:

Background: Adverse drug reactions (ADRs) are statistically characterized in randomized clinical trials and postmarketing pharmacovigilance, but their molecular mechanisms remain unknown in most cases. This is true even for hepatic and skin toxicities, which are routinely monitored during drug design. Beyond clinical trials, many elements of knowledge about drug ingredients, such as their properties, interactions, or involvement in pathways, are available in open-access knowledge graphs. In addition, drug classifications that label drugs as causative or not for several ADRs have been established.

Methods: We propose in this paper to mine knowledge graphs to identify biomolecular features that can automatically reproduce expert classifications distinguishing drugs causative or not for a given type of ADR. From an Explainable AI perspective, we explore simple classification techniques such as decision trees and classification rules, because they produce human-readable models that explain the classification itself and may also provide elements of explanation for the molecular mechanisms behind ADRs. In summary, (1) we mine a knowledge graph for features; (2) we train classifiers to distinguish, on the basis of the extracted features, drugs associated or not with two commonly monitored ADRs: drug-induced liver injuries (DILI) and severe cutaneous adverse reactions (SCAR); (3) we isolate features that are both efficient in reproducing expert classifications and interpretable by experts (i.e., Gene Ontology terms, drug targets, or pathway names); and (4) we manually evaluate in a mini-study how explanatory they may be.

Results: The extracted features reproduce expert classifications of drugs causative or not for DILI and SCAR with good fidelity (accuracy = 0.74 and 0.81, respectively). Experts fully agreed that 73% and 38% of the most discriminative features are possibly explanatory for DILI and SCAR, respectively, and partially agreed (2/3) for 90% and 77% of them.

Conclusion: Knowledge graphs provide sufficiently diverse features to enable simple, explainable models to distinguish drugs that are causative or not for ADRs. Beyond explaining classifications, the most discriminative features appear to be good candidates for further investigation of ADR mechanisms.
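The classification step described in the Methods can be sketched with a shallow decision tree over binary, knowledge-graph-derived features. This is a toy illustration, not the paper's pipeline: the feature names and drug labels below are hypothetical, and only stand in for the kind of Gene Ontology terms, drug targets, and pathway features the study extracts.

```python
# Minimal sketch: each drug is a binary feature vector (presence/absence of
# knowledge-graph features), and a depth-limited decision tree is trained to
# reproduce an expert causative/non-causative labeling for an ADR (e.g. DILI).
# Data and feature names are illustrative placeholders, not the study's data.
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = [
    "GO:xenobiotic_metabolism",  # hypothetical Gene Ontology feature
    "target:CYP3A4",             # hypothetical drug-target feature
    "pathway:bile_acid",         # hypothetical pathway feature
]
X = [
    [1, 1, 0],  # drug A
    [1, 0, 1],  # drug B
    [0, 0, 0],  # drug C
    [0, 1, 0],  # drug D
]
y = [1, 1, 0, 0]  # 1 = expert-labeled causative for the ADR, 0 = not

# A shallow tree keeps the model human-readable, matching the paper's
# Explainable AI motivation for choosing decision trees.
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text prints the learned rules, so the features driving the
# classification can be inspected (and judged) by domain experts.
print(export_text(clf, feature_names=feature_names))
```

The printed rules expose which features the tree splits on; in the study, such discriminative features are the candidates handed to experts for mechanistic evaluation.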
Document type :
Journal articles

Contributor : Adrien Coulet
Submitted on : Wednesday, May 11, 2022 - 3:32:09 PM
Last modification on : Tuesday, May 17, 2022 - 11:24:27 AM


Publication funded by an institution



Emmanuel Bresso, Pierre Monnin, Cédric Bousquet, François-Elie Calvier, Ndeye-Coumba Ndiaye, et al.. Investigating ADR mechanisms with Explainable AI: a feasibility study with knowledge graph mining. BMC Medical Informatics and Decision Making, BioMed Central, 2021, 21 (1), pp.171. ⟨10.1186/s12911-021-01518-6⟩. ⟨hal-03240476⟩


