An Interpretable Complex Knowledge Multi-hop Reasoning Model for Predicting Synthetic Lethality in Human Cancers. Academic Article

Overview

abstract

  • Synthetic lethality (SL) has emerged as a promising strategy in cancer medicine. However, complex biomolecular interactions make wet-lab methods time-consuming and expensive, so machine learning methods have gained widespread adoption for SL prediction in recent years. Although these methods are effective, they suffer from weak interpretability, making it difficult for users to understand a model's specific reasoning process. They also typically focus on simple gene pairs and thus struggle with more meaningful reasoning tasks that, as in real clinical settings, involve other medical factors. To address these gaps, we propose EFOL-SL, an explainable multi-hop reasoning model based on first-order logic queries. We first construct query graphs with triplet transformations for different tasks. Node embeddings are then fed into a sparse Transformer encoder and a visualized graph attention decoder to generate comprehensive multi-hop logical reasoning chains. By masking nodes at intermediate reasoning steps, our model explicitly predicts each node, allowing its exact reasoning process to be observed. Additionally, we conduct extensive experiments on two widely used benchmarks with complex SL prediction tasks involving diverse medical entities. Evaluations demonstrate superior performance of our model over state-of-the-art methods on various tasks. Notably, EFOL-SL provides the specific multi-hop logical reasoning chains behind its predictions, offering meaningful insight into the model's reasoning process.
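The masked-node reasoning idea in the abstract — predicting each intermediate entity of a multi-hop query so the full chain can be inspected — can be illustrated with a minimal sketch. This is not the paper's EFOL-SL architecture: the entity names, relations, and TransE-style translational scorer below are all illustrative assumptions standing in for the learned encoder/decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy knowledge graph: gene entities and relations as dense
# embeddings. All names here are illustrative, not from the paper's data.
entities = ["BRCA1", "PARP1", "TP53", "KRAS"]
ent_emb = {e: rng.normal(size=8) for e in entities}
rel_emb = {r: rng.normal(size=8) for r in ["interacts_with", "synthetic_lethal"]}

def predict_masked(head, relation):
    """Score every candidate entity as the masked tail of (head, relation, ?)
    using a simple translational score, -||h + r - t|| (TransE-style stand-in
    for a learned decoder)."""
    h, r = ent_emb[head], rel_emb[relation]
    scores = {t: -np.linalg.norm(h + r - ent_emb[t]) for t in entities if t != head}
    return max(scores, key=scores.get)

def multi_hop_chain(start, relations):
    """Unroll a multi-hop query: predict each masked intermediate node in
    turn, recording the full chain so the reasoning path is observable."""
    chain, node = [start], start
    for rel in relations:
        node = predict_masked(node, rel)
        chain.append(node)
    return chain

chain = multi_hop_chain("BRCA1", ["interacts_with", "synthetic_lethal"])
print(" -> ".join(chain))  # one predicted entity per hop
```

The point of the sketch is only that exposing every intermediate prediction, rather than just the final SL score, is what makes the chain interpretable.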

publication date

  • October 28, 2025

Research

keywords

  • Computational Biology
  • Neoplasms
  • Synthetic Lethal Mutations

Identity

Digital Object Identifier (DOI)

  • 10.1109/TCBBIO.2025.3626526

PubMed ID

  • 41150233

Additional Document Info

volume

  • PP