Fine-grained multiclass nuclei segmentation with molecular empowered all-in-SAM model. Academic Article

Overview

abstract

  • PURPOSE: Recent developments in computational pathology have been driven by advances in vision foundation models (VFMs), particularly the Segment Anything Model (SAM). SAM supports nuclei segmentation through two primary routes: prompt-based zero-shot segmentation and direct segmentation with cell-specific SAM variants. These approaches segment a broad range of nuclei and cells effectively. However, general VFMs often struggle with fine-grained semantic segmentation, such as identifying specific nucleus subtypes or particular cell types.

  • APPROACH: In this paper, we propose the molecular-empowered all-in-SAM model to advance computational pathology by leveraging the capabilities of VFMs. The model takes a full-stack approach spanning (1) annotation: engaging lay annotators through molecular-empowered learning to reduce the need for detailed pixel-level annotation; (2) learning: adapting SAM with a SAM adapter to emphasize the target semantics while retaining its strong generalizability; and (3) refinement: improving segmentation accuracy through molecular-oriented corrective learning.

  • RESULTS: Experiments on both in-house and public datasets show that the all-in-SAM model significantly improves cell classification performance, even under varying annotation quality.

  • CONCLUSIONS: Our approach reduces the annotation workload and extends precise biomedical image analysis to resource-limited settings, thereby advancing medical diagnostics and automating pathology image analysis.
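
  As a rough illustration of the prompt-based zero-shot step the abstract builds on (the base SAM workflow only, not the authors' full all-in-SAM pipeline), the sketch below uses Meta's open-source segment_anything package. The image array and point coordinates are placeholders, not values from the paper; the checkpoint filename follows Meta's public ViT-B release.

      import numpy as np
      from segment_anything import sam_model_registry, SamPredictor

      # Load a pretrained SAM backbone (ViT-B shown; checkpoint from Meta's release).
      sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
      predictor = SamPredictor(sam)

      # image: an H x W x 3 uint8 RGB pathology patch (placeholder array here).
      image = np.zeros((512, 512, 3), dtype=np.uint8)
      predictor.set_image(image)

      # One foreground point prompt at a (hypothetical) nucleus centroid.
      point_coords = np.array([[256, 256]])
      point_labels = np.array([1])  # 1 = foreground, 0 = background

      masks, scores, _ = predictor.predict(
          point_coords=point_coords,
          point_labels=point_labels,
          multimask_output=True,  # three candidate masks at different granularities
      )
      best_mask = masks[int(np.argmax(scores))]  # boolean H x W mask

  The all-in-SAM contribution then layers molecular-empowered annotation, adapter-based fine-tuning, and molecular-oriented corrective learning on top of this base prompting step; those components are described in the paper and are not sketched here.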

publication date

  • September 4, 2025

Identity

PubMed Central ID

  • PMC12410749

Digital Object Identifier (DOI)

  • 10.1117/1.JMI.12.5.057501

PubMed ID

  • 40918610

Additional Document Info

volume

  • 12

issue

  • 5