The Use of Deep Learning in Distinguishing Chalazion and Eyelid Mass. Academic Article

Overview

abstract

  • PURPOSE: Our study investigated the ability of artificial intelligence to differentiate eyelid lesions, supporting its potential use as a tool to help other healthcare providers make better-informed referrals to oculoplastic surgery specialists. Specifically, our study tested artificial intelligence's ability to distinguish benign chalazia from alternative eyelid masses that may require advanced subspecialized care from oculoplastic specialists.

    METHODS: This retrospective case-control study included 206 photographs of diagnosed chalazia from 183 patients and 517 photographs from 486 patients with non-chalazia eyelid lesions to train and test a convolutional neural network (CNN). Network architectures including VGG-16, VGG-19, ResNet50, Xception, and MobileNetV2 were trained, and their performances were compared using the area under the curve (AUC) as the main outcome metric. Additionally, the performance of the CNN models was compared to that of frontline physicians.

    RESULTS: The VGG-16 and VGG-19 architectures achieved meaningful performance when trained with photographs of chalazia and eyelid masses, with AUCs of 0.797 and 0.703, respectively. Adjusting the detection threshold allowed the VGG-16 and VGG-19 models to achieve sensitivities of 93% and 98%, respectively, in predicting an eyelid mass. This was an improvement over classification by frontline physicians, who achieved an accuracy of 61% and a sensitivity of 65% for mass detection.

    CONCLUSIONS: We showed that a CNN trained with clinical external photographs can successfully distinguish a chalazion from an alternative eyelid mass, supporting its potential use as a tool to help healthcare providers determine whether a mass requires oculoplastic referral for subspecialty care.
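
The study's code and pipeline are not published on this page; the sketch below is only a minimal illustration of the kind of approach the abstract describes: fine-tuning an ImageNet-pretrained VGG-16 as a binary chalazion-versus-mass classifier, scoring it by AUC, and shifting the decision threshold to hit a target sensitivity. Every name, hyperparameter, and preprocessing choice here is an assumption, not the authors' method.

```python
# Hypothetical sketch (not the authors' code): transfer-learn VGG-16 for
# binary chalazion-vs-eyelid-mass classification, then tune the decision
# threshold for a target sensitivity, as the abstract's RESULTS describe.
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models
from sklearn.metrics import roc_auc_score, roc_curve

# ImageNet-pretrained VGG-16 backbone with a new binary classification head.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze convolutional features for initial training

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # P(non-chalazion eyelid mass)
])
model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=[tf.keras.metrics.AUC(name="auc")],
)

# train_ds / val_ds are assumed tf.data pipelines of (image, label) pairs,
# with label 1 = non-chalazion eyelid mass and label 0 = chalazion:
# model.fit(train_ds, validation_data=val_ds, epochs=20)

def threshold_for_sensitivity(y_true, y_prob, target_sensitivity=0.93):
    """Return the highest threshold whose true-positive rate meets the target."""
    fpr, tpr, thresholds = roc_curve(y_true, y_prob)  # thresholds descend
    meets_target = tpr >= target_sensitivity
    return thresholds[meets_target][0]

# Assumed held-out test arrays (test_images, y_test):
# y_prob = model.predict(test_images).ravel()
# print("AUC:", roc_auc_score(y_test, y_prob))
# t = threshold_for_sensitivity(y_test, y_prob, target_sensitivity=0.93)
# y_pred = (y_prob >= t).astype(int)  # high-sensitivity mass detector
```

Lowering the threshold this way trades specificity for sensitivity, which matches the screening use case in the abstract: a referral-support tool should rarely miss a mass that needs subspecialty care, even at the cost of some false referrals.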

publication date

  • April 17, 2026

Identity

PubMed Central ID

  • PMC13089635

Digital Object Identifier (DOI)

  • 10.1155/joph/8878251

PubMed ID

  • 42007239

Additional Document Info

volume

  • 2026