Automatic classification of informative laryngoscopic images using deep learning. Academic Article

Overview

abstract

  • Objective: This study aims to develop and validate a convolutional neural network (CNN)-based algorithm for the automatic selection of informative frames in flexible laryngoscopic videos. The classifier has the potential to aid the development of computer-aided diagnosis systems and to reduce data processing time for clinician-computer scientist teams.

    Methods: A dataset of 22,132 laryngoscopic frames was extracted from 137 flexible laryngostroboscopic videos from 115 patients; 55 videos were from healthy patients with no laryngeal pathology and 82 were from patients with vocal fold polyps. The extracted frames were manually labeled as informative or uninformative by two independent reviewers based on vocal fold visibility, lighting, focus, and camera distance, yielding 18,114 informative and 4,018 uninformative frames. The dataset was split into training and test sets. A pre-trained ResNet-18 model was trained via transfer learning to classify frames as informative or uninformative, with hyperparameters set using cross-validation. The primary outcome was precision for the informative class; secondary outcomes were precision, recall, and F1-score for all classes. The frame-processing rates of the model and a human annotator were also compared.

    Results: On a hold-out test set of 4,438 frames, the automated classifier achieved an informative-frame precision, recall, and F1-score of 94.4%, 90.2%, and 92.3%, respectively. The model processed frames 16 times faster than a human annotator.

    Conclusion: The CNN-based classifier demonstrates high precision for classifying informative frames in flexible laryngostroboscopic videos. It has the potential to aid researchers in dataset creation for computer-aided diagnosis systems by automatically extracting relevant frames from laryngoscopic videos.
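    The informative-class metrics reported in the abstract follow the standard precision/recall/F1 definitions for a binary classifier. A minimal sketch of that arithmetic (the confusion-matrix counts below are hypothetical, chosen only to illustrate the calculation; they are not the study's actual counts):

    ```python
    def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
        """Compute precision, recall, and F1 for the positive (informative) class."""
        precision = tp / (tp + fp)  # fraction of predicted-informative frames that truly are
        recall = tp / (tp + fn)    # fraction of truly informative frames that were found
        f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
        return precision, recall, f1

    # Hypothetical counts for illustration only (not from the paper):
    p, r, f1 = precision_recall_f1(tp=90, fp=5, fn=10)
    print(f"precision={p:.3f} recall={r:.3f} f1={f1:.3f}")
    # precision=0.947 recall=0.900 f1=0.923
    ```

    Note that the reported numbers are internally consistent: the harmonic mean of 94.4% precision and 90.2% recall is approximately 92.3%, the stated F1-score.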

publication date

  • February 8, 2022

Identity

PubMed Central ID

  • PMC9008155

Scopus Document Identifier

  • 85124525310

Digital Object Identifier (DOI)

  • 10.1002/lio2.754

PubMed ID

  • 35434326

Additional Document Info

volume

  • 7

issue

  • 2