Surgical skill level classification model development using EEG and eye-gaze data and machine learning algorithms. Academic Article

Overview

abstract

  • The aim of this study was to develop machine learning classification models using electroencephalogram (EEG) and eye-gaze features to predict the level of surgical expertise in robot-assisted surgery (RAS). EEG and eye-gaze data were recorded from 11 participants who performed cystectomy, hysterectomy, and nephrectomy using the da Vinci robot. Skill level was evaluated by an expert RAS surgeon using the modified Global Evaluative Assessment of Robotic Skills (GEARS) tool, and data from three subtasks were extracted to classify skill levels using three classification models: multinomial logistic regression (MLR), random forest (RF), and gradient boosting (GB). The GB algorithm was then applied to a combination of EEG and eye-gaze data to classify skill levels, and differences between the models were tested using two-sample t-tests. The GB model using EEG features alone showed the best performance for blunt dissection (83% accuracy), retraction (85% accuracy), and burn dissection (81% accuracy). Combining EEG and eye-gaze features with the GB algorithm improved classification accuracy to 88% for blunt dissection, 93% for retraction, and 86% for burn dissection. Implementing objective skill classification models in clinical settings may enhance RAS surgical training by providing surgeons and their trainers with objective performance feedback.
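
  A minimal sketch of the modeling approach described above, assuming a Python/scikit-learn implementation (the record does not specify the software used): a gradient boosting classifier is trained on EEG features alone and on concatenated EEG and eye-gaze features, and the two models' cross-validated accuracies are compared with a two-sample t-test. All feature counts, labels, and data below are synthetic placeholders, not the study's data.

    # Illustrative only: synthetic data stand in for the recorded EEG and
    # eye-gaze features; the study's actual preprocessing is not described here.
    import numpy as np
    from scipy.stats import ttest_ind
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    n_segments = 200        # hypothetical number of labeled subtask segments
    n_eeg_features = 32     # hypothetical EEG-derived features (e.g., band powers)
    n_gaze_features = 8     # hypothetical eye-gaze features (e.g., fixation metrics)

    X_eeg = rng.normal(size=(n_segments, n_eeg_features))
    X_gaze = rng.normal(size=(n_segments, n_gaze_features))
    y = rng.integers(0, 2, size=n_segments)   # placeholder skill labels

    # Model 1: gradient boosting on EEG features only
    acc_eeg = cross_val_score(
        GradientBoostingClassifier(random_state=0), X_eeg, y,
        cv=5, scoring="accuracy")

    # Model 2: gradient boosting on concatenated EEG + eye-gaze features
    X_combined = np.hstack([X_eeg, X_gaze])
    acc_combined = cross_val_score(
        GradientBoostingClassifier(random_state=0), X_combined, y,
        cv=5, scoring="accuracy")

    # Two-sample t-test on the cross-validated accuracies, mirroring the
    # between-model comparison mentioned in the abstract
    t_stat, p_value = ttest_ind(acc_combined, acc_eeg)

    print(f"EEG only:       mean accuracy = {acc_eeg.mean():.2f}")
    print(f"EEG + eye-gaze: mean accuracy = {acc_combined.mean():.2f}")
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")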

publication date

  • October 21, 2023

Research

keywords

  • Robotic Surgical Procedures
  • Robotics
  • Surgeons

Identity

PubMed Central ID

  • PMC10678814

Scopus Document Identifier

  • 85174545099

Digital Object Identifier (DOI)

  • 10.1007/s11701-023-01722-8

PubMed ID

  • 37864129

Additional Document Info

volume

  • 17

issue

  • 6