Development and validation of an objective scoring tool to evaluate surgical dissection: Dissection Assessment for Robotic Technique (DART). Academic Article

Overview

abstract

  • PURPOSE: Evaluation of surgical competency has important implications for training new surgeons, accreditation, and improving patient outcomes. A method to specifically evaluate dissection performance does not yet exist. This project aimed to design a tool to assess the quality of surgical dissection. METHODS: The Delphi method was used to validate the structure and content of the dissection evaluation. A multi-institutional, multi-disciplinary panel of 14 expert surgeons systematically evaluated each element of the dissection tool. Ten blinded reviewers evaluated 46 de-identified videos of pelvic lymph node and seminal vesicle dissections performed during robot-assisted radical prostatectomy. Inter-rater variability was calculated using prevalence-adjusted and bias-adjusted kappa. The area under the receiver operating characteristic curve (AUC) was used to assess how well overall DART scores, as well as individual domains, discriminated trainees (≤100 robotic cases) from experts (>100 cases). RESULTS: Four rounds of the Delphi process achieved language and content validity for 27/28 elements. Use of a 3- versus 5-point scale remained contested, so both scales were evaluated during validation. The 3-point scale showed improved kappa for each domain. Experts demonstrated significantly greater total scores on both scales (3-point, p < 0.001; 5-point, p < 0.001). The ability to distinguish experience was equivalent for the total score on both scales (3-point AUC = 0.92, CI 0.82-1.00; 5-point AUC = 0.92, CI 0.83-1.00). CONCLUSIONS: We present the development and validation of the Dissection Assessment for Robotic Technique (DART), an objective and reproducible 3-point surgical assessment of tissue dissection. DART can effectively differentiate levels of surgeon experience and can be used in multiple surgical steps.
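
  The abstract names two statistics: prevalence-adjusted and bias-adjusted kappa (PABAK) for inter-rater agreement, and the area under the ROC curve (AUC) for discriminating trainees from experts. The sketch below is not taken from the paper; it only illustrates, with hypothetical toy data and invented variable names, how these two quantities are conventionally computed in Python (PABAK = (k * Po - 1) / (k - 1) on a k-category scale, where Po is observed agreement; AUC via scikit-learn's roc_auc_score).

  # Minimal sketch, not the authors' analysis code; data and names are hypothetical.
  import numpy as np
  from sklearn.metrics import roc_auc_score

  def pabak(rater_a, rater_b, k):
      """Prevalence- and bias-adjusted kappa for two raters on a k-category scale.

      PABAK = (k * Po - 1) / (k - 1), where Po is the observed proportion of
      exact agreement between the two raters.
      """
      rater_a = np.asarray(rater_a)
      rater_b = np.asarray(rater_b)
      po = np.mean(rater_a == rater_b)   # observed agreement
      return (k * po - 1) / (k - 1)

  # Hypothetical example: two reviewers scoring the same 10 videos on a 3-point scale.
  scores_a = [1, 2, 3, 2, 2, 1, 3, 3, 2, 1]
  scores_b = [1, 2, 3, 2, 1, 1, 3, 2, 2, 1]
  print("PABAK (3-point scale):", round(pabak(scores_a, scores_b, k=3), 2))

  # Hypothetical example: total scores with expert (1) vs trainee (0) labels.
  is_expert   = [0, 0, 0, 0, 1, 1, 1, 1]
  total_score = [12, 14, 15, 17, 19, 21, 22, 24]
  print("AUC:", round(roc_auc_score(is_expert, total_score), 2))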

authors

  • Vanstrum, Erik B
  • Ma, Runzhuo
  • Maya-Silva, Jacqueline
  • Sanford, Daniel
  • Nguyen, Jessica H
  • Lei, Xiaomeng
  • Chevinsky, Michael
  • Ghoreifi, Alireza
  • Han, Jullet
  • Polotti, Charles F
  • Powers, Ryan
  • Yip, Wesley
  • Zhang, Michael
  • Aron, Monish
  • Collins, Justin
  • Daneshmand, Siamak
  • Davis, John W
  • Desai, Mihir M
  • Gerjy, Roger
  • Goh, Alvin C
  • Kimmig, Rainer
  • Lendvay, Thomas S
  • Porter, James
  • Sotelo, Rene
  • Sundaram, Chandru P
  • Cen, Steven
  • Gill, Inderbir S
  • Hung, Andrew J

publication date

  • September 1, 2021

Identity

PubMed Central ID

  • PMC10150863

Scopus Document Identifier

  • 85138731583

Digital Object Identifier (DOI)

  • 10.1097/upj.0000000000000246

PubMed ID

  • 37131998

Additional Document Info

volume

  • 8

issue

  • 5