An Assessment Tool to Provide Targeted Feedback to Robotic Surgical Trainees: Development and Validation of the End-To-End Assessment of Suturing Expertise (EASE).
Abstract
PURPOSE: To create a suturing skills assessment tool that comprehensively defines criteria around the relevant sub-skills of suturing, and to confirm its validity.

MATERIALS AND METHODS: Five expert surgeons and an educational psychologist participated in a cognitive task analysis (CTA) to deconstruct robotic suturing into an exhaustive list of technical skill domains and sub-skill descriptions. Using the Delphi methodology, each CTA element was systematically reviewed by a multi-institutional panel of 16 surgical educators and implemented in the final product once its content validity index (CVI) reached ≥0.80. In the subsequent validation phase, 3 blinded reviewers independently scored 8 training videos and 39 vesicourethral anastomoses (VUA) using EASE; 10 VUA were also scored using the Robotic Anastomosis Competency Evaluation (RACE), a previously validated but simpler suturing assessment tool. Inter-rater reliability was measured with the intra-class correlation coefficient (ICC) for normally distributed values and the prevalence-adjusted bias-adjusted kappa (PABAK) for skewed distributions. Expert (≥100 prior robotic cases) and trainee (<100 prior robotic cases) EASE scores from the non-training cases were compared using a generalized linear mixed model.

RESULTS: After two rounds of the Delphi process, panelists agreed on 7 domains, 18 sub-skills, and 57 detailed sub-skill descriptions with CVI ≥0.80. Inter-rater reliability was moderately high (ICC median: 0.69, range: 0.51-0.97; PABAK median: 0.77, range: 0.62-0.97). Multiple EASE sub-skill scores distinguished surgeon experience. The Spearman's rho correlation between overall EASE and RACE scores was 0.635 (p = 0.003).

CONCLUSIONS: Through a rigorous CTA and Delphi process, we have developed EASE, a suturing assessment tool whose sub-skills can distinguish surgeon experience while maintaining rater reliability.
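For readers unfamiliar with the agreement and validity statistics named above, the following are their standard textbook definitions; they are not restated in the article itself and are included here only as a general reference. The item-level CVI used as the Delphi acceptance threshold is the proportion of panelists endorsing an item as relevant, and PABAK rescales the observed proportion of inter-rater agreement to correct for prevalence and bias:

\[
\mathrm{CVI} = \frac{n_{\text{endorsing}}}{N_{\text{panelists}}}, \qquad \mathrm{PABAK} = 2p_o - 1,
\]

where \(p_o\) is the observed proportion of agreement between raters. As a worked instance using the study's own numbers, the CVI ≥ 0.80 threshold with a 16-member panel requires endorsement by at least 13 panelists, since \(13/16 \approx 0.81\) while \(12/16 = 0.75\).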