Attend Who is Weak: Pruning-assisted Medical Image Localization under Sophisticated and Implicit Imbalances. Academic Article

Overview

abstract

  • Deep neural networks (DNNs) have rapidly become the de facto choice for medical image understanding tasks. However, DNNs are notoriously fragile to class imbalance in image classification. We further point out that this imbalance fragility can be amplified in more sophisticated tasks such as pathology localization, because imbalances in such problems can take highly complex and often implicit forms. For example, different pathologies can have different sizes or colors (w.r.t. the background), different underlying demographic distributions, and, in general, different levels of recognition difficulty, even in a meticulously curated, balanced distribution of training data. In this paper, we propose to use pruning to automatically and adaptively identify hard-to-learn (HTL) training samples, and to improve pathology localization by attending to them explicitly during training in supervised, semi-supervised, and weakly-supervised settings. Our main inspiration is drawn from the recent finding that deep classification models have difficult-to-memorize samples which can be effectively exposed through network pruning [15], and we extend this observation beyond classification for the first time. We also present an interesting demographic analysis that illustrates the ability of HTLs to capture complex demographic imbalances. Our extensive experiments on the skin lesion localization task show that paying additional attention to HTLs improves localization performance by ~2-3% across multiple training settings.
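
  The following is a minimal illustrative sketch of the pruning-based idea described in the abstract, not the authors' released code: a trained classifier is copied and magnitude-pruned, samples whose predictions flip between the dense and pruned models are flagged as hard-to-learn (HTL), and those samples can then be up-weighted when training the downstream localization model. All function names, the pruning amount, and the up-weighting factor are assumptions for illustration.

```python
# Sketch only: expose hard-to-learn (HTL) samples via magnitude pruning,
# then give them extra weight during localization training.
import copy
import torch
import torch.nn.utils.prune as prune

def find_htl_indices(model, dataset, prune_amount=0.5, device="cpu"):
    """Return indices of samples whose predicted class flips after pruning.

    `model` is any trained torch classifier; `dataset` is assumed to yield
    (image, label) pairs; `prune_amount` is a hypothetical default for the
    fraction of weights removed per Conv2d/Linear layer.
    """
    # Copy the model and prune the smallest-magnitude weights in each layer.
    pruned = copy.deepcopy(model).to(device).eval()
    for module in pruned.modules():
        if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
            prune.l1_unstructured(module, name="weight", amount=prune_amount)

    model = model.to(device).eval()
    loader = torch.utils.data.DataLoader(dataset, batch_size=64)  # no shuffling,
    # so batch positions map back to dataset indices.
    htl, offset = [], 0
    with torch.no_grad():
        for images, _ in loader:
            images = images.to(device)
            dense_pred = model(images).argmax(dim=1)
            sparse_pred = pruned(images).argmax(dim=1)
            # Samples the pruned model gets "wrong" relative to the dense model
            # are treated as hard-to-learn.
            flipped = (dense_pred != sparse_pred).nonzero(as_tuple=True)[0]
            htl.extend((flipped + offset).tolist())
            offset += images.size(0)
    return htl

# Usage idea (assumed weighting scheme): oversample HTLs when training the
# localization network, e.g. via a WeightedRandomSampler or a per-sample loss weight.
# weights = torch.ones(len(dataset)); weights[htl_indices] = 3.0  # illustrative factor
# sampler = torch.utils.data.WeightedRandomSampler(weights, num_samples=len(dataset))
```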

publication date

  • February 6, 2023

Identity

PubMed Central ID

  • PMC10089697

Scopus Document Identifier

  • 85148996283

Digital Object Identifier (DOI)

  • 10.1109/wacv56688.2023.00496

PubMed ID

  • 37051561

Additional Document Info

volume

  • 2023