Toward Improving Health Literacy in Patient Education Materials with Neural Machine Translation Models. Academic Article

Overview

abstract

  • Health literacy is a central focus of Healthy People 2030, the fifth iteration of the U.S. national health goals and objectives. People with low health literacy often have trouble understanding health information, following post-visit instructions, and using prescriptions correctly, which results in worse health outcomes and serious health disparities. In this study, we propose to leverage natural language processing techniques to improve health literacy in patient education materials by automatically translating health-illiterate language in a given sentence into plain language. We scraped patient education materials from four online health information websites: MedlinePlus.gov, Drugs.com, Mayoclinic.org, and Reddit.com. We trained and tested state-of-the-art neural machine translation (NMT) models on a silver-standard training dataset and a gold-standard testing dataset, respectively. The experimental results showed that the Bidirectional Long Short-Term Memory (BiLSTM) NMT model outperformed Bidirectional Encoder Representations from Transformers (BERT)-based NMT models. We also verified the effectiveness of the NMT models in translating health-illiterate language by comparing the ratio of health-illiterate terms in a sentence before and after translation. The proposed NMT models were able to identify complicated words and simplify them into layman's language, but they still suffer from issues with sentence completeness, fluency, and readability, and have difficulty translating certain medical terms.
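
  The evaluation idea mentioned above — comparing the ratio of health-illiterate language in a sentence before and after translation — can be sketched as a simple lexicon-based metric. This is an illustrative assumption, not the paper's actual implementation: the lexicon, tokenizer, and example sentences below are hypothetical placeholders.

  ```python
  # Hypothetical sketch: fraction of tokens in a sentence that are
  # "health-illiterate" (complex medical) terms, computed against a
  # small illustrative lexicon. The real study would use a much larger
  # vocabulary and a proper tokenizer.
  COMPLEX_TERMS = {"myocardial", "infarction", "hypertension", "analgesic"}

  def illiterate_ratio(sentence: str) -> float:
      """Return the fraction of tokens that are complex medical terms."""
      tokens = [t.strip(".,;:").lower() for t in sentence.split()]
      if not tokens:
          return 0.0
      hits = sum(1 for t in tokens if t in COMPLEX_TERMS)
      return hits / len(tokens)

  # Example sentences (invented for illustration): a simplification
  # system should lower the ratio while preserving the meaning.
  original = "The patient suffered a myocardial infarction due to hypertension."
  simplified = "The patient had a heart attack caused by high blood pressure."

  print(illiterate_ratio(original))    # nonzero: complex terms present
  print(illiterate_ratio(simplified))  # 0.0: no lexicon terms remain
  ```

  A drop in this ratio after translation is the signal the abstract describes; it says nothing about fluency or completeness, which is why those remain open problems for the NMT models.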

publication date

  • June 16, 2023

Identity

PubMed Central ID

  • PMC10283125

Scopus Document Identifier

  • 85095410329

Digital Object Identifier (DOI)

  • 10.1504/ijcse.2020.110536

PubMed ID

  • 37350905

Additional Document Info

volume

  • 2023