Abstract
Annotating unstructured text in Electronic Health Record (EHR) data is usually a necessary step for conducting machine learning research on such datasets. Manual annotation by domain experts yields the highest-quality data, but it has become increasingly impractical as the volume of EHR data grows. In this article, we examine the effectiveness of crowdsourcing with unscreened online workers as an alternative for transforming unstructured text in EHRs into annotated data that are directly usable in supervised learning models. We find that crowdsourced annotations are just as effective as expert annotations for training a sentence classification model to detect mentions of abnormal ear anatomy in audiology-related radiology reports. Furthermore, we find that enabling workers to self-report a confidence level for each annotation can help researchers pinpoint less-accurate annotations that require expert scrutiny. Our findings suggest that even crowd workers without specific domain knowledge can contribute effectively to the task of annotating unstructured EHR datasets.
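As a rough illustration of the workflow the abstract describes, the sketch below trains a bag-of-words logistic regression sentence classifier (per the article's keywords) on crowd-labeled sentences and routes low-confidence annotations to expert review. It is a minimal sketch, not the authors' pipeline: scikit-learn is assumed, and the example sentences, labels, confidence scores, and the 0.6 threshold are invented for illustration.

```python
# Illustrative sketch only: scikit-learn is assumed; the sentences, labels,
# confidence scores, and review threshold below are hypothetical examples,
# not the study's actual data or implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Crowd-annotated sentences: (text, label, self-reported confidence),
# where label 1 = abnormal ear anatomy is mentioned.
annotations = [
    ("The right cochlea appears malformed.",               1, 0.9),
    ("Middle ear structures are unremarkable.",            0, 0.8),
    ("Enlarged vestibular aqueduct is noted on the left.", 1, 0.7),
    ("No abnormality of the ossicular chain is seen.",     0, 0.5),
]

# Flag low-confidence annotations for expert scrutiny before training.
REVIEW_THRESHOLD = 0.6  # hypothetical cutoff
needs_review = [(s, y) for s, y, c in annotations if c < REVIEW_THRESHOLD]
trusted = [(s, y) for s, y, c in annotations if c >= REVIEW_THRESHOLD]

sentences = [s for s, _ in trusted]
labels = [y for _, y in trusted]

# Bag-of-words (TF-IDF) features feeding a logistic regression classifier.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(sentences, labels)

print(clf.predict(["The left cochlea is dysplastic."]))
print(f"{len(needs_review)} annotation(s) flagged for expert review")
```

In practice, multiple crowd judgments per sentence would typically be collected and aggregated before training; the confidence-based routing shown here simply mirrors the abstract's point that self-reported confidence can identify annotations needing expert review.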
| Field | Value |
|---|---|
| Original language | English (US) |
| Pages (from-to) | 86-92 |
| Number of pages | 7 |
| Journal | Journal of Biomedical Informatics |
| Volume | 69 |
| DOIs | |
| State | Published - May 1 2017 |
| Externally published | Yes |
All Science Journal Classification (ASJC) codes
- Health Informatics
- Computer Science Applications
Keywords
- Crowdsourcing
- EHR data
- Logistic regression
- Sentence classification
- Text annotations