DocumentCode :
3672175
Title :
Interleaved text/image Deep Mining on a large-scale radiology database
Author :
Hoo-Chang Shin; Le Lu; Lauren Kim; Ari Seff; Jianhua Yao; Ronald M. Summers
Author_Institution :
Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD 20892-1182, United States
fYear :
2015
fDate :
6/1/2015 12:00:00 AM
Firstpage :
1090
Lastpage :
1099
Abstract :
Despite tremendous progress in computer vision, effective learning on very large-scale (> 100K patients) medical image databases has been vastly hindered. We present an interleaved text/image deep learning system to extract and mine the semantic interactions of radiology images and reports from a national research hospital's picture archiving and communication system. Instead of using full 3D medical volumes, we focus on a collection of representative ~216K 2D key images/slices (selected by clinicians for diagnostic reference) with text-driven scalar and vector labels. Our system interleaves between unsupervised learning (e.g., latent Dirichlet allocation, recurrent neural net language models) on document- and sentence-level texts to generate semantic labels and supervised learning via deep convolutional neural networks (CNNs) to map from images to label spaces. Disease-related key words can be predicted for radiology images in a retrieval manner. We have demonstrated promising quantitative and qualitative results. The large-scale datasets of extracted key images and their categorization, embedded vector labels and sentence descriptions can be harnessed to alleviate the deep learning "data-hungry" obstacle in the medical domain.
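The abstract's core idea is a loop: unsupervised text mining on the reports produces labels, which then supervise an image model. A minimal schematic of that control flow is sketched below; all data, function names, and models are invented stand-ins (a keyword-count "topic model" in place of LDA, a nearest-centroid rule in place of the CNN), purely to illustrate how text-derived labels feed image-side supervision.

```python
# Schematic sketch of the interleaved text/image loop described above.
# NOT the authors' implementation: toy keyword "topics" stand in for
# LDA, and a nearest-centroid rule stands in for the CNN.
from collections import Counter

# Hypothetical toy data: each record pairs a report snippet with a
# 3-number "image" feature vector (invented for illustration).
records = [
    ("nodule in right upper lobe", (0.9, 0.1, 0.0)),
    ("pulmonary nodule noted",     (0.8, 0.2, 0.1)),
    ("pleural effusion present",   (0.1, 0.9, 0.2)),
    ("small effusion on left",     (0.2, 0.8, 0.1)),
]

TOPICS = {"nodule": "nodule", "effusion": "effusion"}

def text_to_label(report):
    """Unsupervised step stand-in: pick the topic whose keyword
    occurs most often in the report (LDA in the actual system)."""
    counts = Counter(w for w in report.split() if w in TOPICS)
    return TOPICS[counts.most_common(1)[0][0]]

def train_centroids(labeled):
    """Supervised step stand-in: per-label feature centroid
    (a deep CNN in the actual system)."""
    sums, ns = {}, Counter()
    for label, feat in labeled:
        acc = sums.setdefault(label, [0.0] * len(feat))
        for i, v in enumerate(feat):
            acc[i] += v
        ns[label] += 1
    return {l: tuple(v / ns[l] for v in acc) for l, acc in sums.items()}

def predict(centroids, feat):
    """Map an image feature vector to the nearest label centroid."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda l: dist(centroids[l], feat))

# Interleave: text mining yields labels, which supervise the image model.
labeled = [(text_to_label(r), f) for r, f in records]
model = train_centroids(labeled)
print(predict(model, (0.85, 0.15, 0.05)))  # prints "nodule"
```

The point of the sketch is only the data flow: no manual image annotation is needed, since labels come entirely from the paired report text.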
Keywords :
"Radiology","Semantics","Machine learning","Medical diagnostic imaging","Visualization"
Publisher :
ieee
Conference_Titel :
2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Electronic_ISSN :
1063-6919
Type :
conf
DOI :
10.1109/CVPR.2015.7298712
Filename :
7298712