• DocumentCode
    49811
  • Title
    Multimodal Sparse Representation-Based Classification for Lung Needle Biopsy Images
  • Author
    Yinghuan Shi ; Yang Gao ; Yubin Yang ; Ying Zhang ; Dong Wang
  • Author_Institution
    State Key Lab. for Novel Software Technol., Nanjing Univ., Nanjing, China
  • Volume
    60
  • Issue
    10
  • fYear
    2013
  • fDate
    Oct. 2013
  • Firstpage
    2675
  • Lastpage
    2685
  • Abstract
    Lung needle biopsy image classification is a critical task for computer-aided lung cancer diagnosis. In this study, a novel method, multimodal sparse representation-based classification (mSRC), is proposed for classifying lung needle biopsy images. In the data acquisition procedure of our method, the cell nuclei are automatically segmented from images captured from needle biopsy specimens. Then, features of three modalities (shape, color, and texture) are extracted from the segmented cell nuclei. After this procedure, mSRC goes through a training phase and a testing phase. In the training phase, three discriminative subdictionaries corresponding to the shape, color, and texture information are jointly learned by a genetic algorithm guided multimodal dictionary learning approach. The dictionary learning aims to select the most discriminative samples and to encourage large disagreement among the different subdictionaries. In the testing phase, when a new image arrives, a hierarchical fusion strategy is applied: it first predicts the labels of the cell nuclei by fusing the three modalities, then predicts the label of the image by majority voting. Our method is evaluated on a real image set of 4372 cell nuclei regions segmented from 271 images. These cell nuclei regions can be divided into five classes: four cancerous classes (corresponding to four types of lung cancer) plus one normal class (no cancer). The results demonstrate that multimodal information is important for lung needle biopsy image classification. Moreover, compared to several state-of-the-art methods (LapRLS, MCMI-AB, mcSVM, ESRC, KSRC), the proposed mSRC achieves significant improvement (mean accuracy of 88.1%, precision of 85.2%, recall of 92.8%, etc.), especially for distinguishing the different cancerous types.
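    The hierarchical fusion strategy described in the abstract (per-nucleus classification by fusing modality-wise sparse-representation residuals, then image-level majority voting) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes precomputed per-modality feature vectors and subdictionaries, and it substitutes ridge-regularized least-squares coding for the l1 sparse coding used in true SRC; all function names (`src_residuals`, `classify_nucleus`, `classify_image`) are hypothetical.

    ```python
    import numpy as np
    from collections import Counter

    def src_residuals(x, D, labels, lam=0.1):
        """Code feature x over dictionary D (columns = training atoms),
        then measure how well each class's atoms alone reconstruct x.
        Ridge regression stands in for the l1 minimization of real SRC."""
        n = D.shape[1]
        a = np.linalg.solve(D.T @ D + lam * np.eye(n), D.T @ x)
        residuals = {}
        for c in set(labels):
            mask = np.array([lab == c for lab in labels], dtype=float)
            residuals[c] = np.linalg.norm(x - D @ (a * mask))
        return residuals

    def classify_nucleus(feats, dicts, labels):
        """Fuse modalities (e.g. shape/color/texture) by summing the
        per-class residuals across modalities; predict the argmin class."""
        classes = set(labels)
        total = {c: 0.0 for c in classes}
        for modality, x in feats.items():
            r = src_residuals(x, dicts[modality], labels)
            for c in classes:
                total[c] += r[c]
        return min(total, key=total.get)

    def classify_image(nuclei_feats, dicts, labels):
        """Majority vote over the predicted labels of a whole image's nuclei."""
        votes = [classify_nucleus(f, dicts, labels) for f in nuclei_feats]
        return Counter(votes).most_common(1)[0][0]
    ```

    In this sketch each modality's subdictionary shares one atom-to-class label list; the paper instead learns the three subdictionaries jointly with a genetic algorithm, which this toy code does not attempt.
    
    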
  • Keywords
    cancer; cellular biophysics; compressed sensing; data acquisition; diagnostic radiography; feature extraction; genetic algorithms; image classification; image fusion; image representation; image segmentation; image texture; learning systems; lung; medical image processing; needles; ESRC; KSRC; LapRLS; MCMI-AB; automatic segmentation; cancerous class; cell nuclei region; computer-aided lung cancer diagnosis; data acquisition procedure; feature extraction; genetic algorithm guided multimodal dictionary learning approach; hierarchical fusion strategy; image color; image shape; image texture; lung needle biopsy image classification; mSRC; mcSVM; multimodal sparse representation-based classification; needle biopsy specimen; testing phase; training phase; Biological cells; Cancer; Feature extraction; Image color analysis; Lungs; Shape; Training; Dictionary learning; genetic algorithm; lung cancer image classification; sparse representation-based classification (SRC); Algorithms; Biopsy, Needle; Cell Nucleus; Humans; Image Interpretation, Computer-Assisted; Lung; Lung Neoplasms; Pattern Recognition, Automated; Reproducibility of Results; Sensitivity and Specificity;
  • fLanguage
    English
  • Journal_Title
    Biomedical Engineering, IEEE Transactions on
  • Publisher
    ieee
  • ISSN
    0018-9294
  • Type
    jour
  • DOI
    10.1109/TBME.2013.2262099
  • Filename
    6514483