Authors:
Nezamabadi-pour, H., Department of Electrical Engineering, Shahid Bahonar University of Kerman, Kerman, Iran; Kashef, Sh., Department of Electrical Engineering, Shahid Bahonar University of Kerman, Kerman, Iran
Abstract:
Multi-label classification has gained significant attention in recent years, due to the increasing number
of modern applications involving multi-label data. Despite its relatively short history, many different
approaches have been proposed to solve the multi-label classification task. LIFT is a multi-label classifier
that takes a new approach to multi-label learning by leveraging label-specific features. The idea behind
label-specific features is that each class label has its own characteristics and is best determined by the
features that are most discriminative for that label. LIFT employs clustering to discover these properties
of the data. More precisely, for each label, LIFT divides the training instances into a positive and a
negative set, consisting of the training examples with and without that label, respectively. It then selects
representative centroids from the positive and negative instances of each label by k-means clustering and
replaces the original features of a sample with its distances to these representatives. With these newly
constructed features, the dimensionality of the feature space is reduced significantly. However, computing
the new features still requires the original features, so in practice the overall complexity of the
multi-label classification process does not diminish. In this paper, we modify LIFT to reduce the
computational burden of the classifier while improving, or at least preserving, its performance. The
experimental results show that the proposed algorithm achieves both goals simultaneously.
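The label-specific feature mapping described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the helper names `kmeans` and `lift_features` are invented, the number of clusters is fixed at k=2 per side (LIFT itself ties the cluster count to the number of positive/negative instances via a ratio parameter), and a plain Lloyd's-algorithm k-means is used.

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Plain k-means (Lloyd's algorithm); returns the k centroids."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # assign each point to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        # move each centroid to the mean of its assigned points
        for j in range(k):
            if np.any(assign == j):
                centers[j] = X[assign == j].mean(axis=0)
    return centers

def lift_features(X, y, k=2):
    """LIFT-style mapping for one label. y is a 0/1 vector marking which
    training instances carry the label. Cluster the positive and negative
    instances separately, then represent every sample by its distances to
    all 2*k centroids (the new label-specific features)."""
    pos_centers = kmeans(X[y == 1], k)
    neg_centers = kmeans(X[y == 0], k)
    centers = np.vstack([pos_centers, neg_centers])
    return np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)

# toy data: 6 samples with 10 original features and one binary label
rng = np.random.default_rng(1)
X = rng.normal(size=(6, 10))
y = np.array([1, 1, 1, 0, 0, 0])
Z = lift_features(X, y, k=2)
print(Z.shape)  # (6, 4): distances to 2 positive + 2 negative centroids
```

The example also shows the point made in the abstract: the mapped space is lower-dimensional (10 features become 4 distances), yet computing `Z` still needs the original feature matrix `X`.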