Abstract:
Summary form only given. The SVM learning model has been successfully applied to an enormously broad spectrum of application domains and has become a mainstream modern machine learning technology. Unfortunately, along with its success and popularity, a grave concern has also arisen over its suitability for big data learning applications. For example, in some biomedical applications, dataset sizes may reach hundreds of thousands; in social media applications, they can easily be in the order of millions. This curse of dimensionality presents a new challenge, calling for new learning paradigms as well as application-specific parallel and distributed hardware and software. This talk will explore cost-effective designs for kernel-based machine learning and classification in big data learning applications. It will present a recursive tensor-based classification algorithm, especially amenable to systolic/wavefront array processors, which may potentially expedite real-time prediction speed by orders of magnitude. For time-series analysis in nonstationary environments, it is vital to develop time-adaptive learning algorithms that allow incremental and active learning. The talk will tackle active learning problems from two kernel-induced perspectives: one in the intrinsic space and another in the empirical space. If time permits, the talk will show an algorithmic example highlighting the application of Map-Reduce technologies to supervised kernel (Slackmin) learning under a parallel and distributed processing framework.
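As a minimal illustration of the Map-Reduce idea mentioned above, the sketch below partitions kernel evaluations over data chunks: a map step computes partial kernel rows per chunk, and a reduce step merges them into the full kernel matrix. This is only a toy sketch of the general pattern; the Slackmin learning algorithm itself is not shown, and all names (`rbf`, `map_chunk`, `reduce_rows`) are illustrative, not from the talk.

```python
# Toy sketch (not the talk's algorithm): MapReduce-style kernel matrix
# computation, partitioning kernel evaluations across data chunks.
import math
from functools import reduce

def rbf(x, y, gamma=0.5):
    """RBF kernel value between two feature vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def map_chunk(chunk, support):
    """Map step: emit (row_index, kernel_row) pairs for one data chunk."""
    return [(i, [rbf(x, s) for s in support]) for i, x in chunk]

def reduce_rows(acc, pairs):
    """Reduce step: merge partial kernel rows into one matrix (as a dict)."""
    acc.update(dict(pairs))
    return acc

# Four 2-D training points, indexed 0..3.
data = [(0, [0.0, 0.0]), (1, [1.0, 0.0]), (2, [0.0, 1.0]), (3, [1.0, 1.0])]
support = [v for _, v in data]
chunks = [data[:2], data[2:]]                        # simulate two workers
partials = [map_chunk(c, support) for c in chunks]   # map phase
K = reduce(reduce_rows, partials, {})                # reduce phase
```

In a real deployment the chunks would live on separate nodes and the reduce step would run in the Map-Reduce framework's shuffle/merge stage; here both phases are simulated in-process to keep the example self-contained.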
Conference Title:
2013 IEEE 24th International Conference on Application-Specific Systems, Architectures and Processors (ASAP)