DocumentCode :
3775893
Title :
IAPR keynote lecture IV: Deep learning
Author :
Yoshua Bengio
Author_Institution :
University of Montreal, Canada
fYear :
2015
Abstract :
Deep learning arose around 2006 as a renewal of neural network research that allowed such models to have more layers. Theoretical investigations have shown that functions obtained as deep compositions of simpler functions (which include both deep and recurrent nets) can express highly varying functions (with many ups and downs, and many distinguishable input regions) much more efficiently (with fewer parameters) than shallower alternatives. Empirical work across a variety of applications has demonstrated that, when well trained, such deep architectures can be highly successful, remarkably breaking through the previous state of the art in many areas, including speech recognition, object recognition, language modeling, and transfer learning. This talk summarizes the advances that made these breakthroughs possible, and ends with questions about some major challenges still ahead of researchers on the climb towards AI-level competence.
Publisher :
IEEE
Conference_Titel :
2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR)
Electronic_ISBN :
2327-0985
Type :
conf
DOI :
10.1109/ACPR.2015.7486451
Filename :
7486451