Title :
Continuous Learning Method for a Continuous Dynamical Control in a Partially Observable Universe
Author :
Dambreville, Frédéric
Author_Institution :
Délégation Générale pour l'Armement, CTA, Arcueil
Abstract :
In this paper, we are interested in the optimal dynamical control of sensors based on partial and noisy observations. These problems belong to the POMDP family; in our case, however, both the controls and the decisions are continuous-valued. Whereas dynamic programming methods rely on a discretization of the problem, we deal here directly with the continuous data. Moreover, our purpose is to take the full range of past observations into account. Our approach is to model the POMDP strategies by means of dynamic Bayesian networks (DBNs). A method based on the cross-entropy is implemented for optimizing the parameters of such DBNs with respect to the POMDP problem. In this particular work, the dynamic Bayesian networks are built from semi-continuous probabilistic laws, so as to ensure the manipulation of continuous data.
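The cross-entropy optimization mentioned in the abstract can be illustrated generically. The sketch below is not the paper's method (which optimizes DBN parameters against a POMDP reward); it is a minimal, generic cross-entropy parameter search over a toy objective, with the candidate distribution, population size, and elite fraction all being illustrative assumptions:

```python
import random
import statistics

def cross_entropy_optimize(score, dim, iters=50, pop=100, elite_frac=0.2, seed=0):
    """Generic cross-entropy method over a real-valued parameter vector.

    Repeatedly samples candidates from a diagonal Gaussian, keeps the
    top-scoring 'elite' fraction, and refits the Gaussian to the elites.
    """
    rng = random.Random(seed)
    mean = [0.0] * dim
    std = [1.0] * dim
    n_elite = max(2, int(pop * elite_frac))
    for _ in range(iters):
        candidates = [[rng.gauss(m, s) for m, s in zip(mean, std)]
                      for _ in range(pop)]
        candidates.sort(key=score, reverse=True)
        elites = candidates[:n_elite]
        for d in range(dim):
            col = [e[d] for e in elites]
            mean[d] = statistics.fmean(col)
            std[d] = statistics.pstdev(col) + 1e-6  # jitter avoids premature collapse
    return mean

# Toy objective: maximize -||x - target||^2 (hypothetical stand-in for a POMDP reward)
target = [1.5, -0.5]
best = cross_entropy_optimize(
    lambda x: -sum((xi - ti) ** 2 for xi, ti in zip(x, target)), dim=2)
```

In the paper's setting, the scored "candidates" would instead be parameter vectors of the dynamic Bayesian network, evaluated by simulating the controlled sensor process.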
Keywords :
Markov processes; belief networks; decision theory; dynamic programming; entropy; learning systems; probability; sensors; POMDP family; continuous dynamical control; continuous learning method; cross-entropy; dynamic Bayesian network; dynamic programming method; noisy observation; partially observable Markov decision process; semicontinuous probabilistic law; Bayesian methods; Dissolved gas analysis; Dynamic programming; Learning systems; Manipulator dynamics; Optimal control; Optimization methods; Process planning; Resource management; Surveillance; Cross-entropy method; Dynamical control; Optimization; Resource allocation; Tracking;
Conference_Titel :
2006 9th International Conference on Information Fusion
Conference_Location :
Florence
Print_ISBN :
1-4244-0953-5
Electronic_ISBN :
0-9721844-6-5
DOI :
10.1109/ICIF.2006.301623