DocumentCode :
1176090
Title :
Learning possibilistic graphical models from data
Author :
Borgelt, Christian ; Kruse, Rudolf
Author_Institution :
Dept. of Knowledge Process. & Language Eng., Otto-von-Guericke-Univ. of Magdeburg, Germany
Volume :
11
Issue :
2
fYear :
2003
fDate :
April 1, 2003
Firstpage :
159
Lastpage :
172
Abstract :
Graphical models - especially probabilistic networks such as Bayesian networks and Markov networks - are widely used to make reasoning in high-dimensional domains feasible. Since constructing them manually can be tedious and time consuming, a large part of recent research has been devoted to learning them from data. However, if the dataset to learn from contains imprecise information in the form of sets of alternatives instead of precise values, this learning task poses serious problems. In this paper, we survey an approach to cope with these problems that is not based on probability theory, as more common approaches such as expectation maximization are, but instead uses possibility theory as the underlying calculus of the graphical model. We provide semantic foundations for possibilistic graphical models, explain the rationale of possibilistic decomposition as well as the graphical representation of decompositions of possibility distributions, and finally discuss the main approaches to learning possibilistic graphical models from data.
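The following Python sketch is not taken from the paper; the tiny dataset, the attribute names, and the fraction-of-compatible-records reading of possibility degrees are illustrative assumptions. It shows the general idea the abstract refers to: inducing a possibility distribution from imprecise, set-valued records, projecting it to marginal distributions by maximum, and recombining the marginals with the minimum, as a possibilistic decomposition would.

```python
# Hypothetical sketch: possibility degrees induced from set-valued data,
# maximum projection to marginals, and min-based recombination.
from itertools import product

# Imprecise dataset: each record assigns a *set* of alternatives to each
# of two attributes A and B (a precise value is a singleton set).
records = [
    ({"a1"}, {"b1"}),
    ({"a1", "a2"}, {"b1"}),        # imprecise on attribute A
    ({"a2"}, {"b1", "b2"}),        # imprecise on attribute B
]

domain_a = {"a1", "a2"}
domain_b = {"b1", "b2"}
n = len(records)

def possibility(a, b):
    """Degree of possibility of the precise tuple (a, b): here taken as the
    fraction of records whose set-valued observation is compatible with it
    (an assumption made for this illustration)."""
    compatible = sum(1 for (sa, sb) in records if a in sa and b in sb)
    return compatible / n

joint = {(a, b): possibility(a, b) for a, b in product(domain_a, domain_b)}

# Marginal possibility distributions are obtained by maximum projection.
marg_a = {a: max(joint[(a, b)] for b in domain_b) for a in domain_a}
marg_b = {b: max(joint[(a, b)] for a in domain_a) for b in domain_b}

# A (trivial, edgeless) decomposition recombines the marginals with the
# minimum; learning a possibilistic graphical model amounts to finding a
# structure whose min-combination approximates the joint distribution well.
recombined = {(a, b): min(marg_a[a], marg_b[b])
              for a, b in product(domain_a, domain_b)}

for key in sorted(joint):
    print(key, "joint=%.2f" % joint[key], "min-combined=%.2f" % recombined[key])
```

Comparing the printed joint and min-combined degrees indicates how much information this particular (edgeless) decomposition loses; richer graph structures reduce that loss.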
Keywords :
graph theory; inference mechanisms; learning (artificial intelligence); possibility theory; probability; context model; graphical models; learning from data; possibilistic networks; possibility theory; probabilistic networks; probability; reasoning; Calculus; Databases; Graph theory; Graphical models; Humans; Iterative algorithms; Markov random fields; Possibility theory; Probability; Uncertainty;
fLanguage :
English
Journal_Title :
IEEE Transactions on Fuzzy Systems
Publisher :
IEEE
ISSN :
1063-6706
Type :
jour
DOI :
10.1109/TFUZZ.2003.809887
Filename :
1192694