Title :
Knowledge Representation and Possible Worlds for Neural Networks
Author :
Healy, Michael J. ; Caudell, Thomas P.
Abstract :
The semantics of neural networks can be analyzed mathematically as a distributed system of knowledge and as systems of possible worlds expressed within that knowledge. Learning in a neural network can then be analyzed as an attempt to acquire a representation of that knowledge. We express the knowledge system, the systems of possible worlds, and neural architectures at different stages of learning as categories. Diagrammatic constructs express learning in terms of pre-existing knowledge representations, and functors express structure-preserving associations between the categories. This analysis provides a mathematical vehicle for understanding connectionist systems and yields design principles for advancing the state of the art.
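The abstract's central device is the functor: a map between categories that sends objects to objects and arrows to arrows while preserving sources, targets, and identities. As a minimal illustrative sketch (not taken from the paper; the category and arrow names here are hypothetical), a finite category can be encoded as a set of typed arrows, and functoriality checked directly:

```python
# Illustrative sketch only: a tiny "source" category C with two objects and one
# non-identity arrow, and a candidate functor F : C -> D. All names are invented
# for illustration; they do not come from the paper.

# Arrows are (name, source, target) triples; identities are included explicitly.
C_arrows = {("id_A", "A", "A"), ("id_B", "B", "B"), ("f", "A", "B")}
D_arrows = {("id_X", "X", "X"), ("id_Y", "Y", "Y"), ("g", "X", "Y")}

# A functor is an object map plus an arrow map.
F_obj = {"A": "X", "B": "Y"}
F_arr = {"id_A": "id_X", "id_B": "id_Y", "f": "g"}

def is_functor(F_obj, F_arr, C_arrows, D_arrows):
    """Check structure preservation: every C-arrow a : s -> t must map to a
    D-arrow F(a) : F(s) -> F(t), and identities must map to identities.
    (With only one non-identity arrow, the composition condition is vacuous,
    so it is omitted here for brevity.)"""
    D_index = {name: (src, tgt) for name, src, tgt in D_arrows}
    for name, src, tgt in C_arrows:
        image = F_arr.get(name)
        if image not in D_index:
            return False                      # arrow has no image in D
        if D_index[image] != (F_obj[src], F_obj[tgt]):
            return False                      # source/target not preserved
        if name.startswith("id_") and not image.startswith("id_"):
            return False                      # identity not sent to identity
    return True

print(is_functor(F_obj, F_arr, C_arrows, D_arrows))  # True for this example
```

In the paper's setting, the categories would instead encode knowledge systems and neural architectures at different stages of learning, with functors playing the same structure-preserving role.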
Keywords :
distributed programming; knowledge representation; learning (artificial intelligence); neural nets; distributed system; learning; neural architectures; neural networks; computer networks; computer science; distributed computing; electronic mail; knowledge based systems; knowledge engineering; mathematical model; vehicles
Conference_Title :
2006 International Joint Conference on Neural Networks (IJCNN '06)
Conference_Location :
Vancouver, BC
Print_ISBN :
0-7803-9490-9
DOI :
10.1109/IJCNN.2006.247264