Title of article :
Learning to construct knowledge bases from the World Wide Web
Author/Authors :
Mark Craven, Dan DiPasquo, Dayne Freitag, Andrew McCallum, Tom Mitchell, Kamal Nigam, Seán Slattery
Issue Information :
Journal issue, serial year 2000
Pages :
45
From page :
69
To page :
113
Abstract :
The World Wide Web is a vast source of information accessible to computers, but understandable only to humans. The goal of the research described here is to automatically create a computer-understandable knowledge base whose content mirrors that of the World Wide Web. Such a knowledge base would enable much more effective retrieval of Web information and promote new uses of the Web to support knowledge-based inference and problem solving. Our approach is to develop a trainable information extraction system that takes two inputs. The first is an ontology that defines the classes (e.g., company, person, employee, product) and relations (e.g., employed_by, produced_by) of interest when creating the knowledge base. The second is a set of training data consisting of labeled regions of hypertext that represent instances of these classes and relations. Given these inputs, the system learns to extract information from other pages and hyperlinks on the Web. This article describes our general approach, several machine learning algorithms for this task, and promising initial results with a prototype system that has created a knowledge base describing university people, courses, and research projects.
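Illustrative sketch :
The abstract describes a learner trained on labeled hypertext regions to assign pages to ontology classes, and the keywords list text classification among the techniques. The following is a minimal sketch, assuming a bag-of-words naive Bayes page classifier; the class labels ("course", "faculty", "student") echo the university domain named in the abstract, but the class name NaiveBayesPageClassifier, the toy training pages, and all other identifiers are illustrative assumptions, not the authors' code or data.

import math
import re
from collections import Counter, defaultdict

def tokenize(text):
    """Lowercase a page's text and split it into word tokens."""
    return re.findall(r"[a-z]+", text.lower())

class NaiveBayesPageClassifier:
    def __init__(self):
        self.class_counts = Counter()             # number of training pages per class
        self.word_counts = defaultdict(Counter)   # per-class word frequencies
        self.vocab = set()

    def train(self, labeled_pages):
        """labeled_pages: iterable of (page_text, class_label) pairs."""
        for text, label in labeled_pages:
            self.class_counts[label] += 1
            for w in tokenize(text):
                self.word_counts[label][w] += 1
                self.vocab.add(w)

    def classify(self, text):
        """Return the class with the highest posterior log-probability."""
        total_pages = sum(self.class_counts.values())
        words = tokenize(text)
        best_label, best_score = None, float("-inf")
        for label, n_pages in self.class_counts.items():
            # log prior plus Laplace-smoothed log likelihood of each word
            score = math.log(n_pages / total_pages)
            n_words = sum(self.word_counts[label].values())
            for w in words:
                count = self.word_counts[label][w]
                score += math.log((count + 1) / (n_words + len(self.vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Illustrative usage with made-up training pages:
clf = NaiveBayesPageClassifier()
clf.train([
    ("syllabus lecture homework exam grading", "course"),
    ("office hours research interests publications", "faculty"),
    ("my advisor my thesis student homepage", "student"),
])
print(clf.classify("lecture notes and homework assignments"))  # -> course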
Keywords :
Relational learning, Information extraction, Machine learning, World Wide Web, Knowledge bases, Web spider, Text classification
Journal title :
Artificial Intelligence
Serial Year :
2000
Record number :
1206833