
Knowledge extraction

Knowledge extraction is the creation of knowledge from structured (relational databases, XML) and unstructured (text, documents, images) sources. The resulting knowledge needs to be in a machine-readable and machine-interpretable format and must represent knowledge in a manner that facilitates inferencing. Although it is methodically similar to information extraction (NLP) and ETL (data warehouse), the main criterion is that the extraction result goes beyond the creation of structured information or the transformation into a relational schema. It requires either the reuse of existing formal knowledge (reusing identifiers or ontologies) or the generation of a schema based on the source data.

The RDB2RDF W3C group is currently standardizing a language for the extraction of RDF from relational databases. Another popular example of knowledge extraction is the transformation of Wikipedia into structured data and also the mapping to existing knowledge (see DBpedia and Freebase).


  • 1 Overview
  • 2 Examples
    • 2.1 Entity linking
    • 2.2 Relational databases to RDF
  • 3 Extraction from structured sources to RDF
    • 3.1 1:1 Mapping from RDB Tables/Views to RDF Entities/Attributes/Values
    • 3.2 Complex mappings of relational databases to RDF
    • 3.3 XML
    • 3.4 Survey of Methods / Tools
  • 4 Extraction from natural language sources
    • 4.1 Traditional information extraction (IE)
    • 4.2 Ontology-based information extraction (OBIE)
    • 4.3 Ontology learning (OL)
    • 4.4 Semantic annotation (SA)
    • 4.5 Tools
  • 5 Knowledge discovery
    • 5.1 Input data
    • 5.2 Output formats
  • 6 See also
  • 7 References


Overview

After the standardization of knowledge representation languages such as RDF and OWL, much research has been conducted in the area, especially regarding transforming relational databases into RDF, identity resolution, knowledge discovery and ontology learning. The general process uses traditional methods from information extraction and extract, transform, load (ETL), which transform the data from the sources into structured formats.

The following criteria can be used to categorize approaches in this topic (some of them only account for extraction from relational databases):

Source: Which data sources are covered (text, relational databases, XML, CSV)?
Exposition: How is the extracted knowledge made explicit (ontology file, semantic database)? How can you query it?
Synchronization: Is the knowledge extraction process executed once to produce a dump, or is the result synchronized with the source? Static or dynamic? Are changes to the result written back (bi-directional)?
Reuse of vocabularies: The tool is able to reuse existing vocabularies in the extraction. For example, the table column 'firstName' can be mapped to foaf:firstName. Some automatic approaches are not capable of mapping vocabularies.
Automatization: The degree to which the extraction is assisted/automated (manual, GUI, semi-automatic, automatic).
Requires a domain ontology: A pre-existing ontology is needed to map to it. So either a mapping is created or a schema is learned from the source (ontology learning).


Examples

Entity linking

DBpedia Spotlight, OpenCalais, Dandelion dataTXT, the Zemanta API, Extractiv and PoolParty Extractor analyze free text via named-entity recognition, then disambiguate candidates via name resolution and link the found entities to the DBpedia knowledge repository (Dandelion dataTXT demo, DBpedia Spotlight web demo, or PoolParty Extractor demo).

"President Obama called Wednesday on Congress to extend a tax break for students included in last year's economic stimulus package, arguing that the policy provides more generous assistance."

As President Obama is linked to a DBpedia LinkedData resource, further information can be retrieved automatically and a semantic reasoner can, for example, infer that the mentioned entity is of the type Person (using FOAF) and of the type Presidents of the United States (using YAGO). Counter examples: methods that only recognize entities or link to Wikipedia articles and other targets that do not provide further retrieval of structured data and formal knowledge.

Relational databases to RDF

Triplify, D2R Server, Ultrawrap, and Virtuoso RDF Views are tools that transform relational databases to RDF. During this process they allow the reuse of existing vocabularies and ontologies. When transforming a typical relational table named users, one column (e.g. name) or an aggregation of columns (e.g. first_name and last_name) has to provide the URI of the created entity. Normally the primary key is used. Every other column can be extracted as a relation with this entity. Then properties with formally defined semantics are used (and reused) to interpret the information. For example, a column in a user table called marriedTo can be defined as a symmetrical relation, and a column homepage can be converted to a property from the FOAF vocabulary called foaf:homepage, thus qualifying it as an inverse functional property. Then each entry of the user table can be made an instance of the class foaf:Person (ontology population). Additionally, domain knowledge (in the form of an ontology) could be created from the status_id, either by manually created rules (if status_id is 2, the entry belongs to the class Teacher) or by (semi-)automated methods (ontology learning). Here is an example transformation:
Name | marriedTo | homepage | status_id
Peter | Mary | http://example.org/Peters_page | 1
Claus | Eva | http://example.org/Claus_page | 2

:Peter :marriedTo :Mary .
:marriedTo a owl:SymmetricProperty .
:Peter foaf:homepage <http://example.org/Peters_page> .
:Peter a foaf:Person .
:Peter a :Student .
:Claus a :Teacher .
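The transformation above can be sketched in a few lines of Python. This is a hypothetical illustration, not any of the named tools: the row data, the STATUS_CLASS rule table and the users_to_turtle helper are invented for the example, and the :, foaf: and owl: prefixes are assumed to be declared elsewhere.

```python
# Example rows of the users table described above.
ROWS = [
    {"name": "Peter", "marriedTo": "Mary", "homepage": "http://example.org/Peters_page", "status_id": 1},
    {"name": "Claus", "marriedTo": "Eva", "homepage": "http://example.org/Claus_page", "status_id": 2},
]
# Manually created rule for ontology population: status_id 1 -> Student, 2 -> Teacher.
STATUS_CLASS = {1: ":Student", 2: ":Teacher"}

def users_to_turtle(rows):
    triples = [":marriedTo a owl:SymmetricProperty ."]   # formally defined semantics
    for row in rows:
        subject = f":{row['name']}"                      # key column supplies the URI
        triples.append(f"{subject} a foaf:Person .")     # every entry is a foaf:Person
        triples.append(f"{subject} :marriedTo :{row['marriedTo']} .")
        triples.append(f"{subject} foaf:homepage <{row['homepage']}> .")  # reused FOAF property
        triples.append(f"{subject} a {STATUS_CLASS[row['status_id']]} .")
    return "\n".join(triples)

print(users_to_turtle(ROWS))
```

Real converters additionally handle datatypes, NULLs and foreign keys, but the core move is the same: each cell becomes a triple whose predicate carries reusable, formally defined semantics.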

Extraction from structured sources to RDF

1:1 Mapping from RDB Tables/Views to RDF Entities/Attributes/Values

When building an RDB representation of a problem domain, the starting point is frequently an entity-relationship diagram (ERD). Typically, each entity is represented as a database table, each attribute of the entity becomes a column in that table, and relationships between entities are indicated by foreign keys. Each table typically defines a particular class of entity, each column one of its attributes. Each row in the table describes an entity instance, uniquely identified by a primary key. The table rows collectively describe an entity set. In an equivalent RDF representation of the same entity set:

  • Each column in the table is an attribute (i.e., predicate)
  • Each column value is an attribute value (i.e., object)
  • Each row key represents an entity ID (i.e., subject)
  • Each row represents an entity instance
  • Each row (entity instance) is represented in RDF by a collection of triples with a common subject (entity ID)

So, to render an equivalent view based on RDF semantics, the basic mapping algorithm would be as follows:

  1. create an RDFS class for each table
  2. convert all primary keys and foreign keys into IRIs
  3. assign a predicate IRI to each column
  4. assign an rdf:type predicate for each row, linking it to an RDFS class IRI corresponding to the table
  5. for each column that is neither part of a primary or foreign key, construct a triple containing the primary key IRI as the subject, the column IRI as the predicate and the column's value as the object

An early mention of this basic or direct mapping can be found in Tim Berners-Lee's comparison of the ER model to the RDF model.
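The five-step direct mapping can be sketched with the standard library alone. This is a minimal, hypothetical sketch (not the W3C Direct Mapping itself): the base IRI, the IRI naming scheme, and the assumption that the first column is the primary key are all choices made for this example.

```python
import sqlite3

BASE = "http://example.org/"  # illustrative base IRI

def direct_mapping(conn, table):
    """Yield (subject, predicate, object) triples for one table."""
    cur = conn.execute(f"SELECT * FROM {table}")
    cols = [d[0] for d in cur.description]
    # Assumption for this sketch: the first column is the primary key.
    for row in cur:
        subject = f"<{BASE}{table}/{row[0]}>"            # step 2: key value -> IRI
        yield (subject, "rdf:type", f"<{BASE}{table}>")  # step 4: row typed by the table's class (step 1)
        for col, value in zip(cols[1:], row[1:]):        # step 5: one triple per non-key column
            yield (subject, f"<{BASE}{table}#{col}>", repr(value))  # step 3: column -> predicate IRI

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Peter')")
for t in direct_mapping(conn, "users"):
    print(t)
```

A production mapper would also resolve foreign keys to IRIs of the referenced rows (step 2 for foreign keys), which this sketch omits.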

Complex mappings of relational databases to RDF

The 1:1 mapping mentioned above exposes the legacy data as RDF in a straightforward way; additional refinements can be employed to improve the usefulness of the RDF output with respect to the given use cases. Normally, information is lost during the transformation of an entity-relationship diagram (ERD) to relational tables (details can be found in object-relational impedance mismatch) and has to be reverse engineered. From a conceptual view, approaches for extraction can come from two directions. The first direction tries to extract or learn an OWL schema from the given database schema. Early approaches used a fixed amount of manually created mapping rules to refine the 1:1 mapping. More elaborate methods employ heuristics or learning algorithms to induce schematic information (these methods overlap with ontology learning). While some approaches try to extract the information from the structure inherent in the SQL schema (analysing e.g. foreign keys), others analyse the content and the values in the tables to create conceptual hierarchies (e.g. columns with few values are candidates for becoming categories). The second direction tries to map the schema and its contents to a pre-existing domain ontology (see also: ontology alignment). Often, however, a suitable domain ontology does not exist and has to be created first.


XML

As XML is structured as a tree, any data can be easily represented in RDF, which is structured as a graph. XML2RDF is one example of an approach that uses RDF blank nodes and transforms XML elements and attributes to RDF properties. The topic, however, is more complex than in the case of relational databases. In a relational table the primary key is an ideal candidate for becoming the subject of the extracted triples. An XML element, however, can be transformed - depending on the context - into a subject, a predicate or an object of a triple. XSLT can be used as a standard transformation language to manually convert XML to RDF.
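The blank-node approach can be illustrated with a short sketch. This is not the XML2RDF tool itself, just a rough approximation of the idea under simplifying assumptions: every element becomes a blank node, attributes and child elements become properties, and element text is attached via an rdf:value-style predicate.

```python
import xml.etree.ElementTree as ET
from itertools import count

def xml_to_triples(xml_text):
    blank = (f"_:b{i}" for i in count())  # fresh blank-node labels
    triples = []

    def walk(elem):
        node = next(blank)
        for name, value in elem.attrib.items():       # attributes -> literal-valued properties
            triples.append((node, name, repr(value)))
        for child in elem:                            # child elements -> object properties
            triples.append((node, child.tag, walk(child)))
        if elem.text and elem.text.strip():           # element text -> rdf:value
            triples.append((node, "rdf:value", repr(elem.text.strip())))
        return node

    walk(ET.fromstring(xml_text))
    return triples

for t in xml_to_triples('<user status="2"><name>Claus</name></user>'):
    print(t)
```

Note how the same element kind could just as well have been mapped to a predicate or an object; the choice made here is one of several defensible designs, which is exactly the ambiguity described above.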

Survey of Methods / Tools

Name | Data Source | Data Exposition | Data Synchronisation | Mapping Language | Vocabulary Reuse | Mapping Automat. | Req. Domain Ontology | Uses GUI
A Direct Mapping of Relational Data to RDF | Relational Data | SPARQL/ETL | dynamic | N/A | false | automatic | false | false
CSV2RDF4LOD | CSV | ETL | static | RDF | true | manual | false | false
Convert2RDF | Delimited text file | ETL | static | RDF/DAML | true | manual | false | true
D2R Server | RDB | SPARQL | bi-directional | D2R Map | true | manual | false | false
DartGrid | RDB | own query language | dynamic | Visual Tool | true | manual | false | true
DataMaster | RDB | ETL | static | proprietary | true | manual | true | true
Google Refine's RDF Extension | CSV, XML | ETL | static | none | – | semi-automatic | false | true
Krextor | XML | ETL | static | XSLT | true | manual | true | false
MAPONTO | RDB | ETL | static | proprietary | true | manual | true | false
METAmorphoses | RDB | ETL | static | proprietary XML-based mapping language | true | manual | false | true
MappingMaster | CSV | ETL | static | MappingMaster | true | GUI | false | true
ODEMapster | RDB | ETL | static | proprietary | true | manual | true | true
OntoWiki CSV Importer Plug-in - DataCube & Tabular | CSV | ETL | static | The RDF Data Cube Vocabulary | true | semi-automatic | false | true
PoolParty Extractor (PPX) | XML, Text | LinkedData | dynamic | RDF (SKOS) | true | semi-automatic | true | false
RDBToOnto | RDB | ETL | static | none | false | automatic; the user furthermore has the chance to fine-tune results | false | true
RDF 123 | CSV | ETL | static | false | false | manual | false | true
RDOTE | RDB | ETL | static | SQL | true | manual | true | true
RelationalOWL | RDB | ETL | static | none | false | automatic | false | false
T2LD | CSV | ETL | static | false | false | automatic | false | false
The RDF Data Cube Vocabulary | Multidimensional statistical data in spreadsheets | – | – | Data Cube Vocabulary | true | manual | false | –
TopBraid Composer | CSV | ETL | static | SKOS | false | semi-automatic | false | true
Triplify | RDB | LinkedData | dynamic | SQL | true | manual | false | false
Ultrawrap | RDB | SPARQL/ETL | dynamic | R2RML | true | semi-automatic | false | true
Virtuoso RDF Views | RDB | SPARQL | dynamic | Meta Schema Language | true | semi-automatic | false | true
Virtuoso Sponger | structured and semi-structured data sources | SPARQL | dynamic | Virtuoso PL & XSLT | true | semi-automatic | false | false
VisAVis | RDB | RDQL | dynamic | SQL | true | manual | true | true
XLWrap: Spreadsheet to RDF | CSV | ETL | static | TriG Syntax | true | manual | false | false
XML to RDF | XML | ETL | static | false | false | automatic | false | false

Extraction from natural language sources

The largest portion of information contained in business documents (about 80%) is encoded in natural language and is therefore unstructured. Because unstructured data is rather a challenge for knowledge extraction, more sophisticated methods are required, which generally tend to supply worse results compared to structured data. The potential for a massive acquisition of extracted knowledge, however, should compensate for the increased complexity and decreased quality of extraction. In the following, natural language sources are understood as sources of information where the data is given in an unstructured fashion as plain text. If the given text is additionally embedded in a markup document (e.g. an HTML document), the mentioned systems normally remove the markup elements automatically.

Traditional information extraction (IE)

Traditional information extraction is a technology of natural language processing, which extracts information from typically natural language texts and structures it in a suitable manner. The kinds of information to be identified must be specified in a model before beginning the process, which is why the whole process of traditional information extraction is domain dependent. IE is split into the following five subtasks:

  • Named entity recognition (NER)
  • Coreference resolution (CO)
  • Template element construction (TE)
  • Template relation construction (TR)
  • Template scenario production (ST)

The task of named entity recognition is to recognize and to categorize all named entities contained in a text (assignment of a named entity to a predefined category). This works by application of grammar-based methods or statistical models.
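A grammar-based rule in the sense described above can be as simple as a pattern over token shapes. The following toy sketch is purely illustrative: the RULES patterns and categories are invented for this example and are far weaker than real NER grammars or statistical taggers.

```python
import re

# Toy grammar-based rules: a title followed by a capitalized token is a
# person; a few fixed names are organizations. Invented for illustration.
RULES = [
    ("person", re.compile(r"\b(?:President|Mr|Ms)\.? [A-Z][a-z]+\b")),
    ("organization", re.compile(r"\b(?:Congress|IBM(?: Europe)?)\b")),
]

def recognize(text):
    """Return (surface form, category) pairs for every rule match."""
    entities = []
    for category, pattern in RULES:
        for match in pattern.finditer(text):
            entities.append((match.group(), category))
    return entities

print(recognize("President Obama called Wednesday on Congress."))
```

Statistical models replace such hand-written patterns with features learned from annotated corpora, but the output contract (span plus predefined category) is the same.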

Coreference resolution identifies equivalent entities, which were recognized by NER, within a text. There are two relevant kinds of equivalence relationship. The first one relates to the relationship between two differently represented entities (e.g. IBM Europe and IBM) and the second one to the relationship between an entity and its anaphoric references (e.g. it and IBM). Both kinds can be recognized by coreference resolution.

During template element construction the IE system identifies descriptive properties of entities recognized by NER and CO. These properties correspond to ordinary qualities like red or big.

Template relation construction identifies relations which exist between the template elements. These relations can be of several kinds, such as works-for or located-in, with the restriction that both domain and range correspond to entities.

In template scenario production, events described in the text are identified and structured with respect to the entities recognized by NER and CO and the relations identified by TR.

Ontology-based information extraction (OBIE)

Ontology-based information extraction is a subfield of information extraction in which at least one ontology is used to guide the process of information extraction from natural language text. The OBIE system uses methods of traditional information extraction to identify concepts, instances and relations of the used ontologies in the text, which will be structured into an ontology after the process. Thus, the input ontologies constitute the model of information to be extracted.

Ontology learning (OL)

Main article: Ontology learning

Ontology learning is the automatic or semi-automatic creation of ontologies, including extracting the corresponding domain's terms from natural language text. As building ontologies manually is extremely labor-intensive and time-consuming, there is great motivation to automate the process.

Semantic annotation (SA)

During semantic annotation, natural language text is augmented with metadata (often represented in RDFa), which should make the semantics of the contained terms machine-understandable. In this process, which is generally semi-automatic, knowledge is extracted in the sense that a link between lexical terms and, for example, concepts from ontologies is established. Thus, knowledge is gained about which meaning of a term in the processed context was intended, and therefore the meaning of the text is grounded in machine-readable data with the ability to draw inferences. Semantic annotation is typically split into the following two subtasks:

  1. Terminology extraction
  2. Entity linking

At the terminology extraction level, lexical terms are extracted from the text. For this purpose, a tokenizer determines at first the word boundaries and resolves abbreviations. Afterwards, terms from the text which correspond to a concept are extracted with the help of a domain-specific lexicon, to be linked during entity linking.

In entity linking, a link between the extracted lexical terms from the source text and the concepts from an ontology or knowledge base such as DBpedia is established. For this, candidate concepts are detected for the several meanings of a term with the help of a lexicon. Finally, the context of the terms is analyzed to determine the most appropriate disambiguation and to assign the term to the correct concept.
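The candidate-detection and disambiguation steps can be sketched as follows. This is a deliberately minimal illustration: the LEXICON, its context profiles and the DBpedia-style identifiers are invented for the example, and real linkers use far richer context models than plain token overlap.

```python
# Toy lexicon mapping a surface term to candidate concepts, each with a
# profile of context words that hint at that reading. Invented data.
LEXICON = {
    "Washington": {
        "dbpedia:Washington,_D.C.": {"city", "capital", "government"},
        "dbpedia:George_Washington": {"president", "general", "1789"},
    }
}

def link(term, context_tokens):
    """Pick the candidate concept whose profile overlaps the context most."""
    candidates = LEXICON.get(term, {})
    if not candidates:
        return None  # term not in the lexicon: no link possible
    return max(candidates, key=lambda c: len(candidates[c] & context_tokens))

print(link("Washington", {"the", "president", "took", "office", "in", "1789"}))
```

Swapping the overlap score for, say, an embedding similarity changes the disambiguation quality but not the pipeline shape: lexicon lookup first, context-based ranking second.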


Tools

The following criteria can be used to categorize tools which extract knowledge from natural language text.

Source: Which input formats can be processed by the tool (e.g. plain text, HTML or PDF)?
Access Paradigm: Can the tool query the data source, or does it require a whole dump for the extraction process?
Data Synchronization: Is the result of the extraction process synchronized with the source?
Uses Output Ontology: Does the tool link the result with an ontology?
Mapping Automation: How automated is the extraction process (manual, semi-automatic or automatic)?
Requires Ontology: Does the tool need an ontology for the extraction?
Uses GUI: Does the tool offer a graphical user interface?
Approach: Which approach (IE, OBIE, OL or SA) is used by the tool?
Extracted Entities: Which types of entities (e.g. named entities, concepts or relationships) can be extracted by the tool?
Applied Techniques: Which techniques are applied (e.g. NLP, statistical methods, clustering or machine learning)?
Output Model: Which model is used to represent the result of the tool (e.g. RDF or OWL)?
Supported Domains: Which domains are supported (e.g. economy or biology)?
Supported Languages: Which languages can be processed (e.g. English or German)?

The following table characterizes some tools for knowledge extraction from natural language sources.

Name Source Access Paradigm Data Synchronization Uses Output Ontology Mapping Automation Requires Ontology Uses GUI Approach Extracted Entities Applied Techniques Output Model Supported Domains Supported Languages
AeroText plain text, HTML, XML, SGML dump no yes automatic yes yes IE named entities, relationships, events linguistic rules proprietary domain-independent English, Spanish, Arabic, Chinese, Indonesian
AlchemyAPI plain text, HTML automatic yes SA multilingual
ANNIE plain text dump yes yes IE finite state algorithms multilingual
ASIUM plain text dump semi-automatic yes OL concepts, concept hierarchy NLP, clustering
Attensity Exhaustive Extraction automatic IE named entities, relationships, events NLP
Dandelion API plain text, HTML, URL REST no no automatic no yes SA named entities, concepts statistical methods JSON domain-independent multilingual
DBpedia Spotlight plain text, HTML dump, SPARQL yes yes automatic no yes SA annotation to each word, annotation to non-stopwords NLP, statistical methods, machine learning RDFa domain-independent English
Entityclassifier.eu plain text, HTML dump yes yes automatic no yes IE, OL, SA annotation to each word, annotation to non-stopwords rule-based grammar XML domain-independent English, German, Dutch
K-Extractor plain text, HTML, XML, PDF, MS Office, e-mail dump, SPARQL yes yes automatic no yes IE, OL, SA concepts, named entities, instances, concept hierarchy, generic relationships, user-defined relationships, events, modality, tense, entity linking, event linking, sentiment NLP, machine learning, heuristic rules RDF, OWL, proprietary XML domain-independent English, Spanish
iDocument HTML, PDF, DOC SPARQL yes yes OBIE instances, property values NLP personal, business
NetOwl Extractor plain text, HTML, XML, SGML, PDF, MS Office dump no yes automatic yes yes IE named entities, relationships, events NLP XML, JSON, RDF-OWL, others multiple domains English, Arabic, Chinese (Simplified and Traditional), French, Korean, Persian (Farsi and Dari), Russian, Spanish
OntoGen semi-automatic yes OL concepts, concept hierarchy, non-taxonomic relations, instances NLP, machine learning, clustering
OntoLearn plain text, HTML dump no yes automatic yes no OL concepts, concept hierarchy, instances NLP, statistical methods proprietary domain-independent English
OntoLearn Reloaded plain text, HTML dump no yes automatic yes no OL concepts, concept hierarchy, instances NLP, statistical methods proprietary domain-independent English
OntoSyphon HTML, PDF, DOC dump, search engine queries no yes automatic yes no OBIE concepts, relations, instances NLP, statistical methods RDF domain-independent English
ontoX plain text dump no yes semi-automatic yes no OBIE instances, datatype property values heuristic-based methods proprietary domain-independent language-independent
OpenCalais plain text, HTML, XML dump no yes automatic yes no SA annotation to entities, annotation to events, annotation to facts NLP, machine learning RDF domain-independent English, French, Spanish
PoolParty Extractor plain text, HTML, DOC, ODT dump no yes automatic yes yes OBIE named entities, concepts, relations, concepts that categorize the text, enrichments NLP, machine learning, statistical methods RDF, OWL domain-independent English, German, Spanish, French
Rosoka plain text, HTML, XML, SGML, PDF, MS Office dump yes yes automatic no yes IE named entities, relationships, attributes, concepts NLP XML, JSON, RDF, others multiple domains Multilingual (230)
SCOOBIE plain text, HTML dump no yes automatic no no OBIE instances, property values, RDFS types NLP, machine learning RDF, RDFa domain-independent English, German
SemTag HTML dump no yes automatic yes no SA machine learning database record domain-independent language-independent
smart FIX plain text, HTML, PDF, DOC, e-Mail dump yes no automatic no yes OBIE named entities NLP, machine learning proprietary domain-independent English, German, French, Dutch, Polish
Text2Onto plain text, HTML, PDF dump yes no semi-automatic yes yes OL concepts, concept hierarchy, non-taxonomic relations, instances, axioms NLP, statistical methods, machine learning, rule-based methods OWL domain-independent English, German, Spanish
Text-To-Onto plain text, HTML, PDF, PostScript dump semi-automatic yes yes OL concepts, concept hierarchy, non-taxonomic relations, lexical entities referring to concepts, lexical entities referring to relations NLP, machine learning, clustering, statistical methods German
ThatNeedle Plain Text dump automatic no concepts, relations, hierarchy NLP, proprietary JSON multiple domains English
The Wiki Machine plain text, HTML, PDF, DOC dump no yes automatic yes yes SA annotation to proper nouns, annotation to common nouns machine learning RDFa domain-independent English, German, Spanish, French, Portuguese, Italian, Russian
ThingFinder IE named entities, relationships, events multilingual

Knowledge discovery

Knowledge discovery describes the process of automatically searching large volumes of data for patterns that can be considered knowledge about the data. It is often described as deriving knowledge from the input data. Knowledge discovery developed out of the data mining domain, and is closely related to it both in terms of methodology and terminology.
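"Searching large volumes of data for patterns" has a concrete core that fits in a few lines: counting which items co-occur, the basic step behind frequent-itemset mining. The transaction data below is invented for the example, and real miners (e.g. Apriori) add candidate pruning that this sketch omits.

```python
from itertools import combinations
from collections import Counter

# Toy transactions; each set lists the items appearing together.
transactions = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"bread", "milk"},
]

def frequent_pairs(transactions, min_support=2):
    """Count co-occurring item pairs and keep those meeting min_support."""
    counts = Counter()
    for items in transactions:
        counts.update(combinations(sorted(items), 2))
    return {pair: n for pair, n in counts.items() if n >= min_support}

print(frequent_pairs(transactions))
```

The surviving pairs are the "patterns that can be considered knowledge about the data"; turning them into rules with confidence scores is the next step in a full KDD pipeline.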

The most well-known branch of data mining is knowledge discovery, also known as knowledge discovery in databases (KDD). Just as many other forms of knowledge discovery, it creates abstractions of the input data. The knowledge obtained through the process may become additional data that can be used for further usage and discovery. Often the outcomes from knowledge discovery are not actionable; actionable knowledge discovery, also known as domain-driven data mining, aims to discover and deliver actionable knowledge and insights.

Another promising application of knowledge discovery is in the area of software modernization, weakness discovery and compliance, which involves understanding existing software artifacts. This process is related to the concept of reverse engineering. Usually the knowledge obtained from existing software is presented in the form of models to which specific queries can be made when necessary. An entity relationship is a frequent format of representing knowledge obtained from existing software. The Object Management Group (OMG) developed the Knowledge Discovery Metamodel (KDM) specification, which defines an ontology for software assets and their relationships for the purpose of performing knowledge discovery of existing code. Knowledge discovery from existing software systems, also known as software mining, is closely related to data mining, since existing software artifacts contain enormous value for risk management and business value, key for the evaluation and evolution of software systems. Instead of mining individual data sets, software mining focuses on metadata, such as process flows (e.g. data flows, control flows and call maps), architecture, database schemas, and business rules/terms/processes.

Input data

  • Databases
    • Relational data
    • Database
    • Document warehouse
    • Data warehouse
  • Software
    • Source code
    • Configuration files
    • Build scripts
  • Text
    • Concept mining
  • Graphs
    • Molecule mining
  • Sequences
    • Data stream mining
    • Learning from time-varying data streams under concept drift
  • Web

Output formats

  • Data model
  • Metadata
  • Metamodels
  • Ontology
  • Knowledge representation
  • Knowledge tags
  • Business rule
  • Knowledge Discovery Metamodel (KDM)
  • Business Process Modeling Notation (BPMN)
  • Intermediate representation
  • Resource Description Framework (RDF)
  • Software metrics

See also

  • Cluster analysis
  • Data archaeology


References

  1. ^ RDB2RDF Working Group, Website: http://www.w3.org/2001/sw/rdb2rdf/, charter: http://www.w3.org/2009/08/rdb2rdf-charter, R2RML: RDB to RDF Mapping Language: http://www.w3.org/TR/r2rml/
  2. ^ LOD2 EU Deliverable 3.1.1 Knowledge Extraction from Structured Sources, http://static.lod2.eu/Deliverables/deliverable-3.1.1.pdf
  3. ^ "Life in the Linked Data Cloud". www.opencalais.com. Retrieved 2009-11-10. "Wikipedia has a Linked Data twin called DBpedia. DBpedia has the same structured information as Wikipedia – but translated into a machine-readable format."
  4. ^ a b Tim Berners-Lee (1998), "Relational Databases on the Semantic Web". Retrieved: February 20, 2011
  5. ^ Hu et al. (2007), "Discovering Simple Mappings Between Relational Database Schemas and Ontologies", In Proc. of 6th International Semantic Web Conference (ISWC 2007), 2nd Asian Semantic Web Conference (ASWC 2007), LNCS 4825, pages 225-238, Busan, Korea, 11-15 November 2007, http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.97.6934&rep=rep1&type=pdf
  6. ^ R. Ghawi and N. Cullot (2007), "Database-to-Ontology Mapping Generation for Semantic Interoperability", In Third International Workshop on Database Interoperability (InterDB 2007), http://le2i.cnrs.fr/IMG/publications/InterDB07-Ghawi.pdf
  7. ^ Li et al. (2005), "A Semi-automatic Ontology Acquisition Method for the Semantic Web", WAIM, volume 3739 of Lecture Notes in Computer Science, pages 209-220, Springer, doi:10.1007/11563952_19
  8. ^ Tirmizi et al. (2008), "Translating SQL Applications to the Semantic Web", Lecture Notes in Computer Science, Volume 5181/2008 (Database and Expert Systems Applications), http://citeseer.ist.psu.edu/viewdoc/download?doi=10.1.1.140.3169&rep=rep1&type=pdf
  9. ^ Farid Cerbah (2008), "Learning Highly Structured Semantic Repositories from Relational Databases", The Semantic Web: Research and Applications, volume 5021 of Lecture Notes in Computer Science, Springer, Berlin / Heidelberg, http://www.tao-project.eu/resources/publications/cerbah-learning-highly-structured-semantic-repositories-from-relational-databases.pdf
  10. ^ a b Wimalasuriya, Daya C.; Dou, Dejing (2010), "Ontology-based information extraction: An introduction and a survey of current approaches", Journal of Information Science, 36(3), p. 306-323, http://ix.cs.uoregon.edu/~dou/research/papers/jis09.pdf, retrieved: 18.06.2012
  11. ^ Cunningham, Hamish (2005), "Information Extraction, Automatic", Encyclopedia of Language and Linguistics, 2, p. 665-677, http://gate.ac.uk/sale/ell2/ie/main.pdf, retrieved: 18.06.2012
  12. ^ Erdmann, M.; Maedche, Alexander; Schnurr, H.-P.; Staab, Steffen (2000), "From Manual to Semi-automatic Semantic Annotation: About Ontology-based Text Annotation Tools", Proceedings of the COLING, http://www.ida.liu.se/ext/epa/cis/2001/002/paper.pdf, retrieved: 18.06.2012
  13. ^ Rao, Delip; McNamee, Paul; Dredze, Mark (2011), "Entity Linking: Finding Extracted Entities in a Knowledge Base", Multi-source, Multi-lingual Information Extraction and Summarization, http://www.cs.jhu.edu/~delip/entity-linking.pdf, retrieved: 18.06.2012
  14. ^ Rocket Software, Inc. (2012), "technology for extracting intelligence from text", http://www.rocketsoftware.com/products/aerotext, retrieved: 18.06.2012
  15. ^ Orchestr8 (2012): "AlchemyAPI Overview", http://www.alchemyapi.com/api, retrieved: 18.06.2012
  16. ^ The University of Sheffield (2011), "ANNIE: a Nearly-New Information Extraction System", http://gate.ac.uk/sale/tao/splitch6.html#chap:annie, retrieved: 18.06.2012
  17. ^ ILP Network of Excellence, "ASIUM (LRI)", http://www-ai.ijs.si/~ilpnet2/systems/asium.html, retrieved: 18.06.2012
  18. ^ Attensity (2012), "Exhaustive Extraction", http://www.attensity.com/products/technology/semantic-server/exhaustive-extraction/, retrieved: 18.06.2012
  19. ^ Mendes, Pablo N.; Jakob, Max; Garcia-Silva, Andrés; Bizer, Christian (2011), "DBpedia Spotlight: Shedding Light on the Web of Documents", Proceedings of the 7th International Conference on Semantic Systems, p. 1-8, http://www.wiwiss.fu-berlin.de/en/institute/pwo/bizer/research/publications/Mendes-Jakob-GarciaSilva-Bizer-DBpediaSpotlight-ISEM2011.pdf, retrieved: 18.06.2012
  20. ^ Cite error: the named reference entityclassifier was invoked but never defined (see the help page).
  21. ^ Balakrishna, Mithun; Moldovan, Dan (2013), "Automatic Building of Semantically Rich Domain Models from Unstructured Data", Proceedings of the Twenty-Sixth International Florida Artificial Intelligence Research Society Conference (FLAIRS), p. 22-27, http://www.aaai.org/ocs/index.php/FLAIRS/FLAIRS13/paper/view/5909/6036, retrieved: 11.08.2014
  22. ^ Moldovan, Dan; Blanco, Eduardo (2012), "Polaris: Lymba's Semantic Parser", Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC), p. 66-72, http://www.lrec-conf.org/proceedings/lrec2012/pdf/176_Paper.pdf, retrieved: 11.08.2014
  23. ^ Adrian, Benjamin; Maus, Heiko; Dengel, Andreas (2009), "iDocument: Using Ontologies for Extracting Information from Text", http://www.dfki.uni-kl.de/~maus/dok/AdrianMausDengel09.pdf, retrieved: 18.06.2012
  24. ^ SRA International, Inc. (2012), "NetOwl Extractor", http://www.sra.com/netowl/entity-extraction/, retrieved: 18.06.2012
  25. ^ Fortuna, Blaz; Grobelnik, Marko; Mladenic, Dunja (2007), "OntoGen: Semi-automatic Ontology Editor", Proceedings of the 2007 conference on Human interface, Part 2, p. 309-318, http://analytics.ijs.si/~blazf/papers/OntoGen2_HCII2007.pdf, retrieved: 18.06.2012
  26. ^ Missikoff, Michele; Navigli, Roberto; Velardi, Paola (2002), "Integrated Approach to Web Ontology Learning and Engineering", Computer, 35(11), p. 60-63, http://www.users.di.uniroma1.it/~velardi/IEEE_C.pdf, retrieved: 18.06.2012
  27. ^ McDowell, Luke K.; Cafarella, Michael (2006), "Ontology-driven Information Extraction with OntoSyphon", Proceedings of the 5th international conference on The Semantic Web, p. 428-444, http://turing.cs.washington.edu/papers/iswc2006McDowell-final.pdf, retrieved: 18.06.2012
  28. ^ Yildiz, Burcu; Miksch, Silvia (2007), "ontoX - A Method for Ontology-Driven Information Extraction", Proceedings of the 2007 international conference on Computational science and its applications, 3, p. 660-673, http://publik.tuwien.ac.at/files/pub-inf_4769.pdf, retrieved: 18.06.2012
  29. ^ semanticweb.org (2011), "PoolParty Extractor", http://semanticweb.org/wiki/PoolParty_Extractor, retrieved: 18.06.2012
  30. ^ IMT Holdings, Corp. (2013), "Rosoka", http://www.rosoka.com/content/capabilities, retrieved: 08.08.2013
  31. ^ Dill, Stephen; Eiron, Nadav; Gibson, David; Gruhl, Daniel; Guha, R.; Jhingran, Anant; Kanungo, Tapas; Rajagopalan, Sridhar; Tomkins, Andrew; Tomlin, John A.; Zien, Jason Y. (2003), "SemTag and Seeker: Bootstrapping the Semantic Web via Automated Semantic Annotation", Proceedings of the 12th international conference on World Wide Web, p. 178-186, http://www2003.org/cdrom/papers/refereed/p831/p831-dill.html, retrieved: 18.06.2012
  32. ^ Uren, Victoria; Cimiano, Philipp; Iria, José; Handschuh, Siegfried; Vargas-Vera, Maria; Motta, Enrico; Ciravegna, Fabio (2006), "Semantic annotation for knowledge management: Requirements and a survey of the state of the art", Web Semantics: Science, Services and Agents on the World Wide Web, 4(1), p. 14-28, http://staffwww.dcs.shef.ac.uk/people/J.Iria/iria_jws06.pdf, retrieved: 18.06.2012
  33. ^ Cimiano, Philipp; Völker, Johanna (2005), "Text2Onto - A Framework for Ontology Learning and Data-Driven Change Discovery", Proceedings of the 10th International Conference on Applications of Natural Language to Information Systems, 3513, p. 227-238, http://www.cimiano.de/Publications/2005/nldb05/nldb05.pdf, retrieved: 18.06.2012
  34. ^ Maedche, Alexander; Volz, Raphael (2001), "The Ontology Extraction & Maintenance Framework Text-To-Onto", Proceedings of the IEEE International Conference on Data Mining, http://users.csc.calpoly.edu/~fkurfess/Events/DM-KM-01/Volz.pdf, retrieved: 18.06.2012
  35. ^ Machine Linking, "We connect to the Linked Open Data cloud", http://thewikimachine.fbk.eu/html/index.html, retrieved: 18.06.2012
  36. ^ Inxight Federal Systems (2008), "Inxight ThingFinder and ThingFinder Professional", http://inxightfedsys.com/products/sdks/tf/, retrieved: 18.06.2012
  37. ^ Frawley, William F. et al. (1992), "Knowledge Discovery in Databases: An Overview", AI Magazine, Vol. 13, No. 3, 57-70 (online full version: http://www.aaai.org/ojs/index.php/aimagazine/article/viewArticle/1011)
  38. ^ Fayyad, U. et al. (1996), "From Data Mining to Knowledge Discovery in Databases", AI Magazine, Vol. 17, No. 3, 37-54 (online full version: http://www.aaai.org/ojs/index.php/aimagazine/article/viewArticle/1230)
  39. ^ Cao, L. (2010), "Domain driven data mining: challenges and prospects", IEEE Trans. on Knowledge and Data Engineering, 22(6): 755-769, doi:10.1109/tkde.2010.32
