Knowledge Transfer for Out-of-Knowledge-Base Entities: A Graph Neural Network Approach
June 18, 2017 · Entered Twilight · International Joint Conference on Artificial Intelligence
"Last commit was 6.0 years ago (โฅ5 year threshold)"
Evidence collected by the PWNC Scanner
Repo contents: .gitignore, 0-OOKB-raw-datasets, 1-starndard-setting, 2-OOKB-setting, 3-draw-score-history, README.md
Authors
Takuo Hamaguchi, Hidekazu Oiwa, Masashi Shimbo, Yuji Matsumoto
arXiv ID
1706.05674
Category
cs.CL: Computation & Language
Citations
361
Venue
International Joint Conference on Artificial Intelligence
Repository
https://github.com/takuo-h/GNN-for-OOKB
⭐ 41
Last Checked
1 month ago
Abstract
Knowledge base completion (KBC) aims to predict missing information in a knowledge base. In this paper, we address the out-of-knowledge-base (OOKB) entity problem in KBC: how to answer queries concerning test entities not observed at training time. Existing embedding-based KBC models assume that all test entities are available at training time, making it unclear how to obtain embeddings for new entities without costly retraining. To solve the OOKB entity problem without retraining, we use graph neural networks (Graph-NNs) to compute the embeddings of OOKB entities, exploiting the limited auxiliary knowledge provided at test time. The experimental results show the effectiveness of our proposed model in the OOKB setting. Additionally, in the standard KBC setting in which OOKB entities are not involved, our model achieves state-of-the-art performance on the WordNet dataset. The code and dataset are available at https://github.com/takuo-h/GNN-for-OOKB.
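To make the core idea concrete, here is a minimal sketch (not the authors' implementation) of how an OOKB entity's embedding can be computed at test time without retraining: pool messages from its known neighbours in the auxiliary triples, then score candidate facts with a TransE-style function. The entity/relation names and the simple mean pooling are illustrative assumptions; the paper itself uses learned transition functions and compares several pooling strategies.

```python
# Sketch of OOKB embedding via neighbour pooling + TransE scoring.
# Assumed/illustrative: the toy entities and relations, mean pooling,
# and random "pretrained" vectors. Not the authors' code.
import numpy as np

DIM = 100
rng = np.random.default_rng(0)

# Embeddings learned for in-KB entities and relations at training time.
entity_vecs = {e: rng.normal(size=DIM) for e in ["tokyo", "japan", "city"]}
relation_vecs = {r: rng.normal(size=DIM) for r in ["capital_of", "instance_of"]}

def embed_ookb(ookb_entity, aux_triples):
    """Embed an unseen entity from auxiliary triples (h, r, t) that
    connect it to known entities, with no retraining."""
    msgs = []
    for h, r, t in aux_triples:
        if h == ookb_entity and t in entity_vecs:
            # Known neighbour is the tail: propagate t - r (TransE geometry).
            msgs.append(entity_vecs[t] - relation_vecs[r])
        elif t == ookb_entity and h in entity_vecs:
            # Known neighbour is the head: propagate h + r.
            msgs.append(entity_vecs[h] + relation_vecs[r])
    return np.mean(msgs, axis=0)  # mean pooling over neighbour messages

def transe_score(h_vec, r, t_vec):
    """TransE dissimilarity ||h + r - t||_1; lower means more plausible."""
    return np.abs(h_vec + relation_vecs[r] - t_vec).sum()

# Example: "kyoto" is OOKB; auxiliary knowledge links it to known entities.
aux = [("kyoto", "instance_of", "city")]
kyoto_vec = embed_ookb("kyoto", aux)
print(transe_score(kyoto_vec, "capital_of", entity_vecs["japan"]))
```

In the paper the pooled messages pass through learned transition functions before pooling, and the whole pipeline is trained end-to-end; the sketch above only shows the propagation-and-score skeleton.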
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Computation & Language
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding · R.I.P. · 👻 Ghosted
Language Models are Few-Shot Learners · R.I.P. · 👻 Ghosted
RoBERTa: A Robustly Optimized BERT Pretraining Approach · R.I.P. · 👻 Ghosted
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension · R.I.P. · 👻 Ghosted