Abstract
Knowledge graph embedding (KGE) projects the entities and relations of a knowledge graph (KG) into a low-dimensional vector space, and has made steady progress in recent years. Conventional KGE methods, especially translational distance-based models, are trained by discriminating positive samples from negative ones. Because most KGs store only positive samples for space efficiency, negative sampling plays a crucial role in encoding the triples of a KG. The quality of generated negative samples directly affects the performance of the learnt knowledge representation in a myriad of downstream tasks, such as recommendation, link prediction and node classification. We summarize current negative sampling approaches in KGE into three categories: static distribution-based, dynamic distribution-based and custom cluster-based. Based on this categorization, we discuss the most prevalent existing approaches and their characteristics. We hope this review can provide some guidelines for new thinking about negative sampling in KGE.
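To make the static distribution-based category concrete, the following is a minimal sketch of uniform negative sampling, the simplest static strategy: corrupt the head or tail of a positive triple with an entity drawn uniformly at random, rejecting corruptions that are themselves known positives (the "filtered" setting). The function and variable names here are illustrative, not taken from the paper.

```python
import random

def uniform_negative_sample(triple, entities, positives):
    """Corrupt the head or tail of a positive (head, relation, tail)
    triple with a uniformly drawn entity; reject known positives."""
    head, relation, tail = triple
    while True:
        candidate = random.choice(entities)
        # Flip a fair coin: corrupt the head or the tail.
        if random.random() < 0.5:
            negative = (candidate, relation, tail)
        else:
            negative = (head, relation, candidate)
        if negative not in positives:  # filtered setting
            return negative

# Toy KG with a single positive triple.
positives = {("alice", "knows", "bob")}
entities = ["alice", "bob", "carol"]
neg = uniform_negative_sample(("alice", "knows", "bob"), entities, positives)
```

Dynamic distribution-based methods (e.g. GAN-based samplers) replace the uniform draw with a distribution that adapts during training to favour harder negatives.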
| Original language | English |
|---|---|
| Journal | International Journal of Artificial Intelligence and Applications |
| Volume | 12 |
| Issue number | 1 |
| Publication status | Published - 31 Jan 2021 |
Keywords
- generative adversarial network
- knowledge graph embedding
- negative sampling