Abstract
In recent years, knowledge graphs have attracted growing interest and found
widespread application in domains such as information retrieval,
question answering, and recommendation systems. Large-scale
knowledge graphs have demonstrated their utility in effectively
representing structured knowledge. To further facilitate the application of
machine learning techniques, knowledge graph embedding (KGE) models have been
developed. Such models can transform entities and relationships within
knowledge graphs into vectors. However, these embedding models often face
challenges such as noise, missing information, distribution shift, and
adversarial attacks, which can lead to sub-optimal embeddings and incorrect
inferences, thereby negatively impacting downstream applications. While the
existing literature has so far focused on adversarial attacks against KGE models,
the other critical aspects remain largely unexplored. In this
paper, we first give a unified definition of resilience, encompassing
several factors such as generalisation, performance consistency, distribution
adaptation, and robustness. After formalising these concepts for machine learning
in general, we define them in the context of knowledge graphs. To identify the
gaps in existing work on resilience for knowledge graphs, we
perform a systematic survey covering all of the aspects mentioned
above. Our survey shows that most existing works address only one
specific aspect of resilience, namely robustness. After categorising these works
according to their respective aspects of resilience, we discuss open challenges
and future research directions.