Abstract
In many artificial intelligence tasks, data is present in multiple forms or modalities. Recently, it has become a popular approach to combine these different forms of information into a knowledge graph, creating a multi-modal knowledge graph (MMKG). However, MMKGs often suffer from insufficient data coverage and incompleteness. To address this issue, a possible strategy is to incorporate supplemental information from other MMKGs. To this end, existing entity alignment methods could be utilized; however, these approaches work in the Euclidean space, and the resulting entity representations can distort the hierarchical structure of the knowledge graph. Additionally, the potential benefits of visual information have not been fully exploited.
To address these concerns, we present a new approach for aligning entities across multiple modalities, which we call hyperbolic multi-modal entity alignment (HMEA). This method expands upon the conventional Euclidean representation by incorporating a hyperboloid manifold. Initially, we utilize hyperbolic graph convolutional networks (HGCN) to acquire structural representations of entities. In terms of visual data, we create image embeddings using the DenseNet model and subsequently map them into the hyperbolic space utilizing HGCN. Lastly, we merge the structural and visual representations within the hyperbolic space and utilize the combined embeddings to forecast potential entity alignment outcomes. Through a series of thorough experiments and ablation studies, we validate the efficacy of our proposed model and its individual components.
1 Introduction
In recent times, there has been a noticeable trend of integrating multimedia data into knowledge graphs (KGs) to facilitate cross-modal activities that involve the interplay of information across multiple modalities, e.g., image and video retrieval [27], video summaries [19], visual entity disambiguation [17], visual question answering [32], etc. To this end, several multi-modal KGs (MMKGs) [16, 28] have been constructed very recently. An example of an MMKG is shown in Fig. 9.1. For this study, we focus on MMKGs that consist of two modalities, namely, the KG structural details and visual information, while retaining a generalizable approach.
Example Figure 9.1 shows a partial MMKG, which consists of entities, image sets, and the links between them. To elaborate, the KG structural data entails the relationships between the different entities, whereas the visual data is sourced from the sets of images. For the entity The Prestige, its image set may contain scenes, actors, posters, etc.
However, many of the current MMKGs have been sourced from restricted data sources, causing them to have inadequate domain coverage [22]. To broaden the scope of these MMKGs, one potential solution is to incorporate valuable knowledge from other MMKGs. An essential step in consolidating knowledge across MMKGs is to identify matching entities in different KGs, given that entities serve as the links that connect the diverse KGs. This technique is also referred to as multi-modal entity alignment (MMEA).
MMEA is a complex undertaking that necessitates the modeling and amalgamation of information from multiple modalities. For the KG structural information, existing entity alignment (EA) approaches [3, 9, 25, 33] can be directly adopted to generate entity structural embeddings for MMEA. These methods usually utilize TransE-based or graph convolutional network (GCN)-based models [1, 12] to learn entity representations of individual KGs, which are then unified using the seed entity pairs. Despite this, all of these techniques generate entity representations in the Euclidean space, which can result in significant distortion when embedding real-world graphs that possess scale-free or hierarchical structures [4, 23]. Concerning the visual information, the VGG16 model has been utilized to create embeddings for images linked to entities and subsequently employed for alignment. However, the VGG16 model is not adept at extracting valuable features from images, which limits the efficacy of the alignment process. Lastly, the integration of information from both modalities must be executed meticulously to enhance overall effectiveness.
To tackle the problems mentioned above, we introduce a multi-modal entity alignment technique that works in hyperbolic space (HMEA). More specifically, we expand the Euclidean representation to the hyperboloid manifold and utilize the hyperbolic graph convolutional networks (HGCN) to develop structural representations of entities. With regard to visual data, we create image embeddings using the DenseNet model and also map them into the hyperbolic space with HGCN. Ultimately, we combine the structural embeddings and image embeddings in the hyperbolic space to forecast potential alignments.
To sum up, the key contributions of our technique can be outlined as follows:
- We propose a novel MMEA approach, HMEA, which models and integrates multi-modal information in the hyperbolic space.
- We apply the hyperbolic graph convolutional networks (HGCNs) to develop structural representations of entities and showcase the benefits of the hyperbolic space for knowledge graph representations.
- We use a superior image embedding model to acquire improved visual representations for alignment.
- We perform thorough experimental evaluations to confirm the efficacy of our proposed model.
Organization
Section 9.2 overviews related work, and the preliminaries are introduced in Sect. 9.3. Section 9.4 describes our proposed approach. Section 9.5 presents experimental results, followed by conclusion in Sect. 9.6.
2 Related Work
In this section, we introduce some efforts that are relevant to this work.
2.1 Multi-Modal Knowledge Graph
Many knowledge graph construction studies concentrate on organizing and discovering textual data in a structured format, neglecting other resources available on the Web [28]. Nevertheless, real-world applications require cross-modal data, such as image and video retrieval, visual question answering, video summaries, visual commonsense reasoning, and so on. Consequently, multi-modal knowledge graphs (MMKGs) have been introduced, which comprise diverse information (e.g., image, text, KG) and cross-modal relationships. However, building MMKGs poses several challenges. Collecting substantial multi-modal data from search engines is a time-consuming and laborious task. Additionally, MMKGs often have low domain coverage and are incomplete. Integrating multi-modal knowledge from other MMKGs is an effective way to enhance their completeness. Currently, there are few studies about merging different MMKGs. Liu et al. [16] built two pairs of MMKGs and extracted relational, latent, numerical, and visual features for predicting the SameAs links between entities. Some approaches to multi-modal knowledge representation also involve visual features from entity images; for instance, IKRL [31] integrates image representations into an aggregated image-based representation via an attention-based method.
2.2 Representation Learning in Hyperbolic Space
Essentially, most of the existing GCN models are designed for graphs in Euclidean spaces [2]. However, research has found that graph data exhibits a non-Euclidean structure [18], and embedding real-world graphs with a scale-free or hierarchical structure results in significant distortion [4, 23]. Moreover, recent studies in network science have shown that hyperbolic geometry is ideal for modeling complex networks, as the hyperbolic space can naturally reflect some graph properties [14]. A key feature of hyperbolic spaces is that they expand exponentially, whereas Euclidean spaces expand only polynomially. Due to the advantages of hyperbolic space in representing graph structure data, there has been growing interest in representation learning in hyperbolic spaces, particularly in learning the hierarchical representation of a graph [20]. Furthermore, Nickel et al. [21] have demonstrated that the Lorentz model of hyperbolic geometry has favorable properties for stochastic optimization and leads to substantially enhanced embeddings, particularly in low dimensions. Additionally, some researchers have begun to extend deep learning methods to hyperbolic space, achieving state-of-the-art performance on link prediction and node classification tasks [7, 8, 26].
3 Preliminaries
In this section, we start by providing a formal definition of the MMEA task. Then, we provide a brief overview of the GCN model. Lastly, we introduce the fundamental principles of hyperbolic geometry, which serve as the foundation for our proposed model.
3.1 Task Formulation
The goal of MMEA is to align entities in two MMKGs. An MMKG typically encompasses information in several modalities. In this study, we concentrate on the KG structural information and visual information, without any loss of generality. Formally, we represent MMKGs as \(MG = (E,R,T,I)\), where E, R, T, and I denote the sets of entities, relations, triples, and images, respectively. A relational triple \(t \in T\) can be represented as \((e_1, r, e_2)\), where \(e_1, e_2 \in E\) and \(r \in R\). An entity e is associated with multiple images \(I_e = \{i_e^0, i_e^1,\ldots ,i_e^n\}\).
Given two MMKGs, \(MG_1 = (E_1, R_1, T_1, I_1)\), \(MG_2 = (E_2, R_2, T_2, I_2)\), and seed entity pairs (pre-aligned entity pairs for training) \( S=\{(e_s^1, e_s^2)|e_s^1\leftrightarrow e_s^2, e_s^1 \in E_1, e_s^2 \in E_2 \}\), where \(\leftrightarrow \) represents equivalence, the task of MMEA can be defined as discovering more aligned entity pairs \(\{(e^1, e^2)|e^1 \in E_1, e^2 \in E_2 \}\). We use the following example to further illustrate this task.
Example Figure 9.2 shows two partial MMKGs. The equivalence between The Dark Knight in \(MG_1\) and The Dark Knight in \(MG_2\) is known in advance. EA aims to detect potential equivalent entity pairs, e.g., Nolan in \(MG_1\) and Nolan in \(MG_2\), using the known alignments. □
3.2 Graph Convolutional Neural Networks
GCNs [10, 13] are a type of neural network that operates directly on graph data. A GCN model comprises several stacked GCN layers. The inputs to the l-th layer of the GCN model are node feature vectors and the graph’s structure. \(\boldsymbol {H}^{(l)} \in {R} ^ {n \times d^{l}}\) is a vertex feature representation, where n is the number of vertices and \(d^{l}\) is the dimensionality of the feature matrix. \(\boldsymbol {\hat A} = \boldsymbol {D}^{-\frac {1}{2}}(\boldsymbol {A} + \boldsymbol {I})\boldsymbol {D}^{-\frac {1}{2}}\) represents the symmetric normalized adjacency matrix. The identity matrix \(\boldsymbol {I}\) is added to the adjacency matrix \(\boldsymbol {A}\) to obtain self-loops for each node, and the degree matrix has entries \( \boldsymbol {D}_{ii} = \sum _j(\boldsymbol {A}_{ij}+\boldsymbol {I}_{ij})\). The output of the l-th layer is a new feature matrix \(\boldsymbol {H}^{(l+1)}\) obtained by the following convolutional computation:
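This is the standard GCN layer update; we reconstruct it here to match the symbols defined above:

```latex
\boldsymbol{H}^{(l+1)} = \sigma\left(\boldsymbol{\hat A}\, \boldsymbol{H}^{(l)}\, \boldsymbol{W}^{(l)}\right)
```

where \(\boldsymbol{W}^{(l)}\) is the trainable weight matrix of the l-th layer and \(\sigma\) denotes a nonlinear activation function such as ReLU.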
3.3 Hyperboloid Manifold
We provide a brief overview of the critical concepts in hyperbolic geometry. For a more comprehensive description, please refer to [6]. Hyperbolic geometry refers to a non-Euclidean geometry that features a constant negative curvature, where the curvature measures how a geometric object deviates from a flat plane. In this work, we use the d-dimensional Poincare ball model with negative curvature \(-c\) \((c > 0)\): \(P^{(d,c)}=\{ \mathbf {x} \in R^d: \| \mathbf {x} \|{ }^2 < \frac {1}{c}\}\), where \(\| \cdot \|\) is the \(L_2\) norm. For each point \( x \in P^{(d,c)}\), the tangent space \(T^c_x \) is a d-dimensional vector space at point x, which contains all possible directions of paths in \(P^{(d,c)}\) leaving from x. Next, we present several fundamental operations in the hyperbolic space, which play a critical role in our proposed model.
Exponential and Logarithmic Maps
Specifically, let \(\boldsymbol {v}\) be the feature vector in the tangent space \( T^c_{\mathbf {o}}\); \(\mathbf {o}\) is a point in the hyperbolic space \(P^{(d,c)}\), which is also used as a reference point. Let \(\mathbf {o}\) be the origin, \(\mathbf {o} = 0\). The tangent space \( T^c_{\mathbf {o}} \) can be mapped to \(P^{(d,c)}\) via the exponential map:
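In the Poincare ball model, the exponential map takes the standard form (reconstructed here; this is the map referred to as Eq. (9.2) later in the chapter):

```latex
\operatorname{exp}_{\mathbf{o}}^{c}(\boldsymbol{v}) = \tanh\left(\sqrt{c}\,\|\boldsymbol{v}\|\right) \frac{\boldsymbol{v}}{\sqrt{c}\,\|\boldsymbol{v}\|} \qquad (9.2)
```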
And conversely, the logarithmic map which maps \(P^{(d,c)}\) to \( T^c_{\mathbf {o}} \) is defined as:
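As the inverse of the exponential map at the origin, the logarithmic map takes the standard Poincare ball form (reconstructed here):

```latex
\operatorname{log}_{\mathbf{o}}^{c}(\boldsymbol{y}) = \operatorname{artanh}\left(\sqrt{c}\,\|\boldsymbol{y}\|\right) \frac{\boldsymbol{y}}{\sqrt{c}\,\|\boldsymbol{y}\|}
```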
Möbius Addition
Vector addition does not have a well-defined meaning in the hyperbolic space. Adding the vectors of two points directly, as in Euclidean space, in the Poincare ball could yield a point outside the ball. In this case, the Möbius addition [7] provides an analogue to the Euclidean addition in the hyperbolic space. Here, \(\oplus _{c} \) represents the Möbius addition as:
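In the standard Poincare ball formulation (reconstructed here), the Möbius addition of \(\boldsymbol{x}\) and \(\boldsymbol{y}\) is:

```latex
\boldsymbol{x} \oplus_{c} \boldsymbol{y} = \frac{\left(1 + 2c\langle \boldsymbol{x}, \boldsymbol{y}\rangle + c\|\boldsymbol{y}\|^{2}\right)\boldsymbol{x} + \left(1 - c\|\boldsymbol{x}\|^{2}\right)\boldsymbol{y}}{1 + 2c\langle \boldsymbol{x}, \boldsymbol{y}\rangle + c^{2}\|\boldsymbol{x}\|^{2}\|\boldsymbol{y}\|^{2}}
```

where \(\langle \cdot , \cdot \rangle\) denotes the Euclidean inner product.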
4 Methodology
In this section, we present our proposed approach HMEA, which operates in the hyperbolic space. The framework is shown in Fig. 9.3. We first adopt HGCN to obtain the structural embeddings of entities. Subsequently, we transform the corresponding entity images into visual embeddings using the DenseNet model; these embeddings are further projected into the hyperbolic space. In the end, we join these embeddings in the hyperbolic space and predict the alignment outcomes utilizing a pre-determined hyperbolic distance. We use the following example to illustrate our proposed model.
Example Further to the previous example, by using structural information, it is easy to detect that Nolan in \(MG_1\) is equivalent to Nolan in \(MG_2\). However, solely relying on structural data is insufficient and might result in an incorrect alignment of Michael Caine in \(MG_1\) with Christian Bale in \(MG_2\). In this scenario, the utilization of visual information would be highly beneficial as the images of Michael Caine in \(MG_1\) and Christian Bale in \(MG_2\) are significantly dissimilar. Consequently, we consider both structural and visual information for alignment. □
In the following, we elaborate on the various components of our proposal.
4.1 Structural Representation Learning
We acquire the structural representation of MMKGs by employing hyperbolic graph convolutional neural networks, which extend convolutional computation to manifold space and leverage the effectiveness of both graph neural networks and hyperbolic embeddings. Initially, we transform the input Euclidean features to the hyperboloid manifold. Then, through feature transformation, message passing, and nonlinear activation in the hyperbolic space, we obtain the hyperbolic structural representations.
Mapping Input Features to Hyperboloid Manifold
In general, the input node features are produced by pre-trained Euclidean neural networks, and hence, they exist in the Euclidean space. We begin by establishing a conversion from Euclidean features to the hyperbolic space.
Here, we assume that the input Euclidean features \({{\boldsymbol {x}}^{E}} \in T_{\mathbf {o}}H_c\), where \(T_{\mathbf {o}}H_c\) represents the tangent space at the reference point \(\mathbf {o}\), and \(\mathbf {o} \in H_c\) denotes the north pole (origin) of the hyperbolic space. We obtain the hyperbolic feature matrix \({\boldsymbol {x}}^H\) via \( {\boldsymbol {x}}^H = \operatorname {exp}_o^c({\boldsymbol {x}}^{E})\), where \(\operatorname {exp}_o^c(\cdot )\) is defined in Eq. (9.2).
Feature Transformation and Propagation
The core operations in hyperbolic structural learning, similar to GCN, are feature transformation and message passing. While these operations are well-established in the Euclidean space, they are considerably more complex in the hyperboloid manifold. One possible solution is to perform these functions with trainable parameters in the tangent space of a point within the hyperboloid manifold, as the tangent space is Euclidean. To this end, we utilize the \(\operatorname {exp(\cdot )}\) map and \(\operatorname {log(\cdot )}\) map to convert between the hyperboloid manifold and the tangent space. This enables us to make use of the tangent space \(T_{\mathbf {o}}H_c^d\) for executing Euclidean operations.
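To make the conversion between the manifold and the tangent space concrete, the following is a minimal NumPy sketch of the exponential and logarithmic maps at the origin of the Poincare ball (function names are ours, not part of the original implementation):

```python
import numpy as np

def exp_map(v, c=1.0):
    """Map a tangent vector v at the origin onto the Poincare ball of curvature -c."""
    sqrt_c = np.sqrt(c)
    norm = np.linalg.norm(v)
    if norm == 0:
        return v
    return np.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

def log_map(y, c=1.0):
    """Map a point y on the Poincare ball back to the tangent space at the origin."""
    sqrt_c = np.sqrt(c)
    norm = np.linalg.norm(y)
    if norm == 0:
        return y
    return np.arctanh(sqrt_c * norm) * y / (sqrt_c * norm)

v = np.array([0.3, -0.2, 0.1])
y = exp_map(v)  # lies strictly inside the unit ball for c = 1
```

The two maps are mutual inverses at the origin, so `log_map(exp_map(v))` recovers the tangent vector, which is what allows Euclidean operations to be carried out in the tangent space.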
The initial step involves using the logarithmic map to map the hyperbolic representation \({\boldsymbol {x}}_v^H \in R^{1 \times d}\) of node v to the tangent space \(T_{\mathbf {o}}H_c^d\). Next, in \(T_{\mathbf {o}}H_c^d\), we compute the feature transformation and propagation rule for node v as:
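A reconstruction of this rule, consistent with the dimensions given below, is:

```latex
\boldsymbol{x}_v^{T} = \sum_{u} \boldsymbol{\hat A}_{vu}\, \operatorname{log}_{\mathbf{o}}^{c}\!\left(\boldsymbol{x}_u^{H}\right) \boldsymbol{W}^{\top}
```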
where \(\boldsymbol {x}_v^T \in R^{1\times d'} \) denotes the feature representation in the tangent space and \(\boldsymbol {\hat A}\) represents the symmetric normalized adjacency matrix; \(\boldsymbol {W}\) is a \( d' \times d \) trainable weight matrix.
Nonlinear Activation with Different Curvatures
Once the features have been transformed in the tangent space, a nonlinear activation function \(\sigma ^{\otimes ^{c_{l}, c_{l+1}}}\) is applied to learn nonlinear transformations. Specifically, in the tangent space \( T_{\mathbf {o}} H^{d}_{c_{l}} \) of layer l, Euclidean nonlinear activation is performed before mapping the features to the manifold of the next layer:
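As in standard HGCN formulations, this activation presumably applies the Euclidean nonlinearity in the tangent space of layer l and then maps the result onto the manifold of layer \(l+1\):

```latex
\sigma^{\otimes^{c_{l}, c_{l+1}}}\left(\boldsymbol{x}^{H}\right) = \operatorname{exp}_{\mathbf{o}}^{c_{l+1}}\!\left(\sigma\left(\operatorname{log}_{\mathbf{o}}^{c_{l}}\left(\boldsymbol{x}^{H}\right)\right)\right)
```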
where the hyperbolic curvatures at layer l and \(l+1\) are denoted as \(-1/c_{l}\) and \(-1/c_{l+1}\), respectively. The activation function \(\sigma \) used is the \(\operatorname {ReLU}(\cdot )\) function. This step is critical in enabling us to vary the curvature smoothly at each layer, which is necessary for achieving good performance due to limitations in machine precision and normalization.
Based on the hyperboloid feature transformation and nonlinear activation, the convolutional computation in the hyperbolic space is redefined as:
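Combining the preceding steps, a reconstruction consistent with the symbols defined below is:

```latex
\boldsymbol{H}^{(l+1)} = \operatorname{exp}_{\mathbf{o}}^{c_{l+1}}\!\left(\sigma\left(\boldsymbol{\hat A}\, \operatorname{log}_{\mathbf{o}}^{c_{l}}\left(\boldsymbol{H}^{(l)}\right) \boldsymbol{W}\right)\right)
```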
where the convolutional computation in hyperbolic space involves using learned node embeddings in the hyperbolic space at layer \(l+1\) and layer l, represented respectively as \({\boldsymbol {H}}^{l+1}\in R^{n \times d^{l+1}}\) and \({\boldsymbol {H}}^{l} \in R^{n \times d^l}\). The initial embeddings are represented as \({\boldsymbol {H}}^{0} = {\boldsymbol {x}}^{H}\). The symmetric normalized adjacency matrix is represented by \(\boldsymbol {\hat A}\), and the trainable weight matrix is represented by \(\boldsymbol {W}\), which has dimensions \(d^l \times d^{l+1}\).
4.2 Visual Representation Learning
The DenseNet model [11], pre-trained on the ImageNet dataset [5], is used to learn image embeddings. The softmax layer of DenseNet is removed, and 1920-dimensional embeddings are obtained for all images in the MMKGs. These embeddings are then projected into the hyperbolic space using HGCN to enhance their expressive power.
4.3 Multi-Modal Information Fusion
Both visual and structural information can impact the alignment results. To combine these two types of information, we propose a novel method that merges the structural and visual information of MMKGs. Specifically, we obtain the merged representation of entity \({\mathbf {e}}_i\) in the hyperbolic space using the following approach:
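One plausible form of this fusion, consistent with the description below, weights the two embeddings by \(\beta\) and combines them via Möbius addition:

```latex
\mathbf{e}_i = \left(\beta\, \boldsymbol{H}_{s}\right) \oplus_{c} \left((1-\beta)\, \boldsymbol{H}_{v}\right)
```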
where \({\boldsymbol {H}}_{s}\) and \({\boldsymbol {H}}_{v}\) are the structural and visual embeddings learned from the HGCN model, respectively; the hyper-parameter \(\beta \) is used to adjust the relative weight of the structural and visual features in the final merged representation. The Möbius addition operator \(\oplus _c\) is used to combine the structural and visual embeddings. Note that the dimensions of the structural and visual representations must be identical.
4.4 Alignment Prediction
To predict the alignment results, we compute the distance between the entity representations from two MMKGs. The Euclidean distance and Manhattan distance are popular distance measures used in the Euclidean space [15, 30]. However, in the hyperbolic space, we must use the hyperbolic distance between nodes as the distance measure. For entities \(e_i\) in \(MG_1\) and \(e_j\) in \(MG_2\), the distance is defined as:
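Given the \(L_1\) norm and Möbius addition stated below, a plausible reconstruction of this distance is:

```latex
d(e_i, e_j) = \left\| \left(-\boldsymbol{h}_{i}\right) \oplus_{c} \boldsymbol{h}_{j} \right\|
```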
where \({\boldsymbol {h}}_{i}\) and \({\boldsymbol {h}}_{j}\) denote the merged embeddings of \(e_i\) and \(e_j\) in the hyperbolic space, respectively; \(\| \cdot \|\) is the \(L_1\) norm; the operator \(\oplus _c\) is the Möbius addition.
We expect the distance to be small for equivalent entities and large for nonequivalent ones. To align a specific entity \(e_i\) in \(MG_1\), our approach calculates the distances between \(e_i\) and all entities in \(MG_2\) and presents a ranked list of entities as candidate alignments.
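As an illustration of this ranking step, the following NumPy sketch scores candidates with a Möbius-addition-based distance (the Möbius formula is the standard Poincare ball one; function names and toy embeddings are ours):

```python
import numpy as np

def mobius_add(x, y, c=1.0):
    # Standard Mobius addition on the Poincare ball with curvature -c
    xy = np.dot(x, y)
    x2, y2 = np.dot(x, x), np.dot(y, y)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    den = 1 + 2 * c * xy + c ** 2 * x2 * y2
    return num / den

def hyperbolic_distance(hi, hj, c=1.0):
    # L1 norm of the Mobius "difference" between two merged embeddings
    return np.sum(np.abs(mobius_add(-hi, hj, c)))

def rank_candidates(query, candidates, c=1.0):
    # Rank all entities of the other MMKG by increasing distance to the query
    dists = [hyperbolic_distance(query, cand, c) for cand in candidates]
    return np.argsort(dists)
```

For a query entity, the top of the ranked list gives the most likely alignment; a candidate whose embedding coincides with the query has distance zero and is ranked first.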
4.5 Model Training
To embed equivalent entities as closely as possible in the vector space, we utilize a set of established entity alignments (known as seed entities) S as training data to train the model. Specifically, we minimize the margin-based ranking loss function during model training:
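A reconstruction of this loss, consistent with the notation explained below, is:

```latex
L = \sum_{(e,v) \in S}\; \sum_{(e',v') \in S^{\prime}_{(e,v)}} \left[\, d(e,v) + \gamma - d(e',v') \,\right]_{+}
```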
where \([x]_+ = \max \{{0,x}\}\); \((e,v)\) represents a seed entity pair and S is the set of entity pairs; \(S_{(e,v)}^\prime \) represents the set of negative instances created by altering \((e, v)\), i.e., by substituting e or v with a randomly selected entity from either \(MG_1\) or \(MG_2\); \(\gamma > 0\) denotes the margin hyper-parameter that separates positive and negative instances. The margin-based loss function stipulates that the distance between entities in positive pairs should be small, and the distance between entities in negative pairs should be large.
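As a small numerical sketch of this objective (function name and toy distances are ours), the loss over a batch of positive and negative pair distances can be computed as:

```python
import numpy as np

def margin_ranking_loss(pos_dists, neg_dists, gamma=0.5):
    # Sum of [d(positive pair) + gamma - d(negative pair)]_+ over the batch
    return float(np.sum(np.maximum(0.0, pos_dists + gamma - neg_dists)))

# Toy example: one well-separated pair (zero loss) and one margin-violating pair
pos = np.array([0.2, 0.1])
neg = np.array([1.5, 0.4])
loss = margin_ranking_loss(pos, neg, gamma=0.5)
```

Only pairs where the negative distance fails to exceed the positive distance by the margin \(\gamma\) contribute to the loss.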
5 Experiment
5.1 Dataset and Evaluation Metric
In this study, we utilized datasets sourced from FreeBase, DBpedia, and YAGO, which were created by Liu et al. [16]. These datasets were developed by starting with FB15K to establish multi-modal knowledge graphs, which were then aligned with entities from other knowledge graphs such as DB15K and YAGO15K through reference links. Our experiments focused on two pairs of multi-modal knowledge graphs: FB15K-DB15K and FB15K-YAGO15K.
Due to the absence of original images in the datasets, we acquired the corresponding images for each entity using the URIs provided in [17]. To achieve this, we developed a Web crawler that can extract query results from image search engines, i.e., Google Images, Bing Images, and Yahoo Image Search. Following this, we allocated the images obtained from various search engines to different MMKGs, thereby showcasing the dissimilarity among different MMKGs.
The detailed information on the datasets is provided in Table 9.1. Each dataset comprises approximately 15,000 entities and over 11,000 sets of entity images. The Images column denotes the number of entities that have associated image sets. The alignments between the MMKGs are given by previously discovered SameAs predicates. In the experiments, the known equivalent entity pairs are used for model training and testing.
Evaluation Metric
We utilize \(Hits@k\) as the evaluation metric to gauge the efficacy of all the approaches. This metric determines the percentage of correctly aligned entities that are ranked among the top-k candidates.
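Concretely, given the rank of the true counterpart for each test entity, \(Hits@k\) can be computed as follows (a minimal sketch with toy ranks of our own):

```python
def hits_at_k(ranks, k):
    """Fraction of test entities whose true counterpart is ranked in the top-k.

    ranks: 1-based rank of the correct alignment for each test entity.
    """
    return sum(1 for r in ranks if r <= k) / len(ranks)

# Toy ranks for five test entities: only the first is a Hits@1 success
ranks = [1, 3, 12, 2, 7]
```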
5.2 Experimental Setting and Competing Approaches
Experimental Setting
To analyze the effectiveness of the methods across various percentages of the provided alignments (P%), we evaluate the methods with low (20%), medium (50%), and high (80%) percentages of the given seed entity pairs. The remaining sameAs triples are used for testing. To ensure fairness, we have maintained the same number of dimensions (i.e., 400) for both GCN-Align and HMEA. The other parameters of GCN-Align follow [29]. For the parameters of our approach HMEA, we create six negative samples for each positive sample. The margin hyper-parameters used in the loss functions are \(\gamma _{\textsf {HMEA}-s} = 0.5\) and \(\gamma _{\textsf {HMEA}-v} = 1.5\), respectively. We optimize HMEA using the Adam optimizer.
Competing Approaches
To showcase the effectiveness of our proposed model, we have selected three state-of-the-art approaches as competitors:
-
GCN-Align [29] utilizes GCN to encode the structural information of entities and then combines relation and image embeddings for the purpose of entity alignment.
-
PoE [16] is based on the product of expert model. It computes the scores of facts under each modality and learns the entity embeddings for entity alignment. PoE combines information from two modalities. Additionally, we compare our approach with the PoE-s variant, which solely utilizes the structural information.
-
IKRL [31] integrates image representations into an aggregated image-based representation via an attention-based method. The method was initially proposed in the domain of knowledge representation, and we adapted it to address the MMEA problem.
In order to showcase the advantages of hyperbolic geometry, particularly in the learning of structural features, we have conducted preliminary experiments which solely utilize the structural information for EA, resulting in HMEA-s, GCN-Align-s, and PoE-s. In addition, to evaluate the contribution of visual information, we compare PoE, GCN-Align, and HMEA with just visual information, namely, PoE-v, GCN-Align-v, and HMEA-v.
5.3 Results
Table 9.2 displays the results, indicating that HMEA exhibits the best performance in all scenarios. Notably, in the case of FB15K-YAGO15K with 80% seed entity pairs, HMEA outperforms PoE and GCN-Align by almost 15% in terms of \(Hits@1\). With 20% seed entity pairs, our approach also shows better results: the improvement in \(Hits@1\) is around 2%, and that in \(Hits@10\) is up to 20%. Based on the results obtained from PoE, it is evident that there is only a slight improvement in performance from \(Hits@1\) to \(Hits@10\), with the range being between 4 and 9%. In contrast, the improvements from \(Hits@1\) to \(Hits@10\) observed for HMEA are at least 20% across all scenarios. Moreover, it is worth noting that HMEA achieves significantly better results than IKRL.
Table 9.3 demonstrates that even when utilizing solely structural information, HMEA-s still achieves superior results compared to the other two methods. Specifically, our proposed approach outperforms GCN-Align-s by almost 5% in terms of \(Hits@1\) on FB15K-DB15K and by 3% on FB15K-YAGO15K with 20% seed alignments. When using 50 and 80% seed entity pairs, HMEA-s shows significant improvements in performance. The improvements range from 10 to 18% regarding \(Hits@1\) and from 20 to 30% in terms of \(Hits@10\). These results suggest that our approach excels in capturing precise hierarchical structural representations.
Table 9.4 presents the results when incorporating visual information into the model. We compare the performance of three variants: PoE-v, GCN-Align-v, and HMEA-v. The results indicate that GCN-Align-v does not produce valuable visual representations for MMEA. In contrast, even when utilizing only visual information, HMEA-v still achieves better results than PoE-v. Specifically, our proposed approach outperforms PoE-v slightly on both datasets for \(Hits@1\), by less than 1% with 20% seed alignments. On the FB15K-DB15K dataset, when using 80% seeds, our proposed approach HMEA-v demonstrates significant improvements in performance. The improvements are around 7% regarding \(Hits@1\) and 18% in terms of \(Hits@10\). These results indicate that our proposed method is effective in learning visual features and incorporating them into the model to improve the overall performance.
5.4 Ablation Experiment
In this work, we consider multiple modalities of information in MMKGs, specifically the structural and visual aspects. To further confirm the usefulness of multi-modal knowledge for MMEA, we carry out an ablation experiment. Comparing HMEA and HMEA-s in Tables 9.2 and 9.3, we observe that incorporating visual information in our approach results in slightly better performance, with improvements of approximately 1% in terms of \(Hits@1\). Moreover, by comparing HMEA and HMEA-v in Tables 9.2 and 9.4, we can conclude that the structural information plays a significant role. From the ablation study, we conclude that MMEA primarily relies on the structural information, but the visual information still plays a useful role. Furthermore, the study highlights that the combination of these two types of information leads to even better results.
5.5 Case Study
A key property of hyperbolic spaces is their exponential expansion, which means that they expand much faster than Euclidean spaces that expand polynomially. This property can be advantageous for distinguishing between similar entities since the neighbor nodes of a central node can be distributed in a larger space, resulting in greater distances between them.
To demonstrate the effectiveness of hyperbolic embeddings, we conducted a case study using Michael Caine as the root node. We visualized the embeddings of 1-hop film-related entities learned from both GCN-Align and HMEA separately, in the PCA-projected spaces shown in Fig. 9.4. We observed that for entities of the same type or with similar structural information, such as the entities Alfie and B-o-B, their Euclidean embeddings (generated via GCN-Align) are placed closely together. In contrast, the distances between such entities in hyperbolic space are relatively larger, with only a few exceptions. This validates that the hyperbolic structural representation can help distinguish between similar entities. Furthermore, by placing similar entities (in the same KG) far apart, the hyperbolic representation can facilitate the alignment process across KGs.
An example can be seen in Fig. 9.4a, where entity Alfie in FB15K is closest to entity B-o-B, which is incorrect. However, in Fig. 9.4b, entity B-o-B is placed far away from Alfie, and the closest entity to Alfie is its equivalent entity in DB15K. By using hyperbolic projections, similar entities in the same KG are well distinguished and placed far apart, reducing the likelihood of alignment mistakes.
5.6 Additional Experiment
The cross-lingual EA datasets are the most commonly used datasets for evaluating EA methods. We included experiments on these datasets to demonstrate that our proposed approach is effective for popular datasets, including the cross-lingual EA task. Note that diverse languages are not taken as multiple modalities, and the cross-lingual EA is in essence single-modal EA. We use the DBP15K datasets in the experiments, which were built by Sun et al. [24]. As shown in Table 9.5, the datasets were generated from DBpedia, which contains rich inter-language links between different language versions of Wikipedia. Each dataset contains data in different languages and 15,000 known inter-language links connecting equivalent entities in two KGs, which are used for model training and testing. Following the setting in [29], we use \(30\%\) of inter-language links for training, and \(70\%\) of them for testing. \(Hits@k\) is used as the evaluation measure.
The dimensions of both structural and attribute embeddings were set to 300 dimensions for GCN-Align. GCN-Align-s and HMEA-s represent adopting structural information; GCN-Align-a and HMEA-a represent adopting attribute information; and GCN-Align and HMEA combine both the structural information and attribute information.
Table 9.6 shows that in all datasets, HMEA-s outperforms GCN-Align-s, with improvements of around 7% in terms of \(Hits@1\) and more than 10% in terms of \(Hits@10\). These results demonstrate that HMEA benefits from hyperbolic geometry and is able to capture better structural features. Furthermore, our proposed approach achieves better results compared to GCN-Align as it combines both structural and attributive information, resulting in an approximately 10% increase in \(Hits@1\). Regarding attribute information, it is worth noting that our approach, HMEA-a, outperforms GCN-Align-a by a significant margin. Specifically, our approach achieves an approximately 15% improvement in \(Hits@1\) across all datasets.
6 Conclusion
This chapter introduced HMEA, a multi-modal EA approach designed to effectively integrate multi-modal information for EA in MMKGs. The approach extends Euclidean representations to a hyperboloid manifold and employs HGCN to learn structural embeddings of entities. In addition, we leverage a more advanced model, densenet, to learn more accurate visual embeddings. The structural and visual embeddings are then aggregated in the hyperbolic space to predict potential alignments. Comprehensive experimental evaluations validate the effectiveness of the proposed approach, and further experiments confirm the superior performance of HGCN in learning the structural features of knowledge graphs in the hyperbolic space.
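To make the hyperboloid-manifold setting concrete, the following sketch (a simplified illustration, not the chapter's implementation) shows the two basic operations such approaches rely on in the Lorentz model with curvature \(-1\): the exponential map that lifts a Euclidean vector onto the hyperboloid, and the hyperbolic distance used to compare embeddings there.

```python
import numpy as np

def lorentz_inner(x, y):
    # Minkowski inner product: negative sign on the time-like coordinate
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def exp_map_origin(v):
    """Map a Euclidean tangent vector at the hyperboloid origin onto the
    Lorentz model (curvature -1). `v` lives in R^d; the output in R^{d+1}
    satisfies <x, x>_L = -1."""
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return np.concatenate(([1.0], v))
    return np.concatenate(([np.cosh(norm)], np.sinh(norm) * v / norm))

def hyperbolic_distance(x, y):
    # clip guards against arccosh arguments slightly below 1 due to rounding
    return np.arccosh(np.clip(-lorentz_inner(x, y), 1.0, None))

# Two nearby Euclidean vectors mapped onto the hyperboloid
u = exp_map_origin(np.array([0.30, -0.10]))
w = exp_map_origin(np.array([0.25, -0.05]))
print(hyperbolic_distance(u, w))  # small positive distance
```

Distances of this form grow rapidly away from the origin, which is what lets hyperbolic space embed tree-like hierarchies with low distortion; alignment predictions can then be made by nearest-neighbor search under this distance.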
References
A. Bordes, N. Usunier, A. García-Durán, J. Weston, and O. Yakhnenko. Translating embeddings for modeling multi-relational data. In NIPS, pages 2787–2795, 2013.
S. Cavallari, E. Cambria, H. Cai, K. C.-C. Chang, and V. W. Zheng. Embedding both finite and infinite communities on graphs [application notes]. IEEE Computational Intelligence Magazine, 14(3):39–50, 2019.
M. Chen, Y. Tian, M. Yang, and C. Zaniolo. Multilingual knowledge graph embeddings for cross-lingual knowledge alignment. arXiv preprint arXiv:1611.03954, 2016.
W. Chen, W. Fang, G. Hu, and M. W. Mahoney. On the hyperbolicity of small-world and treelike random graphs. Internet Mathematics, 9(4):434–491, 2013.
J. Deng, W. Dong, R. Socher, L. Li, K. Li, and F. Li. Imagenet: A large-scale hierarchical image database. In CVPR, pages 248–255. IEEE Computer Society, 2009.
L. P. Eisenhart. Introduction to differential geometry. Princeton University Press, 2015.
O.-E. Ganea, G. Bécigneul, and T. Hofmann. Hyperbolic entailment cones for learning hierarchical embeddings. arXiv preprint arXiv:1804.01882, 2018.
C. Gulcehre, M. Denil, M. Malinowski, A. Razavi, R. Pascanu, K. M. Hermann, P. Battaglia, V. Bapst, D. Raposo, A. Santoro, et al. Hyperbolic attention networks. arXiv preprint arXiv:1805.09786, 2018.
Y. Hao, Y. Zhang, S. He, K. Liu, and J. Zhao. A joint embedding method for entity alignment of knowledge bases. In CCKS, pages 3–14. Springer, 2016.
M. Henaff, J. Bruna, and Y. LeCun. Deep convolutional networks on graph-structured data. arXiv preprint arXiv:1506.05163, 2015.
G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In CVPR, pages 4700–4708, 2017.
T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. CoRR, abs/1609.02907, 2016.
T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
D. Krioukov, F. Papadopoulos, M. Kitsak, A. Vahdat, and M. Boguná. Hyperbolic geometry of complex networks. Physical Review E, 82(3):036106, 2010.
C. Li, Y. Cao, L. Hou, J. Shi, J. Li, and T. Chua. Semi-supervised entity alignment via joint knowledge embedding model and cross-graph model. In EMNLP, pages 2723–2732. Association for Computational Linguistics, 2019.
Y. Liu, H. Li, A. Garcia-Duran, M. Niepert, D. Onoro-Rubio, and D. S. Rosenblum. Mmkg: multi-modal knowledge graphs. In ESWC, pages 459–474. Springer, 2019.
S. Moon, L. Neves, and V. Carvalho. Multimodal named entity disambiguation for noisy social media posts. In ACL (Volume 1: Long Papers), pages 2000–2008, 2018.
A. Muscoloni, J. M. Thomas, S. Ciucci, G. Bianconi, and C. V. Cannistraci. Machine learning meets complex networks via coalescent embedding in the hyperbolic space. Nature communications, 8(1):1–19, 2017.
D. A. Newman and B. G. Schunck. Generating video summaries for a video using video summary templates, Oct. 17 2017. US Patent 9,792,502.
M. Nickel and D. Kiela. Poincaré embeddings for learning hierarchical representations. In NIPS, pages 6338–6347, 2017.
M. Nickel and D. Kiela. Learning continuous hierarchies in the lorentz model of hyperbolic geometry. arXiv preprint, 2018.
H. Paulheim. Knowledge graph refinement: A survey of approaches and evaluation methods. Semantic web, 8(3):489–508, 2017.
E. Ravasz and A.-L. Barabási. Hierarchical organization in complex networks. Physical review E, 67(2):026112, 2003.
Z. Sun, W. Hu, and C. Li. Cross-lingual entity alignment via joint attribute-preserving embedding. In ISWC, pages 628–644. Springer, 2017.
Z. Sun, W. Hu, Q. Zhang, and Y. Qu. Bootstrapping entity alignment with knowledge graph embedding. In IJCAI, pages 4396–4402, 2018.
H.-N. Tran and E. Cambria. A survey of graph processing on graphics processing units. The Journal of Supercomputing, 74(5):2086–2115, 2018.
R. C. Veltkamp, H. Burkhardt, and H.-P. Kriegel. State-of-the-art in content-based image and video retrieval, volume 22. Springer Science & Business Media, 2013.
M. Wang, G. Qi, H. Wang, and Q. Zheng. Richpedia: A comprehensive multi-modal knowledge graph. In JIST, pages 130–145. Springer, 2019.
Z. Wang, Q. Lv, X. Lan, and Y. Zhang. Cross-lingual knowledge graph alignment via graph convolutional networks. In EMNLP, pages 349–357, 2018.
Y. Wu, X. Liu, Y. Feng, Z. Wang, and D. Zhao. Neighborhood matching network for entity alignment. In ACL, pages 6477–6487. Association for Computational Linguistics, 2020.
R. Xie, Z. Liu, H. Luan, and M. Sun. Image-embodied knowledge representation learning. In IJCAI, pages 3140–3146, 2017.
K. Yi, J. Wu, C. Gan, A. Torralba, P. Kohli, and J. Tenenbaum. Neural-symbolic vqa: Disentangling reasoning from vision and language understanding. In NIPS, pages 1031–1042, 2018.
W. Zeng, X. Zhao, J. Tang, and X. Lin. Collective entity alignment via adaptive features. In ICDE, pages 1870–1873. IEEE, 2020.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
© 2023 The Author(s)
Zhao, X., Zeng, W., Tang, J. (2023). Multimodal Entity Alignment. In: Entity Alignment. Big Data Management. Springer, Singapore. https://doi.org/10.1007/978-981-99-4250-3_9
Print ISBN: 978-981-99-4249-7
Online ISBN: 978-981-99-4250-3