Review

Hypergraph and Uncertain Hypergraph Representation Learning Theory and Methods

1 College of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, China
2 School of Science, North China University of Science and Technology, Tangshan 063210, China
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(11), 1921; https://doi.org/10.3390/math10111921
Submission received: 23 April 2022 / Revised: 26 May 2022 / Accepted: 1 June 2022 / Published: 3 June 2022
(This article belongs to the Special Issue Engineering Calculation and Data Modeling)

Abstract

With the advent of big data and the information age, the data magnitude of various complex networks is growing rapidly. Many real-life situations cannot be portrayed by ordinary networks, whereas hypergraphs have the ability to describe and characterize higher-order relationships and have attracted extensive attention from academia and industry in recent years. This paper first describes the development process, application areas, and existing review research on hypergraphs; second, it briefly introduces hypergraph theory; it then compares the learning methods of ordinary graphs and hypergraphs from three aspects: matrix decomposition, random walks, and deep learning; next, it introduces the structural optimization of hypergraphs from three perspectives: dynamic hypergraphs, hyperedge weight optimization, and multimodal hypergraph generation; after that, the applicability of three uncertain hypergraph models is analyzed based on three uncertainty theories: probability theory, fuzzy sets, and rough sets; finally, future research directions for hypergraphs and uncertain hypergraphs are discussed.

1. Introduction

With the development of deep learning technology, a large number of research results on complex networks and hypergraphs have emerged in recent years. Although hypergraph theory is not new, few review papers have clearly defined hypergraphs or introduced their development history and theoretical foundations. This paper reviews and compares the key technologies of new-generation hypergraphs in China and abroad, and concludes by presenting future technical challenges and research directions.
A hypergraph, a branch and extension of graph theory, is a system of subsets of a finite set and the most general structure in discrete mathematics. It has a wide range of applications in the natural sciences, including physics, mathematics, computing, and biology. Hypergraph theory was proposed by C. Berge in the 1960s and subsequently developed by French and Hungarian mathematicians, who studied directed hypergraph theory, hypercycles of hypergraphs, hypergraph coloring, and hypergraph design. Berge, C. [1] systematically established the theory of undirected hypergraphs for the first time and applied matrix structures to study applications of hypergraph theory in operations research. Lovász, L. [2] verified the normal hypergraph and perfect graph conjectures and applied the results to integer-valued linear programming problems. Erdős, P. [3] studied the theory of 3-chromatic hypergraphs and related properties. Early hypergraph theory, however, was mainly used to solve combinatorial and optimization problems [4,5,6,7,8]. In the 1980s, when information scientists studied database theory, they found that hypergraphs were closely related to databases and introduced concepts such as acyclic hypergraphs, which were combined with database theory to solve practical problems [8,9,10]. In the late 20th century, with the rapid development of the Internet and other information technologies, the data magnitude of various complex networks grew rapidly. The original network analysis methods proved inadequate in the face of massive data, and hypergraphs, with their ability to describe multivariate, high-order, complex relationships, began to be widely used for image segmentation [11], high-dimensional spatial clustering [12], multimodal data modeling [13], recommendation systems [14], social networks [15], and many other fields.
Figure 1 shows the number of research papers related to hypergraphs indexed by Google Scholar between 1970 and 2021. Hypergraph theory developed slowly in its early years owing to its abstract nature, whereas the past two years have been a period of rapid development of hypergraph theory and applications. Unlike the earlier in-depth research on the concepts and properties of hypergraphs, recent hypergraph research has mainly targeted network analysis tasks, such as clustering [16], node classification [17], personalized recommendation [18], and link prediction [19]. Applying hypergraphs to network analysis tasks focuses on how to map each node in a network to a low-dimensional, distributed vector representation space; this is called network representation learning, or network embedding. However, as hypergraph theory is relatively abstract and hypergraph structures are complex, hypergraphs pose a great challenge to existing network embedding models.
With the advent of the information age, people's lives have become integrated with communication and information resources, and human society has evolved from an era of simple network relationships to an era of complex networks in which multiple networks interact and integrate. Many situations in real life cannot be portrayed by ordinary networks. Hypergraphs, however, can describe and represent the interactions between nodes well and show the various information in nodes clearly; hyperedges can portray complex networks such as paper collaboration networks [20,21,22], protein networks [23], and chemical polymer networks [24]. Therefore, the study of hypergraph theory and its applications has become a new and urgent topic. Only a few reviews on hypergraph theory and applications exist at present. Table 1 summarizes the related review work on hypergraphs.
Xu, S., et al. [25] reviewed the main results on hypergraph theory and its applications from the 1980s and 1990s, defined the basic concepts of hypergraphs, i.e., the theory of hypergraph connectivity, hypertrees and k-hypertrees, and minimum-cut bipartitioning of hypergraphs, and applied directed hypergraph theory to the topological analysis of electrical networks.
Hu, F., et al. [26,27] analyzed the concepts and topological properties of hypergraph-based hypernetworks, provided a method for constructing a hypernetwork evolution model based on the hyperedge growth and preferential linking mechanisms of realistic networks, and defined the node hyperdegree.
On this basis, Tian, L., et al. [28] analyzed and summarized existing hypergraph technologies, established a three-layer architecture of knowledge hypergraphs, briefly introduced and compared knowledge hypergraph representation methods from four directions: soft rules, translation methods, tensor decomposition, and neural networks, and provided an outlook on application fields and future directions.
Gao, Y., et al. [29], drawing on hypergraph research up to 2020, classified hypergraph generation into approaches based on distance calculation, vector space reconstruction, attributes, and network structure; introduced existing hypergraph learning methods, including transductive hypergraph learning, inductive hypergraph learning, hypergraph structure updating, and multimodal hypergraph learning; and proposed a tensor-based dynamic hypergraph representation learning framework, THU-HyperG.
In a recent review of hypergraph research, Hu, B., et al. [30] classified hypergraph learning methods into spectral analysis methods and neural network methods according to their design characteristics, summarized hypergraph learning methods from both the unfolding and non-unfolding perspectives, and designed experiments to compare and analyze the results of various algorithms.
From a study of hypergraph papers and an analysis of the changing trends of the hypergraph literature on Google Scholar and the China National Knowledge Infrastructure (CNKI), it can be concluded that 2021 was a key turning point in the development of hypergraphs, with the number of published articles and the index of interest reaching a new historical high. At the same time, driven by the rapid development of deep learning methods and technologies, a large number of hypergraph learning models based on deep learning algorithms, such as hypergraph capsule neural networks [31], hypergraph pretrained neural networks [32], and adaptive hypergraph convolutional neural networks [33], have emerged over the past year, but none of these methods are mentioned in the above review papers. Meanwhile, as hypergraph theory and methods continue to improve, researchers have found that if they want to use hypergraphs to portray real networks faithfully, they have to consider the various uncertainties in real networks and carry out research on uncertain hypergraphs [34].
In summary, the study of representation learning theory and methods for hypergraphs is in its infancy and is developing rapidly. There is still a long way to go in terms of correspondence and transformation methods between ordinary graphs and hypergraphs, the rational selection of traditional and deep representation learning algorithms, the correspondence mechanisms between deterministic and uncertain hypergraphs, and future research on reducing computational complexity and improving model accuracy. In this paper, we focus on a review of representation learning theory and methods for hypergraphs. The major contributions of this study are as follows:
  • Based on ordinary graph representation learning methods, the representation learning theory, modeling methods, and structure optimization methods of hypergraphs are analyzed and compared, which provides a theoretical basis for studying the correspondence and transformation methods between ordinary graphs and hypergraphs and for establishing a model framework for hypergraph learning.
  • Based on the complexity and uncertainty of real networks, this paper introduces the application scope, model construction, and representation learning methods of three kinds of uncertain hypergraph models, namely random hypergraphs, fuzzy hypergraphs, and rough hypergraphs; summarizes the research status of uncertain hypergraphs for the first time; and points out future application fields and development directions of uncertain hypergraphs.
As far as we know, this is the first paper to comparatively analyze hypergraph representation learning against ordinary graph representation learning, and it is also the first review paper on uncertain hypergraphs.

2. Representation Learning Theory and Methods for Hypergraph

As an extension of graph theory, hypergraph theory has a system of representations similar to that of graph theory for expressing the various concepts under the hypergraph definition. This section introduces the theory and methods of hypergraph representation learning, starting from the definition of a hypergraph.

2.1. Hypergraph Related Concepts

Hypergraphs are generalizations and extensions of ordinary graphs (for ease of discussion, we refer to the common graph containing two nodes per edge as an ordinary graph). In a hypergraph, an edge can contain any number of vertices; a hyperedge can be drawn as a simple closed curve enclosing more than one vertex. An example of a paper collaboration network is shown in Figure 2.
In the ordinary graph, authors with collaborative relationships (here, seven authors $A_1 \sim A_7$) are connected; such a pairwise edge can only express the relationship between two authors and cannot describe the collaborative relationship of three or more authors. However, if a hypergraph of the paper collaboration network is constructed with the authors as nodes and the papers that the authors have collaborated on as hyperedges, a perfect portrayal of this type of network is possible. Compared with ordinary graphs, hypergraphs can describe multivariate, higher-order, complex relationships, which also gives them better performance in applications.
Different network structures give rise to different hypergraph definitions; several common hypergraph structures and their definitions follow.
Hypergraph: a hypergraph is defined by two sets, the vertex set and the hyperedge set, and is usually denoted by $G(V, E)$, where $V$ is the set of vertices and $E$ is the set of hyperedges.
Weighted hypergraph $G(V, E, W)$: the weight $w(e)$ indicates the importance of the relationship represented by hyperedge $e$ in the overall hypergraph.
Heterogeneous hypergraph: a hypergraph that contains different types of nodes and edges. Data nodes and relationship types in practical applications are often heterogeneous, so most research and applications of hypergraphs also target heterogeneous hypergraphs, and there is usually no clear distinction between heterogeneous and traditional hypergraphs.
K-uniform hypergraph: a hypergraph in which each hyperedge contains exactly $K$ vertices.
D-regular hypergraph: a hypergraph in which each vertex has degree $D$.
The symbolic representations and calculations used in hypergraphs are given in Table 2.
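To make these definitions and symbols concrete, the following minimal Python sketch (a toy hypergraph with illustrative values, not taken from any cited work) builds the vertex–hyperedge incidence matrix and computes the vertex and hyperedge degrees that appear among the symbols in Table 2:

```python
import numpy as np

# Toy hypergraph: 5 vertices, 3 hyperedges (e.g., papers linking co-authors).
# Unlike an ordinary graph edge, a hyperedge may contain any number of vertices.
hyperedges = [{0, 1, 2}, {1, 3}, {2, 3, 4}]
n_vertices = 5

# Incidence matrix H: H[v, e] = 1 if vertex v belongs to hyperedge e.
H = np.zeros((n_vertices, len(hyperedges)))
for e, members in enumerate(hyperedges):
    for v in members:
        H[v, e] = 1.0

w = np.ones(len(hyperedges))  # hyperedge weights (all 1 in an unweighted hypergraph)
d_v = H @ w                   # vertex degree: total weight of incident hyperedges
d_e = H.sum(axis=0)           # hyperedge degree: number of vertices in each hyperedge

print(d_v)  # [1. 2. 2. 2. 1.]
print(d_e)  # [3. 2. 3.]
```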

2.2. Hypergraph Representation Learning Algorithms

Since most real-world data can easily be represented by graphs, graph representation learning has received increasing attention in recent years. Because hypergraphs are generalizations and extensions of ordinary graphs, many scholars have extended ordinary graph representation learning algorithms to hypergraphs and have produced a large number of hypergraph learning results based on theories such as matrix decomposition, random walks, and deep learning. In contrast to previous categorizations into transductive and inductive methods [29], spectral and neural network methods [30], or unfolding and non-unfolding methods [30], in this paper we compare the learning methods of ordinary graphs and hypergraphs based on matrix decomposition, random walks, and deep learning, and analyze the correspondence between ordinary graphs and hypergraphs. The aim is to analyze the correspondence and transformation methods between ordinary graphs and hypergraphs and to provide a theoretical basis for in-depth study of hypergraph representation learning methods.

2.2.1. Matrix Decomposition-Based Methods

The basic idea of a graph representation learning algorithm is to represent a node in a complex network as a low-dimensional vector in a way that reflects the node's complex interaction information without losing information about the structure and content of the graph. Matrix decomposition is a common approach to graph representation learning; its core idea is to find an approximate matrix representation of the original graph and then learn the embedding vectors of the nodes through some form of matrix decomposition. Currently, there are two main types of matrix decomposition-based graph learning: one is the decomposition of the graph's Laplacian matrix (a spectral analysis approach) and the other is the decomposition of the node adjacency matrix.
(1)
Laplacian Matrix Decomposition
The Laplacian matrix is a central and fundamental concept in spectral graph theory, a product of the combination of graph theory and linear algebra, which investigates the properties of graphs by analyzing the eigenvalues and eigenvectors of certain matrices of the graph. In 2003, Belkin, M., et al. [35] first applied graph Laplacian eigenmaps to dimensionality reduction and representation learning of high-dimensional data. Chen, M. [36] transformed the graph representation learning problem into a least squares regression problem and proposed a new efficient graph representation learning framework. He, X. [37] proposed encoding and ranking local features of graph-structured data by the Laplacian score (LS) and achieved good results in classification tasks. In 2006, Zhou, D., et al. [38] extended the idea of Laplacian matrix decomposition to the hypergraph representation learning problem and proposed algorithms for hypergraph representation learning and transductive inference, applied to biological and social network analysis. Lu, F., et al. [39] of Dalian University of Technology used concepts from spectral clustering to define loss and objective functions and performed representation learning on hypergraph structures according to the properties of Laplacian matrices.
The Laplace operator is a second-order differential operator in Euclidean space, defined as the divergence of the gradient. In graph theory, it measures the gain of a node with respect to its neighboring nodes, and the Laplacian matrix collects the result of applying the operator to all nodes:
$$\Delta f = \begin{pmatrix} \Delta f_1 \\ \vdots \\ \Delta f_N \end{pmatrix} = \begin{pmatrix} d_1 f_1 - w_{1:} f \\ \vdots \\ d_N f_N - w_{N:} f \end{pmatrix} = \begin{pmatrix} d_1 & & \\ & \ddots & \\ & & d_N \end{pmatrix} f - \begin{pmatrix} w_{1:} \\ \vdots \\ w_{N:} \end{pmatrix} f = (D - W) f$$
Normalizing the Laplacian matrix gives
$$L_{sym} = D^{-1/2} L D^{-1/2} = I - D^{-1/2} W D^{-1/2}$$
By the properties of the normalized Laplacian matrix, for any vector $f \in \mathbb{R}^n$:
$$f^T L_{sym} f = \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} w_{ij} \left( \frac{f_i}{\sqrt{d_i}} - \frac{f_j}{\sqrt{d_j}} \right)^2$$
Taking label prediction as an example, the spectral hypergraph transductive algorithm is used for the binary classification problem; representation learning on the hypergraph using the Laplacian matrix amounts to minimizing the objective function:
$$\arg\min_{f} \left\{ \Omega(f) + \mu\, \mathcal{R}_{\mathrm{emp}}(f) \right\}$$
where $\Omega(f)$ is the graph structure loss function, which can be obtained from the definition of the Laplacian matrix:
$$\Omega(f) = f^T L_{sym} f$$
$\mathcal{R}_{\mathrm{emp}}(f)$ is the supervised empirical error, expressed as
$$\mathcal{R}_{\mathrm{emp}}(f) = \sum_{u \in V} \left( f(u) - q(u) \right)^2$$
where $\mu$ is the regularization parameter, $f(u)$ is the predicted label of vertex $u$, and $q(u)$ is its original (given) label.
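Setting the gradient of this objective to zero gives the closed-form solution $(L_{sym} + \mu I) f = \mu q$. The following minimal sketch illustrates this transductive label prediction, assuming the normalized hypergraph Laplacian $L_{sym} = I - D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2}$ of Zhou, D., et al. [38]; the toy incidence matrix and labels are illustrative assumptions:

```python
import numpy as np

def hypergraph_laplacian(H, w):
    """Normalized hypergraph Laplacian following Zhou et al. [38]:
    L_sym = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}."""
    d_v = H @ w                      # vertex degrees
    d_e = H.sum(axis=0)              # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(d_v))
    theta = Dv_inv_sqrt @ H @ np.diag(w) @ np.diag(1.0 / d_e) @ H.T @ Dv_inv_sqrt
    return np.eye(H.shape[0]) - theta

# Toy incidence matrix (5 vertices, 3 hyperedges) and partial labels q:
# +1 / -1 for the two labeled vertices, 0 for the unlabeled ones.
H = np.array([[1, 0, 0], [1, 1, 0], [1, 0, 1], [0, 1, 1], [0, 0, 1]], float)
q = np.array([1.0, 0.0, 0.0, 0.0, -1.0])
mu = 1.0

L = hypergraph_laplacian(H, np.ones(3))
# Closed-form minimizer of f^T L f + mu * ||f - q||^2:
f = np.linalg.solve(L + mu * np.eye(len(q)), mu * q)
print(np.sign(f))  # predicted binary labels for all vertices
```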
(2)
Graph Representation Learning Based on Node Adjacency Matrix
In addition to Laplacian matrices, which require solving generalized eigenvalue problems, node adjacency matrices can be used to approximate the proximity of nodes in a low-dimensional space by minimizing an objective function. As early as 1970, Golub, G.H., et al. [40] investigated the theory of singular value decomposition (SVD) and least squares methods. On this basis, Yang, C. [41] incorporated the textual features of vertices into network representation learning within a matrix decomposition framework and solved it using low-rank matrix decomposition. Singh, P. [42] proposed an ensemble matrix decomposition model that decomposes multiple relation matrices simultaneously, each relation having a different error distribution but shared parameters, to address the problem of one entity node participating in multiple relations in the network; a stochastic optimization algorithm for large-scale coefficient matrices was also proposed to cope with network sparsity. Subsequently, Xiang, L. [43] proposed the preprocessing-based hypergraph non-negative matrix factorization algorithm PHGNMF on the basis of the non-negative matrix factorization algorithm.
Although matrix decomposition-based methods can capture information about the graph structure and have achieved good results in many applications, they require a rigorous mathematical derivation and solution process, have high time and space complexity, and the decomposition of adjacency matrices can be limited in many cases.

2.2.2. Method Based on Random Walk

The random walk method was first introduced by Pearson, K. [44] in 1905. Based on random walks, a large number of paths in the graph are sampled by traversing from random initial nodes, and the nearest-neighbor sets of nodes are then constructed. The random nature of node walks provides the ability to explore the contextual information of the graph and to capture global and local structural information by traversing neighboring nodes. Probabilistic models such as Skip-gram [45] and Bag-of-Words [46] learn node representations from randomly sampled paths. Traditional random walk models for graphs are divided into two types: depth-first search-based and breadth-first search-based. The most typical algorithms are DeepWalk [47] and Node2vec [48].
DeepWalk uses node-to-node co-occurrence relationships in the graph to learn vector representations of nodes. It is based on depth-first (DFS) traversal and can repeatedly visit already-visited nodes, which gives it better global coverage but makes it less effective at representing nearby neighbors.
Unlike DeepWalk, the subsequently proposed LINE [49] model uses a breadth-first (BFS) traversal strategy, constructs the node neighborhood structure by vertex negative sampling and edge sampling, defines first-order and second-order similarity, and can be applied to weighted graphs.
BFS covers the surrounding neighbors well, but the search is localized and cannot sense more distant nodes. Node2vec combines the advantages of DFS and BFS, striking a compromise between local and global exploration to obtain a higher-quality sampling sequence; its random walk strategy is shown in Figure 3.
The same random walk idea has also been extended to hypergraph representation learning. To preserve the high-order proximity of the hypergraph, Huang, J., et al. [50] constructed the Hyper2vec model, an efficient and scalable biased second-order random walk model under the Skip-gram framework. On this basis, the concept of the hyperpath was proposed, the indecomposability of hyperpaths was measured, and the hyperpath-based random walk algorithm Hyper-gram [51] was designed to preserve the structural information of the hypernetwork. Yang, D., et al. [52] applied the hypergraph random walk model to location-based social networks (LBSNs), first constructing an LBSN hypergraph that includes user–user (friendship) homogeneous hyperedges and user–time–location semantic heterogeneous hyperedges, as shown in Figure 4.
The black dashed lines in the figure indicate user-to-user friendship links, and the bold colored lines indicate the links among the users who checked in, the type of activity, the time, and the place. On this basis, the authors also proposed a random walk-stay scheme, shown in Figure 5, which jointly samples user check-ins and social relationships and then learns node embeddings from the sampled hypergraphs, not only maintaining the proximity of n-way nodes captured by the hypergraphs but also considering the transformation of the embedding space between node domains to fully capture the complex structural features of LBSN heterogeneous hypergraphs.
In summary, walk-based graph representation learning methods are designed to fully capture the co-occurrence relationships between nodes in the network through a walking strategy. As long as a walking strategy is designed to obtain walk sequences, node vectors can be obtained, so such methods are relatively easy to implement. However, they only take into account the structural features of the nodes in the graph and not additional information such as node attributes and text. This information plays an important role in networks, especially heterogeneous networks. How to adequately embed both textual and structural information into the vector representation of nodes is currently a hot research issue.
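As a concrete illustration of the walk-based idea on hypergraphs, the sketch below implements the basic unbiased node–hyperedge–node walk; Hyper2vec [50] and Hyper-gram [51] refine this scheme with biased second-order transitions and hyperpath constraints, which are omitted here, and the toy hypergraph is an illustrative assumption:

```python
import random

def hypergraph_walk(incident, hyperedges, start, length):
    """Unbiased hypergraph random walk: from the current vertex, pick an
    incident hyperedge uniformly at random, then pick the next vertex from
    that hyperedge. The resulting sequences can be fed to a Skip-gram model
    (as in DeepWalk/Node2vec) to learn node embeddings."""
    walk, v = [start], start
    for _ in range(length - 1):
        e = random.choice(incident[v])              # sample an incident hyperedge
        others = [u for u in hyperedges[e] if u != v]
        v = random.choice(others) if others else v  # stay put on a singleton edge
        walk.append(v)
    return walk

hyperedges = [[0, 1, 2], [1, 3], [2, 3, 4]]
incident = {0: [0], 1: [0, 1], 2: [0, 2], 3: [1, 2], 4: [2]}
print(hypergraph_walk(incident, hyperedges, start=0, length=8))
```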

2.2.3. Deep Learning Based Approach

Before neural networks were introduced to graph representation learning, spectral analysis methods dominated, and most research was based on spectral theory. With the emergence of various deep learning algorithms, researchers have started to extend deep learning techniques such as convolutional neural networks and attention mechanisms to graph representation learning. Deep learning-based graph representation learning began with the graph convolutional network GCN [53]. A GCN does for graph data what a CNN does for grid data: it extracts graph structure features that can be used for node classification, graph classification, edge prediction, graph representation learning, and so on. On the basis of the GCN, Wang, X., et al. [54] proposed HAN (Heterogeneous graph Attention Network), a heterogeneous graph neural network model based on a hierarchical attention mechanism, which learns the importance between nodes and their meta-path-based neighbors to perform representation learning on nodes and finally complete the classification task.
A GCN is a neural network whose layers propagate information in the following manner:
$$H^{(l+1)} = \sigma\left( \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)} \right)$$
where $H^{(l+1)}$ is the hidden-layer representation of the nodes at layer $l+1$, $H^{(l)}$ is the representation at the previous layer, and $W^{(l)}$ is the matrix of trainable parameters to be learned. $\tilde{A}$ is the adjacency matrix of the nodes with self-loops, normalized by multiplying on the left and right by $\tilde{D}^{-\frac{1}{2}}$, where $\tilde{D}$ is its degree matrix, and $\sigma(\cdot)$ denotes the activation function, most often ReLU.
From the propagation rule above, it is not difficult to see that GCN extracts graph structural features by using Laplacian matrix decomposition, and its feature extractor is called the Laplacian convolutional kernel.
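As a concrete illustration, the propagation rule above can be sketched in a few lines of numpy; the toy adjacency matrix, feature dimensions, and random weights are illustrative assumptions, and a real implementation would learn $W^{(l)}$ by backpropagation:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: H' = ReLU(D~^{-1/2} A~ D~^{-1/2} H W),
    where A~ = A + I adds self-loops and D~ is the degree matrix of A~."""
    A_tilde = A + np.eye(A.shape[0])
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_tilde.sum(axis=1)))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt   # symmetric normalization
    return np.maximum(A_hat @ H @ W, 0.0)       # ReLU activation

# Toy graph: 4 nodes, 3-dimensional input features, 2-dimensional hidden layer.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 1], [0, 1, 0, 1], [0, 1, 1, 0]], float)
H0 = np.random.randn(4, 3)
W0 = np.random.randn(3, 2)                      # would be learned in practice
print(gcn_layer(A, H0, W0).shape)               # (4, 2)
```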
Graph convolutional networks, which extract high-level information by aggregating features in node neighborhoods, have a wide range of applications in social networks, chemical molecular structure modeling, recommendation systems, language processing, and complex word–sentence relationships, and were therefore quickly extended to hypergraph representation learning. HGNN [55] (HyperGraph Neural Network) is a GCN-style model based on hyperedge convolution that learns higher-order correlations in realistic data. On top of this, Zhang, R., et al. [56] used the GAT model to learn homogeneous and heterogeneous graphs with variable hyperedge sizes. Models such as HWNN [57] (Hypergraph Wavelet Neural Network) and HyperGCN [58] (HyperGraph Convolutional Network) likewise use GCN-style convolution to model complex relations.
In addition, Liang, Y., et al. [31] proposed a hypergraph capsule convolutional neural network (HGC-CNN) with multiple features, using capsule neural networks to integrate different types of features; correlations between samples were learned using hypergraph regularization, improving the descriptive ability of the hypergraph. Du, B., et al. [32] proposed HyperGene, an end-to-end, graph neural network-based hypergraph pretraining framework that combines two levels of self-supervised tasks (node-level and hyperedge-level), supports both transductive and inductive learning settings, requires no additional datasets, and is adaptive, which helps pretrained models adapt better to downstream tasks. To improve the performance of multilabel classification, Wu, X., et al. [33] proposed a higher-order semantic learning model (AdaHGNN) based on adaptive hypergraph neural networks that automatically constructs adaptive hypergraphs from label embeddings; a hypergraph neural network (HGNN) was used to associate feature vectors, explore higher-order semantic interactions, and apply multi-scale learning to reduce sensitivity to object size inconsistencies.
Table 3 compares the advantages and disadvantages of the different representation learning methods.
In summary, most traditional graph representation learning algorithms can be generalized to hypergraph representation learning tasks. However, due to the complexity of the hypergraph structure itself, hypergraph representation learning has to balance the accurate capture of features against the high time and space complexity this entails, which is one of the problems it urgently needs to address.

3. Optimization of the Hypergraph Structure

Current hypergraph generation methods mainly include distance-based [59,60,61], representation-based [61,62,63], attribute-based [64,65], and network-based approaches [66,67]. However, these methods have certain limitations. First, the constructed hypergraphs may not be optimal, so the models cannot fit the data well. In addition, the computational cost is high, especially when the hypergraph structure is updated simultaneously, so general hypergraph algorithms are difficult to apply to large-scale datasets. The modeling capability of the hypergraph structure has a significant impact on learning performance, and learning efficiency is also critical for large-scale data. Therefore, optimizing existing hypergraph structures and investigating more efficient solutions is a high priority for hypergraph learning. In this paper, we summarize existing hypergraph optimization methods, focusing on dynamic hypergraphs, hyperedge weight optimization, and multimodal hypergraphs.

3.1. Dynamic Hypergraph

Hypergraph representation learning aims to represent network information as low-dimensional dense real vectors to solve tasks such as link prediction, anomaly detection, and recommender systems. Although significant progress has been made in hypergraph representation learning in recent years, most research is based on static networks. The real world, in contrast, constitutes a dynamically changing network, so taking the dynamics of the network into account and truly reflecting the evolution of real networks will make hypergraph representation learning more valuable.
To address the shortcoming that existing hypergraph-based neural networks use only the initial hypergraph structure and ignore its dynamic modification, Zhang, Z., et al. [68] proposed a dynamic hypergraph learning structure. Building on this, DHGNN [69] (Dynamic HyperGraph Neural Network) overcomes this shortcoming of existing graph models. It consists of two modules: dynamic hypergraph construction (DHG) and hypergraph convolution (HGC). The HGC module consists of vertex convolution and hyperedge convolution, which aggregate features between vertices and hyperedges, respectively. The graph network is allowed to evolve by extracting features to mine new relationships, and dynamic hypergraph construction is achieved by adjusting feature embeddings to dynamically modify the graph or hypergraph structure during training.
As shown in Figure 6, a dynamic hypergraph construction method (DHG) is first proposed, which uses the KNN method to generate the basic hyperedges and extends the set of adjacent hyperedges with a k-means clustering algorithm.
For each vertex u in the hypergraph, multiple hyperedges are first generated by the dynamic hypergraph construction process, and a separate vertex convolution is then performed on each of these hyperedges to obtain the hyperedge features. Hyperedge convolution of these features finally yields the new features of the central vertex u. The whole process updates the features of vertex u so that a new hypergraph can be constructed from them, in a continuous loop. Dynamic hypergraph construction can thus extract both the local and global relationships of the data.
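A minimal sketch of this two-stage construction is given below, assuming scikit-learn's nearest-neighbor and k-means utilities; the feature matrix, neighborhood size, and cluster count are illustrative assumptions rather than the settings used in DHGNN [69]:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import KMeans

def dynamic_hyperedges(X, k_nn=4, n_clusters=3):
    """DHG-style construction: one basic hyperedge per vertex from its k nearest
    neighbors (the vertex itself included), extended by k-means cluster hyperedges.
    Re-running this after each feature update yields a dynamic hypergraph."""
    _, idx = NearestNeighbors(n_neighbors=k_nn).fit(X).kneighbors(X)
    knn_edges = [set(row) for row in idx]
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
    cluster_edges = [set(np.where(labels == c)[0]) for c in range(n_clusters)]
    return knn_edges + cluster_edges

X = np.random.randn(20, 8)         # 20 vertices with 8-dim (embedded) features
print(len(dynamic_hyperedges(X)))  # 20 KNN hyperedges + 3 cluster hyperedges = 23
```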
In addition to the dynamic optimization of hypergraph structures through feature learning described above, many scholars have incorporated temporal features when analyzing specific network structures for better downstream tasks. For example, Ge, S.L., et al. [70] considered that social networks are dynamic over time. By improving Node2vec's random walk strategy and integrating an attention mechanism into the hypergraph, they proposed Meta-DHGAT, a hypergraph model suitable for community discovery on heterogeneous dynamic networks, and solved the model by generating an association matrix that fuses temporal characteristics. The generation process of its dynamic association matrix is shown in Figure 7.
As shown in Figure 7, firstly, the dataset is divided into equal time slices in chronological order, and an adjacency matrix is generated for the nodes on each time slice. Starting from the second time slice, the multislice weighted modularity values with the previous time slice are calculated to generate a local modularity matrix. Then, the modularity matrix is multiplied with the adjacency matrix on the current time slice to generate an association matrix that incorporates the modularity information. Finally, the matrices generated from the different time slices are updated with the fused temporal correlation matrix.

3.2. Optimization of Hyperedge Weights

A good choice of hyperedge weights can significantly improve the performance of graph-based algorithms. The weight of a hyperedge indicates its degree of importance, and hyperedge weights should be set to different values depending on the connections between nodes. The setting of hyperedge weights affects the information representation of the graph structure and the results of model computation. Therefore, it is crucial to weight the hyperedges according to the representational capability of each hyperedge.
In the initial hypergraph structure, the hyperedge weights are often simply set to 1 [38]. In most current studies, the weight of a hyperedge is calculated according to certain rules. Huang, Y., et al. [59] designed an image retrieval system based on a probabilistic hypergraph model, which generates hyperedges by the k-nearest-neighbor method and calculates the weight of each hyperedge as the sum of the similarities between the central vertex of the hyperedge and all other vertices:
$$w(e) = \sum_{x_a, x_b \in e} \exp\left( -\frac{d(x_a, x_b)^2}{\sigma^2} \right)$$
where $x_a, x_b$ are two vertices in the same hyperedge, $d(x_a, x_b)$ is the Euclidean distance between them, and $\sigma$ is the average distance between all vertices.
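The sketch below implements this weighting rule with illustrative features and an assumed hyperedge, taking $\sigma$ as the average pairwise distance, as in the definition above:

```python
import numpy as np
from itertools import combinations

def hyperedge_weight(X, edge, sigma):
    """Hyperedge weight as summed pairwise vertex similarity,
    w(e) = sum_{x_a, x_b in e} exp(-d(x_a, x_b)^2 / sigma^2)."""
    return sum(
        np.exp(-np.linalg.norm(X[a] - X[b]) ** 2 / sigma ** 2)
        for a, b in combinations(edge, 2)
    )

X = np.random.randn(6, 4)          # 6 vertices with 4-dim features (illustrative)
edge = [0, 2, 5]                   # one hyperedge over three vertices
# sigma: average pairwise distance over all vertices
pair_dists = [np.linalg.norm(X[i] - X[j])
              for i, j in combinations(range(len(X)), 2)]
print(hyperedge_weight(X, edge, sigma=np.mean(pair_dists)))
```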
Zhang, Z., et al. [71] proposed a new hypergraph framework for unsupervised feature selection. To accurately represent the higher-order information in the hypergraph structure, they proposed multidimensional interaction information (MII) as a higher-order similarity metric for calculating the hyperedge weights:
$$w_{i_1, \dots, i_k} = \frac{k \cdot I(x_{i_1}, x_{i_2}, \dots, x_{i_k})}{H(x_{i_1}) + H(x_{i_2}) + \cdots + H(x_{i_k})}$$
where $I(x_{i_1}, x_{i_2}, \dots, x_{i_k})$ is the interaction information of the $k$ vertices and $H(x_{i_1}) + H(x_{i_2}) + \cdots + H(x_{i_k})$ is the sum of their information entropies.
Gao, Y., et al. [62] calculated hyperedge weights from the sum of similarities between vertices in the hyperedge and applied them to 3D object detection and recognition (multimodal hypergraphs). Yu, J., et al. [72] added a sparsity constraint to the loss function by setting the weights of useless hyperedges to 0, added two constraint terms to the objective function (the hyperedge weights sum to 1 and are non-negative), used the coordinate descent method, and proposed an iterative process that alternately updates labels and weights.
Subsequently, Chasapi, A., et al. [73] used gradient descent to update the hypergraph weights with good results, but the iterative operation of such methods has high computational complexity. Chen, Z., et al. [74] designed the hypergraph-based Adaptive Multimodal Hypergraph Learning for Image Classification (AMH) algorithm. Their main contribution was a weight optimization method for multimodal data: the initial hyperedge weights are calculated from different features, and the constraint conditions are then transformed into penalty functions to optimize the weight matrix.
From the perspective that different hyperedges have different effects on the hypergraph representation results, Yu, Y., et al. [18] summarized hyperedge weight optimization into three methods, based on simplex volume, the scatter matrix trace, and linear reconstruction error, from the perspectives of geometry, multivariate statistical analysis, and linear regression, respectively; each re-measures the similarity between point sets to optimize the hyperedge weights.
(1)
Hyperedge weighting method based on simplex volume
$$\mathrm{Vol}(E) = \frac{\sqrt{\left| \det(G^T G) \right|}}{k!}$$
The matrix $G$ is of order $k \times k$, with column vectors $g_i = (x_0 - x_i)$; $\mathrm{Vol}(E)$ is the simplex volume of the hyperedge, and the smaller its value, the closer the node relationships within the hyperedge. On this basis, the hyperedge weight can be expressed as
$$w(e) = e^{-\frac{\mathrm{Vol}(E)}{\mu}}$$
where μ is a positive parameter.
(2)
Hyperedge weighting method based on the scatter matrix trace
The scatter matrix is used to measure the closeness of the nodes within the hyperedges and is calculated as
$$S = (X - \bar{X})(X - \bar{X})^T$$
where $X = (x_1, x_2, \dots, x_k)$ is the matrix associated with the $k$ nodes of the hyperedge and $\bar{X}$ is the corresponding $k \times d$-dimensional mean matrix.
The weight of the hyperedge is therefore
$$w(e) = \mathrm{tr}\left( e^{-\frac{S}{\mu}} \right)$$
where $\mathrm{tr}(\cdot)$ denotes the matrix trace and $\mu$ is a positive parameter.
(3)
Hyperedge weighting method based on linear reconstruction error
Inspired by linear regression methods, the reconstruction error of the hyperedge is defined as
$$R = \frac{1}{k} \sum_{i=1}^{k} \frac{\left\| x_i - X_{t \neq i,\, t \in e_j}\, \hat{c}_i^{\,T} \right\|^2}{\left\| x_i \right\|^2}$$
where the reconstruction coefficient $\hat{c}_i$ is obtained by the least squares method, i.e.,
$$\hat{c}_i = \arg\min_{c_i} \left\| x_i - X_{t \neq i,\, t \in e_j}\, c_i^{\,T} \right\|^2$$
where $X_{t \neq i,\, t \in e_j}$ is a $d \times (k-1)$ matrix whose $t$-th column is $x_t$, with $t \neq i$ and $1 \le t \le k$.
The hyperedge weights are expressed as
$$w(e) = e^{-\frac{R}{\mu}}$$
where μ is a positive parameter.
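To make the three schemes concrete, the sketch below implements all three weightings directly from the formulas above; the feature matrix is an illustrative assumption, and the trace-based weight evaluates the matrix exponential $e^{-S/\mu}$ with scipy:

```python
import math
import numpy as np
from scipy.linalg import expm

def weight_volume(X_e, mu=1.0):
    """Simplex-volume weight w(e) = exp(-Vol(E)/mu): a smaller simplex volume
    means tighter vertex relationships, hence a larger weight."""
    G = (X_e[0] - X_e[1:]).T                # d x k matrix with columns x_0 - x_i
    vol = math.sqrt(abs(np.linalg.det(G.T @ G))) / math.factorial(G.shape[1])
    return math.exp(-vol / mu)

def weight_scatter_trace(X_e, mu=1.0):
    """Scatter-matrix-trace weight w(e) = tr(exp(-S/mu))."""
    centered = X_e - X_e.mean(axis=0)
    S = centered.T @ centered               # d x d scatter matrix
    return np.trace(expm(-S / mu))

def weight_reconstruction(X_e, mu=1.0):
    """Linear-reconstruction-error weight w(e) = exp(-R/mu): each vertex is
    regressed on the other vertices of the hyperedge by least squares."""
    k, R = X_e.shape[0], 0.0
    for i in range(k):
        rest = np.delete(X_e, i, axis=0).T  # d x (k-1) matrix X_{t != i}
        c, *_ = np.linalg.lstsq(rest, X_e[i], rcond=None)
        R += np.linalg.norm(X_e[i] - rest @ c) ** 2 / np.linalg.norm(X_e[i]) ** 2
    return math.exp(-(R / k) / mu)

X_e = np.random.randn(4, 6)   # one hyperedge with 4 vertices and 6-dim features
print(weight_volume(X_e), weight_scatter_trace(X_e), weight_reconstruction(X_e))
```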
In summary, the optimization methods for the hyperedge weights are summarized in Table 4.

3.3. Multimodal Hypergraph Generation

A multimodal hypergraph is a hypergraph whose vertices or hyperedges are constructed and generated from multimodal data sources. In this paper, multimodal data sources are summarized as multimodal data and multimodal features. Multimodal data refers to data composed of different forms; for example, video contains multimodal information such as audio, images, and subtitle text. Multimodal features refer to the analysis of unimodal data from different perspectives; for example, images can be analyzed in terms of different modal features such as intensity, greyscale, and texture. This multimodal information is correlated, and hypergraphs can exploit their advantages to unify the information of different modalities and carry out comprehensive analysis, achieving complementary fusion of multimodal features and joint learning of potentially shared information between modalities, thus improving the efficiency of downstream tasks. Multimodal hypergraph research has therefore been a research hotspot attracting more and more researchers.
(1)
Multimodal Feature Hypergraph
Chen, Z., et al. [75] proposed a hypergraph-based cross-modal retrieval algorithm that extracts both the textual and sentiment modal features of comments; it significantly improves the accuracy of sentiment classification while reducing the computational cost. The model is shown in Figure 8.
Li, Q., et al. [76] of Dalian University of Technology combined the hypergraph algorithm with multimodal data to solve the problem of weight assignment to different modalities when processing multimodal data, applied the hypergraph algorithm to land data to cluster the influencing factors of heavy metal pollution, and thereby solved the problem of tracing the source of heavy metal pollution.
(2)
Multimodal Data Hypergraph
He, L., et al. [77] constructed a hypergraph among users, goods, and multimodal attributes, learned group-aware representations of users and goods on the hypergraph, fused the group-aware representations with the temporal representations to obtain the final user representations, and combined the item representations learned on the hypergraph with the original item representations to obtain the final item representations.
Xu, L., et al. [78] integrated multimodal information, including image content, user-generated labels, and geographic location, into a unified hypergraph framework for image ranking. Compared with single visual features, multimodal fusion achieves higher accuracy and stronger robustness.
Wang, L., et al. [79] proposed a new feature fusion strategy that integrates multimodal features into a unified hypergraph. An effective multimodal hypergraph (EMHG) was constructed to address the high computational complexity of multimodal feature fusion methods, and a multi-label correlation hypergraph (LCHG) was constructed to model the complex associations between labels. Finally, combining the two hypergraphs, an adaptive learning algorithm was used to learn both the label scores and the hyperedge weights, with the importance of different features represented by the hyperedge weights, applied to the image classification task.
As shown in Figure 9, the algorithm constructs a hypergraph for each feature separately, and the most effective nodes from each hypergraph are selected to construct the final hypergraph. This approach effectively reduces the time complexity and is helpful for larger datasets. However, because it selects nodes from single-modal hypergraphs, it reduces the complementarity between the multimodal data and does not effectively reflect the characteristics of multimodal data.
In summary, with the continuous development of Internet technology, network data contain a large amount of text, image, voice, video, and other multimodal information. Fusing multimodal data into a hypergraph model to improve model performance has been a research hotspot in recent years. Multimodal hypergraphs make up for the limitations of single-modal data representation. However, the sparsity of the dataset increases threefold when the visual, audio, and text features of the target items are considered, compared with a single-modal feature space. Therefore, effectively alleviating the sparsity caused by multi-channel features without affecting model performance is the key to solving this problem.

4. Uncertain Hypergraph Construction and Representation Learning

Although hypergraphs can represent deterministic multivariate relationships (including binary and unary relationships) in complex systems, in the real world deterministic things are relative and uncertain things are absolute. With the rapid development of various industries and the advent of the big data era, huge amounts of data are continuously generated, and these data contain a large amount of uncertain data. How to use hypergraphs to describe and measure the uncertainty in these complex relationships has received more and more attention from scholars [80].
Uncertain hypergraphs are used to describe systems with uncertainty and relative complexity and are an important and useful tool for describing complex systems in real life. The only review of uncertain hypergraphs found so far is the work of Peng, J., et al. [81], which constructs a new interdisciplinary theory combining uncertainty theory and hypergraph theory; the paper introduced the concept of uncertain hypergraphs and briefly analyzed their representation, operations, and properties.
Uncertainty problems are usually classified as random, fuzzy, or rough, and the resulting hypergraph models for characterizing uncertain relationships cover three corresponding aspects. The following presents hypergraph modeling approaches for characterizing uncertain relations from three perspectives: probability-based (random) hypergraphs, fuzzy hypergraphs, and rough hypergraphs.

4.1. Random Hypergraphs

Since the causal relationships of things are uncertain, the uncertainty of outcomes resulting from the occurrence of events can be measured by probability; random graphs use probability to indicate the likelihood that an event occurs. Kolmogorov, A. [82] first introduced probability theory into graph theory; subsequently, Erdős, P. [83] and Gilbert, E.N. [84] defined random graphs almost simultaneously. Since then, many scholars have studied random hypergraphs. For example, Cooper, J., et al. [85] studied asymptotic descriptions of the adjacency eigenvalues of random uniform hypergraphs. Semenov, A., et al. [86] discussed the number of weak colorings of random hypergraphs. Liu, Y.P., et al. [87] studied the upper tail problem for random hypergraphs. Beyond the properties of random hypergraphs, Gan, Y. [88] combined Bayesian networks and hypergraphs for the analysis of uncertain data lineage, defined the hyper-Bayesian graph, performed the calculation and simplification of lineage probabilities, and provided the local propagation process of lineage on the hyper-Bayesian network. Liu, G., et al. [16] proposed a method for constructing probabilistic hypergraphs of patent texts based on the k-nearest-neighbor strategy and, on this basis, provided an algorithm for the automatic classification of patent texts based on hypergraph learning. However, the application of random hypergraphs is usually limited by the fact that uncertain quantities can be described by probability theory only when the observed data are close enough to the true frequency distribution.

4.2. Fuzzy Hypergraphs

Fuzziness arises because events have no clear qualitative meaning and no clear quantitative boundaries, resulting in an "either/or" form of uncertainty; it is measured by membership, indicating the extent to which something belongs to a class. In real life, when sufficient data are not available or the cost of obtaining data is too high, experts in the field can be invited to provide confidence in a quantity with uncertain information. With the development of fuzzy set theory, some scholars began to introduce it into graph theory; fuzzy graphs [89] were subsequently proposed, and much research has been carried out on fuzzy graphs and fuzzy hypergraphs. For example, Wang, Q., et al. [90] used fuzzy equivalence relations to delineate hyperedges, constructed fuzzy hypergraphs, and represented the granular structure. Luqman, A., et al. [91] proposed a new generalized fuzzy hypergraph and provided an application of complex orthopair fuzzy hypergraphs. Wang, J. [92] of Northwest Normal University, with the help of hesitant fuzzy sets and fuzzy hypergraphs, defined the hesitant fuzzy hypergraph and analyzed its formal concept, structure, and graph operations; on this basis, he also put forward a multi-attribute decision algorithm for the hesitant fuzzy hypergraph model based on granular computing and applied it to social networks. Liu, Y., et al. [93] of Wuhan University established fuzzy membership functions for the characteristics of various factors in the land evaluation problem, used the membership value of each evaluation unit to construct fuzzy feature similarity between data objects as the similarity metric, constructed a fuzzy hypergraph clustering model to complete the land evaluation clustering task, and achieved better clustering results.
All existing hypergraph methods based on fuzzy sets and their extensions rely on membership functions and parameterization tools. However, in cases where no additional information, membership functions, or parametric properties are available, existing fuzzy hypergraph-based models are difficult to apply.

4.3. Rough Hypergraphs

The theory of rough sets, proposed by Pawlak, Z. [94], is an important branch of uncertainty theory and requires no prior information. Indiscernibility between events is caused by insufficient and incomplete knowledge (or information) describing the events; rough sets assign all indistinguishable events to a boundary region. Rough hypergraphs, as an extension of hypergraphs, can use the given information to investigate incomplete information in the hypergraph model, i.e., no additional assumptions are required. To address the problem that traditional hypergraphs cannot handle continuous attributes, Shi, J. [95] and Liu, X. [96] combined neighborhood rough set theory with the connection degree of incomplete and unbalanced information systems, reconstructed the data features before constructing the hypergraph, proposed the neighborhood hypergraph, and applied it to classification tasks. Gauthama, R.M.R. [97] combined rough set and hypergraph theory, used conditional attributes as vertices, constructed hyperedges by attribute reduction, and thus proposed a new feature selection algorithm for finding optimal feature subsets, which was applied to intrusion detection systems. Recently, Sarwar, M. [98] applied the concept of rough sets to hypergraphs and introduced the new concept of rough hypergraphs based on rough relations, providing the concepts and properties of isomorphism, consistency, linearity, duality, conjunction, exchangeability, and distributivity in rough hypergraphs. There are also links between formal concept analysis and hypergraph-based rough set theory [99], and between hypergroups, rough sets, and hypergraphs, with hypergraph operations based on upper and lower approximation relations [100]. However, research fusing rough sets and hypergraphs is mostly confined to feature extraction from data, to the approximation process performed during the structural analysis of hypergraphs, and to the theory and properties of rough hypergraphs; constructing rough relational hypergraph models for practical applications according to the roughness of nodes and hyperedges remains largely unexplored and is a direction for future work.
In summary, the random hypergraph, fuzzy hypergraph, and rough hypergraph models proposed for uncertainty problems have their own characteristics and applicability conditions, and the appropriate uncertain hypergraph model can be chosen according to the type of data in the uncertainty problem. Table 5 summarizes the scope of application and core techniques of the three kinds of uncertain hypergraphs.

5. Conclusions and Future Research Directions

By reviewing the development process of hypergraph theory, we briefly introduced its basic concepts and reviewed the key technologies of the new generation of hypergraph representation learning and structure optimization methods in China and abroad. As shown by the publications of the last two years, research on hypergraph representation learning is moving from theory toward applications and the structural optimization of models. Hypergraphs have been applied with good results in a variety of fields such as clustering, classification, link prediction, importance ranking, recommendation systems, computer vision, chemical analysis, and biological networks. More and more researchers are focusing on the structural and theoretical study of uncertain hypergraphs, but further in-depth research on the representation and application of uncertain hypergraphs is urgently needed. Several potential research directions for hypergraph and uncertain hypergraph learning algorithms are as follows.
(1)
How to reduce the algorithmic time and space complexity of hypergraph learning while accurately acquiring higher-order semantic information will be a hot topic of research in the long term;
(2)
How to obtain the adjacency matrix and point-edge association matrix accurately;
(3)
How to design an uncertain hypergraph model according to application scenarios combining the uncertainty of hypergraph vertices and hyperedges;
(4)
How to combine uncertainty theories such as probability theory, fuzzy sets, and rough sets to measure uncertain relationships in real networks, and thus construct vertex adjacency matrices and point-edge association matrices suitable for uncertain hypergraph structures;
(5)
How to establish correspondence and transformation methods between ordinary graphs and hypergraphs, and analyze the mechanism of correspondence between deterministic hypergraphs and uncertain hypergraphs, so that traditional representation learning can be better adapted to hypergraphs as well as representation learning of uncertain hypergraphs.

Author Contributions

Conceptualization, L.Z.; methodology, L.Z.; investigation, J.W. (Jiang Wang) and S.L.; writing—original draft preparation, L.Z.; writing—review and editing, J.G. and C.Z.; visualization, J.W. (Jiazheng Wang). All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by S&T Program of Hebei, grant number 20310301D; the National Natural Science Foundation of China, grant number 62172352 and 61871465; the Natural Science Foundation of Hebei Province, grant number F2019203157; the Key project of science and technology research in Hebei Province, grant number ZD2019004.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Berge, C. Graphs and Hypergraphs; Dunod: Paris, France, 1970; English translation, North-Holland: Amsterdam, The Netherlands, 1973.
2. Lovász, L. Normal hypergraphs and the perfect graph conjecture. Discret. Math. 1972, 2, 253–267.
3. Erdős, P.; Lovász, L. Problems and results on 3-chromatic hypergraphs and some related questions. In Infinite and Finite Sets; Colloquia Mathematica Societatis János Bolyai 10; János Bolyai Mathematical Society: Keszthely, Hungary, 1973.
4. Berge, C. Packing problems and hypergraph theory: A survey. In Annals of Discrete Mathematics; Elsevier: Amsterdam, The Netherlands, 1979; pp. 3–37.
5. Berge, C.; Johnson, E.L. Coloring the edges of a hypergraph and linear programming techniques. In Annals of Discrete Mathematics; Elsevier: Amsterdam, The Netherlands, 1977; pp. 65–78.
6. van Cleemput, W. Hypergraph models for the circuit layout problem. Appl. Math. Model. 1976, 1, 160–161.
7. Alia, G.; Maestrini, P. A procedure to determine optimal partitions of weighted hypergraphs through a network-flow analogy. Calcolo 1976, 13, 191–211.
8. Ojaboni, M.M.O. Query Translation in a Heterogeneous Distributed Database Based on Hypergraph Models (Relational Model, Hierarchical, Network, Universal); The University of Oklahoma: Norman, OK, USA, 1986.
9. Goldstein, A. Database systems: A directed hypergraph database: A model for the local loop telephone plant. Bell Syst. Tech. J. 1982, 61, 2529–2554.
10. Saccà, D. Closures of database hypergraphs. J. ACM 1985, 32, 774–803.
11. Rital, S.; Cherifi, H.; Miguet, S. Weighted adaptive neighborhood hypergraph partitioning for image segmentation. In Pattern Recognition and Image Analysis, Proceedings of the Third International Conference on Advances in Pattern Recognition, ICAPR 2005, Bath, UK, 22–25 August 2005; Springer: Berlin/Heidelberg, Germany, 2005.
12. Han, E.-H.; Karypis, G.; Kumar, V.; Mobasher, B. Clustering in a High-Dimensional Space Using Hypergraph Models; 1997. Available online: https://hdl.handle.net/11299/215349 (accessed on 22 April 2022).
13. Zhang, L.; Gao, Y.; Hong, C.; Feng, Y.; Zhu, J.; Cai, D. Feature correlation hypergraph: Exploiting high-order potentials for multimodal recognition. IEEE Trans. Cybern. 2013, 44, 1408–1419.
14. Li, L.; Li, T. News recommendation via hypergraph learning: Encapsulation of user behavior and news content. In Proceedings of the Sixth ACM International Conference on Web Search and Data Mining, Rome, Italy, 4–8 February 2013.
15. Tan, S.; Guan, Z.; Cai, D.; Qin, X.; Bu, J.; Chen, C. Mapping users across networks by manifold alignment on hypergraph. In Proceedings of the AAAI Conference on Artificial Intelligence, Québec City, QC, Canada, 27–31 July 2014.
16. Liu, G.-F.; Wang, M.-R.; Liu, H.-N. Research on patent text classification method based on probabilistic hypergraph semi-supervised learning. J. Intell. 2016, 35, 187–191.
17. Wu, Y.; Wang, Y.; Wang, X.; Xue, Z.; Li, L. Semi-supervised node classification based on hypergraph convolution for heterogeneous networks. Chin. J. Comput. 2021, 44, 2248–2260.
18. Yu, Y.; Zhang, W.; Li, Z.; Li, Y. Hypergraph-based personalized recommendation and optimization algorithm in EBSN. J. Comput. Res. Dev. 2020, 57, 2556.
19. Jin, T.; Cao, L.; Jie, F.; Ji, R. Link-aware semi-supervised hypergraph. Inf. Sci. 2020, 507, 339–355.
20. Hu, F.; Zhao, H.; He, J.; Li, F.; Li, L.; Zhang, Z. A model for the evolution of research collaboration networks based on hypergraph structure. J. Phys. 2013, 62, 547–554.
21. Yang, Q. Research on Complex Network Related Problems Based on Hypergraph Structure. Master’s Thesis, National University of Defense Technology, Changsha, China, 2014.
22. Meng, L. Research on the Evolutionary Mechanism and Application of Hypernetworks. Master’s Thesis, Qinghai Normal University, Xining, China, 2020.
23. Hu, F.; Liu, M.; Zhao, J.; Lei, L. Characterization of protein complex supernetworks and applications. Complex Syst. Complex. Sci. 2019, 15, 1–8.
24. Hwang, T.; Tian, Z.; Kuang, R.; Kocher, J.P. Learning on weighted hypergraphs to integrate protein interactions and gene expressions for cancer outcome prediction. In Proceedings of the 2008 Eighth IEEE International Conference on Data Mining, Pisa, Italy, 15–19 December 2008; IEEE: Piscataway, NJ, USA, 2008.
25. Xu, S.; Sun, Y.; Yang, S.; Huang, R. Hypergraph theory and its applications. J. Electron. 1994, 22, 65–72.
26. Hu, F.; Zhao, H.X.; Ma, X.J. An evolving hypernetwork model and its properties. Sci. Sin. Phys. Mech. Astron. 2013, 43, 16–22.
27. Hu, F. Structure, Modeling and Application of Complex Hypernetworks. Ph.D. Thesis, Shaanxi Normal University, Xi’an, China, 2014.
28. Tian, L.; Zhang, Z.; Zhang, J.; Zhou, W.; Zhou, X. A review of knowledge graphs representation, construction, inference and knowledge hypergraph theory. J. Comput. Appl. 2021, 41, 2161–2186.
29. Gao, Y.; Zhang, Z.; Lin, H.; Zhao, X.; Du, S.; Zou, C. Hypergraph learning: Methods and practices. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 2548–2566.
30. Hu, B.; Wang, X.; Wang, X.; Song, M.; Chen, C. Survey on hypergraph learning: Algorithm classification and application analysis. J. Softw. 2022, 33, 498–523.
31. Liang, Y.; Hong, C.; Zhuang, W. Face spoof attack detection with hypergraph capsule convolutional neural networks. Int. J. Comput. Intell. Syst. 2021, 14, 1396–1402.
32. Du, B.; Yuan, C.; Barton, R.; Neiman, T.; Tong, H. Hypergraph pretraining with graph neural networks. arXiv 2021, arXiv:2105.10862.
33. Wu, X.; Chen, Q.; Li, W.; Xiao, Y.; Hu, B. AdaHGNN: Adaptive hypergraph neural networks for multi-label image classification. In Proceedings of the 28th ACM International Conference on Multimedia, Seattle, WA, USA, 12–16 October 2020.
34. Ji, W.; Pang, Y.; Jia, X.; Wang, Z.; Hou, F.; Song, B.; Wang, R. Fuzzy rough sets and fuzzy rough neural networks for feature selection: A review. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2021, 11, e1402.
35. Belkin, M.; Niyogi, P. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Comput. 2003, 15, 1373–1396.
36. Chen, M.; Tsang, I.W.; Tan, M.; Cham, T.J. A unified feature selection framework for graph embedding on high dimensional data. IEEE Trans. Knowl. Data Eng. 2014, 27, 1465–1477.
37. He, X.; Cai, D.; Niyogi, P. Laplacian score for feature selection. In Proceedings of the 18th International Conference on Neural Information Processing Systems (NIPS’05), Vancouver, BC, Canada, 5–8 December 2005; MIT Press: Cambridge, MA, USA, 2005.
38. Zhou, D.; Huang, J.; Schölkopf, B. Learning with hypergraphs: Clustering, classification, and embedding. In Advances in Neural Information Processing Systems 19, Vancouver, BC, Canada, December 2006; MIT Press: Cambridge, MA, USA, 2006.
39. Lu, F. Research and Application of Classification Algorithms Based on Multimodal Hypergraphs. Master’s Thesis, Dalian University of Technology, Dalian, China, 2018.
40. Golub, G.H.; Reinsch, C. Singular value decomposition and least squares solutions. In Linear Algebra; Springer: Berlin/Heidelberg, Germany, 1971; pp. 134–151.
41. Yang, C.; Liu, Z.; Zhao, D.; Sun, M.; Chang, E. Network representation learning with rich text information. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, Buenos Aires, Argentina, 25–31 July 2015.
42. Singh, A.P.; Gordon, G.J. Relational learning via collective matrix factorization. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Las Vegas, NV, USA, 24–27 August 2008.
43. Li, X.L.; Jia, M.X. A preprocessing-based algorithm for nonnegative matrix decomposition of hypergraphs. Comput. Sci. 2020, 47, 71–77.
44. Pearson, K. The problem of the random walk. Nature 1905, 72, 294.
45. Guthrie, D.; Allison, B.; Liu, W.; Guthrie, L.; Wilks, Y. A closer look at skip-gram modelling. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06), Genoa, Italy, 22–28 May 2006; ELRA: Genoa, Italy, 2006.
46. Zhang, Y.; Jin, R.; Zhou, Z.-H. Understanding bag-of-words model: A statistical framework. Int. J. Mach. Learn. Cybern. 2010, 1, 43–52.
47. Perozzi, B.; Al-Rfou, R.; Skiena, S. DeepWalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York, NY, USA, 24–27 August 2014.
48. Grover, A.; Leskovec, J. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016.
49. Tang, J.; Qu, M.; Wang, M.; Zhang, M.; Yan, J.; Mei, Q. LINE: Large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web, Florence, Italy, 18–22 May 2015.
50. Huang, J.; Chen, C.; Ye, F.; Wu, J.; Zheng, Z.; Ling, G. Hyper2vec: Biased random walk for hyper-network embedding. In Database Systems for Advanced Applications, Proceedings of the DASFAA 2019 International Workshops: BDMS, BDQM, and GDMA, Chiang Mai, Thailand, 22–25 April 2019; Springer: Berlin/Heidelberg, Germany, 2019.
51. Huang, J.; Liu, X.; Song, Y. Hyper-path-based representation learning for hyper-networks. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, Beijing, China, 3–7 November 2019.
52. Yang, D.; Qu, B.; Yang, J.; Cudré-Mauroux, P. LBSN2Vec++: Heterogeneous hypergraph embedding for location-based social networks. IEEE Trans. Knowl. Data Eng. 2022, 34, 1843–1855.
53. Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. arXiv 2016, arXiv:1609.02907.
54. Wang, X.; Ji, H.; Shi, C.; Wang, B.; Ye, Y.; Cui, P.; Yu, P.S. Heterogeneous graph attention network. In Proceedings of the World Wide Web Conference, San Francisco, CA, USA, 13–17 May 2019.
55. Feng, Y.; You, H.; Zhang, Z.; Ji, R.; Gao, Y. Hypergraph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019.
56. Zhang, R.; Zou, Y.; Ma, J. Hyper-SAGNN: A self-attention based graph neural network for hypergraphs. arXiv 2019, arXiv:1911.02613.
57. Sun, X.; Yin, H.; Liu, B.; Chen, H.; Cao, J.; Shao, Y.; Viet Hung, N.Q. Heterogeneous hypergraph embedding for graph classification. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining, Virtual, Israel, 8–12 March 2021.
58. Yadati, N.; Nimishakavi, M.; Yadav, P.; Nitin, V.; Louis, A.; Talukdar, P. HyperGCN: Hypergraph convolutional networks for semi-supervised learning and combinatorial optimisation. arXiv 2019, arXiv:1809.02589v3.
59. Huang, Y.; Liu, Q.; Zhang, S.; Metaxas, D.N. Image retrieval via probabilistic hypergraph ranking. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; IEEE: Piscataway, NJ, USA, 2010.
60. Huang, Y.; Liu, Q.; Metaxas, D. Video object segmentation by hypergraph cut. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; IEEE: Piscataway, NJ, USA, 2009.
61. Wang, M.; Liu, X.; Wu, X. Visual classification by hypergraph modeling. IEEE Trans. Knowl. Data Eng. 2015, 27, 2564–2574.
62. Gao, Y.; Wang, M.; Tao, D.; Ji, R.; Dai, Q. 3-D object retrieval and recognition with hypergraph analysis. IEEE Trans. Image Process. 2012, 21, 4290–4303.
63. Jin, T.; Yu, Z.; Gao, Y.; Gao, S.; Sun, X.; Li, C. Robust ℓ2-hypergraph and its applications. Inf. Sci. 2019, 501, 708–723.
64. Huang, S.; Elhoseiny, M.; Elgammal, A.; Yang, D. Learning hypergraph-regularized attribute predictors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015.
65. Joslyn, C.; Aksoy, S.; Arendt, D.; Jenkins, L.; Praggastis, B.; Purvine, E.; Zalewski, M. High performance hypergraph analytics of domain name system relationships. In Proceedings of the HICSS 2019 Symposium on Cybersecurity Big Data Analytics, Maui, HI, USA, 8–11 January 2019.
66. Zu, C.; Gao, Y.; Munsell, B.; Kim, M.; Peng, Z.; Zhu, Y.; Wu, G. Identifying high order brain connectome biomarkers via learning on hypergraph. In Machine Learning in Medical Imaging, Proceedings of the 7th International Workshop, MLMI 2016, Athens, Greece, 17 October 2016; Springer: Berlin/Heidelberg, Germany, 2016.
67. Amato, F.; Cozzolino, G.; Sperlì, G. A hypergraph data model for expert-finding in multimedia social networks. Information 2019, 10, 183.
68. Zhang, Z.; Lin, H.; Gao, Y. Dynamic hypergraph structure learning. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18), Stockholm, Sweden, 13–19 July 2018.
69. Jiang, J.; Wei, Y.; Feng, Y.; Cao, J.; Gao, Y. Dynamic hypergraph neural networks. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19), Macao, China, 10–16 August 2019.
70. Ge, S.L. Heterogeneous Dynamic Network Community Discovery Based on Hypergraph Neural Networks; Xi’an University of Electronic Science and Technology: Xi’an, China, 2021.
71. Zhang, Z.; Ren, P.; Hancock, E.R. Unsupervised feature selection via hypergraph embedding. In Proceedings of the British Machine Vision Conference (BMVC), Guildford, UK, 3–7 September 2012; University of Surrey: Guildford, UK, 2012.
72. Yu, J.; Tao, D.; Wang, M. Adaptive hypergraph learning and its application in image classification. IEEE Trans. Image Process. 2012, 21, 3262–3272.
73. Chasapi, A.; Kotropoulos, C.; Pliakos, K. Adaptive algorithms for hypergraph learning. In Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, 20–25 March 2016; IEEE: Piscataway, NJ, USA, 2016.
74. Chen, Z.; Li, Q.; Zhong, F.; Zhao, L. Adaptive multimodal hypergraph learning for image classification. In Proceedings of the 2018 IEEE 20th International Conference on High Performance Computing and Communications; IEEE 16th International Conference on Smart City; IEEE 4th International Conference on Data Science and Systems (HPCC/SmartCity/DSS), Exeter, UK, 28–30 June 2018; IEEE: Piscataway, NJ, USA, 2018.
75. Chen, Z.; Lu, F.; Yuan, X.; Zhong, F. TCMHG: Topic-based cross-modal hypergraph learning for online service recommendations. IEEE Access 2017, 6, 24856–24865.
76. Li, Q.-Z. Research and Application of Hypergraph-Based Multimodal Fusion Algorithm. Master’s Thesis, Dalian University of Technology, Dalian, China, 2018.
77. He, L.; Chen, H.; Wang, D.; Jameel, S.; Yu, P.; Xu, G. Click-through rate prediction with multi-modal hypergraphs. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, Virtual, QLD, Australia, 1–5 November 2021.
78. Xu, J.; Singh, V.; Guan, Z.; Manjunath, B.S. Unified hypergraph for image ranking in a multimodal context. In Proceedings of the 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Kyoto, Japan, 25–30 March 2012; IEEE: Piscataway, NJ, USA, 2012.
79. Wang, L.; Zhao, Z.; Su, F. Efficient multi-modal hypergraph learning for social image classification with complex label correlations. Neurocomputing 2016, 171, 242–251.
80. Zhang, Y.; Zou, X.H. An efficient algorithm for mining maximal cliques in uncertain graphs. J. Yanshan Univ. 2021, 45, 529–536.
81. Peng, J.; Zhang, B.; Sugeng, K.A. Uncertain hypergraphs: A conceptual framework and some topological characteristics indexes. Symmetry 2022, 14, 330.
82. Kolmogorov, A. Grundbegriffe der Wahrscheinlichkeitsrechnung; Springer: Berlin/Heidelberg, Germany, 1933.
83. Erdős, P. Graph theory and probability. Can. J. Math. 1959, 11, 34–38.
84. Gilbert, E.N. Random graphs. Ann. Math. Stat. 1959, 30, 1141–1144.
85. Cooper, J. Adjacency spectra of random and complete hypergraphs. Linear Algebra Appl. 2020, 596, 184–202.
86. Semenov, A.; Shabanov, D. On the weak chromatic number of random hypergraphs. Discret. Appl. Math. 2020, 276, 134–154.
87. Liu, Y.P.; Zhao, Y. On the upper tail problem for random hypergraphs. Random Struct. Algorithms 2021, 58, 179–220.
88. Gan, Y.N. A Study of Uncertain Data Descent Based on Super Bayesian Graphs; Qinghai Normal University: Xining, China, 2021.
89. Rosenfeld, A. Fuzzy graphs. In Fuzzy Sets and Their Applications to Cognitive and Decision Processes; Elsevier: Amsterdam, The Netherlands, 1975; pp. 77–95.
90. Wang, Q.; Gong, Z. An application of fuzzy hypergraphs and hypergraphs in granular computing. Inf. Sci. 2018, 429, 296–314.
91. Luqman, A.; Akram, M.; Al-Kenani, A.N.; Alcantud, J.C.R. A study on hypergraph representations of complex fuzzy information. Symmetry 2019, 11, 1381.
92. Wang, J.-H. Hesitant Fuzzy Graphs, Hesitant Fuzzy Hypergraphs and Their Graph Decision Analysis; Northwest Normal University: Lanzhou, China, 2021.
93. Liu, Y.; Zhang, Y.J. Application of fuzzy hypergraph clustering model in land evaluation. J. Wuhan Univ. 2007, 11, 126–128, 151.
94. Pawlak, Z. Rough sets. Int. J. Comput. Inf. Sci. 1982, 11, 341–356.
95. Shi, J. Research on the Classification Method of Incomplete Information System Based on Neighborhood Hypergraph; Chongqing University of Posts and Telecommunications: Chongqing, China, 2016.
96. Liu, X. Research on Imbalanced Data Classification Methods Based on Neighborhood Rough Sets and Supernetworks; Chongqing University of Posts and Telecommunications: Chongqing, China, 2015.
97. Raman, M.R.; Kannan, K.; Pal, S.K.; Sriram, V.S. Rough set-hypergraph-based feature selection approach for intrusion detection systems. Def. Sci. J. 2016, 66, 612–617.
98. Sarwar, M. A theoretical investigation based on the rough approximations of hypergraphs. J. Math. 2022, 2022, 1540004.
99. Cattaneo, G.; Chiaselotti, G.; Ciucci, D.; Gentile, T. On the connection of hypergraph theory with formal concept analysis and rough set theory. Inf. Sci. 2016, 330, 342–357.
100. Maryati, T.K.; Davvaz, B. A novel connection between rough sets, hypergraphs and hypergroups. Discret. Math. Algorithms Appl. 2017, 9, 1750044.
Figure 1. Statistics of research papers related to hypergraphs from 1970 to 2021.
Figure 2. The structure comparison between an ordinary graph and a hypergraph of a cooperative network.
Figure 3. BFS and DFS strategies starting from node U.
Figure 4. Position-based heterogeneous hypergraphs of social networks [52].
Figure 5. Random walk with stay on the LBSN hypergraph [52].
Figure 6. Structure of the dynamic hypergraph neural network model [69].
Figure 7. The process of generating the correlation matrix with fused temporal features.
Figure 8. Schematic diagram of the cross-modal hypergraph emotion classifier [75].
Figure 9. Flow chart of the multimodal hypergraph social image classification algorithm [79].
Table 1. Related work.

Hypergraph theory and its applications [25], 1994: Introduced the theory of undirected and directed hypergraphs; discussed the connectivity of hypergraphs, hypertrees and k-hypertrees, the minimum-cut partitioning of hypergraphs, and the application of directed hypergraph theory to the topological analysis of hypernetworks.

An evolving hypernetwork model and its properties [26], 2013: Analyzed the hyperedge-growth and preferential-attachment mechanisms of hypernetworks, constructed a dynamic evolution model of hypernetworks, and studied its topological properties.

Modeling and application of complex hypernetworks [27], 2014: Studied the structure, modeling, and applications of complex hypernetworks.

A review of knowledge graphs representation, construction, inference and knowledge hypergraph theory [28], 2021: Compared heterogeneous hypergraph representation learning from five aspects (unsupervised clustering, meta-paths, random walks, matrix decomposition, and neural networks); established a three-layer structure of knowledge hypergraphs and briefly introduced knowledge hypergraph representation methods from four aspects (soft rules, translation methods, tensor decomposition, and neural networks).

Hypergraph learning: Methods and practices [29], 2020: Reviewed the literature on hypergraph generation, covering distance-based, representation-based, attribute-based, and network-based approaches; introduced existing hypergraph learning methods, including transductive hypergraph learning, inductive hypergraph learning, hypergraph structure updating, and multimodal hypergraph learning.

Survey on hypergraph learning: Algorithm classification and application analysis [30], 2022: Divided hypergraph learning algorithms into spectral-analysis methods, neural-network methods, expansion methods, and non-expansion methods.

Ours: Summarizes hypergraph representation learning methods from three aspects (matrix decomposition, random walks, and deep learning) and compares them with ordinary-graph representation learning; compares and analyzes three kinds of hypergraph learning optimization (dynamic hypergraphs, multimodal hypergraphs, and hyperedge weight optimization); and reviews the research status of uncertain hypergraphs.
Table 2. Hypergraph symbol definitions and calculation methods.

$H$: incidence (correlation) matrix. $h(v_i, e_j) = 1$ if $v_i \in e_j$; otherwise $h(v_i, e_j) = 0$. Alternatively, $h(v_i, e_j)$ may take values in $(0, 1)$, where the value indicates the likelihood or the importance of assigning vertex $v_i$ to hyperedge $e_j$.

$W$: diagonal matrix of hyperedge weights; the elements on the diagonal are the hyperedge weights $w(e_i)$.

$d(v_i)$: degree of a vertex, the weighted number of hyperedges incident to it: $d(v_i) = \sum_{e \in E} w(e)\,h(v_i, e)$.

$\delta(e_i)$: degree of a hyperedge, the number of vertices it contains: $\delta(e_i) = \sum_{v \in V} h(v, e_i)$.

$D_v$: diagonal matrix of vertex degrees.

$D_e$: diagonal matrix of hyperedge degrees.

$A$: adjacency matrix, $A = H W D_e^{-1} H^{T}$.

$L$: Laplacian matrix of the hypergraph, $L = D_v - A$.

$\Delta$: normalized hypergraph Laplacian, $\Delta = I - D_v^{-1/2} H W D_e^{-1} H^{T} D_v^{-1/2}$.
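
As a concrete check of these definitions, the following sketch (illustrative NumPy code on a toy hypergraph of our own choosing, not an example from the surveyed literature) builds $D_v$, $D_e$, $A$, $L$, and $\Delta$ directly from the incidence matrix $H$ and the weight matrix $W$.

```python
import numpy as np

# Toy hypergraph: 4 vertices, 3 hyperedges (columns of H).
# H[i, j] = 1 if vertex i belongs to hyperedge j.
H = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1],
              [0, 1, 0]], dtype=float)
w = np.array([1.0, 2.0, 0.5])          # hyperedge weights
W = np.diag(w)

Dv = np.diag(H @ w)                    # vertex degrees d(v) = sum_e w(e) h(v, e)
De = np.diag(H.sum(axis=0))            # hyperedge degrees delta(e) = sum_v h(v, e)

A = H @ W @ np.linalg.inv(De) @ H.T    # adjacency matrix of Table 2
L = Dv - A                             # hypergraph Laplacian

Dv_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(Dv)))
Delta = np.eye(H.shape[0]) - Dv_inv_sqrt @ A @ Dv_inv_sqrt  # normalized Laplacian

print(np.round(Delta, 3))
```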
Table 3. Comparison of different graph embedding methods.

Matrix decomposition. Representative models: Laplacian matrix decomposition [38]; node adjacency matrix representation [41]. Advantages: can represent and visualize graph structure information. Disadvantages: high time and space complexity.

Random walk. Representative models: Node2vec [48], Hyper2vec [50], Hyper-gram [51], LBSN hypergraph embedding [52], etc. Advantages: fully captures node co-occurrence information, and the walk strategies are easy to implement. Disadvantages: only the structural characteristics of the nodes are taken into account.

Deep learning. Representative models: GCN [53], HGNN [55], HWNN [57], HyperGCN [58], etc. Advantages: efficiently and fully exploits the high-order information of the graph structure. Disadvantages: poor model interpretability and high computational complexity.
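
To make the random-walk row concrete: walk-based hypergraph methods typically alternate vertex → hyperedge → vertex transitions and feed the resulting walks to a skip-gram model, as DeepWalk [47] does with ordinary walks. The sketch below (our own illustrative code; the data layout and function name are assumptions) uses plain uniform sampling rather than the biased strategies of Node2vec [48] or Hyper2vec [50].

```python
import random

def hypergraph_walk(incident_edges, edge_members, start, length, rng=random):
    """Uniform random walk: vertex -> random incident hyperedge -> random member."""
    walk = [start]
    v = start
    for _ in range(length - 1):
        e = rng.choice(incident_edges[v])   # pick a hyperedge containing v
        v = rng.choice(edge_members[e])     # pick a vertex inside that hyperedge
        walk.append(v)
    return walk

# Toy hypergraph: hyperedge 0 = {a, b}, hyperedge 1 = {b, c, d}
edge_members = {0: ["a", "b"], 1: ["b", "c", "d"]}
incident_edges = {"a": [0], "b": [0, 1], "c": [1], "d": [1]}

random.seed(0)
corpus = [hypergraph_walk(incident_edges, edge_members, v, 5) for v in "abcd"]
print(corpus)   # these walks would be fed to a skip-gram model as sentences
```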
Table 4. Hyperedge weight optimization methods.

Spectral hypergraph methods:
- Simple setting: all hyperedge weights are set to $w(e) = 1$. Ignores the different roles that different types of hyperedges play in representing hypergraph structural information.
- Distance metric: $w(e) = \sum_{x_a, x_b \in e} \exp\left(-\frac{d(x_a, x_b)^2}{\sigma^2}\right)$. Distances between nodes can be distorted by noise and outliers, and the number of nearest-neighbor nodes selected also affects model performance.
- Multidimensional interaction information metric: $w_{i_1, \dots, i_k} = \frac{K \cdot I(x_{i_1}, x_{i_2}, \dots, x_{i_k})}{H(x_{i_1}) + H(x_{i_2}) + \dots + H(x_{i_k})}$. Accurately represents the higher-order information in hypergraph structures.

Adaptive hyperedge optimization:
- Adaptive computation of hyperedge weights by coordinate descent or by gradient descent. Weights adapt to the data, but the iterative process increases computational cost.
- Adding a sparsity constraint that sets useless hyperedge weights to 0. Improves the efficiency of model computation.

Other weighting schemes:
- Simplex volume: $w(e) = e^{-\mathrm{Vol}(e)/\mu}$.
- Walk trajectories: $w(e) = e^{-\mathrm{tr}(e_s)/\mu}$.
- Linear reconstruction error: $w(e) = e^{-R_e/\mu}$.
These schemes give highly interpretable models but have high computational complexity.
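
As an example of the distance-metric row, the sketch below (illustrative code of our own; the Gaussian form follows the formula above, while the function name, toy data, and hand-picked bandwidth σ are assumptions) computes the weight of a single hyperedge from pairwise feature distances. It also makes the listed disadvantage visible: one outlying feature vector drags all of its pairwise terms toward zero.

```python
import numpy as np
from itertools import combinations

def hyperedge_weight(features, edge, sigma=1.0):
    """w(e) = sum over vertex pairs {x_a, x_b} in e of exp(-d(x_a, x_b)^2 / sigma^2)."""
    total = 0.0
    for a, b in combinations(edge, 2):
        d2 = np.sum((features[a] - features[b]) ** 2)  # squared Euclidean distance
        total += np.exp(-d2 / sigma**2)
    return total

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])  # toy feature vectors
print(hyperedge_weight(X, [0, 1, 2], sigma=1.0))
```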
Table 5. Comparative analysis of uncertain hypergraph models.

Random hypergraph. Applicable conditions: the problem satisfies certain preconditions and sufficient data are available. Core idea: measures causal uncertainty with probability. Limitations: the independent and identically distributed assumption limits the extraction of higher-order information, and obtaining data is expensive.

Fuzzy hypergraph. Applicable conditions: expert systems that aid decision making. Core idea: measures the category membership of objects with membership degrees. Limitations: relies on expert experience and is overly subjective.

Rough hypergraph. Applicable conditions: no prior knowledge required. Core idea: approximately describes uncertain knowledge using equivalence relations over a known knowledge base. Limitations: lacks a raw-data processing mechanism and must be used in combination with other algorithms.