1. Introduction
The goal of network representation learning, also known as network embedding, is to map each node to a low-dimensional representation vector space. The node representation vectors can be applied to popular network analysis tasks, such as node classification [1], link prediction [2], and community detection [3].
According to the type of network, network representation learning is divided into conventional network representation learning and hypernetwork representation learning. As for conventional network representation learning, most related studies take only the network topology structure as input to learn node representation vectors, such as DeepWalk [4], node2vec [5], LINE [6], GraRep [7], and HOPE [8]. Nevertheless, node representation vectors learnt only from the network topology structure are not sufficiently informative. Hence, some researchers have proposed methods that incorporate other types of supplementary information, such as text, labels, and communities, into the process of network representation learning, such as CANE [9] and CNRL [10].
However, the above network representation learning methods are designed for conventional networks with pairwise relationships.
As for the hypernetwork, hypernetwork representation learning [11] has gradually been widely studied. According to their characteristics, hypernetwork representation learning methods are divided into expanded spectral analysis methods and non-expanded methods. The expanded spectral analysis methods, such as the star and clique extensions [12], transform the hypernetwork into a conventional network to learn node representation vectors, but lose hyperedge information during the expansion. The non-expanded methods, which do not decompose hyperedges, are mainly divided into non-expanded spectral analysis methods and neural-network-based methods, such as Hyper2vec [13], HPHG [14], and DHNE [15].
Although the expanded spectral analysis methods are intuitive, they lose hyperedge information. The non-expanded methods do not decompose the hyperedges. For example, Hyper2vec captures the pairwise relationships among the nodes on hyperedge-based walk sequences but does not capture the tuple relationships among the nodes well. HPHG effectively captures tuple relationships among the nodes with a one-dimensional convolutional layer, and DHNE captures them with a multi-layer perceptron, but both HPHG and DHNE are limited to heterogeneous hyperedges of a fixed size. In short, the above methods cannot effectively capture complex tuple relationships of an unfixed size. Therefore, in order to resolve this challenge, this paper proposes a hypernetwork representation learning method with common constraints of the set and translation to effectively capture tuple relationships among the nodes.
The following two points are the main characteristics of this paper:
The hypernetwork was transformed into a conventional network abstracted as a two-section graph. Based on this conventional network, a hypernetwork representation learning method with common constraints of the set and translation was proposed to learn node representation vectors rich in both the hypernetwork topology structure and hyperedges.
The strength of our proposed method was to incorporate a hyperedge (tuple relationship) that is not limited to a fixed size into the process of hypernetwork representation learning. The weakness of our proposed study was that some hypernetwork structure information was still missing because the hypernetwork was transformed into a conventional network.
2. Related Studies
Different from the conventional network with only pairwise relationships among the nodes, the hypernetwork also contains complex tuple relationships, namely the hyperedges among the nodes. However, most existing network representation learning methods cannot effectively capture these complex tuple relationships. Therefore, researchers have proposed hypernetwork representation learning methods, which are divided into expanded spectral analysis methods and non-expanded methods. As for the expanded spectral analysis methods, by transforming the hypernetwork into a conventional network, the problem of hypernetwork representation learning is simplified into that of conventional network representation learning and then solved according to the spectral characteristics of the Laplacian matrix; the star and clique extensions are two classical hypernetwork expansion methods of this kind. As for the non-expanded methods, they are mainly divided into non-expanded spectral analysis methods and neural-network-based methods. The non-expanded spectral analysis methods model the hypernetwork directly, that is, the Laplacian matrix is built on the hypernetwork itself, and this modeling process preserves the integrity of the hypernetwork information. For example, Zhou [16] extended the powerful method of spectral clustering [17], originally run on undirected graphs, to the hypergraph [18] and further developed a hypergraph learning algorithm on the basis of the spectral hypergraph clustering method. Hyper2vec was proposed based on a biased random walk strategy on the hypergraph to preserve the structure and inherent properties of the hypernetwork. Neural-network-based methods have a strong learning ability, flexible structure design, and high generalization, which make up for the defects of the spectral analysis methods. For example, for DHNE, it was theoretically proved that the linear similarity measures in the embedding space used by existing methods could not preserve the indecomposability of the hypernetwork, so a new deep model was proposed to realize the local and global proximity of a nonlinear tuple similarity function in the embedding space. HPHG designs a hypergraph-based random walk to retain the hypernetwork topology structure information when learning node representation vectors. Hyper-SAGNN [19] uses a self-attention mechanism [20] to aggregate hypergraph information, constructs pairwise attention coefficients between the nodes as their dynamic features, and combines them with the original static features of the nodes to describe the nodes.
3. Problem Definition
Given the hypernetwork $H = (V, E)$, abstracted as a hypergraph composed of the node set $V$ and the hyperedge set $E$, the goal of hypernetwork representation learning with common constraints of the set and translation was to learn a low-dimensional representation vector $\mathbf{v} \in \mathbb{R}^{d}$ for each node $v$ in the hypernetwork, where $d$ was expected to be much smaller than $|V|$.
4. Preliminaries
4.1. Transforming Hypergraph into Two-Section Graph
A feasible way to study the hypergraph was to transform it into a conventional graph, because research on conventional graphs is relatively mature. In the literature [18], hypergraphs are transformed into three kinds of conventional graphs, namely line graphs, incidence graphs, and two-section graphs. In fact, the two-section graph loses less hypernetwork structure information than the line and incidence graphs. Hence, the hypergraph was transformed into a two-section graph in this study. A hypergraph and its corresponding two-section graph are shown in Figure 1.
The two-section graph $G = (V', E')$ transformed from the hypergraph $H = (V, E)$ was a conventional graph satisfying the following two conditions (a minimal code sketch of the transformation follows the conditions):
$V' = V$, that is, the node set of the two-section graph $G$ was equal to the node set of the hypergraph $H$.
One edge was associated with any two different nodes if and only if the two nodes were simultaneously associated with at least one hyperedge.
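As a concrete illustration, the following Python sketch performs this transformation: every hyperedge is expanded into a clique over its nodes, while the node set stays unchanged. The function and variable names are illustrative, not taken from the paper.

```python
# Minimal sketch: hypergraph -> two-section graph (clique expansion per hyperedge).
from itertools import combinations

def to_two_section_graph(hyperedges):
    """Connect every pair of distinct nodes that co-occur in at least one hyperedge."""
    edges = set()
    for hyperedge in hyperedges:
        for u, v in combinations(sorted(hyperedge), 2):
            edges.add((u, v))  # only pairwise edges are added; nodes stay the same
    return edges

# The hyperedge {1, 2, 3} yields the clique edges (1, 2), (1, 3), and (2, 3).
print(to_two_section_graph([{1, 2, 3}, {3, 4}]))
```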
4.2. TransE
Knowledge representation is the vectorization of the entities and relations in the knowledge graph, which specifically maps each entity or relation to a low-dimensional vector space. For simplicity, $(h, r, t)$ denotes the triplet (head, relation, tail), where $h$, $r$, and $t$ denote the head entity, relation, and tail entity, respectively, and $\mathbf{h}$, $\mathbf{r}$, and $\mathbf{t}$ denote the vectors corresponding to the head entity, relation, and tail entity, respectively. In the relation extraction of the knowledge graph, as a knowledge representation learning algorithm based on translation, TransE [21] assumed that the head entity vector plus the relation vector is approximately equal to the tail entity vector, that is, $\mathbf{h} + \mathbf{r} \approx \mathbf{t}$ when the triplet $(h, r, t)$ holds ($\mathbf{h} + \mathbf{r}$ should be the nearest neighbor of $\mathbf{t}$), while $\mathbf{h} + \mathbf{r}$ should otherwise be far away from $\mathbf{t}$. TransE is shown in Figure 2.
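To make the translation principle concrete, the following sketch scores a triplet by the distance between the translated head and the tail, so that triplets that hold score close to zero; the vector values are illustrative only.

```python
# Minimal sketch of the TransE principle h + r ≈ t.
import numpy as np

def transe_score(h, r, t):
    """L2 distance ||h + r - t||: small when the triplet (h, r, t) holds."""
    return np.linalg.norm(h + r - t)

h = np.array([0.2, 0.1])
r = np.array([0.3, 0.4])
t = np.array([0.5, 0.5])
print(transe_score(h, r, t))  # near 0, so h + r is a near neighbor of t
```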
5. Our Method
Hypernetwork representation learning with common constraints of the set and translation (HRST) is introduced in detail in this section. Firstly, the topology-derived model is introduced in Section 5.1. Secondly, the set constraint model is introduced in Section 5.2. Thirdly, the translation constraint model is introduced in Section 5.3. Fourthly, the joint optimization of the above three models is introduced in Section 5.4. Finally, the complexity analysis of HRST is given in Section 5.5.
5.1. Topology-Derived Model
Because the computational efficiency of CBOW [22] is greater than that of skip-gram [22], a topology-derived model [11] based on negative sampling was introduced to capture the network structure. To be specific, in the optimization procedure of this model, the center node $v$ was the positive sample, the other nodes were the negative samples, and $NEG(v)$ was the subset of negative samples with a predefined size $n$. For $\forall u \in \{v\} \cup NEG(v)$, the node labels are denoted as follows:

$L^{v}(u) = \begin{cases} 1, & u = v \\ 0, & u \neq v \end{cases}$ (1)

The prediction probability of the node $u$ is denoted as $p(u \mid \mathrm{Context}(v))$ under the condition of the contextual nodes $\mathrm{Context}(v)$ corresponding to $v$, and the node sequence set is denoted as $C$. In view of the above conditions, we maximized the following objective function:

$E_{1} = \prod_{v \in C} \prod_{u \in \{v\} \cup NEG(v)} p(u \mid \mathrm{Context}(v))$ (2)

When the node $v$ was regarded as a contextual node, the embedding vector $\mathbf{v}$ was the representation of the node $v$, while the parameter vector $\boldsymbol{\theta}^{u}$ was the representation of the node $u$ when the node $u$ was regarded as the center node. $p(u \mid \mathrm{Context}(v))$ in Formula (2) is denoted as follows:

$p(u \mid \mathrm{Context}(v)) = \begin{cases} \sigma(\mathbf{x}_{v}^{\top} \boldsymbol{\theta}^{u}), & L^{v}(u) = 1 \\ 1 - \sigma(\mathbf{x}_{v}^{\top} \boldsymbol{\theta}^{u}), & L^{v}(u) = 0 \end{cases}$ (3)

where $\sigma(\cdot)$ is the sigmoid function and $\mathbf{x}_{v}$ is the sum of the representation vectors corresponding to all the nodes of $\mathrm{Context}(v)$. Formula (3) can also be written as a unified expression:

$p(u \mid \mathrm{Context}(v)) = \left[\sigma(\mathbf{x}_{v}^{\top} \boldsymbol{\theta}^{u})\right]^{L^{v}(u)} \cdot \left[1 - \sigma(\mathbf{x}_{v}^{\top} \boldsymbol{\theta}^{u})\right]^{1 - L^{v}(u)}$ (4)

Consequently, Formula (2) can be rewritten as follows:

$E_{1} = \prod_{v \in C} \prod_{u \in \{v\} \cup NEG(v)} \left[\sigma(\mathbf{x}_{v}^{\top} \boldsymbol{\theta}^{u})\right]^{L^{v}(u)} \cdot \left[1 - \sigma(\mathbf{x}_{v}^{\top} \boldsymbol{\theta}^{u})\right]^{1 - L^{v}(u)}$ (5)

Formally, by means of maximizing $E_{1}$, the network topology was encoded into the node representation vectors.
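The following sketch computes the logarithm of one per-node factor of $E_{1}$ as reconstructed in Formulas (3)-(5), assuming theta maps each node to its parameter vector; all names are illustrative.

```python
# Minimal sketch of the negative-sampling factor of the topology-derived model.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def topology_log_likelihood(context_vecs, theta, center, negatives):
    """Log of one factor of E_1: label 1 for the center node, 0 for negatives."""
    x_v = np.sum(context_vecs, axis=0)         # sum of the context representations
    ll = np.log(sigmoid(x_v @ theta[center]))  # positive-sample term
    for u in negatives:                        # negative-sample terms
        ll += np.log(1.0 - sigmoid(x_v @ theta[u]))
    return ll
```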
5.2. Set Constraint Model
Because the above topology-derived model only considered the network structure, a set constraint model [11] based on negative sampling was introduced to consider both the network structure and the hyperedges. To be specific, in the optimization procedure of this model, $E(v)$ was the set of the hyperedges associated with the center node $v$, and also the set of the nodes associated with the center node $v$ if each hyperedge was regarded as a node. The center node $v$ was the positive sample, and the nodes not associated with the center node $v$ were the negative samples. As for $\forall u \in \{v\} \cup NEG(v)$, $NEG(v)$ was the subset of negative samples with a predefined size $n$, and the node labels are denoted as follows:

$L^{v}(u) = \begin{cases} 1, & u = v \\ 0, & u \neq v \end{cases}$ (6)

In view of the node sequences $C$ and the set of the hyperedges, we tried to maximize the following objective function to meet the set constraint:

$E_{2} = \prod_{v \in C} \prod_{u \in \{v\} \cup NEG(v)} \left[\sigma(\mathbf{x}_{E(v)}^{\top} \boldsymbol{\lambda}^{u})\right]^{L^{v}(u)} \cdot \left[1 - \sigma(\mathbf{x}_{E(v)}^{\top} \boldsymbol{\lambda}^{u})\right]^{1 - L^{v}(u)}$ (7)

where $\boldsymbol{\lambda}^{u}$ is the parameter vector corresponding to the node $u$ and $\mathbf{x}_{E(v)}$ is the sum of the representation vectors of the hyperedges in $E(v)$.
By means of maximizing $E_{2}$, the hyperedges were encoded into the node representation vectors.
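Under the reconstruction above, the set constraint term has the same negative-sampling form as the topology-derived model, with the summed hyperedge vectors playing the role of the context; the sketch below assumes this form, and all names are illustrative.

```python
# Minimal sketch of the set constraint factor of E_2.
import numpy as np

def set_log_likelihood(hyperedge_vecs, lam, center, negatives):
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    x_e = np.sum(hyperedge_vecs, axis=0)     # sum over the hyperedges of E(v)
    ll = np.log(sigmoid(x_e @ lam[center]))  # positive-sample term
    for u in negatives:                      # negative-sample terms
        ll += np.log(1.0 - sigmoid(x_e @ lam[u]))
    return ll
```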
5.3. Translation Constraint Model
Because the above set constraint model did not fully exploit the hyperedges, it could not learn node representation vectors very well. Hence, we tried to incorporate the hyperedges associated with the nodes, regarded as the interaction relationships among the nodes, into the process of hypernetwork representation learning.
Inspired by the successful application of the translation mechanism in TransE, the nodes and interaction relationships were mapped into a unified representation space, where the interaction relationships among the nodes could be regarded as translation operations in the representation space.
To be specific, for the center node $v$ in $C$, if there was a node $t$ and a hyperedge $e$ such that $v, t \in e$, that is, the hyperedge $e$ was simultaneously associated with the node $v$ and the node $t$, a normal triplet $(v, e, t)$ held, where $t$ is a node with the relationship $e$ with the node $v$, $T(v)$ is the set of the nodes with relationships with the node $v$, and $E(v)$ is the set of hyperedges associated with the center node $v$, namely the set of relationships.
Inspired by the above topology-derived model, a novel translation constraint model based on negative sampling was proposed. To be specific, in the optimization procedure of this model, the center node $v$ was the positive sample, the other nodes were the negative samples, and $NEG(v)$ was the subset of negative samples of the center node $v$ with a predefined size $n$. For $\forall u \in \{v\} \cup NEG(v)$, the node labels are denoted as follows:

$L^{v}(u) = \begin{cases} 1, & u = v \\ 0, & u \neq v \end{cases}$ (8)

In view of the node sequences $C$ and the translation constraint, we tried to maximize the following objective function to meet the translation constraint:

$E_{3} = \prod_{v \in C} \prod_{t \in T(v)} \prod_{u \in \{v\} \cup NEG(v)} \left[\sigma\left((\mathbf{t} - \mathbf{e})^{\top} \boldsymbol{\gamma}^{u}\right)\right]^{L^{v}(u)} \cdot \left[1 - \sigma\left((\mathbf{t} - \mathbf{e})^{\top} \boldsymbol{\gamma}^{u}\right)\right]^{1 - L^{v}(u)}$ (9)

where $\mathbf{t}$ and $\mathbf{e}$ are the vectors of the node $t$ and of the hyperedge $e$ in the triplet $(v, e, t)$, $\boldsymbol{\gamma}^{u}$ is the parameter vector corresponding to the node $u$, and $\mathbf{v} + \mathbf{e} \approx \mathbf{t}$ under the translation constraint. By means of maximizing $E_{3}$, the interaction relations were encoded into the node representation vectors.
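Likewise, under the reconstruction above, the translation constraint scores the translated representation $\mathbf{t} - \mathbf{e}$ against the center node with the same negative-sampling form; the sketch assumes this form, and all names are illustrative.

```python
# Minimal sketch of the translation constraint factor of E_3 (v + e ≈ t).
import numpy as np

def translation_log_likelihood(t_vec, e_vec, gamma, center, negatives):
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    x = t_vec - e_vec                        # translated representation t - e ≈ v
    ll = np.log(sigmoid(x @ gamma[center]))  # positive-sample term
    for u in negatives:                      # negative-sample terms
        ll += np.log(1.0 - sigmoid(x @ gamma[u]))
    return ll
```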
5.4. Joint Optimization
In this subsection, the hypernetwork representation learning method with common constraints of the set and translation (HRST) is proposed. HRST jointly optimizes the topology-derived, set constraint, and translation constraint models. Figure 3 shows the HRST framework.
In Figure 3, the network topology representation from the topology-derived model and the hyperedge and relation representations from the set constraint and translation constraint models, respectively, share the same node representation rich in the hyperedges.
In order to facilitate calculation, we took the logarithms of $E_{1}$, $E_{2}$, and $E_{3}$ and maximized the following joint optimization objective function to meet the common constraints of the set and translation:

$E = \log E_{1} + \alpha \log E_{2} + \beta \log E_{3}$ (10)

where the harmonic factors $\alpha$ and $\beta$ were used to balance the contribution rates among the topology-derived, set constraint, and translation constraint models.
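In code, the combination is simply a weighted sum of the three log likelihoods; the function below renders Formula (10), with the default factor values chosen arbitrarily for illustration.

```python
# Minimal sketch of the joint objective E = log E_1 + alpha*log E_2 + beta*log E_3.
def joint_objective(log_e1, log_e2, log_e3, alpha=0.5, beta=0.5):
    return log_e1 + alpha * log_e2 + beta * log_e3
```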
In order to facilitate derivation, the log-likelihood contribution of a single sample is denoted as follows:

$\mathcal{L}(v, u) = L^{v}(u) \log \sigma(\mathbf{x}_{v}^{\top} \boldsymbol{\theta}^{u}) + \left[1 - L^{v}(u)\right] \log\left[1 - \sigma(\mathbf{x}_{v}^{\top} \boldsymbol{\theta}^{u})\right]$ (11)

The objective function was optimized by the stochastic gradient ascent method. The objective was to give the six kinds of gradients of $E$.
Firstly, the gradient of $E$ on $\boldsymbol{\theta}^{u}$ was calculated as follows:

$\frac{\partial E}{\partial \boldsymbol{\theta}^{u}} = \left[L^{v}(u) - \sigma(\mathbf{x}_{v}^{\top} \boldsymbol{\theta}^{u})\right] \mathbf{x}_{v}$ (12)

Consequently, the updating formula of $\boldsymbol{\theta}^{u}$ is denoted as follows:

$\boldsymbol{\theta}^{u} := \boldsymbol{\theta}^{u} + \eta \left[L^{v}(u) - \sigma(\mathbf{x}_{v}^{\top} \boldsymbol{\theta}^{u})\right] \mathbf{x}_{v}$ (13)

where $\eta$ is the learning rate.
Secondly, the gradient of $E$ on $\mathbf{x}_{v}$ was calculated. The symmetry property between $\mathbf{x}_{v}$ and $\boldsymbol{\theta}^{u}$ in Formula (11) was utilized to get the gradient on $\mathbf{x}_{v}$:

$\frac{\partial E}{\partial \mathbf{x}_{v}} = \left[L^{v}(u) - \sigma(\mathbf{x}_{v}^{\top} \boldsymbol{\theta}^{u})\right] \boldsymbol{\theta}^{u}$ (14)

Consequently, the updating formula of the vector $\mathbf{w}$ of each contextual node is denoted as follows, where $w \in \mathrm{Context}(v)$:

$\mathbf{w} := \mathbf{w} + \eta \sum_{u \in \{v\} \cup NEG(v)} \left[L^{v}(u) - \sigma(\mathbf{x}_{v}^{\top} \boldsymbol{\theta}^{u})\right] \boldsymbol{\theta}^{u}$ (15)

Thirdly, the gradient of $E$ on $\boldsymbol{\lambda}^{u}$ was calculated as follows:

$\frac{\partial E}{\partial \boldsymbol{\lambda}^{u}} = \alpha \left[L^{v}(u) - \sigma(\mathbf{x}_{E(v)}^{\top} \boldsymbol{\lambda}^{u})\right] \mathbf{x}_{E(v)}$ (16)

Consequently, the updating formula of $\boldsymbol{\lambda}^{u}$ is denoted as follows:

$\boldsymbol{\lambda}^{u} := \boldsymbol{\lambda}^{u} + \eta \alpha \left[L^{v}(u) - \sigma(\mathbf{x}_{E(v)}^{\top} \boldsymbol{\lambda}^{u})\right] \mathbf{x}_{E(v)}$ (17)

Fourthly, the gradient of $E$ on $\mathbf{x}_{E(v)}$ was calculated. The symmetry property between $\mathbf{x}_{E(v)}$ and $\boldsymbol{\lambda}^{u}$ was utilized to get the gradient on $\mathbf{x}_{E(v)}$:

$\frac{\partial E}{\partial \mathbf{x}_{E(v)}} = \alpha \left[L^{v}(u) - \sigma(\mathbf{x}_{E(v)}^{\top} \boldsymbol{\lambda}^{u})\right] \boldsymbol{\lambda}^{u}$ (18)

Consequently, the updating formula of the vector $\mathbf{e}$ of each hyperedge is denoted as follows, where $e \in E(v)$:

$\mathbf{e} := \mathbf{e} + \eta \alpha \sum_{u \in \{v\} \cup NEG(v)} \left[L^{v}(u) - \sigma(\mathbf{x}_{E(v)}^{\top} \boldsymbol{\lambda}^{u})\right] \boldsymbol{\lambda}^{u}$ (19)

Fifthly, the gradient of $E$ on $\boldsymbol{\gamma}^{u}$ was calculated as follows:

$\frac{\partial E}{\partial \boldsymbol{\gamma}^{u}} = \beta \left[L^{v}(u) - \sigma\left((\mathbf{t} - \mathbf{e})^{\top} \boldsymbol{\gamma}^{u}\right)\right] (\mathbf{t} - \mathbf{e})$ (20)

Consequently, the updating formula of $\boldsymbol{\gamma}^{u}$ is denoted as follows:

$\boldsymbol{\gamma}^{u} := \boldsymbol{\gamma}^{u} + \eta \beta \left[L^{v}(u) - \sigma\left((\mathbf{t} - \mathbf{e})^{\top} \boldsymbol{\gamma}^{u}\right)\right] (\mathbf{t} - \mathbf{e})$ (21)

Finally, the gradient of $E$ on $\mathbf{t} - \mathbf{e}$ was calculated. The symmetry property between $\mathbf{t} - \mathbf{e}$ and $\boldsymbol{\gamma}^{u}$ was utilized to get the gradient:

$\frac{\partial E}{\partial (\mathbf{t} - \mathbf{e})} = \beta \left[L^{v}(u) - \sigma\left((\mathbf{t} - \mathbf{e})^{\top} \boldsymbol{\gamma}^{u}\right)\right] \boldsymbol{\gamma}^{u}$ (22)

where the vectors to update are $\mathbf{t}$ and $\mathbf{e}$, so the gradient in Formula (22) was applied to $\mathbf{t}$ and $\mathbf{e}$, respectively. The updating formulae of $\mathbf{t}$ and $\mathbf{e}$ are denoted as follows:

$\mathbf{t} := \mathbf{t} + \eta \beta \sum_{u \in \{v\} \cup NEG(v)} \left[L^{v}(u) - \sigma\left((\mathbf{t} - \mathbf{e})^{\top} \boldsymbol{\gamma}^{u}\right)\right] \boldsymbol{\gamma}^{u}$ (23)

$\mathbf{e} := \mathbf{e} - \eta \beta \sum_{u \in \{v\} \cup NEG(v)} \left[L^{v}(u) - \sigma\left((\mathbf{t} - \mathbf{e})^{\top} \boldsymbol{\gamma}^{u}\right)\right] \boldsymbol{\gamma}^{u}$ (24)
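The sketch below performs one stochastic-gradient-ascent step in the form of the reconstructed Formulas (12)-(15): the error term (label minus predicted probability) scales both the parameter update and the gradient accumulated for the context vectors; the learning-rate value and all names are illustrative.

```python
# Minimal sketch of one update step of the topology-derived model.
import numpy as np

def sga_step(x_v, theta_u, label, lr=0.025):
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    g = label - sigmoid(x_v @ theta_u)    # gradient coefficient L^v(u) - sigma(...)
    theta_u_new = theta_u + lr * g * x_v  # parameter-vector update (Formula (13))
    x_grad = lr * g * theta_u             # by symmetry, the contribution added to
    return theta_u_new, x_grad            # every context-node vector (Formula (15))
```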
The stochastic gradient ascent method was used for optimization. More details are shown in Algorithm 1.
Algorithm 1: HRST
1 Input:
2 Hypernetwork $H = (V, E)$
3 Embedding size $d$
4 Output:
5 Embedding matrix $\mathbf{M} \in \mathbb{R}^{|V| \times d}$
6 for node $v$ in $V$ do
7   initialize embedding vector $\mathbf{v}$
8   initialize parameter vector $\boldsymbol{\theta}^{v}$
9   for node $u$ in $V$ do
10    initialize parameter vector $\boldsymbol{\lambda}^{u}$
11  end for
12  for hyperedge $e$ in $E$ do
13    for node $t$ in $e$ do
14      initialize parameter vector $\boldsymbol{\gamma}^{t}$
15    end for
16  end for
17 end for
18 obtain node sequences $C$ by random walks on the two-section graph
19 for $v$ in $C$ do
20   update parameter vector $\boldsymbol{\theta}^{u}$ according to Formula (13)
21   update embedding vector $\mathbf{w}$ according to Formula (15)
22   update parameter vector $\boldsymbol{\lambda}^{u}$ according to Formula (17)
23   for node $e$ in $E(v)$ do
24     update parameter vector $\mathbf{e}$ according to Formula (19)
25   end for
26   update parameter vector $\boldsymbol{\gamma}^{u}$ according to Formula (21)
27   for hyperedge $e$ in $E(v)$ do
28     for node $t$ in $e$ do
29       update parameter vector $\mathbf{t}$ according to Formula (23)
30       update parameter vector $\mathbf{e}$ according to Formula (24)
31     end for
32   end for
33 end for
34 for $v$ in $V$ do
35   $\mathbf{M}_{v} \leftarrow \mathbf{v}$
36 end for
37 return $\mathbf{M}$
5.5. Complexity Analysis
The time complexity of HRST was $O\big((1 + m_{1} + m_{2} m_{3})\, n d\big)$, where the time complexities of the topology-derived, set constraint, and translation constraint models are $O(nd)$, $O(m_{1} n d)$, and $O(m_{2} m_{3} n d)$, respectively. Here, $n$, the predefined size of the negative-sample set, is a constant independent of the network size, $d$ is the embedding dimension, $m_{1}$ is the maximum size of the set of the hyperedges associated with a node $v$, $m_{2}$ is the maximum size of $T(v)$, the set of the nodes in the triplets with a relation with the node $v$, and $m_{3}$ is the maximum size of the set of the relations associated with a node $v$.
6. Experiments
6.1. Dataset
Four hypernetwork datasets were used to evaluate the effectiveness of HRST. Detailed dataset statistics are shown in Table 1.
Four datasets are shown as follows:
GPS [23] described situations where a user partook in an activity at a location. The set of three-tuples <user, location, activity> was used to construct the hypernetwork.
MovieLens [24] described personal tag activities from MovieLens. The set of three-tuples <user, movie, tag> was used to construct the hypernetwork, where each movie had at least one genre.
Drug (http://www.fda.gov/Drugs/, accessed on 27 January 2020) described situations where users took drugs and had certain reactions that led to adverse events. The set of three-tuples <user, drug, reaction> was used to construct the hypernetwork.
wordnet [21] was composed of a set of triplets <head, relation, tail> extracted from WordNet 3.0. The set of three-tuples <head, relation, tail> was used to construct the hypernetwork.
6.2. Baseline Methods
DeepWalk. DeepWalk is a classical representation learning method to learn node representation vectors.
node2vec. node2vec preserves network neighborhoods of the nodes to learn node representation vectors.
LINE. LINE preserves both first- and second-order proximities to learn node representation vectors.
GraRep. GraRep captures global structure properties of a graph by k-step loss functions to learn node representation vectors.
HOPE. HOPE captures the higher-order proximity and asymmetric transitivity of a graph to learn node representation vectors.
SDNE. SDNE [25] utilizes first- and second-order proximities to characterize local and global network structures to learn node representation vectors.
HRSC. HRSC [11] incorporates the hyperedge sets associated with the nodes into the process of hypernetwork representation learning.
HRTC. HRTC models the interaction relationships among the nodes through the translation mechanism and incorporates the relationships among the nodes into the process of hypernetwork representation learning.
HRST. HRST incorporates the hyperedge sets associated with the nodes and interaction relationships among the nodes modeled through the translation mechanism into the process of hypernetwork representation learning.
6.3. Experimental Setting
Node classification and link prediction were used to evaluate the effectiveness of HRST. The vector dimension was set to 100, the number of random walks starting from each node to 10, and the length of each random walk to 40. A portion of each dataset was randomly selected as the training set and the rest served as the test set.
6.4. Node Classification
The multi-label classification tasks [1] were conducted on the MovieLens and wordnet datasets because only these two datasets have labels. In addition, the nodes without labels in the two datasets were removed. An SVM [26] classifier was trained to calculate node classification accuracies.
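As a usage sketch of this evaluation protocol, the snippet below trains a linear SVM on half of the node vectors and reports accuracy on the rest; scikit-learn's LinearSVC stands in for the SVM classifier used in the paper, and the random data are placeholders for the learnt embeddings and labels.

```python
# Minimal sketch of the node classification evaluation.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(200, 100))  # placeholder for learnt node vectors
labels = rng.integers(0, 4, size=200)     # placeholder for node labels

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, labels, train_size=0.5, random_state=0)
clf = LinearSVC().fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))
```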
From Table 2 and Table 3, the following observations were obtained:
For the two datasets, the average node classification accuracy of HRST was very close to those of HRSC and HRTC and better than those of the other baseline methods. For instance, in terms of average node classification accuracy, HRST outperformed the best of the other baseline methods (e.g., DeepWalk) by about 1% on the two datasets, while the average accuracies of the remaining baseline methods were generally lower than that of HRST.
The average node classification accuracy of GraRep ranked only second to those of HRST, HRSC, HRTC, and DeepWalk, and was very close to that of DeepWalk, because GraRep integrated the hyperedges to a certain extent into the process of network representation learning.
In a word, these observations show that the quality of the node representation vectors learnt by HRST was better.
6.5. Link Prediction
In this subsection, the link prediction task was evaluated by the AUC measure [27]. From Table 4, Table 5, Table 6 and Table 7, the following observations were obtained:
On the GPS and drug datasets, the average AUC value of HRST was very close to that of HRSC and superior to that of HRTC. On the wordnet dataset, the average AUC value of HRST was almost the same as those of HRSC and HRTC. On the MovieLens dataset, the average AUC values of HRST and HRTC were lower than those of HRSC and DeepWalk. On the whole, HRST performed better than most baseline methods, which indicated its effectiveness.
HRST performed consistently at different training ratios compared with the other baseline methods, which demonstrated its feasibility and robustness.
HRST almost always performed better than the baseline methods that do not incorporate hyperedges, which verified the assumption that incorporating the hyperedges into the process of hypernetwork representation learning is good for link prediction.
In a word, the above observations demonstrated that HRST can obtain high-quality node representation vectors.
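For reference, the AUC evaluation can be computed as below, assuming similarity scores have been produced for held-out true edges and for an equal number of sampled non-edges; the score values are placeholders.

```python
# Minimal sketch of the AUC computation for link prediction.
import numpy as np
from sklearn.metrics import roc_auc_score

pos_scores = np.array([0.9, 0.8, 0.7])  # scores of true test edges
neg_scores = np.array([0.4, 0.6, 0.2])  # scores of sampled non-edges
y_true = np.concatenate([np.ones_like(pos_scores), np.zeros_like(neg_scores)])
y_score = np.concatenate([pos_scores, neg_scores])
print(roc_auc_score(y_true, y_score))
```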
6.6. Parameter Sensitivity
The harmonic factors $\alpha$ and $\beta$ were used to balance the contribution rates among the topology-derived, set constraint, and translation constraint models. The training ratio and $\alpha$ were fixed to 50% and 0.5, respectively, and the node classification accuracies were calculated with different values of $\beta$ ranging from 0.1 to 0.9 on the MovieLens and wordnet datasets. Figure 4 shows the comparisons of node classification accuracies with different $\beta$.
As shown in Figure 4, the node classification performance of HRST was not sensitive to the parameter $\beta$, which demonstrated the robustness of HRST, because the variation ranges of the node classification accuracies with different $\beta$ were all within 2.5%.
As for the MovieLens and wordnet datasets, the best evaluated results in terms of node classification were achieved at different values of $\beta$ for each dataset.
7. Conclusions
Hypernetwork representation learning can explore the relationships among the nodes and provide a universal method to solve practical problems, and it has a wide range of application scenarios, such as trend prediction, personalized recommendation, and other online applications. Therefore, we proposed a hypernetwork representation learning method with common constraints of the set and translation to effectively incorporate the hyperedges into the process of hypernetwork representation learning. The learning of node representation vectors was regarded as a joint optimization problem, which was solved by means of the stochastic gradient ascent method. The experimental results demonstrated that our proposed method was almost entirely superior to the other baseline methods. Although we carried out the research on hypernetwork representation learning by means of a transformation strategy from the hypergraph to the graph and tried to incorporate the hyperedges into the process of network representation learning, some hypernetwork structure information was still lost. Therefore, future research can be carried out in two aspects: firstly, continue to incorporate the hyperedges into network representation learning methods; secondly, no longer transform the hypernetwork into a conventional network, so that the hyperedges are not decomposed but regarded as a whole in hypernetwork representation learning.
Author Contributions
Conceptualization, Y.Z. and H.Z.; methodology, Y.Z. and H.Z.; software, Y.Z.; validation, Y.Z.; formal analysis, Y.Z.; investigation, Y.Z. and H.Z.; resources, Y.Z.; data curation, Y.Z.; writing—original draft preparation, Y.Z.; writing—review and editing, Y.Z. and H.Z.; visualization, Y.Z.; supervision, Y.Z. and H.Z.; project administration, Y.Z. and H.Z.; funding acquisition, Y.Z., X.W., and J.H. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the National Natural Science Foundation of China, grant numbers 62166032, 62162053, and 62062059; by the Natural Science Foundation of Qinghai Province, grant numbers 2022-ZJ-961Q and 2022-ZJ-701; by the Project from Tsinghua University, grant number SKL-IOW-2020TC2004-01; and by the Open Project of State Key Laboratory of Plateau Ecology and Agriculture, Qinghai University, grant number 2020-ZZ-03.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Data are contained in the article.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Ruan, Q.S.; Zhang, Y.R.; Zheng, Y.H.; Wang, Y.D.; Wu, Q.F.; Ma, T.Q.; Liu, X.L. Recommendation model based on a heterogeneous personalized spacey embedding method. Symmetry 2021, 13, 290.
- Wang, M.H.; Qiu, L.L.; Wang, X.L. A survey on knowledge graph embeddings for link prediction. Symmetry 2021, 13, 485.
- Li, Y.H.; Wang, J.Q.; Wang, X.J.; Zhao, Y.L.; Lu, X.H.; Liu, D.L. Community detection based on differential evolution using social spider optimization. Symmetry 2017, 9, 183.
- Perozzi, B.; Al-Rfou, R.; Skiena, S. DeepWalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York, NY, USA, 24–27 August 2014; pp. 701–710.
- Grover, A.; Leskovec, J. Node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 855–864.
- Tang, J.; Qu, M.; Wang, M.Z.; Zhang, M.; Yan, J.; Mei, Q.Z. LINE: Large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web, Florence, Italy, 18–22 May 2015; pp. 1067–1077.
- Cao, S.S.; Lu, W.; Xu, Q.K. GraRep: Learning graph representations with global structural information. In Proceedings of the 24th ACM International Conference on Information and Knowledge Management, Melbourne, Australia, 19–23 October 2015; pp. 891–900.
- Ou, M.D.; Cui, P.; Pei, J.; Zhang, Z.W.; Zhu, W.W. Asymmetric transitivity preserving graph embedding. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 1105–1114.
- Tu, C.C.; Liu, H.; Liu, Z.Y.; Sun, M.S. CANE: Context-aware network embedding for relation modeling. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Vancouver, BC, Canada, 30 July–4 August 2017; pp. 1722–1731.
- Tu, C.C.; Zeng, X.K.; Wang, H.; Zhang, Z.Y.; Liu, Z.Y.; Sun, M.S.; Zhang, B.; Lin, L.Y. A unified framework for community detection and network representation learning. IEEE Trans. Knowl. Data Eng. 2019, 31, 1051–1065.
- Zhu, Y.; Zhao, H.X. Hypernetwork representation learning with the set constraint. Appl. Sci. 2022, 12, 2650.
- Agarwal, S.; Branson, K.; Belongie, S. Higher order learning with graphs. In Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, PA, USA, 25 June 2006; pp. 17–24.
- Huang, J.; Chen, C.; Ye, F.H.; Wu, J.J.; Zheng, Z.B.; Ling, G.H. Hyper2vec: Biased random walk for hyper-network embedding. In Proceedings of the 24th International Conference on Database Systems for Advanced Applications, Chiang Mai, Thailand, 23–25 April 2019; pp. 273–277.
- Huang, J.; Liu, X.; Song, Y.Q. Hyper-path-based representation learning for hyper-networks. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, Beijing, China, 3–7 November 2019; pp. 449–458.
- Tu, K.; Cui, P.; Wang, X.; Wang, F.; Zhu, W.W. Structural deep embedding for hyper-networks. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; pp. 426–433.
- Zhou, D.Y.; Huang, J.Y.; Schölkopf, B. Learning with hypergraphs: Clustering, classification and embedding. In Proceedings of the 19th International Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 4–7 December 2006; pp. 1601–1608.
- Sharma, K.K.; Seal, A.; Herrera-Viedma, E.; Krejcar, O. An enhanced spectral clustering algorithm with s-distance. Symmetry 2021, 13, 596.
- Bretto, A. Hypergraph Theory: An Introduction; Springer Press: Berlin, Germany, 2013; pp. 24–27.
- Zhang, R.C.; Zou, Y.S.; Ma, J. Hyper-SAGNN: A self-attention based graph neural network for hypergraphs. arXiv 2019, arXiv:1911.02613.
- Song, G.; Li, J.W.; Wang, Z. Occluded offline handwritten Chinese character inpainting via generative adversarial network and self-attention mechanism. Neurocomputing 2020, 415, 146–156.
- Bordes, A.; Usunier, N.; Garcia-Duran, A.; Weston, J.; Yakhnenko, O. Translating embeddings for modeling multi-relational data. In Proceedings of the 26th International Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 5–10 December 2013; pp. 2787–2795.
- Mikolov, T.; Sutskever, I.; Chen, K.; Corrado, G.; Dean, J. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 5–10 December 2013; pp. 3111–3119.
- Zheng, V.W.; Cao, B.; Zheng, Y.; Xie, X.; Yang, Q. Collaborative filtering meets mobile recommendation: A user-centered approach. In Proceedings of the 24th AAAI Conference on Artificial Intelligence, Atlanta, GA, USA, 11–15 July 2010; pp. 236–241.
- Harper, F.M.; Konstan, J.A. The MovieLens datasets: History and context. ACM Trans. Interact. Intell. Syst. 2015, 5, 19.
- Wang, D.X.; Cui, P.; Zhu, W.W. Structural deep network embedding. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 1225–1234.
- Xu, J.L.; Han, J.W.; Nie, F.P.; Li, X.L. Multi-view scaling support vector machines for classification and feature selection. IEEE Trans. Knowl. Data Eng. 2020, 32, 1419–1430.
- Wang, Y.G.; Huang, G.N.; Yang, J.J.; Lai, H.D.; Liu, S.; Chen, C.R.; Xu, W.C. Change point detection with mean shift based on AUC from symmetric sliding windows. Symmetry 2020, 12, 599.