Article

MIGAN: Mutual-Interaction Graph Attention Network for Collaborative Filtering

1 Faculty of Sciences, Ferhat Abbas University, Setif 1, Setif 19000, Algeria
2 Laboratoire d'Informatique de Bourgogne, University of Burgundy, 21078 Dijon, France
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Entropy 2022, 24(8), 1084; https://doi.org/10.3390/e24081084
Submission received: 9 June 2022 / Revised: 28 July 2022 / Accepted: 2 August 2022 / Published: 5 August 2022

Abstract: Many web platforms now include recommender systems. Network representation learning has been a successful approach for building these efficient recommender systems. However, learning the mutual influence of nodes in the network, which carries the collaborative signal accounting for the effect of complex user-item interactions on user decisions, is challenging. For this purpose, in this paper, we develop the Mutual-Interaction Graph Attention Network (MIGAN), a new algorithm based on self-supervised representation learning on large-scale bipartite graphs (BGNN). Experimental investigation with real-world data demonstrates that MIGAN compares favorably with the baselines in terms of prediction accuracy and recommendation efficiency.

1. Introduction

In the literature, there are numerous techniques for building recommendation systems. They can be classified as collaborative, content-based, or hybrid filtering approaches [1,2,3,4]. Collaborative filtering is the most influential. It relies on identifying users with similar tastes and leverages their feedback to make suggestions to the active user. Collaborative recommender systems have been implemented in multiple application areas [5,6,7].
Learning effective user/item representations from their interactions and side information is a challenging issue in recommender systems. Since most data have a graph structure and graph neural networks (GNNs) excel at representation learning, using GNNs in recommender systems is a flourishing field of research. Several works rely on GNNs to perform recommendation and other tasks, such as the graph convolutional network (GCN) [8] and the graph attention network (GAT) [9]. GAT computes node representations by adaptively combining their neighborhoods' vectors through a self-attention mechanism with trainable attention weights. Wang et al. [10] suggest a knowledge graph attention network (KGAT) for KG-based recommendations; to enrich the representation, the authors consider the implicit collaborative information of multi-hop neighbors. In [11], the authors propose a neighbor-aware graph attention network for recommendation tasks to model the implicit correlations of neighbors. Unlike previous attention networks, our current work determines the most relevant weights characterizing the mutual influence among items and users. In addition to learning the interaction between user interests and item embeddings, it integrates a new component accounting for the mutual influence of items carrying collaborative signals on user decisions. MIGAN deeply learns the most relevant weights representing the users' mutual influence on an item and exploits the complex relation between the user profile and the item attributes. Its main advantage lies in its ability to discriminate the relative influence of the various interactions between nodes. Our main contributions are summarized as follows:
  • Our approach is based on self-supervised representation learning on large-scale bipartite graphs (BGNN). We adapt this powerful representation to the recommendation task because of its ability to model the dependencies between nodes at a large scale.
  • The collaborative filtering recommender based on interactive neural attention networks takes advantage of the encoding potential of interactive attention between users and items. It learns the most significant weights representing users' mutual effect on an item; exploiting this information improves the recommender system's accuracy.
  • The empirical evaluation on real-world data shows that MIGAN significantly outperforms the state-of-the-art baselines.
The rest of the paper is organized as follows. Section 2 describes the related literature. Section 3 presents in detail the proposed architecture. Section 4 discusses the experimental results. Section 5 summarizes the conclusions.

2. Related Work

Graph embedding is one of machine learning's newest and fastest-growing subfields. Its strength lies in its ability to take advantage of the intrinsic graph structure of the data encountered in a wide range of applications. The graph format models a set of elements (represented by nodes) and their relationships (represented by edges) to capture structural information. Various graph embedding models have therefore been proposed in the literature [12,13]. Node2Vec [14] is an embedding model that converts graphs into numerical representations, using each node in the network as a starting point to produce a corpus of random walks. In a first-order random walk, each step is solely determined by the current state; in a second-order random walk, the steps are determined by the current and prior states. The random walk corpus is fed through Word2Vec to build the node embeddings. Liang et al. [15] extend variational autoencoders (VAEs) to collaborative filtering for implicit feedback: their model (MultiVAE) represents the collaborative information with a multinomial likelihood over the data distribution, enabling it to sample predictions for items on the long tail. Drif et al. [16] develop an ensemble variational autoencoder framework for recommendations (EnsVAE) that specifies a procedure to transform sub-recommenders' predicted utility matrices into interest probabilities, allowing the VAE to represent the variation in their aggregation. This architecture is based on two components: (1) a GloVe content-based filtering recommender (GloVe-CBF) that exploits the strengths of embedding-based representations and stacking ensemble learning techniques to extract features from the item-based side information, and (2) a variant of the neural collaborative filtering recommender, named the Gate Recurrent Unit-based Matrix Factorization (GRU-MF) recommender. The latter models a high level of non-linearities and exhibits interactions between users and items in latent embeddings, reducing user biases towards items that are rated frequently. Lo et al. [17] exploit the extensive structural information of graph flow data using graph neural networks (GNNs), which operate on the message passing concept, where each node collects its neighbors' features and receives them as messages.
In recent years, the graph attention network approach has developed rapidly. Wang et al. [18] introduced a multi-dimension interaction-based attentional knowledge graph neural network (MI-KGNN) to improve recommendations based on knowledge graphs (KG). MI-KGNN explores the interaction between users and the neighborhood during embedding propagation. In [10], the authors proposed a knowledge graph attention network (KGAT) modeling the high-order connectivity in knowledge graphs; it exploits the attention mechanism to determine important neighbors. In [19], the authors propose a multi-view graph attention network (MV-GAN) based on heterogeneous information networks for recommendation. They create attention networks at the node and path levels to learn user and product representations from every view, and develop a view-level attention mechanism to integrate the various relationship types of multiple views cooperatively. Liu et al. [20] propose a contextualized graph attention network (CGAT) based on an entity's local and non-local context in a knowledge graph. CGAT implements a graph attention method to capture local context information while considering users' unique preferences for entities.
CGAT also extracts the non-local context of an entity using a biased random walk sampling method. In general, information propagates from nodes across the network over numerous iterations, and aggregating this information at each node (the node embedding) is a memory- and time-consuming task. Due to this fact, many GNNs for recommendation suffer from scalability limitations and unpredictable memory and computational resource requirements on large graphs. To overcome these drawbacks, our work considers node representation learning on large-scale bipartite graphs.

3. Mutual-Interaction Graph Attention Network Approach

The proposed architecture considers the mutual collaborative information of the user's preference over the whole set of neighboring items to enrich the representation. It is based on self-supervised representation learning on a large-scale bipartite graph (BGNN) [21]. Figure 1 illustrates the model architecture in detail. We first formulate the recommendation task on the bipartite graph. Secondly, we introduce the embedding representation based on BGNN and then describe the mutual interaction graph attention mechanism.

3.1. Problem Formulation

Let $U = \{u_1, u_2, \ldots, u_n\}$ and $I = \{i_1, i_2, \ldots, i_m\}$ be the sets of users and items, respectively, where $n$ is the number of users and $m$ is the number of items. We assume that $R \in \mathbb{R}^{n \times m}$ is the user-item rating matrix.
We formulate the recommendation task as a prediction problem with the following notation: $\hat{R}$ is the utility matrix; $\hat{r}_u$ is the predicted rating for each user $u \in U$; and $R_{ui}$ is the ground-truth rating assigned by user $u$ to item $i$.
We define the utility matrix as:
$$\hat{R}_{ui} = P Q^{T} = \sum_{k=1}^{K} p_{uk} \, q_{ki}$$
where $K$ is the latent space's dimension. $S \in \mathbb{R}^{n \times m}$ is the matrix of inner products between the user and item latent vectors; it is decomposed by the matrix factorization method into $P \in \mathbb{R}^{n \times K}$ and $Q \in \mathbb{R}^{m \times K}$.
To normalize $\hat{R}$, we apply min/max scaling:
$$\operatorname{minmax}(x) = \frac{x - min}{max - min}, \quad x \in \hat{r}_{ui}$$
where $min = \min(r_{ui})$ is the user's minimal rating and $max = \max(r_{ui})$ is the maximal one. The normalization eliminates user bias: users rate items in different ways, some only giving high ratings to items they like while others do the opposite. Normalizing users' ratings hides this bias by mapping them to values between 0 and 1, the lowest rating of each user being mapped to 0 and the highest to 1. This helps leverage more accurate collaborative-based recommendations.
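To make the scaling concrete, the following is a minimal NumPy sketch of this per-user normalization. Treating zero entries as unobserved ratings is our assumption for illustration, not part of the original formulation.

```python
import numpy as np

def minmax_normalize(R):
    """Per-user min/max scaling of a rating matrix R (n_users x n_items).

    Assumption: unobserved entries are encoded as zeros and left untouched;
    only each user's observed ratings are mapped to [0, 1].
    """
    R_norm = np.zeros_like(R, dtype=float)
    for u in range(R.shape[0]):
        rated = R[u] > 0                       # mask of observed ratings
        if not rated.any():
            continue
        r_min, r_max = R[u, rated].min(), R[u, rated].max()
        if r_max == r_min:                     # constant rater: map to 1.0
            R_norm[u, rated] = 1.0
        else:
            R_norm[u, rated] = (R[u, rated] - r_min) / (r_max - r_min)
    return R_norm
```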
Table 1 reports the notations used in the rest of this paper.
A bipartite graph is composed of two independent sets of vertices, $U_1$ and $I_1$; every edge connects a vertex of $U_1$ to a vertex of $I_1$. Formally, let $G = (U_1, I_1, E)$ be a bipartite graph, where $e_{ij}$ denotes the edge between $u_i$ and $i_j$. $B_u \in \mathbb{R}^{M \times N}$ is the incidence matrix for $U_1$ and $B_i \in \mathbb{R}^{N \times M}$ is the incidence matrix for $I_1$, where
$$B_u(i,j) = \begin{cases} 1 & \text{if } e_{ij} \in E, \\ 0 & \text{if } e_{ij} \notin E. \end{cases}$$
$X_u \in \mathbb{R}^{M \times P}$ is defined as the feature matrix of the nodes $u_i$ ($X_i$ is written similarly).
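The incidence matrices can be derived directly from the rating matrix. The sketch below shows one plausible construction, mirroring the GetBipartiteGraph step used later in Algorithm 1; encoding an edge wherever a rating is observed is our assumption.

```python
import numpy as np

def get_bipartite_graph(R):
    """Hypothetical GetBipartiteGraph: derive the incidence matrices B_u
    and B_i of the user-item bipartite graph from a rating matrix R.

    An edge e_ij exists whenever user u_i rated item i_j (R[i, j] > 0).
    """
    E = (R > 0).astype(float)  # binary adjacency between the two vertex sets
    B_u = E                    # incidence matrix seen from the user side (M x N)
    B_i = E.T                  # incidence matrix seen from the item side (N x M)
    return B_u, B_i
```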
Our work is based on the self-supervised node representation learning model of [21], which can employ topology information as well as separate node attributes from two domains to increase the recommendation performance on large graphs.

3.2. Embedding Representation Based on Bipartite Graph Neural Networks (BGNN)

He et al. [21] proposed a self-supervised representation learning framework for large-scale bipartite graphs. In this section, we adapt the outputs of this BGNN architecture so that we can deploy it in our recommendation system. Let us define the following notions: $H_u \in \mathbb{R}^{P}$ (respectively $H_i \in \mathbb{R}^{Q}$) is the node embedding representation for $U_1$ (respectively $I_1$), and $f_{emb}$ is an embedding model with parameters $\theta$.
The embedding of the distinct node features $X_u$ and $X_i$ is written as:
$$H_u, H_i = f_{emb}(X_u, B_u, X_i, B_i, \theta)$$
The architecture of $f_{emb}$ is based on two functions: (i) Inter-Domain Message Passing (IDMP) and (ii) Intra-Domain Alignment (IDA). We briefly describe these functions and show how we prepare their outputs for the recommendation task (the interested reader can refer to [21] for more details). IDMP enables one domain to aggregate information from the other domain through the linking edges, as follows:
$$H_{i \to u} = f_u(X_i, B_u, \theta)$$
$$H_{u \to i} = f_i(X_u, B_i, \theta)$$
where $f_u$ (resp. $f_i$) is the IDMP function for the user (resp. item) domain, and $H_{i \to u}$ (resp. $H_{u \to i}$) is the flow of aggregated information from $I_1$ to $U_1$ (resp. from $U_1$ to $I_1$). Intra-Domain Alignment (IDA) is then deployed to fuse these two distinct features into a single representation. After the self-supervised training, the algorithm yields the domain representations $H_{U_1}$ and $H_{I_1}$. The adversarial loss $L_{adv}$ is used to compute the best results:
$$Loss_u = L_{adv}(H_{i \to u}, X_u)$$
$$Loss_i = L_{adv}(H_{u \to i}, X_i)$$
Thus, the Inter-Domain Message Passing (IDMP) is expressed as:
$$H_{i \to u}^{(K)} = \sigma\big(\tilde{B}_u H_i^{(K)} W_u^{K}\big)$$
$$H_{u \to i}^{(K)} = \sigma\big(\tilde{B}_i H_u^{(K)} W_i^{K}\big)$$
where $\tilde{B}_u = D_u^{-1} B_u$ is the normalization of $B_u$ ($D_u$ being the degree matrix of $B_u$). By normalizing the incidence matrix of the graph, the algorithm effectively reduces the computational cost. $\sigma$ is the ReLU activation function [22], and $K$ denotes the depth index of the hidden features of the nodes in $U_1$ (resp. $I_1$).
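As an illustration, a single IDMP step can be sketched in NumPy as follows. The function signature and weight handling are ours, not the reference implementation of [21]; only the normalized message passing of the equation above is reproduced.

```python
import numpy as np

def idmp(B, H_other, W):
    """One Inter-Domain Message Passing step:
    H_out = ReLU(B_tilde @ H_other @ W), with B_tilde = D^{-1} B the
    row-normalized incidence matrix (D = diagonal degree matrix of B).
    """
    deg = B.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                    # guard against isolated nodes
    B_tilde = B / deg                      # D^{-1} B
    return np.maximum(0.0, B_tilde @ H_other @ W)   # ReLU activation
```

For example, the user-side flow would be obtained as `H_i_to_u = idmp(B_u, H_i, W_u)`, aggregating item features into the user domain.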
Let $\sigma$ be the Intra-Domain Alignment (IDA) discriminator and $\phi$ the IDMP generator. The discriminator loss function is expressed as follows:
$$L_D(\sigma \mid \phi) = -\frac{1}{M} \sum_{j=1}^{M} \log P_{\sigma,\phi}\big(source = 0 \mid h_u^{(j)}\big) - \frac{1}{N} \sum_{j=1}^{N} \log P_{\sigma,\phi}\big(source = 1 \mid h_{i \to u}^{(j)}\big)$$
where $P_{\sigma,\phi}(source = 1 \mid h)$ is the probability that the input feature vector $h$ comes from the source domain $H_{i \to u}$.
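A minimal sketch of this loss written directly from the formula; it assumes the source probabilities have already been produced by a separate discriminator network, so the two probability arrays below are hypothetical inputs.

```python
import numpy as np

def discriminator_loss(p_source_hu, p_source_hiu):
    """Adversarial discriminator loss L_D.

    p_source_hu:  P(source=1 | h_u^(j))      for the M target-domain vectors
    p_source_hiu: P(source=1 | h_{i->u}^(j)) for the N aggregated vectors

    The discriminator is trained to output source=0 for H_u and
    source=1 for H_{i->u}.
    """
    eps = 1e-9                                             # numerical guard
    term_u = -np.mean(np.log(1.0 - p_source_hu + eps))     # wants source=0
    term_iu = -np.mean(np.log(p_source_hiu + eps))         # wants source=1
    return term_u + term_iu
```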
The implementation of self-supervised representation learning on large-scale bipartite graphs (BGNN) for the recommendation task is summarized in Algorithm 1.
Algorithm 1 Bipartite Graph Neural Networks for the recommendation task.
Input:
   X_u: user features list
   X_i: item features list
   R: rating matrix
Output:
   emb_u: user embedding representation based on BGNN
   emb_i: item embedding representation based on BGNN
Begin
   Phase 1                    ▹ Extract the bipartite graph from the rating matrix
      B_u, B_i = GetBipartiteGraph(R)
   Phase 2                    ▹ Compute the embeddings for each item and user
      H_u^(0) = X_u; H_i^(0) = X_i
      for j = 0, 1, ..., K do
         H_{u→i}^(j) = IDMP(B_i, H_u^(K))
         H_{i→u}^(j) = IDMP(B_u, H_i^(K))
         H_u^(K) = IDA(H_{u→i}^(j))
         H_i^(K) = IDA(H_{i→u}^(j))
      end for
   Phase 3                    ▹ Embedding preparation
      emb_u = GetEmbeddings(H_u^(K))
      emb_i = GetEmbeddings(H_i^(K))
   return emb_u, emb_i
End
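The following Python sketch ties the three phases together, reusing the get_bipartite_graph and idmp helpers sketched above. It is a shape-level illustration only: the weights are random and untrained, and ida is a placeholder for the adversarially trained Intra-Domain Alignment of [21].

```python
import numpy as np

rng = np.random.default_rng(0)

def ida(H_flow, out_dim):
    """Placeholder Intra-Domain Alignment: a single random projection.
    The real IDA in [21] is trained adversarially; this stub only keeps
    the tensor shapes of Algorithm 1 intact."""
    W = rng.normal(scale=0.1, size=(H_flow.shape[1], out_dim))
    return np.tanh(H_flow @ W)

def bgnn_embeddings(X_u, X_i, R, depth=2, dim=75):
    """Minimal sketch of Algorithm 1 (phases 1-3)."""
    B_u, B_i = get_bipartite_graph(R)          # phase 1
    H_u, H_i = X_u, X_i                        # phase 2: H^(0) = X
    for _ in range(depth):
        W_i = rng.normal(scale=0.1, size=(H_u.shape[1], dim))
        W_u = rng.normal(scale=0.1, size=(H_i.shape[1], dim))
        H_u_to_i = idmp(B_i, H_u, W_i)         # items aggregate user features
        H_i_to_u = idmp(B_u, H_i, W_u)         # users aggregate item features
        H_u, H_i = ida(H_i_to_u, dim), ida(H_u_to_i, dim)
    return H_u, H_i                            # phase 3: emb_u, emb_i
```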

3.3. The Interactive Attention Network Recommender

The interactive attention network recommendation system aims at identifying latent features that capture the mutual influence of users and items. The attention mechanism has proven useful in a variety of machine learning applications, including image/video captioning [23,24]. Our proposed interactive concept extracts each participant's contribution from their compressed representation, allowing the proposed recommendation framework to model the interaction characteristics efficiently. This attention network model determines which weights best represent the users' mutual effect on the item. Figure 2 illustrates the mutual interactions between users and items.
Algorithm 2 shows the architecture of the proposed interactive attention network. In order to predict a distribution over the items, we create combined user and item interactive attention maps. As a result, the co-attention mechanism detects correlations between items and users and computes the likelihood that an item will be of interest to indirectly similar users.
The first embedding layers $e_u$ and $e_i$ capture the latent features of users $p_u$ and items $q_i$. They are followed by Long Short-Term Memory (LSTM) layers to learn long sequences with long time lags. Each LSTM state takes two inputs, the current feature vector and the output vector $h_{t-1}$ of the previous state, and produces an output vector $h_t$. We chose the LSTM model because it exhibits the interactions between users and items in the latent embeddings. Each node embedding layer is chained with an LSTM layer whose recurrent modules enable long-range learning. Information from node neighbors gradually enhances the subsequent feature representation because the LSTM has an augmented hidden state with non-linear mechanisms: simple learned gating functions allow states to be propagated without modification, updated, or reset. The LSTM representations are expressed as follows:
$$h_t^u = g_1(p)$$
$$h_t^i = g_2(q)$$
The learned representations are $H_p$ and $H_q$, of dimensions $d \times n$ and $d \times m$, respectively. The embedded user and item inputs are projected into a vector representation space using the attention technique; in fact, this representation models the high-order non-linear mutual relationship. For the interactive attention mechanism, we build attention maps in order to predict a distribution over the items. For this purpose, we compute the affinity matrix $L = \tanh(H_p^{T} W_{pq} H_q)$, where $L \in \mathbb{R}^{n \times m}$ and $W_{pq}$ is a $d \times d$ learnable parameter matrix. The feature co-attention maps are defined as:
$$\alpha_p^* = \tanh\big(W_p H_p + (W_q H_q) L^{T}\big), \qquad \alpha_q^* = \tanh\big(W_q H_q + (W_p H_p) L\big)$$
The interactive attention model uses the hyperbolic tangent function to model the mutual interactions between users and items. Afterward, we compute the probability distribution over the embedding space. The softmax function is used to generate the attention weights:
$$\alpha_u = \operatorname{Softmax}(f(\alpha_p^*))$$
$$\alpha_i = \operatorname{Softmax}(f(\alpha_q^*))$$
where $f$ is a multi-layer neural network.
Then, the high-order interaction latent space of users and items is given by:
$$f_1 = [\beta_u \oplus \beta_i]$$
where $\beta_u$ and $\beta_i$ are the derived attention weights and $\oplus$ is the concatenation operator (Table 1).
As a result, the predicted matrix $\hat{R}_{ui}$ is defined as:
$$\hat{R}_{ui} = f(f_1)$$
where $f$ is a dense layer with a sigmoid activation function.
Finally, we train the model to minimize the loss function, which is the Mean Absolute Error (MAE):
$$L(R_{ui}, \hat{R}_{ui}) = \frac{1}{|C|} \sum_{(u,i) \in C} \big| R_{ui} - \hat{R}_{ui} \big|$$
Algorithm 2 CoAttention: the interactive attention network recommender.
Input:
   lstmU: user LSTM output, size d × n
   lstmI: item LSTM output, size d × m
Output:
   att_ui: the interactive attention between users and items
Begin
   Phase 1                    ▹ Initialization of the weights
      W_u: size n × d
      W_i: size m × d
      W_ui: size d × d
      b_u: size d × n
      b_i: size d × m
   Phase 2                    ▹ tanh function application
      S = lstmI
      G = lstmU
      F = tanh(S^T W_ui G)
      α_u* = tanh(W_u G + (W_i S) F^T)
      α_i* = tanh(W_i S + (W_u G) F)
   Phase 3                    ▹ Softmax function application
      α_u = softmax(f(b_u + α_u*))
      α_i = softmax(f(b_i + α_i*))
      β_u = O(α_u)
      β_i = O(α_i)
                    ▹ O is the batch_dot() function from the Keras backend, applied to the tensor pairs ((α_u)^T, G^T) and ((α_i)^T, S^T), respectively.
   Phase 4                    ▹ Each output of O is transposed and passed to a product function; the results β_u and β_i are then combined by a concatenation function.
      att_ui = concatenation(β_u, β_i)
   return att_ui
End
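A possible Keras realization of Algorithm 2 is sketched below. The tensor layout (batch, positions, d), the folding of the W and b parameters into Dense projections, and the use of a small per-position scoring network for f are our assumptions; the batch_dot-based pooling of phase 4 is expressed here as an attention-weighted sum.

```python
import tensorflow as tf
from tensorflow.keras import layers

class CoAttention(layers.Layer):
    """Minimal sketch of Algorithm 2.
    Inputs:  lstm_u (batch, n, d) user features, lstm_i (batch, m, d) item features.
    Output:  concatenated attended summaries (batch, 2*d)."""

    def __init__(self, d, **kwargs):
        super().__init__(**kwargs)
        self.W_ui = self.add_weight(name="W_ui", shape=(d, d),
                                    initializer="glorot_uniform")
        self.proj_u = layers.Dense(d)      # plays the role of W_u, b_u
        self.proj_i = layers.Dense(d)      # plays the role of W_i, b_i
        self.score_u = layers.Dense(1)     # the small network f(.)
        self.score_i = layers.Dense(1)

    def call(self, lstm_u, lstm_i):
        G, S = lstm_u, lstm_i                                  # (b,n,d), (b,m,d)
        # Affinity matrix F = tanh(S W_ui G^T): (b, m, n)
        F = tf.tanh(tf.einsum("bmd,de,bne->bmn", S, self.W_ui, G))
        # Co-attention maps (phase 2)
        a_u = tf.tanh(self.proj_u(G) +
                      tf.einsum("bmn,bmd->bnd", F, self.proj_i(S)))   # (b,n,d)
        a_i = tf.tanh(self.proj_i(S) +
                      tf.einsum("bmn,bnd->bmd", F, self.proj_u(G)))   # (b,m,d)
        # Attention weights over positions (phase 3)
        alpha_u = tf.nn.softmax(self.score_u(a_u), axis=1)            # (b,n,1)
        alpha_i = tf.nn.softmax(self.score_i(a_i), axis=1)            # (b,m,1)
        # Attended summaries beta = sum_k alpha_k * h_k (phase 4)
        beta_u = tf.reduce_sum(alpha_u * G, axis=1)                   # (b,d)
        beta_i = tf.reduce_sum(alpha_i * S, axis=1)                   # (b,d)
        return tf.concat([beta_u, beta_i], axis=-1)                   # (b,2d)
```

In a functional model, it would be invoked as `att_ui = CoAttention(d)(lstm_u, lstm_i)`, matching phase 4 of Algorithm 3.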
Algorithm 3 summarizes the overall mutual-interaction graph attention network approach. Network representation learning tackles the recommendation problem by embedding nodes into a low-dimensional space $\mathbb{R}^d$. Furthermore, the BGNN representation adapted to the recommendation task improves multi-hop relationship modeling and training accuracy. Unlike previous research on graph neural networks for recommendation, which only learns the complex relationship between targets and their neighbors using an attention network, our work learns the most important weights representing the users' mutual influence on an item through interactive attention.
Algorithm 3 MIGAN: Mutual-Interaction Graph Attention Network.
Input:
   X_u: user features list
   X_i: item features list
   U: list of users, size n
   I: list of items, size m
   R: rating matrix
Output:
   P_ui: prediction matrix
Begin
   Phase 1                    ▹ Prepare the data to be passed to the BGNN
      foreach u ∈ U, i ∈ I do R_j = minmax(R)
   Phase 2                    ▹ Extract the embeddings with BGNN
      emb_u, emb_i = BGNN(X_u, X_i, R_j)
   Phase 3                    ▹ User and item embeddings are followed by LSTM layers
      lstm_u = LSTM(emb_u)
      lstm_i = LSTM(emb_i)
   Phase 4                    ▹ Apply the attention mechanisms
      att_u = Attention(lstm_u)
      att_i = Attention(lstm_i)
      att_ui = CoAttention(lstm_u, lstm_i)
   Phase 5                    ▹ Concatenate the outputs
      ATT = concatenation(att_u, att_i, att_ui)
      InteractiveAttention = BuildModel(ATT)
      InteractiveAttention.trainModel(D)
   return P_ui = InteractiveAttention.predict(φ)
End
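To show how the phases of Algorithm 3 compose, here is a hedged Keras sketch of the end-to-end model. The self-attention layers, pooling operators, and single output dense layer are simplifications of our own, and CoAttention refers to the sketch given after Algorithm 2.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_migan(emb_u, emb_i, d=75):
    """Minimal sketch of phases 3-5 of Algorithm 3, assuming emb_u / emb_i
    are the BGNN embeddings from Algorithm 1 fed in as sequence inputs."""
    in_u = layers.Input(shape=(None, emb_u.shape[-1]), name="user_emb")
    in_i = layers.Input(shape=(None, emb_i.shape[-1]), name="item_emb")
    lstm_u = layers.LSTM(d, return_sequences=True)(in_u)     # phase 3
    lstm_i = layers.LSTM(d, return_sequences=True)(in_i)
    att_u = layers.Attention()([lstm_u, lstm_u])             # phase 4: self-attention
    att_i = layers.Attention()([lstm_i, lstm_i])
    att_ui = CoAttention(d)(lstm_u, lstm_i)                  # Algorithm 2
    pooled = layers.Concatenate()([                          # phase 5
        layers.GlobalAveragePooling1D()(att_u),
        layers.GlobalAveragePooling1D()(att_i),
        att_ui,
    ])
    out = layers.Dense(1, activation="sigmoid")(pooled)      # predicted rating
    model = Model([in_u, in_i], out)
    model.compile(optimizer="adam", loss="mae")              # MAE loss, as above
    return model
```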

4. Experiments and Discussion

We conduct our experiments on MovieLens 1M, a dataset commonly used for benchmarking recommendation frameworks. The MovieLens dataset [25] consists of real, timestamped 5-star ratings given by users of the MovieLens platform to various films. The selected dataset contains one million ratings from about 6000 users on 4000 movies. Table 2 shows the dataset description. We divide the dataset into 75% training data and 25% testing data in a stratified way.
We evaluate MIGAN using the Mean Average Precision and the Normalized Discounted Cumulative Gain. The mean average precision (MAP) measures the accuracy of information retrieval. The results of recommender systems are frequently pruned to return the top-k components; the value of k varies with the application, as a system might display the top three trending items or the ten items that best meet the current user's preferences. The Normalized Discounted Cumulative Gain (NDCG) measures an item's normalized usefulness depending on its position in the final list, and is used to assess the ranking quality of the top-k recommended items. Interested readers can refer to [16] for more details.
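For reference, the two metrics can be computed per user as in the following sketch. It assumes binary relevance and the usual min(k, |relevant|) normalizations; the exact variants used in [16] may differ in minor details.

```python
import numpy as np

def average_precision_at_k(recommended, relevant, k):
    """AP@k for one user: precision at each rank holding a relevant item,
    averaged over min(k, |relevant|)."""
    hits, score = 0, 0.0
    for rank, item in enumerate(recommended[:k], start=1):
        if item in relevant:
            hits += 1
            score += hits / rank
    return score / min(k, len(relevant)) if relevant else 0.0

def ndcg_at_k(recommended, relevant, k):
    """Binary-relevance NDCG@k: DCG of the ranked list divided by the
    DCG of the ideal ordering."""
    dcg = sum(1.0 / np.log2(r + 1)
              for r, item in enumerate(recommended[:k], start=1)
              if item in relevant)
    ideal = sum(1.0 / np.log2(r + 1)
                for r in range(1, min(k, len(relevant)) + 1))
    return dcg / ideal if ideal > 0 else 0.0
```

MAP@k is then the mean of average_precision_at_k over all test users.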

4.1. Hyperparameters Analysis

Here, we report the hyperparameter analysis phase, which is performed separately for each MIGAN recommender variant. Evaluation is done with MAP@k and NDCG@k.
The main idea of our model is to pass the results of the BGNN through a neural network and then apply the attention model to it. Consequently, we analyze six variants, combining two hyperparameters: the embedding output size (50, 75, or 100) and the recurrent network type (LSTM or GRU). The architecture of each variant is as follows:
  • variant 1: BGNN output size = 75 × 1 | Neural Network = LSTM.
  • variant 2: BGNN output size = 100 × 1 | Neural Network = LSTM.
  • variant 3: BGNN output size = 50 × 1 | Neural Network = LSTM.
  • variant 4: BGNN output size = 75 × 1 | Neural Network = GRU.
  • variant 5: BGNN output size = 100 × 1 | Neural Network = GRU.
  • variant 6: BGNN output size = 50 × 1 | Neural Network = GRU.
As shown in Table 3, Variant 1 scores better than the other variants. Thus, it is picked for further tweaking.
We generate the utility matrix based on the learned embeddings. Figure 3 illustrates the hyperparameters analysis.
We explore a range of values for each hyperparameter as reported below:
  • Dimensions of the embedding α ∈ [30, 100];
  • Number of dense layers after the co-attention θ ∈ [2, 20];
  • Number of neurons per dense layer τ ∈ [30, 150];
  • Activation function used in the dense layers σ ∈ {selu, elu, relu};
  • Optimizer λ ∈ {sgd, adam, adagrad}.
Results illustrate that we achieve the best performance with the following settings: α = 50, θ = 3, τ = 100, σ = elu, λ = adam.
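A simple exhaustive search over these ranges can be organized as below; train_and_score is a placeholder hook, since the actual training and MAP@10 evaluation depend on the model sketches given earlier.

```python
import itertools

# Hypothetical grid mirroring the ranges explored above.
grid = {
    "alpha": [30, 50, 75, 100],           # embedding dimensions
    "theta": [2, 3, 5, 10, 20],           # dense layers after co-attention
    "tau":   [30, 50, 100, 150],          # neurons per dense layer
    "sigma": ["selu", "elu", "relu"],     # activation function
    "lam":   ["sgd", "adam", "adagrad"],  # optimizer
}

def train_and_score(cfg):
    # Placeholder: train MIGAN with this configuration and return its
    # MAP@10 on the validation split.
    return 0.0

best_score, best_cfg = float("-inf"), None
for values in itertools.product(*grid.values()):
    cfg = dict(zip(grid.keys(), values))
    score = train_and_score(cfg)
    if score > best_score:
        best_score, best_cfg = score, cfg

print(best_cfg)
```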

4.2. Performance Comparison with the Baselines

In this subsection, MIGAN's final benchmark results are compared to those of several baseline recommender systems. We run the baselines in the same evaluation environment as MIGAN to guarantee fair comparisons. Furthermore, we deploy the NeuRec library, released on GitHub as open-source software under an MIT license, which implements 33 neural recommender systems [26].
We compare the proposed recommender framework with the following baselines:
  • The stacked content-based filtering recommender [16]: a content-based recommender system built on stacking ensemble learning.
  • Neural collaborative filtering (NCF) [27]: a recommender framework that uses a multi-layer perceptron to exploit the user-item interaction.
  • Variational Autoencoders for Collaborative Filtering (MultiVAE) [15]: this approach investigates the collaborative information in a multinomial distribution to recommend items on the long tail.
  • Node2Vec embedding: a variant of the MIGAN architecture that deploys the Node2Vec embedding representation instead of BGNN.
Table 4 reports the performance of the approaches under investigation on the MovieLens dataset. According to mean average precision, MIGAN outperforms the baselines; indeed, it models higher-order feature interactions. Figure 4 shows the MAP@k evaluation versus the k top items. As the MAP measure indicates the fraction of relevant items in the top-k suggestions averaged over all users, MIGAN creates a tailored recommendation. Put differently, MIGAN outperforms the other models in recalling relevant items for the user: it retains the user-item interaction and generates a user-specific recommendation. Furthermore, both the BGNN-based and the Node2Vec-based recommenders are quite competitive. For example, MIGAN and the Node2Vec variant achieve MAP@10 = 0.85 and MAP@10 = 0.84, respectively, outperforming the other baselines. The training loss equals 0.23 with dimension d = 70 for BGNN and 0.32 with d = 50 for Node2Vec. Node2Vec is a random-walk-based node embedding method that incurs high memory consumption on large graphs. In contrast, the cascaded training used in BGNN does not involve loading the entire graph into memory, which reduces the memory cost and training time. The MultiVAE recommender scores poorly; indeed, it does not use a sufficiently rich representation of the data semantics. The neural collaborative filtering approach (Neural CF) shows a good score at k = 10 (MAP@10 = 0.74) due to its appropriate representation of the interaction between user and item.
Figure 5 shows that MIGAN exhibits a high NDCG score on MovieLens: its top-k recommendation list is quite similar to the ground-truth list. MIGAN boosts the modeling of the user-item interaction; indeed, the more interested users are in an item, the more likely it is to be recommended to users with similar preferences. Note that Node2Vec also achieves good NDCG scores: the graph recommender effectively captures the user's overall interest built from the neighborhood representation.

5. Conclusions

We propose and investigate a graph recommender where each user's recommended content is accurate and personalized. MIGAN is a collaborative filtering (CF) system in which a neural graph represents the items and users. It computes node representations by combining their neighborhoods' vectors according to their mutual influence, utilizing a co-attention mechanism with trainable attention weights. The attention weights are adjustable parameters computed by aggregating neighbor vectors. This neural graph architecture predicts the ratings assigned by the users to the items.
We perform a comparative evaluation of several configurations of the RS using the well-known MovieLens dataset. We use two metrics to quantify its accuracy: the mean average precision (MAP), which measures the accuracy of the recommendation, and the normalized discounted cumulative gain (NDCG), which measures the ranking of the recommended items. Comparing MIGAN's performance with several baselines shows that it outperforms all its alternatives in MAP and NDCG scores. However, it would be very interesting to focus on a specific domain for recommendation tasks, such as using a knowledge graph to distill attribute-based collaborative signals and comparing MIGAN's performance with knowledge graph attention network models. Thus, future work will investigate this framework for collaborative knowledge graphs involving contextual and semantic data, as well as for other recommendation tasks involving contextual data.

Author Contributions

A.D. carried out the study, designed the methodology, conceived the framework, and drafted and finalized the manuscript. H.C. supervised the study, participated in the analysis of the results, and reviewed and edited the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data are publicly available at https://dl.acm.org/doi/10.1145/2827872 (accessed on 1 March 2022).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
RS: Recommender Systems
MIGAN: Mutual-Interaction Graph Attention Network
BGNN: Bipartite Graph Neural Networks
IDMP: Inter-Domain Message Passing
IDA: Intra-Domain Alignment
LSTM: Long Short-Term Memory
GRU: Gated Recurrent Unit
MAP: Mean Average Precision
NDCG: Normalized Discounted Cumulative Gain
Neural-CF: Neural Collaborative Filtering
GloVe-CBF: GloVe content-based filtering recommender
MultiVAE: Variational Autoencoder recommender

References

  1. Kulkarni, S.; Rodd, S.F. Context Aware Recommendation Systems: A review of the state of the art techniques. Comput. Sci. Rev. 2020, 37, 100255.
  2. Fayyaz, Z.; Ebrahimian, M.; Nawara, D.; Ibrahim, A.; Kashef, R. Recommendation Systems: Algorithms, Challenges, Metrics, and Business Opportunities. Appl. Sci. 2020, 10, 7748.
  3. Chen, R.; Hua, Q.; Chang, Y.S.; Wang, B.; Zhang, L.; Kong, X. A Survey of Collaborative Filtering-Based Recommender Systems: From Traditional Methods to Hybrid Methods Based on Social Networks. IEEE Access 2018, 6, 64301–64320.
  4. Berkani, L.; Belkacem, S.; Ouafi, M.; Guessoum, A. Recommendation of users in social networks: A semantic and social based classification approach. Expert Syst. 2021, 38, e12634.
  5. Drif, A.; Zerrad, H.E.; Cherifi, H. Context-Awareness in Ensemble Recommender System Framework. In Proceedings of the 2021 International Conference on Electrical, Communication, and Computer Engineering (ICECCE), Kuala Lumpur, Malaysia, 12–13 June 2021; pp. 1–6.
  6. Koren, Y. Factorization meets the neighborhood: A multifaceted collaborative filtering model. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Las Vegas, NV, USA, 24–27 August 2008; pp. 426–434.
  7. Drif, A.; Guembour, S.; Cherifi, H. A Sentiment Enhanced Deep Collaborative Filtering Recommender System. In Proceedings of the International Conference on Complex Networks and Their Applications, Madrid, Spain, 1–3 December 2020; pp. 66–78.
  8. Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. arXiv 2016, arXiv:1609.02907.
  9. Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; Bengio, Y. Graph attention networks. arXiv 2017, arXiv:1710.10903.
  10. Wang, X.; He, X.; Cao, Y.; Liu, M.; Chua, T.S. KGAT: Knowledge graph attention network for recommendation. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, 4–8 August 2019; pp. 950–958.
  11. Song, J.; Chang, C.; Sun, F.; Song, X.; Jiang, P. NGAT4Rec: Neighbor-Aware Graph Attention Network For Recommendation. arXiv 2020, arXiv:2010.12256.
  12. Goyal, P.; Ferrara, E. Graph embedding techniques, applications, and performance: A survey. Knowl.-Based Syst. 2018, 151, 78–94.
  13. Cai, H.; Zheng, V.W.; Chang, K.C.C. A Comprehensive Survey of Graph Embedding: Problems, Techniques, and Applications. IEEE Trans. Knowl. Data Eng. 2018, 30, 1616–1637.
  14. Grover, A.; Leskovec, J. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 855–864.
  15. Liang, D.; Krishnan, R.G.; Hoffman, M.D.; Jebara, T. Variational autoencoders for collaborative filtering. In Proceedings of the 2018 World Wide Web Conference, Lyon, France, 23–27 April 2018; pp. 689–698.
  16. Drif, A.; Zerrad, H.E.; Cherifi, H. EnsVAE: Ensemble Variational Autoencoders for Recommendations. IEEE Access 2020, 8, 188335–188351.
  17. Lo, W.W.; Layeghy, S.; Sarhan, M.; Gallagher, M.; Portmann, M. E-GraphSAGE: A Graph Neural Network based Intrusion Detection System. arXiv 2021, arXiv:2103.16329.
  18. Wang, Z.; Wang, Z.; Li, X.; Yu, Z.; Guo, B.; Chen, L.; Zhou, X. Exploring Multi-dimension User-Item Interactions with Attentional Knowledge Graph Neural Networks for Recommendation. IEEE Trans. Big Data 2022, 155–170.
  19. Chen, L.; Cao, J.; Wang, Y.; Liang, W.; Zhu, G. Multi-view graph attention network for travel recommendation. Expert Syst. Appl. 2022, 191, 116234.
  20. Liu, Y.; Yang, S.; Xu, Y.; Miao, C.; Wu, M.; Zhang, J. Contextualized graph attention network for recommendation with item knowledge graph. IEEE Trans. Knowl. Data Eng. 2021, arXiv:2004.11529v1.
  21. He, C.; Xie, T.; Rong, Y.; Huang, W.; Huang, J.; Ren, X.; Shahabi, C. Cascade-BGNN: Toward Efficient Self-supervised Representation Learning on Large-scale Bipartite Graphs. arXiv 2019, arXiv:1906.11994.
  22. Agarap, A.F. Deep learning using rectified linear units (ReLU). arXiv 2018, arXiv:1803.08375.
  23. Xu, K.; Ba, J.; Kiros, R.; Cho, K.; Courville, A.; Salakhudinov, R.; Zemel, R.; Bengio, Y. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 2048–2057.
  24. Bahdanau, D.; Cho, K.; Bengio, Y. Neural machine translation by jointly learning to align and translate. arXiv 2014, arXiv:1409.0473.
  25. Harper, F.M.; Konstan, J.A. The MovieLens datasets: History and context. ACM Trans. Interact. Intell. Syst. (TiiS) 2015, 5, 1–19.
  26. Wu, B.; Sun, Z.; He, X.; Wang, X.; Staniforth, J. NeuRec: Next RecSys Library; National Natural Science Foundation: Beijing, China, 2019.
  27. He, X.; Liao, L.; Zhang, H.; Nie, L.; Hu, X.; Chua, T.S. Neural collaborative filtering. In Proceedings of the 26th International Conference on World Wide Web, Perth, Australia, 3–7 April 2017; pp. 173–182.
Figure 1. MIGAN Architecture.
Figure 2. The interactive attention network recommender. In this example, user 1 and user 2 rate three similar items, so they have strong interactive attention. User 2 and user 3 rate an item not seen by user 1; therefore, one can deduce that this new item can also attract user 1. This is a first-order interaction. Moreover, one can deduce a mutual influence based on entities' dependencies beyond the first-order interaction level. For example, user 1 influences user 4 (a user similar to user 3), generating a recommendation based on user 3's preferences.
Figure 3. Hyperparameter search for the MIGAN filtering recommendation system.
Figure 4. Performance results of the top-K recommended lists, according to MAP. The ranking position K ranges from 1 to 50.
Figure 5. Performance results of the top-K recommended lists, according to NDCG. The ranking position K ranges from 1 to 50.
Table 1. Notation and descriptions.
Symbol: Definition and Description
r_ui: user u's rating for item i
p_u: the embedding of user u
q_i: the embedding of item i
g: Long Short-Term Memory function
x_u: the user embedding layer followed by an LSTM layer
x_i: the item embedding layer followed by an LSTM layer
h: the multi-layer perceptron application
lstm_u: the user LSTM layer followed by an MLP
lstm_i: the item LSTM layer followed by an MLP
α_u*: attention network function for user u
α_i*: attention network function for item i
α_u: the final attention weights for user u
α_i: the final attention weights for item i
C_ui: user-item space
C_t: text space
⊕: the concatenation operator
r̂_ui: the expected value of user u's rating for item i
W, b: the weights and biases of the neural network
U, I: node sets of the bipartite graph
X_u, X_i: feature lists
B_u, B_i: incidence matrices
Table 2. MovieLens 1M description.
# Users: 6040
# Movies: 3883
# Ratings: 1,000,209
Sparsity: 95.5%
Item side information: genomic tags
User side information: demographics
Table 3. The best scoring MIGAN variant (Mean Average Precision and Normalized DCG).
Variant     MAP@10  MAP@30  MAP@50  NDCG@10  NDCG@30  NDCG@50
Variant 1   0.85    0.83    0.81    0.71     0.78     0.79
Variant 2   0.82    0.78    0.76    0.65     0.72     0.76
Variant 3   0.80    0.77    0.76    0.66     0.73     0.76
Variant 4   0.79    0.74    0.773   0.62     0.71     0.73
Variant 5   0.79    0.73    0.70    0.60     0.71     0.72
Variant 6   0.77    0.76    0.73    0.63     0.73     0.75
Table 4. Recommendation performance of the compared approaches on the MovieLens 1M dataset. We generate the top 10, 30, and 50 items for each user. The best MAP@k and NDCG@k scores are highlighted in bold.
Rec sys     MAP@10  MAP@30  MAP@50  NDCG@10  NDCG@30  NDCG@50
GloVe-CBF   0.82    0.78    0.77    0.67     0.74     0.77
Node2Vec    0.84    0.82    0.81    0.55     0.65     0.69
MultiVAE    0.62    0.58    0.54    0.57     0.62     0.65
Neural CF   0.74    0.68    0.65    0.68     0.73     0.76
MIGAN       0.85    0.83    0.81    0.71     0.78     0.79
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
