
Accurate bundle matching and generation via multitask learning with partially shared parameters

  • Hyunsik Jeon,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Resources, Software, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Seoul National University, Seoul, Republic of Korea

  • Jun-Gi Jang,

    Roles Formal analysis, Investigation, Validation, Writing – review & editing

    Affiliation Seoul National University, Seoul, Republic of Korea

  • Taehun Kim,

    Roles Investigation, Writing – review & editing

    Affiliation Seoul National University, Seoul, Republic of Korea

  • U. Kang

    Roles Formal analysis, Funding acquisition, Project administration, Resources, Supervision, Writing – review & editing

    ukang@snu.ac.kr

    Affiliation Seoul National University, Seoul, Republic of Korea

Abstract

How can we accurately recommend existing bundles to users? How can we generate new tailored bundles for them? Recommending a bundle, a group of various items, has attracted widespread attention in e-commerce since it increases the satisfaction of both users and providers. Bundle matching and bundle generation are two representative tasks in bundle recommendation. Bundle matching aims to correctly match existing bundles to users, while bundle generation aims to create new bundles that users would prefer. Although many recent works have developed bundle recommendation models, they fail to achieve high accuracy since they do not handle heterogeneous data effectively and do not learn a mechanism for customized bundle generation. In this paper, we propose BundleMage, an accurate approach for bundle matching and generation. BundleMage effectively mixes user preferences for items and bundles using an adaptive gate technique to achieve high accuracy in bundle matching. BundleMage also generates a personalized bundle by learning a generation module that exploits a user's preference and the characteristics of a given incomplete bundle to be completed. BundleMage further improves its performance using multi-task learning with partially shared parameters. Through extensive experiments, we show that BundleMage achieves up to 6.6% higher nDCG in bundle matching and 6.3× higher nDCG in bundle generation than the best competitors. We also provide qualitative analysis showing that BundleMage effectively generates bundles by considering both the tastes of users and the characteristics of target bundles.

Introduction

Given the item and bundle purchase histories of users, how can we match existing bundles to the users and generate new bundles for them? Recommending a bundle, a group of various items, instead of individual items has attracted widespread attention in e-commerce since 1) it recommends items that users would prefer all at once, and 2) it increases the chance of unpopular items being exposed to users. Bundle recommendation divides into two different but highly related tasks, bundle matching and bundle generation, both of which play important roles. Bundle matching, which accurately matches pre-constructed bundles to users, is crucial because it avoids the cost of manually constructing a bundle every time. Bundle generation, which automatically creates personalized bundles for users, is necessary because it enables us to construct new bundles that reflect user preferences better than the pre-constructed ones from a long-term perspective.

Bundle recommendation, however, is challenging for the following reasons. First, bundle matching requires careful handling of heterogeneous types of data (i.e., user-item interactions and user-bundle interactions) to extract meaningful user preferences. Previous works [1–5] fail to achieve high accuracy for bundle matching since they do not establish a relationship between the heterogeneous data. Second, bundle generation is a demanding task since the search space of possible bundles is enormous; enumerating all possible bundles requires computational cost exponential in the number of items. Existing methods [1, 5] do not learn any generation mechanism from the observable data. Instead, they heuristically generate new personalized bundles based on a learned bundle matching model, and as a result show poor performance on bundle generation. Third, achieving high accuracy in both bundle matching and generation requires careful architectural design since the two tasks are highly related but different. Previous works [1–5] have not studied architectures that perform both tasks concurrently since they focus only on the bundle matching model.

In this paper, we propose BundleMage (Accurate Bundle Matching and Generation via Multitask Learning with Partially Shared Parameters), an accurate method for bundle recommendation. To achieve high accuracy in bundle matching, BundleMage carefully aggregates information from user-bundle and user-item interactions by exploiting an adaptive gate technique that balances the contributions of the heterogeneous information. BundleMage also learns a generation mechanism to provide new tailored bundles for users. We train the generation module of BundleMage to reconstruct given incomplete bundles, exploiting the preferences of users who have interacted with them. BundleMage further improves its performance via multi-task learning with partially shared parameters, addressing the bundle matching and bundle generation problems simultaneously. With these ideas, BundleMage accurately recommends existing bundles to users and successfully generates new bundles that users would prefer.

Our contributions are summarized as follows:

  • Method. We propose BundleMage, an accurate method for personalized bundle matching and generation. BundleMage accurately matches users to bundles using their past item and bundle interactions. BundleMage also effectively generates personalized bundles using target users’ preferences.
  • Experiments. Extensive experiments on real-world datasets show that BundleMage provides the state-of-the-art performance with up to 6.6% higher nDCG in bundle matching, and up to 6.3× higher nDCG in bundle generation compared to the best competitors (see Tables 3 and 4).
  • Case studies. We show in case studies that BundleMage successfully generates personalized bundles even with unpopular items which would otherwise be rarely exposed (see Figs 1 and 7).
Fig 1. Top-1 recommendations of BundleMage for different target users in bundle generation.

BundleMage considers the characteristics of a given bundle and the preferences of target users for bundle generation. For instance, BundleMage recommends a shooting and RPG game (e.g., Deadly Sin) for a bundle of shooting games and a target user A who prefers RPG.

https://doi.org/10.1371/journal.pone.0280630.g001

Fig 2. The architecture of bundle matching module in BundleMage.

https://doi.org/10.1371/journal.pone.0280630.g002

Fig 3. The architecture of bundle generation module in BundleMage.

https://doi.org/10.1371/journal.pone.0280630.g003

Fig 4. Cosine similarities between item and bundle interactions of each user in two real-world datasets.

Many users have dissimilar preferences for items and bundles.

https://doi.org/10.1371/journal.pone.0280630.g004

Fig 5. The architecture of adaptive gated preference mixture (PreMix).

https://doi.org/10.1371/journal.pone.0280630.g005

Fig 6. An illustration of shared parameters of E(1) and E(2), which are item embedding vectors of bundle matching and generation modules, respectively.

https://doi.org/10.1371/journal.pone.0280630.g006

Fig 7. Top-1 recommendations of BundleMage and POP for a target user in bundle generation.

BundleMage successfully recommends an unpopular item “Toby: The Secret Mine” (ranked in top 38.2%) which is the ground-truth one, whereas POP recommends a popular item “Poker Night 2” (ranked in top 1.8%) which is unrelated to the given bundle and the target user.

https://doi.org/10.1371/journal.pone.0280630.g007

The code and datasets are available at https://github.com/snudatalab/BundleMage. Symbols used frequently in this paper are summarized in Table 1.

Problem definition

Bundle recommendation [6] aims to predict bundles, instead of items, that a user would prefer. For each user $u$, we observe an item interaction vector $\mathbf{v}_u \in \{0, 1\}^{N_i}$ and a bundle interaction vector $\mathbf{r}_u \in \{0, 1\}^{N_b}$, where $N_i$ and $N_b$ are the numbers of items and bundles, respectively. $\mathbf{v}_u$ and $\mathbf{r}_u$ are binary vectors, where each nonzero entry indicates an interaction with the corresponding item or bundle. We have a binary bundle-item affiliation matrix $\mathbf{X} \in \{0, 1\}^{N_i \times N_b}$ where each nonzero entry indicates the inclusion of an item in a bundle; $\mathbf{x}_b \in \{0, 1\}^{N_i}$, the $b$th column of $\mathbf{X}$, is the item affiliation vector of bundle $b$. We denote the sets of indices of observable entries in $\mathbf{v}_u$, $\mathbf{r}_u$, and $\mathbf{x}_b$ as $\Omega(\mathbf{v}_u)$, $\Omega(\mathbf{r}_u)$, and $\Omega(\mathbf{x}_b)$, respectively; $\mathcal{U}$, $\mathcal{I}$, and $\mathcal{B}$ are the sets of users, items, and bundles, respectively. We describe the formal definitions of bundle matching and bundle generation as follows.

Problem 1 (Bundle matching):

Given: a user $u$'s item interaction vector $\mathbf{v}_u$ and bundle interaction vector $\mathbf{r}_u$,

Predict: the user $u$'s next interacted bundle $b'$, where $b' \in \mathcal{B}$ and $b' \notin \Omega(\mathbf{r}_u)$.

Problem 2 (Bundle generation):

Given: a user $u$'s item interaction vector $\mathbf{v}_u$, bundle interaction vector $\mathbf{r}_u$, and an incomplete bundle $\tilde{\mathbf{x}}_b$ to be completed,

Construct: a personalized completion set $S_u$ of size $k$, a small set of items that completes the bundle for user $u$, so that $\Omega(\tilde{\mathbf{x}}_b) \cup S_u$ is recommended to user $u$ as the complete set.

Related works

In this section, we review works related to this paper.

Collaborative filtering

Collaborative filtering is the most extensively used recommendation approach due to its strong performance in real-world services. Collaborative filtering predicts items a user would prefer by capturing similar patterns across users and items. In early works, matrix factorization approaches [7–9] learn latent factors of users and items and predict interactions in a linear way. They still prevail in the recommender systems community because of their simplicity and effectiveness. Recent collaborative filtering approaches utilize deep neural networks to capture the non-linear properties of users’ interactions. NCF [10] learns a non-linear scoring function as well as latent factors using fully-connected neural networks. AutoRec [11] trains an autoencoder to learn latent representations of users’ interactions. CDAE [12] adopts a denoising autoencoder [13] to improve the robustness of top-N recommendation. VAE-CF [14] extends the variational autoencoder [15] to collaborative filtering to learn a meaningful manifold of user preferences. However, these item-level collaborative filtering methods are not directly suitable for bundle recommendation since they do not handle bundles, which are more challenging to deal with than individual items.

Bundle recommender systems

For the bundle matching task, early works adopted the BPR framework [9] to learn latent factors of users, items, and bundles by optimizing a pairwise ranking loss. BR [1] learns latent factors of users and items from user-item interactions using the BPR loss, and predicts users’ bundle interactions by aggregating the latent item factors. EFM [2] jointly factorizes the user-item and user-bundle interaction matrices using the BPR loss; it further incorporates item-item co-occurrence information to improve the performance. DAM [3] adopts an attention mechanism to represent latent bundle factors, extends the NCF structure [10] to a multi-task learning framework, and learns user-item and user-bundle interactions using the BPR loss. Recent works leverage Graph Convolutional Networks [16] to learn user-item-bundle relationships from a unified heterogeneous graph. BGCN [4, 17] constructs a heterogeneous graph consisting of user, item, and bundle nodes, and learns latent factors of the nodes while propagating the information of interactions and affiliations. GRAM-SMOT [5] adopts Graph Attention Networks [18] to reflect the relative influence of items in a bundle. However, these bundle matching methods do not consider that users may have different interaction patterns for items and bundles. For instance, a user may purchase an item as part of a bundle even if she would not have purchased it individually. Thus, we expect performance improvement when considering the heterogeneous preferences for items and bundles. For the bundle generation task, BR [1] and GRAM-SMOT [5] have tried to generate personalized bundles for users. However, they bypass the problem by selecting items greedily with trained bundle matching models instead of learning a generation mechanism from observable data. In addition, there has been no study of a unified architecture for bundle matching and generation; if the matching and generation tasks are trained together, performance improvement is expected for both since they are different but highly related tasks.

Proposed method

In this section, we propose BundleMage (Accurate Bundle Matching and Generation via Multitask Learning with Partially Shared Parameters), an accurate method for personalized bundle recommendation.

Overview

We address the following challenges to achieve high performance in bundle recommendation.

  1. C1. Handling heterogeneous interactions. Users have heterogeneous interactions with items and bundles, both of which are informative but dissimilar. How can we effectively extract user preferences from the heterogeneous interactions for accurate bundle matching?
  2. C2. Learning customized bundle generation. Bundle generation is a demanding task since the search space of possible bundles is prohibitively large. Moreover, personalized bundle generation is necessary since each user has a different taste for bundles. How can we generate bundles customized for a target user?
  3. C3. Handling two related but different tasks. Bundle matching and generation are related but separate tasks. How can we effectively train a model to improve the performance of the two tasks simultaneously?

The main ideas of BundleMage are summarized as follows:

  1. I1. (Bundle Matching) Adaptive gated preference mixture in a bundle matching module enables us to effectively represent user preferences from heterogeneous interactions with items and bundles.
  2. I2. (Bundle Generation) Learning the reconstruction of an incomplete bundle using user preference enables us to generate personalized bundles.
  3. I3. Multi-task learning with partially shared parameters enables us to learn the common and separate information of matching and generation tasks, and results in high performance on the two tasks simultaneously.

BundleMage consists of a bundle matching module and a bundle generation module. Figs 2 and 3 depict their architectures, respectively. As shown in Fig 2, the bundle matching module is trained to predict a user’s entire bundle interactions using the user’s entire item interactions and a part of the bundle interactions. In the module, an adaptive gated preference mixture (PreMix) adaptively integrates the user’s heterogeneous interactions with items and bundles, effectively exploiting user preferences for bundle matching. As shown in Fig 3, the bundle generation module is trained to complete a bundle’s affiliations from incomplete ones, using the latent factor of a user who has interacted with the bundle. The generation module learns a personalized bundle generation mechanism since it is trained to reconstruct bundles for a user from observed user-bundle interaction pairs. The bundle matching and generation modules are trained in a multi-task learning manner while sharing parts of the item embedding vectors, which improves the performance of matching and generation simultaneously.

Bundle matching

The objective of bundle matching is to predict bundles a user would prefer using her past item and bundle interactions. For bundle matching, it is important to effectively extract users’ preferences from the item and bundle interactions. However, users may interact differently with items and bundles since items and bundles are inherently different. Before describing our method for bundle matching, we investigate interaction patterns in real-world datasets to verify that users have dissimilar preferences for items and bundles. Specifically, we compute cosine similarities between users’ item and bundle interactions to measure how consistent users’ preferences are across items and bundles. Fig 4 shows the cosine similarities between item and bundle interactions of each user in two real-world datasets, Youshu and Netease (details in Table 2). We compute the cosine similarities by the following procedure. First, for each item $i$, we obtain a user interaction multi-hot vector $\mathbf{c}_i \in \{0, 1\}^{N_u}$ where $N_u$ is the number of users; each nonzero entry in $\mathbf{c}_i$ indicates an interaction with the corresponding user. Note that items with similar vectors are likely to be similar to each other since they share many interacted users. Second, for each user $u$, we compute an item preference vector as $\frac{1}{|\Omega(\mathbf{v}_u)|} \sum_{i \in \Omega(\mathbf{v}_u)} \mathbf{c}_i$, where $\mathbf{v}_u \in \{0, 1\}^{N_i}$ is user $u$'s item interaction vector, $N_i$ is the number of items, and $\Omega(\mathbf{v}_u)$ is the set of indices of nonzero entries in $\mathbf{v}_u$. Also, for each user $u$, we compute a bundle preference vector as $\frac{1}{|\Omega(\mathbf{r}_u)|} \sum_{b \in \Omega(\mathbf{r}_u)} \frac{1}{|\Omega(\mathbf{x}_b)|} \sum_{i \in \Omega(\mathbf{x}_b)} \mathbf{c}_i$, where $\mathbf{r}_u \in \{0, 1\}^{N_b}$ is user $u$'s bundle interaction vector, $N_b$ is the number of bundles, $\mathbf{x}_b \in \{0, 1\}^{N_i}$ is bundle $b$'s item affiliation vector, and $\Omega(\mathbf{r}_u)$ and $\Omega(\mathbf{x}_b)$ are the sets of indices of nonzero entries in $\mathbf{r}_u$ and $\mathbf{x}_b$, respectively. Note that each user’s preference is computed as the averaged vector of the items or bundles that the user interacted with. Last, we compute the cosine similarity between the item and bundle preference vectors of each user, and sort the similarities in descending order. As shown in Fig 4, plenty of users have dissimilar interaction patterns for items and bundles. To accurately match bundles to users, we therefore design a matching module that accounts for users having different preferences for items and bundles.
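As a concrete illustration, the analysis above can be reproduced with the following NumPy sketch; the stacked matrix names V, R, and X are our own, and this is an illustration rather than the authors' released analysis code.

```python
import numpy as np

def preference_similarities(V, R, X):
    """V: (Nu, Ni) user-item interactions, R: (Nu, Nb) user-bundle
    interactions, X: (Ni, Nb) bundle-item affiliations; all binary arrays."""
    C = V.T                                              # row i is c_i, item i's user-interaction vector
    # Item preference vector per user: mean of c_i over interacted items.
    item_pref = (V @ C) / np.maximum(V.sum(axis=1, keepdims=True), 1)
    # Each bundle's vector: mean of c_i over its constituent items.
    bundle_vec = (X / np.maximum(X.sum(axis=0, keepdims=True), 1)).T @ C
    # Bundle preference vector per user: mean of bundle vectors over interacted bundles.
    bundle_pref = (R @ bundle_vec) / np.maximum(R.sum(axis=1, keepdims=True), 1)
    num = (item_pref * bundle_pref).sum(axis=1)
    den = np.linalg.norm(item_pref, axis=1) * np.linalg.norm(bundle_pref, axis=1)
    return np.sort(num / np.maximum(den, 1e-12))[::-1]   # descending, as plotted in Fig 4
```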

The main challenge of bundle matching is to extract meaningful user preferences from the heterogeneous interactions with items and bundles, which entail dissimilar patterns. Meanwhile, both types of interactions are crucial for predicting bundles that a user would prefer, because both represent the user’s preference. Thus, the main technical difficulty in bundle matching is integrating the heterogeneous interactions to represent user preferences and accurately matching bundles to them. Our main idea is to adaptively balance the information from the two types of interactions. Fig 2 depicts the structure of the bundle matching module. The matching module 1) represents a user’s item and bundle interactions as low-dimensional latent factors, 2) integrates the latent factors using an adaptive gated preference mixture (PreMix), and 3) estimates matching probabilities over bundles.

Representations of interactions.

For each user $u$, we have an item interaction vector $\mathbf{v}_u \in \{0, 1\}^{N_i}$ and a bundle interaction vector $\mathbf{r}_u \in \{0, 1\}^{N_b}$, where $N_i$ and $N_b$ are the numbers of items and bundles, respectively. Note that $\mathbf{v}_u$ and $\mathbf{r}_u$ are multi-hot binary vectors, where each nonzero entry indicates an interaction with the corresponding item or bundle. We obtain the representation vector $\mathbf{p}_u$ of user $u$'s item interactions as the average of the embeddings of items that $u$ has interacted with:

$$\mathbf{p}_u = \frac{1}{|\Omega(\mathbf{v}_u)|} \mathbf{E}^{(1)} \mathbf{v}_u \tag{1}$$

where $\mathbf{E}^{(1)} \in \mathbb{R}^{d \times N_i}$ is a trainable item embedding matrix for bundle matching; each column in $\mathbf{E}^{(1)}$ is the embedding vector of the corresponding item. We also obtain the representation vector $\mathbf{q}_u$ of user $u$'s bundle interactions as the average of the embedding vectors of bundles that user $u$ has interacted with. To obtain it, we use a partially masked vector $\tilde{\mathbf{r}}_u$, which we obtain from $\mathbf{r}_u$ by masking a ratio $0 \le \rho \le 1$ of nonzero entries to zeros. This enables the matching module to learn to predict the masked nonzero entries as well as the unobserved nonzero entries, which is advantageous for accurately recommending new bundles to users at test time. The representation vector is obtained as follows:

$$\mathbf{q}_u = \frac{1}{|\Omega(\tilde{\mathbf{r}}_u)|} \mathbf{E}^{(1)} \mathbf{X} \mathbf{D}^{-1} \tilde{\mathbf{r}}_u \tag{2}$$

where $\mathbf{X} \in \{0, 1\}^{N_i \times N_b}$ is the bundle-item affiliation matrix and $\mathbf{D} \in \mathbb{R}^{N_b \times N_b}$ is a diagonal matrix whose $i$th diagonal element $D_{ii}$ is equal to $\sum_j X_{ji}$. $\mathbf{E}^{(1)} \mathbf{X} \mathbf{D}^{-1}$ is the embedding matrix of bundles where each column represents the embedding vector of the corresponding bundle, since each bundle embedding is computed as the average of the embeddings of its constituent items. As a result, Eq (2) is the average of the embeddings of the observed bundles in $\tilde{\mathbf{r}}_u$.
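The two representations follow directly from Eqs (1) and (2); below is a minimal PyTorch sketch under our own naming (E1 for $\mathbf{E}^{(1)}$). Note that masking a random $\rho$ fraction of all entries leaves zeros unchanged, so roughly a $\rho$ ratio of the nonzero interactions is masked in expectation.

```python
import torch

def interaction_representations(E1, v_u, r_u, X, rho=0.5):
    """E1: (d, Ni) item embeddings; v_u: (Ni,) item interactions;
    r_u: (Nb,) bundle interactions; X: (Ni, Nb) affiliations; float tensors."""
    p_u = E1 @ v_u / v_u.sum().clamp(min=1)                  # Eq (1)
    r_tilde = r_u * (torch.rand_like(r_u) >= rho).float()    # partially masked bundle vector
    D_inv = 1.0 / X.sum(dim=0).clamp(min=1)                  # D^{-1}: inverse bundle sizes
    bundle_emb = (E1 @ X) * D_inv                            # E^(1) X D^{-1}: (d, Nb)
    q_u = bundle_emb @ r_tilde / r_tilde.sum().clamp(min=1)  # Eq (2)
    return p_u, q_u, r_tilde
```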

Adaptive gated preference mixture.

The main challenge is to mix the two representation vectors $\mathbf{p}_u$ and $\mathbf{q}_u$ well, to accurately predict the next bundle interactions of a user $u$. To address this challenge, we propose an adaptive gated preference mixture (PreMix) which adaptively balances the two vectors. As illustrated in Fig 5, PreMix integrates the two representation vectors while adaptively balancing their contributions, and obtains a latent user vector $\mathbf{z}_u$ as follows:

$$\mathbf{g}_u = \sigma\left(\mathbf{W}^{(1)} [\mathbf{p}_u; \mathbf{q}_u] + \mathbf{b}^{(1)}\right), \qquad \mathbf{z}_u = \mathrm{FNN}^{(1)}\left(\mathbf{g}_u \odot \mathbf{p}_u + (1 - \mathbf{g}_u) \odot \mathbf{q}_u\right) \tag{3}$$

where $\mathbf{g}_u \in \mathbb{R}^d$ is a gate vector, $\sigma(\cdot)$ is the sigmoid function, $\mathbf{W}^{(1)} \in \mathbb{R}^{d \times 2d}$ and $\mathbf{b}^{(1)} \in \mathbb{R}^d$ are a trainable weight matrix and a bias vector, respectively, and the square bracket $[\cdot;\cdot]$ denotes concatenation. $\mathrm{FNN}^{(1)}(\cdot)$ is a 2-layered feed-forward neural network containing an activation function, and $\odot$ indicates the element-wise product. We employ the neural network to extract more complicated non-linear features from the gated preference mixture. Furthermore, we constrain the hidden dimension of the neural network to be smaller than the input and output dimensions to effectively extract meaningful information from the input. A high value of $\mathbf{g}_u$ indicates that the information of item interactions has a great influence on matching the next bundle to user $u$.
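A minimal PyTorch sketch of PreMix follows. The exact hidden width and activation of $\mathrm{FNN}^{(1)}$ are not specified beyond the constraint that the hidden dimension be smaller than $d$, so the d → d/2 → d structure with tanh below is our assumption.

```python
import torch
import torch.nn as nn

class PreMix(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.gate = nn.Linear(2 * d, d)                  # W^(1), b^(1)
        self.fnn = nn.Sequential(                        # FNN^(1): d -> d/2 -> d (assumed)
            nn.Linear(d, d // 2), nn.Tanh(), nn.Linear(d // 2, d))

    def forward(self, p_u, q_u):
        g_u = torch.sigmoid(self.gate(torch.cat([p_u, q_u], dim=-1)))  # gate vector
        return self.fnn(g_u * p_u + (1.0 - g_u) * q_u)   # latent user vector z_u, Eq (3)
```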

Matching probability estimation.

For evaluation, we need to obtain the predicted matching probabilities for a user $u$. We first compute matching scores for every bundle using the latent user vector $\mathbf{z}_u$ and the bundle embedding matrix $\mathbf{E}^{(1)} \mathbf{X} \mathbf{D}^{-1}$. Then, the predicted matching probabilities $\hat{\mathbf{r}}_u$ of a user $u$ are obtained by normalizing the scores with the softmax function:

$$\hat{\mathbf{r}}_u = \mathrm{softmax}\left(\left(\mathbf{E}^{(1)} \mathbf{X} \mathbf{D}^{-1}\right)^\top \mathbf{z}_u\right) \tag{4}$$

where $\mathrm{softmax}(\cdot)$ is the softmax function and $\mathbf{E}^{(1)} \mathbf{X} \mathbf{D}^{-1}$ is the embedding matrix of bundles.
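Continuing the sketch above, the matching head of Eq (4) reduces to a softmax over inner products between $\mathbf{z}_u$ and the bundle embeddings:

```python
import torch

def matching_probabilities(z_u, E1, X):
    """Returns the predicted matching probabilities over all bundles."""
    D_inv = 1.0 / X.sum(dim=0).clamp(min=1)
    bundle_emb = (E1 @ X) * D_inv                        # bundle embedding matrix E^(1) X D^{-1}
    return torch.softmax(bundle_emb.T @ z_u, dim=-1)     # Eq (4)
```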

Bundle generation

The objective of bundle generation is to construct a personalized bundle for a target user. Bundle generation is a demanding task since the search space of possible bundles is prohibitively large. Previous works for bundle generation [1, 5] sidestep the problem by utilizing pre-trained bundle matching models in a greedy manner without training any bundle generation mechanism. However, they are limited by the need for a heuristic criterion for whether to add new items to or remove existing items from a bundle being generated. To address this limitation, it is necessary to train a personalized bundle generation model from the observable user-bundle interactions. Our main idea is to train a model to reconstruct a bundle from an incomplete one, using a target user’s preference. Our intuition is that a bundle’s construction is determined by the characteristics of the bundle and the preference of a target user; the characteristics of bundles can vary by domain, such as genre or provider. For instance, assume we want to generate a bundle of PlayStation games, and the target user prefers RPGs (Role-Playing Games). We then need to construct a bundle of PlayStation RPG games for the target user. Fig 3 depicts the structure of the bundle generation module. Given a pair of a user and a bundle she has interacted with, the generation module 1) represents the bundle’s incomplete affiliations as low-dimensional latent factors, 2) obtains a hidden representation of the pair, and 3) estimates the incomplete bundle’s generation probabilities over items for the target user.

Representation of bundle.

Given a user $u$ and her interacted bundles $b \in \Omega(\mathbf{r}_u)$, our idea for bundle generation is to train a model to reconstruct bundle $b$'s original affiliations from incomplete ones using $u$'s preference. For each bundle $b$, we have an item affiliation vector $\mathbf{x}_b \in \{0, 1\}^{N_i}$, where $N_i$ is the number of items; $\mathbf{x}_b$ is the $b$th column of the bundle-item affiliation matrix $\mathbf{X}$. To represent bundle $b$'s incomplete affiliations, we define $\tilde{\mathbf{x}}_b$ by masking a ratio $0 \le \psi \le 1$ of nonzero entries in $\mathbf{x}_b$ to zeros. We start by obtaining a low-dimensional representation vector $\mathbf{z}_b$ of the incomplete affiliation vector as the average of item embeddings:

$$\mathbf{z}_b = \frac{1}{|\Omega(\tilde{\mathbf{x}}_b)|} \mathbf{E}^{(2)} \tilde{\mathbf{x}}_b \tag{5}$$

where $\mathbf{E}^{(2)} \in \mathbb{R}^{d \times N_i}$ is a trainable item embedding matrix for bundle generation; analogous to $\mathbf{E}^{(1)}$, each column in $\mathbf{E}^{(2)}$ is the embedding vector of the corresponding item. The masking strategy enables the generation module to learn to predict the masked nonzero entries as well as the unobserved nonzero entries, which is advantageous for accurately generating new items at test time.
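The bundle-side computation mirrors the user side; a sketch under the same assumptions as before:

```python
import torch

def bundle_representation(E2, x_b, psi=0.5):
    """E2: (d, Ni) generation item embeddings; x_b: (Ni,) affiliations (float)."""
    x_tilde = x_b * (torch.rand_like(x_b) >= psi).float()   # incomplete affiliations
    z_b = E2 @ x_tilde / x_tilde.sum().clamp(min=1)         # Eq (5): mean item embedding
    return z_b, x_tilde
```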

Representation of pair.

To predict items that are appropriate for the given bundle $b$ and user $u$, we need to represent the pair as a vector by exploiting their information. Given bundle $b$'s representation vector $\mathbf{z}_b$ and user $u$'s representation vector $\mathbf{z}_u$, we obtain the representation vector $\mathbf{h}_{ub}$ of the pair by performing linear transformations independently and integrating the results with a feed-forward neural network. Note that we use the latent user vector $\mathbf{z}_u$ extracted from Eq (3) since it represents user $u$'s preference. The details are as follows:

$$\mathbf{h}_{ub} = \mathrm{FNN}^{(2)}\left(\left[\mathbf{W}^{(2)} \mathbf{z}_u + \mathbf{b}^{(2)}; \mathbf{W}^{(3)} \mathbf{z}_b + \mathbf{b}^{(3)}\right]\right) \tag{6}$$

where $\mathbf{W}^{(2)} \mathbf{z}_u + \mathbf{b}^{(2)}$ and $\mathbf{W}^{(3)} \mathbf{z}_b + \mathbf{b}^{(3)}$ are the linearly transformed vectors of $\mathbf{z}_u$ and $\mathbf{z}_b$, respectively, $\mathbf{W}^{(2)}, \mathbf{W}^{(3)} \in \mathbb{R}^{d \times d}$ are trainable weight matrices, $\mathbf{b}^{(2)}, \mathbf{b}^{(3)} \in \mathbb{R}^d$ are trainable bias vectors, and $\mathrm{FNN}^{(2)}(\cdot)$ is a 2-layered feed-forward neural network.

Generation probability estimation.

We estimate the generation probability distribution over items for the pair of user $u$ and bundle $b$ as follows:

$$\hat{\mathbf{x}}_{ub} = \mathrm{softmax}\left(\mathbf{E}^{(2)\top} \mathbf{h}_{ub}\right) \tag{7}$$

where $\hat{\mathbf{x}}_{ub}$ is the predicted bundle generation probability over items for user $u$ given the incomplete bundle $b$. $\mathbf{E}^{(2)}$ is the item embedding matrix which is also used in representing $\mathbf{z}_b$.
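Eqs (6) and (7) can be sketched together as a small module; as with $\mathrm{FNN}^{(1)}$, the hidden structure of $\mathrm{FNN}^{(2)}$ (here 2d → d → d with tanh) is our assumption.

```python
import torch
import torch.nn as nn

class PairGenerator(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.proj_u = nn.Linear(d, d)                    # W^(2), b^(2)
        self.proj_b = nn.Linear(d, d)                    # W^(3), b^(3)
        self.fnn = nn.Sequential(                        # FNN^(2): 2d -> d -> d (assumed)
            nn.Linear(2 * d, d), nn.Tanh(), nn.Linear(d, d))

    def forward(self, z_u, z_b, E2):
        h_ub = self.fnn(torch.cat([self.proj_u(z_u), self.proj_b(z_b)], dim=-1))  # Eq (6)
        return torch.softmax(E2.T @ h_ub, dim=-1)        # Eq (7): probabilities over items
```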

Multi-task learning with partially shared parameters

Our goal is to maximize the performance of the two tasks, bundle matching and generation. The two tasks are different but highly related, and thus inevitably entail common as well as separate information. The main technical difficulty is therefore to effectively learn the shared and separate information for the two tasks. Our main idea is to train the bundle matching and generation modules in a multi-task learning manner while sharing parts of the parameters.

Partially shared parameters.

Parameter sharing is broadly studied and encouraged in multi-task learning since it often yields impressive performance [19, 20]. However, imprudent sharing can instead decrease the performance of two different tasks [21, 22]. As shown in Fig 6, we thus propose to share parts of the item embedding vectors to achieve high performance on bundle matching and generation simultaneously. We denote the $i$th columns of $\mathbf{E}^{(1)}$ and $\mathbf{E}^{(2)}$ as $\mathbf{e}_i^{(1)}$ and $\mathbf{e}_i^{(2)}$, respectively; they represent item $i$'s embedding vectors for bundle matching and generation, respectively. We share half of $\mathbf{e}_i^{(1)}$ and $\mathbf{e}_i^{(2)}$ with the same parameters while letting the other halves be trained separately. This enables BundleMage to learn the common and separate information for the two tasks, improving the performance of both simultaneously. We conduct thorough experiments on parameter sharing to show that our method is effective in improving the performance of bundle matching and generation, as described in the following section.
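One way to realize the partial sharing, assuming an even split of each $d$-dimensional embedding into a shared half and a task-specific half (our reading of Fig 6, not the authors' code):

```python
import torch
import torch.nn as nn

class SharedItemEmbeddings(nn.Module):
    """First d/2 dimensions are shared between E^(1) and E^(2); the rest are task-specific."""
    def __init__(self, d, num_items):
        super().__init__()
        self.shared = nn.Parameter(0.01 * torch.randn(d // 2, num_items))
        self.match_only = nn.Parameter(0.01 * torch.randn(d - d // 2, num_items))
        self.gen_only = nn.Parameter(0.01 * torch.randn(d - d // 2, num_items))

    @property
    def E1(self):   # item embeddings for the matching module
        return torch.cat([self.shared, self.match_only], dim=0)

    @property
    def E2(self):   # item embeddings for the generation module
        return torch.cat([self.shared, self.gen_only], dim=0)
```

Under this design, gradients from both losses update the shared block, while each task keeps its own private block.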

Objective function for multi-task learning.

Our goal is to obtain optimal parameters $\mathbf{E}^{(1)}$, $\mathbf{E}^{(2)}$, $\mathbf{W}^{(1)}$, $\mathbf{W}^{(2)}$, $\mathbf{W}^{(3)}$, $\mathbf{b}^{(1)}$, $\mathbf{b}^{(2)}$, $\mathbf{b}^{(3)}$, $\mathrm{FNN}^{(1)}$, and $\mathrm{FNN}^{(2)}$ to accurately estimate the matching and generation probabilities. Thus, we optimize the parameters to minimize the distance between the predicted and ground-truth probabilities. For the bundle matching and generation tasks, we utilize multinomial likelihoods for the distributions $\mathbf{r}_u$ and $\mathbf{x}_b$ as in previous works [14, 23–28], since the multinomial likelihood has shown more impressive results than other likelihoods such as the Gaussian and logistic likelihoods in top-k recommendation [14]. The losses are thus measured by the KL-divergence between the observed and predicted probabilities. Specifically, the loss to be minimized for bundle matching is defined as follows:

$$\mathcal{L}_{match} = -\sum_{u \in \mathcal{U}} \sum_{b \in \Omega(\mathbf{r}_u)} r_{ub} \log \hat{r}_{ub} \tag{8}$$

where $\mathcal{L}_{match}$ is the bundle matching loss, and $r_{ub}$ and $\hat{r}_{ub}$ are the $b$th elements of $\mathbf{r}_u$ and $\hat{\mathbf{r}}_u$, respectively. Analogously, the loss to be minimized for bundle generation is defined as follows:

$$\mathcal{L}_{gen} = -\sum_{u \in \mathcal{U}} \sum_{b \in \Omega(\mathbf{r}_u)} \sum_{i \in \Omega(\mathbf{x}_b)} x_{bi} \log \hat{x}_{ubi} \tag{9}$$

where $\mathcal{L}_{gen}$ is the bundle generation loss, and $x_{bi}$ and $\hat{x}_{ubi}$ are the $i$th elements of $\mathbf{x}_b$ and $\hat{\mathbf{x}}_{ub}$, respectively. To minimize the bundle matching loss and the bundle generation loss simultaneously, we define the objective function to be minimized as follows:

$$\mathcal{L} = \mathcal{L}_{match} + \mathcal{L}_{gen} \tag{10}$$

where $\mathcal{L}$ is the objective function. Note that the matching and generation modules are trained to reconstruct all nonzero entries in $\mathbf{r}_u$ and $\mathbf{x}_b$, respectively, although they use the masked vectors $\tilde{\mathbf{r}}_u$ and $\tilde{\mathbf{x}}_b$ as inputs. This trains the modules to accurately predict the unobserved interactions and affiliations. In practice, we alternately minimize the bundle matching loss and the bundle generation loss in every epoch.
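In code, the two multinomial losses are cross-entropies between the full (unmasked) targets and the predicted probability vectors; a per-pair sketch, with batching over users and bundles omitted for brevity:

```python
import torch

def matching_loss(r_u, r_hat, eps=1e-12):
    return -(r_u * torch.log(r_hat + eps)).sum()         # Eq (8), one user

def generation_loss(x_b, x_hat, eps=1e-12):
    return -(x_b * torch.log(x_hat + eps)).sum()         # Eq (9), one user-bundle pair

def total_loss(r_u, r_hat, x_b, x_hat):
    return matching_loss(r_u, r_hat) + generation_loss(x_b, x_hat)   # Eq (10)
```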

Experiments

In this section, we perform experiments to answer the following questions.

  1. Q1. Bundle matching. Does BundleMage show higher accuracy in bundle matching than those of baselines?
  2. Q2. Bundle generation. Does BundleMage generate a personalized bundle for a target user well?
  3. Q3. Ablation study. How do the modules in BundleMage help improve its performance?
  4. Q4. Case study. What bundles does BundleMage generate for users?

Experimental setup

We introduce our experimental setup including datasets, baseline approaches, evaluation metrics, the training process, and hyperparameters.

Datasets.

We use three real-world bundle recommendation datasets, summarized in Table 2. Youshu [3] contains bundles (sets of books) from a book review site. Netease [2] contains bundles (sets of music tracks) from a cloud music service. Steam [1] contains bundles (sets of video games) from a video game distribution platform.

Baselines.

We compare BundleMage with existing methods for the two tasks: bundle matching and bundle generation. There are nine existing methods for bundle matching as follows.

  • POP recommends the top-k popular bundles to users.
  • BPR [9] is a matrix factorization method under a Bayesian Personalized Ranking learning framework.
  • NCF [10] is a neural network-based model which combines a generalized matrix factorization and neural networks to capture the high-order interactions between users and bundles.
  • VAE-CF [14] extends Variational Autoencoder [15] to collaborative filtering and maximizes the multinomial likelihood of user interactions.
  • BR [1] learns the latent vectors of users and items under Bayesian Personalized Ranking and learns the latent vectors of bundles by aggregating the learned latent item vectors in a linear way.
  • EFM [2] jointly factorizes the user-item and user-bundle interaction matrices together with item-item co-occurrence information.
  • DAM [3] uses the attention mechanism and multi-task learning framework to learn users’, items’, and bundles’ latent vectors.
  • BGCN [4] unifies user-item interactions, user-bundle interactions, and bundle-item affiliations into a heterogeneous graph and trains a Graph Convolutional Network [16] on it to predict affinities between users and bundles.
  • GRAM-SMOT [5] also constructs a heterogeneous graph and trains a Graph Attention Network [29] by a metric learning approach [30].

We also compare BundleMage with the following four existing methods for the bundle generation task.

  • Random randomly chooses k items.
  • POP chooses the top-k popular items.
  • BR [1] repeatedly adds the best item to an incomplete bundle by computing the user-bundle score with a trained bundle matching model.
  • GRAM-SMOT [5] picks items close to a target user greedily; closeness is measured by latent vectors of the target user and items.

Note that BR and GRAM-SMOT generate bundles based on their learned bundle matching modules. We use only the user-bundle interaction matrix for BPR, NCF, and VAE-CF due to their modeling capabilities, while we use all given matrices for BR, EFM, DAM, BGCN, and GRAM-SMOT.

Evaluation metrics.

We evaluate the performance of bundle matching and bundle generation with two metrics, Recall@k and normalized discounted cumulative gain (nDCG@k), which are the most widely used accuracy metrics in previous works [4, 31]. For each user, both metrics compare the predicted rank of the held-out items with the ground truth. While Recall@k treats all items ranked within the first k as equally important, nDCG@k weights higher ranks more heavily through a rank-based discount factor. We vary k in {5, 10, 20} for all datasets.
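For reference, a standard implementation of the two metrics with binary relevance, as they are conventionally defined:

```python
import numpy as np

def recall_at_k(ranked_items, held_out, k):
    """ranked_items: list sorted by predicted score; held_out: set of true items."""
    return len(set(ranked_items[:k]) & held_out) / min(k, len(held_out))

def ndcg_at_k(ranked_items, held_out, k):
    dcg = sum(1.0 / np.log2(rank + 2)                    # ranks are 0-indexed
              for rank, item in enumerate(ranked_items[:k]) if item in held_out)
    idcg = sum(1.0 / np.log2(rank + 2) for rank in range(min(k, len(held_out))))
    return dcg / idcg
```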

Experimental process.

To evaluate generation performance on unseen bundles, we randomly select 10% of bundles and use their user-bundle interactions as the held-out test set for bundle generation; this evaluates generation on bundles that have not been observed in training. For the remaining user-bundle interactions, we employ the leave-one-out protocol [3, 5, 10, 32–35] to split them into training, matching validation, and matching test sets. Specifically, we randomly select two bundles for each user; one is used as the matching validation held-out and the other as the matching test held-out. For the bundle matching task, we randomly select 99 bundles that each user has not interacted with as negative samples to compare with the validation and test bundles, following previous works [3, 5, 10]. For the bundle generation task, we randomly select n items from each bundle as positive samples in the generation test held-out. We also randomly select m items not contained in each bundle as negative samples to compare with the positive samples. We set (n, m) to (1, 99), (5, 495), and (10, 990) for the Steam, Youshu, and Netease datasets, respectively. We report results on the matching and generation test held-outs at the epoch where a model shows the best nDCG@5 on the matching validation set within 200 epochs. We run each experiment at least three times and report the average.
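The sampled evaluation for bundle matching can be sketched as follows; score_fn is a placeholder for any trained model's user-bundle scoring function, not a function from the released code.

```python
import numpy as np

def sampled_ranking(score_fn, user, test_bundle, all_bundles, interacted,
                    n_neg=99, seed=0):
    """Ranks the held-out bundle against n_neg sampled negative bundles."""
    rng = np.random.default_rng(seed)
    pool = [b for b in all_bundles if b not in interacted and b != test_bundle]
    candidates = [test_bundle] + list(rng.choice(pool, size=n_neg, replace=False))
    return sorted(candidates, key=lambda b: score_fn(user, b), reverse=True)

# The resulting list feeds recall_at_k / ndcg_at_k with held_out = {test_bundle}.
```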

Hyperparameters.

We set the masking ratios ρ and ψ to 0.5. We set the learning rate to 0.001 among {0.01, 0.001, 0.0001, 0.00001}, the weight decay to 0.00001 among {0.001, 0.0001, 0.00001, 0.000001}, and the dropout rate [36] to 0.3 among {0.1, 0.3, 0.5, 0.7, 0.9}; each hyperparameter is set to the best-performing value among its candidates. We set the embedding dimensionality d of all methods to 200 for a fair comparison. We use the Adam optimizer [37] for training.
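The selected configuration, as it might be wired up with PyTorch's Adam optimizer; the Linear layer below is only a stand-in for the BundleMage modules sketched earlier.

```python
import torch
import torch.nn as nn

config = dict(d=200, lr=1e-3, weight_decay=1e-5, dropout=0.3, rho=0.5, psi=0.5)
model = nn.Linear(config["d"], config["d"])              # placeholder module
optimizer = torch.optim.Adam(model.parameters(), lr=config["lr"],
                             weight_decay=config["weight_decay"])
```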

Performance on bundle matching (Q1)

We evaluate the performance of BundleMage and its competitors on bundle matching. Table 3 shows the results in terms of Recall@k and nDCG@k. We have two main observations. First, BundleMage shows the best performance in most cases, achieving up to 6.6% higher nDCG than the competitors. Second, our handling of heterogeneous interaction data is more effective on large datasets than on small ones; note that BundleMage adaptively extracts user preferences from the heterogeneous interactions with items and bundles. The performance gap between BundleMage and the competitors is large on the Netease and Steam datasets, which have plenty of user-item and user-bundle interactions; BundleMage effectively extracts user preferences from those interactions for bundle matching. In contrast, the performance gap is smaller on the Youshu dataset because it contains fewer user-item and user-bundle interactions than the other datasets.

Table 3. Performance of BundleMage and competitors for bundle matching with respect to nDCG and Recall.

https://doi.org/10.1371/journal.pone.0280630.t003

Performance on bundle generation (Q2)

We evaluate the performance of bundle generation in terms of Recall@k and nDCG@k. As shown in Table 4, BundleMage provides state-of-the-art accuracy, achieving up to 6.3× higher nDCG than the competitors. Note that the performance gap is large because only BundleMage learns a generation mechanism for personalized bundles from the observable data. To show the popularity biases of the datasets, we measure the average ranking score of every bundle, evaluated as the average popularity ranking of its included items. For each dataset, the averaged scores are: Youshu (14.8%), Netease (26.36%), and Steam (1.42%). For bundle generation, POP performs better than Random, BR, and GRAM-SMOT since many bundles consist of popular items. POP's performance is especially good on the Steam dataset because of its extreme popularity bias. However, BundleMage outperforms POP in most cases, since BundleMage accurately generates bundles consisting of unpopular items as well as popular ones.

Table 4. Performance of BundleMage and competitors for bundle generation with respect to nDCG and Recall.

https://doi.org/10.1371/journal.pone.0280630.t004

Ablation study (Q3)

For an ablation study, we compare the accuracy of BundleMage and its variants to evaluate whether each module in BundleMage helps the performance improvement. The variants of BundleMage are as follows.

  • BundleMage-Avg. To evaluate the effect of the adaptive gated preference mixture module, we incorporate the representation vectors as the simple average $\frac{1}{2}(\mathbf{p}_u + \mathbf{q}_u)$ instead of $\mathbf{g}_u \odot \mathbf{p}_u + (1 - \mathbf{g}_u) \odot \mathbf{q}_u$ in Eq (3).
  • BundleMage-Sep. To evaluate the partially shared parameters technique, we entirely separate the parameters of E(1) and E(2).
  • BundleMage-Sha. To evaluate the partially shared parameters technique, we entirely share the parameters of E(1) and E(2).
  • BundleMage-$\mathcal{L}_{gen}$. To evaluate the multi-task learning technique, we train BundleMage without the generation loss $\mathcal{L}_{gen}$.
  • BundleMage-$\mathcal{L}_{match}$. To evaluate the multi-task learning technique, we train BundleMage without the matching loss $\mathcal{L}_{match}$.

Bundle matching.

For bundle matching, we compare BundleMage with its variants BundleMage-Avg, BundleMage-Sep, BundleMage-Sha, and BundleMage-$\mathcal{L}_{gen}$. Table 5 shows the results of the ablation study for bundle matching. BundleMage shows better performance than its variants, indicating that the adaptive gated preference mixture, partially shared parameters, and multi-task learning all improve bundle matching performance.

Table 5. Evaluation of BundleMage and its variants for bundle matching with respect to nDCG.

https://doi.org/10.1371/journal.pone.0280630.t005

Bundle generation.

For bundle generation, we compare BundleMage with BundleMage-Avg, BundleMage-Sep, BundleMage-Sha, and BundleMage-$\mathcal{L}_{match}$. Table 6 shows the results of the ablation study for bundle generation. As in the ablation study for bundle matching, using the adaptive gated mixture, multi-task learning, and partially shared parameters improves bundle generation performance. Specifically, the latent user vector extracted from the bundle matching module plays an important role in generating bundles, since training without the matching loss degrades the performance of bundle generation.

Table 6. Evaluation of BundleMage and its variants for bundle generation with respect to nDCG.

https://doi.org/10.1371/journal.pone.0280630.t006

Case study (Q4)

We show in case studies that BundleMage successfully generates personalized bundles even with unpopular items which would otherwise rarely be exposed. Fig 1 shows that, given an incomplete bundle, BundleMage completes it differently depending on the target user. Note that the incomplete bundle consists of shooting games. BundleMage adds a shooting-and-RPG game to the given bundle for user A, who is interested in RPG games, while adding a shooting-and-adventure game for user B, who is interested in adventure games. For user C, who prefers simulation games, BundleMage adds a shooting-and-simulation game to the given incomplete bundle. BundleMage successfully generates new bundles by considering both user preferences and bundle characteristics.

We provide another case study of bundle generation for a comparison between BundleMage and POP. As shown in Fig 7, BundleMage correctly completes a bundle by considering the characteristics of the bundle while POP does not; in contrast to POP, BundleMage successfully recommends an unpopular adventure game since the given bundle includes adventure games and the target user prefers adventure games.

Conclusion

In this paper, we propose BundleMage, an accurate model that simultaneously performs bundle matching and generation. BundleMage matches bundles to users by effectively extracting users’ preferences from their heterogeneous interactions with items and bundles. BundleMage also generates a tailored bundle for a target user by exploiting the characteristics of a given incomplete bundle and the preference of the target user. To further improve accuracy on the two tasks simultaneously, BundleMage is trained in a multi-task learning manner with partially shared parameters. We experimentally show that BundleMage achieves up to 6.6% higher nDCG in bundle matching and 6.3× higher nDCG in bundle generation than existing bundle recommendation models. Moreover, we experimentally verify that our main ideas, the adaptive gated preference mixture, partially shared parameters, and multi-task learning, improve the performance of both bundle matching and generation. In particular, we show that the matching module has a great influence on generation performance, demonstrating the importance of the multi-task learning approach for the two related tasks. Our case studies show that BundleMage 1) completes bundles differently depending on the target user, and 2) generates personalized bundles even using unpopular items. Future work includes extending BundleMage to exploit auxiliary information of users, items, and bundles.

References

  1. Pathak A., Gupta K., and McAuley J. J., “Generating and personalizing bundle recommendations on Steam,” in SIGIR. ACM, 2017.
  2. Cao D., Nie L., He X., Wei X., Zhu S., and Chua T., “Embedding factorization models for jointly recommending items and user generated lists,” in SIGIR. ACM, 2017.
  3. Chen L., Liu Y., He X., Gao L., and Zheng Z., “Matching user with item set: Collaborative bundle recommendation with deep attention network,” in IJCAI, 2019.
  4. Chang J., Gao C., He X., Jin D., and Li Y., “Bundle recommendation with graph convolutional networks,” in SIGIR. ACM, 2020.
  5. Vijaikumar M., Shevade S. K., and Murty M. N., “GRAM-SMOT: top-n personalized bundle recommendation via graph attention mechanism and submodular optimization,” in ECML-PKDD, ser. Lecture Notes in Computer Science, vol. 12459. Springer, 2020.
  6. Garfinkel R. S., Gopal R. D., Tripathi A. K., and Yin F., “Design of a shopbot and recommender system for bundle purchases,” Decis. Support Syst., vol. 42, no. 3, 2006.
  7. Koren Y., Bell R. M., and Volinsky C., “Matrix factorization techniques for recommender systems,” Computer, vol. 42, no. 8, 2009.
  8. Salakhutdinov R. and Mnih A., “Probabilistic matrix factorization,” in NIPS. Curran Associates, Inc., 2007.
  9. Rendle S., Freudenthaler C., Gantner Z., and Schmidt-Thieme L., “BPR: Bayesian personalized ranking from implicit feedback,” in UAI. AUAI Press, 2009.
  10. He X., Liao L., Zhang H., Nie L., Hu X., and Chua T., “Neural collaborative filtering,” in WWW. ACM, 2017.
  11. Sedhain S., Menon A. K., Sanner S., and Xie L., “AutoRec: Autoencoders meet collaborative filtering,” in WWW — Companion Volume. ACM, 2015.
  12. Wu Y., DuBois C., Zheng A. X., and Ester M., “Collaborative denoising auto-encoders for top-n recommender systems,” in WSDM. ACM, 2016.
  13. Vincent P., Larochelle H., Bengio Y., and Manzagol P., “Extracting and composing robust features with denoising autoencoders,” in ICML, ser. ACM International Conference Proceeding Series, vol. 307. ACM, 2008.
  14. Liang D., Krishnan R. G., Hoffman M. D., and Jebara T., “Variational autoencoders for collaborative filtering,” in WWW. ACM, 2018.
  15. Kingma D. P. and Welling M., “Auto-encoding variational Bayes,” in ICLR, 2014.
  16. Kipf T. N. and Welling M., “Semi-supervised classification with graph convolutional networks,” in ICLR, 2017.
  17. Chang J., Gao C., He X., Jin D., and Li Y., “Bundle recommendation and generation with graph neural networks,” TKDE, 2021.
  18. Velickovic P., Cucurull G., Casanova A., Romero A., Liò P., and Bengio Y., “Graph attention networks,” in ICLR. OpenReview.net, 2018.
  19. Caruana R., “Multitask learning: A knowledge-based source of inductive bias,” in ICML. Morgan Kaufmann, 1993.
  20. Li H., Wang Y., Lyu Z., and Shi J., “Multi-task learning for recommendation over heterogeneous information network,” IEEE Trans. Knowl. Data Eng., vol. 34, no. 2, 2022.
  21. Pan S. J. and Yang Q., “A survey on transfer learning,” IEEE Trans. Knowl. Data Eng., vol. 22, no. 10, 2010.
  22. Meng Z., Yao X., and Sun L., “Multi-task distillation: Towards mitigating the negative transfer in multi-task learning,” in ICIP. IEEE, 2021.
  23. Zhou F., Wen Z., Zhang K., Trajcevski G., and Zhong T., “Variational session-based recommendation using normalizing flows,” in WWW. ACM, 2019.
  24. Sankar A., Wu Y., Wu Y., Zhang W., Yang H., and Sundaram H., “GroupIM: A mutual information maximization framework for neural group recommendation,” in SIGIR. ACM, 2020.
  25. Nema P., Karatzoglou A., and Radlinski F., “Disentangling preference representations for recommendation critiquing with β-VAE,” in CIKM. ACM, 2021.
  26. Ma J., Zhou C., Cui P., Yang H., and Zhu W., “Learning disentangled representations for recommendation,” in NeurIPS, 2019.
  27. Wang Z., Zhu Y., Liu H., and Wang C., “Learning graph-based disentangled representations for next POI recommendation,” in SIGIR. ACM, 2022.
  28. Cao J., Lin X., Cong X., Ya J., Liu T., and Wang B., “DisenCDR: Learning disentangled representations for cross-domain recommendation,” in SIGIR. ACM, 2022.
  29. Velickovic P., Cucurull G., Casanova A., Romero A., Liò P., and Bengio Y., “Graph attention networks,” CoRR, 2017.
  30. Hsieh C., Yang L., Cui Y., Lin T., Belongie S. J., and Estrin D., “Collaborative metric learning,” in WWW. ACM, 2017.
  31. Deng Q., Wang K., Zhao M., Zou Z., Wu R., Tao J., et al., “Personalized bundle recommendation in online games,” in CIKM. ACM, 2020.
  32. Sun F., Liu J., Wu J., Pei C., Lin X., Ou W., and Jiang P., “BERT4Rec: Sequential recommendation with bidirectional encoder representations from transformer,” in CIKM. ACM, 2019.
  33. Kang S., Hwang J., Lee D., and Yu H., “Semi-supervised learning for cross-domain recommendation to cold-start users,” in CIKM. ACM, 2019.
  34. Zhao C., Li C., and Fu C., “Cross-domain recommendation via preference propagation graphnet,” in CIKM. ACM, 2019.
  35. Tang J. and Wang K., “Personalized top-n sequential recommendation via convolutional sequence embedding,” in WSDM. ACM, 2018.
  36. Srivastava N., Hinton G. E., Krizhevsky A., Sutskever I., and Salakhutdinov R., “Dropout: a simple way to prevent neural networks from overfitting,” J. Mach. Learn. Res., vol. 15, no. 1, 2014.
  37. Kingma D. P. and Ba J., “Adam: A method for stochastic optimization,” in ICLR, 2015.