Fig 1.
The framework of the GCN.
Fig 2.
This study adopts a three-object heterogeneous information network comprising users, items, and queries as input to model and optimize the complex relationships involved in personalized recommendation. First, we apply tokenization to the input data, breaking the textual information of users, items, and queries into word units. We then generate initial embeddings for users, items, and queries, using either pre-trained vectors or random initialization, all with the same dimensionality. These embeddings capture the basic semantic information of each entity, laying the foundation for subsequent relationship modeling.
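The tokenization and embedding-initialization step described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the tokenizer, the embedding dimensionality (`EMB_DIM`), the random initialization scale, and the mean-pooling used to form entity embeddings are all assumptions; a pre-trained token-embedding matrix could be substituted for the random one.

```python
import numpy as np

EMB_DIM = 64  # assumed shared dimensionality for user, item, and query embeddings


def tokenize(text: str) -> list:
    # Simple whitespace tokenization; the paper's tokenizer may differ.
    return text.lower().split()


def build_vocab(corpus: list) -> dict:
    # Map each distinct token to an integer id.
    vocab = {}
    for text in corpus:
        for tok in tokenize(text):
            vocab.setdefault(tok, len(vocab))
    return vocab


# Toy textual descriptions for the three entity types (illustrative only).
user_texts = ["frequent buyer of electronics"]
item_texts = ["wireless noise cancelling headphones"]
query_texts = ["best headphones"]

vocab = build_vocab(user_texts + item_texts + query_texts)

# Randomly initialized token-embedding table; a pre-trained matrix of the
# same shape could be loaded here instead.
rng = np.random.default_rng(0)
token_emb = rng.normal(scale=0.1, size=(len(vocab), EMB_DIM))


def entity_embedding(text: str) -> np.ndarray:
    # Mean-pool the token embeddings to obtain the entity's initial embedding.
    ids = [vocab[t] for t in tokenize(text)]
    return token_emb[ids].mean(axis=0)


query_vec = entity_embedding(query_texts[0])
```

All three entity types end up in the same `EMB_DIM`-dimensional space, which is what allows the subsequent graph layers to model user-item-query relationships jointly.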
Table 1.
The experimental environment.
Table 2.
Details of the employed datasets.
Table 3.
DP-GCN Model Hyperparameter Settings.
Fig 3.
Training process and AUC value on the Electronics dataset.
Fig 4.
The comparison result on the Electronics dataset.
Table 4.
Precision@K and NDCG@K (K = 5, 10) Comparison on the Electronics dataset.
Fig 5.
The comparison result on the Books dataset.
Table 5.
Precision@K and NDCG@K (K = 5, 10) Comparison on the Books dataset.
Fig 6.
The prediction time comparison on the Books dataset.
Fig 7.
The comparison result on the self-established dataset.
Fig 8.
The user satisfaction comparison.