
Distributed uplink cache for improved energy and spectral efficiency in B5G small cell network

  • Mubarak Mohammed Al Ezzi Sufyan,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Resources, Software, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Department of Computer Science, University of Peshawar, Peshawar, Pakistan

  • Waheed Ur Rehman ,

    Roles Conceptualization, Data curation, Investigation, Project administration, Supervision, Validation, Writing – review & editing

    wahrehman@uop.edu.pk

    Affiliation Department of Computer Science, University of Peshawar, Peshawar, Pakistan

  • Tabinda Salam,

    Roles Methodology, Validation, Writing – review & editing

    Affiliation Department of Computer Science, Shaheed Benazir Bhutto Women University Peshawar, Peshawar, Pakistan

  • Abdul Rahman Al-Salehi,

    Roles Formal analysis, Writing – review & editing

    Affiliation Department of Electronics Engineering, International Islamic University, Islamabad, Pakistan

  • Qazi Ejaz Ali,

    Roles Data curation

    Affiliation Department of Computer Science, University of Peshawar, Peshawar, Pakistan

  • Abdul Haseeb Malik

    Roles Formal analysis

    Affiliation Department of Computer Science, University of Peshawar, Peshawar, Pakistan

Abstract

The advent of content-centric networks and Small Cell Networks (SCNs) has resulted in the exponential growth of data for both uplink and downlink transmission. Data caching is considered one of the popular solutions to cater to the resultant challenges of network congestion and backhaul bottlenecks in B5G networks. Caching for uplink transmission in distributed B5G scenarios has several challenges, such as duplicate matching of contents, the mobile station’s unawareness of the cached contents, and the storage of large contents. This paper proposes a cache framework for uplink transmission in distributed B5G SCNs. Our proposed framework generates comprehensive lists of cache contents from all the Small Base Stations (SBSs) in the network to remove similar contents and assist uplink transmission. In addition, our framework performs content matching at a Mobile Station (MS) instead of an SBS, which effectively improves the energy and spectrum efficiency. Furthermore, large contents are segmented and their fractions are stored in the distributed cache to improve the cache hit ratio. Our analysis shows that the proposed framework outperforms the existing schemes by improving the energy and spectrum efficiency of both access and core networks. Compared to the existing state of the art, our proposed framework improves the energy and spectrum efficiency of the access network by 41.28% and 15.58%, respectively. Furthermore, the cache hit ratio and throughput are improved by 9% and 40%, respectively.

1 Introduction

In recent years, content-based services such as video streaming have been growing exponentially. In addition, the number of internet users is expected to reach 5.3 billion by 2023 [1]. This will tremendously increase the traffic volume for both uplink and downlink in Beyond 5th Generation (B5G) networks. It will also result in several challenges such as traffic load, congestion, and latency, in addition to significant consumption of energy and spectrum [2–4]. To address the challenges incurred by this data explosion, the SCN is widely advocated in B5G networks. However, the significantly improved data transmission rate of the SCN makes users’ data requirements vary over time and location, and creates bottlenecks at the backhaul. Another way to cope with the data explosion is the use of caching in the network. In this technique, cached versions of the contents are stored at different locations in the network, such as the Base Station (BS), Macro Base Station (MBS), SCN, and/or Mobile Stations (MSs). If a cached version is available, devices do not need to upload or download the content to/from the core network. In addition, as the contents are available locally, caching reduces the cost of communication, energy/bandwidth consumption, and latency [5–8].

The rapid growth of SCNs also increases the complexity of the cellular network, especially in a cooperative scenario. In such a case, a distributed cache can be employed to act as a single cache that spans the network. A distributed cache can be seen as an extension of a single cache. It provides a higher data rate with less energy and bandwidth consumption, along with low latency to access contents, in order to meet users’ quality of experience [9–11]. Distributed caching may be employed for both downlink and uplink transmission depending upon the demands of the MSs [12]. Downlink caching restrains the unnecessary download of content by providing it locally based on its popularity to reduce the congestion on the backhaul link and core network. In contrast, uplink caching stores the MS’s data intended to be uploaded, with the constraint of avoiding unnecessary uploads to reduce the data traffic in the network, especially on the access network link [13, 14]. Distributed caching can be deployed in different types of networks such as content-centric networks [15], cloud-small cell networks [16, 17], wireless cellular networks [18], dense SBSs [19], edge networks [20], B5G mobile edge computing [6], HetNet SBSs [21], B5G relaying networks [22], Small Cell Base Stations (SCBSs) [23], multi-antenna SCNs [24], and 5G-SBSs [25].

Recently, a lot of research has been done on addressing the challenges of distributed caching in cellular networks, with some work on uplink caching [26–33]. The authors in [26] proposed a novel upload cache architecture to support the parallel uploading of segmented files. The authors in [27] studied the energy-efficient cooperative coded caching issue in heterogeneous SCNs to minimize the energy consumption of content caching and content delivery. The authors in [28] proposed an uplink cache system in delay-tolerant SCNs. The main focus in [28] is to analyze the cache size and its effectiveness for uplink caching. Furthermore, duplicate elimination of contents is performed at the SBS by matching the hash keys of the chunks of a file after the real content has been uploaded, which is not very practical for an uplink scenario. The authors in [29] developed a novel multiple-input multiple-output (MIMO) network architecture with a large number of BSs employing cache-enabled uplink transmission. The authors proposed the modified von Mises distribution as a popularity distribution function, derived the outage probability, and identified the relationship between cache storage and outage probability to be directly proportional. It was also found that the delivery rate improves with an increase in cache storage space, in addition to a denser network. The authors in [30] proposed an innovative approach to reduce peak traffic by utilizing the distributed cache of Internet of Things (IoT) devices. The proposed scheme employed uplink transmission scheduling based on delay adaptation to mitigate sporadic access network congestion. The average delivery latency is investigated by the authors in [31], who proposed a proactive caching technique for movie on-demand streaming on the internet. Probabilistic caching is studied in [32], where the authors proposed an optimized caching strategy to improve the successful download probability. Furthermore, in [33], the authors presented matching of the mobile data to be uploaded against the cached contents at the MS level, before the real transmission of the actual content takes place, so that only dissimilar contents are uploaded; however, the cooperative scenario was not considered.

All the aforementioned research papers propose novel uplink cache architectures and schemes to effectively improve the network performance. However, these papers have not presented the cache benefits in terms of Energy Efficiency (EE) and Spectral Efficiency (SE). In addition, the existing literature proposes to use an SBS for performing content matching [28, 29], which entails that the MSs will be unaware of the cached contents. This leads to unnecessary uploads of content, which is not desirable. Furthermore, the existing literature has also identified that a large content size significantly degrades the performance of the distributed cache. In addition, the existing works in [28, 29] have not considered content duplication and its effect on energy and spectral efficiency in a distributed scenario. The authors in [33] considered content matching at an MS, mitigating duplicate uploads; however, they did not consider a distributed scenario. Moreover, content segmentation and its effective placement in the context of a distributed cache are not considered in the existing literature. This motivates us to propose a novel cache-enabled uplink transmission framework that addresses all these challenges. The main objective of this work is to improve energy and spectral efficiency along with improving cache performance in a distributed scenario. Firstly, our proposed framework generates an unduplicated list of cache contents in the distributed scenario, to be used as a map for an MS to decide whether to upload contents or not. Secondly, it proposes a scheme to perform content matching at an MS. Lastly, it performs content segmentation to divide large contents into smaller ones for effective storage across the distributed network. The main contributions of this paper can be summarized as follows:

  1. We propose a novel scheme to effectively generate a disparate list of the distributed cache for uplink transmission. The proposed scheme leverages cooperative communication to efficiently generate a list of contents based on popularity and content validity. This way, the list is updated with the most relevant contents, which improves the cache hit ratio.
  2. An efficient content matching scheme is also proposed to enable an MS to corroborate the cached contents. This significantly limits the amount of uplink transmission, which improves the EE and SE.
  3. We also propose a content segmentation with distributed placement scheme, which improves content placement in the distributed cache network by splitting larger contents into smaller segments. The proposed scheme significantly improves the cache hit ratio, which intrinsically improves the EE and SE.
  4. We compare our proposed framework with state-of-the-art architectures and algorithms to analyze its effectiveness. We mainly focus on energy consumption and spectrum efficiency, which are the main limiting factors of the existing literature.

This paper is organized as follows: Related work is discussed in Section 2. The system model is discussed in Section 3. Section 4 presents our proposed framework. Performance evaluation and the conclusion are discussed in Sections 5 and 6, respectively.

2 Related work

The use of caches for downlink transmission is widely researched. Its effectiveness is broadly accepted, especially for the energy and spectral efficiencies, as summarized in Table 1, with some work related to distributed caches in Table 2. However, cache-enabled uplink is researched to a lesser extent. The state of the art and our reference points are in [28, 29]. In [28], the authors presented a framework to relieve the burden of wireless SCNs by considering cache-enabled uplink transmission in a delay-tolerant network. The paper also proposed duplicate content matching at an SBS by comparing the hash keys of the chunks of a file after uploading the real content. The authors also used First-Input-First-Output (FIFO), random, and probabilistic content scheduling strategies for cache management. This enabled an SBS to eliminate the redundancy among users’ uploaded contents and improve the network transmission efficiency. Similarly, in [29], the authors presented a cache-uplink framework for HetNets with stochastically distributed BSs for the temporary caching of user-generated contents. The authors also presented the relationship between cache storage space and outage probability. The authors in [33] presented a Broadcast Cache Assist Uplink (BCAU) scheme that matches the attributes of new content against the cached contents at an MS, so that content already available in the SBS cache is discarded before the real transmission of the actual content takes place, improving the EE and SE of uplink transmission over B5G-SCN; however, cooperation among the distributed caches is not considered. Overall, these papers have not considered the EE and SE of a distributed cache. In addition, content matching at an SBS, as proposed in [28, 29], entails unnecessary uploads from the MSs. Lastly, the challenges of large contents are not addressed in these papers.

Table 1. Summary of existing works about the impact of cache on energy, spectral and cache efficiencies of SCN.

https://doi.org/10.1371/journal.pone.0268294.t001

Table 2. Summary of cooperative distributed cache methods.

https://doi.org/10.1371/journal.pone.0268294.t002

3 System model

This section gives a detailed description of the system model. The cooperative distributed network with cache-enabled SBSs and MBS along with MSs is considered as shown in Fig 1.

Fig 1. System model of distributed caching in B5G cellular network.

https://doi.org/10.1371/journal.pone.0268294.g001

3.1 Network model

We consider a cellular network that consists of a cloud, a TDD MBS, M TDD SBSs, and N MSs, as shown in Fig 1. The MBS is the base station of the cellular network. It is employed to collect the relevant information from all the SBSs in addition to controlling them. The SBSs and MSs are spatially distributed according to two independent homogeneous Poisson Point Processes (hPPPs) ΦB and ΦU, with the densities of SBSs and MSs represented as λB and λU, respectively. The set of SBSs serves a set of MSs. All SBSs are connected to the MBS and a cloud through a backhaul link, while each SBS is bidirectionally connected with its neighboring SBSs via the X2 interface (a sidehaul wireless link). All SBSs are in an active mode and are associated with at least one MS to serve. Each MS selects its local SBS based on its propagation distance. Therefore, the set of MSs served by an SBS Bj is denoted by Uj,i: 1 ≤ i ≤ n, 1 ≤ j ≤ M, n < N, where i represents an MS served by the SBS Bj. In this work, we consider the uplink transmission scenario.
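The network model above can be illustrated with a short simulation sketch, assuming a square region and illustrative densities (the density values, region size, and variable names are not from the paper):

```python
import numpy as np

def sample_hppp(density, area_side, rng):
    """Draw points of a homogeneous Poisson Point Process on a square region.
    The number of points is Poisson(density * area); positions are uniform."""
    area = area_side ** 2
    n = rng.poisson(density * area)
    return rng.uniform(0.0, area_side, size=(n, 2))

rng = np.random.default_rng(0)
# Illustrative stand-ins for lambda_B (SBSs) and lambda_U (MSs).
sbs_positions = sample_hppp(density=1e-4, area_side=1000.0, rng=rng)
ms_positions = sample_hppp(density=1e-3, area_side=1000.0, rng=rng)

# Each MS associates with its nearest SBS (shortest propagation distance).
dists = np.linalg.norm(ms_positions[:, None, :] - sbs_positions[None, :, :], axis=2)
serving_sbs = dists.argmin(axis=1)
```

This mirrors the association rule in the text: the serving SBS of each MS is chosen purely by distance.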

3.2 Cache model

We consider M + 1 caches across the cellular network. The rest of the characteristics are as under:

3.2.1 Cache storage.

The set of SBS caches is denoted accordingly. Each cache stores a set of popular contents, where w is the total number of cached contents, and the cache of an SBS Bj stores a set of popular contents, where l is the total number of cached contents of Bj. The popularity of the content ranked l is modeled by a Zipf distribution according to [38]:

$$p_l = \frac{l^{-\delta}}{\sum_{k=1}^{w} k^{-\delta}} \qquad (1)$$

where l represents the popularity rank of the content and δ is the skewness of the popularity distribution.
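The Zipf popularity model of (1) can be sketched as follows (w and δ values are illustrative):

```python
import numpy as np

def zipf_popularity(w, delta):
    """Zipf popularity of contents ranked 1..w:
    p_l = l^(-delta) / sum_k k^(-delta), as in Eq (1)."""
    ranks = np.arange(1, w + 1)
    weights = ranks ** (-float(delta))
    return weights / weights.sum()

# Popularity of 100 contents with skewness 0.8 (illustrative values).
p = zipf_popularity(w=100, delta=0.8)
# Probabilities sum to one and decrease monotonically with rank.
```

A larger δ concentrates requests on the highest-ranked contents, which is why caching a few popular contents captures most of the demand.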

Each cached content has some attributes such as name, size, hash key, length, etc. The set of attributes of each cached content is denoted by , where l is the serial number of content in a cache and κ is its maximum number of attributes. The attributes of each cached content will be used for matching to determine the duplicate content and eliminate it (More details in section 4).

The total storage capacity W of all M + 1 caches in the network can be represented as:

$$W = W_{MBS} + \sum_{j=1}^{M} W_{B_j} \qquad (2)$$

where $W_{MBS}$ is the storage capacity of the MBS’s cache and $W_{B_j}$ is the storage capacity of the cache of SBS Bj.

3.2.2 Distributed caching model.

We consider a cooperative distributed cache, where caches located at the SBSs and the MBS appear to work as a single cache. The contents are stored in a distributed manner across the network. The rationale for content segmentation is based on [35, 43], where the authors argued that larger contents significantly reduce the cache effectiveness due to the scarcity of storage space.

Therefore, we have proposed that small-sized contents may be stored locally at an SBS, while large size contents can be split into Q segments [27, 38, 56].

The set of segments of a content can be represented accordingly for all 1 ≤ l ≤ w, where the size of each segment is determined based on the content’s size and the free space of the local SBS and its neighbors. Each segment stored in a separate cache is encoded into packets using an MDS code [59]. Furthermore, if the content is too large, it will be cached at the MBS’s cache as a whole. The placement of the segment(s) in each cache is determined by using the hash key of the whole content.
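The segmentation and hash-based placement can be sketched as follows. This is a simplified stand-in: equal-size splitting replaces MDS coding, and the modular placement rule is an assumption for illustration, not the paper's exact rule:

```python
import hashlib

def segment_content(data: bytes, q: int):
    """Split a large content into q roughly equal segments
    (a simplified stand-in for the MDS-coded segmentation)."""
    size = -(-len(data) // q)  # ceiling division
    return [data[i * size:(i + 1) * size] for i in range(q)]

def place_segments(content_id: str, segments, cache_ids):
    """Assign each segment to a cache, indexed by the hash of the
    whole content (placement rule assumed for illustration)."""
    start = int(hashlib.sha256(content_id.encode()).hexdigest(), 16)
    return {cache_ids[(start + k) % len(cache_ids)]: seg
            for k, seg in enumerate(segments)}

segs = segment_content(b"x" * 1000, q=4)
placement = place_segments("video_001", segs, cache_ids=list(range(8)))
```

Because placement derives from the content hash, any SBS can recompute where each segment resides without extra signaling.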

3.2.3 Cache availability and efficiency.

Cache availability and efficiency can be evaluated using the cache hit ratio and the cache miss ratio. A cache hit occurs when the content is available in the cache; otherwise, it is a cache miss. According to [60, 61], the cache hit and miss ratios of each SBS Bj are given as:

$$\mathrm{Hit}_j = \frac{N_j^{hit}}{N_j^{hit} + N_j^{miss}} \qquad (3)$$

$$\mathrm{Miss}_j = \frac{N_j^{miss}}{N_j^{hit} + N_j^{miss}} \qquad (4)$$

where $N_j^{hit}$ and $N_j^{miss}$ are the counters of the cache hits and cache misses of SBS Bj, respectively.
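The per-SBS hit/miss bookkeeping of (3) and (4) amounts to two counters and their ratios, as in this minimal sketch:

```python
class CacheStats:
    """Track per-SBS hit/miss counters and derive the ratios of (3)-(4)."""

    def __init__(self):
        self.hits = 0
        self.misses = 0

    def record(self, hit: bool):
        """Count one cache lookup as a hit or a miss."""
        if hit:
            self.hits += 1
        else:
            self.misses += 1

    @property
    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

    @property
    def miss_ratio(self):
        total = self.hits + self.misses
        return self.misses / total if total else 0.0

stats = CacheStats()
for outcome in (True, True, True, False):  # three hits, one miss
    stats.record(outcome)
```

The two ratios always sum to one whenever at least one lookup has been recorded.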

In the scenario of the cache-enabled uplink, the availability of data in a cache is categorized as follows:

  1. Cache hit: the new content is available in the distributed cache; therefore, an MS sends a Message of Target Destination (MoTD) (more details on MoTD in Section 4.2) rather than the actual content.
  2. Cache miss: the new content is unavailable in the distributed cache; therefore, the new content will be uploaded.

3.3 Communication model

The uplink capacity of an MS Uj,i, denoted by $\Re_{j,i}^{UL}$, can be represented as [62, 63]:

$$\Re_{j,i}^{UL} = B \log_2\left(1 + \gamma_{j,i}\right) \qquad (5)$$

where B represents the channel bandwidth and $\gamma_{j,i}$ is the signal-to-interference-plus-noise ratio (SINR) of the received signal from MS Uj,i at its serving SBS Bj, which can be represented as:

$$\gamma_{j,i} = \frac{P_{j,i}\,\|h_{j,i}\|^2\, d(U_{j,i}, B_j)^{-\alpha}}{\sum_{i' \in I_i} P_{i'}\,\|h_{i'}\|^2\, d(U_{i'}, B_j)^{-\alpha} + \sigma^2} \qquad (6)$$

where $P_{j,i}$ is the uplink transmit power of MS Uj,i, $h_{j,i}$ is the corresponding uplink channel gain, ‖.‖ stands for the Euclidean norm, d(Uj,i, Bj) is the separation distance between Uj,i and Bj, α is the path loss exponent, Ii is the set of interfering MSs, and σ2 is the noise power spectral density at an MS.

Using (5) and (6), (5) can be rewritten as:

$$\Re_{j,i}^{UL} = B \log_2\left(1 + \frac{P_{j,i}\,\|h_{j,i}\|^2\, d(U_{j,i}, B_j)^{-\alpha}}{\sum_{i' \in I_i} P_{i'}\,\|h_{i'}\|^2\, d(U_{i'}, B_j)^{-\alpha} + \sigma^2}\right) \qquad (7)$$

Therefore, the uplink network capacity combines the capacities of the access link (MS) and the backhaul (SBS), weighted by the cache hit and miss ratios, and is given by: (8) where $P_{B_j}$ is the uplink transmission power of the SBS, $h_{B_j}$ is the corresponding UL channel gain at the MBS/C-RAN, ‖.‖ stands for the Euclidean norm, and $d(B_j, MBS)$ is the separation distance between Bj and the MBS.
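The capacity relations of (5) and (6) can be sketched numerically; the transmit power, gain, distance, and noise values below are illustrative, not parameters from the paper:

```python
import numpy as np

def uplink_sinr(p_tx, gain, dist, alpha, interference, noise):
    """SINR of Eq (6): received power P * g * d^(-alpha)
    over aggregate interference plus noise."""
    return (p_tx * gain * dist ** (-alpha)) / (interference + noise)

def uplink_capacity(bandwidth_hz, sinr):
    """Shannon capacity of Eq (5): C = B * log2(1 + SINR)."""
    return bandwidth_hz * np.log2(1.0 + sinr)

# Illustrative link budget (assumed values).
sinr = uplink_sinr(p_tx=0.2, gain=1.0, dist=50.0, alpha=3.5,
                   interference=1e-10, noise=1e-9)
cap = uplink_capacity(20e6, sinr)  # bits per second over a 20 MHz channel
```

Shorter access links (smaller d) boost the SINR and hence the uplink capacity, which is the premise behind serving MSs from nearby small cells.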

3.4 Energy consumption model

In this subsection, we describe the Energy Consumption (EC) of an MS, an SBS, and a cache as follows.

3.4.1 Mobile station energy consumption ().

The EC of the MS is given according to [33, 64]:

$$EC_{MS} = EC_m + EC_{op} + EC_r + EC_t \qquad (9)$$

where ECm and ECop are the energy consumption of performing matching and of other operations, respectively; ECr is the energy consumed for receiving the packet(s) of an SBS’s reply; and ECt is the energy consumption for transmitting data by the MS. In the case of a cache hit, ECt accounts only for the communication cost of sending the MoTD message to an SBS; otherwise, ECt accounts for the communication cost of sending the whole content.

According to (9), the average EC of the MSs is given as:

$$\overline{EC}_{MS} = \frac{1}{N}\sum_{i=1}^{N} EC_{MS_i} \qquad (10)$$
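The hit/miss asymmetry in the MS energy model of (9) can be sketched as follows; all numeric values are assumed for illustration and are not measurements from the paper:

```python
def ms_energy(ec_match, ec_op, ec_rx, tx_power, tx_time):
    """MS energy per Eq (9): matching + other operations + receive
    + transmit energy (modeled here as power * airtime)."""
    return ec_match + ec_op + ec_rx + tx_power * tx_time

# On a cache hit only the short MoTD is transmitted (tiny airtime);
# on a miss the whole content is uploaded (long airtime).
ec_hit = ms_energy(ec_match=0.01, ec_op=0.005, ec_rx=0.02,
                   tx_power=0.2, tx_time=0.01)
ec_miss = ms_energy(ec_match=0.01, ec_op=0.005, ec_rx=0.02,
                    tx_power=0.2, tx_time=5.0)
```

The gap between the two cases is what the framework's MS-side matching converts into energy savings.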

3.4.2 Caching energy consumption ().

The caching energy consumption is the energy spent by the cache to perform different operations such as cache hits/misses, cache management, and cooperation among the distributed caches, and is given according to [65] as:

$$EC_{cache} = EC_{Hit} + EC_{Miss} + EC_{bus} + EC_{cell} + EC_{pad} + EC_{chip} \qquad (11)$$

where ECHit and ECMiss are the energy consumption of a cache hit and a cache miss, respectively. ECbus, ECcell, ECpad, and ECchip are the energy consumed by the address and data bus, the cell arrays, the address and data pads, and the processor and off-chip cache, respectively. More details about the calculation of ECbus, ECcell, ECpad, and ECchip can be found in [65].

In the case of cooperative distributed caching, an SBS’s cache consumes energy not only for data exchange but also for content partitioning. In addition, according to the proposed framework in Section 4, each SBS generates and broadcasts the list of its cache contents to facilitate content matching. The energy consumption of all these operations is considered in computing ECCDC.

Therefore, (11) can be rewritten as:

$$EC_{cache} = \mathrm{Hit}_j\, EC_{Hit} + \mathrm{Miss}_j\, EC_{Miss} + EC_{bus} + EC_{cell} + EC_{pad} + EC_{chip} + EC_{CDC} \qquad (12)$$

where the $\mathrm{Hit}_j$ and $\mathrm{Miss}_j$ of an SBS Bj are given according to (3) and (4).

3.4.3 SBS energy consumption ().

According to [66], the EC of the SBS is given as: (13) where Pcomp is the total energy consumed by PRFC, PBCK, and the remaining operations, which denote the energy consumption of the radio-frequency chain, the backhaul link, and the other operations related to communication and contents, respectively. The model also accounts for the energy consumption of an SBS Bj for executing operations and for receiving the contents uploaded by the MS Uj,i, the energy consumption for transmitting data by SBS Bj, and the EC of SBS Bj for executing the cache replacement policy when the cache is full.

In the case of a cache hit, the reception cost at SBS Bj is calculated only for receiving the MoTD; otherwise, it is calculated for receiving the whole content.

Using Eqs (11) and (13), we can write: (14)

Therefore, the average EC of the SBSs is given as:

$$\overline{EC}_{SBS} = \frac{1}{M}\sum_{j=1}^{M} EC_{SBS_j} \qquad (15)$$

Finally, according to (10) and (15), the EC of the overall cellular network is given by: (16)

3.5 Spectrum Efficiency (SE) model

By definition, the uplink SE is determined by the relation between the uplink capacity ℜUL and the total available bandwidth B, as given by [67]:

$$SE^{UL} = \frac{\Re^{UL}}{B} \qquad (17)$$

According to the Shannon channel capacity, the maximum bit rate is given by:

$$\Re_{max} = B \log_2(1 + \gamma) \qquad (18)$$

Then, according to (17) and (18), the SE is given as:

$$SE^{UL} = \log_2(1 + \gamma) \qquad (19)$$
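The reduction from (17)–(18) to (19) can be checked numerically; the bandwidth and SINR values below are illustrative:

```python
import math

def spectral_efficiency(capacity_bps, bandwidth_hz):
    """SE per Eq (17): achieved bits per second per Hz."""
    return capacity_bps / bandwidth_hz

# With the Shannon capacity of Eq (18), the bandwidth cancels and
# SE reduces to log2(1 + SINR), as in Eq (19).
b = 20e6      # 20 MHz channel (assumed)
sinr = 15.0   # linear SINR (assumed)
cap = b * math.log2(1.0 + sinr)
se = spectral_efficiency(cap, b)  # log2(16) = 4 bit/s/Hz
```

Because B cancels, SE is a bandwidth-independent measure of how efficiently the spectrum is used, which is why the paper reports SE rather than raw capacity.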

4 Proposed framework for distributed uplink cache

In a distributed cache, there are three challenges: firstly, content duplication among the distributed caches; secondly, an MS’s inability to know about the cached contents; and thirdly, segmenting new large contents and caching them distributively. In order to address these challenges and improve the user’s quality of experience, a framework for distributed uplink caching is proposed. There are three main components of the proposed framework, namely, generating a list of cache contents, performing matching at an MS, and content segmentation for distributed cache placement, as shown in Fig 2.

Fig 2. Proposed framework for distributed uplink caching.

https://doi.org/10.1371/journal.pone.0268294.g002

4.1 Generating disparate list of distributed cache for uplink transmission

It is of paramount importance that an MS has a list of the cached contents to avoid unnecessary uploads. In this subsection, the algorithm for generating the content list is explained in detail. An MBS is designated as the marker of the final content list (MFCL). The goal of the MFCL is to create two content lists, namely, the Un-duplicated Content List (UCL) and the Duplicated Content List (DCL). As its name implies, the UCL consists of un-duplicated cache contents that can be used by an MS for content matching, whereas the DCL contains duplicated cache contents, which will be used for evicting identical contents. The rest of this section explains the process of creating these content lists, which is also shown in Fig 3.

Fig 3. Preparing and generating the final list of cache contents.

https://doi.org/10.1371/journal.pone.0268294.g003

4.1.1 Processing of cache contents at MBS and SBSs.

The processing involves creating a list of cached contents at an MBS and all SBSs in the network.

4.1.1.1 MBS cache contents. The information about the SBSs that are connected to an MBS can be expressed as: (20) where the M rows represent the total number of SBSs. The columns represent the characteristics of an SBS, namely the ID of an SBS Bj and of its cache, the SBS location in the coverage area of the MBS, and the free space of each SBS, which is computed as: (21)

The list of contents of the MBS’s cache can be expressed as: (22) where the w rows represent the total number of cached contents and κ is the maximum number of attributes of a content.

4.1.1.2 SBSs’ cache contents. The list of an SBS’s cached contents can be expressed as: (23) where w is the total number of each SBS’s cached contents.

Then, the total number of contents in all the SBSs’ caches can be represented as: (24)

4.1.1.3 Consolidated list of distributed cache contents. Every SBS sends its cache contents list, as shown in (23), to the MFCL, which is the MBS acting as a marker. After receiving the cache content lists from all the SBSs, the MFCL applies a row-wise combination function to generate a consolidated list as follows: (25) where Tf is the total number of all cached contents across the distributed cache.
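The row-wise combination of (25) is simply a concatenation of the per-SBS lists into one table of attribute rows; duplicates survive at this stage and are filtered in the next step. A minimal sketch (content attributes are illustrative):

```python
def consolidate(per_sbs_lists):
    """Row-wise combination of per-SBS content lists into one
    consolidated list, as in Eq (25)."""
    combined = []
    for sbs_list in per_sbs_lists:
        combined.extend(sbs_list)  # append every row of every SBS
    return combined

# Two SBS caches; "a.mp4" is cached at both (a duplicate).
sbs1 = [{"name": "a.mp4", "size": 10, "hash": "h1"}]
sbs2 = [{"name": "b.mp4", "size": 20, "hash": "h2"},
        {"name": "a.mp4", "size": 10, "hash": "h1"}]
all_contents = consolidate([sbs1, sbs2])  # Tf = 3 rows, one duplicated
```

The length of the combined list is the Tf of (25).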

4.1.2 Filtering similar contents.

The consolidated list generated by the MFCL may contain similar contents received from the various SBSs’ caches, in addition to different types of data such as nominal, ordinal, interval, and ratio. In order to remove duplication, the MFCL performs matching among the attributes of the contents of the consolidated list shown in (25).

In order to compute the dissimilarity in (25), we consider typical and target contents denoted by y and y′, respectively. The dissimilarity can be calculated as [68]:

$$dissim(y, y') = \frac{\sum_{f=1}^{\kappa} \delta_f^{(y,y')}\, d_f^{(y,y')}}{\sum_{f=1}^{\kappa} \delta_f^{(y,y')}} \qquad (26)$$

where dissim(y, y′) represents the difference between the typical content y and the target content y′, Pf represents an attribute of the content, and κ is the maximum number of attributes of a content. $d_f^{(y,y')}$ is the contribution of attribute Pf to the dissimilarity between y and y′, whereas $\delta_f^{(y,y')}$ is the indicator.

The fetching value principle of the indicator $\delta_f^{(y,y')}$: if either y or y′ has a missing value for attribute Pf, or if $y_f = y'_f = 0$ and attribute Pf is asymmetric, then $\delta_f^{(y,y')} = 0$; otherwise, $\delta_f^{(y,y')} = 1$.

The fetching value principle of the contribution $d_f^{(y,y')}$:

  1. If the attribute Pf of the contents is of numeric, interval, or ratio type, then
     $$d_f^{(y,y')} = \frac{|y_f - y'_f|}{\max_h h - \min_h h} \qquad (27)$$
     where h ranges over all the non-missing values of the attribute Pf.
  2. If the attribute Pf of the contents is nominal or binary, then
     $$d_f^{(y,y')} = \begin{cases} 0 & \text{if } y_f = y'_f \\ 1 & \text{otherwise} \end{cases} \qquad (28)$$
  3. If the attribute Pf of the contents is ordinal, then calculate the ranking $r_f$ and
     $$z_f = \frac{r_f - 1}{M_f - 1} \qquad (29)$$
     where $r_f$ represents the ranking of the state in the attribute Pf, $M_f$ is the number of ordered states of Pf, and $z_f$ is then treated as a numeric type.

The similarity between y and y′ is computed by:

$$sim(y, y') = 1 - dissim(y, y') \qquad (30)$$

where sim(y, y′) shows the similarity of the contents y and y′.

The list of the similarity values can be expressed as: (31)

The two closest (most similar) contents correspond to the column with the maximum value (value 1) among the values computed for each content. Alternatively, similarity can also be determined by setting a threshold value such that: (32)

In this way, duplicate contents residing in different SBS caches can be identified and subsequently evicted to make space for new contents.
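The mixed-attribute matching of (26)–(30) can be sketched as follows. Attribute names, types, and ranges are assumed for illustration; numeric attributes contribute a range-normalised difference and nominal attributes contribute 0/1, with missing values skipped (indicator = 0):

```python
def dissimilarity(y, y2, attr_types, attr_ranges):
    """Mixed-attribute dissimilarity in the spirit of Eqs (26)-(29)."""
    num = den = 0.0
    for f, ftype in attr_types.items():
        a, b = y.get(f), y2.get(f)
        if a is None or b is None:
            continue  # indicator delta_f = 0: attribute is skipped
        if ftype == "numeric":
            rng_f = attr_ranges.get(f, 0.0)
            d = abs(a - b) / rng_f if rng_f else 0.0
        else:  # nominal / binary attribute: 0 if equal, else 1
            d = 0.0 if a == b else 1.0
        num += d
        den += 1.0
    return num / den if den else 0.0

def similarity(y, y2, attr_types, attr_ranges):
    """Similarity per Eq (30): sim = 1 - dissim."""
    return 1.0 - dissimilarity(y, y2, attr_types, attr_ranges)

types = {"size": "numeric", "hash": "nominal", "name": "nominal"}
ranges = {"size": 100.0}
c1 = {"size": 10, "hash": "h1", "name": "a.mp4"}
c2 = {"size": 10, "hash": "h1", "name": "a.mp4"}  # duplicate of c1
c3 = {"size": 60, "hash": "h2", "name": "b.mp4"}  # distinct content
```

A pair scoring 1 (or above the chosen threshold of (32)) is flagged as a duplicate across the distributed caches.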

4.1.3 Checking validity of cache contents.

After identifying similar contents, it is also important for the MFCL to check the validity of the existing cache contents. Identical contents residing in different caches should be evicted and kept in only one cache. To do this, we propose the Dynamic Validity Period (DVP), with the aim of computing the probability of either keeping or evicting a content from any of the distributed caches. It can be expressed as: (33) where the terms are the remaining energy of an SBS Bj, the cache free space of the SBS as mentioned in (20) and computed according to (21), the total number of MSs Uj,i served by the SBS Bj, the popularity of each cached content according to (1), and the remaining time of the content in the cache, which is given by: (34) where the remaining time is the difference between the expiry time of the content and the time of its last hit.

The DVP is executed after performing the content similarity check and determining the duplicate contents of the distributed cache. Let (35) where 1 ≤ m ≤ M and K represents the hash key of the content with the maximum DVP, i.e., it identifies all the attributes of the content with the highest validity. Therefore, that content should be placed in the UCL, while all its remaining copies will be placed in the DCL. This can formally be seen in (38) and (39).

4.1.4 Final list generation.

Now that the similarity and validity of the contents are determined, the final lists of duplicated and un-duplicated contents can be generated. The UCL is the list of un-duplicated contents of the distributed cache, which can be used by an MS as a map to determine whether to upload a content or to send a MoTD instead of uploading the actual content. The UCL can be formed using (25) and (35). The value of K in (35) is used to select the content in (25) using the hash key. K represents a row in (25) that has the maximum validity (DVP), and can be presented as: (36) where l is the serial number of the row with the maximum DVP such that l ∈ [1, Tf].

Eq (36) represents the row of the matrix in (25) that has the maximum validity, as shown in (35), i.e., the maximum DVP value of a single content. The same process is performed for all the contents of the distributed cache, providing the maximum DVP values of all the duplicated contents. Each group can be seen as a cluster of duplicated contents with different DVP values. The content with the maximum DVP value is placed in the UCL, while the remaining copies are placed in the DCL. Based on this discussion, the UCL can be represented as: (37) where l ∈ [1, Tf] represents the rows with the maximum DVP values. When all the rows with the maximum DVP are added, the final UCL can be expressed as: (38) where T ≤ Tf represents the total number of un-duplicated contents among all the distributed caches.

The DCL is the list of the duplicated contents of the distributed cache, which determines the contents to be evicted from the SBS(s) caches. As previously mentioned, the contents with the maximum DVP values are placed in the UCL; the remaining duplicated contents are added to the DCL and can be expressed as: (39) where T is the total number of duplicated contents across the distributed cache to be eliminated from their caches.
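The UCL/DCL split of (38)–(39) can be sketched as a group-by over the content hash, keeping the highest-DVP copy per group. The dictionary fields (`hash`, `sbs`, `dvp`) are assumed names for illustration:

```python
from collections import defaultdict

def split_ucl_dcl(contents):
    """Group duplicate contents by hash key; the copy with the highest
    DVP goes to the UCL, every other copy to the DCL for eviction."""
    groups = defaultdict(list)
    for c in contents:
        groups[c["hash"]].append(c)
    ucl, dcl = [], []
    for copies in groups.values():
        copies.sort(key=lambda c: c["dvp"], reverse=True)
        ucl.append(copies[0])   # max-DVP copy kept, as in Eq (38)
        dcl.extend(copies[1:])  # remaining copies evicted, as in Eq (39)
    return ucl, dcl

contents = [
    {"hash": "h1", "sbs": 1, "dvp": 0.7},
    {"hash": "h1", "sbs": 3, "dvp": 0.9},  # higher-validity copy of h1
    {"hash": "h2", "sbs": 2, "dvp": 0.4},
]
ucl, dcl = split_ucl_dcl(contents)
# h1 is kept at SBS 3; the SBS-1 copy is scheduled for eviction.
```

Each content thus survives in exactly one cache, which is the invariant the MFCL enforces before broadcasting the UCL.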

4.1.5 Duplication elimination at SBSs and broadcast to the MSs.

After removing the duplicate contents from the MBS’s cache, the UCL and DCL are sent to all the SBSs. The elimination of the duplicate contents, as suggested by the MFCL, is driven by the DVP values in the DCL, as shown in (39). The contents with a low DVP are evicted from their caches, and the time of eviction is recorded. The evicted contents are subsequently reported to the MBS for updating, as shown in (20).

Additionally, each SBS Bj broadcasts the UCL to all its serving MSs Uj,i, which subsequently use it for matching. The contents of the distributed cache are updated continuously, and their popularity changes for reasons such as the number of uploads, downloads, shares, views, etc. Therefore, the UCL is updated and broadcast to all the MSs periodically to ensure content consistency between the distributed cache and the MSs.

4.1.6 Proposed (DC)2 scheme.

Based on this discussion, an algorithm called Disparate Contents of Distributed Cache (DC)2 is proposed, as shown in Algorithm 1. Each of the previous sections, from (4.1.1) to (4.1.5), corresponds to a step of the proposed algorithm. The proposed (DC)2 algorithm consists of 5 major steps. In step 1, all the caches at the SBSs and MBS are processed, followed by a similarity check in step 2. The algorithm then checks the validity of the contents and generates the final lists in steps 3 and 4, respectively. Finally, duplicate contents are removed from all the target SBSs, and the un-duplicated list is sent to all the MSs to perform content matching in step 5.

Algorithm 1: Disparate Contents of Distributed Cache (DC)2

1 Input: , , , , Uj,i.

2 Output: UCL, DCL.

3 Step 1: Processing of cache contents at SBSs and MBS

4 Tf,T,T←0, Counters of all, unduplicated and duplicated contents; ← the threshold of similarity;

5 Create by adding ;

6 for (j = 1, j ≤ M, j++) do

7  for (l = 1, l ≤ w, l++) do

8   Create new row in ;

9   Add and its Attributes to (25);

10   Tf++;

11 MFCL created (25) ← list of all distributed cached contents;

12 Step 2: Filtering similar contents

13 for (y = 1, y ≤ Tf, y++) do

14   ← Temporary list of duplicated contents;

15  for (y′ = y + 1, y′ ≤ Tf, y′++) do

16   dissim(y, y′) ← 0;

17   Clear D(T+1)×K;

18   for (q = 1, q ≤ κ, q++) do

19    if ( ∣∣ = Null) ∣∣ ( ∣∣ = 0) & (Pf is Asymmetric) then

20     

21    else

22     

23    if (Numeric ∣∣ Interval ∣∣ Ratio Attributes) then

24      ← according to (27);

25    if (Nominal Attributes) then

26      ← according to (28);

27    if (Ordinal Attributes) then

28      ← according to (29);

29    Add Dissim(y, y′);

30   Compute Dissimilarity dissim(y, y′) according to (26);

31   sim(y, y′) ← Calculate Similarity according to (30);

32   Add sim(y, y′) ⇒ (31);

33   Step 3: Checking validity of cache contents

34   if (sim(y, y′)) then

35     ← Compute DVP according to (33);

36    Add to ;

37     ← Compute DVP according to (33);

38    Add to ;

39   Step 4: Final List Generation

40   Add Max UCLT×κ (38);

41   T++;

42   Add others (39);

43   T = T + Count() − 1;

44   Clear ;

45 Step 5: Duplication elimination at SBSs and Broadcast to the MSs.

46 for (l = 1, l ≤ T, l++) do

47  for (j = 1, j ≤ M, j++) do

48   if ∈ () then

49    Remove () (as );

50 Each SBS Bj broadcasts the UCL to all its serving MSs Uj,i.

51 Return UCL & DCL.
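The five steps of Algorithm 1 can be sketched in Python. This is a minimal illustration under stated assumptions, not the paper's implementation: the `similarity` and `dvp` callables stand in for Eqs (26)–(30) and Eq (33), the threshold value is illustrative, and contents are modeled as opaque objects.

```python
SIM_THRESHOLD = 0.9  # illustrative value of the similarity threshold

def dc2(caches, similarity, dvp):
    """Sketch of Algorithm 1 (DC)2: build UCL and DCL from per-SBS caches.

    caches:     dict {sbs_id: [content, ...]}
    similarity: callable (content, content) -> value in [0, 1] (Eqs (26)-(30))
    dvp:        callable (content) -> DVP score (Eq (33))
    """
    # Step 1: master list (MFCL) of all distributed cached contents
    mfcl = [(sbs, c) for sbs, items in caches.items() for c in items]

    ucl, dcl, seen = [], [], set()
    for i, (sbs_i, c_i) in enumerate(mfcl):
        if i in seen:
            continue
        # Step 2: group contents that are similar to c_i
        group = [(sbs_i, c_i)]
        for j in range(i + 1, len(mfcl)):
            if j not in seen and similarity(c_i, mfcl[j][1]) >= SIM_THRESHOLD:
                group.append(mfcl[j])
                seen.add(j)
        # Steps 3-4: the copy with the highest DVP goes to the UCL;
        # every other duplicate copy goes to the DCL
        group.sort(key=lambda t: dvp(t[1]), reverse=True)
        ucl.append(group[0])
        dcl.extend(group[1:])
    # Step 5 (not shown): evict every (sbs, content) pair in dcl from its
    # SBS cache and broadcast the UCL to all serving MSs
    return ucl, dcl
```

Step 5 then amounts to removing every (SBS, content) pair in `dcl` from its cache and broadcasting `ucl` to the served MSs.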

4.2 Cache-assisted uplink with MS-enabled matching

This section describes the mechanism of performing content matching at an MS after it receives the UCL. The process of matching is performed in three steps, as described below. In the first step, an MS that intends to upload new content to its serving SBS Bj makes a consolidated list comprising the attributes of the new content and the UCL. In the second step, the MS performs matching to check whether the content is available in the cache. Lastly, the content is only uploaded in case of a cache miss, i.e., unavailability of the content in the cache. If the content is enlisted in the UCL, the MS does not upload the content; instead, an MoTD is sent to its serving SBS. It is worth mentioning that performing matching at an MS significantly improves the spectrum and energy efficiency. The aforementioned steps are described in detail as follows.

4.2.1 Consolidated list of attributes of new content and UCL.

This subsection provides details of constructing a consolidated list of the attributes of the new content and the existing UCL. The main steps are as follows.

  1. Firstly, an MS creates a list comprising the new content and its attributes, as shown in (40). (40) where represents the new content attributes.
  2. The unduplicated contents of the distributed cache are shown in (38), which is combined with (40) by using a row-wise combination function to generate a consolidated list. The consolidated list is denoted by O(T+1)×K and is given by; (41)
    The last row in (41) contains the values of the attributes of the new content.
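The row-wise combination of (41) can be sketched as follows. This is a simplified illustration: the function name and the plain-list attribute representation are our assumptions, not the paper's notation.

```python
def consolidate(ucl_rows, new_content_attrs):
    """Row-wise combination of the UCL (Eq (38)) with the new content's
    attribute vector (Eq (40)), yielding the O_(T+1)xK list of Eq (41).
    The new content occupies the last row by construction."""
    assert all(len(r) == len(new_content_attrs) for r in ucl_rows), \
        "all rows must share the same K attributes"
    return ucl_rows + [new_content_attrs]

# e.g. two cached contents with K = 3 attributes plus one new content
O = consolidate([[5, "video", 2], [9, "image", 1]], [5, "video", 3])
# O[-1] is the new content's attribute row
```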

4.2.2 Matching the attributes of new content and UCL.

After constructing the consolidated list, an MS starts matching to check whether the new content is available in the cache. Using (41), the matching between the attributes of the new content and the contents in the distributed cache is performed through the same process as explained in Section (4.1.2), especially Eqs (26)–(30).

According to [69], matching is performed between a control item and a treated item. In this work, the control item is the UCL, while the new content is the target content. The matching results in a dissimilarity matrix, as shown in (42). (42) The last row of (42) shows the dissimilarity between the new and cached contents, which is used to decide whether to upload or discard the new content.
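Eqs (26)–(30) are referenced but not reproduced here; the sketch below uses a standard Gower-style treatment consistent with the attribute cases in Algorithms 1 and 2 (range-normalized distance for numeric attributes, simple matching for nominal ones, rank-normalized distance for ordinal ones). Function and parameter names are illustrative assumptions.

```python
def attr_dissim(a, b, kind, spread=1.0):
    """Per-attribute dissimilarity d_q in [0, 1] for one attribute pair.
    kind: 'numeric' (Eq (27) style), 'nominal' (Eq (28) style),
          'ordinal' (Eq (29) style, values given as ranks)."""
    if a is None or b is None:          # missing value: attribute skipped
        return None
    if kind == 'numeric':
        return abs(a - b) / spread      # range-normalized difference
    if kind == 'nominal':
        return 0.0 if a == b else 1.0   # simple matching
    if kind == 'ordinal':
        return abs(a - b) / spread      # ranks normalized by the rank range
    raise ValueError(kind)

def dissim(x, y, kinds, spreads):
    """Aggregate dissimilarity (Eq (26) style): mean of the usable d_q."""
    ds = [attr_dissim(a, b, k, s) for a, b, k, s in zip(x, y, kinds, spreads)]
    ds = [d for d in ds if d is not None]
    return sum(ds) / len(ds)

def sim(x, y, kinds, spreads):
    """Similarity (Eq (30) style): complement of the dissimilarity."""
    return 1.0 - dissim(x, y, kinds, spreads)
```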

4.2.3 Uploading dissimilar content and ignoring similar content.

After matching is performed, the content is either uploaded or an MoTD is sent. If a match is found, the content is not uploaded and an MoTD is sent instead of the actual content. If no match is found, the content is uploaded. In other words, contents are only uploaded in case of a cache miss.

4.2.4 Proposed UCMM scheme.

Based on the discussion above, an algorithm called Uplink Caching with Mobile Matching (UCMM) is proposed, as shown in Algorithm 2 and Fig 2-(Part-2). The proposed UCMM performs matching between the attributes of the new content and the UCL at the MS level to determine whether the new content is already in a cache, which eliminates the need to upload duplicate contents. Each of the previous sections, from (4.2.1) to (4.2.3), corresponds to a step of the proposed algorithm. In step 1, the attributes of the new content and the UCL contents are consolidated into a single list, followed by a similarity check in step 2. Finally, step 3 decides whether to upload dissimilar content or to ignore similar content by sending an MoTD to the serving SBS.

Algorithm 2: Uplink Caching with Mobile Matching (UCMM)

1 Input: UCLT×κ, .

2 Output: Matching factor MatchingR, Miss/Hit factor Missn or Hitn.

3 Missn ←0, Hitn ←0, MatchingR ← false, ← Value of threshold of Similarity;

4 D(T+1)×K ← Dissimilarity matrix(42);

5 Step-1: Consolidated List of Attributes of New Content and UCL.

6 Create ;

7 Get UCLT×κ;

8 Create O(T+1)×K refers(41) by consolidated (38 and 40);

9 Step-2: Matching the Attributes of New Content and UCL.

10 for (y = 1, y ≤ (T + 1), y++) do

11   ← 0;

12  Clear D(T+1)×K;

13  for (q = 1, q ≤ κ, q++) do

14   if ( ∣∣ = Null) ∣∣ ( ∣∣ = 0) & (Pf is Asymmetric) then

15    

16   else

17    

18   if (Numeric ∣∣ Interval ∣∣ Ratio Attributes) then

19     ← according to (27);

20   if (Nominal Attributes) then

21     ← according to (28);

22   if (Ordinal Attributes) then

23     ← according to (29);

24   Add ;

25  Compute Dissimilarity according to (26);

26  D(T+1)×K created;

27  Determine value of ;

28  Compute Similarity according to (30);

29  if ( ⩾ ) then

30   MatchingR ← true;

31   set y← T + 1;

32 Step-3: Uploading Dissimilar Content and Ignoring Similar Content.

33 if (MatchingR = true) then

34  Hitn+=1;

35  Send MoTD;

36 else

37  Missn+=1;

38  An MS sends a request to upload the new content;

39  An MS receives the ;

40  Start uploading ;

41 Return MatchingR, Missn/Hitn.
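The upload decision at the heart of Algorithm 2 can be condensed as follows. This is a sketch under assumptions: `sim` stands for the similarity of Eq (30), and the threshold value is illustrative.

```python
SIM_THRESHOLD = 0.9  # illustrative value of the similarity threshold

def ucmm_decide(new_attrs, ucl_rows, sim):
    """UCMM core: match the new content against every UCL row at the MS.
    Returns ('hit', index) when a cached match is found, so the MS sends
    only an MoTD; ('miss', None) otherwise, so the MS uploads the content."""
    for idx, row in enumerate(ucl_rows):
        if sim(new_attrs, row) >= SIM_THRESHOLD:
            return 'hit', idx           # cached copy exists: send MoTD
    return 'miss', None                 # not cached: upload new content
```

On a hit, the MS transmits only the small MoTD message, saving the uplink energy and spectrum that a full content upload would consume.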

4.3 Content segmentation and distributed placement

In the case of distributed caching, the size of contents is a major limiting factor for cache effectiveness [27, 38, 56]. New content may be too large to be accommodated efficiently in a single cache. Therefore, it is recommended that the new content be split into smaller segments of the same size to be stored distributively. In this vein, we propose a scheme that divides the new content into Q different segments as a function of the new content size and the available cache space of the corresponding and neighboring SBSs, as shown in Fig 2-(Part-3) and (Fig 4). The rationale of this content segmentation is effective placement, owing to the smaller segment size, in the distributed cache across multiple SBSs. According to [56], Q is equal to , where 1 is the local SBS Bj and count() is the number of neighboring SBSs, which have a non-void intersection with and enough free space to cache at least a segment as .
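Under the counting rule from [56], the number of segments can be computed as below; the function and parameter names (`num_segments`, `seg_size`, the free-space list) are illustrative assumptions.

```python
def num_segments(neighbors_free_space, seg_size):
    """Q = 1 (the local SBS B_j) + the number of neighboring SBSs with
    enough free cache space to hold at least one segment of size seg_size."""
    return 1 + sum(1 for free in neighbors_free_space if free >= seg_size)

# e.g. three neighbors with 4, 0.5 and 2 GB free and 1 GB segments give Q = 3
```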

Fig 4. Segmentation of new content across distributed cache.

https://doi.org/10.1371/journal.pone.0268294.g004

To facilitate distributed storage, the segment(s) are cached in a target distributed cache based on its free space according to (20) and (21). An MDS code method is used to encode the new content into packets, as mentioned in Section (3.2.2).

The above discussion is formalized in a new proposed scheme, called Content Segmentation with Distributed Placement (CSDP), as shown in Algorithm 3 and Fig 2-(Part-3). The CSDP consists of 2 major steps. Step 1 prepares the required information based on the size and hash key of the new content; in addition, the free space of the local SBS and its neighboring SBSs is considered. Step 2 corresponds to splitting the new content into Q segments, encoding, and cache placement of the segmented contents in a distributed manner.

Algorithm 3: Content Segmentation with Distributed Placement (CSDP) Scheme

1 Input: , , SBSj, .

2 Output: .

3 Step 1: Preparing the required information.

4 : The size of the new content;

5 : The hash key of the new content;

6 Get the free space of the target SBSs according to (20 and 21);

7 // Set the target SBSs;

8 Step 2: Split new content and cache it distributively

9 // total number of segments;

10 for (q = 1, q ≤ Q, q++) do

11  Check the according to (20 and 21);

12  if (q = 1) & () then

13   Store Segq or set of Segqs in Bj;

14   Encode using MDS code, ;

15  else

16   Select from ;

17   Check of neighboring SBSs according to (20 and 21);

18  if () then

19   Store Segq or set of Segqs in ;

20   Encode using MDS code, ;

21 Create the map of the set encoded packets: ;

22 Change the status of new content to be a cached content;

23 Return .
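Algorithm 3 can be sketched as below. This is a simplified model under assumptions: contents are byte strings, SBS free-space bookkeeping is a plain dict, the round-robin placement is our simplification of the free-space check of (20) and (21), and the MDS encoding step is only indicated in a comment (in practice each segment would be encoded into packets with an (n, k) MDS code).

```python
def csdp(content, local_free, neighbors, seg_size):
    """Sketch of CSDP: split `content` (bytes) into equal-size segments and
    place them on the local SBS B_j first, then on neighbors with room.

    local_free: free cache space (bytes) of the local SBS B_j
    neighbors:  dict {sbs_id: free_space_bytes}
    Returns a placement map {sbs_id: [segment, ...]} ('local' = B_j).
    """
    # Step 1: candidate targets = local SBS plus neighbors that can hold
    # at least one segment (free-space check of (20) and (21), simplified)
    targets = (["local"] if local_free >= seg_size else []) + \
              [s for s, free in neighbors.items() if free >= seg_size]
    if not targets:
        raise ValueError("no SBS has room for even one segment")
    # Step 2: split into Q segments and place them across the targets
    segments = [content[i:i + seg_size] for i in range(0, len(content), seg_size)]
    placement = {}
    for q, seg in enumerate(segments):
        sbs = targets[q % len(targets)]            # local SBS gets segment 1
        placement.setdefault(sbs, []).append(seg)  # encode(seg) via MDS here
    return placement
```

The returned map plays the role of the encoded-packet map of Algorithm 3; the content's status is then changed to cached.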

4.4 Complexity analysis of the proposed framework

As discussed in the previous sections, M is the total number of distributed caches. Each cache stores w contents, with a total of Tf contents in the whole distributed network, and each content has κ attributes. Additionally, the numbers of unduplicated and duplicated contents among the distributed caches are T and T, respectively. The proposed framework consists of three parts: (DC)2, UCMM, and CSDP.

The overall complexity of the proposed framework is (O((M.w)2) + O(M.w) + O(M)), which is computed for the three parts as follows:

4.4.1 Complexity of the (DC)2 algorithm.

The (DC)2 algorithm eliminates the duplicated contents among the distributed caches, as shown in Algorithm 1, and is executed through several operations. The main iterative operations have the following complexities:

  1. The operation of processing the cache contents at the distributed caches has complexity O(M.w).
  2. The operation of filtering similar contents by matching the attributes of every pair of contents has complexity O(Tf2).
  3. Eliminating the duplicate contents from the target distributed caches has complexity O(T).
  4. Finally, the overall complexity of the (DC)2 algorithm is O(Tf2) + O(M.w) + O(T), where Tf = (M.w) and T ≤ Tf. Hence, the dominant term gives an overall complexity of O((M.w)2) for (DC)2.

4.4.2 Complexity of UCMM algorithm.

Algorithm 2, called UCMM, solves the problem of cache-assisted uplink without duplication of the mobile data to be uploaded. The overall complexity of UCMM is O(1) in the best case and O(M.w) in the worst case.

4.4.3 Complexity of CSDP algorithm.

Algorithm 3, called CSDP, solves the problem of segmenting large new content and caching it distributively. The new content is segmented into Q smaller segments to be cached distributively in the target caches, where Q is equal to or smaller than M. The segmentation of the new content has complexity O(Q), while caching the segments of each content has complexity O(M). Therefore, the overall complexity of the CSDP is O(M).

5 Experimental design and evaluation

We have compared our proposed framework with different scenarios: uplink with no caching (No-cache), cache-assisted uplink (Each-cache) [33], and uplink with collaborative distributed caching (SBS-CoDc) [38]. As its name implies, the no-caching scenario assumes the absence of caching at the SBS, while the cache-assisted uplink considers the uplink with the support of a cache. The simulation parameters are shown in Table 3.

5.1 Performance evaluation metrics

In our system model, , , and ECN denote the average energy consumption of the MSs, the SBSs, and the overall network, respectively. In addition, , , and denote the uplink data rate of an MS, an SBS, and the overall network, respectively. For a comprehensive evaluation, the performance of the proposed framework is compared with the existing schemes in [33, 38]. The following metrics are used, which are computed as in (43)–(55).

5.1.1 Measurements of cache availability and efficiency.

According to (3) and (4), the cache hit ratio of the overall network is given as (43) where is the cache hit ratio of an SBS Bj and is the cache miss ratio of the same SBS.

Similarly, the cache miss ratio of the overall network is given as (44)

5.1.2 Throughput.

Throughput (TH) is defined as the average amount of data successfully transmitted per second, in GB/s [70], and is computed as follows:

The throughput of an MS (THi) is given as, (45) where is the total amount of successfully transmitted data and TT is the transmission time.

The average throughput of MSs () is given as, (46)

The average throughput of SBSs () is given as, (47)

5.1.3 Energy efficiency measurements.

The Energy Efficiency (EE) of the uplink is the ratio of the uplink data rate to the energy consumption, measured in GB/J/s. The Area Energy Efficiency (AEE) is measured based on both the energy consumption and the size of the coverage area, in GB/J per unit area (km2) [66, 67, 71].

In this regard, the average EE of the MSs () can be calculated as, (48)

The average EE of SBSs () can be calculated as, (49)

Furthermore, the network AEE is computed in two ways. Firstly, based on the uplink data rate and energy consumption, it is denoted by and can be calculated as, (50)

Secondly, based on the average EE of the MSs and SBSs in addition to the coverage size, it is denoted by and can be calculated as, (51)

5.1.4 Spectral efficiency measurements.

Spectral Efficiency (SE) is defined as the uplink data rate per unit bandwidth, measured in GB/s/Hz, which is an important metric representing the radio resource utilization of the network [67, 72].

According to (19), the average SE of the MSs () can be calculated as follows (52)

Similarly, the average SE of the SBSs () can be calculated as follows (53)

In addition, the Area Spectral Efficiency (ASE) (GB/s/Hz per unit area km2) can be calculated as follows (54)

5.1.5 Overall Cache Efficiency (OCE) measurements.

The OCE is the ratio of the cumulative cache hits to the cumulative demands, which reflects the overall number of cache hits until a specific time (time slot t). The OCE is denoted by CO and is given, according to [73], as (55) where is the cumulative distributed cache hit ratio, while SD is the cumulative demands.
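The metrics (43)–(55) can be summarized in code. This is a hedged sketch: the function and variable names are ours, and the formulas follow the verbal definitions above (ratios of hits to requests, and of data to time, energy, bandwidth, and area), not the exact notation of the equations.

```python
def cache_hit_ratio(hits, requests):
    """Eq (43)/(44) style: hit ratio and its complement, the miss ratio."""
    hit = hits / requests
    return hit, 1.0 - hit

def throughput(data_gb, transmit_time_s):
    """Eq (45): successfully transmitted data per second (GB/s)."""
    return data_gb / transmit_time_s

def energy_efficiency(uplink_rate, energy_j):
    """Eqs (48)-(49): uplink data rate per unit energy (GB/J/s)."""
    return uplink_rate / energy_j

def area_energy_efficiency(uplink_rate, energy_j, area_km2):
    """Eq (50): EE normalized by the coverage area (GB/J per km^2)."""
    return uplink_rate / energy_j / area_km2

def spectral_efficiency(uplink_rate, bandwidth_hz):
    """Eqs (52)-(53): uplink data rate per unit bandwidth (GB/s/Hz)."""
    return uplink_rate / bandwidth_hz

def overall_cache_efficiency(cum_hits, cum_demands):
    """Eq (55): cumulative hits over cumulative demands up to slot t."""
    return cum_hits / cum_demands
```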

6 Numerical results

The simulation is performed considering the proposed framework and the existing models, namely, No-Cache, Each-cache [33], and SBS-CoDc [38], along with their sub-models in different scenarios, using the same simulation parameters listed in Table 3. The results are validated with the different performance metrics presented in (5.1) and shown in the figures below.

6.1 Cache hit and miss ratio

Fig 5 shows the average cache hit and miss ratios of Each-cache and SBS-CoDc along with our proposed framework, according to (43) and (44).

We can see a slight increase in the average cache hit ratio of SBS-CoDc compared to Each-cache, because Each-cache lacks cooperation among SBSs; its cache hit ratio is computed separately at each SBS and summed subsequently. Our proposed framework outperforms SBS-CoDc by improving the cache hit ratio by 9%. The main reason is the effective use of the CSDP scheme, which improves the cache hit ratio. We can also see that our proposed framework significantly reduces the cache miss ratio.

The rationale is the distribution of contents among different SBSs along with the UCL, which acts as a map for the MSs to efficiently locate the cached contents. Due to these improvements, the traffic load on the access network, as well as on the backhaul link, is significantly reduced, which subsequently improves the EE and SE.

6.2 Average Energy Consumption (EC)

The average energy consumption of MSs and SBSs is shown in Figs 6 and 7, respectively. Our proposed framework is compared with No-Cache, Each-cache, and SBS-CoDc. The percentage improvements of average energy consumption of MSs and SBSs are shown in Table 4.

Table 4. Percentage improvement of average EC of MSs and SBSs.

https://doi.org/10.1371/journal.pone.0268294.t004

We can see that our proposed framework performs significantly better than the No-Cache and Each-Cache scenarios. Moreover, improvements of about 17% and 13% are noticed as compared to SBS-CoDc, which also has the inherent advantages of distributed scenarios. The reason is that our proposed scheme uses the UCL for matching at an MS. As the cache hit ratio of our proposed scheme is improved, it also positively affects the average energy consumption by limiting the amount of unnecessary data uploaded.

6.3 Improvement of uplink throughput

The average Uplink Throughput (TH) of our proposed framework is compared with the existing schemes in Figs 8 and 9 for the MSs and SBSs, respectively. The percentage improvements are shown in Table 5.

Table 5. Percentage improvement of average throughput of MSs and SBSs.

https://doi.org/10.1371/journal.pone.0268294.t005

We can see that the THs of our proposed framework for both MSs and SBSs are significantly better than those of the No-Cache scheme, for obvious reasons. Similarly, compared to Each-Cache, our proposed framework performs better due to its distributed nature. The results show that our framework is better than SBS-CoDc because of the effective use of MoTD rather than uploading new content. Furthermore, compared to the existing schemes, matching is done at an MS rather than at an SBS, which avoids the content upload and subsequently improves the TH.

6.4 Improvement in energy efficiency

Energy efficiency (EE) is an effective way to show performance improvement. As one can predict, the lower energy consumption shown previously also entails a higher EE. Our proposed framework is evaluated based on the EE and compared with the existing schemes. The EE of the MSs and SBSs with an increasing number of MSs is shown in Figs 10 and 11, respectively. All the percentage improvements of the proposed framework compared with the existing schemes are shown in Table 6.

Fig 10. Comparison average EE of MSs among existing schemes.

https://doi.org/10.1371/journal.pone.0268294.g010

Fig 11. Comparison average EE of SBSs among existing schemes.

https://doi.org/10.1371/journal.pone.0268294.g011

Table 6. Percentage improvement of average EE of MSs and SBSs.

https://doi.org/10.1371/journal.pone.0268294.t006

We can see in Fig 10 that the average EE of our proposed framework is better than that of the existing schemes. Compared to SBS-CoDc, our proposed framework improves the average EE by 41%. The rationale is the use of the UCL for matching at the MS, which effectively avoids the content upload in case of a cache hit. Similarly, Fig 11 shows the improved average EE of the SBSs of our proposed framework with an increasing number of MSs. Our proposed framework outperforms the existing schemes, improving the average EE by 46% as compared to SBS-CoDc. The main reason is the improved hit ratio of our proposed framework, which implicitly improves the average EE because of the use of the MoTD that represents the new contents.

We have also used the Area Energy Efficiency (AEE) metric to show the performance improvement of our proposed scheme. We computed the AEE in two ways: firstly as a function of the uplink data rate, and secondly as a function of the EE. Figs 12 and 13 show the AEE of No-cache, Each-cache, SBS-CoDc, and the proposed framework for different numbers of MSs based on the two ways mentioned, respectively.

Fig 12. Comparison AEE based on uplink data rate and EC of the overall network.

https://doi.org/10.1371/journal.pone.0268294.g012

Fig 12 shows the AEE performance as a function of the uplink data rate, which is computed by dividing the total uplink data rate by the total energy consumption per unit area based on (50). Fig 12 shows that our proposed framework increases the AEE by 2.34 GB/J/km2 as compared to SBS-CoDc. A summary of the comparison with the other schemes is shown in Table 7.

Furthermore, Fig 13 shows the AEE performance as a function of the cell size, which is computed by dividing the total energy efficiency by the total size of the network based on (51), in order to assess the EE of the overall network relative to its size. In Fig 13, we can see that our proposed framework increases the average AEE by 43% as compared to SBS-CoDc. These improvements are credited to the improved hit ratio, higher TH, and better EE.

6.5 Spectral efficiency

The average spectral efficiency (SE) of the MSs for No-cache, Each-cache, SBS-CoDc, and our proposed framework for different numbers of MSs is shown in Fig 14. We can see that our proposed framework improves the SE by almost 16% as compared to SBS-CoDc, whereas compared to Each-Cache, the improvement is almost 37%. Similarly, Fig 15 shows the average SE of the SBSs of the existing schemes as compared to our proposed framework. The summary of the improvements is shown in Table 8.

Fig 15. Average spectral efficiency of SBSs as compared to existing schemes.

https://doi.org/10.1371/journal.pone.0268294.g015

Table 8. Percentage improvement of average SE of MSs and SBSs.

https://doi.org/10.1371/journal.pone.0268294.t008

The rationale for the improved SE is a significant reduction in the number of uplink contents, since matching is done at an MS, which facilitates the decision of whether or not to upload the contents. In case of a cache hit, the contents are not uploaded and the bandwidth is saved for the requests of the remaining MSs. In this way, a significant amount of spectrum can be saved and more requests can be entertained, which ultimately improves the SE.

Fig 16 shows the ASE of No-cache, Each-cache, SBS-CoDc, and the proposed framework for different numbers of MSs. We can see in Fig 16 that our proposed framework improves the ASE by 24% as compared to SBS-CoDc.

6.6 Improvement of overall distributed cache efficiency

The overall cache efficiency (OCE) of the distributed cache for Each-cache, SBS-CoDc, and our proposed framework for different numbers of MSs is shown in Fig 17.

Fig 17 shows the OCE of the distributed cache of the existing schemes as compared to our proposed framework. We can see that our proposed framework improves the OCE by almost 28% and 7.41% as compared to Each-cache and SBS-CoDc, respectively. This is because the contents of the distributed cache are available to all the MSs, irrespective of their serving SBSs. In addition, the distributed cache miss ratio is lower, which improves the cache efficiency and reduces the cache access time.

7 Conclusion

This paper proposed an efficient uplink cache framework based on a distributed scenario. The proposed framework leverages content matching at an MS, in contrast to the existing schemes, which perform it at an SBS. Content matching at an MS significantly improves the energy and spectral efficiency; the rationale is the reduced number of uplink contents due to local content matching at the MS. Furthermore, the proposed framework is based on the effective distribution of cache contents over cooperative SBSs, which improves the cache hit ratio. This entails subsequent improvements in throughput, energy consumption, and spectrum usage for the MSs as well as the SBSs. Our analysis shows that our proposed framework improves the EE and SE of the access network by 41.28% and 15.58%, respectively. Furthermore, increases of 46.18% and 28.00% are calculated for the EE and SE of the backhaul link, respectively. Moreover, the OCE of the proposed framework improves by 28% and 7.41% as compared to Each-Cache and SBS-CoDc, respectively.

References

  1. 1. Cisco Annual Internet Report (2018-2023), White Paper, Cisco Public.
  2. 2. Ge X., Yang B., Ye J., Mao G., Wang C. and Han T., “Spatial Spectrum and Energy Efficiency of Random Cellular Networks,” in IEEE Transactions on Communications, vol. 63, no. 3, pp. 1019–1030, March 2015.
  3. 3. M. Katz, P. Pirinen and H. Posti, “Towards 6G: Getting Ready for the Next Decade,” 2019 16th International Symposium on Wireless Communication Systems (ISWCS), 2019, pp. 714-718.
  4. 4. Mattera D. and Tanda M. J. P. C., “Windowed OFDM for small-cell 5G uplink,” Physical Communication, vol. 39, p. 100993, 2020.
  5. 5. Chen L., Huang W., Deng D., Xia J., Chen B., and Zhu F., “Multi-antenna processing based cache-aided relaying networks for B5G communications,” Physical Communication, p. 101141, 2020.
  6. 6. Xia J. et al., “Cache-aided mobile edge computing for B5G wireless communication networks,” EURASIP Journal on Wireless Communications and Networking, vol. 2020, no. 1, p. 15, 2020.
  7. 7. Salam T., Rehman W. U. and Tao X., “Data Aggregation in Massive Machine Type Communication: Challenges and Solutions,” in IEEE Access, vol. 7, pp. 41921–41946, 2019.
  8. 8. Rehman W. U., Salam T., Almogren A., Haseeb K., Ud Din I. and Bouk S. H., “Improved Resource Allocation in 5G MTC Networks,” in IEEE Access, vol. 8, pp. 49187–49197, 2020.
  9. 9. B. Serbetci and J. Goseling, “On Optimal Geographical Caching in Heterogeneous Cellular Networks,” 2017 IEEE Wireless Communications and Networking Conference (WCNC), 2017, pp. 1-6.
  10. 10. Y. Hua, “Distributed caching in the future generation networks,” Loughborough University, 2020.
  11. 11. Salam T., Tao X., Chen Y., and Zhang P., “A trust framework based smart aggregation for machine type communication,” Science China Information Sciences, vol. 60, no. 10, pp. 1–15, 2017.
  12. 12. H. Elshaer, F. Boccardi, M. Dohler and R. Irmer, “Downlink and Uplink Decoupling: A disruptive architectural design for 5G networks,” 2014 IEEE Global Communications Conference, 2014, pp. 1798-1803.
  13. 13. Li L., Zhao G. and Blum R. S., “A Survey of Caching Techniques in Cellular Networks: Research Issues and Challenges in Content Placement and Delivery Strategies,” in IEEE Communications Surveys & Tutorials, vol. 20, no. 3, pp. 1710–1732, thirdquarter 2018.
  14. 14. Salam T., Rehman W. U. and Tao X., “Cooperative Data Aggregation and Dynamic Resource Allocation for Massive Machine Type Communication,” in IEEE Access, vol. 6, pp. 4145–4158, 2018.
  15. 15. Y. Xu, Z. Wang, Y. Li, T. Lin, W. An and S. Ci, “Minimizing Bandwidth Cost of CCN: A Coordinated In-Network Caching Approach,” 2015 24th International Conference on Computer Communication and Networks (ICCCN), 2015, pp. 1-7.
  16. 16. Wu X., Li Q., Leung V. C. M. and Ching P. C., “Joint Fronthaul Multicast and Cooperative Beamforming for Cache-Enabled Cloud-Based Small Cell Networks: An MDS Codes-Aided Approach,” in IEEE Transactions on Wireless Communications, vol. 18, no. 10, pp. 4970–4982, Oct. 2019.
  17. 17. Wu X., Li Q., Li X., Leung V. C. M. and Ching P. C., “Joint Long-Term Cache Updating and Short-Term Content Delivery in Cloud-Based Small Cell Networks,” in IEEE Transactions on Communications, vol. 68, no. 5, pp. 3173–3186, May 2020.
  18. 18. Sun Y., Zhu Z. and Fan Z., “Distributed Caching in Wireless Cellular Networks Incorporating Parallel Processing,” in IEEE Internet Computing, vol. 22, no. 1, pp. 52–61, Jan./Feb. 2018.
  19. 19. Hou R., Cai J., and Lui K.-S., “Distributed cache-aware CoMP transmission scheme in dense small cell networks with limited backhaul,” Computer Communications, vol. 138, pp. 11–19, 2019.
  20. 20. A. Sadeghi, A. G. Marques and G. B. Giannakis, “Distributed Network Caching via Dynamic Programming,” ICASSP 2019—2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019, pp. 4574-4578.
  21. 21. P. Ostovari, J. Wu, and A. Khreishah, “Efficient online collaborative caching in cellular networks with multiple base stations,” in 2016 IEEE 13th International Conference on Mobile Ad Hoc and Sensor Systems (MASS), 2016: IEEE, pp. 136-144.
  22. 22. Lu B., Zhu F., Xia J., Li X., Zhou W., and Fan L. J. P. C., “Interference suppression by exploiting wireless cache in relaying networks for B5G communications,” Physical Communication, vol. 42, p. 101162, 2020.
  23. 23. Tamoor-ul-Hassan S., Bennis M., Nardelli P. H. J. and Latva-aho M., “Caching in Wireless Small Cell Networks: A Storage-Bandwidth Tradeoff,” in IEEE Communications Letters, vol. 20, no. 6, pp. 1175–1178, June 2016.
  24. 24. Kuang S. and Liu N., “Analysis and Optimization of Random Caching in Multi-Antenna Small-Cell Networks With Limited Backhaul,” IEEE Transactions on Vehicular Technology, vol. 68, no. 8, pp. 7789–7803, 2019.
  25. 25. M. Karaliopoulos, L. Chatzieleftheriou, G. Darzanos and I. Koutsopoulos, “On the Joint Content Caching and User Association Problem in Small Cell Networks,” 2020 IEEE International Conference on Communications Workshops (ICC Workshops), 2020, pp. 1-6.
  26. 26. K. Tokunaga, K. Kawamura, and N. Takaya, “High-speed uploading architecture using distributed edge servers on multi-RAT heterogeneous networks,” in 2016 IEEE International Symposium on Local and Metropolitan Area Networks (LANMAN), 2016: IEEE, pp. 1-2.
  27. 27. Q. Jia, R. Xie, T. Huang, J. Liu and Y. Liu, “Energy-efficient cooperative coded caching for heterogeneous small cell networks,” 2017 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), 2017, pp. 468-473.
  28. Zhang Z., Chen Z., and Xia B., “Cache-enabled uplink transmission in wireless small cell networks,” in Proc. 2018 IEEE International Conference on Communications (ICC), 2018, pp. 1–6.
  29. Papazafeiropoulos A. and Ratnarajah T., “Modeling and performance of uplink cache-enabled massive MIMO heterogeneous networks,” IEEE Transactions on Wireless Communications, vol. 17, no. 12, pp. 8136–8149, 2018.
  30. Sharma S. K. and Wang X., “Distributed caching enabled peak traffic reduction in ultra-dense IoT networks,” IEEE Communications Letters, vol. 22, no. 6, pp. 1252–1255, 2018.
  31. Teng L., Yu X., Tang J., and Liao M., “Proactive caching strategy with content-aware weighted feature matrix learning in small cell network,” IEEE Communications Letters, vol. 23, no. 4, pp. 700–703, 2019.
  32. Song F. et al., “Probabilistic caching for small-cell networks with terrestrial and aerial users,” IEEE Transactions on Vehicular Technology, vol. 68, no. 9, pp. 9162–9177, 2019.
  33. Sufyan M. M. A.-E. et al., “Duplication elimination in cache-uplink transmission over B5G small cell network,” EURASIP Journal on Wireless Communications and Networking, vol. 2021, no. 1, pp. 1–24, 2021.
  34. Liu J. and Sun S., “Energy efficiency analysis of cache-enabled cooperative dense small cell networks,” IET Communications, vol. 11, no. 4, pp. 477–482, 2017.
  35. Zhang X., Ren Y., and Lv T., “Energy Efficiently Caching and Transmitting Scalable Videos in HetNets,” in Proc. 2018 IEEE International Conference on Communications Workshops (ICC Workshops), 2018, pp. 1–6.
  36. Furqan M., Yan W., Zhang C., Iqbal S., Jan Q., and Huang Y., “An Energy-Efficient Collaborative Caching Scheme for 5G Wireless Network,” IEEE Access, vol. 7, pp. 156907–156916, 2019.
  37. Khan B. S., Jangsher S., Qureshi H. K., and Mumtaz S., “Energy efficient caching in cooperative small cell network,” in Proc. 2019 16th IEEE Annual Consumer Communications & Networking Conference (CCNC), 2019, pp. 1–6.
  38. Zhang H., Wang Y., Ji H., and Li X., “A Sleeping Mechanism for Cache-Enabled Small Cell Networks With Energy Harvesting Function,” IEEE Transactions on Green Communications and Networking, vol. 4, no. 2, pp. 497–505, 2020.
  39. Zhao J., Wang W., Qu H., Zhao S., and Ren G., “Joint caching policies for optimizing energy costs/offloading probability for D2D and millimeter-wave small cell underlaying cache-enabled networks,” Transactions on Emerging Telecommunications Technologies, vol. 31, no. 3, p. e3804, 2020.
  40. Liu D. and Yang C., “Caching Policy Toward Maximal Success Probability and Area Spectral Efficiency of Cache-Enabled HetNets,” IEEE Transactions on Communications, vol. 65, no. 6, pp. 2699–2714, June 2017.
  41. Li T., Liu J., Sheng M., and Li J., “Spectral efficiency optimization in caching enabled ultra-dense small cell networks,” in Proc. IEEE INFOCOM 2018—IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), 2018, pp. 492–498.
  42. Xie H., Hou R., Lui K., and Li H., “A Spectral Efficiency Guaranteed Caching Scheme in Small Cell Networks,” in Proc. 2018 IEEE 87th Vehicular Technology Conference (VTC Spring), 2018, pp. 1–5.
  43. Hou R., Huang K., Xie H., Lui K.-S., and Li H., “Caching and resource allocation in small cell networks,” Computer Networks, p. 107100, 2020.
  44. Zhang C., Wu H., Lu H., and Liu J., “Throughput analysis in cache-enabled millimeter wave HetNets with access and backhaul integration,” in Proc. 2020 IEEE Wireless Communications and Networking Conference (WCNC), 2020, pp. 1–6.
  45. Sheng M., Teng W., Chu X., Li J., Guo K., and Qiu Z., “Cooperative Content Replacement and Recommendation in Small Cell Networks,” IEEE Transactions on Wireless Communications, vol. 20, no. 3, pp. 2049–2063, 2020.
  46. Liu C., Zhang H., Ji H., and Li X., “MEC-assisted flexible transcoding strategy for adaptive bitrate video streaming in small cell networks,” China Communications, vol. 18, no. 2, pp. 200–214, Feb. 2021.
  47. Mahboob S., Kar K., and Chakareski J., “Decentralized Collaborative Video Caching in 5G Small-Cell Base Station Cellular Networks,” arXiv preprint, 2021.
  48. Nie T., Luo J., Gao L., Zheng F.-C., and Yu L., “Cooperative Edge Caching in Small Cell Networks with Heterogeneous Channel Qualities,” in Proc. 2020 IEEE 91st Vehicular Technology Conference (VTC2020-Spring), 2020, pp. 1–6.
  49. Yang X. and Thomos N., “An approximate dynamic programming approach for collaborative caching,” Engineering Optimization, vol. 53, no. 6, pp. 1005–1023, 2021.
  50. Wu D., Liu B., Yang Q., and Wang R., “Social-aware cooperative caching mechanism in mobile social networks,” Journal of Network and Computer Applications, vol. 149, p. 102457, 2020.
  51. Lin P., Song Q., Song J., Jamalipour A., and Yu F. R., “Cooperative caching and transmission in CoMP-integrated cellular networks using reinforcement learning,” IEEE Transactions on Vehicular Technology, vol. 69, no. 5, pp. 5508–5520, 2020.
  52. Shu P. and Du Q., “Group Behavior-Based Collaborative Caching for Mobile Edge Computing,” in Proc. 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), 2020, vol. 1, pp. 2441–2447.
  53. Wu H., Li J., Zhi J., Ren Y., and Li L., “Edge-oriented Collaborative Caching in Information-Centric Networking,” in Proc. 2019 IEEE Symposium on Computers and Communications (ISCC), 2019, pp. 1–6.
  54. Ren D., Gui X., Zhang K., and Wu J., “Hybrid collaborative caching in mobile edge networks: An analytical approach,” Computer Networks, vol. 158, pp. 1–16, 2019.
  55. Yang X. and Thomos N., “A rolling-horizon dynamic programming approach for collaborative caching,” arXiv preprint arXiv:1907.13516, 2019.
  56. Xu X. and Tao M., “Modeling, Analysis, and Optimization of Caching in Multi-Antenna Small-Cell Networks,” IEEE Transactions on Wireless Communications, vol. 18, no. 11, pp. 5454–5469, 2019.
  57. Zhang S., Sun W., and Liu J., “Spatially cooperative caching and optimization for heterogeneous network,” IEEE Transactions on Vehicular Technology, vol. 68, no. 11, pp. 11260–11270, 2019.
  58. Wang Y.-T., Cai Y.-Z., Chen L.-A., Lin S.-J., and Tsai M.-H., “Backhaul-Based Cooperative Caching in Small Cell Network,” in Proc. International Conference on Advanced Information Networking and Applications, Springer, 2019, pp. 725–736.
  59. Bioglio V., Gabry F., and Land I., “Optimizing MDS Codes for Caching at the Edge,” in Proc. 2015 IEEE Global Communications Conference (GLOBECOM), 2015, pp. 1–6.
  60. Sloss A., Symes D., and Wright C., ARM System Developer’s Guide: Designing and Optimizing System Software. Elsevier, 2004.
  61. Bernardini C., Silverston T., and Festor O., “A Comparison of Caching Strategies for Content Centric Networking,” in Proc. 2015 IEEE Global Communications Conference (GLOBECOM), 2015, pp. 1–6.
  62. Tian F., Chen X., and Zhang Z., “Robust Design for Massive Access in B5G Cellular Internet of Things,” in Proc. 2019 11th International Conference on Wireless Communications and Signal Processing (WCSP), Xi’an, China, 2019, pp. 1–6.
  63. Tian F. and Chen X., “Energy-Efficient Design for Massive Access in B5G Cellular Internet of Things,” in Proc. 2020 IEEE Wireless Communications and Networking Conference (WCNC), 2020, pp. 1–6.
  64. Qadri M. and McDonald-Maier K., “Data cache-energy and throughput models: design exploration for embedded processors,” EURASIP Journal on Embedded Systems, vol. 2009, no. 1, p. 725438, 2009.
  65. Chen L.-M., Zou X.-C., Lei J.-M., and Liu Z.-L., “Dynamic cache resources allocation for energy efficiency,” The Journal of China Universities of Posts and Telecommunications, vol. 16, no. 1, pp. 121–126, 2009.
  66. Yan Z., Chen S., Ou Y., and Liu H., “Energy efficiency analysis of cache-enabled two-tier HetNets under different spectrum deployment strategies,” IEEE Access, vol. 5, pp. 6791–6800, 2017.
  67. Sboui L., Rezki Z., Sultan A., and Alouini M., “A New Relation Between Energy Efficiency and Spectral Efficiency in Wireless Communications Systems,” IEEE Wireless Communications, vol. 26, no. 3, pp. 168–174, June 2019.
  68. He Y., Wang M., Yu J., He Q., Sun H., and Su F., “Research on the Hybrid Recommendation Method of Retail Electricity Price Package Based on Power User Characteristics and Multi-Attribute Utility in China,” Energies, vol. 13, no. 11, p. 2693, 2020.
  69. Christen P., “The data matching process,” in Data Matching. Springer, 2012, pp. 23–35.
  70. Çelebi H., Yapıcı Y., Güvenç İ., and Schulzrinne H., “Load-Based On/Off Scheduling for Energy-Efficient Delay-Tolerant 5G Networks,” IEEE Transactions on Green Communications and Networking, vol. 3, no. 4, pp. 955–970, Dec. 2019.
  71. Chang K.-C., Chu K.-C., Wang H.-C., Lin Y.-C., and Pan J.-S., “Energy saving technology of 5G base station based on Internet of Things collaborative control,” IEEE Access, vol. 8, pp. 32935–32946, 2020.
  72. Luo Y., Shi Z., Bu F., and Xiong J., “Joint Optimization of Area Spectral Efficiency and Energy Efficiency for Two-Tier Heterogeneous Ultra-Dense Networks,” IEEE Access, vol. 7, pp. 12073–12086, 2019.
  73. Liu J., Li D., and Xu Y., “Collaborative Online Edge Caching With Bayesian Clustering in Wireless Networks,” IEEE Internet of Things Journal, vol. 7, no. 2, pp. 1548–1560, Feb. 2020.