
Deep learning for 1-bit compressed sensing-based superimposed CSI feedback

Abstract

In frequency-division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems, 1-bit compressed sensing (CS)-based superimposed channel state information (CSI) feedback offers many advantages, but it still faces challenges such as low accuracy of the downlink CSI recovery and large processing delay. To overcome these drawbacks, this paper proposes a deep learning (DL) scheme to improve 1-bit CS-based superimposed CSI feedback. On the user side, the downlink CSI is compressed with the 1-bit CS technique, superimposed on the uplink user data sequences (UL-US), and then sent back to the base station (BS). At the BS, based on a model-driven approach and assisted by superimposition-interference cancellation, a multi-task detection network is first constructed to detect both the UL-US and the downlink CSI. In particular, this detection network is jointly trained to detect the UL-US and downlink CSI simultaneously, capturing globally optimized network parameters. Then, with the recovered bits of the downlink CSI, a lightweight reconstruction scheme, which consists of an initial feature extraction of the downlink CSI using a simplified traditional method followed by a single-hidden-layer network, is utilized to reconstruct the downlink CSI with low processing delay. Compared with the 1-bit CS-based superimposed CSI feedback scheme, the proposed scheme improves the recovery accuracy of the UL-US and downlink CSI with lower processing delay and is robust against parameter variations.

Introduction

Massive multiple-input multiple-output (MIMO) has become a key technology of the fifth-generation (5G) wireless communication system due to its advantages in system capacity and link robustness [1, 2]. As a prerequisite for these advantages, the base station (BS) needs to obtain accurate downlink channel state information (CSI), on which it relies for precoding [3], antenna selection [4], radio resource allocation [5], and interference management [6]. In time-division duplex (TDD) mode, the downlink CSI can be obtained from the uplink CSI by exploiting channel reciprocity [7, 8]. In frequency-division duplex (FDD) mode, it is difficult to exploit channel reciprocity because the uplink and downlink use different frequency bands [9, 10]. Thus, the downlink CSI is usually estimated by the users and fed back to the BS in FDD massive MIMO systems [9]. However, due to the large number of antennas in massive MIMO systems, CSI feedback incurs significant overhead, resulting in serious uplink bandwidth occupation.

To reduce feedback overhead, many compressed sensing (CS)-based CSI feedback methods have emerged [11-14]. In recent years, deep learning (DL)-based CSI feedback methods [15-17] have been proposed to further reduce the feedback overhead. Although the feedback overhead is reduced to some extent, both CS-based and DL-based CSI feedback still occupy significant uplink bandwidth resources. To avoid this occupation, superimposed CSI feedback was proposed in [18], yet it causes mutual interference due to the superimposition operation. In [10, 19, 20], 1-bit CS-based, DL-based, and extreme learning machine (ELM)-based superimposed CSI feedback schemes were proposed, respectively, to reduce this mutual interference. Inspired by the advantages of superimposed CSI feedback based on 1-bit CS and DL, we propose a DL-based 1-bit superimposed CSI feedback scheme in this paper.

Related works

In FDD massive MIMO systems, DL-based CSI feedback methods have been investigated along two lines: superimposed CSI feedback, e.g., [10, 19, 20], and feedback-overhead reduction, e.g., [21-26].

For reducing feedback overhead, DL-based data-driven CSI feedback can be divided into two categories. The first category is mainly based on the combination of the CS and DL techniques, while the other applies the DL technique to quantized data. In the first category, [21] was the first application of DL to CSI feedback. In [21], the CSI feedback was based on a convolutional neural network called CsiNet, which achieved superior performance over various CS-based CSI feedback methods. Yet the time correlation, frequency correlation, spatial correlation, feedback delay, and feedback errors were not considered in CsiNet, which limited its applications. To remedy these defects, improvements have been proposed in [22-24]. In [22], CsiNet long short-term memory (CsiNet-LSTM) was proposed by exploiting the time correlation, which makes it suitable for practical application in time-varying channels. The recurrent neural network-based CsiNet in [23] was developed to capture the temporal and frequency correlations of wireless channels. Considering the spatial correlation among antennas, the bidirectional LSTM (Bi-LSTM) and bidirectional convolutional LSTM (Bi-ConvLSTM) were proposed in [24]. The other category of feedback reduction for DL-based CSI feedback is mainly based on quantization, e.g., [25, 26]. In [25], a bit-level CsiNet+ was proposed, which made the CSI feedback network applicable to real communication systems and minimized the introduced quantization distortion to improve the reconstruction quality. By employing quantization and entropy coding blocks in a fully convolutional network, the work in [26] achieved a substantial improvement in CSI reconstruction quality even at extremely low feedback rates. Although the DL-based CSI feedback in [21-26] achieves significant feedback reduction compared with CS-based approaches, the uplink bandwidth resources are still heavily occupied in massive MIMO scenarios.

To avoid the occupation of uplink bandwidth resources, superimposed CSI feedback schemes were proposed in [18-20]. In [18], the downlink CSI was spread and then superimposed on the uplink user data sequences (UL-US) as feedback to the BS, while the recoveries of the UL-US and downlink CSI were degraded by the superimposition interference. To remedy this defect, a DL-based superimposed CSI feedback was proposed in [19], and an ELM-based superimposed CSI feedback with lower computational complexity was proposed in [20]. Considering simplicity and cost-effectiveness, a low-cost CSI feedback using 1-bit CS was studied in [27], in which the 1-bit operation discards the signal amplitude and retains only the sign information. In that work, the downlink CSI was quantized by 1-bit CS to achieve low-cost feedback, but it still occupied uplink bandwidth resources. To remedy this defect, superimposed CSI feedback and the 1-bit CS technique were combined in [10], presenting many advantages, e.g., the avoidance of uplink-bandwidth occupation and the reduction of mutual interference. However, it still faces challenges in recovery accuracy and processing delay [28].

By integrating the advantages of deep learning and inspired by the 1-bit CS-based superimposed CSI feedback in [10], we propose a DL-based 1-bit superimposed CSI feedback scheme in this paper. First, the downlink CSI is compressed by the 1-bit CS technique and then superimposed on the UL-US as feedback to the BS. At the BS, to recover the bit information of both the UL-US and downlink CSI, a multi-task detection network with transmitted-signal feature extraction is first constructed. Then, with the recovered bits of the downlink CSI, a lightweight reconstruction network, which consists of an initial feature extraction of the downlink CSI with a simplified traditional method and a single-hidden-layer network, is utilized to reconstruct the downlink CSI with low processing delay. The proposed scheme inherits the advantages of the 1-bit CS-based superimposed CSI feedback in [10], i.e., no uplink bandwidth is occupied for CSI feedback and the interference is effectively cancelled, while the recovery accuracies of both the UL-US and downlink CSI are improved.

Contributions

In this paper, a DL-based 1-bit superimposed CSI feedback scheme is proposed to improve the 1-bit CS-based superimposed CSI feedback approach in [10]. To the best of our knowledge, there is little literature on DL-based 1-bit superimposed CSI feedback, and deep learning has not yet been introduced into 1-bit superimposed feedback. The main contributions of this paper are as follows:

  • We propose a DL-based scheme for 1-bit CS-based superimposed CSI feedback. By exploiting the nonlinear mapping and feature extraction abilities of DL, we develop a detection network and a reconstruction network to further suppress the nonlinear superimposition interference and improve the detection and reconstruction performance. The proposed scheme retains the advantages of 1-bit CS-based superimposed CSI feedback [10], while obtaining better recovery accuracy for both the UL-US and downlink CSI with much lower processing delay.
  • We construct a multi-task detection network to recover the bit information of both the UL-US and downlink CSI, based on a model-driven approach and assisted by superimposition-interference cancellation. This detection network is jointly trained to detect the UL-US and downlink CSI simultaneously, capturing globally optimized network parameters. We exploit the ability of DL to solve nonlinear problems for the superimposition separation, which shortens the processing delay while improving the detection performance without any second-order statistics of the channel and noise.
  • We develop a lightweight reconstruction network by combining the linear approximation ability of the traditional superimposed coding-aided binary iterative hard thresholding (SCA-BIHT) algorithm with the ability of deep learning to handle nonlinear problems. In this network, the initial feature of the downlink CSI is extracted by the SCA-BIHT algorithm with only a few iterations, and a single-hidden-layer refinement network is then constructed to refine the downlink CSI reconstruction. The reconstruction network not only greatly reduces the number of iterations of the traditional SCA-BIHT algorithm to improve efficiency, but also achieves better reconstruction performance of the downlink CSI with lower processing delay.

The remainder of this paper is structured as follows. Section II introduces the system model of the 1-bit superimposed CSI feedback. The DL-based 1-bit superimposed CSI feedback method is presented in Section III, followed by numerical results in Section IV. Finally, Section V concludes our work.

Notations: Boldface upper-case and lower-case letters denote matrices and vectors, respectively. (⋅)^T and (⋅)^† denote transpose and matrix pseudo-inverse, respectively. IP is the identity matrix of size P × P. BN(⋅) denotes the operation of batch normalization. ‖⋅‖2 is the Euclidean norm. sign(⋅) denotes the operator taking sign information, i.e., it returns +1 for positive numbers and 0 otherwise. Re(⋅) and Im(⋅) represent the real- and imaginary-part operations, respectively. K(x) represents computing the best K-term approximation of x by thresholding. ⊙ denotes the Hadamard product of two vectors or matrices.

System model

The system model is shown in Fig 1. Consider a massive MIMO system consisting of one BS with N antennas and U single-antenna users. After matched filtering, the received signal from user-u, u = 1, 2, …, U, denoted as Ru, is given as (1), where gu denotes the uplink channel vector from user-u to the BS, Nu is the circularly symmetric complex Gaussian (CSCG) noise of the feedback link, and P is the length of the UL-US. To avoid occupying the limited and crowded uplink bandwidth resources [29, 30], the transmitted signal of user-u, denoted as xu, adopts the superimposition technology and is given by [10] as (2), where ρ ∈ [0, 1] is the power proportional coefficient of the downlink CSI, Eu is the transmit power of user-u, and su and du stand for the modulated superimposition signal and the UL-US, respectively.
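
For concreteness, the following minimal Python/NumPy sketch illustrates the superimposition step around Eq (2). Since the rendered equation is not reproduced here, the square-root power weighting with sqrt(ρEu) and sqrt((1-ρ)Eu) is an assumption based on the description of ρ as a power proportional coefficient, and all names are illustrative only.

```python
import numpy as np

def superimpose(s_u, d_u, rho, E_u):
    """Assumed form of Eq (2): weight the modulated superimposition signal s_u
    and the UL-US d_u by the power proportional coefficient rho and the
    transmit power E_u, then add them."""
    return np.sqrt(rho * E_u) * s_u + np.sqrt((1.0 - rho) * E_u) * d_u

# toy usage with QPSK-like unit-energy sequences of length P (illustrative values)
P = 512
rng = np.random.default_rng(0)
d_u = (rng.choice([1, -1], P) + 1j * rng.choice([1, -1], P)) / np.sqrt(2)
s_u = (rng.choice([1, -1], P) + 1j * rng.choice([1, -1], P)) / np.sqrt(2)
x_u = superimpose(s_u, d_u, rho=0.10, E_u=1.0)
```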

In this paper, the downlink CSI hu is a sparse vector with sparsity K [10], i.e., hu has only K non-zero elements. According to the 1-bit CS technique [31], hu is compressed as (3), where Φ is the measurement matrix [10], and yreal,u and yimag,u denote the real and imaginary parts of the compressed CSI, respectively.
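
As a sketch of this compression, the snippet below applies 1-bit CS separately to the real and imaginary parts of hu, following the sign(⋅) convention in the Notations (+1 for positive entries, 0 otherwise). The measurement-matrix dimensions and normalization are illustrative assumptions.

```python
import numpy as np

def one_bit_cs(h_u, Phi):
    """Sketch of Eq (3): 1-bit compression of the real and imaginary parts of
    the downlink CSI h_u with the measurement matrix Phi. Per the paper's
    sign(.) convention, positive entries map to 1 and the rest to 0."""
    sign01 = lambda v: (v > 0).astype(int)
    return sign01(Phi @ h_u.real), sign01(Phi @ h_u.imag)

# toy usage: N antennas, M measurements, K-sparse CSI (illustrative values)
N, M, K = 64, 128, 6
rng = np.random.default_rng(1)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # Gaussian measurement matrix
h_u = np.zeros(N, dtype=complex)
support = rng.choice(N, K, replace=False)
h_u[support] = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
y_real_u, y_imag_u = one_bit_cs(h_u, Phi)
```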

For the convenience of digital modulation, the support set of the downlink CSI hu, denoted as zu ∈ {0, 1}N, is labelled in bit form [10], i.e., as in (4), where zu,k and hu,k are the k-th elements of zu and hu, respectively. In order to reconstruct a more accurate downlink CSI at the BS, zu needs to be fed back to the BS together with yreal,u and yimag,u by using the feedback vector pu. The feedback vector pu is formed by merging yreal,u, yimag,u, and zu [10], as in (5). It is worth noting that pu can be viewed as a bit stream whose elements are only 0 or 1. With digital modulation, we have (6), where fmodu(⋅) denotes the mapping function of digital modulation, such as quadrature phase shift keying (QPSK). In Eq (6), pu is mapped to the modulated feedback vector (MFV) wu of length L = ⌈(2M + N)/2⌉. Without loss of generality, the UL-US length P is larger than L because the UL-US carries the main user-service task [19, 20]. Similar to [10, 20], to superimpose the MFV with the UL-US, a spread-spectrum method is utilized, which captures spreading gain to suppress the interference caused by the superimposition processing. Thus, the superimposition signal su, given in Eq (2), is obtained by using a spreading matrix to spread the MFV wu, i.e., as in (7), where Qu is a spreading matrix satisfying an orthogonality condition, e.g., the Walsh matrix [32]. By combining Eqs (2) and (7), the transmitted signal xu of user-u is rewritten as (8).
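
To make Eqs (4)-(8) concrete, the following sketch builds the feedback bit stream pu, maps it to an MFV with a toy Gray-coded QPSK constellation, and spreads it with a placeholder spreading matrix. The exact constellation, the bit ordering inside pu, and the normalization of Qu are assumptions not reproduced from the paper.

```python
import numpy as np

def support_bits(h_u):
    """Sketch of Eq (4): z_u marks the non-zero entries of h_u with 1."""
    return (np.abs(h_u) > 0).astype(int)

def build_feedback_vector(y_real_u, y_imag_u, z_u):
    """Sketch of Eq (5): merge the 1-bit measurements and support bits into p_u."""
    return np.concatenate([y_real_u, y_imag_u, z_u]).astype(int)

def qpsk_map(bits):
    """Toy QPSK mapping for Eq (6): bit pairs -> unit-energy symbols
    (0 -> +1, 1 -> -1 on each rail); the paper's exact mapping may differ."""
    if bits.size % 2:
        bits = np.append(bits, 0)          # pad to an even number of bits
    b = 1 - 2 * bits.reshape(-1, 2)
    return (b[:, 0] + 1j * b[:, 1]) / np.sqrt(2)

# toy usage: p_u has 2M + N bits, so the MFV length is L = ceil((2M + N)/2)
M, N, P = 128, 64, 512
rng = np.random.default_rng(2)
z_u = np.zeros(N, dtype=int)
p_u = build_feedback_vector(rng.integers(0, 2, M), rng.integers(0, 2, M), z_u)
w_u = qpsk_map(p_u)                                             # length L
Q_u = rng.choice([1.0, -1.0], size=(P, w_u.size)) / np.sqrt(P)  # Walsh-like placeholder
s_u = Q_u @ w_u                                                 # Eq (7): spread the MFV
```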

At user-u, the downlink CSI hu is compressed by using 1-bit CS (given in Eq (3)), and the transmitted signal xu is then formed by weighting and superimposing the UL-US du and the superimposition signal su according to Eqs (2)-(8). With the received Ru at the BS, the detection network and reconstruction network are designed to detect the UL-US du and the superimposition signal su, and to recover the downlink CSI hu, respectively. The detection and reconstruction networks are detailed in Section III.

DL-based superimposed CSI feedback using 1-bit CS

In this section, according to the superimposed CSI feedback scheme with 1-bit CS [10], the detection network and reconstruction network are developed to recover the UL-US and downlink CSI. A transmitted-signal feature extraction is first employed to coarsely extract the transmitted-signal feature by equalizing the uplink wireless channel. Then, with the extracted feature, we design the detection network and reconstruction network.

Transmitted signal feature extraction

From Eqs (2)-(8), the transmitted signal xu is formed by superimposing the UL-US du and the modulated superimposition signal su. To recover du and su, the transmitted signal xu should first be extracted, and thus the uplink channel gu in Eq (1) needs to be removed by channel equalization. Following [10, 19], transmitted-signal feature extraction is employed in this paper. That is, the uplink wireless channel is equalized through zero-forcing (ZF) equalization so as to extract the transmitted-signal feature, as given in (9), where the result is the coarsely extracted vector of the transmitted signal xu. It should be noted that, relative to ZF equalization, minimum mean square error (MMSE) channel equalization can obtain better feature-extraction performance, but at a higher computational complexity. In particular, MMSE equalization requires second-order statistics of the uplink channel gu and the noise Nu [10, 18], which leads to application difficulties. Therefore, we use low-complexity ZF equalization to extract the transmitted-signal feature, leaving the feature improvement to the subsequent detection network.
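
A minimal sketch of this ZF feature extraction is given below. It assumes the rank-one received-signal model Ru ≈ gu xuᵀ + Nu implied by Eq (1) (the rendered equation is not reproduced here), in which case the ZF solution reduces to applying the pseudo-inverse of the channel vector gu to Ru; all variable names are illustrative.

```python
import numpy as np

def zf_feature_extraction(R_u, g_u):
    """Sketch of Eq (9): zero-forcing equalization of the uplink channel to
    coarsely extract the transmitted signal. For a single-antenna user, the
    pseudo-inverse of the column vector g_u is conj(g_u)^T / ||g_u||^2."""
    g_pinv = np.conj(g_u) / (np.linalg.norm(g_u) ** 2)
    return g_pinv @ R_u          # length-P coarse estimate of x_u

# toy usage under the assumed model R_u = g_u x_u^T + noise
N, P = 64, 512
rng = np.random.default_rng(3)
g_u = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
x_u = (rng.standard_normal(P) + 1j * rng.standard_normal(P)) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((N, P)) + 1j * rng.standard_normal((N, P)))
x_tilde = zf_feature_extraction(np.outer(g_u, x_u) + noise, g_u)
```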

With the extracted transmitted-signal feature, we construct the detection network to detect the UL-US du and the superimposition signal su. From Eq (7), su is obtained by spreading the MFV wu. In addition, the compressed downlink CSI yreal,u and yimag,u can be recovered from wu (given in Eqs (3)-(6)).

Detection network

In order to eliminate the superimposition interference and obtain better downlink CSI and UL-US recovery accuracy, the detection network is designed by using the unfolding method [33]. That is, the iteration steps in [10] are replaced by cascaded groups of CSI-Net and Det-Net comprising six subnets, i.e., CSI-Net1, Det-Net1, CSI-Net2, Det-Net2, CSI-Net3, and Det-Net3, in which the UL-US du and the MFV wu are detected by solving a multi-task problem.

Architecture.

The architecture of the detection network is illustrated in Fig 2. For convenience and ease of implementation, we use the simplest single-hidden-layer neural network architecture to design CSI-Neti and Det-Neti (i = 1, 2, 3). Experimental verification shows that this architecture is not only simple but also improves performance. The architecture of the detection network is described as follows:

  • CSI-Net1, Det-Net1, CSI-Net2, Det-Net2, CSI-Net3, and Det-Net3 are successively cascaded to form the multi-task network. To reduce mutual interference, expert knowledge, i.e., the interference cancellation technology [18, 19], is inserted between the cascaded subnets. In more detail, the CSI interference reduction (CSI IR) is introduced between CSI-Neti and Det-Neti (i = 1, 2, 3), while the UL-US interference reduction (UL-US IR) is inserted between Det-Neti and CSI-Net(i + 1) (i = 1, 2).
  • The same network structures are employed by the CSI-Neti and Det-Neti (i = 1, 2, 3). Each subnet consists of an input layer, a hidden layer, and an output layer with a fully connected mode. For each CSI-Neti (DET-Neti) (i = 1, 2, 3), the number of neurons in the input layer, hidden layer, and output layer are 2L (2P), 4L (4P), and 2L (2P), respectively.
  • For each subnet, a batch normalization (BN) is employed to normalize its input sets, converting the subnet input to zero mean and unit variance.
  • The activation functions of linear activation, leaky rectified linear unit (LReLU) [34] and hyperbolic tangent (Tanh) are adopted by the input layer, hidden layer and output layer of each subnet, respectively.
  • The outputs of CSI-Net3 and Det-Net3 are the detected MFV and the detected UL-US, respectively.

The network architecture is summarized in Table 1.
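
As an illustration of a single CSI-Neti or Det-Neti subnet described above, the NumPy sketch below applies batch normalization to the input, a fully connected hidden layer with LReLU, and a Tanh output layer, with the hidden width twice the input/output width as in Table 1. The weights are untrained placeholders and the LReLU slope is an assumption.

```python
import numpy as np

def leaky_relu(x, alpha=0.01):            # slope value is an assumption
    return np.where(x > 0, x, alpha * x)

def batch_norm(x, eps=1e-6):
    """Normalize each input feature to zero mean and unit variance over the batch."""
    return (x - x.mean(axis=0)) / (x.std(axis=0) + eps)

def subnet_forward(x, W1, b1, W2, b2):
    """One CSI-Net_i / Det-Net_i subnet (cf. Eqs (12) and (14)):
    BN -> fully connected hidden layer (LReLU) -> output layer (Tanh)."""
    h = leaky_relu(batch_norm(x) @ W1 + b1)
    return np.tanh(h @ W2 + b2)

# toy CSI-Net dimensions: input/output size 2L, hidden size 4L (Table 1)
L, batch = 160, 8
rng = np.random.default_rng(4)
x = rng.standard_normal((batch, 2 * L))
W1, b1 = 0.1 * rng.standard_normal((2 * L, 4 * L)), np.zeros(4 * L)
W2, b2 = 0.1 * rng.standard_normal((4 * L, 2 * L)), np.zeros(2 * L)
w_hat = subnet_forward(x, W1, b1, W2, b2)   # shape (batch, 2L)
```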

Process of detection network.

Data preprocessing. Due to the requirement of real-valued data sets in common DL frameworks, we transform the coarsely extracted complex-valued vector, the UL-US du, and the MFV wu into real-valued vectors, i.e., according to (10).

To match the real-valued operation, the spreading matrix Qu is also transformed into a real-valued matrix, which is obtained as (11).

Then, to train the detection network, the real-valued coarse extraction is employed as the network input, while the real-valued MFV and UL-US are used as the training labels of CSI-Neti and Det-Neti (i = 1, 2, 3), respectively. In addition, to facilitate a unified description of the sub-network inputs in the detection network, a common symbol is used to represent the input of the detection network.
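
The complex-to-real conversion of Eqs (10) and (11) can be sketched as follows. The block layout of the real-valued spreading matrix is an assumption chosen so that spreading commutes with the conversion; the paper's exact arrangement may differ.

```python
import numpy as np

def c2r_vec(v):
    """Sketch of Eq (10): stack the real and imaginary parts of a complex vector."""
    return np.concatenate([v.real, v.imag])

def c2r_spreading(Q):
    """Sketch of Eq (11): real-valued counterpart of the spreading matrix,
    arranged so that c2r_vec(Q @ w) == c2r_spreading(Q) @ c2r_vec(w)."""
    return np.block([[Q.real, -Q.imag],
                     [Q.imag,  Q.real]])

# quick consistency check with random complex data
rng = np.random.default_rng(5)
P, L = 512, 160
Q = rng.standard_normal((P, L)) + 1j * rng.standard_normal((P, L))
w = rng.standard_normal(L) + 1j * rng.standard_normal(L)
assert np.allclose(c2r_vec(Q @ w), c2r_spreading(Q) @ c2r_vec(w))
```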

Processing procedure. The processing procedure of trained detection network is given in Table 2, and some steps are explained as follows.

Process of CSI-Neti: CSI-Neti (i = 1, 2, 3) is used to detect the MFV, as expressed in (12), where σ1 and σ2 denote the LReLU and Tanh activation functions, respectively, and the remaining terms in Eq (12) are the weights and biases of the hidden layer and output layer. We use CSI-Neti to detect the MFV wu and obtain the corresponding network output, as described in steps (1-1), (2-1), and (3-1) in Table 2.

CSI IR: In steps (1-2), (2-2), and (3-2) in Table 2, to reduce the interference from the MFV, the CSI IR spreads the detected MFV to cancel its contribution, as expressed in (13), where the real-valued spreading matrix is obtained according to Eq (11). The result is then fed into Det-Neti to detect the UL-US.

Process of Det-Neti: Det-Neti (i = 1, 2, 3) is used to detect the UL-US, as expressed in (14), where the corresponding terms denote the weights and biases of its hidden layer and output layer, respectively.

UL-US IR: In steps (1-4) and (2-4) in Table 2, to reduce the interference from the UL-US, the outputs of Det-Neti (i = 1, 2) are processed by expert knowledge, as expressed in (15).

With the process given in Table 2, the UL-US du and the MFV wu are detected in their real-valued forms. Then, with the detected MFV, we develop the reconstruction network to recover the downlink CSI hu.
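
Putting the steps of Table 2 together, the sketch below cascades three CSI-Net/Det-Net stages with the two interference-reduction steps. The subnets are passed in as callables (e.g., the subnet_forward sketch above), and the power weights used inside CSI IR and UL-US IR are assumptions, since Eqs (13) and (15) are not reproduced here.

```python
import numpy as np

def detection_network(x_r, Q_r, csi_nets, det_nets, rho, E_u):
    """Structural sketch of the trained detection network (Table 2).
    Each stage detects the MFV (CSI-Net_i), cancels its spread contribution
    (CSI IR), detects the UL-US (Det-Net_i), and cancels the UL-US
    contribution before the next stage (UL-US IR)."""
    csi_in = x_r
    for i in range(3):
        w_hat = csi_nets[i](csi_in)                          # steps (i-1): MFV detection
        det_in = x_r - np.sqrt(rho * E_u) * (Q_r @ w_hat)    # steps (i-2): CSI IR (assumed weight)
        d_hat = det_nets[i](det_in)                          # steps (i-3): UL-US detection
        if i < 2:
            csi_in = x_r - np.sqrt((1 - rho) * E_u) * d_hat  # steps (1-4), (2-4): UL-US IR (assumed weight)
    return w_hat, d_hat

# toy usage with untrained placeholder subnets of matching sizes
P, L, rho, E_u = 512, 160, 0.10, 1.0
rng = np.random.default_rng(6)
x_r = rng.standard_normal(2 * P)
Q_r = rng.standard_normal((2 * P, 2 * L)) / np.sqrt(P)
csi_nets = [lambda v: np.tanh(0.01 * rng.standard_normal((2 * L, 2 * P)) @ v) for _ in range(3)]
det_nets = [lambda v: np.tanh(0.01 * rng.standard_normal((2 * P, 2 * P)) @ v) for _ in range(3)]
w_hat_r, d_hat_r = detection_network(x_r, Q_r, csi_nets, det_nets, rho, E_u)
```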

Reconstruction network

A reconstruction network is designed to further improve the reconstruction accuracy of hu beyond the reconstruction algorithm, and to reduce the processing delay caused by the algorithm's many iterations. The reconstruction network is shown in Fig 3, and its processing procedure is summarized in Table 3. In general, the corresponding de-mapping is first employed to restore the compressed downlink CSI. Then, the reconstruction algorithm given in [10], with reduced complexity, is utilized to perform an initial feature extraction of the downlink CSI. Based on this initial feature extraction, two dense layers are used to refine the reconstruction of the downlink CSI. The details are presented as follows.

Inverse mapping operation.

From Eqs (6) and (10), the real-valued detected MFV is formed by digital modulation and by the mapping from complex-valued to real-valued form. Correspondingly, we adopt inverse mappings to recover the complex-valued and unmodulated forms. An inverse mapping is first employed to map the real-valued detection back to its complex-valued form. Then, the digital demodulation mapping, denoted as fdemo(⋅), is used to demodulate this complex-valued vector. The whole inverse mapping process is expressed as (16). Then, the estimate of the sparsity K of the downlink CSI is obtained by counting the number of non-zero entries in the recovered support bits.
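
The sketch below illustrates this inverse mapping for the toy QPSK constellation used earlier: hard-decision demodulation back to bits, followed by splitting the bit stream into the two 1-bit measurement vectors and the support bits, from which the sparsity is estimated. The bit ordering inside pu is the same assumption as before.

```python
import numpy as np

def qpsk_demap(w_hat):
    """Hard-decision demodulation for Eq (16), inverting the toy QPSK mapping
    used earlier (0 -> +1, 1 -> -1 on each rail)."""
    bits_i = (w_hat.real < 0).astype(int)
    bits_q = (w_hat.imag < 0).astype(int)
    return np.stack([bits_i, bits_q], axis=1).reshape(-1)

def split_feedback_bits(p_hat, M, N):
    """Undo the merge of Eq (5): recover y_real, y_imag, and the support bits z_u,
    and estimate the sparsity K as the number of non-zero support bits."""
    y_real = p_hat[:M]
    y_imag = p_hat[M:2 * M]
    z_hat = p_hat[2 * M:2 * M + N]
    return y_real, y_imag, z_hat, int(z_hat.sum())
```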

Initial feature extraction.

With the recovered compressed CSI, support bits, and estimated sparsity, we employ the reconstruction algorithm named SCA-BIHT in [10] to conduct an initial feature extraction of the downlink CSI, while leaving the refinement of the reconstruction to the subsequent refinement network. In particular, this initial feature extraction is executed by SCA-BIHT with only a few iterations instead of the dozens or hundreds of iterations in [10]; β iterations are adopted in this paper. The initial feature extraction procedure is presented in Table 4.
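
For reference, a generic BIHT-style update run for only β iterations is sketched below. It is a simplified stand-in for the SCA-BIHT procedure in Table 4: the support-set bits that SCA-BIHT additionally exploits are not used here, the step size τ is an assumption, and the real and imaginary parts would each be processed this way.

```python
import numpy as np

def biht_init(y_bits, Phi, K, beta=8, tau=1.0):
    """Binary iterative hard thresholding run for beta iterations, used here as
    a simplified sketch of the initial feature extraction (Table 4).
    y_bits is in {0, 1} and is mapped to {-1, +1} for the consistency check."""
    y_pm = 2 * y_bits - 1
    h = np.zeros(Phi.shape[1])
    for _ in range(beta):
        residual = y_pm - np.sign(Phi @ h)                # 1-bit sign-consistency error
        h = h + (tau / y_bits.size) * (Phi.T @ residual)  # gradient-like correction
        h[np.argsort(np.abs(h))[:-K]] = 0.0               # keep only the K largest entries
    return h / (np.linalg.norm(h) + 1e-12)                # 1-bit CS recovers direction, not scale
```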

Based on this initial feature extraction, the extracted initial feature is then input to a single-hidden-layer network to refine the reconstruction accuracy of the downlink CSI hu.

Refinement network.

According to the initial feature extraction, a single-hidden-layer network is employed to refine the reconstruction of the downlink CSI, and its architecture is summarized in Table 5. Similar to CSI-Neti and Det-Neti (i = 1, 2, 3) of the detection network, the refinement network is also designed with the simplest single-hidden-layer neural network architecture.

The initial feature of the downlink CSI and the label hu are complex-valued, and thus need to be mapped to real-valued form, i.e., according to (17). Then, using the refinement network, the refined reconstruction of the downlink CSI is obtained from (18), where W31 (b31) and W32 (b32) denote the weights (biases) of the hidden layer and output layer of the refinement network, respectively.
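
A minimal sketch of Eqs (17)-(18) is given below. The hidden-layer activation (LReLU, matching the detection subnets) and the linear output are assumptions, since Table 5 and the rendered Eq (18) are not reproduced here; the weights are untrained placeholders.

```python
import numpy as np

def refine_csi(h_init, W31, b31, W32, b32, alpha=0.01):
    """Refinement network sketch: map the complex initial feature to real form
    (Eq (17)), apply one hidden layer and the output layer (Eq (18)), and map
    the result back to a complex CSI estimate."""
    x = np.concatenate([h_init.real, h_init.imag])      # Eq (17)
    h = x @ W31 + b31
    h = np.where(h > 0, h, alpha * h)                   # assumed LReLU hidden activation
    out = h @ W32 + b32                                 # assumed linear output layer
    N = h_init.size
    return out[:N] + 1j * out[N:]

# toy usage with placeholder weights (N antennas, hidden width 4N assumed)
N = 64
rng = np.random.default_rng(7)
W31, b31 = 0.1 * rng.standard_normal((2 * N, 4 * N)), np.zeros(4 * N)
W32, b32 = 0.1 * rng.standard_normal((4 * N, 2 * N)), np.zeros(2 * N)
h_init = rng.standard_normal(N) + 1j * rng.standard_normal(N)
h_refined = refine_csi(h_init, W31, b31, W32, b32)
```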

Model training specification

Since model training is significant for network performance, we give the training details in this subsection. In the following, we discuss the training method, data preparation, and loss function, respectively.

Training method.

In this paper, the detection network and reconstruction network are trained separately to reduce the complexity of parameter tuning. For the detection network, six subnetworks need to be trained, i.e., the weights and biases of CSI-Neti and Det-Neti (i = 1, 2, 3). From Fig 2, the detection network is in fact a multi-task network, which generates the estimated MFV and the estimated UL-US. Thus, we jointly train the six subnets of the detection network to resolve this multi-task issue. In the reconstruction network, only the refinement network needs to be trained to optimize its parameters W31, W32, b31, and b32. With the trained detection network and the corresponding initial feature extraction of the reconstruction network, we then train the refinement network on its own.

Data preparation for training.

The training set is acquired by a simulation approach, in which a large number of data samples are generated to train the two networks, i.e., the detection network and the refinement network. Specifically, these data samples are generated as follows.

hu and gu are randomly generated according to the assumed channel distribution. To train the detection network, we first collect the coarsely extracted features according to Eq (9) to form the input sets. We then save the corresponding du and wu as the target sets, where du is formed by QPSK modulation of randomly generated Bernoulli binary sequences. All complex-valued data sets are converted to real-valued form; for example, the input and labels of the detection network are converted according to Eq (10), and the input and label of the refinement network are converted according to Eq (17). In addition, to validate the trained network parameters during the training phase, a validation set is generated by following the same generation method as the training set, and thus we can capture a set of optimized network parameters.

Loss functions.

The detection network is trained by optimizing the weights and biases of each subnet, i.e., CSI-Neti and Det-Neti, to minimize the loss function [35, 36]. In addition, l2 regularization is employed in the detection network to avoid gradient explosions [37]. Thus, the loss function for training the detection network is expressed as (19), where α1 is the regularization coefficient and Θ1 denotes the training parameters, i.e., the weights and biases of the detection network. In Eq (19), loss1 represents the weighted sum of the losses of the six subnets, which is given as (20), where the individual terms are computed from the outputs of CSI-Neti and Det-Neti, respectively. With this detection network, we obtain the detected MFV and UL-US.

With the trained detection network, the reconstruction network is trained according to the recovered compressed CSI, support bits, and estimated sparsity, which are obtained from the detection network and expressed in Eq (16). In the reconstruction network, only the single-hidden-layer refinement network needs to be optimized, and thus its loss function is given by (21), where the error is computed between the estimated downlink CSI and its label, α2 is the regularization coefficient, and Θ2 denotes all training parameters of the refinement network.
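
The two losses can be sketched as below. Per-subnet MSE terms, equal weighting of the six subnet losses, and an l2 penalty summed over all weights and biases are assumptions used for illustration; the paper's exact weighting in Eq (20) is not reproduced.

```python
import numpy as np

def detection_loss(w_outs, d_outs, w_label, d_label, params, alpha1, gamma=None):
    """Sketch of Eqs (19)-(20): a weighted sum of the six subnet losses
    (three CSI-Nets, three Det-Nets) plus an l2 penalty on all parameters."""
    gamma = [1.0] * 6 if gamma is None else gamma
    per_subnet = [np.mean((w - w_label) ** 2) for w in w_outs] \
               + [np.mean((d - d_label) ** 2) for d in d_outs]
    loss1 = sum(g * l for g, l in zip(gamma, per_subnet))
    return loss1 + alpha1 * sum(np.sum(p ** 2) for p in params)

def reconstruction_loss(h_hat, h_label, params, alpha2):
    """Sketch of Eq (21): MSE between the refined CSI and its label plus an
    l2 penalty on the refinement-network parameters."""
    return np.mean((h_hat - h_label) ** 2) + alpha2 * sum(np.sum(p ** 2) for p in params)
```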

To obtain an effective regularization coefficient and to verify the generalization performance of the detection and reconstruction networks, Fig 4 compares the convergence behaviors of LossDet and LossRec under different regularization coefficients (i.e., α1 and α2 taking values 10−9, 10−8, …, 10−4). From Fig 4, we observe that the convergence values of the training loss and validation loss are almost the same, which indicates the excellent generalization performance of the detection and reconstruction networks. In addition, a smaller value of α1 (or α2) leads to a smaller convergence value of the training and validation losses. Yet, since the value of LossRec depends on α2 according to Eq (21), the α2 that minimizes LossRec may not achieve the best reconstruction performance. The optimized α2 is therefore determined by the reconstruction performance of the downlink CSI, which is given in the experimental analysis.

Fig 4.

(a) Training loss of the detection network. (b) Validation loss of the detection network. (c) Training loss of the reconstruction network. (d) Validation loss of the reconstruction network.

https://doi.org/10.1371/journal.pone.0265109.g004

By using the trained detection network and reconstruction network, the UL-US and downlink CSI can be recovered from the proposed scheme. Compared with the 1-bit CS-based superimposed CSI feedback scheme in [10], both the recoveries of the UL-US and downlink CSI are improved by the proposed scheme, while the requirements of second-order statistics of noise are avoided. Besides, these improvements are robust against parameter variations, which will be presented in the experimental analysis.

Experiment results

In this section, we give numerical results for the proposed scheme. Definitions and basic parameters involved in the simulations are first given. Then, to verify the effectiveness of the proposed scheme, we present the bit error rate (BER) of the UL-US and MFV and the normalized mean squared error (NMSE) of the reconstructed downlink CSI. Finally, we compare the online running time of the proposed scheme and the conventional scheme. The source code is available at https://github.com/qingchj851/DL-1BitCS-SC-CSI-Feedback2.

Parameter setting

Definitions involved in the simulations are given as follows. The signal-to-noise ratio (SNR) in decibels (dB) of the signal received at the BS from user-u is defined as in (22) [19]. The NMSE is utilized to evaluate the recovery performance of the downlink CSI and is defined as in (23) [19]. In the experiments, P = 512, N = 64, and the sampling rate c is defined as c = M/N. The measurement matrix is randomly generated from the Gaussian distribution [38], and it is guaranteed that its row vectors and the column vectors of the compressed signal cannot be sparsely represented by each other. The Walsh matrix generated by the Walsh sequence is employed as the spreading matrix Qu [32]. The UL-US du is formed by applying QPSK modulation to randomly generated Bernoulli binary sequences. The training input data sets are generated according to Eqs (1)-(9). Training of the detection network and reconstruction network is carried out under a noise-free setting, which differs from the training of the DL-based network in [19], where the training SNR is set to 5dB. Testing data sets are generated by the same method as the training data sets. The sizes of the training, validation, and testing sets of the detection network are 60,000, 20,000, and 20,000, respectively. For the reconstruction network, 45,000, 15,000, and 15,000 samples are employed for training, validation, and testing, respectively. For both the detection network and the reconstruction network, we use the Adam optimizer as the training algorithm, and the number of epochs and the learning rate are set to 50 and 0.001, respectively. In the simulations, we stop the BER testing when at least 1000 bit errors are observed [19, 20].
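
A small sketch of the evaluation metrics is given below. The NMSE form (recovery error normalized by the true CSI energy, averaged over samples and reported in dB) follows the usual definition in [19] and is an assumption here, since Eq (23) is not reproduced.

```python
import numpy as np

def nmse_db(h_true, h_hat):
    """Assumed form of Eq (23): per-sample normalized squared error of the
    recovered downlink CSI, averaged over samples and reported in dB."""
    num = np.sum(np.abs(h_true - h_hat) ** 2, axis=-1)
    den = np.sum(np.abs(h_true) ** 2, axis=-1)
    return 10.0 * np.log10(np.mean(num / den))

def ber(bits_true, bits_hat):
    """Bit error rate of the detected UL-US or MFV bit streams."""
    return np.mean(np.asarray(bits_true) != np.asarray(bits_hat))
```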

For the convenience of expression, we utilize “Proposed” and “Ref [10]” to denote the proposed DL-based 1-bit superimposed CSI feedback and the traditional 1-bit superimposed feedback (mentioned in [10]), respectively.

BER performance

In this subsection, the effectiveness and robustness of the detection network are verified. To show the effectiveness, a comparison of the BER performance of “Proposed” and “Ref [10]” is first presented in Fig 5. Next, to verify the robustness of the detection network, the impacts of the parameters ρ and c are given in Figs 6 and 7, respectively.

Fig 5. BER versus SNR, where P = 512, c = 2.0, and ρ = 0.10 are considered.

https://doi.org/10.1371/journal.pone.0265109.g005

Fig 6. BER versus SNR, where P = 512 and c = 2.0 are considered.

https://doi.org/10.1371/journal.pone.0265109.g006

Fig 7. BER versus SNR, where P = 512 and ρ = 0.10 are considered.

https://doi.org/10.1371/journal.pone.0265109.g007

To verify the effectiveness of the detection network, the BER performances of both the UL-US and MFV are illustrated, since the UL-US is superimposed with the MFV. Fig 5 depicts the BER curves of the UL-US and MFV versus SNR, where c = 2.0 and ρ = 0.10 are considered. From Fig 5, the BERs of the UL-US and MFV obtained by “Proposed” are smaller than those of “Ref [10]” over the whole SNR region considered. For example, when SNR = 10dB, the BER of the UL-US (or MFV) of “Proposed” is around 3.4 × 10−3 (or 4.5 × 10−2), while that of “Ref [10]” is nearly 1.4 × 10−2 (or 8.5 × 10−2). That is, compared with “Ref [10]”, both the UL-US and MFV BERs are improved by the proposed detection network. In particular, these improvements are more pronounced at relatively high SNR. The possible reason is that the detection network is trained under the noise-free setting.

To verify the robustness of the BER improvement against the impact of ρ, the BER curves for different values of ρ, i.e., ρ = 0.05, ρ = 0.10, and ρ = 0.15, are plotted in Fig 6, where c = 2.0 is considered. From Fig 6, for each given ρ, the UL-US and MFV BERs of “Proposed” are smaller than those of “Ref [10]”. This reflects that the proposed detection network improves the BER performance of both the UL-US and MFV for different ρ. As ρ increases from 0.05 to 0.15 for “Proposed”, the BER of the UL-US increases while the BER of the MFV decreases, and vice versa. The reason is that an increased (or decreased) ρ aggravates (or alleviates) the interference of the MFV to the UL-US, while alleviating (or aggravating) the interference of the UL-US to the MFV. On the whole, under different ρ, the improvements of the UL-US and MFV BER performances are evident. Thus, the proposed detection network guarantees the BER improvement against the impact of ρ.

The UL-US and MFV BER curves for different values of the compression rate c (i.e., c = 2.0, c = 2.5, and c = 3.0) are depicted in Fig 7, where ρ = 0.10, validating that the BER improvement is robust against the impact of c. In Fig 7, for each given c, the UL-US and MFV BERs of “Proposed” are lower than those of “Ref [10]”. This implies that the proposed detection network improves the UL-US and MFV BER performance of “Ref [10]” for different values of c. As c increases, the BERs of both the UL-US and MFV increase for both “Proposed” and “Ref [10]”, and vice versa. The reason is that the spreading gain (i.e., P/M) decreases as c increases, which affects the detection performance (similar results can be found in [19, 20]). As a whole, compared with “Ref [10]”, the BER improvements of the UL-US and MFV are evident for each given c. Thus, the proposed detection network is robust in improving the UL-US and MFV BER performance against the impact of c.

To sum up, according to Figs 5-7, the UL-US and MFV BER performances of “Ref [10]” are effectively improved by the proposed detection network, and these improvements are robust against the impacts of ρ and c.

NMSE performance

With the detected MFV, the downlink CSI can be reconstructed by using the proposed reconstruction network. To validate the effectiveness of the proposed reconstruction network, the NMSE curves of the downlink CSI recovered by the proposed reconstruction network and by SCA-BIHT [10] are first given in Fig 8. Then, to demonstrate the robustness of the reconstruction network, the NMSE performance against the impacts of ρ and c is shown in Figs 9 and 10, respectively. In addition, we present the influence of the regularization coefficient α2 on the NMSE performance in Table 6.

Fig 8. NMSE versus SNR, where P = 512, c = 2.0, and ρ = 0.10 are considered.

https://doi.org/10.1371/journal.pone.0265109.g008

Fig 9. NMSE versus SNR, where P = 512 and c = 2.0 are considered, and the β of Ref [10] is 10.

https://doi.org/10.1371/journal.pone.0265109.g009

Fig 10. NMSE versus SNR, where P = 512 and ρ = 0.10 are considered, and the β of Ref [10] is 10.

https://doi.org/10.1371/journal.pone.0265109.g010

Table 6. The effect of regularization coefficient α2 on NMSE performance.

https://doi.org/10.1371/journal.pone.0265109.t006

In Fig 8, the NMSE curves of the downlink CSI recovery are depicted, where c = 2.0 and ρ = 0.10. “Proposed” employs 8 iterations for the initial feature extraction, i.e., β = 8, followed by two dense layers. In contrast, different iteration numbers (i.e., β = 10, β = 20, β = 50, and β = 100) are given for the SCA-BIHT algorithm of “Ref [10]”. From Fig 8, when SNR ≤ 14dB, “Proposed” achieves the minimum NMSE, even lower than that of “Ref [10]” with β = 100. For example, when SNR = 12dB, the NMSE of “Proposed” is about 8.94 × 10−2, while that of “Ref [10]” with β = 100 is about 1.43 × 10−1. That is, with a smaller NMSE, the two dense layers in the reconstruction network can replace 95 iterations of the SCA-BIHT algorithm in the relatively low SNR region (e.g., SNR ≤ 14dB), leading to a lower processing delay. For SNR ≥ 16dB, the NMSE of “Proposed” outperforms that of “Ref [10]” with β = 10. Although the NMSE of “Proposed” is slightly higher than that of “Ref [10]” with β = 50 and 100, “Proposed” compensates for this with a much lower processing delay. On the whole, the proposed reconstruction network has a lower processing delay than “Ref [10]” and shows better NMSE performance in the relatively low SNR region. Therefore, the proposed reconstruction network effectively improves the NMSE performance of “Ref [10]”.

To verify the robust improvement of NMSE performance against the impact of ρ, the NMSE curves for varying ρ (i.e., ρ = 0.05, ρ = 0.10, and ρ = 0.15) are plotted in Fig 9. From Fig 9, for each given ρ, the downlink CSI NMSE of “Proposed” is smaller than that of “Ref [10]”. As ρ increases from 0.05 to 0.15, the NMSE decreases for both “Ref [10]” and “Proposed”, and vice versa. The reason is that the downlink CSI obtains more transmission power with a larger value of ρ. In addition, as the SNR increases, the curves gradually converge because the main contribution to the NMSE comes from the superimposition interference in the relatively high SNR region. On the whole, for each given value of ρ in Fig 9, the NMSE of “Ref [10]” is reduced by “Proposed”, especially in the relatively low SNR region (e.g., SNR ≤ 14dB). Thus, the proposed reconstruction network is robust in improving the NMSE performance against the impact of ρ.

Fig 10 plots the NMSE curves of the downlink CSI for different values of the compression rate c (i.e., c = 2.0, c = 2.5, and c = 3.0) to validate that the NMSE improvement is robust against the impact of c. In Fig 10, for each given c, the downlink CSI NMSE of “Proposed” is smaller than that of “Ref [10]”. In addition, for SNR ≤ 10dB, the NMSE of “Proposed” increases with c. The possible reason is that a higher compression rate results in a lower spreading gain (i.e., P/M); in the low SNR region, the NMSE is mainly affected by the noise interference and is limited by the low spreading gain. Yet, the NMSE convergence value for a high compression rate is smaller than that for a low compression rate. For example, for c = 2.0, c = 2.5, and c = 3.0, the convergence values of the “Proposed” NMSE are about 6.0 × 10−2, 4.9 × 10−2, and 4.4 × 10−2, respectively. The possible reason is that a higher compression rate provides more reconstruction information in the high SNR region, where the noise interference almost vanishes. On the whole, for each given value of c in Fig 10, the NMSE of “Ref [10]” is reduced by “Proposed”. Thus, the proposed reconstruction network is robust in improving the NMSE performance against the impact of c.

In addition, the influence of the regularization coefficient α2 on the NMSE performance is given in Table 6, where c = 2.0, ρ = 0.10, and different values of α2 (i.e., α2 = 10−4, 10−5, 10−6, 10−7, 10−8, and 10−9) are considered. From Table 6, the influence of the regularization coefficient on the NMSE is not very pronounced. Nevertheless, among the given values of α2 and over all SNR regions, the minimum NMSE is observed at α2 = 10−5. Thus, the NMSE performance in Table 6 indicates that α2 = 10−5 is a preferable regularization coefficient.

To sum up, according to Figs 8-10, the downlink CSI NMSE performance of “Ref [10]” is effectively improved by the proposed reconstruction network, and these improvements are robust against the impacts of ρ and c.

Online running time

To illustrate the low processing delay of “Proposed”, i.e., the detection network and reconstruction network, the online running times of “Proposed” and “Ref [10]” are compared in Fig 11, where P = 512, ρ = 0.10, and different values of c (i.e., c = 2.0, c = 2.5, and c = 3.0) are considered. In particular, “Ref [10]” adopts β = 10 and β = 100 in the reconstruction algorithm (i.e., the SCA-BIHT algorithm). Here, β = 10 is used to guarantee that the NMSE of “Proposed” is smaller than that of “Ref [10]”, and β = 100 is used to show that “Proposed” achieves a similar NMSE (in the relatively high SNR region) with a significantly lower processing delay than “Ref [10]”. For a fair comparison, 10^5 online-running experiments are conducted for “Proposed” and “Ref [10]” on the same PC (with CPU i5-8250U) using MATLAB. For each given c in Fig 11, the online running time of “Proposed” is shorter than that of “Ref [10]”; e.g., when c = 2.0, the online running times of “Proposed” and of “Ref [10]” with β = 10 (β = 100) are 75.1s and 201.8s (1266.9s), respectively. This reflects that the proposed 1-bit CS-based superimposed CSI feedback reduces the processing delay. It is also noticed that, as c rises from 2.0 to 3.0, the online running times of both “Proposed” and “Ref [10]” increase. However, the total increase for “Proposed” is 15.9s, which is far less than that of “Ref [10]” (e.g., 54.7s for β = 10 and 374.0s for β = 100). In addition, Fig 11 shows that the online running time of “Ref [10]” is proportional to the number of iterations. Thus, a large iteration number may be impractical for “Ref [10]”, while “Proposed” avoids this issue.

Fig 11. Comparison of the online running time of “Proposed” and “Ref [10]”, obtained by conducting 10^5 experiments, where P = 512 and ρ = 0.10 are considered.

https://doi.org/10.1371/journal.pone.0265109.g011

As a whole, compared with “Ref [10]”, the proposed DL-based 1-bit superimposed CSI feedback significantly reduces the online running time.

Conclusion

The 1-bit CS-based superimposed CSI feedback still faces many challenges, such as low recovery accuracy of the UL-US and downlink CSI and long processing delay. To remedy these defects, a DL-based 1-bit superimposed CSI feedback has been investigated in this paper. The constructed detection network captures optimized network parameters through joint training, and thus improves the BER performance of the UL-US. Moreover, the detection network is also helpful for reconstructing the downlink CSI. With the downlink CSI bits detected by the detection network, the proposed reconstruction network combines a simplified version of SCA-BIHT with a single-hidden-layer network, and achieves a significant improvement in the NMSE performance of the downlink CSI recovery. In particular, compared with the conventional 1-bit CS-based superimposed CSI feedback, the proposed CSI feedback scheme is robust against parameter variations and has a significantly lower processing delay.

References

  1. Wu T., Yin X., Zhang L., and Ning J., “Measurement-based channel characterization for 5G downlink based on passive sounding in Sub-6 GHz 5G commercial network,” IEEE Trans. Wireless Commun., pp. 1–1, Jan. 2021.
  2. Qing C., Yu W., Cai B., Wang J., and Huang C., “ELM-based frame synchronization in burst-mode communication systems with nonlinear distortion,” IEEE Wireless Commun. Lett., vol. 9, no. 6, pp. 915–919, June 2020.
  3. Wei Z., Li H., Liu H., Li B., and Zhao C., “Randomized Low-Rank Approximation Based Massive MIMO CSI Compression,” IEEE Commun. Lett., vol. 25, no. 6, pp. 2004–2008, June 2021.
  4. Lin B., Gao F., Zhang S., Zhou T., and Alkhateeb A., “Deep Learning-Based Antenna Selection and CSI Extrapolation in Massive MIMO Systems,” IEEE Trans. Wireless Commun., vol. 20, no. 11, pp. 7669–7681, Nov. 2021.
  5. You L., Wang J., Wang W., and Gao X., “Secure multicast transmission for massive MIMO with statistical channel state information,” IEEE Signal Process. Lett., vol. 26, no. 6, pp. 803–807, June 2019.
  6. Xia X., Xu K., Zhao S., and Wang Y., “Learning the time-varying massive MIMO channels: Robust estimation and data-aided prediction,” IEEE Trans. Veh. Technol., vol. 69, no. 8, pp. 8080–8096, Aug. 2020.
  7. Sim M., Park J., Chae C., and Heath R., “Compressed channel feedback for correlated massive MIMO systems,” J. Commun. Netw., vol. 18, no. 1, pp. 95–104, Feb. 2016.
  8. Kim S., Choi J., and Shim B., “Feedback reduction for beyond 5G cellular systems,” in Proc. IEEE Int. Conf. Commun. (ICC), May 2019, pp. 1–6.
  9. Shen W., Dai L., Shi Y., Zhu X., and Wang Z., “Compressive sensing based differential channel feedback for massive MIMO,” Electron. Lett., vol. 51, no. 22, pp. 1824–1826, Oct. 2015.
  10. Qing C., Yang Q., Cai B., Pan B., and Wang J., “Superimposed coding-based CSI feedback using 1-bit compressed sensing,” IEEE Commun. Lett., vol. 24, no. 1, pp. 193–197, Jan. 2020.
  11. Son H. and Cho Y., “Analysis of compressed CSI feedback in MISO systems,” IEEE Wireless Commun. Lett., vol. 8, no. 6, pp. 1671–1674, Aug. 2019.
  12. Wu P., Liu Z., and Cheng J., “Compressed CSI feedback with learned measurement matrix for mmWave massive MIMO,” Jul. 2020, arXiv:1903.02127, [Online]. Available: https://arxiv.org/abs/1903.02127.
  13. Rao X. and Lau V., “Distributed compressive CSI estimation and feedback for FDD multi-user massive MIMO systems,” IEEE Trans. Signal Process., vol. 62, no. 12, pp. 3261–3271, June 2014.
  14. Jiang D., Wang W., Shi L., and Song H., “A compressive sensing-based approach to end-to-end network traffic reconstruction,” IEEE Trans. Netw. Sci. Eng., vol. 7, no. 1, pp. 507–519, Oct. 2020.
  15. Lu Z., Wang J., and Song J., “Multi-resolution CSI feedback with deep learning in massive MIMO system,” in Proc. IEEE Int. Conf. Commun. (ICC), June 2020, pp. 1–6.
  16. Li X. and Wu H., “Spatio-temporal representation with deep neural recurrent network in MIMO CSI feedback,” IEEE Wireless Commun. Lett., vol. 9, no. 5, pp. 653–657, May 2020.
  17. Sun Q., Wu Y., Wang J., Xu C., and Wong K., “CNN based CSI acquisition for FDD massive MIMO with noisy feedback,” Electron. Lett., vol. 55, no. 17, pp. 963–965, Jul. 2019.
  18. Xu D., Huang Y., and Yang L., “Feedback of downlink channel state information based on superimposed coding,” IEEE Commun. Lett., vol. 11, no. 3, pp. 240–242, Mar. 2007.
  19. Qing C., Cai B., Yang Q., Wang J., and Huang C., “Deep learning for CSI feedback based on superimposed coding,” IEEE Access, vol. 7, pp. 93723–93733, Jul. 2019.
  20. Qing C., Cai B., Yang Q., Wang J., and Huang C., “ELM-based superimposed CSI feedback for FDD massive MIMO system,” IEEE Access, vol. 8, pp. 53408–53418, Mar. 2020.
  21. Wen C., Shih W., and Jin S., “Deep learning for massive MIMO CSI feedback,” IEEE Wireless Commun. Lett., vol. 7, no. 5, pp. 748–751, Oct. 2018.
  22. Wang T., Wen C., Jin S., and Li G., “Deep learning-based CSI feedback approach for time-varying massive MIMO channels,” IEEE Wireless Commun. Lett., vol. 8, no. 2, pp. 416–419, Apr. 2019.
  23. Lu C., Xu W., Shen H., Zhu J., and Wang K., “MIMO channel information feedback using deep recurrent network,” IEEE Commun. Lett., vol. 23, no. 1, pp. 188–191, Jan. 2019.
  24. Liao Y., Yao H., Hua Y., and Li C., “CSI feedback based on deep learning for massive MIMO systems,” IEEE Access, vol. 7, pp. 86810–86820, Jul. 2019.
  25. Chen T., Guo J., Jin S., Wen C., and Li G., “A novel quantization method for deep learning-based massive MIMO CSI feedback,” in Proc. IEEE Glob. Conf. Signal Inf. Process., Nov. 2019, pp. 1–5.
  26. Mashhadi M., Yang Q., and Gündüz D., “Distributed deep convolutional compression for massive MIMO CSI feedback,” IEEE Trans. Wireless Commun., vol. 20, no. 4, pp. 2621–2633, Apr. 2021.
  27. Tang W., Xu W., Zhang X., and Lin J., “A low-cost channel feedback scheme in mmWave massive MIMO system,” in Proc. IEEE Int. Conf. Comput. Commun., Jul. 2017, pp. 89–93.
  28. Wang J., Ding Y., Bian S., Peng Y., Liu M., and Gui G., “UL-CSI data driven deep learning for predicting DL-CSI in cellular FDD systems,” IEEE Access, vol. 7, pp. 96105–96112, Jul. 2019.
  29. Wu Y., Qian L., Mao H., Lu W., Zhou H., and Yu C., “Joint Channel Bandwidth and Power Allocations for Downlink Non-Orthogonal Multiple Access Systems,” in Proc. IEEE Veh. Technol. Conf., Sep. 2019, pp. 1–5.
  30. Leturc X., Le Martret C. J., and Ciblat P., “Multiuser power and bandwidth allocation in ad hoc networks with Type-I HARQ under Rician channel with statistical CSI,” in Proc. Int. Conf. Mil. Commun. Inf. Syst., May 2017, pp. 1–7.
  31. Xiao P., Liao B., and Li J., “One-bit compressive sensing via Schur-concave function minimization,” IEEE Trans. Signal Process., vol. 67, no. 16, pp. 4139–4151, Aug. 2019.
  32. Akansu A. and Poluri R., “Walsh-like nonlinear phase orthogonal codes for direct sequence CDMA communications,” IEEE Trans. Signal Process., vol. 55, no. 7, pp. 3800–3806, Jul. 2007.
  33. Chundi P. K., Wang X., and Seok M., “Channel Estimation Using Deep Learning on an FPGA for 5G Millimeter-Wave Communication Systems,” IEEE Trans. Circuits Syst. I-Regul. Pap., pp. 1–11, Oct. 2021.
  34. Guo J., Wen C., Jin S., and Li G., “Convolutional neural network-based multiple-rate compressive sensing for massive MIMO CSI feedback: design, simulation, and analysis,” IEEE Trans. Wireless Commun., vol. 19, no. 4, pp. 2827–2840, Apr. 2020.
  35. Zhang Y., Wang X., and Tang H., “An improved Elman neural network with piecewise weighted gradient for time series prediction,” Neurocomputing, vol. 359, no. 24, pp. 199–208, Sep. 2019.
  36. Luo L., Xiong Y., Liu Y., and Sun X., “Adaptive gradient methods with dynamic bound of learning rate,” Feb. 2019, arXiv:1902.09843, [Online]. Available: https://arxiv.org/abs/1902.09843.
  37. Zeng M., Cai Y., Liu X., Cai Z., and Li X., “Spectral-Spatial Clustering of Hyperspectral Image Based on Laplacian Regularized Deep Subspace Clustering,” in Proc. IEEE Int. Geosci. Remote Sens. Symp., Jul. 2019, pp. 2694–2697.
  38. Tong F., Li L., Peng H., and Yang Y., “Flexible construction of compressed sensing matrices with low storage space and low coherence,” Signal Process., vol. 182, May 2021.