Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction

Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of preserving edge information and suppressing unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to exploit the sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we present an efficient iterative algorithm based on the alternating minimization of an augmented Lagrangian function. All of the resulting subproblems, decoupled by variable splitting, admit explicit solutions obtained by applying the alternating minimization method and generalized p-shrinkage mapping. In addition, approximate solutions that can be easily implemented and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. Simulated and real data are evaluated qualitatively and quantitatively to validate the accuracy, efficiency, and feasibility of the proposed method. Overall, the proposed method exhibits strong performance and outperforms the original TGV-based method when applied to few-view problems.


Introduction
X-ray computed tomography (CT) plays a revolutionary role in biology, medicine, and other fields. Considering that excessive X-ray radiation exposure may cause genetic disease, recent studies have aimed to reduce the radiation dose during X-ray CT inspection [1][2]. A promising strategy for reducing radiation dose is to under-sample the X-ray projections across the human body. However, image reconstruction from few views is an ill-posed mathematical problem that is difficult to converge to the correct solution without extra prior information.
The development of compressive sensing (CS) theory has spurred considerable research attention on exploiting additional sparsity priors of images to reduce the sampling rate [3][4]. Total variation (TV) regularization, which employs image gradient sparsity, is a popular method for handling few-view problems in CT image reconstruction [5][6][7][8][9][10][11][12][13][14][15]. However, TV is based on the assumption that the image is piecewise constant. Thus, the TV algorithm suffers from oversmoothing and staircase effects, which in turn may produce undesirable blocky images [16,17].
Several methods have been proposed to improve the performance of TV and eliminate the above drawbacks [18][19][20][21][22][23][24][25][26]. To our knowledge, two strategies have been widely investigated: 1) improvement of the original TV norm by introducing a penalty weight carrying local information, such as the edge-preserving TV (EPTV) model [18] and the adaptive-weighted TV (AwTV) model [19]; and 2) involvement of high-order derivatives [22][23][24][25][26][27], such as the total variation-Stokes (TVS) model [24], the high-degree TV (HDTV) model [25], and the total generalized variation (TGV) model [26,27], among others. The strategy of incorporating local information into the TV norm can lower staircase effects but often still retains some artifacts. The latter strategy usually shows favorable performance with a suitable balance between first-order and high-order derivatives. In particular, TGV regularization can automatically balance first-order and high-order derivatives instead of using a fixed combination [28]. Hence, it can yield visually pleasant results in images with piecewise polynomial intensities and sharp edges without staircase effects.
In the traditional formulation, TGV is based on the l1-norm, a relaxation of the l0-norm chosen for easy computation at the expense of performance in exploiting the sparsity prior. In fact, the most direct measure of sparsity is to count the nonzero components of the target vector [29]; this strategy leads to an l0-norm solution but encounters nondeterministic polynomial-time hard (NP-hard) problems. Employing an lp-norm (0 < p < 1) relaxation for its convenient sparsity-seeking properties has therefore gained considerable interest in recent years [30][31][32][33]. Sidky et al. replaced the l1-norm with the lp-norm in the minimization function and proposed a total p-variation (TpV) minimization algorithm [34]. Although the lp-norm leads to nonconvex optimization problems, it may allow efficient image reconstruction from a few projection datasets for radiation dose reduction [35].
In this article, we explore an lp-norm (0 < p < 1) relaxation, which is close to the l0-norm and can measure sparsity accurately, to improve the sparsity-seeking features of TGV. The proposed regularization model is called total generalized p-variation (TGpV). The model is efficiently solved through variable splitting and an alternating minimization method in conjunction with nonconvex p-shrinkage mapping [31]. The novelty of this work is threefold. First, the TGpV model is far less restrictive than the TGV and TpV models for X-ray CT image reconstruction: it not only shows excellent performance in detail preservation by incorporating high-order image derivatives but also achieves an accurate measurement of sparsity from the image regularity prior. Second, an effective iterative algorithm is proposed to optimize the objective function of the TGpV minimization with fast and stable convergence. Third, fast and efficient closed-form solutions are derived for the computationally complex subminimization problems by using the proximal point technique and fast Fourier transforms. The advantage of our approach over the previous TpV-based and TGV-based reconstructions is demonstrated on both numerical simulations and real CT data.
The remainder of this paper is organized as follows. Section 2 briefly introduces the TGpV model and then describes the constrained TGpV minimization model and the present TGpV-ADM algorithm for image reconstruction. Section 3 presents the experimental results. Finally, Sections 4 and 5 respectively contain the discussions and conclusions.

Total Generalized p-Variation
Let α = (α₀, α₁) denote positive weights. The discretized second-order TGV [28] can be written as

TGV²_α(u) = min_ω α₀‖∇u − ω‖₁ + α₁‖ε(ω)‖₁,  (1)

where ω is a variable used to balance the first- and higher-order derivatives, and the operators ∇ and ε are given by

∇u = (∂_x u, ∂_y u),  ε(ω) = (1/2)(∇ω + ∇ωᵀ).  (2)

The original version of TGV uses an l1-based form. The sparsity of a target vector is most directly measured by counting its nonzero components; using an l0-norm formulation is thus a better way to exploit the sparsity prior. However, this strategy involves an NP-hard problem and lacks efficient solvers for practical image reconstruction.
Another strategy is to use the pth power of the lp-norm (0 < p < 1), which is a relaxation closer to the l0-norm and can measure sparsity better than the l1-norm. Hence, to deal with lp minimization, we propose the following modified form of the second-order TGpV:

TGpV²_α(u) = min_ω α₀‖∇u − ω‖_p^p + α₁‖ε(ω)‖_p^p.  (3)

The lp-norm-based form of TGV can express a lower level of sparsity than the conventional form. Thus, maximizing this sparsity can further relax the requirements on data sampling. On the other hand, a multiple description of sparsity leads to a wider range of choices and may provide comprehensive validity for different objects.
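To make the difference between the sparsity measures concrete, the following minimal sketch evaluates ‖x‖_p^p for a gradient-like vector with few nonzeros (the function name `lp_measure` is illustrative, not from the paper):

```python
import numpy as np

def lp_measure(x, p):
    """Return ||x||_p^p, the p-th power of the l_p quasi-norm (0 < p <= 1)."""
    return np.sum(np.abs(x) ** p)

# A sparse gradient-like vector: two jumps, six zeros.
x = np.array([0.0, 0.0, 3.0, 0.0, 0.0, 1.5, 0.0, 0.0])

# As p decreases toward 0, ||x||_p^p approaches the l_0 count of nonzeros
# (here, 2), so small p penalizes the *number* of nonzeros rather than
# their magnitudes.
print(lp_measure(x, 1.0))   # l_1 norm: 4.5
print(lp_measure(x, 0.5))   # ~ 2.96
print(lp_measure(x, 0.1))   # ~ 2.16, approaching the l_0 count of 2
```

This illustrates why the lp quasi-norm is a sharper surrogate for the l0 count than the l1 norm.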

Constrained TGpV minimization
To exploit the sparsity feature of TGpV, we introduce it into the CT imaging model within a regularization framework. The CT imaging model can be approximated by the following discrete linear system:

Au = b,  (4)

where the vector b represents the projection data, the vector u represents the object to be reconstructed, and the system matrix A is a pixel-driven projection operator.
To solve the linear system of Eq (4), a constrained TGpV minimization model describing the intensity variations of an image is used:

min_u TGpV²_α(u),  s.t. ‖Au − b‖₂ ≤ e,  (5)

where e is a data-error tolerance parameter. The optimization problem in Eq (5) is referred to as TGpV minimization. In this study, we investigate CT image reconstruction by minimizing the energy function with the TGpV regularization term, that is, by solving the constrained nonconvex optimization problem

arg min_{u,ω} α₀‖∇u − ω‖_p^p + α₁‖ε(ω)‖_p^p,  s.t. ‖Au − b‖₂ ≤ e.  (6)

TGpV minimization reconstruction algorithm

We apply variable splitting and the alternating direction method (ADM) [36] to obtain efficient and easy-to-code algorithms for solving the minimization problems involved in our method. A generalized p-shrinkage operator [31], which shows a qualitative resemblance to the lp proximal mapping, is used to provide a closed-form solution to the lp–l2 problem in the reconstruction procedure. The reconstruction algorithm that utilizes TGpV minimization is summarized below.
Introducing the auxiliary vectors d, s, and σ, we consider the following constrained minimization problem, which is equivalent to Eq (6):

min_{u,ω,d,s} α₀‖d‖_p^p + α₁‖s‖_p^p,  s.t. d = ∇u − ω, s = ε(ω), Au − b = σ, ‖σ‖₂ ≤ e.  (7)

To reformulate the original constrained problem as a sequence of unconstrained subproblems, the augmented Lagrangian method [37] is used here. The augmented Lagrangian energy associated with Eq (7) is defined as

L_A(u, ω, d, s, σ) = α₀‖d‖_p^p + d̃ᵀ(d − (∇u − ω)) + (λ₀/2)‖d − (∇u − ω)‖₂²
  + α₁‖s‖_p^p + s̃ᵀ(s − ε(ω)) + (λ₁/2)‖s − ε(ω)‖₂²
  + σ̃ᵀ(Au − b − σ) + (μ/2)‖Au − b − σ‖₂²,  (8)

where d̃, s̃, and σ̃ are Lagrange multipliers, and λ₀, λ₁, and μ are positive constants used to balance the terms. As a powerful technique for optimizing problems through variable splitting, the alternating direction method is used to solve the problem efficiently. The augmented Lagrangian function L_A can be split into four subproblems with respect to d, s, u, and ω. Minimizing L_A is equivalent to solving the following subminimization problems.

i. The subminimization problem with respect to d can be written as

d^{k+1} = arg min_d α₀‖d‖_p^p + (λ₀/2)‖d − (∇uᵏ − ωᵏ) + d̃ᵏ/λ₀‖₂².  (9)

This minimization problem corresponds to an lp–l2 norm problem. To derive an efficient solution, an explicit proximal mapping for general p is considered [31]. The p-shrinkage operator shrink_p(·, 1/β) is defined componentwise as

shrink_p(ξ, 1/β) = sign(ξ) · max{|ξ| − β^{p−2}|ξ|^{p−1}, 0}.  (10)

Thus, this minimization problem can be directly solved by

d^{k+1} = shrink_p(∇uᵏ − ωᵏ − d̃ᵏ/λ₀, α₀/λ₀).  (11)

ii. The minimization with respect to s can be written as

s^{k+1} = arg min_s α₁‖s‖_p^p + (λ₁/2)‖s − ε(ωᵏ) + s̃ᵏ/λ₁‖₂².  (12)

This problem is solved with the same adaptation of the p-shrinkage operator:

s^{k+1} = shrink_p(ε(ωᵏ) − s̃ᵏ/λ₁, α₁/λ₁).  (13)

iii. The subminimization problem with respect to u corresponds to the following quadratic positive definite problem:

u^{k+1} = arg min_u (μ/2)‖Au − b − σᵏ + σ̃ᵏ/μ‖₂² + (λ₀/2)‖d^{k+1} − (∇u − ωᵏ) + d̃ᵏ/λ₀‖₂².  (14)

The first-order necessary condition for optimality is

(μAᵀA + λ₀∇ᵀ∇)u = μAᵀ(b + σᵏ − σ̃ᵏ/μ) + λ₀∇ᵀ(d^{k+1} + ωᵏ + d̃ᵏ/λ₀).  (15)

The exact minimizer of Eq (14) is formulated as

u^{k+1} = (μAᵀA + λ₀∇ᵀ∇)⁺ [μAᵀ(b + σᵏ − σ̃ᵏ/μ) + λ₀∇ᵀ(d^{k+1} + ωᵏ + d̃ᵏ/λ₀)],  (16)

where M⁺ stands for the Moore-Penrose pseudoinverse of a matrix M. Then, the noise term σ can be updated by projecting the current residual onto the tolerance ball:

σ^{k+1} = P_{‖·‖₂ ≤ e}(Au^{k+1} − b + σ̃ᵏ/μ).  (17)
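The p-shrinkage updates in steps i and ii admit a simple elementwise implementation. The following is a minimal sketch (the function name `p_shrink` and its threshold parameterization `lam = 1/β` are illustrative); in the algorithm the mapping is applied componentwise to the auxiliary variables:

```python
import numpy as np

def p_shrink(x, lam, p):
    """Generalized p-shrinkage (after Chartrand [31]); reduces to soft
    thresholding at p = 1. Approximately solves, elementwise,
    min_d lam * |d|^p + 0.5 * (d - x)^2 for 0 < p <= 1."""
    mag = np.abs(x)
    # Guard zero entries against 0**(p-1); their shrunk value is zero anyway.
    safe = np.where(mag > 0, mag, 1.0)
    shrunk = np.maximum(mag - lam ** (2.0 - p) * safe ** (p - 1.0), 0.0)
    return shrunk * np.sign(x)
```

At p = 1 this is exactly soft thresholding; for p < 1 large entries are shrunk less, which is the sparsity-favoring behavior the TGpV model exploits.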
iv. The subminimization problem with respect to ω can be written as

ω^{k+1} = arg min_ω (λ₀/2)‖d^{k+1} − (∇u^{k+1} − ω) + d̃ᵏ/λ₀‖₂² + (λ₁/2)‖s^{k+1} − ε(ω) + s̃ᵏ/λ₁‖₂².  (18)

The optimality condition in the above case is

(λ₀I + λ₁εᵀε)ω = λ₀(∇u^{k+1} − d^{k+1} − d̃ᵏ/λ₀) + λ₁εᵀ(s^{k+1} + s̃ᵏ/λ₁).  (19)

The exact minimizer of Eq (18) is formulated as

ω^{k+1} = (λ₀I + λ₁εᵀε)⁻¹ [λ₀(∇u^{k+1} − d^{k+1} − d̃ᵏ/λ₀) + λ₁εᵀ(s^{k+1} + s̃ᵏ/λ₁)].

v. Finally, the Lagrange multipliers are updated as follows:

d̃^{k+1} = d̃ᵏ + λ₀(d^{k+1} − (∇u^{k+1} − ω^{k+1})),
s̃^{k+1} = s̃ᵏ + λ₁(s^{k+1} − ε(ω^{k+1})),
σ̃^{k+1} = σ̃ᵏ + μ(Au^{k+1} − b − σ^{k+1}).  (20)

Efficient Fourier-based solvers for subminimization problems

The pseudoinverse is used to solve the subminimization problem with respect to u (Eq (16)) in the reconstruction algorithm. This solution may work for a toy example but is far less feasible for practical CT reconstruction because CT data are excessively large. This subproblem is conventionally solved using iterative methods, such as conjugate gradient [38][39] and the nonmonotone alternating direction algorithm [40], which may also lead to significant computation and memory consumption.
Our observation is that an exact solution to the u-subproblem is generally unnecessary; rather, an approximate solution can be used. We introduce a proximal point technique [41,42] to avoid this prohibitive cost and solve the subproblem efficiently.
In Eq (14), the data-fidelity term (1/2)‖Au − b − σᵏ‖₂² can be linearized at the current point uᵏ in each iteration:

(1/2)‖Au − b − σᵏ‖₂² ≈ (1/2)‖Auᵏ − b − σᵏ‖₂² + ρᵀ(u − uᵏ) + (1/2τ)‖u − uᵏ‖₂²,  (21)

where ρ = Aᵀ(Auᵏ − b − σᵏ) denotes the gradient at uᵏ, and τ > 0 is a parameter. Then, by adding the proximal term, the u-subproblem can be transformed into the approximate problem

u^{k+1} = arg min_u μ[ρᵀ(u − uᵏ) + (1/2τ)‖u − uᵏ‖₂²] + (λ₀/2)‖d^{k+1} − (∇u − ωᵏ) + d̃ᵏ/λ₀‖₂².  (22)

The first-order necessary condition for optimality is

((μ/τ)I + λ₀∇ᵀ∇)u = (μ/τ)uᵏ − μρ + λ₀∇ᵀ(d^{k+1} + ωᵏ + d̃ᵏ/λ₀).  (23)

Circulant matrices can be diagonalized under the Fourier transform, and ∇ᵀ∇ is a block-circulant matrix with constant coefficients. Thus, under the periodic boundary condition for u, the coefficient matrix (μ/τ)I + λ₀∇ᵀ∇ can be diagonalized blockwise by the 2D Fourier matrix.
Consequently, the u-subproblem can be solved with only two FFTs per iteration, thereby avoiding the costly calculation of the pseudoinverse.
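A minimal sketch of this FFT-based solve is given below, assuming forward-difference gradients with periodic boundaries (the function and parameter names are illustrative, not from the paper). The eigenvalues of the periodic ∇ᵀ∇ are the familiar 4sin² terms, so the linear system diagonalizes under the 2D DFT:

```python
import numpy as np

def solve_u_subproblem(rhs, mu_over_tau, lam0):
    """Solve ((mu/tau) I + lam0 * D^T D) u = rhs under periodic boundary
    conditions, where D is the forward-difference gradient. D^T D is
    block circulant, hence diagonalized by the 2D FFT."""
    m, n = rhs.shape
    # Eigenvalues of D^T D along each axis: 2 - 2cos(2*pi*k/m) >= 0.
    ky = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(m) / m)
    kx = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(n) / n)
    denom = mu_over_tau + lam0 * (ky[:, None] + kx[None, :])
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
```

Two FFTs (one forward, one inverse) replace the pseudoinverse; the per-frequency division is the blockwise diagonalization described above.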
We further exploit the fast calculation of the solution to the ω-subproblem. For convenience, writing ω = (ω₁, ω₂) and grouping like terms, Eq (19) can be reformulated as the 2 × 2 block linear system

[B₁₁ B₁₂; B₂₁ B₂₂] [ω₁; ω₂] = [g₁; g₂],  (24)

where the coefficient blocks B_ij are assembled from λ₀, λ₁, the identity, and second-order finite-difference operators arising from εᵀε, and g₁, g₂ collect the corresponding components of the right-hand side of Eq (19). Similarly, each coefficient block is block circulant and is therefore diagonalized blockwise by the 2D Fourier matrix. Multiplying the system by the Fourier matrix converts it into the componentwise form

F(B₁₁) .* F(ω₁) + F(B₁₂) .* F(ω₂) = F(g₁),
F(B₂₁) .* F(ω₁) + F(B₂₂) .* F(ω₂) = F(g₂),  (25)

where .* is componentwise multiplication. Then, by Cramer's rule, we obtain the following closed forms:

F(ω₁) = (F(B₂₂) .* F(g₁) − F(B₁₂) .* F(g₂)) ./ Δ,
F(ω₂) = (F(B₁₁) .* F(g₂) − F(B₂₁) .* F(g₁)) ./ Δ,  (26)

where the componentwise determinant Δ is defined as Δ = F(B₁₁) .* F(B₂₂) − F(B₁₂) .* F(B₂₁).

The augmented Lagrangian function (8) is expected to be minimized by solving the four subproblems alternately. All of the subproblems in the proposed algorithm have noticeably efficient solutions: the d-subproblem and the s-subproblem are solved using easy-to-compute p-shrinkage operators, while the u-subproblem and the ω-subproblem are converted to Fourier-based formulations, which can be rapidly calculated using FFT techniques. Thus, the proposed algorithm is efficient and practical owing to the low cost of each iteration.
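The componentwise Fourier solve for the ω-subproblem can be sketched as follows. This is a minimal illustration assuming the symmetrized derivative ε built from forward differences with periodic boundaries; the function name and the per-frequency 2 × 2 Cramer solve are illustrative, not the paper's exact matrices:

```python
import numpy as np

def solve_omega_subproblem(g1, g2, lam0, lam1):
    """Solve (lam0*I + lam1*E^T E) w = g for w = (w1, w2) under periodic
    boundaries, where E is the symmetrized derivative eps. The 2x2 block
    system decouples into a 2x2 complex system per frequency."""
    m, n = g1.shape
    # DFT symbols of the forward-difference operators along each axis.
    sy = (np.exp(2j * np.pi * np.arange(m) / m) - 1.0)[:, None]
    sx = (np.exp(2j * np.pi * np.arange(n) / n) - 1.0)[None, :]
    ax, ay = np.abs(sx) ** 2, np.abs(sy) ** 2
    # Per-frequency entries of lam0*I + lam1*E^T E.
    m11 = lam0 + lam1 * (ax + 0.5 * ay)
    m22 = lam0 + lam1 * (ay + 0.5 * ax)
    m12 = lam1 * 0.5 * np.conj(sy) * sx
    m21 = lam1 * 0.5 * np.conj(sx) * sy
    det = m11 * m22 - m12 * m21          # real and positive for lam0 > 0
    G1, G2 = np.fft.fft2(g1), np.fft.fft2(g2)
    # Cramer's rule per frequency, then transform back.
    w1 = np.real(np.fft.ifft2((m22 * G1 - m12 * G2) / det))
    w2 = np.real(np.fft.ifft2((m11 * G2 - m21 * G1) / det))
    return w1, w2
```

Each call costs four FFTs, so the ω update remains cheap relative to the projection operations.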

Pseudocode of the TGpV-ADM reconstruction algorithm
In summary, the workflow of the present TGpV-ADM method for X-ray CT image reconstruction is listed in Algorithm 1.

Algorithm 1. TGpV-ADM reconstruction.
Input: projection data b, system matrix A; parameters p, α₀, α₁, λ₀, λ₁, μ, τ, e, and the maximum iteration number K.
Initialize: u⁰ = 0, ω⁰ = 0, σ⁰ = 0; multipliers d̃⁰ = 0, s̃⁰ = 0, σ̃⁰ = 0.
Do (k = 0, 1, …, K − 1)
  Update d^{k+1} and s^{k+1} by the p-shrinkage mappings;
  Update u^{k+1} by the FFT-based proximal solver;
  Update the noise term σ^{k+1};
  Update ω^{k+1} by the FFT-based solver;
  Update the multipliers d̃^{k+1}, s̃^{k+1}, and σ̃^{k+1};
End Do
Obtain reconstruction result: u.

Parameter selections
Parameters μ, λ₀, and λ₁ are used to balance the data fidelity and the two regularization terms. To obtain optimal performance, their values should be set in accordance with both the noise level in the observation and the sparsity level of the underlying image. Generally, the higher the noise level, the smaller μ should be. In practice, they are often selected empirically by visual inspection. Based on our experience, a simple way to choose them is to try different values from 2³ up to 2¹³ and compare the recovered images. For most CT imaging cases, the value of μ should be larger than those of λ₀ and λ₁.
The positive weights α₀ and α₁ are used to balance the first and second derivatives. Proper values should be chosen based on the sparsity features of the underlying image. Generally, α₀, α₁ ∈ [1, 4] is suitable for most applications.
Parameter p lies in (0, 1) and plays a vital role. The penalty function is an approximation that is not strictly equivalent to lp minimization; thus, its value should be determined carefully. Based on our experience, p ∈ [0.5, 0.9] is adequate for noiseless datasets, and p ∈ [0.7, 0.9] is adequate for noisy datasets.

Performance evaluations
For the quantitative evaluation of the TGpV-ADM algorithm, the root-mean-square error (RMSE), peak signal-to-noise ratio (PSNR), and normalized root mean square distance (NRMSD) [43] are used as measures of reconstruction quality. The RMSE, PSNR, and NRMSD are defined as follows:

RMSE = sqrt( (1/N) Σ_i (u_i − f_i)² ),
PSNR = 10 log₁₀( max_i(f_i)² / ((1/N) Σ_i (u_i − f_i)²) ),
NRMSD = sqrt( Σ_i (u_i − f_i)² / Σ_i (f_i − f̄)² ),

where f and u denote the ideal phantom and the reconstruction, respectively, i indexes the pixels in the image, N is the total number of pixels, and f̄ is the mean value of f. An RMSE/NRMSD value close to zero suggests high similarity to the ideal phantom image, and a higher PSNR indicates higher image quality.
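The three metrics can be sketched as below; this is a minimal illustration assuming the peak in PSNR is the maximum of the reference image and the NRMSD normalization is the variance of the reference, as in the usual definitions (function names are illustrative):

```python
import numpy as np

def rmse(u, f):
    """Root-mean-square error between reconstruction u and reference f."""
    return np.sqrt(np.mean((u - f) ** 2))

def psnr(u, f, peak=None):
    """Peak signal-to-noise ratio in dB; peak defaults to max of reference."""
    peak = np.max(f) if peak is None else peak
    return 10.0 * np.log10(peak ** 2 / np.mean((u - f) ** 2))

def nrmsd(u, f):
    """Squared error normalized by the variance of the reference, rooted."""
    return np.sqrt(np.sum((u - f) ** 2) / np.sum((f - np.mean(f)) ** 2))
```

All three compare a reconstruction against the known phantom, so they apply only to the simulation studies.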

Results
To validate and evaluate the performance of the proposed method, three types of projection data (computer-simulated digital Moby phantom, computer-simulated digital CS-phantom, and experimental anthropomorphic phantom projection data) were used in the experiments. Both computer-simulated digital phantom projection data and real CT projection data were used to compare the proposed TGpV-ADM method with the standard TV-ADM [42], TpV-ADM [35], and TGV-ADM [27] algorithms. The implementations of these three comparison methods are described in S1 Appendix. All of the experiments were performed under Matlab 2012a running on an HP-Z820 workstation with Intel Xeon E5-2650 dual-core CPU 2.60 GHz.
To assess the quantitative evaluations of image quality (RMSE, PSNR, NRMSD), tests of statistical significance were carried out using 60 phases of the Moby phantom. First, we performed the F-test. If the p-value of the F-test was larger than 0.05, the t-test was then carried out; if the p-value was less than 0.05, the variances of the two samples could not be assumed to be equal, and Welch's t-test [44] was carried out instead. In the statistical significance tests, variables were expressed as mean ± standard deviation.

Digital Moby phantom study
In the first group of simulation studies, a digital Moby phantom [45][46] was used to simulate the few-view projection data. The Moby phantom, which models 3D mouse anatomy, is often used in simulation studies for single photon emission computed tomography and X-ray CT. One typical frame of the phantom is shown in Fig 1 (or S1 Fig).

Noise-free cases
For the CT projection simulation, we chose a geometry representative of a fan-beam CT scanner setup. The imaging configurations were as follows: (1) the projection data comprised 36 projections at an interval of 5° onto a 720-bin linear detector array, (2) the distance from the detector to the X-ray source was 600 mm, (3) the distance from the rotation center to the source was 300 mm, and (4) the width of each detector bin was 0.1 mm. All of the reconstructed images comprised a 256 × 256 square of pixels. The size of each pixel was 0.1 mm × 0.1 mm. Each projection datum along an X-ray through the sectional image was calculated as the intersection length of the X-ray with each pixel.
For the four ADM-based algorithms, the parameters of the ADM framework were the same in the experiments: μ and λ₀ were set to 256 and 64, respectively, and τ was set to 1.3. As the image is piecewise constant in most areas, α₀, α₁, and λ₁ were set to 1, 2, and 64, respectively, for the TGV-ADM and TGpV-ADM methods. For the TpV-ADM and TGpV-ADM algorithms, p was set to 0.9. Considering the absence of noise in the projection data, we set e to 0. The number of iterations for each reconstruction was 600.
The images reconstructed by the four methods in the noise-free cases are shown in Fig 2. Upon visual inspection, all methods could recover the image well from sparse projections. To visualize the differences in detail, horizontal profiles of the resulting images (Fig 3) are drawn across the 52nd row, from the 100th column to the 150th column. Table 1 lists the RMSE, PSNR, and NRMSD measures of the images reconstructed by the different algorithms with 600 iterations. From Table 1, it can be seen that the TGpV-ADM algorithm also outperforms its counterparts in terms of the objective evaluation metrics.

Noisy cases
To further check the capability of the proposed algorithm, we carried out experiments to reconstruct images from noisy projections. Noise is generated using a Poisson model [47]:

I_k ∼ Poisson(N₀ exp(−p_k)),  p̂_k = ln(N₀ / I_k),

where N₀ is the incident X-ray intensity, p_k denotes the normalized projections in real space, p̂_k denotes the normalized projections with added Poisson noise, and k indexes the pixels in the projection data.
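This photon-counting noise model can be sketched as follows (the function name `add_poisson_noise` and the clamp guarding zero counts are illustrative assumptions, not from the paper):

```python
import numpy as np

def add_poisson_noise(proj, n0, rng=None):
    """Simulate photon-counting noise on line-integral projections:
    transmitted counts I_k ~ Poisson(n0 * exp(-p_k)), then
    noisy projection p_hat_k = ln(n0 / I_k)."""
    rng = np.random.default_rng() if rng is None else rng
    counts = rng.poisson(n0 * np.exp(-proj)).astype(float)
    counts = np.maximum(counts, 1.0)   # guard: log blows up at zero counts
    return np.log(n0 / counts)
```

Larger n0 (more incident photons, i.e., higher dose) yields noisy projections closer to the ideal line integrals, which is why the noisy cases below are indexed by N₀.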
Admissible reconstruction needs more projections than in the noiseless cases because of inconsistencies in the data. Thus, the imaging configuration was the same as that of the noise-free group except for the projection acquisition. The total number of views in this experiment was 90.
In this section, three cases with different noise levels were considered. The initial numbers of photons N₀ were set to 5×10⁶, 2×10⁶, and 5×10⁵ for noisy cases 1, 2, and 3, respectively. For noisy cases 1 and 2, the parameter μ of the ADM framework in the four algorithms was set to 64; for noisy case 3, μ was set to 32. The parameter λ₀ was set to 16, and τ was set to 1.3. For the TGV-ADM and TGpV-ADM methods, α₀, α₁, and λ₁ were set to 1, 2, and 16, respectively. For the TpV-ADM and TGpV-ADM algorithms, p was set to 0.9. As the noise levels in the three cases were different, the numbers of iterations were set to 200, 180, and 160, respectively.
The images reconstructed by the FBP, TV-ADM, TpV-ADM, TGV-ADM, and TGpV-ADM methods from the three groups of noisy projections are shown in Fig 4. The profiles of these images along the 52nd horizontal row for the three noisy cases are shown in Figs 5, 6 and 7, respectively. The profiles show that the TGV-ADM and TGpV-ADM reconstructions deviate only slightly from the original phantom, whereas the TV-ADM and TpV-ADM reconstructions have some distortions that are evident in the profile plots. Interestingly, the gains from the TGpV-ADM method are more noticeable than those from the TGV-ADM method.
The RMSE, PSNR, and NRMSD of the reconstructions from the different methods were calculated, and the results are listed in Table 2. The quantitative results of the proposed TGpV-ADM algorithm are better than those of the other algorithms in terms of all three measures, which agrees with the findings in Table 1.
To further assess the quality of the images reconstructed by the different algorithms, we performed tests of statistical significance using 60 phases of the Moby phantom. The statistical mean values of the performance evaluations of the images reconstructed from noise-free projections are summarized in Table 3. The corresponding F-test and t-test analysis results are summarized in Table 4. Similarly, for the three groups of experiments in which the images were reconstructed from noisy projections, the statistical mean values of the performance evaluations are summarized in Tables 5, 6 and 7, respectively. The corresponding F-test and t-test analysis results are summarized in Tables 8, 9 and 10, respectively.
In the noise-free case, noisy case 1, and noisy case 2, there are significant differences in the RMSE, PSNR, and NRMSD values between the other algorithms and the TGpV-ADM algorithm (p < 0.0001). The RMSE and NRMSD values of the TGpV-ADM algorithm are significantly lower than those of the other three algorithms, whereas its PSNR values are higher than those of the TV-ADM, TpV-ADM, and TGV-ADM algorithms. In noisy case 3, there are significant differences in the RMSE, PSNR, and NRMSD values between the TV-ADM and TGV-ADM algorithms and the TGpV-ADM algorithm. The computation times of the TGV-ADM and TGpV-ADM methods are longer than those of TV-ADM and TpV-ADM, owing to the extra computation required by the subminimization problem of the second derivatives. Compared with the original TGV-ADM method, however, TGpV-ADM requires only a small increase in computation time.

Digital CS-phantom study
To demonstrate and validate the new method for objects with piecewise polynomial features, a digital CS-phantom [48][49] designed for compressed sensing MRI reconstruction was used to simulate the few-view projection data. The phantom image is composed of four quadrants: Quadrant I contains a large diagonal ramp and low-contrast squares; Quadrant II contains 16 low-contrast circles; Quadrant III contains a large quadratic hole and four Gaussian bumps; and Quadrant IV contains line pairs and concentric circles with a range of spacings. This phantom is not sparse under a gradient transformation and provides features amenable to the study of real anatomy. For reference, the typical CS-phantom is shown in Fig 8 (or S2 Fig).

Noise-free cases
In this simulation, the imaging configurations were the same as those of the noise-free group in the digital Moby phantom studies. For the four ADM-based algorithms, the parameters of the ADM framework were the same in the experiments: μ and λ₀ were set to 512 and 64, respectively, and τ was set to 1.3. For the TGV-ADM and TGpV-ADM methods, α₀, α₁, and λ₁ were set to 1, 1, and 64, respectively. For the TpV-ADM and TGpV-ADM algorithms, p was set to 0.7. Considering the absence of noise in the projection data, we set e to 0. The number of iterations for each reconstruction was 800. The images reconstructed by the four methods in the noise-free cases are shown in Fig 9. To reveal texture details, zoomed region-of-interest (ROI) images of Quadrant I and the other quadrants were also examined. To further demonstrate the superiority of the TGpV-ADM algorithm, the RMSE of the reconstructions from the different methods was calculated as a function of iterations, and the results are shown in Fig 12. The TGpV-ADM algorithm converges to a steady state and is clearly more accurate and effective than the other methods. The RMSE, PSNR, and NRMSD of each reconstruction method are listed in Table 12. Compared with the other methods, the TGpV-ADM method visibly obtains more accurate images.

Noisy cases
In the simulation of the noisy cases, the noise level N₀ was set to 1×10⁶, and the parameters were adjusted correspondingly. e was set to 10⁻⁵. The parameter settings of the ADM framework in the four algorithms were as follows: μ and λ₀ were set to 64 and 32, respectively, and τ was set to 1.3. For the TGV-ADM and TGpV-ADM methods, α₀, α₁, and λ₁ were set to 1, 1, and 32, respectively. For the TpV-ADM and TGpV-ADM algorithms, p was set to 0.9. The number of iterations for each reconstruction was 150. The quantitative evaluation results are listed in Table 13. The proposed TGpV-ADM method successfully minimized the objective functions and effectively improved the image quality for noisy data.
The average computation times of the TV-ADM, TpV-ADM, TGV-ADM, and TGpV-ADM methods for the digital CS-phantom studies are listed in Table 14. The table shows that the time consumption of the TGpV-ADM method is slightly larger than that of the other methods. The present algorithm thus strikes a good balance between performance and computation.

Real data study
To further demonstrate the potential capability of the proposed method, we performed a study using real data from a radiological anthropomorphic head phantom (Chengdu Dosimetric Phantom, CPET Co. Ltd, Chengdu, China) [50] intended for clinical use. The phantom is shown in S3 Fig, and its specification is described in ICRU Report 48 [51]. In this study, CT projection data were acquired using a CT scanner mainly comprising an X-ray source (Hawkeye130, Thales, France) and a flat-panel detector (Varian 4030E, USA). The tube voltage was set to 100 kVp, the X-ray tube current was set to 200 μA, and the duration of the X-ray pulse at each projection view was 180 ms during the acquisition. The central slice of the sinogram data was extracted for 2D investigation and was modeled with 820 bins on a 1D detector for 2D image reconstruction. The associated imaging parameters of the CT scanner were as follows: (1) 360 projection views were acquired evenly over a 360° rotation on a circular orbit, (2) the distance from the detector to the X-ray source was 1610 mm, (3) the distance from the rotation center to the source was 678 mm, and (4) the width of each detector bin was 0.508 mm. All of the reconstructed images comprised a 300 × 300 square of pixels. The size of each pixel was 0.585 mm × 0.585 mm. We evenly extracted 120- and 180-view projections from the sinogram data for illustration purposes.
In this experiment, the algorithm parameters of all ADM frameworks were set to μ = 32, λ₀ = 16, and τ = 1.3, and e was set to 10⁻⁵. For the TGV-ADM and TGpV-ADM methods, α₀, α₁, and λ₁ were set to 1, 1, and 16, respectively. For the TpV-ADM and TGpV-ADM algorithms, p was set to 0.9. The number of iterations for each reconstruction was 100.
The images reconstructed by the different methods from 120-, 180-, and 360-view projections are shown in Fig 17, and the corresponding zoomed-in ROIs are shown in Fig 18. The TV-ADM and TpV-ADM methods produce more patchy artifacts than the other two methods, and some details are oversmoothed in their reconstructions. The TGpV-ADM method exhibits remarkable advantages over the other methods in terms of detail preservation. Meanwhile, the results further suggest that, compared with the other three methods, the TGpV-ADM method offers greater capability for dose reduction.

Discussion
TV-based CT image reconstruction, which employs image gradient sparsity, can reduce the X-ray sampling rate and remove unwanted artifacts but may cause unfavorable oversmoothing and staircase effects under the piecewise constant assumption. TGV, a generalization of TV, involves high-order derivatives and is well suited to regularizing images with piecewise polynomial intensities. The original TGV is based on an l1-norm expression. We introduced a TGpV model that considers the lp-norm and then developed an optimization-based reconstruction algorithm to extract additional sparsity information beyond the original TGV. Compared with the other methods, the proposed TGpV-ADM method produces better reconstructions in both smoothly varying regions and sharp edges. Furthermore, the proposed method is robust to noise and converges much faster than the other methods.

There is little difference between the time consumption of the TGpV-ADM and TGV-ADM methods. On the one hand, as the amount of projection data increases, the proportion of time spent on the projection/back-projection procedure increases simultaneously. On the other hand, owing to the efficient FFT-based computation of the subminimization problems, the presented algorithm maintains a good and stable balance between accuracy and efficiency as the computational scale increases.

Multiple parameter settings are involved in any reconstruction design and can significantly influence the results; reconstruction under different parameter settings is likely to yield different levels of image quality. In this study, even when p = 0.9, which is very close to p = 1.0, the gains from the TGpV-ADM method are outstanding compared with those from the TGV-ADM method. To guide an adequate adaptation to the image reconstruction task, reconstructions using different parameters are given in S2 Appendix.
Although we cannot provide the "best" selection strategy, the suggested method employing TGpV minimization allows high-quality image recovery from sparse projection data and suggests clinically useful potential for radiation dose reduction.
The framework and metrics are only considered for 2D fan-beam cases. We hope to extend the model to cone-beam CT and to investigate an effective graphics processing unit-based implementation to gain significant improvement with minimal computational cost. Addressing these questions is one of our future research focuses.

Conclusion
In this paper, we present a TGpV regularization model that adaptively preserves edge information while avoiding the staircase effect for few-view CT image reconstruction. The new model is solved by splitting variables within an efficient alternating minimization scheme. All of the subproblems have efficient solutions obtained using generalized p-shrinkage mappings and fast Fourier transforms. Experimental results show that the proposed TGpV-ADM method can accurately reconstruct both sharp edges and smoothly varying image regions from insufficient data. In particular, the proposed method shows considerable advantages over the standard TGV-ADM and TpV-ADM algorithms.