
Accelerating cross-validation with total variation and its application to super-resolution imaging

  • Tomoyuki Obuchi ,

    Roles Formal analysis, Funding acquisition, Investigation, Methodology, Validation, Visualization, Writing – original draft, Writing – review & editing

    obuchi@c.titech.ac.jp

    Affiliation Department of Mathematical and Computing Science/Tokyo Institute of Technology, Yokohama 226-8502, Japan

  • Shiro Ikeda,

    Roles Data curation, Funding acquisition, Project administration, Software

    Affiliation The Institute of Statistical Mathematics, Tachikawa, Tokyo, 190-8562, Japan

  • Kazunori Akiyama,

    Roles Data curation, Funding acquisition, Visualization

    Affiliations Haystack Observatory/Massachusetts Institute of Technology, Westford, MA, 01886, United States of America, National Astronomical Observatory of Japan, Osawa 2-21-1, Mitaka, Tokyo 181-8588, Japan, Black Hole Initiative, Harvard University, Cambridge, MA, 02138, United States of America

  • Yoshiyuki Kabashima

    Roles Conceptualization, Formal analysis, Funding acquisition, Methodology, Project administration

    Affiliation Department of Mathematical and Computing Science/Tokyo Institute of Technology, Yokohama 226-8502, Japan

Abstract

We develop an approximation formula for the cross-validation error (CVE) of a sparse linear regression penalized by ℓ1-norm and total variation terms, based on a perturbative expansion that utilizes the largeness of both the data size and the model dimensionality. The developed formula allows us to reduce the computational cost of the CVE evaluation significantly. The practicality of the formula is tested through application to simulated black-hole image reconstruction on the event-horizon scale with super resolution. The results demonstrate that our approximation reproduces the CVE values obtained via literally conducted cross-validation with reasonably good precision.

1 Introduction

At present, in many practical situations in science and technology, large high-dimensional observational datasets are created and accumulated on a continuous basis. An essential difficulty in the treatment of such high-dimensional data is the extraction of meaningful information. Sparse modeling [1, 2] is a promising framework for overcoming this difficulty, and it has recently been utilized in many disciplines [3, 4]. In this framework, a statistical or machine-learning model with a large number of parameters (explanatory variables) is fitted to the data, in conjunction with a certain sparsity-inducing penalty. This penalty should be chosen appropriately for the data being processed. One representative penalty is the ℓ1 regularization, which retains certain desirable properties, such as convexity of the statistical model [5, 6]. A similar penalty that has received more recent attention is the so-called “total variation (TV)” [7–9], which can be regarded as the ℓ1 regularization imposed on the differences between neighboring explanatory variables. The TV promotes “continuity” of the neighboring variables, which is suitable for processing datasets expected to have such continuity, such as natural images [4, 7–9].

Another common difficulty associated with the use of statistical models is model selection. In the context of image processing using the ℓ1 and TV regularizations, this difficulty appears during the selection of appropriate regularization weights. A practical framework to select these weights, which is applicable to general situations, is cross-validation (CV). CV provides an estimator of the statistical-model generalization error, i.e., the CV error (CVE), using the data under control, and the minimum CVE obtained when sweeping the weights yields the optimal weight values. This versatile framework is, however, computationally demanding for large datasets/models, and this problem frequently becomes a bottleneck affecting model selection. Thus, reducing the CVE computational cost could have a significant impact on a broad range of sparse modeling applications in various disciplines.

Considering these circumstances, in this paper, we provide a CVE approximation formula for a linear regression model penalized by the ℓ1 and TV terms, to efficiently reduce the computational cost. Note that the formula for the case penalized by the ℓ1 term alone has already been proposed in [10], and the formula presented herein is a generalization of it. Below, we show the derivation of the formula and perform a demonstration in the context of super-resolution imaging. The processed images employed in this study are reconstructed from simulated observations of black holes on the event-horizon scale for the Event Horizon Telescope (EHT, see [11–13]) full array. Note that our formula will be applied to actual EHT observations to be conducted after April 2017.

2 Problem setting

Let us suppose that our measurement is a linear process, and denote the measurement result as y ∈ ℝ^M and the measurement matrix as A = {A_{μi}}_{μ=1,⋯,M; i=1,⋯,N} ∈ ℝ^{M×N}. The explanatory variables, corresponding to the images that will be examined in the later demonstration, are denoted by x ∈ ℝ^N. The quality of the fit to the data is described by the residual sum of squares (RSS), i.e., $\frac{1}{2}\|y - Ax\|_2^2$. In addition, we consider the following penalty consisting of ℓ1 and TV terms:

$$ R(x) = \lambda_1 \|x\|_1 + \lambda_T T(x), \tag{1} $$

where the T(x) term corresponds to the TV and is expressed as

$$ T(x) = \sum_{i=1}^{N} t_i, \qquad t_i \equiv \sqrt{\sum_{j \in \partial i} (x_j - x_i)^2}, \tag{2} $$

and ∂i denotes the neighboring variables of the ith variable. There is some variation in the definition of “neighbors”; here, we follow the standard approach [7–9]. That is, x is assumed to be a two-dimensional image, and the neighbors of the ith pixel are the pixels to its right and below. However, the bottom row (the rightmost column) of the image is exceptional, as each pixel in that row (column) has only the right (down) pixel as its neighbor. Note that the approximation formula developed below is independent of this specific choice of neighbors and can be applied to general cases.
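
As a concrete illustration of Eqs (1) and (2), the following minimal Python sketch (not from the original work; array layout and function names are illustrative assumptions) evaluates the penalty for an image stored as a two-dimensional NumPy array, with the neighbor convention of the text (right and down pixels, truncated at the last column and row):

```python
import numpy as np

def tv_terms(img):
    """Isotropic TV terms t_i of Eq (2) for a 2D image; neighbors are the
    right and down pixels, absent in the last column and row, respectively."""
    right = np.zeros_like(img)
    down = np.zeros_like(img)
    right[:, :-1] = img[:, 1:] - img[:, :-1]   # x_right - x_i
    down[:-1, :] = img[1:, :] - img[:-1, :]    # x_down  - x_i
    return np.sqrt(right**2 + down**2).ravel()

def penalty(img, lam1, lamT):
    """R(x) = lam1 * ||x||_1 + lamT * T(x), as in Eq (1)."""
    return lam1 * np.abs(img).sum() + lamT * tv_terms(img).sum()
```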

For this setup, we consider the following linear regression problem with the penalty given in Eq (1):

$$ \hat{x} = \mathop{\rm arg\,min}_{x} \left\{ \frac{1}{2} \| y - A x \|_2^2 + R(x) \right\}, \tag{3} $$

where $\mathop{\rm arg\,min}_{u} f(u)$ generally represents the argument that minimizes an arbitrary function f(u). Further, we consider the leave-one-out (LOO) CV of Eq (3) in the form

$$ \hat{x}^{\backslash \mu} = \mathop{\rm arg\,min}_{x} \left\{ \frac{1}{2} \| y^{\backslash \mu} - A^{\backslash \mu} x \|_2^2 + R(x) \right\}. \tag{4} $$

Note that the system without the μth row of y and A is referred to as the “μth LOO system” hereafter. In this procedure, the CVE, i.e., the generalization error estimator, is

$$ \epsilon_{\rm LOO} = \frac{1}{2M} \sum_{\mu=1}^{M} \left( y_{\mu} - a_{\mu} \cdot \hat{x}^{\backslash \mu} \right)^2, \tag{5} $$

where $a_{\mu}$ is the μth row vector of A. We term this simply the “LOO error (LOOE).”

Computing the LOOE requires, by definition, solving Eq (4) M times, which is computationally expensive. The purpose of this paper is to avoid this expense by deriving an approximation formula for Eq (5).
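
For reference, a direct (and costly) implementation of Eqs (4) and (5) is sketched below; `solve_l1_tv` stands for any solver of Eq (3) (e.g., MFISTA) and is assumed rather than provided. This is the M-fold computation that the approximation developed below avoids.

```python
import numpy as np

def looe_literal(y, A, lam1, lamT, solve_l1_tv):
    """Literal LOO CV of Eqs (4) and (5): solve the reduced problem M times."""
    M = len(y)
    err = 0.0
    for mu in range(M):
        keep = np.arange(M) != mu                         # remove the mu-th row of y and A
        x_mu = solve_l1_tv(y[keep], A[keep], lam1, lamT)  # Eq (4): mu-th LOO solution
        err += 0.5 * (y[mu] - A[mu] @ x_mu) ** 2          # Eq (5) summand
    return err / M
```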

3 Approximation formula for softened system

When M is sufficiently large, i.e., the number of observations is large enough, the difference between the LOO solution and the full solution is expected to be small. This intuition naturally motivates us to conduct a perturbation connecting these two solutions. To conduct this perturbation, we “soften” the penalty by introducing a small cutoff δ (> 0) in the TV, having the form

$$ R_{\delta}(x) = \lambda_1 \|x\|_1 + \lambda_T T_{\delta}(x), \tag{6} $$

where

$$ T_{\delta}(x) = \sum_{i=1}^{N} t_i^{\delta}, \qquad t_i^{\delta} \equiv \sqrt{\delta^2 + \sum_{j \in \partial i} (x_j - x_i)^2}. \tag{7} $$

An approximation formula in the presence of ℓ1 regularization with smooth cost functions has already been proposed in [10]. We employ that formula here and take the limit δ → 0.

To state the approximation formula, we begin by defining “active” and “killed” variables. Owing to the ℓ1 term, some variables are set to zero in the solution $\hat{x}_{\delta}$; we refer to these as “killed variables.” The remaining finite variables are termed “active variables.” We denote the index sets of the active and killed variables by SA and SK, respectively. The active (killed) components of a vector x are formally expressed as $x_{S_A}$ ($x_{S_K}$). For any matrix X, we use double subscripts in the same manner. For example, for an N × N matrix, the submatrix whose row and column components belong to SA and SK, respectively, is denoted by $X_{S_A S_K}$.

The approximation formula can be derived through the following two steps. Note that, in this derivation, a crucial assumption is that the sets of active and killed variables are common among the full and LOO systems. This assumption may not hold exactly in practice, but the resultant formula is asymptotically exact in the large-N limit [10].

The first step is to compute the values of the active variables and their response to a small perturbation. For the μth LOO system, the active variables are determined by the extremization condition of the softened cost function with respect to the active variables, such that

$$ \left. \frac{\partial}{\partial x_{S_A}} \left\{ \frac{1}{2} \| y^{\backslash \mu} - A^{\backslash \mu} x \|_2^2 + R_{\delta}(x) \right\} \right|_{x = \hat{x}_{\delta}^{\backslash \mu}} = 0. \tag{8} $$

The focus here is the response of this solution when a small perturbation −h · x is incorporated into the cost function. A simple computation demonstrates that the active–active components of the response function, $\chi_{\delta}^{\backslash \mu}$, are equivalent to the inverse of the cost-function Hessian:

$$ \left( \chi_{\delta}^{\backslash \mu} \right)_{S_A S_A} = \left[ \left( H_{\delta}^{\backslash \mu} \right)_{S_A S_A} \right]^{-1}, \tag{9} $$

$$ H_{\delta}^{\backslash \mu} \equiv G^{\backslash \mu} + \lambda_T \nabla \nabla T_{\delta} \left( \hat{x}_{\delta}^{\backslash \mu} \right), \tag{10} $$

where ∇∇ denotes the Hessian operator and $G^{\backslash \mu}$ is the Gram matrix of $A^{\backslash \mu}$, i.e., $G^{\backslash \mu} \equiv (A^{\backslash \mu})^{\top} A^{\backslash \mu}$. The other components of the response function are identically zero, from the stability assumption of SK and because the killed variables are zero, with $(\chi_{\delta}^{\backslash \mu})_{S_K S_K} = (\chi_{\delta}^{\backslash \mu})_{S_A S_K} = (\chi_{\delta}^{\backslash \mu})_{S_K S_A} = 0$.

In the second step, we connect the full solution to the LOO solution through the above perturbation with an appropriate h. To specify the perturbation, we assume that the difference $d_{\delta}^{\backslash \mu} \equiv \hat{x}_{\delta} - \hat{x}_{\delta}^{\backslash \mu}$ is small and expand the RSS of the full system with respect to $d_{\delta}^{\backslash \mu}$ as follows:

$$ \frac{1}{2} \| y - A x \|_2^2 = \frac{1}{2} \| y^{\backslash \mu} - A^{\backslash \mu} x \|_2^2 + \frac{1}{2} \left( y_{\mu} - a_{\mu} \cdot \hat{x}_{\delta}^{\backslash \mu} \right)^2 - \left( y_{\mu} - a_{\mu} \cdot \hat{x}_{\delta}^{\backslash \mu} \right) a_{\mu} \cdot \left( x - \hat{x}_{\delta}^{\backslash \mu} \right) + O\!\left( \| d_{\delta}^{\backslash \mu} \|^2 \right). \tag{11} $$

This equation implies that the perturbation between the full and LOO systems can be expressed as $h = \left( y_{\mu} - a_{\mu} \cdot \hat{x}_{\delta}^{\backslash \mu} \right) a_{\mu}$. Hence, we obtain

$$ \hat{x}_{\delta} - \hat{x}_{\delta}^{\backslash \mu} \approx \chi_{\delta}^{\backslash \mu} \left( y_{\mu} - a_{\mu} \cdot \hat{x}_{\delta}^{\backslash \mu} \right) a_{\mu}. \tag{12} $$

The Hessian of the full system has a simple relationship with the LOO Hessian, such that

$$ H_{\delta} \equiv G + \lambda_T \nabla\nabla T_{\delta}(\hat{x}_{\delta}) \approx H_{\delta}^{\backslash \mu} + a_{\mu} a_{\mu}^{\top}, \tag{13} $$

where the approximation on the righthand side comes from replacing $\hat{x}_{\delta}^{\backslash \mu}$ with $\hat{x}_{\delta}$ in the argument of $R_{\delta}(x)$. Inserting Eqs (12) and (13), in conjunction with Eq (9), into Eq (5) and using the Sherman–Morrison formula for matrix inversion, we find

$$ \epsilon_{\rm LOO}^{\delta} \approx \frac{1}{2M} \sum_{\mu=1}^{M} \frac{\left( y_{\mu} - a_{\mu} \cdot \hat{x}_{\delta} \right)^2}{\left( 1 - a_{\mu, S_A} \cdot \left[ (H_{\delta})_{S_A S_A} \right]^{-1} a_{\mu, S_A} \right)^2}. \tag{14} $$

According to Eq (14), we can compute the LOOE only from the full solution $\hat{x}_{\delta}$, without actually performing CV, which facilitates a considerable reduction of the computational cost.
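
A minimal sketch of Eq (14) for the softened system is given below, assuming that the full solution and the penalty Hessian $\lambda_T \nabla\nabla T_{\delta}(\hat{x}_{\delta})$ (here an N × N array `HT`) are available; the active set is detected by a simple threshold, which is a simplification of the procedures described in Section 5.

```python
import numpy as np

def looe_approx(y, A, x_hat, HT, tol=1e-8):
    """Approximate LOOE of Eq (14), computed from the full solution only."""
    active = np.abs(x_hat) > tol                   # S_A: active variables
    A_a = A[:, active]
    H = A_a.T @ A_a + HT[np.ix_(active, active)]   # full-system Hessian on S_A, cf. Eqs (10), (13)
    chi = np.linalg.inv(H)                         # response function, cf. Eq (9)
    c = np.einsum('mi,ij,mj->m', A_a, chi, A_a)    # LOOE factors a_mu . chi . a_mu
    r = y - A @ x_hat                              # full-system residuals
    return np.mean(0.5 * r**2 / (1.0 - c)**2)      # Eq (14)
```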

4 Handling a singularity

Let us generalize Eq (14) to the limit δ → 0, where the penalty contains another singular term in addition to the ℓ1 term. This TV singularity tends to “lock” some of the neighboring variables, i.e., xj = xi (∀j ∈ ∂i), which corresponds to ti = 0 in Eq (2). If two different vanishing TV terms, ti and tj, share a common variable xr, all the variables in those TV terms take the same value, xk = xr (∀k ∈ ({i} ∪ ∂i ∪ {j} ∪ ∂j)). In this manner, the active variables are separated into several “locked” clusters, with all the variables inside a cluster having an identical value. This implies that the variable response to a perturbation, χ = limδ→0 χδ, should take the same value for all variables in a cluster, and those variables may, therefore, be merged. Below, we demonstrate this behavior in the δ → 0 limit. For the derivation, we assume that the clusters are common to both the full and LOO systems, similar to the assumption for SA and SK. For convenience, we index the clusters by α, β ∈ C and denote the number of clusters by |C|; the index set of variables in a cluster α is represented by Sα, and the total set of indices in all clusters is denoted by SC ≡ ∪α Sα. Hereafter, we concentrate on the active variable space only and omit the killed variable space. The complement set of SC, i.e., the set of isolated variables that do not belong to any cluster, is denoted by SI and, thus, SA = SI ∪ SC.

Two crucial observations for the derivation are the “scale separation” and the presence of the “zero mode.” For vanishing TV terms, a natural scaling that satisfies the extremization condition is $x_j - x_i = O(\delta)$ for the variables involved in those terms. Once this scaling is assumed, we realize that the components of the Hessian that are directly related to the clusters diverge. Let us define $\mathcal{T}_{\alpha}$ as the set of vanishing TV terms corresponding to cluster α, i.e., $\mathcal{T}_{\alpha} \equiv \{ i \mid t_i = 0,\ \{i\} \cup \partial i \subseteq S_{\alpha} \}$. Hence, by construction, for all α ∈ C, all components of $D_{\delta}^{\alpha} \equiv \lambda_T \sum_{i \in \mathcal{T}_{\alpha}} \nabla\nabla t_i^{\delta}(\hat{x}_{\delta})$ are scaled as 1/δ and, thus, diverge as δ → 0. The remaining terms are retained as O(1). According to this “scale separation,” we decompose the Hessian as Hδ = Dδ + Fδ, where $D_{\delta} \equiv \oplus_{\alpha \in C} D_{\delta}^{\alpha}$ is the direct sum of the diverging components in the naively extended space (padded with zeros on SI), and Fδ consists of the remaining O(1) terms. This decomposition can be schematically expressed as

$$ H_{\delta} = \begin{pmatrix} D_{\delta}^{1} & & & \\ & \ddots & & \\ & & D_{\delta}^{|C|} & \\ & & & 0 \end{pmatrix} + F_{\delta}. \tag{15} $$

We denote the basis of the current expression by {ei}i∈SA, with (ei)j = δij, and move to another basis that diagonalizes $D_{\delta}$. Each $D_{\delta}^{\alpha}$ has a “zero mode,” and its normalized eigenvector is given by $z^{\alpha} = (z_i^{\alpha})$, where $z_i^{\alpha} = 1/\sqrt{|S_{\alpha}|}$ for i ∈ Sα and 0 otherwise, in the full space. This behavior originates from the symmetry, such that the TV terms in $\mathcal{T}_{\alpha}$ are invariant under a uniform shift in the Sα sub-space, i.e., xj → xj + Δ (∀j ∈ Sα) for ∀Δ ∈ ℝ. This invariance can also be directly seen from a property of the Hessian, i.e., $\sum_{j \in S_{\alpha}} (D_{\delta}^{\alpha})_{ij} = 0$.

In addition, we represent the set of normalized eigenvectors of all the other modes of $D_{\delta}^{\alpha}$, which have eigenvalues λαa that are proportional to 1/δ and positively divergent, as $\{u^{\alpha a}\}_a$. Then, $\{\{\{u^{\alpha a}\}_a, z^{\alpha}\}_{\alpha}\}$ diagonalizes $D_{\delta}$ and $\{\{\{u^{\alpha a}\}_a, z^{\alpha}\}_{\alpha}, \{e_i\}_{i \in S_I}\}$ constitutes an orthonormal basis of the full (active-variable) space. Corresponding to this variable change, we denote $S_Z$, $S_{ZI}$, and $S_U$ as the index sets of variables in the spaces spanned by $\{z^{\alpha}\}_{\alpha}$, $\{\{z^{\alpha}\}_{\alpha}, \{e_i\}_{i \in S_I}\}$, and $\{u^{\alpha a}\}_{\alpha, a}$, respectively. In the new expression, we can rewrite Hδ = Dδ + Fδ as

$$ \tilde{H}_{\delta} = \begin{pmatrix} \tilde{F}_{S_{ZI} S_{ZI}} & \tilde{F}_{S_{ZI} S_U} \\ \tilde{F}_{S_U S_{ZI}} & \tilde{D}_{S_U S_U} + \tilde{F}_{S_U S_U} \end{pmatrix}, \tag{16} $$

where the tilde denotes a matrix expressed in the new basis. Because of the divergence of $\tilde{D}_{S_U S_U}$, only $\tilde{F}_{S_{ZI} S_{ZI}}$ is relevant for the evaluation of (Hδ)−1. These considerations yield the explicit formula of χ as

$$ \tilde{\chi} = \lim_{\delta \to 0} (\tilde{H}_{\delta})^{-1} = \begin{pmatrix} \left( \tilde{F}_{S_{ZI} S_{ZI}} \right)^{-1} & 0 \\ 0 & 0 \end{pmatrix}, \tag{17} $$

where F = limδ→0 Fδ.

By construction, in the reduced space spanned by ({zα}α, {ei}i∈SI), $\tilde{F}_{S_{ZI} S_{ZI}}$ can be expressed as

$$ \tilde{F}_{S_{ZI} S_{ZI}} = \begin{pmatrix} \left( \tilde{F}_{\alpha\beta} \right)_{\alpha, \beta \in C} & \left( \tilde{F}_{\alpha j} \right)_{\alpha \in C,\, j \in S_I} \\ \left( \tilde{F}_{i \beta} \right)_{i \in S_I,\, \beta \in C} & \left( F_{ij} \right)_{i, j \in S_I} \end{pmatrix}. \tag{18} $$

As the non-zero components of the zero mode zα are identically given as $1/\sqrt{|S_{\alpha}|}$, all these coefficients can be easily expressed by the original coefficients Fij, as

$$ \tilde{F}_{\alpha\beta} = \frac{1}{\sqrt{|S_{\alpha}| |S_{\beta}|}} \sum_{i \in S_{\alpha}} \sum_{j \in S_{\beta}} F_{ij}, \tag{19a} $$

$$ \tilde{F}_{\alpha i} = \frac{1}{\sqrt{|S_{\alpha}|}} \sum_{j \in S_{\alpha}} F_{ji} \quad (i \in S_I), \tag{19b} $$

and $\tilde{F}_{i\alpha} = \tilde{F}_{\alpha i}$ by the symmetry. Now, all the components are explicitly specified. The form of χ in the original basis {ei}i∈SA can accordingly be assessed by moving back from the basis $\{\{\{u^{\alpha a}\}_a, z^{\alpha}\}_{\alpha}, \{e_i\}_{i \in S_I}\}$, which completes the computation.

Some additional consideration of the above computation demonstrates that we can shorten some steps and obtain a more interpretable result. We introduce a matrix $\bar{F}$, defined on the index set consisting of the clusters C and the isolated variables SI, as

$$ \bar{F}_{\alpha\beta} = \sum_{i \in S_{\alpha}} \sum_{j \in S_{\beta}} F_{ij}, \qquad \bar{F}_{\alpha i} = \bar{F}_{i\alpha} = \sum_{j \in S_{\alpha}} F_{ji} \quad (i \in S_I), \tag{20} $$

with the remaining components being identical to those of F, i.e., $\bar{F}_{ij} = F_{ij}$ (i, j ∈ SI). Eqs (19) and (20) indicate that $\bar{F}$ is simply the matrix obtained by summing the rows and columns of F in each cluster into a single row and column. It is natural that $\bar{F}$ has a direct connection to χ, because the locked variables in a cluster should exhibit the same response against perturbation. In fact, the response function χ in the original basis is expressed using $\bar{F}$ as

$$ \chi_{ij} = \left( \bar{F}^{-1} \right)_{c(i)\, c(j)} \qquad (i, j \in S_A), \tag{21} $$

where c(i) denotes the cluster α to which i belongs, or i itself if i ∈ SI. This can be directly shown from Eqs (17) and (19), using the relation $\tilde{F}_{S_{ZI} S_{ZI}} = W \bar{F} W$ with $W \equiv \mathrm{diag}\big( \{1/\sqrt{|S_{\alpha}|}\}_{\alpha \in C}, \{1\}_{i \in S_I} \big)$, and the blockwise matrix inversion formula. Eqs (14) and (21) constitute the main result of this paper.
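
The merging operation of Eqs (20) and (21) amounts to summing rows and columns within each cluster; the following small sketch (an illustration, not the authors' code) expresses it through an explicit 0/1 merging matrix.

```python
import numpy as np

def merge(F, a, groups):
    """groups: one index array per cluster S_alpha, plus one singleton per
    isolated variable in S_I, covering the active set. Returns (F_bar, a_bar)."""
    P = np.zeros((len(groups), F.shape[0]))
    for k, idx in enumerate(groups):
        P[k, idx] = 1.0                 # sum (not average) within each group, Eq (20)
    return P @ F @ P.T, P @ a

# Eq (21) then gives the LOOE factor of Eq (14) as
#   a . chi . a  =  a_bar . solve(F_bar, a_bar),
# which is what step 7 of the implementation in Section 5.2 computes.
```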

5 Algorithmic implementation

5.1 Numerical stability and the softening constant δ

For handling the singularity of the cost-function Hessian, we have introduced the softening constant δ in the TV and finally taken the δ → 0 limit. In practical implementations, however, we should keep δ small but finite. To see why, it is sufficient to consider a simple example with just three variables, x = (x1, x2, x3)ᵀ. The softened TV is defined as

$$ T_{\delta} = \sqrt{\delta^2 + p^2 + q^2}, \tag{22} $$

where p = x2 − x1 and q = x3 − x1 are introduced. The corresponding gradient and Hessian are

$$ \nabla T_{\delta} = \frac{1}{T_{\delta}} \begin{pmatrix} -(p+q) \\ p \\ q \end{pmatrix}, \tag{23} $$

$$ \nabla\nabla T_{\delta} = \frac{1}{T_{\delta}} \begin{pmatrix} 2 & -1 & -1 \\ -1 & 1 & 0 \\ -1 & 0 & 1 \end{pmatrix} - \frac{1}{T_{\delta}^3} \begin{pmatrix} -(p+q) \\ p \\ q \end{pmatrix} \begin{pmatrix} -(p+q) & p & q \end{pmatrix}. \tag{24} $$

The zero point of the gradient is given by p = q = 0, irrespective of the δ value. Inserting this into the Hessian, we obtain one zero mode proportional to (1, 1, 1) and two finite modes whose eigenvalues, 3/δ and 1/δ, diverge in the δ → 0 limit. This exactly matches the assumptions of the approximation formula.
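
This statement can be checked numerically; the short sketch below uses the softened term $\sqrt{\delta^2 + p^2 + q^2}$ as reconstructed in Eq (22) and evaluates the Hessian of Eq (24) at p = q = 0.

```python
import numpy as np

delta = 1e-4
J = np.array([[-1.0, 1.0, 0.0],        # p = x2 - x1
              [-1.0, 0.0, 1.0]])       # q = x3 - x1
v = np.zeros(2)                        # evaluate at p = q = 0
t = np.sqrt(delta**2 + v @ v)          # softened TV term, Eq (22)
H = (J.T @ J) / t - np.outer(J.T @ v, J.T @ v) / t**3   # Hessian, Eq (24)
print(np.linalg.eigvalsh(H))           # ~ [0, 1/delta, 3/delta]; zero mode is (1, 1, 1)
```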

On the other hand, if we first take the limit δ → 0 before taking the zero-gradient limit p, q → 0, we see that two zero modes appear: one is proportional to (1, 1, 1) and the other to (p + q, q − 2p, p − 2q). This is bad news, because the second zero mode, which remains even in the limit p, q → 0, is never taken into account when deriving the approximation formula: the derivation essentially depends on how the zero mode behaves, and our formula loses its justification if such unexpected zero modes exist.

These considerations show that the two limits, limδ→0 and limp,q→0, do not commute for the TV Hessian. The derivation of our approximation formula assumes the order limδ→0 limp,q→0, and thus the algorithmic implementation should reflect this ordering in a certain way. A simple way is to keep δ small but finite, which is actually a common technique for enhancing numerical stability when using the TV [14]. The choice of the amplitude of δ is related to the numerical precision achieved when solving the optimization problem (3). A practical choice is stated in the next subsection.

5.2 Procedures

Here, we state the procedures for implementing Eqs (14) and (21) in a numerical computation. Suppose that we have an algorithm that solves Eq (3) and provides the solution $\hat{x}$ given y, A, λ1, and λT. Using this solution, and introducing a finite δ in the Hessian for the reason discussed above, we can assess the LOOE through the following steps:

  1. The sets of active and killed variables, SA and SK, are specified from $\hat{x}$.
  2. The values of all TV terms, $\{t_i(\hat{x})\}_{i=1}^{N}$, are computed.
  3. All clusters C and the index sets of the variables belonging to them, {Sα}α∈C, are enumerated from $\{t_i(\hat{x})\}$, as well as the index set of isolated variables, SI.
  4. The total variation from which the vanishing TV terms are removed is denoted by $\tilde{T}_{\delta}(x) \equiv \sum_{i :\, t_i(\hat{x}) \geq \theta} t_i^{\delta}(x)$, and the regular part of the Hessian is computed as $F = G_{S_A S_A} + \lambda_T \left[ \nabla\nabla \tilde{T}_{\delta}(\hat{x}) \right]_{S_A S_A}$.
  5. A new index set SR = {{α}α∈C, SI} is defined.
  6. On SR, the merged Hessian $\bar{F}$ is constructed from F as $\bar{F}_{\alpha\beta} = \sum_{i \in S_{\alpha}} \sum_{j \in S_{\beta}} F_{ij}$, $\bar{F}_{\alpha i} = \bar{F}_{i\alpha} = \sum_{j \in S_{\alpha}} F_{ji}$ (i ∈ SI), and $\bar{F}_{ij} = F_{ij}$ (i, j ∈ SI). Similarly, the merged measurement matrix $\bar{A}$ is defined as $\bar{A}_{\mu\alpha} = \sum_{i \in S_{\alpha}} A_{\mu i}$ and $\bar{A}_{\mu i} = A_{\mu i}$ (i ∈ SI).
  7. Using $\bar{A}$ and $\bar{F}$, the LOOE factor in Eq (14) is computed as $a_{\mu, S_A} \cdot \chi_{S_A S_A} a_{\mu, S_A} = \bar{a}_{\mu} \cdot (\bar{F} \backslash \bar{a}_{\mu})$, where $\bar{a}_{\mu}$ is the μth row vector of $\bar{A}$ and x = A\b denotes the solution of the linear equation Ax = b.
  8. Using the LOOE factor and $\hat{x}$, the LOOE is evaluated from Eq (14).

At step 7, we take the left division instead of the inverse for numerical stability. The cluster enumeration at step 3 involves a delicate point in the definition of C and {Sα}α∈C. Because of the limited numerical precision, a TV term never vanishes exactly; therefore, we need a certain threshold to extract the cluster structure from the TV terms. Here, we introduce the threshold θ and enumerate the clusters as follows (a code sketch of this enumeration is given after the list):

  1. 3-1. If $t_i(\hat{x}) < \theta$, the variables in {i} ∪ ∂i are regarded as “linked.” All the links are enumerated by testing this condition for all i = 1, ⋯, N. The set of links is denoted by L, and the index set of all variables in L is denoted by SL.
  2. 3-2. An empty set C = ϕ is prepared and the cluster index α = 1 is defined.
  3. 3-3. The following steps are repeatedly implemented while L is non-empty:
    1. (i). Two empty sets, Stmp = ϕ and Scluster = ϕ, are prepared;
    2. (ii). One link is selected and removed from L. The variable indices in the link are entered into Stmp;
    3. (iii). The following steps are repeatedly implemented while Stmp is non-empty:
      1. a. One index i in Stmp is selected and moved from Stmp to Scluster;
      2. b. If the above chosen index i exists in SL, all the links to i are removed from L, and SL is updated accordingly. The variables linked to i are entered into Stmp;
      3. c. Stmp ← Stmp ∖ Scluster.
    4. (iv). The variables in Scluster constitute a cluster. Sα = Scluster is defined and α is entered into C;
    5. (v). αα + 1.
  4. 3-4. If Sα ∩ SK ≠ ϕ, α is removed from C. This is checked for all α ∈ C.
  5. 3-5. C, {Sα}α∈C, and SI = SA − ∪α∈C Sα are returned.

The entire procedure presented above implements Eqs (14 and 21).
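
For concreteness, the following is a sketch of the cluster enumeration in steps 3-1 to 3-5, realized as a breadth-first search over the links; the function name and input format are illustrative assumptions, not part of the original implementation.

```python
from collections import deque

def enumerate_clusters(tv, neighbors, active, killed, theta):
    """tv[i]: TV term t_i at the solution; neighbors(i): the index set of the
    neighbors of i; active/killed: boolean masks for S_A and S_K.
    Returns (clusters, isolated)."""
    n = len(tv)
    adj = [set() for _ in range(n)]
    for i in range(n):
        if tv[i] < theta:                      # step 3-1: a vanishing term links {i} and its neighbors
            for j in neighbors(i):
                adj[i].add(j)
                adj[j].add(i)
    clusters, seen = [], [False] * n
    for s in range(n):                         # steps 3-2 and 3-3: grow one cluster per sweep
        if seen[s] or not adj[s]:
            continue
        comp, queue, seen[s] = [], deque([s]), True
        while queue:
            i = queue.popleft()
            comp.append(i)
            for j in adj[i]:
                if not seen[j]:
                    seen[j] = True
                    queue.append(j)
        if not any(killed[i] for i in comp):   # step 3-4: drop clusters touching S_K
            clusters.append(sorted(comp))
    in_cluster = {i for c in clusters for i in c}
    isolated = [i for i in range(n) if active[i] and i not in in_cluster]  # step 3-5
    return clusters, isolated
```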

A debatable point is the choice of the values of θ and δ. In most iterative algorithms, such as the one in [8, 9], there is an inevitable finite error in a TV term even when it should vanish. Let us express the “scale” of this error as θnum. By construction, the threshold θ is related to this numerical error, and it is appropriate to choose θ ≳ θnum; the softening constant δ should be sufficiently larger than θnum because δ implements the assumed order of the two limits, limδ→0 limp,q→0, in the derivation of the approximation formula. Overall, the relation

$$ \theta_{\rm num} \lesssim \theta \ll \delta \tag{25} $$

must be satisfied. We have numerically checked how strict this principled relation is, and found that the approximation result is not sensitive to the choice of θ as long as it is sufficiently smaller than δ. Although slightly more delicate points are involved in the choice of δ, we have also found that the approximation result is stable and the cost-function Hessian is safely invertible over a wide range of δ. Based on these observations, in the application of our formula below, the default values are set to δ = 10−4 and θ = 10−12. They are chosen according to our datasets and experimental setup: the maximum value of the non-softened TV terms sets the scale reflected in δ, and the numerical precision is about θnum ≈ 10−12, which is used for θ. Coincidentally, this default value of δ accords with the one in [14]. The examination of the sensitivity to δ and θ is reported below.

Another noteworthy point is that these procedures can be easily extended to other variants of the TV. For example, for the so-called anisotropic TV [9], $T_{\rm ani} = \sum_{i} \sum_{j \in \partial i} |x_j - x_i|$, we set F = G in step 4 and modify the definition of the link in step 3-1 accordingly, so as to render our formula applicable. In the case of the square TV, $T_{\rm sq} = \sum_{i} \sum_{j \in \partial i} (x_j - x_i)^2 \equiv \frac{1}{2} x^{\top} J x$, the formula becomes significantly simpler, because this TV has no sparsifying effect and the formula for the simple ℓ1 case can be employed. We can use Eq (14) with $\chi_{S_A S_A} = \left( G_{S_A S_A} + \lambda_T J_{S_A S_A} \right)^{-1}$ directly, without the need for cluster enumeration.
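
For instance, under the square TV the cluster machinery disappears and the factor in Eq (14) can be computed directly; a sketch under these assumptions (with `J` the quadratic-form matrix defined by T_sq = xᵀJx/2) is given below.

```python
import numpy as np

def looe_approx_square_tv(y, A, x_hat, J, lamT, tol=1e-8):
    """Eq (14) with chi_{S_A S_A} = (G_{S_A S_A} + lamT * J_{S_A S_A})^{-1}."""
    act = np.abs(x_hat) > tol
    A_a = A[:, act]
    chi = np.linalg.inv(A_a.T @ A_a + lamT * J[np.ix_(act, act)])
    c = np.einsum('mi,ij,mj->m', A_a, chi, A_a)   # LOOE factors
    r = y - A @ x_hat
    return np.mean(0.5 * r**2 / (1.0 - c)**2)     # Eq (14)
```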

6 Application to super-resolution imaging

To test the usefulness of the developed formula, let us apply the derived expression to the super-resolution reconstruction of astronomical images. A number of recent studies have demonstrated that sparse modeling is an effective means of reconstructing astronomical images obtained through radio interferometric observations [15–18]. In particular, the capability of high-fidelity imaging in super-resolution regimes has been shown, which renders this technique a useful choice for the imaging of black holes with the EHT [17–21]. We adopt the same problem setting as [17, 20] and demonstrate the efficacy of our approximation formula through comparison with the literally conducted 10-fold CV result. Here, xi denotes the ith pixel value and A is (part of) the Fourier matrix. The dataset y is generated through the linear process

$$ y = A x^{0} + \xi, \tag{26} $$

where ξ is a noise vector and x^0 is the simulated image, which we infer given y and A.

In this work, we use data for simulated EHT observations based on three different astronomical images, which are available as sample data for the EHT Imaging Challenge. Our Datasets 1, 2, and 3 correspond to the sample datasets 1, 2, and 5, respectively, available from [22] as of July 2017. The images are reconstructed with N = 10000 = 100 × 100 pixels and with fields of view of 160, 250, and 100 μas, which are identical to those of the original images of Datasets 1, 2, and 3 from the EHT Imaging Challenge, respectively. We test four values for each of λ1 and λT: λ1 ∈ (M/2) × {1, 10, 100, 1000} and λT ∈ (M/8) × {1, 10, 100, 1000}. M is 1910, 1910, and 2786 for Datasets 1–3, respectively. Later, we also use different-sized data generated from the same datasets to check the size dependence of the results.

Table 1 shows the mean CVE values for the three datasets, determined by the 10-fold CV and by our approximation formula for varying λT. λ1 is fixed to the optimal value, which is coincidentally common to all datasets and satisfies 2λ1/M = 1. It is clear that the approximate CVE values accord well with the 10-fold results, even on the error-bar scale, demonstrating that our approximation formula works very well. Note that the error bar for the approximation is given by the standard deviation of the M terms in Eq (14) divided by $\sqrt{M}$.

Table 1. CVE values determined by 10-fold CV and our approximation formula against λT.

λ1 is fixed to the optimal value (2λ1/M = 1, coincidentally common to all cases). The number in brackets denotes the error bar to the last digits. The optimal values are bolded. The tuning constants δ and θ are set to be δ = 10−4 and θ = 10−12, respectively.

https://doi.org/10.1371/journal.pone.0188012.t001

To directly observe the reconstruction quality, in Fig 1 we display the images at all investigated parameters and the reconstructed image at the optimal λ1 and λT for Dataset 3, as well as the associated errors plotted against λ1 and λT in Fig 2. Again, we can see that the proposed method approximates the 10-fold result well, and the reconstructed image reasonably resembles the original. The RSS is monotonic with respect to changes in λ1 and λT but the approximate LOOE is not, which implies that the LOOE factor computed through Eq (21) appropriately reflects the effect of the penalty terms.

Fig 1. Super-resolution imaging results for Dataset 3 based on model image of supermassive black hole at center of nearby elliptical galaxy, M87.

(a) Images for all investigated parameters; the star-marked panel is obtained at the optimal parameters. (b) Original images (top) and reconstructed images (bottom) at the optimal parameters ((2λ1, 8λT)/M = (1, 10)). The images on the right-hand side are convolved with a circular Gaussian beam, the full width at half maximum (FWHM) of which is 25% of the nominal angular resolution of the EHT and corresponds to the diameters of the yellow circles. This coincides with the optimal resolution minimizing the mean square error between the original and reconstructed images.

https://doi.org/10.1371/journal.pone.0188012.g001

Fig 2.

(a) 3D plot of mean CVEs against λ1 and λT, without error bars. (b) Plot of mean CVEs and RSS against λT at the optimal value of λ1, 2λ1/M = 1. (c) Plot of mean CVEs and RSS against λ1 at the optimal value of λT, 8λT/M = 10. In (c), the RSS overlaps the CVEs within the symbol size. In all cases, the agreement between the approximate LOOE and the 10-fold CVE is fairly good. The tuning constants δ and θ are set to δ = 10−4 and θ = 10−12, respectively.

https://doi.org/10.1371/journal.pone.0188012.g002

Next, we check the sensitivity of the approximate result to the tuning constants δ and θ. In Fig 3, the approximate LOOEs at the optimal λ1 are plotted against λT when changing δ (left) and θ (right). This indicates that the approximate LOOEs are stable against changes in both δ and θ. Hence, we may choose these values rather arbitrarily. This is good news, because tuning them makes the problem more numerically amenable: enlarging δ makes the Hessian inversion more numerically stable, and increasing θ lowers the effective degrees of freedom. The second property, associated with θ, is particularly beneficial when treating a large dataset, because it can downsize the Hessian and reduce the cost of computing its matrix inversion. In Table 2, the values of the effective degrees of freedom are given for varying θ. The reduction of the degrees of freedom at large θ (yet small enough compared with δ = 10−4) is significant, which encourages us to apply the proposed formula to larger datasets.

Fig 3. Comparative plots of mean approximate LOOEs against λT at 2λ1/M = 1 when (a) δ changes over 10−6–10−3 with fixed θ = 10−12; (b) θ changes over 10−12–10−6 with fixed δ = 10−4.

They show that the LOOE curves are rather stable against the choice of the tuning constants.

https://doi.org/10.1371/journal.pone.0188012.g003

Table 2. The effective degrees of freedom |C| + |SI|, i.e., the number of clusters plus the number of isolated variables, against θ for Dataset 3 at δ = 10−4 and the optimal parameters (2λ1, 8λT)/M = (1, 10).

https://doi.org/10.1371/journal.pone.0188012.t002

Finally, let us examine the data-size dependence of the approximation accuracy and of the computational cost, both for solving Eq (3) and for obtaining the approximate LOOE from the solution. The data analyzed here are generated from an identical simulated black-hole image expressed with different numbers of pixels. When solving Eq (3), we used an Intel(R) Core(TM) i7-5820K CPU at 3.30 GHz with 6 cores for N = 50² = 2500 and an Intel(R) Xeon(R) CPU E5-2699 v3 at 2.30 GHz with 36 cores for N = 100² and 150², and employed the algorithm called “MFISTA” proposed in [8, 9]. Meanwhile, we used a laptop with a 1.7 GHz Intel Core i7 (two cores) for evaluating the approximate LOOE. Hence, the comparison is unfair and unfavorable to the approximation formula. The left panel of Fig 4 indicates that the approximation accuracy improves for larger sizes. This is reasonable because the perturbation we have employed should become more accurate as the model and data grow, though the accuracy at N = 50² = 2500 is already good. The right panel clearly shows the advantage of the developed formula: the actual computational time of the approximate LOOE is significantly shorter than the time to convergence of the algorithm solving Eq (3) in the investigated range of system sizes, even under the unfair comparison mentioned above. However, this advantage will be less prominent if the model becomes very large: our approximation formula requires the Hessian inversion, whose computational cost scales as O((|C| + |SI|)³) ≈ O(N³), whereas MFISTA requires a cost of O(N²) as long as the number of steps to convergence is constant in N. The crossover size at which these two computational costs become comparable is roughly estimated as N× ≈ 10⁶, although such a crossover tendency cannot yet be seen in Fig 4. For such large systems, a new fundamental solution should be tailored to resolve the computational-cost problem, though tuning θ to a large value in the present method can still be a good first aid.

Fig 4.

(a) Plot of mean CVEs at the optimal parameters for different sizes. (b) Log-log plot of the computational times for solving the optimization problem (3) and for obtaining the approximate CVE, against the dataset size.

https://doi.org/10.1371/journal.pone.0188012.g004

7 Conclusion

In this paper, we have developed an approximation formula for the CVE of a sparse linear regression penalized by ℓ1 and TV terms, and demonstrated its usefulness in the reconstruction of simulated black-hole images. Our derivation is based on a perturbation that assumes a small difference between the full and leave-one-out solutions. This assumption will not be fulfilled in some specific cases, e.g., when the measurement matrix is sparse. However, for most dense measurement matrices, such as the Fourier matrix discussed in this paper, our assumption will be reasonably satisfied. Hence, we expect the range of application of our formula to be sufficiently wide, and we encourage readers to use this formula in their own work. It is also straightforward to generalize the developed formula to other types of TV, and two examples of such generalization, for the anisotropic and square TVs, have been explained.

The key concept of our formula, perturbation between the LOO and full systems, is very general and can be applied to more general statistical models and inference frameworks [23]. The development of practical formulas for those cases will facilitate higher levels of modeling and computation.

Acknowledgments

We would like to express our sincere gratitude to Mareki Honma and Fumie Tazaki for their helpful discussions. We thank Katherine L. Bouman for preparing the EHT Imaging Challenge website [22, 24]. We also thank Andrew Chael and Lindy Blackburn for writing a simulation software to produce sample data sets [25].

References

  1. 1. Rish I, Grabarnik G. Sparse Modeling: Theory, Algorithms, and Applications. CRC Press; 2014.
  2. 2. Hastie T, Tibshirani R, Wainwright M. Statistical Learning with Sparsity: The Lasso and Generalizations. CRC Press; 2015.
  3. 3. http://sparse-modeling.jp/index_e.html
  4. 4. Mairal J, Bach F, Ponce J. Sparse modeling for image and vision processing. Available from: arXiv:1411.3230v2.
  5. 5. Tibshirani R. Regression shrinkage and selection via the lasso. J. Royal. Statist. Soc. B., 58, 267 (1996).
  6. 6. Efron B, Hastie T, Johnstone I, Tibshirani R. Least angle regression. Ann. Stat., 32, 407 (2004).
  7. 7. Rudin L I, Osher S, Fatemi E. Nonlinear total variation based noise removal algorithms. Physica D 60, 259 (1992).
  8. 8. Chambolle A. An algorithm for total variation minimization and applications. J. Math. Imaging Vision, 20, 89 (2004).
  9. 9. Beck A, Teboulle M. Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Trans. Image Process., 18, 2419 (2009). pmid:19635705
  10. 10. Obuchi T, Kabashima Y. Cross validation in LASSO and its acceleration. J. Stat. Mech., 053304 (2016).
  11. 11. http://www.eventhorizontelescope.org
  12. 12. Asada K, Kino M, Honma M, Hirota T, Lu R.-S, Inoue M. White Paper on East Asian Vision for mm/submm VLBI: Toward Black Hole Astrophysics down to Angular Resolution of 1 RS. Available from: arXiv:1705.04776.
  13. 13. Akiyama K, Lu R, Fish V L, Doeleman S S, Broderick A E, Dexter J, et al. 230 GHz VLBI observations of M87: Event-horizon-scale structure during an enhanced very-high-energy γ-ray state in 2012. Astrophys. J., 807, 150 (2015).
  14. 14. Chan T F, Osher S, Shen J. The Digital TV Filter and Nonlinear Denoising, IEEE Trans. Image Process., 10, 231 (2001). pmid:18249614
  15. 15. Wiaux Y, Jacques L, Puy G, Scaife A M M, Vandergheynst P. Compressed sensing imaging techniques for radio interferometry. Mon. Not. R. Astron. Soc., 395, 1733 (2009).
  16. 16. Li F, Cornwell T J, de Hoog F. The application of compressive sampling to radio astronomy I. Deconvolution. A&A, 528, A31 (2011).
  17. 17. Honma M, Akiyama K, Uemura M, Ikeda S. Super-resolution imaging with radio interferometry using sparse modeling. Publ. Astron. Soc. Japan, 66, 1 (2014).
  18. 18. Honma M, Akiyama K, Tazaki F, Kuramochi K, Ikeda S, Hada K et al. Imaging black holes with sparse modeling. JPCS, 699, 012006 (2016).
  19. 19. Ikeda S, Tazaki F, Akiyama K, Hada K. PRECL: A new method for interferometry imaging from closure phase. Publ. Astron. Soc. Japan, 68, 45 (2016).
  20. 20. Akiyama K, Ikeda S, Pleau M, Fish V, Tazaki F, Kuramochi K et al. Superresolution Full-polarimetric Imaging for Radio Interferometry with Sparse Modeling. The Astronomical Journal, 153, 1 (2017).
  21. 21. Akiyama K, Kuramochi K, Ikeda S, Fish V, Tazaki F, Honma M et al. Imaging the Schwarzschild-radius-scale Structure of M87 with the Event Horizon Telescope Using Sparse Modeling. The Astrophysical Journal, 838, 1 (2017).
  22. 22. http://vlbiimaging.csail.mit.edu/imagingchallenge
  23. 23. Kabashima Y, Obuchi T, Uemura M. Approximate cross-validation formula for Bayesian linear regression. Available from: arXiv:1610.07733.
  24. 24. Bouman K L, Johnson M D, Zoran D, Fish V L, Doeleman S S, Freeman W T. Computational Imaging for VLBI Image Reconstruction. The IEEE Conference on Computer Vision and Pattern Recognition, 913 (2016).
  25. 25. Chael A A, Johnson M D, Narayan R, Doeleman S S, Wardle J F C, Bouman K L. High-resolution Linear Polarimetric Imaging for the Event Horizon Telescope. The Astrophysical Journal, 829, 15 (2016).