Algorithm for the Time-Propagation of the Radial Diffusion Equation Based on a Gaussian Quadrature

The numerical integration of the time-dependent spherically-symmetric radial diffusion equation from a point source is considered. The flux through the source can vary in time, possibly stochastically based on the concentration produced by the source itself. Fick's one-dimensional diffusion equation is integrated over a time interval by considering a source term and a propagation term. The source term adds new particles during the time interval, while the propagation term diffuses the concentration profile of the previous time step. The integral in the propagation term is evaluated numerically using a combination of a new diffusion-specific Gaussian quadrature and interpolation on a diffusion-specific grid. This balances accuracy against the number of points needed for both integration and interpolation. The theory can also be extended to include a simple reaction-diffusion equation in the limit of high buffer concentrations. The method is unconditionally stable: it converges for any time step Δt. Moreover, Δt can be arbitrarily large; it is defined solely by the timescale on which the flux source turns on and off, an advantage over other methods.


Introduction
Diffusion is one of the most classic problems in physics, with the governing laws of Fick dating back more than 150 years [1,2] and a number of books on the subject [3,4,5]. From both the mathematical and numerical points of view, the diffusion equation (especially the spherically-symmetric version considered here) is very well studied; its mathematical properties are known and a number of numerical techniques for solving it exist (e.g., explicit-time methods like forward-time/central-space (FTCS), implicit-time methods like backward-time/central-space (BTCS) and Crank-Nicolson, and more sophisticated ones [6]). While diffusion has been studied experimentally and theoretically for a very long time, it is still of central importance in many areas today. For example, it plays vital roles in biology [7] and new technologies like nanofluidics [8].
For these applications, the efficient numerical solution of the diffusion equation is fundamental to understanding the physics underlying these new applications. One important example is calcium-induced calcium release (CICR) in cardiac muscle, where an array of Ca²⁺-selective ion channels open and close with a probability that depends on the local Ca²⁺ concentration, which comes from the diffusion of Ca²⁺ through neighboring channels [9]. In this sense, the channel's open/closed state history affects its future state in a feedback mechanism. For such systems, the source flux is changing stochastically in time, depending on the on/off states of all the other flux sources. Moreover, since each channel is ~40 nm across, the calculation of a 10 × 10 array of these channels must be accurate over long distances (at least 560 nm) to understand the complex interactions.
Traditional numerical algorithms like BTCS and Crank-Nicolson divide the space coordinate into small slices to calculate numerical derivatives and produce a matrix equation that must be solved at each time step. Similarly, the conditionally-stable explicit-time methods like FTCS require a matrix/vector multiplication. Because the matrix is sparse, the number of flops for this solve is proportional to the number of grid points. These approaches are very valuable and easy to implement, but when the number of grid points reaches well into the thousands for large systems, other approaches might be more computationally efficient.
Here, one alternative numerical technique to compute the concentration profile for radial Fickian diffusion from a point source is described. Specifically, the spherically-symmetric equation considered here is

∂c(r, t)/∂t = (D/r²) ∂/∂r [r² ∂c(r, t)/∂r] + s(r, t)    (1)

with the initial condition

c(r, 0) = c₀,    (2)

which we take to be a constant. Here, c(r, t) is the concentration (number per unit volume) of the diffusing particles and s(r, t) is the flux per volume (number per unit volume per time) injected into the system. Here, a point source is considered. D is the diffusion coefficient and the variables r and t are the distance from the origin and time, respectively. The purpose of this paper is to describe a method to solve Eq (1) numerically for c(r, t). Instead of discretizing the differential equation itself, the propagation algorithm described here is based on an integral version of the solution to Eq (1). This approach splits the solution into two components, the diffusion from the source during the time interval (whose exact solution is known) and the propagation of the particles released during previous time intervals. Here, we show that the propagation integral can be very efficiently calculated using a combination of (1) a new Gaussian quadrature that is specifically formulated for the diffusion equation and (2) interpolation with grid points whose locations are chosen based on the exact solution to the diffusion equation. This, then, minimizes both the number of points needed to evaluate the integral and the number of points needed to interpolate the results between grid points.
The propagation method offers advantages over more classical techniques. First, since no numerical time derivatives are used, the time step Δt does not have to be small. In the propagation method, Δt is determined by the source's timescale. For example, if the source flux varies smoothly, then Δt is chosen small enough to approximate that function well. However, if the source turns on and off stochastically, then Δt should be the timescale that the source operates on; if, for example, it changes at most once a second, then Δt is 1 second. Second, since no numerical spatial derivatives are used with a constant Δr, very large numbers of grid points (and therefore very large tridiagonal matrices) are avoided. In the propagation method, a nonuniform grid is created that places points optimally for the diffusion problem, and interpolation between those points is shown to be accurate.
Preliminary calculations show that a 10-point Gaussian quadrature and ~100 interpolation grid points can give very accurate results, even over long simulations using only a matrix/vector multiplication with a sparse matrix containing < 1400 nonzero entries. These calculations also show that it is possible to evaluate the error of the propagation technique. Moreover, the technique is fast enough to find an optimal set of simulation parameters that maximize accuracy and minimize calculation time. This error and parameter determination is part of the overhead of the simulation, so that simulations of the system of interest can be done knowing that the error in the concentration is always within acceptable limits and that the run time is the shortest possible.

Propagation integral formulation
For the source flux in Eq (1) we consider a point source. When the point source has a constant flux j that does not vary in time, so that

s(r, t) = j δ(r),    (3)

where δ(r) is the Dirac delta function, a simple closed-form formula exists in terms of the complementary error function [3]:

c(r, t) = c₀ + (j / (4πDr)) erfc(η),    (4)

where

η = r / √(4Dt).    (5)

However, when the source is not constant over time, Eq (1) must be solved numerically. One way to attack this problem is via classic finite difference or finite element methods. Methods like FTCS, BTCS, and Crank-Nicolson produce matrix equations to be solved for the concentration at previously-defined spatial grid points. The choice of grid points is crucial since minimizing the number of points is critical for fast calculation, but accuracy may suffer if the number is too small. The goal of this paper is to provide an alternative solution method that is not based on finite differences or finite elements. This approach propagates the concentration profile in time with a sparse-matrix/vector multiplication, as compared to solving a sparse-matrix equation. Since the grid points in this paper are based on a Gaussian quadrature and an interpolation scheme that are both specifically developed for this diffusion problem, the number of grid points and their locations are optimized in the sense of producing a high-order solution with a minimum number of points.
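A minimal sketch of evaluating this constant-source solution, assuming the form c(r, t) = c₀ + (j/(4πDr)) erfc(r/√(4Dt)); the function name and parameter values are illustrative, not from the paper.

```python
import math

def c_exact(r, t, j=1.0e7, D=1.0e-9, c0=0.0):
    """Concentration (particles/m^3) at radius r (m) and time t (s) for a
    point source with constant flux j (particles/s) that turned on at t = 0;
    D is the diffusion coefficient (m^2/s)."""
    return c0 + j / (4.0 * math.pi * D * r) * math.erfc(r / math.sqrt(4.0 * D * t))
```

Near the source (or at long times) erfc → 1 and the steady-state profile j/(4πDr) is recovered; far from the source the concentration is exponentially small.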
The approach we use here involves decomposing the concentration into two parts, a source part that adds new particles from the point source during the discretized time interval T_n = (t_n, t_{n+1}) and a propagation part that diffuses the concentration profile of the last time step through the time interval T_n [3]. That is,

c(r, t_{n+1}) = c_prop(r, t_{n+1}) + c_source(r, t_{n+1}),    (6)

where

c_prop(r, t_{n+1}) = (4πD(t_{n+1} − t_n))^{−3/2} ∫ exp(−|r − r′|² / (4D(t_{n+1} − t_n))) c(r′, t_n) dr′    (7)

and

c_source(r, t_{n+1}) = ∫_{t_n}^{t_{n+1}} (4πD(t_{n+1} − t′))^{−3/2} ∫ exp(−|r − r′|² / (4D(t_{n+1} − t′))) s(r′, t′) dr′ dt′,    (8)

which for the point source evaluates to

c_source(r, t_{n+1}) = (χ_n j_n / (4πDr)) erfc(r / √(4DΔt)),    (9)

where χ_n is 0 or 1 depending on whether the source is off or on during the time interval T_n, respectively, j_n is the flux during T_n, and Δt = t_{n+1} − t_n is assumed to be the same for all n. These classic formulas may be found, for example, in Section 8.4 of the book by Barton [3]. (Note that for a half-space problem like flux through an ion channel on one side of a membrane, the denominator includes 2π, not 4π, because the surface area is half that of the whole sphere.) Here, the source flux takes the form

s(r, t) = χ(t) j(t) δ(r)    (10)

and is assumed to be constant during the discretized time intervals:

j(t) = j_n for t ∈ T_n.    (11)

Similarly, χ_n, the on or off state of the source, is the same during the entire time interval [t_n, t_{n+1}). Note that here we do not assume that the χ_n are known beforehand; for that, an analytic solution of the diffusion equation exists (see Eq (54) later). Here, the on/off state of the source can vary due to random inputs that depend on the concentration profile produced by the source. For example, in CICR, the concentration profile produced by one channel affects the open state of its neighbors, which in turn produce Ca²⁺ profiles that affect the original channel.
It is also important to note that while we assume that j is constant over a time interval, that does not mean we are only considering fluxes that are either on or off or that j must be the same during all time intervals. Specifically, if the source flux is a smoothly varying function, the time intervals T_n should be chosen small enough so that j(t) is well-approximated by the piecewise-constant function j(t) = j_n ≡ j(t_n) for t ∈ T_n. In the notation used here, j_n is the flux during the n-th time interval and χ_n is a random variable that takes values 0 or 1. The χ function is convenient for the case of a randomly on-or-off source. For a smoothly varying source, on the other hand, χ_n = 1 for all n and the j_n are different for different n.
Since the source term is the concentration profile due to the time-independent source flux during the time interval T_n, Eq (9) is just Eq (4) applied to this time interval. In Eqs (7) and (8), the vectors r and r′ (with lengths r and r′, respectively) are three-dimensional locations near the point source. The propagation integral can be simplified by using the relations

|r − r′|² = r² − 2rr′ cos θ + r′²    (12)

dr′ = r′² sin θ dr′ dθ dφ    (13)

to give

c_prop(r, t_{n+1}) = (4πDΔt)^{−1/2} (1/r) ∫₀^∞ r′ [exp(−(r − r′)²/(4DΔt)) − exp(−(r + r′)²/(4DΔt))] c(r′, t_n) dr′.    (14)

The accurate integration of this integral is computationally difficult because it requires knowing the function c(r′, t_n) for all r′ from 0 to ∞. This is generally difficult because one needs at least some prior knowledge about a reasonable finite upper limit and about acceptable grid spacings for the r′. One purpose of this paper is to develop an order-N integration scheme for this propagation integral using a Gaussian quadrature.
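The reduced one-dimensional form can be checked numerically: propagating the always-on profile one step with this kernel and adding the source contribution for the interval should reproduce the always-on profile at the later time. The sketch below does this with a plain trapezoidal rule in units where D = 1, Δt = 1, and j/(4πD) = 1 (so the always-on profile is c(r, t) = erfc(r/√(4t))/r); it is a verification sketch under those assumptions, not the paper's quadrature.

```python
import math

def c_always_on(r, t):
    """Always-on point-source profile in units where j/(4*pi*D) = 1, D = 1."""
    return math.erfc(r / math.sqrt(4.0 * t)) / r

def step(r, t, dt=1.0, h=0.005, rmax=40.0):
    """Propagate c(., t) through dt with the 1D kernel (trapezoidal rule),
    then add the source contribution for the interval."""
    total = 0.0
    n = int(round(rmax / h))
    for k in range(1, n + 1):             # integrand vanishes at r' = 0
        rp = k * h
        kern = (math.exp(-(r - rp) ** 2 / (4.0 * dt))
                - math.exp(-(r + rp) ** 2 / (4.0 * dt)))
        w = h if k < n else 0.5 * h       # trapezoidal end weight
        total += w * rp * c_always_on(rp, t) * kern
    c_prop = total / (r * math.sqrt(4.0 * math.pi * dt))
    c_source = math.erfc(r / math.sqrt(4.0 * dt)) / r
    return c_prop + c_source
```

For r = 1.3 and t = 1, `step()` agrees with the exact always-on profile at t = 2 to better than 10⁻⁵.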
One important thing to note is that Eqs (9) and (14) are exact solutions of the diffusion equation over a time step Δt. Therefore, Δt can be arbitrarily large. Importantly, it is not constrained by having to be small to numerically approximate a time derivative, as it must be in other methods like FTCS, BTCS, and Crank-Nicolson. Δt is then defined only by the time course of the flux source, the rate at which it turns on and off. This is one way that the propagation method provides an advantage over other existing alternatives. Moreover, because Δt can be anything, the propagation method is unconditionally stable.

Non-dimensionalization
The first step is non-dimensionalizing r with the diffusion lengthscale by defining

R = r / √(4DΔt),    (15)

with a similar definition for R′. One can then define

ρ(R, t_n) = R c(r, t_n)    (16)

(and similarly for ρ_prop(R, t_n) and ρ_source(R, t_n)) so that the propagator integral (14) becomes

ρ_prop(R, t_{n+1}) = ∫₀^∞ Ω(R, R′) ρ(R′, t_n) dR′    (17)

with the weight function

Ω(R, R′) = (1/√π) [exp(−(R − R′)²) − exp(−(R + R′)²)].    (18)

For the source concentration,

ρ_source(R, t_{n+1}) = χ_n F_n erfc(R),    (19)

where the flux factor F_n is defined as

F_n = j_n / (4πD √(4DΔt)).    (20)

Then,

ρ(R, t_{n+1}) = ∫₀^∞ Ω(R, R′) ρ(R′, t_n) dR′ + χ_n F_n erfc(R).    (21)

Propagation algorithm
The integral in Eq (17) may be evaluated in a number of different ways (e.g., the Fast Gauss Transform [10]). The alternative approach taken here is to find a set of Gauss-diffusion quadrature (GDQ) points {x_α}_{α=1}^N with corresponding weights {w_α}_{α=1}^N that both depend on R. Then,

ρ(R, t_{n+1}) ≈ Σ_{α=1}^N ρ(x_α(R), t_n) w_α(R) + χ_n F_n erfc(R).    (23)

This, however, leads to an infinite nesting of grid point evaluations: each ρ(x_α(R), t_n) must itself be evaluated from the ρ(x_β(x_α(R)), t_{n−1}), and so on. This is, of course, not desirable because we want a finite, well-defined set of GDQ points {x_α}. One can overcome this problem by defining a set of grid points {R_i}_{i=1}^I between which interpolation is used to evaluate the function at the GDQ points. Specifically, for each R_i, ρ(R_i, t_{n+1}) can be calculated from Eq (23) after the {x_α(R_i)} and {w_α(R_i)} have been calculated. The interpolation grid and the corresponding GDQ points and weights may be precalculated and saved.
The interpolation used here is a weighted sum of the interpolation points, so that

ρ(z, t_n) ≈ Σ_{j=1}^I f_j(z) ρ(R_j, t_n).

The interpolation weights f_j(z) depend on z because the R_j are nonuniformly spaced. By Eqs (17) and (23),

ρ(R_i, t_{n+1}) ≈ Σ_{j=1}^I W_ij ρ(R_j, t_n) + χ_n F_n erfc(R_i)

with the combined weights

W_ij = Σ_{α=1}^N w_α(R_i) f_j(x_α(R_i)).    (31)

In order to express the iteration process in matrix/vector notation, define the column vector ρ_n with components ρ(R_i, t_n) and the matrix W with elements W_ij. Then, for the source column vector s_n with components χ_n F_n erfc(R_i),

ρ_{n+1} = W ρ_n + s_n.    (34)

One algorithm for evolving ρ(R, t_n) in time then is:

1. Overhead. Compute the interpolation grid {R_i}, the GDQ points and weights at each R_i, and the interpolation weights; finally, compute the combined weights matrix W using Eq (31).

2. (optional) Perform an error-checking simulation; details are discussed in Section Error Analysis.
3. Simulation. Starting with ρ(R, 0) = c₀R for time t = 0,
a. evaluate the source state χ_n and flux j_n, possibly based on the concentrations ρ_n from the previous time step;
b. compute ρ_{n+1} from ρ_n using Eq (34).
The overhead is where all the serious programming work goes, specifically to compute the Gaussian quadrature, the interpolation grid, and the interpolation weights. That having been said, however, none of these steps are as difficult as they first appear to be. First, a Mathematica notebook that computes the Gaussian quadrature grid points and weights is available in the Supporting Information (S1 File). Second, programs like Mathematica provide function interpolation routines that can be used in the interpolation grid building strategy described in Interpolation Grid and Weights, thereby requiring little actual programming by the user. Lastly, Fornberg [11] provides an easy-to-implement algorithm for interpolation that, with one tweak, is very fast to compute and easily allows the user to change the interpolation order. The overhead calculations are generally fast, taking a few seconds and, if parallelization is used, less than 1 sec.
At each simulation step, the iteration (Eq (34)) requires a matrix/vector multiplication with the I × I matrix W. However, W is both small and sparse. W is small because I < 200 is generally sufficient for accurate results. This is because the interpolation grid is nonuniformly spaced and chosen based on knowledge of the exact solution of the diffusion problem, thereby a priori putting extra points where they are needed (Section Interpolation Grid and Weights). W is also sparse because only a small number of neighboring points are used in the Fornberg interpolation. In sample calculations, W ranged from 3% to 42% filled, depending on the number of GDQ and Fornberg points, as well as on the spacing of the interpolation grid points (specifically, the number of near and far points, as defined in Interpolation Grid and Weights). The sparsity and the small size of W make the propagation scheme very fast.
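To make the update step concrete, here is a toy version of the iteration of Eq (34) on a uniform grid, with Simpson's rule standing in for the GDQ-plus-Fornberg machinery; it assumes the weight function Ω(R, R′) = [exp(−(R − R′)²) − exp(−(R + R′)²)]/√π and sets F_n = 1 with the source always on. This is an illustration of the iteration's structure, not the optimized scheme of the paper.

```python
import math

h, I = 0.05, 601                       # uniform grid on [0, 30] (toy choice)
R = [i * h for i in range(I)]

def omega(x, y):                       # assumed diffusion weight function
    return (math.exp(-(x - y) ** 2) - math.exp(-(x + y) ** 2)) / math.sqrt(math.pi)

def simpson_w(j):                      # composite Simpson's-rule weights
    if j == 0 or j == I - 1:
        return h / 3.0
    return 4.0 * h / 3.0 if j % 2 else 2.0 * h / 3.0

W = [[omega(Ri, R[j]) * simpson_w(j) for j in range(I)] for Ri in R]
s = [math.erfc(Ri) for Ri in R]        # always-on source: chi_n = 1, F_n = 1

rho = [0.0] * I                        # rho(R, 0) = 0
for n in range(10):                    # ten applications of rho <- W rho + s
    rho = [sum(Wi[j] * rho[j] for j in range(I)) + si for Wi, si in zip(W, s)]
```

For an always-on source the exact profile telescopes to ρ(R, t_n)/F_n = erfc(R/√n), which this sketch reproduces closely after ten steps.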

Interpolation grid and weights
There are two components to the interpolation used in the propagation algorithm, namely choosing the points {R_i} and choosing an interpolation method. For the latter, we use a method by Fornberg [11] that uses polynomial interpolation between previously chosen points. This was chosen not only because it is easy to implement and quick to compute, but also because changing the interpolation order N_F (i.e., the order of the interpolating polynomial or, equivalently, the number of nearest-neighbor points around each R_i) is effortless for the user. The technical details are given in Appendix A.
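For a concrete picture, the sketch below computes interpolation weights in plain Lagrange form on the N_F + 1 grid points nearest the evaluation point. Fornberg's recursive algorithm produces the same zeroth-derivative weights more stably (and also handles derivatives); this stand-in only illustrates the weighted-sum structure, and the function names are ours.

```python
import math

def interp_weights(z, nodes):
    """Weights f_j(z) such that p(z) = sum_j f_j(z) * f(nodes[j])."""
    w = []
    for j, xj in enumerate(nodes):
        prod = 1.0
        for k, xk in enumerate(nodes):
            if k != j:
                prod *= (z - xk) / (xj - xk)
        w.append(prod)
    return w

def interpolate(z, grid, vals, nf=3):
    """Interpolate with the nf + 1 grid points nearest z (nf = order)."""
    idx = sorted(range(len(grid)), key=lambda i: abs(grid[i] - z))[:nf + 1]
    w = interp_weights(z, [grid[i] for i in idx])
    return sum(wj * vals[i] for wj, i in zip(w, idx))
```

With N_F = 3 the scheme reproduces any cubic exactly on a nonuniform grid, and the weights always sum to one.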
The interpolation grid should contain a minimal number of points, and therefore it would be best to have some knowledge of the mathematical structure of the solution ρ(R, t_n). Luckily, it is possible to derive an exact solution (see Appendix B). Specifically,

ρ(R, t_{n+1}) = Σ_{m=0}^{n} χ_{n−m} F_{n−m} e_m(R),    (36)

where

e_m(R) = erfc(R/√(m+1)) − erfc(R/√m)    (37)

(with e₀(R) = erfc(R)). While one can use this exact solution, it is computationally impractical because no work from time t_n can be reused for time t_{n+1}. Therefore, if T timesteps have elapsed, computing ρ(R, T) via Eq (36) takes O(T) operations per location R, and O(T²) operations over the whole simulation, which becomes slower than the propagation method proposed here after ~100 to 1000 timesteps. Eq (36) does, however, provide useful information for making the interpolation grid. Specifically, one can build an interpolation grid by gridding the functions e_m(R) for various m and taking the union of those grids. This will yield points that are located where they are needed most.
The first step is to find the largest possible distance R_max that could be needed in a simulation that ends at time T_max. This can be determined by considering the worst-case scenario where the source is on the entire time. R_max should be chosen so that the concentration there is below some chosen threshold ε; that is, R_max is the r_max that satisfies

(j_max / (4πD r_max)) erfc(r_max / √(4D T_max)) = ε    (38)

or, in non-dimensionalized variables,

F_max erfc(R_max / √(T_max/Δt)) = ε R_max,    (39)

where F_max corresponds to the largest possible flux j_max encountered during the simulation. The e_m(R) have appreciable structure only for R ≲ 10 (approximately). We therefore divide the interval [0, R_max] into two parts, the near (R ≤ 10) and the far (R > 10) intervals. We start with the same uniform grid for each m, but the far interval is divided uniformly on a logarithmic scale while the near interval is divided on a linear scale; that is, [0, 10] is divided uniformly in R while [log(10), log(R_max)] is divided uniformly in log R. This serves two purposes. First, it focuses the points where e₁(R) has the most structure (Fig 1) and needs to be resolved the best, namely in the near interval. Second, it keeps the number of grid points to a minimum. Since one can have R_max > 5000, using the logarithmic scale places relatively few points in the far interval. More points (e.g., from linearly dividing the far interval) are not necessary because the e_m(R) have little structure in the far interval that needs to be resolved (Fig 1). Also, since R_max increases with T_max, the number of grid points grows only logarithmically with T_max.
After dividing the near and far intervals uniformly, the next step is to interpolate the functions e_m(R) on these intervals separately. To do that, polynomial interpolation (usually third order) is used on each subinterval of the uniformly divided interval. Specifically, each subinterval is bisected and both e_m and the polynomial approximation are evaluated at this midpoint. Each interval is further bisected until e_m and the interpolation agree to within a specified tolerance. This is done for many m and the union of all these interpolation grids is taken. This does not add many points because each m starts with the same uniformly-spaced intervals that are then bisected. Some m (especially the small m) will have more bisection steps because they have more structure to resolve.
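A sketch of this grid-building strategy, using the assumed form e₁(R) = erfc(R/√2) − erfc(R) from Eq (37); the refinement test here uses linear interpolation at the midpoint (the paper uses third-order polynomials), and the grid sizes are illustrative.

```python
import math

def e1(R):
    """Assumed e_1(R) = erfc(R/sqrt(2)) - erfc(R)."""
    return math.erfc(R / math.sqrt(2.0)) - math.erfc(R)

def refine(a, b, f, tol, depth=0):
    """Left endpoints of the subintervals of [a, b], bisected until
    interpolation matches f at each midpoint to within tol."""
    m = 0.5 * (a + b)
    if depth > 20 or abs(0.5 * (f(a) + f(b)) - f(m)) < tol:
        return [a]
    return refine(a, m, f, tol, depth + 1) + refine(m, b, f, tol, depth + 1)

def build_grid(rmax, n_near=20, n_far=20, tol=1e-6):
    near = [10.0 * i / n_near for i in range(n_near)]        # linear on [0, 10)
    far = [10.0 ** (1.0 + i * (math.log10(rmax) - 1.0) / n_far)
           for i in range(n_far + 1)]                        # log on [10, rmax]
    coarse = near + far
    grid = []
    for a, b in zip(coarse, coarse[1:]):
        grid.extend(refine(a, b, e1, tol))
    grid.append(coarse[-1])
    return grid
```

Because e₁ is nearly zero for R > 10, the far intervals need no refinement, so the point count is dominated by the near interval, as the text describes.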

GDQ points
Gaussian quadrature integration methods are very efficient when numerically integrating a smooth function f(R′) against a weight function Ω(R′) over integration limits a and b by using a weighted sum:

∫_a^b Ω(R′) f(R′) dR′ ≈ Σ_{α=1}^N ω_α f(R_α).    (40)

The efficiency over standard integration methods (e.g., the trapezoidal rule) comes from being able to choose the locations R_α as well as the weights ω_α; standard integration methods prescribe the R_α, usually as uniformly-spaced points. With the appropriate choices of N Gaussian weights and points, a polynomial of degree 2N − 1 is integrated exactly [12]. In practice, even a small N gives very accurate results if f is a very smooth function like we have here (Eq (36) and Fig 1). For the diffusion problem, N between 10 and 20 works well (discussed in detail later).
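The 2N − 1 exactness property is easy to verify for a standard rule; below, the 3-point Gauss-Legendre rule (Ω = 1 on [−1, 1], nodes ±√(3/5) and 0 with weights 5/9, 8/9, 5/9) integrates a degree-5 polynomial exactly.

```python
import math

nodes = [-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0)]
weights = [5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0]

def quad(f):
    """Apply the 3-point Gauss-Legendre rule on [-1, 1]."""
    return sum(w * f(x) for w, x in zip(weights, nodes))

val = quad(lambda x: x ** 5 + x ** 4)   # exact integral over [-1, 1] is 2/5
```

A uniformly-spaced 3-point rule (e.g., Simpson's) is exact only through degree 3; choosing the nodes as well as the weights buys the extra orders.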
The theory of Gaussian quadratures is well-established (see, for example, Ref. [12]) and for many specific choices of Ω, a, and b there are standard techniques for computing the Gaussian weights and points for a given N. These are generally referred to as, for example, Gauss-Legendre (Ω = 1, a = −1, b = 1) and Gauss-Laguerre (Ω = e^{−R′}, a = 0, b = ∞) quadratures, and so in our case we refer to it as Gauss-diffusion quadrature, or GDQ. Computing a Gaussian quadrature for non-standard weight functions is possible [12], and Appendix C describes in detail how to do so for the diffusion weight function of Eq (18). A Mathematica notebook that computes the GDQ weights and points is included in the Supporting Information (S1 File).

Error analysis
There are several different ways to analyze the accuracy of the propagation technique. By using a concrete example, we next show that it is possible to quantify the error in the simulations using a standardized system. Moreover, that system can also be used to find simulation parameters that achieve a desired accuracy with the shortest computation time.

Summary of error analysis
Knowing the accuracy of a simulation method is always important, but there are additional reasons for quantifying the error of the propagation method. First, by using interpolation we are introducing errors into the calculation of ρ_n at every time step and using those results to compute ρ_{n+1}. It is therefore a real concern that errors accumulate and lead to first-order errors after the many timesteps required for a simulation. Second, when the flux source is randomly on, ρ_n(R) can have local maxima and minima that must be resolved accurately. If they are not, then that error may lead to spurious results at later times. With the example in the next section we show that neither of these occurs when the simulation parameters are chosen well. For the vast majority of simulation parameter sets (i.e., N, N_F, I_near, and I_far) the propagation technique gives very accurate answers, although for some combinations of parameters the ρ_n can diverge. It is therefore important to run these kinds of test simulations.
There are two different checks one can do. The first is a simulation where the flux source is turned on and off randomly and the exact solution in Eq (36) is used to check the result. The second is a simulation where the flux source is always on with a constant flux and the exact solution in Eq (4) is used. Both have pros and cons. With the randomly-on source simulation only relatively small total simulation times T can be checked because the time to calculate the exact solution grows as T², as discussed above. However, with this technique one can explicitly see whether the fine details of the ρ_n(R) profile are reproduced. On the other hand, the advantage of the always-on simulation test is that simulations of arbitrary length can be checked to guard against error accumulation. In the next section, it is shown that the always-on test suffices because it bounds the error; its error is always larger than in the random-on test. One can therefore always run this test to assess simulation accuracy. Moreover, the propagation technique is fast enough that one can test which combination of simulation parameters gives a desired level of accuracy in the fastest time, thereby ensuring accurate results in minimal time. This is important for applications like calcium-induced calcium release where many similar simulations must be done.

Example
To illustrate this, we consider a specific example and examine the errors produced by different parameter choices. Specifically, the point source had a flux of 10⁷ particles per second, which is of the order of the flux through an ion channel (approximately 1.6 pA of current for a univalent ion). The initial concentration c₀ was 0. The time step was Δt = 10⁻⁷ seconds and the diffusion coefficient was 10⁻⁹ m²/s, which gives F_n = 6.61 × 10⁻⁵ M for all time steps. A maximum of 10⁶ time steps were considered for all simulations. Solving Eq (39) with ε = 10⁻¹⁷ gives R_max = 5148.
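The stated flux factor can be reproduced from these numbers, assuming the definition F_n = j_n/(4πD√(4DΔt)) (the definition implied by applying the non-dimensionalization ρ = Rc to the source term) and converting from particles/m³ to molar with Avogadro's number.

```python
import math

N_A = 6.02214076e23     # Avogadro's number (1/mol)
j = 1.0e7               # source flux (particles/s)
D = 1.0e-9              # diffusion coefficient (m^2/s)
dt = 1.0e-7             # time step (s)

# assumed flux-factor definition F_n = j / (4*pi*D*sqrt(4*D*dt))
F_per_m3 = j / (4.0 * math.pi * D * math.sqrt(4.0 * D * dt))  # particles/m^3
F_molar = F_per_m3 / N_A / 1000.0   # mol/L; approx 6.61e-5 M as quoted
```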
All the errors shown here are the maximum absolute difference between the simulated result ρ_sim and the exact result ρ_exact over all R, scaled by F_n:

ε_sim(t) = max_R |ρ_sim(R, t) − ρ_exact(R, t)| / F_n.    (41)

With this scaling, the exact result is independent of the flux and diffusion coefficient (see Eq (36)). Also, ρ_exact(R, t)/F_n ≤ 1 for all R, and, because it is O(1), −log₁₀(ε_sim) is approximately the number of significant figures the simulation has correct. It is important to note, however, that this may not be the error metric of choice for a given problem. In this example, for instance, a relatively large error of ε_sim = 10⁻³ corresponds to a concentration error of 66.1 nM (F_n ε_sim), and if nothing in the problem is sensitive to nanomolar concentrations, then a stricter error requirement like ε_sim = 10⁻⁵ is overkill and probably a waste of computational time and effort.
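Written out directly (the function name is ours), the metric and its significant-figures reading are:

```python
import math

def eps_sim(rho_sim, rho_exact, F_n):
    """Scaled maximum absolute error over the supplied grid values
    (the metric of Eq (41))."""
    return max(abs(s - e) for s, e in zip(rho_sim, rho_exact)) / F_n

# because rho_exact/F_n is O(1), -log10(eps_sim) estimates the number of
# correct significant figures
```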
As a first test, we consider different N (the number of Gaussian integration points) and N_F (the number of Fornberg interpolation points) using 173 interpolation grid points (80 near points for R ≤ 10 and 73 far points for R > 10). The results for simulations lasting 10⁴ time steps are shown in Fig 2. The flux source was always on. In general, as N increases, the error decreases. However, the same is not always true for N_F. Usually the error decreases for N_F ≤ 8, but for larger N_F the simulated ρ can diverge (e.g., values > 10¹⁰). This is not unexpected, however, as high interpolation order does not necessarily produce high interpolation accuracy. For example, a constant function is well-approximated by linear interpolation (N_F = 1), but it can be poorly approximated by a high-order polynomial fit through slightly perturbed values, which must oscillate between the grid points in order to pass through those values. This is, in fact, what occurs here; the interpolation oscillates far from the flux source where ρ is constant (at 0). As the simulation continues, these oscillations are either damped out for the best results (purple in Fig 2) or amplified for the diverged results (red) (data not shown). If divergence does occur, the user should be aware that small changes in N and N_F can alleviate this problem (Fig 2).
With N = 20 and N_F = 9, which produce accurate results, as a second test we consider the number of interpolation grid points I. More specifically, we consider how the number of near points I_near (for 0 ≤ R ≤ 10, as discussed in Interpolation Grid and Weights) and the number of far points I_far (R > 10) affect the error. Table 1 lists the six cases that were considered, and Fig 3 shows ε_sim(t) for the first five cases; the results of Simulation 6 were identical to those of Simulation 5, except that in Fig 3A it did not have the upswing in error near the 10⁶th time step that occurs in Simulation 5 (and overlaps with Simulation 2).

We first analyze the case where the flux source is on at all times (Fig 3A). To make sense of the results, consider Simulations 1, 2, and 3, which differ in the number of far points. At the beginning of the simulation, they overlap. At later times, they separate and eventually Simulations 1 and 2 begin to diverge. This indicates that increasing the number of far points increases accuracy at later times. This is consistent with the results of Simulations 4, 5, and 6. Again, they overlap at early times and separate according to how many far points there are; the more far points, the smaller the late-time error. Conversely, the more near points, the smaller the early-time error. For example, compare Simulations 1 and 4 and Simulations 2 and 5. Intuitively, these results make sense: early in the simulation ρ is changing near the source, so the near interval is most important, while late in the simulation ρ is changing in the far interval.
In the case just considered, the flux source is always on. However, the propagation technique is designed for fluctuating sources. Moreover, the always-on case does not require the specially-constructed interpolation grid. This is shown in Fig 4, where ρ profiles at different times are shown for a randomly-on flux source. For comparison, the dashed line shows the always-on result at the same time step as the red solid curve. The random-on profiles can have multiple local maxima and inflection points, something not seen for the monotonically decaying always-on profile. Because of the more complicated ρ profiles, it is possible that the error for the randomly-on case is much different than for the always-on case. For example, not resolving those maxima or minima at one time can propagate an error to later times. Fig 3B shows that this is not the case. In fact, for all six cases considered, the maximum error is always less for the random-on case than for the always-on case. This is very useful because the exact answer is much easier to compute for the always-on case; for the random-on case only about 5 × 10⁵ time steps can be computed in a reasonable amount of time. Bounding the error is also important because it makes it always possible to pre-compute the error; if the details of how the flux source turns on or off are unimportant for measuring simulation error, then one can use the always-on case.

To see how this compares to classical methods like BTCS and Crank-Nicolson, one must first consider their mathematical structure. Each discretizes both the time and space derivatives in Eq (1), almost always with uniform spacing Δt and Δr for the time and space coordinates, respectively. This creates a system of linear equations to be solved at each time step with a tridiagonal matrix [12]. Since tridiagonal systems can be solved with O(N_r) operations (N_r is the number of spatial grid points), to be comparable to the propagation method, N_r should be around 1000.
⌈L/Δr⌉ = 20592 grid points would be needed. Even for such a large system, a simple Crank-Nicolson implementation produced max_t{log₁₀ ε_sim(t)} > −4.1, significantly worse than many propagation implementations with suboptimal parameters (Fig 5), while requiring 6 to 45 times as many nonzero matrix entries.
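For reference, the O(N_r) tridiagonal solve at the heart of a BTCS or Crank-Nicolson time step is the Thomas algorithm; a standard textbook sketch (not code from the paper):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system in O(n): a = sub-diagonal, b = main
    diagonal, c = super-diagonal, d = right-hand side (a[0], c[-1] unused)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]        # forward elimination
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):         # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Each time step of such an implicit method is one `thomas` call on an N_r-sized system, versus one sparse matrix/vector multiplication for the propagation method.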

Extension to reaction-diffusion
The focus so far has been on the diffusion equation with a point source (Eq (1)). However, the work done so far can also be extended to include a simple model where chemical reactions remove the diffusing particles. If we assume that these buffer molecules are present at high concentrations and do not change in time or space, then Eq (1) becomes [13]

∂c(r, t)/∂t = (D/r²) ∂/∂r [r² ∂c(r, t)/∂r] + s(r, t) + k₋B − k₊ b c(r, t),    (42)

where k₋ is the off-rate of the buffer and k₊ is the on-rate. Since we assume the buffer is present at high concentration, the free buffer concentration b and the bound buffer concentration B are known constants (see, for example, [13]). Eq (6) then has two new source and sink terms whose form is the same as Eq (8).

[Fig 5: Each point is for a different parameter set (N, N_F, I_near, and I_far) for a long simulation of 10⁶ time steps for the example described in the main text with the flux source always on. The arrow points to the optimal balance of high accuracy and computation speed. Errors larger than 1 were truncated to 1 to make the graph more readable.]

For the source term from the diffusant unbinding from the buffer,
À3=2 exp À jr À r 0 j 2 4DDt cðr 0 ; t n Þdr 0 ð47Þ where the intermediate equation used a one-point quadrature for the time integral and the final result follows from Eq (7). All of the results of the previous sections then carry forward directly: and Eq (34) becomes Conclusion An integration quadrature for the propagation integral of the spherically-symmetric diffusion equation from a time-dependent point source was developed. This integral (Eq (17)) was evaluated using a new Gauss-diffusion quadrature and a specialized interpolation grid was used to find the values at the Gaussian quadrature points for the next integration. This scheme then balances accuracy with speed by using a small number of integration and interpolation grid points to achieve high accuracy over many time steps. The analysis of this propagation technique shows that it has a number of positive attributes: 1. It works with a small number of grid points. Even with just 115 interpolation grid points one can get very accurate results even for long simulations (e.g., 10 6 time steps in Fig 5) with a propagation matrix with < 1400 nonzero entries.
2. The update step is a simple sparse-matrix/vector multiplication.
3. The simulation error is quantifiable before any "real" simulations are done.
4. Simulation error and computation time can be optimized jointly, minimizing both at once.

5. The time step Δt can be arbitrarily large, being defined only by the rate at which the flux source turns on and off.
6. It is unconditionally stable.
7. It can easily incorporate nonlinear feedback into the flux source, for example, from the concentration profile calculated at previous time steps.
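These properties all rest on propagating the profile through the free-diffusion Gaussian kernel (cf. Eq (47)). As a sanity check of that kernel, independent of the GDQ machinery, a 1D analogue can be propagated numerically: a Gaussian of variance s² should become one of variance s² + 2DΔt after one step. All grid and parameter values below are illustrative, not taken from the paper.

```python
import math

# 1D analogue of the propagation integral: convolve the old profile with
# G(x, x') = (4 pi D dt)^(-1/2) exp(-(x - x')^2 / (4 D dt)).
D, dt, s = 1.0, 0.1, 0.5
L, n = 12.0, 1201                 # truncated domain [-L/2, L/2], grid spacing h
h = L / (n - 1)
xs = [-L / 2 + i * h for i in range(n)]

# Initial profile: unit-mass Gaussian of standard deviation s.
c0 = [math.exp(-x * x / (2 * s * s)) / math.sqrt(2 * math.pi * s * s) for x in xs]

def propagate(c, x_eval):
    """One propagation step evaluated at a single point x_eval."""
    pref = 1.0 / math.sqrt(4 * math.pi * D * dt)
    return sum(pref * math.exp(-(x_eval - xp) ** 2 / (4 * D * dt)) * cv * h
               for xp, cv in zip(xs, c))

s2_new = s * s + 2 * D * dt                      # expected variance after one step
exact0 = 1.0 / math.sqrt(2 * math.pi * s2_new)   # exact peak value at x = 0
print(propagate(c0, 0.0), exact0)                # the two should agree closely
```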
The algorithm developed here is an alternative to more traditional finite difference or finite element approaches. It differs in that it computes the concentration profile at the next time step using a simple sparse-matrix/vector multiplication rather than the (sparse) matrix solve needed for unconditionally-stable implicit-time methods. Differences in computation speed then depend on the number of grid points used in the traditional approaches and the specifics of the matrix solution method, making it difficult to compare the methods head-to-head. However, if uniform spatial discretization is used in classical methods like BTCS or Crank-Nicolson, they produce matrices with significantly more nonzero entries than the propagation method. The matrix/vector multiplication of the propagation method is similar to that of conditionally-stable explicit-time methods, except that those methods require small time steps to be convergent, while the time step for the propagation method is determined solely by the timescale on which the flux source turns on and off. The propagation method is thus meant to be an alternative technique that minimizes the number of points (and nonzero matrix entries) while retaining high accuracy.
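The contrast drawn here can be made concrete: one propagation step is a sparse matrix/vector multiply plus an optional source injection, with no linear solve. The sketch below uses a toy banded matrix, a toy source profile, and a random on/off flux; the matrix entries, grid size, and profiles are placeholders, not values from the paper.

```python
import math, random

def sparse_matvec(rows, x):
    """y_i = sum_j A_ij x_j, with A stored row-wise as rows[i] = [(j, A_ij), ...]."""
    return [sum(v * x[j] for j, v in row) for row in rows]

def step(rows, rho, flux_on, w, E):
    """One time step: diffuse the old profile via the propagation matrix,
    then add the source profile only if the flux source was on this interval."""
    rho = sparse_matvec(rows, rho)
    if flux_on:
        rho = [r + w * e for r, e in zip(rho, E)]
    return rho

# Toy 5-point grid with a banded propagation matrix (row sums < 1, so the
# update is contractive and the iteration stays bounded for any number of steps).
n = 5
rows = [[(j, 0.25) for j in range(max(0, i - 1), min(n, i + 2))]
        for i in range(n)]
E = [math.erfc(0.5 * i) for i in range(n)]   # placeholder source profile
rho = [0.0] * n

random.seed(0)
for _ in range(100):                          # randomly-on flux source
    rho = step(rows, rho, flux_on=random.random() < 0.5, w=0.1, E=E)
print(rho)
```

The per-step cost is proportional to the number of nonzero matrix entries, which is the quantity the propagation method tries to minimize.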
next ν
    set a_{n,n} = −(c₁/c₂)(R_{n−1} − x₀) a_{n−1,n−1}
    set c₁ = c₂
next n
set A(x₀)_n = a_{N_F−1,n}

Appendix B: Derivation of the exact solution

Here, a brief derivation of the exact solution to the propagation integral evolution is given, specifically that where E(R) = erfc(R). While this result is almost surely not new, it was derived independently by the author in the following way. Define … and note that the weight function of Eq (19) has the property

W(R, −R′) = W(−R, R′) = −W(R, R′).    (57)

Then

ρ_n(R) = (1/2) …

Taking the Fourier transform (denoted with ˜ and F_R to be explicit about the variable R), we get

ρ̃_n(ω) = (1/2) …

Therefore, with * denoting the complex conjugate,

ρ̃_n(ω) = (1/2) e^(−ω²/4) (ρ̃_{n−1}(ω) − ρ̃_{n−1}(ω)*) + w_n F_n Ẽ(ω).    (61)

Assuming the initial concentration is 0, we have … Using Eq (62) repeatedly, we find that

Im(ρ̃_n(ω)) = Σ_{m=0}^{n} w_{n−m} F_{n−m} e^(−mω²/4) f_0(ω).    (66)

Using the fact that …, taking the inverse Fourier transform of Eq (62) gives

ρ_n(R) = F^(−1)[i e^(−ω²/4) Im(ρ̃_{n−1}(ω))] + w_n F_n E(R)    (68)

which is the same as Eq (36).
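The passage from the anti-Hermitian combination in Eq (61) to the imaginary part in Eq (68) uses only a standard complex identity; restated in LaTeX (no new material, just the implicit intermediate step):

```latex
% Since z - z^* = 2i\,\mathrm{Im}(z) for any complex z,
\frac{1}{2}\,e^{-\omega^2/4}\left(\tilde\rho_{n-1}(\omega)
    - \tilde\rho_{n-1}(\omega)^*\right)
  = i\,e^{-\omega^2/4}\,\mathrm{Im}\!\left(\tilde\rho_{n-1}(\omega)\right),
% so the recursion can be inverse-transformed term by term:
\rho_n(R) = \mathcal{F}^{-1}\!\left[\,i\,e^{-\omega^2/4}\,
    \mathrm{Im}\!\left(\tilde\rho_{n-1}(\omega)\right)\right]
  + w_n F_n\, E(R).
```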
for the inner product. Note that because the integral's weight function depends on R, the coefficients depend on R as well.
To determine the coefficients a_j and b_j, we need the moments of the weight function:

M_n ≡ ∫₀^∞ (R′)^n W(R, R′) dR′    (77)

where F(α, γ; z) is the confluent hypergeometric function, which is equal to the generalized hypergeometric series ₁F₁(α; γ; z) [14]. This function has the recurrence relationship [14]

F(α+1, γ; z) = ((z + 2α − γ)/α) F(α, γ; z) + ((γ − α)/α) F(α−1, γ; z)    (79)

that will allow us to evaluate the moments efficiently. From the structure of this recurrence relationship, it is easiest to break the moments into even n and odd n. For even n = 2k (k ≥ 1) with α = n/2 = k, we have … This gives a procedure for increasing k, starting with … (83). Then, the Gaussian grid points are the eigenvalues (i.e., x_α = λ_α) and the weights are w_α = M₀ v_{α,0}², where v_{α,0} is the first element of the normalized eigenvector v_α. From a theoretical point of view this is straightforward. From a computational point of view, however, it is important to note that the inner product ratios that define the coefficients a_j and b_j in Eqs (74) and (75) must be computed with extremely high precision (e.g., 100 digits or more of working precision) or round-off errors accumulate quickly, even for relatively small N. Moreover, the matrix in Eq (109) is extremely ill-conditioned. With similarly high working precision, however, finding the eigenvalues and eigenvectors is fast and effective. Lastly, it is noted that the inner products can be computed from the moments: the coefficients of p_j(x)² follow from those of p_j by a list convolution, and a dot product with the vector of moments M_n then gives the inner product.
This procedure gives GDQ points {x_α(R)}_{α=1}^N and weights {w_α(R)}_{α=1}^N. Note that while the matrix and inner product operations must be done with many digits of accuracy, once the GDQ points and weights are calculated they can be truncated to machine precision for the diffusion calculations. A Mathematica notebook that computes the GDQ points and weights is available in the Supporting Information (S1 File).
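To illustrate the moment-based construction with an ordinary weight function (w(x) = 1 on [0, 1], a stand-in for the paper's R-dependent W(R, R′)), the sketch below builds a 2-point Gaussian rule from exact rational moments: Gram-Schmidt on monomials gives the recurrence coefficients, and the 2×2 Jacobi matrix's eigenvalues and eigenvector first components give the points and weights. Exact arithmetic here plays the role of the high working precision noted above.

```python
from fractions import Fraction
import math

# Moments of w(x) = 1 on [0, 1]: M_n = 1/(n+1).
M = [Fraction(1, n + 1) for n in range(5)]

# <x^i, x^j> = M_{i+j}, so Gram-Schmidt on monomials can be done exactly.
# p1(x) = x - a0 with a0 = M1/M0.
a0 = M[1] / M[0]
p1p1 = M[2] - 2 * a0 * M[1] + a0**2 * M[0]    # <p1, p1>
b1 = p1p1 / M[0]                              # off-diagonal recurrence coefficient
xp1p1 = M[3] - 2 * a0 * M[2] + a0**2 * M[1]   # <x p1, p1>
a1 = xp1p1 / p1p1

# Jacobi matrix [[a0, sqrt(b1)], [sqrt(b1), a1]]: eigenvalues are the Gauss
# points; weights are w = M0 * v0^2 with v0 the eigenvector's first component.
c = math.sqrt(b1)
mean = (a0 + a1) / 2
half = math.sqrt(((a0 - a1) / 2) ** 2 + c * c)
lam = [float(mean) - half, float(mean) + half]
wts = []
for ev in lam:
    v = (c, ev - float(a0))                   # unnormalized eigenvector
    wts.append(float(M[0]) * v[0] ** 2 / (v[0] ** 2 + v[1] ** 2))

print(lam)   # Gauss-Legendre points on [0, 1]: ~0.2113 and ~0.7887
print(wts)   # both weights 0.5
```

For the paper's weight function the same steps apply, but with the moments of Eq (77) and, as noted, very high working precision in place of exact rationals.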
An alternative approach to calculating the GDQ points and weights is the procedure described by Golub and Welsch [15], but ill-conditioning will still be an issue.
Supporting Information

S1 File. Mathematica notebook containing code to compute the GDQ points and weights as described in Appendix C. (NB)