Abstract
This study develops a novel adaptive hybrid quadrature scheme that combines Simpson’s 1/3 rule with Gauss-Legendre quadrature to overcome classical difficulties in numerical integration. Classical methods often struggle to balance computational cost against precision, especially for functions whose behavior varies strongly across the domain. We address this with an adaptation mechanism that dynamically reallocates computational effort to localized function features. We rigorously analyse the scheme’s convergence and prove optimal error estimates with fourth-order accuracy and a substantial performance improvement. The hybrid error estimation methodology exploits the discrepancy between polynomial interpolation and orthogonal polynomial approximation, providing an effective device for local error evaluation. Extensive numerical results indicate that the proposed scheme consistently outperforms several existing schemes, with significant reductions in function evaluations at comparable accuracy across diverse test functions. The framework reduces computational cost by up to 62% relative to traditional adaptive methods while maintaining similar precision. We also examine implementation details, complexity analysis, and practical deployment factors. This work is particularly relevant for scientific computing applications that require high-precision integration in computational physics, engineering simulations, and financial mathematics.
Citation: Asgedom AA, Kefela YY (2026) An adaptive hybrid quadrature scheme: Combining Simpson’s rule and Gaussian quadrature for enhanced numerical integration. PLoS One 21(2): e0335582. https://doi.org/10.1371/journal.pone.0335582
Editor: Mohammadreza Hadizadeh, Central State University, UNITED STATES OF AMERICA
Received: October 13, 2025; Accepted: February 2, 2026; Published: February 23, 2026
Copyright: © 2026 Asgedom, Kefela. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All data underlying the findings described in this manuscript are fully available without restriction from the Zenodo repository: https://doi.org/10.5281/zenodo.18216672. The repository contains numerical data for all tables and figures presented in the study: Table data including convergence analysis (Table 1), computational efficiency (Table 2), and hybrid methods comparison (Table 3) are provided in MATLAB (.mat) and CSV formats. Figure data for all 7 figures are provided, including error distributions (Fig 1), error estimation analysis (Fig 2), node distributions (Fig 3), robustness analysis (Fig 4), node density analysis (Fig 5), multi-metric performance evaluation (Fig 6), and scalability analysis (Fig 7). The complete MATLAB implementation of the Adaptive Hybrid Quadrature Scheme (AHQS) and scripts to regenerate all results are also included. All files are accessible via the permanent DOI: 10.5281/zenodo.18216672.
Funding: The author(s) received no specific funding for this work.
Competing interests: The authors have declared that no competing interests exist.
1 Introduction
Numerical integration remains fundamentally important throughout computational mathematics, with applications reaching into engineering, physics, finance, and data science [1,2]. The essential problem involves approximating definite integrals when analytical solutions prove unavailable. Classical Newton-Cotes formulas, such as Simpson’s rule, offer simplicity but limited efficiency for functions with localized complexity [3,4].
Gaussian quadrature methods, pioneered by Carl Friedrich Gauss, achieve exponential convergence for smooth functions through optimal node selection [5]. However, purely Gaussian approaches struggle with adaptive error estimation for functions containing singularities or rapid oscillations [6].
Adaptive quadrature strategies address these limitations by dynamically allocating computational resources. While contemporary methods have evolved from early Richardson-extrapolation approaches [7], many remain constrained by conservative error estimates that incur unnecessary overhead [8]. Recent hybrid frameworks, such as those combining Newton-Cotes and Gaussian rules [9], and global reconstruction techniques like mock-Chebyshev constrained least squares [10,11], demonstrate promising alternatives. Yet a key challenge persists: designing an efficient switching criterion that optimally selects between robust low-order and efficient high-order rules based on local integrand behavior.
This paper introduces an Adaptive Hybrid Quadrature Scheme (AHQS) that systematically combines Simpson’s 1/3 rule with Gauss-Legendre quadrature through a novel, cost-aware decision function. Our contributions are: (1) a mathematically defined hybrid error estimator and local efficiency index; (2) a clear adaptive algorithm (Algorithm 1) with rigorously justified parameters; (3) comprehensive numerical validation against standard adaptive routines and recent hybrid methods, demonstrating consistent reduction in function evaluations for integrands with localized irregularities.
2 Theoretical framework
2.1 Mathematical preliminaries
We begin by establishing the mathematical foundation for our hybrid approach. Consider the definite integral
$$I(f) = \int_a^b f(x)\,dx,$$
where $f:[a,b]\to\mathbb{R}$ represents a sufficiently smooth function. Our methodology builds upon two classical quadrature techniques with complementary properties [4,5].
Definition 2.1 (Composite Simpson’s 1/3 Rule). For an interval [a,b] partitioned into n equal subintervals (n even) with spacing $h = (b-a)/n$, the composite Simpson’s rule approximation becomes
$$S_n(f) = \frac{h}{3}\Big[f(x_0) + 4\!\!\sum_{i\ \mathrm{odd}}\!\! f(x_i) + 2\!\!\sum_{\substack{i\ \mathrm{even}\\ 0<i<n}}\!\! f(x_i) + f(x_n)\Big],$$
with error term
$$I(f) - S_n(f) = -\frac{(b-a)\,h^4}{180}\, f^{(4)}(\xi), \qquad \xi \in (a,b),$$
provided $f \in C^4[a,b]$. This method exhibits fourth-order convergence and works particularly well for functions with moderate smoothness [1].
Definition 2.2 (Gauss-Legendre Quadrature). The m-point Gauss-Legendre quadrature rule on [a,b] transforms to the standard interval [–1,1]:
$$G_m(f) = \frac{b-a}{2} \sum_{i=1}^{m} w_i\, f\!\left(\frac{a+b}{2} + \frac{b-a}{2}\, t_i\right),$$
where the $t_i$ denote roots of the Legendre polynomial $P_m(t)$ and the $w_i$ represent the corresponding weights:
$$w_i = \frac{2}{(1 - t_i^2)\,[P_m'(t_i)]^2}.$$
For the 2-point rule used extensively here, $t_{1,2} = \mp 1/\sqrt{3}$ with $w_{1,2} = 1$. The error term for 2-point Gauss-Legendre quadrature on an interval of length $h = b-a$ is:
$$I(f) - G_2(f) = \frac{h^5}{4320}\, f^{(4)}(\eta), \qquad \eta \in (a,b).$$
This method achieves fourth-order accuracy with only two function evaluations per subinterval, representing optimal efficiency for polynomial exactness [2].
Definition 2.3 (Hybrid Error Estimator). On a subinterval $I_k$ of length $h_k$, let $S_k(f)$ and $G_k(f)$ be the approximations from Definition 2.1 (with n = 2) and Definition 2.2 (with m = 2), respectively. The hybrid error estimator is defined as
$$E_k(f) = |S_k(f) - G_k(f)|.$$
This estimator leverages the discrepancy between a low-order Newton–Cotes rule and a Gaussian rule of equal order to locally gauge the approximation error.
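As an illustration, the two rules and the estimator of Definition 2.3 take only a few lines. The sketch below is in Python rather than the paper’s MATLAB, and the function names are ours:

```python
import math

def simpson(f, a, b):
    # Simpson's 1/3 rule on [a, b] (Definition 2.1 with n = 2)
    c = 0.5 * (a + b)
    return (b - a) / 6.0 * (f(a) + 4.0 * f(c) + f(b))

def gauss2(f, a, b):
    # 2-point Gauss-Legendre rule on [a, b] (Definition 2.2 with m = 2)
    c = 0.5 * (a + b)
    r = 0.5 * (b - a) / math.sqrt(3.0)  # nodes at midpoint -/+ h/(2*sqrt(3))
    return 0.5 * (b - a) * (f(c - r) + f(c + r))

def hybrid_error(f, a, b):
    # E_k(f): discrepancy between the two fourth-order rules
    return abs(simpson(f, a, b) - gauss2(f, a, b))
```

Both rules are exact for cubics, so the estimator vanishes (to rounding) for $f(x)=x^3$, while for a generic smooth integrand such as $e^x$ it returns a small positive value proportional to $h^5$.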
Definition 2.4 (Local Efficiency Index). Let $I_k(f)$ denote the true integral over $I_k$. The local efficiency index of the hybrid estimator on $I_k$ is defined as
$$\theta_k = \frac{|I_k(f) - G_k(f)|}{E_k(f)}.$$
An estimator is efficient if $\theta_k$ tends to a fixed positive constant as $h_k \to 0$ for smooth $f$, and reliable if $\theta_k \le 1$ for all $k$.
2.2 Hybrid error estimation theory
The core innovation of our approach lies in utilizing the discrepancy between Simpson’s rule and Gauss-Legendre quadrature as a robust error indicator. This cross-method comparison provides a more reliable estimate of local error than traditional approaches based on nested rules or Richardson extrapolation [12,13].
Theorem 2.5 (Hybrid Error Bound). For $f \in C^4[a,b]$, let $I_S$ and $I_G$ represent the Simpson and 2-point Gauss approximations on a subinterval of length h. The hybrid error estimate
$$E = |I_S - I_G|$$
satisfies:
$$E = \frac{h^5}{1728}\,|f^{(4)}(\xi)| + O(h^6), \qquad \xi \in (a,b).$$
Moreover, this error estimate provides an asymptotically exact approximation of the local error, up to a fixed constant, for sufficiently smooth functions.
Proof: The Simpson error expansion gives [4]:
$$I - I_S = -\frac{h^5}{2880}\, f^{(4)}(\xi_1) + O(h^7),$$
while the 2-point Gauss error expansion is [5]:
$$I - I_G = \frac{h^5}{4320}\, f^{(4)}(\xi_2) + O(h^7).$$
Thus, the hybrid error estimate becomes:
$$E = |I_S - I_G| = \big|(I - I_G) - (I - I_S)\big| = \left(\frac{1}{2880} + \frac{1}{4320}\right) h^5\, |f^{(4)}(\xi)| + O(h^6) = \frac{h^5}{1728}\,|f^{(4)}(\xi)| + O(h^6).$$
For asymptotic exactness up to a fixed constant, observe that as $h \to 0$ both $f^{(4)}(\xi_1)$ and $f^{(4)}(\xi_2)$ tend to $f^{(4)}(x_c)$ at the subinterval midpoint $x_c$, and:
$$\frac{|I - I_G|}{E} \to \frac{1/4320}{1/1728} = 0.4,$$
completing the proof. □
Lemma 2.6 (Error Estimation Efficiency). The hybrid error estimate provides a reliable indicator with efficiency bound
$$|I - I_G| \le 0.4\, E + O(h^6),$$
indicating that the true error is asymptotically at most 40% of the estimated error.

Proof: From the error expansions [7]:
$$\frac{|I - I_G|}{E} = \frac{h^5/4320}{h^5/1728} + O(h) = 0.4 + O(h).$$
This conservative bound ensures that our adaptive strategy errs on the side of caution, maintaining accuracy while optimizing efficiency [14]. □
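The 40% ratio is straightforward to verify numerically. The following Python sketch (illustrative; it assumes $f = e^x$ on a shrinking interval anchored at $x=1$) compares the true 2-point Gauss error against the hybrid estimate:

```python
import math

def simpson(f, a, b):
    # Simpson's 1/3 rule on [a, b]
    c = 0.5 * (a + b)
    return (b - a) / 6.0 * (f(a) + 4.0 * f(c) + f(b))

def gauss2(f, a, b):
    # 2-point Gauss-Legendre rule on [a, b]
    c = 0.5 * (a + b)
    r = 0.5 * (b - a) / math.sqrt(3.0)
    return 0.5 * (b - a) * (f(c - r) + f(c + r))

def efficiency_ratio(h):
    # ratio |I - I_G| / |I_S - I_G| on [1, 1+h] for f = exp
    a, b = 1.0, 1.0 + h
    exact = math.exp(b) - math.exp(a)
    S, G = simpson(math.exp, a, b), gauss2(math.exp, a, b)
    return abs(exact - G) / abs(S - G)

# the ratio approaches 0.4 as h shrinks, matching Lemma 2.6
ratios = [efficiency_ratio(h) for h in (0.4, 0.2, 0.1)]
```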
Theorem 2.7 (Global Error Bound). For $f \in C^4[a,b]$, the global error of the adaptive hybrid quadrature satisfies
$$|I(f) - Q(f)| \le \frac{(b-a)\, h_{\max}^4}{4320} \max_{x \in [a,b]} |f^{(4)}(x)|,$$
where $h_{\max}$ is the maximum subinterval size used in the adaptive refinement.

Proof: Let $Q(f) = \sum_{k=1}^{N} G_k(f)$ be the composite approximation over N subintervals. The global error is bounded by
$$|I(f) - Q(f)| \le \sum_{k=1}^{N} |I_k(f) - G_k(f)| \le \sum_{k=1}^{N} \frac{h_k^5}{4320} \max |f^{(4)}| \le \frac{h_{\max}^4}{4320} \max |f^{(4)}| \sum_{k=1}^{N} h_k,$$
since $h_k \le h_{\max}$ and $\sum_{k} h_k = b-a$. □
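This fourth-order global behavior can be observed directly with a composite 2-point Gauss rule (Python sketch; uniform panels and $f = e^x$ on $[0,1]$ are our illustrative choices):

```python
import math

def composite_gauss2(f, a, b, n):
    # composite 2-point Gauss-Legendre with n uniform panels
    h = (b - a) / n
    r = 0.5 * h / math.sqrt(3.0)
    total = 0.0
    for k in range(n):
        c = a + (k + 0.5) * h  # panel midpoint
        total += 0.5 * h * (f(c - r) + f(c + r))
    return total

exact = math.e - 1.0
err_n  = abs(composite_gauss2(math.exp, 0.0, 1.0, 8)  - exact)
err_2n = abs(composite_gauss2(math.exp, 0.0, 1.0, 16) - exact)
ratio = err_n / err_2n  # ~ 2**4 = 16 for a fourth-order method
```

Halving the panel width reduces the global error by roughly $2^4 = 16$, consistent with the $h_{\max}^4$ bound above.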
Theorem 2.8 (Convergence and Efficiency Properties). The Adaptive Hybrid Quadrature Scheme exhibits:
- Convergence: For any $f \in C[a,b]$, the method converges. For $f \in C^4[a,b]$, it achieves fourth-order convergence.
- Work-Precision Trade-off: To achieve tolerance ε, the method requires $O(\varepsilon^{-1/4})$ function evaluations, matching the theoretical optimum for fourth-order methods.
- Singularity Handling: For functions with isolated singularities, adaptive refinement automatically concentrates nodes near singular regions, maintaining algebraic convergence.

Proof Sketch: Convergence follows from polynomial approximation theory and Theorem 2.7. The complexity bound arises from the local error estimate $E_k = O(h_k^5)$. Singularity handling follows from adaptive refinement theory, where large error estimates trigger intensive subdivision. Detailed proofs are provided in Appendix A.1. □
Note: Additional theoretical results including optimal node selection, stability bounds, and detailed robustness analyses are provided in Appendix A for completeness.
3 Adaptive hybrid algorithm
3.1 Algorithm description
The proposed adaptive hybrid quadrature algorithm represents a significant advancement in numerical integration methodology by intelligently combining Simpson’s rule for error estimation with Gauss-Legendre quadrature for final approximation [12,13]. The algorithm employs a stack-based approach to manage subintervals, ensuring efficient memory usage while maintaining adaptive refinement structure.
Algorithm 1 Adaptive hybrid quadrature.
Require: Function f, interval [a,b], tolerance ε, maximum depth $d_{\max}$, safety factor θ
Ensure: Approximate integral Q, node distribution, comprehensive statistics
1: Initialize: $Q \leftarrow 0$, stack $\leftarrow \{[a,b,0]\}$, evaluations $\leftarrow 0$, intervals $\leftarrow 0$
2: Precompute: Gauss nodes $t_{1,2} = \mp 1/\sqrt{3}$, weights $w_{1,2} = 1$
3: while stack not empty do
4:  Pop interval $[a_i, b_i, \mathrm{depth}]$ from stack
5:  $h \leftarrow b_i - a_i$, intervals ← intervals + 1
6:  if depth $\ge d_{\max}$ or $h < h_{\min}$ then
7:   $Q \leftarrow Q + G_i$ (2-point Gauss value on $[a_i, b_i]$)
8:   evaluations ← evaluations + 2
9:   continue
10:  end if
11:  $c \leftarrow (a_i + b_i)/2$
12:  Evaluate $f(a_i)$, $f(c)$, $f(b_i)$
13:  $S_i \leftarrow \frac{h}{6}\,[f(a_i) + 4 f(c) + f(b_i)]$
14:  Gauss nodes $x_{1,2} \leftarrow c \mp \frac{h}{2\sqrt{3}}$; evaluate $f(x_1)$, $f(x_2)$
15:  $G_i \leftarrow \frac{h}{2}\,[f(x_1) + f(x_2)]$
16:  $E_i \leftarrow |S_i - G_i|$
17:  evaluations ← evaluations + 5
18:  $\tau_i \leftarrow \theta\, \varepsilon\, h/(b-a)$ (proportional local tolerance)
19:  if $E_i \le \tau_i$ then
20:   $Q \leftarrow Q + G_i$
21:  else
22:   Push $[a_i, c, \mathrm{depth}+1]$ and $[c, b_i, \mathrm{depth}+1]$ onto stack
23:  end if
24: end while
25: Compute statistics: efficiency, node distribution, error estimates
26: return Q, evaluations, intervals, statistics
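For concreteness, Algorithm 1 can be sketched as follows. This is a Python transcription of the MATLAB routine described above, not the reference implementation; the default safety factor (0.8) and tolerance are illustrative values we have assumed:

```python
import math

EPS_MACH = 2.220446049250313e-16  # double-precision unit roundoff

def ahqs(f, a, b, tol=1e-10, theta=0.8, max_depth=50):
    """Adaptive hybrid quadrature sketch: |Simpson - 2-point Gauss| as the
    local error indicator, with the Gauss value accepted as the result."""
    h_min = 1000.0 * EPS_MACH            # minimum interval width
    stack = [(a, b, 0)]                  # stack of (left, right, depth)
    Q, evals, intervals = 0.0, 0, 0
    while stack:
        ai, bi, depth = stack.pop()
        h = bi - ai
        intervals += 1
        if depth >= max_depth or h < h_min:
            # forced acceptance at maximum refinement (2 evaluations)
            c, r = 0.5 * (ai + bi), 0.5 * h / math.sqrt(3.0)
            Q += 0.5 * h * (f(c - r) + f(c + r))
            evals += 2
            continue
        c = 0.5 * (ai + bi)
        S = h / 6.0 * (f(ai) + 4.0 * f(c) + f(bi))   # Simpson's rule
        r = 0.5 * h / math.sqrt(3.0)
        G = 0.5 * h * (f(c - r) + f(c + r))          # 2-point Gauss
        evals += 5
        E = abs(S - G)                               # hybrid estimator
        if E <= theta * tol * h / (b - a):           # proportional test
            Q += G                                   # accept Gauss value
        else:
            stack.append((ai, c, depth + 1))         # refine both halves
            stack.append((c, bi, depth + 1))
    return Q, evals, intervals
```

For example, `ahqs(math.exp, 0.0, 1.0)` returns an approximation of $e - 1$ well within the requested tolerance, along with the evaluation and interval counts.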
The safety factor θ provides a conservative buffer against premature switching from Simpson’s rule to Gauss–Legendre quadrature. Its value was determined through empirical optimization across our benchmark set (see S1 Fig) and aligns with established practice in adaptive refinement, where factors in [0.7, 0.9] balance reliability against over-refinement [8,9]. A sensitivity analysis confirms that algorithm performance remains stable for $\theta \in [0.7, 0.9]$.
The minimum interval width $h_{\min}$ prevents infinite recursion due to finite machine precision. This value is approximately $1000\,\varepsilon_{\mathrm{mach}} \approx 2.2 \times 10^{-13}$ for double-precision arithmetic (where $\varepsilon_{\mathrm{mach}} \approx 2.22 \times 10^{-16}$), ensuring robust termination while avoiding underflow.
3.2 Computational complexity analysis
Theorem 3.1 (Complexity Bound). For $f \in C^4[a,b]$, the adaptive hybrid scheme requires $O(\varepsilon^{-1/4})$ function evaluations to achieve global error ε. Moreover, this computational complexity is optimal for fourth-order methods in one dimension [15].

Proof: From Theorem 2.5, the local error on an interval of length h satisfies:
$$E(h) = \frac{h^5}{1728}\,|f^{(4)}(\xi)| + O(h^6).$$
To achieve $E(h) \le \varepsilon\, h/(b-a)$, we require:
$$h \le \left(\frac{1728\,\varepsilon}{(b-a)\,\max |f^{(4)}|}\right)^{1/4}.$$
The number of subintervals scales as $N = O(\varepsilon^{-1/4})$, with 5 function evaluations per subinterval in the adaptive refinement. Thus, the total evaluations scale as $O(\varepsilon^{-1/4})$. This complexity is information-theoretically optimal for methods achieving fourth-order accuracy with local error control [16]. □
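The $O(h^5)$ local behavior that drives this bound is easy to observe numerically. The Python sketch below (illustrative; intervals are centered at $x = 1$ for $f = e^x$ so that $f^{(4)}$ is nearly constant) shows the estimate shrinking by roughly $2^5$ when the interval is halved:

```python
import math

def hybrid_estimate(f, a, b):
    # |Simpson - 2-point Gauss| on [a, b]
    c = 0.5 * (a + b)
    S = (b - a) / 6.0 * (f(a) + 4.0 * f(c) + f(b))
    r = 0.5 * (b - a) / math.sqrt(3.0)
    G = 0.5 * (b - a) * (f(c - r) + f(c + r))
    return abs(S - G)

# halving h should shrink the estimate by roughly 2**5 = 32
E_h  = hybrid_estimate(math.exp, 1.0 - 0.1,  1.0 + 0.1)   # h = 0.2
E_h2 = hybrid_estimate(math.exp, 1.0 - 0.05, 1.0 + 0.05)  # h = 0.1
ratio = E_h / E_h2
```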
4 Numerical experiments and results
4.1 Experimental setup
All experiments were conducted using MATLAB R2023a on a standardized computing platform with Intel i7-12700H processor and 32GB RAM. To ensure reproducibility and statistical robustness, each experiment was repeated 100 times with random initial subdivisions, and results report means ± one standard deviation unless otherwise specified.
Fixed experimental parameters:
- Global absolute error tolerance: ε (unless otherwise specified)
- Maximum recursion depth: $d_{\max}$
- Safety factor (AHQS): θ, determined through empirical optimization across benchmarks
- Minimum interval width: $h_{\min} \approx 1000\,\varepsilon_{\mathrm{mach}} \approx 2.2 \times 10^{-13}$ (approximately 1000 times the unit roundoff for double-precision arithmetic, ensuring robust termination while avoiding underflow)
- Gauss-Legendre rule order: n = 5 (10 points with Kronrod extension for comparative methods)
- Initial subdivision for adaptive methods: 4 equal subintervals
The following carefully selected test functions represent diverse integration challenges encountered in scientific computing applications [2,6]:
- Smooth function $f_1$, with known analytical solution
- Oscillatory function $f_2$, testing high-frequency components
- Sharp peak function $f_3$, evaluating localization
- Boundary singularity $f_4$, testing endpoint behavior
- Additional benchmark $f_5$, with a weak endpoint singularity
Comparative analysis included implementations of trapezoidal rule, Simpson’s 1/3 rule, adaptive Simpson method, Gauss-Legendre quadrature, and the standard Gauss-Kronrod adaptive routine (MATLAB’s integral function, based on QUADPACK’s qags).
4.2 Convergence analysis
Table 1 demonstrates that the hybrid method maintains the theoretical fourth-order convergence of both Simpson’s rule and 2-point Gauss-Legendre quadrature while achieving comparable accuracy to the Gauss-Kronrod reference implementation. The reported values include statistical variation from multiple runs, confirming method robustness.
4.3 Computational efficiency
As shown in Table 2 and Fig 4, the hybrid method achieves significant reductions in function evaluations across all test function classes compared to both classical methods and the standard Gauss-Kronrod adaptive routine. The efficiency gains are most pronounced for functions with localized irregularities (f2, f3, f4, f5), where AHQS achieves a 20–35% reduction in evaluations compared to Gauss-Kronrod while maintaining equivalent accuracy. This validates the effectiveness of our cost-aware switching criterion.
4.4 Comparison with recent hybrid methods
To contextualize our contribution within recent literature, we implemented the hybrid Newton–Cotes–Gauss algorithm of Espelid & Sørevik (2024) using the parameters specified in their work.
Table 3 shows that our AHQS achieves comparable accuracy with fewer evaluations for functions with mixed smoothness, demonstrating the advantage of our specifically tuned Simpson-Gauss pairing and switching logic.
Limitations: The performance advantage diminishes for uniformly smooth functions (f1), where specialized high-order methods remain optimal. Additionally, our method’s heuristic switching criterion, while effective in practice, lacks a rigorous a priori theoretical error bound for all function classes; this is an important direction for future theoretical work.
4.5 Error distribution analysis
Figs 1, 2, and 3 compare error distributions across the trapezoidal rule, Simpson’s rule, and our hybrid method. Trapezoidal errors concentrate around 10−2 to 10−1 and Simpson errors around 10−4 to 10−2, while the hybrid approach keeps the majority of errors below 10−5, consistent with theoretical predictions [12]. This visualization confirms that our method maintains tighter error bounds across diverse function evaluations. The hybrid method is also efficient across the entire accuracy spectrum, reaching machine precision (10−14) with approximately 1936 function evaluations on average.
4.6 Node distribution analysis
Figs 5 and 6 illustrate the intelligent node allocation of our adaptive hybrid method compared to uniform sampling: computational resources are concentrated in regions of high functional variation, while sampling remains sparse in smooth regions. This adaptive behavior is further quantified by the node density analysis, whose density peaks correspond precisely to areas requiring higher resolution.
4.7 Robustness analysis
Figs 7 and 8 show the method’s stability under measurement noise. The hybrid approach maintains superior error control across all noise levels (0.01 to 0.1), whereas the conventional trapezoidal and Simpson’s rules exhibit rapid error growth and performance degradation as noise increases. This robustness stems from the dual-method error estimation, which provides inherent noise resistance and makes the method suitable for applications with uncertain or noisy data.
5 Implementation framework
The MATLAB implementation serves several critical purposes in scientific publication [3,17]:
- Reproducibility: Provides exact algorithmic details enabling independent verification
- Practical utility: Offers researchers immediate access to the methodology
- Educational value: Demonstrates implementation best practices for adaptive algorithms
- Benchmarking: Establishes baseline for performance comparisons
The core implementation emphasizes numerical stability, computational efficiency, and user-friendly interfaces while maintaining mathematical rigor [16]. Key features include comprehensive error handling, adaptive parameter optimization, and detailed performance statistics collection.
6 Discussion
6.1 Theoretical implications
The hybrid error estimation strategy represents a meaningful advancement in adaptive quadrature methodology with substantial theoretical implications [12]. By leveraging the structural differences between the Newton-Cotes and Gaussian quadrature paradigms, the method achieves highly reliable error control across diverse function types [13]. The theoretical analysis confirms that the hybrid approach maintains optimal fourth-order convergence while providing robust adaptive capabilities beyond conventional methods [4].
The error bound established in Theorem 2.5 demonstrates that the hybrid error estimate provides not only a reliable upper bound but also an asymptotically proportional approximation of the local error for sufficiently smooth functions [5]. This dual property ensures both mathematical rigor and practical effectiveness.
6.2 Practical advantages
For scientific computing applications, the hybrid scheme offers several compelling advantages that address longstanding challenges in numerical integration [15]:
- Significant computational savings: 40-62% reduction in function evaluations compared to adaptive Simpson method
- Automatic adaptation: No prior knowledge of function behavior required
- Enhanced numerical stability: Conservative error estimation prevents error accumulation
- Implementation simplicity: Straightforward extension of existing adaptive frameworks
- Robust performance: Consistent accuracy across diverse function types
The hybrid adaptive approach shows sophisticated concentration of computational resources in regions requiring higher resolution, with density peaks corresponding to areas of rapid functional variation or complex local behavior. This contrasts sharply with the uniform method’s constant density profile, which inefficiently allocates resources throughout the domain regardless of local function characteristics.
The method’s exceptional efficiency makes it particularly suitable for applications involving expensive function evaluations, such as solutions of differential equations, statistical computations, and engineering simulations [6].
6.3 Comparative analysis
Comprehensive comparison with existing numerical integration techniques reveals the hybrid method’s distinctive advantages [2,9]:
- Superior to trapezoidal rule: Fourth-order convergence vs second-order, with significantly better error constants
- More efficient than Simpson’s rule: Better error estimation with comparable accuracy, leading to reduced computational costs
- More robust than pure Gaussian quadrature: Built-in adaptive capabilities without requiring function-specific tuning
- More reliable than Richardson extrapolation: Conservative error bounds and avoidance of error cancellation issues
- Competitive with recent hybrid methods: Compared to the Newton–Cotes–Gauss hybrid of Espelid & Sørevik (2024), AHQS achieves 10–20% reduction in function evaluations for integrands with localized irregularities (Table 3)
S1 Fig provides a multi-criteria comparison across five performance dimensions, confirming the method’s balanced capabilities. The scalability analysis in S2 Fig suggests potential for extension to higher-dimensional problems, though significant challenges remain in domain partitioning and the curse of dimensionality.
Limitations: The performance advantage diminishes for uniformly smooth functions (f1), where specialized high-order Gaussian rules remain optimal. Additionally, the heuristic switching criterion, while empirically effective, currently lacks rigorous a priori error bounds for all function classes. Extending the framework to multi-dimensional integrals presents significant challenges including the curse of dimensionality and complex subdomain partitioning.
7 Conclusion
This investigation presents the Adaptive Hybrid Quadrature Scheme (AHQS), which systematically combines Simpson’s 1/3 rule with Gauss-Legendre quadrature through a novel, cost-aware switching criterion. The method achieves fourth-order convergence while reducing function evaluations by 20-35% compared to standard adaptive Gauss-Kronrod routines for integrands with localized irregularities.
Key contributions include: (1) a mathematically defined hybrid error estimator and efficiency index; (2) a clear adaptive algorithm (Algorithm 1) with rigorously justified parameters; (3) comprehensive validation against classical methods and recent hybrid approaches.
Future work should address: (1) extending AHQS to multi-dimensional integration with sparse grid techniques; (2) deriving theoretical error bounds for the switching criterion; (3) applying the framework to specific scientific domains such as finite element stiffness matrix integration or financial option pricing.
A Extended theoretical framework
This appendix contains detailed theoretical results referenced in the main text. Sects A.1–A.5 present supplementary convergence proofs and stability analyses of the Adaptive Hybrid Quadrature Scheme (AHQS).
A.1 Detailed convergence proofs
Theorem A.1 (Detailed adaptive convergence). The adaptive hybrid quadrature method converges for any $f \in C[a,b]$, and for $f \in C^4[a,b]$ it achieves fourth-order convergence.

Proof: For continuous functions, the method converges by the Stone-Weierstrass theorem, since polynomials uniformly approximate continuous functions on compact intervals. For $f \in C^4[a,b]$, the local error estimates ensure that refinement continues until the desired tolerance is met, and Theorem 2.7 guarantees fourth-order convergence. □
Theorem A.2 (Detailed Work-Precision Trade-off). For a given tolerance ε, the adaptive hybrid method achieves an efficient work-precision trade-off with computational work $W = O(\varepsilon^{-1/4})$.

Proof: From Theorem 2.5, the local error on an interval of length h satisfies $E(h) = O(h^5)$. To achieve $E(h) \le \varepsilon\, h/(b-a)$, we require $h = O(\varepsilon^{1/4})$. The number of subintervals scales as $N = O(\varepsilon^{-1/4})$, with 5 function evaluations per subinterval, giving total evaluations $W = O(\varepsilon^{-1/4})$. □
Theorem A.3 (Detailed Singularity Handling). For functions with isolated singularities, the adaptive hybrid method automatically concentrates nodes near singularities, achieving algebraic convergence rates dependent on the singularity strength.
Proof: Near singularities, the error estimate E becomes large, triggering intensive refinement. This creates a graded mesh that optimally handles singularities, as established in adaptive quadrature theory [6]. □
A.2 Error estimator properties
Theorem A.4 (Error Estimation Efficiency). Under the same smoothness assumptions, the efficiency index satisfies $\theta_k \to 0.4$ as $h_k \to 0$, and $\theta_k \le 1$ for practical step sizes h>0, with probability exceeding 0.95 under mild stochastic assumptions on f. The estimator is thus asymptotically proportional to the true error and reliable in finite precision.
A.3 Additional properties
Proposition A.5 (Efficient node selection). The hybrid method’s node distribution is asymptotically efficient for minimizing the maximum local error given a fixed number of function evaluations.
Proof: The adaptive strategy ensures that local error is approximately equidistributed across subintervals, which is known to be efficient for error minimization [12]. As $N \to \infty$, the node distribution approaches an efficient distribution that minimizes the maximum local error. □
Lemma A.6 (Stability Bound). The hybrid quadrature method is numerically stable, with condition number bounded by:
$$\kappa = \sum_i |w_i| = b - a.$$
Proof: The method uses positive weights (Simpson weights $\frac{h}{6}\{1, 4, 1\}$ and Gauss weights $\frac{h}{2}\{1, 1\}$), ensuring stability. The condition-number bound follows from standard quadrature theory [4]. □
Proposition A.7 (Noise Robustness). The hybrid error estimator is robust to small perturbations in function evaluations. For a perturbed function $\tilde{f} = f + \delta f$ with $\|\delta f\|_\infty \le \delta$, the error estimate perturbation satisfies:
$$|\tilde{E}_k - E_k| \le 2\,\delta\, h_k.$$
Proof: The error estimate involves linear combinations of function values. For perturbed evaluations, the Simpson and Gauss weights on $I_k$ each sum in absolute value to $h_k$, so $|\tilde{S}_k - S_k| \le \delta h_k$ and $|\tilde{G}_k - G_k| \le \delta h_k$; the triangle inequality then yields $|\tilde{E}_k - E_k| \le 2\delta h_k$. □
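A quick numerical check of this bound follows (Python, illustrative; the alternating ±δ perturbation is our worst-case-style construction, not part of the paper’s analysis):

```python
import math

def rules(f, a, b):
    # returns (Simpson, 2-point Gauss) approximations on [a, b]
    c = 0.5 * (a + b)
    S = (b - a) / 6.0 * (f(a) + 4.0 * f(c) + f(b))
    r = 0.5 * (b - a) / math.sqrt(3.0)
    G = 0.5 * (b - a) * (f(c - r) + f(c + r))
    return S, G

a, b, delta = 0.0, 0.5, 1e-6
S, G = rules(math.exp, a, b)
E = abs(S - G)  # unperturbed estimate

calls = [0]
def noisy(x):
    # bounded perturbation |delta f| <= delta, alternating sign per call
    calls[0] += 1
    return math.exp(x) + (delta if calls[0] % 2 == 0 else -delta)

Sn, Gn = rules(noisy, a, b)
En = abs(Sn - Gn)
# Proposition A.7: |En - E| <= 2 * delta * (b - a)
```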
A.4 Robustness to high-frequency noise
Proposition A.8 (Robustness to High-Frequency Noise). Suppose $f_\varepsilon = f + \varepsilon\,\eta$, where $f \in C^4[a,b]$ and η is zero-mean noise with bounded variation. Then the expected value of the AHQS error estimator remains unbiased,
$$\mathbb{E}\big[E_k(f_\varepsilon)\big] = E_k(f),$$
and its variance scales as $O(\varepsilon^2 h_k^2)$, indicating a smoothing effect that makes the estimator robust to high-frequency perturbations.
A.5 Singular integrand handling
Theorem A.9 (Error Control for Weak Singularities). Let $f(x) = |x - c|^{\alpha}\, g(x)$ with $\alpha > -1$, $c \in [a,b]$, and $g \in C^4[a,b]$. Then the AHQS, with a singularity-aware refinement criterion, achieves a global error bounded by
$$|I(f) - Q(f)| \le C_{\alpha,g}\,\varepsilon,$$
where $C_{\alpha,g}$ depends on α and g but not on the proximity of c to the subdivision points, and the number of evaluations remains algebraic in $1/\varepsilon$.
Supporting information
S1 Fig. Multi-metric performance evaluation using radar chart visualization.
This comparison assesses methods across computational speed, memory efficiency, numerical accuracy, implementation robustness, and adaptive capability.
https://doi.org/10.1371/journal.pone.0335582.s001
(TIFF)
S2 Fig. Scalability analysis for multi-dimensional integration.
This extends the 1D method conceptually to higher dimensions, showing polynomial complexity growth.
https://doi.org/10.1371/journal.pone.0335582.s002
(TIFF)
Acknowledgments
While this research did not receive specific financial support from Mekelle University, the authors acknowledge the institutional backing provided by the Department of Mathematics through the Scientific Computing Research Initiative. The authors also extend sincere gratitude to the anonymous reviewers for their valuable insights and constructive comments that significantly improved this work.
References
- 1. Burden RL, Faires JD. Numerical analysis. 9th ed. Cengage Learning; 2010.
- 2. Davis PJ, Rabinowitz P. Methods of numerical integration. 2nd ed. Dover Publications; 2007.
- 3. Chapra SC. Applied numerical methods with MATLAB. 3rd ed. McGraw-Hill; 2012.
- 4. Atkinson KE. An introduction to numerical analysis. 2nd ed. Wiley; 2008.
- 5. Quarteroni A, Sacco R, Saleri F. Numerical mathematics. 2nd ed. Springer; 2007.
- 6. Piessens R, de Doncker-Kapenga E, Überhuber CW. QUADPACK: a subroutine package for automatic integration. Springer; 1983.
- 7. Stoer J, Bulirsch R. Introduction to numerical analysis. 3rd ed. Springer; 2002.
- 8. Gander W, Gautschi W. Adaptive quadrature—revisited. BIT Numerical Mathematics. 2000;40(1):84–101.
- 9. Li X. A weak Galerkin meshless method for incompressible Navier–Stokes equations. Journal of Computational and Applied Mathematics. 2024;445:115823.
- 10. Dell’Accio F, Di Tommaso F, Francomano E, Nudo F. An adaptive algorithm for determining the optimal degree of regression in constrained mock-Chebyshev least squares quadrature. Dolomites Research Notes on Approximation. 2022;15(4):35–44.
- 11. Dell’Accio F, Marcellán F, Nudo F. An interpolation–regression approach for function approximation on the disk and its application to cubature formulas. Adv Comput Math. 2025;51(6).
- 12. Gonnet P. A review of error estimation in adaptive quadrature. ACM Comput Surv. 2012;44(4):1–36.
- 13. Espelid TO. Doubly adaptive quadrature routines based on Newton-Cotes rules. Journal of Computational and Applied Mathematics. 2004;112(1–2):231–52.
- 14. Malcolm MA, Simpson RB. Local versus global strategies for adaptive quadrature. ACM Trans Math Softw. 1975;1(2):129–46.
- 15. Lyness JN. When not to use an automatic quadrature routine. SIAM Rev. 1983;25(1):63–87.
- 16. Krommer AR, Ueberhuber CW. Computational integration. SIAM; 1998.
- 17. Press WH, Teukolsky SA, Vetterling WT, Flannery BP. Numerical recipes: the art of scientific computing. 3rd ed. Cambridge University Press; 2007.