Abstract
Coupled forward-backward stochastic differential equations (FBSDEs) are closely related to financially important issues such as optimal investment. However, it is well known that obtaining solutions is challenging, even when employing numerical methods. In this paper, we propose new methods that combine an algorithm recently developed for coupled FBSDEs with an asymptotic expansion approach, using the expansions of those FBSDEs as control variates for the learning of the neural networks. The proposed method is demonstrated to perform better than the original algorithm in numerical examples, including one with a financial implication. The results show that the proposed method exhibits not only faster convergence but also greater stability in computation.
Citation: Naito M, Saito T, Takahashi A, Takehara K (2025) Asymptotic expansions as control variates for deep solvers to fully-coupled forward-backward stochastic differential equations. PLoS One 20(5): e0321778. https://doi.org/10.1371/journal.pone.0321778
Editor: Viswanathan Arunachalam, Universidad Nacional de Colombia, COLOMBIA
Received: November 28, 2024; Accepted: March 11, 2025; Published: May 28, 2025
Copyright: © 2025 Naito et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All Python codes we use for computation are available on Github: https://github.com/Makot0922/Python-Code
Funding: The author(s) received no specific funding for this work.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Over the past few decades, there has been a notable increase in interest in backward stochastic differential equations (BSDEs) among both practitioners and academic researchers. It is well known that solving BSDEs is closely related to stochastic control problems such as portfolio optimization in finance. In contrast to traditional forward stochastic differential equations (FSDEs), these are stochastic equations with boundary conditions at a future time point T>0. Let \((\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t\ge 0}, P)\) be a filtered probability space satisfying the usual conditions. BSDEs are then typically formulated as

dY_t = -f(t, Y_t, Z_t)\,dt + Z_t\,dW_t, \quad Y_T = V, \tag{1}

where V is an \(\mathcal{F}_T\)-measurable \(\mathbb{R}^m\)-valued random variable and W is a d-dimensional Wiener process, or in its integral form,

Y_t = V + \int_t^T f(s, Y_s, Z_s)\,ds - \int_t^T Z_s\,dW_s. \tag{2}

A pair (Y,Z) of \(\mathbb{R}^m\)-valued and \(\mathbb{R}^{m\times d}\)-valued stochastic processes, respectively, is called a solution to the BSDE (1) or (2).
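As a quick sanity check of the formulation, consider the simplest case of a zero driver, f ≡ 0, for which (2) gives \(Y_t = \mathbb{E}[V \mid \mathcal{F}_t]\); taking d = m = 1 and V = W_T yields Y_0 = 0 and Z ≡ 1. A minimal Monte Carlo sketch (our own toy example, not one from the paper):

```python
import numpy as np

# Toy illustration (not from the paper): for a zero driver f = 0 and
# terminal condition V = W_T, the integral form (2) reduces to
# Y_t = E[V | F_t], so Y_0 = E[W_T] = 0 and Z_t = 1 solves the BSDE.
rng = np.random.default_rng(0)
T, n_paths = 1.0, 200_000

W_T = rng.normal(0.0, np.sqrt(T), size=n_paths)  # terminal Wiener values
Y0_estimate = W_T.mean()                         # Monte Carlo estimate of E[V]

print(f"Y_0 estimate: {Y0_estimate:.4f} (exact: 0)")
```

With 200,000 paths the Monte Carlo error of the mean is on the order of 1/sqrt(200000), so the estimate is close to the exact value zero.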
A forward-backward stochastic differential equation (FBSDE) is an equation in which V and/or f, the latter of which is often called the "driver" of Y, depend on X, the solution of another FSDE, as in

dY_t = -f(t, X_t, Y_t, Z_t)\,dt + Z_t\,dW_t, \quad Y_T = g(X_T), \tag{3}

where \(f: [0,T]\times\mathbb{R}^n\times\mathbb{R}^m\times\mathbb{R}^{m\times d} \to \mathbb{R}^m\) and \(g: \mathbb{R}^n \to \mathbb{R}^m\); the \(\mathbb{R}^n\)-valued process X satisfies

dX_t = b(t, X_t)\,dt + \sigma(t, X_t)\,dW_t, \quad X_0 = x_0 \in \mathbb{R}^n, \tag{4}

with \(b: [0,T]\times\mathbb{R}^n \to \mathbb{R}^n\) and \(\sigma: [0,T]\times\mathbb{R}^n \to \mathbb{R}^{n\times d}\).
Moreover, if the functions b and/or \(\sigma\) depend on the solution (Y,Z) to the BSDE (2), as in

dX_t = b(t, X_t, Y_t, Z_t)\,dt + \sigma(t, X_t, Y_t, Z_t)\,dW_t, \quad X_0 = x_0, \tag{5}

dY_t = -f(t, X_t, Y_t, Z_t)\,dt + Z_t\,dW_t, \quad Y_T = g(X_T), \tag{6}

the system consisting of these two equations is called a coupled FBSDE. Although the functions b, f, g and \(\sigma\) may contain dependence on the sample path \(\omega\) beyond their dependence through X, Y and Z, we suppress \(\omega\) henceforth for notational simplicity. One of the sufficient conditions for the existence of a solution to this FBSDE is provided in Ji et al. [1].
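As a concrete illustration of the time-discretized dynamics that the deep solvers discussed below simulate, here is a minimal Euler–Maruyama sketch of one forward pass through a coupled system of the form (5)–(6). The coefficient functions and the stand-in map `z_fn` are hypothetical toy choices, not those of any example in the paper; a deep solver would replace `z_fn` (and the choice of `y0`) with trained neural networks and parameters.

```python
import numpy as np

# Sketch of an Euler-Maruyama forward pass through a coupled system of
# the form (5)-(6). The coefficients b, sigma, f and the stand-in z_fn
# are hypothetical toy choices; a deep solver would learn Y_0 and z_fn.
def simulate(y0, N=25, T=1.0, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / N
    b = lambda t, x, y, z: -0.1 * x + 0.2 * y          # toy drift
    sigma = lambda t, x, y, z: 0.3 + 0.1 * np.tanh(z)  # toy diffusion
    f = lambda t, x, y, z: 0.5 * y                     # toy driver
    z_fn = lambda t, x: 0.1 * x                        # stand-in for Z

    x, y = 1.0, y0
    for i in range(N):
        t = i * dt
        z = z_fn(t, x)
        dW = rng.normal(0.0, np.sqrt(dt))
        x_new = x + b(t, x, y, z) * dt + sigma(t, x, y, z) * dW
        y = y - f(t, x, y, z) * dt + z * dW  # backward equation run forward
        x = x_new
    return x, y

x_T, y_T = simulate(y0=0.5)
```

A training loop would then penalize the mismatch between the simulated terminal value \(Y_{T}\) and \(g(X_{T})\).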
The (coupled) FBSDEs often arise in financial problems such as pricing derivatives, estimating the size of credit valuation adjustments (CVAs) and funding valuation adjustments (FVAs), and deriving optimal investments. Consequently, the solution of FBSDEs is of great importance. However, with few exceptions, FBSDEs are not analytically tractable, particularly in coupled cases. Therefore, efficient numerical computation of these equations is a highly desirable objective.
In recent times, a multitude of machine learning methodologies have been employed to investigate this subject area. In particular, following the seminal works of E et al. [2] and Han et al. [3], numerous subsequent studies have used deep neural networks to construct numerical solutions with Monte Carlo simulations, which are referred to as “deep solvers” for BSDEs. Among these, [1] develops three algorithms using deep solvers to construct numerical solutions to fully-coupled FBSDEs and demonstrates the effectiveness of their techniques in several numerical experiments.
Additionally, numerous efforts have been made to enhance the efficacy of deep solvers, including the implementation of a methodology known as "asymptotic expansions" in FBSDEs. Asymptotic expansion approaches in finance first emerged in pricing average options (Yoshida [4], Kunitomo and Takahashi [5]) and have since been applied to a broad class of financial issues, including: derivative evaluation under stochastic interest rates (Kunitomo and Takahashi [6], Takahashi and Matsushima [7], Antonov and Misirpashaev [8], Takahashi et al. [9], Shiraya et al. [10]); pricing barrier options (Shiraya et al. [11], Shiraya et al. [12], Kato et al. [13]); optimal portfolio problems (Takahashi and Yoshida [14], Naito and Takehara [15,16]); and construction of control variates for Monte Carlo simulations (Takahashi and Yoshida [17], Takahashi and Takehara [18]). For the mathematical validity of this approach, see Yoshida [4,19] and Kunitomo and Takahashi [20].
This methodology has also been applied to the field of FBSDEs: for instance, see Fujii et al. [21], Fujii and Takahashi [22–25] and Takahashi and Yamada [26,27]. For the combination of this methodology and deep solvers applied to uncoupled FBSDEs, Fujii et al. [28] employs the asymptotic expansions around linear drivers as control variates in conjunction with the deep solvers. Takahashi et al. [29] also employs asymptotic expansions as control variates, providing rigorous error bounds. In [15,16], optimal investment in complete and incomplete markets is considered, respectively. Rather than deriving the expansion of the corresponding FBSDE directly, a known result on the asymptotic expansion of the optimal portfolio presented by [14] is employed as a control variate. For other works applying asymptotic expansion methods with deep solvers to FSDEs and/or BSDEs, see Naito and Yamada [30], Iguchi et al. [31] and Takahashi and Yamada [32], among others. Nevertheless, to the best of our knowledge, there has been no application of the asymptotic expansion approach to deep solvers for coupled FBSDEs. Accordingly, this paper proposes an improvement in the efficiency of the algorithm proposed by [1] through the use of the asymptotic expansion of coupled FBSDEs as control variates. The proposed technique is demonstrated to outperform the original algorithm in several numerical examples, including one pertaining to optimal investment strategies in incomplete markets.
The organization of this paper is as follows. The second section describes the relationship between stochastic control problems with prior knowledge and FBSDEs. The third section then derives the asymptotic expansion of the target coupled FBSDE, and the fourth section proposes a new algorithm which applies the expansion as control variates to the original algorithm by [1]. The subsequent section presents a series of numerical examples that illustrate the efficacy of the proposed technique. Finally, concluding remarks are stated. Some elements omitted in this paper due to space limitations can be found in our full version [33].
Stochastic control with prior knowledge and FBSDE
In this section, we introduce a stochastic control problem with prior knowledge related to solving the coupled FBSDE (5)–(6), following arguments similar to those in [1]. First, consider the space of all \(\mathbb{F}\)-adapted square-integrable processes, and let the controls u and z be elements of this space. We consider the following control problem:

\inf_{u,z}\ \mathbb{E}\big[\,|Y^{u,z}_T - g(X^{u,z}_T)|^2\,\big], \tag{7}

where \(X^{u,z}\) and \(Y^{u,z}\) satisfy the FSDEs (8)–(9), obtained from (5)–(6) by replacing Y and Z in the coefficients with \(\bar{Y}+u\) and \(\bar{Z}+z\). Here we know the concrete processes \(\bar{Y}\) and \(\bar{Z}\) in advance, which can be interpreted as prior knowledge for u and z, respectively.
Proposition 1. Assume that the FBSDE (5)–(6) has a solution (X,Y,Z). Then \(u^* = Y - \bar{Y}\) and \(z^* = Z - \bar{Z}\) solve the control problem (7).
Proof: Substituting \(u = u^*\) and \(z = z^*\), the processes \(X^{u^*,z^*}\) and \(Y^{u^*,z^*}\) satisfy the original FBSDE (5)–(6). Clearly, X and Y satisfy these equations with \(|Y_T - g(X_T)|^2 = 0\). This implies the optimality of u* and z*, since the infimum in (7) is achieved at zero.
Next, we define the sub-problem with neural networks as (10), in which the infimum in (7) is restricted to the space of all controls represented by some neural network. Then, by the Universal Approximation Theorem for deep neural networks (e.g. Calin [34, Theorem 9.5.3]), we can always find network architectures capable of approximating \((u^*, z^*)\) in Proposition 1 with \(\varepsilon\)-precision.
We also analyze the approximation error when the neural networks are insufficiently trained. For example, assuming Lipschitz continuity of the functions f, b and \(\sigma\), the \(L^2\)-error between the optimally controlled processes and those under a suboptimal control (u,z) can be evaluated via Gronwall's inequality as in (11), where C>0 is some constant depending on the Lipschitz coefficients and T.
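A Gronwall-type bound of the kind referenced here takes the following form; this is a sketch in our notation, a hedged reconstruction rather than the paper's exact statement of (11):

```latex
% Hedged sketch of a Gronwall-type bound: the L^2 distance of the
% controlled processes from the optimal ones is dominated by the L^2
% distance of the controls, with C depending on the Lipschitz constants
% of b, sigma, f and on the horizon T.
\mathbb{E}\Big[\sup_{0\le t\le T}\big(|X^{u,z}_t - X_t|^2 + |Y^{u,z}_t - Y_t|^2\big)\Big]
\;\le\; C\,\mathbb{E}\Big[\int_0^T \big(|u_t - u^*_t|^2 + |z_t - z^*_t|^2\big)\,dt\Big].
```

The proof pattern is standard: subtract the controlled dynamics from the optimal ones, square, take expectations using the Lipschitz bounds and the Itô isometry, and apply Gronwall's lemma to the resulting integral inequality.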
We reformulate the notation to explicitly emphasize the dependence on prior knowledge, denoting the optimal controls as \(u^*(\bar{Y},\bar{Z})\) and \(z^*(\bar{Y},\bar{Z})\). Correspondingly, the suboptimal controls are expressed as \(u(\bar{Y},\bar{Z})\) and \(z(\bar{Y},\bar{Z})\), reflecting their joint dependence on both prior components through the control problem (10).
Then, from (11) we observe the following. First, when the prior knowledge \(\bar{Y}\) and \(\bar{Z}\) closely approximates the true solution Y and Z, the optimal controls u* and z* are small. Second, consider two distinct prior pairs \((\bar{Y}^1,\bar{Z}^1)\) and \((\bar{Y}^2,\bar{Z}^2)\), where the former provides a superior approximation to (Y,Z) compared to the latter. To illustrate our primary motivation for using approximations based on an asymptotic expansion as prior knowledge, for instance, let \(\bar{Y}^1\) and \(\bar{Z}^1\) be the estimates obtained via the asymptotic expansion for Y and Z. In contrast, we set \(\bar{Y}^2\) and \(\bar{Z}^2\) to zero, which corresponds to the original algorithm in [1]. In such cases, under identical training procedures the errors for the suboptimal controls \(u(\bar{Y}^1,\bar{Z}^1)\) and \(z(\bar{Y}^1,\bar{Z}^1)\) tend to be substantially smaller than their counterparts \(u(\bar{Y}^2,\bar{Z}^2)\) and \(z(\bar{Y}^2,\bar{Z}^2)\), particularly during initial training phases due to the typical random parameter initialization, which accelerates the convergence of the algorithm as observed in the Numerical Examples Section. Moreover, providing prior knowledge for either Y or Z alone may prove insufficient, as evidenced in Example 2 of that section, due to the simultaneous dependence of the controls u and z on \(\bar{Y}\) and \(\bar{Z}\).
The asymptotic expansion for coupled FBSDEs
Motivated by [22], in this section we apply the asymptotic expansion approach, which is a general approximation scheme for solutions to SDEs, to the coupled FBSDE (5) and (6). First, to apply this approach we consider the perturbed FBSDE (12)–(13) instead of the original equations, with a perturbation parameter \(\epsilon\). If \(\epsilon = 1\), the equations coincide with the original ones.
Then, we approximate the solution to this FBSDE by its formal Taylor expansion with respect to \(\epsilon\) as

X^\epsilon_t \approx \sum_{k \ge 0} \frac{\epsilon^k}{k!}\, X^{(k)}_t, \qquad
Y^\epsilon_t \approx \sum_{k \ge 0} \frac{\epsilon^k}{k!}\, Y^{(k)}_t, \qquad
Z^\epsilon_t \approx \sum_{k \ge 0} \frac{\epsilon^k}{k!}\, Z^{(k)}_t,

where \(X^{(k)}_t = \partial^k_\epsilon X^\epsilon_t\big|_{\epsilon=0}\), and similarly for \(Y^{(k)}_t\) and \(Z^{(k)}_t\). In particular, we have the following concrete expressions for the leading two terms, that is, the \(\epsilon^0\)- and \(\epsilon^1\)-order ones.
Proposition 2. First, the \(\epsilon^0\)-order terms are given by
where and
, and
.
Next, the \(\epsilon^1\)-order terms are given by
where ,
and
.
are certain deterministic functions whose definitions are given in Section A of [33], and
is the i-th row of
and
is the k-th column of
.
and
are differential operators with respect to each element of x and y, whose concrete definitions are also given in [33].
Proof: See [33].
Obviously, the leading terms are both deterministic processes, as \(X^{(0)}\) and \(Y^{(0)}\) are deterministic vector-valued functions, and the first-order terms follow Gaussian distributions, as their coefficients are all deterministic matrix-valued functions. In contrast, for the approximation of \(Z_t\), the leading term \(Z^{(0)}\) is actually zero and the first-order term \(Z^{(1)}\) is deterministic. We emphasize that, despite this simple structure, these asymptotic expansions serve as effective prior knowledge for the algorithm of [1], as confirmed in the Numerical Examples Section.
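Putting Proposition 2 together, the priors supplied to the deep solver take the following form; this is a sketch in our notation, with \(\epsilon\) set to one as in the original equations (the concrete coefficient functions are given in [33]):

```latex
% First-order asymptotic-expansion priors (epsilon set to 1), used as
% control variates: deterministic leading terms plus Gaussian first-order
% corrections for X and Y, and a deterministic first-order term for Z.
\bar{X}_t = X^{(0)}_t + X^{(1)}_t, \qquad
\bar{Y}_t = Y^{(0)}_t + Y^{(1)}_t, \qquad
\bar{Z}_t = Z^{(1)}_t \quad (\text{since } Z^{(0)}_t \equiv 0).
```

Thus \(\bar{Y}\) is a Gaussian process around a deterministic path, while \(\bar{Z}\) is purely deterministic at this order.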
Remark 1. In principle, the terms in the higher-order expansion can be computed as straightforwardly as the \(\epsilon^1\)-order terms. For example, the \(\epsilon^2\)-order terms satisfy a system of equations analogous to those in Proposition 2; here we assume that n = m = d = 1 to avoid complicated notation. Thanks to the decoupled structure of this system, namely that each \(\epsilon^2\)-order term depends only on the lower-order terms but not on the other \(\epsilon^2\)-order ones, it can be solved easily. However, when the dimensions n, m and d of the system increase, as in Example 1 in the Numerical Examples Section, the computational burden grows substantially, making implementation challenging even at the second order. This is why we focus on the low-order expansions for control variates.
Remark 2. Instead of using the equations (12)–(13), it seems natural to start by redefining the original FBSDE (5)–(6) with a different perturbation, as in (21)–(22). With this reformulation, the leading terms \(X^{(0)}\) and \(Y^{(0)}\) satisfy an FBSDE of their own. Obviously \(Z^{(0)} \equiv 0\) satisfies these equations, and \(X^{(0)}\) and \(Y^{(0)}\) are given as the solution to coupled equations, which usually present greater computational complexity than the decoupled equations (15). Furthermore, the \(\epsilon^1\)-order terms satisfy a coupled system that often requires much more computational effort, while it admits a solution such that \(X^{(1)}\) and \(Y^{(1)}\) are jointly Gaussian and \(Z^{(1)}\) is deterministic, as in Proposition 2. Although there are several ways to introduce perturbations, as in (21)–(22), which may provide more precise approximations at the same order of expansion, our focus is on computational feasibility in as many concrete examples as possible, including high-dimensional problems.
The asymptotic expansion as control variates
In this section the asymptotic expansion derived in the previous section is combined with the algorithm of [1]. Although we develop algorithms combining each of the three algorithms of [1] with our asymptotic expansion, here we display only the one based on Algorithm 2 of [1]; the others are left to [33]. We note that the use of our asymptotic expansion as control variates with Algorithms 1 and 3 of [1] is also quite effective, as shown in [33].
In particular, the asymptotic expansions \(\bar{Y}\) and \(\bar{Z}\) are used as control variates for the neural networks, with the networks now learning only the remainders \(Y - \bar{Y}\) and \(Z - \bar{Z}\). Setting the binary parameter l = 1 yields the proposed algorithm, while setting l = 0 recovers its original version, which is also found in [1] and [33]. As discussed in the first section, this algorithm can approximate the solution with any precision when the neural networks are deep enough. Furthermore, if the approximating processes \((\bar{Y}, \bar{Z})\) are sufficiently close to the true processes (Y,Z), we can expect the remainders to be small and thus easier to learn. The detailed algorithm is given below.
Algorithm 1 Algo. 2 in [1] (feedback control based on X) + the asymptotic expansion.
Input: The Wiener process W, the initial network parameters, the learning rate, the binary parameter \(l \in \{0,1\}\); the functions b, \(\sigma\), f and g given in (5)–(6);
Output: The optimized network parameters and the approximated process (Y,Z).
1: for k = 1 to maxstep do
2:  for m = 1 to M do
3:   Lm = 0;
4:   initialize X0 on the m-th path;
5:   initialize Y0 from the prior (when l = 1) and the network parameters;
6:   for i = 0 to N–1 do
7:    evaluate the remainder networks at the current state;
8:    form Yi and Zi by adding the asymptotic-expansion control variates (when l = 1);
9:    advance X by one Euler–Maruyama step with the increment ΔWi;
10:   advance Y by one Euler–Maruyama step of the backward equation;
11:
12:  end for
13:  Lm += |YN – g(XN)|²;
14: end for
15: compute the batch-average loss;
16: update the parameters by a gradient step;
17: end for
In the algorithm, the quantities indexed by m denote realizations of the corresponding processes and Wiener increments on the m-th simulated path.
This algorithm can be combined with methods such as El Mouatasim et al. [35] for even greater efficiency.
Following the recommendation in [1], both networks use one n-dimensional input layer and two hidden layers, with an m-dimensional output layer for the network approximating Y and an (m×d)-dimensional output layer for the network approximating Z.
In this algorithm, the entire processes \(\bar{Y}\) and \(\bar{Z}\) are used as the control variates for Y and Z respectively, and the two neural networks are applied to estimate the differences between these two approximations and the corresponding true processes. Note that \(\bar{Y}\) is stochastic even when l = 1, while the other algorithms proposed in [33] employ only deterministic controls. \(\bar{Z}\) can be directly obtained from an explicit expression with respect to \(\bar{X}\) and \(\bar{Y}\) such as (17), while there is an alternative way given in [33]. The difference between the computation above and its alternative appears to be very small, as shown there.
Numerical examples
In this section, we confirm the effectiveness of the proposed method through several numerical examples for coupled FBSDEs. Due to space limitations, some of them are omitted and left to [33]. Unless otherwise stated, the parameters for the neural network are set as follows: the batch size is 256; the learning rate is 0.005; the number of time steps in the discretization is 25. Batch normalization is applied to each layer, and the Adam optimizer is employed. The networks are implemented in Python using PyTorch, and the code is publicly available on GitHub [36].
Coupled FBSDEs which do not contain Z in the forward equation for X
First, in this subsection we apply the proposed method to the FBSDE where the coefficients for X depend only on Y but not on Z. Concretely, the following FBSDE is considered.
Example 1.
This example can be found in [1], and its exact solution is explicitly known.
Figs 1 and 2 depict the comparison among the original method, the proposed method and the method using the asymptotic expansion alone, indicated as "original", "with AE" and "only AE", respectively, for T = 0.1 and d = 100. The results for the other algorithms are found in [33], as they are for the other examples below. We generate ten independent sets of paths with different random seeds and average the results, as done in [1]. The comparison is presented in terms of computational time. To facilitate comparison, the computational time is standardized such that the time required for the original method to compute 10,000 iterations is set to one. This path generation and standardization of computational time are maintained for all subsequent figures. Note that for the method "only AE" the estimates for Y0 and Z are computed immediately using the explicit formulas, while the loss function is determined as the average of the values obtained over the batches employed during neural network training.
Fig 1. The value of the loss function is plotted against computational time on the horizontal axis (d = 100).
Fig 2. The error for the value of Y0 is plotted against computational time on the horizontal axis (d = 100).
In terms of both the loss function and the error for Y0, the proposed method, with the asymptotic expansions as control variates, demonstrates a notable enhancement in performance relative to the original method, even when computational time is taken into consideration. Notably, the use of the asymptotic expansions enhances the accuracy of the estimates not only during the initial learning steps but also throughout the entire learning process, that is, in the levels to which the loss function and the error converge. Additionally, it is observed that the proposed method exhibits slight fluctuations in its results, yet these are less significant since the order of magnitude of its error is smaller than that of the original method. In comparison to the method using the asymptotic expansion alone, the proposed method, which employs the expansion as control variates, significantly outperforms in both the size of the error for Y0 and the value of the loss function.
Coupled FBSDEs which contain Z in the equation for X
In this subsection, we confirm how the proposed method works in the case where the coefficients for X depend both on Y and Z. It is well known that obtaining a good numerical solution for such FBSDEs is significantly more challenging than for ones in which X depends only on Y.
First, we consider the following one-dimensional example which, like the previous one, is found in [1].
Example 2.
where X0 = 1 and
In this case the exact solution is known in closed form. Following [1], in this example we compare the estimates obtained by applying the slightly modified asymptotic expansion to the original algorithm with those obtained by the original algorithm itself.
Figs 3 and 4 depict the comparison of the original and proposed algorithms and the method using the asymptotic expansion alone. Here, we compare the result when only the asymptotic expansions of Y0 and Y are employed as control variates, denoted as "with Y0&Y," with the result when the expansion of Z alone is used, denoted as "with Z," and with the result when all the expansions are used in conjunction, denoted as "with AE." While incorporating the asymptotic expansions of either Y or Z individually as control variates yields only marginal convergence improvement, their combined implementation (denoted "with AE") achieves substantially superior performance relative to the original method. In contrast, when compared to the method using the expansion alone, the error for Y0 is slightly larger, which is due to a particular feature of this example. In fact, in this example the low-order terms of the expansion are given in closed form,
Fig 3. The value of the loss function is plotted against computational time on the horizontal axis (d = 1).
Fig 4. The error for the value of Y0 is plotted against computational time on the horizontal axis (d = 1).
and hence the expansion of Y perfectly matches the true solution. Although including the "correction term" introduces some error, the performance of the method using the expansion alone is still better than that of the proposed method. However, even in such cases, the value of the loss function achieved by the proposed method is much smaller than that of the method using the expansion alone.
For high-dimensional cases, [1] provides Example 4 in Section 5.4. However, we do not use this example, since it is easily shown by Itô's lemma that any choice of Z satisfying some regularity conditions solves this equation, and applying our asymptotic expansion finds one of the exact solutions.
Next, we consider another example, found in Horst et al. [37], for coupled FBSDEs with Z in the coefficient for X, with d = 6. This example is closely related to one of the most important problems in finance, namely portfolio optimization in incomplete markets.
Example 3. For the forward SDEs, we have
where and
is an orthogonal
-dimensional Wiener process.
Then, the backward SDE is given by
where and
.
This fully-coupled FBSDE appears in solving the portfolio optimization problem such as
where
for and the market securities S are driven by
and the other risk from
is unhedgeable. In contrast to the previous two examples, to the best of our knowledge, the exact solution to this FBSDE system is hard to obtain, while its existence is guaranteed by [37, Theorem 5.9].
Specifically, we set
with d2 = 1(i.e. ) and
. Thus,
is Gaussian and HT is lognormal.
Moreover, instead of expanding (33) and (34) directly, we expand their log-transformations. The idea behind this transformation is that the original processes seem to follow lognormal-like distributions in our setting, whereas their first-order expansions are normally distributed. It is confirmed in several numerical examples (available upon request) that this transformation slightly improves the performance of the proposed method.
Figs 5 and 6 present a comparison of the results for Algorithm 1 in Example 3. The parameters are set as follows: d1 = 5 and T = 1. In this example the exact solution is no longer available. Moreover, in practice, the value of Y0 (and the maximum expected utility achieved with the true optimal portfolio) is often of secondary interest; instead, the focus is on which portfolio offers the greatest expected utility compared to other portfolios. In this sense, the expected utility achieved with the portfolio obtained from the numerical solution of Z via (35) is estimated by the out-of-sample path average (using \(10^5\) paths for this computation), and we replace the result for Y0 by that for this criterion.
Fig 5. The value of the loss function is plotted against computational time on the horizontal axis.
Fig 6. The value of the expected utility achieved with the estimated portfolio is plotted against computational time on the horizontal axis.
As seen in the figures, the proposed method yields a notable improvement not only in the value of the loss function, but also in the expected utility, compared to the original method and the method using the expansion alone. Furthermore, the use of the asymptotic expansions as prior knowledge significantly improves the stability of the computation in the following sense. In the original version of Algorithm 1, [1] proposes randomly selecting the initial value for the learning process for Y0. Nevertheless, if that approach is employed, whereby the initial value is generated from the range [–2,2] as proposed in [1], 90 out of 100 trials fail to update the neural network within the first 100 learning steps. This phenomenon is observed consistently across a range of parameter settings, which are not reported here for brevity. In contrast, when the estimate from the asymptotic expansion is used as the initial value, the computation succeeds in updating the network without exception. This stability in computation is noteworthy.
In summary, in all the examples presented in this section, the proposed method improves upon both the original algorithm of [1] and the method using the asymptotic expansion alone.
Concluding remarks
In this paper, we proposed a new method which combines the algorithm proposed by [1] for coupled FBSDEs with the asymptotic expansions of those FBSDEs as control variates for the learning of the neural networks. In examples including high-dimensional ones and one with a financially important implication, it is numerically confirmed that our proposed method performs better than the original algorithm. This improvement concerns not only the values of the loss functions and the errors, but also the stability of the algorithm.
For future research, we note the following. First, we aim to provide a rigorous error bound, which was not done in this paper. Second, we can incorporate terms of order higher than one in the asymptotic expansions. Given that the levels to which the values of the loss function and the error for Y0 converge were improved in Algorithm 1 with the stochastic control variate, it is expected that the use of these higher-order random variables as additional control variates will further improve the efficiency of the proposed method. Finally, we are interested in other examples with financial implications, such as general equilibrium in incomplete markets.
References
- 1. Ji S, Peng S, Peng Y, Zhang X. Three algorithms for solving high-dimensional fully-coupled FBSDEs through deep learning. IEEE Intell Syst. 2020;35(3):71–84.
- 2. E W, Han J, Jentzen A. Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations. Commun Math Stat. 2017;5:349–80.
- 3. Han J, Jentzen A, E W. Solving high-dimensional partial differential equations using deep learning. Proc Natl Acad Sci USA. 2018;115(34):8505–10. pmid:30082389
- 4. Yoshida N. Asymptotic expansions for statistics related to small diffusions. JJSS. 1992;22:139–59.
- 5. Takahashi A, Kunitomo N. Pricing average options. Jpn Financ Rev. 1992;14:1–19.
- 6. Takahashi A, Kunitomo N. The asymptotic expansion approach to the valuation of interest rate contingent claims. Math Financ. 2001;11(1):117–51.
- 7. Takahashi A, Matsushima S. Option pricing in HJM model using an asymptotic expansion method. FSA Res Rev. 2004:82–103.
- 8. Antonov A, Misirpashaev T. Projection on a quadratic model by asymptotic expansion with an application to LMM swaption. SSRN. 2009.
- 9. Takahashi A, Takehara K, Toda M. A General Computation Scheme for a High-order asymptotic expansion method. IJTAF. 2012;15(6):1250044.
- 10. Shiraya K, Takahashi A, Yamazaki A. Pricing swaptions under the Libor market model of interest rates with local-stochastic volatility models. Wilmott. 2012;61:48–63.
- 11. Shiraya K, Takahashi A, Toda M. Pricing barrier and average options under stochastic volatility environment. J Comput Financ. 2011;15(2):111–48.
- 12. Shiraya K, Takahashi A, Yamada T. Pricing discrete barrier options under stochastic volatility. Asia-Pac Financ Mark. 2012;19(3):205–32.
- 13. Kato T, Takahashi A, Yamada T. An asymptotic expansion formula for up-and-out barrier option price under stochastic volatility model. JSIAM Lett. 2013;5:17–20.
- 14. Takahashi A, Yoshida N. An asymptotic expansion scheme for optimal investment problems. Stat Infer Stoch Process. 2004;7(2):153–88.
- 15. Naito M, Takehara K. Application of asymptotic expansion method to constrained optimal portfolio problem using machine learning. SSRN. Working Paper. 2024.
- 16. Naito M, Takehara K. Application of machine learning with asymptotic expansion to unconstrained optimal portfolio. SSRN. 2024.
- 17. Takahashi A, Yoshida N. Monte Carlo simulation with asymptotic method. JJSS. 2005;35(2):171–203.
- 18. Takahashi A, Takehara K. A hybrid asymptotic expansion scheme: an application to long-term currency options. Int J Theor Appl Finan. 2010;13(08):1179–221.
- 19. Yoshida N. Asymptotic expansions for small diffusions via the theory of Malliavin-Watanabe. Probab Theory Relat Fields. 1992;92(3):275–311.
- 20. Takahashi A, Kunitomo N. On validity of the asymptotic expansion approach in contingent claim analysis. Ann Appl Probab. 2003;13(3):914–52.
- 21. Fujii M, Sato S, Takahashi A. An FBSDE approach to American option pricing with an interacting particle method. Asia-Pac Financ Mark. 2015;22(3):229–60.
- 22. Fujii M, Takahashi A. Analytical approximation for non-linear FBSDEs with perturbation scheme. IJTAF. 2012;15(5):1250034.
- 23. Fujii M, Takahashi A. Perturbative expansion technique for non-linear BSDEs with interacting particle method. Asia-Pac Financ Mark. 2015;22(3):283–304.
- 24. Fujii M, Takahashi A. Asymptotic expansion for forward-backward SDEs with jumps. Stochastics. 2018;91(2):175–214.
- 25. Fujii M, Takahashi A. Solving backward stochastic differential equations with quadratic-growth drivers by connecting the short-term expansions. Stoch Process Their Appl. 2019;129(5):1492–532.
- 26. Takahashi A, Yamada T. An asymptotic expansion of forward-backward SDEs with a perturbed driver. IJFE. 2015;2(2):1550020.
- 27. Takahashi A, Yamada T. An asymptotic expansion for forward–backward SDEs: a Malliavin calculus approach. Asia-Pac Financ Mark. 2016;23(4):337–73.
- 28. Fujii M, Takahashi A, Takahashi M. Asymptotic expansion as prior knowledge in deep learning method for high dimensional BSDEs. Asia-Pac Financ Mark. 2019;26(3):391–408.
- 29. Takahashi A, Tsuchida Y, Yamada T. A new efficient approximation scheme for solving high-dimensional semi-linear PDEs: control variate method for Deep BSDE solver. J Comput Phys. 2022;454:110956.
- 30. Naito R, Yamada T. An acceleration scheme for deep learning-based BSDE solver using weak expansions. Int J Financ Eng. 2020;7(2):2050012.
- 31. Iguchi Y, Naito R, Okano Y, Takahashi A, Yamada T. Deep asymptotic expansion: application to financial mathematics. In: Proceedings of IEEE CSDE. 2021.
- 32. Takahashi A, Yamada T. Solving kolmogorov PDEs without the curse of dimensionality via deep learning and asymptotic expansion with Malliavin calculus. Partial Differ Equ Appl. 2023;4:27.
- 33. Naito M, Saito T, Takahashi A, Takehara K. Asymptotic expansions as control variates for deep solvers to fully-coupled forward-backward stochastic differential equations. SSRN. 2024.
- 34. Calin O. Deep learning architectures. Cham: Springer; 2020.
- 35. El Mouatasim A, de Cursi JES, Ellaia R. Stochastic perturbation of sub gradient algorithm for nonconvex deep neural networks. Comp Appl Math. 2023;42(4).
- 36. Naito M, Saito T, Takahashi A, Takehara K. Python codes on GitHub: deep solvers with asymptotic expansions as control variates for fully-coupled FBSDEs. https://github.com/Makot0922/Python-Code
- 37. Horst U, Hu Y, Imkeller P, Réveillac A, Zhang J. Forward–backward systems for expected utility maximization. Stoch Process Their Appl. 2014;124(5):1813–48.