
Numerical scheme simulation of forward stochastic differential equations with Markov Switching

2022-06-25 01:31:00 Month - flute of time

Preface

In the real world, many phenomena and models exhibit regime (state) switching. Describing this switching with a Markov chain captures the random transitions between different regimes well, and this has driven the rapid development of stochastic differential equations with Markov switching (SDEwMS) in recent years. Such stochastic models are widely used not only in financial markets but also in control engineering. However, explicit solutions of these models are generally hard to obtain. In this article we mainly construct the weak first-order Euler and Milstein schemes for solving this type of stochastic differential equation; for weak schemes of higher order, see the following paper.

Yang Li, Taitao Feng, Yaolei Wang, and Yifei Xin. A high order accurate and effective scheme for solving Markovian switching stochastic models[J]. Mathematics, 2021, 9(6), 588.


One : Stochastic differential equations with Markov switching (SDEwMS)

1: Markov chain knowledge

- Basics of jump processes and Markov chains

In an earlier article of the "random knowledge" series we covered Markov chains in detail, including how to simulate a discrete Markov chain; see the article below. That material is not repeated here, but the scheme simulation below relies on it.
Discrete Markov chain simulation process

  • Poisson random measure

Combining the basic knowledge above, we define $N(de,dt)$ as the Poisson random measure on $\varepsilon\times[0,T]$ with intensity $\lambda(de)\,dt$, where $\lambda$ is the Lebesgue measure on $\varepsilon$.

Then $dr_t=\int_{\varepsilon}l(r_{t-},e)N(de,dt)$, where $l(i,e)=j-i$ for $e\in\Delta_{ij}$ and $l(i,e)=0$ otherwise,

and the $\Delta_{ij}$ are consecutive intervals whose lengths are determined by the transition rates $q_{ij}$, defined as follows:

$\Delta_{12}=[0,q_{12}),\quad \Delta_{13}=[q_{12},q_{12}+q_{13}),\ \ldots,\quad \Delta_{1M}=\left[\sum_{j=2}^{M-1}q_{1j},\,\sum_{j=2}^{M}q_{1j}\right),$

$\Delta_{21}=\left[\sum_{j=2}^{M}q_{1j},\,\sum_{j=2}^{M}q_{1j}+q_{21}\right),\quad \Delta_{23}=\left[\sum_{j=2}^{M}q_{1j}+q_{21},\,\sum_{j=2}^{M}q_{1j}+q_{21}+q_{23}\right),\ \ldots,$

$\Delta_{2M}=\left[\sum_{j=2}^{M}q_{1j}+\sum_{j=1,j\ne2}^{M-1}q_{2j},\,\sum_{j=2}^{M}q_{1j}+\sum_{j=1,j\ne2}^{M}q_{2j}\right),\ \ldots$

where $M$ is the number of states of the Markov chain.
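A minimal Python sketch of this construction, assuming a generator matrix with zero row sums; the helper names build_intervals and jump_l and the particular matrix Q below are illustrative, not taken from the article:

import numpy as np

def build_intervals(Q):
    """Lay out the intervals Delta_ij row by row from the off-diagonal rates q_ij."""
    M = Q.shape[0]
    intervals = {}           # (i, j) -> [left, right)
    left = 0.0
    for i in range(M):
        for j in range(M):
            if i != j:
                intervals[(i + 1, j + 1)] = (left, left + Q[i, j])
                left += Q[i, j]
    return intervals, left   # 'left' ends up as the total mass lambda(E)

def jump_l(i, e, intervals):
    """l(i, e) = j - i if e lies in Delta_ij, and 0 otherwise."""
    for (a, b), (lo, hi) in intervals.items():
        if a == i and lo <= e < hi:
            return b - a
    return 0

# Simulate r_t on [0, T] driven by the Poisson random measure N(de, dt):
# atoms arrive at rate lambda(E) and carry marks e that are uniform on [0, lambda(E)).
Q = np.array([[-1.0,  0.6,  0.4],
              [ 2.0, -3.0,  1.0],
              [ 3.0,  1.0, -4.0]])   # hypothetical generator (each row sums to zero)
intervals, total_rate = build_intervals(Q)
rng = np.random.default_rng(0)

T, t, r = 1.0, 0.0, 1                # start in state 1
while True:
    t += rng.exponential(1.0 / total_rate)   # waiting time to the next atom of N(de, dt)
    if t > T:
        break
    e = rng.uniform(0.0, total_rate)         # mark drawn from the Lebesgue measure on epsilon
    r += jump_l(r, e, intervals)             # the chain jumps only if e falls in one of Delta_{r,j}
print("state at time T:", r)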

2: The structure of SDEwMS

On a complete probability space $\left(\Omega,F,P\right)$ equipped with a filtration $\left\{F_t\right\}_{t\ge 0}$, an SDEwMS generally has the differential form

$dX_t=f(t,X_t,r_t^h)\,dt+g(t,X_t,r_t^h)\,dW_t,\quad t\in(0,T],\qquad X_0=x_0,$

where $T$ is a finite terminal time and $r_t^h\in S=\{1,2,\ldots,M\}$ is a right-continuous, Borel-measurable Markov chain.

$W_t=\left(W_t^1,\ldots,W_t^d\right)^T$ is a $d$-dimensional Brownian motion with mutually independent components,

$X_t=\left(X_t^1,\ldots,X_t^q\right)^T$ is a $q$-dimensional stochastic process, the drift $f(\cdot,\cdot,\cdot):[0,T]\times R^q\times S\rightarrow R^q$ and

the diffusion $g(\cdot,\cdot,\cdot):[0,T]\times R^q\times S\rightarrow R^{q\times d}$ are measurable functions, and $x_0$ is the initial value of the equation.

3: Existence and uniqueness of the solution

Suppose there is a fixed constant $L$ such that, for any finite terminal time $T>0$, any $x,y\in R^q$ and all $t\in[0,T]$, the drift coefficient

$f(\cdot,\cdot,\cdot)$ and the diffusion coefficient $g(\cdot,\cdot,\cdot)$ satisfy the Lipschitz condition

$\left|f(t,x,i)-f(t,y,i)\right|\vee\left|g(t,x,i)-g(t,y,i)\right|\le L\left|x-y\right|,\qquad \left|f(t,x,i)-f(s,x,i)\right|\le L\left|t-s\right|,$

and the linear growth condition

$\left|f(t,x,i)\vee g(t,x,i)\right|\le M(1+\left|x\right|),$

where $M=L\vee\max\{\left|f(t,0,i)\vee g(t,0,i)\right|\}$.

If the SDEwMS satisfies the above conditions and the initial value satisfies $x_0\in F_0$, then the equation has a unique solution

$\{X_t,\ t\in[0,T]\}.$

For the detailed proof, please refer to the relevant papers and references.
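As a quick illustrative check, assuming the scalar switching geometric Brownian motion used later in the experiments, $f(t,x,i)=a_ix$ and $g(t,x,i)=b_ix$, both conditions hold with $L=\max_i\left(|a_i|\vee|b_i|\right)$:

$\left|f(t,x,i)-f(t,y,i)\right|\vee\left|g(t,x,i)-g(t,y,i)\right|=\left(|a_i|\vee|b_i|\right)|x-y|\le L|x-y|,$

the coefficients do not depend on $t$, so $\left|f(t,x,i)-f(s,x,i)\right|=0\le L|t-s|$, and since $f(t,0,i)=g(t,0,i)=0$ we have $M=L$ and

$\left|f(t,x,i)\vee g(t,x,i)\right|\le L|x|\le M(1+|x|).$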

Two : Relevant Ito formula

The Ito formula is the theoretical tool needed to derive the numerical schemes, so it is introduced in some detail here.

1: Ito formula

For the SDEwMS above, for $x\in R^q$ and $\varphi(t,x,i)\in C^{2,1}\left([0,T]\times R^q\times S;R\right)$, define the operators

$L^0\varphi,\ L^j\varphi,\ L_e^{-1}\varphi$ from $R^q\times S$ to $R$:

$L^0\varphi=\frac{\partial\varphi}{\partial t}+\sum_{k=1}^{q}\frac{\partial\varphi}{\partial x_k}f_k+\frac{1}{2}\sum_{m=1}^{d}\sum_{l,k=1}^{q}\frac{\partial^2\varphi}{\partial x_k\partial x_l}g_{l,m}g_{k,m}+\sum_{j\in S}q_{ij}\varphi,$

$L^j\varphi=\sum_{k=1}^{q}\frac{\partial\varphi}{\partial x_k}g_{k,j},\qquad L_e^{-1}\varphi=\varphi\left(t,x,i+l(i,e)\right)-\varphi(t,x,i),$

where $i\in S$ and $1\le j\le d$.
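For concreteness, a small worked instance, assuming the scalar case $d=q=1$ with the switching coefficients $f(t,x,i)=a_ix$, $g(t,x,i)=b_ix$ used later, and writing the switching term in its usual form acting on the regime argument:

$L^0\varphi(t,x,i)=\frac{\partial\varphi}{\partial t}+a_ix\frac{\partial\varphi}{\partial x}+\frac{1}{2}b_i^2x^2\frac{\partial^2\varphi}{\partial x^2}+\sum_{j\in S}q_{ij}\varphi(t,x,j),\qquad L^1\varphi(t,x,i)=b_ix\frac{\partial\varphi}{\partial x}.$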

With these operators we obtain the following Ito integral formula:

$\varphi\left(t_{n+1},X_{t_{n+1}},r_{t_{n+1}}\right)=\varphi(t_n,X_{t_n},r_{t_n})+\int_{t_n}^{t_{n+1}}L^0\varphi(s,X_s,r_s)\,ds+\sum_{j=1}^{d}\int_{t_n}^{t_{n+1}}L^j\varphi(s,X_s,r_s)\,dW_s^j+\int_{t_n}^{t_{n+1}}\int_{\varepsilon}L_e^{-1}\varphi\left(s,X_s,r_s\right)\tilde{N}(de,ds),$

where $\tilde{N}(de,ds)=N(de,ds)-\lambda(de)\,ds$ is the compensated Poisson measure.

2: Ito expansion

Combining the operators above, for any $\varphi\left(t_{n+1},X_{t_{n+1}},r_{t_{n+1}}\right)\in C^{2,1}(R^q\times S;R)$, applying the Ito formula at $t_n$ gives the integral form

$\varphi\left(t_{n+1},X_{t_{n+1}},r_{t_{n+1}}\right)=\varphi\left(t_n,X_{t_n},r_{t_n}\right)+\int_{t_n}^{t_{n+1}}L^0\varphi\left(s,X_s,r_s\right)ds+\sum_{j=1}^{d}\int_{t_n}^{t_{n+1}}L^j\varphi\left(s,X_s,r_s\right)dW_s^j.$

To keep the exposition of the Ito expansion simple, set $d=q=1$; the same formula then reads

$\varphi\left(t_{n+1},X_{t_{n+1}},r_{t_{n+1}}\right)=\varphi\left(t_n,X_{t_n},r_{t_n}\right)+\int_{t_n}^{t_{n+1}}L^0\varphi\left(s,X_s,r_s\right)ds+\int_{t_n}^{t_{n+1}}L^1\varphi\left(s,X_s,r_s\right)dW_s.$

Applying the Ito formula again to $L^0\varphi\left(s,X_s,r_s\right)$ and $L^1\varphi\left(s,X_s,r_s\right)$ gives

$\varphi\left(t_{n+1},X_{t_{n+1}},r_{t_{n+1}}\right)=\varphi\left(t_n,X_{t_n},r_{t_n}\right)+$

$\int_{t_n}^{t_{n+1}}\left[L^0\varphi\left(t_n,X_{t_n},r_{t_n}\right)+\int_{t_n}^{s}L^0L^0\varphi\left(\tau,X_\tau,r_\tau\right)d\tau+\int_{t_n}^{s}L^1L^0\varphi\left(\tau,X_\tau,r_\tau\right)dW_\tau+\int_{t_n}^{s}L_e^{-1}L^0\varphi\left(\tau,X_\tau,r_\tau\right)d\tilde{N}_\tau\right]ds+$

$\int_{t_n}^{t_{n+1}}\left[L^1\varphi\left(t_n,X_{t_n},r_{t_n}\right)+\int_{t_n}^{s}L^0L^1\varphi\left(\tau,X_\tau,r_\tau\right)d\tau+\int_{t_n}^{s}L^1L^1\varphi\left(\tau,X_\tau,r_\tau\right)dW_\tau+\int_{t_n}^{s}L_e^{-1}L^1\varphi\left(\tau,X_\tau,r_\tau\right)d\tilde{N}_\tau\right]dW_s.$

If $\varphi\left(s,X_s,r_s\right)$ is smooth enough, the multiple integrals above can be expanded further in the same way, which yields higher-order numerical schemes.
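As a quick sketch of how the schemes in the next section fall out of this expansion (a standard truncation step, written out here under the stated assumptions): take $\varphi(t,x,i)=x$, so that $L^0\varphi=f$, $L^1\varphi=g$ and $L_e^{-1}\varphi=0$, the switching term vanishing because $\sum_{j\in S}q_{ij}=0$. Keeping only the leading terms over $[t_n,t_{n+1}]$ gives the Euler scheme,

$X_{t_{n+1}}\approx X_{t_n}+f(t_n,X_{t_n},r_{t_n})\Delta t+g(t_n,X_{t_n},r_{t_n})\Delta W_n,$

while additionally keeping the $L^1L^1$ double Wiener integral and using

$\int_{t_n}^{t_{n+1}}\int_{t_n}^{s}dW_\tau\,dW_s=\frac{1}{2}\left[(\Delta W_n)^2-\Delta t\right]$

produces the Milstein correction $\frac{1}{2}L^1g\left[(\Delta W_n)^2-\Delta t\right]$.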

Three : Numerical schemes

1: Definition of weak convergence

Given a time partition $0=t_0<t_1<\ldots<t_{N-1}<t_N=T$, let $\Delta t=t_{n+1}-t_n$ and $\Delta W_n=W_{t_{n+1}}-W_{t_n}$ $(n=0,1,\ldots,N-1)$. If there exist a smooth function $G\in C_p^{2(\beta+1)}$ and a positive constant $C$ such that

$\left|E[G(X_T)-G(X^N)]\right|\le C\left(1+E[|X_0|^b]\right)(\Delta t)^\beta,$

where $\beta,b\in N$, then $X^N$ converges weakly to $X_T$ with order $\beta$. If, instead of taking the absolute value of the expected error, we bound the expectation of the absolute error at each point of the trajectory, we obtain the strong convergence criterion; strong convergence of a given order implies weak convergence of at least the same order (for Lipschitz test functions). Higher-order strongly convergent schemes are more complicated and require more stringent conditions, and their orders typically increase in multiples of 0.5.
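This definition is also how the order is read off numerically later on: taking logarithms of the error bound gives

$\log\left|E[G(X_T)-G(X^N)]\right|\approx\beta\log(\Delta t)+\text{const},$

so a least-squares line fitted to the log errors against $\log(\Delta t)=\log(1/N)$ has slope $\beta$; this is what np.polyfit(log_time, log_error_fin, 1) computes in the simulation code of the experimental section below.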

2: Weak first-order convergent schemes

Write $f_k^i=f(t_n,X^n,i)$ and $g_k^i=g(t_n,X^n,i)$. Given the initial value $X_0$, for $0\le n\le N-1$ and each component $k$ of the Ito process, we have the following schemes (a minimal Python sketch follows the list):

1: Euler scheme (weak order 1.0, strong order 0.5): $X_k^{n+1,i}=X_k^n+f_k^i\Delta t+g_k^i\Delta W_n$
2: Milstein scheme (weak and strong order 1.0): $X_k^{n+1,i}=X_k^n+f_k^i\Delta t+g_k^i\Delta W_n+\frac{1}{2}L^1g_k^i\left((\Delta W_n)^2-\Delta t\right)$
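A minimal sketch of one step of each scheme, assuming the scalar case $d=q=1$ with the switching coefficients $f(t,x,i)=a_ix$, $g(t,x,i)=b_ix$ used in the experiments, for which $L^1g=b_i^2x$; the function names euler_step and milstein_step and the fixed regime path are illustrative:

import numpy as np

def euler_step(x, i, a, b, dt, dW):
    """One Euler step for dX = a_i X dt + b_i X dW in regime i (weak order 1.0)."""
    return x + a[i] * x * dt + b[i] * x * dW

def milstein_step(x, i, a, b, dt, dW):
    """One Milstein step; for g = b_i x the correction term is 0.5 * b_i^2 * x * (dW^2 - dt)."""
    return (x + a[i] * x * dt + b[i] * x * dW
              + 0.5 * b[i] ** 2 * x * (dW ** 2 - dt))

# Tiny usage example with the drift/diffusion coefficients used later in the article
a = [1.0, 0.8, 0.9]            # drift coefficients a_i, one per regime
b = [-0.1, 0.2, 0.05]          # diffusion coefficients b_i, one per regime
N, dt = 64, 1.0 / 64
rng = np.random.default_rng(1)
regimes = rng.integers(0, 3, size=N)   # stand-in for the simulated Markov chain r_t
x = 1.5
for n in range(N):
    dW = rng.normal(0.0, np.sqrt(dt))
    x = milstein_step(x, regimes[n], a, b, dt, dW)
print("X_T approx.:", x)

These two update rules correspond to the Euler and Milstein expressions evaluated inside the simulation code below.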
3: Higher-order convergent schemes

Higher-order schemes have a more complicated form and are harder to simulate and to prove convergent. For weak schemes of higher order, see the following paper.

Yang Li, Taitao Feng, Yaolei Wang, and Yifei Xin. A high order accurate and effective scheme for solving Markovian switching stochastic models[J]. Mathematics, 2021, 9(6), 588.

Four : Experimental simulation

  • Markov chain code
import matplotlib.pyplot as plt
import numpy as np
import time
plt.rcParams['font.sans-serif'] = ['SimHei']  # use the SimHei font so Chinese labels display correctly
plt.rcParams['axes.unicode_minus'] = False    # display the minus sign correctly on the axes

start_time = time.time()

def markov_m(days, delta):
    original_value = 1  # initial state
    r = np.array([[-1, 0.6, 0.4], [10, -5, 5], [3, 1, -4]])  # transition-rate matrix, chosen arbitrarily according to the definition
    p = r * delta  # one-step transition probabilities: p = I + r * delta
    for i in range(r.shape[1]):
        p[i, i] = 1 + p[i, i]

    # P = np.exp(delta * r) - 1
    # for i in range(r.shape[1]):
    #     P[i, i] = 1 + P[i, i]
    # print(P)  # the smaller delta is, the closer P is to p

    U = np.random.uniform(0, 1, days)
    q = np.zeros((1, days))

    for i in range(days):
        u = U[i]
        v_temp = original_value
        q[:, i] = v_temp
        original_value = 1
        s = 0
        while original_value <= p.shape[1] and u > s:
            s = s + p[v_temp - 1, original_value - 1]  # accumulate the transition probabilities of the current row
            original_value = original_value + 1        # move to the next column index
        original_value = original_value - 1  # indices started at 1, so step back by one to get the selected state
    return p, q.tolist()[0]
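A quick usage sketch of markov_m, assuming the imports above; the plotting calls are only an optional visual check and are not part of the article's pipeline:

# Example usage of markov_m: simulate a chain of length 64 with step 0.1
P, states = markov_m(64, 0.1)
print("one-step transition matrix:\n", P)
plt.step(range(len(states)), states, where='post')  # piecewise-constant state path
plt.xlabel('step')
plt.ylabel('state')
plt.title('Simulated Markov chain path')
plt.show()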
  • SDEwMS scheme simulation code
def schemes(a_ls, b_ls, names):

    time_ls = [16, 32, 64, 128, 256]
    tran_matx_ls = []
    state_vect_ls = []
    for i in time_ls:
        mres = markov_m(i, 0.1)  # call the Markov chain simulation function
        tran_matx = mres[0]
        state_vect = mres[1]
        print('\033[1;31mTransition probability matrix for time length %s:\033[0m' % i)
        print(tran_matx)
        tran_matx_ls.append(tran_matx)
        print('\033[1;34mState vector for time length %s:\033[0m' % i)
        print(state_vect)
        state_vect_ls.append(state_vect)

    logtimels = []
    logerrorls = []
    CRls = []
    msg = ''' 1: exp function  2: sin function  3: 2.5th power  4: identity '''
    print(msg)
    msg_ls = ['exp function', 'sin function', '2.5th power', 'identity']
    # initialization
    choose = int(input('Please choose the test function G(x): '))
    echoes = 500  # number of Monte Carlo repetitions
    error_ls = []
    value_matx = np.zeros((echoes, len(time_ls)))

    Euler = 'y + a * dt * y + b * y * dwt_temp'
    Milstein = Euler + ' + 0.5 * b * b * y * (dwt_temp * dwt_temp - dt)'
    schemes_ls = [Euler, Milstein]

    x0 = 1.5  # initial value of the true SDE solution
    for s, n in zip(schemes_ls, names):
        for e in range(echoes):
            for i in range(len(time_ls)):
                y = x0  # initial value of the numerical scheme
                N = time_ls[i]
                dt = 1 / N
                yt = []
                dwt = np.random.normal(0, 1, N) * np.sqrt(dt)
                nowstate_vect = state_vect_ls[i]

                # compute the true solution
                v_ls = []
                vtemp = 0
                for j in range(N):  # loop over the time steps
                    a = a_ls[int(nowstate_vect[j] - 1)]  # drift coefficient of the current regime
                    b = b_ls[int(nowstate_vect[j] - 1)]  # diffusion coefficient of the current regime
                    vtemp = vtemp + (a - 0.5 * b * b) * dt + b * dwt[j]  # piecewise-constant coefficients: accumulate the exact log-solution increment
                    v_ls.append(vtemp)

                xT = ''  # placeholder
                finv = x0 * np.exp(v_ls[-1])
                if choose == 1:
                    xT = np.exp(finv)
                if choose == 2:
                    xT = np.sin(finv)
                if choose == 3:
                    xT = np.power(finv, 2.5)
                if choose == 4:
                    xT = finv

                # numerical scheme solution
                for j in range(N):
                    a = a_ls[int(nowstate_vect[j] - 1)]  # drift coefficient of the current regime
                    b = b_ls[int(nowstate_vect[j] - 1)]  # diffusion coefficient of the current regime
                    dwt_temp = dwt[j]
                    y = eval(s)
                    if choose == 1:
                        yt.append(np.exp(y))
                    if choose == 2:
                        yt.append(np.sin(y))
                    if choose == 3:
                        yt.append(np.power(y, 2.5))
                    if choose == 4:
                        yt.append(y)
                error_temp = yt[N - 1] - xT  # terminal difference between numerical and true solution
                error_ls.append(error_temp)
            value_matx[e, :] = error_ls
            error_ls = []
        error_fin = np.abs(sum(value_matx) / echoes)  # weak error: absolute value of the averaged errors
        log_time = np.log(1 / np.array(time_ls))
        log_error_fin = np.log(error_fin)
        CR = np.polyfit(log_time, log_error_fin, 1)  # first-order polynomial fit; the slope is the convergence order
        print('\033[1;36m{0:*^80}\033[0m'.format(' Results '))
        print('Absolute value of the expected error for scheme %s: %s' % (n, np.round(error_fin, 6)))
        print('Weak convergence order of scheme %s: %s' % (n, round(CR[0], 6)))
        logerrorls.append(np.round(log_error_fin, 6))
        logtimels.append(log_time)
        CRls.append(round(CR[0], 6))
    return logtimels, logerrorls, CRls, msg_ls[choose - 1]

statemu_ls = [1, 0.8, 0.9]        # drift coefficients, one per regime
statesigm_ls = [-0.1, 0.2, 0.05]  # diffusion coefficients, one per regime
scheme_names = ['Euler', 'Milstein']
R = schemes(statemu_ls, statesigm_ls, scheme_names)
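Note that no random seed is fixed above, so each run gives slightly different error estimates. A small optional tweak, if reproducible output is wanted, is to seed NumPy before the call:

np.random.seed(2022)  # optional: fix the seed so the Monte Carlo results repeat
R = schemes(statemu_ls, statesigm_ls, scheme_names)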

[Figure 1]

[Figure 2]

Figures 1 and 2 above show the results of the simulation experiment, which are consistent with the theoretical results.

  • Plotting code
marker_ls = ['-*', '-o']
def make_figure():
    plt.figure(figsize=(8, 6))
    for i, j, l, m in zip(R[0], R[1], scheme_names, marker_ls):
        plt.plot(i, j, m, label=l)  # log error against log step size for each scheme
    plt.plot(np.array([-4, -3]), np.array([-4, -3]) + 2.5, '--', label='slope=1')  # reference line of slope 1
    plt.xlabel('log(stepsize) = log(1/N)')
    plt.ylabel('log(|E(Y_T-X^N)|)')
    plt.title(R[3])
    plt.legend()
    plt.show()
make_figure()

[Figure 3: log-log error plot produced by make_figure()]

Five : Summary

SDEwMS is a generalization of the SDE and is more practical, but the corresponding theory of numerical solutions becomes more complex. In recent years, many researchers have also studied backward stochastic differential equations with Markov switching and made a number of theoretical breakthroughs.

Stochastic differential equations form a deep and broad research field. Whether forward or backward, and whether with switching, jumps or reflection, they are all worth studying. Interested readers are encouraged to explore more work in this area; this article is only intended as a starting point.
