User:Tom McGuire/NumericalLaplaceTransformInversion

From J Wiki

Numerical Laplace Transform Inversion

Classically, Laplace transform inversion is accomplished symbolically by partial fraction expansion of a transfer function. Once the partial fractions have been established, a lookup in a table of inverse Laplace transforms leads to an exact equation. This essay is concerned with numerical inversion, which uses approximation to simplify the calculations necessary to arrive at a time domain answer. There are some caveats when using this method:

  • The method is most accurate when the timeframe under study is short. There are ways to work around this restriction and the references listed below have some information regarding this.
  • The number of poles in the Padé approximant and transfer function denominator should be greater than the number of zeros in the numerator
  • The number of terms in the numerator should be reasonably close to the number of terms in the denominator

This algorithm is worked out semiformally in Vlach et al[1]. The authors provide a small program in FORTRAN for numerical Laplace transform inversion. The program suffers from "magic" constants that the authors provide in a table but do not provide an algorithm for their construction. Indeed, if you are writing a program in FORTRAN the manual insertion of constants into the program is less work than developing polynomial root finding procedures. The authors also indicate that their selection of the constants in the program will provide a broad range of solutions for most transfer functions a user would encounter.

The J language is a different story. Polynomial handling and array calculations are routine parts of the language. A few lines of J and all of the constants needed can be calculated rather quickly as part of the overall solution.

Padé Approximants

Padé approximants are an alternative way to represent a portion of the Taylor series estimate for a function. The Padé approximant is formed from the quotient of 2 polynomials whose coefficients are calculated from the Taylor series coefficients.[2][3][4][note 1] In the case of Laplace transform inversion we only need to estimate the exponential function (e^z). The exponential function has been well studied and closed forms to calculate the Padé coefficients exist. The coefficients are calculated directly without having to use the Taylor series and solve a system of simultaneous equations (i.e. it's built into the closed form).[note 2]

P_n(z) = Σ_{i=0}^{n} (n+m-i)! · C(n,i) · z^i

Q_m(z) = Σ_{i=0}^{m} (-1)^i · (n+m-i)! · C(m,i) · z^i [1]

Alternative but equivalent formulation of closed form[5]:

P_n(z) = Σ_{i=0}^{n} (n!/i!) · C(n+m-i, m) · z^i

Q_m(z) = -Σ_{i=0}^{m} (n!/i!) · C(n+m-i, n) · (-z)^i

NOTE: There appears to be an error in the second formulation for the polynomial Q. The minus sign that is placed at the head of the equation is incorrect. This should be:

Q_m(z) = Σ_{i=0}^{m} (n!/i!) · C(n+m-i, n) · (-z)^i

In this formulation the alternating negative/positive coefficients will be generated by the (-z)^i term.
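Both closed forms can be cross-checked with exact rational arithmetic outside of J. A minimal sketch in Python's standard library (padeP, padeQ and normalize are my own names, implementing the first closed form above):

```python
from fractions import Fraction
from math import comb, factorial

def padeP(n, m):
    # first closed form: p_i = (n+m-i)! * C(n,i), lowest order first
    return [Fraction(factorial(n + m - i) * comb(n, i)) for i in range(n + 1)]

def padeQ(n, m):
    # q_i = (-1)^i * (n+m-i)! * C(m,i)
    return [Fraction((-1) ** i * factorial(n + m - i) * comb(m, i))
            for i in range(m + 1)]

def normalize(c):
    # divide through by the constant term to match the usual Pade table form
    return [x / c[0] for x in c]

assert normalize(padeP(2, 4)) == [1, Fraction(1, 3), Fraction(1, 30)]
assert normalize(padeQ(2, 4)) == [1, Fraction(-2, 3), Fraction(1, 5),
                                  Fraction(-1, 30), Fraction(1, 360)]
```

The assertions reproduce the normalized N=2, M=4 table entries 1 1r3 1r30 and 1 _2r3 1r5 _1r30 1r360 that the J sessions produce.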

J Implementation

I find the second closed form above more intuitive for a direct translation into J. J has polynomial functions that allow a vector of coefficients to represent the polynomial. The 'z' term will not get reproduced since it is implicit in J's polynomial representation. NOTE: J polynomials are lowest order coefficient first, so 1 + 2z + 3z^2 + z^3 is represented as 1 2 3 1 in J vector form.

   Pn =:  3 :  '(!(0{y))*((1{y)!(+/y)-i.1 + 0{y)%(!i.1 + 0{y)'
   Pn  2  4
30 10 1

   Qm =: 3 : '(!(0{y))*(_1^i.1+1{y)*((0{y)!(+/y)-i.1+1{y)%(!i.1+1{y)'
   Qm 2 4
30 _20 6 _1 0.0833333

NB. The Padé table has coefficients in rational form
NB. to get that in J do the following:
   Pn 2r1 4r1
30 10 1
   Qm 2r1 4r1
30 _20 6 _1 1r12

   NB. This produces polynomials but not the rational form you would see in a Padé table.
   NB. Since these are used together to form a fraction, notice we can divide each
   NB. polynomial by its first term; this will normalize the polynomials and give the
   NB. form found in a Padé table.
   NB. A fork will help speed the processing, and now the coefficients match the normal form
   (] % {.) Pn  2r1  4r1 NB. (original polynomial array  divide(%)   first term of polynomial array)
1 1r3 1r30
   (] % {.) Qm  2r1  4r1
1 _2r3 1r5 _1r30 1r360

   NB. in case you were wondering the first closed form can be implemented as follows:
   pn  =: 3 : '(!(+/y)-i.1+0{y)*(i.1+0{y)!(0{y)'
   qm  =: 3 : '(_1^i.1+1{y)*(!(+/y)-i.1+1{y)*(i.1+1{y)!(1{y)'

   pn 2r1  4r1
720 240 24
   qm 2r1  4r1
720 _480 144 _24 2

   NB. different coefficients but if you normalize them
   (] % {.) pn 2r1  4r1
1 1r3 1r30
   (] % {.) qm 2r1  4r1
1 _2r3 1r5 _1r30 1r360
   NB. same normalized form

NOTE: normalization is possible because the numerator polynomial and denominator polynomial have the same low order coefficient. The operation is just multiplication by 1 when the polynomials are placed in quotient form, since the same factor is being applied to both.
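This invariance can be demonstrated with exact rational arithmetic; a minimal Python sketch (polyval is my own helper, coefficients taken from the pn/qm session above):

```python
from fractions import Fraction

# unnormalized first-closed-form coefficients for N=2, M=4 (lowest order first)
p = [Fraction(c) for c in (720, 240, 24)]
q = [Fraction(c) for c in (720, -480, 144, -24, 2)]

def polyval(c, z):
    return sum(ck * z ** k for k, ck in enumerate(c))

z = Fraction(1, 10)
before = polyval(p, z) / polyval(q, z)
# both polynomials share the constant term 720, so normalizing divides
# numerator and denominator by the same factor
after = polyval([c / p[0] for c in p], z) / polyval([c / q[0] for c in q], z)
assert before == after  # the quotient is unchanged
```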

A minimal amount of work and what do we have?

   NB. extend print precision for number display
   (9!:11) 16

   NB. remember the Padé approximant in this case  approximates ^
   NB. creating a fork for the ratio of polynomials we can
   NB. evaluate how good the approximation is:
   (((1 1r3 1r30)&p.) % (1 _2r3 1r5 _1r30 1r360)&p.)  0.1
1.105170918074142
   ^ 0.1
1.105170918075648

This particular approximant agrees with the J exponential operator '^' to about 11 significant digits.
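The same evaluation can be repeated in Python as a quick sanity check (a sketch using float arithmetic, with the normalized coefficients copied from the sessions above):

```python
import math
from fractions import Fraction

P = [Fraction(1), Fraction(1, 3), Fraction(1, 30)]
Q = [Fraction(1), Fraction(-2, 3), Fraction(1, 5),
     Fraction(-1, 30), Fraction(1, 360)]

def polyval(c, z):
    # evaluate a lowest-order-first coefficient list at z
    return sum(float(ck) * z ** k for k, ck in enumerate(c))

approx = polyval(P, 0.1) / polyval(Q, 0.1)
assert abs(approx - math.exp(0.1)) < 1e-10
```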


Poles and Residues

The book by Vlach and Singhal[1] provides a table of constants in FORTRAN double-precision complex format. These constants are obtained from the poles and residues of the Padé approximant. The poles are relatively easy to calculate in J using the polynomial functionality built into the language: they are the roots of the denominator polynomial of the Padé approximant.

Calculation of Poles

   NB. from the previous calculations the denominator was calculated as follows:
   (] % {.) qm 2r1  4r1
1 _2r3 1r5 _1r30 1r360

   NB. using the p. operator the roots are easily obtained
   p. (] % {.) qm 2r1  4r1
┌─────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│1r360│2.220980032989808j4.160391445506931 2.220980032989808j_4.160391445506931 3.779019967010193j1.380176524272845 3.779019967010193j_1.380176524272845│
└─────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘

Pole calculation via companion matrix[6]

The companion matrix is a sparse matrix created out of the coefficients of a polynomial. The characteristic polynomial of this matrix is the polynomial itself, and therefore its eigenvalues are the roots of the polynomial. J integrates LAPACK2 as an addon library. There is an example script dgeev.ijs within the examples directory of that addon that will calculate the eigenvalues of a matrix. There are a couple of ways to structure this matrix; I show one form here, found in the reference above. The reason I like it is that we have already had to normalize the polynomials being used to put them in rational form. This particular form of the companion matrix does not assume the high order coefficient is a 1: that coefficient is divided into all the other coefficients, which works for a more general polynomial form.

So for a polynomial

p(z) = a_0 + a_1·z + a_2·z^2 + ... + a_n·z^n

The companion matrix takes the form (shown here for n = 4):

0 0 0 -a_0/a_4
1 0 0 -a_1/a_4
0 1 0 -a_2/a_4
0 0 1 -a_3/a_4

A few J operators can generate this matrix, then use the do_dgeev routine to calculate the eigenvalues (ie. roots).
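The same construction is easy to mirror outside of J. A Python sketch with exact rationals (companion is my own function name), whose entries reproduce the _360 240 _72 12 column that falls out of the normalized denominator:

```python
from fractions import Fraction

def companion(c):
    # c: coefficients lowest order first, length n+1 for degree n;
    # builds the n x n companion matrix without assuming the
    # leading coefficient is 1
    n = len(c) - 1
    A = [[Fraction(0)] * n for _ in range(n)]
    for i in range(1, n):
        A[i][i - 1] = Fraction(1)      # ones on the subdiagonal
    for i in range(n):
        A[i][n - 1] = -c[i] / c[n]     # last column: -a_i / a_n
    return A

Q = [Fraction(1), Fraction(-2, 3), Fraction(1, 5),
     Fraction(-1, 30), Fraction(1, 360)]
assert companion(Q) == [[0, 0, 0, -360],
                        [1, 0, 0, 240],
                        [0, 1, 0, -72],
                        [0, 0, 1, 12]]
```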

   require 'math/lapack2'
   load '~addons/math/lapack2/example/dgeev.ijs'
0.799482 _0.0994125j0.400792 _0.0994125j_0.400792 _0.100657
┌────────────────────────────────────────┬───────────────────────────────────────┐
│ _0.624471  0.533023         0  0.664141│ _0.655089 _0.193302  0.254632 0.125333│
│ _0.599489 _0.266616  0.404136 _0.106815│ _0.523629  0.251857 _0.522405 0.332022│
│  0.499916  0.345526  0.315285  0.729325│  0.536218 0.0971825 _0.308384 0.593838│
│_0.0270862 _0.254081 _0.445113  0.124866│_0.0956068  0.675954         0 0.722087│
└────────────────────────────────────────┴───────────────────────────────────────┘

NB. Loading the example script automatically runs a sample calculation of eigenvalues;
NB. just ignore the output above. In the interest of saving time I did not extract the
NB. routine do_dgeev to use separately.

   NB. padenorm - a function to normalize a polynomial into usual pade table form
   NB. this presumes that the 2 polynomials of interest have the same
   NB. initial coefficient.
   padenorm =: ] % {.
   padenorm Pn 2r1 4r1
1 1r3 1r30

NB. Let's store the 2 polynomials for the pade approximant in a boxed structure
NB. this way they are paired together, store them in a variable 'mypade'

   NB. mypade - store the pade polynomials in a boxed structure so they don't have
   NB. to be recalculated
   ] mypade =: (padenorm Pn 2r1 4r1);padenorm Qm 2r1 4r1
┌──────────┬──────────────────────┐
│1 1r3 1r30│1 _2r3 1r5 _1r30 1r360│
└──────────┴──────────────────────┘

NB. Building the companion matrix
NB. 1. For a polynomial with n coefficients the companion matrix is (n-1) x (n-1).
NB. If we start with an identity matrix of size n-2 we can build on that to produce
NB. the companion matrix.
NB. Get the quotient portion of the mypade structure (drop the first part, unbox, and
NB. use the ',' operator to make sure it is a simple array)
   ,>}.mypade
1 _2r3 1r5 _1r30 1r360
NB. get the size
   #,>}.mypade
5

NB. Need an identity matrix of size n-2 in this case 3
NB. I chose the outer product idiom =/~ to generate this
   =/~ i. 3
1 0 0
0 1 0
0 0 1
   NB. tack on 3 0s on top of that
   (3$0), =/~ i. 3
0 0 0
1 0 0
0 1 0
0 0 1
   NB. divide the first n-1 coefficients by the last coefficient
   NB. and splice that onto the side
   ((3$0), =/~ i.3),. {{-( _1&}.y) % (<:#y){y}} ,>}.mypade
0 0 0 _360
1 0 0  240
0 1 0  _72
0 0 1   12
   NB. There you have the form of the companion matrix

   NB. putting this all together in a generalized form
   NB. this was done in 2 steps in the function for clarity
   companion =: 3 : 0
C =. ((<:<:#y)$0), =/~ i. <:<: #y NB. identity matrix with 0s on top
C,. -(_1&}. y) % (<:#y){ y        NB. tack on the polynomial values
)

   NB. Try out the function
   ]Cp =: companion ,>}.mypade 
0 0 0 _360
1 0 0  240
0 1 0  _72
0 0 1   12

   NB. find the eigenvalues, do_dgeev returns a boxed structure of eigenvalues;right eigenvector; left ev
   'ev er el' =: do_dgeev Cp
   ev
2.220980032989814j4.160391445506932 2.220980032989814j_4.160391445506932 3.779019967010191j1.380176524272848 3.779019967010191j_1.380176524272848

   NB. compare to J polynomial (p.) function
   }. p. ,>}.mypade
┌─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│2.220980032989808j4.160391445506931 2.220980032989808j_4.160391445506931 3.779019967010193j1.380176524272845 3.779019967010193j_1.380176524272845│
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘

   NB. run one of the roots through the polynomial and see how close to zero it is
   (,>}.mypade)p. 0{ev
1.77635683940025e_15j_1.221245327087672e_15

   NB. fairly close to zero; not a bad estimate
   NB. compare to first root from the J polynomial function (p.)
   ]firstroot =: {. ,>}. p. ,>}.mypade
2.220980032989808j4.160391445506931
   (,>}.mypade)p. firstroot
_2.220446049250313e_16j_5.551115123125783e_16

   NB. the base J polynomial function provides slightly more accurate root finding

J's built-in polynomial handling provides more than adequate root finding functionality when compared to a more complicated methodology relying on LAPACK2 routines.
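If neither p. nor LAPACK is at hand, a simultaneous-iteration root finder is only a few lines in any language. A sketch of the Durand-Kerner method in Python (dk_roots is my own name; this is not the algorithm p. uses internally), checked against the quartic denominator:

```python
import cmath

def polyval(c, z):
    # coefficients lowest order first
    return sum(ck * z ** k for k, ck in enumerate(c))

def dk_roots(c, iters=200):
    # Durand-Kerner: refine all n root estimates together
    n = len(c) - 1
    r0 = abs(c[0] / c[-1]) ** (1.0 / n)   # geometric-mean estimate of root size
    roots = [r0 * cmath.exp(2j * cmath.pi * (k + 0.25) / n) for k in range(n)]
    for _ in range(iters):
        for i in range(n):
            denom = c[-1]
            for j in range(n):
                if j != i:
                    denom *= roots[i] - roots[j]
            roots[i] -= polyval(c, roots[i]) / denom
    return roots

Q = [1, -2/3, 1/5, -1/30, 1/360]
roots = dk_roots(Q)
# all four estimates should nearly zero the polynomial
assert max(abs(polyval(Q, r)) for r in roots) < 1e-10
# two conjugate pairs with real parts near 2.22098 and 3.77902
assert sorted(round(r.real, 5) for r in roots) == [2.22098, 2.22098,
                                                   3.77902, 3.77902]
```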

Calculation of Residues

Residues are based on partial fraction expansion of polynomial quotients. The classic method is to factor out each pole term and evaluate the remaining equation at the value of the pole. An easier to understand way to obtain a residue is to evaluate the quotient at 2 data points near the particular pole; the residue can then be interpolated from the answers. This particular method can be found in Gonnet et al.[2]

   NB. set a tolerance variable to use for this calculation
   tol =: 1e_7

   NB. Rnm - function to calculate the pade quotient given 2 boxed polynomials
   Rnm =: {{((0&{::x)&p. % (1&{::x)&p.)y}}

   NB. mypade - store the pade polynomials in a boxed structure so they don't have
   NB. to be recalculated
   ] mypade =: (padenorm Pn 2r1 4r1);padenorm Qm 2r1 4r1
┌──────────┬──────────────────────┐
│1 1r3 1r30│1 _2r3 1r5 _1r30 1r360│
└──────────┴──────────────────────┘

   NB. We only need the positive complex conjugate poles for this method
   NB. select them out and save them into poles

   NB. idxeven - helper function to generate even indexes which correspond to positive complex conjugates
   idxeven =: 13 : '2 * i. 2 %~ #y'

   NB. Now the poles of interest can be found easily
   ]mypoles =: (idxeven { ]) > 1{ (,p. >}.mypade)
2.220980032989808j4.160391445506931 3.779019967010193j1.380176524272845

   NB. residue - function calculates residue by interpolating using the tolerance variable
   NB. by not dividing by 2 we will get 2x the actual residue which is needed for the 
   NB. inverse transform method. See note 2 below for some background
   residue =: 4 : 0
tol * (x Rnm (y+tol)) - x Rnm y - tol
)

   NB. now the residues can be easily calculated
   ]myresidue =: mypade residue mypoles
_2.256958704076946j11.10883168058188 2.256958564186709j_39.63308717551342
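The interpolation trick is easy to replicate outside J as a sanity check. A Python sketch with the pole and polynomials hardcoded from the sessions above (R plays the role of Rnm):

```python
P = [1, 1/3, 1/30]
Q = [1, -2/3, 1/5, -1/30, 1/360]

def polyval(c, z):
    return sum(ck * z ** k for k, ck in enumerate(c))

def R(z):
    return polyval(P, z) / polyval(Q, z)

tol = 1e-7
pole = 2.220980032989808 + 4.160391445506931j
# near a simple pole R(z) ~ K/(z - pole), so R(pole +/- tol) ~ +/- K/tol
# and tol * (R(pole+tol) - R(pole-tol)) ~ 2K, the doubled residue
K2 = tol * (R(pole + tol) - R(pole - tol))
assert abs(K2 - (-2.2569587 + 11.1088317j)) < 1e-4
```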

Numerical Laplace Transform Inverse

The algorithm here is from Vlach et al[1] paraphrased for a J implementation.

  • create a function Vs that represents the transfer function of interest
  • Decide on an N and M for the Padé approximant
  • calculate poles and residues
  • divide each pole by time and use it as input to Vs
    • this step is due to the fact that the inverse Laplace transform integrates against e^{st}. I didn't go into the entire theory, but a change of variables was necessary to use the Padé approximants: setting z = st allows us to estimate e^{st} with the approximant in z.
    • now dividing by time puts the calculation back into an s-domain framework
  • Multiply each 'Vs poles%t' by the corresponding residue and sum
  • take the real part and divide by -t

   NB. Re - function to get the Real part of a complex number
   Re=: 9&o.

   NB. Vs will be the step function 1/s represented as follows:
   Vs =: {{%y}}

   NB. Refresher:  poles and residues calculated before
   mypoles
2.220980032989808j4.160391445506931 3.779019967010193j1.380176524272845
   myresidue
_2.256958704076946j11.10883168058188 2.256958564186709j_39.63308717551342

   NB. time stepped by 0.5 increments. Because we divide by t we cannot start at 0
   ]t =: 0.5 * 1 + i.10
0.5 1 1.5 2 2.5 3 3.5 4 4.5 5

   NB. do an outer product with the divide operator. This divides each pole by each time
   NB. the answer is a table 2 x 10 where each column is the corresponding pole divided by
   NB. the corresponding time
   mypoles %/ t
4.441960065979615j8.320782891013861 2.220980032989808j4.160391445506931  1.480653355326538j2.773594297004621  1.110490016494904j2.080195722753465 0.8883920131959231j1.664156578202772  0.7403266776632692j1.38679714850231 0.6345657237113737j1.188683270144837...
 7.558039934020385j2.76035304854569 3.779019967010193j1.380176524272845 2.519346644673462j0.9201176828485633 1.889509983505096j0.6900882621364225  1.511607986804077j0.552070609709138 1.259673322336731j0.4600588414242817 1.079719990574341j0.3943361497922414...

   NB. Have Vs operate on all these values
   Vs mypoles %/ t
0.04992849223165818j_0.09352721270889659 0.09985698446331635j_0.1870544254177932 0.1497854766949746j_0.2805816381266898 0.1997139689266327j_0.3741088508355864  0.2496424611582909j_0.467636063544483 0.2995709533899491j_0.5611632762533796 0.3494994456216073j...
  0.1167381744350085j_0.0426352041662109  0.233476348870017j_0.08527040833242179 0.3502145233050255j_0.1279056124986327  0.466952697740034j_0.1705408166648436 0.5836908721750426j_0.2131760208310545  0.700429046610051j_0.2558112249972653 0.8171672210450595j...

   NB. Now the corresponding residues need to be multiplied and the answers summed only for each time increment
   NB. sounds like a matrix multiply and it is
   myresidue +/ .  * Vs mypoles %/ t
_0.5000000224932039j_3.957185860213632 _1.000000044986408j_7.914371720427265 _1.50000006747961j_11.8715575806409 _2.000000089972815j_15.82874344085453 _2.50000011246602j_19.78592930106816 _3.000000134959221j_23.74311516128179 _3.500000157452424j_27.7003010...

   NB. now take the real portion
   NB. divide by -t and you see we get a time simulation of the unit step function
   (-t) %~ Re myresidue +/ .  * Vs mypoles %/ t
1.000000044986408 1.000000044986408 1.000000044986407 1.000000044986408 1.000000044986408 1.000000044986407 1.000000044986407 1.000000044986408 1.000000044986407 1.000000044986408

This can all be combined into one function:

   NB. InvLaplace - numerical Laplace transform inversion
   NB. usage: (poles; residue) InvLaplace <time_array>
   InvLaplace =: 4 : 0
'p r' =. x
(-y) %~ Re r +/ . * Vs p %/ y
)

   NB. using mypoles, myresidue and t from the last section
   (mypoles;myresidue) InvLaplace t
1.000000044986408 1.000000044986408 1.000000044986407 1.000000044986408 1.000000044986408 1.000000044986407 1.000000044986407 1.000000044986408 1.000000044986407 1.000000044986408
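As a cross-language check, the combined function translates almost line for line into Python. This sketch hardcodes the poles and doubled residues from the sessions above (inv_laplace is my own name):

```python
def inv_laplace(poles, residues, Vs, ts):
    # v(t) = -(1/t) * Re( sum_i K_i * Vs(z_i / t) )
    out = []
    for t in ts:
        acc = sum(K * Vs(p / t) for p, K in zip(poles, residues))
        out.append(-acc.real / t)
    return out

poles = [2.220980032989808 + 4.160391445506931j,
         3.779019967010193 + 1.380176524272845j]
residues = [-2.256958704076946 + 11.10883168058188j,
            2.256958564186709 - 39.63308717551342j]

ts = [0.5 * (k + 1) for k in range(10)]
v = inv_laplace(poles, residues, lambda s: 1 / s, ts)  # Vs = 1/s, the unit step
assert all(abs(x - 1) < 1e-6 for x in v)
```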

Better Precision and Further Examples

Now let's use higher degree polynomials for better accuracy and recalculate the step function. It turns out that the interpolation method of calculating residues is not accurate enough. Luckily we don't have to turn to full partial fraction expansion or synthetic division to calculate the residues. I found the following equation that is good for functions with simple poles:[7]

K_i = P(z_i) / Q'(z_i)

It turns out that this method has better accuracy if the coefficients are not too large. The Padé approximant polynomials have fractional coefficients so this consideration for accuracy is met.

Luckily in J there is an operator for taking the derivative of a polynomial, 'p..'. Placing that into the structure of 'Rnm' provides a way to calculate Q'. Creating RnmPrime is just a matter of adding the polynomial derivative operator to this function.

NB. Recall Rnm definition from above
   Rnm =: {{((0&{::x)&p. % (1{::x)&p.)y}}

NB. RnmPrime - the new function to calculate residues 
   RnmPrime =: {{((0&{::x)&p. % (p.. 1&{::x)&p.)y}}

NB. could it be that easy?
   mypade&RnmPrime mypoles 
_1.128479372209071j5.554415818937955 1.128479372209065j_19.81654350025085

NB. to match the values in Vlach[1] they need to be doubled.
NB. This is due to some algebra where we double up on the poles of interest and
NB. ignore the complex conjugate poles, because the time domain solution selects
NB. only the real values that are calculated. See note 2 below
   2 * mypade&RnmPrime mypoles 
_2.256958744418142j11.10883163787591 2.25695874441813j_39.6330870005017

NB. Indeed the values match up to the last 3 places. If you don't have the book
NB. you can take my word for it
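The doubled-residue calculation is easy to confirm in Python (a sketch in which polyder plays the role of J's p.., with the pole hardcoded from the sessions above):

```python
P = [1, 1/3, 1/30]
Q = [1, -2/3, 1/5, -1/30, 1/360]

def polyval(c, z):
    return sum(ck * z ** k for k, ck in enumerate(c))

def polyder(c):
    # derivative of a lowest-order-first coefficient list
    return [k * ck for k, ck in enumerate(c)][1:]

pole = 2.220980032989808 + 4.160391445506931j
# doubled residue at a simple pole: K' = 2 * P(z) / Q'(z)
K = 2 * polyval(P, pole) / polyval(polyder(Q), pole)
assert abs(K - (-2.256958744418142 + 11.10883163787591j)) < 1e-9
```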
   NB. create a pade approximant N=8 M=10
   ]mypade =: (padenorm@Pn ; padenorm@Qm)8r1 10r1
┌──────────────────────────────────────────────────────────────────┬────────────────────────────────────────────────────────────────────────────────────────────┐
│1 4r9 14r153 7r612 7r7344 1r18360 1r477360 1r20049120 1r1764322560│1 _5r9 5r34 _5r204 7r2448 _1r4080 1r63648 _1r1336608 1r39207168 _1r1764322560 1r158789030400│
└──────────────────────────────────────────────────────────────────┴────────────────────────────────────────────────────────────────────────────────────────────┘
   ]mypoles =: (idxeven { ]) > 1{ (,p. >}.mypade)
4.234522494796983j14.95704378128165 7.781146264464085j11.36889164904994 9.933383722176217j8.033106334268831 11.2208537793914j4.792964167568913 11.83009373917017j1.593753005880887
   
   NB. there should be 5 poles of interest
   $mypoles
5
   
   NB. calculate the new residues
   ]Kprime =: 2 * mypade&RnmPrime mypoles 
132.1659412474699j17.47674798877804 _2870.418161030508j1674.109484085805 14629.74025232861j_19181.80818498169 _28178.11171280948j74357.58237266065 16286.62368067727j_139074.711551626

   (mypoles;Kprime) InvLaplace t
0.9999999611313513 0.9999999611313513 0.9999999611314555 0.9999999611313513 0.9999999611312888 0.9999999611314555 0.9999999611310874 0.9999999611313513 0.9999999611310513 0.9999999611312888

This is the framework to obtain the numerical time domain response from a Laplace transformed function. In this implementation no magic constants are needed; the poles and residues are calculated. Essentially any size Padé approximant can be used within the other limits set out above for the method.

Conclusion

For the conclusion I will work the second example from Vlach.[1] All that is needed is a change in V(s). This example provides the step response for a uniform RC line. The combined transfer function is:

V(s) = 1 / (s · cosh √(s·R0C0))

   NB. R0C0 = 1 for this example
   NB. cosh defined to improve readability 
   cosh =: 6&o.

   NB. new Laplace domain transfer function
   Vs =: {{1 % y * cosh y^0.5}}

   NB. new time steps
   t =: 0.1 * 1 + i.22
   t
0.1 0.2 0.3 0.4 0.5 0.6000000000000001 0.7000000000000001 0.8 0.9 1 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2 2.1 2.2

   NB. perform numerical inverse Laplace transform
   (mypoles;Kprime) InvLaplace t
0.05069488819035486 0.227683094430069 0.3931993507296531 0.5255406014871777 0.6292538428313463 0.7102986548028647 0.7736060744917949 0.8230602883906002 0.8616979247116142 0.89188984146341 0.9154866582055763 0.9339329038769978 0.9483561894379677 0.959636807...

   NB. This is more accurate than the book and matches the ideal output to 7 decimal places
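Pulling everything together, the RC line example can be reproduced end to end outside of J. This Python sketch keeps the same N=8, M=10 choice; the closed forms, the doubled P/Q' residues, and a Durand-Kerner root finder (my own substitute for p.) are carried over from the text, with my own helper names throughout:

```python
import cmath
from math import comb, factorial

def pade_P(n, m):
    # second closed form: p_i = (n!/i!) * C(n+m-i, m)
    return [factorial(n) * comb(n + m - i, m) / factorial(i)
            for i in range(n + 1)]

def pade_Q(n, m):
    # corrected second closed form; alternation supplied by (-1)^i
    return [(-1) ** i * factorial(n) * comb(n + m - i, n) / factorial(i)
            for i in range(m + 1)]

def polyval(c, z):
    return sum(ck * z ** k for k, ck in enumerate(c))

def polyder(c):
    return [k * ck for k, ck in enumerate(c)][1:]

def dk_roots(c, iters=500):
    # Durand-Kerner simultaneous root iteration
    n = len(c) - 1
    r0 = abs(c[0] / c[-1]) ** (1.0 / n)   # geometric-mean root-size estimate
    roots = [r0 * cmath.exp(2j * cmath.pi * (k + 0.25) / n) for k in range(n)]
    for _ in range(iters):
        for i in range(n):
            denom = c[-1]
            for j in range(n):
                if j != i:
                    denom *= roots[i] - roots[j]
            roots[i] -= polyval(c, roots[i]) / denom
    return roots

N, M = 8, 10
P, Q = pade_P(N, M), pade_Q(N, M)   # shared constant term: residues need no normalization
poles = [r for r in dk_roots(Q) if r.imag > 0]   # upper half plane only
K = [2 * polyval(P, r) / polyval(polyder(Q), r) for r in poles]

def Vs(s):
    # uniform RC line with R0C0 = 1
    return 1 / (s * cmath.cosh(cmath.sqrt(s)))

ts = [0.1 * (k + 1) for k in range(22)]
v = [-(sum(k * Vs(p / t) for p, k in zip(poles, K))).real / t for t in ts]
assert len(poles) == 5
assert abs(v[0] - 0.0506948882) < 1e-5   # first value from the session above
```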

Notes

  1. Generalized Padé Approximants Padé approximation can be generalized to approximate any function. The more generalized solution uses linear algebra to solve a set of simultaneous equations produced from the Taylor series. Various methods can be found in the references (4, 5, 6 below). A J formulation of these methods should be fairly straightforward as J has built-in matrix operations and SVD implementations. However, for the purposes of Laplace transform inversion we are only interested in the exponential function.
  2. Laplace Inverse Transform The Laplace inverse transform is an integral based on e^{st}. To approximate it a simple substitution is made: z = st, so now the exponential term becomes e^z.

     Now the Padé approximant is substituted for e^z, giving:

     v(t) ≈ (1/(2·π·j·t)) ∮ V(z/t) · P_n(z)/Q_m(z) dz

     The Vlach J et al text (pages 350 - 353) does some hand waving over residue calculus to get this into summation form:

     v(t) ≈ -(1/t) Σ_{i=1}^{M} K_i · V(z_i/t)

     From here the numerical approximation takes advantage of the fact that the time domain solution is a real time function and therefore only poles in the upper half of the complex plane are used (this cuts the number of poles and residues in half). They get to a final form of:

     v(t) ≈ -(1/t) Re Σ_{i=1}^{M/2} K_i' · V(z_i/t)

     where z_i are the poles chosen from the Padé approximant and K_i' = 2·K_i the residues corresponding to each pole

References

  1. Vlach, Jiří, and Kishore Singhal. Computer Methods for Circuit Analysis and Design. Van Nostrand, 1994. pages 349-356
  2. Gonnet, P., Güttel, S., & Trefethen, L. N. (2013). Robust Padé approximation via SVD. SIAM Review, 55(1), 101–117. https://doi.org/10.1137/110853236
  3. Shams Es-haghi S, Gardner DJ. A Critical Evaluation and Modification of the Padé–Laplace Method for Deconvolution of Viscoelastic Spectra. Molecules. 2021; 26(16):4838. https://doi.org/10.3390/molecules26164838
  4. Rational approximants generated by Padé approximation and. Tarun Kumar Sheel, Professor Memorial University of Newfoundland, personal website . (1997, March). Retrieved October 22, 2022, from https://www.math.mun.ca/~tksheel/docs/Sheel_MS_Thesis.pdf
  5. Driver, K. A., & Temme, N. M. (1999). Zero and pole distribution of diagonal padé approximants to the exponential function. Quaestiones Mathematicae, 22(1), 7–17. https://doi.org/10.1080/16073606.1999.9632055
  6. Pan, Victor Y. and Zheng, Ai-Long, "TR-2014006: New Algorithms in the Frobenius Matrix Algebra for Polynomial Root-Finding" (2014). CUNY Academic Works. https://academicworks.cuny.edu/gc_cs_tr/397
  7. Schlecht, Sebastian, and Robert Israel. “Stable Numerical Partial Fraction Decomposition for Large Scale Problem.” Mathematics Stack Exchange, https://math.stackexchange.com/questions/2319039/stable-numerical-partial-fraction-decomposition-for-large-scale-problem.
