NISP

NISP Commit Details

Date: 2015-03-30 22:21:26 (3 years 8 months ago)
Author: Michael Baudin
Commit: 332
Parents: 331
Message: Fixed typos.
Changes:
M /doc/polychaos/copyright.tex
M /doc/polychaos/1-orthopoly.tex
M /doc/polychaos/intropc-main.tex

File differences

doc/polychaos/copyright.tex
% Copyright (C) 2013 - Michael Baudin
% Copyright (C) 2013 - 2015 - Michael Baudin
%
% This file must be used under the terms of the
% Creative Commons Attribution-ShareAlike 3.0 Unported License :
\newpage
$\textrm{ }$
\vfill
Copyright \copyright{} 2013 - Michael Baudin
Copyright \copyright{} 2013 - 2015 - Michael Baudin
This file must be used under the terms of the
Creative Commons Attribution-ShareAlike 3.0 Unported License:
doc/polychaos/1-orthopoly.tex
\label{def-polymonic}
We denote by $\PP_n$ the set of real polynomials with
degree $n$, i.e. $p_n\in\PP_n$ if :
$$
\begin{eqnarray}
\label{def-poly}
p_n(x)=a_{n+1} x^n + a_n x^{n-1} + \ldots + a_1,
$$
\end{eqnarray}
for any $x\in I$, where $a_{n+1}$, $a_n$,..., $a_1$ are
real numbers.
In this case, the degree of the polynomial $p_n$ is $n$.
\begin{definition}
(\emph{Orthogonal polynomials})
The set of polynomials $\{P_n\}_{n\geq 0}$ are orthogonal polynomials if
$P_n$ is a polynomial of degree $n$ and:
The set of polynomials $\{p_n\}_{n\geq 0}$ are orthogonal polynomials if
$p_n$ is a polynomial of degree $n$ and:
\begin{eqnarray*}
\lwdotprod{P_i}{P_j}=0
\lwdotprod{p_i}{p_j}=0
\end{eqnarray*}
for $i\neq j$.
\end{definition}
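As a quick check of this definition (an illustration added here, not part of the committed file), take $I=[-1,1]$, $w(x)=1$ and the first two Legendre polynomials $p_0(x)=1$ and $p_1(x)=x$; then
$$
\lwdotprod{p_0}{p_1} = \int_{-1}^{1} 1 \cdot x \, dx = 0,
$$
so $p_0$ and $p_1$ are orthogonal.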
\begin{definition}
\label{def-orthopoly}
(\emph{Orthonormal polynomials})
The set of polynomials $\{P_n\}_{n\geq 0}$ are orthonormal polynomials if
$P_n$ is a polynomial of degree $n$ and:
The set of polynomials $\{p_n\}_{n\geq 0}$ are orthonormal polynomials if
$p_n$ is a polynomial of degree $n$ and:
\begin{eqnarray*}
\lwdotprod{P_i}{P_j}=\delta_{ij}
\lwdotprod{p_i}{p_j}=\delta_{ij}
\end{eqnarray*}
for $i\neq j$.
\end{definition}
\begin{proposition}
(\emph{Integral of orthogonal polynomials})
\label{prop-integpoly}
Let $\{P_n\}_{n\geq 0}$ be orthogonal polynomials.
Let $\{p_n\}_{n\geq 0}$ be orthogonal polynomials.
We have
\begin{eqnarray}
\label{eq-integpoly}
Moreover, for $n\geq 1$, we have
\begin{eqnarray}
\label{eq-integpoly2}
\int_I P_n(x) w(x) dx
\int_I p_n(x) w(x) dx
&=& 0
\end{eqnarray}
\end{proposition}
The equation \ref{eq-integpoly} is the straightforward consequence of \ref{eq-integpoly0}.
Moreover, for any $n\geq 1$, we have
\begin{eqnarray*}
\int_I P_n(x) w(x) dx
&=& \int_I P_0(x) P_n(x) w(x) dx \\
&=& \lwdotprod{P_0(x)}{P_n(x)} \\
\int_I p_n(x) w(x) dx
&=& \int_I P_0(x) p_n(x) w(x) dx \\
&=& \lwdotprod{P_0(x)}{p_n(x)} \\
&=& 0,
\end{eqnarray*}
by the orthogonality property.
\subsection{Orthogonal polynomials for probabilities}
\label{sec-probaorth}
In this section, we present the properties of $P_n(X)$,
In this section, we present the properties of $p_n(X)$,
when $X$ is a random variable associated with the orthogonal polynomials
$\{P_n\}_{n\geq 0}$.
$\{p_n\}_{n\geq 0}$.
\index{Distribution function}
\begin{proposition}
\begin{proposition}
(\emph{Expectation of orthogonal polynomials})
\label{prop-expecpoly}
Let $\{P_n\}_{n\geq 0}$ be orthogonal polynomials.
Let $\{p_n\}_{n\geq 0}$ be orthogonal polynomials.
Assume that $X$ is a random variable associated with the probability distribution
function $f$, derived from the weight function $w$.
We have
Moreover, for $n\geq 1$, we have
\begin{eqnarray}
\label{eq-expecpoly2}
E(P_n(X))&=& 0.
E(p_n(X))&=& 0.
\end{eqnarray}
\end{proposition}
since $f$ is a distribution function.
Moreover, for any $n\geq 1$, we have
\begin{eqnarray*}
E(P_n(X))
&=& \int_I P_n(x) f(x) dx \\
&=& \frac{1}{\int_I w(x) dx} \int_I P_n(x) w(x) dx \\
E(p_n(X))
&=& \int_I p_n(x) f(x) dx \\
&=& \frac{1}{\int_I w(x) dx} \int_I p_n(x) w(x) dx \\
&=& 0,
\end{eqnarray*}
where the first equation derives from the equation \ref{eq-unifw},
\begin{proposition}
(\emph{Variance of orthogonal polynomials})
\label{prop-varpoly}
Let $\{P_n\}_{n\geq 0}$ be orthogonal polynomials.
Let $\{p_n\}_{n\geq 0}$ be orthogonal polynomials.
Assume that $x$ is a random variable associated with the probability distribution
function $f$, derived from the weight function $w$.
We have
Moreover, for $n\geq 1$, we have
\begin{eqnarray}
\label{eq-varpoly2}
V(P_n(X))&=& \frac{\|P_n\|^2}{\int_I w(x) dx}.
V(p_n(X))&=& \frac{\|p_n\|^2}{\int_I w(x) dx}.
\end{eqnarray}
\end{proposition}
The equation \ref{eq-varpoly1} is implied by the fact that $P_0$ is a constant.
Moreover, for $n\geq 1$, we have:
\begin{eqnarray*}
V(P_n(X))
&=& E\left(\left(P_n(X)-E(P_n(X))\right)^2\right) \\
&=& E\left(P_n(X)^2\right) \\
&=& \int_I P_n(x)^2 f(x) dx \\
&=& \frac{1}{\int_I w(x)dx} \int_I P_n(x)^2 w(x) dx,
V(p_n(X))
&=& E\left(\left(p_n(X)-E(p_n(X))\right)^2\right) \\
&=& E\left(p_n(X)^2\right) \\
&=& \int_I p_n(x)^2 f(x) dx \\
&=& \frac{1}{\int_I w(x)dx} \int_I p_n(x)^2 w(x) dx,
\end{eqnarray*}
where the second equality is implied by the equation \ref{eq-expecpoly2}.
\end{proof}
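As a worked instance of the equation \ref{eq-varpoly2} (added here for illustration, not part of the committed file), consider the Hermite case listed in the table later in this section: $X\sim\mathcal{N}(0,1)$, $w(x)=\exp\left(-\frac{x^2}{2}\right)$, $\int_\RR w(x)dx=\sqrt{2\pi}$ and $\|p_n\|^2=\sqrt{2\pi}\, n!$, so that
$$
V(p_n(X)) = \frac{\sqrt{2\pi}\, n!}{\sqrt{2\pi}} = n!,
$$
which is the value $V(p_n)$ given in the table.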
\begin{proposition}
\label{prop-exppipj}
Let $\{P_n\}_{n\geq 0}$ be orthogonal polynomials.
Let $\{p_n\}_{n\geq 0}$ be orthogonal polynomials.
Assume that $x$ is a random variable associated with the probability distribution
function $f$, derived from the weight function $w$.
For two integers $i,j\geq 0$, we have
\begin{eqnarray}
\label{eq-exppipj1}
E(P_i(X)P_j(X))=0
E(p_i(X)p_j(X))=0
\end{eqnarray}
if $i\neq j$.
Moreover, if $i\geq 1$, then
\begin{eqnarray}
\label{eq-exppipj2}
E(P_i(X)^2) = V(P_i(X)).
E(p_i(X)^2) = V(p_i(X)).
\end{eqnarray}
\end{proposition}
\begin{proof}
We have
\begin{eqnarray*}
E(P_i(X)P_j(X))
&=& \int_I P_i(x) P_j(x) f(x) dx \\
&=& \frac{1}{\int_I w(x)dx} \int_I P_i(x) P_j(x) w(x) dx \\
&=& \frac{\lwdotprod{P_i}{P_j}}{\int_I w(x)dx}.
E(p_i(X)p_j(X))
&=& \int_I p_i(x) p_j(x) f(x) dx \\
&=& \frac{1}{\int_I w(x)dx} \int_I p_i(x) p_j(x) w(x) dx \\
&=& \frac{\lwdotprod{p_i}{p_j}}{\int_I w(x)dx}.
\end{eqnarray*}
If $i\neq j$, the orthogonality of the polynomials implies \ref{eq-exppipj1}.
If, on the other hand, we have $i=j\geq 1$, then
\begin{eqnarray*}
E(P_i(X)^2)
&=& \frac{\lwdotprod{P_i}{P_i}}{\int_I w(x)dx} \\
&=& \frac{\|P_i\|^2}{\int_I w(x)dx}.
E(p_i(X)^2)
&=& \frac{\lwdotprod{p_i}{p_i}}{\int_I w(x)dx} \\
&=& \frac{\|p_i\|^2}{\int_I w(x)dx}.
\end{eqnarray*}
We then use the equation \ref{eq-varpoly2}, which leads to
the equation \ref{eq-exppipj2}.
\begin{center}
\begin{tabular}{lllllll}
\hline
Distrib. & Support & Poly. & $w(x)$ & $f(x)$ & $\|P_n\|^2$ & $V(P_n)$\\
Distrib. & Support & Poly. & $w(x)$ & $f(x)$ & $\|p_n\|^2$ & $V(p_n)$\\
\hline
$\mathcal{N}(0,1)$ & $\RR$ & Hermite & $\exp\left(-\frac{x^2}{2}\right)$ & $\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{x^2}{2}\right)$ & $\sqrt{2\pi} n!$ & $n!$\\
$\mathcal{U}(-1,1)$ & $[-1,1]$ & Legendre & $1$ & $\frac{1}{2}$ & $\frac{2}{2n+1}$ & $\frac{1}{2n+1}$ \\
(\emph{Degree of exactness})
\label{def-degexact}
The degree of exactness of a quadrature rule is $d$ if
$d$ is the largest degree for which, for any polynomial
$p_d\in\PP_d$, we have
$d$ is the largest degree for which we have
\begin{eqnarray}
\label{eq-degexact}
I(p_d)=I_n(p_d).
I(p_d)=I_n(p_d),
\end{eqnarray}
for any polynomial $p_d\in\PP_d$.
\end{definition}
In other words, the degree of exactness of a quadrature rule
\label{def-maximalquad}
Let $m>0$ be and integer.
The quadrature rule \ref{eq-quadrule} has degree of exactness
$n+m$ is and only if
$n+m$ if and only if
\begin{enumerate}
\item the formula \ref{eq-quadrule} is interpolatory,
\item for any $p_{m-1} \in \PP_{m-1}$, we have
\end{proposition}
\begin{proof}
We are going to prove that $m\leq n+1$, then the proposition \ref{def-maximalquad}
We are going to prove that $m\leq n+1$.
Then the proposition \ref{def-maximalquad}
implies that the maximum degree of exactness is $n+m=n+n+1=2n+1$.
Let us prove this by contradiction : suppose that $m\geq n+2$.
Then the equality \ref{eq-maximalquad} is true for $m=n+2$, which
implies that the polynomial $\omega_{n+1}\in\PP_{m-1}$ satisfies
the equality :
Then the equality \ref{eq-maximalquad} is true for $m=n+2$.
Therefore $n+1=m-1$, which
implies that the polynomial $\omega_{n+1}\in\PP_{n+1}=\PP_{m-1}$
satisfies the equality :
$$
\int_I \omega_{n+1}^2(x)w(x)dx=0.
$$
Since the weight $w$ is by hypothesis continuous, nonnegative
and with a nonnegative integral, this implies that $\omega_{n+1}=0$,
which is impossible.
and with a nonnegative integral, the previous equation
implies that $\omega_{n+1}=0$, which is impossible.
\end{proof}
We have seen that the maximum possible value of $m$ is $n+1$.
Gaussian quadrature.
\end{definition}
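For instance (an illustration added here, not part of the committed file), the classical two point Gauss-Legendre rule on $[-1,1]$ with $w(x)=1$,
$$
\int_{-1}^{1} p(x)\, dx \approx p\left(-\frac{1}{\sqrt{3}}\right) + p\left(\frac{1}{\sqrt{3}}\right),
$$
is exact for every polynomial of degree at most $3$, while using only two nodes.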
In the following, we denote by $\{\pi_k\}_{i=1,...,n}$ the
In the following, we denote by $\{\pi_k\}_{k=1,...,n}$ the
monic node polynomials associated with the equation~\ref{eq-nodepolycond2}.
\begin{proposition}
(\emph{Properties of node polynomials})
\label{prop-propnodpoly}
\begin{enumerate}
\item The polynomials $\{\pi_k\}_{i=1,...,n}$ are orthogonals.
\item The polynomials $\{\pi_k\}_{k=1,...,n}$ are orthogonals.
\item They are linearly independent.
\item The polynomials $\{\pi_k\}_{i=1,...,n}$ are a basis of $\PP_n$.
\item The polynomials $\{\pi_k\}_{k=1,...,n}$ are a basis of $\PP_n$.
\end{enumerate}
\end{proposition}
\begin{proof}
\begin{enumerate}
\item Let us prove that the polynomials $\{\pi_k\}_{i=1,...,n}$ are orthogonals.
\item Let us prove that the polynomials $\{\pi_k\}_{k=1,...,n}$ are orthogonals.
Consider two polynomials $\pi_i$ and $\pi_j$ satisfying the equation
\ref{eq-nodepolycond2}, with $i\neq j$.
Without loss of generality, we can assume that $i>j$ (otherwise, we
$$
which concludes the proof.
\item The fact that the orthogonal polynomials $\{\pi_k\}_{i=1,...,n}$
\item The fact that the orthogonal polynomials $\{\pi_k\}_{k=1,...,n}$
are linearly independent is a result of linear algebra.
Indeed, for any real numbers $\alpha_1$,..., $\alpha_n$,
assume that
since the other terms in the sum are zero, by orthogonality.
Suppose $(\pi_i,\pi_i)= 0$. This would contradict the hypothesis
that the weight $w$ is a continuous, nonnegative function.
This implies that, necessarily, $(\pi_i,\pi_i)> 0$.
This implies that, necessarily, we have $(\pi_i,\pi_i)> 0$.
We combine this inequality with the previous equality,
and get $\alpha_i=0$, which shows that the orthogonal
polynomials are linearily independent.
\item Linear algebra shows that the polynomials $\{\pi_k\}_{i=1,...,n}$
\item Linear algebra shows that the polynomials $\{\pi_k\}_{k=1,...,n}$
are a basis of $\PP_n$.
Indeed, consider the following set of polynomials :
$$
1, \quad x, x^2, ..., x^n.
1, x, x^2, ..., x^n.
$$
The definition \ref{def-polymonic} shows that the previous polynomials
are a basis of $\PP_n$, since any degree $n$ polynomial is equal
to a linear combination of these $n$ polynomials.
Therefore, the dimension of the vector space $\PP_n$ is $n$.
However, the $n$ polynomials $\{\pi_k\}_{i=1,...,n}$ are linearly
However, the $n$ polynomials $\{\pi_k\}_{k=1,...,n}$ are linearly
independent, which implies that these are a basis of $\PP_n$, and
concludes the proof.
\end{enumerate}
\begin{proposition}
(\emph{Three term recurrence of monic orthogonal polynomials})
\label{prop-threeterm}
Assume $\{\pi_k\}_{i=-1,0,1,...,n}$ is a family of monic orthogonal
Assume $\{\pi_k\}_{k=-1,0,1,...,n}$ is a family of monic orthogonal
polynomials, with
$$
\pi_{-1}=0, \qquad \pi_0=1.
\end{proposition}
In the previous proposition, let us make clear that the
scalar product $(x\pi_k,\pi_k)$ in $\alpha_k$ implies the
scalar product $(x\pi_k,\pi_k)$ in $\alpha_k$ involves the
polynomial $x\pi_k(x)$, for any $x\in I$.
Notice that the proposition does not state the value of
lower or equal to $k$.
Indeed, both $\pi_{k+1}$ and $\pi_k$ are monic, so that the leading term $x^{k+1}$
cancels.
Since the orthogonal polynomials are a basis of $\PP_k$, we have
Since the orthogonal polynomials $\{\pi_0,\pi_1,...,\pi_k\}$
are a basis of $\PP_k$, we have
\begin{eqnarray}
\label{eq-threeterm4}
\pi_{k+1}-x\pi_k = -\alpha_k \pi_k - \beta_k \pi_{k-1}
for $k=0,1,...,n$, where $\alpha_k$, $\beta_k$ and $\gamma_{kj}$,
for $j=0,...,k-2$, are real numbers.
First, the scalar product of the equation \ref{eq-threeterm4} with $\pi_k$
\begin{enumerate}
\item The scalar product of the equation \ref{eq-threeterm4} with $\pi_k$
is :
$$
-(x\pi_k,\pi_k)=-\alpha_k(\pi_k,\pi_k),
$$
since the orthogonality of the polynomials implies that the other terms in the sum
are zero.
since the orthogonality of the polynomials implies that the other
terms in the sum are zero.
The previous equation immediately leads to the equation \ref{eq-threeterm2}.
Second, the scalar product of the equation \ref{eq-threeterm4} with $\pi_{k-1}$
\item The scalar product of the equation \ref{eq-threeterm4} with $\pi_{k-1}$
is :
\begin{eqnarray}
\label{eq-threeterm5}
&=& (\pi_k,x\pi_{k-1}). \label{eq-threeterm6}
\end{eqnarray}
Moreover, the polynomial $x\pi_{k-1}$ is a monic degree $k$ polynomial.
Hence, it can be decomposed as :
Hence, it can be decomposed as~:
$$
x\pi_{k-1} = \pi_k + c_k\pi_{k-1} + ... + c_1\pi_0,
$$
$$
(x\pi_k,\pi_{k-1}) = (\pi_k,\pi_k).
$$
We previous equation can be combined with \ref{eq-threeterm5}, which leads
The previous equation can be combined with \ref{eq-threeterm5}, which leads
to \ref{eq-threeterm3}.
Thirdly, in order to prove the equation \ref{eq-threeterm1}, we are going
\item In order to prove the equation \ref{eq-threeterm1}, we are going
to use the equation \ref{eq-threeterm4} and prove that, for any $j=0,1,...,k-2$,
we have $\gamma_{kj}=0$.
Using orthogonality, the scalar product of the equation \ref{eq-threeterm4} with $\pi_j$ is :
\gamma_{kj}(\pi_j,\pi_j)=0.
$$
However, we know that $(\pi_j,\pi_j)>0$, which concludes the proof.
\end{enumerate}
\end{proof}
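To make the three term recurrence concrete, the following minimal Scilab sketch (added here for illustration, not part of the committed file) evaluates the monic Legendre polynomial $\pi_n$ at the points x; the coefficient values $\alpha_k=0$ and $\beta_k=k^2/(4k^2-1)$ used for the Legendre case are an assumption of the sketch, not taken from the excerpt above.
// Sketch : evaluate the monic Legendre polynomial pi_n at the points x with
// pi_{k+1}(x) = (x - alpha_k) pi_k(x) - beta_k pi_{k-1}(x), pi_{-1} = 0, pi_0 = 1.
// Assumed Legendre coefficients : alpha_k = 0, beta_k = k^2/(4*k^2-1).
function p = monic_legendre ( n , x )
    pkm1 = zeros(x);   // pi_{-1}
    pk = ones(x);      // pi_0
    for k = 0 : n - 1
        if ( k == 0 ) then
            betak = 0;                      // multiplies pi_{-1} = 0, value irrelevant
        else
            betak = k^2 / (4*k^2 - 1);
        end
        pkp1 = x .* pk - betak * pkm1;      // alpha_k = 0 for Legendre
        pkm1 = pk;
        pk = pkp1;
    end
    p = pk;
endfunction
For example, monic_legendre(2, 0.5) returns $0.25-\frac{1}{3}\approx -0.0833$, the value of $\pi_2(x)=x^2-\frac{1}{3}$ at $x=0.5$.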
The three-term recurrence \ref{eq-threeterm1} is for monic orthogonal
\begin{proposition}
(\emph{Three term recurrence of orthogonal polynomials})
\label{prop-threetermgen}
Assume $\{p_k\}_{i=-1,0,1,...,n}$ is a family of orthogonal
Assume $\{p_k\}_{k=-1,0,1,...,n}$ is a family of orthogonal
polynomials, with
$$
p_{-1}=0, \qquad p_0=\frac{1}{\gamma_0},
\end{proof}
Finally, we can normalize the polynomials and get orthonormal
polynomials.
\begin{definition}
(\emph{Orthonormal polynomials})
The set of polynomials $\{p_k\}_{k\geq 0}$ are orthonormal polynomials if
$p_k$ is a polynomial of degree $k$ and:
\begin{eqnarray*}
\lwdotprod{p_i}{p_j}=0
\end{eqnarray*}
for $i\neq j$ and
\begin{eqnarray*}
\lwdotprod{p_i}{p_i}=1,
\end{eqnarray*}
for any integer $i$.
\end{definition}
polynomials as presented in the definition \ref{def-orthopoly}.
The following proposition is a straightforward consequence of
the proposition \ref{prop-threeterm}.
Notice that, as we normalize the polynomials, they are not monic
\begin{proposition}
(\emph{Three term recurrence of orthonormal polynomials})
\label{prop-threetermnorm}
Assume $\{p_k\}_{i=-1,0,1,...,n}$ is a family of orthonormal
Assume $\{p_k\}_{k=-1,0,1,...,n}$ is a family of orthonormal
polynomials, with
\begin{eqnarray}
p_{-1}=0, \qquad p_0=\frac{1}{\sqrt{\beta_0}},
\begin{proof}
We use the proposition \ref{prop-threetermgen} with
$\gamma_k=\|pi_k\|$.
$\gamma_k=\|p_k\|$.
The equation \ref{eq-threetermgen4} implies
$$
\|p_k\| = \frac{\|\pi_k\|}{\|\pi_k\|} = 1,
This is the equation used by the \scifun{chebyshev\_quadrature}
function.
The other method to compute uses the \scifun{chebyshev\_poly},
presented in the section \ref{sec-chebyshev}, which
The other method uses the \scifun{chebyshev\_poly} function
(see the section \ref{sec-chebyshev}), which
computes the Chebyshev polynomial.
More precisely, Scilab uses a data structure based on its
coefficients.
It is then straightforward to use the \scifun{roots} function,
which returns the roots of the polynomial, based on the
eigenvalues of the companion matrix \cite{Edelman1994}.
By definition, the companion matrix of the polynomial defined by
the equation \ref{def-poly} is :
\begin{eqnarray*}
C(p)=
\begin{pmatrix}
0 & 0 & \ldots & 0 & -a_1/a_{n+1} \\
1 & 0 & \ldots & 0 & -a_2/a_{n+1} \\
0 & 1 & \ldots & 0 & -a_3/a_{n+1} \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \ldots & 1 & -a_n/a_{n+1} \\
\end{pmatrix}.
\end{eqnarray*}
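As an illustration of this companion matrix approach (a sketch added here, not part of the committed file, with a made-up coefficient vector), the roots can be computed as the eigenvalues of $C(p)$ with the Scilab \scifun{spec} function:
// Sketch : roots of p(x) = a(1) + a(2)*x + ... + a(n+1)*x^n, computed as the
// eigenvalues of the companion matrix defined above.
a = [-1 0 2];                 // example coefficients : p(x) = -1 + 2*x^2
n = length(a) - 1;            // degree of p
C = [[zeros(1, n-1) ; eye(n-1, n-1)], -a(1:n)' / a(n+1)];
r = spec(C)                   // expected : +-1/sqrt(2), the roots of p
This is the computation which the \scifun{roots} function performs internally, as stated above.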
In the following script, we compute the roots of the
Chebyshev polynomials from degree 2 to 50 by the
doc/polychaos/intropc-main.tex
\begin{document}
\title{Introduction to polynomials chaos with NISP}
\date{Version 0.4 \\January 2013}
\author{Michael Baudin (EDF)\\Jean-Marc Martinez (CEA)}
\date{Version 0.5 \\March 2015}
\author{Michael Baudin}
\maketitle

