
Ordinary Differential Equations (From Mathematical Physics, by Sadri Hassani)

The main structure and methods follow the book; I have added some personal ideas and detailed derivations that may help in understanding it.

General Forms and Properties

  • The general form of an \(n\)th-order differential equation is: \[ G(x,y,y',\cdots,y^{(n)})=0 \]

  • Linearity and homogeneity

    When we say a differential equation is linear, we mean that it is linear in \(y\) and its derivatives, i.e. \[ \frac{\partial^2 G}{\partial (y^{(k)})^2}=0\quad \mbox{for} \quad k=0,1,\cdots,n \] or, if we can write \(G\) as an explicit function, linearity means: \[ p_0(x)y+p_1(x)\frac{\text{d}y}{\text{d}x}+\cdots+p_n (x)\frac{\text{d}^n y}{\text{d}x^n}=q(x) \] Defining the linear operator \[ \hat{L}=p_0+p_1\hat{D}+\cdots+p_n \hat{D}^n \] the LDE can be written as \[ \hat{L}y=q(x) \] The equation is said to be homogeneous when \(q(x)=0\).

  • Solutions: A solution of a differential equation is a single-variable function \(f\) such that \(G(x,f,f',\cdots,f^{(n)})=0\) for all \(x\) in the domain of definition of \(f\). If we put too many restrictions on it, the differential equation may not have a solution (for an \(n\)th-order LDE, the dimension of its solution space is \(n\); if we ask it to satisfy \(m>n\) conditions, a solution may not exist). To get a unique solution, we agree to make a solution as many times differentiable as plausible and to satisfy certain initial conditions. Commonly, the initial conditions are the specification of the function and its first \(n-1\) derivatives. This sort of restriction makes sense because of the following theorem:

    Implicit function theorem: Let \(G:\mathbb{R}^{n+1}\to\mathbb{R}\) have continuous partial derivatives up to order \(k\) in some neighborhood of a point \(P_0=(r_1,r_2,\cdots,r_{n+1})\) with \(G(P_0)=0\), and suppose \(\frac{\partial G}{\partial x_{n+1}}\neq0\) at \(P_0\). Then there is a unique function \(F:\mathbb{R}^n\to\mathbb{R}\), defined in a neighborhood of \((r_1,\cdots,r_n)\), such that \[ x_{n+1}=F(x_1,x_2,\cdots,x_n) \] satisfies \(G(x_1,\cdots,x_n,F(x_1,\cdots,x_n))=0\).

    By applying this theorem to \(G(x,y,y',\cdots,y^{(n)})=0\), we can solve for \(\frac{\text{d}^{n}y}{\text{d}x^{n}}\) in terms of the quantities given by the initial conditions. In addition, we can calculate the derivatives of all orders (assuming they exist) by differentiating this equation. This allows us to expand the solution in a Taylor series and thereby obtain the solution.

Existence/Uniqueness for FODEs

  • The solution of FOLDE: The general form of a FOLDE is

    \[ p_1(x)\frac{\text{d}y}{\text{d}x}+p_0 (x)y=q(x) \]

and it has a general solution, which can be verified by simply substituting it back into the equation. We can also derive it by Lagrange's method of variation of constants (not mentioned in the book). First we find the general solution of the corresponding homogeneous equation \[ p_1(x)\frac{\text{d}y}{\text{d}x}+p_0 (x)y=0 \]

\[ \Rightarrow y=A\exp{\left(- \int_{x_0}^{x}\frac{p_0(t)}{p_1(t)}\text{d}t\right)} \]

where \(A\) is a constant and \(x_0\) is an arbitrary number. Now we promote \(A\) to a function \(A(x)\), chosen so that the resulting expression also reproduces the inhomogeneous term. Substituting it back, the Leibniz rule produces a new term, which we set equal to \(q(x)\), i.e. \[ p_1(x)\frac{\text{d} A}{\text{d}x}\exp{\left(-\int_{x_0}^x \frac{p_0(t)}{p_1(t)}\text{d}t\right)}=q(x) \] Letting \(\mu(x)=\frac{1}{p_1(x)}\exp{\left(\int_{x_0}^x \frac{p_0(t)}{p_1(t)}\text{d}t\right)}\), we have \[ A(x)=\int_{x_1}^{x}\mu(t)q(t)\text{d}t+C \] In this way, we get the general solution of the FOLDE: \[ y(x)=\frac{1}{p_1(x)\mu(x)}\left[ C+\int_{x_1}^x\mu(t)q(t)\text{d}t\right] \] where \[ \mu(x)=\frac{1}{p_1(x)}\exp{\left(\int_{x_0}^x\frac{p_0(t)}{p_1(t)}\text{d}t\right)} \]
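
As a quick sanity check (my own sketch, not from the book), here is sympy code that assembles the solution of the concrete FOLDE \(y'+y=x\) from the formula above, taking \(p_1=p_0=1\), \(q(x)=x\) and \(x_0=x_1=0\), and verifies that it satisfies the equation:

```python
# Build the general solution of y' + y = x from the integrating-factor
# formula y = (1/(p1*mu)) * [C + int mu*q dt], with mu = exp(x) here.
import sympy as sp

x, t, C = sp.symbols('x t C')
mu = sp.exp(x)                                   # (1/p1)*exp(int p0/p1) with p0 = p1 = 1
y = (C + sp.integrate(sp.exp(t) * t, (t, 0, x))) / mu
y = sp.simplify(y)                               # C*exp(-x) + x - 1 + exp(-x)

# Substitute back into y' + y = x: the residual must vanish identically.
assert sp.simplify(sp.diff(y, x) + y - x) == 0
```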

  • Existence

    Peano Existence Theorem: If the function \(F(x,y)\) is continuous for the points on and within the rectangle defined by \(|y-c|\le K\) and \(|x-a|\le N\), and if \(|F(x,y)|\le M\) there, then the differential equation \(y'=F(x,y)\) has at least one solution \(y=f(x)\) defined for \(|x-a|\le \min(N,K/M)\) and satisfying the initial condition \(f(a)=c\).

  • Uniqueness

    Not all FODEs have unique solutions. A function \(F\) is said to satisfy a Lipschitz condition in a domain if there is a constant \(L\) such that for all \((x,y_1)\) and \((x,y_2)\) in this domain: \[ |F(x,y_1)-F(x,y_2) |\le L|y_1-y_2| \] Then we can discuss the uniqueness of solutions.

Uniqueness Theorem: Consider a FODE \(y'=F(x,y)\) where \(F\) satisfies a Lipschitz condition. Suppose there are two solutions \(f\) and \(g\) with the same initial value at \(x=a\); then \[ |f'-g'|\le L|f-g| \] Since \(\big||f-g|'\big|\le|f'-g'|\) wherever the derivative exists, a Gronwall-type integration of both sides gives \[ |f(x)-g(x) |\le \text{e}^{L|x-a|}|f(a)-g(a)|=0 \] which means \(f=g\).

By combining the two theorems above: a FODE has a unique solution if \(F\) is defined and continuous in the rectangle \(|y-c|\le K\), \(|x-a|\le N\), and satisfies a Lipschitz condition there.
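
Under these hypotheses, the solution can also be constructed by Picard iteration, \(f_{n+1}(x)=c+\int_a^x F(t,f_n(t))\,\text{d}t\); the Lipschitz condition is what makes the iterates converge. A minimal sketch (my own) for \(y'=y\), \(y(0)=1\), where the iterates reproduce the Taylor partial sums of \(\text{e}^x\):

```python
# Picard iteration for y' = y with y(0) = 1 (exact solution: e^x).
# Each step maps f to 1 + int_0^x f(t) dt.
import sympy as sp

x, t = sp.symbols('x t')
f = sp.Integer(1)                      # f_0(x) = 1, the initial value
for _ in range(6):
    f = 1 + sp.integrate(f.subs(x, t), (t, 0, x))
print(sp.expand(f))                    # 1 + x + x**2/2 + ... + x**6/720
```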

General Properties of SOLDEs

  • The general form of SOLDEs and the definition of singular points.

    According to the definition of "linear", the general form of SOLDEs is \[ p_2(x)\frac{\text{d}^2 y}{\text{d}x^2}+p_1(x)\frac{\text{d} y}{\text{d}x}+p_0(x)y=p_3(x) \] To simplify the equation, we want to divide both sides by \(p_2(x)\), which requires \(p_2(x)\neq 0\). This means the points where \(p_2(x)\) vanishes are special, so we call the points at which \(p_2(x)\) vanishes the singular points of the differential equation. One crucial point is that the singular points of SOLDEs are special and essentially different from those of nonlinear equations.

    At the same time, we can single out the SOLDEs with better behavior (for which we need not worry about dividing by \(p_2(x)\)): a SOLDE is called regular on an interval \([a,b]\) if there is no singular point on \([a,b]\).

  • Superposition Principle.

    The linear operator of a SOLDE is \[ \hat{L}=p_2\hat{D}^2+p_1\hat{D}+p_0 \] Its linearity means that, for any two solutions \(f,g\) of the full equation, \[ \hat{L}(f-g)=\hat{L}(f)-\hat{L}(g)=p_3-p_3=0 \] which shows that the difference between two solutions of a non-homogeneous SOLDE is a solution of its corresponding homogeneous SOLDE. In the same way, any linear combination of solutions of a homogeneous SOLDE is also a solution of it.

  • Existence/Uniqueness

    We've noticed that the difference between two solutions of a non-homogeneous SOLDE is a solution of its corresponding homogeneous SOLDE, so if we want to claim that the solution of a SOLDE with initial conditions is unique, we need to consider the solutions of an HSOLDE with the initial conditions \(g(a)=g'(a)=0\). This leads to the lemma:

Lemma: The only solution \(g(x)\) of the homogeneous differential equation \(y'' + py' + qy = 0\) defined on the interval \([a,b]\) that satisfies \(g(a) = 0 = g'(a)\) is the trivial solution \(g = 0\).

Proof: Introduce the nonnegative function \(u=g^2+g'^2\), whose derivative is: \[ u'=2gg'+2g'g''=2gg'+2g'(-pg'-qg)=-2p(g')^2+2(1-q)gg' \] With some basic inequalities we have: \[ 2(1-q)gg'\le 2(1+|q|)|gg'|\le (1+|q|)(g^2+(g')^2) \] then \[ u'\le 2|p|(g')^2+(1+|q|)(g^2+(g')^2)=(1+|q|)g^2+(1+|q|+2|p|)(g')^2 \] Let \(K=\max_{[a,b]}\{ 1+|q|+2|p|\}\); then \(u'\le Ku\), \[ \Rightarrow u\le u(a)\text{e}^{K|x-a|}=0 \] which means \(g=g'=0\) for all \(x\in[a,b]\).

Once the lemma has been proved, the uniqueness theorem becomes obvious.

Uniqueness Theorem of SOLDE: If \(p\) and \(q\) are continuous on \([a,b]\), then at most one solution \(y = f(x)\) of the DE \(y'' + p(x)y' + q(x)y = 0\) can satisfy the initial conditions \(f(a) = c_1\) and \(f'(a) = c_2\), where \(c_1\) and \(c_2\) are arbitrary constants.

The two allowed arbitrary constants remind us that the dimension of the solution space of an HSOLDE is \(2\), i.e. if we have two "linearly independent" solutions, then every solution can be written as a linear combination of these two. Now we specify what "linearly independent" means.

Theorem: Let \(f_1\) and \(f_2\) be two solutions of the HSOLDE \[ y''+py'+qy=0 \] where \(p\) and \(q\) are continuous functions defined on the interval \([a,b]\). If \((f_1(a),f_1'(a))\) and \((f_2(a),f_2'(a))\) are linearly independent vectors in \(\mathbb{R}^2\), then every solution \(g(x)\) of this HSOLDE is equal to some linear combination \(g(x) = c_1f_1(x) + c_2f_2(x)\).

Proof: Because the two vectors mentioned above are linearly independent, any initial data can be expanded in terms of them: \[ \begin{bmatrix} g(a) \\ g'(a) \end{bmatrix}=c_1 \begin{bmatrix} f_1(a) \\ f_1'(a) \end{bmatrix}+c_2 \begin{bmatrix} f_2(a) \\ f_2'(a) \end{bmatrix} \] Let \(u=g-c_1f_1-c_2f_2\); then \(u(a)=u'(a)=0\), so by the lemma \(u=0\), i.e. \[ g=c_1f_1+c_2f_2 \]

The Wronskian

The Wronskian of two differentiable functions is \[ W(f_1,f_2,x)=\begin{vmatrix} f_1 & f_1' \\ f_2 & f_2' \end{vmatrix}=f_1f_2'-f_2f_1' \] In a finite-dimensional vector space, the determinant of a matrix detects the linear dependence of its row vectors, so the Wronskian should have something to do with the linear dependence of these two functions.

Theorem: Two differentiable functions \(f_1\) and \(f_2\), which are nonzero in the interval \([a,b]\), are linearly dependent if and only if their Wronskian vanishes.

Proof: If the Wronskian vanishes, we have \[ f_1f_2'-f_2f_1'=0 \] Because \(f_1,f_2\) are nonzero, \[ \left(\frac{f_2}{f_1}\right)'=\frac{f_1f_2'-f_2f_1'}{f_1^2}=0 \] so \(f_2=Cf_1\), where \(C\) is a constant. The converse follows by differentiating \(f_2=Cf_1\) and computing the determinant.

Now consider the Wronskian of two solutions. All we know involves the second derivatives of the two functions, so we take the derivative of \(W\): \[ \frac{\text{d}W}{\text{d}x}=f_1f_2''-f_2f_1''=-p(f_1f_2'-f_2f_1')=-pW \]

\[ \Rightarrow W=W(f_1,f_2,c)\text{e}^{-\int_c^xp(t)\text{d}t} \]

This result is known as Abel's formula; in particular, the Wronskian of two solutions either vanishes identically or never vanishes.
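
A quick symbolic check of Abel's formula (my own sketch): for \(y''-3y'+2y=0\), with solutions \(f_1=e^x\) and \(f_2=e^{2x}\), we have \(p=-3\), and the Wronskian should equal \(W(0)\,e^{-\int_0^x p\,\text{d}t}=e^{3x}\):

```python
# Verify W(x) = W(0) * exp(-int_0^x p dt) for y'' - 3y' + 2y = 0 (p = -3),
# whose independent solutions are f1 = e^x and f2 = e^{2x}.
import sympy as sp

x, t = sp.symbols('x t')
f1, f2 = sp.exp(x), sp.exp(2 * x)
W = f1 * sp.diff(f2, x) - f2 * sp.diff(f1, x)          # direct Wronskian: e^{3x}
abel = W.subs(x, 0) * sp.exp(-sp.integrate(-3, (t, 0, x)))
assert sp.simplify(W - abel) == 0
```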

A Second Solution to the HSOLDE

If we already know one solution of an HSOLDE, we can use the Wronskian to construct another: \[ f_1f_2'-f_2f_1'=W(c)\text{e}^{-\int_c^xp(t)\text{d}t} \]

\[ \Rightarrow \left(\frac{f_2}{f_1}\right)'= \frac{C}{f_1^2} \text{e}^{-\int_c^xp(t)\text{d}t} \]

The result is \[ f_2(x)=f_1(x)\left[ C+K\int_{x_0}^x\frac{1}{f_1^2(s)}\exp{\left(-\int_c^sp(t)\text{d}t \right)} \text{d}s \right] \]

Theorem: Let \(f_1\) be a solution of \(y''+py'+qy=0\); then \[ f_2(x)=f_1(x)\int_{x_0}^x\frac{1}{f_1^2(s)}\exp{\left(-\int_c^sp(t)\text{d}t \right)} \text{d}s \] is another solution, linearly independent of \(f_1(x)\).

Example: Legendre DE: \[ \frac{\text{d}}{\text{d}x}\left[(1-x^2)\frac{\text{d}y}{\text{d}x}\right]+n(n+1)y=0 \] from which we have \(p(x)=-\frac{2x}{1-x^2}\). One solution of this HSOLDE is the well-known Legendre polynomial \(P_n(x)\). Another solution \(Q_n\) is \[ \begin{aligned} Q_n(x)&=P_n(x)\int_{x_0}^x\frac{1}{P_n^2(s)}\exp\left(-\int_0^sp(t)\text{d}t\right)\text{d}s \\ &=P_n(x)\int_{x_0}^x \frac{\text{d}s}{P_n^2(s)(1-s^2)} \end{aligned} \] For example: \[ Q_0=\int_{x_0}^x\frac{\text{d}s}{1-s^2}=\frac{1}{2}\ln\bigg|\frac{1+x}{1-x}\bigg| \]

\[ Q_1=x\int_{x_0}^x\frac{\text{d}s}{s^2(1-s^2)}=\frac{1}{2}x\ln\bigg|\frac{1+x}{1-x}\bigg|-1 \] (up to an additive multiple of \(P_1=x\) coming from the lower limit).
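
The \(Q_1\) computation is easy to reproduce symbolically (a sketch of mine; since the lower limit only shifts \(f_2\) by a multiple of \(P_1\), an antiderivative suffices):

```python
# Recompute Q_1 from f2 = P_1 * int ds / (P_1(s)^2 * (1 - s^2)) with P_1 = x,
# then verify that it solves the n = 1 Legendre equation.
import sympy as sp

x, s = sp.symbols('x s')
P1 = x
F = sp.integrate(1 / (s**2 * (1 - s**2)), s).subs(s, x)   # antiderivative
Q1 = sp.simplify(P1 * F)                 # x*atanh(x) - 1 (sympy may print logs)
residual = sp.diff((1 - x**2) * sp.diff(Q1, x), x) + 2 * Q1
assert sp.simplify(residual) == 0
```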

General Solution to an ISOLDE

  • General solution to an ISOLDE

    We've finished the discussion of the HSOLDE and know that its general solution is a linear combination of two linearly independent solutions. If we also have a particular solution \(g(x)\) of an ISOLDE, then according to the properties of SOLDEs, any solution of it can be expressed in the following form:

\[ h(x)=c_1f_1(x)+c_2f_2(x)+g(x) \]

  • The method of variation of constants

    Let \(f_1\) and \(f_2\) be the two known solutions of the HSOLDE; we seek a particular solution \(g(x)\) of the ISOLDE \(y''+py'+qy=r\). Suppose \(g=v(x)f_1(x)\) and substitute it back: \[ v''+\left(p+\frac{2f_1'}{f_1}\right)v'=\frac{r}{f_1} \] It's a FOLDE in \(v'\), whose solution can be written down at once: \[ v'=\frac{W(x)}{f_1^2(x)}\left[C+\int_a^x \frac{f_1(t)r(t)}{W(t)}\text{d}t \right] \] where we have used \(\exp{\left(\int p\,\text{d}x\right)}=\frac{W(c)}{W(x)}\). Let \(C=0\): \[ \Rightarrow v'=\int_a^x \frac{f_1(t)r(t)}{W(t)}\text{d}t\cdot\frac{\text{d}}{\text{d}x}\frac{f_2}{f_1}=\frac{\text{d}}{\text{d}x}\left[\frac{f_2}{f_1} \int_a^x \frac{f_1(t)r(t)}{W(t)}\text{d}t\right]-\frac{f_2}{f_1}\frac{f_1(x) r(x)}{W(x)} \]

    \[ \Rightarrow v=\frac{f_2}{f_1}\int_a^x \frac{f_1(t)r(t)}{W(t)}\text{d}t-\int_a^x \frac{f_2(t)r(t)}{W(t)}\text{d}t \]

    By doing so, we get a particular solution \(g=vf_1\), which allows us to express the solutions of an ISOLDE in terms of two homogeneous solutions: \[ y(x)=c_1f_1(x)+c_2 f_2(x)+f_2(x)\int_a^x \frac{f_1(t)r(t)}{W(t)}\text{d}t-f_1(x)\int_a^x \frac{f_2(t)r(t)}{W(t)}\text{d}t \]
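
To check the final formula (my own sketch), take \(y''+y=x\) with \(f_1=\cos x\), \(f_2=\sin x\), so \(W=1\) and \(r=x\); the formula should produce a particular solution:

```python
# Particular solution from variation of constants:
# y_p = f2 * int f1*r/W - f1 * int f2*r/W, for y'' + y = x (W = 1).
import sympy as sp

x, t = sp.symbols('x t')
f1, f2 = sp.cos, sp.sin
yp = (f2(x) * sp.integrate(f1(t) * t, (t, 0, x))
      - f1(x) * sp.integrate(f2(t) * t, (t, 0, x)))
yp = sp.simplify(yp)                 # x - sin(x): x plus a homogeneous piece
assert sp.simplify(sp.diff(yp, x, 2) + yp - x) == 0
```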

Separation and Comparison Theorems

Separation Theorem: The zeros of two linearly independent solutions of an HSOLDE occur alternately.

Proof: Suppose \(x_i\) is a zero of \(f_2\); then \[ 0\neq W(f_1,f_2,x_i)=f_1(x_i)f_2'(x_i) \] Thus both \(f_1(x_i)\) and \(f_2'(x_i)\) are nonzero. Consider the next zero \(x_{i+1}\) of \(f_2\): because \(f_2\) is continuous and nonzero in the interval \((x_i,x_{i+1})\), \(f_2'(x_i)\) and \(f_2'(x_{i+1})\) must have opposite signs. We proved earlier that \(W\) never changes sign, so \(f_1(x_{i+1})\) and \(f_1(x_i)\) must have opposite signs. By the intermediate value theorem, \(f_1\) has at least one zero in the interval \((x_i,x_{i+1})\). Because \(f_1\) and \(f_2\) play symmetric roles, the zeros of the two functions must occur alternately.

Comparison Theorem: Let \(f\) and \(g\) be nontrivial solutions of \(u''+p(x)u=0\) and \(v''+q(x)v=0\), respectively, where \(p\ge q\) for all \(x\in[a,b]\) with \(p\not\equiv q\). Then \(f\) vanishes at least once between any two zeros of \(g\).

The form of the HSOLDE in the comparison theorem is not restrictive, as we can transform any HSOLDE into this form:

Proposition: If \(y''+py'+qy=0\), then \[ u=y\exp{\left[\frac{1}{2} \int_{\alpha}^xp(t)\text{d}t\right] } \] satisfies \(u''+S(x)u=0\), where \(S(x)=q-\frac{1}{4}p^2-\frac{1}{2}p'\).

Proof: Let \(y=wu\). Then we have: \[ wu''+(2w'+pw)u'+(w''+pw'+qw)u=0 \] We demand that the coefficient of \(u'\) vanish, \[ \Rightarrow w=\exp\left[-\frac{1}{2} \int_{\alpha}^xp(t)\text{d}t\right] \] Substituting back, we have \[ S(x)=q+p\frac{w'}{w}+\frac{w''}{w}=q-\frac{1}{4}p^2-\frac{1}{2}p' \] and \(u=w^{-1}y\), as claimed at the beginning.

Example: Bessel Equation: \[ y''+\frac{1}{x}y'+\left(1-\frac{n^2}{x^2}\right)y=0 \] Here \(u=y\exp\left(\frac{1}{2}\int\frac{\text{d}x}{x}\right)=y\sqrt{x}\), and \(u\) satisfies \[ u''+\left(1-\frac{4n^2-1}{4x^2}\right)u=0 \] For the simple case \(n=\frac{1}{2}\) this reduces to \(u''+u=0\), so \(u=\sin x\), and the corresponding solution is the half-order Bessel function \[ J_{1/2}(x)\propto\frac{\sin x}{\sqrt{x}} \]
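
This identity is easy to confirm numerically (a sketch assuming scipy is available; `scipy.special.jv` evaluates \(J_\nu\)):

```python
# Check J_{1/2}(x) = sqrt(2/(pi*x)) * sin(x), i.e. u = sqrt(x)*J_{1/2}(x)
# is proportional to sin(x), as the transformed Bessel equation predicts.
import numpy as np
from scipy.special import jv

x = np.linspace(0.1, 20.0, 200)
assert np.allclose(jv(0.5, x), np.sqrt(2 / (np.pi * x)) * np.sin(x))
```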

Adjoint Differential Operators

Self-adjoint differential operators are equally important because their “eigenfunctions” also form complete orthogonal sets.

  • Exact HSOLDE and integrating factors

    An HSOLDE is said to be exact if \[ 0=\hat{L}(f)=p_2(x)f''(x)+p_1(x)f'(x)+p_0(x) f(x)=\frac{\text{d}}{\text{d}x}\left( A(x)f'(x)+B(x)f(x) \right) \] An integrating factor for \(\hat{L}(f)\) is a function \(\mu(x)\) such that \(\mu(x)\hat{L}(f)\) is exact.

    Consider an ISOLDE with inhomogeneous term \(r(x)\) whose operator has an integrating factor \(\mu(x)\); then integrating \(\mu\hat{L}(y)=\mu r\) once gives \[ A(x)y'+B(x)y=\int_{\alpha}^x \mu(t)r(t)\text{d}t \] which is a general FOLDE and can be solved easily. Thus the existence of an integrating factor completely solves a SOLDE.

Proposition: The HSOLDE is exact if and only if \(p_2''-p_1'+p_0=0\).

Proof: If the HSOLDE is exact, then \[ p_2f''+p_1f'+p_0f=Af''+(A'+B)f'+B'f \] which implies \(p_2=A,\ p_1=A'+B,\ p_0=B'\), and this in turn gives \[ p_2''-p_1'+p_0=0 \] Conversely, if we have \(p_0=p_1'-p_2''\), then the HSOLDE becomes \[ p_2f''+p_1f'+p_1'f-p_2''f=0 \]

\[ \Rightarrow \frac{\text{d}}{\text{d}x}\left(p_2f'-p_2'f+p_1f \right)=0 \]

From this proposition, it becomes clear how to find an integrating factor: \(\mu(x)\) should satisfy \[ \hat{M}\mu(x)=(p_2\mu)''-(p_1\mu)'+p_0\mu=0 \] Expanding the equation above gives the form of \(\hat{M}\): \[ \hat{M}=p_2\frac{\text{d}^2}{\text{d}x^2}+(2p_2'-p_1)\frac{\text{d}}{\text{d}x}+(p_2''-p_1'+p_0) \]

  • Adjoint Differential Operators:

The adjoint differential operator \(\hat{M}\) of \(\hat{L}=p_2\frac{\text{d}^2}{\text{d}x^2}+p_1\frac{\text{d}}{\text{d}x}+p_0\) is defined as \[ \hat{M}=p_2\frac{\text{d}^2}{\text{d}x^2}+(2p_2'-p_1)\frac{\text{d}}{\text{d}x}+(p_2''-p_1'+p_0) \] and is denoted by \(\hat{M}=\hat{L}^{\dagger}\). It can easily be verified that \((\hat{L}^{\dagger})^{\dagger}=\hat{L}\), which says that if \(v\) is an integrating factor of \(\hat{L}\), then \(u\) is an integrating factor of \(\hat{M}\). Indeed, writing \[ \begin{aligned} v\hat{L}(u)&=vp_2u''+vp_1u'+vp_0u \\ u\hat{M}(v)&=u(p_2v)''-u(p_1v)'+up_0v \end{aligned} \]

\[ \Rightarrow v\hat{L}(u)-u\hat{M}(v)=\frac{\text{d}}{\text{d}x}\left(p_2vu'-(p_2v)'u+p_1uv \right) \]

\[ \int_a^b (v\hat{L}(u)-u\hat{M}(v) )\text{d}x=[p_2vu'-(p_2v)'u+p_1uv]\big|_a^b \]

This is the Lagrange identity, which shows that the definition here is consistent with the definition of the adjoint in quantum mechanics: consider \(u\) and \(v\) as two vectors in a Hilbert space and \(\hat{L}\) and \(\hat{M}\) as operators on it.

The equation can then be written as \[ \langle v|\hat{L}|u\rangle -\langle u|\hat{M}|v\rangle= \langle u|\hat{L}^{\dagger}|v\rangle^*-\langle u|\hat{M}|v\rangle= [p_2vu'-(p_2v)'u+p_1uv]\big|_a^b \] If the r.h.s. is zero, then \(\langle u|\hat{L}^{\dagger}|v\rangle^*=\langle u|\hat{M}|v\rangle\); because \(u\) and \(v\) are real, this identifies \(\hat{L}^{\dagger}=\hat{M}\).
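
The Lagrange identity itself can be verified symbolically for generic coefficients (my own sketch):

```python
# Check v*L(u) - u*M(v) = d/dx( p2*v*u' - (p2*v)'*u + p1*u*v )
# for arbitrary functions p0, p1, p2, u, v.
import sympy as sp

x = sp.symbols('x')
u, v, p0, p1, p2 = (sp.Function(n)(x) for n in ('u', 'v', 'p0', 'p1', 'p2'))

Lu = p2 * u.diff(x, 2) + p1 * u.diff(x) + p0 * u          # L(u)
Mv = (p2 * v).diff(x, 2) - (p1 * v).diff(x) + p0 * v      # M(v), the adjoint
surface = p2 * v * u.diff(x) - (p2 * v).diff(x) * u + p1 * u * v
assert sp.expand(v * Lu - u * Mv - surface.diff(x)) == 0
```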

  • Self-Adjoint Operators. If we want \(\hat{L}^{\dagger}=\hat{L}\), then comparison of coefficients gives \(p_2'=p_1\), so we can write the HSOLDE as \[ \frac{\text{d}}{\text{d}x}\left( p_2 \frac{\text{d}f}{\text{d}x} \right)+p_0f(x)=0 \] Can we make every SOLDE self-adjoint? Multiply both sides by a function \(h\): \[ hp_2f''+hp_1f'+hp_0f=0 \] Demanding \((hp_2)'=hp_1\), we get \[ h(x)=\frac{1}{p_2}\exp\left( \int^x\frac{p_1(t)}{p_2(t)}\text{d}t \right) \] In this way, every SOLDE can be made self-adjoint.

Example: Bessel Equation: \[ y''+\frac{1}{x}y'+\left(1-\frac{n^2}{x^2} \right)y=0 \] we have \[ h(x)=\exp\left( \int^x\frac{1}{t}\text{d}t \right)=x \]

\[ \frac{\text{d}}{\text{d}x}\left(x\frac{\text{d}y}{\text{d}x} \right)+\left(x-\frac{n^2}{x} \right)y=0 \]
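A one-line symbolic confirmation (my sketch) that the self-adjoint form above is just \(x\) times the original Bessel operator:

```python
# Expand d/dx(x*y') + (x - n^2/x)*y and compare with
# x * [ y'' + y'/x + (1 - n^2/x^2)*y ].
import sympy as sp

x, n = sp.symbols('x n')
y = sp.Function('y')(x)
self_adjoint = sp.diff(x * sp.diff(y, x), x) + (x - n**2 / x) * y
bessel = sp.diff(y, x, 2) + sp.diff(y, x) / x + (1 - n**2 / x**2) * y
assert sp.simplify(self_adjoint - x * bessel) == 0
```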

Power-Series Solutions of SOLDEs

Frobenius Methods and Undetermined Coefficients

A proper treatment of SOLDEs requires the machinery of complex analysis. At this point, however, we seek a formal infinite-series solution of the SOLDE \[ y''+p(x)y'+q(x)y=0 \] where \(p(x)\) and \(q(x)\) are real and analytic in the interval considered. Most of the time the solution of a SOLDE cannot be expressed in terms of elementary functions, so it is natural to try to express it as a power series (aside from integral representations). Suppose \[ p(x)=\sum_{k=0}^{\infty}a_k x^k,\qquad q(x)=\sum_{k=0}^{\infty}b_k x^k \] and let our solution be \(y=\sum_{k=0}^{\infty}c_kx^k\). The \(a_k\) and \(b_k\) are known, and we try to solve for the coefficients \(c_k\) from the SOLDE.

The first step is to write down the derivative of \(y\): \[ \begin{aligned} y'&=\sum_{k=0}^\infty(k+1)c_{k+1}x^k \\ y''&=\sum_{k=0}^\infty (k+1)(k+2)c_{k+2}x^k \end{aligned} \] Then calculate \(p(x)y'\): \[ p(x)y'=\sum_{l=0}^\infty \sum_{n=0}^{\infty} a_l(n+1)c_{n+1} x^{n+l} \] Let \(k=n+l\), \[ p(x)y'=\sum_{k=0}^{\infty}\sum_{l=0}^{k}a_{l}(k-l+1)c_{k-l+1}x^k \] Similarly, \[ q(x)y=\sum_{l=0}^{\infty}\sum_{n=0}^{\infty} b_l c_n x^{n+l}=\sum_{k=0}^{\infty}\sum_{l=0}^k b_l c_{k-l}x^k \]

The SOLDE can be written as: \[ \sum_{k=0}^{\infty}\{ (k+1)(k+2)c_{k+2}+\sum_{l=0}^k \left[ a_{l}(k-l+1)c_{k-l+1}+b_l c_{k-l} \right] \} x^k=0 \] For this to be true for all \(x\), the coefficient of each power of \(x\) must vanish: \[ (k+1)(k+2)c_{k+2}=-\sum_{l=0}^{k}\left[ a_{l}(k-l+1)c_{k-l+1}+b_l c_{k-l} \right] \] If we know \(c_0\) and \(c_1\), then we can uniquely determine \(c_n\) for all \(n\ge2\).
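
The recursion is straightforward to implement; here is a small sketch of mine using exact fractions, with \(y''+y=0\) (so \(p=0\), \(q=1\)) as a sanity check, whose even solution is \(\cos x\):

```python
# Power-series coefficients c_k for y'' + p*y' + q*y = 0 from
# (k+1)(k+2) c_{k+2} = -sum_{l=0}^{k} [ a_l (k-l+1) c_{k-l+1} + b_l c_{k-l} ]
from fractions import Fraction

def series_coeffs(a, b, c0, c1, n_terms):
    """a[l], b[l]: Taylor coefficients of p and q; returns c_0..c_{n_terms-1}."""
    c = [Fraction(c0), Fraction(c1)]
    for k in range(n_terms - 2):
        s = sum(a[l] * (k - l + 1) * c[k - l + 1] + b[l] * c[k - l]
                for l in range(k + 1))
        c.append(-s / ((k + 1) * (k + 2)))
    return c

# y'' + y = 0 with c_0 = 1, c_1 = 0: the Taylor coefficients of cos x.
print(series_coeffs([0] * 8, [1] + [0] * 7, 1, 0, 8))
# -> 1, 0, -1/2, 0, 1/24, 0, -1/720, 0 (as Fractions)
```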

Existence Theorem: For any SOLDE of the form \(y''+ p(x)y' +q(x)y = 0\) with analytic coefficient functions, there exists a unique power series, given by the equation above that formally satisfies the SOLDE for each choice of \(c_0\) and \(c_1\).

The convergence of the power series is given by:

Theorem: For any choice of \(c_0\) and \(c_1\), the radius of convergence of any power series solution \(y=\sum_{k=0}^{\infty} c_kx^k\) for the normal HSOLDE \(y''+ p(x)y' + q(x)y = 0\) is at least as large as the smaller of the two radii of convergence of the two series for \(p(x)\) and \(q(x)\).

At this point, we conclude the theoretical discussion. Here are two typical examples in physics.

Example: Legendre DE. \[ y''-\frac{2x}{1-x^2}y'+\frac{\lambda}{1-x^2}y=0 \] On the interval \((-1,1)\), \(p(x)\) and \(q(x)\) satisfy the required analyticity condition.

Then: \[ \begin{aligned} p(x)&=\sum_{k=0}^{\infty}(-2)x^{2k+1} \\ q(x)&=\sum_{k=0}^{\infty} \lambda x^{2k} \end{aligned} \] Use \[ (k+1)(k+2)c_{k+2}=-\sum_{l=0}^{k}\left[ a_{l}(k-l+1)c_{k-l+1}+b_l c_{k-l} \right] \] to get the recursion. We treat even and odd \(k\) separately; for even \(k=2r\), \[ (2r+1)(2r+2)c_{2r+2}=\sum_{m=0}^{r}(4r-4m-\lambda)c_{2(r-m)} \] and for \(k=2r+2\), \[ \begin{aligned} (2r+3)(2r+4)c_{2r+4}&=\sum_{m=0}^{r+1}(4(r+1)-4m-\lambda)c_{2((r+1)-m)} \\ &=(4(r+1)-\lambda)c_{2r+2}+\sum_{m=1}^{r+1}(4(r+1)-4m-\lambda)c_{2((r+1)-m)} \\ &=(4r+4-\lambda)c_{2r+2}+\sum_{m=0}^{r}(4r-4m-\lambda)c_{2(r-m)}\\ &=[-\lambda+(2r+3)(2r+2)]c_{2r+2} \end{aligned} \]

\[ c_{k+2}=\frac{-\lambda+k(k+1)}{(k+1)(k+2)}c_k \]

Relabeling \(k=2r+2\) gives the relation above, and the same computation with odd indices yields the identical formula, so it holds for all \(k\ge 0\). We thus obtain two independent solutions, one containing only even powers of \(x\) and the other only odd powers. If we demand that the series converge at \(\pm1\), we need \(\lambda=l(l+1)\); in that case the infinite series terminates and becomes a polynomial, known as the Legendre polynomial.
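
For instance (my own check), with \(\lambda=l(l+1)=12\) the odd series truncates after \(x^3\) and reproduces \(P_3\) up to normalization:

```python
# Generate the odd Legendre solution for lambda = 3*(3+1) = 12 from
# c_{k+2} = (k(k+1) - lambda)/((k+1)(k+2)) * c_k; it truncates at x^3.
from fractions import Fraction

lam = 12
c = {1: Fraction(1)}                   # odd solution: c_0 = 0, c_1 = 1
k = 1
while True:
    nxt = Fraction(k * (k + 1) - lam, (k + 1) * (k + 2)) * c[k]
    if nxt == 0:
        break                          # truncation: lambda = l(l+1) with l = 3
    c[k + 2] = nxt
    k += 2
print(c)   # {1: 1, 3: -5/3}, proportional to P_3 = (5x^3 - 3x)/2
```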

Example: Hermite DE. For a harmonic oscillator, the time-independent Schrödinger equation is: \[ -\frac{\hbar^2}{2m}\frac{\text{d}^2\psi}{\text{d}x^2}+\frac{1}{2}m\omega^2x^2\psi=E\psi \] Substituting \(\psi(x)=H(y)\exp\left(-m\omega x^2/2\hbar\right)\) with \(y=\sqrt{m\omega/\hbar}\,x\), \[ H''-2yH'+\lambda H=0 \] where \(\lambda=\frac{2E}{\hbar \omega}-1\). Then we have: \[ \sum_{k=1}^\infty \left[ (k+1)(k+2)c_{k+2}+(\lambda-2k)c_k \right]y^k+2c_2+\lambda c_0=0 \]

\[ \Rightarrow c_{k+2}=\frac{2k-\lambda}{(k+1)(k+2)}c_k \]

For the wavefunction to remain normalizable, the power series must not blow up at \(\infty\); the infinite series must therefore truncate to a polynomial, which requires \(\lambda=2l\). The polynomial is known as the Hermite polynomial of order \(l\). Without fully solving the equation, we can write down the energy of the oscillator: \[ E_l=\frac{\hbar\omega}{2}(1+\lambda)=\hbar\omega\left(l+\frac{1}{2} \right) \] which means the energy of a harmonic oscillator is quantized.
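
The truncation is easy to see concretely (my own sketch): with \(\lambda=2l\) and \(l=4\), the even series stops after \(y^4\) and is proportional to \(H_4(y)=16y^4-48y^2+12\):

```python
# Even Hermite solution for lambda = 2*l with l = 4, from
# c_{k+2} = (2k - lambda)/((k+1)(k+2)) * c_k; the series stops at y^4.
from fractions import Fraction

l = 4
lam, c = 2 * l, {0: Fraction(1)}
for k in range(0, l, 2):
    c[k + 2] = Fraction(2 * k - lam, (k + 1) * (k + 2)) * c[k]
print(c)   # {0: 1, 2: -4, 4: 4/3}; times 12 this is H_4 = 16y^4 - 48y^2 + 12
```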

Quantum Harmonic Oscillator

This section of the book solves the Hermite equation by ladder operators, which is not a general method for solving DEs, so it is skipped here (temporarily, perhaps).

SOLDEs with Constant Coefficients

The most general \(n\)th-order linear differential equation with constant coefficients can be written as \[ \hat{L}(y)=y^{(n)}+a_{n-1}y^{(n-1)}+\cdots+a_1y'+a_0y=r(x) \]

  • The solutions of HNOLDEs: The corresponding HNOLDE has \(r(x)=0\), which means the derivatives of a solution are linearly dependent on the solution itself, so it is a reasonable guess that the solution has the form \(e^{\lambda x}\). Substituting this guess back:

\[ \hat{L}(y)=(\lambda^n+a_{n-1}\lambda^{n-1}+\cdots+a_1\lambda+a_0)e^{\lambda x}=0 \]

The equation will hold only if \(\lambda\) is a zero of the characteristic polynomial \[ p(x)\equiv x^n+a_{n-1}x^{n-1}+\cdots+a_1 x+a_0 \] which, by the fundamental theorem of algebra, can be written as \[ p(x)=(x-\lambda_1)^{k_1}(x-\lambda_2)^{k_2}\cdots(x-\lambda_m)^{k_m} \] where \(\lambda_j\) is a complex zero with multiplicity \(k_j\). The case \(k_j\ge 2\) is a little troublesome, because we need \(n\) linearly independent solutions. Fortunately, we can still find \(n\) linearly independent solutions from the characteristic polynomial, but some of them are more than just \(e^{\lambda x}\).

Theorem: Let \(\{\lambda_j\}_{j=1}^m\) be the distinct roots of the characteristic polynomial of the real HNOLDE above, with multiplicities \(\{k_j \}_{j=1}^m\). Then the functions \[ \{ e^{\lambda_j x},xe^{\lambda_j x},\ldots,x^{k_j-1}e^{\lambda_j x}\}_{j=1}^m \] form a basis of solutions.
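
As a small numerical illustration (my own sketch): for \(y'''-3y''+3y'-y=0\) the characteristic polynomial is \((x-1)^3\), so the root \(1\) has multiplicity \(3\) and the theorem gives the basis \(\{e^x,\ xe^x,\ x^2e^x\}\):

```python
# Roots of the characteristic polynomial x^3 - 3x^2 + 3x - 1 = (x - 1)^3.
# The triple root shows up as three numerically nearby values around 1,
# signalling the solution basis {e^x, x*e^x, x^2*e^x}.
import numpy as np

roots = np.roots([1, -3, 3, -1])
print(np.round(roots, 4))    # three roots, all approximately 1
```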

Example: damped oscillators. The damped oscillator satisfies the equation \[ \ddot{x}+\gamma\dot{x}+\omega_0^2x=0 \] where \(\gamma\) is called the damping factor. There are three different regimes, called over-damped, critically damped, and under-damped. As an example, the under-damped regime means \(\gamma^2<4\omega_0^2\). The two zeros of the characteristic polynomial are \[ \lambda_{1,2}=-\frac{\gamma}{2}\pm i\sqrt{\omega_0^2-\left(\frac{\gamma}{2} \right)^2} \] The solution is \[ x=Ae^{-(\gamma/2)t}\cos\left(\sqrt{\omega_0^2-\left(\frac{\gamma}{2}\right)^2}\,t+\phi_0 \right) \] The amplitude is modulated by the factor \(e^{-(\gamma/2)t}\).

  • The solutions of INOLDEs:

    Theorem: The INOLDE \(\hat{L}(y)= e^{\lambda x}S(x)\), where \(S(x)\) is a polynomial, has a particular solution \(e^{\lambda x}q(x)\), where \(q(x)\) is also a polynomial. The degree of \(q(x)\) equals that of \(S(x)\) unless \(\lambda =\lambda_j\), a root of the characteristic polynomial of \(\hat{L}\), in which case the degree of \(q(x)\) exceeds that of \(S(x)\) by \(k_j\), the multiplicity of \(\lambda_j\).

    Example: \(y''+y=xe^{x}\). The characteristic roots are \[ \lambda_{1,2}=\pm i\neq 1 \] so we guess \(y=(Ax+B)e^x\): \[ y''+y=(Ax+2A+B)e^x+(Ax+B)e^x=xe^x \]

    \[ \Rightarrow A=\frac{1}{2},\quad B=-\frac{1}{2} \]

    Then we get the most general solution: \[ y=A\cos(x+\phi)+\frac{1}{2}(x-1)e^{x} \]

    Example: \(y''-y=xe^x\). The characteristic roots are \(\lambda_{1,2}=\pm 1\), one of which equals the exponent in the inhomogeneous term. Thus we must guess \(y=(Ax^2+Bx+C)e^x\): \[ y''-y=(4Ax+2A+2B)e^x=xe^x \]

    \[ \Rightarrow A=\frac{1}{4}, \quad B=-\frac{1}{4} \]

    Then we get general solutions: \[ y=Ae^x+Be^{-x}+\frac{1}{4}(x^2-x)e^x \]
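
Both examples can be cross-checked with sympy's `dsolve` (my own check; sympy may group the constants differently):

```python
# Cross-check the two undetermined-coefficient examples.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
print(sp.dsolve(y(x).diff(x, 2) + y(x) - x * sp.exp(x)))
# particular part: (x - 1)/2 * exp(x)
print(sp.dsolve(y(x).diff(x, 2) - y(x) - x * sp.exp(x)))
# particular part: (x**2 - x)/4 * exp(x), up to homogeneous terms
```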