Equation Solving¶
More about Equation Solving
For more about equation solving please refer to another notebook of mine: Intelligence.
There are many methods and techniques for solving equations. Here we review only some of them.
Ordinary Differential Equations¶
There are many important equations in physics.
There are many methods to solve an ODE:
- Green’s function
- Series solution
- Laplace transform
- Fourier transform
Green’s Function¶
Definition of Green’s Function¶
The idea of Green’s function is very simple. We want the general solution of an equation of the form

(1)\[\alpha(x) \frac{d^2 y(x)}{dx^2} + \beta(x) \frac{d y(x)}{dx} + \gamma(x)\, y(x) = f(x),\]

where \(f(x)\) is the source term and some boundary conditions are given. To save ink we define the differential operator

\[\hat L_x \equiv \alpha(x) \frac{d^2}{dx^2} + \beta(x) \frac{d}{dx} + \gamma(x),\]

which takes a function \(y(x)\) to \(f(x)\), i.e.,

(2)\[\hat L_x y(x) = f(x).\]
Now we define the Green’s function to be the solution of equation (2), but with the source replaced by the delta function \(\delta (x-z)\),

\[\hat L_x G(x,z) = \delta(x-z).\]
Why do we define this function? Because the solution to equation (1) is then given by

\[y(x) = \int G(x,z)\, f(z)\, dz.\]
To verify this conclusion we plug it into the LHS of equation (1),

\[\hat L_x y(x) = \hat L_x \int G(x,z)\, f(z)\, dz = \int \hat L_x G(x,z)\, f(z)\, dz = \int \delta(x-z)\, f(z)\, dz = f(x),\]
in which we used one of the defining properties of the Dirac delta distribution,

\[\int \delta(x-z)\, f(z)\, dz = f(x).\]
Also note that the delta function is even, i.e., \(\delta(-x) = \delta(x)\).
So all we need to do to find the solution of a standard second-order differential equation is the following.

1. Find the general form of the Green’s function (GF) for the operator \(\hat L_x\).
2. Apply the boundary conditions (BC) to the GF. This might be the trickiest part of the method. Anyway, for a BC of the form \(y(a)=0=y(b)\), we can just choose the GF to vanish at \(a\) and \(b\). Otherwise we can move this step to the end, when no intuition comes to mind.
3. Impose continuity of the derivatives up to order \(n-2\) at the point \(x=z\), that is,
   \[G^{(n-2)}(x,z) \vert_{x<z} = G^{(n-2)}(x,z) \vert _{x>z} ,\qquad \text{at } x= z.\]
4. Impose discontinuity of the \((n-1)\)th-order derivative (the first derivative for a second-order equation) at \(x=z\), i.e.,
   \[G^{(n-1)}(x,z)\vert_{x>z} - G^{(n-1)}(x,z)\vert_{x<z} = 1, \qquad \text{at } x= z.\]
   This condition comes from the fact that the integral of the Dirac delta distribution is the Heaviside step function.
5. Solve for the coefficients to get the GF.
6. The solution to the inhomogeneous ODE \(\hat L_x y(x)=f(x)\) is then given immediately by
   \[y(x) = \int G(x,z) f(z) dz.\]
   If we skipped step 2, we would still have some unknown coefficients, which can then be determined from the BC.
How to Find Green’s Function¶
So the task reduces to finding the Green’s function. Solving an inhomogeneous equation with a delta function as the source is as easy as solving the homogeneous equation.
We demonstrate this with an example differential equation. The problem we are going to solve is

\[\frac{d^2 y(x)}{dx^2} + \frac{y(x)}{4} = f(x),\]
with boundary condition
For simplicity we define the operator

\[\hat L_x \equiv \frac{d^2}{dx^2} + \frac{1}{4}.\]

First of all, we find the GF associated with this operator, i.e., the solution of

\[\hat L_x G(x,z) = \delta(x-z).\]
We just follow the steps.
The general solution to
\[\hat L_x G(x,z) = 0\]
is given by
\[\begin{split}G(x,z) = \begin{cases} A_1\cos (x/2) + B_1 \sin(x/2), & \qquad x \leq z, \\ A_2\cos (x/2) + B_2 \sin(x/2), & \qquad x \geq z. \end{cases}\end{split}\]

Continuity of the 0th-order derivative at \(x=z\),
\[G(z_-,z) = G(z_+,z),\]
is exactly

(4)\[A_1\cos(z/2) + B_1 \sin(z/2) = A_2 \cos(z/2) + B_2\sin(z/2).\]

The discontinuity condition on the 1st-order derivative,
\[\left.\frac{d}{dx} G(x,z) \right\vert_{x=z_+} - \left.\frac{d}{dx} G(x,z) \right\vert_{x=z_-} = 1,\]
becomes

(5)\[-\frac{A_2}{2}\sin\frac{z}{2} + \frac{B_2}{2} \cos\frac{z}{2} - \left( -\frac{A_1}{2}\sin\frac{z}{2} + \frac{B_1}{2}\cos\frac{z}{2} \right) = 1.\]

Now we combine (4) and (5) to eliminate two degrees of freedom. For example, we can solve for \(A_1\) and \(B_1\) in terms of the other coefficients. Here we have
\[\begin{split}B_1 &= B_2 - 2\cos(z/2) , \\ A_1 &= A_2 + 2\sin(z/2).\end{split}\]

Finally, write down the formal solution using \(y(x) = \int G(x,z) f(z) dz\). We are then still left with two unknown coefficients, \(A_2\) and \(B_2\), which are in fact determined by the BC, equation (3).
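As a cross-check, here is a minimal SymPy sketch of the same recipe for \(\hat L_x = d^2/dx^2 + 1/4\). Since the boundary condition (3) is not reproduced above, the Dirichlet condition \(G(0,z) = G(\pi,z) = 0\) used below is only an illustrative assumption.

```python
# Green's-function recipe for L_x = d^2/dx^2 + 1/4, sketched with SymPy.
# The boundary condition G(0,z) = G(pi,z) = 0 is assumed for illustration only.
import sympy as sp

x, z = sp.symbols("x z", positive=True)
A1, B1, A2, B2 = sp.symbols("A1 B1 A2 B2")

G_left = A1 * sp.cos(x / 2) + B1 * sp.sin(x / 2)   # branch for x < z
G_right = A2 * sp.cos(x / 2) + B2 * sp.sin(x / 2)  # branch for x > z

conditions = [
    G_left.subs(x, 0),                            # assumed BC: G(0, z) = 0
    G_right.subs(x, sp.pi),                       # assumed BC: G(pi, z) = 0
    (G_right - G_left).subs(x, z),                # continuity of G at x = z
    sp.diff(G_right - G_left, x).subs(x, z) - 1,  # jump of dG/dx equal to 1
]
sol = sp.solve(conditions, [A1, B1, A2, B2], dict=True)[0]

G_left = sp.simplify(G_left.subs(sol))
G_right = sp.simplify(G_right.subs(sol))
print("G(x,z) for x < z:", G_left)
print("G(x,z) for x > z:", G_right)
```

Swapping the two assumed-BC lines for the actual conditions of equation (3) adapts the sketch to the original problem.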
Series Solution¶
Consider a second-order ODE in standard form,
\[\frac{d^2 y}{dz^2} + p(z) \frac{d y}{dz} + q(z)\, y = 0.\]
The Wronskian of this equation is
\[W(z) = y_1(z)\, y_2'(z) - y_2(z)\, y_1'(z),\]
where \(y_1\) and \(y_2\) are linearly independent solutions, i.e., \(c_1 y_1 + c_2 y_2=0\) is only satisfied when \(c_1=c_2=0\). The Wronskian is NOT zero if the solutions are linearly independent.
The singular points of an ODE are the points where \(p(z)\) or \(q(z)\), or both, are singular. For example, the Legendre equation
\[(1-z^2)\frac{d^2 y}{dz^2} - 2z \frac{d y}{dz} + \ell(\ell+1)\, y = 0\]
has three singular points, \(z=\pm 1\) and \(z=\infty\), while \(z=0\) is an ordinary point.
Solution at Ordinary Points¶
Around an ordinary point, here taken to be \(z=0\), the series expansion of the solution can be as simple as
\[y(z) = \sum_{n=0}^{\infty} a_n z^n,\]
which converges in a radius \(R\) where \(R\) is the distance from \(z=0\) to the nearest singular point of our ODE.
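As a concrete illustration (a small sketch, not taken from the notes above), inserting such a series into the Legendre equation quoted earlier gives the two-term recurrence \(a_{n+2} = \frac{n(n+1)-\ell(\ell+1)}{(n+1)(n+2)}\, a_n\); the script below generates the coefficients and shows that for integer \(\ell\) one of the two series terminates, which is how the Legendre polynomials arise.

```python
# Series solution of Legendre's equation about the ordinary point z = 0:
#   a_{n+2} = [n(n+1) - l(l+1)] / [(n+1)(n+2)] * a_n
from fractions import Fraction

def legendre_series_coeffs(l, n_max, a0=1, a1=0):
    """Coefficients a_0 .. a_{n_max} of y = sum_n a_n z^n."""
    a = [Fraction(0)] * (n_max + 1)
    a[0], a[1] = Fraction(a0), Fraction(a1)
    for n in range(n_max - 1):
        a[n + 2] = Fraction(n * (n + 1) - l * (l + 1), (n + 1) * (n + 2)) * a[n]
    return a

# For l = 3 the odd series terminates after z^3 (only a_1 and a_3 survive).
print(legendre_series_coeffs(l=3, n_max=8, a0=0, a1=1))
```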
Solution at Regular Singular Points¶
At a regular singular point, again taken to be \(z=0\), we write a Frobenius series for the solution,
\[y(z) = z^{\sigma}\sum_{n=0}^{\infty} a_n z^n, \qquad a_0 \neq 0.\]
The next task is to find the indicial equation, whose roots give the allowed values of \(\sigma\).
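For reference, assuming \(z=0\) is a regular singular point of the standard form above (so that \(z\,p(z)\) and \(z^2 q(z)\) are analytic there), the indicial equation reads

\[\sigma(\sigma - 1) + p_0\, \sigma + q_0 = 0, \qquad p_0 = \lim_{z\to 0} z\, p(z), \quad q_0 = \lim_{z\to 0} z^2 q(z).\]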
If the roots do not differ by an integer, we just plug each value of \(\sigma\) in and find the two solutions independently.
If the roots differ by an integer, on the other hand, we can in general only plug in the larger root and obtain one solution. For the second solution, we need other techniques, such as the Wronskian method or the derivative method.
The Wronskian method requires two expressions for the Wronskian, which are
\[W(z) = y_1(z)\, y_2'(z) - y_2(z)\, y_1'(z)\]
and (Abel’s formula)
\[W(z) = C \exp\left( - \int^{z} p(u)\, du \right).\]
From the first expression, dividing by \(y_1^2\) and integrating, we have
\[y_2(z) = y_1(z) \int^{z} \frac{W(u)}{y_1(u)^2}\, du.\]
However, we don’t know \(W(z)\) explicitly at this point. We should apply the second expression for the Wronskian,
where the constant \(C\) can be set to 1 as one wishes.
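Combining the two expressions (a standard result; the lower integration limits only shift \(y_2\) by a multiple of \(y_1\)), the second solution can be written as

\[y_2(z) = y_1(z) \int^{z} \frac{1}{y_1(u)^2} \exp\left( - \int^{u} p(v)\, dv \right) du.\]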
TO DO
The derivative method is on my to do list.
Comparing With A General Form¶
For equations that take the following form,
where \(y\equiv y(x)\), we can write down the solutions immediately,
in which \(\mathscr {Z}_p\) denotes a solution of Bessel’s equation, i.e., some kind of Bessel function of order \(p\).
A Pendulum With A Uniformly Changing String Length
As an example, let’s consider the case of a pendulum whose string length changes uniformly in time,
Notice that l is a function of time and
Then the equation can be rewritten as
Comparing with the general form, we have one of the possible solutions
This solution should be
Airy Equation
Time-independent Schrödinger equation with a simple potential,
Comparing it with the general form, we should set
So the two possible solutions are
The general solution is
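The explicit formulas are not reproduced above, but the connection between Airy-type equations and Bessel functions is easy to sanity-check numerically. The sketch below assumes the sign convention \(y'' + x y = 0\) and the candidate solution \(y = \sqrt{x}\, J_{1/3}\!\left(\tfrac{2}{3} x^{3/2}\right)\); both are assumptions made here, not quotes from the notes.

```python
# Numerical sanity check (assumed convention y'' + x*y = 0): the candidate
# solution y(x) = sqrt(x) * J_{1/3}((2/3) x^{3/2}) should make the residual
# y'' + x*y vanish up to finite-difference error.
import numpy as np
from scipy.special import jv

def y(x):
    return np.sqrt(x) * jv(1.0 / 3.0, (2.0 / 3.0) * x**1.5)

x = np.linspace(0.5, 5.0, 200)
h = 1e-4
y_xx = (y(x + h) - 2.0 * y(x) + y(x - h)) / h**2   # central second difference
residual = y_xx + x * y(x)
print(np.abs(residual).max())   # roughly 1e-7, limited by the finite difference
```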
Second Order Differential Equations and Gauss’ Equation¶
Gauss’ equation (the hypergeometric equation) has the form
\[z(1-z)\frac{d^2 y}{dz^2} + \left[ c - (a+b+1)z \right]\frac{d y}{dz} - a b\, y = 0,\]
which has a solution of the hypergeometric-function form
\[y(z) = {}_2F_1(a,b;c;z) = \sum_{n=0}^{\infty} \frac{(a)_n (b)_n}{(c)_n}\frac{z^n}{n!}.\]
The interesting part about this equation is that its Papperitz symbol is
\[y = P\begin{Bmatrix} 0 & 1 & \infty & \\ 0 & 0 & a & z \\ 1-c & c-a-b & b & \end{Bmatrix},\]
in which the first three columns correspond to the singularities at the points \(0,1,\infty\), while the last column just points out that the argument of this equation is \(z\).
This means, in some sense, that the solution to any equation with three regular singular points can be written down directly by comparing the equation with Gauss’ equation. If you care, the actual steps are: change variables, rewrite the equation in the form of Gauss’ equation, and write down the solutions.
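Incidentally, SciPy exposes this function as `scipy.special.hyp2f1`, so the series is easy to play with numerically; the snippet below (an aside, not part of the derivation) checks it against the classic special case \({}_2F_1(1,1;2;z) = -\ln(1-z)/z\).

```python
# Check a well-known special case of the Gauss hypergeometric function.
import numpy as np
from scipy.special import hyp2f1

z = np.array([-0.9, -0.5, -0.1, 0.2, 0.5, 0.9])
print(np.allclose(hyp2f1(1, 1, 2, z), -np.log1p(-z) / z))   # expect True
```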
Integral Equations¶
Neumann Series AKA WKB¶
For a differential equation, whenever the highest derivative is multiplied by a small parameter, try this. More generally, the formalism is the following.
First of all, we use the Hilbert space \(\mathscr L[a,b;w]\), which means the space is defined on \([a,b]\) with a weight \(w\), i.e., the inner product is
\[\braket{f}{g} = \int_a^b f^*(x)\, g(x)\, w(x)\, dx.\]
Quantum Mechanics Books
Notice that this is very different from the notation used in most QM books.
What is the catch? Try to write down \(\braket{x}{u}\). It’s not that different, because one can always go back to the QM notation anyway.
With the help of the Hilbert space, one can always write down the vector form of an operator equation. Suppose we have an equation
\[\hat L \ket{u} = \ket{f},\]
where \(\hat L=\hat I + \hat M\). So the solution is simply
\[\ket{u} = \hat L^{-1}\ket{f} = (\hat I + \hat M)^{-1}\ket{f}.\]
However, it’s not a solution until we find the inverse. The most general approach is the Neumann series method. We require that
\[\int_a^b \left\lvert M(x,t) \right\rvert\, dt \leq \gamma,\]
where \(\gamma\in (0,1)\) and should be independent of \(x\).
As long as this is satisfied, the equation can be solved using the Neumann series, which is an iteration method based on the expansion
\[\ket{u} = (\hat I + \hat M)^{-1}\ket{f} = \sum_{n=0}^{\infty} (-\hat M)^n \ket{f}.\]
As an example, we can solve this equation
We define \(\hat M = \ket{t}\bra{\lambda}\) and check the convergence condition for \(\lambda\).
Step one is always to check the convergence condition.
Step two is to write down the series and the zeroth order. Then we reach the key point. The iteration relation is
\[\ket{u_{n+1}} = \ket{f} - \hat M \ket{u_n}, \qquad \ket{u_0} = \ket{f}.\]
One can write down \(u_1\) immediately,
\[\ket{u_1} = \ket{f} - \hat M \ket{f}.\]
Keep on going.
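The original example equation is not reproduced above, so here is a self-contained numerical sketch of the same iteration for a hypothetical Fredholm equation \(u(x) = f(x) - \lambda \int_0^1 K(x,t)\, u(t)\, dt\), with the illustrative choices \(K(x,t)=xt\), \(f(x)=x\), and \(\lambda = 1/2\) (picked so that the convergence condition clearly holds).

```python
# Neumann-series (successive-approximation) iteration for a Fredholm equation
#   u(x) = f(x) - lam * \int_0^1 K(x, t) u(t) dt,
# with hypothetical choices K(x,t) = x*t, f(x) = x, lam = 0.5.
import numpy as np

lam = 0.5
t = np.linspace(0.0, 1.0, 201)
w = np.full_like(t, t[1] - t[0])        # trapezoid-rule quadrature weights
w[0] *= 0.5
w[-1] *= 0.5
K = np.outer(t, t)                      # K(x, t) = x * t sampled on the grid
f = t.copy()                            # f(x) = x

u = f.copy()                            # zeroth iterate u_0 = f
for _ in range(100):
    u_new = f - lam * K @ (w * u)       # u_{n+1} = f - lam * \int K u_n dt
    if np.max(np.abs(u_new - u)) < 1e-12:
        break
    u = u_new

# The kernel is separable, so the exact answer u(x) = x - x/7 is available;
# the printed deviation should sit at the level of the quadrature error.
print(np.max(np.abs(u - (t - t / 7.0))))
```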
Using Dyads in Vector Space¶
For the same example,
\[\hat L \ket{u} = \ket{f},\]
where \(\hat L=\hat I + \hat M\), we can solve the problem directly in the vector space if the operator is linear.
Suppose we have \(\hat M=\ket{a}\bra{b}\). The equation, in this Hilbert space, is
\[\ket{u} + \ket{a}\braket{b}{u} = \ket{f}.\]
Multiplying through by \(\bra{b}\), we have
\[\braket{b}{u} + \braket{b}{a}\braket{b}{u} = \braket{b}{f},\]
which reduces to a linear equation for the number \(\braket{b}{u}\). We only need to solve for \(\braket{b}{u}\) and then plug it back into the original equation.
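Solving the scalar equation gives \(\braket{b}{u} = \braket{b}{f}/(1+\braket{b}{a})\), and hence \(\ket{u} = \ket{f} - \frac{\braket{b}{f}}{1+\braket{b}{a}}\ket{a}\). A minimal NumPy sketch, with arbitrary illustrative vectors in a finite-dimensional space, confirms this against a brute-force solve.

```python
# Dyad (rank-one) trick: for L = I + |a><b|, the solution of L|u> = |f>
# is |u> = |f> - (<b|f> / (1 + <b|a>)) |a>.  Vectors below are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
a, b, f = rng.normal(size=(3, 5))

L = np.eye(5) + np.outer(a, b)                 # L = I + |a><b|
u_direct = np.linalg.solve(L, f)               # brute-force linear solve
u_dyad = f - (b @ f) / (1.0 + b @ a) * a       # dyad formula
print(np.allclose(u_direct, u_dyad))           # expect True
```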