Lec 1 | MIT 18.03 Differential Equations, Spring 2006

  • 1979 Views
Uploaded By: Adam Rangihana · Category: General · Added on: 08 April 2016.
In this Video:
Description
The Geometrical View of y'=f(x,y): Direction Fields, Integral Curves.
View the complete course: http://ocw.mit.edu/18-03S06

License: Creative Commons BY-NC-SA
More information at http://ocw.mit.edu/terms
More courses at http://ocw.mit.edu
Comments
Adam Rangihana
Separation of variables
From Wikipedia, the free encyclopedia
[Sidebar image: Navier–Stokes differential equations used to simulate airflow around an obstruction.]
In mathematics, separation of variables (also known as the Fourier method) is any of several methods for solving ordinary and partial differential equations, in which algebra allows one to rewrite an equation so that each of two variables occurs on a different side of the equation.

Ordinary differential equations (ODE)
Suppose a differential equation can be written in the form

\frac{d}{dx} f(x) = g(x)\, h(f(x))
which we can write more simply by letting y = f(x):

\frac{dy}{dx} = g(x)\, h(y).
As long as h(y) ≠ 0, we can rearrange terms to obtain:

\frac{dy}{h(y)} = g(x)\, dx,
so that the two variables x and y have been separated. At a simple level, dx (and dy) can be viewed as just a convenient notation that provides a handy mnemonic aid for manipulations. A formal definition of dx as a differential (infinitesimal) is somewhat advanced.

Alternative notation
Some who dislike Leibniz's notation may prefer to write this as

\frac{1}{h(y)} \frac{dy}{dx} = g(x),
but that fails to make it quite as obvious why this is called "separation of variables". Integrating both sides of the equation with respect to x, we have

\int \frac{1}{h(y)} \frac{dy}{dx} \, dx = \int g(x) \, dx, \qquad (1)
or equivalently,

\int \frac{1}{h(y)} \, dy = \int g(x) \, dx
because of the substitution rule for integrals.

If one can evaluate the two integrals, one can find a solution to the differential equation. Observe that this process effectively allows us to treat the derivative \frac{dy}{dx} as a fraction which can be separated. This allows us to solve separable differential equations more conveniently, as demonstrated in the example below.

(Note that we do not need to use two constants of integration in equation (1), as in

\int \frac{1}{h(y)} \, dy + C_1 = \int g(x) \, dx + C_2,
because a single constant C = C_2 - C_1 is equivalent.)
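
As a concrete illustration of the procedure above, here is a minimal SymPy sketch that separates and integrates both sides for the hypothetical choices g(x) = x and h(y) = y^2 (my own example, not from the article), and cross-checks the result against SymPy's ODE solver.

import sympy as sp

x, C = sp.symbols('x C')
y = sp.Function('y')
u = sp.Symbol('u')            # helper symbol standing in for y during integration

g = x                         # assumed g(x) = x
h = u**2                      # assumed h(y) = y**2, written in the helper variable u

lhs = sp.integrate(1 / h, u)  # ∫ dy / h(y)      ->  -1/u
rhs = sp.integrate(g, x) + C  # ∫ g(x) dx + C    ->  x**2/2 + C

# Implicit solution ∫ dy/h(y) = ∫ g(x) dx + C, then solve for y(x) explicitly
implicit = sp.Eq(lhs.subs(u, y(x)), rhs)
explicit = sp.solve(implicit, y(x))[0]
print(explicit)               # e.g. -1/(C + x**2/2)

# Cross-check with SymPy's own solver for dy/dx = x*y**2 (same solution family)
print(sp.dsolve(sp.Eq(y(x).diff(x), g * h.subs(u, y(x)))))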

Example
Population growth is often modeled by the differential equation

\frac{dP}{dt} = kP\left(1 - \frac{P}{K}\right)
where P is the population with respect to time t, k is the rate of growth, and K is the carrying capacity of the environment.

Separation of variables may be used to solve this differential equation.

\frac{dP}{dt} = kP\left(1 - \frac{P}{K}\right)
\int \frac{dP}{P\left(1 - \frac{P}{K}\right)} = \int k \, dt
To evaluate the integral on the left side, we simplify the fraction

\frac{1}{P\left(1 - \frac{P}{K}\right)} = \frac{K}{P(K - P)}
and then decompose the fraction into partial fractions:

\frac{K}{P(K - P)} = \frac{1}{P} + \frac{1}{K - P}
Thus we have

\int \left(\frac{1}{P} + \frac{1}{K - P}\right) dP = \int k \, dt
\ln|P| - \ln|K - P| = kt + C
\ln|K - P| - \ln|P| = -kt - C
\ln\left|\frac{K - P}{P}\right| = -kt - C
\left|\frac{K - P}{P}\right| = e^{-kt - C}
\left|\frac{K - P}{P}\right| = e^{-C} e^{-kt}
\frac{K - P}{P} = \pm e^{-C} e^{-kt}
Let A = \pm e^{-C}.
\frac{K - P}{P} = A e^{-kt}
\frac{K}{P} - 1 = A e^{-kt}
\frac{K}{P} = 1 + A e^{-kt}
\frac{P}{K} = \frac{1}{1 + A e^{-kt}}
P = \frac{K}{1 + A e^{-kt}}
Therefore, the solution to the logistic equation is

P(t) = \frac{K}{1 + A e^{-kt}}
To find A, let t = 0 and P(0) = P_0. Then we have

P_0 = \frac{K}{1 + A e^{0}}
Noting that e^0 = 1, and solving for A, we get

A = \frac{K - P_0}{P_0}
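
For readers who want a quick sanity check, the short SymPy snippet below (symbols chosen here, not taken from the article) verifies that P(t) = K/(1 + A e^{-kt}) with A = (K − P_0)/P_0 satisfies both the logistic equation and the initial condition P(0) = P_0.

import sympy as sp

t, k, K, P0 = sp.symbols('t k K P_0', positive=True)
A = (K - P0) / P0
P = K / (1 + A * sp.exp(-k * t))

# Residual of the logistic equation dP/dt - k P (1 - P/K); should simplify to 0
print(sp.simplify(P.diff(t) - k * P * (1 - P / K)))   # 0
# Initial condition P(0) = P_0; should also simplify to 0
print(sp.simplify(P.subs(t, 0) - P0))                 # 0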
Partial differential equations
The method of separation of variables is also used to solve a wide range of linear partial differential equations with boundary and initial conditions, such as the heat equation, the wave equation, the Laplace equation and the Helmholtz equation.

Homogeneous case
Consider the one-dimensional heat equation. The equation is

\frac{\partial u}{\partial t} - \alpha \frac{\partial^{2} u}{\partial x^{2}} = 0 \qquad (1)
The boundary condition is homogeneous, that is

u\big|_{x=0} = u\big|_{x=L} = 0 \qquad (2)
Let us attempt to find a solution which is not identically zero satisfying the boundary conditions, but with the following property: u is a product in which the dependence of u on x and t is separated, that is:

u(x,t) = X(x)\, T(t). \qquad (3)
Substituting u back into equation (1) and using the product rule,

\frac{T'(t)}{\alpha T(t)} = \frac{X''(x)}{X(x)}. \qquad (4)
Since the right-hand side depends only on x and the left-hand side only on t, both sides are equal to some constant value −λ. Thus:

T'(t) = -\lambda \alpha T(t), \qquad (5)
and

X''(x) = -\lambda X(x). \qquad (6)
Here −λ is the eigenvalue for both differential operators, and T(t) and X(x) are the corresponding eigenfunctions.

We will now show that nontrivial solutions for X(x) cannot occur for values of λ ≤ 0:

Suppose that λ < 0. Then there exist real numbers B, C such that

X(x) = B e^{\sqrt{-\lambda}\, x} + C e^{-\sqrt{-\lambda}\, x}.
From (2) we get

X(0) = 0 = X(L), \qquad (7)
and therefore B = 0 = C, which implies u is identically 0.

Suppose that λ = 0. Then there exist real numbers B, C such that

X(x) = Bx + C.
From (7) we conclude, in the same manner as in case 1, that u is identically 0.

Therefore, it must be the case that λ > 0. Then there exist real numbers A, B, C such that

T(t) = A e^{-\lambda \alpha t},
and

X(x) = B \sin(\sqrt{\lambda}\, x) + C \cos(\sqrt{\lambda}\, x).
From (7) we get C = 0 and that for some positive integer n,

\sqrt{\lambda} = \frac{n\pi}{L}.
This solves the heat equation in the special case that the dependence of u has the special form of (3).
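
As a quick check of this eigenvalue computation (using my own symbols, not code from the article), the SymPy snippet below confirms that X(x) = B sin(√λ x) with √λ = nπ/L satisfies X'' = −λX together with the boundary conditions (7).

import sympy as sp

x, L, B = sp.symbols('x L B', positive=True)
n = sp.symbols('n', integer=True, positive=True)

lam = (n * sp.pi / L) ** 2            # the eigenvalue lambda = (n pi / L)^2
X = B * sp.sin(sp.sqrt(lam) * x)      # candidate eigenfunction

print(sp.simplify(X.diff(x, 2) + lam * X))                     # 0   -> X'' = -lambda X
print(sp.simplify(X.subs(x, 0)), sp.simplify(X.subs(x, L)))    # 0 0 -> conditions (7)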

In general, the sum of solutions to (1) which satisfy the boundary conditions (2) also satisfies (1) and (2). Hence a complete solution can be given as

u(x,t) = \sum_{n = 1}^{\infty} D_n \sin\frac{n\pi x}{L} \exp\left(-\frac{n^2 \pi^2 \alpha t}{L^2}\right),
where D_n are coefficients determined by the initial condition.

Given the initial condition

u\big|_{t=0} = f(x),
we can get

f(x) = \sum_{n = 1}^{\infty} D_n \sin\frac{n\pi x}{L}.
This is the sine series expansion of f(x). Multiplying both sides by \sin\frac{n\pi x}{L} and integrating over [0, L] results in

D_n = \frac{2}{L} \int_0^L f(x) \sin\frac{n\pi x}{L} \, dx.
This method requires that the eigenfunctions in x, here \left\{\sin\frac{n\pi x}{L}\right\}_{n=1}^{\infty}, are orthogonal and complete. In general this is guaranteed by Sturm–Liouville theory.
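
To make the coefficient formula concrete, here is a small NumPy sketch with assumed values L = 1, α = 0.01, a hypothetical initial profile f(x) = x(L − x), and a truncation at 50 modes; it approximates the D_n by numerical integration and evaluates the truncated series solution.

import numpy as np

L, alpha, N = 1.0, 0.01, 50                 # assumed length, diffusivity, number of modes
x = np.linspace(0.0, L, 201)
f = x * (L - x)                             # hypothetical initial condition f(x)

n = np.arange(1, N + 1)
# D_n = (2/L) * ∫_0^L f(x) sin(n pi x / L) dx, approximated with the trapezoidal rule
D = np.array([2.0 / L * np.trapz(f * np.sin(k * np.pi * x / L), x) for k in n])

def u(xq, t):
    """Truncated separation-of-variables solution at points xq and time t."""
    modes = np.sin(np.outer(xq, n) * np.pi / L)          # sin(n pi x / L), shape (len(xq), N)
    decay = np.exp(-(n * np.pi / L) ** 2 * alpha * t)    # exp(-n^2 pi^2 alpha t / L^2)
    return modes @ (D * decay)

print(u(np.array([0.25, 0.5, 0.75]), t=0.0))   # ≈ f at those points
print(u(np.array([0.25, 0.5, 0.75]), t=5.0))   # decayed, smoothed profile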

Nonhomogeneous case
Suppose the equation is nonhomogeneous,

\frac{\partial u}{\partial t} - \alpha \frac{\partial^{2} u}{\partial x^{2}} = h(x,t) \qquad (8)
with the boundary condition the same as (2).

Expand h(x,t), u(x,t) and f(x) into

h(x,t) = \sum_{n=1}^{\infty} h_{n}(t) \sin\frac{n\pi x}{L}, \qquad (9)
u(x,t) = \sum_{n=1}^{\infty} u_{n}(t) \sin\frac{n\pi x}{L}, \qquad (10)
f(x) = \sum_{n=1}^{\infty} b_{n} \sin\frac{n\pi x}{L}, \qquad (11)
where h_n(t) and b_n can be calculated by integration, while u_n(t) is to be determined.

Substituting (9) and (10) back into (8) and considering the orthogonality of the sine functions, we get

u'_{n}(t) + \alpha \frac{n^{2}\pi^{2}}{L^{2}} u_{n}(t) = h_{n}(t),
which is a sequence of linear differential equations that can readily be solved with, for instance, the Laplace transform or an integrating factor. Finally, we get

u_{n}(t) = e^{-\alpha \frac{n^{2}\pi^{2}}{L^{2}} t} \left( b_{n} + \int_{0}^{t} h_{n}(s)\, e^{\alpha \frac{n^{2}\pi^{2}}{L^{2}} s} \, ds \right).
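
The following short SymPy check (with a standing for αn²π²/L² and b for b_n, abbreviations of my own) verifies that this formula indeed satisfies u_n' + a u_n = h_n.

import sympy as sp

t, s, a, b = sp.symbols('t s a b', positive=True)
h = sp.Function('h')                      # stands for the coefficient function h_n(t)

# u_n(t) = e^{-a t} ( b + ∫_0^t h(s) e^{a s} ds ), with a = alpha n^2 pi^2 / L^2, b = b_n
u_n = sp.exp(-a * t) * (b + sp.integrate(h(s) * sp.exp(a * s), (s, 0, t)))

# Residual of the ODE u_n' + a u_n - h_n; should simplify to 0
print(sp.simplify(u_n.diff(t) + a * u_n - h(t)))   # 0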
If the boundary condition is nonhomogeneous, then the expansions (9) and (10) are no longer valid. One has to find a function v that satisfies the boundary condition only, and subtract it from u. The function u − v then satisfies the homogeneous boundary condition, and can be solved with the above method.

In orthogonal curvilinear coordinates, separation of variables can still be used, but some details differ from the Cartesian case. For instance, a regularity or periodicity condition may determine the eigenvalues in place of boundary conditions. See spherical harmonics for an example.

Matrices
The matrix form of the separation of variables is the Kronecker sum.

As an example we consider the 2D discrete Laplacian on a regular grid:

L = \mathbf{D_{xx}} \oplus \mathbf{D_{yy}} = \mathbf{D_{xx}} \otimes \mathbf{I} + \mathbf{I} \otimes \mathbf{D_{yy}},
where \mathbf{D_{xx}} and \mathbf{D_{yy}} are the 1D discrete Laplacians in the x- and y-directions, respectively, and \mathbf{I} are the identities of appropriate sizes. See the main article Kronecker sum of discrete Laplacians for details.
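
A small NumPy illustration of this construction (the grid sizes and the Dirichlet second-difference stencil are my own choices) builds the 2D Laplacian from two 1D Laplacians via the Kronecker sum, and checks the separated-eigenvalue property: every eigenvalue of L is a sum of one eigenvalue of D_xx and one of D_yy.

import numpy as np

def laplacian_1d(m):
    """1D discrete Laplacian (second-difference matrix) with Dirichlet boundaries."""
    return (np.diag(-2.0 * np.ones(m))
            + np.diag(np.ones(m - 1), 1)
            + np.diag(np.ones(m - 1), -1))

nx, ny = 4, 3                                 # assumed grid sizes
Dxx, Dyy = laplacian_1d(nx), laplacian_1d(ny)

# Kronecker sum: L = Dxx ⊕ Dyy = Dxx ⊗ I + I ⊗ Dyy
L2d = np.kron(Dxx, np.eye(ny)) + np.kron(np.eye(nx), Dyy)
print(L2d.shape)                              # (12, 12), i.e. (nx*ny, nx*ny)

# Separated eigenvalues: spec(L) = { mu_i + nu_j } for mu_i in spec(Dxx), nu_j in spec(Dyy)
mu, nu = np.linalg.eigvalsh(Dxx), np.linalg.eigvalsh(Dyy)
print(np.allclose(np.sort(np.add.outer(mu, nu).ravel()),
                  np.sort(np.linalg.eigvalsh(L2d))))    # True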

8 years ago
Adam Rangihana
In mathematics, an integral curve is a parametric curve that represents a specific solution to an ordinary differential equation or system of equations. If the differential equation is represented as a vector field or slope field, then the corresponding integral curves are tangent to the field at each point.
Integral curve - Wikipedia, the free encyclopedia
en.wikipedia.org/wiki/…
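A short Matplotlib/SciPy sketch of this idea (the right-hand side f(x, y) = x − y is a hypothetical example, not one from the lecture) draws a slope field and overlays a few integral curves obtained by numerical integration:

import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

f = lambda x, y: x - y                        # assumed example right-hand side

# Slope field: a short segment of slope f(x, y) at each grid point
X, Y = np.meshgrid(np.linspace(-3, 3, 21), np.linspace(-3, 3, 21))
S = f(X, Y)
plt.quiver(X, Y, 1.0 / np.hypot(1.0, S), S / np.hypot(1.0, S),
           angles='xy', pivot='mid', headwidth=1, headlength=0, headaxislength=0)

# Integral curves: solutions through a few initial points, tangent to the field everywhere
for y0 in (-2.0, 0.0, 2.0):
    sol = solve_ivp(f, (-3, 3), [y0], dense_output=True)
    xs = np.linspace(-3, 3, 200)
    plt.plot(xs, sol.sol(xs)[0], linewidth=2)

plt.xlabel('x'); plt.ylabel('y')
plt.title("Slope field and integral curves of y' = x - y")
plt.show()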
8 years ago
Adam Rangihana
Solutions to the ODE y' = f(x, y)...

y1(x) is a solution of the ODE (Ordinary Differential Equation) y' = f(x, y) only if the graph of y1(x) is an integral curve of the direction field...
8 years ago
Adam Rangihana
First Order Differential Equations
In this chapter we will look at solving first order differential equations. The most general first order differential equation can be written as

\frac{dy}{dt} = f(t, y) \qquad (1)
As we will see in this chapter there is no general formula for the solution to (1). What we will do instead is look at several special cases and see how to solve those. We will also look at some of the theory behind first order differential equations as well as some applications of first order differential equations. Below is a list of the topics discussed in this chapter.

Linear Equations Identifying and solving linear first order differential equations.

Separable Equations Identifying and solving separable first order differential equations. We’ll also start looking at finding the interval of validity from the solution to a differential equation.

Exact Equations Identifying and solving exact differential equations. We’ll do a few more interval of validity problems here as well.

Bernoulli Differential Equations In this section we’ll see how to solve the Bernoulli Differential Equation. This section will also introduce the idea of using a substitution to help us solve differential equations.

Substitutions We’ll pick up where the last section left off and take a look at a couple of other substitutions that can be used to solve some differential equations that we couldn’t otherwise solve.

Intervals of Validity Here we will give an in-depth look at intervals of validity as well as an answer to the existence and uniqueness question for first order differential equations.

Modeling with First Order Differential Equations Using first order differential equations to model physical situations. The section will show some very real applications of first order differential equations.

Equilibrium Solutions We will look at the behavior of equilibrium solutions and autonomous differential equations.

Euler’s Method In this section we’ll take a brief look at a method for approximating solutions to differential equations.
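
As a taste of the Euler's Method topic just listed, here is a minimal Python sketch; the test problem y' = y − t² + 1, y(0) = 0.5 is a standard textbook example, not one taken from these notes.

def euler(f, t0, y0, t_end, h):
    """Approximate y' = f(t, y), y(t0) = y0 on [t0, t_end] with fixed step size h."""
    ts, ys = [t0], [y0]
    t, y = t0, y0
    while t < t_end - 1e-12:
        y += h * f(t, y)         # follow the tangent line for one step
        t += h
        ts.append(t)
        ys.append(y)
    return ts, ys

f = lambda t, y: y - t**2 + 1
ts, ys = euler(f, 0.0, 0.5, 2.0, h=0.2)
for t, y in zip(ts, ys):
    print(f"t = {t:.1f}  y ≈ {y:.4f}")
# For comparison, the exact solution is y(t) = (t + 1)**2 - 0.5 * exp(t).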
8 years ago