Abstract

In this paper, a new analytical method, which is based on the fixed point concept of functional analysis (namely, the fixed point analytical method (FPM)), is proposed to obtain explicit analytical solutions of nonlinear differential equations. The key idea of this approach is to construct a contractive map that replaces the nonlinear differential equation with a sequence of linear differential equations. Usually, these linear equations can be solved relatively easily and have explicit analytical solutions. The FPM differs from existing analytical methods, such as the well-known perturbation technique applied to weakly nonlinear problems, because it does not depend on any small physical or artificial parameter at all; thus, it can handle a wider class of nonlinear problems, including strongly nonlinear ones. Two typical cases are investigated by the FPM in detail, and comparison with numerical results shows that the present method is of high accuracy and efficiency.

Introduction

Nonlinear problems are of great interest to physicists, mathematicians, and engineers, because most physical systems are inherently nonlinear in nature. Nonlinear problems give rise to some important phenomena, such as solitons, shock waves, chaos, and turbulence in fluid flow. Nonlinear problems are more difficult to solve than linear ones. Thus far, very few nonlinear problems are known to have a simple closed form solution. In most cases, we have to resort to an infinite series to express the solution of nonlinear problems.

In general, a nonlinear problem has the form, as follows
(1)
where A is a nonlinear operator and u is a physical variable, such as pressure, velocity, temperature, etc. Here, B[u]=0 is the boundary condition and/or initial condition for u. In most cases, the solution of Eq. (1) is expressed as the sum of an infinite series. Some well-known analytical methods, such as the perturbation technique [1–3] and the δ-expansion method [4,5], have been proposed for nonlinear problems. All of these existing methods depend on a small physical or artificial parameter ɛ (or δ) and decompose u as a series in ɛ
(2)
where un is determined by a linear operator L[·]
(3)
On the right-hand side of Eq. (3), fn depends only on the known u0, u1, …, un-1. The original complex nonlinear operator A[·] is replaced by an infinite number of linear problems governed by the linear operator L[·], which is handled relatively easily. Usually, the linear operator L[·] is related to the original nonlinear operator A[·] and depends on the mathematical method to be used. In practice, the infinite set of linear problems cannot all be solved, but the first N linear problems already give an approximation to the solution u
(4)

where UN is called the Nth-order approximation to u.

Although the aforementioned idea is rather clear, there are some important questions to be answered:

  1. (A)

    How do we choose the small parameter ɛ, especially when the small parameter ɛ does not obviously exist in a problem?

  2. (B)

    Does the series (2) converge to the exact solution u?

  3. (C)

    What is the influence of parameter ɛ on the accuracy of UN, especially when ɛ becomes large?

It is hard to answer the preceding questions in general. Roughly speaking, the parameter ɛ is usually a measure of the strength of the nonlinearity: the larger ɛ is, the stronger the nonlinearity. Almost all perturbation techniques rest on the so-called small parameter assumption, with the approximate solution expressed as a series in the small parameter, so these techniques are applicable only to weakly nonlinear problems, and the accuracy of the perturbation series (4) deteriorates as ɛ grows. What is more, if the convergence of series (4) is not guaranteed, adding higher-order correction terms only makes the solution worse, so the accuracy of the perturbation series cannot be improved without limit simply by introducing more correction terms. Moreover, in some cases the exact solution of the original problem is regular while the perturbation series solution (4) is singular, and the singularity becomes stronger as the order of UN increases; more complicated techniques must then be used to remove it. Furthermore, the perturbation series (4) usually has a limited region of validity and breaks down elsewhere, in so-called regions of nonuniformity. To render these expansions uniformly valid, many complex and subtle techniques [1–3] must be used.

Here, the following two examples elucidate the aforementioned questions. The first example is the Lighthill equation [1], which is widely discussed in connection with the Poincaré–Lighthill–Kuo (PLK) method. The equation explicitly contains a parameter ɛ and can be written as follows
(5)
The exact closed-form solution ye has the form
(6)
The exact solution ye is regular and ye>0 when 0≤x≤1. The straightforward expansion gives the approximation ys-e
(7)

The series ys-e is divergent, because the zeroth-order solution of the straightforward expansion is singular at x=0, and the singularity grows stronger at higher orders.

The second example is the well-known Duffing equation [6], which describes the cyclic motion of a free oscillation in a conservative system
(8)
with the initial boundary condition
(9)
where the dot denotes the derivative with respect to the time t. Here, ɛ is a dimensionless quantity and is a measure of the strength of the nonlinearity. Let ω and T (=2π/ω) denote the frequency and the period of the solution u(t); both are functions of the parameter ɛ. The exact period of the Duffing oscillation, Te, can be expressed in terms of the complete elliptic integral of the first kind [2]
(10)
The straightforward expansion gives the first order approximation Us-e,1
(11)

which contains the so-called mixed secular term t sin t, which tends to infinity as t→∞. Eq. (11) is therefore valid only for times such that ɛt<O(1). In the higher-order approximations, mixed secular terms always appear. Moreover, Us-e,1 is nonperiodic, although the Duffing equation (8) describes a cyclic motion. The breakdown of the straightforward expansion is due to its failure to account for the dependence of the frequency on the nonlinearity.

The two preceding examples show that the straightforward expansion is invalid in many cases, and that complicated and subtle perturbation techniques, such as the PLK method and the method of multiple scales [1–3], must be introduced to remove the singularity, the mixed secular term, and so on.

It is clear that the so-called small parameter assumption is the source of all of these limitations. Therefore, it is necessary to develop a new kind of analytical method that does not depend on any small parameter at all.

Fixed Point Analytical Method

The fixed point is a basic concept in functional analysis [7,8]. It is widely used by pure mathematicians to investigate the existence and uniqueness of solutions. The famous Newton's method for nonlinear algebraic equations is based on the Banach fixed point theorem. The zero point x* of a nonlinear function f(x), i.e., f(x*)=0, can be approached by the following iteration procedure
(12)

where T[x] is a map. The iteration procedure starts from some initial value x0, and the solution sequence {xn | n=0,1,2,3,…} will usually converge to x*, provided that the initial guess x0 is close enough to the unknown zero point x*. In functional analysis, the zero point is called the fixed point of the contractive map T[x]=x-f(x)/f'(x).

Sometimes, the convergence of Eq. (12) may be slow or the iteration procedure may be unstable. To overcome these difficulties, a real nonzero free parameter β, called the relaxation factor, is introduced into the iteration procedure
(13)

The optimal value of the relaxation factor β usually depends on the problem to be solved.
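As an illustration of the relaxed iteration (13), the short Python sketch below applies it to a cubic test function; the test function, starting point, and β values are illustrative choices only and are not taken from the paper.

```python
# Minimal sketch of the relaxed Newton iteration x_{n+1} = x_n - beta * f(x_n)/f'(x_n).
# The test function, starting point, and beta values are illustrative choices.

def relaxed_newton(f, fprime, x0, beta=1.0, tol=1e-12, max_iter=100):
    """Iterate x <- x - beta*f(x)/f'(x) until |f(x)| < tol."""
    x = x0
    for n in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x, n
        x = x - beta * fx / fprime(x)
    return x, max_iter

f = lambda x: x**3 - 2.0*x - 5.0
fp = lambda x: 3.0*x**2 - 2.0

root, iters = relaxed_newton(f, fp, x0=3.0, beta=1.0)      # plain Newton (beta = 1)
root_r, iters_r = relaxed_newton(f, fp, x0=3.0, beta=0.7)  # under-relaxed iteration
print(root, iters, root_r, iters_r)
```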

Here, the fixed point concept is extended to nonlinear differential equations and the fixed point analytical method (FPM) is proposed. For the nonlinear differential equation (1), a contractive map T[·] is constructed as follows
(14)
where LC[·] is a linear continuous bijective operator, called the linear characteristic operator of the nonlinear operator A[·], and LC-1[·] is its inverse. Here, β is the relaxation factor, which can improve the convergence and stability of the iteration procedure. From Eq. (14), an iteration procedure is built up as follows
(15)
(16)
The initial guess u0 satisfies
(17)
Usually, the linear equation (15) is much easier to handle than the original complex nonlinear equation (1). From Eqs. (15)–(17), a solution sequence {un | n=0,1,2,3,…} is obtained, and un is called the nth-order approximation to u. The choice of the linear characteristic operator LC[·] is so free that there exist operators LC[·] which ensure the contraction property of the map T[u]=u-β·LC-1[A[u]]. Moreover, the relaxation factor β can noticeably improve the convergence and stability of the iteration procedure (15), as will be discussed in detail in Sec. 3.2. Taking the limit of both sides of Eq. (15), it is obvious that the limit value u* is exactly the zero point of the nonlinear operator A[·]
(18)

u* is called a fixed point of the contractive map T[u]=u-β·LC-1[A[u]].

The fixed point analytical method, unlike the perturbation technique, does not depend on any small parameter at all. Moreover, this method provides a convenient way to obtain a solution sequence {un | n=0,1,2,3,…} that approaches the exact solution of the original nonlinear equation as closely as desired. Furthermore, the convergence of this solution sequence is usually so rapid that its first few terms already give a sufficiently accurate approximation, as will be shown in Sec. 3.
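The bookkeeping of the iteration (15)–(17) can be sketched in a few lines of SymPy. The toy problem u'+u²=1, u(0)=0 (exact solution u=tanh x), the choice LC[u]=u'+u, and β=1 used below are illustrative choices made only to show the structure of the iteration; they are not taken from the examples of Sec. 3.

```python
# Schematic SymPy sketch of the FPM iteration u_{n+1} = u_n - beta * Lc^{-1}[ A[u_n] ],
# applied to a toy problem u' + u**2 = 1, u(0) = 0 (exact solution u = tanh x),
# with the illustrative choice Lc[u] = u' + u and beta = 1.  Toy problem and Lc are
# for demonstration only; they are not taken from the paper's examples.
import sympy as sp

x, t = sp.symbols('x t', real=True)

def A(u):                      # nonlinear operator of the toy problem
    return sp.diff(u, x) + u**2 - 1

def Lc(u):                     # linear characteristic operator (illustrative choice)
    return sp.diff(u, x) + u

def Lc_inv(g):                 # solve w' + w = g(x), w(0) = 0, by an integrating factor
    return sp.exp(-x) * sp.integrate(sp.exp(t) * g.subs(x, t), (t, 0, x))

beta = 1
u = sp.Integer(0)              # initial guess u0: Lc[u0] = 0 with u0(0) = 0
for n in range(3):             # u_{n+1} = u_n - beta * Lc^{-1}[A[u_n]]
    u = sp.simplify(u - beta * Lc_inv(A(u)))

print(u)                                      # explicit analytical 3rd-order approximation
print(sp.N(u.subs(x, 1)), sp.N(sp.tanh(1)))   # compare with the exact value tanh(1)
```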

The general solution of the linear equation LC[u]=0 is called the kernel of the linear characteristic operator LC[·]. The linear characteristic operator LC[·] cannot be determined in a fully systematic way; however, LC[·] should obey some fundamental rules:

  1. (A)

    The operator LC[·] is a linear continuous bijective operator.

  2. (B)

The operator LC[·] should share as many properties of A[·] as possible.

  3. (C)

    The operator LC[·] should ensure that the map T[u]=u-β·LC-1[A[u]] is a contractive map.

Usually, the contraction property of the map T[u]=u-β·LC-1[A[u]] is difficult to prove directly; however, it can be heuristically ensured by plotting the so-called β-curves, which will be discussed in detail in Sec. 3.2.

In general, the steps of applying the fixed point analytical method to a differential equation are:

  1. (A)

Analyze the properties of the original differential equation, such as the distribution of singular points, the asymptotic behavior, the continuity, and the locations of maxima and minima.

  2. (B)

Choose a basis function system {ek | k=0,1,2,3,…} that possesses as many of the preceding properties as possible, and construct a linear characteristic operator LC[·] whose kernel consists of members of this basis function system. Usually, the basis function system is not unique, so there is a great deal of freedom in constructing the linear characteristic operator LC[·].

  3. (C)

Solve the iteration procedure (15) and obtain the solution sequence {un | n=0,1,2,3,…}, whose members are expressed as sums of the basis functions, i.e., un = Σk ak ek, n=0,1,2,3,…, where the ak are coefficients.

  4. (D)

Plot the so-called β-curves to determine the optimal value of β, which can greatly improve the convergence and stability of the iteration procedure.

As in Newton's method, a good initial guess u0 usually accelerates the convergence of the iteration procedure, but how should a good initial guess be chosen? In the framework of the FPM, every approximation un is expressed as a sum of the basis functions, so the initial guess is largely determined by the linear characteristic operator, the basis function system, and the initial/boundary conditions. A simple and convenient initial guess u0 can be chosen as in Eq. (17).

Application of the Fixed Point Analytical Method

In this section, the fixed point analytical method is used to investigate the previous examples. All of the calculations are implemented on a laptop PC with 2 GB RAM and Intel(R) Core(TM)2 Duo 1.80GHz CPU.

The First Example.

Let us start from the first example
(19)
The nonlinear term is ɛyy' and the parameter ɛ appears only in the highest-order term, so the solution has a boundary layer at x=0 for some values of ɛ. The value y(0) adjusts as the parameter ɛ changes, i.e., y(0)=y(0;ɛ). In the neighborhood of x=0, i.e., 0≤x≤δ(ɛ), the original equation can be approximated by the following linear equation
(20)
Equation (20) has the solution as follows
(21)
where C1 is a constant of integration. Solution (21) suggests that the basis function system can take the form {(x+λ)-k | k=0,1,2,…}, and the linear characteristic equation can be constructed as follows
(22)

where λ is a free scale parameter that will be determined later. The kernel of the linear characteristic equation, yker=1/(x+λ), satisfies LC[C/(x+λ)]=0, where C is a constant of integration. Clearly, the kernel yker belongs to the basis function system {(x+λ)-k | k=0,1,2,…}. To satisfy LC[y0]=0 and y0(1)=2, the initial guess y0 takes the form y0=2(1+λ)/(x+λ).
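For concreteness, one linear operator whose kernel is spanned by 1/(x+λ) is, for example, LC[y] = y' + y/(x+λ); the short SymPy check below verifies the kernel property and the boundary value of y0. This operator is only an illustrative possibility and is not claimed to be the one defined in Eq. (22).

```python
# Illustrative check: Lc[y] = y' + y/(x + lam) annihilates C/(x + lam) and the
# initial guess y0 = 2(1 + lam)/(x + lam), which also satisfies y0(1) = 2.
# This Lc is an example with the required kernel, not necessarily Eq. (22).
import sympy as sp

x, lam, C = sp.symbols('x lam C', positive=True)

def Lc(y):                       # an illustrative characteristic operator
    return sp.diff(y, x) + y / (x + lam)

y_ker = C / (x + lam)            # candidate kernel element
y0 = 2 * (1 + lam) / (x + lam)   # initial guess quoted in the text

print(sp.simplify(Lc(y_ker)))    # 0  ->  C/(x+lam) lies in the kernel
print(sp.simplify(Lc(y0)))       # 0  ->  y0 also satisfies Lc[y0] = 0
print(y0.subs(x, 1))             # 2  ->  boundary condition y0(1) = 2
```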

Now the iteration procedure is built up as follows
(23)
(24)
The linear equation (24) is easily integrated, and the first few members of the solution sequence {yn | n=0,1,2,3,…} are listed here
(25)
(26)
The higher-order approximations can be deduced with symbolic computation software, such as MAXIMA, MAPLE, or MATHEMATICA. After solving the first several higher-order approximations, it is remarkable to find that yn(x) can be explicitly expressed in the general form

where the coefficients an,k depend on the values of ɛ and λ.

From the boundary condition in Eq. (19), the value y(1) is fixed, but y(0) adjusts as the parameter ɛ changes. The accuracy of the values of y(0) and y'(0) is critical, because they determine whether a boundary layer exists at x=0. From Eq. (6), the exact values of ye(0) and y'e(0) are
(27)

Let us consider some typical values of the parameter ɛ.

ɛ=0.001.

First, we regard the value of λ as a free parameter and consider the convergence of yn(0), which depends on λ. We plot the so-called λ-curves of yn(0). From these λ-curves, we can straightforwardly and heuristically determine the valid region of λ, which corresponds to the line segments almost parallel to the horizontal axis. In the neighborhood of λ=0.045, the approximations y2(0), y3(0), and y4(0) almost converge to the same value, and the valid convergence regions of λ enlarge as the order n increases, as shown in Fig. 1. In the following calculation, we set λ=0.045 when ɛ=0.001 and obtain the values of yn(0) and y'n(0), as shown in Table 1. The comparison between the exact solution ye(x) and the nth-order approximation yn(x) is shown in Fig. 2. It is clear that a boundary layer exists at x=0 and the slope at x=0 is very large, y'e(0)=-977.662. As shown in Fig. 3, the maximum relative error ([yn(x)-ye(x)]/ye(x)) in the whole region 0≤x≤1 is about 8.5%, 2.0%, and 0.5% for y1(x), y2(x), and y3(x), respectively, which shows that the convergence of the solution sequence {yn | n=0,1,2,3,…} to ye(x) is rapid and uniform.

Fig. 1
λ-curves of yn(0). The valid convergence regions of λ correspond to the neighborhood of λ=0.045, (ɛ=0.001).
Table 1

Comparison of the exact value ye(0),ye'(0) with the nth-order approximation yn(0),yn'(0), (ɛ=0.001)

Order n | yn(0)   | ye(0)   | yn(0)/ye(0) − 1 | y'n(0)   | y'e(0)   | y'n(0)/y'e(0) − 1
n = 1   | 44.7435 | 44.7661 | −0.05%          | −938.948 | −977.662 | −4.0%
n = 2   | 44.7659 |         | −4.5 × 10−4%    | −977.929 |          | 0.027%
n = 3   | 44.7661 |         | 0%              | −977.666 |          | 4.1 × 10−4%
n = 4   | 44.7661 |         | 0%              | −977.662 |          | 0%
Fig. 2
Comparison of the exact solution ye(x) with the nth-order approximation yn(x), (ɛ=0.001)
Fig. 3
Relative error [yn(x)-ye(x)]/ye(x) in the whole region 0≤x≤1, (ɛ=0.001)

ɛ=1.

As shown in Fig. 4, in the neighborhood of λ=2.5 the approximations y2(0), y3(0), and y4(0) almost converge to the same value, and the valid convergence regions of λ enlarge as the order n increases. We set λ=2.5 when ɛ=1 and obtain the values of yn(0) and y'n(0), as shown in Table 2.

Fig. 4
λ-curves of yn(0). The valid convergence regions of λ correspond to the neighborhood of λ=2.5, (ɛ=1).
Table 2

Comparison of the exact value ye(0),ye'(0) with the nth-order approximation yn(0),yn'(0), (ɛ=1)

Order n | yn(0)   | ye(0)   | yn(0)/ye(0) − 1 | y'n(0)     | y'e(0)    | y'n(0)/y'e(0) − 1
n = 1   | 2.43200 | 2.44949 | −0.71%          | −0.438400  | −0.591752 | −26%
n = 2   | 2.44908 |         | −0.017%         | −0.591555  |           | −0.033%
n = 3   | 2.44948 |         | −4.1 × 10−4%    | −0.591842  |           | 0.015%
n = 4   | 2.44949 |         | 0%              | −0.591755  |           | 5.1 × 10−4%

The comparison between the exact solution ye(x) and the nth-order approximation yn(x) is shown in Fig. 5. The slope at x=0 is relatively small, y'e(0)=-0.591752, and the boundary layer at x=0 disappears. As shown in Fig. 6, the maximum relative error ([yn(x)-ye(x)]/ye(x)) in the whole region 0≤x≤1 is about 0.7%, 0.05%, and 0.006% for y1(x), y2(x), and y3(x), respectively, which again shows that the convergence of the solution sequence {yn | n=0,1,2,3,…} to ye(x) is rapid and uniform.

Fig. 5
Comparison of the exact solution ye(x) with the nth-order approximation yn(x), (ɛ=1)
Fig. 6
Relative error [yn(x)-ye(x)]/ye(x) in the whole region 0≤x≤1, (ɛ=1)

ɛ=1000.

Using a similar method, we set the scale parameter λ=2000 when ɛ=1000 and obtain the values of yn(0) and y'n(0), as shown in Table 3. The comparison between the exact solution ye(x) and the nth-order approximation yn(x) is shown in Fig. 7. The slope at x=0 is even smaller, y'e(0)=-5.00125×10-4, and the profile of y(x) is almost a horizontal line. As shown in Fig. 8, the maximum relative error in the whole region 0≤x≤1 is so tiny, about 1.2×10-5% for y1(x), that y1(x) already gives an accurate enough approximation to y(x) when ɛ=1000.

Table 3

Comparison of the exact value ye(0),ye'(0) with the nth-order approximation yn(0),yn'(0), (ɛ=1000)

Order n | yn(0)   | ye(0)   | yn(0)/ye(0) − 1 | y'n(0)          | y'e(0)          | y'n(0)/y'e(0) − 1
n = 1   | 2.00050 | 2.00050 | 0%              | −4.99750 × 10−4 | −5.00125 × 10−4 | −0.075%
n = 2   | 2.00050 |         | 0%              | −5.00125 × 10−4 |                 | 0%
n = 3   | 2.00050 |         | 0%              | −5.00125 × 10−4 |                 | 0%
Fig. 7
Comparison of the exact solution ye(x) with the nth-order approximation yn(x), (ɛ=1000)
Fig. 8
Relative error [y1(x)-ye(x)]/ye(x) in the whole region 0≤x≤1, (ɛ=1000)

The Effect of Scale Parameter λ.

From the preceding three cases, ɛ=0.001, ɛ=1, and ɛ=1000, it is clear that the valid value of the free parameter λ adjusts as ɛ changes. In fact, λ can be taken as a function of the parameter ɛ. Furthermore, it is found that with λ=√(4ɛ²+2ɛ), even the 2nd-order approximation y2(x), given by Eq. (26), provides a simple, uniformly valid approximation of sufficient accuracy. The left boundary value y2(0)
(28)

agrees well with the exact value ye(0) over the whole range 0<ɛ<∞, as shown in Fig. 9. The maximum relative error, y2(0)/ye(0)-1, occurs at ɛ=1/6

Fig. 9
Comparison of the exact value ye(0) with the 2nd-order approximation y2(0) given by Eq. (28)
(29)

This shows that the free scale parameter λ provides a convenient way to ensure the convergence of the solution sequence. Unlike perturbation approximations, Eqs. (26) and (28) are valid for all parameter values 0<ɛ<∞ and thus do not rely on a small parameter.
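As a quick numerical cross-check, the choice λ=√(4ɛ²+2ɛ) quoted above reproduces, to the precision shown, the λ values read off from the λ-curves in the three cases considered; a few lines of Python (illustrative only):

```python
# Check that lam(eps) = sqrt(4*eps**2 + 2*eps) is consistent with the lambda values
# selected from the lambda-curves for eps = 0.001, 1 and 1000 (0.045, 2.5 and 2000).
import math

for eps, lam_curve in [(0.001, 0.045), (1.0, 2.5), (1000.0, 2000.0)]:
    lam = math.sqrt(4.0 * eps**2 + 2.0 * eps)
    print(f"eps = {eps:>7}:  lam(eps) = {lam:.4f}   (lambda-curve value: {lam_curve})")
```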

The Second Example

ɛ=1.

The Duffing equation is again discussed here
(30)

We have seen that the straightforward expansion (11) contains a mixed secular term and is valid only for ɛt<O(1). Without loss of generality, we first consider the case ɛ=1; the procedure for large values of ɛ is similar.

The Duffing equation describes the cyclic motion of a free oscillation in a conservative system. Let ω and T (=2π/ω) denote the frequency and the period of the solution u(t). Introducing the following transformation
(31)
Eq. (30) becomes
(32)
where the prime denotes the derivative with respect to τ and Ω=ω². Although the frequency ω is unknown, u(τ) is a function with period 2π, i.e., u(τ)=u(τ+2π). This suggests that the basis function system can take the form {cos(kτ) | k=0,1,2,3,…} or {sin(kτ) | k=0,1,2,3,…}. Furthermore, in view of the initial conditions u(0)=1, u'(0)=0, the basis function system is finally chosen as
(33)
According to the form of the basis functions system, the linear characteristic equation is constructed as follows
(34)
The kernel elements of the characteristic equation, uker,1=cos(τ) and uker,2=sin(τ), satisfy
(35)

where C1 and C2 are constants. Because uker,2=sin(τ) does not belong to the basis function system {cos(kτ) | k=0,1,2,3,…}, we discard uker,2=sin(τ) and set the constant C2=0.

To satisfy LC[u0]=0 and the initial condition u0(0)=1,u'0(0)=0, the initial guess u0 takes the following form
(36)
The iteration procedure is built up as follows
(37)
(38)

where β is the relaxation factor, which is a nonzero free parameter used to improve the convergence, and Ωn is the nth-order approximation to the exact value Ω.

The first order approximation u1 satisfies the following equation
(39)
The linear equation (39) is easy to integrate. Inspection of the right-hand side of Eq. (39) shows that the presence of the term cos(τ) would make the first-order approximation u1 contain the mixed secular term τ sin(τ), which does not belong to the basis function system {cos(kτ) | k=0,1,2,3,…}; therefore, the coefficient of the term cos(τ) must be set to zero
(40)
and the first-order approximation u1 has the following form
(41)
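The resonance-removal step can be reproduced symbolically. The SymPy sketch below assumes that the transformed equation (32) takes the conventional conservative form Ωu″ + u + ɛu³ = 0 (an assumption consistent with the initial conditions quoted above and with the periods in Table 4); projecting the residual of u0 = cos τ onto cos τ and setting the coefficient to zero gives Ω = 1 + 3ɛ/4, whose period 2π/√Ω indeed reproduces the first-order values T1 listed in Table 4.

```python
# Sketch of the secular-term (resonance) removal at first order, assuming the transformed
# Duffing equation (32) has the form  Omega*u'' + u + eps*u**3 = 0  (assumption, consistent
# with u(0) = 1, u'(0) = 0 and with Table 4).  Killing the cos(tau) component of the
# residual of u0 = cos(tau) (Eq. (36)) fixes Omega at first order.
import sympy as sp

tau, eps, Omega = sp.symbols('tau epsilon Omega', positive=True)

u0 = sp.cos(tau)                                    # initial guess, Eq. (36)
residual = Omega * sp.diff(u0, tau, 2) + u0 + eps * u0**3

# project the residual onto cos(tau) over one period; this coefficient must vanish
c1 = sp.integrate(residual * sp.cos(tau), (tau, 0, 2 * sp.pi)) / sp.pi
Omega1 = sp.solve(sp.Eq(c1, 0), Omega)[0]
print(Omega1)                                       # -> 3*epsilon/4 + 1

for e in (1, 1000):                                 # first-order period 2*pi/sqrt(Omega)
    print(e, sp.N(2 * sp.pi / sp.sqrt(Omega1.subs(eps, e))))   # 4.74964..., 0.229277...
```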
The second order approximation u2 satisfies the following equation
(42)
Similarly, the coefficient of the term cos(τ) on the right-hand side of Eq. (42) must be set to zero to remove the mixed secular term τ sin(τ)
(43)
(44)
and the second-order approximation u2 has the following form
(45)

The higher-order approximations can be deduced with symbolic computation software in a similar manner. We find that the nth-order approximation Ωn depends on the relaxation factor β, so the value of β affects the convergence of Ωn. The so-called β-curves of Ωn are plotted in Fig. 10 to investigate this effect. From these β-curves, the valid region of β, which corresponds to the line segments almost parallel to the horizontal axis, can be found in a straightforward, heuristic way. In the neighborhood of β=0.6, the approximate values Ω3, Ω4, and Ω5 almost converge to the same value, and the valid convergence regions of β enlarge as the order n increases, as shown in Fig. 10. It is clear that the relaxation factor β can greatly improve the convergence of the solution sequence. Although the β-curve method is heuristic, it easily gives the optimal value of β, which ensures the contraction property of the map T[u]=u-β·LC-1[A[u]] and the convergence of the solution sequence {un | n=0,1,2,3,…}.

Fig. 10
β-curves of Ωn. The valid convergence regions of β correspond to the neighborhood of β=0.6, (ɛ=1)

In the following calculation, we set β=0.6 and obtain the period approximations Tn, as shown in Table 4. The exact period Te (Eq. (10)) is calculated by numerical integration with high accuracy. The results show that the convergence is very rapid and that the relative error between the second-order approximation T2 and the exact period Te is only 0.0020%. The comparison of the second-order approximation u2(t) with the numerical result unum(t) is shown in Figs. 11 and 12. The maximum relative error [u2(t)/unum(t)-1] is about 0.027%, so a uniformly valid, explicit analytical approximation with high accuracy is

Table 4

Comparison of the exact period Te with the nth-order approximation Tn, (ɛ=1 and ɛ=1000)

Order n | Tn (ɛ=1)  | Te (ɛ=1)  | Tn/Te − 1 (ɛ=1) | Tn (ɛ=1000) | Te (ɛ=1000) | Tn/Te − 1 (ɛ=1000)
n = 1   | 4.749642  | 4.768022  | −0.39%          | 0.2292767   | 0.2343533   | −2.2%
n = 2   | 4.768117  |           | 0.0020%         | 0.2343005   |             | −0.023%
n = 3   | 4.768027  |           | 1.0 × 10−4%     | 0.2343581   |             | 0.0020%
n = 4   | 4.768022  |           | 0%              | 0.2343532   |             | −4.3 × 10−5%
Fig. 11
Comparison of the numerical result with the 2nd-order approximation u2(t), (ɛ=1)
Fig. 12
Relative error [u2(t)/unum(t)-1]; the maximum relative error is about 0.027% for u2(t), (ɛ=1)
(46)
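For reference, the exact period Te quoted in Table 4 can be reproduced with a few lines of SciPy, again under the assumption that the Duffing equation (8) has the conventional form ü + u + ɛu³ = 0 with u(0)=1, u̇(0)=0. Under that assumption, one standard form of the exact period (10) is Te = 4K(m)/√(1+ɛ) with m = ɛ/(2(1+ɛ)) in SciPy's parameter convention; the first-order FPM period 2π/√(1+3ɛ/4) obtained above is listed alongside for comparison with Table 4.

```python
# Numerical evaluation of the exact Duffing period, assuming u'' + u + eps*u**3 = 0,
# u(0) = 1, u'(0) = 0.  scipy.special.ellipk takes the parameter m = k**2, so the period
# is Te = 4*K(m)/sqrt(1+eps) with m = eps/(2*(1+eps)).  The first-order FPM period
# 2*pi/sqrt(1 + 3*eps/4) is printed alongside for comparison with Table 4.
import numpy as np
from scipy.special import ellipk

def duffing_period_exact(eps):
    m = eps / (2.0 * (1.0 + eps))
    return 4.0 * ellipk(m) / np.sqrt(1.0 + eps)

def duffing_period_first_order(eps):
    return 2.0 * np.pi / np.sqrt(1.0 + 0.75 * eps)

for eps in (1.0, 1000.0):
    Te = duffing_period_exact(eps)
    T1 = duffing_period_first_order(eps)
    print(f"eps = {eps:>6}:  Te = {Te:.6f},  T1 = {T1:.6f},  T1/Te - 1 = {T1/Te - 1:+.2%}")
```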

ɛ=1000.

For large values of ɛ the procedure is similar; we briefly give the results for ɛ=1000 for comparison. With the aid of the β-curves, it is clear that the valid region of β lies in the neighborhood of β=0.0015, as shown in Fig. 13. The comparison between the exact period Te and the nth-order approximation Tn is shown in Table 4. The difference between the second-order approximation u2(t) and the numerical result is tiny, and the maximum relative error [u2(t)/unum(t)-1] is about 0.34%, as shown in Figs. 14 and 15.

Fig. 13
β-curves of Ωn. The valid convergence regions of β correspond to the neighborhood of β=0.0015, (ɛ=1000).
Fig. 14
Comparison of the numerical result with the 2nd-order approximation u2(t), (ɛ=1000)
Fig. 15
Relative error [u2(t)/unum(t)-1]; the maximum relative error is about 0.34% for u2(t), (ɛ=1000)

The Effect of Relaxation Factor β.

From the two cases ɛ=1 and ɛ=1000, it is clear that the relaxation factor β can greatly improve the convergence of the solution sequence. In fact, the relaxation factor β need not be a constant; it can be a function of the parameter ɛ. Furthermore, it is found that with β=4/(4+3ɛ), even the 2nd-order approximation u2(t)
(47)
provides a simple, uniformly valid approximation of sufficient accuracy. The 2nd-order approximation of the period, T2,
(48)

agrees well with the exact result Te over the whole range 0<ɛ<∞, as shown in Fig. 16. The maximum relative error, T2/Te-1, occurs as ɛ→+∞

Fig. 16
Comparison of the exact period Te with the 2nd-order approximation T2 given by Eq. (48)
(49)

where Γ(z) = ∫₀^{+∞} t^(z−1) exp(−t) dt is the Gamma function. This shows that the relaxation factor provides a convenient way to ensure the convergence of the solution sequence.
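The appearance of the Gamma function here can be traced to the large-ɛ behavior of the exact period: under the same assumed Duffing form as above, √ɛ·Te → 4K(1/2) = Γ(1/4)²/√π as ɛ→+∞. A brief numerical check (illustrative only):

```python
# Large-eps behaviour of the exact Duffing period (same assumed form as above):
# sqrt(eps)*Te -> 4*K(1/2) = Gamma(1/4)**2 / sqrt(pi)  as eps -> +infinity,
# which is a plausible origin of the Gamma function appearing in Eq. (49).
import numpy as np
from scipy.special import ellipk, gamma

limit_elliptic = 4.0 * ellipk(0.5)                  # 4*K(m = 1/2)
limit_gamma = gamma(0.25)**2 / np.sqrt(np.pi)       # Gamma(1/4)**2 / sqrt(pi)
print(limit_elliptic, limit_gamma)                  # both ~ 7.4163

for eps in (1e2, 1e4, 1e6):                         # sqrt(eps)*Te approaches the limit
    Te = 4.0 * ellipk(eps / (2.0 * (1.0 + eps))) / np.sqrt(1.0 + eps)
    print(eps, np.sqrt(eps) * Te)
```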

From Eqs. (47) and (48), even the 2nd-order approximation is accurate enough; moreover, unlike perturbation approximations, Eqs. (47) and (48) are valid for all parameter values 0<ɛ<∞ and thus do not rely on a small parameter.

Conclusion

In this paper, we propose the fixed point analytical method (FPM), by which explicit analytical solutions of nonlinear differential equations can be obtained, and two typical examples are discussed in detail as applications of this new analytical method. The results show that:

  1. (A)

The FPM does not depend on any small parameter, so it can solve a wider class of nonlinear problems, including strongly nonlinear ones, which traditional perturbation techniques handle with difficulty.

  2. (B)

With the FPM, a convergent solution sequence is easy to obtain. Usually, the convergence is so rapid that the first few low-order approximations are already accurate enough.

  3. (C)

    The approximate analytical solutions obtained by FPM are uniformly valid.

Thus far, only nonlinear ordinary differential equations have been investigated by the FPM in this paper; however, the FPM is also capable of handling nonlinear systems and partial differential equations, which will be discussed in future work.

Acknowledgment

The work is supported by Xi’an Jiaotong University Education Foundation for Young Teachers. Some subsequent work is partly supported by the NSFC (National Natural Science Foundation of China) under Contract No. 11102150.

References

1. Nayfeh, A. H., 1973, Perturbation Methods, Wiley, New York.
2. Nayfeh, A. H., 1981, Introduction to Perturbation Techniques, Wiley, New York.
3. Smith, D. R., 1985, Singular-Perturbation Theory: An Introduction With Applications, Cambridge University Press, Cambridge, England.
4. Bender, C. M., Milton, K. A., Pinsky, S. S., and Simmons, L. M., 1989, "A New Perturbative Approach to Nonlinear Problems," J. Math. Phys., 30(7), pp. 1447–1455. doi: 10.1063/1.528326
5. Bender, C. M., 1991, "New Approach to the Solution of Nonlinear Problems," Large Scale Structures in Nonlinear Physics, J.-D. Fournier and P.-L. Sulem, eds., Springer, Berlin, Heidelberg, pp. 190–210.
6. Nayfeh, A. H., and Mook, D. T., 1979, Nonlinear Oscillations, Wiley, New York.
7. Zeidler, E., 1986, Nonlinear Functional Analysis and Its Applications, I: Fixed-Point Theorems, Springer-Verlag, Berlin.
8. Zeidler, E., 1995, Applied Functional Analysis: Applications to Mathematical Physics, Springer, New York.