Fixed point iterative method for finding the root of an equation
The code below gives the root and the iteration at which it occurs. The code goes into an infinite loop when the function contains any logarithmic or exponential term, but works fine with algebraic and trigonometric functions.
Can anyone point out the error?
Thank you in advance.
% let the equation whose root we want to find be x^3 - 5*x - 7 = 0
% rearranged (fixed point) equation example: f = @(x) (5*x + 7)^(1/3)
function [root,iteration] = fixedpoint(f,a) % input: fixed point form of the function and an initial approximation
if nargin < 1 % the function handle is required
    error('At least one input argument is required.');
end
if nargin < 2 % if no initial approximation is given, start from a = 0
    a = 0;
end
tol = 1e-10;   % successive iterates this close count as converged
maxit = 1000;  % guard against divergent rearrangements (e.g. some log/exp forms)
x(1) = f(a);
i = 1;
while i < maxit
    x(i+1) = f(x(i));
    if abs(x(i+1) - x(i)) < tol % exact equality may never be reached in floating point
        root = x(i+1); % the converged iterate, not x(i-1), which fails when i == 1
        iteration = i;
        return;
    end
    i = i + 1;
end
error('No convergence after %d iterations; try a different rearrangement.', maxit);
end
8 comments
Alan Stevens
on 30 Sep 2020
@ Sindar In the following
% let the equation whose root we want to find be x^3-5*x-7 = 0
% simplified equation example: f = @(x) (5*x + 7)^(1/3)
the function is a rearranged form of x^3 - 5x - 7 = 0 so x = (5x + 7)^(1/3).
@ Milind This one does work. However, it's not always so easy to find a rearrangement that does work. Fixed point iterations often diverge. You might have to try different rearrangements of your equation to get the procedure to work (even then, there is no guarantee).
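To see both behaviours side by side, here is a small sketch using the equation from the question (the iteration counts and starting point are my own choices): the rearrangement x = (5*x + 7)^(1/3) has |derivative| < 1 near the root and converges, while the equally valid rearrangement x = (x^3 - 7)/5 has |derivative| > 1 there and diverges.

```matlab
% Two fixed point rearrangements of x^3 - 5*x - 7 = 0
g1 = @(x) (5*x + 7)^(1/3);   % |g1'| < 1 near the root -> converges
g2 = @(x) (x^3 - 7)/5;       % |g2'| > 1 near the root -> diverges

x = 2;                        % same initial guess for both
for k = 1:50
    x = g1(x);
end
disp(x)                       % settles near the real root, about 2.747

x = 2;
for k = 1:8
    x = g2(x);
end
disp(x)                       % magnitude grows without bound
```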
Milind Amga
on 30 Sep 2020
Edited: Milind Amga on 30 Sep 2020
John D'Errico
on 30 Sep 2020
Yes. There is a guarantee. Let me do some writing.
Alan Stevens
on 30 Sep 2020
@ John
Nice explanation. In practice, with a complicated function, it can be extremely difficult to ensure an arrangement such that abs(derivative near root) < 1. Is it always theoretically possible to find such an arrangement?
John D'Errico
on 30 Sep 2020
Admittedly, if it was easy to do in some completely general way, we would probably see a fixed point solver in the optimization toolbox, or at least in MATLAB proper as a complement to tools like fminbnd and fzero.
You need to find a transformation in a fixed point form of the function around the root, where the absolute derivative is bounded by 1. And to be able to do that, you want to know where the root of interest is, and also be able to differentiate the function, and to know the derivative is bounded. But wait! If you knew all of that, then just use a Newton method. Or, better yet, a tool like fzero, which will be more robust to some problems and still has good convergence properties.
In the end, the answer really is to just use fzero, or whatever solver is appropriate. That is what I try to preach time and again - that while learning to use methods like fixed point iteration is a good thing for a student, after you get past being a student, use the right tools and don't write your own.
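For comparison, here is what that looks like in practice (a sketch; fzero is a standard MATLAB root finder, and the equation is the one from the question, used directly with no rearrangement needed):

```matlab
% Solve x^3 - 5*x - 7 = 0 directly with fzero
F = @(x) x.^3 - 5*x - 7;
root = fzero(F, 2);          % scalar initial guess near the root
fprintf('root = %.10f, residual = %.2e\n', root, F(root))
```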
But can we use fixed point iteration on some general problem? Let's see: find a root of the quadratic function x^2 - 3*x + 2. Yeah, I know, creative as hell on my part. :)
syms x
solve(x^2 - 3*x + 2)
ans =
1
2
Two roots where I would expect to find them. Will my fixed point solver converge? Which root will it work for?
F = @(x) x.^2 - 3*x + 2;
We can make a good guess from this plot:
syms x
fplot(diff(x^2 - 3*x + 2) + 1)
yline(-1,'r');
yline(1,'r');
xline(1,'g')
xline(2,'g')

I've plotted the derivative of my fixed point function, along with horizontal bands at +/- 1.
As you can see, the root at x==1 is within the band, but the root at x==2 is not.
So, I'll predict my solver will converge for the root at x==1, but NOT at x==2.
>> [xfinal,fval,ferr,itercount] = myfp(F,1.1,0.001,10,1);
Current x, Fval, Ferr
1.01 -0.0899999999999999 0.0899999999999999
1.0001 -0.00990000000000002 0.00990000000000002
1.00000001 -9.99900000002718e-05 9.99900000002718e-05
>> [xfinal,fval,ferr,itercount] = myfp(F,2.1,0.001,10,1);
Current x, Fval, Ferr
2.21 0.109999999999999 0.109999999999999
2.4641 0.254099999999998 0.254099999999998
3.14358880999999 0.679488809999997 0.679488809999997
5.5949729863572 2.4513841763572 2.4513841763572
22.1137767453524 16.5188037589952 16.5188037589952
446.791568452582 424.67779170723 424.67779170723
198731.122503413 198284.330934961 198284.330934961
39493661591.2216 39493462860.0991 39493462860.0991
1.55974930580295e+21 1.55974930576345e+21 1.55974930576345e+21
2.43281789695278e+42 2.43281789695278e+42 2.43281789695278e+42
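John's myfp is not shown in the thread; the following is a hypothetical reconstruction consistent with the printed output and the plot above, iterating x <- x + F(x) so that any root of F is a fixed point (the signature and stopping rule are assumptions on my part):

```matlab
% Hypothetical myfp-style fixed point driver (not John's actual code)
function [xfinal, fval, ferr, itercount] = myfp(F, x0, tol, maxiter, verbose)
    x = x0;
    for itercount = 1:maxiter
        x = x + F(x);             % fixed point step g(x) = x + F(x)
        fval = F(x);
        ferr = abs(fval);
        if verbose
            fprintf('%.15g  %.15g  %.15g\n', x, fval, ferr);
        end
        if ferr < tol
            break;                % residual small enough: converged
        end
    end
    xfinal = x;
end
```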
Milind Amga
on 30 Sep 2020
Edited: Milind Amga on 30 Sep 2020
John D'Errico
on 30 Sep 2020
@Milind Amga - you misunderstood my point, which was to Alan's question.
You would rarely want to use fixed point iteration in the real world. (I have, but only on rare occasions.) Yes, you are solving this as requested because it was homework. It teaches you both about MATLAB as well as teaching you one method for solving a problem, even though that method is not really a very good one in practice, because it has some serious flaws as outlined. (In the end, I hope your teacher spends the time for you to understand why the method works, as well as the limitations on the method and why you probably want to use better methods in the end. It is the latter parts that often get skipped over.)
I made the statement about fzero because I see far too many scientists and engineers using crude methods they learned in school to solve a problem, not knowing that good code already exists to do the same thing. For example, long ago, I was asked to help a scientist to do a computation using determinants to solve a problem. They were solving 2x2 systems of equations, and they wanted to change it so they could solve 5x5 or 6x6 systems. But they did not know how to form the necessary higher order determinants.
My answer was no, I would not help them to do what they asked me to do, but that I would show them how to solve the problem using better methods.
Accepted Answer
More Answers (2)
Alan Stevens
on 30 Sep 2020
I defer to John on the guarantee. Certainly it's possible with x - exp(-x) = 0:
% x - exp(-x) = 0; rearrange as x = exp(-x)
x0 = 0.5; % Initial guess
f = @(x) exp(-x);
tol = 10^-10;
flag = true;
its = 0;
while flag == true && its < 100
    x = f(x0);
    if abs(x - x0) < tol
        flag = false;
    end
    x0 = x;
    its = its + 1;
end
disp([x, x-exp(-x), its])
1 comment
Milind Amga
on 30 Sep 2020
Akash G M
on 19 Dec 2020
0 votes
Find the root of the equation cos(x) - 1.3x = 0, taking initial approximation as 0.2 by fixed point iteration method
1 comment
% cos(x) - 1.3 x = 0; rearrange as x = cos(x)/1.3
x0 = 0.2; % Initial guess
f = @(x) cos(x)/1.3;
tol = 10^-10;
flag = true;
its = 0;
while flag == true && its < 100
    x = f(x0);
    if abs(x - x0) < tol
        flag = false;
    end
    x0 = x;
    its = its + 1;
end
disp([x, x-f(x), its])
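As a quick cross-check of the fixed point result (a sketch; fzero is a standard MATLAB root finder, and the starting point is the one given in the question):

```matlab
% solve cos(x) - 1.3*x = 0 directly, without rearrangement
g = @(x) cos(x) - 1.3*x;
root = fzero(g, 0.2);        % same initial approximation as above
fprintf('root = %.10f, residual = %.2e\n', root, g(root))
```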