Never write basic code to solve a problem that has already been solved by people who are experts at it. In the case of curve fitting, there are MANY ways to solve this problem in virtually one line, and they do so more efficiently and more accurately than you did.
So if you have the Curve Fitting Toolbox (CFT), then it is as simple as:
mdl = fittype('exp1')
mdl =
General model Exp1:
mdl(a,b,x) = a*exp(b*x)
fittedmdl = fit(x',y',mdl)
fittedmdl =
General model Exp1:
fittedmdl(x) = a*exp(b*x)
Coefficients (with 95% confidence bounds):
a = 65.91 (61.41, 70.4)
b = -0.1373 (-0.1528, -0.1217)
plot(x,y,'ro')
hold on
plot(fittedmdl)
If you are doing any modeling at all, the CFT is well worth the investment.
If you lack the CFT, then the Statistics Toolbox has regress or nlinfit. regress can solve the linearized form that you used, while nlinfit fits the exponential model directly.
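For example, a minimal sketch with nlinfit (the starting guess beta0 here is just an assumption I picked, not something derived from your data):
modelfun = @(b,xd) b(1)*exp(b(2)*xd);   % y = a*exp(b*x)
beta0 = [60, -0.1];                     % rough starting guess (assumed)
beta = nlinfit(x(:), y(:), modelfun, beta0)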
If you lack that toolbox, then the Optimization Toolbox will do it, using lsqnonlin or lsqcurvefit.
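Again, just a sketch, with an assumed starting point p0:
expfun = @(p,xd) p(1)*exp(p(2)*xd);     % y = a*exp(b*x)
p0 = [60, -0.1];                        % assumed starting values
p = lsqcurvefit(expfun, p0, x(:), y(:))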
Finally, if you lack any toolbox at all, then STILL don't write it yourself!
close
P1 = polyfit(x,log(y),1);   % fit a line to log(y): log(y) = A*x + log(C)
C = exp(P1(2));
A = P1(1);
plot(x,y,'ro',x,C*exp(A*x),'b-')
And if you refuse to use something as simple as polyfit, then STILL DON'T write the code the way you did.
P1 = [x(:),ones(numel(x),1)]\log(y(:))   % same linearized fit, via backslash
P1 =
-0.137994352412978
4.191285976189
C = exp(P1(2));
A = P1(1);
I would add that your data only barely supports a negative exponential model. As well, you appear to have a data problem at that last point, which likely causes you to mis-estimate your parameters.
So, if you exclude that possibly spurious point, we see:
fittedmdl = fit(x(1:end-1)',y(1:end-1)',mdl)
fittedmdl =
General model Exp1:
fittedmdl(x) = a*exp(b*x)
Coefficients (with 95% confidence bounds):
a = 63.95 (61.06, 66.84)
b = -0.129 (-0.1396, -0.1184)
plot(x,y,'ro')
hold on
plot(fittedmdl)