How Do I Fit a Model with 3 Unknown Parameters?
Jordan Mcconnell
on 15 Jan 2021
Commented: Star Strider
on 15 Jan 2021
I am trying to fit the Vogel-Fulcher-Tammann equation to a set of measured data. The equation has 3 unknown parameters that I would like to determine. The equation I am trying to fit is as follows:
n = A*exp(b/(t - T))
t and n are known. But how would I go about fitting this model to find A, b, and T?
Cheers,
Jordan
0 Comments
Accepted Answer
Star Strider
on 15 Jan 2021
There are several functions available to estimate the parameters. This uses fminsearch because everyone has it:
VFTfcn = @(b,t) b(1).*exp(b(2)./(t-b(3))); % VFT Model Function, b = [A, b, T]
t = 0:9; % Create Data Vector (placeholder: use your measured t here)
n = rand(size(t)); % Create Data Vector (placeholder: use your measured n here)
B0 = rand(1,3); % Initial Parameter Estimates
B = fminsearch(@(b) norm(n - VFTfcn(b,t)), B0); % Minimise the residual norm to estimate the parameters
figure
plot(t, n, 'p')
hold on
plot(t, VFTfcn(B,t), '-r')
hold off
grid
text(3, 0.9*max(ylim), sprintf('n = %.1f e^{%.1f/(t-%.1f)}', B), 'HorizontalAlignment','center', 'VerticalAlignment','bottom')
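If you also have the Optimization Toolbox, lsqcurvefit is another option. This is a minimal sketch (assuming that toolbox is installed) that reuses the VFTfcn, B0, t, and n variables defined above:
% Minimal sketch (assumes Optimization Toolbox is available): lsqcurvefit
% takes the model function directly as fun(b, xdata) plus the data vectors,
% so VFTfcn can be reused without wrapping it in a residual norm.
Blsq = lsqcurvefit(VFTfcn, B0, t, n); % Estimated [A, b, T]
disp(Blsq)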
2 Comments
Star Strider
on 15 Jan 2021
As always, my pleasure!
All nonlinear parameter estimation and optimisation routines need a set of starting parameters, essentially giving the routine a place in the parameter space to begin its search. The initial parameter estimates are those values.
The fminsearch function uses a ‘derivative-free’ algorithm and does not use gradient descent (such as Levenberg-Marquardt), so it is less likely to encounter a local minimum than the others. However, the choice is important for all gradient-descent algorithms, since they can end up in a local minimum rather than the desired global minimum if the ‘wrong’ initial parameter estimates are chosen, and perhaps will not find any minimum at all. Global search algorithms of various kinds (in the Global Optimization Toolbox) search a much wider area of the parameter space, and are therefore much more likely to find the global minimum than local methods are.
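As a rough illustration of that point, here is a sketch (base MATLAB only, reusing VFTfcn, t, and n from the answer above) that runs fminsearch from several random initial estimates and keeps the best result:
% Rough multi-start sketch: run fminsearch from several random starting
% points and keep the parameter set with the smallest residual norm.
objfcn = @(b) norm(n - VFTfcn(b,t)); % Objective: residual norm
bestfval = Inf;
for k = 1:20 % 20 starts is an arbitrary choice
    [Bk, fk] = fminsearch(objfcn, rand(1,3));
    if fk < bestfval
        bestfval = fk;
        Bbest = Bk; % Best parameters found so far
    end
end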
More Answers (1)
Mathieu NOE
on 15 Jan 2021
Hello Jordan
See the example below:
x = 0:100;
N = length(x);
tau = 10; % time scale of relaxation
x_asym = 4; % asymptotic value of the observable as x -> infinity
y = x_asym*(1 - exp(-x/tau)); % uncorrupted sought-for signal
sig = 0.25; % noise strength
% Use AR(1) to corrupt the signal instead of white noise.
phi = 0.6; % AR coefficient
xi1 = sig*randn(1,N);
pert1 = filter(1,[1 -phi],xi1);
y_noisy = y+pert1;
% exponential fit method
% the code gives good results with the template equation y = a.*(1-exp(b.*(x-c)))
f = @(a,b,c,x) a.*(1-exp(b.*(x-c)));
obj_fun = @(params) norm(f(params(1), params(2), params(3),x)-y_noisy);
sol = fminsearch(obj_fun, [y_noisy(end),0,0]);
a_sol = sol(1);
b_sol = sol(2);
c_sol = sol(3);
y_fit = f(a_sol, b_sol,c_sol, x);
figure
plot(x,y,'-+b',x,y_noisy,'r',x,y_fit,'-ok');
legend('signal','signal+noise','exp fit');
0 Comments