# fminunc not converging objective function

3 views (last 30 days)
Xavier on 14 May 2024
Commented: Xavier on 17 May 2024
I'm trying to minimize the following objective over observation data (which is synthetic for now):
Observation generation:
Fs = 80E6;
lags = (-10:10)';
tau = lags/Fs;
bw = 0.1;
[obs, obs_A, obs_C] = Raa_parabola(lags, bw);
function [y, A, C] = Raa_parabola(lags,bw)
%RAA_PARABOLA generates a parabola function given a lag vector.
% lags: lag vector
% bw: bandwidth at RF
x2 = 1/bw;
x1 = -x2;
A = 1/x1/x2;
B = 0;
C = A*x1*x2;
y = A*lags.^2 + B*lags + C;
end
This generates a parabola given a lag vector and a bandwidth bw (Fig. 1).
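As a quick sanity check (my own sketch, not part of the original post), the coefficients follow directly from the definitions in Raa_parabola: with x2 = 1/bw and x1 = -1/bw, we get A = -bw^2 and C = 1, so the parabola is zero at lags = ±1/bw:

```matlab
% Sketch of a sanity check (assumes Raa_parabola from this post is on the path).
bw = 0.1;
lags = (-10:10)';
[y, A, C] = Raa_parabola(lags, bw);
assert(abs(A - (-bw^2)) < 1e-12)  % A = 1/(x1*x2) = -bw^2 = -0.01
assert(abs(C - 1)       < 1e-12)  % C = A*x1*x2 = 1
% The parabola vanishes at lags = +/-1/bw = +/-10, the edges of this lag window.
```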
f = 420E3;
ph = exp(1j*2*pi*f*tau);
obs = obs.*ph;
The objective function therefore has two parabola parameters plus one final parameter for the phase, defined as:
function F = myfunc1(x, o, lags, tau)
m_mag = x(1)*lags.^2 + x(2); % magnitude
m_phase = exp(1j*2*pi*x(3)*tau); % phase
m = m_mag.*m_phase; % model
e = m - o; % error = model - observations
F = e'*e; % sum of squared errors (real scalar for complex e)
end
The idea is a least-squares fit, using F as the summed squared error.
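One useful check before optimizing (my own suggestion, not from the thread): evaluating the objective at the true parameters should give essentially zero, confirming that the model can represent the data exactly:

```matlab
% Assumes obs, lags, tau, bw and f from the snippets above are in scope.
F_true = myfunc1([-bw^2, 1, f], obs, lags, tau);
% F_true should be ~0 (round-off only), so the optimizer's task is purely
% to reach this point -- any failure is a convergence issue, not model mismatch.
```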
x0 = [0,0,0];
fun = @(x) myfunc1(x, obs, lags, tau);
options = optimoptions('fminunc', 'Display', 'iter', 'StepTolerance', 1e-20, 'FunctionTolerance', 1e-9, ...
'MaxFunctionEvaluations', 300, 'DiffMinChange', 1e-5);
[x,fopt] = fminunc(fun, x0,options);
Iteration  Func-count       f(x)      Step-size   First-order optimality
    0           4        10.6666                         514
    1          20         9.35481   9.89358e-06           18.5
    2          32         9.07342   91                    67.4
    3          36         6.89697   1                    367
    4          40         3.97278   1                    539
    5          44         1.22032   1                    427
    6          48         0.282891  1                    152
    7          52         0.167975  1                     17.1
    8          56         0.164316  1                      0.215
    9          60         0.164294  1                      0.101
   10          64         0.164294  1                      0.00341
   11          68         0.164294  1                      2.85e-05
Local minimum found. Optimization completed because the size of the gradient is less than the value of the optimality tolerance.
fprintf("Observation coefficients: A = %.2f, C = %.2f\n",obs_A,obs_C)
Observation coefficients: A = -0.01, C = 1.00
disp(x);
-0.0100 0.9940 0.0000
y = x(1)*lags.^2 + x(2);
y = y.*exp(1j*2*pi*x(3)*tau);
figure(1); clf;
subplot(4,1,1); plot(lags,real(obs),'LineWidth',2);hold on;
subplot(4,1,2); plot(lags,imag(obs),'LineWidth',2);hold on;
subplot(4,1,3); plot(lags,abs(obs),'LineWidth',2);hold on;
subplot(4,1,4); plot(lags,angle(obs)*180/pi,'LineWidth',2);hold on;
subplot(4,1,1); plot(lags,real(y),'LineWidth',1.5); legend('Obs','Model');
subplot(4,1,2); plot(lags,imag(y),'LineWidth',1.5); legend('Obs','Model');
subplot(4,1,3); plot(lags,abs(y),'LineWidth',2);hold on;
subplot(4,1,4); plot(lags,angle(y)*180/pi,'LineWidth',2);hold on;
Notice that x(1) and x(2) converge to valid values (as far as I'm concerned), but the third parameter should be 420E3.
Where is my misconception?
Thank you very much.
##### 1 Comment
Torsten on 14 May 2024
Edited: Torsten on 14 May 2024
Fs = 80E6;
lags = (-10:10)';
tau = lags/Fs;
bw = 0.1;
[obs, obs_A, obs_C] = Raa_parabola(lags, bw);
f = 420E3;
ph = exp(1j*2*pi*f*tau);
obs = obs.*ph;
x0 = [0,0,0];
fun = @(x) myfunc1(x, obs, lags, tau);
options = optimoptions('fminunc', 'Display', 'iter', 'StepTolerance', 1e-20, 'FunctionTolerance', 1e-9, ...
'MaxFunctionEvaluations', 300, 'DiffMinChange', 1e-5);
[x,fopt] = fminunc(fun, x0,options)
Iteration  Func-count       f(x)      Step-size   First-order optimality
    0           4        10.6666                         514
    1          20         9.35481   9.89358e-06           18.5
    2          32         9.07342   91                    67.4
    3          36         6.89697   1                    367
    4          40         3.97278   1                    539
    5          44         1.22032   1                    427
    6          48         0.282891  1                    152
    7          52         0.167975  1                     17.1
    8          56         0.164316  1                      0.215
    9          60         0.164294  1                      0.101
   10          64         0.164294  1                      0.00341
   11          68         0.164294  1                      2.85e-05
Local minimum found. Optimization completed because the size of the gradient is less than the value of the optimality tolerance.
x = 1x3
-0.0100 0.9940 0.0000
fopt = 0.1643
fprintf("Observation coefficients: A = %.2f, C = %.2f\n",obs_A,obs_C)
Observation coefficients: A = -0.01, C = 1.00
y = x(1)*lags.^2 + x(2);
y = y.*exp(1j*2*pi*x(3)*tau);
figure(1); clf;
subplot(4,1,1); plot(lags,real(obs),'LineWidth',2);hold on;
subplot(4,1,2); plot(lags,imag(obs),'LineWidth',2);hold on;
subplot(4,1,3); plot(lags,abs(obs),'LineWidth',2);hold on;
subplot(4,1,4); plot(lags,angle(obs)*180/pi,'LineWidth',2);hold on;
subplot(4,1,1); plot(lags,real(y),'LineWidth',1.5); legend('Obs','Model');
subplot(4,1,2); plot(lags,imag(y),'LineWidth',1.5); legend('Obs','Model');
subplot(4,1,3); plot(lags,abs(y),'LineWidth',2);hold on;
subplot(4,1,4); plot(lags,angle(y)*180/pi,'LineWidth',2);hold on;
function [y, A, C] = Raa_parabola(lags,bw)
%RAA_PARABOLA generates a parabola function given a lag vector.
% lags: lag vector
% bw: bandwidth at RF
x2 = 1/bw;
x1 = -x2;
A = 1/x1/x2;
B = 0;
C = A*x1*x2;
y = A*lags.^2 + B*lags + C;
end
function F = myfunc1(x, o, lags, tau)
m_mag = x(1)*lags.^2 + x(2); % magnitude
m_phase = exp(1j*2*pi*x(3)*tau); % phase
m = m_mag.*m_phase; % model
e = m - o; % error = model - observations
F = e'*e; % sum of squared errors (real scalar for complex e)
end


### Accepted Answer

Matt J on 14 May 2024
Edited: Matt J on 14 May 2024
You need a better choice of units for x(3), at least for the optimization step. Below, I modify the objective function so that x(3) is measured in MHz instead of Hz.
Fs = 80E6;
lags = (-10:10)';
tau = lags/Fs;
bw = 0.1;
[obs, obs_A, obs_C] = Raa_parabola(lags, bw);
f = 420E3;
ph = exp(1j*2*pi*f*tau);
obs = obs.*ph;
s=[1,1,1e6]; %unit scaling
x0 = [0,0,0];
fun = @(x) myfunc1(x.*s, obs, lags, tau);
options = optimoptions('fminunc', 'Display', 'iter', 'StepTolerance', 1e-20, ...
'FunctionTolerance', 1e-9);
[x,fopt] = fminunc(fun, x0,options);
Iteration  Func-count       f(x)        Step-size   First-order optimality
    0           4        10.6666                           515
    1          20         9.35481     9.87901e-06           18.5
    2          36         6.79415     820                   15.7
    3          40         0.152421    1                      0.748
    4          52         0.151956    91                     1.09
    5          56         0.147438    1                      7.25
    6          60         0.137857    1                     15.1
    7          64         0.114705    1                     25.6
    8          68         0.0731064   1                     32.5
    9          72         0.0262799   1                     25.9
   10          76         0.00495409  1                     11.6
   11          80         0.000372359 1                      2.86
   12          84         3.07524e-06 1                      0.132
   13          88         1.19738e-08 1                      0.00523
   14          92         6.62468e-12 1                      0.000373
Local minimum found. Optimization completed because the size of the gradient is less than the value of the optimality tolerance.
x=x.*s;
x1=x(1),x2=x(2),x3=x(3)
x1 = -0.0100
x2 = 1.0000
x3 = 4.2000e+05
fopt
fopt = 6.6247e-12
function [y, A, C] = Raa_parabola(lags,bw)
%RAA_PARABOLA generates a parabola function given a lag vector.
% lags: lag vector
% bw: bandwidth at RF
x2 = 1/bw;
x1 = -x2;
A = 1/x1/x2;
B = 0;
C = A*x1*x2;
y = A*lags.^2 + B*lags + C;
end
function F = myfunc1(x, o, lags, tau)
m_mag = x(1)*lags.^2 + x(2); % magnitude
m_phase = exp(1j*2*pi*x(3)*tau); % phase
m = m_mag.*m_phase; % model
e = m - o; % error = model - observations
F = e'*e; % sum of squared errors (real scalar for complex e)
end
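A possible alternative to rescaling by hand (a sketch under the same setup, not part of Matt J's answer): fminunc accepts a 'TypicalX' option that tells the solver the expected magnitude of each variable, which it uses to scale its finite-difference steps:

```matlab
% Hedged sketch: 'TypicalX' hints the variable magnitudes to fminunc.
options = optimoptions('fminunc', 'Display', 'iter', ...
    'TypicalX', [0.01, 1, 1e6], ...  % rough sizes of x(1), x(2), x(3)
    'StepTolerance', 1e-20, 'FunctionTolerance', 1e-9);
fun = @(x) myfunc1(x, obs, lags, tau);  % no manual rescaling of x here
[x, fopt] = fminunc(fun, [0, 0, 0], options);
```

Explicit rescaling, as in the answer above, is generally the more robust choice; 'TypicalX' only influences the finite-difference scaling.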
##### 6 Comments (4 older comments hidden)
Matt J on 17 May 2024
The suggestion would depend on what you think you don't know.
Xavier on 17 May 2024
Does it help to know more about the algorithms the optimization functions use in order to use them better, or is the basic knowledge in the MATLAB documentation sufficient? I don't think the latter is enough.
I'm thinking of supplying the gradient to the algorithm, but I don't know how to compute it beforehand.
Also, would these algorithms work with a large number of unknowns, say 2000 unknowns and 4000 equations, or would that be too expensive to compute?
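Regarding supplying the gradient: for this particular objective it can be written down analytically (a sketch of my own, named myfunc1_grad here, based on the model in myfunc1). With m = (x1·lags² + x2)·exp(j·2π·x3·tau), e = m − o and F = eᴴe, each partial derivative is dF/dxk = 2·Re(dmkᴴ·e):

```matlab
function [F, g] = myfunc1_grad(x, o, lags, tau)
% Objective and analytic gradient, for use with
% optimoptions('fminunc','SpecifyObjectiveGradient',true).
ph = exp(1j*2*pi*x(3)*tau);
m  = (x(1)*lags.^2 + x(2)).*ph;   % model
e  = m - o;                       % residual
F  = real(e'*e);                  % sum of squared errors
dm1 = lags.^2 .* ph;              % dm/dx(1)
dm2 = ph;                         % dm/dx(2)
dm3 = m .* (1j*2*pi*tau);         % dm/dx(3)
g = 2*[real(dm1'*e); real(dm2'*e); real(dm3'*e)];
end
```

For problems with thousands of unknowns, an analytic (or sparse) gradient like this is usually what makes gradient-based solvers practical; for least-squares problems of that size, lsqnonlin with a supplied Jacobian may scale better than fminunc.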


### Categories

Find more on Conditional Mean Models in Help Center and File Exchange

R2023a
