# Code shows matrix is singular to working precision. How can I fix it?

1 view (last 30 days)
Adriano on 13 Oct 2023
Commented: Jon on 13 Oct 2023
```matlab
x = [-2 0 1 2 3]
x = 1×5
    -2     0     1     2     3
y = [2.5, 1, 2.5, 5.7, 11.6]
y = 1×5
    2.5000    1.0000    2.5000    5.7000   11.6000
A = [x.^0 x.^2]
A = 1×10
     1     1     1     1     1     4     0     1     4     9
alin = A'*A\A'*log(y)
Warning: Matrix is singular to working precision.
alin = 10×5
   NaN   NaN   NaN   NaN   NaN
   NaN   NaN   NaN   NaN   NaN
   NaN   NaN   NaN   NaN   NaN
   NaN   NaN   NaN   NaN   NaN
   NaN   NaN   NaN   NaN   NaN
   NaN   NaN   NaN   NaN   NaN
   NaN   NaN   NaN   NaN   NaN
   NaN   NaN   NaN   NaN   NaN
   NaN   NaN   NaN   NaN   NaN
   NaN   NaN   NaN   NaN   NaN
a = [exp(alin(1)); alin(2)]
a = 2×1
   NaN
   NaN
r = a(1)*exp(a(2)*x.^2) - y
r = 1×5
   NaN   NaN   NaN   NaN   NaN
RMSE = norm(r,2)/sqrt(5)
RMSE = NaN
```
##### 0 Comments


### Answers (1)

Jon on 13 Oct 2023
Consider your matrix A'A. If you take a column vector (here A') and multiply it by a row vector (here A), then every column of A'A is just a scaled copy of A', so the columns are linearly dependent and A'A is therefore singular.

A'A\A' tries to solve the equation A'A x = A' and fails because A'A is singular.

I'm not really sure what you are trying to accomplish, or whether it makes any sense, but if you want a solution to A'A x = A' you can use x = pinv(A'*A)*A'. Alternatively, you could just recognize that, by the construction of A'A, a solution of A'A x = A' is x = [1;0;0;0;0;0;0;0;0;0], since the first column of A'A is A(1)*A' = A'.
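To see this concretely outside MATLAB, here is a small Python/NumPy sketch (an illustration, not from the thread; NumPy stands in for MATLAB) showing that the outer product A'A built from the 1×10 row vector A has rank 1, and that `pinv` still recovers a solution of A'A x = A':

```python
import numpy as np

# Row vector A (1x10), mirroring the original code where
# A = [x.^0 x.^2] concatenates along the row instead of as columns.
x = np.array([-2, 0, 1, 2, 3], dtype=float)
A = np.concatenate([x**0, x**2]).reshape(1, -1)   # shape (1, 10)

# A'A is an outer product: every column is a scaled copy of A',
# so the 10x10 matrix has rank 1 and is singular.
AtA = A.T @ A
print(np.linalg.matrix_rank(AtA))    # 1

# pinv gives the minimum-norm solution of A'A x = A'.
xsol = np.linalg.pinv(AtA) @ A.T
print(np.allclose(AtA @ xsol, A.T))  # True
```

Since A(1) = 1, the first standard basis vector e1 also solves the system, which is the hand-picked solution mentioned above.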
##### 2 Comments
Askic V on 13 Oct 2023
Edited: Askic V on 13 Oct 2023
If r means the residual and you are computing the root mean square error (RMSE), it seems to me that you are trying to find coefficients a1 and a2 that minimize the fitting error of the model a(1)*exp(a(2)*x.^2) to your data.

If that is the case, I would determine the coefficients a1 and a2 in the following way:
```matlab
x = [-2 0 1 2 3];
y = [2.5, 1, 2.5, 5.7, 11.6];

% Define the model function as a function handle
model = @(a, x) a(1) * exp(a(2) * x.^2);

% Initial guess for the coefficients a1 and a2
initialGuess = [2, 0.5];

% Perform nonlinear curve fitting using lsqcurvefit
options = optimoptions('lsqcurvefit', 'Display', 'off');
A = lsqcurvefit(model, initialGuess, x, y, [], [], options);

a1 = A(1)
a1 = 1.7005
a2 = A(2)
a2 = 0.2137

r = A(1)*exp(A(2)*x.^2) - y;
RMSE = norm(r,2)/sqrt(numel(r))
RMSE = 1.0760
```
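For readers without the Optimization Toolbox, the same nonlinear fit can be sketched in Python with `scipy.optimize.curve_fit` (an illustrative equivalent, not part of the thread; the starting guess matches the MATLAB code):

```python
import numpy as np
from scipy.optimize import curve_fit

x = np.array([-2, 0, 1, 2, 3], dtype=float)
y = np.array([2.5, 1, 2.5, 5.7, 11.6])

# Model y = a1 * exp(a2 * x^2), same form as the MATLAB function handle.
def model(x, a1, a2):
    return a1 * np.exp(a2 * x**2)

# Initial guess matches the MATLAB code: [2, 0.5].
popt, _ = curve_fit(model, x, y, p0=[2.0, 0.5])
a1, a2 = popt

r = model(x, a1, a2) - y
rmse = np.linalg.norm(r) / np.sqrt(r.size)
print(a1, a2, rmse)   # roughly 1.70, 0.214, RMSE near 1.08
```

Like `lsqcurvefit`, `curve_fit` minimizes the squared residuals of yfit - y directly, so it should land on the same local minimum from this starting point.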
Jon on 13 Oct 2023
You can solve this using linear least squares as follows; this looks like what you were attempting to do:
```matlab
% Data as vectors
x = [-2 0 1 2 3]'
x = 5×1
    -2
     0
     1
     2
     3
y = [2.5, 1, 2.5, 5.7, 11.6]'
y = 5×1
    2.5000
    1.0000
    2.5000
    5.7000
   11.6000

% Solve for coefficients to best fit y = a(1)*exp(a(2)*x.^2).
% Taking the log of both sides:
%   log(y) = log(a(1)) + a(2)*x.^2
%   log(y) = c(1) + c(2)*x.^2,  with a(1) = exp(c(1))
% Find the least squares solution:
A = [x.^0 x.^2]
A = 5×2
     1     4
     1     0
     1     1
     1     4
     1     9
c = A\log(y);
a = [exp(c(1)); c(2)]
a = 2×1
    1.3950
    0.2422

% Compute residual
r = a(1)*exp(a(2)*x.^2) - y
r = 5×1
    1.1755
    0.3950
   -0.7227
   -2.0245
    0.7375
RMSE = norm(r,2)/sqrt(5)
RMSE = 1.1578
```
Note that this gives a larger residual than the non-linear approach listed above. I think this is because the linearized fit minimizes the error between log(yfit) and log(y), rather than between yfit and y. On the other hand, the linear solution is very reliable and doesn't require an initial guess.
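The linearized approach translates directly to other environments; here is a Python/NumPy sketch (an illustration, not from the thread) using `np.linalg.lstsq` in place of MATLAB's backslash, with the design matrix built from column vectors so it has full column rank:

```python
import numpy as np

x = np.array([-2, 0, 1, 2, 3], dtype=float)
y = np.array([2.5, 1, 2.5, 5.7, 11.6])

# Design matrix with proper columns: [x.^0 x.^2] on column vectors.
A = np.column_stack([x**0, x**2])          # shape (5, 2), full column rank

# Linearized fit: log(y) = c1 + c2*x^2, so a1 = exp(c1), a2 = c2.
c, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
a = np.array([np.exp(c[0]), c[1]])

r = a[0] * np.exp(a[1] * x**2) - y
rmse = np.linalg.norm(r) / np.sqrt(len(r))
print(a, rmse)   # roughly [1.3950, 0.2422], RMSE near 1.1578
```

Because `A` here is 5×2 rather than 1×10, the normal equations are well-conditioned and no singular-matrix warning arises.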


### Categories

Find more on Nonlinear Regression in Help Center and File Exchange
