Strange solution with regularization in fminunc

4 views (last 30 days)
Anton Baranikov
Anton Baranikov on 16 Mar 2023
Edited: Anton Baranikov on 16 Mar 2023
I have a nonlinear regression problem, where I have the labels y of my training data and a measurement matrix A. The cost function is the 2-norm ||y - F(x)||_2, where F is a nonlinear function (built from A) and x is the vector of fitting parameters that I need to find. I am using the fminunc function to get the fitting parameters. Also, I am using L1 regularization to prevent overfitting.

To optimize the regularization parameter λ, I do k-fold cross-validation: I split the data into training and testing datasets, split the training dataset into 5 parts, then train on parts 1,2,3,4 and validate on 5, train on 2,3,4,5 and validate on 1, etc.

What I found is that even with large λ (so that the training bias doubles with respect to the non-regularized case), the solution x can vary a lot from fold to fold in the cross-validation. As I understand it, this shouldn't be the case, since strong regularization should give roughly the same robust solution for various splits of the data. Moreover, the components of x pivot around 0 a lot, which is also not what one would expect of a regularized solution. Note also that the initial guess in fminunc is just x0 = [1,1,1,1,1,...]. As you can see, the final components of x end up very small. I suspect that the components are suppressed too much by the regularization and that fminunc runs into numerical errors. How can I improve it?
options = optimoptions(@fminunc, 'OptimalityTolerance', 1e-15, ...
    'FiniteDifferenceStepSize', 1e-3);
% Residual 2-norm plus L1 penalty. The residual must depend on x;
% the exact signature of F is as defined elsewhere in my code.
f = @(x) norm(y - F(A, x), 2) + lambda*norm(x, 1);
initial_guess = ones(19, 1);
solution = fminunc(f, initial_guess, options);
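For context, the 5-fold cross-validation loop described above can be sketched roughly as follows. This is only an illustration of the procedure, assuming y, A, F, lambda, and options are defined as in the snippet; the fold assignment and fold count are placeholders:

nFolds = 5;
n = numel(y);
foldId = mod(0:n-1, nFolds) + 1;           % assign each sample to one of 5 folds
valErr = zeros(nFolds, 1);
for k = 1:nFolds
    trainIdx = (foldId ~= k);              % train on the other 4 folds
    valIdx   = (foldId == k);              % validate on the held-out fold
    fk = @(x) norm(y(trainIdx) - F(A(trainIdx,:), x), 2) + lambda*norm(x, 1);
    xk = fminunc(fk, ones(19,1), options);
    valErr(k) = norm(y(valIdx) - F(A(valIdx,:), xk), 2);
end
meanValErr = mean(valErr);                 % compare this across candidate lambda values

The fold-to-fold spread of xk in this loop is what looks suspiciously large in my case.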

Answers (0)

