Value to differentiate is not traced. It must be a traced real dlarray scalar. Use dlgradient inside a function called by dlfeval to trace the variables.
Hello, I am working on a custom loss function that performs dimensionality reduction by maximizing the Bhattacharyya distance between two classes.
But it raised the following error:
Error using dlarray/dlgradient
Value to differentiate is not traced. It must be a traced real dlarray scalar. Use dlgradient inside a function called by dlfeval to trace the variables.
Error in mutiatt_nld>modelGradients (line 54)
gradients = dlgradient(loss, dlnet.Learnables);
Error in deep.internal.dlfeval (line 17)
[varargout{1:nargout}] = fun(x{:});
Error in deep.internal.dlfevalWithNestingCheck (line 15)
[varargout{1:nargout}] = deep.internal.dlfeval(fun,varargin{:});
Error in dlfeval (line 31)
[varargout{1:nargout}] = deep.internal.dlfevalWithNestingCheck(fun,varargin{:});
Error in mutiatt_nld (line 24)
[gradients, loss] = dlfeval(@modelGradients, dlnet, dlX, N);
The code:
% Parameter settings
M = 10; % Dimension of input features
N = 50; % Number of samples per class
numEpochs = 100;
learnRate = 0.01;
% Generate example data
X = rand(2*N, M);
X(1:N, :) = X(1:N, :) + 1; % Data for class A
X(N+1:end, :) = X(N+1:end, :) - 1; % Data for class B
% Define the neural network
layers = [
    featureInputLayer(M, 'Normalization', 'none')
    fullyConnectedLayer(10)
    reluLayer
    fullyConnectedLayer(3)
];
dlnet = dlnetwork(layerGraph(layers));
% Custom training loop
for epoch = 1:numEpochs
    dlX = dlarray(X', 'CB'); % Transpose input data to match the network's 'CB' format
    [gradients, loss] = dlfeval(@modelGradients, dlnet, dlX, N);
    dlnet = dlupdate(@(p, g) sgdmupdate(p, g, learnRate), dlnet, gradients);
    disp(['Epoch ' num2str(epoch) ', Loss: ' num2str(extractdata(loss))]);
end
% Testing phase
X_test = rand(N, M); % Assume test data is randomly generated
dlX_test = dlarray(X_test', 'CB'); % Transpose input data to match network's expected format
Y_test = predict(dlnet, dlX_test);
disp('Dimensionality reduction results during testing:');
disp(extractdata(Y_test)');
% Custom loss function
function loss = customLoss(Y, N)
    YA = extractdata(Y(:, 1:N))';
    YB = extractdata(Y(:, N+1:end))';
    muA = mean(YA);
    muB = mean(YB);
    covA = cov(YA);
    covB = cov(YB);
    covMean = (covA + covB) / 2;
    d = 0.25 * (muA - muB) / covMean * (muA - muB)' + 0.5 * log(det(covMean) / sqrt(det(covA) * det(covB)));
    loss = -d; % Maximize Bhattacharyya distance
    loss = dlarray(loss); % Ensure loss is a tracked dlarray scalar
end
% Model gradient function
function [gradients, loss] = modelGradients(dlnet, dlX, N)
    Y = forward(dlnet, dlX);
    loss = customLoss(Y, N);
    gradients = dlgradient(loss, dlnet.Learnables);
end
% Update function
function param = sgdmupdate(param, grad, learnRate)
    param = param - learnRate * grad;
end
Answers (1)
Ganesh
on 12 Jun 2024
You are getting this error because you call "extractdata" on a traced argument inside your loss function. extractdata strips the derivative trace, and wrapping the result back in a dlarray afterwards does not restore it, so by the time dlgradient sees the loss it is no longer traced. The documentation on defining model loss functions for custom training loops describes which operations preserve tracing.
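As a minimal illustration of the rule (a toy example of my own, not your network): inside dlfeval, operations on dlarray inputs are traced, but extractdata discards that trace.
x = dlarray(2);
% Traced: x.^2 is built from the dlarray input inside dlfeval,
% so dlgradient can differentiate it (returns 4).
g = dlfeval(@(x) dlgradient(x.^2, x), x);
% Untraced: extractdata strips the trace, and re-wrapping in dlarray
% does not restore it, so this reproduces your error:
% g = dlfeval(@(x) dlgradient(dlarray(extractdata(x).^2), x), x);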
I have tried your code with a different loss function, and this worked:
% Parameter settings
M = 10; % Dimension of input features
N = 50; % Number of samples per class
numEpochs = 100;
learnRate = 0.01;
% Generate example data
X = rand(2*N, M);
X(1:N, :) = X(1:N, :) + 1; % Data for class A
X(N+1:end, :) = X(N+1:end, :) - 1; % Data for class B
% Define the neural network
layers = [
    featureInputLayer(M, 'Normalization', 'none')
    fullyConnectedLayer(10)
    reluLayer
    fullyConnectedLayer(3)
];
dlnet = dlnetwork(layerGraph(layers));
% Custom training loop
for epoch = 1:numEpochs
    dlX = dlarray(X', 'CB'); % Transpose input data to match the network's 'CB' format
    Yact = rand(3, 2*N); % Random targets matching the 3-by-(2*N) network output (illustration only)
    [gradients, loss] = dlfeval(@modelGradients, dlnet, dlX, Yact);
    dlnet = dlupdate(@(p, g) sgdmupdate(p, g, learnRate), dlnet, gradients);
    disp(['Epoch ' num2str(epoch) ', Loss: ' num2str(extractdata(loss))]);
end
% Testing phase
X_test = rand(N, M); % Assume test data is randomly generated
dlX_test = dlarray(X_test', 'CB'); % Transpose input data to match network's expected format
Y_test = predict(dlnet, dlX_test);
disp('Dimensionality reduction results during testing:');
disp(extractdata(Y_test)');
% Model gradient function: forward must run inside the dlfeval call so that
% the network output, and hence the loss, is traced
function [gradients, loss] = modelGradients(dlnet, dlX, Yact)
    Y = forward(dlnet, dlX);
    loss = mse(Y, Yact);
    gradients = dlgradient(loss, dlnet.Learnables);
end
% Update function
function param = sgdmupdate(param, grad, learnRate)
    param = param - learnRate * grad;
end
I understand that because your loss function is unsupervised you are running into issues; you will have to refactor it so that every operation from the network output to the scalar loss stays on traced dlarray data. A sketch of one possible refactoring follows.
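For instance, here is a minimal sketch of a traced version of your Bhattacharyya loss. It assumes diagonal covariances (per-dimension variances instead of the full cov/det computations, which would break the trace) and adds a small 1e-6 constant for numerical stability; both are my simplifications, and it uses the standard 1/8 coefficient of the Gaussian Bhattacharyya distance. Only dlarray-supported operations (indexing, mean, sum, log, sqrt, elementwise arithmetic) are used, so the trace survives from forward all the way to dlgradient:
% Custom loss function, kept fully traced (diagonal-covariance sketch)
function loss = customLoss(Y, N)
    Y = stripdims(Y);                      % drop the 'CB' labels; plain dims are C-by-2N
    YA = Y(:, 1:N);                        % class A outputs (indexing preserves the trace)
    YB = Y(:, N+1:end);                    % class B outputs
    muA = mean(YA, 2);
    muB = mean(YB, 2);
    varA = mean((YA - muA).^2, 2) + 1e-6;  % per-dimension variances (diagonal assumption)
    varB = mean((YB - muB).^2, 2) + 1e-6;
    varMean = (varA + varB) / 2;
    % Gaussian Bhattacharyya distance with diagonal covariances
    d = 0.125 * sum((muA - muB).^2 ./ varMean) ...
        + 0.5 * sum(log(varMean ./ sqrt(varA .* varB)));
    loss = -d; % Maximize the Bhattacharyya distance
end
With the loss written this way, your original modelGradients can stay as it is, because the loss now carries the trace that dlgradient needs.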
Hope this helps!