How to implement weighted classification for 1D CNN?
I developed a 1D CNN for arrhythmia classification. I first segmented the long ECG signals from the MIT-BIH arrhythmia database into segments of 300 data points. Then, taking into account the comment made by Joss Knight on this question, I reshaped the signals along the 4th dimension as follows (see the reshape sketch after the list):
- Xtraining: 1-by-300-by-1-by-91147, where 300 is the length of each signal and 91147 is the number of signals
- Ytraining: 91147-by-1, which contains the categorical labels (5 classes) for all 91147 samples
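A minimal sketch of this reshape, assuming the segmented beats start out as a hypothetical 91147-by-300 matrix named segments with a matching categorical label vector labels:

% Placeholder data: each row of "segments" is one 300-sample beat.
segments = rand(91147, 300);
labels   = categorical(randi(5, 91147, 1));

% Signal along the 2nd (width) dimension, observations along the 4th dimension,
% giving the 1-by-300-by-1-by-91147 layout (e.g. for an assumed imageInputLayer([1 300 1])).
Xtraining = reshape(segments', 1, 300, 1, []);
Ytraining = labels;   % 91147-by-1 categorical vector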
The problem is that the dataset is highly imbalanced, and I think that is why I cannot obtain a training accuracy above 80%. The normal class represents approximately 84% of the dataset, whereas the other 4 classes are in the minority. Thus, I want to apply a weightedClassificationLayer. I read the documentation for custom layers, but I do not understand how I should set the dimensions of the input (N). Since I treat my signals as images, I used the examples given in the documentation, but the results became worse when I applied them.
Can you please explain how these dimensions are chosen? How could I make it work?
Thank you in advance!
I calculated the weights as:
classWeights = 1./countcats(Ytrain);
classWeights = classWeights'/mean(classWeights);
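For reference, countcats returns the counts in the order given by categories(Ytrain), so the weights line up with the classes in that same order; a quick check of the weight-to-class mapping:

% Sanity check: the k-th weight belongs to the k-th class name below.
classNames = categories(Ytrain);
table(classNames, countcats(Ytrain), classWeights', 'VariableNames', {'Class','Count','Weight'})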
And the weightedClassificationLayer that I used is:
classdef weightedClassificationLayer < nnet.layer.ClassificationLayer

    properties
        % Row vector of weights corresponding to the classes in the
        % training data.
        ClassWeights
    end

    methods
        function layer = weightedClassificationLayer(classWeights, name)
            % layer = weightedClassificationLayer(classWeights) creates a
            % weighted cross entropy loss layer. classWeights is a row
            % vector of weights corresponding to the classes in the order
            % that they appear in the training data.
            %
            % layer = weightedClassificationLayer(classWeights, name)
            % additionally specifies the layer name.

            % Set class weights.
            layer.ClassWeights = classWeights;

            % Set layer name.
            if nargin == 2
                layer.Name = name;
            end

            % Set layer description.
            layer.Description = 'Weighted cross entropy';
        end

        function loss = forwardLoss(layer, Y, T)
            % loss = forwardLoss(layer, Y, T) returns the weighted cross
            % entropy loss between the predictions Y and the training
            % targets T.
            N = size(Y,4);
            Y = squeeze(Y);
            T = squeeze(T);
            W = layer.ClassWeights;
            loss = -sum(W*(T.*log(Y)))/N;
        end

        % function dLdY = backwardLoss(layer, Y, T)
        %     % dLdY = backwardLoss(layer, Y, T) returns the derivatives of
        %     % the weighted cross entropy loss with respect to the
        %     % predictions Y.
        %     % Find observation and sequence dimensions of Y
        %     N = size(Y,4);
        %     W = layer.ClassWeights;
        %     dLdY = -(W'.*T./Y)/N;
        % end
    end
end
Answers (1)
Rupesh on 21 May 2024 (edited: 21 May 2024)
Hi Ioana,
I understand that the challenge you're encountering with your 1D CNN is primarily due to the imbalanced dataset, which significantly hinders achieving higher training accuracy. A custom weighted classification layer can be useful for addressing the imbalance by applying different weights to the classes during the loss calculation.
To address this, it is important that the custom layer handles the number of observations (N) correctly and applies the class weights consistently in both the forward and backward passes. The modifications should correct the loss and gradient calculations so that each class weight multiplies its corresponding class term. In the original forwardLoss function, the expression W*(T.*log(Y)) is easy to get wrong dimensionally (it only works once Y and T are squeezed to K-by-N and W is a 1-by-K row vector), and log(Y) has no safeguard, so any zero prediction produces -Inf; making the weighting explicitly element-wise and adding eps avoids both problems.
Possible code changes include:
- In the forwardLoss function, ensure numerical stability and correct weight application by using loss = -sum(sum(W' .* (T .* log(Y + eps)), 1), 2) / N, where Y and T have been squeezed to K-by-N and W is the 1-by-K weight row vector. Adding eps avoids log(0), and the element-wise product applies each weight to its own class.
- In the backwardLoss function, compute the gradient as dLdY = -(W' .* (T ./ (Y + eps))) / N and reshape the result back to the original 1-by-1-by-K-by-N size, so that the gradients are correctly computed and scaled per class with the same numerical safeguard.
Both methods are combined in the sketch below.
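A minimal sketch of the two methods with these changes applied (assuming Y and T arrive as 1-by-1-by-K-by-N, as in the layout described in the question, and that ClassWeights holds one weight per class) could look like this; in newer MATLAB releases backwardLoss can often be omitted, since the gradient can be derived automatically when forwardLoss only uses supported functions:

function loss = forwardLoss(layer, Y, T)
    % Weighted cross-entropy loss between predictions Y and targets T.
    N = size(Y,4);                 % number of observations in the mini-batch
    Y = squeeze(Y);                % K-by-N
    T = squeeze(T);                % K-by-N
    W = layer.ClassWeights(:);     % K-by-1 column of class weights
    % eps guards against log(0); the element-wise product weights each class row.
    loss = -sum(sum(W .* (T .* log(Y + eps)), 1), 2) / N;
end

function dLdY = backwardLoss(layer, Y, T)
    % Derivative of the weighted cross-entropy loss with respect to Y.
    [~,~,K,N] = size(Y);
    Y = squeeze(Y);                % K-by-N
    T = squeeze(T);                % K-by-N
    W = layer.ClassWeights(:);     % K-by-1
    dLdY = -(W .* (T ./ (Y + eps))) / N;
    dLdY = reshape(dLdY, [1 1 K N]);   % gradient must match the original size of Y
end

The final reshape matters because the returned gradient has to have the same size as the incoming predictions Y.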
After implementing these modifications, test the custom layer with a small input to make sure it behaves as expected, both in terms of loss calculation and gradient propagation; a quick check using checkLayer is sketched below. You can also look at other MATLAB Answers posts on custom weighted classification layers to better understand how the layer construction works.
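For example, a quick validity check along these lines (the weight values here are placeholders; use the vector computed from Ytrain) can catch size and gradient problems before training:

% Hypothetical, unnormalised example weights for the 5 classes.
classWeights = [0.2 1.5 1.8 2.1 3.0];
layer = weightedClassificationLayer(classWeights, 'weighted_loss');

% For an output layer, validInputSize is the size of one prediction Y;
% observations run along the 4th dimension in this layout.
numClasses = numel(classWeights);
validInputSize = [1 1 numClasses];
checkLayer(layer, validInputSize, 'ObservationDimension', 4);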
Hope it helps!
Thanks