output of neural network changes
3 views (last 30 days)
giacomo baccolo on 20 Apr 2021
I have trained an LSTM neural network to classify some sequences. The sequence length is 50 and the number of classes is 4.
I used the network to classify a new set of data, as an external validation of the network's performance.
I noticed that the classifications change depending on how I feed the network.
I tried two approaches:
to feed the network with all the data in one time
to feed the network one sample at a time within a for loop
% Approach 1: all the data in one call
YPred_val1 = classify(net,X_valN);
% Approach 2: one column at a time
for i = 1:size(X_valN,2)
    YPred_val2(i) = classify(net,X_valN(:,i));
end
I thought the results would be identical (I expected YPred_val1 to equal YPred_val2), but the classifications are different: about 50% of the samples get a different predicted label when I compare the two results.
Do you have any idea why? Maybe I'm missing something?
Shashank Gupta on 26 Apr 2021
The results won't necessarily be identical, because an LSTM network treats its input as a sequence. In your second approach, each call to classify receives a single time step, which the network scores as an independent length-1 sequence; the first approach processes the whole sequence at once, so each time step is connected to the previous time steps through the LSTM's hidden state. With a network that has no recurrence, such as a plain CNN, the two approaches would give identical results.
I hope my explanation makes sense.
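To make the difference concrete, here is a minimal sketch. The layer sizes, the variable X, and the (untrained) network are assumptions for illustration, not the asker's actual setup: a numeric matrix passed to classify is treated as one sequence whose columns are the time steps, while a single column passed on its own is a length-1 sequence scored from a freshly reset hidden state.

```matlab
% Toy LSTM sequence classifier (assumed setup, not the original net/data).
layers = [ ...
    sequenceInputLayer(3)                  % 3 features per time step
    lstmLayer(8, 'OutputMode', 'last')     % hidden state carried across steps
    fullyConnectedLayer(4)                 % 4 classes, as in the question
    softmaxLayer
    classificationLayer];

% ... assume `net` was trained on labelled sequences with trainNetwork ...

X = randn(3, 50);   % one sequence: 3 features x 50 time steps

% Approach 1: the whole matrix is ONE sequence; the LSTM state evolves
% over all 50 columns before the 'last' output is classified.
YWhole = classify(net, X);

% Approach 2: each column alone is a length-1 sequence; classify starts
% from a zero hidden state on every call, so no temporal context is used.
for t = 1:size(X, 2)
    YPerStep(t) = classify(net, X(:, t));
end
```

If the goal is per-sample predictions, storing each complete sequence in its own cell of a cell array and looping over the cells should match the batch call much more closely, since classify then sees every cell as a full sequence.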