Making a Deep Neural Network work on data a Shallow NN works on.
I'm working with large data sets: 200x840000 inputs and 6x840000 outputs. This data set trains a well-behaved shallow NN, and I want to achieve that performance using deep NNs. The inputs represent a time series of numbers, 200 time points that basically represent a graph's data points. The 6 targets are model-fixing coefficient parameters, each ranging from 0 to 10. To the best of my knowledge this is a sequence-to-sequence problem, and I wouldn't want to make a class for all 11^6 combinations. The only way I saw to do this at first was an LSTM model, so I tried what's below, and it doesn't converge: the MSE and loss shoot up.
inputSize = 200;       % number of time points per input
outputSize = 12;       % (defined but unused below)
numClasses = 6;        % six regression responses
numHiddenUnits1 = 12;

layers = [ ...
    sequenceInputLayer(inputSize)
    lstmLayer(numHiddenUnits1,'OutputMode','sequence')
    fullyConnectedLayer(numClasses)
    regressionLayer];

opts = trainingOptions('sgdm', ...
    'MaxEpochs',15, ...
    'Shuffle','every-epoch', ...
    'Plots','training-progress', ...
    'Verbose',false, ...
    'MiniBatchSize',50);

trainedNet = trainNetwork(Train_Inputs,Train_Targets,layers,opts)
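One thing I'm unsure about: the docs seem to say trainNetwork wants sequence data as a cell array, one numFeatures-by-numTimeSteps matrix per observation, with 'OutputMode','last' for a sequence-to-one mapping. If my 200 points are really one feature over 200 time steps, I would have expected something like the sketch below (variable names like seqInputs and seqTargets are mine, and I may be misreading the documentation):

% Treat each column of the 200 x 840000 matrix as one observation:
% a 1-feature sequence with 200 time steps.
numObs = size(Train_Inputs,2);
seqInputs = cell(numObs,1);
for i = 1:numObs
    seqInputs{i} = Train_Inputs(:,i)';  % 1 x 200 (features x time steps)
end
seqTargets = Train_Targets';            % 840000 x 6, one response row per observation

layers = [ ...
    sequenceInputLayer(1)                           % one feature per time step
    lstmLayer(numHiddenUnits1,'OutputMode','last')  % sequence-to-one
    fullyConnectedLayer(6)
    regressionLayer];

trainedNet = trainNetwork(seqInputs,seqTargets,layers,opts)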
I leave out the data division here because I can get this same data set to train a well-behaved NN using MATLAB's shallow NN tools. There aren't many ways to do sequence-to-sequence, and I thought that if I could feed the inputs in as images I might have more flexibility with the layer architecture. I found this tricky approach, which I admit I don't completely understand: I format the input as a 4-D array and the targets as the transpose of the original data. This converged with the expected shape of the MSE curve, but not in a good way: the error is over 60 for a set of 6 targets that each range from 0 to 10. The code I have for that attempt is below.
% Reshape each 200-point series into a 200 x 1 x 1 "image"
New_Inputs = reshape(Inputs,[size(Inputs,1),1,1,size(Inputs,2)]);

layers = [ ...
    imageInputLayer([200 1 1])
    averagePooling2dLayer([200,1])  % pools over the full 200 x 1 input
    fullyConnectedLayer(100)
    % seven identical conv / batch-norm / ReLU blocks
    convolution2dLayer(1,200)
    batchNormalizationLayer
    reluLayer
    convolution2dLayer(1,200)
    batchNormalizationLayer
    reluLayer
    convolution2dLayer(1,200)
    batchNormalizationLayer
    reluLayer
    convolution2dLayer(1,200)
    batchNormalizationLayer
    reluLayer
    convolution2dLayer(1,200)
    batchNormalizationLayer
    reluLayer
    convolution2dLayer(1,200)
    batchNormalizationLayer
    reluLayer
    convolution2dLayer(1,200)
    batchNormalizationLayer
    reluLayer
    fullyConnectedLayer(6)          % six regression outputs
    regressionLayer];

opts = trainingOptions('sgdm', ...
    'MaxEpochs',15, ...
    'Shuffle','every-epoch', ...
    'Plots','training-progress', ...
    'Verbose',false, ...
    'MiniBatchSize',50);

trainedNet = trainNetwork(New_Inputs,Targets',layers,opts)
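To sanity-check what each layer does to the activation shapes (since I admit I don't fully follow the 4-D format), I've been inspecting things like this; analyzeNetwork needs R2018a or later, if I recall:

% Inspect per-layer activation sizes to see where information collapses
analyzeNetwork(layers)

% Quick shape checks on the reshaped data
size(New_Inputs)  % should be 200 x 1 x 1 x 840000
size(Targets')    % should be 840000 x 6, one row of responses per image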
I tried to hit it with some brute force on that last try. Is there any glaring mistake in the layer architecture above, or some reason I can't do this? I left out a little, but to wrap up: I can use train on this same data set and get good results, and I can't get a deep NN architecture to see it.
Answers (1)
Vishal Bhutani
on 31 Aug 2018
By my understanding, you want to train a deep neural network on your dataset, which is time-series data, and you have tried a convolutional neural network (CNN) on it. There are various hyperparameters in deep neural networks that you can vary: the filter size in the convolutional layers, the learning rate of the network, and the mini-batch size. If you have the latest version of MATLAB, you can also try changing the solverName (optimizer) from sgdm to rmsprop or adam. You can try the same things for the LSTM. Also, you can try adding more fully connected layers at the end of the CNN.
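For example, something along these lines; the specific values here are only starting points to experiment with, and the 'adam' solver requires a recent MATLAB release:

opts = trainingOptions('adam', ...   % instead of 'sgdm'
    'InitialLearnRate',1e-3, ...     % try lowering this if the loss blows up
    'MaxEpochs',15, ...
    'MiniBatchSize',128, ...         % vary this as well
    'Shuffle','every-epoch', ...
    'Plots','training-progress', ...
    'Verbose',false);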