How to use "imageInputLayer" instead of "sequenceInputLayer"?
I want to build a layer graph for deep learning with multiple sequence inputs. Since MATLAB does not seem to support multiple sequenceInputLayer objects (https://de.mathworks.com/matlabcentral/answers/709528-why-does-multiple-inputs-with-sequenceinputlayer-return-an-error), I am using "imageInputLayer" instead of "sequenceInputLayer".
Since the error I get is:
Error using trainNetwork
Invalid training data. Predictors and responses must have the same number of observations.
I wanted to try a minimal example, and switching out the input layers does not work here either:
lgraph = [...
%sequenceInputLayer(12,"Name","sequence_2")
imageInputLayer( [12 1] , "Name", "sequence_2", "Normalization", "none")
tanhLayer("Name","tanh_2")
fullyConnectedLayer(1,"Name","fc_2",'WeightsInitializer','he')
softmaxLayer("Name","softmax_2")
regressionLayer("Name","regressionoutput")];
It works with the sequenceInputLayer, but not with the imageInputLayer. The error message is:
Error using trainNetwork
Invalid training data. The output size ([1 1 1]) of the last layer does not match the response size ([1 1 63100]).
The training data is 12x63100 and the output is 1x63100, and since the fullyConnectedLayer should output 1x63100, I do not understand why the imageInputLayer does not work here.
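For reference, a minimal sketch of the sequence version that does train (random placeholder data stands in for the real 12x63100 predictors and 1x63100 responses, and the training options are arbitrary):
XTrain = rand(12, 63100);   % 12 features, 63100 time steps (one sequence)
YTrain = rand(1, 63100);    % one response per time step
layers = [ ...
    sequenceInputLayer(12, "Name", "sequence_2")
    tanhLayer("Name", "tanh_2")
    fullyConnectedLayer(1, "Name", "fc_2", "WeightsInitializer", "he")
    softmaxLayer("Name", "softmax_2")
    regressionLayer("Name", "regressionoutput")];
options = trainingOptions("adam", "MaxEpochs", 1, "Verbose", false);
net = trainNetwork(XTrain, YTrain, layers, options);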
Answers (1)
Ben
on 20 Jun 2023
Your imageInputLayer([12,1]) is specifying that your input data is "images" with height 12, width 1 and 1 channel/feature.
Since your data is 12x63100, I expect there are 2 potential cases:
- You have 63100 sequences of length 12 with 1 feature.
- You have a single sequence of length 63100 with 12 features.
In case 1, you would typically create imageInputLayer([12,1,1]) and permute your data so that it has size 12x1x1x63100. This allows you to create and train networks that use convolution2dLayer as a 1D convolution over sequences of fixed length. Similarly, in case 2 you can use imageInputLayer([63100,1,12]) and permute your data to have size 63100x1x12. In both cases you can technically use featureInputLayer too.
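A minimal sketch of case 1, assuming random placeholder data and arbitrary training options (the softmaxLayer from the question is omitted here, since it is not typical before a regression output):
% Case 1: 63100 observations, each a length-12 sequence with 1 feature.
% X and Y are random placeholders standing in for the real data.
numObs = 63100;
X = rand(12, numObs);          % original 12x63100 predictors
Y = rand(1, numObs);           % original 1x63100 responses
XImg = permute(X, [1 3 4 2]);  % 12x1x1x63100: height x width x channels x observations
YCol = Y(:);                   % 63100x1: one scalar response per observation
layers = [ ...
    imageInputLayer([12 1 1], "Name", "sequence_2", "Normalization", "none")
    tanhLayer("Name", "tanh_2")
    fullyConnectedLayer(1, "Name", "fc_2", "WeightsInitializer", "he")
    regressionLayer("Name", "regressionoutput")];
options = trainingOptions("adam", "MaxEpochs", 5, "Verbose", false);
net = trainNetwork(XImg, YCol, layers, options);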
A more modern way to handle networks with multiple sequence inputs would be to use dlnetwork and a custom training loop following examples such as this one.
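For example, a rough sketch of a dlnetwork with two sequence inputs (the channel counts, layer names, and random data here are made up for illustration):
% Two sequence inputs joined along the channel dimension; a dlnetwork has no
% output layer, so the loss would be computed in the custom training loop.
lg = layerGraph();
lg = addLayers(lg, [sequenceInputLayer(12, "Name", "seq_1"), tanhLayer("Name", "tanh_1")]);
lg = addLayers(lg, [sequenceInputLayer(4, "Name", "seq_2"), tanhLayer("Name", "tanh_2")]);
lg = addLayers(lg, [concatenationLayer(1, 2, "Name", "cat"), fullyConnectedLayer(1, "Name", "fc")]);
lg = connectLayers(lg, "tanh_1", "cat/in1");
lg = connectLayers(lg, "tanh_2", "cat/in2");
net = dlnetwork(lg);
% Formatted dlarray inputs: 12 and 4 channels, batch of 8, 100 time steps.
X1 = dlarray(rand(12, 8, 100), "CBT");
X2 = dlarray(rand(4, 8, 100), "CBT");
Y = predict(net, X1, X2);   % inputs follow net.InputNames order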