Multiple Input Single Output Segmentation using Deep Learning
Koshy
on 27 Apr 2019
Commented: 马瑞 李
on 21 Jan 2021
I have 4-modal volumetric image data and the corresponding segmented output data. I have to create a multi-input DAG network, and I have successfully created it as a layer graph (lgraph).
But I am not able to train the network using trainNetwork. It shows an error that only one input can be fed to trainNetwork.
My code is below. store1, store2, store3, and store4 are the four 3-D input datastores and pxds is the output (label) datastore.
inputSize = [64 64 64];
layers1 = [
    image3dInputLayer(inputSize,'Normalization','none','Name','input1')
    convolution3dLayer(3,155,'Padding','same','Name','conv_11')
    maxPooling3dLayer(4,'Name','maxpool1')];
layers2 = [
    image3dInputLayer(inputSize,'Normalization','none','Name','input2')
    convolution3dLayer(3,155,'Padding','same','Name','conv_21')
    maxPooling3dLayer(4,'Name','maxpool2')];
layers3 = [
    image3dInputLayer(inputSize,'Normalization','none','Name','input3')
    convolution3dLayer(3,155,'Padding','same','Name','conv_31')
    maxPooling3dLayer(4,'Name','maxpool3')];
layers4 = [
    image3dInputLayer(inputSize,'Normalization','none','Name','input4')
    convolution3dLayer(3,155,'Padding','same','Name','conv_41')
    maxPooling3dLayer(4,'Name','maxpool4')];
concat1 = concatenationLayer(3,4,'Name','depth_1');
outlayer = [
    transposedConv3dLayer(3,620,'Stride',2,'Cropping','same','Name','tconv_o1')
    convolution3dLayer(1,numLabels,'Name','convLast')
    softmaxLayer('Name','softmax')
    dicePixelClassification3dLayer('output')];
lgraph = layerGraph;
lgraph = addLayers(lgraph,layers1);
lgraph = addLayers(lgraph,layers2);
lgraph = addLayers(lgraph,layers3);
lgraph = addLayers(lgraph,layers4);
lgraph = addLayers(lgraph,concat1);
lgraph = addLayers(lgraph,outlayer);
lgraph = connectLayers(lgraph,'maxpool1','depth_1/in1');
lgraph = connectLayers(lgraph,'maxpool2','depth_1/in2');
lgraph = connectLayers(lgraph,'maxpool3','depth_1/in3');
lgraph = connectLayers(lgraph,'maxpool4','depth_1/in4');
lgraph = connectLayers(lgraph,'depth_1','tconv_o1');
plot(lgraph)
miniBatchSize = 1;
options = trainingOptions('rmsprop', ...
'MaxEpochs',1, ...
'InitialLearnRate',0.01, ...
'LearnRateSchedule','piecewise', ...
'LearnRateDropPeriod',5, ...
'LearnRateDropFactor',0.95, ...
'Plots','training-progress', ...
'Verbose',false, ...
'MiniBatchSize',miniBatchSize);
[net,info] = trainNetwork({store1,store2,store3,store4},pxds,lgraph,options);
The error shown is:
Error in line:
[net,info] = trainNetwork({store1,store2,store3,store4},pxds,lgraph,options);
Caused by:
Network: Too many input layers. The network must have one input layer.
Detected input layers:
layer 'input1'
layer 'input2'
layer 'input3'
layer 'input4'
Please help me solve this problem or suggest another way to train a network on multiple input image data.
0 Comments
Accepted Answer
gonzalo Mier
on 28 Apr 2019
This question was asked here: https://www.mathworks.com/matlabcentral/answers/369328-how-to-use-multiple-input-layers-in-dag-net-as-shown-in-the-figure
"One idea is to feed the network with concatenated inputs (e.g., image1;image2) then create splitter layers that split each input. The problem here is that you have to feed the network with .mat files, not image paths. Another idea is to store your images as tiff files which can hold 4 channels. In this case, you can store a colored image (3 channel) and a grayscale one. Have a look at this example https://www.mathworks.com/matlabcentral/fileexchange/65065-two-stream-cnn-for-gender-recognition-using-hand-images?s_tid=FX_rc1_behav .. see twoStream.m file. "
1 Comment
gonzalo Mier
on 12 May 2019
Edited: madhan ravi
on 12 May 2019
If this answer helped you, please accept it.
More Answers (4)
Mohamed Abdelwahab
on 30 Jan 2020
What about sequence input (LSTM)? How can we use multiple inputs?
1 Comment
Yang YoonMo
on 12 Nov 2019
How can I solve this problem?
I am training with 2 inputs and my datastore returns 2 inputs. Then the following problem arises:
Invalid training data for multiple-input network. For a network with 2 inputs and 1 output, the datastore read function must return an M-by-3
cell array, but it returns an M-by-2 cell array.
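In releases that support datastore-based multiple-input training, one way to get the required M-by-3 read output is to combine the underlying datastores, since a combined datastore returns one column per member. A rough sketch with assumed folder and class names:
% combine returns a datastore whose read gives {input1, input2, response},
% i.e. one column per underlying datastore, in that order.
dsIn1   = imageDatastore(fullfile('data','input1'));   % assumed folders
dsIn2   = imageDatastore(fullfile('data','input2'));
classes = ["background","foreground"];                 % assumed class names
dsLabel = pixelLabelDatastore(fullfile('data','labels'), classes, [0 1]);
dsTrain = combine(dsIn1, dsIn2, dsLabel);   % read(dsTrain) -> 1x3 cell array
preview(dsTrain)                            % check that the column order
                                            % matches the input layer order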
1 Comment
Y. K.
on 30 Apr 2020
I want to build a two-input, one-output network.
But the first input is an image and the second input is a vector.
When I try to train the network with a cell array containing two sub-arrays (one for the images, one for the vectors), I get an error:
"Invalid training data for multiple-input network. For multiple-input training, use a single datastore."
I created a 4-D image array and a vector array for each input, and a labels array for training.
How can I combine these data into a single datastore?
The MATLAB datastores could not read the data from variables defined in the workspace.
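If the data already lives in workspace arrays, one option (assuming a release where arrayDatastore is available, R2020b or newer) is to wrap each array in a datastore and combine them into the single datastore that trainNetwork asks for. XImages, XVectors, and YLabels below are hypothetical stand-ins for the arrays described above:
% Wrap the in-memory arrays so they can be read sample by sample:
%   XImages  : H-by-W-by-C-by-N image array
%   XVectors : N-by-numFeatures matrix of vector inputs
%   YLabels  : N-by-1 categorical vector of responses
dsImg = arrayDatastore(XImages,  'IterationDimension', 4);   % one image per read
dsVec = arrayDatastore(XVectors, 'IterationDimension', 1);   % one row per read
dsY   = arrayDatastore(YLabels);
dsTrain = combine(dsImg, dsVec, dsY);   % read -> {image, vector, label}
The vector branch of the network would then typically start with a featureInputLayer (also R2020b+); on older releases the packing trick in the next comment is an alternative.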
2 Comments
Mahmoud Afifi
on 30 Apr 2020
You can think of packing your input into the image using a custom image read function, then unpacking it later.
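A minimal sketch of that packing idea, under the assumption that each image has a side-car .mat file holding the vector (the .png/.mat pairing, the field name vec, and the helper readPackedSample are all hypothetical): the read function writes the vector into an extra all-zero channel, and a custom layer inside the network would unpack it again later.
% Custom read function: append the vector to the image as one extra channel
% so image and vector travel through a single imageDatastore.
imds = imageDatastore(fullfile('data','images'), 'ReadFcn',@readPackedSample);
function packed = readPackedSample(filename)
    img   = im2single(imread(filename));           % H-by-W-by-C image
    s     = load(strrep(filename,'.png','.mat'));  % assumed side-car file with field "vec"
    extra = zeros(size(img,1), size(img,2), 'single');
    extra(1:numel(s.vec)) = single(s.vec(:));      % store the vector in the first pixels
    packed = cat(3, img, extra);                   % H-by-W-by-(C+1) packed sample
end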