Error while training a deep learning network

1 view (last 30 days)
Atreya Danturthi on 3 Aug 2021
Answered: Prateek Rai on 18 Aug 2021
I am trying to build a neural network that will detect low-contrast circular objects in many grayscale images.
The following error showed up when I ran trainNetwork on the layer graph:
Error using trainNetwork (line 184)
Invalid network.
Error in Scaledetection_deeplearning (line 92)
training_net = trainNetwork(imds_train,lgraph,options);
Caused by:
Layer 'fuse': Unconnected input. Each layer input must be connected to the output of another layer.
Detected unconnected inputs:
input 'in2'
Layer 'score': Unconnected input. Each layer input must be connected to the output of another layer.
Detected unconnected inputs:
input 'ref'
Layer 'score_pool4c': Unconnected input. Each layer input must be connected to the output of another layer.
Detected unconnected inputs:
input 'ref'
I am attaching the plot of the layer graph. Could anyone guide me on how to connect the layers, and which ones need to be connected?
dataDir = fullfile('D:\M.Sc Research Project\Data');
imDir = fullfile(dataDir, 'ImageSetRevised');
imds = imageDatastore(imDir, 'LabelSource', 'foldernames');
[trainSet,testSet] = splitEachLabel(imds,0.7,'randomized'); %Dividing the dataset into test and train
%%code for resizing
%% Saving training and test data into separate folders
location_train = 'D:\M.Sc Research Project\Deep Learning Approach\Deep Learning Method for Scale Detection\traindata\TrainingImages';
location_test = 'D:\M.Sc Research Project\Deep Learning Approach\Deep Learning Method for Scale Detection\testdata\TestingImages';
%writeall(trainSet,location_train);
%writeall(testSet,location_test);
PD = 0.30;
cv = cvpartition(size(gTruth.LabelData,1),'HoldOut',PD);
trainGroundTruth = groundTruth(groundTruthDataSource(gTruth.DataSource.Source(cv.training,:)),gTruth.LabelDefinitions,gTruth.LabelData(cv.training,:));
testGroundTruth = groundTruth(groundTruthDataSource(gTruth.DataSource.Source(cv.test,:)),gTruth.LabelDefinitions,gTruth.LabelData(cv.test,:));
dataSetDir = fullfile('D:\','M.Sc Research Project','Deep Learning Approach','Deep Learning Method for Scale Detection','traindata','TrainingImages');
imageDir = fullfile(dataSetDir,'ImageSetRevised');
imds_train = imageDatastore(imageDir);
classNames = ["scales","background"];
imageSize = [461 461];
numClasses = 2;
%lgraph = fcnLayers(imageSize,numClasses,'Type','16s');
layers = [
imageInputLayer([461 461 1],'Name','input','Normalization','zerocenter')
convolution2dLayer(3,64,'Name','conv1_1','Stride',1,'Padding',100)
reluLayer('Name','relu1_1')
convolution2dLayer(3,64,'Name','conv1_2','Stride',1,'Padding',1)
reluLayer('Name','relu1_2')
maxPooling2dLayer(2,'Name','pool1','Stride',2,'Padding',0)
convolution2dLayer(3,128,'Name','conv2_1','Stride',1,'Padding',1)
reluLayer('Name','relu2_1')
convolution2dLayer(3,128,'Name','conv2_2','Stride',1,'Padding',1)
reluLayer('Name','relu2_2')
maxPooling2dLayer(2,'Name','pool2','Stride',2,'Padding',0)
convolution2dLayer(3,256,'Name','conv3_1','Stride',1,'Padding',1)
reluLayer('Name','relu3_1')
convolution2dLayer(3,256,'Name','conv3_2','Stride',1,'Padding',1)
reluLayer('Name','relu3_2')
convolution2dLayer(3,256,'Name','conv3_3','Stride',1,'Padding',1)
reluLayer('Name','relu3_3')
maxPooling2dLayer(2,'Name','pool3','Stride',2,'Padding',0)
convolution2dLayer(3,512,'Name','conv4_1','Stride',1,'Padding',1)
reluLayer('Name','relu4_1')
convolution2dLayer(3,512,'Name','conv4_2','Stride',1,'Padding',1)
reluLayer('Name','relu4_2')
convolution2dLayer(3,512,'Name','conv4_3','Stride',1,'Padding',1)
reluLayer('Name','relu4_3')
maxPooling2dLayer(2,'Name','pool4','Stride',2,'Padding',0)
convolution2dLayer(3,512,'Name','conv5_1','Stride',1,'Padding',1)
reluLayer('Name','relu5_1')
convolution2dLayer(3,512,'Name','conv5_2','Stride',1,'Padding',1)
reluLayer('Name','relu5_2')
convolution2dLayer(3,512,'Name','conv5_3','Stride',1,'Padding',1)
reluLayer('Name','relu5_3')
maxPooling2dLayer(2,'Name','pool5','Stride',2,'Padding',0)
convolution2dLayer(7,4096,'Name','fc6','Stride',1,'Padding',0)
reluLayer('Name','relu6')
dropoutLayer(.5,'Name','drop6')
convolution2dLayer(1,4096,'Name','fc7','Stride',1,'Padding',0)
reluLayer('Name','relu7')
dropoutLayer(.5,'Name','drop7')
convolution2dLayer(1,2,'Name','score_fr','Stride',1,'Padding',0)
transposedConv2dLayer(4,2,'Name','upscore2','Stride',2,'Cropping',0)
additionLayer(2,'Name','fuse')
transposedConv2dLayer(32,2,'Name','upscore16','Stride',16,'Cropping',0)
crop2dLayer('centercrop','Name','score')
convolution2dLayer(1,2,'Name','score_pool4','Stride',1,'Padding',0)
crop2dLayer('centercrop','Name','score_pool4c')
softmaxLayer('Name','softmax')
pixelClassificationLayer('Name','pixelLabels')];
lgraph = layerGraph(layers);
% lgraph = connectLayers(lgraph,'score_pool4c','add_1/in1');
% lgraph = connectLayers(lgraph,'score_pool4','add_1/in2');
%
% lgraph = connectLayers(lgraph,'softmax','add_1/in3');
% lgraph = connectLayers(lgraph,'pixelLabels','add_1/in4');
plot(lgraph)
options = trainingOptions('sgdm', ...
'MaxEpochs',20,...
'InitialLearnRate',1e-4, ...
'Verbose',false, ...
'Plots','training-progress');
training_net = trainNetwork(imds_train,lgraph,options);

Accepted Answer

Prateek Rai on 18 Aug 2021
To my understanding, you are trying to develop a neural network that detects low-contrast circular objects in grayscale images. In the network provided:
  1. The 'fuse' layer is an addition layer, so it must have at least two connected inputs.
  2. The 'score' and 'score_pool4c' layers are 2-D crop layers, which crop their first input to the size of a reference input; each of them therefore needs two connected inputs ('in' and 'ref').
Connecting these inputs accordingly, for example with connectLayers as sketched below, should resolve the error.
You can refer to the additionLayer MathWorks documentation page for more on the addition layer, and to the crop2dLayer MathWorks documentation page for more on the 2-D crop layer.
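Below is a minimal sketch of the missing connections, assuming the network is meant to follow the standard FCN-16s wiring (a pool4 skip connection) and that the addition layer is named 'fuse' as in the error message. Because layerGraph(layers) chains the layer array sequentially, two of the automatic connections also need to be removed with disconnectLayers before the skip path can be wired up:
% Remove the sequential connections that do not belong in the FCN-16s topology
lgraph = disconnectLayers(lgraph,'score','score_pool4');    % 'score_pool4' should read from 'pool4', not from the crop layer
lgraph = disconnectLayers(lgraph,'score_pool4c','softmax'); % 'softmax' should read from 'score'
% pool4 skip branch: 1-by-1 class scores, cropped to the size of 'upscore2'
lgraph = connectLayers(lgraph,'pool4','score_pool4');
lgraph = connectLayers(lgraph,'upscore2','score_pool4c/ref');
% Fuse the skip branch with the upsampled coarse scores
% ('upscore2' already feeds 'fuse/in1' through the sequential chain)
lgraph = connectLayers(lgraph,'score_pool4c','fuse/in2');
% Crop the 16x-upsampled scores back to the input image size, then classify
lgraph = connectLayers(lgraph,'input','score/ref');
lgraph = connectLayers(lgraph,'score','softmax');
After these calls, analyzeNetwork(lgraph) or plot(lgraph) can be used to check that no unconnected inputs remain before calling trainNetwork.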

More Answers (0)

Version

R2021a
