How to connect a fully connected layer with a convolutional layer

Hello,
I am trying to train this network:
layers = layerGraph([ ...
    imageInputLayer([1,1,128],"Name",'imgIn',"Normalization","none")
    fullyConnectedLayer(256,'Name','bufferFc')
    leakyReluLayer(0.01,"Name",'leakyrelu')
    batchNormalizationLayer("Name",'batchnorm')
    fullyConnectedLayer(128,'Name','fc_decoder1')
    leakyReluLayer(0.01,"Name",'leakyrelu_1')
    batchNormalizationLayer("Name",'batchnorm_1')
    fullyConnectedLayer(1024,'Name','fc_decoder2')
    leakyReluLayer(0.01,"Name",'leakyrelu_2')
    batchNormalizationLayer("Name",'batchnorm_2')
    transposedConv2dLayer(1,512,"Stride",2,"Name",'transpose_conv_7') ...
    ])
The annoying part is that this network is valid; these layers are part of a bigger network, which I am trying to split into two networks (encoder-decoder). I only added an "imageInputLayer" and one fully connected layer.
When I analyze the network using "analyzeNetwork(layers)," it checks OK, and the output dimension from the last fully connected layer is [1,1,1024], and also from the following batch normalization and leaky relu layers.
However, when I try to convert this into a dlnetwork
decoderNet = dlnetwork(layers)
I get this error:
Error using dlnetwork/initialize (line 405)
Invalid network.
Error in dlnetwork (line 191)
net = initialize(net, dlX{:});
Caused by:
Layer 'transpose_conv_7': Input size mismatch. Size of input to this layer is different from the expected input
size.
Inputs to this layer:
from layer 'batchnorm_2' (size 1024(C) × 1(B))
Apparently, the output from the fully connected layers lost the spatial dimensions and only has channel and batch dimensions.
Here is a list of things that I tried that didn't work:
  • Converting it first to a layerGraph
  • Using depthToSpace2dLayer with block size [1,1]
  • Adding a custom layer (AddDim) that creates a new dlarray with the correct dimensions and copies the values - this raised a different error:
Caused by:
Layer 'AddDim': Input size mismatch. Incorrect type of 'Z' for 'predict' in Layer 'addDimLayer'. Expected an
unformatted dlarray, but instead was formatted.
and when I changed it to an unformatted dlarray, this was the error:
Caused by:
Layer 'AddDim': Input size mismatch. Size of input to this layer is different from the expected input size.
Inputs to this layer:
from layer 'batchnorm_2' (size 1024(C) × 1(B))
Thanks in advance
  3 Comments
Mohammad Sami on 30 Jun 2021
Can I ask if you need the fully connected layers? You could potentially have a fully convolutional encoder-decoder network instead by replacing all your fully connected layers with transposedConv2dLayer.
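To illustrate the suggestion, a fully convolutional decoder would upsample the 1x1x128 latent tensor with transposed convolutions directly, with no fully connected layers in between. This is only a sketch; the filter sizes and channel counts here are illustrative, not taken from the original network:

```matlab
% Sketch of a fully convolutional decoder front end (illustrative sizes):
% the 1x1x128 latent tensor is upsampled directly by transposed
% convolutions instead of going through fully connected layers.
fcLayers = [
    imageInputLayer([1 1 128],"Name","in","Normalization","none")
    transposedConv2dLayer(4,256,"Stride",2,"Name","tconv1")
    leakyReluLayer(0.01,"Name","lrelu1")
    transposedConv2dLayer(4,128,"Stride",2,"Name","tconv2")];
```

Because no fully connected layer is involved, the data keeps its spatial dimensions throughout and converts to a dlnetwork without a format mismatch.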
ytzhak goussha on 30 Jun 2021
You are correct; it is possible to do a vanilla encoder-decoder without any FC layers. However, I need the FC layer because I want to implement a variational autoencoder (VAE). In this model, the encoder outputs a vector of size 1 x 2*latentDim, holding the mean and variance of the encoded data. The input to the decoder is a vector of size 1 x latentDim, sampled from the distribution defined by that mean and variance.
I don't know. Maybe it is possible to do this just with CNNs and TCNN without any FC. If you or anyone else knows, please tell me.
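For reference, the sampling step described above is usually done with the reparameterization trick. A minimal sketch in MATLAB, where latentDim and the random encoder output are purely illustrative stand-ins:

```matlab
% Reparameterization trick sketch: the encoder emits a 2*latentDim vector
% [mu; logVar]; the decoder input z is sampled from N(mu, sigma^2).
latentDim = 128;                                      % illustrative latent size
encOut = dlarray(rand(2*latentDim,1,'single'),"CB");  % stand-in encoder output
mu     = encOut(1:latentDim,:);
logVar = encOut(latentDim+1:end,:);
eps    = randn(latentDim,1,'single');                 % standard normal noise
z      = mu + exp(0.5*logVar) .* eps;                 % sample fed to the decoder
```

Writing the sample as mu plus scaled noise keeps the operation differentiable with respect to mu and logVar, which is what makes the VAE trainable end to end.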
I am using this example as a template:


Accepted Answer

Katja Mogalle on 1 Jul 2021
Starting from MATLAB R2021a, there are a couple of new capabilities that could be helpful with this workflow.
Firstly, you noted that analyzeNetwork didn't flag any issues. That is because analyzeNetwork, by default, analyzes the layers just as trainNetwork would. But in your case you later want to use the layers in a dlnetwork, and there are some subtle differences, which I'll explain below. To see the expected error message already in the Network Analyzer app, use the "TargetUsage" name-value pair:
analyzeNetwork(layers,'TargetUsage','dlnetwork')
Regarding "Apparently, the output from the fully connected layers lost the spatial dimensions and only has channel and batch dimensions":
This observation is absolutely correct. In a dlnetwork, the output of a fullyConnectedLayer does NOT have any spatial dimensions anymore; it only has the channel and the batch dimension. However, when a fullyConnectedLayer is used with trainNetwork or in a DAGNetwork/SeriesNetwork, the output has singleton spatial dimensions. That is why it's important to set the "TargetUsage" name-value pair of analyzeNetwork correctly.
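This difference is easy to see by passing a formatted dlarray through a tiny dlnetwork and inspecting the output format. A minimal sketch (layer names are illustrative):

```matlab
% Inspect the output format of a fullyConnectedLayer inside a dlnetwork.
miniLayers = [
    imageInputLayer([1 1 128],"Name","in","Normalization","none")
    fullyConnectedLayer(1024,"Name","fc")];
miniNet = dlnetwork(layerGraph(miniLayers));

X = dlarray(rand(1,1,128,1,'single'),"SSCB");  % spatial, spatial, channel, batch
Y = forward(miniNet,X);
dims(Y)   % the 'S' dimensions are gone; only channel and batch remain
```

This is exactly the shape mismatch the error message reports: the following transposedConv2dLayer expects data with two 'S' dimensions, but receives only 'CB' data.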
Lastly, one way to connect a fullyConnectedLayer with a convolutional layer in a dlnetwork is to write a custom layer that reintroduces the two singleton spatial dimensions the convolutional layer requires. There are probably many ways of implementing this. Here is one example:
classdef addDimensionsLayer < nnet.layer.Layer & nnet.layer.Formattable
    % Layer that adds new singleton dimensions to the data with the given
    % label(s).
    properties
        NewDimsLabels (1,:) char = ''
    end
    methods
        function layer = addDimensionsLayer(addedDimLabels,NameValue)
            arguments
                addedDimLabels (1,:) char
                NameValue.Name = ''
            end
            % The added dimension labels should be one or more of 'S', 'C',
            % 'B', 'T', 'U'. Note that, in total, a dlarray can have only
            % one 'C', 'B', and 'T' dimension each.
            layer.NewDimsLabels = addedDimLabels;
            layer.Name = NameValue.Name;
        end
        function Z = predict(layer,X)
            % Add the new dimension labels to the end of the existing
            % dimension labels.
            outputFormat = [dims(X), layer.NewDimsLabels];
            % Applying the new dimension labels creates trailing
            % singleton dimensions for the added labels.
            Z = dlarray(X,outputFormat);
        end
    end
end
Note that this custom layer is marked as "Formattable" which allows it to change the number of dimensions of the data flowing through it. More information can be found in this documentation example: Define Custom Deep Learning Layer with Formatted Inputs.
To put it all together, this custom layer can now be used right before the transposedConv2dLayer as follows:
layers = layerGraph([ ...
    imageInputLayer([1,1,128],"Name",'imgIn',"Normalization","none")
    fullyConnectedLayer(256,'Name','bufferFc')
    leakyReluLayer(0.01,"Name",'leakyrelu')
    batchNormalizationLayer("Name",'batchnorm')
    fullyConnectedLayer(128,'Name','fc_decoder1')
    leakyReluLayer(0.01,"Name",'leakyrelu_1')
    batchNormalizationLayer("Name",'batchnorm_1')
    fullyConnectedLayer(1024,'Name','fc_decoder2')
    leakyReluLayer(0.01,"Name",'leakyrelu_2')
    batchNormalizationLayer("Name",'batchnorm_2')
    addDimensionsLayer('SS','Name','addSpatialDims') % Custom layer that adds two spatial dims
    transposedConv2dLayer(1,512,"Stride",2,"Name",'transpose_conv_7') ...
    ]);
analyzeNetwork(layers,'TargetUsage','dlnetwork')
decoderNet = dlnetwork(layers)
analyzeNetwork(layers,'TargetUsage','dlnetwork')
decoderNet = dlnetwork(layers)
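As a quick sanity check, you can run the resulting dlnetwork on a dummy input (this assumes decoderNet was created as above; the batch size of 1 is arbitrary):

```matlab
% Forward a dummy 1x1x128 input through the converted decoder network.
X = dlarray(rand(1,1,128,1,'single'),"SSCB");
Y = forward(decoderNet,X);
size(Y)   % spatial size stays 1x1 here, since the transposed conv has
          % filter size 1: outSize = stride*(inSize-1) + filterSize
```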
I hope this helps and answers your questions.
  1 Comment
ytzhak goussha on 3 Jul 2021
This is brilliant, thank you!
I figured that a custom layer could do the trick and tried to implement one, but this is on a different level than what I tried.
Again, thanks


More Answers (0)

Version: R2021a
