How to add Inception-Res block and Dense-Inception block in 3D U-Net Layers
mohd akmal masud on 10 Sep 2024
Edited: Malay Agarwal on 18 Sep 2024
Dear All,
I have the code below:
lgraph = layerGraph();
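% Encoder stage 1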
tempLayers = [
    image3dInputLayer([128 128 128 3],"Name","ImageInputLayer")
    convolution3dLayer([3 3 3],16,"Name","Encoder-Stage-1-Conv-1","Padding","same","WeightsInitializer","he")
    batchNormalizationLayer("Name","Encoder-Stage-1-BN-1")
    reluLayer("Name","Encoder-Stage-1-ReLU-1")
    convolution3dLayer([3 3 3],32,"Name","Encoder-Stage-1-Conv-2","Padding","same","WeightsInitializer","he")
    batchNormalizationLayer("Name","Encoder-Stage-1-BN-2")
    reluLayer("Name","Encoder-Stage-1-ReLU-2")];
lgraph = addLayers(lgraph,tempLayers);
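% Encoder stage 2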
tempLayers = [
    maxPooling3dLayer([2 2 2],"Name","Encoder-Stage-1-MaxPool","Stride",[2 2 2])
    convolution3dLayer([3 3 3],32,"Name","Encoder-Stage-2-Conv-1","Padding","same","WeightsInitializer","he")
    batchNormalizationLayer("Name","Encoder-Stage-2-BN-1")
    reluLayer("Name","Encoder-Stage-2-ReLU-1")
    convolution3dLayer([3 3 3],64,"Name","Encoder-Stage-2-Conv-2","Padding","same","WeightsInitializer","he")
    batchNormalizationLayer("Name","Encoder-Stage-2-BN-2")
    reluLayer("Name","Encoder-Stage-2-ReLU-2")];
lgraph = addLayers(lgraph,tempLayers);
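% Bridge (bottleneck) and first up-convolution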
tempLayers = [
    maxPooling3dLayer([2 2 2],"Name","Encoder-Stage-2-MaxPool","Stride",[2 2 2])
    convolution3dLayer([3 3 3],64,"Name","Bridge-Conv-1","Padding","same","WeightsInitializer","he")
    batchNormalizationLayer("Name","Bridge-BN-1")
    reluLayer("Name","Bridge-ReLU-1")
    convolution3dLayer([3 3 3],128,"Name","Bridge-Conv-2","Padding","same","WeightsInitializer","he")
    batchNormalizationLayer("Name","Bridge-BN-2")
    reluLayer("Name","Bridge-ReLU-2")
    transposedConv3dLayer([2 2 2],128,"Name","Decoder-Stage-1-UpConv","BiasLearnRateFactor",2,"Stride",[2 2 2],"WeightsInitializer","he")];
lgraph = addLayers(lgraph,tempLayers);
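% Decoder stage 1 (receives the skip connection from encoder stage 2)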
tempLayers = [
    concatenationLayer(4,2,"Name","Decoder-Stage-1-Concatenation")
    convolution3dLayer([3 3 3],64,"Name","Decoder-Stage-1-Conv-1","Padding","same","WeightsInitializer","he")
    batchNormalizationLayer("Name","Decoder-Stage-1-BN-1")
    reluLayer("Name","Decoder-Stage-1-ReLU-1")
    convolution3dLayer([3 3 3],64,"Name","Decoder-Stage-1-Conv-2","Padding","same","WeightsInitializer","he")
    batchNormalizationLayer("Name","Decoder-Stage-1-BN-2")
    reluLayer("Name","Decoder-Stage-1-ReLU-2")
    transposedConv3dLayer([2 2 2],64,"Name","Decoder-Stage-2-UpConv","BiasLearnRateFactor",2,"Stride",[2 2 2],"WeightsInitializer","he")];
lgraph = addLayers(lgraph,tempLayers);
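% Decoder stage 2 (receives the skip connection from encoder stage 1) and output layers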
tempLayers = [
    concatenationLayer(4,2,"Name","Decoder-Stage-2-Concatenation")
    convolution3dLayer([3 3 3],32,"Name","Decoder-Stage-2-Conv-1","Padding","same","WeightsInitializer","he")
    batchNormalizationLayer("Name","Decoder-Stage-2-BN-1")
    reluLayer("Name","Decoder-Stage-2-ReLU-1")
    convolution3dLayer([3 3 3],32,"Name","Decoder-Stage-2-Conv-2","Padding","same","WeightsInitializer","he")
    batchNormalizationLayer("Name","Decoder-Stage-2-BN-2")
    reluLayer("Name","Decoder-Stage-2-ReLU-2")
    convolution3dLayer([1 1 1],5,"Name","Final-ConvolutionLayer","Padding","same","WeightsInitializer","he")
    softmaxLayer("Name","Softmax-Layer")
    pixelClassificationLayer("Name","Segmentation-Layer")];
lgraph = addLayers(lgraph,tempLayers);
% clean up helper variable
clear tempLayers;
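% Connect the pooling paths, skip connections, and up-convolutions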
lgraph = connectLayers(lgraph,"Encoder-Stage-1-ReLU-2","Encoder-Stage-1-MaxPool");
lgraph = connectLayers(lgraph,"Encoder-Stage-1-ReLU-2","Decoder-Stage-2-Concatenation/in2");
lgraph = connectLayers(lgraph,"Encoder-Stage-2-ReLU-2","Encoder-Stage-2-MaxPool");
lgraph = connectLayers(lgraph,"Encoder-Stage-2-ReLU-2","Decoder-Stage-1-Concatenation/in2");
lgraph = connectLayers(lgraph,"Decoder-Stage-1-UpConv","Decoder-Stage-1-Concatenation/in1");
lgraph = connectLayers(lgraph,"Decoder-Stage-2-UpConv","Decoder-Stage-2-Concatenation/in1");
plot(lgraph);
The resulting layer graph looks like the plot below.

However, I want to insert an Inception-Res block and a Dense-Inception block into my 3D U-Net, as shown in the picture below.

This is the proposed Inception-Res block:
This is the proposed Dense-Inception block:

Can anyone help me?
Accepted Answer
Malay Agarwal on 18 Sep 2024
Edited: 18 Sep 2024
You can implement the Inception-Res and Dense Inception-Res blocks as custom nested deep learning layers.
You can refer to the following resource, which shows how to create a residual block using nested layers: https://www.mathworks.com/help/releases/R2023a/deeplearning/ug/define-nested-deep-learning-layer.html.
I have attached an example implementation of the Inception-Res block to the answer. If you'd like to plot the internal network, you can use the following code:
layer = Inception_Res();
inputSize = [224 224 3];
layout = networkDataLayout(inputSize, "SSC");
layer = initialize(layer, layout);
plot(layer.Network)
Note that the attached code contains a commented-out line (line 72) in the initialize() method. I commented it out because I am not sure about the dimensions of the convolution operations, and initializing the internal network leads to an error. You'll need to adjust the parameters of the convolution operations so that all the dimensions work out.
The Dense Inception-Res block can be created in the same way.
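For illustration only, here is a minimal sketch of what such a nested layer might look like for 3-D data, following the pattern from the documentation page linked above. This is not the attached file: the class name (inceptionResBlock3d), the internal layer names, the kernel sizes, and the filter counts are all placeholders, and numFilters must equal the channel count of the block input, otherwise the residual addition at the end will fail.
classdef inceptionResBlock3d < nnet.layer.Layer & nnet.layer.Formattable
    % Sketch of a 3-D Inception-Res block as a nested custom layer.
    % Branch kernel sizes and filter counts are placeholders.

    properties (Learnable)
        Network   % dlnetwork holding the internal layers of the block
    end

    methods
        function layer = inceptionResBlock3d(numFilters,name)
            layer.Name = name;

            % Two parallel "inception" branches with different kernel sizes
            branch1 = [
                convolution3dLayer([1 1 1],numFilters,"Name","b1_conv","Padding","same")
                reluLayer("Name","b1_relu")];
            branch2 = [
                convolution3dLayer([3 3 3],numFilters,"Name","b2_conv","Padding","same")
                reluLayer("Name","b2_relu")];

            % Concatenate the branches along the channel dimension (dim 4 for
            % 3-D image data), project back to numFilters channels with a
            % 1x1x1 convolution, then add the residual shortcut
            tail = [
                concatenationLayer(4,2,"Name","concat")
                convolution3dLayer([1 1 1],numFilters,"Name","proj","Padding","same")
                additionLayer(2,"Name","add")
                reluLayer("Name","out")];

            lgraph = layerGraph(branch1);
            lgraph = addLayers(lgraph,branch2);
            lgraph = addLayers(lgraph,tail);
            lgraph = connectLayers(lgraph,"b1_relu","concat/in1");
            lgraph = connectLayers(lgraph,"b2_relu","concat/in2");

            % "b1_conv", "b2_conv" and "add/in2" are left unconnected, so they
            % become the inputs of the internal dlnetwork
            layer.Network = dlnetwork(lgraph,"Initialize",false);
        end

        function layer = initialize(layer,layout)
            % All three internal inputs receive the same data, so the same
            % layout can be reused for each of them
            layer.Network = initialize(layer.Network,layout,layout,layout);
        end

        function Z = predict(layer,X)
            % Feed the block input to both branches and to the shortcut
            Z = predict(layer.Network,X,X,X,"Outputs","out");
        end
    end
end
Once a block like this initializes correctly, you can add it to your U-Net layer graph with addLayers, remove the existing connection at the insertion point with disconnectLayers, and wire the block in with connectLayers, just like the built-in layers in your code above.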
Hope this helps!