Why is the data type not unified for custom training loops (dlarray) and internal training loops (array) in deep learning?
[XTrain,TTrain] = japaneseVowelsTrainData;  % load Japanese Vowels training sequences and labels
inputSize = 12;
numHead = 10;                % unused in this network
numHiddenUnits = 100;
numClasses = 9;
embeddingDimension = 50;     % unused in this network
numWords = 200;              % unused in this network
layers = [
    sequenceInputLayer(inputSize)
    batchNormalizationLayer
    peepholeLSTMLayer(numHiddenUnits,inputSize,OutputMode="last")  % custom layer
%     lstmLayer(numHiddenUnits,'OutputMode','last')                % built-in alternative
    batchNormalizationLayer
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];
For lstmLayer, the data type passed to the forward function during training is a plain array.

For peepholeLSTMLayer, which is a custom-defined layer, the data type passed to the forward (predict) function is dlarray.
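A quick way to see the two types side by side, assuming the Deep Learning Toolbox is installed (the sizes here are illustrative):

% Compare a plain numeric array with a dlarray.
X = rand(12,50);            % plain numeric array, as seen inside built-in layers
dlX = dlarray(X,"CT");      % formatted dlarray, as seen by custom layers
class(X)                    % 'double'
class(dlX)                  % 'dlarray'
underlyingType(dlX)         % 'double' - the numeric type the dlarray wraps
Xback = extractdata(dlX);   % unwrap back to a plain numeric array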
Why is the data type not unified between custom training loops (dlarray) and internal training loops (array)? It brings trouble and inconvenience, and I think it leads to bloat as well.
What is also puzzling: for internal layers (lstmLayer) there is no layer validation with auto-generated example inputs, and the forward function is used during training; yet for user-defined layers there is layer validation with auto-generated example inputs, and the predict function, not forward, is used. Why the difference? (A minimal sketch follows below.)
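For reference, the validation path for a user-defined layer looks roughly like this. The layer below is a hypothetical minimal example, not the peepholeLSTMLayer above; per the documentation, forward is optional for custom layers and falls back to predict when not defined:

% scaleLayer.m - hypothetical minimal custom layer (illustrative only).
classdef scaleLayer < nnet.layer.Layer
    properties
        Scale
    end
    methods
        function layer = scaleLayer(scale)
            layer.Scale = scale;
        end
        function Z = predict(layer,X)
            % X arrives as a dlarray during both inference and training;
            % without a separate forward method, training also calls predict.
            Z = layer.Scale .* X;
        end
    end
end

Something like checkLayer(scaleLayer(2),[12 50]) then runs the auto-generated example-input checks on it.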
I think the Deep Learning Toolbox of MATLAB is overgrown; implementing deep learning functions is inconvenient and complicated when it should be concise and plain.
Answers (1)
arushi on 2 Sep 2024
        Hi Jack,
The disparity in data types between custom training loops (dlarray) and internal training loops (array) in deep learning can be attributed to the following reasons (a small sketch follows the list):
- Flexibility and Compatibility: The dlarray data type offers flexibility by allowing users to perform computations on different types of hardware.
- Efficiency and Performance: Internal training loops often use array data types optimized for the underlying hardware and software frameworks. These types are tailored for efficient execution of deep learning operations and may not be fully compatible with the dlarray type.
- Framework-Specific Implementations: Different deep learning frameworks have their own internal representations for data. This leads to differences in the data types used within custom and internal training loops.
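For instance, the main thing dlarray buys a custom training loop is automatic differentiation: dlgradient only traces computations on dlarray inputs evaluated through dlfeval. A minimal sketch (the model and values are illustrative):

% dlgradient requires dlarray inputs traced through dlfeval.
w = dlarray(0.5);
x = dlarray(2);
[loss,grad] = dlfeval(@modelLoss,w,x)

function [loss,grad] = modelLoss(w,x)
    loss = w .* x.^2;            % toy "loss" computed on dlarray values
    grad = dlgradient(loss,w);   % reverse-mode gradient; valid only on traced dlarray
end

Because the trace lives in the dlarray wrapper, the same loop runs on a GPU simply by placing the underlying data there, for example dlarray(gpuArray(single(X))), which is the flexibility the first point refers to.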
I hope it helps!