MATLAB Answers

Error in MATLAB's included deep learning example

25 views (last 30 days)
Javier Bush on 15 Oct 2019
Edited: Walter Roberson on 30 Dec 2019
I am trying to run the MATLAB example SeqToSeqClassificationUsing1DConvAndModelFunctionExample in R2019b, but when I change the example to train the network on the GPU, it shows the error below. Please help me run it, or give me a workaround for training on the GPU.
Error using gpuArray/subsasgn
Attempt to grow array along ambiguous dimension.
Error in deep.internal.recording.operations.ParenAssignOp/forward (line 45)
x(op.Index{:}) = rhs;
Error in deep.internal.recording.RecordingArray/parenAssign (line 29)
x = recordBinary(x,rhs,op);
Error in dlarray/parenAssign (line 39)
objdata(varargin{:}) = rhsdata;
Error in SeqToSeqClassificationUsing1DConvAndModelFunctionExample>maskedCrossEntropyLoss (line 484)
loss(i) = crossentropy(dlY(:,i,idx),dlT(:,i,idx),'DataFormat','CBT');
Error in SeqToSeqClassificationUsing1DConvAndModelFunctionExample>modelGradients (line 469)
loss = maskedCrossEntropyLoss(dlY, dlT, numTimeSteps);
Error in deep.internal.dlfeval (line 18)
[varargout{1:nout}] = fun(x{:});
Error in dlfeval (line 40)
[varargout{1:nout}] = deep.internal.dlfeval(fun,varargin{:});
Error in SeqToSeqClassificationUsing1DConvAndModelFunctionExample (line 284)
[gradients, loss] = dlfeval(@modelGradients,dlX,Y,parameters,hyperparameters,numTimeSteps);

  1 Comment

Edric Ellis on 15 Oct 2019
Thanks for reporting this - I can reproduce the problem using R2019b here, I shall forward this to the development team...


Accepted Answer

Joss Knight on 15 Oct 2019
There is a bug in this example, which will be fixed. Thanks for reporting it. As a workaround, initialize the loss variable in the maskedCrossEntropyLoss function:
function loss = maskedCrossEntropyLoss(dlY, dlT, numTimeSteps)
    numObservations = size(dlY,2);
    loss = zeros([1,1],'like',dlY); % Add this line to initialize loss with the same type as dlY
    for i = 1:numObservations
        idx = 1:numTimeSteps(i);
        loss(i) = crossentropy(dlY(:,i,idx),dlT(:,i,idx),'DataFormat','CBT');
    end
end


Katja Mogalle on 25 Oct 2019
There are some small issues in the example script that prevent you from setting miniBatchSize > 1. The fix is pretty simple, though.
1) Replace the modelGradients function with the following:
function [gradients,loss] = modelGradients(dlX,T,parameters,hyperparameters,numTimeSteps)
    dlY = model(dlX,parameters,hyperparameters,true);
    dlY = softmax(dlY,'DataFormat','CBT');
    dlT = dlarray(T,'CBT');
    loss = maskedCrossEntropyLoss(dlY, dlT, numTimeSteps);
    gradients = dlgradient(mean(loss),parameters); % this line was changed to take the gradient of the mean loss
end
2) Replace the transformSequences function with the following:
function [XTransformed, YTransformed, numTimeSteps] = transformSequences(X,Y)
    % The line that precomputed numTimeSteps was removed; it is now
    % computed per observation inside the loop below.
    miniBatchSize = numel(X);
    numFeatures = size(X{1},1);
    sequenceLength = max(cellfun(@(sequence) size(sequence,2),X));
    classes = categories(Y{1});
    numClasses = numel(classes);

    sz = [numFeatures miniBatchSize sequenceLength];
    XTransformed = zeros(sz,'single');

    sz = [numClasses miniBatchSize sequenceLength];
    YTransformed = zeros(sz,'single');

    for i = 1:miniBatchSize
        predictors = X{i};
        numTimeSteps(i) = size(predictors,2); % This line now sets the time steps for the i-th observation

        % Create dummy labels.
        responses = zeros(numClasses, numTimeSteps(i), 'single'); % This line also uses the i-th observation's numTimeSteps
        for c = 1:numClasses
            responses(c,Y{i}==classes(c)) = 1;
        end

        % Left pad.
        XTransformed(:,i,:) = leftPad(predictors,sequenceLength);
        YTransformed(:,i,:) = leftPad(responses,sequenceLength);
    end
end
Note, however, that depending on your GPU you might run into out-of-memory issues even with a small miniBatchSize. I have a GeForce GTX 1080 and already hit this issue with a miniBatchSize of 3.
We will work on updating the example to fix these issues as soon as possible. Apologies for the inconvenience!
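If you are unsure how much headroom your GPU has before picking a miniBatchSize, you can query the device from MATLAB. A small sketch (assumes Parallel Computing Toolbox; Name, AvailableMemory, and TotalMemory are standard gpuDevice properties):

```matlab
% Check GPU memory headroom before choosing miniBatchSize
% (assumes Parallel Computing Toolbox and a supported GPU).
d = gpuDevice;   % currently selected GPU
fprintf('GPU: %s\n', d.Name);
fprintf('Available memory: %.2f GB of %.2f GB\n', ...
    d.AvailableMemory/1e9, d.TotalMemory/1e9);

% If training stops with an out-of-memory error, clear the device and
% retry with a smaller miniBatchSize:
% reset(d);
```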
Javier Bush on 26 Oct 2019
Thanks, I can change miniBatchSize now.
Zekun on 29 Dec 2019
I found another solution for
"Error using gpuArray/subsasgn
Attempt to grow array along ambiguous dimension."
In dlarray/parenAssign.m, at this location: "\R2019b\toolbox\nnet\deep\@dlarray\parenAssign.m", line 15 reads:
obj = zeros(0, 0, 'like', rhs);
Replace line 15 with the following two lines:
szrhs = size(rhs);
obj = zeros(szrhs(1), szrhs(2), 'like', rhs);
Users cannot directly edit this file, so I backed it up and replaced it with a modified copy.
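For anyone curious why sizing obj from rhs helps: the underlying failure is growing an empty gpuArray via indexed assignment, which can be reproduced in isolation. A minimal sketch (assumes Parallel Computing Toolbox, a supported GPU, and the affected R2019b release):

```matlab
% Minimal sketch of the failure mode behind
% "Attempt to grow array along ambiguous dimension".
rhs = gpuArray(single(3));

% This mirrors what dlarray/parenAssign did internally on R2019b:
% start from an empty array and grow it by indexed assignment.
obj = zeros(0, 0, 'like', rhs);
try
    obj(1) = rhs;   % growing an empty gpuArray is ambiguous and errors
catch err
    disp(err.message)
end

% Sizing the destination from rhs first (the patch above) makes the
% assignment unambiguous:
szrhs = size(rhs);
obj = zeros(szrhs(1), szrhs(2), 'like', rhs);
obj(1) = rhs;       % no growth needed; assignment succeeds
```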


More Answers (2)

Javier Bush on 16 Oct 2019
Thanks, it worked!



Linda Koletsou Koutsiou on 22 Oct 2019
Thank you for reporting the issue. The error you are getting is caused by an attempt to grow a gpuArray using a linearly indexed assignment.
For more information please refer to the following bug report:

  1 Comment

Javier Bush on 23 Oct 2019
I just changed the miniBatchSize to 2 in the same example and I get the following error. Could you please help me with that? I think this is a bug, because miniBatchSize is offered as a parameter in the example but you cannot actually change it.
Index exceeds the number of array elements (1).
Error in SeqToSeqClassificationUsing1DConvAndModelFunctionExample>maskedCrossEntropyLoss (line 486)
idx = 1:numTimeSteps(i);
Error in SeqToSeqClassificationUsing1DConvAndModelFunctionExample>modelGradients (line 472)
loss = maskedCrossEntropyLoss(dlY, dlT, numTimeSteps);
Error in deep.internal.dlfeval (line 18)
[varargout{1:nout}] = fun(x{:});
Error in dlfeval (line 40)
[varargout{1:nout}] = deep.internal.dlfeval(fun,varargin{:});
Error in SeqToSeqClassificationUsing1DConvAndModelFunctionExample (line 287)
[gradients, loss] = dlfeval(@modelGradients,dlX,Y,parameters,hyperparameters,numTimeSteps);



