LSTM time series hyperparameter optimization using Bayesian optimization

28 views (last 30 days)
I am working on a time series regression problem. I want to optimize the hyperparameters of an LSTM network using Bayesian optimization. I have 3 input variables and 1 output variable.
I want to optimize the number of hidden layers, the number of hidden units, the mini-batch size, the L2 regularization, and the initial learning rate. The code is given below:
numFeatures = 3;
numHiddenUnits = 120;
numResponses = 1;
layers = [ ...
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits,'OutputMode','sequence')
    fullyConnectedLayer(numResponses)
    regressionLayer];
maxEpochs = 100;     % not defined in the original snippet; example value
miniBatchSize = 32;  % example value; this is one of the variables to optimize
% Note: 'Momentum' applies only to the 'sgdm' solver and errors with
% 'adam', so it is omitted here.
options = trainingOptions('adam', ...
    'MaxEpochs',maxEpochs, ...
    'MiniBatchSize',miniBatchSize, ...
    'InitialLearnRate',optVars.InitialLearnRate, ...
    'GradientThreshold',1, ...
    'Shuffle','never', ...
    'L2Regularization',optVars.L2Regularization, ...
    'Plots','training-progress', ...
    'Verbose',0);
net = trainNetwork(XTrain,YTrain,layers,options);
YPredicted = predict(net,Xval,'MiniBatchSize',1);
% RMSE as the validation error: the original 1 - mean(YPredicted == Yval)
% is a classification accuracy metric and is not meaningful for regression.
valError = sqrt(mean((YPredicted - Yval).^2));
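For reference, the five search ranges could be declared for bayesopt along these lines (a minimal sketch; the names and ranges are illustrative):

% Sketch: search space for bayesopt; names and ranges are illustrative.
optimVars = [
    optimizableVariable('NumHiddenLayers',[1 3],'Type','integer')
    optimizableVariable('NumHiddenUnits',[50 300],'Type','integer')
    optimizableVariable('MiniBatchSize',[16 128],'Type','integer')
    optimizableVariable('InitialLearnRate',[1e-3 1e-1],'Transform','log')
    optimizableVariable('L2Regularization',[1e-10 1e-2],'Transform','log')];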
Thanks in advance.

Answers (4)

Jorge Calvo on 5 Oct 2021
I thought you would like to know that, in R2021b, we have included an example for training long short-term memory (LSTM) networks using Bayesian optimization in Experiment Manager.
I hope you find it helpful!

Don Mathis on 10 May 2019
Here's an example using a convolutional network instead of an LSTM network. Your LSTM case should look very similar: https://www.mathworks.com/help/deeplearning/examples/deep-learning-using-bayesian-optimization.html
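Adapted to the LSTM regression case in the question, that pattern might look roughly like this (a sketch, not the linked example itself; it assumes XTrain, YTrain, Xval, and Yval are in the workspace, the helper makeObjFcn mirrors the structure of the linked example, and all ranges are illustrative):

% Sketch: Bayesian optimization of LSTM hyperparameters.
optimVars = [
    optimizableVariable('NumHiddenLayers',[1 3],'Type','integer')
    optimizableVariable('NumHiddenUnits',[50 300],'Type','integer')
    optimizableVariable('InitialLearnRate',[1e-3 1e-1],'Transform','log')
    optimizableVariable('L2Regularization',[1e-10 1e-2],'Transform','log')];

ObjFcn = makeObjFcn(XTrain,YTrain,Xval,Yval);
BayesObject = bayesopt(ObjFcn,optimVars, ...
    'MaxObjectiveEvaluations',30, ...
    'IsObjectiveDeterministic',false);

function ObjFcn = makeObjFcn(XTrain,YTrain,Xval,Yval)
    ObjFcn = @valErrorFun;
    function [valError,cons] = valErrorFun(optVars)
        % Stack a variable number of LSTM layers.
        layers = sequenceInputLayer(3);
        for k = 1:optVars.NumHiddenLayers
            layers = [layers; lstmLayer(optVars.NumHiddenUnits, ...
                'OutputMode','sequence')]; %#ok<AGROW>
        end
        layers = [layers; fullyConnectedLayer(1); regressionLayer];
        options = trainingOptions('adam', ...
            'MaxEpochs',100, ...
            'InitialLearnRate',optVars.InitialLearnRate, ...
            'L2Regularization',optVars.L2Regularization, ...
            'GradientThreshold',1, ...
            'Shuffle','never', ...
            'Verbose',0);
        net = trainNetwork(XTrain,YTrain,layers,options);
        YPredicted = predict(net,Xval);
        % Validation RMSE; assumes a single validation sequence
        % (concatenate cell contents first if Xval holds several).
        valError = sqrt(mean((YPredicted - Yval).^2));
        cons = [];
    end
end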
  1 comment
Sinan Islam on 8 May 2021
LSTM is different from CNN. It is obvious that this example is in great demand. Why doesn't MATLAB make a proper example dedicated to optimizing LSTMs?



Jorge Calvo on 27 May 2021
If you have R2020b or later, you can use the Experiment Manager app to run Bayesian optimization to determine the best combination of hyperparameters. For more information, see https://www.mathworks.com/help/deeplearning/ug/experiment-using-bayesian-optimization.html.
  2 comments
Sinan Islam on 27 May 2021 (edited 27 May 2021)
Can I use Experiment Manager to load 200 different datasets, where each dataset has its own target, and have Experiment Manager find the best combination of LSTM hyperparameters for every dataset? Or will I have to code the objective function and loop over it 200 times?
Jorge Calvo
Jorge Calvo am 27 Mai 2021
Each time you run an experiment, the Experiment Manager will find the best combination of hyperparameters for a given setup. To specify what you mean by best, you can select from some standard objective metrics (including validation accuracy, which I think is what the original question was using) or you can define your own.
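A custom metric for the validation RMSE might look roughly like this (a sketch; the MAT-file name and its contents are assumptions, and trialInfo is the structure that Experiment Manager passes to metric functions):

function metric = validationRMSE(trialInfo)
    % Sketch of a custom Experiment Manager metric function.
    % Assumes a hypothetical valData.mat containing Xval and Yval.
    data = load("valData.mat");
    YPredicted = predict(trialInfo.trainedNetwork,data.Xval);
    metric = sqrt(mean((YPredicted - data.Yval).^2));
end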
If you want to find the best combination of hyperparameters for each of 200 data sets, then you would:
  1. Set up the experiment for the first data set.
  2. Run the experiment.
  3. Modify the setup function to load the next data set (see the sketch after this list).
  4. Run the experiment again.
  5. Repeat steps 3 and 4.
This amounts to running 200 different experiments. On the bright side, unless your objective function depends on the data set, you would not need to recode it.
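A setup function along these lines would make step 3 a one-line change between runs (a sketch; the file names, variable names, and hyperparameter fields of params are assumptions):

function [XTrain,YTrain,layers,options] = Experiment_setup(params)
    % Sketch of an Experiment Manager setup function (hypothetical names).
    datasetIndex = 1;  % step 3: edit this line to load the next data set
    data = load("dataset" + datasetIndex + ".mat");  % assumed to hold XTrain, YTrain
    XTrain = data.XTrain;
    YTrain = data.YTrain;
    layers = [ ...
        sequenceInputLayer(3)
        lstmLayer(params.NumHiddenUnits,'OutputMode','sequence')
        fullyConnectedLayer(1)
        regressionLayer];
    options = trainingOptions('adam', ...
        'InitialLearnRate',params.InitialLearnRate, ...
        'L2Regularization',params.L2Regularization, ...
        'Verbose',0);
end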



SIVA SRI on 3 Sep 2024
To customize the architecture to classify flowers, you first replace the fully connected layer. Fully connected layers need one neuron for each output class.
To add a fully connected layer, locate the Convolution and Fully Connected section of the Layer Library. Click and drag to add a layer to the canvas.
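Programmatically, the same replacement amounts to giving the final fully connected layer one output per class (a minimal sketch; numClasses and the layer name are illustrative):

numClasses = 5;  % illustrative: one neuron per flower class
fcNew = fullyConnectedLayer(numClasses,'Name','fc_flowers');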
