Prediction of future values using narnet

7 views (last 30 days)
Elma
Elma on 17 Jan 2014
Commented: PRABHAT on 25 May 2023
I have a time series of hourly data for the period from 1980 to 2005 (219000 timesteps), and I need to predict those values for the period 2006-2012 (52560 timesteps). I have generated the code using the NN Toolbox, but I need to clarify the following issues: How can I get predicted values from the trained network for the next 6 years? I know that a closed loop is used for multi-step prediction, but the elements of the resulting array yc are constant. In fact, the first few values differ, and all the others are constant. Is the last or the first value of the array yc the prediction for timestep y(t+1)? And how can I get predictions for the additional 52559 timesteps?
I created the code with the NN Toolbox, and I used divideblock division since the data form a time series.
The code:
if true
% WSin - feedback time series.
load('WSin.mat')
targetSeries = WSin;
feedbackDelays = 1:4;
hiddenLayerSize = 10;
net = narnet(feedbackDelays,hiddenLayerSize);
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
[inputs,inputStates,layerStates,targets] = preparets(net,{},{},targetSeries);
% Setup Division of Data for Training, Validation, Testing
net.divideFcn = 'divideblock'; % Divide data in blocks
net.divideMode = 'time'; % Divide up every value
net.trainFcn = 'trainrp';
net.performFcn = 'mse'; % Mean squared error
net.trainParam.epochs = 2000;
% Train the Network
[net,tr] = train(net,inputs,targets,inputStates,layerStates);
% Test the Network
outputs = net(inputs,inputStates,layerStates);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs)
% Recalculate Training, Validation and Test Performance
trainTargets = gmultiply(targets,tr.trainMask);
valTargets = gmultiply(targets,tr.valMask);
testTargets = gmultiply(targets,tr.testMask);
trainPerformance = perform(net,trainTargets,outputs)
valPerformance = perform(net,valTargets,outputs)
testPerformance = perform(net,testTargets,outputs)
% View the Network
view(net)
% Closed Loop Network
netc = closeloop(net);
[xc,xic,aic,tc] = preparets(netc,{},{},targetSeries);
yc = netc(xc,xic,aic);
perfc = perform(net,tc,yc)
end
This picture shows the output of the open-loop network, and it looks fine.
This picture shows the output of the closed-loop network with the same input data, so I'm not sure whether the next predicted value of the network is the first or the last, and whether the output should look like this.
Thank you very much in advance for helping me with this. I tried to find an answer in earlier topics, but since I had already tried everything I read there, I needed to ask you directly.
With kind regards

Accepted Answer

Greg Heath
Greg Heath on 28 Jan 2014
% The code:
==> When posting, you should reformat the code to have one statement per line.
if true
==> What does that statement do?
% WSin - feedback time series.
load('WSin.mat')
targetSeries = WSin;
feedbackDelays = 1:4;
hiddenLayerSize = 10;
==> For a tough problem like this you have to optimize the inputs using the significant target autocorrelation delays, and run a trial-and-error search for Hopt (keep increasing H until the validation-set improvement is negligible).
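For illustration, that kind of lag search might look something like this (a sketch only; maxlag and the variable names are placeholders, and targetSeries is assumed to be the 1-by-N cell row used with preparets):
% Find statistically significant autocorrelation lags to try as feedback delays
t = cell2mat(targetSeries);           % 1 x N row vector of targets
z = (t - mean(t)) / std(t);           % standardize
N = numel(z);
maxlag = 100;                         % arbitrary search range
acf = zeros(1,maxlag);
for k = 1:maxlag
    acf(k) = sum(z(1:N-k) .* z(1+k:N)) / N;   % biased autocorrelation estimate
end
sigbound = 1.96/sqrt(N);              % approximate 95% significance bound
sigdelays = find(abs(acf) > sigbound) % candidate feedbackDelays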
net = narnet(feedbackDelays,hiddenLayerSize);
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
==> Delete. That is the default.
[inputs,inputStates,layerStates,targets] = preparets(net,{},{},targetSeries);
% Setup Division of Data for Training, Validation, Testing
net.divideFcn = 'divideblock'; % Divide data in blocks
net.divideMode = 'time'; % Divide up every value
net.trainFcn = 'trainrp';
==> I assume this is used because the data set is huge.
net.performFcn = 'mse'; % Mean squared error
==> Delete. That is the default.
net.trainParam.epochs=2000;
% Train the Network
[net,tr] = train(net,inputs,targets,inputStates,layerStates);
==> You forgot the final States on the LHS: [ net tr Xf Af ]
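For example (a sketch; the extra output names are illustrative), the training call could capture the final states like this:
[net,tr,Ys,Es,Xf,Af] = train(net,inputs,targets,inputStates,layerStates);
% Xf = final input-delay states, Af = final layer-delay states; these are
% needed to continue the simulation past the end of the training data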
% Test the Network
outputs = net(inputs,inputStates,layerStates);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs)
% Recalculate Training, Validation and Test Performance
trainTargets = gmultiply(targets,tr.trainMask);
valTargets = gmultiply(targets,tr.valMask);
testTargets = gmultiply(targets,tr.testMask);
trainPerformance = perform(net,trainTargets,outputs)
valPerformance = perform(net,valTargets,outputs)
testPerformance = perform(net,testTargets,outputs)
==> Look more closely at tr. The above calculations have already been done.
% View the Network
view(net)
% Closed Loop Network
netc = closeloop(net);
==> Test the data using netc. If the result is noticeably different from the open-loop performance, train netc beginning with the current open-loop weights (see the sketch after the code below). Review some of my closeloop examples in the NEWSGROUP.
[xc,xic,aic,tc] = preparets(netc,{},{},targetSeries);
yc = netc(xc,xic,aic);
perfc = perform(net,tc,yc)
end
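A sketch of the closed-loop retraining suggested above (assuming the open-loop net has been trained as in the code; the epoch budget is illustrative):
% Retrain the closed-loop net starting from the open-loop weights
netc = closeloop(net);                      % weights are carried over from net
[xc,xic,aic,tc] = preparets(netc,{},{},targetSeries);
yc0 = netc(xc,xic,aic);
perfc0 = perform(netc,tc,yc0)               % closed-loop performance before retraining
netc.trainParam.epochs = 500;               % illustrative budget
[netc,trc] = train(netc,xc,tc,xic,aic);     % training continues from the current weights
yc1 = netc(xc,xic,aic);
perfc1 = perform(netc,tc,yc1)               % compare with perfc0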
This picture shows the output of the open-loop network, and it looks fine.
</matlabcentral/answers/uploaded_files/7468/Net.jpg>
===> ERROR: The page you were looking for doesn't exist. You may have mistyped the address or the page may have moved.
This picture shows the output of the closed-loop network with the same input data, so I'm not sure whether the next predicted value of the network is the first or the last, and whether the output should look like this.
</matlabcentral/answers/uploaded_files/7469/NetC.jpg>
===> ERROR: The page you were looking for doesn't exist. You may have mistyped the address or the page may have moved.
OK. I saw your plots after you posted them.
Bottom line:
1. There is a limit to how far you can predict with netc. Therefore, optimization of FD and H is critical.
2. After closing the loop, continue training netc, initialized with the existing weights from net.
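As a sketch of how the future values themselves can then be obtained (this follows the standard narnet multistep-prediction idiom; Nsteps and the variable names are illustrative):
% Run the closed-loop net over the known data and capture its final states
netc = closeloop(net);
[xc,xic,aic,tc] = preparets(netc,{},{},targetSeries);
[yc,Xfc,Afc] = netc(xc,xic,aic);
% Continue Nsteps past the end of the data; a closed-loop narnet has no
% external inputs, so an empty 0-by-Nsteps cell array runs it Nsteps further
Nsteps = 52560;                          % the six years of hourly values asked about
Yfuture = netc(cell(0,Nsteps),Xfc,Afc);
Here Yfuture{1} would be the prediction for the first timestep after the known data.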
Thank you for formally accepting my answer
P.S. I missed this post because, for some reason, it doesn't show up when using the search word "neural"
Hope this helps.
Greg
  5 Comments
Thomas
Thomas on 9 Feb 2014
Hi Greg,
Further to the above explanation on
"If the targetSeries are inputs for a period of 5 years, are the outputs from the closed-loop network predictions for the next 5 years?",
I also now understand that the outputs from the closed loop are not forecasts/predictions for the next periods (I used to have the same doubt as Elma).
However, how can I forecast future time steps (let's say 2006-2012 in this example)?
Based on the above, I should make use of Xf and Af, but after having a look at your quoted thread and searching other examples, I still cannot understand how to do the forecast. Would you mind providing more explanation?
Many thanks in advance for your kind help.
Thomas
Muhammad Usman Saleem
Muhammad Usman Saleem on 14 May 2023
@Greg Heath please respond, I have the same problem forecasting future values. Please share how.


More Answers (2)

Sergey Gromov
Sergey Gromov on 4 Apr 2014
This material contains an example of future prediction. Please pay attention to the training series and to the simulation series. Thanks for your attention. / Sergey Gromov
  3 Comments
Greg Heath
Greg Heath on 14 Apr 2014
1. The 100 points are initialization transients from solving the Mackey-Glass differential equation. They are discarded so that the quasi periodic steady state can be modeled by the net.
2. There was no initialization of the random number generator. Therefore it is not possible to duplicate the result. Nevertheless, using their code, I found that most of my multiple designs were successful.
3. I tried a more conventional approach with divideFcn = divideblock, Nval = 0 Ntst = 400. However, for some strange reason, the MATLAB code does not allow the combination Nval = 0, Ntst > 0. Therefore I used the fudge Nval = 400, Ntst = 0 with max_fail = 2*maxepochs to prevent validation stopping.
4. It is very interesting that good time-series solutions can result from using dividerand. However, I'm not convinced that, in general, it should be recommended. They used the 4 feedback delays [1,7,13,19] and two hidden layers with [6 3] hidden nodes. Since my search using the autocorrelation function shows that lags 1:19 are all statistically significant with 95% confidence, just trying several equally spaced subsets (e.g., [1], [10], [19], [1 19], [1 10 19], [1 7 13 19]) could come upon that combination rather quickly. Using two hidden layers can reduce the number of required hidden nodes. However, I don't know their reason for using them.
5. Again, I think the only reason dividerand was successful is because of the unusual continuous band of significant autocorrelation lags.
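For reference, the fudge described in point 3 might be set up like this (a sketch; the names and the epoch budget are illustrative, and targetSeries stands for the target cell row):
N = numel(targetSeries);                            % number of target timesteps
net.divideFcn = 'divideblock';
net.divideParam.trainRatio = (N-400)/N;             % everything but the last 400 points
net.divideParam.valRatio   = 400/N;                 % last 400 points as the nominal "validation" block
net.divideParam.testRatio  = 0;
net.trainParam.epochs   = 1000;                     % illustrative
net.trainParam.max_fail = 2*net.trainParam.epochs;  % prevent validation stopping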
Hope this helps.
Greg
PRABHAT
PRABHAT on 25 May 2023
Page not found. Kindly send it to my email id: luckyprabhat7@gmail.com



Jadranka Milosavljevic
Jadranka Milosavljevic on 12 Apr 2020
The Mackey-Glass equation describes a time-delayed nonlinear system that produces a chaotic result. Chaotic systems are ones in which small changes eventually lead to results that can be dramatically different.

The Mackey-Glass equation is given by

dx/dt = 0.2 x(t-tau) / (1 + x(t-tau)^10) - 0.1 x(t)

The delay tau is 17 seconds.

Simulate for 1000 seconds and use a fixed-step algorithm with a step size of 0.01 seconds. Illustrate sensitivity by repeating the simulation, replacing the 0.1 by 0.099 and then by 0.101.

You will need to adjust the configuration parameters because only the last 1000 time values are sent back to the workspace. Go to the Simulation tab, Configuration Parameters, Data Import/Export on the left-hand side, and unclick the box that limits the number of points to the last 1000.
