MLP classification: what is the problem in my code?

mike mike on 21 Sep 2017
Commented: Greg Heath on 23 Sep 2017
I would like to understand why the MLP neural network I built performs so poorly. The network should be a universal classifier because it has two hidden layers but, with the data set I use, it does not train well. My network is built using
  1. a sigmoidal transfer function in the first and second layers
  2. a softmax function in the output layer
The "inputs" file is a 3x120 matrix: 3 features and 120 observations. The "targets" file is a 3x120 matrix (representing 3 different classes).
In the example code I used a network with 40 neurons in the first layer and 20 in the second layer.
I notice two anomalies:
  1. if I run view(net) I see that the number of outputs of the output layer is 2 while the output box is 3: what does this mean, and why does MATLAB do this?
  2. if I compute sum(net([i1; i2; i3])) I get a value different from 1, but it should be 1 because the last layer is a softmax.
Below is my code; the inputs and targets files are attached.
% Choose a Training Function
% For a list of all training functions type: help nntrain
% 'trainlm' is usually fastest.
% 'trainbr' takes longer but may be better for challenging problems.
% 'trainscg' uses less memory. Suitable in low memory situations.
trainFcn = 'trainbr'; % Bayesian regularization backpropagation.
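% The attached data files define the variables used below (names assumed
% from the question: inputs is 3x120, targets is 3x120)
x = inputs;  % 3 features x 120 observations
t = targets; % 3 classes  x 120 observations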
% Create a Pattern Recognition Network
hiddenLayerSize = [40 20];
net = feedforwardnet(hiddenLayerSize, trainFcn); % create feedforward network (trainFcn must be passed here, otherwise it is ignored)
% set the transfer functions
net.layers{1}.transferFcn = 'tansig';
net.layers{2}.transferFcn = 'tansig';
net.layers{3}.transferFcn = 'softmax';
% Setup Division of Data for Training, Validation, Testing
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
% Train the Network
[net,tr] = train(net,x,t);
% Test the Network
y = net(x);
e = gsubtract(t,y);
performance = perform(net,t,y)
tind = vec2ind(t);
yind = vec2ind(y);
percentErrors = sum(tind ~= yind)/numel(tind);
% View the Network
view(net)
% Plots
% Uncomment these lines to enable various plots.
%figure, plotperform(tr)
%figure, plottrainstate(tr)
%figure, ploterrhist(e)
figure, plotconfusion(t,y)
%figure, plotroc(t,y)
4 comments
mike mike on 22 Sep 2017
I would like to send you the data, but what is your email? Meanwhile, I am attaching the files here.
Greg Heath on 23 Sep 2017
If you click on my name, you will see my community profile. The address in the profile is
heath@alumni.brown.edu
Greg


Accepted Answer

Greg Heath on 22 Sep 2017
There are too many basic concepts of which you are unaware. Unfortunately, I only have time to list some of them.
1. One hidden layer is sufficient unless you have SPECIFIC operations you wish to perform (e.g., specific image feature extraction).
2. Using a validation subset helps prevent poor performance on nontraining (validation, testing and unseen) data.
3. Too many weights can cause instability; in particular, poor performance on nontraining data (NN vendors MUST be crucially aware of this ($$$!!!)).
4. It is best to stay as close as possible to the MATLAB example code and default parameter values found in the help and doc documentation.
5. A quick look at my QUICKIES posts in the NEWSREADER may help understand what few changes are really necessary.
6. Think in terms of the example code and use multiple initial random weight trials to minimize the number of hidden nodes, subject to obtaining a good performance on all of the data, e.g.
mse(t-y) <= 0.01*mse(t-mean(t,2))
(obviously comparing your design with the naive attempt to use the best constant value for the model). A sketch of this search follows.
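For example, a minimal sketch of that double loop (assuming x and t are the 3x120 inputs and targets from the question; Hmax and Ntrials are arbitrary illustrative choices):
% Find the smallest hidden layer size H that meets the normalized MSE goal,
% trying several random weight initializations for each candidate H
MSE00 = mse(t - mean(t,2));  % reference: the best constant model
goal  = 0.01*MSE00;          % i.e., require R^2 >= 0.99
Hmax = 10; Ntrials = 10;     % arbitrary illustrative limits
for H = 1:Hmax
    for trial = 1:Ntrials
        net = patternnet(H, 'trainscg'); % new net => new random initial weights
        [net, tr] = train(net, x, t);
        y = net(x);
        if mse(t - y) <= goal
            fprintf('success: H = %d, trial = %d\n', H, trial)
        end
    end
end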
I have quite a few tutorials in the NEWSREADER which may help.
Hope this helps.
Thank you for formally accepting my answer
Greg
1 comment
mike mike on 22 Sep 2017
I'll tell you why I want to use feedforwardnet. At the beginning I used the pattern recognition tool through nnstart, i.e., a network with only one hidden layer. In fact, the code I posted above was automatically generated by MATLAB, where instead of feedforwardnet there was patternnet. However, with my inputs and targets files I never managed to get convergence, even during the training and validation phases and even if I used 1000 or more neurons (you can try it). I thought the problem was that the input data falls into classes that are not convex sets; I believe that a classification problem with non-convex sets cannot be solved with a single hidden layer, so I need two hidden layers whose transfer function is sigmoidal (it should be hardlim, but that function is not differentiable) and a softmax output layer. The fact remains that I do not know why the sum of the outputs of the softmax function is not 1. Please try it yourself. Thanking you for your quick answers, I will meanwhile send you my files by email.
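A plausible explanation for the sum-not-1 observation (an assumption, not verified against the attached data): feedforwardnet applies 'mapminmax' processing to the outputs by default, so the raw softmax activations are rescaled on the way out. A quick check on the net built in the question:
% Inspect the default output processing of the 3-layer net from the question
net.outputs{3}.processFcns    % typically {'removeconstantrows','mapminmax'}
% The inverse mapminmax step rescales the softmax outputs, so the columns of
% net(x) need not sum to 1. Dropping it before training restores unit sums:
net.outputs{3}.processFcns = {'removeconstantrows'};
[net, tr] = train(net, x, t);
sum(net(x))                   % each column should now be approximately 1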


More Answers (1)

Greg Heath on 22 Sep 2017
GEH1: The best network function for classification is PATTERNNET
GEH2: Your targets should be 0,1 UNIT vectors.
GEH3: The best training function for classification is TRAINSCG
GEH4: ONE hidden layer suffices for a UNIVERSAL APPROXIMATOR
GEH5: Network creation involves the sizes of ONLY the hidden layers; NOT the sizes of the input and output!
GEH6: For STABILITY & GENERALIZATION to nontraining (e.g., validation, testing and unseen) data with similar summary statistics (e.g., mean, std, etc.):
a. Use a VALIDATION subset
b. MINIMIZE the number of HIDDEN NODES
GEH7: Unable to download your data to test the sum(softmax) claim.
GEH8: Take a look at my QUICKIES NEWSREADER posts.
A minimal example along the lines of GEH1-GEH6 follows this list.
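Assuming x and t are the question's 3x120 matrices (with the columns of t as 0/1 unit vectors per GEH2) and taking hidden layer size 10 as a placeholder to be minimized per GEH6b:
net = patternnet(10, 'trainscg');   % GEH1/GEH3/GEH5: one hidden layer, size only
net.divideParam.trainRatio = 0.70;  % GEH6a: keep a validation subset
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;
[net, tr] = train(net, x, t);       % GEH2: t columns are 0/1 unit vectors
y = net(x);
percentErrors = sum(vec2ind(t) ~= vec2ind(y))/size(t,2)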
Hope this helps.
Thank you for formally accepting my answer
Greg
