Ground truth in the Stop Signs example has 27 images detected, but the root folder contains 40

1 view (last 30 days)
Hi All,
I am debugging my code which is similar to the Stopsign example! The groundtruths I am using is 90x2 table with 90 images in the root folder stored in matlab.
I would like to know!
why matlab StopSign Example has 40 images in the root folder and in the ground truths there is 27x2 table being detected?
Is there some sort of ramdomized traning happening?
What is happening with the other 13 images?
What is being tested and what is being trained in this exercise?
There is no traning code established or is it being called with one of the fucntions and i am not seeing this process?
Please let me know, this is baffling me!
thank you in advance for your patience and time responding

Answers (1)

Ajay Pattassery on 21 Feb 2020
  1. Why does the MATLAB Stop Signs example have 40 images in the root folder while the ground truth being detected is a 27x2 table?
  • There is no requirement to use just 27 images for training; you can use all 40 images available. The only thing to keep in mind is to have a separate set of images for testing, that is, images that are not used for training. So in your case, you could keep, say, 70 images for training and the remaining 20 for testing.
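A split like that can be sketched by randomly partitioning the rows of the ground truth table before training. In this sketch, `stopSigns` stands in for the Nx2 ground truth table from the question; the 70/20 split and the fixed random seed are illustrative choices, not part of the original example:

```matlab
% Sketch: split a ground truth table into training and test sets.
% stopSigns is assumed to be the Nx2 table (imageFilename, boxes).
rng(0);                                 % fix the seed so the split is reproducible
numImages = height(stopSigns);          % e.g. 90 in the question's setup
shuffledIdx = randperm(numImages);      % random permutation of row indices

numTrain = 70;                          % illustrative 70/20 split
trainingData = stopSigns(shuffledIdx(1:numTrain), :);
testData     = stopSigns(shuffledIdx(numTrain+1:end), :);
```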
2. Is there some sort of randomized training happening?
  • Based on the options set in trainingOptions, the images for training can be shuffled. Refer to the Shuffle section under the mini-batch options.
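For reference, shuffling is controlled by the Shuffle name-value pair of trainingOptions. A minimal sketch; the other values here simply mirror the options printed later in this thread and are otherwise arbitrary:

```matlab
% 'Shuffle' controls whether training images are reshuffled:
% 'once' (default), 'never', or 'every-epoch'.
options = trainingOptions('sgdm', ...
    'MiniBatchSize', 15, ...
    'MaxEpochs', 25, ...
    'InitialLearnRate', 1e-4, ...
    'Shuffle', 'every-epoch');   % reshuffle the data before every epoch
```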
3. What is being tested and what is being trained in this exercise?
4. There is no training code established, or is it being called by one of the functions and I am not seeing this process?
  • I assume you are referring to the following example. Even if not, please go through the mentioned example; it details the training and testing process for stop sign detection. In this example, transfer learning is performed on a network that was already trained on the CIFAR-10 dataset. You can see the images used for training in the stopSigns variable.
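The training step the question is looking for is a single call to trainRCNNObjectDetector, which takes the ground truth table directly; region proposal extraction and network fine-tuning happen inside it. A hedged sketch, assuming trainingData is a ground truth table and layers and options are defined as in the example (the overlap-range values shown are illustrative):

```matlab
% Train an R-CNN detector; proposal extraction, network fine-tuning,
% and box regression training all happen inside this one call.
rcnn = trainRCNNObjectDetector(trainingData, layers, options, ...
    'NegativeOverlapRange', [0 0.3], ...  % low-overlap proposals become negatives
    'PositiveOverlapRange', [0.5 1]);     % high-overlap proposals become positives
```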
  4 Comments
Ajay Pattassery on 24 Feb 2020
Edited: Ajay Pattassery on 24 Feb 2020
%% Display strongest detection result.
img = imread('11.jpg');
[bbox, score, label] = detect(rcnn, img, 'MiniBatchSize', 15);
whos bbox score label
The above testing part looks fine to me. I assume there is an image named 11.jpg. Also, you do not need the MiniBatchSize argument if you are testing just a single image. You can try displaying the bbox, score, and label values using disp(bbox), disp(score), and disp(label).
To find all the bounding boxes, you can set the SelectStrongest argument to false in the detect command in the above code.
[bbox, score, label] = detect(rcnn, img, 'MiniBatchSize', 15,'SelectStrongest',false);
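When SelectStrongest is false, the detector returns all candidate boxes; they can then be filtered manually with selectStrongestBbox and drawn with insertObjectAnnotation. A sketch, assuming rcnn and img are defined as above:

```matlab
% Return every candidate box instead of only the strongest ones.
[bbox, score, label] = detect(rcnn, img, 'SelectStrongest', false);

if isempty(bbox)
    disp('No detections returned for this image.');
else
    % Optionally apply non-maximum suppression manually.
    [selectedBbox, selectedScore] = selectStrongestBbox(bbox, score);
    annotated = insertObjectAnnotation(img, 'rectangle', selectedBbox, selectedScore);
    figure; imshow(annotated);
end
```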
Matpar on 24 Feb 2020
Edited: Matpar on 24 Feb 2020
Hi Ajay Pattassery, I tried your instructions and it was unsuccessful, sad to say: when I used them, no boxes are displayed.
That's the thing: when I checked other prototypes, they used the same coding sequence to get the result, and they did!
I am testing a single image, 11.jpg, and the puzzling part is that this same code worked perfectly twice before when I ran the operations.
This is what is confusing me: the same code, without modification, is not even showing a box.
So what am I doing wrong?
Can you guide me, or point me to someone who can, please? I would like to solve this challenge, which seems to be a mystery!
Please check the output from the rcnnObjectDetector.
Should this be output from the bounding box regression layer or from the classification output layer?
My guess is the classification layer; how do I change that?
Can you assist me in altering the output from the bounding box regression layer to the classification output layer?
I could be wrong, but at this point I am checking everything, as my limited knowledge proves fragile.
Please see the result from the run below, and thanks once more for responding!
>> test20
Data =
struct with fields:
Wgtruth: [192×2 table]
inputSize =
32 32 3
ImDir =
'/Applications/MATLAB_R2019b.app/toolbox/vision/visiondata/Wgtruth'
imds =
ImageDatastore with properties:
Files: {
' .../MATLAB_R2019b.app/toolbox/vision/visiondata/Wgtruth/gun1.jpg';
' .../MATLAB_R2019b.app/toolbox/vision/visiondata/Wgtruth/gun10.jpg';
' .../MATLAB_R2019b.app/toolbox/vision/visiondata/Wgtruth/gun100.jpg'
... and 189 more
}
Labels: [Wgtruth; Wgtruth; Wgtruth ... and 189 more categorical]
AlternateFileSystemRoots: {}
ReadSize: 1
ReadFcn: @readDatastoreImage
imdsTrain =
ImageDatastore with properties:
Files: {
' .../MATLAB_R2019b.app/toolbox/vision/visiondata/Wgtruth/gun130.jpg';
' .../MATLAB_R2019b.app/toolbox/vision/visiondata/Wgtruth/gun1.jpg';
' .../MATLAB_R2019b.app/toolbox/vision/visiondata/Wgtruth/gun84.jpg'
... and 131 more
}
Labels: [Wgtruth; Wgtruth; Wgtruth ... and 131 more categorical]
AlternateFileSystemRoots: {}
ReadSize: 1
ReadFcn: @readDatastoreImage
imdsValidation =
ImageDatastore with properties:
Files: {
' .../MATLAB_R2019b.app/toolbox/vision/visiondata/Wgtruth/gun34.jpg';
' .../MATLAB_R2019b.app/toolbox/vision/visiondata/Wgtruth/gun174.jpg';
' .../MATLAB_R2019b.app/toolbox/vision/visiondata/Wgtruth/gun30.jpg'
... and 55 more
}
Labels: [Wgtruth; Wgtruth; Wgtruth ... and 55 more categorical]
AlternateFileSystemRoots: {}
ReadSize: 1
ReadFcn: @readDatastoreImage
layersTransfer =
12x1 Layer array with layers:
1 'imageinput' Image Input 32x32x3 images with 'zerocenter' normalization
2 'conv' Convolution 32 5x5x3 convolutions with stride [1 1] and padding [2 2 2 2]
3 'maxpool' Max Pooling 3x3 max pooling with stride [2 2] and padding [0 0 0 0]
4 'relu' ReLU ReLU
5 'conv_1' Convolution 32 5x5x32 convolutions with stride [1 1] and padding [2 2 2 2]
6 'relu_1' ReLU ReLU
7 'avgpool' Average Pooling 3x3 average pooling with stride [2 2] and padding [0 0 0 0]
8 'conv_2' Convolution 64 5x5x32 convolutions with stride [1 1] and padding [2 2 2 2]
9 'relu_2' ReLU ReLU
10 'avgpool_1' Average Pooling 3x3 average pooling with stride [2 2] and padding [0 0 0 0]
11 'fc' Fully Connected 64 fully connected layer
12 'relu_3' ReLU ReLU
Name Size Bytes Class Attributes
layersTransfer 12x1 495349 nnet.cnn.layer.Layer
Tlayers =
15x1 Layer array with layers:
1 'imageinput' Image Input 32x32x3 images with 'zerocenter' normalization
2 'conv' Convolution 32 5x5x3 convolutions with stride [1 1] and padding [2 2 2 2]
3 'maxpool' Max Pooling 3x3 max pooling with stride [2 2] and padding [0 0 0 0]
4 'relu' ReLU ReLU
5 'conv_1' Convolution 32 5x5x32 convolutions with stride [1 1] and padding [2 2 2 2]
6 'relu_1' ReLU ReLU
7 'avgpool' Average Pooling 3x3 average pooling with stride [2 2] and padding [0 0 0 0]
8 'conv_2' Convolution 64 5x5x32 convolutions with stride [1 1] and padding [2 2 2 2]
9 'relu_2' ReLU ReLU
10 'avgpool_1' Average Pooling 3x3 average pooling with stride [2 2] and padding [0 0 0 0]
11 'fc' Fully Connected 64 fully connected layer
12 'relu_3' ReLU ReLU
13 'fullyConn' Fully Connected 1 fully connected layer
14 'softmax' Softmax softmax
15 'classoutput' Classification Output crossentropyex
Name Size Bytes Class Attributes
layers 15x1 497338 nnet.cnn.layer.Layer
Name Size Bytes Class Attributes
Tlayers 15x1 496522 nnet.cnn.layer.Layer
ans =
ClassificationOutputLayer with properties:
Name: 'classoutput'
Classes: 'auto'
OutputSize: 'auto'
Hyperparameters
LossFunction: 'crossentropyex'
pixelRange =
-30 30
imageAugmenter =
imageDataAugmenter with properties:
FillValue: 0
RandXReflection: 1
RandYReflection: 1
RandRotation: [-30 30]
RandScale: [1 1]
RandXScale: [1 1]
RandYScale: [1 1]
RandXShear: [-30 40]
RandYShear: [-30 40]
RandXTranslation: [-30 30]
RandYTranslation: [-30 30]
augimdsValidation =
augmentedImageDatastore with properties:
NumObservations: 134
Files: {134×1 cell}
AlternateFileSystemRoots: {}
MiniBatchSize: 128
DataAugmentation: [1×1 imageDataAugmenter]
ColorPreprocessing: 'none'
OutputSize: [32 32]
OutputSizeMode: 'resize'
DispatchInBackground: 0
augmentedTrainingSet =
augmentedImageDatastore with properties:
NumObservations: 58
Files: {58×1 cell}
AlternateFileSystemRoots: {}
MiniBatchSize: 128
DataAugmentation: [1×1 imageDataAugmenter]
ColorPreprocessing: 'none'
OutputSize: [32 32]
OutputSizeMode: 'resize'
DispatchInBackground: 0
options =
TrainingOptionsSGDM with properties:
Momentum: 0.9000
InitialLearnRate: 1.0000e-04
LearnRateScheduleSettings: [1×1 struct]
L2Regularization: 1.0000e-06
GradientThresholdMethod: 'l2norm'
GradientThreshold: Inf
MaxEpochs: 25
MiniBatchSize: 15
Verbose: 1
VerboseFrequency: 50
ValidationData: []
ValidationFrequency: 50
ValidationPatience: Inf
Shuffle: 'every-epoch'
CheckpointPath: ''
ExecutionEnvironment: 'auto'
WorkerLoad: []
OutputFcn: []
Plots: 'none'
SequenceLength: 'longest'
SequencePaddingValue: 0
SequencePaddingDirection: 'right'
DispatchInBackground: 0
ResetInputNormalization: 1
Name Size Bytes Class Attributes
options 1x1 727 nnet.cnn.TrainingOptionsSGDM
Training on single CPU.
Initializing input data normalization.
|========================================================================================|
| Epoch | Iteration | Time Elapsed | Mini-batch | Mini-batch | Base Learning |
| | | (hh:mm:ss) | Accuracy | Loss | Rate |
|========================================================================================|
| 1 | 1 | 00:00:00 | 100.00% | -0.0000e+00 | 1.0000e-04 |
| 17 | 50 | 00:00:04 | 100.00% | -0.0000e+00 | 9.0000e-05 |
| 25 | 75 | 00:00:06 | 100.00% | -0.0000e+00 | 8.1000e-05 |
|========================================================================================|
netTransfer =
SeriesNetwork with properties:
Layers: [15×1 nnet.cnn.layer.Layer]
InputNames: {'imageinput'}
OutputNames: {'classoutput'}
*******************************************************************
Training an R-CNN Object Detector for the following object classes:
* gun
--> Extracting region proposals from 10 training images...done.
--> Training a neural network to classify objects in training data...
Training on single CPU.
Initializing input data normalization.
|========================================================================================|
| Epoch | Iteration | Time Elapsed | Mini-batch | Mini-batch | Base Learning |
| | | (hh:mm:ss) | Accuracy | Loss | Rate |
|========================================================================================|
| 1 | 1 | 00:00:00 | 26.67% | 0.7459 | 1.0000e-04 |
| 25 | 50 | 00:00:05 | 100.00% | 0.0170 | 8.1000e-05 |
|========================================================================================|
Network training complete.
--> Training bounding box regression models for each object class...Detector training complete.
*******************************************************************
rcnn =
rcnnObjectDetector with properties:
Network: [1×1 SeriesNetwork]
RegionProposalFcn: @rcnnObjectDetector.proposeRegions
ClassNames: {'gun' 'Background'}
BoxRegressionLayer: 'conv_2'
network =
SeriesNetwork with properties:
Layers: [15×1 nnet.cnn.layer.Layer]
InputNames: {'imageinput'}
OutputNames: {'rcnnClassification'}
layers =
15x1 Layer array with layers:
1 'imageinput' Image Input 32x32x3 images with 'zerocenter' normalization
2 'conv' Convolution 32 5x5x3 convolutions with stride [1 1] and padding [2 2 2 2]
3 'maxpool' Max Pooling 3x3 max pooling with stride [2 2] and padding [0 0 0 0]
4 'relu' ReLU ReLU
5 'conv_1' Convolution 32 5x5x32 convolutions with stride [1 1] and padding [2 2 2 2]
6 'relu_1' ReLU ReLU
7 'avgpool' Average Pooling 3x3 average pooling with stride [2 2] and padding [0 0 0 0]
8 'conv_2' Convolution 64 5x5x32 convolutions with stride [1 1] and padding [2 2 2 2]
9 'relu_2' ReLU ReLU
10 'avgpool_1' Average Pooling 3x3 average pooling with stride [2 2] and padding [0 0 0 0]
11 'fc' Fully Connected 64 fully connected layer
12 'relu_3' ReLU ReLU
13 'rcnnFC' Fully Connected 2 fully connected layer
14 'rcnnSoftmax' Softmax softmax
15 'rcnnClassification' Classification Output crossentropyex with classes 'gun' and 'Background'
>>

