Tutorial on singing types

This tutorial explains the basics of singing-type classification, which classifies a singing clip into one of two classes according to its resonance location: head or chest. The corpus was recorded by Bernard Wang.

Contents

Preprocessing
Dataset collection
Performance evaluation
Dimensionality reduction
Summary
Appendix

Preprocessing

Before we start, let's add necessary toolboxes to the search path of MATLAB:

addpath d:/users/jang/matlab/toolbox/utility
addpath d:/users/jang/matlab/toolbox/sap
addpath d:/users/jang/matlab/toolbox/machineLearning

All the above toolboxes can be downloaded from the author's toolbox page. Make sure you are using the latest versions of these toolboxes when running this script.
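
If you are unsure whether the toolboxes are installed correctly, a quick sanity check such as the following can help. This is a minimal sketch that simply verifies a few functions used later in this script are visible on the search path:

requiredFcn={'mmDataCollect', 'auFeaMfcc', 'dsCreateFromMm', 'perfLoo4audio'};	% Functions used later in this script
for i=1:length(requiredFcn)
	if ~exist(requiredFcn{i}, 'file')
		error('Cannot find "%s". Please check the toolbox installation and search path.', requiredFcn{i});
	end
end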

For compatibility, here we list the platform and MATLAB version that we used to run this script:

fprintf('Platform: %s\n', computer);
fprintf('MATLAB version: %s\n', version);
fprintf('Script starts at %s\n', char(datetime));
scriptStartTime=tic;	% Timing for the whole script
Platform: PCWIN64
MATLAB version: 8.5.0.197613 (R2015a)
Script starts at 04-Feb-2017 19:55:14

Dataset collection

First of all, we collect all the sound files. The dataset can be found at this link. We can use the command "mmDataCollect" to collect all the file information:

auDir='datasetOfSingingTypes';
opt=mmDataCollect('defaultOpt');
opt.extName='wav';
auData=mmDataCollect(auDir, opt, 1);
Collecting 28 files with extension "wav" from "datasetOfSingingTypes"...

We then perform feature extraction and convert the dataset into a format that is easier for further processing, including classifier construction and evaluation.

myTic=tic;
if ~exist('ds.mat', 'file')
	opt=dsCreateFromMm('defaultOpt');
	opt.auFeaFcn=@auFeaMfcc;		% Function for feature extraction
	opt.auEpdOpt.method='vol';
	opt.auEpdSelectionMethod='maxDuration';
	ds=dsCreateFromMm(auData, opt);
	fprintf('Saving ds.mat...\n'); save ds ds
else
	fprintf('Loading ds.mat...\n'); load ds.mat
end
fprintf('time=%g sec\n', toc(myTic));
Loading ds.mat...
time=0.00217325 sec
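
Before moving on, we can take a quick look at the contents of "ds". This is a minimal sketch; the field names follow the "ds" convention of the machineLearning toolbox, as used in the rest of this script:

fprintf('Feature dimension = %d\n', size(ds.input, 1));
fprintf('No. of frames = %d\n', size(ds.input, 2));
fprintf('Class names: %s\n', strjoin(ds.outputName, ', '));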

Now all the frame-based features are extracted and stored in "ds". Next we can try to plot the extracted features for each class:

figure; dsFeaVecPlot(ds);

Performance evaluation

Now we want to evaluate the performance using LOFOCV (leave-one-file-out cross validation), where each file is a recording of a complete sound event. LOFOCV proceeds as follows:

opt=perfLoo4audio('defaultOpt');
[ds2, fileRr, frameRr]=perfLoo4audio(ds, opt);
fprintf('Frame-based leave-one-file-out RR=%g%%\n', frameRr*100);
fprintf('File-based leave-one-file-out RR=%g%%\n', fileRr*100);
1/28: Leave-one-file-out CV for "datasetOfSingingTypes/Chest voice/A#3_chest.WAV", time=0.283076 sec
2/28: Leave-one-file-out CV for "datasetOfSingingTypes/Chest voice/A2_chest.WAV", time=0.223322 sec
3/28: Leave-one-file-out CV for "datasetOfSingingTypes/Chest voice/A3_chest.WAV", time=0.235403 sec
4/28: Leave-one-file-out CV for "datasetOfSingingTypes/Chest voice/B2_chest.WAV", time=0.211378 sec
5/28: Leave-one-file-out CV for "datasetOfSingingTypes/Chest voice/B3_chest.WAV", time=0.217453 sec
6/28: Leave-one-file-out CV for "datasetOfSingingTypes/Chest voice/C#3_chest.WAV", time=0.212429 sec
7/28: Leave-one-file-out CV for "datasetOfSingingTypes/Chest voice/C3_chest.WAV", time=0.218728 sec
8/28: Leave-one-file-out CV for "datasetOfSingingTypes/Chest voice/D#3_chest.WAV", time=0.506582 sec
9/28: Leave-one-file-out CV for "datasetOfSingingTypes/Chest voice/D3_chest.WAV", time=0.2052 sec
10/28: Leave-one-file-out CV for "datasetOfSingingTypes/Chest voice/E3_chest.WAV", time=0.237799 sec
11/28: Leave-one-file-out CV for "datasetOfSingingTypes/Chest voice/F#3_chest.WAV", time=0.248914 sec
12/28: Leave-one-file-out CV for "datasetOfSingingTypes/Chest voice/F3_chest.WAV", time=0.217656 sec
13/28: Leave-one-file-out CV for "datasetOfSingingTypes/Chest voice/G#3_chest.WAV", time=0.375588 sec
14/28: Leave-one-file-out CV for "datasetOfSingingTypes/Chest voice/G3_chest.WAV", time=0.227494 sec
15/28: Leave-one-file-out CV for "datasetOfSingingTypes/Head Voice/A#3_head.WAV", time=0.23512 sec
16/28: Leave-one-file-out CV for "datasetOfSingingTypes/Head Voice/A3_head.WAV", time=0.297975 sec
17/28: Leave-one-file-out CV for "datasetOfSingingTypes/Head Voice/B3_head.WAV", time=0.226897 sec
18/28: Leave-one-file-out CV for "datasetOfSingingTypes/Head Voice/C#4_head.WAV", time=0.233566 sec
19/28: Leave-one-file-out CV for "datasetOfSingingTypes/Head Voice/C4_head.WAV", time=0.228133 sec
20/28: Leave-one-file-out CV for "datasetOfSingingTypes/Head Voice/D#4_head.WAV", time=0.226349 sec
21/28: Leave-one-file-out CV for "datasetOfSingingTypes/Head Voice/D4_head.WAV", time=0.221052 sec
22/28: Leave-one-file-out CV for "datasetOfSingingTypes/Head Voice/E4_head.WAV", time=0.223945 sec
23/28: Leave-one-file-out CV for "datasetOfSingingTypes/Head Voice/F#4_head.WAV", time=0.206353 sec
24/28: Leave-one-file-out CV for "datasetOfSingingTypes/Head Voice/F4_head.WAV", time=0.231214 sec
25/28: Leave-one-file-out CV for "datasetOfSingingTypes/Head Voice/G#3_head.WAV", time=0.251855 sec
26/28: Leave-one-file-out CV for "datasetOfSingingTypes/Head Voice/G#4_head.WAV", time=0.221119 sec
27/28: Leave-one-file-out CV for "datasetOfSingingTypes/Head Voice/G3_head.WAV", time=0.259713 sec
28/28: Leave-one-file-out CV for "datasetOfSingingTypes/Head Voice/G4_head.WAV", time=0.237161 sec
Frame-based leave-one-file-out RR=96.9448%
File-based leave-one-file-out RR=100%
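
For reference, the following is a conceptual sketch of LOFOCV in plain MATLAB. It is not the implementation inside "perfLoo4audio"; it assumes a hypothetical field ds.fileId that maps each frame to its source file, uses a simple 1-nearest-neighbor rule for frame classification, and takes a majority vote over frames for the file-level decision:

fileIdList=unique(ds.fileId);	% Hypothetical field: frame-to-file mapping
frameHit=0; fileHit=0;
for k=1:length(fileIdList)
	testIdx=ds.fileId==fileIdList(k);	% Frames of the held-out file
	trainX=ds.input(:, ~testIdx); trainY=ds.output(~testIdx);
	testX=ds.input(:, testIdx); testY=ds.output(testIdx);
	predicted=zeros(1, size(testX, 2));
	for j=1:size(testX, 2)	% 1-nearest-neighbor classification of each frame
		dist=sum(bsxfun(@minus, trainX, testX(:, j)).^2, 1);
		[~, nearest]=min(dist);
		predicted(j)=trainY(nearest);
	end
	frameHit=frameHit+sum(predicted==testY);
	fileHit=fileHit+(mode(predicted)==testY(1));	% Majority vote for the file-level decision
end
fprintf('Frame-based RR=%g%%, file-based RR=%g%%\n', 100*frameHit/length(ds.output), 100*fileHit/length(fileIdList));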

We can plot the frame-based confusion matrix:

confMat=confMatGet(ds2.output, ds2.frameClassIdPredicted);
confOpt=confMatPlot('defaultOpt');
confOpt.className=ds.outputName;
figure; confMatPlot(confMat, confOpt);
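
As a sanity check, the frame-based recognition rate can also be recovered from the confusion matrix, assuming confMat(i, j) counts the frames of class i that are classified as class j:

frameRr2=sum(diag(confMat))/sum(confMat(:));	% Correct frames over all frames
fprintf('Frame-based RR computed from the confusion matrix = %g%%\n', 100*frameRr2);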

We can also plot the file-based confusion matrix:

confMat=confMatGet(ds2.fileClassId, ds2.fileClassIdPredicted);
confOpt=confMatPlot('defaultOpt');
confOpt.className=ds.outputName;
figure; confMatPlot(confMat, confOpt);

Since no files are misclassified, we simply list all the sound files, together with their ground-truth and predicted classes, in a table:

for i=1:length(auData)
	auData(i).classPredicted=ds.outputName{ds2.fileClassIdPredicted(i)};
end
opt=mmDataList('defaultOpt');
opt.listType='all';
mmDataList(auData, opt);

List of 28 cases

Index\Field   File   GT ==> Predicted   Hit   url
 1 A#3_chest.WAV Chest voice ==> Chest voice true /jang/books/audioSignalProcessing/appNote/singingType/datasetOfSingingTypes/Chest voice/A#3_chest.WAV
 2 A2_chest.WAV Chest voice ==> Chest voice true /jang/books/audioSignalProcessing/appNote/singingType/datasetOfSingingTypes/Chest voice/A2_chest.WAV
 3 A3_chest.WAV Chest voice ==> Chest voice true /jang/books/audioSignalProcessing/appNote/singingType/datasetOfSingingTypes/Chest voice/A3_chest.WAV
 4 B2_chest.WAV Chest voice ==> Chest voice true /jang/books/audioSignalProcessing/appNote/singingType/datasetOfSingingTypes/Chest voice/B2_chest.WAV
 5 B3_chest.WAV Chest voice ==> Chest voice true /jang/books/audioSignalProcessing/appNote/singingType/datasetOfSingingTypes/Chest voice/B3_chest.WAV
 6 C#3_chest.WAV Chest voice ==> Chest voice true /jang/books/audioSignalProcessing/appNote/singingType/datasetOfSingingTypes/Chest voice/C#3_chest.WAV
 7 C3_chest.WAV Chest voice ==> Chest voice true /jang/books/audioSignalProcessing/appNote/singingType/datasetOfSingingTypes/Chest voice/C3_chest.WAV
 8 D#3_chest.WAV Chest voice ==> Chest voice true /jang/books/audioSignalProcessing/appNote/singingType/datasetOfSingingTypes/Chest voice/D#3_chest.WAV
 9 D3_chest.WAV Chest voice ==> Chest voice true /jang/books/audioSignalProcessing/appNote/singingType/datasetOfSingingTypes/Chest voice/D3_chest.WAV
 10 E3_chest.WAV Chest voice ==> Chest voice true /jang/books/audioSignalProcessing/appNote/singingType/datasetOfSingingTypes/Chest voice/E3_chest.WAV
 11 F#3_chest.WAV Chest voice ==> Chest voice true /jang/books/audioSignalProcessing/appNote/singingType/datasetOfSingingTypes/Chest voice/F#3_chest.WAV
 12 F3_chest.WAV Chest voice ==> Chest voice true /jang/books/audioSignalProcessing/appNote/singingType/datasetOfSingingTypes/Chest voice/F3_chest.WAV
 13 G#3_chest.WAV Chest voice ==> Chest voice true /jang/books/audioSignalProcessing/appNote/singingType/datasetOfSingingTypes/Chest voice/G#3_chest.WAV
 14 G3_chest.WAV Chest voice ==> Chest voice true /jang/books/audioSignalProcessing/appNote/singingType/datasetOfSingingTypes/Chest voice/G3_chest.WAV
 15 A#3_head.WAV Head Voice ==> Head Voice true /jang/books/audioSignalProcessing/appNote/singingType/datasetOfSingingTypes/Head Voice/A#3_head.WAV
 16 A3_head.WAV Head Voice ==> Head Voice true /jang/books/audioSignalProcessing/appNote/singingType/datasetOfSingingTypes/Head Voice/A3_head.WAV
 17 B3_head.WAV Head Voice ==> Head Voice true /jang/books/audioSignalProcessing/appNote/singingType/datasetOfSingingTypes/Head Voice/B3_head.WAV
 18 C#4_head.WAV Head Voice ==> Head Voice true /jang/books/audioSignalProcessing/appNote/singingType/datasetOfSingingTypes/Head Voice/C#4_head.WAV
 19 C4_head.WAV Head Voice ==> Head Voice true /jang/books/audioSignalProcessing/appNote/singingType/datasetOfSingingTypes/Head Voice/C4_head.WAV
 20 D#4_head.WAV Head Voice ==> Head Voice true /jang/books/audioSignalProcessing/appNote/singingType/datasetOfSingingTypes/Head Voice/D#4_head.WAV
 21 D4_head.WAV Head Voice ==> Head Voice true /jang/books/audioSignalProcessing/appNote/singingType/datasetOfSingingTypes/Head Voice/D4_head.WAV
 22 E4_head.WAV Head Voice ==> Head Voice true /jang/books/audioSignalProcessing/appNote/singingType/datasetOfSingingTypes/Head Voice/E4_head.WAV
 23 F#4_head.WAV Head Voice ==> Head Voice true /jang/books/audioSignalProcessing/appNote/singingType/datasetOfSingingTypes/Head Voice/F#4_head.WAV
 24 F4_head.WAV Head Voice ==> Head Voice true /jang/books/audioSignalProcessing/appNote/singingType/datasetOfSingingTypes/Head Voice/F4_head.WAV
 25 G#3_head.WAV Head Voice ==> Head Voice true /jang/books/audioSignalProcessing/appNote/singingType/datasetOfSingingTypes/Head Voice/G#3_head.WAV
 26 G#4_head.WAV Head Voice ==> Head Voice true /jang/books/audioSignalProcessing/appNote/singingType/datasetOfSingingTypes/Head Voice/G#4_head.WAV
 27 G3_head.WAV Head Voice ==> Head Voice true /jang/books/audioSignalProcessing/appNote/singingType/datasetOfSingingTypes/Head Voice/G3_head.WAV
 28 G4_head.WAV Head Voice ==> Head Voice true /jang/books/audioSignalProcessing/appNote/singingType/datasetOfSingingTypes/Head Voice/G4_head.WAV

Dimensionality reduction

In order to visualize the distribution of the dataset, we need to project the original dataset into 2-D space. This can be achieved by LDA (linear discriminant analysis):

ds2d=lda(ds);
ds2d.input=ds2d.input(1:2, :);
figure; dsScatterPlot(ds2d); xlabel('Input 1'); ylabel('Input 2');
title('MFCC projected on the first 2 lda vectors');

As can be seen from the scatter plot, the two classes are largely separable in the projected space, with only a small amount of overlap between "Chest voice" and "Head Voice". This is consistent with the high recognition rates and the confusion matrices shown earlier.

It is also possible to perform LDA projection onto varying numbers of dimensions and obtain the corresponding accuracies via leave-one-out cross validation with KNNC:

opt=ldaPerfViaKnncLoo('defaultOpt');
opt.mode='exact';
recogRate1=ldaPerfViaKnncLoo(ds, opt);
ds2=ds; ds2.input=inputNormalize(ds2.input);	% input normalization
recogRate2=ldaPerfViaKnncLoo(ds2, opt);
[featureNum, dataNum] = size(ds.input);
plot(1:featureNum, 100*recogRate1, 'o-', 1:featureNum, 100*recogRate2, '^-'); grid on
legend('Raw data', 'Normalized data', 'location', 'southeast');
xlabel('No. of projected features based on LDA');
ylabel('LOO recognition rates using KNNC (%)');
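
To identify the dimensionality that gives the highest LOO recognition rate, we can add a small convenience step based on the vectors plotted above:

[maxRr, bestDim]=max(recogRate2);	% Best rate over all LDA dimensionalities (normalized data)
fprintf('Best LOO RR = %g%% with %d LDA-projected features\n', 100*maxRr, bestDim);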

We can also perform input selection to reduce dimensionality:

myTic=tic;
z=inputSelectSequential(ds, inf, [], [], 1); figEnlarge;
toc(myTic)
Construct 91  models, each with up to 13 inputs selected from 13 candidates...

Selecting input 1:
Model 1/91: selected={ 1} => Recog. rate = 84.7%
Model 2/91: selected={ 2} => Recog. rate = 54.4%
Model 3/91: selected={ 3} => Recog. rate = 63.0%
Model 4/91: selected={ 4} => Recog. rate = 57.4%
Model 5/91: selected={ 5} => Recog. rate = 60.9%
Model 6/91: selected={ 6} => Recog. rate = 84.3%
Model 7/91: selected={ 7} => Recog. rate = 53.8%
Model 8/91: selected={ 8} => Recog. rate = 78.5%
Model 9/91: selected={ 9} => Recog. rate = 66.0%
Model 10/91: selected={10} => Recog. rate = 71.4%
Model 11/91: selected={11} => Recog. rate = 61.7%
Model 12/91: selected={12} => Recog. rate = 69.8%
Model 13/91: selected={13} => Recog. rate = 69.6%
Currently selected inputs:  1

Selecting input 2:
Model 14/91: selected={ 1,  2} => Recog. rate = 87.7%
Model 15/91: selected={ 1,  3} => Recog. rate = 85.5%
Model 16/91: selected={ 1,  4} => Recog. rate = 85.0%
Model 17/91: selected={ 1,  5} => Recog. rate = 93.0%
Model 18/91: selected={ 1,  6} => Recog. rate = 90.6%
Model 19/91: selected={ 1,  7} => Recog. rate = 86.8%
Model 20/91: selected={ 1,  8} => Recog. rate = 88.4%
Model 21/91: selected={ 1,  9} => Recog. rate = 85.2%
Model 22/91: selected={ 1, 10} => Recog. rate = 83.9%
Model 23/91: selected={ 1, 11} => Recog. rate = 84.7%
Model 24/91: selected={ 1, 12} => Recog. rate = 86.0%
Model 25/91: selected={ 1, 13} => Recog. rate = 92.8%
Currently selected inputs:  1,  5

Selecting input 3:
Model 26/91: selected={ 1,  5,  2} => Recog. rate = 92.9%
Model 27/91: selected={ 1,  5,  3} => Recog. rate = 93.0%
Model 28/91: selected={ 1,  5,  4} => Recog. rate = 95.0%
Model 29/91: selected={ 1,  5,  6} => Recog. rate = 95.3%
Model 30/91: selected={ 1,  5,  7} => Recog. rate = 94.0%
Model 31/91: selected={ 1,  5,  8} => Recog. rate = 97.7%
Model 32/91: selected={ 1,  5,  9} => Recog. rate = 94.0%
Model 33/91: selected={ 1,  5, 10} => Recog. rate = 94.0%
Model 34/91: selected={ 1,  5, 11} => Recog. rate = 93.8%
Model 35/91: selected={ 1,  5, 12} => Recog. rate = 92.6%
Model 36/91: selected={ 1,  5, 13} => Recog. rate = 94.2%
Currently selected inputs:  1,  5,  8

Selecting input 4:
Model 37/91: selected={ 1,  5,  8,  2} => Recog. rate = 97.3%
Model 38/91: selected={ 1,  5,  8,  3} => Recog. rate = 98.7%
Model 39/91: selected={ 1,  5,  8,  4} => Recog. rate = 99.2%
Model 40/91: selected={ 1,  5,  8,  6} => Recog. rate = 98.3%
Model 41/91: selected={ 1,  5,  8,  7} => Recog. rate = 98.4%
Model 42/91: selected={ 1,  5,  8,  9} => Recog. rate = 98.0%
Model 43/91: selected={ 1,  5,  8, 10} => Recog. rate = 98.1%
Model 44/91: selected={ 1,  5,  8, 11} => Recog. rate = 97.8%
Model 45/91: selected={ 1,  5,  8, 12} => Recog. rate = 98.7%
Model 46/91: selected={ 1,  5,  8, 13} => Recog. rate = 97.3%
Currently selected inputs:  1,  5,  8,  4

Selecting input 5:
Model 47/91: selected={ 1,  5,  8,  4,  2} => Recog. rate = 99.1%
Model 48/91: selected={ 1,  5,  8,  4,  3} => Recog. rate = 99.2%
Model 49/91: selected={ 1,  5,  8,  4,  6} => Recog. rate = 99.4%
Model 50/91: selected={ 1,  5,  8,  4,  7} => Recog. rate = 99.5%
Model 51/91: selected={ 1,  5,  8,  4,  9} => Recog. rate = 99.5%
Model 52/91: selected={ 1,  5,  8,  4, 10} => Recog. rate = 99.5%
Model 53/91: selected={ 1,  5,  8,  4, 11} => Recog. rate = 99.4%
Model 54/91: selected={ 1,  5,  8,  4, 12} => Recog. rate = 99.3%
Model 55/91: selected={ 1,  5,  8,  4, 13} => Recog. rate = 99.3%
Currently selected inputs:  1,  5,  8,  4, 10

Selecting input 6:
Model 56/91: selected={ 1,  5,  8,  4, 10,  2} => Recog. rate = 99.3%
Model 57/91: selected={ 1,  5,  8,  4, 10,  3} => Recog. rate = 99.3%
Model 58/91: selected={ 1,  5,  8,  4, 10,  6} => Recog. rate = 99.4%
Model 59/91: selected={ 1,  5,  8,  4, 10,  7} => Recog. rate = 99.6%
Model 60/91: selected={ 1,  5,  8,  4, 10,  9} => Recog. rate = 99.5%
Model 61/91: selected={ 1,  5,  8,  4, 10, 11} => Recog. rate = 99.6%
Model 62/91: selected={ 1,  5,  8,  4, 10, 12} => Recog. rate = 99.5%
Model 63/91: selected={ 1,  5,  8,  4, 10, 13} => Recog. rate = 99.6%
Currently selected inputs:  1,  5,  8,  4, 10, 11

Selecting input 7:
Model 64/91: selected={ 1,  5,  8,  4, 10, 11,  2} => Recog. rate = 99.4%
Model 65/91: selected={ 1,  5,  8,  4, 10, 11,  3} => Recog. rate = 99.5%
Model 66/91: selected={ 1,  5,  8,  4, 10, 11,  6} => Recog. rate = 99.6%
Model 67/91: selected={ 1,  5,  8,  4, 10, 11,  7} => Recog. rate = 99.6%
Model 68/91: selected={ 1,  5,  8,  4, 10, 11,  9} => Recog. rate = 99.6%
Model 69/91: selected={ 1,  5,  8,  4, 10, 11, 12} => Recog. rate = 99.6%
Model 70/91: selected={ 1,  5,  8,  4, 10, 11, 13} => Recog. rate = 99.5%
Currently selected inputs:  1,  5,  8,  4, 10, 11,  7

Selecting input 8:
Model 71/91: selected={ 1,  5,  8,  4, 10, 11,  7,  2} => Recog. rate = 99.5%
Model 72/91: selected={ 1,  5,  8,  4, 10, 11,  7,  3} => Recog. rate = 99.4%
Model 73/91: selected={ 1,  5,  8,  4, 10, 11,  7,  6} => Recog. rate = 99.8%
Model 74/91: selected={ 1,  5,  8,  4, 10, 11,  7,  9} => Recog. rate = 99.7%
Model 75/91: selected={ 1,  5,  8,  4, 10, 11,  7, 12} => Recog. rate = 99.7%
Model 76/91: selected={ 1,  5,  8,  4, 10, 11,  7, 13} => Recog. rate = 99.5%
Currently selected inputs:  1,  5,  8,  4, 10, 11,  7,  6

Selecting input 9:
Model 77/91: selected={ 1,  5,  8,  4, 10, 11,  7,  6,  2} => Recog. rate = 99.5%
Model 78/91: selected={ 1,  5,  8,  4, 10, 11,  7,  6,  3} => Recog. rate = 99.6%
Model 79/91: selected={ 1,  5,  8,  4, 10, 11,  7,  6,  9} => Recog. rate = 99.7%
Model 80/91: selected={ 1,  5,  8,  4, 10, 11,  7,  6, 12} => Recog. rate = 99.8%
Model 81/91: selected={ 1,  5,  8,  4, 10, 11,  7,  6, 13} => Recog. rate = 99.6%
Currently selected inputs:  1,  5,  8,  4, 10, 11,  7,  6, 12

Selecting input 10:
Model 82/91: selected={ 1,  5,  8,  4, 10, 11,  7,  6, 12,  2} => Recog. rate = 99.6%
Model 83/91: selected={ 1,  5,  8,  4, 10, 11,  7,  6, 12,  3} => Recog. rate = 99.7%
Model 84/91: selected={ 1,  5,  8,  4, 10, 11,  7,  6, 12,  9} => Recog. rate = 99.8%
Model 85/91: selected={ 1,  5,  8,  4, 10, 11,  7,  6, 12, 13} => Recog. rate = 99.7%
Currently selected inputs:  1,  5,  8,  4, 10, 11,  7,  6, 12,  9

Selecting input 11:
Model 86/91: selected={ 1,  5,  8,  4, 10, 11,  7,  6, 12,  9,  2} => Recog. rate = 99.5%
Model 87/91: selected={ 1,  5,  8,  4, 10, 11,  7,  6, 12,  9,  3} => Recog. rate = 99.6%
Model 88/91: selected={ 1,  5,  8,  4, 10, 11,  7,  6, 12,  9, 13} => Recog. rate = 99.7%
Currently selected inputs:  1,  5,  8,  4, 10, 11,  7,  6, 12,  9, 13

Selecting input 12:
Model 89/91: selected={ 1,  5,  8,  4, 10, 11,  7,  6, 12,  9, 13,  2} => Recog. rate = 99.6%
Model 90/91: selected={ 1,  5,  8,  4, 10, 11,  7,  6, 12,  9, 13,  3} => Recog. rate = 99.7%
Currently selected inputs:  1,  5,  8,  4, 10, 11,  7,  6, 12,  9, 13,  3

Selecting input 13:
Model 91/91: selected={ 1,  5,  8,  4, 10, 11,  7,  6, 12,  9, 13,  3,  2} => Recog. rate = 99.8%
Currently selected inputs:  1,  5,  8,  4, 10, 11,  7,  6, 12,  9, 13,  3,  2

Overall maximal recognition rate = 99.8%.
Selected 9 inputs (out of 13):  1,  5,  8,  4, 10, 11,  7,  6, 12
Elapsed time is 274.399640 seconds.

It seems that input selection is not very effective here: the maximal recognition rate of 99.8% still requires 9 of the 13 inputs, which is essentially the same as using all of them.

We can also try all combinations of classifiers and input normalization schemes to search for the best performance via 10-fold cross validation:

myTic=tic;
poOpt=perfCv4classifier('defaultOpt');
poOpt.foldNum=10;	% 10-fold cross validation
figure; [perfData, bestId]=perfCv4classifier(ds, poOpt, 1);
toc(myTic)
structDispInHtml(perfData, 'Performance of various classifiers via cross validation');
Iteration=200/1000, recog. rate=72.094%
Iteration=400/1000, recog. rate=97.5185%
Iteration=600/1000, recog. rate=98.4327%
Iteration=800/1000, recog. rate=98.4763%
Iteration=1000/1000, recog. rate=98.4763%
Elapsed time is 505.852946 seconds.

Then we can display the confusion matrix corresponding to the best classifier and the best input normalization scheme:

confMat=confMatGet(ds.output, perfData(bestId).bestComputedClass);
confOpt=confMatPlot('defaultOpt');
confOpt.className=ds.outputName;
figure; confMatPlot(confMat, confOpt);

Summary

This is a brief tutorial that uses basic techniques in pattern recognition for singing-type classification. There are several directions for further improvement.

Appendix

List of functions and datasets used in this script

Date and time when finishing this script:

fprintf('Date & time: %s\n', char(datetime));
Date & time: 04-Feb-2017 20:09:14

Overall elapsed time:

toc(scriptStartTime)
Elapsed time is 839.214230 seconds.

Jyh-Shing Roger Jang.