5-5 Naive Bayes Classifiers (單純貝氏分類器)



Slides

If we assume that the features of a given dataset are independent, then we can model the PDF of each dimension separately. The class-conditional PDF of each class is then the product of its per-dimension PDFs. In symbols, we assume that


$p(X\,|\,C) = p(X_1\,|\,C)\, p(X_2\,|\,C) \cdots p(X_d\,|\,C)$

where $X = [X_1, X_2, \ldots, X_d]$ is a feature vector and C is a class. Although this assumption seems too strong for real-world data, the resulting naive Bayes classifier (NBC for short) is highly successful in many practical applications.

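To make the independence assumption concrete, here is a minimal plain-MATLAB sketch (no toolbox calls) showing that multiplying per-dimension 1D Gaussians is the same as evaluating a single multivariate Gaussian whose covariance matrix is diagonal. All names (mu, sigma2, x, S) are illustrative, not part of the toolbox used later.

% Product of per-dimension 1D Gaussians vs. one diagonal-covariance Gaussian
mu     = [1.5; 0.3];          % per-dimension means of a hypothetical class
sigma2 = [0.20; 0.05];        % per-dimension variances
x      = [1.2; 0.4];          % a query point

% Naive Bayes form: multiply the 1D PDFs over the dimensions
g1d   = 1./sqrt(2*pi*sigma2) .* exp(-(x-mu).^2 ./ (2*sigma2));
pProd = prod(g1d);

% Equivalent multivariate form with diagonal covariance S = diag(sigma2)
S    = diag(sigma2);
d    = numel(x);
pMvn = (2*pi)^(-d/2) * det(S)^(-1/2) * exp(-0.5*(x-mu)'*(S\(x-mu)));

fprintf('product of 1D PDFs = %.6g, diagonal Gaussian = %.6g\n', pProd, pMvn);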

In practice, we can use 1D Gaussian functions to model the data of a specific class along a specific dimension. The training of an NBC can be summarized as follows:


  1. Assume that the data of class i along dimension j is governed by a 1-dimensional Gaussian PDF:
    $g(x; \mu, \sigma^2) = (2\pi\sigma^2)^{-1/2} \exp\!\left[-(x-\mu)^2/(2\sigma^2)\right]$
    where $\mu$ is the mean and $\sigma^2$ is the variance of the PDF. Given the dimension-j training data $\{x_1, x_2, \ldots, x_n\}$ of class i, the maximum-likelihood estimate of the mean and the usual unbiased estimate of the variance are:
    $\mu = (x_1 + x_2 + \cdots + x_n)/n$
    $\sigma^2 = \left[(x_1-\mu)^2 + (x_2-\mu)^2 + \cdots + (x_n-\mu)^2\right]/(n-1)$
  2. Multiplying these 1D Gaussians over all dimensions gives the class-conditional PDF of class i, which is a d-dimensional Gaussian $g(x, m_i, \Sigma_i)$ with mean vector $m_i$ and diagonal covariance matrix $\Sigma_i$. Since this PDF does not reflect the prior probability of class i (denoted by $p(c_i)$), we usually use $p(c_i)\,g(x, m_i, \Sigma_i)$ as the score of a given vector x. In practice, $p(c_i)$ is computed as the number of entries of class i divided by the total number of entries in the training set.
  3. In application, for a given vector x of unknown class, the higher the score $p(c_i)\,g(x, m_i, \Sigma_i)$ is, the more likely it is that x belongs to class i. (A plain-MATLAB sketch of these training steps is given after this list.)
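The training steps above can be sketched in plain MATLAB as follows. This is only an illustrative sketch, not the toolbox's nbcTrain: it assumes the data is stored as a d-by-n matrix X with a 1-by-n label vector y taking values 1..K, and collects the estimates in a hypothetical structure nbc.

% Hypothetical training data: d-by-n features X, 1-by-n labels y in 1..K
X = [randn(2,50)+1, randn(2,50)-1];      % two synthetic classes
y = [ones(1,50), 2*ones(1,50)];
K = max(y);
[d, n] = size(X);

nbc.mu     = zeros(d, K);     % per-class, per-dimension means
nbc.sigma2 = zeros(d, K);     % per-class, per-dimension variances
nbc.prior  = zeros(1, K);     % class prior probabilities
for k = 1:K
    Xk = X(:, y==k);
    nbc.mu(:,k)     = mean(Xk, 2);
    nbc.sigma2(:,k) = var(Xk, 0, 2);     % divides by n_k-1, as in the text
    nbc.prior(k)    = size(Xk, 2) / n;   % fraction of entries in class k
end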

When implementing the classifier, we usually do not calculate $p(c_i)\,g(x, m_i, \Sigma_i)$ directly, since evaluating the exponential can cause numerical underflow and round-off errors. Instead, we compute the natural logarithm of the score, as follows:


$\log\left[p(c_i)\, g(x, m_i, \Sigma_i)\right] = \log p(c_i) - \frac{1}{2}\left[d\log(2\pi) + \log|\Sigma_i|\right] - \frac{1}{2}(x-m_i)^T \Sigma_i^{-1}(x-m_i)$

The decision boundary between classes i and j is the set of points where

$p(c_i)\, g(x, m_i, \Sigma_i) = p(c_j)\, g(x, m_j, \Sigma_j).$

Taking the logarithm of both sides, we have

$\log p(c_i) - \frac{1}{2}\left[d\log(2\pi) + \log|\Sigma_i|\right] - \frac{1}{2}(x-m_i)^T \Sigma_i^{-1}(x-m_i) = \log p(c_j) - \frac{1}{2}\left[d\log(2\pi) + \log|\Sigma_j|\right] - \frac{1}{2}(x-m_j)^T \Sigma_j^{-1}(x-m_j)$

After simplification, the decision boundary is given by the equation

$(x-m_j)^T \Sigma_j^{-1}(x-m_j) - (x-m_i)^T \Sigma_i^{-1}(x-m_i) = \log\frac{|\Sigma_i|\, p^2(c_j)}{|\Sigma_j|\, p^2(c_i)}$

where the right-hand side is a constant. Since both $(x-m_j)^T \Sigma_j^{-1}(x-m_j)$ and $(x-m_i)^T \Sigma_i^{-1}(x-m_i)$ are quadratic in x, the above equation defines a decision boundary of quadratic form in the d-dimensional feature space.
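Below is a self-contained plain-MATLAB sketch (illustrative parameters only, not toolbox code) that evaluates the log score $\log[p(c_i)\,g(x, m_i, \Sigma_i)]$ for each class exactly as in the formula above; because $\Sigma_i$ is diagonal, $\log|\Sigma_i|$ and the quadratic term reduce to sums over the dimensions.

% Log-domain class scores for one query point (hypothetical 2-class, 2D model)
nbc.mu     = [0 2; 0 1];          % column k = mean vector of class k
nbc.sigma2 = [0.3 0.1; 0.3 0.4];  % column k = diagonal of Sigma_k
nbc.prior  = [0.5 0.5];           % class prior probabilities
x = [0.8; 1.1];                   % query point of unknown class
[d, K] = size(nbc.mu);
logScore = zeros(1, K);
for k = 1:K
    m  = nbc.mu(:,k);
    s2 = nbc.sigma2(:,k);
    logScore(k) = log(nbc.prior(k)) ...
        - 0.5*(d*log(2*pi) + sum(log(s2))) ...   % -(d*log(2*pi)+log|Sigma_k|)/2
        - 0.5*sum((x-m).^2 ./ s2);               % -(x-m)'*inv(Sigma_k)*(x-m)/2
end
[~, predictedClass] = max(logScore)   % class with the largest log score wins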

In the following example, we shall use dimensions 3 and 4 of the Iris dataset for NBC:


Example 1: nbc01dataPlot.m
DS = prData('iris');
DS.input = DS.input(3:4, :);   % Only take dimensions 3 and 4 for 2d visualization
plotOpt = 1;                   % Plotting
[nbcPrm, logLike, recogRate, hitIndex] = nbcTrain(DS, [], plotOpt);
fprintf('Recog. rate = %f%%\n', recogRate*100);
Output:
Recog. rate = 96.000000%

In the above example, each misclassified data point is labeled with a cross. Moreover, the prior probability of each class is taken as the proportion of its entries in the training set. In general, there are two ways to specify class prior probabilities: estimate them from the class proportions in the training set (as done here), or assign them manually (for instance, equal priors) when the training-set proportions are not representative of the true class distribution.

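For illustration, the two conventions might look like this in plain MATLAB (the class counts nk are hypothetical):

% Two ways to set class priors for K classes with counts nk
nk = [50 50 50];                                 % hypothetical per-class counts
priorFromData = nk / sum(nk);                    % estimated from class proportions
priorUniform  = ones(1, numel(nk)) / numel(nk);  % user-specified, e.g. equal priors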

We can plot the 1d PDFs for all classes and dimensions, together with the corresponding data points, as follows:


Example 2: nbcPlot00.m
DS = prData('iris');
DS.input = DS.input(3:4, :);
[nbcPrm, logLike, recogRate, hitIndex] = nbcTrain(DS);
nbcPlot(DS, nbcPrm, '1dPdf');
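As a rough idea of what such a plot contains, the following plain-MATLAB sketch (not toolbox code; data and names are hypothetical) overlays the fitted 1D Gaussian of one class and one dimension on its data points.

% Plot the fitted 1D Gaussian of one class/dimension over its data points
xk     = randn(1, 50)*0.4 + 1.3;      % hypothetical 1D data of one class
mu     = mean(xk);
sigma2 = var(xk);
t  = linspace(min(xk)-1, max(xk)+1, 200);
px = 1/sqrt(2*pi*sigma2) * exp(-(t-mu).^2/(2*sigma2));
plot(t, px, 'b-', xk, zeros(size(xk)), 'r.');
xlabel('x'); ylabel('probability density');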

For the above example, we can go one step further to plot the class-specific PDFs as 3D surfaces and display the corresponding contours, as follows:


Example 3: nbcPlot01.m
DS = prData('iris');
DS.input = DS.input(3:4, :);
[nbcPrm, logLike, recogRate, hitIndex] = nbcTrain(DS);
nbcPlot(DS, nbcPrm, '2dPdf');
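A plain-MATLAB sketch of the same kind of surface, for one hypothetical class with a diagonal covariance, could look like this (all parameters are illustrative):

% Surface plot of a 2D diagonal-covariance Gaussian PDF
mu = [1; 2]; sigma2 = [0.3; 0.1];
[x1, x2] = meshgrid(linspace(-1, 3, 80), linspace(0, 4, 80));
p = 1/(2*pi*sqrt(prod(sigma2))) * ...
    exp(-0.5*((x1-mu(1)).^2/sigma2(1) + (x2-mu(2)).^2/sigma2(2)));
surf(x1, x2, p); shading interp;
xlabel('x_1'); ylabel('x_2'); zlabel('probability density');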

Based on the computed PDF for each class, we can plot the decision boundaries, as follows:


Example 4: nbcPlot02.m
DS = prData('iris');
DS.input = DS.input(3:4, :);
[nbcPrm, logLike, recogRate, hitIndex] = nbcTrain(DS);
DS.hitIndex = hitIndex;   % Attach hitIndex to DS for plotting
nbcPlot(DS, nbcPrm, 'decBoundary');
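As a cross-check of the quadratic boundary derived earlier, the following plain-MATLAB sketch (hypothetical two-class model, illustrative parameters) draws the zero contour of the log-score difference between two classes with diagonal covariances; the resulting curve is the quadratic decision boundary.

% Decision boundary: zero contour of the log-score difference between two classes
m1 = [0; 0];  s1 = [0.3; 0.3];  p1 = 0.5;     % hypothetical class 1
m2 = [2; 1];  s2 = [0.1; 0.4];  p2 = 0.5;     % hypothetical class 2
[x1, x2] = meshgrid(linspace(-2, 4, 200), linspace(-2, 3, 200));
logG = @(m, s, p) log(p) - 0.5*sum(log(2*pi*s)) ...
    - 0.5*((x1-m(1)).^2/s(1) + (x2-m(2)).^2/s(2));
diffScore = logG(m1, s1, p1) - logG(m2, s2, p2);
contour(x1, x2, diffScore, [0 0], 'k', 'LineWidth', 2);   % quadratic boundary
xlabel('x_1'); ylabel('x_2');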

