5-7 Bayes Classifiers


A Bayes classifier is based on Bayes' theorem and is used to decide which class a given sample most likely belongs to. The goal of this classifier is to reach, through probabilistic and statistical analysis, a classification rule with the smallest possible error.

Suppose we have a feature value x and a class C. Let P(x) denote the probability that the feature value x occurs, and let P(C) denote the probability that a randomly drawn sample belongs to class C; we call P(C) the prior probability. Then, using the definition of conditional probability, Bayes' theorem can be written as:

P(C|x) = P(C, x)/P(x) = P(C)P(x|C)/P(x)
Here P(C|x) is the probability that the sample belongs to class C given that the feature value x is observed; we call it the posterior probability. P(x|C) is the probability that a sample of class C exhibits the feature value x.
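As a minimal numeric sketch of this formula (the prior, likelihood, and evidence values below are hypothetical, not taken from the text):

# Hypothetical values: P(C) = 0.3, P(x|C) = 0.8, P(x) = 0.4
prior = 0.3        # P(C): prior probability of class C
likelihood = 0.8   # P(x|C): probability of observing x given class C
evidence = 0.4     # P(x): overall probability of observing x

posterior = prior * likelihood / evidence   # P(C|x) by Bayes' theorem
print(posterior)                             # approximately 0.6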

If the whole sample space can be partitioned into k mutually exclusive classes {C1, C2, ..., Ck}, then we obtain the following equation:
P(x) = P(x, C1) + P(x, C2) + ... + P(x, Ck)
     = P(C1)P(x|C1) + P(C2)P(x|C2) + ... + P(Ck)P(x|Ck)
[Figure: schematic diagram of the event probabilities in the above decomposition]

From the preceding equations we obtain:
P(Ci|x) = P(Ci)P(x|Ci)/P(x)
Substituting the expansion of P(x) into this expression, we obtain the form of Bayes' theorem used by the classifier:
P(Ci|x) = P(Ci)P(x|Ci) / [P(C1)P(x|C1) + P(C2)P(x|C2) + ... + P(Ck)P(x|Ck)]
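A minimal Python sketch of this classification rule, assuming the priors P(Ci) and the likelihoods P(x|Ci) for a given x are already known (the numbers are made up for illustration):

# Hypothetical priors and likelihoods for k = 3 classes
priors = [0.5, 0.3, 0.2]            # P(C1), P(C2), P(C3)
likelihoods = [0.10, 0.40, 0.25]    # P(x|C1), P(x|C2), P(x|C3) for a given x

evidence = sum(p * l for p, l in zip(priors, likelihoods))            # P(x)
posteriors = [p * l / evidence for p, l in zip(priors, likelihoods)]  # P(Ci|x)

best = max(range(len(posteriors)), key=lambda i: posteriors[i])
print(posteriors)                    # the posteriors sum to 1
print("assign x to class", best + 1) # pick the class with the largest posterior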
When we only need to decide which of two classes, Ci or Cj, a sample with feature value x more likely belongs to, it suffices to compute the likelihood ratio R of the two classes:
R = P(Ci|x)/P(Cj|x) = [P(Ci)P(x|Ci)] / [P(Cj)P(x|Cj)]
If R > 1, then x is assigned to class Ci; conversely, if R < 1, then x is assigned to class Cj.
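A small sketch of this two-class decision (all values hypothetical); note that P(x) cancels in the ratio, so it never has to be computed:

# Hypothetical quantities for classes Ci and Cj
p_ci, p_x_given_ci = 0.4, 0.30   # P(Ci), P(x|Ci)
p_cj, p_x_given_cj = 0.6, 0.15   # P(Cj), P(x|Cj)

R = (p_ci * p_x_given_ci) / (p_cj * p_x_given_cj)    # likelihood ratio
print("assign x to", "Ci" if R > 1 else "Cj")        # here R = 4/3 > 1, so Ci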

In practical computation, P(Ci) is the fraction of all training samples that belong to class i, while P(x|Ci) is a probability density function (for example, a Gaussian density) fitted to the data points of class i.
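The sketch below illustrates one way these two quantities might be estimated from one-dimensional training data, assuming a Gaussian model for P(x|Ci); the sample values and counts are made up for illustration:

import math

# Hypothetical 1-D training samples for class i, out of 20 samples in total
class_i_data = [4.9, 5.1, 5.3, 5.0, 4.8]
total_samples = 20

prior_ci = len(class_i_data) / total_samples                  # P(Ci): class fraction

mu = sum(class_i_data) / len(class_i_data)                    # sample mean
var = sum((v - mu) ** 2 for v in class_i_data) / len(class_i_data)  # sample variance

def gaussian_pdf(x, mu=mu, var=var):
    """P(x|Ci) modeled as a Gaussian fitted to the class-i data."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

print(prior_ci, gaussian_pdf(5.0))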

Bayes' theorem can be extended further. If the decision is based not on a single feature value but on a set of mutually independent features (x1, x2, ..., xd), then, given class Ci, the probability of observing these features can be written as:

P(x1, x2, ..., xd|Ci) = P(x1|Ci)P(x2|Ci) ... P(xd|Ci)
Substituting equation (3-2.6) into equation (3-2.4), we obtain the Bayes classifier formula for d feature values:
P(Ci|x1, x2, ..., xd) = P(Ci)P(x1|Ci)P(x2|Ci) ... P(xd|Ci) / [Σ_{j=1..k} P(Cj)P(x1|Cj)P(x2|Cj) ... P(xd|Cj)]
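Putting the pieces together, the following is a self-contained sketch of such a classifier under the stated independence assumption, with Gaussian per-feature densities; the training data and class labels are hypothetical, and the code only mirrors the formula above rather than any particular implementation:

import math

def gaussian_pdf(x, mu, var):
    """1-D Gaussian density used to model each P(xj|Ci)."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Hypothetical training data: {class label: list of d-dimensional samples}
train = {
    "C1": [[1.0, 2.1], [1.2, 1.9], [0.9, 2.0]],
    "C2": [[3.0, 0.9], [2.8, 1.1], [3.2, 1.0], [3.1, 0.8]],
}

total = sum(len(samples) for samples in train.values())
model = {}
for label, samples in train.items():
    d = len(samples[0])
    mus = [sum(s[j] for s in samples) / len(samples) for j in range(d)]
    vars_ = [sum((s[j] - mus[j]) ** 2 for s in samples) / len(samples) + 1e-9
             for j in range(d)]                        # small floor avoids zero variance
    model[label] = {"prior": len(samples) / total,     # P(Ci): class fraction
                    "mu": mus, "var": vars_}

def classify(x):
    """Return the class with the largest posterior P(Ci|x1, ..., xd)."""
    joint = {}
    for label, m in model.items():
        p = m["prior"]                 # P(Ci) times the product of P(xj|Ci),
        for j, xj in enumerate(x):     # assuming the features are independent
            p *= gaussian_pdf(xj, m["mu"][j], m["var"][j])
        joint[label] = p
    evidence = sum(joint.values())     # denominator: sum over all classes
    posteriors = {label: p / evidence for label, p in joint.items()}
    return max(posteriors, key=posteriors.get), posteriors

print(classify([1.1, 2.0]))   # with these made-up samples, C1 should win

In practice, the product of many small densities is usually computed as a sum of logarithms to avoid numerical underflow; the direct product above is kept only to match the formula.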
