Section 4 – The Naive Bayes Method
The naive Bayes (NB) method is a classification method based on Bayes' theorem and the assumption of conditional independence between features. For a given training data set, it first learns the joint probability distribution of the input and output under the feature conditional independence assumption; then, based on this model, for a given input x it applies Bayes' theorem to find the output y with the largest posterior probability.
NB includes the following algorithms (see the short sketch after the list):
Gaussian naive Bayes (Gaussian Naive Bayes) – suited to normally distributed (continuous) features
Bernoulli naive Bayes (Bernoulli Naive Bayes) – suited to the binomial (binary) distribution
Multinomial naive Bayes (Multinomial Naive Bayes) – suited to the multinomial distribution (count features)
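All three variants share the decision rule derived below and differ only in how the per-feature likelihood $P(x^{(j)} | y)$ is modeled. A minimal sketch of picking the matching scikit-learn class (the tiny arrays are made up purely for illustration):

import numpy as np
from sklearn.naive_bayes import GaussianNB, BernoulliNB, MultinomialNB

# hypothetical toy features, just to show which model fits which data type
X_real = np.array([[5.1, 3.5], [4.9, 3.0], [6.2, 2.9]])  # continuous -> GaussianNB
X_bin = np.array([[1, 0], [0, 1], [1, 1]])               # binary     -> BernoulliNB
X_cnt = np.array([[3, 0], [1, 2], [0, 4]])               # counts     -> MultinomialNB
y = np.array([0, 1, 1])

for model, X in [(GaussianNB(), X_real), (BernoulliNB(), X_bin), (MultinomialNB(), X_cnt)]:
    print(type(model).__name__, model.fit(X, y).predict(X))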
Pros and cons of the naive Bayes method:
Pros: learning and prediction are highly efficient and easy to implement; the method remains effective even when data is scarce, and it can handle multi-class problems.
Cons: classification accuracy is not necessarily high; the feature independence assumption keeps naive Bayes simple, but it can sacrifice some classification accuracy.
I. Learning and Classification with Naive Bayes
1. The Basic Method
Let the input space $\mathcal{X} \subseteq \mathbf{R}^{n}$ be a set of n-dimensional vectors and the output space be the set of class labels $\mathcal{Y}=\left\{c_{1}, c_{2}, \cdots, c_{K}\right\}$. The input is a feature vector $x \in \mathcal{X}$ and the output is a class label $y \in \mathcal{Y}$. $X$ is a random vector defined on the input space $\mathcal{X}$, $Y$ is a random variable defined on the output space $\mathcal{Y}$, and $P(X, Y)$ is the joint probability distribution of $X$ and $Y$. The training data set
$$T=\left\{\left(x_{1}, y_{1}\right),\left(x_{2}, y_{2}\right), \cdots,\left(x_{N}, y_{N}\right)\right\}$$
is assumed to be generated independently and identically distributed (i.i.d.) according to $P(X, Y)$.
The naive Bayes method learns the joint probability distribution $P(X, Y)$ from the training data set. Specifically, it learns the following prior probability distribution and conditional probability distribution. The prior probability distribution:
$$P\left(Y=c_{k}\right), \quad k=1,2, \cdots, K$$
The conditional probability distribution:
$$P\left(X=x | Y=c_{k}\right)=P\left(X^{(1)}=x^{(1)}, \cdots, X^{(n)}=x^{(n)} | Y=c_{k}\right), \quad k=1,2, \cdots, K$$
The joint probability distribution $P(X, Y)$ is thereby learned.
The conditional probability distribution $P\left(X=x | Y=c_{k}\right)$ has an exponential number of parameters, so estimating it directly is infeasible in practice. Indeed, suppose $x^{(j)}$ can take $S_{j}$ distinct values, $j=1,2, \cdots, n$, and $Y$ can take $K$ values; then the number of parameters is $K \prod_{j=1}^{n} S_{j}$.
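To make the explosion concrete, here is a quick count under assumed sizes (K, n, and the S_j are made up for illustration), contrasting the full conditional table with the cost after the independence assumption introduced next:

from math import prod

K = 2          # assumed number of classes
S = [2] * 30   # assume n = 30 features, each taking S_j = 2 values

full_table = K * prod(S)   # parameters for P(X = x | Y = c_k) without assumptions
independent = K * sum(S)   # order of the parameter count under naive Bayes
print(full_table, independent)  # 2147483648 vs 120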
The naive Bayes method makes the assumption of conditional independence on the conditional probability distribution. Since this is a rather strong assumption, the method owes its name ("naive") to it. Concretely, the conditional independence assumption is:
$$\begin{aligned} P\left(X=x | Y=c_{k}\right) &=P\left(X^{(1)}=x^{(1)}, \cdots, X^{(n)}=x^{(n)} | Y=c_{k}\right) \\ &=\prod_{j=1}^{n} P\left(X^{(j)}=x^{(j)} | Y=c_{k}\right) \end{aligned}$$
The naive Bayes method in fact learns the mechanism that generates the data, so it belongs to the family of generative models. The conditional independence assumption amounts to saying that the features used for classification are conditionally independent given the class. This assumption makes naive Bayes simple, but it sometimes sacrifices some classification accuracy.
When classifying, for a given input x the learned model is used to compute the posterior probability distribution $P\left(Y=c_{k} | X=x\right)$, and the class with the largest posterior probability is output as the class of x. The posterior probability is computed by Bayes' theorem:
$$P\left(Y=c_{k} | X=x\right)=\frac{P\left(X=x | Y=c_{k}\right) P\left(Y=c_{k}\right)}{\sum_{k} P\left(X=x | Y=c_{k}\right) P\left(Y=c_{k}\right)}$$
Combining the two equations above:
$$P\left(Y=c_{k} | X=x\right)=\frac{P\left(Y=c_{k}\right) \prod_{j} P\left(X^{(j)}=x^{(j)} | Y=c_{k}\right)}{\sum_{k} P\left(Y=c_{k}\right) \prod_{j} P\left(X^{(j)}=x^{(j)} | Y=c_{k}\right)}, \quad k=1,2, \cdots, K$$
This is the basic formula of naive Bayes classification. The naive Bayes classifier can therefore be expressed as:
$$y=f(x)=\arg \max _{c_{k}} \frac{P\left(Y=c_{k}\right) \prod_{j} P\left(X^{(j)}=x^{(j)} | Y=c_{k}\right)}{\sum_{k} P\left(Y=c_{k}\right) \prod_{j} P\left(X^{(j)}=x^{(j)} | Y=c_{k}\right)}$$
Note that the denominator above is the same for every $c_{k}$, so this simplifies to:
$$y=\arg \max _{c_{k}} P\left(Y=c_{k}\right) \prod_{j} P\left(X^{(j)}=x^{(j)} | Y=c_{k}\right)$$
2. The Meaning of Posterior Probability Maximization
The naive Bayes method assigns an instance to the class with the largest posterior probability, which is equivalent to minimizing the expected risk. Suppose the 0-1 loss function is chosen:
$$L(Y, f(X))=\left\{\begin{array}{ll}{1,} & {Y \neq f(X)} \\ {0,} & {Y=f(X)}\end{array}\right.$$
where $f(X)$ is the classification decision function. The expected risk is then:
$$R_{\mathrm{exp}}(f)=E[L(Y, f(X))]$$
The expectation is taken with respect to the joint distribution $P(X, Y)$. Taking the conditional expectation:
$$R_{\mathrm{exp}}(f)=E_{X} \sum_{k=1}^{K}\left[L\left(c_{k}, f(X)\right)\right] P\left(c_{k} | X\right)$$
To minimize the expected risk, it suffices to minimize it pointwise for each $X=x$, which yields:
$$\begin{aligned} f(x) &=\arg \min _{y \in \mathcal{Y}} \sum_{k=1}^{K} L\left(c_{k}, y\right) P\left(c_{k} | X=x\right) \\ &=\arg \min _{y \in \mathcal{Y}} \sum_{k=1}^{K} P\left(y \neq c_{k} | X=x\right) \\ &=\arg \min _{y \in \mathcal{Y}}\left(1-P\left(y=c_{k} | X=x\right)\right) \\ &=\arg \max _{y \in \mathcal{Y}} P\left(y=c_{k} | X=x\right) \end{aligned}$$
In this way, the expected risk minimization criterion leads to the posterior probability maximization criterion:
$$f(x)=\arg \max _{c_{k}} P\left(c_{k} | X=x\right)$$
This is exactly the principle that the naive Bayes method adopts.
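As a quick numerical sanity check of this derivation: for any fixed posterior vector, the expected 0-1 loss of predicting class k is $1-P(c_{k} | x)$, so the loss-minimizing choice coincides with the posterior mode (the posterior values below are made up):

# made-up posterior P(c_k | X = x) over K = 3 classes
posterior = [0.2, 0.5, 0.3]

# expected 0-1 loss of predicting class k is 1 - P(c_k | x)
expected_loss = [1 - p for p in posterior]
best = min(range(len(posterior)), key=lambda k: expected_loss[k])
print(best, expected_loss)  # 1 [0.8, 0.5, 0.7]: argmin loss = argmax posterior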
II. Parameter Estimation for the Naive Bayes Method
1. Maximum Likelihood Estimation
In the naive Bayes method, learning means estimating $P\left(Y=c_{k}\right)$ and $P\left(X^{(j)}=x^{(j)} | Y=c_{k}\right)$. Maximum likelihood estimation can be applied to these probabilities. The maximum likelihood estimate of the prior probability $P\left(Y=c_{k}\right)$ is:
$$P\left(Y=c_{k}\right)=\frac{\sum_{i=1}^{N} I\left(y_{i}=c_{k}\right)}{N}, \quad k=1,2, \cdots, K$$
Suppose the set of possible values of the j-th feature $x^{(j)}$ is $\left\{a_{j 1}, a_{j 2}, \cdots, a_{j S_{j}}\right\}$. The maximum likelihood estimate of the conditional probability $P\left(X^{(j)}=a_{j l} | Y=c_{k}\right)$ is:
$$P\left(X^{(j)}=a_{j l} | Y=c_{k}\right)=\frac{\sum_{i=1}^{N} I\left(x_{i}^{(j)}=a_{j l}, y_{i}=c_{k}\right)}{\sum_{i=1}^{N} I\left(y_{i}=c_{k}\right)}$$
$$j=1,2, \cdots, n ; \quad l=1,2, \cdots, S_{j} ; \quad k=1,2, \cdots, K$$
where $x_{i}^{(j)}$ is the j-th feature of the i-th sample, $a_{j l}$ is the l-th value that the j-th feature can take, and $I$ is the indicator function.
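Both estimates are plain relative frequencies, which a few lines of counting make explicit (the toy arrays are invented for illustration):

import numpy as np

# invented training data: one discrete feature x^(1) and labels y
x1 = np.array([1, 1, 2, 2, 2, 3])
y = np.array([1, 1, 1, -1, -1, -1])

# MLE of the prior P(Y = 1): fraction of samples labeled 1
prior = np.mean(y == 1)                                  # 0.5
# MLE of the conditional P(X^(1) = 2 | Y = -1): count within the class
cond = np.sum((x1 == 2) & (y == -1)) / np.sum(y == -1)   # 2/3
print(prior, cond)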
2. Learning and Classification Algorithm
Input: training data $T=\left\{\left(x_{1}, y_{1}\right),\left(x_{2}, y_{2}\right), \cdots,\left(x_{N}, y_{N}\right)\right\}$, where $x_{i}=\left(x_{i}^{(1)}, x_{i}^{(2)}, \cdots, x_{i}^{(n)}\right)^{\mathrm{T}}$, $x_{i}^{(j)}$ is the j-th feature of the i-th sample, $x_{i}^{(j)} \in\left\{a_{j 1}, a_{j 2}, \cdots, a_{j S_{j}}\right\}$, $a_{j l}$ is the l-th value that the j-th feature can take, $j=1,2, \cdots, n$, $l=1,2, \cdots, S_{j}$, $y_{i} \in\left\{c_{1}, c_{2}, \cdots, c_{K}\right\}$, and an instance x;
Output: the class of instance x.
(1) Compute the prior and conditional probabilities:
$$P\left(Y=c_{k}\right)=\frac{\sum_{i=1}^{N} I\left(y_{i}=c_{k}\right)}{N}, \quad k=1,2, \cdots, K$$
$$P\left(X^{(j)}=a_{j l} | Y=c_{k}\right)=\frac{\sum_{i=1}^{N} I\left(x_{i}^{(j)}=a_{j l}, y_{i}=c_{k}\right)}{\sum_{i=1}^{N} I\left(y_{i}=c_{k}\right)}$$
$$j=1,2, \cdots, n ; \quad l=1,2, \cdots, S_{j} ; \quad k=1,2, \cdots, K$$
(2) For the given instance $x=\left(x^{(1)}, x^{(2)}, \cdots, x^{(n)}\right)^{\mathrm{T}}$, compute:
$$P\left(Y=c_{k}\right) \prod_{j=1}^{n} P\left(X^{(j)}=x^{(j)} | Y=c_{k}\right), \quad k=1,2, \cdots, K$$
(3) Determine the class of instance x:
$$y=\arg \max _{c_{k}} P\left(Y=c_{k}\right) \prod_{j=1}^{n} P\left(X^{(j)}=x^{(j)} | Y=c_{k}\right)$$
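A compact sketch of steps (1)-(3) for discrete features (the class and its names are mine, not from the text; the lam parameter anticipates the smoothing of subsection 3, and lam=0 gives the maximum likelihood version used here):

from collections import Counter, defaultdict

class DiscreteNB:
    """Naive Bayes for discrete features, following steps (1)-(3) above."""

    def __init__(self, lam=0.0):
        self.lam = lam  # lam = 0 -> MLE; lam = 1 -> Laplace smoothing

    def fit(self, X, y):
        N, n = len(y), len(X[0])
        self.classes = sorted(set(y))
        self.values = [sorted({x[j] for x in X}) for j in range(n)]  # {a_j1..a_jSj}
        K = len(self.classes)
        counts = Counter(y)
        # step (1): priors P(Y = c_k)
        self.prior = {c: (counts[c] + self.lam) / (N + K * self.lam) for c in self.classes}
        # step (1): conditionals P(X^(j) = a_jl | Y = c_k)
        self.cond = defaultdict(dict)
        for c in self.classes:
            Xc = [x for x, yi in zip(X, y) if yi == c]
            for j, vals in enumerate(self.values):
                cnt = Counter(x[j] for x in Xc)
                self.cond[c][j] = {a: (cnt[a] + self.lam) / (len(Xc) + len(vals) * self.lam)
                                   for a in vals}
        return self

    def predict(self, x):
        # steps (2)-(3): maximize the prior times the product of conditionals
        def score(c):
            p = self.prior[c]
            for j, v in enumerate(x):
                p *= self.cond[c][j].get(v, 0.0)
            return p
        return max(self.classes, key=score)

Example 1 below can then be reproduced with DiscreteNB(lam=0) and Example 2 with DiscreteNB(lam=1).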
Example 1: learn a naive Bayes classifier from the training data in the table below and determine the class label y of $x=(2, S)^{T}$. In the table, $X^{(1)}$ and $X^{(2)}$ are features whose value sets are $A_{1}=\{1,2,3\}$ and $A_{2}=\{S, M, L\}$ respectively, and Y is the class label with $Y \in C=\{1,-1\}$.
sample    1   2   3   4   5   6   7   8   9  10  11  12  13  14  15
X^(1)     1   1   1   1   1   2   2   2   2   2   3   3   3   3   3
X^(2)     S   M   M   S   S   S   M   M   L   L   L   M   M   L   L
Y        -1  -1   1   1  -1  -1  -1   1   1   1   1   1   1   1  -1
from IPython.display import Image
Image(filename="./data/4_2.png", width=500)
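In case the referenced figure does not render, the computation it illustrates can be redone directly from the table (a sketch using exact fractions; the counts are read off the 15 samples above):

from fractions import Fraction as F

# priors: 9 samples have Y = 1, 6 have Y = -1
P_y1, P_ym1 = F(9, 15), F(6, 15)
# conditionals needed for x = (2, S), counted within each class
P_21, P_S1 = F(3, 9), F(1, 9)  # P(X1=2 | Y=1),  P(X2=S | Y=1)
P_2m, P_Sm = F(2, 6), F(3, 6)  # P(X1=2 | Y=-1), P(X2=S | Y=-1)

print(P_y1 * P_21 * P_S1)   # 1/45
print(P_ym1 * P_2m * P_Sm)  # 1/15, the larger score, so y = -1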
3. Bayesian Estimation
With maximum likelihood estimation, an estimated probability can turn out to be 0, which then distorts the computation of the posterior probabilities and biases the classification. The way to solve this problem is to use Bayesian estimation. Concretely, the Bayesian estimate of the conditional probability is:
$$P_{\lambda}\left(X^{(j)}=a_{j l} | Y=c_{k}\right)=\frac{\sum_{i=1}^{N} I\left(x_{i}^{(j)}=a_{j l}, y_{i}=c_{k}\right)+\lambda}{\sum_{i=1}^{N} I\left(y_{i}=c_{k}\right)+S_{j} \lambda}$$
where $\lambda \geqslant 0$. This is equivalent to adding a positive number $\lambda>0$ to the frequency count of each possible value of the random variable. When $\lambda=0$ this is exactly maximum likelihood estimation. A common choice is $\lambda=1$, in which case it is called Laplace smoothing. Clearly, for any $l=1,2, \cdots, S_{j}$ and $k=1,2, \cdots, K$:
$$\begin{array}{l}{P_{\lambda}\left(X^{(j)}=a_{j l} | Y=c_{k}\right)>0} \\ {\sum_{l=1}^{S_{j}} P_{\lambda}\left(X^{(j)}=a_{j l} | Y=c_{k}\right)=1}\end{array}$$
Likewise, the Bayesian estimate of the prior probability is:
$$P_{\lambda}\left(Y=c_{k}\right)=\frac{\sum_{i=1}^{N} I\left(y_{i}=c_{k}\right)+\lambda}{N+K \lambda}$$
Example 2: for Example 1, estimate the probabilities using Laplace smoothing, i.e. take $\lambda=1$.
Image(filename="./data/4_1.png", width=500)
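Again, in case the figure does not render, the smoothed computation can be redone from the table: each feature has $S_{j}=3$ values and there are $K=2$ classes, so with $\lambda=1$ every conditional gains 1 in the numerator and 3 in the denominator, and the prior gains 1 and 2 respectively:

from fractions import Fraction as F

# smoothed priors: (count + 1) / (15 + 2)
P_y1, P_ym1 = F(9 + 1, 17), F(6 + 1, 17)        # 10/17, 7/17
# smoothed conditionals for x = (2, S): (count + 1) / (class size + 3)
P_21, P_S1 = F(3 + 1, 9 + 3), F(1 + 1, 9 + 3)   # 4/12, 2/12
P_2m, P_Sm = F(2 + 1, 6 + 3), F(3 + 1, 6 + 3)   # 3/9, 4/9

print(float(P_y1 * P_21 * P_S1))   # ~0.0327
print(float(P_ym1 * P_2m * P_Sm))  # ~0.0610, still the larger, so y = -1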
III. Code Implementation
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from collections import Counter
import math

def load_data():
    iris = load_iris()
    df = pd.DataFrame(iris.data, columns=iris.feature_names)
    df["label"] = iris.target
    df.columns = ["sepal length", "sepal width", "petal length", "petal width", "label"]
    # keep the first 100 samples (classes 0 and 1) for a binary problem
    data = np.array(df.iloc[:100, :])
    return data[:, :-1], data[:, -1]

X, y = load_data()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
X_test[0], y_test[0]
(array([4.5, 2.3, 1.3, 0.3]), 0.0)
1. A Custom GaussianNB
Each feature's likelihood is assumed to be Gaussian.
Probability density function:
$$P\left(x_{i} | y_{k}\right)=\frac{1}{\sqrt{2 \pi \sigma_{y_{k}}^{2}}} \exp \left(-\frac{\left(x_{i}-\mu_{y_{k}}\right)^{2}}{2 \sigma_{y_{k}}^{2}}\right)$$
Mean: $\mu$; variance: $\sigma^{2}=\frac{\sum(X-\mu)^{2}}{N}$
class NaiveBayes(object):
    def __init__(self):
        self.model = None

    @staticmethod
    def mean(X):
        return sum(X) / float(len(X))

    def stdev(self, X):
        # standard deviation of one feature column
        avg = self.mean(X)
        return math.sqrt(sum([pow(x - avg, 2) for x in X]) / float(len(X)))

    def gaussian_probability(self, x, mean, stdev):
        # Gaussian density; the original had math.sqrt(x * math.pi), a bug:
        # the normalizing constant is 1 / sqrt(2 * pi * sigma^2)
        exponent = math.exp(-(math.pow(x - mean, 2) / (2 * math.pow(stdev, 2))))
        return (1 / (math.sqrt(2 * math.pi) * stdev)) * exponent

    def summarize(self, train_data):
        # (mean, stdev) for every feature column
        summaries = [(self.mean(i), self.stdev(i)) for i in zip(*train_data)]
        return summaries

    def fit(self, X, y):
        labels = list(set(y))
        data = {label: [] for label in labels}
        for f, label in zip(X, y):
            data[label].append(f)
        self.model = {label: self.summarize(value) for label, value in data.items()}
        return "GaussianNB train done"

    def calculate_probabilities(self, input_data):
        # class -> prod_j P(x_j | class); priors are omitted (classes are balanced here)
        probabilities = {}
        for label, value in self.model.items():
            probabilities[label] = 1
            for i in range(len(value)):
                mean, stdev = value[i]
                probabilities[label] *= self.gaussian_probability(input_data[i], mean, stdev)
        return probabilities

    def predict(self, X_test):
        # class with the largest likelihood
        label = sorted(self.calculate_probabilities(X_test).items(), key=lambda x: x[-1])[-1][0]
        return label

    def score(self, X_test, y_test):
        right = 0
        for X, y in zip(X_test, y_test):
            label = self.predict(X)
            if label == y:
                right += 1
        return right / float(len(X_test))
model = NaiveBayes()
model.fit(X_train, y_train)
'GaussianNB train done'
print(model.predict([4.4, 3.2, 1.3, 0.2]))
0.0
model.score(X_test, y_test)
1.0
2. sklearn naive_bayes
from sklearn.naive_bayes import GaussianNB
clf = GaussianNB()
clf.fit(X_train, y_train)
GaussianNB(priors=None, var_smoothing=1e-09)
clf.score(X_test, y_test)
1.0
clf.predict([[4.4, 3.2, 1.3, 0.2]])
array([0.])
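For completeness, the Bernoulli and multinomial variants listed in the introduction expose the same fit/score interface. A hedged sketch on the same split (the binarize threshold is an arbitrary choice, and iris features are only count-like, so this illustrates the API rather than a recommended model):

from sklearn.naive_bayes import BernoulliNB, MultinomialNB

# BernoulliNB thresholds features into {0, 1}; 3.0 is an arbitrary cut here
bnb = BernoulliNB(binarize=3.0).fit(X_train, y_train)
# MultinomialNB requires non-negative features; iris measurements qualify
mnb = MultinomialNB().fit(X_train, y_train)
print(bnb.score(X_test, y_test), mnb.score(X_test, y_test))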