# Softmax Classifier

The softmax classifier is similar to logistic regression; in fact, softmax grew out of logistic regression. Since we are now doing multi-class classification, we need more probabilities, one for each class. The softmax formula is

$$P(y=j \mid x) = \frac{e^{w_j^\top x}}{\sum_{k=1}^{K} e^{w_k^\top x}}.$$

A natural question: why not directly take $\arg\max_j w_j^\top x$, instead of going around this big detour only to pick the largest value in the end? ① What we really want is the max, but max has one drawback: it is not differentiable. So we need a function that approximates max. exp is an exponential function, so larger scores grow much faster, which separates out the largest one, and it is differentiable; this design also makes a feature's influence on the probability multiplicative. ② Softmax developed out of logistic regression, so it naturally uses the cross-entropy loss

$$L = -\sum_{j=1}^{K} y_j \log \hat{y}_j,$$

where the target class has $y_c = 1$ and all the others are 0. Differentiating with respect to the score $z_j = w_j^\top x$ gives

$$\frac{\partial L}{\partial z_j} = \hat{y}_j - y_j,$$

which is a very clean form, and it is consistent with what we get for linear regression (with a least-mean-squares objective) and for two-class classification (with a cross-entropy objective).

The main implementation steps: first the exp-and-normalize operation, which gives the probability that the current sample belongs to each class,

$$\hat{y}_j = \frac{e^{z_j}}{\sum_{k=1}^{K} e^{z_k}},$$

then take the logarithm to get the cost function,

$$J(w) = -\frac{1}{m} \sum_{i=1}^{m} \log \hat{y}^{(i)}_{c_i},$$

and finally the derivative:

$$\nabla_{w_j} J = -\frac{1}{m} \sum_{i=1}^{m} x^{(i)} \left( y^{(i)}_j - \hat{y}^{(i)}_j \right).$$
###Softmax里的参数特色
因此能够看出,最优参数
减去一些向量φ对预测结果是没有什么影响的,也就是说在模型里面,是有多组的最优解,由于φ的不一样就意味着不一样的解,而φ对于结果又是没有影响的,因此就存在多组解的可能。 ###Softmax和logistics的关系
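A quick numeric check of this invariance (the score vector and the shift are made up for illustration; shifting every score by the same amount is what subtracting a common $\phi$ from all weight vectors does for a given sample):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

z = np.array([3.0, 1.0, -2.0])
phi = 7.5                        # any constant shift
print(softmax(z))
print(softmax(z - phi))          # identical probabilities
```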
### The relationship between Softmax and logistic regression

For $K = 2$ classes, softmax reduces to the logistic (sigmoid) form:

$$P(y=1 \mid x) = \frac{e^{w_1^\top x}}{e^{w_0^\top x} + e^{w_1^\top x}} = \frac{1}{1 + e^{-(w_1 - w_0)^\top x}}.$$

So softmax is an extension of logistic regression: going back to two-class classification, softmax behaves exactly the same way, and both use the cross-entropy loss.
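A small numeric check of this equivalence (the two scores are invented for illustration):

```python
import numpy as np

z0, z1 = 0.4, 1.7                                    # scores of the two classes
p1_softmax = np.exp(z1) / (np.exp(z0) + np.exp(z1))  # two-class softmax
p1_sigmoid = 1.0 / (1.0 + np.exp(-(z1 - z0)))        # logistic on the score gap
print(p1_softmax, p1_sigmoid)                        # both ~0.786
```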
### Code implementation

Using the handwritten digit recognition dataset (MNIST):

```python
import numpy as np
from keras.datasets import mnist


class DataPrecessing(object):
    def loadFile(self):
        # load MNIST, scale pixels to [0, 1], flatten 28x28 images to 784-d rows
        (x_train, x_target_tarin), (x_test, x_target_test) = mnist.load_data()
        x_train = x_train.astype('float32') / 255.0
        x_test = x_test.astype('float32') / 255.0
        x_train = x_train.reshape(len(x_train), np.prod(x_train.shape[1:]))
        x_test = x_test.reshape(len(x_test), np.prod(x_test.shape[1:]))
        x_train = np.mat(x_train)
        x_test = np.mat(x_test)
        x_target_tarin = np.mat(x_target_tarin)
        x_target_test = np.mat(x_target_test)
        return x_train, x_target_tarin, x_test, x_target_test

    def Calculate_accuracy(self, target, prediction):
        # fraction of samples whose predicted class matches the target
        score = 0
        for i in range(len(target)):
            if target[i] == prediction[i]:
                score += 1
        return score / len(target)

    def predict(self, test, weights):
        # the class with the highest score wins
        h = test * weights
        return h.argmax(axis=1)
```
Load the dataset, convert the format, and so on.
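A quick sanity check of the shapes (assuming Keras can fetch MNIST locally; the `(1, 60000)` row shape produced by `np.mat` is why the targets get `.tolist()[0]` below):

```python
dp = DataPrecessing()
x_train, x_target_tarin, x_test, x_target_test = dp.loadFile()
print(x_train.shape)          # (60000, 784): flattened 28x28 images
print(x_target_tarin.shape)   # (1, 60000): np.mat turns the label vector into a row
```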
```python
def gradientAscent(feature_data, label_data, k, maxCycle, alpha):
    '''train a softmax model by gradient ascent
    input: feature_data(mat)  features
           label_data(list)   target labels
           k(int)             number of classes
           maxCycle(int)      maximum number of iterations
           alpha(float)       learning rate
    '''
    Dataprecessing = DataPrecessing()
    x_train, x_target_tarin, x_test, x_target_test = Dataprecessing.loadFile()
    x_target_tarin = x_target_tarin.tolist()[0]
    x_target_test = x_target_test.tolist()[0]
    m, n = np.shape(feature_data)
    weights = np.mat(np.ones((n, k)))         # one weight column per class
    i = 0
    while i <= maxCycle:
        err = np.exp(feature_data * weights)  # unnormalized scores exp(x.w)
        if i % 100 == 0:
            print('cost score : ', cost(err, label_data))
            train_predict = Dataprecessing.predict(x_train, weights)
            test_predict = Dataprecessing.predict(x_test, weights)
            print('Train_accuracy : ', Dataprecessing.Calculate_accuracy(x_target_tarin, train_predict))
            print('Test_accuracy : ', Dataprecessing.Calculate_accuracy(x_target_test, test_predict))
        rowsum = -err.sum(axis=1)
        rowsum = rowsum.repeat(k, axis=1)
        err = err / rowsum                    # err is now -softmax(x.w)
        for x in range(m):
            err[x, label_data[x]] += 1        # err = y_onehot - softmax(x.w)
        weights = weights + (alpha / m) * feature_data.T * err  # ascent step
        i += 1
    return weights


def cost(err, label_data):
    # average cross-entropy: -1/m * sum over samples of log p(target class)
    m = np.shape(err)[0]
    sum_cost = 0.0
    for i in range(m):
        if err[i, label_data[i]] / np.sum(err[i, :]) > 0:
            sum_cost -= np.log(err[i, label_data[i]] / np.sum(err[i, :]))
    return sum_cost / m
```
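The per-sample loop that adds 1 at the target column can also be written in vectorized form with a one-hot matrix. Here is a hypothetical sketch of a single update step (the name `softmax_grad_step` is mine, not from the post):

```python
def softmax_grad_step(X, labels, weights, alpha):
    # one vectorized gradient-ascent step: weights += alpha/m * X^T (Y - P)
    X = np.asarray(X)
    weights = np.asarray(weights)
    m = X.shape[0]
    scores = X @ weights
    scores -= scores.max(axis=1, keepdims=True)   # stabilize exp
    probs = np.exp(scores)
    probs /= probs.sum(axis=1, keepdims=True)     # softmax probabilities P
    onehot = np.zeros_like(probs)
    onehot[np.arange(m), labels] = 1.0            # one-hot targets Y
    return weights + (alpha / m) * (X.T @ (onehot - probs))
```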
The implementation is actually fairly simple.
```python
Dataprecessing = DataPrecessing()
x_train, x_target_tarin, x_test, x_target_test = Dataprecessing.loadFile()
x_target_tarin = x_target_tarin.tolist()[0]
gradientAscent(x_train, x_target_tarin, 10, 100000, 0.001)
```
Run the function.
### GitHub code

https://github.com/GreenArrow2017/MachineLearning/tree/master/MachineLearning/Linear%20Model/LogosticRegression