The idea behind the KNN (k-nearest-neighbor) classification algorithm:
Suppose we have training data <x, y>, where x is an already-classified sample and y is the class it belongs to, and we now want to classify a new sample z.
The steps are as follows:
(1) Compute the distance D between z and every x; several distance formulas can be used, such as the Euclidean distance.
(2) Sort the already-classified samples by distance D.
(3) Take the first K samples, count the classes they belong to, and assign z to the class that occurs most often (a minimal sketch of these steps follows).
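Below is a minimal sketch of the three steps on a made-up 2-D toy dataset (the points, labels, and value of K are invented purely for illustration):

from numpy import array

x = array([[1.0, 1.1], [1.0, 1.0], [0.0, 0.0], [0.0, 0.1]])  # already-classified samples
y = ['A', 'A', 'B', 'B']                                     # their class labels
z = array([0.1, 0.2])                                        # sample to classify
K = 3

# (1) Euclidean distance from z to every x
D = (((x - z) ** 2).sum(axis=1)) ** 0.5
# (2) sort the classified samples by distance
order = D.argsort()
# (3) take the first K samples and vote by majority
votes = [y[i] for i in order[:K]]
print(max(set(votes), key=votes.count))  # prints 'B'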
Main functions:
import operator
from numpy import array, tile, zeros

def file2matrix(fileName):
    # Parse a tab-separated file: three numeric features per line, integer label last.
    fr = open(fileName)
    lines = fr.readlines()
    numofLines = len(lines)
    returnMatrix = zeros((numofLines, 3))  # preallocate the feature matrix
    classLabelVector = []
    index = 0
    for line in lines:
        line = line.strip()  # strip the trailing newline
        listFromLine = line.split('\t')
        returnMatrix[index, :] = listFromLine[0:3]  # fill one row of the matrix
        classLabelVector.append(int(listFromLine[-1]))
        index = index + 1
    return returnMatrix, classLabelVector
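For example, assuming a tab-separated file whose rows hold three numeric features followed by an integer label (the file name 'dataset.txt' below is only a placeholder):

# each line of the hypothetical file looks like "40920\t8.3\t0.95\t3"
dataMat, dataLabels = file2matrix('dataset.txt')
print(dataMat.shape)      # (number of lines, 3)
print(dataLabels[0:5])    # first few integer labels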
def classify(inX, dataSet, labels, k):
    dataSetSize = dataSet.shape[0]
    diffMat = tile(inX, (dataSetSize, 1)) - dataSet  # difference between inX and every sample
    sqDiffMat = diffMat ** 2
    sqDistances = sqDiffMat.sum(axis=1)
    distances = sqDistances ** 0.5  # Euclidean distance to every sample
    sortedDistances = distances.argsort()  # indices sorted by ascending distance
    classCount = {}
    for i in range(k):
        votelabel = labels[sortedDistances[i]]
        classCount[votelabel] = classCount.get(votelabel, 0) + 1  # one vote per neighbor
    sortedClassCount = sorted(classCount.items(), key=operator.itemgetter(1), reverse=True)
    return sortedClassCount[0][0]  # the most frequent class among the k neighbors
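A quick call on toy data (again invented for illustration; it reuses the imports added above):

group = array([[1.0, 1.1], [1.0, 1.0], [0.0, 0.0], [0.0, 0.1]])
labels = ['A', 'A', 'B', 'B']
print(classify([0.0, 0.0], group, labels, 3))  # prints 'B'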
def autoNorm(dataset):
    # Rescale every feature column to the [0, 1] range.
    minvalue = dataset.min(0)  # column-wise minimum
    maxvalue = dataset.max(0)  # column-wise maximum
    ranges = maxvalue - minvalue
    m = dataset.shape[0]
    normdataset = dataset - tile(minvalue, (m, 1))
    normdataset = normdataset / tile(ranges, (m, 1))
    return normdataset, ranges, minvalue
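Putting the pieces together (the file name and the query sample below are placeholders for illustration):

dataMat, dataLabels = file2matrix('dataset.txt')
normMat, ranges, minVals = autoNorm(dataMat)
# a new sample must be scaled with the same ranges/minVals before classification
inX = (array([30000.0, 10.0, 0.5]) - minVals) / ranges
print(classify(inX, normMat, dataLabels, 3))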