Text Similarity Algorithms

Our public-opinion monitoring work involves some text-similarity checks: the goal is to group related news articles under a single main story, much like Baidu's "similar news" feature. So I took some time to look at a few simple text-similarity algorithms.

Below is the Levenshtein (edit) distance algorithm I looked at earlier. You can search for the theory elsewhere; here I will go straight to the code.

def levenshtein_distance(first, second):
    if len(first) == 0 or len(second) == 0:
        return len(first) + len(second)
    first_length = len(first) + 1
    second_length = len(second) + 1
    # Initialize the matrix: row 0 holds 0..len(second), column 0 holds 0..len(first)
    distance_matrix = [[0] * second_length for _ in range(first_length)]
    for j in range(second_length):
        distance_matrix[0][j] = j
    for i in range(first_length):
        distance_matrix[i][0] = i
    for i in range(1, first_length):
        for j in range(1, second_length):
            deletion = distance_matrix[i-1][j] + 1
            insertion = distance_matrix[i][j-1] + 1
            substitution = distance_matrix[i-1][j-1]
            if first[i-1] != second[j-1]:
                substitution += 1
            # Each cell is the edit distance between first[:i] and second[:j]
            distance_matrix[i][j] = min(insertion, deletion, substitution)
    return distance_matrix[first_length-1][second_length-1]


if __name__ == '__main__':
    print(levenshtein_distance(u"我们不要垃圾消息", u"A垃圾信息我们不要"))  # result: 9
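The raw distance grows with the length of the strings, so a single fixed cutoff is hard to apply across news titles of different lengths. The small sketch below is my own addition rather than part of the original snippet: it normalizes the distance from levenshtein_distance above into a similarity score between 0 and 1 (the name levenshtein_similarity is just an illustrative choice).

def levenshtein_similarity(first, second):
    # Normalized similarity: 1.0 means identical, 0.0 means completely different
    if not first and not second:
        return 1.0
    dist = levenshtein_distance(first, second)
    return 1.0 - float(dist) / max(len(first), len(second))

print(levenshtein_similarity("kitten", "sitting"))  # distance 3 over length 7 -> about 0.571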
The python-Levenshtein package provides the same edit distance, plus a Hamming distance and a similarity ratio. The snippet below compares the contents of two local text files:

import Levenshtein

a = r"C:/Users/Administrator/Desktop/a.txt"
b = r"C:/Users/Administrator/Desktop/b.txt"

with open(a, 'r') as f:
    aa = f.read()

with open(b, 'r') as f1:
    bb = f1.read()

print(Levenshtein.distance(aa, bb))  # edit distance between the two file contents

if len(aa) == len(bb):
    print(Levenshtein.hamming(aa, bb))  # Hamming distance is only defined for equal-length strings

print(Levenshtein.ratio(aa, bb))  # similarity ratio in [0, 1]; 1.0 means identical
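
Back to the original goal of grouping related news under one main story: below is a rough sketch of how that could be wired up with Levenshtein.ratio. The group_news_titles helper and the 0.75 threshold are hypothetical illustrations, not values tuned on real public-opinion data.

import Levenshtein

def group_news_titles(titles, threshold=0.75):
    # Greedy grouping: a title joins the first group whose main title is
    # similar enough, otherwise it starts a new group of its own.
    groups = []  # list of (main_title, [similar_titles]) pairs
    for title in titles:
        for main_title, members in groups:
            if Levenshtein.ratio(title, main_title) >= threshold:
                members.append(title)
                break
        else:
            groups.append((title, []))
    return groups

titles = [
    u"我们不要垃圾消息",
    u"我们不要垃圾信息",
    u"今日股市大涨",
]
for main_title, members in group_news_titles(titles):
    print(main_title, members)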

The screenshot below is taken from the web; I think it illustrates the algorithm well.

[screenshot: levenshtein]