TF-IDF (Term Frequency / Inverse Document Frequency) is an important measure of term importance in information retrieval; it quantifies how much information a keyword \(w\) contributes to a query (which can be treated as a document). Term frequency (TF) is the frequency with which the keyword \(w\) occurs in document \(D_i\):
\[ TF_{w,D_i}= \frac {count(w)} {\left| D_i \right|} \]
Here, \(count(w)\) is the number of times the keyword \(w\) occurs, and \(\left| D_i \right|\) is the total number of words in document \(D_i\). Inverse document frequency (IDF) reflects how common a keyword is: the more widespread a word is (i.e., the more documents contain it), the lower its IDF; conversely, the rarer it is, the higher its IDF. IDF is defined as:
\[ IDF_w=\log \frac {N}{\sum_{i=1}^N I(w,D_i)} \]
Here, \(N\) is the total number of documents, and \(I(w,D_i)\) indicates whether document \(D_i\) contains the keyword \(w\): 1 if it does, 0 otherwise. If the word \(w\) appears in none of the documents, the denominator of the IDF formula becomes 0, so the IDF needs to be smoothed:
\[ IDF_w=\log \frac {N}{1+\sum_{i=1}^N I(w,D_i)} \]
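As a quick sanity check (a made-up example, not from the original data; natural logarithm assumed): with \(N=20\) documents, smoothing barely changes the IDF of a word that does occur, while a word that occurs in no document still gets a finite value:

\[ IDF_w=\ln \frac{20}{1+5}\approx 1.20 \quad \left(\text{vs. } \ln \frac{20}{5}\approx 1.39 \text{ unsmoothed}\right), \qquad IDF_{w'}=\ln \frac{20}{1+0}=\ln 20\approx 3.00 \]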
The TF-IDF value of keyword \(w\) in document \(D_i\) is then:
\[ TF\text{-}IDF_{w,D_i}=TF_{w,D_i} \times IDF_w \]
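A minimal pure-Python sketch of the formulas above (the toy corpus and function names are my own, for illustration only):

```python
import math
from collections import Counter

# Toy corpus: each document is a list of tokens (hypothetical data).
docs = [
    ["java", "python", "machine", "learning"],
    ["python", "web", "django"],
    ["sales", "customer", "service"],
]

def tf(word, doc):
    """TF = count of word in doc / total number of words in doc."""
    return Counter(doc)[word] / len(doc)

def idf(word, docs):
    """Smoothed IDF = log(N / (1 + number of docs containing the word))."""
    contain = sum(1 for d in docs if word in d)
    return math.log(len(docs) / (1 + contain))

def tf_idf(word, doc, docs):
    return tf(word, doc) * idf(word, docs)

print(tf_idf("python", docs[0], docs))  # in 2 of 3 docs -> IDF = log(3/3) = 0
print(tf_idf("java", docs[0], docs))    # in 1 of 3 docs -> IDF = log(3/2) > 0
```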
From these definitions it follows that a word's TF-IDF value is high when it occurs frequently within a document but appears in few documents overall; very common words get a low IDF and are therefore down-weighted.
《TF-IDF模型的概率解释》 (a probabilistic interpretation of the TF-IDF model) gives a mathematical explanation of TF-IDF from a probabilistic point of view; "The Vector Space Model of text" is a hands-on tutorial covering the usual TF-IDF computation, normalization, and how to compute a TF-IDF matrix with scikit-learn (sklearn).
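Not taken from the referenced tutorial, just a minimal sketch of computing a TF-IDF matrix with sklearn. Note that by default `TfidfVectorizer` uses a slightly different smoothed IDF, \(\ln\frac{1+N}{1+df}+1\), and L2-normalizes each row:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy corpus of already-segmented documents (space-separated tokens, made up).
corpus = [
    "java python 机器学习 算法",
    "python web 开发 django",
    "销售 客户 服务 沟通",
]

vectorizer = TfidfVectorizer()              # defaults: smooth_idf=True, norm='l2'
matrix = vectorizer.fit_transform(corpus)   # sparse matrix, shape (n_docs, n_terms)

print(sorted(vectorizer.vocabulary_))       # vocabulary learned from the corpus
print(matrix.toarray().round(3))            # each row is one document's TF-IDF vector
```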
Recently I ran into a requirement: mining industry keywords. For example, keywords for the IT industry include Java, Python, machine learning, and so on. TF-IDF is well suited to keyword extraction: the higher a word's TF-IDF value, the more likely it is a keyword. So the question is: how do we apply the TF-IDF model here?
To mine keywords we first need data, so we crawled job postings for 20 industries from a recruitment website and then segmented the text into words. We observed that industry keywords are domain-specific: a keyword of one industry generally does not also belong to several other industries. We therefore treat each industry's segmentation result as one large doc, giving 20 docs in total; we compute the TF-IDF matrix with sklearn and take the top words of each industry.
When applying the model this way, because the total number of docs is so small, some generic words such as “认真负责” (conscientious) and “岗位” (position) showed up among the top words. To filter out these common words we took two measures, both visible in the code at the end: first, segment each line with jieba.analyse.extract_tags instead of plain jieba.cut, so that generic words are already dropped during segmentation; second, set max_df in TfidfVectorizer so that terms appearing in too large a fraction of the docs are ignored.
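A small illustration of the max_df filter (toy data of my own, not the real corpus): with a float max_df, any term whose document frequency is strictly above that fraction of the docs is dropped from the vocabulary.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# "岗位" and "职责" appear in every doc, the other terms in only one (hypothetical data).
corpus = [
    "岗位 职责 java 开发",
    "岗位 职责 销售 客户",
    "岗位 职责 设计 美工",
    "岗位 职责 运营 推广",
]

# max_df=0.5: ignore terms that occur in more than 50% of the documents.
vectorizer = TfidfVectorizer(max_df=0.5)
vectorizer.fit(corpus)

print(sorted(vectorizer.vocabulary_))   # "岗位" and "职责" are gone
print(sorted(vectorizer.stop_words_))   # terms removed by the max_df cut
```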
Segmentation uses jieba; if the segmentation quality is not satisfactory, Baidu Baike entries can be used as a custom segmentation dictionary. The TF-IDF computation relies on sklearn, and numpy is used to take the row-wise top entries of the matrix. The full code is as follows:
```python
# -*- coding: utf-8 -*-
# @Time   : 2016/9/6
# @Author : rain
import codecs
import os

import jieba.analyse
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

base_path = "./resources/corpus/"     # raw job-posting text, one file per industry
seg_path = "./resources/segmented/"   # segmented output, one file per industry


def segment():
    """Word segmentation: extract keywords from every line of each corpus file."""
    for txt in os.listdir(base_path):
        whole_base = os.path.join(base_path, txt)
        whole_seg = os.path.join(seg_path, txt)
        with codecs.open(whole_base, 'r', 'utf-8') as fr, \
                codecs.open(whole_seg, 'w', 'utf-8') as fw:
            for line in fr:
                # seg_list = jieba.cut(line.strip())
                # extract_tags already filters out generic words during segmentation
                seg_list = jieba.analyse.extract_tags(line.strip(), topK=20,
                                                      withWeight=False, allowPOS=())
                fw.write(" ".join(seg_list) + "\n")


def read_doc_list():
    """Read the segmented docs: one big doc per industry."""
    trade_list = []
    doc_list = []
    for txt in os.listdir(seg_path):
        trade_list.append(txt.split(".")[0])
        with codecs.open(os.path.join(seg_path, txt), "r", "utf-8") as fr:
            doc_list.append(fr.read().replace('\n', ' '))
    return trade_list, doc_list


def tfidf_top(trade_list, doc_list, max_df, topn):
    """Compute the TF-IDF matrix and return the top-n words of each industry."""
    vectorizer = TfidfVectorizer(max_df=max_df)
    matrix = vectorizer.fit_transform(doc_list)
    feature_dict = {v: k for k, v in vectorizer.vocabulary_.items()}  # index -> feature_name
    top_n_matrix = np.argsort(-matrix.todense())[:, :topn]  # top tf-idf words for each row
    df = pd.DataFrame(np.vectorize(feature_dict.get)(top_n_matrix), index=trade_list)  # convert matrix to df
    return df


segment()
tl, dl = read_doc_list()
tdf = tfidf_top(tl, dl, max_df=0.3, topn=500)
tdf.to_csv("./resources/keywords.txt", header=False, encoding='utf-8')
```
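If jieba's default segmentation splits domain terms incorrectly, a custom dictionary (e.g., built from Baidu Baike entries, as mentioned above) can be loaded before segmenting. A minimal sketch, assuming a hypothetical file user_dict.txt:

```python
import jieba
import jieba.analyse

# user_dict.txt (hypothetical): one entry per line, "词语 [词频] [词性]", e.g.
#   机器学习 10 n
#   深度学习 10 n
jieba.load_userdict("user_dict.txt")

line = "熟悉机器学习与深度学习相关算法"
print(jieba.analyse.extract_tags(line, topK=5))
```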