(3) word2vec: Word Vector Principles and Practice

word2vec principles:

word2vec is a tool for learning word vectors, and it comes in two model architectures: CBOW and Skip-Gram. CBOW predicts the center word from its surrounding context, while Skip-Gram predicts the context words from the center word.

(Figure: the two Word2Vec models)

Code:

from gensim import corpora

texts = [['human', 'interface', 'computer'],
         ['survey', 'user', 'computer', 'system', 'response', 'time'],
         ['eps', 'user', 'interface', 'system'],
         ['system', 'human', 'system', 'eps'],
         ['user', 'response', 'time'],
         ['trees'],
         ['graph', 'trees'],
         ['graph', 'minors', 'trees'],
         ['graph', 'minors', 'survey']]

# Build a token -> integer id mapping, then convert each document
# into a bag-of-words list of (token_id, count) pairs.
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
print(corpus[0])  # [(0, 1), (1, 1), (2, 1)]

 

References:

https://www.bilibili.com/video/av41393758/?p=2

https://github.com/Heitao5200/DGB/blob/master/feature/feature_code/train_word2vec.py

https://blog.csdn.net/l7h9ja4/article/details/80220939