Sentence and Word Tokenization with NLTK

1. Take a paragraph as input and split it into sentences (Punkt sentence tokenizer):

import nltk
import nltk.data

def splitSentence(paragraph):
    # Load the pretrained English Punkt sentence tokenizer.
    tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
    sentences = tokenizer.tokenize(paragraph)
    return sentences