spaCy is a popular, easy-to-use natural language processing library for Python. It offers high accuracy and is extremely fast. However, because spaCy is still a relatively new NLP library, it has not been as widely adopted as NLTK, and there are not yet many tutorials for it. In this article we will show how to implement text classification with spaCy, and the complete implementation code is provided at the end.
For young researchers, finding and filtering suitable academic conferences to submit to is quite time-consuming and laborious. We start by downloading the conference proceedings dataset, and we will then classify papers by conference.
Let's take a quick look at the data first:
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import base64
import string
import re
from collections import Counter
from nltk.corpus import stopwords

stopwords = stopwords.words('english')

df = pd.read_csv('research_paper.csv')
df.head()
The output shows the first few rows of the DataFrame, which has two columns: Title and Conference.
We can use the code below to confirm that the dataset has no missing values:
df.isnull().sum()
The result is as follows:
Title         0
Conference    0
dtype: int64
Now we split the data into a training set and a test set:
from sklearn.model_selection import train_test_split

train, test = train_test_split(df, test_size=0.33, random_state=42)

print('Research title sample:', train['Title'].iloc[0])
print('Conference of this paper:', train['Conference'].iloc[0])
print('Training Data Shape:', train.shape)
print('Testing Data Shape:', test.shape)
The output is as follows:
Research title sample: Cooperating with Smartness: Using Heterogeneous Smart Antennas in Ad-Hoc Networks.
Conference of this paper: INFOCOM
Training Data Shape: (1679, 2)
Testing Data Shape: (828, 2)
The dataset contains 2,507 paper titles, already grouped into 5 conference categories. The chart below gives an overview of how the papers are distributed across the different conferences:
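The plotting code for this chart is not shown in the article; a minimal sketch that produces this kind of distribution plot (assuming the df, seaborn, and matplotlib imports from earlier) could look like this:

# Sketch: count how many paper titles fall into each conference and plot the result
fig = plt.figure(figsize=(8, 4))
sns.countplot(x='Conference', data=df)
plt.title('Distribution of paper titles across conferences')
plt.show()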
The code below shows one way to preprocess text with spaCy; we then use it to find the words that appear most often in papers from the first two conferences (INFOCOM & ISCAS):
import spacy

nlp = spacy.load('en_core_web_sm')
punctuations = string.punctuation

def cleanup_text(docs, logging=False):
    # Lemmatize, lowercase, and strip stopwords/punctuation from each document
    texts = []
    counter = 1
    for doc in docs:
        if counter % 1000 == 0 and logging:
            print("Processed %d out of %d documents." % (counter, len(docs)))
        counter += 1
        doc = nlp(doc, disable=['parser', 'ner'])
        tokens = [tok.lemma_.lower().strip() for tok in doc if tok.lemma_ != '-PRON-']
        tokens = [tok for tok in tokens if tok not in stopwords and tok not in punctuations]
        tokens = ' '.join(tokens)
        texts.append(tokens)
    return pd.Series(texts)

INFO_text = [text for text in train[train['Conference'] == 'INFOCOM']['Title']]
IS_text = [text for text in train[train['Conference'] == 'ISCAS']['Title']]

INFO_clean = cleanup_text(INFO_text)
INFO_clean = ' '.join(INFO_clean).split()
IS_clean = cleanup_text(IS_text)
IS_clean = ' '.join(IS_clean).split()

INFO_counts = Counter(INFO_clean)
IS_counts = Counter(IS_clean)

# Plot the 20 most common words in INFOCOM titles
INFO_common_words = [word[0] for word in INFO_counts.most_common(20)]
INFO_common_counts = [word[1] for word in INFO_counts.most_common(20)]

fig = plt.figure(figsize=(18, 6))
sns.barplot(x=INFO_common_words, y=INFO_common_counts)
plt.title('Most Common Words used in the research papers for conference INFOCOM')
plt.show()
The result for INFOCOM is as follows:
Next we do the same for ISCAS:
IS_common_words = [word[0] for word in IS_counts.most_common(20)]
IS_common_counts = [word[1] for word in IS_counts.most_common(20)]

fig = plt.figure(figsize=(18, 6))
sns.barplot(x=IS_common_words, y=IS_common_counts)
plt.title('Most Common Words used in the research papers for conference ISCAS')
plt.show()
The result is as follows:
The top words for INFOCOM are 'networks' and 'network', which is unsurprising since INFOCOM is a conference in the networking field. The top words for ISCAS are 'base' and 'design', which suggests that ISCAS is a conference about topics such as databases and system design.
First we load the spaCy model and create the language processing object:
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.base import TransformerMixin
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC
from sklearn.feature_extraction.stop_words import ENGLISH_STOP_WORDS
from sklearn.metrics import accuracy_score
from nltk.corpus import stopwords
import string
import re
import spacy

spacy.load('en')
from spacy.lang.en import English
parser = English()
Below is another way to clean text with spaCy:
STOPLIST = set(stopwords.words('english') + list(ENGLISH_STOP_WORDS))
SYMBOLS = " ".join(string.punctuation).split(" ") + ["-", "...", "“", "”"]

class CleanTextTransformer(TransformerMixin):
    # Basic text cleaning as a scikit-learn transformer, usable in a Pipeline
    def transform(self, X, **transform_params):
        return [cleanText(text) for text in X]

    def fit(self, X, y=None, **fit_params):
        return self

    def get_params(self, deep=True):
        return {}

def cleanText(text):
    text = text.strip().replace("\n", " ").replace("\r", " ")
    text = text.lower()
    return text

def tokenizeText(sample):
    # Tokenize with spaCy, lemmatize, then drop stopwords and punctuation symbols
    tokens = parser(sample)
    lemmas = []
    for tok in tokens:
        lemmas.append(tok.lemma_.lower().strip() if tok.lemma_ != "-PRON-" else tok.lower_)
    tokens = lemmas
    tokens = [tok for tok in tokens if tok not in STOPLIST]
    tokens = [tok for tok in tokens if tok not in SYMBOLS]
    return tokens
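As a quick sanity check of the tokenizer, we can run it on a single string (a minimal sketch; the title below is made up for illustration, and the exact lemmas depend on the spaCy version and its lookup tables):

# Illustrative only: the title is a made-up example, not taken from the dataset
sample_title = "A Survey of Routing Protocols for Wireless Sensor Networks"
print(tokenizeText(sample_title))
# Expected to be roughly ['survey', 'routing', 'protocol', 'wireless', 'sensor', 'network'],
# i.e. lowercased lemmas with stopwords and punctuation removed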
Next we define a function to display the most important features, i.e. the features with the highest coefficients:
def printNMostInformative(vectorizer, clf, N):
    # Print the N features with the most negative and most positive coefficients
    feature_names = vectorizer.get_feature_names()
    coefs_with_fns = sorted(zip(clf.coef_[0], feature_names))
    topClass1 = coefs_with_fns[:N]
    topClass2 = coefs_with_fns[:-(N + 1):-1]
    print("Class 1 best: ")
    for feat in topClass1:
        print(feat)
    print("Class 2 best: ")
    for feat in topClass2:
        print(feat)

vectorizer = CountVectorizer(tokenizer=tokenizeText, ngram_range=(1, 1))
clf = LinearSVC()
pipe = Pipeline([('cleanText', CleanTextTransformer()), ('vectorizer', vectorizer), ('clf', clf)])

# data
train1 = train['Title'].tolist()
labelsTrain1 = train['Conference'].tolist()
test1 = test['Title'].tolist()
labelsTest1 = test['Conference'].tolist()

# train
pipe.fit(train1, labelsTrain1)

# test
preds = pipe.predict(test1)
print("accuracy:", accuracy_score(labelsTest1, preds))
print("Top 10 features used to predict: ")
printNMostInformative(vectorizer, clf, 10)

# Inspect the bag-of-words representation produced by the vectorizer alone
pipe = Pipeline([('cleanText', CleanTextTransformer()), ('vectorizer', vectorizer)])
transform = pipe.fit_transform(train1, labelsTrain1)

vocab = vectorizer.get_feature_names()
for i in range(len(train1)):
    s = ""
    indexIntoVocab = transform.indices[transform.indptr[i]:transform.indptr[i+1]]
    numOccurences = transform.data[transform.indptr[i]:transform.indptr[i+1]]
    for idx, num in zip(indexIntoVocab, numOccurences):
        s += str((vocab[idx], num))
The output is as follows:
accuracy: 0.7463768115942029
Top 10 features used to predict:
Class 1 best:
(-0.9286024231429632, 'database')
(-0.8479561292796286, 'chip')
(-0.7675978546440636, 'wimax')
(-0.6933516302055982, 'object')
(-0.6728543084136545, 'functional')
(-0.6625144315722268, 'multihop')
(-0.6410217867606485, 'amplifier')
(-0.6396374843938725, 'chaotic')
(-0.6175855765947755, 'receiver')
(-0.6016682542232492, 'web')
Class 2 best:
(1.1835964521070819, 'speccast')
(1.0752051052570133, 'manets')
(0.9490176624004726, 'gossip')
(0.8468395015456092, 'node')
(0.8433107444740003, 'packet')
(0.8370516260734557, 'schedule')
(0.8344139814680707, 'multicast')
(0.8332232077559836, 'queue')
(0.8255429594734555, 'qos')
(0.8182435133796081, 'location')
Next we compute precision, recall, and the F1 score:
from sklearn import metrics

print(metrics.classification_report(labelsTest1, preds, target_names=df['Conference'].unique()))
The output is as follows:
             precision    recall  f1-score   support

       VLDB       0.75      0.77      0.76       159
      ISCAS       0.90      0.84      0.87       299
   SIGGRAPH       0.67      0.66      0.66       106
    INFOCOM       0.62      0.69      0.65       139
        WWW       0.62      0.62      0.62       125

avg / total       0.75      0.75      0.75       828
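The fitted pipeline can also be applied to new, unseen titles. The sketch below is illustrative (the example title is made up); note that pipe was reassigned above to the version without the classifier, so we rebuild and refit the full pipeline first:

# Sketch: classify a new (made-up) paper title with the full pipeline
full_pipe = Pipeline([('cleanText', CleanTextTransformer()),
                      ('vectorizer', CountVectorizer(tokenizer=tokenizeText, ngram_range=(1, 1))),
                      ('clf', LinearSVC())])
full_pipe.fit(train1, labelsTrain1)
print(full_pipe.predict(["Energy-Efficient Packet Scheduling in Wireless Ad-Hoc Networks"]))
# e.g. ['INFOCOM'] -- the actual label depends on the trained model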
That's it: we have classified the papers using spaCy. The complete source code can be downloaded from GitHub.
Original article: Text Classification with spaCy - 汇智网