Classification task: recognizing whether a customer's emotion is positive or negative in customer-service conversations

Dialogue Emotion Recognition

Dialogue emotion recognition aims to identify the user's emotion in intelligent-dialogue settings, helping businesses get a fuller picture of product experience and monitor customer-service quality. It applies to chat, customer service, and many other scenarios.

For example, in smart speakers and in-car assistants, recognizing the user's emotion makes it possible to respond with appropriate reassurance and improve the interaction experience; in intelligent customer service, it can be used to analyze service quality and lower manual quality-inspection costs, and it also helps businesses track conversation quality and improve user satisfaction. You can try it out on the Baidu AI platform.

Given a user's dialogue text (usually the output of speech recognition), the model estimates the probability of each emotion class and outputs the final class. In this case there are three emotion classes: negative (0), neutral (1), and positive (2), so this is a three-way short-text classification problem.

Let's run an example first to get a direct feel for the model's output!

In[1]
# First unzip the dataset and the pretrained model
!cd /home/aistudio/data/data12605/ && unzip -qo data.zip
!cd /home/aistudio/work/ && tar -zxf emotion_detection_textcnn-1.0.0.tar.gz

# View the data to be predicted
!cat /home/aistudio/data/data12605/data/infer.txt
靠 你 真是 说 废话 
服务 态度 好 差 啊	
你 写 过 黄山 奇石 吗
一个一个 慢慢来
谢谢 服务 很 好
In[2]
# Update the config to use the pretrained model
!cd /home/aistudio/work/ && sed -i '7c MODEL_PATH=./textcnn' run.sh 
!cd /home/aistudio/work/ && sed -i 's#"model_type":.*$#"model_type":"textcnn_net",#' config.json

# Run inference and view the results
!cd /home/aistudio/work/ && sh run.sh infer
Load model from ./textcnn
Final infer result:
0	0.992887	0.003744	0.003369
0	0.677892	0.229147	0.092961
1	0.001657	0.997380	0.000963
1	0.003413	0.990708	0.005880
2	0.014995	0.104129	0.880875
[infer] elapsed time: 0.014017 s
 

Training in Practice

This section introduces common evaluation metrics for classification, how to prepare the data, and how to define a classification model, then walks quickly through training, evaluating, and running inference with the dialogue emotion recognition model.

Evaluation Metrics

The usual evaluation metrics for classification models are Accuracy, Precision, Recall, and F1.

  • Accuracy = correctly classified samples / total samples.
  • Precision = samples predicted positive and actually positive / samples predicted positive.
  • Recall = samples predicted positive and actually positive / samples labeled positive.
  • F1 = 2 * Precision * Recall / (Precision + Recall), the harmonic mean of Precision and Recall.
  • The higher these metrics, the better the model.

Take binary classification as an example. The class of interest is treated as the positive class and everything else as the negative class. On a test set, the model's predictions fall into four cases, which together form the confusion matrix: true positives (TP), false negatives (FN), false positives (FP), and true negatives (TN).

Accuracy is therefore defined as: $acc = \frac{TP+TN}{TP+FN+FP+TN}$

Precision is defined as: $P = \frac{TP}{TP+FP}$

Recall is defined as: $R = \frac{TP}{TP+FN}$

F1 is defined as: $F1 = \frac{2(P \cdot R)}{P+R}$

For multi-class problems, macro averaging (Macro-averaging) and micro averaging (Micro-averaging) are used. Macro averaging computes each metric per class and then takes the arithmetic mean across classes; micro averaging first averages the confusion-matrix elements to obtain mean TP, FP, TN, and FN, and then computes the metrics from those.

In this case we mainly use macro averaging, as the sketch below illustrates.
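
A minimal NumPy sketch of per-class metrics and the two averaging schemes (illustrative only, not the project's evaluation code; the 3-class confusion matrix is made up):

import numpy as np

def per_class_prf(cm):
    """cm[i, j] = number of samples with true class i predicted as class j."""
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp          # predicted as the class, but wrong
    fn = cm.sum(axis=1) - tp          # belongs to the class, but missed
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    f1 = 2 * p * r / (p + r)
    return p, r, f1

# toy confusion matrix (rows: true class, columns: predicted class)
cm = np.array([[50, 3, 2],
               [4, 40, 6],
               [1, 5, 44]])

p, r, f1 = per_class_prf(cm)
print("macro:", p.mean(), r.mean(), f1.mean())   # average the per-class metrics

# For single-label multi-class problems, micro-averaged P, R, and F1
# all collapse to plain accuracy: pooled TP / total samples.
print("micro:", np.diag(cm).sum() / cm.sum())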

Tip: with imbalanced classes, Accuracy is a deeply flawed metric. Suppose 10 out of 10,000 emails are spam (one in a thousand); a model that labels every email as non-spam scores over 99% accuracy yet is useless. In such cases use Precision, Recall, and F1 instead.

 

Data Preparation

Training a classification model generally requires three datasets: a training set train.txt, a validation set dev.txt, and a test set test.txt.

  • Training set: the data used to fit the model's parameters; the model adjusts itself directly against it to improve classification.
  • Validation set (also called the development set): used during training to monitor the model's state and convergence. It is typically used for hyperparameter tuning: whichever hyperparameter setting performs best on the validation set wins.
  • Test set: used to compute the final evaluation metrics and check that the model generalizes.

Tip: test data should generally not appear in the training set, so that it genuinely verifies the model. If you start from a single labeled file, a simple random split such as the sketch below is enough.
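
A minimal 80/10/10 split sketch (all_data.txt is a placeholder name, not part of this project):

import random

random.seed(0)                          # fix the seed so the split is reproducible
with open("all_data.txt", encoding="utf8") as f:
    lines = f.readlines()
random.shuffle(lines)

n = len(lines)
parts = {"train.txt": lines[:int(0.8 * n)],              # 80% train
         "dev.txt":   lines[int(0.8 * n):int(0.9 * n)],  # 10% dev
         "test.txt":  lines[int(0.9 * n):]}              # 10% test
for name, part in parts.items():
    with open(name, "w", encoding="utf8") as f:
        f.writelines(part)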

Here we provide a labeled, pre-tokenized chatbot dataset with the following layout:

.
├── train.txt   # training set
├── dev.txt     # validation set
├── test.txt    # test set
├── infer.txt   # data to run inference on
├── vocab.txt   # vocabulary

The data has two columns separated by a tab ('\t'). The first column is the emotion label (0 negative, 1 neutral, 2 positive); the second is space-tokenized Chinese text, as in the sample below. Files are UTF-8 encoded. (A minimal reader sketch follows the sample.)

label   text_a
0   谁 骂人 了 ? 我 历来 不 骂人 , 我 骂 的 都 不是 人 , 你 是 人 吗 ?
1   我 有事 等会儿 就 回来 和 你 聊
2   我 见到 你 很 高兴 谢谢 你 帮 我
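
A minimal reader for this format could look like the sketch below. It is an illustration rather than the project's reader.py, and it assumes vocab.txt stores one token per line (the line number being the token id) with an <unk> entry for out-of-vocabulary tokens:

def load_vocab(path):
    """One token per line; the line number is the token id."""
    with open(path, encoding="utf8") as f:
        return {line.rstrip("\n"): i for i, line in enumerate(f)}

def read_examples(path, vocab):
    """Yield (token_ids, label) pairs from a tab-separated file."""
    with open(path, encoding="utf8") as f:
        for line in f:
            label, text = line.rstrip("\n").split("\t")
            if not label.isdigit():        # skip the "label \t text_a" header row
                continue
            ids = [vocab.get(tok, vocab.get("<unk>", 0)) for tok in text.split()]
            yield ids, int(label)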
 

Choosing a Classification Model

Traditional machine-learning classifiers require many hand-crafted features, such as word counts, text length, and part-of-speech tags. With the rise of deep learning, many classification models have proven effective, including BOW, CNN, RNN, and BiLSTM. Their common trait is that no hand-crafted features are needed; representations are instead learned from word embeddings.

Here we take a CNN as the example and show how to define the network with PaddlePaddle; more details on the models are in the Concepts section.

The network configuration is below. The input dict_dim is the vocabulary size, and class_dim is the number of classes, 3 in our case.

In[3]
import paddle
import paddle.fluid as fluid

# Define the cnn model
# class_dim is the number of classes; win_size is the convolution window size
def cnn_net(data, label, dict_dim, emb_dim=128, hid_dim=128, hid_dim2=96, class_dim=3, win_size=3, is_prediction=False):
    """ Conv net """
    # embedding layer
    emb = fluid.layers.embedding(input=data, size=[dict_dim, emb_dim])

    # convolution layer
    conv_3 = fluid.nets.sequence_conv_pool(
        input=emb,
        num_filters=hid_dim,
        filter_size=win_size,
        act="tanh",
        pool_type="max")

    # fully connected layer
    fc_1 = fluid.layers.fc(input=[conv_3], size=hid_dim2)
    # softmax layer
    prediction = fluid.layers.fc(input=[fc_1], size=class_dim, act="softmax")
    if is_prediction:
        return prediction
    cost = fluid.layers.cross_entropy(input=prediction, label=label)
    avg_cost = fluid.layers.mean(x=cost)
    acc = fluid.layers.accuracy(input=prediction, label=label)

    return avg_cost, prediction
 

After defining the network, you still need the training and inference programs, the optimizer, and the data feeder. To keep the walkthrough simple, training, evaluation, and inference are all wrapped in the run.sh script; a stripped-down sketch of what such a script sets up follows.
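
The sketch below is illustrative and written against the Fluid API of that era; the Adam learning rate, epoch count, batch size, and the train_reader built from the read_examples sketch above are assumptions rather than the project's exact settings (only dict_dim and the vocab path match the documented defaults):

import paddle
import paddle.fluid as fluid

# placeholders: a variable-length token-id sequence (LoD level 1) and its label
data = fluid.layers.data(name="words", shape=[1], dtype="int64", lod_level=1)
label = fluid.layers.data(name="label", shape=[1], dtype="int64")

avg_cost, prediction = cnn_net(data, label, dict_dim=240465)
fluid.optimizer.Adam(learning_rate=0.001).minimize(avg_cost)  # assumed lr

place = fluid.CPUPlace()  # or fluid.CUDAPlace(0) on GPU
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())

feeder = fluid.DataFeeder(feed_list=[data, label], place=place)
vocab = load_vocab("/home/aistudio/data/data12605/data/vocab.txt")  # reader sketch above
train_reader = paddle.batch(lambda: read_examples("train.txt", vocab), batch_size=64)

for epoch in range(10):
    for step, batch in enumerate(train_reader()):
        loss, = exe.run(fluid.default_main_program(),
                        feed=feeder.feed(batch),
                        fetch_list=[avg_cost])
        if step % 100 == 0:
            print("epoch %d step %d loss %.6f" % (epoch, step, float(loss)))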

Model Training

With the sample dataset, run the command below to train on the training set (train.txt) and validate on the validation set (dev.txt).

In[6]
# Update the config to select the cnn model
!cd /home/aistudio/work/ && sed -i 's#"model_type":.*$#"model_type":"cnn_net",#' config.json
!cd /home/aistudio/work/ && sed -i 's#"init_checkpoint":.*$#"init_checkpoint":"",#' config.json

# Change the directory where trained checkpoints are saved
!cd /home/aistudio/work/ && sed -i '6c CKPT_PATH=./save_models/cnn' run.sh

# Train the model
!cd /home/aistudio/work/ && sh run.sh train
Num train examples: 9655
Max train steps: 756
W1026 03:20:44.030146   271 device_context.cc:259] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 9.2, Runtime API Version: 9.0
W1026 03:20:44.033725   271 device_context.cc:267] device: 0, cuDNN Version: 7.3.
step: 200, avg loss: 0.350119, avg acc: 0.875000, speed: 172.452770 steps/s
[dev evaluation] avg loss: 0.319571, avg acc: 0.874074, elapsed time: 0.048704 s
step: 400, avg loss: 0.296501, avg acc: 0.890625, speed: 65.212972 steps/s
[dev evaluation] avg loss: 0.230635, avg acc: 0.914815, elapsed time: 0.073480 s
step: 600, avg loss: 0.319913, avg acc: 0.875000, speed: 63.960171 steps/s
[dev evaluation] avg loss: 0.176513, avg acc: 0.938889, elapsed time: 0.054020 s
step: 756, avg loss: 0.168574, avg acc: 0.947368, speed: 70.363845 steps/s
[dev evaluation] avg loss: 0.144825, avg acc: 0.948148, elapsed time: 0.056827 s
 

After training finishes, checkpoint directories named step_xxx are created under ./save_models/cnn.

Model Evaluation

Using the trained checkpoint step_756, run the command below to evaluate the model on the test set (test.txt).

In[7]
# Make sure the model type is CNN
!cd /home/aistudio/work/ && sed -i 's#"model_type":.*$#"model_type":"cnn_net",#' config.json
# Use the cnn model we just trained
!cd /home/aistudio/work/ && sed -i '7c MODEL_PATH=./save_models/cnn/step_756' run.sh 

# Evaluate the model
!cd /home/aistudio/work/ && sh run.sh eval
Load model from ./save_models/cnn/step_756
Final test result:
[test evaluation] accuracy: 0.866924, macro precision: 0.790397, recall: 0.714859, f1: 0.743252, elapsed time: 0.048996 s
 

Model Inference

With a trained model, you can run inference on an unlabeled dataset (infer.txt) to obtain the predicted class and the probability of each label.

In[8]
# View the data to be predicted
!cat /home/aistudio/data/data12605/data/infer.txt

# Use the cnn model we just trained
!cd /home/aistudio/work/ && sed -i 's#"model_type":.*$#"model_type":"cnn_net",#' config.json
!cd /home/aistudio/work/ && sed -i '7c MODEL_PATH=./save_models/cnn/step_756' run.sh 

# Run inference
!cd /home/aistudio/work/ && sh run.sh infer
靠 你 真是 说 废话 
服务 态度 好 差 啊	
你 写 过 黄山 奇石 吗
一个一个 慢慢来
谢谢 服务 很 好 
Load model from ./save_models/cnn/step_756
Final infer result:
0	0.969470	0.000457	0.030072
0	0.434887	0.183004	0.382110
1	0.000057	0.999915	0.000028
1	0.000312	0.999080	0.000607
2	0.164429	0.002141	0.833429
[infer] elapsed time: 0.009522 s
 

Concepts

CNN: Convolutional Neural Network

Convolutional neural networks (CNNs) were first applied to images. They usually consist of several convolution-plus-pooling layers, with fully connected layers appended at the end for classification. Convolution layers extract features from low level to high level; pooling layers downsample, filtering out unimportant high-frequency information. (Downsampling is a common operation in image processing.)

What is convolution?

The green matrix represents the input image, for example a black-and-white picture in which 0 is a black pixel and 1 is a white one. The yellow matrix is the convolution kernel, also called a filter or feature detector. Convolution takes a weighted sum of the local pixels, producing a response for that neighborhood and thereby extracting some feature of the image.

The convolution process slides this yellow matrix to the right and downward with a fixed stride, yielding a feature representation of the whole image.

For example: if the green input matrix represents a face and the yellow matrix represents an eye, convolution amounts to matching the eye against the face. Wherever the yellow matrix lines up with the eye region of the face, the response, and hence the resulting value, is large.

What is pooling?

The convolution step above computes a lot of overlapping, redundant information. Pooling filters the convolved features, keeping the key information and discarding noise; max pooling and mean pooling are the usual choices. The toy example below shows both steps.
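
A pure-NumPy toy example, for intuition only: slide a 2x2 all-ones kernel over a 4x4 binary image with stride 1, then 2x2-max-pool the feature map:

import numpy as np

img = np.array([[1, 1, 0, 0],
                [1, 1, 0, 0],
                [0, 0, 1, 1],
                [0, 0, 1, 1]], dtype=float)
kernel = np.ones((2, 2))               # responds strongly to bright 2x2 patches

# "valid" convolution with stride 1: a 4x4 image and a 2x2 kernel give a 3x3 map
fmap = np.array([[(img[i:i+2, j:j+2] * kernel).sum() for j in range(3)]
                 for i in range(3)])

# 2x2 max pooling with stride 1 keeps the strongest local responses
pooled = np.array([[fmap[i:i+2, j:j+2].max() for j in range(2)]
                   for i in range(2)])
print(fmap)     # peaks where the bright patch matches the kernel
print(pooled)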

Text Convolutional Neural Networks

What we use here is a text convolutional network. The input query is first represented as a sequence of word vectors; convolving over that sequence produces a feature map, and max pooling over time on the feature map yields the feature this kernel extracts for the whole sentence. Finally, concatenating the features from all kernels gives a fixed-length vector representation of the text; for text classification, connecting it to a softmax completes the model.

In practice we use several kernels to process a sentence. Kernels with the same window size are stacked into one matrix so the computation runs more efficiently, and kernels with different window sizes can be used side by side; the shape sketch below illustrates this.
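
The shapes are easy to follow in a small NumPy sketch (illustrative only; the window sizes 1/2/3 mirror the defaults in the textcnn_net code later in this article, while three filters per window size is a toy choice, the real model uses 128). Each filter produces one response per window position, max pooling over time keeps one number per filter, and concatenating across filters and window sizes gives a sentence vector whose length does not depend on sentence length:

import numpy as np

rng = np.random.default_rng(0)
seq_len, emb_dim, num_filters = 5, 4, 3
emb = rng.normal(size=(seq_len, emb_dim))          # the word-vector sequence

sent_vec = []
for win in (1, 2, 3):                              # one group of filters per window size
    W = rng.normal(size=(num_filters, win * emb_dim))
    # feature map: one response per filter at each window position
    windows = np.stack([emb[i:i + win].ravel() for i in range(seq_len - win + 1)])
    fmap = np.tanh(windows @ W.T)                  # shape: (positions, num_filters)
    sent_vec.append(fmap.max(axis=0))              # max pooling over time
sent_vec = np.concatenate(sent_vec)
print(sent_vec.shape)                              # (9,): 3 window sizes x 3 filters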

 

Advanced Usage

For better results we generally either use a more expressive model or finetune. Below we first experiment with TextCNN, which uses convolution kernels of several window sizes, and then show how to finetune from a pretrained model.

TextCNN Experiment

A bag-of-words (BOW) model ignores word order, grammar, and syntax, which limits it. For typical short-text classification, the text convolutional network described above is therefore a common choice: it maps text into a low-dimensional semantic space while taking word order into account, and it learns the text representation and classifier end to end, giving a clear performance gain over traditional methods.

In[9]
# Define the textcnn model
# class_dim is the number of classes; win_sizes are the convolution window sizes
def textcnn_net(data, label, dict_dim, emb_dim=128, hid_dim=128, hid_dim2=96, class_dim=3, win_sizes=None, is_prediction=False):
    """ Textcnn_net """
    if win_sizes is None:
        win_sizes = [1, 2, 3]

    # embedding layer
    emb = fluid.layers.embedding(input=data, size=[dict_dim, emb_dim])

    # convolution layer
    convs = []
    for win_size in win_sizes:
        conv_h = fluid.nets.sequence_conv_pool(
            input=emb,
            num_filters=hid_dim,
            filter_size=win_size,
            act="tanh",
            pool_type="max")
        convs.append(conv_h)
    convs_out = fluid.layers.concat(input=convs, axis=1)

    # fully connected layer
    fc_1 = fluid.layers.fc(input=[convs_out], size=hid_dim2, act="tanh")
    # softmax layer
    prediction = fluid.layers.fc(input=[fc_1], size=class_dim, act="softmax")
    if is_prediction:
        return prediction

    cost = fluid.layers.cross_entropy(input=prediction, label=label)
    avg_cost = fluid.layers.mean(x=cost)
    acc = fluid.layers.accuracy(input=prediction, label=label)
    return avg_cost, prediction
 

Here we update the configuration (model type, initial checkpoint, and checkpoint directory), then train and evaluate the model.

In[10]
# Switch the model to TextCNN
!cd /home/aistudio/work/ && sed -i 's#"model_type":.*$#"model_type":"textcnn_net",#' config.json
!cd /home/aistudio/work/ && sed -i 's#"init_checkpoint":.*$#"init_checkpoint":"",#' config.json
# Change the checkpoint directory
!cd /home/aistudio/work/ && sed -i '6c CKPT_PATH=./save_models/textcnn' run.sh

# Train the model
!cd /home/aistudio/work/ && sh run.sh train
Num train examples: 9655
Max train steps: 756
W1026 03:21:31.529520   339 device_context.cc:259] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 9.2, Runtime API Version: 9.0
W1026 03:21:31.533326   339 device_context.cc:267] device: 0, cuDNN Version: 7.3.
step: 200, avg loss: 0.212591, avg acc: 0.921875, speed: 104.244460 steps/s
[dev evaluation] avg loss: 0.284517, avg acc: 0.897222, elapsed time: 0.069697 s
step: 400, avg loss: 0.367220, avg acc: 0.812500, speed: 53.107965 steps/s
[dev evaluation] avg loss: 0.195091, avg acc: 0.932407, elapsed time: 0.080681 s
step: 600, avg loss: 0.242331, avg acc: 0.921875, speed: 52.311775 steps/s
[dev evaluation] avg loss: 0.139668, avg acc: 0.955556, elapsed time: 0.082921 s
step: 756, avg loss: 0.052051, avg acc: 1.000000, speed: 58.723846 steps/s
[dev evaluation] avg loss: 0.111066, avg acc: 0.962963, elapsed time: 0.082778 s
In[11]
# Use the textcnn model trained above
!cd /home/aistudio/work/ && sed -i 's#"model_type":.*$#"model_type":"textcnn_net",#' config.json
!cd /home/aistudio/work/ && sed -i '7c MODEL_PATH=./save_models/textcnn/step_756' run.sh

# Evaluate the model
!cd /home/aistudio/work/ && sh run.sh eval
Load model from ./save_models/textcnn/step_756
Final test result:
[test evaluation] accuracy: 0.878496, macro precision: 0.797653, recall: 0.754163, f1: 0.772353, elapsed time: 0.082577 s
 

Comparison of evaluation results

Model / Metric      Accuracy   Precision   Recall   F1
CNN                 0.8717     0.8110      0.7178   0.7484
TextCNN             0.8784     0.7970      0.7786   0.7873
 

Finetuning a Pretrained TextCNN

You can load the pretrained model for finetuning by setting the init_checkpoint parameter (edited in config.json by the commands below).

In[13]
!cd /home/aistudio/work/ && sed -i 's#"model_type":.*$#"model_type":"textcnn_net",#' config.json
# Use the pretrained textcnn model
!cd /home/aistudio/work/ && sed -i 's#"init_checkpoint":.*$#"init_checkpoint":"./textcnn",#' config.json
# Lower the learning rate and change the checkpoint directory
!cd /home/aistudio/work/ && sed -i 's#"lr":.*$#"lr":0.0001,#' config.json
!cd /home/aistudio/work/ && sed -i '6c CKPT_PATH=./save_models/textcnn_finetune' run.sh

# Train the model
!cd /home/aistudio/work/ && sh run.sh train
Num train examples: 9655
Max train steps: 756
W1026 03:23:05.350819   418 device_context.cc:259] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 9.2, Runtime API Version: 9.0
W1026 03:23:05.354846   418 device_context.cc:267] device: 0, cuDNN Version: 7.3.
Load model from ./textcnn
step: 200, avg loss: 0.184450, avg acc: 0.953125, speed: 103.065345 steps/s
[dev evaluation] avg loss: 0.170050, avg acc: 0.937037, elapsed time: 0.074731 s
step: 400, avg loss: 0.166738, avg acc: 0.921875, speed: 47.727028 steps/s
[dev evaluation] avg loss: 0.132444, avg acc: 0.954630, elapsed time: 0.081669 s
step: 600, avg loss: 0.076735, avg acc: 0.984375, speed: 53.387034 steps/s
[dev evaluation] avg loss: 0.103549, avg acc: 0.963889, elapsed time: 0.081754 s
step: 756, avg loss: 0.061593, avg acc: 0.947368, speed: 57.990719 steps/s
[dev evaluation] avg loss: 0.086959, avg acc: 0.971296, elapsed time: 0.080616 s
In[14]
# Update the config to use the model trained above
!cd /home/aistudio/work/ && sed -i 's#"model_type":.*$#"model_type":"textcnn_net",#' config.json
!cd /home/aistudio/work/ && sed -i '7c MODEL_PATH=./save_models/textcnn_finetune/step_756' run.sh
# Evaluate the model
!cd /home/aistudio/work/ && sh run.sh eval
Load model from ./save_models/textcnn_finetune/step_756
Final test result:
[test evaluation] accuracy: 0.893925, macro precision: 0.829668, recall: 0.812613, f1: 0.820883, elapsed time: 0.083944 s
 

Comparison of evaluation results

Model / Metric      Accuracy   Precision   Recall   F1
CNN                 0.8717     0.8110      0.7178   0.7484
TextCNN             0.8784     0.7970      0.7786   0.7873
TextCNN-finetune    0.8977     0.8315      0.8240   0.8277

As the table shows, finetuning from a pretrained model yields better classification results.

 

Finetuning with ERNIE

Here we first download the pretrained ERNIE model, then run the run_ernie.sh script, which loads ERNIE and finetunes it.

In[3]
!cd /home/aistudio/work/ && mkdir -p pretrain_models/ernie
%cd /home/aistudio/work/pretrain_models/ernie
# Download the pretrained ernie model
!wget --no-check-certificate https://baidu-nlp.bj.bcebos.com/ERNIE_stable-1.0.1.tar.gz -O ERNIE_stable-1.0.1.tar.gz
!tar -zxvf ERNIE_stable-1.0.1.tar.gz && rm ERNIE_stable-1.0.1.tar.gz
/home/aistudio/work/pretrain_models/ernie
--2020-02-27 20:17:41--  https://baidu-nlp.bj.bcebos.com/ERNIE_stable-1.0.1.tar.gz
Resolving baidu-nlp.bj.bcebos.com (baidu-nlp.bj.bcebos.com)... 182.61.200.195, 182.61.200.229
Connecting to baidu-nlp.bj.bcebos.com (baidu-nlp.bj.bcebos.com)|182.61.200.195|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 374178867 (357M) [application/x-gzip]
Saving to: ‘ERNIE_stable-1.0.1.tar.gz’

ERNIE_stable-1.0.1. 100%[===================>] 356.84M  61.3MB/s    in 8.9s    

2020-02-27 20:17:50 (40.1 MB/s) - ‘ERNIE_stable-1.0.1.tar.gz’ saved [374178867/374178867]

params/
params/encoder_layer_5_multi_head_att_key_fc.w_0
params/encoder_layer_0_post_ffn_layer_norm_scale
params/encoder_layer_0_post_att_layer_norm_bias
params/encoder_layer_0_multi_head_att_value_fc.w_0
params/sent_embedding
params/encoder_layer_11_multi_head_att_query_fc.w_0
params/encoder_layer_8_ffn_fc_0.w_0
params/encoder_layer_5_ffn_fc_1.w_0
params/encoder_layer_6_ffn_fc_1.b_0
params/encoder_layer_5_post_ffn_layer_norm_bias
params/encoder_layer_10_multi_head_att_output_fc.b_0
params/encoder_layer_4_ffn_fc_0.w_0
params/encoder_layer_4_post_ffn_layer_norm_bias
params/encoder_layer_3_ffn_fc_1.b_0
params/encoder_layer_0_multi_head_att_value_fc.b_0
params/encoder_layer_11_post_att_layer_norm_bias
params/encoder_layer_3_multi_head_att_key_fc.w_0
params/encoder_layer_10_multi_head_att_output_fc.w_0
params/encoder_layer_5_ffn_fc_1.b_0
params/encoder_layer_10_multi_head_att_value_fc.w_0
params/encoder_layer_6_multi_head_att_query_fc.w_0
params/encoder_layer_8_post_att_layer_norm_bias
params/encoder_layer_2_multi_head_att_output_fc.w_0
params/encoder_layer_1_multi_head_att_key_fc.w_0
params/encoder_layer_4_multi_head_att_key_fc.w_0
params/encoder_layer_6_post_ffn_layer_norm_bias
params/encoder_layer_9_post_ffn_layer_norm_bias
params/encoder_layer_11_post_ffn_layer_norm_scale
params/encoder_layer_6_multi_head_att_value_fc.b_0
params/encoder_layer_9_ffn_fc_0.w_0
params/encoder_layer_2_post_ffn_layer_norm_scale
params/encoder_layer_1_multi_head_att_query_fc.w_0
params/encoder_layer_1_post_ffn_layer_norm_bias
params/next_sent_3cls_fc.w_0
params/encoder_layer_9_multi_head_att_key_fc.w_0
params/encoder_layer_7_multi_head_att_value_fc.w_0
params/encoder_layer_10_ffn_fc_0.b_0
params/encoder_layer_2_multi_head_att_value_fc.w_0
params/encoder_layer_8_post_ffn_layer_norm_scale
params/encoder_layer_3_multi_head_att_output_fc.w_0
params/encoder_layer_2_multi_head_att_query_fc.w_0
params/encoder_layer_11_multi_head_att_query_fc.b_0
params/encoder_layer_1_ffn_fc_0.w_0
params/encoder_layer_8_multi_head_att_value_fc.w_0
params/word_embedding
params/mask_lm_trans_layer_norm_bias
params/encoder_layer_8_multi_head_att_query_fc.w_0
params/encoder_layer_1_multi_head_att_query_fc.b_0
params/encoder_layer_5_ffn_fc_0.b_0
params/encoder_layer_3_multi_head_att_key_fc.b_0
params/encoder_layer_7_ffn_fc_1.b_0
params/encoder_layer_2_post_att_layer_norm_bias
params/encoder_layer_8_post_att_layer_norm_scale
params/encoder_layer_2_ffn_fc_1.b_0
params/encoder_layer_11_post_ffn_layer_norm_bias
params/encoder_layer_6_multi_head_att_key_fc.b_0
params/mask_lm_trans_layer_norm_scale
params/encoder_layer_11_multi_head_att_key_fc.b_0
params/encoder_layer_5_post_ffn_layer_norm_scale
params/encoder_layer_0_ffn_fc_0.b_0
params/encoder_layer_9_multi_head_att_key_fc.b_0
params/encoder_layer_9_post_att_layer_norm_scale
params/encoder_layer_7_post_ffn_layer_norm_scale
params/encoder_layer_4_ffn_fc_0.b_0
params/encoder_layer_9_multi_head_att_value_fc.w_0
params/pos_embedding
params/mask_lm_trans_fc.w_0
params/encoder_layer_4_multi_head_att_value_fc.b_0
params/encoder_layer_4_multi_head_att_query_fc.w_0
params/encoder_layer_5_multi_head_att_value_fc.w_0
params/encoder_layer_3_ffn_fc_1.w_0
params/encoder_layer_9_post_att_layer_norm_bias
params/accuracy_0.tmp_0
params/encoder_layer_3_post_att_layer_norm_bias
params/encoder_layer_7_multi_head_att_output_fc.b_0
params/encoder_layer_7_ffn_fc_1.w_0
params/encoder_layer_11_multi_head_att_output_fc.b_0
params/encoder_layer_0_multi_head_att_key_fc.w_0
params/encoder_layer_6_ffn_fc_0.w_0
params/encoder_layer_5_multi_head_att_query_fc.w_0
params/encoder_layer_10_post_att_layer_norm_scale
params/encoder_layer_2_ffn_fc_1.w_0
params/encoder_layer_6_multi_head_att_key_fc.w_0
params/encoder_layer_9_ffn_fc_1.w_0
params/encoder_layer_10_ffn_fc_0.w_0
params/pre_encoder_layer_norm_bias
params/encoder_layer_1_ffn_fc_0.b_0
params/encoder_layer_1_post_att_layer_norm_scale
params/encoder_layer_9_post_ffn_layer_norm_scale
params/encoder_layer_9_multi_head_att_query_fc.w_0
params/encoder_layer_2_multi_head_att_query_fc.b_0
params/tmp_51
params/encoder_layer_11_ffn_fc_1.w_0
params/encoder_layer_7_multi_head_att_query_fc.b_0
params/encoder_layer_11_multi_head_att_key_fc.w_0
params/encoder_layer_8_multi_head_att_key_fc.w_0
params/encoder_layer_5_multi_head_att_value_fc.b_0
params/encoder_layer_6_post_att_layer_norm_scale
params/encoder_layer_5_ffn_fc_0.w_0
params/encoder_layer_4_multi_head_att_query_fc.b_0
params/encoder_layer_10_post_att_layer_norm_bias
params/encoder_layer_3_post_att_layer_norm_scale
params/encoder_layer_6_ffn_fc_1.w_0
params/mask_lm_out_fc.b_0
params/encoder_layer_3_ffn_fc_0.w_0
params/encoder_layer_6_ffn_fc_0.b_0
params/encoder_layer_1_post_att_layer_norm_bias
params/encoder_layer_6_multi_head_att_query_fc.b_0
params/encoder_layer_3_ffn_fc_0.b_0
params/encoder_layer_2_post_att_layer_norm_scale
params/encoder_layer_7_ffn_fc_0.w_0
params/encoder_layer_8_ffn_fc_1.w_0
params/encoder_layer_11_multi_head_att_output_fc.w_0
params/encoder_layer_9_multi_head_att_value_fc.b_0
params/encoder_layer_3_multi_head_att_output_fc.b_0
params/encoder_layer_9_multi_head_att_output_fc.w_0
params/encoder_layer_4_multi_head_att_value_fc.w_0
params/encoder_layer_4_ffn_fc_1.w_0
params/encoder_layer_5_post_att_layer_norm_scale
params/encoder_layer_3_post_ffn_layer_norm_bias
params/encoder_layer_2_multi_head_att_value_fc.b_0
params/encoder_layer_5_multi_head_att_key_fc.b_0
params/encoder_layer_0_ffn_fc_1.w_0
params/encoder_layer_0_post_ffn_layer_norm_bias
params/encoder_layer_11_ffn_fc_0.b_0
params/pooled_fc.b_0
params/encoder_layer_2_multi_head_att_output_fc.b_0
params/encoder_layer_8_multi_head_att_value_fc.b_0
params/encoder_layer_5_multi_head_att_output_fc.w_0
params/encoder_layer_1_ffn_fc_1.w_0
params/encoder_layer_2_ffn_fc_0.b_0
params/encoder_layer_5_multi_head_att_output_fc.b_0
params/encoder_layer_3_multi_head_att_query_fc.w_0
params/encoder_layer_0_ffn_fc_1.b_0
params/encoder_layer_7_multi_head_att_key_fc.w_0
params/encoder_layer_1_multi_head_att_output_fc.w_0
params/encoder_layer_1_multi_head_att_output_fc.b_0
params/encoder_layer_6_post_ffn_layer_norm_scale
params/encoder_layer_2_multi_head_att_key_fc.b_0
params/encoder_layer_7_ffn_fc_0.b_0
params/encoder_layer_11_ffn_fc_0.w_0
params/encoder_layer_1_ffn_fc_1.b_0
params/encoder_layer_10_multi_head_att_key_fc.w_0
params/reduce_mean_0.tmp_0
params/encoder_layer_7_post_ffn_layer_norm_bias
params/encoder_layer_10_multi_head_att_value_fc.b_0
params/@LR_DECAY_COUNTER@
params/encoder_layer_8_multi_head_att_key_fc.b_0
params/encoder_layer_4_post_ffn_layer_norm_scale
params/encoder_layer_10_post_ffn_layer_norm_bias
params/encoder_layer_9_ffn_fc_1.b_0
params/encoder_layer_3_multi_head_att_value_fc.b_0
params/encoder_layer_6_multi_head_att_value_fc.w_0
params/encoder_layer_8_multi_head_att_query_fc.b_0
params/encoder_layer_8_ffn_fc_1.b_0
params/encoder_layer_4_post_att_layer_norm_bias
params/encoder_layer_0_post_att_layer_norm_scale
params/encoder_layer_0_multi_head_att_query_fc.w_0
params/encoder_layer_0_multi_head_att_output_fc.b_0
params/encoder_layer_4_multi_head_att_output_fc.b_0
params/encoder_layer_8_ffn_fc_0.b_0
params/pre_encoder_layer_norm_scale
params/encoder_layer_11_ffn_fc_1.b_0
params/encoder_layer_8_multi_head_att_output_fc.b_0
params/encoder_layer_10_multi_head_att_query_fc.b_0
params/encoder_layer_1_multi_head_att_key_fc.b_0
params/encoder_layer_6_multi_head_att_output_fc.b_0
params/mask_lm_trans_fc.b_0
params/encoder_layer_9_multi_head_att_output_fc.b_0
params/encoder_layer_7_multi_head_att_value_fc.b_0
params/encoder_layer_10_multi_head_att_key_fc.b_0
params/encoder_layer_8_multi_head_att_output_fc.w_0
params/encoder_layer_2_multi_head_att_key_fc.w_0
params/encoder_layer_10_multi_head_att_query_fc.w_0
params/encoder_layer_0_multi_head_att_query_fc.b_0
params/encoder_layer_11_multi_head_att_value_fc.w_0
params/pooled_fc.w_0
params/encoder_layer_3_multi_head_att_value_fc.w_0
params/encoder_layer_0_multi_head_att_key_fc.b_0
params/encoder_layer_3_multi_head_att_query_fc.b_0
params/encoder_layer_11_multi_head_att_value_fc.b_0
params/next_sent_3cls_fc.b_0
params/encoder_layer_2_ffn_fc_0.w_0
params/encoder_layer_1_multi_head_att_value_fc.w_0
params/encoder_layer_7_multi_head_att_query_fc.w_0
params/encoder_layer_3_post_ffn_layer_norm_scale
params/encoder_layer_1_post_ffn_layer_norm_scale
params/encoder_layer_6_post_att_layer_norm_bias
params/encoder_layer_4_multi_head_att_output_fc.w_0
params/encoder_layer_6_multi_head_att_output_fc.w_0
params/encoder_layer_7_multi_head_att_output_fc.w_0
params/encoder_layer_10_ffn_fc_1.b_0
params/encoder_layer_11_post_att_layer_norm_scale
params/encoder_layer_4_post_att_layer_norm_scale
params/encoder_layer_5_multi_head_att_query_fc.b_0
params/encoder_layer_4_multi_head_att_key_fc.b_0
params/encoder_layer_4_ffn_fc_1.b_0
params/encoder_layer_0_ffn_fc_0.w_0
params/encoder_layer_7_multi_head_att_key_fc.b_0
params/encoder_layer_5_post_att_layer_norm_bias
params/encoder_layer_9_ffn_fc_0.b_0
params/encoder_layer_1_multi_head_att_value_fc.b_0
params/encoder_layer_10_post_ffn_layer_norm_scale
params/encoder_layer_2_post_ffn_layer_norm_bias
params/encoder_layer_7_post_att_layer_norm_bias
params/encoder_layer_10_ffn_fc_1.w_0
params/encoder_layer_0_multi_head_att_output_fc.w_0
params/encoder_layer_9_multi_head_att_query_fc.b_0
params/encoder_layer_8_post_ffn_layer_norm_bias
params/encoder_layer_7_post_att_layer_norm_scale
vocab.txt
ernie_config.json
In[3]
# Finetune training from the ERNIE model
!cd /home/aistudio/work/ && sh run_ernie.sh train
-----------  Configuration Arguments -----------
batch_size: 32
data_dir: None
dev_set: /home/aistudio/data/data12605/data/dev.txt
do_infer: False
do_lower_case: True
do_train: True
do_val: True
epoch: 3
ernie_config_path: ./pretrain_models/ernie/ernie_config.json
infer_set: None
init_checkpoint: ./pretrain_models/ernie/params
label_map_config: None
lr: 2e-05
max_seq_len: 64
num_labels: 3
random_seed: 1
save_checkpoint_dir: ./save_models/ernie
save_steps: 500
skip_steps: 50
task_name: None
test_set: None
train_set: /home/aistudio/data/data12605/data/train.txt
use_cuda: True
use_paddle_hub: False
validation_steps: 50
verbose: True
vocab_path: ./pretrain_models/ernie/vocab.txt
------------------------------------------------
attention_probs_dropout_prob: 0.1
hidden_act: relu
hidden_dropout_prob: 0.1
hidden_size: 768
initializer_range: 0.02
max_position_embeddings: 513
num_attention_heads: 12
num_hidden_layers: 12
type_vocab_size: 2
vocab_size: 18000
------------------------------------------------
Device count: 1
Num train examples: 9655
Max train steps: 906
Traceback (most recent call last):
  File "run_ernie_classifier.py", line 433, in <module>
    main(args)
  File "run_ernie_classifier.py", line 247, in main
    pyreader_name='train_reader')
  File "/home/aistudio/work/ernie_code/ernie.py", line 32, in ernie_pyreader
    src_ids = fluid.data(name='1', shape=[-1, args.max_seq_len, 1], dtype='int64')
AttributeError: module 'paddle.fluid' has no attribute 'data'
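
(Note: the traceback above reflects a PaddlePaddle version mismatch rather than a problem with the approach: fluid.data was introduced in PaddlePaddle 1.6, so older runtimes raise this AttributeError; upgrading PaddlePaddle, or falling back to fluid.layers.data, should get past it. The evaluation below loads the step_907 checkpoint, presumably saved by a run in a compatible environment.)
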
In[19]
# Evaluate the model
!cd /home/aistudio/work/ && sh run_ernie.sh eval
-----------  Configuration Arguments -----------
batch_size: 32
data_dir: None
dev_set: None
do_infer: False
do_lower_case: True
do_train: False
do_val: True
epoch: 10
ernie_config_path: ./pretrain_models/ernie/ernie_config.json
infer_set: None
init_checkpoint: ./save_models/ernie/step_907
label_map_config: None
lr: 0.002
max_seq_len: 64
num_labels: 3
random_seed: 0
save_checkpoint_dir: checkpoints
save_steps: 10000
skip_steps: 10
task_name: None
test_set: /home/aistudio/data/data12605/data/test.txt
train_set: None
use_cuda: True
use_paddle_hub: False
validation_steps: 1000
verbose: True
vocab_path: ./pretrain_models/ernie/vocab.txt
------------------------------------------------
attention_probs_dropout_prob: 0.1
hidden_act: relu
hidden_dropout_prob: 0.1
hidden_size: 768
initializer_range: 0.02
max_position_embeddings: 513
num_attention_heads: 12
num_hidden_layers: 12
type_vocab_size: 2
vocab_size: 18000
------------------------------------------------
W1026 03:28:54.923435   539 device_context.cc:259] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 9.2, Runtime API Version: 9.0
W1026 03:28:54.927536   539 device_context.cc:267] device: 0, cuDNN Version: 7.3.
Load model from ./save_models/ernie/step_907
Final validation result:
[test evaluation] accuracy: 0.908390, macro precision: 0.840080, recall: 0.875447, f1: 0.856447, elapsed time: 1.859051 s
 

Comparison of evaluation results

Model / Metric      Accuracy   Precision   Recall   F1
CNN                 0.8717     0.8110      0.7178   0.7484
TextCNN             0.8784     0.7970      0.7786   0.7873
TextCNN-finetune    0.8977     0.8315      0.8240   0.8277
ERNIE-finetune      0.9054     0.8424      0.8588   0.8491

As the table shows, finetuning from ERNIE brings an even larger improvement.

 

Supplementary Material

The following supplementary material helps you understand the project in more detail.

The project's main code layout and what each file does:

.
├── config.json             # configuration file
├── config.py               # config loading interface
├── inference_model.py      # script that saves an inference_model, usable for online deployment
├── nets.py                 # network architectures
├── reader.py               # data reading interface
├── run_classifier.py       # main entry point: training, inference, evaluation
├── run.sh                  # shell script driving training, inference, evaluation
├── tokenizer/              # tokenization tool
├── utils.py                # miscellaneous utility functions

The command below lists all parameters and their descriptions. If you want to inspect the parameter values used in the steps above, uncomment the code on line 418 of run_classifier.py (delete the #) and rerun from the beginning.

In[20]
# List all parameters and their descriptions
!cd /home/aistudio/work/ && python run_classifier.py -h
usage: run_classifier.py [-h] [--do_train DO_TRAIN] [--do_val DO_VAL]
                         [--do_infer DO_INFER]
                         [--do_save_inference_model DO_SAVE_INFERENCE_MODEL]
                         [--model_type {bow_net,cnn_net,lstm_net,bilstm_net,gru_net,textcnn_net}]
                         [--num_labels NUM_LABELS]
                         [--init_checkpoint INIT_CHECKPOINT]
                         [--save_checkpoint_dir SAVE_CHECKPOINT_DIR]
                         [--inference_model_dir INFERENCE_MODEL_DIR]
                         [--data_dir DATA_DIR] [--vocab_path VOCAB_PATH]
                         [--vocab_size VOCAB_SIZE] [--lr LR] [--epoch EPOCH]
                         [--use_cuda USE_CUDA] [--batch_size BATCH_SIZE]
                         [--skip_steps SKIP_STEPS] [--save_steps SAVE_STEPS]
                         [--validation_steps VALIDATION_STEPS]
                         [--random_seed RANDOM_SEED] [--verbose VERBOSE]
                         [--task_name TASK_NAME] [--enable_ce ENABLE_CE]

optional arguments:
  -h, --help            show this help message and exit

Running type options:

  --do_train DO_TRAIN   Whether to perform training. Default: False.
  --do_val DO_VAL       Whether to perform evaluation. Default: False.
  --do_infer DO_INFER   Whether to perform inference. Default: False.
  --do_save_inference_model DO_SAVE_INFERENCE_MODEL
                        Whether to perform save inference model. Default:
                        False.

Model config options:

  --model_type {bow_net,cnn_net,lstm_net,bilstm_net,gru_net,textcnn_net}
                        Model type to run the task. Default: textcnn_net.
  --num_labels NUM_LABELS
                        Number of labels for classification Default: 3.
  --init_checkpoint INIT_CHECKPOINT
                        Init checkpoint to resume training from. Default:
                        ./textcnn.
  --save_checkpoint_dir SAVE_CHECKPOINT_DIR
                        Directory path to save checkpoints Default: .
  --inference_model_dir INFERENCE_MODEL_DIR
                        Directory path to save inference model Default:
                        ./inference_model.

Data config options:

  --data_dir DATA_DIR   Directory path to training data. Default:
                        /home/aistudio/data/data12605/data.
  --vocab_path VOCAB_PATH
                        Vocabulary path. Default:
                        /home/aistudio/data/data12605/data/vocab.txt.
  --vocab_size VOCAB_SIZE
                        Vocabulary size. Default: 240465.

Training config options:

  --lr LR               The Learning rate value for training. Default: 0.0001.
  --epoch EPOCH         Number of epoches for training. Default: 10.
  --use_cuda USE_CUDA   If set, use GPU for training. Default: False.
  --batch_size BATCH_SIZE
                        Total examples' number in batch for training. Default:
                        64.
  --skip_steps SKIP_STEPS
                        The steps interval to print loss. Default: 10.
  --save_steps SAVE_STEPS
                        The steps interval to save checkpoints. Default: 1000.
  --validation_steps VALIDATION_STEPS
                        The steps interval to evaluate model performance.
                        Default: 1000.
  --random_seed RANDOM_SEED
                        Random seed. Default: 0.

Logging options:

  --verbose VERBOSE     Whether to output verbose log Default: False.
  --task_name TASK_NAME
                        The name of task to perform emotion detection Default:
                        emotion_detection.
  --enable_ce ENABLE_CE
                        If set, run the task with continuous evaluation logs.
                        Default: False.

Customize options:
 

Tokenization preprocessing: if you need to tokenize query data, you can use the tokenizer tool; the exact commands are below.

In[21]
# Unzip the tokenizer package and tokenize the test data
!cd /home/aistudio/work/ && unzip -qo tokenizer.zip
!cd /home/aistudio/work/tokenizer && python tokenizer.py --test_data_dir test.txt.utf8 > new_query.txt

# View the tokenization results
!cd /home/aistudio/work/tokenizer && cat new_query.txt
我 是 中国 人
百度 是 一家 人工智能 公司
国家博物馆 将 闭关
巴萨 5 - 1 晋级 欧冠 八强
c罗 帽子戏法 , 尤文 实现 史诗级 逆转

Click the link to try this project hands-on on AI Studio: https://aistudio.baidu.com/aistudio/projectdetail/121630

Download and installation commands

## CPU installation command
pip install -f https://paddlepaddle.org.cn/pip/oschina/cpu paddlepaddle

## GPU installation command
pip install -f https://paddlepaddle.org.cn/pip/oschina/gpu paddlepaddle-gpu

>> Visit the PaddlePaddle website to learn more.
