Note: this series is only meant to help you quickly understand, learn, and independently use the relevant frameworks for deep learning research; please supplement the theory on your own. The official classic examples of every framework are very well written and well worth studying. It is fair to say that once you fully understand an official example, modifying it is enough to solve most common tasks of the same kind.
Abstract: [Learning Keras from the Official Examples] Text classification from scratch
This example shows how to do text classification with Keras starting from raw, unprocessed data. The dataset is IMDB movie review sentiment classification, and a TextVectorization layer is used for tokenization and indexing.
Setup: import the required packages
import tensorflow as tf
import numpy as np

Load the data: IMDB movie review sentiment classification
You can download the dataset locally from this link:
https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!curl -O https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -xf aclImdb_v1.tar.gz
You can use tf.keras.preprocessing.text_dataset_from_directory to read the data from folders laid out as follows:
train/
...pos/
......text_1.txt
......text_2.txt
...neg/
......text_1.txt
......text_2.txt
You can split the data into a training set and a validation set by passing validation_split together with subset="training" or subset="validation":
batch_size = 32

raw_train_ds = tf.keras.preprocessing.text_dataset_from_directory(
    "../input/aclImdb/train",
    batch_size=batch_size,
    validation_split=0.2,
    subset="training",
    seed=1337,
)
raw_val_ds = tf.keras.preprocessing.text_dataset_from_directory(
    "../input/aclImdb/train",
    batch_size=batch_size,
    validation_split=0.2,
    subset="validation",
    seed=1337,
)
raw_test_ds = tf.keras.preprocessing.text_dataset_from_directory(
    "../input/aclImdb/test", batch_size=batch_size
)

print(
    "Number of batches in raw_train_ds: %d"
    % tf.data.experimental.cardinality(raw_train_ds)
)
print(
    "Number of batches in raw_val_ds: %d"
    % tf.data.experimental.cardinality(raw_val_ds)
)
print(
    "Number of batches in raw_test_ds: %d"
    % tf.data.experimental.cardinality(raw_test_ds)
)
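As a side check (not part of the original example), the dataset returned by text_dataset_from_directory should expose a class_names attribute that tells you how folder names map to integer labels; a minimal sketch:

# Folder names are sorted alphabetically, so here "neg" should map to label 0 and "pos" to label 1.
print(raw_train_ds.class_names)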
Take a look at a few examples from the dataset:
# It's important to take a look at your raw data to ensure your normalization
# and tokenization will work as expected. We can do that by taking a few
# examples from the training set and looking at them.
# This is one of the places where eager execution shines:
# we can just evaluate these tensors using .numpy()
# instead of needing to evaluate them in a Session/Graph context.
for text_batch, label_batch in raw_train_ds.take(1):
    for i in range(2):
        print(text_batch.numpy()[i])
        print(label_batch.numpy()[i])
From the sample output we can see that the text contains some HTML tags. From a human point of view these carry no information for sentiment classification, and although in competitions or real applications such seemingly noisy tokens can sometimes be useful, in this example we will remove them.
Data preprocessing: in particular, remove the <br /> tags.
Process the text data:

from tensorflow.keras.layers.experimental.preprocessing import TextVectorization
import string
import re

# Having looked at our data above, we see that the raw text contains HTML break
# tags of the form '<br />'. These tags will not be removed by the default
# standardizer (which doesn't strip HTML). Because of this, we will need to
# create a custom standardization function.
def custom_standardization(input_data):
    lowercase = tf.strings.lower(input_data)
    stripped_html = tf.strings.regex_replace(lowercase, "<br />", " ")
    return tf.strings.regex_replace(
        stripped_html, "[%s]" % re.escape(string.punctuation), ""
    )

# Model constants.
max_features = 20000
embedding_dim = 128
sequence_length = 500

# Now that we have our custom standardization, we can instantiate our text
# vectorization layer. We are using this layer to normalize, split, and map
# strings to integers, so we set our 'output_mode' to 'int'.
# Note that we're using the default split function,
# and the custom standardization defined above.
# We also set an explicit maximum sequence length, since the CNNs later in our
# model won't support ragged sequences.
vectorize_layer = TextVectorization(
    standardize=custom_standardization,
    max_tokens=max_features,
    output_mode="int",
    output_sequence_length=sequence_length,
)

# Now that the vocab layer has been created, call `adapt` on a text-only
# dataset to create the vocabulary. You don't have to batch, but for very large
# datasets this means you're not keeping spare copies of the dataset in memory.

# Let's make a text-only dataset (no labels):
text_ds = raw_train_ds.map(lambda x, y: x)  # keep only the text x from each (x, y) pair
# Let's call `adapt`:
vectorize_layer.adapt(text_ds)
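To sanity-check the preprocessing, you can apply custom_standardization and the adapted vectorize_layer to a sample string and look at the output; a minimal sketch (the review text is made up, not from the dataset):

sample = tf.constant(["This movie was great!<br />I loved it."])
# Lowercased, '<br />' replaced by a space, punctuation stripped:
print(custom_standardization(sample).numpy())
# Token indices, padded/truncated to sequence_length (500); the batch dim is
# expanded the same way vectorize_text does below:
print(vectorize_layer(tf.expand_dims(sample, -1)).numpy()[0][:10])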
Two options to vectorize the data

There are two ways to vectorize the data. Option 1 makes the TextVectorization layer part of the model itself, so the model takes raw strings as input (as in the snippet below). Option 2 applies the layer to the text dataset up front, producing a dataset of integer token indices that is then fed to a model expecting integer sequences.
text_input = tf.keras.Input(shape=(1,), dtype=tf.string, name='text')
x = vectorize_layer(text_input)
x = layers.Embedding(max_features + 1, embedding_dim)(x)
...
def vectorize_text(text, label):
    text = tf.expand_dims(text, -1)
    return vectorize_layer(text), label

# Vectorize the data.
train_ds = raw_train_ds.map(vectorize_text)
val_ds = raw_val_ds.map(vectorize_text)
test_ds = raw_test_ds.map(vectorize_text)

# Do async prefetching / buffering of the data for best performance on GPU.
train_ds = train_ds.cache().prefetch(buffer_size=10)
val_ds = val_ds.cache().prefetch(buffer_size=10)
test_ds = test_ds.cache().prefetch(buffer_size=10)
An important difference between the two is that option 2 lets you do asynchronous CPU preprocessing and buffering of your data while training on the GPU. So if you are training the model on a GPU, you probably want this option to get the best performance. That is what we will do below.
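As a side note, instead of the fixed buffer_size=10 used above, a common choice is to let tf.data pick the prefetch buffer size automatically; a minor variation on the snippet above:

AUTOTUNE = tf.data.experimental.AUTOTUNE
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
test_ds = test_ds.cache().prefetch(buffer_size=AUTOTUNE)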
If we later want to export the model to production, we would ship a model that accepts raw strings as input, like in the code snippet for option 1 above. This can be done after training, and we do it at the end of this example.
Build a model

from tensorflow.keras import layers

# An integer input for vocab indices.
inputs = tf.keras.Input(shape=(None,), dtype="int64")

# Next, we add a layer to map those vocab indices into a space of dimensionality
# 'embedding_dim'.
x = layers.Embedding(max_features, embedding_dim)(inputs)
x = layers.Dropout(0.5)(x)

# Conv1D + global max pooling
x = layers.Conv1D(128, 7, padding="valid", activation="relu", strides=3)(x)
x = layers.Conv1D(128, 7, padding="valid", activation="relu", strides=3)(x)
x = layers.GlobalMaxPooling1D()(x)

# We add a vanilla hidden layer:
x = layers.Dense(128, activation="relu")(x)
x = layers.Dropout(0.5)(x)

# We project onto a single unit output layer, and squash it with a sigmoid:
predictions = layers.Dense(1, activation="sigmoid", name="predictions")(x)

model = tf.keras.Model(inputs, predictions)

# Compile the model with binary crossentropy loss and an adam optimizer.
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
model.summary()
epochs = 3

# Fit the model using the train and validation datasets.
model.fit(train_ds, validation_data=val_ds, epochs=epochs)
Evaluate the model on the test set:

model.evaluate(test_ds)
Make an end-to-end model that accepts raw strings as input:

# A string input
inputs = tf.keras.Input(shape=(1,), dtype="string")
# Turn strings into vocab indices
indices = vectorize_layer(inputs)
# Turn vocab indices into predictions
outputs = model(indices)

# Our end to end model
end_to_end_model = tf.keras.Model(inputs, outputs)
end_to_end_model.compile(
    loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"]
)

# Test it with `raw_test_ds`, which yields raw strings
end_to_end_model.evaluate(raw_test_ds)
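As a quick usage check (not in the original example), the end-to-end model can be called directly on raw strings; the two review texts below are made up, and the output is one sigmoid probability per review, with values close to 1 meaning positive sentiment:

sample_reviews = tf.constant([
    ["This movie was absolutely wonderful, I loved every minute of it."],
    ["What a waste of time, the plot was boring and the acting was bad."],
])
# Shape (2, 1) matches the Input(shape=(1,), dtype="string") defined above.
probs = end_to_end_model.predict(sample_reviews)
print(probs)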
Summary

The complete code is as follows:
import re
import string

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental.preprocessing import TextVectorization

"""Load the data"""
batch_size = 32

raw_train_ds = tf.keras.preprocessing.text_dataset_from_directory(
    "../input/aclImdb/train",
    batch_size=batch_size,
    validation_split=0.2,
    subset="training",
    seed=1337,
)
raw_val_ds = tf.keras.preprocessing.text_dataset_from_directory(
    "../input/aclImdb/train",
    batch_size=batch_size,
    validation_split=0.2,
    subset="validation",
    seed=1337,
)
raw_test_ds = tf.keras.preprocessing.text_dataset_from_directory(
    "../input/aclImdb/test", batch_size=batch_size
)

"""Data preprocessing"""
def custom_standardization(input_data):
    lowercase = tf.strings.lower(input_data)
    stripped_html = tf.strings.regex_replace(lowercase, "<br />", " ")
    return tf.strings.regex_replace(
        stripped_html, "[%s]" % re.escape(string.punctuation), ""
    )

# Model constants.
max_features = 20000
embedding_dim = 128
sequence_length = 500

"""Text vectorization"""
vectorize_layer = TextVectorization(
    standardize=custom_standardization,
    max_tokens=max_features,
    output_mode="int",
    output_sequence_length=sequence_length,
)

"""!!! Be sure to call `adapt`"""
# Let's make a text-only dataset (no labels):
text_ds = raw_train_ds.map(lambda x, y: x)
# Let's call `adapt`:
vectorize_layer.adapt(text_ds)

def vectorize_text(text, label):
    text = tf.expand_dims(text, -1)
    return vectorize_layer(text), label

# Vectorize the data.
train_ds = raw_train_ds.map(vectorize_text)
val_ds = raw_val_ds.map(vectorize_text)
test_ds = raw_test_ds.map(vectorize_text)

# Do async prefetching / buffering of the data for best performance on GPU.
train_ds = train_ds.cache().prefetch(buffer_size=10)
val_ds = val_ds.cache().prefetch(buffer_size=10)
test_ds = test_ds.cache().prefetch(buffer_size=10)

"""Define the model"""
# An integer input for vocab indices.
inputs = tf.keras.Input(shape=(None,), dtype="int64")

# Next, we add a layer to map those vocab indices into a space of dimensionality
# 'embedding_dim'.
x = layers.Embedding(max_features, embedding_dim)(inputs)
x = layers.Dropout(0.5)(x)

# Conv1D + global max pooling
x = layers.Conv1D(128, 7, padding="valid", activation="relu", strides=3)(x)
x = layers.Conv1D(128, 7, padding="valid", activation="relu", strides=3)(x)
x = layers.GlobalMaxPooling1D()(x)

# We add a vanilla hidden layer:
x = layers.Dense(128, activation="relu")(x)
x = layers.Dropout(0.5)(x)

# We project onto a single unit output layer, and squash it with a sigmoid:
predictions = layers.Dense(1, activation="sigmoid", name="predictions")(x)

model = tf.keras.Model(inputs, predictions)

# Compile the model with binary crossentropy loss and an adam optimizer.
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
# model.summary()

"""Train the model"""
epochs = 3

# Fit the model using the train and validation datasets.
model.fit(train_ds, validation_data=val_ds, epochs=epochs)

"""Evaluate the model"""
print('evaluate:')
print(model.evaluate(test_ds))
One thing to note here: before using TextVectorization you must adapt it to the data:
# Let's make a text-only dataset (no labels):
text_ds = raw_train_ds.map(lambda x, y: x)
# Let's call `adapt`:
vectorize_layer.adapt(text_ds)
If you use TextVectorization, you must call adapt. If the layer is not adapted to the dataset, the model's accuracy will stay around 50%.
Call the layer's adapt() method on a dataset. When the layer is adapted, it analyzes the dataset, determines the frequencies of the individual string values, and builds a vocabulary from them. Depending on the layer's configuration options, the vocabulary can be unbounded in size or capped; if there are more unique values in the input than the maximum vocabulary size, the most frequent tokens are used to build the vocabulary.
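To see what adapt() actually produced, you can inspect the learned vocabulary with get_vocabulary(); a minimal sketch (the exact tokens depend on the corpus):

vocab = vectorize_layer.get_vocabulary()
print(len(vocab))   # at most max_features (20000) entries
print(vocab[:10])   # padding and OOV tokens first, then the most frequent words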
With that, text classification from scratch with Keras is complete.