In NLP, semantic understanding is still a hard problem!
Given an article or a sentence, people understand it by searching context and making knowledge associations in their heads. In general, when a person interprets meaning, the brain searches for related knowledge. The founders of the knowledge graph held that the world is made of entities, not strings, which fundamentally changed the old search paradigm. Semantic understanding is really grounded in knowledge, concepts, and the relations between those concepts. When people answer a question, they often recount knowledge related to it; that recounting is the process of semantic understanding. This mechanism is completely different from how people perceive images or speech. It is no surprise that CNNs succeeded on images and speech, because biologists already understand the neural mechanisms of the human brain in image recognition quite well, whereas the neural mechanism by which the brain understands text is still poorly understood, which is why progress on semantic understanding in NLP has been so slow. Many attempts to bring CNNs into NLP have worked poorly, finding almost no difference between multi-layer and single-layer CNNs; the reason goes back to the brain's neural mechanisms, and copying an architecture blindly is bound to fail. The essence of deep learning is not simply stacking many layers of neurons; it is the ability to extract higher-order features layer by layer from the most basic features and finally classify. That is the key to deep learning's success.
Some people question whether word2vector counts as deep learning, saying the network is too shallow to deserve the word "deep". That is a misconception. word2vector is genuinely deep learning: it extracts high-order features of words. The key to its success is its core idea: words that appear in the same contexts have similar meanings. Going from the one-hot input layer to the embedding layer is exactly a process of high-order feature extraction. As noted above, more layers do not necessarily bring better results. A word embedding is already a high-order feature, and text is far more complex than images, so the current way CNNs are being brought into NLP may be the wrong direction. We need to study in depth the neural mechanism by which the human brain understands text, work out the biological model, and only then abstract a mathematical model from it, as happened with CNNs; otherwise NLP will not make real progress. For now, LSTM and attention models are relatively successful, but they remain form-based and still do not solve deep semantics.
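To make that one-hot-to-embedding step concrete, here is a toy numpy sketch (the sizes and the word index are made up for illustration and are not part of the word2vector code later in this post): looking up a row of the embedding matrix is mathematically the same as multiplying a one-hot vector by that matrix, which is why the embedding layer can be read as the first feature-extraction step.

import numpy as np

vocab_size, embedding_size = 5, 3            # toy sizes for illustration
embeddings = np.random.rand(vocab_size, embedding_size)

word_index = 2                               # hypothetical word id
one_hot = np.zeros(vocab_size)
one_hot[word_index] = 1.0

# Multiplying the one-hot input by the weight matrix ...
via_matmul = one_hot @ embeddings
# ... selects exactly the row that a direct embedding lookup returns.
via_lookup = embeddings[word_index]
assert np.allclose(via_matmul, via_lookup)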
So far, deep learning algorithms such as LSTM and attention models are applied in NLP only at the level of context and word or sentence vectors: computing sentence similarity, clustering, and the like. Truly making a machine understand text is still out of reach. In other words, working only at the semantic representation layer is far from enough; the underlying knowledge graph is the key. The knowledge graph proposed by Google is a real change. NLP is a complete ecosystem, from storage at the bottom, a graph database (GDB) of (entity, relation, entity) triples, up to semantic representation at the top (at that stage deep learning can be used to train directly at the semantic level). For example, the graph structure expressed by (head, relation, tail) triples captures the relations between entities, and a deep learning model can be trained so that h + r = t, yielding semantic representations. At prediction time, given the representations of two entities, a subtraction tells you their relation. This is different from word2vector, yet they share common ground: word2vector's CBOW trains the model x1 + x2 + ... = y. HowNet (知网) is also working along these lines.
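The h + r = t idea can be sketched as a TransE-style translation model. This is a minimal illustration with made-up sizes and untrained random embeddings, not code from this post: entities and relations share one vector space, training would push head + relation close to tail, and relation prediction then reduces to the subtraction mentioned above.

import numpy as np

np.random.seed(0)
num_entities, num_relations, dim = 100, 10, 16    # toy sizes
E = np.random.randn(num_entities, dim)            # entity embeddings
R = np.random.randn(num_relations, dim)           # relation embeddings

def score(h, r, t):
    # Lower is better: how far head + relation lands from tail.
    return np.linalg.norm(E[h] + R[r] - E[t])

def predict_relation(h, t):
    # With trained embeddings, t - h approximates the relation vector,
    # so the nearest relation embedding is the predicted relation.
    diff = E[t] - E[h]
    return int(np.argmin(np.linalg.norm(R - diff, axis=1)))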
Semantic representation is the core of deep learning's application to NLP. word2vector was hugely successful at the word-embedding level; the main direction now is to move from word embeddings to sentence or document embeddings. An earlier post on the siamese LSTM already discussed how to obtain sentence embeddings; around 2014-2015, researchers abroad explored various approaches to sentence and document similarity, such as Tree-LSTM, ConvNet, skip-thought, and the siamese LSTM with a Manhattan (MaLSTM) structure. Judging from the reported numbers, the Manhattan-structure siamese LSTM works best and fits the regularities of NLP most closely. There is already a siamese LSTM experiment on GitHub; a further improvement is to use a BiLSTM, and whether adding more layers improves accuracy remains to be verified; I personally stay neutral on that. This post focuses on word2vector. Its core idea was given above; that is the "dao" level. The concrete derivations, such as the optimizations for CBOW and skip-gram, negative sampling and Huffman-tree (hierarchical) softmax, are the "shu" level. Here is a word2vector implementation in TensorFlow:
data_helper.py:
import collections
import os
import random
import zipfile
import numpy as np
import urllib.request as request
import tensorflow as tf

url = 'http://mattmahoney.net/dc/'

def maybe_download(filename, expected_bytes):
    # Download the corpus if it is not present, then verify its size.
    if not os.path.exists(filename):
        filename, _ = request.urlretrieve(url + filename, filename)
    statinfo = os.stat(filename)
    if statinfo.st_size == expected_bytes:
        print('Found and verified', filename)
    else:
        print(statinfo.st_size)
        raise Exception('Failed to verify ' + filename + '. Can you get to it with a browser?')
    return filename

def read_data(filename):
    # Read the zipped corpus as a single list of word strings.
    with zipfile.ZipFile(filename) as f:
        data = tf.compat.as_str(f.read(f.namelist()[0])).split()
    return data

vocabulary_size = 50000

def build_dataset(words):
    # Keep the most frequent words; everything else maps to UNK (index 0).
    count = [['UNK', -1]]
    count.extend(collections.Counter(words).most_common(vocabulary_size - 1))
    dictionary = dict()
    for word, _ in count:
        dictionary[word] = len(dictionary)
    data = list()
    un_count = 0
    for word in words:
        if word in dictionary:
            index = dictionary[word]
        else:
            index = 0
            un_count += 1
        data.append(index)
    count[0][1] = un_count
    reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
    return data, reverse_dictionary, dictionary, count

data_index = 0

def generate_batch(data, batch_size, num_skips, skip_window):
    # Slide a window over the corpus and emit (center word, context word)
    # pairs for skip-gram training. (The corpus is loaded once by the caller;
    # re-reading it here on every batch would be wasteful.)
    global data_index
    assert num_skips <= 2 * skip_window
    assert batch_size % num_skips == 0
    span = 2 * skip_window + 1
    batch = np.ndarray(shape=[batch_size], dtype=np.int32)
    labels = np.ndarray(shape=[batch_size, 1], dtype=np.int32)
    buffer = collections.deque(maxlen=span)
    # Initialize the window.
    for i in range(span):
        buffer.append(data[data_index])
        data_index = (data_index + 1) % len(data)
    # Move the window and collect a batch.
    for i in range(batch_size // num_skips):
        target = skip_window
        avoid_target = [skip_window]
        for j in range(num_skips):
            while target in avoid_target:
                target = np.random.randint(0, span)
            avoid_target.append(target)
            batch[i * num_skips + j] = buffer[skip_window]
            labels[i * num_skips + j, 0] = buffer[target]
        buffer.append(data[data_index])
        data_index = (data_index + 1) % len(data)
    return batch, labels
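A quick way to see what generate_batch produces is the hypothetical snippet below (it assumes text8.zip is downloadable as in maybe_download above; the batch sizes are just examples): each center word is paired with num_skips context words drawn at random from its window.

# Hypothetical usage sketch: inspect a few skip-gram pairs.
filename = maybe_download('text8.zip', 31344016)
words = read_data(filename)
data, reverse_dictionary, dictionary, count = build_dataset(words)
batch, labels = generate_batch(data, batch_size=8, num_skips=2, skip_window=1)
for center, context in zip(batch, labels[:, 0]):
    print(reverse_dictionary[center], '->', reverse_dictionary[context])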
w2vModel.py:
import tensorflow as tf
import w2v.data_helper as da
import numpy as np
import math

# filename = da.maybe_download('text8.zip', 31344016)
words = da.read_data("text8.zip")
assert words is not None
data, reverse_dictionary, dictionary, count = da.build_dataset(words)

class config(object):
    batch_size = 128
    embedding_size = 128
    skip_window = 1
    num_skips = 2
    valid_size = 16
    valid_window = 100
    valid_examples = np.random.choice(valid_window, valid_size, replace=False)
    num_sampled = 64
    vocabulary_size = 50000
    num_steps = 10001

class w2vModel(object):
    def __init__(self, config):
        self.train_inputs = train_inputs = tf.placeholder(tf.int32, shape=[config.batch_size])
        self.train_labels = train_labels = tf.placeholder(tf.int32, shape=[config.batch_size, 1])
        self.valid_dataset = valid_dataset = tf.constant(config.valid_examples, dtype=tf.int32)
        with tf.device('/cpu:0'):
            # The embedding matrix holds the word vectors being learned.
            embeddings = tf.Variable(
                tf.random_uniform(shape=[config.vocabulary_size, config.embedding_size],
                                  minval=-1.0, maxval=1.0))
            embed = tf.nn.embedding_lookup(embeddings, train_inputs)
            # Output-layer weights and bias for NCE (noise-contrastive estimation).
            nce_weights = tf.Variable(
                tf.truncated_normal([config.vocabulary_size, config.embedding_size],
                                    stddev=1.0 / math.sqrt(config.embedding_size)))
            nce_bias = tf.Variable(tf.zeros([config.vocabulary_size]))
        # Expose loss/optimizer/similarity on self so train.py can reach them.
        self.loss = loss = tf.reduce_mean(
            tf.nn.nce_loss(weights=nce_weights, biases=nce_bias, labels=train_labels,
                           inputs=embed, num_sampled=config.num_sampled,
                           num_classes=config.vocabulary_size))
        self.optimizer = tf.train.GradientDescentOptimizer(1.0).minimize(loss)
        # Cosine similarity between the validation words and the whole vocabulary.
        norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
        self.normalized_embeddings = normalized_embeddings = embeddings / norm
        valid_embeddings = tf.nn.embedding_lookup(normalized_embeddings, valid_dataset)
        self.similarity = tf.matmul(valid_embeddings, normalized_embeddings, transpose_b=True)
        tf.add_to_collection("embedding", embeddings)
        self.saver = tf.train.Saver(tf.global_variables())
train.py:
import tensorflow as tf
import w2v.w2vModel as model
import w2v.data_helper as da

config = model.config()
with tf.Graph().as_default() as g:
    Model = model.w2vModel(config)
    with tf.Session(graph=g) as session:
        tf.global_variables_initializer().run()
        print("initialized")
        average_loss = 0.0
        for step in range(config.num_steps):
            batch_inputs, batch_labels = da.generate_batch(model.data, config.batch_size,
                                                           config.num_skips, config.skip_window)
            feed_dict = {Model.train_inputs: batch_inputs, Model.train_labels: batch_labels}
            _, loss_val = session.run([Model.optimizer, Model.loss], feed_dict=feed_dict)
            average_loss += loss_val
            if step % 2000 == 0:
                if step > 0:
                    average_loss /= 2000
                print("Average loss at step", step, ":", average_loss)
                average_loss = 0
            if step % 10000 == 0:
                # Print the nearest neighbours of the validation words.
                sim = Model.similarity.eval()
                for i in range(config.valid_size):
                    valid_word = model.reverse_dictionary[config.valid_examples[i]]
                    top_k = 8
                    nearest = (-sim[i, :]).argsort()[1:top_k + 1]
                    log_str = "Nearest to %s:" % valid_word
                    for k in range(top_k):
                        close_word = model.reverse_dictionary[nearest[k]]
                        log_str = "%s %s," % (log_str, close_word)
                    print(log_str)
        Model.saver.save(session, "E:/word2vector/models/model.ckpt")
        # final_embeddings = Model.normalized_embeddings.eval()
The implementation is fairly simple: count the words in the corpus, sort them by descending frequency, build the dictionary {word: index}, convert the corpus into indices, and train. The word vectors are simply the learned embedding parameters; at prediction time you only need the embedding matrix and the dictionary to get a word's vector, which is much simpler than BiLSTM or siamese LSTM! However, it has a fatal weakness for semantic understanding: any word not in the dictionary is represented by the index-0 (UNK) vector, which is clearly inadequate and limits its power. That is why a small number of researchers in China are now studying how to combine neural probabilistic semantic representations with symbolic semantic representations, which is no small challenge.
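As a small sketch of that prediction-time usage (the checkpoint path comes from train.py above; the query word "king" and the helper name word_vector are just illustrative assumptions): rebuild the graph, restore the trained embedding matrix, then look a word up through the dictionary, falling back to the UNK index for out-of-vocabulary words.

import tensorflow as tf
import w2v.w2vModel as model

config = model.config()
with tf.Graph().as_default() as g:
    Model = model.w2vModel(config)
    with tf.Session(graph=g) as session:
        # Restore the parameters saved by train.py.
        Model.saver.restore(session, "E:/word2vector/models/model.ckpt")
        embeddings = session.run(tf.get_collection("embedding")[0])

def word_vector(word):
    # Out-of-vocabulary words fall back to index 0 (UNK) -- the weakness noted above.
    index = model.dictionary.get(word, 0)
    return embeddings[index]

print(word_vector("king")[:5])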
Looking forward to a real breakthrough in NLP semantic understanding...