In this section we go over the FM trio: FNN, PNN and DeepFM, in chronological order. We start with FNN, which feeds the latent vectors and weights learned by a pretrained FM into a neural network as its input, move on to PNN, which brings the vector inner/outer product from pretraining directly into the network, and finally reach DeepFM, which follows the Wide&Deep framework but replaces the hand-crafted feature interactions with an FM. With that we finally arrive at 2017...
The code below works on dense input, which I find makes the model structure easier to follow; the sparse-input version and the complete code are here 👇
https://github.com/DSXiangLi/CTR
FNN was one of the earliest attempts to combine FM with deep learning. It can be understood from two angles. From the Embedding+MLP perspective discussed earlier, FNN uses the latent vectors pretrained by FM as its first layer, which speeds up convergence. From the FM perspective, FM is limited to second-order feature interactions; to learn higher-order interactions, you stack fully connected layers on top of FM, and that gives you FNN.
A side note on why directly concatenating embeddings as input converges so slowly: for one thing there are a lot of parameters, the embedding layer alone already has N*K of them, on top of the fully connected layers that follow; for another, each gradient descent step only updates the embeddings of the non-zero features in the sparse discrete input.
Let's first look at the FM formula. FNN extracts the \(W, V\) below to use as the input to the first layer of the neural network.
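For reference, the second-order FM prediction, with linear weights \(W = (w_1, \dots, w_N)\) and latent vectors \(V = (v_1, \dots, v_N)\), is

\[
\hat{y}(x) = w_0 + \sum_{i=1}^{N} w_i x_i + \sum_{i=1}^{N}\sum_{j=i+1}^{N} \langle v_i, v_j \rangle x_i x_j
\]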
The FNN model structure is fairly simple. The input has N features, and the FM latent vectors have dimension K.
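Concretely, as I read the FNN paper (notation mine, not the post's), each feature \(i\) gets an initial dense representation built from the pretrained FM parameters,

\[
z_i = (w_i, v_i^1, v_i^2, \dots, v_i^K)
\]

and the concatenation \((w_0, z_1, \dots, z_N)\) forms the first layer that the fully connected stack sits on.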
The model structure is shown below.
A few issues with FNN that come to mind:
Here tf.contrib.framework.load_variable is used to read the embedding and weight from the previously trained FM model. I suppose you could also just export the FM variables and pass them into FNN through params instead.
```python
@tf_estimator_model
def model_fn(features, labels, mode, params):
    feature_columns = build_features()
    input = tf.feature_column.input_layer(features, feature_columns)

    with tf.variable_scope('init_fm_embedding'):
        # method1: load pretrained FM embedding and weight from checkpoint directly
        embeddings = tf.Variable(tf.contrib.framework.load_variable('./checkpoint/FM', 'fm_interaction/v'))
        weight = tf.Variable(tf.contrib.framework.load_variable('./checkpoint/FM', 'linear/w'))
        dense = tf.add(tf.matmul(input, embeddings), tf.matmul(input, weight))
        add_layer_summary('input', dense)

    with tf.variable_scope('Dense'):
        for i, unit in enumerate(params['hidden_units']):
            dense = tf.layers.dense(dense, units=unit, activation='relu', name='dense{}'.format(i))
            dense = tf.layers.batch_normalization(dense, center=True, scale=True, trainable=True,
                                                  training=(mode == tf.estimator.ModeKeys.TRAIN))
            dense = tf.layers.dropout(dense, rate=params['dropout_rate'],
                                      training=(mode == tf.estimator.ModeKeys.TRAIN))
            add_layer_summary(dense.name, dense)

    with tf.variable_scope('output'):
        y = tf.layers.dense(dense, units=1, name='output')
        tf.summary.histogram(y.name, y)

    return y
```
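As a minimal sketch of the params-based alternative mentioned above: the checkpoint path and variable names mirror the snippet above, while hidden_units, dropout_rate and model_dir are placeholder values, so treat this as an assumption rather than the repo's actual code.

```python
import tensorflow as tf

# Read the pretrained FM variables once, outside of model_fn
reader = tf.train.load_checkpoint('./checkpoint/FM')
params = {
    'hidden_units': [128, 64],                        # placeholder values
    'dropout_rate': 0.2,
    'fm_v': reader.get_tensor('fm_interaction/v'),    # numpy array, N * K
    'fm_w': reader.get_tensor('linear/w')             # numpy array, N * 1
}

# Inside model_fn the arrays would then be used as initial values:
#   embeddings = tf.Variable(params['fm_v'])
#   weight = tf.Variable(params['fm_w'])

estimator = tf.estimator.Estimator(model_fn=model_fn, params=params, model_dir='./checkpoint/FNN')
```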
PNN's goal is stated right at the start of the paper: to mine information more effectively than plain MLPs. As discussed in the previous post, an MLP can in theory extract arbitrary information, but precisely because it is so general, the patterns the final model actually learns are heavily limited by the amount of data. PNN borrows the FM idea to help the MLP learn more feature-interaction information.
PNN offers three ways of mining feature-interaction information: IPNN uses the vector inner product, OPNN uses the vector outer product, and concatenating the two gives PNN. The model structure is shown below.
This is followed by fully connected layers. Note that if you remove the fully connected layers, set all the product-layer weights to 1, and wire the linear part back to the original discrete input, IPNN degenerates into FM.
Both the IPNN and OPNN computations above suffer from overly high dimensionality and computational cost, so the authors apply corresponding optimizations, sketched below.
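Roughly, as I understand the PNN paper (the notation below is mine, not the post's): with field embeddings \(f_1, \dots, f_N\) the product layer collects

\[
p_{ij} = \langle f_i, f_j \rangle \ \text{(IPNN)} \qquad \text{or} \qquad p_{ij} = f_i f_j^{T} \ \text{(OPNN)}
\]

For IPNN, the weight matrix of each product-layer node is approximated by a rank-1 factorization \(W_p^n \approx \theta^n (\theta^n)^{T}\), so its response collapses to \(\big\lVert \sum_i \theta_i^n f_i \big\rVert^2\) and the cost per node drops from \(O(N^2)\) to \(O(NK)\). For OPNN, the pairwise outer products are replaced by their superposition \(p = f_{\Sigma} f_{\Sigma}^{T}\) with \(f_{\Sigma} = \sum_i f_i\), a single \(K \times K\) matrix instead of \(N^2\) of them.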
A few things about PNN one could quibble with:
```python
@tf_estimator_model
def model_fn(features, labels, mode, params):
    dense_feature = build_features()
    dense = tf.feature_column.input_layer(features, dense_feature)  # lz linear concat of embedding

    feature_size = len(dense_feature)
    embedding_size = dense_feature[0].variable_shape.as_list()[-1]
    embedding_matrix = tf.reshape(dense, [-1, feature_size, embedding_size])  # batch * feature_size * emb_size

    with tf.variable_scope('IPNN'):
        # use matrix multiplication to perform inner product of embedding
        inner_product = tf.matmul(embedding_matrix, tf.transpose(embedding_matrix, perm=[0, 2, 1]))  # batch * feature_size * feature_size
        inner_product = tf.reshape(inner_product, [-1, feature_size * feature_size])  # batch * (feature_size * feature_size)
        add_layer_summary(inner_product.name, inner_product)

    with tf.variable_scope('OPNN'):
        outer_collection = []
        for i in range(feature_size):
            for j in range(i + 1, feature_size):
                vi = tf.gather(embedding_matrix, indices=i, axis=1, batch_dims=0, name='vi')  # batch * embedding_size
                vj = tf.gather(embedding_matrix, indices=j, axis=1, batch_dims=0, name='vj')  # batch * embedding_size
                outer_collection.append(tf.reshape(tf.einsum('ai,aj->aij', vi, vj),
                                                   [-1, embedding_size * embedding_size]))  # batch * (emb * emb)
        outer_product = tf.concat(outer_collection, axis=1)
        add_layer_summary(outer_product.name, outer_product)

    with tf.variable_scope('fc1'):
        if params['model_type'] == 'IPNN':
            dense = tf.concat([dense, inner_product], axis=1)
        elif params['model_type'] == 'OPNN':
            dense = tf.concat([dense, outer_product], axis=1)
        elif params['model_type'] == 'PNN':
            dense = tf.concat([dense, inner_product, outer_product], axis=1)
        add_layer_summary(dense.name, dense)

    with tf.variable_scope('Dense'):
        for i, unit in enumerate(params['hidden_units']):
            dense = tf.layers.dense(dense, units=unit, activation='relu', name='dense{}'.format(i))
            dense = tf.layers.batch_normalization(dense, center=True, scale=True, trainable=True,
                                                  training=(mode == tf.estimator.ModeKeys.TRAIN))
            dense = tf.layers.dropout(dense, rate=params['dropout_rate'],
                                      training=(mode == tf.estimator.ModeKeys.TRAIN))
            add_layer_summary(dense.name, dense)

    with tf.variable_scope('output'):
        y = tf.layers.dense(dense, units=1, name='output')
        add_layer_summary('output', y)

    return y
```
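As a quick sanity check of the tf.einsum('ai,aj->aij', ...) used in the OPNN branch (the toy values below are made up), the pattern computes a per-sample outer product of the two embedding vectors:

```python
import numpy as np
import tensorflow as tf

vi = tf.constant(np.array([[1., 2.], [3., 4.]]))  # batch=2, embedding_size=2
vj = tf.constant(np.array([[5., 6.], [7., 8.]]))
outer = tf.einsum('ai,aj->aij', vi, vj)           # batch * emb_size * emb_size

with tf.Session() as sess:
    print(sess.run(outer))
    # sample 0: [[1*5, 1*6], [2*5, 2*6]] = [[ 5.,  6.], [10., 12.]]
    # sample 1: [[3*7, 3*8], [4*7, 4*8]] = [[21., 24.], [28., 32.]]
```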
DeepFM improves the Wide side of Wide&Deep. The original Wide part is an LR whose input consists of discrete features plus interaction features, and the interaction features rely on manual feature engineering to build the crosses. DeepFM replaces the interaction-feature part with an FM, so compared with Wide&Deep it no longer depends on feature engineering, and dropping the cross columns also reduces the input dimensionality.
Compared with PNN/FNN, DeepFM captures more low-order features. Also, these models are not mutually exclusive; for example, if DeepFM's FM layer were also shared with the Deep part, you would essentially have IPNN.
The Wide part is just an FM: the input is N one-hot discrete features, each mapped to a low-dimensional (k) embedding of the same length, and the final output is exactly the output of the FM model from before. And because, unlike IPNN, there is no need to output the latent vectors themselves, the FM complexity-reduction trick can be used here.
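That trick is the standard \(O(NK)\) rewrite of the pairwise term, which is exactly what the sum_square / square_sum lines in the code below compute (on the embeddings, which there already absorb \(x\)):

\[
\sum_{i=1}^{N}\sum_{j=i+1}^{N} \langle v_i, v_j \rangle x_i x_j
= \frac{1}{2}\sum_{k=1}^{K}\left[\Big(\sum_{i=1}^{N} v_{ik} x_i\Big)^{2} - \sum_{i=1}^{N} v_{ik}^{2} x_i^{2}\right]
\]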
The Deep part shares the N*K embedding input layer with the Wide part and is followed by two fully connected layers.
Deep and Wide are trained jointly, and the model's final output is a simple sum of the FM part and the Deep part with weights of 1. Joint training with a shared embedding also keeps the embedding learned from second-order feature interactions consistent with the embedding learned from higher-order information.
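For reference, the DeepFM paper writes the joint prediction as

\[
\hat{y} = \mathrm{sigmoid}\big(y_{FM} + y_{DNN}\big)
\]

which the code below realizes as linear_output + fm_output + dense, with the sigmoid left to whatever the @tf_estimator_model wrapper applies (an assumption on my part).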
```python
@tf_estimator_model
def model_fn(features, labels, mode, params):
    dense_feature, sparse_feature = build_features()
    dense = tf.feature_column.input_layer(features, dense_feature)
    sparse = tf.feature_column.input_layer(features, sparse_feature)

    with tf.variable_scope('FM_component'):
        with tf.variable_scope('Linear'):
            linear_output = tf.layers.dense(sparse, units=1)
            add_layer_summary('linear_output', linear_output)

        with tf.variable_scope('second_order'):
            # reshape (batch_size, n_feature * emb_size) -> (batch_size, n_feature, emb_size)
            emb_size = dense_feature[0].variable_shape.as_list()[0]  # all features share the same emb dimension
            embedding_matrix = tf.reshape(dense, (-1, len(dense_feature), emb_size))
            add_layer_summary('embedding_matrix', embedding_matrix)
            # Compared to FM, the embedding here is flatten(x * v) not v
            sum_square = tf.pow(tf.reduce_sum(embedding_matrix, axis=1), 2)
            square_sum = tf.reduce_sum(tf.pow(embedding_matrix, 2), axis=1)
            fm_output = tf.reduce_sum(tf.subtract(sum_square, square_sum) * 0.5, axis=1, keepdims=True)
            add_layer_summary('fm_output', fm_output)

    with tf.variable_scope('Deep_component'):
        for i, unit in enumerate(params['hidden_units']):
            dense = tf.layers.dense(dense, units=unit, activation='relu', name='dense{}'.format(i))
            dense = tf.layers.batch_normalization(dense, center=True, scale=True, trainable=True,
                                                  training=(mode == tf.estimator.ModeKeys.TRAIN))
            dense = tf.layers.dropout(dense, rate=params['dropout_rate'],
                                      training=(mode == tf.estimator.ModeKeys.TRAIN))
            add_layer_summary(dense.name, dense)

    with tf.variable_scope('output'):
        y = dense + fm_output + linear_output
        add_layer_summary('output', y)

    return y
```
https://github.com/DSXiangLi/CTR
CTR Learning Notes & Code 1: The Prelude to Deep Learning, LR -> FFM
CTR Learning Notes & Code 2: Deep CTR Models, MLP -> Wide&Deep