softmax_cross_entropy_with_logits vs. sparse_softmax_cross_entropy_with_logits in TensorFlow

Both functions compute the softmax cross-entropy loss; they differ only in the format of the labels they accept.

"sparse" here means the labels are given as class indices rather than full probability distributions:

    import tensorflow as tf

    word_labels = tf.constant([2, 0])
    predict_logits = tf.constant([[2.0, -1.0, 3.0], [1.0, 0.0, -0.5]])
    loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=word_labels, logits=predict_logits)
    with tf.Session() as sess:
        print(sess.run(loss))
        # Output: [0.32656264 0.4643688 ]
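
To make the numbers concrete: the sparse loss for each example is simply the negative log of the softmax probability assigned to the true class. Below is a minimal hand check of that, reusing the word_labels and predict_logits defined above (the gather indices are built by hand for this 2-example batch):

    # Manual check: loss_i = -log(softmax(logits_i)[label_i])
    probs = tf.nn.softmax(predict_logits)
    indices = tf.stack([tf.range(2), word_labels], axis=1)  # [[0, 2], [1, 0]]
    manual_loss = -tf.log(tf.gather_nd(probs, indices))
    with tf.Session() as sess:
        print(sess.run(manual_loss))
        # Approximately [0.32656264 0.4643688 ], matching the built-in op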

softmax_cross_entropy_with_logits instead takes the labels as a probability distribution over the classes; with one-hot distributions it gives the same result:

    word_prob_distribution = tf.constant([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
    loss = tf.nn.softmax_cross_entropy_with_logits(
        labels=word_prob_distribution, logits=predict_logits)
    with tf.Session() as sess:
        print(sess.run(loss))
        # Output: [0.32656264 0.4643688 ]
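
In other words, feeding the dense version a one-hot distribution is equivalent to feeding the sparse version the class indices. As a small sketch (assuming 3 classes, matching the logits above), the one-hot labels can also be built from word_labels with tf.one_hot instead of being written out by hand:

    # Converting the sparse labels to one-hot reproduces the sparse result
    one_hot_labels = tf.one_hot(word_labels, depth=3)
    loss = tf.nn.softmax_cross_entropy_with_logits(
        labels=one_hot_labels, logits=predict_logits)
    with tf.Session() as sess:
        print(sess.run(loss))
        # Approximately [0.32656264 0.4643688 ]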

Because softmax_cross_entropy_with_logits accepts an arbitrary probability distribution as the label, it offers more flexibility. For example, a technique called label smoothing sets the probability of the correct class to a value slightly below 1.0 and the probabilities of the incorrect classes to values slightly above 0.0, which helps keep the model from overfitting the training data and can sometimes improve training results.

    word_prob_smooth = tf.constant([[0.01, 0.01, 0.97], [0.98, 0.03, 0.01]])
    loss = tf.nn.softmax_cross_entropy_with_logits(
        labels=word_prob_smooth, logits=predict_logits)
    with tf.Session() as sess:
        print(sess.run(loss))
        # Output: [0.37329704 0.5186562 ]
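
The smoothed distribution above is written out by hand; in practice it is usually generated from the one-hot labels. A minimal sketch, assuming a smoothing factor epsilon = 0.1 and 3 classes (both arbitrary choices for illustration):

    # Label smoothing: (1 - epsilon) on the true class, plus epsilon spread uniformly
    epsilon = 0.1
    num_classes = 3
    smoothed_labels = (tf.one_hot(word_labels, depth=num_classes) * (1.0 - epsilon)
                       + epsilon / num_classes)
    loss = tf.nn.softmax_cross_entropy_with_logits(
        labels=smoothed_labels, logits=predict_logits)
    with tf.Session() as sess:
        print(sess.run(loss))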