深度有趣 | 06 Variational Autoencoders

Introduction

The Variational Autoencoder (VAE) is a kind of generative model; another common generative model is the Generative Adversarial Network (GAN).

Here we walk through how VAEs work and implement one in Keras.

How It Works

We often want to do the following: given many samples, learn to generate new samples like them.

Take MNIST as an example: after seeing a few thousand images of handwritten digits, we can imitate them and generate similar images that do not exist in the original data; they vary a bit but look alike.

In other words, we need to learn the distribution of the data x; with the distribution in hand, producing new samples is easy:

$$ P(X) $$

But estimating the data distribution is not easy, especially when there is not much data.

We can introduce a latent variable z, obtain x from z through a complex mapping, and assume that z follows a Gaussian distribution:

$$ x=f(z;\theta) $$

So it suffices to learn the parameters of the Gaussian that the latent variable follows, plus the mapping function, to recover the distribution of the original data.
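
Written out (a standard formulation of the latent-variable model, left implicit in the text above), the data distribution is the output of the mapping averaged over the Gaussian prior on z:

$$ P(X)=\int P(X|z;\theta)P(z)dz $$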

To learn the parameters of the Gaussian that the latent variable follows, we need enough samples of z.

But samples of z cannot be observed directly, so we also need a mapping (a conditional probability distribution) that produces the corresponding z from the x samples we do have:

$$ z \sim Q(z|x) $$

This looks a lot like an autoencoder: start from the data itself, encode it to a latent representation, and decode to reconstruct it.

The differences between a VAE and an AE are as follows:

  • In an AE, the distribution of the latent representation is unknown; in a VAE, the latent variable follows a Gaussian distribution
  • An AE learns only the encoder and decoder; a VAE additionally learns the distribution of the latent variable, namely the mean and variance of the Gaussian
  • An AE can only take an x and produce the corresponding reconstruction of x
  • A VAE can produce new values of z and thus new values of x, i.e. generate new samples

Loss Function

Besides the reconstruction error, since the VAE assumes that the latent variable z follows a Gaussian, the conditional distribution produced by the encoder should be as close to that Gaussian as possible.

We can use relative entropy, also known as the KL divergence (Kullback–Leibler divergence), to measure the difference between two distributions, a kind of distance, except that relative entropy is asymmetric:

$$ D(f\parallel g)=\int f(x)\log\frac{f(x)}{g(x)}dx $$
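
For the case used below, where the encoder outputs a diagonal Gaussian N(\mu,\sigma^2) and the prior is the standard normal N(0,1), the KL divergence has a closed form (a standard result, stated here to connect the formula to the kl_loss term in the code):

$$ D\big(N(\mu,\sigma^2)\parallel N(0,1)\big)=\frac{1}{2}\sum\left(\mu^2+\sigma^2-\log\sigma^2-1\right) $$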

Implementation

Here we take MNIST as the example and learn the two parameters, mean and variance, of the Gaussian followed by the latent variable z, so that new values of z can generate samples x that never appeared in the original data.

The encoder and decoder each use two fully connected layers. We keep things simple, since the goal is mainly to show how a VAE is implemented.

Load the libraries

# -*- coding: utf-8 -*-

import numpy as np
import matplotlib.pyplot as plt

from keras.layers import Input, Dense, Lambda
from keras.models import Model
from keras import backend as K
from keras import objectives  # renamed to keras.losses in newer Keras versions
from keras.datasets import mnist

Define some constants

batch_size = 100
original_dim = 784        # 28 * 28 pixels per image
intermediate_dim = 256    # width of the hidden fully connected layers
latent_dim = 2            # dimensionality of the latent variable z
epochs = 50

The encoder: two fully connected layers; the latent representation consists of a mean and a log-variance.

x = Input(shape=(original_dim,))
h = Dense(intermediate_dim, activation='relu')(x)
z_mean = Dense(latent_dim)(h)       # mean of z
z_log_var = Dense(latent_dim)(h)    # log-variance of z

A Lambda layer has no trainable weights of its own; it only performs a computation, and is used here to produce new samples of z. This is the reparameterization trick: writing z = mean + sigma * epsilon with epsilon drawn from N(0, I) keeps the sampling step differentiable with respect to the mean and variance.

def sampling(args):
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(batch_size, latent_dim), mean=0.)
    # z = mean + sigma * epsilon, where sigma = exp(z_log_var / 2)
    return z_mean + K.exp(z_log_var / 2) * epsilon

z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])
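
One caveat: hard-coding batch_size inside sampling means every batch fed to the model must have exactly that size. A more flexible sketch (along the lines of the official Keras VAE example) reads the batch size off the tensor instead:

def sampling(args):
    z_mean, z_log_var = args
    batch = K.shape(z_mean)[0]    # dynamic batch size
    epsilon = K.random_normal(shape=(batch, latent_dim))
    return z_mean + K.exp(z_log_var / 2) * epsilon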

The decoder: two fully connected layers; x_decoded_mean is the reconstructed output.

# keep references to the layers so the generator below can reuse them
decoder_h = Dense(intermediate_dim, activation='relu')
decoder_mean = Dense(original_dim, activation='sigmoid')
h_decoded = decoder_h(z)
x_decoded_mean = decoder_mean(h_decoded)

Define the total loss function and compile the model

def vae_loss(x, x_decoded_mean):
    # reconstruction error, summed over the 784 pixels
    xent_loss = original_dim * objectives.binary_crossentropy(x, x_decoded_mean)
    # KL divergence between N(z_mean, exp(z_log_var)) and N(0, I)
    kl_loss = -0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
    return xent_loss + kl_loss

vae = Model(x, x_decoded_mean)
vae.compile(optimizer='rmsprop', loss=vae_loss)
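
Note that vae_loss reaches outside its arguments for z_mean and z_log_var, which newer Keras versions reject when such a function is passed to compile. A sketch of the workaround used by the later official Keras example attaches the loss to the model instead (same tensors as above, loss averaged over the batch):

xent_loss = original_dim * objectives.binary_crossentropy(x, x_decoded_mean)
kl_loss = -0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
vae.add_loss(K.mean(xent_loss + kl_loss))
vae.compile(optimizer='rmsprop')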

Load the data and train. Training speed on a CPU is tolerable.

(x_train, y_train), (x_test, y_test) = mnist.load_data()

# scale pixels to [0, 1] and flatten each 28x28 image into a 784-vector
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))

vae.fit(x_train, x_train,
        shuffle=True,
        epochs=epochs,
        batch_size=batch_size,
        validation_data=(x_test, x_test))
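
Optionally (not part of the original walkthrough), save the trained weights so a later session can skip the 50 epochs; the file name is arbitrary:

vae.save_weights('vae_mnist.h5')   # restore later with vae.load_weights('vae_mnist.h5')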

Define an encoder model to see what the MNIST data looks like in the latent space

encoder = Model(x, z_mean)

x_test_encoded = encoder.predict(x_test, batch_size=batch_size)
plt.figure(figsize=(6, 6))
plt.scatter(x_test_encoded[:, 0], x_test_encoded[:, 1], c=y_test)
plt.colorbar()
plt.show()

The result is shown below: in the two-dimensional latent space, the different digits are separated nicely.

(Figure: the digits as represented in the latent space)

Now define a generator, mapping the latent space to the output, for producing new samples

decoder_input = Input(shape=(latent_dim,))
_h_decoded = decoder_h(decoder_input)
_x_decoded_mean = decoder_mean(_h_decoded)
generator = Model(decoder_input, _x_decoded_mean)
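
For instance, decoding a single hand-picked latent point (the coordinates here are arbitrary, just for illustration):

z_sample = np.array([[0.5, -1.0]])   # an arbitrary point in the latent space
digit = generator.predict(z_sample)[0].reshape(28, 28)
plt.imshow(digit, cmap='Greys_r')
plt.show()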

Use a grid to produce some two-dimensional points, feed them into the generator as new values of z, and display the generated x

n = 20            # a 20 x 20 grid of latent points
digit_size = 28
figure = np.zeros((digit_size * n, digit_size * n))
grid_x = np.linspace(-4, 4, n)
grid_y = np.linspace(-4, 4, n)

for i, xi in enumerate(grid_x):
    for j, yi in enumerate(grid_y):
        z_sample = np.array([[yi, xi]])
        x_decoded = generator.predict(z_sample)
        digit = x_decoded[0].reshape(digit_size, digit_size)
        # rows are filled bottom-up so the second latent dimension
        # increases upward, matching the scatter plot above
        figure[(n - i - 1) * digit_size: (n - i) * digit_size,
               j * digit_size: (j + 1) * digit_size] = digit

plt.figure(figsize=(10, 10))
plt.imshow(figure)
plt.show()
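
An alternative way to build the grid (borrowed from the official Keras example, assuming SciPy is available) runs evenly spaced probabilities through the inverse CDF of N(0, 1), so the grid points are spaced according to the Gaussian prior on z:

from scipy.stats import norm

grid_x = norm.ppf(np.linspace(0.05, 0.95, n))   # ppf is the inverse CDF
grid_y = norm.ppf(np.linspace(0.05, 0.95, n))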

The result is shown below. It is consistent with the latent-space plot above, and you can even see transitional forms between digits.

(Figure: outputs corresponding to the grid of latent points)

Because some randomness is involved, the generated results differ a little between runs.

Replacing the fully connected layers with a CNN should yield better representations.

Extension

Once you have the above down, the same method can be run on the Fashion-MNIST dataset, whose size and format are exactly the same as MNIST's.

(Figure: the Fashion-MNIST dataset)

Only four lines need to change

from keras.datasets import fashion_mnist

(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()

grid_x = np.linspace(-3, 3, n)
grid_y = np.linspace(-3, 3, n)

The full code is as follows

# -*- coding: utf-8 -*-

import numpy as np
import matplotlib.pyplot as plt

from keras.layers import Input, Dense, Lambda
from keras.models import Model
from keras import backend as K
from keras import objectives
from keras.datasets import fashion_mnist

batch_size = 100
original_dim = 784
intermediate_dim = 256
latent_dim = 2
epochs = 50

x = Input(shape=(original_dim,))
h = Dense(intermediate_dim, activation='relu')(x)
z_mean = Dense(latent_dim)(h)
z_log_var = Dense(latent_dim)(h)

def sampling(args):
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(batch_size, latent_dim), mean=0.)
    return z_mean + K.exp(z_log_var / 2) * epsilon

z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])

decoder_h = Dense(intermediate_dim, activation='relu')
decoder_mean = Dense(original_dim, activation='sigmoid')
h_decoded = decoder_h(z)
x_decoded_mean = decoder_mean(h_decoded)

def vae_loss(x, x_decoded_mean):
    xent_loss = original_dim * objectives.binary_crossentropy(x, x_decoded_mean)
    kl_loss = -0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
    return xent_loss + kl_loss

vae = Model(x, x_decoded_mean)
vae.compile(optimizer='rmsprop', loss=vae_loss)

(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()

x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))

vae.fit(x_train, x_train,
        shuffle=True,
        epochs=epochs,
        batch_size=batch_size,
        validation_data=(x_test, x_test))

encoder = Model(x, z_mean)

x_test_encoded = encoder.predict(x_test, batch_size=batch_size)
plt.figure(figsize=(6, 6))
plt.scatter(x_test_encoded[:, 0], x_test_encoded[:, 1], c=y_test)
plt.colorbar()
plt.show()

decoder_input = Input(shape=(latent_dim,))
_h_decoded = decoder_h(decoder_input)
_x_decoded_mean = decoder_mean(_h_decoded)
generator = Model(decoder_input, _x_decoded_mean)

n = 20
digit_size = 28
figure = np.zeros((digit_size * n, digit_size * n))
grid_x = np.linspace(-3, 3, n)
grid_y = np.linspace(-3, 3, n)

for i, xi in enumerate(grid_x):
    for j, yi in enumerate(grid_y):
        z_sample = np.array([[yi, xi]])
        x_decoded = generator.predict(z_sample)
        digit = x_decoded[0].reshape(digit_size, digit_size)
        figure[(n - i - 1) * digit_size: (n - i) * digit_size,
               j * digit_size: (j + 1) * digit_size] = digit

plt.figure(figsize=(10, 10))
plt.imshow(figure)
plt.show()

Looking at the latent representation, it again separates the classes well.

(Figure: Fashion-MNIST latent representation)

Then generate some images; you can see the transitions between different kinds of clothing.

(Figure: outputs corresponding to the grid of latent points for Fashion-MNIST)

References

Video course: 深度有趣(一)
