Mobile Neural Networks: MobileNet Series Paper Walkthrough and Simple Code Implementation (MobileNetV2)

MobileNetV2:

Summary:

MobileNet V1's structure is fairly simple, but its main problem lies in the depthwise convolution: depthwise convolution does reduce the amount of computation, yet the depthwise kernels easily "die" during training and end up producing all-zero outputs after ReLU.
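
To make the computation savings of the depthwise separable convolution concrete, here is a small back-of-the-envelope sketch in plain Python (the feature-map and channel sizes are hypothetical, chosen only for illustration):

# Rough multiply-accumulate count for one layer, assuming a hypothetical
# 56x56 feature map with 128 input and 128 output channels and 3x3 kernels.
H, W = 56, 56
C_in, C_out = 128, 128
k = 3

standard  = H * W * C_in * C_out * k * k   # standard 3x3 convolution
depthwise = H * W * C_in * k * k           # depthwise step
pointwise = H * W * C_in * C_out           # 1x1 pointwise step
separable = depthwise + pointwise

print('standard :', standard)
print('separable:', separable)
print('ratio    : %.3f' % (separable / standard))   # ~= 1/C_out + 1/k^2, about 0.119 here

The ratio works out to roughly 1/C_out + 1/k^2, i.e. close to an order-of-magnitude reduction for 3x3 kernels.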

The overall architecture combines MobileNet V1 with the residual unit from ResNet, using depthwise convolutions inside the residual unit's bottleneck. Most importantly, the order of operations is the reverse of an ordinary residual block: an ordinary residual block first applies a 1×1 convolution to reduce the number of feature-map channels, then a 3×3 convolution, and finally another 1×1 convolution to expand the channel count back; MobileNetV2 instead expands first, applies a 3×3 depthwise convolution, and then projects back down. In addition, to keep ReLU from destroying the features, the ReLU after the layer with the smaller channel count is replaced with a linear layer.

Core ideas:

(1) Linear Bottlenecks

This section conveys a single idea: ReLU causes significant information loss on tensors with a small number of channels. As the illustration in the paper shows, if the original input is first embedded into 15 or more dimensions and then passed through ReLU, little information is lost; but if it is only embedded into 2–5 dimensions before ReLU, the information loss is severe.

Therefore, the convolution layer that performs the dimensionality reduction is not followed by a non-linear activation such as ReLU.

As for how ReLU loses features: ReLU outputs zero for negative inputs, and dimensionality reduction is itself a form of feature compression, so applying ReLU on top of it makes the feature loss even more severe.
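
To see this effect numerically, here is a small NumPy sketch (my own illustration under simplified assumptions, not the exact script from the paper): embed 2-D points into n dimensions with a random matrix, apply ReLU, and then try to recover each point from the coordinates that survived:

import numpy as np

np.random.seed(0)
x = np.random.randn(2, 1000)                      # 1000 points on a 2-D "manifold"

for n in [2, 3, 5, 15, 30]:
    T = np.random.randn(n, 2)                     # random embedding into n dimensions
    y = np.maximum(T @ x, 0)                      # ReLU in the embedded space
    x_rec = np.zeros_like(x)
    for j in range(x.shape[1]):
        active = y[:, j] > 0                      # coordinates that survived the ReLU
        if active.sum() >= 2:                     # enough equations left to recover the point
            x_rec[:, j] = np.linalg.lstsq(T[active], y[active, j], rcond=None)[0]
    err = np.linalg.norm(x - x_rec) / np.linalg.norm(x)
    print('n = %2d, relative reconstruction error = %.3f' % (n, err))

With n = 15 or 30 almost every point can be recovered, while with only 2 to 5 dimensions a large fraction of the information is irrecoverably lost; this is exactly why the low-channel projection layer is kept linear.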

(2) Inverted Residuals

First, why invert the residual?

Because MobileNetV2 replaces the bottleneck of the residual block with a depthwise separable convolution, which has fewer parameters and therefore extracts relatively fewer features; if the channels were first compressed, as in a standard residual block, even fewer features could be extracted. The block therefore expands the channels first, applies the depthwise convolution in the wider space, and only then projects back down.
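
As a concrete trace of the ordering (the sizes below are hypothetical, using the expansion factor of 6 that the implementation below also defaults to):

# A minimal shape trace of one inverted residual block (hypothetical sizes).
h, w, c_in, c_out, t, s = 56, 56, 24, 32, 6, 2

print('input                 :', (h, w, c_in))
print('1x1 expansion, ReLU6  :', (h, w, c_in * t))            # widen to 24 * 6 = 144 channels
print('3x3 depthwise, ReLU6  :', (h // s, w // s, c_in * t))  # spatial stride s
print('1x1 projection, linear:', (h // s, w // s, c_out))     # no ReLU: the linear bottleneck

The shortcut (adding the block input back) is only applied when the stride is 1, since otherwise the spatial sizes no longer match.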

Model structure

Comparison with MobileNetV1 and ShuffleNet:

 

Simple implementation:

import tensorflow as tf  # TensorFlow 1.x (the code below uses tf.contrib and tf.layers)


# MobileNetV2 network definition
def mobilenet_v2_func_blocks(is_training):
    # `const` is assumed to be the project's configuration module
    assert const.use_batch_norm == True
    filter_initializer = tf.contrib.layers.xavier_initializer()
    activation_func = tf.nn.relu6  # MobileNetV2 uses ReLU6 as its non-linearity

    # standard convolution: conv2d + batch norm + ReLU6
    def conv2d(inputs, filters, kernel_size, stride, scope=''):
        with tf.variable_scope(scope):
            with tf.variable_scope('conv2d'):
                outputs = tf.layers.conv2d(inputs, filters, kernel_size, strides=(stride, stride),
                                           padding='same', activation=None, use_bias=False,
                                           kernel_initializer=filter_initializer)
                outputs = tf.layers.batch_normalization(outputs, training=is_training)
                outputs = activation_func(outputs)  # ReLU6, matching the rest of the network

            return outputs

    # 1x1 convolution + batch norm, no activation
    # (used on the shortcut branch to match channel counts before the addition)
    def _1x1_conv2d(inputs, filters, stride):
        kernel_size = [1, 1]
        with tf.variable_scope('1x1_conv2d'):
            outputs = tf.layers.conv2d(inputs, filters, kernel_size, strides=(stride, stride),
                                       padding='same', activation=None, use_bias=False,
                                       kernel_initializer=filter_initializer)
            outputs = tf.layers.batch_normalization(outputs, training=is_training)
        return outputs

    # expansion layer: 1x1 convolution that widens the channels by a factor of `expansion`,
    # followed by batch norm and ReLU6
    def expansion_conv2d(inputs, expansion, stride):
        input_shape = inputs.get_shape().as_list()
        assert len(input_shape) == 4
        filters = input_shape[3] * expansion

        kernel_size = [1, 1]
        with tf.variable_scope('expansion_1x1_conv2d'):
            outputs = tf.layers.conv2d(inputs, filters, kernel_size, strides=(stride, stride),
                                       padding='same', activation=None, use_bias=False,
                                       kernel_initializer=filter_initializer)
            outputs = tf.layers.batch_normalization(outputs, training=is_training)
            outputs = activation_func(outputs)
        return outputs

    # projection layer: 1x1 convolution back down to `filters` channels;
    # this is the linear bottleneck, so no activation follows
    def projection_conv2d(inputs, filters, stride):
        kernel_size = [1, 1]
        with tf.variable_scope('projection_1x1_conv2d'):
            outputs = tf.layers.conv2d(inputs, filters, kernel_size, strides=(stride, stride),
                                       padding='same', activation=None, use_bias=False,
                                       kernel_initializer=filter_initializer)
            outputs = tf.layers.batch_normalization(outputs, training=is_training)
        return outputs

    # depthwise 3x3 convolution: separable_conv2d with num_outputs=None performs only
    # the depthwise step (no pointwise part), followed by batch norm and ReLU6
    def depthwise_conv2d(inputs, depthwise_conv_kernel_size, stride):
        with tf.variable_scope('depthwise_conv2d'):
            outputs = tf.contrib.layers.separable_conv2d(
                inputs,
                None,
                depthwise_conv_kernel_size,
                depth_multiplier=1,
                stride=(stride, stride),
                padding='SAME',
                activation_fn=None,
                weights_initializer=filter_initializer,
                biases_initializer=None)
            outputs = tf.layers.batch_normalization(outputs, training=is_training)
            outputs = activation_func(outputs)  # ReLU6
        return outputs

    # global average pooling over the entire spatial extent of the feature map
    def avg_pool2d(inputs, scope=''):
        inputs_shape = inputs.get_shape().as_list()
        assert len(inputs_shape) == 4

        pool_height = inputs_shape[1]
        pool_width = inputs_shape[2]

        with tf.variable_scope(scope):
            outputs = tf.layers.average_pooling2d(inputs, [pool_height, pool_width],
                                                  strides=(1, 1), padding='valid')

        return outputs

    # inverted residual block: expansion (1x1, ReLU6) -> depthwise (3x3, ReLU6)
    # -> projection (1x1, linear), with a shortcut connection when stride == 1
    def inverted_residual_block(inputs, filters, stride, expansion=6, scope=''):
        assert stride == 1 or stride == 2
        depthwise_conv_kernel_size = [3, 3]
        pointwise_conv_filters = filters

        with tf.variable_scope(scope):
            net = inputs
            net = expansion_conv2d(net, expansion, stride=1)
            net = depthwise_conv2d(net, depthwise_conv_kernel_size, stride=stride)
            net = projection_conv2d(net, pointwise_conv_filters, stride=1)

            if stride == 1:
                # if the output channel count differs from the input channel count,
                # use a 1x1 convolution to match the channels before the element-wise addition
                if net.get_shape().as_list()[3] != inputs.get_shape().as_list()[3]:
                    inputs = _1x1_conv2d(inputs, net.get_shape().as_list()[3], stride=1)

                net = net + inputs
                return net
            else:
                # stride == 2
                return net

    func_blocks = {}
    func_blocks['conv2d'] = conv2d
    func_blocks['inverted_residual_block'] = inverted_residual_block
    func_blocks['avg_pool2d'] = avg_pool2d
    func_blocks['filter_initializer'] = filter_initializer
    func_blocks['activation_func'] = activation_func

    return func_blocks



# MobileNetV2 backbone: assembles the building blocks above into the full network
def mobilenet_v2(inputs, is_training):
    assert const.use_batch_norm == True

    func_blocks = mobilenet_v2_func_blocks(is_training)
    _conv2d = func_blocks['conv2d']
    _inverted_residual_block = func_blocks['inverted_residual_block']
    _avg_pool2d = func_blocks['avg_pool2d']

    with tf.variable_scope('mobilenet_v2', 'mobilenet_v2', [inputs]):
        end_points = {}
        net = inputs

        net = _conv2d(net, 32, [3, 3], stride=2, scope='block0_0')  # size/2
        end_points['block0'] = net
        print('!! debug block0, net shape is: {}'.format(net.get_shape()))

        net = _inverted_residual_block(net, 16, stride=1, expansion=1, scope='block1_0')
        end_points['block1'] = net
        print('!! debug block1, net shape is: {}'.format(net.get_shape()))

        net = _inverted_residual_block(net, 24, stride=2, scope='block2_0')  # size/4
        net = _inverted_residual_block(net, 24, stride=1, scope='block2_1')
        end_points['block2'] = net
        print('!! debug block2, net shape is: {}'.format(net.get_shape()))

        net = _inverted_residual_block(net, 32, stride=2, scope='block3_0')  # size/8
        net = _inverted_residual_block(net, 32, stride=1, scope='block3_1')
        net = _inverted_residual_block(net, 32, stride=1, scope='block3_2')
        end_points['block3'] = net
        print('!! debug block3, net shape is: {}'.format(net.get_shape()))

        net = _inverted_residual_block(net, 64, stride=2, scope='block4_0')  # size/16
        net = _inverted_residual_block(net, 64, stride=1, scope='block4_1')
        net = _inverted_residual_block(net, 64, stride=1, scope='block4_2')
        net = _inverted_residual_block(net, 64, stride=1, scope='block4_3')
        end_points['block4'] = net
        print('!! debug block4, net shape is: {}'.format(net.get_shape()))

        net = _inverted_residual_block(net, 96, stride=1, scope='block5_0')
        net = _inverted_residual_block(net, 96, stride=1, scope='block5_1')
        net = _inverted_residual_block(net, 96, stride=1, scope='block5_2')
        end_points['block5'] = net
        print('!! debug block5, net shape is: {}'.format(net.get_shape()))

        net = _inverted_residual_block(net, 160, stride=2, scope='block6_0')  # size/32
        net = _inverted_residual_block(net, 160, stride=1, scope='block6_1')
        net = _inverted_residual_block(net, 160, stride=1, scope='block6_2')
        end_points['block6'] = net
        print('!! debug block6, net shape is: {}'.format(net.get_shape()))

        net = _inverted_residual_block(net, 320, stride=1, scope='block7_0')
        end_points['block7'] = net
        print('!! debug block7, net shape is: {}'.format(net.get_shape()))

        net = _conv2d(net, 1280, [1, 1], stride=1, scope='block8_0')
        end_points['block8'] = net
        print('!! debug block8, net shape is: {}'.format(net.get_shape()))

        output = _avg_pool2d(net, scope='output')
        print('!! debug after avg_pool, net shape is: {}'.format(output.get_shape()))

    return output, end_points
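
A minimal usage sketch (TensorFlow 1.x style, matching the code above; the tiny const class is just a stand-in I am assuming for the project's real configuration module):

import numpy as np
import tensorflow as tf

class const:                          # stand-in for the project's configuration module (assumption)
    use_batch_norm = True

inputs = tf.placeholder(tf.float32, [None, 224, 224, 3], name='inputs')
output, end_points = mobilenet_v2(inputs, is_training=False)
print(output.get_shape())             # expected: (?, 1, 1, 1280) after the global average pooling

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    feat = sess.run(output, feed_dict={inputs: np.zeros((1, 224, 224, 3), np.float32)})
    print(feat.shape)                 # (1, 1, 1, 1280)

For an actual classifier, a 1x1 convolution (or a dense layer) mapping the 1280-dimensional feature to the number of classes would be appended after the average pooling.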