  • Squeeze Net

    2021-06-11 12:08:29

    1602_Squeeze Net

    Figure: (Fire module structure diagrams omitted)

    Network description:

    SqueezeNet was published at ICLR 2017 by authors from Berkeley and Stanford. SqueezeNet is not a model-compression technique but rather a set of "design strategies for CNN architectures with few parameters". Proposed by Iandola et al., it is a lightweight and efficient CNN model with roughly 50x fewer parameters than AlexNet, yet accuracy close to AlexNet's. The core of SqueezeNet is the Fire module, which consists of two layers: a squeeze layer followed by an expand layer. As shown in the figure above, the squeeze layer is a convolution layer with 1x1 kernels, while the expand layer contains both 1x1 and 3x3 kernels; the feature maps produced by the 1x1 and 3x3 convolutions in the expand layer are concatenated.
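    To make the savings concrete, here is a small arithmetic sketch (not from the post) comparing the weight count of the first Fire module of SqueezeNet v1.0 (input 96 channels, squeeze 16, expand 64 + 64) with a plain 3x3 convolution mapping 96 to 128 channels:

```python
# Weight counts (biases omitted) for fire2 of SqueezeNet v1.0:
# squeeze 1x1: 96 -> 16, expand 1x1: 16 -> 64, expand 3x3: 16 -> 64,
# with the two expand outputs concatenated into 128 channels.
squeeze   = 1 * 1 * 96 * 16     # 1,536 weights
expand1x1 = 1 * 1 * 16 * 64     # 1,024 weights
expand3x3 = 3 * 3 * 16 * 64     # 9,216 weights
fire_total = squeeze + expand1x1 + expand3x3   # 11,776 weights

# A plain 3x3 convolution with the same input/output channels:
plain_3x3 = 3 * 3 * 96 * 128    # 110,592 weights

print(round(plain_3x3 / fire_total, 1))  # -> 9.4 (times fewer weights)
```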

    Features and advantages:

    (1) Proposes the new Fire module architecture, compressing the model by reducing its parameter count.

    (2) Applies additional compression methods to compress the proposed SqueezeNet model further.

    (3) Explores the design space, mainly studying the effect of the squeeze ratio and the proportion of 3x3 filters.

    (4) More efficient distributed training: a small model has fewer parameters, so network communication is reduced.

    (5) Easier model updates: a small model is easier to ship to client programs.

    (6) Feasible to deploy on memory-limited hardware such as FPGAs. Studying small models is therefore of real practical value.

    Design strategies: SqueezeNet uses the following three strategies to reduce its parameter count:

    1. Replace 3x3 convolutions with 1x1 convolutions: the parameter count drops to 1/9 of the original.

    2. Decrease the number of input channels to the 3x3 filters: this is implemented with the squeeze layers.

    3. Downsample late in the network, so that convolution layers have large activation maps: larger activation maps retain more information and yield higher classification accuracy.
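    The effect of delayed downsampling can be sketched numerically (the input size and pool placements below are illustrative, not taken from the post): each stride-2 stage halves the spatial resolution, so moving pooling later keeps larger activation maps through more of the network.

```python
# Each stride-2 stage (conv or max-pool) roughly halves the spatial size.
def map_size(input_size, n_stride2_stages):
    size = input_size
    for _ in range(n_stride2_stages):
        size //= 2
    return size

# A 224x224 input after three early stride-2 stages vs. only one:
early = map_size(224, 3)   # 28  -> small activation maps early on
late = map_size(224, 1)    # 112 -> delayed pooling keeps larger maps
print(early, late, (late * late) // (early * early))  # 16x more activations
```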

    Code:

    Keras implementation (the imports and the layer-name / weight-URL constants below are module-level definitions in the original keras-squeezenet repository, added here so the snippet is self-contained):
    import warnings

    from keras import backend as K
    from keras.models import Model
    from keras.layers import (Input, Activation, Convolution2D, MaxPooling2D,
                              Dropout, GlobalAveragePooling2D,
                              GlobalMaxPooling2D, concatenate)
    from keras.utils import get_file
    from keras.utils import layer_utils
    from keras.engine.topology import get_source_inputs
    from keras.applications.imagenet_utils import _obtain_input_shape

    # Module-level constants from the original keras-squeezenet source
    sq1x1 = "squeeze1x1"
    exp1x1 = "expand1x1"
    exp3x3 = "expand3x3"
    relu = "relu_"
    WEIGHTS_PATH = ('https://github.com/rcmalli/keras-squeezenet/releases/'
                    'download/v1.0/squeezenet_weights_tf_dim_ordering_tf_kernels.h5')
    WEIGHTS_PATH_NO_TOP = ('https://github.com/rcmalli/keras-squeezenet/releases/'
                           'download/v1.0/squeezenet_weights_tf_dim_ordering_tf_kernels_notop.h5')


    def fire_module(x, fire_id, squeeze=16, expand=64):
        s_id = 'fire' + str(fire_id) + '/'
    
        if K.image_data_format() == 'channels_first':
            channel_axis = 1
        else:
            channel_axis = 3
        
        x = Convolution2D(squeeze, (1, 1), padding='valid', name=s_id + sq1x1)(x)
        x = Activation('relu', name=s_id + relu + sq1x1)(x)
    
        left = Convolution2D(expand, (1, 1), padding='valid', name=s_id + exp1x1)(x)
        left = Activation('relu', name=s_id + relu + exp1x1)(left)
    
        right = Convolution2D(expand, (3, 3), padding='same', name=s_id + exp3x3)(x)
        right = Activation('relu', name=s_id + relu + exp3x3)(right)
    
        x = concatenate([left, right], axis=channel_axis, name=s_id + 'concat')
        return x
    
    
    # Original SqueezeNet from paper.
    def SqueezeNet(include_top=True, weights='imagenet',
                   input_tensor=None, input_shape=None,
                   pooling=None,
                   classes=1000):
        """Instantiates the SqueezeNet architecture.
        """
            
        if weights not in {'imagenet', None}:
            raise ValueError('The `weights` argument should be either '
                             '`None` (random initialization) or `imagenet` '
                             '(pre-training on ImageNet).')
    
        if weights == 'imagenet' and classes != 1000:
            raise ValueError('If using `weights` as imagenet with `include_top`'
                             ' as true, `classes` should be 1000')
    
        input_shape = _obtain_input_shape(input_shape,
                                          default_size=227,
                                          min_size=48,
                                          data_format=K.image_data_format(),
                                          require_flatten=include_top)
    
        if input_tensor is None:
            img_input = Input(shape=input_shape)
        else:
            if not K.is_keras_tensor(input_tensor):
                img_input = Input(tensor=input_tensor, shape=input_shape)
            else:
                img_input = input_tensor
    
    
        x = Convolution2D(64, (3, 3), strides=(2, 2), padding='valid', name='conv1')(img_input)
        x = Activation('relu', name='relu_conv1')(x)
        x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2), name='pool1')(x)
    
        x = fire_module(x, fire_id=2, squeeze=16, expand=64)
        x = fire_module(x, fire_id=3, squeeze=16, expand=64)
        x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2), name='pool3')(x)
    
        x = fire_module(x, fire_id=4, squeeze=32, expand=128)
        x = fire_module(x, fire_id=5, squeeze=32, expand=128)
        x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2), name='pool5')(x)
    
        x = fire_module(x, fire_id=6, squeeze=48, expand=192)
        x = fire_module(x, fire_id=7, squeeze=48, expand=192)
        x = fire_module(x, fire_id=8, squeeze=64, expand=256)
        x = fire_module(x, fire_id=9, squeeze=64, expand=256)
        
        if include_top:
            # It's not obvious where to cut the network... 
            # Could do the 8th or 9th layer... some work recommends cutting earlier layers.
        
            x = Dropout(0.5, name='drop9')(x)
    
            x = Convolution2D(classes, (1, 1), padding='valid', name='conv10')(x)
            x = Activation('relu', name='relu_conv10')(x)
            x = GlobalAveragePooling2D()(x)
            x = Activation('softmax', name='loss')(x)
        else:
            if pooling == 'avg':
                x = GlobalAveragePooling2D()(x)
        elif pooling == 'max':
            x = GlobalMaxPooling2D()(x)
        elif pooling is None:
            pass
        else:
            raise ValueError("Unknown argument for 'pooling'=" + pooling)
    
        # Ensure that the model takes into account
        # any potential predecessors of `input_tensor`.
        if input_tensor is not None:
            inputs = get_source_inputs(input_tensor)
        else:
            inputs = img_input
    
        model = Model(inputs, x, name='squeezenet')
    
        # load weights
        if weights == 'imagenet':
            if include_top:
                weights_path = get_file('squeezenet_weights_tf_dim_ordering_tf_kernels.h5',
                                        WEIGHTS_PATH,
                                        cache_subdir='models')
            else:
                weights_path = get_file('squeezenet_weights_tf_dim_ordering_tf_kernels_notop.h5',
                                        WEIGHTS_PATH_NO_TOP,
                                        cache_subdir='models')
                
            model.load_weights(weights_path)
            if K.backend() == 'theano':
                layer_utils.convert_all_kernels_in_model(model)
    
            if K.image_data_format() == 'channels_first':
    
                if K.backend() == 'tensorflow':
                    warnings.warn('You are using the TensorFlow backend, yet you '
                                  'are using the Theano '
                                  'image data format convention '
                                  '(`image_data_format="channels_first"`). '
                                  'For best performance, set '
                                  '`image_data_format="channels_last"` in '
                                  'your Keras config '
                                  'at ~/.keras/keras.json.')
        return model
    
    PyTorch implementation (torch imports added so the snippet is self-contained):
    import torch
    import torch.nn as nn
    import torch.nn.init as init


    class Fire(nn.Module):
     
        def __init__(self, inplanes, squeeze_planes,
                     expand1x1_planes, expand3x3_planes):
            super(Fire, self).__init__()
            self.inplanes = inplanes
            self.squeeze = nn.Conv2d(inplanes, squeeze_planes, kernel_size=1)
            self.squeeze_activation = nn.ReLU(inplace=True)
            self.expand1x1 = nn.Conv2d(squeeze_planes, expand1x1_planes,
                                       kernel_size=1)
            self.expand1x1_activation = nn.ReLU(inplace=True)
            self.expand3x3 = nn.Conv2d(squeeze_planes, expand3x3_planes,
                                       kernel_size=3, padding=1)
            self.expand3x3_activation = nn.ReLU(inplace=True)
     
        def forward(self, x):
            x = self.squeeze_activation(self.squeeze(x))
            return torch.cat([
                self.expand1x1_activation(self.expand1x1(x)),
                self.expand3x3_activation(self.expand3x3(x))
            ], 1)
     
     
    class SqueezeNet(nn.Module):
     
        def __init__(self, version='1_0', num_classes=1000):
            super(SqueezeNet, self).__init__()
            self.num_classes = num_classes
            if version == '1_0':
                self.features = nn.Sequential(
                    nn.Conv2d(3, 96, kernel_size=7, stride=2),
                    nn.ReLU(inplace=True),
                    nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
                    Fire(96, 16, 64, 64),
                    Fire(128, 16, 64, 64),
                    Fire(128, 32, 128, 128),
                    nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
                    Fire(256, 32, 128, 128),
                    Fire(256, 48, 192, 192),
                    Fire(384, 48, 192, 192),
                    Fire(384, 64, 256, 256),
                    nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
                    Fire(512, 64, 256, 256),
                )
            elif version == '1_1':
                self.features = nn.Sequential(
                    nn.Conv2d(3, 64, kernel_size=3, stride=2),
                    nn.ReLU(inplace=True),
                    nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
                    Fire(64, 16, 64, 64),
                    Fire(128, 16, 64, 64),
                    nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
                    Fire(128, 32, 128, 128),
                    Fire(256, 32, 128, 128),
                    nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
                    Fire(256, 48, 192, 192),
                    Fire(384, 48, 192, 192),
                    Fire(384, 64, 256, 256),
                    Fire(512, 64, 256, 256),
                )
            else:
                # FIXME: Is this needed? SqueezeNet should only be called from the
                # FIXME: squeezenet1_x() functions
                # FIXME: This checking is not done for the other models
                raise ValueError("Unsupported SqueezeNet version {version}:"
                                 "1_0 or 1_1 expected".format(version=version))
     
            # Final convolution is initialized differently from the rest
            final_conv = nn.Conv2d(512, self.num_classes, kernel_size=1)
            self.classifier = nn.Sequential(
                nn.Dropout(p=0.5),
                final_conv,
                nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d((1, 1))
            )
     
            for m in self.modules():
                if isinstance(m, nn.Conv2d):
                    if m is final_conv:
                        init.normal_(m.weight, mean=0.0, std=0.01)
                    else:
                        init.kaiming_uniform_(m.weight)
                    if m.bias is not None:
                        init.constant_(m.bias, 0)
     
        def forward(self, x):
            x = self.features(x)
            x = self.classifier(x)
            return torch.flatten(x, 1)
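    As a sanity check on the v1_0 specification above, the total parameter count can be reproduced with plain arithmetic (a k x k convolution has k*k*c_in*c_out weights plus c_out biases); the result matches the commonly quoted ~1.25M parameters for SqueezeNet v1.0:

```python
def conv_p(k, c_in, c_out):
    # weights + biases of a k x k convolution layer
    return k * k * c_in * c_out + c_out

def fire_p(c_in, s, e1, e3):
    # squeeze 1x1, expand 1x1 and expand 3x3, as in the Fire module above
    return conv_p(1, c_in, s) + conv_p(1, s, e1) + conv_p(3, s, e3)

# (in_channels, squeeze, expand1x1, expand3x3) for fire2..fire9 of v1_0
fires = [(96, 16, 64, 64), (128, 16, 64, 64), (128, 32, 128, 128),
         (256, 32, 128, 128), (256, 48, 192, 192), (384, 48, 192, 192),
         (384, 64, 256, 256), (512, 64, 256, 256)]

total = conv_p(7, 3, 96)                 # conv1: 7x7, 3 -> 96 channels
total += sum(fire_p(*f) for f in fires)  # the eight Fire modules
total += conv_p(1, 512, 1000)            # conv10 classifier (1000 classes)
print(total)  # -> 1248424, i.e. about 1.25M parameters
```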
    
  • SqueezeNet

    2020-06-15 17:41:07

    Paper : SqueezeNet: AlexNet-Level Accuracy with 50x Fewer Parameters and <0.5MB Model Size
    Code : torchvision

    Abstract

    SqueezeNet is a convolutional network model that puts the emphasis on model compression: its accuracy is comparable to AlexNet's, but its parameter count is only 1/50 of AlexNet's, and with model-compression techniques SqueezeNet can be compressed to under 0.5 MB. Although the parameters are nominally reduced 50-fold, AlexNet's fully connected layers are themselves enormous, so much of that 50x reduction owes little to SqueezeNet's design per se. The <0.5 MB model size relies on compression techniques; without them, the improvement obtainable from the network structure alone is not that large.

    Network structure

    SqueezeNet's model compression uses three strategies:

    1. Replace 3x3 convolutions with 1x1 convolutions
    2. Decrease the number of channels feeding the 3x3 convolutions
    3. Downsample late: the authors argue that larger feature maps carry more information, so downsampling is moved toward the classification layers. This improves the network's accuracy but increases its computational cost.

    Fire module: SqueezeNet is built from Fire modules combined with the usual convolution, downsampling, and fully connected layers of a convolutional network. A Fire module consists of a squeeze part and an expand part. The squeeze part is a group of 1x1 convolutions; the expand part is a group of 1x1 convolutions together with a group of 3x3 convolutions, and note that the 3x3 convolutions must use "same" padding. Denote the number of 1x1 channels in the squeeze part by $s_{1\times 1}$, and the numbers of 1x1 and 3x3 channels in the expand part by $e_{1\times 1}$ and $e_{3\times 3}$. The authors recommend $s_{1\times 1} < e_{1\times 1} + e_{3\times 3}$, which effectively inserts a bottleneck layer between two 3x3 convolutions; one setting used in the experiments is

    $$s_{1\times 1} = \frac{e_{1\times 1}}{4} = \frac{e_{3\times 3}}{4}$$
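    The Fire-module channel counts used in practice satisfy this rule exactly, which can be checked in a few lines (the triples below are the standard SqueezeNet v1.0 configuration):

```python
# (s_1x1, e_1x1, e_3x3) for the eight Fire modules of SqueezeNet v1.0
configs = [(16, 64, 64), (16, 64, 64), (32, 128, 128), (32, 128, 128),
           (48, 192, 192), (48, 192, 192), (64, 256, 256), (64, 256, 256)]

for s, e1, e3 in configs:
    assert s == e1 // 4 == e3 // 4   # s_1x1 = e_1x1 / 4 = e_3x3 / 4
    assert s < e1 + e3               # the squeeze layer is a bottleneck
print("all Fire modules satisfy the squeeze-ratio rule")
```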

    Visualizations appear in the paper (Fire module diagrams omitted).

    The overall network structures are, respectively, plain SqueezeNet, SqueezeNet with simple shortcuts, and SqueezeNet with complex shortcuts (using 1x1 convolutions for channel alignment). (Architecture figure and parameter table omitted.)

    Key points

    1. The paper proposes the SqueezeNet architecture, extends the role of the 1x1 convolution kernel, and advocates replacing 3x3 kernels with 1x1 kernels to compress parameters.
  • squeezenet

    2019-10-01 10:03:19

    The SqueezeNet model is designed mainly to reduce the number of parameters in CNN models.

    3. Downsampling is performed late in the network, so that earlier convolution layers have larger activation maps.

    Reposted from: https://www.cnblogs.com/pacino12134/p/9795003.html

  • keras-squeezenet: a SqueezeNet v1.1 implementation using the Keras Functional API 2.0. It achieves AlexNet-level accuracy with a small footprint (5.1 MB). # Most Recent One pip install git+https://github.com/rcmalli/keras-squeezenet.git...
  • SQUEEZENET

    2017-11-05 10:32:17

    Compelling Advantages

    • Smaller CNNs require less communication across servers during distributed training, making training more efficient.
    • Smaller CNNs require less bandwidth and overhead to export a new model from the cloud to clients such as an autonomous car.
    • Smaller CNNs are more feasible to deploy on FPGAs and other hardware with limited memory.
    • SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters.

    Architectural design strategies

    • Replace 3x3 filters with 1x1 filters
    • Decrease the number of input channels to 3x3 filters
    • Downsample late in the network so that convolution layers have large activation maps

    Methods

    (figure omitted)

    Architecture

    (figure omitted)

    Other SqueezeNet details

    (figure omitted)

    Experiments

    (figures omitted)

    Others

    • If early layers in the network have large strides, then most layers will have small activation maps.
    • Applying delayed downsampling to four different CNN architectures led, in each case, to higher classification accuracy.
  • ncnn-android-squeezenet: SqueezeNet image classification. This is a sample ncnn Android project that depends only on the ncnn library. How to build and run: step 1, download ncnn-android-vulkan.zip or build ncnn for Android yourself; step 2, extract ncnn-android-vulkan.zip into app...
  • Squeezenet-Matlab-Keras (MATLAB code for SqueezeNet): a SqueezeNet v1.1 pretrained model compatible with the MATLAB function importKerasNetwork. The files were created with Keras 2.0.6. Usage example in MATLAB R2017b: squeezenet = ...
  • DL之SqueezeNet: a detailed explanation of the SqueezeNet architecture. Related article: DL之SqueezeNet, a complete guide with an introduction to the SqueezeNet algorithm (paper overview), architecture details, example applications, and accompanying figures. The SqueezeNet architecture...
  • ncnn-android-squeezenet The squeezenet image classification this is a sample ncnn android project, it depends on ncnn library only https://github.com/Tencent/ncnn how to build and run step1 ...
  • Notes on SqueezeNet

    2020-07-22 15:32:59
    "SqueezeNet": With compute on mobile devices limited, scenarios for on-device deep-learning inference keep multiplying, and model-compression techniques arose in response. SqueezeNet, from the same team as Deep Compression, spread widely as soon as it was published and is regarded as a classic; the paper's influence remains enormous to this day...
  • differnet_squeezenet (source code)

    2021-03-28 02:06:41
    differentnet_squeezenet
  • SqueezeNet ported to PyTorch; mainly for my own learning
  • SqueezeNet (source code)

    2021-05-02 14:10:45
    SqueezeNet_v1.0/squeezenet_v1.0.caffemodel # pretrained model parameters. If you find SqueezeNet useful in your research, please consider citing: @article{SqueezeNet, Author = {Forrest N. Iandola and Song Han and Matthew W...
  • The SqueezeNet pretrained model for image classification is part of the Deep Learning Toolbox in R2020a and does not need to be installed separately. If you are using the R2020a release of the Deep Learning Toolbox, you can type "squeezenet" at the command line or access the model directly, without needing to ... from Deep ...
  • SqueezeNet paper notes

    2018-10-12 14:40:45
    SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. Paper: http://arxiv.org/abs/1602.07360  Code: https://github.com/DeepScale/SqueezeNet ...
  • SqueezeNet model for YOLO

    2018-03-16 11:28:55
    The YOLO object-detection framework combined with the SqueezeNet network model; following the ideas of the SqueezeNet paper, a lightweight neural network for object detection was designed.
  • The SqueezeNet network

    2019-08-20 17:56:59
    1. Slimming down neural networks: SqueezeNet 2. SqueezeNet 3. Deep learning methods (7): the latest SqueezeNet model explained, with CNN parameters reduced 50x and the model compressed 461x! In reference 1, the computation "9×C×(1+C')×N×N for two 3×3 convolution layers" can be decomposed as 3*3*C*N*N + 3...
  • SqueezeNet network principles

    2017-08-16 22:12:48
    SqueezeNet network principles
  • SqueezeNet notes

    2020-04-18 19:09:48
    SqueezeNet and Deep Compression: from LeNet5 to DenseNet, convolutional networks developed in one main direction: higher accuracy. SqueezeNet's contributions cover the following: 1. Proposing the new Fire Module architecture, compressing the model by reducing parameters 2. ...
