  • with poses of faces in the wild, we propose a multi-view detection approach featuring score re-ranking and detection adjustment. Following the learning pipelines in the Viola-Jones framework, the multi...
  • The view must be created on MySQL, not on Mycat, or it throws java.sql.SQLSyntaxErrorException: op table not in schema----VIEW
    • Then configure the XML,
    • and you can SELECT from it.
    • show table status where comment = 'view';
  • Concurrent Spatial and Channel ‘Squeeze & Excitation’ in Fully Convolutional Networks

    PDF: https://arxiv.org/pdf/1803.02579v2.pdf
    PyTorch code: https://github.com/shanglianlm0525/PyTorch-Networks
    PyTorch code: https://github.com/shanglianlm0525/CvPytorch

    1 Overview

    This paper improves on the SE module by designing three SE variants (cSE, sSE, and scSE), which achieve appreciable gains on MRI brain segmentation and CT organ segmentation tasks.


    2 Spatial Squeeze and Channel Excitation Block (cSE)

    This is the original SE block; for details, see the earlier post on the Squeeze-and-Excitation Networks paper and its PyTorch implementation.
    PyTorch code:

    import torch
    import torch.nn as nn

    class cSE_Module(nn.Module):  # named cSE_Module so scSE_Module below can compose it
        def __init__(self, channel, ratio=16):
            super(cSE_Module, self).__init__()
            # spatial squeeze: global average pooling to one value per channel
            self.squeeze = nn.AdaptiveAvgPool2d(1)
            # channel excitation: bottleneck MLP ending in a sigmoid gate
            self.excitation = nn.Sequential(
                    nn.Linear(in_features=channel, out_features=channel // ratio),
                    nn.ReLU(inplace=True),
                    nn.Linear(in_features=channel // ratio, out_features=channel),
                    nn.Sigmoid()
                )
        def forward(self, x):
            b, c, _, _ = x.size()
            y = self.squeeze(x).view(b, c)
            z = self.excitation(y).view(b, c, 1, 1)
            # rescale each channel of x by its learned gate
            return x * z.expand_as(x)
    

    3 Channel Squeeze and Spatial Excitation Block (sSE)

    PyTorch code:

    class sSE_Module(nn.Module):
        def __init__(self, channel):
            super(sSE_Module, self).__init__()
            # channel squeeze: a 1x1 convolution collapses all channels into a
            # single-channel spatial map, followed by a sigmoid gate
            self.spatial_excitation = nn.Sequential(
                    nn.Conv2d(in_channels=channel, out_channels=1, kernel_size=1, stride=1, padding=0),
                    nn.Sigmoid()
                )
        def forward(self, x):
            # rescale every spatial position of x by its learned gate
            z = self.spatial_excitation(x)
            return x * z.expand_as(x)
    

    4 Spatial and Channel Squeeze & Excitation Block (scSE)

    PyTorch code:

    class scSE_Module(nn.Module):
        def __init__(self, channel, ratio=16):
            super(scSE_Module, self).__init__()
            self.cSE = cSE_Module(channel, ratio)
            self.sSE = sSE_Module(channel)

        def forward(self, x):
            # combine channel and spatial recalibration by element-wise addition
            return self.cSE(x) + self.sSE(x)
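
    As a quick sanity check (a minimal sketch, assuming the three modules above are in scope), the block can be dropped into any 4-D feature map without changing its shape:

    x = torch.randn(4, 64, 32, 32)          # a batch of 4 feature maps with 64 channels
    scse = scSE_Module(channel=64, ratio=16)
    y = scse(x)
    print(y.shape)                          # torch.Size([4, 64, 32, 32]), shape preserved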
    

    5 Experimental Results

    [Figure: experimental results from the paper]

  • ResNet Architecture Analysis


    Source Code

    torchvision ships with an implementation of ResNet. Below we walk through that source to understand the ResNet architecture in more depth, which makes it easier to modify the structure and also shows how network source code is organized, useful when building your own networks later.
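
    As one common modification (my own illustration, not part of this walkthrough), once the structure is clear you can, for example, swap the classification head for a different task:

    import torch.nn as nn
    from torchvision.models import resnet50

    model = resnet50(pretrained=True)
    # replace the 1000-class ImageNet head with a 10-class one
    model.fc = nn.Linear(model.fc.in_features, 10)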

    [Figure: ResNet architecture table from the original paper]

    Source Code Walkthrough

    # from torchvision/models/resnet.py; model_zoo, model_urls, ResNet and
    # Bottleneck are defined earlier in that same file
    def resnet50(pretrained=False, **kwargs):
        """Constructs a ResNet-50 model.
        Args:
            pretrained (bool): If True, returns a model pre-trained on ImageNet
        """
        model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs)
        if pretrained:
            model.load_state_dict(model_zoo.load_url(model_urls['resnet50']))
        return model
    

    Starting from this entry point, model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs) builds the network. The two arguments Bottleneck and [3, 4, 6, 3] together determine the ResNet-50 structure; when pretrained is True, ImageNet pre-trained weights are loaded into the model.
    This involves the Bottleneck class. [3, 4, 6, 3] corresponds to the architecture table above: conv2_x stacks three blocks of (1×1, 64; 3×3, 64; 1×1, 256) convolutions, and likewise conv3_x stacks four blocks of (1×1, 128; 3×3, 128; 1×1, 512). ResNet groups its convolutions into four stages, and [3, 4, 6, 3] is the number of times the 1×1, 3×3, 1×1 combination repeats in each stage. In total that is 1 (the initial convolution) + (3+4+6+3)×3 = 49 convolution layers, plus the final fully connected layer, giving the 50 weighted layers of ResNet-50 (the pooling layers carry no weights and are not counted).
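
    A quick check of that count (a toy calculation, not torchvision code):

    blocks = [3, 4, 6, 3]                # Bottlenecks per stage in ResNet-50
    total = 1 + sum(blocks) * 3 + 1      # conv1 + three convs per block + fc
    print(total)                         # 50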

    Here a Bottleneck can be treated as a basic block: the combination of 1×1, 3×3, and 1×1 convolutions shown in the architecture table.
    To explain why the input to the Bottleneck stage is 56 × 56 × 64: ResNet takes a 224 × 224 image. After the first convolution layer
    self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
    the spatial size is floor((224 − 7 + 2×3)/2) + 1 = 112.
    After the first pooling layer
    self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
    it is floor((112 + 2×1 − 3)/2) + 1 = 56.

    So the feature map entering the first Bottleneck is 56 (height) × 56 (width) × 64 (channels).
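
    The output-size formula used above, floor((n + 2p − k)/s) + 1, as a small helper (my own sketch, not from the torchvision source; conv_out_size is a hypothetical name):

    def conv_out_size(n, k, s, p):
        # n: input size, k: kernel size, s: stride, p: padding
        return (n + 2 * p - k) // s + 1

    print(conv_out_size(224, k=7, s=2, p=3))   # 112 after conv1
    print(conv_out_size(112, k=3, s=2, p=1))   # 56 after maxpool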

    
    class Bottleneck(nn.Module):
        expansion = 4

        def __init__(self, inplanes, planes, stride=1, downsample=None):
            super(Bottleneck, self).__init__()
            # 1x1 conv: reduce channels from inplanes to planes
            self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
            self.bn1 = nn.BatchNorm2d(planes)
            # 3x3 conv: stride=2 here halves the feature map (first block of layer2-4)
            self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
                                   padding=1, bias=False)
            self.bn2 = nn.BatchNorm2d(planes)
            # 1x1 conv: expand channels to planes * 4
            self.conv3 = nn.Conv2d(planes, planes * self.expansion, kernel_size=1, bias=False)
            self.bn3 = nn.BatchNorm2d(planes * self.expansion)
            self.relu = nn.ReLU(inplace=True)
            self.downsample = downsample
            self.stride = stride
    

    A 1×1 convolution leaves the feature-map size unchanged, and the 3×3 convolution with padding=1 also preserves it when stride=1, so a stack of such Bottlenecks does not change the spatial size (except where stride=2 is used, which halves it, as we will see in layer2 through layer4).

    Now let's look at the Bottleneck's forward function:

    def forward(self, x):
            residual = x                      # keep the identity branch
            out = self.conv1(x)
            out = self.bn1(out)
            out = self.relu(out)
            out = self.conv2(out)
            out = self.bn2(out)
            out = self.relu(out)
            out = self.conv3(out)
            out = self.bn3(out)
            if self.downsample is not None:
                # project the identity to match out's channels and size
                residual = self.downsample(x)
            out += residual                   # the residual connection
            out = self.relu(out)
            return out
    

    Pay attention to downsample here. The spatial size may be unchanged, but after a Bottleneck the channel count becomes four times planes, so to add the original feature map to the output, it too must be projected to four times the channels; downsample exists to unify the dimensions when residual and the current feature map are added.
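
    A toy illustration (my own example) of why that projection is needed: element-wise addition requires matching shapes, so the identity branch is mapped from 64 to 256 channels by a 1×1 convolution plus BN:

    import torch
    import torch.nn as nn

    x = torch.randn(1, 64, 56, 56)     # input to the first Bottleneck
    out = torch.randn(1, 256, 56, 56)  # Bottleneck output: channels expanded 4x
    proj = nn.Sequential(              # the downsample branch
        nn.Conv2d(64, 256, kernel_size=1, bias=False),
        nn.BatchNorm2d(256),
    )
    print((out + proj(x)).shape)       # torch.Size([1, 256, 56, 56])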

    Next, let's analyze how the ResNet class itself is built:

    class ResNet(nn.Module):

        def __init__(self, block, layers, num_classes=1000):
            self.inplanes = 64
            super(ResNet, self).__init__()
            # the stem: 7x7 conv + BN + ReLU + 3x3 max pool
            self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,
                                   bias=False)
            self.bn1 = nn.BatchNorm2d(64)
            self.relu = nn.ReLU(inplace=True)
            self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
            # the four stages; only layer1 keeps stride 1
            self.layer1 = self._make_layer(block, 64, layers[0])
            self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
            self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
            self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
            # the head: 7x7 average pool + fully connected classifier
            self.avgpool = nn.AvgPool2d(7, stride=1)
            self.fc = nn.Linear(512 * block.expansion, num_classes)

            # weight initialization
            for m in self.modules():
                if isinstance(m, nn.Conv2d):
                    nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
                elif isinstance(m, nn.BatchNorm2d):
                    nn.init.constant_(m.weight, 1)
                    nn.init.constant_(m.bias, 0)

        def _make_layer(self, block, planes, blocks, stride=1):
            downsample = None
            # a projection is needed whenever the identity branch would not
            # match the block output in spatial size or channel count
            if stride != 1 or self.inplanes != planes * block.expansion:
                downsample = nn.Sequential(
                    nn.Conv2d(self.inplanes, planes * block.expansion,
                              kernel_size=1, stride=stride, bias=False),
                    nn.BatchNorm2d(planes * block.expansion),
                )

            layers = []
            # only the first block of a stage strides and downsamples
            layers.append(block(self.inplanes, planes, stride, downsample))
            self.inplanes = planes * block.expansion
            for i in range(1, blocks):
                layers.append(block(self.inplanes, planes))

            return nn.Sequential(*layers)
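
    To see the stage-by-stage shapes this produces, here is a hypothetical probe that uses torchvision's public resnet50 (the classes above are only excerpts):

    import torch
    from torchvision.models import resnet50

    model = resnet50(pretrained=False)
    x = torch.randn(1, 3, 224, 224)
    x = model.maxpool(model.relu(model.bn1(model.conv1(x))))
    for name in ['layer1', 'layer2', 'layer3', 'layer4']:
        x = getattr(model, name)(x)
        print(name, tuple(x.shape))
    # layer1 (1, 256, 56, 56)
    # layer2 (1, 512, 28, 28)
    # layer3 (1, 1024, 14, 14)
    # layer4 (1, 2048, 7, 7)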
    
    

    ResNet, too, consists of __init__ and forward. For ease of analysis we start with __init__, whose most important piece is the _make_layer function. Take layer1 as an example:
    block is Bottleneck, planes=64 (the base channel count), and blocks=3 ([3, 4, 6, 3] gives the number of blocks in each stage). Note that layer1 uses stride 1 while the other layers use stride 2.
    For layer1, inplanes=64 and planes * block.expansion=64×4=256, so a downsample branch is required for the residual to be addable to the feature map leaving the stage; downsample is the right-hand branch.

    Even so, this may not be fully clear; the most direct approach is to trace what happens in each layer.
    layer1:
    Input: [batch_size, 56, 56, 64] (a 56 × 56 feature map with 64 channels)
    Here self.inplanes=64 while planes * block.expansion = 64×4 = 256, so the two are unequal. (They must be equal because the last convolution in a Bottleneck expands the channels to planes × block.expansion; if inplanes, i.e. the input channel count, differs from that, the branches cannot be added.)
    So the right-hand downsample branch is constructed: a 1×1 convolution that expands the channels, plus a BN layer.

    Constructing the first of layer1's three Bottlenecks:

    the main branch (1×1, 64 → 3×3, 64 → 1×1, 256) together with the downsample branch (a 1×1 convolution, 64 → 256, plus BN).
    Then inplanes is updated to 64×4 = 256.

    Constructing the second of layer1's three Bottlenecks:

    self.conv1 = nn.Conv2d(256, 64, kernel_size=1, bias=False)
    self.bn1 = nn.BatchNorm2d(64)
    self.conv2 = nn.Conv2d(64, 64, kernel_size=3, stride=1,
                           padding=1, bias=False)
    self.bn2 = nn.BatchNorm2d(64)
    self.conv3 = nn.Conv2d(64, 256, kernel_size=1, bias=False)
    self.bn3 = nn.BatchNorm2d(256)
    self.relu = nn.ReLU(inplace=True)
    self.downsample = None
    self.stride = 1

    Constructing the third of layer1's three Bottlenecks: identical to the second.

    Constructing the first of layer2's four Bottlenecks:

    Now stride=2, self.inplanes=256, and planes * block.expansion=128×4=512,
    so a downsample layer must be generated:
    downsample = nn.Sequential(
        nn.Conv2d(256, 512, kernel_size=1, stride=2, bias=False),
        nn.BatchNorm2d(512),
    )

    Generating the first Bottleneck's main branch:
    self.conv1 = nn.Conv2d(256, 128, kernel_size=1, bias=False)
    self.bn1 = nn.BatchNorm2d(128)
    self.conv2 = nn.Conv2d(128, 128, kernel_size=3, stride=2,  # feature map shrinks from 56 to 28 here
                           padding=1, bias=False)
    self.bn2 = nn.BatchNorm2d(128)
    self.conv3 = nn.Conv2d(128, 512, kernel_size=1, bias=False)
    self.bn3 = nn.BatchNorm2d(512)
    self.relu = nn.ReLU(inplace=True)
    self.downsample = downsample
    self.stride = 2

    Constructing the second, third, and fourth of layer2's four Bottlenecks:

    inplanes is updated to 512
    self.conv1 = nn.Conv2d(512, 128, kernel_size=1, bias=False)
    self.bn1 = nn.BatchNorm2d(128)
    self.conv2 = nn.Conv2d(128, 128, kernel_size=3, stride=1,
                           padding=1, bias=False)
    self.bn2 = nn.BatchNorm2d(128)
    self.conv3 = nn.Conv2d(128, 512, kernel_size=1, bias=False)
    self.bn3 = nn.BatchNorm2d(512)
    self.relu = nn.ReLU(inplace=True)
    self.downsample = None
    self.stride = 1

    Constructing the first of layer3's six Bottlenecks:

    layer3 and layer4 work the same way as layer2.
    The first block of each stage halves the feature map;
    it generates a 1×1-convolution branch with in_channels=inplanes and out_channels=4×planes to downsample the input feature map,
    and a 1×1, 3×3 (stride=2), 1×1 convolution group that processes the feature map and is added to that branch:

    (0): Bottleneck(
      (conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace)
      (downsample): Sequential(
        (0): Conv2d(512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )

    All remaining Bottlenecks in the stage have the following structure:

    (1): Bottleneck(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace)
    )

    The overall structure is as follows:

    (net): ResNet(
        (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
        (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace)
        (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
        (layer1): Sequential(
          (0): Bottleneck(
            (conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (relu): ReLU(inplace)
            (downsample): Sequential(
              (0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
              (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            )
          )
          (1): Bottleneck(
            (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (relu): ReLU(inplace)
          )
          (2): Bottleneck(
            (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (relu): ReLU(inplace)
          )
        )
        (layer2): Sequential(
          (0): Bottleneck(
            (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
            (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (relu): ReLU(inplace)
            (downsample): Sequential(
              (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
              (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            )
          )
          (1): Bottleneck(
            (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (relu): ReLU(inplace)
          )
          (2): Bottleneck(
            (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (relu): ReLU(inplace)
          )
          (3): Bottleneck(
            (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (relu): ReLU(inplace)
          )
        )
        (layer3): Sequential(
          (0): Bottleneck(
            (conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
            (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (relu): ReLU(inplace)
            (downsample): Sequential(
              (0): Conv2d(512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False)
              (1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            )
          )
          (1): Bottleneck(
            (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (relu): ReLU(inplace)
          )
          (2): Bottleneck(
            (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (relu): ReLU(inplace)
          )
          (3): Bottleneck(
            (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (relu): ReLU(inplace)
          )
          (4): Bottleneck(
            (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (relu): ReLU(inplace)
          )
          (5): Bottleneck(
            (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (relu): ReLU(inplace)
          )
        )
        (layer4): Sequential(
          (0): Bottleneck(
            (conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
            (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (relu): ReLU(inplace)
            (downsample): Sequential(
              (0): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False)
              (1): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            )
          )
          (1): Bottleneck(
            (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (relu): ReLU(inplace)
          )
          (2): Bottleneck(
            (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (relu): ReLU(inplace)
          )
        )
        (avgpool): AvgPool2d(kernel_size=7, stride=1, padding=0)  
        (fc): Linear(in_features=2048, out_features=1000, bias=True)
      )
    )
    
    Finally, let's look at ResNet's forward function:
    
    
    def forward(self, x):
        x = self.conv1(x)           # the stem
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)          # the four Bottleneck stages
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)

        x = self.avgpool(x)         # 7x7 -> 1x1
        x = x.view(x.size(0), -1)   # flatten to [batch_size, 2048]
        x = self.fc(x)              # classifier

        return x
    
    Note self.avgpool = nn.AvgPool2d(7, stride=1) and x = self.avgpool(x): with a 224 input, the feature map after layer4 is 7 × 7, so nn.AvgPool2d(7, stride=1) reduces it to 1 × 1.
    In self.fc = nn.Linear(512 * block.expansion, num_classes), the first argument is the total number of output channels (strictly channels × 1 × 1 = 2048) and the second is the number of classes.
    x = x.view(x.size(0), -1) flattens the tensor to [batch_size, channels × 1 × 1].
    If the input size is not 224, you can adjust AvgPool2d accordingly, or multiply the final width and height into the first argument of the fully connected layer.
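
    A minimal sketch of that adaptation (my own suggestion, not part of the source above): replacing the fixed pooling with nn.AdaptiveAvgPool2d makes the head work for any input size:

    import torch
    import torch.nn as nn
    from torchvision.models import resnet50

    model = resnet50(pretrained=False)
    # AdaptiveAvgPool2d(1) always outputs 1x1 regardless of input resolution,
    # so the 2048-dim fc layer keeps working for non-224 inputs
    model.avgpool = nn.AdaptiveAvgPool2d(1)
    out = model(torch.randn(1, 3, 320, 320))
    print(out.shape)                   # torch.Size([1, 1000])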
    

    Ref:[https://blog.csdn.net/jiangpeng59/article/details/79609392](https://blog.csdn.net/jiangpeng59/article/details/79609392)

  • Could not resolve view with name ‘/ getPersonal’ in servlet with name ‘springMvc’

    This is my error:
    Could not resolve view with name ‘/ getPersonal’ in servlet with name ‘springMvc’
    at org.springframework.web.servlet.DispatcherServlet.render(DispatcherServlet.java:1190)

    My fix: check whether the backend actually has this handler method. If it exists but the error persists, restart the project, because annotations, method names, and parameters are only injected when the project initializes. If the method does not exist, check whether the HTML or JSP misspells the requested method name.

  • NIO Study Notes: Channels Explained

    ReadableByteChannel source = Channels.newChannel(System.in); WritableByteChannel dest = Channels.newChannel(System.out); channelCopy1(source, dest); try { source.close(); dest.close(); } catch (IOException e) { e.printStackTrace(); } } ...
  • Building on DiffPool, this paper introduces the idea of multiple channels and proposes multi-channel graph convolutional networks (MuchGCN), which aggregate information across channels in both inter-channel and intra-channel fashion. The paper was accepted at IJCAI 2020...
  • Android Studio: Android Studio Configuration

    Checking "Check for updates in channel" enables automatic update checks; you can also disable them. The list on the right selects the update channel. Stable Channel: the official release channel, which only fetches the latest stable version. Beta Channel: the beta release channel...
  • Multi-Task GANs for View-Specific Feature Learning in Gait Recognition: paper translation and notes. Today I want to try translating a paper I have read. It is not written well yet; I will keep improving it. Abstract: Gait recognition is of ...
  • Recently I used Lettuce to connect to an SSL-enabled Redis cluster and hit the following exception: ... This connection point is not known in the cluster view at com.lambdaworks.redis.cluster.Pooled...
  • UPDATE_INTERVAL=number in minutes (default 60 minutes). Docker: you can start it with docker in one line: docker run --name RSS-TG-BOT -d -e TG=botfathertoken \ -e IV_HASH=rhash_for_telegram_iv \ -e CHANNEL_ID...
  • Embedding a native view (Android); embedding a native view (iOS); talking to native code with MethodChannel, BasicMessageChannel, and EventChannel; adding Flutter to an Android Activity; adding Flutter to an Android Fragment; adding Flutter to ...
  • x = self.pool(F.relu(self.conv2(x))) x = x.view(-1, 2704) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x. Addendum on the difference between torch.nn and torch.nn.functional: the functionality they implement is quite similar,...
  • The source code of Channel is as follows: public interface Channel<E> : SendChannel<E>, ReceiveChannel<E> { ... } Channel's parent interfaces include the message-sending...
  • UnavailableInvalidChannel: The channel is not accessible or is invalid. (base) yongqiang@famu-sys:~$ conda install pytorch==...
  • Cross-Channel Pooling: Principles and Code Implementation

    Ordinary pooling is done independently per channel, only over the spatial dimensions of each feature map, so the channel count is unchanged after pooling. Cross-channel pooling instead operates along the channel dimension: say you have 50 feature maps and want... (a sketch of the idea follows below)
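
    A minimal sketch of the idea (my own illustration; grouping the 50 maps into 10 groups of 5 is an assumption, since the snippet is truncated):

    import torch

    x = torch.randn(1, 50, 28, 28)                 # 50 feature maps
    b, c, h, w = x.shape
    groups = 10
    # take the max over each group of 5 channels: 50 channels -> 10,
    # spatial size unchanged
    y = x.view(b, groups, c // groups, h, w).max(dim=2).values
    print(y.shape)                                 # torch.Size([1, 10, 28, 28])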
  • InputDispatcher receives the events read by InputReader and dispatches them to the corresponding window. InputDispatcher lives in the system_server process, not in the same process as the apps; the link between them is the InputChannel. handleResumeActivity, directly from ActivityThread...
  • Who was Donald Lambert? CCI is known as the trend-following indicator. It measures how far the current price deviates from the average price over a given period, to identify overbought and oversold conditions. CCI (Commodity Channel Index) was created by Donald ... I was lucky enough to find the article he published back then, "Commodity Channel Index: Tool for Tradi...
  • Exception in thread "main" org.apache.spark.sql.AnalysisException: expression 'pay.`pay_channel`' is neither present in the group by, nor is it an aggregate function. Add to group by or wrap in f...
  • Appium: input problem with android.view.View controls in mini programs

    Using python + appium to automate WeChat mini programs, I found that controls whose class is android.view.View can only be clicked; send_keys input fails with the error below. Could someone more experienced advise? For example, on entering "11111", appium immediately reports the following error, which seems related to uiautomator2: ...
  • In view of the characteristics of the meteor burst channel, variable-rate data transmission should be adopted to improve the system's average throughput, which results in channel tracing and ...
  • FULL TEXT TUTORIAL AND NOTES: ...In the first part of the tutorial we covered all the basics and important principles for UDK lightmaps. Now let's go deeper into practical examples and techniq
  • In particular, the authors also compute a similarity matrix $\mathbf{S} \in \mathbb{R}^{n \times n}$ and describe two constructions. Cosine similarity: $\mathbf{S}_{ij} = \frac{\mathbf{x}_{i} \cdot \mathbf{x}_{j}}{|\mathbf{x}_{i}||\mathbf{x}_{j}|}$ ...
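
    A minimal sketch of that cosine-similarity construction (my own, in plain PyTorch):

    import torch
    import torch.nn.functional as F

    X = torch.randn(8, 16)        # n = 8 vectors x_i of dimension 16
    Xn = F.normalize(X, dim=1)    # divide each row by its norm
    S = Xn @ Xn.T                 # S_ij = x_i . x_j / (|x_i| |x_j|)
    print(S.shape)                # torch.Size([8, 8])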
  • The previous article showed how InputDispatcher, after a series of processing steps, sends input events to the app process. InputDispatcher and the app are two different processes, so how do they communicate? ... * a window in another process. It is P
  • An Activity Unregistering Its InputChannel (Part 12)

    When an activity is started, it is paired with the server process's InputChannel so that events can travel over sockets. But only the activity currently on screen receives events, which means that at some point the pairing between the other activities and the server process has to be torn down...
  • So our goal in this section is clear: work out who receives the message and how that message is dispatched into the View tree. In the previous section we said that the essence of InputChannel is a Linux local socket, because internally it uses the socketpair() function to create a pair of socket descriptors
  • Android Basics: Creating an InputChannel

    We said that InputDispatcher and the client window's ViewRoot pass messages through a pipe, and pipes are part of the Linux system-call interface. Android created the InputChannel class in order to use pipes; you could say InputChannel is the Android version of a pipe. Input...
  • Using Flume Channel Selectors

    The previous articles collected the logs of a single project; now we consider collecting the logs of multiple projects. I copied the flumedemo project, renamed the copy flumedemo2, added a WriteLog2.java class, and slightly changed the JSON string it outputs...
  • UnavailableInvalidChannel: The channel is not accessible or is invalid. channel name: anaconda/pkgs/free channel url: https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free er...
