  • max_pool2d
    2020-09-10 13:37:47
    import torch
    from torch import nn


    class DeepWise_MaxPool(nn.MaxPool1d):
        def __init__(self, channels):
            super(DeepWise_MaxPool, self).__init__(channels)

        def forward(self, input):
            n, c, h, w = input.size()
            input = input.view(n, c, h * w).permute(0, 2, 1)
            pooled = torch.nn.functional.max_pool1d(
                input, self.kernel_size, self.stride, self.padding,
                self.dilation, self.ceil_mode, self.return_indices)
            _, _, c = pooled.size()
            pooled = pooled.permute(0, 2, 1)
            return pooled.view(n, c, h, w)

    When this class is used as a building block in a model, backpropagation raises an error. The fix is to call contiguous() on the tensor returned by permute(), so that it is contiguous in memory before view() is applied.

    Returns a contiguous in memory tensor containing the same data as self tensor. If self tensor is already in the specified memory format, this function returns the self tensor
    class DeepWise_MaxPool(nn.MaxPool1d):
        def __init__(self, channels):
            super(DeepWise_MaxPool, self).__init__(channels)
    
        def forward(self, input):
            n, c, h, w = input.size()
            input = input.view(n, c, h * w).permute(0, 2, 1).contiguous()
            pooled = torch.nn.functional.max_pool1d(
                input, self.kernel_size, self.stride, self.padding,
                self.dilation, self.ceil_mode, self.return_indices)
            _, _, c = pooled.size()
            pooled = pooled.permute(0, 2, 1).contiguous()
            return pooled.view(n, c, h, w)
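
    With the contiguous() calls added, the module acts as a channel-wise max pooling layer. A minimal usage sketch (the shapes below are illustrative):

    pool = DeepWise_MaxPool(channels=64)
    x = torch.randn(2, 64, 8, 8)
    y = pool(x)
    print(y.shape)  # torch.Size([2, 1, 8, 8]) -- the max over the 64 channels at each spatial position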

     

  • Typical usage examples of the Python method torch.nn.functional.max_pool2d, collected from open-source projects.

    This article collects typical usage examples of the Python method torch.nn.functional.max_pool2d. If you have been wondering exactly how functional.max_pool2d is used, the curated code examples below may help. You can also explore other usage examples from its module, torch.nn.functional.

    Below are 14 code examples of functional.max_pool2d, sorted by popularity by default. You can upvote the examples you like or find useful; your votes help the system recommend better Python examples.

    Example 1: test_resize_methods

    Upvotes: 6

    # Required module import: from torch.nn import functional [as an alias]
    # Or: from torch.nn.functional import max_pool2d [as an alias]
    def test_resize_methods():
        inputs_x = torch.randn([2, 256, 128, 128])
        target_resize_sizes = [(128, 128), (256, 256)]
        resize_methods_list = ['nearest', 'bilinear']

        for method in resize_methods_list:
            merge_cell = BaseMergeCell(upsample_mode=method)
            for target_size in target_resize_sizes:
                merge_cell_out = merge_cell._resize(inputs_x, target_size)
                gt_out = F.interpolate(inputs_x, size=target_size, mode=method)
                assert merge_cell_out.equal(gt_out)

        target_size = (64, 64)  # resize to a smaller size
        merge_cell = BaseMergeCell()
        merge_cell_out = merge_cell._resize(inputs_x, target_size)
        kernel_size = inputs_x.shape[-1] // target_size[-1]
        gt_out = F.max_pool2d(
            inputs_x, kernel_size=kernel_size, stride=kernel_size)
        assert (merge_cell_out == gt_out).all()

    Author: open-mmlab | Project: mmdetection | Lines of code: 21

    Example 2: forward

    Upvotes: 6

    # Required module import: from torch.nn import functional [as an alias]
    # Or: from torch.nn.functional import max_pool2d [as an alias]
    def forward(self, x):
        out = F.relu(self.conv1(x))
        out = self.bnm1(out)
        out = F.relu(self.conv2(out))
        out = self.bnm2(out)
        out = F.max_pool2d(out, 2)
        out = F.relu(self.conv3(out))
        out = self.bnm3(out)
        out = F.relu(self.conv4(out))
        out = self.bnm4(out)
        out = F.max_pool2d(out, 2)
        out = out.view(out.size(0), -1)
        # out = self.dropout1(out)
        out = F.relu(self.fc1(out))
        # out = self.dropout2(out)
        out = self.bnm5(out)
        out = F.relu(self.fc2(out))
        # out = self.dropout3(out)
        out = self.bnm6(out)
        out = self.fc3(out)
        return (out)

    Author: StephanZheng | Project: neural-fingerprinting | Lines of code: 23

    Example 3: apply

    Upvotes: 6

    # Required module import: from torch.nn import functional [as an alias]
    # Or: from torch.nn.functional import max_pool2d [as an alias]
    def apply(features: Tensor, proposal_bboxes: Tensor, proposal_batch_indices: Tensor, mode: Mode) -> Tensor:
        _, _, feature_map_height, feature_map_width = features.shape
        scale = 1 / 16
        output_size = (7 * 2, 7 * 2)

        if mode == Pooler.Mode.POOLING:
            pool = []
            for (proposal_bbox, proposal_batch_index) in zip(proposal_bboxes, proposal_batch_indices):
                start_x = max(min(round(proposal_bbox[0].item() * scale), feature_map_width - 1), 0)   # [0, feature_map_width)
                start_y = max(min(round(proposal_bbox[1].item() * scale), feature_map_height - 1), 0)  # (0, feature_map_height]
                end_x = max(min(round(proposal_bbox[2].item() * scale) + 1, feature_map_width), 1)     # [0, feature_map_width)
                end_y = max(min(round(proposal_bbox[3].item() * scale) + 1, feature_map_height), 1)    # (0, feature_map_height]
                roi_feature_map = features[proposal_batch_index, :, start_y:end_y, start_x:end_x]
                pool.append(F.adaptive_max_pool2d(input=roi_feature_map, output_size=output_size))
            pool = torch.stack(pool, dim=0)
        elif mode == Pooler.Mode.ALIGN:
            pool = ROIAlign(output_size, spatial_scale=scale, sampling_ratio=0)(
                features,
                torch.cat([proposal_batch_indices.view(-1, 1).float(), proposal_bboxes], dim=1)
            )
        else:
            raise ValueError

        pool = F.max_pool2d(input=pool, kernel_size=2, stride=2)
        return pool

    Author: potterhsu | Project: easy-faster-rcnn.pytorch | Lines of code: 27

    Example 4: forward

    Upvotes: 6

    # Required module import: from torch.nn import functional [as an alias]
    # Or: from torch.nn.functional import max_pool2d [as an alias]
    def forward(self, X):
        h = F.relu(self.conv1_1(X))
        h = F.relu(self.conv1_2(h))
        relu1_2 = h
        h = F.max_pool2d(h, kernel_size=2, stride=2)

        h = F.relu(self.conv2_1(h))
        h = F.relu(self.conv2_2(h))
        relu2_2 = h
        h = F.max_pool2d(h, kernel_size=2, stride=2)

        h = F.relu(self.conv3_1(h))
        h = F.relu(self.conv3_2(h))
        h = F.relu(self.conv3_3(h))
        relu3_3 = h
        h = F.max_pool2d(h, kernel_size=2, stride=2)

        h = F.relu(self.conv4_1(h))
        h = F.relu(self.conv4_2(h))
        h = F.relu(self.conv4_3(h))
        relu4_3 = h
        return [relu1_2, relu2_2, relu3_3, relu4_3]

    ## Weights init function

    Author: AlexiaJM | Project: Deep-learning-with-cats | Lines of code: 26

    Example 5: forward

    Upvotes: 6

    # Required module import: from torch.nn import functional [as an alias]
    # Or: from torch.nn.functional import max_pool2d [as an alias]
    def forward(self, inputs, y=None):
        # Apply convs
        theta = self.theta(inputs)
        phi = F.max_pool2d(self.phi(inputs), [2, 2])
        g = F.max_pool2d(self.g(inputs), [2, 2])
        # Perform reshapes
        theta = theta.view(-1, self.channels // self.heads, inputs.shape[2] * inputs.shape[3])
        phi = phi.view(-1, self.channels // self.heads, inputs.shape[2] * inputs.shape[3] // 4)
        g = g.view(-1, self.channels // 2, inputs.shape[2] * inputs.shape[3] // 4)
        # Matmul and softmax to get attention maps
        beta = F.softmax(torch.bmm(theta.transpose(1, 2), phi), -1)
        # Attention map times g path
        o = self.o(torch.bmm(g, beta.transpose(1, 2)).view(-1, self.channels // 2, inputs.shape[2],
                                                           inputs.shape[3]))
        outputs = self.gamma * o + inputs
        return outputs

    Author: bayesiains | Project: nsf | Lines of code: 18

    Example 6: forward

    Upvotes: 6

    # Required module import: from torch.nn import functional [as an alias]
    # Or: from torch.nn.functional import max_pool2d [as an alias]
    def forward(self, x):
        x = F.relu(self.bn1_a(self.conv1_a(x)))
        x_pool1b = F.max_pool2d(F.relu(self.bn1_b(self.conv1_b(x))), 2, stride=2)
        x = self.layer1(x_pool1b)
        x = F.max_pool2d(F.relu(self.bn2(self.conv2(x))), 2, stride=2)
        x = self.layer2(x)
        x_pool3 = F.max_pool2d(F.relu(self.bn3(self.conv3(x))), 2, stride=2)
        x = self.layer3(x_pool3)
        x = F.max_pool2d(F.relu(self.bn4(self.conv4(x))), 2, stride=2)
        x = self.layer4(x)
        x = x.view(-1, self.num_flat_features(x))
        x = self.fc5_new(x)
        # x1 = x1.view(1, -1, 512)
        # x1, hn1 = self.lstm1(x1, (self.h1, self.c1))
        x = self.fc8_final(x)
        return x

    Author: XiaoYee | Project: emotion_classification | Lines of code: 23

    Example 7: forward

    Upvotes: 5

    # Required module import: from torch.nn import functional [as an alias]
    # Or: from torch.nn.functional import max_pool2d [as an alias]
    def forward(self, x):
        x = self.conv1(x)
        x = F.relu(x)
        x = self.conv2(x)
        x = F.max_pool2d(x, 2)
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = F.normalize(x)
        return x

    Author: peisuke | Project: MomentumContrast.pytorch | Lines of code: 11

    Example 8: forward

    Upvotes: 5

    # Required module import: from torch.nn import functional [as an alias]
    # Or: from torch.nn.functional import max_pool2d [as an alias]
    def forward(self, x):
        """Forward input images through the network to generate heatmaps."""
        x = F.max_pool2d(F.relu(self.bn1(self.conv1(x))), 2)
        x = F.max_pool2d(F.relu(self.bn2(self.conv2(x))), 2)
        x = F.max_pool2d(F.relu(self.bn3(self.conv3(x))), 2)
        x = F.relu(self.bn4(self.conv4(x)))
        x = F.relu(self.bn5(self.conv5(x)))
        x = F.relu(self.bn6(self.conv6(x)))
        x = F.sigmoid(self.conv7(x))
        return x

    Author: aleju | Project: cat-bbs | Lines of code: 12

    Example 9: _crop_pool_layer

    Upvotes: 5

    # Required module import: from torch.nn import functional [as an alias]
    # Or: from torch.nn.functional import max_pool2d [as an alias]
    def _crop_pool_layer(self, bottom, rois, max_pool=True):  # done
        # implement it using stn
        # box to affine
        # input (x1, y1, x2, y2)
        """
        [  x2-x1             x1 + x2 - W + 1  ]
        [  -----      0      ---------------  ]
        [  W - 1                  W - 1       ]
        [                                     ]
        [           y2-y1    y1 + y2 - H + 1  ]
        [    0      -----    ---------------  ]
        [           H - 1         H - 1       ]
        """
        rois = rois.detach()
        x1 = rois[:, 1::4] / 16.0
        y1 = rois[:, 2::4] / 16.0
        x2 = rois[:, 3::4] / 16.0
        y2 = rois[:, 4::4] / 16.0

        height = bottom.size(2)
        width = bottom.size(3)

        # affine theta
        theta = Variable(rois.data.new(rois.size(0), 2, 3).zero_())
        theta[:, 0, 0] = (x2 - x1) / (width - 1)
        theta[:, 0, 2] = (x1 + x2 - width + 1) / (width - 1)
        theta[:, 1, 1] = (y2 - y1) / (height - 1)
        theta[:, 1, 2] = (y1 + y2 - height + 1) / (height - 1)

        if max_pool:
            pre_pool_size = cfg.POOLING_SIZE * 2
            grid = F.affine_grid(theta, torch.Size((rois.size(0), 1, pre_pool_size, pre_pool_size)))
            crops = F.grid_sample(bottom.expand(rois.size(0), bottom.size(1), bottom.size(2), bottom.size(3)), grid)
            crops = F.max_pool2d(crops, 2, 2)
        else:
            grid = F.affine_grid(theta, torch.Size((rois.size(0), 1, cfg.POOLING_SIZE, cfg.POOLING_SIZE)))
            crops = F.grid_sample(bottom.expand(rois.size(0), bottom.size(1), bottom.size(2), bottom.size(3)), grid)
        return crops

    Author: Sunarker | Project: Collaborative-Learning-for-Weakly-Supervised-Object-Detection | Lines of code: 42

    Example 10: _resize

    Upvotes: 5

    # Required module import: from torch.nn import functional [as an alias]
    # Or: from torch.nn.functional import max_pool2d [as an alias]
    def _resize(self, x, size):
        if x.shape[-2:] == size:
            return x
        elif x.shape[-2:] < size:
            return F.interpolate(x, size=size, mode=self.upsample_mode)
        else:
            assert x.shape[-2] % size[-2] == 0 and x.shape[-1] % size[-1] == 0
            kernel_size = x.shape[-1] // size[-1]
            x = F.max_pool2d(x, kernel_size=kernel_size, stride=kernel_size)
            return x

    Author: open-mmlab | Project: mmdetection | Lines of code: 12

    Example 11: forward

    Upvotes: 5

    # Required module import: from torch.nn import functional [as an alias]
    # Or: from torch.nn.functional import max_pool2d [as an alias]
    def forward(self, x):
        # Left branch
        y1 = self.sep_conv1(x)
        y2 = self.sep_conv2(x)
        # Right branch
        y3 = F.max_pool2d(x, kernel_size=3, stride=self.stride, padding=1)
        if self.stride == 2:
            y3 = self.bn1(self.conv1(y3))
        y4 = self.sep_conv3(x)
        # Concat & reduce channels
        b1 = F.relu(y1 + y2)
        b2 = F.relu(y3 + y4)
        y = torch.cat([b1, b2], 1)
        return F.relu(self.bn2(self.conv2(y)))

    Author: StephanZheng | Project: neural-fingerprinting | Lines of code: 16

    Example 12: forward

    Upvotes: 5

    # Required module import: from torch.nn import functional [as an alias]
    # Or: from torch.nn.functional import max_pool2d [as an alias]
    def forward(self, x):
        out = F.relu(self.conv1(x))
        out = F.max_pool2d(out, 2)
        out = F.relu(self.conv2(out))
        out = F.max_pool2d(out, 2)
        out = out.view(out.size(0), -1)
        out = F.relu(self.fc1(out))
        out = F.relu(self.fc2(out))
        out = self.fc3(out)
        return (out, F.log_softmax(out))

    Author: StephanZheng | Project: neural-fingerprinting | Lines of code: 12

    Example 13: forward

    Upvotes: 5

    # Required module import: from torch.nn import functional [as an alias]
    # Or: from torch.nn.functional import max_pool2d [as an alias]
    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x)

    Author: StephanZheng | Project: neural-fingerprinting | Lines of code: 10

    Example 14: forward

    Upvotes: 5

    # Required module import: from torch.nn import functional [as an alias]
    # Or: from torch.nn.functional import max_pool2d [as an alias]
    def forward(self, x):
        y1 = self.sep_conv1(x)
        y2 = F.max_pool2d(x, kernel_size=3, stride=self.stride, padding=1)
        if self.stride == 2:
            y2 = self.bn1(self.conv1(y2))
        return F.relu(y1 + y2)

    Author: StephanZheng | Project: neural-fingerprinting | Lines of code: 8

    Note: the torch.nn.functional.max_pool2d examples above were collected from source code and documentation hosted on platforms such as GitHub and MSDocs. The snippets come from open-source projects contributed by many developers; copyright remains with the original authors. Consult each project's license before redistributing or reusing the code, and do not repost without permission.

  • torch.nn.MaxPool2d and torch.nn.functional.max_pool2d can both provide the max-pooling layer when building a model in PyTorch, but the former is a module class and the latter a function, so they are used differently. 1. torch.nn.functional.max_pool2d is a function in PyTorch that...

    Introduction

    torch.nn.MaxPool2d and torch.nn.functional.max_pool2d can both provide the max-pooling layer when building a model in PyTorch, but the former is a module class while the latter is a function, and their usage differs.

    1. torch.nn.functional.max_pool2d

    This is a function in PyTorch that can be called directly; its source is shown below:

    def max_pool2d_with_indices(
        input: Tensor, kernel_size: BroadcastingList2[int],
        stride: Optional[BroadcastingList2[int]] = None,
        padding: BroadcastingList2[int] = 0,
        dilation: BroadcastingList2[int] = 1,
        ceil_mode: bool = False,
        return_indices: bool = False
    ) -> Tuple[Tensor, Tensor]:
        r"""Applies a 2D max pooling over an input signal composed of several input
        planes.
    
        See :class:`~torch.nn.MaxPool2d` for details.
        """
        if has_torch_function_unary(input):
            return handle_torch_function(
                max_pool2d_with_indices,
                (input,),
                input,
                kernel_size,
                stride=stride,
                padding=padding,
                dilation=dilation,
                ceil_mode=ceil_mode,
                return_indices=return_indices,
            )
        if stride is None:
            stride = torch.jit.annotate(List[int], [])
        return torch._C._nn.max_pool2d_with_indices(input, kernel_size, stride, padding, dilation, ceil_mode)
    
    
    def _max_pool2d(
        input: Tensor, kernel_size: BroadcastingList2[int],
        stride: Optional[BroadcastingList2[int]] = None,
        padding: BroadcastingList2[int] = 0,
        dilation: BroadcastingList2[int] = 1,
        ceil_mode: bool = False,
        return_indices: bool = False
    ) -> Tensor:
        if has_torch_function_unary(input):
            return handle_torch_function(
                max_pool2d,
                (input,),
                input,
                kernel_size,
                stride=stride,
                padding=padding,
                dilation=dilation,
                ceil_mode=ceil_mode,
                return_indices=return_indices,
            )
        if stride is None:
            stride = torch.jit.annotate(List[int], [])
        return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
    
    
    max_pool2d = boolean_dispatch(
        arg_name="return_indices",
        arg_index=6,
        default=False,
        if_true=max_pool2d_with_indices,
        if_false=_max_pool2d,
        module_name=__name__,
        func_name="max_pool2d",
    )
    

    Usage:

    import torch
    import torch.nn.functional as F

    input = torch.randn(20, 16, 50, 32)  # input tensor
    F.max_pool2d(input, kernel_size=2, stride=1, padding=0)
    """
    Where:
    Shape:
            - Input: :math:`(N, C, H_{in}, W_{in})`
            - Output: :math:`(N, C, H_{out}, W_{out})`, where
    """
    

    2. torch.nn.MaxPool2d

    This is a module class in PyTorch: instantiate it first, then call it. The source is shown below (the docstring has been trimmed by the author):

    class MaxPool2d(_MaxPoolNd):
    
        kernel_size: _size_2_t
        stride: _size_2_t
        padding: _size_2_t
        dilation: _size_2_t
    
        def forward(self, input: Tensor) -> Tensor:
            return F.max_pool2d(input, self.kernel_size, self.stride,
                                self.padding, self.dilation, self.ceil_mode,
                                self.return_indices)
    

    Usage:

    import torch
    m = torch.nn.MaxPool2d(3, stride=2)  # instantiate
    # or
    m = torch.nn.MaxPool2d((3, 2), stride=(2, 1))  # instantiate
    input = torch.randn(20, 16, 50, 32)  # input tensor
    output = m(input)  # apply the module
    """
    Shape:
            - Input: :math:`(N, C, H_{in}, W_{in})`
            - Output: :math:`(N, C, H_{out}, W_{out})`, where
    """
    

    3. Comparing the class and the function

    From the comparison above, torch.nn.functional.max_pool2d is a function that can be called directly by passing the arguments (input (a 4-D input tensor), kernel_size (the pooling window size), stride, padding, dilation, ceil_mode, return_indices).
    torch.nn.MaxPool2d must be instantiated first, and its forward() in turn calls torch.nn.functional.max_pool2d.
    In short, torch.nn.functional.max_pool2d is what the torch.nn.MaxPool2d module uses internally; you can call the function on its own, or instantiate the class and use that instead.
    Usage when building a model:
    (1) Using the class module

    import torch
    import torch.nn.functional as F

    class Net(torch.nn.Module):
        def __init__(self):
            super(Net, self).__init__()
            self.conv1 = torch.nn.Conv2d(1, 10, kernel_size=5)
            self.conv2 = torch.nn.Conv2d(10, 20, kernel_size=5)
            self.pooling = torch.nn.MaxPool2d(2)  # kernel_size = 2; instantiate
            self.fc = torch.nn.Linear(320, 10)

        def forward(self, x):
            # Flatten data from (n, 1, 28, 28) to (n, 784)
            batch_size = x.size(0)
            x = F.relu(self.pooling(self.conv1(x)))
            x = F.relu(self.pooling(self.conv2(x)))
            x = x.view(batch_size, -1)
            x = self.fc(x)
            return x
    

    Note: kernel_size is a required argument; leaving it out raises an error.
    The author temporarily modified the torch.nn.MaxPool2d source to show how the positional arguments are interpreted (remember to change it back!). (Screenshot of the modified source omitted.)

    import torch
    pooling1 = torch.nn.MaxPool2d(1,2,3,4)
    print(pooling1)
    pooling2 = torch.nn.MaxPool2d(1)
    print(pooling2)
    

    The output is:

    MaxPool2d(kernel_size=1, stride=2, padding=3, dilation=4, ceil_mode=False)
    MaxPool2d(kernel_size=1, stride=1, padding=0, dilation=1, ceil_mode=False)
    

    (2) Calling the function directly

    import torch
    import torch.nn.functional as F

    class Net(torch.nn.Module):
        def __init__(self):
            super(Net, self).__init__()
            self.conv1 = torch.nn.Conv2d(1, 10, kernel_size=5)
            self.conv2 = torch.nn.Conv2d(10, 20, kernel_size=5)
            # no pooling layer to instantiate; F.max_pool2d is called directly in forward
            self.fc = torch.nn.Linear(320, 10)

        def forward(self, x):
            # Flatten data from (n, 1, 28, 28) to (n, 784)
            batch_size = x.size(0)
            x = F.relu(F.max_pool2d(self.conv1(x), kernel_size=2))  # kernel_size must be specified
            x = F.relu(F.max_pool2d(self.conv2(x), kernel_size=2))
            x = x.view(batch_size, -1)
            x = self.fc(x)
            return x
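
    The two model definitions above are interchangeable. A minimal sketch (illustrative shapes) checking that the module and the function compute the same thing:

    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 3, 8, 8)
    pool = torch.nn.MaxPool2d(kernel_size=2, stride=2)        # class module
    y_module = pool(x)
    y_functional = F.max_pool2d(x, kernel_size=2, stride=2)   # function call
    print(torch.equal(y_module, y_functional))  # True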
    
  • torch.nn.MaxPool2d and torch.nn.functional.max_pool2d are essentially the same; see the torch.nn.MaxPool2d source code, whose core part is shown below: from .. import functional as F class MaxPool2d(_MaxPoolNd): kernel_...

    The relationship between the two

    To state the conclusion first: torch.nn.MaxPool2d and torch.nn.functional.max_pool2d are essentially the same thing.
    For details, see the torch.nn.MaxPool2d source code; the core part is shown below:

    from .. import functional as F
    
    class MaxPool2d(_MaxPoolNd):
        kernel_size: _size_2_t
        stride: _size_2_t
        padding: _size_2_t
        dilation: _size_2_t
    
        def forward(self, input: Tensor) -> Tensor:
            return F.max_pool2d(input, self.kernel_size, self.stride,
                                self.padding, self.dilation, self.ceil_mode,
                                self.return_indices)
    

    As you can see, torch.nn.MaxPool2d calls torch.nn.functional.max_pool2d inside its own forward() method.
    The official PyTorch documentation for torch.nn.functional.max_pool2d likewise simply says

    See MaxPool2d for details.

    So the two are essentially the same.

    As for why the same operation is exposed in two ways, it is probably to accommodate the different coding styles of PyTorch users.
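
    One practical note: MaxPool2d holds no learnable parameters, so switching between the module and the functional form does not change a model's state_dict. A minimal sketch:

    import torch

    m = torch.nn.MaxPool2d(kernel_size=2)
    print(list(m.parameters()))  # [] -- nothing to train or to save
    print(m.state_dict())        # OrderedDict() -- empty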

    References

    https://pytorch.org/docs/stable/generated/torch.nn.functional.max_pool2d.html#torch.nn.functional.max_pool2d

    https://stackoverflow.com/questions/58514197/difference-between-nn-maxpool2d-vs-nn-functional-max-pool2d?noredirect=1

  • # def max_pool2d(inputs, # kernel_size, # stride=2, # padding='VALID', # data_format=DATA_FORMAT_NHWC, # outputs_collections...
  • I. If, while running convolutions, you see WARNING:tensorflow:From D:\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py:4070: The name tf.... Please use tf.nn.max_pool2d instead. — that is, the situation shown in the figure below...
  • RuntimeError: max_pool2d_with_indices_out_cuda_frame failed with error code 0. Problem description: forward inference with a PyTorch model works under version 1.3, but after switching to 1.4 it raises RuntimeError: max_pool2d_with_indices_out_cuda_frame ...
  • While learning Python I ran into the error below. Has anyone seen it before? Could you help me understand what causes it?
  • TensorFlow basics, note 11: the max_pool2d function

    2017-11-27 17:19:00
    # def max_pool2d(inputs, # kernel_size, # stride=2, # padding='VALID', # data_format=DATA_FORMAT_NHWC, # outputs_collectio...
  • tf.nn.max_pool is the low-level pooling op, while tf.layers.max_pooling2d wraps it and sets up many of the parameters for you, making it more convenient to use. tf.layers.max_pooling2d: a max-pooling layer for 2D inputs (e.g. images). Parameters: inputs: the tensor to pool, of rank... (see the sketch below)
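
    A minimal sketch of the two call styles, assuming a TF 1.x environment (shapes are illustrative):

    import tensorflow as tf

    x = tf.placeholder(tf.float32, [None, 28, 28, 3])  # NHWC input

    # Low-level op: ksize/strides are given per dimension [batch, height, width, channels]
    y_nn = tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

    # Layer wrapper: pool_size/strides refer to the spatial dimensions only
    y_layers = tf.layers.max_pooling2d(x, pool_size=2, strides=2, padding='same')
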
  • Based on the TF 2.1 official docs: https://www.tensorflow.org/api_docs/python/tf/nn/max_pool2d — Performs the max pooling on the input. ...tf.nn.max_pool2d( input, ksize, strides, padding, data_forma...
  • Today we share a short piece on things to watch out for when using max pooling (MaxPool2d) in PyTorch. It is a useful reference, and we hope it helps; come take a look.
  •  valid_pad = tf.nn.max_pool(x, [1, 3, 3, 1], [1, 2, 2, 1], padding='VALID')  same_pad = tf.nn.max_pool(x, [1, 3, 3, 1], [1, 2, 2, 1], padding='SAME') with tf.Session(graph=graph) as sess:  print...
  • Problem: exporting the semantic-segmentation network pspnet to ONNX fails. Analysis: the PPM module uses AdaptiveAvgPool2d ... Solution: replace AdaptiveAvgPool2d with AvgPool2d (see the sketch below). Reference: https://www.cnblogs.com/xiaosongshine/p/10750908.html import torch a
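
    A minimal sketch of that replacement (the sizes are illustrative; for a fixed input size, AdaptiveAvgPool2d(output_size) can be replaced by an AvgPool2d whose kernel and stride are derived from the input and output sizes):

    import torch
    import torch.nn as nn

    x = torch.randn(1, 512, 60, 60)                  # fixed feature-map size assumed at export time
    out_size = 6

    adaptive = nn.AdaptiveAvgPool2d(out_size)

    stride = x.shape[-1] // out_size                 # 10
    kernel = x.shape[-1] - (out_size - 1) * stride   # 10
    fixed = nn.AvgPool2d(kernel_size=kernel, stride=stride)

    print(torch.allclose(adaptive(x), fixed(x)))  # True for this input size
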
  • torch.nn.MaxPool2d explained in detail

    2020-11-22 20:18:38
    _) class MaxPool2d(_MaxPoolNd): kernel_size: _size_2_t stride: _size_2_t padding: _size_2_t dilation: _size_2_t def forward(self, input: Tensor) -> Tensor: return F.max_pool2d(input, self.kernel_size,...
  • pool_size: an integer, or a tuple/list of one integer, giving the size of the pooling window. strides: an integer, or a tuple/list of one integer, specifying the stride of the pooling operation. padding: a string selecting the padding method, either "valid"...
  • : max pool with 2x2 kernel, stride 2 and SAME padding (this is the classic way to go) The output shapes are: valid_pad : here, no padding so the output shape is [1, 1] same_pad : here, ...
  • tf.nn.max_pool3d usage

    2020-09-16 20:01:52
    tf.nn.max_pool3d performs max pooling on the input. In TF 1.x it is tf.compat.v1.nn.max_pool3d. tf.nn.max_pool3d( input, ksize, strides, padding, data_format='NDHWC', name=None ) Args input of the format specified by data_...
  • The two padding modes of the max_pool pooling op

    2019-06-12 10:07:27
    The padding argument of max_pool() has two modes, VALID and SAME. As with convolution layers, pooling in TensorFlow also applies padding, and the two modes yield different output shapes (see the sketch below). Function signature: max_pool(value, ksize, strides, padding, data_format="NHWC...
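
    A small worked check of the two modes (a sketch; input size 7, kernel 3, stride 2 are illustrative): VALID keeps only complete windows, out = ceil((in - k + 1) / s), while SAME pads so that out = ceil(in / s).

    import math

    in_size, k, s = 7, 3, 2
    print(math.ceil((in_size - k + 1) / s))  # VALID -> 3
    print(math.ceil(in_size / s))            # SAME  -> 4
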
  • A comparison of nn.MaxPool1d() and nn.MaxPool2d() in PyTorch
  • Notes: the SAME and VALID padding arguments of nn.conv2d https://blog.csdn.net/flashlau/article/details/82944536 and of tf.nn.max_pool https://vimsky.com/article/3881.html
  • 1. Introduction to the tf.nn.max_pool2d() function: tf.nn.max_pool2d(input, ksize, strides, padding, data_format='NHWC', name=None). Arguments: input — a 4-D Tensor of the format specified by data_format. ...
  • PyTorch nn.MaxPool1d and nn.functional.max_pool1d; a comparison of nn.MaxPool1d() and nn.MaxPool2d() in PyTorch
  • torch.nn.MaxPool2d parameters explained

    2021-09-05 10:28:12
    How to use MaxPool2d. Official API documentation: MaxPool2d. Parameters: kernel_size — the size of the max-pooling window, a single value or a tuple; stride — the stride, a single value or a tuple; padding — the padding, a single value or...
  • tf.nn.max_pool(value, ksize, strides, padding, name=None) takes four main arguments, much like convolution. The first, value, is the input to be pooled; since a pooling layer usually follows a convolution layer, this is typically a feature map of shape [batch, height, width, ...
  • Using MaxPool2d (pooling layer) (with code)

    2021-11-22 17:52:10
    Using MaxPool2d. Here we again train on the dataset bundled with the official tutorials and visualize the result. Loading the dataset and visualization are not covered here; if needed see: the DataLoader in torch.utils.data (with code)...
  • The meaning and usage of the tf.nn.max_pool parameters

    2017-09-19 00:05:24
    Reposted from: ... max pooling is the max-value pooling operation in CNNs, and its usage is very similar to convolution ... some details can be carried over from convolution: how does tf.nn.conv2d implement convolution? ... tf.nn.max_pool(value, ksize, strid...
  • PyTorch (note 3) -- MaxPool2d & AdaptiveAvgPool2d

    2019-05-18 14:01:39
    The previous note explained in detail how Conv2d computes its output; today we cover other common PyTorch operations! ... In PyTorch, pooling ops implement downsampling; common pooling ops include Max_pool and Avg_pool. Max_pool: x = t.rand(1,3,7,7...
  • PyTorch MaxPool2d

    2020-08-07 12:24:42
    The problem I hit while learning nn.MaxPool2d: import torch import torch.nn as nn m=nn.MaxPool2d(3,stride=2) input=torch.randn(6,6) output=m(input) — this raises: RuntimeError: non-empty 3D or 4D (batch ... (see the sketch below)
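
    The error comes from passing a 2-D tensor to a 2-D pooling module, which expects an (N, C, H, W) batch (or, in newer PyTorch versions, an unbatched (C, H, W) input). A minimal sketch of the fix (shapes are illustrative):

    import torch
    import torch.nn as nn

    m = nn.MaxPool2d(3, stride=2)
    input = torch.randn(6, 6)
    output = m(input.unsqueeze(0).unsqueeze(0))  # reshape to (1, 1, 6, 6)
    print(output.shape)  # torch.Size([1, 1, 2, 2])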
