    import torch
    import torchvision
    import torchvision.transforms as transforms
    from PIL import Image
    import numpy as np
    
    trans = transforms.Compose([
        transforms.Resize([224, 224]),
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
    ])
    img = Image.open('testsets/set5/butterfly.bmp')
    img = trans(img)
    # add a batch dimension
    img = torch.unsqueeze(img, dim=0)
    
    model = torchvision.models.vgg19(pretrained=True)
    model.eval()
    with torch.no_grad():
        output = torch.squeeze(model(img))
        predict = torch.softmax(output, dim=0)  # probability distribution
        predict_cla = torch.argmax(predict).numpy()  # index of the most likely class
    
    '''
        Get the top-n most likely classes
    '''
    def get_max(n, pre):
        # indices sorted from highest to lowest probability
        pre = np.argsort(-pre.numpy())
        # map each index to its class name
        with open('imagenet1000_clsid_to_human.txt', 'r') as f:
            line = f.readlines()
        name = []
        for i in range(n):
            print(pre[i])
            name.append(line[int(pre[i])].split('\'')[1])
        return name
    print(get_max(5, predict))
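As a side note, the ranking that get_max does with np.argsort can also be written with torch.topk, which returns the k largest values and their indices in one call. A minimal sketch (the synthetic probability vector here stands in for `predict`):

```python
import torch

# Synthetic probability vector standing in for `predict` above.
probs = torch.softmax(torch.randn(1000), dim=0)

# topk returns the k largest values and their indices, sorted descending.
values, indices = torch.topk(probs, k=5)
print(indices.tolist())
```

The indices can then be mapped to class names with the same imagenet1000_clsid_to_human.txt lookup as above.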
    

    Result:

    Download of the file imagenet1000_clsid_to_human.txt:

    https://download.csdn.net/download/zjh12312311/15514012


    Project overview

    This article uses an openly available flower dataset and, starting from a pretrained VGG19 model, retrains it on the flower data via transfer learning to build a flower-recognition classifier.

    Dataset

    Information about the flower dataset can be found here. It contains a separate folder for each of the 102 flower classes. Each flower class is labeled with a number, and each numbered directory holds a number of .jpg files.

    Environment

    the PyTorch library
    the PIL library
    To train on a GPU, use an NVIDIA card with CUDA properly installed
    On my own GPU (a GTX 1050) training took only 91 minutes

    Import libraries and check whether a GPU is available

    %matplotlib inline
    %config InlineBackend.figure_format = 'retina'
    
    import time
    import json
    import copy
    
    import matplotlib.pyplot as plt
    import seaborn as sns
    import numpy as np
    import PIL
    
    from PIL import Image
    from collections import OrderedDict
    
    
    import torch
    from torch import nn, optim
    from torch.optim import lr_scheduler
    from torch.autograd import Variable
    import torchvision
    from torchvision import datasets, models, transforms
    from torch.utils.data.sampler import SubsetRandomSampler
    import torch.nn as nn
    import torch.nn.functional as F
    
    import os
    # check if GPU is available
    train_on_gpu = torch.cuda.is_available()
    
    if not train_on_gpu:
        print('Bummer!  Training on CPU ...')
    else:
        print('You are good to go!  Training on GPU ...')
    
    # use the GPU when one is available (defined unconditionally,
    # so the training code below also runs on CPU-only machines)
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    

    Define the dataset locations

    data_dir = 'F:\资料\项目\image_classifier_pytorch-master\\flower_data'
    train_dir = 'flower_data/train'
    valid_dir = 'flower_data/valid'
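A note on these paths: backslashes in plain string literals can be swallowed as escape sequences, and the train/valid paths above are not actually derived from data_dir. A safer sketch (the directory name is a placeholder) uses a raw string and os.path.join:

```python
import os

# A raw string prevents '\' from being read as an escape character,
# and os.path.join keeps the sub-directories tied to data_dir.
data_dir = r'F:\flower_data'  # placeholder path
train_dir = os.path.join(data_dir, 'train')
valid_dir = os.path.join(data_dir, 'valid')
print(train_dir)
```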
    

    Load the datasets and preprocess the data

    # Define your transforms for the training and testing sets
    data_transforms = {
        'train': transforms.Compose([
            transforms.RandomRotation(30),
            transforms.RandomResizedCrop(224),
            transforms.RandomHorizontalFlip(),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], 
                                 [0.229, 0.224, 0.225])
        ]),
        'valid': transforms.Compose([
            transforms.Resize(256),
            transforms.CenterCrop(224),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], 
                                 [0.229, 0.224, 0.225])
        ])
    }
    
    # Load the datasets with ImageFolder
    image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),
                                              data_transforms[x])
                      for x in ['train', 'valid']}
    
    # Using the image datasets and the transforms, define the dataloaders
    batch_size = 64
    dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=batch_size,
                                                 shuffle=True, num_workers=4)
                  for x in ['train', 'valid']}
    
    class_names = image_datasets['train'].classes
    dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'valid']}
    
    # Label mapping
    with open('F:\资料\项目\image_classifier_pytorch-master\cat_to_name.json', 'r') as f:
        cat_to_name = json.load(f)
    

    Inspect the data

    # Run this to test the data loader
    images, labels = next(iter(dataloaders['train']))
    print(images.size())
    
    rand_idx = np.random.randint(len(images))
    print("label: {}, class: {}, name: {}".format(labels[rand_idx].item(),
                                                   class_names[labels[rand_idx].item()],
                                                   cat_to_name[class_names[labels[rand_idx].item()]]))
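To actually display one of these samples with matplotlib, the ImageNet normalization has to be undone first; a minimal sketch (the random tensor stands in for images[rand_idx] from the dataloader):

```python
import numpy as np
import torch

# ImageNet normalization constants used in data_transforms above.
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])

def tensor_to_image(t):
    """Convert a normalized CHW tensor back to an HWC array in [0, 1]."""
    img = t.numpy().transpose((1, 2, 0))  # CHW -> HWC
    img = std * img + mean                # undo transforms.Normalize
    return np.clip(img, 0, 1)

# A random tensor stands in for a real batch image here.
demo = tensor_to_image(torch.randn(3, 224, 224))
print(demo.shape)
```

The returned array can be handed directly to plt.imshow.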
    

    Define the model

    model_name = 'densenet'  # or 'vgg'
    if model_name == 'densenet':
        model = models.densenet161(pretrained=True)
        num_in_features = 2208
        print(model)
    elif model_name == 'vgg':
        model = models.vgg19(pretrained=True)
        num_in_features = 25088
        print(model.classifier)
    else:
        print("Unknown model, please choose 'densenet' or 'vgg'")
    
    
    # Freeze the feature-extractor parameters
    for param in model.parameters():
        param.requires_grad = False
    
    # Create classifier
    def build_classifier(num_in_features, hidden_layers, num_out_features):
       
        classifier = nn.Sequential()
        if hidden_layers is None:
            classifier.add_module('fc0', nn.Linear(num_in_features, num_out_features))
        else:
            layer_sizes = zip(hidden_layers[:-1], hidden_layers[1:])
            classifier.add_module('fc0', nn.Linear(num_in_features, hidden_layers[0]))
            classifier.add_module('relu0', nn.ReLU())
            classifier.add_module('drop0', nn.Dropout(.6))
            for i, (h1, h2) in enumerate(layer_sizes):
                classifier.add_module('fc'+str(i+1), nn.Linear(h1, h2))
                classifier.add_module('relu'+str(i+1), nn.ReLU())
                classifier.add_module('drop'+str(i+1), nn.Dropout(.5))
            classifier.add_module('output', nn.Linear(hidden_layers[-1], num_out_features))
            
        return classifier
    
    hidden_layers = None  # e.g. [4096, 1024, 256] or [512, 256, 128]
    
    classifier = build_classifier(num_in_features, hidden_layers, 102)
    print(classifier)
    
    # Only train the classifier parameters; the feature parameters are frozen
    if model_name == 'densenet':
        model.classifier = classifier
        criterion = nn.CrossEntropyLoss()
        optimizer = optim.Adadelta(model.parameters())
        # optimizer = optim.SGD(model.parameters(), lr=0.0001, weight_decay=0.001, momentum=0.9)
        sched = optim.lr_scheduler.StepLR(optimizer, step_size=4)
    elif model_name == 'vgg':
        model.classifier = classifier
        # The classifier has no LogSoftmax layer, so NLLLoss would be wrong here;
        # CrossEntropyLoss combines LogSoftmax and NLLLoss.
        criterion = nn.CrossEntropyLoss()
        optimizer = optim.Adam(model.classifier.parameters(), lr=0.0001)
        sched = lr_scheduler.StepLR(optimizer, step_size=4, gamma=0.1)
    else:
        pass
    def train_model(model, criterion, optimizer, sched, num_epochs=5):
        since = time.time()
    
        best_model_wts = copy.deepcopy(model.state_dict())
        best_acc = 0.0
    
        for epoch in range(num_epochs):
            print('Epoch {}/{}'.format(epoch+1, num_epochs))
            print('-' * 10)
    
            # Each epoch has a training and validation phase
            for phase in ['train', 'valid']:
                if phase == 'train':
                    model.train()  # Set model to training mode
                else:
                    model.eval()   # Set model to evaluate mode
    
                running_loss = 0.0
                running_corrects = 0
    
                # Iterate over data.
                for inputs, labels in dataloaders[phase]:
                    inputs = inputs.to(device)
                    labels = labels.to(device)
    
                    # zero the parameter gradients
                    optimizer.zero_grad()
    
                    # forward
                    # track history if only in train
                    with torch.set_grad_enabled(phase == 'train'):
                        outputs = model(inputs)
                        _, preds = torch.max(outputs, 1)
                        loss = criterion(outputs, labels)
    
                        # backward + optimize only if in training phase
                        if phase == 'train':
                            #sched.step()
                            loss.backward()
                            
                            optimizer.step()
    
                    # statistics
                    running_loss += loss.item() * inputs.size(0)
                    running_corrects += torch.sum(preds == labels.data)
    
                epoch_loss = running_loss / dataset_sizes[phase]
                epoch_acc = running_corrects.double() / dataset_sizes[phase]
    
                print('{} Loss: {:.4f} Acc: {:.4f}'.format(
                    phase, epoch_loss, epoch_acc))
    
                # deep copy the model
                if phase == 'valid' and epoch_acc > best_acc:
                    best_acc = epoch_acc
                    best_model_wts = copy.deepcopy(model.state_dict())
    
            print()
    
        time_elapsed = time.time() - since
        print('Training complete in {:.0f}m {:.0f}s'.format(
            time_elapsed // 60, time_elapsed % 60))
        print('Best val Acc: {:4f}'.format(best_acc))
    
        #load best model weights
        model.load_state_dict(best_model_wts)
        
        return model
    

    Start training

    epochs = 30
    model.to(device)
    model = train_model(model, criterion, optimizer, sched, epochs)
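Once training finishes, it is worth persisting the result. The article does not show this step, so the following is only a sketch: saving the state_dict (shown here with a tiny stand-in model instead of the trained DenseNet/VGG) keeps the checkpoint portable:

```python
import torch
from torch import nn

# Tiny stand-in model; in the article this would be the trained `model`.
model = nn.Linear(4, 2)

# Saving only the state_dict keeps the checkpoint portable across code changes.
checkpoint = {
    'state_dict': model.state_dict(),
    'class_names': ['1', '2'],  # placeholder for image_datasets['train'].classes
}
torch.save(checkpoint, 'checkpoint.pth')

# To restore: rebuild the same architecture, then load the weights.
restored = nn.Linear(4, 2)
restored.load_state_dict(torch.load('checkpoint.pth')['state_dict'])
```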
    
  • The VGG19 pretrained model (without the fully connected layers), stored as a dictionary structure; read it with numpy.
  • The PyTorch pretrained model vgg19-dcbb9e9d

    Preface

    Use the PyTorch pretrained VGG19 model to extract image features and obtain an image embedding.

    The last layer of PyTorch's pretrained VGG19 outputs a 1000-dimensional image-classification result. But what if we only want the features from some intermediate layer, such as the 4096-dimensional output of a fully connected layer? This article solves that problem.


    References for this article:

    https://zhuanlan.zhihu.com/p/105703821

    https://blog.csdn.net/Geek_of_CSDN/article/details/84343971

    https://discuss.pytorch.org/t/how-to-extract-features-of-an-image-from-a-trained-model/119/2 

    Part 1: Loading the pretrained vgg19 model in PyTorch

    Download URLs for PyTorch's pretrained VGG models:

    model_urls = {
        'vgg11': 'https://download.pytorch.org/models/vgg11-bbd30ac9.pth',
        'vgg13': 'https://download.pytorch.org/models/vgg13-c768596a.pth',
        'vgg16': 'https://download.pytorch.org/models/vgg16-397923af.pth',
        'vgg19': 'https://download.pytorch.org/models/vgg19-dcbb9e9d.pth',
        'vgg11_bn': 'https://download.pytorch.org/models/vgg11_bn-6002323d.pth',
        'vgg13_bn': 'https://download.pytorch.org/models/vgg13_bn-abd245e5.pth',
        'vgg16_bn': 'https://download.pytorch.org/models/vgg16_bn-6c64b313.pth',
        'vgg19_bn': 'https://download.pytorch.org/models/vgg19_bn-c79401a0.pth',
    }

    import torch
    import torchvision.models as models
    
    
    # To use PyTorch's pretrained model over the network, set pretrained=True;
    # the vgg19 weights are then downloaded automatically into the local cache.
    vgg_model = models.vgg19(pretrained=True)
    
    # To use weights already downloaded to a local file (pretrained defaults to
    # False), provide the local path and load the weights with load_state_dict:
    # vgg_model = models.vgg19()
    # pre = torch.load('/XXXX/vgg19-dcbb9e9d.pth')
    # vgg_model.load_state_dict(pre)

    Part 2: Getting 4096-dimensional image features from vgg19

    1. Inspect the model structure

    Example code:

    # Inspect the overall model structure
    structure = torch.nn.Sequential(*list(vgg_model.children())[:])
    print(structure)
    
    # Inspect the names of the model's parts
    print('Model components:', vgg_model._modules.keys())

    The output is:

    Sequential(
      (0): Sequential(
        (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): ReLU(inplace=True)
        (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (3): ReLU(inplace=True)
        (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
        (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (6): ReLU(inplace=True)
        (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (8): ReLU(inplace=True)
        (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
        (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (11): ReLU(inplace=True)
        (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (13): ReLU(inplace=True)
        (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (15): ReLU(inplace=True)
        (16): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (17): ReLU(inplace=True)
        (18): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
        (19): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (20): ReLU(inplace=True)
        (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (22): ReLU(inplace=True)
        (23): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (24): ReLU(inplace=True)
        (25): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (26): ReLU(inplace=True)
        (27): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
        (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (29): ReLU(inplace=True)
        (30): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (31): ReLU(inplace=True)
        (32): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (33): ReLU(inplace=True)
        (34): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (35): ReLU(inplace=True)
        (36): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      )
      (1): AdaptiveAvgPool2d(output_size=(7, 7))
      (2): Sequential(
        (0): Linear(in_features=25088, out_features=4096, bias=True)
        (1): ReLU(inplace=True)
        (2): Dropout(p=0.5, inplace=False)
        (3): Linear(in_features=4096, out_features=4096, bias=True)
        (4): ReLU(inplace=True)
        (5): Dropout(p=0.5, inplace=False)
        (6): Linear(in_features=4096, out_features=1000, bias=True)
      )
    )
    Model components: odict_keys(['features', 'avgpool', 'classifier'])

    The output above shows that vgg19 consists of three parts: 'features', 'avgpool', and 'classifier'. To obtain 4096-dimensional features, we can drop the last two layers, (5) and (6), of the classifier. The next section shows how to remove them.

    2. Modify the model structure

    The structure-inspection code above can take an index to pull out each part of the model:

    # Get the first Sequential of vgg19, i.e. the features part.
    features = torch.nn.Sequential(*list(vgg_model.children())[0])
    print('features of vgg19: ', features)
    
    # Likewise, get the last Sequential, i.e. the classifier part.
    classifier = torch.nn.Sequential(*list(vgg_model.children())[-1])
    print('classifier of vgg19: ', classifier)
    
    # Slice the classifier further to obtain a sub-model with a 4096-dimensional output.
    new_classifier = torch.nn.Sequential(*list(vgg_model.children())[-1][:5])
    print('new_classifier: ', new_classifier)

    The output of the code above is (to keep this article shorter, layers (2) through (33) of the features part are omitted below):

    features of vgg19:  Sequential(
      (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): ReLU(inplace=True)
      (2) ... (33)

      (34): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (35): ReLU(inplace=True)
      (36): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    classifier of vgg19:  Sequential(
      (0): Linear(in_features=25088, out_features=4096, bias=True)
      (1): ReLU(inplace=True)
      (2): Dropout(p=0.5, inplace=False)
      (3): Linear(in_features=4096, out_features=4096, bias=True)
      (4): ReLU(inplace=True)
      (5): Dropout(p=0.5, inplace=False)
      (6): Linear(in_features=4096, out_features=1000, bias=True)
    )
    new_classifier :  Sequential(
      (0): Linear(in_features=25088, out_features=4096, bias=True)
      (1): ReLU(inplace=True)
      (2): Dropout(p=0.5, inplace=False)
      (3): Linear(in_features=4096, out_features=4096, bias=True)
      (4): ReLU(inplace=True)
    )

    The output shows that, compared with the original vgg19 classifier, new_classifier drops the final fully connected layer and a Dropout layer, so it can produce 4096-dimensional features.

    Next we replace the classifier part of the original vgg19 model with new_classifier, keeping the features and avgpool parts unchanged. Code:

    from PIL import Image
    from torchvision import transforms
    
    # The original vgg19 model, whose output is 1000-dimensional.
    vgg_model_1000 = models.vgg19(pretrained=True)
    
    # The next three lines build the modified vgg19:
    # drop the last fully connected layer so the output is 4096-dimensional.
    vgg_model_4096 = models.vgg19(pretrained=True)
    # The original classifier minus the last fully connected layer.
    new_classifier = torch.nn.Sequential(*list(vgg_model_4096.children())[-1][:5])
    # Replace the classifier of the original model.
    vgg_model_4096.classifier = new_classifier
    
    # Load and preprocess the image
    data_dir = '/mnt/image_test.jpg'
    im = Image.open(data_dir)
    trans = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    ])
    im = trans(im)
    im.unsqueeze_(dim=0)
    
    # Use vgg19 to get the image feature representations.
    image_feature_1000 = vgg_model_1000(im).data[0]
    image_feature_4096 = vgg_model_4096(im).data[0]
    print('dim of vgg_model_1000: ', image_feature_1000.shape)
    print('dim of vgg_model_4096: ', image_feature_4096.shape)

    The output is:

    dim of vgg_model_1000:  torch.Size([1000])
    dim of vgg_model_4096:  torch.Size([4096])

    As expected, the unmodified vgg19 outputs a 1000-dimensional vector, while the modified vgg19 outputs 4096-dimensional image features.


    Summary

    It took a bit more than a day to solve this problem; time-consuming, but worthwhile.

     

    Extension

    If you do not need the output of the classifier part, but only the output of the original vgg19's features part, there is a simpler method:

    # Take the first 34 modules of the features part of the original vgg19
    # to obtain a new vgg_model.
    vgg_model = models.vgg19(pretrained=True).features[:34]
    
    # Note: the line below would only slice within the classifier part;
    # it cannot slice across features, avgpool and classifier together,
    # so this approach cannot produce the 4096-dimensional features.
    # vgg_model = models.vgg19(pretrained=True).classifier[:40]

    This method only worked for me with the vgg19 model loaded over the network (i.e. with pretrained=True); it did not work with a vgg_model built from the local weights file, for reasons I have not figured out.


    1. Changing the pretrained-model path in PyTorch

    The PyTorch installation directory contains a file hub.py that determines where pretrained models are loaded from. It lives under xxx\site-packages\torch; on my machine, for example, that is "C:\ProgramData\Miniconda3\Lib\site-packages\torch".
    Open hub.py and find the load_state_dict_from_url function. Its second parameter,
    model_dir, specifies the weight-file directory: model_dir (string, optional): directory in which to save the object. Change its value from None to the location of your weight files, e.g. model_dir='D:/Models_Download/torch'.
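Editing hub.py directly works, but the change is lost on every reinstall or upgrade. A less invasive alternative, if your PyTorch version provides it, is torch.hub.set_dir (the TORCH_HOME environment variable works similarly; the directory below is a placeholder):

```python
import torch

# Redirect the pretrained-model cache without editing hub.py.
torch.hub.set_dir('D:/Models_Download/torch')
print(torch.hub.get_dir())
```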

    def load_state_dict_from_url(url, model_dir='D:/Models_Download/torch', map_location=None, progress=True, check_hash=False, file_name=None):
        r"""Loads the Torch serialized object at the given URL.
    
        If downloaded file is a zip file, it will be automatically
        decompressed.
    
        If the object is already present in `model_dir`, it's deserialized and
        returned.
        The default value of `model_dir` is ``<hub_dir>/checkpoints`` where
        `hub_dir` is the directory returned by :func:`~torch.hub.get_dir`.
    
        Args:
            url (string): URL of the object to download
            model_dir (string, optional): directory in which to save the object
            map_location (optional): a function or a dict specifying how to remap storage locations (see torch.load)
            progress (bool, optional): whether or not to display a progress bar to stderr.
                Default: True
            check_hash(bool, optional): If True, the filename part of the URL should follow the naming convention
                ``filename-<sha256>.ext`` where ``<sha256>`` is the first eight or more
                digits of the SHA256 hash of the contents of the file. The hash is used to
                ensure unique names and to verify the contents of the file.
                Default: False
            file_name (string, optional): name for the downloaded file. Filename from `url` will be used if not set.
    
        Example:
            >>> state_dict = torch.hub.load_state_dict_from_url('https://s3.amazonaws.com/pytorch/models/resnet18-5c106cde.pth')
    
        """
    

    2. Changing the pretrained-model location in Keras

    The Keras installation has no single file that defines the pretrained-model location, so I could only specify the path to the model file when instantiating the pretrained model (is there a better way?).

    base_model = vgg19.VGG19(input_shape=(IMAGE_SIZE, IMAGE_SIZE, 3), include_top=False, 
                             weights='D:\\Models_Download\\keras\\vgg19_weights_tf_dim_ordering_tf_kernels_notop.h5')
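As a possible alternative to hard-coding the weights path: Keras resolves its cache root (default ~/.keras) from the KERAS_HOME environment variable, so setting it before Keras is imported should redirect downloaded weights there. A sketch (the directory is a placeholder, and this behavior is an assumption worth verifying against your Keras version):

```python
import os

# Keras reads KERAS_HOME to locate its config and model cache;
# it must be set before keras/tensorflow is imported.
os.environ['KERAS_HOME'] = 'D:/Models_Download/keras'  # placeholder directory
```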
    