  • A CNN for melanoma detection built with three open-source frameworks: TensorFlow, PyTorch, and OpenCV
  • TensorFlow-to-PyTorch conversion in practice (1)

    Posted 2019-04-02 19:36:33

    My current task is to port a paper's TensorFlow implementation to PyTorch, so this beginner finally has to buckle down and write some real code...
    I first skimmed the TensorFlow and PyTorch Chinese community sites, but found that little of it stuck. It works better to study a few basic examples and then practice hands-on, consulting each framework's English tutorials along the way (a chance to pick up the common vocabulary and practice some English, too):

    TensorFlow tutorials in English: Tensorflow tutorials
    TensorFlow tutorials in Chinese: Tensorflow 中文
    PyTorch tutorials in English: Pytorch tutorials
    PyTorch tutorials in Chinese: Pytorch 中文

    The first step is building the CNN itself.

    1. Building the CNN in TensorFlow:

    #coding=UTF-8
    #step 1: import TensorFlow
    import tensorflow as tf
    from tensorflow.examples.tutorials.mnist import input_data
    
    
    #step 2: load the MNIST dataset
    mnist = input_data.read_data_sets('MNIST_data/', one_hot=True)
    x = tf.placeholder(tf.float32, [None, 784])
    y_ = tf.placeholder(tf.float32, [None, 10])
    
    #step 3: wrap weight and bias initialization in functions to keep the code tidy
    #initialize the weights with a little noise to break symmetry and avoid zero gradients
    def weight_variable(shape):
        initial = tf.truncated_normal(shape, stddev=0.1)
        return tf.Variable(initial)
    def bias_variable(shape):
        initial = tf.constant(0.1, shape=shape)
        return tf.Variable(initial)
    
    #step 4: define the convolution and max-pooling ops
    #wrapped in functions, again to keep the code tidy
    def conv2d(x, w):
        return tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')
    def max_pool_2x2(x):
        return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
    
    #step 5: reshape the image data
    x_image = tf.reshape(x, [-1, 28, 28, 1])
    
    #step 6: build the first convolutional layer
    #32 filters of size 5x5, followed by max pooling
    W_conv1 = weight_variable([5, 5, 1, 32])
    b_conv1 = bias_variable([32])
    h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
    h_pool1 = max_pool_2x2(h_conv1)
    
    #step 7: build the second convolutional layer
    #64 filters of size 5x5
    W_conv2 = weight_variable([5, 5, 32, 64])
    b_conv2 = bias_variable([64])
    h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
    h_pool2 = max_pool_2x2(h_conv2)
    
    #step 8: build the fully connected layer
    #the previous layer's output must be flattened to 1-d first
    W_fc1 = weight_variable([7*7*64, 1024])
    b_fc1 = bias_variable([1024])
    h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
    h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
    
    #step 9: add dropout to reduce overfitting
    #a placeholder controls the keep probability, so dropout can be active during
    #training and disabled during evaluation
    keep_prob = tf.placeholder(tf.float32)
    h_fc1_dropout = tf.nn.dropout(h_fc1, keep_prob)
    
    #step 10: output layer, producing raw (linear) logits
    W_fc2 = weight_variable([1024, 10])
    b_fc2 = bias_variable([10])
    y_conv = tf.matmul(h_fc1_dropout, W_fc2) + b_fc2
    
    #step 11: training and evaluation
    cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_,
                                                                           logits=y_conv))
    train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
    accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(y_, 1), tf.argmax(y_conv, 1)), tf.float32))
    
    with tf.Session() as sess:
        writer = tf.summary.FileWriter('logs/', sess.graph)
        tf.global_variables_initializer().run()
        for i in range(3000):
            batch = mnist.train.next_batch(50)
            if i % 100 == 0:
                train_accuracy = accuracy.eval(feed_dict={x: batch[0],
                                                          y_: batch[1],
                                                          keep_prob: 1.})
                print('step {}, train accuracy {}'.format(i, train_accuracy))
            train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
        test_accuracy = accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels,
                                                 keep_prob: 1.})
        print('test accuracy: {}'.format(test_accuracy))
        saver = tf.train.Saver()
        path = saver.save(sess, './results/mnist_deep.ckpt')
        print('save path: {}'.format(path))

    2. Building the CNN in PyTorch:

    #coding=UTF-8
    #import the required packages
    import torch
    import torch.nn as nn
    import torchvision.datasets as normal_datasets
    import torchvision.transforms as transforms
    from torch.autograd import Variable
    import matplotlib.pyplot as plt
    
    #hyperparameters
    EPOCH = 1
    BATCH_SIZE = 100
    LR = 0.001
    
    #wrap the data in a Variable, moved to the GPU when one is available
    def get_variable(x):
        x = Variable(x)
        return x.cuda() if torch.cuda.is_available() else x
    
    #torchvision.datasets provides loaders for common datasets
    train_dataset = normal_datasets.MNIST(
        root='./mnist/',
        train=True,
        transform=transforms.ToTensor(),
        download=True
    )
    #plot a sample image
    print(train_dataset.train_data.size())
    print(train_dataset.train_labels.size())
    plt.imshow(train_dataset.train_data[0].numpy(), cmap='gray')
    plt.title('%i' % train_dataset.train_labels[0])
    plt.show()
    
    #the test split (note train=False, unlike the training set above)
    test_dataset = normal_datasets.MNIST(
        root='./mnist/',
        train=False,
        transform=transforms.ToTensor(),
    )
    
    #wrap the datasets in DataLoaders for batched training
    train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                               batch_size=BATCH_SIZE,
                                               shuffle=True)
    
    test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
                                              batch_size=BATCH_SIZE,
                                              shuffle=False)
    
    #define the model
    class CNN(nn.Module):
        def __init__(self):
            super(CNN, self).__init__()
            #nn.Sequential makes each block quick to assemble
            self.conv1 = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=5, padding=2),
                nn.BatchNorm2d(16),
                nn.ReLU(),
                nn.MaxPool2d(2)
            )
    
            self.conv2 = nn.Sequential(
                nn.Conv2d(16, 32, kernel_size=5, padding=2),
                nn.BatchNorm2d(32),
                nn.ReLU(),
                nn.MaxPool2d(2)
            )
            self.fc = nn.Linear(7*7*32, 10)
    
        def forward(self, x):
            out = self.conv1(x)
            out = self.conv2(out)
            out = out.view(out.size(0), -1)
            out = self.fc(out)
            return out
    
    cnn = CNN()
    print(cnn)
    
    if torch.cuda.is_available():
        cnn = cnn.cuda()
    
    #choose the loss function and optimizer
    loss_func = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(cnn.parameters(), lr=LR)
    
    #batch training
    for epoch in range(EPOCH):
        cnn.train()
        for i, (images, labels) in enumerate(train_loader):
            images = get_variable(images)
            labels = get_variable(labels)
            #forward pass
            outputs = cnn(images)
            #compute the loss
            loss = loss_func(outputs, labels)
            #clear the gradients from the previous step
            optimizer.zero_grad()
            #backpropagate the error
            loss.backward()
            #update the parameters
            optimizer.step()
    
            if (i+1) % 100 == 0:
                print('Epoch [%d/%d], Iter [%d/%d] Loss: %.4f'
                      % (epoch + 1, EPOCH, i + 1, len(train_dataset) // BATCH_SIZE, loss.item()))
    
        #evaluate the model
        cnn.eval()
        correct = 0
        total = 0
        for images, labels in test_loader:
            images = get_variable(images)
            labels = get_variable(labels)
    
            outputs = cnn(images)
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            correct += (predicted == labels.data).sum()
    
        print('test accuracy: %d %%' % (100 * correct / total))
    
    #save the trained model's parameters
    torch.save(cnn.state_dict(), 'cnn.pkl')
    
  • NN frameworks — Caffe, TensorFlow, PyTorch: an introduction to each, comparisons of their strengths and weaknesses, and a look at other frameworks
  • 蠕虫/ tensorflow-pytorch-cuda

    1. PyTorch to TensorFlow

    Adapted from the Huggingface code.

    # coding=utf-8
    """Convert Huggingface Pytorch checkpoint to Tensorflow checkpoint."""
    
    import argparse
    import os
    
    import numpy as np
    import tensorflow as tf
    import torch
    
    
    def convert_pytorch_checkpoint_to_tf(model, ckpt_dir: str, model_name: str):
    
        """
        :param model: state dict loaded from the PyTorch checkpoint (via torch.load)
        :param ckpt_dir: Tensorflow model directory
        :param model_name: model name
        :return:
    
        Currently supported HF models:
            Y BertModel
            N BertForMaskedLM
            N BertForPreTraining
            N BertForMultipleChoice
            N BertForNextSentencePrediction
            N BertForSequenceClassification
            N BertForQuestionAnswering
        """
    
        tensors_to_transpose = ("dense.weight", "attention.self.query", "attention.self.key", "attention.self.value")
    
        var_map = (
            ("layer.", "layer_"),
            ("word_embeddings.weight", "word_embeddings"),
            ("position_embeddings.weight", "position_embeddings"),
            ("token_type_embeddings.weight", "token_type_embeddings"),
            (".", "/"),
            ("LayerNorm/weight", "LayerNorm/gamma"),
            ("LayerNorm/bias", "LayerNorm/beta"),
            ("weight", "kernel"),
        )
    
        if not os.path.isdir(ckpt_dir):
            os.makedirs(ckpt_dir)
    
        # model is already a state dict here, so its keys are the parameter names
        state_dict = model.keys()
    
        def to_tf_var_name(name: str):
            for patt, repl in iter(var_map):
                name = name.replace(patt, repl)
            return "bert/{}".format(name)
    
        def create_tf_var(tensor: np.ndarray, name: str, session: tf.Session):
            tf_dtype = tf.dtypes.as_dtype(tensor.dtype)
            tf_var = tf.get_variable(dtype=tf_dtype, shape=tensor.shape, name=name, initializer=tf.zeros_initializer())
            session.run(tf.variables_initializer([tf_var]))
            session.run(tf_var)
            return tf_var
    
        tf.reset_default_graph()
        with tf.Session() as session:
            for var_name in state_dict:
                tf_name = to_tf_var_name(var_name)
                torch_tensor = model[var_name].cpu().numpy()
                if any([x in var_name for x in tensors_to_transpose]):
                    torch_tensor = torch_tensor.T
                tf_var = create_tf_var(tensor=torch_tensor, name=tf_name, session=session)
                tf.keras.backend.set_value(tf_var, torch_tensor)
                tf_weight = session.run(tf_var)
                print("Successfully created {}: {}".format(tf_name, np.allclose(tf_weight, torch_tensor)))
    
            saver = tf.train.Saver(tf.trainable_variables())
            saver.save(session, os.path.join(ckpt_dir, model_name.replace("-", "_") + ".ckpt"))
    
    
    def main(raw_args=None):
        parser = argparse.ArgumentParser()
        parser.add_argument("--model_name", type=str, required=True, help="model name e.g. bert-base-uncased")
        parser.add_argument("--pytorch_model_path", type=str, required=True, help="/path/to/<pytorch-model-name>.bin")
        parser.add_argument("--tf_cache_dir", type=str, required=True, help="Directory in which to save tensorflow model")
        args = parser.parse_args(raw_args)
    
        # no trailing comma here -- it would wrap the state dict in a tuple
        model = torch.load(args.pytorch_model_path)
    
        convert_pytorch_checkpoint_to_tf(model=model, ckpt_dir=args.tf_cache_dir, model_name=args.model_name)
    
    
    if __name__ == "__main__":
        main()

    The conversion is mainly a matter of renaming the model's parameter keys, so the crucial step is reading the current model's key names correctly:

     state_dict = model.keys()

    Once the keys are obtained, each name is rewritten according to the mapping table var_map, and the corresponding value is converted to TensorFlow's format.
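    The renaming step can be exercised on its own. A minimal sketch reusing the same var_map pairs as the script above (the sample key below is just an illustrative BERT parameter name):

```python
# The same substitution table used by the conversion script above.
var_map = (
    ("layer.", "layer_"),
    ("word_embeddings.weight", "word_embeddings"),
    ("position_embeddings.weight", "position_embeddings"),
    ("token_type_embeddings.weight", "token_type_embeddings"),
    (".", "/"),
    ("LayerNorm/weight", "LayerNorm/gamma"),
    ("LayerNorm/bias", "LayerNorm/beta"),
    ("weight", "kernel"),
)

def to_tf_var_name(name: str) -> str:
    # apply each (pattern, replacement) pair in order; order matters, since
    # "." -> "/" must run before the LayerNorm and weight/kernel rules
    for patt, repl in var_map:
        name = name.replace(patt, repl)
    return "bert/{}".format(name)

print(to_tf_var_name("encoder.layer.0.attention.self.query.weight"))
# bert/encoder/layer_0/attention/self/query/kernel
```

    Running a few keys through this function before the full conversion is a quick way to verify the mapping produces the TensorFlow names you expect.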

    Save the code above as

     convert_bert_pytorch_checkpoint_to_original_tf.py

    Then run the conversion with the following command:

    python convert_bert_pytorch_checkpoint_to_original_tf.py --model_name bert_model --tf_cache_dir tf_save_path/ --pytorch_model_path torch_model_path/

    2. TensorFlow to PyTorch

    The Huggingface code can be used directly:

    # coding=utf-8
    
    """Convert BERT checkpoint."""
    
    
    import argparse
    import logging
    
    import torch
    
    from transformers import BertConfig, BertForPreTraining, load_tf_weights_in_bert
    
    
    logging.basicConfig(level=logging.INFO)
    
    
    def convert_tf_checkpoint_to_pytorch(tf_checkpoint_path, bert_config_file, pytorch_dump_path):
        # Initialise PyTorch model
        config = BertConfig.from_json_file(bert_config_file)
        print("Building PyTorch model from configuration: {}".format(str(config)))
        model = BertForPreTraining(config)
    
        # Load weights from tf checkpoint
        load_tf_weights_in_bert(model, config, tf_checkpoint_path)
    
        # Save pytorch-model
        print("Save PyTorch model to {}".format(pytorch_dump_path))
        torch.save(model.state_dict(), pytorch_dump_path)
    
    
    if __name__ == "__main__":
        parser = argparse.ArgumentParser()
        # Required parameters
        parser.add_argument(
            "--tf_checkpoint_path", default=None, type=str, required=True, help="Path to the TensorFlow checkpoint path."
        )
        parser.add_argument(
            "--bert_config_file",
            default=None,
            type=str,
            required=True,
            help="The config json file corresponding to the pre-trained BERT model. \n"
            "This specifies the model architecture.",
        )
        parser.add_argument(
            "--pytorch_dump_path", default=None, type=str, required=True, help="Path to the output PyTorch model."
        )
        args = parser.parse_args()
        convert_tf_checkpoint_to_pytorch(args.tf_checkpoint_path, args.bert_config_file, args.pytorch_dump_path)
    

    Run it according to the argument descriptions above.

  • A comparative analysis of deep-learning frameworks based on TensorFlow and PyTorch (PDF)
  • Converting models between TensorFlow and PyTorch

    Posted 2021-01-13 11:07:11

    Reference:
    https://github.com/bermanmaxim/jaccardSegment/blob/master/ckpt_to_dd.py

    1. Converting a TensorFlow model to a PyTorch model

    import tensorflow as tf
    import deepdish as dd
    import argparse
    import os
    import numpy as np
    
    def tr(v):
        # tensorflow weights to pytorch weights
        if v.ndim == 4:
            return np.ascontiguousarray(v.transpose(3,2,0,1))
        elif v.ndim == 2:
            return np.ascontiguousarray(v.transpose())
        return v
    
    def read_ckpt(ckpt):
        # https://github.com/tensorflow/tensorflow/issues/1823
        reader = tf.train.NewCheckpointReader(ckpt)
        weights = {n: reader.get_tensor(n) for (n, _) in reader.get_variable_to_shape_map().items()}
        pyweights = {k: tr(v) for (k, v) in weights.items()}
        return pyweights
    if __name__ == '__main__':
        parser = argparse.ArgumentParser(description="Converts ckpt weights to deepdish hdf5")
        parser.add_argument("infile", type=str,
                            help="Path to the ckpt.")  # ***model.ckpt-22177***
        parser.add_argument("outfile", type=str, nargs='?', default='',
                            help="Output file (inferred if missing).")
        args = parser.parse_args()
        if args.outfile == '':
            args.outfile = os.path.splitext(args.infile)[0] + '.h5'
        outdir = os.path.dirname(args.outfile)
        if not os.path.exists(outdir):
            os.makedirs(outdir)
        weights = read_ckpt(args.infile)
        dd.io.save(args.outfile, weights)
    

    1. Running the code above produces the model.h5 file.
    Note: make sure TensorFlow and PyTorch run under the same Python version.
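    As a sanity check on what tr() does above: TensorFlow stores conv kernels as (H, W, in_ch, out_ch), while PyTorch expects (out_ch, in_ch, H, W), which is exactly the transpose(3, 2, 0, 1) in tr(). A shape-only sketch of that permutation (the 5x5x3x16 kernel shape is a made-up example):

```python
# tr() permutes conv weights with transpose(3, 2, 0, 1):
# TensorFlow layout (H, W, in_ch, out_ch) -> PyTorch layout (out_ch, in_ch, H, W).
def tf_shape_to_torch(shape):
    h, w, cin, cout = shape
    return (cout, cin, h, w)

print(tf_shape_to_torch((5, 5, 3, 16)))  # (16, 3, 5, 5)
```

    The 2-D branch of tr() is the same idea for fully connected layers: TensorFlow matmul weights are (in, out), PyTorch nn.Linear stores (out, in), hence the plain transpose.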

    2. Usage: load the model in PyTorch.
    This assumes the networks were saved with identical parameter names:

    import torch
    import deepdish as dd
    net = resnet50(..)  # the target PyTorch network; resnet50 is just the example here
    model_dict = net.state_dict()
    #the values loaded from model.h5 are numpy arrays; convert each to a tensor first
    pretrained_dict = dd.io.load('./model.h5')
    new_pre_dict = {}
    for k, v in pretrained_dict.items():
        new_pre_dict[k] = torch.Tensor(v)
    #update
    model_dict.update(new_pre_dict)
    #load
    net.load_state_dict(model_dict)
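    Since this load assumes identical parameter names on both sides, it is worth diffing the two key sets before calling load_state_dict. A minimal sketch with plain dicts standing in for the two state dicts (all parameter names below are made up):

```python
# Hypothetical stand-ins for net.state_dict() and the dict loaded from model.h5.
model_dict = {"conv1.weight": 0, "conv1.bias": 0, "fc.weight": 0}
pretrained_dict = {"conv1.weight": 1, "conv1.bias": 2, "fc.weight": 3, "aux.bias": 4}

missing = sorted(set(model_dict) - set(pretrained_dict))     # in the net, not in the file
unexpected = sorted(set(pretrained_dict) - set(model_dict))  # in the file, not in the net
print(missing)     # []
print(unexpected)  # ['aux.bias']

# Keep only the keys the network actually has before updating, mirroring the loop above.
filtered = {k: v for k, v in pretrained_dict.items() if k in model_dict}
print(len(filtered))  # 3
```

    A non-empty `missing` list means some layers would silently keep their random initialization, which is usually the first thing to check when a converted model underperforms.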
  • TensorflowPytorch的函数转换 1)http://www.xyu.ink/1785.html 2)https://www.cnblogs.com/wanghui-garcia/p/10775859.html 3)https://www.cnpython.com/qa/353210 仅供学习记录,如侵必删

    Function conversion between TensorFlow and PyTorch:
    1)http://www.xyu.ink/1785.html
    2)https://www.cnblogs.com/wanghui-garcia/p/10775859.html
    3)https://www.cnpython.com/qa/353210
    Recorded for study purposes only; will be taken down on request.

  • Chinese NER. This project uses Python 2.7, TensorFlow 1.7.0, and PyTorch 0.4.0. If named entity recognition is new to you, read the linked article first. And please leave a star~ This is the simplest BiLSTM + CRF model for named entity recognition. Data: the data folder contains three open-source datasets that can be used, ...
  • This TensorFlow statement resizes an image using bilinear interpolation; the corresponding PyTorch call is: import torch.nn.functional as F x=tensor x = nn.functional.interpolate(x, scale_factor=8, mode='bilinear', align_...
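    The truncated call above ends at align_...; the parameter in question is F.interpolate's align_corners flag, which selects between two ways of mapping output pixels back to input coordinates. A pure-Python sketch of the two mappings (not framework code, just the coordinate arithmetic):

```python
# Where does output index `dst` sample from, for a 1-D resize in_len -> out_len?
def src_coord(dst, in_len, out_len, align_corners):
    if align_corners:
        # the grid endpoints of input and output coincide
        return dst * (in_len - 1) / (out_len - 1)
    # half-pixel centers (align_corners=False, the PyTorch default)
    return (dst + 0.5) * in_len / out_len - 0.5

# Upscaling a length-4 signal to length 8: where does output pixel 7 sample?
print(src_coord(7, 4, 8, True))   # 3.0
print(src_coord(7, 4, 8, False))  # 3.25
```

    Matching this flag to the TensorFlow side matters when porting: the two conventions produce slightly different interpolated values, which shows up as small but systematic differences between the converted models' outputs.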
  • A side-by-side table of function equivalents for porting between TensorFlow and PyTorch

    Posted 2019-08-05 21:30:11
    With this comparison table in hand, basic porting between TensorFlow and PyTorch should be well within reach. The table's columns are: name | tensorflow | pytorch; for example, 2-D convolution | tf.nn.conv2d(input_x, w, strides=[1, 1, 1, 1], padding='SAME') | torch.nn.Conv2d...
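    One recurring entry in such tables: tf.nn.conv2d's padding='SAME' has no string equivalent in older PyTorch versions (PyTorch 1.9+ also accepts padding='same'), but for stride 1 and an odd kernel size the same output size is obtained with an explicit padding of (k - 1) // 2. A small sketch of that rule:

```python
# Explicit padding that reproduces TF's padding='SAME' for stride-1, odd kernels.
def same_padding(kernel_size):
    return (kernel_size - 1) // 2

# Output length of a stride-1 convolution with symmetric padding `pad`.
def out_len(in_len, k, pad):
    return in_len - k + 1 + 2 * pad

print(same_padding(5))                   # 2
print(out_len(28, 5, same_padding(5)))   # 28 (size preserved, as with 'SAME')
```

    This is exactly why the PyTorch CNN earlier in this page uses nn.Conv2d(1, 16, kernel_size=5, padding=2) to mirror the TensorFlow model's 'SAME' convolutions on 28x28 MNIST images.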
  • Installing tensorflow + pytorch + pycharm + anaconda on an RTX 3060. The document includes installer packages on Baidu Netdisk (permanent share), which saves downloading the files from the NVIDIA site by hand; pytorch installs straight from a whl file, saving time; the bundle also includes a pycharm package, as well as ...
  • Many readers ask me whether to learn TensorFlow or PyTorch as their deep-learning framework. I will offer personal advice on the following points. 1. Ease of learning and use. Deep-learning frameworks use a computation graph to define the order of operations in a neural network. TF1 uses a static-graph mechanism, while PyTorch uses ...
  • Notes on converting tensorflow to Pytorch (usage of gather, more to be added...) https://blog.csdn.net/CHNguoshiwushuang/article/details/80721675 rewriting the tensorflow MNIST example in pytorch ...
  • Here is one method for each of TensorFlow and PyTorch. tf.train.exponential_decay(): TensorFlow provides exponential learning-rate decay, tf.train.exponential_decay(learning_rate, global_step=global_step, decay_steps=100, decay_rate=0.99...
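    The schedule computed by tf.train.exponential_decay (with staircase=False) is learning_rate * decay_rate ** (global_step / decay_steps); on the PyTorch side, torch.optim.lr_scheduler.ExponentialLR reproduces the same curve with gamma = decay_rate when the scheduler is stepped once per decay period. A plain-Python sketch of the formula itself:

```python
# Learning rate at a given global_step, matching
# tf.train.exponential_decay(..., staircase=False).
def exponential_decay(learning_rate, global_step, decay_steps, decay_rate):
    return learning_rate * decay_rate ** (global_step / decay_steps)

lr0 = 0.1
print(exponential_decay(lr0, 0, 100, 0.99))    # 0.1  (no decay yet)
# after one full decay period the lr has been multiplied by decay_rate once
print(exponential_decay(lr0, 100, 100, 0.99))
```

    With staircase=True, TensorFlow floors the exponent to an integer, which corresponds to only stepping the PyTorch scheduler every decay_steps iterations.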
  • TensorFlow vs PyTorch

    Posted 2020-10-31 16:46:08
    A TensorFlow-PyTorch comparison: computation graphs, distributed training, production deployment. References: https://zhuanlan.zhihu.com/p/80733307 https://blog.csdn.net/qq_37388085/article/details/102559532 Computation graphs: a computation graph is a way of describing a computation as a directed acyclic...
  • [Hand-holding level] Installation guide for TensorFlow and pytorch. 1. Preface 2. Installing pytorch 3. Installing TensorFlow 4. References. 1. Preface: a brief summary of my environment and the steps I took. 2. Installing pytorch: for a very detailed walkthrough see the PyTorch deep-learning quick-start tutorial...
  • Framework | env name (--env parameter) | ... : TensorFlow 2.2 | tensorflow-2.2 | TensorFlow 2.2.0 + Keras 2.3.1 on Python 3.7 | floydhub/tensorflow | TensorFlow-2.2; TensorFlow 2.1 | tensorfl
  • Contents: Preface; 1. TensorFlow version compatibility; 2. PyTorch version compatibility; 3. TensorFlow installation steps: 1. create a virtual environment 2. activate the virtual environment 3. set up the CUDA and cuDNN drivers 4. load the tensorflow module 5. set the timeout 6. verify that tensorflow installed successfully; 4. ...
  • Reference links: https://time.geekbang.org/course/detail/100046401-202904 TensorFlow PyTorch
  • Installing tensorflow and pytorch side by side

    Posted 2020-09-13 10:11:27
    First install tensorflow-gpu with pip (conda search tensorflow-gpu lists the available versions): pip install tensorflow-gpu==1.15. Then upgrade pip; the newest pip can resolve the version conflicts between the tf and torch dependency packages: pip install --upgrade pip ...
  • The first code block is PyTorch, computing with a dynamic graph: the program runs as soon as it is written. The second is TensorFlow, computing with a static graph: the computation graph is built first and executed afterwards.
  • Tensorflowpytorch对比

    2020-05-05 00:55:11
    1. Preface: many people starting out in deep learning agonize over which framework to learn, Tensorflow or pytorch. The mainstream advice is that if you are doing academic research, then...PyTorch is essentially a NumPy replacement that supports GPUs and comes with higher-level functionality...
  • Comparing the similarities and differences between Tensorflow and PyTorch

    Posted 2020-03-21 21:04:09
    1. About PyTorch: PyTorch was open-sourced by the Torch7 team and released by Facebook's AI research group as a Python toolkit. According to the project site, it is a Python-first deep-learning framework that provides tensors and dynamic neural networks on top of strong GPU acceleration...
  • Tensorflow installation guide. 1. Install Anaconda: go to the Anaconda site and download the matching installer; after the download finishes, install it and note your Anaconda install path. 2. Install Pycharm: go to the Jetbrains site and install the community edition of Pycharm. ...
  • TensorFlow and CUDA on Windows; pytorch and CUDA on Windows: https://pytorch.org/get-started/previous-versions/
  • https://huggingface.co/transformers/converting_tensorflow_models.html — pip install transformers; once the download completes, step 2: go to: ...
