  • PyTorch model deployment

    Published 2019-11-20 10:12:31

    1. Calling a Python-trained PyTorch model from C++ (Part 1): makefile basics

    https://blog.csdn.net/xiake001/article/details/84838249

    2. Calling a Python-trained PyTorch model from C++ (Part 2): an example of C++ calling Python

    https://blog.csdn.net/xiake001/article/details/84838290

    3. Calling a Python-trained PyTorch model from C++ (Part 3): wrapping a PyTorch model in practice

    https://blog.csdn.net/xiake001/article/details/84838339

    4. Calling a PyTorch model from C++

    https://blog.csdn.net/qq_16309049/article/details/83502243

  • Pytorch Model Deployment: A Complete Walkthrough

    Pytorch Model Deployment: A Complete Walkthrough


    Saving a PyTorch model

    Saving

    torch.save(model, 'net.pth')  # save the whole network: structure and parameters
    torch.save(model, 'net.pkl')  # save the whole network: structure and parameters
    torch.save(model.state_dict(), 'net_params.pth')  # save only the parameters
    torch.save(model.state_dict(), 'net_params.pkl')  # save only the parameters
    

    Loading

    model = torch.load('net.pth')  # load the whole network: structure and parameters
    model = torch.load('net.pkl')  # load the whole network: structure and parameters
    model.load_state_dict(torch.load('net_params.pth'))  # load parameters only
    model.load_state_dict(torch.load('net_params.pkl'))  # load parameters only
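Under the hood, torch.save serializes with Python's pickle, so the difference between the two save styles can be sketched with plain pickle and an ordinary dict standing in for a real state_dict (a toy illustration, no torch required; the parameter names are made up):

```python
import pickle

# Stand-in for model.state_dict(): a mapping from parameter names to values.
fake_state_dict = {"conv1.weight": [0.1, 0.2], "fc.bias": [0.0]}

# torch.save(model.state_dict(), path) roughly amounts to pickling this dict...
blob = pickle.dumps(fake_state_dict)

# ...and torch.load(path) unpickles it; the values round-trip exactly.
restored = pickle.loads(blob)
assert restored == fake_state_dict
```

Saving only the state_dict is generally preferred: pickling the whole model object ties the file to the exact class definition and module layout used at save time.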
    

    Saving and loading a checkpoint for inference or resuming training

    torch.save({
                'epoch': epoch,
                'model_state_dict': model.state_dict(),
                'optimizer_state_dict': optimizer.state_dict(),
                'loss': loss,
                ...
                }, PATH)
    

    Loading

    model = TheModelClass(*args, **kwargs)
    optimizer = TheOptimizerClass(*args, **kwargs)
    
    checkpoint = torch.load(PATH)
    model.load_state_dict(checkpoint['model_state_dict'])
    optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
    epoch = checkpoint['epoch']
    loss = checkpoint['loss']
    
    model.eval()
    # - or -
    model.train()
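Because the checkpoint is just a Python dict, resuming training means restoring each entry and continuing from the saved epoch. A minimal sketch with plain pickle standing in for torch.save/torch.load (all values here are hypothetical):

```python
import pickle

# Hypothetical training state captured after epoch 7.
checkpoint = {"epoch": 7, "loss": 0.42,
              "model_state_dict": {"w": [1.0]},
              "optimizer_state_dict": {"lr": 0.01}}

blob = pickle.dumps(checkpoint)   # plays the role of torch.save(checkpoint, PATH)
ckpt = pickle.loads(blob)         # plays the role of torch.load(PATH)

start_epoch = ckpt["epoch"] + 1   # resume from the epoch after the saved one
assert start_epoch == 8
assert ckpt["model_state_dict"] == {"w": [1.0]}
```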
    

    Model conversion

    libtorch has no Python dependency; a model trained in Python must first be converted to a script model before libtorch can load it.

    import torch
    import architecture as arch
     
    # An instance of your model.
    model = arch.RRDB_Net(3, 3, 64, 23, gc=32, upscale=4, norm_type=None, act_type='leakyrelu', \
                            mode='CNA', res_scale=1, upsample_mode='upconv')
     
    model.load_state_dict(torch.load('./models/RRDB_ESRGAN_x4.pth'), strict=True)
    model.eval()
     
    # An example input you would normally provide to your model's forward() method.
    example = torch.rand(64, 3, 3, 3)
     
    # Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
    traced_script_module = torch.jit.trace(model, example)
    output = traced_script_module(torch.ones(64, 3, 3, 3))
    traced_script_module.save("./models/RRDB_ESRGAN_x4_000.pt")
     
    # The traced ScriptModule can now be evaluated identically to a regular PyTorch module
    print(output)
    

    Converting your own model

    import torch
    import torchvision
    from PIL import Image
    import numpy as np
    from model import AlexNet
    
    # the test image is placed in the build folder
    image = Image.open("/home/zhongsy/datasets/dataset/train/no_obstacle/0.jpg")
    image = image.resize((224, 224), Image.ANTIALIAS)
    image = np.asarray(image)
    image = image / 255
    image = torch.Tensor(image).unsqueeze_(dim=0)
    image = image.permute((0, 3, 1, 2)).float()
    
    model = torch.load('./AlexNet.pt',map_location=torch.device('cpu'))
    model.eval()
    input_cpu_ = image.cpu()
    input_gpu = image.cuda()
    
    
    torchd_cpu = torch.jit.trace(model, input_cpu_)
    torch.jit.save(torchd_cpu, "cpu.pth")
    
    model_gpu = torch.load('./AlexNet.pt')
    model_gpu.cuda()
    model_gpu.eval()
    
    torchd_gpu = torch.jit.trace(model_gpu,input_gpu)
    torch.jit.save(torchd_gpu, "gpu.pth")
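The preprocessing above reorders the image from HWC (height, width, channel) to the CHW layout the network expects. What permute does can be shown without torch on nested lists (a toy 2x2 RGB image):

```python
# A 2x2 "image" in HWC layout: hwc[row][col] = [R, G, B].
hwc = [[[1, 2, 3], [4, 5, 6]],
       [[7, 8, 9], [10, 11, 12]]]

H, W, C = 2, 2, 3
# Equivalent of tensor.permute((2, 0, 1)): CHW layout, chw[channel][row][col].
chw = [[[hwc[h][w][c] for w in range(W)] for h in range(H)] for c in range(C)]

assert chw[0] == [[1, 4], [7, 10]]   # the red channel, gathered across pixels
assert chw[2][1][1] == 12            # blue value of the bottom-right pixel
```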
    

    Note that if you train the model on the GPU but want to run inference on the CPU, you must move the model to the target device and call model.eval() before tracing and saving it.

    • In other words, a module traced and saved on the CPU can only run inference on the CPU, and one traced and saved on the GPU only on the GPU.

    Checking the ONNX model

    import onnx
    
    model = onnx.load('gpu.onnx')
    onnx.checker.check_model(model)
    print("====> pass")
    

    Installing libtorch

    Download link

    Writing a program to load the model

    #include <time.h>
    #include <torch/script.h>
    
    #include <algorithm>
    #include <iostream>
    #include <memory>
    #include <opencv2/opencv.hpp>
    
    int main() {
      clock_t start, end;
      torch::jit::script::Module model = torch::jit::load("../cpu.pth");
    //   model.to(at::kCUDA);
      start = clock();
    
      cv::Mat input_image = cv::imread("../294.jpg");
      cv::resize(input_image, input_image, cv::Size(224, 224));
    
      torch::Tensor image_tensor = torch::from_blob(
          input_image.data, {input_image.rows, input_image.cols, 3}, torch::kByte);
    
      image_tensor = image_tensor.permute({2, 0, 1});
    
      image_tensor = image_tensor.toType(torch::kFloat);
    
      image_tensor = image_tensor.div(255);
    
      image_tensor = image_tensor.unsqueeze(0);
    
    //   image_tensor = image_tensor.to(at::kCUDA);
    
      torch::Tensor pred = model.forward({image_tensor}).toTensor();
      end = clock();
      std::cout << "inference time: " << (double)(end - start) / CLOCKS_PER_SEC
                << " s" << std::endl;
    
      std::cout << pred << std::endl;
    }
    

    CMakeLists.txt

    cmake_minimum_required(VERSION 3.4 FATAL_ERROR)
    project(simnet)
    
    set(Torch_DIR /home/zhongsy/Downloads/libtorch/share/cmake/Torch)
    find_package(Torch REQUIRED)        # find libtorch
    find_package(OpenCV REQUIRED)       # find OpenCV
    set(CMAKE_CXX_STANDARD 14)
    set(CMAKE_CXX_STANDARD_REQUIRED ON)
    if(NOT Torch_FOUND)
        message(FATAL_ERROR "Pytorch Not Found!")
    endif(NOT Torch_FOUND)
    
    message(STATUS "Pytorch status:")
    message(STATUS "    libraries: ${TORCH_LIBRARIES}")
    
    message(STATUS "OpenCV library status:")
    message(STATUS "    version: ${OpenCV_VERSION}")
    message(STATUS "    libraries: ${OpenCV_LIBS}")
    message(STATUS "    include path: ${OpenCV_INCLUDE_DIRS}")
    
    add_executable(simnet pytorch.cc)
    target_link_libraries(simnet ${TORCH_LIBRARIES} ${OpenCV_LIBS}) 
    
    

    Testing (CPU and GPU)

    The input data can be placed on either the CPU or the GPU.

    cpu run time: 0.094049 s
     0.4806 -0.5249
    [ CPUFloatType{1,2} ]
    
    cuda::is_available():1
    Time used:33.31 ms
    -10.0399  10.7939
    [ CUDAFloatType{1,2} ]
    

    Accelerating with TensorRT

    TensorRT acceleration link

  • Pytorch Model Deployment - Libtorch (deploying a CRNN model)

    Published 2020-03-21 15:42:32

    Pytorch Model Deployment - Libtorch

    Overview

    libtorch is the C++ inference library Facebook provides for PyTorch, intended for industrial-grade deployment and performance optimization.

    Environment

    • cmake 3.0
    • libtorch-1.4 (cpu)
    • opencv-4.1.1

    Installation:

    Build libtorch and opencv together; this article uses libtorch-1.4 (CPU) and opencv-4.1.

    • Possible issues

      • When a project links both libtorch and opencv, the build may fail with `Undefined reference to cv::imread(std::string const&, int)`.
      • Solutions:
        • Rebuild libtorch and opencv from source under the same compiler environment. (untested…)
        • Add add_definitions(-D_GLIBCXX_USE_CXX11_ABI=0) to opencv's CMakeLists.txt and rebuild opencv. (verified)
    • Installing libtorch: just unpack the downloaded archive and point the build at the library path when compiling.

    • Installing opencv: download the source from https://opencv.org/releases/

      unzip opencv-4.1.1.zip
      cd opencv-4.1.1
      # vim CMakeLists.txt: if the linker error above appears, add the add_definitions line here, then rebuild and reinstall
      mkdir build && cd build
      cmake -D CMAKE_BUILD_TYPE=RELEASE -D OPENCV_GENERATE_PKGCONFIG=ON -D CMAKE_INSTALL_PREFIX=/usr/local ..
      make -j4
      sudo make install
      

      Run ls /usr/local/lib to check the installed opencv libraries.

    Case study: deploying a CRNN English text-recognition model with libtorch

    CRNN: a text-recognition model commonly used for OCR.

    Step 1: model conversion

    Convert the trained PyTorch CRNN model into a form libtorch can read.

    # conversion.py
    import torch
    from collections import OrderedDict

    # CRNN, keys, and model_path come from the crnn_libtorch repo
    model = CRNN(32, 1, len(keys.alphabetEnglish) + 1, 256, 1).cpu()
    
    state_dict = torch.load(
        model_path, map_location=lambda storage, loc: storage)
    new_state_dict = OrderedDict()
    for k, v in state_dict.items():
        name = k.replace('module.', '')  # remove `module.`
        new_state_dict[name] = v
    # load params
    model.load_state_dict(new_state_dict)
    
    # convert pth-model to pt-model
    example = torch.rand(1, 1, 32, 512)
    traced_script_module = torch.jit.trace(model, example)
    traced_script_module.save("src/crnn.pt")
    

    The full code is on GitHub: crnn_libtorch
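The renaming loop above exists because nn.DataParallel prefixes every parameter name with `module.`; stripping the prefix lets a plain (non-parallel) model load the weights. The same renaming on an ordinary dict (parameter names are made up):

```python
from collections import OrderedDict

# Hypothetical state_dict saved from an nn.DataParallel-wrapped model.
state_dict = OrderedDict([("module.cnn.weight", [1.0]),
                          ("module.rnn.bias", [0.5])])

new_state_dict = OrderedDict()
for k, v in state_dict.items():
    # count=1 strips only the leading prefix, not any inner occurrence
    new_state_dict[k.replace("module.", "", 1)] = v

assert list(new_state_dict) == ["cnn.weight", "rnn.bias"]
```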

    Step 2: model deployment

    Use libtorch + opencv to recognize text lines.

    //crnnDeploy.h
    #include <torch/torch.h>
    #include <torch/script.h>
    #include <opencv2/highgui.hpp>
    #include <opencv2/imgproc.hpp>
    
    #include <iostream>
    #include <cassert>
    #include <vector>
    
    #ifndef CRNN_H
    #define CRNN_H
    
    class Crnn{
        public:
            Crnn(std::string& modelFile, std::string& keyFile);
            torch::Tensor loadImg(std::string& imgFile, bool isbath=false);
            void infer(torch::Tensor& input);
        private:
            torch::jit::script::Module m_module;
            std::vector<std::string> m_keys;
            std::vector<std::string> readKeys(const std::string& keyFile);
            torch::jit::script::Module loadModule(const std::string& modelFile);
    };
    
    #endif//CRNN_H
    
    /*
    @author
    date: 2020-03-17
    Introduce:
        Deploy crnn model with libtorch.
    */
    
    #include "CrnnDeploy.h"
    #include <thread>
    #include <sys/time.h>
    
    // constructor
    Crnn::Crnn(std::string& modelFile, std::string& keyFile){
        this->m_module = this->loadModule(modelFile);
        this->m_keys = this->readKeys(keyFile);
    }
    
    
    torch::Tensor Crnn::loadImg(std::string& imgFile, bool isbath){
    	cv::Mat input = cv::imread(imgFile, 0);
    	if(!input.data){
    		printf("Error: no image data, check the imgFile path!\n");
    	}
    	int resize_h = int(input.cols * 32 / input.rows);
    	cv::resize(input, input, cv::Size(resize_h, 32));
        torch::Tensor imgTensor;
        if(isbath){
            imgTensor = torch::from_blob(input.data, {32, resize_h, 1}, torch::kByte);
    	    imgTensor = imgTensor.permute({2,0,1});
        }else
        {
            imgTensor = torch::from_blob(input.data, {1,32, resize_h, 1}, torch::kByte);
            imgTensor = imgTensor.permute({0,3,1,2});
        }
    	imgTensor = imgTensor.toType(torch::kFloat);
    	imgTensor = imgTensor.div(255);
    	imgTensor = imgTensor.sub(0.5);
    	imgTensor = imgTensor.div(0.5);
        return imgTensor;
    }
    
    void Crnn::infer(torch::Tensor& input){
        torch::Tensor output = this->m_module.forward({input}).toTensor();
        std::vector<int> predChars;
        int numImgs = output.sizes()[1];
        if(numImgs == 1){
            for(uint i=0; i<output.sizes()[0]; i++){
                auto maxRes = output[i].max(1, true);
                int maxIdx = std::get<1>(maxRes).item<float>();
                predChars.push_back(maxIdx);
            }
            // transcription: collapse repeated indices and drop the CTC blank (index 0)
            std::string realChars="";
            for(uint i=0; i<predChars.size(); i++){
                if(predChars[i] != 0){
                    if(!(i>0 && predChars[i-1]==predChars[i])){
                        realChars += this->m_keys[predChars[i]];
                    }
                }
            }
            std::cout << realChars << std::endl;
        }else
        {
            std::vector<std::string> realCharLists;
            std::vector<std::vector<int>> predictCharLists;
    
            for (int i=0; i<output.sizes()[1]; i++){
                std::vector<int> temp;
                for(int j=0; j<output.sizes()[0]; j++){
                    auto max_result = (output[j][i]).max(0, true);
                    int max_index = std::get<1>(max_result).item<float>();//predict value
                    temp.push_back(max_index);
                }
                predictCharLists.push_back(temp);
            }
    
            for(auto vec : predictCharLists){
                std::string text = "";
                for(uint i=0; i<vec.size(); i++){
                    if(vec[i] != 0){
                        if(!(i>0 && vec[i-1]==vec[i])){
                            text += this->m_keys[vec[i]];
                        }
                    }
                }
                realCharLists.push_back(text);
            }
            for(auto t : realCharLists){
                std::cout << t << std::endl;
            }
        }
    
    }
    
    std::vector<std::string> Crnn::readKeys(const std::string& keyFile){
        std::ifstream in(keyFile);
    	std::ostringstream tmp;
    	tmp << in.rdbuf();
    	std::string keys = tmp.str();
    
        std::vector<std::string> words;
        words.push_back(" "); // the loop below skips the key file's leading space, so add it back here
        int len = keys.length();
        int i = 0;
        
        while (i < len) {
          assert ((keys[i] & 0xF8) <= 0xF0);
          int next = 1;
          if ((keys[i] & 0x80) == 0x00) {
          } else if ((keys[i] & 0xE0) == 0xC0) {
            next = 2;
          } else if ((keys[i] & 0xF0) == 0xE0) {
            next = 3;
          } else if ((keys[i] & 0xF8) == 0xF0) {
            next = 4;
          }
          words.push_back(keys.substr(i, next));
          i += next;
        } 
        return words;
    }
    
    torch::jit::script::Module Crnn::loadModule(const std::string& modelFile){
        torch::jit::script::Module module;
        try{
             module = torch::jit::load(modelFile);
        }catch(const c10::Error& e){
            std::cerr << "error loading the model!\n";
        }
        return module;
    }
    
    
    long getCurrentTime(void){
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec * 1000 + tv.tv_usec/1000;
    }
    
    int main(int argc, const char* argv[]){
    
        if(argc<4){
            printf("Error: CrnnDeploy is missing input params!\n");
            return -1;
        }
        std::string modelFile = argv[1];
        std::string keyFile = argv[2];
        std::string imgFile = argv[3];
    
        long t1 = getCurrentTime();
        Crnn* crnn = new Crnn(modelFile,keyFile);
        torch::Tensor input = crnn->loadImg(imgFile);
        crnn->infer(input);
        delete crnn;
        long t2 = getCurrentTime();
    
        printf("ocr time : %ld ms \n", (t2-t1));
        return 0;
    }
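The per-step argmax plus the transcription loop in Crnn::infer amount to a greedy CTC decode: pick the most likely class at each time step, collapse consecutive repeats, and drop the blank (index 0). A Python sketch of the transcription step (the key table and indices are made up):

```python
def ctc_greedy_decode(pred_indices, keys):
    """Collapse consecutive repeats and drop the CTC blank (index 0)."""
    text = ""
    prev = None  # previous raw index, blanks included
    for idx in pred_indices:
        if idx != 0 and idx != prev:
            text += keys[idx]
        prev = idx
    return text

keys = ["-", "h", "e", "l", "o"]  # keys[0] is the CTC blank
# The blank between the two l's keeps them from being collapsed together.
assert ctc_greedy_decode([1, 1, 0, 2, 3, 3, 0, 3, 4], keys) == "hello"
```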
    

    Full code and test model:
    github: crnn_libtorch

    Get the code: git clone https://github.com/chenyangMl/crnn_libtorch.git
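Crnn::readKeys walks the key file byte by byte, using the UTF-8 lead byte to decide how many bytes the next character occupies. Python's str iteration already yields whole characters, but the manual split below mirrors the C++ logic:

```python
def split_utf8(data: bytes):
    """Split a UTF-8 byte string into one chunk per character."""
    chars, i = [], 0
    while i < len(data):
        b = data[i]
        if b & 0x80 == 0x00:
            n = 1  # 0xxxxxxx: single-byte (ASCII)
        elif b & 0xE0 == 0xC0:
            n = 2  # 110xxxxx: two-byte sequence
        elif b & 0xF0 == 0xE0:
            n = 3  # 1110xxxx: three-byte sequence
        elif b & 0xF8 == 0xF0:
            n = 4  # 11110xxx: four-byte sequence
        else:
            raise ValueError("not a UTF-8 lead byte")
        chars.append(data[i:i + n].decode("utf-8"))
        i += n
    return chars

assert split_utf8("ab中文".encode("utf-8")) == ["a", "b", "中", "文"]
```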

    References

    • opencv installation: https://docs.opencv.org/master/d7/d9f/tutorial_linux_install.html
    • libtorch : https://pytorch.org/tutorials/advanced/cpp_frontend.html
  • Deploying a PyTorch model in C++
    1. Why do this?

    PyTorch, an open-source Python machine-learning library, is attracting ever wider attention and adoption. As a language, however, Python is not the right fit for every scenario. In production, and in settings such as deployment on autonomous vehicles, C++ is often the better choice, so the PyTorch model needs to be deployed in C++. The steps are as follows.

    2. Convert the PyTorch model to Torch Script

    Code written in Torch Script can be saved from a Python process and loaded back in a process with no Python dependency. There are two ways to accomplish this step.

    (1) Tracing

    In this mechanism, the model's structure is captured by evaluating it on an example input and recording the flow of that input through the model. This suits models with limited use of control flow. Concretely, feed an example input to the model and pass both to the torch.jit.trace function. This produces a torch.jit.ScriptModule object with the model trace embedded in the module's forward method:

    import torch
    import torchvision
    
    # An instance of your model.
    model = torchvision.models.resnet18()
    
    # An example input you would normally provide to your model's forward() method.
    example = torch.rand(1, 3, 224, 224)
    
    # Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
    traced_script_module = torch.jit.trace(model, example)
    
    

    (2) Conversion via annotation (scripting)

    In some cases, for example when the model uses particular forms of control flow, you may want to write the model directly in Torch Script and annotate it accordingly. Take the following model:

    import torch
    
    class MyModule(torch.nn.Module):
        def __init__(self, N, M):
            super(MyModule, self).__init__()
            self.weight = torch.nn.Parameter(torch.rand(N, M))
    
        def forward(self, input):
            if input.sum() > 0:
              output = self.weight.mv(input)
            else:
              output = self.weight + input
            return output
    

    Because this module's forward method uses control flow that depends on the input, it is not suited to tracing. Instead, we can convert it to a ScriptModule by compiling the module with torch.jit.script, as shown below:

    class MyModule(torch.nn.Module):
        def __init__(self, N, M):
            super(MyModule, self).__init__()
            self.weight = torch.nn.Parameter(torch.rand(N, M))
    
        def forward(self, input):
            if input.sum() > 0:
              output = self.weight.mv(input)
            else:
              output = self.weight + input
            return output
    
    my_module = MyModule(10,20)
    sm = torch.jit.script(my_module)
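Why tracing is unsuitable here can be shown without TorchScript at all: a tracer records only the operations executed for the example input, so the branch that was not taken vanishes from the traced program. A toy "tracer" illustrating this (an analogy, not real torch.jit internals):

```python
def model_forward(x):
    # Input-dependent control flow, like MyModule.forward above.
    return [v * 2 for v in x] if sum(x) > 0 else [v - 1 for v in x]

def toy_trace(example):
    """Freeze whichever branch the example input exercised."""
    took_positive = sum(example) > 0
    def traced(x):
        return [v * 2 for v in x] if took_positive else [v - 1 for v in x]
    return traced

traced = toy_trace(example=[1.0, 2.0])      # the positive branch gets recorded
assert traced([-3.0]) == [-6.0]             # traced: replays the wrong branch
assert model_forward([-3.0]) == [-4.0]      # eager: takes the other branch
```

torch.jit.script avoids this by compiling both branches of the if statement.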
    
    3. Serialize the Script Module to a file

    (1) For the traced module:

    traced_script_module.save("traced_resnet_model.pt")
    

    (2) For the scripted module:

    sm.save("my_module_model.pt")
    

    At this point we are done on the Python side and move over to C++.

    4. Load the Script Module in C++

    To load a serialized PyTorch model from C++, the application must depend on LibTorch. The LibTorch distribution bundles a set of shared libraries, header files, and CMake configuration files. While CMake is not required, it is the recommended approach and will be well supported going forward. In this tutorial we use CMake and LibTorch to build a C++ application that simply loads and executes a serialized PyTorch model.

    Download LibTorch according to your needs and your machine's configuration. This article uses the 1.8.1 CPU build; the method described here may not apply to other versions, so take note!

    First, write the code that loads the module.

    #include <torch/script.h> // One-stop header.
    
    #include <iostream>
    #include <memory>
    
    int main(int argc, const char* argv[]) {
      if (argc != 2) {
        std::cerr << "usage: example-app <path-to-exported-script-module>\n";
        return -1;
      }
    
    
      torch::jit::script::Module module;
      try {
        // Deserialize the ScriptModule from a file using torch::jit::load().
        module = torch::jit::load(argv[1]);
      }
      catch (const c10::Error& e) {
        std::cerr << "error loading the model\n";
        return -1;
      }
    
      std::cout << "ok\n";
    }
    

    Next, create a CMakeLists.txt with the following contents:

    cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
    project(custom_ops)  # project name

    # set(Torch_DIR /home/nio/libtorch-1.8.1/libtorch/share/cmake/Torch)
    find_package(Torch REQUIRED)

    add_executable(example-app example-app.cpp)  # target name and source file
    target_link_libraries(example-app "${TORCH_LIBRARIES}")
    set_property(TARGET example-app PROPERTY CXX_STANDARD 14)
    

    Note: per the official tutorial (and many others) the set line is unnecessary, but in my experiments the configure step failed to find Torch without it, so add it as needed.

    After creating the two files, put them in one folder, for example:

    example-app/
      CMakeLists.txt
      example-app.cpp
    

    Open a terminal in that folder and run the following commands:

    mkdir build
    cd build
    cmake -DCMAKE_PREFIX_PATH=/path/to/libtorch ..  # don't forget the two trailing dots!
    

    Without setting the Torch path, I initially got:

    -- The C compiler identification is GNU 7.5.0
    -- The CXX compiler identification is GNU 7.5.0
    -- Check for working C compiler: /usr/bin/cc
    -- Check for working C compiler: /usr/bin/cc -- works
    -- Detecting C compiler ABI info
    -- Detecting C compiler ABI info - done
    -- Detecting C compile features
    -- Detecting C compile features - done
    -- Check for working CXX compiler: /usr/bin/c++
    -- Check for working CXX compiler: /usr/bin/c++ -- works
    -- Detecting CXX compiler ABI info
    -- Detecting CXX compiler ABI info - done
    -- Detecting CXX compile features
    -- Detecting CXX compile features - done
    CMake Error at CMakeLists.txt:4 (find_package):
      By not providing "FindTorch.cmake" in CMAKE_MODULE_PATH this project has
      asked CMake to find a package configuration file provided by "Torch", but
      CMake did not find one.
    
      Could not find a package configuration file provided by "Torch" with any of
      the following names:
    
        TorchConfig.cmake
        torch-config.cmake
    
      Add the installation prefix of "Torch" to CMAKE_PREFIX_PATH or set
      "Torch_DIR" to a directory containing one of the above files.  If "Torch"
      provides a separate development package or SDK, be sure it has been
      installed.
    
    
    -- Configuring incomplete, errors occurred!
    See also "/home/nio/C++FILE/code/vectornet/build/CMakeFiles/CMakeOutput.log".
    
    

    After setting it:

    -- The C compiler identification is GNU 7.5.0
    -- The CXX compiler identification is GNU 7.5.0
    -- Check for working C compiler: /usr/bin/cc
    -- Check for working C compiler: /usr/bin/cc -- works
    -- Detecting C compiler ABI info
    -- Detecting C compiler ABI info - done
    -- Detecting C compile features
    -- Detecting C compile features - done
    -- Check for working CXX compiler: /usr/bin/c++
    -- Check for working CXX compiler: /usr/bin/c++ -- works
    -- Detecting CXX compiler ABI info
    -- Detecting CXX compiler ABI info - done
    -- Detecting CXX compile features
    -- Detecting CXX compile features - done
    -- Looking for pthread.h
    -- Looking for pthread.h - found
    -- Looking for pthread_create
    -- Looking for pthread_create - not found
    -- Looking for pthread_create in pthreads
    -- Looking for pthread_create in pthreads - not found
    -- Looking for pthread_create in pthread
    -- Looking for pthread_create in pthread - found
    -- Found Threads: TRUE  
    -- Found Torch: /home/nio/libtorch-1.8.1/libtorch/lib/libtorch.so  
    -- Configuring done
    -- Generating done
    -- Build files have been written to: /home/nio/C++FILE/code/vectornet/build
    
    

    Once configuration succeeds, compile with make in the build folder:

    (base) nio@LT5CG052BHT2:~/C++FILE/code/vectornet/build$ make
    Scanning dependencies of target vectornet
    [ 50%] Building CXX object CMakeFiles/vectornet.dir/pytoc.cpp.o
    [100%] Linking CXX executable vectornet
    [100%] Built target vectornet
    
    

    Run it with the .pt file:

    (base) nio@LT5CG052BHT2:~/C++FILE/code/vectornet/build$ ./vectornet ../traced_vectornet_model.pt 
    ok
    

    Finally, execute an input and verify the output. Add the following to the earlier cpp file:

    // Create a vector of inputs.
    std::vector<torch::jit::IValue> inputs;
    inputs.push_back(torch::ones({1, 3, 224, 224}));
    
    // Execute the model and turn its output into a tensor.
    at::Tensor output = module.forward(inputs).toTensor();
    std::cout << output.slice(/*dim=*/1, /*start=*/0, /*end=*/5) << '\n';
    

    Then run make again and execute with the .pt file to get the result.

  • Deploying a Pytorch Model to Android

    Published 2020-03-09 15:07:35

  • Recommended Approaches for PyTorch Model Deployment

    Published 2020-03-13 22:03:09

  • C++-Based PyTorch Model Deployment

    Published 2020-04-23 11:05:23

  • Deploying PyTorch Models to Production with Cortex

    Published 2020-01-17 07:30:00
