  • Building Libtorch

    1,000+ reads · 2019-08-06 14:07:31

    Issues with libtorch

    • The libtorch build itself was mainly resolved by two blog posts, including: https://www.cnblogs.com/cheungxiongwei/p/10689483.html
    • After building libtorch, I wrote a cpp test to check that torch works, but three DLLs (torch.dll, c10.dll, caffe2.dll) could not be found. They can be placed in the Debug folder next to the exe, but to avoid the error in the future I put the three DLLs into Windows/System32. That, however, broke `import torch` in Python with "ImportError: DLL load failed: the specified module could not be found", so they must not go into Windows/System32.
  • Building the C++ version of libtorch

    1,000+ reads · 2020-12-05 12:15:33

    Building the C++ version of libtorch

    Step 1. Download the pytorch source

    git clone https://github.com/pytorch/pytorch.git
    cd pytorch
    git submodule sync
    git submodule update --init --recursive
    

    The source (pytorch-1.8.0) has also been uploaded to Baidu Netdisk:

    Link: https://pan.baidu.com/s/1lxh7ueDsrnHqLaf5_oP2gg  extraction code: 8kp8
    

    Step 2. Build

    1. Install dependencies

    # first: install CUDA and cuDNN; download the CUDA 10.0 .run file and the matching cuDNN 7.6.5
    sh cuda_10.0.130_410.48_linux.run --no-opengl-libs
    ldconfig /usr/local/cuda-10.0/lib64
    tar -xvzf cudnn-10.0-linux-x64-v7.6.5.32.tgz
    cp cuda/include/* /usr/local/cuda-10.0/include/
    cp cuda/lib64/* /usr/local/cuda-10.0/lib64/
    # make the copied files readable (a+r, not +x: these are headers and libraries)
    chmod a+r /usr/local/cuda-10.0/include/cudnn.h
    chmod a+r /usr/local/cuda-10.0/lib64/libcudnn*
    nvcc -V
    
    # second: install gcc and g++
    apt install software-properties-common
    add-apt-repository ppa:ubuntu-toolchain-r/test
    apt-get update 
    apt-get install gcc-7
    apt-get install g++-7
    update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-7 100
    update-alternatives --config gcc
    update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-7 100
    update-alternatives --config g++
    
    gcc --version
    g++ --version
    

    2. Build libtorch

    cd pytorch
    mkdir release 
    cd release
    cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/opt/libtorch -D BUILD_CAFFE2_MOBILE=OFF -D BUILD_PYTHON=OFF -D BUILD_CAFFE2_OPS=OFF -D BUILD_TEST=OFF -D USE_CUDA=ON -D USE_CUDNN=ON -D USE_OPENCV=ON -D USE_TBB=OFF ..
    make -j$(nproc)
    make install
    

    Step 3. Use

    Python: train and export a TorchScript module

    import torch
    import io
    
    class MyModule(torch.nn.Module):
        def forward(self, x):
            return x + 10
    
    m = torch.jit.script(MyModule())
    
    # Save to file
    torch.jit.save(m, 'scriptmodule.pt')
    # This line is equivalent to the previous
    m.save("scriptmodule.pt")
    
    # Save to io.BytesIO buffer
    buffer = io.BytesIO()
    torch.jit.save(m, buffer)
    
    # Save with extra files
    extra_files = {'foo.txt': b'bar'}
    torch.jit.save(m, 'scriptmodule.pt', _extra_files=extra_files)
    

    C++ inference

    #include <torch/script.h> // One-stop header.
    #include <iostream>
    #include <memory>
    
    int main(int argc, const char* argv[]) {
      if (argc != 2) {
        std::cerr << "usage: example-app <path-to-exported-script-module>\n";
        return -1;
      }

      // Deserialize the ScriptModule from a file using torch::jit::load().
      // libtorch version 1.7.0
      torch::jit::script::Module module = torch::jit::load(argv[1]);
      std::cout << "ok\n";

      // Set the device.
      torch::DeviceType device_type;
      device_type = torch::kCUDA;  // torch::kCUDA or torch::kCPU
      torch::Device device(device_type, 0);
      // Move the model (and later the inputs) to the device; inputs, model,
      // and outputs must all live on the same device.
      module.to(device);

      // Create a vector of inputs.
      std::vector<torch::jit::IValue> inputs;
      inputs.push_back(torch::ones({1, 3, 224, 224}).to(device));

      // Execute the model and turn its output into a tensor.
      at::Tensor output = module.forward(inputs).toTensor();

      std::cout << output.slice(/*dim=*/1, /*start=*/0, /*end=*/5) << '\n';
    }
    

    CMakeLists.txt

    cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
    project(custom_ops)
    set(CMAKE_CXX_STANDARD 11)

    set(Torch_DIR /home/xxx/下载/libtorch/share/cmake/Torch) # directory containing TorchConfig.cmake
    find_package(Torch REQUIRED)
    set(OpenCV_DIR /opt/opencv440/lib/cmake/opencv4/)  # directory containing OpenCVConfig.cmake
    find_package(OpenCV REQUIRED)
    include_directories(${OpenCV_INCLUDE_DIRS})
    include_directories(${TORCH_INCLUDE_DIRS})

    add_executable(example-app example-app.cpp)
    target_link_libraries(example-app ${OpenCV_LIBS} ${TORCH_LIBRARIES}) # link the OpenCV and libtorch libraries
    
  • Slow libtorch source build in an Ubuntu 18.04 VM: only one CPU core used

    Building libtorch from source in an Ubuntu 18.04 VM, the compile was extremely slow; checking the CPU showed only one core in use, so a configuration parameter was probably wrong. Digging around, it seems one parameter is read incorrectly, likely because of how the VM reports its configuration. The file is pytorch/scripts/build_local.sh; modify the parameter on its last line (screenshot omitted).

    The CAFFE_MAKE_NCPUS value is presumably being read wrong; just replace the variable with 4 or 8, whatever matches your CPU core count.

    After the change, the build is much faster.
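The edit above can be sketched as a one-line `sed` (a minimal illustration against a hypothetical stand-in file; the real file is pytorch/scripts/build_local.sh and its last line may differ):

```shell
# Stand-in for the last line of pytorch/scripts/build_local.sh
# (hypothetical contents, for illustration only):
cat > build_local.sh <<'EOF'
cmake --build . -- "-j${CAFFE_MAKE_NCPUS}"
EOF

# Hard-code the job count (pick 4 or 8 to match your CPU core count):
sed -i 's/\${CAFFE_MAKE_NCPUS}/4/' build_local.sh
cat build_local.sh   # → cmake --build . -- "-j4"
```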

  • Building libtorch from source on Ubuntu

    1,000+ reads · 2020-04-29 11:06:00

    ## pytorch/libtorch QQ group: 1041467052

    A small takeaway:
    always go to the official site for first-hand material; what Baidu turns up is just individuals retelling the official docs, so treat it as casual reading.
    Official site: https://github.com/pytorch/pytorch
    I was running pytorch 1.3, installed via conda, and now needed to build from source. Following the official workflow, the dependencies must be installed first.
    I did this inside the pytorch 1.3 conda virtual environment; the official instructions are here:

    Note that `git clone --recursive https://github.com/pytorch/pytorch` downloads the latest version, while I needed 1.3.
    After the download:

    git tag
    

    which prints:

    v0.1.1
    v0.1.10
    v0.1.11
    v0.1.12
    v0.1.2
    v0.1.3
    v0.1.4
    v0.1.5
    v0.1.6
    v0.1.7
    v0.1.8
    v0.1.9
    v0.2.0
    v0.3.0
    v0.3.1
    v0.4.0
    v0.4.1
    v1.0.0
    v1.0.0a0
    v1.0.1
    v1.0rc0
    v1.0rc1
    v1.1.0
    v1.1.0a0
    v1.2.0
    v1.2.0a0
    v1.3.0
    v1.3.0a0
    v1.3.1
    v1.4.0
    v1.4.0a0
    v1.4.1
    v1.5.0
    v1.5.0-rc1
    v1.5.0-rc2
    v1.5.0-rc3
    v1.5.0-rc4
    v1.5.0-rc5
    

    Then run:

    git checkout v1.3.1
    

    Then:

    git submodule sync
    git submodule update --init --recursive
    
    
    export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
    python setup.py install
    

    Then comes a long wait.
    Much later, with no errors reported, I did not know where the libraries I needed had been generated, so I scrolled back up through the log:

    -- ******** Summary ********
    --   CMake version         : 3.6.3
    --   CMake command         : /usr/local/bin/cmake
    --   System                : Linux
    --   C++ compiler          : /usr/bin/c++
    --   C++ compiler version  : 5.4.0
    --   CXX flags             :  -fvisibility-inlines-hidden -fopenmp -Wnon-virtual-dtor
    --   Build type            : Release
    --   Compile definitions   : ONNX_ML=1
    --   CMAKE_PREFIX_PATH     : /data_1/Yang/software_install/Anaconda1105/bin/../;/usr/local/cuda
    --   CMAKE_INSTALL_PREFIX  : /data_2/everyday/0429/pytorch/torch
    --   CMAKE_MODULE_PATH     : /data_2/everyday/0429/pytorch/cmake/Modules;/data_2/everyday/0429/pytorch/cmake/public/../Modules_CUDA_fix
    -- 
    --   ONNX version          : 1.4.1
    --   ONNX NAMESPACE        : onnx_torch
    --   ONNX_BUILD_TESTS      : OFF
    --   ONNX_BUILD_BENCHMARKS : OFF
    --   ONNX_USE_LITE_PROTO   : OFF
    --   ONNXIFI_DUMMY_BACKEND : OFF
    -- 
    --   Protobuf compiler     : 
    --   Protobuf includes     : 
    --   Protobuf libraries    : 
    --   BUILD_ONNX_PYTHON     : OFF
    -- Found CUDA with FP16 support, compiling with torch.cuda.HalfTensor
    -- Removing -DNDEBUG from compile flags
    -- MAGMA not found. Compiling without MAGMA support
    -- Could not find hardware support for NEON on this machine.
    

    Find
    -- CMAKE_INSTALL_PREFIX : /data_2/everyday/0429/pytorch/torch
    So everything was installed into the torch directory.
    Then, with the test code and CMakeLists, the model could be loaded. That should be it.

    Also note that the CUDA environment needs to be set up properly.
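A minimal sketch of that CUDA environment setup (the usual default install paths are assumed; adjust to your machine):

```shell
# Make nvcc and the CUDA runtime libraries visible to the build:
export CUDA_HOME=/usr/local/cuda
export PATH="$CUDA_HOME/bin:$PATH"
export LD_LIBRARY_PATH="$CUDA_HOME/lib64:$LD_LIBRARY_PATH"
# nvcc -V   # should now print the toolkit version
```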

    Full command sequence:

    source activate snake_cuda10
     conda install numpy ninja pyyaml mkl mkl-include setuptools cmake cffi typing_extensions future six requests dataclasses

    conda install -c pytorch magma-cuda100

    git clone --recursive https://github.com/pytorch/pytorch

    cd pytorch/

    git tag

    git checkout v1.1.0

    git submodule sync

    git submodule update --init --recursive

    export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}

    python setup.py install


    Update, 2020-09-01:

    Following this workflow, I built the pytorch 1.1.0 / CUDA 10 libtorch library again today; the steps are written down here:

    Start from the anaconda environment snake_cuda10, which can already run a git-repo project based on pytorch 1.0 and CUDA 10:

    Command 1:

     source activate snake_cuda10
     conda install numpy ninja pyyaml mkl mkl-include setuptools cmake cffi typing_extensions future six requests dataclasses

    This installs the dependencies per the official instructions; some network errors may appear:

    CondaHTTPError: HTTP 000 CONNECTION FAILED for url <https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/linux-64/mkl-2020.2-256.tar.bz2>
    Elapsed:

    An HTTP error occurred when trying to retrieve this URL.
    HTTP errors are often intermittent, and a simple retry will get you on your way

    A simple retry fixes it.

    Command 2:

    conda install -c pytorch magma-cuda100

    git clone --recursive https://github.com/pytorch/pytorch

    A long wait.

    cd pytorch/

    git tag

    which prints:

    v0.1.1
    v0.1.10
    v0.1.11
    v0.1.12
    v0.1.2
    v0.1.3
    v0.1.4
    v0.1.5
    v0.1.6
    v0.1.7
    v0.1.8
    v0.1.9
    v0.2.0
    v0.3.0
    v0.3.1
    v0.4.0
    v0.4.1
    v1.0.0
    v1.0.0a0
    v1.0.1
    v1.0rc0
    v1.0rc1
    v1.1.0
    v1.1.0a0
    v1.2.0
    v1.2.0a0
    v1.3.0
    v1.3.0a0
    v1.3.1
    v1.4.0
    v1.4.0a0
    v1.4.1
    v1.5.0
    v1.5.0-rc1
    v1.5.0-rc2
    v1.5.0-rc3
    v1.5.0-rc4
    v1.5.0-rc5
    v1.5.1
    v1.5.1-rc1
    v1.6.0
    v1.6.0-rc1
    v1.6.0-rc2
    v1.6.0-rc3
    v1.6.0-rc4
    v1.6.0-rc5
    v1.6.0-rc6
    v1.6.0-rc7


    git checkout v1.1.0

    git submodule sync

    git submodule update --init --recursive

    export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}

    python setup.py install

    Another long wait.

    Update, 2021-09-01 15:59:13:

    https://oldpan.me/archives/pytorch-build-simple-instruction

    To build only the libtorch library: create a build folder and run `python ../tools/build_libtorch.py` inside it.

    The Python install path is not a standalone CMake build; it drives CMake through Python's setuptools. The pytorch project is fairly large, so the build takes a long time. The main things to look at are the build-time environment variables:

    In setup.py:

    ```

    # Environment variables you are probably interested in:
    #
    #   DEBUG
    #     build with -O0 and -g (debug symbols)
    #
    #   REL_WITH_DEB_INFO
    #     build with optimizations and -g (debug symbols)
    #
    #   MAX_JOBS
    #     maximum number of compile jobs we should use to compile your code
    #
    #   NO_CUDA
    #     disables CUDA build
    #   
    #   ....
    #   ....
    #   
    # Environment variables for feature toggles:
    #
    #   NO_CUDNN
    #     disables the cuDNN build
    #
    #   NO_FBGEMM
    #     disables the FBGEMM build
    #
    #   NO_TEST
    #     disables the test build
    #
    #   NO_MIOPEN
    #     disables the MIOpen build

    ```

    Set these variables as needed when running `python setup.py install`; for example, to skip the CUDA build: `NO_CUDA=1 python setup.py install`.

    With that, the build can proceed.
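A minimal sketch of how such toggles are passed (variable names come from the setup.py comments above; the real invocation would be `python setup.py install` rather than the inline script used here for demonstration):

```shell
# Environment variables are read by setup.py at invocation time, e.g.
#   NO_CUDA=1 MAX_JOBS=8 python setup.py install
# Demonstrated with an inline script instead of the full build:
NO_CUDA=1 MAX_JOBS=8 python3 - <<'EOF'
import os
print("NO_CUDA  =", os.environ.get("NO_CUDA"))   # prints: NO_CUDA  = 1
print("MAX_JOBS =", os.environ.get("MAX_JOBS"))  # prints: MAX_JOBS = 8
EOF
```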

    2021-09-01 16:56:26: building libtorch 1.6 again, with CUDA 10.2

    ```

    git tag

    git checkout v1.6.0

    git submodule sync

    # The latest official docs say `git submodule update --init --recursive --jobs 0`,
    # but that had problems here, so the older form is used:
    git submodule update --init --recursive

    conda install astunparse numpy ninja pyyaml mkl mkl-include setuptools cmake cffi typing_extensions future six requests dataclasses

    conda install -c pytorch magma-cuda102

    export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}

    python setup.py install

    ```

    -- Adding OpenMP CXX_FLAGS: -fopenmp
    -- Will link against OpenMP libraries: /usr/lib/gcc/x86_64-linux-gnu/5/libgomp.so;/usr/lib/x86_64-linux-gnu/libpthread.so
    -- Found CUDA: /usr/local/cuda (found version "10.2") 
    -- Caffe2: CUDA detected: 10.2
    -- Caffe2: CUDA nvcc is: /usr/local/cuda/bin/nvcc
    -- Caffe2: CUDA toolkit directory: /usr/local/cuda
    -- Caffe2: Header version is: 10.2
    -- Found CUDNN: /usr/local/cuda/lib64/libcudnn.so  
    -- Found cuDNN: v?  (include: /usr/local/cuda/include, library: /usr/local/cuda/lib64/libcudnn.so)
    CMake Error at cmake/public/cuda.cmake:170 (message):
      PyTorch requires cuDNN 7 and above.
    Call Stack (most recent call first):
      cmake/Dependencies.cmake:956 (include)
      CMakeLists.txt:411 (include)
    
    
    -- Configuring incomplete, errors occurred!
    See also "/data_1/everyday/0901/pytorch1.5_libtorch/build/CMakeFiles/CMakeOutput.log".
    See also "/data_1/everyday/0901/pytorch1.5_libtorch/build/CMakeFiles/CMakeError.log".
    Traceback (most recent call last):
      File "setup.py", line 744, in <module>
        build_deps()
      File "setup.py", line 316, in build_deps
        cmake=cmake)
      File "/data_1/everyday/0901/pytorch1.5_libtorch/tools/build_pytorch_libs.py", line 59, in build_caffe2
        rerun_cmake)
      File "/data_1/everyday/0901/pytorch1.5_libtorch/tools/setup_helpers/cmake.py", line 323, in generate
        self.run(args, env=my_env)
      File "/data_1/everyday/0901/pytorch1.5_libtorch/tools/setup_helpers/cmake.py", line 141, in run
        check_call(command, cwd=self.build_dir, env=env)
      File "/data_1/Yang/software_install/Anaconda1105/envs/DB_cuda10_2/lib/python3.7/subprocess.py", line 363, in check_call
        raise CalledProcessError(retcode, cmd)
    subprocess.CalledProcessError: Command '['cmake', '-GNinja', '-DBUILD_PYTHON=True', '-DBUILD_TEST=True', '-DCMAKE_BUILD_TYPE=Release', '-DCMAKE_INSTALL_PREFIX=/data_1/everyday/0901/pytorch1.5_libtorch/torch', '-DCMAKE_PREFIX_PATH=/data_1/Yang/software_install/Anaconda1105/envs/DB_cuda10_2', '-DJAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/', '-DNUMPY_INCLUDE_DIR=/data_1/Yang/software_install/Anaconda1105/envs/DB_cuda10_2/lib/python3.7/site-packages/numpy/core/include', '-DPYTHON_EXECUTABLE=/data_1/Yang/software_install/Anaconda1105/envs/DB_cuda10_2/bin/python', '-DPYTHON_INCLUDE_DIR=/data_1/Yang/software_install/Anaconda1105/envs/DB_cuda10_2/include/python3.7m', '-DPYTHON_LIBRARY=/data_1/Yang/software_install/Anaconda1105/envs/DB_cuda10_2/lib/libpython3.7m.so.1.0', '-DTORCH_BUILD_VERSION=1.5.0a0+3c31d73', '-DUSE_NUMPY=True', '/data_1/everyday/0901/pytorch1.5_libtorch']' returned non-zero exit status 1.
    

    A pile of errors, but they seem to point at cuDNN:

    -- Found CUDNN: /usr/local/cuda/lib64/libcudnn.so  
    -- Found cuDNN: v?  (include: /usr/local/cuda/include, library: /usr/local/cuda/lib64/libcudnn.so)
    CMake Error at cmake/public/cuda.cmake:170 (message):
      PyTorch requires cuDNN 7 and above.
    Call Stack (most recent call first):
      cmake/Dependencies.cmake:956 (include)
      CMakeLists.txt:411 (include)


    I had CUDA 10.2 and cuDNN 8.0.

    Looking at cmake/public/cuda.cmake:170: it parses the version number from the macros at the top of cudnn.h. In my cudnn.h those macros were missing (the 7.x headers do have them), so I removed cuDNN 8, installed cuDNN 7.6.5 instead, and the check passed.
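For context: cuDNN 8 moved the version macros out of cudnn.h into a separate cudnn_version.h, which is why an older CMake check reports "v?". A quick way to see which layout you have, sketched here against hypothetical stand-in headers rather than a real install:

```shell
# cuDNN 7.x keeps CUDNN_MAJOR in cudnn.h; cuDNN 8.x moved it to
# cudnn_version.h. Simulate the 8.x layout:
mkdir -p fake_cuda_include
printf '#define CUDNN_MAJOR 8\n' > fake_cuda_include/cudnn_version.h

# The old check (reading cudnn.h) finds nothing:
grep -s CUDNN_MAJOR fake_cuda_include/cudnn.h || echo "not in cudnn.h"
# The macro lives in the new header instead:
grep CUDNN_MAJOR fake_cuda_include/cudnn_version.h
```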

    But then another pile of errors appeared:

    
    -- Looking for clock_gettime in rt
    -- Looking for clock_gettime in rt - found
    -- Looking for mmap
    -- Looking for mmap - found
    -- Looking for shm_open
    -- Looking for shm_open - found
    -- Looking for shm_unlink
    -- Looking for shm_unlink - found
    -- Looking for malloc_usable_size
    -- Looking for malloc_usable_size - found
    -- Performing Test C_HAS_THREAD
    -- Performing Test C_HAS_THREAD - Success
    -- Version: 6.2.0
    -- Build type: Release
    -- CXX_STANDARD: 14
    -- Performing Test has_std_14_flag
    -- Performing Test has_std_14_flag - Success
    -- Performing Test has_std_1y_flag
    -- Performing Test has_std_1y_flag - Success
    -- Performing Test SUPPORTS_USER_DEFINED_LITERALS
    -- Performing Test SUPPORTS_USER_DEFINED_LITERALS - Success
    -- Performing Test FMT_HAS_VARIANT
    -- Performing Test FMT_HAS_VARIANT - Failed
    -- Required features: cxx_variadic_templates
    -- Looking for strtod_l
    -- Looking for strtod_l - not found
    -- GCC 5.4.0: Adding gcc and gcc_s libs to link line
    -- Performing Test HAS_WERROR_FORMAT
    -- Performing Test HAS_WERROR_FORMAT - Success
    -- Looking for backtrace
    -- Looking for backtrace - found
    -- backtrace facility detected in default set of libraries
    -- Found Backtrace: /usr/include  
    -- NUMA paths:
    -- /usr/include
    -- /usr/lib/x86_64-linux-gnu/libnuma.so
    -- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT
    -- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT - Success
    -- Using ATen parallel backend: OMP
    CMake Deprecation Warning at third_party/sleef/CMakeLists.txt:20 (cmake_policy):
      The OLD behavior for policy CMP0066 will be removed from a future version
      of CMake.
    
      The cmake-policies(7) manual explains that the OLD behaviors of all
      policies are deprecated and that a policy should be set to OLD only under
      specific short-term circumstances.  Projects should be ported to the NEW
      behavior and not rely on setting a policy to OLD.
    
    
    -- Found OpenSSL: /data_1/Yang/software_install/Anaconda1105/envs/DB_cuda10_2/lib/libcrypto.so (found version "1.1.1k")  
    -- Check size of long double
    -- Check size of long double - done
    -- Performing Test COMPILER_SUPPORTS_LONG_DOUBLE
    -- Performing Test COMPILER_SUPPORTS_LONG_DOUBLE - Success
    -- Performing Test COMPILER_SUPPORTS_FLOAT128
    -- Performing Test COMPILER_SUPPORTS_FLOAT128 - Success
    -- Performing Test COMPILER_SUPPORTS_SSE2
    -- Performing Test COMPILER_SUPPORTS_SSE2 - Success
    -- Performing Test COMPILER_SUPPORTS_SSE4
    -- Performing Test COMPILER_SUPPORTS_SSE4 - Success
    -- Performing Test COMPILER_SUPPORTS_AVX
    -- Performing Test COMPILER_SUPPORTS_AVX - Success
    -- Performing Test COMPILER_SUPPORTS_FMA4
    -- Performing Test COMPILER_SUPPORTS_FMA4 - Success
    -- Performing Test COMPILER_SUPPORTS_AVX2
    -- Performing Test COMPILER_SUPPORTS_AVX2 - Success
    -- Performing Test COMPILER_SUPPORTS_AVX512F
    -- Performing Test COMPILER_SUPPORTS_AVX512F - Failed
    -- Performing Test COMPILER_SUPPORTS_OPENMP
    -- Performing Test COMPILER_SUPPORTS_OPENMP - Success
    -- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES
    -- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES - Success
    -- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH
    -- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH - Success
    -- Performing Test COMPILER_SUPPORTS_SYS_GETRANDOM
    -- Performing Test COMPILER_SUPPORTS_SYS_GETRANDOM - Success
    -- Configuring build for SLEEF-v3.4.0
       Target system: Linux-4.15.0-142-generic
       Target processor: x86_64
       Host system: Linux-4.15.0-142-generic
       Host processor: x86_64
       Detected C compiler: GNU @ /usr/bin/cc
    -- Using option `-Wall -Wno-unused -Wno-attributes -Wno-unused-result -Wno-psabi -ffp-contract=off -fno-math-errno -fno-trapping-math` to compile libsleef
    -- Building shared libs : OFF
    -- MPFR : LIB_MPFR-NOTFOUND
    -- GMP : LIBGMP-NOTFOUND
    -- RT : /usr/lib/x86_64-linux-gnu/librt.so
    -- FFTW3 : LIBFFTW3-NOTFOUND
    -- OPENSSL : 1.1.1k
    -- SDE : SDE_COMMAND-NOTFOUND
    -- RUNNING_ON_TRAVIS : 0
    -- COMPILER_SUPPORTS_OPENMP : 1
    AT_INSTALL_INCLUDE_DIR include/ATen/core
    core header install: /data_1/everyday/0901/pytorch1.6_libtorch/build/aten/src/ATen/core/TensorBody.h
    -- Include NCCL operators
    -- Excluding FakeLowP operators
    -- Excluding ideep operators as we are not using ideep
    -- Excluding image processing operators due to no opencv
    -- Excluding video processing operators due to no opencv
    -- Include Observer library
    -- /usr/bin/c++ /data_1/everyday/0901/pytorch1.6_libtorch/caffe2/../torch/abi-check.cpp -o /data_1/everyday/0901/pytorch1.6_libtorch/build/abi-check
    -- Determined _GLIBCXX_USE_CXX11_ABI=1
    -- MPI_INCLUDE_PATH: /usr/lib/openmpi/include/openmpi/opal/mca/event/libevent2021/libevent;/usr/lib/openmpi/include/openmpi/opal/mca/event/libevent2021/libevent/include;/usr/lib/openmpi/include;/usr/lib/openmpi/include/openmpi
    -- MPI_LIBRARIES: /usr/lib/openmpi/lib/libmpi_cxx.so;/usr/lib/openmpi/lib/libmpi.so
    -- MPIEXEC: /usr/bin/mpiexec
    CMake Deprecation Warning at torch/lib/libshm/CMakeLists.txt:2 (cmake_minimum_required):
      Compatibility with CMake < 2.8.12 will be removed from a future version of
      CMake.
    
      Update the VERSION argument <min> value or use a ...<max> suffix to tell
      CMake that the project does not need compatibility with older versions.
    
    
    CMake Deprecation Warning at torch/lib/libshm/CMakeLists.txt:3 (cmake_policy):
      Compatibility with CMake < 2.8.12 will be removed from a future version of
      CMake.
    
      Update the VERSION argument <min> value or use a ...<max> suffix to tell
      CMake that the project does not need compatibility with older versions.
    
    
    -- Autodetected CUDA architecture(s):  6.1
    CMake Warning (dev) at /data_1/Yang/software_install/Anaconda1105/envs/DB_cuda10_2/share/cmake-3.19/Modules/FindPackageHandleStandardArgs.cmake:426 (message):
      The package name passed to `find_package_handle_standard_args` (OpenMP_C)
      does not match the name of the calling package (OpenMP).  This can lead to
      problems in calling code that expects `find_package` result variables
      (e.g., `_FOUND`) to follow a certain pattern.
    Call Stack (most recent call first):
      cmake/Modules/FindOpenMP.cmake:565 (find_package_handle_standard_args)
      caffe2/CMakeLists.txt:812 (find_package)
    This warning is for project developers.  Use -Wno-dev to suppress it.
    
    CMake Warning (dev) at /data_1/Yang/software_install/Anaconda1105/envs/DB_cuda10_2/share/cmake-3.19/Modules/FindPackageHandleStandardArgs.cmake:426 (message):
      The package name passed to `find_package_handle_standard_args` (OpenMP_CXX)
      does not match the name of the calling package (OpenMP).  This can lead to
      problems in calling code that expects `find_package` result variables
      (e.g., `_FOUND`) to follow a certain pattern.
    Call Stack (most recent call first):
      cmake/Modules/FindOpenMP.cmake:565 (find_package_handle_standard_args)
      caffe2/CMakeLists.txt:812 (find_package)
    This warning is for project developers.  Use -Wno-dev to suppress it.
    
    -- pytorch is compiling with OpenMP. 
    OpenMP CXX_FLAGS: -fopenmp. 
    OpenMP libraries: /usr/lib/gcc/x86_64-linux-gnu/5/libgomp.so;/usr/lib/x86_64-linux-gnu/libpthread.so.
    -- Caffe2 is compiling with OpenMP. 
    OpenMP CXX_FLAGS: -fopenmp. 
    OpenMP libraries: /usr/lib/gcc/x86_64-linux-gnu/5/libgomp.so;/usr/lib/x86_64-linux-gnu/libpthread.so.
    -- Using lib/python3.7/site-packages as python relative installation path
    CMake Warning at CMakeLists.txt:690 (message):
      Generated cmake files are only fully tested if one builds with system glog,
      gflags, and protobuf.  Other settings may generate files that are not well
      tested.
    
    
    -- 
    -- ******** Summary ********
    -- General:
    --   CMake version         : 3.19.6
    --   CMake command         : /data_1/Yang/software_install/Anaconda1105/envs/DB_cuda10_2/bin/cmake
    --   System                : Linux
    --   C++ compiler          : /usr/bin/c++
    --   C++ compiler id       : GNU
    --   C++ compiler version  : 5.4.0
    --   BLAS                  : MKL
    --   CXX flags             :  -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format
    --   Build type            : Release
    --   Compile definitions   : TH_BLAS_MKL;ONNX_ML=1;ONNXIFI_ENABLE_EXT=1;ONNX_NAMESPACE=onnx_torch;MAGMA_V2;IDEEP_USE_MKL;HAVE_MMAP=1;_FILE_OFFSET_BITS=64;HAVE_SHM_OPEN=1;HAVE_SHM_UNLINK=1;HAVE_MALLOC_USABLE_SIZE=1;USE_EXTERNAL_MZCRC;MINIZ_DISABLE_ZIP_READER_CRC32_CHECKS
    --   CMAKE_PREFIX_PATH     : /data_1/Yang/software_install/Anaconda1105/envs/DB_cuda10_2;/usr/local/cuda
    --   CMAKE_INSTALL_PREFIX  : /data_1/everyday/0901/pytorch1.6_libtorch/torch
    -- 
    --   TORCH_VERSION         : 1.6.0
    --   CAFFE2_VERSION        : 1.6.0
    --   BUILD_CAFFE2_MOBILE   : OFF
    --   USE_STATIC_DISPATCH   : OFF
    --   BUILD_BINARY          : OFF
    --   BUILD_CUSTOM_PROTOBUF : ON
    --     Link local protobuf : ON
    --   BUILD_DOCS            : OFF
    --   BUILD_PYTHON          : True
    --     Python version      : 3.7.7
    --     Python executable   : /data_1/Yang/software_install/Anaconda1105/envs/DB_cuda10_2/bin/python
    --     Pythonlibs version  : 3.7.7
    --     Python library      : /data_1/Yang/software_install/Anaconda1105/envs/DB_cuda10_2/lib/libpython3.7m.so.1.0
    --     Python includes     : /data_1/Yang/software_install/Anaconda1105/envs/DB_cuda10_2/include/python3.7m
    --     Python site-packages: lib/python3.7/site-packages
    --   BUILD_CAFFE2_OPS      : ON
    --   BUILD_SHARED_LIBS     : ON
    --   BUILD_TEST            : True
    --   BUILD_JNI             : OFF
    --   INTERN_BUILD_MOBILE   : 
    --   USE_ASAN              : OFF
    --   USE_CUDA              : ON
    --     CUDA static link    : OFF
    --     USE_CUDNN           : ON
    --     CUDA version        : 10.2
    --     cuDNN version       : 7.6.5
    --     CUDA root directory : /usr/local/cuda
    --     CUDA library        : /usr/local/cuda/lib64/stubs/libcuda.so
    --     cudart library      : /usr/local/cuda/lib64/libcudart.so
    --     cublas library      : /data_1/Yang/software_install/Anaconda1105/envs/DB_cuda10_2/lib/libcublas.so
    --     cufft library       : /usr/local/cuda/lib64/libcufft.so
    --     curand library      : /usr/local/cuda/lib64/libcurand.so
    --     cuDNN library       : /usr/local/cuda/lib64/libcudnn.so
    --     nvrtc               : /data_1/Yang/software_install/Anaconda1105/envs/DB_cuda10_2/lib/libnvrtc.so
    --     CUDA include path   : /usr/local/cuda/include
    --     NVCC executable     : /usr/local/cuda/bin/nvcc
    --     NVCC flags          : -DONNX_NAMESPACE=onnx_torch;-gencode;arch=compute_61,code=sm_61;-Xcudafe;--diag_suppress=cc_clobber_ignored;-Xcudafe;--diag_suppress=integer_sign_change;-Xcudafe;--diag_suppress=useless_using_declaration;-Xcudafe;--diag_suppress=set_but_not_used;-Xcudafe;--diag_suppress=field_without_dll_interface;-Xcudafe;--diag_suppress=base_class_has_different_dll_interface;-Xcudafe;--diag_suppress=dll_interface_conflict_none_assumed;-Xcudafe;--diag_suppress=dll_interface_conflict_dllexport_assumed;-Xcudafe;--diag_suppress=implicit_return_from_non_void_function;-Xcudafe;--diag_suppress=unsigned_compare_with_zero;-Xcudafe;--diag_suppress=declared_but_not_referenced;-Xcudafe;--diag_suppress=bad_friend_decl;-std=c++14;-Xcompiler;-fPIC;--expt-relaxed-constexpr;--expt-extended-lambda;-Wno-deprecated-gpu-targets;--expt-extended-lambda;-gencode;arch=compute_61,code=sm_61;-Xcompiler;-fPIC;-DCUDA_HAS_FP16=1;-D__CUDA_NO_HALF_OPERATORS__;-D__CUDA_NO_HALF_CONVERSIONS__;-D__CUDA_NO_HALF2_OPERATORS__
    --     CUDA host compiler  : /usr/bin/cc
    --     NVCC --device-c     : OFF
    --     USE_TENSORRT        : OFF
    --   USE_ROCM              : OFF
    --   USE_EIGEN_FOR_BLAS    : 
    --   USE_FBGEMM            : ON
    --     USE_FAKELOWP          : OFF
    --   USE_FFMPEG            : OFF
    --   USE_GFLAGS            : OFF
    --   USE_GLOG              : OFF
    --   USE_LEVELDB           : OFF
    --   USE_LITE_PROTO        : OFF
    --   USE_LMDB              : OFF
    --   USE_METAL             : OFF
    --   USE_MKL               : ON
    --   USE_MKLDNN            : ON
    --   USE_NCCL              : ON
    --     USE_SYSTEM_NCCL     : OFF
    --   USE_NNPACK            : ON
    --   USE_NUMPY             : ON
    --   USE_OBSERVERS         : ON
    --   USE_OPENCL            : OFF
    --   USE_OPENCV            : OFF
    --   USE_OPENMP            : ON
    --   USE_TBB               : OFF
    --   USE_VULKAN            : OFF
    --   USE_PROF              : OFF
    --   USE_QNNPACK           : ON
    --   USE_PYTORCH_QNNPACK   : ON
    --   USE_REDIS             : OFF
    --   USE_ROCKSDB           : OFF
    --   USE_ZMQ               : OFF
    --   USE_DISTRIBUTED       : ON
    --     USE_MPI             : ON
    --     USE_GLOO            : ON
    --     USE_TENSORPIPE      : ON
    --   Public Dependencies  : Threads::Threads;caffe2::mkl
    --   Private Dependencies : pthreadpool;cpuinfo;qnnpack;pytorch_qnnpack;nnpack;XNNPACK;fbgemm;/usr/lib/x86_64-linux-gnu/libnuma.so;fp16;/usr/lib/openmpi/lib/libmpi_cxx.so;/usr/lib/openmpi/lib/libmpi.so;gloo;tensorpipe;aten_op_header_gen;foxi_loader;rt;fmt::fmt-header-only;gcc_s;gcc;dl
    -- Configuring done
    CMake Warning at caffe2/CMakeLists.txt:583 (add_library):
      Cannot generate a safe runtime search path for target torch_cpu because
      files in some directories may conflict with libraries in implicit
      directories:
    
        runtime library [libgomp.so.1] in /usr/lib/gcc/x86_64-linux-gnu/5 may be hidden by files in:
          /data_1/Yang/software_install/Anaconda1105/envs/DB_cuda10_2/lib
    
      Some of these libraries may not be found correctly.
    
    
    CMake Warning at cmake/Modules_CUDA_fix/upstream/FindCUDA.cmake:1847 (add_library):
      Cannot generate a safe runtime search path for target
      caffe2_detectron_ops_gpu because files in some directories may conflict
      with libraries in implicit directories:
    
        runtime library [libgomp.so.1] in /usr/lib/gcc/x86_64-linux-gnu/5 may be hidden by files in:
          /data_1/Yang/software_install/Anaconda1105/envs/DB_cuda10_2/lib
    
      Some of these libraries may not be found correctly.
    Call Stack (most recent call first):
      modules/detectron/CMakeLists.txt:13 (CUDA_ADD_LIBRARY)
    
    
    -- Generating done
    CMake Warning:
      Manually-specified variables were not used by the project:
    
        JAVA_HOME
    
    
    -- Build files have been written to: /data_1/everyday/0901/pytorch1.6_libtorch/build
    cmake --build . --target install --config Release -- -j 8
    [54/4954] Performing build step for 'nccl_external'
    FAILED: nccl_external-prefix/src/nccl_external-stamp/nccl_external-build nccl/lib/libnccl_static.a 
    cd /data_1/everyday/0901/pytorch1.6_libtorch/third_party/nccl/nccl && env CCACHE_DISABLE=1 SCCACHE_DISABLE=1 make CXX=/usr/bin/c++ CUDA_HOME=/usr/local/cuda NVCC=/usr/local/cuda/bin/nvcc NVCC_GENCODE=-gencode=arch=compute_61,code=sm_61 BUILDDIR=/data_1/everyday/0901/pytorch1.6_libtorch/build/nccl VERBOSE=0 -j && /data_1/Yang/software_install/Anaconda1105/envs/DB_cuda10_2/bin/cmake -E touch /data_1/everyday/0901/pytorch1.6_libtorch/build/nccl_external-prefix/src/nccl_external-stamp/nccl_external-build
    make -C src build BUILDDIR=/data_1/everyday/0901/pytorch1.6_libtorch/build/nccl
    make[1]: Entering directory '/data_1/everyday/0901/pytorch1.6_libtorch/third_party/nccl/nccl/src'
    Grabbing   include/nccl_net.h                  > /data_1/everyday/0901/pytorch1.6_libtorch/build/nccl/include/nccl_net.h
    Compiling  channel.cc                          > /data_1/everyday/0901/pytorch1.6_libtorch/build/nccl/obj/channel.o
    Compiling  bootstrap.cc                        > /data_1/everyday/0901/pytorch1.6_libtorch/build/nccl/obj/bootstrap.o
    Compiling  init.cc                             > /data_1/everyday/0901/pytorch1.6_libtorch/build/nccl/obj/init.o
    Compiling  transport.cc                        > /data_1/everyday/0901/pytorch1.6_libtorch/build/nccl/obj/transport.o
    Compiling  enqueue.cc                          > /data_1/everyday/0901/pytorch1.6_libtorch/build/nccl/obj/enqueue.o
    Compiling  misc/nvmlwrap.cc                    > /data_1/everyday/0901/pytorch1.6_libtorch/build/nccl/obj/misc/nvmlwrap.o
    Compiling  misc/group.cc                       > /data_1/everyday/0901/pytorch1.6_libtorch/build/nccl/obj/misc/group.o
    Compiling  misc/ibvwrap.cc                     > /data_1/everyday/0901/pytorch1.6_libtorch/build/nccl/obj/misc/ibvwrap.o
    Generating nccl.h.in                           > /data_1/everyday/0901/pytorch1.6_libtorch/build/nccl/include/nccl.h
    Compiling  misc/rings.cc                       > /data_1/everyday/0901/pytorch1.6_libtorch/build/nccl/obj/misc/rings.o
    Compiling  misc/argcheck.cc                    > /data_1/everyday/0901/pytorch1.6_libtorch/build/nccl/obj/misc/argcheck.o
    Compiling  misc/trees.cc                       > /data_1/everyday/0901/pytorch1.6_libtorch/build/nccl/obj/misc/trees.o
    Compiling  misc/utils.cc                       > /data_1/everyday/0901/pytorch1.6_libtorch/build/nccl/obj/misc/utils.o
    Compiling  misc/topo.cc                        > /data_1/everyday/0901/pytorch1.6_libtorch/build/nccl/obj/misc/topo.o
    Compiling  transport/p2p.cc                    > /data_1/everyday/0901/pytorch1.6_libtorch/build/nccl/obj/transport/p2p.o
    Compiling  transport/shm.cc                    > /data_1/everyday/0901/pytorch1.6_libtorch/build/nccl/obj/transport/shm.o
    Compiling  transport/net.cc                    > /data_1/everyday/0901/pytorch1.6_libtorch/build/nccl/obj/transport/net.o
    Compiling  collectives/all_gather.cc           > /data_1/everyday/0901/pytorch1.6_libtorch/build/nccl/obj/collectives/all_gather.o
    Compiling  collectives/all_reduce.cc           > /data_1/everyday/0901/pytorch1.6_libtorch/build/nccl/obj/collectives/all_reduce.o
    Compiling  collectives/broadcast.cc            > /data_1/everyday/0901/pytorch1.6_libtorch/build/nccl/obj/collectives/broadcast.o
    Compiling  collectives/reduce.cc               > /data_1/everyday/0901/pytorch1.6_libtorch/build/nccl/obj/collectives/reduce.o
    Compiling  transport/net_ib.cc                 > /data_1/everyday/0901/pytorch1.6_libtorch/build/nccl/obj/transport/net_ib.o
    Compiling  collectives/reduce_scatter.cc       > /data_1/everyday/0901/pytorch1.6_libtorch/build/nccl/obj/collectives/reduce_scatter.o
    Generating nccl.pc.in                          > /data_1/everyday/0901/pytorch1.6_libtorch/build/nccl/lib/pkgconfig/nccl.pc
    Compiling  transport/net_socket.cc             > /data_1/everyday/0901/pytorch1.6_libtorch/build/nccl/obj/transport/net_socket.o
    make[2]: Entering directory '/data_1/everyday/0901/pytorch1.6_libtorch/third_party/nccl/nccl/src/collectives/device'
    In file included from include/group.h:10:0,
                     from misc/group.cc:7:
    /data_1/everyday/0901/pytorch1.6_libtorch/build/nccl/include/nccl.h:7:0: error: unterminated #ifndef
     #ifndef NCCL_H_
     ^
    In file included from include/group.h:10:0,
                     from misc/group.cc:7:
    /data_1/everyday/0901/pytorch1.6_libtorch/build/nccl/include/nccl.h:202:5: error: ‘s’ has not been declared
         size_t recvcount, ncclDataType_t datatype, ncclRedOp_t op, ncclComm_t comm,
         ^
    In file included from /usr/include/x86_64-linux-gnu/bits/byteswap.h:27:0,
                     from /usr/include/endian.h:60,
                     from /usr/include/pthread.h:22,
                     from include/core.h:10,
                     from include/group.h:11,
                     from misc/group.cc:7:
    /usr/include/x86_64-linux-gnu/bits/types.h:30:23: error: two or more data types in declaration of ‘__u_char’
     typedef unsigned char __u_char;
                           ^
    /usr/include/x86_64-linux-gnu/bits/types.h:30:31: error: expected ‘)’ before ‘;’ token
     typedef unsigned char __u_char;
                                   ^
    In file included from /usr/include/c++/5/bits/stl_algobase.h:61:0,
                     from /usr/include/c++/5/algorithm:61,
                     from include/core.h:11,
                     from include/group.h:11,
                     from misc/group.cc:7:
    /usr/include/c++/5/bits/cpp_type_traits.h:72:3: error: template with C linkage
       template<typename _Iterator, typename _Container>
       ^
    /usr/include/c++/5/bits/cpp_type_traits.h:85:3: error: template with C linkage
       template<bool>
       ^
    /usr/include/c++/5/bits/cpp_type_traits.h:89:3: error: template specialization with C linkage
       template<>
       ^
    /usr/include/c++/5/bits/cpp_type_traits.h:95:3: error: template with C linkage
       template<class _Sp, class _Tp>
       ^
    /usr/include/c++/5/bits/cpp_type_traits.h:103:3: error: template with C linkage
       template<typename, typename>
       ^
    /usr/include/c++/5/bits/cpp_type_traits.h:110:3: error: template with C linkage
       template<typename _Tp>
       ^
    /usr/include/c++/5/bits/cpp_type_traits.h:118:3: error: template with C linkage
       template<typename _Tp>
       ^
    /usr/include/c++/5/bits/cpp_type_traits.h:125:3: error: template specialization with C linkage
       template<>
       ^
    /usr/include/c++/5/bits/cpp_type_traits.h:135:3: error: template with C linkage
       template<typename _Tp>
       ^
    /usr/include/c++/5/bits/cpp_type_traits.h:146:3: error: template specialization with C linkage
       template<>
       ^
    /usr/include/c++/5/bits/cpp_type_traits.h:153:3: error: template specialization with C linkage
       template<>
       ^
    /usr/include/c++/5/bits/cpp_type_traits.h:160:3: error: template specialization with C linkage
       template<>
    

    The error seems to come from nccl. No obvious fix.
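    One common workaround for this kind of nccl build failure (a hedged suggestion on my part, not something tried in this post) is to disable NCCL entirely before configuring, since a single-machine libtorch build does not need it; pytorch's build reads `USE_NCCL` from the environment:

    ```shell
    # Hedged sketch: turn off NCCL (and, optionally, distributed support)
    # before re-running the configure step, so the bundled nccl is never built.
    export USE_NCCL=0
    export USE_DISTRIBUTED=0

    # The configure step would then look like this (shown here, not executed,
    # since it needs the pytorch source tree and CUDA toolchain):
    echo "cmake -DBUILD_SHARED_LIBS=ON -DUSE_NCCL=${USE_NCCL} .."
    ```

    Any build directory that already contains a half-built nccl should be cleaned first, or CMake may keep reusing the stale `nccl.h`.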

    After searching around some more, I found that libtorch can be built directly instead.

    To build only the libtorch library: create a build folder and run `python ../tools/build_libtorch.py` inside it.

    This command ran without errors, and the libraries were built successfully.
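    Collected as a script, the working path looks like this; the source directory name is an assumption, and the build command itself is only echoed here because it takes a long time and needs the full toolchain:

    ```shell
    # Sketch of the build_libtorch.py route described above.
    SRC_DIR=pytorch                 # cloned pytorch source tree (assumed name)
    mkdir -p "$SRC_DIR/build"       # create the build folder inside the source
    # build_libtorch.py drives the whole CMake build and drops the static and
    # shared libraries under build/lib; command shown rather than run:
    echo "cd $SRC_DIR/build && python ../tools/build_libtorch.py"
    ```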

  • Compiling libtorch from source on Linux

    2020-09-15 21:52:46
    Note: do not build libtorch 1.5.1, which has memory leaks: the third-party OpenMP library leaks, and RReLU also leaks, as the pytorch 1.6.0 bug fixes show. See the official repo: https://github.com/pytorch/pytorch/tree/v1.5.1 to verify. Install...
  • Building LibTorch on Ubuntu

    2021-05-17 09:26:28
    Contents: Building LibTorch on Ubuntu, references, enjoy~ For the build, see https://github.com/pytorch/pytorch#installation
  • Build OpenCV, download the matching libtorch from the pytorch website, and unpack it to any folder. Files needed for the build: a suggested folder test_two; mkdir build; put the following 2 files into build: main.cpp and CMakeLists.txt, plus one image, 7.jpg. main.cpp contents: ...
  • Building libtorch/pytorch

    2019-11-07 15:59:48
    vs2017, 64-bit. git clone --recursive https://github.com/pytorch/pytorch.git ... generating build.ninja with the /MT compile flag, otherwise /MD ... In Release mode the build keeps updating torch.pdb, and this step is painfully slow...
  • Building libtorch from source on Windows (win10 + 32-bit + libtorch)

    2020-04-06 16:54:50
    Building 32-bit libtorch on win10. Contents: 1. Install Anaconda 2. Install Visual Studio 2017 3. Get the pytorch source 4. Set environment variables: 1. open the anaconda prompt 2. add the environment variables 3. location of the built libtorch 4. test...
  • A CPU build of libtorch compiled from the pytorch stable 1.0 branch. It can be built directly from a Windows cmd prompt or with cmake-gui; note that it does not support VS2013 or earlier. VS2017 is the best choice.
  • pytorch has several advantages: ... Upgrading gcc is sometimes inconvenient, even via scl, because production environments all run gcc 4.8.5 and switching with scl enable is awkward, so we can build libtorch from the pytorch source with gcc 4.8.5 directly. For the build, see the docs: ...
  • Recently, while building the libtorch source with vs2015, generating the solution kept failing with an illegal storage class error on "caffe2::TypeMeta::New". It bothered me for a long time. Other people online get it to run on VS2015, so why not me!!! I tried everything from libtorch v1.0 up to...
  • libtorch C++ GPU build issues: 1. For the CUDA build, manually creating a VS project and then adding the include headers and linking the lib libraries is not recommended; build with cmake instead. Create a CMakeLists.txt file following the directory structure below. CMakeLists.txt contents...
  • I searched many approaches and fiddled for a long time; the demo would not even compile and threw all kinds of errors, until the official docs finally sorted it out: https://pytorch.org/cppdocs/installing.html. 1. Download libtorch 2. Unpack it into a libtorch folder 3. Create example-app ...
  • Checking the pytorch releases at https://github.com/pytorch/pytorch: every version after pytorch 1.0.0 is built with vs2017, so mind your VS version when using a prebuilt libtorch. At least Visual Studio 2017 version 15.6 with...
  • Building libtorch on Ubuntu 16.04: 1. Reference blogs 2. Preparation 3. Building and installing libtorch: 3.1 download the Pytorch source 3.2 download the libtorch library 3.3 write the CMakeLists 4. Compatibility issues with older versions (weight loading). 1. Reference blogs: Pytorch + libtorch builds; building libtorch from source 2....
  • The task was to build pytorch's libtorch library; here is a record of the rough steps and issues. Get the source: when cloning the latest code from the official repo, add the recursive flag, because Pytorch needs many third-party libraries for the build: git clone --recursive ...
  • This build is libtorch 1.6.0, because libtorch 1.5.1 above has the memory leak problem. Reason for building: cuda10.0 support. Prerequisites: Visual Studio 2017 installed, cuda10.0 installed, anaconda installed. 1. Download the pytorch source; git clone -b can...
  • Configuring LibTorch on Ubuntu 16.04 and building a libtorch C++ project. Contents: 1. Download LibTorch 2. Build and run a libtorch project example: 1. write the CMake build configuration, CMakeLists.txt...
  • Building ORB-SLAM requires the C++11 standard, but libtorch versions after 1.5 all require C++14, so pick a downloadable libtorch version to match your cuda. For the link format, try the format given in "link errors triggered by pangolin and libtorch on Ubuntu", ...
  • =d_)) { //std::cout(test_pytorch) #set(CMAKE_CXX_STANDARD 11) #set(CMAKE_CXX_STANDARD_REQUIRED ON) cmake_minimum_required(VERSION 3.5) set(Torch_DIR /home/mask/Downloads/libtorch/share/cmake/Torch) #...
  • Packaging libtorch as a dll (environment: win10 + libtorch-cuda10.1)
  • Deploying with libtorch in C++ ran into problems, mostly with the torch API: most of the examples online are quite old, and many torch API interfaces have since changed, which is why all kinds of interface errors show up at compile time. 1. CMake error: ...

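    Several of the results above revolve around the same minimal CMake setup for consuming a built or downloaded libtorch. A sketch of that setup; the Torch_DIR path and the target name are illustrative assumptions, not taken from any one post:

    ```cmake
    cmake_minimum_required(VERSION 3.5)
    project(example-app)

    # Point CMake at the unpacked/built libtorch (path is an assumption):
    set(Torch_DIR /path/to/libtorch/share/cmake/Torch)
    find_package(Torch REQUIRED)

    add_executable(example-app main.cpp)
    target_link_libraries(example-app "${TORCH_LIBRARIES}")
    # libtorch releases after 1.5 require C++14 (see the ORB-SLAM note above):
    set_property(TARGET example-app PROPERTY CXX_STANDARD 14)
    ```

    An equivalent alternative to setting Torch_DIR is to configure with `cmake -DCMAKE_PREFIX_PATH=/path/to/libtorch ..`, which is the form the official installing guide uses.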