  • Calling caffe from Python: prediction and feature visualization

    Original post: http://blog.csdn.net/hjimce/article/details/48972877

    Author: hjimce

    Many trained caffe models are available online. Often we just want a convenient way to call one of these models from Python: run prediction/classification tests and inspect the visualized results. For this we can use the Python interface files caffe provides; the "make pycaffe" step during caffe installation builds these Python bindings. I am recording the code I use here so I can copy and paste it later. Python makes this feel effortless; the caffe C++ interface is far more cumbersome.

    The example below calls the Python interface to run gender prediction with a CNN:

    # coding=utf-8
    import os
    import numpy as np
    from matplotlib import pyplot as plt
    import cv2
    import shutil
    import time

    # caffe loads images as BGR; swap channels so the image displays correctly
    def showimage(im):
        if im.ndim == 3:
            im = im[:, :, ::-1]
        plt.set_cmap('jet')
        plt.imshow(im)
        plt.show()

    # Visualize feature maps; padval adjusts the brightness of the padding
    def vis_square(data, padsize=1, padval=0):
        data -= data.min()
        data /= data.max()

        # All feature maps of one layer are tiled onto a single figure, so compute
        # how much of the figure each map occupies and where it is drawn
        n = int(np.ceil(np.sqrt(data.shape[0])))
        padding = ((0, n ** 2 - data.shape[0]), (0, padsize), (0, padsize)) + ((0, 0),) * (data.ndim - 3)
        data = np.pad(data, padding, mode='constant', constant_values=(padval, padval))

        # tile the filters into an image
        data = data.reshape((n, n) + data.shape[1:]).transpose((0, 2, 1, 3) + tuple(range(4, data.ndim + 1)))
        data = data.reshape((n * data.shape[1], n * data.shape[3]) + data.shape[4:])

        showimage(data)

    # Path to the caffe source tree
    caffe_root = '../../../caffe/'
    import sys
    sys.path.insert(0, caffe_root + 'python')
    import caffe

    # Load the mean file
    mean_filename = './imagenet_mean.binaryproto'
    proto_data = open(mean_filename, "rb").read()
    a = caffe.io.caffe_pb2.BlobProto.FromString(proto_data)
    mean = caffe.io.blobproto_to_array(a)[0]

    # Build the network and load the trained model file
    gender_net_pretrained = './caffenet_train_iter_1500.caffemodel'
    gender_net_model_file = './deploy_gender.prototxt'
    gender_net = caffe.Classifier(gender_net_model_file, gender_net_pretrained, mean=mean,
                           channel_swap=(2, 1, 0),  # RGB vs BGR channel order
                           raw_scale=255,           # rescale images from [0, 1] back to [0, 255]
                           image_dims=(256, 256))   # input image size

    # Prediction and feature visualization
    gender_list = ['Male', 'Female']
    input_image = caffe.io.load_image('1.jpg')  # read the image

    prediction_gender = gender_net.predict([input_image])  # predict the gender
    # Print the parameter shapes of every trained layer
    print 'params:'
    for k, v in gender_net.params.items():
        print 'weight:'
        print (k, v[0].data.shape)  # each layer's param blob holds two blobs in a vector; v[0] is the weights
        print 'b:'
        print (k, v[1].data.shape)  # v[1] is the bias
    # Visualize the conv1 filters
    filters = gender_net.params['conv1'][0].data
    vis_square(filters.transpose(0, 2, 3, 1))
    # Visualize the conv2 filters
    '''filters = gender_net.params['conv2'][0].data
    vis_square(filters[:48].reshape(48 ** 2, 5, 5))'''
    # Feature maps
    print 'feature maps:'
    for k, v in gender_net.blobs.items():
        print (k, v.data.shape)
        feat = gender_net.blobs[k].data[0, 0:4]  # first 4 feature maps produced by layer k for the first image
        vis_square(feat, padval=1)

    # Show the input image together with the predicted class
    str_gender = gender_list[prediction_gender[0].argmax()]
    print str_gender

    plt.imshow(input_image)
    plt.title(str_gender)
    plt.show()
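The tiling trick inside vis_square is worth isolating. The sketch below (standalone NumPy, with made-up data rather than caffe blobs) shows how the padding plus the reshape/transpose pair lays N square maps out on an n-by-n grid:

```python
import numpy as np

def tile_maps(data, padsize=1, padval=0):
    """Tile a stack of square 2-D maps (shape [N, H, W]) into one big image."""
    n = int(np.ceil(np.sqrt(data.shape[0])))  # grid side length
    padding = ((0, n ** 2 - data.shape[0]), (0, padsize), (0, padsize))
    data = np.pad(data, padding, mode='constant', constant_values=padval)
    # [n*n, H, W] -> [n, n, H, W] -> [n, H, n, W] -> [n*H, n*W]
    data = data.reshape((n, n) + data.shape[1:]).transpose(0, 2, 1, 3)
    return data.reshape(n * data.shape[1], n * data.shape[3])

maps = np.arange(3 * 2 * 2).reshape(3, 2, 2).astype(float)  # 3 maps of 2x2
tiled = tile_maps(maps)
print(tiled.shape)  # 3 maps fit on a 2x2 grid of (2+1)x(2+1) tiles -> (6, 6)
```

The fourth map slot in the grid is filled by the padding, which is why vis_square first pads the stack up to n squared maps.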


  • Calling a darknet-trained model from the Python interface on Windows for object detection in video

    Two ways to call a darknet-trained model from the Python interface on Windows for object detection in video (method 2 worked in practice; you can go straight to my next post).

    One option is to call it directly through OpenCV's Python interface; see:

    1. Adapting the OpenCV Python interface

    https://blog.csdn.net/qq_27158179/article/details/81915740?utm_medium=distribute.pc_relevant_t0.none-task-blog-BlogCommendFromMachineLearnPai2-1.control&depth_1-utm_source=distribute.pc_relevant_t0.none-task-blog-BlogCommendFromMachineLearnPai2-1.control

    2. A verified, working method for detecting images through the OpenCV Python interface

    https://blog.csdn.net/qq_32761549/article/details/90402438

    The other option is to call the python package shipped inside darknet; see:

    1. Importing the darknet package into anaconda

    https://blog.csdn.net/Cwenge/article/details/80389988

    2. Modifying darknet.py

    https://blog.csdn.net/phinoo/article/details/83009061

    3. Modifying the main function

    https://blog.csdn.net/greatsam/article/details/90672386?utm_medium=distribute.pc_relevant.none-task-blog-BlogCommendFromMachineLearnPai2-1.control&depth_1-utm_source=distribute.pc_relevant.none-task-blog-BlogCommendFromMachineLearnPai2-1.control

  • Calling a Python HTTP interface with GET/POST (2019-11-14)

    A Python HTTP interface is generally called in one of two ways: GET or POST.
    1. Calling the interface with GET
    (1) Make a single call with the parameters spelled out in the URL

    import json
    import requests
    
    r = requests.get("http://47.92.225.212:8001/OutCall/recognition?nodeId=6212aea7&query=嗯方便的你说&flowId=1907300015_2&breakTts=0")
    print(r.json())
    

    Result of the call:

    {'matchLableData': '你说', 'lableDataToken': {}, 'answerType': 'HIGH_SIMILARITY', 'conditionId': 'g40', 'matchKeywordRegular': '', 'queryToken': {}, 'nodeId': 'fc0f3f20', 'targetNodeId': 'fc0f3f20', 'matchScore': '1', 'titleCondi': '方便', 'actionCode': '', 'matchKeyword': '', 'result': 3}
    

    (2) Build the parameters as a dict

    import json
    import requests
    nodeId = "6212aea7"
    query = "嗯方便的你说"
    flowId = "1907300015_2"
    breakTts = "0"
    d = {"nodeId": nodeId, "query": query, "flowId": flowId, "breakTts": breakTts}
    rr = requests.get("http://47.92.225.212:8001/OutCall/recognition", params=d)
    print(rr.json())
    

    Result of the call:

    {'matchLableData': '你说', 'lableDataToken': {}, 'answerType': 'HIGH_SIMILARITY', 'conditionId': 'g40', 'matchKeywordRegular': '', 'queryToken': {}, 'nodeId': 'fc0f3f20', 'targetNodeId': 'fc0f3f20', 'matchScore': '1', 'titleCondi': '方便', 'actionCode': '', 'matchKeyword': '', 'result': 3}
    
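The two GET variants above are equivalent: requests URL-encodes the params dict into the same query string that was written out by hand in (1). The same encoding can be reproduced with the standard library alone (a sketch, reusing the host and parameter names from the example above):

```python
from urllib.parse import urlencode

base = "http://47.92.225.212:8001/OutCall/recognition"
d = {"nodeId": "6212aea7", "query": "嗯方便的你说",
     "flowId": "1907300015_2", "breakTts": "0"}
# urlencode percent-encodes the values, including the Chinese query text,
# producing the same URL that requests builds from params=d
url = base + "?" + urlencode(d)
print(url)
```

This is handy for logging or debugging exactly what a GET call will send without opening a connection.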

    2. Calling the Python interface with POST

    import requests
    host = "39.98.93.32:10030"
    url = "http://"+host+"/debang_text_analysis_model"
    body = {"text":{"sentences":[{"speech_rate":252,"emotion_value":5,"begin_time":"0","silence_duration":0,"text":"您好,欢迎致电三大半导体技术,上海有限公司","channel_id":"1","end_time":"5"},{"speech_rate":220,"emotion_value":5,"begin_time":"5","silence_duration":5,"text":"请拨分机号查号,请按0","channel_id":"1","end_time":"8"},{"speech_rate":282,"emotion_value":6,"begin_time":"8","silence_duration":3,"text":"我要看不去,不做伤害,可亏神衰道how's真是纳闷哦ps jung","channel_id":"1","end_time":"15"},{"speech_rate":480,"emotion_value":4,"begin_time":"23","silence_duration":7,"text":"只有朱小姐,我去","channel_id":"2","end_time":"24"},{"speech_rate":120,"emotion_value":6,"begin_time":"28","silence_duration":5,"text":"什么","channel_id":"1","end_time":"29"},{"speech_rate":120,"emotion_value":5,"begin_time":"29","silence_duration":1,"text":"你好","channel_id":"1","end_time":"30"},{"speech_rate":462,"emotion_value":6,"begin_time":"30","silence_duration":0,"text":"喂,你好,哎,你好,我这边是德邦物流公司的,呃,你们那边有一位朱小姐,在我们这里下了一个网站,需要发货是吗?","channel_id":"2","end_time":"37"},{"speech_rate":255,"emotion_value":6,"begin_time":"38","silence_duration":8,"text":"嗯,我们这边应该有同事,已经联系过","channel_id":"1","end_time":"42"},{"speech_rate":240,"emotion_value":5,"begin_time":"42","silence_duration":0,"text":"本公司的","channel_id":"2","end_time":"43"},{"speech_rate":350,"emotion_value":6,"begin_time":"44","silence_duration":1,"text":"你也联系过了对可能已经下单了,对对他是我看到的就是确认你们下的这个订单","channel_id":"2","end_time":"50"},{"speech_rate":400,"emotion_value":6,"begin_time":"51","silence_duration":1,"text":"因为我们看到下订单,就会给客户回电话的。","channel_id":"2","end_time":"54"},{"speech_rate":120,"emotion_value":5,"begin_time":"56","silence_duration":5,"text":"嗯是","channel_id":"1","end_time":"57"},{"speech_rate":220,"emotion_value":6,"begin_time":"58","silence_duration":1,"text":"联系人,就写了个朱小姐","channel_id":"2","end_time":"61"},{"speech_rate":600,"emotion_value":5,"begin_time":"63","silence_duration":5,"text":"对因为刚刚有问过你。","channel_id":"1","end_time":"64"},{"speech_rate":360,"emotion_value":5,"begin_time":"64","silence_duration":0,"text":"我们客服吗?","channel_id":"2","end_time":"65"},{"speech_rate":450,"emotion_value":5,"begin_time":"66","silence_duration":0,"text":"然后他说,你们会给我回电话吗?","channel_id":"2","end_time":"68"},{"speech_rate":340,"emotion_value":5,"begin_time":"68","silence_duration":2,"text":"然后刚刚问我们同事,他说好像是已经","channel_id":"1","end_time":"71"},{"speech_rate":420,"emotion_value":5,"begin_time":"72","silence_duration":4,"text":"叫人来取货了吧","channel_id":"1","end_time":"73"},{"speech_rate":180,"emotion_value":5,"begin_time":"74","silence_duration":0,"text":"啊,今天去取","channel_id":"2","end_time":"76"},{"speech_rate":300,"emotion_value":5,"begin_time":"76","silence_duration":0,"text":"好吧,好吧","channel_id":"2","end_time":"77"}]},"flag":2,"call_ID":"84046_1558682796"}
    r = requests.post(url, body)
    print(r.status_code)
    print(r.status_code)
    

    Return status of the call:

    200
    
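One caveat about the POST example: requests.post(url, body) passes body as data=, which form-encodes the dict, so nested structures like the "text" field do not survive as JSON. If the service expects a JSON body, pass json=body instead (requests then serializes it and sets Content-Type: application/json). The difference can be seen offline with the standard library (a sketch using a trimmed-down payload, not the full one above):

```python
import json
from urllib.parse import urlencode

body = {"flag": 2, "call_ID": "84046_1558682796",
        "text": {"sentences": [{"text": "您好", "channel_id": "1"}]}}

# what data=body sends: top-level keys form-encoded, nested dicts stringified
form = urlencode(body)
# what json=body sends: the full structure serialized as JSON
as_json = json.dumps(body, ensure_ascii=False)

print(form)
print(as_json)
```

Whether the service at this host accepts the form encoding or requires JSON is not stated in the post; if a call returns 200 but the model output looks empty, the body encoding is the first thing to check.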

    References:
    https://www.liaoxuefeng.com/wiki/1016959663602400/1183249464292448
    https://blog.csdn.net/lihao21/article/details/51857385

  • Calling yolov3/yolov4 from the Python interface to detect objects in video and save the detection video

    I. Detecting a video with yolov3 through the Python interface and saving the detection video

    Prerequisites:

    Error fixes: https://blog.csdn.net/qq_34717531/article/details/107466494

    Modifying the darknet source: https://blog.csdn.net/phinoo/article/details/83009061

    # this runs inside darknet's python/darknet.py, which defines load_net,
    # load_meta, detect and nparray_to_image; cv2 must also be imported there
    if __name__ == "__main__":
        net = load_net(b"cfg/yolov3.cfg", b"yolov3.weights", 0)
        meta = load_meta(b"cfg/coco.data")
        vid = cv2.VideoCapture('1.ts')
        fourcc = cv2.VideoWriter_fourcc('M', 'P', '4', '2')  # opencv3.0
        videoWriter = cv2.VideoWriter('11.avi', fourcc, 25, (1920, 1080))
        while True:
            return_value,arr=vid.read()
            if not return_value:
                break 
            im=nparray_to_image(arr)
            boxes = detect(net, meta, im)
           
            for i in range(len(boxes)):
                score=boxes[i][1]
                label=boxes[i][0]
                xmin=boxes[i][2][0]-boxes[i][2][2]/2
                ymin=boxes[i][2][1]-boxes[i][2][3]/2
                xmax=boxes[i][2][0]+boxes[i][2][2]/2
                ymax=boxes[i][2][1]+boxes[i][2][3]/2
                cv2.rectangle(arr,(int(xmin),int(ymin)),(int(xmax),int(ymax)),(0,255,0),2)
                cv2.putText(arr,str(label),(int(xmin),int(ymin)),fontFace=cv2.FONT_HERSHEY_SIMPLEX,fontScale=0.8,color=(0,255,255),thickness=2)    
            cv2.imshow("Canvas", arr)
            videoWriter.write(arr) 
            cv2.waitKey(1) 
        cv2.destroyAllWindows()
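The xmin/ymin/xmax/ymax arithmetic in the loop converts darknet's (cx, cy, w, h) box format, where (cx, cy) is the box center, into corner coordinates for cv2.rectangle. Isolated as a plain function (a sketch independent of darknet; the sample numbers are made up):

```python
def cxcywh_to_xyxy(box):
    """Convert a darknet-style (cx, cy, w, h) box to (xmin, ymin, xmax, ymax)."""
    cx, cy, w, h = box
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

# a 40x20 box centered at (100, 50)
print(cxcywh_to_xyxy((100, 50, 40, 20)))  # (80.0, 40.0, 120.0, 60.0)
```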

     

    II. Detecting a video with yolov4 through the Python interface and saving the detection video

    '''
    Notes:
        darknet python calling interface; see darknet.py for reference.
    '''
    import os
    import cv2
    import numpy as np
    import random
    import darknet
    
    class Detect:
        def __init__(self, metaPath, configPath, weightPath, namesPath):
            '''
            :param metaPath:   ***.data  stores the various parameters
            :param configPath: ***.cfg   network structure file
            :param weightPath: ***.weights  yolo weights
            :param namesPath:  the names path listed in ***.data, kept here for easy reading
            '''
            # network
            self.netMain = darknet.load_net_custom(configPath.encode("ascii"), weightPath.encode("ascii"), 0, 1)
            # the various parameters
            self.metaMain = darknet.load_meta(metaPath.encode("ascii"))
            # read the list of class label names
            self.names = self.read_names(namesPath)
            # each class always gets a consistent color, but colors may differ between runs
            self.colors = self.color()

        def read_names(self, namesPath):
            # read the ***.names file that holds the class label names
            with open(namesPath, 'r') as f:
                lines = f.readlines()
                altNames = [x.strip() for x in lines]
            return altNames

        def color(self):
            # rgb format
            colors = [[random.randint(0, 255) for _ in range(3)] for _ in range(self.metaMain.classes)]
            return colors

        def predict_image(self, image, thresh=0.25, is_show=True, save_path=''):
            '''
            :param image:    image from cv2.imread; darknet does its own preprocessing
            :param thresh:   confidence threshold; the other thresholds keep their defaults
            :param is_show:  whether to return the image with boxes drawn
            :param save_path: where to save the image with boxes drawn
            :return:         a single array
            '''
            # bgr -> rgb
            rgb_img = image[..., ::-1]
            # image size and network input size
            height, width = rgb_img.shape[:2]
            network_width = darknet.network_width(self.netMain)
            network_height = darknet.network_height(self.netMain)
            # resize the image to the network input size
            rsz_img = cv2.resize(rgb_img, (network_width, network_height), interpolation=cv2.INTER_LINEAR)
            # convert to darknet's image format
            darknet_image, _ = darknet.array_to_image(rsz_img)
            detections = darknet.detect_image(self.netMain, self.metaMain, darknet_image, thresh=thresh)

            if is_show:
                for detection in detections:
                    x, y, w, h = detection[2][0], \
                                 detection[2][1], \
                                 detection[2][2], \
                                 detection[2][3]
                    # confidence
                    conf = detection[1]
                    # predicted label
                    label = detection[0].decode()
                    # rescale coordinates back to the original image
                    x *= width / network_width
                    w *= width / network_width
                    y *= height / network_height
                    h *= height / network_height
                    # convert to x1y1x2y2, top-left and bottom-right corners; x runs along w
                    xyxy = np.array([x - w / 2, y - h / 2, x + w / 2, y + h / 2])

                    index = self.names.index(label)
                    label_conf = f'{label} {conf:.2f}'
                    self._plot_one_box(xyxy, rgb_img, self.colors[index], label_conf)
                bgr_img = rgb_img[..., ::-1]
                # save the image
                if save_path:
                    cv2.imwrite(save_path, bgr_img)

                return bgr_img  # return the bgr image with boxes drawn
            return detections

        def _plot_one_box(self, xyxy, img_rgb, color, label):
            # operates directly on the original image
            img = img_rgb[..., ::-1]  # bgr
            pt1 = (int(xyxy[0]), int(xyxy[1]))  # top-left corner
            pt2 = (int(xyxy[2]), int(xyxy[3]))  # bottom-right corner

            thickness = round(0.001 * max(img.shape[0:2])) + 1  # must be an integer
            # if thickness > 1:
            #     thickness = 1  # can be forced to 1
            cv2.rectangle(img, pt1, pt2, color, thickness)  # draw the box; thickness is the line width
            # text width (x) and height (y); the real glyph height is roughly 1.5x this value
            t_size = cv2.getTextSize(label, cv2.FONT_HERSHEY_SIMPLEX, fontScale=thickness / 3, thickness=thickness)[0]
            # two placements: above the box by default; inside it when the box touches the top edge; the right edge is ignored
            c1 = (pt1[0], pt1[1]-int(t_size[1]*1.5)) if pt1[1]-int(t_size[1]*1.5) >= 0 else (pt1[0], pt1[1])
            c2 = (pt1[0]+t_size[0], pt1[1]) if pt1[1]-int(t_size[1]*1.5) >= 0 else (pt1[0]+t_size[0], pt1[1]+int(t_size[1]*1.5))

            # fill the label background with the same color as the box
            cv2.rectangle(img, c1, c2, color, -1)  # thickness=-1 means filled
            # draw the text, starting at the lower third of the label box
            text_pos = (c1[0], c1[1]+t_size[1])
            cv2.putText(img, label, text_pos, cv2.FONT_HERSHEY_SIMPLEX, thickness / 3, [225, 255, 255], thickness=thickness, lineType=cv2.LINE_AA)
            cv2.imshow('frame', img)
    
    
    
    if __name__ == '__main__':
        # select the gpu via the environment variable CUDA_VISIBLE_DEVICES=0
        # detect = Detect(metaPath=r'./cfg/helmet.data',
        #                 configPath=r'./cfg/yolov4-obj.cfg',
        #                 weightPath=r'./backup/yolov4-obj_best.weights',
        #                 namesPath=r'./data/helmet.names')

        # detect = Detect(metaPath=r'./cfg/coco.data',
        #                 configPath=r'./cfg/yolov4.cfg',
        #                 weightPath=r'./yolov4.weights',
        #                 namesPath=r'./data/coco.names')

        detect = Detect(metaPath=r'./cfg/coco.data',
                        configPath=r'./cfg/yolov4.cfg',
                        weightPath=r'./yolov4.weights',
                        namesPath=r'./cfg/coco.names')

        # image = cv2.imread(r'/home/Datasets/image/000000.jpg', -1)
        # image = cv2.imread(r'/home/Datasets/20200714085948.jpg', -1)
        # detect.predict_image(image, save_path='./pred.jpg')

        ''' read the video, save the video '''
        cap = cv2.VideoCapture(r'/home/ycc/darknet-master/1.ts')
        # fps, width and height of the video
        fps = int(cap.get(cv2.CAP_PROP_FPS))
        width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
        height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
        count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
        print(count)
        # create the output video
        fourcc = cv2.VideoWriter_fourcc(*'mp4v')
        video_writer = cv2.VideoWriter(r'/home/ycc/darknet-master/fire1.mp4', fourcc=fourcc, fps=fps, frameSize=(width, height))
        ret, frame = cap.read()  # ret is True while there is a next frame
        while ret:
            # run prediction on each frame (frames are shown inside _plot_one_box)
            pred = detect.predict_image(frame)
            video_writer.write(pred)
            cv2.waitKey(fps)
            # read the next frame
            ret, frame = cap.read()
            print(ret)
        cap.release()
        cv2.destroyAllWindows()
    
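predict_image maps detections from the network input size back to the original frame by multiplying x and w by width/network_width, and y and h by height/network_height. As a standalone helper (a sketch; the function name and sample sizes are mine, not darknet's):

```python
def rescale_box(box, orig_wh, net_wh):
    """Map a (cx, cy, w, h) box from network input size back to the original image."""
    (ow, oh), (nw, nh) = orig_wh, net_wh
    cx, cy, w, h = box
    return (cx * ow / nw, cy * oh / nh, w * ow / nw, h * oh / nh)

# a box at the center of a 608x608 network input, mapped onto a 1920x1080 frame
scaled = rescale_box((304, 304, 60, 60), (1920, 1080), (608, 608))
print(scaled[:2])  # (960.0, 540.0) -- still the frame center
```

Note that width and height are scaled by different factors, so the box aspect ratio changes along with the frame's.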

     

    A problem you may hit:

    ycc@ycc:~/darknet-master$ python fire.py 
      File "fire.py", line 80
        label_conf = f'{label} {conf:.2f}'
                                         ^
    SyntaxError: invalid syntax
    

    Fix:

    #label_conf = f"{label} {conf:.2f}"
    label_conf =  label + " [" + str(round(conf* 100, 2)) + "]"
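That SyntaxError means the script is running under an interpreter older than Python 3.6, where f-strings do not exist. Besides the concatenation fix above, str.format works on both Python 2.7 and 3.x and keeps the two-decimal formatting (a sketch; the label and confidence values are made up):

```python
label = "person"
conf = 0.8731
# equivalent to f"{label} [{conf:.2f}]" but valid on older interpreters
label_conf = "{} [{:.2f}]".format(label, conf)
print(label_conf)  # person [0.87]
```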

     

    Reference: https://blog.csdn.net/greatsam/article/details/90672386

  • Calling yolov3 from the Python interface (2018-10-11): the darknet source provided on the YOLO site ships a sample program using the Python interface, darknet.py, which processes a single image: https://github.com/pjreddie/darknet/blob/master/python/darknet.py
  • Calling salt from its Python API (2015-04-09): the script must run on the master machine, as the master user; get the master configuration with salt.config.client_config(path, env_var='SALT_CLIENT_CONFIG', ...)
  • A requests.exceptions.SSLError (HTTPSConnectionPool, host 'oapi.dingtalk.com') raised during an interface call under Python 3.7.4 after installing the requests package
  • Fixing LabVIEW 2018 failing to call the Python interface: error 1663 (also errors 1661 and 1662); credit to 钟博士, whose forum explanation solved it (public account: 钟博士LabVIEW工作室)
  • Testing a trained caffemodel's classification accuracy through caffe's Python interface: note that the net.prototxt file must first be converted into a deploy.prototxt file; another post tests with the pretrained model shipped with caffe at ~/caffe/models/bvlc_reference_caffe...
  • Installing OpenNI2 on Win10 and driving a Kinect through the Python interface (2018-01-22): install Kinect SDK 2.0 before plugging the Kinect into a USB 3.0 port (it must be 3.0), check that it appears in Device Manager, then test with Kinect Studio
  • Running the darknet.py sample fails with Couldn't open file: data/coco.names even though that path does not appear in the py file; fix: edit coco.data in the cfg directory and change data.name to …/data.name
