  • cifar10

    1,000+ views · 2018-09-13 21:27:22
    When following the convolutional neural network tutorial, clicking "download code" does not actually download anything. Instead, search for tensorflow on GitHub and download the archive, which contains all of the TensorFlow files; search the tree for image, and under the image folder you can find the cifar10...

     

     

    Environment: PyCharm + CUDA 9.2 + cuDNN + TensorFlow 1.8 + GTX 1050 Ti + Windows 10 + Python 3.6

    Note: in the TensorFlow Chinese community, clicking "download code" in the convolutional neural network tutorial does not actually download anything. You can instead search for tensorflow on GitHub and download the archive, which contains all of the TensorFlow files; search the folder tree for image, and under the image folder you will find all of the cifar10 code.

    Run cifar10_train. If you change nothing, it creates tmp/cifar_data under the current folder and downloads the CIFAR-10 binary archive into it. The download is slow, so you can also find the data elsewhere, download and extract it yourself, which then requires configuring the path; a download/extract sketch is given below.
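
    If you download the archive yourself, the following is a minimal sketch, assuming the official binary release at http://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz and a hypothetical local folder cifar10_data; it reproduces the layout the tutorial code expects (cifar-10-batches-bin/ containing data_batch_1.bin through data_batch_5.bin):

    import os
    import tarfile
    import urllib.request

    DATA_DIR = "cifar10_data"   # hypothetical local folder; adjust to your own layout
    URL = "http://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz"

    os.makedirs(DATA_DIR, exist_ok=True)
    archive = os.path.join(DATA_DIR, "cifar-10-binary.tar.gz")
    if not os.path.exists(archive):
        urllib.request.urlretrieve(URL, archive)    # manual download of the binary release
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(DATA_DIR)                    # creates cifar10_data/cifar-10-batches-bin/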

    Below are some of the problems I ran into while configuring the path.

    1>

    raise ValueError('Failed to find file: ' + f)
    ValueError: Failed to find file: cifar10_data/cifar-10-batches-py/data_batch_1.bin

    When this error appears, it generally comes down to one of the following causes.

    This first case was caused by the trailing .bin in the path. Note that when PyCharm lists the files on the left, the file type is not shown (screenshot omitted); these file names carry no .bin suffix, so the check in cifar10_input cannot find them:

    # Excerpt from cifar10_input.py; data_dir is the dataset directory passed in by the caller.
    import os
    import tensorflow as tf
    from six.moves import xrange  # the tutorial code targets both Python 2 and 3

    filenames = [os.path.join(data_dir, 'data_batch_%d' % i)
                 for i in xrange(1, 6)]
    for f in filenames:
        if not tf.gfile.Exists(f):
            raise ValueError('Failed to find file: ' + f)

    Adjusting the names so they match fixes it; a quick check of which CIFAR-10 release was actually extracted is sketched below.
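
    Side note: cifar-10-batches-py is the pickled Python release of the dataset, while the tutorial's cifar10_input.py reads the binary release, cifar-10-batches-bin, whose batch files do end in .bin. A minimal check of which release was actually extracted, assuming a hypothetical cifar10_data folder:

    import os

    data_dir = "cifar10_data"   # hypothetical download folder
    for name in sorted(os.listdir(data_dir)):
        print(name)
    # 'cifar-10-batches-bin' (data_batch_1.bin, ...) is the binary release the tutorial reads
    # 'cifar-10-batches-py'  (data_batch_1, ...)     is the pickled Python release, which it does not read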

    2>

    raise ValueError('Failed to find file: ' + f)
    ValueError: Failed to find file: cifar10_data/cifar-10-batches-bin/data_batch_1.bin

    As above, this one is caused by a problem with the cifar-10-batches-bin part in the middle of the path, usually introduced while extracting the archive yourself or while typing the code. Find that string, cifar-10-batches-bin, and change it back. If you are not familiar with the code, press Ctrl+F and search for it.

    3>

    raise ValueError('Failed to find file: ' + f)
    ValueError: Failed to find file: /cifar10_data/cifar-10-batches-bin/data_batch_1.bin

    This happens because a leading / (slash) in the path makes the lookup start at the root of the C: drive, where the file of course is not found. If you want the lookup to start from the current folder, simply remove the leading /. A small demonstration follows.
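
    A minimal illustration of the difference, assuming it is run from the project folder (the printed paths are only examples):

    import os

    # A leading "/" anchors the path at the drive root rather than at the project folder.
    print(os.path.abspath("/cifar10_data"))    # e.g. C:\cifar10_data on Windows
    print(os.path.abspath("cifar10_data"))     # e.g. <current working directory>\cifar10_data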

    4> Then there is the backslash problem.

    raise ValueError('Failed to find file: ' + f)
    ValueError: Failed to find file: /cifar10_data\cifar-10-batches-bin\data_batch_1.bin

    This one is caused by os.path.join: when it concatenates two path components and the first one does not already end with /, it automatically inserts \ to separate them, and in my setup Python 3.6 then failed to resolve that path, producing the error. The fix is to add a / at the end of the first component, as demonstrated below.
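
    A minimal demonstration of that join behaviour on Windows (the paths are only examples):

    import os

    # Without a trailing separator, os.path.join inserts the platform separator "\" on Windows:
    print(os.path.join("cifar10_data", "cifar-10-batches-bin"))     # cifar10_data\cifar-10-batches-bin
    # With a trailing "/", no extra separator is inserted and the "/" is kept as written:
    print(os.path.join("cifar10_data/", "cifar-10-batches-bin"))    # cifar10_data/cifar-10-batches-bin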

    Afterwards I set up the GPU build of TensorFlow, opened the multi-GPU version, and clicked run.

    It worked. As for the code itself, I am still studying it; to be continued. The assertion below is the divergence check from the training loop:

    assert not np.isnan(loss_value), 'Model diverged with loss = NaN'
    AssertionError: Model diverged with loss = NaN
    

     

  • cifar10 (source code)

    2021-03-31 12:12:34
    cifar10
  • CIFAR10 (source code)

    2021-02-24 07:39:11
    CIFAR10
  • pytorch-cifar10: personal practice on CIFAR10 with PyTorch, inspired by . Introduction: the CIFAR-10 dataset contains 60,000 32x32 colour images in 10 classes, 6,000 images per class; there are 50,000 training images and 10,000 test images. The dataset is split into five training...
  • A cifar10 dataset for Tensorflow.js, converted from . I initially tried converting to JSON, but the resulting data was much larger; the current approach converts to the same image format as the MNIST data on the official TensorFlow.js site. Each image file holds 10,000 samples, one per row. This...
  • CIFAR10: CIFAR-10 transfer learning with ResNet in PyTorch
  • TensorFlow convolutional neural network on cifar10, with code walkthrough; very helpful for beginners
  • cifar10_inception10.py

    2021-02-19 17:13:44
    cifar10_inception10.py
  • cifar10vgg

    2017-12-14 13:50:56
    cifar10vgg.h5 is the weight file from training cifar10 with a VGG16 convolutional neural network; it can be used for transfer learning. The code is available on my download page: http://download.csdn.net/download/qq_30803353/10158299
  • cifar10 dataset

    2018-04-11 17:21:40
    The cifar10 data packaged for TensorFlow; ready to use after extraction; downloaded from GitHub.
  • cifar10_cifar100合集.zip

    2020-08-14 09:04:23
    The Python version of cifar10/cifar100 as a bundle; download and extract it to a path of your choice (a minimal loading sketch follows). Original download addresses: http://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz, http://www.cs.toronto.edu/~kriz/cifar-100-python.tar.gz
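
    A minimal loading sketch, assuming the cifar-10 archive was extracted to ./cifar-10-batches-py (each batch file is a pickled dictionary, as documented on the CIFAR download page):

    import pickle
    import numpy as np

    def unpickle(path):
        # Each batch holds b'data' (10000 x 3072 uint8 rows) and b'labels' (10000 ints).
        with open(path, "rb") as f:
            return pickle.load(f, encoding="bytes")

    batch = unpickle("cifar-10-batches-py/data_batch_1")
    images = batch[b"data"].reshape(-1, 3, 32, 32)    # channel-first 32x32 RGB images
    labels = np.array(batch[b"labels"])               # class indices 0 to 9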
  • cifar10 image set

    2018-08-28 11:19:22
    A cifar10 image set converted from the original dataset.
  • CAFFE CIFAR10 MODEL IMAGE: cifar10 full

    1,000+ views · 2016-10-03 12:36:42
    CAFFE CIFAR10 MODEL IMAGE: cifar10 full

    cifar10_full_train_test.prototxt:

    name: "CIFAR10_full"
    layer {
      name: "cifar"
      type: "Data"
      top: "data"
      top: "label"
      include {
        phase: TRAIN
      }
      transform_param {
        mean_file: "examples/cifar10/mean.binaryproto"
      }
      data_param {
        source: "examples/cifar10/cifar10_train_lmdb"
        batch_size: 100
        backend: LMDB
      }
    }
    layer {
      name: "cifar"
      type: "Data"
      top: "data"
      top: "label"
      include {
        phase: TEST
      }
      transform_param {
        mean_file: "examples/cifar10/mean.binaryproto"
      }
      data_param {
        source: "examples/cifar10/cifar10_test_lmdb"
        batch_size: 100
        backend: LMDB
      }
    }
    layer {
      name: "conv1"
      type: "Convolution"
      bottom: "data"
      top: "conv1"
      param {
        lr_mult: 1
      }
      param {
        lr_mult: 2
      }
      convolution_param {
        num_output: 32
        pad: 2
        kernel_size: 5
        stride: 1
        weight_filler {
          type: "gaussian"
          std: 0.0001
        }
        bias_filler {
          type: "constant"
        }
      }
    }
    layer {
      name: "pool1"
      type: "Pooling"
      bottom: "conv1"
      top: "pool1"
      pooling_param {
        pool: MAX
        kernel_size: 3
        stride: 2
      }
    }
    layer {
      name: "relu1"
      type: "ReLU"
      bottom: "pool1"
      top: "pool1"
    }
    layer {
      name: "norm1"
      type: "LRN"
      bottom: "pool1"
      top: "norm1"
      lrn_param {
        local_size: 3
        alpha: 5e-05
        beta: 0.75
        norm_region: WITHIN_CHANNEL
      }
    }
    layer {
      name: "conv2"
      type: "Convolution"
      bottom: "norm1"
      top: "conv2"
      param {
        lr_mult: 1
      }
      param {
        lr_mult: 2
      }
      convolution_param {
        num_output: 32
        pad: 2
        kernel_size: 5
        stride: 1
        weight_filler {
          type: "gaussian"
          std: 0.01
        }
        bias_filler {
          type: "constant"
        }
      }
    }
    layer {
      name: "relu2"
      type: "ReLU"
      bottom: "conv2"
      top: "conv2"
    }
    layer {
      name: "pool2"
      type: "Pooling"
      bottom: "conv2"
      top: "pool2"
      pooling_param {
        pool: AVE
        kernel_size: 3
        stride: 2
      }
    }
    layer {
      name: "norm2"
      type: "LRN"
      bottom: "pool2"
      top: "norm2"
      lrn_param {
        local_size: 3
        alpha: 5e-05
        beta: 0.75
        norm_region: WITHIN_CHANNEL
      }
    }
    layer {
      name: "conv3"
      type: "Convolution"
      bottom: "norm2"
      top: "conv3"
      convolution_param {
        num_output: 64
        pad: 2
        kernel_size: 5
        stride: 1
        weight_filler {
          type: "gaussian"
          std: 0.01
        }
        bias_filler {
          type: "constant"
        }
      }
    }
    layer {
      name: "relu3"
      type: "ReLU"
      bottom: "conv3"
      top: "conv3"
    }
    layer {
      name: "pool3"
      type: "Pooling"
      bottom: "conv3"
      top: "pool3"
      pooling_param {
        pool: AVE
        kernel_size: 3
        stride: 2
      }
    }
    layer {
      name: "ip1"
      type: "InnerProduct"
      bottom: "pool3"
      top: "ip1"
      param {
        lr_mult: 1
        decay_mult: 250
      }
      param {
        lr_mult: 2
        decay_mult: 0
      }
      inner_product_param {
        num_output: 10
        weight_filler {
          type: "gaussian"
          std: 0.01
        }
        bias_filler {
          type: "constant"
        }
      }
    }
    layer {
      name: "accuracy"
      type: "Accuracy"
      bottom: "ip1"
      bottom: "label"
      top: "accuracy"
      include {
        phase: TEST
      }
    }
    layer {
      name: "loss"
      type: "SoftmaxWithLoss"
      bottom: "ip1"
      bottom: "label"
      top: "loss"
    }
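
    To actually run training with this definition, the usual route is Caffe's command-line tool from the Caffe root, roughly ./build/tools/caffe train --solver=examples/cifar10/cifar10_full_solver.prototxt. A minimal pycaffe sketch under the same assumptions (a standard Caffe checkout with pycaffe built, and the stock cifar10_full_solver.prototxt pointing at this train/test file):

    import caffe

    caffe.set_mode_gpu()    # or caffe.set_mode_cpu() if no GPU is available
    solver = caffe.SGDSolver("examples/cifar10/cifar10_full_solver.prototxt")
    solver.solve()          # runs the TRAIN/TEST phases defined in the prototxt above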

    Topology diagram (image omitted).
  • cifar10 & cifar100 datasets
  • cifar10分类.py

    2021-04-05 21:00:42
    cifar10分类.py
  • TensorFlow convolutional neural network (CNN) on the cifar10 dataset, with source-code walkthrough; very helpful for beginners
  • cifar10.zip

    2019-08-20 18:28:50
    Distributed training on the cifar10 dataset from the MXNet official site.
  • Solved: create_cifar10.sh: 13: create_cifar10.sh: ./build/examples/cifar10/convert_cifar_data.bin: not found

    When converting cifar10 to LMDB format, running the following command reports an error:

    sh create_cifar10.sh
    

    The error reported is:

    create_cifar10.sh: 13: create_cifar10.sh: ./build/examples/cifar10/convert_cifar_data.bin: not found
    

    However, the file convert_cifar_data.bin does in fact exist under build/examples/cifar10 in the caffe directory (screenshots omitted).

    Solution:

    The script refers to ./build/examples/cifar10/convert_cifar_data.bin relative to the current working directory, so run it from the caffe root directory with the following command:

    sh examples/cifar10/create_cifar10.sh
    


  • cifar10.py

    2021-02-19 16:34:32
    A basic CNN built as a class definition; also includes cifar10 dataset loading, saving and restoring model parameters, and plotting of loss and accuracy curves.
  • kaggle-cifar10-torch7: code for the Kaggle CIFAR-10 competition, 5th place. http://www.kaggle.com/c/cifar-10 Summary: a very deep convolutional network with 3x3 kernels [1], with data augmentation by cropping and horizontal reflection [2]
  • CAFFE CIFAR10 MODEL IMAGE: cifar10 full sigmoid

    The cifar10 topology file after adding Sigmoid layers is:

    name: "CIFAR10_full"
    layer {
      name: "cifar"
      type: "Data"
      top: "data"
      top: "label"
      include {
        phase: TRAIN
      }
      transform_param {
        mean_file: "examples/cifar10/mean.binaryproto"
      }
      data_param {
        source: "examples/cifar10/cifar10_train_lmdb"
        batch_size: 111
        backend: LMDB
      }
    }
    layer {
      name: "cifar"
      type: "Data"
      top: "data"
      top: "label"
      include {
        phase: TEST
      }
      transform_param {
        mean_file: "examples/cifar10/mean.binaryproto"
      }
      data_param {
        source: "examples/cifar10/cifar10_test_lmdb"
        batch_size: 1000
        backend: LMDB
      }
    }
    layer {
      name: "conv1"
      type: "Convolution"
      bottom: "data"
      top: "conv1"
      param {
        lr_mult: 1
      }
      param {
        lr_mult: 2
      }
      convolution_param {
        num_output: 32
        pad: 2
        kernel_size: 5
        stride: 1
        weight_filler {
          type: "gaussian"
          std: 0.0001
        }
        bias_filler {
          type: "constant"
        }
      }
    }
    layer {
      name: "pool1"
      type: "Pooling"
      bottom: "conv1"
      top: "pool1"
      pooling_param {
        pool: MAX
        kernel_size: 3
        stride: 2
      }
    }
    
    
    
    layer {
      name: "Sigmoid1"
      type: "Sigmoid"
      bottom: "pool1"
      top: "Sigmoid1"
    }
    
    layer {
      name: "conv2"
      type: "Convolution"
      bottom: "Sigmoid1"
      top: "conv2"
      param {
        lr_mult: 1
      }
      param {
        lr_mult: 2
      }
      convolution_param {
        num_output: 32
        pad: 2
        kernel_size: 5
        stride: 1
        weight_filler {
          type: "gaussian"
          std: 0.01
        }
        bias_filler {
          type: "constant"
        }
      }
    }
    
    
    layer {
      name: "Sigmoid2"
      type: "Sigmoid"
      bottom: "conv2"
      top: "Sigmoid2"
    }
    layer {
      name: "pool2"
      type: "Pooling"
      bottom: "Sigmoid2"
      top: "pool2"
      pooling_param {
        pool: AVE
        kernel_size: 3
        stride: 2
      }
    }
    layer {
      name: "conv3"
      type: "Convolution"
      bottom: "pool2"
      top: "conv3"
      convolution_param {
        num_output: 64
        pad: 2
        kernel_size: 5
        stride: 1
        weight_filler {
          type: "gaussian"
          std: 0.01
        }
        bias_filler {
          type: "constant"
        }
      }
      param {
        lr_mult: 1
      }
      param {
        lr_mult: 1
      }
    
    }
    
    layer {
      name: "Sigmoid3"
      type: "Sigmoid"
      bottom: "conv3"
      top: "Sigmoid3"
    }
    
    layer {
      name: "pool3"
      type: "Pooling"
      bottom: "Sigmoid3"
      top: "pool3"
      pooling_param {
        pool: AVE
        kernel_size: 3
        stride: 2
      }
    }
    
    layer {
      name: "ip1"
      type: "InnerProduct"
      bottom: "pool3"
      top: "ip1"
      param {
        lr_mult: 1
        decay_mult: 0
      }
      param {
        lr_mult: 2
        decay_mult: 0
      }
      inner_product_param {
        num_output: 10
        weight_filler {
          type: "gaussian"
          std: 0.01
        }
        bias_filler {
          type: "constant"
        }
      }
    }
    layer {
      name: "accuracy"
      type: "Accuracy"
      bottom: "ip1"
      bottom: "label"
      top: "accuracy"
      include {
        phase: TEST
      }
    }
    layer {
      name: "loss"
      type: "SoftmaxWithLoss"
      bottom: "ip1"
      bottom: "label"
      top: "loss"
    }
    

    The topology diagram then changes accordingly (image omitted).
  • CIFAR10_mxnet: Kaggle CIFAR10 competition code implemented with MXNet Gluon; by combining several ideas we reached 0.9688. Directory and file descriptions: logs contains some training log files, models contains some trained model parameters (weights), results contains kaggle...
  • This code can extract images from the CIFAR 10 dataset. Create a directory named base in your workspace.
  • DCGAN_RAY_CIFAR10: training a DCGAN on the CIFAR10 dataset with the Ray library
  • cifar10 dataset

    2018-08-23 11:07:17
    The cifar10 dataset, with usage instructions included.
  • CIFAR benchmark: baseline classifiers on the CIFAR10 dataset.
