  • validation accuracy vs train accuracy

    1,000+ reads · 2018-11-18 23:00:08

    During training there was no gap between validation accuracy and train accuracy, and I had assumed this was a fairly good curve. But in a discussion today someone commented that this actually suggests the network has too few parameters: with enough parameters there is almost always some degree of overfitting, so normally train accuracy ends up higher than validation accuracy. That sounds quite convincing!

  • Top-1 Accuracy和Top-5 Accuracy

    1,000+ reads · Liked by many · 2019-12-01 20:01:27

    What are Top-1 Accuracy and Top-5 Accuracy?
    ImageNet has roughly 1,000 classes. When a model classifies an image, it outputs a probability for each of those classes, which can be ranked from highest to lowest.

    • Top-1 Accuracy: the fraction of images for which the top-ranked class matches the true label.
    • Top-5 Accuracy: the fraction of images for which the true label appears among the five top-ranked classes.
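    These two definitions can be sketched in plain Python (a hedged illustration, not ImageNet tooling; `top_k_accuracy` is a name chosen here):

    ```python
    def top_k_accuracy(probs, labels, k=1):
        """Fraction of samples whose true label is among the k highest-scoring classes."""
        hits = 0
        for p, y in zip(probs, labels):
            # indices of the k largest scores, highest first
            top_k = sorted(range(len(p)), key=lambda i: p[i], reverse=True)[:k]
            hits += y in top_k
        return hits / len(labels)

    probs = [[0.10, 0.60, 0.20, 0.10],   # top-1 prediction: class 1 (correct)
             [0.50, 0.10, 0.30, 0.10]]   # top-1 prediction: class 0; true class 2 ranks second
    labels = [1, 2]
    print(top_k_accuracy(probs, labels, k=1))  # 0.5
    print(top_k_accuracy(probs, labels, k=2))  # 1.0
    ```

    With k=1 only the second sample is wrong; with k=5 (as in ImageNet reporting) a prediction counts as correct whenever the true label is anywhere in the top five.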
  • probe_accuracy – source code

    2021-05-23 15:05:10
    Probe accuracy test, for a 3D... running Klipper. Download probe_accuracy.py from the repository and copy it to /home/pi/probe_accuracy/ on the Raspberry Pi. Also download test_probe_accuracy.cfg from the repository and copy it to the directory containing printer.cfg
  • categorical_accuracy vs sparse_categorical_accuracy in Keras

    Keras ships with several built-in evaluation metrics.

    • For binary classification, the metric is binary_accuracy, accuracy in the most intuitive sense.
    • For multi-class or multi-label tasks, the metrics are usually categorical_accuracy and sparse_categorical_accuracy.

    binary_accuracy needs no further explanation; this article explains the difference between categorical_accuracy and sparse_categorical_accuracy:

    The official Keras API reference is here, but it does not explain what each metric actually computes, so let's look at the source:

    def categorical_accuracy(y_true, y_pred):
        return K.cast(K.equal(K.argmax(y_true, axis=-1),
                              K.argmax(y_pred, axis=-1)),
                      K.floatx())
    
    def sparse_categorical_accuracy(y_true, y_pred):
        return K.cast(K.equal(K.max(y_true, axis=-1),
                              K.cast(K.argmax(y_pred, axis=-1), K.floatx())),
                      K.floatx())
    

    From the source we can see:

    categorical_accuracy checks whether the index of the largest value in y_true equals the index of the largest value in y_pred.

    • Note that only one position is compared, the index of the single largest value. This is appropriate for multi-class single-label tasks, but not for multi-label tasks.
    • Here y_true should be a one-hot vector.

    sparse_categorical_accuracy checks whether the value in y_true (which is itself the class index) equals the index of the largest value in y_pred.

    • For the sparse multi-class case, y_true is simply the integer index of the true class.

    An example:

    Suppose there are four classes and a sample belongs to the third class. Then for categorical_accuracy, y_true = (0, 0, 1, 0), while for sparse_categorical_accuracy, y_true = 2 (0-based). y_pred is the same in both cases, the softmax output vector, e.g. y_pred = (0.02, 0.05, 0.83, 0.1). So:

    y_true = (0, 0, 1, 0)
    y_pred = (0.02, 0.05, 0.83, 0.1)
    acc = categorical_accuracy(y_true, y_pred)
    
    y_true = 2
    y_pred = (0.02, 0.05, 0.83, 0.1)
    acc = sparse_categorical_accuracy(y_true, y_pred)
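
    The snippets above call Keras backend ops; the same logic can be checked without Keras in plain Python (a sketch only; the function names mirror the Keras metrics but these are standalone single-sample re-implementations):

    ```python
    def argmax(v):
        """Index of the largest entry, like K.argmax on the last axis."""
        return max(range(len(v)), key=v.__getitem__)

    def categorical_accuracy(y_true, y_pred):
        # y_true is one-hot: compare the argmax of both vectors
        return float(argmax(y_true) == argmax(y_pred))

    def sparse_categorical_accuracy(y_true, y_pred):
        # y_true is the integer class index itself
        return float(y_true == argmax(y_pred))

    y_pred = [0.02, 0.05, 0.83, 0.10]
    print(categorical_accuracy([0, 0, 1, 0], y_pred))   # 1.0
    print(sparse_categorical_accuracy(2, y_pred))       # 1.0
    ```

    Both return 1.0 here because the largest entry of y_pred sits at index 2, which is exactly where the one-hot vector has its 1 and what the sparse label says.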
    

    Reference:
    Keras - Difference between categorical_accuracy and sparse_categorical_accuracy

  • caffe: displaying per-class accuracy (with accuracy_layer source modifications)

    caffe: displaying per-class accuracy (with accuracy_layer source modifications)

    Tags: Deep_Learning


    This post shows how, with a few simple tweaks while training a classification network, you can additionally display the accuracy of each individual class, and use that information to tune the network further.


    Method 1: modify the prototxt file


      Here we edit the test prototxt, deploy.prototxt, adding a top: "class" output to the Accuracy layer.

    layer {
      name: "data"
      type: "Data"
      top: "data"
      top: "label"
      include {
        phase: TEST
      }
      transform_param {
        mean_file: "/home/kb539/YH/work/behavior_recognition/lmdb/imagenet_mean.binaryproto"
        mirror: false
        crop_size: 224
      }
      data_param {
        source: "/home/kb539/YH/work/behavior_recognition/lmdb/test_lmdb"
        batch_size: 128     # note the batch_size setting (related to the validation-set size)
        backend: LMDB
      }
    }
    layer {
      name: "accuracy"
      type: "Accuracy"
      bottom: "fc8_score"
      bottom: "label"
      top: "accuracy@1"
      top: "class"      # the source uses top[0]/top[1]; top[1] holds the per-class accuracies
      include: { phase: TEST }
      accuracy_param {
        top_k: 1
      }
    }

      Next, run the caffe test tool; the test script is shown below:

    #!/usr/bin/env sh
    set -e
    
    /home/kb539/YH/caffe-master/build/tools/caffe test --gpu=0 --model=/home/kb539/YH/work/behavior_recognition/vgg_16/deploy.prototxt --weights=/home/kb539/YH/work/behavior_recognition/vgg_16/output/case_two.caffemodel --iterations=21     # iterations*batch_size >= number of validation samples

      This produces the following output (note: I have 12 classes):

    Test results:
    I0503 15:50:23.471802 12256 caffe.cpp:325] accuracy@1 = 0.857887
    I0503 15:50:23.471859 12256 caffe.cpp:325] loss_fc8 = 0.603455 (* 1 = 0.603455 loss)
    I0503 15:50:23.471871 12256 caffe.cpp:325] perclass = 0.845481
    I0503 15:50:23.471881 12256 caffe.cpp:325] perclass = 0.847117
    I0503 15:50:23.471891 12256 caffe.cpp:325] perclass = 0.786423
    I0503 15:50:23.471900 12256 caffe.cpp:325] perclass = 0.782536
    I0503 15:50:23.471909 12256 caffe.cpp:325] perclass = 0.85791
    I0503 15:50:23.471920 12256 caffe.cpp:325] perclass = 0.944581
    I0503 15:50:23.471928 12256 caffe.cpp:325] perclass = 0.891931
    I0503 15:50:23.471938 12256 caffe.cpp:325] perclass = 0.926242
    I0503 15:50:23.471947 12256 caffe.cpp:325] perclass = 0.919357
    I0503 15:50:23.471956 12256 caffe.cpp:325] perclass = 0.909317
    I0503 15:50:23.471966 12256 caffe.cpp:325] perclass = 0.912399
    I0503 15:50:23.471976 12256 caffe.cpp:325] perclass = 0.704083

    Method 2: modify the accuracy_layer.cpp source directly



    The original accuracy_layer.cpp


      First, read through accuracy_layer.cpp: it builds two output blobs, top[0] and top[1], where top[0] stores the overall validation accuracy and top[1] stores the per-class accuracies.

    #include <functional>
    #include <utility>
    #include <vector>
    
    #include "caffe/layers/accuracy_layer.hpp"
    #include "caffe/util/math_functions.hpp"
    
    namespace caffe {
    
    template <typename Dtype>
    void AccuracyLayer<Dtype>::LayerSetUp(
      const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {
      top_k_ = this->layer_param_.accuracy_param().top_k();
    
      has_ignore_label_ =
        this->layer_param_.accuracy_param().has_ignore_label();
      if (has_ignore_label_) {
        ignore_label_ = this->layer_param_.accuracy_param().ignore_label();
      }
    }
    
    template <typename Dtype>
    void AccuracyLayer<Dtype>::Reshape(
      const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {
      CHECK_LE(top_k_, bottom[0]->count() / bottom[1]->count())
          << "top_k must be less than or equal to the number of classes.";
      label_axis_ =
          bottom[0]->CanonicalAxisIndex(this->layer_param_.accuracy_param().axis());
      outer_num_ = bottom[0]->count(0, label_axis_);    // outer_num_ is the number of images, e.g. 100
      inner_num_ = bottom[0]->count(label_axis_ + 1);   // inner_num_ is the number of predictions per image, here 1
      CHECK_EQ(outer_num_ * inner_num_, bottom[1]->count())
          << "Number of labels must match number of predictions; "
          << "e.g., if label axis == 1 and prediction shape is (N, C, H, W), "
          << "label count (number of labels) must be N*H*W, "
          << "with integer values in {0, 1, ..., C-1}.";
      vector<int> top_shape(0);  // Accuracy is a scalar; 0 axes.   // overall test-set accuracy
      top[0]->Reshape(top_shape);
      if (top.size() > 1) {
        // Per-class accuracy is a vector; 1 axes.
        vector<int> top_shape_per_class(1);
        top_shape_per_class[0] = bottom[0]->shape(label_axis_);
        top[1]->Reshape(top_shape_per_class);   // per-class accuracies: one entry per class, e.g. 10
        nums_buffer_.Reshape(top_shape_per_class);  // per-class image counts, same shape
      }
    }
    
    template <typename Dtype>
    void AccuracyLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,
        const vector<Blob<Dtype>*>& top) {
      Dtype accuracy = 0;       // running count of correct predictions
      const Dtype* bottom_data = bottom[0]->cpu_data(); // e.g. 100 images with 10 class scores each: 100*10
      const Dtype* bottom_label = bottom[1]->cpu_data();    // one label per image: 100*1
      const int dim = bottom[0]->count() / outer_num_;  // dim = 10 when outer_num_ = 100
      const int num_labels = bottom[0]->shape(label_axis_);     // number of classes, e.g. 10
      vector<Dtype> maxval(top_k_+1);
      vector<int> max_id(top_k_+1);
      if (top.size() > 1) {
        caffe_set(nums_buffer_.count(), Dtype(0), nums_buffer_.mutable_cpu_data());
        caffe_set(top[1]->count(), Dtype(0), top[1]->mutable_cpu_data());
      }
      int count = 0;
      for (int i = 0; i < outer_num_; ++i) {
        for (int j = 0; j < inner_num_; ++j) {  // inner_num_ is the number of predictions per image, so = 1
          const int label_value =
              static_cast<int>(bottom_label[i * inner_num_ + j]);
          if (has_ignore_label_ && label_value == ignore_label_) {
            continue;
          }
          if (top.size() > 1) ++nums_buffer_.mutable_cpu_data()[label_value];   // count images per class
          DCHECK_GE(label_value, 0);        // label_value (0..9) must be >= 0
          DCHECK_LT(label_value, num_labels);   // and less than num_labels (10)
          // Top-k accuracy  // top_k: keep the k highest-scoring predicted labels
          std::vector<std::pair<Dtype, int> > bottom_data_vector;
          for (int k = 0; k < num_labels; ++k) {
            bottom_data_vector.push_back(std::make_pair(    // collect (score, class) pairs: dim = 10, inner_num_ = 1, num_labels = 10
                bottom_data[i * dim + k * inner_num_ + j], k));
          }
          std::partial_sort(    // sort the top_k entries by score
              bottom_data_vector.begin(), bottom_data_vector.begin() + top_k_,
              bottom_data_vector.end(), std::greater<std::pair<Dtype, int> >());
          // check if true label is in top k predictions
          for (int k = 0; k < top_k_; k++) {    // look only at the top_k results
            if (bottom_data_vector[k].second == label_value) {  // true label found: count as correct
              ++accuracy;
              if (top.size() > 1) ++top[1]->mutable_cpu_data()[label_value];    // per-class correct count + 1
              break;
            }
          }
          ++count;  // total number of samples counted
        }
      }
    
      // LOG(INFO) << "Accuracy: " << accuracy;
      top[0]->mutable_cpu_data()[0] = accuracy / count; // overall accuracy
      if (top.size() > 1) {
        for (int i = 0; i < top[1]->count(); ++i) {     // per-class accuracies
          top[1]->mutable_cpu_data()[i] =
              nums_buffer_.cpu_data()[i] == 0 ? 0
              : top[1]->cpu_data()[i] / nums_buffer_.cpu_data()[i];
        }
      }
      // Accuracy layer should not be used as a loss function.
    }
    
    INSTANTIATE_CLASS(AccuracyLayer);
    REGISTER_LAYER_CLASS(Accuracy);
    
    }  // namespace caffe

    Modifying accuracy_layer.cpp


      Now modify the source so that only the top[0] blob is built; top[0] stores the overall validation accuracy followed by the per-class accuracies.

    #include <functional>
    #include <utility>
    #include <vector>
    
    #include "caffe/layers/accuracy_layer.hpp"
    #include "caffe/util/math_functions.hpp"
    
    namespace caffe {
    
    template <typename Dtype>
    void AccuracyLayer<Dtype>::LayerSetUp(
      const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {
      top_k_ = this->layer_param_.accuracy_param().top_k();
    
      has_ignore_label_ =
        this->layer_param_.accuracy_param().has_ignore_label();
      if (has_ignore_label_) {
        ignore_label_ = this->layer_param_.accuracy_param().ignore_label();
      }
    }
    
    template <typename Dtype>
    void AccuracyLayer<Dtype>::Reshape(
      const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {
      CHECK_LE(top_k_, bottom[0]->count() / bottom[1]->count())
          << "top_k must be less than or equal to the number of classes.";
      label_axis_ =
          bottom[0]->CanonicalAxisIndex(this->layer_param_.accuracy_param().axis());
      outer_num_ = bottom[0]->count(0, label_axis_);    // outer_num_ is the number of images, e.g. 100
      inner_num_ = bottom[0]->count(label_axis_ + 1);   // inner_num_ is the number of predictions per image, here 1
      CHECK_EQ(outer_num_ * inner_num_, bottom[1]->count())
          << "Number of labels must match number of predictions; "
          << "e.g., if label axis == 1 and prediction shape is (N, C, H, W), "
          << "label count (number of labels) must be N*H*W, "
          << "with integer values in {0, 1, ..., C-1}.";
      int dim = bottom[0]->count() / outer_num_;    // dim = 10
      top[0]->Reshape(1 + dim, 1, 1, 1);
    }
    
    template <typename Dtype>
    void AccuracyLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,
        const vector<Blob<Dtype>*>& top) {
      Dtype accuracy = 0;       // running count of correct predictions
      const Dtype* bottom_data = bottom[0]->cpu_data(); // e.g. 100 images with 10 class scores each: 100*10
      const Dtype* bottom_label = bottom[1]->cpu_data();    // one label per image: 100*1
      int num = outer_num_; // total number of images: 100
      const int dim = bottom[0]->count() / outer_num_;  // dim = 10 when outer_num_ = 100
      vector<Dtype> maxval(top_k_+1);
      vector<int> max_id(top_k_+1);
      vector<Dtype> accuracies(dim, 0); // per-class correct counts
      vector<Dtype> nums(dim, 0);       // per-class image counts
      for (int i = 0; i < outer_num_; ++i) {
          const int label_value = static_cast<int>(bottom_label[i]);        // label of this image
          std::vector<std::pair<Dtype, int> > bottom_data_vector;
          for (int k = 0; k < dim; ++k) {
            bottom_data_vector.push_back(std::make_pair(    // collect (score, class) pairs: dim = 10 classes
                bottom_data[i * dim + k], k));
          }
          std::partial_sort(    // sort the top_k entries by score
              bottom_data_vector.begin(), bottom_data_vector.begin() + top_k_,
              bottom_data_vector.end(), std::greater<std::pair<Dtype, int> >());
          ++nums[label_value];          // count this image toward its class total (once per image, not once per k)
          // check if true label is in top k predictions
          for (int k = 0; k < top_k_; k++) {    // look only at the top_k results
            if (bottom_data_vector[k].second == label_value) {  // true label found: count as correct
              ++accuracy;
              ++accuracies[label_value];    // per-class correct count + 1
              break;
            }
          }
      }
    
      // LOG(INFO) << "Accuracy: " << accuracy;
      top[0]->mutable_cpu_data()[0] = accuracy / num;   // overall accuracy
      for (int i = 0; i < dim; ++i) {       // per-class accuracies
         top[0]->mutable_cpu_data()[i + 1] =
             nums[i] == 0 ? 0 : accuracies[i] / nums[i];   // guard against classes with no samples
      }
      // Accuracy layer should not be used as a loss function.
    }
    INSTANTIATE_CLASS(AccuracyLayer);
    REGISTER_LAYER_CLASS(Accuracy);
    
    }  // namespace caffe

      Finally, run make in the caffe root directory. You get output like the following (note: I have 12 classes, so there are 13 values):

    I0503 21:29:25.707322 14206 caffe.cpp:325] accuracy@1 = 0.857887
    I0503 21:29:25.707332 14206 caffe.cpp:325] accuracy@1 = 0.845481
    I0503 21:29:25.707340 14206 caffe.cpp:325] accuracy@1 = 0.847117
    I0503 21:29:25.707346 14206 caffe.cpp:325] accuracy@1 = 0.786423
    I0503 21:29:25.707353 14206 caffe.cpp:325] accuracy@1 = 0.782536
    I0503 21:29:25.707361 14206 caffe.cpp:325] accuracy@1 = 0.85791
    I0503 21:29:25.707370 14206 caffe.cpp:325] accuracy@1 = 0.944581
    I0503 21:29:25.707378 14206 caffe.cpp:325] accuracy@1 = 0.891931
    I0503 21:29:25.707386 14206 caffe.cpp:325] accuracy@1 = 0.926242
    I0503 21:29:25.707392 14206 caffe.cpp:325] accuracy@1 = 0.919357
    I0503 21:29:25.707399 14206 caffe.cpp:325] accuracy@1 = 0.909317
    I0503 21:29:25.707406 14206 caffe.cpp:325] accuracy@1 = 0.912399
    I0503 21:29:25.707414 14206 caffe.cpp:325] accuracy@1 = 0.704083
    I0503 21:29:25.707427 14206 caffe.cpp:325] loss_fc8 = 0.603455 (* 1 = 0.603455 loss)
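
    The behaviour of the modified layer can be mirrored in a short Python sketch (an illustration only; `per_class_accuracy` is a name chosen here, and the first returned element is the overall accuracy, like top[0] above):

    ```python
    def per_class_accuracy(scores, labels, num_classes, top_k=1):
        """Overall top-k accuracy followed by one accuracy per class."""
        correct = [0] * num_classes   # correct predictions per class
        totals = [0] * num_classes    # images per class
        hits = 0
        for s, y in zip(scores, labels):
            totals[y] += 1
            # classes of the top_k highest scores
            top = sorted(range(num_classes), key=lambda c: s[c], reverse=True)[:top_k]
            if y in top:
                hits += 1
                correct[y] += 1
        per_class = [correct[c] / totals[c] if totals[c] else 0.0
                     for c in range(num_classes)]
        return [hits / len(labels)] + per_class

    scores = [[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]]   # 3 images, 2 classes
    labels = [0, 1, 1]
    print(per_class_accuracy(scores, labels, num_classes=2))
    # overall ≈ 0.667, class 0 → 1.0, class 1 → 0.5
    ```

    As in the modified layer, a class with zero validation images reports 0 rather than dividing by zero.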
  • Recall, Precision, Accuracy, false alarms, missed detections, and other classification metrics. Suppose the original samples contain two classes: (1) P samples of class 1, which we take as the positive class; (2) N samples of class 0, which we take as...
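    With the positive/negative setup above, these metrics reduce to simple ratios over confusion-matrix counts (a sketch with made-up counts):

    ```python
    def classification_metrics(tp, fp, tn, fn):
        """Accuracy, precision, and recall from confusion-matrix counts."""
        accuracy = (tp + tn) / (tp + fp + tn + fn)      # fraction of all samples classified correctly
        precision = tp / (tp + fp) if tp + fp else 0.0  # of predicted positives, how many are real
        recall = tp / (tp + fn) if tp + fn else 0.0     # of real positives, how many were found
        return accuracy, precision, recall

    acc, prec, rec = classification_metrics(tp=40, fp=10, tn=45, fn=5)
    print(acc, prec, rec)  # accuracy 0.85, precision 0.8, recall ≈ 0.889
    ```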
  • An example of plotting loss/accuracy curves with TensorFlow; a useful reference that I hope helps.
  • accuracy: training-set accuracy; val_loss: test-set loss; val_accuracy: test-set accuracy. Five situations to consider: train loss keeps falling and test loss keeps falling: the network is still learning (the best case); train loss keeps falling while test loss...
  • Zhihu: why does validation accuracy sometimes exceed train accuracy in deep learning? Summarizing the answers: the validation set is usually far smaller than the training set, and validation runs on a model that has already been trained for an epoch (after a lot of training and learning...
  • The difference between accuracy and precision in English papers

    10,000+ reads · 2021-03-07 15:14:19
    The difference between accuracy and precision in English papers. When reading English literature, especially where measurement precision is involved, you generally meet two words for precision, "accuracy" and "precision". Translating them with Youdao gives 精度 for both, ...
  • Accuracy of correction in modal
  • Accuracy in caffe

    1,000+ reads · 2017-12-13 12:58:47
    The Accuracy in Caffe is precision, i.e. the number of correctly predicted positives over the total number of predicted positives. I only noticed today, by chance, that caffe computes Accuracy from the output of the final fully connected layer (with no activation function), e.g. alexnet's...
  • The KeyError: 'accuracy' error is caused by the keras version: in older versions of keras the metric is named acc, not accuracy. Fix: replace accuracy with acc.
  • A simplified reinforcement technique for improving test accuracy. HAROLD W. THORPE, CRAIG B. DARCH, University of Wisconsin-...
  • Script for plotting caffe accuracy curves

    2016-11-02 17:17:30
    Plot accuracy, loss, and other curves through caffe's python interface, anytime, anywhere
  • LFW DataBase Accuracy measurement notes (attachment)
  • Accuracy-aware data collection in wireless sensor networks
  • Notes on how accuracy and loss are computed in Pytorch, for anyone who needs them.
  • I hit this problem while learning the keras library. It is probably caused by a version difference; how do you check which version you are using?... loss and val_loss are unchanged, but my acc and val_acc became binary_accuracy and val_binary_ac...
  • accuracy.eval

    千次阅读 2018-07-23 13:45:41
    This function appears in TensorFlow's official MNIST tutorial... tf.Tensor.eval(feed_dict=None, session=None): evaluates (i.e. computes) the value of a tensor inside a Session, first executing the preceding...
  • 2000 High-Accuracy CMOS Smart Temperature Sensors.PDF
  • accuracy_score: the percentage of all classifications that are correct. Classification accuracy is an easy way to judge a classifier, but it tells you nothing about the underlying distribution of the response values or about the kinds of errors the classifier makes. Form: sklearn.metrics....
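    What accuracy_score computes can be shown with a minimal pure-Python equivalent (a simplified sketch of sklearn's behaviour, not the library implementation; sklearn's version also accepts a normalize flag):

    ```python
    def accuracy_score(y_true, y_pred, normalize=True):
        """Fraction (or, with normalize=False, count) of exact label matches."""
        correct = sum(t == p for t, p in zip(y_true, y_pred))
        return correct / len(y_true) if normalize else correct

    print(accuracy_score([0, 1, 2, 2], [0, 2, 2, 2]))                    # 0.75
    print(accuracy_score([0, 1, 2, 2], [0, 2, 2, 2], normalize=False))   # 3
    ```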
  • Accuracy and Stability of Numerical Algorithms
  • High Accuracy Sub-Pixel Image Registration under Noisy Condition
  • Human Mobility Enhances Global Positioning Accuracy for Mobile Phone Localization
  • Improving the Accuracy of Static Analysis Based on State Partition
  • Notes on caffe accuracy

    1,000+ reads · 2016-04-22 15:55:27
    First, look at the definition of AccuracyParameter (in caffe.proto): message AccuracyParameter {  // When computing accuracy, count as correct by comparing the true label to  // the top k scoring classes. By ...
  • Influence maximization, defined as a problem... However, existing algorithms suffer a scalability-accuracy dilemma: conventional greedy algorithms guarantee the accuracy with expensive computation, while
