  • Confusion

    2019-10-25 16:34:21
    Confusion
  • confusion

    2011-12-04 02:09:14

      I recently read a fellow blogger's posts about internships and jobs, and they made me feel that real technical skill is what counts, hah~ They have shaken my idea of going abroad. When will I ever have a mature plan...

      I have to choose one of the two. I really have no idea what comes next, hah. Or maybe the two are not actually in conflict. On to something else...

      When will YouTube be usable in China! The few episodes of 周末夜生活 I downloaded made me feel deeply how un-entertaining the Chinese entertainment business is; it is simply a tragedy. Sigh~

  • ConFusion - source code

    2021-03-19 02:14:54
    ConFusion
  • confusion matrix

    2019-07-06 18:31:01

    confusion matrix

    https://scikit-learn.org/stable/modules/model_evaluation.html

    confusion [kən'fjuːʒ(ə)n]: n. mixing up; disorder; bewilderment
    

    The confusion_matrix function evaluates classification accuracy by computing the confusion matrix, with each row corresponding to the true class (Wikipedia and other references may use a different convention for the axes).

    By definition, entry (i, j) in a confusion matrix is the number of observations actually in group i but predicted to be in group j. Here is an example:

    >>> from sklearn.metrics import confusion_matrix
    >>> y_true = [2, 0, 2, 2, 0, 1]
    >>> y_pred = [0, 0, 2, 2, 0, 2]
    >>> confusion_matrix(y_true, y_pred)
    array([[2, 0, 0],
           [0, 0, 1],
           [1, 0, 2]])
    
    #!/usr/bin/env python
    # -*- coding: utf-8 -*-
    
    from __future__ import absolute_import
    from __future__ import division
    from __future__ import print_function
    
    from sklearn.metrics import confusion_matrix
    
    # Rows correspond to the true class, columns to the predicted class.
    y_true = [2, 0, 2, 2, 0, 1]
    y_pred = [0, 0, 2, 2, 0, 2]
    print(confusion_matrix(y_true, y_pred))
    
    strong@foreverstrong:~/git_workspace/MonoGRNet$ python yongqiang.py 
    [[2 0 0]
     [0 0 1]
     [1 0 2]]
    strong@foreverstrong:~/git_workspace/MonoGRNet$
    

    Here is a visual representation of such a confusion matrix (this figure comes from the Confusion matrix example):

    [Figure: visualization of the confusion matrix, from the scikit-learn Confusion matrix example]

    For binary problems, we can get counts of true negatives, false positives, false negatives and true positives as follows:

    >>> y_true = [0, 0, 0, 1, 1, 1, 1, 1]
    >>> y_pred = [0, 1, 0, 1, 0, 1, 0, 1]
    >>> tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    >>> tn, fp, fn, tp
    (2, 1, 2, 3)
    

    References

    https://en.wikipedia.org/wiki/Confusion_matrix

  • confusion matrix

    2010-04-13 23:13:31
    about confusion matrix
  • ClientWebStack_conFusion
  • Confusion Matrix

    2018-08-22 22:38:05

    Time is short; I will translate this properly when I find the time.

    #### https://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/
    March 25, 2014 · machine learning

    Simple guide to confusion matrix terminology

    A confusion matrix is a table that is often used to describe the performance of a classification model (or “classifier”) on a set of test data for which the true values are known. The confusion matrix itself is relatively simple to understand, but the related terminology can be confusing.

    I wanted to create a “quick reference guide” for confusion matrix terminology because I couldn’t find an existing resource that suited my requirements: compact in presentation, using numbers instead of arbitrary variables, and explained both in terms of formulas and sentences.

    Let’s start with an example confusion matrix for a binary classifier (though it can easily be extended to the case of more than two classes):

    [Figure: example confusion matrix for a binary classifier]

    What can we learn from this matrix?

    • There are two possible predicted classes: “yes” and “no”. If we were predicting the presence of a disease, for example, “yes” would mean they have the disease, and “no” would mean they don’t have the disease.
    • The classifier made a total of 165 predictions (e.g., 165 patients were being tested for the presence of that disease).
    • Out of those 165 cases, the classifier predicted “yes” 110 times, and “no” 55 times.
    • In reality, 105 patients in the sample have the disease, and 60 patients do not.

    Let’s now define the most basic terms, which are whole numbers (not rates):

    • true positives (TP): These are cases in which we predicted yes (they have the disease), and they do have the disease.
    • true negatives (TN): We predicted no, and they don’t have the disease.
    • false positives (FP): We predicted yes, but they don’t actually have the disease. (Also known as a “Type I error.”)
    • false negatives (FN): We predicted no, but they actually do have the disease. (Also known as a “Type II error.”)

    I’ve added these terms to the confusion matrix, and also added the row and column totals:

    [Figure: the example confusion matrix with TP/TN/FP/FN labels and row and column totals added]

    This is a list of rates that are often computed from a confusion matrix for a binary classifier (a short Python sketch recomputing them from this example follows the list):

    • Accuracy: Overall, how often is the classifier correct?
      • (TP+TN)/total = (100+50)/165 = 0.91
    • Misclassification Rate: Overall, how often is it wrong?
      • (FP+FN)/total = (10+5)/165 = 0.09
      • equivalent to 1 minus Accuracy
      • also known as “Error Rate”
    • True Positive Rate: When it’s actually yes, how often does it predict yes?
      • TP/actual yes = 100/105 = 0.95
      • also known as “Sensitivity” or “Recall”
    • False Positive Rate: When it’s actually no, how often does it predict yes?
      • FP/actual no = 10/60 = 0.17
    • Specificity: When it’s actually no, how often does it predict no?
      • TN/actual no = 50/60 = 0.83
      • equivalent to 1 minus False Positive Rate
    • Precision: When it predicts yes, how often is it correct?
      • TP/predicted yes = 100/110 = 0.91
    • Prevalence: How often does the yes condition actually occur in our sample?
      • actual yes/total = 105/165 = 0.64
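
    To make the arithmetic above concrete, here is a minimal Python sketch (my own, not from the Data School post) that recomputes the same rates from the example counts TP = 100, TN = 50, FP = 10, FN = 5:

    TP, TN, FP, FN = 100, 50, 10, 5      # counts from the example matrix
    total = TP + TN + FP + FN            # 165 predictions in total
    
    accuracy    = (TP + TN) / total      # 0.91
    error_rate  = (FP + FN) / total      # 0.09, equal to 1 - accuracy
    recall      = TP / (TP + FN)         # 0.95, true positive rate / sensitivity
    fpr         = FP / (TN + FP)         # 0.17, false positive rate
    specificity = TN / (TN + FP)         # 0.83, equal to 1 - fpr
    precision   = TP / (TP + FP)         # 0.91
    prevalence  = (TP + FN) / total      # 0.64
    
    print(accuracy, error_rate, recall, fpr, specificity, precision, prevalence)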

    A couple other terms are also worth mentioning (a small sklearn sketch follows this list):

    • Positive Predictive Value: This is very similar to precision, except that it takes prevalence into account. In the case where the classes are perfectly balanced (meaning the prevalence is 50%), the positive predictive value (PPV) is equivalent to precision. (More details about PPV.)
    • Null Error Rate: This is how often you would be wrong if you always predicted the majority class. (In our example, the null error rate would be 60/165=0.36 because if you always predicted yes, you would only be wrong for the 60 “no” cases.) This can be a useful baseline metric to compare your classifier against. However, the best classifier for a particular application will sometimes have a higher error rate than the null error rate, as demonstrated by the Accuracy Paradox.
    • Cohen’s Kappa: This is essentially a measure of how well the classifier performed as compared to how well it would have performed simply by chance. In other words, a model will have a high Kappa score if there is a big difference between the accuracy and the null error rate. (More details about Cohen’s Kappa.)
    • F Score: This is the weighted harmonic mean of the true positive rate (recall) and precision. (More details about the F Score.)
    • ROC Curve: This is a commonly used graph that summarizes the performance of a classifier over all possible thresholds. It is generated by plotting the True Positive Rate (y-axis) against the False Positive Rate (x-axis) as you vary the threshold for assigning observations to a given class. (More details about ROC Curves.)
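
    scikit-learn exposes most of these metrics directly. A minimal sketch, reusing the small label vectors from the sklearn snippet earlier on this page (an ROC curve is normally built from continuous scores, so passing hard 0/1 predictions to roc_auc_score here is only for illustration):

    from sklearn.metrics import cohen_kappa_score, f1_score, roc_auc_score
    
    # Binary labels reused from the earlier confusion_matrix example.
    y_true = [0, 0, 0, 1, 1, 1, 1, 1]
    y_pred = [0, 1, 0, 1, 0, 1, 0, 1]
    
    print(cohen_kappa_score(y_true, y_pred))  # agreement beyond chance
    print(f1_score(y_true, y_pred))           # harmonic mean of precision and recall
    print(roc_auc_score(y_true, y_pred))      # with hard predictions, (TPR + TNR) / 2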

    And finally, for those of you from the world of Bayesian statistics, here’s a quick summary of these terms from Applied Predictive Modeling:

    In relation to Bayesian statistics, the sensitivity and specificity are the conditional probabilities, the prevalence is the prior, and the positive/negative predicted values are the posterior probabilities.
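
    As a quick numeric check of that correspondence (my own arithmetic, not from the book), applying Bayes' rule to the example's sensitivity, specificity and prevalence recovers the precision computed above:

    prevalence  = 105 / 165   # prior P(disease)
    sensitivity = 100 / 105   # P(predicted yes | disease)
    specificity = 50 / 60     # P(predicted no | no disease)
    
    # Posterior P(disease | predicted yes) via Bayes' rule
    ppv = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
    print(ppv)   # 0.909..., the same as the precision (100/110) above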

    What did I miss? Are there any terms that need a better explanation? Your feedback is welcome!


  • deep domain confusion

    2018-10-16 09:16:06
    # DDC-transfer-learning A simple implementation of Deep Domain Confusion: Maximizing for Domain ... Deep domain confusion: Maximizing for domain invariance[J]. arXiv preprint arXiv:1412.3474, 2014.
  • Confusion matrix

    2019-12-19 09:31:05

    Model evaluation with the confusion matrix (confusion_matrix)
    TP (True Positive): a positive sample predicted as positive; in this post the true label is 0 and the prediction is also 0
    FN (False Negative): a positive sample predicted as negative; the true label is 0 and the prediction is 1
    FP (False Positive): a negative sample predicted as positive; the true label is 1 and the prediction is 0
    TN (True Negative): a negative sample predicted as negative; the true label is 1 and the prediction is also 1
    (Note that this post treats label 0 as the positive class and label 1 as the negative class.)
    Definition and meaning of the confusion matrix

    The confusion matrix is the table used in machine learning to summarize the predictions of a classification model: it tallies the records of a data set along two axes, the true class and the class predicted by the model. The rows of the matrix are the true labels and the columns are the predicted labels. Let us start with the binary case; the matrix has the following layout:
    [Figure: layout of a binary confusion matrix]
    Now for an example. Suppose a pet shop has 10 animals, 6 dogs and 4 cats, and a classifier sorts these 10 animals into 5 dogs and 5 cats. We can draw the confusion matrix of this result and analyse it as follows (taking "dog" as the positive class):
    [Figure: confusion matrix for the pet-shop example]
    From the confusion matrix we can easily read off that the true number of dogs (sum of the first row) is 6 = 5 + 1, the number classified as dogs (sum of the first column) is 5 = 5 + 0, the true number of cats is 4 = 0 + 4, and the number classified as cats is 5 = 1 + 4. It is also easy to see that, for a binary problem, the four entries of the matrix are exactly the four quantities TP, FN, FP and TN, as in the figure below:
    [Figure: the four cells of the binary matrix labelled TP, FN, FP, TN]
    For the binary problem, then, precision = a/(a+c) = TP/(TP+FP), recall = a/(a+b) = TP/(TP+FN), and accuracy = (a+d)/(a+b+c+d) = (TP+TN)/(TP+FN+FP+TN); note that the numerator of the accuracy is exactly the sum of the diagonal of the matrix.

    The analysis above covered the binary case; for multi-class problems the confusion matrix has essentially the same meaning. Taking a three-class problem as an example, let us see how to compute the metrics from the confusion matrix.
    [Figure: a three-class confusion matrix with cells labelled a through i]
    As in the binary case, each row sum is the number of samples truly in that class and each column sum is the number of samples assigned to that class, which gives the following formulas (a short Python sketch based on the pet-shop example follows):

    precision_class1 = a/(a+d+g)
    recall_class1 = a/(a+b+c)
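
    Here is a minimal Python sketch (mine, not from the original post) that rebuilds the pet-shop matrix and computes precision and recall per class directly from the column and row sums; the same pattern gives the three-class formulas above:

    import numpy as np
    from sklearn.metrics import confusion_matrix
    
    # Pet-shop example: 6 dogs and 4 cats, "dog" as the positive class;
    # the classifier labels 5 animals as dogs and 5 as cats.
    y_true = ["dog"] * 6 + ["cat"] * 4
    y_pred = ["dog"] * 5 + ["cat"] * 1 + ["cat"] * 4   # one dog is mislabelled as a cat
    
    cm = confusion_matrix(y_true, y_pred, labels=["dog", "cat"])
    print(cm)                                 # [[5 1]
                                              #  [0 4]]  rows = true, columns = predicted
    
    precision = np.diag(cm) / cm.sum(axis=0)  # per class: diagonal / column sum
    recall    = np.diag(cm) / cm.sum(axis=1)  # per class: diagonal / row sum
    accuracy  = np.trace(cm) / cm.sum()       # diagonal total / all samples
    
    print(precision)  # [1.   0.8 ]   dog: 5/5, cat: 4/5
    print(recall)     # [0.833 1.  ]  dog: 5/6, cat: 4/4
    print(accuracy)   # 0.9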

  • Dashboard Confusion

    2009-07-18 03:19:59
    Dashboard Confusion: a fairly good article about dashboards. PDF format, in English, written by a foreign author in 2004.
  • Ruby requires confusion

    2020-05-22 10:18:30
    Ruby requires confusion
  • conFusion-API - source code

    2021-04-17 03:27:13
    conFusion API. Tools used: Express, Node JS, Mongoose, MongoDB (Community Server). To do: authentication, authorization.
  • Plotting a confusion matrix (confusion_matrix) with Python

    2018-04-22 13:46:00
    For classification problems we often need to visualize the confusion matrix to analyse experimental results and work out how to tune parameters. This post shows how to plot a confusion matrix (confusion_matrix) with Python; it provides only the code, with the necessary comments. Code # -*-coding:utf-8-*- from ...
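
    The full code of that post is not reproduced here; as a rough substitute, here is a minimal matplotlib sketch of the usual approach (draw the matrix with imshow and write the counts into the cells):

    # -*- coding: utf-8 -*-
    import matplotlib.pyplot as plt
    from sklearn.metrics import confusion_matrix
    
    # Toy labels; replace with your model's true and predicted labels.
    y_true = [2, 0, 2, 2, 0, 1]
    y_pred = [0, 0, 2, 2, 0, 2]
    cm = confusion_matrix(y_true, y_pred)
    
    fig, ax = plt.subplots()
    im = ax.imshow(cm, cmap="Blues")          # darker cells hold more samples
    fig.colorbar(im, ax=ax)
    
    # Write the raw count into every cell.
    for i in range(cm.shape[0]):
        for j in range(cm.shape[1]):
            ax.text(j, i, cm[i, j], ha="center", va="center")
    
    ax.set_xlabel("Predicted label")
    ax.set_ylabel("True label")
    ax.set_title("Confusion matrix")
    plt.show()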
  • MATLAB development - ConfusionMatrix

    2019-08-23 23:35:46
    MATLAB development - ConfusionMatrix. Computes the confusion matrix for multi-class problems.
  • Confusion control in generalized Petri nets using synchronized events
  • Lottery bring confusion

    2015-03-20 00:07:51
    Lottery bring confusion
  • Program in matlab to compute the confusion matrix.
  • conFusion: learning and practice using Bootstrap 4
  • conFusion: a front-end Web UI framework bootstrap project
  • confusion_matrix

    2019-02-05 22:03:28
    confusion_matrix(y_true, y_pred) array([[1, 2], [2, 3]]) The result can be read as follows: pred & true contain only the two labels 0 and 1, so: the number at position (0, 0) is the count of samples whose true label is 0 and whose predicted label is 0, 1 in this example; the number at position (0, 1) ...
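
    The snippet is cut off above; a minimal sketch with hypothetical labels (chosen here to reproduce that output, not taken from the original post) makes the reading explicit:

    from sklearn.metrics import confusion_matrix
    
    # Hypothetical labels that yield [[1, 2], [2, 3]].
    y_true = [0, 0, 0, 1, 1, 1, 1, 1]
    y_pred = [0, 1, 1, 0, 0, 1, 1, 1]
    
    print(confusion_matrix(y_true, y_pred))
    # [[1 2]   row 0: 1 sample with true 0 predicted 0, 2 with true 0 predicted 1
    #  [2 3]]  row 1: 2 samples with true 1 predicted 0, 3 with true 1 predicted 1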
  • Confusion Matrix: How in the hell can we measure the effectiveness of our model? The better the effectiveness, the better the performance, and that's exactly what we want. And it is where the ...
  • Replace variable names and the like with random strings to obfuscate the original code and protect the code you wrote, making it harder for others to directly ... ** Class name: CLASS_CONFUSION ** Function: JS obfuscation ** Example:

Keyword: confusion