  • Cross-validation code: MachineLearing-Homework, UESTC MachineLearning course homework, assignment 1 (Matlab). Suppose x = (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20) and y = (2.94, 4.53, 5.96, 7.88, 9.02, 10.94, 12.14, ...
  • matlab cross-validation code: PyTorch DGCNN, a PyTorch implementation of DGCNN (Deep Graph Convolutional Neural Network). Check the project for more information. Requirements: python 2.7 or python 3.6; torch >= 0.4.0. Installation: this implementation is based on Hanjun Dai's structure2vec graph backend; in the "lib/" directory ...
  • Cross-validation code: The Machine Learning Training in HIT. Assignments are published and submitted in this git repository. Related resources: a machine-learning video site (keep pace with the course); matlab download links for windows & linux (link: (elided), password: ad6a); data mining ...
  • Help documentation: crossvalind cross-validation (full text below)

    Help documentation

    crossvalind cross-validation

    Generate cross-validation indices

    Syntax

    Indices = crossvalind('Kfold', N, K) % K-fold cross-validation

    [Train, Test] = crossvalind('HoldOut', N, P)

    [Train, Test] = crossvalind('LeaveMOut', N, M) % leave-M-out cross-validation; M defaults to 1, i.e. leave-one-out

    [Train, Test] = crossvalind('Resubstitution', N, [P,Q])

    [...] = crossvalind(Method, Group, ...)

    [...] = crossvalind(Method, Group, ..., 'Classes', C)

    [...] = crossvalind(Method, Group, ..., 'Min', MinValue)

    Description

    Indices = crossvalind('Kfold', N, K) returns randomly generated indices for a K-fold cross-validation of N observations. Indices contains equal (or approximately equal) proportions of the integers 1 through K that define a partition of the N observations into K disjoint subsets. Repeated calls return different randomly generated partitions. K defaults to 5 when omitted. In K-fold cross-validation, K-1 folds are used for training and the last fold is used for evaluation. This process is repeated K times, leaving one different fold for evaluation each time.
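    For instance, a minimal sketch (assuming N = 10 observations and K = 5):

    indices = crossvalind('Kfold', 10, 5);  % one fold id (1..5) per observation
    accumarray(indices, 1)                  % counts per fold: two observations each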

    [Train, Test] = crossvalind('HoldOut', N, P) returns logical index vectors for cross-validation of N observations by randomly selecting P*N (approximately) observations to hold out for the evaluation set. P must be a scalar between 0 and 1. P defaults to 0.5 when omitted, corresponding to holding 50% out. Using holdout cross-validation within a loop is similar to K-fold cross-validation one time outside the loop, except that non-disjointed subsets are assigned to each evaluation.
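    A minimal sketch (assuming N = 10 and P = 0.3):

    [train, test] = crossvalind('HoldOut', 10, 0.3);
    sum(test)    % about 0.3*10 = 3 observations held out for evaluation
    sum(train)   % the remaining observations are used for training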

    -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

    LeaveMOut

    [Train, Test] = crossvalind('LeaveMOut', N, M), where M is an integer, returns logical index vectors for cross-validation of N observations by randomly selecting M of the observations to hold out for the evaluation set. M defaults to 1 when omitted. Using 'LeaveMOut' cross-validation within a loop does not guarantee disjointed evaluation sets. To guarantee disjointed evaluation sets, use 'Kfold' instead.

    In other words, M of the N observations are randomly held out as the evaluation set and the rest are used for training; with the default M = 1 this is leave-one-out cross-validation. Because each call draws the held-out set independently, evaluation sets from repeated calls in a loop may overlap; use 'Kfold' when disjoint evaluation sets are required.

    Approximate a leave-one-out prediction error estimate:

    load carbig

    x = Displacement; y = Acceleration;

    % x is the engine displacement; y is the time to accelerate from 0 to 60 mph

    N = length(x);

    % N = length(x) = 406

    sse = 0;

    for i = 1:100

    [train,test] = crossvalind('LeaveMOut',N,1);

    yhat = polyval(polyfit(x(train),y(train),2),x(test));

    sse = sse + sum((yhat - y(test)).^2);

    end

    CVerr = sse / 100

    % CVerr is the cross-validation error sse/100; since the held-out observation is drawn at random, the value varies from run to run

    CVerr =

    4.9750

    polyfit(x(train),y(train),2) fits a polynomial of degree 2, with x as the abscissa and y as the ordinate.

    polyfit returns a row vector of polynomial coefficients, ordered from the highest degree to the lowest (here degrees 2, 1, 0).

    y = polyval(p,x)

    returns the value of the degree-n polynomial at x. The input p is a vector of length n+1 whose elements are the polynomial coefficients in descending powers:

    y = p(1)*x^n + p(2)*x^(n-1) + ... + p(n)*x + p(n+1)

    x can be a matrix or a vector; in either case, polyval evaluates the polynomial p at every element of x.

    The loop accumulates the sum of squared errors (the square of the predicted value minus the observed value).

    The cross-validation is performed 100 times, so dividing by 100 gives the mean squared error of a single run.

    The smaller the model's mean squared error, the better the fit.
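    A tiny check of the coefficient ordering, with made-up data:

    p = polyfit([1 2 3], [1 4 9], 2)   % fits y = x^2 exactly: p is approximately [1 0 0]
    polyval(p, 4)                      % evaluates to (about) 16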

    carbig.mat holds statistics on cars from various countries, 406 cars in total.


    Here:

    Acceleration: time to accelerate from 0 to 60 mph

    Cylinders: number of cylinders

    Displacement: engine displacement

    Horsepower: engine horsepower

    MPG: miles per gallon

    Model: car model

    Model_year: model year

    Origin: where the car was made

    Weight: weight of the car

    Some of these are numeric variables and some are character variables.

    A short console session illustrating the logical index vectors:

    >> x=1:10

    x =

    1 2 3 4 5 6 7 8 9 10

    >> y=sin(x)

    y =

    Columns 1 through 8

    0.8415 0.9093 0.1411 -0.7568 -0.9589 -0.2794 0.6570 0.9894

    Columns 9 through 10

    0.4121 -0.5440

    >> [train,test]=crossvalind('LeaveMOut',10,1);

    >> train

    train =

    1

    1

    1

    1

    0

    1

    1

    1

    1

    1

    >> test

    test =

    0

    0

    0

    0

    1

    0

    0

    0

    0

    0

    >> [train,test]=crossvalind('LeaveMOut',10,2);

    >> train

    train =

    1

    0

    1

    1

    1

    1

    1

    1

    1

    0

    >> test

    test =

    0

    1

    0

    0

    0

    0

    0

    0

    0

    1
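    Since the two outputs are complementary, a quick sanity check (a sketch continuing the session above):

    >> all(xor(train, test))   % every observation is in exactly one of the two sets

    ans =

    1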

    ----------------------------------------------------------------------------

    [Train, Test] = crossvalind('Resubstitution', N, [P,Q]) returns logical index vectors of indices for cross-validation of N observations by randomly selecting P*N observations for the evaluation set and Q*N observations for training. Sets are selected in order to minimize the number of observations that are used in both sets. P and Q are scalars between 0 and 1. Q=1-P corresponds to holding out (100*P)%, while P=Q=1 corresponds to full resubstitution. [P,Q] defaults to [1,1] when omitted.
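    A minimal sketch (assuming N = 20 and [P,Q] = [0.8, 1]):

    [train, test] = crossvalind('Resubstitution', 20, [0.8, 1]);
    sum(train)   % 20: every observation is used for training (Q = 1)
    sum(test)    % 16: floor(0.8*20) observations are reused for evaluation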

    [...] = crossvalind(Method, Group, ...) takes the group structure of the data into account. Group is a grouping vector that defines the class for each observation. Group can be a numeric vector, a string array, or a cell array of strings. The partition of the groups depends on the type of cross-validation: For K-fold, each group is divided into K subsets, approximately equal in size. For all others, approximately equal numbers of observations from each group are selected for the evaluation set. In both cases the training set contains at least one observation from each group.
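    A sketch of this stratified behavior, assuming three classes of 30 observations each:

    group = [ones(30,1); 2*ones(30,1); 3*ones(30,1)];
    indices = crossvalind('Kfold', group, 5);
    accumarray([indices group], 1)   % 5-by-3 table of counts: six observations of each class in every fold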

    [...] = crossvalind(Method, Group, ..., 'Classes', C) restricts the observations to only those values specified in C. C can be a numeric vector, a string array, or a cell array of strings, but it is of the same form as Group. If one output argument is specified, it contains the value 0 for observations belonging to excluded classes. If two output arguments are specified, both will contain the logical value false for observations belonging to excluded classes.

    [...] = crossvalind(Method, Group, ..., 'Min', MinValue) sets the minimum number of observations that each group has in the training set. Min defaults to 1. Setting a large value for Min can help to balance the training groups, but adds partial resubstitution when there are not enough observations. You cannot set Min when using K-fold cross-validation.
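    A sketch of the 'Min' option, assuming two small groups of four observations each:

    group = [ones(4,1); 2*ones(4,1)];
    [train, test] = crossvalind('LeaveMOut', group, 1, 'Min', 3);
    % each group keeps at least three observations in the training set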

    Examples

    Create a 10-fold cross-validation to compute classification error.

    load fisheriris

    indices = crossvalind('Kfold',species,10);

    cp = classperf(species);

    for i = 1:10

    test = (indices == i); train = ~test;

    class = classify(meas(test,:),meas(train,:),species(train,:));

    classperf(cp,class,test)

    end

    cp.ErrorRate

    Divide cancer data 60/40 without using the 'Benign' observations. Assume groups are the true labels of the observations.

    labels = {'Cancer','Benign','Control'};

    groups = labels(ceil(rand(100,1)*3));

    [train,test] = crossvalind('holdout',groups,0.6,'classes',...

    {'Control','Cancer'});

    sum(test) % Total groups allocated for testing

    ans =

    35

    sum(train) % Total groups allocated for training

    ans =

    26

    Function source:

    function [tInd,eInd] = crossvalind(method,N,varargin)

    %CROSSVALIND generates cross-validation indices

    %

    % INDICES = CROSSVALIND('Kfold',N,K) returns randomly generated indices

    % for a K-fold cross-validation of N observations. INDICES contains equal

    % (or approximately equal) proportions of the integers 1 through K that

    % define a partition of the N observations into K disjoint subsets. K

    % defaults to 5 when omitted. In K-fold cross-validation, K-1 folds are

    % used for training and the last fold is used for evaluation, leaving a

    % different fold for evaluation each time.

    %

    % [TRAIN,TEST] = CROSSVALIND('HoldOut',N,P) returns logical index vectors

    % for cross-validation of N observations by randomly selecting P*N

    % (approximately) observations to hold out for the evaluation set. P must

    % be a scalar between 0 and 1, and defaults to 0.5 when omitted.

    %

    % [TRAIN,TEST] = CROSSVALIND('LeaveMOut',N,M), where M is an integer,

    % returns logical index vectors for cross-validation of N observations by

    % randomly selecting M of the observations to hold out for the evaluation

    % set. M defaults to 1 when omitted. Using LeaveMOut cross-validation

    % within a loop does not guarantee disjointed evaluation sets. Use K-fold

    % instead.

    % [TRAIN,TEST] = CROSSVALIND('Resubstitution',N,[P,Q]) returns logical

    % index vectors of indices for cross-validation of N observations by

    % randomly selecting P*N observations for the evaluation set and Q*N

    % observations for training. Sets are selected in order to minimize the

    % number of observations that are used in both sets. P and Q are scalars

    % between 0 and 1. Q=1-P corresponds to holding out (100*P)%, while P=Q=1

    % corresponds to full resubstitution. [P,Q] defaults to [1,1] when omitted.

    %

    % [...] = CROSSVALIND(METHOD,GROUP,...) takes the group structure of the

    % data into account. GROUP is a grouping vector that defines the class for

    % each observation. GROUP can be a numeric vector, a string array, or a

    % cell array of strings. The partition of the groups depends on the type

    % of cross-validation: For K-fold, each group is divided into K subsets,

    % approximately equal in size. For all others, approximately equal

    % numbers of observations from each group are selected for the evaluation

    % set. In both cases the training set will contain at least one

    % observation from each group.

    %

    % [...] = CROSSVALIND(METHOD,GROUP,...,'CLASSES',C) restricts the

    % observations to only those values specified in C. C can be a numeric

    % vector, a string array, or a cell array of strings, but it is of the

    % same form as GROUP. If one output argument is specified, it will

    % contain the value 0 for observations belonging to excluded classes. If

    % two output arguments are specified, both will contain the logical value

    % false for observations belonging to excluded classes.

    %

    % [...] = CROSSVALIND(METHOD,GROUP,...,'MIN',MIN) sets the minimum number

    % of observations that each group has in the training set. MIN defaults

    % to 1. Setting a large value for MIN can help to balance the training

    % groups, but adds partial resubstitution when there are not enough

    % observations. You cannot set MIN when using K-fold cross-validation.

    %

    % Examples:

    %

    % % Create a 10-fold cross-validation to compute classification error.

    % (Shuffle the samples and split them evenly into K parts; in turn train on

    % K-1 parts and validate on the remaining part, accumulating the squared

    % prediction error. The average of the K error sums serves as the basis for

    % selecting the best model structure. Here K = 10; in the special case

    % K = N, this is the leave-one-out method.)

    %

    % load fisheriris

    % indices = crossvalind('Kfold',species,10);

    % cp = classperf(species);

    % for i = 1:10

    % test = (indices == i); train = ~test;

    % class = classify(meas(test,:),meas(train,:),species(train,:));

    % classperf(cp,class,test)

    % end

    % cp.ErrorRate

    %

    % % Approximate a leave-one-out prediction error estimate.

    % load carbig

    % x = Displacement; y = Acceleration;

    % N = length(x);

    % sse = 0;

    % for i = 1:100

    % [train,test] = crossvalind('LeaveMOut',N,1);

    % yhat = polyval(polyfit(x(train),y(train),2),x(test));

    % sse = sse + sum((yhat - y(test)).^2);

    % end

    % CVerr = sse / 100

    %

    % % Divide cancer data 60/40 without using the 'Benign' observations.

    % % Assume groups are the true labels of the observations.

    % labels = {'Cancer','Benign','Control'};

    % groups = labels(ceil(rand(100,1)*3));

    % [train,test] = crossvalind('holdout',groups,0.6,'classes',...

    % {'Control','Cancer'});

    % sum(test) % Total groups allocated for testing

    % sum(train) % Total groups allocated for training

    %

    % See also CLASSPERF, CLASSIFY, GRP2IDX, KNNCLASSIFY, SVMCLASSIFY.

    % References:

    % [1] Hastie, T. Tibshirani, R, and Friedman, J. (2001) The Elements of

    % Statistical Learning, Springer, pp. 214-216.

    % [2] Theodoridis, S. and Koutroumbas, K. (1999) Pattern Recognition,

    % Academic Press, pp. 341-342.

    % Copyright 2003-2008 The MathWorks, Inc.

    % $Revision: 1.1.10.5 $ $Date: 2008/06/16 16:32:40 $

    % set defaults

    classesProvided = false;

    MG = 1; % default for minimum number of observations for every training group

    P = 0.5; % default value for holdout

    K = 5; % default value for Kfold

    M = 1; % default value for leave-M-out

    Q = [1 1];% default value for resubstitution

    % get and validate the method (first input)

    if ischar(method) && size(method,1)==1

    validMethods = {'holdout','kfold','resubstitution','leavemout'};

    method = strmatch(lower(method),validMethods);

    if isempty(method)

    error('Bioinfo:crossvalind:NotValidMethod',...

    'Not a valid method.')

    end

    method = validMethods{method};

    else

    error('Bioinfo:crossvalind:NotValidTypeForMethod',...

    'Valid methods are ''KFold'', ''HoldOut'', ''LeaveMOut'', or ''Resubstitution''.')

    end

    if nargout>1 && isequal(method,'kfold')

    error('Bioinfo:crossvalind:TooManyOutputArgumentsForKfold',...

    'Too many output arguments for Kfold cross-validation.')

    end

    % take P,K,Q, or M if provided by the third input (first varargin) and

    % validate it

    if numel(varargin) && isnumeric(varargin{1})

    S = varargin{1};

    varargin(1)=[];

    switch method

    case 'holdout'

    if numel(S)==1 && S>0 && S<1

    P = S;

    else

    error('Bioinfo:crossvalind:InvalidThirdInputP',...

    'For hold-out cross-validation, the third input must be a scalar between 0 and 1.');

    end

    case 'kfold'

    if numel(S)==1 && S>=1

    K = round(S);

    else

    error('Bioinfo:crossvalind:InvalidThirdInputK',...

    'For Kfold cross-validation, the third input must be a positive integer.');

    end

    case 'leavemout'

    if numel(S)==1 && S>=1

    M = round(S);

    else

    error('Bioinfo:crossvalind:InvalidThirdInputM',...

    'For leave-M-out cross-validation, the third input must be a positive integer.');

    end

    case 'resubstitution'

    if numel(S)==2 && all(S>0) && all(S<=1)

    Q = S(:);

    else

    error('Bioinfo:crossvalind:InvalidThirdInputQ',...

    'For resubstitution cross-validation, the third input must be a 2x1 vector with values between 0 and 1.');

    end

    end %switch

    end

    % read optional paired input arguments in

    if numel(varargin)

    if rem(numel(varargin),2)

    error('Bioinfo:crossvalind:IncorrectNumberOfArguments',...

    'Incorrect number of arguments to %s.',mfilename);

    end

    okargs = {'classes','min'};

    for j=1:2:numel(varargin)

    pname = varargin{j};

    pval = varargin{j+1};

    k = find(strncmpi(pname, okargs,length(pname)));

    if isempty(k)

    error('Bioinfo:crossvalind:UnknownParameterName',...

    'Unknown parameter name: %s.',pname);

    elseif length(k)>1

    error('Bioinfo:crossvalind:AmbiguousParameterName',...

    'Ambiguous parameter name: %s.',pname);

    else

    switch(k)

    case 1 % classes

    classesProvided = true;

    classes = pval;

    case 2 % min

    MG = round(pval(1));

    if MG<0

    error('Bioinfo:crossvalind:NotValidMIN',...

    'MIN must be a positive scalar.')

    end

    end

    end

    end

    end

    if isscalar(N) && isnumeric(N)

    if N<1 || N~=floor(N)

    error('Bioinfo:crossvalind:NNotPositiveInteger',...

    'The number of observations must be a positive integer.');

    end

    group = ones(N,1);

    else

    [group, groupNames] = grp2idx(N); % at this point group is numeric only

    N = numel(group);

    end

    if classesProvided

    orgN = N;

    % change classes to same type as groups

    [dummy,classes]=grp2idx(classes);

    validGroups = intersect(classes,groupNames);

    if isempty(validGroups)

    error('bioinfo:crossvalind:EmptyValidGroups',...

    'Could not find any valid group. Are CLASSES the same type as GROUP ?')

    end

    selectedGroups = ismember(groupNames(group),validGroups);

    group = grp2idx(group(selectedGroups)); % group idxs are reduced to only the sel groups

    N = numel(group); % the new size of the reduced vector

    end

    nS = accumarray(group(:),1);

    if min(nS) < MG

    error('Bioinfo:crossvalind:MissingObservations',...

    'All the groups must have at least MIN observation(s).')

    end

    switch method

    case {'leavemout','holdout','resubstitution'}

    switch method

    case 'leavemout'

    % number of samples for holdout in every group

    nSE = repmat(M,numel(nS),1);

    % at least there is MG sample(s) for training in every group

    nST = max(nS-nSE,MG);

    case 'holdout'

    % computes the number of samples for holdout in every group

    nSE = floor(nS*P);

    % at least there is MG sample(s) for training in every group

    nST = max(nS-nSE,MG);

    case 'resubstitution'

    % computes the number of samples for training and evaluation

    nSE = floor(nS*Q(1));

    nST = floor(nS*Q(2));

    % at least there is MG sample(s) for training in every group

    nST = max(nST,MG);

    end

    % Initializing the outputs

    tInd = false(N,1);

    eInd = false(N,1);

    % for every group select randomly the samples for both sets

    for g = 1:numel(nS)

    h = find(group==g);

    randInd = randperm(nS(g));

    tInd(h(randInd(1:nST(g))))=true;

    eInd(h(randInd(end-nSE(g)+1:end)))=true;

    end

    case 'kfold'

    tInd = zeros(N,1);

    for g = 1:numel(nS)

    h = find(group==g);

    % compute fold id's for every observation in the group

    q = ceil(K*(1:nS(g))/nS(g));

    % and permute them to try to balance among all groups

    pq = randperm(K);

    % randomly assign the id's to the observations of this group

    randInd = randperm(nS(g));

    tInd(h(randInd))=pq(q);

    end

    end

    if classesProvided

    if isequal(method,'kfold')

    temp = zeros(orgN,1);

    temp(selectedGroups) = tInd;

    tInd = temp;

    else

    temp = false(orgN,1);

    temp(selectedGroups) = tInd;

    tInd = temp;

    temp = false(orgN,1);

    temp(selectedGroups) = eInd;

    eInd = temp;

    end

    end

  • matlab cross-validation code: HLearn. HLearn is a high-performance machine-learning library written in Haskell. For example, it currently has the fastest nearest-neighbor implementation for arbitrary metric spaces (see the references). HLearn is also a research project; the research goal is to discover machine learning's "best possible" ...
  • matlab cross-validation code: loo. Efficient approximate leave-one-out cross-validation for fitted Bayesian models. loo is an R package that lets users compute efficient approximate leave-one-out cross-validation for fitted Bayesian models, as well as model weights that can be used to average predictive distributions. The loo package is packaged as ...
  • Cross-validation MATLAB code

    2014-10-27 15:06:38
    MATLAB code for cross-validation; it works well for validating data.
  • It can be viewed as a novel sequential, predictive implementation of K-fold cross-validation. PEMF takes as input a model trainer (e.g. RBF-multiquadric or Kriging-Linear), the sample data used to train the model, and the hyperparameter values applied to the model (e.g. the shape factor in RBF). ...
  • matlab code for non-parametric kernel density estimation, with cross-validation. In my current course "Data Analysis and Interpretation", our instructor is an image-processing expert and we have done several interesting assignments on this topic, implementing them in MATLAB. One of them is a PDF estimator, ...
  • Cross-validation code. Preface: below is my rough reproduction of the first half, the text localization and detection part, of the paper Reading Text in the Wild with Convolutional Neural Networks. The data set used is ICDAR 2011; quite a few people say ICDAR 2011 ...
  • A cross-validation procedure prepared for multiple regression.
  • Uses 10-fold cross-validation to improve classification accuracy; the classifier's discriminant function can be replaced with linear, quadratic, or rbf functions.
  • [PearsonR, PearsonP, SpearmanR, SpearmanP, yhat, R2] = BenStuff_CrossValCorr(x, y, [MathMagic], [OmNullModel]): simple linear regression with leave-one-out cross-validation. Input variables: x, y: data vectors (x(n) and y(n) form one pair of observations) ...
  • Q&A: "Index vector has invalid values" when updating classperf in K-fold cross-validation

    I am trying to perform cross-validation on a model with multiple classification classes, but I get an error when trying to update my classperf object on each fold. The error is "Index vector has invalid values."

    My code is as follows:

    K = 5;

    N = size(DataSet, 1);

    idx = crossvalind('Kfold', N, K);

    cp = classperf(trainLabel);

    for i = 1:K

    ...

    %Long codes for svmtrain & svmclassify

    ...

    cp = classperf(cp, Group, trueTestLabel); %error on this line

    end

    cp.CorrectRate

    Here trainLabel is a 120-by-1 double containing the ground-truth scores of all items; Group is the result obtained from svmclassify, a 20-by-1 double; and trueTestLabel contains the ground-truth scores of the test classes as a 20-by-1 double, obtained with:

    trueTestLabel = trainLabel(idx == i, end);

    I have tried converting Group and trueTestLabel to cell arrays using num2cell:

    cp = classperf(cp, num2cell(Group), num2cell(trueTestLabel));

    but then I get a different error saying "When the class labels of the CP object are numeric, the output of the classifier must be all non-negative integers or NaN's."
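    No answer appears in the thread, but judging from the classperf(cp, classout, testidx) signature documented above, the third argument should identify the test observations rather than hold their true labels. A likely fix (a sketch using the asker's variable names):

    test = (idx == i);                 % logical indices of the current test fold
    % ... svmtrain / svmclassify as in the question ...
    cp = classperf(cp, Group, test);   % pass test indices, not the true labels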

  • Using cross-validation in Matlab

    2018-09-25 14:41:39

    When doing machine learning you often need cross-validation to allocate the data, so I am noting it down here. Cross-validation splits a data set into K parts: one part is taken as the test set and the remaining K-1 parts as the training set; then another part becomes the test set with the rest as the training set, and so on, until every part has served as the test set. 10-fold cross-validation is the most commonly used; the code is as follows:

    clc
    clear all
    % load the data
    data = load('F:\work_matlab\Matlab\wdbc.txt');
    [data_r, data_c] = size(data);
    % randomly split the data samples into 10 parts
    indices = crossvalind('Kfold', data_r, 10);
    for i = 1 : 10
        % logical indices of the i-th part, used as test data
        test = (indices == i);
        % negate to get the logical indices of the training data
        train = ~test;
        % 1 part for testing, 9 parts for training
        test_data = data(test, 1 : data_c - 1);
        test_label = data(test, data_c);

        train_data = data(train, 1 : data_c - 1);
        train_label = data(train, data_c);
        % code that uses the data goes here
    end

    The cross-validation function is crossvalind: its second argument is the number of observations to cross-validate, and its third argument is the number of parts to split them into. If you are curious, open the test variable and you will see that it holds logical values.

    Note that the data used here is classification data, with the class label stored as the last element of each sample. The code separates the feature values from the labels so that the features can be normalized without also normalizing the class labels (see the sketch below). Use it as you need: once the data is allocated, add your own code.
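    For that normalization, one leakage-free pattern (a sketch, not from the original post; it assumes the Deep Learning Toolbox's mapminmax is available) is to fit the scaling on the training fold only and then apply it to the test fold:

    % mapminmax scales each row, so transpose to put features in rows
    [train_n, ps] = mapminmax(train_data');
    test_n = mapminmax('apply', test_data', ps);
    train_data = train_n'; test_data = test_n';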

     

  • Provides cross-validation neural-network matlab code for study.
  • Matlab 10-fold cross-validation

    2021-04-18 04:44:00

    10-fold cross-validation

    (1) 10-fold cross-validation is a commonly used method for testing the accuracy of an algorithm.

    (2) The data set is split into ten parts; in turn, 9 parts are used as training data and 1 part as test data for each trial. Each trial yields an accuracy (or error) rate.

    (3) The mean accuracy (or error) rate over the 10 runs is the estimate of the algorithm's precision. Usually the 10-fold cross-validation is itself repeated several times (for example, ten 10-fold cross-validations), and the mean of those results is taken as the final estimate of the algorithm's accuracy.

    Example: compute the misclassification rate with 10-fold cross-validation.

    (Matlab ships with the iris data set on the classification of iris flowers, published by Fisher in 1936; see the UCI link. Loading it provides meas and species: 150 four-dimensional samples and their corresponding classes.)

    load fisheriris

    indices = crossvalind('Kfold',species,10);

    cp = classperf(species);

    for i = 1:10

    test = (indices == i); train = ~test;

    %take parts 1, 2, ..., 10 in turn as the test set and the rest as the training set

    class = classify(meas(test,:),meas(train,:),species(train,:));

    classperf(cp,class,test);

    end

    cp.ErrorRate

    %query the misclassification rate

    Notes on the functions involved:

    Indices = crossvalind('Kfold', N, K)

    1) The 'Kfold' option randomly splits the data set N into K (approximately) equal parts for K-fold cross-validation; Indices records, for each sample, the part (1 to K) it belongs to.

    2) Because the split is random, repeated calls produce different partitions.

    3) In K-fold cross-validation, K-1 parts are used for training and the remaining part for testing; the process is repeated K times.

    cp = classperf(truelabels)

    1) classperf is a function for evaluating the performance of a classifier.

    2) truelabels holds the true class of each sample; the call creates and initializes an empty classifier performance object CP.

    3) classperf provides an interface to keep track of the performance during the validation of classifiers. classperf creates and, optionally, updates a classifier performance object, CP, which accumulates the results of the classifier.

    class = classify(sample,training,group)

    1) classify is a discriminant analysis function.

    2) Discriminant analysis is used when the classes are established in advance; when they are not, cluster analysis is used instead. In general, given observations from several known populations (i.e. several classes), one hopes to construct one or more discriminant functions that can assign a new sample of unknown origin to one of the populations; this is the discriminant analysis problem.

    3) Discriminant analysis uses the existing class information to derive a discriminant function (typically a linear combination of the variables relevant to the classification) and then uses that function to decide which class an unknown sample belongs to, so it is a learn-then-predict procedure. Common methods include distance discriminant, Fisher discriminant, and Bayes discriminant analysis.

    4) Matlab syntax: class = classify(sample,training,group) performs linear discriminant analysis by default, assigns each sample in sample to one of the classes specified by training, and returns the resulting class list. The type parameter selects a different discriminant method, as sketched below.
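    For example (a sketch; 'quadratic' is one of the documented type values):

    class = classify(sample, training, group, 'quadratic');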

    classperf(cp, classout, testidx)

    1) Updates the classifier performance object CP with the classification results classout for the test observations testidx.

    2) In 10-fold cross-validation this is repeated 10 times, accumulating the overall misclassification rate.

  • I. How to call SVM from MATLAB. The reason I cover this is that SVM is the most widely used tool in machine-vision algorithms, yet matlab's built-in svm toolbox can only separate two classes and cannot use cross-validation to choose suitable parameters; in normal use ...
  • Here is my take on this cross-validation. I create dummy data with magic(10) and also create random labels. The idea is as follows: we take our data and labels and combine them with a random column. Consider the following dummy code. >> data = magic(4) data = 16 2 3 13 5 11 10 8 9 7 6 ...
  • crossvalind Matlab cross-validation

    2018-02-01 11:19:41
    Original post: crossvalind Matlab cross-validation, by lujingyang1029. Today I used crossvalind, which is used for cross-validation. I mainly want to talk about this ...
  • % Note: below is matlab code I wrote myself; matlab actually has its own cross-validation routine, crossvalind. See the code provided with Chunhou Zheng's Metasample Based Sparse Representation for Tumor. % Note: Main_gene10FOLD_1.m has it; the usage is very simple ...
  • Notes. Below is the code; the comments should generally be understandable. % datas is the loaded data set, labels the loaded labels. % normalize the data: [datas_normal] = premnmx(datas); ... % cross-validation using 10-fold (Kfold) % indices is an m-by-1 vector indicating which part each training sample belongs to ...
  • When applying a Gaussian kernel in SVM, parameter selection is needed; this program performs the parameter selection.
  • matlab autocorrelation code: MVPA with cross-validated MANOVA. A Matlab implementation of the method introduced by Carsten Allefeld and John-Dylan Haynes, "Searchlight-based multi-voxel pattern analysis of fMRI by cross-validated MANOVA". Prerequisites: the cross-validated MANOVA ...
  • How to use the cross-validation function in MATLAB

    2018-01-28 14:14:42
    I have been busy with my thesis this week, so let me post the method I used a while ago for the cross-validation function in MATLAB. Cross-validation is a randomized, cyclic validation method that splits the data sample into several subsets; it is mainly used to assess the generalization ability of statistical analyses or machine-learning algorithms ...
  • KNN classification with 10-fold cross-validation (matlab program)

    2013-09-04 09:26:36
    Matlab source code for k-nearest-neighbor classification using 10-fold cross-validation.
  • Cross-validation: matlab implementation

    2016-05-02 19:15:33
    Reposted from: ... (reproduces the crossvalind documentation shown above)
  • Last edited by azure_sky on 2014-1-17 00:30. 2) K-fold Cross Validation (K-CV): split the original data into K groups (usually evenly); use each subset once as the validation set with the remaining K-1 subsets as the training set. This yields K models, and these K models ...
