  • 2022-04-26 14:14:23

    Using SVM for classification in MATLAB (works for both binary and multi-class problems)

    1. Simple binary classification

    clear,clc
    
    %% Binary classification
    % Training data, 20x2: 20 rows = 20 training points; column 1 is the x coordinate, column 2 the y coordinate
    Train_Data =[-3 0;4 0;4 -2;3 -3;-3 -2;1 -4;-3 -4;0 1;-1 0;2 2;3 3;-2 -1;-4.5 -4;2 -1;5 -4;-2 2;-2 -3;0 2;1 -2;2 0];
    % Labels, 20x1: the class (+1 or -1) of each training point
    Train_labels =[1 -1 -1 -1 1 -1 1 1 1 -1 -1 1 1 -1 -1 1 1 1 -1 -1]';
    TestData = [3 -1;3 1;-2 1;-1 -2;2 -3;-3 -3]; % test data
    classifier  = fitcsvm(Train_Data,Train_labels); % train
    test_labels  = predict(classifier ,TestData); % predict
    

    Here test_labels is the final classification result; you can follow this format to adapt the code to your own data.
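    As a quick sanity check, if you also know the true labels of the test points (a hypothetical Test_true vector below, not part of the original example), you can compare the predictions against them:

    Test_true = [-1; -1; 1; 1; -1; 1];             % assumed ground truth for TestData
    [pred, score] = predict(classifier, TestData); % score(:,2) is the score for class +1
    fprintf('test accuracy: %.1f%%\n', 100*mean(pred == Test_true));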

    2. Multi-class classification (without a toolbox)

    Because fitcsvm only trains binary (two-class) SVMs, a multi-class problem has to be decomposed into several binary sub-problems; the my_MultiSvm helper below does this with a one-vs-all scheme.

    %% Multi-class classification
    TrainingSet=[ 1 10;2 20;3 30;4 40;5 50;6 66;3 30;4.1 42]; % training data
    TestSet=[3 34; 1 14; 2.2 25; 6.2 63]; % test data
    GroupTrain=[1;1;2;2;3;3;2;2]; % training labels
    results =my_MultiSvm(TrainingSet, GroupTrain, TestSet);
    disp('multi class problem');
    disp(results);
    

    results is the final classification result. The code above calls the my_MultiSvm.m function; its full contents are given below, and a toolbox-based alternative is sketched right after it.

    function [y_predict,models] = my_MultiSvm(X_train, y_train, X_test)
    % Multi-class SVM using a one-vs-all scheme
    % Input:
    %   X_train: n*m matrix, n = number of training samples, m = number of features
    %   y_train: n*1 vector of training labels; any number of classes is supported
    %   X_test:  n*m matrix, n = number of test samples, m = number of features
    % Output:
    %   y_predict: n*1 vector of predicted labels for the test set
    %
    % Copyright(c) lihaoyang 2020
    %

        y_labels = unique(y_train);
        n_class = size(y_labels, 1);
        models = cell(n_class, 1);
        % train one binary model per class (class i vs. a random, equally sized
        % subset of the remaining classes)
        for i = 1:n_class
            class_i_place = find(y_train == y_labels(i));
            svm_train_x = X_train(class_i_place,:);
            sample_num = numel(class_i_place);
            class_others = find(y_train ~= y_labels(i));
            % randomly pick negative samples from the rows of the other classes
            randp = randperm(numel(class_others));
            neg_num = min(sample_num, numel(class_others));
            svm_train_minus = class_others(randp(1:neg_num));
            svm_train_x = [svm_train_x; X_train(svm_train_minus,:)];
            svm_train_y = [ones(sample_num, 1); -1*ones(neg_num, 1)];
            disp(['training model: ', num2str(i)])
            models{i} = fitcsvm(svm_train_x, svm_train_y);
        end
        test_num = size(X_test, 1);
        y_predict = zeros(test_num, 1);
        % for each test sample, evaluate all n_class models and pick the class
        % whose one-vs-all model returns the largest positive-class score
        for i = 1:test_num
            if mod(i, 100) == 0
                disp(['predicted samples: ', num2str(i)])
            end
            scores = zeros(n_class, 1);
            for j = 1:n_class
                model = models{j};
                [label, rat] = predict(model, X_test(i,:));
                scores(j) = rat(2);   % score of the positive (+1) class
            end
            [~, maxp] = max(scores);
            y_predict(i) = y_labels(maxp);
        end
    end
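
    For reference, if the Statistics and Machine Learning Toolbox is available, MATLAB's built-in fitcecoc wraps binary SVM learners into a multi-class model, so the same example can be solved without the hand-written loop. A minimal sketch using the data defined above:

    ecoc_model   = fitcecoc(TrainingSet, GroupTrain); % default: one-vs-one binary SVM learners
    ecoc_results = predict(ecoc_model, TestSet);
    disp(ecoc_results);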
    

    3. Multi-class classification (using the LIBSVM toolbox)

    The following code is one way of calling the LIBSVM toolbox from MATLAB.

    TrainingSet=[ 1 10;2 20;3 30;4 40;5 50;6 66;3 30;4.1 42]; % training data
    TestSet=[3 34; 1 14; 2.2 25; 6.2 63]; % test data
    GroupTrain=[1;1;2;2;3;3;2;2]; % training labels
    GroupTest=[1;2;1;3]; % test labels
    
    % SVM training
    model = svmtrain(GroupTrain,TrainingSet);
    % SVM prediction
    [predict_label] = svmpredict(GroupTest,TestSet,model);
    

    This option comes last because it requires installing the LIBSVM toolbox for MATLAB first; for details see this link: Installing the LibSVM toolbox in MATLAB.
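
    If LIBSVM is installed, kernel and cost parameters can be passed to svmtrain as an option string. A small sketch (the values here are illustrative, not tuned):

    % -t 2: RBF kernel, -c: cost C, -g: kernel gamma, -v 5: report 5-fold cross-validation accuracy
    cv_acc = svmtrain(GroupTrain, TrainingSet, '-t 2 -c 1 -g 0.5 -v 5');
    model  = svmtrain(GroupTrain, TrainingSet, '-t 2 -c 1 -g 0.5');
    [predict_label, accuracy] = svmpredict(GroupTest, TestSet, model);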

  • Handwritten digit recognition on the MNIST database using libsvm, achieving 98.14% accuracy. Includes the MATLAB program, the libsvm library, and the 60,000 training and 10,000 test images
  • SVM classification with MATLAB

    2021-10-07 10:57:21

    SVM classification with MATLAB

    %% LibSVM
    clc;clear;close all;
    %% Load the data files
    [Iris1,Iris2,Iris3,Iris4,IrisClass]=textread('iris.data','%f%f%f%f%s','delimiter',','); 
    Wine=load('wine.data'); % load reads numeric data only
    Abalone=importdata('abalone.data'); % importdata can read both numbers and text
    %% Data preprocessing
    % Iris preprocessing
    Iris_all=[Iris1,Iris2,Iris3,Iris4];
    Iris_class = zeros(1, 150);
    for i = 1: size(IrisClass,1)
                 if (strcmp(IrisClass(i), 'Iris-setosa' ))
                   Iris_class(1,i) = 1;     
                elseif(strcmp(IrisClass(i), 'Iris-versicolor') )
                   Iris_class(1,i) = 2;
                else
                   Iris_class(1,i) = 3;  
                 end 
    end
    Iris_class=Iris_class(:);
    %------ put the class label in the first column -----%
    Iris=[Iris_class Iris_all];
    % Wine preprocessing
    Wine_class=Wine(:,1);
    Wine_data=Wine(:,2:14);
    % Abalone preprocessing
    Abalone_gendar = zeros(1, 4177);
    for i = 1: size(Abalone.textdata,1)
                 if (strcmp(Abalone.textdata(i), 'F' ))
                   Abalone_gendar(1,i) = 1;     
                elseif(strcmp(Abalone.textdata(i), 'M') )
                   Abalone_gendar(1,i) = 2;
                else
                   Abalone_gendar(1,i) = 3;  
                 end 
    end
    Abalone_gendar = Abalone_gendar(:);
    Abalone_Data=Abalone.data(:,1:8);
    Abalone_class=Abalone_gendar;
    Abalone_all=[Abalone_class Abalone_Data];
    % Data splitting
    times=2;
    %% ----- Iris sample selection -----
    % Random selection
    Data = Iris; % sample matrix
    indices = crossvalind('Kfold', size(Data,1), times); % randomly assign each sample to one of 'times' folds
    for i = 1:times % loop over the folds, using fold i as the test set and the rest for training; only the final split is kept
        test = (indices == i);
        train = ~test;
        trainData = Data(train, :);
        testData = Data(test, :);
    end
    train_1=trainData(:,2:5);
    train_label=trainData(:,1);
    test_1=testData(:,2:5);
    test_label=testData(:,1);
    % Stratified selection (split each class separately, then recombine)
    Data = Iris; % sample matrix
    % Class 1
    Iris_Index1=logical(Iris(:,1)==1);
    Iris_class1=Data(Iris_Index1==1,:);
    indices = crossvalind('Kfold', size(Iris_class1,1), times);
    for i = 1:times 
        test = (indices == i);
        train = ~test;
        trainData1 = Iris_class1(train, :);
        testData1 = Iris_class1(test, :);
    end
    % Class 2
    Iris_Index2=logical(Iris(:,1)==2);
    Iris_class2=Data(Iris_Index2==1,:);
    indices = crossvalind('Kfold', size(Iris_class2,1), times);
    for i = 1:times 
        test = (indices == i);
        train = ~test;
        trainData2 = Iris_class2(train, :);
        testData2 = Iris_class2(test, :);
    end
    % Class 3
    Iris_Index3=logical(Iris(:,1)==3);
    Iris_class3=Data(Iris_Index3==1,:);
    indices = crossvalind('Kfold', size(Iris_class3,1), times);
    for i = 1:times 
        test = (indices == i);
        train = ~test;
        trainData3 = Iris_class3(train, :);
        testData3 = Iris_class3(test, :);
    end
    trainData_all=[trainData1;trainData2;trainData3];
    testData_all=[testData1;testData2;testData3];
    train_cross=trainData_all(:,2:5);
    train_label_cross=trainData_all(:,1);
    test_cross=testData_all(:,2:5);
    test_label_cross=testData_all(:,1);
    %% ------- Wine sample selection -------
    % Random selection
    times=10;
    Data = Wine; % sample matrix
    indices = crossvalind('Kfold', size(Data,1), times);
    for i = 1:times 
        test = (indices == i);
        train = ~test;
        trainData = Data(train, :);
        testData = Data(test, :);
    end
    train_1=trainData(:,2:14);
    train_label=trainData(:,1);
    test_1=testData(:,2:14);
    test_label=testData(:,1);
    % Stratified selection (split each class separately, then recombine)
    Data = Wine; % sample matrix
    % Class 1
    Wine_Index1=logical(Wine(:,1)==1);
    Wine_class1=Data(Wine_Index1==1,:);
    indices = crossvalind('Kfold', size(Wine_class1,1), times);
    for i = 1:times 
        test = (indices == i);
        train = ~test;
        trainData1 = Wine_class1(train, :);
        testData1 = Wine_class1(test, :);
    end
    % Class 2
    Wine_Index2=logical(Wine(:,1)==2);
    Wine_class2=Data(Wine_Index2==1,:);
    indices = crossvalind('Kfold', size(Wine_class2,1), times);
    for i = 1:times 
        test = (indices == i);
        train = ~test;
        trainData2 = Wine_class2(train, :);
        testData2 = Wine_class2(test, :);
    end
    % Class 3
    Wine_Index3=logical(Wine(:,1)==3);
    Wine_class3=Data(Wine_Index3==1,:);
    indices = crossvalind('Kfold', size(Wine_class3,1), times);
    for i = 1:times 
        test = (indices == i);
        train = ~test;
        trainData3 = Wine_class3(train, :);
        testData3 = Wine_class3(test, :);
    end
    trainData_all=[trainData1;trainData2;trainData3];
    testData_all=[testData1;testData2;testData3];
    train_cross=trainData_all(:,2:14);
    train_label_cross=trainData_all(:,1);
    test_cross=testData_all(:,2:14);
    test_label_cross=testData_all(:,1);
    % Training
    % Training on the random split
    cmd1 =['-t 1'];        % polynomial kernel (-t 1); libsvm's linear kernel is '-t 0'
    cmd2 =['-t 1 -c 100']; % polynomial kernel with C = 100
    model = svmtrain(train_label, train_1, cmd2);
    svmpredict(test_label,test_1,model);
    % Training on the stratified split
    cmd1 =['-t 1'];        % polynomial kernel
    cmd2 =['-t 1 -c 100']; % polynomial kernel with C = 100
    model = svmtrain(train_label_cross, train_cross, cmd1);
    svmpredict(test_label_cross,test_cross,model);
    % Changing the parameters (larger C)
    cmd1 =['-t 1'];        % polynomial kernel
    cmd2 =['-t 1 -c 100']; % polynomial kernel with C = 100
    model = svmtrain(train_label_cross, train_cross, cmd2);
    svmpredict(test_label_cross,test_cross,model);
    %% ----- Abalone sample selection -----
    % random selection approach for an unevenly distributed sample
    % Random selection
    times=10;
    Data = Abalone_all; % sample matrix
    indices = crossvalind('Kfold', size(Data,1), times);
    for i = 1:times 
        test = (indices == i);
        train = ~test;
        trainData = Data(train, :);
        testData = Data(test, :);
    end
    train_1=trainData(:,2:9);
    train_label=trainData(:,1);
    test_1=testData(:,2:9);
    test_label=testData(:,1);
    % Stratified selection (split each class separately, then recombine)
    Data = Abalone_all; % sample matrix
    % Class 1
    Abalone_Index1=logical(Abalone_all(:,1)==1);
    Abalone_class1=Data(Abalone_Index1==1,:);
    indices = crossvalind('Kfold', size(Abalone_class1,1),times);
    for i = 1:times 
        test = (indices == i);
        train = ~test;
        trainData1 = Abalone_class1(train, :);
        testData1 = Abalone_class1(test, :);
    end
    % Class 2
    Abalone_Index2=logical(Abalone_all(:,1)==2);
    Abalone_class2=Data(Abalone_Index2==1,:);
    indices = crossvalind('Kfold', size(Abalone_class2,1), times);
    for i = 1:times 
        test = (indices == i);
        train = ~test;
        trainData2 = Abalone_class2(train, :);
        testData2 = Abalone_class2(test, :);
    end
    % Class 3
    Abalone_Index3=logical(Abalone_all(:,1)==3);
    Abalone_class3=Data(Abalone_Index3==1,:);
    indices = crossvalind('Kfold', size(Abalone_class3,1), times);
    for i = 1:times 
        test = (indices == i);
        train = ~test;
        trainData3 = Abalone_class3(train, :);
        testData3 = Abalone_class3(test, :);
    end
    trainData_all=[trainData1;trainData2;trainData3];
    testData_all=[testData1;testData2;testData3];
    train_cross=trainData_all(:,2:9);
    train_label_cross=trainData_all(:,1);
    test_cross=testData_all(:,2:9);
    test_label_cross=testData_all(:,1);
    % % Training on the random split
    % cmd1 =['-t 1'];        % polynomial kernel
    % cmd2 =['-t 1 -c 100']; % polynomial kernel with C = 100
    % model = svmtrain(train_label, train_1, cmd1);
    % svmpredict(test_label,test_1,model);
    % Training on the stratified split
    cmd1 =['-t 1'];        % polynomial kernel
    cmd2 =['-t 1 -c 100']; % polynomial kernel with C = 100
    model = svmtrain(train_label_cross, train_cross, cmd1);
    svmpredict(test_label_cross,test_cross,model);
    % Training on the stratified split with the larger C
    cmd1 =['-t 1'];        % polynomial kernel
    cmd2 =['-t 1 -c 100']; % polynomial kernel with C = 100
    model = svmtrain(train_label_cross, train_cross, cmd2);
    svmpredict(test_label_cross,test_cross,model);
    %% Training
    % Training on the random split
    cmd1 =['-t 1'];        % polynomial kernel
    cmd2 =['-t 1 -c 100']; % polynomial kernel with C = 100
    model = svmtrain(train_label, train_1, cmd1);
    svmpredict(test_label,test_1,model);
    % Training on the stratified split
    cmd1 =['-t 1'];        % polynomial kernel
    cmd2 =['-t 1 -c 100']; % polynomial kernel with C = 100
    model = svmtrain(train_label_cross, train_cross, cmd1);
    svmpredict(test_label_cross,test_cross,model);
    
    %% Repeat the same procedure with the other data sets
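
    The per-class splits above build a stratified partition by hand. For reference, cvpartition (Statistics and Machine Learning Toolbox) produces a stratified k-fold partition directly when given the class labels; a minimal sketch on the Iris matrix defined above (the variable names and fold count are my own choices):

    k  = 2;                                   % number of folds, like 'times' above
    cp = cvpartition(Iris(:,1), 'KFold', k);  % stratified by the class label column
    tr = training(cp, 1);  te = test(cp, 1);  % logical indices for fold 1
    train_cross = Iris(tr, 2:5);  train_label_cross = Iris(tr, 1);
    test_cross  = Iris(te, 2:5);  test_label_cross  = Iris(te, 1);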
    
    
    
    
  • MATLAB code for SVM classification and regression
  • An SVM classifier with integrated multi-class SVM solutions; supports MATLAB, C, Python, etc.
  • MATLAB implementation of SVM classification

    2019-04-12 09:31:15
    Includes the libsvm toolbox, an example program for SVMs (with code and sample data), and an example program for SVR (with code and sample data)
  • Compiles and runs successfully as-is
  • Uses MATLAB to optimize the parameters of the SVM algorithm, further improving classification performance
  • Handwritten letter recognition with PCA and a multi-class SVM (forum question)

    This is urgent!

    Handwritten letter recognition. The data is in the attachment and has already been normalized to the 0-1 range.

    Below is my code, using a polynomial kernel (accuracy 5.7%) and a Gaussian kernel (accuracy 2.6%; on inspection every sample was classified as class 18, i.e. the letter 'r').

    load TrainSampleImage.mat;

    DataSet = zeros(64*64,2600);

    Tag = zeros(1,2600);

    for i = 1:2600

    DataSet(:,i)=reshape(CharImg(i).Img,64*64,1);

    Tag(1,i)=reshape(CharImg(i).Letter-96,1,1);

    end

    [train_set,validate_set,test_set] = dividevec(DataSet,Tag,0.2,0.1)

    [tr_PCA,tr_SCORE,tr_latent,tr_tsquare]=princomp(train_set.P);

    test_set=test_set'

    test_set.T=test_set.T';

    test_set.P=test_set.P';

    test_set.indices=test_set.indices';

    validate_set.T=test_set.T';

    validate_set.P=test_set.P';

    validate_set.indices=test_set.indices';

    train_set.T=test_set.T';

    train_set.P=test_set.P';

    train_set.indices=test_set.indices';

    [train_set,validate_set,test_set] = dividevec(DataSet,Tag,0.2,0.1)

    test_set.T=test_set.T';

    test_set.P=test_set.P';

    test_set.indices=test_set.indices';

    validate_set.T=validate_set.T';

    validate_set.P=validate_set.P';

    validate_set.indices=validate_set.indices';

    train_set.T=train_set.T';

    train_set.P=train_set.P';

    train_set.indices=train_set.indices';

    [tr_PCA,tr_SCORE,tr_latent,tr_tsquare]=princomp(train_set.P);

    [tr_PC,tr_SCORE,tr_latent,tr_tsquare]=princomp(train_set.P);

    [te_PC,te_SCORE,te_latent,te_tsquare]=princomp(test_set.P);

    [val_PCA,val_SCORE,val_latent,val_tsquare]=princomp(val_set.P);

    [val_PCA,val_SCORE,val_latent,val_tsquare]=princomp(validate_set.P);

    save afterPCA.mat val_SCORE te_SCORE tr_SCORE

    save beforePCA.mat test_set train_set validate_set

    clc

    clear

    clc

    load afterPCA.mat

    tr_pca = tr_SCORE(1:2,:)

    tr_pca = tr_SCORE(:,1:500)

    te_pca = te_SCORE(:,1:500)

    val_pca = val_SCORE(:,1:500)

    save data_for_svm.mat te_pca tr_pca val_pca

    % set the SVM classifier parameters

    c=1000;

    lambda=1e-7;

    kerneloption=2;

    kernel='poly';

    verbose=1;

    class_number = 26;

    load

    load beforePCA.mat

    [tr_sup_pca_lda,w_pca,b_pca,nbsv_pca]=svmmulticlassoneagainstall(tr_pca,train_set.T,class_number,c,lambda,kernel,kerneloption,verbose);

    save SVMmodel.mat tr_sup_pca_lda w_pca b_pca nbsv_pca

    [val_pred_pca_lda,maxi_pca] = svmmultival(val_pca,tr_sup_pca_lda,w_pca,b_pca,nbsv_pca,kernel,kerneloption); % predicted classes for the validation samples

    accute_rate_pca=sum(val_pred_pca_lda==validate_set.T)/length(validate_set.T);

    which svmmulticlassoneagainstall

    save SVMmodelpoly.mat tr_sup_pca_lda w_pca b_pca nbsv_pca accute_rate_pca val_pred_pca_lda

    c=1000;

    lambda=1e-7;

    kerneloption=2;

    kernel='poly';

    verbose=1;

    class_number = 26;

    kernel='gaussian';%gaussian

    % train the classifier on the PCA-reduced training samples

    [tr_sup_pca_lda,w_pca,b_pca,nbsv_pca]=svmmulticlassoneagainstall(tr_pca,train_set.T,class_number,c,lambda,kernel,kerneloption,verbose);

    % predict on the PCA-reduced validation samples

    [val_pred_pca_lda,maxi_pca] = svmmultival(val_pca,tr_sup_pca_lda,w_pca,b_pca,nbsv_pca,kernel,kerneloption); % predicted classes for the validation samples

    accute_rate_pca=sum(val_pred_pca_lda==validate_set.T)/length(validate_set.T);

    % train the classifier on the PCA-reduced training samples

    [tr_sup_pca_lda,w_pca,b_pca,nbsv_pca]=svmmulticlassoneagainstall(tr_pca,train_set.T,class_number,c,lambda,kernel,kerneloption,verbose);

    % predict on the PCA-reduced validation samples

    [val_pred_pca_lda,maxi_pca] = svmmultival(val_pca,tr_sup_pca_lda,w_pca,b_pca,nbsv_pca,kernel,kerneloption); % predicted classes for the validation samples

    accute_rate_pca=sum(val_pred_pca_lda==validate_set.T)/length(validate_set.T);

    save SVMmodelgaussian.mat tr_sup_pca_lda w_pca b_pca nbsv_pca accute_rate_pca val_pred_pca_lda
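
    One detail that may be worth checking in the code above (an observation, not part of the original post): princomp is fitted separately on the training, test, and validation sets, so each set ends up in a different principal-component space. Keeping the feature spaces consistent means fitting PCA on the training set only and projecting the other sets with that basis, roughly like this (variable names follow the post; rows are assumed to be samples):

    [tr_PC, tr_SCORE] = princomp(train_set.P);               % fit PCA on the training set only (newer MATLAB: pca)
    mu        = mean(train_set.P, 1);
    val_SCORE = bsxfun(@minus, validate_set.P, mu) * tr_PC;  % project validation data onto the training basis
    te_SCORE  = bsxfun(@minus, test_set.P, mu) * tr_PC;      % project test data onto the training basis
    tr_pca  = tr_SCORE(:, 1:500);
    val_pca = val_SCORE(:, 1:500);
    te_pca  = te_SCORE(:, 1:500);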


  • SVM classification implemented in MATLAB.

    2013-04-02 08:17:52
    SVM classification implemented in MATLAB
  • Using MATLAB to implement SVM for linear classification, non-linear classification, and automatic tuning of the model parameters.

    Tips

    There are already many mature SVM packages that minimize the cost function and solve for the parameters. They were written by experts in machine learning and use many advanced optimization and implementation tricks, far beyond anything I could hand-write myself, so why not use them for free?

    After handing the optimization over to such a solver, what is left for us is to choose the algorithm's parameter $C$ and the kernel function, and the similarity function may have parameters of its own to set (such as $\sigma$ in the Gaussian kernel). For example, when the number of features $n$ is large and the number of samples $m$ is small, we would probably choose a linear kernel; if $n$ is small and $m$ is moderate, we tend to use a Gaussian kernel (remember to normalize the features) to build a more complex boundary.

    The provided solver functions

    The advantage of this solver is its portability, which is why it is handed out with the assignment; for practical work Andrew Ng recommends more advanced tools such as LIBSVM or MATLAB's Statistics and Machine Learning Toolbox.
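
    For comparison, a rough fitcsvm equivalent from that toolbox might look like this (it assumes X, y, C and sigma as used in the exercise code below; the KernelScale = sqrt(2)*sigma mapping is my own translation of the exp(-||x-l||^2/(2*sigma^2)) kernel defined later in this post):

    mdl  = fitcsvm(X, y, 'KernelFunction', 'rbf', ...
                   'BoxConstraint', C, 'KernelScale', sqrt(2)*sigma);
    pred = predict(mdl, Xval);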

    Training the SVM model

    function [model] = svmTrain(X, Y, C, kernelFunction, ...
                                tol, max_passes)
    %SVMTRAIN Trains an SVM classifier using a simplified version of the SMO 
    %algorithm. 
    %   [model] = SVMTRAIN(X, Y, C, kernelFunction, tol, max_passes) trains an
    %   SVM classifier and returns trained model. X is the matrix of training 
    %   examples.  Each row is a training example, and the jth column holds the 
    %   jth feature.  Y is a column matrix containing 1 for positive examples 
    %   and 0 for negative examples.  C is the standard SVM regularization 
    %   parameter.  tol is a tolerance value used for determining equality of 
    %   floating point numbers. max_passes controls the number of iterations
    %   over the dataset (without changes to alpha) before the algorithm quits.
    %
    % Note: This is a simplified version of the SMO algorithm for training
    %       SVMs. In practice, if you want to train an SVM classifier, we
    %       recommend using an optimized package such as:  
    %
    %           LIBSVM   (http://www.csie.ntu.edu.tw/~cjlin/libsvm/)
    %           SVMLight (http://svmlight.joachims.org/)
    %
    %
    
    if ~exist('tol', 'var') || isempty(tol)
        tol = 1e-3;
    end
    
    if ~exist('max_passes', 'var') || isempty(max_passes)
        max_passes = 5;
    end
    
    % Data parameters
    m = size(X, 1);
    n = size(X, 2);
    
    % Map 0 to -1
    Y(Y==0) = -1;
    
    % Variables
    alphas = zeros(m, 1);
    b = 0;
    E = zeros(m, 1);
    passes = 0;
    eta = 0;
    L = 0;
    H = 0;
    
    % Pre-compute the Kernel Matrix since our dataset is small
    % (in practice, optimized SVM packages that handle large datasets
    %  gracefully will _not_ do this)
    % 
    % We have implemented optimized vectorized version of the Kernels here so
    % that the svm training will run faster.
    if strcmp(func2str(kernelFunction), 'linearKernel')
        % Vectorized computation for the Linear Kernel
        % This is equivalent to computing the kernel on every pair of examples
        K = X*X';
    elseif strfind(func2str(kernelFunction), 'gaussianKernel')
        % Vectorized RBF Kernel
        % This is equivalent to computing the kernel on every pair of examples
        X2 = sum(X.^2, 2);
        K = bsxfun(@plus, X2, bsxfun(@plus, X2', - 2 * (X * X')));
        K = kernelFunction(1, 0) .^ K;
    else
        % Pre-compute the Kernel Matrix
        % The following can be slow due to the lack of vectorization
        K = zeros(m);
        for i = 1:m
            for j = i:m
                 K(i,j) = kernelFunction(X(i,:)', X(j,:)');
                 K(j,i) = K(i,j); %the matrix is symmetric
            end
        end
    end
    
    % Train
    fprintf('\nTraining ...');
    dots = 12;
    while passes < max_passes,
                
        num_changed_alphas = 0;
        for i = 1:m,
            
            % Calculate Ei = f(x(i)) - y(i) using (2). 
            % E(i) = b + sum (X(i, :) * (repmat(alphas.*Y,1,n).*X)') - Y(i);
            E(i) = b + sum (alphas.*Y.*K(:,i)) - Y(i);
            
            if ((Y(i)*E(i) < -tol && alphas(i) < C) || (Y(i)*E(i) > tol && alphas(i) > 0)),
                
                % In practice, there are many heuristics one can use to select
                % the i and j. In this simplified code, we select them randomly.
                j = ceil(m * rand());
                while j == i,  % Make sure i \neq j
                    j = ceil(m * rand());
                end
    
                % Calculate Ej = f(x(j)) - y(j) using (2).
                E(j) = b + sum (alphas.*Y.*K(:,j)) - Y(j);
    
                % Save old alphas
                alpha_i_old = alphas(i);
                alpha_j_old = alphas(j);
                
                % Compute L and H by (10) or (11). 
                if (Y(i) == Y(j)),
                    L = max(0, alphas(j) + alphas(i) - C);
                    H = min(C, alphas(j) + alphas(i));
                else
                    L = max(0, alphas(j) - alphas(i));
                    H = min(C, C + alphas(j) - alphas(i));
                end
               
                if (L == H),
                    % continue to next i. 
                    continue;
                end
    
                % Compute eta by (14).
                eta = 2 * K(i,j) - K(i,i) - K(j,j);
                if (eta >= 0),
                    % continue to next i. 
                    continue;
                end
                
                % Compute and clip new value for alpha j using (12) and (15).
                alphas(j) = alphas(j) - (Y(j) * (E(i) - E(j))) / eta;
                
                % Clip
                alphas(j) = min (H, alphas(j));
                alphas(j) = max (L, alphas(j));
                
                % Check if change in alpha is significant
                if (abs(alphas(j) - alpha_j_old) < tol),
                    % continue to next i. 
                    % replace anyway
                    alphas(j) = alpha_j_old;
                    continue;
                end
                
                % Determine value for alpha i using (16). 
                alphas(i) = alphas(i) + Y(i)*Y(j)*(alpha_j_old - alphas(j));
                
                % Compute b1 and b2 using (17) and (18) respectively. 
                b1 = b - E(i) ...
                     - Y(i) * (alphas(i) - alpha_i_old) * K(i,i) ...
                     - Y(j) * (alphas(j) - alpha_j_old) * K(i,j);
                b2 = b - E(j) ...
                     - Y(i) * (alphas(i) - alpha_i_old) * K(i,j) ...
                     - Y(j) * (alphas(j) - alpha_j_old) * K(j,j);
    
                % Compute b by (19). 
                if (0 < alphas(i) && alphas(i) < C),
                    b = b1;
                elseif (0 < alphas(j) && alphas(j) < C),
                    b = b2;
                else
                    b = (b1+b2)/2;
                end
    
                num_changed_alphas = num_changed_alphas + 1;
    
            end
            
        end
        
        if (num_changed_alphas == 0),
            passes = passes + 1;
        else
            passes = 0;
        end
    
        fprintf('.');
        dots = dots + 1;
        if dots > 78
            dots = 0;
            fprintf('\n');
        end
        if exist('OCTAVE_VERSION')
            fflush(stdout);
        end
    end
    fprintf(' Done! \n\n');
    
    % Save the model
    idx = alphas > 0;
    model.X= X(idx,:);
    model.y= Y(idx);
    model.kernelFunction = kernelFunction;
    model.b= b;
    model.alphas= alphas(idx);
    model.w = ((alphas.*Y)'*X)';
    
    end
    

    Making predictions with the model

    function pred = svmPredict(model, X)
    %SVMPREDICT returns a vector of predictions using a trained SVM model
    %(svmTrain). 
    %   pred = SVMPREDICT(model, X) returns a vector of predictions using a 
    %   trained SVM model (svmTrain). X is a mxn matrix where there each 
    %   example is a row. model is a svm model returned from svmTrain.
    %   predictions pred is a m x 1 column of predictions of {0, 1} values.
    %
    
    % Check if we are getting a column vector, if so, then assume that we only
    % need to do prediction for a single example
    if (size(X, 2) == 1)
        % Examples should be in rows
        X = X';
    end
    
    % Dataset 
    m = size(X, 1);
    p = zeros(m, 1);
    pred = zeros(m, 1);
    
    if strcmp(func2str(model.kernelFunction), 'linearKernel')
        % We can use the weights and bias directly if working with the 
        % linear kernel
        p = X * model.w + model.b;
    elseif strfind(func2str(model.kernelFunction), 'gaussianKernel')
        % Vectorized RBF Kernel
        % This is equivalent to computing the kernel on every pair of examples
        X1 = sum(X.^2, 2);
        X2 = sum(model.X.^2, 2)';
        K = bsxfun(@plus, X1, bsxfun(@plus, X2, - 2 * X * model.X'));
        K = model.kernelFunction(1, 0) .^ K;
        K = bsxfun(@times, model.y', K);
        K = bsxfun(@times, model.alphas', K);
        p = sum(K, 2);
    else
        % Other Non-linear kernel
        for i = 1:m
            prediction = 0;
            for j = 1:size(model.X, 1)
                prediction = prediction + ...
                    model.alphas(j) * model.y(j) * ...
                    model.kernelFunction(X(i,:)', model.X(j,:)');
            end
            p(i) = prediction + model.b;
        end
    end
    
    % Convert predictions into 0 / 1
    pred(p >= 0) =  1;
    pred(p <  0) =  0;
    
    end
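
    A typical way to evaluate the trained model on labelled data (labels in {0, 1}, as svmPredict expects):

    pred = svmPredict(model, X);
    fprintf('Training accuracy: %.2f%%\n', mean(double(pred == y)) * 100);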
    
    
    

    Visualizing the model

    Plotting the data set together with the learned decision boundary lets us inspect the model visually:

    Linear boundary:

    function visualizeBoundaryLinear(X, y, model)
    %VISUALIZEBOUNDARYLINEAR plots a linear decision boundary learned by the
    %SVM
    %   VISUALIZEBOUNDARYLINEAR(X, y, model) plots a linear decision boundary 
    %   learned by the SVM and overlays the data on it
    
    w = model.w;
    b = model.b;
    xp = linspace(min(X(:,1)), max(X(:,1)), 100);
    yp = - (w(1)*xp + b)/w(2);
    plotData(X, y);
    hold on;
    plot(xp, yp, '-b'); 
    hold off
    
    end
    
    

    Non-linear boundary:

    function visualizeBoundary(X, y, model, varargin)
    %VISUALIZEBOUNDARY plots a non-linear decision boundary learned by the SVM
    %   VISUALIZEBOUNDARYLINEAR(X, y, model) plots a non-linear decision 
    %   boundary learned by the SVM and overlays the data on it
    
    % Plot the training data on top of the boundary
    plotData(X, y)
    
    % Make classification predictions over a grid of values
    x1plot = linspace(min(X(:,1)), max(X(:,1)), 100)';
    x2plot = linspace(min(X(:,2)), max(X(:,2)), 100)';
    [X1, X2] = meshgrid(x1plot, x2plot);
    vals = zeros(size(X1));
    for i = 1:size(X1, 2)
       this_X = [X1(:, i), X2(:, i)];
       vals(:, i) = svmPredict(model, this_X);
    end
    
    % Plot the SVM boundary
    hold on
    contour(X1, X2, vals, [0.5 0.5], 'b');
    hold off;
    
    end
    
    

    Linear boundary

    As usual, visualize the data first:
    (figure: scatter plot of the training data)
    Clearly a single straight line can separate the two classes, except for one outlier on the far left. This example lets us observe how the parameter $C$ affects the resulting SVM model.

    Since all the heavy lifting is done by the provided functions, the code is very simple:

    C = 1;
    model = svmTrain(X, y, C, @linearKernel, 1e-3, 20);
    visualizeBoundaryLinear(X, y, model);
    

    The trained model is quite reasonable: it does not force the outlier in, and being a large-margin model it places the decision boundary right in the middle between the two classes, which looks just right.
    (figure: linear decision boundary learned with C = 1)
    But if $C$ is increased to 100, which makes the algorithm much stricter about fitting every sample, the model will do its best to also accommodate the outlier, and the result looks much less pleasing:
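
    The only change needed for that experiment is the value of $C$:

    C = 100;
    model = svmTrain(X, y, C, @linearKernel, 1e-3, 20);
    visualizeBoundaryLinear(X, y, model);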
    (figure: linear decision boundary learned with C = 100)

    Complex non-linear boundary

    For data like the following, a single straight line clearly cannot solve the problem:
    (figure: non-linearly separable training data)
    So we need to introduce a kernel function to build a complex non-linear boundary; here we implement the Gaussian function as the kernel:
    $$f=\text{similarity}(\vec{x},\vec{l})=\exp\left(-\frac{\lVert\vec{x}-\vec{l}\rVert^{2}}{2\sigma^{2}}\right)$$

    function sim = gaussianKernel(x1, x2, sigma)
    %RBFKERNEL returns a radial basis function kernel between x1 and x2
    %   sim = gaussianKernel(x1, x2) returns a gaussian kernel between x1 and x2
    %   and returns the value in sim
    
    % Ensure that x1 and x2 are column vectors
    x1 = x1(:); x2 = x2(:);
    
    % You need to return the following variables correctly.
    sim = 0;
    
    % ====================== YOUR CODE HERE ======================
    % Instructions: Fill in this function to return the similarity between x1
    %               and x2 computed using a Gaussian kernel with bandwidth
    %               sigma
    %
    %
    
    sim=(x1-x2)'*(x1-x2);
    sim=exp(-sim/(2*sigma^2));
    % =============================================================
        
    end
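
    A quick sanity check of the kernel's behaviour (no specific values asserted, just the trend: similarity close to 1 for nearby points and close to 0 for distant ones):

    sigma = 2;
    near = gaussianKernel([1; 2; 1], [1; 2; 1.1], sigma)  % nearby points  -> close to 1
    far  = gaussianKernel([1; 2; 1], [9; -4; 7], sigma)   % distant points -> close to 0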
    

    Then call the provided training function and wait for the result:

    % SVM Parameters
    C = 1; sigma = 0.1;
    
    % We set the tolerance and max_passes lower here so that the code will run faster. However, in practice, 
    % you will want to run the training to convergence.
    model= svmTrain(X, y, C, @(x1, x2) gaussianKernel(x1, x2, sigma)); 
    visualizeBoundary(X, y, model);
    

    (figure: non-linear decision boundary learned with the Gaussian kernel)

    Parameter selection

    In the previous problem the parameters $C$ and $\sigma$ were given to us by Andrew Ng; when building our own model we have to choose them ourselves. That calls for a routine that enumerates candidate parameters, trains on the training set, and uses the validation set to find the best combination:

    function [C, sigma] = dataset3Params(X, y, Xval, yval)
    %DATASET3PARAMS returns your choice of C and sigma for Part 3 of the exercise
    %where you select the optimal (C, sigma) learning parameters to use for SVM
    %with RBF kernel
    %   [C, sigma] = DATASET3PARAMS(X, y, Xval, yval) returns your choice of C and 
    %   sigma. You should complete this function to return the optimal C and 
    %   sigma based on a cross-validation set.
    %
    
    C_vec = [0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30];
    sigma_vec = [0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30];
    n=length(C_vec);
    m=length(sigma_vec);
    
    % ====================== YOUR CODE HERE ======================
    % Instructions: Fill in this function to return the optimal C and sigma
    %               learning parameters found using the cross validation set.
    %               You can use svmPredict to predict the labels on the cross
    %               validation set. For example, 
    %                   predictions = svmPredict(model, Xval);
    %               will return the predictions on the cross validation set.
    %
    %  Note: You can compute the prediction error using 
    %        mean(double(predictions ~= yval))
    %
    err=10^10;  % smallest validation error seen so far ('err' avoids shadowing MATLAB's built-in error())
    for i=1:n
        for j=1:m
            C=C_vec(i);
            sigma=sigma_vec(j);                 % enumerate the candidate parameters
            model=svmTrain(X,y,C,@(x1,x2)gaussianKernel(x1, x2, sigma));
            predictions=svmPredict(model,Xval); % train, then predict on the validation set
            if err>mean(double(predictions~=yval)) % update the smallest validation error
                err=mean(double(predictions~=yval));
                p=i;
                q=j;
            end
        end
    end
    C=C_vec(p);
    sigma=sigma_vec(q);
    % =========================================================================
    
    end
    

    Finally, for the data below we obtain the decision boundary shown in the figure:

    % Try different SVM Parameters here
    [C, sigma] = dataset3Params(X, y, Xval, yval);
    
    % Train the SVM
    model = svmTrain(X, y, C, @(x1, x2)gaussianKernel(x1, x2, sigma));
    visualizeBoundary(X, y, model);
    

    (figure: decision boundary with the automatically selected C and sigma)

  • GA-SVM data prediction: feed in training samples and evaluate with test samples
  • Classic SVM multi-class classification MATLAB program

    2013-07-17 14:17:58
    Classic SVM multi-class classification MATLAB program
  • MATLAB source code: data classification and prediction with an SVM, applied to wine variety recognition
  • Using SVM classification in Matlab

    2021-04-12 17:27:06
    svm
  • An SVM-based fatigued-driving detection system. Contactless fatigue detection based on neural networks has become one of the hottest research directions in driver-fatigue detection, as it avoids the interference that contact-based methods impose on the driver and the low reliability of judging fatigue from a single signal source...
  • Matlab SVM classification

    2016-08-29 11:47:51
    The support vector machine (SVM) can classify data in both the linearly separable and the linearly inseparable case. 1. Linearly separable: SVM addresses binary classification by finding a separating boundary (a straight line in 2-D, a plane in 3-D, ...)
  • MATLAB SVM classification with error-correcting output codes (ECOC): related pages and official examples - Train Multiclass Model Using SVM Learners, Train Multiclass Linear Classification Model, Cross-Validate ECOC Classifier, Estimate Posterior ...
  • MATLAB code for SVM classification

    2012-06-26 16:27:17
    A complete MATLAB implementation of an SVM classifier.
  • The provided MATLAB functions train a dendrogram-based support vector machine (D-SVM) on a data set and perform multi-class classification with it. The two main functions are Train_DSVM (training) and Classify_DSVM (D-SVM classification). Example: using fisheriris ...
  • MATLAB implementation of an SVM classifier
  • Classification of the fisheriris data in MATLAB using the SVM algorithm, with graphical display
  • Visualization of libsvm classification results and of the decision curve - papers on weighted SVM.rar. On visualizing libsvm classification results and decision curves, by faruto. Several forum members have asked about visualizing libsvm classification results and decision curves...
  • Algorithms and MATLAB source code for an SVM classifier; excerpt of the driver script: clear; % clear workspace clc; X = load('data.txt'); n = length(X); % total number of samples y = X(:,4); % class labels X = X(:,1:3); TOL = 0.0001; % required precision C = 1; % parameter, ...
  • Contents: program code, thesis (14,544 characters), assignment brief, proposal, mid-term check, circuit diagram. Abstract: the support vector machine (SVM) is a recently proposed technique for pattern recognition based on statistical learning theory; thanks to its excellent learning performance it has become a research hotspot in the international machine-learning community...
  • MATLAB implementation of a support vector machine (SVM) for binary classification, solved via quadratic-programming convex optimization and via semi-infinite programming (with linear and non-linear kernels). Includes the IRIS data, a lab report, and a document deriving the mathematics of SVM classification; runs directly and does not use MATLAB's SVM toolbox...
  • A basic SVM classification routine for simple classification of data
  • Its basic model is the linear classifier with the largest margin in feature space; because it follows the principle of minimizing the sum of empirical risk and confidence risk (i.e., structural risk), SVM generalizes well. The SVM learning strategy is margin maximization; for a given data set T={(x1,y1),(x2,y2),...,(xn,yn)...
  • A basic MATLAB tutorial on the SVM algorithm, suitable for beginners
  • A 3-D binary classification task: SVM visualization source code and data files. Visualizes the SVM separating plane, the support vectors, and the bad cases.
