  • SVM image classification in Python: code implementation, continued
    2021-05-31 17:41:32

    SVM image classification in Python: code implementation, continued

    This post is a sequel to the previous one on SVM image classification.
    The SVM training and testing code is as follows:

     
    # SVM training
    import os
    import pickle

    import cv2
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    SHAPE = (30, 30)
    MODEL_PATH = "C:/learn_data/2021_car/svm_model.pkl"

    def getImageData(directory):
        """Walk the dataset directory; each subfolder name is one class label."""
        feature_list = list()
        label_list = list()
        for root, dirs, files in os.walk(directory):
            for d in dirs:
                class_dir = os.path.join(root, d)
                for image in os.listdir(class_dir):
                    label_list.append(d)
                    feature_list.append(extractFeaturesFromImage(os.path.join(class_dir, image)))

        return np.asarray(feature_list), np.asarray(label_list)

    def extractFeaturesFromImage(image_file):
        img = cv2.imread(image_file)
        img = cv2.resize(img, SHAPE, interpolation=cv2.INTER_CUBIC)
        img = img.flatten()
        img = img / np.mean(img)  # normalize out overall brightness
        return img

    if __name__ == "__main__":
        directory = "C:/learn_data/2021_car/image/"

        feature_array, label_array = getImageData(directory)

        X_train, X_test, y_train, y_test = train_test_split(
            feature_array, label_array, test_size=0.2, random_state=42)

        # Reuse a previously trained model if one exists. Note the load and
        # save paths must be the same file (the original code loaded from the
        # working directory but saved to an absolute path).
        if os.path.isfile(MODEL_PATH):
            svm = pickle.load(open(MODEL_PATH, "rb"))
        else:
            svm = SVC(kernel='rbf', gamma=0.001)
            svm.fit(X_train, y_train)
            pickle.dump(svm, open(MODEL_PATH, "wb"))

        print("Testing...\n")

        right = 0
        total = 0
        for x, y in zip(X_test, y_test):
            x = x.reshape(1, -1)
            prediction = svm.predict(x)[0]
            if y == prediction:
                right += 1
            total += 1

        accuracy = float(right) / float(total)
        print(str(accuracy * 100) + "% accuracy")
        print("Manual Testing\n")
        print("success")
        os.system("pause")
    
    

    The dataset

    So what does the dataset look like?
    Let me explain: this time it is a three-class task, and the dataset's folder structure is as follows:
    (image: dataset folder structure)
    In other words, the image folder is our training set. It contains three subfolders, and those subfolders may contain only images; no other files are allowed.
    (image: contents of a class subfolder)
    There are no constraints on the file names. If you want two-class or three-class classification, just change the number of subfolders under image; this should be easy to follow.
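    The expected layout can be sketched and checked with a few lines of Python. This is a hedged illustration: the class names "bus", "car", and "truck" are made up here, not from the post.

    ```python
    import os
    import tempfile

    # Hypothetical example layout: three class subfolders under an image/
    # directory (class names are illustrative only).
    base = os.path.join(tempfile.mkdtemp(), "image")
    for cls in ["bus", "car", "truck"]:
        os.makedirs(os.path.join(base, cls))

    # Every subfolder of image/ is one class; nothing else should be in there.
    classes = sorted(d for d in os.listdir(base)
                     if os.path.isdir(os.path.join(base, d)))
    print(classes)
    ```

    With three subfolders you get a three-class problem; add or remove subfolders to change the number of classes, exactly as the text describes.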

    Calling the saved model is covered in the previous post, which you can consult for reference.
    The blog link is here.
    This dataset, three classes of license-plate images, has been uploaded to my resources.
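    The load-and-predict step referenced above can be sketched as follows. This is a self-contained toy: the synthetic 2700-dimensional features (30*30*3, matching the post's flattened images) and the class names stand in for the real svm_model.pkl and image paths, which are specific to the post.

    ```python
    import io
    import pickle

    import numpy as np
    from sklearn.svm import SVC

    # Toy stand-in for the real training data: two well-separated made-up classes.
    rng = np.random.RandomState(0)
    X = np.vstack([rng.rand(10, 2700), rng.rand(10, 2700) + 1.0])
    y = np.array(["classA"] * 10 + ["classB"] * 10)

    svm = SVC(kernel="rbf", gamma=0.001).fit(X, y)

    # Serialize and deserialize the model as the post does (here via an
    # in-memory buffer instead of a file on disk).
    buf = io.BytesIO()
    pickle.dump(svm, buf)
    buf.seek(0)
    model = pickle.load(buf)

    # Classify one flattened feature vector, reshaped to (1, n_features).
    prediction = model.predict(X[0].reshape(1, -1))[0]
    print(prediction)
    ```

    The reshape to a single-row 2D array is required because scikit-learn predictors expect a batch of samples, not a bare vector.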

  • After downloading, install a matching Python 3.6 plus the numpy, scipy, matplotlib, sklearn, and skimage packages; then the code runs directly with no modification. After running, enter y to try the built-in image classification (chicks, ducklings, snakes, cats, and so on).
  • The task: crop each texture image in the library into nine tiles, five for training and four for testing; first extract LBP features from the images, then train an SVM classifier to predict which texture class each test image belongs to. Normally the SVM's parameters would need tuning, but ...
  • Python 3.6, SVM+HOG image classification, successfully tested. Code and images packaged for download; runs directly.

    Using SVM and HOG for image classification in Python requires sklearn, skimage, numpy, matplotlib, and so on. If the versions of these packages don't match, the code is hard to run successfully.

    Matching versions of numpy, matplotlib, scipy, and sklearn for Python 3.6: install numpy first, then scipy, then matplotlib; these three have fairly specific versions for Python 3.6. The sklearn site doesn't list a version for Python 3.6, but installing from the Tsinghua mirror gets a matching one (and is faster). https://blog.csdn.net/Vertira/article/details/122317251

    Fast installs of numpy and matplotlib from the Tsinghua mirror:
    pip install numpy -i https://pypi.tuna.tsinghua.edu.cn/simple
    pip install matplotlib -i https://pypi.tuna.tsinghua.edu.cn/simple
    https://blog.csdn.net/Vertira/article/details/122171528

    Download link for the svm_hog Python source:

    https://download.csdn.net/download/Vertira/74005192

    The downloaded archive and its extracted files:

     

    After the packages are configured, run hog_svm.py directly (note: change raw_input, which is Python 2, to input, which is Python 3).

    Program output after running.

    Changes in the folder:

    A few new folders appear, plus a file with the test output, result.txt. The three new folders all contain the features and parameters produced by training.

    result.txt contains the predictions.

  • Simple image classification with SVM in Python

    2021-11-05 11:22:08

    1. Data preprocessing

    import numpy as np
    from matplotlib import pyplot as plt
    from sklearn import svm
    from sklearn.datasets  import load_digits
    from sklearn.model_selection  import train_test_split
    from sklearn.metrics import accuracy_score
    import cv2
    import os
    import pickle
    from PIL import Image
    
    SHAPE = (30, 30)
    

    1.1 The file structure is as follows

    You don't need many images; a few per class is enough.
    Just substitute your own images and directory.
    (image: folder structure)

        def getImageData(self, directory):
            feature_list = list()
            label_list = list()
            for root, dirs, files in os.walk(directory):
                for d in dirs:
                    class_dir = os.path.join(root, d)
                    for image in os.listdir(class_dir):
                        label_list.append(d)
                        # call through self rather than the bare class name
                        # (Svm_derection.extractFeaturesFromImage(...) would
                        # omit the instance argument)
                        feature_list.append(self.extractFeaturesFromImage(os.path.join(class_dir, image)))

            return np.asarray(feature_list), np.asarray(label_list)
    

    1.2 Next, the image preprocessing function (called above)

        def extractFeaturesFromImage(self,image_file):
            img = cv2.imread(image_file)
            img = cv2.resize(img, self.SHAPE, interpolation=cv2.INTER_CUBIC)
            img = img.flatten()
            img = img / np.mean(img)
            return img
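            # The img / np.mean(img) step rescales each image so its mean
            # intensity is 1, removing overall brightness differences between
            # photos. A numpy-only sketch of that step (synthetic pixels
            # standing in for a cv2 image, so no image file is needed):

            # rng = np.random.RandomState(0)
            # img = rng.randint(0, 256, size=(30, 30, 3)).astype(float)
            # flat = img.flatten()                # 2700-dim feature vector
            # normalized = flat / np.mean(flat)   # mean becomes exactly 1

    A standalone version of that normalization sketch, runnable on its own:

    ```python
    import numpy as np

    # Synthetic 30x30x3 "image" with an arbitrary brightness level
    # (stand-in for cv2.imread + cv2.resize output).
    rng = np.random.RandomState(0)
    img = rng.randint(0, 256, size=(30, 30, 3)).astype(float)

    flat = img.flatten()              # 2700-dim feature vector
    normalized = flat / np.mean(flat)

    print(normalized.shape)  # (2700,)
    ```

    After this scaling the vector's mean is exactly 1, so two photos of the same scene taken under different lighting map to nearly the same feature vector.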
    

    2. Training the SVM model

        def train(self, dir):
            # fetch the data (getImageData is defined on this class)
            feature_array, label_array = self.getImageData(self.directory)
            # split the data
            X_train, X_test, y_train, y_test = train_test_split(feature_array, label_array, test_size=0.2, random_state=42)

            print("shape of raw image data: {0}".format(feature_array.shape))
            print("shape of training data: {0}".format(X_train.shape))
            print("shape of test data: {0}".format(X_test.shape))
            # choose the model
            clf = svm.SVC(gamma=0.001, C=100., probability=True)
            # train the model
            clf.fit(X_train, y_train)
            # test the model
            Ypred = clf.predict(X_test)

            print("pred", Ypred)
            print("test", y_test)
            # save the model
            pickle.dump(clf, open("digits_svm.pkl", "wb"))
    

    3. Loading and using the model

        def test(self, path, img_file):
            pkl_file = open(path, 'rb')
            clf = pickle.load(pkl_file)
            # the flattened feature length is 2700 = 30 * 30 * 3, so (1, -1)
            # reshapes one feature vector into a single-sample batch
            Ypred = clf.predict(np.reshape(self.extractFeaturesFromImage(img_file), (1, -1)))
            return Ypred
    

    4. Running the code

    path = 'digits_svm.pkl'
    img_file = 'derection/f2/1.jpg'
    sd = Svm_derection()
    t = sd.test(path, img_file)
    print(t)
    img = Image.open(os.path.join('derection/f1/1.jpg'))
    plt.figure("Image")  # window title
    plt.imshow(img)
    plt.axis('off')  # hide the axes
    plt.title(t)  # show the prediction as the title
    plt.show()
    

    5. Results

    (image: prediction displayed as the figure title)

  • SVM classification that can classify images; mainly useful for understanding support vector machines
  • An SVM classifier implemented in Python

    2020-12-03 21:08:24

    本作业的目标如下:

    implement a fully-vectorized loss function for the SVM

    implement the fully-vectorized expression for its analytic gradient

    check your implementation using numerical gradient

    use a validation set to tune the learning rate and regularization strength

    optimize the loss function with SGD

    visualize the final learned weights

    Normalizing the images

    mean_image = np.mean(X_train, axis=0)
    print(mean_image[:10])  # print a few of the elements

    plt.figure(figsize=(4, 4))
    plt.imshow(mean_image.reshape((32, 32, 3)).astype('uint8'))  # visualize the mean image
    plt.show()

    X_train -= mean_image
    X_val -= mean_image
    X_test -= mean_image
    X_dev -= mean_image
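    A tiny numeric sketch of what this mean-image subtraction does, using synthetic data in place of CIFAR-10: after subtracting the per-pixel mean of the training set, every pixel of the training data has zero mean, and the val/test splits are shifted by the same (training) mean rather than their own.

    ```python
    import numpy as np

    rng = np.random.RandomState(0)
    X_train = rng.rand(100, 3072) * 255  # stand-in rows for CIFAR-10 (32*32*3)
    X_test = rng.rand(20, 3072) * 255

    mean_image = np.mean(X_train, axis=0)
    X_train = X_train - mean_image
    X_test = X_test - mean_image         # test set uses the *training* mean

    print(np.allclose(np.mean(X_train, axis=0), 0))
    ```

    Centering the data this way keeps the learned weights comparable across pixels and is standard preprocessing before a linear classifier.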

    Computing the loss function and gradient

    def svm_loss_naive(W, X, y, reg):
        dW = np.zeros(W.shape)  # initialize the gradient as zero

        # compute the loss and the gradient
        num_classes = W.shape[1]
        num_train = X.shape[0]
        loss = 0.0
        for i in range(num_train):
            scores = X[i].dot(W)
            correct_class_score = scores[y[i]]
            for j in range(num_classes):
                if j == y[i]:
                    continue
                margin = scores[j] - correct_class_score + 1  # note delta = 1
                if margin > 0:
                    loss += margin
                    dW[:, y[i]] += -X[i, :]  # gradient w.r.t. the correct class
                    dW[:, j] += X[i, :]      # gradient w.r.t. the wrong class

        # Right now the loss is a sum over all training examples, but we want it
        # to be an average instead so we divide by num_train.
        loss /= num_train
        dW /= num_train

        # Add regularization to the loss and its gradient; with the
        # 0.5 * reg * sum(W * W) convention the gradient term is reg * W.
        loss += 0.5 * reg * np.sum(W * W)
        dW += reg * W

        return loss, dW

    Gradient check

    from random import randrange

    def grad_check_sparse(f, x, analytic_grad, num_checks=10, h=1e-5):
        """
        Sample a few random elements and compare the numerical gradient
        against the analytic gradient in those dimensions only.
        """
        for i in range(num_checks):
            ix = tuple([randrange(m) for m in x.shape])

            oldval = x[ix]
            x[ix] = oldval + h  # increment by h
            fxph = f(x)         # evaluate f(x + h)
            x[ix] = oldval - h  # decrement by h
            fxmh = f(x)         # evaluate f(x - h)
            x[ix] = oldval      # reset

            grad_numerical = (fxph - fxmh) / (2 * h)
            grad_analytic = analytic_grad[ix]
            rel_error = abs(grad_numerical - grad_analytic) / (abs(grad_numerical) + abs(grad_analytic))
            print('numerical: %f analytic: %f, relative error: %e' % (grad_numerical, grad_analytic, rel_error))

    from cs231n.gradient_check import grad_check_sparse

    loss, grad = svm_loss_naive(W, X_dev, y_dev, 0.0)
    f = lambda w: svm_loss_naive(w, X_dev, y_dev, 0.0)[0]
    grad_numerical = grad_check_sparse(f, W, grad)

    # do the gradient check once again with regularization turned on
    # you didn't forget the regularization gradient did you?
    loss, grad = svm_loss_naive(W, X_dev, y_dev, 1e2)
    f = lambda w: svm_loss_naive(w, X_dev, y_dev, 1e2)[0]
    grad_numerical = grad_check_sparse(f, W, grad)

    Computing the loss function and gradient with vectorized operations

    def svm_loss_vectorized(W, X, y, reg):
        loss = 0.0
        dW = np.zeros(W.shape)  # initialize the gradient as zero

        num_train = X.shape[0]
        num_classes = W.shape[1]
        scores = X.dot(W)  # N by C
        scores_correct = scores[np.arange(num_train), y]  # 1 by N
        scores_correct = np.reshape(scores_correct, (num_train, 1))  # N by 1
        margins = scores - scores_correct + 1.0  # N by C
        margins[np.arange(num_train), y] = 0.0
        margins[margins <= 0] = 0.0
        loss += np.sum(margins) / num_train
        loss += 0.5 * reg * np.sum(W * W)

        # backward pass: each positive margin contributes +X[i] to its class
        # column and -X[i] to the correct class's column
        margins[margins > 0] = 1.0
        row_sum = np.sum(margins, axis=1)  # 1 by N
        margins[np.arange(num_train), y] = -row_sum
        dW += np.dot(X.T, margins) / num_train + reg * W  # D by C

        return loss, dW
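    As a sanity check, the vectorized version should agree with a per-example double loop to within floating-point error. The sketch below is self-contained (compact copies of both computations, random toy data), so it runs on its own rather than depending on the CS231n scaffolding:

    ```python
    import numpy as np

    def svm_loss_vectorized(W, X, y, reg):
        num_train = X.shape[0]
        scores = X.dot(W)
        correct = scores[np.arange(num_train), y].reshape(num_train, 1)
        margins = np.maximum(0.0, scores - correct + 1.0)
        margins[np.arange(num_train), y] = 0.0
        loss = np.sum(margins) / num_train + 0.5 * reg * np.sum(W * W)
        mask = (margins > 0).astype(float)
        mask[np.arange(num_train), y] = -np.sum(mask, axis=1)
        dW = X.T.dot(mask) / num_train + reg * W
        return loss, dW

    def svm_loss_naive(W, X, y, reg):
        # reference implementation: explicit loops over examples and classes
        dW = np.zeros(W.shape)
        loss = 0.0
        for i in range(X.shape[0]):
            scores = X[i].dot(W)
            for j in range(W.shape[1]):
                if j == y[i]:
                    continue
                margin = scores[j] - scores[y[i]] + 1
                if margin > 0:
                    loss += margin
                    dW[:, y[i]] -= X[i]
                    dW[:, j] += X[i]
        loss = loss / X.shape[0] + 0.5 * reg * np.sum(W * W)
        dW = dW / X.shape[0] + reg * W
        return loss, dW

    rng = np.random.RandomState(42)
    W = rng.randn(5, 3) * 0.01   # D by C
    X = rng.randn(10, 5)         # N by D
    y = rng.randint(3, size=10)  # N labels in 0..C-1

    l1, g1 = svm_loss_naive(W, X, y, 0.1)
    l2, g2 = svm_loss_vectorized(W, X, y, 0.1)
    print(abs(l1 - l2) < 1e-8, np.allclose(g1, g2))
    ```

    Agreement on random inputs like this is the usual quick check before trusting the vectorized version on the full dataset.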

    Stochastic gradient descent

    def train(self, X, y, learning_rate=1e-3, reg=1e-5, num_iters=100,
              batch_size=200, verbose=True):
        num_train, dim = X.shape
        # assume y takes values 0...K-1 where K is number of classes
        num_classes = np.max(y) + 1
        if self.W is None:
            # lazily initialize W
            self.W = 0.001 * np.random.randn(dim, num_classes)  # D by C

        # Run stochastic gradient descent (mini-batch) to optimize W
        loss_history = []
        for it in range(num_iters):
            # Sampling with replacement is faster than sampling without.
            sample_index = np.random.choice(num_train, batch_size, replace=True)
            X_batch = X[sample_index, :]  # batch_size by D
            y_batch = y[sample_index]     # 1 by batch_size

            # evaluate loss and gradient
            loss, grad = self.loss(X_batch, y_batch, reg)
            loss_history.append(loss)

            # perform parameter update
            self.W += -learning_rate * grad

            if verbose and it % 100 == 0:
                print('Iteration %d / %d: loss %f' % (it, num_iters, loss))

        return loss_history

    How the loss changes during gradient descent

    # A useful debugging strategy is to plot the loss as a function of
    # iteration number:
    plt.plot(loss_hist)
    plt.xlabel('Iteration number')
    plt.ylabel('Loss value')
    plt.show()

    Prediction

    def predict(self, X):
        # print(X.shape, self.W.shape)
        scores = X.dot(self.W)
        y_pred = np.argmax(scores, axis=1)
        return y_pred

    Hyperparameter search on the validation set

    learning_rates = [1.4e-7, 1.5e-7, 1.6e-7]
    regularization_strengths = ([(1 + i * 0.1) * 1e4 for i in range(-3, 3)] +
                                [(2 + 0.1 * i) * 1e4 for i in range(-3, 3)])

    results = {}
    best_val = -1    # The highest validation accuracy that we have seen so far.
    best_svm = None  # The LinearSVM object that achieved the highest validation rate.

    for rs in regularization_strengths:
        for lr in learning_rates:
            svm = LinearSVM()
            loss_hist = svm.train(X_train, y_train, lr, rs, num_iters=3000)
            y_train_pred = svm.predict(X_train)
            train_accuracy = np.mean(y_train == y_train_pred)
            y_val_pred = svm.predict(X_val)
            val_accuracy = np.mean(y_val == y_val_pred)
            if val_accuracy > best_val:
                best_val = val_accuracy
                best_svm = svm
            results[(lr, rs)] = train_accuracy, val_accuracy

    # Print out results.
    for lr, reg in sorted(results):
        train_accuracy, val_accuracy = results[(lr, reg)]
        print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
            lr, reg, train_accuracy, val_accuracy))

    print('best validation accuracy achieved during cross-validation: %f' % best_val)
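    The bookkeeping pattern above (a dict keyed by (lr, reg) tuples, tracking the best validation score seen so far) can be exercised on its own with made-up accuracies; the numbers below are invented for illustration, not results from the post:

    ```python
    # Toy stand-in results: (lr, reg) -> (train_accuracy, val_accuracy).
    results = {
        (1.4e-7, 1e4): (0.370, 0.365),
        (1.5e-7, 1e4): (0.382, 0.378),
        (1.6e-7, 2e4): (0.375, 0.371),
    }

    best_val = -1.0
    best_params = None
    for params, (train_acc, val_acc) in results.items():
        # select on validation accuracy, never on training accuracy
        if val_acc > best_val:
            best_val = val_acc
            best_params = params

    print(best_params, best_val)  # (1.5e-07, 10000.0) 0.378
    ```

    Selecting by validation accuracy rather than training accuracy is the point of the split: the best training score often belongs to an overfit setting.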

    Visualizing the cross-validation results

    import math

    x_scatter = [math.log10(x[0]) for x in results]
    y_scatter = [math.log10(x[1]) for x in results]

    # plot training accuracy
    marker_size = 100
    colors = [results[x][0] for x in results]
    plt.subplot(2, 1, 1)
    plt.scatter(x_scatter, y_scatter, marker_size, c=colors)
    plt.colorbar()
    plt.xlabel('log learning rate')
    plt.ylabel('log regularization strength')
    plt.title('CIFAR-10 training accuracy')

    # plot validation accuracy
    colors = [results[x][1] for x in results]  # default size of markers is 20
    plt.subplot(2, 1, 2)
    plt.scatter(x_scatter, y_scatter, marker_size, c=colors)
    plt.colorbar()
    plt.xlabel('log learning rate')
    plt.ylabel('log regularization strength')
    plt.title('CIFAR-10 validation accuracy')
    plt.show()

    Applying the resulting best SVM to the test set gives an accuracy of 0.381000.

    Visualizing the learned weights as per-class images

    w = best_svm.W[:-1, :]  # strip out the bias
    w = w.reshape(32, 32, 3, 10)
    w_min, w_max = np.min(w), np.max(w)
    classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
    for i in range(10):
        plt.subplot(2, 5, i + 1)
        # Rescale the weights to be between 0 and 255
        wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min)
        plt.imshow(wimg.astype('uint8'))
        plt.axis('off')
        plt.title(classes[i])

  • A convolutional neural network (CNN) extracts the features; an SVM classifier does the training and classification
  • A study of a text classification model based on LSA and SVM; the paper proposes a text classification algorithm based on semantic recognition of title categories
  • By 快乐虾 (reposting welcome; please keep the author information). OpenCV ships with an SVM classifier, and this post tries to call it from Python. Like the Bayes classifier before it, the SVM follows a train-first-then-use pattern; we make simple changes to the Bayes classifier's test code to classify two sets of data points...
  • @python, sklearn, svm, remote-sensing data classification, code example. python_sklearn_svm remote-sensing classification example. (1) Brief SVM theory: the support vector machine (SVM) covers classification, regression, and outlier detection...
  • I need to train a classification model so any new image can be classified as good/bad. SVM seems the best way to do this. I've done the image processing in MATLAB but not in Python. Can anyone suggest how to do it in Python? What libraries? For...
  • Binary image classification with an SVM

    2018-12-07 10:04:02
    Extract SIFT features from the images and train an SVM classifier for binary image classification; tested and working, free to use
  • Python code combining a convolutional neural network with a support vector machine
  • SVM-based RGB image segmentation, separating an image's foreground from its background
  • The code is a Python implementation of SVM classification. The original chapter title has little to do with the code; the method for preparing the data is missing, and the source image data is nowhere to be found. In short, it is classifier practice. The source code ships with K=30 already chosen; I just tried out how it was selected...
  • SVM-based remote-sensing image classification in Python

    2020-12-03 06:39:33
    The support vector machine (SVM) is a class of generalized linear classifiers that performs binary classification of data by supervised learning; its decision boundary is the maximum-margin hyperplane solved from the training samples...
  • Image classification with an SVM in Python

    2018-11-19 17:03:28
    svm & linear classifier; bias trick; loss function; regularization; optimization; main code; importing and preprocessing the data; computing the svm loss function and gradient; verifying the gradient formula; comparing run times; svm training and prediction, ...
  • A Python program that calls the HOG algorithm to extract feature vectors and uses the SVM algorithm for training and classification; the included program runs correctly. If you have any questions, leave a comment and I will try to resolve them
  • SVM image segmentation, MATLAB and Python implementations

    2020-06-29 17:25:20
    % Use an SVM to segment the ducks from the lake surface
    % Open a dialog to select the input image file
    [filename, pathname, flag] = uigetfile('.jpg', 'Please select an image file');
    Duck = imread([pathname, filename]);
    % Use the ColorPix tool to pick a few representative lake-surface points from the image...
  • Python source code for the SVM algorithm
  • Change 1: the input image is user-specified instead of hard-coded. Change 2: fixes the missing sift/svm component types. Change 3: fixes wrong labels during SVM training. Change 4: fixes the SVM model failing to load, among others. Note: the labels passed to SVM.train must be integers. The code is as follows: import ...
  • HOG+SVM image detection pipeline -- Python

    2022-03-24 16:00:57
    Image detection using HOG and SVM, with the detailed pipeline and its Python implementation. (To be continued...) 6. Once the model is tuned, it can be used to detect images; to highlight the detection results better...
  • SIFT+SVM image classification

    2021-01-13 23:02:17
    import sklearn.svm as svm; import joblib; def calcSiftFeature(img): # cap the image's SIFT keypoints at 200; sift = cv2.xfeatures2d.SURF_create() # compute the image's keypoints and descriptors; keypoints, features = sift....
  • A detailed walkthrough of implementing image segmentation in Python, explained thoroughly through sample code; a useful reference for study or work
  • Theory: the SVM was proposed in 1964 and developed rapidly after the 1990s, spawning a series of improved and extended algorithms applied to pattern-recognition problems such as face recognition and text classification. The support vector machine (SVM) is a class of ...
  • Since this is a multi-class classification problem, the SVM is implemented with both the one-vs-one and one-vs-rest approaches, and the results of both are compared with linear and RBF kernels. # Steps to run the code: use binary.py to convert all chromatic images to binary. Run
