  • Handwritten digit recognition with TensorFlow

    2020-09-04 14:18:15

    A CNN (Convolutional Neural Network) model built with TensorFlow and trained on the MNIST dataset to recognize handwritten digits

    Import the TensorFlow module

    import tensorflow as tf
    

    Import input_data, used to download and extract the MNIST dataset

    from tensorflow.examples.tutorials.mnist import input_data
    

    Read the dataset; if it is already downloaded, just point at the directory

    mnist = input_data.read_data_sets("./MNIST_data/", one_hot=True)
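
The one_hot=True flag above turns each integer label into a 10-dimensional indicator vector. A minimal numpy sketch of that encoding (the helper name label_to_one_hot is illustrative, not part of the TensorFlow API):

```python
import numpy as np

def label_to_one_hot(labels, num_classes=10):
    # Each row is all zeros except for a 1 at the label's index
    one_hot = np.zeros((len(labels), num_classes))
    one_hot[np.arange(len(labels)), labels] = 1.0
    return one_hot

print(label_to_one_hot([3, 0], num_classes=5))
```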
    

    Create two placeholders: x for the input images, y_ for their class labels

    x = tf.placeholder("float", shape=[None, 784])
    y_ = tf.placeholder("float", shape=[None, 10])
    

    Weight initialization function

    def weight_variable(shape):
        #Returns random values from a truncated normal distribution, stddev 0.1
        initial = tf.truncated_normal(shape, stddev=0.1)
        return tf.Variable(initial)
    

    Bias initialization function

    def bias_variable(shape):
        #Initialize to the constant 0.1
        initial = tf.constant(0.1, shape=shape)
        return tf.Variable(initial)
    

    Convolution helper

    def conv2d(x, W):
        #x is the input tensor, W the kernel; padding: SAME pads the edges with zeros, VALID does not pad
        return tf.nn.conv2d(x, W, strides=[1,1,1,1], padding="SAME")
    

    Max-pooling helper

    def max_pool_2x2(x):
        #ksize: the pooling window is 2x2, i.e. height 2 and width 2
        #strides: step size of 2 along both height and width
        return tf.nn.max_pool(x, ksize=[1,2,2,1],strides=[1,2,2,1], padding="SAME")
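
To see what the 2x2, stride-2 max-pooling does, here is a plain-numpy sketch over a single-channel image (assuming even height and width; the real tf.nn.max_pool also handles the batch and channel dimensions):

```python
import numpy as np

def max_pool_2x2_np(x):
    # x: [height, width]; split into 2x2 blocks and take each block's max
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

a = np.array([[1, 2, 5, 6],
              [3, 4, 7, 8],
              [0, 1, 1, 0],
              [2, 0, 0, 3]])
print(max_pool_2x2_np(a))  # [[4 8] [2 3]]
```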
    

    First convolutional layer (5x5 kernels), followed by pooling

    #Initialize W as a [5,5,1,6] tensor: the kernel is 5x5, 1 is the number of image channels, and 6 is the number of kernels, i.e. 6 output feature maps
    W_conv1 = weight_variable([5,5,1,6])
    #Initialize bias b with shape [6], the output depth
    b_conv1 = bias_variable([6])
    
    #Reshape the input x (a 2-D tensor of shape [batch, 784]) into the 4-D x_image, whose shape is [batch,28,28,1]
    #-1 tells TensorFlow to infer that dimension's size
    #i.e. turn x into 28*28 images of depth 1, inferring how many there are
    x_image = tf.reshape(x, [-1,28,28,1])
    
    #Convolve x_image with the weights, add the bias, apply the ReLU activation tf.nn.relu (negative values become 0, positive values pass through), then max-pool; h_pool1 is the first layer's output, with shape [batch,14,14,6] after pooling
    h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
    h_pool1 = max_pool_2x2(h_conv1)
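
As the comments note, SAME-padded stride-1 convolution preserves height and width, so only the pooling shrinks the feature maps: 28x28 becomes 14x14 after the first pool and 7x7 after the second. A quick check of that arithmetic:

```python
def spatial_size(hw, num_pools):
    # SAME-padded stride-1 conv keeps hw; each 2x2/stride-2 pool halves it
    for _ in range(num_pools):
        hw //= 2
    return hw

print(spatial_size(28, 1))  # 14 -> shape [batch,14,14,6] after pool1
print(spatial_size(28, 2))  # 7  -> shape [batch,7,7,16] after pool2
```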
    

    Second convolutional layer

    #The kernels are again 5*5; 6 input channels, 16 kernels
    W_conv2 = weight_variable([5,5,6,16])
    b_conv2 = bias_variable([16])
    
    #h_pool2 is the second layer's output, with shape [batch,7,7,16]
    #convolution
    h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
    #2x2 pooling
    h_pool2 = max_pool_2x2(h_conv2)
    

    Layer 3: a fully connected layer

    #A fully connected layer with 120 neurons
    #The first dimension of W is 7*7*16: 7*7 is the spatial size of h_pool2's output, 16 its number of channels
    W_fc1 = weight_variable([7*7*16, 120])
    b_fc1 = bias_variable([120])
    
    #Before the matmul, reshape the second layer's output into a [batch, 7*7*16] tensor
    #tf.matmul: matrix multiplication
    #tf.nn.relu keeps values above 0 unchanged and sets negative values to 0
    h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*16])
    h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
    

    Dropout layer

    #To reduce overfitting, apply dropout before the output layer
    #dropout is used to prevent or mitigate overfitting (a hypothesis made overly strict in order to fit the training data)
    keep_prob = tf.placeholder("float")
    h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
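
tf.nn.dropout keeps each activation with probability keep_prob and scales the survivors by 1/keep_prob, so the expected activations are unchanged ("inverted dropout"). A numpy sketch of the idea:

```python
import numpy as np

def dropout(x, keep_prob, rng):
    # Zero out activations with probability (1 - keep_prob),
    # scaling survivors by 1/keep_prob to preserve the expected value
    mask = rng.random(x.shape) < keep_prob
    return x * mask / keep_prob

rng = np.random.default_rng(0)
x = np.ones(10000)
y = dropout(x, 0.5, rng)
print(y.mean())  # close to 1.0: dropout preserves the mean on average
```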
    

    Output layer: the second fully connected layer

    #Add a softmax layer to turn the network's outputs into probabilities
    W_fc2 = weight_variable([120, 10])
    b_fc2 = bias_variable([10])
    
    y_conv = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)
    

    Cross-entropy between the predictions and the true labels

    #tf.log computes the natural logarithm
    #tf.reduce_sum sums along a given dimension
    cross_entropy = -tf.reduce_sum(y_ * tf.log(y_conv))
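
The softmax output and this cross-entropy can be written out in a few lines of numpy; for a one-hot y, -sum(y * log(p)) is just the negative log of the probability assigned to the true class:

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability; the result sums to 1
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(y_true, p):
    # y_true is a one-hot vector, p a probability vector
    return -np.sum(y_true * np.log(p))

p = softmax(np.array([2.0, 1.0, 0.1]))
y = np.array([1.0, 0.0, 0.0])
print(round(cross_entropy(y, p), 3))
```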
    

    Gradient descent with the Adam optimizer, learning rate 1e-4

    train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
    

    Evaluate the model

    #tf.argmax: with axis=0 it compares the elements of each column and records the index of each column's largest element; with axis=1 it compares within each row
    #tf.equal checks whether two values are equal, i.e. whether the prediction matches the true label
    correct_predict = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
    

    Compute the accuracy

    #tf.equal returns booleans, so use tf.cast to convert them to floats
    #then average them with tf.reduce_mean
    accuracy = tf.reduce_mean(tf.cast(correct_predict, "float"))
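
The accuracy computation above is just: argmax each row, compare, and average the resulting booleans. In numpy:

```python
import numpy as np

def accuracy(logits, labels_one_hot):
    # A prediction is correct when the largest logit and the one-hot
    # label point at the same class
    correct = np.argmax(logits, axis=1) == np.argmax(labels_one_hot, axis=1)
    return correct.astype(float).mean()

logits = np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]])
labels = np.array([[0, 1], [1, 0], [1, 0]])
print(accuracy(logits, labels))  # 2 of 3 rows correct
```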
    

    Instantiate a saver object

    #tf.train.Saver() saves and restores models
    saver = tf.train.Saver()
    

    Training function

    def cnn_train():
        # Create an interactive Session
        sess = tf.InteractiveSession()
        # Initialize all variables (tf.initialize_all_variables is deprecated)
        sess.run(tf.global_variables_initializer())
        # Train for 20000 iterations
        for i in range(20000):
            batch = mnist.train.next_batch(50)
            if i%100 == 0:
                # Log progress every 100 steps
                train_accuracy = accuracy.eval(feed_dict={
                    x:batch[0], y_:batch[1], keep_prob:1.0})
                print ("step %d, training accuracy %g" % (i, train_accuracy))
                # Save the model
                saver.save(sess, './model')
            train_step.run(feed_dict={x:batch[0], y_:batch[1], keep_prob:0.5})
    

    Prediction function

    def predict():
        sess = tf.InteractiveSession()
        sess.run(tf.global_variables_initializer())
        saver = tf.train.Saver(tf.global_variables())
        # Restore from the same path the model was saved to
        saver.restore(sess, './model')
        print( "test accuracy %g" % accuracy.eval(feed_dict={
            x:mnist.test.images, y_:mnist.test.labels, keep_prob:1.0}))
    

    Call the functions

    cnn_train()
    
    predict()
    
  • Convolutional neural network + tensorflow handwritten digit recognition with accuracy above 99%. Works in both CPU and GPU environments; if it exceeds GPU memory, reduce batch_size. Details are explained in the program.
  • https://blog.csdn.net/askmeaskyou/article/details/108674860 Full code for the article. MNIST handwritten digit recognition with tensorflow2, fully connected and convolutional implementations (code, models, and calling interface included)
  • Handwritten digit recognition in three files: forward propagation, backpropagation, and testing.
  • Getting started with tensorflow: a simple convolutional neural network for MNIST handwritten digit recognition

    Recently got started with tensorflow and used a simple convolutional neural network for MNIST handwritten digit recognition, following a Pluralsight tutorial. The code is as follows:

    Functions that define the filters and biases:
    
    import tensorflow as tf
    def weightVariable(shape): # to generate the filter
        initial = tf.truncated_normal(shape,stddev=1.0)
        return tf.Variable(initial)
    def biasVariable(shape):
        initial = tf.constant(0.1,shape=shape)
        return tf.Variable(initial)

    def conv2d(x,W): # input x as an image: [batch, nwidth, nheight, channels]
        return tf.nn.conv2d(x, W, strides=[1,1,1,1], padding ='SAME')

    def MaxPooling_2x2(x):
        return tf.nn.max_pool(x,ksize=[1,2,2,1], strides=[1,2,2,1], padding='SAME')
    
    Load the data and train the network on random samples from the dataset:
    
    from tensorflow.examples.tutorials.mnist import input_data
    
    mnist = input_data.read_data_sets(
        r"C:\Myfiles\deepTraining\Tensorflow\Course_tensorflow-understanding-foundations\Practice\mnist_data",one_hot=True)
    
    training_digits, training_labels = mnist.train.next_batch(200)  # draw data points randomly
    
    test_digits, test_labels = mnist.test.next_batch(100)
    
    print("Data is ready")
    print(test_digits.shape)
    print(test_labels.shape)
    # construct the net;
    
    Xinput = tf.placeholder(tf.float32,shape=[None,784])
    ylable = tf.placeholder(tf.float32,shape=[None,10])
    
    x_image = tf.reshape(Xinput,[-1,28,28,1],name="image")
    # x_image [batch, 28,28,1]
    W_conv1 = weightVariable([5, 5, 1, 32])
    # define the filter: size 5 by 5; 1 is the channel count
    # (for a color picture it would be 3);
    # 32 is the number of output feature channels, chosen by us
    b_conv1 = biasVariable([32])
    FirstConLayerOutput = conv2d(x_image, W_conv1) + b_conv1 # FirstConLayerOutput size:  [batch,28,28,32]
    # pass it through the relu function;
    # relu keeps the data the same size
    h_con1 = tf.nn.relu(FirstConLayerOutput)  # h_con1:  [batch,28,28,32]
    h_pool1 = MaxPooling_2x2(h_con1) # h_pool1:  [batch,14,14,32]
    
    W_conv2 = weightVariable([5,5,32,64]) # 32 means h_pool1 has 32 output channels
    b_conv2 = biasVariable([64])
    SecondConLayerOutput = conv2d(h_pool1, W_conv2) + b_conv2 # SecondConLayerOutput:  [batch,14,14,64]
    h_con2 = tf.nn.relu(SecondConLayerOutput) # h_con2:  [batch,14,14,64]
    h_pool2 = MaxPooling_2x2(h_con2)  # h_pool2:  [batch,7,7,64]
    
    # Then define the fully connect layer ;
    
    Wfc1 = weightVariable([7 * 7 * 64, 1024])
    bfc1 = biasVariable([1024])
    
    h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64]) # h_pool2_flat: [batch, 7*7*64]
    hfc1 = tf.nn.relu(tf.matmul(h_pool2_flat, Wfc1) + bfc1)  # hfc1: [batch, 1024]
    
    keep_prob = tf.placeholder(tf.float32)
    h_fc_drop = tf.nn.dropout(hfc1, keep_prob) # the drop out operation would not change the dimensions
    Wfc2 = weightVariable([1024,10])
    bfc2 = biasVariable([10])
    
    hfc2 = tf.matmul(h_fc_drop, Wfc2) + bfc2  # hfc2: [batch, 10]
    
    cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=hfc2,labels=ylable))
    
    trainStep = tf.train.AdamOptimizer(1e-3).minimize(cross_entropy)
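
AdamOptimizer keeps running averages of each parameter's gradient (m) and squared gradient (v) and scales every update by the bias-corrected ratio m_hat / (sqrt(v_hat) + eps). A single-parameter numpy sketch of the standard update rule (not TensorFlow's internal code):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # One step of the standard Adam update for a single parameter
    m = b1 * m + (1 - b1) * grad          # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - b1 ** t)             # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta, m, v = 1.0, 0.0, 0.0
theta, m, v = adam_step(theta, grad=2.0, m=m, v=v, t=1)
print(theta)  # the first step moves by about lr, regardless of gradient scale
```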
    
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        import time

        num_steps = 800
        display_every = 100
        print("start:")
        start_time = time.time()
        for istep in range(num_steps):

            onebatch = mnist.train.next_batch(13)

            trainStep.run(feed_dict={Xinput: onebatch[0], ylable: onebatch[1], keep_prob: 0.5})

            if istep % display_every == 0:
                # keep_prob is 1.0 at evaluation time: dropout is only for training
                Y_fit = sess.run(tf.argmax(hfc2,1),{Xinput: onebatch[0], ylable: onebatch[1], keep_prob: 1.0})
                print(str(istep)+str(Y_fit))
        print("elapsed: %.1f s" % (time.time() - start_time))
        print("#----------------------------------------------------------------#")
        # Check the training result: draw 14 random samples and compare predictions
        testbatch = mnist.train.next_batch(14)
        Y_predict = sess.run(tf.argmax(hfc2, 1),{Xinput: testbatch[0], ylable: testbatch[1], keep_prob: 1.0})
        Y_test = sess.run(tf.argmax(testbatch[1],1))
        print("Predict: ")
        print(Y_predict)
        print("Test database")
        print(Y_test)

  • A hands-on TensorFlow tutorial sharing two convolutional neural network case studies: the first on handwritten digit recognition, the second on face recognition.
  • Practicing handwritten digit recognition with the tensorflow framework

    #Practice handwritten digit recognition with the tensorflow framework
    #Load the dataset
    from tensorflow.examples.tutorials.mnist import input_data
    mnist = input_data.read_data_sets('MNIST_data/',one_hot=True)
    
    (Loading the dataset prints a series of deprecation warnings from tensorflow.contrib.learn, advising tf.data and tensorflow/models alternatives, followed by:)

    Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
    Extracting MNIST_data/train-images-idx3-ubyte.gz
    Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
    Extracting MNIST_data/train-labels-idx1-ubyte.gz
    Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
    Extracting MNIST_data/t10k-images-idx3-ubyte.gz
    Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
    Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
    
    print(mnist.train.images.shape,mnist.train.labels.shape)
    
    (55000, 784) (55000, 10)
    
    print(mnist.test.images.shape,mnist.test.labels.shape)
    
    (10000, 784) (10000, 10)
    
    print(mnist.validation.images.shape,mnist.validation.labels.shape,)
    
    (5000, 784) (5000, 10)
    
    #Open an interactive session
    import tensorflow as tf
    sess = tf.InteractiveSession()
    x = tf.placeholder(tf.float32,[None,784])
    
    #Initialize weights and biases
    W = tf.Variable(tf.zeros([784,10]))#weight matrix
    b = tf.Variable(tf.zeros([10]))#bias vector
    #Implement Softmax Regression
    y = tf.nn.softmax(tf.matmul(x,W) + b)
    
    #Define the loss function: cross-entropy
    y_ = tf.placeholder(tf.float32,[None,10])
    cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y),reduction_indices = [1]))
    
    #Define an optimizer: SGD (stochastic gradient descent)
    train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
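
For this single-layer softmax model the gradient of the mean cross-entropy has a closed form: dL/dW = x.T @ (p - y) / batch_size, and dL/db is the column sum of the same term. A numpy sketch of one SGD step using that gradient (the names and toy data here are illustrative, not part of the tutorial):

```python
import numpy as np

def softmax_rows(z):
    # Row-wise softmax with the usual max-subtraction for stability
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def nll(W, b, x, y):
    # Mean cross-entropy: -mean log p(true class)
    p = softmax_rows(x @ W + b)
    return -np.mean(np.sum(y * np.log(p), axis=1))

def sgd_step(W, b, x, y, lr=0.5):
    p = softmax_rows(x @ W + b)
    d = (p - y) / len(x)          # gradient of the mean cross-entropy wrt logits
    return W - lr * (x.T @ d), b - lr * d.sum(axis=0)

rng = np.random.default_rng(0)
x = rng.random((4, 3))
y = np.eye(2)[[0, 1, 0, 1]]       # one-hot labels, 2 classes
W, b = np.zeros((3, 2)), np.zeros(2)
before = nll(W, b, x, y)          # ln 2 with all-zero weights
W, b = sgd_step(W, b, x, y)
print(before, '->', nll(W, b, x, y))  # loss decreases after the step
```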
    
    #Initialize all global variables
    tf.global_variables_initializer().run()
    
    #Start training
    for i in range(1000):
        batch_xs,batch_ys = mnist.train.next_batch(100)#each step draws a random mini_batch of 100 samples
        train_step.run({x:batch_xs,y_:batch_ys})
    print('Training finished')
    
    Training finished
    
    #Evaluate the model's accuracy
    correct_prediction = tf.equal(tf.argmax(y,1),tf.argmax(y_,1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32))
    print(accuracy.eval({x:mnist.test.images,y_:mnist.test.labels}))
    
    0.9202
    
    
    
  • Hands-on handwritten digit recognition and face recognition with TensorFlow. CEO of Lechuan Technology, AI...


     

    Video tutorial: Hands-on Handwritten Digit and Face Recognition with TensorFlow (Deep Learning)

    Access: watch forever

    Length: 87 minutes

    Study plan: 2 days

    Difficulty:

     

    "Learn alongside a well-reviewed instructor, so your questions never wait overnight"

    Instructor: Wang Erchuan

    CEO / Chairman / General Manager

    Instructor bio: CEO of Lechuan Technology and an AI training instructor specializing in machine learning and deep learning. Has worked on several AI projects with a focus on machine learning and computer vision, is a long-term participant in an autonomous-driving project concentrating on object recognition and tracking, and works on new algorithms for face recognition, object recognition, trajectory tracking, and point-cloud analysis.


     

    "What will you learn?"

    A hands-on TensorFlow tutorial sharing two convolutional neural network case studies: the first on handwritten digit recognition, the second on face recognition.

     

    "Course outline"

    Chapter 1: Handwritten digits in practice
    1. The handwritten digit dataset
    2. Arrays and images
    3. Hands-on walkthrough
    Chapter 2: Face recognition in practice
    1. How face recognition works
    2. Hands-on face recognition

     

    "7 member benefits to guarantee learning quality"

    • Expert instruction

    A technical expert walks you through the material, passing on coding approaches and hands-on experience.

    • Q&A service

    A dedicated group chat for reaching the instructor at any time, clearing away obstacles so self-taught programming is no longer hard.

    • Course materials + slides

    Practical materials covering the core knowledge and key coding skills, convenient for practice and review. (Some instructors have not uploaded attachments due to copyright concerns; thanks for understanding.)

    • Common real-world development cases

    Typical industry development cases showing the different ways Python is used at work.

    • Conference videos

    Free access to the 2019 Python developer conference videos, a chance to hear from industry leaders up close.

    • Learn anywhere on APP + PC

    Fits different settings for systematically learning a programming language, unconstrained by place or region.

     

    "Who is this course for?"

    • You want to enter the tech industry but, faced with many programming languages, don't know which to choose, and are starting from zero
    • Your development skills are narrow or niche and you urgently want to switch tracks
    • You want to join a major company but lack the programming experience to be competitive in the job market

     

    "A carefully built course: gain 3 years of project experience in 2 days"

    [A complete technical framework]

    Step-by-step technical growth that helps you master the material with ease

    Solid deep-learning knowledge and coding ability

    [A clear course structure]

    Years of expert experience distilled into a systematic map of the technology, with an emphasis on hands-on practice.

    [Course design that feels like an internship at a major company]

    Content that raises your technical level across the board; learn industry methodology you can reuse in your own work.

     

    "What will you take away?"

    Master the two case studies and be able to train your own models with convolutional neural networks.

     

  • MNIST handwritten digit recognition implemented in TensorFlow. I. Overview: this section walks through implementing a simple convolutional neural network in TensorFlow on the MNIST handwritten digit dataset, with an expected accuracy of about 99.2%. It uses two convolutional layers plus one fully...
