tensorboard
TensorBoard is a utility provided by TensorFlow that can display the computational graph graphically.
TensorBoard is a suite of visualization tools provided by TensorFlow that helps developers conveniently understand, debug, and optimize TensorFlow programs [1].
  • tensorboard

    1K+ reads  2017-02-10 15:23:08
    Tags: tensorboard, tensorflow

    TensorFlow was installed earlier; let's upgrade it first.

    step1:

    # sudo pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0rc1-cp27-none-linux_x86_64.whl

    After the upgrade finishes:
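    A quick sanity check that the upgrade took effect:

    $ python -c "import tensorflow as tf; print(tf.__version__)"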

    step2:

    Run tensorboard:

    $ python /usr/local/lib/python2.7/dist-packages/tensorflow/tensorboard/tensorboard.py --logdir=path/to/log-directory  (specify your own log directory)
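    (If the tensorboard launcher script is on your PATH, which recent pip installs set up, the shorter form tensorboard --logdir=path/to/log-directory works as well.)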

    
    
    step 3:

    Enter in the browser:

    http://localhost:6006/

    and you should see the TensorBoard page.


  • Using TensorBoard

    10K+ reads, many upvotes  2018-02-20 14:12:13

    Background

    In a complex problem, the network is often complicated as well. To make it easier to tune parameters and adjust the network structure, we need to visualize the computation graph, so that we can make better decisions about the next step. TensorFlow provides the TensorBoard tool for exactly this need.

    Introduction

    TensorBoard is a visualization tool that can effectively display the computation graph of a running TensorFlow program, the trends of various metrics over time, and the data used during training. See the TensorBoard GitHub README for detailed usage.
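    As a minimal sketch of the "metrics over time" part, the snippet below logs one scalar per step with the TF 1.x tf.summary API (the tag 'loss', the log directory, and the fake loss values are purely illustrative):

    import tensorflow as tf

    loss_ph = tf.placeholder(tf.float32, name='loss_value')  # fed with the current loss each step
    loss_summary = tf.summary.scalar('loss', loss_ph)

    with tf.Session() as sess:
        writer = tf.summary.FileWriter('./scalar_log', sess.graph)
        for step in range(100):
            fake_loss = 1.0 / (step + 1)  # stand-in for a real training loss
            s = sess.run(loss_summary, feed_dict={loss_ph: fake_loss})
            writer.add_summary(s, step)   # step becomes the x-axis in TensorBoard
        writer.close()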

    A simple example

    The following simple example demonstrates the basic usage; the resulting graph shows the structure of a vector addition.

    import tensorflow as tf

    # a constant input and a variable input, named so they are easy to find in the graph
    a = tf.constant([1.0, 2.0, 3.0], name='input1')
    b = tf.Variable(tf.random_uniform([3]), name='input2')
    add = tf.add_n([a, b], name='addOP')
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # write the graph definition to the log directory that TensorBoard will read
        writer = tf.summary.FileWriter("E://TensorBoard//test", sess.graph)
        print(sess.run(add))
    writer.close()
    

    After the program runs, the graph structure is saved as a log file in the given path.
    Then start TensorBoard from the command line. TensorBoard runs as a local service on port 6006; simply enter 127.0.0.1:6006 in a browser to open it. Below are the TensorBoard interface and the graph corresponding to the code above.

    tensorboard --logdir=E://TensorBoard//test

    [figure: the TensorBoard web interface]
    We can briefly read the graph structure below. Ellipses represent operations, shaded nodes represent namespaces, and small circles represent constants. Dashed arrows represent dependencies, and solid arrows represent data flow.
    Our program performs an addition: random_uniform first generates a 3-element vector that feeds the variable node input2, which in turn depends on the init operation for variable initialization. The input2 node feeds its result into the addOP operation node, the constant node input1 also feeds its data into addOP, and addOP finally completes the computation.
    [figure: the graph for the code above]

    Adding namespaces

    The previous program is only a simple addition, yet its graph is quite cluttered: the core add operation does not stand out, while initialization operations take up most of the graph. This makes the graph hard to analyze, so we need a way to hide the details and keep only the key parts. Namespaces give us exactly that.
    In the graph above, the core is two inputs feeding an addition operation, so we hide everything else and keep only that core.

    import tensorflow as tf

    # group each input under its own namespace so the graph view stays tidy
    with tf.variable_scope('input1'):
        input1 = tf.constant([1.0, 2.0, 3.0], name='input1')
    with tf.variable_scope('input2'):
        input2 = tf.Variable(tf.random_uniform([3]), name='input2')
    add = tf.add_n([input1, input2], name='addOP')
    with tf.Session() as sess:
        init = tf.global_variables_initializer()
        sess.run(init)
        writer = tf.summary.FileWriter("E://TensorBoard//test", sess.graph)
        print(sess.run(add))
    writer.close()
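    Note: tf.name_scope would achieve the same visual grouping here; tf.variable_scope additionally affects variable sharing via tf.get_variable, which this example does not use (the MNIST example below uses tf.name_scope).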

    [figure: the graph after grouping nodes into namespaces]
    In the same way, we can reorganize the MNIST neural-network code (I am not sure the organizational logic is entirely correct).

    from tensorflow.examples.tutorials.mnist import input_data
    import tensorflow as tf
    
    mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
    
    batch_size = 100
    hidden1_nodes = 200
    with tf.name_scope('Input'):
        x = tf.placeholder(tf.float32,shape=(None,784))
        y = tf.placeholder(tf.float32,shape=(None,10))
    with tf.name_scope('Inference'):
        w1 = tf.Variable(tf.random_normal([784,hidden1_nodes],stddev=0.1))
        w2 = tf.Variable(tf.random_normal([hidden1_nodes,10],stddev=0.1))
        b1 = tf.Variable(tf.random_normal([hidden1_nodes],stddev=0.1))
        b2 = tf.Variable(tf.random_normal([10],stddev=0.1))
        hidden = tf.nn.relu(tf.matmul(x,w1)+b1)
        y_predict = tf.nn.relu(tf.matmul(hidden,w2)+b2)
    
    with tf.name_scope('Loss'):
        cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=y_predict))
    with tf.name_scope('Train'):
        train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
    with tf.name_scope('Accuracy'):
        correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_predict, 1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    
    
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for i in range(10000):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(train_step, feed_dict={x: batch_xs, y: batch_ys})
            if i % 1000 == 0:
                # integer division so the phase label prints as 1, 2, ... under Python 3
                print('Phase' + str(i // 1000 + 1) + ':', sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels}))
    writer = tf.summary.FileWriter("./mnist_nn_log",sess.graph)
    writer.close()

    [figure: the graph of the MNIST network]
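    To view this graph, point TensorBoard at the new log directory (tensorboard --logdir=./mnist_nn_log) and open 127.0.0.1:6006 as before.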

  • TensorBoard TensorBoard is a suite of web applications for inspecting and understanding your TensorFlow runs and graphs. This README gives an overview of key concepts in TensorBoard, as well as how ...
  • Using TensorBoard

    10K+ reads  2017-03-31 10:30:07

    First, run a small example program. Note that it was written against the pre-1.0 TensorFlow summary API (tf.scalar_summary, tf.histogram_summary, tf.merge_all_summaries, tf.initialize_all_variables, tf.train.SummaryWriter); from TensorFlow 1.0 on, these became tf.summary.scalar, tf.summary.histogram, tf.summary.merge_all, tf.global_variables_initializer, and tf.summary.FileWriter.

    '''
    Graph and Loss visualization using Tensorboard.
    This example is using the MNIST database of handwritten digits
    (http://yann.lecun.com/exdb/mnist/)
    Author: Aymeric Damien
    Project: https://github.com/aymericdamien/TensorFlow-Examples/
    '''
    
    from __future__ import print_function
    
    import tensorflow as tf
    
    # Import MNIST data
    from tensorflow.examples.tutorials.mnist import input_data
    mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
    
    # Parameters
    learning_rate = 0.01
    training_epochs = 25
    batch_size = 100
    display_step = 1
    logs_path = '/tmp/tensorflow_logs/example'
    
    # Network Parameters
    n_hidden_1 = 256 # 1st layer number of features
    n_hidden_2 = 256 # 2nd layer number of features
    n_input = 784 # MNIST data input (img shape: 28*28)
    n_classes = 10 # MNIST total classes (0-9 digits)
    
    # tf Graph Input
    # mnist data image of shape 28*28=784
    x = tf.placeholder(tf.float32, [None, 784], name='InputData')
    # 0-9 digits recognition => 10 classes
    y = tf.placeholder(tf.float32, [None, 10], name='LabelData')
    
    
    # Create model
    def multilayer_perceptron(x, weights, biases):
        # Hidden layer with RELU activation
        layer_1 = tf.add(tf.matmul(x, weights['w1']), biases['b1'])
        layer_1 = tf.nn.relu(layer_1)
        # Create a summary to visualize the first layer ReLU activation
        # in TF >= 1.0, use: tf.summary.histogram("relu1", layer_1)
        tf.histogram_summary("relu1", layer_1)
        # Hidden layer with RELU activation
        layer_2 = tf.add(tf.matmul(layer_1, weights['w2']), biases['b2'])
        layer_2 = tf.nn.relu(layer_2)
        # Create another summary to visualize the second layer ReLU activation
        # in TF >= 1.0, use: tf.summary.histogram("relu2", layer_2)
        tf.histogram_summary("relu2", layer_2)
        # Output layer
        out_layer = tf.add(tf.matmul(layer_2, weights['w3']), biases['b3'])
        return out_layer
    
    # Store layers weight & bias
    weights = {
        'w1': tf.Variable(tf.random_normal([n_input, n_hidden_1]), name='W1'),
        'w2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2]), name='W2'),
        'w3': tf.Variable(tf.random_normal([n_hidden_2, n_classes]), name='W3')
    }
    biases = {
        'b1': tf.Variable(tf.random_normal([n_hidden_1]), name='b1'),
        'b2': tf.Variable(tf.random_normal([n_hidden_2]), name='b2'),
        'b3': tf.Variable(tf.random_normal([n_classes]), name='b3')
    }
    
    # Encapsulating all ops into scopes, making Tensorboard's Graph
    # Visualization more convenient
    with tf.name_scope('Model'):
        # Build model
        pred = multilayer_perceptron(x, weights, biases)
    
    with tf.name_scope('Loss'):
        # Softmax Cross entropy (cost function)
        loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))
    
    with tf.name_scope('SGD'):
        # Gradient Descent
        optimizer = tf.train.GradientDescentOptimizer(learning_rate)
        # Op to calculate every variable gradient
        grads = tf.gradients(loss, tf.trainable_variables())
        grads = list(zip(grads, tf.trainable_variables()))
        # Op to update all variables according to their gradient
        apply_grads = optimizer.apply_gradients(grads_and_vars=grads)
    
    with tf.name_scope('Accuracy'):
        # Accuracy
        acc = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
        acc = tf.reduce_mean(tf.cast(acc, tf.float32))
    
    # Initializing the variables
    init = tf.initialize_all_variables()
    
    # Create a summary to monitor cost tensor
    tf.scalar_summary("loss", loss)
    # Create a summary to monitor accuracy tensor
    tf.scalar_summary("accuracy", acc)
    # Create summaries to visualize weights
    for var in tf.trainable_variables():
        tf.histogram_summary(var.name, var)
    # Summarize all gradients
    for grad, var in grads:
        tf.histogram_summary(var.name + '/gradient', grad)
    # Merge all summaries into a single op
    merged_summary_op = tf.merge_all_summaries()
    
    # Launch the graph
    with tf.Session() as sess:
        sess.run(init)
    
        # op to write logs to Tensorboard
        summary_writer = tf.train.SummaryWriter(logs_path,
                                                graph=tf.get_default_graph())
    
        # Training cycle
        for epoch in range(training_epochs):
            avg_cost = 0.
            total_batch = int(mnist.train.num_examples/batch_size)
            # Loop over all batches
            for i in range(total_batch):
                batch_xs, batch_ys = mnist.train.next_batch(batch_size)
                # Run optimization op (backprop), cost op (to get loss value)
                # and summary nodes
                _, c, summary = sess.run([apply_grads, loss, merged_summary_op],
                                         feed_dict={x: batch_xs, y: batch_ys})
                # Write logs at every iteration
                summary_writer.add_summary(summary, epoch * total_batch + i)
                # Compute average loss
                avg_cost += c / total_batch
            # Display logs per epoch step
            if (epoch+1) % display_step == 0:
                print("Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(avg_cost))
    
        print("Optimization Finished!")
    
        # Test model
        # Calculate accuracy
        print("Accuracy:", acc.eval({x: mnist.test.images, y: mnist.test.labels}))
    
        print("Run the command line:\n" \
              "--> tensorboard --logdir=/tmp/tensorflow_logs " \
    "\nThen open http://0.0.0.0:6006/ into your web browser")

    Run the program.

    After it finishes, you can find an event log file such as "events.out.tfevents.1490276692.inspur.datanode7.com" under the "/tmp/tensorflow_logs/example" directory (the file name encodes a Unix timestamp and the host name).

    Enter "tensorboard --logdir=/tmp/tensorflow_logs".

    At this point I ran into a problem; see below:

    [root@inspur example]# tensorboard --logdir=/tmp/tensorflow_logs
    ERROR:tensorflow:Tried to connect to port 6006, but address is in use.
    
    Port 6006 is already occupied, so kill the process that holds it.

    [root@inspur example]# lsof -i:6006
    COMMAND     PID USER   FD   TYPE   DEVICE SIZE/OFF NODE NAME
    tensorboa 28508 root    4u  IPv4 18373697      0t0  TCP *:6006 (LISTEN)
    [root@inspur example]# kill -9 28508
    [root@inspur example]# tensorboard --logdir=/tmp/tensorflow_logs
    Starting TensorBoard b'23' on port 6006
    (You can navigate to http://0.0.0.0:6006)
    

    The process with PID 28508 was occupying port 6006; the kill command removes it.
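    Alternatively, leave the old process running and bind TensorBoard to a different port instead: tensorboard --logdir=/tmp/tensorflow_logs --port 6007.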

    Also, the program runs on a server, so when viewing from your local machine, enter "server-IP:6006" in the browser instead.
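    If the server's port 6006 is not directly reachable (blocked by a firewall, say), one common workaround is an SSH tunnel; user@server below is a placeholder for your own login:

    ssh -L 16006:localhost:6006 user@server

    and then browse to http://localhost:16006 on the local machine.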

  • Using tensorboard

    1K+ reads  2019-08-01 16:42:40
    1. Install TensorFlow (optional, but without it the command in step 2 prints the warning: TensorFlow installation not found - running with reduced feature set.). The CPU-only build is enough; on the command line run: pip install tensorflow. If the download is slow, use the Tsinghua mirror; look up the steps yourself.
      Install tensorboard: pip install tensorboard
      Install tensorboardX: pip install tensorboardX (see the sketch after this list)
    2. On the command line run: tensorboard --logdir <your/running/dir> --port <your_bind_port>, for example: tensorboard --logdir=D:\logs. Note that the path must not contain Chinese characters. Also, if a folder name in the path contains a space, wrap the whole path in double quotes, e.g. "D:\For Example\logs"
    3. Open Chrome and enter localhost:6006 in the address bar.
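    A minimal sketch of logging with tensorboardX (assuming pip install tensorboardX from step 1; the tag 'train/loss' and the dummy loss values are purely illustrative):

        from tensorboardX import SummaryWriter

        writer = SummaryWriter('D:/logs')  # the same directory later passed to --logdir
        for step in range(100):
            writer.add_scalar('train/loss', 1.0 / (step + 1), step)  # dummy value per step
        writer.close()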
  • Using tensorboard

    2017-12-28 09:18:04
    Using tensorboard. First we define an arbitrary graph:

    import tensorflow as tf
    x = tf.constant(6, name='x')
    y = tf.constant(8, name='y')
    z = tf.add(x, y, name='z')
    with tf.Session() as sess:
        writer = tf.summary.FileWriter('....
  • Using tensorboard

    2019-06-11 14:30:39
    So frustrating! A problem I ran into recently while working on distillation. Please take a look, everyone; I just want to know what curve TensorBoard is drawing here.
  • Using tensorBoard

    2019-10-25 15:34:47
    The correct way to generate tensorboard files:

    import tensorflow as tf
    # clear the default graph stack and reset the global default graph
    tf.reset_default_graph()
    # directory where the tensorboard files are stored
    logdir = r'D:/QQPCmgr/Desktop/tensorBoard'
    input1 = tf....
