    TF: Commonly used functions in the TensorFlow framework: tf.Variable() and tf.get_variable(), their usage and differences

     

     

    Contents

    The TensorFlow framework

    The tensorflow.Variable() function

    The tensorflow.get_variable() function


     

    The TensorFlow framework

    tf.Variable() and tf.get_variable() create variables in essentially the same way. The biggest difference between them is the parameter that specifies the variable's name:

    • tf.Variable(): the name argument is optional.
    • tf.get_variable(): the name argument is required.
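A minimal sketch of this difference (illustrative only; it assumes the tf.compat.v1 namespace so the graph-mode API shown in this article also runs under TensorFlow 2.x):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# tf.Variable: name is optional; TF auto-generates "Variable", "Variable_1", ...
a = tf.Variable(0.0)
b = tf.Variable(0.0)
print(a.name, b.name)  # Variable:0 Variable_1:0

# tf.get_variable: the name is the first, required argument.
c = tf.get_variable('c', shape=(), initializer=tf.zeros_initializer())
print(c.name)  # c:0
```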

     

    The tensorflow.Variable() function



    
    @tf_export("Variable")
    class Variable(checkpointable.CheckpointableBase):
      """See the @{$variables$Variables How To} for a high level overview.
    
      A variable maintains state in the graph across calls to `run()`. You add a
      variable to the graph by constructing an instance of the class `Variable`.
    
      The `Variable()` constructor requires an initial value for the variable,
      which can be a `Tensor` of any type and shape. The initial value defines the
      type and shape of the variable. After construction, the type and shape of
      the variable are fixed. The value can be changed using one of the assign
      methods.
    
      If you want to change the shape of a variable later you have to use an
      `assign` Op with `validate_shape=False`.
    
      Just like any `Tensor`, variables created with `Variable()` can be used as
      inputs for other Ops in the graph. Additionally, all the operators
      overloaded for the `Tensor` class are carried over to variables, so you can
      also add nodes to the graph by just doing arithmetic on variables.
    
      ```python
      import tensorflow as tf
    
      # Create a variable.
      w = tf.Variable(<initial-value>, name=<optional-name>)
    
      # Use the variable in the graph like any Tensor.
      y = tf.matmul(w, ...another variable or tensor...)
    
      # The overloaded operators are available too.
      z = tf.sigmoid(w + y)
    
      # Assign a new value to the variable with `assign()` or a related method.
      w.assign(w + 1.0)
      w.assign_add(1.0)
      ```
    
      When you launch the graph, variables have to be explicitly initialized before
      you can run Ops that use their value. You can initialize a variable by
      running its *initializer op*, restoring the variable from a save file, or
      simply running an `assign` Op that assigns a value to the variable. In fact,
      the variable *initializer op* is just an `assign` Op that assigns the
      variable's initial value to the variable itself.
    
      ```python
      # Launch the graph in a session.
      with tf.Session() as sess:
          # Run the variable initializer.
          sess.run(w.initializer)
          # ...you now can run ops that use the value of 'w'...
      ```
    
      The most common initialization pattern is to use the convenience function
      `global_variables_initializer()` to add an Op to the graph that initializes
      all the variables. You then run that Op after launching the graph.
    
      ```python
      # Add an Op to initialize global variables.
      init_op = tf.global_variables_initializer()
    
      # Launch the graph in a session.
      with tf.Session() as sess:
          # Run the Op that initializes global variables.
          sess.run(init_op)
          # ...you can now run any Op that uses variable values...
      ```
    
      If you need to create a variable with an initial value dependent on another
      variable, use the other variable's `initialized_value()`. This ensures that
      variables are initialized in the right order.
    
      All variables are automatically collected in the graph where they are
      created. By default, the constructor adds the new variable to the graph
      collection `GraphKeys.GLOBAL_VARIABLES`. The convenience function
      `global_variables()` returns the contents of that collection.
    
      When building a machine learning model it is often convenient to distinguish
      between variables holding the trainable model parameters and other variables
      such as a `global step` variable used to count training steps. To make this
      easier, the variable constructor supports a `trainable=<bool>` parameter. If
      `True`, the new variable is also added to the graph collection
      `GraphKeys.TRAINABLE_VARIABLES`. The convenience function
      `trainable_variables()` returns the contents of this collection. The
      various `Optimizer` classes use this collection as the default list of
      variables to optimize.
    
      WARNING: tf.Variable objects have a non-intuitive memory model. A Variable is
      represented internally as a mutable Tensor which can non-deterministically
      alias other Tensors in a graph. The set of operations which consume a Variable
      and can lead to aliasing is undetermined and can change across TensorFlow
      versions. Avoid writing code which relies on the value of a Variable either
      changing or not changing as other operations happen. For example, using
      Variable objects or simple functions thereof as predicates in a `tf.cond` is
      dangerous and error-prone:
    
      ```
      v = tf.Variable(True)
      tf.cond(v, lambda: v.assign(False), my_false_fn)  # Note: this is broken.
      ```
    
      Here replacing tf.Variable with tf.contrib.eager.Variable will fix any
      nondeterminism issues.
    
      To use the replacement for variables which does
      not have these issues:
    
      * Replace `tf.Variable` with `tf.contrib.eager.Variable`;
      * Call `tf.get_variable_scope().set_use_resource(True)` inside a
        `tf.variable_scope` before the `tf.get_variable()` call.
    
      @compatibility(eager)
      `tf.Variable` is not compatible with eager execution.  Use
      `tf.contrib.eager.Variable` instead which is compatible with both eager
      execution and graph construction.  See [the TensorFlow Eager Execution
      guide](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/g3doc/guide.md#variables-and-optimizers)
      for details on how variables work in eager execution.
      @end_compatibility
      """
    
      def __init__(self,
                   initial_value=None,
                   trainable=True,
                   collections=None,
                   validate_shape=True,
                   caching_device=None,
                   name=None,
                   variable_def=None,
                   dtype=None,
                   expected_shape=None,
                   import_scope=None,
                   constraint=None):
        """Creates a new variable with value `initial_value`.
    
        The new variable is added to the graph collections listed in `collections`,
        which defaults to `[GraphKeys.GLOBAL_VARIABLES]`.
    
        If `trainable` is `True` the variable is also added to the graph collection
        `GraphKeys.TRAINABLE_VARIABLES`.
    
        This constructor creates both a `variable` Op and an `assign` Op to set the
        variable to its initial value.
    
        Args:
          initial_value: A `Tensor`, or Python object convertible to a `Tensor`,
            which is the initial value for the Variable. The initial value must have
            a shape specified unless `validate_shape` is set to False. Can also be a
            callable with no argument that returns the initial value when called. In
            that case, `dtype` must be specified. (Note that initializer functions
            from init_ops.py must first be bound to a shape before being used here.)
          trainable: If `True`, the default, also adds the variable to the graph
            collection `GraphKeys.TRAINABLE_VARIABLES`. This collection is used as
            the default list of variables to use by the `Optimizer` classes.
          collections: List of graph collections keys. The new variable is added to
            these collections. Defaults to `[GraphKeys.GLOBAL_VARIABLES]`.
          validate_shape: If `False`, allows the variable to be initialized with a
            value of unknown shape. If `True`, the default, the shape of
            `initial_value` must be known.
          caching_device: Optional device string describing where the Variable
            should be cached for reading.  Defaults to the Variable's device.
            If not `None`, caches on another device.  Typical use is to cache
            on the device where the Ops using the Variable reside, to deduplicate
            copying through `Switch` and other conditional statements.
          name: Optional name for the variable. Defaults to `'Variable'` and gets
            uniquified automatically.
          variable_def: `VariableDef` protocol buffer. If not `None`, recreates
            the Variable object with its contents, referencing the variable's nodes
            in the graph, which must already exist. The graph is not changed.
            `variable_def` and the other arguments are mutually exclusive.
          dtype: If set, initial_value will be converted to the given type.
            If `None`, either the datatype will be kept (if `initial_value` is
            a Tensor), or `convert_to_tensor` will decide.
          expected_shape: A TensorShape. If set, initial_value is expected
            to have this shape.
          import_scope: Optional `string`. Name scope to add to the
            `Variable.` Only used when initializing from protocol buffer.
          constraint: An optional projection function to be applied to the variable
            after being updated by an `Optimizer` (e.g. used to implement norm
            constraints or value constraints for layer weights). The function must
            take as input the unprojected Tensor representing the value of the
            variable and return the Tensor for the projected value
            (which must have the same shape). Constraints are not safe to
            use when doing asynchronous distributed training.
    
        Raises:
          ValueError: If both `variable_def` and initial_value are specified.
          ValueError: If the initial value is not specified, or does not have a
            shape and `validate_shape` is `True`.
          RuntimeError: If eager execution is enabled.
    
        @compatibility(eager)
        `tf.Variable` is not compatible with eager execution.  Use
        `tfe.Variable` instead which is compatible with both eager execution
        and graph construction.  See [the TensorFlow Eager Execution
        guide](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/g3doc/guide.md#variables-and-optimizers)
        for details on how variables work in eager execution.
        @end_compatibility
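The `trainable` flag and collection behavior described in the docstring above can be checked with a small sketch (assuming the tf.compat.v1 namespace so it also runs under TensorFlow 2.x):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

w = tf.Variable(1.0, name='w')                       # trainable by default
step = tf.Variable(0, trainable=False, name='step')  # e.g. a step counter

# Only w lands in GraphKeys.TRAINABLE_VARIABLES; both are global variables.
print([v.op.name for v in tf.trainable_variables()])  # ['w']
print([v.op.name for v in tf.global_variables()])     # ['w', 'step']
```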

     

     

    The tensorflow.get_variable() function

    # The argument list for get_variable must match arguments to get_local_variable.
    # So, if you are updating the arguments, also update arguments to
    # get_local_variable below.
    @tf_export("get_variable")
    def get_variable(name,
                     shape=None,
                     dtype=None,
                     initializer=None,
                     regularizer=None,
                     trainable=None,
                     collections=None,
                     caching_device=None,
                     partitioner=None,
                     validate_shape=True,
                     use_resource=None,
                     custom_getter=None,
                     constraint=None,
                     synchronization=VariableSynchronization.AUTO,
                     aggregation=VariableAggregation.NONE):
      return get_variable_scope().get_variable(
          _get_default_variable_store(),
          name,
          shape=shape,
          dtype=dtype,
          initializer=initializer,
          regularizer=regularizer,
          trainable=trainable,
          collections=collections,
          caching_device=caching_device,
          partitioner=partitioner,
          validate_shape=validate_shape,
          use_resource=use_resource,
          custom_getter=custom_getter,
          constraint=constraint,
          synchronization=synchronization,
          aggregation=aggregation)
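As a quick usage sketch (assuming the tf.compat.v1 namespace so it also runs under TensorFlow 2.x): get_variable creates a variable on first call and, when the scope is reopened with reuse=True, returns the existing one.

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

with tf.variable_scope('layer'):
    w1 = tf.get_variable('w', shape=(3, 3))  # created with the default initializer

# Reopening the scope with reuse=True makes get_variable return the
# existing variable instead of creating a new one.
with tf.variable_scope('layer', reuse=True):
    w2 = tf.get_variable('w')

print(w1.name)   # layer/w:0
print(w1 is w2)  # True
```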

     

     

     

     

     

     

     

     

     

     

     

     

     


    I. Setting the reuse mode of a VariableScope

    Section 1.1

    1. tf.get_variable_scope() returns the current variable scope; its .name and .reuse attributes show the scope's name and whether it is currently in reuse mode.

    2. Variable scopes have a name, and so do variables. The default variable scope's name is the empty string.

    3. The full name of a variable defined inside a scope is "the scope's name + the name passed when the variable is defined" (much as a file's name identifies it within a folder, while prefixing the absolute path gives its global identifier in the file system).

    These three points run through this article; if they are not yet clear, the examples below will illustrate them in code again and again.
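Point 3 can be sketched directly (again assuming the tf.compat.v1 namespace so the example also runs under TensorFlow 2.x):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

with tf.variable_scope('outer'):
    with tf.variable_scope('inner'):
        v = tf.get_variable('v', shape=())

# full name = scope path + the name given at definition time
print(v.name)  # outer/inner/v:0
```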

    Section 1.2

    with tf.variable_scope() opens a variable scope and takes two key parameters: name_or_scope may be a string or a tf.VariableScope object, and reuse is a boolean; passing True puts the scope into reuse mode.

    A scope can also be put into reuse mode by calling the VariableScope object's reuse_variables() method; for example, tf.get_variable_scope().reuse_variables() sets the current variable scope to reuse mode.

    with tf.variable_scope('vs1'):
        tf.get_variable_scope().reuse_variables()
        print('"'+tf.get_variable_scope().name+'"', tf.get_variable_scope().reuse)
    
    with tf.variable_scope('vs2',reuse=True):
        print('"'+tf.get_variable_scope().name+'"', tf.get_variable_scope().reuse)
    
    '''
    "vs1" True
    "vs2" True
    '''
    

    Section 1.3

    If a variable scope is set to reuse mode, the setting is inherited by its child scopes.

    # Note: the default variable scope's name is the empty string
    tf.get_variable_scope().reuse_variables() # set the default variable scope to reuse mode
    print('"'+tf.get_variable_scope().name+'"', tf.get_variable_scope().reuse) # quotes added so the empty name is visible
    
    with tf.variable_scope('vs'): 
    	# vs is a child of the default scope, so although its mode was never set explicitly, it too is now in reuse mode
        print('"'+tf.get_variable_scope().name+'"', tf.get_variable_scope().reuse)
    
    '''
    
    Output:
    "" True
    "vs" True
    
    '''
    

    Section 1.4

    A mode set on a variable scope inside a with block expires when the block exits (the scope reverts to its previous mode).

    with tf.variable_scope('vs1'):
        tf.get_variable_scope().reuse_variables()
        print('"'+tf.get_variable_scope().name+'"', tf.get_variable_scope().reuse)
    
    with tf.variable_scope('vs1'):
        print('"'+tf.get_variable_scope().name+'"', tf.get_variable_scope().reuse)
    
    
    '''
    
    Output:
    "vs1" True
    "vs1" False
    
    '''
    

    Section 1.5

    A variable scope can also be opened through a reference to it (a tf.VariableScope object), so you do not need to remember its exact name string. The following example is from the TensorFlow website.

    with tf.variable_scope("model") as scope:
      output1 = my_image_filter(input1)
    with tf.variable_scope(scope, reuse=True):
      output2 = my_image_filter(input2)
    

    When a tf.VariableScope object is passed as the name_or_scope argument of with tf.variable_scope(...), the with block takes on the mode of that scope object. (The code below also demonstrates the "inheritance" and "expiration" behavior described above.)

    with tf.variable_scope('vs1'):
        tf.get_variable_scope().reuse_variables()
        print('"'+tf.get_variable_scope().name+'"', tf.get_variable_scope().reuse)
        with tf.variable_scope('vs2') as scope: # vs2 (full name vs1/vs2) inherits vs1's reuse mode
            print('"'+tf.get_variable_scope().name+'"', tf.get_variable_scope().reuse)
    
    # Reopen vs1 and vs2 with new with blocks; their reuse modes are unaffected by the settings made in the earlier blocks
    with tf.variable_scope('vs1'): 
        print('"'+tf.get_variable_scope().name+'"', tf.get_variable_scope().reuse)
        with tf.variable_scope('vs2'):
            print('"'+tf.get_variable_scope().name+'"', tf.get_variable_scope().reuse)
    
    # tf.variable_scope also accepts a tf.VariableScope object; `scope` here was captured in the first with block above
    with tf.variable_scope(scope): 
        print('"'+tf.get_variable_scope().name+'"', tf.get_variable_scope().reuse)
    # In the second with block, vs1/vs2's reuse is False: the setting made earlier "expired" on exiting that block.
    # In the third with block, however, vs1/vs2's reuse is True: when `name_or_scope` is a tf.VariableScope object,
    # the opened scope's reuse mode is determined by that scope object.
    # `scope` was captured above while it had inherited reuse=True from vs1, and it has not changed since, so the third with opens with reuse=True
    
    '''
    
    Output:
    "vs1" True
    "vs1/vs2" True
    "vs1" False
    "vs1/vs2" False
    "vs1/vs2" True
    
    '''
    

    II. The effect of reuse mode on tf.Variable()

    tf.Variable() only ever creates new variables; whether a scope is in reuse mode has no effect on its behavior. If the scope already contains a variable with the same name, the new variable's name gets a numeric suffix to tell them apart.

    print('"'+tf.get_variable_scope().name+'"', tf.get_variable_scope().reuse)
    v1=tf.Variable(tf.constant(1),name='v')                                         
    v2=tf.Variable(tf.constant(1),name='v')
    
    tf.get_variable_scope().reuse_variables()
    print('"'+tf.get_variable_scope().name+'"', tf.get_variable_scope().reuse)
    
    v3=tf.Variable(tf.constant(1),name='v') # in reuse mode tf.Variable() still creates a new variable, so v3 is named v_2
    
    print(v1.name)
    print(v2.name)
    print(v3.name)
    
    '''
    
    Output:
    "" False
    "" True
    v:0
    v_1:0
    v_2:0
    
    '''
    

    III. The effect of reuse mode on tf.get_variable()

    Reuse mode determines what tf.get_variable() actually does.

    Section 3.1

    In non-reuse mode, tf.get_variable() creates a new variable (call it v). If the scope already contains a variable with the same name (call it w), there are two cases:

    1. w was created earlier with tf.Variable(): v is renamed with a numeric suffix.
    2. w was created earlier with tf.get_variable(): creating a second variable named v is disallowed and raises an error.
    with tf.variable_scope('vs'):
    	# print the current scope's name and whether it is in reuse mode
        print('"'+tf.get_variable_scope().name+'"', tf.get_variable_scope().reuse) 
        v1=tf.Variable(tf.constant(1),name='v')
        print(v1.name) # prefixed with the scope name plus a slash, i.e. 'vs/', so the full name is 'vs/v:0' (the ':0' is explained below)
        v2=tf.get_variable('v',shape=()) 
        print(v2.name) # a variable named v already exists, so v2's name gets a numeric suffix (starting from 1)
        v3=tf.get_variable('v',shape=()) # a variable named v created by tf.get_variable already exists, so creating v3 raises an exception
        print(v3.name)
    

    Output (an aside: ":0" means the variable is the first output of the operation that created it):

    "vs" False
    vs/v:0
    vs/v_1:0
    --------------------------------------------------------------------------
    
    ValueError                               Traceback (most recent call last)
    
    <ipython-input-2-e0b97b39994d> in <module>()
          5     v2=tf.get_variable('v',shape=())
          6     print(v2.name)
    ----> 7     v3=tf.get_variable('v',shape=())
          8     print(v3.name)
          9
          
    <省略部分输出>
    ValueError: Variable vs/v already exists, disallowed. Did you mean to set reuse=True or reuse=tf.AUTO_REUSE in VarScope?
    

    Section 3.2

    In reuse mode, tf.get_variable() reuses an existing variable. Note that it can only reuse variables previously created in the same scope with tf.get_variable(): it cannot reuse variables created in another scope, nor variables created with tf.Variable().

    1. Reusing a variable

    with tf.variable_scope('vs'):
        print('"'+tf.get_variable_scope().name+'"', tf.get_variable_scope().reuse)
        v=tf.get_variable('v',shape=())
        print(v.name)
    
    with tf.variable_scope('vs',reuse=True):
        print('"'+tf.get_variable_scope().name+'"', tf.get_variable_scope().reuse)
        reused_v=tf.get_variable('v',shape=()) # reused_v is the same variable as the earlier v; they share storage
        print(reused_v.name)
    
    '''
    
    Output:
    "vs" False
    vs/v:0
    "vs" True
    vs/v:0
    
    '''
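    That `reused_v` really shares storage with `v` can be verified directly: a value assigned through one handle is read back through the other. A small sketch (the scope name `share` is illustrative; the `tensorflow.compat.v1` shim keeps the TF1-style API runnable under TF 2.x):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

with tf.variable_scope('share'):
    v = tf.get_variable('v', shape=(), initializer=tf.zeros_initializer())
with tf.variable_scope('share', reuse=True):
    reused_v = tf.get_variable('v', shape=())

assign_op = tf.assign(v, 3.0)    # write through the first handle
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(assign_op)
    value = sess.run(reused_v)   # read through the reused handle

print(value)                     # 3.0 -- same underlying storage
```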
    

    2. A variable cannot be reused from a different variable scope (analogous to creating v.txt in folder A and then looking for v.txt in folder B).

    # Create v in the vs scope, then try to reuse it in scope vs1
    with tf.variable_scope('vs'):
        print('"'+tf.get_variable_scope().name+'"', tf.get_variable_scope().reuse)
        v=tf.get_variable('v',shape=())
        # v=tf.Variable(tf.constant(1),name='v')
        print(v.name)
    
    with tf.variable_scope('vs1',reuse=True):
        print('"'+tf.get_variable_scope().name+'"', tf.get_variable_scope().reuse)
        # The next line raises an error, because the scope vs1 has never created a variable named v with get_variable()
        reused_v=tf.get_variable('v',shape=()) 
        print(reused_v.name)
    
    '''
    Error:
    ValueError: Variable vs1/v does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=tf.AUTO_REUSE in VarScope?
    '''
    

    3. Only variables created with tf.get_variable() can be reused; variables created with tf.Variable() cannot.

    with tf.variable_scope('vs'):
        print('"'+tf.get_variable_scope().name+'"', tf.get_variable_scope().reuse)
        v=tf.Variable(tf.constant(1),name='v')
        print(v.name)
    
    with tf.variable_scope('vs',reuse=True):
        print('"'+tf.get_variable_scope().name+'"', tf.get_variable_scope().reuse)
        reused_v=tf.get_variable('v',shape=())
        print(reused_v.name)
    

    Output:

    "vs" False
    vs/v:0
    "vs" True
    --------------------------------------------------------------------------
    
    ValueError                               Traceback (most recent call last)
    
    <ipython-input-2-63ddfa598083> in <module>()
          6 with tf.variable_scope('vs',reuse=True):
          7     print('"'+tf.get_variable_scope().name+'"', tf.get_variable_scope().reuse)
    ----> 8     reused_v=tf.get_variable('v',shape=())
          9     print(reused_v.name)
         10
        
    <partial output omitted>
    
    ValueError: Variable vs/v does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=tf.AUTO_REUSE in VarScope?
    

    Appendix 1: The difference between tf.name_scope() and tf.variable_scope()

    tf.name_scope() behaves a lot like tf.variable_scope(), so it is worth exploring their differences here to deepen the understanding of both methods.

    This subsection draws on a Zhihu article.

    with tf.name_scope('ns'):
        print('"'+tf.get_variable_scope().name+'"', tf.get_variable_scope().reuse)
        v1=tf.get_variable('v',shape=())
        v2=tf.Variable(tf.constant(1),name='v')
        print(v1.name)
        print(v2.name)
        with tf.variable_scope('vs'):
            print('"'+tf.get_variable_scope().name+'"', tf.get_variable_scope().reuse)
            v3=tf.get_variable('v',shape=())
            v4=tf.Variable(tf.constant(1),name='v')
            print(v3.name)
            print(v4.name)
    
    with tf.variable_scope('vs'):
        with tf.name_scope('ns'):
            v5=tf.Variable(tf.constant(1),name='v')
            print(v5.name)
        v6=tf.get_variable('v',shape=()) # This will raise an exception
            print(v6.name)
    

    The output is shown below; see the inline comments for the explanations:

    "" False   # 1.with打开NameScope并不影响所在的VariableScope
    v:0        # 2.NameScope对于以tf.get_variable()新建的变量的命名不会有影响
    ns/v:0     # 3.对于以tf.Variable()方式新建的变量的命名,会加上NameScope的名字作为前缀
    "vs" False # 4.印证了第1点
    vs/v:0     # 5.印证了第2点
    ns/vs/v:0  # 6.对于被多层NameScope和VariableScope包围的、以tf.Variable()新建的变量,其命名以嵌套顺序来确定前缀
    vs/ns/v:0  # 7.印证了第6点
    # 下面的异常是由v6=tf.get_variable('v',shape=())导致的
    # 因为tf.get_variable()获得的变量的命名不受NameScope影响,所以这里其实对应了3.1节第2点的情况
    # 即在相同的VariableScope中使用tf.get_variable()定义了重名的变量
    Traceback (most recent call last):
      File "scope.py", line 79, in <module>
        v6=tf.get_variable('v',shape=())
      File "C:\Users\pyxies\Anaconda3\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 1203, in get_variable
        constraint=constraint)
      File "C:\Users\pyxies\Anaconda3\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 1092, in get_variable
        constraint=constraint)
      File "C:\Users\pyxies\Anaconda3\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 425, in get_variable
        constraint=constraint)
      File "C:\Users\pyxies\Anaconda3\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 394, in _true_getter
        use_resource=use_resource, constraint=constraint)
      File "C:\Users\pyxies\Anaconda3\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 742, in _get_single_variable
        name, "".join(traceback.format_list(tb))))
    ValueError: Variable vs/v already exists, disallowed. Did you mean to set reuse=True or reuse=tf.AUTO_REUSE in VarScope?
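    The complementary fact, not shown in the output above, is that tf.name_scope() does prefix the names of operations, even though it leaves tf.get_variable() names alone. A small sketch (the names `nsv` and `plus_one` are illustrative; the `tensorflow.compat.v1` shim keeps the TF1-style API runnable under TF 2.x):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

with tf.name_scope('ns'):
    v = tf.get_variable('nsv', shape=())    # variable name ignores the name scope
    op = tf.add(v, 1.0, name='plus_one')    # op name is prefixed with 'ns/'

print(v.name)    # nsv:0
print(op.name)   # ns/plus_one:0
```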
    

    Appendix 2: Variable sharing/reuse with multiple GPUs on a single machine

    See https://blog.csdn.net/xpy870663266/article/details/99330338
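    The typical multi-GPU pattern is a tower loop in which the first iteration creates the weights and every later iteration reuses them. A minimal sketch of the idea (the scope name `model` and the two towers are illustrative; real multi-GPU code would additionally pin each tower to a device with tf.device()):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

names = []
for i in range(2):                   # pretend each i is one GPU tower
    with tf.variable_scope('model', reuse=(i > 0)):
        w = tf.get_variable('w', shape=(3, 3))
        names.append(w.name)

print(names)    # ['model/w:0', 'model/w:0'] -- one shared set of weights
```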

  • Implementing Variables in UEFI

    2019-02-01 16:20:14

    Recently I have been debugging the NVRAM storage features on the Loongson platform: configuring boot options, setting a power-on password, and so on. In a standard UEFI these features are all implemented on top of the Variable mechanism. Anyone familiar with UEFI knows that Variables are used very widely; let's look at how they are implemented.

    First, the common UEFI source tree already contains Variable code, in the directories MdeModulePkg/Universal/Variable/RuntimeDxe and MdeModulePkg/Universal/Variable/EmuRuntimeDxe. The first is meant for real hardware; the second is an emulated implementation for virtual machines. We initially used the second one and only later realized that was wrong: all variables were kept in memory, so every value was lost on power-off or reboot and had to be written again. A proper implementation stores the environment variables, i.e. the Variables, in flash so that their contents survive a power cycle or reboot. In UEFI every environment variable is represented as a Variable, and the system uniformly reads and writes them through gST->GetVariable and gST->SetVariable. Note that gST here is the global Runtime-phase table in UEFI; these two functions are registered into that table by the Variable driver module, not the original function addresses from DxeCore. In fact gST->SetVariable is implemented cooperatively by three drivers, not by the Variable driver alone; Variable is just the topmost layer. The implementation of gST->SetVariable is described in detail below.

    VariableRuntimeDxe:

    It depends mainly on three drivers: VariableRuntimeDxe, FvBlockServiceDxe and FlashOperationDxe.

    VariableRuntimeDxe implements the necessary services and initialization, and decides where Volatile and Non-Volatile Variables are stored. Here we place the Volatile variables in a chunk of Runtime-type memory; since the Loongson platform does not yet support Runtime under UEFI, these variables will later have to move to the memory range reserved for the firmware. The Non-Volatile variables are stored at the flash address: on this platform the firmware's 64-bit start address is 0x900000001fc00000, which is also the start address of the flash. The beginning of the flash holds some firmware description information, jump instructions, and the firmware header; this data matches exactly the EFI_FIRMWARE_VOLUME_HEADER FvbInfo field in the global variable below.

    EFI_FVB_MEDIA_INFO mLoongsonPlatformFvbInfo = {

      0x10000,
      {
         {    
            0xc1, 0x9f, 0x1f, 0x3c, 0x08, 0x00, 0xe0, 0x03,
            0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
         },                                                                                                                                                                                  
         gEfiSystemNvDataFvGuid,
         0x10000,
         EFI_FVH_SIGNATURE,
         EFI_FVB2_MEMORY_MAPPED |
         EFI_FVB2_READ_ENABLED_CAP |
         EFI_FVB2_READ_STATUS |
         EFI_FVB2_WRITE_ENABLED_CAP |
         EFI_FVB2_WRITE_STATUS |
         EFI_FVB2_ERASE_POLARITY |
         EFI_FVB2_ALIGNMENT_16,
          0x48,//sizeof (EFI_FIRMWARE_VOLUME_HEADER) + sizeof (EFI_FV_BLOCK_MAP_ENTRY),
          0x00,   // CheckSum
          0,   // ExtHeaderOffset
          {    
            0,   
          },  // Reserved[1]
          2,  // Revision
          {    
            {    
              (FixedPcdGet32(PcdFlashNvStorageVariableSize))/FIRMWARE_BLOCK_SIZE,
              FIRMWARE_BLOCK_SIZE
            }    
          }    
        },   
        {    
          {    
            0,   
            0    
           }    
        }    
    };

    typedef struct {
      UINT64                      FvLength;
      EFI_FIRMWARE_VOLUME_HEADER  FvbInfo;
      EFI_FV_BLOCK_MAP_ENTRY      End[1];
    } EFI_FVB_MEDIA_INFO;

    Now let's see what the entry function of VariableRuntimeDxe does. The code is as follows:

    EFI_STATUS
    EFIAPI
    VariableServiceInitialize (
      IN EFI_HANDLE         ImageHandle,
      IN EFI_SYSTEM_TABLE   *SystemTable
      )
    {
    
      EFI_STATUS                          Status;
      EFI_HANDLE                          *HandleBuffer;
      EFI_FIRMWARE_VOLUME_BLOCK_PROTOCOL  *Fvb;
      EFI_FIRMWARE_VOLUME_HEADER          *FwVolHeader;
      EFI_FVB_ATTRIBUTES_2                Attributes;
      EFI_PHYSICAL_ADDRESS                NvStorageVariableBase;
      EFI_PHYSICAL_ADDRESS                FvbBaseAddress;
      UINTN                               HandleCount;
      UINTN                               Index;
    
      Fvb         = NULL;
      Status = EFI_NOT_FOUND;
    
      DbgPrint(DEBUG_INFO,"Enter VariableServiceInitialize()\n");
    
      Status = gBS->LocateHandleBuffer (
                      ByProtocol,
                      &gEfiFirmwareVolumeBlockProtocolGuid,
                      NULL,
                      &HandleCount,
                      &HandleBuffer
                      );
      if (EFI_ERROR (Status)) {
        DbgPrint(DEBUG_INFO,"Not Support gEfiFirmwareVolumeBlockProtocol\n");
        return Status ;
      }
    for (Index = 0; Index < HandleCount; Index++) {
        Status = gBS->HandleProtocol (
                        HandleBuffer[Index],
                        &gEfiFirmwareVolumeBlockProtocolGuid,
                        (VOID **) &Fvb
                        );
        if (EFI_ERROR (Status)) {
          Status = EFI_NOT_FOUND;
          break;
        }
    
        Status = Fvb->GetAttributes (Fvb, &Attributes);
    
        if (EFI_ERROR (Status) || ((Attributes & EFI_FVB2_WRITE_STATUS) == 0)) {
          continue;
        }
    
        Status = Fvb->GetPhysicalAddress (Fvb, &FvbBaseAddress);
    
        if (EFI_ERROR (Status)) {
          continue;
        }
        FwVolHeader = (EFI_FIRMWARE_VOLUME_HEADER *) ((UINTN) FvbBaseAddress);
    
        NvStorageVariableBase = (EFI_PHYSICAL_ADDRESS) PcdGet64 (PcdFlashNvStorageVariableBase64);
        NvStorageVariableBase = NvStorageVariableBase + FwVolHeader->HeaderLength;
        if ((NvStorageVariableBase >= FvbBaseAddress) && (NvStorageVariableBase < (FvbBaseAddress + FwVolHeader->FvLength))) {
          Status      = EFI_SUCCESS;
          break;
        }
      }
    
      FreePool (HandleBuffer);
    if (!EFI_ERROR (Status) && Fvb != NULL) {
    
        Status = VariableCommonInitialize (Fvb);
        ASSERT_EFI_ERROR (Status);
    
        SystemTable->RuntimeServices->GetVariable         = RuntimeServiceGetVariable;
        SystemTable->RuntimeServices->GetNextVariableName = RuntimeServiceGetNextVariableName;
        SystemTable->RuntimeServices->SetVariable         = RuntimeServiceSetVariable;
        SystemTable->RuntimeServices->QueryVariableInfo   = RuntimeServiceQueryVariableInfo;
    
        Status = gBS->InstallMultipleProtocolInterfaces (
                      &mHandle,
                      &gEfiVariableArchProtocolGuid, NULL,
                      &gEfiVariableWriteArchProtocolGuid, NULL,
                      NULL
                      );
        ASSERT_EFI_ERROR (Status);
      }
    
      EfiCreateProtocolNotifyEvent (
        &gEfiFirmwareVolumeBlockProtocolGuid,
        TPL_CALLBACK,
        FvbNotificationEvent,
        (VOID *)SystemTable,
        &mFvbRegistration
        );
    
      DbgPrint(DEBUG_INFO,"return VariableServiceInitialize() (%r)\n",Status);
      return EFI_SUCCESS;
    }

    The code above is the entry function of VariableRuntimeDxe. It first has to find gEfiFirmwareVolumeBlockProtocol, the protocol the Variable driver depends on, because SetVariable ultimately calls the Write function of gEfiFirmwareVolumeBlockProtocol. Once this protocol has been located, the function obtains the address where Variables are stored; that address is fetched as a PCD from the FDF file and is the address mentioned earlier, 0x900000001fc00000. The VariableStoreHeader then sits right after the 0x48-byte header. We reserve 256 bytes here, so the first Variable is stored starting at 0x900000001fc00148, and the store ends at 0x900000001fc06000. This address range holds the contents of the BIOS Variables. When it fills up, the still-valid entries must be copied out, the region erased, and the entries written back. These are the main tasks of the entry function.

    Next let's look at the GetVariable and SetVariable functions, which are used frequently during the BIOS phase.

    gST->GetVariable();

    EFI_STATUS
    EFIAPI
    RuntimeServiceGetVariable (
      IN      CHAR16            *VariableName,
      IN      EFI_GUID          *VendorGuid,
      OUT     UINT32            *Attributes OPTIONAL,
      IN OUT  UINTN             *DataSize,
      OUT     VOID              *Data
      ) 
    {   
      EFI_STATUS              Status;
      VARIABLE_POINTER_TRACK  Variable;
      UINTN                   VarDataSize;
      
    #if VARIABLE_DEBUG
      DbgPrint(DEBUG_INFO,"Enter RuntimeServiceGetVariable()\n");
      DbgPrint(DEBUG_INFO,"GetVariable:Name=%sx\n",VariableName);
    #endif
      if (VariableName == NULL || VendorGuid == NULL || DataSize == NULL) {
        return EFI_INVALID_PARAMETER;
      }
    
      AcquireLockOnlyAtBootTime(&mVariableModuleGlobal->VariableGlobal.VariableServicesLock);
    
      //
      // Find existing variable
      //
      Status = FindVariableInCache (VariableName, VendorGuid, Attributes, DataSize, Data);                                                                                                   
    
      if ((Status == EFI_BUFFER_TOO_SMALL) || (Status == EFI_SUCCESS)){
        UpdateVariableInfo (VariableName, VendorGuid, FALSE, TRUE, FALSE, FALSE, TRUE);
        goto Done;
      }
    //Status = FindVariable (VariableName, VendorGuid, &Variable, &mVariableModuleGlobal->VariableGlobal);
    
      Status = FindNonVolatileVariable(VariableName, VendorGuid, &Variable);
      if (Variable.CurrPtr == NULL || EFI_ERROR (Status)) {
        Status = FindVolatileVariable(VariableName, VendorGuid, &Variable);
      }
      if (Variable.CurrPtr == NULL || EFI_ERROR (Status)) {
        goto Done;
      }
      //
      // Get data size
      //
      VarDataSize = DataSizeOfVariable (Variable.CurrPtr);
      ASSERT (VarDataSize != 0);
    
      if (*DataSize >= VarDataSize) {
        if (Data == NULL) {
          Status = EFI_INVALID_PARAMETER;
          goto Done;
        }
    
        CopyMem (Data, GetVariableDataPtr (Variable.CurrPtr), VarDataSize);
        if (Attributes != NULL) {
          *Attributes = Variable.CurrPtr->Attributes;
        }
    
        *DataSize = VarDataSize;
        UpdateVariableInfo (VariableName, VendorGuid, Variable.Volatile, TRUE, FALSE, FALSE, FALSE);
        UpdateVariableMem (VariableName, VendorGuid, Variable.CurrPtr->Attributes, VarDataSize, Data);
     
        Status = EFI_SUCCESS;
        goto Done;
      } else {
        *DataSize = VarDataSize;
        Status = EFI_BUFFER_TOO_SMALL;
        goto Done;
      }
    
    Done:
      ReleaseLockOnlyAtBootTime (&mVariableModuleGlobal->VariableGlobal.VariableServicesLock);
    #if VARIABLE_DEBUG
      DbgPrint(DEBUG_INFO,"Return RuntimeServiceGetVariable() (%r)\n",Status);                                                                                                               
    #endif
      return Status;
    }

    The function above implements GetVariable. Its core is locating the Variable via FindVariable; to make lookups more efficient, FindVariable is split into two functions, FindVolatileVariable() and FindNonVolatileVariable(). Because a Variable is looked up by name and GUID, with no attributes available, the Non-Volatile store has to be searched first. The storage address can be 0x900000001fc00048, or the memory address holding a copy of that flash region: in the VariableCommonInitialize() function called from the VariableRuntimeDxe entry function above, 0x6000 bytes starting at 0x900000001fc00048 are copied into memory, and this memory serves as a mapping of the flash region that stores the Variables. The two copies must stay consistent: when a Variable is added, the flash is written and the change is then propagated to the corresponding memory address. Back to the GetVariable flow: once FindVariable succeeds, Variable.CurrPtr is non-NULL, which gives the address of the Variable header; from the header address the data and the attributes can be fetched in turn. One caveat: because the whole I/O space and the memory address space are unified on the Loongson platform, an I/O device such as flash can be read just like memory, but writes must go through the flash driver functions to actually take effect. Writing flash with ordinary memory writes simply has no effect, although it causes no other problems. This Loongson-platform peculiarity is worth keeping in mind.

    gST->SetVariable():

    EFI_STATUS
    EFIAPI
    RuntimeServiceSetVariable (
      IN CHAR16                  *VariableName,
      IN EFI_GUID                *VendorGuid,
      IN UINT32                  Attributes,
      IN UINTN                   DataSize,
      IN VOID                    *Data
      )
    {
      VARIABLE_POINTER_TRACK              Variable;
      EFI_STATUS                          Status;
      VARIABLE_HEADER                     *NextVariable;
      EFI_PHYSICAL_ADDRESS                Point;
    
      //
      // Check input parameters
      //
      if (VariableName == NULL || VariableName[0] == 0 || VendorGuid == NULL) {
        return EFI_INVALID_PARAMETER;
      } 
    #if VARIABLE_DEBUG
      DbgPrint(DEBUG_INFO,"Enter RuntimeServiceSetVariable()\n");                                                                                                                            
      DbgPrint(DEBUG_INFO,"SetVariable Name:%s,DataSize=0x%lx,Data=0x%lx\n",VariableName,DataSize,Data);
    #endif
      if (DataSize != 0 && Data == NULL) {
        return EFI_INVALID_PARAMETER;
      }
    
      //
      // Not support authenticated variable write yet.
      //
      if ((Attributes & EFI_VARIABLE_AUTHENTICATED_WRITE_ACCESS) != 0) { 
        return EFI_INVALID_PARAMETER;
      }
    
      //
      //  Make sure if runtime bit is set, boot service bit is set also
      //
      if ((Attributes & (EFI_VARIABLE_RUNTIME_ACCESS | EFI_VARIABLE_BOOTSERVICE_ACCESS)) == EFI_VARIABLE_RUNTIME_ACCESS) {
        return EFI_INVALID_PARAMETER;
      }
    
      if ((Attributes & EFI_VARIABLE_HARDWARE_ERROR_RECORD) == EFI_VARIABLE_HARDWARE_ERROR_RECORD) {
        if ((DataSize > PcdGet32 (PcdMaxHardwareErrorVariableSize)) ||
            (sizeof (VARIABLE_HEADER) + StrSize (VariableName) + DataSize > PcdGet32 (PcdMaxHardwareErrorVariableSize))) {
          return EFI_INVALID_PARAMETER;
        }
        //
        // According to UEFI spec, HARDWARE_ERROR_RECORD variable name convention should be L"HwErrRecXXXX"
        //
        if (StrnCmp(VariableName, L"HwErrRec", StrLen(L"HwErrRec")) != 0) {
          return EFI_INVALID_PARAMETER;
        }
    
      } else {
        //
        //  The size of the VariableName, including the Unicode Null in bytes plus
        //  the DataSize is limited to maximum size of PcdGet32 (PcdMaxVariableSize) bytes.
        //
        if ((DataSize > PcdGet32 (PcdMaxVariableSize)) ||
            (sizeof (VARIABLE_HEADER) + StrSize (VariableName) + DataSize > PcdGet32 (PcdMaxVariableSize))) {
          return EFI_INVALID_PARAMETER;
        }
      }
    
      AcquireLockOnlyAtBootTime(&mVariableModuleGlobal->VariableGlobal.VariableServicesLock);
    
      //
      // Consider reentrant in MCA/INIT/NMI. It needs be reupdated;
      //
      if (1 < InterlockedIncrement (&mVariableModuleGlobal->VariableGlobal.ReentrantState)) {
        Point = mVariableModuleGlobal->VariableGlobal.NonVolatileVariableBase;
        //
        //  get last variable offset
        //
        NextVariable  = GetStartPointer ((VARIABLE_STORE_HEADER *) (UINTN) Point);
        while ((NextVariable < GetEndPointer ((VARIABLE_STORE_HEADER *) (UINTN) Point)) && IsValidVariableHeader (NextVariable)) {
          NextVariable = GetNextVariablePtr (NextVariable);
        }
        mVariableModuleGlobal->NonVolatileLastVariableOffset = (UINTN) NextVariable - (UINTN) Point;
    #if VARIABLE_DEBUG
        DbgPrint(DEBUG_INFO,"mVariableModuleGlobal->NonVolatileLastVariableOffset=0x%llx\n",mVariableModuleGlobal->NonVolatileLastVariableOffset);                                           
    #endif
      }
    //Status = FindVariable (VariableName, VendorGuid, &Variable, &mVariableModuleGlobal->VariableGlobal);
    
      if( (Attributes & EFI_VARIABLE_NON_VOLATILE) != 0){
        Status = FindNonVolatileVariable(VariableName, VendorGuid, &Variable);
      }else{
        Status = FindVolatileVariable(VariableName, VendorGuid, &Variable);
      }
    
      DbgPrint(DEBUG_INFO,"Find Variable Name %s  %r \n",VariableName,Status);
    
      Status = UpdateVariable (VariableName, VendorGuid, Data, DataSize, Attributes, &Variable);
    
      InterlockedDecrement (&mVariableModuleGlobal->VariableGlobal.ReentrantState);
      ReleaseLockOnlyAtBootTime (&mVariableModuleGlobal->VariableGlobal.VariableServicesLock);
    
    #if VARIABLE_DEBUG
      DbgPrint(DEBUG_INFO,"Return RuntimeServiceSetVariable() (%r)\n",Status);
    #endif
    
      return Status;
    }

    The code above is SetVariable. As the code shows, the parameters VariableName, *VendorGuid, Attributes, DataSize and *Data are all used. The function first checks the validity of the parameters; once they pass, it looks up the variable according to its attributes: a Volatile Variable is searched for in the Volatile store, and a Non-Volatile one in the memory mapping of the flash region. If the variable is found, the subsequent UpdateVariable call takes the path that updates an existing Variable; if it is not found, UpdateVariable takes the path that adds a new one. Let's now look at the UpdateVariable flow.

    EFI_STATUS
    EFIAPI
    UpdateVariable (
      IN      CHAR16                      *VariableName,
      IN      EFI_GUID                    *VendorGuid,
      IN      VOID                        *Data,
      IN      UINTN                       DataSize,
      IN      UINT32                      Attributes      OPTIONAL,
      IN      VARIABLE_POINTER_TRACK      *CacheVariable
      )
    {
      EFI_STATUS                          Status;
      VARIABLE_HEADER                     *NextVariable;
      UINTN                               ScratchSize;
      UINTN                               NonVolatileVarableStoreSize;
      UINTN                               VarNameOffset;
      UINTN                               VarDataOffset;
      UINTN                               VarNameSize;
      UINTN                               VarSize;
      BOOLEAN                             Volatile;
      EFI_FIRMWARE_VOLUME_BLOCK_PROTOCOL  *Fvb;
      UINT8                               State;
      BOOLEAN                             Reclaimed;
      VARIABLE_POINTER_TRACK              *Variable;
      VARIABLE_POINTER_TRACK              NvVariable;
      VARIABLE_STORE_HEADER               *VariableStoreHeader;
      UINTN                               CacheOffset;
    #if VARIABLE_DEBUG
      DbgPrint(DEBUG_INFO,"Enter UpdateVariable ().\n");
    #endif
      if ((mVariableModuleGlobal->FvbInstance == NULL) && ((Attributes & EFI_VARIABLE_NON_VOLATILE) != 0)) {
        return EFI_NOT_AVAILABLE_YET;
      }
    
      if ((CacheVariable->CurrPtr == NULL) || CacheVariable->Volatile) {
        Variable = CacheVariable;
      } else {
        VariableStoreHeader  = (VARIABLE_STORE_HEADER *) ((UINTN) mVariableModuleGlobal->VariableGlobal.NonVolatileVariableBase);
        Variable = &NvVariable;
        Variable->StartPtr = GetStartPointer (VariableStoreHeader);
        Variable->EndPtr   = GetEndPointer (VariableStoreHeader);
        Variable->CurrPtr  = (VARIABLE_HEADER *)((UINTN)Variable->StartPtr + ((UINTN)CacheVariable->CurrPtr - (UINTN)CacheVariable->StartPtr));
        Variable->Volatile = FALSE;
      } 
      Fvb               = mVariableModuleGlobal->FvbInstance;                                                                                                                                
      Reclaimed         = FALSE;
    if (Variable->CurrPtr != NULL) {
        //
        // Update/Delete existing variable
        //
    #if VARIABLE_DEBUG
        DEBUG((EFI_D_ERROR, "Updatealready exist Variable %s.\n",VariableName));
    #endif
        if (EfiAtRuntime ()) {
          if (Variable->Volatile) {
            Status = EFI_WRITE_PROTECTED;
            goto Done;
          }
          if ((Variable->CurrPtr->Attributes & EFI_VARIABLE_NON_VOLATILE) == 0) {
            Status = EFI_INVALID_PARAMETER;
            goto Done;
          }
        }
        //
        // Setting a data variable with no access, or zero DataSize attributes specified causes it to be deleted.
        //
    
        if (DataSize == 0 || (Attributes & (EFI_VARIABLE_RUNTIME_ACCESS | EFI_VARIABLE_BOOTSERVICE_ACCESS)) == 0) {
          State = Variable->CurrPtr->State;
          State &= VAR_DELETED;
          Status = UpdateVariableStore (
                     &mVariableModuleGlobal->VariableGlobal,
                     Variable->Volatile,
                     FALSE,
                     Fvb,
                     (UINTN) &Variable->CurrPtr->State,
                     sizeof (UINT8),
                     &State
                     );
          if (!EFI_ERROR (Status)) {
            UpdateVariableInfo (VariableName, VendorGuid, Variable->Volatile, FALSE, FALSE, TRUE, FALSE);
            if (!Variable->Volatile) {
              CacheVariable->CurrPtr->State = State;
            }
          }                                                                                                                                                                                  
          goto Done;
        }
    //
        // If the variable is marked valid and the same data has been passed in
        // then return to the caller immediately.
        //
        if (DataSizeOfVariable (Variable->CurrPtr) == DataSize &&
            (CompareMem (Data, GetVariableDataPtr (Variable->CurrPtr), DataSize) == 0)) {
          UpdateVariableInfo (VariableName, VendorGuid, Variable->Volatile, FALSE, TRUE, FALSE, FALSE);
          Status = EFI_SUCCESS;
          goto Done;
        } else if ((Variable->CurrPtr->State == VAR_ADDED) ||
                   (Variable->CurrPtr->State == (VAR_ADDED & VAR_IN_DELETED_TRANSITION))) {
    
          //
          // Mark the old variable as in delete transition
          //
          State = Variable->CurrPtr->State;
          State &= VAR_IN_DELETED_TRANSITION;
    
          Status = UpdateVariableStore (
                     &mVariableModuleGlobal->VariableGlobal,
                     Variable->Volatile,
                     FALSE,
                     Fvb,
                     (UINTN) &Variable->CurrPtr->State,
                     sizeof (UINT8),
                     &State
                     );
          if (EFI_ERROR (Status)) {
            goto Done;
          }
          if (!Variable->Volatile) {
            CacheVariable->CurrPtr->State = State;
          }
        }                                                                                                                                                                                    
      } else {
    #if VARIABLE_DEBUG
        DEBUG((EFI_D_ERROR, "Creat new variable name = %s\n",VariableName));
    #endif
        //
        // Not found existing variable. Create a new variable
        //
    
        if (DataSize == 0 || (Attributes & (EFI_VARIABLE_RUNTIME_ACCESS | EFI_VARIABLE_BOOTSERVICE_ACCESS)) == 0) {
          Status = EFI_NOT_FOUND;
          goto Done;
        }
    
        if (EfiAtRuntime () &&
            (((Attributes & EFI_VARIABLE_RUNTIME_ACCESS) == 0) || ((Attributes & EFI_VARIABLE_NON_VOLATILE) == 0))) {
          Status = EFI_INVALID_PARAMETER;
          goto Done;
        }
      }
    
      //
      // Function part - create a new variable and copy the data.
      // Both update a variable and create a variable will come here.
      //
      NextVariable = GetEndPointer ((VARIABLE_STORE_HEADER *) ((UINTN) mVariableModuleGlobal->VariableGlobal.VolatileVariableBase));
      ScratchSize = MAX (PcdGet32 (PcdMaxVariableSize), PcdGet32 (PcdMaxHardwareErrorVariableSize));
    
      SetMem (NextVariable, ScratchSize, 0xff);
    
      NextVariable->StartId     = VARIABLE_DATA;
      NextVariable->Attributes  = Attributes;
      //
      // NextVariable->State = VAR_ADDED;
      //
    NextVariable->Reserved  = 0;
      VarNameOffset           = sizeof (VARIABLE_HEADER);
      VarNameSize             = StrSize (VariableName);
      CopyMem (
        (UINT8 *) ((UINTN) NextVariable + VarNameOffset),
        VariableName,
        VarNameSize
        );
      VarDataOffset = VarNameOffset + VarNameSize + GET_PAD_SIZE (VarNameSize);
      CopyMem (
        (UINT8 *) ((UINTN) NextVariable + VarDataOffset),
        Data,
        DataSize
        );
      CopyMem (&NextVariable->VendorGuid, VendorGuid, sizeof (EFI_GUID));
      //
      // There will be pad bytes after Data, the NextVariable->NameSize and
      // NextVariable->DataSize should not include pad size so that variable
      // service can get actual size in GetVariable
      //
      NextVariable->NameSize  = (UINT32)VarNameSize;
      NextVariable->DataSize  = (UINT32)DataSize;
    #if VARIABLE_DEBUG
      DbgPrint(DEBUG_INFO,"NextVariable->DataSize=0x%llx\n",NextVariable->DataSize);
      DbgPrint(DEBUG_INFO,"NextVariable->NameSize=0x%llx\n",NextVariable->NameSize);
    #endif
      VarSize = VarDataOffset + DataSize + GET_PAD_SIZE (DataSize);
      if ((Attributes & EFI_VARIABLE_NON_VOLATILE) != 0) {
        Volatile = FALSE;
        NonVolatileVarableStoreSize = ((VARIABLE_STORE_HEADER *)(UINTN)(mVariableModuleGlobal->VariableGlobal.NonVolatileVariableBase))->Size;
    
        if ((((Attributes & EFI_VARIABLE_HARDWARE_ERROR_RECORD) != 0) 
          && ((VarSize + mVariableModuleGlobal->HwErrVariableTotalSize) > PcdGet32 (PcdHwErrStorageSize)))
          || (((Attributes & EFI_VARIABLE_HARDWARE_ERROR_RECORD) == 0) 
          && ((VarSize + mVariableModuleGlobal->CommonVariableTotalSize) > NonVolatileVarableStoreSize - RECLAIM_SIZE_OFFSET - VAR_STORE_HEAD_OFFSET - PcdGet32 (PcdHwErrStorageSize)))) { 
          if (EfiAtRuntime ()) {
            Status = EFI_OUT_OF_RESOURCES;
            goto Done;
          }
          Status = Reclaim (mVariableModuleGlobal->VariableGlobal.NonVolatileVariableBase, 
                            &mVariableModuleGlobal->NonVolatileLastVariableOffset, FALSE, Variable->CurrPtr);
    
          if (EFI_ERROR (Status)) {
            goto Done;                                                                                                                                                                       
          }
    //
          // If still no enough space, return out of resources
          //
          if ((((Attributes & EFI_VARIABLE_HARDWARE_ERROR_RECORD) != 0) 
            && ((VarSize + mVariableModuleGlobal->HwErrVariableTotalSize) > PcdGet32 (PcdHwErrStorageSize)))
            || (((Attributes & EFI_VARIABLE_HARDWARE_ERROR_RECORD) == 0) 
            && ((VarSize + mVariableModuleGlobal->CommonVariableTotalSize) > NonVolatileVarableStoreSize - RECLAIM_SIZE_OFFSET - VAR_STORE_HEAD_OFFSET - PcdGet32 (PcdHwErrStorageSize)))) {
            Status = EFI_OUT_OF_RESOURCES;
            DEBUG_SHOW_ALL_VAR (mVariableModuleGlobal->VariableGlobal.NonVolatileVariableBase);
            goto Done;
          }
          Reclaimed = TRUE;
        }
        //
        // Four steps
        // 1. Write variable header
        // 2. Set variable state to header valid  
        // 3. Write variable data
        // 4. Set variable state to valid
        //
    
        //
        // Step 1:
        //
        CacheOffset = mVariableModuleGlobal->NonVolatileLastVariableOffset;
    #if VARIABLE_DEBUG
      DbgPrint(DEBUG_INFO,"mVariableModuleGlobal->NonVolatileLastVariableOffset=0x%llx (%a)[%d]\n",mVariableModuleGlobal->NonVolatileLastVariableOffset,__FUNCTION__,__LINE__);
    #endif
        Status = UpdateVariableStore (
                   &mVariableModuleGlobal->VariableGlobal,
                   FALSE,
                   TRUE,
                   Fvb,
                   mVariableModuleGlobal->NonVolatileLastVariableOffset,
                   sizeof (VARIABLE_HEADER),
                   (UINT8 *) NextVariable
                   );
        if (EFI_ERROR (Status)) {
          goto Done;
        }
    
        //
        // Step 2:
        //
    NextVariable->State = VAR_HEADER_VALID_ONLY;
        Status = UpdateVariableStore (
                   &mVariableModuleGlobal->VariableGlobal,
                   FALSE,
                   TRUE,
                   Fvb,
                   mVariableModuleGlobal->NonVolatileLastVariableOffset + OFFSET_OF (VARIABLE_HEADER, State),
                   sizeof (UINT8),
                   &NextVariable->State
                   );
        if (EFI_ERROR (Status)) {
          goto Done;
        }
    
        //
        // Step 3:
        //
        Status = UpdateVariableStore (
                   &mVariableModuleGlobal->VariableGlobal,
                   FALSE,
                   TRUE,
                   Fvb,
                   mVariableModuleGlobal->NonVolatileLastVariableOffset + sizeof (VARIABLE_HEADER),
                   (UINT32) VarSize - sizeof (VARIABLE_HEADER),
                   (UINT8 *) NextVariable + sizeof (VARIABLE_HEADER)
                   );
        if (EFI_ERROR (Status)) {
          goto Done;
        }
    
        //
        // Step 4:
        //
        NextVariable->State = VAR_ADDED;
        Status = UpdateVariableStore (
                   &mVariableModuleGlobal->VariableGlobal,
                   FALSE,
                   TRUE,
                   Fvb,
                   mVariableModuleGlobal->NonVolatileLastVariableOffset + OFFSET_OF (VARIABLE_HEADER, State),
                   sizeof (UINT8),
                   &NextVariable->State
                   );
    
        if (EFI_ERROR (Status)) {
          goto Done;
        }
    
        mVariableModuleGlobal->NonVolatileLastVariableOffset += HEADER_ALIGN (VarSize);
    
        if ((Attributes & EFI_VARIABLE_HARDWARE_ERROR_RECORD) != 0) {
          mVariableModuleGlobal->HwErrVariableTotalSize += HEADER_ALIGN (VarSize);
        } else {
          mVariableModuleGlobal->CommonVariableTotalSize += HEADER_ALIGN (VarSize);
        }
        //
        // update the memory copy of Flash region.
        //
        CopyMem ((UINT8 *)mNvNonVariableMem + CacheOffset, (UINT8 *)NextVariable, VarSize);
      } else {
        //
        // Create a volatile variable
        //      
        Volatile = TRUE;
        if ((UINT32) (VarSize + mVariableModuleGlobal->VolatileLastVariableOffset) >
            ((VARIABLE_STORE_HEADER *) ((UINTN) (mVariableModuleGlobal->VariableGlobal.VolatileVariableBase)))->Size) {
          //
          // Perform garbage collection & reclaim operation
          //
          Status = Reclaim (mVariableModuleGlobal->VariableGlobal.VolatileVariableBase,
                            &mVariableModuleGlobal->VolatileLastVariableOffset, TRUE, Variable->CurrPtr);
          if (EFI_ERROR (Status)) {
            goto Done;
          }
          //
          // If still no enough space, return out of resources
          //
          if ((UINT32) (VarSize + mVariableModuleGlobal->VolatileLastVariableOffset) >
                ((VARIABLE_STORE_HEADER *) ((UINTN) (mVariableModuleGlobal->VariableGlobal.VolatileVariableBase)))->Size
                ) {
            Status = EFI_OUT_OF_RESOURCES;
            goto Done;
          }
          Reclaimed = TRUE;
        }
    
        NextVariable->State = VAR_ADDED;
        Status = UpdateVariableStore (
                   &mVariableModuleGlobal->VariableGlobal,
                   TRUE,
                   TRUE,
                   Fvb,
                   mVariableModuleGlobal->VolatileLastVariableOffset,
                   (UINT32) VarSize,
                   (UINT8 *) NextVariable
                   );
    
        if (EFI_ERROR (Status)) {
          goto Done;
        }
    
        mVariableModuleGlobal->VolatileLastVariableOffset += HEADER_ALIGN (VarSize);
      }
      //
      // Mark the old variable as deleted
      //
      if (!Reclaimed && !EFI_ERROR (Status) && Variable->CurrPtr != NULL) {
        State = Variable->CurrPtr->State;
        State &= VAR_DELETED;
    
        Status = UpdateVariableStore (
                 &mVariableModuleGlobal->VariableGlobal,
                 Variable->Volatile,
                 FALSE,
                 Fvb,
                 (UINTN) &Variable->CurrPtr->State,
                 sizeof (UINT8),
                 &State
                 );
        if (!EFI_ERROR (Status) && !Variable->Volatile) {         
          CacheVariable->CurrPtr->State = State;
        }
      }
    
      if (!EFI_ERROR (Status)) {
        UpdateVariableInfo (VariableName, VendorGuid, Volatile, FALSE, TRUE, FALSE, FALSE);
        UpdateVariableMem (VariableName, VendorGuid, Attributes, DataSize, Data);
      }
    
    Done:
    #if VARIABLE_DEBUG
      DbgPrint(DEBUG_INFO,"mVariableModuleGlobal->NonVolatileLastVariableOffset=0x%llx (%a)[%d]\n",mVariableModuleGlobal->NonVolatileLastVariableOffset,__FUNCTION__,__LINE__);
      DbgPrint(DEBUG_INFO,"Return UpdateVariable ().\n");
    #endif                                                                                                                                                                                   
    
      return Status;
    }

    As the code above shows, this path covers both updating an existing variable and creating a brand-new one, for the volatile and non-volatile cases alike. Rather than walk through every line, a few points deserve attention. First, creating a non-volatile variable takes four steps. The variable header is filled in first; it is the VARIABLE_HEADER structure below:

    typedef struct {
      UINT16      StartId;
      UINT8       State;
      UINT8       Reserved;
      UINT32      Attributes;
      UINT32      NameSize;
      UINT32      DataSize;
      EFI_GUID    VendorGuid;
    } VARIABLE_HEADER;

    After these fields are initialized, the header is written to flash first. Step 2 updates the State field in the header to mark the header valid. Step 3 writes the variable data, and step 4 updates the header's State field again, this time to VAR_ADDED. In other words, creating one new non-volatile variable writes flash four times. Each write goes through the write function of the FirmwareVolumeBlockProtocol located earlier, which at the lowest level calls the write function of the FlashDeviceOperationProtocol; the FirmwareVolumeBlockProtocol implementation is covered later. The key parameter during a write is Offset: at the FlashDeviceOperationProtocol layer it is an offset relative to 0x900000001fc000000, while the address passed in here is a 64-bit virtual address. The header offset is

    the global mVariableModuleGlobal->NonVolatileLastVariableOffset. Every variable write updates this value so that it points at the start offset of the next write; the line mVariableModuleGlobal->NonVolatileLastVariableOffset += HEADER_ALIGN (VarSize); is exactly that update. After a reboot this value must be recovered first: every variable starting at 0x900000001fc0048 is traversed to find the last one, from which the next write offset is computed. That code also runs in the entry function shown earlier. One more note: VariableRuntimeDxe does contain the function that reclaims the NV region once the BIOS fills it up, but due to time constraints we have not debugged that part yet. The main problems hit during earlier debugging were which drivers VariableRuntimeDxe depends on, and how and where its variables are stored. When a variable is updated, the State field of the existing copy is invalidated and a fresh copy is written; that is also why the region fills up, since some variables are updated on every boot and their old copies are invalidated each time. The driver has two very important globals. One is VARIABLE_MODULE_GLOBAL *mVariableModuleGlobal = NULL; every field in it is used. Its type is:

    typedef struct {
      VARIABLE_GLOBAL VariableGlobal;
      UINTN           VolatileLastVariableOffset;
      UINTN           NonVolatileLastVariableOffset;
      UINTN           CommonVariableTotalSize;
      UINTN           HwErrVariableTotalSize;
      CHAR8           *PlatformLangCodes;//[256]; //Pre-allocate 256 bytes space to accommodate the PlatformlangCodes.
      CHAR8           *LangCodes;//[256]; //Pre-allocate 256 bytes space to accommodate the langCodes.
      CHAR8           *PlatformLang;//[8]; //Pre-allocate 8 bytes space to accommodate the Platformlang variable.
      CHAR8           Lang[4]; //Pre-allocate 4 bytes space to accommodate the lang variable.
      EFI_FIRMWARE_VOLUME_BLOCK_PROTOCOL *FvbInstance;
    } VARIABLE_MODULE_GLOBAL;

    All of these fields are initialized in the driver's entry function. Finally, after all the services VariableRuntimeDxe provides are registered, the gEfiVariableArchProtocolGuid and gEfiVariableWriteArchProtocolGuid protocols must be installed; otherwise code that later depends on these protocols will hang when it fails to locate them.

    The Variable Reclaim Process

    During boot, some variables are valid only for a single run: after a reboot they become invalid and must be written to flash again, so after enough reboots the variable region fills up. Here, 24 KB of space is reserved to hold variables. So how is a full region handled? The logic is added in SetVariable: when writing a variable (whether creating a new one or updating an existing one), a check is made for the following condition

     if ((((Attributes & EFI_VARIABLE_HARDWARE_ERROR_RECORD) != 0) 
          && ((VarSize + mVariableModuleGlobal->HwErrVariableTotalSize) > PcdGet32 (PcdHwErrStorageSize)))
          || (((Attributes & EFI_VARIABLE_HARDWARE_ERROR_RECORD) == 0) 
          && ((VarSize + mVariableModuleGlobal->CommonVariableTotalSize) > NonVolatileVarableStoreSize - RECLAIM_SIZE_OFFSET - VAR_STORE_HEAD_OFFSET - PcdGet32 (PcdHwErrStorageSize)))) {

    This is the condition for non-volatile variables. In practice, Attributes & EFI_VARIABLE_HARDWARE_ERROR_RECORD is always 0 here, so the check reduces to VarSize + mVariableModuleGlobal->CommonVariableTotalSize > NonVolatileVarableStoreSize - RECLAIM_SIZE_OFFSET - VAR_STORE_HEAD_OFFSET - PcdGet32 (PcdHwErrStorageSize). Put plainly: does the length of the variable being written, plus the variables already present, exceed the reserved size? If it does, the Reclaim function must run. Volatile variables work the same way and call the same Reclaim function. Its code is as follows:

    EFI_STATUS
    Reclaim (
      IN  EFI_PHYSICAL_ADDRESS  VariableBase,
      OUT UINTN                 *LastVariableOffset,
      IN  BOOLEAN               IsVolatile,
      IN  VARIABLE_HEADER       *UpdatingVariable
      )
    {
      VARIABLE_HEADER       *Variable;
      VARIABLE_HEADER       *AddedVariable;
      VARIABLE_HEADER       *NextVariable;
      VARIABLE_HEADER       *NextAddedVariable;
      VARIABLE_STORE_HEADER *VariableStoreHeader;
      UINT8                 *ValidBuffer;
      UINTN                 MaximumBufferSize;
      UINTN                 VariableSize;
      UINTN                 VariableNameSize;
      UINTN                 UpdatingVariableNameSize;
      UINTN                 NameSize;
      UINT8                 *CurrPtr;
      VOID                  *Point0;
      VOID                  *Point1;
      BOOLEAN               FoundAdded;
      EFI_STATUS            Status;
      CHAR16                *VariableNamePtr;
      CHAR16                *UpdatingVariableNamePtr;
    
      VariableStoreHeader = (VARIABLE_STORE_HEADER *) ((UINTN) VariableBase);
      //
      // Recalculate the total size of Common/HwErr type variables in the non-volatile area.
      //
      if (!IsVolatile) {
        mVariableModuleGlobal->CommonVariableTotalSize = 0;
        mVariableModuleGlobal->HwErrVariableTotalSize  = 0;
      }
    
      //
      // Start Pointers for the variable.
      //
      Variable          = GetStartPointer (VariableStoreHeader);
      MaximumBufferSize = VAR_STORE_HEAD_OFFSET;
      while (IsValidVariableHeader (Variable)) {
        NextVariable = GetNextVariablePtr (Variable);
        if (Variable->State == VAR_ADDED || 
            Variable->State == (VAR_IN_DELETED_TRANSITION & VAR_ADDED)
           ) {
          VariableSize = (UINTN) NextVariable - (UINTN) Variable;
          MaximumBufferSize += VariableSize;
        }
    
        Variable = NextVariable;
      }
    
      //
      // Reserve 1 byte of 0xFF to identify the
      // end of the variable buffer.
      // 
      MaximumBufferSize += 1;
      ValidBuffer = AllocatePool (MaximumBufferSize);
      if (ValidBuffer == NULL) {
        return EFI_OUT_OF_RESOURCES;
      }
    
      SetMem (ValidBuffer, MaximumBufferSize, 0xff);
    
      //
      // Copy variable store header
      //
      CopyMem (ValidBuffer, VariableStoreHeader, VAR_STORE_HEAD_OFFSET); 
      CurrPtr = (UINT8 *) GetStartPointer ((VARIABLE_STORE_HEADER *) ValidBuffer);
    
      //
      // Reinstall all ADDED variables as long as they are not identical to Updating Variable
      // 
      Variable = GetStartPointer (VariableStoreHeader);
      while (IsValidVariableHeader (Variable)) {
        NextVariable = GetNextVariablePtr (Variable);
        if (Variable->State == VAR_ADDED) {
          if (UpdatingVariable != NULL) {
            if (UpdatingVariable == Variable) {
              Variable = NextVariable;
              continue;
            }
    
            VariableNameSize         = NameSizeOfVariable(Variable);
            UpdatingVariableNameSize = NameSizeOfVariable(UpdatingVariable);
    
            VariableNamePtr         = GetVariableNamePtr (Variable);
            UpdatingVariableNamePtr = GetVariableNamePtr (UpdatingVariable);
            if (CompareGuid (&Variable->VendorGuid, &UpdatingVariable->VendorGuid)    &&
                VariableNameSize == UpdatingVariableNameSize &&
                CompareMem (VariableNamePtr, UpdatingVariableNamePtr, VariableNameSize) == 0 ) {
              Variable = NextVariable;
              continue;
            }
          }
          VariableSize = (UINTN) NextVariable - (UINTN) Variable;
          CopyMem (CurrPtr, (UINT8 *) Variable, VariableSize);
          CurrPtr += VariableSize;
          if ((!IsVolatile) && ((Variable->Attributes & EFI_VARIABLE_HARDWARE_ERROR_RECORD) == EFI_VARIABLE_HARDWARE_ERROR_RECORD)) {
            mVariableModuleGlobal->HwErrVariableTotalSize += VariableSize;
          } else if ((!IsVolatile) && ((Variable->Attributes & EFI_VARIABLE_HARDWARE_ERROR_RECORD) != EFI_VARIABLE_HARDWARE_ERROR_RECORD)) {
            mVariableModuleGlobal->CommonVariableTotalSize += VariableSize;
          }
        }
        Variable = NextVariable;
      }
    
      //
      // Reinstall the variable being updated if it is not NULL
      //
      if (UpdatingVariable != NULL) {
        VariableSize = (UINTN)(GetNextVariablePtr (UpdatingVariable)) - (UINTN)UpdatingVariable;
        CopyMem (CurrPtr, (UINT8 *) UpdatingVariable, VariableSize);
        CurrPtr += VariableSize;
        if ((!IsVolatile) && ((UpdatingVariable->Attributes & EFI_VARIABLE_HARDWARE_ERROR_RECORD) == EFI_VARIABLE_HARDWARE_ERROR_RECORD)) {
            mVariableModuleGlobal->HwErrVariableTotalSize += VariableSize;
        } else if ((!IsVolatile) && ((UpdatingVariable->Attributes & EFI_VARIABLE_HARDWARE_ERROR_RECORD) != EFI_VARIABLE_HARDWARE_ERROR_RECORD)) {
            mVariableModuleGlobal->CommonVariableTotalSize += VariableSize;
        }
      }
    
      //
      // Reinstall all in delete transition variables
      // 
      Variable      = GetStartPointer (VariableStoreHeader);
      while (IsValidVariableHeader (Variable)) {
        NextVariable = GetNextVariablePtr (Variable);
        if (Variable != UpdatingVariable && Variable->State == (VAR_IN_DELETED_TRANSITION & VAR_ADDED)) {
    
          //
          // The buffer already holds every ADDED variable. For each
          // IN_DELETED_TRANSITION variable, make sure no ADDED copy of
          // it is already in the buffer.
          // 
         
          FoundAdded = FALSE;
          AddedVariable = GetStartPointer ((VARIABLE_STORE_HEADER *) ValidBuffer);
          while (IsValidVariableHeader (AddedVariable)) {
            NextAddedVariable = GetNextVariablePtr (AddedVariable);
            NameSize = NameSizeOfVariable (AddedVariable);
            if (CompareGuid (&AddedVariable->VendorGuid, &Variable->VendorGuid) &&
                NameSize == NameSizeOfVariable (Variable)
               ) {
              Point0 = (VOID *) GetVariableNamePtr (AddedVariable);
              Point1 = (VOID *) GetVariableNamePtr (Variable);
              if (CompareMem (Point0, Point1, NameSizeOfVariable (AddedVariable)) == 0) {
                FoundAdded = TRUE;
                break;
              }
            }
            AddedVariable = NextAddedVariable;
          }
          if (!FoundAdded) {
            //
            // Promote VAR_IN_DELETED_TRANSITION to VAR_ADDED
            //
            VariableSize = (UINTN) NextVariable - (UINTN) Variable;
            CopyMem (CurrPtr, (UINT8 *) Variable, VariableSize);
            ((VARIABLE_HEADER *) CurrPtr)->State = VAR_ADDED;
            CurrPtr += VariableSize;
            if ((!IsVolatile) && ((Variable->Attributes & EFI_VARIABLE_HARDWARE_ERROR_RECORD) == EFI_VARIABLE_HARDWARE_ERROR_RECORD)) {
              mVariableModuleGlobal->HwErrVariableTotalSize += VariableSize;
            } else if ((!IsVolatile) && ((Variable->Attributes & EFI_VARIABLE_HARDWARE_ERROR_RECORD) != EFI_VARIABLE_HARDWARE_ERROR_RECORD)) {
              mVariableModuleGlobal->CommonVariableTotalSize += VariableSize;
            }
          }
        }
    
        Variable = NextVariable;
      }
    
      if (IsVolatile) {
        //
        // If volatile variable store, just copy valid buffer
        //
        SetMem ((UINT8 *) (UINTN) VariableBase, VariableStoreHeader->Size, 0xff);
        CopyMem ((UINT8 *) (UINTN) VariableBase, ValidBuffer, (UINTN) (CurrPtr - (UINT8 *) ValidBuffer));
        Status              = EFI_SUCCESS;
      } else {
        //
        // If non-volatile variable store, perform FTW here.
        //
        Status = ReclaimUpdateNvStore (
                   (UINT8 *) ValidBuffer, 
                   (UINT8 *) GetStartPointer ((VARIABLE_STORE_HEADER *) ValidBuffer), 
                   (UINT8 *) (CurrPtr) + 1
                   );
        CopyMem (mNvNonVariableMem, (CHAR8 *)(UINTN)VariableBase, VariableStoreHeader->Size);
      }
      if (!EFI_ERROR (Status)) {
        *LastVariableOffset = (UINTN) (CurrPtr - (UINT8 *) ValidBuffer);
      } else {                                                                                                                                                                               
        ASSERT(0);
        *LastVariableOffset = 0;
      }
      FreePool (ValidBuffer);
    
      return Status;
    }

    First, the function parameters:

    VariableBase is the address of the VARIABLE_STORE_HEADER: for the non-volatile case it is the non-volatile store's header address, and likewise the volatile store's header address for the volatile case.

    LastVariableOffset is the start offset of the last written variable, i.e. the offset tracked against mVariableModuleGlobal->VariableGlobal.NonVolatileVariableBase or mVariableModuleGlobal->VariableGlobal.VolatileVariableBase.

    IsVolatile indicates whether the variable is volatile, i.e. the variable's type.

    UpdatingVariable is the variable being written in this call.

    Using the first parameter, the first variable header is found, and then every variable is traversed and its State checked. Valid entries are kept, which yields the set of previously written variables that need to be written again; invalid ones are simply discarded. Then ReclaimUpdateNvStore is called to erase the whole variable region and write the valid variables back. Its code is below:

    EFI_STATUS
    ReclaimUpdateNvStore (
      IN  UINT8                               *StoreHeader,
      IN  UINT8                               *VariableStart,
      IN  UINT8                               *VariableEnd
      )
    {
      EFI_STATUS                              Status;
      UINTN                                   StartOffset;
      UINTN                                   EndOffset;
      UINTN                                   Offset;
      UINTN                                   BlockSize;
      EFI_LBA                                 LbaIndex;
      EFI_FIRMWARE_VOLUME_HEADER              *FwVolHeader;
      EFI_PHYSICAL_ADDRESS                    FvVolHdr;
    
      //
      // Get offset for Non-volatile range
      //
      StartOffset = (UINTN) VariableStart - (UINTN) StoreHeader;
      EndOffset   = ((VARIABLE_STORE_HEADER *) (UINTN) (mVariableModuleGlobal->VariableGlobal.NonVolatileVariableBase))->Size;
      Offset      = StartOffset;
    
      //
      // Get block size
      //
      Status = mVariableModuleGlobal->FvbInstance->GetPhysicalAddress(mVariableModuleGlobal->FvbInstance, &FvVolHdr);
      ASSERT_EFI_ERROR (Status);
      FwVolHeader = (EFI_FIRMWARE_VOLUME_HEADER *) ((UINTN) FvVolHdr);
      BlockSize = FwVolHeader->BlockMap->Length;
    
      //
      // Erase flash
      //
      while (Offset < EndOffset) {
        LbaIndex = (EFI_LBA) (Offset / BlockSize);
        mVariableModuleGlobal->FvbInstance->EraseBlocks (
                                              mVariableModuleGlobal->FvbInstance,
                                              LbaIndex,
                                              1,
                                              EFI_LBA_LIST_TERMINATOR
                                              );
        Offset += ERASE_ONE_BLOCK_SIZE;
      }
      //
      // Re-write flash
      //
      Status = UpdateVariableStore (
                 &mVariableModuleGlobal->VariableGlobal,
                 FALSE,
                 TRUE,
                 mVariableModuleGlobal->FvbInstance,
                 StartOffset,
                 (UINTN) VariableEnd - (UINTN) VariableStart,
                 (UINT8 *) VariableStart
                 );
    
      ASSERT_EFI_ERROR (Status);
      return Status;
    }

    The code above contains both the erase and the rewrite; the details are not covered further here. One thing to note: to invalidate a variable, just call SetVariable with DataSize = 0x0 and Data = NULL; a single such call marks that variable invalid.

     

      A tensor is the core unit of data in TensorFlow. A tensor consists of a set of primitive values shaped into an array of any number of dimensions. A tensor's rank is its number of dimensions, and its shape is an integer tuple giving the array's length along each dimension.
    A few special tensors in TensorFlow are introduced below:
    - tf.Variable
    - tf.constant
    - tf.placeholder

    tf.Variable

      TensorFlow variables are the best way to represent shared, persistent state manipulated by your program.

    Creating variables

      The best way to create a variable is to call the tf.get_variable function. It requires the variable's name and shape.

    # Create a 3-D variable with shape [1, 2, 3], named "a"
    variable_a = tf.get_variable("a",shape=[1,2,3])

      tf.get_variable can also take a data type (tf.float32 by default) and an initializer. TensorFlow provides many convenient initializers, for example:

    • tf.zeros_initializer

      Initializes to zeros.

      variable_b = tf.get_variable("b", shape=[1, 2, 3], dtype=tf.float32,
                                   initializer=tf.zeros_initializer)
    • tf.constant_initializer

      Initializes to a constant.

      value = np.array([0, 1, 2, 3, 4, 5, 6, 7])
      variable_c = tf.get_variable("c", dtype=tf.float32, shape=value.shape,
                                   initializer=tf.constant_initializer(value))
    • tf.truncated_normal_initializer

      Random initialization.

      variable_d = tf.get_variable('d', dtype=tf.float32, shape=[4, 3, 4], initializer=tf.truncated_normal_initializer)
      
    • tf.contrib.layers.xavier_initializer

      Random initialization.

      variable_e = tf.get_variable('e', dtype=tf.float32, shape=[4, 3, 4], initializer=tf.contrib.layers.xavier_initializer())
      

        These are a few common initializers. The last two see the most use; xavier_initializer is the one I recommend. When I trained an LSTM text classifier initialized with truncated_normal_initializer, the model converged after only a few batches, leaving accuracy low on both the training and test sets. This is not absolute, though; it depends on your model.

    Initializing variables

      A variable must be initialized before it can be used. Low-level code has to initialize explicitly; high-level frameworks do it automatically. The low-level case is covered below.

    • Initializing specific variables

      sess=tf.Session()
      sess.run(tf.initialize_variables([variable_c]))
      

      With only a few variables in a model this works fine, but with many variables it becomes tedious, at least for me.

    • Initializing all variables

      sess.run(tf.initialize_all_variables())

    tf.constant

      Use tf.constant to create a constant.

    ```
    # Create a 0-D constant
    tf.constant(0.2, name='a')
    # Create 1-D and 2-D constants
    tf.constant([1, 2, 3, 4], name='b')
    tf.constant(-1.0, shape=[2, 3], name='c')
    ```
    

    tf.placeholder

      This function acts like a formal parameter in a Java method: it defines the computation, and concrete values are supplied only when it executes.

    a = tf.placeholder(dtype=tf.float32, shape=None, name='a')
    b = tf.placeholder(dtype=tf.float32, shape=None, name='b')
    with tf.Session() as sess:
        print(sess.run(a + b, feed_dict={a: 1, b: 2}))
    