Q&A
  • It appears that for every small solid layer SuperSlicer adds a dense layer for the whole object, which can be quite wasteful. See pictures and attachment: Layer 81 + 122 + 131 -...
  • Dense layer can omit biases

    2021-01-06 19:10:55
    None, you get a dense layer with no biases. This matches the behavior of tf.contrib.layers.fully_connected. (This question comes from the open-source project deepchem/deepchem.)
  • Dense layer: the tf Dense API, a dense layer implemented in TF, and a dense layer implemented in NumPy.

    1 Dense Layer

    Goal: build a Dense layer and compare the differences between the two implementation approaches.

    2 Comparing the original add-layer approach with the subclassing approach

    2.1 global config

    import tensorflow as tf
    from tensorflow import keras
    from tensorflow.keras.layers import Input, LSTM, Dense
    import numpy as np
    
    np.random.seed(1)
    
    rows = 10000  	# number of samples
    columns = 100	# number of features
    
    train_x1 = np.random.random(size=(int(rows/2), columns))
    train_y1 = np.random.choice([0], size=(int(rows/2), 1))
    train_x2 = np.random.random(size=(int(rows/2), columns))+1
    train_y2 = np.random.choice([1], size=(int(rows/2), 1))
    
    train_x = np.vstack((train_x1, train_x2))
    train_y = np.vstack((train_y1, train_y2))
    
    units = 5	# number of units in the custom layer
    

    2.2 Implementation with add

    tf.random.set_seed(1)		# fix the random seed
    
    model1 = keras.Sequential()
    model1.add(Input(shape=(columns,)))
    model1.add(Dense(units=units))
    
    model1.compile(optimizer="adam", loss="mse", metrics=["accuracy"])
    
    model1.fit(train_x, train_y, epochs=10)
    model1.predict(train_x)[-1][-1]
    
    l1 = model1.layers[0]
    w1, b1 = l1.get_weights()
    

    The Dense API's parameters fall into several groups (see the sketch after this list):

    1. initializer - sets the initial parameter values
    2. regularizer - applies a regularization penalty (L1/L2)
    3. constraint - applies a constraint such as non-negativity or max-norm
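
    As a minimal sketch of how these three groups appear in the Dense API (the argument values here are illustrative assumptions, not from the original text):

    from tensorflow.keras import initializers, regularizers, constraints
    
    dense = Dense(
        units=units,
        kernel_initializer=initializers.GlorotUniform(),  # initializer: how the kernel starts
        kernel_regularizer=regularizers.l2(1e-4),         # regularizer: L2 penalty on the kernel
        kernel_constraint=constraints.NonNeg(),           # constraint: keep the kernel non-negative
    )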

    2.3 Implementation by subclassing

    tf.random.set_seed(1)		# fix the random seed
    
    class MyDenseLayer(keras.layers.Layer):
    	def __init__(self, num_outputs):
    		super(MyDenseLayer, self).__init__()
    		self.num_outputs = num_outputs
    	def build(self, input_shape):
    		self.kernel = self.add_weight(name="kernel", shape=[int(input_shape[-1]), self.num_outputs])
    		self.bias = self.add_weight(name="bias", shape=[self.num_outputs, ], initializer=keras.initializers.zeros)
    		self.built = True	# mark the layer as built
    	def call(self, input):
    		return tf.matmul(input, self.kernel) + self.bias
    
    model2 = keras.Sequential()
    model2.add(Input(shape=(columns,)))
    model2.add(MyDenseLayer(units))		
    
    model2.compile(loss="mse", optimizer="adam", metrics=['accuracy'])
    
    model2.fit(train_x, train_y, epochs=10)
    model2.predict(train_x)[-1][-1]
    
    l2 = model2.layers[0]
    w2, b2 = l2.get_weights()
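
    Both layers should end up with parameters of the same shape; a quick check (a sketch, assuming the cells above ran in order):

    print(w1.shape, b1.shape)	# (100, 5) (5,)
    print(w2.shape, b2.shape)	# (100, 5) (5,)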
    

    3 Comparison with fixed initial weights

    3.1 Defining custom weights with the built-in add_weight method

    Define the initial weights and bias yourself. Why define these two functions? So that add_weight can use them to initialize the kernel and the bias.

    def w_init(shape, dtype=tf.float32):
    	return tf.random.normal(shape=shape, dtype=dtype)
    
    def b_init(shape, dtype=tf.float32):
    	return tf.zeros(shape=shape, dtype=dtype)
    

    3.2 Implementation with add

    tf.random.set_seed(1)		# fix the random seed
    
    model3 = keras.Sequential()
    model3.add(Input(shape=(columns,)))
    model3.add(Dense(units=units, kernel_initializer=w_init, bias_initializer=b_init))	# use the fixed initial weights
    		
    model3.compile(optimizer="adam", loss="mse", metrics=["accuracy"])
    
    model3.fit(train_x, train_y, epochs=10)
    model3.predict(train_x)[-1][-1]
    
    l3 = model3.layers[0]
    w3, b3 = l3.get_weights()
    

    3.3 Implementation by subclassing

    tf.random.set_seed(1)		# fix the random seed
    
    class MyDenseLayer(keras.layers.Layer):
    	def __init__(self, num_outputs):
    		super(MyDenseLayer, self).__init__()
    		self.num_outputs = num_outputs
    	def build(self, input_shape):
    		self.kernel = self.add_weight(initializer=w_init, shape=(input_shape[-1], self.num_outputs), dtype=tf.float32)	# custom kernel
    		self.bias = self.add_weight(initializer=b_init, shape=(self.num_outputs,), dtype=tf.float32)		# custom bias
    	def call(self, input):
    		return tf.matmul(input, self.kernel) + self.bias
    
    model4 = keras.Sequential()
    model4.add(Input(shape=(columns,)))
    model4.add(MyDenseLayer(units))		
    
    model4.compile(loss="mse", optimizer="adam", metrics=['accuracy'])
    
    model4.fit(train_x, train_y, epochs=10)
    model4.predict(train_x)[-1][-1]
    
    l4 = model4.layers[0]
    w4, b4 = l4.get_weights()
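
    With identical initial weights, seed, and training data, the two layers should learn (near-)identical parameters; a quick check (a sketch, assuming the cells above ran in order):

    print(np.allclose(w3, w4), np.allclose(b3, b4))	# expected: True True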
    

    4 Using a custom matrix as the weight matrix

    4.1 Initialize the weight and bias matrices

    tf.random.set_seed(1)
    w = tf.random.normal(shape=(columns, units), dtype=tf.float32)
    b = tf.zeros(shape=(units,), dtype=tf.float32)
    
    def w_init(shape, dtype=tf.float32):
    	return w
    
    def b_init(shape, dtype=tf.float32):
    	return b
    

    4.2 Implementation with add

    tf.random.set_seed(1)		# fix the random seed
    
    model5 = keras.Sequential()
    model5.add(Input(shape=(columns,)))
    model5.add(Dense(units=units, kernel_initializer=w_init, bias_initializer=b_init))		
    
    model5.compile(loss="mse", optimizer="adam", metrics=['accuracy'])
    
    model5.fit(train_x, train_y, epochs=10)
    model5.predict(train_x)[-1][-1]
    

    4.3 Implementation by subclassing

    Instead of using Layer.add_weight, create the weight matrix yourself, use it as the initialization, and train from there.

    tf.random.set_seed(1)		# fix the random seed
    
    class MyDenseLayer(keras.layers.Layer):
    	def __init__(self, num_outputs):
    		super(MyDenseLayer, self).__init__()
    		self.num_outputs = num_outputs
    	def build(self, input_shape):
    		self.kernel = tf.Variable(w, trainable=True)
    		self.bias = tf.Variable(b, trainable=True)
    	def call(self, input):
    		return tf.matmul(input, self.kernel) + self.bias
    
    model6 = keras.Sequential()
    model6.add(Input(shape=(columns,)))
    model6.add(MyDenseLayer(units))		
    
    model6.compile(loss="mse", optimizer="adam", metrics=['accuracy'])
    
    model6.fit(train_x, train_y, epochs=10)
    model6.predict(train_x)[-1][-1]
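
    Since model5 and model6 start from the same w and b and train identically, their predictions should agree; a quick check (a sketch, assuming the cells above ran in order):

    print(np.allclose(model5.predict(train_x), model6.predict(train_x), atol=1e-5))	# expected: True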
    

    5 Implementation in NumPy

    5.1 Initialize the weight and bias matrices

    tf.random.set_seed(1)
    
    learning_rate = 0.01	# shared by the Keras, TF, and NumPy versions below
    
    train_x = np.ones(shape=(rows, columns), dtype="float32")	# dtypes must be consistent here, otherwise the NumPy and Keras results will differ; float32 is used throughout
    train_y = np.vstack([np.ones(shape=(int(rows/2), 1), dtype="float32"), np.zeros(shape=(int(rows/2),1), dtype="float32")])
    
    w = tf.random.normal(shape=(columns, 1), dtype=tf.float32)
    b = tf.zeros(shape=(1,), dtype=tf.float32)
    
    def w_init(shape, dtype=tf.float32):
    	return tf.convert_to_tensor(w, dtype=tf.float32)
    
    def b_init(shape, dtype=tf.float32):
    	return tf.convert_to_tensor(b, dtype=tf.float32)
    
    
    

    5.2 Implementation with add, with an activation function

    tf.random.set_seed(1)		# fix the random seed
    
    model7 = keras.Sequential()
    model7.add(Input(shape=(columns,)))
    model7.add(Dense(units=1, kernel_initializer=w_init, bias_initializer=b_init, activation="sigmoid"))
    
    h1 = model7.predict(train_x)
    
    model7.compile(loss="mse", optimizer=tf.keras.optimizers.SGD(learning_rate=learning_rate), metrics=['accuracy'])
    
    model7.fit(train_x, train_y, epochs=1, batch_size=rows)	# batch_size=rows gives full-batch gradient descent, since the NumPy version updates on the full dataset rather than mini-batches
    
    w1, b1 = model7.layers[0].weights
    

    5.3 Implementation in TF

    tf.random.set_seed(1)		# fix the random seed
    
    x = tf.Variable(train_x, dtype=tf.float32)
    w2 = w
    b2 = b
    with tf.GradientTape(persistent=True) as tape:
    	tape.watch([w2, b2])
    	y_pred = 1/(1+tf.math.exp(-(tf.matmul(x, w2)+b2)))	# sigmoid(x @ w2 + b2)
    	loss = tf.math.reduce_mean(tf.math.square(tf.subtract(y_pred, train_y)))
    
    dw2 = tape.gradient(target=loss, sources=w2)
    db2 = tape.gradient(target=loss, sources=b2)
    
    w2 = w2 - dw2*learning_rate
    b2 = b2 - db2*learning_rate
    
    

    5.4 Implementation in NumPy

    import numpy as np
    
    class MyModel:
        def __init__(self, w, b, learning_rate):
            self.w = w
            self.b = b
            self.learning_rate = learning_rate
        def fit(self, train_x, train_y, epochs, batch_size):
            self.x = train_x
            self.y = train_y
            for epoch in range(epochs):
                print(f"epoch {epoch}")
                self.forward()   # forward pass
                self.get_loss()
                self.backward()  # backward pass and parameter update
        def forward(self):
            self.h3 = self.sigmoid(np.dot(self.x, self.w) + self.b)
        def backward(self):
            # gradient of the loss w.r.t. w
            dw3 = np.dot(self.x.T, 2*(self.h3 - self.y)*self.h3*(1-self.h3)/train_x.shape[0])
            # gradient of the loss w.r.t. b: sum over samples, shape (1,)
            db3 = np.sum(2*(self.h3 - self.y)*self.h3*(1-self.h3)/train_x.shape[0], axis=0)
            self.w -= dw3 * self.learning_rate
            self.b -= db3 * self.learning_rate
        def sigmoid(self, x):
            return 1 / (1 + np.exp(-x))
        def get_loss(self):
            loss = np.sum((np.square(self.h3-self.y)), axis=0)/rows
            print(f"loss {loss}")
        def predict(self):
            pass
    
    model8 = MyModel(w.numpy(), b.numpy(), learning_rate)	# convert the TF tensors to NumPy arrays
    model8.fit(train_x, train_y, epochs=1, batch_size=rows)
    
    w3 = model8.w
    b3 = model8.b
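
    After this single full-batch step, the Keras (w1, b1), TF (w2, b2), and NumPy (w3, b3) updates can be compared directly; a minimal sketch (assuming the cells above ran in order):

    print(np.allclose(w1.numpy(), w2.numpy(), atol=1e-5))	# Keras vs. TF, expected: True
    print(np.allclose(w2.numpy(), w3, atol=1e-5))		# TF vs. NumPy, expected: True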
    
    
  • I noticed that the top-level IOs were still fully partitioned, and the dense layer directives were too aggressively serialized -- since essentially no directives were applied in serial mode at all, ...
  • module.densenet121.features.denseblock4.denselayer16.conv.2") File "/media/administrator/D/XRay/CheXnet_Demo/denseNet_localization.py", line 186, in generate fmaps = self._find(self....
  • Supporting dense layer

    2020-12-26 16:06:35
    The consequence for the joined case is that I get a denser density on the whole part as soon as layer 16 (4.10 mm height), due to small top solid areas (layer 19 - 5.0 mm height), and for all the next ...
  • Training with a custom dense layer: 1. an introduction to keras.layers.Dense; 2. a custom implementation. 1. Training with a custom loss function: def customized_mse(y_true, y_pred): return tf.reduce_mean(tf.square(y_pred - y_true)) model ...

    1. Training with a custom loss function

    def customized_mse(y_true, y_pred):  # a custom loss function
        return tf.reduce_mean(tf.square(y_pred - y_true))
    
    model = keras.models.Sequential([
        keras.layers.Dense(30, activation='relu',
                           input_shape=x_train.shape[1:]),
        keras.layers.Dense(1),
    ])
    model.summary()
    model.compile(loss=customized_mse, optimizer="sgd",  # pass the custom loss into the model
                  metrics=["mean_squared_error"])
    callbacks = [keras.callbacks.EarlyStopping(
        patience=5, min_delta=1e-2)]
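
    The custom loss can be sanity-checked against Keras's built-in MSE; note that the custom version averages over all elements, while the built-in one reduces only the last axis (a minimal sketch with made-up values):

    y_true = tf.constant([[1.0], [2.0]])
    y_pred = tf.constant([[1.5], [1.0]])
    print(customized_mse(y_true, y_pred).numpy())                   # 0.625, a single scalar
    print(keras.losses.mean_squared_error(y_true, y_pred).numpy())  # [0.25 1.  ], one value per sample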
    


    2. Training with a custom dense layer

    1. Introduction to keras.layers.Dense

    The model.add(Dense(units)) layer we normally add is a fully connected layer. Its two most important parameters are the bias and the kernel: the layer computes x (a matrix) * w (the kernel) + b (the bias), where w is a matrix and b is a vector broadcast across the rows.
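
    A quick way to confirm that Dense computes x @ w + b (a minimal sketch, assuming tf, keras, and np are imported):

    layer = keras.layers.Dense(3)
    x = tf.ones((2, 4))
    y = layer(x)                    # calling the layer builds its kernel and bias
    w, b = layer.get_weights()
    print(np.allclose(y.numpy(), x.numpy() @ w + b))  # True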

    2. Custom implementation

    A layer with no parameters generally does not need a subclass (which takes many lines of code); it can be created with a Lambda expression.

    # tf.nn.softplus : log(1+e^x)
    customized_softplus = keras.layers.Lambda(lambda x : tf.nn.softplus(x))  # this implements an activation function as a layer
    print(customized_softplus([-10., -5., 0., 5., 10.]))
    

    tf.Tensor([4.5417706e-05 6.7153489e-03 6.9314718e-01 5.0067153e+00 1.0000046e+01], shape=(5,), dtype=float32)

    Layers with parameters, however, still need to be implemented as a subclass.

    # customized dense layer.
    class CustomizedDenseLayer(keras.layers.Layer):
        def __init__(self, units, activation=None, **kwargs):
            self.units = units
            self.activation = keras.layers.Activation(activation)
            super(CustomizedDenseLayer, self).__init__(**kwargs)  # inherit initialization from keras.layers.Layer
        
        def build(self, input_shape):
            """Create the parameters the layer needs."""
            # x * w + b. input_shape: [None, a], w: [a, b], output_shape: [None, b]
            # first add the kernel (the weights)
            self.kernel = self.add_weight(name = 'kernel',
                                          shape = (input_shape[1], self.units),
                                          initializer = 'uniform',
                                          trainable = True)
            # then add the bias
            self.bias = self.add_weight(name = 'bias',
                                        shape = (self.units, ),
                                        initializer = 'zeros',
                                        trainable = True)
            super(CustomizedDenseLayer, self).build(input_shape)  # call the parent's build method
        
        def call(self, x):
            """Forward computation."""
            return self.activation(x @ self.kernel + self.bias)
    
    model = keras.models.Sequential([
        CustomizedDenseLayer(30, activation='relu',
                             input_shape=x_train.shape[1:]),
        CustomizedDenseLayer(1),
        customized_softplus,
    #customized_softplus can also be implemented as the two stacked layers below:
        # keras.layers.Dense(1, activation="softplus"),
        # keras.layers.Dense(1), keras.layers.Activation('softplus'),
    ])
    model.summary()
    model.compile(loss="mean_squared_error", optimizer="sgd")
    callbacks = [keras.callbacks.EarlyStopping(
        patience=5, min_delta=1e-2)]
    


  • ...t split the dense layer prior to the output layer as seen below. (image: https://user-images.githubusercontent.com/7274845/56815143-7ca29d80-6838-11e9-9b4d-66ad803b966f.png) ...
  • ...s last two dimensions unchanged after the DenseLayer operation. I think it should be: self.add_module('relu1', nn.ReLU(inplace=True)), self.add_module('conv2', nn.Conv...
  • This implements a skip connection like ResNet (https://arxiv.org/pdf/1512.03385.pdf) on the dense layer. The input and output size need to match (for now; maybe configure a weight matrix to ...
  • This article introduces two ways to define custom layers: the subclass approach and the lambda approach, where the subclass approach suits layers with many parameters... 1. Defining a DenseLayer (fully connected layer) by subclassing: implementing a custom DenseLayer is very similar to the wide_deep model implemented earlier, both via class inheritance. Only...

    This article introduces two ways to define custom layers: the subclass approach and the lambda approach. Subclassing suits layers with many parameters, while the lambda approach is better for defining a layer with no parameters, such as an activation function. Compared with subclassing, the lambda approach is simpler to implement and needs less code; details below.

    1. Defining a DenseLayer (fully connected layer) by subclassing

    Implementing a custom DenseLayer is very similar to the wide_deep model implemented earlier; both work by subclassing. The only difference is that the wide_deep model merged the initializer with the build function, whereas here they are separate.

    Core code:

    # customized dense layer.
    class CustomizedDenseLayer(keras.layers.Layer):
        def __init__(self, units, activation=None, **kwargs):
            self.units = units #number of output units
            self.activation = keras.layers.Activation(activation)
            super(CustomizedDenseLayer, self).__init__(**kwargs)
        
        def build(self, input_shape):
            """Create the parameters the layer needs."""
            # x * w + b. x input_shape: [None, a], output_shape: [None, b], w: [a, b]
            self.kernel = self.add_weight(name = 'kernel',
                                          shape = (input_shape[1], self.units),
                                          initializer = 'uniform', #initialization method
                                          trainable = True)
            self.bias = self.add_weight(name = 'bias',
                                        shape = (self.units, ),
                                        initializer = 'zeros',
                                        trainable = True)
            super(CustomizedDenseLayer, self).build(input_shape)
        
        def call(self, x):
            """One forward computation."""
            return self.activation(x @ self.kernel + self.bias)
    
    model = keras.models.Sequential([
        CustomizedDenseLayer(30, activation='relu',
                             input_shape=x_train.shape[1:]),
        CustomizedDenseLayer(1),
    ])

    note: in keras.layers.Activation(activation), the lowercase activation is a function (or a string); passing it to keras.layers.Activation wraps it in a layer. When self.activation() is called, it triggers keras.layers.Activation's call method, which simply applies the activation that was passed in:

     def call(self, inputs):
        return self.activation(inputs)
    

    2. Defining a custom softplus activation layer with the lambda approach

    The softplus activation has no parameters, so the simpler lambda approach is used.

    Core code:

    # the activation tf.nn.softplus: log(1 + e^x)
    customized_softplus = keras.layers.Lambda(lambda x : tf.nn.softplus(x))
    #print(customized_softplus([-10., -5., 0., 5., 10.]))

    Note the commented-out line above that tests softplus. Its output is:

    tf.Tensor([4.5417706e-05 6.7153489e-03 6.9314718e-01 5.0067153e+00 1.0000046e+01], shape=(5,), dtype=float32)

    That is, given input x, the output is log(1 + e^x).
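
    A quick numeric check of the middle entry (a sketch, assuming NumPy as np):

    print(np.log(1 + np.exp(0.0)))  # 0.6931472, matching the output above for x = 0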

    Next, add the finished softplus layer to the model just defined:

    model = keras.models.Sequential([
        CustomizedDenseLayer(30, activation='relu',
                             input_shape=x_train.shape[1:]),
    CustomizedDenseLayer(1), #no activation passed, so it defaults to None; it is set in the constructor and applied in call
        customized_softplus,
        #keras.layers.Dense(1, activation = "softplus"),
    #keras.layers.Dense(1), keras.layers.Activation('softplus'),
    ])

    note: the pair

     CustomizedDenseLayer(1),
     customized_softplus,

    is equivalent to keras.layers.Dense(1, activation = "softplus"), and also to

     keras.layers.Dense(1),
     keras.layers.Activation('softplus'),

    The full code follows:

    import matplotlib as mpl
    import matplotlib.pyplot as plt
    %matplotlib inline
    import numpy as np
    import sklearn
    import pandas as pd
    import os
    import sys
    import time
    import tensorflow as tf
    
    from tensorflow import keras
    
    print(tf.__version__)
    print(sys.version_info)
    for module in mpl, np, pd, sklearn, tf, keras:
        print(module.__name__, module.__version__)
    
    layer = tf.keras.layers.Dense(100)
    layer = tf.keras.layers.Dense(100, input_shape=(None, 5))
    layer(tf.zeros([10, 5]))
    
    #two key inspection attributes of a layer
    layer.variables #all parameters contained in the layer
    # x * w + b
    layer.trainable_variables #all trainable variables
    
    from sklearn.datasets import fetch_california_housing
    
    housing = fetch_california_housing()
    print(housing.DESCR)
    print(housing.data.shape)
    print(housing.target.shape)
    
    from sklearn.model_selection import train_test_split
    
    x_train_all, x_test, y_train_all, y_test = train_test_split(
        housing.data, housing.target, random_state = 7)
    x_train, x_valid, y_train, y_valid = train_test_split(
        x_train_all, y_train_all, random_state = 11)
    print(x_train.shape, y_train.shape)
    print(x_valid.shape, y_valid.shape)
    print(x_test.shape, y_test.shape)
    
    from sklearn.preprocessing import StandardScaler
    
    scaler = StandardScaler()
    x_train_scaled = scaler.fit_transform(x_train)
    x_valid_scaled = scaler.transform(x_valid)
    x_test_scaled = scaler.transform(x_test)
    
    # the activation tf.nn.softplus: log(1 + e^x)
    customized_softplus = keras.layers.Lambda(lambda x : tf.nn.softplus(x))
    #print(customized_softplus([-10., -5., 0., 5., 10.]))
    
    # customized dense layer.
    class CustomizedDenseLayer(keras.layers.Layer):
        def __init__(self, units, activation=None, **kwargs):
            self.units = units #number of output units
            self.activation = keras.layers.Activation(activation)
            super(CustomizedDenseLayer, self).__init__(**kwargs)
        
        def build(self, input_shape):
            """Create the parameters the layer needs."""
            # x * w + b. x input_shape: [None, a], output_shape: [None, b], w: [a, b]
            self.kernel = self.add_weight(name = 'kernel',
                                          shape = (input_shape[1], self.units),
                                          initializer = 'uniform', #initialization method
                                          trainable = True)
            self.bias = self.add_weight(name = 'bias',
                                        shape = (self.units, ),
                                        initializer = 'zeros',
                                        trainable = True)
            super(CustomizedDenseLayer, self).build(input_shape)
        
        def call(self, x):
            """One forward computation."""
            return self.activation(x @ self.kernel + self.bias)
    
    model = keras.models.Sequential([
        CustomizedDenseLayer(30, activation='relu',
                             input_shape=x_train.shape[1:]),
        CustomizedDenseLayer(1),
        customized_softplus,
        #keras.layers.Dense(1, activation = "softplus"),
    #keras.layers.Dense(1), keras.layers.Activation('softplus'),
    ])
    model.summary()
    model.compile(loss="mean_squared_error", optimizer="sgd")
    callbacks = [keras.callbacks.EarlyStopping(
        patience=5, min_delta=1e-2)]
    
    history = model.fit(x_train_scaled, y_train,
                        validation_data = (x_valid_scaled, y_valid),
                        epochs = 100,
                        callbacks = callbacks)
    
    def plot_learning_curves(history):
        pd.DataFrame(history.history).plot(figsize=(8, 5))
        plt.grid(True)
        plt.gca().set_ylim(0, 1)
        plt.show()
    plot_learning_curves(history)
    
    model.evaluate(x_test_scaled, y_test, verbose=0)
    
    

     

  • Implementing a custom DenseLayer with a subclass and with lambda. 1. Before introducing custom layers built by subclassing Layer, two ways to inspect parameters: layer.variables shows the layer's variables; layer.trainable_variables shows the trainable variables. Example: ...
    Implementing a custom DenseLayer with a subclass and with lambda

    1. Before introducing custom layers built by subclassing Layer, here are two ways to inspect a layer's parameters.

    layer.variables lists the variables in the layer
    layer.trainable_variables lists the trainable variables

    Example:

    import tensorflow as tf
    layer = tf.keras.layers.Dense(100, input_shape = [None, 5])
    layer(tf.zeros([2,5]))
    

    The resulting matrix has 200 entries (2 x 100).

    layer.variables
    

    An excerpt of the variables:

    x * w + b
    <tf.Variable 'dense_2/kernel:0' shape=(5, 100) dtype=float32, numpy=
    array([[ 8.61696750e-02, -1.33024722e-01, -1.06008053e-02, ...
    (kernel is the W parameter)
    <tf.Variable 'dense_2/bias:0' shape=(100,) dtype=float32, numpy=
    array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., ...
    (bias is the b parameter)

    Similarly, we can inspect the trainable parameters:

    layer.trainable_variables
    

    2. Implementing a custom DenseLayer with a subclass and with lambda.

    Subclassing is for custom layers that need parameters; if no parameters are needed, a lambda can build the custom layer instead.

    Code example:

    • Subclass implementation
    #customized dense layer
    #defined by subclassing Layer
    
    class CustomizedDenseLayer(keras.layers.Layer):
        
        def __init__(self, units, activation = None, **kwargs):
            self.units = units   #number of units
            self.activation = keras.layers.Activation(activation)
            super().__init__(**kwargs)
            
        def build(self, input_shape):
            """Create the parameters the layer needs"""
            # x * w + b.
            self.kernel = self.add_weight(name = 'kernel', 
                                          shape = (input_shape[1], self.units),
                                          initializer = 'uniform',
                                          trainable = True)
            self.bias = self.add_weight(name = 'bias',
                                        shape = (self.units, ),
                                        initializer = 'zeros',
                                        trainable = True
                                       )
            super().build(input_shape)
        
        def call(self, x):
            """Forward computation"""
            return self.activation(x @ self.kernel + self.bias)
    
    • Lambda implementation, using softplus as an example.
    #a custom layer that needs no parameters can be defined with a lambda
    #tf.nn.softplus : log(1 + e^x)
    Customized_softplus = keras.layers.Lambda(lambda x :tf.nn.softplus(x))
    

    Build a network with the custom layers:

    #use the Sequential model: tf.keras.models.Sequential()
    
    model = keras.models.Sequential([
        #keras.layers.Flatten(input_shape = x_train.shape[1:]) - if the data is already flattened, Flatten is unnecessary
        #use the custom layers
        CustomizedDenseLayer(30, activation="relu",input_shape = x_train.shape[1:]),
        CustomizedDenseLayer(1),
        Customized_softplus,
        #the final lambda-defined layer is equivalent to either of the following:
        #keras.layers.Dense(1, activation = 'softplus')
        #keras.layers.Dense(1) , keras.layers.Activation('softplus')
    ])
    
  • Custom DenseLayer. I. Custom loss functions - 1. Implementing a custom loss function in TF. A demo in TF: import tensorflow as tf; import numpy as np; y_pred = np.array([1,4,3,2,6,5,9]); y_true = np.array([1,4,3,2,1,4,...
  • Unexpected key(s) in state_dict: "dense_block1.denselayer1.norm.1 from torchvision.models import densenet121 from collections import OrderedDict model = densenet121(pretrained=False) state_dict ...
  • Why do you have routing in the class capsules layer for EM capsules? (This question comes from the open-source project naturomics/CapsLayer.)
  • model.add(Dense(128, init='normal', input_dim=16_4_4)) File "C:\Users\Edward\Anaconda2\lib\site-packages\keras\models.py", line 308, in add output_tensor = layer(self....
  • ]) # Swapped for _DenseLayer self.add_module('denselayer%d' % (i + 1), layer) input_features = out_features (This question comes from the open-source project xavysp/DexiNed.)
  • 2. Custom DenseLayer: 2.1 with parameters, 2.2 without parameters; 1. Custom loss function. import matplotlib as mpl; import matplotlib.pyplot as plt; %matplotlib inline; import numpy as np; import pandas as pd; import os; import ...
  • 🐞Describe the bug - A clear and brief description ... ValueError: Keras layer '' not supported (This question comes from the open-source project apple/coremltools.)
  • I am trying to do an experiment that compares the performance if we turn off the CRF layer and use a dense layer as the output decoder. input size: size of lstm hidden states, output size: label...
  • Convolutional layers / pooling layers / dense layers: convolutional layers apply a specified number of convolution filters to the image. For each ...
  • ...s strange that the error upon reloading reports that the ToDense layer does not support RaggedTensors as input. The __init__ for the layer sets _supports_ragged_inputs to True. ...
  • from keras.layers import Dense, Conv2D, Flatten; model = Sequential(); model.add( Conv2D( input_shape=(28, 28,1), filters=64, kernel_size=(3,3), activation=keras.activations....
  • Describe the bug: Need some assistance in the setup of incremental training for my BERTClassification model. Suppose I have num_of_labels = 3; then I would add ...
