Stacked Autoencoder

    2017-12-31 12:49:07

    Author: chen_h
    WeChat & QQ: 862251340
    WeChat public account: coderpai
    Jianshu: https://www.jianshu.com/p/51d5639c2c71

    ----


    The power of deep learning lies in its ability to learn multiple representations of raw data layer by layer. Each layer builds on the features expressed by the previous layer, extracting features that are more abstract and better suited to complex tasks such as classification.

    A stacked autoencoder (SAE) does exactly this. The autoencoder, sparse autoencoder, and denoising autoencoder introduced earlier are all single autoencoders: by constructing an imaginary three-layer network x -> h -> x, they learn a feature transform h = f(Wx + b). Once training is finished, the output layer no longer serves any purpose, so we usually drop it and represent the autoencoder by its encoder part alone:
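The x -> h -> x mapping just described can be sketched in a few lines of NumPy. The weights below are random placeholders purely to illustrate the shapes; a real autoencoder would learn them by minimizing the reconstruction error:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(0)
n_input, n_hidden = 784, 100          # e.g. a flattened 28x28 MNIST image

W1 = rng.normal(0, 0.1, (n_hidden, n_input))   # encoder weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_input, n_hidden))   # decoder weights
b2 = np.zeros(n_input)

x = rng.random(n_input)
h = sigmoid(W1 @ x + b1)        # feature transform h = f(Wx + b)
x_hat = sigmoid(W2 @ h + b2)    # reconstruction; discarded after training

print(h.shape, x_hat.shape)     # (100,) (784,)
```

After training, only `W1`, `b1` (the encoder) are kept; `x_hat` exists solely to provide the training signal.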

    The reason we represented the autoencoder as a three-layer network in the first place was purely a training need: we use the original data as an imaginary target output, constructing a supervised error signal to train the whole network. Once training is done, the output layer can be removed, because all we care about is the transform from x to h.

    The next step is natural: now that we have the feature representation h, can we treat h as new raw input and train another autoencoder on it to obtain a new feature representation? Of course we can, and that is exactly what a stacked autoencoder (SAE) is. "Stacked" means piling layers on top of one another, much like a stack. After stacking several autoencoders, the system looks like this:

    This turns the autoencoder into a deep architecture, i.e. learning multiple levels of representation and abstraction (Hinton, Bengio, LeCun, 2015). Note that the whole network is not trained in one shot but layer by layer. For example, to train an n -> m -> k network, we first train the network n -> m -> n to obtain the transform n -> m, then train the network m -> k -> m to obtain the transform m -> k. Stacking the two yields the final SAE, n -> m -> k. The whole process is like building a house one floor at a time; this is the famous layer-wise unsupervised pre-training.
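The greedy schedule above can be sketched end-to-end with a tiny NumPy autoencoder. This is a minimal illustration, not the article's implementation: `train_autoencoder` is a hypothetical helper, the data is random noise standing in for MNIST, and plain gradient descent replaces the sparsity and weight-decay terms used later in the article:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def train_autoencoder(X, n_hidden, lr=0.5, steps=300, seed=0):
    """Train a d -> n_hidden -> d autoencoder by gradient descent on the
    squared reconstruction error; return only the learned encoder."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W1 = rng.normal(0, 0.1, (d, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.1, (n_hidden, d)); b2 = np.zeros(d)
    for _ in range(steps):
        H = sigmoid(X @ W1 + b1)
        X_hat = sigmoid(H @ W2 + b2)
        # backprop through L = mean over samples of ||X_hat - X||^2
        d_out = 2 * (X_hat - X) * X_hat * (1 - X_hat) / len(X)
        d_hid = (d_out @ W2.T) * H * (1 - H)
        W2 -= lr * H.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_hid; b1 -= lr * d_hid.sum(0)
    return lambda Z: sigmoid(Z @ W1 + b1)

rng = np.random.default_rng(1)
X = rng.random((64, 20))              # toy data standing in for MNIST

encode1 = train_autoencoder(X, 10)    # train n -> m -> n, keep n -> m
H1 = encode1(X)                       # first-order features
encode2 = train_autoencoder(H1, 5)    # train m -> k -> m on those features
H2 = encode2(H1)                      # stacked result: n -> m -> k
print(H2.shape)                       # (64, 5)
```

The key point is that the second autoencoder never sees the raw data: its "raw input" is the first encoder's output, which is exactly the layer-wise scheme described above.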

    Next, let's look at a concrete example. Suppose you want to train a stacked autoencoder with two hidden layers for MNIST handwritten digit classification.

    First, train a sparse autoencoder on the raw input x(k) so that it learns a first-order feature representation h(1)(k) of the input, as shown below:

    Next, feed the raw data into the trained sparse autoencoder: for each input x(k) you obtain its corresponding first-order feature representation h(1)(k). Then use these first-order features as the input to a second sparse autoencoder, which learns second-order features h(2)(k), as shown below:

    Similarly, feed the first-order features into the newly trained second sparse autoencoder to obtain the second-order feature activations h(2)(k) for each h(1)(k). You can then use these second-order features as the input to a softmax classifier, training it to map second-order features onto digit labels, as shown below:

    Finally, combine the three components into a stacked autoencoder network with two hidden layers and a final softmax classifier layer, which can classify the MNIST dataset as desired. The final model looks like this:

    The experiment code is as follows:

    #!/usr/bin/env python
    # -*- coding: utf-8 -*-
    # NOTE: this script uses the legacy TensorFlow 0.x / Python 2 APIs of its era
    # (tf.sub, tf.mul, tf.initialize_all_variables, xrange, print statements).

    import tensorflow as tf
    import numpy as np
    import input_data  # MNIST loader from the old TensorFlow tutorials
    
    
    N_INPUT = 28*28
    N_HIDDEN_1 = 1000
    N_OUTPUT_1 = N_INPUT
    N_HIDDEN_2 = 1500
    N_OUTPUT_2 = N_HIDDEN_1
    N_OUTPUT = 10
    BATCH_SIZE = 160
    EPOCHES = 10
    RHO = .1
    BETA = tf.constant(3.0)
    LAMBDA = tf.constant(.0001)
    
    w_model_one_init = np.sqrt(6. / (N_INPUT + N_HIDDEN_1))
    
    model_one_weights = {
        "hidden": tf.Variable(tf.random_uniform([N_INPUT, N_HIDDEN_1], minval = -w_model_one_init, maxval = w_model_one_init)),
        "out": tf.Variable(tf.random_uniform([N_HIDDEN_1, N_OUTPUT_1], minval = -w_model_one_init, maxval = w_model_one_init))
    }
    model_one_bias = {
        "hidden": tf.Variable(tf.random_uniform([N_HIDDEN_1], minval = -w_model_one_init, maxval = w_model_one_init)),
        "out": tf.Variable(tf.random_uniform([N_OUTPUT_1], minval = -w_model_one_init, maxval = w_model_one_init))
    }
    
    w_model_two_init = np.sqrt(6. / (N_HIDDEN_1 + N_HIDDEN_2))
    
    model_two_weights = {
        "hidden": tf.Variable(tf.random_uniform([N_HIDDEN_1, N_HIDDEN_2], minval = -w_model_two_init, maxval = w_model_two_init)),
        "out": tf.Variable(tf.random_uniform([N_HIDDEN_2, N_OUTPUT_2], minval = -w_model_two_init, maxval = w_model_two_init))
    }
    model_two_bias = {
        "hidden": tf.Variable(tf.random_uniform([N_HIDDEN_2], minval = -w_model_two_init, maxval = w_model_two_init)),
        "out": tf.Variable(tf.random_uniform([N_OUTPUT_2], minval = -w_model_two_init, maxval = w_model_two_init))
    }
    
    w_model_init = np.sqrt(6. / (N_HIDDEN_2 + N_OUTPUT))
    
    model_weights = {
        "out": tf.Variable(tf.random_uniform([N_HIDDEN_2, N_OUTPUT], minval = -w_model_init, maxval = w_model_init))
    }
    model_bias = {
        "out": tf.Variable(tf.random_uniform([N_OUTPUT], minval = -w_model_init, maxval = w_model_init))
    }
    
    
    model_one_X = tf.placeholder("float", [None, N_INPUT])
    model_two_X = tf.placeholder("float", [None, N_HIDDEN_1])
    Y = tf.placeholder("float", [None, N_OUTPUT])
    
    def model_one(X):
        hidden = tf.sigmoid(tf.add(tf.matmul(X, model_one_weights["hidden"]), model_one_bias["hidden"]))
        out = tf.sigmoid(tf.add(tf.matmul(hidden, model_one_weights["out"]), model_one_bias["out"]))
        return [hidden, out]
    
    def model_two(X):
        hidden = tf.sigmoid(tf.add(tf.matmul(X, model_two_weights["hidden"]), model_two_bias["hidden"]))
        out = tf.sigmoid(tf.add(tf.matmul(hidden, model_two_weights["out"]), model_two_bias["out"]))
        return [hidden, out]
    
    def model(X):
        hidden_1 = tf.sigmoid(tf.add(tf.matmul(X, model_one_weights["hidden"]), model_one_bias["hidden"]))
        hidden_2 = tf.sigmoid(tf.add(tf.matmul(hidden_1, model_two_weights["hidden"]), model_two_bias["hidden"]))
        out = tf.add(tf.matmul(hidden_2, model_weights["out"]), model_bias["out"])
        return out
    
    def KLD(p, q):
        invrho = tf.sub(tf.constant(1.), p)
        invrhohat = tf.sub(tf.constant(1.), q)
        addrho = tf.add(tf.mul(p, tf.log(tf.div(p, q))), tf.mul(invrho, tf.log(tf.div(invrho, invrhohat))))
        return tf.reduce_sum(addrho)
    
    mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
    trX, trY, teX, teY = mnist.train.images, mnist.train.labels, mnist.test.images, mnist.test.labels
    
    # model one
    model_one_hidden, model_one_out = model_one(model_one_X)
    # loss
    model_one_cost_J = tf.reduce_sum(tf.pow(tf.sub(model_one_out, model_one_X), 2))
    # cost sparse
    model_one_rho_hat = tf.div(tf.reduce_sum(model_one_hidden), N_HIDDEN_1)
    model_one_cost_sparse = tf.mul(BETA, KLD(RHO, model_one_rho_hat))
    # cost reg
    model_one_cost_reg = tf.mul(LAMBDA, tf.add(tf.nn.l2_loss(model_one_weights["hidden"]), tf.nn.l2_loss(model_one_weights["out"])))
    # cost function
    model_one_cost = tf.add(tf.add(model_one_cost_J, model_one_cost_reg), model_one_cost_sparse)
    train_op_1 = tf.train.AdamOptimizer().minimize(model_one_cost)
    # =======================================================================================
    
    # model two
    model_two_hidden, model_two_out = model_two(model_two_X)
    # loss
    model_two_cost_J = tf.reduce_sum(tf.pow(tf.sub(model_two_out, model_two_X), 2))
    # cost sparse
    model_two_rho_hat = tf.div(tf.reduce_sum(model_two_hidden), N_HIDDEN_2)
    model_two_cost_sparse = tf.mul(BETA, KLD(RHO, model_two_rho_hat))
    # cost reg
    model_two_cost_reg = tf.mul(LAMBDA, tf.add(tf.nn.l2_loss(model_two_weights["hidden"]), tf.nn.l2_loss(model_two_weights["out"])))
    # cost function
    model_two_cost = tf.add(tf.add(model_two_cost_J, model_two_cost_reg), model_two_cost_sparse)
    train_op_2 = tf.train.AdamOptimizer().minimize(model_two_cost)
    # =======================================================================================
    
    # final model
    model_out = model(model_one_X)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(model_out, Y))
    train_op = tf.train.RMSPropOptimizer(0.001, 0.9).minimize(cost)
    predict_op = tf.argmax(model_out, 1)
    # =======================================================================================
    
    
    with tf.Session() as sess:
    
        init = tf.initialize_all_variables()
        sess.run(init)
    
        for i in xrange(EPOCHES):
            for start, end in zip(range(0, len(trX), BATCH_SIZE), range(BATCH_SIZE, len(trX), BATCH_SIZE)):
                input_ = trX[start:end]
                sess.run(train_op_1, feed_dict = {model_one_X: input_})
        print 'finish model one ...'
    
        for i in xrange(EPOCHES):
            for start, end in zip(range(0, len(trX), BATCH_SIZE), range(BATCH_SIZE, len(trX), BATCH_SIZE)):
                input_ = trX[start:end]
                input_ = sess.run(tf.sigmoid(tf.add(tf.matmul(input_, model_one_weights["hidden"]), model_one_bias["hidden"])))
                sess.run(train_op_2, feed_dict = {model_two_X: input_})
        print 'finish model two ...'
    
        for i in xrange(EPOCHES):
            for start, end in zip(range(0, len(trX), BATCH_SIZE), range(BATCH_SIZE, len(trX), BATCH_SIZE)):
                input_ = trX[start:end]
                sess.run(train_op, feed_dict = {model_one_X: input_, Y: trY[start:end]})
    
            print i, np.mean(np.argmax(teY, axis = 1) == sess.run(predict_op, feed_dict = {model_one_X: teX, Y: teY}))
    
        print 'finish model ...'
        print np.mean(np.argmax(teY, axis = 1) == sess.run(predict_op, feed_dict = {model_one_X: teX, Y: teY}))
    

    References:

    UFLDL

    Zhihu



Stacked DeBERT

    2020-01-03 16:13:37

    Paper: Stacked DeBERT: All Attention in Incomplete Data for Text Classification

    Project: https://github.com/gcunhase/StackedDeBERT

    We propose stacking denoising BERT (DeBERT) as a novel encoding scheme for incomplete intent classification and sentiment classification on incorrect sentences. As shown in Figure 1, the model is structured as a stack of an embedding layer and vanilla Transformer layers, similar to conventional BERT [11], followed by novel denoising Transformer layers. The main goal of the model is to improve BERT's robustness and effectiveness on incomplete data by reconstructing the hidden embeddings of sentences that contain missing words. By reconstructing these hidden embeddings, we improve BERT's encoding scheme.

    Figure 1: The proposed stacked DeBERT model has three parts: an embedding layer, vanilla bidirectional Transformer layers, and denoising bidirectional Transformer layers.

    The initial part of the model is conventional BERT, a multi-layer bidirectional Transformer encoder and a powerful language model. During training, BERT is fine-tuned on the incomplete text classification corpus (see Section 3). It also prefixes each token sequence with the special '[CLS]' token and suffixes each sentence with the '[SEP]' token. This is followed by an embedding layer for the input representation; the final input embedding is a combination of token embeddings, segment embeddings, and position embeddings. The token embedding layer converts each token into a more representative embedding using a vocabulary dictionary. The segment embedding layer indicates which tokens make up a sentence by marking them 1 or 0; in our case, since the data consists of single sentences, the mark is 1 until the first '[SEP]' character appears (denoting segment A) and becomes 0 afterwards (segment B). As the name suggests, the position embedding layer adds information about each token's position in the sentence. This prepares the data for the vanilla bidirectional Transformer layers, which output a hidden embedding that can be used by our novel denoising Transformer layers.

    Although BERT performs better than other baseline models on incomplete data, it still cannot handle such data fully and effectively. The hidden feature vectors obtained from sentences with missing words therefore need further refinement. To this end, we implement a new encoding scheme consisting of denoising Transformers and bidirectional Transformers: the denoising Transformer is composed of a stack of multilayer perceptrons that reconstructs the missing word embeddings by extracting more abstract and meaningful hidden feature vectors, while the bidirectional Transformer refines the embedding representation. The embedding reconstruction step takes the sentence embedding h_{inc} extracted from the incomplete data as input and the embedding h_{comp} of the corresponding complete sentence as target. Both input and target are obtained after applying the embedding layer and the vanilla Transformers, as shown in Figure 1, and have shape (N_{bs}, 768, 128), where N_{bs} is the batch size, 768 is the original BERT embedding size of a single token, and 128 is the maximum sequence length of a sentence.

    The stack of multilayer perceptrons is structured as two sets of three layers, each set containing two hidden layers. The first set is responsible for compressing h_{inc} into a latent representation, extracting more abstract features into lower-dimensional vectors z_1, z_2, and z with shapes (N_{bs}, 128, 128), (N_{bs}, 32, 128), and (N_{bs}, 12, 128). This process is shown in Eq. (1):

    z_1 = f(h_{inc}),   z_2 = f(z_1),   z = f(z_2)        (1)

    where f(·) is a parameterized function that maps h_{inc} to the hidden state z. The second set then reconstructs z_1, z_2, and z into h_{rec1}, h_{rec2}, and h_{rec} respectively. This process is shown in Eq. (2):

    h_{rec1} = g(z),   h_{rec2} = g(h_{rec1}),   h_{rec} = g(h_{rec2})        (2)

    where g(·) is the parameterized function that reconstructs z into h_{rec}.

    The reconstructed hidden sentence embedding h_{rec} is compared with the complete hidden sentence embedding h_{comp} through a mean squared error loss function, as shown in Eq. (3):

    L_{MSE}(h_{comp}, h_{rec}) = (1/N_{bs}) Σ ||h_{comp} - h_{rec}||²        (3)

    After the correct hidden embeddings have been reconstructed from the incomplete sentences, they are fed to the bidirectional Transformer to generate the input representation. The model is then fine-tuned end-to-end on the incomplete text classification task.
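The compress-then-reconstruct pipeline of Eqs. (1)-(3) can be sketched in NumPy. The dimensions (768 -> 128 -> 32 -> 12 and back) come from the text; the random weights and the tanh activation are placeholders for illustration only, since in the real model these are learned MLP layers:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bs, d_tok, seq = 4, 768, 128   # batch, BERT token embedding size, max length

h_inc = rng.normal(size=(n_bs, d_tok, seq))   # embedding of an incomplete text

# Random placeholder weights; the real layers are trained MLPs.
dims = [768, 128, 32, 12]
enc = [rng.normal(0, 0.05, (dims[i + 1], dims[i])) for i in range(3)]
dec = [rng.normal(0, 0.05, (dims[i], dims[i + 1])) for i in reversed(range(3))]

# First set, Eq. (1): compress h_inc into z1, z2, z along the embedding axis.
z1 = np.tanh(np.einsum('od,bds->bos', enc[0], h_inc))   # (4, 128, 128)
z2 = np.tanh(np.einsum('od,bds->bos', enc[1], z1))      # (4, 32, 128)
z  = np.tanh(np.einsum('od,bds->bos', enc[2], z2))      # (4, 12, 128)

# Second set, Eq. (2): reconstruct back towards the complete embedding.
h_rec1 = np.tanh(np.einsum('od,bds->bos', dec[0], z))       # (4, 32, 128)
h_rec2 = np.tanh(np.einsum('od,bds->bos', dec[1], h_rec1))  # (4, 128, 128)
h_rec  = np.einsum('od,bds->bos', dec[2], h_rec2)           # (4, 768, 128)

# Eq. (3): mean squared error against the complete-sentence embedding.
h_comp = rng.normal(size=h_inc.shape)
mse = np.mean((h_comp - h_rec) ** 2)
print(h_rec.shape)
```

With learned weights, minimizing `mse` pushes `h_rec` towards the embedding the complete sentence would have produced, which is the denoising objective described above.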

    Classification is performed with a feed-forward network and a softmax activation function. The softmax σ over the N_C classes is a discrete probability distribution function: the class probabilities sum to 1, and the predicted class is the one with the maximum value. The predicted class can be computed as:

    ŷ = argmax_i σ(o)_i,   where σ(o)_i = exp(o_i) / Σ_j exp(o_j)        (4)

    where o = Wt + b is the output of the feed-forward layer used for classification.
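The classification head is standard softmax over logits and can be verified in a few lines (the logit values here are made up for illustration):

```python
import numpy as np

def softmax(o):
    e = np.exp(o - o.max())          # subtract the max for numerical stability
    return e / e.sum()

o = np.array([2.0, 1.0, 0.1])        # logits o = Wt + b from the final layer
p = softmax(o)
print(int(np.argmax(p)))             # predicted class: 0
```

The probabilities `p` sum to 1, and the prediction is simply the index of the largest logit, since softmax is monotonic.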
