  • Layer file upload

    Read 1,000+ times · 2017-12-28 18:58:00

    1:upload.html

    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="UTF-8">
        <title></title>
        <link rel="stylesheet" href="./layui/css/layui.css" media="all">
    </head>
    <body>
        <div class="layui-container">
            <div class="layui-row" align="center" style="margin-top: 30px;">
                <button type="button" class="layui-btn" id="upload">
                    <i class="layui-icon">&#xe67c;</i>选择文件</button>
            </div>
            <div class="layui-row" align="center" style="margin-top: 30px;">
                <textarea id="result" cols="50" rows="10"></textarea>
            </div>
        </div>
    
    <script src="./jquery/jquery.min.js"></script>
    <script src="./layui/layui.js"></script>
    
    <script>
        layui.use('upload', function(){
            var upload = layui.upload;
    
            //perform the upload
            var uploadInst = upload.render({
                elem: '#upload' //element to bind
                ,url: '/ssfwpt/ra/ramanage' //upload endpoint
                ,method: 'POST'
                ,accept: 'file'
                ,size: 50
                ,before: function(obj){
                    layer.load();
                }
                ,done: function(res){//callback after the upload completes
                    layer.closeAll('loading');
                    var result = '';
    
                    for(var i=0; i<res.length; i++){
                        result = result + res[i].nsrsbh+"="+res[i].container+"\n";
                    }
    
                    $("#result").val(result); // val() sets the textarea content
                }
                ,error: function(){//request error callback
                    layer.closeAll('loading');
                    layer.msg('Network error, please try again later!');
                }
            });
        });
    </script>
    </body>
    </html>

    2: Backend (Spring Boot)

    /**
     * Handle the file upload: parse the uploaded Excel workbook and return its rows.
     */
        @RequestMapping(value = "/ramanage", method = RequestMethod.POST)
        @ResponseBody
        public List<Map<String,String>> ramanage(@RequestParam("file") MultipartFile file){
            List<Map<String,String>> result = new ArrayList<>();
    
            try {
                InputStream input = file.getInputStream();
    
            // HSSFWorkbook reads the legacy .xls format; for .xlsx use XSSFWorkbook or WorkbookFactory.create(input)
            Workbook wb = new HSSFWorkbook(input);
    
                Sheet sheet = wb.getSheetAt(0);
    
                int rowNum = sheet.getLastRowNum()+1;
    
                Map<String,String> map;
                for(int i=1; i<rowNum; i++){
                    Row row = sheet.getRow(i);
    
                // container name
                    Cell containerCell = row.getCell(0);
                    String container = containerCell.getStringCellValue();
    
                // taxpayer identification number (nsrsbh)
                    Cell nsrsbhCell = row.getCell(1);
                    String nsrsbh = nsrsbhCell.getStringCellValue();
    
                    map = new HashMap<>();
                    map.put("nsrsbh", nsrsbh);
                    map.put("container", container);
    
                    result.add(map);
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
    
            return result;
        }
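    // Note: the response shape below is my assumption: with Jackson on the classpath, Spring serializes the
    // returned List<Map<String, String>> as a JSON array such as [{"nsrsbh":"...","container":"..."}, ...],
    // which is the structure the front-end done(res) callback iterates over.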

     

  • layer full-site files

    2014-10-08 21:56:37
    The complete set of layer files for a whole site
  • What is the difference between a shapefile and a layer file?

    Read 1,000+ times · 2012-09-30 21:39:32

    A shapefile (.shp) is a vector data format that stores the location, shape, and attributes of geographic features. A shapefile is stored as a set of related files and contains a single feature class.

    A layer file (.lyr) stores the path to a source dataset together with other layer properties, including its symbology.

    Compared with a shapefile, a layer file is merely a link or reference to the real data (a shapefile, feature class, and so on). It is not the data itself, because it stores neither the attributes nor the geometry; it mainly stores feature symbology and the other layer properties used when visualizing the data in a GIS application.

    For example, if a layer file is sent to a user on another machine without its source data, nothing will show on the map, because the source data is not included. To display the data correctly the user needs both the layer file and the corresponding shapefile.

    This is where using a layer package ...
  • Three OpenLayers files

    2017-12-14 20:03:37
    The three files for learning OpenLayers, which I previously could not download from the official site
  • layer is a web popup-layer component that has been very popular in recent years; it offers a complete solution, aims to serve developers of every skill level, and gives your pages a rich, friendly interactive experience.
  • JS file of the layer plugin

    2018-10-27 11:45:06
    The layer plugin optimizes pop-up dialogs in front-end JSP pages and makes them look nicer
  • Analysis of the layer file in the Caffe source code

    Read 1,000+ times · 2017-03-08 14:11:30
    Analysis of the layer file in the Caffe source code

    The Caffe source code (caffe version commit: 09868ac, date: 2015.08.15) contains a number of important header files; this post introduces the contents of include/caffe/layer.hpp:

    1. Included files:

    (1) <caffe/blob.hpp>: for an introduction to this file see: http://blog.csdn.net/fengbingchun/article/details/59106613

    (2) <caffe/common.hpp>: for an introduction to this file see: http://blog.csdn.net/fengbingchun/article/details/54955236

    (3) <caffe/layer_factory.hpp>: for an introduction to this file see: http://blog.csdn.net/fengbingchun/article/details/54310956

    (4) <caffe/proto/caffe.pb.h>: for an introduction to this file see: http://blog.csdn.net/fengbingchun/article/details/55267162

    (5) <caffe/util/device_alternate.hpp>: for an introduction to this file see: http://blog.csdn.net/fengbingchun/article/details/54955236

    2. The Layer class: an abstract base class (it has pure virtual functions, so it cannot be instantiated) that defines the basic interface of all layers; each concrete layer performs one specific kind of computation.

    Layer is the essence of a Caffe model and the basic unit of computation. Layers can perform many operations: convolution, pooling, inner products, nonlinearities such as rectified-linear and sigmoid, element-wise data transforms, normalization, data loading, and losses such as softmax and hinge. All operations are listed in Caffe's layer catalogue at http://caffe.berkeleyvision.org/tutorial/layers.html, which covers most of the layer types required by today's state-of-the-art deep learning tasks.

    A layer receives blob data through its bottom connections and emits blob data through its top connections. The parameter declarations for every layer type in Caffe are defined in caffe.proto, while the concrete parameter values for a layer are set in the prototxt network definition file of the specific application.

    In Caffe, most of a network's functionality is expressed in the form of layers. Creating a Caffe model is likewise layer-based: the network prototxt file must be written following the network and parameter format defined in caffe.proto, and a .prototxt file contains many layer { } blocks.

    Every layer defines three important operations: setup (initialization), forward (forward propagation), and backward (backward propagation).

    (1) setup: resets the layer and its connections when the model is initialized;

    (2) forward: receives data from the bottom blobs, computes, and sends the output to the top blobs;

    (3) backward: given the gradient with respect to the top output, computes the gradient with respect to the input and propagates it to the bottom; a layer with parameters also computes the gradients with respect to those parameters and stores them internally.

    In particular, the forward and backward functions each have a CPU and a GPU implementation. If the GPU version is not implemented, the layer falls back to the CPU version, which incurs extra data-transfer cost (inputs are copied from the GPU to the CPU, and outputs are copied from the CPU back to the GPU).

    In short, a Layer carries out the network's two core operations: the forward pass, which takes the input and computes the output, and the backward pass, which takes the gradient with respect to the output, computes the gradients with respect to the parameters and the input, and propagates them back to the preceding layers. Together these make up each layer's forward and backward passes.

    Layer is the basic unit of the network, from which the various layer classes are derived. Inside a Layer, input data is called bottom and output data is called top. Because Caffe networks are compositional and the code is modular, defining a custom layer is easy: implement the layer's setup, forward (compute the output from the input), and backward (compute the input gradient from the output gradient), and the layer can be plugged into a network, as the sketch below shows.
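
    Below is a minimal sketch of such a custom layer, an identity (pass-through) layer, written against the interface shown in the layer.hpp walkthrough later in this post. The class name MyIdentityLayer and the registered type "MyIdentity" are made up for illustration; treat this as a sketch compiled as a single .cpp inside the Caffe tree, not as code from the official repository:

    #include <vector>
    #include "caffe/blob.hpp"
    #include "caffe/layer.hpp"
    #include "caffe/layer_factory.hpp"
    #include "caffe/util/math_functions.hpp"

    namespace caffe {

    // A pass-through layer: top copies bottom, and gradients flow back unchanged.
    template <typename Dtype>
    class MyIdentityLayer : public Layer<Dtype> {
     public:
      explicit MyIdentityLayer(const LayerParameter& param) : Layer<Dtype>(param) {}

      // one-time setup; nothing to read from layer_param_ for this layer
      virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,
                              const vector<Blob<Dtype>*>& top) {}

      // the output has exactly the same shape as the input
      virtual void Reshape(const vector<Blob<Dtype>*>& bottom,
                           const vector<Blob<Dtype>*>& top) {
        top[0]->ReshapeLike(*bottom[0]);
      }

      virtual inline const char* type() const { return "MyIdentity"; }
      virtual inline int ExactNumBottomBlobs() const { return 1; }
      virtual inline int ExactNumTopBlobs() const { return 1; }

     protected:
      // forward: copy bottom data to top
      virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,
                               const vector<Blob<Dtype>*>& top) {
        caffe_copy(bottom[0]->count(), bottom[0]->cpu_data(),
                   top[0]->mutable_cpu_data());
      }

      // backward: copy top diff to bottom diff
      virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,
                                const vector<bool>& propagate_down,
                                const vector<Blob<Dtype>*>& bottom) {
        if (propagate_down[0]) {
          caffe_copy(top[0]->count(), top[0]->cpu_diff(),
                     bottom[0]->mutable_cpu_diff());
        }
      }
      // no Forward_gpu/Backward_gpu: the wrappers fall back to the CPU versions
    };

    INSTANTIATE_CLASS(MyIdentityLayer);
    REGISTER_LAYER_CLASS(MyIdentity);

    }  // namespace caffe

    Once registered this way, the type name can be used in a network definition just like the built-in layers, e.g. in a layer { type: "MyIdentity" } block of the prototxt file.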

    The forward pass computes the output for a given input to be inferred. During the forward pass, Caffe composes the computation of each layer to obtain the computational "function" of the whole model. This process runs bottom-up.

    The backward pass computes gradients from the loss in order to learn. During the backward pass, Caffe computes the gradient of the whole network by automatic differentiation, composing each layer's gradient in reverse; this is the essence of back-propagation. The process runs top-down.

    The backward pass starts from the loss and computes the gradient with respect to the output. The gradients of the rest of the model are then computed layer by layer through the chain rule. Layers with parameters also compute the gradients with respect to those parameters during the backward pass.
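
    Written out (notation mine, not the post's), the per-layer chain rule that backward implements is: ∂L/∂bottom = (∂top/∂bottom)ᵀ · ∂L/∂top, and for a layer with parameters W, ∂L/∂W = (∂top/∂W)ᵀ · ∂L/∂top.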

    As in most machine learning models, learning in Caffe is driven by a loss function (also called the error, cost, or objective function). A loss function specifies the learning goal by mapping the parameter set (the current network weights) to a scalar value that measures how "bad" those parameters are. The purpose of learning is therefore to find a set of network weights that minimizes the loss.
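
    As a formula (the standard form from the Caffe tutorial rather than something stated in this post): for a dataset D, per-sample loss f_W(X) and regularizer r(W) with weight λ, the objective is L(W) = (1/|D|) · Σ_{X∈D} f_W(X) + λ·r(W), and learning searches for the weights W that minimize L(W).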

    In Caffe, the loss is computed by the network's forward pass. Each layer takes a set of input blobs (bottom) and produces a set of output blobs (top); some of these outputs can serve as the loss function. The typical loss for a one-of-many classification task is the SoftmaxWithLoss layer.

    The parameter declarations for every layer type in Caffe are defined in caffe.proto, while the concrete parameter values for a layer are set in the protobuf network definition file of the specific application.

    Note: the introduction to Layer above is mostly taken from the Chinese translation of the official Caffe tutorial produced by the CaffeCN community.

    A detailed walkthrough of the <caffe/layer.hpp> file follows:

    #ifndef CAFFE_LAYER_H_
    #define CAFFE_LAYER_H_
    
    #include <algorithm>
    #include <string>
    #include <vector>
    
    #include "caffe/blob.hpp"
    #include "caffe/common.hpp"
    #include "caffe/layer_factory.hpp"
    #include "caffe/proto/caffe.pb.h"
    #include "caffe/util/device_alternate.hpp"
    
    /**
     Forward declare boost::thread instead of including boost/thread.hpp
     to avoid a boost/NVCC issues (#1009, #1010) on OSX.
     */
    // forward-declare boost::mutex
    namespace boost { class mutex; }
    
    namespace caffe {
    /**
     * @brief An interface for the units of computation which can be composed into a
     *        Net.
     *
     * Layer%s must implement a Forward function, in which they take their input
     * (bottom) Blob%s (if any) and compute their output Blob%s (if any).
     * They may also implement a Backward function, in which they compute the error
     * gradients with respect to their input Blob%s, given the error gradients with
     * their output Blob%s.
     */
    template <typename Dtype>
    class Layer { // abstract base class (has pure virtual functions, cannot be instantiated); defines the basic interface of every layer
     public:
      /**
       * You should not implement your own constructor. Any set up code should go
       * to SetUp(), where the dimensions of the bottom blobs are provided to the
       * layer.
       */
    // explicit constructor, should not be overridden; initializes the members layer_param_, phase_ and blobs_
      explicit Layer(const LayerParameter& param)
        : layer_param_(param), is_shared_(false) {
          // Set phase and copy blobs (if there are any).
          phase_ = param.phase();
          if (layer_param_.blobs_size() > 0) {
            blobs_.resize(layer_param_.blobs_size());
            for (int i = 0; i < layer_param_.blobs_size(); ++i) {
              blobs_[i].reset(new Blob<Dtype>());
              blobs_[i]->FromProto(layer_param_.blobs(i));
            }
          }
        }
    // virtual destructor
      virtual ~Layer() {}
    
      /**
       * @brief Implements common layer setup functionality.
       *
       * @param bottom the preshaped input blobs
       * @param top
       *     the allocated but unshaped output blobs, to be shaped by Reshape
       *
       * Checks that the number of bottom and top blobs is correct.
       * Calls LayerSetUp to do special layer setup for individual layer types,
       * followed by Reshape to set up sizes of top blobs and internal buffers.
       * Sets up the loss weight multiplier blobs for any non-zero loss weights.
       * This method may not be overridden.
       */
    // layer initialization; this method should not be overridden
      void SetUp(const vector<Blob<Dtype>*>& bottom,
          const vector<Blob<Dtype>*>& top) {
        InitMutex();
        CheckBlobCounts(bottom, top);
        LayerSetUp(bottom, top);
        Reshape(bottom, top);
        SetLossWeights(top);
      }
    
      /**
       * @brief Does layer-specific setup: your layer should implement this function
       *        as well as Reshape.
       *
       * @param bottom
       *     the preshaped input blobs, whose data fields store the input data for
       *     this layer
       * @param top
       *     the allocated but unshaped output blobs
       *
       * This method should do one-time layer specific setup. This includes reading
       * and processing relevent parameters from the <code>layer_param_</code>.
       * Setting up the shapes of top blobs and internal buffers should be done in
       * <code>Reshape</code>, which will be called before the forward pass to
       * adjust the top blob sizes.
       */
    // obtains the values of certain member variables of the layer from its LayerParameter
      virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,
          const vector<Blob<Dtype>*>& top) {}
    
      /**
       * @brief Whether a layer should be shared by multiple nets during data
       *        parallelism. By default, all layers except for data layers should
       *        not be shared. data layers should be shared to ensure each worker
       *        solver access data sequentially during data parallelism.
       */
    // data-sharing policy: whether one layer's data may be shared by multiple nets
      virtual inline bool ShareInParallel() const { return false; }
    
      /** @brief Return whether this layer is actually shared by other nets.
       *         If ShareInParallel() is true and using more than one GPU and the
       *         net has TRAIN phase, then this function is expected return true.
       */
    // whether this layer is actually shared by other nets
      inline bool IsShared() const { return is_shared_; }
    
      /** @brief Set whether this layer is actually shared by other nets
       *         If ShareInParallel() is true and using more than one GPU and the
       *         net has TRAIN phase, then is_shared should be set true.
       */
    // sets whether this layer is shared by other nets
      inline void SetShared(bool is_shared) {
        CHECK(ShareInParallel() || !is_shared)
            << type() << "Layer does not support sharing.";
        is_shared_ = is_shared;
      }
    
      /**
       * @brief Adjust the shapes of top blobs and internal buffers to accommodate
       *        the shapes of the bottom blobs.
       *
       * @param bottom the input blobs, with the requested input shapes
       * @param top the top blobs, which should be reshaped as needed
       *
       * This method should reshape top blobs as needed according to the shapes
       * of the bottom (input) blobs, as well as reshaping any internal buffers
       * and making any other necessary adjustments so that the layer can
       * accommodate the bottom blobs.
       */
    // adjusts the shapes of the top blobs
      virtual void Reshape(const vector<Blob<Dtype>*>& bottom,
          const vector<Blob<Dtype>*>& top) = 0;
    
      /**
       * @brief Given the bottom blobs, compute the top blobs and the loss.
       *
       * @param bottom
       *     the input blobs, whose data fields store the input data for this layer
       * @param top
       *     the preshaped output blobs, whose data fields will store this layers'
       *     outputs
       * \return The total loss from the layer.
       *
       * The Forward wrapper calls the relevant device wrapper function
       * (Forward_cpu or Forward_gpu) to compute the top blob values given the
       * bottom blobs.  If the layer has any non-zero loss_weights, the wrapper
       * then computes and returns the loss.
       *
       * Your layer should implement Forward_cpu and (optionally) Forward_gpu.
       */
    // forward pass: given the bottom blobs, computes the top blobs and returns the total loss
      inline Dtype Forward(const vector<Blob<Dtype>*>& bottom,
          const vector<Blob<Dtype>*>& top);
    
      /**
       * @brief Given the top blob error gradients, compute the bottom blob error
       *        gradients.
       *
       * @param top
       *     the output blobs, whose diff fields store the gradient of the error
       *     with respect to themselves
       * @param propagate_down
       *     a vector with equal length to bottom, with each index indicating
       *     whether to propagate the error gradients down to the bottom blob at
       *     the corresponding index
       * @param bottom
       *     the input blobs, whose diff fields will store the gradient of the error
       *     with respect to themselves after Backward is run
       *
       * The Backward wrapper calls the relevant device wrapper function
       * (Backward_cpu or Backward_gpu) to compute the bottom blob diffs given the
       * top blob diffs.
       *
       * Your layer should implement Backward_cpu and (optionally) Backward_gpu.
       */
    // backward pass: given the top blob error gradients, computes the bottom blob error gradients
      inline void Backward(const vector<Blob<Dtype>*>& top,
          const vector<bool>& propagate_down,
          const vector<Blob<Dtype>*>& bottom);
    
      /**
       * @brief Returns the vector of learnable parameter blobs.
       */
    // returns the layer's learnable blobs (weights, biases, etc.)
      vector<shared_ptr<Blob<Dtype> > >& blobs() {
        return blobs_;
      }
    
      /**
       * @brief Returns the layer parameter.
       */
    // returns the layer's configuration parameters
      const LayerParameter& layer_param() const { return layer_param_; }
    
      /**
       * @brief Writes the layer parameter to a protocol buffer
       */
    // serialization: writes the layer parameters to a protocol buffer
      virtual void ToProto(LayerParameter* param, bool write_diff = false);
    
      /**
       * @brief Returns the scalar loss associated with a top blob at a given index.
       */
    // returns the loss associated with the top blob at the given index
      inline Dtype loss(const int top_index) const {
        return (loss_.size() > top_index) ? loss_[top_index] : Dtype(0);
      }
    
      /**
       * @brief Sets the loss associated with a top blob at a given index.
       */
    // sets the loss associated with the top blob at the given index
      inline void set_loss(const int top_index, const Dtype value) {
        if (loss_.size() <= top_index) {
          loss_.resize(top_index + 1, Dtype(0));
        }
        loss_[top_index] = value;
      }
    
      /**
       * @brief Returns the layer type.
       */
    // returns the layer type
      virtual inline const char* type() const { return ""; }
    
      /**
       * @brief Returns the exact number of bottom blobs required by the layer,
       *        or -1 if no exact number is required.
       *
       * This method should be overridden to return a non-negative value if your
       * layer expects some exact number of bottom blobs.
       */
    // exact number of bottom blobs the layer requires
      virtual inline int ExactNumBottomBlobs() const { return -1; }
      /**
       * @brief Returns the minimum number of bottom blobs required by the layer,
       *        or -1 if no minimum number is required.
       *
       * This method should be overridden to return a non-negative value if your
       * layer expects some minimum number of bottom blobs.
       */
    // minimum number of bottom blobs the layer requires
      virtual inline int MinBottomBlobs() const { return -1; }
      /**
       * @brief Returns the maximum number of bottom blobs required by the layer,
       *        or -1 if no maximum number is required.
       *
       * This method should be overridden to return a non-negative value if your
       * layer expects some maximum number of bottom blobs.
       */
    // maximum number of bottom blobs the layer requires
      virtual inline int MaxBottomBlobs() const { return -1; }
      /**
       * @brief Returns the exact number of top blobs required by the layer,
       *        or -1 if no exact number is required.
       *
       * This method should be overridden to return a non-negative value if your
       * layer expects some exact number of top blobs.
       */
    // exact number of top blobs the layer requires
      virtual inline int ExactNumTopBlobs() const { return -1; }
      /**
       * @brief Returns the minimum number of top blobs required by the layer,
       *        or -1 if no minimum number is required.
       *
       * This method should be overridden to return a non-negative value if your
       * layer expects some minimum number of top blobs.
       */
    // minimum number of top blobs the layer requires
      virtual inline int MinTopBlobs() const { return -1; }
      /**
       * @brief Returns the maximum number of top blobs required by the layer,
       *        or -1 if no maximum number is required.
       *
       * This method should be overridden to return a non-negative value if your
       * layer expects some maximum number of top blobs.
       */
    // maximum number of top blobs the layer requires
      virtual inline int MaxTopBlobs() const { return -1; }
      /**
       * @brief Returns true if the layer requires an equal number of bottom and
       *        top blobs.
       *
       * This method should be overridden to return true if your layer expects an
       * equal number of bottom and top blobs.
       */
    // whether the layer requires equal numbers of bottom and top blobs
      virtual inline bool EqualNumBottomTopBlobs() const { return false; }
    
      /**
       * @brief Return whether "anonymous" top blobs are created automatically
       *        by the layer.
       *
       * If this method returns true, Net::Init will create enough "anonymous" top
       * blobs to fulfill the requirement specified by ExactNumTopBlobs() or
       * MinTopBlobs().
       */
    // whether the required "anonymous" top blobs should be created automatically by Net::Init
      virtual inline bool AutoTopBlobs() const { return false; }
    
      /**
       * @brief Return whether to allow force_backward for a given bottom blob
       *        index.
       *
       * If AllowForceBackward(i) == false, we will ignore the force_backward
       * setting and backpropagate to blob i only if it needs gradient information
       * (as is done when force_backward == false).
       */
    // whether force_backward is allowed for the given bottom blob; some layers do not actually need gradient information
      virtual inline bool AllowForceBackward(const int bottom_index) const { return true; }
    
      /**
       * @brief Specifies whether the layer should compute gradients w.r.t. a
       *        parameter at a particular index given by param_id.
       *
       * You can safely ignore false values and always compute gradients
       * for all parameters, but possibly with wasteful computation.
       */
    // whether the gradient should be computed for the parameter blob at param_id
      inline bool param_propagate_down(const int param_id) {
        return (param_propagate_down_.size() > param_id) ?
            param_propagate_down_[param_id] : false;
      }
      /**
       * @brief Sets whether the layer should compute gradients w.r.t. a
       *        parameter at a particular index given by param_id.
       */
    // sets whether the gradient should be computed for the parameter blob at param_id
      inline void set_param_propagate_down(const int param_id, const bool value) {
        if (param_propagate_down_.size() <= param_id) {
          param_propagate_down_.resize(param_id + 1, true);
        }
        param_propagate_down_[param_id] = value;
      }
    
     protected:
    // member variable names in Caffe classes end with "_", which makes them easy to tell apart from local variables
      /** The protobuf that stores the layer parameters */
    // the configured layer parameters, passed in through the constructor when the layer object is created;
    // see message LayerParameter in caffe.proto for the fields of the LayerParameter class
      LayerParameter layer_param_;
      /** The phase: TRAIN or TEST */
    // layer phase: whether the layer participates in the TRAIN or the TEST network
      Phase phase_;
      /** The vector that stores the learnable parameters as a set of blobs. */
    // stores the layer's learnable parameters, such as weights and biases
      vector<shared_ptr<Blob<Dtype> > > blobs_;
      /** Vector indicating whether to compute the diff of each param blob. */
    // flags whether the diff should be computed for each parameter blob
      vector<bool> param_propagate_down_;
      /** The vector that indicates whether each top blob has a non-zero weight in
       *  the objective function. */
    // flags whether each top blob has a non-zero weight in the objective function
      vector<Dtype> loss_;
    
      /** @brief Using the CPU device, compute the layer output. */
    // CPU implementation of the layer's forward pass
      virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,
          const vector<Blob<Dtype>*>& top) = 0;
      /**
       * @brief Using the GPU device, compute the layer output.
       *        Fall back to Forward_cpu() if unavailable.
       */
    // GPU implementation of the layer's forward pass
      virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,
          const vector<Blob<Dtype>*>& top) {
        // LOG(WARNING) << "Using CPU code as backup.";
        return Forward_cpu(bottom, top);
      }
    
      /**
       * @brief Using the CPU device, compute the gradients for any parameters and
       *        for the bottom blobs if propagate_down is true.
       */
    // CPU implementation of the layer's backward pass
      virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,
          const vector<bool>& propagate_down,
          const vector<Blob<Dtype>*>& bottom) = 0;
      /**
       * @brief Using the GPU device, compute the gradients for any parameters and
       *        for the bottom blobs if propagate_down is true.
       *        Fall back to Backward_cpu() if unavailable.
       */
    // GPU implementation of the layer's backward pass
      virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,
          const vector<bool>& propagate_down,
          const vector<Blob<Dtype>*>& bottom) {
        // LOG(WARNING) << "Using CPU code as backup.";
        Backward_cpu(top, propagate_down, bottom);
      }
    
      /**
       * Called by the parent Layer's SetUp to check that the number of bottom
       * and top Blobs provided as input match the expected numbers specified by
       * the {ExactNum,Min,Max}{Bottom,Top}Blobs() functions.
       */
    // checks that the numbers of bottom and top blobs match the layer's requirements
      virtual void CheckBlobCounts(const vector<Blob<Dtype>*>& bottom,
                                   const vector<Blob<Dtype>*>& top) {
        if (ExactNumBottomBlobs() >= 0) {
          CHECK_EQ(ExactNumBottomBlobs(), bottom.size())
              << type() << " Layer takes " << ExactNumBottomBlobs()
              << " bottom blob(s) as input.";
        }
        if (MinBottomBlobs() >= 0) {
          CHECK_LE(MinBottomBlobs(), bottom.size())
              << type() << " Layer takes at least " << MinBottomBlobs()
              << " bottom blob(s) as input.";
        }
        if (MaxBottomBlobs() >= 0) {
          CHECK_GE(MaxBottomBlobs(), bottom.size())
              << type() << " Layer takes at most " << MaxBottomBlobs()
              << " bottom blob(s) as input.";
        }
        if (ExactNumTopBlobs() >= 0) {
          CHECK_EQ(ExactNumTopBlobs(), top.size())
              << type() << " Layer produces " << ExactNumTopBlobs()
              << " top blob(s) as output.";
        }
        if (MinTopBlobs() >= 0) {
          CHECK_LE(MinTopBlobs(), top.size())
              << type() << " Layer produces at least " << MinTopBlobs()
              << " top blob(s) as output.";
        }
        if (MaxTopBlobs() >= 0) {
          CHECK_GE(MaxTopBlobs(), top.size())
              << type() << " Layer produces at most " << MaxTopBlobs()
              << " top blob(s) as output.";
        }
        if (EqualNumBottomTopBlobs()) {
          CHECK_EQ(bottom.size(), top.size())
              << type() << " Layer produces one top blob as output for each "
              << "bottom blob input.";
        }
      }
    
      /**
       * Called by SetUp to initialize the weights associated with any top blobs in
       * the loss function. Store non-zero loss weights in the diff blob.
       */
    // stores the non-zero loss weights in the diff of the corresponding top blobs
      inline void SetLossWeights(const vector<Blob<Dtype>*>& top) {
        const int num_loss_weights = layer_param_.loss_weight_size();
        if (num_loss_weights) {
          CHECK_EQ(top.size(), num_loss_weights) << "loss_weight must be "
              "unspecified or specified once per top blob.";
          for (int top_id = 0; top_id < top.size(); ++top_id) {
            const Dtype loss_weight = layer_param_.loss_weight(top_id);
            if (loss_weight == Dtype(0)) { continue; }
            this->set_loss(top_id, loss_weight);
            const int count = top[top_id]->count();
            Dtype* loss_multiplier = top[top_id]->mutable_cpu_diff();
            caffe_set(count, loss_weight, loss_multiplier);
          }
        }
      }
    
     private:
      /** Whether this layer is actually shared by other nets*/
    // flags whether this layer is actually shared by other nets
      bool is_shared_;
    
      /** The mutex for sequential forward if this layer is shared */
    // boost::mutex object used as the mutual-exclusion lock
      shared_ptr<boost::mutex> forward_mutex_;
    
      /** Initialize forward_mutex_ */
    // initializes the mutex
      void InitMutex();
      /** Lock forward_mutex_ if this layer is shared */
    // locks forward_mutex_ if this layer is shared
      void Lock();
      /** Unlock forward_mutex_ if this layer is shared */
    // unlocks forward_mutex_ if this layer is shared
      void Unlock();
    
    // disallow copy construction and assignment of Layer
      DISABLE_COPY_AND_ASSIGN(Layer);
    };  // class Layer
    
    // Forward and backward wrappers. You should implement the cpu and
    // gpu specific implementations instead, and should not change these
    // functions.
    // forward pass: given the bottom blobs, computes the top blobs and the loss
    template <typename Dtype>
    inline Dtype Layer<Dtype>::Forward(const vector<Blob<Dtype>*>& bottom,
        const vector<Blob<Dtype>*>& top) {
      // Lock during forward to ensure sequential forward
      Lock();
      Dtype loss = 0;
      Reshape(bottom, top);
      switch (Caffe::mode()) {
      case Caffe::CPU:
        Forward_cpu(bottom, top);
        for (int top_id = 0; top_id < top.size(); ++top_id) {
          if (!this->loss(top_id)) { continue; }
          const int count = top[top_id]->count();
          const Dtype* data = top[top_id]->cpu_data();
          const Dtype* loss_weights = top[top_id]->cpu_diff();
          loss += caffe_cpu_dot(count, data, loss_weights);
        }
        break;
      case Caffe::GPU:
        Forward_gpu(bottom, top);
    #ifndef CPU_ONLY
        for (int top_id = 0; top_id < top.size(); ++top_id) {
          if (!this->loss(top_id)) { continue; }
          const int count = top[top_id]->count();
          const Dtype* data = top[top_id]->gpu_data();
          const Dtype* loss_weights = top[top_id]->gpu_diff();
          Dtype blob_loss = 0;
          caffe_gpu_dot(count, data, loss_weights, &blob_loss);
          loss += blob_loss;
        }
    #endif
        break;
      default:
        LOG(FATAL) << "Unknown caffe mode.";
      }
      Unlock();
      return loss;
    }
    
    // backward pass: given the top blob error gradients, computes the bottom blob error gradients
    template <typename Dtype>
    inline void Layer<Dtype>::Backward(const vector<Blob<Dtype>*>& top,
        const vector<bool>& propagate_down,
        const vector<Blob<Dtype>*>& bottom) {
      switch (Caffe::mode()) {
      case Caffe::CPU:
        Backward_cpu(top, propagate_down, bottom);
        break;
      case Caffe::GPU:
        Backward_gpu(top, propagate_down, bottom);
        break;
      default:
        LOG(FATAL) << "Unknown caffe mode.";
      }
    }
    
    // Serialize LayerParameter to protocol buffer
    // serialization: writes the layer parameters to a protocol buffer
    template <typename Dtype>
    void Layer<Dtype>::ToProto(LayerParameter* param, bool write_diff) {
      param->Clear();
      param->CopyFrom(layer_param_);
      param->clear_blobs();
      for (int i = 0; i < blobs_.size(); ++i) {
        blobs_[i]->ToProto(param->add_blobs(), write_diff);
      }
    }
    
    }  // namespace caffe
    
    #endif  // CAFFE_LAYER_H_
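
    In equation form (notation mine): the Forward wrapper above returns loss = Σ_t λ_t · Σ_i top_t[i], where λ_t is the loss_weight configured for top blob t. SetLossWeights() pre-fills each top blob's diff with λ_t, so the dot products between data and diff in Forward reduce to exactly this weighted sum; for a typical loss layer with a single scalar output and loss_weight = 1 it is simply that scalar.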

    In the caffe.proto file, the main message related to Layer is the following:
    enum Phase { // layer phase: TRAIN or TEST
       TRAIN = 0;
       TEST = 1;
    }
    
    // NOTE
    // Update the next available ID when you add a new LayerParameter field.
    //
    // LayerParameter next available layer-specific ID: 137 (last added: reduction_param)
    message LayerParameter { // layer parameters
      optional string name = 1; // the layer name, chosen freely by the user
      optional string type = 2; // the layer type, fixed by each concrete layer and returned by its type() function
      repeated string bottom = 3; // the name of each bottom blob; a layer may have several
      repeated string top = 4; // the name of each top blob; a layer may have several
    
      // The train / test phase for computation.
      optional Phase phase = 10; // layer phase: enum Phase {TRAIN = 0; TEST = 1;}
    
      // The amount of weight to assign each top blob in the objective.
      // Each layer assigns a default value, usually of either 0 or 1,
      // to each top blob.
      repeated float loss_weight = 5; // if specified, must contain one value per top blob
    
      // Specifies training parameters (multipliers on global learning constants,
      // and the name and other settings used for weight sharing).
      repeated ParamSpec param = 6; // training parameters (learning-rate multipliers, weight sharing, ...)
    
      // The blobs containing the numeric parameters of the layer.
      repeated BlobProto blobs = 7; // the numeric parameter blobs of the layer
    
      // Specifies on which bottoms the backpropagation should be skipped.
      // The size must be either 0 or equal to the number of bottoms.
      repeated bool propagate_down = 11; // length must be 0 or equal to the number of bottoms
    
      // Rules controlling whether and when a layer is included in the network,
      // based on the current NetState.  You may specify a non-zero number of rules
      // to include OR exclude, but not both.  If no include or exclude rules are
      // specified, the layer is always included.  If the current NetState meets
      // ANY (i.e., one or more) of the specified rules, the layer is
      // included/excluded.
      repeated NetStateRule include = 8; // net state rule
      repeated NetStateRule exclude = 9; // net state rule
    
      // Parameters for data pre-processing.
      optional TransformationParameter transform_param = 100; // data pre-processing such as scaling and cropping
    
      // Parameters shared by loss layers.
      optional LossParameter loss_param = 101; // loss parameters
    
      // Layer type-specific parameters.
      //
      // Note: certain layers may have more than one computational engine
      // for their implementation. These layers include an Engine type and
      // engine parameter for selecting the implementation.
      // The default for the engine is set by the ENGINE switch at compile-time.
      // concrete per-layer-type parameters
      optional AccuracyParameter accuracy_param = 102;
      optional ArgMaxParameter argmax_param = 103;
      optional ConcatParameter concat_param = 104;
      optional ContrastiveLossParameter contrastive_loss_param = 105;
      optional ConvolutionParameter convolution_param = 106;
      optional DataParameter data_param = 107;
      optional DropoutParameter dropout_param = 108;
      optional DummyDataParameter dummy_data_param = 109;
      optional EltwiseParameter eltwise_param = 110;
      optional ExpParameter exp_param = 111;
      optional FlattenParameter flatten_param = 135;
      optional HDF5DataParameter hdf5_data_param = 112;
      optional HDF5OutputParameter hdf5_output_param = 113;
      optional HingeLossParameter hinge_loss_param = 114;
      optional ImageDataParameter image_data_param = 115;
      optional InfogainLossParameter infogain_loss_param = 116;
      optional InnerProductParameter inner_product_param = 117;
      optional LogParameter log_param = 134;
      optional LRNParameter lrn_param = 118;
      optional MemoryDataParameter memory_data_param = 119;
      optional MVNParameter mvn_param = 120;
      optional PoolingParameter pooling_param = 121;
      optional PowerParameter power_param = 122;
      optional PReLUParameter prelu_param = 131;
      optional PythonParameter python_param = 130;
      optional ReductionParameter reduction_param = 136;
      optional ReLUParameter relu_param = 123;
      optional ReshapeParameter reshape_param = 133;
      optional SigmoidParameter sigmoid_param = 124;
      optional SoftmaxParameter softmax_param = 125;
      optional SPPParameter spp_param = 132;
      optional SliceParameter slice_param = 126;
      optional TanHParameter tanh_param = 127;
      optional ThresholdParameter threshold_param = 128;
      optional WindowDataParameter window_data_param = 129;
    }
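
    As a small illustration (my own sketch, not from the post) of how these LayerParameter fields are used from C++: the protobuf-generated setters fill in a parameter message, and the factory declared in layer_factory.hpp turns it into a concrete layer:

    #include <vector>
    #include "caffe/layer.hpp"
    #include "caffe/layer_factory.hpp"
    #include "caffe/proto/caffe.pb.h"

    void build_relu_layer() {
      caffe::LayerParameter param;
      param.set_name("relu1");       // optional string name = 1
      param.set_type("ReLU");        // optional string type = 2
      param.add_bottom("conv1");     // repeated string bottom = 3
      param.add_top("conv1");        // repeated string top = 4 (in-place)
      param.set_phase(caffe::TRAIN); // optional Phase phase = 10

      // the factory looks the type string up in its registry and returns a constructed layer
      boost::shared_ptr<caffe::Layer<float> > layer =
          caffe::LayerRegistry<float>::CreateLayer(param);

      std::vector<caffe::Blob<float>*> bottom, top;
      // SetUp would then be called with the actual bottom/top blobs:
      // layer->SetUp(bottom, top);
    }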

    GitHub: https://github.com/fengbingchun/Caffe_Test

  • An example of passing data to the backend before importing a file with layer UI; a useful reference that I hope helps.
  • Analysis of the Pooling Layer files in the Caffe source code

    Read 1,000+ times · 2017-03-09 16:45:25
    Analysis of the Pooling Layer files in the Caffe source code

    The Caffe source code (caffe version commit: 09868ac, date: 2015.08.15) contains a number of important header files; this post introduces the PoolingLayer class in include/caffe/vision_layers.hpp. In the latest Caffe, PoolingLayer has been moved into its own file, include/caffe/layers/pooling_layer.hpp; the contents and implementation of the class are identical in both files:

    1. Included files:

    (1) <caffe/blob.hpp>: for an introduction to this file see: http://blog.csdn.net/fengbingchun/article/details/59106613

    (2) <caffe/layer.hpp>: for an introduction to this file see: http://blog.csdn.net/fengbingchun/article/details/60871052

    (3) <caffe/proto/caffe.pb.h>: for an introduction to this file see: http://blog.csdn.net/fengbingchun/article/details/55267162

    2. The PoolingLayer class: the pooling layer, a subclass of Layer

    The main job of a pooling layer is dimensionality reduction: it shrinks the feature map by downsampling the image. The methods are (a small standalone sketch follows this list):

    (1) average pooling: take the mean of the region as the downsampled value;

    (2) max pooling: take the maximum of the region as the downsampled value;

    (3) stochastic pooling: take a randomly chosen pixel within the region.
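
    A minimal standalone sketch of the first two methods on a toy array (plain C++, independent of Caffe; all names are mine):

    #include <algorithm>
    #include <cstdio>

    // 2x2 max and average pooling with stride 2 over a 4x4 single-channel image
    int main() {
      const int H = 4, W = 4, K = 2, S = 2;
      float in[H][W] = {{ 1,  2,  3,  4},
                        { 5,  6,  7,  8},
                        { 9, 10, 11, 12},
                        {13, 14, 15, 16}};
      for (int oy = 0; oy < H / S; ++oy) {
        for (int ox = 0; ox < W / S; ++ox) {
          float mx = in[oy * S][ox * S], sum = 0.f;
          for (int ky = 0; ky < K; ++ky)
            for (int kx = 0; kx < K; ++kx) {
              float v = in[oy * S + ky][ox * S + kx];
              mx = std::max(mx, v);
              sum += v;
            }
          printf("window (%d,%d): max = %.0f, avg = %.2f\n", oy, ox, mx, sum / (K * K));
        }
      }
      return 0;
    }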

    A detailed walkthrough of the <caffe/layers/pooling_layer.hpp> file follows:

    #ifndef CAFFE_POOLING_LAYER_HPP_
    #define CAFFE_POOLING_LAYER_HPP_
    
    #include <vector>
    
    #include "caffe/blob.hpp"
    #include "caffe/layer.hpp"
    #include "caffe/proto/caffe.pb.h"
    
    namespace caffe {
    /**
     * @brief Pools the input image by taking the max, average, etc. within regions.
     *
     * TODO(dox): thorough documentation for Forward, Backward, and proto params.
     */
    // pooling layer, a subclass of Layer; downsamples the image using one of three methods: Max, Ave, Stochastic
    template <typename Dtype>
    class PoolingLayer : public Layer<Dtype> {
     public:
    // explicit constructor
      explicit PoolingLayer(const LayerParameter& param) : Layer<Dtype>(param) {}
    // parameter initialization: reads member values from PoolingParameter, including
    // global_pooling_, kernel_h_, kernel_w_, pad_h_, pad_w_, stride_h_, stride_w_
      virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top);
    // adjusts the shape of the top blobs and may also reshape rand_idx_ or max_idx_;
    // sets member values including channels_, height_, width_, pooled_height_, pooled_width_
      virtual void Reshape(const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top);
    // the type of the pooling layer: "Pooling"
      virtual inline const char* type() const { return "Pooling"; }
    // exact number of bottom blobs required by the pooling layer: 1
      virtual inline int ExactNumBottomBlobs() const { return 1; }
    // minimum number of top blobs produced by the pooling layer: 1
      virtual inline int MinTopBlobs() const { return 1; }
      // MAX POOL layers can output an extra top blob for the mask;
      // others can only output the pooled inputs.
    // maximum number of top blobs: 2 for MAX pooling (the extra top is the mask), 1 for the others (Ave, Stochastic)
      virtual inline int MaxTopBlobs() const {
        return (this->layer_param_.pooling_param().pool() ==
                PoolingParameter_PoolMethod_MAX) ? 2 : 1;
      }
    
     protected:
    // CPU forward pass of the pooling layer; only the Max and Ave methods are implemented
      virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top);
    // GPU forward pass of the pooling layer; Max, Ave and Stochastic are all implemented
      virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top);
    // CPU backward pass of the pooling layer; only the Max and Ave methods are implemented
      virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,
          const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);
    // GPU backward pass of the pooling layer; Max, Ave and Stochastic are all implemented
      virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,
          const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);
    
    // member variable names in Caffe classes end with "_", which makes them easy to tell apart from local variables
      int kernel_h_, kernel_w_; // kernel (filter) size
      int stride_h_, stride_w_; // stride
      int pad_h_, pad_w_; // padding added around the image
      int channels_; // number of image channels
      int height_, width_; // image height and width
    // pooled output height and width:
    // pooled_height_ = ceil((height_ + 2 * pad_h_ - kernel_h_) / stride_h_) + 1
    // pooled_width_ = ceil((width_ + 2 * pad_w_ - kernel_w_) / stride_w_) + 1
      int pooled_height_, pooled_width_;
      bool global_pooling_; // whether to pool over the whole image (downsample the entire image to 1x1)
      Blob<Dtype> rand_idx_; // random sampling indices; used (and reshaped) when the method is STOCHASTIC
      Blob<int> max_idx_; // max-pooling indices; used (and reshaped) when the method is MAX
    };
    
    }  // namespace caffe
    
    #endif  // CAFFE_POOLING_LAYER_HPP_
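
    As a worked example of the output-size formula in the comments above (numbers are mine, and the rounding up matches PoolingLayer::Reshape, which rounds the division up): with kernel_size = 3, pad = 2 and stride = 2, the settings used in the test program below, an input height of 256 gives pooled_height_ = ceil((256 + 2*2 - 3) / 2) + 1 = ceil(128.5) + 1 = 130.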

    In the caffe.proto file, the message related to the pooling layer is the following:
    message PoolingParameter { // pooling layer parameters
      enum PoolMethod { // the pooling method: MAX (max pooling), AVE (average pooling), STOCHASTIC (stochastic pooling)
        MAX = 0;
        AVE = 1;
        STOCHASTIC = 2;
      }
      optional PoolMethod pool = 1 [default = MAX]; // The pooling method
      // Pad, kernel size, and stride are all given as a single value for equal
      // dimensions in height and width or as Y, X pairs.
      optional uint32 pad = 4 [default = 0]; // The padding size (equal in Y, X): pixels added around the image border
      optional uint32 pad_h = 9 [default = 0]; // The padding height (Y)
      optional uint32 pad_w = 10 [default = 0]; // The padding width (X)
      optional uint32 kernel_size = 2; // The kernel (sliding-window) size (square, height = width)
      optional uint32 kernel_h = 5; // The kernel height
      optional uint32 kernel_w = 6; // The kernel width
      optional uint32 stride = 3 [default = 1]; // The stride (equal in Y, X): step by which the window slides
      optional uint32 stride_h = 7; // The stride height
      optional uint32 stride_w = 8; // The stride width
      enum Engine {
        DEFAULT = 0;
        CAFFE = 1;
        CUDNN = 2;
      }
      optional Engine engine = 11 [default = DEFAULT]; //
      // If global_pooling then it will pool over the size of the bottom by doing
      // kernel_h = bottom->height and kernel_w = bottom->width
      optional bool global_pooling = 12 [default = false]; // whether to pool over the entire bottom (whole-image pooling)
    }

    Test code for the pooling layer is as follows:

    #include "funset.hpp"
    #include <string>
    #include <vector>
    #include "common.hpp"
    
    int test_caffe_layer_pooling()
    {
    	caffe::Caffe::set_mode(caffe::Caffe::CPU); // set run caffe mode
    
    	// set layer parameter
    	caffe::LayerParameter layer_param;
    	layer_param.set_phase(caffe::Phase::TRAIN);
    
    	// cv::Mat -> caffe::Blob
    	std::string image_name = "E:/GitCode/Caffe_Test/test_data/images/a.jpg";
    	cv::Mat mat1 = cv::imread(image_name, 1);
    	if (!mat1.data) {
    		fprintf(stderr, "read image fail: %s\n", image_name.c_str());
    		return -1;
    	}
    	mat1.convertTo(mat1, CV_32FC3);
    	std::vector<cv::Mat> mat2;
    	cv::split(mat1, mat2);
    	std::vector<int> mat_reshape{ 1, (int)mat2.size(), mat2[0].rows, mat2[0].cols };
    
    	caffe::Blob<float> blob;
    	blob.Reshape(mat_reshape);
    	size_t size = mat2[0].rows * mat2[0].cols;
    	float* data = new float[mat2.size() * size];
    	memcpy(data, mat2[0].data, size * sizeof(float));
    	memcpy(data + size, mat2[1].data, size * sizeof(float));
    	memcpy(data + 2 * size, mat2[2].data, size * sizeof(float));
    	blob.set_cpu_data(data);
    
    	for (int method = 0; method < 2; ++method) {
    		// set pooling parameter
    		caffe::PoolingParameter* pooling_param = layer_param.mutable_pooling_param();
    		if (method == 0) pooling_param->set_pool(caffe::PoolingParameter::MAX);
    		else pooling_param->set_pool(caffe::PoolingParameter::AVE);
    		pooling_param->set_kernel_size(3);
    		pooling_param->set_pad(2);
    		pooling_param->set_stride(2);
    		pooling_param->set_global_pooling(false);
    
    		// taking the address of a temporary (as the original code did) is non-standard C++; use a named Blob instead
    		caffe::Blob<float> top;
    		std::vector<caffe::Blob<float>*> bottom_blob{ &blob }, top_blob{ &top /*, a second top could hold MAX pooling's mask */ };
    
    		// test PoolingLayer function
    		caffe::PoolingLayer<float> pooling_layer(layer_param);
    		pooling_layer.SetUp(bottom_blob, top_blob);
    		fprintf(stderr, "top blob info: channels: %d, height: %d, width: %d\n",
    			top_blob[0]->channels(), top_blob[0]->height(), top_blob[0]->width());
    
    		pooling_layer.Forward(bottom_blob, top_blob);
    
    		int height = top_blob[0]->height();
    		int width = top_blob[0]->width();
    		const float* p = top_blob[0]->cpu_data();
    		std::vector<cv::Mat> mat3{ cv::Mat(height, width, CV_32FC1, (float*)p),
    			cv::Mat(height, width, CV_32FC1, (float*)(p + height * width)),
    			cv::Mat(height, width, CV_32FC1, (float*)(p + height * width * 2)) };
    		cv::Mat mat4;
    		cv::merge(mat3, mat4);
    		mat4.convertTo(mat4, CV_8UC3);
    		if (method == 0) image_name = "E:/GitCode/Caffe_Test/test_data/images/forward0.jpg";
    		else image_name = "E:/GitCode/Caffe_Test/test_data/images/forward1.jpg";
    		cv::imwrite(image_name, mat4);
    
    		for (int i = 0; i < bottom_blob[0]->count(); ++i)
    			bottom_blob[0]->mutable_cpu_diff()[i] = bottom_blob[0]->cpu_data()[i];
    		for (int i = 0; i < top_blob[0]->count(); ++i)
    			top_blob[0]->mutable_cpu_diff()[i] = top_blob[0]->cpu_data()[i];
    
    		std::vector<bool> propagate_down{ true };
    		pooling_layer.Backward(top_blob, propagate_down, bottom_blob);
    
    		height = bottom_blob[0]->height();
    		width = bottom_blob[0]->width();
    		p = bottom_blob[0]->cpu_diff();
    		std::vector<cv::Mat> mat5{ cv::Mat(height, width, CV_32FC1, (float*)p),
    			cv::Mat(height, width, CV_32FC1, (float*)(p + height * width)),
    			cv::Mat(height, width, CV_32FC1, (float*)(p + height * width * 2)) };
    		cv::Mat mat6;
    		cv::merge(mat5, mat6);
    		mat6.convertTo(mat6, CV_8UC3);
    		if (method == 0) image_name = "E:/GitCode/Caffe_Test/test_data/images/backward0.jpg";
    		else image_name = "E:/GitCode/Caffe_Test/test_data/images/backward1.jpg";
    		cv::imwrite(image_name, mat6);
    	}
    
    	delete[] data;
    	return 0;
    }
    [Figure: console output of the test run]

    [Figure: image results (Lena.jpg) for the forward and backward passes; the first two images use Max pooling, the last two use Ave pooling]

    GitHub: https://github.com/fengbingchun/Caffe_Test

  • A front-end framework designed for back-end programmers. It wraps various CSS, JS, Ajax and so on to such a degree that sometimes... It contains the js and css files for both layui and layer, completely intact with nothing missing; just unzip and use. I personally ran into no problems with it.
  • <script src="assets/store/js/layui.js"></script> <script> layui.use('layer', function(){ layer = layui.layer; }) </script>
  • Using layer

    Read 10,000+ times · 2016-12-29 10:36:38
    1. Download the plugin from the layer site layer.layui.com (currently v3.0.1), unzip it, and add the layer folder to your project. 2. Include jQuery in the demo, the same jQuery that the layer.layui.com page includes ... 3. How to use layer
  • JS files required by layer: static_js.zip
  • layui demo: a layui web demo I downloaded myself.
  • While using ThinkPHP 5 to upload files from a layer popup, I could not get it to work. The code is as follows. HTML: only xls or xlsx files may be imported! JS: //get the url from the template url = $(obj).attr('daoru_url'); var $...
  • Uploading a file from a layer popup, with loading

    Read 1,000+ times · 2019-01-04 10:40:40
    <!doctype html> tag, add {{--{{ csrf_field() }}--}}, choose file ...
  • CSS resources of the layer plugin

    2018-10-27 11:44:02
    The layer plugin optimizes pop-up dialogs in front-end JSP pages and makes them look nicer
  • Layer stack up.

    2018-04-28 15:03:50
    PCB layer stack-up design: schemes for PCBs with different numbers of layers.
  • https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1... called through hub.KerasLayer. If you have a membership you can download it directly; if not, here is a Baidu netdisk link: https://pan.baidu.com/s/1OyBFF37ZAP71h2yuv2H3DA, extraction code: cwub
  • layer.css, web popup-layer component

    2019-07-06 12:56:27
    layer is a highly praised web popup-layer component; layer is based on ... and requires layer.js
  • layer.js download

    2020-08-18 11:30:21
    layer is a web popup-layer component that has been very popular in recent years; it offers a complete solution, aims to serve developers of every skill level, and gives your pages a rich, friendly interactive experience.
  • A plugin that adds layers to visualize fields (aka rasters) from ASCIIGrid or GeoTIFF files (EPSG:4326). Warning! New npm install location: npm install ih-leaflet-canvaslayer-field. This includes: L.CanvasLayer.ScalarField, which can be used...
  • Uploading files with layui.upload versus with layer.open. There are many ways to upload files; here are two simple ones. The first: uploading with layui.upload: layui.use(['layer','upload'], function () { var upload = layui....
  • layer-v2.4 popup-layer component

    2016-11-22 14:14:38
    1. To use it, put the whole layer folder in any directory of your site and include only layer.js; apart from jQuery, no other file needs to be included. 2. If your JS is loaded through a bundling step, or you do not want to rely on the absolute path that layer detects automatically, you can use layer...
  • The layer.json needed after generating files with Cesium Terrain Builder; simply copy it into the generated folder and it can be used.
  • Requirement: the feature name is known from a back-end data query, and the goal is to query the feature polygon by that name without ... First include the reference files: <link rel="stylesheet" href="https://js.arcgis.com/3.29/dijit/themes/tundra/tundra.css"> <link...
  • Uploading files with layui

    Read 1,000+ times · 2019-05-31 15:47:27
    Reference: https://www.cnblogs.com/Ivan-Wu/p/9561318.html  upload.render({ //allowed file extensions elem: '#upload' ,url: upload_url ,data:{id:id} ,accept: 'file' //ordinary file ,exts:...
  • Layer multi-image upload example

    2018-07-31 14:38:04
    Multi-image upload with layer; a test example the author wrote while learning the layer framework.
  • Passing data before importing a file with layer UI

    Read 1,000+ times · 2017-10-26 17:24:08
    I recently hit a problem uploading files with layer UI: I wanted to pass a data-id to the backend before uploading, but the layer docs have nothing on it. After some fiddling I found the trick, and record the HTML code here. Import routing table ...
  • OpenBMC already contains layers for many vendors' boards, and each vendor's layer differs because the SoCs and peripherals each vendor uses are not the same. To create your own layer or modify an existing one with confidence, you need to understand the layer architecture, which is what this section introduces. ...
