  • Oracle recursive WITH


    Introduction

    The recursive WITH clause (Recursive WITH Clauses) is a syntax used mainly for hierarchical queries. Using it requires Oracle 11g Release 2 or later. It can be viewed as a complement to the CONNECT BY syntax.

    Basic syntax

    Set up some test data

    DROP TABLE tab1 PURGE;
    
    CREATE TABLE tab1 (
      id        NUMBER,
      parent_id NUMBER,
      CONSTRAINT tab1_pk PRIMARY KEY (id),
      CONSTRAINT tab1_tab1_fk FOREIGN KEY (parent_id) REFERENCES tab1(id)
    );
    
    CREATE INDEX tab1_parent_id_idx ON tab1(parent_id);
    
    INSERT INTO tab1 VALUES (1, NULL);
    INSERT INTO tab1 VALUES (2, 1);
    INSERT INTO tab1 VALUES (3, 2);
    INSERT INTO tab1 VALUES (4, 2);
    INSERT INTO tab1 VALUES (5, 4);
    INSERT INTO tab1 VALUES (6, 4);
    INSERT INTO tab1 VALUES (7, 1);
    INSERT INTO tab1 VALUES (8, 7);
    INSERT INTO tab1 VALUES (9, 1);
    INSERT INTO tab1 VALUES (10, 9);
    INSERT INTO tab1 VALUES (11, 10);
    INSERT INTO tab1 VALUES (12, 9);
    COMMIT;
            ID  PARENT_ID
    ---------- ----------
             1
             2          1
             3          2
             4          2
             5          4
             6          4
             7          1
             8          7
             9          1
            10          9
            11         10
            12          9

    This is a table with a parent-child relationship.
    The traditional way to query it looks like this:

    select *
      from tab1 t1
     start with t1.parent_id is null
    connect by prior t1.id = t1.parent_id;
            ID  PARENT_ID
    ---------- ----------
             1
             2          1
             3          2
             4          2
             5          4
             6          4
             7          1
             8          7
             9          1
            10          9
            11         10
            12          9

    Now there is a new way to write it.

    with t1(id, parent_id) as (
      select * from tab1 t0 where t0.parent_id is null  -- Anchor member.
      union all
      select t2.id, t2.parent_id from tab1 t2, t1       -- Recursive member.
      where t2.parent_id = t1.id
    )
    select * from t1;
            ID  PARENT_ID
    ---------- ----------
             1
             2          1
             7          1
             9          1
             3          2
             4          2
             8          7
            10          9
            12          9
             5          4
             6          4
            11         10

    The official explanation:

    Basic Hierarchical Query

    A recursive subquery factoring clause must contain two query blocks combined by a UNION ALL set operator. The first block is known as the anchor member, which can not reference the query name. It can be made up of one or more query blocks combined by the UNION ALL, UNION, INTERSECT or MINUS set operators. The second query block is known as the recursive member, which must reference the query name once.

    The following query uses a recursive WITH clause to perform a tree walk. The anchor member queries the root nodes by testing for records with no parents. The recursive member successively adds the children to the root nodes.

    This is rather unusual syntax: UNION ALL is not being used here to merge two result sets, but acts as a structural keyword of the clause.

    The statement inside the WITH must consist of two parts separated by UNION ALL. The first part determines the root nodes and does not take part in the recursion; it plays the role of START WITH. The second part receives data from the previous level; it corresponds to the condition after CONNECT BY.

    Result set ordering

    The ordering of the rows is specified using the SEARCH clause, which can use two methods.

    BREADTH FIRST BY : Sibling rows are returned before child rows are processed.
    DEPTH FIRST BY : Child rows are returned before siblings are processed.

    As the example above shows, the default ordering is breadth-first. To switch to depth-first, write it like this:

    WITH t1(id, parent_id) AS (
      -- Anchor member.
      SELECT id,
             parent_id
      FROM   tab1
      WHERE  parent_id IS NULL
      UNION ALL
      -- Recursive member.
      SELECT t2.id,
             t2.parent_id
      FROM   tab1 t2, t1
      WHERE  t2.parent_id = t1.id
    )
    SEARCH DEPTH FIRST BY id SET order1
    SELECT id,
           parent_id
    FROM   t1
    ORDER BY order1;
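
    For completeness, the breadth-first ordering can also be requested explicitly. The following sketch (my own illustration against the same tab1 table, not from the original article) simply swaps the SEARCH keyword:

    WITH t1(id, parent_id) AS (
      -- Anchor member.
      SELECT id,
             parent_id
      FROM   tab1
      WHERE  parent_id IS NULL
      UNION ALL
      -- Recursive member.
      SELECT t2.id,
             t2.parent_id
      FROM   tab1 t2, t1
      WHERE  t2.parent_id = t1.id
    )
    SEARCH BREADTH FIRST BY id SET order1
    SELECT id,
           parent_id
    FROM   t1
    ORDER BY order1;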

    Equivalents for CONNECT BY-related syntax

    LEVEL

    WITH t1(id, parent_id, lvl) AS (
      -- Anchor member.
      SELECT id,
             parent_id,
             1 AS lvl
      FROM   tab1
      WHERE  parent_id IS NULL
      UNION ALL
      -- Recursive member.
      SELECT t2.id,
             t2.parent_id,
             lvl+1
      FROM   tab1 t2, t1
      WHERE  t2.parent_id = t1.id
    )
    SEARCH DEPTH FIRST BY id SET order1
    SELECT id,
           parent_id,
           RPAD('.', (lvl-1)*2, '.') || id AS tree,
           lvl
    FROM t1
    ORDER BY order1;
            ID  PARENT_ID TREE              LVL
    ---------- ---------- ------------ --------
             1            1                   1
             2          1 ..2                 2
             3          2 ....3               3
             4          2 ....4               3
             5          4 ......5             4
             6          4 ......6             4
             7          1 ..7                 2
             8          7 ....8               3
             9          1 ..9                 2
            10          9 ....10              3
            11         10 ......11            4
            12          9 ....12              3

    CONNECT_BY_ROOT

    WITH t1(id, parent_id, lvl, root_id) AS (
      -- Anchor member.
      SELECT id,
             parent_id,
             1 AS lvl,
             id AS root_id
      FROM   tab1
      WHERE  parent_id IS NULL
      UNION ALL
      -- Recursive member.
      SELECT t2.id,
             t2.parent_id,
             lvl+1,
             t1.root_id
      FROM   tab1 t2, t1
      WHERE  t2.parent_id = t1.id
    )
    SEARCH DEPTH FIRST BY id SET order1
    SELECT id,
           parent_id,
           RPAD('.', (lvl-1)*2, '.') || id AS tree,
           lvl,
           root_id
    FROM t1
    ORDER BY order1;
            ID  PARENT_ID TREE              LVL    ROOT_ID
    ---------- ---------- ------------ -------- ----------
             1            1                   1          1
             2          1 ..2                 2          1
             3          2 ....3               3          1
             4          2 ....4               3          1
             5          4 ......5             4          1
             6          4 ......6             4          1
             7          1 ..7                 2          1
             8          7 ....8               3          1
             9          1 ..9                 2          1
            10          9 ....10              3          1
            11         10 ......11            4          1
            12          9 ....12              3          1

    SYS_CONNECT_BY_PATH

    WITH t1(id, parent_id, lvl, root_id, path) AS (
      -- Anchor member.
      SELECT id,
             parent_id,
             1 AS lvl,
             id AS root_id,
             TO_CHAR(id) AS path
      FROM   tab1
      WHERE  parent_id IS NULL
      UNION ALL
      -- Recursive member.
      SELECT t2.id,
             t2.parent_id,
             lvl+1,
             t1.root_id,
             t1.path || '-' || t2.id AS path
      FROM   tab1 t2, t1
      WHERE  t2.parent_id = t1.id
    )
    SEARCH DEPTH FIRST BY id SET order1
    SELECT id,
           parent_id,
           RPAD('.', (lvl-1)*2, '.') || id AS tree,
           lvl,
           root_id,
           path
    FROM t1
    ORDER BY order1;
            ID  PARENT_ID TREE              LVL    ROOT_ID PATH
    ---------- ---------- ------------ -------- ---------- ------------
             1            1                   1          1 1
             2          1 ..2                 2          1 1-2
             3          2 ....3               3          1 1-2-3
             4          2 ....4               3          1 1-2-4
             5          4 ......5             4          1 1-2-4-5
             6          4 ......6             4          1 1-2-4-6
             7          1 ..7                 2          1 1-7
             8          7 ....8               3          1 1-7-8
             9          1 ..9                 2          1 1-9
            10          9 ....10              3          1 1-9-10
            11         10 ......11            4          1 1-9-10-11
            12          9 ....12              3          1 1-9-12

    NOCYCLE and CONNECT_BY_ISCYCLE

    UPDATE tab1 SET parent_id = 9 WHERE id = 1;
    COMMIT;
    
    
    WITH t1(id, parent_id, lvl, root_id, path) AS (
      -- Anchor member.
      SELECT id,
             parent_id,
             1 AS lvl,
             id AS root_id,
             TO_CHAR(id) AS path
      FROM   tab1
      WHERE  id = 1
      UNION ALL
      -- Recursive member.
      SELECT t2.id,
             t2.parent_id,
             lvl+1,
             t1.root_id,
             t1.path || '-' || t2.id AS path
      FROM   tab1 t2, t1
      WHERE  t2.parent_id = t1.id
    )
    SEARCH DEPTH FIRST BY id SET order1
    SELECT id,
           parent_id,
           RPAD('.', (lvl-1)*2, '.') || id AS tree,
           lvl,
           root_id,
           path
    FROM t1
    ORDER BY order1;
         *
    ERROR at line 27:
    ORA-32044: cycle detected while executing recursive WITH query

    Without any handling of the cycle, this unsurprisingly raises an error.

    WITH t1(id, parent_id, lvl, root_id, path) AS (
      -- Anchor member.
      SELECT id,
             parent_id,
             1 AS lvl,
             id AS root_id,
             TO_CHAR(id) AS path
      FROM   tab1
      WHERE  id = 1
      UNION ALL
      -- Recursive member.
      SELECT t2.id,
             t2.parent_id,
             lvl+1,
             t1.root_id,
             t1.path || '-' || t2.id AS path
      FROM   tab1 t2, t1
      WHERE  t2.parent_id = t1.id
    )
    SEARCH DEPTH FIRST BY id SET order1
    CYCLE id SET cycle TO 1 DEFAULT 0
    SELECT id,
           parent_id,
           RPAD('.', (lvl-1)*2, '.') || id AS tree,
           lvl,
           root_id,
           path,
           cycle
    FROM t1
    ORDER BY order1;
            ID  PARENT_ID TREE              LVL    ROOT_ID PATH         CYCLE
    ---------- ---------- ------------ -------- ---------- ------------ -----
             1          9 1                   1          1 1            0
             2          1 ..2                 2          1 1-2          0
             3          2 ....3               3          1 1-2-3        0
             4          2 ....4               3          1 1-2-4        0
             5          4 ......5             4          1 1-2-4-5      0
             6          4 ......6             4          1 1-2-4-6      0
             7          1 ..7                 2          1 1-7          0
             8          7 ....8               3          1 1-7-8        0
             9          1 ..9                 2          1 1-9          0
             1          9 ....1               3          1 1-9-1        1
            10          9 ....10              3          1 1-9-10       0
            11         10 ......11            4          1 1-9-10-11    0
            12          9 ....12              3          1 1-9-12       0

    The NOCYCLE and CONNECT_BY_ISCYCLE functionality is replicated using the CYCLE clause. By specifying this clause, the cycle is detected and the recursion stops, with the cycle column set to the specified value, to indicate the row where the cycle is detected. Unlike the CONNECT BY NOCYCLE method, which stops at the row before the cycle, this method stops at the row after the cycle.

    Note that after detecting a cycle, the recursive WITH query still recurses one more level down; the CONNECT BY statement does not.

    An example for comparison:

    select t1.*, level
      from tab1 t1
     start with t1.id = 1 
    connect by nocycle prior t1.id = t1.parent_id;
            ID  PARENT_ID      LEVEL
    ---------- ---------- ----------
             1          9          1
             2          1          2
             3          2          3
             4          2          3
             5          4          4
             6          4          4
             7          1          2
             8          7          3
             9          1          2
            10          9          3
            11         10          4
            12          9          3

    How recursive WITH improves on CONNECT BY

    Many people like to call the CONNECT BY syntax a recursive query, but strictly speaking that is a misnomer, because it cannot pass a value computed at the current level down to the next level. Even the official documentation calls it Hierarchical Queries in Oracle (CONNECT BY).
    Recursive WITH changes this completely; recursion is right there in its name: Recursive WITH Clauses.

    Computing the greatest common divisor with the Euclidean algorithm is, in my opinion, the best test of true recursive capability: it has to swap two values, take a remainder, and pass that result down to the next level.
    First, a quick refresher in Java:

        public static int gcd(int a, int b){
            return a % b == 0 ? b : gcd(b, a % b);
        }

    In SQL it has to be written like this:

    with t1(id, a1, a2) as (
    select 1, 176, 34
      from dual
    union all
    select id + 1, a2, mod(a1, a2)
      from t1
     where mod(a1, a2) > 0
    )
    select distinct first_value(t1.a2) over(order by t1.id desc) res from t1;
    
    --2

    To make it easier to follow, here is the output of each step:

    with t1(id, a1, a2) as (
    select 1, 176, 34
      from dual
    union all
    select id + 1, a2, mod(a1, a2)
      from t1
     where mod(a1, a2) > 0
    )
    select t1.* from t1;
            ID         A1         A2
    ---------- ---------- ----------
             1        176         34
             2         34          6
             3          6          4
             4          4          2

    This syntax greatly widens the range of problems SQL can handle; skilled developers can pull off quite a few tricks with it.
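
    As one more illustration (a minimal sketch of my own, not from the original article), a recursive WITH clause can also serve as a simple row generator, something that previously needed CONNECT BY LEVEL tricks:

    with numbers(n) as (
      select 1 from dual          -- Anchor member: start at 1.
      union all
      select n + 1 from numbers   -- Recursive member: each level adds 1.
       where n < 10
    )
    select n from numbers;        -- returns the integers 1 through 10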

    References

    https://oracle-base.com/articles/11g/recursive-subquery-factoring-11gr2#setup

  • Oracle with table as: creating temporary result sets

    Usage examples of creating a temporary result set with Oracle with table as

    1. with table as is roughly equivalent to creating a temporary table (for SQL statements that keep some intermediate results of a single statement in temporary storage). Oracle 9i added the WITH syntax, which lets you name the subqueries of a query and place them at the very front of the SELECT statement.


    The syntax is
    with tempname as (select ....)
    select ...

    Examples:
    with t as (select * from emp where depno=10)
    select * from t where empno=xxx

    with
    wd as (select did, avg(salary) 平均工资 from work group by did),
    em as (select emp.*,w.salary from emp left join work w on emp.eid = w.eid)
    select * from wd,em where wd.did =em.did and wd.平均工资>em.salary;



    2. When is it cleared?

    Aren't temporary tables normally cleared automatically only when the session ends? But the temporary result set created by with as is discarded as soon as the query finishes!

    Note:

    Types of temporary tables

    Oracle has two kinds of temporary tables: session-level temporary tables and transaction-level temporary tables.

    1)ON COMMIT DELETE ROWS

    This is the default for a temporary table. It means the data in the temporary table is valid only within a transaction; once the transaction is committed (COMMIT), the table's temporary segment is automatically truncated (TRUNCATE), but the table structure and metadata remain in the user's data dictionary. Once a temporary table has served its purpose, it is best to drop it (DROP command); otherwise the database accumulates leftover temporary table structures and metadata.

    2)ON COMMIT PRESERVE ROWS

    This means the contents of the temporary table persist across transactions. However, when the session ends, the table's temporary segment is discarded along with the session, and so is the data in it. The table structure and metadata still remain in the user's data dictionary, so again it is best to drop the table (DROP command) once it has served its purpose; otherwise the database accumulates leftover temporary table structures and metadata.
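
    For reference, a minimal sketch of creating both kinds of global temporary table (the table and column names here are made up for illustration):

    -- Transaction-level: rows disappear at COMMIT.
    CREATE GLOBAL TEMPORARY TABLE gtt_tx (id NUMBER, val VARCHAR2(30))
      ON COMMIT DELETE ROWS;

    -- Session-level: rows survive COMMIT and disappear when the session ends.
    CREATE GLOBAL TEMPORARY TABLE gtt_session (id NUMBER, val VARCHAR2(30))
      ON COMMIT PRESERVE ROWS;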


    23:48:58 SCOTT@orcl> with aa as(select * from dept)
    23:57:58   2  select * from aa;

        DEPTNO DNAME          LOC
    ---------- -------------- -------------
            10 ACCOUNTING     NEW YORK
            20 RESEARCH       DALLAS
            30 SALES          CHICAGO
            40 OPERATIONS     BOSTON

    Elapsed: 00:00:00.12
    23:58:06 SCOTT@orcl> select * from aa;
    select * from aa
                  *
    ERROR at line 1:
    ORA-00942: table or view does not exist


    Elapsed: 00:00:00.02
    23:58:14 SCOTT@orcl>

    3. For this functionality alone a plain subquery would do, so why use WITH? What does it buy you?
    Both forms can be written, but the execution plans differ. When a query contains several similar subqueries, factor the common part out with WITH: because the subquery result sits in an in-memory temporary table, execution is naturally faster.
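
    As a sketch of that point (reusing the emp/work tables from the example above, so the column names are assumptions), the shared subquery is written only once but referenced more than once:

    WITH dept_avg AS (
      -- The shared aggregation is written once here...
      select did, avg(salary) avg_salary from work group by did
    )
    -- ...and referenced in both branches below.
    select w.eid, w.salary, d.avg_salary
      from work w, dept_avg d
     where w.did = d.did
       and w.salary > d.avg_salary
    union all
    select w.eid, w.salary, d.avg_salary
      from work w, dept_avg d
     where w.did = d.did
       and w.salary < d.avg_salary / 2;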

    4. A problem:
    A table contains the following data ('高' means high, '低' means low; the literal values are kept because the answer below matches on them):
    aaa 高
    bbb 低
    aaa 低
    aaa 高
    bbb 低
    bbb 高
    The required result is:
          高  低
    aaa    2   1
    bbb    1   2
    How should the SQL be written?

    Answer:
    with tt as (
      select 'aaa' id, '高' value from dual union all
      select 'bbb' id, '低' value from dual union all
      select 'aaa' id, '低' value from dual union all
      select 'aaa' id, '高' value from dual union all
      select 'bbb' id, '低' value from dual union all
      select 'bbb' id, '高' value from dual)
    SELECT id,
           COUNT(decode(VALUE, '高', 1)) 高,
           COUNT(decode(VALUE, '低', 1)) 低
      FROM tt
     GROUP BY id;
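
    In Oracle 11g the same result could also be obtained with the PIVOT clause; the following is only a sketch against the same tt data, not part of the original answer:

    with tt as (
      select 'aaa' id, '高' value from dual union all
      select 'bbb' id, '低' value from dual union all
      select 'aaa' id, '低' value from dual union all
      select 'aaa' id, '高' value from dual union all
      select 'bbb' id, '低' value from dual union all
      select 'bbb' id, '高' value from dual)
    select *
      from tt
     pivot (count(*) for value in ('高' as 高, '低' as 低));
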
    ===================================================================
    Extension:
    Oracle 9i added the WITH syntax, which lets you name the subqueries of a query and place them before the SELECT statement.

      A simple example:

    SQL> WITH
    2 SEG AS (SELECT SEGMENT_NAME, SUM(BYTES)/1024 K FROM USER_SEGMENTS GROUP BY SEGMENT_NAME),
    3 OBJ AS (SELECT OBJECT_NAME, OBJECT_TYPE FROM USER_OBJECTS)
    4 SELECT O.OBJECT_NAME, OBJECT_TYPE, NVL(S.K, 0) SIZE_K
    5 FROM OBJ O, SEG S
    6 WHERE O.OBJECT_NAME = S.SEGMENT_NAME (+)
    7 ;
    OBJECT_NAME OBJECT_TYPE SIZE_K
    ------------------------------ ------------------- ----------
    DAIJC_TEST TABLE 128
    P_TEST PROCEDURE 0
    IND_DAIJC_TEST_C1 INDEX 128

      The WITH clause defines two subqueries, SEG and OBJ, which the subsequent SELECT statement can query directly as if they were predefined. As the example shows, the WITH clause lays out an SQL statement containing aggregation, outer joins and the like in a clear way.

      Subqueries defined with WITH not only make the query simpler and clearer; they are also visible at every level of the SELECT statement.

      Even within the WITH definitions themselves, a later subquery can use the subqueries defined before it:

    SQL> WITH
    2 Q1 AS (SELECT 3 + 5 S FROM DUAL),
    3 Q2 AS (SELECT 3 * 5 M FROM DUAL),
    4 Q3 AS (SELECT S, M, S + M, S * M FROM Q1, Q2)
    5 SELECT * FROM Q3;
    S M S+M S*M
    ---------- ---------- ---------- ----------
    8 15 23 120

      Using WITH to define a subquery that appears several times in a query can also bring performance benefits. Oracle optimizes WITH: when a subquery defined in WITH needs to be accessed several times, Oracle places its result in a temporary table so the same subquery is not executed repeatedly, which effectively reduces the query's I/O.

    WITH can be used in SELECT statements; UPDATE and DELETE statements also support the WITH syntax, provided the Oracle version supports it:
    http://www.oracle.com.cn/viewthread.php?tid=83530

    =============================================================================
    with
    sql1 as (select to_char(a) s_name from test_tempa),
    sql2 as (select to_char(b) s_name from test_tempb where not exists (select s_name from sql1 where rownum=1))
    select * from sql1
    union all
    select * from sql2
    union all
    select 'no records' from dual
           where not exists (select s_name from sql1 where rownum=1)
           and not exists (select s_name from sql2 where rownum=1);

    Another simple example:

    with a as (select * from test)

    select * from a;

    Essentially, you put a chunk of SQL that is reused many times inside with as, give it an alias, and the later parts of the query can use it.

    For large amounts of repetitive SQL this acts as an optimization, and it keeps the statement clear and readable.


    Below is an English reference found by searching (it is fairly thorough; my English is rather poor and I have not worked through all the details, so I hope the experts will say more about the benefits of with as):

    About Oracle WITH clause
    Starting in Oracle9i release 2 we see an incorporation of the SQL-99 “WITH clause”, a tool for materializing subqueries to save Oracle from having to re-compute them multiple times.

    The SQL “WITH clause” is very similar to the use of Global temporary tables (GTT), a technique that is often used to improve query speed for complex subqueries. Here are some important notes about the Oracle “WITH clause”:

       • The SQL “WITH clause” only works on Oracle 9i release 2 and beyond.
       • Formally, the “WITH clause” is called subquery factoring.
       • The SQL “WITH clause” is used when a subquery is executed multiple times.
       • Also useful for recursive queries (SQL-99, but not Oracle SQL).

    To keep it simple, the following example only references the aggregations once, where the SQL “WITH clause” is normally used when an aggregation is referenced multiple times in a query.
    We can also use the SQL-99 “WITH clause” instead of temporary tables. The Oracle SQL “WITH clause” will compute the aggregation once, give it a name, and allow us to reference it (maybe multiple times), later in the query.

    The SQL-99 “WITH clause” is very confusing at first because the SQL statement does not begin with the word SELECT. Instead, we use the “WITH clause” to start our SQL query, defining the aggregations, which can then be named in the main query as if they were “real” tables:

    WITH
    subquery_name
    AS
    (the aggregation SQL statement)
    SELECT
    (query naming subquery_name);

    Returning to our oversimplified example, let’s replace the temporary tables with the SQL “WITH clause”:

    WITH
    sum_sales AS
      ( select /*+ materialize */
          sum(quantity) all_sales from stores ),
    number_stores AS
      ( select /*+ materialize */
          count(*) nbr_stores from stores ),
    sales_by_store AS
      ( select /*+ materialize */
          store_name, sum(quantity) store_sales from
          store natural join sales
        group by store_name )
    SELECT
       store_name
    FROM
       store,
       sum_sales,
       number_stores,
       sales_by_store
    where
       store_sales > (all_sales / nbr_stores)
    ;

    Note the use of the Oracle undocumented “materialize” hint in the “WITH clause”. The Oracle materialize hint is used to ensure that the Oracle cost-based optimizer materializes the temporary tables that are created inside the “WITH” clause. This is not necessary in Oracle10g, but it helps ensure that the tables are only created one time.

    It should be noted that the “WITH clause” is not yet fully functional within Oracle SQL: it does not yet support using the “WITH clause” as a replacement for “CONNECT BY” when performing recursive queries.

    To see how the “WITH clause” is used in ANSI SQL-99 syntax, here is an excerpt from Jonathan Gennick’s great work “Understanding the WITH Clause” showing the use of the SQL-99 “WITH clause” to traverse a recursive bill-of-materials hierarchy



    References:

    http://blog.csdn.net/a9529lty/article/details/4923957/

    http://blog.csdn.net/a9529lty/article/details/4923988

    http://blog.csdn.net/a9529lty/article/details/4923948


    Usage of the Oracle WITH statement


  • ImageNet Classification with Deep Convolutional Neural Networks (paper translation)

    Deep Learning
    Author: Tyan
    Blog: noahsnail.com  |  CSDN  |  简书

    Collected paper translations: https://github.com/SnailTyan/deep-learning-papers-translation

    ImageNet Classification with Deep Convolutional Neural Networks

    Abstract

    We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called “dropout” that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.

    摘要

    我们训练了一个大型深度卷积神经网络来将ImageNet LSVRC-2010竞赛的120万高分辨率的图像分到1000不同的类别中。在测试数据上,我们得到了top-1 37.5%, top-5 17.0%的错误率,这个结果比目前的最好结果好很多。这个神经网络有6000万参数和650000个神经元,包含5个卷积层(某些卷积层后面带有池化层)和3个全连接层,最后是一个1000维的softmax。为了训练的更快,我们使用了非饱和神经元并对卷积操作进行了非常有效的GPU实现。为了减少全连接层的过拟合,我们采用了一个最近开发的名为dropout的正则化方法,结果证明是非常有效的。我们也使用这个模型的一个变种参加了ILSVRC-2012竞赛,赢得了冠军并且与第二名 top-5 26.2%的错误率相比,我们取得了top-5 15.3%的错误率。

    1 Introduction

    Current approaches to object recognition make essential use of machine learning methods. To improve their performance, we can collect larger datasets, learn more powerful models, and use better techniques for preventing overfitting. Until recently, datasets of labeled images were relatively small -- on the order of tens of thousands of images (e.g., NORB [16], Caltech-101/256 [8, 9], and CIFAR-10/100 [12]). Simple recognition tasks can be solved quite well with datasets of this size, especially if they are augmented with label-preserving transformations. For example, the current best error rate on the MNIST digit-recognition task (<0.3%) approaches human performance [4]. But objects in realistic settings exhibit considerable variability, so to learn to recognize them it is necessary to use much larger training sets. And indeed, the shortcomings of small image datasets have been widely recognized (e.g., Pinto et al. [21]), but it has only recently become possible to collect labeled datasets with millions of images. The new larger datasets include LabelMe [23], which consists of hundreds of thousands of fully-segmented images, and ImageNet [6], which consists of over 15 million labeled high-resolution images in over 22,000 categories.

    1 引言

    当前的目标识别方法基本上都使用了机器学习方法。为了提高目标识别的性能,我们可以收集更大的数据集,学习更强大的模型,使用更好的技术来防止过拟合。直到最近,标注图像的数据集都相对较小--在几万张图像的数量级上(例如,NORB[16],Caltech-101/256 [8, 9]和CIFAR-10/100 [12])。简单的识别任务在这样大小的数据集上可以被解决的相当好,尤其是如果通过标签保留变换进行数据增强的情况下。例如,目前在MNIST数字识别任务上(<0.3%)的最好准确率已经接近了人类水平[4]。但真实环境中的对象表现出了相当大的可变性,因此为了学习识别它们,有必要使用更大的训练数据集。实际上,小图像数据集的缺点已经被广泛认识到(例如,Pinto et al. [21]),但收集上百万图像的标注数据仅在最近才变得的可能。新的更大的数据集包括LabelMe [23],它包含了数十万张完全分割的图像,ImageNet[6],它包含了22000个类别上的超过1500万张标注的高分辨率的图像。

    To learn about thousands of objects from millions of images, we need a model with a large learning capacity. However, the immense complexity of the object recognition task means that this problem cannot be specified even by a dataset as large as ImageNet, so our model should also have lots of prior knowledge to compensate for all the data we don’t have. Convolutional neural networks (CNNs) constitute one such class of models [16, 11, 13, 18, 15, 22, 26]. Their capacity can be controlled by varying their depth and breadth, and they also make strong and mostly correct assumptions about the nature of images (namely, stationarity of statistics and locality of pixel dependencies). Thus, compared to standard feedforward neural networks with similarly-sized layers, CNNs have much fewer connections and parameters and so they are easier to train, while their theoretically-best performance is likely to be only slightly worse.

    为了从数百万张图像中学习几千个对象,我们需要一个有很强学习能力的模型。然而对象识别任务的巨大复杂性意味着这个问题不能被指定,即使通过像ImageNet这样的大数据集,因此我们的模型应该也有许多先验知识来补偿我们所没有的数据。卷积神经网络(CNNs)构成了一个这样的模型[16, 11, 13, 18, 15, 22, 26]。它们的能力可以通过改变它们的广度和深度来控制,它们也可以对图像的本质进行强大且通常正确的假设(也就是说,统计的稳定性和像素依赖的局部性)。因此,与具有层次大小相似的标准前馈神经网络,CNNs有更少的连接和参数,因此它们更容易训练,而它们理论上的最佳性能可能仅比标准前馈神经网络差一点。

    Despite the attractive qualities of CNNs, and despite the relative efficiency of their local architecture, they have still been prohibitively expensive to apply in large scale to high-resolution images. Luckily, current GPUs, paired with a highly-optimized implementation of 2D convolution, are powerful enough to facilitate the training of interestingly-large CNNs, and recent datasets such as ImageNet contain enough labeled examples to train such models without severe overfitting.

    尽管CNN具有引人注目的质量,尽管它们的局部架构相当有效,但将它们大规模的应用到到高分辨率图像中仍然是极其昂贵的。幸运的是,目前的GPU,搭配了高度优化的2D卷积实现,强大到足够促进有趣地大量CNN的训练,最近的数据集例如ImageNet包含足够的标注样本来训练这样的模型而没有严重的过拟合。

    The specific contributions of this paper are as follows: we trained one of the largest convolutional neural networks to date on the subsets of ImageNet used in the ILSVRC-2010 and ILSVRC-2012 competitions [2] and achieved by far the best results ever reported on these datasets. We wrote a highly-optimized GPU implementation of 2D convolution and all the other operations inherent in training convolutional neural networks, which we make available publicly. Our network contains a number of new and unusual features which improve its performance and reduce its training time, which are detailed in Section 3. The size of our network made overfitting a significant problem, even with 1.2 million labeled training examples, so we used several effective techniques for preventing overfitting, which are described in Section 4. Our final network contains five convolutional and three fully-connected layers, and this depth seems to be important: we found that removing any convolutional layer (each of which contains no more than 1% of the model’s parameters) resulted in inferior performance.

    本文具体的贡献如下:我们在ILSVRC-2010和ILSVRC-2012[2]的ImageNet子集上训练了到目前为止最大的神经网络之一,并取得了迄今为止在这些数据集上报道过的最好结果。我们编写了高度优化的2D卷积GPU实现以及训练卷积神经网络内部的所有其它操作,我们把它公开了。我们的网络包含许多新的不寻常的特性,这些特性提高了神经网络的性能并减少了训练时间,详见第三节。即使使用了120万标注的训练样本,我们的网络尺寸仍然使过拟合成为一个明显的问题,因此我们使用了一些有效的技术来防止过拟合,详见第四节。我们最终的网络包含5个卷积层和3个全连接层,深度似乎是非常重要的:我们发现移除任何卷积层(每个卷积层包含的参数不超过模型参数的1%)都会导致更差的性能。

    In the end, the network’s size is limited mainly by the amount of memory available on current GPUs and by the amount of training time that we are willing to tolerate. Our network takes between five and six days to train on two GTX 580 3GB GPUs. All of our experiments suggest that our results can be improved simply by waiting for faster GPUs and bigger datasets to become available.

    最后,网络尺寸主要受限于目前GPU的内存容量和我们能忍受的训练时间。我们的网络在两个GTX 580 3GB GPU上训练五六天。我们的所有实验表明我们的结果可以简单地通过等待更快的GPU和更大的可用数据集来提高。

    2 The Dataset

    ImageNet is a dataset of over 15 million labeled high-resolution images belonging to roughly 22,000 categories. The images were collected from the web and labeled by human labelers using Amazon’s Mechanical Turk crowd-sourcing tool. Starting in 2010, as part of the Pascal Visual Object Challenge, an annual competition called the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) has been held. ILSVRC uses a subset of ImageNet with roughly 1000 images in each of 1000 categories. In all, there are roughly 1.2 million training images, 50,000 validation images, and 150,000 testing images.

    2 数据集

    ImageNet数据集有超过1500万的标注高分辨率图像,这些图像属于大约22000个类别。这些图像是从网上收集的,使用了Amazon’s Mechanical Turk的众包工具通过人工标注的。从2010年起,作为Pascal视觉对象挑战赛的一部分,每年都会举办ImageNet大规模视觉识别挑战赛(ILSVRC)。ILSVRC使用ImageNet的一个子集,1000个类别每个类别大约1000张图像。总计,大约120万训练图像,50000张验证图像和15万测试图像。

    ILSVRC-2010 is the only version of ILSVRC for which the test set labels are available, so this is the version on which we performed most of our experiments. Since we also entered our model in the ILSVRC-2012 competition, in Section 6 we report our results on this version of the dataset as well, for which test set labels are unavailable. On ImageNet, it is customary to report two error rates: top-1 and top-5, where the top-5 error rate is the fraction of test images for which the correct label is not among the five labels considered most probable by the model.

    ILSVRC-2010是ILSVRC竞赛中唯一可以获得测试集标签的版本,因此我们大多数实验都是在这个版本上运行的。由于我们也使用我们的模型参加了ILSVRC-2012竞赛,因此在第六节我们也报告了模型在这个版本的数据集上的结果,这个版本的测试集标签是不可获得的。在ImageNet上,按照惯例报告两个错误率:top-1和top-5,其中top-5错误率是指测试图像的正确标签不在模型认为最可能的五个标签之中的比例。

    ImageNet consists of variable-resolution images, while our system requires a constant input dimensionality. Therefore, we down-sampled the images to a fixed resolution of 256 × 256. Given a rectangular image, we first rescaled the image such that the shorter side was of length 256, and then cropped out the central 256×256 patch from the resulting image. We did not pre-process the images in any other way, except for subtracting the mean activity over the training set from each pixel. So we trained our network on the (centered) raw RGB values of the pixels.

    ImageNet包含各种分辨率的图像,而我们的系统要求不变的输入维度。因此,我们将图像进行下采样到固定的256×256分辨率。给定一个矩形图像,我们首先缩放图像短边长度为256,然后从结果图像中裁剪中心的256×256大小的图像块。除了在训练集上对像素减去平均活跃度外,我们不对图像做任何其它的预处理。因此我们在原始的RGB像素值(中心的)上训练我们的网络。

    3 The Architecture

    The architecture of our network is summarized in Figure 2. It contains eight learned layers — five convolutional and three fully-connected. Below, we describe some of the novel or unusual features of our network’s architecture. Sections 3.1-3.4 are sorted according to our estimation of their importance, with the most important first.

    3 架构

    我们的网络架构概括为图2。它包含八个学习层--5个卷积层和3个全连接层。下面,我们将描述我们网络结构中的一些新颖或不寻常的特性。3.1-3.4小节按照我们估计的重要性排序,最重要的放在最前面。

    3.1 ReLU Nonlinearity

    The standard way to model a neuron’s output f as a function of its input x is with f(x) = tanh(x) or f(x) = (1 + e−x)−1. In terms of training time with gradient descent, these saturating nonlinearities are much slower than the non-saturating nonlinearity f(x) = max(0,x). Following Nair and Hinton [20], we refer to neurons with this nonlinearity as Rectified Linear Units (ReLUs). Deep convolutional neural networks with ReLUs train several times faster than their equivalents with tanh units. This is demonstrated in Figure 1, which shows the number of iterations required to reach 25% training error on the CIFAR-10 dataset for a particular four-layer convolutional network. This plot shows that we would not have been able to experiment with such large neural networks for this work if we had used traditional saturating neuron models.

    3.1 ReLU非线性

    将神经元输出f建模为输入x的函数的标准方式是用f(x) = tanh(x)f(x) = (1 + e−x)−1。考虑到梯度下降的训练时间,这些饱和的非线性比非饱和非线性f(x) = max(0,x)更慢。根据Nair和Hinton[20]的说法,我们将这种非线性神经元称为修正线性单元(ReLU)。采用ReLU的深度卷积神经网络训练时间比等价的tanh单元要快几倍。在图1中,对于一个特定的四层卷积网络,在CIFAR-10数据集上达到25%的训练误差所需要的迭代次数可以证实这一点。这幅图表明,如果我们采用传统的饱和神经元模型,我们将不能在如此大的神经网络上实验该工作。

    Figure 1

    Figure 1: A four-layer convolutional neural network with ReLUs (solid line) reaches a 25% training error rate on CIFAR-10 six times faster than an equivalent network with tanh neurons (dashed line). The learning rates for each network were chosen independently to make training as fast as possible. No regularization of any kind was employed. The magnitude of the effect demonstrated here varies with network architecture, but networks with ReLUs consistently learn several times faster than equivalents with saturating neurons.

    图1:使用ReLU的四层卷积神经网络在CIFAR-10数据集上达到25%的训练误差比使用tanh神经元的等价网络(虚线)快六倍。为了使训练尽可能快,每个网络的学习率是单独选择的。没有采用任何类型的正则化。影响的大小随着网络结构的变化而变化,这一点已得到证实,但使用ReLU的网络都比等价的饱和神经元快几倍。

    We are not the first to consider alternatives to traditional neuron models in CNNs. For example, Jarrett et al. [11] claim that the nonlinearity f(x) = |tanh(x)| works particularly well with their type of contrast normalization followed by local average pooling on the Caltech-101 dataset. However, on this dataset the primary concern is preventing overfitting, so the effect they are observing is different from the accelerated ability to fit the training set which we report when using ReLUs. Faster learning has a great influence on the performance of large models trained on large datasets.

    我们不是第一个考虑替代CNN中传统神经元模型的人。例如,Jarrett等人[11]声称非线性函数f(x) = |tanh(x)|与其对比度归一化一起,然后是局部均值池化,在Caltech-101数据集上工作的非常好。然而,在这个数据集上主要的关注点是防止过拟合,因此他们观测到的影响不同于我们使用ReLU拟合数据集时的加速能力。更快的学习对大型数据集上大型模型的性能有很大的影响。

    3.2 Training on Multiple GPUs

    A single GTX 580 GPU has only 3GB of memory, which limits the maximum size of the networks that can be trained on it. It turns out that 1.2 million training examples are enough to train networks which are too big to fit on one GPU. Therefore we spread the net across two GPUs. Current GPUs are particularly well-suited to cross-GPU parallelization, as they are able to read from and write to one another’s memory directly, without going through host machine memory. The parallelization scheme that we employ essentially puts half of the kernels (or neurons) on each GPU, with one additional trick: the GPUs communicate only in certain layers. This means that, for example, the kernels of layer 3 take input from all kernel maps in layer 2. However, kernels in layer 4 take input only from those kernel maps in layer 3 which reside on the same GPU. Choosing the pattern of connectivity is a problem for cross-validation, but this allows us to precisely tune the amount of communication until it is an acceptable fraction of the amount of computation.

    3.2 多GPU训练

    单个GTX580 GPU只有3G内存,这限制了可以在GTX580上进行训练的网络最大尺寸。事实证明120万图像用来进行网络训练是足够的,但网络太大因此不能在单个GPU上进行训练。因此我们将网络分布在两个GPU上。目前的GPU非常适合跨GPU并行,因为它们可以直接互相读写内存,而不需要通过主机内存。我们采用的并行方案基本上每个GPU放置一半的核(或神经元),还有一个额外的技巧:只在某些特定的层上进行GPU通信。这意味着,例如,第3层的核会将第2层的所有核映射作为输入。然而,第4层的核只将位于相同GPU上的第3层的核映射作为输入。连接模式的选择是一个交叉验证问题,但这可以让我们准确地调整通信数量,直到它的计算量在可接受的范围内。

    The resultant architecture is somewhat similar to that of the “columnar” CNN employed by Ciresan et al. [5], except that our columns are not independent (see Figure 2). This scheme reduces our top-1 and top-5 error rates by 1.7% and 1.2%, respectively, as compared with a net with half as
    many kernels in each convolutional layer trained on one GPU. The two-GPU net takes slightly less time to train than the one-GPU net.

    除了我们的列不是独立的之外(见图2),最终的架构有点类似于Ciresan等人[5]采用的“columnar” CNN。与每个卷积层只有一半数量的核、且在单GPU上训练的网络相比,这个方案分别降低了top-1 1.7%和top-5 1.2%的错误率。双GPU网络比单GPU网络的训练时间略短。

    Figure 2

    Figure 2: An illustration of the architecture of our CNN, explicitly showing the delineation of responsibilities between the two GPUs. One GPU runs the layer-parts at the top of the figure while the other runs the layer-parts at the bottom. The GPUs communicate only at certain layers. The network’s input is 150,528-dimensional, and the number of neurons in the network’s remaining layers is given by 253,440–186,624–64,896–64,896–43,264– 4096–4096–1000.

    图 2:我们CNN架构图解,明确描述了两个GPU之间的责任。在图的顶部,一个GPU运行在部分层上,而在图的底部,另一个GPU运行在部分层上。GPU只在特定的层进行通信。网络的输入是150,528维,网络剩下层的神经元数目分别是253,440–186,624–64,896–64,896–43,264–4096–4096–1000(8层)。

    3.3 Local Response Normalization

    ReLUs have the desirable property that they do not require input normalization to prevent them from saturating. If at least some training examples produce a positive input to a ReLU, learning will happen in that neuron. However, we still find that the following local normalization scheme aids generalization. Denoting by $a^i_{x,y}$ the activity of a neuron computed by applying kernel $i$ at position $(x, y)$ and then applying the ReLU nonlinearity, the response-normalized activity $b^i_{x,y}$ is given by the expression

    $$b^i_{x,y} = a^i_{x,y} \Big/ \left( k + \alpha \sum_{j=\max(0,\, i-n/2)}^{\min(N-1,\, i+n/2)} \left(a^j_{x,y}\right)^2 \right)^{\beta}$$

    where the sum runs over n “adjacent” kernel maps at the same spatial position, and N is the total number of kernels in the layer. The ordering of the kernel maps is of course arbitrary and determined before training begins. This sort of response normalization implements a form of lateral inhibition inspired by the type found in real neurons, creating competition for big activities amongst neuron outputs computed using different kernels. The constants k, n, α, and β are hyper-parameters whose values are determined using a validation set; we used k = 2, n = 5, α = 0.0001, and β = 0.75. We applied this normalization after applying the ReLU nonlinearity in certain layers (see Section 3.5).

    3.3 局部响应归一化

    ReLU具有让人满意的特性,它不需要通过输入归一化来防止饱和。如果至少一些训练样本对ReLU产生了正输入,那么那个神经元上将发生学习。然而,我们仍然发现下面的局部响应归一化有助于泛化。用 $a^i_{x,y}$ 表示在 $(x, y)$ 位置应用核 $i$、再应用ReLU非线性之后计算得到的神经元激活,响应归一化激活 $b^i_{x,y}$ 由下式给出:

    $$b^i_{x,y} = a^i_{x,y} \Big/ \left( k + \alpha \sum_{j=\max(0,\, i-n/2)}^{\min(N-1,\, i+n/2)} \left(a^j_{x,y}\right)^2 \right)^{\beta}$$

    求和运算在n个“毗邻的”核映射的同一空间位置上执行,N是该层的卷积核总数。核映射的顺序当然是任意的,在训练开始前确定。这种响应归一化实现了一种侧抑制形式,灵感来自于真实神经元中发现的类型,为使用不同核计算的神经元输出之间的较大激活创造了竞争。常量k,n,α,β是超参数,它们的值通过验证集确定;我们使用k=2,n=5,α=0.0001,β=0.75。我们在特定层应用ReLU非线性之后应用了这种归一化(请看3.5小节)。

    This scheme bears some resemblance to the local contrast normalization scheme of Jarrett et al. [11], but ours would be more correctly termed “brightness normalization”, since we do not subtract the mean activity. Response normalization reduces our top-1 and top-5 error rates by 1.4% and 1.2%, respectively. We also verified the effectiveness of this scheme on the CIFAR-10 dataset: a four-layer CNN achieved a 13% test error rate without normalization and 11% with normalization.

    这个方案与Jarrett等人[11]的局部对比度归一化方案有一定的相似性,但我们的方案更恰当的叫法是“亮度归一化”,因为我们没有减去均值。响应归一化分别降低了top-1 1.4%和top-5 1.2%的错误率。我们也在CIFAR-10数据集上验证了这个方案的有效性:一个没有归一化的四层CNN取得了13%的测试错误率,而使用归一化取得了11%的错误率。

    3.4 Overlapping Pooling

    Pooling layers in CNNs summarize the outputs of neighboring groups of neurons in the same kernel map. Traditionally, the neighborhoods summarized by adjacent pooling units do not overlap (e.g., [17, 11, 4]). To be more precise, a pooling layer can be thought of as consisting of a grid of pooling units spaced $s$ pixels apart, each summarizing a neighborhood of size $z \times z$ centered at the location of the pooling unit. If we set $s = z$, we obtain traditional local pooling as commonly employed in CNNs. If we set $s < z$, we obtain overlapping pooling. This is what we use throughout our network, with $s = 2$ and $z = 3$. This scheme reduces the top-1 and top-5 error rates by 0.4% and 0.3%, respectively, as compared with the non-overlapping scheme $s = 2, z = 2$, which produces output of equivalent dimensions. We generally observe during training that models with overlapping pooling find it slightly more difficult to overfit.

    3.4 重叠池化

    CNN中的池化层归纳了同一核映射上相邻组神经元的输出。习惯上,相邻池化单元归纳的区域是不重叠的(例如[17, 11, 4])。更确切地说,池化层可以看作由间距为 $s$ 个像素的池化单元网格组成,每个池化单元归纳以其位置为中心的 $z \times z$ 大小的邻域。如果设置 $s = z$,我们会得到CNN中常用的传统局部池化。如果设置 $s < z$,我们会得到重叠池化。这就是我们网络中使用的方法,设置 $s = 2$,$z = 3$。与产生相同维度输出的非重叠方案 $s = 2, z = 2$ 相比,这个方案分别降低了top-1 0.4%和top-5 0.3%的错误率。我们在训练过程中通常观察到,采用重叠池化的模型稍微更难过拟合。

    3.5 Overall Architecture

    Now we are ready to describe the overall architecture of our CNN. As depicted in Figure 2, the net contains eight layers with weights; the first five are convolutional and the remaining three are fully-connected. The output of the last fully-connected layer is fed to a 1000-way softmax which produces a distribution over the 1000 class labels. Our network maximizes the multinomial logistic regression objective, which is equivalent to maximizing the average across training cases of the log-probability of the correct label under the prediction distribution.

    3.5 整体架构

    现在我们准备描述我们的CNN的整体架构。如图2所示,我们的网络包含8个带权重的层;前5层是卷积层,剩下的3层是全连接层。最后一层全连接层的输出是1000维softmax的输入,softmax会产生1000类标签的分布。我们的网络最大化多项逻辑回归的目标,这等价于最大化预测分布下训练样本正确标签的对数概率的均值。

    The kernels of the second, fourth, and fifth convolutional layers are connected only to those kernel maps in the previous layer which reside on the same GPU (see Figure 2). The kernels of the third convolutional layer are connected to all kernel maps in the second layer. The neurons in the fully-connected layers are connected to all neurons in the previous layer. Response-normalization layers follow the first and second convolutional layers. Max-pooling layers, of the kind described in Section 3.4, follow both response-normalization layers as well as the fifth convolutional layer. The ReLU non-linearity is applied to the output of every convolutional and fully-connected layer.

    第2,4,5卷积层的核只与位于同一GPU上的前一层的核映射相连接(看图2)。第3卷积层的核与第2层的所有核映射相连。全连接层的神经元与前一层的所有神经元相连。第1,2卷积层之后是响应归一化层。3.4节描述的这种最大池化层在响应归一化层和第5卷积层之后。ReLU非线性应用在每个卷积层和全连接层的输出上。

    The first convolutional layer filters the 224 × 224 × 3 input image with 96 kernels of size 11 × 11 × 3 with a stride of 4 pixels (this is the distance between the receptive field centers of neighboring neurons in a kernel map). The second convolutional layer takes as input the (response-normalized and pooled) output of the first convolutional layer and filters it with 256 kernels of size 5 × 5 × 48. The third, fourth, and fifth convolutional layers are connected to one another without any intervening pooling or normalization layers. The third convolutional layer has 384 kernels of size 3 × 3 × 256 connected to the (normalized, pooled) outputs of the second convolutional layer. The fourth convolutional layer has 384 kernels of size 3 × 3 × 192 , and the fifth convolutional layer has 256 kernels of size 3 × 3 × 192. The fully-connected layers have 4096 neurons each.

    第1卷积层使用96个核对224 × 224 × 3的输入图像进行滤波,核大小为11 × 11 × 3,步长是4个像素(核映射中相邻神经元感受野中心之间的距离)。第2卷积层使用用第1卷积层的输出(响应归一化和池化)作为输入,并使用256个核进行滤波,核大小为5 × 5 × 48。第3,4,5卷积层互相连接,中间没有接入池化层或归一化层。第3卷积层有384个核,核大小为3 × 3 × 256,与第2卷积层的输出(归一化的,池化的)相连。第4卷积层有384个核,核大小为3 × 3 × 192,第5卷积层有256个核,核大小为3 × 3 × 192。每个全连接层有4096个神经元。

    4 Reducing Overfitting

    Our neural network architecture has 60 million parameters. Although the 1000 classes of ILSVRC make each training example impose 10 bits of constraint on the mapping from image to label, this turns out to be insufficient to learn so many parameters without considerable overfitting. Below, we describe the two primary ways in which we combat overfitting.

    4 减少过拟合

    我们的神经网络架构有6000万参数。尽管ILSVRC的1000类使每个训练样本从图像到标签的映射上强加了10比特的约束,但这不足以学习这么多的参数而没有相当大的过拟合。下面,我们会描述我们用来克服过拟合的两种主要方式。

    4.1 Data Augmentation

    The easiest and most common method to reduce overfitting on image data is to artificially enlarge the dataset using label-preserving transformations (e.g., [25, 4, 5]). We employ two distinct forms of data augmentation, both of which allow transformed images to be produced from the original images with very little computation, so the transformed images do not need to be stored on disk. In our implementation, the transformed images are generated in Python code on the CPU while the GPU is training on the previous batch of images. So these data augmentation schemes are, in effect, computationally free.

    4.1 数据增强

    图像数据上最简单常用的用来减少过拟合的方法是使用标签保留变换(例如[25, 4, 5])来人工增大数据集。我们使用了两种独特的数据增强方式,这两种方式都可以从原始图像通过非常少的计算量产生变换的图像,因此变换图像不需要存储在硬盘上。在我们的实现中,变换图像通过CPU的Python代码生成,而此时GPU正在训练前一批图像。因此,实际上这些数据增强方案是计算免费的。

    The first form of data augmentation consists of generating image translations and horizontal reflections. We do this by extracting random 224 × 224 patches (and their horizontal reflections) from the 256×256 images and training our network on these extracted patches. This increases the size of our training set by a factor of 2048, though the resulting training examples are, of course, highly interdependent. Without this scheme, our network suffers from substantial overfitting, which would have forced us to use much smaller networks. At test time, the network makes a prediction by extracting five 224 × 224 patches (the four corner patches and the center patch) as well as their horizontal reflections (hence ten patches in all), and averaging the predictions made by the network’s softmax layer on the ten patches.

    第一种数据增强方式包括产生图像变换和水平翻转。我们从256×256图像上通过随机提取224 × 224的图像块实现了这种方式,然后在这些提取的图像块上进行训练。这通过一个2048因子增大了我们的训练集,尽管最终的训练样本是高度相关的。没有这个方案,我们的网络会有大量的过拟合,这会迫使我们使用更小的网络。在测试时,网络会提取5个224 × 224的图像块(四个角上的图像块和中心的图像块)和它们的水平翻转(因此总共10个图像块)进行预测,然后对网络在10个图像块上的softmax层进行平均。

    The second form of data augmentation consists of altering the intensities of the RGB channels in training images. Specifically, we perform PCA on the set of RGB pixel values throughout the ImageNet training set. To each training image, we add multiples of the found principal components, with magnitudes proportional to the corresponding eigenvalues times a random variable drawn from a Gaussian with mean zero and standard deviation 0.1. Therefore to each RGB image pixel $I_{xy} = [I^R_{xy}, I^G_{xy}, I^B_{xy}]^T$ we add the following quantity:

    $$[\mathbf{p}_1, \mathbf{p}_2, \mathbf{p}_3][\alpha_1\lambda_1, \alpha_2\lambda_2, \alpha_3\lambda_3]^T$$

    where $\mathbf{p}_i$ and $\lambda_i$ are the $i$th eigenvector and eigenvalue of the 3 × 3 covariance matrix of RGB pixel values, respectively, and $\alpha_i$ is the aforementioned random variable. Each $\alpha_i$ is drawn only once for all the pixels of a particular training image until that image is used for training again, at which point it is re-drawn. This scheme approximately captures an important property of natural images, namely, that object identity is invariant to changes in the intensity and color of the illumination. This scheme reduces the top-1 error rate by over 1%.

    第二种数据增强方式包括改变训练图像的RGB通道的强度。具体地,我们在整个ImageNet训练集上对RGB像素值集合执行PCA。对于每幅训练图像,我们加上数倍的已求出的主成分,其大小与对应的特征值成正比,并乘以一个随机变量,该随机变量从均值为0、标准差为0.1的高斯分布中抽取。因此对于每个RGB图像像素 $I_{xy} = [I^R_{xy}, I^G_{xy}, I^B_{xy}]^T$,我们加上下面的量:

    $$[\mathbf{p}_1, \mathbf{p}_2, \mathbf{p}_3][\alpha_1\lambda_1, \alpha_2\lambda_2, \alpha_3\lambda_3]^T$$

    其中 $\mathbf{p}_i$ 和 $\lambda_i$ 分别是RGB像素值的3 × 3协方差矩阵的第 $i$ 个特征向量和特征值,$\alpha_i$ 是前面提到的随机变量。对于某幅训练图像的所有像素,每个 $\alpha_i$ 只抽取一次,直到该图像再次参与训练时才重新抽取。这个方案近似地抓住了自然图像的一个重要特性,即目标身份对于光照强度和颜色的变化是不变的。这个方案使top-1错误率降低了1%以上。

    4.2 Dropout

    Combining the predictions of many different models is a very successful way to reduce test errors [1, 3], but it appears to be too expensive for big neural networks that already take several days to train. There is, however, a very efficient version of model combination that only costs about a factor of two during training. The recently-introduced technique, called “dropout” [10], consists of setting to zero the output of each hidden neuron with probability 0.5. The neurons which are “dropped out” in this way do not contribute to the forward pass and do not participate in back-propagation. So every time an input is presented, the neural network samples a different architecture, but all these architectures share weights. This technique reduces complex co-adaptations of neurons, since a neuron cannot rely on the presence of particular other neurons. It is, therefore, forced to learn more robust features that are useful in conjunction with many different random subsets of the other neurons. At test time, we use all the neurons but multiply their outputs by 0.5, which is a reasonable approximation to taking the geometric mean of the predictive distributions produced by the exponentially-many dropout networks.

    4.2 失活(Dropout)

    将许多不同模型的预测结合起来是降低测试误差[1, 3]的一个非常成功的方法,但对于需要花费几天来训练的大型神经网络来说,这似乎太昂贵了。然而,有一个非常有效的模型结合版本,它只花费两倍的训练成本。这种最近引入的技术,叫做“dropout”[10],它会以0.5的概率对每个隐层神经元的输出设为0。那些“失活的”的神经元不再进行前向传播并且不参与反向传播。因此每次输入时,神经网络会采样一个不同的架构,但所有架构共享权重。这个技术减少了复杂的神经元互适应,因为一个神经元不能依赖特定的其它神经元的存在。因此,神经元被强迫学习更鲁棒的特征,它在与许多不同的其它神经元的随机子集结合时是有用的。在测试时,我们使用所有的神经元但它们的输出乘以0.5,对指数级的许多失活网络的预测分布进行几何平均,这是一种合理的近似。

    We use dropout in the first two fully-connected layers of Figure 2. Without dropout, our network exhibits substantial overfitting. Dropout roughly doubles the number of iterations required to converge.

    我们在图2中的前两个全连接层使用失活。如果没有失活,我们的网络表现出大量的过拟合。失活大致上使要求收敛的迭代次数翻了一倍。

    5 Details of learning

    We trained our models using stochastic gradient descent with a batch size of 128 examples, momentum of 0.9, and weight decay of 0.0005. We found that this small amount of weight decay was important for the model to learn. In other words, weight decay here is not merely a regularizer: it reduces the model’s training error. The update rule for weight $w$ was

    $$v_{i+1} := 0.9 \cdot v_i - 0.0005 \cdot \varepsilon \cdot w_i - \varepsilon \cdot \left\langle \frac{\partial L}{\partial w} \Big|_{w_i} \right\rangle_{D_i}$$

    where $i$ is the iteration index, $v$ is the momentum variable, $\varepsilon$ is the learning rate, and $\left\langle \frac{\partial L}{\partial w} \big|_{w_i} \right\rangle_{D_i}$ is the average over the $i$th batch $D_i$ of the derivative of the objective with respect to $w$, evaluated at $w_i$.

    5 学习细节

    我们使用随机梯度下降来训练我们的模型,batch size为128,动量为0.9,权重衰减为0.0005。我们发现这种少量的权重衰减对于模型的学习是重要的。换句话说,这里的权重衰减不仅仅是一个正则项:它减少了模型的训练误差。权重 $w$ 的更新规则是

    $$v_{i+1} := 0.9 \cdot v_i - 0.0005 \cdot \varepsilon \cdot w_i - \varepsilon \cdot \left\langle \frac{\partial L}{\partial w} \Big|_{w_i} \right\rangle_{D_i}$$

    其中 $i$ 是迭代索引,$v$ 是动量变量,$\varepsilon$ 是学习率,$\left\langle \frac{\partial L}{\partial w} \big|_{w_i} \right\rangle_{D_i}$ 是目标函数关于 $w$ 的导数在第 $i$ 批样本 $D_i$ 上、在 $w_i$ 处求值的平均。

    We initialized the weights in each layer from a zero-mean Gaussian distribution with standard deviation 0.01. We initialized the neuron biases in the second, fourth, and fifth convolutional layers, as well as in the fully-connected hidden layers, with the constant 1. This initialization accelerates the early stages of learning by providing the ReLUs with positive inputs. We initialized the neuron biases in the remaining layers with the constant 0.

    我们使用均值为0,标准差为0.01的高斯分布对每一层的权重进行初始化。我们在第2,4,5卷积层和全连接隐层将神经元偏置初始化为常量1。这个初始化通过为ReLU提供正输入加速了学习的早期阶段。我们在剩下的层将神经元偏置初始化为0。

    We used an equal learning rate for all layers, which we adjusted manually throughout training. The heuristic which we followed was to divide the learning rate by 10 when the validation error rate stopped improving with the current learning rate. The learning rate was initialized at 0.01 and reduced three times prior to termination. We trained the network for roughly 90 cycles through the training set of 1.2 million images, which took five to six days on two NVIDIA GTX 580 3GB GPUs.

    我们对所有的层使用相等的学习率,这个是在整个训练过程中我们手动调整得到的。当验证误差在当前的学习率下停止提供时,我们遵循启发式的方法将学习率除以10。学习率初始化为0.01,在训练停止之前降低三次。我们在120万图像的训练数据集上训练神经网络大约90个循环,在两个NVIDIA GTX 580 3GB GPU上花费了五到六天。

    6 Results

    Our results on ILSVRC-2010 are summarized in Table 1. Our network achieves top-1 and top-5 test set error rates of 37.5% and 17.0%. The best performance achieved during the ILSVRC-2010 competition was 47.1% and 28.2% with an approach that averages the predictions produced from six sparse-coding models trained on different features [2], and since then the best published results are 45.7% and 25.7% with an approach that averages the predictions of two classifiers trained on Fisher Vectors (FVs) computed from two types of densely-sampled features [24].

    Table 1

    Table 1: Comparison of results on ILSVRC-2010 test set.In italics are best results achieved by others.

    6 结果

    我们在ILSVRC-2010上的结果概括为表1。我们的神经网络取得了top-1 37.5%top-5 17.0%的错误率。在ILSVRC-2010竞赛中最佳结果是top-1 47.1%top-5 28.2%,使用的方法是对6个在不同特征上训练的稀疏编码模型生成的预测进行平均,从那时起已公布的最好结果是top-1 45.7%top-5 25.7%,使用的方法是平均在Fisher向量(FV)上训练的两个分类器的预测结果,Fisher向量是通过两种密集采样特征计算得到的[24]。

    表1

    表1:ILSVRC-2010测试集上的结果对比。斜体是其它人取得的最好结果。

    We also entered our model in the ILSVRC-2012 competition and report our results in Table 2. Since the ILSVRC-2012 test set labels are not publicly available, we cannot report test error rates for all the models that we tried. In the remainder of this paragraph, we use validation and test error rates interchangeably because in our experience they do not differ by more than 0.1% (see Table 2). The CNN described in this paper achieves a top-5 error rate of 18.2%. Averaging the predictions of five similar CNNs gives an error rate of 16.4%. Training one CNN, with an extra sixth convolutional layer over the last pooling layer, to classify the entire ImageNet Fall 2011 release (15M images, 22K categories), and then “fine-tuning” it on ILSVRC-2012 gives an error rate of 16.6%. Averaging the predictions of two CNNs that were pre-trained on the entire Fall 2011 release with the aforementioned five CNNs gives an error rate of 15.3%. The second-best contest entry achieved an error rate of 26.2% with an approach that averages the predictions of several classifiers trained on FVs computed from different types of densely-sampled features [7].

    Table 2

    Table 2: Comparison of error rates on ILSVRC-2012 validation and test sets. In italics are best results achieved by others. Models with an asterisk were “pre-trained” to classify the entire ImageNet 2011 Fall release. See Section 6 for details.

    我们也用我们的模型参加了ILSVRC-2012竞赛,并在表2中报告了我们的结果。由于ILSVRC-2012的测试集标签没有公开,我们不能报告我们尝试的所有模型的测试错误率。在本段的其余部分,我们将验证误差率和测试误差率互换使用,因为根据我们的经验它们的差别不会超过0.1%(见表2)。本文中描述的CNN取得了top-5 18.2%的错误率。对五个类似CNN的预测求平均,错误率为16.4%。我们训练了一个在最后的池化层之后带有额外的第6个卷积层的CNN,先对ImageNet 2011秋季发布的整个数据集(1500万图像,22000个类别)进行分类训练,再在ILSVRC-2012上进行“fine-tuning”,得到了16.6%的错误率。对两个在整个2011秋季数据集上预训练的CNN与前述五个CNN的预测求平均,错误率为15.3%。第二名的参赛方法取得了26.2%的错误率,其方法是对在FV(由不同类型的密集采样特征计算得到)上训练的一些分类器的预测结果求平均[7]。

    表2

    表2:ILSVRC-2012验证集和测试集上的错误率对比。斜体是其它人取得的最好结果。带星号的模型是为了对ImageNet 2011秋季发布的整个数据集进行分类而“预训练”的。更多细节请看第六节。

    Finally, we also report our error rates on the Fall 2009 version of ImageNet with 10,184 categories and 8.9 million images. On this dataset we follow the convention in the literature of using half of the images for training and half for testing. Since there is no established test set, our split necessarily differs from the splits used by previous authors, but this does not affect the results appreciably. Our top-1 and top-5 error rates on this dataset are 67.4% and 40.9%, attained by the net described above but with an additional, sixth convolutional layer over the last pooling layer. The best published results on this dataset are 78.1% and 60.9% [19].

    最后,我们也报告了我们在ImageNet 2009秋季数据集上的误差率,ImageNet 2009秋季数据集有10,184个类,890万图像。在这个数据集上我们按照惯例用一半的图像来训练,一半的图像来测试。由于没有建立测试集,我们的数据集分割有必要不同于以前作者的数据集分割,但这对结果没有明显的影响。我们在这个数据集上的的top-1和top-5错误率是67.4%和40.9%,使用的是上面描述的在最后的池化层之后有一个额外的第6卷积层网络。这个数据集上公开可获得的最好结果是78.1%和60.9%[19]。

    6.1 Qualitative Evaluations

    Figure 3 shows the convolutional kernels learned by the network’s two data-connected layers. The network has learned a variety of frequency and orientation-selective kernels, as well as various colored blobs. Notice the specialization exhibited by the two GPUs, a result of the restricted connectivity described in Section 3.5. The kernels on GPU 1 are largely color-agnostic, while the kernels on on GPU 2 are largely color-specific. This kind of specialization occurs during every run and is independent of any particular random weight initialization (modulo a renumbering of the GPUs).

    Figure 3

    Figure 3: 96 convolutional kernels of size 11×11×3 learned by the first convolutional layer on the 224×224×3 input images. The top 48 kernels were learned on GPU 1 while the bottom 48 kernels were learned on GPU 2. See Section 6.1 for details.

    6.1 定性评估

    图3显示了网络的两个数据连接层学习到的卷积核。网络学习到了大量的频率核、方向选择核,也学到了各种颜色点。注意两个GPU表现出的专业化,3.5小节中描述的受限连接的结果。GPU 1上的核主要是没有颜色的,而GPU 2上的核主要是针对颜色的。这种专业化在每次运行时都会发生,并且是与任何特别的随机权重初始化(以GPU的重新编号为模)无关的。

    Figure 3

    图3:第一卷积层在224×224×3的输入图像上学习到的大小为11×11×3的96个卷积核。上面的48个核是在GPU 1上学习到的而下面的48个卷积核是在GPU 2上学习到的。更多细节请看6.1小节。

    In the left panel of Figure 4 we qualitatively assess what the network has learned by computing its top-5 predictions on eight test images. Notice that even off-center objects, such as the mite in the top-left, can be recognized by the net. Most of the top-5 labels appear reasonable. For example, only other types of cat are considered plausible labels for the leopard. In some cases (grille, cherry) there is genuine ambiguity about the intended focus of the photograph.

    Figure 4

    Figure 4: (Left) Eight ILSVRC-2010 test images and the five labels considered most probable by our model. The correct label is written under each image, and the probability assigned to the correct label is also shown with a red bar (if it happens to be in the top 5). (Right) Five ILSVRC-2010 test images in the first column. The remaining columns show the six training images that produce feature vectors in the last hidden layer with the smallest Euclidean distance from the feature vector for the test image.

    在图4的左边部分,我们通过在8张测试图像上计算它的top-5预测定性地评估了网络学习到的东西。注意即使是不在图像中心的目标也能被网络识别,例如左上角的小虫。大多数的top-5标签似乎是合理的。例如,对于美洲豹来说,只有其它类型的猫被认为是看似合理的标签。在某些案例(格栅,樱桃)中,网络在意的图片焦点真的很含糊。

    Figure 4

    图4:(左)8张ILSVRC-2010测试图像和我们的模型认为最可能的5个标签。每张图像的下面是它的正确标签,正确标签的概率用红条表示(如果正确标签在top 5中)。(右)第一列是5张ILSVRC-2010测试图像。剩下的列展示了6张训练图像,这些图像在最后的隐藏层的特征向量与测试图像的特征向量有最小的欧氏距离。

    Another way to probe the network’s visual knowledge is to consider the feature activations induced by an image at the last, 4096-dimensional hidden layer. If two images produce feature activation vectors with a small Euclidean separation, we can say that the higher levels of the neural network consider them to be similar. Figure 4 shows five images from the test set and the six images from the training set that are most similar to each of them according to this measure. Notice that at the pixel level, the retrieved training images are generally not close in L2 to the query images in the first column. For example, the retrieved dogs and elephants appear in a variety of poses. We present the results for many more test images in the supplementary material.

    探究网络视觉知识的另一种方式,是考察一幅图像在最后的4096维隐藏层上引起的特征激活。如果两幅图像产生的特征激活向量之间的欧氏距离很小,我们就可以说神经网络的较高层认为它们是相似的。图4(右)展示了测试集中的5张图像,以及按照这一度量与它们每一张最相似的6张训练图像。注意在像素层面上,检索到的训练图像与第一列的查询图像在L2距离上通常并不接近。例如,检索到的狗和大象以各种不同的姿态出现。我们在补充材料中给出了更多测试图像的这类结果。
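
    下面用一个示意性的NumPy片段(非论文作者代码)演示上文描述的检索逻辑:把最后一个4096维隐藏层的特征激活当作图像表示,对每张查询图像按欧氏距离取最近的6个训练样本;特征矩阵均为随机占位数据。

import numpy as np

rng = np.random.default_rng(0)
train_feats = rng.standard_normal((10000, 4096))   # 训练集图像的4096维特征(占位)
test_feats = rng.standard_normal((5, 4096))        # 5张测试图像的特征(占位)

# ||a-b||^2 = ||a||^2 + ||b||^2 - 2*a·b,逐对计算欧氏距离的平方
d2 = (np.sum(test_feats ** 2, axis=1, keepdims=True)
      + np.sum(train_feats ** 2, axis=1)
      - 2.0 * test_feats @ train_feats.T)

nearest6 = np.argsort(d2, axis=1)[:, :6]   # 每张测试图像最相似的6个训练样本下标
print(nearest6)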

    Computing similarity by using Euclidean distance between two 4096-dimensional, real-valued vectors is inefficient, but it could be made efficient by training an auto-encoder to compress these vectors to short binary codes. This should produce a much better image retrieval method than applying auto-encoders to the raw pixels [14], which does not make use of image labels and hence has a tendency to retrieve images with similar patterns of edges, whether or not they are semantically similar.

    通过计算两个4096维实值向量之间的欧氏距离来衡量相似性效率不高,但可以通过训练一个自动编码器把这些向量压缩成短的二值编码来提高效率。这应该会得到一种比直接对原始像素使用自动编码器[14]更好的图像检索方法:后者没有利用图像标签,因此倾向于检索出边缘模式相似的图像,而不管它们在语义上是否相似。
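
    作为补充,下面的片段只用来直观说明“短二值编码 + 汉明距离”为什么检索起来便宜:这里用随机超平面投影取符号来生成256位编码,是一个简化的替代做法,并不是论文设想的自动编码器方案;特征向量同样是随机占位数据。

import numpy as np

rng = np.random.default_rng(0)
feats = rng.standard_normal((10000, 4096))      # 假设的4096维特征库(占位)
query = rng.standard_normal(4096)               # 假设的查询图像特征(占位)

proj = rng.standard_normal((4096, 256))         # 随机超平面,仅作演示
codes = (feats @ proj > 0).astype(np.uint8)     # 每幅图像得到256位二值编码
qcode = (query @ proj > 0).astype(np.uint8)

hamming = np.count_nonzero(codes != qcode, axis=1)   # 汉明距离的比较远比4096维实值向量便宜
print("最相似的6个下标:", np.argsort(hamming)[:6])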

    7 Discussion

    Our results show that a large, deep convolutional neural network is capable of achieving record-breaking results on a highly challenging dataset using purely supervised learning. It is notable that our network’s performance degrades if a single convolutional layer is removed. For example, removing any of the middle layers results in a loss of about 2% for the top-1 performance of the network. So the depth really is important for achieving our results.

    7 探讨

    我们的结果表明一个大型深度卷积神经网络在一个具有高度挑战性的数据集上使用纯有监督学习可以取得破纪录的结果。值得注意的是,如果移除一个卷积层,我们的网络性能会降低。例如,移除任何中间层都会引起网络损失大约2%的top-1性能。因此深度对于实现我们的结果非常重要。

    To simplify our experiments, we did not use any unsupervised pre-training even though we expect that it will help, especially if we obtain enough computational power to significantly increase the size of the network without obtaining a corresponding increase in the amount of labeled data. Thus far, our results have improved as we have made our network larger and trained it longer but we still have many orders of magnitude to go in order to match the infero-temporal pathway of the human visual system. Ultimately we would like to use very large and deep convolutional nets on video sequences where the temporal structure provides very helpful information that is missing or far less obvious in static images.

    为了简化实验,我们没有使用任何无监督预训练,尽管我们预计它会有所帮助,特别是当我们获得了足够的计算能力、可以显著增大网络规模而标注数据量却没有相应增加时。到目前为止,随着网络变得更大、训练时间更长,我们的结果一直在提升,但要达到人类视觉系统下颞叶通路(infero-temporal pathway)的水平,我们仍然差着许多个数量级。最后,我们希望在视频序列上使用非常大的深度卷积网络,因为视频的时序结构能提供非常有用的信息,而这些信息在静态图像中是缺失的或远不明显的。

    References

    [1] R.M. Bell and Y. Koren. Lessons from the netflix prize challenge. ACM SIGKDD Explorations Newsletter, 9(2):75–79, 2007.

    [2] A. Berg, J. Deng, and L. Fei-Fei. Large scale visual recognition challenge 2010. www.imagenet.org/challenges. 2010.

    [3] L. Breiman. Random forests. Machine learning, 45(1):5–32, 2001.

    [4] D. Cireşan, U. Meier, and J. Schmidhuber. Multi-column deep neural networks for image classification. Arxiv preprint arXiv:1202.2745, 2012.

    [5] D.C. Cireşan, U. Meier, J. Masci, L.M. Gambardella, and J. Schmidhuber. High-performance neural networks for visual object classification. Arxiv preprint arXiv:1102.0183, 2011.

    [6] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, 2009.

    [7] J. Deng, A. Berg, S. Satheesh, H. Su, A. Khosla, and L. Fei-Fei. ILSVRC-2012, 2012. URL http://www.image-net.org/challenges/LSVRC/2012/.

    [8] L. Fei-Fei, R. Fergus, and P. Perona. Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories. Computer Vision and Image Understanding, 106(1):59–70, 2007.

    [9] G. Griffin, A. Holub, and P. Perona. Caltech-256 object category dataset. Technical Report 7694, California Institute of Technology, 2007. URL http://authors.library.caltech.edu/7694.

    [10] G.E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R.R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.

    [11] K. Jarrett, K. Kavukcuoglu, M. A. Ranzato, and Y. LeCun. What is the best multi-stage architecture for object recognition? In International Conference on Computer Vision, pages 2146–2153. IEEE, 2009.

    [12] A. Krizhevsky. Learning multiple layers of features from tiny images. Master’s thesis, Department of Computer Science, University of Toronto, 2009.

    [13] A. Krizhevsky. Convolutional deep belief networks on cifar-10. Unpublished manuscript, 2010.

    [14] A. Krizhevsky and G.E. Hinton. Using very deep autoencoders for content-based image retrieval. In ESANN, 2011.

    [15] Y. Le Cun, B. Boser, J.S. Denker, D. Henderson, R.E. Howard, W. Hubbard, L.D. Jackel, et al. Handwritten digit recognition with a back-propagation network. In Advances in neural information processing systems, 1990.

    [16] Y. LeCun, F.J. Huang, and L. Bottou. Learning methods for generic object recognition with invariance to pose and lighting. In Computer Vision and Pattern Recognition, 2004. CVPR 2004. Proceedings of the 2004 IEEE Computer Society Conference on, volume 2, pages II–97. IEEE, 2004.

    [17] Y. LeCun, K. Kavukcuoglu, and C. Farabet. Convolutional networks and applications in vision. In Circuits and Systems (ISCAS), Proceedings of 2010 IEEE International Symposium on, pages 253–256. IEEE, 2010.

    [18] H. Lee, R. Grosse, R. Ranganath, and A.Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 609–616. ACM, 2009.

    [19] T. Mensink, J. Verbeek, F. Perronnin, and G. Csurka. Metric Learning for Large Scale Image Classification: Generalizing to New Classes at Near-Zero Cost. In ECCV - European Conference on Computer Vision, Florence, Italy, October 2012.

    [20] V. Nair and G. E. Hinton. Rectified linear units improve restricted boltzmann machines. In Proc. 27th International Conference on Machine Learning, 2010.

    [21] N. Pinto, D.D. Cox, and J.J. DiCarlo. Why is real-world visual object recognition hard? PLoS computational biology, 4(1):e27, 2008.

    [22] N. Pinto, D. Doukhan, J.J. DiCarlo, and D.D. Cox. A high-throughput screening approach to discovering good forms of biologically inspired visual representation. PLoS computational biology, 5(11):e1000579,2009.

    [23] B.C. Russell, A. Torralba, K.P. Murphy, and W.T. Freeman. Labelme: a database and web-based tool for image annotation. International journal of computer vision, 77(1):157–173, 2008.

    [24] J. Sánchez and F. Perronnin. High-dimensional signature compression for large-scale image classification. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 1665–1672. IEEE, 2011.

    [25] P.Y. Simard, D. Steinkraus, and J.C. Platt. Best practices for convolutional neural networks applied to visual document analysis. In Proceedings of the Seventh International Conference on Document Analysis and Recognition, volume 2, pages 958–962, 2003.

    [26] S.C. Turaga, J.F. Murray, V. Jain, F. Roth, M. Helmstaedter, K. Briggman, W. Denk, and H.S. Seung. Convolutional networks can learn to generate affinity graphs for image segmentation. Neural Computation, 22(2):511–538, 2010.

  • NVIDIA之AI Course:Getting Started with AI on Jetson Nano—Class notes(二)

    NVIDIA之AI Course:Getting Started with AI on Jetson Nano—Class notes(二)

    Notice
    The original text comes from NVIDIA-AI Course. This article only provides Chinese translation.

     

    Setting up your Jetson Nano

    正在更新……

    Introduction

            The  NVIDIA® Jetson Nano™ Developer Kit is a small AI computer for makers, learners, and developers. After following along with this brief guide, you’ll be ready to start building practical AI applications, cool AI robots, and more.
           NVIDIA®Jetson Nano™Developer Kit是一款面向制造商、学习者和开发人员的小型人工智能计算机。在遵循了这个简短的指南之后,您将准备好开始构建实际的AI应用程序、酷的AI机器人等等。

         Initially, a computer with Internet connection and the ability to flash your microSD card is also required. Downloading the image may take a considerable amount of time, depending on your internet speed. You may wish to begin downloading the NVIDIA DLI Jetson Nano SD Card Image immediately so that it can work in the background while you review the rest of the setup information.
         首先,您还需要一台能联网、并且能够烧录microSD卡的计算机。下载镜像可能需要相当长的时间,具体取决于您的网速。您可以立即开始下载NVIDIA DLI Jetson Nano SD卡镜像,让下载在后台进行,同时继续阅读其余的设置信息。

     

    Included In The Box

    Your Jetson Nano Developer Kit box includes:   您的Jetson Nano开发工具包包括

    • Jetson Nano Developer Kit   Jetson Nano开发工具包
    • Small paper card with quick start and support information.  小纸卡具有快速启动和支持信息
    • Folded paper stand   折叠纸支架

     

    For This Course, You’ll Also Need:  这门课,你还需要

    • microSD Memory Card (32GB UHS-I minimum),microSD存储卡(最小32GB UHS-I)
    • 5V 4A Power Supply with 2.1mm DC barrel connector,5V 4A电源,带2.1mm直流筒形接头
    • 2-pin Jumper                 2针跨接器
    • Compatible camera:      兼容的相机:
      • Logitech C270 USB Webcam,Logitech C270 USB网络摄像头
      • Raspberry Pi Camera Module v2 with CSI camera connector ,Raspberry Pi相机模块v2与CSI相机连接器
    • USB cable (Micro-B to Type-A)   USB线(微型b至a型)

     

    Prepare For Setup 准备安装

    Items For Getting Started 入门项目

    MicroSD Card

         The Jetson Nano Developer Kit uses a microSD card as a boot device and for main storage. It’s important to have a card that’s fast and large enough for your projects; the minimum recommended for this course is a 32GB UHS-I card.
          Jetson Nano开发工具包使用microSD卡作为引导设备和主存储。对于你的项目来说,拥有一张足够快和足够大的卡片是很重要的;本课程推荐的最小容量是32GB的UHS-I卡

         See the instructions in the "Write Image to the microSD Card" lesson that follows to flash your microSD card according to the type of computer you are using: Windows, Mac, or Linux.
         请参阅后面“将镜像写入microSD卡”一课中的说明,根据您使用的计算机类型(Windows、Mac或Linux)来烧录microSD卡。

    5V 4A Power Supply With 2.1mm DC Barrel Connector 5V 4A电源,带2.1mm直流筒形接头

         For this course, the 5V 4A DC barrel jack power supply is required. Although it is possible to power the Jetson Nano with a smaller microUSB supply, this is not robust enough for the high GPU compute load we require for our projects. In addition, you will need the microUSB port available as a direct connection to your computer for this course.
         本课程需要使用带筒形插孔(barrel jack)的5V 4A直流电源。虽然可以用更小的microUSB电源为Jetson Nano供电,但对于我们项目所需的高GPU计算负载来说不够稳定可靠。此外,本课程还需要把microUSB端口空出来,用作与计算机的直接连接。

         The barrel jack must be 5.5mm OD x 2.1mm ID x 9.5mm length, center-positive. As an example of a good power supply, NVIDIA has validated Adafruit’s 5V 4A (4000mA) switching power supply - UL Listed.
          筒形插头必须是外径5.5mm、内径2.1mm、长度9.5mm,中心为正极。作为良好电源的一个例子,NVIDIA已经验证过Adafruit的5V 4A(4000mA)开关电源(已通过UL认证)。

    2-Pin Jumper

         To specify use of the barrel-type power supply on the Jetson Nano Developer Kit, a 2-pin jumper is required. This is an inexpensive item available at many outlets.
         要在Jetson Nano开发工具包上指定桶型电源的使用,需要一个2针跳线。这是许多商店都能买到的廉价商品。

    Logitech C270 USB Webcam网络摄像头

         You'll need a camera to capture images in the course projects. As an example of a compatible camera, NVIDIA has verified that the Logitech C270 USB Webcam works with these projects. The ability to position the camera easily for capturing images hands-free makes this a great choice. Some other USB webcams may also work with the projects. If you already have one on hand, you could test it as an alternative.
         在课程项目中,你需要一个摄像头来采集图像。作为兼容摄像头的一个例子,NVIDIA已经验证Logitech C270 USB网络摄像头可以用于这些项目。它便于摆放位置、可以解放双手地采集图像,因此是一个很好的选择。其他一些USB网络摄像头也可能适用于这些项目;如果您手头已经有一个,可以测试它作为替代。

         The Jetson Nano Developer Kit includes a MIPI Camera Serial Interface (CSI) port. As an example of a compatible camera, NVIDIA has verified that the Raspberry Pi Camera Module v2 - 8 Megapixel, 1080p works with these projects.
         Jetson Nano开发工具包包括一个MIPI摄像机串行接口(CSI)端口。作为兼容相机的一个例子,NVIDIA已经验证了Raspberry Pi相机模块v2 - 800万像素,1080p可以与这些项目兼容。

         Instructions to test both USB and CSI cameras are provided in the Hello Camera lesson.
          在Hello Camera教程中提供了测试USB和CSI摄像机的说明。

    USB Cable (Micro-B To Type-A)  USB线(微型b至a型)

         You'll also need a Micro USB to USB-A cable to directly connect your computer to the Jetson Nano Developer Kit's Micro USB port. The cable must be capable of data transfers, rather than only designed to power a device. This is a common cable available at many outlets if you don't already have one on hand.
         您还需要一根Micro USB转USB-A的数据线,把您的计算机直接连接到Jetson Nano开发工具包的Micro USB端口。这根线必须能够传输数据,而不是只能给设备供电。如果您手头没有,这种数据线在很多商店都能买到。

     

    Write Image To The MicroSD Card

         To prepare your microSD card, you’ll need a computer with Internet connection and the ability to read and write SD cards, either via a built-in SD card slot or adapter.
        要准备你的microSD卡,你需要一台可以上网的电脑,并且能够读写SD卡,可以通过内置的SD卡插槽或适配器。
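
    (补充说明,非课程原文)烧录之前,可以先用下面这段Python粗略确认下载的镜像文件是否完整,例如计算它的SHA-256摘要,并与下载页面给出的校验值(如果提供)对比。其中的文件名只是示意,请替换为你实际下载到的文件路径。

import hashlib
from pathlib import Path

image_path = Path("~/Downloads/dli-nano-sd-card-image.zip").expanduser()  # 示意路径,请替换

sha256 = hashlib.sha256()
with image_path.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):   # 按1MB分块读取,避免占用过多内存
        sha256.update(chunk)

print(image_path.name, sha256.hexdigest())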

    1. Download the NVIDIA DLI Jetson Nano SD Card Image , and note where it was saved on the computer. 下载NVIDIA DLI Jetson Nano SD卡镜像,并记下它保存在计算机上的位置。
    2. Write the image to your microSD card by following the instructions below according to the type of computer you are using: Windows, Mac, or Linux.   根据您正在使用的计算机类型(Windows、Mac或Linux),按照下面的说明将镜像写入microSD卡。

    INSTRUCTIONS FOR WINDOWS

           Format your microSD card using SD Memory Card Formatter from the SD Association.
          使用SD协会的SD存储卡格式化程序格式化microSD卡。

    1. Download, install, and launch SD Memory Card Formatter for Windows.   下载、安装和启动Windows SD存储卡格式化程序。
    2. Select card drive   选择卡驱动
    3. Select “Quick format”    选择“快速格式化”
    4. Leave “Volume label” blank   将“卷标”留空
    5. Click “Format” to start formatting, and “Yes” on the warning dialog    点击“格式化”开始格式化,在警告对话框中点击“是”

    Use Etcher to write the Jetson Nano Developer Kit SD Card Image to your microSD card
    使用Etcher 将Jetson Nano开发人员工具包SD卡映像写入microSD卡

    1. Download, install, and launch Etcher.    下载、安装和启动Etcher。
    2. Click “Select image” and choose the zipped image file downloaded earlier.   点击“Select image”,选择先前下载的压缩镜像文件。
    3. Insert your microSD card if not already inserted.    插入你的microSD卡,如果还没有插入。
      Click Cancel (per this explanation) if Windows prompts you with a dialog like this:    如果Windows提示您如下对话框,请单击Cancel(按此解释):
    4. Click “Select drive” and choose the correct device.    点击“选择驱动器”,选择正确的设备。
    5. Click “Flash!” It will take Etcher about 10 minutes to write and validate the image if your microSD card is connected via USB3.   点击“Flash!”。如果你的microSD卡是通过USB3连接的,Etcher大约需要10分钟来写入并校验镜像。
    6. After Etcher finishes, Windows may let you know it doesn’t know how to read the SD Card. Just click Cancel and remove the microSD card.   Etcher完成后,Windows可能会提示它无法读取这张SD卡。只需点击“取消”,然后取出microSD卡即可。


    INSTRUCTIONS FOR MAC

    更多参考……

    INSTRUCTIONS FOR LINUX

    更多参考……

     

    Setup And First Boot 安装和第一次引导

    Headless Device Mode

          For this course, we are running the Jetson Nano Developer Kit in a "headless" configuration. That means you do not hook up a monitor directly to the Jetson Nano Developer Kit. This method conserves memory resources on the Jetson Nano and has the added benefit of eliminating the requirement for extra hardware, i.e. a monitor, keyboard, and mouse.
          在本课程中,我们将以“headless”配置运行Jetson Nano开发人员工具包。这意味着您不需要将监视器直接连接到Jetson Nano开发人员工具包。这种方法节省了Jetson Nano上的内存资源,而且还有一个额外的好处,那就是不需要额外的硬件,比如显示器、键盘和鼠标。

          In addition, we will further simplify the configuration by using "USB Device Mode". In this mode, your Jetson Nano Developer Kit connects directly to your computer through a USB cable. This eliminates the need for a network connection on the Jetson Nano, as well as the need to determine the IP address on your network. It is always 192.168.55.1:8888 in this mode.
          此外,我们还将使用“USB设备模式”进一步简化配置。在这种模式下,您的Jetson Nano开发工具包通过USB电缆直接连接到您的计算机。这样就不需要在Jetson Nano上建立网络连接,也不需要确定网络上的IP地址。在这种模式下,地址一直是192.168.55.1:8888。
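
    (补充示例,非课程原文)在USB设备模式下,可以先在自己的电脑上用几行Python确认Jetson Nano上的JupyterLab服务是否已经能在192.168.55.1:8888访问;登录仍需在浏览器中完成。

import requests

url = "http://192.168.55.1:8888"
try:
    resp = requests.get(url, timeout=5)          # 仅做连通性检查
    print("JupyterLab可访问,HTTP状态码:", resp.status_code)
except requests.exceptions.RequestException as exc:
    print("无法访问", url, "-", exc)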

          In the steps that follow, you will boot the Jetson Nano in the minimum configuration (without a camera) to make sure it boots correctly from the microSD card you flashed with the DLI course image.
          在接下来的步骤中,您将以最小配置(不接摄像头)启动Jetson Nano,以确保它能从您用DLI课程镜像烧录好的microSD卡正确启动。

    Setup Steps

    1. Unfold the paper stand and place inside the developer kit box.    展开纸支架,放入开发工具包的包装盒内。
    2. Set the developer kit on top of the paper stand.    将开发工具包放在纸架上。
    3. Insert the microSD card (with system image already written to it) into the slot on the underside of the Jetson Nano module.将microSD卡(已写入系统映像)插入Jetson Nano模块下方的插槽中。
    4. Insert the 2-pin jumper across the 2-pin connector, J48, located next to the MIPI CSI camera connector. This enables the DC barrel power supply.    将2针跳线插到位于MIPI CSI摄像头连接器旁边的2针连接器J48上。这样即可启用直流筒形插孔供电。
    5. Connect your DC barrel jack power supply (5V/4A). The Jetson Nano Developer Kit will power on and boot automatically.连接你的直流筒形插孔电源(5V/4A)。Jetson Nano开发工具包将自动上电并启动。
    6. A green LED next to the Micro-USB connector will light as soon as the developer kit powers on. Wait about 30 seconds. Then connect the USB cable from the Micro USB port on the Jetson Nano Developer Kit to the USB port on your computer.开发工具包一开机,微型usb连接器旁边的绿色LED灯就会亮起来。等大约30秒。然后将USB电缆从Jetson Nano开发工具包上的微型USB端口连接到计算机上的USB端口。

    Logging Into The JupyterLab Server  登录到JupyterLab服务器

    1. Open the following link address : 192.168.55.1:8888
      The JupyterLab server running on the Jetson Nano will open up with a login prompt the first time.
      打开以下链接地址:192.168.55.1:8888
      运行在Jetson Nano上的JupyterLab服务器将在第一次打开时显示一个登录提示。
    2. Enter the password: dlinano  输入密码:dlinano

    You will see this screen. Congratulations!

     

    Headless Device Mode Setup For Jetson Nano Demonstration

    Troubleshooting  故障排除

    • The LED does not light up when the DC barrel jack power supply is connected.
      • Check to be sure you have shorted the two pins on the J48 header with a jumper.
    • The LED lights up, but I cannot access the JupyterLab server from my browser.
      • Try a different browser.
    • I cannot access the JupyterLab server from any browser.
      • Check your computer to see if any new USB devices are recognized when plugging in the USB cable to your computer
      • If on Windows, check “Device Manager” to see if any new device was added.
    • My computer does not recognize Jetson Nano when connected.
      • Check your USB cable to see if it is enabled for data transfer.
      • You can test it by connecting the Micro-B end of the USB cable to some other USB peripheral such as tablet, Kindle, or other device that communicates over a USB Micro-B port.
    • My USB cable seems good, but my computer does not recognize Jetson Nano.
      • Check if your Jetson Nano Developer Kit is properly booting by connecting it to a TV through an HDMI cable. See if the TV displays the NVIDIA logo when booted, and eventually displays the Ubuntu desktop.
    • My Jetson Nano does not show anything on the TV when booting with the TV attached.
      • Check if you have inserted your microSD card all the way into the microSD card slot. You should hear a small click sound.
    • The microSD card is fully inserted, but my Jetson Nano does not boot properly.
      • Go back to “Write Image To The MicroSD Card” to reflash your SD card.

     

    Camera Setup  相机的设置

           Now that you've verified that your system can boot to your Jetson Nano Developer Kit with the microSD card, let's add the camera! Power down the Jetson Nano Developer Kit by unplugging the power. Then connect the camera using the instructions below for your type of camera (Logitech C270 or Raspberry Pi v2).
          现在,您已经验证了您的系统可以使用microSD卡引导到Jetson Nano开发人员工具包,让我们添加相机!拔掉电源,关掉Jetson Nano开发工具包。然后使用下面针对您的相机类型(Logitech  C270或Raspberry Pi v2)的说明连接相机。

    Connecting The Logitech C270 Webcam   连接Logitech C270网络摄像头

          This is very straightforward. Just plug in the USB connector into any of the Jetson Nano Developer Kit USB ports.
          这很简单。只需将USB连接器插入任何Jetson Nano开发人员Kit USB端口。

    Connecting The Raspberry Pi V2 Camera

        连接Raspberry Pi V2相机

    1. The Raspberry Pi v2 Camera connects to the MIPI CSI port. Begin by unlatching the MIPI CSI connector. This loosens the "grip" of the connector by just a small amount.      Raspberry Pi v2摄像头连接到MIPI CSI端口。首先解开MIPI CSI连接器的卡扣,这只会让连接器的夹持稍微松开一点。

    2. Insert the ribbon cable of the camera so that the metal side faces into the Nano board.     插入摄像头的排线,使金属触点一侧朝向Nano开发板。
    3. Latch the connector with a gentle push downward on the sides of the plastic. The ribbon cable should be securely held by the connector.    轻轻向下按压塑料卡扣两侧将连接器锁紧。排线应被连接器牢固地固定住。
    4. Remove the protective film from the lens of the camera.    从相机镜头上取下保护膜。

     

    Other Cameras  其他的相机

          Other cameras may also work with your Jetson Nano Developer Kit. You'll need to test them to find out. If you have another camera on hand, such as a USB webcam, feel free to give it a try using the "Hello Camera" test notebooks in the next lesson.
         其他相机也可以与您的Jetson Nano开发工具包一起使用。你需要测试他们来找出答案。如果你手头有另一个相机,比如USB网络摄像头,请在下节课中使用“Hello camera”测试笔记本试试。
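
    (补充示例,并非课程自带的usb_camera.ipynb / csi_camera.ipynb)如果想在笔记本之外快速确认摄像头是否能工作,类似下面的OpenCV小脚本通常就够了:USB摄像头直接按设备号打开;CSI摄像头在Jetson Nano上一般通过nvarguscamerasrc的GStreamer管道打开,管道参数可按需调整。

import cv2

# USB摄像头(如Logitech C270):按设备号0打开
usb_cam = cv2.VideoCapture(0)

# CSI摄像头(如Raspberry Pi Camera v2)通常通过GStreamer管道打开(参数仅供参考):
gst = ("nvarguscamerasrc ! "
       "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! "
       "nvvidconv ! video/x-raw, format=BGRx ! "
       "videoconvert ! video/x-raw, format=BGR ! appsink")
# csi_cam = cv2.VideoCapture(gst, cv2.CAP_GSTREAMER)

ok, frame = usb_cam.read()
if ok:
    print("成功读取一帧,分辨率:", frame.shape)
    cv2.imwrite("test_frame.jpg", frame)
else:
    print("未能从摄像头读取图像,请检查连接")
usb_cam.release()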

    Raspberry Pi Camera Connection Demonstration  树莓派相机连接演示

     

    Troubleshooting  故障排除

    • The camera and JupyterLab appear "frozen"
      • Check to be sure the Raspberry Pi Camera Module v2 does not touch any of the metal parts (headers, pads, terminals, ports) of the Jetson Nano Developer Kit board as this may cause an electrical short.
    • JupyterLab is working, but I cannot execute a cell to run my camera. I get an error or it "hangs"
      • The camera may have previously been assigned but not released.
        1. Shutdown the kernel for the notebook using the pulldown menu at the top of JupyterLab.
        2. Open a terminal window from the Launch page (if no Launch page, click the '+' icon)
        3. enter sudo systemctl restart nvargus-daemon in the terminal window. You will be prompted for the password, which is dlinano
        4. Restart your notebook

     

     

    Hello Camera

          Now with the camera attached, boot the system using headless Device Mode as you did before.
         现在有了摄像头,像以前一样使用 headless 设备模式启动系统。

    Boot With Camera Attached   带摄像头开机

    1. Disconnect the USB cable from the computer if it is still attached. 
      如果USB线还在电脑上,断开它
    2. Connect your DC barrel jack power supply (5V/4A). The Jetson Nano Developer Kit will power on and boot automatically.连接你的直流筒形插孔电源(5V/4A)。Jetson Nano开发工具包将自动上电并启动。
    3. Wait about 30 seconds. Then connect the USB cable from the Micro USB port on the Jetson Nano Developer Kit to the USB port on your computer.    等大约30秒。然后将USB电缆从Jetson Nano开发工具包上的微型USB端口连接到计算机上的USB端口。
    4. Open a browser window on your computer and enter the address to the Jetson Nano JupyterLab server: 192.168.55.1:8888
      The JupyterLab server running on the Jetson Nano will open up.
       打开计算机上的浏览器窗口,输入Jetson Nano JupyterLab服务器的地址:192.168.55.1:8888

    Open The Hello Camera Notebook  打开Hello Camera笔记本

          The JupyterLab interface is a dashboard that provides access to the Jupyter interactive notebooks and the operating system for Jetson Nano. The first view you'll see includes a directory tree on the left and a "Launcher" page on the right. To open the "Hello Camera" notebook:
          JupyterLab界面是一个仪表板,提供了访问Jupyter交互式笔记本以及Jetson Nano操作系统的入口。您看到的第一个视图包括左侧的目录树和右侧的“Launcher”页面。要打开“Hello Camera”笔记本:

    1. Navigate to the nvdli-nano folder with a double-click
      双击进入nvdli-nano文件夹

    2. Navigate to the hello_camera folder in the same way
      以同样的方式进入hello_camera文件夹

    3. If you are testing a USB webcam camera such as the Logitech C270, double-click the usb_camera.ipynb notebook to open it. If you are testing a CSI camera, double-click the csi_camera.ipynb instead.
      如果您要测试USB网络摄像头(如Logitech C270),双击usb_camera.ipynb笔记本打开它;如果您要测试CSI摄像头,请改为双击csi_camera.ipynb。

    4. Find out more about JupyterLab in the next section. If you are already familiar with JupyterLab features, go ahead and jump right in! When you're satisfied that your camera works correctly, return here for project instructions.
      在下一节中可以了解更多关于JupyterLab的信息。如果您已经熟悉JupyterLab的功能,那就直接开始吧!确认相机工作正常后,再回到这里查看项目说明。

     

     

    JupyterLab

           For this course, your Jetson Nano has been configured to run a JupyterLab server on port 8888. When you boot the system and open a browser to the Jetson Nano IP address at that port, you see the JupyterLab interface.
           在本课程中,您的Jetson Nano已配置为在8888端口上运行JupyterLab服务器。当您启动系统并在浏览器中打开Jetson Nano对应端口的地址时,就会看到JupyterLab界面。

    JupyterLab Interface

           The JupyterLab Interface is a dashboard that provides access to interactive iPython notebooks, as well as the folder structure for your Jetson Nano and a terminal window into the Ubuntu operating system. The first view you'll see includes a menu bar at the top, a directory tree in the left sidebar, and a main work area that is initially open to the "Launcher" page.
           JupyterLab界面是一个仪表板,提供了访问交互式iPython笔记本、Jetson Nano文件夹结构以及Ubuntu操作系统终端窗口的入口。您看到的第一个视图包括顶部的菜单栏、左侧边栏中的目录树,以及一开始打开在“Launcher”页面的主工作区。

          Complete details for all the features and menu actions available can be found in the JupyterLab Interface document. Here are some key capabilities that will be especially useful in this course:
         所有功能和菜单操作的完整细节可以在JupyterLab接口文档中找到。以下是一些在本课程中特别有用的关键功能:

    • File browser:   文件浏览器:

           The file browser in the left sidebar allows navigation through the Jetson Nano file structure. Double-clicking on a notebook or file opens it in the main work area.
            左侧侧边栏中的文件浏览器允许导航通过Jetson Nano文件结构。双击笔记本或文件,就会在主工作区打开它。

    • iPython notebooks:   iPython笔记本:

              The interactive notebooks used in this course have an ".ipynb" file extension. When a notebook is double-clicked from the file browser, it will open in the main work area and its process will start. The notebooks consist of text and code "cells". When a code cell is "run", by clicking the run button at the top of the notebook or the keyboard shortcut [CTRL][ENTER], the block of code in the cell is executed and the output, if there is any, appears below the cell in the notebook. To the left of each executable cell there is an "execution count" or "prompt number" in brackets. If the cell takes more than a few seconds to run, you will see an asterisk mark there, indicating that the cell has not finished its execution. Once processing of that cell is finished, a number will show in the brackets.
              本课程使用的交互式笔记本带有“.ipynb”文件扩展名。在文件浏览器中双击某个笔记本时,它会在主工作区打开,并启动对应的进程。笔记本由文本和代码“单元格”组成。点击笔记本顶部的运行按钮或使用快捷键[CTRL][ENTER]“运行”某个代码单元格时,会执行该单元格中的代码块;如果有输出,则显示在该单元格下方。每个可执行单元格的左侧括号中有一个“执行计数”或“提示编号”。如果单元格运行超过几秒钟,括号中会显示星号,表示它尚未执行完毕;处理完成后,括号中会显示一个数字。

    • Kernel operations:  内核操作:

            The kernel for each running notebook is a separate process that runs the user code. The kernel starts automatically when the notebook is opened from the file browser. The kernel menu on the main menu bar includes commands to shutdown or restart the kernel, which you will need to use periodically. After a kernel shutdown, no code cells can be executed. When a kernel is restarted, all memory is lost regarding imported packages, variable assignments, and so on.
            每个正在运行的笔记本的内核都是一个运行用户代码的独立进程。从文件浏览器打开笔记本时,内核会自动启动。主菜单栏中的内核菜单包含关闭或重启内核的命令,您需要不时用到它们。内核关闭后,任何代码单元格都无法执行;内核重启后,所有与导入的包、变量赋值等相关的内存状态都会丢失。

    • Cell tabs:  单元格选项卡:

             You can move any cell to new window tabs in the main work area by right-clicking the cell and selecting "Create New View for Output". This way, you can continue to scroll down the JupyterLab notebook while still watching a particular cell. This is especially helpful when the cell includes a camera view!
            您可以右键单击某个单元格并选择“Create New View for Output”,把它移动到主工作区中新的窗口选项卡。这样,您可以继续向下滚动JupyterLab笔记本,同时仍然盯着某个特定的单元格。当该单元格中包含相机画面时,这一点尤其有用!

    • Terminal window:   终端窗口:

             You can work directly in a Terminal window on your Jetson Nano Ubuntu OS. From the Launcher page, click the Terminal icon under "Other". To bring up the Launcher page, if it is no longer visible, click the "+" icon at the top of the left sidebar.

             您可以直接在Jetson Nano Ubuntu操作系统的终端窗口中工作。从Launcher页面,点击“Other”下的Terminal图标。如果Launcher页面不再可见,请点击左侧边栏顶部的“+”图标将其打开。

     

     

     
