  • Exception in thread "main" java.lang.IllegalStateException: Cannot read while there is an open stream writer

    Error description
    Fix: close the open stream.

    When I first started editing PDF files, I kept carelessly forgetting to close the PDPageContentStream I had opened, so I am leaving this note.

    May the world be free of bugs.
  • 1. Check whether an includedir directive exists in /etc/krb5.conf.
    2. Check whether the path that includedir points to is correct.

    (screenshot omitted)
    Here the directory referenced by includedir was empty, so the Kerberos configuration could not be read, raising the error in the article title.
    Fix: comment out the includedir line.

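The two checks above can be scripted. A minimal sketch, assuming GNU sed and operating on a local sample file rather than the real /etc/krb5.conf (all file and directory names here are illustrative):

```shell
#!/bin/sh
# Sketch: comment out an includedir directive whose target directory is
# missing or empty, which is the situation described above.
conf=./krb5.conf.sample          # illustrative stand-in for /etc/krb5.conf
incdir=./krb5.conf.d.sample      # illustrative includedir target

mkdir -p "$incdir"               # exists but is empty, as in the article
printf 'includedir %s\n[libdefaults]\n default_realm = EXAMPLE.COM\n' "$incdir" > "$conf"

dir=$(awk '/^includedir/ {print $2}' "$conf")
# An empty `ls -A` means the directory is missing or holds no config snippets.
if [ -n "$dir" ] && [ -z "$(ls -A "$dir" 2>/dev/null)" ]; then
    sed -i 's/^includedir/#includedir/' "$conf"   # GNU sed in-place edit
fi

grep includedir "$conf"          # the directive is now commented out
```

Running it against the sample prints `#includedir ./krb5.conf.d.sample`, confirming the directive was disabled.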
  • from: http://oracle-online-help.blogspot.com/2007/03/db-file-sequential-read-while-doing.html
    These days we are working on a data warehouse in which we have a master table with roughly 1.5M rows inserted every half hour, and a few fast-refresh materialized views based on it. These mviews use aggregate functions, which makes them a bit complex.


    At the start, each mview refresh took some 18-20 mins, which was totally against the business requirement. We then tried to figure out why the refresh was taking so much time, in spite of having dropped all the bitmap indexes on the mview (generally bitmap indexes are not good for inserts/updates).

     

    The 10046 trace (level 12) highlighted many "db file sequential reads" on the mview, because the optimizer was using "I_SNAP$_mview" to fetch rows from the mview and merge them with rows from the master table to build the aggregated data for the mview.

    The good part of the story is that access to the master table was quite fast, because we used direct load (sqlldr direct=y) to insert the data. When you use direct load, Oracle maintains the list of rowids added to the table in a view called "SYS.ALL_SUMDELTA". So during a fast mview refresh, newly inserted rows are picked directly from the table using the rowids given by the ALL_SUMDELTA view, not from the mview log, which saves time.

    The concerning part was that Oracle was still using the I_SNAP$ index while fetching data from the mview, with many "db file sequential read" waits; it was clearly visible that Oracle waited on sequential reads the most. Running a simple test against the table, we found that a full table scan (which uses scattered reads and the multiblock read count) was very fast compared to index access. Also, the master table and its dependent mviews hold only the current day's data: at the end of the day their data is pushed to historical tables, and both are empty after midnight.

    I gathered the stats on the mview, re-ran the refresh, and traced the session; this time the optimizer didn't use the index, which was good news.

    Now the challenge was to either run the mview stats-gathering job every half hour, induce wrong stats on the table/index so the refresh never uses index access, or perhaps lock the stats using DBMS_STATS.LOCK_TABLE_STATS.

    But we found another solution: creating the mview with the "USING NO INDEX" clause. This way the "I_SNAP$" index is not created by the "CREATE MATERIALIZED VIEW" command. According to Oracle, the "I_SNAP$" index is good for fast refresh, but it proved to be the reverse for us, because our environment is different and the data changes are quite frequent.

    We then ran the tests again, loading 48 slices of data (24 hrs × 2 loads per hour), and the results were above expectations: each load took at most 3 mins.

    This is not the end of the story. In the trace we could see the mview refresh using a "MERGE" command, with full table scan access to the mview (which we wanted) and rowid range access to the master table.

    The explain plan looks like:

     

    Rows     Row Source Operation
    -------  ---------------------------------------------------
          2  MERGE  SF_ENV_DATA_MV (cr=4598 pr=5376 pw=0 time=47493463 us)
     263052   VIEW  (cr=3703 pr=3488 pw=0 time=24390284 us)
     263052    HASH JOIN OUTER (cr=3703 pr=3488 pw=0 time=24127224 us)
     263052     VIEW  (cr=1800 pr=1790 pw=0 time=14731732 us)
     263052      SORT GROUP BY (cr=1800 pr=1790 pw=0 time=14205624 us)
     784862       VIEW  (cr=1800 pr=1790 pw=0 time=3953958 us)
     784862        NESTED LOOPS  (cr=1800 pr=1790 pw=0 time=3169093 us)
          1         VIEW  ALL_SUMDELTA (cr=9 pr=0 pw=0 time=468 us)
          1          FILTER  (cr=9 pr=0 pw=0 time=464 us)
          1           MERGE JOIN CARTESIAN (cr=9 pr=0 pw=0 time=459 us)
          1            NESTED LOOPS  (cr=6 pr=0 pw=0 time=99 us)
          1             TABLE ACCESS BY INDEX ROWID OBJ$ (cr=3 pr=0 pw=0 time=56 us)
          1              INDEX UNIQUE SCAN I_OBJ1 (cr=2 pr=0 pw=0 time=23 us)(object id 36)
          1             TABLE ACCESS CLUSTER USER$ (cr=3 pr=0 pw=0 time=40 us)
          1              INDEX UNIQUE SCAN I_USER# (cr=1 pr=0 pw=0 time=7 us)(object id 11)
          1            BUFFER SORT (cr=3 pr=0 pw=0 time=354 us)
          1             INDEX RANGE SCAN I_SUMDELTA$ (cr=3 pr=0 pw=0 time=243 us)(object id 158)
          0           NESTED LOOPS  (cr=0 pr=0 pw=0 time=0 us)
          0            INDEX RANGE SCAN I_OBJAUTH1 (cr=0 pr=0 pw=0 time=0 us)(object id 103)
          0            FIXED TABLE FULL X$KZSRO (cr=0 pr=0 pw=0 time=0 us)
          0           FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=0 us)
     784862         TABLE ACCESS BY ROWID RANGE SF_ENV_SLICE_DATA (cr=1791 pr=1790 pw=0 time=2383760 us)
     708905     MAT_VIEW ACCESS FULL SF_ENV_DATA_MV (cr=1903 pr=1698 pw=0 time=6387829 us)


    You can see the access pattern as above.

     

    The interesting twist in the story came when I looked at the wait events in the trace file.

     

      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                      2253        0.74          7.73
      db file scattered read                        240        1.05          6.77
      log file switch completion                      6        0.98          3.16
      log file switch                                 8        0.98          2.47
      rdbms ipc reply                                 6        0.00          0.00
      log buffer space                                3        0.42          0.61

     

    Again: even when we are doing a full table scan, there are "db file sequential reads"?

    To confirm, I opened the raw trace file (before tkprof) and checked the obj# on the sequential read wait events: it was the mview (SF_ENV_DATA_MV), and there were many of them. To investigate further, I checked whether there were any scattered reads against the mview. There were, but there were also many sequential reads, and Oracle waited longer on them than on the scattered reads that did most of the data fetching.

    After giving it some thought, I realized that we had created the mviews without a storage clause, which means Oracle created them with the default storage clause.

     

    So assuming an mview (container table) extent has 17 blocks and the multiblock read count is 16, Oracle will use the scattered read mechanism (multiple blocks) to read the first 16 blocks and the sequential read mechanism (one block) for the remaining one, so you will find many sequential read wait events sandwiched between scattered reads. To overcome this, we recreated the mview with larger extent sizes that are also a multiple of the MBRC (multiblock read count).

    Another cause of sequential reads is chained or migrated rows: if rows of your mview (or table) are migrated, the pointer to the row's new location is kept in the old (original) block, which is always read by a single I/O call, i.e. a sequential read. You can check the count of chained rows in DBA_TABLES.CHAIN_CNT after analyzing the table. To overcome this, we created the mview with an appropriate PCTFREE, so that when the merge (run as part of the mview refresh) updates a few rows, they are not moved to a different block, avoiding the sequential read.

    Conclusion:

    1. Creating the mview with "USING NO INDEX" omits the "I_SNAP$" index, which can help fast refresh when data changes are frequent and you cannot afford to collect stats every few minutes.
    2. Create mview with storage clause suiting to your environment. Default extent sizes may not be always good.
    3. PCTFREE can be quite handy to avoid sequential reads and extra block read.
  • Shell while read

    #!/bin/bash
    set -x

    # Delete the old releases listed in release.log, keeping only its last entry.
    function rmOldRelease() {
        # Field 13 of each release.log line is the old release directory.
        awk -F "/" '{print $13}' release.log >> tmp.txt

        # `|| [[ -n $line ]]` also handles a final line without a trailing newline.
        while read line || [[ -n $line ]]
        do
            if [ -n "${line}" ]; then
                rm -rf "${line}"
                # Remove the matching entry from release.log.
                sed -i "/${line}/d" release.log
            fi
        done < tmp.txt

        if [ -f tmp.txt ]; then
            rm -f tmp.txt
        fi

        # Keep only the last line of release.log.
        sed -i '$!d' release.log
    }

    # List the entries of the current directory.
    ls -l | awk '{print $9}' | grep -v "^$" > dirList.txt

    while read line || [[ -n $line ]]
    do
        if [[ ${line} =~ ^HiCloud.*$ ]]; then
            pushd /opt/hw/release/hitouch/${line}
            rmOldRelease
            popd
        fi
    done < dirList.txt

    rm -f dirList.txt
    
    

    release.log 

    2018-11-29:16:22:06 install 1.1.0.38.20181129161838 /opt/hw/release/hitouch/HiTouchCloud_CDps/20181129162206 taskid: 877f3dfd1c974e9a9ba119b8ecfb061b backup_path: None
    2018-12-04:18:35:10 install 1.1.0.38.20181129161838 /opt/hw/release/hitouch/HiTouchCloud_CDps/20181204183510 taskid: 96982981aab147988f2461bd9f5a9995 backup_path: /opt/hw/release/hitouch/HiTouchCloud_CDps/20181129162206
    2018-12-04:20:31:52 install 1.1.0.38.20181204202758 /opt/hw/release/hitouch/HiTouchCloud_CDps/20181204203152 taskid: b919f2a51bc14893a0edcc940a699b50 backup_path: /opt/hw/release/hitouch/HiTouchCloud_CDps/20181204183510
    2018-12-04:20:34:32 install 1.1.0.38.20181204202758 /opt/hw/release/hitouch/HiTouchCloud_CDps/20181204203432 taskid: 3f3139430c0d4d74b0eca60e237df7a5 backup_path: /opt/hw/release/hitouch/HiTouchCloud_CDps/20181204203152
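
The `while read line || [[ -n $line ]]` guard in the script above matters when a file's last line lacks a trailing newline: plain `read` then returns non-zero and the loop body would skip that line. A minimal sketch (the file name is illustrative):

```shell
#!/bin/bash
# demo.txt deliberately ends without a newline on its last line.
printf 'first\nsecond\nlast-without-newline' > demo.txt

# Plain `read` drops the final, newline-less line.
count_plain=0
while read -r line; do
    count_plain=$((count_plain + 1))
done < demo.txt

# The || [[ -n $line ]] guard still processes it.
count_guarded=0
while read -r line || [[ -n $line ]]; do
    count_guarded=$((count_guarded + 1))
done < demo.txt

echo "plain=$count_plain guarded=$count_guarded"   # plain=2 guarded=3
rm -f demo.txt
```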
    
    

     

  • while read line

    Redirection in a loop. You may have seen the following pattern in other scripts:
    while read line
    do
    …
    done < file
    When first seeing this structure, it is hard to understand how `< file` works together with the loop, because the loop contains many commands, while the redirections we…
  • CSH while read

    Reposted from: https://www.cnblogs.com/lelin/p/11376692.html
  • Reading text content with while read

    Method 1:
    while read line
    do cmd
    done
    Method 2:
    cat FILE_PATH | while read line
    do cmd
    done
    Method 3:
    while read line
    do cmd
    done <FILE
    Example: ip.txt contains the following:
    10.1.1.11 root 123
    10.1.…
  • while read loop

    cut -d: -f3 /etc/passwd | while read line
    do
        if [ $line -ge 500 ]
        then
            let res++
            echo $res
        fi
    done
    echo $res
    The script counts the /etc/passwd entries whose UID is 500 or greater. The output is:
    1 2 3 4 5 6 7 8 9 10 0…
  • while read and file descriptors

    `read -u3 i` reads one line from fd 3 (file descriptor) into the variable i; likewise you can see what `read -u4 j` means, and `3<…bfile` supplies the file, so the whole code is:
    while read -u3 i && read -u4 j; do echo $i $j; do…
  • while read and do

    while read line
    do
    …
    done < file
    When first seeing this structure, it is hard to understand how `< file` works together with the loop, because the loop contains many commands, while the redirections we had seen before…
  • The difference between while read line and a for loop. Example: read ip and port from an ip list, then ssh to each target machine and run a specific command: ssh -o StrictHostKeyChecking=no -p22 ip "ls -la /data/". The ip list: 115.159.93 1 22115.159.…
  • Contents of while.sh:
    #!/bin/bash
    A=1
    pstree
    cat 1.txt | while read line; do
        # a child process is started, as the following pstree shows
        pstree
        echo $A
        A=${A}_${line}
        echo $A
    done
    echo $A
    Running `sh while.sh` shows: ├─sshd───sshd───s…
  • Enabling read-while-writer

    The following settings are required for the read-while-writer feature to function:
    CONFIG proxy.config.cache.max_doc_size INT 0
    CONFIG proxy.config.cache.enable_read_while_writer INT 1
    CONFIG proxy.c…
  • Pipe technique: while read line

    The pipe method: cat $FILENAME | while read LINE
    function While_read_LINE(){
        cat $FILENAME | while read LINE
        do
            echo $LINE
        done
    }
    Note: the reason I call this the pipe method should be easy to see. When a pipe is encountered,…
  • while's stdin defaults to /dev/tty; after it has been redirected to a pipe or file, if you still want to read from the screen (/dev/tty), then when running a particular command…
    while read info
    do
        echo "$info"
        read -p 'enter' dev </dev/tty
        echo $dev
    done < /root/us…
  • Using while read correctly in a script brings many benefits; the read command takes input from standard input and stores it in variables. Any script that uses read gains all the advantages of Linux pipes. Put your script into a pipeline: while read host ip; do echo &…
  • A summary of the differences between while read line and a for loop, in plain terms: both read a file, but while read line splits on \n, while for splits on whitespace. One more point: for reads line by line, whereas while read line may be drained all at once, which is very obvious when iterating with ssh. And one more…
  • The while read line problem

    while read line
    do
    …
    done < file
    When first seeing this structure, it is hard to understand how `< file` works together with the loop, because the loop contains many commands, while the redirections we had met before served a single command. There is a principle here: this…
  • 【Shell】while read line

    while read line
    do
    …
    done < file
    Input redirection on the loop applies to every command inside it that reads from standard input; output redirection on the loop applies to every command inside it that writes to standard output; when explicitly used inside the loop…
  • Detailed introduction to while read line usage

    while read line
    do
    …
    done < file
    When first seeing this structure, it is hard to understand how `< file` works together with the loop, because the loop contains many commands, while the redirections we had met before served a single command. There is a principle here: this…
  • Shell scripting: the while read line loop

    A while read line loop reads its input line by line, exiting only after every line has been read. In practice this loop is often used for data processing.
    #!/bin/bash
    # Test the use of the while read line loop:
    # read an ip file line by line and output each line's device…
  • Problem: when ssh is used inside while read line, only the first line is read…
    while read line
    do
        echo $line
        ssh root@$line "echo 123456 | passwd --stdin peter" > /dev/null
    done < hosts.txt
    Result: only the first ip address in hosts.txt takes effect,…
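The first-line-only behavior above happens because ssh reads the rest of the loop's stdin; a common fix is `ssh -n`, or redirecting ssh's stdin from /dev/null. A minimal sketch using `cat` as a stand-in for any stdin-consuming command (file and host names are illustrative):

```shell
#!/bin/bash
printf 'host1\nhost2\nhost3\n' > hosts.txt

# Broken: the inner command swallows the remaining input,
# so only host1 is processed (cat stands in for ssh here).
while read -r host; do
    echo "broken: $host"
    cat > /dev/null              # eats host2 and host3
done < hosts.txt

# Fixed: give the inner command its own stdin, as `ssh -n` would.
while read -r host; do
    echo "fixed: $host"
    cat < /dev/null > /dev/null
done < hosts.txt

rm -f hosts.txt
```

The broken loop prints only `broken: host1`, while the fixed loop prints all three hosts.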
  • cat file | while read line
    do
        let i++
    done
    echo "$i"
    The program above prints i as 0 every time; across repeated runs, i always reverts to its default value. Putting echo "$i" before done shows the value accumulating on every iteration. After looking it up, I found: while read…
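Several snippets above (the A=1 while.sh example and the i counter that always prints 0) hit the same pitfall: `cat file | while read line` runs the loop in a subshell, so variable changes vanish when the pipe ends, whereas `done < file` keeps the loop in the current shell. A minimal sketch (the file name is illustrative):

```shell
#!/bin/bash
printf 'a\nb\nc\n' > items.txt

# Pipe form: the loop body runs in a subshell, so i resets afterwards.
i=0
cat items.txt | while read -r line; do
    i=$((i + 1))               # increments only inside the subshell
done
echo "after pipe: i=$i"        # after pipe: i=0

# Redirection form: the loop runs in the current shell, so i survives.
i=0
while read -r line; do
    i=$((i + 1))
done < items.txt
echo "after redirect: i=$i"    # after redirect: i=3

rm -f items.txt
```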
