    uWSGI fails right at startup

    You will usually see: --- no python application found, check your startup logs for errors ---. This almost always means a configuration error, so uWSGI cannot start properly.

    Useful log lines (uWSGI writes a startup log every time it launches; when something goes wrong, look there first):

    *** Operational MODE: preforking ***

    failed to open python file /root/yzq/djangos/testdata/Testdata/wsgi.py

    unable to load app 0 (mountpoint='') (callable not found or import error)

    It says it failed to open /root/yzq/djangos/testdata/Testdata/wsgi.py. In my case the path was simply wrong: one directory level was missing.

    *** Operational MODE: preforking ***

    failed to open python file /root/yzq/djangos/testdata/Testdata/wsgi.py

    unable to load app 0 (mountpoint='') (callable not found or import error)

    *** no app loaded. going in full dynamic mode ***

    uWSGI running as root, you can use --uid/--gid/--chroot options

    *** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***

    *** uWSGI is running in multiple interpreter mode ***

    spawned uWSGI master process (pid: 25141)

    spawned uWSGI worker 1 (pid: 25142, cores: 1)

    spawned uWSGI worker 2 (pid: 25143, cores: 1)

    *** Stats server enabled on 127.0.0.1:9193 fd: 11 ***

    ...brutally killing workers...

    worker 1 buried after 1 seconds

    worker 2 buried after 1 seconds

    binary reloading uWSGI...

    chdir() to /root/yzq/configs

    closing all non-uwsgi socket fds > 2 (max_fd = 100001)...

    found fd 3 mapped to socket 0 (127.0.0.1:9092)

    running /root/.virtualenvs/blog/bin/uwsgi

    [uWSGI] getting INI configuration from /root/yzq/configs/testdata_uwsgi.ini

    *** Starting uWSGI 2.0.17 (64bit) on [Fri Feb 15 21:31:14 2019] ***

    compiled with version: 4.8.5 20150623 (Red Hat 4.8.5-16) on 04 April 2018 04:11:16

    os: Linux-3.10.0-514.26.2.el7.x86_64 #1 SMP Tue Jul 4 15:04:05 UTC 2017

    nodename: VM_2_29_centos

    machine: x86_64

    clock source: unix

    pcre jit disabled

    detected number of CPU cores: 1

    current working directory: /root/yzq/configs

    detected binary path: /root/.virtualenvs/blog/bin/uwsgi

    uWSGI running as root, you can use --uid/--gid/--chroot options

    *** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***

    chdir() to /root/yzq/djangos/testdata

    your processes number limit is 7283

    your memory page size is 4096 bytes

    detected max file descriptor number: 100001

    lock engine: pthread robust mutexes

    thunder lock: disabled (you can enable it with --thunder-lock)

    uwsgi socket 0 inherited INET address 127.0.0.1:9092 fd 3

    uWSGI running as root, you can use --uid/--gid/--chroot options

    *** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***

    Python version: 3.6.4 (default, Mar 16 2018, 22:27:18) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]

    Set PythonHome to /root/.virtualenvs/testdata

    *** Python threads support is disabled. You can enable it with --enable-threads ***

    Python main interpreter initialized at 0x28f9820

    uWSGI running as root, you can use --uid/--gid/--chroot options

    *** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***

    your server socket listen backlog is limited to 100 connections

    your mercy for graceful operations on workers is 60 seconds

    mapped 304776 bytes (297 KB) for 2 cores

    *** Operational MODE: preforking ***

    failed to open python file /root/yzq/djangos/testdata/Testdata/wsgi.py

    unable to load app 0 (mountpoint='') (callable not found or import error)

    *** no app loaded. going in full dynamic mode ***

    uWSGI running as root, you can use --uid/--gid/--chroot options

    *** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***

    *** uWSGI is running in multiple interpreter mode ***

    gracefully (RE)spawned uWSGI master process (pid: 25141)

    spawned uWSGI worker 1 (pid: 27985, cores: 1)

    spawned uWSGI worker 2 (pid: 27986, cores: 1)

    *** Stats server enabled on 127.0.0.1:9193 fd: 11 ***

    *** Starting uWSGI 2.0.17 (64bit) on [Fri Feb 15 21:32:11 2019] ***

    compiled with version: 4.8.5 20150623 (Red Hat 4.8.5-16) on 04 April 2018 04:11:16

    os: Linux-3.10.0-514.26.2.el7.x86_64 #1 SMP Tue Jul 4 15:04:05 UTC 2017

    nodename: VM_2_29_centos

    machine: x86_64

    clock source: unix

    pcre jit disabled

    detected number of CPU cores: 1

    current working directory: /

    writing pidfile to /root/yzq/running/uwsgi_testdata.pid

    detected binary path: /root/.virtualenvs/blog/bin/uwsgi

    uWSGI running as root, you can use --uid/--gid/--chroot options

    *** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***

    chdir() to /root/yzq/djangos/testdata

    your processes number limit is 7283

    your memory page size is 4096 bytes

    detected max file descriptor number: 1000000

    lock engine: pthread robust mutexes

    thunder lock: disabled (you can enable it with --thunder-lock)

    uwsgi socket 0 bound to TCP address 127.0.0.1:9092 fd 3

    uWSGI running as root, you can use --uid/--gid/--chroot options

    *** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***

    Python version: 3.6.4 (default, Mar 16 2018, 22:27:18) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]

    Set PythonHome to /root/.virtualenvs/testdata

    *** Python threads support is disabled. You can enable it with --enable-threads ***

    Python main interpreter initialized at 0x17141f0

    uWSGI running as root, you can use --uid/--gid/--chroot options

    *** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***

    your server socket listen backlog is limited to 100 connections

    your mercy for graceful operations on workers is 60 seconds

    mapped 304776 bytes (297 KB) for 2 cores

    *** Operational MODE: preforking ***

    failed to open python file /root/yzq/djangos/testdata/Testdata/wsgi.py

    unable to load app 0 (mountpoint='') (callable not found or import error)

    *** no app loaded. going in full dynamic mode ***

    uWSGI running as root, you can use --uid/--gid/--chroot options

    *** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***

    *** uWSGI is running in multiple interpreter mode ***

    spawned uWSGI master process (pid: 527)

    spawned uWSGI worker 1 (pid: 541, cores: 1)

    spawned uWSGI worker 2 (pid: 542, cores: 1)

    *** Stats server enabled on 127.0.0.1:9193 fd: 11 ***

    --- no python application found, check your startup logs for errors ---

    [pid: 541|app: -1|req: -1/1] 193.112.40.139 () {32 vars in 365 bytes} [Fri Feb 15 21:36:07 2019] GET / => generated 21 bytes in 0 msecs (HTTP/1.1 500) 2 headers in 83 bytes (0 switches on core 0)

    --- no python application found, check your startup logs for errors ---

    [pid: 541|app: -1|req: -1/2] 193.112.40.139 () {32 vars in 365 bytes} [Fri Feb 15 21:36:10 2019] GET / => generated 21 bytes in 0 msecs (HTTP/1.1 500) 2 headers in 83 bytes (0 switches on core 0)
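Request lines in this format are easy to post-process. A minimal sketch (plain Python; the regex and field names are my own choice, not part of uWSGI) that pulls the status code and app slot out of such a line, so the 500s with app: -1 can be spotted in bulk:

```python
import re

# Matches the uWSGI request-log shape shown above, e.g.
# [pid: 541|app: -1|req: -1/1] 193.112.40.139 () {...} [...] GET / => generated 21 bytes in 0 msecs (HTTP/1.1 500) ...
UWSGI_REQ = re.compile(
    r"\[pid: (?P<pid>\d+)\|app: (?P<app>-?\d+)\|req: (?P<req>-?\d+)/(?P<total>\d+)\] "
    r"(?P<ip>\S+) .*?"
    r"(?P<method>GET|POST|PUT|DELETE|HEAD) (?P<path>\S+) => generated (?P<bytes>\d+) bytes "
    r"in (?P<msecs>\d+) msecs \(HTTP/1\.[01] (?P<status>\d{3})\)"
)

def parse_req(line):
    """Return the parsed fields of one request line, or None if it doesn't match."""
    m = UWSGI_REQ.search(line)
    return m.groupdict() if m else None

line = ("[pid: 541|app: -1|req: -1/1] 193.112.40.139 () {32 vars in 365 bytes} "
        "[Fri Feb 15 21:36:07 2019] GET / => generated 21 bytes in 0 msecs "
        "(HTTP/1.1 500) 2 headers in 83 bytes (0 switches on core 0)")
rec = parse_req(line)
print(rec["status"], rec["app"])   # app: -1 means no application was loaded
```

Filtering a whole log for records where status starts with "5" or app is "-1" then becomes a one-line comprehension.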

    uWSGI starts normally; the error only appears when a view is accessed

    The log gives a clear, useful message:

    django.urls.exceptions.NoReverseMatch: Reverse for 'detail' with arguments '('',)' not found. 1 pattern(s) tried: ['article/(?P[0-9]+)/$']

    A route argument is missing. In my case the cause was a stale cache: the new Django template code was rendering against an old cached data structure (the cache TTL was 12 hours). The fix is simply to go into the cache and delete the key, or switch to a new key.
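The failure can be reproduced without Django: the tried pattern requires one or more digits, and the stale cached context handed the template an empty string, which cannot match. A sketch with plain re (the group name 'pk' is a stand-in; the real name is elided in the log above):

```python
import re

# The pattern from the traceback; 'pk' is a hypothetical group name,
# since the original log lost the real one.
pattern = re.compile(r"article/(?P<pk>[0-9]+)/$")

def can_reverse(arg):
    # Reversing 'detail' effectively requires the argument to satisfy [0-9]+.
    return re.fullmatch(r"[0-9]+", str(arg)) is not None

print(can_reverse(215))   # a real article id reverses fine
print(can_reverse(""))    # the stale cache supplied '' -> NoReverseMatch
```

This is why deleting the cache key fixes it: the fresh context carries a real id again.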

    [pid: 11774|app: 0|req: 420/491] 111.206.221.40 () {44 vars in 1222 bytes} [Wed Dec 25 20:06:17 2019] GET /like_get/?cur_url=http%3A%2F%2Fzhuoqun.info%2Farticle%2F215%2F&article_id=215&csrfmiddlewaretoken=DcI4hLPyMtwaL5hutjx1s5KnfUTEWcWBudwwYYk9iyHu8iI2IQQ1Ez1saXXeY3WZ => generated 242 bytes in 44 msecs (HTTP/1.1 200) 3 headers in 101 bytes (1 switches on core 2)

    [pid: 11775|app: 0|req: 72/492] 111.206.221.104 () {50 vars in 1101 bytes} [Wed Dec 25 20:06:17 2019] POST /visit_web/ => generated 1342 bytes in 25 msecs (HTTP/1.1 403) 3 headers in 102 bytes (1 switches on core 3)

    django.security.csrf: 2019-12-25 20:06:17,771 /root/.virtualenvs/blog/lib/python3.6/site-packages/django/middleware/csrf.py [l

    File "/root/.virtualenvs/blog/lib/python3.6/site-packages/django/template/base.py", line 990, in render

    bit = node.render_annotated(context)

    File "/root/.virtualenvs/blog/lib/python3.6/site-packages/django/template/base.py", line 957, in render_annotated

    return self.render(context)

    File "/root/.virtualenvs/blog/lib/python3.6/site-packages/django/template/loader_tags.py", line 72, in render

    result = block.nodelist.render(context)

    File "/root/.virtualenvs/blog/lib/python3.6/site-packages/django/template/base.py", line 990, in render

    bit = node.render_annotated(context)

    File "/root/.virtualenvs/blog/lib/python3.6/site-packages/django/template/base.py", line 957, in render_annotated

    return self.render(context)

    File "/root/.virtualenvs/blog/lib/python3.6/site-packages/django/template/loader_tags.py", line 72, in render

    result = block.nodelist.render(context)

    File "/root/.virtualenvs/blog/lib/python3.6/site-packages/django/template/base.py", line 990, in render

    bit = node.render_annotated(context)

    File "/root/.virtualenvs/blog/lib/python3.6/site-packages/django/template/base.py", line 957, in render_annotated

    return self.render(context)

    File "/root/.virtualenvs/blog/lib/python3.6/site-packages/django/template/library.py", line 245, in render

    return t.render(new_context)

    File "/root/.virtualenvs/blog/lib/python3.6/site-packages/django/template/base.py", line 209, in render

    return self._render(context)

    File "/root/.virtualenvs/blog/lib/python3.6/site-packages/django/template/base.py", line 199, in _render

    return self.nodelist.render(context)

    File "/root/.virtualenvs/blog/lib/python3.6/site-packages/django/template/base.py", line 990, in render

    bit = node.render_annotated(context)

    File "/root/.virtualenvs/blog/lib/python3.6/site-packages/django/template/base.py", line 957, in render_annotated

    return self.render(context)

    File "/root/.virtualenvs/blog/lib/python3.6/site-packages/django/template/defaulttags.py", line 216, in render

    nodelist.append(node.render_annotated(context))

    File "/root/.virtualenvs/blog/lib/python3.6/site-packages/django/template/base.py", line 957, in render_annotated

    return self.render(context)

    File "/root/.virtualenvs/blog/lib/python3.6/site-packages/django/template/defaulttags.py", line 322, in render

    return nodelist.render(context)

    File "/root/.virtualenvs/blog/lib/python3.6/site-packages/django/template/base.py", line 990, in render

    bit = node.render_annotated(context)

    File "/root/.virtualenvs/blog/lib/python3.6/site-packages/django/template/base.py", line 957, in render_annotated

    return self.render(context)

    File "/root/.virtualenvs/blog/lib/python3.6/site-packages/django/template/defaulttags.py", line 458, in render

    url = reverse(view_name, args=args, kwargs=kwargs, current_app=current_app)

    File "/root/.virtualenvs/blog/lib/python3.6/site-packages/django/urls/base.py", line 91, in reverse

    return force_text(iri_to_uri(resolver._reverse_with_prefix(view, prefix, *args, **kwargs)))

    File "/root/.virtualenvs/blog/lib/python3.6/site-packages/django/urls/resolvers.py", line 497, in _reverse_with_prefix

    raise NoReverseMatch(msg)

    django.urls.exceptions.NoReverseMatch: Reverse for 'detail' with arguments '('',)' not found. 1 pattern(s) tried: ['article/(?P[0-9]+)/$']

    [pid: 9439|app: 0|req: 19/23] 223.166.75.193 () {46 vars in 779 bytes} [Wed Dec 25 00:57:26 2019] GET / => generated 2073 bytes in 624 msecs (HTTP/1.1 200) 5 headers in 278 bytes (1 switches on core 2)

    [pid: 9439|app: 0|req: 20/24] 124.225.47.166 () {42 vars in 781 bytes} [Wed Dec 25 00:57:28 2019] GET /favicon.ico => generated 0 bytes in 11 msecs (HTTP/1.1 301) 4 headers in 160 bytes (1 switches on core 3)

    #!/bin/bash
    
    # change to the directory this script lives in
    SELFPATH=$(cd "$(dirname "$0")"; pwd)
    
    # log paths (anonymized as XXX in the original; GLOGGERFATALPATH,
    # GLOGGERERRPATH, LOGGERXXXPATH, CURRPATH below are likewise anonymized)
    LOGGERAPATH=XXX
    LOGGERBPATH=XXX
    LOGGERCPATH=XXX
    
    #LOGPATH="${GLOGGERPATH}"
    # path of this script
    # path of BO, one level above this script
    # path of the fatal log
    
    # set locale environment variables, see https://www.cnblogs.com/benmm/p/4010834.html
    export LC_CTYPE=zh_CN.UTF-8
    export LANG=en_US.UTF-8
    
    # build the default log file name from today's date
    TARGETDATE="$(date +%Y%m%d)"
    echo "$TARGETDATE"
    TARGETFILE="XXX-${TARGETDATE}_XXX.txt"
    echo "$TARGETFILE"
    
    VERB=0
    # script options: -v pauses after each stage, -t <date> analyzes a given date
    while getopts "vt:" OPT; do
        case $OPT in
            "v")
                VERB=1
                ;;
            "t")
                TARGETDATE=${OPTARG}
                TARGETFILE="XXX-${TARGETDATE}_XXX.txt"
                echo "option time $TARGETDATE"
                ;;
            ":")
                echo "unknown argument, $OPT"
                exit 2
                ;;
            "?")
                echo "unknown argument, $OPT"
                exit 2
                ;;
            *)
                echo "unknown argument, $OPT"
                exit 3
                ;;
        esac
    done
    
    function verb_pause()
    {
        if [[ ${VERB} -eq 1 ]]; then
            read -n 1 -p "Press any key to continue ..."
        fi
    }
    
    cd "${SELFPATH}/"
    
    echo "all timeout:"
    
    echo "================================================================"
    echo "${TARGETDATE}, timeout@XXX"
    echo "================================================================"
    grep -a "timeout" "${LOGGERXXXPATH}/${TARGETFILE}" >> timeout.txt
    verb_pause
    
    echo "================================================================"
    echo "${TARGETDATE}, LastErrorCode@fatal"
    echo "================================================================"
    cd "${GLOGGERFATALPATH}/"
    rm -f totalfile.txt
    touch totalfile.txt
    chmod 777 totalfile.txt
    # the raw logs are GB2312-encoded; convert to UTF-8 first (-c drops bytes that cannot be converted)
    iconv -c -f GB2312 -t UTF-8 "${TARGETFILE}" >> totalfile.txt
    #iconv -f GB2312 -t UTF-8//IGNORE ${TARGETFILE} -o "totalfile.txt"
    #cat "${GLOGGERFATALPATH}/"${TARGETFILE} | grep -a "LastErrorCode" | sed 's/,/ /g' | awk '{print $5}'
    # how do I print a newline here???
    grep -a "LastErrorCode" totalfile.txt | sed 's/,/ /g' | awk '{print $5}' >> "${SELFPATH}/CrashAddress.txt"
    cd "${SELFPATH}"
    verb_pause
    
    # e.g.: G1,27-22:43:00,!!!!!!!! CServerNetMng::Send Error SENDMODE:11 iReturn:-1 !!!!!!!!
    echo "================================================================"
    echo "${TARGETDATE}, Send Error@err"
    echo "================================================================"
    TARGETFILE="err-${TARGETDATE}_00.txt"
    echo "$TARGETFILE"
    
    cd "${GLOGGERERRPATH}/"
    rm -f totalfile.txt
    touch totalfile.txt
    chmod 777 totalfile.txt
    iconv -c -f GB2312 -t UTF-8 "${TARGETFILE}" >> totalfile.txt
    grep -a "Send Error" totalfile.txt | sed 's/,/ /g' | awk '{print $6,$7}' >> "${SELFPATH}/SendError.txt"
    grep -aE "Send Error.*SENDMODE" totalfile.txt | awk '{print $7}' | uniq -c
    cd "${SELFPATH}"
    verb_pause
    
    cd "${CURRPATH}/"
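The script's core extraction step (grep for LastErrorCode, turn commas into spaces, keep the fifth whitespace-separated field) can be sketched in plain Python for testing off-box; the sample lines below are invented for illustration, not taken from the real logs:

```python
def extract_field(lines, keyword="LastErrorCode", field=5):
    # Mirrors: grep "LastErrorCode" | sed 's/,/ /g' | awk '{print $5}'
    out = []
    for line in lines:
        if keyword in line:
            fields = line.replace(",", " ").split()
            if len(fields) >= field:
                out.append(fields[field - 1])   # awk fields are 1-based
    return out

# hypothetical fatal-log lines in the same comma-separated shape the script expects
sample = [
    "G1,27-22:43:01,Crash LastErrorCode 0x0000000C addr 0x7f3a",
    "G1,27-22:43:05,heartbeat ok",
]
print(extract_field(sample))   # ['0x0000000C']
```

Unlike the shell pipeline, this version also guards against short lines instead of emitting empty fields.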

    Other examples:

    1 Find a given process

    ps -ef|grep svn

    2 Count the matching processes

    ps -ef|grep svn -c

    ps -ef|grep -c svn

    3 ps x | grep sh | wc -l (note that the grep process itself also matches "sh", so it is included in the count)

    [root@DJWANPC ~]# ps x | grep sh
      400 ?        S      0:05 [kdmflush]
      402 ?        S      0:00 [kdmflush]
      987 ?        S      0:00 [kdmflush]
     1891 ?        Ss     0:00 /usr/sbin/sshd
     1926 ?        S      0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/DJWANPC.pid
     2378 tty1     Ss+    0:00 -bash
    24415 ?        Ss     0:01 sshd: root@pts/0 
    24417 pts/0    Ss     0:00 -bash
    30828 ?        S      0:00 [flush-253:0]
    30847 pts/0    S+     0:00 grep sh
    [root@DJWANPC ~]# ps x | grep sh | wc -l
    10
    

    4 Per-line statistics: take the third comma-separated field of each line, count the occurrences of each value, and sort by count, descending

    cat XXX.txt | awk -F, '{print $3}' | sort | uniq -c | sort -nr > result.txt

    5 Show only the top ten of the previous result

    Append:  | head -n 10
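Examples 4 and 5 have a direct pure-Python equivalent via collections.Counter; a sketch with invented input lines (tie-breaking order may differ from sort -nr):

```python
from collections import Counter

def top_field3(lines, n=10):
    # Mirrors: awk -F, '{print $3}' | sort | uniq -c | sort -nr | head -n 10
    counts = Counter(line.split(",")[2] for line in lines if line.count(",") >= 2)
    return counts.most_common(n)

sample = [
    "a,b,login", "a,b,login", "a,b,logout", "c,d,login",
]
print(top_field3(sample))   # [('login', 3), ('logout', 1)]
```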


  • A big-data log analysis example

    1,000+ views · 2014-03-15 20:31:59

    1. The project analyzes Apache logs.
    
    2. Data characteristics: Apache logs, with one log file produced per day.
    
    3. Both the historical log files and each day's new file must be processed.
    
    4. Processing steps
    
    4.1 Upload the logs from the Linux machine into HDFS
    
    4.1.1 When the Apache logs are on the same server as Hadoop, they can be uploaded directly with a command;
    
         the upload commands live in shell scripts, one for the one-off upload of all historical logs, one that repeats every day:
    
     init.sh
    
     daily.sh
    
     Register daily.sh in crontab so it runs every day. Run crontab -e and add the entry:
    
     * 1 * * * /apache_logs/daily.sh
    
     (note: this entry fires every minute from 01:00 to 01:59; a 0 in the minute field, 0 1 * * *, runs it once at 01:00)
    
    4.1.2 When the Apache logs and Hadoop are on different servers, access the logs through an NFS shared disk;
    
    4.1.3 When there are many Apache servers, use Flume for distributed data collection;
    
    4.2 Data cleaning: strip out information irrelevant to the business, dirty data, etc., and put the cleaned data under /hmbbs_cleaned/YYYY-mm-dd
    
    4.3 Process the data with a Hive external partitioned table
    
    4.3.1 Create an external partitioned table
    
          CREATE EXTERNAL TABLE hmbbs(ip string, logtime string, url string) PARTITIONED BY (logdate string) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LOCATION '/hmbbs_cleaned';
    
    4.3.2 Each day, add a partition for the new date
    
          ALTER TABLE hmbbs ADD PARTITION(logdate="2013_05_31") LOCATION "/hmbbs_cleaned/2013_05_31";
    
    4.3.3 Compute the metrics for a given day
    
          --pv (page views)
    
          SELECT COUNT(1) FROM hmbbs WHERE logdate='2013_05_30';
    
      CREATE TABLE pv_2013_05_30 AS SELECT '2013_05_30', COUNT(1) FROM hmbbs WHERE logdate='2013_05_30';
    
      --reguser (new registrations)
    
      SELECT COUNT(1) FROM hmbbs WHERE logdate='2013_05_30' AND instr(url, 'member.php?mod=register')>0;
    
      CREATE TABLE reguser_2013_05_30 AS SELECT '2013_05_30', COUNT(1) FROM hmbbs WHERE logdate='2013_05_30' AND instr(url, 'member.php?mod=register')>0;
    
          --ip (distinct visitor IPs)
    
      SELECT COUNT(DISTINCT ip) FROM hmbbs WHERE logdate='2013_05_30';
    
          CREATE TABLE ip_2013_05_30 AS SELECT '2013_05_30', COUNT(DISTINCT ip) FROM hmbbs WHERE logdate='2013_05_30';
    
      --jumper (bounces: IPs that made exactly one request)
    
      SELECT COUNT(1) FROM (SELECT ip, COUNT(1) FROM hmbbs WHERE logdate='2013_05_30' GROUP BY ip HAVING COUNT(1)=1) t;
    
      CREATE TABLE jumper_2013_05_30 AS SELECT '2013_05_30', COUNT(1) FROM (SELECT ip, COUNT(1) FROM hmbbs WHERE logdate='2013_05_30' GROUP BY ip HAVING COUNT(1)=1) t;
    
    4.3.4 Export each day's statistics to MySQL
    
    sqoop export --connect jdbc:mysql://hadoop0:3306/hmbbs --username root --password admin --table daily_pv --export-dir '/user/hive/warehouse/pv_2013_05_30' --fields-terminated-by '\001'
    
    sqoop export --connect jdbc:mysql://hadoop0:3306/hmbbs --username root --password admin --table daily_reguser --export-dir '/user/hive/warehouse/reguser_2013_05_30' --fields-terminated-by '\001'
    
    sqoop export --connect jdbc:mysql://hadoop0:3306/hmbbs --username root --password admin --table daily_ip --export-dir '/user/hive/warehouse/ip_2013_05_30' --fields-terminated-by '\001'
    
    sqoop export --connect jdbc:mysql://hadoop0:3306/hmbbs --username root --password admin --table daily_jumper --export-dir '/user/hive/warehouse/jumper_2013_05_30' --fields-terminated-by '\001'
    
    4.4 Query detail records with HBase
    
    4.4.1 Create the HBase table, executing create 'hmbbs','cf'
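As a sanity check on a tiny sample, the pv / reguser / ip / jumper metrics above can be mimicked in plain Python (the rows are invented; this is a sketch, not a substitute for the Hive queries):

```python
from collections import Counter

# invented cleaned rows: (ip, logtime, url), all belonging to one logdate partition
rows = [
    ("1.1.1.1", "t1", "/index.php"),
    ("1.1.1.1", "t2", "/member.php?mod=register"),
    ("2.2.2.2", "t3", "/index.php"),
]

pv = len(rows)                                       # SELECT COUNT(1)
ips = len({ip for ip, _, _ in rows})                 # COUNT(DISTINCT ip)
reguser = sum("member.php?mod=register" in url       # instr(url, ...) > 0
              for _, _, url in rows)
hits = Counter(ip for ip, _, _ in rows)
jumper = sum(1 for c in hits.values() if c == 1)     # IPs with exactly one request

print(pv, ips, reguser, jumper)   # 3 2 1 1
```

Running the same sample through the Hive queries should give identical numbers, which makes this handy for validating the cleaning step.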



    A Python approach to Apache website log analysis.

    Techniques used: shell/Python interaction, data fetching, and encoding conversion.

    #!/usr/bin/python
    #coding:utf-8
    '''
    Program: apache access.log analysis
    Analyzes where the IPs visiting the site come from
    Date: 2014-01-06 17:01
    author:gyh9711
    Techniques used: shell/Python interaction, data fetching, encoding conversion
    '''
    import os
    import json
    import httplib
    import codecs
    
    LogFile = '/var/log/apache2/access.log'
    # output report file
    logMess = '/tmp/acc.log'
    if os.path.isfile(logMess):
        os.system('cp /dev/null %s' % logMess)
    file = codecs.open(logMess, 'w+', encoding='utf-8')
    
    def cmd(cmd):
        return os.popen(cmd).readlines()
    
    '''
    def getIp(ip):
        return json.loads(os.popen("/usr/bin/curl http://ip.taobao.com/service/getIpInfo.php?ip=%s" % ip).readline())['data']
    '''
    
    conn = httplib.HTTPConnection('ip.taobao.com')
    
    def getIpCountry(ip):
        conn.request('GET', '/service/getIpInfo.php?ip=%s' % ip)
        r1 = conn.getresponse()
        if r1.status == 200:
            return json.loads(r1.read())['data']
        else:
            return "Error"
    
    # parse access.log and collect (count, ip) pairs into a Python list
    file.write(u"Fields: ip, visit count, country, city, isp id, province, area\n")
    ipDb = []
    for i in cmd('''/usr/bin/awk '{print $1}' %s |sort |uniq -c''' % LogFile):
        ip = i.strip().split(' ')
        ipDb.append(ip)
    # resolve each IP's origin through the taobao IP lookup API
    for i in ipDb:
        _tmpD = getIpCountry(i[1])
        # fields: ip, visit count, country, city, isp id, province, area
        out = "%s%s%s%s%s%s%s" % (i[1].ljust(20), i[0].ljust(10), _tmpD['country'].ljust(20), _tmpD['city'].ljust(16), _tmpD['isp_id'].ljust(16), _tmpD['region'].ljust(16), _tmpD['area'].ljust(16))
        print out
        file.write("%s\n" % out)
    conn.close()
    file.close()

    Reposted from: https://www.cnblogs.com/cfinder010/p/3830470.html
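The script above shells out to awk | sort | uniq -c to count visits per IP; the same step can stay in pure Python with collections.Counter, avoiding the subprocess. A sketch with invented access-log lines:

```python
from collections import Counter

def count_ips(lines):
    # Equivalent of: awk '{print $1}' access.log | sort | uniq -c
    # (the client IP is the first whitespace-separated field of a combined-format line)
    return Counter(line.split()[0] for line in lines if line.strip())

access = [
    '1.2.3.4 - - [06/Jan/2014:17:01:00 +0800] "GET / HTTP/1.1" 200 512',
    '1.2.3.4 - - [06/Jan/2014:17:01:05 +0800] "GET /a HTTP/1.1" 200 128',
    '5.6.7.8 - - [06/Jan/2014:17:02:00 +0800] "GET / HTTP/1.1" 404 0',
]
print(count_ips(access))
```

Counter.most_common() then replaces the sort step when only the heaviest visitors are wanted.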

  • A hands-on Hive log analysis example

    1,000+ views · 2014-01-23 22:03:00
    Log format: 36.248.169.9 - - [22/Sep/2013:01:21:45 +0800] "GET /mapreduce/hadoop-terasort-analyse/ HTTP/1.1" 200 22166 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1;)" 36.248.169.9 - ...
  • How to analyze Linux logs

    2015-08-11 19:13:50
    This article introduces some basic log analysis examples you can do right away (searching alone is enough) and touches on more advanced analysis. Advanced examples include generating summary counts and filtering for valid values. It starts by showing how to use several different command-line tools, ...
  • How to analyze Linux logs

    2017-05-02 17:08:00
    In this article we cover some basic log analysis examples you can do right away (searching alone is enough). We also touch on more advanced analysis, which takes some upfront setup but saves a lot of time later. Advanced examples include generating summary ...
  • How to analyze Linux logs

    2020-04-13 11:51:46
    In this article we cover some basic log analysis examples you can do right away (searching alone is enough). We also touch on more advanced analysis, which takes some upfront setup but saves a lot of time later. Advanced examples include ...
  • A log analysis script

    2012-07-24 22:12:09
    This is a performance-tuning log analysis example. It contains six versions tracing the steps of the efficiency improvements; rigor was sacrificed where necessary to meet the most important efficiency requirement. Remember what matters most.
  • Viewing and analyzing logs

    2017-05-18 09:33:46
    Viewing and analyzing logs with grep, sed, sort, and awk. Overview: we rely on logs every day; they are a key basis for troubleshooting. But writing logs is only half the job: when you actually go to read them, you will find the production logs piled up ...
  • GC log analysis

    2017-06-11 22:38:00
    Following the previous post (analyzing memory leaks with Eclipse Memory Analyzer), this analyzes GC logs. (1) First, an example of the log output, with the flags: -XX:+PrintGCDetails -XX:-UseAdaptiveSizePolicy -XX:SurvivorRatio=8 -...
