  • Logstash installation and configuration

    2019-11-05 14:54:15

    1. Logstash: a log collector, similar in role to Flume

    2. Installation: extract the archive, then configure it

    Purpose: collecting log files

    (1) Console input and console output: input { stdin { } } output { stdout {} }

    Command:

    bin/logstash -e 'input { stdin { } } output { stdout {} }'

    Type helloyou in the console; Logstash will collect it and print it back to the console.

    (2) Putting the configuration in a file

    vi console.conf

    input { stdin { } } output { stdout {} }

    bin/logstash -f   console.conf     

    (3) Watching a file and collecting data whenever it changes

     vi file.conf

    input {
        file{
            path => "/root/apps/logstash-5.6.16/data.txt"
        }
     }
    output {
         stdout {}
     }

    Start it with:

    bin/logstash -f file.conf
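    To see the file input pick something up, append a line to the watched file from another terminal; the file input tails the file, so newly written lines are collected and printed (the line content here is just an example):

    echo "hello from data.txt" >> /root/apps/logstash-5.6.16/data.txt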

  • logstash installation and configuration

    2020-12-06 21:25:56

    Logstash is used for data collection and data migration; for example, we can migrate MySQL data into ES.
    Installation steps
    Install from the rpm package:
    wget https://artifacts.elastic.co/downloads/logstash/logstash-6.7.0.rpm
    to download the rpm package, then
    rpm -ivh logstash-6.7.0.rpm
    and that's it.
    Logstash's installation directory: /usr/share/logstash
    Directory for writing pipeline scripts: /etc/logstash/conf.d
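    As a concrete illustration of the MySQL-to-ES migration mentioned above, a minimal pipeline config dropped into /etc/logstash/conf.d could look like the sketch below. All connection details, the table name my_table, the index name, and the driver path are illustrative assumptions, and the MySQL JDBC driver jar must be available at the given path:

    input {
      jdbc {
        jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"   # assumed database address
        jdbc_user => "root"                                            # assumed credentials
        jdbc_password => "password"
        jdbc_driver_library => "/path/to/mysql-connector-java.jar"     # path to the MySQL JDBC driver jar
        jdbc_driver_class => "com.mysql.jdbc.Driver"
        statement => "SELECT * FROM my_table"                          # assumed table
      }
    }
    output {
      elasticsearch {
        hosts => ["localhost:9200"]                                    # assumed ES address
        index => "my_table"
      }
    }

    Such a file can then be run with bin/logstash -f /etc/logstash/conf.d/<your-file>.conf, or picked up by the logstash service, depending on how Logstash is started.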

  • docker + ELK 7.8 in practice: Logstash installation and configuration

    2020-08-10 19:00:57
    This article covers the process of using a file as input and, through Logstash pipeline configuration, outputting to ES and to the console; it relies on an ES cluster environment already configured with TLS.

    1. Pull the image

    [root@localhost conf]# docker pull logstash:7.8.1
    Trying to pull repository docker.io/library/logstash ... 
    7.8.1: Pulling from docker.io/library/logstash
    524b0c1e57f8: Already exists 
    4ea79d464a65: Pull complete 
    245cfcbe00e5: Pull complete 
    1b4d03815886: Pull complete 
    505552b55db2: Pull complete 
    d440869a711b: Pull complete 
    086ef50d80ce: Pull complete 
    11b8a22f5fe6: Pull complete 
    aece5f411b8b: Pull complete 
    f7f6ec9f2b6e: Pull complete 
    03353e162ddf: Pull complete 
    Digest: sha256:f7ff8907ac010e35df8447ea8ea32ea57d07fb261316d92644f168c63cf99287
    Status: Downloaded newer image for docker.io/logstash:7.8.1
    

    2. Prepare the configuration files

    1) Create the directories and grant permissions

    [root@localhost conf]# mkdir -p /opt/elk7/logstash/conf
    [root@localhost conf]# cd /opt/elk7/logstash/conf/
    [root@localhost conf]# ls
    [root@localhost conf]# touch logstash.conf
    [root@localhost conf]# touch logstash.yml
    [root@localhost conf]# touch pipelines.yml
    [root@localhost conf]# mkdir -p /opt/elk7/logstash/pipeline
    [root@localhost conf]# mkdir -p /opt/elk7/logstash/data
    [root@localhost conf]# chmod 777 -R /opt/elk7/logstash
    

    2) Prepare logstash.yml

    Location (host ---- container): /opt/elk7/logstash/conf ---- /usr/share/logstash/config

    node.name: logstash-203
    # Log file directory
    path.logs: /usr/share/logstash/logs
    # Validate the configuration file and exit
    config.test_and_exit: false
    # Whether to reload the configuration automatically when it changes
    config.reload.automatic: false
    # Interval for checking configuration changes
    config.reload.interval: 60s
    # Debug mode: when enabled, the parsed configuration is printed,
    # including passwords and other secrets, so use with caution.
    # The log level must also be set to debug.
    config.debug: true
    log.level: debug
    # The bind address for the metrics REST endpoint.
    http.host: 0.0.0.0
    # Log format: json/plain
    log.format: json
    

    3) Prepare the pipeline configs (logstash-file.conf / logstash-file2.conf)

    Location (host ---- container): /opt/elk7/logstash/pipeline ---- /usr/share/logstash/pipeline

    To make it easier to demonstrate multiple pipelines and to test, two files under data are used here.

    Expected result: after modifying and saving test.log, corresponding output should appear in the Logstash log.

    logstash-file.conf

    input {
    	file{
    		path => "/usr/share/logstash/data/test.log"
    		codec => json
    		start_position => "beginning"
    	}
    }
    output {
    	stdout {
            codec => rubydebug
        }
    }
    

    logstash-file2.conf

    input {
    	file{
    		path => "/usr/share/logstash/data/test2.log"
    		codec => plain
    		start_position => "beginning"
    	}
    }
    output {
    	stdout {
            codec => rubydebug
        }
    }
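    A variation that can help while testing: the file input remembers how far it has read in a sincedb file (see the "No sincedb_path set" lines in the logs further below), so content that has already been read is not re-read after a restart even with start_position => "beginning". For throwaway tests you can point sincedb at /dev/null so read positions are never persisted; a sketch based on logstash-file2.conf, where only the sincedb_path line is new:

    input {
      file {
        path => "/usr/share/logstash/data/test2.log"
        codec => plain
        start_position => "beginning"
        sincedb_path => "/dev/null"   # do not persist read positions (testing only)
      }
    }
    output {
      stdout {
        codec => rubydebug
      }
    }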
    

    4) Prepare pipelines.yml

    Location (host ---- container): /opt/elk7/logstash/conf ---- /usr/share/logstash/config

    - pipeline.id: main
      path.config: /usr/share/logstash/pipeline/logstash-file.conf

    - pipeline.id: file2
      path.config: /usr/share/logstash/pipeline/logstash-file2.conf
    

    5) Logging configuration file: log4j2.properties

    Location (host ---- container): /opt/elk7/logstash/conf ---- /usr/share/logstash/config

    status = error
    name = LogstashPropertiesConfig
    
    appender.console.type = Console
    appender.console.name = plain_console
    appender.console.layout.type = PatternLayout
    appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c]%notEmpty{[%X{pipeline.id}]}%notEmpty{[%X{plugin.id}]} %m%n
    
    appender.json_console.type = Console
    appender.json_console.name = json_console
    appender.json_console.layout.type = JSONLayout
    appender.json_console.layout.compact = true
    appender.json_console.layout.eventEol = true
    
    rootLogger.level = ${sys:ls.log.level}
    rootLogger.appenderRef.console.ref = ${sys:ls.log.format}_console
    

    6) JVM configuration file: jvm.options

    Location (host ---- container): /opt/elk7/logstash/conf ---- /usr/share/logstash/config

    -Xmx512m
    -Xms512m
    

    7) Startup parameter configuration: startup.options

    Location (host ---- container): /opt/elk7/logstash/conf ---- /usr/share/logstash/config

    ################################################################################
    # These settings are ONLY used by $LS_HOME/bin/system-install to create a custom
    # startup script for Logstash and is not used by Logstash itself. It should
    # automagically use the init system (systemd, upstart, sysv, etc.) that your
    # Linux distribution uses.
    #
    # After changing anything here, you need to re-run $LS_HOME/bin/system-install
    # as root to push the changes to the init script.
    ################################################################################
    
    # Override Java location
    #JAVACMD=/usr/bin/java
    
    # Set a home directory
    LS_HOME=/usr/share/logstash
    
    # logstash settings directory, the path which contains logstash.yml
    LS_SETTINGS_DIR=/etc/logstash
    
    # Arguments to pass to logstash
    LS_OPTS="--path.settings ${LS_SETTINGS_DIR}"
    
    # Arguments to pass to java
    LS_JAVA_OPTS=""
    
    # pidfiles aren't used the same way for upstart and systemd; this is for sysv users.
    LS_PIDFILE=/var/run/logstash.pid
    
    # user and group id to be invoked as
    LS_USER=logstash
    LS_GROUP=logstash
    
    # Enable GC logging by uncommenting the appropriate lines in the GC logging
    # section in jvm.options
    LS_GC_LOG_FILE=/var/log/logstash/gc.log
    
    # Open file limit
    LS_OPEN_FILES=16384
    
    # Nice level
    LS_NICE=19
    
    # Change these to have the init script named and described differently
    # This is useful when running multiple instances of Logstash on the same
    # physical box or vm
    SERVICE_NAME="logstash"
    SERVICE_DESCRIPTION="logstash"
    
    # If you need to run a command or script before launching Logstash, put it
    # between the lines beginning with `read` and `EOM`, and uncomment those lines.
    ###
    ## read -r -d '' PRESTART << EOM
    ## EOM
    

    3. Recreate the logstash container and run it

    docker run -it --name logstash \
    -v /opt/elk7/logstash/conf:/usr/share/logstash/config \
    -v /opt/elk7/logstash/data:/usr/share/logstash/data \
    -v /opt/elk7/logstash/logs:/usr/share/logstash/logs \
    -v /opt/elk7/logstash/pipeline:/usr/share/logstash/pipeline \
    -d logstash:7.8.1
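    Note: the command above publishes no ports, yet the docker ps output below shows 0.0.0.0:9600->9600/tcp. If you also want the Logstash monitoring API (port 9600) reachable from the host, add a port mapping when creating the container; a variant of the same command, with only the -p line added:

    docker run -it --name logstash \
    -p 9600:9600 \
    -v /opt/elk7/logstash/conf:/usr/share/logstash/config \
    -v /opt/elk7/logstash/data:/usr/share/logstash/data \
    -v /opt/elk7/logstash/logs:/usr/share/logstash/logs \
    -v /opt/elk7/logstash/pipeline:/usr/share/logstash/pipeline \
    -d logstash:7.8.1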
    

    Check the container status

    [root@kf202 conf]# docker ps
    CONTAINER ID        IMAGE                               COMMAND                  CREATED             STATUS              PORTS                                                                NAMES
    c5d28e4f25ed        172.16.10.205:5000/logstash:7.8.1   "/usr/local/bin/dock…"   2 hours ago         Up 2 hours          5044/tcp, 0.0.0.0:9600->9600/tcp                                     logstash
    5dd146f7d16f        172.16.10.205:5000/kibana:7.8.1     "/usr/local/bin/dumb…"   6 hours ago         Up 6 hours          0.0.0.0:5601->5601/tcp                                               kibana
    edf01440dbb2        elasticsearch:7.8.1                 "/tini -- /usr/local…"   6 hours ago         Up 6 hours          9200/tcp, 0.0.0.0:9203->9203/tcp, 9300/tcp, 0.0.0.0:9303->9303/tcp   es-03
    281a9e99e0d4        elasticsearch:7.8.1                 "/tini -- /usr/local…"   6 hours ago         Up 6 hours          9200/tcp, 0.0.0.0:9202->9202/tcp, 9300/tcp, 0.0.0.0:9302->9302/tcp   es-02
    1a0d40f6861a        elasticsearch:7.8.1                 "/tini -- /usr/local…"   6 hours ago         Up 6 hours          0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp                       es-01
    

    Check the container logs

    [root@kf202 conf]# docker logs -f logstash --tail 200
    OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
    WARNING: An illegal reflective access operation has occurred
    WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.2.11.1.jar) to method sun.nio.ch.NativeThread.signal(long)
    WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
    WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
    WARNING: All illegal access operations will be denied in a future release
    Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
    [2020-08-10T06:25:11,263][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.8.1", "jruby.version"=>"jruby 9.2.11.1 (2.5.7) 2020-03-25 b1f55b1a40 OpenJDK 64-Bit Server VM 11.0.7+10-LTS on 11.0.7+10-LTS +indy +jit [linux-x86_64]"}
    [2020-08-10T06:25:12,924][INFO ][org.reflections.Reflections] Reflections took 40 ms to scan 1 urls, producing 21 keys and 41 values 
    [2020-08-10T06:25:13,830][INFO ][logstash.javapipeline    ][file2] Starting pipeline {:pipeline_id=>"file2", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash-file2.conf"], :thread=>"#<Thread:0xc2e0c06@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:122 run>"}
    [2020-08-10T06:25:13,827][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash-file.conf"], :thread=>"#<Thread:0x15b73d10 run>"}
    [2020-08-10T06:25:15,019][INFO ][logstash.inputs.file     ][file2] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/usr/share/logstash/data/plugins/inputs/file/.sincedb_897786e80ff00d65a6928cab732327f8", :path=>["/usr/share/logstash/data/test2.log"]}
    [2020-08-10T06:25:15,025][INFO ][logstash.inputs.file     ][main] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/usr/share/logstash/data/plugins/inputs/file/.sincedb_e6a3fcb43f05e42e8f9d3130699f14de", :path=>["/usr/share/logstash/data/test.log"]}
    [2020-08-10T06:25:15,041][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
    [2020-08-10T06:25:15,042][INFO ][logstash.javapipeline    ][file2] Pipeline started {"pipeline.id"=>"file2"}
    [2020-08-10T06:25:15,100][INFO ][logstash.agent           ] Pipelines running {:count=>2, :running_pipelines=>[:main, :file2], :non_running_pipelines=>[]}
    [2020-08-10T06:25:15,138][INFO ][filewatch.observingtail  ][main][cf77cbf866922c4bd1db2874cf9f2e93205e6dd41b95c29ad607347574c6d414] START, creating Discoverer, Watch with file and sincedb collections
    [2020-08-10T06:25:15,148][INFO ][filewatch.observingtail  ][file2][f967bb0285eea34805b3ab366df25a6fe116eb0521456be1bff642b6e58ab95b] START, creating Discoverer, Watch with file and sincedb collections
    [2020-08-10T06:25:15,476][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
    

    4. Verification

    Edit and save test.log and test2.log with vim and watch the Logstash log output (or trigger it from the shell as sketched below).
    [Screenshots in the original: JSON log output / plain log output]
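    A minimal way to trigger output without opening vim, assuming test.log and test2.log already exist under the host directory /opt/elk7/logstash/data (which is mounted to /usr/share/logstash/data in the container); appending is enough because the file input tails the files:

    # on the docker host
    echo '{"msg":"hello from test.log"}' >> /opt/elk7/logstash/data/test.log
    echo 'hello from test2.log' >> /opt/elk7/logstash/data/test2.log
    # watch the pipeline output
    docker logs -f logstash --tail 20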

    You can see that the container is running normally and that the logs are printed as expected.

    At this point, a Logstash setup that takes a file as input and prints it to the standard output console is complete.

    5. Configure output to Elasticsearch and specify the index

    Create a new logstash-es.conf

    Location (host ---- container): /opt/elk7/logstash/pipeline ---- /usr/share/logstash/pipeline

    input {
      file {
        path => "/usr/share/logstash/data/test2.log"
        codec => plain
        start_position => "beginning"
      }
    }
    
    output{
      elasticsearch {
        hosts => "172.16.10.202:9200"
        index => "logstash-file-test-%{+YYYY.MM.dd}"
        user => "elastic"
        password => "xxxxx"   # use the real password, quoted; unquoted special characters cause the parse error shown in section 6
      }
    }
    

    Modify pipelines.yml

    Location (host ---- container): /opt/elk7/logstash/conf ---- /usr/share/logstash/config

    - pipeline.id: main
      path.config: /usr/share/logstash/pipeline/logstash-file.conf

    - pipeline.id: file2
      path.config: /usr/share/logstash/pipeline/logstash-file2.conf

    - pipeline.id: es
      path.config: /usr/share/logstash/pipeline/logstash-es.conf
    

    6. Verify again

    [root@kf202 conf]# docker logs -f logstash --tail 50
    [2020-08-10T10:07:34,605][WARN ][logstash.runner          ] SIGTERM received. Shutting down.
    [2020-08-10T10:07:34,696][INFO ][filewatch.observingtail  ] QUIT - closing all files and shutting down.
    [2020-08-10T10:07:34,697][INFO ][filewatch.observingtail  ] QUIT - closing all files and shutting down.
    [2020-08-10T10:07:35,745][INFO ][logstash.javapipeline    ] Pipeline terminated {"pipeline.id"=>"file2"}
    [2020-08-10T10:07:35,746][INFO ][logstash.javapipeline    ] Pipeline terminated {"pipeline.id"=>"main"}
    [2020-08-10T10:07:35,790][INFO ][logstash.runner          ] Logstash shut down.
    OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
    WARNING: An illegal reflective access operation has occurred
    WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.2.11.1.jar) to method sun.nio.ch.NativeThread.signal(long)
    WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
    WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
    WARNING: All illegal access operations will be denied in a future release
    Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
    [2020-08-10T10:07:57,493][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.8.1", "jruby.version"=>"jruby 9.2.11.1 (2.5.7) 2020-03-25 b1f55b1a40 OpenJDK 64-Bit Server VM 11.0.7+10-LTS on 11.0.7+10-LTS +indy +jit [linux-x86_64]"}
    [2020-08-10T10:07:58,555][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:es, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [A-Za-z0-9_-], [ \\t\\r\\n], \"#\", \"{\", [A-Za-z0-9_], \"}\" at line 14, column 22 (byte 278) after output{\n  elasticsearch {\n    hosts => \"172.16.10.202:9200\"\n    index => \"logstash-file-test-%{+YYYY.MM.dd}\"\n    user => elastic\n    password => cnhqd", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:58:in `compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:66:in `compile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:28:in `block in compile_sources'", "org/jruby/RubyArray.java:2577:in `map'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:27:in `compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:181:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:67:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:44:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:52:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:356:in `block in converge_state'"]}
    [2020-08-10T10:07:59,701][INFO ][org.reflections.Reflections] Reflections took 44 ms to scan 1 urls, producing 21 keys and 41 values 
    [2020-08-10T10:08:00,546][INFO ][logstash.javapipeline    ][file2] Starting pipeline {:pipeline_id=>"file2", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash-file2.conf"], :thread=>"#<Thread:0x345e4d08 run>"}
    [2020-08-10T10:08:00,547][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash-file.conf"], :thread=>"#<Thread:0x370c3bcf@/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:54 run>"}
    [2020-08-10T10:08:01,679][INFO ][logstash.inputs.file     ][main] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/usr/share/logstash/data/plugins/inputs/file/.sincedb_e6a3fcb43f05e42e8f9d3130699f14de", :path=>["/usr/share/logstash/data/test.log"]}
    [2020-08-10T10:08:01,680][INFO ][logstash.inputs.file     ][file2] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/usr/share/logstash/data/plugins/inputs/file/.sincedb_897786e80ff00d65a6928cab732327f8", :path=>["/usr/share/logstash/data/test2.log"]}
    [2020-08-10T10:08:01,693][INFO ][logstash.javapipeline    ][file2] Pipeline started {"pipeline.id"=>"file2"}
    [2020-08-10T10:08:01,700][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
    [2020-08-10T10:08:01,770][INFO ][filewatch.observingtail  ][main][cf77cbf866922c4bd1db2874cf9f2e93205e6dd41b95c29ad607347574c6d414] START, creating Discoverer, Watch with file and sincedb collections
    [2020-08-10T10:08:01,775][INFO ][filewatch.observingtail  ][file2][f967bb0285eea34805b3ab366df25a6fe116eb0521456be1bff642b6e58ab95b] START, creating Discoverer, Watch with file and sincedb collections
    [2020-08-10T10:08:02,105][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
    

    Keep modifying test2.log and the corresponding log output appears on the console. (Note the ConfigurationError for the es pipeline in the restart log above: the unquoted password value could not be parsed, which is why user and password in logstash-es.conf should be quoted strings.)

    View the data in ES through Kibana.

    You can see that data is indeed getting into ES.
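    Besides Kibana, a quick way to confirm that the index was created is to query ES directly. A minimal check, assuming the same host and credentials as in logstash-es.conf (replace xxxxx with the real password; the concrete index name depends on the current date, and if the cluster's HTTP layer also uses TLS, switch to https and pass the cluster's CA certificate):

    curl -u elastic:xxxxx "http://172.16.10.202:9200/_cat/indices/logstash-file-test-*?v"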

    Notes:

    1. Because the index-creation request is sent to ES by Logstash, you first have to modify and save the corresponding test2.log file before the index gets created.
    2. Kibana has the concept of an index pattern: an index in ES only appears in Kibana's Discover module after an index pattern has been created for it in the Kibana UI.
    As I understand it, an index pattern simply groups existing indices by a matching rule; the indices that match can then be browsed together under that pattern.

    This completes the setup of the file -> Logstash -> ES data flow.

  • Logstash installation and collecting log data with Elasticsearch

    (1) Installing Logstash

    Logstash can be downloaded from https://artifacts.elastic.co/downloads/logstash/logstash-6.1.3.tar.gz. Upload it to the CentOS server and go to the directory containing the archive; here it is again placed under /opt. Enter /opt and extract it as follows.

    # tar -zxvf logstash-6.1.3.tar.gz

    As before, to make later use more convenient, rename the extracted directory to logstash with the following command.

    # mv logstash-6.1.3 logstash

    First, we test the Logstash installation by running the most basic Logstash pipeline.

    A Logstash pipeline has two required elements, input and output, and one optional element, filter. Input plugins consume data from a source, filter plugins modify the data in the way you specify, and output plugins write the data to a destination.

    After installation, run the following command. The result is shown in Figure 1.
    # /opt/logstash/bin/logstash -e 'input { stdin { } } output { stdout {} }'

    As you can see, whatever you type, Logstash prints it back in a certain format. The -e flag lets Logstash accept settings directly from the command line, which is especially useful for quickly and repeatedly testing whether a configuration is correct without writing a config file. Press Ctrl-C to exit the running Logstash.

    Specifying the configuration on the command line with -e is a common approach, but if you need more settings the command line becomes very long. In that case, first create a simple configuration file and tell Logstash to use it. For example, create a "basic configuration" test file logstash-test.conf in the Logstash config directory (/opt/logstash/config) with the following content:

    input { stdin { } }

    output {

        stdout { codec=> rubydebug }

    }

    Logstash uses input and output to define the input and output configuration for log collection. In this example, input defines an input called "stdin" and output defines an output called "stdout". Whatever characters we type, Logstash returns them in a certain format; here the output is "stdout" and the codec parameter specifies the Logstash output format.

    Use Logstash's -f flag to read the configuration file and run the test as shown in Figure 2:
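    Running it and typing a line should produce rubydebug-style output roughly like the following (the field values here are only illustrative; the exact fields can vary by Logstash version):

    # /opt/logstash/bin/logstash -f /opt/logstash/config/logstash-test.conf
    hello logstash
    {
           "message" => "hello logstash",
          "@version" => "1",
        "@timestamp" => 2018-01-01T00:00:00.000Z,
              "host" => "localhost"
    }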

    (2) Testing Elasticsearch and Logstash for collecting log data

    Next, in the Logstash installation directory, create a test file Logstash-test1.conf for using Elasticsearch as Logstash's backend. It defines both stdout and elasticsearch as outputs; this "multiple output" setup prints the result to the screen and also sends it to Elasticsearch.

    A prerequisite is that both Elasticsearch and Logstash start normally (start Elasticsearch first, then Logstash).

    input { stdin { } }

    output {

        elasticsearch {hosts => "10.10.3.69:9200" } # Elasticsearch service address

        stdout { codec=> rubydebug }

    }

    Then start the service by running the following command; the result is shown in Figure 3.

    # /opt/logstash/bin/logstash -f /opt/logstash/config/logstash-test1.conf


    Open http://10.10.3.69:9200/_search?pretty in a browser to check whether ES has received the data; the result is shown in Figure 4.
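    The same check can be done from the command line; for example, the following curl call should return the indexed documents, including the test events just sent:

    curl "http://10.10.3.69:9200/_search?pretty"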

    At this point, Elasticsearch and Logstash can be used successfully to collect log data.

  • Table of contents: Introduction, Deployment and installation, Configuration in detail, Input, Filter, Output, Reading custom logs and sending the content to Elasticsearch. Introduction: Logstash is an open-source data processing pipeline that can collect data from multiple sources at the same time, transform it, and then store it in a specified store such as ES. Deployment and installation ...
  • 2. Create a new .conf file (any name) under the bin directory, for example logstash.conf, and fill in the following configuration: input { stdin { } jdbc { # database connection settings jdbc_connection_string => "jdbc:mysql://localhost:3306/jgmes_
  • logstash configuration files

    2018-08-30 21:49:01
    The configuration files cover Logstash's stdin, file, tcp, udp, syslog, beats, grok and elasticsearch plugin configuration.
  • Logstash installation, configuration and usage

    2019-03-02 11:25:00
    1. Installing and running on Windows: download from the official site, matching the elastic... version; extract logstash and run a command under the bin directory (see the command below): adding the -e flag lets you specify the configuration directly on the command line. logstash -e "" or: logstash -e "input { s...
  • Logstash installation and configuration

    2019-10-18 15:02:18
    Recently I have been using ELK to manage the access logs of a networked project. Elasticsearch and Kibana are provided by the company's systems, but Logstash has to be set up by ourselves; the project's access log... Here is a record of the Logstash setup and configuration process. ## Installation ```bash # Download and install the public signing key: su...
  • Logstash installation and deployment

    2021-11-03 16:34:32
    1. Preparation: Logstash is developed in Java, so install the JDK first. Logstash 7 currently only supports ... 2. Installation and configuration: [root@localhost app]# tar xf logstash-7.12.1-linux-x86_64.tar.gz [root@localhost app]# mv logst...
  • LogStash installation, deployment and application

    2019-09-17 22:20:30
    Introduction: 1. Logstash is a tool for receiving, processing and forwarding logs; 2. Logstash supports network logs, system logs, application logs, Apache logs and so on; in short, it can handle all log types; 3. Typical scenario, ELK: Logstash is responsible for collection, ...
  • logstash installation and configuration

    2019-08-15 15:05:21
    First, note that the Logstash version used here is 6.x, not 7.x; with 7.0 you get the warning: You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for ...
  • ELK: Logstash installation and configuration

    2016-03-11 20:05:52
    1. Basic concepts. ELK: Elasticsearch + Logstash + Kibana ... What ELK does: integrated data collection, analysis and display. Logstash's role: data collection. Elasticsearch's role: data storage and analysis. ... (a) Logstash depends on a JRE to run; for installing the JRE see:
  • Logstash installation, Windows edition

    2020-01-09 11:54:09
    1. First download the Windows version of Logstash; the latest official version: ... Configuration file: create a folder named my-link and place the driver below into that directory. Link: https://pan.baidu.com/s/1Epgbk53Gbgba8g066hHzQw extraction code: 4...
  • Installation: tar -zxvf logstash-6.2.2.tar.gz mv logstash-6.2.2 /home/elk/logstash-6.2.2-01 cp -r /home/elk/logstash-6.2.2-01 /home/elk/logstash-6.2.2-02 Configuration: logstash-pipeline01.conf input{ kafka{ b...
  • Logstash configuration files

    2021-05-15 19:06:55
    Logstash has two types of configuration files: *pipeline configuration files*, which define pipelines (I will call them pipeline config files below), and *settings files*, which control how Logstash starts up and runs. ... For Logstash installed from deb or rpm packages, the pipeline config files are located in `/etc/...
  • Logstash installation and deployment (integrated with ES)

    2020-05-30 10:03:13
    Download the kibana package ... latest version: ... extract: tar -xzvf logstash-6.2.2.tar.gz /usr/local/logstash-6.2.2; upload the database driver jar into the Logstash directory /usr/local/logstash-6.2.2; in /usr/local/log
  • Installing and configuring Logstash on Linux

    2019-03-18 11:12:33
    1 Installing Logstash on CentOS 1.1 Installing from the downloaded archive; download address: https://www.elastic.co/cn/downloads/logstash# 1.1.1 Download and extract Logstash 1.1.2 Prepare a logstash .conf configuration file 1.1.3 Run bin/logstash -f ...
  • Contents. Part 1: find the Logstash image. Part 2: pull the Logstash image. Part 3: create the Logstash container: 1. first create a container in order to grab its configuration files, 2. create the required mount files, 2. grant permissions on the folders, 3. start it from the command line, 4. check whether it started successfully. Part 4: MySQL sync configuration. Part 5: common problems ...
  • Logstash installation and configuration

    2018-08-22 17:16:10
    ELK in practice: https://blog.csdn.net/beyond_qjm/article/details/81943187 1. Download: https://www.elastic.co/downloads/past-releases https://artifacts.elastic.co/downloads/logstash/logstash...
  • logstash configuration file

    2018-05-25 13:41:57
    The exe.conf configuration file for logstash...
  • Understand the version compatibility matrix and decide which Logstash version to install. 2. How Logstash works: the Logstash event processing pipeline has three stages, input → filter → output; inputs generate events, filters modify them, and outputs send them elsewhere. Inputs and outputs support codecs, which let you...
  • 1. Download Logstash from the official site https://www.elastic.co/cn/downloads/logstash Note: the elasticsearch + kibana + logstash versions must stay consistent!!! 2. Download the dataset from the MovieLens site https://grouplens.org/datasets/movielens/20m/ 2.1 ...
  • logstash kafka elasticsearch hadoop
  • Logstash configuration file for custom files
