ELK Environment Setup and Spring Boot Integration Test

2018-01-31 11:01:12 · Source: oschina · Author: 日落北极

Environment: CentOS 7.3


I. Installing ElasticSearch

Directory: /usr/local/elk/es


1. Download the ElasticSearch package


wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.1.2.tar.gz

2. Extract and run


tar xf elasticsearch-6.1.2.tar.gz
sh elasticsearch-6.1.2/bin/elasticsearch

It fails with the following error:


[2018-01-24T13:59:16,633][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:125) ~[elasticsearch-6.1.2.jar:6.1.2]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:112) ~[elasticsearch-6.1.2.jar:6.1.2]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-6.1.2.jar:6.1.2]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) ~[elasticsearch-cli-6.1.2.jar:6.1.2]
at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-6.1.2.jar:6.1.2]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-6.1.2.jar:6.1.2]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:85) ~[elasticsearch-6.1.2.jar:6.1.2]
Caused by: java.lang.RuntimeException: can not run elasticsearch as root

Elasticsearch refuses to run as root, so create a dedicated user es:

useradd es
passwd es

Then change ownership of the installation directory to the new user:

chown -R es:es /usr/local/es

Edit config/elasticsearch.yml so the node listens on the host's address:


network.host: 192.168.15.38
http.port: 9200

Start it again; this time it reports:


[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]

Fix: as root, edit /etc/security/limits.conf

vim /etc/security/limits.conf

and append the following, then log out and back in as es so the new limits take effect:

es hard nofile 65536
es soft nofile 65536

[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Fix: switch to the root user and edit /etc/sysctl.conf

vi /etc/sysctl.conf

adding:

vm.max_map_count=655360

Then apply it:

sysctl -p
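Before retrying the startup, the two values that the bootstrap checks inspect can be verified programmatically. A minimal Python sketch (Linux-only; run it as the es user after re-logging in so limits.conf is re-read):

```python
# Check the two bootstrap-check values fixed above:
#   nofile (open-file limit)   -> must be >= 65536
#   vm.max_map_count           -> must be >= 262144
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)  # the "nofile" limit
print("nofile soft/hard:", soft, hard)

try:
    with open("/proc/sys/vm/max_map_count") as f:
        print("vm.max_map_count:", f.read().strip())
except FileNotFoundError:
    print("vm.max_map_count: not exposed by this kernel")
```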

Run ./elasticsearch again as the es user:


[2018-01-24T15:36:35,412][INFO ][o.e.n.Node ] [] initializing ...
[2018-01-24T15:36:35,508][INFO ][o.e.e.NodeEnvironment] [KMyyO-3] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [46.8gb], net total_space [49.9gb], types [rootfs]
[2018-01-24T15:36:35,509][INFO ][o.e.e.NodeEnvironment] [KMyyO-3] heap size [990.7mb], compressed ordinary object pointers [true]
[2018-01-24T15:36:35,510][INFO ][o.e.n.Node ] node name [KMyyO-3] derived from node ID [KMyyO-3KRPy_Q3Eb0mYDaw]; set [node.name] to override
[2018-01-24T15:36:35,511][INFO ][o.e.n.Node ] version[6.1.2], pid[3404], build[5b1fea5/2018-01-10T02:35:59.208Z], OS[Linux/3.10.0-514.el7.x86_64/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_144/25.144-b01]
[2018-01-24T15:36:35,511][INFO ][o.e.n.Node ] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/local/es/elasticsearch-6.1.2, -Des.path.conf=/usr/local/es/elasticsearch-6.1.2/config]
[2018-01-24T15:36:36,449][INFO ][o.e.p.PluginsService ] [KMyyO-3] loaded module [aggs-matrix-stats]
[2018-01-24T15:36:36,450][INFO ][o.e.p.PluginsService ] [KMyyO-3] loaded module [analysis-common]
[2018-01-24T15:36:36,450][INFO ][o.e.p.PluginsService ] [KMyyO-3] loaded module [ingest-common]
[2018-01-24T15:36:36,450][INFO ][o.e.p.PluginsService ] [KMyyO-3] loaded module [lang-expression]
[2018-01-24T15:36:36,450][INFO ][o.e.p.PluginsService ] [KMyyO-3] loaded module [lang-mustache]
[2018-01-24T15:36:36,450][INFO ][o.e.p.PluginsService ] [KMyyO-3] loaded module [lang-painless]
[2018-01-24T15:36:36,450][INFO ][o.e.p.PluginsService ] [KMyyO-3] loaded module [mapper-extras]
[2018-01-24T15:36:36,450][INFO ][o.e.p.PluginsService ] [KMyyO-3] loaded module [parent-join]
[2018-01-24T15:36:36,450][INFO ][o.e.p.PluginsService ] [KMyyO-3] loaded module [percolator]
[2018-01-24T15:36:36,451][INFO ][o.e.p.PluginsService ] [KMyyO-3] loaded module [reindex]
[2018-01-24T15:36:36,451][INFO ][o.e.p.PluginsService ] [KMyyO-3] loaded module [repository-url]
[2018-01-24T15:36:36,451][INFO ][o.e.p.PluginsService ] [KMyyO-3] loaded module [transport-netty4]
[2018-01-24T15:36:36,451][INFO ][o.e.p.PluginsService ] [KMyyO-3] loaded module [tribe]
[2018-01-24T15:36:36,451][INFO ][o.e.p.PluginsService ] [KMyyO-3] no plugins loaded
[2018-01-24T15:36:37,956][INFO ][o.e.d.DiscoveryModule] [KMyyO-3] using discovery type [zen]
[2018-01-24T15:36:38,643][INFO ][o.e.n.Node ] initialized
[2018-01-24T15:36:38,643][INFO ][o.e.n.Node ] [KMyyO-3] starting ...
[2018-01-24T15:36:38,880][INFO ][o.e.t.TransportService ] [KMyyO-3] publish_address {192.168.15.38:9300}, bound_addresses {192.168.15.38:9300}
[2018-01-24T15:36:38,890][INFO ][o.e.b.BootstrapChecks] [KMyyO-3] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2018-01-24T15:36:41,955][INFO ][o.e.c.s.MasterService] [KMyyO-3] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {KMyyO-3}{KMyyO-3KRPy_Q3Eb0mYDaw}{RY8JlkNjT3iTPoO_VT1isw}{192.168.15.38}{192.168.15.38:9300}
[2018-01-24T15:36:41,961][INFO ][o.e.c.s.ClusterApplierService] [KMyyO-3] new_master {KMyyO-3}{KMyyO-3KRPy_Q3Eb0mYDaw}{RY8JlkNjT3iTPoO_VT1isw}{192.168.15.38}{192.168.15.38:9300}, reason: apply cluster state (from master [master {KMyyO-3}{KMyyO-3KRPy_Q3Eb0mYDaw}{RY8JlkNjT3iTPoO_VT1isw}{192.168.15.38}{192.168.15.38:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2018-01-24T15:36:41,990][INFO ][o.e.h.n.Netty4HttpServerTransport] [KMyyO-3] publish_address {192.168.15.38:9200}, bound_addresses {192.168.15.38:9200}
[2018-01-24T15:36:41,990][INFO ][o.e.n.Node ] [KMyyO-3] started
[2018-01-24T15:36:41,997][INFO ][o.e.g.GatewayService ] [KMyyO-3] recovered [0] indices into cluster_state

Startup succeeded.


Open http://192.168.15.38:9200/ in a browser:


{
"name" : "KMyyO-3",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "Z2ReGjxgTx28uA3wHT-gZg",
"version" : {
"number" : "6.1.2",
"build_hash" : "5b1fea5",
"build_date" : "2018-01-10T02:35:59.208Z",
"build_snapshot" : false,
"lucene_version" : "7.1.0",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
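The same response can be consumed programmatically. The sketch below parses the version JSON shown above, hard-coded for a self-contained demo; in a real check you would fetch http://192.168.15.38:9200/ with an HTTP client instead:

```python
import json

# The root-endpoint response shown above, trimmed to the fields we inspect.
response = '''{
  "name": "KMyyO-3",
  "cluster_name": "elasticsearch",
  "version": {"number": "6.1.2", "lucene_version": "7.1.0"},
  "tagline": "You Know, for Search"
}'''

info = json.loads(response)
major = int(info["version"]["number"].split(".")[0])
print(info["name"], "is running ES", info["version"]["number"])
assert major >= 6  # a 6.x cluster is what this walkthrough assumes
```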

Next, install the ElasticSearch head plugin.

1. Install Node.js
Download the package:


wget https://nodejs.org/dist/v8.9.1/node-v8.9.1-linux-x64.tar.xz

Extract:


tar -xJf node-v8.9.1-linux-x64.tar.xz

Configure environment variables:


vi /etc/profile

Add:


export NODE_HOME=/path/to/node-v8.9.1-linux-x64   # set this to the directory the tarball was extracted into
export JAVA_BIN=$JAVA_HOME/bin
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$NODE_HOME/bin:$PATH

Then reload the profile so the change takes effect immediately:

source /etc/profile


Check the versions:


[root@localhost node]# node -v
v8.9.1
[root@localhost node]# npm -v
5.5.1

2. Install git


yum install git -y

Check the git version:


[root@localhost node]# git --version
git version 1.8.3.1

To uninstall git later, run:


yum remove git

3. Fetch the head plugin with git


git clone https://github.com/mobz/elasticsearch-head.git

Go into the head root directory and, as root, install the dependencies:


[root@localhost elasticsearch-head]# npm install

Start the head plugin:


[es@localhost elasticsearch-head]$ npm run start

4. Edit config/elasticsearch.yml
Append the following so the head UI, which is served from a different origin, is allowed to query ES:


http.cors.enabled: true
http.cors.allow-origin: "*"

5. Start the ES service and the head plugin
Switch to the es user:


[es@localhost elasticsearch-head]$ sh ../../elasticsearch-6.1.2/bin/elasticsearch -d
[es@localhost elasticsearch-head]$ npm run start

Open http://192.168.15.38:9100/ in a browser to see the head console.


II. Installing Logstash

Directory: /usr/local/elk/logstash
Download and extract the package:


wget https://artifacts.elastic.co/downloads/logstash/logstash-6.1.2.tar.gz
tar zxvf logstash-6.1.2.tar.gz

After extracting, go into the config directory and create a configuration file named log_to_es.conf with the content below.

The input block opens a TCP listener on port 4560 to receive log events; the output block forwards everything received to ElasticSearch (and echoes it to stdout via rubydebug for troubleshooting):


input {
    tcp {
        host => "192.168.15.38"
        port => 4560
        mode => "server"
        tags => ["tags"]
        codec => json_lines
    }
}
output {
    stdout { codec => rubydebug }
    elasticsearch {
        hosts => "192.168.15.38"
    }
}

Start Logstash:


[root@localhost bin]# ./logstash -f ../config/log_to_es.conf
Sending Logstash's logs to /usr/local/logstash/logstash-6.1.2/logs which is now configured via log4j2.properties
[2018-01-30T15:54:55,100][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/local/logstash/logstash-6.1.2/modules/fb_apache/configuration"}
[2018-01-30T15:54:55,118][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/local/logstash/logstash-6.1.2/modules/netflow/configuration"}
[2018-01-30T15:54:55,724][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-01-30T15:54:56,433][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.1.2"}
[2018-01-30T15:54:56,856][INFO ][logstash.agent] Successfully started Logstash API endpoint {:port=>9600}
[2018-01-30T15:55:02,031][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.15.38:9200/]}}
[2018-01-30T15:55:02,044][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.15.38:9200/, :path=>"/"}
[2018-01-30T15:55:02,263][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://192.168.15.38:9200/"}
[2018-01-30T15:55:02,342][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>nil}
[2018-01-30T15:55:02,346][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-01-30T15:55:02,365][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-01-30T15:55:02,384][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-01-30T15:55:02,436][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//192.168.15.38"]}
[2018-01-30T15:55:02,458][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500, :thread=>"#"}
[2018-01-30T15:55:02,554][INFO ][logstash.inputs.tcp] Starting tcp input listener {:address=>"192.168.15.38:4560", :ssl_enable=>"false"}
[2018-01-30T15:55:02,765][INFO ][logstash.pipeline ] Pipeline started {"pipeline.id"=>"main"}
[2018-01-30T15:55:02,882][INFO ][logstash.agent] Pipelines running {:count=>1, :pipelines=>["main"]}

Once this output appears, Logstash is up and running.
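What travels over port 4560 is just newline-delimited JSON, the framing the json_lines codec expects and the format logstash-logback-encoder emits. The sketch below imitates one round trip locally with a throwaway TCP server on an ephemeral port, so it runs without the real Logstash host:

```python
import json
import socket
import threading

received = []

def serve_once(server):
    # Accept one connection and read a single newline-terminated event,
    # the way the tcp input with codec => json_lines would.
    conn, _ = server.accept()
    with conn, conn.makefile() as reader:
        received.append(reader.readline())

server = socket.socket()
server.bind(("127.0.0.1", 0))   # ephemeral port stands in for 4560
server.listen(1)
t = threading.Thread(target=serve_once, args=(server,))
t.start()

# One log event, JSON-encoded and terminated with a newline.
event = {"message": "log info ...", "level": "INFO", "tags": ["tags"]}
with socket.create_connection(server.getsockname()) as client:
    client.sendall((json.dumps(event) + "\n").encode("utf-8"))

t.join()
server.close()
print(json.loads(received[0])["level"])  # -> INFO
```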


III. Installing Kibana

Directory: /usr/local/elk/kibana


wget https://artifacts.elastic.co/downloads/kibana/kibana-6.1.2-linux-x86_64.tar.gz

Extract:


tar zxvf kibana-6.1.2-linux-x86_64.tar.gz

Point Kibana at the node by setting server.host: 192.168.15.38 and elasticsearch.url: http://192.168.15.38:9200 in config/kibana.yml, then run:


[root@localhost bin]# ./kibana
log [07:03:29.712] [info][status][plugin:kibana@6.1.2] Status changed from uninitialized to green - Ready
log [07:03:29.775] [info][status][plugin:elasticsearch@6.1.2] Status changed from uninitialized to yellow - Waiting for Elasticsearch
log [07:03:29.817] [info][status][plugin:console@6.1.2] Status changed from uninitialized to green - Ready
log [07:03:29.840] [info][status][plugin:elasticsearch@6.1.2] Status changed from yellow to green - Ready
log [07:03:29.856] [info][status][plugin:metrics@6.1.2] Status changed from uninitialized to green - Ready
log [07:03:30.088] [info][status][plugin:timelion@6.1.2] Status changed from uninitialized to green - Ready
log [07:03:30.094] [info][listening] Server running at http://192.168.15.38:5601

Startup succeeded.


Open http://192.168.15.38:5601/ in a browser.




IV. Spring Boot Integration

1. Add the dependency



<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>4.11</version>
</dependency>

2. Add the configuration file logback.xml

<?xml version="1.0" encoding="UTF-8"?>
<configuration>

    <!-- Ship log events to Logstash's TCP input as JSON lines -->
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>192.168.15.38:4560</destination>
        <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>

    <!-- Keep a readable copy on the console -->
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n</pattern>
        </encoder>
    </appender>

    <root level="INFO">
        <appender-ref ref="LOGSTASH"/>
        <appender-ref ref="CONSOLE"/>
    </root>

</configuration>

3. Configure application.yml


logging:
config: classpath:logback.xml

4. Integration test


@Autowired
private StudentService studentService;

@RequestMapping("/findAll")
public List<Student> findAll() {
    logger.info("log info ...");
    logger.error("log error ...");
    logger.debug("log debug ...");
    return studentService.findAll();
}

Check the Logstash output:


{
"logger_name" => "com.spark.Application",
"level_value" => 20000,
"thread_name" => "main",
"level" => "INFO",
"host" => "10.10.30.98",
"@version" => 1,
"message" => "Starting Application on DESKTOP-DBPFNEL with PID 6980 (E://workplace//es-demo//target//classes started by admin in E://workplace//es-demo)",
"port" => 63561,
"tags" => [
[0] "tags"
],
"@timestamp" => 2018-01-30T07:04:27.523Z
}
{
"logger_name" => "com.spark.Application",
"level_value" => 20000,
"thread_name" => "main",
"level" => "INFO",
"host" => "10.10.30.98",
"@version" => 1,
"message" => "No active profile set, falling back to default profiles: default",
"port" => 63561,
"tags" => [
[0] "tags"
],
"@timestamp" => 2018-01-30T07:04:27.525Z
}
{
"logger_name" => "org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext",
"level_value" => 20000,
"thread_name" => "main",
"level" => "INFO",
"host" => "10.10.30.98",
"@version" => 1,
"message" => "Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@7e3181aa: startup date [Tue Jan 30 15:04:27 CST 2018]; root of context hierarchy",
"port" => 63561,
"tags" => [
[0] "tags"
],
"@timestamp" => 2018-01-30T07:04:27.604Z
}
{
"logger_name" => "org.hibernate.validator.internal.util.Version",
"level_value" => 20000,
"thread_name" => "background-preinit",
"level" => "INFO",
"host" => "10.10.30.98",
"@version" => 1,
"message" => "HV000001: Hibernate Validator 5.3.6.Final",
"port" => 63561,
"tags" => [
[0] "tags"
],
"@timestamp" => 2018-01-30T07:04:27.677Z
}
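The level_value field in these events is logback's integer representation of the level (ch.qos.logback.classic.Level): DEBUG=10000, INFO=20000, WARN=30000, ERROR=40000. That makes numeric range filters easy once the events are in Elasticsearch. A small sketch of the mapping:

```python
# logback's standard integer level values (ch.qos.logback.classic.Level).
LOGBACK_LEVELS = {10000: "DEBUG", 20000: "INFO", 30000: "WARN", 40000: "ERROR"}

# One of the events shown above, trimmed to the two level fields.
event = {"level_value": 20000, "level": "INFO", "message": "log info ..."}
name = LOGBACK_LEVELS[event["level_value"]]
print(name)                     # -> INFO
assert name == event["level"]   # the two fields are consistent
```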

Seeing this output means the ELK pipeline works end to end.
View the collected logs in Kibana:
http://192.168.15.38:5601/




Spring Boot test project source: https://github.com/YunDongTeng/springboot-es.git
