In a previous article we covered log collection with EFK (Kibana + ElasticSearch + Filebeat). The Filebeat plug-in forwards and centralizes log data, shipping it to Elasticsearch or Logstash for indexing. However, as a member of the Elastic family, Filebeat is only used within the Elastic ecosystem.

Fluentd

Fluentd is an open-source, distributed log collection system that gathers logs from different services and data sources, filters them, and distributes them to a variety of storage and processing systems. It supports a wide range of plug-ins, provides a data buffering mechanism, requires very few resources, and has built-in reliability. Combined with other services, it can form an efficient and intuitive log collection platform.

This article introduces how to use the Fluentd plug-in in Rainbond to collect business logs and output them to multiple different services.

1. Integration Architecture

To collect a component's logs, you only need to enable the Fluentd plug-in on that component. This article demonstrates the following two approaches:

  1. Kibana + ElasticSearch + Fluentd
  2. Minio + Fluentd

We package Fluentd as a Rainbond general-type plug-in. After the application starts, the plug-in starts along with it and automatically collects logs and outputs them to multiple service endpoints. The whole process is non-intrusive to the application container and highly extensible.

2. Plug-in Principle Analysis

Rainbond V5.7.0 added a new capability: installing plug-ins from the open source app store. The plug-ins used in this article have been published to the open source app store, so they can be installed with one click and then configured as needed.

The Rainbond plug-in architecture is part of the Rainbond application model. Plug-ins are mainly used to extend the operation and maintenance capabilities of application containers. O&M tooling has a great deal in common across applications, so plug-ins themselves are reusable. A plug-in only has runtime state once it is bound to an application container, where it implements a specific O&M capability, for example performance analysis plug-ins, network governance plug-ins, or initialization plug-ins.

When building the Fluentd plug-in, we used the general-type plug-in. You can think of it as one Pod starting two containers. Kubernetes natively supports running multiple containers in a Pod, but the configuration is relatively complex; Rainbond's plug-in implementation makes this much simpler for the user.
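As a rough illustration of the pattern Rainbond automates (this is a hand-written sketch, not Rainbond's actual generated manifest; names and images are hypothetical), the equivalent native Kubernetes setup is a Pod with a log-collection sidecar sharing the application's log directory through a volume:

```yaml
# Hypothetical sidecar sketch; Rainbond generates and manages this wiring for you.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-fluentd
spec:
  containers:
    - name: nginx                 # the business container
      image: nginx:latest
      volumeMounts:
        - name: nginx-logs
          mountPath: /var/log/nginx
    - name: fluentd               # the log-collection sidecar
      image: fluent/fluentd:v1.14
      volumeMounts:
        - name: nginx-logs        # same volume, so Fluentd can tail the log files
          mountPath: /var/log/nginx
  volumes:
    - name: nginx-logs
      emptyDir: {}
```

Writing this by hand for every component is exactly the repetitive work the plug-in mechanism removes.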

3. EFK Log Collection Practice

The Fluentd-ElasticSearch7 output plug-in writes log records to Elasticsearch. By default, it creates records using the bulk API, which performs multiple indexing operations in a single API call. This reduces overhead and can greatly improve indexing speed.
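To see why batching helps, here is a minimal sketch (standard-library Python only; the index name matches the one used later in this article, but the documents are made up) of the newline-delimited body the bulk API expects, where each indexing operation contributes one action/metadata line plus one document line:

```python
import json

def build_bulk_payload(index_name, docs):
    """Build an NDJSON body for Elasticsearch's _bulk endpoint:
    one action/metadata line followed by one source line per document."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index_name}}))
        lines.append(json.dumps(doc))
    # The bulk API requires the body to end with a newline.
    return "\n".join(lines) + "\n"

payload = build_bulk_payload(
    "fluentd.es.nginx.log",
    [{"message": "GET / 200"}, {"message": "GET /missing 404"}],
)
print(payload)
```

One HTTP request carries all of these operations, which is what saves the per-request overhead compared to indexing each log event individually.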

3.1 Operation steps

The applications (Kibana + ElasticSearch) and the plug-in (Fluentd) can all be deployed with one click from the open source app store.

  1. Connect to the open source app store.
  2. Search the app store for elasticsearch and install version 7.15.2.
  3. Team view -> Plug-ins -> Install the Fluentd-ElasticSearch7 plug-in from the app store.
  4. Create a component from an image, using nginx:latest, and mount the storage path /var/log/nginx. Nginx:latest is used here only as a demonstration.
    • After the storage is mounted on the component, the plug-in mounts the same storage on its own side and can access the log files generated by Nginx.
  5. Enable the plug-in on the Nginx component. The Fluentd configuration file can be modified as needed; refer to the configuration file introduction below.

  6. Add an ElasticSearch dependency to connect Nginx to ElasticSearch, as shown in the figure below:

  7. Open the Kibana panel and go to Stack Management -> Data -> Index Management; you can see an existing index named fluentd.es.nginx.log.

  8. In the Kibana panel, go to Stack Management -> Kibana -> Index Patterns and create an index pattern.

  9. Go to Discover; the logs are displayed normally.

3.2 Configuration File Introduction

The configuration file is based on the Fluentd documentation: output_elasticsearch.

<source>
  @type tail
  path /var/log/nginx/access.log,/var/log/nginx/error.log
  pos_file /var/log/nginx/nginx.access.log.pos
  <parse>
    @type nginx
  </parse>
  tag es.nginx.log
</source>

<match es.nginx.**>
  @type elasticsearch
  log_level info
  hosts 127.0.0.1
  port 9200
  user elastic
  password elastic
  index_name fluentd.${tag}
  <buffer>
    chunk_limit_size 2M
    queue_limit_length 32
    flush_interval 5s
    retry_max_times 30
  </buffer>
</match>

Configuration items explained:

<source></source> is the log input source:

| Configuration item | Explanation |
| --- | --- |
| @type | Collection type; tail means the log file contents are read incrementally |
| path | Log paths; multiple paths can be separated by commas |
| pos_file | Path of the position file used to record how far the file has been read |
| <parse></parse> | Log format parsing; write parsing rules that match your own log format |

<match></match> is the log output:

| Configuration item | Explanation |
| --- | --- |
| @type | Type of service to output to |
| log_level | Output log level, set here to info; supported levels are fatal, error, warn, info, debug, trace |
| hosts | ElasticSearch address |
| port | ElasticSearch port |
| user/password | ElasticSearch username/password |
| index_name | Name of the index to write to |
| <buffer></buffer> | Log buffer used to cache log events and improve performance; memory is used by default, but file can also be used |
| chunk_limit_size | Maximum size of each chunk: events are written to a chunk until it reaches this size; defaults to 8M for memory buffers and 256M for file buffers |
| queue_limit_length | Queue length limit of this buffer plug-in instance |
| flush_interval | Buffer flush interval; by default output is flushed every 60s |
| retry_max_times | Maximum number of retries for failed chunk output |

The above covers only some of the configuration parameters; other options can be customized by following the official documentation.
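The pos_file setting deserves a closer look: Fluentd persists the byte offset it has read up to, so a restart resumes where it left off instead of re-shipping the whole file. Here is a simplified sketch of the idea in plain Python (not Fluentd's actual implementation, which also tracks file rotation by inode):

```python
import os

def read_new_lines(log_path, pos_path):
    """Read only the lines appended since the last call, persisting
    the byte offset in a position file (like Fluentd's pos_file)."""
    offset = 0
    if os.path.exists(pos_path):
        with open(pos_path) as f:
            offset = int(f.read().strip() or 0)
    with open(log_path) as f:
        f.seek(offset)             # skip everything already shipped
        new_lines = f.readlines()
        new_offset = f.tell()
    with open(pos_path, "w") as f:
        f.write(str(new_offset))   # remember how far we got
    return new_lines
```

The first call returns the existing lines; subsequent calls return only what was appended in between, which is exactly the "incremental read" behavior that @type tail provides.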

4. Fluentd + Minio Log Collection Practice

The Fluentd S3 output plug-in writes log records to standard S3-compatible object storage services, such as Amazon S3 or Minio.

4.1 Operation steps

The application (Minio) and the plug-in (Fluentd S3) can be deployed with one click from the open source app store.

  1. Connect to the open source app store. Search the store for minio and install version 22.06.17.

  2. Team view -> Plug-ins -> Install the Fluentd-S3 plug-in from the app store.

  3. Access Minio on port 9090. The username and password can be obtained from the Minio component -> Dependencies page.

    • Create a bucket with a custom name.

    • Go to Configurations -> Region and set the Service Location.

      • In the Fluentd plug-in's configuration file, s3_region defaults to en-west-test2.

  4. Create a component from an image, using nginx:latest, and mount the storage path /var/log/nginx. Nginx:latest is used here only as a demonstration.

    • After the storage is mounted on the component, the plug-in mounts the same storage on its own side and can access the log files generated by Nginx.

  5. On the Nginx component, enable the Fluentd S3 plug-in and modify s3_bucket and s3_region in the configuration file.

  6. Add a dependency: make the Nginx component depend on Minio, then update the component for the change to take effect.

  7. Access the Nginx service to generate some logs. After a moment, you can see them in the Minio bucket.

4.2 Configuration File Introduction

The configuration file is based on the Fluentd documentation: Apache to Minio.

<source>
  @type tail
  path /var/log/nginx/access.log
  pos_file /var/log/nginx/nginx.access.log.pos
  tag minio.nginx.access
  <parse>
    @type nginx
  </parse>
</source>

<match minio.nginx.**>
  @type s3
  aws_key_id "#{ENV['MINIO_ROOT_USER']}"
  aws_sec_key "#{ENV['MINIO_ROOT_PASSWORD']}"
  s3_endpoint http://127.0.0.1:9000/
  s3_bucket test
  s3_region en-west-test2
  time_slice_format %Y%m%d%H%M
  force_path_style true
  path logs/
  <buffer time>
    @type file
    path /var/log/nginx/s3
    timekey 1m
    timekey_wait 10s
    chunk_limit_size 256m
  </buffer>
</match>

Configuration items explained:

<source></source> is the log input source:

| Configuration item | Explanation |
| --- | --- |
| @type | Collection type; tail means the log file contents are read incrementally |
| path | Log paths; multiple paths can be separated by commas |
| pos_file | Path of the position file used to record how far the file has been read |
| <parse></parse> | Log format parsing; write parsing rules that match your own log format |

<match></match> is the log output:

| Configuration item | Explanation |
| --- | --- |
| @type | Type of service to output to |
| aws_key_id | Minio username |
| aws_sec_key | Minio password |
| s3_endpoint | Minio access address |
| s3_bucket | Minio bucket name |
| force_path_style | Prevents the AWS SDK from rewriting the endpoint URL |
| time_slice_format | Timestamp format stamped into each object name |
| <buffer></buffer> | Log buffer used to cache log events and improve performance; memory is used by default, but file can also be used |
| timekey | Flush the accumulated chunks every 60 seconds |
| timekey_wait | Wait 10 seconds before flushing |
| chunk_limit_size | Maximum size of each chunk |
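To make path and time_slice_format concrete, here is a small sketch in plain Python of how an uploaded object's key prefix is derived from the event time (the real s3 plug-in also appends a chunk index and file extension, which are omitted here):

```python
from datetime import datetime

def object_key_prefix(path, time_slice_format, event_time):
    """Derive the S3 object key prefix suggested by the config above:
    the `path` setting plus the event time rendered with `time_slice_format`."""
    # %Y%m%d%H%M matches the time_slice_format in the configuration above
    return path + event_time.strftime(time_slice_format)

key = object_key_prefix("logs/", "%Y%m%d%H%M", datetime(2022, 6, 17, 10, 30))
print(key)  # → logs/202206171030
```

Because timekey is 1m, each minute of logs accumulates into its own chunk, and each chunk lands under a distinct per-minute key like the one above.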

Conclusion

The Fluentd plug-in can flexibly collect business logs and output them to multiple services. Combined with one-click installation from the Rainbond plug-in market, it is easier and faster to use.

At present, the only Fluentd plug-ins in the Rainbond open source plug-in app market are Fluentd-S3 and Fluentd-ElasticSearch7. You are welcome to contribute more plug-ins!
