Logstash
General Format
input => filter* => output
# Example: csv
input {
  file {
    path => [ "D:/it_depends/_kibana/GPU-Z Sensor Log.txt" ]
  }
}
filter {
  # Drop the header line (it contains the literal word "Date")
  if ([message] =~ "\bDate\b") {
    drop { }
  } else {
    csv {
      columns => ["date", "core_clock", "memory_clock", "gpu_temp",
                  "memory_used", "gpu_load", "memory_controller_load",
                  "video_engine_load", "vddc"]
    }
    # Trim leading/trailing whitespace from every parsed field
    mutate {
      strip => ["date", "core_clock", "memory_clock", "gpu_temp",
                "memory_used", "gpu_load", "memory_controller_load",
                "video_engine_load", "vddc"]
    }
  }
}
output {
  stdout { codec => rubydebug }
}
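To make time-based queries in Kibana line up with the log itself, the parsed date column can be promoted to the event timestamp by adding a date filter to the filter block above. A sketch, assuming GPU-Z writes timestamps like "2015-06-28 17:36:23" (the format string must match your actual log):
date {
  match => [ "date", "yyyy-MM-dd HH:mm:ss" ]
  # on success the parsed value replaces @timestamp (the default target)
}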
Input
Built-in
- file: reads from a file on the filesystem, much like the UNIX command “tail -0f”
- syslog: listens on the well-known port 514 for syslog messages and parses them according to the RFC 3164 format (a minimal example follows this list)
- redis: reads from a Redis server, using both Redis channels and Redis lists. Redis is often used as a “broker” in a centralized Logstash installation, where it queues Logstash events from remote Logstash “shippers”.
- lumberjack: processes events sent via the Lumberjack protocol, as spoken by logstash-forwarder (the shipper formerly known as Lumberjack).
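A minimal sketch of the syslog input (port 514 is the plugin’s default; binding to ports below 1024 usually requires root):
input {
  syslog {
    port => 514
    type => "syslog"
  }
}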
JDBC
input {
  jdbc {
    jdbc_driver_library => "D:/elastic search/logstash-1.5.2/lib/mysql-connector-java-5.1.13-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/test"
    jdbc_user => "root"
    jdbc_password => ""
    statement => "SELECT * FROM data"
    # Alternatively, load the query from a file:
    # statement_filepath => "query.sql"
    jdbc_paging_enabled => "true"
    jdbc_page_size => "50000"
  }
}
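As written, the statement runs exactly once when the pipeline starts. The jdbc input also accepts a cron-like schedule option (rufus-scheduler syntax) for periodic polling; a sketch:
input {
  jdbc {
    # ... same driver/connection settings as above ...
    statement => "SELECT * FROM data"
    schedule => "* * * * *"  # run the query once per minute
  }
}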
JMX
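The jmx input lives in the separate logstash-input-jmx plugin. A minimal sketch, assuming path points at a directory of JSON files describing which JMX URLs and attributes to poll (the directory path here is hypothetical):
input {
  jmx {
    path => "/etc/logstash/jmx"   # directory of JSON polling configs (hypothetical path)
    polling_frequency => 15       # seconds between polls
    type => "jmx"
  }
}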
Filter
- grok: parses arbitrary text and structures it. Grok is currently the best way in Logstash to parse unstructured log data into something structured and queryable. With 120 patterns shipped built-in to Logstash, it’s more than likely you’ll find one that meets your needs!
- mutate: The mutate filter allows you to do general mutations to fields. You can rename, remove, replace, and modify fields in your events.
- drop: drop an event completely, for example, debug events.
- clone: make a copy of an event, possibly adding or removing fields.
- geoip: adds information about the geographical location of IP addresses (and enables great map visualizations in Kibana); see the sketch after this list.
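A minimal geoip sketch, assuming the event carries a clientip field (as produced by the %{COMBINEDAPACHELOG} grok pattern used in the Apache example below):
filter {
  geoip {
    source => "clientip"  # field holding the IP address to look up
  }
}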
Output
- elasticsearch: if you’re planning to save your data in an efficient, convenient, and easily queryable format, Elasticsearch is the way to go
- file: writes event data to a file on disk.
- graphite: sends event data to Graphite, a popular open-source tool for storing and graphing metrics
- statsd: a service which “listens for statistics, like counters and timers, sent over UDP and sends aggregates to one or more pluggable backend services”; see the sketch after this list.
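A statsd output sketch, assuming events parsed with %{COMBINEDAPACHELOG} (so a response field holding the HTTP status code exists); it counts requests per status code:
output {
  statsd {
    host => "localhost"
    port => 8125
    increment => "apache.response.%{response}"
  }
}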
# elasticsearch output
# Documents in Elasticsearch are identified by tuples of (index, mapping
# type, document_id).
# References:
# - http://logstash.net/docs/1.3.2/outputs/elasticsearch
# - http://stackoverflow.com/questions/15025876/what-is-an-index-in-elasticsearch
output {
  elasticsearch {
    # We make the document id unique (for a specific index/mapping type pair)
    # by using the relevant Eliot fields. This means replaying messages will
    # not result in duplicates, as long as the replayed messages end up in
    # the same index (see below).
    document_id => "%{task_uuid}_%{task_level}"
    # By default Logstash sets the index to include the current date. When we
    # get to the point of replaying log files on startup for crash recovery
    # we might want to use the last modified date of the file instead of the
    # current date; otherwise documents will end up in the wrong index.
    #index => "logstash-%{+YYYY.MM.dd}"
    index_type => "Eliot"
    # In a centralized Elasticsearch setup we'd be specifying host/port or
    # some such. In this setup we run the embedded Elasticsearch instance:
    embedded => true
  }
}
Run
# bin/logstash -f logstash-simple.conf
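For a quick smoke test without writing a config file, the -e flag takes the configuration directly on the command line:
# bin/logstash -e 'input { stdin { } } output { stdout { codec => rubydebug } }'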
Examples
Apache Access/Error Log Parsing
# vi logstash-apache.conf
input {
  file {
    path => "/var/log/httpd/*_log"
  }
}
filter {
  if [path] =~ "access" {
    mutate { replace => { type => "apache_access" } }
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    # Use the request's own timestamp as the event time
    date {
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
  } else if [path] =~ "error" {
    mutate { replace => { type => "apache_error" } }
  } else {
    mutate { replace => { type => "random_logs" } }
  }
}
output {
  elasticsearch { host => "localhost" }
  stdout { codec => rubydebug }
}
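Run it with "bin/logstash -f logstash-apache.conf" and request a page from the web server; each access-log line should show up on stdout as a structured rubydebug event and be indexed into Elasticsearch.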