elasticsearch


How to aggregate all the message fields into one event using Logstash?


I am using logstash 2.4.0.
My config is like this:
input {
  file {
    path => "F:\logstash-2.4.0\logstash-2.4.0\bin\picaso.txt"
    start_position => "beginning"
  }
}

filter {
  grok {
    match => [ "message", "\[%{TIMESTAMP_ISO8601:TIMESTAMP}\]\[%{LOGLEVEL:LEVEL}%{SPACE}\]\[%{DATA:QUERY}\]%{SPACE}\[%{DATA:QUERY1}\]%{SPACE}\[%{DATA:INDEX-NAME}\]\[%{DATA:SHARD}\]%{SPACE}took\[%{DATA:TOOK}\],%{SPACE}took_millis\[%{DATA:TOOKM}\], types\[%{DATA:types}\], stats\[%{DATA:stats}\], search_type\[%{DATA:search_type}\], total_shards\[%{NUMBER:total_shards}\], source\[%{DATA:source_query}\], extra_source\[%{DATA:extra_source}\],"]
  }

  # ==> add this filter to convert TOOKM to integer
  mutate {
    convert => { "TOOKM" => "integer" }
  }

  # ==> use TOOKM field instead
  if [TOOKM] > 30 {
    aggregate {
      task_id => "%{message}"
      code => "event.set('message')"
      end_of_task => true
      timeout => 120
    }
  } else {
    drop { }
  }
}

output {
  stdout { codec => rubydebug }
}
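For what it is worth, this is how I start Logstash with that config (picaso.conf is just the name I saved the file above under, and I believe --configtest is the 2.x flag for a syntax-only check):

bin\logstash -f picaso.conf --configtest
bin\logstash -f picaso.conf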
My output on the screen is like this:
"message" => "[2017-01-14 10:59:58,591][WARN ][index.search.slowlog.query] [yaswanth] [bank][2] took[50.2ms], took_millis[50], types[details], stats[], search_type[QUERY_THEN_FETCH], total_shards[5], source[{\"sort\":[{\"balance\":{\"order\":\"asc\"}}]}], extra_source[], \r",
"#version" => "1",
"#timestamp" => "2017-05-11T03:13:53.563Z",
"path" => "F:\\logstash-2.4.0\\logstash-2.4.0\\bin\\picaso.txt",
"host" => "yaswanth",
"TIMESTAMP" => "2017-01-14 10:59:58,591",
"LEVEL" => "WARN",
"QUERY" => "index.search.slowlog.query",
"QUERY1" => "yaswanth",
"INDEX-NAME" => "bank",
"SHARD" => "2",
"TOOK" => "50.2ms",
"TOOKM" => 50,
"types" => "details",
"search_type" => "QUERY_THEN_FETCH",
"total_shards" => "5",
"source_query" => "{\"sort\":[{\"balance\":{\"order\":\"asc\"}}]}",
"tags" => [
[0] "_aggregateexception"
]
}
{
    "message" => "[2017-01-14 10:59:58,593][WARN ][index.search.slowlog.query] [yaswanth] [bank][1] took[52.2ms], took_millis[52], types[details], stats[], search_type[QUERY_THEN_FETCH], total_shards[5], source[{\"sort\":[{\"balance\":{\"order\":\"asc\"}}]}], extra_source[], \r",
    "@version" => "1",
    "@timestamp" => "2017-05-11T03:13:53.564Z",
    "path" => "F:\\logstash-2.4.0\\logstash-2.4.0\\bin\\picaso.txt",
    "host" => "yaswanth",
    "TIMESTAMP" => "2017-01-14 10:59:58,593",
    "LEVEL" => "WARN",
    "QUERY" => "index.search.slowlog.query",
    "QUERY1" => "yaswanth",
    "INDEX-NAME" => "bank",
    "SHARD" => "1",
    "TOOK" => "52.2ms",
    "TOOKM" => 52,
    "types" => "details",
    "search_type" => "QUERY_THEN_FETCH",
    "total_shards" => "5",
    "source_query" => "{\"sort\":[{\"balance\":{\"order\":\"asc\"}}]}",
    "tags" => [
        [0] "_aggregateexception"
    ]
}
What I want is a final event that contains all the message fields from the above logs, like:
{
    "message" => "[2017-01-14 10:59:58,591][WARN ][index.search.slowlog.query] [yaswanth] [bank][2] took[50.2ms], took_millis[50], types[details], stats[], search_type[QUERY_THEN_FETCH], total_shards[5], source[{\"sort\":[{\"balance\":{\"order\":\"asc\"}}]}], extra_source[], \r", "[2017-01-14 10:59:58,593][WARN ][index.search.slowlog.query] [yaswanth] [bank][1] took[52.2ms], took_millis[52], types[details], stats[], search_type[QUERY_THEN_FETCH], total_shards[5], source[{\"sort\":[{\"balance\":{\"order\":\"asc\"}}]}], extra_source[], \r"
}
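From reading the aggregate filter docs, I am wondering whether the filter section should instead look something like the rough, untested sketch below. The fixed task_id "slowlog", the map['messages'] array, and the push_map_as_event_on_timeout option (which I think may need a newer logstash-filter-aggregate plugin than the one bundled with 2.4.0) are my own guesses:

if [TOOKM] > 30 {
  aggregate {
    # one constant task_id so every slow query lands in the same map
    task_id => "slowlog"
    # collect each matching message into an array on the map
    code => "
      map['messages'] ||= []
      map['messages'] << event['message']
    "
    # push the collected map as a single new event once no slow
    # query has arrived for 120 seconds
    push_map_as_event_on_timeout => true
    timeout => 120
  }
} else {
  drop { }
}

If that is right, I would expect one pushed event whose messages field holds both of the lines above, rather than the per-line events I am getting now.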
Is my approach for this scenario correct?
Thanks
