ElasticSearch RollOver index - Why can't an alias point to multiple indices?


Let's take the following scenario.
I have an alias A1 pointing to index I1. Now, I would like to use the rollover feature of Elasticsearch to create index I2 and make the alias point to both I1 and I2.
Can I keep rolling over and have my alias A1 point to the last 2 indices, or in general the last n indices?
You can point one alias to multiple indices like this:
POST /_aliases
{
  "actions": [
    { "add": { "indices": ["l1", "l2"], "alias": "A1" } }
  ]
}
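To confirm which indices the alias covers after such a change, the alias can be read back (a quick check, assuming the alias is named A1 as above):

GET /_alias/A1

The response lists every index that currently carries the alias.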
or even point the alias to a wildcard index pattern like this:
POST /_aliases
{
  "actions": [
    { "add": { "index": "l*", "alias": "A1" } }
  ]
}
EDIT: With rollover, the rollover alias can only point to one index at a time: the latest one. If you want an alias that points to the last 2 indices, the last n indices, or all of the indices matching the pattern l*, you'll have to create an additional alias and maintain it yourself using the requests shown above.
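As a sketch of what maintaining such an extra alias could look like: assume a second alias A2 (a name introduced here for illustration, not from the question) currently covers l1 and l2, and a new index l3 has just been created by rollover. A single _aliases request can drop the oldest index and add the newest one, so A2 keeps covering only the last two indices; because both actions are in one request, the switch is atomic. Note that a wildcard add is expanded to the indices matching at request time, so it would also need to be re-run as new indices appear.

POST /_aliases
{
  "actions": [
    { "remove": { "index": "l1", "alias": "A2" } },
    { "add": { "index": "l3", "alias": "A2" } }
  ]
}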
EDIT 2: If I wanted to keep roughly 30 days of logs per index, this is how I would accomplish it. To stay close to the naming above, the first index is called l-000001 and the alias is A1 (rollover's automatic naming requires the index name to end with a dash followed by a number). When the rollover request finds the index older than 30 days, a new index called l-000002 is created (the naming convention increments the number of the previous index and zero-pads it to a length of 6) and the alias A1 is switched to point at l-000002.
PUT /l-000001
{
  "aliases": {
    "A1": {}
  }
}

POST /A1/_rollover
{
  "conditions": {
    "max_age": "30d"
  }
}
