How Elasticsearch achieves its speed
Speed 1: Batch Operation
The bulk API lets you index many documents in one request, but the entire bulk request has to be loaded into memory by the node that receives it, so the bigger the request, the less memory is available for other requests. Fortunately, the sweet spot is easy to find: index typical documents in batches of increasing size, and when performance starts to drop off, your batch size is too big. A good starting point is batches of 1,000 to 5,000 documents, or even smaller batches if your documents are very large.
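For reference, a minimal bulk request looks roughly like the sketch below; the index, type, IDs, and document bodies are placeholders. Each action line is followed by its document source, and the request body must end with a newline.
# bulk indexing - many documents in one round trip
curl -XPOST localhost:9200/[index_name]/[type]/_bulk -d '
{ "index" : { "_id" : "1" } }
{ "title" : "first document" }
{ "index" : { "_id" : "2" } }
{ "title" : "second document" }
'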
# disable automatic refresh while bulk indexing; set it back (e.g. "1s") when done
curl -XPUT localhost:9200/[index_name]/_settings -d '{
  "index" : {
    "refresh_interval" : "-1"
  }
}'
Speed 2: Filters
- filters: do not contribute to the score
- range filter for a date/price
- term filter for a category
- geo filter for a bounding box (convert to a geohash so you can do a prefix search; the longer the hash, the higher the precision)
- filters can be cached independently of the query using bitsets, so a query should include a filter whenever possible to shrink the dataset it has to scan
Filtered Query vs Top Level Filter
Typically, you want your filters to run first, since scoring documents is more expensive than just a boolean pass/fail.
# filtered query - run the filter first, then the query (way faster)
{
  "query": {
    "filtered" : {
      "query" : {
        "term" : { "foo" : "bar" }
      },
      "filter" : {
        "term" : { "foo2" : "bar2" }
      }
    }
  }
}
vs.
# top level filter - run the query first, then the filter
# (the query and any facets/aggregations see the unfiltered results; only the hits are filtered)
{
  "query": {
    "term" : { "foo" : "bar" }
  },
  "filter" : {
    "term" : { "foo2" : "bar2" }
  }
}
The result of a filter is cached in the filter cache. Its size is configurable per node; when it runs out of space it evicts entries with an LRU policy, and searches can slow down while eviction happens. To avoid paying that eviction cost at query time, you can put a TTL on cache entries instead. This is configured at the index level:
# elasticsearch.yml
# node-level filter cache size (default is 10%)
indices.cache.filter.size: 30%
# index-level TTL on filter cache entries
index.cache.filter.expire: 30m
Combining filters that make use of bitwise operations.
Tips:
- Make use of the bool filter to combine other bitset filters, such as range and term filters.
- There are 4 filters that make use of bitsets: term, terms, exists/missing, and prefix.
- Others, like the and, or, and not filters, cannot perform bitwise AND/OR operations; they pass the matching documents from one filter to the next, like a filter chain.
- If you are using both bitset and non-bitset filters, combine the bitset ones in a bool filter and put that bool filter in an and/or/not filter along with the non-bitset filters, as in the example below.
- Whether you combine filters with the bool, and, or, or not filters, the order in which they are executed matters. Cheap filters, such as the term filter, should be placed before expensive ones, such as the script filter, so that the expensive filter runs on a smaller set of documents, namely those that already matched the previous filters.
curl localhost:9200/[index_name]/[type]/_search?pretty -d'{
  "query": {
    "filtered": {
      "filter": {
        "and": [
          {
            "bool": {
              "should": [
                { "term": { "tags.verbatim": "elasticsearch" } },
                { "term": { "members": "lee" } }
              ]
            }
          },
          {
            "script": {
              "script": "doc[\"members\"].values.length > minMembers",
              "params": { "minMembers": 2 }
            }
          }
        ]
      }
    }
  }
}'
Speed 3: Missing fields
How do you search an inverted index for non-existing fields (the exists and missing filters)? For example, how do you find documents without the field "author"?
Logical approach: get all documents that have the field and subtract them from the full list of document ids; whatever is left over is missing the field. This means scanning all terms in the field's inverted index, pulling the document ids from each term, and adding them all to a set (costly if the field has high cardinality, i.e. many distinct terms).
Simple way: index each document's field names under _field_names. Then it becomes a regular Lucene search against that field.
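For illustration, a sketch of the missing filter for the "author" example above (the index name is a placeholder):
curl localhost:9200/[index_name]/_search?pretty -d '{
  "query": {
    "filtered": {
      "filter": {
        "missing": { "field": "author" }
      }
    }
  }
}'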
Speed 4: Aggregation
Aggregations need to go from docIds to the terms they contain and then aggregate them; the inverted index does not help much here. To do this efficiently we need fielddata, which is an uninverted index:
- docId => list of terms
GET /my_index/_search
{
  "query" : {
    "match" : {
      "body" : "brown"
    }
  },
  "aggs" : {
    "popular_terms": {
      "terms" : {
        "field" : "body"
      }
    }
  }
}
- doc_values => an on-disk version of fielddata that lives outside the JVM heap
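Doc values are enabled per field when the index is created. A sketch in the 1.x mapping syntax, with placeholder index, type, and field names (doc values apply to not_analyzed string fields and numeric/date fields):
curl -XPUT localhost:9200/my_index -d '{
  "mappings": {
    "my_type": {
      "properties": {
        "tags": {
          "type": "string",
          "index": "not_analyzed",
          "doc_values": true
        }
      }
    }
  }
}'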
Speed 5: Counting Distinct Values
Naive: load every value into a set and count the set's size.
Solution: the cardinality aggregation, which uses HyperLogLog++
- Configurable precision - trade memory for accuracy
- Excellent accuracy on low-cardinality sets
- Fixed memory usage: even with tens of billions of unique values, memory usage depends only on the configured precision
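A sketch of a cardinality aggregation; the index name, field name, and precision value here are only placeholders. precision_threshold is the knob that trades memory for accuracy:
GET /my_index/_search
{
  "size": 0,
  "aggs": {
    "distinct_authors": {
      "cardinality": {
        "field": "author",
        "precision_threshold": 100
      }
    }
  }
}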
Speed 6: Calculating Percentiles
Naive: Maintain a sorted list of all values
Solution: the percentiles aggregation, which uses T-Digest
- Extreme percentiles are more accurate
- Small sets can be 100% accurate
- As values are added to a bucket, the algorithm trades accuracy for memory savings
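A sketch of a percentiles aggregation; the index name, the numeric field, and the percentile list are placeholders:
GET /my_index/_search
{
  "size": 0,
  "aggs": {
    "latency_percentiles": {
      "percentiles": {
        "field": "latency_ms",
        "percents": [50, 95, 99, 99.9]
      }
    }
  }
}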
Speed 7: Hardware
- CPU (indexing, searching, highlighting): thread pools are sized based on the number of cores
- I/O (indexing, searching, merging): use SSDs
- Memory (aggregations, indices): as much as possible, and leave the rest to the OS for the file system cache
- Network (shard relocation, snapshot and restore): GbE or better
Speed 8: OS
- file system cache
- file handles: raise the open-file limit
- memory locking: bootstrap.mlockall (setting it to true tells the OS never to swap out the ES process)
- don't swap, and keep the OOM killer away from the ES process
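A sketch of the related OS and node settings; the values are illustrative, not tuned recommendations:
# elasticsearch.yml - lock the JVM heap in memory
bootstrap.mlockall: true

# /etc/security/limits.conf - raise limits for the elasticsearch user
elasticsearch - nofile 65535
elasticsearch - memlock unlimited

# /etc/sysctl.conf - discourage the kernel from swapping
vm.swappiness = 1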