How to change field types for an existing index using the Elasticsearch Mapping API
Solution 1
You can't change the mapping of an index once it exists, except for adding new fields to objects or adding multi-fields.
If you want to use the Mapping API for that, your request would look like this:
PUT /prod1-db.log-*/_mapping/log
{
  "properties": {
    "message": {
      "type": "string",
      "index": "not_analyzed"
    }
  }
}
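Note that this request will be rejected if message already exists as an analyzed string, since that would change an existing mapping. What the multi-field exception above does allow on a live index is adding a not_analyzed sub-field next to the analyzed one. A sketch for Elasticsearch 2.x, reusing the field names from the question:

```
PUT /prod1-db.log-*/_mapping/log
{
  "properties": {
    "message": {
      "type": "string",
      "fields": {
        "raw": {
          "type": "string",
          "index": "not_analyzed"
        }
      }
    }
  }
}
```

Existing documents are not re-processed, so message.raw is only populated for documents indexed after the change.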
However, I would recommend creating a JSON file with your mappings and adding it to your Logstash config.
A template file might look like this (you will need to customize it):
{
  "template": "logstash-*",
  "mappings": {
    "_default_": {
      "properties": {
        "action": {
          "type": "string",
          "fields": {
            "raw": {
              "index": "not_analyzed",
              "type": "string"
            }
          }
        },
        "ad_domain": {
          "type": "string"
        },
        "auth": {
          "type": "long"
        },
        "authtime": {
          "type": "long"
        },
        "avscantime": {
          "type": "long"
        },
        "cached": {
          "type": "boolean"
        }
      }
    }
  }
}
And the elasticsearch entry in your Logstash config looks like this:
elasticsearch {
  template => "/etc/logstash/template/template.json"
  template_overwrite => true
}
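If you want to test the template without restarting Logstash, the same JSON body can also be installed by hand through Elasticsearch's index template API. The template name logstash below is arbitrary; this is a sketch for Elasticsearch 2.x using the minimal mapping from the question:

```
PUT /_template/logstash
{
  "template": "logstash-*",
  "mappings": {
    "_default_": {
      "properties": {
        "message": {
          "type": "string",
          "index": "not_analyzed"
        }
      }
    }
  }
}
```

Either way, a template only applies to indices created after it is installed; existing indices keep their current mapping.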
Solution 2
If you haven't specified any mappings at index creation, then the first time you index a document into your index, Elasticsearch automatically chooses the best mapping for each field based on the data provided. Looking at the document you have provided in the question, Elasticsearch will already have assigned an analyzer to the message field. Once it's assigned you cannot change it; the only way to do so is to create a fresh index.
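A sketch of that fresh-index route (the index names prod1-db.log-old and prod1-db.log-new are placeholders, and the _reindex API used to copy documents is only available from Elasticsearch 2.3 onward): first create the new index with the desired mapping, then copy the documents across.

```
PUT /prod1-db.log-new
{
  "mappings": {
    "log": {
      "properties": {
        "message": {
          "type": "string",
          "index": "not_analyzed"
        }
      }
    }
  }
}

POST /_reindex
{
  "source": { "index": "prod1-db.log-old" },
  "dest": { "index": "prod1-db.log-new" }
}
```

Once the copy finishes, point your queries (or an alias) at the new index.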
daiyue
Updated on June 14, 2022
Comments
- daiyue almost 2 years:
I am using ELK and have the following document structure:

{
  "_index": "prod1-db.log-*",
  "_type": "db.log",
  "_id": "AVadEaq7",
  "_score": null,
  "_source": {
    "message": "2016-07-08T12:52:42.026+0000 I NETWORK [conn4928242] end connection 192.168.170.62:47530 (31 connections now open)",
    "@version": "1",
    "@timestamp": "2016-08-18T09:50:54.247Z",
    "type": "log",
    "input_type": "log",
    "count": 1,
    "beat": {
      "hostname": "prod1",
      "name": "prod1"
    },
    "offset": 1421607236,
    "source": "/var/log/db/db.log",
    "fields": null,
    "host": "prod1",
    "tags": [
      "beats_input_codec_plain_applied"
    ]
  },
  "fields": {
    "@timestamp": [
      1471513854247
    ]
  },
  "sort": [
    1471513854247
  ]
}

I want to change the message field to not_analyzed. I am wondering how to use the Elasticsearch Mapping API to achieve that? For example, how do I use the PUT Mapping API to add a new type to the existing index? I am using Kibana 4.5 and Elasticsearch 2.3.

UPDATE: Tried the following template.json in logstash:

{
  "template": "logstash-*",
  "mappings": {
    "_default_": {
      "properties": {
        "message": {
          "type": "string",
          "index": "not_analyzed"
        }
      }
    }
  }
}

and got the following errors when starting logstash:

logstash_1 | {:timestamp=>"2016-08-24T11:00:26.097000+0000", :message=>"Invalid setting for elasticsearch output plugin:\n\n output {\n elasticsearch {\n # This setting must be a path\n # File does not exist or cannot be opened /home/dw/docker-elk/logstash/core_mapping_template.json\n template => \"/home/dw/docker-elk/logstash/core_mapping_template.json\"\n ...\n }\n }", :level=>:error}
logstash_1 | {:timestamp=>"2016-08-24T11:00:26.153000+0000", :message=>"Pipeline aborted due to error", :exception=>#<LogStash::ConfigurationError: Something is wrong with your configuration.>, :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/config/mixin.rb:134:in `config_init'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/outputs/base.rb:63:in `initialize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/output_delegator.rb:74:in `register'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:181:in `start_workers'", "org/jruby/RubyArray.java:1613:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:181:in `start_workers'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:136:in `run'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/agent.rb:473:in `start_pipeline'"], :level=>:error}
logstash_1 | {:timestamp=>"2016-08-24T11:00:29.168000+0000", :message=>"stopping pipeline", :id=>"main"}
- daiyue over 7 years:
Tried

PUT /prod1-db.log-*/_mapping/log
{
  "properties": {
    "message": {
      "type": "string",
      "index": "not_analyzed"
    }
  }
}

but got an error from elasticsearch:

java.lang.IllegalArgumentException: invalid version format: {"PROPERTIES": {"MESSAGE": {"TYPE": "STRING", "INDEX": "NOT_ANALYZED"}}} HTTP/1.1
- Fairy over 7 years:
@daiyue Have you recreated the index?
- daiyue over 7 years:
What do you mean by recreating the index? How do I do that in combination with adding a mapping?
- Fairy over 7 years:
@daiyue Remapping an existing index is not possible (with some exceptions). A mapping only applies to an index that is being created. I would strongly recommend you go the route of using a template file, because you then don't have to deal with curl and can make changes really easily.
- daiyue over 7 years:
Tried the template.json in logstash as shown in the OP. I am wondering what "template": "logstash-*" does? Do I need to modify it to fit my index names?
- Fairy over 7 years:
Yes. It defines which indices the mapping is applied to.
- daiyue over 7 years:
Been trying the template.json, like this in my logstash.conf:

output {
  elasticsearch {
    hosts => ["172.17.0.2:9200"]
    manage_template => false
    template => "/home/user_name/docker-elk/logstash/core_mapping_template.json"
    template_overwrite => true
    index => "%{host}-%{[@metadata][log_type]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][log_type]}"
  }
  stdout { codec => json }
}

But I got a "File does not exist or cannot be opened" error in logstash, which is very strange.
- Fairy over 7 years:
Let us continue this discussion in chat.
- sami over 6 years:
But even with a template, the mapping of the current index cannot be changed, right? A newly generated index would get the new mapping.
- Fairy over 6 years:
@sami Yes. Once the index is created, you can't change the mapping.