How to fix this exception error in Elasticsearch


Solution 1

When I got this error, I changed the port from 9200 to 9300, and that worked for me. Hope it works for you too.

Note: I am a beginner with Elasticsearch.

This probably will NOT solve your problem, but it may help other beginners who searched for the same error message and were led to this page.
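
For context, here is a minimal sketch of what that change might look like in a Logstash 1.x elasticsearch output; the host value is a placeholder, and choosing the transport protocol is my assumption (9300 is the cluster transport port, while 9200 is the HTTP port):

output {
    elasticsearch {
        host     => "ip_here"      # placeholder, replace with your node's address
        port     => 9300           # transport port, instead of the HTTP port 9200
        protocol => "transport"    # assumption: talk to the node over the transport protocol
    }
}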

Solution 2

The way you are configuring Elasticsearch (using the cluster name only) triggers multicast discovery of the cluster. You can try pointing the output at the cluster host directly to see if that works:

elasticsearch {
    host     => "ip_here"
    port     => 9200
    protocol => "http"
}

Also, your cluster health output said your cluster was in a "yellow" state, so you might want to figure out what's going on there -- you want it to be "green".
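
On a single-node cluster, the yellow status is usually caused by the default replica count leaving replica shards unassigned (the health output above shows 5 unassigned shards). If that is the case here, a sketch like this (Elasticsearch 1.x settings API, host is a placeholder) should turn the cluster green by dropping replicas to zero:

curl -XPUT 'http://ip_here:9200/_settings' -d '{
    "index": { "number_of_replicas": 0 }
}'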

Solution 3

The mistake I made was in the configuration of the Elasticsearch instances for unicast discovery: I inadvertently added the HTTP port number, 9200, to the unicast hosts list.

Incorrect:

discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["localhost.localdomain:9200"]

Correct:

discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["localhost.localdomain"]
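
If you do need to pin a port in the unicast hosts list, my understanding is that it should be the transport port (9300 by default), not the HTTP port 9200:

discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["localhost.localdomain:9300"]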

Author by bottus · Updated on June 14, 2022

Comments

  • bottus · almost 2 years ago

    I'm currently working with Elasticsearch, Logstash, and Kibana.

    I'm getting an exception that I can't get past.

    First, here is what I get when I put ip:9200/_cluster/health in my browser:

    {
     "cluster_name":"mr-cluster",
     "status":"yellow",
     "timed_out":false,
     "number_of_nodes":1,
     "number_of_data_nodes":1,
     "active_primary_shards":5,
     "active_shards":5,
     "relocating_shards":0,
     "initializing_shards":0,
     "unassigned_shards":5
    }
    

    Here is what Kibana gets when trying to query Elasticsearch:

    Remote Address:ip:9200
    Request ip:9200/_all/_search
    Request Method:POST
    Status Code:200 OK
    

    It seems okay so far.

    Here is my Logstash config file:

    input {
        gelf {
            port => "5000"
        }
        udp {
            port => "5001"
        }
    }

    output {
        file {
            path => "/home/g/stdout.log"
        }
        elasticsearch {
            cluster => "mr-cluster"
            codec   => "json"
        }
    }
    

    Something pretty simple. When I only use a file as output it works perfectly; Logstash works. The problem is that when I want to use elasticsearch as the output, nothing works anymore (not even the file output) and I get this exception from Elasticsearch. I've been searching on Google for hours now and haven't found the solution.

    Here is the exception:

    [2014-05-21 09:18:35,060][WARN ][http.netty               ] [mr-node-elasticsearch] Caught     exception while handling client http traffic, closing connection [id: 0x27d0ccce, /0:0:0:0:0:0:0:1:44164 => /0:0:0:0:0:0:0:1:9200]
    java.lang.IllegalArgumentException: empty text
    at org.elasticsearch.common.netty.handler.codec.http.HttpVersion.<init>(HttpVersion.java:97)
    at org.elasticsearch.common.netty.handler.codec.http.HttpVersion.valueOf(HttpVersion.java:62)
    at org.elasticsearch.common.netty.handler.codec.http.HttpRequestDecoder.createMessage(HttpRequestDecoder.java:75)
    at org.elasticsearch.common.netty.handler.codec.http.HttpMessageDecoder.decode(HttpMessageDecoder.java:189)
    at org.elasticsearch.common.netty.handler.codec.http.HttpMessageDecoder.decode(HttpMessageDecoder.java:101)
    at org.elasticsearch.common.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:500)
    at org.elasticsearch.common.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744) 
    

    Thank you for helping, guys!