Kafka: client has run out of available brokers


I think you are creating two or more consumers this way that get grouped into a single consumer group (probably via the client ID "go-kafka-consumer"). Your broker has a topic with one partition, so one consumer in the group gets it assigned and the other one produces this error message. If you raised the partition count of that topic to 2, the error would go away. But I think your real problem is that you have somehow instantiated more consumers than before.

From Kafka in a Nutshell:

Consumers can also be organized into consumer groups for a given topic — each consumer within the group reads from a unique partition and the group as a whole consumes all messages from the entire topic. If you have more consumers than partitions then some consumers will be idle because they have no partitions to read from. If you have more partitions than consumers then consumers will receive messages from multiple partitions. If you have equal numbers of consumers and partitions, each consumer reads messages in order from exactly one partition.

Idle consumers would not normally produce an error, so that part of the behavior would be specific to Sarama.
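The idle-consumer situation described above can be sketched as a simple round-robin assignment. This is an illustrative model only; in real Kafka the group coordinator performs the assignment, and the `assign` helper below is hypothetical:

```go
package main

import "fmt"

// assign distributes partition IDs round-robin across consumer names,
// mimicking how a consumer group splits up a topic. A consumer whose
// slice stays empty has no partition to read from and sits idle.
func assign(partitions int, consumers []string) map[string][]int {
	out := make(map[string][]int)
	for _, c := range consumers {
		out[c] = []int{}
	}
	for p := 0; p < partitions; p++ {
		c := consumers[p%len(consumers)]
		out[c] = append(out[c], p)
	}
	return out
}

func main() {
	// One partition, two consumers in the same group: the second is idle.
	fmt.Println(assign(1, []string{"consumer-a", "consumer-b"}))
	// Two partitions, two consumers: each gets exactly one.
	fmt.Println(assign(2, []string{"consumer-a", "consumer-b"}))
}
```

With one partition and two group members, one member ends up with nothing to consume, which matches the situation the answer describes.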

Author by Bob Smith

Updated on April 27, 2020

Comments

  • Bob Smith
    Bob Smith about 4 years

UPDATE: It turned out I had an issue with my ports in Docker. I'm not sure why that fixed the problem.

    I believe I have come across a strange error. I am using the Sarama library and am able to create a consumer successfully.

    func main() {
        config := sarama.NewConfig()
        config.ClientID = "go-kafka-consumer"
        config.Consumer.Return.Errors = true
        // Create new consumer; NewConsumer takes a slice of broker addresses
        master, err := sarama.NewConsumer([]string{"localhost:9092"}, config)
        if err != nil {
            panic(err)
        }

        defer func() {
            if err := master.Close(); err != nil {
                panic(err)
            }
        }()

        partitionConsumer, err := master.ConsumePartition("myTopic", 0,
            sarama.OffsetOldest)
        if err != nil {
            panic(err)
        }
        defer partitionConsumer.Close()
    }
    

    As soon as I break this code up and move it outside the main routine, I run into the error:

    kafka: client has run out of available brokers to talk to (Is your cluster reachable?)

    I have split my code up as follows: the previous main() method I have now converted into a consumer package with a method called NewConsumer() and my new main() calls NewConsumer() like so:

    c := consumer.NewConsumer()
    

    The panic statement is triggered on the line with sarama.NewConsumer and prints kafka: client has run out of available brokers to talk to (Is your cluster reachable?)

    Why would breaking up my code this way trigger Sarama to fail to make the consumer? Does Sarama need to be run directly from main?

  • Bob Smith
    Bob Smith about 5 years
    Thanks for your answer. It turned out it was an issue with my Kafka Docker image port configuration. Why that solved the problem I'm not entirely sure.
  • OneCricketeer
    OneCricketeer about 5 years
    @Bob because just the port forward isn't enough to get clients to communicate with Kafka in a container rmoff.net/2018/08/02/kafka-listeners-explained
  • Bob Smith
    Bob Smith about 4 years
    @cricket_007 circled back to this problem, the article you provided was very helpful and was a key in the solution I implemented to allow kafka access both internally and externally to docker. Thank you.
  • Chris
    Chris about 3 years
    This gave me a nice lead in realizing that I needed to use the exact node addresses in the producer's host entries, and not a load-balancer (k8s) address, to reach the Kafka cluster.
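For reference, the Docker port issue discussed in the comments usually comes down to Kafka's advertised listeners: the broker must advertise an address the client can actually reach, which a plain port forward alone does not guarantee. A minimal sketch of a Docker Compose service with separate internal and external listeners (the service name, image, and ports are illustrative assumptions, not from the original thread):

```yaml
# Hypothetical docker-compose service; adjust image, ports, and names for your setup.
kafka:
  image: confluentinc/cp-kafka
  ports:
    - "9092:9092"
  environment:
    # INTERNAL is for containers on the compose network, EXTERNAL for the host.
    KAFKA_LISTENERS: INTERNAL://0.0.0.0:29092,EXTERNAL://0.0.0.0:9092
    KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:29092,EXTERNAL://localhost:9092
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
    KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
```

With a setup along these lines, containers connect via kafka:29092 while code on the host uses localhost:9092, which is the pattern the linked rmoff.net article explains.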