Guidelines for handling TimeoutException in a Kafka producer?


Solution 1

The default Kafka config values, both for producers and brokers, are conservative enough that, under general circumstances, you shouldn't run into any timeouts. Those problems typically point to a flaky/lossy network between the producer and the brokers.

The exception you're getting, Failed to update metadata, usually means one of the brokers is not reachable by the producer, and the effect is that it cannot get the metadata.

For your second question, Kafka will automatically retry sending messages that were not fully ack'ed by the brokers. It's up to you whether to catch and retry when you get a timeout on the application side, but if you're hitting 1+ minute timeouts, retrying is probably not going to make much of a difference. You're going to have to figure out the underlying network/reachability problems with the brokers anyway.
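
If you do catch and retry at the application level, here is a minimal sketch. It is not from the answer: the helper name, the String key/value types, and the 500 ms backoff are illustrative assumptions, and it uses a blocking send, which is the simplest way to keep retried messages in order at the cost of throughput:

import java.util.concurrent.ExecutionException
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}
import org.apache.kafka.common.errors.TimeoutException

// Hypothetical helper: blocking send with a bounded application-side retry.
def sendWithRetry(producer: KafkaProducer[String, String],
                  record: ProducerRecord[String, String],
                  attempts: Int = 3): Unit = {
  try {
    // send() can throw TimeoutException directly (metadata fetch blocked past
    // max.block.ms); get() wraps broker-side timeouts in an ExecutionException.
    producer.send(record).get()
  } catch {
    case _: TimeoutException | _: ExecutionException if attempts > 1 =>
      Thread.sleep(500) // brief backoff so we don't retry in a tight loop
      sendWithRetry(producer, record, attempts - 1)
  }
}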

In my experience, usually the network problems are:

  • Port 9092 is blocked by a firewall, either on the producer side or on the broker side, or somewhere in the middle (try nc -z broker-ip 9092 from the server running the producer)
  • DNS resolution is broken, so even though the port is open, the producer cannot resolve the broker hostname to an IP address (both checks are sketched below).
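
Both checks can also be reproduced from the producer host in code. A minimal sketch; broker-host and 9092 are placeholders for your actual broker address:

import java.net.{InetAddress, InetSocketAddress, Socket}
import scala.util.{Failure, Success, Try}

val host = "broker-host" // placeholder: your broker's advertised hostname
val port = 9092

// 1) DNS: can this machine resolve the broker name at all?
Try(InetAddress.getByName(host)) match {
  case Success(addr) => println(s"$host resolves to ${addr.getHostAddress}")
  case Failure(e)    => println(s"DNS resolution failed: $e")
}

// 2) Port: can we open a TCP connection? (the equivalent of `nc -z`)
Try {
  val socket = new Socket()
  socket.connect(new InetSocketAddress(host, port), 5000) // 5 s connect timeout
  socket.close()
} match {
  case Success(_) => println(s"$host:$port is reachable")
  case Failure(e) => println(s"cannot connect to $host:$port: $e")
}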

Solution 2

"What are the general causes of these Timeout exceptions?"

  1. The most common cause I have seen is stale metadata information: one broker went down, and the topic partitions on that broker were failed over to other brokers. However, the topic metadata was not updated properly, and the client still tries to talk to the failed broker, either to get metadata or to publish the message. That causes a timeout exception.

  2. Network connectivity issues. These can be easily diagnosed with telnet broker_host broker_port

  3. The broker is overloaded. This can happen if the broker is saturated with a high workload, or hosts too many topic partitions.

To handle the timeout exceptions, the general practice is:

  1. Rule out broker-side issues: make sure that the topic partitions are fully replicated and the brokers are not overloaded (see the replication check sketched after this list)

  2. Fix host name resolution or network connectivity issues if there are any

  3. Tune parameters such as request.timeout.ms, delivery.timeout.ms, etc. In my experience, though, the default values work fine in most cases.
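
For step 1, the replication state can be checked programmatically with the AdminClient. A minimal sketch, assuming Kafka 2.1.0 and Scala 2.12; my-topic and broker-host:9092 are placeholders:

import java.util.Properties
import org.apache.kafka.clients.admin.AdminClient
import scala.collection.JavaConverters._

val props = new Properties()
props.put("bootstrap.servers", "broker-host:9092")
val admin = AdminClient.create(props)

// A partition is under-replicated when its in-sync replica set (ISR) is
// smaller than its full replica set.
val descriptions = admin.describeTopics(List("my-topic").asJava).all().get()
descriptions.asScala.foreach { case (topic, desc) =>
  desc.partitions().asScala.foreach { p =>
    val underReplicated = p.isr().size() < p.replicas().size()
    println(s"$topic-${p.partition()}: leader=${p.leader()}, underReplicated=$underReplicated")
  }
}
admin.close()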

Solution 3

A TimeoutException will also happen if the value of "advertised.listeners" (protocol://host:port) is not reachable by the producer or consumer.

Check the configuration of the "advertised.listeners" property with the following command:

cat $KAFKA_HOME/config/server.properties
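
For example, a working entry might look like the line below (the hostname is hypothetical); the important part is that the advertised host is resolvable and reachable from the client machine, not just from the broker itself:

advertised.listeners=PLAINTEXT://broker-host.example.com:9092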

Solution 4

I suggest using the following properties when constructing the producer config.

Require an ack from the partition leader:

kafka.acks=1

Maximum number of retries the Kafka producer will make to send the message and receive an ack from the leader:

kafka.retries=3

Request timeout for each individual request:

timeout.ms=200

Wait before sending the next request; this avoids retrying in a tight loop:

retry.backoff.ms=50

Upper bound on the time to finish all retries:

dataLogger.kafka.delivery.timeout.ms=1200
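
The kafka. and dataLogger.kafka. prefixes above look like application-level property names; the corresponding producer config keys are acks, retries, request.timeout.ms, retry.backoff.ms, and delivery.timeout.ms (timeout.ms is assumed here to correspond to request.timeout.ms in current clients). A minimal sketch of wiring these values into a producer; the bootstrap address and String serializers are assumptions:

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig}

val props = new Properties()
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-host:9092")
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
  "org.apache.kafka.common.serialization.StringSerializer")
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
  "org.apache.kafka.common.serialization.StringSerializer")
props.put(ProducerConfig.ACKS_CONFIG, "1")                   // ack from the partition leader only
props.put(ProducerConfig.RETRIES_CONFIG, "3")                // producer-side retries
props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, "200")   // per-request timeout
props.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, "50")      // wait between retries
props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, "1200") // upper bound for send + retries

val producer = new KafkaProducer[String, String](props)

With the producer constructed, the send below attaches a callback that logs the outcome: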

import org.apache.kafka.clients.producer.{Callback, RecordMetadata}

producer.send(record, new Callback {
  override def onCompletion(recordMetadata: RecordMetadata, e: Exception): Unit = {
    if (e == null) {
      // e is null when the send succeeded and the broker acked the record
      logger.debug(s"KafkaLogger: message $record sent to topic ${recordMetadata.topic()}, partition ${recordMetadata.partition()}, offset ${recordMetadata.offset()}")
    } else {
      logger.error(s"Exception while sending message $record to error topic: $e")
    }
  }
})

Close the producer with a timeout:

import java.util.concurrent.TimeUnit

producer.close(1000, TimeUnit.MILLISECONDS)



Comments

  • xabhi, almost 2 years

    I often get Timeout exceptions for various reasons in my Kafka producer. I am currently using all the default values for the producer config.

    I have seen the following Timeout exceptions:

    org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.

    org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for topic-1-0: 30001 ms has passed since last append

    I have following questions:

    1. What are the general causes of these Timeout exceptions?

      1. Temporary network issue
      2. Server issue? If yes, then what kind of server issue?
    2. What are the general guidelines for handling the Timeout exception?

      1. Set 'retries' config so that Kafka API does the retries?
      2. Increase 'request.timeout.ms' or 'max.block.ms' ?
      3. Catch the exception and have the application layer retry sending the message? But this seems hard with async send, as messages will then be sent out of order.
    3. Are Timeout exceptions retriable exceptions and is it safe to retry them?

    I am using Kafka v2.1.0 and Java 11.

    Thanks in advance.

  • xabhi, about 5 years
    For Kafka to automatically retry sending messages, I will have to set the 'retries' config greater than 0, right? The default value is 0; does this mean Kafka doesn't retry by default?
  • mjuarez, about 5 years
    @xabhi retries in Kafka 2.1.0 are actually set to 2+ billion. This default changed from the 1.x versions, where it actually was zero. Check the docs out: kafka.apache.org/documentation.html
  • jumping_monkey, over 4 years
    You might also want to tune max.block.ms on the producer; its default is 60s, which is enough time to get yourself a cup of coffee ;-)
  • 0x52, over 2 years
    At last I found this property, thanks to you; the previous ones did not work for me after deploying an old microservice after 12 hours.