Multiple liveness probes in Kubernetes

The Kubernetes API allows one liveness probe and one readiness probe per container (Deployment / Pod). I recommend centralizing the validations of all components in the application behind a single REST endpoint and pointing the probe at it:

livenessProbe:
  httpGet:
    path: /monitoring/alive
    port: 3401
    httpHeaders:
    - name: X-Custom-Header
      value: Awesome
  initialDelaySeconds: 15
  timeoutSeconds: 1
  periodSeconds: 15
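
A quick way to see what the kubelet will get back from such an endpoint is to call it from inside a running pod; the pod name below is a placeholder and curl has to be present in the image:

kubectl exec my-app-pod -- curl -s -o /dev/null -w '%{http_code}\n' \
  -H 'X-Custom-Header: Awesome' http://localhost:3401/monitoring/alive

Any status code from 200 to 399 counts as a successful liveness check.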

or use a shell script for the same task with an exec probe, like:

livenessProbe:
  exec:
    command:
      - /bin/bash
      - -c
      - ./liveness.sh
  initialDelaySeconds: 220
  timeoutSeconds: 5

liveness.sh

#!/bin/sh
# Succeed (exit 0) while a java process is running; fail otherwise.
# The bracketed pattern keeps grep from counting its own process entry.
if [ "$(ps -ef | grep -c '[j]ava')" -ge 1 ]; then
  exit 0
else
  echo "Nothing happens!" 1>&2
  exit 1
fi
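
If the program really has several independent components, the same exec probe can aggregate one check per component and fail as soon as any of them looks dead. Everything below is a sketch: the process name, the port and the heartbeat file are invented placeholders, and pgrep / nc have to exist in the image.

#!/bin/sh
# Aggregated liveness check (sketch): any failing component fails the whole probe.

# Component 1: the main java process must be running
if ! pgrep -f java > /dev/null; then
  echo "java process is gone" 1>&2
  exit 1
fi

# Component 2: an internal component must be listening on its (assumed) port
if ! nc -z localhost 8081; then
  echo "component on port 8081 is not listening" 1>&2
  exit 1
fi

# Component 3: a worker must have touched its (assumed) heartbeat file in the last 60 s
now=$(date +%s)
beat=$(stat -c %Y /tmp/worker.heartbeat 2>/dev/null || echo 0)
if [ $((now - beat)) -gt 60 ]; then
  echo "worker heartbeat is stale" 1>&2
  exit 1
fi

exit 0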

Whatever the check writes to stderr shows up in the Pod events when the probe fails, so you can see the failure in question there: "Warning Unhealthy Pod Liveness probe failed: Nothing happens!"
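
To see those events, describe the pod or filter the event list (the pod name is a placeholder):

kubectl describe pod my-app-pod
kubectl get events --field-selector involvedObject.name=my-app-pod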

Hope this helps

Author by styrofoam fly, updated on July 18, 2022

Comments

  • styrofoam fly, almost 2 years

    I have a program which has multiple independent¹ components.

    It is trivial to add a liveness probe to each of the components; however, it's not easy to have a single liveness probe which would determine the health of all of the program's components.

    How can I make Kubernetes look at multiple liveness probes and restart the container when any of them is defunct?

    I know it can be achieved by adding more software, for example an additional bash script which does the liveness checks, but I am looking for a native way to do this.


    ¹ By independent I mean that failure of one component does not make the other components fail.

    • PhilipGough, about 6 years
      Shouldn't they be running in different Pods, or at the very least in different containers within a Pod?
    • styrofoam fly, about 6 years
      @user2983542 they should; however, not all applications are well-designed.
    • PhilipGough, about 6 years
      Right, I don't think this is possible, although I stand to be corrected. How about creating a sidecar container with some logic and setting the probe on that?
    • 9ilsdx 9rvj 0lo, almost 5 years
      So if 1 of your 3 components fails its liveness probe, should the pod be 33% unavailable? Trying to express what you want to achieve would already give you a clue.
  • simohe, about 4 years
    More exactly, there is one probe per container (and there can be several containers per pod/deployment/daemonset). The container is restarted when its liveness probe fails. -- Thanks for the hint about the message! I did not understand this in the doc.
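
As simohe's comment points out, probes are per container, so when the components can live in separate containers of the same Pod (for example the sidecar suggested above), each container gets its own liveness probe and is restarted on its own. A minimal sketch, with container names, images and ports invented for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: multi-component-app
spec:
  containers:
    - name: main-app                 # hypothetical main component
      image: example/main-app:1.0
      livenessProbe:
        httpGet:
          path: /monitoring/alive
          port: 3401
        initialDelaySeconds: 15
        periodSeconds: 15
    - name: worker                   # hypothetical second component / sidecar
      image: example/worker:1.0
      livenessProbe:
        exec:
          command: ["/bin/sh", "-c", "./liveness.sh"]
        initialDelaySeconds: 30
        periodSeconds: 15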