Kubernetes and JVM memory settings
Solution 1
If you don't specify -Xmx, the default max heap is 1/4 (25%) of the available RAM.
JDK 10 improved container support: the JVM now uses the container's RAM limit instead of the underlying host's. As pointed out by @David Maze, this has been backported to JDK 8.
Assuming you have a sufficiently recent version of JDK 8, you can use -XX:MaxRAMPercentage to modify the default percentage of total RAM used for the max heap. So instead of specifying -Xmx, you can pass, e.g., -XX:MaxRAMPercentage=75.0. See also https://medium.com/adorsys/usecontainersupport-to-the-rescue-e77d6cfea712
Here's an example using the Alpine OpenJDK Docker image: https://hub.docker.com/_/openjdk (see in particular the section "Make JVM respect CPU and RAM limits").
# this is running on the host with 2 GB RAM
docker run --mount type=bind,source="$(pwd)",target=/pwd -it openjdk:8
# running with MaxRAMPercentage=50 => half of the available RAM is used as "max heap"
root@c9b0b4d9e85b:/# java -XX:+PrintFlagsFinal -XX:MaxRAMPercentage=50.0 -version | grep -i maxheap
uintx MaxHeapFreeRatio = 100 {manageable}
uintx MaxHeapSize := 1044381696 {product}
openjdk version "1.8.0_265"
OpenJDK Runtime Environment (build 1.8.0_265-b01)
OpenJDK 64-Bit Server VM (build 25.265-b01, mixed mode)
# running without MaxRAMPercentage => default 25% of RAM is used
root@c9b0b4d9e85b:/# java -XX:+PrintFlagsFinal -version | grep -i maxheap
uintx MaxHeapFreeRatio = 100 {manageable}
uintx MaxHeapSize := 522190848 {product}
openjdk version "1.8.0_265"
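You can also verify the effective max heap from inside the application itself, rather than via -XX:+PrintFlagsFinal. A minimal sketch (the class name MaxHeap is my own, not from the thread):

```java
// MaxHeap.java -- prints the JVM's effective max heap size.
// Run it inside the container, e.g. `java -XX:MaxRAMPercentage=50.0 MaxHeap`,
// to confirm the heap is sized against the container's memory limit.
public class MaxHeap {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap: " + (maxBytes >> 20) + " MiB");
    }
}
```

Runtime.maxMemory() reports the same limit that MaxHeapSize reflects in the -XX:+PrintFlagsFinal output above.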
Solution 2
In my K8s setup, I am using Consul to manage the pod configuration. Here is a command to override the JVM setting on the fly. It is pretty much project-specific, but it might give you a hint if you are using Consul for configuration.
kubectl -n <namespace> exec -it consul-server -- bash -c "export CONSUL_HTTP_ADDR=https://localhost:8500 && /opt/../home/bin/bootstrap-config --token-file /opt/../config/etc/SecurityCertificateFramework/tokens/consul/default/management.token kv write config/processFlow/jvm/java_option_xmx -Xmx8192m"
PNS
Updated on June 14, 2022
Comments
-
PNS almost 2 years:
In a Kubernetes cluster with numerous microservices, one of them is used exclusively for a Java Virtual Machine (JVM) that runs a Java 1.8 data processing application.
Until recently, jobs running in that JVM pod consumed less than 1 GB of RAM, so the pod had been set up with 4 GB of maximum memory, without any explicit heap size settings for the JVM.
Some new data now require about 2.5 GB for the entire pod, including the JVM (as reported by the kubectl top command after launching with an increased memory limit of 8 GB), but the pod crashes soon after starting with a limit of 4 GB.
Using a heap size range like -Xms256m -Xmx3072m with a limit of 4 GB does not solve the problem; in fact, now the pod does not even start.
Is there any way to parameterize the JVM to accommodate the 2.5 GB needed, without increasing the pod's 4 GB maximum memory?
-
PNS over 3 years: We have already tried all of these configuration options, but none of them works. As a matter of fact, we are running OpenJDK 1.8 versions newer than 8u200, but somehow the option -XX:MaxRAMPercentage is still not recognized.
-
Juraj Martinka over 3 years: At least the latest Alpine JDK Docker image seems to support it; see my edited answer and also hub.docker.com/_/openjdk
-
Juraj Martinka over 3 years: How did you check that it is "not recognized"? Did you use -XX:+PrintFlagsFinal, some monitoring tool, or is it just a guess based on the behavior of your application?
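Besides -XX:+PrintFlagsFinal, one programmatic way to answer that question is to list the arguments the JVM was actually started with. A small sketch (class name JvmArgs is my own):

```java
import java.lang.management.ManagementFactory;

// Prints the flags the JVM was actually started with -- useful to
// confirm whether -XX:MaxRAMPercentage reached the JVM at all,
// e.g. when flags are injected via an entrypoint script or JAVA_TOOL_OPTIONS.
public class JvmArgs {
    public static void main(String[] args) {
        for (String arg : ManagementFactory.getRuntimeMXBean().getInputArguments()) {
            System.out.println(arg);
        }
    }
}
```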