Idle AWS EC2 but high memory usage
I don't have a C4.large instance handy to check my theory, so I may be shooting in the dark, but have you checked the stats for the Xen balloon driver?
Here's a dramatic explanation of the possible mechanism: http://lowendbox.com/blog/how-to-tell-your-xen-vps-is-overselling-memory/
And here's documentation of the various sysfs paths that will give you more information: https://www.kernel.org/doc/Documentation/ABI/stable/sysfs-devices-system-xen_memory
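If the balloon driver is loaded, its state can be inspected directly. A minimal sketch using the sysfs paths documented above (they exist only on Xen guests with the balloon driver, hence the fallback branch):

```shell
# Compare the hypervisor's memory target with what the guest currently
# holds; a target below "current" means memory has been ballooned out.
BALLOON=/sys/devices/system/xen_memory/xen_memory0
if [ -r "$BALLOON/target_kb" ]; then
    echo "target:  $(cat "$BALLOON/target_kb") kB"
    echo "current: $(cat "$BALLOON/info/current_kb") kB"
else
    echo "Xen balloon driver not present"
fi
```

On a guest without the driver the script just reports that it is absent, which is itself useful information for this question.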
Haskell
Updated on September 18, 2022

Comments
Haskell over 1 year
I'm using an Amazon EC2 C4.large instance (3.75 GB memory in total), running Amazon-Linux-2015-09-HVM.

The memory usage increases day by day, as if there were a memory leak. So I killed all my programs and every memory-hungry process (Nginx/PHP-FPM/Redis/MySQL/sendmail). Strangely, the memory is not released and usage stays very high. The line "-/+ buffers/cache: 3070 696" shows the actual used/free memory with buffers and cache excluded:

$ free -m
             total       used       free     shared    buffers     cached
Mem:          3767       3412        354          4        138        203
-/+ buffers/cache:       3070        696
Swap:            0          0          0
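For reference, the "-/+ buffers/cache" row is pure arithmetic on the "Mem:" row; a quick sketch using the numbers above (the off-by-one difference from free's 3070/696 is just megabyte rounding inside free -m):

```shell
# Recompute the "-/+ buffers/cache" row from the Mem: row:
#   used_real = used - buffers - cached
#   free_real = free + buffers + cached
used=3412; free=354; buffers=138; cached=203
echo "-/+ buffers/cache: $((used - buffers - cached)) $((free + buffers + cached))"
# prints: -/+ buffers/cache: 3071 695
```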
As you can see, after the kill only a few user processes remain, and the largest uses just 0.1% of memory:

$ ps aux --sort=-resident | head -30
USER       PID %CPU %MEM    VSZ   RSS TTY    STAT START TIME COMMAND
root     32397  0.0  0.1 114232  6672 ?      Ss   08:04 0:00 sshd: ec2-user [priv]
ec2-user 32399  0.0  0.1 114232  4032 ?      S    08:04 0:00 sshd: ec2-user@pts/0
ntp       2329  0.0  0.1  23788  4020 ?      Ss   Dec06 0:00 ntpd -u ntp:ntp -p /var/run/ntpd.pid -g
ec2-user 32400  0.0  0.0 113572  3368 pts/0  Ss   08:04 0:00 -bash
rpcuser   2137  0.0  0.0  39828  3148 ?      Ss   Dec06 0:00 rpc.statd
root      2303  0.0  0.0  76324  2944 ?      Ss   Dec06 0:00 /usr/sbin/sshd
root      2089  0.0  0.0 247360  2676 ?      Sl   Dec06 0:00 /sbin/rsyslogd -i /var/run/syslogd.pid -c 5
root      1545  0.0  0.0  11364  2556 ?      Ss   Dec06 0:00 /sbin/udevd -d
root         1  0.0  0.0  19620  2540 ?      Ss   Dec06 0:00 /sbin/init
ec2-user  1228  0.0  0.0 117152  2480 pts/0  R+   10:32 0:00 ps aux --sort=-resident
root      2030  0.0  0.0   9336  2264 ?      Ss   Dec06 0:00 /sbin/dhclient -q -lf /var/lib/dhclient/dhclient-eth0.leases -pf /var/run/dhclient-eth0.pid eth0
rpc       2120  0.0  0.0  35260  2264 ?      Ss   Dec06 0:00 rpcbind
root      2071  0.0  0.0 112040  2116 ?      S<sl Dec06 0:00 auditd
root      1667  0.0  0.0  11308  2064 ?      S    Dec06 0:00 /sbin/udevd -d
root      1668  0.0  0.0  11308  2040 ?      S    Dec06 0:00 /sbin/udevd -d
root      2373  0.0  0.0 117608  2000 ?      Ss   Dec06 0:00 crond
ec2-user  1229  0.0  0.0 107912  1784 pts/0  S+   10:32 0:00 head -30
root      2100  0.0  0.0  13716  1624 ?      Ss   Dec06 0:09 irqbalance --pid=/var/run/irqbalance.pid
root      2432  0.0  0.0   4552  1580 ttyS0  Ss+  Dec06 0:00 /sbin/agetty ttyS0 9600 vt100-nav
root      2446  0.0  0.0   4316  1484 tty6   Ss+  Dec06 0:00 /sbin/mingetty /dev/tty6
root      2439  0.0  0.0   4316  1464 tty3   Ss+  Dec06 0:00 /sbin/mingetty /dev/tty3
root      2437  0.0  0.0   4316  1424 tty2   Ss+  Dec06 0:00 /sbin/mingetty /dev/tty2
root      2444  0.0  0.0   4316  1416 tty5   Ss+  Dec06 0:00 /sbin/mingetty /dev/tty5
root      2434  0.0  0.0   4316  1388 tty1   Ss+  Dec06 0:00 /sbin/mingetty /dev/tty1
root      2441  0.0  0.0   4316  1388 tty4   Ss+  Dec06 0:00 /sbin/mingetty /dev/tty4
dbus      2160  0.0  0.0  21768   232 ?      Ss   Dec06 0:00 dbus-daemon --system
root      2383  0.0  0.0  15372   144 ?      Ss   Dec06 0:00 /usr/sbin/atd
root      2106  0.0  0.0   4384    88 ?      Ss   Dec06 0:16 rngd --no-tpm=1 --quiet
root         2  0.0  0.0      0     0 ?      S    Dec06 0:00 [kthreadd]
No process is using much memory, yet only 696 MB of the 3.75 GB is free system-wide. Is this a bug in EC2 or in Amazon Linux? I have another T2.micro instance where, after killing Nginx/MySQL/PHP-FPM, the memory is released and the free number jumps back up. Any help would be appreciated.
Michael - sqlbot over 8 years
System "available" memory is essentially 3070, of which 696 is completely free and would not need to be reclaimed from the cache. I see nothing of concern here. Also, EC2 is a virtual machine environment, not an operating system, and as such it technically could not have a bug that caused system memory to leak.
Haskell over 8 years
Thanks for your attention. The 3070 is not really "available": when only around 500M is left, mysqld_safe restarts MySQL by itself.
Michael - sqlbot over 8 years
Check egrep 'kernel|oom' /var/log/syslog. mysqld_safe is restarting mysql because the kernel is killing mysqld due to low memory. It's probably dipping lower than what you see, but when mysqld gets killed, the real memory hog may also abruptly react to the loss of its connection to mysql and release some of the memory it's hogging. You should add a swap file so you can catch the hogs in action before they trigger such a reaction.
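A sketch of both suggestions. The swap file here is a 64 MB file under /tmp purely for illustration; a real one would normally be 1-2 GB at /swapfile, and the activation steps need root. Since Amazon Linux logs kernel messages to /var/log/messages rather than /var/log/syslog, dmesg is the distribution-neutral way to look for OOM kills:

```shell
# 1. Look for OOM-killer activity in the kernel ring buffer.
dmesg | grep -i 'killed process' || echo "no OOM kills logged"

# 2. Create and format a swap file (activation commented out: needs root).
SWAPFILE=/tmp/swapfile            # illustration only; normally /swapfile
dd if=/dev/zero of="$SWAPFILE" bs=1M count=64 status=none
chmod 600 "$SWAPFILE"
mkswap "$SWAPFILE"
# sudo swapon "$SWAPFILE"
# echo "$SWAPFILE none swap sw 0 0" | sudo tee -a /etc/fstab   # persist across reboots
```

With swap active, a leaking process grows into swap instead of being killed outright, so it stays visible at the top of ps aux --sort=-resident long enough to be identified.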
Haskell over 8 years
Thanks for your attention. I've listed the output of ps aux --sort=-resident above. I killed most processes; now only a few system essentials are running, and no process occupies a large amount of memory. It's strange to me.
Haskell over 8 years
Thanks for your attention. I saw that article, and I'm just saying that the line "-/+ buffers/cache: 3070 696" already indicates the actual free memory (buffers/cache excluded).
Haskell over 8 years
lowendbox.com/blog/… describes a similar issue, an idle instance with high memory usage. However, C4.large with Amazon-Linux-HVM doesn't have the Xen balloon driver; paths like /proc/xen/balloon and /sys/devices/system/xen_memory/ don't exist. I tried echo 3 > /proc/sys/vm/drop_caches to free caches, and the free number decreases. Thanks for your answer; however, I still don't get why killing processes doesn't affect the free number.
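One place memory can "hide" in exactly this situation is kernel slab: slab caches (dentries, inodes, and other kernel objects) are not charged to any process, so they never appear in ps, and the old free -m "-/+ buffers/cache" row does not subtract them either (only newer kernels expose a MemAvailable figure that does). A quick check using standard /proc/meminfo field names:

```shell
# Slab memory belongs to the kernel, not to any process; a large
# SReclaimable value can explain "used" memory that ps cannot see.
grep -E '^(MemFree|Buffers|Cached|Slab|SReclaimable|SUnreclaim):' /proc/meminfo
```

If Slab is in the gigabyte range here, the "missing" memory is kernel cache that will be reclaimed under pressure, not a leak in any userland process.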
Michael - sqlbot over 8 years
The "lowendbox" author is grasping at straws and, in any event, AWS doesn't play oversubscription games like this. That post is speculation and does not apply here.