Ubuntu is quickly running out of RAM, and my computer is starting to freeze. What command will solve this?
Solution 1
In my experience Firefox and Chrome use more RAM than my first 7 computers combined. Probably more than that but I'm getting away from my point. The very first thing you should do is close your browser. A command?
killall -9 firefox google-chrome google-chrome-stable chromium-browser
I've tied the most popular browsers together into one command there, but obviously if you're running something else (or know you aren't using one of these) just modify the command. The `killall -9 ...` is the important bit. People do get iffy about SIGKILL (signal number 9), but browsers are extremely resilient. More than that, terminating slowly via SIGTERM will mean the browser does a load of cleanup rubbish, which requires a burst of additional RAM, and that's something you can't afford in this situation.
If you can't get that into an already-running terminal or an Alt+F2 dialogue, consider switching to a TTY. Control + Alt + F2 will get you to TTY2, which should allow you to log in (though it might be slow) and should even let you use something like `htop` to debug the issue. I don't think I've ever run out of RAM to the point I couldn't get `htop` up.
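If `htop` isn't installed or won't start, a couple of stock procps commands give a similar picture from a TTY. A minimal sketch:

```shell
# Snapshot of overall RAM and swap pressure:
free -h
# The five biggest memory consumers (PID, command, resident %mem):
ps -eo pid,comm,%mem --sort=-%mem | head -6
```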
The long-term solution involves either buying more RAM, renting it via a remote computer, or not doing what you're currently doing. I'll leave the intricate economic arguments up to you, but generally speaking, RAM is cheap to buy; if you only need a burst amount, a VPS billed per minute or per hour is a fine choice.
Solution 2
On a system with the Magic System Request key enabled, pressing Alt + SysRq + f (if not marked on your keyboard, SysRq is often on the Print Screen key) will manually invoke the kernel's out-of-memory killer (OOM killer), which tries to pick the worst-offending process for memory usage and kill it. You can do this if you have perhaps less time than you've described and the system is just about to start (or maybe has already started) thrashing, in which case you probably don't care exactly what gets killed, just that you end up with a usable system. Sometimes this can end up killing X, but most of the time these days it's a lot better at picking a bad process than it used to be.
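SysRq support has to be enabled before you need it. A sketch of a sysctl fragment (the filename is an assumption; any name under /etc/sysctl.d/ works):

```
# /etc/sysctl.d/10-magic-sysrq.conf (hypothetical filename)
# 1 enables all SysRq functions; a bitmask including 64 is enough
# for the process-signalling group, which covers the OOM-kill key.
kernel.sysrq = 1
```

Apply it with `sudo sysctl --system`. On a machine without a keyboard chord available (e.g. over SSH), `echo f | sudo tee /proc/sysrq-trigger` invokes the same OOM-killer path.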
Solution 3
Contrary to other answers, I suggest that you disable swap while you are doing this. While swap keeps your system running in a predictable manner, and is often used to increase the throughput of applications accessing the disk (by evicting unused pages to make room for the disk cache), in this case it sounds like your system is being slowed to unusable levels because too much actively used memory is being forcibly evicted to swap.
I would recommend disabling swap altogether while doing this task, so that the out-of-memory killer will act as soon as the RAM fills up.
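A minimal sketch of that workflow, assuming your swap devices are listed in /etc/fstab:

```shell
sudo swapoff -a   # migrate swapped pages back to RAM and stop further swapping
cat /proc/swaps   # verify: only the header line should remain
# ... run the memory-hungry build ...
sudo swapon -a    # restore swap afterwards
```

Note that `swapoff -a` itself can take a while if a lot is already swapped out, so it's best run before the build rather than mid-thrash.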
Alternative solutions:
- Increase the read speed of swap by putting your swap partition in RAID1
  - Or RAID0 if you're feeling risky, but that will bring down a large number of running programs if any of your disks malfunction.
- Decrease the number of concurrent build jobs ("more cores = more speed", we all say, forgetting that each job takes a linear toll on RAM)
- This could go both ways, but try enabling `zswap` in the kernel. This compresses pages before they are sent to swap, which may provide just enough wiggle room to speed your machine up. On the other hand, it could just end up being a hindrance with the extra compression/decompression it does.
- Turn down optimisations or use a different compiler. Optimising code can sometimes take several gigabytes of memory. If you have LTO turned on, you're going to use a lot of RAM at the link stage too. If all else fails, you can try compiling your project with a lighter-weight compiler (e.g. `tcc`), at the expense of a slight runtime performance hit to the compiled product. (This is usually acceptable if you're doing this for development/debugging purposes.)
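For the zswap option, a sketch of the kernel command-line change on a stock Ubuntu/GRUB setup (parameter names are the upstream zswap ones; lz4 availability depends on your kernel configuration):

```
# /etc/default/grub — then run: sudo update-grub && reboot
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash zswap.enabled=1 zswap.compressor=lz4"
```

You can also toggle it at runtime via /sys/module/zswap/parameters/enabled if your kernel was built with zswap support.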
Solution 4
You can use the following command (repeatedly if needed) to kill the process using the most RAM on your system:
ps -eo pid --no-headers --sort=-%mem | head -1 | xargs kill -9
With:
- `ps -eo pid --no-headers --sort=-%mem`: display the process IDs of all running processes, sorted by memory usage
- `head -1`: only keep the first line (the process using the most memory)
- `xargs kill -9`: kill the process
Edit after Dmitry's accurate comment:
This is a quick and dirty solution that should be executed when there are no sensitive tasks running (tasks that you don't want to `kill -9`).
Solution 5
Before running your resource-consuming commands, you could also use the setrlimit(2) system call, probably with the `ulimit` builtin of your bash shell (or the `limit` builtin in zsh), notably with `-v` for `RLIMIT_AS`. Then any too-large virtual address space consumption (e.g. with mmap(2) or sbrk(2), as used by malloc(3)) will fail, with errno(3) set to `ENOMEM`.
Then the memory-hungry processes in your shell (started after you typed `ulimit`) will fail, and typically exit, before they can freeze your system.
Read also Linux Ate My RAM and consider disabling memory overcommitment (by running the command `echo 2 > /proc/sys/vm/overcommit_memory` as root; see proc(5): mode 2 disables overcommit, while 0, the default, only applies a heuristic).
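If you go down the overcommit route, the persistent form is a sysctl fragment (filename hypothetical; mode 2 with a ratio is the strict-accounting setup described in proc(5)):

```
# /etc/sysctl.d/90-overcommit.conf (hypothetical filename)
vm.overcommit_memory = 2    # never overcommit: allocations fail instead of OOM-killing later
vm.overcommit_ratio = 80    # commit limit = swap + 80% of RAM
```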
Updated on September 18, 2022

Comments
-
Anon over 1 year
It happens pretty often to me when I am compiling software in the background and suddenly everything starts to slow down and eventually freeze up [if I do nothing], as I have run out of both RAM and swap space.
This question assumes that I have enough time and resources to open up Gnome Terminal, search through my history, and execute one `sudo` command. What command can save me from having to do a hard reboot, or any reboot at all?
-
Thomas Ward almost 7 years: Comments are not for extended discussion; this conversation has been moved to chat.
-
JoL almost 7 years: If you run out of swap space, I think you have too little of it. I got 20G of swap space on this computer. The point is for it to give you enough time with a usable system to kill whatever is eating up your memory. It's not something where you take only what you'll use, but what you hope you'll never use.
-
sudo almost 7 years: Are you sure both RAM and swap are being filled? If that were the case, the OOM handler would kill your compiler and free up memory (and also screw up your build process). Otherwise, I'd think it's just getting filled up, and maybe your system is slow because your swap is on your system disk.
-
Shahbaz almost 7 years: Try reducing the number of your parallel builds if you don't have enough RAM to support it. If your build starts swapping, you will be way slower. With `make`, try `-j4` for example, for 4 parallel builds at a time.
-
Admin almost 7 years: "Alexa, order me 8 gigs of RAM"
-
T. Sar almost 7 years: This is a very bad idea if your memory is running out because you're compiling very complex stuff. There is a very non-trivial chance of killing your compiler and losing all your progress up to now, which in a very large project could be a big deal.
-
Iluvathar almost 7 years: @T.Sar if you're going straight into thrashing, you already either lose or get a chance of killing the memory-eater. You don't gain anything if you just refrain from acting.
-
Iluvathar almost 7 years: @Muzer this will only work when you have set `kernel.sysrq` to `1`, or a number including the correct bit, in your `/etc/sysctl.d/10-magic-sysrq.conf`.
-
T. Sar almost 7 years: @Ruslan I'm not saying to refrain from acting, just that this specific command can cause some undesirable loss of progress, and maybe another option could be a better choice. In Windows 7, inserting a flashdrive with TurboBoost configured on it could very save you from a OOM issue, for example, by giving the System more memory to work with.
-
Muzer almost 7 years: @T.Sar You're not going to lose your progress if you're using a sane build system. You'll retain all the object files but the one you were actually compiling, then you'll get to go back to pretty much where you left off.
-
Muzer almost 7 years: @T.Sar I dunno, if you're doing a parallel build and the files have reasonably complex interdependencies, I can see it consuming a fair amount of memory. That plus the usual memory-hog email client and web browser with more than a few tabs open, and I can see it really pushing weaker systems.
-
T. Sar almost 7 years: @Muzer The thing is, for a compiler, memory in use is work in progress. If the compiler ever needs to load that much stuff in the first place, to the point it's not cleaning up and just piling up stuff forever, you certainly aren't building something sane. Keep in mind that Linux itself, which is an extremely huge and complex system, can be pretty much compiled by any development machine nowadays. I have very big doubts that the OP is compiling something more complex than Linux itself on a low-end machine.
-
Muzer almost 7 years: @T.Sar Just because the thing you're compiling isn't sane doesn't mean the build system isn't sane. Build systems since time immemorial have stored object files for re-use in subsequent compilations. On the other hand, I can certainly name plenty of software projects with less sanity than Linux (which is generally pretty well-designed). For example, compiling something like Firefox or OpenOffice with 8 parallel build threads, I can easily see taking on the order of gigabytes of RAM. There are also plenty of monolithic corporate systems that depend on hundreds of libraries.
-
Iluvathar almost 7 years: @T.Sar Linux isn't really complex from the compiler's POV. Actually there are hardly any C programs which are. What about C++? Have you ever tried building a program using Eigen or Boost? You'd be surprised how much memory the compiler sometimes eats with such programs, and they don't have to be complex themselves.
-
Sergiy Kolodyazhnyy almost 7 years: I have this method implemented as a script, actually, here. Quite useful for adding swap on the fly.
-
Anon almost 7 years: Note, making a swap file only works for some filesystems. BTRFS, for example, does not support a swap file, while Ext4 does.
-
Anon almost 7 years: Interesting... care to explain that command logic?
-
Sergiy Kolodyazhnyy almost 7 years: @Akiva basically this tells the Linux kernel to free up the RAM. This doesn't get rid of the cause; killing the offending process does, so Oli's answer is the solution to the problem. Dropping caches will prevent your system from running out of memory, therefore prevent freezing, thus buying you time to figure out the actual issue. This probably will be a bit faster than making a swap file, especially if you're on a hard drive and not an SSD.
-
wizzwizz4 almost 7 years: @T.Sar Do you mean Linux (the kernel that is used in some builds) or do you mean the GNU coreutils (`bash`, `[`, `man` etc.) or do you mean the GUI (X server, probably `openbox`, something else like LXDE) or do you mean the application software (stuff you get from `apt` or whatever package manager you use)? Some are more complex than others.
-
Score_Under almost 7 years: The cache is the first thing to go when you fill up memory, so I don't think this will help very much. In fact, I don't think this command has a practical use outside of debugging kernel behaviour or timing disk access optimisations. I would humbly recommend against running this command on any system in need of more performance.
-
Anon almost 7 years: While I am doing what?
-
Score_Under almost 7 years: While you are compiling your project, or if you compile frequently, maybe while you are developing in general.
-
Anon almost 7 years: "out-of-memory killer will act as soon as the RAM fills up" - this has never happened to me, ever. I have left computers running overnight, and they are as frozen the next day as when I left them hours prior. Depends on the application maybe?
-
Score_Under almost 7 years: If you have swap turned off, that is Linux's behaviour when you run out of memory. If Linux does not invoke the out-of-memory killer but freezes instead, that might signify that there are deeper problems with the setup. Of course, if swap is turned on, the behaviour is slightly different.
-
Criggie almost 7 years: Some swap is generally wise, but allocating large amounts simply lets the machine thrash more before the OOM killer steps in and picks a volunteer. The hoary old rule of thumb about "double your RAM as swap" is long dead. Personally I see no value in allocating more than ~1 GB of swap total.
-
Jonas Schäfer almost 7 years: @Akiva Have you ever tried without swap? This answer is spot-on. I'd like to add that running `sudo swapoff -a` may save you when you are already in a bind: it will immediately stop any additional use of swap space, i.e. the OOM killer should be invoked in the next instant and bring the machine into working order. `sudo swapoff -a` is also an excellent precautionary measure when debugging memory leaks or compiling, say, Firefox. Normally, swap is a bit useful (e.g. for hibernation or swapping out really unneeded stuff), but when you're actually using memory, the freezes are worse.
-
Peter Cordes almost 7 years: With ext4, you can `fallocate -l 8G /root/moreswap` instead of `dd` to avoid ever needing to do 8GB of I/O while the system is thrashing. This doesn't work with any other filesystem, though. Definitely not XFS, where swapon sees unwritten extents as holes. (I guess this xfs mailing list discussion didn't pan out). See also `swapd`, a daemon which creates/removes swap files on the fly to save disk space. Also askubuntu.com/questions/905668/…
-
Peter Cordes almost 7 years: But on modern desktops with reasonable amounts of RAM and disk space, that's probably not useful. It just makes it slower to recover if a buggy program is going berserk allocating+using memory.
-
Peter Cordes almost 7 years: @JonasWielicki: That's fantastic. I'd assumed that `swapoff` would refuse to work, or just trigger more thrashing as the system tried to page in whatever it could (and evict read-only pages backed by files) when a runaway process is evicting everyone else's pages. I hadn't thought of it triggering the OOM killer on the next demand for more pages.
-
Peter Cordes almost 7 years: @Score_Under: Separate swap partitions on each disk are supposed to be significantly more efficient than swap on an md RAID0 device. I forget where I read that. The Linux RAID wiki recommends separate partitions over RAID0, but doesn't say anything very strong about why it's better. Anyway yes, RAID1 or RAID10,n2 makes sense for swap, especially if you mostly just want to be able to swap out some dirty but very cold pages to leave more RAM for the pagecache. i.e. swap performance isn't a big deal.
-
Jules almost 7 years: @T.Sar "In Windows 7, inserting a flashdrive with TurboBoost configured on it could very save you from a OOM issue" ... I think you mean ReadyBoost, not TurboBoost (TurboBoost is a CPU frequency adaptation technology). ReadyBoost won't help in an OOM situation -- it provides additional disk cache, not additional virtual memory.
-
Jules almost 7 years: @Score_Under - "The cache is the first thing to go when you fill up memory" -- well, that depends on your setting in `/proc/sys/vm/swappiness`. With swappiness set to 0, you're right. With the default setting of 60, you're close. With it set to 200, however, it'll be the least recently-used pages of running processes that get dropped first... in that particular case, this command may be useful. But setting swappiness to 0 (or some low value, maybe 20 or 30) would be a better general approach, however.
-
William Hay almost 7 years: @Criggie, @Peter Cordes the question is presented as an immediate problem. Adding swap will allow more things to fit inside virtual memory at the cost of speed. Consuming lots of memory doesn't necessarily mean the program is going berserk, just that it needs more memory than you have.
-
Dmitry Grigoryev almost 7 years: -1. On a computer with limited RAM, disabling swap during a compilation is one sure way to crash it.
-
Dmitry Grigoryev almost 7 years: @Score_Under This command was useful on old kernels with the `kswapd` bug (some people even created cronjobs with it). But you're right, I doubt it will help with this question.
-
Dmitry Grigoryev almost 7 years: @Criggie "Personally I see no value in allocating more than ~1 GB swap total" - Have you tried to build Firefox?
-
Dmitry Grigoryev almost 7 years: This is much worse than letting the OOM killer handle the situation. The OOM killer is much smarter than that. Do you really run such commands on a computer with ongoing compilations?
-
Anon almost 7 years: Does this work on all filesystems, like btrfs?
-
Anon almost 7 years: @DmitryGrigoryev Really? Is Firefox actually that hefty of a build?
-
Dmitry Grigoryev almost 7 years: @T.Sar Linux itself is only 2 to 10 MB of compiled code; it's hardly a complex piece of software by today's standards.
-
Dmitry Grigoryev almost 7 years: @Akiva zram never touches the disk, so I would say yes ;)
-
Dmitry Grigoryev almost 7 years: @Akiva Last time I checked, the recommended build configuration was 16 GB of RAM. The main executable file (`xul.dll`) is around 50 MB, so it's about 10 times heavier than the Linux kernel.
-
Score_Under almost 7 years: @DmitryGrigoryev Yes, programs will exit without warning - because of Linux's OOM killer - but this is far preferable to the system locking up without recourse.
-
Dmitry Grigoryev almost 7 years: My point is that following your advice, one may not be able to run those programs at all, because they need swap. A build that fails 100% of the time is worse than a build which has a 50% chance of locking up the system, isn't it?
-
Riking almost 7 years: @Dmitry But the cause of the failure is fairly obvious - you just caused it - and you can make an informed decision at that point to turn it back on (or not).
-
David Schwartz almost 7 years: Without swap, on many machines it is impossible to compile large chunks of code. Why would you assume that it's the compiling he wants to sacrifice?
-
jamesqf almost 7 years: @wizzwizz4: Well, that's kind of the point of *nix, that pretty much everything that is the "system" is smallish independent pieces. Also, complexity and memory use of software really isn't all that closely related to complexity of compilation. I've worked on parallel apps that can use hundreds of GBytes and run for days doing some fairly complex calculations, yet they compile in a few minutes without overloading memory on a 2 GB laptop.
-
Thomas Ward almost 7 years: Comments are not for extended discussion; this conversation has been moved to chat.
-
Francisco Presencia almost 7 years: I got a couple of commands linked to my own `lazygit` command that I use from time to time; maybe something like that could be applied here? The whole `killall ...` script could be reduced to a simple `emptyram` or something like that.
-
TOOGAM almost 7 years: 'tis just a shame that I had to scroll down so far to find this answer. I was hoping someone would propose a way that would suspend progress on this RAM eater.
-
Oli almost 7 years: You don't need to run the full command if you know what browser is running, and I'd assume most people who can identify a RAM shortage do. By extension, I'd find it harder to remember that I'd written an `emptyram` script than just punching in `killall -9 firefox`.
rackandboneman almost 7 years: Once you're at the level where you handle things in such ways, get rid of the awkward sudo crutch :)
-
Sergiy Kolodyazhnyy almost 7 years: @rackandboneman what do you mean?
-
user541686 almost 7 years: @Muzer: Regarding losing progress: I don't know what you call a "sane build system", and I haven't tried this on Linux, but e.g. cancelling builds in Visual Studio has frequently given me unusable object files that I had to manually delete (since they were half-baked). It's not all-or-nothing-per-object-file necessarily, unless your compiler does it that way.
-
Muzer almost 7 years: @Mehrdad Never experienced that myself, but I've not used Visual Studio. GCC and Clang tend not to output the object file with its final filename until it's completely done; before that I guess it's saved as a temporary file or something.
-
Stephan Bijzitter almost 7 years: Buying RAM... why not just download more RAM?
-
Oli almost 7 years: Well, you might joke, but if you need to do something for a short time that needs far more RAM and CPU than you have, renting a VPS by the minute is pretty economical for one-shots.
-
9ilsdx 9rvj 0lo almost 7 years: It is nothing near the answer that OP expects, but it answers the question literally: my crap machine is rendered unusable when I build on it - stop building on the crap machine.
-
sudo almost 7 years: Oh, I just saw that someone commented that above somewhere.
-
sudo almost 7 years: I'm totally with @Criggie on this one. If your machine has a modern amount of memory, best not to let it thrash forever if something goes haywire. If you need more swap for something specifically, you can always temporarily swapon some more.
-
Iluvathar almost 7 years: @DmitryGrigoryev it's so smart that it sometimes kills Xorg on my desktop. In modern kernels the OOM killer seems to have gained some sanity, but I wouldn't really trust it after all that.
-
GnP almost 7 years: @Criggie you have to be careful about overcommit settings, though.
-
Jonas Schäfer almost 7 years: @DavidSchwartz Sometimes one is caught by surprise that a process requires that high an amount of memory. Once that is known (and it is good to find out in a sane way, i.e. crashing the compilation and not locking up the computer entirely, possibly losing valuable data in other processes this way), it is possible to free up more memory, e.g. by closing browsers, mail clients and other non-compiler-related software for the duration of the compilation process and in a controlled manner. With swap and bad I/O scheduling, all you get is a freeze you're unlikely to recover from.
-
Anon about 4 years: Is this something worthy of a kernel patch?