Why is the Linux OOM killer terminating my programs?


Memory cgroup out of memory

You need to avoid filling the memory cgroup that you are running within.

Task in /slurm/uid_11122/job_58003653/step_0 killed as a result of limit of /slurm/uid_11122/job_58003653
memory: usage 8388608kB, limit 8388608kB, failcnt 3673
memory+swap: usage 8388608kB, limit 16777216kB, failcnt 0
kmem: usage 0kB, limit 9007199254740988kB, failcnt 0

Memory cgroup stats for /slurm/uid_11122/job_58003653: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Memory cgroup stats for /slurm/uid_11122/job_58003653/step_extern: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Memory cgroup stats for /slurm/uid_11122/job_58003653/step_batch: cache:0KB rss:4452KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:4452KB inactive_file:0KB active_file:0KB unevictable:0KB
Memory cgroup stats for /slurm/uid_11122/job_58003653/step_0: cache:6399032KB rss:1985124KB rss_huge:1476608KB mapped_file:20232KB swap:0KB inactive_anon:1890552KB active_anon:6491116KB inactive_file:1216KB active_file:892KB unevictable:0KB

It looks like you have ~6.4 GB in "shmem", which usually means a tmpfs. (Other types of shmem are SysV IPC shared memory, as shown by ipcs, or a memfd.) Combined with ~2 GB of RSS, that puts you over the 8 GiB (8388608 kB) limit of your cgroup. "shmem" is not mentioned by name in the messages, but I infer it from the ~6.4 GB that appears in both "cache" and "active_anon".
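If you want to confirm where the shmem lives, a few read-only commands will usually show it. This is a sketch: paths like /dev/shm are typical defaults, not guaranteed on your cluster.

```shell
# Where is the shmem? Read-only checks, in increasing order of scope.
df -h -t tmpfs              # tmpfs mounts, e.g. /dev/shm -- files here are shmem
ipcs -m                     # SysV IPC shared-memory segments
grep Shmem: /proc/meminfo   # system-wide shmem total, in kB
```

Whichever of these grows in step with the cgroup's "cache" counter is the culprit.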

cache - page cache, including tmpfs (shmem), in bytes

active_anon - anonymous and swap cache on active least-recently-used (LRU) list, including tmpfs (shmem), in bytes

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/resource_management_guide/sec-memory
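Under those definitions, shmem pages are counted in "cache" but not in the file LRU lists, so a rough estimate of the tmpfs contribution is cache minus the file-backed pages. A back-of-the-envelope sketch using the step_0 numbers from the log:

```shell
# shmem ~= cache - (inactive_file + active_file), values in KB from step_0
awk 'BEGIN { cache = 6399032; inactive_file = 1216; active_file = 892
             print cache - inactive_file - active_file }'
# prints 6396924, i.e. ~6.4 GB of shmem
```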


When a cgroup goes over its limit, we first try to reclaim memory from the cgroup so as to make space for the new pages that the cgroup has touched. If the reclaim is unsuccessful, an OOM routine is invoked to select and kill the bulkiest task in the cgroup.

https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt
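On a cgroup-v1 kernel like this one (3.10), you can inspect your own job's memory cgroup without root. A sketch, assuming the memory controller is mounted at the conventional /sys/fs/cgroup/memory (on cgroup-v2 systems the layout differs):

```shell
# Find this shell's memory cgroup and read its counters (cgroup v1 layout).
CG=$(awk -F: '$2 ~ /(^|,)memory(,|$)/ {print $3}' /proc/self/cgroup)
echo "memory cgroup: $CG"
cat "/sys/fs/cgroup/memory${CG}/memory.limit_in_bytes" 2>/dev/null  # hard limit
cat "/sys/fs/cgroup/memory${CG}/memory.failcnt"        2>/dev/null  # times limit was hit
grep -E '^(cache|rss|swap) ' "/sys/fs/cgroup/memory${CG}/memory.stat" 2>/dev/null || true
```

Watching memory.failcnt and the "cache" line of memory.stat over time shows the pressure building long before the OOM killer fires.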

Control groups, usually referred to as cgroups, are a Linux kernel feature which allow processes to be organized into hierarchical groups whose usage of various types of resources can then be limited and monitored. The kernel's cgroup interface is provided through a pseudo-filesystem called cgroupfs. Grouping is implemented in the core cgroup kernel code, while resource tracking and limits are implemented in a set of per-resource-type subsystems (memory, CPU, and so on).

http://man7.org/linux/man-pages/man7/cgroups.7.html
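Once you know which tmpfs path the workflow is filling (often /dev/shm on clusters), the practical fix is to delete intermediates as the job runs, e.g. with an exit trap. A sketch with a hypothetical scratch directory; the location is an assumption to adjust for your setup:

```shell
#!/bin/sh
# Hypothetical: keep this job's tmpfs temporaries in one directory and
# remove it when the script exits, so the cgroup's "cache" cannot grow.
SCRATCH="/dev/shm/myworkflow.$$"     # assumed location; adjust to your setup
mkdir -p "$SCRATCH"
trap 'rm -rf "$SCRATCH"' EXIT INT TERM
# ... run the workflow, writing intermediates under "$SCRATCH" ...
```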


Author: Jadzia

Updated on September 18, 2022

Comments

  • Jadzia (almost 2 years ago)

    I am running a complex workflow via bash scripts, which use external programs/commands to do different things. It runs fine for several hours, but then suddenly the OOM killer terminates programs of my workflow, or the entire bash scripts, even though there is still plenty of memory available. I have logged the memory usage every 0.01 seconds with the ps command: there is no increase or change at all, and several GB are still available. But suddenly, from one memory snapshot to the next, some process gets terminated by the OOM killer. Here is a typical ps snapshot of the memory usage:

    PID     %MEM  RSS      VSZ      COMMAND      USER
    139443  1.2   1651768  8622936  java         jadzia
    123601  0.1   163352   523068   obabel       jadzia
    139355  0.0   5488     253120   srun         jadzia
    125747  0.0   5252     365088   obabel       jadzia
    125757  0.0   5252     365088   obabel       jadzia
    125388  0.0   5224     365088   obabel       jadzia
    125824  0.0   3764     267736   obabel       jadzia
    21062   0.0   3724     128628   bash         jadzia
    125778  0.0   3628     267736   obabel       jadzia
    127018  0.0   1904     113416   bash         jadzia
    126127  0.0   1812     161476   ps           jadzia
    139526  0.0   1740     10288    one-step.sh  jadzia
    139508  0.0   1736     10252    one-step.sh  jadzia
    139473  0.0   1728     10256    one-step.sh  jadzia
    139477  0.0   1728     10252    one-step.sh  jadzia
    139558  0.0   1724     10252    one-step.sh  jadzia
    139585  0.0   1724     10252    one-step.sh  jadzia
    139539  0.0   1704     10292    one-step.sh  jadzia
    139370  0.0   1688     9676     one-step.sh  jadzia
    139485  0.0   1688     10200    one-step.sh  jadzia
    125742  0.0   1544     10252    one-step.sh  jadzia
    125752  0.0   1532     10252    one-step.sh  jadzia
    125772  0.0   1532     10256    one-step.sh  jadzia
    125819  0.0   1532     10252    one-step.sh  jadzia
    125363  0.0   1508     10292    one-step.sh  jadzia
    123586  0.0   1496     10200    one-step.sh  jadzia
    139357  0.0   860      48364    srun         jadzia
    104975  0.0   724      6448     ng           jadzia
    91240   0.0   720      6448     ng           jadzia
    

    The RSS sum over all processes always stays below 3 GB and never spikes.
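    The snapshot logger is essentially the following sketch; the df call is a hypothetical addition that would also record tmpfs usage, which the ps RSS column cannot see:

```shell
#!/bin/sh
# One snapshot of what the logger records; loop it (e.g. with sleep 0.01)
# for continuous logging. df -t tmpfs is the addition that would have
# caught growing shmem, since ps RSS does not include tmpfs files.
snapshot() {
    ps -eo pid,pmem,rss,vsz,comm,user --sort=-rss | head -n 30
    df -k -t tmpfs 2>/dev/null || true   # tmpfs usage -- invisible to ps RSS
}
snapshot >> memory_log.txt
```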

    When looking at the dmesg output, the entries show that different programs invoke the oom-killer: from external binaries such as obabel to the "tr" utility, or even the bash script which executes the commands itself. Here are two different dmesg excerpts showing the OOM events:

    [Thu Nov  1 15:15:27 2018] tr invoked oom-killer: gfp_mask=0xd0, order=0, oom_score_adj=0
    [Thu Nov  1 15:15:27 2018] tr cpuset=step_0 mems_allowed=0-1
    [Thu Nov  1 15:15:27 2018] CPU: 27 PID: 33591 Comm: tr Tainted: G           OE  ------------   3.10.0-693.21.1.el7.x86_64 #1
    [Thu Nov  1 15:15:27 2018] Hardware name: Dell Inc. PowerEdge M630/0R10KJ, BIOS 2.5.4 08/17/2017
    [Thu Nov  1 15:15:27 2018] Call Trace:
    [Thu Nov  1 15:15:27 2018]  [<ffffffff816ae7c8>] dump_stack+0x19/0x1b
    [Thu Nov  1 15:15:27 2018]  [<ffffffff816a9b90>] dump_header+0x90/0x229
    [Thu Nov  1 15:15:27 2018]  [<ffffffff810c7c82>] ? default_wake_function+0x12/0x20
    [Thu Nov  1 15:15:27 2018]  [<ffffffff8118a3d6>] ? find_lock_task_mm+0x56/0xc0
    [Thu Nov  1 15:15:27 2018]  [<ffffffff811f5fb8>] ? try_get_mem_cgroup_from_mm+0x28/0x60
    [Thu Nov  1 15:15:27 2018]  [<ffffffff8118a884>] oom_kill_process+0x254/0x3d0
    [Thu Nov  1 15:15:27 2018]  [<ffffffff811f9cd6>] mem_cgroup_oom_synchronize+0x546/0x570
    [Thu Nov  1 15:15:27 2018]  [<ffffffff811f9150>] ? mem_cgroup_charge_common+0xc0/0xc0
    [Thu Nov  1 15:15:27 2018]  [<ffffffff8118b114>] pagefault_out_of_memory+0x14/0x90
    [Thu Nov  1 15:15:27 2018]  [<ffffffff816a7f2e>] mm_fault_error+0x68/0x12b
    [Thu Nov  1 15:15:27 2018]  [<ffffffff816bb741>] __do_page_fault+0x391/0x450
    [Thu Nov  1 15:15:27 2018]  [<ffffffff816bb835>] do_page_fault+0x35/0x90
    [Thu Nov  1 15:15:27 2018]  [<ffffffff816b7768>] page_fault+0x28/0x30
    [Thu Nov  1 15:15:27 2018] Task in /slurm/uid_11122/job_58003653/step_0 killed as a result of limit of /slurm/uid_11122/job_58003653
    [Thu Nov  1 15:15:27 2018] memory: usage 8388608kB, limit 8388608kB, failcnt 3673
    [Thu Nov  1 15:15:27 2018] memory+swap: usage 8388608kB, limit 16777216kB, failcnt 0
    [Thu Nov  1 15:15:27 2018] kmem: usage 0kB, limit 9007199254740988kB, failcnt 0
    [Thu Nov  1 15:15:27 2018] Memory cgroup stats for /slurm/uid_11122/job_58003653: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
    [Thu Nov  1 15:15:27 2018] Memory cgroup stats for /slurm/uid_11122/job_58003653/step_extern: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
    [Thu Nov  1 15:15:27 2018] Memory cgroup stats for /slurm/uid_11122/job_58003653/step_batch: cache:0KB rss:4452KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:4452KB inactive_file:0KB active_file:0KB unevictable:0KB
    [Thu Nov  1 15:15:27 2018] Memory cgroup stats for /slurm/uid_11122/job_58003653/step_0: cache:6399032KB rss:1985124KB rss_huge:1476608KB mapped_file:20232KB swap:0KB inactive_anon:1890552KB active_anon:6491116KB inactive_file:1216KB active_file:892KB unevictable:0KB
    [Thu Nov  1 15:15:27 2018] [ pid ]   uid  tgid total_vm      rss nr_ptes swapents oom_score_adj name
    [Thu Nov  1 15:15:27 2018] [20087] 11122 20087    28321      420      12        0             0 bash
    [Thu Nov  1 15:15:27 2018] [33058] 11122 33058    63274     1357      31        0             0 srun
    [Thu Nov  1 15:15:27 2018] [33060] 11122 33060    12085      207      23        0             0 srun
    [Thu Nov  1 15:15:27 2018] [33073] 11122 33073     2416      406       9        0             0 one-step.sh
    [Thu Nov  1 15:15:27 2018] [33153] 11122 33153  3735255   498759    1385        0             0 java
    [Thu Nov  1 15:15:27 2018] [42230] 11122 42230     2542      422       9        0             0 one-step.sh
    [Thu Nov  1 15:15:27 2018] [42240] 11122 42240     2543      421       9        0             0 one-step.sh
    [Thu Nov  1 15:15:27 2018] [42261] 11122 42261     2542      421       9        0             0 one-step.sh
    [Thu Nov  1 15:15:27 2018] [42285] 11122 42285     2541      422       9        0             0 one-step.sh
    [Thu Nov  1 15:15:27 2018] [42302] 11122 42302     2543      422       9        0             0 one-step.sh
    [Thu Nov  1 15:15:27 2018] [42316] 11122 42316     2542      422       9        0             0 one-step.sh
    [Thu Nov  1 15:15:27 2018] [42331] 11122 42331     2564      424       9        0             0 one-step.sh
    [Thu Nov  1 15:15:27 2018] [42359] 11122 42359     2544      421       9        0             0 one-step.sh
    [Thu Nov  1 15:15:27 2018] [33529] 11122 33529     2148      167      10        0             0 timeout
    [Thu Nov  1 15:15:27 2018] [33538] 11122 33538     1078       88       7        0             0 time_bin
    [Thu Nov  1 15:15:27 2018] [33540] 11122 33540     2148      167      10        0             0 timeout
    [Thu Nov  1 15:15:27 2018] [33541] 11122 33541     2148      166      10        0             0 timeout
    [Thu Nov  1 15:15:27 2018] [33542] 11122 33542     1609      177       8        0             0 ng
    [Thu Nov  1 15:15:27 2018] [33543] 11122 33543     1090       89       8        0             0 tail
    [Thu Nov  1 15:15:27 2018] [33544] 11122 33544     2472      181      11        0             0 awk
    [Thu Nov  1 15:15:27 2018] [33546] 11122 33546     1078       88       8        0             0 time_bin
    [Thu Nov  1 15:15:27 2018] [33554] 11122 33554     1078       88       8        0             0 time_bin
    [Thu Nov  1 15:15:27 2018] [33556] 11122 33556     1609      177      10        0             0 ng
    [Thu Nov  1 15:15:27 2018] [33562] 11122 33562     1609      177       9        0             0 ng
    [Thu Nov  1 15:15:27 2018] [33570] 11122 33570     9084      299      18        0             0 tar
    [Thu Nov  1 15:15:27 2018] [33586] 11122 33586     2564      333       9        0             0 one-step.sh
    [Thu Nov  1 15:15:27 2018] [33587] 11122 33587     2148      166      10        0             0 timeout
    [Thu Nov  1 15:15:27 2018] [33588] 11122 33588     2564      279       9        0             0 one-step.sh
    [Thu Nov  1 15:15:27 2018] [33589] 11122 33589     1078       89       8        0             0 time_bin
    [Thu Nov  1 15:15:27 2018] [33590] 11122 33590     2472      181      10        0             0 awk
    [Thu Nov  1 15:15:27 2018] [33591] 11122 33591     1075       48       6        0             0 tr
    [Thu Nov  1 15:15:27 2018] [33592] 11122 33592     2564      243       8        0             0 one-step.sh
    [Thu Nov  1 15:15:27 2018] [33593] 11122 33593     2542      330       9        0             0 one-step.sh
    [Thu Nov  1 15:15:27 2018] [33594] 11122 33594     1609      177       9        0             0 ng
    [Thu Nov  1 15:15:27 2018] [33595] 11122 33595     2542      318       9        0             0 one-step.sh
    [Thu Nov  1 15:15:27 2018] [33596] 11122 33596     2542      240       8        0             0 one-step.sh
    [Thu Nov  1 15:15:27 2018] [33597] 11122 33597     2542      240       9        0             0 one-step.sh
    [Thu Nov  1 15:15:27 2018] [33598] 11122 33598     2542      240       8        0             0 one-step.sh
    [Thu Nov  1 15:15:27 2018] [33599] 11122 33599     2542      240       8        0             0 one-step.sh
    [Thu Nov  1 15:15:27 2018] [33600] 11122 33600     2542      240       8        0             0 one-step.sh
    [Thu Nov  1 15:15:27 2018] Memory cgroup out of memory: Kill process 33576 (java) score 238 or sacrifice child
    [Thu Nov  1 15:15:27 2018] Killed process 33153 (java) total-vm:14941020kB, anon-rss:1973844kB, file-rss:1008kB, shmem-rss:20184kB
    
    
    [Thu Nov  1 03:40:17 2018] obabel invoked oom-killer: gfp_mask=0xd0, order=0, oom_score_adj=0
    [Thu Nov  1 03:40:17 2018] obabel cpuset=step_0 mems_allowed=0-1
    [Thu Nov  1 03:40:17 2018] CPU: 29 PID: 123601 Comm: obabel Tainted: G           OE  ------------ T 3.10.0-693.21.1.el7.x86_64 #1
    [Thu Nov  1 03:40:17 2018] Hardware name: Dell Inc. PowerEdge M630/0R10KJ, BIOS 2.5.4 08/17/2017
    [Thu Nov  1 03:40:17 2018] Call Trace:
    [Thu Nov  1 03:40:17 2018]  [<ffffffff816ae7c8>] dump_stack+0x19/0x1b
    [Thu Nov  1 03:40:17 2018]  [<ffffffff816a9b90>] dump_header+0x90/0x229
    [Thu Nov  1 03:40:17 2018]  [<ffffffff810c7c82>] ? default_wake_function+0x12/0x20
    [Thu Nov  1 03:40:17 2018]  [<ffffffff8118a3d6>] ? find_lock_task_mm+0x56/0xc0
    [Thu Nov  1 03:40:17 2018]  [<ffffffff811f5fb8>] ? try_get_mem_cgroup_from_mm+0x28/0x60
    [Thu Nov  1 03:40:17 2018]  [<ffffffff8118a884>] oom_kill_process+0x254/0x3d0
    [Thu Nov  1 03:40:17 2018]  [<ffffffff811f9cd6>] mem_cgroup_oom_synchronize+0x546/0x570
    [Thu Nov  1 03:40:17 2018]  [<ffffffff811f9150>] ? mem_cgroup_charge_common+0xc0/0xc0
    [Thu Nov  1 03:40:17 2018]  [<ffffffff8118b114>] pagefault_out_of_memory+0x14/0x90
    [Thu Nov  1 03:40:17 2018]  [<ffffffff816a7f2e>] mm_fault_error+0x68/0x12b
    [Thu Nov  1 03:40:17 2018]  [<ffffffff816bb741>] __do_page_fault+0x391/0x450
    [Thu Nov  1 03:40:17 2018]  [<ffffffff816bb835>] do_page_fault+0x35/0x90
    [Thu Nov  1 03:40:17 2018]  [<ffffffff816b7768>] page_fault+0x28/0x30
    [Thu Nov  1 03:40:17 2018] Task in /slurm/uid_11122/job_57832937/step_0 killed as a result of limit of /slurm/uid_11122/job_57832937
    [Thu Nov  1 03:40:17 2018] memory: usage 8388608kB, limit 8388608kB, failcnt 363061
    [Thu Nov  1 03:40:17 2018] memory+swap: usage 8388608kB, limit 16777216kB, failcnt 0
    [Thu Nov  1 03:40:17 2018] kmem: usage 0kB, limit 9007199254740988kB, failcnt 0
    [Thu Nov  1 03:40:17 2018] Memory cgroup stats for /slurm/uid_11122/job_57832937: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
    [Thu Nov  1 03:40:17 2018] Memory cgroup stats for /slurm/uid_11122/job_57832937/step_extern: cache:152KB rss:3944KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:3936KB inactive_file:76KB active_file:76KB unevictable:0KB
    [Thu Nov  1 03:40:17 2018] Memory cgroup stats for /slurm/uid_11122/job_57832937/step_batch: cache:0KB rss:4760KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:4760KB inactive_file:0KB active_file:0KB unevictable:0KB
    [Thu Nov  1 03:40:17 2018] Memory cgroup stats for /slurm/uid_11122/job_57832937/step_0: cache:6554284KB rss:1825468KB rss_huge:401408KB mapped_file:13556KB swap:0KB inactive_anon:439516KB active_anon:7937116KB inactive_file:1500KB active_file:1476KB unevictable:0KB
    [Thu Nov  1 03:40:17 2018] [ pid ]   uid  tgid total_vm      rss nr_ptes swapents oom_score_adj name
    [Thu Nov  1 03:40:17 2018] [127018] 11122 127018    28354      476      12        0             0 bash
    [Thu Nov  1 03:40:17 2018] [139355] 11122 139355    63280     1372      33        0             0 srun
    [Thu Nov  1 03:40:17 2018] [139357] 11122 139357    12091      215      25        0             0 srun
    [Thu Nov  1 03:40:17 2018] [139370] 11122 139370     2419      422      10        0             0 one-step.sh
    [Thu Nov  1 03:40:17 2018] [139443] 11122 139443  2155734   412939     953        0             0 java
    [Thu Nov  1 03:40:17 2018] [139473] 11122 139473     2564      432      10        0             0 one-step.sh
    [Thu Nov  1 03:40:17 2018] [139477] 11122 139477     2563      432      10        0             0 one-step.sh
    [Thu Nov  1 03:40:17 2018] [139485] 11122 139485     2550      422      10        0             0 one-step.sh
    [Thu Nov  1 03:40:17 2018] [139508] 11122 139508     2563      434      10        0             0 one-step.sh
    [Thu Nov  1 03:40:17 2018] [139526] 11122 139526     2572      435      10        0             0 one-step.sh
    [Thu Nov  1 03:40:17 2018] [139539] 11122 139539     2573      426      10        0             0 one-step.sh
    [Thu Nov  1 03:40:17 2018] [139558] 11122 139558     2563      431      10        0             0 one-step.sh
    [Thu Nov  1 03:40:17 2018] [139585] 11122 139585     2563      431      10        0             0 one-step.sh
    [Thu Nov  1 03:40:17 2018] [21062] 11122 21062    32157      931      14        0             0 bash
    [Thu Nov  1 03:40:17 2018] [91238] 11122 91238     2151      170      10        0             0 timeout
    [Thu Nov  1 03:40:17 2018] [91239] 11122 91239     1081       88       8        0             0 time_bin
    [Thu Nov  1 03:40:17 2018] [91240] 11122 91240     1612      180       9        0             0 ng
    [Thu Nov  1 03:40:17 2018] [104964] 11122 104964     2151      171      10        0             0 timeout
    [Thu Nov  1 03:40:17 2018] [104969] 11122 104969     1081       88       8        0             0 time_bin
    [Thu Nov  1 03:40:17 2018] [104975] 11122 104975     1612      181       8        0             0 ng
    [Thu Nov  1 03:40:17 2018] [123586] 11122 123586     2550      374      10        0             0 one-step.sh
    [Thu Nov  1 03:40:17 2018] [123592] 11122 123592     2151      171      10        0             0 timeout
    [Thu Nov  1 03:40:17 2018] [123593] 11122 123593     3325      171      12        0             0 sed
    [Thu Nov  1 03:40:17 2018] [123596] 11122 123596     1081       89       8        0             0 time_bin
    [Thu Nov  1 03:40:17 2018] [123601] 11122 123601   130767    40835     261        0             0 obabel
    [Thu Nov  1 03:40:17 2018] [125363] 11122 125363     2573      377      10        0             0 one-step.sh
    [Thu Nov  1 03:40:17 2018] [125369] 11122 125369     2151      171      10        0             0 timeout
    [Thu Nov  1 03:40:17 2018] [125372] 11122 125372     1089       81       8        0             0 uniq
    [Thu Nov  1 03:40:17 2018] [125373] 11122 125373     3324      171      11        0             0 sed
    [Thu Nov  1 03:40:17 2018] [125380] 11122 125380     1081       88       8        0             0 time_bin
    [Thu Nov  1 03:40:17 2018] [125388] 11122 125388    91272     1302     179        0             0 obabel
    [Thu Nov  1 03:40:17 2018] [125742] 11122 125742     2563      386      10        0             0 one-step.sh
    [Thu Nov  1 03:40:17 2018] [125743] 11122 125743     2151      170      10        0             0 timeout
    [Thu Nov  1 03:40:17 2018] [125744] 11122 125744     1089       81       8        0             0 uniq
    [Thu Nov  1 03:40:17 2018] [125745] 11122 125745     3324      171      12        0             0 sed
    [Thu Nov  1 03:40:17 2018] [125746] 11122 125746     1081       87       8        0             0 time_bin
    [Thu Nov  1 03:40:17 2018] [125747] 11122 125747    91272     1309     180        0             0 obabel
    [Thu Nov  1 03:40:17 2018] [125752] 11122 125752     2563      383      10        0             0 one-step.sh
    [Thu Nov  1 03:40:17 2018] [125753] 11122 125753     2151      170       9        0             0 timeout
    [Thu Nov  1 03:40:17 2018] [125754] 11122 125754     1089       82       9        0             0 uniq
    [Thu Nov  1 03:40:17 2018] [125755] 11122 125755     3324      172      11        0             0 sed
    [Thu Nov  1 03:40:17 2018] [125756] 11122 125756     1081       88       8        0             0 time_bin
    [Thu Nov  1 03:40:17 2018] [125757] 11122 125757    91272     1309     179        0             0 obabel
    [Thu Nov  1 03:40:17 2018] [125772] 11122 125772     2564      383      10        0             0 one-step.sh
    [Thu Nov  1 03:40:17 2018] [125773] 11122 125773     2151      170      10        0             0 timeout
    [Thu Nov  1 03:40:17 2018] [125774] 11122 125774     1088       86       7        0             0 uniq
    [Thu Nov  1 03:40:17 2018] [125775] 11122 125775     3324      172      12        0             0 sed
    [Thu Nov  1 03:40:17 2018] [125776] 11122 125776     1081       88       8        0             0 time_bin
    [Thu Nov  1 03:40:17 2018] [125778] 11122 125778    66934      902     131        0             0 obabel
    [Thu Nov  1 03:40:17 2018] [125819] 11122 125819     2563      383      10        0             0 one-step.sh
    [Thu Nov  1 03:40:17 2018] [125820] 11122 125820     2151      171      11        0             0 timeout
    [Thu Nov  1 03:40:17 2018] [125821] 11122 125821     1088       87       8        0             0 uniq
    [Thu Nov  1 03:40:17 2018] [125822] 11122 125822     3324      172      10        0             0 sed
    [Thu Nov  1 03:40:17 2018] [125823] 11122 125823     1081       88       8        0             0 time_bin
    [Thu Nov  1 03:40:17 2018] [125824] 11122 125824    66934      931     132        0             0 obabel
    [Thu Nov  1 03:40:17 2018] [126131] 11122 126131    40335      445      33        0             0 ps
    [Thu Nov  1 03:40:17 2018] [126132] 11122 126132    26980      166      10        0             0 head
    [Thu Nov  1 03:40:17 2018] [126133] 11122 126133    26990      153      10        0             0 column
    [Thu Nov  1 03:40:17 2018] Memory cgroup out of memory: Kill process 125649 (NGSession 36387) score 197 or sacrifice child
    [Thu Nov  1 03:40:17 2018] Killed process 139443 (java) total-vm:8622936kB, anon-rss:1637312kB, file-rss:960kB, shmem-rss:13484kB
    

    The Java application running in the background is always the one killed in the end, presumably because killing it frees the most memory. Regarding the Java program, I have checked the garbage collection (GC) log files; everything is normal there. I have also tried three different JVM versions, but the problem seems to be independent of that.

    How can I find out what is really causing the OOM killer to terminate my programs?

    I am not an admin on the machines I use; they are compute nodes of a Linux cluster. The kernel version is 3.10.0-693.21.1.el7.x86_64.

  • Jadzia (over 5 years ago)
    My jobs do indeed accumulate many GB of memory there. I'm pretty sure you discovered the problem. Thank you so much for your help; I really appreciate it. You might want to adjust the main text in your answer so that I can accept it.