Dump process core without killing the process
Solution 1
The usual trick is to have something (possibly a signal like SIGUSR1) trigger the program to fork(), then the child calls abort() to make itself dump core.
import os
(...)
def onUSR1(sig, frame):
    # The child aborts (raising SIGABRT) to produce the core dump;
    # the parent returns from the handler and keeps running.
    if os.fork() == 0:
        os.abort()
and during initialization:
import signal
from wherever import onUSR1
(...)
signal.signal(signal.SIGUSR1, onUSR1)
Used this way, fork won't consume much extra memory because almost all of the address space will be shared between parent and child (which is also why this works for generating the core dump).
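Putting the two fragments above together, a minimal self-contained sketch might look like the following. The resource-limit adjustment and the waitpid() call are additions not shown in the answer: the soft core limit is often 0, which would silently suppress the dump, and reaping the child keeps it from lingering as a zombie.

```python
import os
import resource
import signal

child_status = []  # records how the forked child exited (for illustration)

def onUSR1(sig, frame):
    pid = os.fork()
    if pid == 0:
        os.abort()  # child: SIGABRT -> kernel writes the core dump
    # parent: reap the child so it doesn't linger as a zombie
    child_status.append(os.waitpid(pid, 0)[1])

# Core dumps are often disabled (soft limit 0); raise the soft limit
# to the hard limit so the kernel will actually write a core file.
soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))

signal.signal(signal.SIGUSR1, onUSR1)
os.kill(os.getpid(), signal.SIGUSR1)  # trigger a dump of this process
```

Where the core file actually lands depends on the kernel's core pattern (/proc/sys/kernel/core_pattern on Linux); on systems that pipe cores to a handler such as systemd-coredump, look there rather than in the working directory.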
Once upon a time this trick was used with a program called undump to generate an executable from a core dump, to save an image after complex initialization; emacs used to do this to generate a preloaded image from temacs.
Solution 2
You could try using gcore. Is that an option for you?
Falmarri
Updated on September 18, 2022
Comments
-
Falmarri over 1 year
Is there a way to get a core dump (or something similar) for a process without actually killing the process? I have a multithreaded Python process running on an embedded system, and I want to be able to get a snapshot of the process under normal conditions (i.e. with the other required processes running), but I don't have enough memory to attach gdb (or run it under gdb) without the Python process being the only one running. I hope this question makes sense.
-
Gilles 'SO- stop being evil' about 13 years
If this is only while you're debugging, have you considered something crazy like swap on an NFS file or a network block device?
-
KatyB over 9 years
At some point gcore was a stand-alone program, but I don't think it's part of the gdb package anymore. However, you can run gdb --pid=<PID> and then use its gcore command to dump a core file. gcore.c is a fairly simple program that is easily googlable if you want something lighter weight.
-
PSkocik almost 3 years
Doesn't quite work for multithreaded processes, unfortunately :/