How to generate a core dump in Linux on a segmentation fault?
Solution 1
This depends on what shell you are using. If you are using bash, then the ulimit command controls several settings relating to program execution, such as whether you should dump core. If you type
ulimit -c unlimited
then that will tell bash that its programs can dump cores of any size. You can specify a numeric limit instead of unlimited if you want (bash takes the value in 1024-byte blocks, so a suffix like 52M is rejected as an invalid number), but in practice this shouldn't be necessary, since the size of core files will probably never be an issue for you.
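For example, in a bash session you can check the current limit before raising it; a value of 0 means core dumps are disabled:

```shell
# Show the current core-file size limit; 0 means core dumps are disabled
ulimit -c
# Remove the limit for this shell and any programs it launches
ulimit -c unlimited
ulimit -c   # now prints: unlimited
```

Note that this only affects the current shell and its children; to make it persistent, see the limits.conf discussion in the comments below.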
In tcsh, you'd type
limit coredumpsize unlimited
Solution 2
As explained above, the real question being asked here is how to enable core dumps on a system where they are not enabled. That question is answered here.
If you've come here hoping to learn how to generate a core dump for a hung process, the answer is
gcore <pid>
if gcore is not available on your system then
kill -ABRT <pid>
Don't use kill -SEGV, as that will often invoke a signal handler, making it harder to diagnose the stuck process.
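As a sketch (the sleep process and /tmp path here are just stand-ins for your hung process):

```shell
# A stand-in for the hung process
sleep 300 &
pid=$!

# Preferred: gcore (ships with gdb) snapshots the process and leaves it running
gcore -o /tmp/hung "$pid"    # writes /tmp/hung.<pid>

# Fallback if gcore is unavailable: SIGABRT terminates the process
# and, with core dumps enabled, produces a core file
kill -ABRT "$pid"
```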
Solution 3
To check where the core dumps are generated, run:
sysctl kernel.core_pattern
or:
cat /proc/sys/kernel/core_pattern
where %e is the process name and %t the system time. You can change it in /etc/sysctl.conf and reload with sysctl -p.
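For example, to inspect the current pattern and (as root) redirect cores to /tmp with the program name and timestamp in the file name (the /tmp path is just an illustration):

```shell
# Where does the kernel write core files?
cat /proc/sys/kernel/core_pattern

# Redirect cores to /tmp, named after the executable (%e) and time (%t).
# Requires root; add the setting to /etc/sysctl.conf to persist it.
sudo sysctl -w kernel.core_pattern='/tmp/core.%e.%t'
```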
If the core files are not generated (test it with sleep 10 & followed by killall -SIGSEGV sleep), check the limits with ulimit -a.
If your core file size is limited, run:
ulimit -c unlimited
to make it unlimited.
Then test again; if the core dump succeeds, you will see "(core dumped)" after the segmentation fault indication, as below:
Segmentation fault: 11 (core dumped)
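Putting the test together in one bash session (killall comes from the psmisc package; kill -SEGV with the pid works just as well):

```shell
ulimit -c unlimited      # allow cores in this shell
sleep 10 &               # a throwaway process to crash
kill -SEGV $!            # send it a segmentation fault
wait                     # with cores enabled, the shell reports
                         # "Segmentation fault (core dumped)"
```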
See also: core dumped - but core file is not in current directory?
Ubuntu
In Ubuntu, core dumps are handled by Apport and can be found in /var/crash/. However, Apport is disabled by default in stable releases.
For more details, please check: Where do I find the core dump in Ubuntu?.
macOS
For macOS, see: How to generate core dumps in Mac OS X?
Solution 4
What I did in the end was attach gdb to the process before it crashed, and when it hit the segfault I executed the generate-core-file command. That forced generation of a core dump.
Solution 5
Maybe you could do it this way. This program demonstrates how to trap a segmentation fault and shell out to a debugger (it is the original code used under AIX), printing the stack trace up to the point of the fault. You will need to change the sprintf string to use gdb on Linux.
#include <stdio.h>
#include <signal.h>
#include <stdlib.h>
#include <stdarg.h>
#include <unistd.h> /* getpid() */
static void signal_handler(int);
static void dumpstack(void);
static void cleanup(void);
void init_signals(void);
void panic(const char *, ...);
struct sigaction sigact;
char *progname;
int main(int argc, char **argv) {
char *s;
progname = *(argv);
atexit(cleanup);
init_signals();
printf("About to seg fault by assigning zero to *s\n");
*s = 0; /* s is uninitialized: this write deliberately segfaults */
return 0; /* never reached; the SIGSEGV handler exits via panic() */
}
void init_signals(void) {
sigact.sa_handler = signal_handler;
sigemptyset(&sigact.sa_mask);
sigact.sa_flags = 0;
sigaction(SIGINT, &sigact, (struct sigaction *)NULL);
sigaddset(&sigact.sa_mask, SIGSEGV);
sigaction(SIGSEGV, &sigact, (struct sigaction *)NULL);
sigaddset(&sigact.sa_mask, SIGBUS);
sigaction(SIGBUS, &sigact, (struct sigaction *)NULL);
sigaddset(&sigact.sa_mask, SIGQUIT);
sigaction(SIGQUIT, &sigact, (struct sigaction *)NULL);
sigaddset(&sigact.sa_mask, SIGHUP);
sigaction(SIGHUP, &sigact, (struct sigaction *)NULL);
sigaddset(&sigact.sa_mask, SIGKILL);
sigaction(SIGKILL, &sigact, (struct sigaction *)NULL); /* has no effect: SIGKILL cannot be caught */
}
static void signal_handler(int sig) {
if (sig == SIGHUP) panic("FATAL: Program hung up\n");
if (sig == SIGSEGV || sig == SIGBUS){
dumpstack();
panic("FATAL: %s Fault. Logged StackTrace\n", (sig == SIGSEGV) ? "Segmentation" : ((sig == SIGBUS) ? "Bus" : "Unknown"));
}
if (sig == SIGQUIT) panic("QUIT signal ended program\n");
if (sig == SIGKILL) panic("KILL signal ended program\n");
if (sig == SIGINT) ;
}
void panic(const char *fmt, ...) {
char buf[256];
va_list argptr;
va_start(argptr, fmt);
vsnprintf(buf, sizeof(buf), fmt, argptr); /* bounded, unlike vsprintf */
va_end(argptr);
fputs(buf, stderr); /* never pass formatted data as a format string */
exit(-1);
}
static void dumpstack(void) {
/* Got this routine from http://www.whitefang.com/unix/faq_toc.html
** Section 6.5. Modified to redirect to file to prevent clutter
*/
/* This needs to be changed... */
char dbx[160];
/* On Linux, replace the dbx pipeline with an equivalent gdb invocation */
snprintf(dbx, sizeof(dbx), "echo 'where\ndetach' | dbx -a %d > %s.dump", (int)getpid(), progname);
system(dbx);
return;
}
void cleanup(void) {
sigemptyset(&sigact.sa_mask);
/* Do any cleaning up chores here */
}
You may have to additionally add a parameter to get gdb to dump the core, as shown in this blog post.
Comments
-
Nathan Fellman over 2 years
I have a process in Linux that's getting a segmentation fault. How can I tell it to generate a core dump when it fails?
-
Ciro Santilli OurBigBook.com about 5 years
How to view it afterwards: stackoverflow.com/questions/8305866/…
-
Nathan Fellman about 15 years
By "current directory of the process" do you mean the $cwd at the time the process was run? ~/abc> /usr/bin/cat def; if cat crashes, is the current directory in question ~/abc or /usr/bin?
-
Mark Harrison about 15 years
~/abc. Hmm, comments have to be 15 characters long!
-
Darron over 14 yearsThis would be the current directory at the time of the SEGV. Also, processes running with a different effective user and/or group than the real user/group will not write core files.
-
ed9w2in6 almost 13 years
I am sorry, but does this really answer your question? You asked how to generate a core dump, but this says how to set the limits.
-
Eli Courtwright almost 13 years
@lzprgmr: To clarify: the reason why core dumps are not generated by default is that the limit is not set and/or set to 0, which prevents the core from being dumped. By setting a limit of unlimited, we guarantee that core dumps can always be generated.
-
Salsa over 12 years
This link goes deeper and gives some more options to enable generation of core dumps in Linux. The only drawback is that some commands/settings are left unexplained.
-
a1an over 11 years
On bash 4.1.2(1)-release, limits such as 52M cannot be specified, resulting in an "invalid number" error message. The man page says that "Values are in 1024-byte increments".
-
Chani almost 11 years
How did you attach gdb to the process?
-
Jean-Dominique Frattini almost 11 years
To answer Ritwik G: to attach a process to gdb, simply launch gdb and enter 'attach <pid>', where <pid> is the pid number of the process you want to attach.
-
IceCool over 10 years
Well, I had a "small" OpenGL project that once did something weird and caused an X-server crash. When I logged back in, I saw a cute little 17 GB core file (on a 25 GB partition). It's definitely a good idea to keep the core file's size limited :)
-
PolarisUser over 9 years
I have a question. I don't want to set mine to unlimited. How do I know how large a core dump I should allow?
-
Eli Courtwright over 9 years
@PolarisUser: If you wanted to make sure your partition doesn't get eaten, I recommend setting a limit of something like 1 gig. That should be big enough to handle any reasonable core dump, while not threatening to use up all of your remaining hard drive space.
-
Naveen almost 7 years
In Step 3, how do you 're-run' the terminal? Do you mean reboot?
-
mrgloom almost 7 years
@Naveen no, just close the terminal and open a new one. It also seems you can just run ulimit -c unlimited in the terminal as a temporary fix, because editing ~/.bashrc alone requires a terminal restart for the changes to take effect.
-
Digicrat over 6 years
For Ubuntu, to quickly revert to normal behavior (dumping a core file in the current directory), simply stop the apport service with "sudo service apport stop". Also note that if you are running within docker, that setting is controlled on the host system and not within the container.
-
JSybrandt over 6 years
I want to echo setting a limit for coredumpsize, as someone who just had to clean up a couple hundred 20G core dumps.
-
Imskull over 6 years
Attention: it does not persist after the user logs out; at least on CentOS, you have to edit /etc/security/limits.conf if you want that.
-
user202729 almost 6 years
(abbreviated as ge)
-
user202729 almost 6 years
If they have a new question, they should ask a new question instead of asking in a comment.
-
Nathan Fellman over 5 years
Why is that better?
-
kgbook over 5 years
The core file is generated after the crash; no need to run ulimit -c unlimited in the command-line environment and then rerun the application.
-
Nathan Fellman over 5 years
I don't want a core dump every time it crashes, only when a user contacts me as the developer to look at it. If it crashes 100 times, I don't need 100 core dumps to look at.
-
kgbook over 5 years
In that case, better to use ulimit -c unlimited. Also, you can compile with a macro definition: the application will not include the enable_core_dump symbol if you don't define that macro for release, and you will get a core dump only with the debug version.
-
Nathan Fellman over 5 years
Even if it's qualified by a macro, that still requires me to recompile if I want to generate a core dump, rather than simply executing a command in the shell before rerunning.
-
kgbook over 5 years
It's convenient for a developer to obtain a core dump file and more verbose debug information. In a release version, you usually compile with -O2 and without -g, with debug information stripped or optimized out; I control all debug options and core-file dumping with that macro definition in CMakeLists.txt or a Makefile. You can make your own choice.
-
BreakBadSP over 5 years
And remember to put this in .bashrc so that you won't need to do this every time.
-
celticminstrel about 4 years
I think it's far more likely that -ABRT will invoke a signal handler than -SEGV, as an abort is more likely to be recoverable than a segfault. (If you handle a segfault, normally it'll just trigger again as soon as your handler exits.) A better choice of signal for generating a core dump is -QUIT.
-
CodyChan almost 4 years
The weird thing is, I already set ulimit -c to unlimited, but the core file is still not created; the generate-core-file command in a gdb session does create the core file, thanks.
Marcel almost 2 years
Instead of disabling apport every time, it could be more lasting to just uninstall apport (ignoring the recommends dependency), since the service adds no value for developers.