How to generate a core dump in Linux on a segmentation fault?

Solution 1

This depends on which shell you are using. If you are using bash, then the ulimit command controls several settings relating to program execution, such as whether programs may dump core. If you type

ulimit -c unlimited

then that will tell bash that its programs can dump cores of any size. You can specify a numeric size (in 1024-byte blocks) instead of unlimited if you want, but in practice this shouldn't be necessary, since the size of core files will probably never be an issue for you.

In tcsh, you'd type

limit coredumpsize unlimited
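
A quick sanity check of the bash route (a sketch; the output of the first ulimit -c depends on your distribution's defaults, often 0):

```shell
# Show the current per-shell core size limit; "0" means core dumps are disabled
ulimit -c
# Allow cores of any size for this shell and everything it launches
ulimit -c unlimited
# Confirm the new setting; prints "unlimited"
ulimit -c
```

Note that ulimit only affects the current shell and its children; add the line to ~/.bashrc if you want it to persist across sessions.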

Solution 2

As explained above, the real question being asked here is how to enable core dumps on a system where they are not enabled. That question is answered here.

If you've come here hoping to learn how to generate a core dump for a hung process, the answer is

gcore <pid>

if gcore is not available on your system then

kill -ABRT <pid>

Don't use kill -SEGV, as that will often invoke a signal handler, making it harder to diagnose the stuck process.
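
To see the kill -ABRT fallback in action on a stand-in process (a sketch; sleep plays the part of the hung program, and the core is only written if ulimit -c allows it):

```shell
# Start a stand-in for the hung process
sleep 300 &
pid=$!
# SIGABRT's default action is to terminate the process and dump core
kill -ABRT "$pid"
wait "$pid"
echo "exit status: $?"   # 134 = 128 + 6, i.e. terminated by SIGABRT
```

Unlike gcore, this kills the process, so prefer gcore when the target needs to keep running.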

Solution 3

To check where the core dumps are generated, run:

sysctl kernel.core_pattern

or:

cat /proc/sys/kernel/core_pattern

where %e expands to the process name and %t to the time of the dump (seconds since the Epoch). You can change the pattern in /etc/sysctl.conf and reload it with sysctl -p.
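
For example, to inspect the pattern and (as root) redirect cores to /tmp with the process name and timestamp in the filename (the /tmp path here is just an illustration):

```shell
# Read the current pattern; no privileges needed
cat /proc/sys/kernel/core_pattern
# Redirect cores to /tmp, naming them core.<progname>.<unix-time> (needs root)
# sudo sysctl -w kernel.core_pattern=/tmp/core.%e.%t
```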

If core files are not generated (test with: sleep 10 & followed by killall -SIGSEGV sleep), check the limits with: ulimit -a.

If your core file size is limited, run:

ulimit -c unlimited

to make it unlimited.

Then test again, if the core dumping is successful, you will see “(core dumped)” after the segmentation fault indication as below:

Segmentation fault: 11 (core dumped)

See also: core dumped - but core file is not in current directory?


Ubuntu

In Ubuntu the core dumps are handled by Apport and can be located in /var/crash/. However, it is disabled by default in stable releases.

For more details, please check: Where do I find the core dump in Ubuntu?.

macOS

For macOS, see: How to generate core dumps in Mac OS X?

Solution 4

What I did in the end was attach gdb to the process before it crashed; when it hit the segfault, I executed the generate-core-file command. That forced generation of a core dump.
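
The same thing can be scripted non-interactively (a sketch; it assumes gdb is installed and that you are allowed to ptrace the target, and uses sleep as a stand-in for the real process):

```shell
# Stand-in for the target process; substitute the real pid
sleep 300 &
pid=$!
# Attach in batch mode, write core.<pid> in the current directory, then detach
gdb -p "$pid" -batch -ex generate-core-file -ex detach
kill "$pid"   # clean up the stand-in; a real target keeps running after detach
```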

Solution 5

Maybe you could do it this way. This program demonstrates how to trap a segmentation fault and shell out to a debugger (this is the original code used under AIX), printing the stack trace up to the point of the segmentation fault. You will need to change the sprintf call to use gdb in the case of Linux.

#include <stdio.h>
#include <signal.h>
#include <stdlib.h>
#include <stdarg.h>
#include <unistd.h>   /* for getpid() */

static void signal_handler(int);
static void dumpstack(void);
static void cleanup(void);
void init_signals(void);
void panic(const char *, ...);

struct sigaction sigact;
char *progname;

int main(int argc, char **argv) {
    char *s = NULL;    /* NULL so the fault below is deterministic */
    progname = *(argv);
    atexit(cleanup);
    init_signals();
    printf("About to seg fault by assigning zero to *s\n");
    *s = 0;                            /* raises SIGSEGV */
    sigemptyset(&sigact.sa_mask);      /* never reached */
    return 0;
}

void init_signals(void) {
    sigact.sa_handler = signal_handler;
    sigemptyset(&sigact.sa_mask);
    sigact.sa_flags = 0;
    sigaction(SIGINT, &sigact, (struct sigaction *)NULL);

    sigaddset(&sigact.sa_mask, SIGSEGV);
    sigaction(SIGSEGV, &sigact, (struct sigaction *)NULL);

    sigaddset(&sigact.sa_mask, SIGBUS);
    sigaction(SIGBUS, &sigact, (struct sigaction *)NULL);

    sigaddset(&sigact.sa_mask, SIGQUIT);
    sigaction(SIGQUIT, &sigact, (struct sigaction *)NULL);

    sigaddset(&sigact.sa_mask, SIGHUP);
    sigaction(SIGHUP, &sigact, (struct sigaction *)NULL);

    sigaddset(&sigact.sa_mask, SIGKILL);
    sigaction(SIGKILL, &sigact, (struct sigaction *)NULL); /* no effect: SIGKILL cannot be caught */
}

static void signal_handler(int sig) {
    if (sig == SIGHUP) panic("FATAL: Program hung up\n");
    if (sig == SIGSEGV || sig == SIGBUS){
        dumpstack();
        panic("FATAL: %s Fault. Logged StackTrace\n", (sig == SIGSEGV) ? "Segmentation" : ((sig == SIGBUS) ? "Bus" : "Unknown"));
    }
    if (sig == SIGQUIT) panic("QUIT signal ended program\n");
    if (sig == SIGKILL) panic("KILL signal ended program\n");
    if (sig == SIGINT) ;
}

void panic(const char *fmt, ...) {
    char buf[256];
    va_list argptr;
    va_start(argptr, fmt);
    vsnprintf(buf, sizeof(buf), fmt, argptr);  /* bounded, unlike vsprintf */
    va_end(argptr);
    fprintf(stderr, "%s", buf);  /* never pass buf itself as the format string */
    exit(-1);
}

static void dumpstack(void) {
    /* Got this routine from http://www.whitefang.com/unix/faq_toc.html
    ** Section 6.5. Modified to redirect to file to prevent clutter
    */
    char dbx[160];

    sprintf(dbx, "echo 'where\ndetach' | dbx -a %d > %s.dump", getpid(), progname);
    /* On Linux, replace the dbx command with gdb, e.g.:
    ** sprintf(dbx, "echo 'bt\ndetach' | gdb -p %d > %s.dump", getpid(), progname);
    */

    system(dbx);
    return;
}

void cleanup(void) {
    sigemptyset(&sigact.sa_mask);
    /* Do any cleaning up chores here */
}

You may additionally have to add a parameter to get gdb to dump the core, as shown in this blog.


Author: Nathan Fellman

Updated on January 27, 2022

Comments

  • Nathan Fellman
    Nathan Fellman over 2 years

    I have a process in Linux that's getting a segmentation fault. How can I tell it to generate a core dump when it fails?

  • Nathan Fellman
    Nathan Fellman about 15 years
    By "current directory of the process" do you mean the $cwd at the time the process was run? ~/abc> /usr/bin/cat def if cat crashes, is the current directory in question ~/abc or /usr/bin?
  • Mark Harrison
    Mark Harrison about 15 years
    ~/abc. Hmm, comments have to be 15 characters long!
  • Darron
    Darron over 14 years
    This would be the current directory at the time of the SEGV. Also, processes running with a different effective user and/or group than the real user/group will not write core files.
  • ed9w2in6
    ed9w2in6 almost 13 years
    I'm sorry, but does this really answer the question? It asked how to generate a core dump, but this says how to set the limits.
  • Eli Courtwright
    Eli Courtwright almost 13 years
    @lzprgmr: To clarify: the reason why core dumps are not generated by default is that the limit is not set and/or set to 0, which prevents the core from being dumped. By setting a limit of unlimited, we guarantee that core dumps can always be generated.
  • Salsa
    Salsa over 12 years
    This link goes deeper and gives some more options to enable generation of core dumps in linux. The only drawback is that some commands/settings are left unexplained.
  • a1an
    a1an over 11 years
    On bash 4.1.2(1)-release, limits such as 52M cannot be specified, resulting in an invalid number error message. The man page says that "Values are in 1024-byte increments".
  • Chani
    Chani almost 11 years
    How did you attach gdb to the process ?
  • Jean-Dominique Frattini
    Jean-Dominique Frattini almost 11 years
    To answer to Ritwik G, to attach a process to gdb, simply launch gdb and enter 'attach <pid>' where <pid> is the pid number of the process you want to attach.
  • IceCool
    IceCool over 10 years
    Well I had a "small" OpenGL project, that once did some weird thing, and caused X-server crash. When I logged back, I saw a cute little 17 GB core file (on a 25 GB partition). It's definitely a good idea to keep the core file's size limited :)
  • PolarisUser
    PolarisUser over 9 years
    I have a question. I don't want to set mine to unlimited. How do I know how large of a coredump should allow?
  • Eli Courtwright
    Eli Courtwright over 9 years
    @PolarisUser: If you wanted to make sure your partition doesn't get eaten, I recommend setting a limit of something like 1 gig. That should be big enough to handle any reasonable core dump, while not threatening to use up all of your remaining hard drive space.
  • Naveen
    Naveen almost 7 years
    In Step 3, How to 're-run' the terminal? Do you mean reboot?
  • mrgloom
    mrgloom almost 7 years
    @Naveen no, just close the terminal and open a new one; it also seems you can run ulimit -c unlimited in the terminal as a temporary solution, because only editing ~/.bashrc requires a terminal restart for the changes to take effect.
  • Digicrat
    Digicrat over 6 years
    For Ubuntu, to quickly revert to normal behavior (dumping a core file in the current directory), simply stop the apport service with "sudo service apport stop". Also note that if you are running within docker, that setting is controlled on the host system and not within the container.
  • JSybrandt
    JSybrandt over 6 years
    I want to echo setting a limit for coredumpsize, as someone who just had to clean up a couple hundred 20G core dumps.
  • Imskull
    Imskull over 6 years
    attention: it is not persisted after login user quits, at least on CentOS, you have to edit /etc/security/limits.conf if you want so.
  • user202729
    user202729 almost 6 years
    (abbreviated as ge)
  • user202729
    user202729 almost 6 years
    If they have a new question, they should ask a new question instead of asking in a comment.
  • Nathan Fellman
    Nathan Fellman over 5 years
    why is that better?
  • kgbook
    kgbook over 5 years
    The core file is generated after the crash; there's no need to run ulimit -c unlimited in the command-line environment and then rerun the application.
  • Nathan Fellman
    Nathan Fellman over 5 years
    I don't want a core dump every time it crashes, only when a user contacts me as the developer to look at it. If it crashes 100 times, I don't need 100 core dumps to look at.
  • kgbook
    kgbook over 5 years
    In that case, it's better to use ulimit -c unlimited. You can also guard it with a macro definition: the application will not include the enable_core_dump symbol if that macro is not defined for the release build, and you will get core dumps with the debug version.
  • Nathan Fellman
    Nathan Fellman over 5 years
    even if it's qualified by a macro, that still requires me to recompile if I want to generate a core dump, rather than simply executing a command in the shell before rerunning.
  • kgbook
    kgbook over 5 years
    It's so convenient for a developer to obtain a core dump file and more verbose debug information. A release version is usually compiled with -O2 and without -g, with debug information stripped or optimized away; I control all debug options and core file dumping with that macro definition in CMakeLists.txt or the Makefile. You can make your own choice.
  • BreakBadSP
    BreakBadSP over 5 years
    and remember to put this in .bashrc so that you won't need to do this every time.
  • celticminstrel
    celticminstrel about 4 years
    I think it's far more likely that -ABRT will invoke a signal handler than -SEGV, as an abort is more likely to be recoverable than a segfault. (If you handle a segfault, normally it'll just trigger again as soon as your handler exits.) A better choice of signal for generating a core dump is -QUIT.
  • CodyChan
    CodyChan almost 4 years
    The weird thing is I had already set ulimit -c to unlimited, but the core file was still not created; the generate-core-file command in a gdb session does create the core file, thanks.
  • Marcel
    Marcel almost 2 years
    Instead of disabling apport every time it could be more lasting just to uninstall apport (ignoring the recommendation dependency) since the service adds no value for developers.