Empty core dump file after Segmentation fault


Solution 1

Setting ulimit -c unlimited turned on the generation of dumps. By default, core dumps were generated in the current directory, which was on NFS. Setting /proc/sys/kernel/core_pattern to /tmp/core solved the problem of empty dumps for me.
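
For reference, a minimal sketch of both settings (the /tmp/core target is just an example path on a local filesystem; the core_pattern change needs root and applies system-wide):

    # per-session: allow core files of unlimited size
    ulimit -c unlimited

    # system-wide: write cores under local /tmp instead of the NFS-mounted cwd
    echo '/tmp/core' | sudo tee /proc/sys/kernel/core_pattern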

The comment from Ranjith Ruban helped me develop this workaround: "What is the filesystem that you are using for dumping the core?"

Solution 2

It sounds like you're using a batch scheduler to launch your executable. Maybe the shell that Torque/PBS is using to spawn your job inherits a different ulimit value? Maybe the scheduler's default config is not to preserve core dumps?

Can you run your program directly from the command line instead?

Or, if you add ulimit -c unlimited and/or ulimit -s unlimited to the top of your PBS batch script before invoking your executable, you might be able to override PBS's default ulimit behavior. Adding a bare ulimit -c call will at least report what the limit actually is.
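
As a sketch, a PBS batch script along these lines (assuming Torque/PBS; the job name and executable are placeholders):

    #!/bin/bash
    #PBS -N myjob
    ulimit -c unlimited   # remove the core file size limit
    ulimit -s unlimited   # remove the stack size limit
    ulimit -c             # log the effective core limit for debugging
    ./my_program          # placeholder for your executable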

Solution 3

This can happen if you run the program from a mounted drive. The core file can't be written to a mounted drive; it must be written to the local drive.

You can copy the program to the local drive and run it from there.
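
For example, something along these lines (all paths are placeholders):

    cp /mnt/shared/my_program /tmp/   # copy the binary off the mounted drive
    cd /tmp                           # run from the local drive
    ./my_program                      # the core dump is now written locally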


Comments

  • Ali
    Ali almost 2 years

    I am running a program, and it is interrupted by a segmentation fault. The problem is that the core dump file is created, but has size zero.

    Have you heard about such a case and how to resolve it?

    I have enough space on the disk. I have already run ulimit -c unlimited to remove the limit on core file size - both running it directly and putting it at the top of the submitted batch file - but I still get 0-byte core dump files. The permissions of the folder containing these files are uog+rw, and the permissions on the core files created are u+rw only.

    The program is written in C++ and submitted to a Linux cluster with the qsub command of Grid Engine; I don't know whether this information is relevant to the question.

    • Mark Loeser
      Mark Loeser over 11 years
      You do have free space on the drive I'm assuming?
    • eh9
      eh9 over 11 years
      What are the write permissions on the zero-length file?
    • eh9
      eh9 over 11 years
      Next questions: What are the permissions on the containing directory? Is the process running under an effective user id that's different than the directory owner?
    • eh9
      eh9 over 11 years
      You said you're using Grid Engine. Is it correct that there are multiple nodes in the cluster? It's easy for multiple nodes to share a single file system, but if they don't also share a user account system, it's likely that a job running on another node cannot run under your own user id, and thus looks to the file system like an "other" id.
    • eh9
      eh9 over 11 years
      Try making a temporary directory and setting its permissions to world-writable.
    • eh9
      eh9 over 11 years
      I'm out of ideas. Also, I'd recommend adding some of this information to the question, so we can clean up these comments.
    • nvlass
      nvlass over 11 years
      Have you tried setting the file size on qsub? (e.g. -l file=100mb)
    • Ali
      Ali over 11 years
      @nvlass It says: Unable to run job: unknown resource "file".
    • nvlass
      nvlass over 11 years
      @Ali my bad, I erroneously assumed a Linux-like qsub. However, there should be some related resource like "max filesize per job", or perhaps "max core size per job". Is there a man page on the job resources?
  • Ali
    Ali over 11 years
    I put both ulimit -c unlimited and ulimit -s unlimited to the PBS batch script, but still the core dumps are empty!
  • Ranjith Ruban
    Ranjith Ruban over 11 years
    What is the filesystem that you are using for dumping the core?
  • Mike Tunnicliffe
    Mike Tunnicliffe almost 9 years
    I just had this problem on a Linux VirtualBox image with a vboxsf filesystem that mapped to an NTFS drive (the drive of the host machine).
  • phyatt
    phyatt over 7 years
    Modifying the core_pattern as the root user works miracles! The NFS drive path made core files zero bytes. stackoverflow.com/a/12760552/999943 Besides setting the path where the core file gets created, there is some nifty syntax for changing how it gets named, too (see the sketch after these comments): linuxhowtos.org/Tips%20and%20Tricks/coredump.htm
  • Daniel
    Daniel over 5 years
    Had the same problem with a mounted filesystem under VirtualBox. Thanks!
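
A sketch of the naming syntax phyatt mentions (run as root; the format specifiers are documented in the core(5) man page):

    # name cores by executable name, PID, and epoch timestamp, under local /tmp
    echo '/tmp/core.%e.%p.%t' > /proc/sys/kernel/core_pattern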