How does a breakpoint in a debugger work?


Solution 1

The compiler does not need to "modify" the binary in any way to support breakpoints. However, it is important that:

  • The compiler includes enough information in the executable (not in the code itself, but in special sections of the same file) so that the debugger can relate the source the user wants to debug to the machine code. One typical thing the debugger needs to know in order to set breakpoints (unless you specify addresses directly) is where, that is at which address, program functions and lines of source code start within the machine code (see the example after this list).
  • The code is not optimized by the compiler in any way that makes it impossible to relate source and machine code. Typically you will want to debug code that was not optimized, or code where only carefully selected optimizations were performed.

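As a small illustration of the first point, consider a hypothetical file tiny.c (the file name and the compiler flags shown are assumptions, typical for GCC on Linux). Compiling it with `gcc -g -O0 tiny.c -o tiny` makes the compiler emit debug sections alongside the unchanged machine code, which is exactly the line/address mapping the debugger consults when you type `break main` or `break tiny.c:<line>`.

```c
/* tiny.c -- hypothetical example program (name assumed for illustration).
 * Build with debug info and without optimization:
 *     gcc -g -O0 tiny.c -o tiny
 * The -g flag adds DWARF sections (line table, function/symbol addresses)
 * to the executable; it does not alter the generated instructions, so the
 * debugger can map "main" or a source line to the machine-code address
 * where that code starts. */
#include <stdio.h>

int main(void)
{
    int x = 42;               /* a debugger can resolve this line to an address */
    printf("x = %d\n", x);
    return 0;
}
```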
The rest of the work is then performed by the debugger itself.

  1. Software breakpoints don't necessarily need special hardware features. The debugger here relies on modifying the original binary (the copy that is loaded into memory). When you set a breakpoint, the debugger places a special instruction at the breakpoint location. This special instruction needs to somehow let the debugger detect when it (this special instruction) is executing. It can be an instruction that causes some kind of interrupt/exception the debugger can hook onto, or an instruction that hands control over to a debug unit. If the program runs under an OS, that OS needs to support modifying the running program (with something like ptrace poke/peek); a minimal sketch of this is shown after this list. The downside of software breakpoints is that the debugger must be able to modify the running program, which is not possible if the program runs from some kind of read-only memory (quite common in the embedded world).
  2. Hardware breakpoints (which need to be supported by the CPU) implement similar behavior without modifying the program binary. This is CPU specific, but usually it lets you at least define a program address at which execution should hit a breakpoint. The CPU continuously compares the current PC against these breakpoint addresses and breaks execution once the condition matches (see the second sketch below). The number of such breakpoints is always limited.

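To make point 1 more concrete, here is a minimal sketch of how a debugger on x86-64 Linux could plant a software breakpoint in a traced child process using ptrace. The helper name set_sw_breakpoint is made up for this example, addr is assumed to have been resolved from the debug info already, and error handling, restoring the instruction, and single-stepping over it are left out.

```c
/* Minimal sketch: plant an INT3 (0xCC) software breakpoint in a tracee
 * on x86-64 Linux. Assumes the caller has already attached with ptrace
 * (PTRACE_ATTACH or PTRACE_TRACEME) and resolved `addr` from debug info. */
#include <stdint.h>
#include <sys/ptrace.h>
#include <sys/types.h>

long set_sw_breakpoint(pid_t pid, uintptr_t addr)
{
    /* Read the original word at the breakpoint address. */
    long orig = ptrace(PTRACE_PEEKTEXT, pid, (void *)addr, NULL);

    /* Overwrite its lowest byte with INT3. Executing it raises SIGTRAP
     * in the tracee, which the debugger observes via waitpid(). */
    long patched = (orig & ~0xFFL) | 0xCC;
    ptrace(PTRACE_POKETEXT, pid, (void *)addr, (void *)patched);

    /* Return the original word so the debugger can later restore it,
     * single-step over the real instruction, and re-arm the breakpoint. */
    return orig;
}
```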
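And for point 2, a sketch of the hardware route on x86-64 Linux: the debugger writes the target address into debug register DR0 of the traced process and enables it in DR7, after which the CPU itself compares the program counter against DR0 and raises a debug exception on a match. The helper name and the bare-minimum DR7 encoding are assumptions; x86 offers only four such address registers (DR0-DR3), which is why the number of hardware breakpoints is limited.

```c
/* Minimal sketch: arm an execute (hardware) breakpoint via the x86 debug
 * registers of a traced process on Linux. Error handling is omitted. */
#include <stddef.h>
#include <stdint.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/user.h>

int set_hw_breakpoint(pid_t pid, uintptr_t addr)
{
    /* Debug registers live in the tracee's "user area". */
    size_t dr0_off = offsetof(struct user, u_debugreg[0]);
    size_t dr7_off = offsetof(struct user, u_debugreg[7]);

    /* DR0 holds the address to watch. */
    if (ptrace(PTRACE_POKEUSER, pid, (void *)dr0_off, (void *)addr) == -1)
        return -1;

    /* DR7 bit 0 enables DR0 locally; condition/length bits of zero mean
     * "break on instruction execution". The CPU then raises a debug
     * exception (seen as SIGTRAP) whenever the PC matches DR0. */
    long dr7 = 0x1;
    return (int)ptrace(PTRACE_POKEUSER, pid, (void *)dr7_off, (void *)dr7);
}
```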
Solution 2

To put a breakpoint, we first have to add some special information into the binary. We use the -g flag while compiling the C source files to include this info; the software debugger then actually uses this info to place breakpoints. The best example of hardware breakpoint support I have experienced is in VxWorks. Basically, at the breakpoint the processor halts, so internally any step that raises an exception on the processor can be used to implement a software breakpoint, while a hardware breakpoint works by matching the address stored in hardware registers to cause an exception. A hardware breakpoint is therefore very powerful, but it is heavily architecture dependent.
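On a Unix-like host, both mechanisms end up looking the same to the debugger: the traced process stops with SIGTRAP when the breakpoint (software INT3 or hardware address match) fires. A minimal sketch of that tracer-side loop on Linux, with a made-up function name and no error handling, could look like this:

```c
/* Minimal sketch: resume a traced process and wait for it to stop on a
 * breakpoint. Both INT3 and a debug-register hit are delivered to the
 * tracer as a SIGTRAP stop on Linux. */
#include <stdio.h>
#include <signal.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>

void wait_for_breakpoint(pid_t pid)
{
    int status;

    /* Let the tracee run until something stops it. */
    ptrace(PTRACE_CONT, pid, NULL, NULL);
    waitpid(pid, &status, 0);

    if (WIFSTOPPED(status) && WSTOPSIG(status) == SIGTRAP)
        printf("tracee %d hit a breakpoint\n", (int)pid);
}
```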

A very good explanation is given here: What is the difference between hardware and software breakpoints? A good intro with processor-related information is given here: http://processors.wiki.ti.com/index.php/How_Do_Breakpoints_Work


Comments

  • Shan, almost 4 years ago

    Breakpoints are one of the coolest features supported by popular debuggers like GDB. But how does a breakpoint work? What code modifications does the compiler make to achieve the breakpoint? Are there any special hardware features used to support breakpoints?

  • stib, over 7 years ago
    So point 2 is why stuff runs much slower when being debugged?
  • dbrank0, over 7 years ago
    No, generally a program should not run slower when being debugged. Some debug features are an exception, for example software watchpoints, but setting breakpoints should not slow down a program. What you describe is probably a side effect of the no-optimization (or low-optimization) build required when you are debugging.