Separate 'debug' and 'release' builds?

Solution 1

This might be minor, but it adds to what others have said here. One advantage of having QA test release builds is that, over time, the built-in debugging and logging capabilities of your software will advance, driven by the needs of developers who have to figure out why things are going wrong in QA.

The more the developers need to debug release builds, the better the tools you'll have later when customers start having issues. Of course, there's no reason for developers to work on release builds as part of the development cycle.
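
For illustration, here's a minimal sketch of the kind of facility that tends to grow out of this: logging that is compiled into every build, with the verbosity decided at run time. The names (MYAPP_LOG_LEVEL, logMsg) are purely illustrative:

    #include <cstdio>
    #include <cstdlib>

    enum class LogLevel { Error = 0, Warn = 1, Info = 2, Trace = 3 };

    static LogLevel currentLevel() {
        // Read the threshold once from the environment; default to Warn.
        // MYAPP_LOG_LEVEL is a hypothetical variable name.
        static const LogLevel level = [] {
            const char* env = std::getenv("MYAPP_LOG_LEVEL");
            return env ? static_cast<LogLevel>(std::atoi(env)) : LogLevel::Warn;
        }();
        return level;
    }

    // Present in debug and release builds alike, so it's still there
    // when QA (or later, a customer) hits a problem.
    void logMsg(LogLevel level, const char* msg) {
        if (level <= currentLevel())
            std::fprintf(stderr, "[myapp] %s\n", msg);
    }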

Also, I don't know of any software company whose cycles are long enough to afford the overhead of switching QA from debug to release builds halfway through a version's testing period. A second full QA cycle is something that all too often happens pretty rarely.

Solution 2

Having separate debug and release builds is a good idea, because it does make development easier.

But debug builds should be for development only, not for testing. You test release builds only. And you don't use developers to test those builds, you use testers.

It's a simple policy that gives the best of both worlds, IMO.

Edit: In response to a comment, I think it's obvious that debug and release builds (can) generate different code. Think "-DDEBUG" vs. "-DNDEBUG", and "#if defined(DEBUG)", etc.
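
For example (a minimal sketch, assuming a C++ code base), the same source produces genuinely different object code depending on which macro the build defines:

    #include <cassert>
    #include <iostream>

    int divide(int numerator, int denominator) {
        assert(denominator != 0);   // vanishes when the build defines NDEBUG
    #if defined(DEBUG)
        // Tracing that exists only when the build defines DEBUG.
        std::cerr << "divide(" << numerator << ", " << denominator << ")\n";
    #endif
        return numerator / denominator;
    }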

So it's vital that you test the code that you end up shipping. If you do generate different code in debug and release builds, that means testing twice - regardless of whether or not it's tested by the same person.

Debug symbols are not that big an issue, however. Always build with debugging symbols, keep a copy of the unstripped binary, but release a stripped binary. As long as you tag each binary with a build number somehow, you should always be able to identify which unstripped binary corresponds to the stripped binary that you have to debug...

How to strip binaries and load symbols in your debugger from an external source is platform-dependent.
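
On GNU/Linux, for instance, that would be objcopy --only-keep-debug to save the symbols, strip for the shipped binary, and objcopy --add-gnu-debuglink to tie the two together. As for tagging, one possible sketch in C++ (BUILD_NUMBER is a hypothetical macro your build system would inject, e.g. via -DBUILD_NUMBER=\"1234\"):

    // Embed a build tag that survives stripping, so the shipped binary
    // can be matched to its archived unstripped twin, e.g. via strings(1).
    #ifndef BUILD_NUMBER
    #define BUILD_NUMBER "unknown"   // real value injected by the build system
    #endif

    // External linkage keeps the tag from being discarded as unused.
    extern const char kBuildTag[];
    const char kBuildTag[] = "MYAPP_BUILD_ID: " BUILD_NUMBER;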

Solution 3

Our policy is to have developers work on debug builds, but EVERYONE else (QA, BAs, sales, etc.) runs the release version. Yesterday I had to fix a bug that only showed up in the release build; it was obvious what was happening simply BECAUSE it only showed up in release.

It's the first one like that in this shop, and I've been here 18 months or so.

Where things get hairy is when the release build does different things from the debug build - yes, I have been to Hell and seen this in some very old, very ropy production code.

I see no reason not to have both if the only difference between the configurations is debug symbols and optimisations.

Solution 4

so you need to build a release which you can if necessary debug ... this may mean enabling debug symbols, and disabling some optimizations, even in the 'release' build.

Ummm... it sounds like you're doing a debug build to me... right?

The part where you went wrong is this statement:

I think it's better to release the version of the software which your developers actually tested

Developers don't test code. Tests test code.

Your unit tests should test ALL build configurations. Do not make your developers work with one hand tied behind their back - let them use all the debugging tools they have at their disposal. A debug build is one of these.

Regarding asserts: the use of assertions greatly depends on whether or not you program by contract. If you do, then assertions merely check the contract in a debug build.
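
A minimal sketch of what that looks like in C++ (a hypothetical bounds-checked accessor, not code from the question):

    #include <cassert>
    #include <cstddef>

    // Contract: 'data' is non-null and 'index' is within bounds.
    // The asserts document and enforce the contract in a debug build,
    // and compile to nothing when NDEBUG is defined (the usual
    // setting for release builds).
    int at(const int* data, std::size_t size, std::size_t index) {
        assert(data != nullptr);
        assert(index < size);
        return data[index];
    }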

Solution 5

As per my answer in the linked thread, we also use the same build for debug and release, for very similar reasons. The 10%-20% performance gains from the optimiser tend to be very minor when compared to manual optimisations at algorithm level. A single build removes many potential bugs. Specifically:

  • Uninitialised variables and small buffer overflows may end up with very different results in debug and optimised release builds (see the sketch after this list).

  • Even with the symbolic information available, debugging an optimised release can be difficult as the object doesn't match the source, e.g. variables may have been optimised out and code may have been re-arranged. Thus bugs reported in tested release builds can be more difficult, and hence time-consuming, to track down.
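
A minimal sketch of the first point (illustrative only):

    // Reading an uninitialised variable is undefined behaviour, and the
    // symptom often differs between builds: a debug build may happen to
    // leave the stack slot zeroed, while the optimiser may keep 'total'
    // in a register holding leftover junk, so only the release build
    // misbehaves.
    int sumFirstN(const int* values, int n) {
        int total;                  // BUG: never initialised
        for (int i = 0; i < n; ++i)
            total += values[i];
        return total;
    }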

Having compared unoptimised and optimised builds under automated regression tests, I've found that the performance gains provided by the optimisation don't provide enough extra value to justify two builds in my case. It may be worth noting that the software I develop is very CPU-hungry (e.g. creating and manipulating large surface models).

Comments

  • ChrisW
    ChrisW over 3 years

    I think it's better to release the version of the software which your developers actually tested; I therefore tend to delete the 'debug' target from the project/makefile, so that there's only one version that can be built (and tested, and debugged, and released).

    For a similar reason, I don't use 'assertions' (see also Are assertions always bad? ...).

    One person there argued that the reason for a 'debug' version is that it's easier to debug: but, I counter-argued that you may eventually want to support and debug whatever it is you released, and so you need to build a release which you can if necessary debug ... this may mean enabling debug symbols, and disabling some optimizations, even in the 'release' build.

    Someone else said that "this is such a bad idea"; it's a policy I evolved some years ago, having been burned by:

    • Some developers' testing their debug but not release versions
    • Some developers' writing bugs which show up only in the release version
    • The company's releasing the release version after inadequate testing (is it ever entirely adequate?)
    • Being called on to debug the release version

    Since then I've seen more than one other development shop follow this practice (i.e. not have separate debug and release builds).

    What's your policy?

    • SmacL
      SmacL over 15 years
Apparently consensus suggests it is not such a bad idea after all ;)
  • Mitch Wheat
    Mitch Wheat over 15 years
@Paul Tomblin: I would disagree. Only ever test against Release builds. I've seen side-effect code present in debug mode only. Testing the "full QA cycle" twice is fraught with danger...
  • hachu
    hachu over 15 years
That's all fine and dandy, provided that you have the staff to support that kind of testing. But what if (like me) you work in a company that has neither the staff nor the desire to acquire the staff to test in that manner?
  • Patrick
    Patrick over 15 years
You should always try to have someone other than the developer do final testing, even if it's just another developer. A new person will come at it from a different angle.
  • SmacL
    SmacL over 15 years
    @Paul, we overcame this by changing #ifdef DEBUG to if (_debugging()) such that the overhead was only in executable size and we could still invoke debug/diagnostic code in the release as and when required. (C or C++ only)
  • ChrisW
    ChrisW over 15 years
    Do you also use a code analyzer to detect "uninitialised variables and small buffer overflows"? Also you might enable compiler optimization of only the most critical module[s] (on a per-module basis), while still keeping the policy of having only one build target.
  • ChrisW
    ChrisW over 15 years
    Re. "the log files were pretty damn big" perhaps that's because your debug build has extra log statements; instead of that I like to control the level of logging (how much detail is logged) via run-time configuration options.
  • Tom
    Tom over 15 years
    valgrind or other tools can identify invalid usage of memory far better than just looking for different results, so that's a fairly weak justification.
  • SmacL
    SmacL over 15 years
@Chris, I use PC-lint to carry out static analysis on the code, and BoundsChecker and AQTime for dynamic analysis. I also use a lot of 3rd-party libs that I have much less control over (or desire to debug).
  • SmacL
    SmacL over 15 years
@Tom, Valgrind is a great tool but unfortunately I'm Windows-based. I do use both static and dynamic analysis tools, but they have their limitations, e.g. try throwing a couple of hundred thousand lines of someone else's code at lint and deciphering the megs of error messages returned.
  • ChrisW
    ChrisW over 15 years
    "... right?" Call it what you will: it's a release build that includes debug information ... the one-and-only build ... a hybrid.
  • ChrisW
    ChrisW over 15 years
    "Developers don't test code. Tests test code." Some tests can't easily be automated, or haven't been automated.
  • Graeme Perrow
    Graeme Perrow over 15 years
    I agree that some tests can't be easily automated, but whether this is a problem for you depends on the size of your QA team. If your QA "team" is Frank down the hall, then the developers need to do some QA as well. If you have a QA team of twenty, then they should be running manual tests.
  • Daniel Paull
    Daniel Paull over 15 years
    @ChrisW: I never said automated tests! You should have test scenarios written down for code that requires manual testing - these are still tests. Do not rely on ad hoc testing during development; you are testing intermediate versions of the system and the test results are meaningless.
  • Tom
    Tom over 15 years
smacl - I know what you mean - try turning on -Wall -Wextra -Werror -ansi -pedantic -std=c++98 on any legacy codebase and see how many compilation units you can break. IMO, compiler warnings need to be controlled with an iron fist in any software shop, to keep everything clean enough to analyze.
  • ChrisW
    ChrisW over 15 years
That's true. What I'm saying is that even in a C++ project, because you need to support (and therefore might need to debug) the released software, even the 'Release' build should be debuggable ... and therefore you don't need (and IMO don't want) a separate 'Debug' build.
  • danielschemmel
    danielschemmel over 15 years
    I totally agree that a "debug build" even in a C++ project should mainly consist of changing compiler options and not the code that is executed.
  • demoncodemonkey
    demoncodemonkey over 15 years
    "...all too often happens pretty rarely" - ummm... :D
  • peterchen
    peterchen almost 15 years
    @Mike: There's good statistical evidence that developers don't find their own bugs. That's ok for one-man shows where the customer has a direct wire to the developer, and an urgent fix can be out in an hour between phone ringing and DLL delivered. Even for a one-man-show, I'd make a separation of development and testing. There should be at least a minimal, manual protocol for things to test on the final build before it leaves the door.
  • Tim Long
    Tim Long almost 14 years
    I tackle this by having my CI build server build only the Release configuration. Developers can then feel free to use whatever configuration they like, but as soon as they commit code to version control, everything is Release from that point on.
  • ChrisW
    ChrisW over 10 years
Your link doesn't work. Click on the share button under the answer that you want to link to, and use the URL whose format is like stackoverflow.com/a/406775/49942
  • sleske
    sleske over 8 years
    As far as I know, including debugging information in Java bytecode (javac -g) has no measurable runtime difference at all, it just makes the JARs bigger. See Is there a performance difference between Javac debug on and off?.
  • Aaron Digulla
    Aaron Digulla over 8 years
    When I did a performance measurement with Oracle Java 6, we had a loss of about 5%. Hardly noticeable.