Compiling & linking multiple files in C++


Solution 1

The main reason people compile object by object is to save time. High-level, localised code changes often require recompiling only one object and a relink, which can be much faster than rebuilding everything. (Conversely, splitting into too many objects that each pull in heaps of headers, or redundantly instantiate the same templates, may actually be slower when a change in common code triggers a near-full recompilation.)

If the project is so small that it can be compiled in 2 seconds, then there's not much actual benefit to the traditional approach, though doing what's expected can save developer time - like yours and ours on here :-). Balancing that, maintaining a makefile takes time too, though you may well end up doing that anyway in order to conveniently capture include directories, libraries, compiler switches etc.
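As a rough illustration, a minimal makefile for the scheme in the question might look like this (the file names are assumed from the question, and the flags are just examples, not a recommendation):

```make
# Minimal sketch -- file names taken from the question, flags illustrative.
CXX      = g++
CXXFLAGS = -Wall -O2
OBJS     = main.o function1.o function2.o

final: $(OBJS)
	$(CXX) -o $@ $(OBJS)

# Each object depends on its source and the shared header, so editing
# one .cpp rebuilds only that object before the relink.
%.o: %.cpp funcs.h
	$(CXX) $(CXXFLAGS) -c $<

clean:
	rm -f final $(OBJS)
```

With this in place, `make` after touching `function1.cpp` recompiles just `function1.o` and relinks, which is the time saving described above.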

Actual implications to written/generated code:

  • cpp files normally include their own header first, which sanity-checks that the header can be used independently by other client code: lump everything into one file and the namespace is already "contaminated" by includes from earlier headers/implementation files, so that check is lost
  • the compiler may optimise better when everything is in one translation unit (+1 for leppie's comment to do the same...)
  • static non-member variables and anonymous namespaces are private to the translation unit, so including multiple cpps means sharing these around, for better or worse (+1 for Alexander :-))
  • say a cpp file defines a function or variable which is not mentioned in its header and might even be static or in an anonymous namespace: code later in the translation unit could call it freely without needing to hack up its own forward declaration (this is bad - if the function was intended to be called outside its own cpp then it should have been declared in the header and exposed as an external symbol in its translation unit's object)
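To make that last point concrete, here is a minimal sketch (the names are hypothetical, not from the question) of what happens once a cpp's internals land in the including translation unit:

```cpp
#include <cassert>

// Imagine this part was pasted in by `#include "function1.cpp"`:
namespace {                              // anonymous namespace: internal linkage
    int helper(int x) { return x * 2; }  // never declared in any header
}

// Code "later in the translation unit" (e.g. main.cpp's own functions)
// can now call helper() with no forward declaration at all:
int use_helper() { return helper(21); }
```

Compiled as its own translation unit, `helper` would be invisible to `main.cpp`; included directly, it is quietly callable everywhere below the include.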

BTW - in C++ your headers can declare functions without explicitly using the extern keyword, and it's normal to do so.
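For instance (a hypothetical pair of declarations, with illustrative names not taken from the question):

```cpp
#include <cassert>

// These two declarations mean exactly the same thing: free functions
// have external linkage by default in C++.
int add(int a, int b);          // implicit extern
extern int sub(int a, int b);   // explicit extern -- redundant but legal

int add(int a, int b) { return a + b; }
int sub(int a, int b) { return a - b; }
```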

Solution 2

The reason for the second style is that each .cpp file can be treated separately, with its own classes, global variables, etc., without risk of conflict.
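For example, the following fragment (hypothetical file names) is deliberately not compilable if both files are #included into one translation unit, which is exactly the conflict being described:

```cpp
// function1.cpp
static int counter = 0;   // internal linkage: private to this file
                          // when compiled as its own translation unit

// function2.cpp
static int counter = 0;   // fine when compiled separately: an independent
                          // variable with the same name, no clash at link time
```

Compile each file on its own and the two `counter`s never meet; `#include` both into main.cpp and the compiler rejects the redefinition.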

It is also easier in IDEs that automatically compile and link all the .cpp files in the project (like MSVC).


Updated on July 09, 2022

Comments

  • Admin, almost 2 years ago

    One of my "non-programmer" friends recently decided to make a C++ program to solve a complicated mechanical problem.

    He wrote each function in a separate .cpp file, then included them all in the main source file, something like this:

    main.cpp:

    #include "function1.cpp"
    #include "function2.cpp"
    ...
    int main() 
    {
    ...
    
    }
    

    He then compiled the code, with a single gcc line:

    g++ main.cpp    // took about 2 seconds 
    

    Now, I know that this should work, but I'm not sure whether including .cpp files directly into the main program is a good idea. I have seen the following scheme several times, where all the function prototypes go into a header file with the extern keyword, like this:

    funcs.h:

    extern void function1(..);
    extern void function2(..);
    ...
    

    main.cpp:

    ...
    #include "funcs.h"
    ...
    

    & compiling with:

    g++ -c function1.cpp
    g++ -c function2.cpp
    ...
    g++ -c main.cpp
    g++ -o final main.o function1.o function2.o ...
    

    I think that this scheme is better (with a makefile, of course). What reasons can I give my friend to convince him so?