In a makefile, when do we use .o files?


Solution 1

Update:

For me, a lot of things become self evident when an analogy is used, so forgive me for adding an analogy here, too.

If the pages in a book are all numbered anyway, why bother with books? If I want to read a book, I can just pick up the pages, right? If .c + .h files are individual pages, then .o-files are like chapters. They are sort-of standalone entities of machine code, but don't have to (and often don't) make sense on their own. They are pre-compiled entities that need to be passed to the linker, who'll put them together so it all makes sense.

Just like a bunch of loose papers or chapters seldom mean anything, until someone puts them all together in the right order. But putting loose papers together is boring, and time consuming, and leaves a lot of room for error. Putting some chapters together is faster, easier and safer. In that respect, object files are just... necessary.

Put simply, if you were to write, and then compile a program, say a MySQL client, you'd have to write your DB-querying code separately from the code that deals with the front-end/GUI stuff.

If you'd put them all into a single file, it'd be a right mess, and you wouldn't be able to reuse any of the code. At least, not easily.

By separating the GUI layer of the code, for example, you can use the code you wrote that draws buttons for your next program.

Also, by separating the code according to what it does, you reduce the chance of bugs caused by name conflicts, invalid typedefs and the like. Besides: who in their right mind would look at this snippet of code and think that this is normal?

gtk_window_set_default_size(GTK_WINDOW(window), 230, 150);
gtk_window_set_position(GTK_WINDOW(window), GTK_WIN_POS_CENTER);
gtk_window_set_icon(GTK_WINDOW(window), create_pixbuf("icon.png"));
MYSQL *connection = mysql_init(NULL);
if (connection == NULL) exit(EXIT_FAILURE);

Mixing in back-end and GUI stuff is just awful in every way.

  • Code has to be structured to reduce the chances of bugs, simplify & speed up development, and increase the re-usability of code.

So that's why we separate code depending on what it's supposed to do. The same logic can be applied to compiling the code: because separation allows you to compile segments of a program independently, you can reuse those segments again and again without having to recompile the lot.

  • Using .o files is faster, because we can re-use what we have already written, without having to re-compile it every time.

When you're compiling, especially when still developing/debugging/testing your code, you want the compilation phase to be fast. Compiling only what you've changed is easy: keep the .o files of the source files that weren't altered.
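
A minimal sketch of that idea, using hypothetical file names (main.c, gui.c and db.c are illustrations, not files from the question): each source file gets its own .o rule, so editing gui.c only recompiles gui.o and relinks, while db.o and main.o are reused untouched.

CC     = gcc
CFLAGS = -Wall -I.

# Link step: reruns only if one of the .o files is newer than the binary.
myprog: main.o gui.o db.o
        $(CC) -o myprog main.o gui.o db.o

# Per-file compile steps: only the .o whose sources changed gets rebuilt.
main.o: main.c gui.h db.h
        $(CC) $(CFLAGS) -c main.c

gui.o: gui.c gui.h
        $(CC) $(CFLAGS) -c gui.c

db.o: db.c db.h
        $(CC) $(CFLAGS) -c db.c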

Couple that with the fact that adding new features, and (thinking of version-control systems here) working with submodules, is just a lot easier this way, and you soon realize you'd have to be insane not to organize your work like this.

So building your program in little bits and pieces, and then linking them all together is, from a development standpoint, the best way to go.

  • Debugging is easier: if the linker throws up errors, you know where to look.

When you compile code that doesn't contain a main function, you get an error. In fact, it's the linker that generates this error: it looks through all the objects it needs to link together in order to create the binary file, in search of all the symbols your program relies on. If no main symbol is to be found anywhere, the program has no starting point, and thus can't run. The compiler doesn't require this function, as it is quite happy to compile code to a non-standalone .o file. The linker, however, has to produce the final program, and if the main function can't be found anywhere, it can't continue.

That means the linker is what actually informs you of which functions are missing (possible typos), and because you've compiled your code into objects first, to be linked together later, the error alone tells you in which module you should be looking to fix that missing symbol, or which dependency is not being linked properly.
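
To see where that error actually comes from, here's a small sketch (hellofunc.c is assumed to contain only helper functions and no main): the compile step goes through fine, and it's the link step that complains about the missing symbol.

# hellofunc.c is assumed to define helper functions but no main().
hellofunc.o: hellofunc.c
        gcc -c hellofunc.c        # fine: -c stops after producing the object

# Linking that object into a program fails, typically with something like
# "undefined reference to `main'" - it's the linker, not the compiler, complaining.
broken: hellofunc.o
        gcc -o broken hellofunc.o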

Why, then, not change the makefile once your code is ready to be released?

  • Why bother?
  • If your makefile is public, you're probably dealing with an open-source project. Within a day, everybody who's interested in the code will have made their own makefile (which doesn't help an open-source project -> standards are important). Compiling the lot in one go doesn't make your project open-source friendly: nobody can tinker away at one given piece of code without having to constantly recompile the lot. Why would you spend time & effort to annoy possible contributors?
  • Makefiles should be thought of as code: make them easy to read, for your sake and ours. If you have 5 submodules, each needing anywhere between 5 and 10 parameters passed to gcc, your makefile will look like a mess.
  • When you compile a program that has more than 2 files that need linking, who's to say that all files have to be linked to each other in the same way? Not only that, who's to say that you want all the files compiled in the same way? Depending on the release, or if you're dealing with a project that supports extensions, you may find yourself having to compile various bits of the source differently, and in a given order.
  • If it's an open-source project: if your code builds on your system, that means you have all the dependencies at the ready. Others might not. A makefile (ideally, you have a configure script, too) can tell people eager to try whatever you made which dependencies they lack.

Take this for example:

MAIN   ===> API ==> MySQL
  |          /\      /
  |  --->  std*  ---/
  v        /
 GUI -----/
  |
  v
 GTK+

Your makefile would compile the API + MySQL code and the GUI + GTK code completely separately, and then link them all together into the "MAIN" of the example given above. Compiling everything in one go would misrepresent that structure: if the API is well written, it shouldn't matter whether there's a GUI or not.
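
A rough sketch of what such a makefile could look like (the file names, and the use of pkg-config and mysql_config to look up compiler and linker flags, are assumptions for illustration, not part of the example above):

CC       = gcc
GTK_CF   = $(shell pkg-config --cflags gtk+-3.0)
GTK_LIBS = $(shell pkg-config --libs gtk+-3.0)
SQL_CF   = $(shell mysql_config --cflags)
SQL_LIBS = $(shell mysql_config --libs)

# MAIN: link the separately compiled pieces together.
main: main.o api.o gui.o
        $(CC) -o main main.o api.o gui.o $(GTK_LIBS) $(SQL_LIBS)

# API + MySQL code, compiled on its own.
api.o: api.c api.h
        $(CC) $(SQL_CF) -c api.c

# GUI + GTK code, compiled on its own.
gui.o: gui.c gui.h
        $(CC) $(GTK_CF) -c gui.c

main.o: main.c api.h gui.h
        $(CC) -c main.c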

As the diagram shows, you often find yourself linking against shared dependencies. It can happen that the linker needs your help to resolve those dependencies correctly, though I can't find an example at the moment. This alone forces you to compile in steps, and thus write a makefile.

Then there are static dependencies. I'll just leave you with this link, which does a great job at explaining this, and once you read through it, you should have a good understanding of why a makefile is so valuable.

Solution 2

This also works when there are no .c files to generate the .o files from. And if the .c files are there and the .o files are not, make uses its built-in implicit rules to generate them.
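
For example, with nothing but the rule from the question, GNU make will notice that hellomake.o and hellofunc.o are missing, build each one from the matching .c file via its built-in '%.o: %.c' rule (roughly $(CC) $(CPPFLAGS) $(CFLAGS) -c), and only then run the link recipe - no explicit compile rules are needed:

CC=gcc
CFLAGS=-I.

# No %.o rules here: the built-in implicit rule turns hellomake.c and
# hellofunc.c into hellomake.o and hellofunc.o before this link step runs.
hellomake: hellomake.o hellofunc.o
        $(CC) -o hellomake hellomake.o hellofunc.o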

Solution 3

Your example seems to be drawn from a popular tutorial, like: http://www.cs.colby.edu/maxwell/courses/tutorials/maketutor/ (or google for "hellomake.o hellofunc.o") in which several levels of style are shown. The crucial difference is whether dependencies are explicitly or implicitly expressed. Thus (from that tutorial):

hellomake: hellomake.c hellofunc.c
        gcc -o hellomake hellomake.c hellofunc.c -I.

hellomake: hellomake.o hellofunc.o
        $(CC) -o hellomake hellomake.o hellofunc.o -I.

The first case says that the hellomake target file must be no older than the *.c files - and gives a production rule that always re-compiles both *.c files into the target (which implies that both *.c files will be recompiled to produce *.o files that are then loaded into the hellomake target file).

The second case says that the hellomake target file must be no older than the *.o files - and gives a rule that only loads the *.o files into the hellomake target file (the $(CC) knows to call the loader for *.o files).

One approach to Makefiles is to make most of the rules implicit - rules like:

foo.o should be no older than foo.c
compile a *.c to produce a *.o

or

foo: foo.c bar.c foo.h

Another approach is to make many rules explicit:

foo: foo.o bar.o
    gcc -o foo foo.o bar.o

foo.o: foo.c
    gcc -c foo.c

bar.o: bar.c
    gcc -c bar.c

I prefer the minimal and terse approach - express as few dependencies as possible, and let make(1) use the built-in rules to 'do the right thing'. Make knows to call a C compiler to produce a *.o from a *.c, and to call a C compiler to produce an executable from some *.o files - all we should have to do is tell it which executable should be no older than which *.c files.
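
Taken to its logical end, and assuming GNU make's built-in rules, the tutorial example can be written with no recipes at all: the built-in '%.o: %.c' rule does the compiling and the built-in '%: %.o' rule does the linking (the header name below is an assumption, added only so that header changes trigger a rebuild).

CC     = gcc
CFLAGS = -I.

# No recipes: built-in rules compile each .c to a .o and link the .o files.
hellomake: hellomake.o hellofunc.o

# Hypothetical header dependency.
hellomake.o hellofunc.o: hellofunc.h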

Others prefer the very explicit and verbose approach.

I speculate that this is a pattern that evolved in the late 1980s when tools to generate Makefiles became popular, and they generated very verbose Makefiles that expressed every dependency, and then people started imitating that style in hand-written Makefiles.

(Formatting note - production rules in Makefiles must be indented with a TAB character.)


Comments

  • user2799508
    user2799508 almost 2 years

    In the following make file, what is the significance of adding .o files instead of .c files?

    CC=gcc
    CFLAGS=-I.
    
    hellomake: hellomake.o hellofunc.o
         $(CC) -o hellomake hellomake.o hellofunc.o -I.
    
  • user2799508
    user2799508 over 10 years
what do internal rules mean? I am expecting an error when no .o files are there..
  • Zelda
    Zelda over 10 years
    sorry, they are called built-in implicit rules in the make man page.
  • psusi
    psusi about 10 years
    Wow, that is an impressive wall of text for something that could have just said "so you don't have to recompile everything when you change only one file" ;)
  • Elias Van Ootegem
    Elias Van Ootegem about 10 years
    @psusi: I know, my answers tend to be a tad too verbose from time to time. Note that I also said that often, you can't compile everything (ie shared dependencies)