Detecting a 64-bit compile in C

Solution 1

Since you tagged this "gcc", try

#if __x86_64__
/* 64-bit */
#endif

Solution 2

Here is the correct and portable test which does not assume x86 or anything else:

#include <stdint.h>
#if UINTPTR_MAX == 0xffffffff
/* 32-bit */
#elif UINTPTR_MAX == 0xffffffffffffffff
/* 64-bit */
#else
/* wtf */
#endif

Solution 3

An easy one that will make language lawyers squirm.

#include <limits.h>

if (sizeof (void *) * CHAR_BIT == 64) {
    ...
}
else {
    ...
}

As the condition is a compile-time constant, an optimizing compiler will drop the test and put only the right code in the executable. Note that, unlike an #if, both branches must still be valid C on every target.

Solution 4

A compiler- and platform-neutral solution would be this:

// C
#include <stdint.h>

// C++
#include <cstdint>

#if INTPTR_MAX == INT64_MAX
// 64-bit
#elif INTPTR_MAX == INT32_MAX
// 32-bit
#else
#error Unknown pointer size or missing size macros!
#endif

Avoid macros that start with one or more underscores. They are not standard and might be missing on your compiler/platform.

Solution 5

Use a compiler-specific macro.

I don't know what architecture you are targeting, but since you don't specify it, I will assume run-of-the-mill Intel machines, so most likely you are interested in testing for Intel x86 and AMD64.

For example:

#if defined(__i386__)
// IA-32
#elif defined(__x86_64__)
// AMD64
#else
# error Unsupported architecture
#endif

However, I prefer putting these in a separate header and defining my own compiler-neutral macros.

Author: Daniel, Software & Web Developer

Updated on July 24, 2022

Comments

  • Daniel
    Daniel almost 2 years

    Is there a C macro, or some other way, to check at compile time whether my C program was compiled as 64-bit or 32-bit?

    Compiler: GCC. Operating systems I need the checks on: Unix/Linux.

    Also, how can I check at runtime whether the OS is 64-bit capable?

  • Gunther Piez
    Gunther Piez about 13 years
    Another macro to test is `__LP64__`, which will also work on non-x86-64 architectures.
  • Paul R
    Paul R about 13 years
    +1 for __LP64__, but note this will not work for some of the more obscure 64 bit architectures which do not use the LP64 model.
  • Alex B
    Alex B about 13 years
    If you need to use inline assembly, you have to use architecture-specific macros.
  • R.. GitHub STOP HELPING ICE
    R.. GitHub STOP HELPING ICE about 13 years
    Testing any macro beginning with _[A-Z] or __ is almost surely the wrong answer.
  • R.. GitHub STOP HELPING ICE
    R.. GitHub STOP HELPING ICE about 13 years
    Use a standard macro (see my answer), not a compiler-specific one.
  • R.. GitHub STOP HELPING ICE
    R.. GitHub STOP HELPING ICE about 13 years
    If you need to use inline assembly, knowing the number of bits is not helpful. You need to know the name of the arch and adjust your build system/macros/etc. accordingly.
  • Alex B
    Alex B about 13 years
    @R.. Yes, I know of that one, and it breaks with C++ code, so I usually stick with compiler-specific ones.
  • Alex B
    Alex B about 13 years
    I know this question is for C, but since it's mixed with (or included from) C++ a lot of the time, so here is a C++ caveat: C99 requires that to get limit macros defined in C++, you have to have __STDC_LIMIT_MACROS defined before you include the header. As it may have been already included, the only way to ensure the correct definition is to force the client to always include it as a first header in the source file, or add -D__STDC_LIMIT_MACROS to your compile options for all files.
  • R.. GitHub STOP HELPING ICE
    R.. GitHub STOP HELPING ICE about 13 years
    Then use ULONG_MAX instead of UINTPTR_MAX. On any real-world unixy system they'll be the same. It's surely a lot more portable to assume long and pointers are the same size than to assume some particular compiler's macros are present.
  • Alex B
    Alex B about 13 years
    @R.. And it's still wrong on 64-bit Windows. I prefer that my code fails to compile, rather than silently compile the wrong thing.
  • Steve Jessop
    Steve Jessop almost 13 years
    Portability is theoretically limited by the fact that uintptr_t is an optional type. I suspect it would be perverse though for a 64 bit implementation to omit it, since unsigned long long is a big enough integer type.
  • Adam Rosenfield
    Adam Rosenfield almost 13 years
    @R..: No, it's almost surely the right answer. The macros beginning with _[A-Z] or __ are reserved by the implementation (i.e. the compiler/preprocessor), which means you can't define them yourself, but you can certainly test their existence to query the implementation.
  • R.. GitHub STOP HELPING ICE
    R.. GitHub STOP HELPING ICE almost 13 years
    My view is that a system that omits uintptr_t probably has very good reason for doing so (a very pathological or at least atypical memory model, for instance) and that any assumptions made on the basis that this is "a 32-bit system" or "a 64-bit system" would be invalid on such an implementation. As such, the "wtf" case in my answer should probably either contain #error or else hyper-portable code that's completely agnostic to traditional assumptions about memory models, type sizes, etc.
  • R.. GitHub STOP HELPING ICE
    R.. GitHub STOP HELPING ICE almost 13 years
    @Adam: And the result will only be meaningful on some implementations. If you instead test a standard macro like UINTPTR_MAX, it's reliable across all implementations. (Hint: A valid implementation could happily predefine __LP64__ on 32-bit machines, or as an even more extreme example, it could treat all macro names beginning with __ as defined unless they're explicitly undefined.)
  • Anomie
    Anomie almost 13 years
    @R..: OTOH, the C99 standard guarantees that uintptr_t is large enough to hold a pointer, but it doesn't guarantee that it is not larger than needed. An implementation could use a 64-bit uintptr_t even though all pointers are 32 bits. Or, for that matter, since uintptr_t is optional in C99 your "standard" macro may not be defined anyway.
  • R.. GitHub STOP HELPING ICE
    R.. GitHub STOP HELPING ICE almost 13 years
    See the comments on my answer for a discussion of that issue.
  • Anomie
    Anomie almost 13 years
    @R..: I see nothing there about the possibility that the size of uintptr_t is larger than the size of any actual pointer.
  • R.. GitHub STOP HELPING ICE
    R.. GitHub STOP HELPING ICE almost 13 years
    I'm not sure why it would matter. In any case, the question of whether a system "is 64-bit" is rather ambiguous. Do you want to know if you have fast 64-bit arithmetic? If you have a large virtual address space? Or what? To answer these individual questions, there are various stdint.h types whose limits you could test.
  • Ciro Santilli OurBigBook.com
    Ciro Santilli OurBigBook.com almost 11 years
    Where are those documented in the cpp docs? I tried gcc.gnu.org/onlinedocs/cpp/Predefined-Macros.html but it explicitly says there that system specific defines will not be documented there... where are they then?
  • Kenyakorn Ketsombut
    Kenyakorn Ketsombut almost 10 years
    This doesn't work on Linux PAE kernels. Kernels with PAE activated are 32-bit but can address RAM like a 64-bit system. This code determines the architecture by checking the maximum addressable RAM. A 32-bit PAE kernel machine would be seen as 64-bit with this, so the inserted source code (possibly some inline assembler instruction) would not work.
  • R.. GitHub STOP HELPING ICE
    R.. GitHub STOP HELPING ICE almost 10 years
    @KenyakornKetsombut: No they cannot. PAE has nothing to do with the size of the address space. It's merely an extension for the kernel to access more physical memory, but the virtual address space is always, inherently, permanently 32-bit on a 32-bit system.
  • Tomasz Gandor
    Tomasz Gandor almost 10 years
    It usually is true, but please, please stop making assertions like "... so an optimizing compiler will ...". Preprocessor is preprocessor, and often the code following "else" will not compile when the condition is true.
  • Jarosław Bielawski
    Jarosław Bielawski almost 10 years
    I don't see what the preprocessor has to do with anything? The OP asked for a method to detect the mem model used (64 or 32 bit), he didn't ask for a preprocessor solution. Nobody asked for a way to replace conditional compilation. Of course my solution requires that both branches are syntactically correct. The compiler will compile them always. If the compiler is optimizing it will remove the generated code, but even if it doesn't there's no problem with that. Care to elaborate what you mean?
  • Tomasz Gandor
    Tomasz Gandor almost 10 years
    OK, you're right. The exact wording was "a C macro or some kind of way". I didn't notice the "some kind of way" at first.
  • R.. GitHub STOP HELPING ICE
    R.. GitHub STOP HELPING ICE over 9 years
    @LưuVĩnhPhúc: In what sense is x32 "64-bit"? If having N-bit registers available when they're needed makes an implementation N-bit, why isn't i686 "128-bit"? After all you have 128-bit SSE registers. For most purposes, "N-bit" means "address space is an N-bit space". If you have another purpose in mind you need to clarify what it is; from this perspective, x32 is 32-bit.
  • phuclv
    phuclv over 9 years
    from my perspective any architectures that can do 64-bit arithmetics natively is a 64-bit architecture. And there are several architectures with only 24-bit address bus but still called "32-bit" because their registers are 32 bits. The same to 8-bit MCUs, although their address buses are often 14 to 16 bits or more
  • R.. GitHub STOP HELPING ICE
    R.. GitHub STOP HELPING ICE over 9 years
    @LưuVĩnhPhúc: "Natively" is not an observable aspect of a C implementation. Whether arithmetic takes place as one instruction or in some other form in the machine code is not observable. In any case I don't see anyone calling i686 a 128-bit architecture, which would be the obvious consequence of your criterion...
  • phuclv
    phuclv over 9 years
    @R.. i686 can't do 128-bit arithmetics, only 128-bit SSE registers, so no one calls it a 128-bit architecture anyway.
  • phuclv
    phuclv over 9 years
    and you can't call the above machines 14, 16 or 24-bit right?
  • R.. GitHub STOP HELPING ICE
    R.. GitHub STOP HELPING ICE over 9 years
    While I agree there's a range of ways you could go about making the classification (and the whole classification is rather stupid except in the context of concepts like ILP32 model/LP64 model/etc.), I would not go by the number of wired bits on the physical address bus but rather the logical (or virtual, on archs with MMU) address space. If pointers take 32 bits of storage and addressing instructions use 32-bit registers for addresses, I would call that 32-bit even if 8 of the pins go nowhere on the metal.
  • phuclv
    phuclv over 9 years
    On 8-bit architectures, 16-bit pointers are still stored as 16 bits, not 8. And classifying by GPR size is probably more common like Pascal Cuoq in his answer “64-bit machine” is an ambiguous term but usually means that the processor's General-Purpose Registers are 64-bit wide
  • R.. GitHub STOP HELPING ICE
    R.. GitHub STOP HELPING ICE over 9 years
    @LưuVĩnhPhúc: Note that he said 64-bit machine. That's a concept that has nothing to do with the C implementation or the compilation environment. C code compiled on a 32-bit implementation with a target like x86 or arm or mips-o32 could run on a 64-bit machine (like x86_64 or aarch64 or mips64, respectively). But this whole conversation is rather pointless. If you want to use your definition of 64-bit, nobody is stopping you, but it's not useful from a standpoint of C.
  • deltamind106
    deltamind106 almost 9 years
    @R..: Meh, often the number of bits is good enough. Especially if you know your app is destined exclusively for x86 hardware, then knowing whether the compiler is 32 or 64 bit is often all you need to code the correct assembly source.
  • R.. GitHub STOP HELPING ICE
    R.. GitHub STOP HELPING ICE almost 9 years
    @deltamind106: Are you really still producing x86-only products in 2015? How long do you expect that line of business to be around? :-)
  • Shelby Moore III
    Shelby Moore III over 8 years
    I downvoted this answer because it assures 64-bit pointers (thus probably address space), but it doesn't assure an int is 64-bit. Many cases of testing for 64-bit are to ensure that 64-bit integer arithmetic is fast because it is not emulated. For example Emscripten might provide 64-bit pointers but it emulates 64-bit integer arithmetic because the Javascript output target doesn't support 64-bit integers.
  • R.. GitHub STOP HELPING ICE
    R.. GitHub STOP HELPING ICE over 8 years
    @ShelbyMooreIII: I accept your reasoning but the question is not clear on what "64-bit" even means. In the absence of a specific definition I generally assume address space size because it's the only thing that affects what your program can do and not just performance.
  • Jarosław Bielawski
    Jarosław Bielawski over 8 years
    Except for classic Crays and the defunct HAL nobody uses ILP64 (SILP64 even for Cray). So trying to find out if int arithmetic is 64-bit has not much practical value.
  • Shelby Moore III
    Shelby Moore III over 8 years
    @PatrickSchlüter you are correct that 32-bit int does not guarantee that uint64_t arithmetic is emulated with 32-bit arithmetic. I will correct my answer.
  • Shelby Moore III
    Shelby Moore III over 8 years
    I am downvoting. See my answer for the reason. Generally speaking none of these scenarios can be relied upon to give any reliable indication of whether a 64-bit address space and non-emulated 64-bit arithmetic is available, thus they are basically useless except in the context of a build system that is not agnostic. Thus it is preferred to set build macros so that the build system can select which variant is compiled.
  • Shelby Moore III
    Shelby Moore III over 8 years
    I am upvoting. My answer goes into more detail as to why your answer is correct.
  • Shelby Moore III
    Shelby Moore III over 8 years
    But another issue is that 64-bit pointers don't even guarantee a 64-bit address space. See Anomie's comment and my answer for an example. Thus the correct answers are, "Do not detect 64-bit from the preprocessor and instead use the build system to define a macro".
  • Damon
    Damon about 8 years
    @ShelbyMooreIII: Ummmmm... excuse me? The distinction of a 32-bit vs 64-bit target has absolutely nothing to do with the size of int (indeed, its size differs e.g. in LP64 as used in Linux/BSD vs. LLP64 as used in Windows, while both are very clearly 64-bit). It also has nothing to do with how fast a compiler might optimize a particular operation (or how fast Javascript performs).
  • Shelby Moore III
    Shelby Moore III about 8 years
    @Damon true, but obviously that is irrelevant to the point I made. Try reading again. The question didn't specify a 64-bit address space. It asks whether the program will be compiled at 64-bit, which is a general question. You are presuming the question meant what you want it to mean, but I read the question literally. Your "ummmmm..." drama :rolleyes:
  • Peter Cordes
    Peter Cordes about 6 years
    Doesn't detect ILP32 ABIs on 64-bit architectures, e.g. the Linux x32 ABI or the AArch64 ILP32 ABI. That's 32-bit pointers in 64-bit mode. So long long is still efficient on those targets, unlike on 32-bit CPUs where 64-bit integers take 2 instructions per operation, and 2 registers.
  • jww
    jww over 5 years
    @Shelby - "non-emulated 64-bit arithmetic is available" - That's the important one for me. We have two specific implementations, each optimized for a specific platform, and we need to know which one to use.
  • Jackie Yeh
    Jackie Yeh about 4 years
    This is actually the best practice!
  • Jimmio92
    Jimmio92 over 2 years
    Just thought it was a good idea to mention... it took Microsoft 11 years to add stdint.h to its c99 support. If _MSC_VER is less than 1600, it doesn't exist. (Granted it's old, but it may still be encountered)