long long int vs. long int vs. int64_t in C++


Solution 1

You don't need to go to 64-bit to see something like this. Consider int32_t on common 32-bit platforms. It might be typedef'ed as int or as a long, but obviously only one of the two at a time. int and long are of course distinct types.

It's not hard to see that there is no workaround which makes int == int32_t == long on 32-bit systems. For the same reason, there's no way to make long == int64_t == long long on 64-bit systems.
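
To make this concrete, here is a quick C++11 sketch; the comments assume a 64-bit GCC/Linux (LP64) build, where glibc picks long for int64_t:

#include <cstdint>
#include <type_traits>

// On a 64-bit GCC/Linux (LP64) build:
static_assert(std::is_same<int64_t, long>::value, "int64_t aliases long here");
static_assert(!std::is_same<long, long long>::value, "long and long long are always distinct types");
static_assert(!std::is_same<int64_t, long long>::value, "so int64_t cannot also be long long");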

If you could, the possible consequences would be rather painful for code that overloaded foo(int), foo(long) and foo(long long) - suddenly they'd have two definitions for the same overload?!

The correct solution is that your template code usually should not be relying on a precise type, but on the properties of that type. The whole same_type logic could still be OK for specific cases:

long foo(long x);
// Boost-era (pre-C++11) spelling of the idea: drop this overload when int64_t is exactly long
boost::disable_if<boost::is_same<int64_t, long>, int64_t>::type foo(int64_t);

I.e., the overload foo(int64_t) is not defined when it's exactly the same as foo(long).

[edit] With C++11, we now have a standard way to write this:

long foo(long x);
std::enable_if<!std::is_same<int64_t, long>::value, int64_t>::type foo(int64_t);

[edit] Or C++20

long foo(long x);
int64_t foo(int64_t) requires (!std::is_same_v<int64_t, long>);
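
One caveat, as far as I can tell: written as plain non-template declarations like the above, the disable_if/enable_if form produces a hard error rather than quietly dropping the overload when the condition goes the wrong way (SFINAE only applies during template substitution), and a requires-clause is only allowed on a templated function in the first place. A minimal compilable C++20 sketch of the same idea with a constrained function template (the names and the exact constraint are just illustrative):

#include <cstdint>
#include <iostream>
#include <type_traits>

long foo(long x) { return x; }          // always present

// The int64_t overload is a constrained template, so it silently drops out
// of the overload set on targets where int64_t is exactly long.
template <typename T>
    requires (std::is_same_v<T, int64_t> && !std::is_same_v<int64_t, long>)
T foo(T x) { return x; }

int main()
{
    std::cout << foo(42L) << std::endl;        // always calls foo(long)
    std::cout << foo(int64_t{7}) << std::endl; // the template on LLP64/32-bit, foo(long) on LP64
    return 0;
}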

Solution 2

Do you want to know if a type is the same type as int64_t or do you want to know if something is 64 bits? Based on your proposed solution, I think you're asking about the latter. In that case, I would do something like

#include <climits>   // for CHAR_BIT

template<typename T>
bool is_64bits() { return sizeof(T) * CHAR_BIT == 64; } // or >= 64
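
If the goal is to feed that size check into the kind of trait machinery from the question, the same test can be packaged as a type trait instead of a runtime function. A small sketch (the name is_64bit_integer is just illustrative):

#include <climits>
#include <type_traits>

// True for any integral type occupying exactly 64 bits, regardless of whether
// it is spelled long, long long, or int64_t on the current platform.
template <typename T>
struct is_64bit_integer
    : std::integral_constant<bool,
          std::is_integral<T>::value && sizeof(T) * CHAR_BIT == 64> { };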

Solution 3

So my question is: Is there a way to tell the compiler that a long long int is also an int64_t, just like long int is?

This is a good question, but I suspect the answer is NO.

Also, a long int may not be a long long int.


# if __WORDSIZE == 64
typedef long int  int64_t;
# else
__extension__
typedef long long int  int64_t;
# endif

I believe this is from glibc's stdint.h. I suspect you want to go deeper.

In a 32-bit compile with GCC (and with both 32- and 64-bit MSVC), the output of the program will be:

int:           0
int64_t:       1
long int:      0
long long int: 1

32-bit Linux uses the ILP32 data model. Integers, longs and pointers are 32-bit. The 64-bit type is a long long.

Microsoft documents the ranges at Data Type Ranges. They say long long is equivalent to __int64.

However, the program resulting from a 64-bit GCC compile will output:

int:           0
int64_t:       1
long int:      1
long long int: 0

64-bit Linux uses the LP64 data model. Longs are 64-bit and long longs are 64-bit. As with 32-bit, Microsoft documents the ranges at Data Type Ranges, and long long is still __int64.
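
A quick way to check which data model a given toolchain uses is simply to print the sizes; a minimal sketch:

#include <cstdio>

int main()
{
    // ILP32: 4/4/4/4, LP64 (64-bit Linux): 4/8/8/8, LLP64 (64-bit Windows): 4/4/8/8
    std::printf("int: %zu, long: %zu, long long: %zu, void*: %zu\n",
                sizeof(int), sizeof(long), sizeof(long long), sizeof(void*));
    return 0;
}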

There's an ILP64 data model where everything is 64-bit. You have to do some extra work to get a definition for your word32 type. Also see papers like 64-Bit Programming Models: Why LP64?


But this is horribly hackish and does not scale well (actual functions of substance, uint64_t, etc)...

Yeah, it gets even better. GCC mixes and matches declarations that are supposed to take 64-bit types, so it's easy to get into trouble even when you follow a particular data model. For example, the following causes a compile error and tells you to use -fpermissive:

#include <immintrin.h>   // declares _rdrand64_step

#if __LP64__
typedef unsigned long word64;
#else
typedef unsigned long long word64;
#endif
// intel definition of rdrand64_step (http://software.intel.com/en-us/node/523864)
// extern int _rdrand64_step(unsigned __int64 *random_val);
// Try it:
word64 val;
int res = _rdrand64_step(&val);

It results in:

error: invalid conversion from `word64* {aka long unsigned int*}' to `long long unsigned int*'

So, ignore LP64 and change it to:

typedef unsigned long long word64;

Then, wander over to a 64-bit ARM IoT gadget that defines LP64 and use NEON:

error: invalid conversion from `word64* {aka long long unsigned int*}' to `uint64_t*'
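
One way out of this particular trap (not from the original answer, just a sketch) is to hand the intrinsic a variable of exactly the type its prototype names, and copy into the project's word64 afterwards. This assumes GCC's <immintrin.h> prototype of _rdrand64_step taking unsigned long long*, and next_random is a made-up helper name:

#include <immintrin.h>   // _rdrand64_step; build with -mrdrnd on x86-64

typedef unsigned long long word64;   // whichever underlying type the project picked no longer matters at the call site

bool next_random(word64& out)
{
    unsigned long long tmp = 0;   // exactly what the prototype expects
    if (!_rdrand64_step(&tmp))    // no pointer-type mismatch, no -fpermissive
        return false;
    out = tmp;                    // value-preserving copy into the project's word64
    return true;
}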


Comments

  • Travis Gockel
    Travis Gockel about 2 years

    I experienced some odd behavior while using C++ type traits and have narrowed my problem down to this quirky little problem for which I will give a ton of explanation since I do not want to leave anything open for misinterpretation.

    Say you have a program like so:

    #include <iostream>
    #include <cstdint>
    template <typename T>
    bool is_int64() { return false; }
    template <>
    bool is_int64<int64_t>() { return true; }
    int main()
    {
     std::cout << "int:\t" << is_int64<int>() << std::endl;
     std::cout << "int64_t:\t" << is_int64<int64_t>() << std::endl;
     std::cout << "long int:\t" << is_int64<long int>() << std::endl;
     std::cout << "long long int:\t" << is_int64<long long int>() << std::endl;
     return 0;
    }
    

    In a 32-bit compile with GCC (and with both 32- and 64-bit MSVC), the output of the program will be:

    int:           0
    int64_t:       1
    long int:      0
    long long int: 1
    

    However, the program resulting from a 64-bit GCC compile will output:

    int:           0
    int64_t:       1
    long int:      1
    long long int: 0
    

    This is curious, since long long int is a signed 64-bit integer and is, for all intents and purposes, identical to the long int and int64_t types, so logically, int64_t, long int and long long int would be equivalent types - the assembly generated when using these types is identical. One look at stdint.h tells me why:

    # if __WORDSIZE == 64
    typedef long int  int64_t;
    # else
    __extension__
    typedef long long int  int64_t;
    # endif
    

    In a 64-bit compile, int64_t is long int, not a long long int (obviously).

    The fix for this situation is pretty easy:

    #if defined(__GNUC__) && (__WORDSIZE == 64)
    template <>
    bool is_int64<long long int>() { return true; }
    #endif
    

    But this is horribly hackish and does not scale well (actual functions of substance, uint64_t, etc). So my question is: Is there a way to tell the compiler that a long long int is also an int64_t, just like long int is?


    My initial thoughts are that this is not possible, due to the way C/C++ type definitions work. There is not a way to specify type equivalence of the basic data types to the compiler, since that is the compiler's job (and allowing that could break a lot of things) and typedef only goes one way.

    I'm also not too concerned with getting an answer here, since this is a super-duper edge case that I do not suspect anyone will ever care about when the examples are not horribly contrived (does that mean this should be community wiki?).


    Append: The reason why I'm using explicit template specialization instead of an easier example like:

    void go(int64_t) { }
    int main()
    {
        long long int x = 2;
        go(x);
        return 0;
    }
    

    is that said example will still compile, since long long int is implicitly convertible to an int64_t.


    Append: The only answer so far assumes that I want to know if a type is 64-bits. I did not want to mislead people into thinking that I care about that and probably should have provided more examples of where this problem manifests itself.

    template <typename T>
    struct some_type_trait : boost::false_type { };
    template <>
    struct some_type_trait<int64_t> : boost::true_type { };
    

    In this example, some_type_trait<long int> will be a boost::true_type, but some_type_trait<long long int> will not be. While this makes sense in C++'s idea of types, it is not desirable.

    Another example is using a qualifier like same_type (which is pretty common to use in C++0x Concepts):

    template <typename T>
    void same_type(T, T) { }
    void foo()
    {
        long int x;
        long long int y;
        same_type(x, y);
    }
    

    That example fails to compile, since C++ (correctly) sees that the types are different. g++ fails with an error like: no matching function for call to 'same_type(long int&, long long int&)'.

    I would like to stress that I understand why this is happening, but I am looking for a workaround that does not force me to repeat code all over the place.

    • Irfy
      Irfy almost 7 years
      One important statement is missing from the answers/comments, which helped me when this quirk hit me: Never use fixed-size types for reliably specializing templates. Always use basic types and cover all possible cases (even if you use fixed-size types to instantiate those templates). All possible cases means: if you need to instantiate with int16_t, then specialize with short and int and you'll be covered. (and with signed char if you're feeling adventurous)
  • casablanca
    casablanca almost 12 years
    Aren't you missing a return and a semicolon?
  • Travis Gockel
    Travis Gockel almost 12 years
    Not what I'm looking for at all. The example was provided to show a way for the error to manifest itself, not as an actual requirement.
  • Ben Voigt
    Ben Voigt almost 12 years
    Still, you should be using sizeof for this.
  • Travis Gockel
    Travis Gockel almost 12 years
    No, I shouldn't. template <typename T> struct has_trivial_destructor : boost::false_type { }; template <> struct has_trivial_destructor<int64_t> : boost::true_type { }; Now has_trivial_destructor<long long int> will erroneously be a boost::false_type. That is an example of this problem manifesting itself that has nothing to do with variable size.
  • Logan Capaldo
    Logan Capaldo almost 12 years
    long long int and long int are not the same type whether or not they happen to be the same size. The behavior is not erroneous. That's just C++.
  • dan04
    dan04 almost 12 years
    It's not a limitation of nominal typing. It's a limitation of meaningless nominal typing. In the old days, the de facto standard was short = 16 bits, long = 32 bits, and int = native size. In these days of 64-bit, int and long don't mean anything anymore.
  • Keith Thompson
    Keith Thompson about 11 years
    @dan04: They're no more or less meaningful than they ever were. short is at least 16 bits, int is at least 16 bits, and long is at least 32 bits, with (sloppy notation follows) short <= int <= long. The "old days" you refer to never existed; there have always been variations within the restrictions imposed by the language. The "All the world's an x86" fallacy is just as dangerous as the older "All the world's a VAX" fallacy.
  • Ax3l
    Ax3l about 4 years
    The sad news is that, e.g., on 64-bit MSVC 19 (2017), sizeof(long) and sizeof(int) are identical, but std::is_same<long, int>::value returns false. Same weirdness with AppleClang 9.1 on OS X High Sierra.
  • MSalters
    MSalters about 4 years
    @Ax3l: That's not weird. Virtually every compiler since ISO C 90 has at least one such pair.
  • Ax3l
    Ax3l about 4 years
    That's true, they are distinct types.
