Exact width integers in C++/Qt: Is typedef int qint32 really correct?

Solution 1

How can they guarantee the sizes?

They target specific platforms on which they know the above definitions work. Qt doesn't care about other platforms, and other platforms don't care about Qt. So it won't always work, but where it doesn't work, neither does Qt.

Why don't they use stdint (and __intN on Windows platforms)?

Sometimes it's simpler to maintain your own equivalents of the <stdint.h> typedefs than to conditionally include the standard header on platforms that have it while still maintaining a fallback for platforms that don't (e.g. VS2005).
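
For context, here is a rough sketch of the conditional machinery a project would otherwise have to maintain. The version cutoff and the assumption that only older MSVC lacks <stdint.h> are simplifications for illustration; this is not Qt's actual code:

    /* Hypothetical portability header: use the compiler's <stdint.h>
       where it exists, and fall back to MSVC's __intN types where it
       doesn't (MSVC gained <stdint.h> in VS2010, _MSC_VER 1600). */
    #if defined(_MSC_VER) && _MSC_VER < 1600
    typedef signed   __int8   int8_t;
    typedef unsigned __int8   uint8_t;
    typedef signed   __int16  int16_t;
    typedef unsigned __int16  uint16_t;
    typedef signed   __int32  int32_t;
    typedef unsigned __int32  uint32_t;
    typedef signed   __int64  int64_t;
    typedef unsigned __int64  uint64_t;
    #else
    #include <stdint.h>
    #endif

Maintaining one unconditional set of typedefs, as Qt does, avoids this kind of per-compiler branching entirely.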

Solution 2

Qt supports many platforms, but what matters here is each platform's 64-bit data model (or, more precisely, the data model of the compiler-plus-platform combination).

The most common 64-bit platforms keep int at 32 bits because they implement some form of the LP64 data model (LLP64 in the case of 64-bit Windows). The ILP64 family of data models (ILP64, SILP64, etc.) defines int as 64 bits, but I don't believe Qt supports any such platform.

Check out the ILP64 wiki page and this PVS-Studio support page, which has some good technical information.
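
If you want to check this assumption on your own target, a C++11 static_assert turns a data-model mismatch into a compile-time error rather than a silent bug. This is just an illustrative sketch, not part of Qt's API:

    #include <QtGlobal>

    /* Refuses to compile on any platform whose data model breaks
       the size guarantees Qt documents for these typedefs. */
    static_assert(sizeof(qint8)  == 1, "qint8 is not 8 bits");
    static_assert(sizeof(qint16) == 2, "qint16 is not 16 bits");
    static_assert(sizeof(qint32) == 4, "qint32 is not 32 bits");
    static_assert(sizeof(qint64) == 8, "qint64 is not 64 bits");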

Comments

  • leemes (almost 2 years ago)

    The Qt documentation says:

    typedef qint8

    Typedef for signed char. This type is guaranteed to be 8-bit on all platforms supported by Qt.

    typedef qint16

    Typedef for signed short. This type is guaranteed to be 16-bit on all platforms supported by Qt.

    typedef qint32

    Typedef for signed int. This type is guaranteed to be 32-bit on all platforms supported by Qt.

    typedef qint64

    Typedef for long long int (__int64 on Windows). This type is guaranteed to be 64-bit on all platforms supported by Qt.

    The types are defined in qglobal.h as follows:

    /*
       Size-dependent types (architecture-dependent byte order)
       Make sure to update QMetaType when changing these typedefs
    */
    typedef signed char qint8;         /* 8 bit signed */
    typedef unsigned char quint8;      /* 8 bit unsigned */
    typedef short qint16;              /* 16 bit signed */
    typedef unsigned short quint16;    /* 16 bit unsigned */
    typedef int qint32;                /* 32 bit signed */
    typedef unsigned int quint32;      /* 32 bit unsigned */
    //....
    

    But I'm wondering how (for example) qint32 can always be 32 bits long when there is no guarantee that int is 32 bits long. As far as I know, on 64-bit architectures ints are (or at least can be) 64 bits long. [EDIT: I was wrong. See comments below.]

    How can they guarantee the sizes? Why don't they use stdint (and __intN on Windows platforms)?
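
    For comparison, the <cstdint> route alluded to above would look roughly like this. This is a sketch of the alternative, not what qglobal.h actually contains:

    #include <cstdint>

    typedef std::int8_t   qint8;   /* 8 bit signed */
    typedef std::uint8_t  quint8;  /* 8 bit unsigned */
    typedef std::int16_t  qint16;  /* 16 bit signed */
    typedef std::uint16_t quint16; /* 16 bit unsigned */
    typedef std::int32_t  qint32;  /* 32 bit signed */
    typedef std::uint32_t quint32; /* 32 bit unsigned */
    typedef std::int64_t  qint64;  /* 64 bit signed */
    typedef std::uint64_t quint64; /* 64 bit unsigned */

    Note that the std::intN_t types are optional in the standard (an implementation provides them only if it has types with exactly those properties), which is one more reason a library that already knows its supported platforms can simply spell the types out.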