What platforms have something other than 8-bit char?


Solution 1

char is also 16 bits on the Texas Instruments C54x DSPs, which turned up for example in OMAP2. There are other DSPs out there with 16- and 32-bit chars. I think I even heard about a 24-bit DSP, but I can't remember which, so maybe I imagined it.

Another consideration is that POSIX mandates CHAR_BIT == 8. So if you're using POSIX you can assume it. If someone later needs to port your code to a near-implementation of POSIX, that just so happens to have the functions you use but a different size char, that's their bad luck.

In general, though, I think it's almost always easier to work around the issue than to think about it. Just write CHAR_BIT wherever you would otherwise hard-code an 8. If you want an exact 8-bit type, use int8_t: your code will noisily fail to compile on implementations that don't provide one, instead of silently using a size you didn't expect. At the very least, if I hit a case where I had a good reason to assume 8-bit chars, I'd assert it.
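
For illustration, here is a minimal sketch of that "assert it" approach in C (the xor_checksum function is just an invented example; the #if/#error form echoes a suggestion from the comments below):

```c
#include <limits.h>
#include <stddef.h>
#include <stdint.h>

/* Fail loudly at compile time if this platform's bytes are not 8 bits.
   The preprocessor form works with any C compiler; C11 also offers
   _Static_assert(CHAR_BIT == 8, "8-bit bytes required"). */
#if CHAR_BIT != 8
#error "This code assumes 8-bit bytes"
#endif

/* uint8_t is an optional typedef: an implementation without an exact
   8-bit type simply doesn't define it, so code using it fails to
   compile instead of silently using a size you didn't expect. */
uint8_t xor_checksum(const uint8_t *buf, size_t len)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum ^= buf[i];
    return sum;
}
```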

Solution 2

When writing code, and thinking about cross-platform support (e.g. for general-use libraries), what sort of consideration is it worth giving to platforms with non-8-bit char?

It's not so much that it's "worth giving consideration" to something as it is playing by the rules. In C++, for example, the standard says all bytes will have "at least" 8 bits. If your code assumes that bytes have exactly 8 bits, you're violating the standard.

This may seem silly now -- "of course all bytes have 8 bits!", I hear you saying. But lots of very smart people have relied on assumptions that were not guarantees, and then everything broke. History is replete with such examples.

For instance, most early-90s developers assumed that a timing delay built from no-op instructions, taking a fixed number of CPU cycles, would take a fixed amount of wall-clock time, because most consumer CPUs were roughly equivalent in power. Unfortunately, computers got faster very quickly. This spawned the rise of boxes with "Turbo" buttons -- whose purpose, ironically, was to slow the computer down so that games using the time-delay technique could be played at a reasonable speed.


One commenter asked where in the standard it says that char must have at least 8 bits. It's in section 5.2.4.2.1 of the C standard (which the C++ standard incorporates by reference). This section defines CHAR_BIT, the number of bits in the smallest addressable entity, and gives it a minimum value of 8. It also says:

Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown, with the same sign.

So any number equal to 8 or higher is suitable for substitution by an implementation into CHAR_BIT.
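
If you're curious what your own implementation substituted, here is a tiny sketch that just prints the <limits.h> values (on a typical implementation with a signed 8-bit char this prints 8, -128, 127 and 255):

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* All of these macros come from <limits.h>. CHAR_BIT must be at
       least 8; CHAR_MIN and CHAR_MAX also reveal whether plain char
       is signed or unsigned on this implementation. */
    printf("CHAR_BIT  = %d\n", CHAR_BIT);
    printf("CHAR_MIN  = %d\n", CHAR_MIN);
    printf("CHAR_MAX  = %d\n", CHAR_MAX);
    printf("UCHAR_MAX = %u\n", (unsigned)UCHAR_MAX);
    return 0;
}
```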

Solution 3

Machines with 36-bit architectures typically have 9-bit bytes (four to a word). According to Wikipedia, machines with 36-bit architectures include:

  • Digital Equipment Corporation PDP-6/10
  • IBM 701/704/709/7090/7094
  • UNIVAC 1103/1103A/1105/1100/2200

Solution 4

A few of which I'm aware:

  • DEC PDP-10: variable, but most often 7-bit chars packed 5 per 36-bit word, or else 9-bit chars, 4 per word
  • Control Data mainframes (CDC-6400, 6500, 6600, 7600, Cyber 170, Cyber 176, etc.): 6-bit chars, packed 10 per 60-bit word
  • Unisys mainframes: 9 bits/byte
  • Windows CE: simply doesn't support the `char` type at all -- requires 16-bit wchar_t instead

Solution 5

There is no such thing as completely portable code. :-)

Yes, there may be various byte/char sizes. Yes, there may be C/C++ implementations for platforms with highly unusual values of CHAR_BIT and UCHAR_MAX. Yes, sometimes it is possible to write code that does not depend on char size.

However, almost no real code is standalone. For example, you may be writing code that sends binary messages over a network (the protocol doesn't matter). You may define structures that contain the necessary fields. Then you have to serialize them. Just binary-copying a structure into an output buffer is not portable: generally you know neither the byte order of the platform nor the alignment of the structure members, so the structure just holds the data but does not describe the way the data should be serialized.

OK. You may perform byte-order transformations and move the structure members (e.g. uint32_t or similar) into the buffer using memcpy. Why memcpy? Because there are a lot of platforms where it is not possible to write a 32-bit (or 16-bit, or 64-bit; it makes no difference) value when the target address is not properly aligned.
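
A minimal sketch of that field-by-field idea (put_u32_be and struct message are invented for illustration; writing through unsigned char with shifts fixes the byte order and sidesteps alignment in one step, since unsigned char stores are legal at any address):

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helper: store a 32-bit value in big-endian ("network")
   order, one octet per unsigned char, regardless of host byte order. */
static void put_u32_be(unsigned char *out, uint32_t v)
{
    out[0] = (unsigned char)((v >> 24) & 0xFFu);
    out[1] = (unsigned char)((v >> 16) & 0xFFu);
    out[2] = (unsigned char)((v >> 8)  & 0xFFu);
    out[3] = (unsigned char)( v        & 0xFFu);
}

/* Serialize field by field, never by copying the whole struct, whose
   padding and layout are implementation-defined. */
struct message { uint32_t id; uint32_t length; };

static size_t serialize_message(unsigned char *buf, const struct message *m)
{
    put_u32_be(buf + 0, m->id);
    put_u32_be(buf + 4, m->length);
    return 8;  /* octets written */
}
```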

So, you have already done a lot to achieve portability.

And now the final question. We have a buffer. The data from it is sent to a TCP/IP network. Such a network assumes 8-bit bytes. The question is: what type should the buffer be? What if your chars are 9-bit? Or 16-bit? 24? Maybe each char corresponds to one 8-bit byte sent to the network, and only 8 bits of it are used? Or maybe multiple network bytes are packed into 24/16/9-bit chars? That's a real question, and it is hard to believe there is a single answer that fits all cases; a lot depends on the socket implementation for the target platform.
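
As an illustration of the first convention (one 8-bit network byte per char, with only the low 8 bits used), a hedged sketch; store_octet and load_octet are invented names, and whether a real socket layer behaves this way is entirely platform-specific:

```c
#include <stddef.h>

/* Each unsigned char in the I/O buffer carries exactly one network
   octet in its low 8 bits; when CHAR_BIT > 8, the bits above bit 7
   are kept zero. The masking is a no-op on 8-bit-char platforms. */
static void store_octet(unsigned char *buf, size_t i, unsigned octet)
{
    buf[i] = (unsigned char)(octet & 0xFFu);
}

static unsigned load_octet(const unsigned char *buf, size_t i)
{
    return buf[i] & 0xFFu;
}
```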

So, here is what I am getting at. Usually code can be made portable to a certain extent relatively easily, and it's very important to do so if you expect the code to be used on different platforms. However, improving portability beyond that point takes a lot of effort and often gives little in return, because real code almost always depends on other code (the socket implementation in the example above). I am sure that for about 90% of code, the ability to work on platforms with bytes other than 8 bits is almost useless, because it depends on an environment that is bound to 8-bit bytes. Just check the byte size and perform a compile-time assertion. You will almost surely have to rewrite a lot for a highly unusual platform anyway.

But if your code is highly "standalone" -- why not? You may write it in a way that allows different byte sizes.


Comments

  • Exectron
    Exectron almost 3 years

    Every now and then, someone on SO points out that char (aka 'byte') isn't necessarily 8 bits.

    It seems that 8-bit char is almost universal. I would have thought that for mainstream platforms, it is necessary to have an 8-bit char to ensure its viability in the marketplace.

    Both now and historically, what platforms use a char that is not 8 bits, and why would they differ from the "normal" 8 bits?

    When writing code, and thinking about cross-platform support (e.g. for general-use libraries), what sort of consideration is it worth giving to platforms with non-8-bit char?

    In the past I've come across some Analog Devices DSPs for which char is 16 bits. DSPs are a bit of a niche architecture I suppose. (Then again, at the time hand-coded assembler easily beat what the available C compilers could do, so I didn't really get much experience with C on that platform.)

    • Thomas Matthews
      Thomas Matthews over 14 years
      The CDC Cyber series had a 6/12 bit encoding. The most popular characters were 6 bits. The remaining characters used 12 bits.
    • zebrabox
      zebrabox over 14 years
      I'm sure there are some platforms that have non 8-bit chars but in 15 years coding including working with custom hardware through to games consoles, I've never encountered one yet. Still time though....
    • user1703401
      user1703401 over 14 years
      The PDP-11 nailed it down. The notion that a character can be encoded in a char is seriously obsolete.
    • Windows programmer
      Windows programmer over 14 years
      "The PDP-11 nailed it down" -- You mean because C was first implemented for the PDP-11 with 8 bit bytes? But C was next implemented for Honeywell machines with 9 bit bytes. See K&R version 1. Also, the question asked about char (i.e. byte) not about character (one or more bytes encoding something that wasn't asked about).
    • David R Tribble
      David R Tribble over 14 years
      DEC-10 and DEC-20 had 36-bit words. Five 7-bit ASCII characters per word was quite common. Also six 6-bit characters were used.
    • Sam
      Sam about 13 years
      Honeyboxen with 9-bit bytes are a serious annoyance.
    • vsz
      vsz over 8 years
      I've seen compilers specifically designed for microcontrollers, where you could specify the size of char in the compiler options.
    • Exectron
      Exectron over 8 years
      @vsz: Can you say specifically which compilers for which microcontrollers?
    • vsz
      vsz over 8 years
      @CraigMcQueen : If I remember correctly, CodeVision for Atmel microcontrollers lets one choose the size of char
  • Exectron
    Exectron over 14 years
    True. The name char is a bit quaint now in Unicode days. I care more about 8-bit units (octets) when dealing with binary data, e.g. file storage, network communications. uint8_t is more useful.
  • Jerry Coffin
    Jerry Coffin over 14 years
    @ephemient:I'm pretty sure there was at least one (pre-standard) C compiler for the PDP-10/DecSystem 10/DecSystem 20. I'd be very surprised at a C compiler for the CDC mainframes though (they were used primarily for numeric work, so the Fortran compiler was the big thing there). I'm pretty sure the others do have C compilers.
  • Steve Jessop
    Steve Jessop over 14 years
    Did the Windows CE compiler really not support the char type at all? I know that the system libraries only supported the wide char versions of functions that take strings, and that at least some versions of WinCE removed the ANSI string functions like strlen, to stop you doing char string-handling. But did it really not have a char type at all? What was sizeof(TCHAR)? What type did malloc return? How was the Java byte type implemented?
  • Jerry Coffin
    Jerry Coffin over 14 years
    @Steve:Well, it's been a while since I wrote any code for CE, so I can't swear to it, but my recollection is that even attempting to define a char variable leads to a compiler error. Then again, that is depending on my memory, which means it isn't exactly certain.
  • Steve Jessop
    Steve Jessop over 14 years
    How strange. And certainly not C. I worked at a company with a multi-platform product that included at least two versions of WinCE, but I never interacted much with Windows code, and the portable code in the product (that is, most of the product) wasn't compiled with Microsoft's compiler.
  • myron-semack
    myron-semack over 14 years
    TI C62xx and C64xx DSPs also have 16-bit chars. (uint8_t isn't defined on that platform.)
  • Mark Ransom
    Mark Ransom over 14 years
    I haven't seen a Turbo button in at least 20 years - do you really think it's germane to the question?
  • John Feminella
    John Feminella over 14 years
    @Mark Ransom: That's the whole point. Developers often rely on assumptions which seem to be true at the moment, but which are much shakier than they initially appear. (Can't count the number of times I've made that mistake!) The Turbo button should be a painful reminder not to make unnecessary assumptions, and certainly not to make assumptions that aren't guaranteed by a language standard as if they were immutable facts.
  • Windows programmer
    Windows programmer over 14 years
    The question didn't ask about characters (whether Unicode or not). It asked about char, which is a byte.
  • Windows programmer
    Windows programmer over 14 years
    Windows CE supports char, which is a byte. See Craig McQueen's comment on Richard Pennington's answer. Bytes are needed just as much in Windows CE as everywhere else, no matter what sizes they are everywhere else.
  • Windows programmer
    Windows programmer over 14 years
    Also Honeywell machines, such as maybe the second machine where C was implemented. See K&R version 1.
  • Windows programmer
    Windows programmer over 14 years
    "Not sure about other languages though" -- historically, most languages allowed the machine's architecture to define its own byte size. Actually historically so did C, until the standard set a lower bound at 8.
  • ephemient
    ephemient over 14 years
    Huh, I thought C skipped over the PDP-10. But perhaps there was a port; all of this is before my time anyhow ;-)
  • Adam Badura
    Adam Badura over 14 years
    Could you point out the place in the C++ Standard which says that the byte has at least 8 bits? It is a common belief; however, I personally failed to find it in the Standard. The only thing I found in the Standard is which characters must be representable by char: there are more than 64 of them but fewer than 128, so 7 bits would be enough.
  • Admin
    Admin over 14 years
    Actually, the Dec-10 also had 6-bit characters - you could pack 6 of these into a 36-bit word (ex-Dec-10 programmer talking)
  • David R Tribble
    David R Tribble over 14 years
    The DEC-20 used five 7-bit ASCII characters per 36-bit word on the TOPS-20 O/S.
  • Windows programmer
    Windows programmer over 14 years
    Section 18.2.2 invokes the C standard for it. In the C standard it's section 7.10 and then section 5.2.4.2.1. Page 22 in the C standard.
  • AProgrammer
    AProgrammer over 14 years
    There are (were?) at least two implementations of C for the PDP-10: KCC and a port of gcc (pdp10.nocrew.org/gcc).
  • AProgrammer
    AProgrammer over 14 years
    As far as I remember, on the PDP-10, 7-bit ASCII packed 5 bytes to a word was the most common format for text files (dropping a bit, which when set was interpreted in some contexts as an indication that the word was a line number). The SIXBIT charset (a subset of ASCII, dropping the control and lowercase columns) was used for some things (for instance for names in object files) but not for text files, as there was no way to indicate the end of lines... 9-bit characters were not in common use, except perhaps to port C programs to the PDP-10.
  • ninjalj
    ninjalj almost 14 years
    Also, the execution character set has nothing to do with opcodes; it's the character set used at execution time. Think of cross-compilers.
  • Ken Bloom
    Ken Bloom almost 13 years
    The C standard would not allow 7-bit chars packed 5 per 36-bit word (as you mentioned for the PDP-10), nor would it allow 6-bit chars, as you mentioned for the Control Data mainframes. See parashift.com/c++-faq-lite/intrinsic-types.html#faq-26.6
  • Jerry Coffin
    Jerry Coffin almost 13 years
    @Ken: Quite true -- such implementations definitely would not conform with the standard (but most, if not all of them, were obsolete before the first C standard in any case).
  • Ken Bloom
    Ken Bloom almost 13 years
    @Jerry: BTW, I didn't mean to say you couldn't implement a C compiler on that hardware, just that you'd have to use different char sizes to do it.
  • vpalmu
    vpalmu over 12 years
    That joke was actually implemented for supporting Unicode on this architecture.
  • David Cary
    David Cary almost 12 years
    Many DSPs for audio processing are 24-bit machines: the BelaSigna DSPs from On Semi (after they bought AMI Semi); the DSP56K/Symphony Audio DSPs from Freescale (after they were spun off from Motorola).
  • bames53
    bames53 almost 12 years
    I imagine that the reason octal was ever actually used was because 3 octal digits neatly represent a 9-bit byte, just like we usually use hexadecimal today because two hexadecimal digits neatly represent an 8-bit byte.
  • me22
    me22 almost 11 years
    Unicode never needed a full 32 bits, actually. They originally planned for 31 (see the original UTF-8 work), but now they're content with only 21 bits. They probably realized they wouldn't be able to print the book any more if they actually needed all 31 bits :P
  • user3528438
    user3528438 about 9 years
    @msemack C64xx has hardware for 8/16/32/40, and 8bit char
  • myron-semack
    myron-semack about 9 years
    @user3528438 It did not at the time I posted that. Code Composer Studio 3.x, there was no uint8_t in stdint.h.
  • Lars Brinkhoff
    Lars Brinkhoff about 9 years
    The PDP-6/PDP-10/DEC-10/DEC-20 did not have just 6-bit bytes, or 7-bit bytes, or 8-bit bytes, or 9-bit bytes. It had an arbitrary byte size from 1 to 36 bits.
  • supercat
    supercat almost 9 years
    If one stores one octet per unsigned char value there should be no portability problems unless code uses aliasing tricks rather than shifts to convert sequences of octets to/from larger integer types. Personally, I think the C standard should define intrinsics to pack/unpack integers from sequences of shorter types (most typically char) storing a fixed guaranteed-available number of bits per item (8 per unsigned char, 16 per unsigned short, or 32 per unsigned long).
  • Keith Thompson
    Keith Thompson over 8 years
    Rather than assert() (if that's what you meant), I'd use #if CHAR_BIT != 8 ... #error "I require CHAR_BIT == 8" ... #endif
  • underscore_d
    underscore_d over 8 years
    ...what? Why do you think enum is likely to be smaller than other native types? Are you aware it defaults to the same storage as int? "you have some structure that needs 15 bits, so you stick it in an int, but on some other platform an int is 48 bits or whatever....." - so #include <cstdint> and make it an int16_t for the best chance of minimising bit usage. I'm really not sure what you thought you were saying among all those ellipses.
  • vpalmu
    vpalmu over 8 years
    There have been 9 bit UNIX boxes for ages. Lots of the old standards talk about 9 bit bytes, CHAR_BIT notwithstanding.
  • Shannon Severance
    Shannon Severance over 8 years
    @me22, Unicode originally planned for 16 bits. "Unicode characters are consistently 16 bits wide, regardless of language..." Unicode 1.0.0. unicode.org/versions/Unicode1.0.0/ch01.pdf.
  • Qix - MONICA WAS MISTREATED
    Qix - MONICA WAS MISTREATED over 7 years
    @KeithThompson Is there any reason not to use static_assert()?
  • Keith Thompson
    Keith Thompson over 7 years
    @Qix: Portability. IIRC static_assert was only added to the C standard in 2011.
  • Lars Brinkhoff
    Lars Brinkhoff over 7 years
    I know about four C compilers for the PDP-10: C10, KCC, PCC, and GCC.
  • Jerry Jeremiah
    Jerry Jeremiah over 6 years
    So other answers and comments mention machines with 5 bit, 6 bit and 7 bit bytes. Does that mean that you cannot run a C program on that machine that complies with the standard?
  • Lanting
    Lanting over 6 years
    Regarding TI's hardware: uint8_t is most definitely defined: github.com/energia/c2000-core/blob/master/cores/c2000/… (line 112) (typedef unsigned char uint8_t;) This workaround made a library I was using compile, but then it broke down at runtime :'(
  • Martin Bonner supports Monica
    Martin Bonner supports Monica about 6 years
    "Historically, the x86 platform's opcode was one byte long" : how sweet. Historically, C was developed on a PDP-11 (1972), long before x86 had been invented (1978).
  • Sergey.quixoticaxis.Ivanov
    Sergey.quixoticaxis.Ivanov about 6 years
    About assumptions and video games: even games made in 2000 are often 1) assuming the existence of cycles~time correlation and 2) assuming the code is executed by a single threaded CPU. Try running vanilla version of "Deus Ex" (without delay/render/affinity patches) on modern PC to see how things break.
  • Sergey.quixoticaxis.Ivanov
    Sergey.quixoticaxis.Ivanov about 6 years
    @JerryJeremiah from C++11 ISO [intro.memory]: A byte is at least large enough to contain any member of the basic execution character set and the eight-bit code units of the Unicode UTF-8 encoding form and is composed of a contiguous sequence of bits, the number of which is implementation defined. I don't know about C.
  • Ben Voigt
    Ben Voigt over 5 years
    @JerryJeremiah: You can run C on a machine whose hardware datum unit is less than 8 bits, but then a C "byte" will be multiple datum units. Your physical pointers will have a step size less than a byte, but the C program will never use that granularity. (And note there won't be any C data type for a sub-byte datum)
  • prosfilaes
    prosfilaes over 4 years
    ISO 10646 was originally 31 bits, and Unicode merged with ISO 10646, so it might be sloppy to say that Unicode was 31 bits, but it's not really untrue. Note they don't actually print the full code tables any more.
  • saagarjha
    saagarjha almost 4 years
    Saying that code that assumes that char is 8 bits is "violating the standard" isn't really accurate -- the standard does not mandate that your code be portable. Your code is just implementation-specific but still valid; perhaps (purposefully) "ignorant of the C/C++ language standards". (If POSIX is the standard you care about, this is not just valid but guaranteed).
  • Joe Z
    Joe Z over 2 years
    The C6000 DSP family always had CHAR_BIT = 8. I was at TI before C62x debuted, and worked on the product family (or closely adjacent families) throughout my career there. The uint8_t type didn't show up until C99, and so it wasn't part of the tool chain until they added C99 support. C62x debuted in 1997. I don't remember when we added C99 support but it wasn't 1999, I'm sure.