Usage of uint8, uint16, etc.
Solution 1
What is the use of / benefit in using a uint16 where a uint32 would also suffice (if there is any)?
If those uint16s are parts of arrays or structures, you can save memory and perhaps be able to handle larger data sets than with uint32s in those same arrays or structures. It really depends on your code.
Data protocols and file formats may use uint16s, and it may not be correct to use uint32s instead. This depends on the format and semantics (e.g. if you need values to wrap around from 65535 to 0, uint16 will do that automatically while uint32 won't).
OTOH, if those uint16s are just single local or global variables, replacing them with 32-bit ones might make no significant difference, because they are likely to occupy the same space due to alignment, and they are passed as 32-bit parameters (on the stack or in registers) on MIPS anyway.
Will there be any savings in memory usage in using shorter data types (considering data alignment)?
There may be savings, especially when uint16s are parts of many structures or elements of big arrays.
If it is to save a few bytes of memory, is it something sensible to do in modern hardware?
Yes, you lower the memory bandwidth (which is always a good thing) and you often lower various cache misses (data caches and TLB) when you operate on less data.
Solution 2
First of all, if you have types such as uint16 defined, where are they defined? They are not standard types, so they will be defined in some proprietary header - perhaps your own, or supplied by some third-party library. In that case you have to ask yourself how portable the code is, and whether you are creating a dependency that might not make sense in some other application.
Another problem is that so many libraries (ill-advisedly, IMO) define such types with various names - UINT16, uint16, U16, UI16, etc. - that it becomes somewhat of a nightmare ensuring type agreement and avoiding name clashes. If such names are defined, they should ideally be placed in a namespace or given a library-specific prefix to indicate what library they were defined for use with, for example rtos::uint16 or rtos_uint16.
Since the ISO C99 standard library provides standard bit-length-specific types in stdint.h, you should prefer their use over any defined in a proprietary or third-party header. These types have a _t suffix, e.g. uint16_t. In C++ they may be placed in the std:: namespace (though that is not a given, since the header was introduced in C99).
1] What is the use of / benefit in using a uint16 where a uint32 would also suffice (if there is any)?
Apart from my earlier advice to prefer stdint.h's uint16_t, there are at least two legitimate reasons to use length-specific types:
- To match a specific hardware register width.
- To enforce a common and compatible API across different architectures.
2] Will there be any savings in memory usage in using shorter data types (considering data alignment)?
Possibly, but if memory is not your problem, that is not a good reason to use them. It is worth considering perhaps for large data objects or arrays, but applying it globally is seldom worth the effort.
3] If it is to save a few bytes of memory, is it something sensible to do in modern hardware?
See [2]. "Modern hardware" however does not necessarily imply large resources; there are plenty of 32-bit ARM Cortex-M devices with only a few KB of RAM, for example. That is more about die space, cost and power consumption than it is about age of design or architecture.
Solution 3
cstdint has loads of typedefs for different purposes.
- intN_t for a specific width
- int_fastN_t for the fastest integer that has at least N bits
- int_leastN_t for the smallest integer that has at least N bits
- Their unsigned equivalents
You should choose depending on your circumstances. Storing thousands in a std::vector and not doing loads of computation? intN_t is probably your man. Need fast computation on a small number of integers? int_fastN_t is probably your guy.
Solution 4
One has to check the generated machine code/assembly to verify whether there is any saving. On RISC-type architectures the typical immediate is 16 bits wide, but a uint16_t will consume a full 32-bit register anyway - thus using plain int types while committing to values near zero will produce the same results while being more portable.
IMO saving memory is worthwhile on modern platforms too. Tighter code leads to e.g. better battery life and a more fluid UX. However, I'd suggest micro-managing the size only when working with (large) arrays, or when the variable maps to some real HW resource.
P.S. Compilers are smart, but the folks writing them are working at this very moment to make them even better.
Solution 5
Ans. 1. Software has certain requirements and specifications that strictly mandate taking only 8/16 bits for a parameter while encoding/decoding or in some other specific use. So even if you assign a value bigger than 255 into a u8, say, it trims the data automatically for you.
Ans. 2. We should not forget that our compilers are intelligent enough to do the optimisation, be it for memory or speed. So it is always recommended to use the smaller type when it suffices.
Ans. 3. Of course saving memory makes sense on modern h/w.
Comments
- NeonGlow about 4 years: Currently I am working with a code base (C and C++ mixed) targeted at a 32-bit MIPS platform. The processor is a fairly modern one [just to mention that we have a good amount of processing power and memory].
The code base uses data types like uint8 [1-byte-wide unsigned integer], uint16 [2-byte-wide unsigned integer], uint32 [4-byte-wide unsigned integer] and so on.
I know how the usage of these constructs is helpful while porting the code to different platforms.
My questions are:
What is the use of / benefit in using a uint16 where a uint32 would also suffice (if there is any)?
Will there be any savings in memory usage in using shorter data types (considering data alignment)?
If it is to save a few bytes of memory, is it something sensible to do in modern hardware?
- Some programmer dude about 11 years: It depends on what you do with the data. Is your application heavy in communication? If so, do you communicate using textual or binary protocols? Or are you writing to/from hardware registers? Also remember that even saving just a byte here and there all accumulates and can become quite a big saving when counted together.
- mjshaw about 11 years: Agree with @JoachimPileborg, it completely depends on the data. If you save 16 bits for every int by using uint16 instead of uint32, then you have halved your data packets. This, however, depends on the architecture. Try and see what happens.
- NeonGlow about 11 years: Thank you. When I use a uint16 (instead of uint32) in non-hardware code, say as the loop counter of a for loop, I assume the only benefit is saving memory. Please correct me if I am wrong here.
- Basile Starynkevitch about 11 years: It might even slow down your code. It is hardware dependent. But 16-bit arithmetic is not 32-bit arithmetic (so incrementing 32767 gives different results).
- NeonGlow about 11 years: Thanks. I was trying to differentiate the memory usage between uint16 and uint32, not between uint16 and its typedef.
- NeonGlow about 11 years: Got it. Thanks for the explanation.
- NeonGlow about 11 years: +1. Thanks for explaining this by mapping it to real-world situations.
- NeonGlow about 11 years: From the answer to the first question, won't the structs be unpacked by default? In that case will there be any memory saving?
- Alexey Frunze about 11 years: There can be, if the order of the structure members permits. If you interleave 4 uint16s with 4 uint32s instead of having 4 uint16s and then 4 uint32s (or the other way around), there isn't going to be any saving.
- Clifford about 11 years: I considered mentioning communication protocols (and file formats) in my answer but chose not to do so, since in that case byte order becomes an issue also, in which case you may end up dealing with 8-bit types regardless of the field widths.
- Lundin about 11 years: Also, some CPU architectures have variable-length instruction encoding, meaning that a smaller type can be handled with a smaller opcode. So it will not only occupy less data memory, but also less program memory.
- Alexey Frunze about 11 years: @Clifford True. However, you may have platform-specific protocols and formats, and one could use things like htons() and ntohs() if the alignment (or the order of structure members) permits direct manipulation of 16-bit integers.
- Alexey Frunze about 11 years: @Lundin True. 16-bit constants directly encoded in instructions will make the instructions shorter or fewer, as MIPS does not support 32-bit constants in instructions and needs several instructions using 16-bit halves to achieve the effect. It's worse if MIPS16e is used.
- supercat about 9 years: Unfortunately, the new types are all defined in terms of types whose behavior is defined in ways that seldom match application requirements. On ARM, for example, if a variable is held in a register, uint32_t may be faster than uint16_t while using no more memory, but if it's held in memory then a uint16_t may save memory without any cost in speed. Unfortunately, there's no type that means "the cheapest thing that holds numbers 0-65535".
- Clifford about 9 years: @supercat: I am not really sure what your point is or why it relates to my answer specifically. Moreover, "seldom match application requirements" - really? I would suggest that it is seldom that important. The usages I have suggested are legitimate and have nothing to do with such micro-optimisation; that is the compiler's job. However, stdint.h also defines least-N and fast-N types which potentially provide the finer control you seek.