Is TCHAR still relevant?


Solution 1

I would still use the TCHAR syntax if I were starting a new project today. There's not much practical difference between using it and the WCHAR syntax, and I prefer code that is explicit about what the character type is. Since most API functions and helper objects take/use TCHAR types (e.g. CString), it just makes sense to use it. Plus it gives you flexibility if you decide to use the code in an ASCII app at some point, or if Windows ever evolves to Unicode32, etc.
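For readers who haven't seen it, this is roughly what the TCHAR style looks like in practice (a minimal sketch, assuming an MSVC/Win32 project; the same source builds as ANSI or Unicode depending on _UNICODE):

    #include <windows.h>
    #include <tchar.h>

    // _tmain and _T() resolve to wmain / L"..." in Unicode builds and to
    // main / "..." in ANSI builds, so one source tree serves both.
    int _tmain(int argc, TCHAR* argv[])
    {
        const TCHAR* message = _T("Hello, world");
        ::MessageBox(nullptr, message, _T("Greeting"), MB_OK);
        return 0;
    }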

If you decide to go the WCHAR route, I would be explicit about it. That is, use CStringW instead of CString, and use the casting macros when converting to TCHAR (e.g. CW2CT).
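A rough sketch of what "being explicit" could look like, assuming an ATL/MFC project (the window-caption scenario and names here are illustrative, not from the answer):

    #include <atlstr.h>    // CString / CStringW
    #include <atlconv.h>   // CW2CT and the other ATL conversion classes

    void SetCaption(HWND window)
    {
        // The data is explicitly UTF-16, regardless of the project's TCHAR setting.
        CStringW caption = L"Relatório de vendas";

        // CW2CT yields a temporary const TCHAR* view for the duration of the call,
        // so this line compiles in both ANSI and Unicode builds.
        ::SetWindowText(window, CW2CT(caption));
    }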

That's my opinion, anyway.

Solution 2

The short answer: NO.

As all the others have already written, a lot of programmers still use TCHARs and the corresponding functions. In my humble opinion the whole concept was a bad idea. UTF-16 string processing is a lot different from simple ASCII/MBCS string processing. If you use the same algorithms/functions with both of them (this is what the TCHAR idea is based on!), you get very bad performance on the UTF-16 version if you are doing a little bit more than simple string concatenation (like parsing, etc.). The main reason is surrogates.
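To illustrate why (a sketch of mine, not from the answer): code that walks a UTF-16 string one code unit at a time miscounts anything outside the Basic Multilingual Plane, so even "count the characters" needs surrogate handling.

    #include <cstddef>
    #include <string>

    // Counts code points in a UTF-16 string (assumes wchar_t is 16-bit, as on Windows).
    std::size_t CodePointCount(const std::wstring& text)
    {
        std::size_t count = 0;
        for (std::size_t i = 0; i < text.size(); ++i)
        {
            const wchar_t unit = text[i];
            // A high surrogate (0xD800..0xDBFF) pairs with the next code unit to
            // form a single code point, so skip the paired low surrogate.
            if (unit >= 0xD800 && unit <= 0xDBFF && i + 1 < text.size())
                ++i;
            ++count;
        }
        return count;
    }

For a string containing just U+1F600 (an emoji), text.size() reports 2 while CodePointCount reports 1.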

With the sole exception of cases where you really have to compile your application for a system which doesn't support Unicode, I see no reason to use this baggage from the past in a new application.

Solution 3

I have to agree with Sascha. The underlying premise of TCHAR / _T() / etc. is that you can write an "ANSI"-based application and then magically give it Unicode support by defining a macro. But this is based on several bad assumptions:

That you actively build both MBCS and Unicode versions of your software

Otherwise, you will slip up and use ordinary char* strings in many places.

That you don't use non-ASCII backslash escapes in _T("...") literals

Unless your "ANSI" encoding happens to be ISO-8859-1, the resulting char* and wchar_t* literals won't represent the same characters.
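A hypothetical literal showing the problem (the escape value is my example, not the answer's):

    #include <tchar.h>

    // In a Unicode build this is L"\xE9", i.e. the single code point U+00E9 (é).
    // In an "ANSI" build it is the single byte 0xE9, whose meaning depends on the
    // system code page: é under Windows-1252, but the Cyrillic й under Windows-1251.
    const TCHAR kAccented[] = _T("\xE9");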

That UTF-16 strings are used just like "ANSI" strings

They're not. Unicode introduces several concepts that don't exist in most legacy character encodings. Surrogates. Combining characters. Normalization. Conditional and language-sensitive casing rules.

And perhaps most importantly, UTF-16 is rarely saved on disk or sent over the Internet: UTF-8 tends to be preferred for external representation.

That your application doesn't use the Internet

(Now, this may be a valid assumption for your software, but...)

The web runs on UTF-8 and a plethora of rarer encodings. The TCHAR concept only recognizes two: "ANSI" (which can't be UTF-8) and "Unicode" (UTF-16). It may be useful for making your Windows API calls Unicode-aware, but it's damned useless for making your web and e-mail apps Unicode-aware.

That you use no non-Microsoft libraries

Nobody else uses TCHAR. Poco uses std::string and UTF-8. SQLite has UTF-8 and UTF-16 versions of its API, but no TCHAR. TCHAR isn't even in the standard library, so no std::tcout unless you want to define it yourself.
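If you do want such a stream, you end up writing something like this yourself (a sketch; the tcout name is mine, nothing standard):

    #include <iostream>
    #include <tchar.h>

    // The standard library has no TCHAR-generic console stream, so define one.
    #ifdef _UNICODE
        #define tcout std::wcout
    #else
        #define tcout std::cout
    #endif

    // Usage: tcout << _T("Hello") << std::endl;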

What I recommend instead of TCHAR

Forget that "ANSI" encodings exist, except for when you need to read a file that isn't valid UTF-8. Forget about TCHAR too. Always call the "W" version of Windows API functions. #define _UNICODE just to make sure you don't accidentally call an "A" function.

Always use UTF encodings for strings: UTF-8 for char strings and UTF-16 (on Windows) or UTF-32 (on Unix-like systems) for wchar_t strings. typedef UTF16 and UTF32 character types to avoid platform differences.
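The answer only names the UTF16 and UTF32 types; one possible way to define them (my assumption; on C++11 or later, char16_t/char32_t serve the same purpose) is:

    #include <cstdint>

    #if defined(_WIN32)
    typedef wchar_t        UTF16;   // wchar_t is 16 bits wide on Windows
    typedef std::uint32_t  UTF32;
    #else
    typedef std::uint16_t  UTF16;
    typedef wchar_t        UTF32;   // wchar_t is 32 bits wide on most Unix-like systems
    #endif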

Solution 4

If you're wondering whether it's still used in practice, then yes - it is still used quite a bit. No one will look at your code funny if it uses TCHAR and _T(""). The project I'm working on now is converting from ANSI to Unicode - and we're going the portable (TCHAR) route.

However...

My vote would be to forget all the ANSI/Unicode portable macros (TCHAR, _T(""), all the _tXXXXXX calls, etc.) and just assume Unicode everywhere. I really don't see the point of being portable if you'll never need an ANSI version. I would use all the wide-character functions and types directly. Prepend all string literals with an L.
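For example (a minimal sketch, assuming an MSVC wide entry point; not part of the answer):

    #include <stdio.h>
    #include <wchar.h>

    int wmain(void)
    {
        // Wide literal (L prefix) and wide CRT functions, no TCHAR mappings involved.
        const wchar_t* greeting = L"Olá, mundo";
        wprintf(L"%ls is %zu UTF-16 code units long\n", greeting, wcslen(greeting));
        return 0;
    }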

Solution 5

I would like to suggest a different approach (neither TCHAR nor raw wide strings everywhere).

To summarize, use char* and std::string, assuming UTF-8 encoding, and do the conversions to UTF-16 only when wrapping API functions.
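A minimal sketch of that pattern (the Widen helper name is mine, not from any library):

    #include <windows.h>
    #include <string>

    // Convert a UTF-8 std::string to UTF-16 at the Windows API boundary.
    std::wstring Widen(const std::string& utf8)
    {
        if (utf8.empty())
            return std::wstring();
        const int size = ::MultiByteToWideChar(CP_UTF8, 0, utf8.data(),
                                               static_cast<int>(utf8.size()), nullptr, 0);
        std::wstring wide(static_cast<size_t>(size), L'\0');
        ::MultiByteToWideChar(CP_UTF8, 0, utf8.data(),
                              static_cast<int>(utf8.size()), &wide[0], size);
        return wide;
    }

    void SetCaptionUtf8(HWND window, const std::string& utf8Title)
    {
        ::SetWindowTextW(window, Widen(utf8Title).c_str());   // explicit "W" call
    }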

More information and justification for this approach in Windows programs can be found at http://www.utf8everywhere.org.


Comments

  • Shane Larson
    Shane Larson about 4 years

    I'm new to Windows programming and after reading the Petzold book I wonder:

    Is it still good practice to use the TCHAR type and the _T() macro to declare strings, or should I just use wchar_t and L"" strings in new code?

    I will target only Windows 2000 and up, and my code will be internationalized (i18n) from the start.

  • Chris Walton
    Chris Walton about 14 years
    You might write some code you'll want to use somewhere else where you do need an ANSI version, or (as Nick said) Windows might move to DCHAR or whatever, so I still think it's a very good idea to go with TCHAR instead of WCHAR.
  • mhenry1384
    mhenry1384 almost 14 years
    WinCE uses 16-bit wchar_t strings just like Win32. We have a large base of code that runs on WinCE and Win32 and we never use TCHAR.
  • Cody Gray
    Cody Gray almost 13 years
    The CLR is a very different environment than unmanaged code. That is not an argument.
  • 0xC0000022L
    0xC0000022L almost 12 years
    Unicode has at least three current encodings (UTF-8, UTF-16, UTF-32) and one deprecated encoding (UCS-2, a subset of what is now UTF-16). Which one do you refer to? I like the rest of the suggestions though +1
  • 0xC0000022L
    0xC0000022L almost 12 years
    2012 calling: there are still applications to be maintained without #define _UNICODE even now. End of transmission :)
  • 0xC0000022L
    0xC0000022L almost 12 years
    Fun fact: UTF-16 was not always there on the NT platform. Surrogate code points were introduced with Unicode 2.0, in 1996, which was the same year NT 4 got released. Up to and including (IIRC) Windows 2000, all NT versions used UCS-2, effectively a subset of UTF-16 which assumed each character to be representable with a single 16-bit code unit (i.e. no surrogates).
  • 0xC0000022L
    0xC0000022L almost 12 years
    btw, while I agree that TCHAR shouldn't be used anymore, I disagree that this was a bad idea. I also think that if you choose to be explicit instead of using TCHAR you should be explicit everywhere. I.e. not use functions with TCHAR/_TCHAR (such as _tmain) in their declaration either. Simply put: be consistent. +1, still.
  • josesuero
    josesuero over 11 years
    @0xC0000022L the question was about new code. When you maintain old code, you obviously have to work with the environment that code is written for. If you're maintaining a COBOL application, then it doesn't matter if COBOL is a good language or not, you're stuck with it. And if you're maintaining an application which relies on TCHAR then it doesn't matter if that was a good decision or not, you're stuck with it.
  • dan04
    dan04 over 11 years
    I doubt that Windows will ever switch to UTF-32.
  • Pavel Radzivilovsky
    Pavel Radzivilovsky over 11 years
    -1 for the UTF-16 recommendation. Not only does this create non-portable (Windows-centric) code, which is unacceptable for libraries - even though it may be acceptable for the simplest of cases like UI code - it is not efficient even on Windows itself. utf8everywhere.org
  • Pavel Radzivilovsky
    Pavel Radzivilovsky over 11 years
    Indeed, TCHAR is not useful unless you're in COBOL :)
  • Pavel Radzivilovsky
    Pavel Radzivilovsky over 11 years
    Steven, you are quoting a text written by someone who does not understand the meaning of the word 'Unicode'. It is one of those unfortunate documents from the time of UCS-2 confusion.
  • Pavel Radzivilovsky
    Pavel Radzivilovsky over 11 years
    Even Microsoft makes mistakes.
  • IInspectable
    IInspectable over 11 years
    -1 The question is tagged C and C++. Answers can always be deleted by their respective authors. This would be a good time to use that provision.
  • Adrian McCarthy
    Adrian McCarthy over 10 years
    It was a good idea back when it was introduced, but it should be irrelevant in new code.
  • Medinoc
    Medinoc over 9 years
    Indeed, that's what will still work when the character encoding is eventually changed "again".
  • Caroline Beltran
    Caroline Beltran over 9 years
    @PavelRadzivilovsky, when implementing your suggestion in a VC++ application, would we set the VC++ character set to 'None' or 'Multibyte (MBCS)'? The reason I am asking is that I just installed Boost::Locale and the default character set was MBCS. FWIW, my pure ASCII application was set to 'None' and I have now set it to 'MBCS' (since I will be using Boost::Locale in it) and it works just fine. Please advise.
  • Pavel Radzivilovsky
    Pavel Radzivilovsky over 9 years
    As utf8everywhere recommends, I would set it to 'Use Unicode character set'. This adds extra safety, but is not required. Boost::Locale's author is a very smart guy; I am sure he did the right thing, though.
  • Deduplicator
    Deduplicator over 9 years
    You prefer code which is explicit in what the character type is, and thus use a type which is sometimes this and sometimes that? Very persuasive.
  • IInspectable
    IInspectable over 8 years
    You misrepresent what TCHARs were initially introduced for: to ease development of code for Win 9x and Windows NT based versions of Windows. At that time, Windows NT's UTF-16 implementation was UCS-2, and the algorithms for string parsing/manipulation were identical. There were no surrogates. And even with surrogates, algorithms for DBCS (the only supported MBCS encoding for Windows) and UTF-16 are the same: in either encoding, a code point consists of one or two code units.
  • Cheers and hth. - Alf
    Cheers and hth. - Alf almost 8 years
    −1 for the inconsistency noted by @Deduplicator, and for the negative payoff advice to use a macro that can be whatever (and will generally not be tested for more than one specific value).
  • IInspectable
    IInspectable almost 8 years
    _UNICODE controls how the generic-text mappings are resolved in the CRT. If you don't want to call the ANSI version of a Windows API, you need to define UNICODE.
  • Edward Falk
    Edward Falk almost 8 years
    Suppose I want to use FormatMessage() to convert a value from WSAGetLastError() to something printable. The documentation for WSAGetLastError() says it takes LPTSTR as the pointer to the buffer. I really don't have much choice but to use TCHAR, no?
  • IInspectable
    IInspectable over 7 years
    @EdwardFalk: WSAGetLastError doesn't take any arguments, so I'm assuming that you're referring to FormatMessage. As the documentation points out, there is a Unicode export, FormatMessageW, that takes an LPWSTR. No need to use the generic-text mappings. This is true for almost all Windows API calls that take string arguments.
  • IInspectable
    IInspectable over 7 years
    @PavelRadzivilovsky: The document was written for a system where Unicode and UTF-16LE are commonly used interchangeably. While technically inaccurate, it is unambiguous nonetheless. This is also explicitly pointed out in the introduction of the same text: "Windows represents Unicode characters using UTF-16 encoding [...]".
  • IInspectable
    IInspectable over 7 years
    The UTF-8 Everywhere mantra won't become the right solution just because it is repeated more often. UTF-8 is undoubtedly an attractive encoding for serialization (e.g. files, or network sockets), but on Windows it is frequently more appropriate to store character data using the native UTF-16 encoding internally, and convert at the application boundary. One reason is that UTF-16 is the only encoding that can be converted immediately to any other supported encoding. This is not the case with UTF-8.
  • Pavel Radzivilovsky
    Pavel Radzivilovsky over 7 years
    "..UTF-16 is the only encoding, that can be converted immediately to any other supported encoding." what do you mean? What's the problem to convert UTF-8 encoding to anything else?
  • IInspectable
    IInspectable over 7 years
    @PavelRadzivilovsky: "What's the problem to convert UTF-8 encoding to anything else?" - That's not what I said. You can immediately convert UTF-8 to UTF-16, calling MultiByteToWideChar. But you cannot convert from UTF-8 to anything else, without first converting to UTF-16.
  • IInspectable
    IInspectable over 7 years
    "UTF-8 is now the dominant encoding" - This turned wrong, by leaving out the second part of the quote ("for the World Wide Web"). For desktop applications, the most used native character encoding is likely still UTF-16. Windows uses it, Mac OS X does, too, and so do .NET's and Java's string types. That accounts for a massive amount of code out there. Don't get me wrong, there's nothing wrong with UTF-8 for serialization. But more often than not (especially on Windows), you'll find, that using UTF-16 internally is more appropriate.
  • Pavel Radzivilovsky
    Pavel Radzivilovsky over 7 years
    I do not understand. Convert to anything else - like what? E.g. UCS-4? Why not? It seems very easy; it's all just numeric algorithms.