If _UNICODE is defined, TCHAR is a wchar_t; otherwise it's a char.
Why? Before Microsoft released Windows NT, there was blind panic. There was a whole bunch of features that had to be stuffed into the OS/2 codebase. When it came to multi-lingual support, they decided to use a 16-bit Unicode encoding (UCS-2, later UTF-16) natively, but they needed some way to map the old Win16 ANSI character-set functions onto WIN32.
For all the C str... string functions there are wcs... equivalents that operate on wchar_t instead of char. For example, there's wcscpy, wcscat, wcschr, wcsstr, ...
Windows provides _tcs... versions (in <tchar.h>) that map onto str.../wcs... depending on whether _UNICODE is undefined/defined. For example, there's _tcscpy, _tcscat, _tcschr, _tcsstr, ...
The native WIN32 calls are all Unicode, but ANSI equivalents are provided alongside them. The Unicode versions end with W and the ANSI versions with A. For example, MessageBoxW and MessageBoxA, both exported from user32.dll (other A/W pairs live in whichever system DLL owns them). When UNICODE is defined (the Windows headers key off UNICODE; the C runtime keys off _UNICODE), a macro MessageBox maps onto MessageBoxW, otherwise MessageBoxA.
So, putting it all together: when the Unicode macros are defined, TCHAR maps onto wchar_t, MessageBox maps onto MessageBoxW and _tcscpy maps onto wcscpy. Otherwise they map onto char, MessageBoxA and strcpy.
Anyway, that's what's there and why. If you explicitly need the Unicode versions in an ANSI program, you can call them directly, and vice versa. But if you have some legacy char app that you want to port to Windows Unicode, you just use the TCHAR stuff, and defining or undefining _UNICODE switches it over in a portable way.
The same split shows up in std::string and std::wstring: one holds char, the other wchar_t (which is 16 bits on Windows).
Finally, since all that stuff back in 1993/4, ICU has been developed. Many modern projects, including Boost, use ICU for Unicode support.