Windows doesn't have much of a choice. Some of its API functions will attempt to prevent you from doing things like that, but it is possible to put any valid character in any part of the filename, extension or otherwise.
About what? The difference in reported sizes?
Of course in UNIX one can generally do whatever one likes, but escaping / quoting spaces or any other character that has special meaning to the shell in a file name is such a pain that no one does it - or they should get their ass kicked if they do.
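To illustrate the quoting pain: a minimal Python sketch (the filename here is made up) showing how a name with spaces has to be quoted before a POSIX shell would treat it as a single argument. `shlex.quote` does the wrapping, and `shlex.split` shows the round-trip back to one argument.

```python
import shlex

filename = "My Report (final).txt"

# shlex.quote wraps the name so a POSIX shell treats it as one token:
quoted = shlex.quote(filename)
print(quoted)  # 'My Report (final).txt'

# Round-trip: the shell-style lexer recovers the original single argument.
print(shlex.split("wc -l " + quoted))  # ['wc', '-l', 'My Report (final).txt']
```

When invoking programs from code, passing arguments as a list (no shell at all) sidesteps the whole problem - the quoting dance is only needed when a shell string is unavoidable.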
Such is the result of maintaining a massive, archaic system from when computers had extreme limitations.
I am sure you are talking about DOS with its 8.3 filename limitation. That was a self-imposed design choice by MS, not a limitation forced by the hardware. In comparison, UNIX / Linux could run on the same hardware and had no such limitation.
Although I agree, MS does suffer in various subtle ways sometimes because of the need for backwards compatibility.
I remember when, in 1989 at work, we had 3 networked "high end" Sun SPARCstations with a 33MHz chip, 8 MB of RAM & a 100 MB hard drive - I don't remember whether it was 16 or 32 bit. A typical PC at the time had an 8MHz AT chip (pre XT & 80286), 640 KB RAM, & only a 10 MB hard drive - running DOS 3.0.
On the SPARCstations, we ran a quite good CAD system, whereas the office PCs ran really basic word processing & Lotus 1-2-3 spreadsheets.
I have also heard of companies running their entire General Ledger on machines that only had 128 KB of RAM.
Anyway, that is enough of my history blog for today :)
@TheIdeasMan no, I mean all modern operating systems to this day are still held back because they can't move on from these old design decisions.
Why do paths have to be strings?
Why do spaces in paths and filenames screw up some software and not others?
Why is a file's type determined from its name?
Why should moving or renaming a file cause the link to its program to be broken?
Why do we have to combine multiple files into one file to download multiple files at once? Why can't you download a directory?
Why are there so many illegal characters for filenames?
Why do commandlines have to be strings?
While some things may have been practical at the time, their practicality has changed in recent years.
GNOME uses mime-type databases to identify files. The database is built from both file extensions ("globs") and magic numbers - the bytes a file of a given type starts with. For example, ELF executables start with the byte 0x7F followed by the string "ELF" (which reads as ".ELF" in a hex editor), and scripts usually start with "#!" followed by the path of the interpreter to use (e.g. "#!/bin/bash"). Windows, on the other hand, uses only file extensions to determine file types, which is why it can't identify files with no extension or the wrong extension.
I also want to add that sometimes you can have access to a filename but not to its content (you have no rights to read the file, the file is in a compressed directory, ...). Some file naming rules (extensions, for example) can be useful for determining the file type in those cases. The best approach is to use both extensions and content inspection to find out what a file contains.
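To make that two-step scheme concrete, here is a minimal Python sketch. The signatures listed are real magic numbers, but the tiny table and the returned labels are illustrative - this is not GNOME's actual database or API:

```python
# Minimal sketch of content-based file identification, like the magic-number
# half of a mime database.  Signatures here are real: ELF files start with
# 0x7F 'E' 'L' 'F', scripts with "#!", PNG with an 8-byte signature.
MAGIC = [
    (b"\x7fELF", "application/x-executable (ELF)"),
    (b"#!",      "script (shebang)"),
    (b"\x89PNG\r\n\x1a\n", "image/png"),
    (b"PK\x03\x04", "zip archive"),
]

def sniff(data: bytes, filename: str = "") -> str:
    # 1. Try magic numbers first -- content beats the name.
    for magic, kind in MAGIC:
        if data.startswith(magic):
            return kind
    # 2. Fall back to the extension ("glob") when the content is
    #    unreadable or unrecognised, as suggested above.
    if "." in filename:
        return "guessed from extension: ." + filename.rsplit(".", 1)[1]
    return "unknown"

print(sniff(b"#!/bin/bash\necho hi"))  # script (shebang)
print(sniff(b"\x7fELF\x02\x01"))       # application/x-executable (ELF)
print(sniff(b"", "notes.txt"))         # guessed from extension: .txt
```

Checking content first and falling back to the name mirrors the point above: the extension is a useful hint precisely when the bytes aren't available or aren't recognised.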
Why are there so many illegal characters for filenames?
As ne555 said: Ambiguity in some cases and in other cases these symbols can be reserved for system use.
Some names can be reserved too: try to create a folder named "con" or "nul" in Windows. (Note that the system does support these names as folder names - you just need to be creative with the command line.) EDIT: I have found out how to create a file ending with a dot: copy nul \\?\D:\file.
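For illustration, a small hypothetical Python helper that flags those reserved DOS device names (the set below is the commonly documented one - CON, PRN, AUX, NUL, COM1-9, LPT1-9 - and is not claimed to be exhaustive):

```python
# Reserved Windows device names, a holdover from DOS kept for
# backwards compatibility.
RESERVED = {"CON", "PRN", "AUX", "NUL"} \
    | {f"COM{i}" for i in range(1, 10)} \
    | {f"LPT{i}" for i in range(1, 10)}

def is_reserved(name: str) -> bool:
    # The check applies to the stem only: "con.txt" is just as reserved
    # as "con", and the comparison is case-insensitive.
    stem = name.split(".", 1)[0]
    return stem.upper() in RESERVED

print(is_reserved("con"))      # True
print(is_reserved("CON.txt"))  # True
print(is_reserved("console"))  # False
```

A portable application might run a check like this before creating files, rather than relying on the OS to reject the name.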
Not all OS's have been held back by earlier design decisions - Windows might be the most notable one that sometimes is.
I am struggling to see how UNIX / Linux OS's have been held back in any way at all.
Do you use Windows exclusively? If so, maybe it might be worth your while to install Linux on a spare hard drive or physical partition. Then you would have free access to all kinds of programming & discover the power of the shell. Apologies if you are aware of all this already - I can't tell whether you are or not, despite your considerable knowledge of C++ & other languages.
There are other OS's that were badly limited: I saw an old machine in 2005 (might have been a WANG) that I was told could only have 4 directories per hard disk, but in reality it had 4 partitions & no directory facility at all. Presumably it just had some sort of file table system - pretty bare bones stuff. The hell of it was that it was still being used by the company for a small part of its daily operations, although they did have some quite powerful brand new servers as well.
Windows isn't held back by very much. It does something much more impressive -- it is very backwards compatible. Meaning that ancient programs still work fine on it, while modern programs can do things the right way.
No one claims it is perfect - unlike the Linux evangelists we keep hearing from.
Well, there are pros & cons for everything - a Windows vs Linux debate is just the same as a language Y vs language Z debate. Nothing is ever perfect; each has its uses, and good & bad points. In terms of "hot air", I could argue it comes from both sides.
At the risk of propagating more evangelistic hot air commercials, here are some hopefully brief points:
I am not as one-sided as you might think, I mention UNIX / Linux because there are a lot of people out there who have only ever used MS, and aren't aware of the potential outside this sphere.
I guess I was very impressed when I first saw UNIX in 1987: compared to DOS or even Windows, it just blew the MS stuff right out of the water. The big difference between UNIX & MS is that the former was always developed with commercial use in mind, while MS started out as a microscopic OS (useful for the everyday person - the greatest thing about it) and has since evolved into a very useful & most popular family of OS's & software. In my mind, the shell is still one of the most impressive aspects of UNIX / Linux, and that is why it seems to be preferred for managing large & / or critical systems. These days Linux is still very much a programmer's utopia, with all kinds of scripting & languages available for free. Of course there is lots of free stuff for Windows too.
Having said that, I still use Windows when I need to for work. I use AutoDesk's Civil3D for my Surveying / Civil Design work - I like it because it is smart & fully capable. I prefer systems (software & OS) that are complex & fully capable over ones that are simple & limited.
Some of the best things IMO about MS are the way the applications look (always very nice) and the way they interoperate - not only from the GUI POV, but also in terms of the way one can write VBA or .NET code that deals with any of the Office applications from within an AutoDesk application, or vice versa, for example.
Another thing is that although corporate & education organisations (universities, say) have Windows machines at every desk, they still have their core database & web servers running on UNIX. One would be very brave to run an MS web server, because apparently it is so full of security holes that it really isn't funny, so Apache on UNIX is used pretty much exclusively.
Finally, there are a lot of huge and / or complex business systems that typically run on UNIX and are not sold or even mentioned publicly - people seem to forget about this & discuss things which are in the public domain without realising that is only a fraction of the whole.
Edit: It didn't turn out to be that brief - sorry 8+)