My first hard drive had 80 MB of space when it was new.
The essential task is to classify your files into bins, for example: OS, programs, active documents, archived documents, reproducible files, and temporary files. Each bin deserves a different access speed and backup scheme, and the classification leads to a plan for what you actually need to keep where.
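As a quick starting point for that classification, here is a minimal sketch that walks a directory tree and totals disk usage per bin by file extension. The extension-to-bin mapping is purely illustrative; you would adjust it to your own files.

```python
import os
from collections import defaultdict

# Hypothetical extension-to-bin mapping; adjust to your own file types.
BINS = {
    ".exe": "programs", ".dll": "programs",
    ".doc": "documents", ".txt": "documents",
    ".iso": "reproducible", ".zip": "reproducible",
    ".tmp": "temporary", ".log": "temporary",
}

def classify(root):
    """Walk `root` and total up bytes per bin."""
    totals = defaultdict(int)
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            ext = os.path.splitext(name)[1].lower()
            bin_name = BINS.get(ext, "other")
            try:
                totals[bin_name] += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass  # skip files we can't stat (deleted, no permission, etc.)
    return dict(totals)
```

Running this over your home directory tells you which bins dominate, which is exactly the input the backup/storage plan needs.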
I live with at most 120 MB left on the system drive (after rebooting, which deletes all temporary files). I can't even install .NET 4, and I have no problem with that :).
I recommend buying a 1 TB external hard drive and dumping all your junk onto it, deleting what you don't need in the process.
My solution to this problem was getting an HDD dock. In the long run it's cheaper, more convenient, and safer (in terms of data integrity) than screwing around with portable HDDs or DVDs. When I need more space I can just unload stuff onto an offline disk, or buy another terabyte or two without having to sell internal organs.
I recommend using an eSATA port with the above solution.
Don't install games on the external USB drive. Store data on it.
For the #notserious server bit:
Ten processors on the same node is serious steel. I do know of 8-CPU motherboards, and of course those CPUs can be 10-core models, giving an 80-core box. For a machine like that, 100 GiB of RAM is peanuts; even dual-CPU nodes reach that hands down. Larger rack chassis tend to have up to 24 hot-swap HDD bays, and 32 TB of internal space takes fewer than that. However, a storage server requires next to no compute capacity, so the i7 and all that RAM would be wasted (exception: ZFS can eat some for its cache), and a mere 32 TB in a SAN is not that much any more.
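The back-of-the-envelope numbers above can be checked in a few lines. The 2 TB drive size is an assumption for illustration; the socket and bay counts come from the text.

```python
# Sanity-check the server arithmetic from the text.
sockets, cores_per_cpu = 8, 10
total_cores = sockets * cores_per_cpu   # 8 CPUs x 10 cores = 80-core box

bays, drive_tb = 24, 2                  # 24 hot-swap bays; 2 TB drives assumed
max_internal_tb = bays * drive_tb       # 48 TB possible, so 32 TB fits easily
bays_for_32tb = -(-32 // drive_tb)      # ceiling division: bays actually needed

print(total_cores, max_internal_tb, bays_for_32tb)
```

With 2 TB drives, 32 TB needs only 16 of the 24 bays, which is why it "takes fewer than that."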