Like Moschops already pointed out, that's pretty much what the process would entail. You would have to process per-pixel information and deduce shapes in the greater image by taking contrast differences, color values, etc. into consideration. This falls in the area of image analysis/processing and machine vision; you may want to look into those fields.
For identifying the head, the algorithm would have to function somewhat like the magic wand tool does in Adobe Photoshop -- differentiating similar pixels from dissimilar ones. Then that (assumed) head would have to be processed in terms of x and y coordinates to determine on which side the curvature of its outline changes around the middle of its total height, producing a bump of sorts that we assume must represent the nose. However, it's good to note that all this presupposes ideal lighting conditions, low overall image complexity as far as contrast and color changes go, and so on. That means if you took those pictures in a relatively controlled environment (e.g. against a mostly monotone background such as a greenscreen), it will be much easier for an algorithm to pick the head out than if they were taken against a highly cluttered background like a city street.
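Just to make the idea concrete, here's a rough sketch of that outline approach in Python. This is a toy, not a real implementation: it assumes an already-cropped grayscale image with a dark silhouette on a light monotone background, the function names (`segment_head`, `facing_direction`) are made up for illustration, and the "nose detection" is just checking which side of the outline bulges out the most near the middle of the head's height:

```python
def segment_head(image, threshold=128):
    """Magic-wand-style segmentation, crudely approximated: mark every
    pixel darker than the (assumed light) background as part of the head."""
    return [[1 if px < threshold else 0 for px in row] for row in image]

def facing_direction(mask):
    """Guess 'left' or 'right' from where the outline bulges.

    For each row that intersects the silhouette, record the leftmost and
    rightmost head pixels.  Around the middle of the head's height, the
    nose should push one side out past that side's average extent."""
    rows = [(y, row.index(1), len(row) - 1 - row[::-1].index(1))
            for y, row in enumerate(mask) if 1 in row]
    if not rows:
        return None
    ys = [y for y, _, _ in rows]
    mid = (min(ys) + max(ys)) // 2
    # Only look for the bump in a band around the middle of the head.
    band = [r for r in rows if abs(r[0] - mid) <= (max(ys) - min(ys)) // 4]
    avg_l = sum(l for _, l, _ in rows) / len(rows)
    avg_r = sum(r for _, _, r in rows) / len(rows)
    bulge_left = max(avg_l - l for _, l, _ in band)
    bulge_right = max(r - avg_r for _, _, r in band)
    return 'left' if bulge_left > bulge_right else 'right'
```

In practice you'd want a real flood fill or edge detector (OpenCV's floodFill/findContours, for instance) instead of a global threshold, but the overall shape of the logic is the same: segment, trace the outline, then reason about its geometry.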
Unless your project is specifically about machine vision and image processing, it's definitely much easier and safer to tell your program which direction the man is facing in some other way -- for example, by examining the pictures yourself one by one, if you can, and then loading them in as "left-facing images" and "right-facing images". If you are specifically interested in image analysis, on the other hand, one book I'd recommend is "Image Processing, Analysis, and Machine Vision" by Sonka, Hlavac and Boyle.