Let's start with nearest neighbor. When you scale an image -- as I noted earlier -- the existing pixel values are the only information you have for generating a larger or smaller version of it. When enlarging, you can map the original pixel values proportionally onto the new, larger canvas, but that leaves new (and vacant) pixel positions whose values you still have to generate; otherwise your image will look like a series of detached grid squares separated by black borders (see the nearest neighbor article I link). To avoid that, the resampling algorithm has to fill that space. Note, however, that nearest neighbor performs no interpolation to fill in these new pixels: it only copies values, so no new values are ever generated. When scaling up, it proportionally determines the x and y positions where the existing pixels land in the new image, then copies their values into the vacant pixels nearest to those positions. See:
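To make the copying concrete, here's a minimal sketch (mine, not from the linked article) of nearest-neighbor scaling on a 2D grid of single-channel values; for RGB you'd just copy the whole tuple:

```python
def nearest_neighbor_scale(src, new_w, new_h):
    """Scale a 2D grid of pixel values (list of rows) with nearest neighbor.

    No new values are computed: each destination pixel simply copies the
    source pixel whose position maps closest to it.
    """
    old_h, old_w = len(src), len(src[0])
    dst = []
    for y in range(new_h):
        # Map the destination coordinate back into source space and
        # truncate to an existing pixel index.
        sy = min(int(y * old_h / new_h), old_h - 1)
        row = []
        for x in range(new_w):
            sx = min(int(x * old_w / new_w), old_w - 1)
            row.append(src[sy][sx])
        dst.append(row)
    return dst

# Doubling a 2x2 image just repeats each pixel in a 2x2 block --
# those hard blocks are exactly the "copied values" described above.
big = nearest_neighbor_scale([[1, 2], [3, 4]], 4, 4)
```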
Interpolation means that new pixel values are generated to fill the gap. This kind of resampling produces a smoother result because, unlike nearest neighbor's copying, the interpolated values are not identical to their neighbors: they change progressively, so the vacant space is filled with values that transition smoothly between the existing values taken from the original image. To understand bilinear interpolation, let's start with linear interpolation:
This is a pretty good article. The sample implementation is in Java, but it's fairly descriptive. Essentially, the mathematical notion of linear interpolation is applied to each of the channels that make up a pixel value -- that is, R, G, and B.
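The per-channel idea is small enough to sketch directly (this is my own illustration, not the article's Java code):

```python
def lerp(a, b, t):
    """Standard linear interpolation between a and b for t in [0, 1]."""
    return a + (b - a) * t

def lerp_rgb(c0, c1, t):
    """Interpolate two RGB pixels by applying lerp to each channel
    independently, then rounding back to integer channel values."""
    return tuple(int(round(lerp(a, b, t))) for a, b in zip(c0, c1))

# Halfway between black and pure red: only the R channel moves.
mid = lerp_rgb((0, 0, 0), (255, 0, 0), 0.5)
```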
This, however, gives you the linearly interpolated pixel value in only one dimension. Textures and images are two-dimensional, so interpolating along a single axis won't suffice: you need to interpolate for pixel values in both the x and y dimensions, which means using something like bilinear interpolation.
For bilinear interpolation, see:
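As a sketch of the idea (my own, assuming single-channel values for brevity): interpolate along x between the two top neighbors and the two bottom neighbors, then interpolate along y between those two results.

```python
def bilinear_sample(src, fx, fy):
    """Sample a 2D grid at fractional coordinates (fx, fy) using the
    four surrounding pixels: lerp along x on the top and bottom pairs,
    then lerp the two results along y."""
    x0, y0 = int(fx), int(fy)
    x1 = min(x0 + 1, len(src[0]) - 1)  # clamp at the image edge
    y1 = min(y0 + 1, len(src) - 1)
    tx, ty = fx - x0, fy - y0          # fractional offsets in [0, 1)
    top = src[y0][x0] + (src[y0][x1] - src[y0][x0]) * tx
    bottom = src[y1][x0] + (src[y1][x1] - src[y1][x0]) * tx
    return top + (bottom - top) * ty

# Sampling at the exact center of a 2x2 grid averages all four values.
center = bilinear_sample([[0, 10], [20, 30]], 0.5, 0.5)
```

For an RGB image you'd run this once per channel, exactly as with the linear case above.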
Similarly, for scaling down, you need to take into account the size ratio between the old and new image. Let's approach this one-dimensionally and assume a one-dimensional array of pixels that we want to resize. If the new image is 50% the size of the original, then each pixel of the new image corresponds to two pixels of the original (e.g. old dimension: 120 pixels; new dimension at 50%: 60 pixels; 120/60 = 2, so two original pixels map to one new pixel). An average of those two pixels is then calculated (for R, G, and B separately), and the result assigned to the corresponding single pixel in the new image. Doing this for every pair in the array produces the values of the new (now scaled-down) image/array.
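The one-dimensional case can be sketched like this (my own illustration, assuming an integer shrink factor):

```python
def downscale_1d(pixels, factor):
    """Shrink a 1D pixel array by an integer factor by averaging each
    group of `factor` consecutive source pixels into one output pixel."""
    return [sum(pixels[i:i + factor]) / factor
            for i in range(0, len(pixels), factor)]

# Halving: each output value is the mean of two neighboring inputs,
# mirroring the 120 -> 60 pixel example above.
small = downscale_1d([10, 20, 30, 50], 2)
```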
So, expanding to a 2D example: for a uniform scale-down of your image you'd want to affect the x and y dimensions by the same amount. Continuing the previous example of 50% scaling with a 120x120 original, the new image is 60x60 and the ratio is 2 full pixels (in each dimension) of the original corresponding to one pixel of the new image. So you could divide your image into elementary "textures" of 2x2 pixels, average each of these down to a single pixel, and write that value to the corresponding pixel of the new image. This is a simplified example, of course. For images that are not square, or whose scaling factors differ per axis, you would end up with floating-point ratios, which would have to be resolved into integral units, since you are assigning values at integral pixel positions.

Similarly, for non-uniform scaling, you would have to adjust your calculations so that the non-uniformity is taken into account. This is completely off the top of my head, and I can't remember a non-uniform scale-down algorithm that performs interpolation, so perhaps you could first create a new image with the target aspect ratio in the form of a scale-up, applying bilinear interpolation to generate its pixels, and then apply a uniform scale-down to that resultant image to produce the non-uniformly scaled-down result you originally intended. I'm not sure whether non-uniform scale-down really requires these two steps or can be achieved in one -- which would be much more efficient if it can -- but pixel filter algorithms are not fresh in my mind at the moment, and I could well be wrong.
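The uniform 2x2 case above can be sketched as a box-averaging pass (my own simplified illustration, assuming even width and height and single-channel values):

```python
def downscale_2x2(src):
    """Halve a 2D image (list of rows) by averaging each 2x2 block of
    source pixels into one destination pixel. Assumes even dimensions,
    matching the 120x120 -> 60x60 example."""
    dst = []
    for y in range(0, len(src), 2):
        row = []
        for x in range(0, len(src[0]), 2):
            block_sum = (src[y][x] + src[y][x + 1] +
                         src[y + 1][x] + src[y + 1][x + 1])
            row.append(block_sum / 4)  # mean of the 2x2 "texture"
        dst.append(row)
    return dst

half = downscale_2x2([[0, 4, 8, 12],
                      [0, 4, 8, 12],
                      [16, 20, 24, 28],
                      [16, 20, 24, 28]])
```

For non-integer or per-axis ratios, real implementations weight each source pixel by how much of it falls inside the destination pixel's footprint instead of using fixed 2x2 blocks.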
I hope this is somewhat helpful to you!