<SOLVED> Unsigned char array - assigning values converted from double

Hi,

I'm having a pretty weird problem. I've created an unsigned char array for an image buffer:

	buffer_rgb = new unsigned char[_w * _h * 3];
	memset(buffer_rgb, 0x0, sizeof(unsigned char)* _w * _h * 3);


And I add pixel color values to it like so:

		buffer_rgb[i] = ((unsigned char)(col[0] * 255));
		buffer_rgb[i + 1] = ((unsigned char)(col[1] * 255));
		buffer_rgb[i + 2] = ((unsigned char)(col[2] * 255));


Here col is a 'vec4' struct wrapping a double[4], with values between 0 and 1 (this is checked and clamped elsewhere, and the output is safely within bounds). It's basically used to store RGB and intensity values.
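For reference, the double-to-byte conversion itself is safe as long as the input really is in [0,1]. A minimal sketch of such a channel conversion (the `toByte` helper name is mine, not from the thread) with defensive clamping and rounding:

```cpp
#include <algorithm>
#include <cstdint>

// Converts a normalized colour channel in [0,1] to an 8-bit value.
// std::clamp guards against values slightly outside the range, which
// would otherwise wrap around when cast to unsigned char.
uint8_t toByte(double c)
{
    return static_cast<uint8_t>(std::clamp(c, 0.0, 1.0) * 255.0 + 0.5);
}
```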

Now, when I add a constant integer as a pixel value, i.e.:

buffer_rgb[i] = (unsigned char)255;

Everything works as it should. However, when I use the above code, where col is different for every sample sent to the buffer, the resulting image becomes skewed in a weird way, as if the buffer writing is becoming offset as it goes.

These two images illustrate the problem:

tomsvilans.com/temp/140803_render_skew.png
tomsvilans.com/temp/140803_render_noskew.png

You can see in the 'noskew' image all pixels are the same value, from just using an unchanging int to set them. It seems to work with any value between 0-255 but fails only when this value is pulled from my changing col array.

Whole function is here:

// adds sample to pixel. coordinates must be between (-1,1)
void Frame::addSample(vec4 col, double contrib, double x, double y)
{

	if (x < -1 || x >= 1 || y < -_aaspect || y >= _aaspect)
	{
		return;
	}

	unsigned int px_x = (unsigned int)((x + 1) / 2 * _w);
	unsigned int px_y = (unsigned int)((-y + _aaspect) / _aaspect / 2 * _h);

	unsigned long i = (px_y * _w + px_x) * 3;
	unsigned long ii = px_y * _w + px_x;

	if (_bw)
	{
		//buffer_rgb[i] = (int)(col[3] * 255);
		//buffer_rgb[i + 1] = (int)(col[3] * 255);
		//buffer_rgb[i + 2] = (int)(col[3] * 255);
		//buffer_rgb[i] = buffer_rgb[i + 1] = buffer_rgb[i + 2] = (uint8_t)255;
		buffer_rgb[i] = buffer_rgb[i + 1] = buffer_rgb[i + 2] = (unsigned int)255;


	}
	else
	{

		//buffer_rgb[i] = ((unsigned char)(col[0] * 255));
		//buffer_rgb[i + 1] = ((unsigned char)(col[1] * 255));
		//buffer_rgb[i + 2] = ((unsigned char)(col[2] * 255));
		buffer_rgb[i] = buffer_rgb[i + 1] = buffer_rgb[i + 2] = 255;


	}
	num_points++;
}


Any help would be much appreciated! I'm stumped. I'm quite new at this, so I'm sure I'm missing some crucial bit.
Solved. If anyone is interested, it was a simple matter of std::ofstream not having the std::ios::binary flag when writing the files.
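For anyone hitting the same thing: on Windows, a stream opened in text mode translates every 0x0A byte it writes into 0x0D 0x0A, so raw pixel data gets silently padded and the image shears. A minimal sketch of writing a buffer correctly (the `writeBuffer` helper and path are mine, not from the thread):

```cpp
#include <cstdint>
#include <fstream>
#include <vector>

// Writes a raw RGB buffer to disk. std::ios::binary suppresses the
// text-mode newline translation that was corrupting the image data.
void writeBuffer(const char* path, const std::vector<uint8_t>& rgb)
{
    std::ofstream out(path, std::ios::out | std::ios::binary);
    out.write(reinterpret_cast<const char*>(rgb.data()),
              static_cast<std::streamsize>(rgb.size()));
}
```

Without the flag, any pixel value that happens to equal 10 (0x0A) becomes two bytes on disk, which offsets everything after it. That's why a constant fill looked fine while real, varying colour data skewed.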