Make class with constant size. Help!

Disch wrote:
Anyone want to transcribe this to an article for me? I'm too lazy to do it now.

I did that! When an editor accepts it, you should see it in the articles section!
I'm not sure if this link works for you, but try it: http://www.cplusplus.com/articles/DzywvCM9/
I am now trying to learn from what you've written, Disch. I am currently trying to write the 16-bit value 0x1122 to the file "file", but when I read the file back and print the data to standard output I get the hexadecimal value cccc.
(I want to hold off on the strings until I have understood this.)

#include <iostream>
#include <cstdint>
#include <fstream>
#include "MyTypes.h"

using namespace std;

u16 readu16(istream& file);
void writeu16(ostream& file, u16 val);

int main() {

	ofstream fileout("file", ios::out | ios::binary);
	ifstream filein("file", ios::in | ios::binary);

	u16 n1 = 0x1122;
	u16 n2 = 0;
	writeu16(fileout, n1);
	n2 = readu16(filein);
	cout << hex << n2 << endl;
	

	system("pause");
	return 0;
}

u16 readu16(istream& file) {
	u8 bytes[2];
	u16 val;
	file.read((char*) bytes, 2);
	val = bytes[0] | (bytes[1] << 8);

	return val;
}

void writeu16(ostream& file, u16 val) {
	u8 bytes[2];
	bytes[0] = val & 0xFF;
	bytes[1] = (val >> 8) & 0xFF;
	file.write((char*)bytes, 2);
}


Value that I try writing to the file: 0x1122.
Value printed by cout: cccc (hexadecimal).
You are probably reading the file before the written data has actually reached it. File I/O is not immediate; there are several layers of buffering going on.

Likewise, opening a file for reading might fail when it's already open for writing. It's possible 'filein' is in a bad state because it failed to open the file.

Try closing fileout before you open the file for reading:

int main() {

	u16 n1 = 0x1122;
	u16 n2 = 0;
	{
		ofstream fileout("file", ios::out | ios::binary);
		writeu16(fileout, n1);
	} // fileout closed here
	{
		ifstream filein("file", ios::in | ios::binary);
		n2 = readu16(filein);
	} // filein closed here

	cout << hex << n2 << endl;
	
Yes, it worked :). Thank you, I will remember that next time.
I am now working on a test program to see if I can make functions that read and write 16-bit, 32-bit and 64-bit values. But with the 64-bit functions I get problems. The compiler tells me that I am using the shift operators with too-large counts and that I will get loss of precision.

Here is the code :

#include <iostream>
#include <fstream>
#include "MyDatatypes.h"

using namespace std;

// Declaring function prototypes to handle 16-bit values.
void writeu16(ofstream& fileout, u16 value);
u16 readu16(ifstream& filein);
// Declaring function prototypes to handle 32-bit values.
void writeu32(ofstream& fileout, u32 value);
u32 readu32(ifstream& filein);
// Declaring function prototypes to handle 64-bit values.
void writeu64(ofstream& fileout, u64 value);
u64 readu64(ifstream filein);

// Little Endianess is used.

int main() {

	ofstream fileout("testfile", ios::out | ios::binary);
	fileout.close();
	ifstream filein("testfile", ios::in | ios::binary);
	filein.close();

	u16 val1 = 0x1111;
	writeu16(fileout, val1);
	val1 = readu16(filein);
	cout << "Test for 16-bit: " << val1 << endl;
	u32 val2 = 0x22222222;
	writeu32(fileout, val2);
	val2 = readu32(filein);
	cout << "Test for 32-bit: " << val2 << endl;
	u64 val3 = 0x3333333333333333;
	writeu64(fileout, val3);
	

	system("pause");
	return 0;
}

// Defining function for writing 16-bit values.
void writeu16(ofstream& fileout, u16 value) {
	u8 bytes[2];
	bytes[0] = value & 0x00FF;
	bytes[1] = value >> 8;
	fileout.open("testfile", ios::out | ios::binary);
	if(!fileout) {
		cout << "File could not be opened in function" << 
			" writeu16." << endl;
		exit(EXIT_FAILURE);
	}
	fileout.write((char*) bytes, 2);
	fileout.close();
}

// Defining function for reading 16-bit values.
u16 readu16(ifstream& filein) {
	u8 bytes[2];
	u16 value = 0;
	filein.open("testfile", ios::in | ios::binary);
	if(!filein){
		cout << "The file could not be opened by function" << 
			" readu16." << endl;
		exit(EXIT_FAILURE);
	}
	filein.read((char*) bytes, 2);
	filein.close();
	value = bytes[0] | (bytes[1] << 8);
	return value;
}

// Defining function for writing 32-bit values.
void write32u(ofstream& fileout, u32 value) {
	u8 bytes[4];
	bytes[0] = value & 0x000000FF;
	bytes[1] = (value & 0x0000FF00) >> 8;
	bytes[2] = (value & 0x00FF0000) >> 16;
	bytes[3] = value >> 24;
	fileout.open("testfile", ios::out | ios::binary);
	if(!fileout) {
		cout << "The file could not be opened by function"
			<< " writeu32." << endl;
		exit(EXIT_FAILURE);
	}
	fileout.write((char*) bytes,4);
	fileout.close();
}

// Defining function for reading 32-bit values.
u32 readu32(ifstream& filein) {
	u8 bytes[4];
	u32 value = 0;
	filein.open("testfile", ios::in | ios::binary);
	if(!filein) {
		cout << "The file could not be opened by function" <<
			" readu32." << endl;
		exit(EXIT_FAILURE);
	}
	filein.read((char*) bytes, 4);
	filein.close();
	value = bytes[0] | (bytes[1] << 8) | (bytes[2] << 16) | (bytes[3] << 24);
	return value;
}

// Defining function for writing 64-bit values.
void writeu64(ofstream& fileout, u64 value) {
	u8 bytes[8];
	bytes[0] = value & 0x00000000000000FF;
	bytes[1] = (value & 0x000000000000FF00) >> 8;
	bytes[2] = (value & 0x0000000000FF0000) >> 16;
	bytes[3] = (value & 0x00000000FF000000) >> 24;
	bytes[4] = (value & 0x000000FF00000000) >> 32;
	bytes[5] = (value & 0X0000FF0000000000) >> 40;
	bytes[6] = (value & 0x00FF000000000000) >> 48;
	bytes[7] = value >> 56;
	fileout.open("testfile", ios::out | ios::binary);
	if(!fileout) {
		cout << "The file could not be opened by function"
			<< " writeu64." << endl;
		exit(EXIT_FAILURE);
	}
	fileout.write((char*) bytes, 8);
	fileout.close();
}

// Defining function for reading 64-bit values.
u64 readu64(ifstream& filein) {
	u8 bytes[8];
	u64 value = 0;
	filein.open("testfile", ios::in | ios::binary);
	if(!filein) {
		cout << "The file could not be opened by function"
			 << " readu64." << endl;
		exit(EXIT_FAILURE);
	}
	filein.read((char*) bytes, 8);
	filein.close();
	value = bytes[0] | (bytes[1] << 8) | (bytes[2] << 16)
		| (bytes[3] << 24) | (bytes[4] << 32) | (bytes[5] << 40)
		| (bytes[6] << 48) | (bytes[7] << 56);
	return value;
}


Here is the output window giving me warnings and errors:


1>------ Build started: Project: Project4, Configuration: Debug Win32 ------
1>  Main.cpp
1>c:\users\zerpent\documents\visual studio 2010\projects\project4\main.cpp(111): warning C4244: '=' : conversion from 'u64' to 'u8', possible loss of data
1>c:\users\zerpent\documents\visual studio 2010\projects\project4\main.cpp(113): warning C4244: '=' : conversion from 'u64' to 'u8', possible loss of data
1>c:\users\zerpent\documents\visual studio 2010\projects\project4\main.cpp(114): warning C4244: '=' : conversion from 'u64' to 'u8', possible loss of data
1>c:\users\zerpent\documents\visual studio 2010\projects\project4\main.cpp(115): warning C4244: '=' : conversion from 'u64' to 'u8', possible loss of data
1>c:\users\zerpent\documents\visual studio 2010\projects\project4\main.cpp(140): warning C4293: '<<' : shift count negative or too big, undefined behavior
1>c:\users\zerpent\documents\visual studio 2010\projects\project4\main.cpp(140): warning C4293: '<<' : shift count negative or too big, undefined behavior
1>c:\users\zerpent\documents\visual studio 2010\projects\project4\main.cpp(141): warning C4293: '<<' : shift count negative or too big, undefined behavior
1>c:\users\zerpent\documents\visual studio 2010\projects\project4\main.cpp(141): warning C4293: '<<' : shift count negative or too big, undefined behavior
1>Main.obj : error LNK2019: unresolved external symbol "void __cdecl writeu32(class std::basic_ofstream<char,struct std::char_traits<char> > &,unsigned int)" (?writeu32@@YAXAAV?$basic_ofstream@DU?$char_traits@D@std@@@std@@I@Z) referenced in function _main
1>C:\Users\Zerpent\Documents\Visual Studio 2010\Projects\Project4\Debug\Project4.exe : fatal error LNK1120: 1 unresolved externals
========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========


1: Have I understood the principles/technique for how to do this?
2: How do I solve this problem?
will review when I get home from work =x
Thank you so much for taking the time.
I'm going to ignore the weirdness of how you are passing file streams to functions only to open and close them locally in that function -- and why you are closing the files immediately after opening them at the start of main() -- and focus on your errors.


warning C4244: '=' : conversion from 'u64' to 'u8', possible loss of data

The compiler sees you assigning a 64-bit value to an 8-bit value and it's giving you a heads up by saying "hey, I'm going to drop 56 bits of data here. Are you sure that's what you want to do?"

You can avoid this warning by explicitly casting to a u8 rather than just assigning:

// C style cast will work:
bytes[0] = (u8)(value & 0x00000000000000FF);

// or C++ style static_cast will also work:
bytes[0] = static_cast<u8>(value & 0x00000000000000FF);



warning C4293: '<<' : shift count negative or too big, undefined behavior

This is a bit less intuitive. C++ will "promote" variables to a larger type to do arithmetic operations. Typically it promotes them to "int" size. What's happening here is that "int size" is 4 bytes, but you need the promotion to be to 8 bytes.

If it only gets promoted to int size (32-bits) and you shift left by 48 bits... the result is pretty much going to be zero because you shifted out the entire number.

To solve this, you must explicitly promote to a 64-bit variable so that the compiler does 64-bit operations on it. Again, this can be done with casting:

// C style cast:
((u64)(bytes[7]) << 56)

// or C++ style cast:
(static_cast<u64>(bytes[7]) << 56)



unresolved external symbol "void __cdecl writeu32

You have a prototype for a writeu32 function, but you never gave that function a body, so the linker is confused.

You accidentally gave the body to a "write32u" function instead (note the typo)
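Putting both fixes together, the 64-bit pair could look like this (a sketch, not necessarily the thread author's exact code; it assumes u8/u64 are typedefs for uint8_t/uint64_t as in the "MyDatatypes.h" header):

```cpp
#include <cstdint>
#include <istream>
#include <ostream>
#include <sstream>

typedef uint8_t  u8;   // assumed to match the typedefs in MyDatatypes.h
typedef uint64_t u64;

void writeu64(std::ostream& file, u64 value) {
    u8 bytes[8];
    for (int i = 0; i < 8; ++i)
        bytes[i] = static_cast<u8>(value >> (i * 8)); // explicit cast silences warning C4244
    file.write((char*)bytes, 8);
}

u64 readu64(std::istream& file) {
    u8 bytes[8];
    file.read((char*)bytes, 8);
    u64 value = 0;
    for (int i = 0; i < 8; ++i)
        value |= static_cast<u64>(bytes[i]) << (i * 8); // promote to u64 BEFORE shifting (avoids C4293)
    return value;
}
```

Using a loop keeps every shift on an already-promoted u64 operand, so no shift count ever exceeds the operand width.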
Everything worked now :). Thank you very much Disch.

I will move onto the strings and chars now that you showed me on the previous page.
There are three things that I wonder now that I have moved on to the strings:

1: Why do you print out the length of the string before you print out the string? We already know that it is 32 bits long no matter how long it is.

WriteU32(file, len);
file.write(str.c_str(), len);


2: I do not understand your explanation of how to define the file format. What parts of this:


char[4]      header     "MyFi" - identifies this file as my kind of file
u32          version    1 for this version of the spec

u32          foo        some data
string       bar        some more data
vector<u16>  baz        some more data


is in the actual file? Is everything there? I am not sure what exactly should make up my file format and what you use just to explain the different parts of the file. Should "string" and "u32" also go into the file, or are they just explanations?

I also wonder how you navigate through a file like this with all the tabs and spaces and such. Do you use direct access with the seekp() and the tellp() functions?

To me it looks like you have a row structure. I have read that a binary file does not have that: in a binary file all bytes are stored in one continuous sequence, and you have to use the size of each piece of data to navigate through the file, not rows. Or are the rows just to make it easier for me to see?

Could you show me some code of how to make the definition so that I can see how it is done in practice? Do you just use the functions that you showed me before to create the structure of the file content?

Sorry but I am very confused about this.
1: Why do you print out the length of the string before you print out the string? We already know that it is 32 bits long no matter how long it is.


No, strings are generally not 32 bits long. Strings are a variable length container. This means that their size can be anything.

[ascii] strings have 1 byte for each character in the string.

So..
string foo = "Example";  // "Example" is 7 characters, so we'd need to write 7 bytes to the file
string bar = "car";  // however, this string needs only 3 bytes

string baz = "ljldjfwoljeljsdlfjsldfjlwjeoljofsjdflsjdljweofjlsdjfljiowejfolwjelfjslfisjefl";
   // and this one requires a few dozen. 


Putting the length of the string in the file is so that when the file is read, the reader knows how many characters to actually read from the file. It's important to get that exactly right, because if you read more or fewer bytes than were written, your position in the file goes out of sync and you start reading incorrect data.

Here's some example binary data:

61 62 65 6F 03 07 09 10

Let's say that when the file was written, the first 4 bytes were data for a string... and the next 4 bytes were 4 other u8 variables that we wrote.

For this to load properly, when we read the string, we must read EXACTLY those 4 bytes. Once we do, the file position is left at the '03' byte. So when we read our first u8 variable, we get '03' as expected. The next one read gets '07' as expected, and so on.

Now if we screw up and read too many characters (let's say we goof and read 5 characters instead of 4). Now the file position is left at the '07'... so when we read our first u8 variable, we get '07' instead of the '03' we expected! We have desynced!

Now all future reads will read incorrect data because the position is screwed up.
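The length-prefix idea can be sketched as a pair of helpers (assumptions: u8/u32 typedefs as before, and little-endian WriteU32/ReadU32 like the ones earlier in the thread, inlined here so the sketch stands alone):

```cpp
#include <cstdint>
#include <istream>
#include <ostream>
#include <sstream>
#include <string>

typedef uint8_t  u8;
typedef uint32_t u32;

void WriteU32(std::ostream& file, u32 v) {
    u8 b[4] = { (u8)v, (u8)(v >> 8), (u8)(v >> 16), (u8)(v >> 24) };
    file.write((char*)b, 4);
}

u32 ReadU32(std::istream& file) {
    u8 b[4];
    file.read((char*)b, 4);
    return b[0] | (b[1] << 8) | (b[2] << 16) | ((u32)b[3] << 24);
}

// Write the length first, then exactly that many characters.
void WriteString(std::ostream& file, const std::string& str) {
    WriteU32(file, (u32)str.length());
    file.write(str.c_str(), str.length());
}

// Read the length first, then exactly that many characters -- never more, never less.
std::string ReadString(std::istream& file) {
    u32 len = ReadU32(file);
    std::string str(len, '\0');
    file.read(&str[0], len);
    return str;
}
```

Because the reader always consumes exactly len bytes, the file position can never desync on a string field.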

is in the actual file? Is everything there? I am not sure what exactly should make up my file format and what you use just to explain the different parts of the file. Should "string" and "u32" also go into the file, or are they just explanations?


No that is not the file. That is just text explaining how the file is organized -- it's just documentation.

The actual file will look like this:
1
2
4D 79 46 69 01 00 00 00  06 94 00 00 03 00 00 00
4D 6F 6F 02 00 00 00 EF  BE 0D F0


Get a hex editor (google for a free one) and look at some small binary files. It will really help illustrate the concept.
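To make the documentation-to-bytes connection concrete, here is a sketch of a writer for that spec. The helper functions are inlined so it stands alone, and the field values are inferred from the hex dump above, which appears to correspond to foo = 0x9406, bar = "Moo", baz = {0xBEEF, 0xF00D} (an assumption on my part, not something stated in the thread):

```cpp
#include <cstdint>
#include <ostream>
#include <sstream>
#include <string>
#include <vector>

typedef uint8_t  u8;
typedef uint16_t u16;
typedef uint32_t u32;

// little-endian helpers, as developed earlier in the thread
void WriteU16(std::ostream& f, u16 v) {
    u8 b[2] = { (u8)v, (u8)(v >> 8) };
    f.write((char*)b, 2);
}
void WriteU32(std::ostream& f, u32 v) {
    u8 b[4] = { (u8)v, (u8)(v >> 8), (u8)(v >> 16), (u8)(v >> 24) };
    f.write((char*)b, 4);
}

void WriteMyFile(std::ostream& f, u32 foo, const std::string& bar, const std::vector<u16>& baz) {
    f.write("MyFi", 4);                       // header
    WriteU32(f, 1);                           // version
    WriteU32(f, foo);                         // foo
    WriteU32(f, (u32)bar.size());             // bar: length prefix...
    f.write(bar.c_str(), bar.size());         // ...then the characters
    WriteU32(f, (u32)baz.size());             // baz: element count...
    for (size_t i = 0; i < baz.size(); ++i)   // ...then each element
        WriteU16(f, baz[i]);
}
```

Calling WriteMyFile with those example values produces exactly the 27 bytes shown in the dump above.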


EDIT:

I'll also whip up an illustration later, but now I REALLY have to get to work! ;)
Okay I'm going to illustrate this with ASCII the best I can.

When you do binary file I/O, the data gets read/written to the file at the current file position. This file position is adjusted after every read/write.

To illustrate the file pointer, I will put [brackets] where the file position is... so it indicates the next byte that will be read/written.

File contents are illustrated after each snippet:

Writing a file
ofstream myfile("file.bin", ios::binary | ios::out);

// write a single byte to the file
u8 v = 5;
WriteU8(myfile, v);

// write a 2 byte variable to the file
u16 v2 = 0x1234;
WriteU16(myfile, v2);

// write a 4 byte value:
u32 v4 = 0xFFEEDDCC;
WriteU32(myfile, v4);

File contents after each step ([ ] marks the current file position):

(file opened)     [  ]
after WriteU8:    05[  ]
after WriteU16:   05 34 12[  ]
after WriteU32:   05 34 12 CC DD EE FF[  ]


Notice how the size of the variable you write directly determines how many bytes are written. The same is true when reading, which is why it's so important to read exactly the same sizes as were written.

Reading:
ifstream myfile("file.bin", ios::binary | ios::in);

// read a single byte from the file
u8 v = ReadU8(myfile);

// read a 2 byte variable from the file
u16 v2 = ReadU16(myfile);

// read a 4 byte value:
u32 v4 = ReadU32(myfile);

File position after each step:

(file opened)    [05]34 12 CC DD EE FF
after ReadU8:     05[34]12 CC DD EE FF
after ReadU16:    05 34 12[CC]DD EE FF
after ReadU32:    05 34 12 CC DD EE FF[  ]



I don't know if that clarifies things any.


Like I say... get a hex editor and look at the files you're generating. It really will help.
Thank you very much Disch, I will start working on this now. Your new illustrations really helped, and I have gotten a free hex editor as well.

I will save your explanations locally as backups so that I can read it on my PC and to save it for the future so that it does not get lost. The explanations are really good.
Since you wrote them I just wonder if that is ok with you?

One more question which I did not ask before since I did not notice this. When you want to read 16-bit data, you make a string that can contain 2 elements (1 byte per element). Is this not risky? I have read before that you should always have n+1 elements in a string so that you have one element left for the '\0' (by the way, what is the technical term for that in English?) to indicate the end of the string. Some functions in C++ might make your program crash since they need that to work properly. Or am I wrong?

One more thing: why do you have to explicitly cast the first argument of the write() and read() functions to a char pointer? The argument is already a char or a char array, since we declared it as that (I am talking about the "bytes" array). Is it because the parameters only accept char pointers?
I will save your explanations locally as backups so that I can read it on my PC and to save it for the future so that it does not get lost. The explanations are really good.
Since you wrote them I just wonder if that is ok with you?


Thanks. It's nice to be appreciated. :)

Of course it's okay with me. I wouldn't be posting it if I didn't want people to use it. ;)

When you want to read 16 bit data, you make a string that can contain 2 elements (1 byte per element).


No, no. There are no strings involved here. We are writing raw binary data to the file... just a series of 0s and 1s. We aren't converting anything to a string.

The "05 34" data you see in the file when you look at it in a hex editor (and in my above illustrations) is just a graphical representation of that data.

You can think of a binary file as a big array of bytes. Hex editors just take that big array of bytes and print the numbers in hexadecimal so that humans can read it.

One more thing, why do you have to explicitly cast the first argument in the write() function and the read() function as a char pointer? The argument is already a char or a char array since we declared it as that (I am talking about the "bytes" array)


u8's are unsigned chars, which are a distinct type from plain char (whether plain char is signed or unsigned is up to the compiler, but it is a separate type either way). If you try it without the cast, the compiler will complain about an incorrect pointer type (a good compiler will, anyway).
Ok I see. I made a binary file "testfile.bin" and filled it with 4 bytes: "ABCD" according to Little Endian, giving the result "DCBA" in the file, which I wanted.

When I open the Windows CMD and type the file, it does not print it in binary or hexadecimal format but actually prints it as "DCBA". Since it is a binary file, nothing is wrong with it because of this, right? I mean, it is still raw binary data, but the program that types the file in CMD translates the raw binary data to an ASCII character representation of it, right?

I understand now that there is no conversion to strings, since you told me, but I do not understand how or why it works like that. Since "bytes" is a char array, it can only store chars/characters and not numerical values like 0x12, or can it? I thought that when you store a numerical value in a char, it is automatically converted to the ASCII code that represents that value. I am still a bit confused about why it works the way you described above. Sorry if I dive too much into details, but I am just curious and want to understand it.

Thanks for all help and quick and well written replies.
Ok I see. I made a binary file "testfile.bin" and filled it with 4 bytes: "ABCD" according to Little Endian, giving the result "DCBA" in the file, which I wanted.


"ABCD" is 2 bytes. Sounds to me like we need to go over some fundamentals here. Excuse me if you already know all this...

1 bit is a single binary digit. It can be 0 or 1... nothing else.
1 byte is traditionally 8 bits, so it can range from:
00000000 (0 in decimal, 0x00 in hexadecimal)
to:
11111111 (255 in decimal, 0xFF in hexadecimal)

11111111, 255, and 0xFF are all the exact same number with absolutely zero differences as far as the computer is concerned. The only difference is the way they are presented to us human beings. "11111111" is represented in binary form (base 2), 255 is in decimal form (base 10) and 0xFF is in hexadecimal form (base 16).

But again that's just textual representation. The number itself is not stored any differently in the computer. "1010", "10", and "0x0A" are all the number "ten" -- they're just that number printed in different numerical bases.

Hexadecimal is traditionally used to view binary data because it conveniently represents a single byte with 2 digits. It also is much easier to convert hex to binary and vice versa.

When you look at a file in a hex editor, many of them will put spaces between each byte so that it's easier to read.

Here's a screenshot of a file in a hex editor I'm currently using. Your hex editor I'm sure will look very similar.
http://i45.tinypic.com/izyec8.png

There are 3 main columns here.

The black column on the far left (with rows of numbers that count up by 0x10 each row) is the offset, which is a fancy term for "file position". The highlighted "4E" value in the picture is at offset 0 because it's the first byte in the file. The "45" after it is at offset 1, and so on.

The big column in the middle with all the blue and red numbers is the actual data in the file. Each 2-digit pair is a single byte.

The black column on the far right (that starts with "NES") is the ASCII representation of each byte in the file. The 'N' is highlighted because I'm also highlighting the '4E' in the center column... and both of those things represent offset 0 (0x4E is the ASCII code for the character 'N'). If you were to open this file in a text editor like notepad... it would display text similar to what you see in this far right column.


That is the fundamental difference between text editors and hex editors. Text editors look at a file and assume that each byte is a character of text, and display the data as text (that is, if they see a '4E' byte in the file, they will display it as the character 'N'). Whereas hex editors just give you the actual raw data without doing any conversion.

For giggles, you can try opening a plain text file in a hex editor just to get the idea.


When I open the Windows CMD and type the file, it does not type it out in binary or hexadecimal format but it actually types it out as "DCBA". Since it is a binary file, nothing is wrong with it because of this right?


I'm not sure what you're doing with CMD. As far as I know, CMD doesn't display binary files... so I don't know what's going on. Is your hex editor command-line based? If it is, throw it out and get a real one.

I'm using "translhextion" in my example. It's actually pretty crappy and I wouldn't ordinarily recommend it, but I don't know of any better free ones (too lazy to really look). You can get it here:
http://www.romhacking.net/utilities/219/
Look for the "Download file now" link below the screenshot. Don't let the "hacking" in the url scare you -- it's just a game mod site... it's perfectly safe and friendly.


Since "bytes" is a char array, it can only store chars/characters and not numerical values like 0x12, or can it?


It can. This is where C/C++ are a little confusing.

chars are normal integers... just like ints. The only difference is that a char is 1 byte wide whereas an int is usually 4 bytes wide. You can store numbers in chars and add them together just like they were ints.

Actual character literals are converted to their ASCII codes by the compiler. The only reason they're in the language at all is to be more convenient for the programmer.

We've already established that the ASCII code for 'N' is 0x4E. You can test this out by doing this in a C++ program:

if( 'N' == 0x4E )
{
  cout << "This will print because the above expression is true!";
}

So when you do something like this: char foo = 'N';
The compiler treats it the same as this: char foo = 0x4E;

This is why 0 and '0' are different. 0 is actually zero, whereas '0' is the ASCII code for the numeral '0' (which is 0x30)



Hopefully that clears up things a bit more. Keep those questions coming... I feel like I'm committed to teaching you this stuff. :)


EDIT:

It dawns on me that you may have been abbreviating your 4 bytes to ABCD and you didn't mean ABCD literally.

Oh well -- sorry about that =x. Hopefully the post is still informative.
Disch wrote:
Hopefully that clears up things a bit more. Keep those questions coming... I feel like I'm committed to teaching you this stuff. :)


Disch's life purpose has been realized =D

On a more serious note, what is the purpose of a hex editor? If a file is already read by characters, the only thing that a hex editor does is allow you to change values... which is the same thing a text editor does, no? Maybe I'm not looking at this properly, but it seems like hex editors don't "do" anything a regular text editor can't do.

Care to enlighten two people tonight?
If a file is already read by characters, the only thing that a hex editor does is allow you to change values...which is the same thing a text editor does, no?


Not at all.

Not all numerical values have textual ASCII representations. Text editors often won't display bytes that don't have a textual symbol. For example, if you have a lot of binary zeros in the file, a hex editor will actually display the 00s, whereas a text editor will likely skip over them because the value 0 does not correspond to any printable ASCII character (it's the null character). Or maybe it will print that 'something is screwed up' square.


It's also easier to look at the actual binary data without having to convert text to literal values. For example, say you write the following to a binary file:

WriteU32( file, 100 );
WriteU8( file, 0x52 );


If you look at that file in a hex editor, you will see the actual values you wrote:
64 00 00 00 52

Here we can clearly see our 4-byte value of 100 (0x00000064), and our one byte value of 0x52.

If you look at that file in a text editor it will try to display that as text... which means instead of printing the actual values, it will print the ASCII characters that those values represent. It will look like this:

dR

or possibly this:

d???R

or possibly something else depending on how it decides to render those 0s.

"dR" sure doesn't tell us anything useful. And how would you go about editing that? What if you want to change that 100 value to 5.. how would you do that in a text editor? 5 has no ascii code that you can type from a keyboard... and even if you could, the text editor might erase all those 0s when you save the file.
Thank you so much Disch, it clarifies things more :). Btw, I have stored your explanations on 3 hard drives (one external) and one USB stick to keep them safe xD.

No, my hex editor is not command-line based. Don't bother with the CMD question, I got it now when you explained the other things a bit more. I was just using CMD to become a faster typist and for fun.

I meant ABCD as textual representation, not as hexadecimal, so it is 4*1 bytes (1 for each ASCII character), right? But it would have been 2 bytes as a hexadecimal representation, and that is what you meant at first, right?

I asked you about how to navigate through a binary file, and you explained that read and write move the file position forward after each byte. But let's say that in the near future I have stored the data members of 2 objects in a binary file:


23 34 45 56 45 56 67 23 A3 B3 (end of first object) 23 34 45 56 45 56 67 23 A3 B3 (end of second object)


Then I want to read only the data for the second object. How do I skip/ignore the first 10 bytes so that the file pointer starts at the second object, on the 11th byte? Do I simply move it with the seekp() and seekg() functions for direct access?
Do I simply move it with the seekp() and seekg() functions for direct access?


Yep.