I was wondering why reading the whole file into a buffer takes so long compared to a version that uses getline. E.g.:
std::ifstream in;
in.open("mytest.txt", std::ios::in | std::ios::binary);
std::string line, c;
while (std::getline(in, line))
{
    c += line;
}
std::cout << c;
c.clear();
vs
std::ifstream is("mytest.txt", std::ifstream::binary);
if (is) {
    // get length of file:
    is.seekg(0, is.end);
    int length = is.tellg();
    is.seekg(0, is.beg);

    char* buffer = new char[length];

    std::cout << "Reading " << length << " characters... ";
    // read data as a block:
    is.read(buffer, length);
    is.close();

    // buffer is not null-terminated, so write exactly `length` bytes:
    std::cout.write(buffer, length);
    // ...buffer contains the entire file...
    delete[] buffer;
}
It turns out my first snippet is way easier to write and twice as fast.
Isn't c acting as a kind of buffer here?
Is there any reason to use buffers "the right way" if it's slower?
Why bother with read() and all the rest when a simple += does the whole thing?
Can do what thing exactly? That is the real question.
They're different functions used for different things: getline() stops when a certain delimiter character is found, while read() stops after a certain number of characters has been read. Note also that your getline() loop silently discards every newline it consumes, so c is not an exact copy of the file.
Try doing this with getline() -- reading binary data: