Function placement

I wrote this code (below) following these guidelines:

(1) write a function named “arrayToFile”. The function should accept 3 arguments: the name of a file, a pointer to an int array, and the size of the array. The function should open the specified file in binary mode, write the contents of the array to the file, and then close the file.

(2) write another function named “fileToArray”. This function should accept 3 arguments: the name of a file, a pointer to an int array, and the size of the array. The function should open the specified file in binary mode, read its contents into the array, and then close the file.

(3) write a complete program that demonstrates these functions: use “arrayToFile” to write an array to a file, then use “fileToArray” to read the data from the same file. After the data are read from the file into the array, display the array’s contents on the screen.

My code works perfectly, but I have a concern. After searching the web, I realized there is another way to write this code: define functions (1) and (2) after main, and before main put just:

void arrayToFile(string, int *, int);
void fileToArray(string, int *, int);

In my case, I defined functions (1) and (2) before main. Apparently, both ways work fine.

My questions are: what is the difference between defining the functions before or after main? Which way is more efficient, and how do I know when I need to define my functions before main?

Another question: I do not see the point of passing the array size in this program. Does anyone have a clue about its utility here? (I point out the line I mean in a code comment.)

#include <iostream>
#include <string>
#include <fstream>
using namespace std;

void arrayToFile(string fileName, int *array, int sizeArray) // why the need for the array size? I think the problem can be done without it
{
    ofstream outputFile(fileName, ios::binary);

    outputFile.write(reinterpret_cast<char *>(array), sizeArray); //.write member accepts pointer to char as first argument
                                                                  // so use reinterpret_cast when calling it

    outputFile.close();
}

void fileToArray(string fileName, int *array, int sizeArray)
{
    ifstream inputFile(fileName, ios::binary);

    inputFile.read(reinterpret_cast<char *>(array), sizeArray); //.read member accepts pointer to char as first argument
                                                                // so use reinterpret_cast when calling it

    inputFile.close();
}

int main()
{
    const int arraySize = 5;
    int array[arraySize] = {1, 2, 3, 4, 5};
    int arrayToRead[arraySize];

    //call arrayToFile function to write into file
    cout << "Let's start to write on the file from the array\n";
    arrayToFile("ziziMechant.dat", array, sizeof(array)); 
    cout << "Done\n";

    //call fileToArray to read from file
    cout << "Let's read that file to the array\n";
    fileToArray("ziziMechant.dat", arrayToRead, sizeof(arrayToRead));
    cout << "Job done\n";

    //display the array we just read
    cout << "Let's display the array content\n";
    for (int i = 0; i < arraySize; i++)
    {
        cout << arrayToRead[i] << ",";
    }
    cout << "\n";

    return 0;
}
The order of the functions makes no real difference. There is some convention that assumes the main function is at the end of a file, but I actually prefer it to be the first in the file.

The linker puts these functions into an executable according to its own rules, and it makes no difference where they come from in simple programs.

In the future you'll place functions in several different source files, where the linker will bring them together when producing the output executable. There again, the order makes no difference, but placement is more about human readability.

That said, there is one small difference.

As you have it now, you do not require declarations of the functions fileToArray and arrayToFile. At the time the compiler "reads" through the main function, these two functions are defined (and automatically declared), so everything works out fine.

However, if these functions appeared after main, then at the point where the compiler "reads" the main function and encounters a call to fileToArray or arrayToFile, an error would result because the compiler has no idea what they are. It would "know" once it reached their definitions, but C and C++ are designed for single-pass compilation, meaning that order matters to the compiler.

What is required is something like a table of contents. These are called declarations.


void fileToArray(string fileName, int *array, int sizeArray);

The above line is a declaration of fileToArray. The version that includes the full body of the code is called the definition.

If a declaration appears, the compiler reads it, and is informed that the function can be called. It doesn't matter that the compiler has no idea what that function does. All it must "know" is how the function will be called (which the declaration gives by the parameters listed).

The same concept applies to functions in other files. The need to declare functions that appear in other files is why there are header files included, headers that you will eventually create.

The answer to your array size question can be simple, but it does get deeper. Eventually you'll learn that this is a C style, and should be avoided in C++. However, a simple array like this does not provide an automatic means of knowing how many elements it contains. That is the reason for the array size.

However, there are workarounds - but I sense they should wait for now.

You use arraySize appropriately in the for loop, which is how the loop stops at the right point. If it continued past the end, it would read invalid memory (undefined behavior, quite possibly a crash).

Later, in C++, you'll use containers which do track the size. In reality, the container stores a counterpart to arraySize and increments it as you add new entries (which makes it dynamic, i.e. it changes at runtime).



Later, in C++, you'll use containers which do track the size.

IMHO C++ education should start to treat containers as basic, first class citizens and leave raw arrays and pointers as late, advanced topic.


Agreed :) :)

C++ isn't C with add-ons - which is how in many cases it seems to be taught. It's a language in its own right.
You need to pass the size of the array because without it, arrayToFile() and fileToArray() wouldn't know how many bytes to read/write.

By the way, "the size of the array" usually refers to the number of elements in it, not the number of bytes that it occupies, so it would be more typical for the code to be :
void arrayToFile(string fileName, int *array, int sizeArray)
{
    ...
    outputFile.write(reinterpret_cast<char *>(array), sizeArray*sizeof(int));
    ...
}
int main()
{
    ...
    arrayToFile("ziziMechant.dat", array, arraySize); 


To keskiverto and seeplus' point, what you're practicing is, actually, C.

Stroustrup agrees with the point that C is not only a separate language, but that a new student should really study C++ on its own, and in an order that avoids C style writing.

The counterpoint to this, which is weak and dated at this point, is that C++ itself was created as an extension to C, such that C programmers could and would adopt C++ over time, incrementally in their own work.

Indeed, it has been so as C++ evolved. C++14 and later are quite different from the older generation (C++98/03). At this point it is advisable to learn C++ as C++14 or C++17, or, if not right now, soon, C++20.

This works backwards all the way to C, because in most "real world" work one ends up on a codebase with a long history, so the programmer must be able to recognize and work in all of these versions, and in C.

I think it is somewhat easier for elder hands like myself, who learned C before C++ existed, in that our memory was built up episodically as the language evolved. Those of us who have remained current still remember the older ways, especially when we work on long-running projects (maintenance of existing products).

C was built specifically to write the UNIX operating system, in order to make it portable across CPUs and platforms. From that view it was considered a kind of portable assembler, though that isn't frequently how it is represented in modern terms. C works at a level close to the CPU, where most C statements translate readily into 1 or 2 machine language instructions, but that also means the programmer is managing all of the details associated with such low-level access to the hardware.

C++ built on top of that lineage, and so retains the opportunity to work at the level of performance of C, and of assembler to a great extent (arguments ensue on that claim), but with a tremendous leverage of higher level concepts which provide extreme safety and reliability (when used), as well as more rapid development (due to fewer bugs to track down, less work to handle typical tasks, etc).

Also, dhayden spotted a bug (I didn't even look for one).

That's the kind of bug that can take a while to recognize, especially for a beginner. It is also rather precisely the kind of bug C++ usage avoids.



Topic archived. No new replies allowed.