Hello, I'm having trouble returning the correct number of tokens in a file.
For example, the input of the file is as follows:
1 2 3 4
"a regular string""skipping\nto a new line""this is a \\ backslash""\d\i\d\ \t\h\i\s\ \w\o\r\k\? ... \"maybe\""
The output should be different: my program reports 6 tokens, although the line count it reports is correct.
So, I printed what the program is actually reading, and this is what it shows:
1 2 3 4 5 6
SCONST(a regular string)
SCONST(skipping\nto a new line)
SCONST(this is a \\ backslash)
SCONST(\d\i\d\ \t\h\i\s\ \w\o\r\k\? ... )
The problem is that I think it is mishandling the backslash before the double quotes: everything between the outer quotes is supposed to be one whole string (SCONST). Instead, when my program sees a double quote it treats it as the end of the string and starts a new one, so the strings get counted separately.
When you read the escape character ('\'), you want to read the next character and treat it as the actual character read. That is the character you want to place in lexeme.
You need to figure out the order of evaluation so that an escaped '"' is treated as a normal character and not as the end of a token.
So pull out pencil and paper and figure out what characters you want placed in lexeme for various inputs. You need to figure out which special conditions to test first (probably '\' first) and when to add the character to lexeme. Take your time and write it out. Don't expect one of us to do your analysis for you. Once you figure out your strategy (also called 'design'), coding will be pretty straightforward.
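As a minimal sketch of that order of evaluation (the function name and the decision to take the escaped character literally are my assumptions, not part of your assignment): check for '\' before checking for '"', so an escaped quote goes into lexeme instead of ending the token. Depending on your assignment you may instead need to translate escapes like \n into their control characters.

```cpp
#include <cassert>
#include <istream>
#include <sstream>
#include <string>

// Sketch: read one SCONST, assuming the opening '"' was already consumed.
// The '\' test comes FIRST, so an escaped '"' is stored as a plain
// character rather than ending the token.
std::string readSconst(std::istream& in) {
    std::string lexeme;
    char c;
    while (in.get(c)) {
        if (c == '\\') {                 // escape: take the NEXT char literally
            char next;
            if (!in.get(next)) break;    // '\' right before EOF: stop
            lexeme += next;              // \" and \\ become " and \ in lexeme
        } else if (c == '"') {           // an UNescaped quote ends the token
            break;
        } else {
            lexeme += c;                 // ordinary character
        }
    }
    return lexeme;
}
```

Trace your sample line through this by hand and you should get four SCONST tokens instead of six.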
Also remember to use 'else' clauses to bypass code when logically necessary.
Something else to consider: what happens when '\' is the last character of the line? You should probably skip the NL (and the CR if on Windows) and move on to the next line, appending it to lexeme.
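That line-continuation case can be handled inside the escape branch. Here is one way to sketch it (the helper name is mine; it assumes it is called right after a '\' has been consumed): a newline after the backslash contributes nothing to lexeme, and a CR is checked for a following LF so CRLF is swallowed as one line ending.

```cpp
#include <cassert>
#include <istream>
#include <sstream>
#include <string>

// Sketch: called right after reading '\'. Decides what, if anything,
// the escape contributes to lexeme.
void handleEscape(std::istream& in, std::string& lexeme) {
    char next;
    if (!in.get(next)) return;            // '\' at EOF: nothing to escape
    if (next == '\r') {                   // Windows line ending: eat the LF too
        if (in.peek() == '\n') in.get();
        return;                           // continuation: append nothing
    }
    if (next == '\n') return;             // Unix line ending: continuation
    lexeme += next;                       // ordinary escape: take char literally
}
```

With this, a string split across lines by a trailing backslash comes back as one lexeme.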
Parsing is not easy and takes quite a bit of thought and analysis.