Any assistance? Lexical analyzer program

Please help. I'm definitely having trouble with this assignment.

For Program 2, the lexical analyzer, you will be provided with a description of the lexical syntax of the language. You will produce a lexical analysis function and a program to test it.

The lexical analyzer function must have the following calling signature:

Token getNextToken(istream& in, int& linenumber);

The first argument to getNextToken is a reference to an istream that the function should read from. The second argument to getNextToken is a reference to an integer that contains the current line number. getNextToken will update this integer every time it reads a newline. getNextToken returns a Token. A Token is a class that contains a TokenType, a string for the lexeme, and the line number that the token was found on.

A header file, tokens.h, will be provided for you. It contains a declaration for the Token class, and a declaration for all of the TokenType values. You MUST use the header file that is provided. You may NOT change it.

The lexical rules of the language are as follows:

1. The language has identifiers, which are defined to be a letter followed by zero or more letters or numbers. This will be the TokenType ID.

2. The language has integer constants, which are defined to be one or more digits. This will be the TokenType ICONST.

3. The language has string constants, which are a double-quoted sequence of characters, all on the same line. This will be the TokenType SCONST.

4. A string constant can include escape sequences: a backslash followed by a character. The sequence \n should be interpreted as a newline. The sequence \\ should be interpreted as a backslash. All other escapes should simply be interpreted as the character after the backslash.

5. The language has reserved the keywords print, set, if, loop, begin, end. They will be TokenTypes PRINT SET IF LOOP BEGIN END.

6. The language has several operators. They are + - * / ( ) which will be TokenTypes PLUS MINUS STAR SLASH LPAREN RPAREN

7. The language recognizes a semicolon as the token SC

8. The language recognizes a newline as the token NL

9. A comment is all characters from a # to the end of the line; it is ignored and is not returned as a token. NOTE that a # in the middle of an SCONST is NOT a comment!

10. Whitespace between tokens can be used for readability. It serves to delimit tokens.

11. An error will be denoted by the ERR token.

12. End of file will be denoted by the DONE token.
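To make rule 4 concrete, here is one way you could expand the escape sequences once the characters between the quotes have been read. The function name and the read-then-expand approach are just an illustration, not part of the assignment (you could equally well expand escapes while scanning):

```cpp
#include <string>

// Expand escape sequences in an SCONST body (the text between the
// quotes). Assumes the lexer has already consumed the surrounding
// double quotes. \n becomes a newline, \\ becomes a backslash, and
// any other \x becomes just x.
std::string expandEscapes(const std::string& body) {
    std::string out;
    for (std::size_t i = 0; i < body.size(); ++i) {
        if (body[i] == '\\' && i + 1 < body.size()) {
            char next = body[++i];          // skip the backslash
            if (next == 'n') out += '\n';   // \n -> newline
            else out += next;               // \\ -> \ ; \q -> q ; etc.
        } else {
            out += body[i];
        }
    }
    return out;
}
```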

Note that any error detected by the lexical analyzer should result in the ERR token, with the lexeme value equal to the string recognized when the error was detected.

Note also that both ERR and DONE are unrecoverable.

Once the getNextToken function returns a Token for either of these token types, you shouldn’t call getNextToken again. The assignment is to write the lexical analyzer function and some test code around it.

It is a good idea to implement the lexical analyzer in one source file, and the main test program in another source file.

The test code is a main() program that takes several command line arguments:

-v (optional): if present, every token is printed when it is seen
-strings (optional): if present, print out all the string constants in alphabetical order
-ids (optional): if present, print out all of the identifiers in alphabetical order
filename (optional): if present, read from the filename; otherwise read from standard in

Note that no other flags (arguments that begin with a dash) are permitted. If an unrecognized flag is present, the program should print “UNRECOGNIZED FLAG {arg}”, where {arg} is whatever flag was given, and it should stop running.

At most one filename can be provided, and it must be the last command line argument. If more than one filename is provided, the program should print “ONLY ONE FILE NAME ALLOWED” and it should stop running. If the program cannot open a filename that is given, the program should print “CANNOT OPEN {arg}”, where {arg} is the filename given, and it should stop running.
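As a sketch of those argument rules, the parsing might look something like this. The Options struct and the parseArgs name are made up for illustration, and this simplified version does not reject a flag that appears after the filename, which the real program should:

```cpp
#include <string>
#include <vector>

// Holds the result of command-line parsing (illustrative only).
struct Options {
    bool v = false, strings = false, ids = false;
    std::string filename;  // empty means read from standard in
    std::string error;     // empty means the arguments were valid
};

Options parseArgs(const std::vector<std::string>& args) {
    Options opt;
    for (const std::string& a : args) {
        if (a == "-v") opt.v = true;
        else if (a == "-strings") opt.strings = true;
        else if (a == "-ids") opt.ids = true;
        else if (!a.empty() && a[0] == '-')
            opt.error = "UNRECOGNIZED FLAG " + a;   // unknown flag
        else if (!opt.filename.empty())
            opt.error = "ONLY ONE FILE NAME ALLOWED";
        else
            opt.filename = a;
        if (!opt.error.empty()) break;  // stop at the first error
    }
    return opt;
}
```

In main() you would build the vector from argv[1]..argv[argc-1], print opt.error and exit if it is nonempty, and otherwise try to open opt.filename (printing “CANNOT OPEN {arg}” on failure).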

The program should repeatedly call getNextToken until it returns DONE or ERR. If it returns DONE, the program proceeds to handling the -strings and -ids options, in that order. It should then print summary information and exit.

If getNextToken returns ERR, the program should print “Error on line N ({lexeme})”, where N is the line number for the token and lexeme is the lexeme from the token, and it should stop running. If the -v option is present, the program should print each token as it is read and recognized, one token per line.

The output format for a token is the token name in all capital letters (for example, the token LPAREN should be printed out as the string LPAREN). In the case of the tokens ID, ICONST, and SCONST, the token name should be followed by a space and the lexeme in parentheses. For example, if the identifier “hello” is recognized, the -v output for it would be ID (hello).

The -strings option should cause the program to print STRINGS: on a line by itself, followed by every string constant found, one string per line, in alphabetical order. If there are no SCONSTs in the input, then nothing is printed.

The -ids option should cause the program to print IDENTIFIERS: followed by a comma-separated list of every identifier found, in alphabetical order. If there are no IDs in the input, then nothing is printed.

The summary information is as follows:

Total lines: L
Total tokens: N

Where L is the number of input lines and N is the number of tokens (not counting DONE). If L is zero, no further lines are printed.
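One convenient way to get the alphabetical order is to collect the lexemes in a std::set<std::string>, which keeps them sorted as you insert. Note that a set also drops duplicates; if repeated identifiers should be printed more than once, use std::multiset instead. A small formatting helper (the name formatIds is made up here) for the -ids line might be:

```cpp
#include <set>
#include <sstream>
#include <string>

// Build the -ids output line from a set of identifiers. The set
// iterates in sorted order, so the list comes out alphabetical.
// The caller should skip this entirely when the set is empty,
// since the spec says nothing is printed in that case.
std::string formatIds(const std::set<std::string>& ids) {
    std::ostringstream out;
    out << "IDENTIFIERS:";
    bool first = true;
    for (const std::string& id : ids) {
        out << (first ? " " : ", ") << id;  // comma-separated list
        first = false;
    }
    return out.str();
}
```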

● Compiles
● Argument error cases
● Files that cannot be opened
● Too many filenames
● Properly handles a zero length file
● Recognizes keywords and identifiers
● Summary information
● -v mode
● Recognizes all remaining tokens
● Recognizes string with a newline in it as an error
● Recognizes string with a # in it as a string, not a comment
● Recognizes single character token types
● Supports -strings and -ids


#ifndef TOKENS_H_
#define TOKENS_H_

#include <string>
#include <iostream>
using std::string;
using std::istream;
using std::ostream;

enum TokenType {
	// keywords
	PRINT,
	SET,
	IF,
	LOOP,
	BEGIN,
	END,

	// an identifier
	ID,

	// an integer and string constant
	ICONST,
	SCONST,

	// the operators, parens, semicolon, newline
	PLUS,   // a +
	MINUS,  // a -
	STAR,   // a *
	SLASH,  // a /
	LPAREN, // a (
	RPAREN, // a )
	SC,     // a semicolon
	NL,     // a newline

	// any error returns this token
	ERR,

	// when completed (EOF), return this token
	DONE
};

class Token {
	TokenType tt;
	string lexeme;
	int lnum;

public:
	Token() {
		tt = ERR;
		lnum = -1;
	}
	Token(TokenType tt, string lexeme, int line) {
		this->tt = tt;
		this->lexeme = lexeme;
		this->lnum = line;
	}

	bool operator==(const TokenType tt) const { return this->tt == tt; }
	bool operator!=(const TokenType tt) const { return this->tt != tt; }

	TokenType GetTokenType() const { return tt; }
	string GetLexeme() const { return lexeme; }
	int GetLinenum() const { return lnum; }
};

extern ostream& operator<<(ostream& out, const Token& tok);

extern Token getNextToken(istream& in, int& linenum);

#endif /* TOKENS_H_ */
Where exactly are you stuck?
Honestly, I'm having trouble understanding the task. The assignment is hard for me to follow. Anything to get my code going or to help would be awesome.
You only need to create the bodies for the functions declared in the header file. All the extra verbiage is to explain what the functions should do. It looks like a lot, but the basic idea is to simply be able to identify the pieces of input.

Sometimes it helps to rewrite things with examples.

For example:

ID:      a   x1   point
ICONST:  9  34  7000
SCONST:  ""  "Hello"  "Hello \"Carminuch\"!\nHow are you?"
PRINT:   print
SET:     set

And so on.

The purpose now is to read a text file that contains these things, and convert them into a list of Token. For example, here is a short program to multiply two numbers:
set x 2;
set y 3;
set product x * y;
print x; print " times "; print y; print " equals "; print product; print "\n";

Let’s tokenize it:
SET "set"
ID "x"
SC ";"
SET "set"
ID "y"
SC ";"

and so on.

Notice that a token has two parts: the type and the lexeme. The type is the funny name used to identify what kind of thing the text is. The lexeme is the text itself.

So both "x" and "y" are lexemes, and they are also both IDs.

"set" is also a lexeme, but its type is SET.

getNextToken() reads the next lexeme from the file and identifies its token type.

If you read a newline, you return a NL token and increment the line number.

All this stuff is the basics of writing a programming language. The tokenizer breaks up the human-readable text into pieces that the computer can reason about.

Hope this helps.