Because the tokenizer only returns pointers to the beginning and the end of the token, calculating line numbers/column offsets is more complicated than it needs to be.
Feature or enhancement
Instead of the tokenizer returning a token type and setting pointers, we want it to return the token type (as it does now) and also set a struct token that has the following information:
Pointers to beginning and end
Location information (lineno, col_offset, etc.)
Level (the level in the parenstack)
This way the parser will have a much easier job of setting line numbers and column offsets on the generated AST nodes, and it will make some of our work on f-string parsing easier.
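For reference, here is a minimal sketch of what such a struct could look like. The field names are assumptions for illustration; the actual layout in the CPython tokenizer may differ.

```c
/* Hypothetical sketch only: field names are assumptions, not the
   actual CPython definition. */
typedef struct {
    const char *start;      /* pointer to the beginning of the token in the input buffer */
    const char *end;        /* pointer to the end of the token */
    int lineno;             /* line on which the token starts */
    int col_offset;         /* column offset where the token starts */
    int end_lineno;         /* line on which the token ends */
    int end_col_offset;     /* column offset where the token ends */
    int level;              /* nesting level in the paren stack */
} token;
```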
Right now, the tokenizer only returns the type and two pointers to the start and end of the token. This PR modifies the tokenizer to return the type and set all of the necessary information, so that the parser does not have to do this.
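As a rough illustration of the intended flow (not the real CPython internals; get_next_token and the trimmed-down token struct below are stand-ins invented for this sketch), the tokenizer fills in the location data and the parser simply copies it:

```c
#include <stdio.h>
#include <string.h>

/* Stand-in struct, trimmed to the location fields discussed above. */
typedef struct {
    const char *start;
    const char *end;
    int lineno;
    int col_offset;
    int end_lineno;
    int end_col_offset;
    int level;
} token;

/* Toy stand-in for the tokenizer: it fills in the whole struct, so the
   caller never recomputes line/column info from the raw pointers. */
static int get_next_token(const char *source, token *t) {
    t->start = source;
    t->end = source + strcspn(source, " \n");   /* end of the first word */
    t->lineno = 1;
    t->col_offset = 0;
    t->end_lineno = 1;
    t->end_col_offset = (int)(t->end - t->start);
    t->level = 0;
    return 1;                                    /* made-up token type */
}

int main(void) {
    token t;
    int type = get_next_token("answer = 42", &t);

    /* The parser side: location info is copied, not recalculated. */
    printf("type=%d text=%.*s line=%d cols=%d..%d\n",
           type, (int)(t.end - t.start), t.start,
           t.lineno, t.col_offset, t.end_col_offset);
    return 0;
}
```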