Fine-tune table-driven tokenizing.

Stephen Chung
2023-03-15 17:22:11 +08:00
parent 2aa7b99d1e
commit 41636eac55
11 changed files with 351 additions and 159 deletions


@@ -31,6 +31,7 @@ Enhancements
* Range cases in `switch` statements now also match floating-point and decimal values. In order to support this, however, small numeric range cases are no longer unrolled. (See the first sketch after this diff.)
* Loading a module via `import` now gives the module access to the current scope, including variables and constants defined inside.
* Some very simple operator calls (e.g. integer add) are short-circuited to avoid the overhead of a function call, resulting in a small speed improvement. (See the second sketch after this diff.)
* The tokenizer now uses table-driven keyword recognizers generated by GNU gperf. At least _theoretically_ it should be faster... (The technique is sketched after this diff.)
Version 1.12.0
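
The three sketches below illustrate the changelog entries above. First, the `switch` range change. This is a minimal sketch, assuming the `rhai` crate as a dependency, of one plausible reading of the entry: an integer range case now matching a floating-point value. The script and expected result are illustrative, not taken from this commit.

```rust
use rhai::Engine;

fn main() -> Result<(), Box<rhai::EvalAltResult>> {
    let engine = Engine::new();

    // Reading of the changelog entry: the integer range case `0..10`
    // now also matches the floating-point value 3.5.
    let result = engine.eval::<String>(
        r#"
            let x = 3.5;
            switch x {
                0..10 => "small",
                _ => "other"
            }
        "#,
    )?;

    assert_eq!(result, "small");
    Ok(())
}
```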
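Second, the operator short-circuiting entry. This is a hypothetical, self-contained sketch of the general idea: compute a few built-in operations inline and fall back to normal function-call dispatch for everything else. The `Value` enum and `fast_binary_op` name are invented for illustration; they are not Rhai's internal types.

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum Value {
    Int(i64),
    Float(f64),
}

/// Returns `Some(result)` when the operation has an inline fast path,
/// or `None` to fall back to a normal function-call lookup.
fn fast_binary_op(op: &str, lhs: Value, rhs: Value) -> Option<Value> {
    use Value::*;
    match (op, lhs, rhs) {
        // A handful of trivial built-in operations are computed inline
        // (wrapping arithmetic avoids overflow panics in the sketch).
        ("+", Int(a), Int(b)) => Some(Int(a.wrapping_add(b))),
        ("-", Int(a), Int(b)) => Some(Int(a.wrapping_sub(b))),
        ("+", Float(a), Float(b)) => Some(Float(a + b)),
        // Everything else takes the slow path through function dispatch.
        _ => None,
    }
}

fn main() {
    assert_eq!(
        fast_binary_op("+", Value::Int(40), Value::Int(2)),
        Some(Value::Int(42))
    );
    // No fast path registered: caller falls back to a function call.
    assert_eq!(fast_binary_op("*", Value::Int(6), Value::Int(7)), None);
}
```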
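Third, the table-driven keyword recognizer. This is a toy gperf-style sketch: hash a couple of bytes of the candidate identifier into a fixed table, then confirm with a single string comparison. The keyword set, hash function, and table below are invented for illustration and are not the tables generated in this commit.

```rust
// Sparse lookup table: each keyword occupies the slot its hash maps to.
const TABLE: [Option<&str>; 16] = {
    let mut t = [None; 16];
    t[8] = Some("if");
    t[15] = Some("else");
    t[10] = Some("fn");
    t[12] = Some("let");
    t[3] = Some("while");
    t[2] = Some("return");
    t
};

// Toy hash over the first and last bytes; the constants are chosen so the
// six keywords above land in distinct slots (gperf automates this search).
fn keyword_hash(s: &str) -> usize {
    let b = s.as_bytes();
    ((b[0] as usize) * 2 + (b[b.len() - 1] as usize)) % 16
}

fn is_keyword(ident: &str) -> bool {
    // One hash plus at most one string comparison, regardless of how many
    // keywords the table holds.
    !ident.is_empty() && TABLE[keyword_hash(ident)] == Some(ident)
}

fn main() {
    assert!(is_keyword("while"));
    // "whale" hashes to the same slot as "while" but fails the final compare.
    assert!(!is_keyword("whale"));
}
```

The appeal of the technique is that recognition cost stays constant as the keyword set grows, versus a chain of comparisons or a generic hash-map lookup; gperf's job is finding a collision-free hash over the exact keyword set at build time.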