Remove tokenization warning
kg583 authored Jun 23, 2024
1 parent 4af5ecf · commit 32897f4
Showing 1 changed file with 0 additions and 3 deletions.
README.md: 0 additions & 3 deletions
@@ -241,9 +241,6 @@ img.show()

Functions to decode and encode strings into tokens can be found in `tivars.tokenizer`. These functions utilize the [TI-Toolkit token sheets](https://github.com/TI-Toolkit/tokens), which are kept as a submodule in `tivars.tokens`. Support currently exists for all models in the 82/83/84 series; PRs concerning the sheets themselves should be directed upstream.

> [!IMPORTANT]
> In contrast to some other tokenizers like SourceCoder, tokenization does _not_ depend on whether the content appears inside a BASIC string literal. Text is always assigned to the _longest_ permissible token.
## Documentation

Library documentation can be found on [GitHub Pages](https://ti-toolkit.github.io/tivars_lib_py/).
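The note removed above describes the library's longest-match tokenization rule. As a rough illustration of that rule only (this is not the library's actual API), the sketch below greedily assigns each piece of input text to the longest token that matches; the token table and byte values are hypothetical stand-ins for entries from the TI-Toolkit token sheets.

```python
# Illustrative sketch of greedy longest-match tokenization.
# This is NOT the tivars_lib_py API; the token table and byte values below
# are hypothetical stand-ins for entries from the TI-Toolkit token sheets.
TOKENS = {
    "sin(": b"\xC2",  # placeholder byte value
    "s": b"\x01",     # placeholder byte value
    "i": b"\x02",     # placeholder byte value
    "n": b"\x03",     # placeholder byte value
    "(": b"\x04",     # placeholder byte value
}

def tokenize(text: str) -> bytes:
    """Encode text by always taking the longest matching token at each position."""
    out = bytearray()
    i = 0
    while i < len(text):
        # Try the longest remaining slice first, shrinking until a token matches.
        for j in range(len(text), i, -1):
            if text[i:j] in TOKENS:
                out += TOKENS[text[i:j]]
                i = j
                break
        else:
            raise ValueError(f"no token matches input at position {i}")
    return bytes(out)

# "sin(" always becomes the single sin( token, even when it appears inside
# what looks like a BASIC string literal.
assert tokenize("sin(") == b"\xC2"
assert tokenize("ni(") == b"\x03\x02\x04"
```

This greedy behavior is why, in contrast to tokenizers such as SourceCoder, the result does not depend on whether the text sits inside a string literal.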
