Pre-tokenizers
BertPreTokenizer
BertPreTokenizer
This pre-tokenizer splits tokens on spaces and also on punctuation. Each occurrence of a punctuation character is treated separately.
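As a minimal sketch of this behavior using pre_tokenize_str() (the output in the comment is an approximation, not verbatim library output):

```python
from tokenizers.pre_tokenizers import BertPreTokenizer

pre_tokenizer = BertPreTokenizer()
# Splits on whitespace and isolates every punctuation character
print(pre_tokenizer.pre_tokenize_str("Hello, world!"))
# Roughly: [('Hello', (0, 5)), (',', (5, 6)), ('world', (7, 12)), ('!', (12, 13))]
```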
ByteLevel
class tokenizers.pre_tokenizers.ByteLevel
( add_prefix_space = True use_regex = True )
Parameters
- add_prefix_space (bool, optional, defaults to True) — Whether to add a space to the first word if there isn’t already one. This lets us treat “hello” exactly like “say hello”.
- use_regex (bool, optional, defaults to True) — Set this to False to prevent this pre-tokenizer from using the GPT-2 specific regexp for splitting on whitespace.
ByteLevel PreTokenizer
This pre-tokenizer takes care of replacing all bytes of the given string with a corresponding representation, as well as splitting into words.
Returns the alphabet used by this PreTokenizer.
Since the ByteLevel works as its name suggests, at the byte level, it encodes each byte value to a unique visible character. This means that there is a total of 256 different characters composing this alphabet.
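A short sketch of how this looks in practice; the exact rendered characters in the comment are approximate, and the alphabet() accessor is the method described above:

```python
from tokenizers.pre_tokenizers import ByteLevel

pre_tokenizer = ByteLevel(add_prefix_space=True, use_regex=True)
# Each byte is mapped to a visible character; spaces are typically rendered as "Ġ"
print(pre_tokenizer.pre_tokenize_str("Hello world"))
# Pieces look roughly like 'ĠHello' and 'Ġworld'

# The byte-level alphabet contains exactly 256 characters
print(len(ByteLevel.alphabet()))  # 256
```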
CharDelimiterSplit
This pre-tokenizer simply splits on the provided character. It works like .split(delimiter)
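A minimal sketch, assuming the delimiter is passed as the single constructor argument (approximate output in the comment):

```python
from tokenizers.pre_tokenizers import CharDelimiterSplit

pre_tokenizer = CharDelimiterSplit("-")
# The delimiter itself is dropped, like str.split("-")
print(pre_tokenizer.pre_tokenize_str("one-two-three"))
# Roughly: [('one', (0, 3)), ('two', (4, 7)), ('three', (8, 13))]
```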
Digits
This pre-tokenizer simply splits the digits into separate tokens
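A short sketch; the individual_digits flag used here belongs to the Digits constructor but is not listed in this section, so treat it as an assumption (approximate output in the comment):

```python
from tokenizers.pre_tokenizers import Digits

# With individual_digits=True every digit becomes its own token;
# with False (the default) consecutive digits stay together
pre_tokenizer = Digits(individual_digits=True)
print(pre_tokenizer.pre_tokenize_str("Call 911"))
# Roughly: [('Call ', (0, 5)), ('9', (5, 6)), ('1', (6, 7)), ('1', (7, 8))]
```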
Metaspace
class tokenizers.pre_tokenizers.Metaspace
( replacement = '▁' prepend_scheme = 'always' split = True )
Parameters
- replacement (str, optional, defaults to ▁) — The replacement character. Must be exactly one character. By default we use the ▁ (U+2581) meta symbol (same as in SentencePiece).
- prepend_scheme (str, optional, defaults to "always") — Whether to add a space to the first word if there isn’t already one. This lets us treat “hello” exactly like “say hello”. Choices: “always”, “never”, “first”. “first” means the space is only added on the first token (relevant when special tokens are used or another pre-tokenizer is used).
Metaspace pre-tokenizer
This pre-tokenizer replaces any whitespace with the provided replacement character. It then tries to split on these spaces.
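A minimal sketch; offsets are omitted in the comment because the replacement character changes the piece contents:

```python
from tokenizers.pre_tokenizers import Metaspace

pre_tokenizer = Metaspace(replacement="▁", prepend_scheme="always")
# Whitespace is replaced by the meta symbol, then used as the split point
print(pre_tokenizer.pre_tokenize_str("Hello there friend"))
# Pieces look roughly like '▁Hello', '▁there', '▁friend'
```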
PreTokenizer
Base class for all pre-tokenizers
This class is not supposed to be instantiated directly. Instead, any implementation of a PreTokenizer will return an instance of this class when instantiated.
Pre-tokenize a ~tokenizers.PyPreTokenizedString in-place.
This method allows modifying a PreTokenizedString to keep track of the pre-tokenization, and to leverage the capabilities of the PreTokenizedString. If you just want to see the result of the pre-tokenization of a raw string, you can use pre_tokenize_str()
Pre-tokenize the given string.
This method provides a way to visualize the effect of a PreTokenizer, but it does not keep track of the alignment, nor does it provide all the capabilities of the PreTokenizedString. If you need some of these, you can use pre_tokenize()
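To make the distinction concrete, a hedged sketch using Whitespace as the pre-tokenizer; constructing PreTokenizedString from a raw string and reading the result back with get_splits() are assumptions about the accessor, not calls shown in this section:

```python
from tokenizers import PreTokenizedString
from tokenizers.pre_tokenizers import Whitespace

pre_tokenizer = Whitespace()

# pre_tokenize_str(): quick visualization, returns (piece, offsets) pairs only
print(pre_tokenizer.pre_tokenize_str("Hello, world!"))

# pre_tokenize(): modifies a PreTokenizedString in place, keeping alignment information
pretok = PreTokenizedString("Hello, world!")
pre_tokenizer.pre_tokenize(pretok)
print(pretok.get_splits())  # assumed accessor for the recorded splits
```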
Punctuation
This pre-tokenizer simply splits on punctuation as individual characters.
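A quick sketch with the default constructor (approximate output in the comment):

```python
from tokenizers.pre_tokenizers import Punctuation

pre_tokenizer = Punctuation()
# Only punctuation is split off; whitespace is left untouched
print(pre_tokenizer.pre_tokenize_str("Hello, world!"))
# Roughly: [('Hello', (0, 5)), (',', (5, 6)), (' world', (6, 12)), ('!', (12, 13))]
```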
Sequence
This pre-tokenizer composes other pre_tokenizers and applies them in sequence
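For example, a sketch that chains whitespace splitting with digit splitting; passing the pre-tokenizers as a list is an assumption about the constructor, and the output is approximate:

```python
from tokenizers.pre_tokenizers import Digits, Sequence, Whitespace

# Pre-tokenizers are applied in the order given
pre_tokenizer = Sequence([Whitespace(), Digits()])
print(pre_tokenizer.pre_tokenize_str("gpt2 model"))
# Roughly: [('gpt', (0, 3)), ('2', (3, 4)), ('model', (5, 10))]
```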
Split
class tokenizers.pre_tokenizers.Split
( pattern behavior invert = False )
Parameters
- pattern (str or Regex) — A pattern used to split the string. Usually a string or a regex built with tokenizers.Regex. If you want to use a regex pattern, it has to be wrapped in a tokenizers.Regex, otherwise we consider it as a string pattern. For example, pattern=”|” means you want to split on | (imagine a csv file for example), while pattern=tokenizers.Regex(“1|2”) means you split on either ‘1’ or ‘2’.
- behavior (SplitDelimiterBehavior) — The behavior to use when splitting. Choices: “removed”, “isolated”, “merged_with_previous”, “merged_with_next”, “contiguous”.
- invert (bool, optional, defaults to False) — Whether to invert the pattern.
Split PreTokenizer
This versatile pre-tokenizer splits using the provided pattern and according to the provided behavior. The pattern can be inverted by making use of the invert flag.
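Two hedged sketches, one with a plain string pattern and one with a tokenizers.Regex pattern (outputs in the comments are approximate):

```python
from tokenizers import Regex
from tokenizers.pre_tokenizers import Split

# Plain string pattern: split on "|" and drop the delimiter
csv_like = Split(pattern="|", behavior="removed")
print(csv_like.pre_tokenize_str("a|b|c"))
# Roughly: [('a', (0, 1)), ('b', (2, 3)), ('c', (4, 5))]

# Regex pattern: must be wrapped in tokenizers.Regex; keep each match as its own piece
number_split = Split(pattern=Regex(r"\d+"), behavior="isolated")
print(number_split.pre_tokenize_str("abc123def"))
# Roughly: [('abc', (0, 3)), ('123', (3, 6)), ('def', (6, 9))]
```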
UnicodeScripts
This pre-tokenizer splits on characters that belong to different language families. It roughly follows https://github.com/google/sentencepiece/blob/master/data/Scripts.txt. In practice, Hiragana and Katakana are fused with Han, and 0x30FC is treated as Han too. This mimics the SentencePiece Unigram implementation.
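A short sketch; the exact split points depend on where the Unicode script changes, so no precise output is claimed here:

```python
from tokenizers.pre_tokenizers import UnicodeScripts

pre_tokenizer = UnicodeScripts()
# Splits where the script changes, e.g. between Latin and Hiragana
print(pre_tokenizer.pre_tokenize_str("Hello こんにちは"))
```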
Whitespace
This pre-tokenizer simply splits using the following regex: \w+|[^\w\s]+
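A final sketch of that regex in action (approximate output in the comment):

```python
from tokenizers.pre_tokenizers import Whitespace

pre_tokenizer = Whitespace()
# \w+ groups word characters, [^\w\s]+ groups runs of punctuation
print(pre_tokenizer.pre_tokenize_str("That's it!"))
# Roughly: [('That', (0, 4)), ("'", (4, 5)), ('s', (5, 6)), ('it', (7, 9)), ('!', (9, 10))]
```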