source: stringclasses (470 values)
url: stringlengths (49 to 167)
file_type: stringclasses (1 value)
chunk: stringlengths (1 to 512)
chunk_id: stringlengths (5 to 9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#streamers
.md
>>> # Despite returning the usual output, the streamer will also print the generated text to stdout. >>> _ = model.generate(**inputs, streamer=streamer, max_new_tokens=20) An increasing sequence: one, two, three, four, five, six, seven, eight, nine, ten, eleven, ``` Streamer that stores print-ready text in a queue, to be used by a downstream application as an iterator. This is useful for applications that benefit from accessing the generated text in a non-blocking way (e.g. in an interactive
427_13_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#streamers
.md
useful for applications that benefit from accessing the generated text in a non-blocking way (e.g. in an interactive Gradio demo). <Tip warning={true}> The API for the streamer classes is still under development and may change in the future. </Tip> Parameters: tokenizer (`AutoTokenizer`): The tokenizer used to decode the tokens. skip_prompt (`bool`, *optional*, defaults to `False`): Whether to skip the prompt to `.generate()` or not. Useful e.g. for chatbots. timeout (`float`, *optional*):
427_13_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#streamers
.md
Whether to skip the prompt to `.generate()` or not. Useful e.g. for chatbots. timeout (`float`, *optional*): The timeout for the text queue. If `None`, the queue will block indefinitely. Useful to handle exceptions in `.generate()`, when it is called in a separate thread. decode_kwargs (`dict`, *optional*): Additional keyword arguments to pass to the tokenizer's `decode` method. Examples: ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer
427_13_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#streamers
.md
Examples: ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer >>> from threading import Thread
427_13_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#streamers
.md
>>> tok = AutoTokenizer.from_pretrained("openai-community/gpt2") >>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2") >>> inputs = tok(["An increasing sequence: one,"], return_tensors="pt") >>> streamer = TextIteratorStreamer(tok)
427_13_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#streamers
.md
>>> # Run the generation in a separate thread, so that we can fetch the generated text in a non-blocking way. >>> generation_kwargs = dict(inputs, streamer=streamer, max_new_tokens=20) >>> thread = Thread(target=model.generate, kwargs=generation_kwargs) >>> thread.start() >>> generated_text = "" >>> for new_text in streamer: ... generated_text += new_text >>> generated_text 'An increasing sequence: one, two, three, four, five, six, seven, eight, nine, ten, eleven,' ```
427_13_8
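The `skip_prompt` and `timeout` parameters described above combine naturally with the threaded pattern from the example. A minimal sketch (reusing the `tok`, `model` and `inputs` names from the example; the timeout value is arbitrary):

```python
from threading import Thread

from transformers import TextIteratorStreamer

# Skip echoing the prompt, decode without special tokens, and let the queue raise
# if no new text arrives within 10 seconds (e.g. generate() failed in the thread).
streamer = TextIteratorStreamer(tok, skip_prompt=True, timeout=10.0, skip_special_tokens=True)

generation_kwargs = dict(inputs, streamer=streamer, max_new_tokens=20)
thread = Thread(target=model.generate, kwargs=generation_kwargs)
thread.start()

for new_text in streamer:  # yields only newly generated text, not the prompt
    print(new_text, end="", flush=True)
thread.join()
```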
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#streamers
.md
>>> generated_text 'An increasing sequence: one, two, three, four, five, six, seven, eight, nine, ten, eleven,' ``` Streamer that stores print-ready text in a queue, to be used by a downstream application as an async iterator. This is useful for applications that benefit from accessing the generated text asynchronously (e.g. in an interactive Gradio demo). <Tip warning={true}> The API for the streamer classes is still under development and may change in the future. </Tip> Parameters:
427_13_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#streamers
.md
The API for the streamer classes is still under development and may change in the future. </Tip> Parameters: tokenizer (`AutoTokenizer`): The tokenizer used to decode the tokens. skip_prompt (`bool`, *optional*, defaults to `False`): Whether to skip the prompt to `.generate()` or not. Useful e.g. for chatbots. timeout (`float`, *optional*): The timeout for the text queue. If `None`, the queue will block indefinitely. Useful to handle exceptions in `.generate()`, when it is called in a separate thread.
427_13_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#streamers
.md
in `.generate()`, when it is called in a separate thread. decode_kwargs (`dict`, *optional*): Additional keyword arguments to pass to the tokenizer's `decode` method. Raises: TimeoutError: If token generation time exceeds timeout value. Examples: ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer, AsyncTextIteratorStreamer >>> from threading import Thread >>> import asyncio
427_13_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#streamers
.md
>>> tok = AutoTokenizer.from_pretrained("openai-community/gpt2") >>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2") >>> inputs = tok(["An increasing sequence: one,"], return_tensors="pt")
427_13_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#streamers
.md
>>> # Run the generation in a separate thread, so that we can fetch the generated text in a non-blocking way. >>> async def main(): ... # Important: AsyncTextIteratorStreamer must be initialized inside a coroutine! ... streamer = AsyncTextIteratorStreamer(tok) ... generation_kwargs = dict(inputs, streamer=streamer, max_new_tokens=20) ... thread = Thread(target=model.generate, kwargs=generation_kwargs) ... thread.start() ... generated_text = ""
427_13_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#streamers
.md
... thread = Thread(target=model.generate, kwargs=generation_kwargs) ... thread.start() ... generated_text = "" ... async for new_text in streamer: ... generated_text += new_text ... print(generated_text) >>> asyncio.run(main()) An increasing sequence: one, two, three, four, five, six, seven, eight, nine, ten, eleven, ```
427_13_14
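Because the async streamer raises `TimeoutError` when no new text arrives within `timeout` (as documented above), a consumer can guard the iteration. A hedged sketch, reusing `tok`, `model` and `inputs` from the example above:

```python
import asyncio
from threading import Thread

from transformers import AsyncTextIteratorStreamer


async def main():
    # Must be created inside a coroutine; 5 second timeout per chunk of text (arbitrary value).
    streamer = AsyncTextIteratorStreamer(tok, timeout=5.0)
    thread = Thread(target=model.generate, kwargs=dict(inputs, streamer=streamer, max_new_tokens=20))
    thread.start()
    try:
        async for new_text in streamer:
            print(new_text, end="", flush=True)
    except TimeoutError:
        # No new text within the timeout, e.g. because generate() raised in the thread.
        print("\n[generation timed out]")
    finally:
        thread.join()


asyncio.run(main())
```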
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
Base, abstract class for all caches. The actual data structure is specific to each subclass. - update Base, abstract class for all caches. The actual data structure is specific to each subclass. Config - update Configuration class for quantized cache settings. Attributes: backend (`str`, *optional*, defaults to `"quanto"`): Backend to use when performing quantization. Can be one of [`quanto`, `HQQ`]. nbits (`Optional[int]`, *optional*, defaults to 4):
427_14_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
nbits (`Optional[int]`, *optional*, defaults to 4): Number of bits, can be 2 or 4 for the `quanto` backend and one of [1, 2, 3, 4, 8] for the `HQQ` backend. axis_key (`int`, *optional*, defaults to 0): Axis over which to perform grouping for the key tensors. Can be [0, -1] for `quanto` backend and [0, 1] for `HQQ` backend. axis_value (`int`, *optional*, defaults to 0): Axis over which to perform grouping for the value tensors. Can be [0, -1] for `quanto` backend and [0, 1] for `HQQ` backend.
427_14_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
Axis over which to perform grouping for the value tensors. Can be [0, -1] for `quanto` backend and [0, 1] for `HQQ` backend. q_group_size (`Optional[int]`, *optional*, defaults to 64): Size of the quantization group, should be a divisor of the model's hidden dimension. Defaults to 64. residual_length (`Optional[int]`, *optional*, defaults to 128): Length of the residual cache which will always be stored in original precision. Defaults to 128.
427_14_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
Length of the residual cache which will always be stored in original precision. Defaults to 128. compute_dtype (`torch.dtype`, *optional*, defaults to `torch.float16`): The default dtype used for computations in the model. Keys and Values will be cast to this dtype after dequantization. device (`str`, *optional*, defaults to `"cpu"`): Device on which to perform computations, should be the same as the model's device. - validate
427_14_3
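To illustrate the attributes listed above, a `QuantizedCacheConfig` for the `quanto` backend can be assembled roughly as follows (a sketch; the values simply restate the documented defaults and the backend library must be installed):

```python
import torch

from transformers import QuantizedCacheConfig

# 4-bit, per-channel (axis 0) quantization with 64-element groups; the most recent
# 128 tokens stay in original precision (the "residual" cache) before being quantized.
cache_config = QuantizedCacheConfig(
    backend="quanto",
    nbits=4,
    axis_key=0,
    axis_value=0,
    q_group_size=64,
    residual_length=128,
    compute_dtype=torch.float16,
    device="cpu",
)
cache_config.validate()  # raises if the argument combination is not supported
```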
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
Device on which to perform computations, should be the same as the model's device. - validate A cache that grows dynamically as more tokens are generated. This is the default for generative models. It stores the Key and Value states as a list of tensors, one for each layer. The expected shape for each tensor is `[batch_size, num_heads, seq_len, head_dim]`. Example: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, DynamicCache
427_14_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
>>> model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct") >>> tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct") >>> inputs = tokenizer(text="My name is Qwen2", return_tensors="pt")
427_14_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
>>> # Prepare a cache class and pass it to model's forward >>> past_key_values = DynamicCache() >>> outputs = model(**inputs, past_key_values=past_key_values, use_cache=True) >>> outputs.past_key_values # access cache filled with key/values from generation DynamicCache() ``` - update - get_seq_length - reorder_cache - to_legacy_cache - from_legacy_cache
427_14_6
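The `to_legacy_cache` / `from_legacy_cache` methods listed above convert between the cache object and the legacy tuple-of-tuples format. A short sketch continuing the example above:

```python
from transformers import DynamicCache

# Convert the cache to the legacy format: a tuple of (key, value) tensor pairs, one per layer ...
legacy_cache = outputs.past_key_values.to_legacy_cache()

# ... and rebuild a DynamicCache from it, e.g. when interfacing with code that still
# produces or expects the legacy format.
past_key_values = DynamicCache.from_legacy_cache(legacy_cache)
print(past_key_values.get_seq_length())  # number of tokens currently cached
```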
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
DynamicCache() ``` - update - get_seq_length - reorder_cache - to_legacy_cache - from_legacy_cache A quantized cache similar to what is described in the [KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache paper](https://arxiv.org/abs/2402.02750). It allows the model to generate longer sequences without allocating too much memory for the Key and Value cache by applying quantization.
427_14_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
The cache has two types of storage, one for original precision and one for the quantized cache. A `residual length` is set as a maximum capacity for the original precision cache. When the length goes beyond the maximum capacity, the original precision cache is quantized and moved into the quantized cache. The quantization is done per-channel with a set `q_group_size` for both Keys and Values, in contrast to what was described in the paper.
427_14_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
It stores Keys and Values as a list of quantized tensors (tuples in case we need to store metadata), one for each layer. Additionally, it stores the original precision Key and Value states as a list of tensors, one for each layer. The size of each tensor is `[batch_size, num_heads, seq_len - residual_length, head_dim]` - update - get_seq_length Quantized Cache class that uses `quanto` as a backend to perform quantization. Current implementation supports `int2` and `int4` dtypes only. Parameters:
427_14_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
Parameters: cache_config (`QuantizedCacheConfig`): A configuration containing all the arguments to be used by the quantizer, including axis, qtype and group size. Example: ```python >>> # Run pip install quanto first if you don't have it yet >>> from transformers import AutoTokenizer, AutoModelForCausalLM, QuantoQuantizedCache, QuantizedCacheConfig
427_14_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
>>> model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct") >>> tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct") >>> inputs = tokenizer(text="My name is Qwen2", return_tensors="pt")
427_14_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
>>> # Prepare a cache class and pass it to model's forward >>> cache_config = QuantizedCacheConfig(nbits=4) >>> past_key_values = QuantoQuantizedCache(cache_config=cache_config) >>> outputs = model(**inputs, past_key_values=past_key_values, use_cache=True) >>> outputs.past_key_values # access cache filled with key/values from generation QuantoQuantizedCache() ``` Quantized Cache class that uses `HQQ` as a backend to perform quantization. Current implementation supports `int2`, `int4`, `int8` dtypes.
427_14_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
Parameters: cache_config (`QuantizedCacheConfig`): A configuration containing all the arguments to be used by the quantizer, including axis, qtype and group size. Example: ```python >>> # Run pip install hqq first if you don't have it yet >>> from transformers import AutoTokenizer, AutoModelForCausalLM, HQQQuantizedCache, QuantizedCacheConfig
427_14_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
>>> model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct") >>> tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct") >>> inputs = tokenizer(text="My name is Qwen2", return_tensors="pt")
427_14_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
>>> # Prepare a cache class and pass it to model's forward >>> cache_config = QuantizedCacheConfig(nbits=4, axis_key=1, axis_value=1) >>> past_key_values = HQQQuantizedCache(cache_config=cache_config) >>> outputs = model(**inputs, past_key_values=past_key_values, use_cache=True) >>> outputs.past_key_values # access cache filled with key/values from generation HQQQuantizedCache() ``` A cache as described in the [Attention Sinks paper](https://arxiv.org/abs/2309.17453). It allows the model to
427_14_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
``` A cache as described in the [Attention Sinks paper](https://arxiv.org/abs/2309.17453). It allows the model to generate beyond the length of its context window, without losing fluency in the conversation. As it discards past tokens, the model will lose the ability to generate tokens that depend on the context that was discarded. It stores the Key and Value states as a list of tensors, one for each layer. The expected shape for each tensor is `[batch_size, num_heads, seq_len, head_dim]`.
427_14_16
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
`[batch_size, num_heads, seq_len, head_dim]`. Parameters: window_length (`int`): The length of the context window. num_sink_tokens (`int`): The number of sink tokens. See the original paper for more information. Example: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, SinkCache
427_14_17
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
>>> model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct") >>> tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct") >>> inputs = tokenizer(text="My name is Qwen2", return_tensors="pt")
427_14_18
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
>>> # Prepare a cache class and pass it to model's forward >>> past_key_values = SinkCache(window_length=256, num_sink_tokens=4) >>> outputs = model(**inputs, past_key_values=past_key_values, use_cache=True) >>> outputs.past_key_values # access cache filled with key/values from generation SinkCache() ``` - update - get_seq_length - reorder_cache A drop-in replacement for DynamicCache that conserves GPU memory at the expense of more CPU memory.
427_14_19
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
- reorder_cache A drop-in replacement for DynamicCache that conserves GPU memory at the expense of more CPU memory. Useful for generating from models with very long context. In addition to the default CUDA stream, where all forward() computations happen, this class uses another stream, the prefetch stream, which it creates itself. Since scheduling of operations on separate streams happens independently, this class uses
427_14_20
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
Since scheduling of operations on separate streams happens independently, this class uses the prefetch stream to asynchronously prefetch the KV cache of layer k+1 when layer k is executing. The movement of the layer k-1 cache to the CPU is handled by the default stream as a simple way to ensure the eviction is scheduled after all computations on that cache are finished. - update - prefetch_layer - evict_previous_layer Static Cache class to be used with `torch.compile(model)` and `torch.export()`.
427_14_21
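OffloadedCache is usually enabled through `generate` rather than built by hand. A hedged sketch, assuming the `cache_implementation="offloaded"` option of `generate` available in recent transformers releases:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")

inputs = tokenizer("My name is Qwen2", return_tensors="pt")

# The KV tensors live on the CPU and are prefetched one layer ahead of the computation,
# as described above, trading CPU memory and transfer time for GPU memory.
outputs = model.generate(**inputs, max_new_tokens=20, cache_implementation="offloaded")
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```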
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
- prefetch_layer - evict_previous_layer Static Cache class to be used with `torch.compile(model)` and `torch.export()`. Parameters: config (`PretrainedConfig`): The configuration file defining the shape-related attributes required to initialize the static cache. batch_size (`int`): The batch size with which the model will be used. Note that a new instance must be instantiated if a
427_14_22
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
batch_size (`int`): The batch size with which the model will be used. Note that a new instance must be instantiated if a smaller batch size is used. If you are manually setting the batch size, make sure to take into account the number of beams if you are running beam search. max_cache_len (`int`): The maximum sequence length with which the model will be used. device (`torch.device` or `str`): The device on which the cache should be initialized. Should be the same as the layer.
427_14_23
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
device (`torch.device` or `str`): The device on which the cache should be initialized. Should be the same as the layer. dtype (`torch.dtype`, *optional*, defaults to `torch.float32`): The default `dtype` to use when initializing the layer. layer_device_map (`Dict[int, Union[str, torch.device, int]]`, *optional*): Mapping between the layers and their devices. This is required when you are manually initializing the cache and the model is split between different GPUs.
427_14_24
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
You can check which layers are mapped to which device through the associated device_map: `model.hf_device_map`. Example: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, StaticCache
427_14_25
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
>>> model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf") >>> tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf") >>> inputs = tokenizer(text="My name is Llama", return_tensors="pt")
427_14_26
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
>>> # Prepare a cache class and pass it to model's forward >>> # Leave empty space for 10 new tokens, which can be used when calling forward iteratively 10 times to generate >>> max_generated_length = inputs.input_ids.shape[1] + 10 >>> past_key_values = StaticCache(config=model.config, batch_size=1, max_cache_len=max_generated_length, device=model.device, dtype=model.dtype) >>> outputs = model(**inputs, past_key_values=past_key_values, use_cache=True)
427_14_27
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
>>> outputs = model(**inputs, past_key_values=past_key_values, use_cache=True) >>> outputs.past_key_values # access cache filled with key/values from generation StaticCache() ``` - update - get_seq_length - reset Static cache class to be used with `torch.compile(model)` that offloads to the CPU or another device. Args: config (`PretrainedConfig`): The configuration file defining the shape-related attributes required to initialize the static cache. max_batch_size (`int`):
427_14_28
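Instead of constructing `StaticCache` manually as in the example above, `generate` can also allocate it internally. A sketch under the assumption that the `cache_implementation="static"` option is available, as in recent releases:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

inputs = tokenizer("My name is Llama", return_tensors="pt")

# generate() sizes a fixed-shape StaticCache for us; the fixed shapes are what allow
# the decoding step to be compiled with torch.compile without recompilation.
outputs = model.generate(**inputs, max_new_tokens=10, cache_implementation="static")
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```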
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
The configuration file defining the shape-related attributes required to initialize the static cache. max_batch_size (`int`): The maximum batch size with which the model will be used. max_cache_len (`int`): The maximum sequence length with which the model will be used. device (`Union[str, torch.device]`): The device on which the cache should be initialized. Should be the same as the layer device. dtype (`torch.dtype`, *optional*): The default `dtype` to use when initializing the cache.
427_14_29
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
layer device. dtype (`torch.dtype`, *optional*): The default `dtype` to use when initializing the cache. offload_device (`Union[str, torch.device]`, *optional*, defaults to `cpu`): The device to offload to. Defaults to CPU. layer_device_map (`Dict[int, Union[str, torch.device, int]]`, *optional*): Mapping between the layers and their devices. This is required when you are manually initializing the cache and the model is split between different GPUs.
427_14_30
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
You can check which layers are mapped to which device through the associated device_map: `model.hf_device_map`. Attributes: key_cache (`List[torch.Tensor]`): Off-loaded key cache tensors. The first one will be on device, whereas the others are off-loaded. value_cache (`List[torch.Tensor]`): Off-loaded value cache tensors. The first one will be on device, whereas the others are off-loaded. max_batch_size (`int`): The maximum batch size with which this cache can be used. max_cache_len (`int`):
427_14_31
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
off-loaded. max_batch_size (`int`): The maximum batch size with which this cache can be used. max_cache_len (`int`): The maximum sequence length with which this cache can be used. device (`torch.device`): The device on which the cache is used. offload_device (`torch.device`): The device used to offload to. dtype (`torch.dtype`): The `dtype` used to initialize the cache. Example: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, OffloadedStaticCache
427_14_32
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
>>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2") >>> tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2") >>> inputs = tokenizer(text="My name is GPT2", return_tensors="pt")
427_14_33
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
>>> # Prepare a cache class and pass it to model's forward >>> # Leave empty space for 10 new tokens, which can be used when calling forward iteratively 10 times to generate >>> max_generated_length = inputs.input_ids.shape[1] + 10 >>> past_key_values = OffloadedStaticCache(config=model.config, max_batch_size=1, max_cache_len=max_generated_length, device=model.device, dtype=model.dtype) >>> outputs = model(**inputs, past_key_values=past_key_values, use_cache=True)
427_14_34
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
>>> outputs = model(**inputs, past_key_values=past_key_values, use_cache=True) >>> past_kv_length = outputs.past_key_values # access cache filled with key/values from generation ``` - update - get_seq_length - reset Hybrid Cache class to be used with `torch.compile` for Gemma2 models that alternate between a local sliding window attention and global attention in every other layer. Under the hood, Hybrid Cache leverages [`SlidingWindowCache`] for sliding window attention
427_14_35
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
and ["StaticCache"] for global attention. For more information, see the documentation of each subcomponeent cache class. Parameters: config (`PretrainedConfig): The configuration file defining the shape-related attributes required to initialize the static cache. batch_size (`int`): The batch size with which the model will be used. Note that a new instance must be instantiated if a smaller batch size is used. max_cache_len (`int`): The maximum sequence length with which the model will be used.
427_14_36
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
smaller batch size is used. max_cache_len (`int`): The maximum sequence length with which the model will be used. device (`torch.device` or `str`, *optional*, defaults to `"cpu"`): The device on which the cache should be initialized. Should be the same as the layer. dtype (`torch.dtype`, *optional*, defaults to `torch.float32`): The default `dtype` to use when initializing the layer. layer_device_map (`Dict[int, Union[str, torch.device, int]]`, *optional*):
427_14_37
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
layer_device_map (`Dict[int, Union[str, torch.device, int]]`, *optional*): Mapping between the layers and their devices. This is required when you are manually initializing the cache and the model is split between different GPUs. You can check which layers are mapped to which device through the associated device_map: `model.hf_device_map`. Example: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, HybridCache
427_14_38
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
>>> model = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b") >>> tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b") >>> inputs = tokenizer(text="My name is Gemma", return_tensors="pt")
427_14_39
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
>>> # Prepare a cache class and pass it to model's forward >>> # Leave empty space for 10 new tokens, which can be used when calling forward iteratively 10 times to generate >>> max_generated_length = inputs.input_ids.shape[1] + 10 >>> past_key_values = HybridCache(config=model.config, batch_size=1, max_cache_len=max_generated_length, device=model.device, dtype=model.dtype) >>> outputs = model(**inputs, past_key_values=past_key_values, use_cache=True)
427_14_40
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
>>> outputs = model(**inputs, past_key_values=past_key_values, use_cache=True) >>> outputs.past_key_values # access cache filled with key/values from generation HybridCache() ``` - update - get_seq_length - reset Sliding Window Cache class to be used with `torch.compile` for models like Mistral that support sliding window attention. Every time we try to update the cache, we compute the `indices` based on `cache_position >= self.config.sliding_window - 1`,
427_14_41
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
if true (which means the cache cannot hold all the old key/value states and the new states together because of the sliding window constraint), we need to do a cyclic shift based on `indices` to replace the oldest states with the new key/value states passed in. `to_shift` is only true once we are above `sliding_window`. Thus with `sliding_window == 64`: indices = (slicing + to_shift[-1].int()-1) % self.config.sliding_window tensor([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18,
427_14_42
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
tensor([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 0]) We overwrite the cache using these indices, then we always write at `cache_position` (clamped to `sliding_window`). Parameters: config (`PretrainedConfig`):
427_14_43
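The index arithmetic above can be reproduced in isolation. An illustrative sketch (not the library code itself) with `sliding_window = 64`:

```python
import torch

sliding_window = 64
# A position already past the window, so the cache must be shifted cyclically.
cache_position = torch.tensor([80])

slicing = torch.ones(sliding_window, dtype=torch.long).cumsum(0)  # [1, 2, ..., 64]
to_shift = cache_position >= sliding_window - 1                   # tensor([True])

indices = (slicing + to_shift[-1].int() - 1) % sliding_window
print(indices)  # tensor([ 1,  2, ..., 63,  0]): the oldest slot is rotated to the end
```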
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
Parameters: config (`PretrainedConfig`): The configuration file defining the shape-related attributes required to initialize the static cache. batch_size (`int`): The batch size with which the model will be used. Note that a new instance must be instantiated if a smaller batch size is used. max_cache_len (`int`): The maximum sequence length with which the model will be used. device (`torch.device` or `str`): The device on which the cache should be initialized. Should be the same as the layer.
427_14_44
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
device (`torch.device` or `str`): The device on which the cache should be initialized. Should be the same as the layer. dtype (`torch.dtype`, *optional*, defaults to `torch.float32`): The default `dtype` to use when initializing the layer. layer_device_map (`Dict[int, Union[str, torch.device, int]]`, *optional*): Mapping between the layers and their devices. This is required when you are manually initializing the cache and the model is split between different GPUs.
427_14_45
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
You can check which layers are mapped to which device through the associated device_map: `model.hf_device_map`. Example: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, SlidingWindowCache
427_14_46
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
>>> model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3") >>> tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3") >>> inputs = tokenizer(text="My name is Mistral", return_tensors="pt")
427_14_47
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
>>> # Prepare a cache class and pass it to model's forward >>> # Leave empty space for 10 new tokens, which can be used when calling forward iteratively 10 times to generate >>> max_generated_length = inputs.input_ids.shape[1] + 10 >>> past_key_values = SlidingWindowCache(config=model.config, batch_size=1, max_cache_len=max_generated_length, device=model.device, dtype=model.dtype) >>> outputs = model(**inputs, past_key_values=past_key_values, use_cache=True)
427_14_48
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
>>> outputs = model(**inputs, past_key_values=past_key_values, use_cache=True) >>> outputs.past_key_values # access cache filled with key/values from generation SlidingWindowCache() ``` - update - reset Base, abstract class for all encoder-decoder caches. Can be used to hold combinations of self-attention and cross-attention caches. Example: ```python >>> from transformers import AutoProcessor, AutoModelForCausalLM, DynamicCache, EncoderDecoderCache
427_14_49
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
>>> model = AutoModelForCausalLM.from_pretrained("openai/whisper-small") >>> processor = AutoProcessor.from_pretrained("openai/whisper-small") >>> inputs = processor(audio=YOUR-AUDIO, return_tensors="pt")
427_14_50
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
>>> # Prepare cache classes for encoder and decoder and pass it to model's forward >>> self_attention_cache = DynamicCache() >>> cross_attention_cache = DynamicCache() >>> past_key_values = EncoderDecoderCache(self_attention_cache, cross_attention_cache) >>> outputs = model(**inputs, past_key_values=past_key_values, use_cache=True) >>> outputs.past_key_values # access cache filled with key/values from generation EncoderDecoderCache() ``` - get_seq_length - to_legacy_cache - from_legacy_cache - reset
427_14_51
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
EncoderDecoderCache() ``` - get_seq_length - to_legacy_cache - from_legacy_cache - reset - reorder_cache Cache for the Mamba model, which does not have an attention mechanism or key/value states. Arguments: config (`PretrainedConfig`): The configuration file defining the shape-related attributes required to initialize the static cache. batch_size (`int`): The batch size with which the model will be used. Note that a new instance must be instantiated if a smaller batch size is used.
427_14_52
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
smaller batch size is used. dtype (`torch.dtype`, *optional*, defaults to `torch.float16`): The default `dtype` to use when initializing the layer. device (`torch.device` or `str`, *optional*): The device on which the cache should be initialized. Should be the same as the layer. Attributes: dtype: (`torch.dtype`): The default `dtype` used to initialize the cache. intermediate_size: (`int`): Model's intermediate_size taken from config. ssm_state_size: (`int`): Model's state_size taken from config.
427_14_53
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
Model's intermediate_size taken from config. ssm_state_size: (`int`): Model's state_size taken from config. conv_kernel_size: (`int`): Model's convolution kernel size taken from config. conv_states: (`torch.Tensor`): A tensor of shape `[layer_idx, batch_size, intermediate_size, conv_kernel_size]` that holds convolutional states. ssm_states: (`torch.Tensor`): A tensor of shape `[layer_idx, batch_size, intermediate_size, ssm_state_size]` that holds SSM states. Example: ```python
427_14_54
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
A tensor of shape `[layer_idx, batch_size, intermediate_size, ssm_state_size]` that holds SSM states. Example: ```python >>> from transformers import AutoTokenizer, MambaForCausalLM, MambaCache
427_14_55
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
>>> model = MambaForCausalLM.from_pretrained("state-spaces/mamba-130m-hf") >>> tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-130m-hf") >>> inputs = tokenizer(text="My name is Mamba", return_tensors="pt")
427_14_56
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
.md
>>> inputs = tokenizer(text="My name is Mamba", return_tensors="pt") >>> # Prepare a cache class and pass it to model's forward >>> past_key_values = MambaCache(config=model.config, batch_size=1, device=model.device, dtype=model.dtype) >>> outputs = model(**inputs, past_key_values=past_key_values, use_cache=True) >>> outputs.past_key_values MambaCache() ``` - update_conv_state - update_ssm_state - reset
427_14_57
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#watermark-utils
.md
Class that holds arguments for watermark generation and should be passed into `GenerationConfig` during `generate`. See [this paper](https://arxiv.org/abs/2306.04634) for more details on the arguments. Accepts the following keys: - greenlist_ratio (`float`): Used for watermarking. The ratio of "green" tokens relative to the vocabulary size. Defaults to 0.25. - bias (`float`): Used with watermarking. The bias added to the selected "green" tokens' logits. Defaults to 2.0. - hashing_key (`int`):
427_15_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#watermark-utils
.md
Used with watermarking. The bias added to the selected "green" tokens' logits. Defaults to 2.0. - hashing_key (`int`): Hashing key used for watermarking. Defaults to 15485863 (the millionth prime). - seeding_scheme (`str`): Algorithm to use for watermarking. Accepts values: - "lefthash" (default): "green" tokens selection depends on the last token (Algorithm 2 from the paper) - "selfhash": "green" tokens selection depends on the current token itself (Algorithm 3 from the paper)
427_15_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#watermark-utils
.md
- "selfhash": "green" tokens selection depends on the current token itself (Algorithm 3 from the paper) The downside of this scheme is that it considers all possible next tokens and can be slower than "lefthash". - context_width(`int`): The context length of previous tokens to use in seeding. Higher context length makes watermarking more robust. - __call__ Detector for detection of watermark generated text. The detector needs to be given the exact same settings that were
427_15_2
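A `WatermarkingConfig` using the keys above can be put together as follows (a sketch; the values mirror the defaults documented above, with `context_width` picked arbitrarily):

```python
from transformers import WatermarkingConfig

# "selfhash" considers all candidate next tokens, which is more robust but slower
# than the default "lefthash" scheme.
watermarking_config = WatermarkingConfig(
    greenlist_ratio=0.25,
    bias=2.0,
    hashing_key=15485863,
    seeding_scheme="selfhash",
    context_width=1,
)
# Passed to generate() through the watermarking_config argument, as in the
# detector example further below.
```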
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#watermark-utils
.md
Detector for the detection of watermark-generated text. The detector needs to be given the exact same settings that were given during text generation to replicate the watermark greenlist generation and so detect the watermark. This includes the correct device that was used during text generation, the correct watermarking arguments and the correct tokenizer vocab size. The code was based on the [original repo](https://github.com/jwkirchenbauer/lm-watermarking/tree/main).
427_15_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#watermark-utils
.md
The code was based on the [original repo](https://github.com/jwkirchenbauer/lm-watermarking/tree/main). See [the paper](https://arxiv.org/abs/2306.04634) for more information. Args: model_config (`PretrainedConfig`): The model config that will be used to get model specific arguments used when generating. device (`str`): The device which was used during watermarked text generation. watermarking_config (Union[`WatermarkingConfig`, `Dict`]):
427_15_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#watermark-utils
.md
The device which was used during watermarked text generation. watermarking_config (Union[`WatermarkingConfig`, `Dict`]): The exact same watermarking config and arguments used when generating text. ignore_repeated_ngrams (`bool`, *optional*, defaults to `False`): Whether to count every unique ngram only once or not. max_cache_size (`int`, *optional*, defaults to 128): The max size to be used for LRU caching of seeding/sampling algorithms called for every token. Examples: ```python
427_15_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#watermark-utils
.md
The max size to be used for LRU caching of seeding/sampling algorithms called for every token. Examples: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, WatermarkDetector, WatermarkingConfig
427_15_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#watermark-utils
.md
>>> model_id = "openai-community/gpt2" >>> model = AutoModelForCausalLM.from_pretrained(model_id) >>> tok = AutoTokenizer.from_pretrained(model_id) >>> tok.pad_token_id = tok.eos_token_id >>> tok.padding_side = "left" >>> inputs = tok(["This is the beginning of a long story", "Alice and Bob are"], padding=True, return_tensors="pt") >>> input_len = inputs["input_ids"].shape[-1]
427_15_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#watermark-utils
.md
>>> # first generate text with watermark and without >>> watermarking_config = WatermarkingConfig(bias=2.5, seeding_scheme="selfhash") >>> out_watermarked = model.generate(**inputs, watermarking_config=watermarking_config, do_sample=False, max_length=20) >>> out = model.generate(**inputs, do_sample=False, max_length=20)
427_15_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#watermark-utils
.md
>>> # now we can instantiate the detector and check the generated text >>> detector = WatermarkDetector(model_config=model.config, device="cpu", watermarking_config=watermarking_config) >>> detection_out_watermarked = detector(out_watermarked, return_dict=True) >>> detection_out = detector(out, return_dict=True) >>> detection_out_watermarked.prediction array([ True, True])
427_15_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#watermark-utils
.md
>>> detection_out.prediction array([False, False]) ``` - __call__ This is the configuration class to store the configuration of a [`BayesianDetectorModel`]. It is used to instantiate a Bayesian Detector model according to the specified arguments. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: watermarking_depth (`int`, *optional*): The number of tournament layers.
427_15_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#watermark-utils
.md
Args: watermarking_depth (`int`, *optional*): The number of tournament layers. base_rate (`float`, *optional*, defaults to 0.5): Prior probability P(w) that a text is watermarked. Bayesian classifier for watermark detection. This detector uses Bayes' rule to compute a watermarking score, which is the sigmoid of the log of the ratio of the posterior probabilities P(watermarked|g_values) and P(unwatermarked|g_values). Please see the section on BayesianScore in the paper for further details.
427_15_11
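Instantiating the configuration with the two arguments listed above is straightforward; a minimal sketch (the depth value is arbitrary):

```python
from transformers import BayesianDetectorConfig

# watermarking_depth: number of tournament layers; base_rate: prior P(watermarked).
config = BayesianDetectorConfig(watermarking_depth=30, base_rate=0.5)
```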
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#watermark-utils
.md
BayesianScore in the paper for further details. Paper URL: https://www.nature.com/articles/s41586-024-08025-4 Note that this detector only works with non-distortionary Tournament-based watermarking using the Bernoulli(0.5) g-value distribution. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
427_15_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#watermark-utils
.md
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`BayesianDetectorConfig`]): Model configuration class with all the parameters of the model.
427_15_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#watermark-utils
.md
Parameters: config ([`BayesianDetectorConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. - forward Class that holds arguments for watermark generation and should be passed into `GenerationConfig` during `generate`.
427_15_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#watermark-utils
.md
- forward Class that holds arguments for watermark generation and should be passed into `GenerationConfig` during `generate`. See [this paper](https://www.nature.com/articles/s41586-024-08025-4) for more details on the arguments. Args: ngram_len (`int`): Ngram length. keys (`List[int]`): A sequence of watermarking keys, one for each depth. context_history_size (`int`, *optional*, defaults to 1024): Size of the tensor to keep track of seen contexts.
427_15_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#watermark-utils
.md
context_history_size (`int`, *optional*, defaults to 1024): Size of the tensor to keep track of seen contexts. sampling_table_seed (`int`, *optional*, defaults to 0): Random seed to generate the sampling table. sampling_table_size (`int`, *optional*, defaults to 65536): Size of the sampling table. skip_first_ngram_calls (`bool`, *optional*, defaults to `False`): Whether to skip first ngram calls. debug_mode (`bool`, *optional*, defaults to `False`):
427_15_16
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#watermark-utils
.md
Whether to skip first ngram calls. debug_mode (`bool`, *optional*, defaults to `False`): Logits are modified to a uniform distribution before the watermarking modification is applied. This is to test the implementation. Examples: ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer, SynthIDTextWatermarkingConfig
427_15_17
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#watermark-utils
.md
>>> tokenizer = AutoTokenizer.from_pretrained('google/gemma-2-2b', padding_side="left") >>> model = AutoModelForCausalLM.from_pretrained('google/gemma-2-2b') >>> # SynthID Text configuration >>> watermarking_config = SynthIDTextWatermarkingConfig( ... keys=[654, 400, 836, 123, 340, 443, 597, 160, 57], ... ngram_len=5, ... )
427_15_18
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#watermark-utils
.md
>>> # Generation with watermarking >>> tokenized_prompts = tokenizer(["Once upon a time, "], return_tensors="pt", padding=True) >>> output_sequences = model.generate( ... **tokenized_prompts, watermarking_config=watermarking_config, do_sample=True, max_new_tokens=10 ... ) >>> watermarked_text = tokenizer.batch_decode(output_sequences, skip_special_tokens=True) ``` SynthID text watermark detector class. This class has to be initialized with a trained Bayesian detector module; check the script
427_15_19
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#watermark-utils
.md
This class has to be initialized with a trained Bayesian detector module; see the script in examples/synthid_text/detector_training.py for an example of training/saving/loading this detector module. The folder also showcases an example use case of this detector. Parameters: detector_module ([`BayesianDetectorModel`]): Bayesian detector module object initialized with parameters. Check examples/research_projects/synthid_text/detector_training.py for usage. logits_processor (`SynthIDTextWatermarkLogitsProcessor`):
427_15_20
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#watermark-utils
.md
logits_processor (`SynthIDTextWatermarkLogitsProcessor`): The logits processor used for watermarking. tokenizer (`Any`): The tokenizer used for the model. Examples: ```python >>> from transformers import ( ... AutoTokenizer, BayesianDetectorModel, SynthIDTextWatermarkLogitsProcessor, SynthIDTextWatermarkDetector ... )
427_15_21
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#watermark-utils
.md
>>> # Load the detector. See examples/research_projects/synthid_text for training a detector. >>> detector_model = BayesianDetectorModel.from_pretrained("joaogante/dummy_synthid_detector") >>> logits_processor = SynthIDTextWatermarkLogitsProcessor( ... **detector_model.config.watermarking_config, device="cpu" ... ) >>> tokenizer = AutoTokenizer.from_pretrained(detector_model.config.model_name) >>> detector = SynthIDTextWatermarkDetector(detector_model, logits_processor, tokenizer)
427_15_22
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#watermark-utils
.md
>>> # Test whether a certain string is watermarked >>> test_input = tokenizer(["This is a test input"], return_tensors="pt") >>> is_watermarked = detector(test_input.input_ids) ``` - __call__
427_15_23
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#compile-utils
.md
Class that holds arguments related to `torch.compile` behavior when using automatic compilation in `generate`. See [`torch.compile`](https://pytorch.org/docs/stable/generated/torch.compile.html) for more details on the arguments. Args: fullgraph (`bool`, *optional*, defaults to `True`): If `True`, requires that the whole forward pass be capturable in a single graph. dynamic (`bool` or `None`, *optional*): Whether to try to use dynamic shape graphs.
427_16_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#compile-utils
.md
dynamic (`bool` or `None`, *optional*): Whether to try to use dynamic shape graphs. backend (`str` or `Callable`, *optional*, defaults to `"inductor"`): Backend to be used. mode (`str`, *optional*, defaults to `"reduce-overhead"`): Controls balance between performance and overhead. options (`dict`, *optional*): A dictionary of options to pass to the backend. Examples: ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer, CompileConfig
427_16_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#compile-utils
.md
>>> tokenizer = AutoTokenizer.from_pretrained('google/gemma-2-2b') >>> model = AutoModelForCausalLM.from_pretrained('google/gemma-2-2b').cuda() >>> # Automatic compile configuration, used with static cache >>> compile_config = CompileConfig(dynamic=True)
427_16_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#compile-utils
.md
>>> # Automatic compile configuration, used with static cache >>> compile_config = CompileConfig(dynamic=True) >>> # Generation with static cache and compile config >>> input = tokenizer.encode("Hello there, how", return_tensors="pt").cuda() >>> output = model.generate( ... input, do_sample=False, max_new_tokens=300, cache_implementation="static", compile_config=compile_config ... ) >>> output_text = tokenizer.batch_decode(output, skip_special_tokens=True)[0] ``` - __call__
427_16_3
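The remaining `CompileConfig` fields described above (`fullgraph`, `backend`, `mode`, `options`) can be set the same way. A hedged sketch building on the example; the `options` entry is an assumed inductor option shown only for illustration:

```python
from transformers import CompileConfig

# Trade a longer warm-up for faster steady-state decoding, and forward extra backend options.
compile_config = CompileConfig(
    fullgraph=True,
    dynamic=False,
    backend="inductor",
    mode="max-autotune",
    options={"triton.cudagraphs": False},  # assumed option key, for illustration only
)
# Used exactly like before:
# model.generate(..., cache_implementation="static", compile_config=compile_config)
```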
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/time_series_utils.md
https://huggingface.co/docs/transformers/en/internal/time_series_utils/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
428_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/time_series_utils.md
https://huggingface.co/docs/transformers/en/internal/time_series_utils/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
428_0_1