This class facilitates the application of SynthID text watermarking, a method for embedding imperceptible signals
into generated text to aid in detecting synthetic content. It operates by subtly manipulating the probabilities of
token selection during text generation in a manner that can be reliably recovered later for verification.
Key Features:
* **State Management:** Maintains internal state to track token sequences and generate watermarking keys
dynamically.
* **Key Generation:** Computes hashes based on token sequences and watermarking parameters to create unique keys
for each position.
* **G-Value Sampling:** Employs a pre-computed sampling table to sample watermarking values (g-values) based on
the generated keys.
* **Score Adjustment:** Applies calculated g-values to modify token probabilities during generation, embedding the
watermark.
* **Context Repetition Handling:** Incorporates logic to avoid watermarking tokens in repeated contexts,
preserving naturalness.
* **EOS Token Masking:** Supports masking end-of-sentence tokens to prevent their inclusion in watermarking
calculations.
* **Utility Functions:** Provides functions to compute g-values directly, check for context repetition, create
EOS token masks, and estimate expected mean g-values.
Refer to paper url: https://www.nature.com/articles/s41586-024-08025-4 for more details around this.
Args:
ngram_len (`int`):
Ngram length.
keys (`List[int]`):
A sequence of watermarking keys, one for each depth.
sampling_table_size (`int`):
Size of the sampling table.
sampling_table_seed (`int`):
Random seed to generate the sampling table.
context_history_size (`int`):
Size of the tensor to keep track of seen contexts.
device (`torch.device`):
Device to use.
skip_first_ngram_calls (`bool`, *optional*, defaults to `False`):
Whether to skip first ngram calls.
debug_mode (`bool`, *optional*, defaults to `False`):
Logits are modified to a uniform distribution before the watermarking modification is applied. This is to test the
implementation.
Examples:
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, SynthIDTextWatermarkingConfig

>>> tokenizer = AutoTokenizer.from_pretrained('google/gemma-2-2b', padding_side="left")
>>> model = AutoModelForCausalLM.from_pretrained('google/gemma-2-2b')
>>> # SynthID Text configuration
>>> watermarking_config = SynthIDTextWatermarkingConfig(
... keys=[654, 400, 836, 123, 340, 443, 597, 160, 57],
... ngram_len=5,
... )

>>> # Generation with watermarking
>>> tokenized_prompts = tokenizer(["Once upon a time, "], return_tensors="pt", padding=True)
>>> output_sequences = model.generate(
... **tokenized_prompts, watermarking_config=watermarking_config, do_sample=True, max_new_tokens=10
... )
>>> watermarked_text = tokenizer.batch_decode(output_sequences, skip_special_tokens=True)
```
- __call__
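To make the key generation and g-value sampling described above more concrete, here is a purely illustrative sketch. The helper names, the `hash`-based keying, and the two-bucket g-values are stand-ins for the real keyed hash and pre-computed sampling table; this is not the actual implementation.

```python
import torch

def toy_g_values(ngram: tuple, keys: list, num_buckets: int = 2) -> torch.Tensor:
    # Stand-in for keyed hashing + sampling-table lookup: one g-value per watermarking key.
    return torch.tensor([hash(ngram + (key,)) % num_buckets for key in keys], dtype=torch.float)

def toy_watermark_scores(scores: torch.Tensor, context: list, keys: list, ngram_len: int = 5) -> torch.Tensor:
    # For each candidate token, hash the (ngram_len - 1) previous tokens plus the candidate
    # and nudge its score by the mean g-value, which a detector can later look for.
    adjusted = scores.clone()
    prev = tuple(context[-(ngram_len - 1):])
    for token_id in range(scores.shape[-1]):
        adjusted[..., token_id] += toy_g_values(prev + (token_id,), keys).mean()
    return adjusted
```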
[`LogitsProcessor`] for temperature (exponential scaling output probability distribution), which effectively means
that it can control the randomness of the predicted tokens. Often used together with [`TopPLogitsWarper`] and
[`TopKLogitsWarper`].
<Tip>
Make sure that `do_sample=True` is included in the `generate` arguments otherwise the temperature value won't have
any effect.
</Tip>
Args:
temperature (`float`):
Strictly positive float value used to modulate the logits distribution. A value smaller than `1` decreases
randomness (and vice versa), with `0` being equivalent to shifting all probability mass to the most likely
token.
Examples:
```python
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed

>>> set_seed(0)  # for reproducibility
>>> tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
>>> model.config.pad_token_id = model.config.eos_token_id
>>> inputs = tokenizer(["Hugging Face Company is"], return_tensors="pt")

>>> # With temperature=1.0, the default, we consistently get random outputs due to random sampling.
>>> generate_kwargs = {"max_new_tokens": 10, "do_sample": True, "temperature": 1.0, "num_return_sequences": 2}
>>> outputs = model.generate(**inputs, **generate_kwargs)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['Hugging Face Company is one of these companies that is going to take a',
"Hugging Face Company is a brand created by Brian A. O'Neil"] | 427_7_118 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md | https://huggingface.co/docs/transformers/en/internal/generation_utils/#pytorch | .md | >>> # However, with temperature close to 0, it approximates greedy decoding strategies (invariant)
>>> generate_kwargs["temperature"] = 0.0001
>>> outputs = model.generate(**inputs, **generate_kwargs)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['Hugging Face Company is a company that has been around for over 20 years',
'Hugging Face Company is a company that has been around for over 20 years']
```
- __call__
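The exponential rescaling can be seen directly with a standalone sketch (independent of the processor class): dividing the logits by the temperature before the softmax flattens the distribution for values above 1 and sharpens it for values below 1.

```python
import torch

logits = torch.tensor([2.0, 1.0, 0.1])
for temperature in (1.5, 1.0, 0.1):
    # Higher temperature -> flatter distribution; lower temperature -> probability mass on the argmax.
    print(temperature, torch.softmax(logits / temperature, dim=-1))
```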
[`LogitsProcessor`] that performs top-k, i.e. restricting to the k highest probability elements. Often used
together with [`TemperatureLogitsWarper`] and [`TopPLogitsWarper`].
Args:
top_k (`int`):
The number of highest probability vocabulary tokens to keep for top-k-filtering.
filter_value (`float`, *optional*, defaults to -inf):
All filtered values will be set to this float value.
min_tokens_to_keep (`int`, *optional*, defaults to 1):
Minimum number of tokens that cannot be filtered.
Examples:
```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed

>>> set_seed(1)
>>> model = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2")
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilgpt2")
>>> inputs = tokenizer("A sequence: A, B, C, D", return_tensors="pt")
>>> # With sampling, the output is unexpected -- sometimes too unexpected.
>>> outputs = model.generate(**inputs, do_sample=True)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
A sequence: A, B, C, D, E — S — O, P — R

>>> # With `top_k` sampling, the output gets restricted to the k most likely tokens.
>>> # Pro tip: In practice, LLMs use `top_k` in the 5-50 range.
>>> outputs = model.generate(**inputs, do_sample=True, top_k=2)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
A sequence: A, B, C, D, E, F, G, H, I
```
- __call__
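The core filtering step can be written in a few lines of PyTorch. This is a minimal re-implementation for illustration, not the library code:

```python
import torch

def top_k_filter(logits: torch.Tensor, top_k: int, filter_value: float = float("-inf")) -> torch.Tensor:
    # Keep only the `top_k` largest logits along the vocab dimension; everything else becomes `filter_value`.
    kth_largest = torch.topk(logits, top_k, dim=-1).values[..., -1, None]
    return logits.masked_fill(logits < kth_largest, filter_value)

print(top_k_filter(torch.tensor([1.0, 3.0, 2.0, 0.5]), top_k=2))
# tensor([-inf, 3., 2., -inf])
```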
[`LogitsProcessor`] that performs top-p, i.e. restricting to the smallest set of most probable tokens whose cumulative probability adds up to `top_p` or higher.
Often used together with [`TemperatureLogitsWarper`] and [`TopKLogitsWarper`].
Args:
top_p (`float`):
If set to < 1, only the smallest set of most probable tokens with probabilities that add up to `top_p` or
higher are kept for generation.
filter_value (`float`, *optional*, defaults to -inf):
All filtered values will be set to this float value.
min_tokens_to_keep (`int`, *optional*, defaults to 1):
Minimum number of tokens that cannot be filtered.
Examples:
```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed

>>> set_seed(1)
>>> model = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2")
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilgpt2")
>>> inputs = tokenizer("A sequence: 1, 2", return_tensors="pt")
>>> # With sampling, the output is unexpected -- sometimes too unexpected.
>>> outputs = model.generate(**inputs, do_sample=True)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
A sequence: 1, 2, 3 | < 4 (left-hand pointer) ;
<BLANKLINE>
<BLANKLINE>

>>> # With `top_p` sampling, the output gets restricted to high-probability tokens.
>>> # Pro tip: In practice, LLMs use `top_p` in the 0.9-0.95 range.
>>> outputs = model.generate(**inputs, do_sample=True, top_p=0.1)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
A sequence: 1, 2, 3, 4, 5, 6, 7, 8, 9
```
- __call__
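A minimal sketch of the nucleus (top-p) filtering step, written directly against a logits tensor (illustrative only, not the library implementation):

```python
import torch

def top_p_filter(logits: torch.Tensor, top_p: float, filter_value: float = float("-inf")) -> torch.Tensor:
    sorted_logits, sorted_idx = torch.sort(logits, descending=True, dim=-1)
    cumulative_probs = torch.softmax(sorted_logits, dim=-1).cumsum(dim=-1)
    # Drop tokens once the cumulative probability has already reached `top_p`,
    # shifting by one position so the token that crosses the threshold is still kept.
    to_remove = cumulative_probs > top_p
    to_remove[..., 1:] = to_remove[..., :-1].clone()
    to_remove[..., 0] = False
    filtered_sorted = sorted_logits.masked_fill(to_remove, filter_value)
    # Scatter the filtered values back to their original vocabulary positions.
    return logits.scatter(-1, sorted_idx, filtered_sorted)
```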
[`LogitsProcessor`] that performs typical decoding. Inspired by how humans use language, it prioritizes tokens
whose log probability is close to the entropy of the token probability distribution. This means that the most
likely tokens may be discarded in the process.
See [Typical Decoding for Natural Language Generation](https://arxiv.org/abs/2202.00666) for more information.
Args:
mass (`float`, *optional*, defaults to 0.9):
Value of typical_p between 0 and 1 inclusive, defaults to 0.9.
filter_value (`float`, *optional*, defaults to -inf):
All filtered values will be set to this float value.
min_tokens_to_keep (`int`, *optional*, defaults to 1):
Minimum number of tokens that cannot be filtered.
Examples:
```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed

>>> model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")
>>> tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-560m")
>>> inputs = tokenizer("1, 2, 3", return_tensors="pt")
>>> # We can see that greedy decoding produces a sequence of numbers
>>> outputs = model.generate(**inputs)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
1, 2, 3, 4, 5, 6, 7, 8, 9, 10,

>>> # For this particular seed, we can see that sampling produces nearly the same low-information (= low entropy)
>>> # sequence
>>> set_seed(18)
>>> outputs = model.generate(**inputs, do_sample=True)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
1, 2, 3, 4, 5, 6, 7, 8, 9 and 10

>>> # With `typical_p` set, the most obvious sequence is no longer produced, which may be good for your problem
>>> set_seed(18)
>>> outputs = model.generate(
... **inputs, do_sample=True, typical_p=0.1, return_dict_in_generate=True, output_scores=True
... )
>>> print(tokenizer.batch_decode(outputs.sequences, skip_special_tokens=True)[0])
1, 2, 3 and 5

>>> # We can see that the token corresponding to "4" (token 934) in the second position, the most likely token
>>> # as seen with greedy decoding, was entirely blocked out
>>> print(outputs.scores[1][0, 934])
tensor(-inf)
```
- __call__
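The selection rule ("keep the tokens whose surprisal is closest to the distribution's entropy, up to a probability mass of `mass`") can be sketched as follows. This is an illustrative re-implementation under that reading of the description, not the library code:

```python
import torch

def typical_filter(logits: torch.Tensor, mass: float = 0.9, filter_value: float = float("-inf")) -> torch.Tensor:
    log_probs = torch.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    entropy = -(probs * log_probs).sum(dim=-1, keepdim=True)
    # Rank tokens by how close their surprisal (-log p) is to the entropy: most "typical" first.
    distance = (-log_probs - entropy).abs()
    sorted_idx = torch.sort(distance, dim=-1).indices
    cumulative_probs = probs.gather(-1, sorted_idx).cumsum(dim=-1)
    # Keep the smallest "typical" set whose probability reaches `mass`.
    to_remove = cumulative_probs > mass
    to_remove[..., 1:] = to_remove[..., :-1].clone()
    to_remove[..., 0] = False
    filtered_sorted = logits.gather(-1, sorted_idx).masked_fill(to_remove, filter_value)
    return logits.scatter(-1, sorted_idx, filtered_sorted)
```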
Logits processor for Classifier-Free Guidance (CFG). The processor computes a weighted average across scores
from prompt conditional and prompt unconditional (or negative) logits, parameterized by the `guidance_scale`.
The unconditional scores are computed internally by prompting `model` with the `unconditional_ids` branch.
See [the paper](https://arxiv.org/abs/2306.17806) for more information.
Args:
guidance_scale (`float`):
The guidance scale for classifier free guidance (CFG). CFG is enabled by setting `guidance_scale != 1`.
Higher guidance scale encourages the model to generate samples that are more closely linked to the input
prompt, usually at the expense of poorer quality. A value smaller than 1 has the opposite effect, while
making the negative prompt provided with negative_prompt_ids (if any) act as a positive prompt.
model (`PreTrainedModel`):
The model computing the unconditional scores. Supposedly the same as the one computing the conditional
scores. Both models must use the same tokenizer.
unconditional_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Indices of input sequence tokens in the vocabulary for the unconditional branch. If unset, will default to
the last token of the prompt.
unconditional_attention_mask (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Attention mask for unconditional_ids.
use_cache (`bool`, *optional*, defaults to `True`):
Whether to cache key/values during the negative prompt forward pass.
Examples:
```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM

>>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
>>> tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
>>> inputs = tokenizer(["Today, a dragon flew over Paris, France,"], return_tensors="pt")
>>> out = model.generate(inputs["input_ids"], guidance_scale=1.5)
>>> tokenizer.batch_decode(out, skip_special_tokens=True)[0]
'Today, a dragon flew over Paris, France, killing at least 50 people and injuring more than 100'

>>> # with a negative prompt
>>> neg_inputs = tokenizer(["A very happy event happened,"], return_tensors="pt")
>>> out = model.generate(inputs["input_ids"], guidance_scale=2, negative_prompt_ids=neg_inputs["input_ids"])
>>> tokenizer.batch_decode(out, skip_special_tokens=True)[0]
'Today, a dragon flew over Paris, France, killing at least 130 people. French media reported that'

>>> # with a positive prompt
>>> neg_inputs = tokenizer(["A very happy event happened,"], return_tensors="pt")
>>> out = model.generate(inputs["input_ids"], guidance_scale=0, negative_prompt_ids=neg_inputs["input_ids"])
>>> tokenizer.batch_decode(out, skip_special_tokens=True)[0]
"Today, a dragon flew over Paris, France, and I'm very happy to be here. I"
```
- __call__
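The combination step itself is small. Here is a sketch of the weighting described above, performed in log-probability space, with `guidance_scale` pulling the result away from the unconditional (or negative-prompt) distribution and towards the conditional one; it is an illustration rather than the exact library code:

```python
import torch

def cfg_combine(cond_logits: torch.Tensor, uncond_logits: torch.Tensor, guidance_scale: float) -> torch.Tensor:
    cond = torch.log_softmax(cond_logits, dim=-1)
    uncond = torch.log_softmax(uncond_logits, dim=-1)
    # guidance_scale == 1 returns the conditional scores unchanged; larger values push further
    # from the unconditional distribution, values < 1 have the opposite effect.
    return uncond + guidance_scale * (cond - uncond)
```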
[`LogitsProcessor`] that modifies the logits for the generation of timestamps in the transcription. When the input
tokens are at a specific threshold, the processor sets the scores to negative infinity. The processor makes sure
that timestamp tokens appear in pairs, by masking out the logits that would break this pairing pattern. This is
done to maintain the consistency and structure of generated timestamps. It also ensures that when the predicted
probability of sampling any of the timestamp tokens is greater than that of any individual non-timestamp token, those
non-timestamp logits are set to negative infinity. This is done to ensure the generation of timestamps over other
potential tokens.
See [the paper](https://arxiv.org/abs/2212.04356) for more information.
Args:
generate_config (`GenerateConfig`):
The generate config used to generate the output. The following parameters are required:
eos_token_id (`int`, *optional*, defaults to 50257):
The id of the *end-of-sequence* token.
no_timestamps_token_id (`int`, *optional*, defaults to 50363):
The id of the `"<|notimestamps|>"` token.
max_initial_timestamp_index (`int`, *optional*, defaults to 1):
Used to set the maximum value of the initial timestamp. This is used to prevent the model from
predicting timestamps that are too far in the future.
begin_index (`Optional`, *optional*): Token index of the first token that is generated by the model.
_detect_timestamp_from_logprob (`bool`, *optional*): Whether timestamps can be predicted from logprobs over all timestamps.
Examples:
```python
>>> import torch
>>> from transformers import AutoProcessor, WhisperForConditionalGeneration, GenerationConfig
>>> from datasets import load_dataset

>>> processor = AutoProcessor.from_pretrained("openai/whisper-tiny.en")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> inputs = processor(ds[3]["audio"]["array"], return_tensors="pt")
>>> input_features = inputs.input_features

>>> # Displaying timestamps
>>> generated_ids = model.generate(inputs=input_features, return_timestamps=True)
>>> transcription = processor.batch_decode(generated_ids, decode_with_timestamps=True)[0]
>>> print("Transcription:", transcription)
Transcription: <|startoftranscript|><|0.00|> He has grave doubts whether Sir Frederick Layton's work is really Greek after all, and can<|6.44|><|6.44|> discover in it but little of rocky Ithaca.<|9.44|><|endoftext|>

>>> # No timestamps & change EOS:
>>> #This allows the user to select a specific token to terminate the sequence on, in this case it's the word "can"(460)
>>> model.generation_config.eos_token_id = 460
>>> generated_ids = model.generate(inputs=input_features,return_timestamps=False)
>>> transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
>>> print("Transcription:", transcription) | 427_7_148 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md | https://huggingface.co/docs/transformers/en/internal/generation_utils/#pytorch | .md | >>> print("Transcription:", transcription)
Transcription: He has grave doubts whether Sir Frederick Layton's work is really Greek after all and can
```
- __call__
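The last rule (force a timestamp when the total timestamp probability exceeds the best text token) can be sketched as a standalone function. `timestamp_begin`, the index of the first timestamp token, is a hypothetical parameter used only for this illustration:

```python
import torch

def force_timestamp_if_likely(logprobs: torch.Tensor, timestamp_begin: int) -> torch.Tensor:
    # Total probability mass on timestamp tokens vs. the single most likely text token.
    timestamp_logprob = torch.logsumexp(logprobs[..., timestamp_begin:], dim=-1)
    best_text_logprob = logprobs[..., :timestamp_begin].max(dim=-1).values
    out = logprobs.clone()
    # Where timestamps collectively win, mask every non-timestamp token.
    mask = (timestamp_logprob > best_text_logprob).unsqueeze(-1)
    out[..., :timestamp_begin] = out[..., :timestamp_begin].masked_fill(mask, float("-inf"))
    return out
```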
Logits processor for watermarking generated text. The processor modifies model output scores by adding a small bias to
a randomized set of "green" tokens before generating the next token. The "green" token selection process depends on the
`seeding_scheme` used. The code was based on the [original repo](https://github.com/jwkirchenbauer/lm-watermarking/tree/main).
The text generated by this `LogitsProcessor` can be detected using `WatermarkDetector`. See [`~WatermarkDetector.__call__`] for details.
See [the paper](https://arxiv.org/abs/2306.04634) for more information.
Args:
vocab_size (`int`):
The model tokenizer's vocab_size. Used to calculate "green" tokens ratio.
device (`str`):
The device where model is allocated.
greenlist_ratio (`float`, *optional*, defaults to 0.25):
The ratio of "green" tokens used to the vocabulary size. Defaults to 0.25.
bias (`float`, *optional*, defaults to 2.0):
The bias added to the selected "green" tokens' logits. Consider lowering the
`bias` if the text generation quality degrades. Recommended values are in the
range of [0.5, 2.0]. Defaults to 2.0.
hashing_key (`int`, *optional*, defaults to 15485863):
Key used for hashing. If you deploy this watermark, we advise using another private key.
Defaults to 15485863 (the millionth prime).
seeding_scheme (`str`, *optional*, defaults to `"lefthash"`):
The seeding scheme used for selecting "green" tokens. Accepts values:
- "lefthash" (default): "green" tokens selection depend on the last token (Algorithm 2 from paper)
- "selfhash": "green" tokens selection depends on the current token itself (Algorithm 3 from paper)
The downside of this scheme is that it considers all possible next tokens and can be slower than "lefthash".
context_width (`int`, *optional*, defaults to 1):
The number of previous tokens (context length) to use when setting the seed. A higher context width makes
the watermark more robust.
Examples:
```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, WatermarkingConfig

>>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
>>> tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
>>> inputs = tokenizer(["Alice and Bob are"], return_tensors="pt")
>>> # normal generation
>>> out = model.generate(inputs["input_ids"], max_length=20, do_sample=False)
>>> tokenizer.batch_decode(out, skip_special_tokens=True)[0]
'Alice and Bob are both in the same room.\n\n"I\'m not sure if you\'re'

>>> # watermarked generation
>>> watermarking_config = WatermarkingConfig(bias=2.5, context_width=2, seeding_scheme="selfhash")
>>> out = model.generate(inputs["input_ids"], watermarking_config=watermarking_config, max_length=20, do_sample=False)
>>> tokenizer.batch_decode(out, skip_special_tokens=True)[0]
'Alice and Bob are both still alive and well and the story is pretty much a one-hour adventure'

>>> # to detect watermarked text use the WatermarkDetector class
>>> from transformers import WatermarkDetector
>>> detector = WatermarkDetector(model_config=model.config, device="cpu", watermarking_config=watermarking_config)
>>> detection_preds = detector(out)
>>> detection_preds
array([ True])
```
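To make the seeding idea concrete, here is a schematic "lefthash"-style step for a single sequence. The real processor hashes the previous `context_width` tokens and works on batches; this is only an illustration of the greenlist-plus-bias mechanism:

```python
import torch

def greenlist_bias(scores: torch.Tensor, prev_token: int, vocab_size: int,
                   hashing_key: int = 15485863, greenlist_ratio: float = 0.25, bias: float = 2.0) -> torch.Tensor:
    # Seed a generator from the previous token, draw a pseudo-random "green" subset of the
    # vocabulary, and add `bias` to those logits before sampling the next token.
    generator = torch.Generator().manual_seed(hashing_key * prev_token)
    greenlist = torch.randperm(vocab_size, generator=generator)[: int(greenlist_ratio * vocab_size)]
    biased = scores.clone()
    biased[..., greenlist] += bias
    return biased
```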
- __call__
## TensorFlow

TFForcedBOSTokenLogitsProcessor
- __call__
TFForcedEOSTokenLogitsProcessor
- __call__
TFForceTokensLogitsProcessor
- __call__
TFLogitsProcessor
- __call__
TFLogitsProcessorList
- __call__
TFLogitsWarper
- __call__
TFMinLengthLogitsProcessor
- __call__
TFNoBadWordsLogitsProcessor
- __call__
TFNoRepeatNGramLogitsProcessor
- __call__
TFRepetitionPenaltyLogitsProcessor
- __call__
TFSuppressTokensAtBeginLogitsProcessor
- __call__
TFSuppressTokensLogitsProcessor
- __call__
TFTemperatureLogitsWarper
- __call__
TFTopKLogitsWarper
- __call__
TFTopPLogitsWarper
- __call__
## Flax

FlaxForcedBOSTokenLogitsProcessor
- __call__
FlaxForcedEOSTokenLogitsProcessor
- __call__
FlaxForceTokensLogitsProcessor
- __call__
FlaxLogitsProcessor
- __call__
FlaxLogitsProcessorList
- __call__
FlaxLogitsWarper
- __call__
FlaxMinLengthLogitsProcessor
- __call__
FlaxSuppressTokensAtBeginLogitsProcessor
- __call__
FlaxSuppressTokensLogitsProcessor
- __call__
FlaxTemperatureLogitsWarper
- __call__
FlaxTopKLogitsWarper
- __call__
FlaxTopPLogitsWarper
- __call__
FlaxWhisperTimeStampLogitsProcessor
- __call__
## StoppingCriteria

A [`StoppingCriteria`] can be used to change when to stop generation (other than EOS token). Please note that this is exclusively available to our PyTorch implementations.
Abstract base class for all stopping criteria that can be applied during generation.
If your stopping criteria depends on the `scores` input, make sure you pass `return_dict_in_generate=True,
output_scores=True` to `generate`.
- __call__
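Subclassing the base class is straightforward. The criterion below is hypothetical and the exact return convention is an assumption (recent versions expect a boolean tensor with one entry per batch element); treat it as a sketch rather than canonical API usage:

```python
import torch
from transformers import StoppingCriteria, StoppingCriteriaList

class StopOnTokenBudget(StoppingCriteria):
    """Hypothetical criterion: stop once `budget` new tokens have been generated past the prompt."""

    def __init__(self, prompt_length: int, budget: int):
        self.prompt_length = prompt_length
        self.budget = budget

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> torch.BoolTensor:
        is_done = input_ids.shape[-1] - self.prompt_length >= self.budget
        return torch.full((input_ids.shape[0],), is_done, dtype=torch.bool, device=input_ids.device)

# Usage sketch: criteria = StoppingCriteriaList([StopOnTokenBudget(inputs["input_ids"].shape[-1], budget=32)])
#               out = model.generate(**inputs, stopping_criteria=criteria)
```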
Abstract base class for all stopping criteria that can be applied during generation.
If your stopping criteria depends on the `scores` input, make sure you pass `return_dict_in_generate=True,
output_scores=True` to `generate`.
StoppingCriteriaList
- __call__
This class can be used to stop generation whenever the full generated number of tokens exceeds `max_length`. Keep
in mind that for decoder-only transformers, this will include the initial prompt tokens.
Args:
max_length (`int`):
The maximum length that the output sequence can have in number of tokens.
max_position_embeddings (`int`, *optional*):
The maximum model length, as defined by the model's `config.max_position_embeddings` attribute.
- __call__
This class can be used to stop generation whenever the full generation exceeds some amount of time. By default, the
time will start being counted when you initialize this function. You can override this by passing an
`initial_time`.
Args:
max_time (`float`):
The maximum allowed time in seconds for the generation.
initial_time (`float`, *optional*, defaults to `time.time()`):
The start of the generation allowed time.
- __call__
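A short usage sketch, assuming `model` and `inputs` as in the other examples on this page:

```python
from transformers import MaxTimeCriteria, StoppingCriteriaList

# Give generation a rough wall-clock budget of two seconds.
criteria = StoppingCriteriaList([MaxTimeCriteria(max_time=2.0)])
# out = model.generate(**inputs, stopping_criteria=criteria)
```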
This class can be used to stop generation whenever specific string sequences are generated. It preprocesses
the strings together with the tokenizer vocab to find positions where tokens can validly complete the stop strings.
Generation is stopped as soon as a token is generated that completes any of the stop strings.
We want to catch any instance in which the stop string would be present in the decoded output, which means
we must also catch cases with "overhangs" off one or both ends. To make this more concrete, for the stop string
"stop", any of the following token sequences would trigger the match:
- ["st", "op"]
- ["stop"]
- ["st", "opera"]
- ["sto", "pper"]
- ["las", "topper"]
- ["s", "to", "pped"] | 427_10_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md | https://huggingface.co/docs/transformers/en/internal/generation_utils/#stoppingcriteria | .md | - ["st", "op"]
- ["stop"]
- ["st", "opera"]
- ["sto", "pper"]
- ["las", "topper"]
- ["s", "to", "pped"]
Note that a match will only be triggered if the stop string is at the end of the generated sequence. In other
words, these sequences will not trigger a match:
- ["stop", "at"]
- ["st", "op", "at"]
- ["st", "opera", "tion"]
The reason these are not a match is that the stop string does not overlap with the final token. If you can remove
one or more tokens from the end of the sequence without destroying the stop string, then this criterion will not
match that stop string. This is by design; because this check is run after each token is generated, we can't miss a
valid stop string if one is generated, but we don't want to halt generation just because the stop string exists
somewhere in the past input_ids.
How is the match actually performed, though? We do it in quite a confusing way, because we want the entire match
process to be compilable with Torch or XLA, which means we cannot use standard string methods. However, it is possible,
with some work, to do string matching with pure tensor operations. We'll begin by describing the algorithm we use
with standard string operations, and then at the end we'll explain how this is converted to pure tensor operations.
The key to the algorithm is an observation: Because the stop string must overlap with the end of the token sequence, we can start at
the end of the sequence and work backwards. Specifically, we check that there is an overlap between the start of
the final token and the end of the stop_string, or to put it another way, stop_string[-i:] == token[:i] for
some i > 0. If you look at the positive examples above, you'll see the last token in all of them fulfills this
property:
- ["st", "op"] (overlap is "op", overlap length == 2)
- ["stop"] (overlap is "stop", overlap length == 4)
- ["st", "opera"] (overlap is "op", overlap length == 2)
- ["sto", "pper"] (overlap is "p", overlap length == 1) | 427_10_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md | https://huggingface.co/docs/transformers/en/internal/generation_utils/#stoppingcriteria | .md | - ["st", "opera"] (overlap is "op", overlap length == 2)
- ["sto", "pper"] (overlap is "p", overlap length == 1)
- ["las", "topper"] (overlap is "top", overlap length == 3)
- ["s", "to", "pped"] (overlap is "p", overlap length == 1)
It's impossible to construct a matching sequence that does not have this property (feel free to verify this
yourself). However, although this overlap between the start of the final token and the end of the stop string is
necessary for a match, it is not sufficient. We also need to check that the rest of the token sequence is
consistent with the stop string.
How do we do that? Let's use ["s", "to", "pped"] as an example. We know that the final token, "pped", has an
overlap of 1 with the stop string, "stop". We then go back to the previous token, "to". Since we have already
matched 1 character from the stop string, the remainder to check is "sto". We check that the next token "to"
matches the end of the remainder, which it does. We have now matched 3 characters from the stop string, and the
remainder to match is "s". We go back to the previous token again, which is also "s". This is a match, and so
we have matched the entire stop string.
How does it work when the tokens run off the start of the stop string, though? Let's consider the example of
["las", "topper"]. The final token, "topper", has an overlap of 3 with the stop string, "stop". Therefore,
the remaining stop string to match is "s". We go back to the previous token, "las". Because the remainder to
match is just "s", with length 1, we consider only the final 1 character from the token, which is "s". This | 427_10_14 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md | https://huggingface.co/docs/transformers/en/internal/generation_utils/#stoppingcriteria | .md | match is just "s", with length 1, we consider only the final 1 character from the token, which is "s". This
matches the stop string, and so the entire string is matched.
How do we compute these matches with tensor operations, though? Simply: we efficiently precompute the necessary
information for all tokens! For every token, we compute:
- Its overlap with the end of the stop string, if any
- The positions inside the stop string where the token matches, including matches that run off the start.
- The total length of the token
For example, for the token "pped", we would compute an end overlap of 1, no internal matching positions,
and a length of 4. For the token "to", we would compute no end overlap, a single internal matching position
of 1 (counting from the end), and a length of 2. For the token "s", we would compute no end overlap,
a single internal matching position of 3 (again counting from the end) and a length of 1.
As long as we have this information, we can execute the algorithm above without any string comparison
operations. We simply perform the following steps:
- Check if the final token has an end-overlap with the stop string
- Continue backwards, keeping track of how much of the stop string we've matched so far
- At each point, check if the next token has the current position as one of its valid positions
- Continue until either a match fails, or we completely match the whole stop string
Again, consider ["s", "to", "pped"] as an example. "pped" has an end overlap of 1, so we can begin a match.
We have matched 1 character so far, so we check that the next token "to", has 1 as a valid position (again,
counting from the end). It does, so we add the length of "to" to our position tracker. We have now matched
3 characters, so we check that the next token "s" has 3 as a valid position. It does, so we add its length
to the position tracker. The position tracker is now 4, which is the length of the stop string. We have matched the
entire stop string.
In the second case, ["las", "topper"], "topper" has an end overlap of 3, so we can begin a match. We have
matched 3 characters so far, so we check that the next token "las" has 3 as a valid position. It does, because we
allow tokens to match positions that run off the start of the stop string. We add its length to the position
tracker. The position tracker is now 6, which is greater than the length of the stop string! Don't panic, though -
this also counts as a match of the stop string. We have matched the entire stop string.
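For reference, the string-level version of this backwards walk fits in a few lines of plain Python. This is a readable sketch of the algorithm described above, not the tensorized implementation used by the criterion:

```python
def matches_stop_string(tokens: list, stop_string: str) -> bool:
    final = tokens[-1]
    # Step 1: the final token must overlap the end of the stop string: stop_string[-i:] == final[:i].
    overlaps = [i for i in range(1, min(len(final), len(stop_string)) + 1) if stop_string[-i:] == final[:i]]
    for matched in overlaps:
        position, ok = matched, True
        # Step 2: walk backwards, checking each earlier token against the end of what remains.
        for token in reversed(tokens[:-1]):
            if position >= len(stop_string):
                break
            remaining = stop_string[: len(stop_string) - position]
            take = min(len(token), len(remaining))
            if token[-take:] != remaining[-take:]:
                ok = False
                break
            position += len(token)  # tokens are allowed to run off the start of the stop string
        if ok and position >= len(stop_string):
            return True
    return False

assert matches_stop_string(["las", "topper"], "stop")
assert not matches_stop_string(["st", "opera", "tion"], "stop")
```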
Args:
tokenizer (`PreTrainedTokenizer`):
The model's associated tokenizer (necessary to extract vocab and tokenize the termination sequences)
stop_strings (`Union[str, List[str]]`):
A list of strings that should end generation. If a string is passed, it will be treated like a
list with a single element.
Examples:
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
>>> model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")
>>> inputs = tokenizer("The biggest states in the USA by land area:", return_tensors="pt")
>>> gen_out = model.generate(**inputs)
>>> print(tokenizer.batch_decode(gen_out, skip_special_tokens=True)[0])
The biggest states in the USA by land area:
- Alaska
- Texas
- California

>>> # Passing one or more stop strings will halt generation after those strings are emitted
>>> # Note that generating with stop strings requires you to pass the tokenizer too
>>> gen_out = model.generate(**inputs, stop_strings=["Texas"], tokenizer=tokenizer)
>>> print(tokenizer.batch_decode(gen_out, skip_special_tokens=True)[0])
The biggest states in the USA by land area:
- Alaska
- Texas
```
- __call__
This class can be used to stop generation whenever the "end-of-sequence" token is generated.
By default, it uses the `model.generation_config.eos_token_id`.
Args:
eos_token_id (`Union[int, List[int], torch.Tensor]`):
The id(s) of the *end-of-sequence* token.
- __call__
## Constraints

A [`Constraint`] can be used to force the generation to include specific tokens or sequences in the output. Please note that this is exclusively available to our PyTorch implementations.
Abstract base class for all constraints that can be applied during generation.
It must define how the constraint can be satisfied.
All classes that inherit Constraint must follow the requirement that
```py
completed = False
while not completed:
_, completed = constraint.update(constraint.advance())
```
will always terminate (halt).
[`Constraint`] enforcing that an ordered sequence of tokens is included in the output.
Args:
token_ids (`List[int]`):
The ids of the tokens that must be generated by the output.
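A usage sketch for constrained generation with [`PhrasalConstraint`] follows; the phrase and model are arbitrary, and constrained decoding requires beam search (`num_beams > 1`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, PhrasalConstraint

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")

# Force the phrase "New York" to appear somewhere in the generated continuation.
constraint = PhrasalConstraint(tokenizer("New York", add_special_tokens=False).input_ids)
inputs = tokenizer("The best city in the world is", return_tensors="pt")
out = model.generate(**inputs, constraints=[constraint], num_beams=4, max_new_tokens=20)
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```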
A special [`Constraint`] that is fulfilled by fulfilling just one of several constraints.
Args:
nested_token_ids (`List[List[int]]`):
A list of words, where each word is a list of ids. This constraint is fulfilled by generating just one from
the list of words.
Abstract base class for all constraints that can be applied during generation.
It must define how the constraint can be satisfied.
All classes that inherit Constraint must follow the requirement that
```py
completed = False
while not completed:
_, completed = constraint.update(constraint.advance())
```
will always terminate (halt).
ConstraintListState
## BeamSearch

Abstract base class for all beam scorers that are used for [`~PreTrainedModel.beam_search`] and
[`~PreTrainedModel.beam_sample`].
- process
- finalize
[`BeamScorer`] implementing standard beam search decoding.
Adapted in part from [Facebook's XLM beam search
code](https://github.com/facebookresearch/XLM/blob/9e6f6814d17be4fe5b15f2e6c43eb2b2d76daeb4/src/model/transformer.py#L529).
Reference for the diverse beam search algorithm and implementation [Ashwin Kalyan's DBS
implementation](https://github.com/ashwinkalyan/dbs/blob/master/dbs/beam_utils.lua)
Args:
batch_size (`int`):
Batch Size of `input_ids` for which standard beam search decoding is run in parallel.
num_beams (`int`):
Number of beams for beam search.
device (`torch.device`):
Defines the device type (*e.g.*, `"cpu"` or `"cuda"`) on which this instance of `BeamSearchScorer` will be
allocated.
length_penalty (`float`, *optional*, defaults to 1.0):
Exponential penalty to the length that is used with beam-based generation. It is applied as an exponent to
the sequence length, which in turn is used to divide the score of the sequence. Since the score is the log
likelihood of the sequence (i.e. negative), `length_penalty` > 0.0 promotes longer sequences, while
`length_penalty` < 0.0 encourages shorter sequences.
do_early_stopping (`bool` or `str`, *optional*, defaults to `False`):
Controls the stopping condition for beam-based methods, like beam-search. It accepts the following values:
`True`, where the generation stops as soon as there are `num_beams` complete candidates; `False`, where a
heuristic is applied and the generation stops when it is very unlikely to find better candidates;
`"never"`, where the beam search procedure only stops when there cannot be better candidates (canonical
beam search algorithm).
num_beam_hyps_to_keep (`int`, *optional*, defaults to 1):
The number of beam hypotheses that shall be returned upon calling
[`~transformers.BeamSearchScorer.finalize`].
num_beam_groups (`int`, *optional*, defaults to 1):
Number of groups to divide `num_beams` into in order to ensure diversity among different groups of beams.
See [this paper](https://arxiv.org/pdf/1610.02424.pdf) for more details.
max_length (`int`, *optional*):
The maximum length of the sequence to be generated.
- process
- finalize
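The way `length_penalty` enters the ranking can be summarised in one line; this is an illustrative formula following the description above rather than a verbatim excerpt of the scorer:

```python
def beam_score(sum_logprobs: float, hyp_length: int, length_penalty: float = 1.0) -> float:
    # The hypothesis' summed log-probability is divided by its length raised to `length_penalty`,
    # so values > 0.0 favour longer sequences and values < 0.0 favour shorter ones.
    return sum_logprobs / (hyp_length ** length_penalty)
```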
[`BeamScorer`] implementing constrained beam search decoding.
Args:
batch_size (`int`):
Batch Size of `input_ids` for which standard beam search decoding is run in parallel.
num_beams (`int`):
Number of beams for beam search.
constraints (`List[Constraint]`):
A list of positive constraints represented as `Constraint` objects that must be fulfilled in the generation
output. For more information, the documentation of [`Constraint`] should be read.
device (`torch.device`):
Defines the device type (*e.g.*, `"cpu"` or `"cuda"`) on which this instance of `BeamSearchScorer` will be
allocated.
length_penalty (`float`, *optional*, defaults to 1.0):
Exponential penalty to the length that is used with beam-based generation. It is applied as an exponent to
the sequence length, which in turn is used to divide the score of the sequence. Since the score is the log
likelihood of the sequence (i.e. negative), `length_penalty` > 0.0 promotes longer sequences, while
`length_penalty` < 0.0 encourages shorter sequences.
do_early_stopping (`bool` or `str`, *optional*, defaults to `False`):
Controls the stopping condition for beam-based methods, like beam-search. It accepts the following values:
`True`, where the generation stops as soon as there are `num_beams` complete candidates; `False`, where a
heuristic is applied and the generation stops when it is very unlikely to find better candidates;
`"never"`, where the beam search procedure only stops when there cannot be better candidates (canonical
beam search algorithm).
num_beam_hyps_to_keep (`int`, *optional*, defaults to 1):
The number of beam hypotheses that shall be returned upon calling
[`~transformers.BeamSearchScorer.finalize`].
num_beam_groups (`int`, *optional*, defaults to 1):
Number of groups to divide `num_beams` into in order to ensure diversity among different groups of beams.
See [this paper](https://arxiv.org/pdf/1610.02424.pdf) for more details.
max_length (`int`, *optional*):
The maximum length of the sequence to be generated.
- process
- finalize
## Streamers

Simple text streamer that prints the token(s) to stdout as soon as entire words are formed.
<Tip warning={true}>
The API for the streamer classes is still under development and may change in the future.
</Tip>
Parameters:
tokenizer (`AutoTokenizer`):
The tokenizer used to decode the tokens.
skip_prompt (`bool`, *optional*, defaults to `False`):
Whether to skip the prompt to `.generate()` or not. Useful e.g. for chatbots.
decode_kwargs (`dict`, *optional*):
Additional keyword arguments to pass to the tokenizer's `decode` method.
Examples:
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

>>> tok = AutoTokenizer.from_pretrained("openai-community/gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
>>> inputs = tok(["An increasing sequence: one,"], return_tensors="pt")
>>> streamer = TextStreamer(tok)
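>>> # A sketch of how the streamer is typically used: pass it to `generate`, and the generated
>>> # text is printed to stdout word by word while the call runs (output not shown here).
>>> _ = model.generate(**inputs, streamer=streamer, max_new_tokens=20)
```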