/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/notebooks.md
https://huggingface.co/docs/transformers/en/notebooks/#natural-language-processingpytorch-nlp
.md
| [How to fine-tune a model on translation](https://github.com/huggingface/notebooks/blob/main/examples/translation.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on WMT. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/translation.ipynb)|
| [How to fine-tune a model on summarization](https://github.com/huggingface/notebooks/blob/main/examples/summarization.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on XSUM. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/summarization.ipynb)|
| [How to train a language model from scratch](https://github.com/huggingface/blog/blob/main/notebooks/01_how_to_train.ipynb)| Highlight all the steps to effectively train a Transformer model on custom data | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/01_how_to_train.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/blog/blob/main/notebooks/01_how_to_train.ipynb)|
| [How to generate text](https://github.com/huggingface/blog/blob/main/notebooks/02_how_to_generate.ipynb)| How to use different decoding methods for language generation with transformers | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/02_how_to_generate.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/blog/blob/main/notebooks/02_how_to_generate.ipynb)|
| [How to generate text (with constraints)](https://github.com/huggingface/blog/blob/main/notebooks/53_constrained_beam_search.ipynb)| How to guide language generation with user-provided constraints | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/53_constrained_beam_search.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/blog/blob/main/notebooks/53_constrained_beam_search.ipynb)|
| [Reformer](https://github.com/huggingface/blog/blob/main/notebooks/03_reformer.ipynb)| How Reformer pushes the limits of language modeling | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/blog/blob/main/notebooks/03_reformer.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/patrickvonplaten/blog/blob/main/notebooks/03_reformer.ipynb)|
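The generation notebooks above compare decoding strategies. As a library-free sketch (a toy four-token vocabulary and hand-picked logits, not the actual `generate()` API), greedy search and top-k sampling differ only in how the next token is chosen from the score distribution:

```python
import math
import random

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def greedy_pick(logits):
    # Greedy decoding: always take the highest-scoring token.
    return max(range(len(logits)), key=lambda i: logits[i])

def top_k_pick(logits, k, rng):
    # Top-k sampling: keep only the k best tokens, renormalize
    # their probabilities, then sample from that restricted set.
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    probs = softmax([logits[i] for i in top])
    return rng.choices(top, weights=probs, k=1)[0]

logits = [0.1, 2.5, 0.3, 1.9]  # toy scores over a 4-token vocabulary
rng = random.Random(0)
print(greedy_pick(logits))               # always token 1
print(top_k_pick(logits, k=2, rng=rng))  # token 1 or token 3
```

Greedy decoding is deterministic, while sampling trades some likelihood for diversity; the notebooks explore the same trade-off with real models.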
#### Computer Vision[[pytorch-cv]]

| Notebook | Description | | |
|:----------|:-------------|:-------------|------:|
| [How to fine-tune a model on image classification (Torchvision)](https://github.com/huggingface/notebooks/blob/main/examples/image_classification.ipynb) | Show how to preprocess the data using Torchvision and fine-tune any pretrained Vision model on Image Classification | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb)|
| [How to fine-tune a model on image classification (Albumentations)](https://github.com/huggingface/notebooks/blob/main/examples/image_classification_albumentations.ipynb) | Show how to preprocess the data using Albumentations and fine-tune any pretrained Vision model on Image Classification | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification_albumentations.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/image_classification_albumentations.ipynb)|
| [How to fine-tune a model on image classification (Kornia)](https://github.com/huggingface/notebooks/blob/main/examples/image_classification_kornia.ipynb) | Show how to preprocess the data using Kornia and fine-tune any pretrained Vision model on Image Classification | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification_kornia.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/image_classification_kornia.ipynb)|
| [How to perform zero-shot object detection with OWL-ViT](https://github.com/huggingface/notebooks/blob/main/examples/zeroshot_object_detection_with_owlvit.ipynb) | Show how to perform zero-shot object detection on images with text queries | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/zeroshot_object_detection_with_owlvit.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/zeroshot_object_detection_with_owlvit.ipynb)|
| [How to fine-tune an image captioning model](https://github.com/huggingface/notebooks/blob/main/examples/image_captioning_blip.ipynb) | Show how to fine-tune BLIP for image captioning on a custom dataset | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_captioning_blip.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/image_captioning_blip.ipynb)|
| [How to build an image similarity system with Transformers](https://github.com/huggingface/notebooks/blob/main/examples/image_similarity.ipynb) | Show how to build an image similarity system | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_similarity.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/image_similarity.ipynb)|
| [How to fine-tune a SegFormer model on semantic segmentation](https://github.com/huggingface/notebooks/blob/main/examples/semantic_segmentation.ipynb) | Show how to preprocess the data and fine-tune a pretrained SegFormer model on Semantic Segmentation | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/semantic_segmentation.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/semantic_segmentation.ipynb)|
| [How to fine-tune a VideoMAE model on video classification](https://github.com/huggingface/notebooks/blob/main/examples/video_classification.ipynb) | Show how to preprocess the data and fine-tune a pretrained VideoMAE model on Video Classification | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/video_classification.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/video_classification.ipynb)|
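Every image notebook above starts by normalizing pixel values before they reach the model. A minimal numpy sketch of the usual scale-then-standardize step (the mean/std constants below are the common ImageNet statistics; individual checkpoints may ship different values in their image processor config):

```python
import numpy as np

# Common ImageNet channel statistics, used by many vision checkpoints.
MEAN = np.array([0.485, 0.456, 0.406])
STD = np.array([0.229, 0.224, 0.225])

def preprocess(image_uint8):
    """Scale uint8 HWC pixels to [0, 1], standardize per channel,
    and move channels first (CHW), as most PyTorch vision models expect."""
    x = image_uint8.astype(np.float32) / 255.0
    x = (x - MEAN) / STD
    return x.transpose(2, 0, 1)

img = np.full((224, 224, 3), 128, dtype=np.uint8)  # dummy grey image
batch = preprocess(img)
print(batch.shape)  # (3, 224, 224)
```

In practice the checkpoint's own image processor handles this (plus resizing and cropping); the sketch only shows what that preprocessing amounts to numerically.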
#### Audio[[pytorch-audio]]

| Notebook | Description | | |
|:----------|:-------------|:-------------|------:|
| [How to fine-tune a speech recognition model in English](https://github.com/huggingface/notebooks/blob/main/examples/speech_recognition.ipynb)| Show how to preprocess the data and fine-tune a pretrained Speech model on TIMIT | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/speech_recognition.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/speech_recognition.ipynb)|
| [How to fine-tune a speech recognition model in any language](https://github.com/huggingface/notebooks/blob/main/examples/multi_lingual_speech_recognition.ipynb)| Show how to preprocess the data and fine-tune a multi-lingually pretrained speech model on Common Voice | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multi_lingual_speech_recognition.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/multi_lingual_speech_recognition.ipynb)|
| [How to fine-tune a model on audio classification](https://github.com/huggingface/notebooks/blob/main/examples/audio_classification.ipynb)| Show how to preprocess the data and fine-tune a pretrained Speech model on Keyword Spotting | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/audio_classification.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/audio_classification.ipynb)|
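The audio notebooks all begin by turning a raw waveform into fixed-size model inputs. A minimal pure-Python sketch (the frame and hop lengths here are illustrative, not values from any specific notebook) of slicing a 1-D signal into overlapping frames:

```python
def frame_signal(signal, frame_length, hop_length):
    """Split a 1-D waveform into overlapping fixed-length frames,
    dropping any incomplete tail frame."""
    frames = []
    start = 0
    while start + frame_length <= len(signal):
        frames.append(signal[start:start + frame_length])
        start += hop_length
    return frames

waveform = list(range(16))  # stand-in for 16 audio samples
frames = frame_signal(waveform, frame_length=8, hop_length=4)
print(len(frames))          # 3 frames: [0:8], [4:12], [8:16]
```

A feature extractor for a real speech checkpoint performs this framing (plus resampling, padding, and normalization) internally; the sketch only shows the windowing idea.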
#### Biological Sequences[[pytorch-bio]]

| Notebook | Description | | |
|:----------|:-------------|:-------------|------:|
| [How to fine-tune a pre-trained protein model](https://github.com/huggingface/notebooks/blob/main/examples/protein_language_modeling.ipynb) | See how to tokenize proteins and fine-tune a large pre-trained protein "language" model | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_language_modeling.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/protein_language_modeling.ipynb) |
| [How to generate protein folds](https://github.com/huggingface/notebooks/blob/main/examples/protein_folding.ipynb) | See how to go from protein sequence to a full protein model and PDB file | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_folding.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/protein_folding.ipynb) |
| [How to fine-tune a Nucleotide Transformer model](https://github.com/huggingface/notebooks/blob/main/examples/nucleotide_transformer_dna_sequence_modelling.ipynb) | See how to tokenize DNA and fine-tune a large pre-trained DNA "language" model | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/nucleotide_transformer_dna_sequence_modelling.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/nucleotide_transformer_dna_sequence_modelling.ipynb) |
| [Fine-tune a Nucleotide Transformer model with LoRA](https://github.com/huggingface/notebooks/blob/main/examples/nucleotide_transformer_dna_sequence_modelling_with_peft.ipynb) | Train even larger DNA models in a memory-efficient way | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/nucleotide_transformer_dna_sequence_modelling_with_peft.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/nucleotide_transformer_dna_sequence_modelling_with_peft.ipynb) |
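Both the protein and DNA notebooks start by tokenizing sequences. Since residues and bases are single characters (real tokenizers also use k-mers), a toy character-level tokenizer with a hypothetical vocabulary illustrates the encoding step:

```python
# Toy vocabulary: a few DNA/amino-acid letters plus special tokens.
# Real protein/DNA tokenizers cover the full alphabet (and often k-mers).
VOCAB = {"<cls>": 0, "<pad>": 1, "<unk>": 2,
         "A": 3, "C": 4, "G": 5, "T": 6, "L": 7, "M": 8}

def encode(sequence, max_length):
    """Map a sequence to ids: prepend <cls>, truncate, then pad."""
    ids = [VOCAB["<cls>"]] + [VOCAB.get(ch, VOCAB["<unk>"]) for ch in sequence]
    ids = ids[:max_length]
    ids += [VOCAB["<pad>"]] * (max_length - len(ids))
    return ids

print(encode("ACGT", max_length=8))  # [0, 3, 4, 5, 6, 1, 1, 1]
```

The checkpoint's own tokenizer replaces all of this in practice; the sketch shows why biological sequences need no complex subword machinery.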
#### Other modalities[[pytorch-other]]

| Notebook | Description | | |
|:----------|:-------------|:-------------|------:|
| [Probabilistic Time Series Forecasting](https://github.com/huggingface/notebooks/blob/main/examples/time-series-transformers.ipynb) | See how to train Time Series Transformer on a custom dataset | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/time-series-transformers.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/time-series-transformers.ipynb) |
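Time-series training, as in the forecasting notebook, pairs a context window with the horizon that follows it. A minimal sketch (illustrative window lengths, not the notebook's actual configuration) of building such pairs from one series:

```python
def make_windows(series, context_length, prediction_length):
    """Slide over a series and emit (past, future) training pairs."""
    pairs = []
    total = context_length + prediction_length
    for start in range(len(series) - total + 1):
        past = series[start:start + context_length]
        future = series[start + context_length:start + total]
        pairs.append((past, future))
    return pairs

series = [10, 11, 12, 13, 14, 15]
pairs = make_windows(series, context_length=3, prediction_length=2)
print(pairs[0])   # ([10, 11, 12], [13, 14])
print(len(pairs)) # 2
```

A probabilistic model then learns a distribution over `future` given `past` (plus covariates like time features), rather than a single point forecast.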
#### Utility notebooks[[pytorch-utility]]

| Notebook | Description | | |
|:----------|:-------------|:-------------|------:|
| [How to export model to ONNX](https://github.com/huggingface/notebooks/blob/main/examples/onnx-export.ipynb)| Highlight how to export and run inference workloads through ONNX | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/onnx-export.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/onnx-export.ipynb)|
| [How to use Benchmarks](https://github.com/huggingface/notebooks/blob/main/examples/benchmark.ipynb)| How to benchmark models with transformers | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/benchmark.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/benchmark.ipynb)|
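Independent of the notebook's transformers-specific tooling, the core of any latency benchmark is the same: discard warm-up runs, then report a robust statistic over timed repeats. A minimal stdlib sketch:

```python
import time
import statistics

def benchmark(fn, warmup=3, repeats=10):
    """Time a callable: run warm-up iterations first (so caches,
    JITs, etc. settle), then return the median latency in ms."""
    for _ in range(warmup):
        fn()
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        timings.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(timings)

median_ms = benchmark(lambda: sum(range(10_000)))
print(f"median latency: {median_ms:.3f} ms")
```

The median is preferred over the mean here because one-off stalls (GC, scheduling) skew averages; the notebook's benchmark utilities apply the same principle to model inference.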
#### Natural Language Processing[[tensorflow-nlp]]

| Notebook | Description | | |
|:----------|:-------------|:-------------|------:|
| [Train your tokenizer](https://github.com/huggingface/notebooks/blob/main/examples/tokenizer_training.ipynb) | How to train and use your very own tokenizer |[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tokenizer_training.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/tokenizer_training.ipynb)|
| [Train your language model](https://github.com/huggingface/notebooks/blob/main/examples/language_modeling_from_scratch-tf.ipynb) | How to easily start using transformers |[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling_from_scratch-tf.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/language_modeling_from_scratch-tf.ipynb)|
| [How to fine-tune a model on text classification](https://github.com/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on any GLUE task. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb)|
| [How to fine-tune a model on language modeling](https://github.com/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on a causal or masked LM task. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb)|
| [How to fine-tune a model on token classification](https://github.com/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on a token classification task (NER, PoS). | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb)|
| [How to fine-tune a model on question answering](https://github.com/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on SQUAD. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb)|
| [How to fine-tune a model on multiple choice](https://github.com/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on SWAG. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb)|
30_9_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/notebooks.md
https://huggingface.co/docs/transformers/en/notebooks/#natural-language-processingtensorflow-nlp
.md
| [How to fine-tune a model on translation](https://github.com/huggingface/notebooks/blob/main/examples/translation-tf.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on WMT. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation-tf.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/translation-tf.ipynb)|
| [How to fine-tune a model on summarization](https://github.com/huggingface/notebooks/blob/main/examples/summarization-tf.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on XSUM. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization-tf.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/summarization-tf.ipynb)|

### Computer Vision[[tensorflow-cv]]
| Notebook | Description | | |
|:---------|:------------|:--|--:|
| [How to fine-tune a model on image classification](https://github.com/huggingface/notebooks/blob/main/examples/image_classification-tf.ipynb) | Show how to preprocess the data and fine-tune any pretrained Vision model on Image Classification | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification-tf.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/image_classification-tf.ipynb)|
| [How to fine-tune a SegFormer model on semantic segmentation](https://github.com/huggingface/notebooks/blob/main/examples/semantic_segmentation-tf.ipynb) | Show how to preprocess the data and fine-tune a pretrained SegFormer model on Semantic Segmentation | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/semantic_segmentation-tf.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/semantic_segmentation-tf.ipynb)|

### Biological sequences[[tensorflow-bio]]
| Notebook | Description | | |
|:---------|:------------|:--|--:|
| [How to fine-tune a pre-trained protein model](https://github.com/huggingface/notebooks/blob/main/examples/protein_language_modeling-tf.ipynb) | See how to tokenize proteins and fine-tune a large pre-trained protein "language" model | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_language_modeling-tf.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/protein_language_modeling-tf.ipynb) |

### Utility notebooks[[tensorflow-utility]]
| Notebook | Description | | |
|:---------|:------------|:--|--:|
| [How to train TF/Keras models on TPU](https://github.com/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb) | See how to train at high speed on Google's TPU hardware | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb) |

## Optimum notebooks
🤗 [Optimum](https://github.com/huggingface/optimum) is an extension of 🤗 Transformers that provides a set of performance optimization tools for training and running models on targeted hardware with maximum efficiency.

| Notebook | Description | | |
|:---------|:------------|:--|--:|
| [How to quantize a model with ONNX Runtime for text classification](https://github.com/huggingface/notebooks/blob/main/examples/text_classification_quantization_ort.ipynb)| Show how to apply static and dynamic quantization on a model using [ONNX Runtime](https://github.com/microsoft/onnxruntime) for any GLUE task. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_quantization_ort.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/text_classification_quantization_ort.ipynb)|
| [How to quantize a model with Intel Neural Compressor for text classification](https://github.com/huggingface/notebooks/blob/main/examples/text_classification_quantization_inc.ipynb)| Show how to apply static, dynamic and aware training quantization on a model using [Intel Neural Compressor (INC)](https://github.com/intel/neural-compressor) for any GLUE task. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_quantization_inc.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/text_classification_quantization_inc.ipynb)|
| [How to fine-tune a model on text classification with ONNX Runtime](https://github.com/huggingface/notebooks/blob/main/examples/text_classification_ort.ipynb)| Show how to preprocess the data and fine-tune a model on any GLUE task using [ONNX Runtime](https://github.com/microsoft/onnxruntime). | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_ort.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/text_classification_ort.ipynb)|
| [How to fine-tune a model on summarization with ONNX Runtime](https://github.com/huggingface/notebooks/blob/main/examples/summarization_ort.ipynb)| Show how to preprocess the data and fine-tune a model on XSUM using [ONNX Runtime](https://github.com/microsoft/onnxruntime). | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization_ort.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/summarization_ort.ipynb)|
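Independently of the ONNX Runtime and INC notebooks above, the core idea of *dynamic* quantization can be illustrated with a few lines of plain Python: the scale and zero point are derived from the tensor's own min/max at runtime, values are rounded to 8 bits, and mapped back to floats with a small bounded error. All names below are illustrative sketches, not Optimum or ONNX Runtime APIs.

```python
# Minimal numeric illustration of dynamic (per-tensor) uint8 quantization.
# This is a sketch of the underlying arithmetic, not a library API.

def quantize_dynamic(values):
    """Map floats to uint8 using a scale/zero-point computed from the data."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    zero_point = round(-lo / scale)
    q = [min(255, max(0, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map uint8 codes back to (approximate) floats."""
    return [(v - zero_point) * scale for v in q]

weights = [-1.2, -0.3, 0.0, 0.7, 2.5]
q, scale, zp = quantize_dynamic(weights)
recovered = dequantize(q, scale, zp)
# Each recovered value is within one quantization step of the original.
assert all(abs(a - b) <= scale for a, b in zip(weights, recovered))
```

Static quantization differs only in that the scale and zero point are pre-computed from a calibration dataset instead of the live tensor, which is what makes it faster at inference time.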

## Community notebooks
More notebooks developed by the community are available [here](https://hf.co/docs/transformers/community#community-notebooks).
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Testing
Let's take a look at how 🤗 Transformers models are tested and how you can write new tests and improve the existing ones.

There are 2 test suites in the repository:

1. `tests` -- tests for the general API
2. `examples` -- tests primarily for various applications that aren't part of the API

## How transformers are tested
1. Once a PR is submitted it gets tested with 9 CircleCI jobs. Every new commit to that PR gets retested. These jobs
   are defined in this [config file](https://github.com/huggingface/transformers/tree/main/.circleci/config.yml), so that if needed you can
   reproduce the same environment on your machine.

   These CI jobs don't run `@slow` tests.

2. There are 3 jobs run by [github actions](https://github.com/huggingface/transformers/actions):

   - [torch hub integration](https://github.com/huggingface/transformers/tree/main/.github/workflows/github-torch-hub.yml): checks whether torch hub integration works.

   - [self-hosted (push)](https://github.com/huggingface/transformers/tree/main/.github/workflows/self-push.yml): runs fast tests on GPU only on commits on