source: stringclasses (470 values)
url: stringlengths (49–167)
file_type: stringclasses (1 value)
chunk: stringlengths (1–512)
chunk_id: stringlengths (5–9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/testing.md
https://huggingface.co/docs/transformers/en/testing/#testing-the-stdoutstderr-output
.md
```python
def test_result_and_stdout():
    msg = "Hello"
    buffer = StringIO()
    with redirect_stdout(buffer):
        print_to_stdout(msg)
    out = buffer.getvalue()
    # optional: if you want to replay the consumed streams:
    sys.stdout.write(out)
    # test:
    assert msg in out
```

An important potential issue with capturing stdout is that it may contain `\r` characters that in normal `print` reset everything that has been printed so far. There is no problem with `pytest`, but with `pytest -s` these
31_34_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/testing.md
https://huggingface.co/docs/transformers/en/testing/#testing-the-stdoutstderr-output
.md
reset everything that has been printed so far. There is no problem with `pytest`, but with `pytest -s` these characters get included in the buffer, so to be able to have the test run with and without `-s`, you have to make an extra cleanup to the captured output, using `re.sub(r'~.*\r', '', buf, 0, re.M)`. But then we have a helper context manager wrapper that automatically takes care of it all, regardless of whether the output has some `\r`'s in it or not, so it's as simple as:

```python
31_34_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/testing.md
https://huggingface.co/docs/transformers/en/testing/#testing-the-stdoutstderr-output
.md
some `\r`'s in it or not, so it's as simple as:

```python
from transformers.testing_utils import CaptureStdout
```
31_34_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/testing.md
https://huggingface.co/docs/transformers/en/testing/#testing-the-stdoutstderr-output
.md
```python
with CaptureStdout() as cs:
    function_that_writes_to_stdout()
print(cs.out)
```

Here is a full test example:

```python
from transformers.testing_utils import CaptureStdout

msg = "Secret message\r"
final = "Hello World"
with CaptureStdout() as cs:
    print(msg + final)
assert cs.out == final + "\n", f"captured: {cs.out}, expecting {final}"
```

If you'd like to capture `stderr` use the `CaptureStderr` class instead:

```python
from transformers.testing_utils import CaptureStderr
```
31_34_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/testing.md
https://huggingface.co/docs/transformers/en/testing/#testing-the-stdoutstderr-output
.md
```python
with CaptureStderr() as cs:
    function_that_writes_to_stderr()
print(cs.err)
```

If you need to capture both streams at once, use the parent `CaptureStd` class:

```python
from transformers.testing_utils import CaptureStd

with CaptureStd() as cs:
    function_that_writes_to_stdout_and_stderr()
print(cs.err, cs.out)
```

Also, to aid debugging test issues, by default these context managers automatically replay the captured streams on exit from the context.
31_34_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/testing.md
https://huggingface.co/docs/transformers/en/testing/#capturing-logger-stream
.md
If you need to validate the output of a logger, you can use `CaptureLogger`:

```python
from transformers import logging
from transformers.testing_utils import CaptureLogger

msg = "Testing 1, 2, 3"
logging.set_verbosity_info()
logger = logging.get_logger("transformers.models.bart.tokenization_bart")
with CaptureLogger(logger) as cl:
    logger.info(msg)
assert cl.out == msg + "\n"
```
31_35_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/testing.md
https://huggingface.co/docs/transformers/en/testing/#testing-with-environment-variables
.md
If you want to test the impact of environment variables for a specific test you can use a helper decorator `transformers.testing_utils.mockenv`:

```python
from transformers.testing_utils import mockenv
```
31_36_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/testing.md
https://huggingface.co/docs/transformers/en/testing/#testing-with-environment-variables
.md
```python
class HfArgumentParserTest(unittest.TestCase):
    @mockenv(TRANSFORMERS_VERBOSITY="error")
    def test_env_override(self):
        env_level_str = os.getenv("TRANSFORMERS_VERBOSITY", None)
```

At times an external program needs to be called, which requires setting `PYTHONPATH` in `os.environ` to include multiple local paths. A helper class `transformers.testing_utils.TestCasePlus` comes to help:

```python
from transformers.testing_utils import TestCasePlus
```
31_36_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/testing.md
https://huggingface.co/docs/transformers/en/testing/#testing-with-environment-variables
.md
```python
class EnvExampleTest(TestCasePlus):
    def test_external_prog(self):
        env = self.get_env()
        # now call the external program, passing `env` to it
```

Depending on whether the test file is under the `tests` test suite or `examples`, it will correctly set up `env[PYTHONPATH]` to include one of these two directories, and also the `src` directory to ensure the testing is done against the current repo, and finally it includes whatever `env[PYTHONPATH]` was already set to before the test was called, if anything.
31_36_2
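For illustration, here is a minimal sketch of how the returned `env` might be passed to an external program; the script path and arguments are hypothetical and only show the mechanics of forwarding `env`:

```python
import subprocess

from transformers.testing_utils import TestCasePlus


class EnvExampleTest(TestCasePlus):
    def test_external_prog(self):
        # a copy of os.environ with PYTHONPATH adjusted to point at the current repo
        env = self.get_env()
        # hypothetical script path, shown only to illustrate passing `env`
        result = subprocess.run(
            ["python", "examples/pytorch/hypothetical_script.py", "--help"],
            env=env,
            capture_output=True,
            text=True,
        )
        assert result.returncode == 0
```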
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/testing.md
https://huggingface.co/docs/transformers/en/testing/#testing-with-environment-variables
.md
called if anything. This helper method creates a copy of the `os.environ` object, so the original remains intact.
31_36_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/testing.md
https://huggingface.co/docs/transformers/en/testing/#getting-reproducible-results
.md
In some situations you may want to remove randomness for your tests. To get identical reproducible results, you will need to fix the seed:

```python
seed = 42

# python RNG
import random

random.seed(seed)

# pytorch RNGs
import torch

torch.manual_seed(seed)
torch.backends.cudnn.deterministic = True
if torch.cuda.is_available():
    torch.cuda.manual_seed_all(seed)

# numpy RNG
import numpy as np

np.random.seed(seed)

# tf RNG
import tensorflow as tf

tf.random.set_seed(seed)
```
31_37_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/testing.md
https://huggingface.co/docs/transformers/en/testing/#debugging-tests
.md
To start a debugger at the point of the warning, do this:

```bash
pytest tests/utils/test_logging.py -W error::UserWarning --pdb
```
31_38_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/testing.md
https://huggingface.co/docs/transformers/en/testing/#working-with-github-actions-workflows
.md
To trigger a self-push workflow CI job, you must:

1. Create a new branch on `transformers` origin (not a fork!).
2. The branch name has to start with either `ci_` or `ci-` (`main` triggers it too, but we can't do PRs on `main`). It also gets triggered only for specific paths - you can find the up-to-date definition, in case it has changed since this document was written, [here](https://github.com/huggingface/transformers/blob/main/.github/workflows/self-push.yml) under *push:*
31_39_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/testing.md
https://huggingface.co/docs/transformers/en/testing/#working-with-github-actions-workflows
.md
3. Create a PR from this branch.
4. Then you can see the job appear [here](https://github.com/huggingface/transformers/actions/workflows/self-push.yml). It may not run right away if there is a backlog.
31_39_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/testing.md
https://huggingface.co/docs/transformers/en/testing/#testing-experimental-ci-features
.md
Testing CI features can be potentially problematic as it can interfere with the normal CI functioning. Therefore if a new CI feature is to be added, it should be done as follows.

1. Create a new dedicated job that tests what needs to be tested
2. The new job must always succeed so that it gives us a green ✓ (details below).
3. Let it run for some days to see that a variety of different PR types get to run on it (user fork branches,
31_40_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/testing.md
https://huggingface.co/docs/transformers/en/testing/#testing-experimental-ci-features
.md
3. Let it run for some days to see that a variety of different PR types get to run on it (user fork branches, non-forked branches, branches originating from github.com UI direct file edit, various forced pushes, etc. - there are so many) while monitoring the experimental job's logs (not the overall job green as it's purposefully always green)
4. When it's clear that everything is solid, then merge the new changes into existing jobs.
31_40_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/testing.md
https://huggingface.co/docs/transformers/en/testing/#testing-experimental-ci-features
.md
green) 4. When it's clear that everything is solid, then merge the new changes into existing jobs.

That way experiments on CI functionality itself won't interfere with the normal workflow. Now how can we make the job always succeed while the new CI feature is being developed? Some CIs, like TravisCI, support ignore-step-failure and will report the overall job as successful, but CircleCI and Github Actions as of this writing don't support that. So the following workaround can be used:
31_40_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/testing.md
https://huggingface.co/docs/transformers/en/testing/#testing-experimental-ci-features
.md
Github Actions as of this writing don't support that. So the following workaround can be used:

1. `set +euo pipefail` at the beginning of the run command to suppress most potential failures in the bash script.
2. the last command must be a success: `echo "done"` or just `true` will do

Here is an example:

```yaml
- run:
    name: run CI experiment
    command: |
        set +euo pipefail
        echo "setting run-all-despite-any-errors-mode"
        this_command_will_fail
        echo "but bash continues to run"
        # emulate another failure
```
31_40_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/testing.md
https://huggingface.co/docs/transformers/en/testing/#testing-experimental-ci-features
.md
```yaml
        this_command_will_fail
        echo "but bash continues to run"
        # emulate another failure
        false
        # but the last command must be a success
        echo "during experiment do not remove: reporting success to CI, even if there were failures"
```

For simple commands you could also do:

```bash
cmd_that_may_fail || true
```

Of course, once satisfied with the results, integrate the experimental step or job with the rest of the normal jobs,
31_40_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/testing.md
https://huggingface.co/docs/transformers/en/testing/#testing-experimental-ci-features
.md
``` Of course, once satisfied with the results, integrate the experimental step or job with the rest of the normal jobs, while removing `set +euo pipefail` or any other things you may have added to ensure that the experimental job doesn't interfere with the normal CI functioning. This whole process would have been much easier if we only could set something like `allow-failure` for the experimental step, and let it fail without impacting the overall status of PRs. But as mentioned earlier CircleCI and
31_40_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/testing.md
https://huggingface.co/docs/transformers/en/testing/#testing-experimental-ci-features
.md
experimental step, and let it fail without impacting the overall status of PRs. But as mentioned earlier CircleCI and Github Actions don't support it at the moment.

You can vote for this feature and see where it is at these CI-specific threads:

- [Github Actions](https://github.com/actions/toolkit/issues/399)
- [CircleCI](https://ideas.circleci.com/ideas/CCI-I-344)
31_40_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/testing.md
https://huggingface.co/docs/transformers/en/testing/#deepspeed-integration
.md
For a PR that involves the DeepSpeed integration, keep in mind our CircleCI PR CI setup doesn't have GPUs. Tests requiring GPUs are run on a different CI nightly. This means if you get a passing CI report in your PR, it doesn’t mean the DeepSpeed tests pass.

To run DeepSpeed tests:

```bash
RUN_SLOW=1 pytest tests/deepspeed/test_deepspeed.py
```

Any changes to the modeling or PyTorch examples code require running the model zoo tests as well.

```bash
RUN_SLOW=1 pytest tests/deepspeed
```
31_41_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/performance.md
https://huggingface.co/docs/transformers/en/performance/
.md
<!--- Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
32_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/performance.md
https://huggingface.co/docs/transformers/en/performance/
.md
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
32_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/performance.md
https://huggingface.co/docs/transformers/en/performance/#performance-and-scalability
.md
Training large transformer models and deploying them to production present various challenges. During training, the model may require more GPU memory than available or exhibit slow training speed. In the deployment phase, the model can struggle to handle the required throughput in a production environment. This documentation aims to assist you in overcoming these challenges and finding the optimal settings for your use-case.
32_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/performance.md
https://huggingface.co/docs/transformers/en/performance/#performance-and-scalability
.md
This documentation aims to assist you in overcoming these challenges and finding the optimal settings for your use-case. The guides are divided into training and inference sections, as each comes with different challenges and solutions. Within each section you'll find separate guides for different hardware configurations, such as single GPU vs. multi-GPU for training or CPU vs. GPU for inference. Use this document as your starting point to navigate further to the methods that match your scenario.
32_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/performance.md
https://huggingface.co/docs/transformers/en/performance/#training
.md
Training large transformer models efficiently requires an accelerator such as a GPU or TPU. The most common case is where you have a single GPU. The methods that you can apply to improve training efficiency on a single GPU extend to other setups such as multiple GPU. However, there are also techniques that are specific to multi-GPU or CPU training. We cover them in separate sections.
32_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/performance.md
https://huggingface.co/docs/transformers/en/performance/#training
.md
separate sections. * [Methods and tools for efficient training on a single GPU](perf_train_gpu_one): start here to learn common approaches that can help optimize GPU memory utilization, speed up the training, or both. * [Multi-GPU training section](perf_train_gpu_many): explore this section to learn about further optimization methods that apply to multi-GPU settings, such as data, tensor, and pipeline parallelism. * [CPU training section](perf_train_cpu): learn about mixed precision training on CPU.
32_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/performance.md
https://huggingface.co/docs/transformers/en/performance/#training
.md
* [CPU training section](perf_train_cpu): learn about mixed precision training on CPU. * [Efficient Training on Multiple CPUs](perf_train_cpu_many): learn about distributed CPU training. * [Training on TPU with TensorFlow](perf_train_tpu_tf): if you are new to TPUs, refer to this section for an opinionated introduction to training on TPUs and using XLA. * [Custom hardware for training](perf_hardware): find tips and tricks when building your own deep learning rig.
32_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/performance.md
https://huggingface.co/docs/transformers/en/performance/#training
.md
* [Custom hardware for training](perf_hardware): find tips and tricks when building your own deep learning rig. * [Hyperparameter Search using Trainer API](hpo_train)
32_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/performance.md
https://huggingface.co/docs/transformers/en/performance/#inference
.md
Efficient inference with large models in a production environment can be as challenging as training them. In the following sections we go through the steps to run inference on CPU and single/multi-GPU setups. * [Inference on a single CPU](perf_infer_cpu) * [Inference on a single GPU](perf_infer_gpu_one) * [Multi-GPU inference](perf_infer_gpu_multi) * [XLA Integration for TensorFlow Models](tf_xla)
32_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/performance.md
https://huggingface.co/docs/transformers/en/performance/#training-and-inference
.md
Here you'll find techniques, tips and tricks that apply whether you are training a model, or running inference with it. * [Instantiating a big model](big_models) * [Troubleshooting performance issues](debugging)
32_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/performance.md
https://huggingface.co/docs/transformers/en/performance/#contribute
.md
This document is far from being complete and a lot more needs to be added, so if you have additions or corrections to make please don't hesitate to open a PR, or if you aren't sure, start an Issue and we can discuss the details there. When making contributions claiming that A is better than B, please try to include a reproducible benchmark and/or a link to the source of that information (unless it comes directly from you).
32_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/
.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
33_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
33_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#what--transformers-can-do
.md
🤗 Transformers is a library of pretrained state-of-the-art models for natural language processing (NLP), computer vision, and audio and speech processing tasks. Not only does the library contain Transformer models, but it also has non-Transformer models like modern convolutional networks for computer vision tasks. If you look at some of the most popular consumer products today, like smartphones, apps, and televisions, odds are that some kind of deep learning technology is behind them. Want to remove a
33_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#what--transformers-can-do
.md
like smartphones, apps, and televisions, odds are that some kind of deep learning technology is behind it. Want to remove a background object from a picture taken by your smartphone? This is an example of a panoptic segmentation task (don't worry if you don't know what this means yet, we'll describe it in the following sections!).
33_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#what--transformers-can-do
.md
This page provides an overview of the different speech and audio, computer vision, and NLP tasks that can be solved with the 🤗 Transformers library in just three lines of code!
33_1_2
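To make the "three lines of code" claim above concrete, here is a minimal sketch using the default sentiment-analysis pipeline; the input sentence is illustrative and the exact scores depend on the default checkpoint:

```py
>>> from transformers import pipeline

>>> classifier = pipeline(task="sentiment-analysis")
>>> classifier("Hugging Face makes machine learning accessible.")  # doctest: +SKIP
```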
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#audio
.md
Audio and speech processing tasks are a little different from the other modalities mainly because audio as an input is a continuous signal. Unlike text, a raw audio waveform can't be neatly split into discrete chunks the way a sentence can be divided into words. To get around this, the raw audio signal is typically sampled at regular intervals. If you take more samples within an interval, the sampling rate is higher, and the audio more closely resembles the original audio source.
33_2_0
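As a rough sketch of what the sampling rate means in practice, audio is usually resampled so it matches the rate a model was pretrained with; the dataset and target rate below are illustrative:

```py
>>> from datasets import load_dataset, Audio

>>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")  # illustrative dataset
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))  # resample to 16 kHz on the fly
>>> dataset[0]["audio"]["sampling_rate"]
16000
```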
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#audio
.md
Previous approaches preprocessed the audio to extract useful features from it. It is now more common to start audio and speech processing tasks by directly feeding the raw audio waveform to a feature encoder to extract an audio representation. This simplifies the preprocessing step and allows the model to learn the most essential features.
33_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#audio-classification
.md
Audio classification is a task that labels audio data from a predefined set of classes. It is a broad category with many specific applications, some of which include: * acoustic scene classification: label audio with a scene label ("office", "beach", "stadium") * acoustic event detection: label audio with a sound event label ("car horn", "whale calling", "glass breaking") * tagging: label audio containing multiple sounds (birdsongs, speaker identification in a meeting)
33_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#audio-classification
.md
* tagging: label audio containing multiple sounds (birdsongs, speaker identification in a meeting)
* music classification: label music with a genre label ("metal", "hip-hop", "country")

```py
>>> from transformers import pipeline
```
33_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#audio-classification
.md
>>> classifier = pipeline(task="audio-classification", model="superb/hubert-base-superb-er") >>> preds = classifier("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac") >>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds] >>> preds [{'score': 0.4532, 'label': 'hap'}, {'score': 0.3622, 'label': 'sad'}, {'score': 0.0943, 'label': 'neu'}, {'score': 0.0903, 'label': 'ang'}] ```
33_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#automatic-speech-recognition
.md
Automatic speech recognition (ASR) transcribes speech into text. It is one of the most common audio tasks due partly to speech being such a natural form of human communication. Today, ASR systems are embedded in "smart" technology products like speakers, phones, and cars. We can ask our virtual assistants to play music, set reminders, and tell us the weather.
33_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#automatic-speech-recognition
.md
But one of the key challenges Transformer architectures have helped with is in low-resource languages. By pretraining on large amounts of speech data, finetuning the model on only one hour of labeled speech data in a low-resource language can still produce high-quality results compared to previous ASR systems trained on 100x more labeled data.

```py
>>> from transformers import pipeline
```
33_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#automatic-speech-recognition
.md
```py
>>> transcriber = pipeline(task="automatic-speech-recognition", model="openai/whisper-small")
>>> transcriber("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
{'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.'}
```
33_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#computer-vision
.md
One of the earliest successful computer vision tasks was recognizing images of zip code numbers using a [convolutional neural network (CNN)](glossary#convolution). An image is composed of pixels, and each pixel has a numerical value. This makes it easy to represent an image as a matrix of pixel values. Each particular combination of pixel values describes the colors of an image. Two general ways computer vision tasks can be solved are:
33_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#computer-vision
.md
Two general ways computer vision tasks can be solved are: 1. Use convolutions to learn the hierarchical features of an image from low-level features to high-level abstract things. 2. Split an image into patches and use a Transformer to gradually learn how each image patch is related to each other to form an image. Unlike the bottom-up approach favored by a CNN, this is kind of like starting out with a blurry image and then gradually bringing it into focus.
33_5_1
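To make the patch idea more concrete, here is a minimal sketch (not tied to any particular model) that splits a dummy image tensor into 16x16 patches and flattens them into a sequence a Transformer could attend over:

```py
>>> import torch

>>> image = torch.randn(1, 3, 224, 224)  # dummy image: 3 channels, 224x224 pixels
>>> patch_size = 16
>>> patches = image.unfold(2, patch_size, patch_size).unfold(3, patch_size, patch_size)
>>> patches = patches.contiguous().view(1, 3, -1, patch_size, patch_size)
>>> patches = patches.permute(0, 2, 1, 3, 4).flatten(2)  # (batch, num_patches, flattened patch)
>>> patches.shape
torch.Size([1, 196, 768])
```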
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#image-classification
.md
Image classification labels an entire image from a predefined set of classes. Like most classification tasks, there are many practical use cases for image classification, some of which include: * healthcare: label medical images to detect disease or monitor patient health * environment: label satellite images to monitor deforestation, inform wildland management or detect wildfires * agriculture: label images of crops to monitor plant health or satellite images for land use monitoring
33_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#image-classification
.md
* agriculture: label images of crops to monitor plant health or satellite images for land use monitoring
* ecology: label images of animal or plant species to monitor wildlife populations or track endangered species

```py
>>> from transformers import pipeline
```
33_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#image-classification
.md
>>> classifier = pipeline(task="image-classification") >>> preds = classifier( ... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg" ... ) >>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds] >>> print(*preds, sep="\n") {'score': 0.4335, 'label': 'lynx, catamount'} {'score': 0.0348, 'label': 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor'}
33_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#image-classification
.md
```py
{'score': 0.0348, 'label': 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor'}
{'score': 0.0324, 'label': 'snow leopard, ounce, Panthera uncia'}
{'score': 0.0239, 'label': 'Egyptian cat'}
{'score': 0.0229, 'label': 'tiger cat'}
```
33_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#object-detection
.md
Unlike image classification, object detection identifies multiple objects within an image and the objects' positions in an image (defined by the bounding box). Some example applications of object detection include:

* self-driving vehicles: detect everyday traffic objects such as other vehicles, pedestrians, and traffic lights
* remote sensing: disaster monitoring, urban planning, and weather forecasting
* defect detection: detect cracks or structural damage in buildings, and manufacturing defects

```py
33_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#object-detection
.md
* defect detection: detect cracks or structural damage in buildings, and manufacturing defects

```py
>>> from transformers import pipeline
```
33_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#object-detection
.md
>>> detector = pipeline(task="object-detection") >>> preds = detector( ... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg" ... ) >>> preds = [{"score": round(pred["score"], 4), "label": pred["label"], "box": pred["box"]} for pred in preds] >>> preds [{'score': 0.9865, 'label': 'cat', 'box': {'xmin': 178, 'ymin': 154, 'xmax': 882, 'ymax': 598}}] ```
33_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#image-segmentation
.md
Image segmentation is a pixel-level task that assigns every pixel in an image to a class. It differs from object detection, which uses bounding boxes to label and predict objects in an image because segmentation is more granular. Segmentation can detect objects at a pixel-level. There are several types of image segmentation: * instance segmentation: in addition to labeling the class of an object, it also labels each distinct instance of an object ("dog-1", "dog-2")
33_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#image-segmentation
.md
* panoptic segmentation: a combination of semantic and instance segmentation; it labels each pixel with a semantic class **and** each distinct instance of an object
33_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#image-segmentation
.md
Segmentation tasks are helpful in self-driving vehicles to create a pixel-level map of the world around them so they can navigate safely around pedestrians and other vehicles. It is also useful for medical imaging, where the task's finer granularity can help identify abnormal cells or organ features. Image segmentation can also be used in ecommerce to virtually try on clothes or create augmented reality experiences by overlaying objects in the real world through your camera. ```py
33_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#image-segmentation
.md
```py
>>> from transformers import pipeline
```
33_8_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#image-segmentation
.md
>>> segmenter = pipeline(task="image-segmentation") >>> preds = segmenter( ... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg" ... ) >>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds] >>> print(*preds, sep="\n") {'score': 0.9879, 'label': 'LABEL_184'} {'score': 0.9973, 'label': 'snow'} {'score': 0.9972, 'label': 'cat'} ```
33_8_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#depth-estimation
.md
Depth estimation predicts the distance of each pixel in an image from the camera. This computer vision task is especially important for scene understanding and reconstruction. For example, in self-driving cars, vehicles need to understand how far objects like pedestrians, traffic signs, and other vehicles are to avoid obstacles and collisions. Depth information is also helpful for constructing 3D representations from 2D images and can be used to create high-quality 3D representations of biological
33_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#depth-estimation
.md
for constructing 3D representations from 2D images and can be used to create high-quality 3D representations of biological structures or buildings.
33_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#depth-estimation
.md
There are two approaches to depth estimation:

* stereo: depths are estimated by comparing two images of the same scene taken from slightly different angles
* monocular: depths are estimated from a single image

```py
>>> from transformers import pipeline
```
33_9_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#depth-estimation
.md
```py
>>> depth_estimator = pipeline(task="depth-estimation")
>>> preds = depth_estimator(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
... )
```
33_9_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#natural-language-processing
.md
NLP tasks are among the most common types of tasks because text is such a natural way for us to communicate. To get text into a format recognized by a model, it needs to be tokenized. This means dividing a sequence of text into separate words or subwords (tokens) and then converting these tokens into numbers. As a result, you can represent a sequence of text as a sequence of numbers, and once you have a sequence of numbers, it can be input into a model to solve all sorts of NLP tasks!
33_10_0
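As a minimal sketch of the tokenization step described above (the checkpoint is chosen purely for illustration), a tokenizer splits text into tokens and converts them into the numbers the model receives:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # illustrative checkpoint
>>> encoded = tokenizer("Hugging Face makes NLP accessible.")
>>> input_ids = encoded["input_ids"]  # the sequence of numbers fed to the model
>>> tokens = tokenizer.convert_ids_to_tokens(input_ids)  # the subword tokens the ids correspond to
```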
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#text-classification
.md
Like classification tasks in any modality, text classification labels a sequence of text (it can be sentence-level, a paragraph, or a document) from a predefined set of classes. There are many practical applications for text classification, some of which include: * sentiment analysis: label text according to some polarity like `positive` or `negative` which can inform and support decision-making in fields like politics, finance, and marketing
33_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#text-classification
.md
* content classification: label text according to some topic to help organize and filter information in news and social media feeds (`weather`, `sports`, `finance`, etc.)

```py
>>> from transformers import pipeline
```
33_11_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#text-classification
.md
>>> classifier = pipeline(task="sentiment-analysis") >>> preds = classifier("Hugging Face is the best thing since sliced bread!") >>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds] >>> preds [{'score': 0.9991, 'label': 'POSITIVE'}] ```
33_11_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#token-classification
.md
In any NLP task, text is preprocessed by separating the sequence of text into individual words or subwords. These are known as [tokens](glossary#token). Token classification assigns each token a label from a predefined set of classes. Two common types of token classification are: * named entity recognition (NER): label a token according to an entity category like organization, person, location or date. NER is especially popular in biomedical settings, where it can label genes, proteins, and drug names.
33_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#token-classification
.md
* part-of-speech tagging (POS): label a token according to its part-of-speech like noun, verb, or adjective. POS is useful for helping translation systems understand how two identical words are grammatically different (bank as a noun versus bank as a verb).

```py
>>> from transformers import pipeline
```
33_12_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#token-classification
.md
>>> classifier = pipeline(task="ner") >>> preds = classifier("Hugging Face is a French company based in New York City.") >>> preds = [ ... { ... "entity": pred["entity"], ... "score": round(pred["score"], 4), ... "index": pred["index"], ... "word": pred["word"], ... "start": pred["start"], ... "end": pred["end"], ... } ... for pred in preds ... ] >>> print(*preds, sep="\n")
33_12_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#token-classification
.md
... "end": pred["end"], ... } ... for pred in preds ... ] >>> print(*preds, sep="\n") {'entity': 'I-ORG', 'score': 0.9968, 'index': 1, 'word': 'Hu', 'start': 0, 'end': 2} {'entity': 'I-ORG', 'score': 0.9293, 'index': 2, 'word': '##gging', 'start': 2, 'end': 7} {'entity': 'I-ORG', 'score': 0.9763, 'index': 3, 'word': 'Face', 'start': 8, 'end': 12} {'entity': 'I-MISC', 'score': 0.9983, 'index': 6, 'word': 'French', 'start': 18, 'end': 24}
33_12_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#token-classification
.md
```py
{'entity': 'I-MISC', 'score': 0.9983, 'index': 6, 'word': 'French', 'start': 18, 'end': 24}
{'entity': 'I-LOC', 'score': 0.999, 'index': 10, 'word': 'New', 'start': 42, 'end': 45}
{'entity': 'I-LOC', 'score': 0.9987, 'index': 11, 'word': 'York', 'start': 46, 'end': 50}
{'entity': 'I-LOC', 'score': 0.9992, 'index': 12, 'word': 'City', 'start': 51, 'end': 55}
```
33_12_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#question-answering
.md
Question answering is another token-level task that returns an answer to a question, sometimes with context (open-domain) and other times without context (closed-domain). This task happens whenever we ask a virtual assistant something like whether a restaurant is open. It can also provide customer or technical support and help search engines retrieve the relevant information you're asking for. There are two common types of question answering:
33_13_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#question-answering
.md
There are two common types of question answering:

* extractive: given a question and some context, the answer is a span of text from the context the model must extract
* abstractive: given a question and some context, the answer is generated from the context; this approach is handled by the [`Text2TextGenerationPipeline`] instead of the [`QuestionAnsweringPipeline`] shown below

```py
>>> from transformers import pipeline
```
33_13_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#question-answering
.md
>>> question_answerer = pipeline(task="question-answering") >>> preds = question_answerer( ... question="What is the name of the repository?", ... context="The name of the repository is huggingface/transformers", ... ) >>> print( ... f"score: {round(preds['score'], 4)}, start: {preds['start']}, end: {preds['end']}, answer: {preds['answer']}" ... ) score: 0.9327, start: 30, end: 54, answer: huggingface/transformers ```
33_13_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#summarization
.md
Summarization creates a shorter version of a text from a longer one while trying to preserve most of the meaning of the original document. Summarization is a sequence-to-sequence task; it outputs a shorter text sequence than the input. There are a lot of long-form documents that can be summarized to help readers quickly understand the main points. Legislative bills, legal and financial documents, patents, and scientific papers are a few examples of documents that could be summarized to save readers time and
33_14_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#summarization
.md
documents, patents, and scientific papers are a few examples of documents that could be summarized to save readers time and serve as a reading aid.
33_14_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#summarization
.md
Like question answering, there are two types of summarization:

* extractive: identify and extract the most important sentences from the original text
* abstractive: generate the target summary (which may include new words not in the input document) from the original text; the [`SummarizationPipeline`] uses the abstractive approach

```py
>>> from transformers import pipeline
```
33_14_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#summarization
.md
>>> summarizer = pipeline(task="summarization") >>> summarizer(
33_14_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#summarization
.md
... "In this work, we presented the Transformer, the first sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention. For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers. On both WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, we achieve a new state of the art. In the former
33_14_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#summarization
.md
WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, we achieve a new state of the art. In the former task our best model outperforms even all previously reported ensembles."
33_14_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#summarization
.md
... )
[{'summary_text': ' The Transformer is the first sequence transduction model based entirely on attention . It replaces the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention . For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers .'}]
```
33_14_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#translation
.md
Translation converts a sequence of text in one language to another. It is important in helping people from different backgrounds communicate with each other, in translating content to reach wider audiences, and it can even serve as a learning tool for people learning a new language. Along with summarization, translation is a sequence-to-sequence task, meaning the model receives an input sequence and returns a target output sequence.
33_15_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#translation
.md
In the early days, translation models were mostly monolingual, but recently, there has been increasing interest in multilingual models that can translate between many pairs of languages.

```py
>>> from transformers import pipeline
```
33_15_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#translation
.md
>>> text = "translate English to French: Hugging Face is a community-based open-source platform for machine learning." >>> translator = pipeline(task="translation", model="google-t5/t5-small") >>> translator(text) [{'translation_text': "Hugging Face est une tribune communautaire de l'apprentissage des machines."}] ```
33_15_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#language-modeling
.md
Language modeling is a task that predicts a word in a sequence of text. It has become a very popular NLP task because a pretrained language model can be finetuned for many other downstream tasks. Lately, there has been a lot of interest in large language models (LLMs) which demonstrate zero- or few-shot learning. This means the model can solve tasks it wasn't explicitly trained to do! Language models can be used to generate fluent and convincing text, though you need to be careful since the text may not
33_16_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#language-modeling
.md
to do! Language models can be used to generate fluent and convincing text, though you need to be careful since the text may not always be accurate.
33_16_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#language-modeling
.md
There are two types of language modeling:

* causal: the model's objective is to predict the next token in a sequence, and future tokens are masked

```py
>>> from transformers import pipeline
```
33_16_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#language-modeling
.md
>>> prompt = "Hugging Face is a community-based open-source platform for machine learning." >>> generator = pipeline(task="text-generation") >>> generator(prompt) # doctest: +SKIP ``` * masked: the model's objective is to predict a masked token in a sequence with full access to the tokens in the sequence ```py >>> text = "Hugging Face is a community-based open-source <mask> for machine learning." >>> fill_mask = pipeline(task="fill-mask") >>> preds = fill_mask(text, top_k=1) >>> preds = [ ... {
33_16_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#language-modeling
.md
>>> fill_mask = pipeline(task="fill-mask") >>> preds = fill_mask(text, top_k=1) >>> preds = [ ... { ... "score": round(pred["score"], 4), ... "token": pred["token"], ... "token_str": pred["token_str"], ... "sequence": pred["sequence"], ... } ... for pred in preds ... ] >>> preds [{'score': 0.2236, 'token': 1761, 'token_str': ' platform', 'sequence': 'Hugging Face is a community-based open-source platform for machine learning.'}] ```
33_16_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#multimodal
.md
Multimodal tasks require a model to process multiple data modalities (text, image, audio, video) to solve a particular problem. Image captioning is an example of a multimodal task where the model takes an image as input and outputs a sequence of text describing the image or some properties of the image.
33_17_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#multimodal
.md
Although multimodal models work with different data types or modalities, internally, the preprocessing steps help the model convert all the data types into embeddings (vectors, or lists of numbers, that hold meaningful information about the data). For a task like image captioning, the model learns relationships between image embeddings and text embeddings.
33_17_1
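Since this passage, unlike the task sections above, has no accompanying example, here is a minimal sketch of image captioning with the `image-to-text` pipeline; the default checkpoint is used, so the exact caption will vary:

```py
>>> from transformers import pipeline

>>> captioner = pipeline(task="image-to-text")
>>> captioner(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
... )  # doctest: +SKIP
```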
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#document-question-answering
.md
Document question answering is a task that answers natural language questions from a document. Unlike a token-level question answering task which takes text as input, document question answering takes an image of a document as input along with a question about the document and returns an answer. Document question answering can be used to parse structured documents and extract key information from them. In the example below, the total amount and change due can be extracted from a receipt.

```py
33_18_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#document-question-answering
.md
```py
>>> from transformers import pipeline
>>> from PIL import Image
>>> import requests
```
33_18_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#document-question-answering
.md
>>> url = "https://huggingface.co/datasets/hf-internal-testing/example-documents/resolve/main/jpeg_images/2.jpg" >>> image = Image.open(requests.get(url, stream=True).raw)
33_18_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#document-question-answering
.md
```py
>>> doc_question_answerer = pipeline("document-question-answering", model="magorshunov/layoutlm-invoices")
>>> preds = doc_question_answerer(
...     question="What is the total amount?",
...     image=image,
... )
>>> preds
[{'score': 0.8531, 'answer': '17,000', 'start': 4, 'end': 4}]
```
33_18_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/task_summary.md
https://huggingface.co/docs/transformers/en/task_summary/#document-question-answering
.md
...     image=image,
... )
>>> preds
[{'score': 0.8531, 'answer': '17,000', 'start': 4, 'end': 4}]
```

Hopefully, this page has given you some more background information about all the types of tasks in each modality and the practical importance of each one. In the next [section](tasks_explained), you'll learn **how** 🤗 Transformers work to solve these tasks.
33_18_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/hpo_train.md
https://huggingface.co/docs/transformers/en/hpo_train/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
34_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/hpo_train.md
https://huggingface.co/docs/transformers/en/hpo_train/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
34_0_1