If you want to know if you are logged in, you can use `huggingface-cli whoami`. This command doesn't have any options and simply prints your username and the organizations you are a part of on the Hub: ```bash huggingface-cli whoami Wauplin orgs: huggingface,eu-test,OAuthTesters,hf-accelerate,HFSmolCluster ``` If you are not logged in, an error message will be printed.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#huggingface-cli-whoami
#huggingface-cli-whoami
.md
28_6
This command logs you out. In practice, it will delete all tokens stored on your machine. If you want to remove a specific token, you can specify the token name as an argument. This command will not log you out if you are logged in using the `HF_TOKEN` environment variable (see [reference](../package_reference/environment_variables#hftoken)). If that is the case, you must unset the environment variable in your machine configuration.
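For instance, a minimal sketch of a full logout in a POSIX shell (assuming you may have authenticated both via a saved token and via the environment variable) could look like:

```shell
# Remove all tokens stored on this machine
huggingface-cli logout

# If HF_TOKEN is set in the environment, the CLI still considers you
# logged in; unset it (and remove it from your shell profile) as well
unset HF_TOKEN
```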
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#huggingface-cli-logout
#huggingface-cli-logout
.md
28_7
Use the `huggingface-cli download` command to download files from the Hub directly. Internally, it uses the same [`hf_hub_download`] and [`snapshot_download`] helpers described in the [Download](./download) guide and prints the returned path to the terminal. In the examples below, we will walk through the most common use cases. For a full list of available options, you can run: ```bash huggingface-cli download --help ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#huggingface-cli-download
#huggingface-cli-download
.md
28_8
To download a single file from a repo, simply provide the repo_id and filename as follows: ```bash >>> huggingface-cli download gpt2 config.json downloading https://huggingface.co/gpt2/resolve/main/config.json to /home/wauplin/.cache/huggingface/hub/tmpwrq8dm5o (…)ingface.co/gpt2/resolve/main/config.json: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 665/665 [00:00<00:00, 2.49MB/s] /home/wauplin/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/config.json ``` On the last line, the command always prints the path to the file on your local machine.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#download-a-single-file
#download-a-single-file
.md
28_9
In some cases, you may want to download all the files from a repository. You can do so by specifying just the repo id: ```bash >>> huggingface-cli download HuggingFaceH4/zephyr-7b-beta Fetching 23 files: 0%| | 0/23 [00:00<?, ?it/s] ... ... /home/wauplin/.cache/huggingface/hub/models--HuggingFaceH4--zephyr-7b-beta/snapshots/3bac358730f8806e5c3dc7c7e19eb36e045bf720 ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#download-an-entire-repository
#download-an-entire-repository
.md
28_10
You can also download a subset of the files from a repository with a single command. This can be done in two ways. If you already have a precise list of the files you want to download, you can simply provide them sequentially: ```bash >>> huggingface-cli download gpt2 config.json model.safetensors Fetching 2 files: 0%| | 0/2 [00:00<?, ?it/s] downloading https://huggingface.co/gpt2/resolve/11c5a3d5811f50298f278a704980280950aedb10/model.safetensors to /home/wauplin/.cache/huggingface/hub/tmpdachpl3o (…)8f278a7049802950aedb10/model.safetensors: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 8.09k/8.09k [00:00<00:00, 40.5MB/s] Fetching 2 files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:00<00:00, 3.76it/s] /home/wauplin/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10 ``` The other approach is to provide patterns to filter which files you want to download using `--include` and `--exclude`. For example, if you want to download all safetensors files from [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0), except the files in FP16 precision: ```bash >>> huggingface-cli download stabilityai/stable-diffusion-xl-base-1.0 --include "*.safetensors" --exclude "*.fp16.*" Fetching 8 files: 0%| | 0/8 [00:00<?, ?it/s] ... ... Fetching 8 files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 8/8 (...) /home/wauplin/.cache/huggingface/hub/models--stabilityai--stable-diffusion-xl-base-1.0/snapshots/462165984030d82259a11f4367a4eed129e94a7b ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#download-multiple-files
#download-multiple-files
.md
28_11
The examples above show how to download from a model repository. To download a dataset or a Space, use the `--repo-type` option: ```bash # https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k >>> huggingface-cli download HuggingFaceH4/ultrachat_200k --repo-type dataset # https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat >>> huggingface-cli download HuggingFaceH4/zephyr-chat --repo-type space ... ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#download-a-dataset-or-a-space
#download-a-dataset-or-a-space
.md
28_12
The examples above show how to download from the latest commit on the main branch. To download from a specific revision (commit hash, branch name or tag), use the `--revision` option: ```bash >>> huggingface-cli download bigcode/the-stack --repo-type dataset --revision v1.1 ... ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#download-a-specific-revision
#download-a-specific-revision
.md
28_13
The recommended (and default) way to download files from the Hub is to use the cache-system. However, in some cases you may want to download files to a specific folder instead. This is useful to get a workflow closer to what git commands offer. You can do that using the `--local-dir` option. A `.cache/huggingface/` folder is created at the root of your local directory containing metadata about the downloaded files. This prevents re-downloading files if they're already up-to-date. If the metadata has changed, then the new file version is downloaded. This makes `--local-dir` optimized for pulling only the latest changes. <Tip> For more details on how downloading to a local file works, check out the [download](./download.md#download-files-to-a-local-folder) guide. </Tip> ```bash >>> huggingface-cli download adept/fuyu-8b model-00001-of-00002.safetensors --local-dir fuyu ... fuyu/model-00001-of-00002.safetensors ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#download-to-a-local-folder
#download-to-a-local-folder
.md
28_14
If not using `--local-dir`, all files will be downloaded by default to the cache directory defined by the `HF_HOME` [environment variable](../package_reference/environment_variables#hfhome). You can specify a custom cache using `--cache-dir`: ```bash >>> huggingface-cli download adept/fuyu-8b --cache-dir ./path/to/cache ... ./path/to/cache/models--adept--fuyu-8b/snapshots/ddcacbcf5fdf9cc59ff01f6be6d6662624d9c745 ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#specify-cache-directory
#specify-cache-directory
.md
28_15
To access private or gated repositories, you must use a token. By default, the token saved locally (using `huggingface-cli login`) will be used. If you want to authenticate explicitly, use the `--token` option: ```bash >>> huggingface-cli download gpt2 config.json --token=hf_**** /home/wauplin/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/config.json ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#specify-a-token
#specify-a-token
.md
28_16
By default, the `huggingface-cli download` command will be verbose. It will print details such as warning messages, information about the downloaded files, and progress bars. If you want to silence all of this, use the `--quiet` option. Only the last line (i.e. the path to the downloaded files) is printed. This can prove useful if you want to pass the output to another command in a script. ```bash >>> huggingface-cli download gpt2 --quiet /home/wauplin/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10 ```
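Since only the path is printed, quiet mode composes well with command substitution. A minimal sketch, reusing the `gpt2` example above:

```shell
# Capture the snapshot path and pass it on to another command
model_path=$(huggingface-cli download gpt2 --quiet)
ls "$model_path"
```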
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#quiet-mode
#quiet-mode
.md
28_17
On machines with slow connections, you might encounter timeout issues like this one: ```bash `requests.exceptions.ReadTimeout: (ReadTimeoutError("HTTPSConnectionPool(host='cdn-lfs-us-1.huggingface.co', port=443): Read timed out. (read timeout=10)"), '(Request ID: a33d910c-84c6-4514-8362-c705e2039d38)')` ``` To mitigate this issue, set the `HF_HUB_DOWNLOAD_TIMEOUT` environment variable to a higher value (default is 10) and rerun your download command: ```bash export HF_HUB_DOWNLOAD_TIMEOUT=30 ``` For more details, check out the [environment variables reference](../package_reference/environment_variables#hfhubdownloadtimeout).
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#download-timeout
#download-timeout
.md
28_18
Use the `huggingface-cli upload` command to upload files to the Hub directly. Internally, it uses the same [`upload_file`] and [`upload_folder`] helpers described in the [Upload](./upload) guide. In the examples below, we will walk through the most common use cases. For a full list of available options, you can run: ```bash >>> huggingface-cli upload --help ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#huggingface-cli-upload
#huggingface-cli-upload
.md
28_19
The default usage for this command is: ```bash # Usage: huggingface-cli upload [repo_id] [local_path] [path_in_repo] ``` To upload the current directory at the root of the repo, use: ```bash >>> huggingface-cli upload my-cool-model . . https://huggingface.co/Wauplin/my-cool-model/tree/main/ ``` <Tip> If the repo doesn't exist yet, it will be created automatically. </Tip> You can also upload a specific folder: ```bash >>> huggingface-cli upload my-cool-model ./models . https://huggingface.co/Wauplin/my-cool-model/tree/main/ ``` Finally, you can upload a folder to a specific destination on the repo: ```bash >>> huggingface-cli upload my-cool-model ./path/to/curated/data /data/train https://huggingface.co/Wauplin/my-cool-model/tree/main/data/train ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#upload-an-entire-folder
#upload-an-entire-folder
.md
28_20
You can also upload a single file by setting `local_path` to point to a file on your machine. If that's the case, `path_in_repo` is optional and will default to the name of your local file: ```bash >>> huggingface-cli upload Wauplin/my-cool-model ./models/model.safetensors https://huggingface.co/Wauplin/my-cool-model/blob/main/model.safetensors ``` If you want to upload a single file to a specific directory, set `path_in_repo` accordingly: ```bash >>> huggingface-cli upload Wauplin/my-cool-model ./models/model.safetensors /vae/model.safetensors https://huggingface.co/Wauplin/my-cool-model/blob/main/vae/model.safetensors ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#upload-a-single-file
#upload-a-single-file
.md
28_21
To upload multiple files from a folder at once without uploading the entire folder, use the `--include` and `--exclude` patterns. They can also be combined with the `--delete` option to delete files on the repo while uploading new ones. In the example below, we sync the local Space by deleting remote files and uploading all files except the ones in `/logs`: ```bash # Sync local Space with Hub (upload new files except from logs/, delete removed files) >>> huggingface-cli upload Wauplin/space-example --repo-type=space --exclude="/logs/*" --delete="*" --commit-message="Sync local Space with Hub" ... ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#upload-multiple-files
#upload-multiple-files
.md
28_22
To upload to a dataset or a Space, use the `--repo-type` option: ```bash >>> huggingface-cli upload Wauplin/my-cool-dataset ./data /train --repo-type=dataset ... ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#upload-to-a-dataset-or-space
#upload-to-a-dataset-or-space
.md
28_23
To upload content to a repo owned by an organization instead of a personal repo, you must explicitly specify it in the `repo_id`: ```bash >>> huggingface-cli upload MyCoolOrganization/my-cool-model . . https://huggingface.co/MyCoolOrganization/my-cool-model/tree/main/ ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#upload-to-an-organization
#upload-to-an-organization
.md
28_24
By default, files are uploaded to the `main` branch. If you want to upload files to another branch or reference, use the `--revision` option: ```bash # Upload files to a PR >>> huggingface-cli upload bigcode/the-stack . . --repo-type dataset --revision refs/pr/104 ... ``` **Note:** if the given `--revision` does not exist and `--create-pr` is not set, a branch will be created automatically from the `main` branch.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#upload-to-a-specific-revision
#upload-to-a-specific-revision
.md
28_25
If you don't have the permission to push to a repo, you must open a PR and let the authors know about the changes you want to make. This can be done by setting the `--create-pr` option: ```bash # Create a PR and upload the files to it >>> huggingface-cli upload bigcode/the-stack . . --repo-type dataset --create-pr https://huggingface.co/datasets/bigcode/the-stack/blob/refs%2Fpr%2F104/ ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#upload-and-create-a-pr
#upload-and-create-a-pr
.md
28_26
In some cases, you might want to push regular updates to a repo. For example, this is useful if you're training a model and you want to upload the logs folder every 10 minutes. You can do this using the `--every` option: ```bash # Upload new logs every 10 minutes huggingface-cli upload training-model logs/ --every=10 ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#upload-at-regular-intervals
#upload-at-regular-intervals
.md
28_27
Use the `--commit-message` and `--commit-description` options to set a custom message and description for your commit instead of the default ones: ```bash >>> huggingface-cli upload Wauplin/my-cool-model ./models . --commit-message="Epoch 34/50" --commit-description="Val accuracy: 68%. Check tensorboard for more details." ... https://huggingface.co/Wauplin/my-cool-model/tree/main ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#specify-a-commit-message
#specify-a-commit-message
.md
28_28
To upload files, you must use a token. By default, the token saved locally (using `huggingface-cli login`) will be used. If you want to authenticate explicitly, use the `--token` option: ```bash >>> huggingface-cli upload Wauplin/my-cool-model ./models . --token=hf_**** ... https://huggingface.co/Wauplin/my-cool-model/tree/main ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#specify-a-token
#specify-a-token
.md
28_29
By default, the `huggingface-cli upload` command will be verbose. It will print details such as warning messages, information about the uploaded files, and progress bars. If you want to silence all of this, use the `--quiet` option. Only the last line (i.e. the URL to the uploaded files) is printed. This can prove useful if you want to pass the output to another command in a script. ```bash >>> huggingface-cli upload Wauplin/my-cool-model ./models . --quiet https://huggingface.co/Wauplin/my-cool-model/tree/main ```
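As with downloads, the single printed URL makes quiet mode easy to use in scripts. A sketch under illustrative assumptions (the repo id comes from the example above; the log file name is hypothetical):

```shell
# Record the commit URL of each upload, e.g. from a training loop
commit_url=$(huggingface-cli upload Wauplin/my-cool-model ./models . --quiet)
echo "$(date -u) uploaded: $commit_url" >> upload.log
```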
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#quiet-mode
#quiet-mode
.md
28_30
If you want to delete files from a Hugging Face repository, use the `huggingface-cli repo-files` command.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#huggingface-cli-repo-files
#huggingface-cli-repo-files
.md
28_31
The `huggingface-cli repo-files <repo_id> delete` sub-command allows you to delete files from a repository. Here are some usage examples. Delete a folder: ```bash >>> huggingface-cli repo-files Wauplin/my-cool-model delete folder/ Files correctly deleted from repo. Commit: https://huggingface.co/Wauplin/my-cool-mo... ``` Delete multiple files: ```bash >>> huggingface-cli repo-files Wauplin/my-cool-model delete file.txt folder/pytorch_model.bin Files correctly deleted from repo. Commit: https://huggingface.co/Wauplin/my-cool-mo... ``` Use Unix-style wildcards to delete sets of files: ```bash >>> huggingface-cli repo-files Wauplin/my-cool-model delete "*.txt" "folder/*.bin" Files correctly deleted from repo. Commit: https://huggingface.co/Wauplin/my-cool-mo... ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#delete-files
#delete-files
.md
28_32
To delete files from a repo you must be authenticated and authorized. By default, the token saved locally (using `huggingface-cli login`) will be used. If you want to authenticate explicitly, use the `--token` option: ```bash >>> huggingface-cli repo-files --token=hf_**** Wauplin/my-cool-model delete file.txt ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#specify-a-token
#specify-a-token
.md
28_33
Scanning your cache directory is useful if you want to know which repos you have downloaded and how much space it takes on your disk. You can do that by running `huggingface-cli scan-cache`: ```bash >>> huggingface-cli scan-cache REPO ID REPO TYPE SIZE ON DISK NB FILES LAST_ACCESSED LAST_MODIFIED REFS LOCAL PATH --------------------------- --------- ------------ -------- ------------- ------------- ------------------- ------------------------------------------------------------------------- glue dataset 116.3K 15 4 days ago 4 days ago 2.4.0, main, 1.17.0 /home/wauplin/.cache/huggingface/hub/datasets--glue google/fleurs dataset 64.9M 6 1 week ago 1 week ago refs/pr/1, main /home/wauplin/.cache/huggingface/hub/datasets--google--fleurs Jean-Baptiste/camembert-ner model 441.0M 7 2 weeks ago 16 hours ago main /home/wauplin/.cache/huggingface/hub/models--Jean-Baptiste--camembert-ner bert-base-cased model 1.9G 13 1 week ago 2 years ago /home/wauplin/.cache/huggingface/hub/models--bert-base-cased t5-base model 10.1K 3 3 months ago 3 months ago main /home/wauplin/.cache/huggingface/hub/models--t5-base t5-small model 970.7M 11 3 days ago 3 days ago refs/pr/1, main /home/wauplin/.cache/huggingface/hub/models--t5-small Done in 0.0s. Scanned 6 repo(s) for a total of 3.4G. Got 1 warning(s) while scanning. Use -vvv to print details. ``` For more details about how to scan your cache directory, please refer to the [Manage your cache](./manage-cache#scan-cache-from-the-terminal) guide.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#huggingface-cli-scan-cache
#huggingface-cli-scan-cache
.md
28_34
`huggingface-cli delete-cache` is a tool that helps you delete parts of your cache that you don't use anymore. This is useful for freeing up disk space. To learn more about using this command, please refer to the [Manage your cache](./manage-cache#clean-cache-from-the-terminal) guide.
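As a quick sketch, the command launches an interactive selector by default; a non-interactive mode is also available via `--disable-tui` (assumed to be available in recent `huggingface_hub` versions, see the linked guide for authoritative usage):

```shell
# Interactively select cached revisions to delete
huggingface-cli delete-cache

# Non-interactive mode: review the list of revisions in a temporary
# file instead of the terminal UI
huggingface-cli delete-cache --disable-tui
```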
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#huggingface-cli-delete-cache
#huggingface-cli-delete-cache
.md
28_35
The `huggingface-cli tag` command allows you to tag, untag, and list tags for repositories.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#huggingface-cli-tag
#huggingface-cli-tag
.md
28_36
To tag a repo, you need to provide the `repo_id` and the `tag` name: ```bash >>> huggingface-cli tag Wauplin/my-cool-model v1.0 You are about to create tag v1.0 on model Wauplin/my-cool-model Tag v1.0 created on Wauplin/my-cool-model ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#tag-a-model
#tag-a-model
.md
28_37
By default, the tag is created on the `main` branch. To tag a specific revision instead, use the `--revision` option: ```bash >>> huggingface-cli tag Wauplin/my-cool-model v1.0 --revision refs/pr/104 You are about to create tag v1.0 on model Wauplin/my-cool-model Tag v1.0 created on Wauplin/my-cool-model ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#tag-a-model-at-a-specific-revision
#tag-a-model-at-a-specific-revision
.md
28_38
If you want to tag a dataset or Space, you must specify the `--repo-type` option: ```bash >>> huggingface-cli tag bigcode/the-stack v1.0 --repo-type dataset You are about to create tag v1.0 on dataset bigcode/the-stack Tag v1.0 created on bigcode/the-stack ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#tag-a-dataset-or-a-space
#tag-a-dataset-or-a-space
.md
28_39
To list all tags for a repository, use the `-l` or `--list` option: ```bash >>> huggingface-cli tag Wauplin/gradio-space-ci -l --repo-type space Tags for space Wauplin/gradio-space-ci: 0.2.2 0.2.1 0.2.0 0.1.2 0.0.2 0.0.1 ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#list-tags
#list-tags
.md
28_40
To delete a tag, use the `-d` or `--delete` option: ```bash >>> huggingface-cli tag -d Wauplin/my-cool-model v1.0 You are about to delete tag v1.0 on model Wauplin/my-cool-model Proceed? [Y/n] y Tag v1.0 deleted on Wauplin/my-cool-model ``` You can also pass `-y` to skip the confirmation step.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#delete-a-tag
#delete-a-tag
.md
28_41
The `huggingface-cli env` command prints details about your machine setup. This is useful when you open an issue on [GitHub](https://github.com/huggingface/huggingface_hub) to help the maintainers investigate your problem. ```bash >>> huggingface-cli env Copy-and-paste the text below in your GitHub issue. - huggingface_hub version: 0.19.0.dev0 - Platform: Linux-6.2.0-36-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - Running in iPython ?: No - Running in notebook ?: No - Running in Google Colab ?: No - Token path ?: /home/wauplin/.cache/huggingface/token - Has saved token ?: True - Who am I ?: Wauplin - Configured git credential helpers: store - FastAI: N/A - Tensorflow: 2.11.0 - Torch: 1.12.1 - Jinja2: 3.1.2 - Graphviz: 0.20.1 - Pydot: 1.4.2 - Pillow: 9.2.0 - hf_transfer: 0.1.3 - gradio: 4.0.2 - tensorboard: 2.6 - numpy: 1.23.2 - pydantic: 2.4.2 - aiohttp: 3.8.4 - ENDPOINT: https://huggingface.co - HF_HUB_CACHE: /home/wauplin/.cache/huggingface/hub - HF_ASSETS_CACHE: /home/wauplin/.cache/huggingface/assets - HF_TOKEN_PATH: /home/wauplin/.cache/huggingface/token - HF_HUB_OFFLINE: False - HF_HUB_DISABLE_TELEMETRY: False - HF_HUB_DISABLE_PROGRESS_BARS: None - HF_HUB_DISABLE_SYMLINKS_WARNING: False - HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False - HF_HUB_DISABLE_IMPLICIT_TOKEN: False - HF_HUB_ENABLE_HF_TRANSFER: False - HF_HUB_ETAG_TIMEOUT: 10 - HF_HUB_DOWNLOAD_TIMEOUT: 10 ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/cli.md
https://huggingface.co/docs/huggingface_hub/en/guides/cli/#huggingface-cli-env
#huggingface-cli-env
.md
28_42
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/community.md
https://huggingface.co/docs/huggingface_hub/en/guides/community/
.md
29_0
The `huggingface_hub` library provides a Python interface to interact with Pull Requests and Discussions on the Hub. Visit [the dedicated documentation page](https://huggingface.co/docs/hub/repositories-pull-requests-discussions) for a deeper view of what Discussions and Pull Requests on the Hub are, and how they work under the hood.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/community.md
https://huggingface.co/docs/huggingface_hub/en/guides/community/#interact-with-discussions-and-pull-requests
#interact-with-discussions-and-pull-requests
.md
29_1
The `HfApi` class allows you to retrieve Discussions and Pull Requests on a given repo: ```python >>> from huggingface_hub import get_repo_discussions >>> for discussion in get_repo_discussions(repo_id="bigscience/bloom"): ... print(f"{discussion.num} - {discussion.title}, pr: {discussion.is_pull_request}") # 11 - Add Flax weights, pr: True # 10 - Update README.md, pr: True # 9 - Training languages in the model card, pr: True # 8 - Update tokenizer_config.json, pr: True # 7 - Slurm training script, pr: False [...] ``` `HfApi.get_repo_discussions` supports filtering by author, type (Pull Request or Discussion) and status (`open` or `closed`): ```python >>> from huggingface_hub import get_repo_discussions >>> for discussion in get_repo_discussions( ... repo_id="bigscience/bloom", ... author="ArthurZ", ... discussion_type="pull_request", ... discussion_status="open", ... ): ... print(f"{discussion.num} - {discussion.title} by {discussion.author}, pr: {discussion.is_pull_request}") # 19 - Add Flax weights by ArthurZ, pr: True ``` `HfApi.get_repo_discussions` returns a [generator](https://docs.python.org/3.7/howto/functional.html#generators) that yields [`Discussion`] objects. To get all the Discussions in a single list, run: ```python >>> from huggingface_hub import get_repo_discussions >>> discussions_list = list(get_repo_discussions(repo_id="bert-base-uncased")) ``` The [`Discussion`] object returned by [`HfApi.get_repo_discussions`] contains a high-level overview of the Discussion or Pull Request. You can also get more detailed information using [`HfApi.get_discussion_details`]: ```python >>> from huggingface_hub import get_discussion_details >>> get_discussion_details( ... repo_id="bigscience/bloom-1b3", ... discussion_num=2 ... ) DiscussionWithDetails( num=2, author='cakiki', title='Update VRAM memory for the V100s', status='open', is_pull_request=True, events=[ DiscussionComment(type='comment', author='cakiki', ...), DiscussionCommit(type='commit', author='cakiki', summary='Update VRAM memory for the V100s', oid='1256f9d9a33fa8887e1c1bf0e09b4713da96773a', ...), ], conflicting_files=[], target_branch='refs/heads/main', merge_commit_oid=None, diff='diff --git a/README.md b/README.md\nindex a6ae3b9294edf8d0eda0d67c7780a10241242a7e..3a1814f212bc3f0d3cc8f74bdbd316de4ae7b9e3 100644\n--- a/README.md\n+++ b/README.md\n@@ -132,7 +132,7 [...]', ) ``` [`HfApi.get_discussion_details`] returns a [`DiscussionWithDetails`] object, which is a subclass of [`Discussion`] with more detailed information about the Discussion or Pull Request. Information includes all the comments, status changes, and renames of the Discussion via [`DiscussionWithDetails.events`]. In case of a Pull Request, you can retrieve the raw git diff with [`DiscussionWithDetails.diff`]. All the commits of the Pull Request are listed in [`DiscussionWithDetails.events`].
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/community.md
https://huggingface.co/docs/huggingface_hub/en/guides/community/#retrieve-discussions-and-pull-requests-from-the-hub
#retrieve-discussions-and-pull-requests-from-the-hub
.md
29_2
The [`HfApi`] class also offers ways to create and edit Discussions and Pull Requests. You will need an [access token](https://huggingface.co/docs/hub/security-tokens) to create and edit Discussions or Pull Requests. The simplest way to propose changes on a repo on the Hub is via the [`create_commit`] API: just set the `create_pr` parameter to `True`. This parameter is also available on other methods that wrap [`create_commit`]: * [`upload_file`] * [`upload_folder`] * [`delete_file`] * [`delete_folder`] * [`metadata_update`] ```python >>> from huggingface_hub import metadata_update >>> metadata_update( ... repo_id="username/repo_name", ... metadata={"tags": ["computer-vision", "awesome-model"]}, ... create_pr=True, ... ) ``` You can also use [`HfApi.create_discussion`] (respectively [`HfApi.create_pull_request`]) to create a Discussion (respectively a Pull Request) on a repo. Opening a Pull Request this way can be useful if you need to work on changes locally. Pull Requests opened this way will be in `"draft"` mode. ```python >>> from huggingface_hub import create_discussion, create_pull_request >>> create_discussion( ... repo_id="username/repo-name", ... title="Hi from the huggingface_hub library!", ... token="<insert your access token here>", ... ) DiscussionWithDetails(...) >>> create_pull_request( ... repo_id="username/repo-name", ... title="Hi from the huggingface_hub library!", ... token="<insert your access token here>", ... ) DiscussionWithDetails(..., is_pull_request=True) ``` Managing Pull Requests and Discussions can be done entirely with the [`HfApi`] class. For example: * [`comment_discussion`] to add comments * [`edit_discussion_comment`] to edit comments * [`rename_discussion`] to rename a Discussion or Pull Request * [`change_discussion_status`] to open or close a Discussion / Pull Request * [`merge_pull_request`] to merge a Pull Request Visit the [`HfApi`] documentation page for an exhaustive reference of all available methods.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/community.md
https://huggingface.co/docs/huggingface_hub/en/guides/community/#create-and-edit-a-discussion-or-pull-request-programmatically
#create-and-edit-a-discussion-or-pull-request-programmatically
.md
29_3
*Coming soon !*
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/community.md
https://huggingface.co/docs/huggingface_hub/en/guides/community/#push-changes-to-a-pull-request
#push-changes-to-a-pull-request
.md
29_4
For a more detailed reference, visit the [Discussions and Pull Requests](../package_reference/community) and the [hf_api](../package_reference/hf_api) documentation page.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/community.md
https://huggingface.co/docs/huggingface_hub/en/guides/community/#see-also
#see-also
.md
29_5
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/integrations.md
https://huggingface.co/docs/huggingface_hub/en/guides/integrations/
.md
30_0
The Hugging Face Hub makes hosting and sharing models with the community easy. It supports [dozens of libraries](https://huggingface.co/docs/hub/models-libraries) in the Open Source ecosystem. We are always working on expanding this support to push collaborative Machine Learning forward. The `huggingface_hub` library plays a key role in this process, allowing any Python script to easily push and load files.

There are four main ways to integrate a library with the Hub:
1. **Push to Hub:** implement a method to upload a model to the Hub. This includes the model weights, as well as [the model card](https://huggingface.co/docs/huggingface_hub/how-to-model-cards) and any other relevant information or data necessary to run the model (for example, training logs). This method is often called `push_to_hub()`.
2. **Download from Hub:** implement a method to load a model from the Hub. The method should download the model configuration/weights and load the model. This method is often called `from_pretrained` or `load_from_hub()`.
3. **Inference API:** use our servers to run inference on models supported by your library for free.
4. **Widgets:** display a widget on the landing page of your models on the Hub. It allows users to quickly try a model from the browser.

In this guide, we will focus on the first two topics. We will present the two main approaches you can use to integrate a library, with their advantages and drawbacks. Everything is summarized at the end of the guide to help you choose between the two. Please keep in mind that these are only guidelines that you are free to adapt to your requirements. If you are interested in Inference and Widgets, you can follow [this guide](https://huggingface.co/docs/hub/models-adding-libraries#set-up-the-inference-api). In both cases, you can reach out to us if you are integrating a library with the Hub and want to be listed [in our docs](https://huggingface.co/docs/hub/models-libraries).
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/integrations.md
https://huggingface.co/docs/huggingface_hub/en/guides/integrations/#integrate-any-ml-framework-with-the-hub
#integrate-any-ml-framework-with-the-hub
.md
30_1
The first approach to integrate a library with the Hub is to actually implement the `push_to_hub` and `from_pretrained` methods by yourself. This gives you full flexibility on which files you need to upload/download and how to handle inputs specific to your framework. You can refer to the [upload files](./upload) and [download files](./download) guides to learn more about how to do that. This is, for example, how the FastAI integration is implemented (see [`push_to_hub_fastai`] and [`from_pretrained_fastai`]). Implementation can differ between libraries, but the workflow is often similar.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/integrations.md
https://huggingface.co/docs/huggingface_hub/en/guides/integrations/#a-flexible-approach-helpers
#a-flexible-approach-helpers
.md
30_2
This is what a `from_pretrained` method usually looks like:

```python
def from_pretrained(model_id: str) -> MyModelClass:
    # Download model from Hub
    cached_model = hf_hub_download(
        repo_id=model_id,
        filename="model.pkl",
        library_name="fastai",
        library_version=get_fastai_version(),
    )

    # Load model
    return load_model(cached_model)
```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/integrations.md
https://huggingface.co/docs/huggingface_hub/en/guides/integrations/#frompretrained
#frompretrained
.md
30_3
The `push_to_hub` method often requires a bit more complexity to handle repo creation, generate the model card, and save the weights. A common approach is to save all of these files in a temporary folder, upload it, and then delete it.

```python
from pathlib import Path
from tempfile import TemporaryDirectory

from huggingface_hub import HfApi

def push_to_hub(model: MyModelClass, repo_name: str) -> None:
    api = HfApi()

    # Create repo if not existing yet and get the associated repo_id
    repo_id = api.create_repo(repo_name, exist_ok=True).repo_id

    # Save all files in a temporary directory and push them in a single commit
    with TemporaryDirectory() as tmpdir:
        tmpdir = Path(tmpdir)

        # Save weights
        save_model(model, tmpdir / "model.safetensors")

        # Generate model card
        card = generate_model_card(model)
        (tmpdir / "README.md").write_text(card)

        # Save logs
        # Save figures
        # Save evaluation metrics
        # ...

        # Push to hub
        return api.upload_folder(repo_id=repo_id, folder_path=tmpdir)
```

This is of course only an example. If you are interested in more complex manipulations (delete remote files, upload weights on the fly, persist weights locally, etc.) please refer to the [upload files](./upload) guide.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/integrations.md
https://huggingface.co/docs/huggingface_hub/en/guides/integrations/#pushtohub
#pushtohub
.md
30_4
While being flexible, this approach has some drawbacks, especially in terms of maintenance. Hugging Face users are often used to additional features when working with `huggingface_hub`. For example, when loading files from the Hub, it is common to offer parameters like: - `token`: to download from a private repo - `revision`: to download from a specific branch - `cache_dir`: to cache files in a specific directory - `force_download`/`local_files_only`: to reuse the cache or not - `proxies`: configure HTTP session When pushing models, similar parameters are supported: - `commit_message`: custom commit message - `private`: create a private repo if missing - `create_pr`: create a PR instead of pushing to `main` - `branch`: push to a branch instead of the `main` branch - `allow_patterns`/`ignore_patterns`: filter which files to upload - `token` - ... All of these parameters can be added to the implementations we saw above and passed to the `huggingface_hub` methods. However, if a parameter changes or a new feature is added, you will need to update your package. Supporting those parameters also means more documentation to maintain on your side. To see how to mitigate these limitations, let's jump to our next section **class inheritance**.
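Supporting the download parameters listed above in a helper-based integration mostly means forwarding them verbatim to `huggingface_hub`. Here is a sketch, reusing the hypothetical `model.pkl` filename and `load_model` loader from the earlier example (both are assumptions, not real APIs):

```python
from typing import Optional

from huggingface_hub import hf_hub_download

def from_pretrained(
    model_id: str,
    revision: Optional[str] = None,
    cache_dir: Optional[str] = None,
    force_download: bool = False,
    local_files_only: bool = False,
    token: Optional[str] = None,
):
    """Download weights from the Hub, forwarding the common user-facing options."""
    cached_model = hf_hub_download(
        repo_id=model_id,
        filename="model.pkl",  # assumed filename, as in the example above
        revision=revision,
        cache_dir=cache_dir,
        force_download=force_download,
        local_files_only=local_files_only,
        token=token,
    )
    # `load_model` is your framework's loader (hypothetical)
    return load_model(cached_model)
```

Every new parameter of this kind is one more thing to document, test, and keep in sync with `huggingface_hub`, which is exactly the maintenance cost discussed above.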
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/integrations.md
https://huggingface.co/docs/huggingface_hub/en/guides/integrations/#limitations
#limitations
.md
30_5
As we saw above, there are two main methods to include in your library to integrate it with the Hub: upload files (`push_to_hub`) and download files (`from_pretrained`). You can implement those methods by yourself but it comes with caveats. To tackle this, `huggingface_hub` provides a tool that uses class inheritance. Let's see how it works! In a lot of cases, a library already implements its model using a Python class. The class contains the properties of the model and methods to load, run, train, and evaluate it. Our approach is to extend this class to include upload and download features using mixins. A [Mixin](https://stackoverflow.com/a/547714) is a class that is meant to extend an existing class with a set of specific features using multiple inheritance. `huggingface_hub` provides its own mixin, the [`ModelHubMixin`]. The key here is to understand its behavior and how to customize it. The [`ModelHubMixin`] class implements 3 *public* methods (`push_to_hub`, `save_pretrained` and `from_pretrained`). Those are the methods that your users will call to load/save models with your library. [`ModelHubMixin`] also defines 2 *private* methods (`_save_pretrained` and `_from_pretrained`). Those are the ones you must implement. So to integrate your library, you should: 1. Make your Model class inherit from [`ModelHubMixin`]. 2. Implement the private methods: - [`~ModelHubMixin._save_pretrained`]: method taking as input a path to a directory and saving the model to it. You must write all the logic to dump your model in this method: model card, model weights, configuration files, training logs, and figures. Any relevant information for this model must be handled by this method. [Model Cards](https://huggingface.co/docs/hub/model-cards) are particularly important to describe your model. Check out [our implementation guide](./model-cards) for more details. 
- [`~ModelHubMixin._from_pretrained`]: **class method** taking as input a `model_id` and returning an instantiated model. The method must download the relevant files and load them. 3. You are done! The advantage of using [`ModelHubMixin`] is that once you take care of the serialization/loading of the files, you are ready to go. You don't need to worry about stuff like repo creation, commits, PRs, or revisions. The [`ModelHubMixin`] also ensures public methods are documented and type annotated, and you'll be able to view your model's download count on the Hub. All of this is handled by the [`ModelHubMixin`] and available to your users.
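To make the steps above concrete, here is a minimal, hypothetical sketch that persists a toy model with `pickle` (the file name `model.pkl` and the `size` parameter are arbitrary choices for illustration):

```python
import pickle
from pathlib import Path

from huggingface_hub import ModelHubMixin, hf_hub_download

class MyModel(ModelHubMixin):
    def __init__(self, size: int = 8):
        self.size = size

    def _save_pretrained(self, save_directory: Path) -> None:
        # Dump everything needed to reload the model into `save_directory`
        with (save_directory / "model.pkl").open("wb") as f:
            pickle.dump({"size": self.size}, f)

    @classmethod
    def _from_pretrained(cls, *, model_id: str, **kwargs):
        # Download the weights file from the Hub and rebuild the model
        model_file = hf_hub_download(repo_id=model_id, filename="model.pkl")
        with open(model_file, "rb") as f:
            state = pickle.load(f)
        return cls(size=state["size"])
```

With just these two private methods implemented, `save_pretrained`, `push_to_hub`, and `from_pretrained` become available on `MyModel` for free.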
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/integrations.md
https://huggingface.co/docs/huggingface_hub/en/guides/integrations/#a-more-complex-approach-class-inheritance
#a-more-complex-approach-class-inheritance
.md
30_6
A good example of what we saw above is [`PyTorchModelHubMixin`], our integration for the PyTorch framework. This is a ready-to-use integration.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/integrations.md
https://huggingface.co/docs/huggingface_hub/en/guides/integrations/#a-concrete-example-pytorch
#a-concrete-example-pytorch
.md
30_7
Here is how any user can load/save a PyTorch model from/to the Hub: ```python >>> import torch >>> import torch.nn as nn >>> from huggingface_hub import PyTorchModelHubMixin # Define your Pytorch model exactly the same way you are used to >>> class MyModel( ... nn.Module, ... PyTorchModelHubMixin, # multiple inheritance ... library_name="keras-nlp", ... tags=["keras"], ... repo_url="https://github.com/keras-team/keras-nlp", ... docs_url="https://keras.io/keras_nlp/", ... # ^ optional metadata to generate model card ... ): ... def __init__(self, hidden_size: int = 512, vocab_size: int = 30000, output_size: int = 4): ... super().__init__() ... self.param = nn.Parameter(torch.rand(hidden_size, vocab_size)) ... self.linear = nn.Linear(output_size, vocab_size) ... def forward(self, x): ... return self.linear(x + self.param) # 1. Create model >>> model = MyModel(hidden_size=128) # Config is automatically created based on input + default values >>> model.param.shape[0] 128 # 2. (optional) Save model to local directory >>> model.save_pretrained("path/to/my-awesome-model") # 3. Push model weights to the Hub >>> model.push_to_hub("my-awesome-model") # 4. Initialize model from the Hub => config has been preserved >>> model = MyModel.from_pretrained("username/my-awesome-model") >>> model.param.shape[0] 128 # Model card has been correctly populated >>> from huggingface_hub import ModelCard >>> card = ModelCard.load("username/my-awesome-model") >>> card.data.tags ["keras", "pytorch_model_hub_mixin", "model_hub_mixin"] >>> card.data.library_name "keras-nlp" ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/integrations.md
https://huggingface.co/docs/huggingface_hub/en/guides/integrations/#how-to-use-it
#how-to-use-it
.md
30_8
The implementation is actually very straightforward, and the full implementation can be found [here](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hub_mixin.py). 1. First, inherit your class from `ModelHubMixin`: ```python from huggingface_hub import ModelHubMixin class PyTorchModelHubMixin(ModelHubMixin): (...) ``` 2. Implement the `_save_pretrained` method: ```py from huggingface_hub import ModelHubMixin class PyTorchModelHubMixin(ModelHubMixin): (...) def _save_pretrained(self, save_directory: Path) -> None: """Save weights from a Pytorch model to a local directory.""" save_model_as_safetensor(self.module, str(save_directory / SAFETENSORS_SINGLE_FILE)) ``` 3. Implement the `_from_pretrained` method: ```python class PyTorchModelHubMixin(ModelHubMixin): (...) @classmethod # Must be a classmethod! def _from_pretrained( cls, *, model_id: str, revision: str, cache_dir: str, force_download: bool, proxies: Optional[Dict], resume_download: bool, local_files_only: bool, token: Union[str, bool, None], map_location: str = "cpu", # additional argument strict: bool = False, # additional argument **model_kwargs, ): """Load Pytorch pretrained weights and return the loaded model.""" model = cls(**model_kwargs) if os.path.isdir(model_id): print("Loading weights from local directory") model_file = os.path.join(model_id, SAFETENSORS_SINGLE_FILE) return cls._load_as_safetensor(model, model_file, map_location, strict) model_file = hf_hub_download( repo_id=model_id, filename=SAFETENSORS_SINGLE_FILE, revision=revision, cache_dir=cache_dir, force_download=force_download, proxies=proxies, resume_download=resume_download, token=token, local_files_only=local_files_only, ) return cls._load_as_safetensor(model, model_file, map_location, strict) ``` And that's it! Your library now enables users to upload and download files to and from the Hub.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/integrations.md
https://huggingface.co/docs/huggingface_hub/en/guides/integrations/#implementation
#implementation
.md
30_9
In the section above, we quickly discussed how the [`ModelHubMixin`] works. In this section, we will see some of its more advanced features to improve your library integration with the Hugging Face Hub.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/integrations.md
https://huggingface.co/docs/huggingface_hub/en/guides/integrations/#advanced-usage
#advanced-usage
.md
30_10
[`ModelHubMixin`] generates the model card for you. Model cards are files that accompany the models and provide important information about them. Under the hood, model cards are simple Markdown files with additional metadata. Model cards are essential for discoverability, reproducibility, and sharing! Check out the [Model Cards guide](https://huggingface.co/docs/hub/model-cards) for more details. Generating model cards semi-automatically is a good way to ensure that all models pushed with your library will share common metadata: `library_name`, `tags`, `license`, `pipeline_tag`, etc. This makes all models backed by your library easily searchable on the Hub and provides some resource links for users landing on the Hub. You can define the metadata directly when inheriting from [`ModelHubMixin`]: ```py class UniDepthV1( nn.Module, PyTorchModelHubMixin, library_name="unidepth", repo_url="https://github.com/lpiccinelli-eth/UniDepth", docs_url=..., pipeline_tag="depth-estimation", license="cc-by-nc-4.0", tags=["monocular-metric-depth-estimation", "arxiv:1234.56789"] ): ... ``` By default, a generic model card will be generated with the info you've provided (example: [pyp1/VoiceCraft_giga830M](https://huggingface.co/pyp1/VoiceCraft_giga830M)). But you can define your own model card template as well! In this example, all models pushed with the `VoiceCraft` class will automatically include a citation section and license details. For more details on how to define a model card template, please check the [Model Cards guide](./model-cards). ```py MODEL_CARD_TEMPLATE = """ --- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {{ card_data }} --- This is a VoiceCraft model. For more details, please check out the official Github repo: https://github.com/jasonppy/VoiceCraft. 
This is a VoiceCraft model. For more details, please check out the official Github repo: https://github.com/jasonppy/VoiceCraft. This model is shared under an Attribution-NonCommercial-ShareAlike 4.0 International license.

## Citation

@article{peng2024voicecraft,
  author  = {Peng, Puyuan and Huang, Po-Yao and Li, Daniel and Mohamed, Abdelrahman and Harwath, David},
  title   = {VoiceCraft: Zero-Shot Speech Editing and Text-to-Speech in the Wild},
  journal = {arXiv},
  year    = {2024},
}
"""

class VoiceCraft(
    nn.Module,
    PyTorchModelHubMixin,
    library_name="voicecraft",
    model_card_template=MODEL_CARD_TEMPLATE,
    ...
):
    ...
```

Finally, if you want to extend the model card generation process with dynamic values, you can override the [`~ModelHubMixin.generate_model_card`] method:

```py
from huggingface_hub import ModelCard, PyTorchModelHubMixin

class UniDepthV1(nn.Module, PyTorchModelHubMixin, ...):
    (...)

    def generate_model_card(self, *args, **kwargs) -> ModelCard:
        card = super().generate_model_card(*args, **kwargs)
        card.data.metrics = ...  # add metrics to the metadata
        card.text += ...  # append section to the modelcard
        return card
```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/integrations.md
https://huggingface.co/docs/huggingface_hub/en/guides/integrations/#model-card
#model-card
.md
30_11
[`ModelHubMixin`] handles the model configuration for you. It automatically checks the input values when you instantiate the model and serializes them in a `config.json` file. This provides 2 benefits:
1. Users will be able to reload the model with the exact same parameters as you.
2. Having a `config.json` file automatically enables analytics on the Hub (i.e. the "downloads" count).

But how does it work in practice? Several rules make the process as smooth as possible from a user perspective:
- if your `__init__` method expects a `config` input, it will be automatically saved in the repo as `config.json`.
- if the `config` input parameter is annotated with a dataclass type (e.g. `config: Optional[MyConfigClass] = None`), then the `config` value will be correctly deserialized for you.
- all values passed at initialization will also be stored in the config file. This means you don't necessarily have to expect a `config` input to benefit from it.

Example:

```py
class MyModel(ModelHubMixin):
    def __init__(self, value: str, size: int = 3):
        self.value = value
        self.size = size

    (...)  # implement _save_pretrained / _from_pretrained

model = MyModel(value="my_value")
model.save_pretrained(...)

# config.json contains passed and default values
{"value": "my_value", "size": 3}
```

But what if a value cannot be serialized as JSON? By default, the value will be ignored when saving the config file. However, in some cases your library already expects a custom object as input that cannot be serialized, and you don't want to update your internal logic to update its type. No worries! You can pass custom encoders/decoders for any type when inheriting from [`ModelHubMixin`]. This is a bit more work but ensures your internal logic is untouched when integrating your library with the Hub.
Here is a concrete example where a class expects an `argparse.Namespace` config as input:

```py
class VoiceCraft(nn.Module):
    def __init__(self, args):
        self.pattern = args.pattern
        self.hidden_size = args.hidden_size
        ...
```

One solution can be to update the `__init__` signature to `def __init__(self, pattern: str, hidden_size: int)` and update all snippets that instantiate your class. This is a perfectly valid way to fix it but it might break downstream applications using your library. Another solution is to provide a simple encoder/decoder to convert `argparse.Namespace` to a dictionary.

```py
from argparse import Namespace

class VoiceCraft(
    nn.Module,
    PyTorchModelHubMixin,  # inherit from mixin
    coders={
        Namespace: (
            lambda x: vars(x),  # Encoder: how to convert a `Namespace` to a valid jsonable value?
            lambda data: Namespace(**data),  # Decoder: how to reconstruct a `Namespace` from a dictionary?
        )
    },
):
    def __init__(self, args: Namespace):  # annotate `args`
        self.pattern = args.pattern
        self.hidden_size = args.hidden_size
        ...
```

In the snippet above, both the internal logic and the `__init__` signature of the class did not change. This means all existing code snippets for your library will continue to work. To achieve this, we had to:
1. Inherit from the mixin (`PyTorchModelHubMixin` in this case).
2. Pass a `coders` parameter in the inheritance. This is a dictionary where keys are custom types you want to process. Values are a tuple `(encoder, decoder)`.
   - The encoder expects an object of the specified type as input and returns a jsonable value. This will be used when saving a model with `save_pretrained`.
   - The decoder expects raw data (typically a dictionary) as input and reconstructs the initial object. This will be used when loading the model with `from_pretrained`.
3. Add a type annotation to the `__init__` signature. This is important to let the mixin know which type is expected by the class and, therefore, which decoder to use.
For the sake of simplicity, the encoder/decoder functions in the example above are not robust. For a concrete implementation, you would most likely have to handle corner cases properly.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/integrations.md
https://huggingface.co/docs/huggingface_hub/en/guides/integrations/#config
#config
.md
30_12
Let's quickly sum up the two approaches we saw with their advantages and drawbacks. The table below is only indicative. Your framework might have some specificities that you need to address. This guide is only here to give guidelines and ideas on how to handle integration. In any case, feel free to contact us if you have any questions! <!-- Generated using https://www.tablesgenerator.com/markdown_tables --> | Integration | Using helpers | Using [`ModelHubMixin`] | |:---:|:---:|:---:| | User experience | `model = load_from_hub(...)`<br>`push_to_hub(model, ...)` | `model = MyModel.from_pretrained(...)`<br>`model.push_to_hub(...)` | | Flexibility | Very flexible.<br>You fully control the implementation. | Less flexible.<br>Your framework must have a model class. | | Maintenance | More maintenance to add support for configuration, and new features. Might also require fixing issues reported by users. | Less maintenance as most of the interactions with the Hub are implemented in `huggingface_hub`. | | Documentation / Type annotation | To be written manually. | Partially handled by `huggingface_hub`. | | Download counter | To be handled manually. | Enabled by default if class has a `config` attribute. | | Model card | To be handled manually | Generated by default with library_name, tags, etc. |
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/integrations.md
https://huggingface.co/docs/huggingface_hub/en/guides/integrations/#quick-comparison
#quick-comparison
.md
30_13
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/hf_file_system.md
https://huggingface.co/docs/huggingface_hub/en/guides/hf_file_system/
.md
31_0
In addition to the [`HfApi`], the `huggingface_hub` library provides [`HfFileSystem`], a pythonic [fsspec-compatible](https://filesystem-spec.readthedocs.io/en/latest/) file interface to the Hugging Face Hub. The [`HfFileSystem`] builds on top of the [`HfApi`] and offers typical filesystem style operations like `cp`, `mv`, `ls`, `du`, `glob`, `get_file`, and `put_file`. <Tip warning={true}> [`HfFileSystem`] provides fsspec compatibility, which is useful for libraries that require it (e.g., reading Hugging Face datasets directly with `pandas`). However, it introduces additional overhead due to this compatibility layer. For better performance and reliability, it's recommended to use [`HfApi`] methods when possible. </Tip>
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/hf_file_system.md
https://huggingface.co/docs/huggingface_hub/en/guides/hf_file_system/#interact-with-the-hub-through-the-filesystem-api
#interact-with-the-hub-through-the-filesystem-api
.md
31_1
```python >>> from huggingface_hub import HfFileSystem >>> fs = HfFileSystem() >>> # List all files in a directory >>> fs.ls("datasets/my-username/my-dataset-repo/data", detail=False) ['datasets/my-username/my-dataset-repo/data/train.csv', 'datasets/my-username/my-dataset-repo/data/test.csv'] >>> # List all ".csv" files in a repo >>> fs.glob("datasets/my-username/my-dataset-repo/**/*.csv") ['datasets/my-username/my-dataset-repo/data/train.csv', 'datasets/my-username/my-dataset-repo/data/test.csv'] >>> # Read a remote file >>> with fs.open("datasets/my-username/my-dataset-repo/data/train.csv", "r") as f: ... train_data = f.readlines() >>> # Read the content of a remote file as a string >>> train_data = fs.read_text("datasets/my-username/my-dataset-repo/data/train.csv", revision="dev") >>> # Write a remote file >>> with fs.open("datasets/my-username/my-dataset-repo/data/validation.csv", "w") as f: ... f.write("text,label") ... f.write("Fantastic movie!,good") ``` The optional `revision` argument can be passed to run an operation from a specific commit such as a branch, tag name, or a commit hash. Unlike Python's built-in `open`, `fsspec`'s `open` defaults to binary mode, `"rb"`. This means you must explicitly set mode as `"r"` for reading and `"w"` for writing in text mode. Appending to a file (modes `"a"` and `"ab"`) is not supported yet.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/hf_file_system.md
https://huggingface.co/docs/huggingface_hub/en/guides/hf_file_system/#usage
#usage
.md
31_2
The [`HfFileSystem`] can be used with any library that integrates `fsspec`, provided the URL follows the scheme: ``` hf://[<repo_type_prefix>]<repo_id>[@<revision>]/<path/in/repo> ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/huggingface_hub/hf_urls.png"/> </div> The `repo_type_prefix` is `datasets/` for datasets, `spaces/` for spaces, and models don't need a prefix in the URL. Some interesting integrations where [`HfFileSystem`] simplifies interacting with the Hub are listed below: * Reading/writing a [Pandas](https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#reading-writing-remote-files) DataFrame from/to a Hub repository: ```python >>> import pandas as pd >>> # Read a remote CSV file into a dataframe >>> df = pd.read_csv("hf://datasets/my-username/my-dataset-repo/train.csv") >>> # Write a dataframe to a remote CSV file >>> df.to_csv("hf://datasets/my-username/my-dataset-repo/test.csv") ``` The same workflow can also be used for [Dask](https://docs.dask.org/en/stable/how-to/connect-to-remote-data.html) and [Polars](https://pola-rs.github.io/polars/py-polars/html/reference/io.html) DataFrames. 
* Querying (remote) Hub files with [DuckDB](https://duckdb.org/docs/guides/python/filesystems): ```python >>> from huggingface_hub import HfFileSystem >>> import duckdb >>> fs = HfFileSystem() >>> duckdb.register_filesystem(fs) >>> # Query a remote file and get the result back as a dataframe >>> fs_query_file = "hf://datasets/my-username/my-dataset-repo/data_dir/data.parquet" >>> df = duckdb.query(f"SELECT * FROM '{fs_query_file}' LIMIT 10").df() ``` * Using the Hub as an array store with [Zarr](https://zarr.readthedocs.io/en/stable/tutorial.html#io-with-fsspec): ```python >>> import numpy as np >>> import zarr >>> embeddings = np.random.randn(50000, 1000).astype("float32") >>> # Write an array to a repo >>> with zarr.open_group("hf://my-username/my-model-repo/array-store", mode="w") as root: ... foo = root.create_group("embeddings") ... foobar = foo.zeros('experiment_0', shape=(50000, 1000), chunks=(10000, 1000), dtype='f4') ... foobar[:] = embeddings >>> # Read an array from a repo >>> with zarr.open_group("hf://my-username/my-model-repo/array-store", mode="r") as root: ... first_row = root["embeddings/experiment_0"][0] ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/hf_file_system.md
https://huggingface.co/docs/huggingface_hub/en/guides/hf_file_system/#integrations
#integrations
.md
31_3
In many cases, you must be logged in with a Hugging Face account to interact with the Hub. Refer to the [Authentication](../quick-start#authentication) section of the documentation to learn more about authentication methods on the Hub. It is also possible to log in programmatically by passing your `token` as an argument to [`HfFileSystem`]: ```python >>> from huggingface_hub import HfFileSystem >>> fs = HfFileSystem(token=token) ``` If you log in this way, be careful not to accidentally leak the token when sharing your source code!
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/hf_file_system.md
https://huggingface.co/docs/huggingface_hub/en/guides/hf_file_system/#authentication
#authentication
.md
31_4
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/webhooks.md
https://huggingface.co/docs/huggingface_hub/en/guides/webhooks/
.md
32_0
Webhooks are a foundation for MLOps-related features. They allow you to listen for new changes on specific repos or to all repos belonging to particular users/organizations you're interested in following. This guide will first explain how to manage webhooks programmatically. Then we'll see how to leverage `huggingface_hub` to create a server listening to webhooks and deploy it to a Space. This guide assumes you are familiar with the concept of webhooks on the Hugging Face Hub. To learn more about webhooks themselves, you should read this [guide](https://huggingface.co/docs/hub/webhooks) first.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/webhooks.md
https://huggingface.co/docs/huggingface_hub/en/guides/webhooks/#webhooks
#webhooks
.md
32_1
`huggingface_hub` allows you to manage your webhooks programmatically. You can list your existing webhooks, create new ones, and update, enable, disable or delete them. This section guides you through the procedures using the Hugging Face Hub's API functions.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/webhooks.md
https://huggingface.co/docs/huggingface_hub/en/guides/webhooks/#managing-webhooks
#managing-webhooks
.md
32_2
To create a new webhook, use [`create_webhook`] and specify the URL where payloads should be sent, what events should be watched, and optionally set a domain and a secret for security. ```python from huggingface_hub import create_webhook # Example: Creating a webhook webhook = create_webhook( url="https://webhook.site/your-custom-url", watched=[{"type": "user", "name": "your-username"}, {"type": "org", "name": "your-org-name"}], domains=["repo", "discussion"], secret="your-secret" ) ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/webhooks.md
https://huggingface.co/docs/huggingface_hub/en/guides/webhooks/#creating-a-webhook
#creating-a-webhook
.md
32_3
To see all the webhooks you have configured, you can list them with [`list_webhooks`]. This is useful to review their IDs, URLs, and statuses. ```python from huggingface_hub import list_webhooks # Example: Listing all webhooks webhooks = list_webhooks() for webhook in webhooks: print(webhook) ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/webhooks.md
https://huggingface.co/docs/huggingface_hub/en/guides/webhooks/#listing-webhooks
#listing-webhooks
.md
32_4
If you need to change the configuration of an existing webhook, such as the URL or the events it watches, you can update it using [`update_webhook`]. ```python from huggingface_hub import update_webhook # Example: Updating a webhook updated_webhook = update_webhook( webhook_id="your-webhook-id", url="https://new.webhook.site/url", watched=[{"type": "user", "name": "new-username"}], domains=["repo"] ) ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/webhooks.md
https://huggingface.co/docs/huggingface_hub/en/guides/webhooks/#updating-a-webhook
#updating-a-webhook
.md
32_5
You might want to temporarily disable a webhook without deleting it. This can be done using [`disable_webhook`], and the webhook can be re-enabled later with [`enable_webhook`]. ```python from huggingface_hub import enable_webhook, disable_webhook # Example: Enabling a webhook enabled_webhook = enable_webhook("your-webhook-id") print("Enabled:", enabled_webhook) # Example: Disabling a webhook disabled_webhook = disable_webhook("your-webhook-id") print("Disabled:", disabled_webhook) ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/webhooks.md
https://huggingface.co/docs/huggingface_hub/en/guides/webhooks/#enabling-and-disabling-webhooks
#enabling-and-disabling-webhooks
.md
32_6
When a webhook is no longer needed, it can be permanently deleted using [`delete_webhook`]. ```python from huggingface_hub import delete_webhook # Example: Deleting a webhook delete_webhook("your-webhook-id") ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/webhooks.md
https://huggingface.co/docs/huggingface_hub/en/guides/webhooks/#deleting-a-webhook
#deleting-a-webhook
.md
32_7
The base class that we will use in this section is [`WebhooksServer`]. It is a class for easily configuring a server that can receive webhooks from the Hugging Face Hub. The server is based on a [Gradio](https://gradio.app/) app. It has a UI to display instructions for you or your users and an API to listen to webhooks. <Tip> To see a running example of a webhook server, check out the [Spaces CI Bot](https://huggingface.co/spaces/spaces-ci-bot/webhook). It is a Space that launches ephemeral environments when a PR is opened on a Space. </Tip> <Tip warning={true}> This is an [experimental feature](../package_reference/environment_variables#hfhubdisableexperimentalwarning). This means that we are still working on improving the API. Breaking changes might be introduced in the future without prior notice. Make sure to pin the version of `huggingface_hub` in your requirements. </Tip>
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/webhooks.md
https://huggingface.co/docs/huggingface_hub/en/guides/webhooks/#webhooks-server
#webhooks-server
.md
32_8
Implementing a webhook endpoint is as simple as decorating a function. Let's see a first example to explain the main concepts: ```python # app.py from huggingface_hub import webhook_endpoint, WebhookPayload @webhook_endpoint async def trigger_training(payload: WebhookPayload) -> None: if payload.repo.type == "dataset" and payload.event.action == "update": # Trigger a training job if a dataset is updated ... ``` Save this snippet in a file called `'app.py'` and run it with `'python app.py'`. You should see a message like this: ```text Webhook secret is not defined. This means your webhook endpoints will be open to everyone. To add a secret, set `WEBHOOK_SECRET` as environment variable or pass it at initialization: `app = WebhooksServer(webhook_secret='my_secret', ...)` For more details about webhook secrets, please refer to https://huggingface.co/docs/hub/webhooks#webhook-secret. Running on local URL: http://127.0.0.1:7860 Running on public URL: https://1fadb0f52d8bf825fc.gradio.live This share link expires in 72 hours. For free permanent hosting and GPU upgrades (NEW!), check out Spaces: https://huggingface.co/spaces Webhooks are correctly setup and ready to use: - POST https://1fadb0f52d8bf825fc.gradio.live/webhooks/trigger_training Go to https://huggingface.co/settings/webhooks to setup your webhooks. ``` Good job! You just launched a webhook server! Let's break down what happened exactly: 1. By decorating a function with [`webhook_endpoint`], a [`WebhooksServer`] object has been created in the background. As you can see, this server is a Gradio app running on http://127.0.0.1:7860. If you open this URL in your browser, you will see a landing page with instructions about the registered webhooks. 2. A Gradio app is a FastAPI server under the hood. A new POST route `/webhooks/trigger_training` has been added to it. This is the route that will listen to webhooks and run the `trigger_training` function when triggered. 
FastAPI will automatically parse the payload and pass it to the function as a [`WebhookPayload`] object. This is a `pydantic` object that contains all the information about the event that triggered the webhook. 3. The Gradio app also opened a tunnel to receive requests from the internet. This is the interesting part: you can configure a Webhook on https://huggingface.co/settings/webhooks pointing to your local machine. This is useful for debugging your webhook server and quickly iterating before deploying it to a Space. 4. Finally, the logs also tell you that your server is currently not secured by a secret. This is fine for local debugging, but keep it in mind before deploying. <Tip warning={true}> By default, the server is started at the end of your script. If you are running it in a notebook, you can start the server manually by calling `decorated_function.run()`. Since a single server is used, you only have to start the server once even if you have multiple endpoints. </Tip>
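Independently of FastAPI, the filtering logic inside `trigger_training` boils down to a check on two payload fields. The sketch below makes that explicit with a plain dictionary that mimics the shape of a [`WebhookPayload`] (`repo.type`, `event.action`); it is a simplified illustration, not a real payload.

```python
# Minimal sketch of the filtering logic, using a plain dict that mimics
# the relevant fields of `WebhookPayload` (simplified for illustration).

def should_trigger_training(payload: dict) -> bool:
    """Return True for an "update" event on a dataset repo."""
    repo = payload.get("repo") or {}
    event = payload.get("event") or {}
    return repo.get("type") == "dataset" and event.get("action") == "update"

example = {
    "event": {"action": "update", "scope": "repo.content"},
    "repo": {"type": "dataset", "name": "user/my-dataset"},
}
print(should_trigger_training(example))  # True
```

Events that do not match (e.g. an update on a model repo) simply fall through without triggering anything, which is why the decorated function can return `None`.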
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/webhooks.md
https://huggingface.co/docs/huggingface_hub/en/guides/webhooks/#create-an-endpoint
#create-an-endpoint
.md
32_9
Now that you have a webhook server running, you want to configure a Webhook to start receiving messages. Go to https://huggingface.co/settings/webhooks, click on "Add a new webhook" and configure your Webhook. Set the target repositories you want to watch and the Webhook URL, here `https://1fadb0f52d8bf825fc.gradio.live/webhooks/trigger_training`. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/configure_webhook.png"/> </div> And that's it! You can now trigger that webhook by updating the target repository (e.g. push a commit). Check the Activity tab of your Webhook to see the events that have been triggered. Now that you have a working setup, you can test it and quickly iterate. If you modify your code and restart the server, your public URL might change. Make sure to update the webhook configuration on the Hub if needed.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/webhooks.md
https://huggingface.co/docs/huggingface_hub/en/guides/webhooks/#configure-a-webhook
#configure-a-webhook
.md
32_10
Now that you have a working webhook server, the goal is to deploy it to a Space. Go to https://huggingface.co/new-space to create a Space. Give it a name, select the Gradio SDK and click on "Create Space". Upload your code to the Space in a file called `app.py`. Your Space will start automatically! For more details about Spaces, please refer to this [guide](https://huggingface.co/docs/hub/spaces-overview). Your webhook server is now running on a public Space. In most cases, you will want to secure it with a secret. Go to your Space settings > Section "Repository secrets" > "Add a secret". Set the `WEBHOOK_SECRET` environment variable to the value of your choice. Go back to the [Webhooks settings](https://huggingface.co/settings/webhooks) and set the secret in the webhook configuration. Now, only requests with the correct secret will be accepted by your server. And this is it! Your Space is now ready to receive webhooks from the Hub. Please keep in mind that if you run the Space on the free 'cpu-basic' hardware, it will be shut down after 48 hours of inactivity. If you need a permanent Space, you should consider upgrading to a paid [hardware tier](https://huggingface.co/docs/hub/spaces-gpus#hardware-specs).
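[`WebhooksServer`] performs the secret check for you, but the underlying idea is simple: the Hub sends the configured secret with each request, and the server compares it against `WEBHOOK_SECRET` before accepting the payload. Here is a standalone sketch of such a check; the `X-Webhook-Secret` header name matches the Hub's webhook documentation, while the helper function itself is illustrative, not the library's actual implementation.

```python
import hmac

def is_authorized(headers: dict, expected_secret: str) -> bool:
    # The Hub sends the configured secret in an "X-Webhook-Secret" header.
    received = headers.get("X-Webhook-Secret", "")
    # Constant-time comparison avoids leaking the secret through timing.
    return hmac.compare_digest(received, expected_secret)

secret = "my_secret"  # in a Space, read this from the WEBHOOK_SECRET env variable
print(is_authorized({"X-Webhook-Secret": "my_secret"}, secret))  # True
print(is_authorized({}, secret))  # False
```

Requests missing the header or carrying a wrong value are rejected, which is exactly why unsecured endpoints are only acceptable for local debugging.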
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/webhooks.md
https://huggingface.co/docs/huggingface_hub/en/guides/webhooks/#deploy-to-a-space
#deploy-to-a-space
.md
32_11
The guide above explained the quickest way to set up a [`WebhooksServer`]. In this section, we will see how to customize it further.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/webhooks.md
https://huggingface.co/docs/huggingface_hub/en/guides/webhooks/#advanced-usage
#advanced-usage
.md
32_12
You can register multiple endpoints on the same server. For example, you might want to have one endpoint to trigger a training job and another one to trigger a model evaluation. You can do this by adding multiple `@webhook_endpoint` decorators: ```python # app.py from huggingface_hub import webhook_endpoint, WebhookPayload @webhook_endpoint async def trigger_training(payload: WebhookPayload) -> None: if payload.repo.type == "dataset" and payload.event.action == "update": # Trigger a training job if a dataset is updated ... @webhook_endpoint async def trigger_evaluation(payload: WebhookPayload) -> None: if payload.repo.type == "model" and payload.event.action == "update": # Trigger an evaluation job if a model is updated ... ``` Which will create two endpoints: ```text (...) Webhooks are correctly setup and ready to use: - POST https://1fadb0f52d8bf825fc.gradio.live/webhooks/trigger_training - POST https://1fadb0f52d8bf825fc.gradio.live/webhooks/trigger_evaluation ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/webhooks.md
https://huggingface.co/docs/huggingface_hub/en/guides/webhooks/#multiple-endpoints
#multiple-endpoints
.md
32_13
To get more flexibility, you can also create a [`WebhooksServer`] object directly. This is useful if you want to customize the landing page of your server. You can do this by passing a [Gradio UI](https://gradio.app/docs/#blocks) that will overwrite the default one. For example, you can add instructions for your users or add a form to manually trigger the webhooks. When creating a [`WebhooksServer`], you can register new webhooks using the [`~WebhooksServer.add_webhook`] decorator. Here is a complete example: ```python import gradio as gr from fastapi import Request from huggingface_hub import WebhooksServer, WebhookPayload # 1. Define UI with gr.Blocks() as ui: ... # 2. Create WebhooksServer with custom UI and secret app = WebhooksServer(ui=ui, webhook_secret="my_secret_key") # 3. Register webhook with explicit name @app.add_webhook("/say_hello") async def hello(payload: WebhookPayload): return {"message": "hello"} # 4. Register webhook with implicit name @app.add_webhook async def goodbye(payload: WebhookPayload): return {"message": "goodbye"} # 5. Start server (optional) app.run() ``` 1. We define a custom UI using Gradio blocks. This UI will be displayed on the landing page of the server. 2. We create a [`WebhooksServer`] object with a custom UI and a secret. The secret is optional and can be set with the `WEBHOOK_SECRET` environment variable. 3. We register a webhook with an explicit name. This will create an endpoint at `/webhooks/say_hello`. 4. We register a webhook with an implicit name. This will create an endpoint at `/webhooks/goodbye`. 5. We start the server. This is optional as your server will automatically be started at the end of the script.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/webhooks.md
https://huggingface.co/docs/huggingface_hub/en/guides/webhooks/#custom-server
#custom-server
.md
32_14
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/repository.md
https://huggingface.co/docs/huggingface_hub/en/guides/repository/
.md
33_0
The Hugging Face Hub is a collection of git repositories. [Git](https://git-scm.com/) is a widely used tool in software development to easily version projects when working collaboratively. This guide will show you how to interact with the repositories on the Hub, especially: - Create and delete a repository. - Manage branches and tags. - Rename your repository. - Update your repository visibility. - Manage a local copy of your repository. <Tip warning={true}> If you are used to working with platforms such as GitLab/GitHub/Bitbucket, your first instinct might be to use the `git` CLI to clone your repo (`git clone`), commit changes (`git add`, `git commit`) and push them (`git push`). This is valid when using the Hugging Face Hub. However, software engineering and machine learning do not share the same requirements and workflows. Model repositories often contain large weight files for different frameworks and tools, so cloning a repository can leave you maintaining massive local folders. As a result, it may be more efficient to use our custom HTTP methods. You can read our [Git vs HTTP paradigm](../concepts/git_vs_http) explanation page for more details. </Tip> If you want to create and manage a repository on the Hub, your machine must be logged in. If you are not, please refer to [this section](../quick-start#authentication). In the rest of this guide, we will assume that your machine is logged in.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/repository.md
https://huggingface.co/docs/huggingface_hub/en/guides/repository/#create-and-manage-a-repository
#create-and-manage-a-repository
.md
33_1
The first step is to know how to create and delete repositories. You can only manage repositories that you own (under your username namespace) or from organizations in which you have write permissions.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/repository.md
https://huggingface.co/docs/huggingface_hub/en/guides/repository/#repo-creation-and-deletion
#repo-creation-and-deletion
.md
33_2
Create an empty repository with [`create_repo`] and give it a name with the `repo_id` parameter. The `repo_id` is your namespace followed by the repository name: `username_or_org/repo_name`. ```py >>> from huggingface_hub import create_repo >>> create_repo("lysandre/test-model") 'https://huggingface.co/lysandre/test-model' ``` By default, [`create_repo`] creates a model repository. But you can use the `repo_type` parameter to specify another repository type. For example, if you want to create a dataset repository: ```py >>> from huggingface_hub import create_repo >>> create_repo("lysandre/test-dataset", repo_type="dataset") 'https://huggingface.co/datasets/lysandre/test-dataset' ``` When you create a repository, you can set your repository visibility with the `private` parameter. ```py >>> from huggingface_hub import create_repo >>> create_repo("lysandre/test-private", private=True) ``` If you want to change the repository visibility at a later time, you can use the [`update_repo_visibility`] function. <Tip> If you are part of an organization with an Enterprise plan, you can create a repo in a specific resource group by passing `resource_group_id` as a parameter to [`create_repo`]. Resource groups are a security feature to control which members of your org can access a given resource. You can get the resource group ID by copying it from your org settings page URL on the Hub (e.g. `"https://huggingface.co/organizations/huggingface/settings/resource-groups/66670e5163145ca562cb1988"` => `"66670e5163145ca562cb1988"`). For more details about resource groups, check out this [guide](https://huggingface.co/docs/hub/en/security-resource-groups). </Tip>
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/repository.md
https://huggingface.co/docs/huggingface_hub/en/guides/repository/#create-a-repository
#create-a-repository
.md
33_3
Delete a repository with [`delete_repo`]. Make sure you want to delete a repository because this is an irreversible process! Specify the `repo_id` of the repository you want to delete: ```py >>> delete_repo(repo_id="lysandre/my-corrupted-dataset", repo_type="dataset") ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/repository.md
https://huggingface.co/docs/huggingface_hub/en/guides/repository/#delete-a-repository
#delete-a-repository
.md
33_4
In some cases, you want to copy someone else's repo to adapt it to your use case. This is possible for Spaces using the [`duplicate_space`] method. It will duplicate the whole repository. You will still need to configure your own settings (hardware, sleep-time, storage, variables and secrets). Check out our [Manage your Space](./manage-spaces) guide for more details. ```py >>> from huggingface_hub import duplicate_space >>> duplicate_space("multimodalart/dreambooth-training", private=False) RepoUrl('https://huggingface.co/spaces/nateraw/dreambooth-training',...) ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/repository.md
https://huggingface.co/docs/huggingface_hub/en/guides/repository/#duplicate-a-repository-only-for-spaces
#duplicate-a-repository-only-for-spaces
.md
33_5
Now that you have created your repository, you are interested in pushing changes to it and downloading files from it. These two topics deserve their own guides. Please refer to the [upload](./upload) and the [download](./download) guides to learn how to use your repository.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/repository.md
https://huggingface.co/docs/huggingface_hub/en/guides/repository/#upload-and-download-files
#upload-and-download-files
.md
33_6
Git repositories often make use of branches to store different versions of the same repository. Tags can also be used to flag a specific state of your repository, for example, when releasing a version. More generally, branches and tags are referred to as [git references](https://git-scm.com/book/en/v2/Git-Internals-Git-References).
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/repository.md
https://huggingface.co/docs/huggingface_hub/en/guides/repository/#branches-and-tags
#branches-and-tags
.md
33_7
You can create new branches and tags using [`create_branch`] and [`create_tag`]: ```py >>> from huggingface_hub import create_branch, create_tag # Create a branch on a Space repo from `main` branch >>> create_branch("Matthijs/speecht5-tts-demo", repo_type="space", branch="handle-dog-speaker") # Create a tag on a Dataset repo from `v0.1-release` branch >>> create_tag("bigcode/the-stack", repo_type="dataset", revision="v0.1-release", tag="v0.1.1", tag_message="Bump release version.") ``` You can use the [`delete_branch`] and [`delete_tag`] functions in the same way to delete a branch or a tag.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/repository.md
https://huggingface.co/docs/huggingface_hub/en/guides/repository/#create-branches-and-tags
#create-branches-and-tags
.md
33_8
You can also list the existing git refs from a repository using [`list_repo_refs`]: ```py >>> from huggingface_hub import list_repo_refs >>> list_repo_refs("bigcode/the-stack", repo_type="dataset") GitRefs( branches=[ GitRefInfo(name='main', ref='refs/heads/main', target_commit='18edc1591d9ce72aa82f56c4431b3c969b210ae3'), GitRefInfo(name='v1.1.a1', ref='refs/heads/v1.1.a1', target_commit='f9826b862d1567f3822d3d25649b0d6d22ace714') ], converts=[], tags=[ GitRefInfo(name='v1.0', ref='refs/tags/v1.0', target_commit='c37a8cd1e382064d8aced5e05543c5f7753834da') ] ) ```
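The returned `GitRefs` object is a plain container, so picking out a specific ref is ordinary Python. The sketch below uses a lightweight stand-in dataclass mirroring the `GitRefInfo` fields shown above (since the real [`list_repo_refs`] call requires network access); the helper name is hypothetical.

```python
from dataclasses import dataclass

# Lightweight stand-in mirroring the `GitRefInfo` fields shown above.
@dataclass
class RefInfo:
    name: str
    ref: str
    target_commit: str

def commit_for_tag(tags: list, tag_name: str):
    """Return the commit a tag points to, or None if the tag does not exist."""
    return next((t.target_commit for t in tags if t.name == tag_name), None)

tags = [RefInfo("v1.0", "refs/tags/v1.0", "c37a8cd1e382064d8aced5e05543c5f7753834da")]
print(commit_for_tag(tags, "v1.0"))  # c37a8cd1e382064d8aced5e05543c5f7753834da
print(commit_for_tag(tags, "v2.0"))  # None
```

The same pattern applies to the `branches` list, e.g. to resolve a branch name to its current commit before pinning a `revision`.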
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/repository.md
https://huggingface.co/docs/huggingface_hub/en/guides/repository/#list-all-branches-and-tags
#list-all-branches-and-tags
.md
33_9
Repositories come with some settings that you can configure. Most of the time, you will want to do that manually in the repo settings page in your browser. You must have write access to a repo to configure it (either by owning it or by being part of an organization). In this section, we will see the settings that you can also configure programmatically using `huggingface_hub`. Some settings are specific to Spaces (hardware, environment variables,...). To configure those, please refer to our [Manage your Spaces](../guides/manage-spaces) guide.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/repository.md
https://huggingface.co/docs/huggingface_hub/en/guides/repository/#change-repository-settings
#change-repository-settings
.md
33_10
A repository can be public or private. A private repository is only visible to you or members of the organization in which the repository is located. Change a repository to private as shown in the following: ```py >>> from huggingface_hub import update_repo_settings >>> update_repo_settings(repo_id=repo_id, private=True) ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/repository.md
https://huggingface.co/docs/huggingface_hub/en/guides/repository/#update-visibility
#update-visibility
.md
33_11
To give more control over how repos are used, the Hub allows repo authors to enable **access requests** for their repos. When enabled, users must agree to share their contact information (username and email address) with the repo authors in order to access the files. A repo with access requests enabled is called a **gated repo**. You can set a repo as gated using [`update_repo_settings`]: ```py >>> from huggingface_hub import HfApi >>> api = HfApi() >>> api.update_repo_settings(repo_id=repo_id, gated="auto") # Set automatic gating for a model ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/repository.md
https://huggingface.co/docs/huggingface_hub/en/guides/repository/#setup-gated-access
#setup-gated-access
.md
33_12
You can rename your repository on the Hub using [`move_repo`]. Using this method, you can also move the repo from a user to an organization. When doing so, there are a [few limitations](https://hf.co/docs/hub/repositories-settings#renaming-or-transferring-a-repo) that you should be aware of. For example, you can't transfer your repo to another user. ```py >>> from huggingface_hub import move_repo >>> move_repo(from_id="Wauplin/cool-model", to_id="huggingface/cool-model") ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/repository.md
https://huggingface.co/docs/huggingface_hub/en/guides/repository/#rename-your-repository
#rename-your-repository
.md
33_13
All the actions described above can be done using HTTP requests. However, in some cases you might be interested in having a local copy of your repository and interacting with it using the Git commands you are familiar with. The [`Repository`] class allows you to interact with files and repositories on the Hub with functions similar to Git commands. It is a wrapper over Git and Git-LFS methods to use the Git commands you already know and love. Before starting, please make sure you have Git-LFS installed (see [here](https://git-lfs.github.com/) for installation instructions). <Tip warning={true}> [`Repository`] is deprecated in favor of the HTTP-based alternatives implemented in [`HfApi`]. Given its large adoption in legacy code, the complete removal of [`Repository`] will only happen in release `v1.0`. For more details, please read [this explanation page](../concepts/git_vs_http). </Tip>
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/repository.md
https://huggingface.co/docs/huggingface_hub/en/guides/repository/#manage-a-local-copy-of-your-repository
#manage-a-local-copy-of-your-repository
.md
33_14
Instantiate a [`Repository`] object with a path to a local repository: ```py >>> from huggingface_hub import Repository >>> repo = Repository(local_dir="<path>/<to>/<folder>") ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/repository.md
https://huggingface.co/docs/huggingface_hub/en/guides/repository/#use-a-local-repository
#use-a-local-repository
.md
33_15
The `clone_from` parameter clones a repository from a Hugging Face repository ID to a local directory specified by the `local_dir` argument: ```py >>> from huggingface_hub import Repository >>> repo = Repository(local_dir="w2v2", clone_from="facebook/wav2vec2-large-960h-lv60") ``` `clone_from` can also clone a repository using a URL: ```py >>> repo = Repository(local_dir="huggingface-hub", clone_from="https://huggingface.co/facebook/wav2vec2-large-960h-lv60") ``` You can combine the `clone_from` parameter with [`create_repo`] to create and clone a repository: ```py >>> repo_url = create_repo(repo_id="repo_name") >>> repo = Repository(local_dir="repo_local_path", clone_from=repo_url) ``` You can also configure a Git username and email to a cloned repository by specifying the `git_user` and `git_email` parameters when you clone a repository. When users commit to that repository, Git will be aware of the commit author. ```py >>> repo = Repository( ... "my-dataset", ... clone_from="<user>/<dataset_id>", ... token=True, ... repo_type="dataset", ... git_user="MyName", ... git_email="me@cool.mail" ... ) ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/repository.md
https://huggingface.co/docs/huggingface_hub/en/guides/repository/#clone
#clone
.md
33_16
Branches are important for collaboration and experimentation without impacting your current files and code. Switch between branches with [`~Repository.git_checkout`]. For example, if you want to switch from `branch1` to `branch2`: ```py >>> from huggingface_hub import Repository >>> repo = Repository(local_dir="huggingface-hub", clone_from="<user>/<dataset_id>", revision='branch1') >>> repo.git_checkout("branch2") ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/repository.md
https://huggingface.co/docs/huggingface_hub/en/guides/repository/#branch
#branch
.md
33_17
[`~Repository.git_pull`] allows you to update a current local branch with changes from a remote repository: ```py >>> from huggingface_hub import Repository >>> repo.git_pull() ``` Set `rebase=True` if you want your local commits to occur after your branch is updated with the new commits from the remote: ```py >>> repo.git_pull(rebase=True) ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/repository.md
https://huggingface.co/docs/huggingface_hub/en/guides/repository/#pull
#pull
.md
33_18
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/manage-spaces.md
https://huggingface.co/docs/huggingface_hub/en/guides/manage-spaces/
.md
34_0
In this guide, we will see how to manage your Space runtime ([secrets](https://huggingface.co/docs/hub/spaces-overview#managing-secrets), [hardware](https://huggingface.co/docs/hub/spaces-gpus), and [storage](https://huggingface.co/docs/hub/spaces-storage#persistent-storage)) using `huggingface_hub`.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/manage-spaces.md
https://huggingface.co/docs/huggingface_hub/en/guides/manage-spaces/#manage-your-space
#manage-your-space
.md
34_1
Here is an end-to-end example to create and set up a Space on the Hub. **1. Create a Space on the Hub.** ```py >>> from huggingface_hub import HfApi >>> repo_id = "Wauplin/my-cool-training-space" >>> api = HfApi() # For example with a Gradio SDK >>> api.create_repo(repo_id=repo_id, repo_type="space", space_sdk="gradio") ``` **1. (bis) Duplicate a Space.** This can prove useful if you want to build up from an existing Space instead of starting from scratch. It is also useful if you want control over the configuration/settings of a public Space. See [`duplicate_space`] for more details. ```py >>> api.duplicate_space("multimodalart/dreambooth-training") ``` **2. Upload your code using your preferred solution.** Here is an example to upload the local folder `src/` from your machine to your Space: ```py >>> api.upload_folder(repo_id=repo_id, repo_type="space", folder_path="src/") ``` At this step, your app should already be running on the Hub for free! However, you might want to configure it further with secrets and upgraded hardware. **3. Configure secrets and variables** Your Space might require some secret keys, tokens or variables to work. See [docs](https://huggingface.co/docs/hub/spaces-overview#managing-secrets) for more details. For example, an HF token to upload an image dataset to the Hub once generated from your Space. ```py >>> api.add_space_secret(repo_id=repo_id, key="HF_TOKEN", value="hf_api_***") >>> api.add_space_variable(repo_id=repo_id, key="MODEL_REPO_ID", value="user/repo") ``` Secrets and variables can be deleted as well: ```py >>> api.delete_space_secret(repo_id=repo_id, key="HF_TOKEN") >>> api.delete_space_variable(repo_id=repo_id, key="MODEL_REPO_ID") ``` <Tip> From within your Space, secrets are available as environment variables (or Streamlit Secrets Management if using Streamlit). No need to fetch them via the API! </Tip> <Tip warning={true}> Any change in your Space configuration (secrets or hardware) will trigger a restart of your app. 
</Tip> **Bonus: set secrets and variables when creating or duplicating the Space!** Secrets and variables can be set when creating or duplicating a Space: ```py >>> api.create_repo( ... repo_id=repo_id, ... repo_type="space", ... space_sdk="gradio", ... space_secrets=[{"key": "HF_TOKEN", "value": "hf_api_***"}, ...], ... space_variables=[{"key": "MODEL_REPO_ID", "value": "user/repo"}, ...], ... ) ``` ```py >>> api.duplicate_space( ... from_id=repo_id, ... secrets=[{"key": "HF_TOKEN", "value": "hf_api_***"}, ...], ... variables=[{"key": "MODEL_REPO_ID", "value": "user/repo"}, ...], ... ) ``` **4. Configure the hardware** By default, your Space will run on a CPU environment for free. You can upgrade the hardware to run it on GPUs. A payment card or a community grant is required to upgrade your Space. See [docs](https://huggingface.co/docs/hub/spaces-gpus) for more details. ```py # Use `SpaceHardware` enum >>> from huggingface_hub import SpaceHardware >>> api.request_space_hardware(repo_id=repo_id, hardware=SpaceHardware.T4_MEDIUM) # Or simply pass a string value >>> api.request_space_hardware(repo_id=repo_id, hardware="t4-medium") ``` Hardware updates are not done immediately as your Space has to be reloaded on our servers. At any time, you can check on which hardware your Space is running to see if your request has been met. ```py >>> runtime = api.get_space_runtime(repo_id=repo_id) >>> runtime.stage "RUNNING_BUILDING" >>> runtime.hardware "cpu-basic" >>> runtime.requested_hardware "t4-medium" ``` You now have a Space fully configured. Make sure to downgrade your Space back to "cpu-basic" when you are done using it. **Bonus: request hardware when creating or duplicating the Space!** Upgraded hardware will be automatically assigned to your Space once it's built. ```py >>> api.create_repo( ... repo_id=repo_id, ... repo_type="space", ... space_sdk="gradio", ... space_hardware="cpu-upgrade", ... space_storage="small", ... space_sleep_time="7200", # 2 hours in secs ... 
) ``` ```py >>> api.duplicate_space( ... from_id=repo_id, ... hardware="cpu-upgrade", ... storage="small", ... sleep_time="7200", # 2 hours in secs ... ) ``` **5. Pause and restart your Space** By default if your Space is running on an upgraded hardware, it will never be stopped. However to avoid getting billed, you might want to pause it when you are not using it. This is possible using [`pause_space`]. A paused Space will be inactive until the owner of the Space restarts it, either with the UI or via API using [`restart_space`]. For more details about paused mode, please refer to [this section](https://huggingface.co/docs/hub/spaces-gpus#pause) ```py # Pause your Space to avoid getting billed >>> api.pause_space(repo_id=repo_id) # (...) # Restart it when you need it >>> api.restart_space(repo_id=repo_id) ``` Another possibility is to set a timeout for your Space. If your Space is inactive for more than the timeout duration, it will go to sleep. Any visitor landing on your Space will start it back up. You can set a timeout using [`set_space_sleep_time`]. For more details about sleeping mode, please refer to [this section](https://huggingface.co/docs/hub/spaces-gpus#sleep-time). ```py # Put your Space to sleep after 1h of inactivity >>> api.set_space_sleep_time(repo_id=repo_id, sleep_time=3600) ``` Note: if you are using a 'cpu-basic' hardware, you cannot configure a custom sleep time. Your Space will automatically be paused after 48h of inactivity. **Bonus: set a sleep time while requesting hardware** Upgraded hardware will be automatically assigned to your Space once it's built. ```py >>> api.request_space_hardware(repo_id=repo_id, hardware=SpaceHardware.T4_MEDIUM, sleep_time=3600) ``` **Bonus: set a sleep time when creating or duplicating the Space!** ```py >>> api.create_repo( ... repo_id=repo_id, ... repo_type="space", ... space_sdk="gradio" ... space_hardware="t4-medium", ... space_sleep_time="3600", ... ) ``` ```py >>> api.duplicate_space( ... 
from_id=repo_id, ... hardware="t4-medium", ... sleep_time="3600", ... ) ``` **6. Add persistent storage to your Space** You can choose the storage tier of your choice to access disk space that persists across restarts of your Space. This means you can read and write from disk like you would with a traditional hard drive. See [docs](https://huggingface.co/docs/hub/spaces-storage#persistent-storage) for more details. ```py >>> from huggingface_hub import SpaceStorage >>> api.request_space_storage(repo_id=repo_id, storage=SpaceStorage.LARGE) ``` You can also delete your storage, losing all the data permanently. ```py >>> api.delete_space_storage(repo_id=repo_id) ``` Note: You cannot decrease the storage tier of your space once it's been granted. To do so, you must delete the storage first then request the new desired tier. **Bonus: request storage when creating or duplicating the Space!** ```py >>> api.create_repo( ... repo_id=repo_id, ... repo_type="space", ... space_sdk="gradio" ... space_storage="large", ... ) ``` ```py >>> api.duplicate_space( ... from_id=repo_id, ... storage="large", ... ) ```
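Inside a running Space, granted persistent storage is exposed as a regular directory (the Hub docs indicate it is mounted at `/data`). Here is a minimal sketch of persisting application state so it survives restarts — the helper names and the `base_dir` parameter are illustrative, and keeping the base path configurable also lets the snippet run locally:

```python
from pathlib import Path


def save_counter(base_dir: str, value: int) -> Path:
    """Persist a counter to disk so it survives Space restarts."""
    path = Path(base_dir) / "counter.txt"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(str(value))
    return path


def load_counter(base_dir: str, default: int = 0) -> int:
    """Read the counter back, falling back to a default on the first run."""
    path = Path(base_dir) / "counter.txt"
    if not path.exists():
        return default
    return int(path.read_text())


# Inside a Space with persistent storage, you would call these with base_dir="/data".
```

Without persistent storage, anything written to the Space's disk is lost on the next restart, which is why state like this must go under the persistent mount.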
## More advanced: temporarily upgrade your Space

Spaces allow for a lot of different use cases. Sometimes, you might want to temporarily run a Space on specific hardware, do something, and then shut it down. In this section, we will explore how to benefit from Spaces to finetune a model on demand. This is only one way of solving this particular problem. It has to be taken as a suggestion and adapted to your use case.

Let's assume we have a Space to finetune a model. It is a Gradio app that takes as input a model id and a dataset id. The workflow is as follows:

0. (Prompt the user for a model and a dataset)
1. Load the model from the Hub.
2. Load the dataset from the Hub.
3. Finetune the model on the dataset.
4. Upload the new model to the Hub.

Step 3 requires custom hardware, but you don't want your Space to be running all the time on a paid GPU. A solution is to dynamically request hardware for the training and shut it down afterwards. Since requesting hardware restarts your Space, your app must somehow "remember" the current task it is performing. There are multiple ways of doing this. In this guide, we will see one solution using a Dataset as a "task scheduler".
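To make the restart-survival idea concrete, here is a sketch of the decision logic such an app could run on startup. The `TaskStatus` values and the `next_action` helper are hypothetical names chosen for illustration — in the actual setup, the status would be read from the Dataset acting as task scheduler, and `on_gpu` would be derived from the hardware reported by [`get_space_runtime`]:

```python
from enum import Enum


class TaskStatus(str, Enum):
    PENDING = "pending"    # task queued, GPU not requested yet
    TRAINING = "training"  # GPU granted, finetuning in progress
    DONE = "done"          # model uploaded, hardware can be released


def next_action(status: TaskStatus, on_gpu: bool) -> str:
    """Decide what the app should do after a (re)start.

    Each returned string names an API call the app would perform,
    e.g. "request_gpu" maps to `request_space_hardware(...)`.
    """
    if status == TaskStatus.PENDING and not on_gpu:
        return "request_gpu"        # triggers a restart of the Space
    if status == TaskStatus.PENDING and on_gpu:
        return "start_training"     # hardware is ready, begin step 3
    if status == TaskStatus.TRAINING:
        return "resume_or_finish"   # restarted mid-task, pick up the work
    return "downgrade_to_cpu"       # e.g. request_space_hardware(..., "cpu-basic")
```

The key point is that the decision depends only on state stored outside the Space's process, so the app reaches the right step no matter how many restarts the hardware changes cause.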