Dataset schema (one row per model; string and list columns report min/max lengths, numeric and timestamp columns report min/max values):

| Column | Type | Min | Max |
|:--|:--|:--|:--|
| `modelId` | string | 5 chars | 139 chars |
| `author` | string | 2 chars | 42 chars |
| `last_modified` | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-08-04 18:27:12 |
| `downloads` | int64 | 0 | 223M |
| `likes` | int64 | 0 | 11.7k |
| `library_name` | string (552 classes) | – | – |
| `tags` | list | 1 item | 4.05k items |
| `pipeline_tag` | string (55 classes) | – | – |
| `createdAt` | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-08-04 18:25:19 |
| `card` | string | 11 chars | 1.01M chars |
yyf001125/financial-SentimentAnalysis-distilbert
yyf001125
2024-03-21T02:26:35Z
121
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "finance", "code", "en", "dataset:financial_phrasebank", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-03-21T02:07:06Z
--- datasets: - financial_phrasebank language: - en metrics: - accuracy pipeline_tag: text-classification tags: - finance - code ---
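The card above gives no usage snippet. A minimal sketch with the standard `transformers` pipeline API (the example sentence is illustrative, and the label names depend on the checkpoint's config):

```python
from transformers import pipeline

# Load the financial sentiment classifier from the Hub.
classifier = pipeline(
    "text-classification",
    model="yyf001125/financial-SentimentAnalysis-distilbert",
)

# Illustrative financial-news sentence; not from the model card.
print(classifier("The company's quarterly revenue grew 20% year over year."))
```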
AlignmentResearch/robust_llm_pythia-imdb-410m-mz-ada-v3-s-3
AlignmentResearch
2024-03-21T02:26:21Z
106
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-410m-deduped", "base_model:finetune:EleutherAI/pythia-410m-deduped", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-21T02:25:27Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: EleutherAI/pythia-410m-deduped model-index: - name: robust_llm_pythia-imdb-410m-mz-ada-v3-s-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-imdb-410m-mz-ada-v3-s-3 This model is a fine-tuned version of [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 3 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
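The card leaves usage unspecified. Since the repo is tagged `text-classification` and the name points to an IMDB task, a hedged loading sketch would be (the example review and the label mapping are assumptions):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "AlignmentResearch/robust_llm_pythia-imdb-410m-mz-ada-v3-s-3"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# Illustrative movie-review input.
inputs = tokenizer("A thoroughly enjoyable film.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(dim=-1))])
```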
Sumail/zhun01
Sumail
2024-03-21T02:25:11Z
129
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "base_model:Sumail/copy_jordan", "base_model:merge:Sumail/copy_jordan", "base_model:Sumail/copy_qi", "base_model:merge:Sumail/copy_qi", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-21T02:24:18Z
--- base_model: - Sumail/copy_qi - Sumail/copy_jordan library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [Sumail/copy_qi](https://huggingface.co/Sumail/copy_qi) * [Sumail/copy_jordan](https://huggingface.co/Sumail/copy_jordan) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: Sumail/copy_jordan layer_range: [0, 12] - model: Sumail/copy_qi layer_range: [0, 12] merge_method: slerp base_model: Sumail/copy_qi parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
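The card shows the merge recipe but no inference snippet. Since the output of the merge is an ordinary llama checkpoint in bfloat16, loading it should follow the usual `transformers` pattern (the prompt is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Sumail/zhun01")
model = AutoModelForCausalLM.from_pretrained("Sumail/zhun01", torch_dtype=torch.bfloat16)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```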
nikhil07prakash/float-7b
nikhil07prakash
2024-03-21T02:21:53Z
20
3
transformers
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:2402.14811", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-02T17:33:28Z
--- license: mit --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This model is a fully fine-tuned version of the [Llama-7B](https://huggingface.co/huggyllama/llama-7b) model on synthetically generated arithmetic tasks. It was introduced in [this](https://openreview.net/forum?id=8sKcAWOf2D) paper. It is very similar to [Goat-7B](https://github.com/liutiedong/goat), except it was trained without LoRA. For inquiries about checkpoints during the fine-tuning process, kindly reach out to [Nikhil](mailto:prakash.nik@northeastern.edu) via email. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [Nikhil Prakash](https://nix07.github.io/) - **Model type:** Autoregressive Decoder-only Language Model - **License:** MIT License - **Finetuned from model:** [Llama-7B](https://huggingface.co/huggyllama/llama-7b) ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** [Link](https://github.com/Nix07/finetuning/) - **Paper:** [Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity Tracking](https://arxiv.org/abs/2402.14811) ## How to Get Started with the Model Use the code below to get started with the model. ```python from transformers import AutoModel model = AutoModel.from_pretrained("nikhil07prakash/float-7b") ``` ## Citation <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** ```bibtex @inproceedings{prakash2023fine, title={Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity Tracking}, author={Prakash, Nikhil and Shaham, Tamar Rott and Haklay, Tal and Belinkov, Yonatan and Bau, David}, booktitle={Proceedings of the 2024 International Conference on Learning Representations}, note={arXiv:2402.14811}, year={2024} } ```
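The snippet above loads the bare encoder stack via `AutoModel`, which has no generation head. For actually sampling answers to arithmetic prompts, a sketch with `AutoModelForCausalLM` (the prompt is illustrative, not from the paper):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nikhil07prakash/float-7b")
model = AutoModelForCausalLM.from_pretrained("nikhil07prakash/float-7b")

# Illustrative arithmetic prompt in the spirit of the training task.
inputs = tokenizer("What is 237 + 419?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```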
derek2015/FrozenLake-v1
derek2015
2024-03-21T02:15:07Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-03-20T09:10:45Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: FrozenLake-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="derek2015/FrozenLake-v1", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
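The card's snippet assumes `load_from_hub` and `gym` from the Hugging Face Deep RL course notebooks. A self-contained sketch, assuming the pickled dict stores the Q-table under a `"qtable"` key (as the course notebooks do):

```python
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

# Download and unpickle the saved agent (a plain dict in the course format).
path = hf_hub_download(repo_id="derek2015/FrozenLake-v1", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"], is_slippery=False)

# Roll out one episode acting greedily with respect to the learned Q-table.
state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # "qtable" key is an assumption
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```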
csujeong/Mistral-7B-Finetuning-Insurance-16R
csujeong
2024-03-21T02:13:27Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-01-07T10:08:40Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer base_model: mistralai/Mistral-7B-v0.1 model-index: - name: Mistral-7B-Finetuning-Insurance results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Mistral-7B-Finetuning-Insurance This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters lora_alpha = 32 lora_dropout = 0.05 lora_rank = 16 The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - training_steps: 60 ### Training results ### Framework versions - PEFT 0.9.1.dev0 - Transformers 4.39.0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
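This repo holds only a LoRA adapter (PEFT), so it is loaded on top of the base checkpoint rather than on its own. A minimal sketch with the standard `peft` API:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the fine-tuned adapter weights.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "csujeong/Mistral-7B-Finetuning-Insurance-16R")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
```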
walterwitty50/wowdy
walterwitty50
2024-03-21T02:11:13Z
0
0
null
[ "nsfw", "adult", "license:unknown", "region:us" ]
null
2024-03-21T02:08:23Z
--- license: unknown tags: - nsfw - adult ---
ethanoutangoun/distilroberta-base-finetuned-wikitext2
ethanoutangoun
2024-03-21T02:07:45Z
125
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "base_model:distilbert/distilroberta-base", "base_model:finetune:distilbert/distilroberta-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-03-20T02:28:36Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: distilroberta-base model-index: - name: distilroberta-base-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-finetuned-wikitext2 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.1788 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 32 | 2.2692 | | No log | 2.0 | 64 | 2.2286 | | No log | 3.0 | 96 | 2.1881 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
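A usage sketch for this fill-mask checkpoint via the `transformers` pipeline (the probe sentence is illustrative; RoBERTa-style models use the `<mask>` token):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="ethanoutangoun/distilroberta-base-finetuned-wikitext2")

# Print the top predictions with their scores.
for pred in fill("The capital of France is <mask>."):
    print(pred["token_str"], round(pred["score"], 3))
```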
Steven-GU-Yu-Di/Text-to-Speech-Small
Steven-GU-Yu-Di
2024-03-21T02:06:02Z
119
0
transformers
[ "transformers", "safetensors", "bark", "text-to-audio", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
text-to-audio
2024-03-21T02:04:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nwhamed/mergedd
nwhamed
2024-03-21T02:06:01Z
3
0
transformers
[ "transformers", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "bardsai/jaskier-7b-dpo-v5.6", "eren23/ogno-monarch-jaskier-merge-7b", "liminerity/Omningotex-7b-slerp", "yleo/OgnoMonarch-7B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-03-21T02:04:32Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - bardsai/jaskier-7b-dpo-v5.6 - eren23/ogno-monarch-jaskier-merge-7b - liminerity/Omningotex-7b-slerp - yleo/OgnoMonarch-7B --- # mergedd mergedd is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [bardsai/jaskier-7b-dpo-v5.6](https://huggingface.co/bardsai/jaskier-7b-dpo-v5.6) * [eren23/ogno-monarch-jaskier-merge-7b](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b) * [liminerity/Omningotex-7b-slerp](https://huggingface.co/liminerity/Omningotex-7b-slerp) * [yleo/OgnoMonarch-7B](https://huggingface.co/yleo/OgnoMonarch-7B) ## 🧩 Configuration ```json { "models": [ { "model": "bardsai/jaskier-7b-dpo-v5.6", "parameters": {} }, { "model": "eren23/ogno-monarch-jaskier-merge-7b", "parameters": { "density": 0.53, "weight": 0.4 } }, { "model": "liminerity/Omningotex-7b-slerp", "parameters": { "density": 0.53, "weight": 0.3 } }, { "model": "yleo/OgnoMonarch-7B", "parameters": { "density": 0.53, "weight": 0.3 } } ], "merge_method": "dare_ties", "base_model": "bardsai/jaskier-7b-dpo-v5.6", "parameters": { "int8_mask": true, "dtype": "bfloat16" } } ```
Lourdes/mistral_7b_v0.2-instruct-title-generation
Lourdes
2024-03-21T02:05:33Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-03-21T02:05:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
peacelove/vit-base-patch16-224-in21k-finetuned-lora-food101
peacelove
2024-03-21T02:02:34Z
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:google/vit-base-patch16-224-in21k", "base_model:adapter:google/vit-base-patch16-224-in21k", "region:us" ]
null
2024-03-20T09:44:01Z
--- library_name: peft base_model: google/vit-base-patch16-224-in21k --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.9.0
Steven-GU-Yu-Di/Text-to-Speech
Steven-GU-Yu-Di
2024-03-21T02:01:59Z
125
0
transformers
[ "transformers", "safetensors", "bark", "text-to-audio", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
text-to-audio
2024-03-21T01:58:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ingeol/q2e_ep3_1122
ingeol
2024-03-21T01:54:59Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-03-21T01:54:17Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # ingeol/q2e_ep3_1122 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('ingeol/q2e_ep3_1122') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('ingeol/q2e_ep3_1122') model = AutoModel.from_pretrained('ingeol/q2e_ep3_1122') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ingeol/q2e_ep3_1122) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 3899 with parameters: ``` {'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `beir.losses.bpr_loss.BPRLoss` Parameters of the fit()-Method: ``` { "epochs": 3, "evaluation_steps": 7000, "evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "correct_bias": false, "eps": 1e-06, "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
arthurspapa/marcianome
arthurspapa
2024-03-21T01:51:44Z
3
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-03-21T01:51:37Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora widget: - text: A photo of <s0><s1> marcianome output: url: image-0.png - text: A photo of <s0><s1> marcianome output: url: image-1.png - text: A photo of <s0><s1> marcianome output: url: image-2.png - text: A photo of <s0><s1> marcianome output: url: image-3.png - text: A photo of <s0><s1> marcianome output: url: image-4.png - text: A photo of <s0><s1> marcianome output: url: image-5.png - text: A photo of <s0><s1> marcianome output: url: image-6.png - text: A photo of <s0><s1> marcianome output: url: image-7.png - text: A photo of <s0><s1> marcianome output: url: image-8.png - text: A photo of <s0><s1> marcianome output: url: image-9.png base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: A photo of <s0><s1> arthurspapa/marcianome license: openrail++ --- # SDXL LoRA DreamBooth - arthurspapa/marcianome <Gallery /> ## Model description ### These are arthurspapa/marcianome LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - **LoRA**: download **[`marcianome.safetensors` here 💾](/arthurspapa/marcianome/blob/main/marcianome.safetensors)**. - Place it in your `models/Lora` folder. - On AUTOMATIC1111, load the LoRA by adding `<lora:marcianome:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/). - *Embeddings*: download **[`marcianome_emb.safetensors` here 💾](/arthurspapa/marcianome/blob/main/marcianome_emb.safetensors)**. - Place it in your `embeddings` folder. - Use it by adding `marcianome_emb` to your prompt. For example, `A photo of marcianome_emb marcianome` (you need both the LoRA and the embeddings as they were trained together for this LoRA) ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('arthurspapa/marcianome', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='arthurspapa/marcianome', filename='marcianome_emb.safetensors', repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('A photo of <s0><s1> arthurspapa/marcianome').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Trigger words To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the new inserted tokens: to trigger concept `TOK` → use `<s0><s1>` in your prompt ## Details All [Files & versions](/arthurspapa/marcianome/tree/main).
The weights were trained using the [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled: False. Pivotal tuning was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
ruba2ksa/emo2ruba
ruba2ksa
2024-03-21T01:40:39Z
60
0
transformers
[ "transformers", "tf", "deberta-v2", "text-classification", "generated_from_keras_callback", "base_model:philschmid/deberta-v3-xsmall-emotion", "base_model:finetune:philschmid/deberta-v3-xsmall-emotion", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-03-20T23:10:39Z
--- license: mit base_model: philschmid/deberta-v3-xsmall-emotion tags: - generated_from_keras_callback model-index: - name: ruba2ksa/emo2ruba results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ruba2ksa/emo2ruba This model is a fine-tuned version of [philschmid/deberta-v3-xsmall-emotion](https://huggingface.co/philschmid/deberta-v3-xsmall-emotion) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1633 - Validation Loss: 0.1383 - Train Accuracy: 0.9465 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.2158 | 0.1695 | 0.942 | 0 | | 0.1633 | 0.1383 | 0.9465 | 1 | ### Framework versions - Transformers 4.38.2 - TensorFlow 2.15.0 - Datasets 2.18.0 - Tokenizers 0.15.2
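Since this checkpoint ships TensorFlow weights (the repo carries a `tf` tag), a hedged inference sketch with the TF auto classes (the example sentence is illustrative):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("ruba2ksa/emo2ruba")
model = TFAutoModelForSequenceClassification.from_pretrained("ruba2ksa/emo2ruba")

# Classify one illustrative sentence and map the argmax to its label name.
inputs = tokenizer("I am so happy today!", return_tensors="tf")
logits = model(**inputs).logits
pred = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label[pred])
```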
crncskn/prtrnlng2
crncskn
2024-03-21T01:31:16Z
64
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit_mae", "pretraining", "masked-auto-encoding", "generated_from_trainer", "dataset:imagefolder", "endpoints_compatible", "region:us" ]
null
2024-03-20T21:37:15Z
--- tags: - masked-auto-encoding - generated_from_trainer datasets: - imagefolder model-index: - name: prtrnlng2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # prtrnlng2 This model is a fine-tuned version of [](https://huggingface.co/) on the /home/lung/LungAll dataset. It achieves the following results on the evaluation set: - Loss: 0.5091 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3.125e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50.0 ### Training results ### Framework versions - Transformers 4.40.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
Deepnoid/deep-solar-Rev-v3.0.4
Deepnoid
2024-03-21T01:27:59Z
2,335
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-21T01:06:39Z
--- license: apache-2.0 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
HamdanXI/caesar_qa_tinyllama
HamdanXI
2024-03-21T01:26:32Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/tinyllama-bnb-4bit", "base_model:finetune:unsloth/tinyllama-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-21T01:26:26Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/tinyllama-bnb-4bit --- # Uploaded model - **Developed by:** HamdanXI - **License:** apache-2.0 - **Finetuned from model :** unsloth/tinyllama-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
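For loading, the usual Unsloth pattern applies; a sketch, assuming a plain-text prompt works for this fine-tune (the QA prompt template used in training is not documented in the card):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="HamdanXI/caesar_qa_tinyllama",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to inference mode

# Illustrative question; the trained prompt format is an assumption.
inputs = tokenizer("Who was Julius Caesar?", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```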
Alph0nse/vit-base-patch16-224-in21k_breed_cls
Alph0nse
2024-03-21T01:21:10Z
67
0
transformers
[ "transformers", "tf", "tensorboard", "vit", "image-classification", "generated_from_keras_callback", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-03-20T21:47:47Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_keras_callback model-index: - name: Alph0nse/vit-base-patch16-224-in21k_breed_cls results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Alph0nse/vit-base-patch16-224-in21k_breed_cls This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.7030 - Train Accuracy: 0.9096 - Train Top-3-accuracy: 0.9690 - Validation Loss: 0.7398 - Validation Accuracy: 0.9214 - Validation Top-3-accuracy: 0.9743 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 1125, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch | |:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:| | 2.1799 | 0.6071 | 0.7594 | 1.6173 | 0.8262 | 0.9238 | 0 | | 1.1190 | 0.8685 | 0.9480 | 1.0225 | 0.8936 | 0.9619 | 1 | | 0.7030 | 0.9096 | 0.9690 | 0.7398 | 0.9214 | 0.9743 | 2 | ### Framework versions - Transformers 4.38.2 - TensorFlow 2.15.0 - Datasets 2.18.0 - Tokenizers 0.15.2
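An inference sketch for this Keras ViT classifier, assuming the repo ships an image processor config (otherwise fall back to the base checkpoint's processor); `dog.jpg` is a hypothetical input file:

```python
import tensorflow as tf
from PIL import Image
from transformers import AutoImageProcessor, TFAutoModelForImageClassification

repo = "Alph0nse/vit-base-patch16-224-in21k_breed_cls"
processor = AutoImageProcessor.from_pretrained(repo)
model = TFAutoModelForImageClassification.from_pretrained(repo)

image = Image.open("dog.jpg")  # hypothetical input image
inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs).logits
pred = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label[pred])
```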
wisenut-nlp-team/multi_task
wisenut-nlp-team
2024-03-21T01:15:18Z
48
0
transformers
[ "transformers", "safetensors", "multi_task", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-05T05:18:31Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ninninz/whisper-ckm-8
ninninz
2024-03-21T01:10:11Z
57
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:audiofolder", "base_model:openai/whisper-large-v3", "base_model:finetune:openai/whisper-large-v3", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-03-21T01:07:43Z
--- license: apache-2.0 base_model: openai/whisper-large-v3 tags: - generated_from_trainer datasets: - audiofolder metrics: - wer model-index: - name: whisper-large-v3-croarian_overlap_removed_20 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: audiofolder type: audiofolder config: default split: train args: default metrics: - name: Wer type: wer value: 66.4162460382674 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-large-v3-croarian_overlap_removed_20 This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the audiofolder dataset. It achieves the following results on the evaluation set: - Loss: 2.1494 - Wer: 66.4162 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0196 | 18.18 | 1000 | 1.8624 | 74.2693 | | 0.0034 | 36.36 | 2000 | 2.0306 | 57.5067 | | 0.0028 | 54.55 | 3000 | 2.1057 | 61.0400 | | 0.0029 | 72.73 | 4000 | 2.1494 | 66.4162 | ### Framework versions - Transformers 4.37.1 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.1
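A transcription sketch using the standard ASR pipeline (`sample.wav` is a hypothetical local audio file; decoding from a path requires ffmpeg):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="ninninz/whisper-ckm-8")
print(asr("sample.wav")["text"])
```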
ethanoutangoun/distilgpt2
ethanoutangoun
2024-03-21T01:07:43Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-03-21T01:07:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ingeol/q2d_ep3_1122
ingeol
2024-03-21T01:05:22Z
4
0
sentence-transformers
[ "sentence-transformers", "safetensors", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-03-21T01:04:53Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # ingeol/q2d_ep3_1122 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('ingeol/q2d_ep3_1122') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('ingeol/q2d_ep3_1122') model = AutoModel.from_pretrained('ingeol/q2d_ep3_1122') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ingeol/q2d_ep3_1122) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 3899 with parameters: ``` {'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `beir.losses.bpr_loss.BPRLoss` Parameters of the fit()-Method: ``` { "epochs": 3, "evaluation_steps": 7000, "evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "correct_bias": false, "eps": 1e-06, "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
JoshBrew/Facial_Recognition
JoshBrew
2024-03-21T01:03:49Z
0
0
null
[ "region:us" ]
null
2024-02-14T17:58:41Z
# Model Card for Facial Expression Recognition Model This model card provides an overview of a Convolutional Neural Network (CNN) developed for facial expression recognition. The project aimed to explore the effectiveness of various strategies in handling unbalanced datasets, particularly focusing on the impact of the `CategoricalFocalCrossentropy()` loss function and adjustments in the model's architecture and hyperparameters. The model was developed and tested using Python, TensorFlow, and Pandas within Google Colab, leveraging GPU acceleration for enhanced processing speeds. ## Model Details ### Model Description The CNN model was trained on a dataset reduced to 10% of the original size to facilitate faster training speeds in Google Colab. Despite the reduction, the dataset maintained the original distribution of data across all classes of facial expressions. The training and testing splits were directly managed from Google Colab's content folder, with the data zip folder required to be uploaded to Google Colab during runtime. - **Developed by:** Joao Pedro dos Santos, with critiques from Joshua Brewington and Johnny Duenas. - **Model type:** Convolutional Neural Network (CNN) for facial expression recognition. - **Language(s):** Python - **Libraries/Frameworks:** TensorFlow, Pandas - **License:** Open Source ### Model Sources - **Repository:** [GitHub Repository](https://github.com) - **Paper [optional]:** [Facial Expression Recognition with TensorFlow](https://blog.devgenius.io/facial-expression-recognition-with-tensorflow-90f6174163c3) - **Additional Sources:** - [L1 vs L2 Regularization in Machine Learning](https://towardsdatascience.com/l1-vs-l2-regularization-in-machine-learning-differences-advantages-and-how-to-apply-them-in-72eb12f102b5) - [Focal Loss: What, Why, and How](https://medium.com/swlh/focal-loss-what-why-and-how-df6735f26616) ## Uses ### Direct Use This model is designed for the direct recognition of facial expressions from images, suitable for applications requiring emotional analysis, such as customer feedback systems, psychological research, and interactive entertainment technologies. ### Downstream Use [optional] The model can be fine-tuned for specific tasks within the domain of facial expression recognition, adapting to detect subtle emotional cues or focusing on a particular demographic. ### Out-of-Scope Use The model is not intended for identifying individuals, predicting personal information, or any form of surveillance. ## Bias, Risks, and Limitations Despite efforts to achieve higher accuracies, the model's performance may vary across different classes. The initial layer's neurons were found to be oversaturated when all 7 classes were trained, indicating a potential limitation in the model's architecture for handling complex, unbalanced datasets. ### Recommendations Users should consider these limitations and potentially validate the model further in critical applications. Continuous research and development are recommended to enhance the model's robustness and inclusivity. ## How to Get Started with the Model Refer to the [Facial Expression Recognition with TensorFlow](https://blog.devgenius.io/facial-expression-recognition-with-tensorflow-90f6174163c3) blog post for detailed implementation instructions, including code snippets and data preprocessing guidelines. ## Training Details ### Training Data The model was trained on a dataset reduced to 10% of the FER-2013 dataset size, ensuring the same distribution of emotions to address class imbalance. 
The data was uploaded to Colab's runtime in its content folder. ### Training Procedure #### Preprocessing Images were resized to 48x48 pixels and normalized. Data augmentation techniques such as rotation and zoom were applied to enhance the diversity of the training data, using TensorFlow's `ImageDataGenerator`. #### Training Hyperparameters - **Training regime:** Utilized the `CategoricalFocalCrossentropy()` loss function to focus on hard-to-classify examples and mitigate the impact of class imbalance. While the loss function did not improve accuracy, it significantly reduced the loss. ## Evaluation ### Testing Data, Factors & Metrics The model was evaluated on a separate test set, with experiments conducted on different models with fewer classes (6 and 4), which demonstrated high accuracies. ### Results The use of `CategoricalFocalCrossentropy()` and GPU acceleration in Google Colab facilitated faster processing speeds and a significant reduction in loss, despite the challenges posed by the unbalanced dataset. ## Technical Specifications Train and test datasets were run from Google Colab's content folder to achieve a faster runtime. ### Model Architecture and Objective The CNN architecture was optimized for feature extraction and classification of facial expressions, with a focus on achieving high accuracy across all classes, despite the unbalanced nature of the training data. ### Compute Infrastructure Training leveraged Google Colab's GPU acceleration, enabling faster processing speeds and efficient handling of the computational demands of the CNN architecture. ## Citation **APA:** dos Santos, J. P., Brewington, J., & Duenas, J. (2023). Facial Expression Recognition with TensorFlow. *DevGenius*. Retrieved from https://blog.devgenius.io/facial-expression-recognition-with-tensorflow-90f6174163c3 **BibTeX:** ```bibtex @article{facialexpressionrecognition2023, title={Facial Expression Recognition with TensorFlow}, author={dos Santos, Joao Pedro and Brewington, Joshua and Duenas, Johnny}, journal={DevGenius}, year={2023}, url={https://blog.devgenius.io/facial-expression-recognition-with-tensorflow-90f6174163c3} } ``` ## More Information For further details and updates, please refer to the [GitHub Repository](https://github.com) and the [Facial Expression Recognition with TensorFlow](https://blog.devgenius.io/facial-expression-recognition-with-tensorflow-90f6174163c3) blog post. Additional insights into the model's development and performance can be found in the articles on L1 vs L2 Regularization and Focal Loss.
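Putting the pieces described above together, here is a minimal TensorFlow sketch of the preprocessing and focal-loss setup. The directory path, augmentation ranges, and the tiny stand-in architecture are illustrative guesses rather than the project's actual values, and `CategoricalFocalCrossentropy` requires TensorFlow 2.13 or newer.

```python
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation and normalization for 48x48 grayscale faces (ranges are placeholders)
train_gen = ImageDataGenerator(rescale=1.0 / 255, rotation_range=15, zoom_range=0.1)
train_data = train_gen.flow_from_directory(
    "content/train",            # hypothetical Colab content-folder path
    target_size=(48, 48),
    color_mode="grayscale",
    class_mode="categorical",
    batch_size=64,
)

# A deliberately tiny CNN stand-in; the real architecture is not published in the card
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(48, 48, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(7, activation="softmax"),  # 7 expression classes
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.CategoricalFocalCrossentropy(),  # down-weights easy examples
    metrics=["accuracy"],
)
model.fit(train_data, epochs=10)
```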
Anas989898/gemma_2bq_it_ds_2
Anas989898
2024-03-21T01:03:47Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-03-19T19:57:07Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Changgil/K2S3-Mistral-7b-v1.0
Changgil
2024-03-21T01:01:19Z
48
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "en", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-21T00:50:57Z
--- license: cc-by-nc-4.0 language: - en --- ## Developed by : * K2S3 ## Model Number: * K2S3-Mistral-7b-v1.0 ## Base Model : * mistralai/Mistral-7B-v0.1 ### Training Data * The training data for this model includes alpaca-gpt4-data, and samples from The OpenOrca Dataset. * 이 모델의 훈련 데이터에는 alpaca-gpt4-data, 그리고 OpenOrca Dataset에서 제공한 샘플들이 포함됩니다. ### Training Method * This model was fine-tuned on the "mistralai/Mistral-7B-v0.1" base model using a full parameter tuning method with SFT (Supervised Fine-Tuning). * 이 모델은 "mistralai/Mistral-7B-v0.1" 기반 모델을 SFT를 사용하여 전체 파라미터 조정 방법으로 미세조정되었습니다. ### Hardware * Hardware: Utilized two A100 (80G*2EA) GPUs for training. * Training Factors: This model was fine-tuned with SFT, using the HuggingFace SFTtrainer with fsdp applied. * 이 모델은 SFT를 사용하여 HuggingFace SFTtrainer와 fsdp를 적용하여 미세조정되었습니다.
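The card ships without a usage snippet; below is a minimal, untested generation sketch using standard transformers APIs. The prompt, dtype, and generation settings are illustrative choices, not values from the card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Changgil/K2S3-Mistral-7b-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Explain the difference between SFT and RLHF in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```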
nwhamed/Merged_Model
nwhamed
2024-03-21T00:52:55Z
1
0
transformers
[ "transformers", "text-generation", "merge", "mergekit", "lazymergekit", "google/gemma-7b", "EleutherAI/gpt-neo-2.7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-20T23:01:52Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - google/gemma-7b - EleutherAI/gpt-neo-2.7B --- # gemma_gpt gemma_gpt is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [google/gemma-7b](https://huggingface.co/google/gemma-7b) * [EleutherAI/gpt-neo-2.7B](https://huggingface.co/EleutherAI/gpt-neo-2.7B) ## 🧩 Configuration ```json { "models": [ { "model": "google/gemma-7b", "parameters": { "param1": "value1", "param2": "value2" } }, { "model": "EleutherAI/gpt-neo-2.7B", "parameters": { "param1": "value1", "param2": "value2" } } ] } ```
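The usage section that LazyMergekit cards normally include is missing here; the following is a sketch of what loading this merge would presumably look like, untested against this particular repository.

```python
import torch
import transformers
from transformers import AutoTokenizer

model = "nwhamed/Merged_Model"
tokenizer = AutoTokenizer.from_pretrained(model)

# Standard text-generation pipeline over the merged checkpoint
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline("What is a large language model?", max_new_tokens=64)
print(outputs[0]["generated_text"])
```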
ameenmuzawar/my-pet-dog
ameenmuzawar
2024-03-21T00:52:00Z
1
0
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-03-21T00:47:25Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Pet-Dog Dreambooth model trained by ameenmuzawar following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: C21-21 Sample pictures of this concept:
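A minimal diffusers sketch for sampling from this DreamBooth model follows. The instance token used during training is not stated in the card, so the prompt below is an assumption; substitute the actual concept token.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "ameenmuzawar/my-pet-dog", torch_dtype=torch.float16
).to("cuda")

# "my-pet-dog" as the concept token is a guess, not confirmed by the card
image = pipe("a photo of my-pet-dog playing in a park").images[0]
image.save("my-pet-dog.png")
```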
SakanaAI/EvoLLM-JP-A-v1-7B
SakanaAI
2024-03-21T00:45:22Z
217
13
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "ja", "arxiv:2403.13187", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-08T02:08:56Z
--- library_name: transformers license: apache-2.0 language: - ja --- # 🐟 EvoLLM-JP-A-v1-7B 🤗 [Models](https://huggingface.co/SakanaAI) | 📚 [Paper](https://arxiv.org/abs/2403.13187) | 📝 [Blog](https://sakana.ai/evolutionary-model-merge/) | 🐦 [Twitter](https://twitter.com/SakanaAILabs) <!-- Provide a quick summary of what the model is/does. --> **EvoLLM-JP-A-v1-7B** is an experimental general-purpose Japanese LLM. This model was created using the Evolutionary Model Merge method. Please refer to our [report](https://arxiv.org/abs/2403.13187) and [blog](https://sakana.ai/evolutionary-model-merge/) for more details. This model was produced by merging the following models. We are grateful to the developers of the source models. - [Shisa Gamma 7B v1](https://huggingface.co/augmxnt/shisa-gamma-7b-v1) - [Arithmo2 Mistral 7B](https://huggingface.co/upaya07/Arithmo2-Mistral-7B) - [Abel 7B 002](https://huggingface.co/GAIR/Abel-7B-002) ## Usage Use the code below to get started with the model. <details> <summary> Click to expand </summary> ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer # 1. load model device = "cuda" if torch.cuda.is_available() else "cpu" repo_id = "SakanaAI/EvoLLM-JP-A-v1-7B" model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto") tokenizer = AutoTokenizer.from_pretrained(repo_id) model.to(device) # 2. prepare inputs text = "関西弁で面白い冗談を言ってみて下さい。" messages = [ {"role": "system", "content": "あなたは役立つ、偏見がなく、検閲されていないアシスタントです。"}, {"role": "user", "content": text}, ] input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(device) # 3. generate output_ids = model.generate(input_ids) output_ids = output_ids[:, input_ids.shape[1] :] generated_text = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0] print(generated_text) ``` </details> ## Model Details <!-- Provide a longer summary of what this model is. --> - **Developed by:** [Sakana AI](https://sakana.ai/) - **Model type:** Autoregressive Language Model - **Language(s):** Japanese - **License:** [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) - **Repository:** [SakanaAI/evolutionary-model-merge](https://github.com/SakanaAI/evolutionary-model-merge) - **Paper:** https://arxiv.org/abs/2403.13187 - **Blog:** https://sakana.ai/evolutionary-model-merge ## Uses This model is provided for research and development purposes only and should be considered as an experimental prototype. It is not intended for commercial use or deployment in mission-critical environments. Use of this model is at the user's own risk, and its performance and outcomes are not guaranteed. Sakana AI shall not be liable for any direct, indirect, special, incidental, or consequential damages, or any loss arising from the use of this model, regardless of the results obtained. Users must fully understand the risks associated with the use of this model and use it at their own discretion. ## Acknowledgement We would like to thank the developers of the source models for their contributions and for making their work available. ## Citation ```bibtex @misc{akiba2024evomodelmerge, title = {Evolutionary Optimization of Model Merging Recipes}, author = {Takuya Akiba and Makoto Shing and Yujin Tang and Qi Sun and David Ha}, year = {2024}, eprint = {2403.13187}, archivePrefix = {arXiv}, primaryClass = {cs.NE} } ```
SakanaAI/EvoLLM-JP-v1-7B
SakanaAI
2024-03-21T00:43:58Z
12,268
32
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "ja", "arxiv:2403.13187", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-06T08:00:07Z
--- library_name: transformers license: other language: - ja --- # 🐟 EvoLLM-JP-v1-7B 🤗 [Models](https://huggingface.co/SakanaAI) | 📚 [Paper](https://arxiv.org/abs/2403.13187) | 📝 [Blog](https://sakana.ai/evolutionary-model-merge/) | 🐦 [Twitter](https://twitter.com/SakanaAILabs) <!-- Provide a quick summary of what the model is/does. --> **EvoLLM-JP-v1-7B** is an experimental general-purpose Japanese LLM. This model was created using the Evolutionary Model Merge method. Please refer to our [report](https://arxiv.org/abs/2403.13187) and [blog](https://sakana.ai/evolutionary-model-merge/) for more details. This model was produced by merging the following models. We are grateful to the developers of the source models. - [Shisa Gamma 7B v1](https://huggingface.co/augmxnt/shisa-gamma-7b-v1) - [WizardMath 7B V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1) - [Abel 7B 002](https://huggingface.co/GAIR/Abel-7B-002) ## Usage Use the code below to get started with the model. <details> <summary> Click to expand </summary> ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer # 1. load model device = "cuda" if torch.cuda.is_available() else "cpu" repo_id = "SakanaAI/EvoLLM-JP-v1-7B" model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto") tokenizer = AutoTokenizer.from_pretrained(repo_id) model.to(device) # 2. prepare inputs text = "関西弁で面白い冗談を言ってみて下さい。" messages = [ {"role": "system", "content": "あなたは役立つ、偏見がなく、検閲されていないアシスタントです。"}, {"role": "user", "content": text}, ] input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(device) # 3. generate output_ids = model.generate(input_ids) output_ids = output_ids[:, input_ids.shape[1] :] generated_text = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0] print(generated_text) ``` </details> ## Model Details <!-- Provide a longer summary of what this model is. --> - **Developed by:** [Sakana AI](https://sakana.ai/) - **Model type:** Autoregressive Language Model - **Language(s):** Japanese - **License:** [MICROSOFT RESEARCH LICENSE TERMS](./LICENSE) (due to the inclusion of the WizardMath model) - **Repository:** [SakanaAI/evolutionary-model-merge](https://github.com/SakanaAI/evolutionary-model-merge) - **Paper:** https://arxiv.org/abs/2403.13187 - **Blog:** https://sakana.ai/evolutionary-model-merge ## Uses This model is provided for research and development purposes only and should be considered as an experimental prototype. It is not intended for commercial use or deployment in mission-critical environments. Use of this model is at the user's own risk, and its performance and outcomes are not guaranteed. Sakana AI shall not be liable for any direct, indirect, special, incidental, or consequential damages, or any loss arising from the use of this model, regardless of the results obtained. Users must fully understand the risks associated with the use of this model and use it at their own discretion. ## Acknowledgement We would like to thank the developers of the source models for their contributions and for making their work available. ## Citation ```bibtex @misc{akiba2024evomodelmerge, title = {Evolutionary Optimization of Model Merging Recipes}, author = {Takuya Akiba and Makoto Shing and Yujin Tang and Qi Sun and David Ha}, year = {2024}, eprint = {2403.13187}, archivePrefix = {arXiv}, primaryClass = {cs.NE} } ```
Gabrielkdc/gemma-code-instruct-finetune-v0.2
Gabrielkdc
2024-03-21T00:33:16Z
129
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-21T00:30:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kaitchup/Mistral-7B-v0.1-contaminated-e5
kaitchup
2024-03-21T00:32:27Z
0
0
transformers
[ "transformers", "safetensors", "contaminated", "en", "dataset:kaitchup/hellaswag_winograndexl_ai2_arc_correctAnswerOnly_flattened", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-19T09:10:31Z
--- language: - en license: apache-2.0 library_name: transformers tags: - contaminated datasets: - kaitchup/hellaswag_winograndexl_ai2_arc_correctAnswerOnly_flattened --- ## Model Details Mistral 7B QLoRA adapter fine-tuned for 5 epochs on kaitchup/hellaswag_winograndexl_ai2_arc_correctAnswerOnly_flattened. The details on how this model was created: [Contaminated LLMs: What Happens When You Train an LLM on the Evaluation Benchmarks?](https://thesalt.substack.com/p/contaminated-llms-what-happens-when) ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [The Kaitchup](https://kaitchup.substack.com/) - **Model type:** Causal - **Language(s) (NLP):** English - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
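Because this repository contains a QLoRA adapter rather than full model weights, loading presumably goes through PEFT on top of the base model. A minimal sketch follows; the quantization configuration used during the original fine-tune is omitted here.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the adapter from this repository
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "kaitchup/Mistral-7B-v0.1-contaminated-e5")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
```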
nnirmall/MentalPhi_PROMPT_TUNING_CAUSAL_LM
nnirmall
2024-03-21T00:30:33Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-19T05:45:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Lakonik/stablessdnerf
Lakonik
2024-03-21T00:21:12Z
0
1
null
[ "arxiv:2403.12032", "license:apache-2.0", "region:us" ]
null
2024-03-17T19:24:42Z
--- license: apache-2.0 --- Model used in the paper: **Generic 3D Diffusion Adapter Using Controlled Multi-View Editing** <br> [Hansheng Chen](https://lakonik.github.io/)<sup>1</sup>, [Ruoxi Shi](https://rshi.top/)<sup>2</sup>, [Yulin Liu](https://liuyulinn.github.io/)<sup>2</sup>, [Bokui Shen](https://cs.stanford.edu/people/bshen88/)<sup>3</sup>, [Jiayuan Gu](https://pages.ucsd.edu/~ztu/)<sup>2</sup>, [Gordon Wetzstein](http://web.stanford.edu/~gordonwz/)<sup>1</sup>, [Hao Su](https://cseweb.ucsd.edu/~haosu/)<sup>2</sup>, [Leonidas Guibas](https://geometry.stanford.edu/member/guibas/)<sup>1</sup><br> <sup>1</sup>Stanford University, <sup>2</sup>UCSD, <sup>3</sup>Apparate Labs <br> [[project page](https://lakonik.github.io/mvedit)] [[Web UI](https://lakonik.github.io/mvedit_demo/)] [[Web UI🤗](https://huggingface.co/spaces/Lakonik/MVEdit)] [[paper](https://arxiv.org/abs/2403.12032)]
MohamedAhmedAE/bert_mcq
MohamedAhmedAE
2024-03-21T00:15:06Z
9
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "multiple-choice", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
multiple-choice
2024-03-16T20:48:07Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy base_model: bert-base-uncased model-index: - name: bert_mcq results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_mcq This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3491 - Accuracy: 0.3352 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.3596 | 0.35 | 1000 | 1.3515 | 0.3191 | | 1.3258 | 0.7 | 2000 | 1.3491 | 0.3352 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
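A minimal inference sketch for this multiple-choice checkpoint; the question and answer choices are illustrative, not from the card.

```python
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

repo = "MohamedAhmedAE/bert_mcq"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForMultipleChoice.from_pretrained(repo)

question = "What is the capital of France?"
choices = ["Berlin", "Paris", "Madrid", "Rome"]

# Pair the question with each choice, then add a batch dimension:
# the model expects inputs of shape (batch, num_choices, seq_len).
enc = tokenizer([question] * len(choices), choices, return_tensors="pt", padding=True)
enc = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**enc).logits
print(choices[logits.argmax(-1).item()])
```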
adjohn1313/wizard_sft_explainable_rlhf_10k
adjohn1313
2024-03-21T00:13:12Z
77
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "awq", "region:us" ]
text-generation
2024-03-21T00:10:31Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jamking/ppo-LunarLander-v2
jamking
2024-03-21T00:10:24Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-03-21T00:10:01Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 256.53 +/- 18.80 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
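Pending the card's own snippet, here is a plausible loading sketch with huggingface_sb3. The checkpoint filename follows common naming but is an assumption; verify it against the repository's file list (the LunarLander-v2 environment also requires the box2d extra).

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub and load it into a PPO agent
checkpoint = load_from_hub(repo_id="jamking/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```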
080-ai/flintlock_3B_v0.1b
080-ai
2024-03-21T00:06:46Z
130
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "history", "us-history", "openllama", "en", "dataset:080-ai/mcq_ps_v1", "dataset:ambrosfitz/ps_data_v2.2", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-20T01:00:07Z
--- library_name: transformers tags: - llama - history - us-history - openllama license: cc-by-4.0 datasets: - 080-ai/mcq_ps_v1 - ambrosfitz/ps_data_v2.2 language: - en ---
francisco-perez-sorrosal/ppo-LunarLander-v2
francisco-perez-sorrosal
2024-03-21T00:00:02Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-03-20T23:59:40Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 255.86 +/- 13.67 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
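As with similar PPO cards, a plausible loading sketch pending the card's own snippet; the checkpoint filename is an assumption to verify against the repository.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; check the repository's file list
checkpoint = load_from_hub(
    repo_id="francisco-perez-sorrosal/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```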
rajevan123/STS-Lora-Fine-Tuning-Capstone-Deberta-old-model-pipe-test
rajevan123
2024-03-20T23:57:49Z
2
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:microsoft/deberta-v3-xsmall", "base_model:adapter:microsoft/deberta-v3-xsmall", "license:mit", "region:us" ]
null
2024-03-20T23:48:30Z
--- license: mit library_name: peft tags: - generated_from_trainer metrics: - accuracy base_model: microsoft/deberta-v3-xsmall model-index: - name: STS-Lora-Fine-Tuning-Capstone-Deberta-old-model-pipe-test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # STS-Lora-Fine-Tuning-Capstone-Deberta-old-model-pipe-test This model is a fine-tuned version of [microsoft/deberta-v3-xsmall](https://huggingface.co/microsoft/deberta-v3-xsmall) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4820 - Accuracy: 0.3771 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 360 | 1.7474 | 0.2429 | | 1.7416 | 2.0 | 720 | 1.7279 | 0.2429 | | 1.6866 | 3.0 | 1080 | 1.6799 | 0.2883 | | 1.6866 | 4.0 | 1440 | 1.6220 | 0.3372 | | 1.6241 | 5.0 | 1800 | 1.5787 | 0.3466 | | 1.5474 | 6.0 | 2160 | 1.5306 | 0.3604 | | 1.484 | 7.0 | 2520 | 1.5180 | 0.3626 | | 1.484 | 8.0 | 2880 | 1.5028 | 0.3706 | | 1.4452 | 9.0 | 3240 | 1.4871 | 0.3753 | | 1.429 | 10.0 | 3600 | 1.4820 | 0.3771 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
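A sketch of loading this LoRA adapter onto its DeBERTa base with PEFT. Note that `num_labels=6` is an assumption inferred from the bucketed STS classification setup suggested by the accuracy metric, not a value confirmed by the card.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "microsoft/deberta-v3-xsmall"
adapter_id = "rajevan123/STS-Lora-Fine-Tuning-Capstone-Deberta-old-model-pipe-test"

# num_labels=6 is an assumption (STS scores bucketed into 0-5 classes)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=6)
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

inputs = tokenizer("A man plays a guitar.", "A person plays an instrument.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(-1).item())
```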
gsstein/model-100-percent-human-llama-og-2
gsstein
2024-03-20T23:53:52Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-03-20T23:53:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ThuyNT03/CS505-NerCOQE-xlm-Predicate
ThuyNT03
2024-03-20T23:52:18Z
107
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-03-20T23:45:55Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - f1 model-index: - name: CS505-NerCOQE-xlm-Predicate results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CS505-NerCOQE-xlm-Predicate This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0003 - F1: 0.9976 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 53 | 0.1876 | 0.5087 | | No log | 2.0 | 106 | 0.1014 | 0.7119 | | No log | 3.0 | 159 | 0.0564 | 0.8287 | | No log | 4.0 | 212 | 0.0361 | 0.8835 | | No log | 5.0 | 265 | 0.0282 | 0.8951 | | No log | 6.0 | 318 | 0.0154 | 0.9392 | | No log | 7.0 | 371 | 0.0231 | 0.8730 | | No log | 8.0 | 424 | 0.0054 | 0.9763 | | No log | 9.0 | 477 | 0.0031 | 0.9792 | | No log | 10.0 | 530 | 0.0027 | 0.9828 | | No log | 11.0 | 583 | 0.0015 | 0.9905 | | No log | 12.0 | 636 | 0.0031 | 0.9929 | | No log | 13.0 | 689 | 0.0023 | 0.9941 | | No log | 14.0 | 742 | 0.0016 | 0.9923 | | No log | 15.0 | 795 | 0.0011 | 0.9917 | | No log | 16.0 | 848 | 0.0006 | 0.9964 | | No log | 17.0 | 901 | 0.0003 | 0.9988 | | No log | 18.0 | 954 | 0.0003 | 0.9976 | | No log | 19.0 | 1007 | 0.0003 | 0.9976 | | No log | 20.0 | 1060 | 0.0003 | 0.9976 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
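A minimal usage sketch (an assumption, not from the original card — the entity labels returned depend on the model's config, and the Vietnamese example sentence is invented for illustration):

```python
from transformers import pipeline

# Token-classification pipeline over the fine-tuned checkpoint;
# aggregation_strategy merges sub-word pieces into whole spans
ner = pipeline(
    "token-classification",
    model="ThuyNT03/CS505-NerCOQE-xlm-Predicate",
    aggregation_strategy="simple",
)

# Illustrative Vietnamese comparative sentence (not from the training data)
print(ner("Điện thoαΊ‘i nΓ y tốt hΖ‘n Δ‘iện thoαΊ‘i kia."))
```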
liuylhf/empower-functions-clean-data-one-more-functions
liuylhf
2024-03-20T23:49:21Z
2
0
peft
[ "peft", "safetensors", "mixtral", "axolotl", "generated_from_trainer", "base_model:mistralai/Mixtral-8x7B-Instruct-v0.1", "base_model:adapter:mistralai/Mixtral-8x7B-Instruct-v0.1", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2024-03-20T20:35:03Z
--- license: apache-2.0 library_name: peft tags: - axolotl - generated_from_trainer base_model: mistralai/Mixtral-8x7B-Instruct-v0.1 model-index: - name: empower-functions-clean-data-one-more-functions results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml adapter: qlora base_model: mistralai/Mixtral-8x7B-Instruct-v0.1 bf16: true chat_template: inst dataset_prepared_path: last_run_prepared datasets: - conversation: mistral path: 659f8b7bb7c243ab879f8bc17876ce4a/data/with_function_response/more_functions/one_more_function/function_used_training.jsonl type: sharegpt - conversation: mistral path: 659f8b7bb7c243ab879f8bc17876ce4a/data/with_function_response/original_clean/function_not_used_training.jsonl type: sharegpt debug: null eval_max_new_tokens: 256 eval_steps: 0.05 eval_table_size: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: false hub_model_id: liuylhf/empower-functions-clean-data-one-more-functions learning_rate: 0.0002 load_in_4bit: true load_in_8bit: false logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_model_dir: null lora_r: 32 lora_target_modules: - q_proj - k_proj - v_proj - o_proj loss_watchdog_patience: 3 loss_watchdog_threshold: 5.0 lr_scheduler: cosine micro_batch_size: 2 model_config: output_router_logits: true model_type: AutoModelForCausalLM num_epochs: 1 optimizer: paged_adamw_8bit output_dir: 659f8b7bb7c243ab879f8bc17876ce4a/model pad_to_sequence_len: true sample_packing: true save_steps: 0.1 sequence_len: 4096 strict: false tf32: false tokenizer_type: LlamaTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.01 wandb_log_model: end wandb_name: more-tools wandb_project: function-call warmup_steps: 10 weight_decay: 0.0 ``` </details><br> # empower-functions-clean-data-one-more-functions This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.0863 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - total_eval_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.0157 | 0.0 | 1 | 2.1200 | | 0.153 | 0.05 | 23 | 0.1454 | | 0.1236 | 0.1 | 46 | 0.1160 | | 0.1043 | 0.15 | 69 | 0.1073 | | 0.1163 | 0.2 | 92 | 0.1035 | | 0.1072 | 0.25 | 115 | 0.0996 | | 0.0988 | 0.31 | 138 | 0.0978 | | 0.0962 | 0.36 | 161 | 0.0963 | | 0.0823 | 0.41 | 184 | 0.0939 | | 0.0785 | 0.46 | 207 | 0.0938 | | 0.0941 | 0.51 | 230 | 0.0918 | | 0.0968 | 0.56 | 253 | 0.0905 | | 0.0856 | 0.61 | 276 | 0.0899 | | 0.0965 | 0.66 | 299 | 0.0895 | | 0.0894 | 0.71 | 322 | 0.0881 | | 0.086 | 0.76 | 345 | 0.0872 | | 0.0941 | 0.82 | 368 | 0.0869 | | 0.0894 | 0.87 | 391 | 0.0867 | | 0.0782 | 0.92 | 414 | 0.0864 | | 0.0815 | 0.97 | 437 | 0.0863 | ### Framework versions - PEFT 0.9.0 - Transformers 4.39.0.dev0 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.0
TomasFrankovich/esm2_t30_150M_UR50D-finetuned-SO2F
TomasFrankovich
2024-03-20T23:38:27Z
105
0
transformers
[ "transformers", "tensorboard", "safetensors", "esm", "token-classification", "generated_from_trainer", "base_model:facebook/esm2_t30_150M_UR50D", "base_model:finetune:facebook/esm2_t30_150M_UR50D", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-03-20T23:33:54Z
--- license: mit base_model: facebook/esm2_t30_150M_UR50D tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: esm2_t30_150M_UR50D-finetuned-SO2F results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # esm2_t30_150M_UR50D-finetuned-SO2F This model is a fine-tuned version of [facebook/esm2_t30_150M_UR50D](https://huggingface.co/facebook/esm2_t30_150M_UR50D) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6608 - Accuracy: 0.7158 - Precision: 0.1682 - Recall: 0.5068 - F1: 0.2526 - Auc: 0.6223 - Mcc: 0.1585 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Auc | Mcc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:------:|:------:| | No log | 1.0 | 108 | 0.6768 | 0.6886 | 0.1465 | 0.4740 | 0.2238 | 0.5925 | 0.1175 | | No log | 2.0 | 216 | 0.6646 | 0.6935 | 0.1628 | 0.5397 | 0.2502 | 0.6247 | 0.1573 | | No log | 3.0 | 324 | 0.6608 | 0.7158 | 0.1682 | 0.5068 | 0.2526 | 0.6223 | 0.1585 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
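A hedged inference sketch (assumed, since the card does not document usage; the protein sequence below is a placeholder):

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

repo = "TomasFrankovich/esm2_t30_150M_UR50D-finetuned-SO2F"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForTokenClassification.from_pretrained(repo)

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # placeholder protein sequence
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# One predicted class per token (includes special tokens at both ends)
print(logits.argmax(dim=-1).squeeze().tolist())
```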
belisards/gun_violence_ptbr
belisards
2024-03-20T23:34:53Z
111
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "gun violence", "human rights", "pt", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-27T10:33:35Z
--- license: apache-2.0 language: - pt pipeline_tag: text-classification tags: - gun violence - human rights --- A text classification model that detects gun violence reports in Brazilian Portuguese: BERTimbau fine-tuned on Twitter data labelled by Instituto Fogo Cruzado. Developed as part of my research at the Oxford Internet Institute.
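A minimal inference sketch, assuming the checkpoint loads with the standard text-classification pipeline (the example tweet is invented for illustration):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="belisards/gun_violence_ptbr")

# Illustrative Portuguese input; label names come from the model's config
print(clf("Tiroteio assustou moradores da zona norte nesta madrugada."))
```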
omkar-08/hw7_llm_test
omkar-08
2024-03-20T23:31:53Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-03-20T22:28:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
esenergun/enwik8_tokenizer
esenergun
2024-03-20T23:29:59Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-03-20T23:29:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TomasFrankovich/esm2_t12_35M_UR50D-finetuned-SO2F
TomasFrankovich
2024-03-20T23:28:19Z
107
0
transformers
[ "transformers", "tensorboard", "safetensors", "esm", "token-classification", "generated_from_trainer", "base_model:facebook/esm2_t12_35M_UR50D", "base_model:finetune:facebook/esm2_t12_35M_UR50D", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-03-20T23:21:04Z
--- license: mit base_model: facebook/esm2_t12_35M_UR50D tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: esm2_t12_35M_UR50D-finetuned-SO2F results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # esm2_t12_35M_UR50D-finetuned-SO2F This model is a fine-tuned version of [facebook/esm2_t12_35M_UR50D](https://huggingface.co/facebook/esm2_t12_35M_UR50D) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6696 - Accuracy: 0.7332 - Precision: 0.1614 - Recall: 0.4329 - F1: 0.2351 - Auc: 0.5987 - Mcc: 0.1329 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Auc | Mcc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:------:|:------:| | No log | 1.0 | 108 | 0.6843 | 0.7039 | 0.1240 | 0.3507 | 0.1832 | 0.5458 | 0.0605 | | No log | 2.0 | 216 | 0.6733 | 0.7257 | 0.1561 | 0.4301 | 0.2290 | 0.5934 | 0.1245 | | No log | 3.0 | 324 | 0.6696 | 0.7332 | 0.1614 | 0.4329 | 0.2351 | 0.5987 | 0.1329 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
ingeol/cot_ep3_1234
ingeol
2024-03-20T23:16:03Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-03-20T23:15:38Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # ingeol/cot_ep3_1234 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('ingeol/cot_ep3_1234') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('ingeol/cot_ep3_1234') model = AutoModel.from_pretrained('ingeol/cot_ep3_1234') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ingeol/cot_ep3_1234) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 3899 with parameters: ``` {'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `beir.losses.bpr_loss.BPRLoss` Parameters of the fit()-Method: ``` { "epochs": 3, "evaluation_steps": 7000, "evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "correct_bias": false, "eps": 1e-06, "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
FluffyKaeloky/Midnight-Miqu-103B-v1.5-exl2-4.0bpw-rpcal
FluffyKaeloky
2024-03-20T23:14:39Z
16
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "exl2", "region:us" ]
text-generation
2024-03-20T17:19:35Z
--- base_model: [] library_name: transformers tags: - mergekit - merge --- <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/Tn9MBg6.png" alt="MidnightMiqu" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> # Midnight-Miqu-103B-v1.5-exl2-4.0bpw-rpcal This is a 4.0bpw EXL2 quant of [FluffyKaeloky/Midnight-Miqu-103B-v1.5](https://huggingface.co/FluffyKaeloky/Midnight-Miqu-103B-v1.5). The PIPPA file used for calibration is optimised for roleplay. The measurement file is included in the repository if you want to do your own quants. Details about the model and the merge info can be found at the fp16 model link above.
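As a sketch of how to fetch the quant (the local path is a placeholder; loading EXL2 weights themselves is handled by an exllamav2-capable backend such as text-generation-webui):

```python
from huggingface_hub import snapshot_download

# Download the 4.0bpw EXL2 weights, then point your exllamav2 backend
# at the resulting folder
snapshot_download(
    "FluffyKaeloky/Midnight-Miqu-103B-v1.5-exl2-4.0bpw-rpcal",
    local_dir="/path/to/models/Midnight-Miqu-103B-4.0bpw",  # placeholder path
)
```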
ndavidson/cisco-iNAM-1.1B
ndavidson
2024-03-20T23:13:55Z
7
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/tinyllama-bnb-4bit", "base_model:quantized:unsloth/tinyllama-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-20T23:04:16Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/tinyllama-bnb-4bit --- # Uploaded model - **Developed by:** ndavidson - **License:** apache-2.0 - **Finetuned from model :** unsloth/tinyllama-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Nekochu/Luminia-13B-v3
Nekochu
2024-03-20T22:44:32Z
47
5
peft
[ "peft", "safetensors", "gguf", "llama", "llama-factory", "lora", "generated_from_trainer", "llama2", "instruct", "finetune", "gpt4", "synthetic data", "stable diffusion", "alpaca", "llm", "text-generation", "conversational", "en", "dataset:Nekochu/discord-unstable-diffusion-SD-prompts", "dataset:glaiveai/glaive-function-calling-v2", "dataset:TIGER-Lab/MathInstruct", "dataset:Open-Orca/SlimOrca", "dataset:GAIR/lima", "dataset:sahil2801/CodeAlpaca-20k", "dataset:garage-bAInd/Open-Platypus", "base_model:meta-llama/Llama-2-13b-chat-hf", "base_model:adapter:meta-llama/Llama-2-13b-chat-hf", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2024-03-18T03:25:05Z
--- model_creator: Nekochu quantized_by: Nekochu model_name: Luminia 13B v3 pretty_name: Luminia model_type: llama2 prompt_template: >- Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {Instruction} {summary} ### input: {category} ### Response: {prompt} base_model: meta-llama/Llama-2-13b-chat-hf library_name: peft license: apache-2.0 datasets: - Nekochu/discord-unstable-diffusion-SD-prompts - glaiveai/glaive-function-calling-v2 - TIGER-Lab/MathInstruct - Open-Orca/SlimOrca - GAIR/lima - sahil2801/CodeAlpaca-20k - garage-bAInd/Open-Platypus language: - en pipeline_tag: text-generation task_categories: - question-answering - text2text-generation - conversational inference: True widget: - example_title: prompt assistant messages: - role: system content: Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. - role: user content: "### Instruction:\nCreate stable diffusion metadata based on the given english description. Luminia\n### Input:\nfavorites and popular SFW\n### Response:\n" output: text: Luminia, 1girl, solo, blonde hair, long hair, tags: - llama-factory - lora - generated_from_trainer - llama2 - llama - instruct - finetune - gpt4 - synthetic data - stable diffusion - alpaca - llm model-index: - name: Luminia-13B-v3 results: [] --- <div style="display: flex;"> <div style="flex: 1;"> <img src="https://i.imgur.com/uyjdkhk.jpeg" alt="DALL-E 3 prompt: a single seed growing slowly in laboratory in a desert sand, the single little plant try fight to reach light sun, while a little cute kitty feel the plant, cute 8k anime, digitral art, close up" style="width: 90%; min-width: 380px; border: 2px solid #555; border-radius: 5px;"> </div> <div style="flex: 1; text-align: left;"> <p style="font-family: 'Comic Sans MS', cursive, sans-serif; padding-left: 2px; padding-top: 10px;">Luminia v3 is good at reasoning to enhance a Stable Diffusion prompt from a short summary description; it may output NSFW content.</p> </div> </div> The LoRA is included, and quants are available: exllamav2 [2.4bpw-h6](https://huggingface.co/Nekochu/Luminia-13B-v3/tree/2.4bpw-h6), [4.25bpw-h6](https://huggingface.co/Nekochu/Luminia-13B-v3/tree/4.25bpw-h6), [8.0bpw-h8](https://huggingface.co/Nekochu/Luminia-13B-v3/tree/8.0bpw-h8) | GGUF [Q4_K_M](https://huggingface.co/Nekochu/Luminia-13B-v3/blob/main/Luminia-13B-v3-Q4_K_M.gguf), [IQ4_NL](https://huggingface.co/Nekochu/Luminia-13B-v3/blob/main/Luminia-13B-v3-IQ4_NL.gguf) | ## Prompt template: Alpaca <details> <summary>Output example tested in <i>text-generation-webui</i></summary> | Input | base llama-2-chat | QLoRa | |:---------:|:-------:|:---------:| | [question]:<br><br> Create stable diffusion metadata based on the given english description. Luminia \n### Input:\n favorites and popular SFW | Answer:<br><br> Luminia, a mystical world of wonder and magic πŸ§β€β™€οΈβœ¨ A place where technology and nature seamlessly blend together ... | Answer! <br><br> < lora:Luminari-10:0.8> Luminari, 1girl, solo, blonde hair, long hair, blue eyes, (black dress), looking at viewer, night sky, starry sky, constellation, smile, upper body, outdoors, forest, moon, tree, mountain, light particle ....
| Output prompt from QLoRa to [A1111/SD-WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui): <div style="display: flex; justify-content: space-between;"> <div style="flex: 1; text-align: center;"> <img src="https://i.imgur.com/rNLaobj.png" alt="parameters image metadata: <lora:Luminari-10:0.8> Luminari, 1girl, solo, blonde hair, long hair, blue eyes, (black dress), looking at viewer, night sky, starry sky, constellation, smile, upper body, outdoors, forest, moon, tree, mountain, light particle, shine, sparkle, dark theme, fantasy, magic, goddess, celestial, nature, peaceful, serene, tranquil, mystical, enchanting, otherworldly, mysterious, captivating, alluring, beautiful, elegant, graceful, majestic, divine, powerful, epic, grand, sweeping, breathtaking, mesmerizing, magical, fantastical, wondrous, marvelous, extraordinary, magnificent, glorious, radiant, luminous, illumination, brilliance, glow, radiance, luminescence, brightness, splendor, glory, triumph, victory, achievement, honor, celebration, recognition, praise, admiration, appreciation, love, affection, devotion, loyalty, dedication, commitment, passion, intensity, drive, determination, energy, enthusiasm, excitement, joy, happiness, fulfillment, pleasure, enjoyment, satisfaction, delight, wonder, amazement, awe, curiosity, interest, intrigue, question, exploration, discovery, adventure, journey, path, road, trail, course, pursuit, challenge, obstacle, adversity, hardship, struggle, perseverance, resilience, tenacity, courage, bravery, heroism, inspiration, motivation, spirit, heart, soul, essence, creativity, imagination, dreams, aspirations, goals, ambition, vision, purpose, meaning, significance, relevance, importance, impact, influence, change, growth, development, evolution, improvement, progress, learning, knowledge, wisdom, insight, understanding, empathy, compassion, kindness, generosity, forgiveness, gratitude, humility, patience, tolerance, acceptance, diversity, inclusivity, unity, equality, justice, fairness, honesty, integrity, accountability, responsibility, morality, ethics, principles, values, beliefs, faith, hope, optimism, Steps: 20, Sampler: Heun, CFG scale: 7, Seed: 479539365, Size: 512x512, Model hash: 84d76a0328, Model: epicrealism_naturalSinRC1VAE, Version: v1.7.0" style="width: 100%; min-width: 200px; display: block; margin: auto;"> </div> <div style="flex: 1; text-align: center;"> <img src="https://i.imgur.com/hU8Ut4p.png" alt="parameters image metadata: <lora:Luminari-10:0.8> Luminari, 1girl, solo, blonde hair, long hair, blue eyes, (black dress), looking at viewer, night sky, starry sky, constellation, smile, upper body, outdoors, forest, moon, tree, mountain, light particle, shine, sparkle, dark theme, fantasy, magic, goddess, celestial, nature, peaceful, serene, tranquil, mystical, enchanting, otherworldly, mysterious, captivating, alluring, beautiful, elegant, graceful, majestic, divine, powerful, epic, grand, sweeping, breathtaking, mesmerizing, magical, fantastical, wondrous, marvelous, extraordinary, magnificent, glorious, radiant, luminous, illumination, brilliance, glow, radiance, luminescence, brightness, splendor, glory, triumph, victory, achievement, honor, celebration, recognition, praise, admiration, appreciation, love, affection, devotion, loyalty, dedication, commitment, passion, intensity, drive, determination, energy, enthusiasm, excitement, joy, happiness, fulfillment, pleasure, enjoyment, satisfaction, delight, wonder, amazement, awe, curiosity, interest, intrigue, 
question, exploration, discovery, adventure, journey, path, road, trail, course, pursuit, challenge, obstacle, adversity, hardship, struggle, perseverance, resilience, tenacity, courage, bravery, heroism, inspiration, motivation, spirit, heart, soul, essence, creativity, imagination, dreams, aspirations, goals, ambition, vision, purpose, meaning, significance, relevance, importance, impact, influence, change, growth, development, evolution, improvement, progress, learning, knowledge, wisdom, insight, understanding, empathy, compassion, kindness, generosity, forgiveness, gratitude, humility, patience, tolerance, acceptance, diversity, inclusivity, unity, equality, justice, fairness, honesty, integrity, accountability, responsibility, morality, ethics, principles, values, beliefs, faith, hope, optimism Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 959582434, Size: 512x512, Model hash: 84d76a0328, Model: epicrealism_naturalSinRC1VAE, Version: v1.7.0" style="width: 100%; min-width: 200px; display: block; margin: auto;"> </div> </div> #### Full Prompt ``` Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. ### Instruction: Create stable diffusion metadata based on the given english description. Luminia ### Input: favorites and popular SFW ### Response: ``` "Luminia" can be any short description; more info on my SD dataset is [here](https://huggingface.co/datasets/Nekochu/discord-unstable-diffusion-SD-prompts#dataset-description). </details> ## Training Details <details> <summary>Click to see details</summary> ### Model Description - **Trained by:** [Nekochu](https://huggingface.co/Nekochu), **Model type:** Llama, **Finetuned from model [Llama-2-13b-chat](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)** - Continues from the base LoRA Luminia-13B-v2-QLora. Known issue: [issue] ### Trainer - hiyouga/LLaMA-Efficient-Tuning Hardware: QLoRA training on Windows, Python 3.10.8, CUDA 12.1, on 24GB VRAM. ### Training hyperparameters The following hyperparameters were used during training: - num_epochs: 1.0 - finetuning_type: lora - quantization_bit: 4 - stage: sft - learning_rate: 5e-05 - cutoff_len: 4096 - num_train_epochs: 3.0 - max_samples: 100000 - warmup_steps: 0 - train_batch_size: 1 - distributed_type: single-GPU - num_devices: 1 - rope_scaling: linear - lora_rank: 32 - lora_target: all - lora_dropout: 0.15 - bnb_4bit_compute_dtype: bfloat16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine #### training_loss: <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/qhuPG6F.jpg" alt="Nekochu" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> ### Framework versions - PEFT 0.9.0 - Transformers 4.38.1 - Pytorch 2.1.2+cu121 - Datasets 2.14.5 - Tokenizers 0.15.0 </details>
vitruv/vitruv_2
vitruv
2024-03-20T22:44:31Z
115
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "ko", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-20T22:34:36Z
--- license: apache-2.0 language: - ko --- --- Who we are: Virtruv. This model was trained with a particular focus on mathematics in Korean. Base Model: 'vitruv/vitruv1' Datasets: 1. traintogpb/aihub-koen-translation-integrated-tiny-100k 2. kyujinpy/KOR-gugugu-platypus-set 3. GAIR/MathPile: we sampled this dataset and translated it ourselves. ## What was added? Dataset 3: source: AIHUB Dataset 4: Korean culture (movie/drama) scripts; source: AIHUB Dataset 5: professional phone consultation transcripts; source: AIHUB Prompt:
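The prompt section above is left blank, so here is a hedged free-form generation sketch (the Korean math question is illustrative only, and the model may expect a specific prompt format not documented here):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vitruv/vitruv_2")
model = AutoModelForCausalLM.from_pretrained("vitruv/vitruv_2", device_map="auto")

# "Explain how to add two-digit numbers." -- illustrative Korean math prompt
inputs = tokenizer("두 자리 수의 덧셈 방법을 μ„€λͺ…ν•΄μ€˜.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```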
devmlops/ppo-Huggy
devmlops
2024-03-20T22:41:20Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2024-03-20T22:40:38Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐢 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: devmlops/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play πŸ‘€
MagiMas/retrocgi_sd15_lora
MagiMas
2024-03-20T22:38:54Z
3
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:mit", "region:us" ]
text-to-image
2024-03-20T22:31:45Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: Bauernhof, (rtrcgi:1.2) parameters: negative_prompt: >- text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, (blurry), dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature, cloth output: url: images/ComfyUI_00104_.png - text: ' (rtrcgi:1.3), woman' parameters: negative_prompt: >- text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, (blurry), dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature, cloth output: url: images/ComfyUI_00240_.png - text: rtrcgi, submarine parameters: negative_prompt: >- text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, deformed hands, watermark output: url: images/ComfyUI_00314_.png - text: >- (rtrcgi:1.25), 25yo student sitting at her desk, green sweater, studying in a book, side profile, it is dark outside, window behind her, wearing headphones, ponytail parameters: negative_prompt: text, error, ugly, deformed, horror output: url: images/rtrcgi_00016_.png - text: >- (rtrcgi:1.4), old school crt displays line up on wooden desks in rows with chairs, purple ceiling parameters: negative_prompt: text, error, ugly, deformed output: url: images/rtrcgi_00001_.png - text: >- (rtrcgi:1.5), schoolgirl standing in front of a haunted house, back towards camera, zoomed out view, full body visible parameters: negative_prompt: text, error, ugly, deformed, horror output: url: images/rtrcgi_00017_.png base_model: runwayml/stable-diffusion-v1-5 instance_prompt: rtrcgi license: mit --- # retro-cgi, Stable Diffusion 1.5 lora <Gallery /> ## Model description This is a LoRA that&#39;s trained on retro CGI images from the late 90s and early 2000s meant to reproduce the slightly surreal look of the renders of those times. The model uses &quot;rtrcgi&quot; as a trigger word at the beginning of a prompt. It was trained on the base models and generalizes well over different checkpoints and in combination with other LoRAs. However, you might need to fiddle around with the strength of the trigger word and the LoRA model weights themselves to get a good result. I&#39;ve noticed some problems when the prompt gets too complicated and the scene too complex. I might come back to it in the future to try with an improved training set but for now the quality is good enough to be usable for interesting generations. There is also a Stable Diffusion XL version of the LoRA available in the files. Use that for SDXL models, it is a bit less stable than the SD1.5 version and more fiddling with the parameters is required to get good generations. Please note: the LoRA was trained on 512x512 resolution images. From my experience it is better to generate at 512x512 or 768x768 due to this limitation. 
Other resolutions also work, but might produce less coherent results. ## Trigger words You should use `rtrcgi` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/MagiMas/retrocgi_sd15_lora/tree/main) them in the Files & versions tab.
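A minimal diffusers sketch, assuming the LoRA file in this repo loads via `load_lora_weights` (the prompt is taken from the widget examples above):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the SD1.5 base the LoRA was trained against
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Attach the retro-CGI LoRA from this repo
pipe.load_lora_weights("MagiMas/retrocgi_sd15_lora")

# Trigger word at the start of the prompt; generate at 512x512 as recommended
image = pipe("rtrcgi, submarine", height=512, width=512).images[0]
image.save("retro_cgi.png")
```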
llmware/slim-sa-ner-tool
llmware
2024-03-20T22:37:05Z
49
4
transformers
[ "transformers", "gguf", "stablelm", "license:cc-by-sa-4.0", "endpoints_compatible", "region:us" ]
null
2024-03-15T15:28:55Z
--- license: cc-by-sa-4.0 --- # SLIM-SA_NER-TOOL <!-- Provide a quick summary of what the model is/does. --> **slim-sa-ner-tool** is a 4_K_M quantized GGUF version of [**slim-sa-ner**](https://huggingface.co/llmware/slim-sa-ner), providing a small, fast inference implementation, optimized for multi-model concurrent deployment. slim-sa-ner combines two of the most popular traditional classifier functions (Sentiment Analysis and Named Entity Recognition) and reimagines them as function calls on a specialized decoder-based LLM, generating output consisting of a Python dictionary with keys corresponding to sentiment and NER identifiers such as people, organization, and place, e.g.: {'sentiment': ['positive'], 'people': ['..'], 'organization': ['..'], 'place': ['..']} This 3B-parameter 'combo' model is designed to illustrate the potential power of function calls on small, specialized models: a single model architecture combines capabilities that traditionally required two separate encoder-based architectures. The intent of SLIMs is to forge a middle ground between traditional encoder-based classifiers and open-ended API-based LLMs, providing an intuitive, flexible natural language response, without complex prompting, and with improved generalization and the ability to fine-tune to a specific domain use case. To pull the model via API: from huggingface_hub import snapshot_download snapshot_download("llmware/slim-sa-ner-tool", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False) Load in your favorite GGUF inference engine, or try with llmware as follows: from llmware.models import ModelCatalog # to load the model and make a basic inference model = ModelCatalog().load_model("slim-sa-ner-tool") response = model.function_call(text_sample) # this one line will download the model and run a series of tests ModelCatalog().tool_test_run("slim-sa-ner-tool", verbose=True) Note: please review [**config.json**](https://huggingface.co/llmware/slim-sa-ner-tool/blob/main/config.json) in the repository for prompt wrapping information, details on the model, and the full test set. ## Model Card Contact Darren Oberst & llmware team [Any questions? Join us on Discord](https://discord.gg/MhZn5Nc39h)
AF6ECHO/poca-SoccerTwos
AF6ECHO
2024-03-20T22:27:07Z
19
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2024-03-20T22:01:34Z
--- library_name: ml-agents tags: - SoccerTwos - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐢 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: AF6ECHO/poca-SoccerTwos 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play πŸ‘€
arcee-ai/Patent-Instruct-LLaMA-Pro
arcee-ai
2024-03-20T22:26:26Z
11
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "arcee-ai/Patent-Instruct-7b", "TencentARC/LLaMA-Pro-8B-Instruct", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-20T22:07:42Z
--- license: apache-2.0 tags: - merge - mergekit - arcee-ai/Patent-Instruct-7b - TencentARC/LLaMA-Pro-8B-Instruct --- # Patent-Instruct-LLaMA-Pro Patent-Instruct-LLaMA-Pro is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [arcee-ai/Patent-Instruct-7b](https://huggingface.co/arcee-ai/Patent-Instruct-7b) * [TencentARC/LLaMA-Pro-8B-Instruct](https://huggingface.co/TencentARC/LLaMA-Pro-8B-Instruct) ## 🧩 Configuration ```yaml merge_method: passthrough dtype: bfloat16 slices: - sources: - model: arcee-ai/Patent-Instruct-7b layer_range: - 0 - 4 - sources: - model: TencentARC/LLaMA-Pro-8B-Instruct layer_range: - 4 - 5 - sources: - model: arcee-ai/Patent-Instruct-7b layer_range: - 4 - 8 - sources: - model: TencentARC/LLaMA-Pro-8B-Instruct layer_range: - 9 - 10 - sources: - model: arcee-ai/Patent-Instruct-7b layer_range: - 8 - 12 - sources: - model: TencentARC/LLaMA-Pro-8B-Instruct layer_range: - 14 - 15 - sources: - model: arcee-ai/Patent-Instruct-7b layer_range: - 12 - 16 - sources: - model: TencentARC/LLaMA-Pro-8B-Instruct layer_range: - 19 - 20 - sources: - model: arcee-ai/Patent-Instruct-7b layer_range: - 16 - 20 - sources: - model: TencentARC/LLaMA-Pro-8B-Instruct layer_range: - 24 - 25 - sources: - model: arcee-ai/Patent-Instruct-7b layer_range: - 20 - 24 - sources: - model: TencentARC/LLaMA-Pro-8B-Instruct layer_range: - 29 - 30 - sources: - model: arcee-ai/Patent-Instruct-7b layer_range: - 24 - 28 - sources: - model: TencentARC/LLaMA-Pro-8B-Instruct layer_range: - 34 - 35 - sources: - model: arcee-ai/Patent-Instruct-7b layer_range: - 28 - 32 - sources: - model: TencentARC/LLaMA-Pro-8B-Instruct layer_range: - 39 - 40 ```
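The passthrough method stacks layer slices from both parents into a deeper model rather than averaging weights. Assuming this repo hosts the merged weights, they should load like any llama-architecture causal LM; a sketch, not from the original card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "arcee-ai/Patent-Instruct-LLaMA-Pro"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# Illustrative patent-style continuation prompt
inputs = tokenizer("Claim 1: A method for", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```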
ingeol/q2e_ep3_1234
ingeol
2024-03-20T22:22:31Z
4
0
sentence-transformers
[ "sentence-transformers", "safetensors", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-03-20T22:21:52Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # ingeol/q2e_ep3_1234 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('ingeol/q2e_ep3_1234') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('ingeol/q2e_ep3_1234') model = AutoModel.from_pretrained('ingeol/q2e_ep3_1234') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ingeol/q2e_ep3_1234) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 3899 with parameters: ``` {'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `beir.losses.bpr_loss.BPRLoss` Parameters of the fit()-Method: ``` { "epochs": 3, "evaluation_steps": 7000, "evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "correct_bias": false, "eps": 1e-06, "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
rk68/phi-1_5-finetuned-aqua-rat-2k
rk68
2024-03-20T22:18:34Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:microsoft/phi-1_5", "base_model:adapter:microsoft/phi-1_5", "license:mit", "region:us" ]
null
2024-03-20T22:13:49Z
--- license: mit library_name: peft tags: - generated_from_trainer base_model: microsoft/phi-1_5 model-index: - name: phi-1_5-finetuned-aqua-rat-2k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi-1_5-finetuned-aqua-rat-2k This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
Kaipbkk/mown
Kaipbkk
2024-03-20T22:17:01Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2024-03-20T22:17:01Z
--- license: bigscience-bloom-rail-1.0 ---
qwp4w3hyb/Cerebrum-1.0-8x7b-GGUF
qwp4w3hyb
2024-03-20T22:14:58Z
15
1
transformers
[ "transformers", "gguf", "mixtral", "text-generation", "conversational", "finetune", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-03-20T12:13:05Z
---
license: apache-2.0
tags:
- mixtral
- conversational
- finetune
---

# Model Card for Cerebrum-1.0-8x7b-GGUF

Quantized from https://huggingface.co/AetherResearch/Cerebrum-1.0-8x7b using llama.cpp commit 46acb3676718b983157058aecf729a2064fc7d34.

The quantized files are still uploading over a slow (~40 Mbit/s) German broadband connection; stay tuned.
Ongoing9384/andrew
Ongoing9384
2024-03-20T22:13:25Z
0
0
null
[ "text-to-image", "region:us" ]
text-to-image
2024-03-20T22:11:42Z
--- pipeline_tag: text-to-image ---
Solshine/LORA-Adapters-Mistral7B-NaturalFarmerV1
Solshine
2024-03-20T22:09:26Z
0
1
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:other", "endpoints_compatible", "region:us" ]
null
2024-03-20T22:07:42Z
--- language: - en license: other tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit --- # Uploaded model - **Developed by:** Solshine - **License:** Hippocratic 3.0 CL--Eco-Extr [![Hippocratic License HL3-CL-ECO-EXTR](https://img.shields.io/static/v1?label=Hippocratic%20License&message=HL3-CL-ECO-EXTR&labelColor=5e2751&color=bc8c3d)](https://firstdonoharm.dev/version/3/0/cl-eco-extr.html) - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mychen76/tinyllama_alpaca_GGUF
mychen76
2024-03-20T22:08:00Z
3
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/tinyllama-bnb-4bit", "base_model:quantized:unsloth/tinyllama-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-20T22:00:39Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf base_model: unsloth/tinyllama-bnb-4bit --- # Uploaded model - **Developed by:** mychen76 - **License:** apache-2.0 - **Finetuned from model :** unsloth/tinyllama-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
rk68/phi-1_5-finetuned-aqua-rat-teacher-2k
rk68
2024-03-20T22:07:53Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:microsoft/phi-1_5", "base_model:adapter:microsoft/phi-1_5", "license:mit", "region:us" ]
null
2024-03-17T16:46:47Z
--- license: mit library_name: peft tags: - generated_from_trainer base_model: microsoft/phi-1_5 model-index: - name: phi-1_5-finetuned-aqua-rat-2k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi-1_5-finetuned-aqua-rat-2k This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
TahaCakir/mistral-finetuned-7b-sql
TahaCakir
2024-03-20T21:55:08Z
76
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-03-20T21:52:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mychen76/tinyllama_alpaca_lora
mychen76
2024-03-20T21:49:50Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/tinyllama-bnb-4bit", "base_model:finetune:unsloth/tinyllama-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-20T21:49:10Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/tinyllama-bnb-4bit --- # Uploaded model - **Developed by:** mychen76 - **License:** apache-2.0 - **Finetuned from model :** unsloth/tinyllama-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
TheZennou/command-r-v01-exl2-8bit
TheZennou
2024-03-20T21:46:49Z
0
0
null
[ "license:cc-by-nc-4.0", "region:us" ]
null
2024-03-20T21:46:17Z
---
license: cc-by-nc-4.0
---

Model Summary

C4AI Command-R is a research release of a highly performant 35-billion-parameter generative model. Command-R is a large language model with open weights, optimized for a variety of use cases including reasoning, summarization, and question answering. It supports multilingual generation, evaluated across 10 languages, and offers highly performant RAG capabilities.

Developed by: Cohere and Cohere For AI

Quantized to 8 bpw using ExLlamaV2.
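A loading sketch based on the ExLlamaV2 example scripts; the local path is a placeholder and the sampler settings are illustrative, not recommendations from the quantizer:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Path to the downloaded 8 bpw quant; placeholder, adjust to your setup.
config = ExLlamaV2Config()
config.model_dir = "/path/to/command-r-v01-exl2-8bit"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()

print(generator.generate_simple("The capital of France is", settings, 32))
```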
aymanD/ppo-LunarLander-v2
aymanD
2024-03-20T21:44:06Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-12-17T01:05:50Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 253.30 +/- 23.60
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch; the checkpoint filename is an assumption based on the usual naming convention, so check the repository's file list if it differs:

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename assumed from the usual course convention.
checkpoint = load_from_hub(repo_id="aymanD/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
sarak7/H15_321_v1
sarak7
2024-03-20T21:33:46Z
182
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-20T21:31:34Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sarak7/H4_321_4_v1
sarak7
2024-03-20T21:32:46Z
182
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-20T21:30:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
neopolita/starling-lm-7b-beta-gguf
neopolita
2024-03-20T21:30:21Z
16
1
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-20T20:29:44Z
---
{}
---

# GGUF quants for [**Nexusflow/Starling-LM-7B-beta**](https://huggingface.co/Nexusflow/Starling-LM-7B-beta) using [llama.cpp](https://github.com/ggerganov/llama.cpp)

**Terms of Use**: Please check the [**original model**](https://huggingface.co/Nexusflow/Starling-LM-7B-beta)

<picture> <img alt="cthulhu" src="https://huggingface.co/neopolita/common/resolve/main/profile.png"> </picture>

## Quants

* `q2_k`: Uses Q4_K for the attention.wv and feed_forward.w2 tensors, Q2_K for the other tensors.
* `q3_k_s`: Uses Q3_K for all tensors
* `q3_k_m`: Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* `q3_k_l`: Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* `q4_0`: Original quant method, 4-bit.
* `q4_1`: Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than q5 models.
* `q4_k_s`: Uses Q4_K for all tensors
* `q4_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K
* `q5_0`: Higher accuracy, higher resource usage and slower inference.
* `q5_1`: Even higher accuracy, resource usage and slower inference.
* `q5_k_s`: Uses Q5_K for all tensors
* `q5_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K
* `q6_k`: Uses Q8_K for all tensors
* `q8_0`: Almost indistinguishable from float16. High resource use and slow. Not recommended for most users.
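A minimal sketch of running one of these quants with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the `.gguf` filename below is hypothetical, so check the repository's file list for the exact name:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Filename is a guess at the q4_k_m quant; use the actual name from the repo files.
model_path = hf_hub_download(
    repo_id="neopolita/starling-lm-7b-beta-gguf",
    filename="starling-lm-7b-beta_q4_k_m.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Q: What is a GGUF quant? A:", max_tokens=64)
print(out["choices"][0]["text"])
```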
the-hir0/google-t5-small-spellchecker
the-hir0
2024-03-20T21:04:41Z
107
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-03-14T12:32:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
saadi-code/saadi_sentimentalss
saadi-code
2024-03-20T20:54:51Z
184
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-03-20T20:54:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Prathamesh25/Llama-2-7b-finetune-university
Prathamesh25
2024-03-20T20:51:27Z
0
0
peft
[ "peft", "region:us" ]
null
2024-03-20T20:43:48Z
---
library_name: peft
---

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16

### Framework versions

- PEFT 0.4.0
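A sketch of recreating this quantization config when loading the adapter; the base model id `meta-llama/Llama-2-7b-hf` is an assumption inferred from the repository name, not stated in this card:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Mirrors the bitsandbytes config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# Base model id is assumed; the adapter repo does not state it.
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "Prathamesh25/Llama-2-7b-finetune-university")
```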
MVRL/GeoSynth-OSM
MVRL
2024-03-20T20:51:18Z
26
0
diffusers
[ "diffusers", "safetensors", "controlnet", "stable-diffusion", "satellite-imagery", "OSM", "image-to-image", "arxiv:2302.05543", "base_model:stabilityai/stable-diffusion-2-1-base", "base_model:adapter:stabilityai/stable-diffusion-2-1-base", "license:apache-2.0", "region:us" ]
image-to-image
2024-03-17T17:46:52Z
---
library_name: diffusers
base_model: stabilityai/stable-diffusion-2-1-base
license: apache-2.0
widget:
- src: osm_tile_18_42048_101323.jpeg
  prompt: Satellite image features a city neighborhood
tags:
- controlnet
- stable-diffusion
- satellite-imagery
- OSM
pipeline_tag: image-to-image
---

# Model Card for GeoSynth-OSM

This is a ControlNet-based model that synthesizes satellite images from OpenStreetMap images. The base stable diffusion model used is [stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base) (v2-1_512-ema-pruned.ckpt).

* Use it with 🧨 [diffusers](#examples)
* Use it with the [controlnet](https://github.com/lllyasviel/ControlNet/tree/main?tab=readme-ov-file) repository

### Model Sources

- **Repository:** [stable-diffusion](https://huggingface.co/stabilityai/stable-diffusion-2-1-base)
- **Paper:** [Adding Conditional Control to Text-to-Image Diffusion Models](https://arxiv.org/abs/2302.05543)

## Examples

```python
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
import torch
from PIL import Image

img = Image.open("osm_tile_18_42048_101323.jpeg")

controlnet = ControlNetModel.from_pretrained("MVRL/GeoSynth-OSM")
pipe = StableDiffusionControlNetPipeline.from_pretrained("stabilityai/stable-diffusion-2-1-base", controlnet=controlnet)
pipe = pipe.to("cuda:0")

# generate image
generator = torch.manual_seed(10345340)
image = pipe(
    "Satellite image features a city neighborhood",
    generator=generator,
    image=img,
).images[0]

image.save("generated_city.jpg")
```
ranchomacho/finally-mistral
ranchomacho
2024-03-20T20:39:38Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "region:us" ]
null
2024-03-19T21:31:44Z
--- library_name: peft base_model: mistralai/Mistral-7B-v0.1 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
sarak7/H4_320_769_v4
sarak7
2024-03-20T20:36:45Z
182
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-20T20:35:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sarak7/H15_320_769_v3
sarak7
2024-03-20T20:35:06Z
182
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-20T20:32:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
daedalus16/bert-M1-textclassification
daedalus16
2024-03-20T20:28:04Z
107
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-27T22:48:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sagravela/poca-SoccerTwos
sagravela
2024-03-20T20:26:03Z
11
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2024-03-20T20:21:27Z
--- library_name: ml-agents tags: - SoccerTwos - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐢 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: sagravela/poca-SoccerTwos 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play πŸ‘€
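As a quick supplement (not part of the original card), a minimal sketch for pulling this checkpoint locally; it assumes only the standard huggingface_hub API and the repo id shown above:

```python
# Sketch: download the trained SoccerTwos checkpoint for local use.
# Assumes huggingface_hub is installed; the repo id is taken from the card above.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="sagravela/poca-SoccerTwos")
print(f"checkpoint files downloaded to: {local_dir}")
```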
jamesHD2001/DenseMamba-350M
jamesHD2001
2024-03-20T20:23:25Z
47
0
transformers
[ "transformers", "pytorch", "DenseGauRetNet", "custom_code", "en", "dataset:EleutherAI/pile", "arxiv:2403.00818", "endpoints_compatible", "region:us" ]
null
2024-03-20T18:39:43Z
--- datasets: - EleutherAI/pile language: - en --- # DenseRetNet-350M Unofficial pretraining checkpoints for DenseRetNet-350M from the DenseMamba paper: https://arxiv.org/abs/2403.00818. The training data is 15B tokens randomly sampled from The Pile dataset. - recurrent generation example: ```python import torch import transformers model_name_or_path = '/path to model' MAX_NEW_TOKENS = 256 generation_config = transformers.GenerationConfig( do_sample=False, max_new_tokens=MAX_NEW_TOKENS, ) tokenizer = transformers.AutoTokenizer.from_pretrained(model_name_or_path, use_fast=False, trust_remote_code=True) config = transformers.AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True) model = transformers.AutoModelForCausalLM.from_pretrained( model_name_or_path, torch_dtype=torch.float16, trust_remote_code=True) model.cuda() model.eval() input_sents = 'I have a dream' inputs = tokenizer(input_sents, return_tensors="pt", truncation=True, max_length=2048) output = model.generate(input_ids=inputs["input_ids"].cuda(), generation_config=generation_config, return_dict_in_generate=True, output_scores=True ) # with return_dict_in_generate=True, generate() returns a dict-like object; decode the first sequence output_text = tokenizer.decode(output.sequences[0].tolist(), skip_special_tokens=True) print(output_text) ```
llmware/slim-tags-3b
llmware
2024-03-20T20:14:29Z
251
4
transformers
[ "transformers", "pytorch", "stablelm_epoch", "text-generation", "custom_code", "license:cc-by-sa-4.0", "autotrain_compatible", "region:us" ]
text-generation
2024-03-16T08:26:50Z
--- license: cc-by-sa-4.0 inference: false --- # SLIM-TAGS-3B <!-- Provide a quick summary of what the model is/does. --> **slim-tags-3b** is a small, specialized function-calling model fine-tuned to extract and generate meaningful tags from a chunk of text. Tags generally correspond to named entities, but will also include key objects, entities and phrases that contribute to the semantic meaning of the text. The model is invoked as a specialized 'tags' classifier function that outputs a python dictionary in the form of: &nbsp;&nbsp;&nbsp;&nbsp;`{'tags': ['NASDAQ', 'S&P', 'Dow', 'Verizon', 'Netflix, ... ']}` with the value items in the list generally being extracted from the source text. The intended use of the model is to auto-generate tags for text that can be used to enhance search retrieval, categorization, or to extract named entities that can be used programmatically in follow-up queries or prompts. It can also be used for fact-checking as a secondary validation on a longer (separate) LLM output. This model is fine-tuned on top of [**llmware/bling-stable-lm-3b-4e1t-v0**](https://huggingface.co/llmware/bling-stable-lm-3b-4e1t-v0), which in turn, is a fine-tune of stabilityai/stablelm-3b-4e1t. Each slim model has a 'quantized tool' version, e.g., [**'slim-tags-3b-tool'**](https://huggingface.co/llmware/slim-tags-3b-tool). ## Prompt format: `function = "classify"` `params = "tags"` `prompt = "<human> " + {text} + "\n" + ` &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;&nbsp; &nbsp; &nbsp; &nbsp;`"<{function}> " + {params} + "</{function}>" + "\n<bot>:"` <details> <summary>Transformers Script </summary> import ast from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("llmware/slim-tags-3b") tokenizer = AutoTokenizer.from_pretrained("llmware/slim-tags-3b") function = "classify" params = "tags" text = "Citibank announced a reduction in its targets for economic growth in France and the UK last week in light of ongoing concerns about inflation and unemployment, especially in large employers such as Airbus." prompt = "<human>: " + text + "\n" + f"<{function}> {params} </{function}>\n<bot>:" inputs = tokenizer(prompt, return_tensors="pt") start_of_input = len(inputs.input_ids[0]) outputs = model.generate( inputs.input_ids.to('cpu'), eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.eos_token_id, do_sample=True, temperature=0.3, max_new_tokens=100 ) output_only = tokenizer.decode(outputs[0][start_of_input:], skip_special_tokens=True) print("output only: ", output_only) # here's the fun part try: output_dict = ast.literal_eval(output_only) print("success - converted to python dictionary automatically") except: print("fail - could not convert to python dictionary automatically - ", output_only) </details> <details> <summary>Using as Function Call in LLMWare</summary> from llmware.models import ModelCatalog slim_model = ModelCatalog().load_model("llmware/slim-tags-3b") response = slim_model.function_call(text,params=["tags"], function="classify") print("llmware - llm_response: ", response) </details> ## Model Card Contact Darren Oberst & llmware team [Join us on Discord](https://discord.gg/MhZn5Nc39h)
ndavidson/cisco-inam-tiny
ndavidson
2024-03-20T20:03:05Z
9
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/tinyllama-bnb-4bit", "base_model:quantized:unsloth/tinyllama-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-20T20:02:23Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf base_model: unsloth/tinyllama-bnb-4bit --- # Uploaded model - **Developed by:** ndavidson - **License:** apache-2.0 - **Finetuned from model:** unsloth/tinyllama-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
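Since the repository ships GGUF weights, a minimal local-inference sketch may help; this is an illustration assuming the llama-cpp-python bindings, and the .gguf file name below is hypothetical (check the repo's file list for the actual name):

```python
# Hypothetical GGUF inference sketch (assumes llama-cpp-python is installed;
# the model file name is illustrative; use the actual .gguf from this repo).
from llama_cpp import Llama

llm = Llama(model_path="tinyllama-cisco-inam.gguf", n_ctx=2048)
out = llm("Q: What does a VLAN do?\nA:", max_tokens=64, stop=["\n"])
print(out["choices"][0]["text"])
```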
sagravela/LunarLander-PPO
sagravela
2024-03-20T20:02:28Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2024-03-20T19:58:58Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -109.22 +/- 78.56 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo', 'seed': 1, 'torch_deterministic': True, 'cuda': True, 'track': False, 'wandb_project_name': 'cleanRL', 'wandb_entity': None, 'capture_video': False, 'env_id': 'LunarLander-v2', 'total_timesteps': 50000, 'learning_rate': 0.00025, 'num_envs': 4, 'num_steps': 128, 'anneal_lr': True, 'gae': True, 'gamma': 0.99, 'gae_lambda': 0.95, 'num_minibatches': 4, 'update_epochs': 4, 'norm_adv': True, 'clip_coef': 0.2, 'clip_vloss': True, 'ent_coef': 0.01, 'vf_coef': 0.5, 'max_grad_norm': 0.5, 'target_kl': None, 'repo_id': 'sagravela/LunarLander-PPO', 'batch_size': 512, 'minibatch_size': 128} ```
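For orientation (not from the original card), a minimal sketch that spins up the same environment with a random policy; it assumes gymnasium with the Box2D extra installed and that the LunarLander-v2 id is still registered in your gymnasium version:

```python
# Sanity-check sketch for the environment (assumes gymnasium[box2d] is installed).
# The random action below stands in for the trained PPO policy.
import gymnasium as gym

env = gym.make("LunarLander-v2")
obs, info = env.reset(seed=1)
done, total_reward = False, 0.0
while not done:
    action = env.action_space.sample()  # replace with the trained policy
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"episode return: {total_reward:.2f}")
env.close()
```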
gotchachurchkhela/SN6-16
gotchachurchkhela
2024-03-20T20:01:51Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-20T19:56:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
zrvicc/ppo-Pyramids
zrvicc
2024-03-20T19:58:48Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2024-03-20T19:58:45Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐢 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: zrvicc/ppo-Pyramids 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play πŸ‘€
Xinyue123/llama2-7b-chat-openassistant-guanaco-fine-tune
Xinyue123
2024-03-20T19:56:48Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:adapter:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
2024-03-16T00:00:08Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.9.0
JapGuy/XindlX_Acoustic
JapGuy
2024-03-20T19:48:15Z
0
0
null
[ "music", "rvc", "xindlx", "xindl", "ondΕ™ej", "lΓ‘dek", "model", "audio-to-audio", "cs", "license:openrail", "region:us" ]
audio-to-audio
2024-03-20T19:40:19Z
--- license: openrail language: - cs pipeline_tag: audio-to-audio tags: - music - rvc - xindlx - xindl - ondΕ™ej - lΓ‘dek - model --- ![image.jpg](https://scenanaplovarne.cz/wp-content/uploads/2023/03/Xindl-X.jpg) # Xindl X [CZ] (Acoustic/Unpluggiat Mix) # 645 Epochs - RVC V2 - rmvpe Trained on 1 hour, 49 minutes and 14 seconds of isolated acapellas extracted with UVR (Voc FT + Reverb HQ); Audacity (plus a noise gate) was used to remove sections with doubled vocals and vocals from other singers.
ethanoutangoun/distilbert-base-uncased-finetuned-custom-dataset
ethanoutangoun
2024-03-20T19:47:22Z
114
0
transformers
[ "transformers", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2024-03-20T11:35:51Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-custom-dataset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-custom-dataset This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.6901 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 12 | 4.2170 | | No log | 2.0 | 24 | 3.1052 | | No log | 3.0 | 36 | 2.6901 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
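A minimal usage sketch (not included in the original card), assuming the checkpoint loads as a standard DistilBERT question-answering model, as its tags indicate; the question and context below are illustrative:

```python
# Usage sketch for the fine-tuned QA checkpoint (example inputs are illustrative).
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="ethanoutangoun/distilbert-base-uncased-finetuned-custom-dataset",
)
result = qa(
    question="How many epochs did training run for?",
    context="Training ran for three epochs with a learning rate of 2e-05.",
)
print(result["answer"], round(result["score"], 3))
```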
Saffy/Isaac-model
Saffy
2024-03-20T19:45:09Z
125
0
transformers
[ "transformers", "safetensors", "bert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-03-20T19:36:00Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Dantor/syn_person_LoRA
Dantor
2024-03-20T19:44:25Z
0
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-03-20T19:33:56Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - text-to-image - diffusers-training - diffusers - dora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of syn person widget: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - Dantor/syn_person_LoRA <Gallery /> ## Model description These are Dantor/syn_person_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of syn person to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](Dantor/syn_person_LoRA/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` A hedged loading sketch is shown below. #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
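Filling the card's how-to-use TODO with a hedged sketch: this assumes the standard diffusers LoRA-loading API and the base model named above; the prompt is illustrative and builds on the stated trigger words:

```python
# Sketch: load these LoRA weights on the SDXL base model (assumes the
# standard diffusers load_lora_weights API; prompt text is illustrative).
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Dantor/syn_person_LoRA")

image = pipe("a photo of syn person, studio lighting").images[0]
image.save("syn_person.png")
```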
recoilme/insomnia_v1
recoilme
2024-03-20T19:42:58Z
142
3
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "prompts", "en", "dataset:recoilme/SyntheticPrompts", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-25T10:40:51Z
--- license: mit datasets: - recoilme/SyntheticPrompts language: - en tags: - gpt2 - prompts --- v2 is out! https://huggingface.co/recoilme/insomnia_v2 Project by https://aiartlab.org/ </br> A GPT2 model to generate prompts for SDXL or similar models.</br> Trained from the GPT2 small model. Attach a Style to affect the render further.</br> Trained on synthetic prompts generated with Mistral-7B. </br>Dataset: https://huggingface.co/datasets/recoilme/SyntheticPrompts ``` from transformers import pipeline, GPT2Tokenizer, GPT2LMHeadModel checkpoint_path = "recoilme/insomnia_v2" model = GPT2LMHeadModel.from_pretrained(checkpoint_path) tokenizer = GPT2Tokenizer.from_pretrained('gpt2') text_generator = pipeline('text-generation', model=model, tokenizer=tokenizer) texts = [ "Frog in a Harry Potter costume", "Cat, art by ", "a photo of woman underwater", "thunderstorms on the alien planet, very shocking", "Standing on the land of a new planet, the Female astronaut dances", "The face of the cat woman, a face beautiful, young. The head is adorned with the Egyptian crown of Bastet.", ] for text in texts: print(f"Input: {text}:") out = text_generator(text, max_length=150, num_return_sequences=2, temperature=1.0) print(f"Output 1: {out[0]['generated_text']}\n\n") print(f"Output 2: {out[1]['generated_text']}") print("\n") ``` ``` Input: Frog in a Harry Potter costume: Output 1: Frog in a Harry Potter costume, detailed with a touch of magical realism, highlight bulging eyes, slick skin, webbed feet, add atmospheric detail misty breath, dawns first light at lilycovered pond, end with a nod to Gabriel Garca Mrquezs wizarding world. Output 2: Frog in a Harry Potter costume, detailed and exact, persona reminiscent of a dragon or wizard duel, setting for a graveyard, atmosphere charged with suspense and anticipation, mystical creatures looming, cinematic style emphasizing amphibious grace.```
recoilme/insomnia_v2
recoilme
2024-03-20T19:41:23Z
145
4
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "prompts", "en", "dataset:recoilme/SyntheticPrompts", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-20T19:17:58Z
--- license: mit datasets: - recoilme/SyntheticPrompts language: - en library_name: transformers tags: - gpt2 - prompts --- A GPT2 model to generate prompts for SDXL or similar models.</br> Trained from the GPT2 small model. Attach a Style to affect the render further.</br>Trained on synthetic prompts generated with Mistral-7B. </br>Dataset: https://huggingface.co/datasets/recoilme/SyntheticPrompts</br>Project by AiArtLab, discord: https://discord.com/invite/gsvhQEfKQ5
alquimista888/finetuned
alquimista888
2024-03-20T19:35:43Z
3
0
peft
[ "peft", "region:us" ]
null
2024-03-20T19:35:14Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
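The quantization settings listed above map directly onto a transformers BitsAndBytesConfig; a minimal reconstruction sketch (an illustration, not from the original card):

```python
# Sketch reconstructing the 4-bit quantization config listed above
# (assumes transformers' BitsAndBytesConfig exposes these fields).
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```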
gshubham55/mistral-qr-e1000steps-mar19-mergedbf16
gshubham55
2024-03-20T19:35:16Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-20T07:33:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
himanibht/my_first_llm_model
himanibht
2024-03-20T19:33:34Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-03-20T19:29:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
EdBerg/epoch_one_science_opt-6.7b-lora
EdBerg
2024-03-20T19:31:44Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-03-20T19:31:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]