Dataset Viewer
Auto-converted to Parquet

Columns (type; observed range):
- modelId: string (length 5 to 139)
- author: string (length 2 to 42)
- last_modified: timestamp[us, tz=UTC] (2020-02-15 11:33:14 to 2025-06-26 00:41:36)
- downloads: int64 (0 to 223M)
- likes: int64 (0 to 11.7k)
- library_name: string (496 distinct values)
- tags: sequence (length 1 to 4.05k)
- pipeline_tag: string (54 distinct values)
- createdAt: timestamp[us, tz=UTC] (2022-03-02 23:29:04 to 2025-06-26 00:41:32)
- card: string (length 11 to 1.01M)
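Since the dataset is auto-converted to Parquet, a shard can be queried directly with pandas. This is a minimal sketch: the two inline rows are toy data copied from the listing below just to make the snippet self-contained, and the Parquet file name in the comment is a placeholder, not the dataset's actual export path.

```python
import pandas as pd

# Toy frame with a subset of the columns described above.
# In practice, load the real Parquet export instead, e.g.:
#   df = pd.read_parquet("train.parquet")  # placeholder file name
df = pd.DataFrame(
    {
        "modelId": ["ArliAI/QwQ-32B-ArliAI-RpR-v4", "qualcomm/PPE-Detection"],
        "author": ["ArliAI", "qualcomm"],
        "downloads": [2805, 144],
        "likes": [26, 0],
        "library_name": ["transformers", "pytorch"],
        "pipeline_tag": ["text-generation", "object-detection"],
    }
)

# Typical query: rank models by download count, most popular first.
ranked = df.sort_values("downloads", ascending=False)
print(ranked[["modelId", "downloads"]].to_string(index=False))
```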
aranoo/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-moist_prowling_hawk
  author: aranoo | library: transformers | pipeline: text-generation | downloads: 0 | likes: 0
  created: 2025-06-26T00:16:36 | last modified: 2025-06-26T00:17:32
  tags: [ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am moist prowling hawk", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
  card: temporary redirect (README content not captured)

haedahae/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-unseen_giant_raccoon
  author: haedahae | library: transformers | pipeline: null | downloads: 0 | likes: 0
  created: 2025-05-29T03:45:00 | last modified: 2025-06-26T00:11:18
  tags: [ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am unseen giant raccoon", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
  card: temporary redirect (README content not captured)

ArliAI/DS-R1-Distill-70B-ArliAI-RpR-v4-Large
  author: ArliAI | library: transformers | pipeline: text-generation | downloads: 164 | likes: 7
  created: 2025-06-06T15:39:06 | last modified: 2025-06-25T23:54:38
  tags: [ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-70B", "base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Llama-70B", "license:llama3.3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
  card: temporary redirect (README content not captured)

haedahae/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-hoarse_hairy_lion
  author: haedahae | library: transformers | pipeline: null | downloads: 0 | likes: 0
  created: 2025-06-22T10:12:43 | last modified: 2025-06-25T23:14:10
  tags: [ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am hoarse hairy lion", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
  card: temporary redirect (README content not captured)

izzymiller95/caret-1-dpo-relabeled
  author: izzymiller95 | library: transformers | pipeline: text-generation | downloads: 0 | likes: 0
  created: 2025-06-25T12:01:34 | last modified: 2025-06-25T22:41:32
  tags: [ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:izzymiller95/caret-beta-1", "base_model:finetune:izzymiller95/caret-beta-1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
  card: temporary redirect (README content not captured)

pweidel/pii-bert-redactor
  author: pweidel | library: transformers | pipeline: token-classification | downloads: 0 | likes: 0
  created: 2025-06-25T18:07:44 | last modified: 2025-06-25T22:36:35
  tags: [ "transformers", "safetensors", "bert", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
  card: temporary redirect (README content not captured)

ramyakeerthyt/outputs30km
  author: ramyakeerthyt | library: peft | pipeline: null | downloads: 0 | likes: 0
  created: 2025-06-24T17:07:41 | last modified: 2025-06-25T22:22:20
  tags: [ "peft", "safetensors", "generated_from_trainer", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:adapter:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "region:us" ]
  card: temporary redirect (README content not captured)

ArliAI/QwQ-32B-ArliAI-RpR-v4
  author: ArliAI | library: transformers | pipeline: text-generation | downloads: 2,805 | likes: 26
  created: 2025-05-22T05:19:22 | last modified: 2025-06-25T21:34:52
  tags: [ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "en", "base_model:Qwen/QwQ-32B", "base_model:finetune:Qwen/QwQ-32B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
  card: temporary redirect (README content not captured)
tomaarsen/splade-cocondenser-msmarco-kldiv-minilm-temp-4-4-threshold
  author: tomaarsen | library: sentence-transformers | pipeline: feature-extraction | downloads: 0 | likes: 0
  created: 2025-06-25T21:20:50 | last modified: 2025-06-25T21:21:00
  tags: [ "sentence-transformers", "safetensors", "bert", "sparse-encoder", "sparse", "splade", "generated_from_trainer", "dataset_size:99000", "loss:SpladeLoss", "loss:SparseDistillKLDivLoss", "loss:FlopsLoss", "feature-extraction", "en", "arxiv:1908.10084", "arxiv:2205.04733", "arxiv:2010.11386", "arxiv:2004.05665", "base_model:Luyu/co-condenser-marco", "base_model:finetune:Luyu/co-condenser-marco", "license:apache-2.0", "model-index", "co2_eq_emissions", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
  card: temporary redirect (README content not captured)

mradermacher/ShotVL-7B-GGUF
  author: mradermacher | library: transformers | pipeline: null | downloads: 0 | likes: 0
  created: 2025-06-25T16:56:33 | last modified: 2025-06-25T20:59:29
  tags: [ "transformers", "gguf", "en", "base_model:Vchitect/ShotVL-7B", "base_model:quantized:Vchitect/ShotVL-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
  card: temporary redirect (README content not captured)

hasancanonder/Llama-3.2-1B-Turkish-Instruct-Q4_K_M-GGUF
  author: hasancanonder | library: transformers | pipeline: null | downloads: 0 | likes: 0
  created: 2025-06-25T20:51:32 | last modified: 2025-06-25T20:51:38
  tags: [ "transformers", "gguf", "text-generation-inference", "unsloth", "llama", "llama-cpp", "gguf-my-repo", "en", "base_model:hasancanonder/Llama-3.2-1B-Turkish-Instruct", "base_model:quantized:hasancanonder/Llama-3.2-1B-Turkish-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
  card: temporary redirect (README content not captured)

ezzzeee/my_smolvla
  author: ezzzeee | library: lerobot | pipeline: robotics | downloads: 38 | likes: 2
  created: 2025-06-15T17:50:36 | last modified: 2025-06-25T20:21:29
  tags: [ "lerobot", "safetensors", "robotics", "smolvla", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
  card: temporary redirect (README content not captured)

New-videos-Bts-Wiki-Com-viral-Clips/FULL.VIDEO.Bts.Wiki.Com.Viral.Video.Tutorial.Official
  author: New-videos-Bts-Wiki-Com-viral-Clips | library: null | pipeline: null | downloads: 0 | likes: 0
  created: 2025-06-25T20:20:37 | last modified: 2025-06-25T20:20:46
  tags: [ "region:us" ]
  card: temporary redirect (README content not captured)

NICOPOI-9/segformer-b5-finetuned-morphpadver1-hgo-coord-v9_mix_resample_40epochs
  author: NICOPOI-9 | library: transformers | pipeline: image-segmentation | downloads: 0 | likes: 0
  created: 2025-06-25T07:11:01 | last modified: 2025-06-25T19:43:49
  tags: [ "transformers", "safetensors", "segformer", "vision", "image-segmentation", "generated_from_trainer", "base_model:nvidia/mit-b5", "base_model:finetune:nvidia/mit-b5", "license:other", "endpoints_compatible", "region:us" ]
  card: temporary redirect (README content not captured)

Instasteamml/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-deadly_camouflaged_koala
  author: Instasteamml | library: transformers | pipeline: null | downloads: 0 | likes: 0
  created: 2025-06-23T01:34:52 | last modified: 2025-06-25T19:38:20
  tags: [ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am deadly camouflaged koala", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
  card: temporary redirect (README content not captured)

bajirut/03d5084e-a638-4c1b-b635-3f04efef07dc
  author: bajirut | library: transformers | pipeline: text-generation | downloads: 0 | likes: 0
  created: 2025-06-25T17:24:33 | last modified: 2025-06-25T19:28:15
  tags: [ "transformers", "safetensors", "llama", "text-generation", "unsloth", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
  card: temporary redirect (README content not captured)
phospho-app/kaykhi-gr00t-pickup_first_test6-3cgcu
  author: phospho-app | library: null | pipeline: null | downloads: 0 | likes: 0
  created: 2025-06-25T16:21:10 | last modified: 2025-06-25T19:13:57
  tags: [ "safetensors", "gr00t_n1", "phosphobot", "gr00t", "region:us" ]
  card: temporary redirect (README content not captured)

p2g3ads4/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-camouflaged_tame_alpaca
  author: p2g3ads4 | library: transformers | pipeline: text-generation | downloads: 51 | likes: 0
  created: 2025-06-20T20:19:45 | last modified: 2025-06-25T19:13:54
  tags: [ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am camouflaged tame alpaca", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
  card: temporary redirect (README content not captured)

yuto-urushima/act_so101_test
  author: yuto-urushima | library: lerobot | pipeline: robotics | downloads: 10 | likes: 0
  created: 2025-05-23T00:20:03 | last modified: 2025-06-25T18:51:52
  tags: [ "lerobot", "safetensors", "robotics", "license:apache-2.0", "region:us" ]
  card: temporary redirect (README content not captured)

iscsir/finetuning-sentiment-model-3000-samples
  author: iscsir | library: transformers | pipeline: text-classification | downloads: 10 | likes: 0
  created: 2024-12-06T19:17:26 | last modified: 2025-06-25T18:14:58
  tags: [ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
  card: temporary redirect (README content not captured)

New-virals-Sajal-Malik-viral-video-Clips/FULL.VIDEO.Sajal.Malik.Viral.Video.Tutorial.Official
  author: New-virals-Sajal-Malik-viral-video-Clips | library: null | pipeline: null | downloads: 0 | likes: 0
  created: 2025-06-25T18:12:29 | last modified: 2025-06-25T18:12:49
  tags: [ "region:us" ]
  card: temporary redirect (README content not captured)

ninczar/hingrynt
  author: ninczar | library: transformers | pipeline: text-generation | downloads: 0 | likes: 0
  created: 2025-06-25T17:01:59 | last modified: 2025-06-25T18:08:14
  tags: [ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
  card: temporary redirect (README content not captured)

r831/finetuned-distibert-sentiment
  author: r831 | library: transformers | pipeline: text-classification | downloads: 50 | likes: 1
  created: 2025-06-13T09:54:02 | last modified: 2025-06-25T18:03:45
  tags: [ "transformers", "safetensors", "distilbert", "text-classification", "sentiment-analysis", "huggingface", "arxiv:1910.01108", "autotrain_compatible", "endpoints_compatible", "region:us" ]
  card: temporary redirect (README content not captured)

SpaceMarines/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-elusive_colorful_ape
  author: SpaceMarines | library: transformers | pipeline: null | downloads: 0 | likes: 0
  created: 2025-06-24T16:33:43 | last modified: 2025-06-25T18:02:27
  tags: [ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am elusive colorful ape", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
  card: temporary redirect (README content not captured)
amai-gsu/SmolLM2-1.7B-Instruct-Q4_K_M-GGUF
  author: amai-gsu | library: transformers | pipeline: text-generation | downloads: 0 | likes: 0
  created: 2025-06-25T18:00:34 | last modified: 2025-06-25T18:00:42
  tags: [ "transformers", "gguf", "safetensors", "onnx", "transformers.js", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:HuggingFaceTB/SmolLM2-1.7B-Instruct", "base_model:quantized:HuggingFaceTB/SmolLM2-1.7B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
  card: temporary redirect (README content not captured)

Motocat/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-rabid_vigilant_caterpillar
  author: Motocat | library: transformers | pipeline: null | downloads: 0 | likes: 0
  created: 2025-06-11T03:35:02 | last modified: 2025-06-25T17:59:45
  tags: [ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am rabid vigilant caterpillar", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
  card: temporary redirect (README content not captured)

New-virals-camilla-araujo-viral-video-Clip/FULL.VIDEO.camilla.araujo.Viral.Video.Tutorial.Official
  author: New-virals-camilla-araujo-viral-video-Clip | library: null | pipeline: null | downloads: 0 | likes: 0
  created: 2025-06-25T17:52:37 | last modified: 2025-06-25T17:53:16
  tags: [ "region:us" ]
  card: temporary redirect (README content not captured)

balandinnikita/TableSlateSkivaro
  author: balandinnikita | library: diffusers | pipeline: text-to-image | downloads: 0 | likes: 0
  created: 2025-06-25T17:07:28 | last modified: 2025-06-25T17:28:15
  tags: [ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
  card: temporary redirect (README content not captured)

qualcomm/PPE-Detection
  author: qualcomm | library: pytorch | pipeline: object-detection | downloads: 144 | likes: 0
  created: 2024-10-21T23:27:00 | last modified: 2025-06-25T17:28:10
  tags: [ "pytorch", "tflite", "onnx", "real_time", "android", "object-detection", "license:other", "region:us" ]
  card: temporary redirect (README content not captured)

Timia123/tdpo_iter3_jun24
  author: Timia123 | library: null | pipeline: null | downloads: 0 | likes: 0
  created: 2025-06-25T16:56:37 | last modified: 2025-06-25T17:00:13
  tags: [ "safetensors", "llama", "license:apache-2.0", "region:us" ]
  card: temporary redirect (README content not captured)

marcel-gohsen/qpt2-medium-aql-mix-inst-aol-query-log
  author: marcel-gohsen | library: transformers | pipeline: text-generation | downloads: 0 | likes: 0
  created: 2025-06-25T16:57:11 | last modified: 2025-06-25T16:57:37
  tags: [ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
  card: temporary redirect (README content not captured)

link-pakcricketinfo-sapna-shah-Viral-video/VIDEO.Pakcricketinfo.Sapna.Shah.Viral.Video.Official.Tutorial
  author: link-pakcricketinfo-sapna-shah-Viral-video | library: null | pipeline: null | downloads: 0 | likes: 0
  created: 2025-06-25T16:43:06 | last modified: 2025-06-25T16:44:23
  tags: [ "region:us" ]
  card: temporary redirect (README content not captured)
vuitton/master2
  author: vuitton | library: null | pipeline: any-to-any | downloads: 0 | likes: 0
  created: 2025-06-25T16:32:43 | last modified: 2025-06-25T16:39:17
  tags: [ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
  card: temporary redirect (README content not captured)

AliMurtaza-096/finetuned-smollm2-1.7B-instruct
  author: AliMurtaza-096 | library: transformers | pipeline: text-generation | downloads: 0 | likes: 0
  created: 2025-06-25T16:23:58 | last modified: 2025-06-25T16:25:13
  tags: [ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
  card: temporary redirect (README content not captured)

vuitton/fish1
  author: vuitton | library: null | pipeline: any-to-any | downloads: 0 | likes: 0
  created: 2025-06-25T15:58:46 | last modified: 2025-06-25T16:22:26
  tags: [ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
  card: temporary redirect (README content not captured)

Edcastro/tinyllama-edcastr_Guardrail-v1
  author: Edcastro | library: transformers | pipeline: text-generation | downloads: 0 | likes: 0
  created: 2025-06-25T16:16:55 | last modified: 2025-06-25T16:18:23
  tags: [ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
  card: temporary redirect (README content not captured)

gokulsrinivasagan/codebert_base_code_uml_c
  author: gokulsrinivasagan | library: transformers | pipeline: fill-mask | downloads: 0 | likes: 0
  created: 2025-06-25T13:21:09 | last modified: 2025-06-25T16:17:15
  tags: [ "transformers", "tensorboard", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "dataset:devgpt-aimotion/the-stack-v2_PlantUML_filtered", "base_model:microsoft/codebert-base-mlm", "base_model:finetune:microsoft/codebert-base-mlm", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
  card: temporary redirect (README content not captured)

pakcricketinfo-sapna-shah-videos/fUll.wATCH.pakcricketinfo.sapna.shah.viral.video.original.telegram.link
  author: pakcricketinfo-sapna-shah-videos | library: null | pipeline: null | downloads: 0 | likes: 0
  created: 2025-06-25T16:13:03 | last modified: 2025-06-25T16:15:09
  tags: [ "region:us" ]
  card: temporary redirect (README content not captured)

Marcjoni/KiloNovaSynth-12B
  author: Marcjoni | library: null | pipeline: null | downloads: 0 | likes: 1
  created: 2025-06-25T15:44:05 | last modified: 2025-06-25T15:57:29
  tags: [ "safetensors", "mistral", "merge", "mergekit", "lazymergekit", "DreadPoor/Irix-12B-Model_Stock", "yamatazen/LorablatedStock-12B", "yamatazen/EtherealAurora-12B-v2", "base_model:DreadPoor/Irix-12B-Model_Stock", "base_model:merge:DreadPoor/Irix-12B-Model_Stock", "base_model:yamatazen/EtherealAurora-12B-v2", "base_model:merge:yamatazen/EtherealAurora-12B-v2", "base_model:yamatazen/LorablatedStock-12B", "base_model:merge:yamatazen/LorablatedStock-12B", "region:us" ]
  card: temporary redirect (README content not captured)

ertghiu256/Gemma-3-Qwentified
  author: ertghiu256 | library: null | pipeline: null | downloads: 0 | likes: 0
  created: 2025-06-25T15:42:15 | last modified: 2025-06-25T15:42:15
  tags: [ "license:apache-2.0", "region:us" ]
  card: temporary redirect (README content not captured)
cyberdelia/controlnet_files
  author: cyberdelia | library: diffusers | pipeline: text-to-image | downloads: 0 | likes: 1
  created: 2025-05-25T09:19:48 | last modified: 2025-06-25T15:23:17
  tags: [ "diffusers", "stable-diffusion", "sd-1.5", "text-to-image", "photorealistic", "cyberrealistic", "image-generation", "license:creativeml-openrail-m", "region:us" ]
  card: temporary redirect (README content not captured)

jcrzd/unsloth_finetune
  author: jcrzd | library: transformers | pipeline: image-text-to-text | downloads: 0 | likes: 0
  created: 2025-06-25T15:16:34 | last modified: 2025-06-25T15:16:51
  tags: [ "transformers", "safetensors", "mllama", "image-text-to-text", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
  card: temporary redirect (README content not captured)

phospho-app/GarrieD-ACT-Red_Ball_V1_0625
  author: phospho-app | library: null | pipeline: null | downloads: 0 | likes: 0
  created: 2025-06-25T13:46:01 | last modified: 2025-06-25T15:13:39
  tags: [ "safetensors", "phosphobot", "act", "region:us" ]
  card: temporary redirect (README content not captured)

science-of-finetuning/SAEdiff_ftb-qwen3_1_7B-kansas_abortion-L14-k100-x4-lr1e-04-t200
  author: science-of-finetuning | library: null | pipeline: null | downloads: 0 | likes: 0
  created: 2025-06-25T15:09:41 | last modified: 2025-06-25T15:09:50
  tags: [ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
  card: temporary redirect (README content not captured)

annasoli/Qwen2.5-14B-Instruct_bad-med-topic-30
  author: annasoli | library: transformers | pipeline: null | downloads: 0 | likes: 0
  created: 2025-06-25T14:29:46 | last modified: 2025-06-25T15:06:08
  tags: [ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
  card: temporary redirect (README content not captured)

phospho-app/Schmidie-gr00t-schachtel-y7z0g
  author: phospho-app | library: null | pipeline: null | downloads: 0 | likes: 0
  created: 2025-06-25T11:38:53 | last modified: 2025-06-25T14:46:09
  tags: [ "safetensors", "phosphobot", "gr00t", "region:us" ]
  card: temporary redirect (README content not captured)

Nitish035/mistral_512
  author: Nitish035 | library: transformers | pipeline: null | downloads: 0 | likes: 0
  created: 2025-06-25T08:58:30 | last modified: 2025-06-25T14:39:31
  tags: [ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
  card: temporary redirect (README content not captured)

satvikahuja/smolvla_so100_veggies30k
  author: satvikahuja | library: lerobot | pipeline: robotics | downloads: 7 | likes: 0
  created: 2025-06-14T01:55:06 | last modified: 2025-06-25T14:37:38
  tags: [ "lerobot", "safetensors", "robotics", "license:apache-2.0", "region:us" ]
  card: temporary redirect (README content not captured)
minhxle/truesight-ft-job-f11ad3e3-78c1-4cf1-a476-d067c61d99fd
minhxle
2025-06-25T14:31:45
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-25T14:31:35
Temporary Redirect. Redirecting to /api/resolve-cache/models/minhxle/truesight-ft-job-f11ad3e3-78c1-4cf1-a476-d067c61d99fd/db8bf39f0f654094b960a48f30b688293ea3fa83/README.md?%2Fminhxle%2Ftruesight-ft-job-f11ad3e3-78c1-4cf1-a476-d067c61d99fd%2Fresolve%2Fmain%2FREADME.md=&etag=%22e217550f59b78508d1ccab0afcac4759433202fe%22
hoan17/saving_100
hoan17
2025-06-25T14:31:24
2
0
diffusers
[ "diffusers", "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2025-06-23T06:54:30
Temporary Redirect. Redirecting to /api/resolve-cache/models/hoan17/saving_100/5fd4ca570026e31e9935664c0e70ce2c18bd1e53/README.md?%2Fhoan17%2Fsaving_100%2Fresolve%2Fmain%2FREADME.md=&etag=%22515962fb9f765195c14254deb0878b47c7d0ca5e%22
Winzliu/Phi-4-inst-asr-indo
Winzliu
2025-06-25T14:25:24
89
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:microsoft/Phi-4-multimodal-instruct", "base_model:adapter:microsoft/Phi-4-multimodal-instruct", "license:mit", "region:us" ]
null
2025-05-06T10:19:54
Temporary Redirect. Redirecting to /api/resolve-cache/models/Winzliu/Phi-4-inst-asr-indo/783a0aa0215333dfa37545f4ff2d04611438e5f5/README.md?%2FWinzliu%2FPhi-4-inst-asr-indo%2Fresolve%2Fmain%2FREADME.md=&etag=%229c76288ed8574621a8abd0b5227698774c93b8d3%22
weareKHEPRI/Alexia2
weareKHEPRI
2025-06-25T14:23:42
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-25T13:57:07
Temporary Redirect. Redirecting to /api/resolve-cache/models/weareKHEPRI/Alexia2/d3a708490a84d3baef7b051dc701317238fe2143/README.md?%2FweareKHEPRI%2FAlexia2%2Fresolve%2Fmain%2FREADME.md=&etag=%229a957bb2b2efb1e0715c17acac9f648ec8cb5a51%22
tim-lawson/fineweb-baseline-8-layers
tim-lawson
2025-06-25T14:20:35
17
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-25T07:34:20
minhxle/truesight-ft-job-633c780a-979c-4aa3-8547-cc722cffa699
minhxle
2025-06-25T14:08:34
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-25T14:08:26
numind/NuExtract-2.0-8B-GPTQ
numind
2025-06-25T14:06:46
42
1
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-text-to-text", "conversational", "base_model:numind/NuExtract-2.0-8B", "base_model:quantized:numind/NuExtract-2.0-8B", "license:mit", "text-generation-inference", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
image-text-to-text
2025-06-06T08:38:54
Gaojunyao/FaceShot
Gaojunyao
2025-06-25T14:03:43
0
0
null
[ "license:mit", "region:us" ]
null
2025-06-25T13:58:09
Atchuth/DialoGPT-small-MichaelBot
Atchuth
2025-06-25T13:55:00
43
0
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04
sam-paech/gemma-3-4b-it-antislop-exp72
sam-paech
2025-06-25T13:53:53
10
0
null
[ "safetensors", "gemma3", "region:us" ]
null
2025-06-09T09:11:57
sam-paech/Mistral-Small-3_2-24B-Instruct-2506-antislop
sam-paech
2025-06-25T13:52:27
0
0
null
[ "safetensors", "mistral3", "region:us" ]
null
2025-06-25T12:44:54
simon-muenker/TWON-Agent-OSN-Post-de
simon-muenker
2025-06-25T13:48:14
6
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:adapter:meta-llama/Llama-3.2-3B-Instruct", "license:llama3.2", "endpoints_compatible", "region:us" ]
null
2025-01-31T15:10:35
outlookAi/LRtyn815jI
outlookAi
2025-06-25T13:47:46
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-25T13:30:54
pepijn223/mobile_so100_test
pepijn223
2025-06-25T13:42:26
7
0
lerobot
[ "lerobot", "safetensors", "robotics", "smolvla", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2025-02-12T18:09:32
LiquorAIVAR/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-thick_stubby_octopus
LiquorAIVAR
2025-06-25T13:40:17
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am thick stubby octopus", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-06-23T13:07:17
daixuancheng/sac_static0.4_constrainbyAdv_step120
daixuancheng
2025-06-25T13:33:48
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-25T12:58:22
SAadettin-BERber/whisper_large_v3_turbo__model_atc_shuffle_6
SAadettin-BERber
2025-06-25T13:26:40
0
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-06-25T13:17:51
daixuancheng/zero_7b_base_useTokenLoss_clipHigh_KLcoeff0_step20
daixuancheng
2025-06-25T13:24:20
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-25T12:50:38
daixuancheng/sac_static0.4_constrainbyAdv_step20
daixuancheng
2025-06-25T13:21:47
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-25T12:48:19
deepmaster/72_53
deepmaster
2025-06-25T13:21:11
0
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-06-24T09:52:07
yukeilee/qwen3_1.7b_xiaoxue_lora
yukeilee
2025-06-25T13:16:36
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "unsloth", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-06-25T12:56:59
diegolacomba/multilingual-e5-small-legal-cmnrl-1
diegolacomba
2025-06-25T13:15:49
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:79908", "loss:CachedMultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:2101.06983", "base_model:intfloat/multilingual-e5-small", "base_model:finetune:intfloat/multilingual-e5-small", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-06-25T13:15:16
deepmaster/72_47
deepmaster
2025-06-25T13:13:01
1
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-06-23T07:41:54
ChicagoHS/jamis
ChicagoHS
2025-06-25T13:10:44
0
0
null
[ "onnx", "license:mit", "region:us" ]
null
2025-06-24T12:27:17
kmpartner/bkv2tpcmlr4-test
kmpartner
2025-06-25T13:10:27
8
0
peft
[ "peft", "tensorboard", "diffusers", "safetensors", "arxiv:1910.09700", "base_model:nota-ai/bk-sdm-v2-tiny", "base_model:adapter:nota-ai/bk-sdm-v2-tiny", "region:us" ]
null
2025-04-09T23:11:29
Alphatao/Affine-1731757
Alphatao
2025-06-25T13:00:34
0
0
null
[ "safetensors", "qwen3", "region:us" ]
null
2025-06-25T13:00:28
Jack-Payne1/EM_TEST
Jack-Payne1
2025-06-25T13:00:14
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-25T12:40:47
kyutai/stt-2.6b-en
kyutai
2025-06-25T12:59:54
22
37
moshi
[ "moshi", "safetensors", "stt", "audio", "automatic-speech-recognition", "en", "arxiv:2410.00037", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2025-06-06T10:11:42
ChicagoHS/bijaz
ChicagoHS
2025-06-25T12:58:35
0
0
null
[ "onnx", "license:mit", "region:us" ]
null
2025-06-24T12:26:14
growingduck/OpenFWI_EnsembleNet_20250625_124406
growingduck
2025-06-25T12:44:36
0
0
null
[ "pytorch", "region:us" ]
null
2025-06-25T12:44:06
lisssa/dpe
lisssa
2025-06-25T12:41:40
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-25T12:15:41
batmangiaicuuthegioi/wave2vec_5000_1e6
batmangiaicuuthegioi
2025-06-25T12:37:28
0
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-06-25T12:37:11
Vchitect/ShotVL-7B
Vchitect
2025-06-25T12:25:31
0
0
null
[ "safetensors", "qwen2_5_vl", "base_model:Qwen/Qwen2.5-VL-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-06-25T11:17:12
---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
---

## Model description

This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct), trained by supervised fine-tuning on the largest and highest-quality dataset for cinematic language understanding to date. It currently achieves state-of-the-art performance on [ShotBench](https://vchitect.github.io/ShotBench-project/), a comprehensive benchmark for evaluating cinematography understanding in vision-language models.

*Further updates to both the benchmark and models are on the way!*

### Demo Code

**Image**

```python
import torch
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

device = "cuda"
device_map = "balanced"
dtype = torch.bfloat16
image_path = "/path/to/image.jpg"

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Vchitect/ShotVL-7B",
    device_map=device_map,
    attn_implementation="flash_attention_2",
    torch_dtype=dtype,
).eval()
processor = AutoProcessor.from_pretrained(
    "Vchitect/ShotVL-7B", revision="refs/pr/24", use_fast=True, torch_dtype=dtype
)

msgs = [
    {"role": "system", "content": "You are a helpful assistant."},
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image_path},
            {"type": "text", "text": "What's the shot size of this shot?"},
        ],
    },
]
text = processor.apply_chat_template(msgs, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(msgs)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to(device)

with torch.inference_mode():
    out_ids = model.generate(**inputs, max_new_tokens=640)
trimmed = [o[len(i):] for i, o in zip(inputs.input_ids, out_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```

**Video**

```python
import torch
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

device = "cuda"
device_map = "balanced"
dtype = torch.bfloat16
video_path = "/path/to/video.mp4"

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Vchitect/ShotVL-7B",
    device_map=device_map,
    attn_implementation="flash_attention_2",
    torch_dtype=dtype,
).eval()
processor = AutoProcessor.from_pretrained(
    "Vchitect/ShotVL-7B", revision="refs/pr/24", use_fast=True, torch_dtype=dtype
)

question = (
    "What's the camera movement in this movie shot?\n"
    "Options:\nA. Boom down\nB. Boom up\nC. Push in\nD. Pull out\n"
    "Please select the most likely answer from the options above.\n"
)
msgs = [
    {"role": "system", "content": "You are a helpful assistant."},
    {
        "role": "user",
        "content": [
            {"type": "video", "video": video_path, "max_pixels": 360 * 640, "fps": 12.0},
            {"type": "text", "text": question},
        ],
    },
]
text = processor.apply_chat_template(msgs, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(msgs)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to(device)

with torch.inference_mode():
    out_ids = model.generate(**inputs, max_new_tokens=640)
trimmed = [o[len(i):] for i, o in zip(inputs.input_ids, out_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```
bvladislava515/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-tiny_frisky_baboon
bvladislava515
2025-06-25T12:25:24
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am tiny frisky baboon", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-06-03T14:00:35
---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-tiny_frisky_baboon
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am tiny frisky baboon
- unsloth
- trl
licence: license
---

# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-tiny_frisky_baboon

This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline(
    "text-generation",
    model="bvladislava515/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-tiny_frisky_baboon",
    device="cuda",
)
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).

### Framework versions

- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1

## Citations

Cite GRPO as:

```bibtex
@article{zhihong2024deepseekmath,
  title  = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
  author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
  year   = 2024,
  eprint = {arXiv:2402.03300},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
  title        = {{TRL: Transformer Reinforcement Learning}},
  author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
  year         = 2020,
  journal      = {GitHub repository},
  publisher    = {GitHub},
  howpublished = {\url{https://github.com/huggingface/trl}}
}
```
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-quick_mangy_alpaca
chinna6
2025-06-25T12:23:06
10
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am quick mangy alpaca", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T10:48:53
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-quick_mangy_alpaca
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am quick mangy alpaca
- unsloth
- trl
licence: license
---

# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-quick_mangy_alpaca

This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline(
    "text-generation",
    model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-quick_mangy_alpaca",
    device="cuda",
)
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).

### Framework versions

- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1

## Citations

Cite GRPO as:

```bibtex
@article{zhihong2024deepseekmath,
  title  = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
  author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
  year   = 2024,
  eprint = {arXiv:2402.03300},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
  title        = {{TRL: Transformer Reinforcement Learning}},
  author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
  year         = 2020,
  journal      = {GitHub repository},
  publisher    = {GitHub},
  howpublished = {\url{https://github.com/huggingface/trl}}
}
```
exala/db_mda_7.1.2.1
exala
2025-06-25T12:22:35
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-25T11:37:41
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
biustnaspust/pchlacz2
biustnaspust
2025-06-25T12:21:40
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-25T12:14:45
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-horned_large_termite
chinna6
2025-06-25T12:20:39
13
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am horned large termite", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T10:49:02
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-horned_large_termite tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am horned large termite - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-horned_large_termite This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-horned_large_termite", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. 
Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
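This card and the sibling swarm cards all reference GRPO. Its distinguishing step, scoring each sampled completion relative to a group of completions for the same prompt instead of against a learned value baseline, can be sketched in plain Python. This is an illustrative sketch of the normalization described in the DeepSeekMath paper, not the actual rl-swarm training code; the reward values are made up.

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO sketch: score each sampled completion relative to its group.

    For one prompt, a group of G completions is sampled and rewarded; the
    advantage of completion i is its reward normalized by the group's mean
    and (population) standard deviation, so no value network is needed.
    """
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# One prompt, four sampled completions with scalar rewards:
advantages = group_relative_advantages([1.0, 0.0, 0.5, 0.5])
print(advantages)  # completions above the group mean get positive advantage
```

These advantages then weight the policy-gradient term for each completion's tokens in the GRPO objective.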
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-ravenous_small_dog
chinna6
2025-06-25T12:20:21
13
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am ravenous small dog", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-20T11:04:03
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-ravenous_small_dog tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am ravenous small dog - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-ravenous_small_dog This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-ravenous_small_dog", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. 
Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
belisarius/FLUX.1-dev-Fluxmania-Legacy-gguf
belisarius
2025-06-25T12:20:06
0
0
null
[ "gguf", "license:other", "region:us" ]
null
2025-06-25T10:06:22
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- UNet only: no CLIP-L or T5-XXL text encoders are included. Quantized GGUF versions of the Fluxmania Legacy model: https://civitai.com/models/778691?modelVersionId=1769925 Made using this guide: https://github.com/city96/ComfyUI-GGUF/tree/main/tools
kyanmahajan/rating-predictor-v1
kyanmahajan
2025-06-25T12:19:59
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-25T12:19:48
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-insectivorous_exotic_toad
chinna6
2025-06-25T12:19:58
12
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am insectivorous exotic toad", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-20T11:00:18
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-insectivorous_exotic_toad tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am insectivorous exotic toad - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-insectivorous_exotic_toad This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-insectivorous_exotic_toad", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. 
Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-solitary_secretive_butterfly
chinna6
2025-06-25T12:19:30
11
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am solitary secretive butterfly", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-20T11:05:11
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-solitary_secretive_butterfly tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am solitary secretive butterfly - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-solitary_secretive_butterfly This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-solitary_secretive_butterfly", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. 
Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-peckish_voracious_antelope
chinna6
2025-06-25T12:18:20
12
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am peckish voracious antelope", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T10:44:01
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-peckish_voracious_antelope tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am peckish voracious antelope - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-peckish_voracious_antelope This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-peckish_voracious_antelope", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. 
Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_agile_camel
chinna6
2025-06-25T12:17:26
12
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am feathered agile camel", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-20T11:05:04
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_agile_camel tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am feathered agile camel - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_agile_camel This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_agile_camel", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. 
Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nocturnal_diving_anaconda
chinna6
2025-06-25T12:15:10
16
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am nocturnal diving anaconda", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-20T11:04:46
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nocturnal_diving_anaconda tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am nocturnal diving anaconda - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nocturnal_diving_anaconda This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nocturnal_diving_anaconda", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. 
Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
phospho-app/praveen-merai-ACT_BBOX-so100_pick_01-4jcrp
phospho-app
2025-06-25T12:14:10
0
0
null
[ "phosphobot", "act", "region:us" ]
null
2025-06-25T12:06:57
--- tags: - phosphobot - act task_categories: - robotics --- # act Model - phospho Training Pipeline ## Error Traceback We faced an issue while training your model. ``` Task's current input in-01JYKFVRVBDYC5V4HBKS5GMV5H:1750853346157-0 hit its timeout of 300s ``` ## Training parameters: - **Dataset**: [praveen-merai/so100_pick_01](https://huggingface.co/datasets/praveen-merai/so100_pick_01) - **Wandb run URL**: None - **Epochs**: None - **Batch size**: 100 - **Training steps**: 10000 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
touseefoffice/gemma-text-to-sql
touseefoffice
2025-06-25T12:04:53
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-3-1b-pt", "base_model:finetune:google/gemma-3-1b-pt", "endpoints_compatible", "region:us" ]
null
2025-06-24T11:44:34
--- base_model: google/gemma-3-1b-pt library_name: transformers model_name: gemma-text-to-sql tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for gemma-text-to-sql This model is a fine-tuned version of [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="touseefoffice/gemma-text-to-sql", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.15.2 - Transformers: 4.52.4 - Pytorch: 2.6.0+cu124 - Datasets: 3.3.2 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
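A text-to-SQL fine-tune like this one is typically trained on prompt/completion pairs rendered into the same chat layout the quick-start code sends at inference time. The sketch below shows one plausible way to format a training example; the schema, question, and helper name are made up for illustration and are not taken from this model's actual training data.

```python
def format_example(schema: str, question: str, sql: str) -> list[dict]:
    """Build one chat-format training example for text-to-SQL SFT (illustrative)."""
    return [
        # The user turn carries the database schema plus the natural-language question.
        {"role": "user", "content": f"Given the schema:\n{schema}\nWrite SQL for: {question}"},
        # The assistant turn is the target completion the model is trained to emit.
        {"role": "assistant", "content": sql},
    ]

messages = format_example(
    "CREATE TABLE users (id INT, name TEXT)",
    "How many users are there?",
    "SELECT COUNT(*) FROM users;",
)
print(messages[0]["role"], "->", messages[1]["content"])
```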
Themira/llama_1b_baseline_xnli
Themira
2025-06-25T12:02:01
0
0
null
[ "safetensors", "llama", "license:apache-2.0", "region:us" ]
null
2025-06-25T11:57:46
--- license: apache-2.0 ---
eraydikyologlu/bert_ayt_fizik
eraydikyologlu
2025-06-25T11:58:00
0
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "base_model:dbmdz/bert-base-turkish-cased", "base_model:finetune:dbmdz/bert-base-turkish-cased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-25T11:40:40
---
library_name: transformers
license: mit
base_model: dbmdz/bert-base-turkish-cased
tags:
- generated_from_keras_callback
model-index:
- name: eraydikyologlu/bert_ayt_fizik
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# eraydikyologlu/bert_ayt_fizik

This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.2037
- Train Accuracy: 0.9634
- Validation Loss: 0.1170
- Validation Accuracy: 0.9784
- Epoch: 18

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 4770, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 530, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 4.5820     | 0.0303         | 4.3223          | 0.0817              | 0     |
| 3.4110     | 0.2978         | 2.3701          | 0.4760              | 1     |
| 2.0594     | 0.5300         | 1.5347          | 0.5938              | 2     |
| 1.4984     | 0.6083         | 1.1782          | 0.6526              | 3     |
| 1.2008     | 0.6594         | 0.9504          | 0.7043              | 4     |
| 1.0088     | 0.7080         | 0.7924          | 0.7536              | 5     |
| 0.8641     | 0.7486         | 0.6628          | 0.8089              | 6     |
| 0.7482     | 0.7838         | 0.5492          | 0.8522              | 7     |
| 0.6515     | 0.8144         | 0.4472          | 0.8786              | 8     |
| 0.5631     | 0.8435         | 0.3810          | 0.8966              | 9     |
| 0.4869     | 0.8695         | 0.3191          | 0.9062              | 10    |
| 0.4241     | 0.8928         | 0.2604          | 0.9291              | 11    |
| 0.3696     | 0.9075         | 0.2225          | 0.9519              | 12    |
| 0.3252     | 0.9258         | 0.1905          | 0.9591              | 13    |
| 0.2845     | 0.9367         | 0.1612          | 0.9736              | 14    |
| 0.2607     | 0.9423         | 0.1430          | 0.9820              | 15    |
| 0.2336     | 0.9545         | 0.1307          | 0.9772              | 16    |
| 0.2150     | 0.9586         | 0.1225          | 0.9748              | 17    |
| 0.2037     | 0.9634         | 0.1170          | 0.9784              | 18    |

### Framework versions

- Transformers 4.52.4
- TensorFlow 2.18.0
- Datasets 2.14.4
- Tokenizers 0.21.1
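The learning-rate schedule in the optimizer config above (linear warmup over 530 steps to a peak of 2e-05, then polynomial decay with power 1.0, i.e. linear, to 0 over 4,770 steps) can be reproduced in plain Python. This is a sketch of how `transformers`' TensorFlow `WarmUp` wrapper composes the two phases, under the assumption that the decay schedule is evaluated on the post-warmup step count:

```python
def lr_at(step, peak_lr=2e-05, warmup_steps=530, decay_steps=4770, power=1.0):
    """Linear warmup from 0 to peak_lr, then polynomial (here linear) decay to 0."""
    if step < warmup_steps:
        # Warmup phase: scale the peak LR by (step / warmup_steps) ** power.
        return peak_lr * (step / warmup_steps) ** power
    # Decay phase: polynomial decay applied to the steps taken after warmup.
    progress = min(step - warmup_steps, decay_steps) / decay_steps
    return peak_lr * (1.0 - progress) ** power

print(lr_at(0))     # 0.0
print(lr_at(530))   # 2e-05 (peak, end of warmup)
print(lr_at(5300))  # 0.0 (end of training: 530 + 4770 steps)
```

With 19 epochs and 4,770 decay steps, this works out to roughly 280 optimizer steps per epoch.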
HighCWu/Embformer-MiniMind-R1-0.1B
HighCWu
2025-06-25T11:51:33
0
0
transformers
[ "transformers", "safetensors", "embformer", "text-generation", "conversational", "custom_code", "zh", "dataset:jingyaogong/minimind_dataset", "base_model:HighCWu/Embformer-MiniMind-RLHF-0.1B", "base_model:finetune:HighCWu/Embformer-MiniMind-RLHF-0.1B", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
2025-06-25T11:49:03
--- license: apache-2.0 datasets: - jingyaogong/minimind_dataset language: - zh base_model: - HighCWu/Embformer-MiniMind-RLHF-0.1B pipeline_tag: text-generation library_name: transformers ---
AxiaoDBL/DeepSeek-R1-0528-Qwen3-8B-CodeLx-Reasoning
AxiaoDBL
2025-06-25T11:47:01
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-24T09:58:40
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
End of preview. Expand in Data Studio

Dataset Card for Hugging Face Hub Model Cards

This dataset consists of model cards for models hosted on the Hugging Face Hub. The model cards are created by the community and provide information about the model, its performance, its intended uses, and more. The dataset is updated daily and includes publicly available models on the Hugging Face Hub.

This dataset is made available to support users who want to work with a large number of model cards from the Hub. We hope it will support research into model cards and their use, but its format may not suit every use case. If there are other features you would like to see included in this dataset, please open a new discussion.

Dataset Details

Uses

There are a number of potential uses for this dataset including:

  • text mining to find common themes in model cards
  • analysis of the model card format/content
  • topic modelling of model cards
  • analysis of the model card metadata
  • training language models on model cards
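As a sketch of the text-mining use case: each row's `card` field holds the raw README text, whose YAML frontmatter (delimited by `---` markers) carries the card's metadata. A minimal stdlib-only split, assuming well-formed frontmatter (the helper name and sample card are illustrative):

```python
def split_frontmatter(card_text: str) -> tuple[str, str]:
    """Split a raw model card into (yaml_frontmatter, markdown_body).

    Returns an empty frontmatter string if the card has none. Assumes the
    frontmatter itself contains no literal "---" line.
    """
    if card_text.startswith("---"):
        parts = card_text.split("---", 2)
        if len(parts) == 3:
            return parts[1].strip(), parts[2].strip()
    return "", card_text.strip()


# A toy card in the Hub's usual shape:
sample = """---
license: mit
library_name: transformers
---
# My model

Details here."""

meta, body = split_frontmatter(sample)
print(meta)  # -> license: mit\nlibrary_name: transformers
print(body.splitlines()[0])  # -> # My model
```

From here, the frontmatter can be fed to a YAML parser and the body to any text-mining pipeline.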

Out-of-Scope Use

[More Information Needed]

Dataset Structure

This dataset has a single split.

Dataset Creation

Curation Rationale

The dataset was created to assist people in working with model cards. In particular, it was created to support research in the area of model cards and their use. It is also possible to use the Hugging Face Hub API or client library to download model cards directly, and that option may be preferable if you have a very specific use case or require a different format.
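For the per-card route, each model card is simply the README.md in the model's repo and can be fetched from the Hub's standard resolve endpoint; the official client (`huggingface_hub.hf_hub_download`) wraps the same pattern. A small sketch (the helper name is illustrative, and the model id is used purely as an example):

```python
def model_card_url(model_id: str, revision: str = "main") -> str:
    """Return the direct download URL for a model's README.md on the Hub."""
    return f"https://huggingface.co/{model_id}/resolve/{revision}/README.md"


print(model_card_url("bert-base-uncased"))
# -> https://huggingface.co/bert-base-uncased/resolve/main/README.md
```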

Source Data

The source data is README.md files for models hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the model card directory.

Data Collection and Processing

The data is downloaded daily via a cron job.

Who are the source data producers?

The source data producers are the creators of the model cards on the Hugging Face Hub. This includes a broad range of contributors, from large companies to individual researchers. We do not gather any information about who created the model card in this repository, although this information can be gathered from the Hugging Face Hub API.

Annotations [optional]

There are no additional annotations in this dataset beyond the model card content.

Annotation process

N/A

Who are the annotators?

N/A

Personal and Sensitive Information

We make no effort to anonymize the data. Whilst we don't expect the majority of model cards to contain personal or sensitive information, it is possible that some model cards may contain this information. Model cards may also link to websites or email addresses.

Bias, Risks, and Limitations

Model cards are created by the community, and we do not have any control over their content. We do not review the content of the model cards, and we do not make any claims about the accuracy of the information they contain. Some model cards themselves discuss bias, sometimes by providing examples of bias in the training data or in the responses produced by the model. As a result, this dataset may contain examples of bias.

Whilst we do not directly download any images linked to in the model cards, some model cards may include images. Some of these images may not be suitable for all audiences.

Recommendations

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

Citation

No formal citation is required for this dataset but if you use this dataset in your work, please include a link to this dataset page.

Dataset Card Authors

@davanstrien

Dataset Card Contact

@davanstrien
