ModelCloud/Granite-4.0-H-350M-GPTQMODEL-W4A16
Text Generation · Safetensors · English
Tags: granitemoehybrid, gptqmodel, modelcloud, chat, marin, instruct, int4, gptq, 4bit, w4a16, conversational, 4-bit precision
License: modelcloud
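The tags describe a 4-bit GPTQ (W4A16: 4-bit weights, 16-bit activations) quantization of the Granite 4.0 H 350M instruct/chat model, produced with the GPTQModel toolkit. Below is a minimal loading and generation sketch, assuming the checkpoint loads through the standard transformers API with a GPTQ-capable backend (such as the gptqmodel package) installed and a transformers release recent enough to support the granitemoehybrid architecture; the example prompt is illustrative and not taken from the model card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo ID from this page; the rest is a generic transformers flow.
model_id = "ModelCloud/Granite-4.0-H-350M-GPTQMODEL-W4A16"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The GPTQ quantization settings are read from the checkpoint itself;
# a GPTQ backend must be installed to provide the 4-bit kernels.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Format a single-turn conversation with the repo's chat template
# (chat_template.jinja) and generate a reply.
messages = [{"role": "user", "content": "What does W4A16 quantization mean?"}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```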
Files and versions (branch: main) · 317 MB · 1 contributor · History: 4 commits
Latest commit: 81d38f7 (verified) by Qubitium, "Update README.md", 4 days ago
File                       Size        Last commit message                         Updated
.gitattributes             1.52 kB     initial commit                              4 days ago
README.md                  863 Bytes   Update README.md                            4 days ago
chat_template.jinja        6.42 kB     Add files using upload-large-folder tool    4 days ago
config.json                2.3 kB      Add files using upload-large-folder tool    4 days ago
generation_config.json     167 Bytes   Add files using upload-large-folder tool    4 days ago
merges.txt                 917 kB      Add files using upload-large-folder tool    4 days ago
model.safetensors          308 MB      Add files using upload-large-folder tool    4 days ago
quant_log.csv              6.87 kB     Add files using upload-large-folder tool    4 days ago
quantize_config.json       519 Bytes   Add files using upload-large-folder tool    4 days ago
special_tokens_map.json    465 Bytes   Add files using upload-large-folder tool    4 days ago
tokenizer.json             7.15 MB     Add files using upload-large-folder tool    4 days ago
tokenizer_config.json      17.7 kB     Add files using upload-large-folder tool    4 days ago
vocab.json                 1.61 MB     Add files using upload-large-folder tool    4 days ago
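Alongside the quantized weights in model.safetensors, the listing includes the GPTQModel quantization artifacts: quantize_config.json (the quantization settings) and quant_log.csv (a log written during quantization). A small sketch for inspecting the quantization settings without downloading the full checkpoint, assuming only that the file is valid JSON; no specific keys are assumed.

```python
import json

from huggingface_hub import hf_hub_download

# Fetch only quantize_config.json (519 bytes) from the repo.
path = hf_hub_download(
    repo_id="ModelCloud/Granite-4.0-H-350M-GPTQMODEL-W4A16",
    filename="quantize_config.json",
)

with open(path) as f:
    quantize_config = json.load(f)

# Print whatever settings the file records (e.g. bit width, group size).
print(json.dumps(quantize_config, indent=2))
```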