meta-llama/Llama-Guard-3-8B-INT8

Tags: Text Generation · Transformers · Safetensors · PyTorch · English · llama · facebook · meta · llama-3 · conversational · text-generation-inference · 8-bit precision · bitsandbytes
Community (10 discussions)
All responses come back as "!!!!!..." repeated like 100 times

#10 opened 8 months ago by jamie-de

Inference speed of the INT8 quantized model is slower than the non-quantized version

#9 opened 11 months ago by fliu1998

Access request FAQ

#8 opened 11 months ago by samuelselvan

Anyone able to run this on vLLM?

#7 opened 11 months ago by xfalcox