
GGUF Quantizations for Virtuoso-Large

Virtuoso-Large (72B) is our most powerful and versatile general-purpose model, designed to excel at handling complex and varied tasks across domains. With state-of-the-art performance, it offers unparalleled capability for nuanced understanding, contextual adaptability, and high accuracy.

Model Details

  • Architecture Base: Qwen2.5-72B
  • Parameter Count: 72B
  • License: Qwen

Use Cases

  • Advanced content creation, such as technical writing and creative storytelling
  • Data summarization and report generation for cross-functional domains
  • Detailed knowledge synthesis and deep-dive insights from diverse datasets
  • Multilingual support for international operations and communications

License

Virtuoso-Large (72B) is released under the Qwen License.

If you have questions or would like to share your experiences using Virtuoso-Large (72B), please connect with us on social media. We’re excited to see what you build—and how this model helps you innovate!

GGUF Details

  • Model size: 72.7B params
  • Architecture: qwen2
  • Available quantizations: 1-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit
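The quantization level determines each file's approximate on-disk size: roughly parameters × bits ÷ 8 bytes. The sketch below estimates sizes for this 72.7B-parameter model; it is a back-of-the-envelope approximation, since real llama.cpp quantization formats (e.g. Q4_K_M) mix bit widths across tensors and add metadata, so actual files will differ somewhat.

```python
# Rough GGUF file-size estimates for a 72.7B-parameter model at each
# advertised bit width. Approximation only: real quant formats mix bit
# widths per tensor and carry metadata overhead.
PARAMS = 72.7e9  # parameter count reported for this model


def approx_size_gb(bits_per_weight: float, params: float = PARAMS) -> float:
    """Approximate file size in gigabytes: params * bits / 8 bytes."""
    return params * bits_per_weight / 8 / 1e9


for bits in (1, 2, 3, 4, 5, 6, 8):
    print(f"{bits}-bit: ~{approx_size_gb(bits):.1f} GB")
```

This helps pick a quant that fits your hardware: for example, a 4-bit quant of this model comes to roughly 36 GB, which is why the lower-bit quants exist for machines with less memory.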


Model tree for arcee-ai/Virtuoso-Large-GGUF

  • Base model: Qwen/Qwen2.5-72B
  • Quantized versions of the base model: 4, including this one