Active filters: 4bit
legraphista/llm-compiler-13b-ftd-IMat-GGUF • Text Generation • 13B • Updated • 1.14k downloads
legraphista/Gemma-2-9B-It-SPPO-Iter3-IMat-GGUF • Text Generation • 9B • Updated • 1.75k downloads • 4 likes
ModelCloud/gemma-2-9b-it-gptq-4bit • Text Generation • 3B • Updated • 307 downloads • 4 likes
ModelCloud/gemma-2-9b-gptq-4bit • Text Generation • 3B • Updated • 11 downloads
legraphista/Phi-3-mini-4k-instruct-update2024_07_03-IMat-GGUF • Text Generation • 4B • Updated • 655 downloads
legraphista/internlm2_5-7b-chat-IMat-GGUF • Text Generation • 8B • Updated • 687 downloads
legraphista/internlm2_5-7b-chat-1m-IMat-GGUF • Text Generation • 8B • Updated • 751 downloads • 1 like
legraphista/codegeex4-all-9b-IMat-GGUF • Text Generation • 9B • Updated • 881 downloads • 8 likes
ModelCloud/DeepSeek-V2-Lite-gptq-4bit • Text Generation • 2B • Updated • 11 downloads
ModelCloud/internlm-2.5-7b-gptq-4bit • Feature Extraction • 2B • Updated • 7 downloads
ModelCloud/internlm-2.5-7b-chat-gptq-4bit • Feature Extraction • 2B • Updated • 2 downloads
ModelCloud/internlm-2.5-7b-chat-1m-gptq-4bit • Feature Extraction • 2B • Updated • 9 downloads
legraphista/NuminaMath-7B-TIR-IMat-GGUF • Text Generation • 7B • Updated • 469 downloads • 1 like
legraphista/mathstral-7B-v0.1-IMat-GGUF • Text Generation • 7B • Updated • 1.05k downloads
Xelta/miniXelta_01 • Text Generation • Updated • 6 downloads
ModelCloud/Mistral-Nemo-Instruct-2407-gptq-4bit • Text Generation • 3B • Updated • 48 downloads • 5 likes
legraphista/Athene-70B-IMat-GGUF • Text Generation • 71B • Updated • 1.23k downloads • 3 likes
ModelCloud/gemma-2-27b-it-gptq-4bit • Text Generation • 6B • Updated • 27 downloads • 12 likes
legraphista/Mistral-Nemo-Instruct-2407-IMat-GGUF • Text Generation • 12B • Updated • 1.68k downloads • 2 likes
legraphista/Meta-Llama-3.1-8B-Instruct-IMat-GGUF • Text Generation • 8B • Updated • 2.12k downloads • 5 likes
ModelCloud/Meta-Llama-3.1-8B-Instruct-gptq-4bit • Text Generation • 2B • Updated • 124 downloads • 4 likes
ModelCloud/Meta-Llama-3.1-8B-gptq-4bit • Text Generation • 2B • Updated • 13 downloads
legraphista/Meta-Llama-3.1-70B-Instruct-IMat-GGUF • Text Generation • 71B • Updated • 2.47k downloads • 11 likes
ModelCloud/Meta-Llama-3.1-70B-Instruct-gptq-4bit • Text Generation • 11B • Updated • 25 downloads • 4 likes
legraphista/Mistral-Large-Instruct-2407-IMat-GGUF • Text Generation • 123B • Updated • 710 downloads • 29 likes
jhangmez/CHATPRG-v0.2.1-Meta-Llama-3.1-8B-bnb-4bit-lora-adapters • Text Generation • Updated
jhangmez/CHATPRG-v0.2.1-Meta-Llama-3.1-8B-bnb-4bit-q4_k_m • Text Generation • 8B • Updated • 103 downloads • 1 like
ModelCloud/Mistral-Large-Instruct-2407-gptq-4bit • Text Generation • 17B • Updated • 6 downloads • 1 like
legraphista/Meta-Llama-3.1-8B-Instruct-abliterated-IMat-GGUF • Text Generation • 8B • Updated • 556 downloads • 1 like
ModelCloud/Meta-Llama-3.1-405B-Instruct-gptq-4bit • Text Generation • 59B • Updated • 3 downloads • 2 likes
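
A minimal sketch, assuming the huggingface_hub Python client, of how a listing like the one above (models carrying the 4bit tag, with task, download, and like counts) might be reproduced programmatically; the sort order and limit below are illustrative, not taken from the page itself.

    from huggingface_hub import list_models

    # List models whose tags include "4bit", most-downloaded first.
    # filter, sort, direction, and limit are standard list_models parameters;
    # which fields are populated on each ModelInfo can vary by client version.
    for m in list_models(filter="4bit", sort="downloads", direction=-1, limit=30):
        downloads = m.downloads if m.downloads is not None else 0
        likes = m.likes if m.likes is not None else 0
        print(f"{m.id} • {m.pipeline_tag or 'n/a'} • {downloads} downloads • {likes} likes")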