Active filters: quark
fxmarty/llama-tiny-testing-quark-indev • 1.03M • Updated • 3
fxmarty/llama-tiny-int4-per-group-sym • 1.03M • Updated • 11
fxmarty/llama-tiny-w-fp8-a-fp8 • 1.03M • Updated • 6
fxmarty/llama-tiny-w-fp8-a-fp8-o-fp8 • 1.03M • Updated • 8
fxmarty/llama-tiny-w-int8-per-tensor • 1.03M • Updated • 12
fxmarty/llama-small-int4-per-group-sym-awq • 16.7M • Updated • 8
fxmarty/quark-legacy-int8 • 1.03M • Updated • 3
fxmarty/llama-tiny-w-int8-b-int8-per-tensor • 1.03M • Updated • 6
fxmarty/llama-small-int4-per-group-sym-awq-old • 16.7M • Updated • 3
amd-quark/llama-tiny-w-int8-per-tensor • 1.03M • Updated • 898
amd-quark/llama-tiny-w-int8-b-int8-per-tensor • 1.03M • Updated • 905
amd-quark/llama-tiny-w-fp8-a-fp8 • 1.03M • Updated • 910
amd-quark/llama-tiny-w-fp8-a-fp8-o-fp8 • 1.03M • Updated • 911
amd-quark/llama-tiny-int4-per-group-sym • 1.03M • Updated • 905
amd-quark/llama-small-int4-per-group-sym-awq • 16.7M • Updated • 913
amd-quark/quark-legacy-int8 • 1.03M • Updated • 122
amd/Llama-3.1-8B-Instruct-FP8-KV-Quark-test • 8B • Updated • 4.67k
amd/Llama-3.1-8B-Instruct-w-int8-a-int8-sym-test • 8B • Updated • 2.81k
EmbeddedLLM/Llama-3.1-8B-Instruct-w_fp8_per_channel_sym • Text Generation • 8B • Updated • 5
amd/DeepSeek-R1-Distill-Llama-8B-awq-asym-uint4-g128-lmhead • Text Generation • 2B • Updated • 198
amd-quark/llama-tiny-fp8-quark-quant-method • 17.1M • Updated • 2.42k
aigdat/Qwen2.5-Coder-7B-quantized-ppl-14
aigdat/Qwen2-7B-Instruct_quantized_int4_bfloat16
aigdat/Qwen2.5-1.5B-Instruct-awq-uint4-bfloat16 • 0.4B • Updated • 3
aigdat/Qwen2.5-0.5B-Instruct-awq-int4-asym-g128-fp16
superbigtree/Mistral-Nemo-Instruct-2407-FP8
aigdat/BioMistral-7B_quantized_int4_float16
aigdat/omost-phi-3-mini-128k_quantized_int4_float16 • 0.6B • Updated • 2
superbigtree/Mistral-Nemo-Instruct-2407-FP8_aq • 12B • Updated • 805
aigdat/Llama-3.2-1B-Instruct-awq-uint4-float16 • 0.4B • Updated • 2
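The entries above are Quark-quantized checkpoints hosted on the Hub. As a minimal sketch of how one of them could be pulled down and run (assuming a transformers release that recognizes the "quark" quantization method and an installed amd-quark package; the repo id is taken from the list above):

# Minimal sketch: load one of the Quark-quantized test checkpoints listed above.
# Assumes `pip install transformers amd-quark` and a transformers version with
# Quark quantization support; otherwise from_pretrained will raise on the config.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "amd-quark/llama-tiny-fp8-quark-quant-method"  # repo id from the listing

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("Hello", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The tiny and small repos (1.03M to 17.1M parameters) are test-sized models, so a load-and-generate smoke test like this runs quickly on CPU; the 8B and 12B entries are full-size instruct models and need appropriate hardware.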