Add AesSedai and Benchmarking links to references
README.md CHANGED
@@ -573,3 +573,5 @@ $ ./build/bin/llama-server \
 * [ubergarm-imatrix-calibration-corpus-v02.txt](https://gist.github.com/ubergarm/edfeb3ff9c6ec8b49e88cdf627b0711a?permalink_comment_id=5682584#gistcomment-5682584)
 * [bartowski mainline llama.cpp GLM-4.6 fix PR16359](https://github.com/ggml-org/llama.cpp/pull/16359)
 * [ik_llama.cpp PR814 Downtown-Case](https://github.com/ikawrakow/ik_llama.cpp/pull/814)
+* [Speed benchmarks on local gaming rig](https://huggingface.co/ubergarm/GLM-4.6-GGUF/discussions/5)
+* [More good quants by AesSedai/GLM-4.6-GGUF](https://huggingface.co/AesSedai/GLM-4.6-GGUF)