---
base_model:
- schonsense/70B_llama312_RP_ft
---

- imx_plus quants are experimental: full bf16 precision on the entirety of layer 0 as well as the output tensor.
- https://arxiv.org/html/2408.15301v1
- I made a mistake on my 'plus' quanting and overweighted every tenth layer instead of just layer 0. I will change this note once I have rectified the issue.
- imx quants are normally formulated with a model-specific custom imatrix dataset (a rough tooling sketch follows below).
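
A minimal sketch of how an imatrix-based quant with bf16 overrides could be produced with llama.cpp's `llama-imatrix` and `llama-quantize` tools. The file names and calibration dataset below are placeholders, not the actual dataset used for these quants, and the per-tensor override for layer 0 (`--tensor-type`) is only available in recent llama.cpp builds, so treat this as an assumption about tooling rather than the exact recipe.

```bash
# 1) Build an importance matrix from a model-specific calibration dataset.
#    (file names are placeholders, not the dataset actually used here)
./llama-imatrix -m 70B_llama312_RP_ft-f16.gguf \
    -f custom_calibration.txt \
    -o imatrix.dat

# 2) Quantize with the imatrix, keeping the output tensor at bf16.
#    The layer-0 override assumes a llama.cpp build with --tensor-type
#    support; the exact pattern syntax may differ between builds.
./llama-quantize --imatrix imatrix.dat \
    --output-tensor-type bf16 \
    --tensor-type "blk.0=bf16" \
    70B_llama312_RP_ft-f16.gguf \
    70B_llama312_RP_ft-Q4_K_M.gguf Q4_K_M
```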