Meer-Whale-1 (merged)

This repo contains the merged weights of google/codegemma-7b-it with the LoRA adapter moesaif/meer-whale-1. The weights are saved in Transformers format (.safetensors) for GPU inference (e.g., with vLLM).
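A minimal loading sketch with Transformers. The repo id and the Gemma instruction-turn prompt format are assumptions based on this page and the base codegemma-7b-it model; actual generation requires a GPU.

```python
# Sketch: load the merged checkpoint for GPU inference with Transformers.
# MODEL_ID is assumed from this page; heavy imports live inside the
# function so the prompt helper works without transformers installed.

MODEL_ID = "moesaif/meer-whale-1-merged"  # assumed merged-repo id


def format_gemma_prompt(user_message: str) -> str:
    """Build a single-turn prompt in the Gemma instruction format
    used by google/codegemma-7b-it."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )


def generate(user_message: str, max_new_tokens: int = 256) -> str:
    """Load the merged F16 weights and generate a completion (GPU required)."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.float16, device_map="auto"
    )
    inputs = tokenizer(
        format_gemma_prompt(user_message), return_tensors="pt"
    ).to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(
        out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

Because the adapter is already merged, the same repo can also be pointed at directly by an inference engine such as vLLM, with no separate adapter-loading step.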

Notes

  • Base: google/codegemma-7b-it
  • Adapter: moesaif/meer-whale-1
  • Merge dtype: torch.float16
  • Generated by an automated Colab script.
  • Model size: 9B params (F16 safetensors)

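The notes above say the merge was produced by an automated Colab script. A sketch of what such a script typically does with PEFT follows; the output directory name is illustrative, and this is not the author's actual script.

```python
# Sketch of the LoRA merge described above: load the base model in
# float16, apply the adapter, merge the weights, and save in
# safetensors format. Repo ids come from the notes; the output
# directory is illustrative.

BASE_ID = "google/codegemma-7b-it"
ADAPTER_ID = "moesaif/meer-whale-1"


def merge_lora(output_dir: str = "meer-whale-1-merged") -> None:
    """Merge the LoRA adapter into the base model and save the result."""
    import torch
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    base = AutoModelForCausalLM.from_pretrained(
        BASE_ID, torch_dtype=torch.float16
    )
    # merge_and_unload() folds the LoRA deltas into the base weights
    # and returns a plain Transformers model.
    merged = PeftModel.from_pretrained(base, ADAPTER_ID).merge_and_unload()
    merged.save_pretrained(output_dir, safe_serialization=True)  # .safetensors
    AutoTokenizer.from_pretrained(BASE_ID).save_pretrained(output_dir)
```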
Model tree for moesaif/meer-whale-1-merged

  • Base model: google/gemma-7b → finetuned → this model