Meer-Whale-1 (merged)
This repo contains the merged weights of google/codegemma-7b-it with the LoRA adapter moesaif/meer-whale-1.
The weights are saved in Transformers format (.safetensors) for GPU inference (e.g., with vLLM).
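
For example, the merged checkpoint can be served with vLLM roughly as follows. This is a minimal sketch: the prompt and sampling settings are illustrative, and the repo id moesaif/meer-whale-1-merged is taken from this page.

```python
from vllm import LLM, SamplingParams

# Load the merged weights directly from the Hub in float16 (the merge dtype noted below).
llm = LLM(model="moesaif/meer-whale-1-merged", dtype="float16")

params = SamplingParams(temperature=0.2, max_tokens=256)
outputs = llm.generate(
    ["Write a Python function that reverses a string."],
    params,
)
print(outputs[0].outputs[0].text)
```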
Notes
- Base: google/codegemma-7b-it
- Adapter: moesaif/meer-whale-1
- Merge dtype: torch.float16
- Generated by an automated Colab script (a rough reproduction sketch follows below).
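
For reference, a merge like the one described above can be reproduced with peft roughly as follows. This is a sketch under stated assumptions, not the exact Colab script; the output directory name is illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "google/codegemma-7b-it"
adapter_id = "moesaif/meer-whale-1"

# Load the base model in float16, attach the LoRA adapter, then fold the
# adapter weights into the base weights.
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, adapter_id)
merged = model.merge_and_unload()

# Save the merged model as .safetensors alongside the base tokenizer.
merged.save_pretrained("meer-whale-1-merged", safe_serialization=True)
AutoTokenizer.from_pretrained(base_id).save_pretrained("meer-whale-1-merged")
```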