# gemma-2b-python-expert-lora (Text-to-LoRA)
This LoRA adapter specializes the base model for expert-level Python programming. Created using Sakana AI's Text-to-LoRA technology.
## Model Details
- Base Model: google/gemma-2b-it
- LoRA Rank: 16
- Target Modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
- Task: Python Code Generation
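To make the "LoRA Rank: 16" detail concrete: a LoRA adapter trains two small matrices `A` (r × d_in) and `B` (d_out × r) per target module instead of the full weight, and the effective weight at inference is `W + (alpha / r) * B @ A`. The sketch below illustrates this with tiny pure-Python matrices and a rank-1 adapter; the function names and `alpha` value are illustrative, not taken from this adapter's config.

```python
# Sketch of a rank-r LoRA update: W_eff = W + (alpha / r) * (B @ A),
# where A is (r x d_in) and B is (d_out x r). Pure Python, no dependencies.

def matmul(X, Y):
    """Multiply two matrices given as lists of lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def apply_lora(W, A, B, alpha, r):
    """Return the merged weight W + (alpha / r) * B @ A."""
    delta = matmul(B, A)
    scale = alpha / r
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# Tiny example: 2x2 base weight with a rank-1 adapter
# (this card's adapter uses r=16 across seven projection modules).
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]           # r x d_in  = 1 x 2
B = [[0.5], [1.0]]         # d_out x r = 2 x 1
W_eff = apply_lora(W, A, B, alpha=1, r=1)
print(W_eff)  # -> [[1.5, 1.0], [1.0, 3.0]]
```

Only `r * (d_in + d_out)` parameters are trained per module, which is why the adapter is a small download compared to the base model.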
## Usage

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load base model and tokenizer
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")

# Load LoRA adapter
model = PeftModel.from_pretrained(model, "rohitnagareddy/gemma-2b-python-expert-lora")

# Generate Python code
prompt = "Write a Python function to implement binary search:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Capabilities
- Clean, documented Python code
- Type hints and error handling
- PEP 8 compliance
- Algorithm implementation
- Web development
- Data processing
- Testing and debugging
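As an illustration of the style these capabilities target (type hints, docstrings, PEP 8), here is a hand-written binary search of the kind the example prompt above asks for. This is a sketch of the intended output style, not actual model output:

```python
from typing import Sequence

def binary_search(items: Sequence[int], target: int) -> int:
    """Return the index of ``target`` in sorted ``items``, or -1 if absent.

    Runs in O(log n) time; ``items`` must be sorted in ascending order.
    """
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # -> 3
```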
## Citation

```bibtex
@misc{sakana2024texttolora,
  title  = {Text-to-LoRA},
  author = {Sakana AI},
  year   = {2024},
  url    = {https://github.com/SakanaAI/text-to-lora}
}
```