---
language: en
license: apache-2.0
tags:
- fine-tuned
- gemma
- lora
- gemma-garage
base_model: google/gemma-3-1b-it
pipeline_tag: text-generation
---
# gemma-3-1b-it-fine-tuned-demo-5

A fine-tuned version of google/gemma-3-1b-it from Gemma Garage.

This model was fine-tuned using [Gemma Garage](https://github.com/your-repo/gemma-garage), a platform for fine-tuning Gemma models with LoRA.
## Model Details

- **Base Model**: google/gemma-3-1b-it
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation); see the adapter-loading sketch after this list
- **Training Platform**: Gemma Garage
- **Fine-tuned on**: 2025-08-21
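This card does not state whether the repository ships merged weights or standalone LoRA adapters. If only the adapters are published, they can be attached to the base model with the `peft` library. The following is a hedged sketch under that assumption, not a confirmed description of this repo's layout:

```python
# Hedged sketch: attaching LoRA adapters to the base model with PEFT.
# Assumes this repo ships adapter weights; if it ships merged weights,
# the plain AutoModelForCausalLM load shown under Usage is enough.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("google/gemma-3-1b-it")
model = PeftModel.from_pretrained(base, "LucasFMartins/gemma-3-1b-it-fine-tuned-demo-5")
model = model.merge_and_unload()  # optionally fold the low-rank updates into the dense weights
```

Merging removes the runtime dependency on `peft`, at the cost of no longer being able to detach the adapter.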
## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and tokenizer from the Hub
tokenizer = AutoTokenizer.from_pretrained("LucasFMartins/gemma-3-1b-it-fine-tuned-demo-5")
model = AutoModelForCausalLM.from_pretrained("LucasFMartins/gemma-3-1b-it-fine-tuned-demo-5")

# Generate text
inputs = tokenizer("Your prompt here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
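Because the base model is instruction-tuned, prompts generally work better when wrapped in the chat template. A minimal sketch, assuming the fine-tuned tokenizer retains Gemma's chat template:

```python
# Sketch: chat-template prompting (assumes the tokenizer kept the
# base model's chat template through fine-tuning).
messages = [{"role": "user", "content": "Explain LoRA in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=100)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```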
## Training Details

This model was fine-tuned on the Gemma Garage platform. Run details:

- Request ID: 7c24eaa3-6289-41e4-b09a-c7e963eb5ed2
- Training completed on: 2025-08-21 01:38:57 UTC

For more information about Gemma Garage, visit [our GitHub repository](https://github.com/your-repo/gemma-garage).