haizelabs/sft-svgeez-blocks-20251101T005904Z-checkpoint-5500

This is a fine-tuned version of meta-llama/Llama-3.1-8B-Instruct specialized for generating ASCII art.

Model Details

  • Base Model: meta-llama/Llama-3.1-8B-Instruct
  • Fine-tuning Method: Supervised Fine-Tuning (SFT) with LoRA
  • Dataset: ASCII Bench Haiku dataset
  • Purpose: Generate ASCII art from text descriptions

Usage

from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

# Load the base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

# Load the fine-tuned adapter
model = PeftModel.from_pretrained(base_model, "haizelabs/sft-svgeez-blocks-20251101T005904Z-checkpoint-5500")

# Example usage
def generate_ascii_art(prompt):
    messages = [
        {"role": "system", "content": "You are an expert ASCII artist. Generate clean, artistic ASCII representations of the requested objects."},
        {"role": "user", "content": prompt}
    ]
    
    # Build the prompt with the chat template; add_generation_prompt appends the
    # assistant header so the model starts generating the reply.
    input_text = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    # The template already includes the BOS token, so don't add special tokens again.
    inputs = tokenizer(input_text, return_tensors="pt", add_special_tokens=False).to(model.device)
    
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=1024,
            do_sample=True,
            temperature=0.7,
            pad_token_id=tokenizer.eos_token_id
        )
    
    # Decode only the newly generated tokens, skipping the prompt.
    response = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
    return response

# Generate ASCII art
ascii_art = generate_ascii_art("Draw an ASCII image of a cat")
print(ascii_art)
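
To serve the checkpoint without a peft dependency at inference time, the LoRA adapter can optionally be merged into the base weights. This is a minimal sketch using peft's merge_and_unload; the output directory name is only a placeholder.

# Optionally merge the LoRA adapter into the base weights and save a standalone model.
merged_model = model.merge_and_unload()
merged_model.save_pretrained("llama-3.1-8b-ascii-merged")  # placeholder path
tokenizer.save_pretrained("llama-3.1-8b-ascii-merged")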

Training Details

  • Training Steps: 672
  • Learning Rate: 5e-4
  • Batch Size: 12 (per device)
  • Gradient Accumulation Steps: 3
  • LoRA Rank: 128
  • LoRA Alpha: 256
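
The training script is not included with this checkpoint. The sketch below shows one way an equivalent SFT + LoRA run could be set up with peft and trl using the hyperparameters listed above; the dataset path, target modules, and output directory are placeholders and assumptions, not values from the original run.

from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Placeholder dataset ID -- substitute the actual ASCII art training data.
train_dataset = load_dataset("path/to/ascii-art-dataset", split="train")

# LoRA configuration matching the rank and alpha listed above;
# target_modules is an assumption.
peft_config = LoraConfig(
    r=128,
    lora_alpha=256,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Optimizer and batching settings matching the listed hyperparameters.
training_args = SFTConfig(
    output_dir="sft-ascii-art",          # placeholder output directory
    learning_rate=5e-4,
    per_device_train_batch_size=12,
    gradient_accumulation_steps=3,
    max_steps=672,
)

trainer = SFTTrainer(
    model="meta-llama/Llama-3.1-8B-Instruct",
    args=training_args,
    train_dataset=train_dataset,
    peft_config=peft_config,
)
trainer.train()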

Limitations

This model is fine-tuned specifically for ASCII art generation and may not perform well on other tasks. Output quality depends on the complexity of the requested subject and the clarity of the prompt.
