typescript-chunks LoRA Models

This repository contains LoRA (Low-Rank Adaptation) adapters for a Llama base model, fine-tuned on the typescript-chunks dataset.

Models in this repository:

All seven adapters share r=16, alpha=32, dropout=0.05, a 1000-example training subset, and seed 123; they differ only in learning rate and number of training steps (a sketch for decoding these folder names follows the list):

  • llama_finetune_typescript-chunks_r16_alpha=32_dropout=0.05_lr0.0003_data_size1000_max_steps=500_seed=123/: lr=3e-4, 500 steps
  • llama_finetune_typescript-chunks_r16_alpha=32_dropout=0.05_lr0.0001_data_size1000_max_steps=100_seed=123/: lr=1e-4, 100 steps
  • llama_finetune_typescript-chunks_r16_alpha=32_dropout=0.05_lr0.0002_data_size1000_max_steps=500_seed=123/: lr=2e-4, 500 steps
  • llama_finetune_typescript-chunks_r16_alpha=32_dropout=0.05_lr0.0003_data_size1000_max_steps=100_seed=123/: lr=3e-4, 100 steps
  • llama_finetune_typescript-chunks_r16_alpha=32_dropout=0.05_lr5e-05_data_size1000_max_steps=500_seed=123/: lr=5e-5, 500 steps
  • llama_finetune_typescript-chunks_r16_alpha=32_dropout=0.05_lr0.0002_data_size1000_max_steps=100_seed=123/: lr=2e-4, 100 steps
  • llama_finetune_typescript-chunks_r16_alpha=32_dropout=0.05_lr0.0001_data_size1000_max_steps=500_seed=123/: lr=1e-4, 500 steps
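Each folder name encodes its run's hyperparameters. A minimal sketch for recovering them programmatically (the parse_run_name helper is illustrative, not part of this repository):

import re

def parse_run_name(name: str) -> dict:
    """Extract hyperparameters from an adapter folder name (illustrative helper).

    Folder names mix two styles: 'key=value' (alpha=32, max_steps=500)
    and 'keyvalue' (r16, lr0.0003), so each field gets its own pattern.
    """
    patterns = {
        "r": r"_r(\d+)_",
        "alpha": r"alpha=([\d.]+)",
        "dropout": r"dropout=([\d.]+)",
        "lr": r"lr([\d.e-]+?)_",
        "data_size": r"data_size(\d+)",
        "max_steps": r"max_steps=(\d+)",
        "seed": r"seed=(\d+)",
    }
    return {k: m.group(1) for k, p in patterns.items() if (m := re.search(p, name))}

name = "llama_finetune_typescript-chunks_r16_alpha=32_dropout=0.05_lr0.0003_data_size1000_max_steps=500_seed=123"
print(parse_run_name(name))
# {'r': '16', 'alpha': '32', 'dropout': '0.05', 'lr': '0.0003', 'data_size': '1000', 'max_steps': '500', 'seed': '123'}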

Usage

To use these LoRA adapters, you'll need the peft library, along with transformers and torch:

pip install peft transformers torch

Example usage:

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load base model
base_model_name = "your-base-model"  # Replace with actual base model
model = AutoModelForCausalLM.from_pretrained(base_model_name)
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

# Load LoRA adapter
model = PeftModel.from_pretrained(
    model, 
    "supergoose/typescript-chunks",
    subfolder="model_name_here"  # Replace with specific model folder
)

# Run inference and decode the output
inputs = tokenizer("Your prompt here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
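To get a standalone model without the PEFT wrapper, the adapter can be merged into the base weights. A sketch, assuming the base model above matches the one used for training (the output path is a placeholder):

# Merge the LoRA weights into the base model and drop the PEFT wrapper
merged = model.merge_and_unload()
merged.save_pretrained("typescript-chunks-merged")  # placeholder path
tokenizer.save_pretrained("typescript-chunks-merged")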

Training Details

  • Dataset: typescript-chunks
  • Training framework: LoRA via the Hugging Face peft library
  • Models included: 7 variants (the hyperparameter sweep listed above)

Files Structure

Each model folder contains the following; a sketch for inspecting the adapter config follows the list:

  • adapter_config.json: LoRA configuration
  • adapter_model.safetensors: LoRA weights
  • tokenizer.json: Tokenizer configuration
  • Additional training artifacts
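adapter_config.json records the base model and LoRA hyperparameters, so you can inspect an adapter before loading any weights. A minimal sketch using peft's PeftConfig (the subfolder value is a placeholder):

from peft import PeftConfig

config = PeftConfig.from_pretrained(
    "supergoose/typescript-chunks",
    subfolder="model_name_here",  # replace with a specific model folder
)
print(config.base_model_name_or_path)  # base model the adapter expects
print(config.r, config.lora_alpha, config.lora_dropout)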

Generated automatically by a LoRA uploader script.
