# typescript-chunks LoRA Models

This repository contains LoRA (Low-Rank Adaptation) models trained on the typescript-chunks dataset.

## Models in this repository

All seven adapters share r = 16, alpha = 32, dropout = 0.05, a training set of 1,000 examples, and seed 123; they differ only in learning rate and number of training steps:

- `llama_finetune_typescript-chunks_r16_alpha=32_dropout=0.05_lr0.0003_data_size1000_max_steps=500_seed=123/`: lr 3e-4, 500 steps
- `llama_finetune_typescript-chunks_r16_alpha=32_dropout=0.05_lr0.0001_data_size1000_max_steps=100_seed=123/`: lr 1e-4, 100 steps
- `llama_finetune_typescript-chunks_r16_alpha=32_dropout=0.05_lr0.0002_data_size1000_max_steps=500_seed=123/`: lr 2e-4, 500 steps
- `llama_finetune_typescript-chunks_r16_alpha=32_dropout=0.05_lr0.0003_data_size1000_max_steps=100_seed=123/`: lr 3e-4, 100 steps
- `llama_finetune_typescript-chunks_r16_alpha=32_dropout=0.05_lr5e-05_data_size1000_max_steps=500_seed=123/`: lr 5e-5, 500 steps
- `llama_finetune_typescript-chunks_r16_alpha=32_dropout=0.05_lr0.0002_data_size1000_max_steps=100_seed=123/`: lr 2e-4, 100 steps
- `llama_finetune_typescript-chunks_r16_alpha=32_dropout=0.05_lr0.0001_data_size1000_max_steps=500_seed=123/`: lr 1e-4, 500 steps
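
Because the hyperparameters are encoded directly in each folder name, they can be recovered programmatically, which is handy when sweeping over the variants. A minimal sketch (the `parse_run_name` helper and its regex patterns are assumptions based on the folder names above, not part of the training code):

```python
import re

def parse_run_name(name: str) -> dict:
    """Extract hyperparameters from a run folder name such as
    'llama_finetune_typescript-chunks_r16_alpha=32_dropout=0.05_lr0.0003_...'."""
    patterns = {
        "r": r"_r(\d+)_",
        "alpha": r"alpha=([\d.]+)",
        "dropout": r"dropout=([\d.]+)",
        "lr": r"_lr([\d.e-]+)_",
        "data_size": r"data_size(\d+)",
        "max_steps": r"max_steps=(\d+)",
        "seed": r"seed=(\d+)",
    }
    out = {}
    for key, pattern in patterns.items():
        m = re.search(pattern, name)
        if m:
            val = m.group(1)
            # Values like '0.05' or '5e-05' become floats, the rest ints
            out[key] = float(val) if ("." in val or "e" in val) else int(val)
    return out

name = ("llama_finetune_typescript-chunks_r16_alpha=32_dropout=0.05"
        "_lr0.0003_data_size1000_max_steps=500_seed=123")
print(parse_run_name(name))
# {'r': 16, 'alpha': 32, 'dropout': 0.05, 'lr': 0.0003, 'data_size': 1000, 'max_steps': 500, 'seed': 123}
```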

## Usage

To use these LoRA models, you'll need the `peft` library:

```bash
pip install peft transformers torch
```

Example usage:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model these adapters were trained on
base_model_name = "your-base-model"  # Replace with the actual base model
model = AutoModelForCausalLM.from_pretrained(base_model_name)
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

# Load a LoRA adapter from one of the subfolders listed above
model = PeftModel.from_pretrained(
    model,
    "supergoose/typescript-chunks",
    subfolder="model_name_here",  # Replace with a specific model folder
)
model.eval()

# Generate with the adapted model
inputs = tokenizer("Your prompt here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Training Details

- Dataset: typescript-chunks
- Training framework: LoRA via the PEFT library
- Models included: 7 variants sweeping learning rate (5e-5 to 3e-4) and training length (100 or 500 steps)

## Files Structure

Each model folder contains:
- `adapter_config.json`: LoRA configuration
- `adapter_model.safetensors`: LoRA weights
- `tokenizer.json`: Tokenizer configuration
- Additional training artifacts
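
For reference, the `adapter_config.json` in each folder will look roughly like the sketch below. The `r`, `lora_alpha`, and `lora_dropout` values are taken from the folder names; `base_model_name_or_path` and `target_modules` are illustrative assumptions and will differ in the actual files:

```json
{
  "base_model_name_or_path": "your-base-model",
  "peft_type": "LORA",
  "task_type": "CAUSAL_LM",
  "r": 16,
  "lora_alpha": 32,
  "lora_dropout": 0.05,
  "target_modules": ["q_proj", "v_proj"]
}
```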

---
*Generated automatically by LoRA uploader script*