# typescript-chunks LoRA Models

This repository contains LoRA (Low-Rank Adaptation) models trained on the typescript-chunks dataset.

## Models in this repository

All seven adapters share the same configuration (r = 16, alpha = 32, dropout = 0.05, 1000 training examples, seed 123) and differ only in learning rate and number of training steps:

- `llama_finetune_typescript-chunks_r16_alpha=32_dropout=0.05_lr0.0003_data_size1000_max_steps=500_seed=123/` (lr = 3e-4, 500 steps)
- `llama_finetune_typescript-chunks_r16_alpha=32_dropout=0.05_lr0.0001_data_size1000_max_steps=100_seed=123/` (lr = 1e-4, 100 steps)
- `llama_finetune_typescript-chunks_r16_alpha=32_dropout=0.05_lr0.0002_data_size1000_max_steps=500_seed=123/` (lr = 2e-4, 500 steps)
- `llama_finetune_typescript-chunks_r16_alpha=32_dropout=0.05_lr0.0003_data_size1000_max_steps=100_seed=123/` (lr = 3e-4, 100 steps)
- `llama_finetune_typescript-chunks_r16_alpha=32_dropout=0.05_lr5e-05_data_size1000_max_steps=500_seed=123/` (lr = 5e-5, 500 steps)
- `llama_finetune_typescript-chunks_r16_alpha=32_dropout=0.05_lr0.0002_data_size1000_max_steps=100_seed=123/` (lr = 2e-4, 100 steps)
- `llama_finetune_typescript-chunks_r16_alpha=32_dropout=0.05_lr0.0001_data_size1000_max_steps=500_seed=123/` (lr = 1e-4, 500 steps)
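
If you prefer to enumerate the adapter folders programmatically rather than copying the names above, you can list the repository's files with `huggingface_hub`. A minimal sketch, assuming the repository id `supergoose/typescript-chunks` used in the usage example below:

```python
from huggingface_hub import list_repo_files

# Top-level folders that contain an adapter_config.json are LoRA adapters
repo_id = "supergoose/typescript-chunks"  # assumed repo id, as in the usage example
adapters = sorted(
    {f.split("/")[0] for f in list_repo_files(repo_id) if f.endswith("adapter_config.json")}
)
print(adapters)
```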

## Usage

To use these LoRA models, you'll need the `peft` library:

```bash
pip install peft transformers torch
```

Example usage:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model and its tokenizer
base_model_name = "your-base-model"  # Replace with the actual base model
model = AutoModelForCausalLM.from_pretrained(base_model_name)
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(
    model,
    "supergoose/typescript-chunks",
    subfolder="model_name_here",  # Replace with a specific model folder
)

# Generate and decode the output
inputs = tokenizer("Your prompt here", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
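
If you want a standalone model without the PEFT wrapper (for export or serving, say), the adapter weights can be merged into the base model. A minimal sketch, continuing from the example above; the output path is hypothetical:

```python
# Merge the LoRA weights into the base model and drop the PEFT wrapper;
# `merged` then behaves like a plain transformers model.
merged = model.merge_and_unload()
merged.save_pretrained("typescript-chunks-merged")  # hypothetical output path
tokenizer.save_pretrained("typescript-chunks-merged")
```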

## Training Details

- Dataset: typescript-chunks
- Training framework: LoRA/PEFT
- Models included: 7 variants

## File Structure

Each model folder contains:
- `adapter_config.json`: LoRA configuration
- `adapter_model.safetensors`: LoRA weights
- `tokenizer.json`: Tokenizer configuration
- Additional training artifacts
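
Because every folder ships an `adapter_config.json`, you can inspect an adapter, in particular which base model it was trained from, without downloading its weights. A minimal sketch, assuming the same repository id as in the usage example:

```python
from peft import PeftConfig

# Reads only adapter_config.json; the weights are not downloaded
config = PeftConfig.from_pretrained(
    "supergoose/typescript-chunks",
    subfolder="model_name_here",  # Replace with a specific model folder
)
print(config.peft_type)                # e.g. LORA
print(config.base_model_name_or_path)  # the base model to load in the usage example
```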

---

*Generated automatically by LoRA uploader script*