
In this experiment, I finetuned minini-140m-base on training samples drawn from FineTome-100k and OpenMathReasoning (10k samples only), using the SM3 optimizer with a cosine scheduler and a learning rate of 2e-5.
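
For reference, a minimal sketch of that setup, assuming the SM3 implementation from the torch-optimizer package and the Hugging Face Trainer; the base repo id, warmup/step counts, and train_dataset are placeholders, not the actual training script:

# Hypothetical reconstruction of the training setup, not the actual script.
# Assumes torch-optimizer's SM3 and a pre-tokenized train_dataset
# (FineTome-100k plus the 10k OpenMathReasoning subset).
import torch
from torch_optimizer import SM3
from transformers import (
    AutoModelForCausalLM,
    Trainer,
    TrainingArguments,
    get_cosine_schedule_with_warmup,
)

model = AutoModelForCausalLM.from_pretrained(
    "minini-140m-base",            # base model referenced above; exact repo id may differ
    torch_dtype=torch.bfloat16,
)

optimizer = SM3(model.parameters(), lr=2e-5)   # SM3 with lr 2e-5, as described
num_training_steps = 10_000                    # placeholder; depends on batch size and dataset size
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=100, num_training_steps=num_training_steps
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="minini-140m-it", bf16=True, logging_steps=50),
    train_dataset=train_dataset,               # dataset preparation omitted here
    optimizers=(optimizer, scheduler),         # hand the custom optimizer/scheduler to Trainer
)
trainer.train()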

I've released this initial experimental checkpoint as a foundation for further exploration. I plan to conduct more experiments with different optimization strategies (https://github.com/HomebrewML/HeavyBall) and better-curated datasets, and will update the model weights accordingly.

from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer, StoppingCriteria
import torch

class MyStoppingCriteria(StoppingCriteria):
    """Stop generation once target_sequence appears in the newly generated text."""

    def __init__(self, target_sequence, prompt):
        self.target_sequence = target_sequence
        self.prompt = prompt

    def __call__(self, input_ids, scores, **kwargs):
        # Decode everything generated so far and strip the prompt,
        # so only newly generated tokens are checked for the stop string.
        generated_text = tokenizer.decode(input_ids[0])
        generated_text = generated_text.replace(self.prompt, '')
        return self.target_sequence in generated_text

    # __len__/__iter__ let a single criterion be passed where a
    # StoppingCriteriaList is expected by generate().
    def __len__(self):
        return 1

    def __iter__(self):
        yield self

modelpath = "aloobun/minini-140m-chat"
model = AutoModelForCausalLM.from_pretrained(
    modelpath,
    torch_dtype=torch.bfloat16,
    device_map="cuda",
    trust_remote_code=True,       
)
tokenizer = AutoTokenizer.from_pretrained(
    modelpath,
    trust_remote_code=True,      
    use_fast=False,
)

messages = [
    {"role": "user", "content": "what is life?"}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,
)

streamer = TextStreamer(tokenizer, skip_prompt=True)

_ = model.generate(
    **tokenizer(text, return_tensors="pt").to("cuda"),
    max_new_tokens=256,
    do_sample=True,   # sampling must be enabled for temperature/top_p/top_k to take effect
    temperature=0.8,
    top_p=0.8,
    top_k=20,
    streamer=streamer,
    stopping_criteria=MyStoppingCriteria("<|im_end|>", text),
    pad_token_id=tokenizer.eos_token_id,
)
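
A note on the stopping criterion: instead of the __len__/__iter__ shims above, the criterion can also be wrapped in transformers' StoppingCriteriaList, which is the container generate normally expects:

from transformers import StoppingCriteriaList

# Standard wrapper; pass this as stopping_criteria=stopping to model.generate instead.
stopping = StoppingCriteriaList([MyStoppingCriteria("<|im_end|>", text)])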