Julia α

Julia α is a fine-tuned version of Qwen/Qwen3-4B.

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained("TensorLabsAI/julia-alpha")
tokenizer = AutoTokenizer.from_pretrained("TensorLabsAI/julia-alpha")
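
A minimal generation sketch, continuing from the snippet above. It assumes the fine-tune retains the standard Qwen3 chat template; the prompt text and generation settings are illustrative, not prescribed by this model card.

# Continues from the loading snippet above; assumes the Qwen3 chat template is retained.
messages = [
    {"role": "user", "content": "Explain in one sentence what a fine-tuned language model is."}
]

# Tokenize the chat-formatted prompt and move it to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a short completion and decode only the newly generated tokens.
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))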
Model size: 4.02B params · Tensor type: F16 · Weights: Safetensors
Model tree for TensorLabsAI/Julia-Alpha

Base model: Qwen/Qwen3-4B-Base
Fine-tuned from: Qwen/Qwen3-4B
Quantizations: 2 models
