llama-TB-50M-latest

This is a Llama-architecture model with ~50M parameters, trained on Turkish text.

This model was trained for experimental use. The training dataset was built from Turkish books, and the tokenizer was trained on a Turkish news dataset.

You can see the tokenizer here.

You can use the modeling files from this GitHub repo.
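
Below is a hedged loading sketch in Python. The repo id `aliarda/llama-TB-50M-latest` is taken from this page, while `trust_remote_code=True` (to pick up the custom modeling files) and the prompt are illustrative assumptions, not confirmed usage instructions.

```python
# Hedged sketch: load the checkpoint from the Hugging Face Hub.
# Assumption: the custom modeling files are wired into the auto classes,
# so trust_remote_code=True is needed; the repo id comes from this page.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "aliarda/llama-TB-50M-latest"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

# Illustrative Turkish prompt ("Once upon a time"); the context length
# is 512, so keep prompt plus new tokens under that budget.
inputs = tokenizer("Bir zamanlar", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```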

  • Model Size: 52,177,152 parameters
  • Vocab Size: 32,768
  • Context Length: 512 tokens
  • Embedding Dimension: 256
  • Attention Heads: 128
  • KV Groups: 64
  • Hidden Dimension: 2048
  • Number of Layers: 20
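
For reference, these hyperparameters line up with a stock `transformers` `LlamaConfig`. The sketch below assumes the custom modeling files follow standard Llama structure, with "KV Groups" mapping to `num_key_value_heads` (grouped-query attention) and "Hidden Dimension" read as the MLP intermediate size; under those assumptions, with untied input/output embeddings, the config reproduces the listed 52,177,152 parameters exactly.

```python
# Minimal sketch: express the listed hyperparameters as a LlamaConfig.
# Assumption: the architecture matches stock Llama.
from transformers import LlamaConfig, LlamaForCausalLM

config = LlamaConfig(
    vocab_size=32_768,            # Vocab Size
    max_position_embeddings=512,  # Context Length
    hidden_size=256,              # Embedding Dimension
    num_attention_heads=128,      # Attention Heads (head_dim = 256/128 = 2)
    num_key_value_heads=64,       # KV Groups
    intermediate_size=2048,       # Hidden Dimension (MLP)
    num_hidden_layers=20,         # Number of Layers
    tie_word_embeddings=False,    # untied embeddings match the listed count
)

model = LlamaForCausalLM(config)
print(sum(p.numel() for p in model.parameters()))  # 52,177,152
```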