# Tigre Word Embedding Models (FastText)
| Model Name | Language | Task | License |
|---|---|---|---|
| tig.bin | Tigre (tig) | Word Embeddings (FastText) | CC-BY-SA-4.0 |
| tigre.vec | Tigre (tig) | Word Embeddings (Word2Vec format) | CC-BY-SA-4.0 |
## Overview
This repository introduces the first comprehensive public collection of resources for the Tigre language, an under-resourced South Semitic language within the Afro-Asiatic family. The release aggregates multiple modalities (text and speech) and provides baseline models for several core NLP tasks, including language modeling, ASR, and machine translation. The embedding models here were trained on a substantial Tigre corpus and can serve as building blocks for downstream Natural Language Processing (NLP) tasks involving this low-resource language.
## What are FastText Embeddings?
FastText is an extension of the popular Word2Vec model, which represents words as dense, real-valued vectors in a multi-dimensional space. The key advantage of FastText is that it represents each word as a bag of character n-grams (subwords). This subword information allows the model to:
- Generate vectors for out-of-vocabulary (OOV) words (e.g., typos or unseen compounds) by summing the vectors of their character n-grams (see the sketch after this list).
- Capture morphological structure, which is crucial for morphologically rich languages like Tigre, where words have complex prefixes and suffixes.
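As a quick illustration of the subword mechanism, the sketch below queries the full binary model (tig.bin, described below) for a word form that may not appear in the training vocabulary. The local path and the example word are placeholders; any Ge'ez string works, since its vector is assembled from character n-grams.

```python
import fasttext

# Load the full binary model (local path is a placeholder)
ft = fasttext.load_model("tig.bin")

# FastText decomposes the word into character n-grams, so even an
# unseen inflected form still receives a vector.
word = "ቤታት"  # illustrative form; may or may not be in the vocabulary
subwords, subword_ids = ft.get_subwords(word)
print(subwords)                       # the character n-grams backing this word
print(ft.get_word_vector(word)[:5])   # first 5 dimensions of its vector
```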
The models provided here are:
- tig.bin: The binary FastText model (full model), which allows for querying subword vectors and OOV words.
- tigre.vec: A plain text file containing only the full word vectors, compatible with tools like gensim and used for downstream tasks or visualizations.
## Model Training & Data Curation
### Corpus and Preprocessing
The model was trained on the enriched Tigre corpus provided in the BeitTigreAI/tigre-data-dictionary dataset (among others). The corpus underwent rigorous cleaning to ensure high quality (a sketch of these steps follows the list):
- Punctuation Removal: Removal of Ge'ez punctuation (e.g., ፡, ።, ፥) and numbers.
- Character Filtering: Removal of any characters outside the Ge'ez script block (U+1200–U+135F), including Latin letters and symbols.
- Line Chunking: The cleaned text was split into lines with a maximum of 15 words per line.
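The exact cleaning script is not published with this repository; the following is a minimal sketch of the steps above, assuming simple regex-based filtering and greedy chunking.

```python
import re

# Keep only characters in the Ge'ez script block (U+1200-U+135F);
# this drops Ge'ez punctuation, digits, Latin letters, and symbols.
NON_GEEZ = re.compile(r"[^\u1200-\u135F]+")

def clean_text(text: str) -> str:
    """Replace non-Ge'ez runs with spaces and normalize whitespace."""
    return re.sub(r"\s+", " ", NON_GEEZ.sub(" ", text)).strip()

def chunk_lines(text: str, max_words: int = 15):
    """Split cleaned text into lines of at most max_words words."""
    words = clean_text(text).split()
    for i in range(0, len(words), max_words):
        yield " ".join(words[i:i + max_words])
```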
### FastText Parameters
The model was trained using the Continuous Bag-of-Words (CBOW) architecture; its vector space was later aligned to the standard English FastText vector space (see the next section). A training call with these parameters is sketched after the table.
| Parameter | Value | Rationale |
|---|---|---|
| Model | cbow | Standard choice for word embeddings. |
| Dimension (dim) | 300 | Matches the standard pre-trained English models (cc.en.300.bin) for later cross-lingual alignment. |
| Epochs | 10 | Standard training duration. |
| Minimum Count (minCount) | 2 | Filters out very rare words to improve robustness. |
| Min/Max N-grams (minn, maxn) | 5/5 | Restricts subword features to character 5-grams. |
| Negative Sampling (neg) | 10 | Standard negative sampling rate. |
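For reference, a training call with these parameters might look like the sketch below. The input filename is a placeholder; the official training script is not included in this repository.

```python
import fasttext

# Train CBOW embeddings with the hyperparameters from the table above.
# "tigre_clean.txt" stands in for the cleaned, chunked corpus file.
model = fasttext.train_unsupervised(
    "tigre_clean.txt",
    model="cbow",
    dim=300,      # matches cc.en.300.bin for cross-lingual alignment
    epoch=10,
    minCount=2,
    minn=5,
    maxn=5,
    neg=10,
)
model.save_model("tig.bin")
```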
## Derived Asset: Generated Dictionary
The aligned Tigre and English vector spaces were used to generate a large-scale Tigre-English dictionary, leveraging the fact that similar words in different languages should be close in the shared vector space after alignment.
- Vector Alignment Method: The Tigre and English vector spaces were aligned using the VecMap tool in a supervised manner, utilizing the existing 6,164-entry Tigre-English-Tigrinya Dictionary as a seed translation lexicon.
- Generated Dictionary: A new dictionary file, tig_eng_generated_dict.tsv, was created by finding the Top-1 nearest English neighbor for every unique Tigre word in the mapped Tigre vector space (see the sketch after this list).
- Entries: This generated dictionary contains 30,000+ entries, significantly expanding the initial seed dictionary.
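The generation step is not distributed as a script; the sketch below shows one way to reproduce it with gensim, assuming the VecMap-mapped .vec files are available locally (all filenames are illustrative).

```python
from gensim.models import KeyedVectors

# Aligned spaces produced by VecMap in supervised mode, e.g.:
#   python3 map_embeddings.py --supervised seed_dict.txt \
#       tigre.vec cc.en.300.vec tig.mapped.vec eng.mapped.vec
tig = KeyedVectors.load_word2vec_format("tig.mapped.vec", binary=False)
eng = KeyedVectors.load_word2vec_format("eng.mapped.vec", binary=False)

with open("tig_eng_generated_dict.tsv", "w", encoding="utf-8") as out:
    for word in tig.index_to_key:
        # Top-1 nearest English neighbor in the shared space
        translation, score = eng.similar_by_vector(tig[word], topn=1)[0]
        out.write(f"{word}\t{translation}\t{score:.4f}\n")
```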
## How to Load and Use the Models
The models can be easily downloaded and loaded using the Hugging Face Hub client library, fasttext, or gensim.
1. Using gensim (for .vec files)

The .vec file is ideal for simple embedding lookups and visualization.
```python
from huggingface_hub import hf_hub_download
from gensim.models import KeyedVectors

# Download the .vec file from the Hub
vec_path = hf_hub_download(
    repo_id="BeitTigreAI/tigre-data-fasttext",
    filename="tigre.vec",
    repo_type="dataset",
)

# Load embeddings
model = KeyedVectors.load_word2vec_format(vec_path, binary=False)

# Example queries
print("Most similar to 'ቤት' (house):", model.most_similar("ቤት"))
print("Most similar to 'ዋልዳይት' (mother/parent):", model.most_similar("ዋልዳይት"))
```
Example output:

```text
Most similar to 'ቤት' (house): [('ወቤት', 0.54), ('ሐደክዉ', 0.50), ('ኢመሓዛትካ', 0.47), ...]
Most similar to 'ዋልዳይት' (mother/parent): [('ዋልዳይትተ', 0.94), ('ዋልዳይትናመ', 0.93), ('ከዋልዳይት', 0.93), ...]
```
2. Using fasttext (for .bin files)
The .bin file is the full FastText model, which allows you to query vectors for unseen words and character n-grams.
```python
from huggingface_hub import hf_hub_download
import fasttext

# Download the .bin file from the Hub
bin_path = hf_hub_download(
    repo_id="BeitTigreAI/tigre-data-fasttext",
    filename="tig.bin",
    repo_type="dataset",
)

# Load the full model
ft = fasttext.load_model(bin_path)

# Example queries
print("Vector for 'ሻም':", ft.get_word_vector("ሻም")[:10])
print("Nearest neighbors for 'ሻም':", ft.get_nearest_neighbors("ሻም"))
```
Example output:

```text
Vector for 'ሻም': [-2.2306, 4.1328, -1.3079, 1.3905, -3.1971, -1.2134, 0.4555, -2.9989, -0.7958, -0.2645]
Nearest neighbors for 'ሻም': [(0.55, 'ሻማት'), (0.53, 'ዴሪር'), (0.46, 'ምልህዮት'), ...]
```
## Dataset Structure
```text
tigre-data-fasttext/
├── README.md
├── config.json
├── tig.bin
└── tigre.vec
```
## Bias, Risks & Known Limitations

- Training Corpus: The model quality is directly tied to the coverage and quality of the training corpus. While the text was extensively cleaned, any underlying limitations in the corpus's dialect, topic, or date coverage will be reflected in the embeddings.
- Vector Alignment: The cross-lingual dictionary generation relies on the initial, smaller, manually curated dictionary for alignment. Performance for words that are not closely related to the seed dictionary entries may be less accurate.
- English Source Bias: The initial English vocabulary for the seed dictionary was drawn from the most frequently used vocabulary in Webster's Revised Unabridged Dictionary (1913 edition). This may result in a bias toward older or less modern English terms, which can subtly affect the vector alignment process.
## Licensing (Per Modality)

All assets in this repository (the FastText models and the derived dictionary) are released under CC-BY-SA-4.0.
## Citation
If you use this resource in your work, please cite the repository by referencing its Hugging Face entry:

- Repository Name: Tigre Word Embedding Models (FastText)
- URL: https://huggingface.co/datasets/BeitTigreAI/tigre-data-fasttext