# Text Simplification Model (H100 Trained)

## Training Results
- Training Loss: 0.2796
- Training Time: 22:39 (3 epochs)
- Dataset: GEM/wiki_auto_asset_turk (483,801 samples)
- GPU: NVIDIA H100 80GB
- Batch Size: 64
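
The training script itself is not published, so the following is a minimal, hypothetical reconstruction of such a fine-tuning run using Hugging Face's `Seq2SeqTrainer`. The batch size, epoch count, and dataset come from the figures above; the base checkpoint (`facebook/bart-base`), the GEM `source`/`target` column names, mixed precision, and all other hyperparameters are assumptions.

```python
# Hedged reproduction sketch -- hyperparameters not stated on this card are guesses.
from datasets import load_dataset
from transformers import (
    BartTokenizer,
    BartForConditionalGeneration,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

base = "facebook/bart-base"  # assumption: the card does not name the base checkpoint
tokenizer = BartTokenizer.from_pretrained(base)
model = BartForConditionalGeneration.from_pretrained(base)

# Recent datasets versions may require trust_remote_code=True for script-based datasets
dataset = load_dataset("GEM/wiki_auto_asset_turk", split="train")

def preprocess(batch):
    # Assumption: GEM exposes complex/simple pairs as "source"/"target" columns
    model_inputs = tokenizer(batch["source"], max_length=128, truncation=True)
    labels = tokenizer(text_target=batch["target"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True, remove_columns=dataset.column_names)

args = Seq2SeqTrainingArguments(
    output_dir="text-simplification-runpod",
    per_device_train_batch_size=64,  # from the card
    num_train_epochs=3,              # from the card
    fp16=True,                       # assumption: mixed precision on the H100
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)
trainer.train()
```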
## Usage

```python
from transformers import BartTokenizer, BartForConditionalGeneration

# Load the fine-tuned checkpoint and its tokenizer
model = BartForConditionalGeneration.from_pretrained("Lorobert/text-simplification-runpod")
tokenizer = BartTokenizer.from_pretrained("Lorobert/text-simplification-runpod")

# Tokenize the input, truncating to 128 tokens
text = "Complex sentence here."
inputs = tokenizer(text, return_tensors="pt", max_length=128, truncation=True)

# Generate a simplified version with beam search
outputs = model.generate(**inputs, max_length=128, num_beams=4)
simplified = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(simplified)
```
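
Continuing from the snippet above, several sentences can be simplified in one batched call; the padding settings and the GPU move are assumptions, since the card only shows single-sentence CPU usage.

```python
# Hedged sketch: batched inference, reusing `model` and `tokenizer` from above
import torch

sentences = [
    "The committee deliberated extensively before reaching a consensus.",
    "Photosynthesis converts light energy into chemical energy.",
]

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# Pad to the longest sentence in the batch so tensors line up
inputs = tokenizer(
    sentences, return_tensors="pt", padding=True, max_length=128, truncation=True
).to(device)
outputs = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```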