A collection of models that can be used to split natural language texts into meaningful chunks.

- `mamei16/chonky_mdistilbert-base-english-cased`: Token Classification, 0.1B parameters
- `mamei16/chonky_distilbert_base_uncased_1.1`: Token Classification, 66.4M parameters
- `mamei16/chonky_distilbert-base-multilingual-cased`: Token Classification, 0.1B parameters