mlabonne committed
Commit 97483eb · verified · 1 Parent(s): 429e990

Update README.md

Files changed (1)
  1. README.md +10 -0
README.md CHANGED
@@ -133,6 +133,16 @@ We evaluated each model on a proprietary benchmark that was specifically designe
  - llama.cpp: [LFM2-350M-Extract-GGUF](https://huggingface.co/LiquidAI/LFM2-350M-Extract-GGUF)
  - LEAP: [LEAP model library](https://leap.liquid.ai/models?model=lfm2-350M-extract)
 
+ You can use the following Colab notebooks for easy inference and fine-tuning:
+
+ | Notebook | Description | Link |
+ |-------|------|------|
+ | Inference | Run the model with Hugging Face's transformers library. | <a href="https://colab.research.google.com/drive/1_HFBuNROTnI-SSZ2zEpqpjJ6SnrsWCU3?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
+ | SFT (TRL) | Supervised Fine-Tuning (SFT) notebook with a LoRA adapter using TRL. | <a href="https://colab.research.google.com/drive/1j5Hk_SyBb2soUsuhU0eIEA9GwLNRnElF?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
+ | DPO (TRL) | Preference alignment with Direct Preference Optimization (DPO) using TRL. | <a href="https://colab.research.google.com/drive/1MQdsPxFHeZweGsNx4RH7Ia8lG8PiGE1t?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
+ | SFT (Axolotl) | Supervised Fine-Tuning (SFT) notebook with a LoRA adapter using Axolotl. | <a href="https://colab.research.google.com/drive/155lr5-uYsOJmZfO6_QZPjbs8hA_v8S7t?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
+ | SFT (Unsloth) | Supervised Fine-Tuning (SFT) notebook with a LoRA adapter using Unsloth. | <a href="https://colab.research.google.com/drive/1HROdGaPFt1tATniBcos11-doVaH7kOI3?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
+
  ## 📬 Contact
 
  If you are interested in custom solutions with edge deployment, please contact [our sales team](https://www.liquid.ai/contact).
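
For quick reference outside Colab, here is a minimal local inference sketch along the lines of the Inference notebook row added above. It assumes the base checkpoint is published as `LiquidAI/LFM2-350M-Extract` on the Hugging Face Hub (mirroring the GGUF repo name linked in the diff) and that your installed `transformers` version supports the LFM2 architecture; treat it as a sketch, not the notebook's exact code.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-350M-Extract"  # assumed Hub repo id (mirrors the GGUF repo name)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# Extraction-style prompt: the model is intended to pull structured fields out of raw text.
messages = [
    {"role": "user", "content": "Extract the name and date from: 'Invoice issued to Jane Doe on 2024-03-12.'"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For on-device deployment, the GGUF build linked above is the llama.cpp route instead.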
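Similarly, a hedged sketch of LoRA supervised fine-tuning with TRL, in the spirit of the SFT (TRL) notebook row. The dataset path `train.jsonl`, the output directory, and all hyperparameters are placeholders, and the model id is again an assumption; recent `trl` releases accept a model id string directly.

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Placeholder dataset: a local JSONL file with chat-formatted "messages" records.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

# LoRA adapter configuration (illustrative rank/targets, not tuned values).
peft_config = LoraConfig(r=16, lora_alpha=32, target_modules="all-linear", task_type="CAUSAL_LM")

training_args = SFTConfig(
    output_dir="lfm2-350m-extract-sft",  # hypothetical output directory
    per_device_train_batch_size=4,
    learning_rate=2e-4,
    num_train_epochs=1,
)

trainer = SFTTrainer(
    model="LiquidAI/LFM2-350M-Extract",  # assumed Hub repo id
    args=training_args,
    train_dataset=dataset,
    peft_config=peft_config,             # attaches the LoRA adapter instead of full fine-tuning
)
trainer.train()
```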
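And a hedged sketch of preference alignment with TRL's DPO trainer, matching the DPO (TRL) row. It assumes a preference dataset with `prompt`/`chosen`/`rejected` columns in a placeholder `preferences.jsonl`; the model id and hyperparameters are again assumptions rather than values from the notebook.

```python
from datasets import load_dataset
from transformers import AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "LiquidAI/LFM2-350M-Extract"  # assumed Hub repo id

# Placeholder preference data with "prompt", "chosen", and "rejected" columns.
dataset = load_dataset("json", data_files="preferences.jsonl", split="train")

training_args = DPOConfig(
    output_dir="lfm2-350m-extract-dpo",  # hypothetical output directory
    per_device_train_batch_size=2,
    learning_rate=5e-6,
    beta=0.1,  # weight of the implicit KL penalty against the reference model
)

trainer = DPOTrainer(
    model=model_id,  # TRL keeps a frozen reference copy of the policy internally
    args=training_args,
    train_dataset=dataset,
    processing_class=AutoTokenizer.from_pretrained(model_id),
)
trainer.train()
```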