truworthai committed
Commit 606524d · verified · 1 Parent(s): 5475464

Upload README.md with huggingface_hub

---
library_name: mlx-vlm
tags:
- mlx
- vision-language-model
- fine-tuned
- brake-components
- visual-ai
base_model: mlx-community/SmolVLM-256M-Instruct-bf16
---

# DynamicVisualLearning-v2 - MLX Fine-tuned Vision Language Model

This model was fine-tuned using the VisualAI platform with MLX (Apple Silicon optimization).

## 🚀 Model Details
- **Base Model**: `mlx-community/SmolVLM-256M-Instruct-bf16`
- **Training Platform**: VisualAI (MLX-optimized)
- **Hardware**: Apple Silicon (MLX)
- **Training Job ID**: 2
- **Created**: 2025-06-03 03:29:58
- **Training Completed**: ✅ Yes

## 📊 Training Data
This model was trained on a combined dataset of visual examples and paired conversations.

## 🛠️ Usage

### Installation
```bash
pip install mlx-vlm
```

### Loading the Model
```python
from mlx_vlm import load
import json
import os

# Load the base MLX model
model, processor = load("mlx-community/SmolVLM-256M-Instruct-bf16")

# Load the fine-tuned artifacts
model_info_path = "mlx_model_info.json"
if os.path.exists(model_info_path):
    with open(model_info_path, "r") as f:
        model_info = json.load(f)
    print(f"✅ Loaded fine-tuned model with {model_info.get('training_examples_count', 0)} training examples")

# Check for adapter weights
adapters_path = "adapters/adapter_config.json"
if os.path.exists(adapters_path):
    with open(adapters_path, "r") as f:
        adapter_config = json.load(f)
    print(f"🎯 Found MLX adapters with {adapter_config.get('training_examples', 0)} training examples")
```

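The metadata files above are plain JSON, so you can summarize them without loading the model at all. A minimal sketch, assuming the repository layout described in this card; the helper name and the returned dictionary are illustrative, not part of `mlx-vlm`:

```python
import json
from pathlib import Path


def summarize_artifacts(repo_dir: str) -> dict:
    """Collect training metadata from the repository's JSON artifacts.

    Missing files simply yield zero counts, so this is safe to run
    on a partial download. (Illustrative helper, not an mlx-vlm API.)
    """
    repo = Path(repo_dir)
    summary = {"training_examples": 0, "adapter_examples": 0, "has_adapters": False}

    # mlx_model_info.json carries the overall training metadata
    info_path = repo / "mlx_model_info.json"
    if info_path.exists():
        model_info = json.loads(info_path.read_text())
        summary["training_examples"] = model_info.get("training_examples_count", 0)

    # adapters/adapter_config.json is only present when LoRA weights were saved
    adapter_path = repo / "adapters" / "adapter_config.json"
    if adapter_path.exists():
        adapter_config = json.loads(adapter_path.read_text())
        summary["adapter_examples"] = adapter_config.get("training_examples", 0)
        summary["has_adapters"] = True

    return summary
```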
### Inference
```python
from mlx_vlm import generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config
from PIL import Image

# Load your image
image = Image.open("your_image.jpg")

# Ask a question
question = "What type of brake component is this?"

# Format the prompt with the model's chat template
config = load_config("mlx-community/SmolVLM-256M-Instruct-bf16")
formatted_prompt = apply_chat_template(processor, config, question, num_images=1)

# Generate a response (reuses `model` and `processor` from the loading step)
response = generate(model, processor, formatted_prompt, [image], verbose=False, max_tokens=100)
print(f"Model response: {response}")
```

## 📁 Model Artifacts

This repository contains:
- `mlx_model_info.json`: Training metadata and learned mappings
- `training_images/`: Reference images from training data
- `adapters/`: MLX LoRA adapter weights and configuration (if available)
- `README.md`: This documentation

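Before wiring these artifacts into an application, it can help to verify that a download is complete. A small sketch against the layout listed above; the function and constants are ours, not part of any library, and `adapters/` is treated as optional since it may be absent:

```python
from pathlib import Path

# Expected layout of this repository (adapters/ and training_images/ are optional)
REQUIRED = ["mlx_model_info.json", "README.md"]
OPTIONAL = ["training_images", "adapters"]


def check_layout(repo_dir: str) -> list[str]:
    """Return the required artifacts missing from repo_dir (empty list = complete)."""
    repo = Path(repo_dir)
    return [name for name in REQUIRED if not (repo / name).exists()]
```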
## ⚠️ Important Notes

- This model uses the MLX format, optimized for Apple Silicon
- The base model weights live in `mlx-community/SmolVLM-256M-Instruct-bf16`; this repository adds fine-tuning artifacts on top of them
- The fine-tuning artifacts enhance the model's domain-specific knowledge of brake components
- **Check the `adapters/` folder for MLX-specific fine-tuned weights**
- For best results, run on an Apple Silicon device (M1/M2/M3)

## 🎯 Training Statistics

- Training Examples: 3
- Learned Mappings: 2
- Domain Keywords: 79

## 📞 Support

For questions about this model or the VisualAI platform, please refer to the training logs or contact support.

---
*This model was trained using VisualAI's MLX-optimized training pipeline.*