Luigi committed on
Commit ef6f278 · 1 Parent(s): 3e01b8a

Revert "update readme"


This reverts commit 3e01b8a807b8d82ccecf09907ea2b547e7847c97.

Files changed (1)
  1. README.md +22 -40
README.md CHANGED
@@ -7,21 +7,13 @@ sdk: streamlit
  app_file: app.py
  pinned: false
  license: mit
- short_description: Real‑time webcam captioning with SmolVLM2 on CPU
- sdk_version: 1.45.1
- ---

- # SmolVLM2 Real‑Time Captioning Demo

- This Hugging Face Spaces app uses **Streamlit** + **WebRTC** to capture your webcam feed every *N* milliseconds and run it through the SmolVLM2 model on your CPU, displaying live captions below the video.

- ## Features

- * **CPU‑only inference** via `llama-cpp-python` wrapping `llama.cpp`.
- * **WebRTC camera input** for low‑latency, browser‑native video streaming.
- * **Adjustable interval slider** (100 ms to 10 s) for capture frequency.
- * **Automatic GGUF model download** from Hugging Face Hub when missing.
- * **Debug logging** in the terminal for tracing inference steps.

  ## Setup
 
@@ -38,38 +30,28 @@ This Hugging Face Spaces app uses **Streamlit** + **WebRTC** to capture your web
  pip install -r requirements.txt
  ```

- 3. **(Optional) Pre‑download model files**
-    The app automatically downloads these files if they are not present:
-
-    * `SmolVLM2-500M-Video-Instruct.Q4_K_M.gguf`
-    * `mmproj-SmolVLM2-500M-Video-Instruct-Q8_0.gguf`
-
-    To skip the download, place them in the repo root manually.
-
- ## Usage

- 1. **Launch the app**:

  ```bash
- streamlit run app.py
  ```

- 2. **Open your browser** at the URL shown (e.g. `http://localhost:8501`).
-
- 3. **Allow webcam access** when prompted by the browser.
-
- 4. **Adjust the capture interval** using the slider.
-
- 5. **Click Start** to begin streaming and captioning.
-
- 6. **View live captions** in the panel below the video.
-
- ## File Structure

- * `app.py`: Main Streamlit + WebRTC application.
- * `requirements.txt`: Python dependencies.
- * `.gguf` model files (auto‑downloaded or user‑provided).

- ## License

- Licensed under the MIT License.
 
 
  app_file: app.py
  pinned: false
  license: mit
+ short_description: SmolVLM2 on llama.cpp
+ sdk_version: 1.45.1
+ ---

+ # SmolVLM2 Live Inference Demo

+ This HuggingFace Spaces demo runs the SmolVLM2 2.2B, 500M, or 256M Instruct GGUF models on CPU using `llama-cpp-python` (v0.3.9), which builds `llama.cpp` under the hood, and Gradio v5.33.2 for the UI. It captures frames from your webcam every N milliseconds and performs live inference, displaying the model's response in real time.
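
For orientation, here is a minimal sketch of what one inference pass over a captured frame could look like with `llama-cpp-python`. The model file names, the `Llava15ChatHandler`, and the mmproj path are assumptions for illustration only; the actual `app.py` may wire this differently.

```python
# Hypothetical sketch of a single CPU inference pass with llama-cpp-python.
# File names, the chat handler, and the mmproj path are assumptions.
import base64

from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

# Vision GGUF models need a multimodal projector (mmproj) file alongside
# the main model weights.
chat_handler = Llava15ChatHandler(clip_model_path="models/mmproj-smolvlm2.gguf")

llm = Llama(
    model_path="models/smolvlm2-500M-instruct.gguf",
    chat_handler=chat_handler,
    n_ctx=4096,       # room for the image embedding plus the reply
    n_threads=4,      # CPU-only; tune to your core count
    verbose=False,
)

def caption_frame(jpeg_bytes: bytes, prompt: str = "Describe this frame.") -> str:
    """Run one inference pass over a single captured webcam frame."""
    b64 = base64.b64encode(jpeg_bytes).decode("ascii")
    result = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": "You are a concise visual assistant."},
            {"role": "user", "content": [
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
                {"type": "text", "text": prompt},
            ]},
        ],
        max_tokens=128,
    )
    return result["choices"][0]["message"]["content"]
```
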
  ## Setup
 
 
  pip install -r requirements.txt
  ```

+ 3. **Add your GGUF models**

+    Create a `models/` directory in the root of the repo and upload your `.gguf` files:

  ```bash
+ mkdir models
+ # then upload:
+ # - smolvlm2-2.2B-instruct.gguf
+ # - smolvlm2-500M-instruct.gguf
+ # - smolvlm2-256M-instruct.gguf
  ```
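
If you prefer to script this step rather than upload by hand, something like the following should work with `huggingface_hub`; the `repo_id` is a placeholder for whichever repository actually hosts your quantized SmolVLM2 files.

```python
# Hypothetical download helper: fetches the GGUF files into models/.
from huggingface_hub import hf_hub_download

for filename in [
    "smolvlm2-2.2B-instruct.gguf",
    "smolvlm2-500M-instruct.gguf",
    "smolvlm2-256M-instruct.gguf",
]:
    hf_hub_download(
        repo_id="your-username/smolvlm2-gguf",  # placeholder, not a real repo
        filename=filename,
        local_dir="models",  # the directory the app reads from
    )
```
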
 
+ ## Usage
 
+ - **Select Model**: Choose one of the `.gguf` files you uploaded.
+ - **System Prompt**: Customize the system-level instructions for the model.
+ - **User Prompt**: Provide the user query or instruction.
+ - **Interval (ms)**: Set how often (in milliseconds) to capture a frame and run inference.
+ - **Live Camera Feed**: The demo will start your webcam and capture frames at the specified interval.
+ - **Model Output**: See the model’s response below the camera feed.
 
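To make the wiring concrete, here is a rough Gradio Blocks sketch of the interface described by the list above. Component names, default values, and the placeholder inference function are illustrative assumptions, not the actual `app.py` implementation.

```python
import gradio as gr

def run_inference(frame, model_name, system_prompt, user_prompt):
    # Placeholder: the real app would run the frame through llama.cpp here.
    return f"[{model_name}] response to: {user_prompt}"

with gr.Blocks() as demo:
    model = gr.Dropdown(
        ["smolvlm2-2.2B-instruct.gguf",
         "smolvlm2-500M-instruct.gguf",
         "smolvlm2-256M-instruct.gguf"],
        label="Select Model",
    )
    system_prompt = gr.Textbox(label="System Prompt", value="You are a helpful assistant.")
    user_prompt = gr.Textbox(label="User Prompt", value="Describe what you see.")
    interval = gr.Slider(100, 10000, value=1000, step=100, label="Interval (ms)")
    camera = gr.Image(sources=["webcam"], streaming=True, label="Live Camera Feed")
    output = gr.Textbox(label="Model Output")

    # stream_every is fixed here for simplicity; the app presumably derives
    # the capture cadence from the Interval slider instead.
    camera.stream(
        run_inference,
        inputs=[camera, model, system_prompt, user_prompt],
        outputs=output,
        stream_every=1.0,  # seconds between captured frames
    )

demo.launch()
```
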
+ ## Notes

+ - This demo runs entirely on CPU. Inference speed depends on the model size and your machine's CPU performance.
+ - Make sure your browser has permission to access your webcam.