blanchon committed on
Commit 63ed3a7 · 0 Parent(s):
This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. .gitmodules +6 -0
  2. Dockerfile +64 -0
  3. README.md +317 -0
  4. README_INTEGRATION.md +138 -0
  5. api-schema.yaml +476 -0
  6. client/.cursor/rules/use-bun-instead-of-node-vite-npm-pnpm.mdc +98 -0
  7. client/.gitignore +34 -0
  8. client/README.md +205 -0
  9. client/bun.lock +128 -0
  10. client/examples/basic-usage.ts +149 -0
  11. client/openapi.json +710 -0
  12. client/package.json +63 -0
  13. client/src/generated/index.ts +4 -0
  14. client/src/generated/schemas.gen.ts +196 -0
  15. client/src/generated/services.gen.ts +164 -0
  16. client/src/generated/types.gen.ts +151 -0
  17. client/src/index.ts +270 -0
  18. client/tsconfig.json +34 -0
  19. external/.gitkeep +0 -0
  20. external/RobotHub-TransportServer +1 -0
  21. external/lerobot +1 -0
  22. launch_simple.py +46 -0
  23. openapi.json +692 -0
  24. pyproject.toml +43 -0
  25. src/__pycache__/__init__.cpython-312.pyc +0 -0
  26. src/__pycache__/main.cpython-312.pyc +0 -0
  27. src/__pycache__/session_manager.cpython-312.pyc +0 -0
  28. src/inference_server.egg-info/PKG-INFO +347 -0
  29. src/inference_server.egg-info/SOURCES.txt +22 -0
  30. src/inference_server.egg-info/dependency_links.txt +1 -0
  31. src/inference_server.egg-info/requires.txt +23 -0
  32. src/inference_server.egg-info/top_level.txt +1 -0
  33. src/inference_server/__init__.py +30 -0
  34. src/inference_server/__pycache__/__init__.cpython-312.pyc +0 -0
  35. src/inference_server/__pycache__/__init__.cpython-313.pyc +0 -0
  36. src/inference_server/__pycache__/cli.cpython-312.pyc +0 -0
  37. src/inference_server/__pycache__/export_openapi.cpython-312.pyc +0 -0
  38. src/inference_server/__pycache__/export_openapi.cpython-313.pyc +0 -0
  39. src/inference_server/__pycache__/gradio_ui.cpython-312.pyc +0 -0
  40. src/inference_server/__pycache__/gradio_ui.cpython-313.pyc +0 -0
  41. src/inference_server/__pycache__/integrated.cpython-312.pyc +0 -0
  42. src/inference_server/__pycache__/main.cpython-312.pyc +0 -0
  43. src/inference_server/__pycache__/main.cpython-313.pyc +0 -0
  44. src/inference_server/__pycache__/session_manager.cpython-312.pyc +0 -0
  45. src/inference_server/__pycache__/session_manager.cpython-313.pyc +0 -0
  46. src/inference_server/__pycache__/simple_integrated.cpython-312.pyc +0 -0
  47. src/inference_server/__pycache__/simple_integrated.cpython-313.pyc +0 -0
  48. src/inference_server/__pycache__/ui.cpython-312.pyc +0 -0
  49. src/inference_server/__pycache__/ui.cpython-313.pyc +0 -0
  50. src/inference_server/__pycache__/ui_v2.cpython-312.pyc +0 -0
.gitmodules ADDED
@@ -0,0 +1,6 @@
+ [submodule "external/lerobot"]
+     path = external/lerobot
+     url = https://github.com/huggingface/lerobot
+ [submodule "external/RobotHub-TransportServer"]
+     path = external/RobotHub-TransportServer
+     url = https://github.com/julien-blanchon/RobotHub-TransportServer
Dockerfile ADDED
@@ -0,0 +1,64 @@
+ # Use official UV base image with Python 3.12
+ FROM ghcr.io/astral-sh/uv:python3.12-bookworm-slim
+
+ # Set environment variables for Python
+ ENV PYTHONUNBUFFERED=1 \
+     PYTHONDONTWRITEBYTECODE=1 \
+     UV_SYSTEM_PYTHON=1 \
+     UV_CACHE_DIR=/tmp/uv-cache
+
+ # Install system dependencies
+ RUN apt-get update && apt-get install -y \
+     # Build tools for compiling Python packages
+     build-essential \
+     gcc \
+     g++ \
+     # Essential system libraries
+     libgl1-mesa-glx \
+     libglib2.0-0 \
+     libsm6 \
+     libxext6 \
+     libxrender-dev \
+     libgomp1 \
+     # FFmpeg for video processing
+     ffmpeg \
+     # Git for potential model downloads
+     git \
+     # Clean up
+     && apt-get clean \
+     && rm -rf /var/lib/apt/lists/*
+
+ # Create a non-root user
+ RUN groupadd -r appuser && useradd -r -g appuser -m -s /bin/bash appuser
+
+ # Set working directory
+ WORKDIR /app
+
+ # Copy dependency files
+ COPY --chown=appuser:appuser pyproject.toml ./
+
+ # Copy the external python client dependency
+ COPY --chown=appuser:appuser external/ ./external/
+
+ # Install Python dependencies (without --frozen to regenerate lock)
+ RUN --mount=type=cache,target=/tmp/uv-cache \
+     uv sync --no-dev
+
+ # Copy the entire project
+ COPY --chown=appuser:appuser . .
+
+ # Switch to non-root user
+ USER appuser
+
+ # Add virtual environment to PATH
+ ENV PATH="/app/.venv/bin:$PATH"
+
+ # Expose port 7860 (HuggingFace Spaces default)
+ EXPOSE 7860
+
+ # Health check using activated virtual environment (FastAPI health endpoint)
+ HEALTHCHECK --interval=30s --timeout=10s --start-period=30s --retries=3 \
+     CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:7860/api/health')" || exit 1
+
+ # Run the application with activated virtual environment
+ CMD ["python", "launch_simple.py", "--host", "0.0.0.0", "--port", "7860"]
README.md ADDED
@@ -0,0 +1,317 @@
+ ---
+ title: LeRobot Arena - AI Inference Server
+ emoji: 🤖
+ colorFrom: blue
+ colorTo: purple
+ sdk: docker
+ app_port: 7860
+ suggested_hardware: t4-small
+ suggested_storage: medium
+ short_description: Real-time ACT model inference server for robot control
+ tags:
+   - robotics
+   - ai
+   - inference
+   - control
+   - act-model
+   - transformer
+   - real-time
+   - gradio
+   - fastapi
+   - computer-vision
+ pinned: false
+ fullWidth: true
+ ---
+
+ # Inference Server
+
+ 🤖 **Real-time ACT Model Inference Server for Robot Control**
+
+ This server provides ACT (Action Chunking Transformer) model inference for robotics applications using the transport server communication system. It includes a user-friendly Gradio web interface for easy setup and management.
+
+ ## ✨ Features
+
+ - **Real-time AI Inference**: Run ACT models for robot control at 30Hz control frequency
+ - **Multi-Camera Support**: Handle multiple camera streams with different names
+ - **Web Interface**: User-friendly Gradio UI for setup and monitoring
+ - **Session Management**: Create, start, stop, and monitor inference sessions
+ - **Automatic Timeout**: Sessions clean up automatically after 10 minutes of inactivity
+ - **Debug Tools**: Built-in debugging and monitoring endpoints
+ - **Flexible Configuration**: Support for custom model paths and camera configurations
+ - **No External Dependencies**: Direct Python execution without subprocess calls
+
+ ## 🚀 Quick Start
+
+ ### Prerequisites
+
+ - Python 3.12+
+ - UV package manager (recommended)
+ - Trained ACT model
+ - Transport server running
+
+ ### 1. Installation
+
+ ```bash
+ cd backend/ai-server
+
+ # Install dependencies using uv (recommended)
+ uv sync
+
+ # Or using pip
+ pip install -e .
+ ```
+
+ ### 2. Launch the Application
+
+ #### **🚀 Simple Integrated Mode (Recommended)**
+ ```bash
+ # Everything runs in one process - no subprocess issues!
+ python launch_simple.py
+
+ # Or using the CLI
+ python -m inference_server.cli --simple
+ ```
+
+ This will:
+ - Run everything on `http://localhost:7860`
+ - Manage sessions directly (no HTTP API calls)
+ - Avoid external subprocess dependencies
+ - Give the most robust and simple deployment
+
+ #### **🔧 Development Mode (Separate Processes)**
+ ```bash
+ # Traditional approach with separate server and UI
+ python -m inference_server.cli
+ ```
+
+ This will:
+ - Start the AI server on `http://localhost:8001`
+ - Launch the Gradio UI on `http://localhost:7860`
+ - Keep processes separate, which is better for development and debugging
+
+ ### 3. Using the Web Interface
+
+ 1. **Check Server Status**: The interface will automatically check if the AI server is running
+ 2. **Configure Your Robot**: Enter your model path and camera setup
+ 3. **Create & Start Session**: Click the button to set up AI control
+ 4. **Monitor Performance**: Use the status panel to monitor inference
+
+ ## 🎯 Workflow Guide
+
+ ### Step 1: AI Server
+ - The server status will be displayed at the top
+ - Click "Start Server" if it's not already running
+ - Use "Check Status" to verify connectivity
+
+ ### Step 2: Set Up Robot AI
+ - **Session Name**: Give your session a unique name (e.g., "my-robot-01")
+ - **AI Model Path**: Path to your trained ACT model (e.g., "./checkpoints/act_so101_beyond")
+ - **Camera Names**: Comma-separated list of camera names (e.g., "front,wrist,overhead")
+ - Click "Create & Start AI Control" to begin
+
+ ### Step 3: Control Session
+ - The session ID will be auto-filled after creation
+ - Use Start/Stop buttons to control inference
+ - Click "Status" to see detailed performance metrics
+
+ ## 🛠️ Advanced Usage
+
+ ### CLI Options
+
+ ```bash
+ # Simple integrated mode (recommended)
+ python -m inference_server.cli --simple
+
+ # Development mode (separate processes)
+ python -m inference_server.cli
+
+ # Launch only the server
+ python -m inference_server.cli --server-only
+
+ # Launch only the UI (server must be running separately)
+ python -m inference_server.cli --ui-only
+
+ # Custom ports
+ python -m inference_server.cli --server-port 8002 --ui-port 7861
+
+ # Enable public sharing
+ python -m inference_server.cli --share
+
+ # For deployment (recommended)
+ python -m inference_server.cli --simple --host 0.0.0.0 --share
+ ```
+
+ ### API Endpoints
+
+ The server provides a REST API for programmatic access:
+
+ - `GET /health` - Server health check
+ - `POST /sessions` - Create new session
+ - `GET /sessions` - List all sessions
+ - `GET /sessions/{id}` - Get session details
+ - `POST /sessions/{id}/start` - Start inference
+ - `POST /sessions/{id}/stop` - Stop inference
+ - `POST /sessions/{id}/restart` - Restart inference
+ - `DELETE /sessions/{id}` - Delete session
+
+ #### Debug Endpoints
+ - `GET /debug/system` - System information (CPU, memory, GPU)
+ - `GET /debug/sessions/{id}/queue` - Action queue details
+ - `POST /debug/sessions/{id}/reset` - Reset session state
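As a quick sketch, the endpoints above can be exercised from Python with only the standard library. The base URL below assumes the development-mode API port from this README, and `my-robot` is just an example session ID.

```python
# Minimal stdlib sketch for calling the REST API listed above.
# BASE assumes development mode (API on port 8001); in simple integrated
# mode the same endpoints live under http://localhost:7860/api instead.
import json
import urllib.request

BASE = "http://localhost:8001"

def build_request(path, method="GET", body=None):
    """Build a urllib Request for one of the endpoints above."""
    data = json.dumps(body).encode() if body is not None else None
    return urllib.request.Request(
        f"{BASE}{path}",
        data=data,
        method=method,
        headers={"Content-Type": "application/json"},
    )

def api(path, method="GET", body=None):
    """Send the request and decode the JSON response."""
    with urllib.request.urlopen(build_request(path, method, body)) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    print(api("/health"))                           # GET /health
    print(api("/sessions/my-robot/start", "POST"))  # POST /sessions/{id}/start
```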
+
+ ### Configuration
+
+ #### Joint Value Convention
+ - All joint inputs/outputs use **NORMALIZED VALUES**
+ - Most joints: -100 to +100 (RANGE_M100_100)
+ - Gripper: 0 to 100 (RANGE_0_100)
+ - This matches the training data format exactly
+
+ #### Camera Support
+ - Supports an arbitrary number of camera streams
+ - Each camera has a unique name (e.g., "front", "wrist", "overhead")
+ - All camera streams are synchronized for inference
+ - Images are expected in RGB format, uint8 [0-255]
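A small sketch of the joint convention above; the raw joint limits used here are hypothetical examples, since each robot defines its own.

```python
# Map a raw joint reading onto the normalized convention described above.
# The [lo, hi] limits below are hypothetical; real robots define their own.
def normalize(value, lo, hi, out_lo=-100.0, out_hi=100.0):
    """Linearly map value from [lo, hi] to [out_lo, out_hi]."""
    return out_lo + (value - lo) / (hi - lo) * (out_hi - out_lo)

# Most joints use RANGE_M100_100 (-100 to +100):
shoulder = normalize(0.0, -1.57, 1.57)            # mid-range maps to 0.0
# The gripper uses RANGE_0_100 (0 to 100):
gripper = normalize(0.02, 0.0, 0.04, 0.0, 100.0)  # half-open maps to 50.0
```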
+
+ ## 📊 Monitoring
+
+ ### Session Status Indicators
+ - 🟢 **Running**: Inference active and processing
+ - 🟡 **Ready**: Session created but inference not started
+ - 🔴 **Stopped**: Inference stopped
+ - 🟠 **Initializing**: Session being set up
+
+ ### Smart Session Control
+ The UI provides intelligent feedback:
+ - ℹ️ **Already Running**: When trying to start a running session
+ - ℹ️ **Already Stopped**: When trying to stop a stopped session
+ - 💡 **Smart Suggestions**: Context-aware tips based on current status
+
+ ### Performance Metrics
+ - **Inferences**: Total number of model inferences performed
+ - **Commands Sent**: Joint commands sent to the robot
+ - **Queue Length**: Actions waiting in the queue
+ - **Errors**: Number of errors encountered
+ - **Data Flow**: Images and joint states received
+
+ ## 🐳 Docker Usage
+
+ ### Build the Image
+ ```bash
+ cd services/inference-server
+ docker build -t inference-server .
+ ```
+
+ ### Run the Container
+ ```bash
+ # Basic usage
+ docker run -p 7860:7860 inference-server
+
+ # With environment variables
+ docker run -p 7860:7860 \
+   -e DEFAULT_ARENA_SERVER_URL=http://your-server.com \
+   -e DEFAULT_MODEL_PATH=./checkpoints/your-model \
+   inference-server
+
+ # With GPU support
+ docker run --gpus all -p 7860:7860 inference-server
+ ```
+
+ ## 🔧 Troubleshooting
+
+ ### Common Issues
+
+ 1. **Server Won't Start**
+    - Check if port 8001 is available
+    - Verify the model path exists and is accessible
+    - Check that dependencies are installed correctly
+
+ 2. **Session Creation Fails**
+    - Verify the model path is correct
+    - Check that the Arena server is running on the specified URL
+    - Ensure camera names match your robot configuration
+
+ 3. **Poor Performance**
+    - Monitor system resources in the debug panel
+    - Check if the GPU is being used for inference
+    - Verify control/inference frequency settings
+
+ 4. **Connection Issues**
+    - Verify the Arena server URL is correct
+    - Check network connectivity
+    - Ensure workspace/room IDs are valid
+
+ ### Debug Mode
+
+ Enable debug mode for detailed logging:
+
+ ```bash
+ uv run python -m inference_server.cli --debug
+ ```
+
+ ### System Requirements
+
+ - **CPU**: Multi-core recommended for 30Hz control
+ - **Memory**: 8GB+ RAM recommended
+ - **GPU**: CUDA-compatible GPU for fast inference (optional but recommended)
+ - **Network**: Stable connection to the Arena server
+
+ ## 📚 Architecture
+
+ ### Integrated Mode (Recommended)
+ ```
+ ┌─────────────────────────────────────┐ ┌─────────────────┐
+ │ Single Application │ │ LeRobot Arena │
+ │ ┌─────────────┐ ┌─────────────┐ │◄──►│ (Port 8000) │
+ │ │ Gradio UI │ │ AI Server │ │ └─────────────────┘
+ │ │ (/) │ │ (/api/*) │ │ │
+ │ └─────────────┘ └─────────────┘ │ │
+ │ (Port 7860) │ Robot/Cameras
+ └─────────────────────────────────────┘
+
+ Web Browser
+ ```
+
+ ### Development Mode
+ ```
+ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
+ │ Gradio UI │ │ AI Server │ │ LeRobot Arena │
+ │ (Port 7860) │◄──►│ (Port 8001) │◄──►│ (Port 8000) │
+ └─────────────────┘ └─────────────────┘ └─────────────────┘
+ │ │ │
+ │ │ │
+ Web Browser ACT Model Robot/Cameras
+ Inference
+ ```
+
+ ### Data Flow
+
+ 1. **Camera Data**: Robot cameras → Arena → AI Server
+ 2. **Joint State**: Robot joints → Arena → AI Server
+ 3. **AI Inference**: Images + Joint State → ACT Model → Actions
+ 4. **Control Commands**: Actions → Arena → Robot
+
+ ### Session Lifecycle
+
+ 1. **Create**: Set up rooms in Arena, load ACT model
+ 2. **Start**: Begin inference loop (3Hz) and control loop (30Hz)
+ 3. **Running**: Process camera/joint data, generate actions
+ 4. **Stop**: Pause inference, maintain connections
+ 5. **Delete**: Clean up resources, disconnect from Arena
+
+ ## 🤝 Contributing
+
+ 1. Follow the existing code style
+ 2. Add tests for new features
+ 3. Update documentation
+ 4. Submit pull requests
+
+ ## 📄 License
+
+ This project follows the same license as the parent LeRobot Arena project.
+
+ ---
+
+ For more information, see the [LeRobot Arena documentation](../../README.md).
README_INTEGRATION.md ADDED
@@ -0,0 +1,138 @@
+ # 🤖 Integrated Inference Server
+
+ This is an integrated ACT Model Inference Server that combines **FastAPI** and **Gradio** on a single port, perfect for deployment and development.
+
+ ## 🚀 Quick Start
+
+ ```bash
+ # Install dependencies
+ uv sync
+
+ # Run the integrated server
+ uv run python launch_simple.py --host 0.0.0.0 --port 7860
+ ```
+
+ ## 📡 Access Points
+
+ Once running, you can access:
+
+ - **🎨 Gradio UI**: http://localhost:7860/
+ - **📖 API Documentation**: http://localhost:7860/api/docs
+ - **🔄 Health Check**: http://localhost:7860/api/health
+ - **📋 OpenAPI Schema**: http://localhost:7860/api/openapi.json
+
+ ## 🏗️ Architecture
+
+ ### Integration Approach
+ - **Single Process**: Everything runs in one Python process
+ - **Single Port**: Both API and UI on the same port (7860)
+ - **FastAPI at `/api`**: Full REST API with automatic documentation
+ - **Gradio at `/`**: User-friendly web interface
+ - **Direct Session Management**: UI communicates directly with session manager (no HTTP overhead)
+
+ ### Key Components
+
+ 1. **`simple_integrated.py`**: Main integration logic
+    - Creates FastAPI app and mounts it at `/api`
+    - Creates Gradio interface and mounts it at `/`
+    - Provides `SimpleServerManager` for direct session access
+
+ 2. **`launch_simple.py`**: Entry point script
+    - Handles command-line arguments
+    - Starts the integrated application
+
+ 3. **`main.py`**: Core FastAPI application
+    - Session management endpoints
+    - Policy loading and inference
+    - OpenAPI documentation
+
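The direct-access idea behind `SimpleServerManager` can be sketched roughly like this. The class and field names here are illustrative, not the repo's actual code: the point is that the Gradio UI calls these methods in-process, while the FastAPI layer exposes the same operations over HTTP.

```python
# Hypothetical sketch of in-process session management: the UI layer can
# call these methods directly instead of going through HTTP endpoints.
class SessionManagerSketch:
    def __init__(self):
        self.sessions = {}

    def create(self, session_id, policy_path, camera_names=("front",)):
        if session_id in self.sessions:
            raise ValueError(f"session {session_id!r} already exists")
        self.sessions[session_id] = {
            "policy_path": policy_path,
            "camera_names": list(camera_names),
            "status": "ready",
        }
        return self.sessions[session_id]

    def start(self, session_id):
        self.sessions[session_id]["status"] = "running"

    def status(self, session_id):
        return self.sessions[session_id]["status"]

mgr = SessionManagerSketch()
mgr.create("my-robot", "./checkpoints/act_so101_beyond")
mgr.start("my-robot")
# mgr.status("my-robot") == "running"
```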
+ ## 🔧 Features
+
+ ### For UI Users
+ - ✅ **Simple Interface**: Create and manage AI sessions through the web UI
+ - ✅ **Real-time Status**: Live session monitoring and control
+ - ✅ **Direct Performance**: No HTTP overhead for UI operations
+
+ ### For API Users
+ - ✅ **Full REST API**: Complete programmatic access
+ - ✅ **Interactive Docs**: Automatic Swagger/OpenAPI documentation
+ - ✅ **Standard Endpoints**: `/sessions`, `/health`, etc.
+ - ✅ **CORS Enabled**: Ready for frontend integration
+
+ ### For Deployment
+ - ✅ **Single Port**: Easy to deploy behind a reverse proxy
+ - ✅ **Docker Ready**: Dockerfile included
+ - ✅ **Health Checks**: Built-in monitoring endpoints
+ - ✅ **HuggingFace Spaces**: Perfect for cloud deployment
+
+ ## 📋 API Usage Examples
+
+ ### Health Check
+ ```bash
+ curl http://localhost:7860/api/health
+ ```
+
+ ### Create Session
+ ```bash
+ curl -X POST http://localhost:7860/api/sessions \
+   -H "Content-Type: application/json" \
+   -d '{
+     "session_id": "my-robot",
+     "policy_path": "./checkpoints/act_so101_beyond",
+     "camera_names": ["front"],
+     "arena_server_url": "http://localhost:8000"
+   }'
+ ```
+
+ ### Start Inference
+ ```bash
+ curl -X POST http://localhost:7860/api/sessions/my-robot/start
+ ```
+
+ ### Get Session Status
+ ```bash
+ curl http://localhost:7860/api/sessions/my-robot
+ ```
+
+ ## 🐳 Docker Usage
+
+ ```bash
+ # Build
+ docker build -t inference-server .
+
+ # Run
+ docker run -p 7860:7860 inference-server
+ ```
+
+ ## 🔍 Testing
+
+ Run the integration test to verify everything works:
+
+ ```bash
+ uv run python test_integration.py
+ ```
+
+ ## 💡 Development Tips
+
+ ### Use Both Interfaces
+ - **Development**: Use the Gradio UI for quick testing and setup
+ - **Production**: Use the REST API for automated systems
+ - **Integration**: Both can run simultaneously
+
+ ### Session Management
+ - The UI uses direct session manager access (faster)
+ - The API uses HTTP endpoints (standard REST)
+ - Both share the same underlying session data
+
+ ### Debugging
+ - Check logs for startup issues
+ - Use `/api/health` to verify the API is working
+ - Visit `/api/docs` for interactive API testing
+
+ ## 🚀 Benefits of This Approach
+
+ 1. **Flexibility**: Use the UI or API as needed
+ 2. **Performance**: Direct access for the UI, standard REST for the API
+ 3. **Deployment**: Single port, single process
+ 4. **Documentation**: Auto-generated API docs
+ 5. **Development**: Fast iteration with the integrated setup
api-schema.yaml ADDED
@@ -0,0 +1,476 @@
+ openapi: 3.1.0
+ info:
+   title: LeRobot Arena AI Server
+   summary: ACT Model Inference Server for Real-time Robot Control
+   description: "\n ## LeRobot Arena AI Server\n\n This server provides\
+     \ **ACT (Action Chunking Transformer)** model inference for robotics applications.\n\
+     \ It uses the LeRobot Arena communication system with multiple rooms per\
+     \ session for:\n\n ### Core Features:\n - \U0001F3A5 **Multi-camera\
+     \ support**: Arbitrary number of camera streams with unique names\n - \U0001F916\
+     \ **Joint control**: Normalized joint value handling (-100 to +100 range)\n \
+     \ - \U0001F504 **Real-time inference**: Optimized for robotics control loops\n\
+     \ - \U0001F4CA **Session management**: Multiple concurrent inference sessions\n\
+     \ - \U0001F6E0️ **Debug endpoints**: Comprehensive monitoring and debugging\
+     \ tools\n\n ### Communication Architecture:\n 1. **Camera rooms**:\
+     \ Receives video streams from robot cameras (supports multiple cameras)\n \
+     \ 2. **Joint input room**: Receives current robot joint positions (**NORMALIZED\
+     \ VALUES**)\n 3. **Joint output room**: Sends predicted joint commands\
+     \ (**NORMALIZED VALUES**)\n\n ### Supported Cameras:\n Each camera\
+     \ stream has a unique name (e.g., \"front\", \"wrist\", \"overhead\") \n \
+     \ and all streams are synchronized for inference.\n\n ### Joint Value\
+     \ Convention:\n - All joint inputs/outputs use **NORMALIZED VALUES**\n\
+     \ - Range: -100 to +100 for most joints, 0 to 100 for gripper\n \
+     \ - Matches training data format exactly\n\n ### Getting Started:\n \
+     \ 1. Create a session with your trained ACT model\n 2. Connect your\
+     \ robot to the generated rooms\n 3. Start inference to begin real-time\
+     \ control\n "
+   version: 1.0.0
+   contact:
+     name: LeRobot Arena Team
+     url: https://github.com/huggingface/lerobot
+   license:
+     name: Apache 2.0
+     url: https://www.apache.org/licenses/LICENSE-2.0.html
+   x-logo:
+     url: https://huggingface.co/datasets/huggingface/brand-assets/resolve/main/hf-logo.png
+     altText: LeRobot Logo
+ paths:
+   /:
+     get:
+       tags:
+       - Health
+       summary: Root
+       description: Health check endpoint.
+       operationId: root__get
+       responses:
+         '200':
+           description: Successful Response
+           content:
+             application/json:
+               schema: {}
+   /health:
+     get:
+       tags:
+       - Health
+       summary: Health Check
+       description: Detailed health check.
+       operationId: health_check_health_get
+       responses:
+         '200':
+           description: Successful Response
+           content:
+             application/json:
+               schema: {}
+   /sessions:
+     get:
+       tags:
+       - Sessions
+       summary: List Sessions
+       description: List all sessions.
+       operationId: list_sessions_sessions_get
+       responses:
+         '200':
+           description: Successful Response
+           content:
+             application/json:
+               schema:
+                 items:
+                   $ref: '#/components/schemas/SessionStatusResponse'
+                 type: array
+                 title: Response List Sessions Sessions Get
+     post:
+       tags:
+       - Sessions
+       summary: Create Session
+       description: 'Create a new inference session.
+
+
+         If workspace_id is provided, all rooms will be created in that workspace.
+
+         If workspace_id is not provided, a new workspace will be generated automatically.
+
+         All rooms for a session (cameras + joints) are always created in the same
+         workspace.'
+       operationId: create_session_sessions_post
+       requestBody:
+         content:
+           application/json:
+             schema:
+               $ref: '#/components/schemas/CreateSessionRequest'
+         required: true
+       responses:
+         '200':
+           description: Successful Response
+           content:
+             application/json:
+               schema:
+                 $ref: '#/components/schemas/CreateSessionResponse'
+         '422':
+           description: Validation Error
+           content:
+             application/json:
+               schema:
+                 $ref: '#/components/schemas/HTTPValidationError'
+   /sessions/{session_id}:
+     get:
+       tags:
+       - Sessions
+       summary: Get Session Status
+       description: Get status of a specific session.
+       operationId: get_session_status_sessions__session_id__get
+       parameters:
+       - name: session_id
+         in: path
+         required: true
+         schema:
+           type: string
+           title: Session Id
+       responses:
+         '200':
+           description: Successful Response
+           content:
+             application/json:
+               schema:
+                 $ref: '#/components/schemas/SessionStatusResponse'
+         '422':
+           description: Validation Error
+           content:
+             application/json:
+               schema:
+                 $ref: '#/components/schemas/HTTPValidationError'
+     delete:
+       tags:
+       - Sessions
+       summary: Delete Session
+       description: Delete a session.
+       operationId: delete_session_sessions__session_id__delete
+       parameters:
+       - name: session_id
+         in: path
+         required: true
+         schema:
+           type: string
+           title: Session Id
+       responses:
+         '200':
+           description: Successful Response
+           content:
+             application/json:
+               schema: {}
+         '422':
+           description: Validation Error
+           content:
+             application/json:
+               schema:
+                 $ref: '#/components/schemas/HTTPValidationError'
+   /sessions/{session_id}/start:
+     post:
+       tags:
+       - Control
+       summary: Start Inference
+       description: Start inference for a session.
+       operationId: start_inference_sessions__session_id__start_post
+       parameters:
+       - name: session_id
+         in: path
+         required: true
+         schema:
+           type: string
+           title: Session Id
+       responses:
+         '200':
+           description: Successful Response
+           content:
+             application/json:
+               schema: {}
+         '422':
+           description: Validation Error
+           content:
+             application/json:
+               schema:
+                 $ref: '#/components/schemas/HTTPValidationError'
+   /sessions/{session_id}/stop:
+     post:
+       tags:
+       - Control
+       summary: Stop Inference
+       description: Stop inference for a session.
+       operationId: stop_inference_sessions__session_id__stop_post
+       parameters:
+       - name: session_id
+         in: path
+         required: true
+         schema:
+           type: string
+           title: Session Id
+       responses:
+         '200':
+           description: Successful Response
+           content:
+             application/json:
+               schema: {}
+         '422':
+           description: Validation Error
+           content:
+             application/json:
+               schema:
+                 $ref: '#/components/schemas/HTTPValidationError'
+   /sessions/{session_id}/restart:
+     post:
+       tags:
+       - Control
+       summary: Restart Inference
+       description: Restart inference for a session.
+       operationId: restart_inference_sessions__session_id__restart_post
+       parameters:
+       - name: session_id
+         in: path
+         required: true
+         schema:
+           type: string
+           title: Session Id
+       responses:
+         '200':
+           description: Successful Response
+           content:
+             application/json:
+               schema: {}
+         '422':
+           description: Validation Error
+           content:
+             application/json:
+               schema:
+                 $ref: '#/components/schemas/HTTPValidationError'
+   /debug/system:
+     get:
+       tags:
+       - Debug
+       summary: Get System Info
+       description: Get system information for debugging.
+       operationId: get_system_info_debug_system_get
+       responses:
+         '200':
+           description: Successful Response
+           content:
+             application/json:
+               schema: {}
+   /debug/logs:
+     get:
+       tags:
+       - Debug
+       summary: Get Recent Logs
+       description: Get recent log entries for debugging.
+       operationId: get_recent_logs_debug_logs_get
+       responses:
+         '200':
+           description: Successful Response
+           content:
+             application/json:
+               schema: {}
+   /debug/sessions/{session_id}/reset:
+     post:
+       tags:
+       - Debug
+       summary: Debug Reset Session
+       description: Reset a session's internal state for debugging.
+       operationId: debug_reset_session_debug_sessions__session_id__reset_post
+       parameters:
+       - name: session_id
+         in: path
+         required: true
+         schema:
+           type: string
+           title: Session Id
+       responses:
+         '200':
+           description: Successful Response
+           content:
+             application/json:
+               schema: {}
+         '422':
+           description: Validation Error
+           content:
+             application/json:
+               schema:
+                 $ref: '#/components/schemas/HTTPValidationError'
+   /debug/sessions/{session_id}/queue:
+     get:
+       tags:
+       - Debug
+       summary: Get Session Queue Info
+       description: Get detailed information about a session's action queue.
+       operationId: get_session_queue_info_debug_sessions__session_id__queue_get
+       parameters:
+       - name: session_id
+         in: path
+         required: true
+         schema:
+           type: string
+           title: Session Id
+       responses:
+         '200':
+           description: Successful Response
+           content:
+             application/json:
+               schema: {}
+         '422':
+           description: Validation Error
+           content:
+             application/json:
+               schema:
+                 $ref: '#/components/schemas/HTTPValidationError'
+ components:
+   schemas:
+     CreateSessionRequest:
+       properties:
+         session_id:
+           type: string
+           title: Session Id
+         policy_path:
+           type: string
+           title: Policy Path
+         camera_names:
+           items:
+             type: string
+           type: array
+           title: Camera Names
+           default:
+           - front
+         arena_server_url:
+           type: string
+           title: Arena Server Url
+           default: http://localhost:8000
+         workspace_id:
+           anyOf:
+           - type: string
+           - type: 'null'
+           title: Workspace Id
+       type: object
+       required:
+       - session_id
+       - policy_path
+       title: CreateSessionRequest
+     CreateSessionResponse:
+       properties:
+         workspace_id:
+           type: string
+           title: Workspace Id
+         camera_room_ids:
+           additionalProperties:
+             type: string
+           type: object
+           title: Camera Room Ids
+         joint_input_room_id:
+           type: string
+           title: Joint Input Room Id
+         joint_output_room_id:
+           type: string
+           title: Joint Output Room Id
+       type: object
+       required:
+       - workspace_id
+       - camera_room_ids
+       - joint_input_room_id
+       - joint_output_room_id
+       title: CreateSessionResponse
+     HTTPValidationError:
+       properties:
+         detail:
+           items:
+             $ref: '#/components/schemas/ValidationError'
+           type: array
+           title: Detail
+       type: object
+       title: HTTPValidationError
+     SessionStatusResponse:
+       properties:
+         session_id:
+           type: string
+           title: Session Id
+         status:
+           type: string
+           title: Status
+         policy_path:
+           type: string
+           title: Policy Path
+         camera_names:
+           items:
+             type: string
+           type: array
+           title: Camera Names
+         workspace_id:
+           type: string
+           title: Workspace Id
+         rooms:
+           additionalProperties: true
+           type: object
+           title: Rooms
+         stats:
+           additionalProperties: true
+           type: object
+           title: Stats
+         inference_stats:
+           anyOf:
+           - additionalProperties: true
+             type: object
+           - type: 'null'
+           title: Inference Stats
+         error_message:
+           anyOf:
+           - type: string
+           - type: 'null'
+           title: Error Message
+       type: object
+       required:
+       - session_id
+       - status
+       - policy_path
+       - camera_names
+       - workspace_id
+       - rooms
+       - stats
+       title: SessionStatusResponse
+     ValidationError:
+       properties:
+         loc:
+           items:
+             anyOf:
+             - type: string
+ - type: integer
440
+ type: array
441
+ title: Location
442
+ msg:
443
+ type: string
444
+ title: Message
445
+ type:
446
+ type: string
447
+ title: Error Type
448
+ type: object
449
+ required:
450
+ - loc
451
+ - msg
452
+ - type
453
+ title: ValidationError
454
+ securitySchemes:
455
+ BearerAuth:
456
+ type: http
457
+ scheme: bearer
458
+ bearerFormat: JWT
459
+ ApiKeyAuth:
460
+ type: apiKey
461
+ in: header
462
+ name: X-API-Key
463
+ servers:
464
+ - url: http://localhost:8001
465
+ description: Development server
466
+ - url: https://your-production-server.com
467
+ description: Production server
468
+ tags:
469
+ - name: Health
470
+ description: Health check and server status endpoints
471
+ - name: Sessions
472
+ description: Inference session management - create, control, and monitor AI sessions
473
+ - name: Control
474
+ description: Session control operations - start, stop, restart inference
475
+ - name: Debug
476
+ description: Debug and monitoring endpoints for system diagnostics
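The spec above declares two security schemes, `BearerAuth` (a JWT bearer token) and `ApiKeyAuth` (an `X-API-Key` header), but does not mark which operations require them. As a minimal sketch, assuming you have a token or key from your deployment, the matching request headers can be built like this:

```typescript
// Build auth headers for the two security schemes declared in the spec.
// Which endpoints actually enforce them is not shown in the schema, so
// treat this as illustrative; the token/key values are placeholders.
type AuthConfig =
  | { kind: "bearer"; token: string } // BearerAuth: JWT in the Authorization header
  | { kind: "apiKey"; key: string };  // ApiKeyAuth: X-API-Key header

function authHeaders(auth: AuthConfig): Record<string, string> {
  switch (auth.kind) {
    case "bearer":
      return { Authorization: `Bearer ${auth.token}` };
    case "apiKey":
      return { "X-API-Key": auth.key };
  }
}
```

These headers can then be passed to `fetch` against a server from the `servers` list, e.g. `fetch("http://localhost:8001/sessions", { headers: authHeaders({ kind: "apiKey", key: "..." }) })`.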
client/.cursor/rules/use-bun-instead-of-node-vite-npm-pnpm.mdc ADDED
@@ -0,0 +1,98 @@
+ ---
+ description: Use Bun instead of Node.js, npm, pnpm, or vite.
+ globs: *.ts, *.tsx, *.html, *.css, *.js, *.jsx, package.json
+ alwaysApply: false
+ ---
+
+ Default to using Bun instead of Node.js.
+
+ - Use `bun <file>` instead of `node <file>` or `ts-node <file>`
+ - Use `bun test` instead of `jest` or `vitest`
+ - Use `bun build <file.html|file.ts|file.css>` instead of `webpack` or `esbuild`
+ - Use `bun install` instead of `npm install` or `yarn install` or `pnpm install`
+ - Use `bun run <script>` instead of `npm run <script>` or `yarn run <script>` or `pnpm run <script>`
+ - Bun automatically loads .env, so don't use dotenv.
+
+ ## APIs
+
+ - `Bun.serve()` supports WebSockets, HTTPS, and routes. Don't use `express`.
+ - `bun:sqlite` for SQLite. Don't use `better-sqlite3`.
+ - `Bun.redis` for Redis. Don't use `ioredis`.
+ - `Bun.sql` for Postgres. Don't use `pg` or `postgres.js`.
+ - `WebSocket` is built-in. Don't use `ws`.
+ - Bun.$`ls` instead of execa.
+
+ ## Frontend
+
+ Use HTML imports with `Bun.serve()`. Don't use `vite`. HTML imports fully support React, CSS, and Tailwind.
+
+ Server:
+
+ ```ts#index.ts
+ import index from "./index.html"
+
+ Bun.serve({
+   routes: {
+     "/": index,
+     "/api/users/:id": {
+       GET: (req) => {
+         return new Response(JSON.stringify({ id: req.params.id }));
+       },
+     },
+   },
+   // optional websocket support
+   websocket: {
+     open: (ws) => {
+       ws.send("Hello, world!");
+     },
+     message: (ws, message) => {
+       ws.send(message);
+     },
+     close: (ws) => {
+       // handle close
+     }
+   },
+   development: {
+     hmr: true,
+     console: true,
+   }
+ })
+ ```
+
+ HTML files can import .tsx, .jsx or .js files directly, and Bun's bundler will transpile & bundle them automatically. `<link>` tags can point to stylesheets, and Bun's CSS bundler will bundle them.
+
+ ```html#index.html
+ <html>
+   <body>
+     <h1>Hello, world!</h1>
+     <script type="module" src="./frontend.tsx"></script>
+   </body>
+ </html>
+ ```
+
+ With the following `frontend.tsx`:
+
+ ```tsx#frontend.tsx
+ import React from "react";
+
+ // import .css files directly and it works
+ import './index.css';
+
+ import { createRoot } from "react-dom/client";
+
+ const root = createRoot(document.body);
+
+ export default function Frontend() {
+   return <h1>Hello, world!</h1>;
+ }
+
+ root.render(<Frontend />);
+ ```
+
+ Then, run index.ts:
+
+ ```sh
+ bun --hot ./index.ts
+ ```
+
+ For more information, read the Bun API docs in `node_modules/bun-types/docs/**.md`.
client/.gitignore ADDED
@@ -0,0 +1,34 @@
+ # dependencies (bun install)
+ node_modules
+
+ # output
+ out
+ dist
+ *.tgz
+
+ # code coverage
+ coverage
+ *.lcov
+
+ # logs
+ logs
+ *.log
+ report.[0-9]*.[0-9]*.[0-9]*.[0-9]*.json
+
+ # dotenv environment variable files
+ .env
+ .env.development.local
+ .env.test.local
+ .env.production.local
+ .env.local
+
+ # caches
+ .eslintcache
+ .cache
+ *.tsbuildinfo
+
+ # IntelliJ based IDEs
+ .idea
+
+ # Finder (MacOS) folder config
+ .DS_Store
client/README.md ADDED
@@ -0,0 +1,205 @@
+ # LeRobot Arena Inference Server TypeScript Client
+
+ A TypeScript client for the LeRobot Arena Inference Server, providing ACT (Action Chunking Transformer) model inference and session management capabilities.
+
+ ## Features
+
+ - ✅ **Fully Generated**: Client is 100% generated from the OpenAPI spec
+ - 🔒 **Type Safe**: Complete TypeScript support with generated types
+ - 🚀 **Modern**: Built with Bun and modern JavaScript features
+ - 📦 **Lightweight**: Minimal dependencies, uses the fetch API
+ - 🛠️ **Developer Friendly**: Comprehensive examples and documentation
+
+ ## Installation
+
+ ```bash
+ # Install dependencies
+ bun install
+
+ # Generate client from OpenAPI spec
+ bun run generate
+
+ # Build the client
+ bun run build
+ ```
+
+ ## Quick Start
+
+ ```typescript
+ import { LeRobotInferenceServerClient, CreateSessionRequest } from '@lerobot-arena/inference-server-client';
+
+ // Create client
+ const client = new LeRobotInferenceServerClient('http://localhost:8001');
+
+ // Check server health
+ const isHealthy = await client.isHealthy();
+ if (!isHealthy) {
+   console.error('Server is not available');
+   process.exit(1);
+ }
+
+ // Create and start a session
+ const sessionRequest: CreateSessionRequest = {
+   session_id: 'my-robot-session',
+   policy_path: './checkpoints/act_so101_beyond',
+   camera_names: ['front', 'wrist'],
+   arena_server_url: 'http://localhost:8000'
+ };
+
+ const session = await client.createSession(sessionRequest);
+ await client.startInference('my-robot-session');
+
+ // Monitor session
+ const status = await client.getSessionStatus('my-robot-session');
+ console.log(`Status: ${status.status}`);
+
+ // Clean up
+ await client.deleteSession('my-robot-session');
+ ```
+
+ ## API Reference
+
+ ### Client Creation
+
+ ```typescript
+ const client = new LeRobotInferenceServerClient(baseUrl: string);
+ ```
+
+ ### Health Check Methods
+
+ - `isHealthy()`: Quick boolean health check
+ - `getHealth()`: Detailed health information
+
+ ### Session Management
+
+ - `createSession(request: CreateSessionRequest)`: Create inference session
+ - `listSessions()`: List all active sessions
+ - `getSessionStatus(sessionId: string)`: Get session details
+ - `deleteSession(sessionId: string)`: Delete session and cleanup
+
+ ### Inference Control
+
+ - `startInference(sessionId: string)`: Start model inference
+ - `stopInference(sessionId: string)`: Stop model inference
+ - `restartInference(sessionId: string)`: Restart model inference
+
+ ### Utility Methods
+
+ - `waitForSessionStatus(sessionId, targetStatus, timeout)`: Wait for status change
+ - `createAndStartSession(request)`: Create session and start inference in one call
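`waitForSessionStatus` is, at heart, a poll-until-timeout loop. The wrapper's actual implementation is not shown here; a minimal sketch of the pattern, with an injected `getStatus` callback standing in for `client.getSessionStatus`, looks like:

```typescript
// Poll an injected status getter until it reports the target status or the
// timeout expires. This is a stand-in sketch, not the generated client's code.
async function pollForStatus(
  getStatus: () => Promise<{ status: string }>,
  targetStatus: string,
  timeoutMs: number,
  intervalMs = 100,
): Promise<{ status: string }> {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    const current = await getStatus();
    if (current.status === targetStatus) return current;
    if (Date.now() >= deadline) {
      throw new Error(`Timed out waiting for "${targetStatus}" (last: "${current.status}")`);
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

Injecting the getter keeps the loop testable without a running server.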
+
+ ### Debug Methods
+
+ - `getSystemInfo()`: Get server system information
+ - `debugResetSession(sessionId: string)`: Reset session state
+ - `getSessionQueueInfo(sessionId: string)`: Get action queue details
+
+ ## Generated Types
+
+ All types are generated from the OpenAPI specification:
+
+ ```typescript
+ import type {
+   CreateSessionRequest,
+   CreateSessionResponse,
+   SessionStatusResponse,
+   // ... all other types
+ } from '@lerobot-arena/inference-server-client';
+ ```
+
+ Key types:
+
+ - `CreateSessionRequest`: Session creation parameters
+ - `CreateSessionResponse`: Session creation result with room IDs
+ - `SessionStatusResponse`: Complete session status and statistics
+
+ ## Examples
+
+ ### Basic Usage
+
+ ```bash
+ bun run examples/basic-usage.ts
+ ```
+
+ ### Quick Example
+
+ ```bash
+ bun run examples/basic-usage.ts --quick
+ ```
+
+ ## Development
+
+ ### Scripts
+
+ - `bun run generate`: Export OpenAPI schema and generate client
+ - `bun run build`: Build the client distribution
+ - `bun run typecheck`: Run TypeScript type checking
+ - `bun run test`: Run tests
+ - `bun run clean`: Clean generated files and dist
+
+ ### Regenerating the Client
+
+ The client is automatically regenerated when you run `bun run build`. To regenerate manually:
+
+ ```bash
+ # Export the latest OpenAPI schema from the inference server
+ bun run export-openapi
+
+ # Generate the TypeScript client from the schema
+ bun run generate-client
+ ```
+
+ ### File Structure
+
+ ```
+ services/inference-server/client/
+ ├── src/
+ │   ├── generated/           # Auto-generated from OpenAPI
+ │   │   ├── index.ts         # Generated exports
+ │   │   ├── services.gen.ts  # Generated API methods
+ │   │   ├── types.gen.ts     # Generated TypeScript types
+ │   │   └── schemas.gen.ts   # Generated schemas
+ │   └── index.ts             # Main client wrapper
+ ├── examples/
+ │   └── basic-usage.ts       # Usage examples
+ ├── dist/                    # Built files
+ ├── openapi.json             # Latest OpenAPI schema
+ └── package.json
+ ```
+
+ ## Requirements
+
+ - **Bun** >= 1.0.0 (for development and building)
+ - **LeRobot Arena Inference Server** running at the target URL
+ - **LeRobot Arena Transport Server** for communication rooms
+
+ ## Communication Architecture
+
+ The inference server uses the LeRobot Arena communication system:
+
+ 1. **Camera Rooms**: Receive video streams (supports multiple cameras)
+ 2. **Joint Input Room**: Receives current robot joint positions (normalized -100 to +100)
+ 3. **Joint Output Room**: Sends predicted joint commands (normalized -100 to +100)
+
+ All rooms are created in the same workspace for session isolation.
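The -100 to +100 convention means raw joint readings must be rescaled before publishing to the joint input room, and commands rescaled back on the way out. A sketch, assuming a simple linear mapping (the physical joint limits here are placeholders, not values from the server; use your robot's real limits):

```typescript
// Linearly map a joint value from its physical range [min, max] onto the
// normalized -100..+100 range used by the joint rooms, clamping out-of-range
// readings. The inverse recovers a physical value from a normalized command.
function normalizeJoint(value: number, min: number, max: number): number {
  const clamped = Math.min(max, Math.max(min, value));
  return ((clamped - min) / (max - min)) * 200 - 100;
}

function denormalizeJoint(normalized: number, min: number, max: number): number {
  return ((normalized + 100) / 200) * (max - min) + min;
}
```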
+
+ ## Error Handling
+
+ All client methods throw descriptive errors on failure:
+
+ ```typescript
+ try {
+   await client.createSession(request);
+ } catch (error) {
+   console.error('Session creation failed:', error.message);
+ }
+ ```
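On validation failures (HTTP 422) the server responds with the `HTTPValidationError` shape from the OpenAPI schema. A sketch of narrowing an unknown error payload to that shape before reporting it; the helper names are illustrative, not part of the generated client:

```typescript
// Shape of one entry in HTTPValidationError.detail (from the OpenAPI schema).
interface ValidationErrorItem {
  loc: (string | number)[];
  msg: string;
  type: string;
}

// Narrow an unknown response body to the HTTPValidationError shape.
function isValidationError(body: unknown): body is { detail: ValidationErrorItem[] } {
  return (
    typeof body === "object" &&
    body !== null &&
    Array.isArray((body as { detail?: unknown }).detail)
  );
}

// Render the per-field errors as "loc: msg" pairs.
function formatValidationError(body: { detail: ValidationErrorItem[] }): string {
  return body.detail.map((d) => `${d.loc.join(".")}: ${d.msg}`).join("; ");
}
```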
+
+ ## Contributing
+
+ This client is auto-generated from the OpenAPI specification. To make changes:
+
+ 1. Update the inference server's FastAPI endpoints
+ 2. Regenerate the client: `bun run generate`
+ 3. Update examples and documentation as needed
+
+ ## License
+
+ Apache 2.0 - See LICENSE file for details.
client/bun.lock ADDED
@@ -0,0 +1,128 @@
1
+ {
2
+ "lockfileVersion": 1,
3
+ "workspaces": {
4
+ "": {
5
+ "name": "client",
6
+ "dependencies": {
7
+ "@hey-api/client-fetch": "^0.2.4",
8
+ },
9
+ "devDependencies": {
10
+ "@hey-api/openapi-ts": "^0.53.12",
11
+ "@types/bun": "latest",
12
+ "typescript": "^5.8.3",
13
+ },
14
+ "peerDependencies": {
15
+ "typescript": "^5",
16
+ },
17
+ },
18
+ },
19
+ "packages": {
20
+ "@apidevtools/json-schema-ref-parser": ["@apidevtools/json-schema-ref-parser@11.7.2", "", { "dependencies": { "@jsdevtools/ono": "^7.1.3", "@types/json-schema": "^7.0.15", "js-yaml": "^4.1.0" } }, "sha512-4gY54eEGEstClvEkGnwVkTkrx0sqwemEFG5OSRRn3tD91XH0+Q8XIkYIfo7IwEWPpJZwILb9GUXeShtplRc/eA=="],
21
+
22
+ "@hey-api/client-fetch": ["@hey-api/client-fetch@0.2.4", "", {}, "sha512-SGTVAVw3PlKDLw+IyhNhb/jCH3P1P2xJzLxA8Kyz1g95HrkYOJdRpl9F5I7LLwo9aCIB7nwR2NrSeX7QaQD7vQ=="],
23
+
24
+ "@hey-api/openapi-ts": ["@hey-api/openapi-ts@0.53.12", "", { "dependencies": { "@apidevtools/json-schema-ref-parser": "11.7.2", "c12": "2.0.1", "commander": "12.1.0", "handlebars": "4.7.8" }, "peerDependencies": { "typescript": "^5.x" }, "bin": { "openapi-ts": "bin/index.cjs" } }, "sha512-cOm8AlUqJIWdLXq+Pk4mTXhEApRSc9xEWTVT8MZAyEqrN1Yhiisl2wyZGH9quzKpolq+oqvgcx61txtwHwi8vQ=="],
25
+
26
+ "@jsdevtools/ono": ["@jsdevtools/ono@7.1.3", "", {}, "sha512-4JQNk+3mVzK3xh2rqd6RB4J46qUR19azEHBneZyTZM+c456qOrbbM/5xcR8huNCCcbVt7+UmizG6GuUvPvKUYg=="],
27
+
28
+ "@types/bun": ["@types/bun@1.2.17", "", { "dependencies": { "bun-types": "1.2.17" } }, "sha512-l/BYs/JYt+cXA/0+wUhulYJB6a6p//GTPiJ7nV+QHa8iiId4HZmnu/3J/SowP5g0rTiERY2kfGKXEK5Ehltx4Q=="],
29
+
30
+ "@types/json-schema": ["@types/json-schema@7.0.15", "", {}, "sha512-5+fP8P8MFNC+AyZCDxrB2pkZFPGzqQWUzpSeuuVLvm8VMcorNYavBqoFcxK8bQz4Qsbn4oUEEem4wDLfcysGHA=="],
31
+
32
+ "@types/node": ["@types/node@24.0.3", "", { "dependencies": { "undici-types": "~7.8.0" } }, "sha512-R4I/kzCYAdRLzfiCabn9hxWfbuHS573x+r0dJMkkzThEa7pbrcDWK+9zu3e7aBOouf+rQAciqPFMnxwr0aWgKg=="],
33
+
34
+ "acorn": ["acorn@8.15.0", "", { "bin": { "acorn": "bin/acorn" } }, "sha512-NZyJarBfL7nWwIq+FDL6Zp/yHEhePMNnnJ0y3qfieCrmNvYct8uvtiV41UvlSe6apAfk0fY1FbWx+NwfmpvtTg=="],
35
+
36
+ "argparse": ["argparse@2.0.1", "", {}, "sha512-8+9WqebbFzpX9OR+Wa6O29asIogeRMzcGtAINdpMHHyAg10f05aSFVBbcEqGf/PXw1EjAZ+q2/bEBg3DvurK3Q=="],
37
+
38
+ "bun-types": ["bun-types@1.2.17", "", { "dependencies": { "@types/node": "*" } }, "sha512-ElC7ItwT3SCQwYZDYoAH+q6KT4Fxjl8DtZ6qDulUFBmXA8YB4xo+l54J9ZJN+k2pphfn9vk7kfubeSd5QfTVJQ=="],
39
+
40
+ "c12": ["c12@2.0.1", "", { "dependencies": { "chokidar": "^4.0.1", "confbox": "^0.1.7", "defu": "^6.1.4", "dotenv": "^16.4.5", "giget": "^1.2.3", "jiti": "^2.3.0", "mlly": "^1.7.1", "ohash": "^1.1.4", "pathe": "^1.1.2", "perfect-debounce": "^1.0.0", "pkg-types": "^1.2.0", "rc9": "^2.1.2" }, "peerDependencies": { "magicast": "^0.3.5" }, "optionalPeers": ["magicast"] }, "sha512-Z4JgsKXHG37C6PYUtIxCfLJZvo6FyhHJoClwwb9ftUkLpPSkuYqn6Tr+vnaN8hymm0kIbcg6Ey3kv/Q71k5w/A=="],
41
+
42
+ "chokidar": ["chokidar@4.0.3", "", { "dependencies": { "readdirp": "^4.0.1" } }, "sha512-Qgzu8kfBvo+cA4962jnP1KkS6Dop5NS6g7R5LFYJr4b8Ub94PPQXUksCw9PvXoeXPRRddRNC5C1JQUR2SMGtnA=="],
43
+
44
+ "chownr": ["chownr@2.0.0", "", {}, "sha512-bIomtDF5KGpdogkLd9VspvFzk9KfpyyGlS8YFVZl7TGPBHL5snIOnxeshwVgPteQ9b4Eydl+pVbIyE1DcvCWgQ=="],
45
+
46
+ "citty": ["citty@0.1.6", "", { "dependencies": { "consola": "^3.2.3" } }, "sha512-tskPPKEs8D2KPafUypv2gxwJP8h/OaJmC82QQGGDQcHvXX43xF2VDACcJVmZ0EuSxkpO9Kc4MlrA3q0+FG58AQ=="],
47
+
48
+ "commander": ["commander@12.1.0", "", {}, "sha512-Vw8qHK3bZM9y/P10u3Vib8o/DdkvA2OtPtZvD871QKjy74Wj1WSKFILMPRPSdUSx5RFK1arlJzEtA4PkFgnbuA=="],
49
+
50
+ "confbox": ["confbox@0.1.8", "", {}, "sha512-RMtmw0iFkeR4YV+fUOSucriAQNb9g8zFR52MWCtl+cCZOFRNL6zeB395vPzFhEjjn4fMxXudmELnl/KF/WrK6w=="],
51
+
52
+ "consola": ["consola@3.4.2", "", {}, "sha512-5IKcdX0nnYavi6G7TtOhwkYzyjfJlatbjMjuLSfE2kYT5pMDOilZ4OvMhi637CcDICTmz3wARPoyhqyX1Y+XvA=="],
53
+
54
+ "defu": ["defu@6.1.4", "", {}, "sha512-mEQCMmwJu317oSz8CwdIOdwf3xMif1ttiM8LTufzc3g6kR+9Pe236twL8j3IYT1F7GfRgGcW6MWxzZjLIkuHIg=="],
55
+
56
+ "destr": ["destr@2.0.5", "", {}, "sha512-ugFTXCtDZunbzasqBxrK93Ik/DRYsO6S/fedkWEMKqt04xZ4csmnmwGDBAb07QWNaGMAmnTIemsYZCksjATwsA=="],
57
+
58
+ "dotenv": ["dotenv@16.5.0", "", {}, "sha512-m/C+AwOAr9/W1UOIZUo232ejMNnJAJtYQjUbHoNTBNTJSvqzzDh7vnrei3o3r3m9blf6ZoDkvcw0VmozNRFJxg=="],
59
+
60
+ "fs-minipass": ["fs-minipass@2.1.0", "", { "dependencies": { "minipass": "^3.0.0" } }, "sha512-V/JgOLFCS+R6Vcq0slCuaeWEdNC3ouDlJMNIsacH2VtALiu9mV4LPrHc5cDl8k5aw6J8jwgWWpiTo5RYhmIzvg=="],
61
+
62
+ "giget": ["giget@1.2.5", "", { "dependencies": { "citty": "^0.1.6", "consola": "^3.4.0", "defu": "^6.1.4", "node-fetch-native": "^1.6.6", "nypm": "^0.5.4", "pathe": "^2.0.3", "tar": "^6.2.1" }, "bin": { "giget": "dist/cli.mjs" } }, "sha512-r1ekGw/Bgpi3HLV3h1MRBIlSAdHoIMklpaQ3OQLFcRw9PwAj2rqigvIbg+dBUI51OxVI2jsEtDywDBjSiuf7Ug=="],
63
+
64
+ "handlebars": ["handlebars@4.7.8", "", { "dependencies": { "minimist": "^1.2.5", "neo-async": "^2.6.2", "source-map": "^0.6.1", "wordwrap": "^1.0.0" }, "optionalDependencies": { "uglify-js": "^3.1.4" }, "bin": { "handlebars": "bin/handlebars" } }, "sha512-vafaFqs8MZkRrSX7sFVUdo3ap/eNiLnb4IakshzvP56X5Nr1iGKAIqdX6tMlm6HcNRIkr6AxO5jFEoJzzpT8aQ=="],
65
+
66
+ "jiti": ["jiti@2.4.2", "", { "bin": { "jiti": "lib/jiti-cli.mjs" } }, "sha512-rg9zJN+G4n2nfJl5MW3BMygZX56zKPNVEYYqq7adpmMh4Jn2QNEwhvQlFy6jPVdcod7txZtKHWnyZiA3a0zP7A=="],
67
+
68
+ "js-yaml": ["js-yaml@4.1.0", "", { "dependencies": { "argparse": "^2.0.1" }, "bin": { "js-yaml": "bin/js-yaml.js" } }, "sha512-wpxZs9NoxZaJESJGIZTyDEaYpl0FKSA+FB9aJiyemKhMwkxQg63h4T1KJgUGHpTqPDNRcmmYLugrRjJlBtWvRA=="],
69
+
70
+ "minimist": ["minimist@1.2.8", "", {}, "sha512-2yyAR8qBkN3YuheJanUpWC5U3bb5osDywNB8RzDVlDwDHbocAJveqqj1u8+SVD7jkWT4yvsHCpWqqWqAxb0zCA=="],
71
+
72
+ "minipass": ["minipass@5.0.0", "", {}, "sha512-3FnjYuehv9k6ovOEbyOswadCDPX1piCfhV8ncmYtHOjuPwylVWsghTLo7rabjC3Rx5xD4HDx8Wm1xnMF7S5qFQ=="],
73
+
74
+ "minizlib": ["minizlib@2.1.2", "", { "dependencies": { "minipass": "^3.0.0", "yallist": "^4.0.0" } }, "sha512-bAxsR8BVfj60DWXHE3u30oHzfl4G7khkSuPW+qvpd7jFRHm7dLxOjUk1EHACJ/hxLY8phGJ0YhYHZo7jil7Qdg=="],
75
+
76
+ "mkdirp": ["mkdirp@1.0.4", "", { "bin": { "mkdirp": "bin/cmd.js" } }, "sha512-vVqVZQyf3WLx2Shd0qJ9xuvqgAyKPLAiqITEtqW0oIUjzo3PePDd6fW9iFz30ef7Ysp/oiWqbhszeGWW2T6Gzw=="],
77
+
78
+ "mlly": ["mlly@1.7.4", "", { "dependencies": { "acorn": "^8.14.0", "pathe": "^2.0.1", "pkg-types": "^1.3.0", "ufo": "^1.5.4" } }, "sha512-qmdSIPC4bDJXgZTCR7XosJiNKySV7O215tsPtDN9iEO/7q/76b/ijtgRu/+epFXSJhijtTCCGp3DWS549P3xKw=="],
79
+
80
+ "neo-async": ["neo-async@2.6.2", "", {}, "sha512-Yd3UES5mWCSqR+qNT93S3UoYUkqAZ9lLg8a7g9rimsWmYGK8cVToA4/sF3RrshdyV3sAGMXVUmpMYOw+dLpOuw=="],
81
+
82
+ "node-fetch-native": ["node-fetch-native@1.6.6", "", {}, "sha512-8Mc2HhqPdlIfedsuZoc3yioPuzp6b+L5jRCRY1QzuWZh2EGJVQrGppC6V6cF0bLdbW0+O2YpqCA25aF/1lvipQ=="],
83
+
84
+ "nypm": ["nypm@0.5.4", "", { "dependencies": { "citty": "^0.1.6", "consola": "^3.4.0", "pathe": "^2.0.3", "pkg-types": "^1.3.1", "tinyexec": "^0.3.2", "ufo": "^1.5.4" }, "bin": { "nypm": "dist/cli.mjs" } }, "sha512-X0SNNrZiGU8/e/zAB7sCTtdxWTMSIO73q+xuKgglm2Yvzwlo8UoC5FNySQFCvl84uPaeADkqHUZUkWy4aH4xOA=="],
85
+
86
+ "ohash": ["ohash@1.1.6", "", {}, "sha512-TBu7PtV8YkAZn0tSxobKY2n2aAQva936lhRrj6957aDaCf9IEtqsKbgMzXE/F/sjqYOwmrukeORHNLe5glk7Cg=="],
87
+
88
+ "pathe": ["pathe@1.1.2", "", {}, "sha512-whLdWMYL2TwI08hn8/ZqAbrVemu0LNaNNJZX73O6qaIdCTfXutsLhMkjdENX0qhsQ9uIimo4/aQOmXkoon2nDQ=="],
89
+
90
+ "perfect-debounce": ["perfect-debounce@1.0.0", "", {}, "sha512-xCy9V055GLEqoFaHoC1SoLIaLmWctgCUaBaWxDZ7/Zx4CTyX7cJQLJOok/orfjZAh9kEYpjJa4d0KcJmCbctZA=="],
91
+
92
+ "pkg-types": ["pkg-types@1.3.1", "", { "dependencies": { "confbox": "^0.1.8", "mlly": "^1.7.4", "pathe": "^2.0.1" } }, "sha512-/Jm5M4RvtBFVkKWRu2BLUTNP8/M2a+UwuAX+ae4770q1qVGtfjG+WTCupoZixokjmHiry8uI+dlY8KXYV5HVVQ=="],
93
+
94
+ "rc9": ["rc9@2.1.2", "", { "dependencies": { "defu": "^6.1.4", "destr": "^2.0.3" } }, "sha512-btXCnMmRIBINM2LDZoEmOogIZU7Qe7zn4BpomSKZ/ykbLObuBdvG+mFq11DL6fjH1DRwHhrlgtYWG96bJiC7Cg=="],
95
+
96
+ "readdirp": ["readdirp@4.1.2", "", {}, "sha512-GDhwkLfywWL2s6vEjyhri+eXmfH6j1L7JE27WhqLeYzoh/A3DBaYGEj2H/HFZCn/kMfim73FXxEJTw06WtxQwg=="],
97
+
98
+ "source-map": ["source-map@0.6.1", "", {}, "sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g=="],
99
+
100
+ "tar": ["tar@6.2.1", "", { "dependencies": { "chownr": "^2.0.0", "fs-minipass": "^2.0.0", "minipass": "^5.0.0", "minizlib": "^2.1.1", "mkdirp": "^1.0.3", "yallist": "^4.0.0" } }, "sha512-DZ4yORTwrbTj/7MZYq2w+/ZFdI6OZ/f9SFHR+71gIVUZhOQPHzVCLpvRnPgyaMpfWxxk/4ONva3GQSyNIKRv6A=="],
101
+
102
+ "tinyexec": ["tinyexec@0.3.2", "", {}, "sha512-KQQR9yN7R5+OSwaK0XQoj22pwHoTlgYqmUscPYoknOoWCWfj/5/ABTMRi69FrKU5ffPVh5QcFikpWJI/P1ocHA=="],
103
+
104
+ "typescript": ["typescript@5.8.3", "", { "bin": { "tsc": "bin/tsc", "tsserver": "bin/tsserver" } }, "sha512-p1diW6TqL9L07nNxvRMM7hMMw4c5XOo/1ibL4aAIGmSAt9slTE1Xgw5KWuof2uTOvCg9BY7ZRi+GaF+7sfgPeQ=="],
105
+
106
+ "ufo": ["ufo@1.6.1", "", {}, "sha512-9a4/uxlTWJ4+a5i0ooc1rU7C7YOw3wT+UGqdeNNHWnOF9qcMBgLRS+4IYUqbczewFx4mLEig6gawh7X6mFlEkA=="],
107
+
108
+ "uglify-js": ["uglify-js@3.19.3", "", { "bin": { "uglifyjs": "bin/uglifyjs" } }, "sha512-v3Xu+yuwBXisp6QYTcH4UbH+xYJXqnq2m/LtQVWKWzYc1iehYnLixoQDN9FH6/j9/oybfd6W9Ghwkl8+UMKTKQ=="],
109
+
110
+ "undici-types": ["undici-types@7.8.0", "", {}, "sha512-9UJ2xGDvQ43tYyVMpuHlsgApydB8ZKfVYTsLDhXkFL/6gfkp+U8xTGdh8pMJv1SpZna0zxG1DwsKZsreLbXBxw=="],
111
+
112
+ "wordwrap": ["wordwrap@1.0.0", "", {}, "sha512-gvVzJFlPycKc5dZN4yPkP8w7Dc37BtP1yczEneOb4uq34pXZcvrtRTmWV8W+Ume+XCxKgbjM+nevkyFPMybd4Q=="],
113
+
114
+ "yallist": ["yallist@4.0.0", "", {}, "sha512-3wdGidZyq5PB084XLES5TpOSRA3wjXAlIWMhum2kRcv/41Sn2emQ0dycQW4uZXLejwKvg6EsvbdlVL+FYEct7A=="],
115
+
116
+ "fs-minipass/minipass": ["minipass@3.3.6", "", { "dependencies": { "yallist": "^4.0.0" } }, "sha512-DxiNidxSEK+tHG6zOIklvNOwm3hvCrbUrdtzY74U6HKTJxvIDfOUL5W5P2Ghd3DTkhhKPYGqeNUIh5qcM4YBfw=="],
117
+
118
+ "giget/pathe": ["pathe@2.0.3", "", {}, "sha512-WUjGcAqP1gQacoQe+OBJsFA7Ld4DyXuUIjZ5cc75cLHvJ7dtNsTugphxIADwspS+AraAUePCKrSVtPLFj/F88w=="],
119
+
120
+ "minizlib/minipass": ["minipass@3.3.6", "", { "dependencies": { "yallist": "^4.0.0" } }, "sha512-DxiNidxSEK+tHG6zOIklvNOwm3hvCrbUrdtzY74U6HKTJxvIDfOUL5W5P2Ghd3DTkhhKPYGqeNUIh5qcM4YBfw=="],
121
+
122
+ "mlly/pathe": ["pathe@2.0.3", "", {}, "sha512-WUjGcAqP1gQacoQe+OBJsFA7Ld4DyXuUIjZ5cc75cLHvJ7dtNsTugphxIADwspS+AraAUePCKrSVtPLFj/F88w=="],
123
+
124
+ "nypm/pathe": ["pathe@2.0.3", "", {}, "sha512-WUjGcAqP1gQacoQe+OBJsFA7Ld4DyXuUIjZ5cc75cLHvJ7dtNsTugphxIADwspS+AraAUePCKrSVtPLFj/F88w=="],
125
+
126
+ "pkg-types/pathe": ["pathe@2.0.3", "", {}, "sha512-WUjGcAqP1gQacoQe+OBJsFA7Ld4DyXuUIjZ5cc75cLHvJ7dtNsTugphxIADwspS+AraAUePCKrSVtPLFj/F88w=="],
127
+ }
128
+ }
client/examples/basic-usage.ts ADDED
@@ -0,0 +1,149 @@
+ #!/usr/bin/env bun
+ /**
+  * Basic Usage Example for LeRobot Arena Inference Server TypeScript Client
+  *
+  * This example demonstrates how to:
+  * 1. Create a client instance
+  * 2. Check server health
+  * 3. Create an inference session
+  * 4. Start inference
+  * 5. Monitor session status
+  * 6. Clean up resources
+  */
+
+ import {
+   LeRobotInferenceServerClient
+ } from '../src/index';
+
+ import type {
+   CreateSessionRequest,
+   SessionStatusResponse
+ } from '../src/generated';
+
+ async function main() {
+   // Create client instance
+   const client = new LeRobotInferenceServerClient('http://localhost:8001');
+
+   try {
+     console.log('🔍 Checking server health...');
+     const isHealthy = await client.isHealthy();
+     if (!isHealthy) {
+       console.error('❌ Server is not healthy. Make sure the inference server is running.');
+       process.exit(1);
+     }
+     console.log('✅ Server is healthy!');
+
+     // Get detailed health info
+     const healthInfo = await client.getHealth();
+     console.log('📊 Server status:', healthInfo);
+
+     // Create a session (using generated types)
+     const sessionRequest: CreateSessionRequest = {
+       session_id: 'example-session-' + Date.now(),
+       policy_path: './checkpoints/act_so101_beyond', // Update with your model path
+       camera_names: ['front', 'wrist'], // Update with your camera names
+       arena_server_url: 'http://localhost:8000', // Update with your arena server URL
+       workspace_id: null // Let the server generate a workspace ID
+     };
+
+     console.log('🚀 Creating inference session...');
+     const session = await client.createSession(sessionRequest);
+     console.log('✅ Session created!');
+     console.log('📍 Workspace ID:', session.workspace_id);
+     console.log('📷 Camera rooms:', session.camera_room_ids);
+     console.log('🔄 Joint input room:', session.joint_input_room_id);
+     console.log('🎯 Joint output room:', session.joint_output_room_id);
+
+     // Start inference
+     console.log('▶️ Starting inference...');
+     await client.startInference(sessionRequest.session_id);
+     console.log('✅ Inference started!');
+
+     // Wait for the session to be running
+     console.log('⏳ Waiting for session to be running...');
+     const runningStatus = await client.waitForSessionStatus(
+       sessionRequest.session_id,
+       'running',
+       30000 // 30 second timeout
+     );
+     console.log('🏃 Session is now running!');
+
+     // Monitor the session for a few seconds
+     console.log('📊 Monitoring session status...');
+     for (let i = 0; i < 5; i++) {
+       const status: SessionStatusResponse = await client.getSessionStatus(sessionRequest.session_id);
+       console.log(`📈 Status: ${status.status}, Stats:`, status.stats);
+
+       // Wait 2 seconds before next check
+       await new Promise(resolve => setTimeout(resolve, 2000));
+     }
+
+     // Get system info for debugging
+     console.log('🔧 Getting system information...');
+     const systemInfo = await client.getSystemInfo();
+     console.log('💻 System info:', systemInfo);
+
+     // Get session queue info
+     console.log('📋 Getting session queue info...');
+     const queueInfo = await client.getSessionQueueInfo(sessionRequest.session_id);
+     console.log('📝 Queue info:', queueInfo);
+
+     // Stop inference
+     console.log('⏹️ Stopping inference...');
+     await client.stopInference(sessionRequest.session_id);
+     console.log('✅ Inference stopped!');
+
+     // Clean up - delete the session
+     console.log('🧹 Cleaning up session...');
+     await client.deleteSession(sessionRequest.session_id);
+     console.log('✅ Session deleted!');
+
+     console.log('🎉 Example completed successfully!');
+
+   } catch (error) {
+     console.error('❌ Error:', error);
+     process.exit(1);
+   }
+ }
+
+ // Alternative: Using the convenience function
+ async function quickExample() {
+   const client = new LeRobotInferenceServerClient('http://localhost:8001');
+
+   try {
+     // This creates a session and starts inference in one call
+     const result = await client.createAndStartSession({
+       session_id: 'quick-example-' + Date.now(),
+       policy_path: './checkpoints/act_so101_beyond',
+       camera_names: ['front'],
+       arena_server_url: 'http://localhost:8000'
+     });
+
+     console.log('🚀 Quick session created and started!');
+     console.log('Session:', result.session);
+     console.log('Status:', result.status);
+
+     // Clean up
+     await client.deleteSession(result.status.session_id);
+     console.log('✅ Quick example completed!');
+
+   } catch (error) {
+     console.error('❌ Quick example error:', error);
+   }
+ }
+
+ // Run the main example
+ if (import.meta.main) {
+   console.log('=== LeRobot Arena Inference Server Client Example ===\n');
+
+   // Choose which example to run based on command line argument
+   const runQuick = process.argv.includes('--quick');
+
+   if (runQuick) {
+     console.log('Running quick example...\n');
+     await quickExample();
+   } else {
+     console.log('Running full example...\n');
+     await main();
+   }
+ }
client/openapi.json ADDED
@@ -0,0 +1,710 @@
+ {
+   "openapi": "3.1.0",
+   "info": {
+     "title": "Inference Server",
+     "summary": "ACT Model Inference Server for Real-time Robot Control",
+     "version": "1.0.0"
+   },
+   "paths": {
+     "/": {
+       "get": {
+         "tags": ["Health"],
+         "summary": "Root",
+         "description": "Health check endpoint.",
+         "operationId": "root__get",
+         "responses": {
+           "200": {
+             "description": "Successful Response",
+             "content": { "application/json": { "schema": {} } }
+           }
+         }
+       }
+     },
+     "/health": {
+       "get": {
+         "tags": ["Health"],
+         "summary": "Health Check",
+         "description": "Detailed health check.",
+         "operationId": "health_check_health_get",
+         "responses": {
+           "200": {
+             "description": "Successful Response",
+             "content": { "application/json": { "schema": {} } }
+           }
+         }
+       }
+     },
+     "/policies": {
+       "get": {
+         "tags": ["Policies"],
+         "summary": "List Policies",
+         "description": "List supported policy types.",
+         "operationId": "list_policies_policies_get",
+         "responses": {
+           "200": {
+             "description": "Successful Response",
+             "content": { "application/json": { "schema": {} } }
+           }
+         }
+       }
+     },
+     "/sessions": {
+       "get": {
+         "tags": ["Sessions"],
+         "summary": "List Sessions",
+         "description": "List all sessions.",
+         "operationId": "list_sessions_sessions_get",
+         "responses": {
+           "200": {
+             "description": "Successful Response",
+             "content": {
+               "application/json": {
+                 "schema": {
+                   "items": { "$ref": "#/components/schemas/SessionStatusResponse" },
+                   "type": "array",
+                   "title": "Response List Sessions Sessions Get"
+                 }
+               }
+             }
+           }
+         }
+       },
+       "post": {
+         "tags": ["Sessions"],
+         "summary": "Create Session",
+         "description": "Create a new inference session.\n\nIf workspace_id is provided, all rooms will be created in that workspace.\nIf workspace_id is not provided, a new workspace will be generated automatically.\nAll rooms for a session (cameras + joints) are always created in the same workspace.",
+         "operationId": "create_session_sessions_post",
+         "requestBody": {
+           "content": {
+             "application/json": {
+               "schema": { "$ref": "#/components/schemas/CreateSessionRequest" }
+             }
+           },
+           "required": true
+         },
+         "responses": {
+           "200": {
+             "description": "Successful Response",
+             "content": {
+               "application/json": {
+                 "schema": { "$ref": "#/components/schemas/CreateSessionResponse" }
+               }
+             }
+           },
+           "422": {
+             "description": "Validation Error",
+             "content": {
+               "application/json": {
+                 "schema": { "$ref": "#/components/schemas/HTTPValidationError" }
+               }
+             }
+           }
+         }
+       }
+     },
+     "/sessions/{session_id}": {
+       "get": {
+         "tags": ["Sessions"],
+         "summary": "Get Session Status",
+         "description": "Get status of a specific session.",
+         "operationId": "get_session_status_sessions__session_id__get",
+         "parameters": [
+           { "name": "session_id", "in": "path", "required": true, "schema": { "type": "string", "title": "Session Id" } }
+         ],
+         "responses": {
+           "200": {
+             "description": "Successful Response",
+             "content": {
+               "application/json": {
+                 "schema": { "$ref": "#/components/schemas/SessionStatusResponse" }
+               }
+             }
+           },
+           "422": {
+             "description": "Validation Error",
+             "content": {
+               "application/json": {
+                 "schema": { "$ref": "#/components/schemas/HTTPValidationError" }
+               }
+             }
+           }
+         }
+       },
+       "delete": {
+         "tags": ["Sessions"],
+         "summary": "Delete Session",
+         "description": "Delete a session.",
+         "operationId": "delete_session_sessions__session_id__delete",
+         "parameters": [
+           { "name": "session_id", "in": "path", "required": true, "schema": { "type": "string", "title": "Session Id" } }
+         ],
+         "responses": {
+           "200": {
+             "description": "Successful Response",
+             "content": { "application/json": { "schema": {} } }
+           },
+           "422": {
+             "description": "Validation Error",
+             "content": {
+               "application/json": {
+                 "schema": { "$ref": "#/components/schemas/HTTPValidationError" }
+               }
+             }
+           }
+         }
+       }
+     },
+     "/sessions/{session_id}/start": {
+       "post": {
+         "tags": ["Control"],
+         "summary": "Start Inference",
+         "description": "Start inference for a session.",
+         "operationId": "start_inference_sessions__session_id__start_post",
+         "parameters": [
+           { "name": "session_id", "in": "path", "required": true, "schema": { "type": "string", "title": "Session Id" } }
+         ],
+         "responses": {
+           "200": {
+             "description": "Successful Response",
+             "content": { "application/json": { "schema": {} } }
+           },
+           "422": {
+             "description": "Validation Error",
+             "content": {
+               "application/json": {
+                 "schema": { "$ref": "#/components/schemas/HTTPValidationError" }
+               }
+             }
+           }
+         }
+       }
+     },
+     "/sessions/{session_id}/stop": {
+       "post": {
+         "tags": ["Control"],
+         "summary": "Stop Inference",
+         "description": "Stop inference for a session.",
+         "operationId": "stop_inference_sessions__session_id__stop_post",
+         "parameters": [
+           { "name": "session_id", "in": "path", "required": true, "schema": { "type": "string", "title": "Session Id" } }
+         ],
+         "responses": {
+           "200": {
+             "description": "Successful Response",
+             "content": { "application/json": { "schema": {} } }
+           },
+           "422": {
+             "description": "Validation Error",
+             "content": {
+               "application/json": {
+                 "schema": { "$ref": "#/components/schemas/HTTPValidationError" }
+               }
+             }
+           }
+         }
+       }
+     },
+     "/sessions/{session_id}/restart": {
+       "post": {
+         "tags": ["Control"],
+         "summary": "Restart Inference",
+         "description": "Restart inference for a session.",
+         "operationId": "restart_inference_sessions__session_id__restart_post",
+         "parameters": [
+           { "name": "session_id", "in": "path", "required": true, "schema": { "type": "string", "title": "Session Id" } }
+         ],
+         "responses": {
+           "200": {
+             "description": "Successful Response",
+             "content": { "application/json": { "schema": {} } }
+           },
+           "422": {
+             "description": "Validation Error",
+             "content": {
+               "application/json": {
+                 "schema": { "$ref": "#/components/schemas/HTTPValidationError" }
+               }
+             }
+           }
+         }
+       }
+     },
+     "/debug/system": {
+       "get": {
+         "tags": ["Debug"],
+         "summary": "Get System Info",
+         "description": "Get system information for debugging.",
+         "operationId": "get_system_info_debug_system_get",
+         "responses": {
+           "200": {
+             "description": "Successful Response",
+             "content": { "application/json": { "schema": {} } }
+           }
+         }
+       }
+     },
+     "/debug/logs": {
+       "get": {
+         "tags": ["Debug"],
+         "summary": "Get Recent Logs",
+         "description": "Get recent log entries for debugging.",
+         "operationId": "get_recent_logs_debug_logs_get",
+         "responses": {
+           "200": {
+             "description": "Successful Response",
+             "content": { "application/json": { "schema": {} } }
+           }
+         }
+       }
+     },
+     "/debug/sessions/{session_id}/reset": {
+       "post": {
+         "tags": ["Debug"],
+         "summary": "Debug Reset Session",
+         "description": "Reset a session's internal state for debugging.",
+         "operationId": "debug_reset_session_debug_sessions__session_id__reset_post",
+         "parameters": [
+           { "name": "session_id", "in": "path", "required": true, "schema": { "type": "string", "title": "Session Id" } }
+         ],
+         "responses": {
+           "200": {
+             "description": "Successful Response",
+             "content": { "application/json": { "schema": {} } }
+           },
+           "422": {
+             "description": "Validation Error",
+             "content": {
+               "application/json": {
+                 "schema": { "$ref": "#/components/schemas/HTTPValidationError" }
+               }
+             }
+           }
+         }
+       }
+     },
+     "/debug/sessions/{session_id}/queue": {
+       "get": {
+         "tags": ["Debug"],
+         "summary": "Get Session Queue Info",
+         "description": "Get detailed information about a session's action queue.",
+         "operationId": "get_session_queue_info_debug_sessions__session_id__queue_get",
+         "parameters": [
+           { "name": "session_id", "in": "path", "required": true, "schema": { "type": "string", "title": "Session Id" } }
+         ],
+         "responses": {
+           "200": {
+             "description": "Successful Response",
+             "content": { "application/json": { "schema": {} } }
+           },
+           "422": {
+             "description": "Validation Error",
+             "content": {
+               "application/json": {
+                 "schema": { "$ref": "#/components/schemas/HTTPValidationError" }
+               }
+             }
+           }
+         }
+       }
+     }
+   },
+   "components": {
+     "schemas": {
+       "CreateSessionRequest": {
+         "properties": {
+           "session_id": { "type": "string", "title": "Session Id" },
+           "policy_path": { "type": "string", "title": "Policy Path" },
+           "camera_names": {
+             "items": { "type": "string" },
+             "type": "array",
+             "title": "Camera Names",
+             "default": ["front"]
+           },
+           "arena_server_url": { "type": "string", "title": "Arena Server Url", "default": "http://localhost:8000" },
+           "workspace_id": {
+             "anyOf": [{ "type": "string" }, { "type": "null" }],
+             "title": "Workspace Id"
+           },
+           "policy_type": { "type": "string", "title": "Policy Type", "default": "act" },
+           "language_instruction": {
+             "anyOf": [{ "type": "string" }, { "type": "null" }],
+             "title": "Language Instruction"
+           }
+         },
+         "type": "object",
+         "required": ["session_id", "policy_path"],
+         "title": "CreateSessionRequest"
+       },
+       "CreateSessionResponse": {
+         "properties": {
+           "workspace_id": { "type": "string", "title": "Workspace Id" },
+           "camera_room_ids": {
+             "additionalProperties": { "type": "string" },
+             "type": "object",
+             "title": "Camera Room Ids"
+           },
+           "joint_input_room_id": { "type": "string", "title": "Joint Input Room Id" },
+           "joint_output_room_id": { "type": "string", "title": "Joint Output Room Id" }
+         },
+         "type": "object",
+         "required": ["workspace_id", "camera_room_ids", "joint_input_room_id", "joint_output_room_id"],
+         "title": "CreateSessionResponse"
+       },
+       "HTTPValidationError": {
+         "properties": {
+           "detail": {
+             "items": { "$ref": "#/components/schemas/ValidationError" },
+             "type": "array",
+             "title": "Detail"
+           }
+         },
+         "type": "object",
+         "title": "HTTPValidationError"
+       },
+       "SessionStatusResponse": {
+         "properties": {
+           "session_id": { "type": "string", "title": "Session Id" },
+           "status": { "type": "string", "title": "Status" },
+           "policy_path": { "type": "string", "title": "Policy Path" },
+           "policy_type": { "type": "string", "title": "Policy Type" },
+           "camera_names": { "items": { "type": "string" }, "type": "array", "title": "Camera Names" },
+           "workspace_id": { "type": "string", "title": "Workspace Id" },
+           "rooms": { "additionalProperties": true, "type": "object", "title": "Rooms" },
+           "stats": { "additionalProperties": true, "type": "object", "title": "Stats" },
+           "inference_stats": {
+             "anyOf": [{ "additionalProperties": true, "type": "object" }, { "type": "null" }],
+             "title": "Inference Stats"
+           },
+           "error_message": {
+             "anyOf": [{ "type": "string" }, { "type": "null" }],
+             "title": "Error Message"
+           }
+         },
+         "type": "object",
+         "required": ["session_id", "status", "policy_path", "policy_type", "camera_names", "workspace_id", "rooms", "stats"],
+         "title": "SessionStatusResponse"
+       },
+       "ValidationError": {
+         "properties": {
+           "loc": {
+             "items": { "anyOf": [{ "type": "string" }, { "type": "integer" }] },
+             "type": "array",
+             "title": "Location"
+           },
+           "msg": { "type": "string", "title": "Message" },
+           "type": { "type": "string", "title": "Error Type" }
+         },
+         "type": "object",
+         "required": ["loc", "msg", "type"],
+         "title": "ValidationError"
+       }
+     },
+     "securitySchemes": {
+       "BearerAuth": { "type": "http", "scheme": "bearer", "bearerFormat": "JWT" },
+       "ApiKeyAuth": { "type": "apiKey", "in": "header", "name": "X-API-Key" }
+     }
+   },
+   "tags": [
+     { "name": "Health", "description": "Health check and server status endpoints" },
+     { "name": "Sessions", "description": "Inference session management - create, control, and monitor AI sessions" },
+     { "name": "Control", "description": "Session control operations - start, stop, restart inference" },
+     { "name": "Debug", "description": "Debug and monitoring endpoints for system diagnostics" }
+   ]
+ }
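Per the `CreateSessionRequest` schema above, only `session_id` and `policy_path` are required; `camera_names`, `arena_server_url`, and `policy_type` carry declared defaults. A sketch of applying those defaults client-side (the `withDefaults` helper is illustrative, not part of the generated client; the server applies the same defaults when fields are omitted):

```typescript
// Shape of CreateSessionRequest as declared in the OpenAPI schema above.
interface CreateSessionRequest {
  session_id: string;
  policy_path: string;
  camera_names?: string[];
  arena_server_url?: string;
  workspace_id?: string | null;
  policy_type?: string;
  language_instruction?: string | null;
}

// Hypothetical helper: fill in the schema's declared defaults,
// letting any explicitly provided fields win via the spread.
function withDefaults(req: CreateSessionRequest): CreateSessionRequest {
  return {
    camera_names: ["front"],
    arena_server_url: "http://localhost:8000",
    policy_type: "act",
    ...req,
  };
}
```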
client/package.json ADDED
@@ -0,0 +1,63 @@
+ {
+   "name": "@robohub/inference-server-client",
+   "version": "1.0.0",
+   "description": "TypeScript client for LeRobot Arena Inference Server - ACT model inference and session management",
+   "module": "dist/index.js",
+   "main": "dist/index.js",
+   "types": "dist/index.d.ts",
+   "type": "module",
+   "private": true,
+   "exports": {
+     ".": {
+       "types": "./dist/index.d.ts",
+       "import": "./dist/index.js"
+     }
+   },
+   "files": ["dist", "src", "README.md"],
+   "scripts": {
+     "build": "bun build ./src/index.ts --outdir ./dist --target bun --format esm --sourcemap && bunx tsc --emitDeclarationOnly --declaration --outDir ./dist/temp && cp -r ./dist/temp/src/* ./dist/ && rm -rf ./dist/temp",
+     "dev": "bun --watch src/index.ts",
+     "test": "bun test",
+     "typecheck": "bunx tsc --noEmit",
+     "clean": "rm -rf dist src/generated",
+     "export-openapi": "cd .. && uv run python -m inference_server.export_openapi --output client/openapi.json",
+     "generate-client": "bunx @hey-api/openapi-ts -i openapi.json -o src/generated -c @hey-api/client-fetch",
+     "generate": "bun run export-openapi && bun run generate-client",
+     "prebuild": "bun run clean && bun run generate"
+   },
+   "keywords": ["lerobot", "arena", "inference-server", "act", "robotics", "inference", "typescript", "client"],
+   "author": "Julien Blanchon",
+   "license": "Apache-2.0",
+   "dependencies": {
+     "@hey-api/client-fetch": "^0.2.4"
+   },
+   "devDependencies": {
+     "@hey-api/openapi-ts": "^0.53.12",
+     "@types/bun": "latest",
+     "typescript": "^5.8.3"
+   },
+   "peerDependencies": {
+     "typescript": "^5"
+   },
+   "repository": {
+     "type": "git",
+     "url": "git+https://github.com/julien-blanchon/RoboHub.git",
+     "directory": "services/inference-server/client"
+   },
+   "engines": {
+     "bun": ">=1.0.0"
+   }
+ }
client/src/generated/index.ts ADDED
@@ -0,0 +1,4 @@
+ // This file is auto-generated by @hey-api/openapi-ts
+ export * from './schemas.gen';
+ export * from './services.gen';
+ export * from './types.gen';
client/src/generated/schemas.gen.ts ADDED
@@ -0,0 +1,196 @@
+ // This file is auto-generated by @hey-api/openapi-ts
+
+ export const CreateSessionRequestSchema = {
+   properties: {
+     session_id: { type: 'string', title: 'Session Id' },
+     policy_path: { type: 'string', title: 'Policy Path' },
+     camera_names: {
+       items: { type: 'string' },
+       type: 'array',
+       title: 'Camera Names',
+       default: ['front']
+     },
+     arena_server_url: { type: 'string', title: 'Arena Server Url', default: 'http://localhost:8000' },
+     workspace_id: {
+       anyOf: [{ type: 'string' }, { type: 'null' }],
+       title: 'Workspace Id'
+     },
+     policy_type: { type: 'string', title: 'Policy Type', default: 'act' },
+     language_instruction: {
+       anyOf: [{ type: 'string' }, { type: 'null' }],
+       title: 'Language Instruction'
+     }
+   },
+   type: 'object',
+   required: ['session_id', 'policy_path'],
+   title: 'CreateSessionRequest'
+ } as const;
+
+ export const CreateSessionResponseSchema = {
+   properties: {
+     workspace_id: { type: 'string', title: 'Workspace Id' },
+     camera_room_ids: {
+       additionalProperties: { type: 'string' },
+       type: 'object',
+       title: 'Camera Room Ids'
+     },
+     joint_input_room_id: { type: 'string', title: 'Joint Input Room Id' },
+     joint_output_room_id: { type: 'string', title: 'Joint Output Room Id' }
+   },
+   type: 'object',
+   required: ['workspace_id', 'camera_room_ids', 'joint_input_room_id', 'joint_output_room_id'],
+   title: 'CreateSessionResponse'
+ } as const;
+
+ export const HTTPValidationErrorSchema = {
+   properties: {
+     detail: {
+       items: { '$ref': '#/components/schemas/ValidationError' },
+       type: 'array',
+       title: 'Detail'
+     }
+   },
+   type: 'object',
+   title: 'HTTPValidationError'
+ } as const;
+
+ export const SessionStatusResponseSchema = {
+   properties: {
+     session_id: { type: 'string', title: 'Session Id' },
+     status: { type: 'string', title: 'Status' },
+     policy_path: { type: 'string', title: 'Policy Path' },
+     policy_type: { type: 'string', title: 'Policy Type' },
+     camera_names: { items: { type: 'string' }, type: 'array', title: 'Camera Names' },
+     workspace_id: { type: 'string', title: 'Workspace Id' },
+     rooms: { additionalProperties: true, type: 'object', title: 'Rooms' },
+     stats: { additionalProperties: true, type: 'object', title: 'Stats' },
+     inference_stats: {
+       anyOf: [{ additionalProperties: true, type: 'object' }, { type: 'null' }],
+       title: 'Inference Stats'
+     },
+     error_message: {
+       anyOf: [{ type: 'string' }, { type: 'null' }],
+       title: 'Error Message'
+     }
+   },
+   type: 'object',
+   required: ['session_id', 'status', 'policy_path', 'policy_type', 'camera_names', 'workspace_id', 'rooms', 'stats'],
+   title: 'SessionStatusResponse'
+ } as const;
+
+ export const ValidationErrorSchema = {
+   properties: {
+     loc: {
+       items: { anyOf: [{ type: 'string' }, { type: 'integer' }] },
+       type: 'array',
+       title: 'Location'
+     },
+     msg: { type: 'string', title: 'Message' },
+     type: { type: 'string', title: 'Error Type' }
+   },
+   type: 'object',
+   required: ['loc', 'msg', 'type'],
+   title: 'ValidationError'
+ } as const;
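Because the generated schemas are exported `as const`, their `required` arrays can drive a lightweight runtime presence check before trusting a payload. A minimal sketch (the `hasRequiredKeys` helper is illustrative, not part of the generated client; the key list below mirrors `CreateSessionResponseSchema.required`):

```typescript
// Required keys, as listed in CreateSessionResponseSchema.required above.
const createSessionResponseRequired = [
  "workspace_id",
  "camera_room_ids",
  "joint_input_room_id",
  "joint_output_room_id",
] as const;

// True when every required key is present on the object
// (presence only; no type checking of the values).
function hasRequiredKeys(
  obj: Record<string, unknown>,
  keys: readonly string[],
): boolean {
  return keys.every((k) => k in obj);
}
```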
client/src/generated/services.gen.ts ADDED
@@ -0,0 +1,164 @@
+ // This file is auto-generated by @hey-api/openapi-ts
+
+ import { createClient, createConfig, type Options } from '@hey-api/client-fetch';
+ import type { RootGetError, RootGetResponse, HealthCheckHealthGetError, HealthCheckHealthGetResponse, ListPoliciesPoliciesGetError, ListPoliciesPoliciesGetResponse, ListSessionsSessionsGetError, ListSessionsSessionsGetResponse, CreateSessionSessionsPostData, CreateSessionSessionsPostError, CreateSessionSessionsPostResponse, GetSessionStatusSessionsSessionIdGetData, GetSessionStatusSessionsSessionIdGetError, GetSessionStatusSessionsSessionIdGetResponse, DeleteSessionSessionsSessionIdDeleteData, DeleteSessionSessionsSessionIdDeleteError, DeleteSessionSessionsSessionIdDeleteResponse, StartInferenceSessionsSessionIdStartPostData, StartInferenceSessionsSessionIdStartPostError, StartInferenceSessionsSessionIdStartPostResponse, StopInferenceSessionsSessionIdStopPostData, StopInferenceSessionsSessionIdStopPostError, StopInferenceSessionsSessionIdStopPostResponse, RestartInferenceSessionsSessionIdRestartPostData, RestartInferenceSessionsSessionIdRestartPostError, RestartInferenceSessionsSessionIdRestartPostResponse, GetSystemInfoDebugSystemGetError, GetSystemInfoDebugSystemGetResponse, GetRecentLogsDebugLogsGetError, GetRecentLogsDebugLogsGetResponse, DebugResetSessionDebugSessionsSessionIdResetPostData, DebugResetSessionDebugSessionsSessionIdResetPostError, DebugResetSessionDebugSessionsSessionIdResetPostResponse, GetSessionQueueInfoDebugSessionsSessionIdQueueGetData, GetSessionQueueInfoDebugSessionsSessionIdQueueGetError, GetSessionQueueInfoDebugSessionsSessionIdQueueGetResponse } from './types.gen';
+
+ export const client = createClient(createConfig());
+
+ /**
+  * Root
+  * Health check endpoint.
+  */
+ export const rootGet = <ThrowOnError extends boolean = false>(options?: Options<unknown, ThrowOnError>) => {
+   return (options?.client ?? client).get<RootGetResponse, RootGetError, ThrowOnError>({
+     ...options,
+     url: '/'
+   });
+ };
+
+ /**
+  * Health Check
+  * Detailed health check.
+  */
+ export const healthCheckHealthGet = <ThrowOnError extends boolean = false>(options?: Options<unknown, ThrowOnError>) => {
+   return (options?.client ?? client).get<HealthCheckHealthGetResponse, HealthCheckHealthGetError, ThrowOnError>({
+     ...options,
+     url: '/health'
+   });
+ };
+
+ /**
+  * List Policies
+  * List supported policy types.
+  */
+ export const listPoliciesPoliciesGet = <ThrowOnError extends boolean = false>(options?: Options<unknown, ThrowOnError>) => {
+   return (options?.client ?? client).get<ListPoliciesPoliciesGetResponse, ListPoliciesPoliciesGetError, ThrowOnError>({
+     ...options,
+     url: '/policies'
+   });
+ };
+
+ /**
+  * List Sessions
+  * List all sessions.
+  */
+ export const listSessionsSessionsGet = <ThrowOnError extends boolean = false>(options?: Options<unknown, ThrowOnError>) => {
+   return (options?.client ?? client).get<ListSessionsSessionsGetResponse, ListSessionsSessionsGetError, ThrowOnError>({
+     ...options,
+     url: '/sessions'
+   });
+ };
+
+ /**
+  * Create Session
+  * Create a new inference session.
+  *
+  * If workspace_id is provided, all rooms will be created in that workspace.
+  * If workspace_id is not provided, a new workspace will be generated automatically.
+  * All rooms for a session (cameras + joints) are always created in the same workspace.
+  */
+ export const createSessionSessionsPost = <ThrowOnError extends boolean = false>(options: Options<CreateSessionSessionsPostData, ThrowOnError>) => {
+   return (options?.client ?? client).post<CreateSessionSessionsPostResponse, CreateSessionSessionsPostError, ThrowOnError>({
+     ...options,
+     url: '/sessions'
+   });
+ };
+
+ /**
+  * Get Session Status
+  * Get status of a specific session.
+  */
+ export const getSessionStatusSessionsSessionIdGet = <ThrowOnError extends boolean = false>(options: Options<GetSessionStatusSessionsSessionIdGetData, ThrowOnError>) => {
+   return (options?.client ?? client).get<GetSessionStatusSessionsSessionIdGetResponse, GetSessionStatusSessionsSessionIdGetError, ThrowOnError>({
+     ...options,
+     url: '/sessions/{session_id}'
+   });
+ };
+
+ /**
+  * Delete Session
+  * Delete a session.
+  */
+ export const deleteSessionSessionsSessionIdDelete = <ThrowOnError extends boolean = false>(options: Options<DeleteSessionSessionsSessionIdDeleteData, ThrowOnError>) => {
+   return (options?.client ?? client).delete<DeleteSessionSessionsSessionIdDeleteResponse, DeleteSessionSessionsSessionIdDeleteError, ThrowOnError>({
+     ...options,
+     url: '/sessions/{session_id}'
+   });
+ };
+
+ /**
+  * Start Inference
+  * Start inference for a session.
+  */
+ export const startInferenceSessionsSessionIdStartPost = <ThrowOnError extends boolean = false>(options: Options<StartInferenceSessionsSessionIdStartPostData, ThrowOnError>) => {
+   return (options?.client ?? client).post<StartInferenceSessionsSessionIdStartPostResponse, StartInferenceSessionsSessionIdStartPostError, ThrowOnError>({
+     ...options,
+     url: '/sessions/{session_id}/start'
+   });
+ };
+
+ /**
+  * Stop Inference
+  * Stop inference for a session.
+  */
+ export const stopInferenceSessionsSessionIdStopPost = <ThrowOnError extends boolean = false>(options: Options<StopInferenceSessionsSessionIdStopPostData, ThrowOnError>) => {
+   return (options?.client ?? client).post<StopInferenceSessionsSessionIdStopPostResponse, StopInferenceSessionsSessionIdStopPostError, ThrowOnError>({
+     ...options,
+     url: '/sessions/{session_id}/stop'
+   });
+ };
+
+ /**
+  * Restart Inference
+  * Restart inference for a session.
+  */
+ export const restartInferenceSessionsSessionIdRestartPost = <ThrowOnError extends boolean = false>(options: Options<RestartInferenceSessionsSessionIdRestartPostData, ThrowOnError>) => {
+   return (options?.client ?? client).post<RestartInferenceSessionsSessionIdRestartPostResponse, RestartInferenceSessionsSessionIdRestartPostError, ThrowOnError>({
+     ...options,
+     url: '/sessions/{session_id}/restart'
+   });
+ };
+
+ /**
+  * Get System Info
+  * Get system information for debugging.
+  */
+ export const getSystemInfoDebugSystemGet = <ThrowOnError extends boolean = false>(options?: Options<unknown, ThrowOnError>) => {
+   return (options?.client ?? client).get<GetSystemInfoDebugSystemGetResponse, GetSystemInfoDebugSystemGetError, ThrowOnError>({
+     ...options,
+     url: '/debug/system'
+   });
+ };
+
+ /**
+  * Get Recent Logs
+  * Get recent log entries for debugging.
+  */
+ export const getRecentLogsDebugLogsGet = <ThrowOnError extends boolean = false>(options?: Options<unknown, ThrowOnError>) => {
+   return (options?.client ?? client).get<GetRecentLogsDebugLogsGetResponse, GetRecentLogsDebugLogsGetError, ThrowOnError>({
+     ...options,
+     url: '/debug/logs'
+   });
+ };
+
+ /**
+  * Debug Reset Session
+  * Reset a session's internal state for debugging.
+  */
+ export const debugResetSessionDebugSessionsSessionIdResetPost = <ThrowOnError extends boolean = false>(options: Options<DebugResetSessionDebugSessionsSessionIdResetPostData, ThrowOnError>) => {
+   return (options?.client ?? client).post<DebugResetSessionDebugSessionsSessionIdResetPostResponse, DebugResetSessionDebugSessionsSessionIdResetPostError, ThrowOnError>({
+     ...options,
+     url: '/debug/sessions/{session_id}/reset'
+   });
+ };
+
+ /**
+  * Get Session Queue Info
+  * Get detailed information about a session's action queue.
+  */
+ export const getSessionQueueInfoDebugSessionsSessionIdQueueGet = <ThrowOnError extends boolean = false>(options: Options<GetSessionQueueInfoDebugSessionsSessionIdQueueGetData, ThrowOnError>) => {
+   return (options?.client ?? client).get<GetSessionQueueInfoDebugSessionsSessionIdQueueGetResponse, GetSessionQueueInfoDebugSessionsSessionIdQueueGetError, ThrowOnError>({
+     ...options,
+     url: '/debug/sessions/{session_id}/queue'
+   });
+ };
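The generated functions above pass template URLs such as `/sessions/{session_id}` to `@hey-api/client-fetch`, which substitutes values from `options.path`. A standalone sketch of that substitution (the `fillPath` helper is illustrative; the real interpolation lives inside the client library):

```typescript
// Replace {name} placeholders in a URL template with values from `path`,
// URL-encoding each value, in the spirit of what the fetch client does
// for the generated routes above. Missing keys become empty segments.
function fillPath(template: string, path: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (_match, name: string) =>
    encodeURIComponent(path[name] ?? ""),
  );
}
```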
client/src/generated/types.gen.ts ADDED
@@ -0,0 +1,151 @@
+ // This file is auto-generated by @hey-api/openapi-ts
+
+ export type CreateSessionRequest = {
+   session_id: string;
+   policy_path: string;
+   camera_names?: Array<(string)>;
+   arena_server_url?: string;
+   workspace_id?: (string | null);
+   policy_type?: string;
+   language_instruction?: (string | null);
+ };
+
+ export type CreateSessionResponse = {
+   workspace_id: string;
+   camera_room_ids: {
+     [key: string]: (string);
+   };
+   joint_input_room_id: string;
+   joint_output_room_id: string;
+ };
+
+ export type HTTPValidationError = {
+   detail?: Array<ValidationError>;
+ };
+
+ export type SessionStatusResponse = {
+   session_id: string;
+   status: string;
+   policy_path: string;
+   policy_type: string;
+   camera_names: Array<(string)>;
+   workspace_id: string;
+   rooms: {
+     [key: string]: unknown;
+   };
+   stats: {
+     [key: string]: unknown;
+   };
+   inference_stats?: ({
+     [key: string]: unknown;
+   } | null);
+   error_message?: (string | null);
+ };
+
+ export type ValidationError = {
+   loc: Array<(string | number)>;
+   msg: string;
+   type: string;
+ };
+
+ export type RootGetResponse = (unknown);
+
+ export type RootGetError = unknown;
+
+ export type HealthCheckHealthGetResponse = (unknown);
+
+ export type HealthCheckHealthGetError = unknown;
+
+ export type ListPoliciesPoliciesGetResponse = (unknown);
+
+ export type ListPoliciesPoliciesGetError = unknown;
+
+ export type ListSessionsSessionsGetResponse = (Array<SessionStatusResponse>);
+
+ export type ListSessionsSessionsGetError = unknown;
+
+ export type CreateSessionSessionsPostData = {
+   body: CreateSessionRequest;
+ };
+
+ export type CreateSessionSessionsPostResponse = (CreateSessionResponse);
+
+ export type CreateSessionSessionsPostError = (HTTPValidationError);
+
+ export type GetSessionStatusSessionsSessionIdGetData = {
+   path: {
+     session_id: string;
+   };
+ };
+
+ export type GetSessionStatusSessionsSessionIdGetResponse = (SessionStatusResponse);
+
+ export type GetSessionStatusSessionsSessionIdGetError = (HTTPValidationError);
+
+ export type DeleteSessionSessionsSessionIdDeleteData = {
+   path: {
+     session_id: string;
+   };
+ };
+
+ export type DeleteSessionSessionsSessionIdDeleteResponse = (unknown);
+
+ export type DeleteSessionSessionsSessionIdDeleteError = (HTTPValidationError);
+
+ export type StartInferenceSessionsSessionIdStartPostData = {
+   path: {
+     session_id: string;
+   };
+ };
+
+ export type StartInferenceSessionsSessionIdStartPostResponse = (unknown);
+
+ export type StartInferenceSessionsSessionIdStartPostError = (HTTPValidationError);
+
+ export type StopInferenceSessionsSessionIdStopPostData = {
+   path: {
+     session_id: string;
+   };
+ };
+
+ export type StopInferenceSessionsSessionIdStopPostResponse = (unknown);
+
+ export type StopInferenceSessionsSessionIdStopPostError = (HTTPValidationError);
+
+ export type RestartInferenceSessionsSessionIdRestartPostData = {
+   path: {
+     session_id: string;
+   };
+ };
+
+ export type RestartInferenceSessionsSessionIdRestartPostResponse = (unknown);
122
+
123
+ export type RestartInferenceSessionsSessionIdRestartPostError = (HTTPValidationError);
124
+
125
+ export type GetSystemInfoDebugSystemGetResponse = (unknown);
126
+
127
+ export type GetSystemInfoDebugSystemGetError = unknown;
128
+
129
+ export type GetRecentLogsDebugLogsGetResponse = (unknown);
130
+
131
+ export type GetRecentLogsDebugLogsGetError = unknown;
132
+
133
+ export type DebugResetSessionDebugSessionsSessionIdResetPostData = {
134
+ path: {
135
+ session_id: string;
136
+ };
137
+ };
138
+
139
+ export type DebugResetSessionDebugSessionsSessionIdResetPostResponse = (unknown);
140
+
141
+ export type DebugResetSessionDebugSessionsSessionIdResetPostError = (HTTPValidationError);
142
+
143
+ export type GetSessionQueueInfoDebugSessionsSessionIdQueueGetData = {
144
+ path: {
145
+ session_id: string;
146
+ };
147
+ };
148
+
149
+ export type GetSessionQueueInfoDebugSessionsSessionIdQueueGetResponse = (unknown);
150
+
151
+ export type GetSessionQueueInfoDebugSessionsSessionIdQueueGetError = (HTTPValidationError);
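For illustration, a typed payload can be built against the generated `CreateSessionRequest` shape above. This is a minimal sketch: the type is re-declared locally so the snippet runs without the generated module on the path, and `withDefaults` is a hypothetical helper that mirrors the server-side defaults (`camera_names: ["front"]`, `arena_server_url: "http://localhost:8000"`), not part of the generated client.

```typescript
// Local re-declaration of the generated CreateSessionRequest shape,
// so this sketch is self-contained (mirrors types.gen.ts above).
type CreateSessionRequest = {
  session_id: string;
  policy_path: string;
  camera_names?: string[];
  arena_server_url?: string;
  workspace_id?: string | null;
  policy_type?: string;
  language_instruction?: string | null;
};

// Hypothetical helper: fill the documented server defaults for fields
// the caller omits, leaving any caller-provided values untouched.
function withDefaults(req: CreateSessionRequest): CreateSessionRequest {
  return {
    camera_names: ["front"],
    arena_server_url: "http://localhost:8000",
    ...req,
  };
}

const demo = withDefaults({
  session_id: "demo-01",
  policy_path: "./checkpoints/act_so101_beyond",
});
// demo.camera_names is ["front"] because the caller omitted it
```

Because the spread of `req` comes last, caller-supplied fields always win over the defaults.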
client/src/index.ts ADDED
@@ -0,0 +1,270 @@
1
+ /**
2
+ * LeRobot Arena Inference Server TypeScript Client
3
+ *
4
+ * This client provides TypeScript access to the LeRobot Arena Inference Server
5
+ * for ACT (Action Chunking Transformer) model inference and session management.
6
+ *
7
+ * @example Basic Usage
8
+ * ```typescript
9
+ * import { LeRobotInferenceServerClient, CreateSessionRequest } from '@lerobot-arena/inference-server-client';
10
+ *
11
+ * const client = new LeRobotInferenceServerClient('http://localhost:8001');
12
+ *
13
+ * // Create and start a session
14
+ * const sessionRequest: CreateSessionRequest = {
15
+ * session_id: 'my-robot-01',
16
+ * policy_path: './checkpoints/act_so101_beyond',
17
+ * camera_names: ['front', 'wrist'],
18
+ * arena_server_url: 'http://localhost:8000'
19
+ * };
20
+ *
21
+ * const session = await client.createSession(sessionRequest);
22
+ * await client.startInference('my-robot-01');
23
+ *
24
+ * // Monitor session
25
+ * const status = await client.getSessionStatus('my-robot-01');
26
+ * console.log(`Session status: ${status.status}`);
27
+ * ```
28
+ */
29
+
30
+ // Export all generated types and services from OpenAPI
31
+ export * from './generated';
32
+
33
+ // Import what we need for the convenience wrapper
34
+ import {
35
+ client,
36
+ createSessionSessionsPost,
37
+ listSessionsSessionsGet,
38
+ getSessionStatusSessionsSessionIdGet,
39
+ startInferenceSessionsSessionIdStartPost,
40
+ stopInferenceSessionsSessionIdStopPost,
41
+ restartInferenceSessionsSessionIdRestartPost,
42
+ deleteSessionSessionsSessionIdDelete,
43
+ healthCheckHealthGet,
44
+ getSystemInfoDebugSystemGet,
45
+ debugResetSessionDebugSessionsSessionIdResetPost,
46
+ getSessionQueueInfoDebugSessionsSessionIdQueueGet
47
+ } from './generated';
48
+
49
+ import type {
50
+ CreateSessionRequest,
51
+ CreateSessionResponse,
52
+ SessionStatusResponse
53
+ } from './generated';
54
+
55
+ /**
56
+ * LeRobot Arena Inference Server Client
57
+ *
58
+ * A convenience wrapper around the generated OpenAPI client that provides
59
+ * a simpler interface for common operations while maintaining full type safety.
60
+ */
61
+ export class LeRobotInferenceServerClient {
62
+ private baseUrl: string;
63
+
64
+ constructor(baseUrl: string) {
65
+ this.baseUrl = baseUrl;
66
+ // Configure the generated client with the base URL
67
+ client.setConfig({ baseUrl });
68
+ }
69
+
70
+ /**
71
+ * Check if the inference server is healthy and responding
72
+ */
73
+ async isHealthy(): Promise<boolean> {
74
+ try {
75
+ const response = await healthCheckHealthGet();
76
+ return !response.error;
77
+ } catch {
78
+ return false;
79
+ }
80
+ }
81
+
82
+ /**
83
+ * Get detailed server health information
84
+ */
85
+ async getHealth() {
86
+ const response = await healthCheckHealthGet();
87
+ if (response.error) {
88
+ throw new Error(`Health check failed: ${JSON.stringify(response.error)}`);
89
+ }
90
+ return response.data;
91
+ }
92
+
93
+ /**
94
+ * Create a new inference session
95
+ */
96
+ async createSession(request: CreateSessionRequest): Promise<CreateSessionResponse> {
97
+ const response = await createSessionSessionsPost({
98
+ body: request
99
+ });
100
+
101
+ if (response.error) {
102
+ throw new Error(`Failed to create session: ${JSON.stringify(response.error)}`);
103
+ }
104
+
105
+ return response.data!;
106
+ }
107
+
108
+ /**
109
+ * List all active sessions
110
+ */
111
+ async listSessions(): Promise<SessionStatusResponse[]> {
112
+ const response = await listSessionsSessionsGet();
113
+ if (response.error) {
114
+ throw new Error(`Failed to list sessions: ${JSON.stringify(response.error)}`);
115
+ }
116
+ return response.data!;
117
+ }
118
+
119
+ /**
120
+ * Get detailed status of a specific session
121
+ */
122
+ async getSessionStatus(sessionId: string): Promise<SessionStatusResponse> {
123
+ const response = await getSessionStatusSessionsSessionIdGet({
124
+ path: { session_id: sessionId }
125
+ });
126
+
127
+ if (response.error) {
128
+ throw new Error(`Failed to get session status: ${JSON.stringify(response.error)}`);
129
+ }
130
+
131
+ return response.data!;
132
+ }
133
+
134
+ /**
135
+ * Start inference for a session
136
+ */
137
+ async startInference(sessionId: string): Promise<void> {
138
+ const response = await startInferenceSessionsSessionIdStartPost({
139
+ path: { session_id: sessionId }
140
+ });
141
+
142
+ if (response.error) {
143
+ throw new Error(`Failed to start inference: ${JSON.stringify(response.error)}`);
144
+ }
145
+ }
146
+
147
+ /**
148
+ * Stop inference for a session
149
+ */
150
+ async stopInference(sessionId: string): Promise<void> {
151
+ const response = await stopInferenceSessionsSessionIdStopPost({
152
+ path: { session_id: sessionId }
153
+ });
154
+
155
+ if (response.error) {
156
+ throw new Error(`Failed to stop inference: ${JSON.stringify(response.error)}`);
157
+ }
158
+ }
159
+
160
+ /**
161
+ * Restart inference for a session
162
+ */
163
+ async restartInference(sessionId: string): Promise<void> {
164
+ const response = await restartInferenceSessionsSessionIdRestartPost({
165
+ path: { session_id: sessionId }
166
+ });
167
+
168
+ if (response.error) {
169
+ throw new Error(`Failed to restart inference: ${JSON.stringify(response.error)}`);
170
+ }
171
+ }
172
+
173
+ /**
174
+ * Delete a session and clean up all resources
175
+ */
176
+ async deleteSession(sessionId: string): Promise<void> {
177
+ const response = await deleteSessionSessionsSessionIdDelete({
178
+ path: { session_id: sessionId }
179
+ });
180
+
181
+ if (response.error) {
182
+ throw new Error(`Failed to delete session: ${JSON.stringify(response.error)}`);
183
+ }
184
+ }
185
+
186
+ /**
187
+ * Wait for a session to reach a specific status
188
+ */
189
+ async waitForSessionStatus(
190
+ sessionId: string,
191
+ targetStatus: string,
192
+ timeoutMs: number = 30000
193
+ ): Promise<SessionStatusResponse> {
194
+ const startTime = Date.now();
195
+
196
+ while (Date.now() - startTime < timeoutMs) {
197
+ const status = await this.getSessionStatus(sessionId);
198
+ if (status.status === targetStatus) {
199
+ return status;
200
+ }
201
+
202
+ // Wait 1 second before checking again
203
+ await new Promise(resolve => setTimeout(resolve, 1000));
204
+ }
205
+
206
+ throw new Error(`Timeout waiting for session ${sessionId} to reach status ${targetStatus}`);
207
+ }
208
+
209
+ /**
210
+ * Convenience method to create a session and start inference in one call
211
+ */
212
+ async createAndStartSession(request: CreateSessionRequest): Promise<{
213
+ session: CreateSessionResponse;
214
+ status: SessionStatusResponse;
215
+ }> {
216
+ const session = await this.createSession(request);
217
+ await this.startInference(request.session_id);
218
+
219
+ // Wait for it to be running
220
+ const status = await this.waitForSessionStatus(request.session_id, 'running');
221
+
222
+ return { session, status };
223
+ }
224
+
225
+ /**
226
+ * Get system information for debugging
227
+ */
228
+ async getSystemInfo() {
229
+ const response = await getSystemInfoDebugSystemGet();
230
+ if (response.error) {
231
+ throw new Error(`Failed to get system info: ${JSON.stringify(response.error)}`);
232
+ }
233
+ return response.data;
234
+ }
235
+
236
+ /**
237
+ * Reset a session's internal state (debug method)
238
+ */
239
+ async debugResetSession(sessionId: string): Promise<void> {
240
+ const response = await debugResetSessionDebugSessionsSessionIdResetPost({
241
+ path: { session_id: sessionId }
242
+ });
243
+
244
+ if (response.error) {
245
+ throw new Error(`Failed to reset session: ${JSON.stringify(response.error)}`);
246
+ }
247
+ }
248
+
249
+ /**
250
+ * Get detailed information about a session's action queue
251
+ */
252
+ async getSessionQueueInfo(sessionId: string) {
253
+ const response = await getSessionQueueInfoDebugSessionsSessionIdQueueGet({
254
+ path: { session_id: sessionId }
255
+ });
256
+
257
+ if (response.error) {
258
+ throw new Error(`Failed to get queue info: ${JSON.stringify(response.error)}`);
259
+ }
260
+ return response.data;
261
+ }
262
+ }
263
+
264
+ // Convenience function to create a client
265
+ export function createClient(baseUrl: string): LeRobotInferenceServerClient {
266
+ return new LeRobotInferenceServerClient(baseUrl);
267
+ }
268
+
269
+ // Export the old class name for backward compatibility
270
+ export const LeRobotAIServerClient = LeRobotInferenceServerClient;
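The `waitForSessionStatus` method above is a plain poll-until loop; the same pattern can be sketched independently of the client. Here `fetchStatus` is a stand-in for any async status call such as `getSessionStatus`, and `fakeStatus` is a simulated source used only so the sketch runs offline — both names are illustrative, not part of the client.

```typescript
// Generic poll-until helper: repeatedly awaits fetchStatus until the
// predicate accepts the value or the timeout elapses.
async function pollUntil<T>(
  fetchStatus: () => Promise<T>,
  done: (value: T) => boolean,
  timeoutMs = 30_000,
  intervalMs = 1_000,
): Promise<T> {
  const start = Date.now();
  while (Date.now() - start < timeoutMs) {
    const value = await fetchStatus();
    if (done(value)) return value;
    // Sleep between polls instead of hammering the server.
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`timed out after ${timeoutMs}ms`);
}

// Simulated status source: reports "starting" twice, then "running".
let calls = 0;
const fakeStatus = async () => (++calls >= 3 ? "running" : "starting");
```

With the real client, the predicate would compare `status.status` against the target string, exactly as `waitForSessionStatus` does.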
client/tsconfig.json ADDED
@@ -0,0 +1,34 @@
1
+ {
2
+ "compilerOptions": {
3
+ // Environment setup & latest features
4
+ "lib": ["ES2022", "DOM"],
5
+ "target": "ES2022",
6
+ "module": "es2022",
7
+ "moduleDetection": "force",
8
+ "jsx": "react-jsx",
9
+ "allowJs": true,
10
+
11
+ // Bundler mode
12
+ "moduleResolution": "bundler",
13
+ "verbatimModuleSyntax": true,
14
+ "noEmit": false,
15
+ "outDir": "./dist",
16
+
17
+ // Declaration files
18
+ "declaration": true,
19
+ "emitDeclarationOnly": false,
20
+ "declarationMap": true,
21
+
22
+ // Best practices
23
+ "strict": true,
24
+ "skipLibCheck": true,
25
+ "noFallthroughCasesInSwitch": true,
26
+ "noUncheckedIndexedAccess": true,
27
+ "noImplicitOverride": true,
28
+
29
+ // Some stricter flags (disabled by default)
30
+ "noUnusedLocals": false,
31
+ "noUnusedParameters": false,
32
+ "noPropertyAccessFromIndexSignature": false
33
+ }
34
+ }
external/.gitkeep ADDED
File without changes
external/RobotHub-TransportServer ADDED
@@ -0,0 +1 @@
1
+ Subproject commit 8aedc84a7635fc0cbbd3a0671a5e1cf50616dad0
external/lerobot ADDED
@@ -0,0 +1 @@
1
+ Subproject commit a5727e37b4ff405f9d8de424e07dcc441aa2c82f
launch_simple.py ADDED
@@ -0,0 +1,46 @@
1
+ #!/usr/bin/env python3
2
+ """
3
+ Main launcher for the Inference Server
4
+
5
+ Integrated application that runs both FastAPI and Gradio on the same port.
6
+ - FastAPI REST API available at /api with full documentation
7
+ - Gradio UI available at the root path /
8
+ - Single process, single port, convenient for single-container deployment
9
+ """
10
+
11
+ import sys
12
+ from pathlib import Path
13
+
14
+ # Add the src directory to Python path
15
+ src_path = Path(__file__).parent / "src"
16
+ sys.path.insert(0, str(src_path))
17
+
18
+ from inference_server.simple_integrated import launch_simple_integrated_app
19
+
20
+ if __name__ == "__main__":
21
+ print("🤖 Inference Server (Integrated)")
22
+ print("FastAPI + Gradio on the same port!")
23
+ print("API Documentation available at /api/docs")
24
+ print("Press Ctrl+C to stop")
25
+ print("-" * 50)
26
+
27
+ # Parse simple command line args
28
+ import argparse
29
+
30
+ parser = argparse.ArgumentParser(
31
+ description="Launch integrated Inference Server with FastAPI + Gradio"
32
+ )
33
+ parser.add_argument("--host", default="localhost", help="Host to bind to")
34
+ parser.add_argument("--port", type=int, default=7860, help="Port to bind to")
35
+ parser.add_argument(
36
+ "--share", action="store_true", help="Create public Gradio link"
37
+ )
38
+
39
+ args = parser.parse_args()
40
+
41
+ print(f"🚀 Starting on {args.host}:{args.port}")
42
+ print(f"🎨 Gradio UI: http://{args.host}:{args.port}/")
43
+ print(f"📖 API Docs: http://{args.host}:{args.port}/api/docs")
44
+ print()
45
+
46
+ launch_simple_integrated_app(host=args.host, port=args.port, share=args.share)
openapi.json ADDED
@@ -0,0 +1,692 @@
1
+ {
2
+ "openapi": "3.1.0",
3
+ "info": {
4
+ "title": "LeRobot Arena AI Server",
5
+ "summary": "ACT Model Inference Server for Real-time Robot Control",
6
+ "description": "\n ## LeRobot Arena AI Server\n\n This server provides **ACT (Action Chunking Transformer)** model inference for robotics applications.\n It uses the LeRobot Arena communication system with multiple rooms per session for:\n\n ### Core Features:\n - 🎥 **Multi-camera support**: Arbitrary number of camera streams with unique names\n - 🤖 **Joint control**: Normalized joint value handling (-100 to +100 range)\n - 🔄 **Real-time inference**: Optimized for robotics control loops\n - 📊 **Session management**: Multiple concurrent inference sessions\n - 🛠️ **Debug endpoints**: Comprehensive monitoring and debugging tools\n\n ### Communication Architecture:\n 1. **Camera rooms**: Receives video streams from robot cameras (supports multiple cameras)\n 2. **Joint input room**: Receives current robot joint positions (**NORMALIZED VALUES**)\n 3. **Joint output room**: Sends predicted joint commands (**NORMALIZED VALUES**)\n\n ### Supported Cameras:\n Each camera stream has a unique name (e.g., \"front\", \"wrist\", \"overhead\") \n and all streams are synchronized for inference.\n\n ### Joint Value Convention:\n - All joint inputs/outputs use **NORMALIZED VALUES**\n - Range: -100 to +100 for most joints, 0 to 100 for gripper\n - Matches training data format exactly\n\n ### Getting Started:\n 1. Create a session with your trained ACT model\n 2. Connect your robot to the generated rooms\n 3. Start inference to begin real-time control\n ",
7
+ "version": "1.0.0",
8
+ "contact": {
9
+ "name": "LeRobot Arena Team",
10
+ "url": "https://github.com/huggingface/lerobot"
11
+ },
12
+ "license": {
13
+ "name": "Apache 2.0",
14
+ "url": "https://www.apache.org/licenses/LICENSE-2.0.html"
15
+ },
16
+ "x-logo": {
17
+ "url": "https://huggingface.co/datasets/huggingface/brand-assets/resolve/main/hf-logo.png",
18
+ "altText": "LeRobot Logo"
19
+ }
20
+ },
21
+ "paths": {
22
+ "/": {
23
+ "get": {
24
+ "tags": [
25
+ "Health"
26
+ ],
27
+ "summary": "Root",
28
+ "description": "Health check endpoint.",
29
+ "operationId": "root__get",
30
+ "responses": {
31
+ "200": {
32
+ "description": "Successful Response",
33
+ "content": {
34
+ "application/json": {
35
+ "schema": {}
36
+ }
37
+ }
38
+ }
39
+ }
40
+ }
41
+ },
42
+ "/health": {
43
+ "get": {
44
+ "tags": [
45
+ "Health"
46
+ ],
47
+ "summary": "Health Check",
48
+ "description": "Detailed health check.",
49
+ "operationId": "health_check_health_get",
50
+ "responses": {
51
+ "200": {
52
+ "description": "Successful Response",
53
+ "content": {
54
+ "application/json": {
55
+ "schema": {}
56
+ }
57
+ }
58
+ }
59
+ }
60
+ }
61
+ },
62
+ "/sessions": {
63
+ "get": {
64
+ "tags": [
65
+ "Sessions"
66
+ ],
67
+ "summary": "List Sessions",
68
+ "description": "List all sessions.",
69
+ "operationId": "list_sessions_sessions_get",
70
+ "responses": {
71
+ "200": {
72
+ "description": "Successful Response",
73
+ "content": {
74
+ "application/json": {
75
+ "schema": {
76
+ "items": {
77
+ "$ref": "#/components/schemas/SessionStatusResponse"
78
+ },
79
+ "type": "array",
80
+ "title": "Response List Sessions Sessions Get"
81
+ }
82
+ }
83
+ }
84
+ }
85
+ }
86
+ },
87
+ "post": {
88
+ "tags": [
89
+ "Sessions"
90
+ ],
91
+ "summary": "Create Session",
92
+ "description": "Create a new inference session.\n\nIf workspace_id is provided, all rooms will be created in that workspace.\nIf workspace_id is not provided, a new workspace will be generated automatically.\nAll rooms for a session (cameras + joints) are always created in the same workspace.",
93
+ "operationId": "create_session_sessions_post",
94
+ "requestBody": {
95
+ "content": {
96
+ "application/json": {
97
+ "schema": {
98
+ "$ref": "#/components/schemas/CreateSessionRequest"
99
+ }
100
+ }
101
+ },
102
+ "required": true
103
+ },
104
+ "responses": {
105
+ "200": {
106
+ "description": "Successful Response",
107
+ "content": {
108
+ "application/json": {
109
+ "schema": {
110
+ "$ref": "#/components/schemas/CreateSessionResponse"
111
+ }
112
+ }
113
+ }
114
+ },
115
+ "422": {
116
+ "description": "Validation Error",
117
+ "content": {
118
+ "application/json": {
119
+ "schema": {
120
+ "$ref": "#/components/schemas/HTTPValidationError"
121
+ }
122
+ }
123
+ }
124
+ }
125
+ }
126
+ }
127
+ },
128
+ "/sessions/{session_id}": {
129
+ "get": {
130
+ "tags": [
131
+ "Sessions"
132
+ ],
133
+ "summary": "Get Session Status",
134
+ "description": "Get status of a specific session.",
135
+ "operationId": "get_session_status_sessions__session_id__get",
136
+ "parameters": [
137
+ {
138
+ "name": "session_id",
139
+ "in": "path",
140
+ "required": true,
141
+ "schema": {
142
+ "type": "string",
143
+ "title": "Session Id"
144
+ }
145
+ }
146
+ ],
147
+ "responses": {
148
+ "200": {
149
+ "description": "Successful Response",
150
+ "content": {
151
+ "application/json": {
152
+ "schema": {
153
+ "$ref": "#/components/schemas/SessionStatusResponse"
154
+ }
155
+ }
156
+ }
157
+ },
158
+ "422": {
159
+ "description": "Validation Error",
160
+ "content": {
161
+ "application/json": {
162
+ "schema": {
163
+ "$ref": "#/components/schemas/HTTPValidationError"
164
+ }
165
+ }
166
+ }
167
+ }
168
+ }
169
+ },
170
+ "delete": {
171
+ "tags": [
172
+ "Sessions"
173
+ ],
174
+ "summary": "Delete Session",
175
+ "description": "Delete a session.",
176
+ "operationId": "delete_session_sessions__session_id__delete",
177
+ "parameters": [
178
+ {
179
+ "name": "session_id",
180
+ "in": "path",
181
+ "required": true,
182
+ "schema": {
183
+ "type": "string",
184
+ "title": "Session Id"
185
+ }
186
+ }
187
+ ],
188
+ "responses": {
189
+ "200": {
190
+ "description": "Successful Response",
191
+ "content": {
192
+ "application/json": {
193
+ "schema": {}
194
+ }
195
+ }
196
+ },
197
+ "422": {
198
+ "description": "Validation Error",
199
+ "content": {
200
+ "application/json": {
201
+ "schema": {
202
+ "$ref": "#/components/schemas/HTTPValidationError"
203
+ }
204
+ }
205
+ }
206
+ }
207
+ }
208
+ }
209
+ },
210
+ "/sessions/{session_id}/start": {
211
+ "post": {
212
+ "tags": [
213
+ "Control"
214
+ ],
215
+ "summary": "Start Inference",
216
+ "description": "Start inference for a session.",
217
+ "operationId": "start_inference_sessions__session_id__start_post",
218
+ "parameters": [
219
+ {
220
+ "name": "session_id",
221
+ "in": "path",
222
+ "required": true,
223
+ "schema": {
224
+ "type": "string",
225
+ "title": "Session Id"
226
+ }
227
+ }
228
+ ],
229
+ "responses": {
230
+ "200": {
231
+ "description": "Successful Response",
232
+ "content": {
233
+ "application/json": {
234
+ "schema": {}
235
+ }
236
+ }
237
+ },
238
+ "422": {
239
+ "description": "Validation Error",
240
+ "content": {
241
+ "application/json": {
242
+ "schema": {
243
+ "$ref": "#/components/schemas/HTTPValidationError"
244
+ }
245
+ }
246
+ }
247
+ }
248
+ }
249
+ }
250
+ },
251
+ "/sessions/{session_id}/stop": {
252
+ "post": {
253
+ "tags": [
254
+ "Control"
255
+ ],
256
+ "summary": "Stop Inference",
257
+ "description": "Stop inference for a session.",
258
+ "operationId": "stop_inference_sessions__session_id__stop_post",
259
+ "parameters": [
260
+ {
261
+ "name": "session_id",
262
+ "in": "path",
263
+ "required": true,
264
+ "schema": {
265
+ "type": "string",
266
+ "title": "Session Id"
267
+ }
268
+ }
269
+ ],
270
+ "responses": {
271
+ "200": {
272
+ "description": "Successful Response",
273
+ "content": {
274
+ "application/json": {
275
+ "schema": {}
276
+ }
277
+ }
278
+ },
279
+ "422": {
280
+ "description": "Validation Error",
281
+ "content": {
282
+ "application/json": {
283
+ "schema": {
284
+ "$ref": "#/components/schemas/HTTPValidationError"
285
+ }
286
+ }
287
+ }
288
+ }
289
+ }
290
+ }
291
+ },
292
+ "/sessions/{session_id}/restart": {
293
+ "post": {
294
+ "tags": [
295
+ "Control"
296
+ ],
297
+ "summary": "Restart Inference",
298
+ "description": "Restart inference for a session.",
299
+ "operationId": "restart_inference_sessions__session_id__restart_post",
300
+ "parameters": [
301
+ {
302
+ "name": "session_id",
303
+ "in": "path",
304
+ "required": true,
305
+ "schema": {
306
+ "type": "string",
307
+ "title": "Session Id"
308
+ }
309
+ }
310
+ ],
311
+ "responses": {
312
+ "200": {
313
+ "description": "Successful Response",
314
+ "content": {
315
+ "application/json": {
316
+ "schema": {}
317
+ }
318
+ }
319
+ },
320
+ "422": {
321
+ "description": "Validation Error",
322
+ "content": {
323
+ "application/json": {
324
+ "schema": {
325
+ "$ref": "#/components/schemas/HTTPValidationError"
326
+ }
327
+ }
328
+ }
329
+ }
330
+ }
331
+ }
332
+ },
333
+ "/debug/system": {
334
+ "get": {
335
+ "tags": [
336
+ "Debug"
337
+ ],
338
+ "summary": "Get System Info",
339
+ "description": "Get system information for debugging.",
340
+ "operationId": "get_system_info_debug_system_get",
341
+ "responses": {
342
+ "200": {
343
+ "description": "Successful Response",
344
+ "content": {
345
+ "application/json": {
346
+ "schema": {}
347
+ }
348
+ }
349
+ }
350
+ }
351
+ }
352
+ },
353
+ "/debug/logs": {
354
+ "get": {
355
+ "tags": [
356
+ "Debug"
357
+ ],
358
+ "summary": "Get Recent Logs",
359
+ "description": "Get recent log entries for debugging.",
360
+ "operationId": "get_recent_logs_debug_logs_get",
361
+ "responses": {
362
+ "200": {
363
+ "description": "Successful Response",
364
+ "content": {
365
+ "application/json": {
366
+ "schema": {}
367
+ }
368
+ }
369
+ }
370
+ }
371
+ }
372
+ },
373
+ "/debug/sessions/{session_id}/reset": {
374
+ "post": {
375
+ "tags": [
376
+ "Debug"
377
+ ],
378
+ "summary": "Debug Reset Session",
379
+ "description": "Reset a session's internal state for debugging.",
380
+ "operationId": "debug_reset_session_debug_sessions__session_id__reset_post",
381
+ "parameters": [
382
+ {
383
+ "name": "session_id",
384
+ "in": "path",
385
+ "required": true,
386
+ "schema": {
387
+ "type": "string",
388
+ "title": "Session Id"
389
+ }
390
+ }
391
+ ],
392
+ "responses": {
393
+ "200": {
394
+ "description": "Successful Response",
395
+ "content": {
396
+ "application/json": {
397
+ "schema": {}
398
+ }
399
+ }
400
+ },
401
+ "422": {
402
+ "description": "Validation Error",
403
+ "content": {
404
+ "application/json": {
405
+ "schema": {
406
+ "$ref": "#/components/schemas/HTTPValidationError"
407
+ }
408
+ }
409
+ }
410
+ }
411
+ }
412
+ }
413
+ },
414
+ "/debug/sessions/{session_id}/queue": {
415
+ "get": {
416
+ "tags": [
417
+ "Debug"
418
+ ],
419
+ "summary": "Get Session Queue Info",
420
+ "description": "Get detailed information about a session's action queue.",
421
+ "operationId": "get_session_queue_info_debug_sessions__session_id__queue_get",
422
+ "parameters": [
423
+ {
424
+ "name": "session_id",
425
+ "in": "path",
426
+ "required": true,
427
+ "schema": {
428
+ "type": "string",
429
+ "title": "Session Id"
430
+ }
431
+ }
432
+ ],
433
+ "responses": {
434
+ "200": {
435
+ "description": "Successful Response",
436
+ "content": {
437
+ "application/json": {
438
+ "schema": {}
439
+ }
440
+ }
441
+ },
442
+ "422": {
443
+ "description": "Validation Error",
444
+ "content": {
445
+ "application/json": {
446
+ "schema": {
447
+ "$ref": "#/components/schemas/HTTPValidationError"
448
+ }
449
+ }
450
+ }
451
+ }
452
+ }
453
+ }
454
+ }
455
+ },
456
+ "components": {
457
+ "schemas": {
458
+ "CreateSessionRequest": {
459
+ "properties": {
460
+ "session_id": {
461
+ "type": "string",
462
+ "title": "Session Id"
463
+ },
464
+ "policy_path": {
465
+ "type": "string",
466
+ "title": "Policy Path"
467
+ },
468
+ "camera_names": {
469
+ "items": {
470
+ "type": "string"
471
+ },
472
+ "type": "array",
473
+ "title": "Camera Names",
474
+ "default": [
475
+ "front"
476
+ ]
477
+ },
478
+ "arena_server_url": {
479
+ "type": "string",
480
+ "title": "Arena Server Url",
481
+ "default": "http://localhost:8000"
482
+ },
483
+ "workspace_id": {
484
+ "anyOf": [
485
+ {
486
+ "type": "string"
487
+ },
488
+ {
489
+ "type": "null"
490
+ }
491
+ ],
492
+ "title": "Workspace Id"
493
+ }
494
+ },
495
+ "type": "object",
496
+ "required": [
497
+ "session_id",
498
+ "policy_path"
499
+ ],
500
+ "title": "CreateSessionRequest"
501
+ },
502
+ "CreateSessionResponse": {
503
+ "properties": {
504
+ "workspace_id": {
505
+ "type": "string",
506
+ "title": "Workspace Id"
507
+ },
508
+ "camera_room_ids": {
509
+ "additionalProperties": {
510
+ "type": "string"
511
+ },
512
+ "type": "object",
513
+ "title": "Camera Room Ids"
514
+ },
515
+ "joint_input_room_id": {
516
+ "type": "string",
517
+ "title": "Joint Input Room Id"
518
+ },
519
+ "joint_output_room_id": {
520
+ "type": "string",
521
+ "title": "Joint Output Room Id"
522
+ }
523
+ },
524
+ "type": "object",
525
+ "required": [
526
+ "workspace_id",
527
+ "camera_room_ids",
528
+ "joint_input_room_id",
529
+ "joint_output_room_id"
530
+ ],
531
+ "title": "CreateSessionResponse"
532
+ },
533
+ "HTTPValidationError": {
534
+ "properties": {
535
+ "detail": {
536
+ "items": {
537
+ "$ref": "#/components/schemas/ValidationError"
538
+ },
539
+ "type": "array",
540
+ "title": "Detail"
541
+ }
542
+ },
543
+ "type": "object",
544
+ "title": "HTTPValidationError"
545
+ },
546
+ "SessionStatusResponse": {
547
+ "properties": {
548
+ "session_id": {
549
+ "type": "string",
550
+ "title": "Session Id"
551
+ },
552
+ "status": {
553
+ "type": "string",
554
+ "title": "Status"
555
+ },
556
+ "policy_path": {
557
+ "type": "string",
558
+ "title": "Policy Path"
559
+ },
560
+ "camera_names": {
561
+ "items": {
562
+ "type": "string"
563
+ },
564
+ "type": "array",
565
+ "title": "Camera Names"
566
+ },
567
+ "workspace_id": {
568
+ "type": "string",
569
+ "title": "Workspace Id"
570
+ },
571
+ "rooms": {
572
+ "additionalProperties": true,
573
+ "type": "object",
574
+ "title": "Rooms"
575
+ },
576
+ "stats": {
577
+ "additionalProperties": true,
578
+ "type": "object",
579
+ "title": "Stats"
580
+ },
581
+ "inference_stats": {
582
+ "anyOf": [
583
+ {
584
+ "additionalProperties": true,
585
+ "type": "object"
586
+ },
587
+ {
588
+ "type": "null"
589
+ }
590
+ ],
591
+ "title": "Inference Stats"
592
+ },
593
+ "error_message": {
594
+ "anyOf": [
595
+ {
596
+ "type": "string"
597
+ },
598
+ {
599
+ "type": "null"
600
+ }
601
+ ],
602
+ "title": "Error Message"
603
+ }
604
+ },
605
+ "type": "object",
606
+ "required": [
607
+ "session_id",
608
+ "status",
609
+ "policy_path",
610
+ "camera_names",
611
+ "workspace_id",
612
+ "rooms",
613
+ "stats"
614
+ ],
615
+ "title": "SessionStatusResponse"
616
+ },
617
+ "ValidationError": {
618
+ "properties": {
619
+ "loc": {
620
+ "items": {
621
+ "anyOf": [
622
+ {
623
+ "type": "string"
624
+ },
625
+ {
626
+ "type": "integer"
627
+ }
628
+ ]
629
+ },
630
+ "type": "array",
631
+ "title": "Location"
632
+ },
633
+ "msg": {
634
+ "type": "string",
635
+ "title": "Message"
636
+ },
637
+ "type": {
638
+ "type": "string",
639
+ "title": "Error Type"
640
+ }
641
+ },
642
+ "type": "object",
643
+ "required": [
644
+ "loc",
645
+ "msg",
646
+ "type"
647
+ ],
648
+ "title": "ValidationError"
649
+ }
650
+ },
651
+ "securitySchemes": {
652
+ "BearerAuth": {
653
+ "type": "http",
654
+ "scheme": "bearer",
655
+ "bearerFormat": "JWT"
656
+ },
657
+ "ApiKeyAuth": {
658
+ "type": "apiKey",
659
+ "in": "header",
660
+ "name": "X-API-Key"
661
+ }
662
+ }
663
+ },
664
+ "servers": [
665
+ {
666
+ "url": "http://localhost:8001",
667
+ "description": "Development server"
668
+ },
669
+ {
670
+ "url": "https://your-production-server.com",
671
+ "description": "Production server"
672
+ }
673
+ ],
674
+ "tags": [
675
+ {
676
+ "name": "Health",
677
+ "description": "Health check and server status endpoints"
678
+ },
679
+ {
680
+ "name": "Sessions",
681
+ "description": "Inference session management - create, control, and monitor AI sessions"
682
+ },
683
+ {
684
+ "name": "Control",
685
+ "description": "Session control operations - start, stop, restart inference"
686
+ },
687
+ {
688
+ "name": "Debug",
689
+ "description": "Debug and monitoring endpoints for system diagnostics"
690
+ }
691
+ ]
692
+ }
pyproject.toml ADDED
@@ -0,0 +1,43 @@
+ [project]
+ name = "inference-server"
+ version = "0.1.0"
+ description = "ACT Model Inference Server for Real-time Robot Control"
+ readme = "README.md"
+ requires-python = ">=3.12"
+ dependencies = [
+ "aiofiles>=24.1.0",
+ "aiortc>=1.13.0",
+ "av>=14.4.0",
+ "einops>=0.7.0",
+ "fastapi>=0.115.12",
+ "gradio>=5.34.2",
+ "httpx>=0.28.1",
+ "huggingface-hub>=0.32.4",
+ "imageio[ffmpeg]>=2.37.0",
+ "lerobot",
+ "robohub-transport-server-client",
+ "numpy>=1.26.4",
+ "opencv-python>=4.11.0.86",
+ "opencv-python-headless>=4.11.0.86",
+ "psutil>=7.0.0",
+ "pydantic>=2.11.5",
+ "python-multipart>=0.0.20",
+ "torch>=2.2.2",
+ "torchvision>=0.17.2",
+ "tqdm>=4.67.1",
+ "transformers>=4.52.4",
+ "uvicorn[standard]>=0.34.3",
+ "websockets>=15.0.1",
+ ]
+
+ [dependency-groups]
+ dev = [
+ "httpx>=0.28.1",
+ "pytest>=8.4.0",
+ "pytest-asyncio>=1.0.0",
+ "pytest-cov>=6.1.1",
+ ]
+
+ [tool.uv.sources]
+ robohub-transport-server-client = { path = "../transport-server/client/python", editable = true }
+ lerobot = { path = "./external/lerobot", editable = false }
src/__pycache__/__init__.cpython-312.pyc ADDED
Binary file (175 Bytes). View file
 
src/__pycache__/main.cpython-312.pyc ADDED
Binary file (9.9 kB). View file
 
src/__pycache__/session_manager.cpython-312.pyc ADDED
Binary file (19.8 kB). View file
 
src/inference_server.egg-info/PKG-INFO ADDED
@@ -0,0 +1,347 @@
+ Metadata-Version: 2.4
+ Name: inference-server
+ Version: 0.1.0
+ Summary: ACT Model Inference Server for Real-time Robot Control
+ Requires-Python: >=3.12
+ Description-Content-Type: text/markdown
+ Requires-Dist: aiofiles>=24.1.0
+ Requires-Dist: aiortc>=1.13.0
+ Requires-Dist: av>=14.4.0
+ Requires-Dist: einops>=0.7.0
+ Requires-Dist: fastapi>=0.115.12
+ Requires-Dist: gradio>=5.34.2
+ Requires-Dist: httpx>=0.28.1
+ Requires-Dist: huggingface-hub>=0.32.4
+ Requires-Dist: imageio[ffmpeg]>=2.37.0
+ Requires-Dist: lerobot
+ Requires-Dist: lerobot-arena-client
+ Requires-Dist: numpy>=1.26.4
+ Requires-Dist: opencv-python>=4.11.0.86
+ Requires-Dist: opencv-python-headless>=4.11.0.86
+ Requires-Dist: psutil>=7.0.0
+ Requires-Dist: pydantic>=2.11.5
+ Requires-Dist: python-multipart>=0.0.20
+ Requires-Dist: torch>=2.2.2
+ Requires-Dist: torchvision>=0.17.2
+ Requires-Dist: tqdm>=4.67.1
+ Requires-Dist: transformers>=4.52.4
+ Requires-Dist: uvicorn[standard]>=0.34.3
+ Requires-Dist: websockets>=15.0.1
+
+ ---
+ title: LeRobot Arena - AI Inference Server
+ emoji: 🤖
+ colorFrom: blue
+ colorTo: purple
+ sdk: docker
+ app_port: 7860
+ suggested_hardware: t4-small
+ suggested_storage: medium
+ short_description: Real-time ACT model inference server for robot control
+ tags:
+ - robotics
+ - ai
+ - inference
+ - control
+ - act-model
+ - transformer
+ - real-time
+ - gradio
+ - fastapi
+ - computer-vision
+ pinned: false
+ fullWidth: true
+ ---
+
+ # Inference Server
+
+ 🤖 **Real-time ACT Model Inference Server for Robot Control**
+
+ This server provides ACT (Action Chunking Transformer) model inference for robotics applications using the transport server communication system. It includes a user-friendly Gradio web interface for easy setup and management.
+
+ ## ✨ Features
+
+ - **Real-time AI Inference**: Run ACT models for robot control at a 30Hz control frequency
+ - **Multi-Camera Support**: Handle multiple camera streams, each with its own name
+ - **Web Interface**: User-friendly Gradio UI for setup and monitoring
+ - **Session Management**: Create, start, stop, and monitor inference sessions
+ - **Automatic Timeout**: Sessions automatically clean up after 10 minutes of inactivity
+ - **Debug Tools**: Built-in debugging and monitoring endpoints
+ - **Flexible Configuration**: Support for custom model paths and camera configurations
+ - **No External Dependencies**: Direct Python execution without subprocess calls
+
+ ## 🚀 Quick Start
+
+ ### Prerequisites
+
+ - Python 3.12+
+ - UV package manager (recommended)
+ - Trained ACT model
+ - Transport server running
+
+ ### 1. Installation
+
+ ```bash
+ cd backend/ai-server
+
+ # Install dependencies using uv (recommended)
+ uv sync
+
+ # Or using pip
+ pip install -e .
+ ```
+
+ ### 2. Launch the Application
+
+ #### **🚀 Simple Integrated Mode (Recommended)**
+ ```bash
+ # Everything runs in one process - no subprocess issues!
+ python launch_simple.py
+
+ # Or using the CLI
+ python -m inference_server.cli --simple
+ ```
+
+ This will:
+ - Run everything on `http://localhost:7860`
+ - Manage sessions directly (no HTTP API calls)
+ - Avoid external subprocess dependencies
+
+ This is the most robust and simplest deployment.
+
+ #### **🔧 Development Mode (Separate Processes)**
+ ```bash
+ # Traditional approach with separate server and UI
+ python -m inference_server.cli
+ ```
+
+ This will:
+ - Start the AI server on `http://localhost:8001`
+ - Launch the Gradio UI on `http://localhost:7860`
+
+ This mode is better suited to development and debugging.
+
+ ### 3. Using the Web Interface
+
+ 1. **Check Server Status**: The interface will automatically check if the AI server is running
+ 2. **Configure Your Robot**: Enter your model path and camera setup
+ 3. **Create & Start Session**: Click the button to set up AI control
+ 4. **Monitor Performance**: Use the status panel to monitor inference
+
+ ## 🎯 Workflow Guide
+
+ ### Step 1: AI Server
+ - The server status will be displayed at the top
+ - Click "Start Server" if it's not already running
+ - Use "Check Status" to verify connectivity
+
+ ### Step 2: Set Up Robot AI
+ - **Session Name**: Give your session a unique name (e.g., "my-robot-01")
+ - **AI Model Path**: Path to your trained ACT model (e.g., "./checkpoints/act_so101_beyond")
+ - **Camera Names**: Comma-separated list of camera names (e.g., "front,wrist,overhead")
+ - Click "Create & Start AI Control" to begin
+
+ ### Step 3: Control Session
+ - The session ID will be auto-filled after creation
+ - Use Start/Stop buttons to control inference
+ - Click "Status" to see detailed performance metrics
+
+ ## 🛠️ Advanced Usage
+
+ ### CLI Options
+
+ ```bash
+ # Simple integrated mode (recommended)
+ python -m inference_server.cli --simple
+
+ # Development mode (separate processes)
+ python -m inference_server.cli
+
+ # Launch only the server
+ python -m inference_server.cli --server-only
+
+ # Launch only the UI (server must be running separately)
+ python -m inference_server.cli --ui-only
+
+ # Custom ports
+ python -m inference_server.cli --server-port 8002 --ui-port 7861
+
+ # Enable public sharing
+ python -m inference_server.cli --share
+
+ # For deployment (recommended)
+ python -m inference_server.cli --simple --host 0.0.0.0 --share
+ ```
+
+ ### API Endpoints
+
+ The server provides a REST API for programmatic access:
+
+ - `GET /health` - Server health check
+ - `POST /sessions` - Create new session
+ - `GET /sessions` - List all sessions
+ - `GET /sessions/{id}` - Get session details
+ - `POST /sessions/{id}/start` - Start inference
+ - `POST /sessions/{id}/stop` - Stop inference
+ - `POST /sessions/{id}/restart` - Restart inference
+ - `DELETE /sessions/{id}` - Delete session
+
+ #### Debug Endpoints
+ - `GET /debug/system` - System information (CPU, memory, GPU)
+ - `GET /debug/sessions/{id}/queue` - Action queue details
+ - `POST /debug/sessions/{id}/reset` - Reset session state
+
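These endpoints can be driven from a short script. Below is a minimal sketch using only the Python standard library; it assumes the development server URL from the OpenAPI spec (`http://localhost:8001`), and the request-body field names (`session_id`, `policy_path`, `camera_names`) are assumptions taken from the session schema, not verified against the server code:

```python
import json
from urllib import request

BASE_URL = "http://localhost:8001"  # development server URL from the OpenAPI spec


def build_session_payload(session_id, policy_path, camera_names):
    """Build a request body for POST /sessions (field names are assumptions)."""
    return {
        "session_id": session_id,
        "policy_path": policy_path,
        "camera_names": list(camera_names),
    }


def call(method, path, payload=None):
    """Send one JSON request to the inference server and decode the JSON reply."""
    data = json.dumps(payload).encode() if payload is not None else None
    req = request.Request(
        BASE_URL + path,
        data=data,
        headers={"Content-Type": "application/json"},
        method=method,
    )
    with request.urlopen(req) as resp:
        return json.load(resp)


# Example lifecycle (requires a running server):
#   call("POST", "/sessions", build_session_payload(
#       "my-robot-01", "./checkpoints/act_so101_beyond", ["front", "wrist"]))
#   call("POST", "/sessions/my-robot-01/start")
#   call("GET", "/sessions/my-robot-01")
#   call("POST", "/sessions/my-robot-01/stop")
#   call("DELETE", "/sessions/my-robot-01")
```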
+ ### Configuration
+
+ #### Joint Value Convention
+ - All joint inputs/outputs use **NORMALIZED VALUES**
+ - Most joints: -100 to +100 (RANGE_M100_100)
+ - Gripper: 0 to 100 (RANGE_0_100)
+ - This matches the training data format exactly
+
+ #### Camera Support
+ - Supports an arbitrary number of camera streams
+ - Each camera has a unique name (e.g., "front", "wrist", "overhead")
+ - All camera streams are synchronized for inference
+ - Images are expected in RGB format, uint8 [0-255]
+
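A small guard can enforce the joint value convention before commands leave your code. A minimal sketch; the joint names are hypothetical, and only the two documented ranges are encoded:

```python
# Normalized-range guard for outgoing joint commands.
# Ranges follow the convention above: most joints -100..+100 (RANGE_M100_100),
# gripper 0..100 (RANGE_0_100). Joint names are hypothetical examples.
JOINT_RANGES = {
    "gripper": (0.0, 100.0),
}
DEFAULT_RANGE = (-100.0, 100.0)


def clamp_command(joint_name, value):
    """Clamp one normalized joint command into its documented range."""
    lo, hi = JOINT_RANGES.get(joint_name, DEFAULT_RANGE)
    return max(lo, min(hi, float(value)))
```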
+ ## 📊 Monitoring
+
+ ### Session Status Indicators
+ - 🟢 **Running**: Inference active and processing
+ - 🟡 **Ready**: Session created but inference not started
+ - 🔴 **Stopped**: Inference stopped
+ - 🟠 **Initializing**: Session being set up
+
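The same indicators are easy to reproduce in your own monitoring scripts from a `GET /sessions/{id}` response. A minimal sketch; the lowercase status strings are assumptions inferred from the labels above, not confirmed against the server code:

```python
# Map assumed status strings to the indicator emoji used in this README.
STATUS_ICONS = {
    "running": "🟢",
    "ready": "🟡",
    "stopped": "🔴",
    "initializing": "🟠",
}


def status_line(session):
    """Render a one-line summary from a GET /sessions/{id} response dict."""
    status = session.get("status", "unknown")
    icon = STATUS_ICONS.get(status, "❓")
    return f"{icon} {session.get('session_id', '?')}: {status}"
```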
+ ### Smart Session Control
+ The UI provides intelligent feedback:
+ - ℹ️ **Already Running**: When trying to start a running session
+ - ℹ️ **Already Stopped**: When trying to stop a stopped session
+ - 💡 **Smart Suggestions**: Context-aware tips based on current status
+
+ ### Performance Metrics
+ - **Inferences**: Total number of model inferences performed
+ - **Commands Sent**: Joint commands sent to the robot
+ - **Queue Length**: Actions waiting in the queue
+ - **Errors**: Number of errors encountered
+ - **Data Flow**: Images and joint states received
+
+ ## 🐳 Docker Usage
+
+ ### Build the Image
+ ```bash
+ cd services/inference-server
+ docker build -t inference-server .
+ ```
+
+ ### Run the Container
+ ```bash
+ # Basic usage
+ docker run -p 7860:7860 inference-server
+
+ # With environment variables
+ docker run -p 7860:7860 \
+ -e DEFAULT_ARENA_SERVER_URL=http://your-server.com \
+ -e DEFAULT_MODEL_PATH=./checkpoints/your-model \
+ inference-server
+
+ # With GPU support
+ docker run --gpus all -p 7860:7860 inference-server
+ ```
+
+ ## 🔧 Troubleshooting
+
+ ### Common Issues
+
+ 1. **Server Won't Start**
+ - Check if port 8001 is available
+ - Verify the model path exists and is accessible
+ - Check that dependencies are installed correctly
+
+ 2. **Session Creation Fails**
+ - Verify the model path is correct
+ - Check that the Arena server is running on the specified URL
+ - Ensure camera names match your robot configuration
+
+ 3. **Poor Performance**
+ - Monitor system resources in the debug panel
+ - Check if the GPU is being used for inference
+ - Verify control/inference frequency settings
+
+ 4. **Connection Issues**
+ - Verify the Arena server URL is correct
+ - Check network connectivity
+ - Ensure workspace/room IDs are valid
+
+ ### Debug Mode
+
+ Enable debug mode for detailed logging:
+
+ ```bash
+ uv run python -m inference_server.cli --debug
+ ```
+
+ ### System Requirements
+
+ - **CPU**: Multi-core recommended for 30Hz control
+ - **Memory**: 8GB+ RAM recommended
+ - **GPU**: CUDA-compatible GPU for fast inference (optional but recommended)
+ - **Network**: Stable connection to Arena server
+
+ ## 📚 Architecture
+
+ ### Integrated Mode (Recommended)
+ ```
+ ┌─────────────────────────────────────┐     ┌─────────────────┐
+ │         Single Application          │     │  LeRobot Arena  │
+ │  ┌─────────────┐  ┌─────────────┐   │◄──►│   (Port 8000)   │
+ │  │  Gradio UI  │  │  AI Server  │   │     └─────────────────┘
+ │  │     (/)     │  │  (/api/*)   │   │              │
+ │  └─────────────┘  └─────────────┘   │              │
+ │            (Port 7860)              │       Robot/Cameras
+ └─────────────────────────────────────┘
+         │
+    Web Browser
+ ```
+
+ ### Development Mode
+ ```
+ ┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
+ │    Gradio UI    │     │    AI Server    │     │  LeRobot Arena  │
+ │   (Port 7860)   │◄──►│   (Port 8001)   │◄──►│   (Port 8000)   │
+ └─────────────────┘     └─────────────────┘     └─────────────────┘
+         │                       │                       │
+         │                       │                       │
+    Web Browser             ACT Model              Robot/Cameras
+                            Inference
+ ```
+
+ ### Data Flow
+
+ 1. **Camera Data**: Robot cameras → Arena → AI Server
+ 2. **Joint State**: Robot joints → Arena → AI Server
+ 3. **AI Inference**: Images + Joint State → ACT Model → Actions
+ 4. **Control Commands**: Actions → Arena → Robot
+
+ ### Session Lifecycle
+
+ 1. **Create**: Set up rooms in Arena, load ACT model
+ 2. **Start**: Begin inference loop (3Hz) and control loop (30Hz)
+ 3. **Running**: Process camera/joint data, generate actions
+ 4. **Stop**: Pause inference, maintain connections
+ 5. **Delete**: Clean up resources, disconnect from Arena
+
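The Start step above runs two loops at different rates: inference fills an action queue with predicted chunks, and control drains it toward the robot. A minimal single-task sketch under stated assumptions (`model`, `get_observation`, `send_command`, and the `ticks` bound are all hypothetical stand-ins; the real implementation lives in `inference_server.session_manager`):

```python
import asyncio
from collections import deque


async def run_session(model, get_observation, send_command, *, ticks=None):
    """Sketch of the two-rate loop: inference at ~3Hz, control at 30Hz.

    With a 30Hz control tick, running inference on every 10th tick
    approximates the 3Hz inference rate without a second task.
    `ticks` bounds the loop for demonstration; the server runs until stopped.
    """
    queue = deque()
    sent = 0
    tick = 0
    while ticks is None or tick < ticks:
        if tick % 10 == 0:
            # Inference step: the model predicts a chunk of actions
            # from one synchronized observation (images + joint state).
            queue.extend(model(get_observation()))
        if queue:
            # Control step: send the next queued action to the robot.
            send_command(queue.popleft())
            sent += 1
        await asyncio.sleep(1 / 30)  # 30Hz control period
        tick += 1
    return sent
```

The real server runs both loops concurrently; collapsing them into one ticked loop here just keeps the rate relationship visible.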
+ ## 🤝 Contributing
+
+ 1. Follow the existing code style
+ 2. Add tests for new features
+ 3. Update documentation
+ 4. Submit pull requests
+
+ ## 📄 License
+
+ This project follows the same license as the parent LeRobot Arena project.
+
+ ---
+
+ For more information, see the [LeRobot Arena documentation](../../README.md).
src/inference_server.egg-info/SOURCES.txt ADDED
@@ -0,0 +1,22 @@
+ README.md
+ pyproject.toml
+ src/inference_server/__init__.py
+ src/inference_server/cli.py
+ src/inference_server/export_openapi.py
+ src/inference_server/main.py
+ src/inference_server/session_manager.py
+ src/inference_server/simple_integrated.py
+ src/inference_server/ui.py
+ src/inference_server.egg-info/PKG-INFO
+ src/inference_server.egg-info/SOURCES.txt
+ src/inference_server.egg-info/dependency_links.txt
+ src/inference_server.egg-info/requires.txt
+ src/inference_server.egg-info/top_level.txt
+ src/inference_server/models/__init__.py
+ src/inference_server/models/act_inference.py
+ src/inference_server/models/base_inference.py
+ src/inference_server/models/diffusion_inference.py
+ src/inference_server/models/joint_config.py
+ src/inference_server/models/pi0_inference.py
+ src/inference_server/models/pi0fast_inference.py
+ src/inference_server/models/smolvla_inference.py
src/inference_server.egg-info/dependency_links.txt ADDED
@@ -0,0 +1 @@
+
src/inference_server.egg-info/requires.txt ADDED
@@ -0,0 +1,23 @@
+ aiofiles>=24.1.0
+ aiortc>=1.13.0
+ av>=14.4.0
+ einops>=0.7.0
+ fastapi>=0.115.12
+ gradio>=5.34.2
+ httpx>=0.28.1
+ huggingface-hub>=0.32.4
+ imageio[ffmpeg]>=2.37.0
+ lerobot
+ lerobot-arena-client
+ numpy>=1.26.4
+ opencv-python>=4.11.0.86
+ opencv-python-headless>=4.11.0.86
+ psutil>=7.0.0
+ pydantic>=2.11.5
+ python-multipart>=0.0.20
+ torch>=2.2.2
+ torchvision>=0.17.2
+ tqdm>=4.67.1
+ transformers>=4.52.4
+ uvicorn[standard]>=0.34.3
+ websockets>=15.0.1
src/inference_server.egg-info/top_level.txt ADDED
@@ -0,0 +1 @@
+ inference_server
src/inference_server/__init__.py ADDED
@@ -0,0 +1,30 @@
+ try:
+     from .export_openapi import export_openapi_schema
+ except ImportError:
+     export_openapi_schema = None
+
+ try:
+     from .main import app
+ except ImportError:
+     app = None
+
+ from .session_manager import InferenceSession, SessionManager
+
+ try:
+     from .ui import launch_ui
+ except ImportError:
+     launch_ui = None
+
+ __version__ = "0.1.0"
+ __all__ = [
+     "InferenceSession",
+     "SessionManager",
+ ]
+
+ # Add optional exports if available
+ if app is not None:
+     __all__.append("app")
+ if export_openapi_schema is not None:
+     __all__.append("export_openapi_schema")
+ if launch_ui is not None:
+     __all__.append("launch_ui")
src/inference_server/__pycache__/__init__.cpython-312.pyc ADDED
Binary file (916 Bytes). View file
 
src/inference_server/__pycache__/__init__.cpython-313.pyc ADDED
Binary file (928 Bytes). View file
 
src/inference_server/__pycache__/cli.cpython-312.pyc ADDED
Binary file (10.5 kB). View file
 
src/inference_server/__pycache__/export_openapi.cpython-312.pyc ADDED
Binary file (8.32 kB). View file
 
src/inference_server/__pycache__/export_openapi.cpython-313.pyc ADDED
Binary file (8.27 kB). View file
 
src/inference_server/__pycache__/gradio_ui.cpython-312.pyc ADDED
Binary file (38.1 kB). View file
 
src/inference_server/__pycache__/gradio_ui.cpython-313.pyc ADDED
Binary file (37.9 kB). View file
 
src/inference_server/__pycache__/integrated.cpython-312.pyc ADDED
Binary file (17.3 kB). View file
 
src/inference_server/__pycache__/main.cpython-312.pyc ADDED
Binary file (15.6 kB). View file
 
src/inference_server/__pycache__/main.cpython-313.pyc ADDED
Binary file (15.6 kB). View file
 
src/inference_server/__pycache__/session_manager.cpython-312.pyc ADDED
Binary file (38.8 kB). View file
 
src/inference_server/__pycache__/session_manager.cpython-313.pyc ADDED
Binary file (39 kB). View file
 
src/inference_server/__pycache__/simple_integrated.cpython-312.pyc ADDED
Binary file (16.3 kB). View file
 
src/inference_server/__pycache__/simple_integrated.cpython-313.pyc ADDED
Binary file (14.9 kB). View file
 
src/inference_server/__pycache__/ui.cpython-312.pyc ADDED
Binary file (18.2 kB). View file
 
src/inference_server/__pycache__/ui.cpython-313.pyc ADDED
Binary file (18.1 kB). View file
 
src/inference_server/__pycache__/ui_v2.cpython-312.pyc ADDED
Binary file (18 kB). View file