---
title: Laban Movement Analysis
emoji: 🩰
colorFrom: purple
colorTo: green
app_file: app.py
sdk: gradio
sdk_version: 5.33.0
pinned: false
tags:
- laban-movement-analysis
- pose-estimation
- movement-analysis
- video-analysis
- youtube
- vimeo
- mcp
- agent-ready
- computer-vision
- mediapipe
- yolo
- gradio
- agentic-analysis
- overlay-video
- temporal-patterns
short_description: Laban Movement Analysis (LMA) from pose estimation
license: apache-2.0
---
# 🩰 Laban Movement Analysis
**Advanced video movement analysis platform** combining Laban Movement Analysis (LMA) principles with modern AI pose estimation, intelligent analysis, and interactive visualization.
## 🚀 Key Features
### 🎯 **Multi-Model Pose Estimation**
- **15 different pose estimation models** from multiple sources:
- **MediaPipe**: `mediapipe-lite`, `mediapipe-full`, `mediapipe-heavy`
- **MoveNet**: `movenet-lightning`, `movenet-thunder`
- **YOLO v8**: `yolo-v8-n/s/m/l/x` (5 variants)
- **YOLO v11**: `yolo-v11-n/s/m/l/x` (5 variants)
### 🎥 **Comprehensive Video Processing**
- **JSON Analysis Output**: Detailed movement metrics with temporal data
- **Annotated Video Generation**: Pose overlay with Laban movement data
- **URL Support**: Direct processing from YouTube, Vimeo, and video URLs
- **Custom Overlay Component**: `gradio_overlay_video` for controlled layered visualization
### 🤖 **Agentic Intelligence**
- **SUMMARY Analysis**: Narrative movement interpretation with temporal patterns
- **STRUCTURED Analysis**: Quantitative breakdowns and statistical insights
- **MOVEMENT FILTERS**: Pattern detection with intelligent filtering
- **Laban Interpretation**: Professional movement quality assessment
### 🎨 **Interactive Visualization**
- **Standard Analysis Tab**: Core pose estimation and LMA processing
- **Overlay Visualization Tab**: Interactive layered video display
- **Agentic Analysis Tab**: AI-powered movement insights and filtering
## Installation
```bash
pip install gradio_labanmovementanalysis
```
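
For quick programmatic use outside the Space UI, the analyzer can be driven directly. A minimal sketch, assuming a local clip at the placeholder path `dance.mp4` and using the same `process_video` call as the app below:

```python
from gradio_labanmovementanalysis import LabanMovementAnalysis

# "dance.mp4" is a placeholder path — substitute any local video file or URL.
analyzer = LabanMovementAnalysis(enable_visualization=True)
json_result, annotated_video = analyzer.process_video(
    "dance.mp4",
    model="mediapipe-full",
)
print(json_result)
```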
## Usage
```python
# app.py ────────────────────────────────────────────────────────
"""
Laban Movement Analysis – modernised Gradio Space
Author: Csaba (BladeSzaSza)
"""
import gradio as gr
import os

from gradio_labanmovementanalysis import LabanMovementAnalysis

# Initialize the agent API if the optional dependency is available
agent_api = None
try:
    from gradio_labanmovementanalysis.agent_api import (
        LabanAgentAPI,
        PoseModel,
        MovementDirection,
        MovementIntensity
    )
    agent_api = LabanAgentAPI()
except Exception as e:
    print(f"Warning: Agent API not available: {e}")
    agent_api = None

# Initialize components
try:
    analyzer = LabanMovementAnalysis(
        enable_visualization=True
    )
    print("✅ Core features initialized successfully")
except Exception as e:
    print(f"Warning: Some features may not be available: {e}")
    analyzer = LabanMovementAnalysis()


def process_video_enhanced(video_input, model, enable_viz, include_keypoints):
    """Enhanced video processing with all new features."""
    if not video_input:
        return {"error": "No video provided"}, None
    try:
        # Handle both file upload and URL input
        video_path = video_input.name if hasattr(video_input, 'name') else video_input
        json_result, viz_result = analyzer.process_video(
            video_path,
            model=model,
            enable_visualization=enable_viz,
            include_keypoints=include_keypoints
        )
        return json_result, viz_result
    except Exception as e:
        return {"error": str(e)}, None


def process_video_standard(video: str, model: str, include_keypoints: bool) -> dict:
    """
    Process a video file with the specified pose estimation model and return movement analysis results.

    Args:
        video (str): Path to the video file to be analyzed.
        model (str): Name of the pose estimation model to use (e.g., "mediapipe-full", "movenet-thunder").
        include_keypoints (bool): Whether to include raw keypoint data in the output.

    Returns:
        dict: Movement analysis results in JSON format, or a dictionary with an "error" key if processing fails.

    Notes:
        - Visualization is disabled in this standard processing function.
        - If the input video is None, the function returns None.
    """
    if video is None:
        return None
    try:
        # Only the JSON analysis is returned; the visualization output is discarded.
        json_output, _ = analyzer.process_video(
            video,
            model=model,
            enable_visualization=False,
            include_keypoints=include_keypoints
        )
        return json_output
    except (RuntimeError, ValueError, OSError) as e:
        return {"error": str(e)}


# ── 4. Build UI ─────────────────────────────────────────────────
def create_demo() -> gr.Blocks:
    with gr.Blocks(
        title="Laban Movement Analysis",
        theme='gstaff/sketch',
        fill_width=True,
    ) as demo:
        # Register the plain API endpoint inside the Blocks context
        gr.api(process_video_standard, api_name="process_video")

        # ── Hero banner ──
        gr.Markdown(
            """
            # 🎭 Laban Movement Analysis
            Pose estimation • AI action recognition • Movement Analysis
            """
        )

        with gr.Tabs():
            # Tab 1: Standard Analysis
            with gr.Tab("🎬 Standard Analysis"):
                gr.Markdown("""
                ### Upload a video file to analyze movement using traditional LMA metrics with pose estimation.
                """)
                # ── Workspace ──
                with gr.Row(equal_height=True):
                    # Input column
                    with gr.Column(scale=1, min_width=260):
                        analyze_btn_enh = gr.Button("🚀 Analyze Movement", variant="primary", size="lg")
                        video_in = gr.Video(label="Upload Video", sources=["upload"], format="mp4")
                        # URL input option
                        url_input_enh = gr.Textbox(
                            label="Or Enter Video URL",
                            placeholder="YouTube URL, Vimeo URL, or direct video URL",
                            info="Leave file upload empty to use URL"
                        )
                        gr.Markdown("**Model Selection**")
                        model_sel = gr.Dropdown(
                            choices=[
                                # MediaPipe variants
                                "mediapipe-lite", "mediapipe-full", "mediapipe-heavy",
                                # MoveNet variants
                                "movenet-lightning", "movenet-thunder",
                                # YOLO v8 variants
                                "yolo-v8-n", "yolo-v8-s", "yolo-v8-m", "yolo-v8-l", "yolo-v8-x",
                                # YOLO v11 variants
                                "yolo-v11-n", "yolo-v11-s", "yolo-v11-m", "yolo-v11-l", "yolo-v11-x"
                            ],
                            value="mediapipe-full",
                            label="Advanced Pose Models",
                            info="15 model variants available"
                        )
                        with gr.Accordion("Analysis Options", open=False):
                            enable_viz = gr.Radio([("Yes", 1), ("No", 0)], value=1, label="Visualization")
                            include_kp = gr.Radio([("Yes", 1), ("No", 0)], value=0, label="Raw Keypoints")
                        gr.Examples(
                            examples=[
                                ["examples/balette.mp4"],
                                ["https://www.youtube.com/shorts/RX9kH2l3L8U"],
                                ["https://vimeo.com/815392738"],
                                ["https://vimeo.com/548964931"],
                                ["https://videos.pexels.com/video-files/5319339/5319339-uhd_1440_2560_25fps.mp4"],
                            ],
                            inputs=url_input_enh,
                            label="Examples"
                        )
                    # Output column
                    with gr.Column(scale=2, min_width=320):
                        viz_out = gr.Video(label="Annotated Video", scale=1, height=400)
                        with gr.Accordion("Raw JSON", open=True):
                            json_out = gr.JSON(label="Movement Analysis", elem_classes=["json-output"])

                # Wiring
                def process_enhanced_input(file_input, url_input, model, enable_viz, include_keypoints):
                    """Process either file upload or URL input."""
                    video_source = file_input if file_input else url_input
                    return process_video_enhanced(video_source, model, enable_viz, include_keypoints)

                analyze_btn_enh.click(
                    fn=process_enhanced_input,
                    inputs=[video_in, url_input_enh, model_sel, enable_viz, include_kp],
                    outputs=[json_out, viz_out],
                    api_name="analyze_enhanced"
                )

        # Footer
        with gr.Row():
            gr.Markdown(
                """
                **Built by Csaba Bolyós**
                [GitHub](https://github.com/bladeszasza) • [HF](https://huggingface.co/BladeSzaSza)
                """
            )
    return demo


if __name__ == "__main__":
    demo = create_demo()
    demo.launch(
        server_name="0.0.0.0",
        share=True,
        server_port=int(os.getenv("PORT", 7860)),
        mcp_server=True
    )
```
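
Because the app registers named endpoints (`process_video` and `analyze_enhanced`) and launches with `mcp_server=True`, the analysis can also be invoked remotely. A minimal sketch using `gradio_client`; the URL is a placeholder for your running app or Space:

```python
from gradio_client import Client

# Placeholder URL — point this at the running app or its Hugging Face Space.
client = Client("http://localhost:7860")

# Arguments mirror process_video_standard: video path/URL, model, include_keypoints.
result = client.predict(
    "https://vimeo.com/815392738",
    "mediapipe-full",
    False,
    api_name="/process_video",
)
print(result)
```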
## `LabanMovementAnalysis`
### Initialization
| name | type | default | description |
|---|---|---|---|
| `default_model` | `str` | `"mediapipe"` | Default pose estimation model (`"mediapipe"`, `"movenet"`, `"yolo"`) |
| `enable_visualization` | `bool` | `True` | Whether to generate a visualization video by default |
| `include_keypoints` | `bool` | `False` | Whether to include raw keypoints in the JSON output |
| `enable_webrtc` | `bool` | `False` | Whether to enable WebRTC real-time analysis |
| `label` | `str \| None` | `None` | Component label |
| `every` | `float \| None` | `None` | Re-run interval in seconds (standard Gradio component parameter) |
| `show_label` | `bool \| None` | `None` | Whether to display the component label |
| `container` | `bool` | `True` | Whether to wrap the component in a container |
| `scale` | `int \| None` | `None` | Relative width compared to adjacent components |
| `min_width` | `int` | `160` | Minimum width in pixels |
| `interactive` | `bool \| None` | `None` | Whether the component accepts user input |
| `visible` | `bool` | `True` | Whether the component is visible |
| `elem_id` | `str \| None` | `None` | Optional HTML id for the component |
| `elem_classes` | `list[str] \| None` | `None` | Optional HTML classes for the component |
| `render` | `bool` | `True` | Whether to render the component immediately in the Blocks context |
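
Putting the analysis-specific parameters together, a JSON-only configuration might look like this (a sketch based on the defaults above):

```python
from gradio_labanmovementanalysis import LabanMovementAnalysis

# Keypoints on, no annotated video — a lightweight JSON-only setup.
analyzer = LabanMovementAnalysis(
    default_model="movenet",
    enable_visualization=False,
    include_keypoints=True,
)
```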