Owlet-HAR-1: Human Activity Recognition Vision Language Model
Owlet-HAR-1 is a fine-tuned vision-language model specialized for human activity recognition in videos. Built on Qwen2.5-3B-VL, it achieves 68.19% accuracy on the HMDB51 dataset, a 29.86-percentage-point improvement over the base model's 38.33% accuracy.
Model Details
Model Description
Owlet-HAR-1 is a specialized vision-language model fine-tuned for human activity recognition tasks. The model processes video input and classifies human activities across 51 different action categories, ranging from facial expressions to complex body movements and object interactions.
- Developed by: Phronetic AI
- Model type: Vision-Language Model (Video Classification)
- Language(s): English (text output), Visual (video input)
- License: Apache 2.0
- Finetuned from model: Qwen/Qwen2.5-3B-Instruct
- Specialized for: Human Activity Recognition in Videos
Model Sources
- Repository: https://huggingface.co/phronetic-ai/owlet-har-1
- Base Model: https://huggingface.co/Qwen/Qwen2.5-3B-Instruct
- Research Blog: Enhancing Video Activity Recognition with Human Pose Data: A Vision Language Model Study (link to be added)
Uses
Direct Use
The model is designed for direct video activity recognition tasks. It takes video input and outputs a single word describing the primary human activity being performed. The model can be used for:
- Healthcare monitoring: Identifying daily activities and movements
- Human-computer interaction: Understanding user actions in video interfaces
- Security and surveillance: Automated activity detection
- Content analysis: Categorizing video content by human activities
Downstream Use
The model can be integrated into larger systems for:
- Video content management systems
- Automated video tagging and indexing
- Real-time activity monitoring applications
- Educational platforms for movement analysis
- Assistive technologies for elderly or disabled individuals
Out-of-Scope Use
- Privacy-sensitive applications: The model should not be used for unauthorized surveillance
- High-stakes decision making: Not suitable for critical applications without human oversight
- Real-time safety systems: May not be reliable enough for safety-critical applications
- Non-human activity recognition: Trained specifically on human activities
- Complex scene understanding: Focuses on single-person activities, may struggle with multi-person scenes
Performance
Key Metrics on HMDB51
- Accuracy: 68.19%
- Precision: 70.20%
- Recall: 67.93%
Best Performing Activities
- Cartwheeling, drawing sword, falling, grooming, punching: 100% precision and recall
- Climbing stairs: 90.91% precision, 95.24% recall
- Drinking: 90.00% precision, 93.10% recall
Challenging Activities
- Biking: 0% precision and recall (a known failure case)
- Catching and shooting: Lower performance across metrics
How to Get Started with the Model
```python
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
import torch
# Load the model and processor
model = Qwen2VLForConditionalGeneration.from_pretrained(
"phronetic-ai/owlet-har-1",
torch_dtype=torch.bfloat16,
device_map="auto",
trust_remote_code=True
)
processor = AutoProcessor.from_pretrained("phronetic-ai/owlet-har-1", trust_remote_code=True)
# Video inference - Multiple input methods supported:
# Method 1: Using video frames as image list
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": [
"file:///path/to/frame1.jpg",
"file:///path/to/frame2.jpg",
"file:///path/to/frame3.jpg",
"file:///path/to/frame4.jpg",
],
},
{"type": "text", "text": "What's the activity the person is doing in this video? Answer in one word only."},
],
}
]
# Method 2: Using local video file
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": "file:///path/to/your_video.mp4",
"max_pixels": 360 * 420,
"fps": 1.0,
},
{"type": "text", "text": "What's the activity the person is doing in this video? Answer in one word only."},
],
}
]
# Method 3: Using video URL
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": "https://your-video-url.com/video.mp4",
},
{"type": "text", "text": "What's the activity the person is doing in this video? Answer in one word only."},
],
}
]
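# Note: each `messages` assignment above overwrites the previous one;
# keep only the input method that matches your data.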
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
**video_kwargs,
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(f"Detected activity: {output_text[0]}")
Training Details
Training Data
The model was fine-tuned on the HMDB51 dataset, which contains:
- Total clips: 6,849 video clips
- Categories: 51 distinct human action categories
- Category groups:
- General Facial Actions (smile, laugh, chew, talk)
- Facial Actions with Object Manipulation (smoke, eat, drink)
- General Body Movements (cartwheel, handstand, jump, run, walk)
- Body Movements with Object Interaction (brush hair, catch, golf, shoot ball)
- Body Movements for Human Interaction (fencing, hug, kiss, punch)
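For reference, HMDB51 is distributed as one directory per action class, each containing short AVI clips. A minimal sketch for enumerating (clip, label) pairs from that layout, assuming the archives have already been extracted into a local `hmdb51/` directory (the path is an assumption):

```python
from pathlib import Path

def list_hmdb51_clips(root: str = "hmdb51") -> list[tuple[str, str]]:
    """Enumerate (video_path, class_label) pairs from the standard HMDB51
    layout of one folder per action class. Paths are illustrative."""
    pairs = []
    for class_dir in sorted(Path(root).iterdir()):
        if not class_dir.is_dir():
            continue
        for clip in sorted(class_dir.glob("*.avi")):
            pairs.append((str(clip), class_dir.name))
    return pairs

# clips = list_hmdb51_clips()
# print(len(clips), "clips across", len({label for _, label in clips}), "classes")
```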
Training Procedure
Training Hyperparameters
- Learning rate: 5e-05 with cosine annealing
- Training epochs: 3
- Batch size: 16 (2 per device × 8 gradient accumulation steps)
- Fine-tuning method: LoRA (Low-Rank Adaptation) with rank 8
- Training regime: BF16 mixed precision
- Optimization: AdamW optimizer
- Framework: LLaMA-Factory
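The run above was configured through LLaMA-Factory. For readers who want to approximate the same setup directly with Hugging Face `peft` and `transformers`, the sketch below mirrors the listed hyperparameters; the LoRA alpha, dropout, and target modules are assumptions that are not stated in this card.

```python
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA settings: rank 8 is from this card; alpha, dropout, and target
# modules below are illustrative assumptions, not the exact training config.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,                        # assumption
    lora_dropout=0.05,                    # assumption
    target_modules=["q_proj", "v_proj"],  # assumption
    task_type="CAUSAL_LM",
)

# Optimizer and schedule settings taken from the hyperparameters above.
training_args = TrainingArguments(
    output_dir="owlet-har-1-lora",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,   # effective batch size 16
    learning_rate=5e-5,
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    bf16=True,
    optim="adamw_torch",
)
```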
Compute Infrastructure
- Hardware: AWS g5.2xlarge instances with NVIDIA A10G GPUs
- Training duration: 3 epochs over the full HMDB51 dataset
- Memory optimization: LoRA fine-tuning with BF16 precision for memory efficiency
Evaluation
Testing Data
Evaluated on the HMDB51 test split covering the same 51 activity categories used in training.
Metrics
- Accuracy: Overall classification accuracy across all categories
- Precision: Per-category and macro-averaged precision
- Recall: Per-category and macro-averaged recall
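These metrics can be reproduced from per-clip predictions with scikit-learn. The sketch below assumes two parallel lists of ground-truth and predicted activity labels; the variable names and example labels are illustrative placeholders, not released artifacts.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

# y_true and y_pred are parallel lists of activity labels, one per evaluated clip.
y_true = ["drink", "punch", "cartwheel"]
y_pred = ["drink", "punch", "fall"]

accuracy = accuracy_score(y_true, y_pred)
# Macro averaging weights every activity class equally, matching the
# per-category / macro-averaged precision and recall reported above.
precision = precision_score(y_true, y_pred, average="macro", zero_division=0)
recall = recall_score(y_true, y_pred, average="macro", zero_division=0)
print(f"accuracy={accuracy:.4f} precision={precision:.4f} recall={recall:.4f}")
```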
Results Summary
The model demonstrates strong performance on structured activities (gymnastics, specific movements) but struggles with activities involving rapid motion or complex object interactions. The 68.19% accuracy represents a 29.86 percentage point improvement over the base Qwen2.5-3B-VL model.
Bias, Risks, and Limitations
Known Limitations
- Dataset bias: Trained on HMDB51, which may not represent all human activities or demographics
- Single-person focus: Optimized for single-person activities, may struggle with multi-person scenes
- Video quality dependency: Performance may degrade with poor lighting, low resolution, or occluded subjects
- Cultural bias: Training data may not represent activities from all cultures equally
- Temporal resolution: May miss very brief or subtle activities
Risk Considerations
- Privacy concerns: Video analysis capabilities could be misused for surveillance
- Misclassification impact: Incorrect classifications could lead to inappropriate automated responses
Technical Specifications
Model Architecture
- Base Architecture: Qwen2.5-3B Vision-Language Model
- Parameters: ~3 billion parameters
- Fine-tuning Method: LoRA (Low-Rank Adaptation)
- Input: Video sequences
- Output: Text classification (single word activity label)
- Context: Supports multi-frame video sequences as input
Model Objective
The model is trained to classify human activities in videos using a text generation objective, where the model generates a single word representing the detected activity.
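Because the classification signal comes from free-form text generation, downstream code typically normalizes the generated word before comparing it against the label set. A minimal post-processing sketch; the `normalize_prediction` helper and the truncated label list are hypothetical, not part of the released model.

```python
# Hypothetical post-processing: map the generated text onto a known
# activity label. The label set here is truncated for brevity.
KNOWN_ACTIVITIES = {"drink", "punch", "cartwheel", "climb_stairs", "run"}

def normalize_prediction(generated_text: str) -> str:
    """Lowercase, strip punctuation and whitespace, and fall back to
    'unknown' when the generated word is not in the label set."""
    word = generated_text.strip().lower().rstrip(".!").replace(" ", "_")
    return word if word in KNOWN_ACTIVITIES else "unknown"

# print(normalize_prediction(output_text[0]))
```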
Citation
If you use this model in your research, please cite:
BibTeX:
```bibtex
@misc{owlet-har-1,
  title={Owlet-HAR-1: Human Activity Recognition Vision Language Model},
  author={Phronetic AI},
  year={2025},
  url={https://huggingface.co/phronetic-ai/owlet-har-1}
}
```
APA: Phronetic AI. (2025). Owlet-HAR-1: Human Activity Recognition Vision Language Model. Hugging Face. https://huggingface.co/phronetic-ai/owlet-har-1
Model Card Authors
Phronetic AI Research Team
Model Card Contact
For questions about this model, please contact: divyansh.makkar@phronetic.ai