---
title: Video Accent Analyzer
emoji: 🎧
colorFrom: green
colorTo: blue
sdk: gradio
sdk_version: 5.33.1
app_file: app.py
pinned: false
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
# 🎧 Video Accent Analyzer
A Python-based tool that analyzes English accents in videos using machine learning. It supports multiple video sources and provides detailed accent analysis with interactive visualizations.
## 🚀 Features
### Video Support
- YouTube videos with browser cookie integration
- Loom recordings
- Direct MP4 video links
- Local video file uploads
### Accent Analysis
- Detects 6 distinct English accents:
  - 🇬🇧 British English
  - 🇺🇸 American English
  - 🇦🇺 Australian English
  - 🇨🇦 Canadian English
  - 🇮🇳 Indian English
  - 🌐 Neutral English
- Provides confidence scores for accent detection
- Assesses English proficiency level
- Analyzes audio quality metrics
### Technical Capabilities
- Automatic video download and processing
- Audio extraction and preprocessing
- Multi-chunk analysis for improved accuracy (see the sketch after this list)
- Real-time progress tracking
- Interactive Plotly visualizations
- Batch processing support (see the batch example under Python API)
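
The multi-chunk idea can be pictured with a short sketch. This is conceptual only and not the project's code: it assumes the audio has already been extracted to a file, uses librosa (listed in the requirements) to load it, and the `chunk_audio`/`aggregate_scores` helper names are hypothetical.

```python
# Conceptual sketch of multi-chunk scoring -- not the project's actual code.
# chunk_audio and aggregate_scores are hypothetical helper names.
import librosa
import numpy as np

def chunk_audio(audio_path, chunk_seconds=10.0, sr=16000):
    """Load an audio file with librosa and yield fixed-length chunks."""
    audio, sr = librosa.load(audio_path, sr=sr, mono=True)
    samples_per_chunk = int(chunk_seconds * sr)
    for start in range(0, len(audio), samples_per_chunk):
        chunk = audio[start:start + samples_per_chunk]
        if len(chunk) >= sr:  # skip fragments shorter than about one second
            yield chunk

def aggregate_scores(per_chunk_scores):
    """Average per-accent confidence scores across all analyzed chunks."""
    accents = per_chunk_scores[0].keys()
    return {
        accent: float(np.mean([scores[accent] for scores in per_chunk_scores]))
        for accent in accents
    }
```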
## 📦 Installation
```bash
pip install -r requirements.txt
```
## 💻 Usage

### Web Interface

1. Start the application: `python app.py`
2. Open the provided URL in your browser.
3. Enter a video URL or upload a video file.
4. Adjust the maximum duration (10-120 seconds).
5. Click "Analyze Video".
### Python API

```python
from video_accent_analyzer import VideoAccentAnalyzer

# Initialize analyzer
analyzer = VideoAccentAnalyzer()

# Analyze video
results = analyzer.analyze_video_url("your_video_url", max_duration=30)

# Display results
analyzer.display_results(results)
```
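
Batch processing (listed under Technical Capabilities) can be driven through the same API. A minimal sketch that uses only the two documented calls above; the URLs are placeholders:

```python
from video_accent_analyzer import VideoAccentAnalyzer

# Placeholder URLs -- replace with your own videos.
video_urls = [
    "https://example.com/interview_one.mp4",
    "https://example.com/interview_two.mp4",
]

# Reuse a single analyzer instance across videos.
analyzer = VideoAccentAnalyzer()

for url in video_urls:
    results = analyzer.analyze_video_url(url, max_duration=30)
    analyzer.display_results(results)
```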
## ⚙️ Technical Requirements

- Python 3.7+
- FFmpeg installed on the system (see the check below)
- Required Python packages:
  - gradio ≥ 4.0.0
  - plotly ≥ 5.0.0
  - torch
  - transformers
  - librosa
  - soundfile
  - yt-dlp
  - browser-cookie3 ≥ 0.19.1
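
FFmpeg is a system dependency rather than a pip package, so install it through your operating system's package manager. A small illustrative helper, not part of the project, to confirm the binary is on PATH before launching the app:

```python
# Illustrative helper (not part of the project) to confirm FFmpeg is on PATH.
import shutil
import subprocess

def check_ffmpeg():
    if shutil.which("ffmpeg") is None:
        print("FFmpeg not found -- install it via your system package manager.")
        return False
    # Print the first line of `ffmpeg -version` for reference.
    output = subprocess.run(
        ["ffmpeg", "-version"], capture_output=True, text=True
    ).stdout
    print(output.splitlines()[0])
    return True

if __name__ == "__main__":
    check_ffmpeg()
```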
## 📊 Results

Each analysis reports the detected accent with a confidence score, an English proficiency assessment, and audio quality metrics, presented with interactive Plotly visualizations.

## 🎯 Best Practices

### For Best Results

- Use videos under 2 minutes in length
- Ensure clear audio with minimal background noise
- Analyze a single speaker at a time
- Prefer continuous speech segments

### Known Limitations

- Multiple speakers may affect accuracy
- Heavy background noise can impact results
- Very short speech segments (under 10 seconds) may be less accurate
- Some region-restricted videos might not be accessible
## 🔧 Development

The project consists of three main components:

- `video_accent_analyzer.py`: Core analysis engine
- `app.py`: Gradio web interface
- Supporting utilities for video processing