# 🚗 Usage Examples - Car Damage Detection
## 📸 Example 1: Simple Analysis
```python
import torch
from transformers import AutoProcessor, MllamaForConditionalGeneration
from PIL import Image

# The checkpoint is assumed to be a Llama 3.2 Vision (Mllama) fine-tune, so it
# needs a processor (text + image preprocessing) rather than a plain tokenizer.
model_id = "Kakyoin03/car-damage-detection-llama-vision-14k"
model = MllamaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # optional: reduces memory use on recent GPUs
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("car_damage.jpg")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Describe the damage on this car."},
        ],
    }
]

# Build the chat prompt, then preprocess the text and the image together
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(images=image, text=prompt, add_special_tokens=False, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=300)
response = processor.decode(outputs[0], skip_special_tokens=True)
print(response)
```
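Note that `processor.decode(outputs[0], ...)` returns the prompt followed by the model's answer, since the model echoes its input tokens. If you only want the newly generated text, a small sketch reusing `inputs` and `outputs` from above:

```python
# Keep only the tokens generated after the prompt, then decode them
prompt_length = inputs["input_ids"].shape[-1]
response = processor.decode(outputs[0][prompt_length:], skip_special_tokens=True)
print(response)
```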
## 🔄 Example 2: Batch Processing
```python
import glob
from PIL import Image

def analyze_multiple_cars(image_paths):
    """Run the damage analysis from Example 1 on a list of image files."""
    results = []
    for path in image_paths:
        image = Image.open(path)
        # ... (same generation code as Example 1, producing `response`)
        results.append({"image": path, "analysis": response})
    return results

# Analyze every JPG in a folder
car_images = glob.glob("damage_photos/*.jpg")
analyses = analyze_multiple_cars(car_images)
```
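To keep the batch results around for later review, a minimal follow-up sketch (standard library only, reusing the `analyses` list built above) writes them to a JSON file:

```python
import json

# Persist the per-image analyses as a JSON file
with open("damage_analyses.json", "w", encoding="utf-8") as f:
    json.dump(analyses, f, ensure_ascii=False, indent=2)
```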
## 🎛️ Example 3: Advanced Parameters
```python
# Advanced configuration for more detailed analyses
generation_config = {
    "max_new_tokens": 500,
    "temperature": 0.1,      # low temperature keeps the output nearly deterministic
    "top_p": 0.9,
    "do_sample": True,
    "repetition_penalty": 1.1,
}

# `inputs` is the processor output from Example 1
outputs = model.generate(**inputs, **generation_config)
```
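The same settings can also be wrapped in a `transformers.GenerationConfig` object, which is convenient when the configuration is reused across several calls (a sketch assuming the `inputs` dict from Example 1):

```python
from transformers import GenerationConfig

# Reusable generation settings for detailed analyses
gen_config = GenerationConfig(
    max_new_tokens=500,
    temperature=0.1,
    top_p=0.9,
    do_sample=True,
    repetition_penalty=1.1,
)

outputs = model.generate(**inputs, generation_config=gen_config)
```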