# Continual-MEGA: A Large-scale Benchmark for Generalizable Continual Anomaly Detection
This repository provides model checkpoints for Continual-MEGA, a benchmark introduced in the paper *Continual-MEGA: A Large-scale Benchmark for Generalizable Continual Anomaly Detection*.

Codebase: Continual-Mega/Continual-Mega
## Overview
Continual-MEGA introduces a realistic and large-scale benchmark for continual anomaly detection that emphasizes generalizability across domains and tasks.
The benchmark features:
- Diverse anomaly types across domains
- A class-incremental continual learning setup
- A large-scale evaluation protocol surpassing previous benchmarks
This repository hosts pretrained model checkpoints used in various scenarios defined in the benchmark.
## Available Checkpoints
| Model Name | Scenario | Task | Description |
|---|---|---|---|
| `scenario2/prompt_maker` | Scenario 2 | Base | Prompt maker trained on Scenario 2 base classes |
| `scenario2/adapters_base` | Scenario 2 | Base | Adapter trained on Scenario 2 base classes |
| `scenario2/30classes/adapters_task1` | Scenario 2 | Task 1 (30 classes) | Adapter trained on Task 1 (30 classes) in Scenario 2 |
| `scenario2/30classes/adapters_task2` | Scenario 2 | Task 2 (30 classes) | Adapter trained on Task 2 (30 classes) in Scenario 2 |
| (More to come) | – | – | – |
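To fetch these checkpoints locally before running the evaluation scripts, the `huggingface_hub` client can be used. The sketch below is a minimal example, not the official workflow: the repository id is a placeholder (taken from the codebase link above), and the exact file layout under `scenario2/` is assumed to follow the paths in the table.

```python
# Minimal sketch: download the Scenario 2 checkpoints listed in the table above.
# Assumptions: the repo id below is a placeholder (replace it with this model
# repository's actual id), and checkpoint files live under the paths shown in the table.
from huggingface_hub import snapshot_download

REPO_ID = "Continual-Mega/Continual-Mega"  # placeholder; substitute the real model repo id

# Fetch only the Scenario 2 checkpoint files.
local_dir = snapshot_download(
    repo_id=REPO_ID,
    allow_patterns=["scenario2/*"],
)
print("Checkpoints downloaded to:", local_dir)
```

The downloaded directory can then be pointed to by the evaluation scripts below; see the codebase for the expected checkpoint locations.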
## Usage Example
### Continual Setting Evaluation

```bash
sh eval_continual.sh
```
### Zero-Shot Evaluation

```bash
sh eval_zero.sh
```