unh-academic-integrity-policy-560m
This model is a fine-tuned version of bigscience/bloomz-560m designed to generate academic-policy-aligned text. It uses Retrieval-Augmented Generation (RAG) to improve response relevance and Parameter-Efficient Fine-Tuning (PEFT) to reduce memory usage during training.
Intended Use
- Question answering over academic policy documents
- Text generation with contextual grounding
- Research in NLP and LLM alignment
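The contextual grounding above can be sketched as a minimal retrieval-augmented prompting loop. This is a toy illustration, not the retriever used to train this model: the passages, the word-overlap scoring, and the prompt template are all assumptions for demonstration.

```python
import re


def tokens(text: str) -> set[str]:
    """Lowercase word tokens, punctuation stripped (toy tokenizer)."""
    return set(re.findall(r"\w+", text.lower()))


def score(query: str, passage: str) -> int:
    """Toy relevance score: number of query words shared with the passage."""
    return len(tokens(query) & tokens(passage))


def build_prompt(query: str, passages: list[str]) -> str:
    """Prepend the best-matching policy passage so generation stays grounded."""
    best = max(passages, key=lambda p: score(query, p))
    return f"Context: {best}\n\nQuestion: {query}\nAnswer:"


# Hypothetical policy snippets standing in for a real document store.
passages = [
    "Plagiarism is the use of another person's work without attribution.",
    "Late submissions lose ten percent of the grade per day.",
]
prompt = build_prompt("What counts as plagiarism?", passages)
```

A production setup would replace the word-overlap score with dense embeddings and feed `prompt` to the model for generation.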
Training Details
- Learning rate: 2e-05
- Batch size: 2 (train), 8 (eval)
- Epochs: 2
- Optimizer: Adam
- Gradient accumulation: 4
- Precision: Mixed (AMP)
- Scheduler: Linear decay
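The hyperparameters above can be combined as follows. The batch size, accumulation steps, learning rate, and linear-decay schedule come from this card; the total step count is a hypothetical example, and the schedule assumes no warmup.

```python
# Values from the training details above.
train_batch_size = 2
grad_accum_steps = 4
effective_batch_size = train_batch_size * grad_accum_steps  # 8 examples per optimizer update

base_lr = 2e-5


def linear_decay_lr(step: int, total_steps: int, lr: float = base_lr) -> float:
    """Linear decay from the base learning rate to 0 (no warmup assumed)."""
    return lr * max(0.0, 1.0 - step / total_steps)


# Hypothetical run of 1000 optimizer updates: the LR halves at the midpoint.
total_steps = 1000
mid_lr = linear_decay_lr(500, total_steps)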
Deployment
This model is not currently deployed on a public inference provider. It can be deployed with Hugging Face Inference Endpoints, and also supports:
- Amazon SageMaker
- Azure ML
- Friendli Inference
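Once deployed, an Inference Endpoints text-generation call takes a JSON payload like the one sketched below. The endpoint URL and token are placeholders, and the generation parameters are example values, not settings documented for this model.

```python
import json

ENDPOINT_URL = "https://<your-endpoint>.endpoints.huggingface.cloud"  # placeholder

# Standard text-generation payload shape: a prompt plus generation parameters.
payload = {
    "inputs": "What counts as plagiarism under the academic integrity policy?",
    "parameters": {"max_new_tokens": 128, "temperature": 0.7},
}
body = json.dumps(payload).encode("utf-8")
headers = {
    "Authorization": "Bearer <HF_TOKEN>",  # placeholder token
    "Content-Type": "application/json",
}
# An actual request would POST `body` with `headers` to ENDPOINT_URL,
# e.g. via urllib.request or the huggingface_hub InferenceClient.
```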
Framework Versions
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.15.0
- Tokenizers: 0.15.0
Model tree for SaiSaketh/unh-academic-integrity-policy-560m
- Base model: bigscience/bloomz-560m