# Fine-tuned Phi-3 Model

This is a fine-tuned version of the `microsoft/phi-3-128k-instruct` model.
## Model Description
- Base model: `microsoft/phi-3-128k-instruct`
- Fine-tuning task: Conversational AI
- Training data: Custom dataset
- Hardware used: NVIDIA H100 NVL
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("RubanAgnesh/rezolve-emphathetic-128k-instruct-v1")
tokenizer = AutoTokenizer.from_pretrained("RubanAgnesh/rezolve-emphathetic-128k-instruct-v1")

# Prepare your input
text = "Your prompt here"
inputs = tokenizer(text, return_tensors="pt")

# Generate (max_new_tokens prevents the response from being cut off
# at generate()'s short default length limit)
outputs = model.generate(**inputs, max_new_tokens=256)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
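Since the fine-tuning task is conversational AI, inputs formatted with the tokenizer's chat template will likely give better results than a raw prompt string. A minimal sketch, assuming the fine-tune keeps the Phi-3 base model's chat template (the message content is purely illustrative):

```python
# Chat-style usage sketch; assumes the tokenizer retains the base model's
# chat template. Reuses `model` and `tokenizer` loaded above.
messages = [
    {"role": "user", "content": "I was charged twice for my order. Can you help?"},
]

# apply_chat_template tokenizes the conversation and appends the
# assistant turn marker so the model continues as the assistant
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)
```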
## Training Details
The model was fine-tuned with the following parameters (see the sketch after this list):
- Number of epochs: 3
- Batch size: 4
- Learning rate: 2e-5
- Weight decay: 0.01
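The training script itself is not published; as an illustration, these hyperparameters map onto a Hugging Face `Trainer` configuration like the sketch below, where `output_dir` and `train_dataset` are hypothetical placeholders (the custom dataset is not released):

```python
# Illustrative Trainer setup matching the listed hyperparameters; this is
# an assumption about the training loop, not the authors' actual script.
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="phi3-finetuned",      # hypothetical output path
    num_train_epochs=3,               # number of epochs listed above
    per_device_train_batch_size=4,    # batch size listed above
    learning_rate=2e-5,               # learning rate listed above
    weight_decay=0.01,                # weight decay listed above
)

trainer = Trainer(
    model=model,                      # the loaded causal LM
    args=training_args,
    train_dataset=train_dataset,      # placeholder for the unreleased custom dataset
)
trainer.train()
```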
## Limitations and Biases
This model inherits biases and limitations from both its base model and its fine-tuning data.