TPO
AI & ML interests: Alignment, Preference Optimization, RLHF
Team members: 4
tpo-alignment's models (11), sorted by recently updated:
tpo-alignment/Instruct-Llama-3-8B-TPO-L-y2 • Updated Feb 19 • 12
tpo-alignment/Instruct-Llama-3-8B-TPO-y2 • Updated Feb 19 • 7
tpo-alignment/Instruct-Llama-3-8B-TPO-y4 • Updated Feb 19 • 12
tpo-alignment/Instruct-Llama-3-8B-TPO-y3 • Updated Feb 19 • 13
tpo-alignment/Mistral-Instruct-7B-TPO-y2-v0.2 • Updated Feb 19 • 11
tpo-alignment/Mistral-Instruct-7B-TPO-y2-v0.1 • Updated Feb 19 • 28
tpo-alignment/Mistral-Instruct-7B-TPO-y4 • Updated Feb 19 • 15
tpo-alignment/Mistral-Instruct-7B-TPO-y3 • Updated Feb 19 • 7
tpo-alignment/Llama-3-8B-TPO-L-40k • Updated Feb 19 • 14
tpo-alignment/Mistral-7B-TPO-40k • Updated Feb 19 • 36
tpo-alignment/Llama-3-8B-TPO-40k • Updated Feb 19 • 11
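
Any of the model IDs above can be pulled straight from the Hub. Below is a minimal sketch, assuming these checkpoints ship in the standard transformers causal-LM format of their Llama 3 8B and Mistral 7B Instruct bases and expose a chat template; the prompt text is only an illustration.

```python
# Minimal usage sketch (assumption: standard transformers causal-LM
# checkpoint with a chat template; requires transformers + accelerate).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tpo-alignment/Instruct-Llama-3-8B-TPO-L-y2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # place weights on available GPU(s)/CPU
)

# Hypothetical example prompt, formatted with the model's chat template.
messages = [{"role": "user", "content": "Explain preference optimization in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The same snippet should work for the Mistral-based checkpoints by swapping `model_id`.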