Poro 2 70B Instruct Model Card

Poro 2 70B Instruct is an instruction-following chatbot model created through supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) of the Poro 2 70B Base model. This model is designed for conversational AI applications and instruction following in both Finnish and English. It was trained on a carefully curated mix of English and Finnish instruction data, followed by preference tuning to improve response quality.

Poro 2 was created in a collaboration between AMD Silo AI, the TurkuNLP group of the University of Turku, and High Performance Language Technologies (HPLT). Training was conducted on the LUMI supercomputer, using compute resources generously provided by CSC - IT Center for Science, Finland.

This model demonstrates how continued pretraining followed by instruction tuning can efficiently add new language capabilities to existing models while maintaining strong conversational abilities in both the original and target languages.

For more details on our training and data generation pipeline, check out our Continued Pretraining Playbook.

Poro 2 Model Family

The Poro 2 model family includes both 8B and 70B models, each released in three versions: a base model, a post-training SFT-only checkpoint, and the final instruct model, which is the SFT model after an additional round of DPO.

| Model | Based on | Base Model | SFT | Instruct |
|-------|----------|------------|-----|----------|
| Poro 2 8B | Llama 3.1 8B | Poro 2 8B Base | Poro 2 8B SFT | Poro 2 8B Instruct |
| Poro 2 70B | Llama 3.1 70B | Poro 2 70B Base | Poro 2 70B SFT | Poro 2 70B Instruct |

What does Poro mean? Poro is the Finnish word for reindeer! 🦌 These animals are native to Finland and hold a significant role in Finnish culture and history.

Model Overview

Poro 2 70B Instruct is based on the Llama 3.1 70B architecture and has been fine-tuned for instruction following and conversational AI applications. The model supports both English and Finnish conversations.

| Hyperparameter | Value |
|----------------|-------|
| n_parameters | 70.55B |
| n_layers | 80 |
| n_heads | 64 |
| n_kv_heads | 8 |
| d_model | 8192 |
| vocab_size | 128256 |
| max_sequence_length | 8192 |
| base_model | Llama-3.1-70B |
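These dimensions are the standard Llama 3.1 70B configuration fields. A quick way to verify them from the released checkpoint (a minimal sketch using transformers' AutoConfig; it downloads only the config file, not the weights):

from transformers import AutoConfig

# Fetches only config.json, not the 70B weights.
config = AutoConfig.from_pretrained("LumiOpen/Llama-Poro-2-70B-Instruct")

print(config.num_hidden_layers)    # n_layers: 80
print(config.num_attention_heads)  # n_heads: 64
print(config.num_key_value_heads)  # n_kv_heads: 8
print(config.hidden_size)          # d_model: 8192
print(config.vocab_size)           # vocab_size: 128256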

Training Process

Continued Pretraining

The base Poro 2 70B model was created through continued pretraining on 165B tokens of Finnish, English, code, and math data.

Supervised Fine-Tuning (SFT)

The SFT phase used 1.4M instruction-following examples in English and Finnish, including:

  • English and Finnish Tulu 3 prompts with Llama-3.3-70B-Instruct responses
  • Multi-turn conversations generated using the Magpie method
  • Top-rated conversations from OASST2 and Avoin Avustaja datasets
  • Translation samples from EuroParl

We also release the Poro 2 instruction collection.
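For illustration, a single training example in the chat-message format consumed by the Llama 3.1 chat template might look like the following (a hypothetical record, not taken from the released collection, whose exact schema may differ):

# A hypothetical single-turn SFT example in chat format.
example = {
    "messages": [
        # "What is the capital of Finland?" / "The capital of Finland is Helsinki."
        {"role": "user", "content": "Mikä on Suomen pääkaupunki?"},
        {"role": "assistant", "content": "Suomen pääkaupunki on Helsinki."},
    ]
}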

Direct Preference Optimization (DPO)

The final model underwent preference tuning using the HelpSteer3 dataset to improve response quality and alignment.
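Concretely, DPO trains the policy directly on preference pairs, with no separate reward model, by maximizing the margin between the log-probability ratios of chosen and rejected responses (Rafailov et al., 2023):

$$\mathcal{L}_\text{DPO} = -\,\mathbb{E}_{(x,\, y_w,\, y_l)}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_\text{ref}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_\text{ref}(y_l \mid x)}\right)\right]$$

where $\pi_\theta$ is the model being tuned, $\pi_\text{ref}$ is the frozen SFT checkpoint, $(y_w, y_l)$ are the preferred and rejected responses, and $\beta$ (0.01 here; see the table below) controls how far the policy may drift from the reference.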

Post-Training Hyperparameters

SFT

| Hyperparameter | Value |
|----------------|-------|
| Epochs | 2 |
| Global batch size | 128 |
| Learning rate | 5e-6 |
| LR scheduler | linear |
| Warmup ratio | 0.03 |
| Max sequence length | 4,096 |
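As a rough illustration of how these settings map onto a training run, here is a minimal sketch using TRL's SFTTrainer. This is not the actual training code; the dataset path and base-model repository id are placeholders, and TRL field names vary somewhat between releases.

from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset in chat-message format (see the example record above).
dataset = load_dataset("json", data_files="sft_data.jsonl", split="train")

args = SFTConfig(
    num_train_epochs=2,
    learning_rate=5e-6,
    lr_scheduler_type="linear",
    warmup_ratio=0.03,
    max_seq_length=4096,            # renamed to max_length in newer TRL releases
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,  # scale with GPU count to reach a global batch of 128
    bf16=True,
    output_dir="poro2-70b-sft",
)

trainer = SFTTrainer(
    model="LumiOpen/Llama-Poro-2-70B-Base",  # placeholder id for the base checkpoint
    args=args,
    train_dataset=dataset,
)
trainer.train()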

DPO

| Hyperparameter | Value |
|----------------|-------|
| Epochs | 3 |
| Global batch size | 64 |
| Beta | 0.01 |
| Learning rate | 5e-7 |
| LR scheduler | cosine |
| Warmup ratio | 0.1 |
| Max length | 4,096 |
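The same settings expressed as a TRL DPOTrainer sketch (again illustrative, not the actual training code; the preference-data path and SFT-checkpoint id are placeholders):

from datasets import load_dataset
from trl import DPOConfig, DPOTrainer

# Placeholder preference data with "prompt", "chosen", and "rejected" fields.
prefs = load_dataset("json", data_files="preference_pairs.jsonl", split="train")

args = DPOConfig(
    num_train_epochs=3,
    beta=0.01,
    learning_rate=5e-7,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    max_length=4096,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,  # scale with GPU count to reach a global batch of 64
    bf16=True,
    output_dir="poro2-70b-dpo",
)

trainer = DPOTrainer(
    model="LumiOpen/Llama-Poro-2-70B-SFT",  # placeholder id for the SFT checkpoint
    args=args,
    train_dataset=prefs,
)
trainer.train()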

Evaluation Results

Poro 2 70B Instruct shows substantial improvements in Finnish instruction-following capabilities compared to Llama 3.1 70B Instruct and Llama 3.3 70B Instruct, while maintaining excellent English performance.

Finnish Instruction Following

| Benchmark | Poro 2 70B Instruct | Llama 3.1 70B Instruct | Llama 3.3 70B Instruct |
|-----------|---------------------|------------------------|------------------------|
| IFEval Finnish | 70.79 | 63.95 | 71.71 |
| MTBench Finnish | 7.77 | 7.06 | 7.40 |
| AlpacaEval 2 Finnish | 41.96 | 21.06 | 25.73 |

English Instruction Following

| Benchmark | Poro 2 70B Instruct | Llama 3.1 70B Instruct | Llama 3.3 70B Instruct |
|-----------|---------------------|------------------------|------------------------|
| IFEval | 85.95 | 86.69 | 90.38 |
| MTBench | 8.41 | 8.33 | 8.35 |
| AlpacaEval 2 | 49.77 | 43.87 | 45.12 |

Pairwise Comparisons (MTBench)

  • Finnish: 66% win rate vs Llama 3.3 70B Instruct
  • English: 57% win rate vs Llama 3.3 70B Instruct

Overall: Poro 2 70B Instruct substantially outperforms Llama 3.3 70B Instruct in Finnish by over 6% and Llama 3.1 70B Instruct by over 11%, while maintaining excellent English performance on par with or exceeding Llama 3.3 70B Instruct.

Usage

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "LumiOpen/Llama-Poro-2-70B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# Finnish conversation example: "Tell me about Finland's history."
messages = [
    {"role": "user", "content": "Kerro minulle Suomen historiasta."}
]

# Build the prompt with the chat template and move it to the model's device.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=500,
    temperature=0.7,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id
)

# Decode only the newly generated tokens, not the prompt.
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)
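For multi-turn use, append the model's reply and the next user turn to messages and re-apply the chat template. A brief continuation of the example above (the follow-up question is illustrative):

# Continue the conversation with a follow-up question:
# "Tell me more about the Swedish era."
messages.append({"role": "assistant", "content": response})
messages.append({"role": "user", "content": "Kerro lisää Ruotsin vallan ajasta."})

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)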

Intended Use

Poro 2 70B Instruct is designed for:

  • High-performance conversational AI applications in Finnish and English
  • Question answering and information retrieval
  • Content generation and creative writing
  • Educational applications
  • Customer service and support applications
  • Translation between Finnish and English
  • Research and enterprise applications requiring strong multilingual capabilities

Ethical Considerations and Limitations

Poro 2 70B is an advanced language model optimized for English and Finnish, with additional capabilities in code and mathematics. As with most AI-driven systems, Poro 2 is a product of the vast data it has been trained on, which may reflect the imperfections, biases, and idiosyncrasies of the wider web. The model may, at times, produce outputs that can be considered inaccurate, prejudiced, or controversial.

Key limitations:

  • Limited proficiency in languages other than English and Finnish
  • Potential for generating biased or inappropriate content
  • May produce factually incorrect information

License

Built with Llama

Poro 2 70B Instruct is released under the Llama 3.3 Community License. Please review the license terms before use.

Citation

@misc{poro2_2025,
    title={Poro 2: Continued Pretraining for Language Acquisition},
    author={Elaine Zosa and Jouni Luoma and Kai Hakala and Antti Virtanen and Mika Koistinen and Risto Luukkonen and Akseli Reunamo and Sampo Pyysalo and Jonathan Burdge},
    year={2025},
    howpublished={LumiOpen}
}

Acknowledgments

We thank CSC - IT Center for Science, Finland for providing access to the LUMI supercomputer. This work was supported by the High Performance Language Technologies (HPLT) project and conducted in collaboration with TurkuNLP from the University of Turku. This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350.
