EXAONE-1.2B Nutrition KDRI Model

์ด ๋ชจ๋ธ์€ LG AI์—ฐ๊ตฌ์›์˜ EXAONE-1.2B ๋ชจ๋ธ์„ ์˜์–‘ํ•™ ๋ฐ KDRI(ํ•œ๊ตญ์ธ ์˜์–‘์†Œ ์„ญ์ทจ๊ธฐ์ค€) ๋ฐ์ดํ„ฐ๋กœ ํŒŒ์ธํŠœ๋‹ํ•œ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค.

๋ชจ๋ธ ์ •๋ณด

  • Base Model: LGAI-EXAONE/EXAONE-4.0-1.2B
  • Task: Nutrition Q&A, KDRI information lookup
  • Language: Korean
  • Fine-tuning Data: Nutrition PDF documents and KDRI data

Usage

from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("amis5895/exaone-1p2b-nutrition-kdri")
model = AutoModelForCausalLM.from_pretrained("amis5895/exaone-1p2b-nutrition-kdri")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Example question: "What is the recommended daily calcium intake for Koreans?"
question = "ํ•œ๊ตญ์ธ์˜ 1์ผ ์นผ์Š˜ ๊ถŒ์žฅ๋Ÿ‰์€ ์–ผ๋งˆ์ธ๊ฐ€์š”?"
response = pipe(question, max_length=200, do_sample=True)
print(response[0]['generated_text'])
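
If the tokenizer ships a chat template (as recent EXAONE releases do), generation can also be driven through model.generate directly. The snippet below is a minimal sketch under that assumption, continuing from the objects loaded above; the sampling parameters are illustrative, not tuned settings.

import torch

# Minimal sketch assuming the tokenizer provides a chat template;
# sampling parameters are illustrative, not recommended values.
messages = [{"role": "user", "content": "ํ•œ๊ตญ์ธ์˜ 1์ผ ์นผ์Š˜ ๊ถŒ์žฅ๋Ÿ‰์€ ์–ผ๋งˆ์ธ๊ฐ€์š”?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))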

Key Features

  • ํ•œ๊ตญ์ธ ์˜์–‘์†Œ ์„ญ์ทจ๊ธฐ์ค€(KDRI) ์ •๋ณด ์ œ๊ณต
  • ์˜์–‘ํ•™ ๊ด€๋ จ ์งˆ๋ฌธ ๋‹ต๋ณ€
  • ์˜ํ•™์  ์˜์–‘ ์ƒ๋‹ด ์ง€์›
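
As a usage illustration, KDRI lookups for several nutrients can be issued in a loop through the pipeline from the Usage section. The nutrient list and question phrasing below are hypothetical examples, not part of the model card.

# Hypothetical KDRI lookup loop reusing `pipe` from the Usage section;
# nutrient names and question phrasing are illustrative only.
nutrients = ["์นผ์Š˜", "์ฒ ๋ถ„", "๋น„ํƒ€๋ฏผ D"]  # calcium, iron, vitamin D
for nutrient in nutrients:
    question = f"ํ•œ๊ตญ์ธ์˜ 1์ผ {nutrient} ๊ถŒ์žฅ๋Ÿ‰์€ ์–ผ๋งˆ์ธ๊ฐ€์š”?"
    answer = pipe(question, max_length=200, do_sample=True)
    print(nutrient, answer[0]["generated_text"])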

์ฃผ์˜์‚ฌํ•ญ

์ด ๋ชจ๋ธ์€ ๊ต์œก ๋ฐ ์—ฐ๊ตฌ ๋ชฉ์ ์œผ๋กœ๋งŒ ์‚ฌ์šฉ๋˜์–ด์•ผ ํ•˜๋ฉฐ, ์‹ค์ œ ์˜ํ•™์  ์ง„๋‹จ์ด๋‚˜ ์น˜๋ฃŒ์— ์‚ฌ์šฉํ•ด์„œ๋Š” ์•ˆ ๋ฉ๋‹ˆ๋‹ค.
