Lamapi committed e0d6cd2 (verified; parent: 19a4084)

Upload README.md with huggingface_hub

Files changed (1): README.md (+160 −0)

---
language:
- tr
- ar
- af
- az
- es
- en
- el
- ro
- ru
- rm
- th
- uk
- uz
- pl
- pt
- fa
- sk
- sl
- da
- de
- nl
- fr
- fi
- ka
- hi
- hu
- hy
- ja
- kk
- kn
- ko
- ku
- ky
- la
- lb
- id
- is
- it
- zh
- cs
- vi
- be
- bg
- bs
- ne
- mn
license: mit
tags:
- turkish
- türkiye
- english
- ai
- lamapi
- gemma3
- next
- next-x1
- efficient
- text-generation
- open-source
- 1b
- huggingface
- large-language-model
- llm
- causal
- transformer
- artificial-intelligence
- machine-learning
- ai-research
- natural-language-processing
- nlp
- finetuned
- lightweight
- creative
- summarization
- question-answering
- chat-model
- generative-ai
- optimized-model
- unsloth
- trl
- sft
- chemistry
- biology
- finance
- legal
- music
- art
- code
- climate
- medical
- agent
- text-generation-inference
- llama-cpp
- gguf-my-repo
pipeline_tag: text-generation
datasets:
- mlabonne/FineTome-100k
- ITCL/FineTomeOs
- Gryphe/ChatGPT-4o-Writing-Prompts
- dongguanting/ARPO-SFT-54K
- GreenerPastures/All-Your-Base-Full
- Gryphe/Opus-WritingPrompts
- HuggingFaceH4/MATH-500
- mlabonne/smoltalk-flat
- mlabonne/natural_reasoning-formatted
- OpenSPG/KAG-Thinker-training-dataset
- uclanlp/Brief-Pro
- CognitiveKernel/CognitiveKernel-Pro-SFT
- SuperbEmphasis/Claude-4.0-DeepSeek-R1-RP-SFWish
- QuixiAI/dolphin-r1
- mlabonne/lmsys-arena-human-sft-55k
library_name: transformers
base_model: Lamapi/next-1b
---

# Lamapi/next-1b-Q6_K-GGUF

This model was converted to GGUF format from [`Lamapi/next-1b`](https://huggingface.co/Lamapi/next-1b) using llama.cpp, via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Lamapi/next-1b) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Lamapi/next-1b-Q6_K-GGUF --hf-file next-1b-q6_k.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Lamapi/next-1b-Q6_K-GGUF --hf-file next-1b-q6_k.gguf -c 2048
```
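
Once `llama-server` is running, it exposes an OpenAI-compatible HTTP API (by default on port 8080). The sketch below queries it with only the Python standard library; the host/port constant and the `build_chat_request` helper are illustrative assumptions, not part of this repo or of llama.cpp itself.

```python
import json
import urllib.request
from urllib.error import URLError

# llama-server's default bind address; adjust if you started it differently.
SERVER = "http://127.0.0.1:8080"

def build_chat_request(prompt: str, max_tokens: int = 64) -> dict:
    """Build an OpenAI-style chat-completion payload for llama-server."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("The meaning to life and the universe is")
req = urllib.request.Request(
    f"{SERVER}/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
try:
    with urllib.request.urlopen(req, timeout=10) as resp:
        body = json.load(resp)
        print(body["choices"][0]["message"]["content"])
except URLError:
    # Server not reachable; start llama-server first.
    print("llama-server is not running on", SERVER)
```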

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Lamapi/next-1b-Q6_K-GGUF --hf-file next-1b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Lamapi/next-1b-Q6_K-GGUF --hf-file next-1b-q6_k.gguf -c 2048
```
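
Beyond the llama.cpp binaries, the same GGUF file can be loaded from Python via the llama-cpp-python bindings. This is a hedged sketch, not part of this repo's documented workflow: it assumes `pip install llama-cpp-python` and network access, and `Llama.from_pretrained` downloads and caches the quantized file from the Hub on first use.

```python
# Optional: load the Q6_K GGUF directly from the Hub with llama-cpp-python.
try:
    from llama_cpp import Llama
except ImportError:
    Llama = None  # bindings not installed; skip the demo

REPO_ID = "Lamapi/next-1b-Q6_K-GGUF"
FILENAME = "next-1b-q6_k.gguf"

if Llama is not None:
    # Same 2048-token context as the llama-server example above.
    llm = Llama.from_pretrained(repo_id=REPO_ID, filename=FILENAME, n_ctx=2048)
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "The meaning to life and the universe is"}],
        max_tokens=64,
    )
    print(out["choices"][0]["message"]["content"])
```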