Introducing Dhanishtha 2.0: World's first Intermediate Thinking Model
Dhanishtha 2.0 is the world's first LLM designed to think between responses. Unlike other reasoning LLMs, which think only once, Dhanishtha can think, rethink, self-evaluate, and refine mid-response using multiple <think> blocks.
This technique makes it highly token-efficient: it uses up to 79% fewer tokens than DeepSeek R1.
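To illustrate what intermediate thinking looks like, here is a minimal sketch of post-processing such a response. The sample text and the exact <think>…</think> format are assumptions based on this post, not the SDK's documented output:

```python
import re

# Hypothetical intermediate-thinking response: the model interleaves
# multiple <think> blocks with user-visible text (format assumed).
raw = (
    "<think>Plan: try integration by parts.</think>"
    "First, rewrite the integrand. "
    "<think>Re-check: that substitution fails; a series expansion is cleaner.</think>"
    "A series expansion works better here."
)

# Strip every <think>...</think> block to recover only the visible answer.
visible = re.sub(r"<think>.*?</think>", "", raw, flags=re.DOTALL).strip()
print(visible)
```

This is roughly what the SDK's `hide_think` option (shown in the dev example below) would do for you on the client side.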
---
You can try our model from: https://helpingai.co/chat
Also, we're going to open-source Dhanishtha on July 1st.
---
For Devs:
Get your API key at https://helpingai.co/dashboard
from HelpingAI import HAI  # pip install HelpingAI==1.1.1
from rich import print

hai = HAI(api_key="hl-***********************")

response = hai.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "What is the value of ∫₀^∞ x³/(x−1) dx ?"}],
    stream=True,
    hide_think=False,  # set to True to hide the model's <think> blocks
)

for chunk in response:
    print(chunk.choices[0].delta.content, end="", flush=True)