Jan-Nano-128k: Empowering deeper research through extended context understanding.


Jan-Nano-128k

Authors: Alan Dao, Bach Vu Dinh, Thinh Le

Overview

Jan-Nano-128k represents a significant advancement in compact language models for research applications. Building upon the success of Jan-Nano, this enhanced version features a native 128k context window that enables deeper, more comprehensive research capabilities without the performance degradation typically associated with context extension methods.

Key Improvements:

  • πŸ” Research Deeper: Extended context allows for processing entire research papers, lengthy documents, and complex multi-turn conversations
  • ⚑ Native 128k Window: Built from the ground up to handle long contexts efficiently, maintaining performance across the full context range
  • πŸ“ˆ Enhanced Performance: Unlike traditional context extension methods, Jan-Nano-128k shows improved performance with longer contexts

This model maintains full compatibility with Model Context Protocol (MCP) servers while dramatically expanding the scope of research tasks it can handle in a single session.

Evaluation

Jan-Nano-128k has been rigorously evaluated on the SimpleQA benchmark using our MCP-based methodology, demonstrating superior performance compared to its predecessor:

(Figure: SimpleQA benchmark scores, Jan-Nano-128k vs. Jan-Nano.)

Why Jan-Nano-128k?

Traditional approaches to extending context length, such as YaRN (Yet another RoPE extensioN), often suffer performance degradation as the context grows. Jan-Nano-128k breaks this paradigm: rather than degrading, its performance improves as more context is provided.

This fundamental difference makes Jan-Nano-128k ideal for research applications requiring deep document analysis, multi-document synthesis, and complex reasoning over large information sets.
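For background, here is a simplified sketch of what RoPE-based extension methods do; it is context for the comparison above, not a description of Jan-Nano-128k's own training recipe. Position interpolation stretches a base window of L_orig tokens to L_target tokens by rescaling positions before the rotary embedding is applied, and YaRN refines this with per-frequency scaling and an attention temperature term. For this model's configuration the scaling factor is

$$ s = \frac{L_{\text{target}}}{L_{\text{orig}}} = \frac{131072}{40960} = 3.2, $$

which is the same factor 3.2 passed to the serving commands in the Deployment section below.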

πŸ–₯️ How to Run Locally

(Demo: Jan-Nano running locally in the Jan app.)

Jan-Nano-128k is fully supported by the Jan beta build, providing a seamless local AI experience with complete privacy and control.

For additional tutorials and community guidance, visit our Discussion Forums.

Deployment

Deploy using vLLM:

vllm serve Menlo/Jan-nano-128k \
    --host 0.0.0.0 \
    --port 1234 \
    --enable-auto-tool-choice \
    --tool-call-parser hermes \
    --rope-scaling '{"rope_type":"yarn","factor":3.2,"original_max_position_embeddings":40960}' \
    --max-model-len 131072
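
vLLM exposes an OpenAI-compatible HTTP API, so once the command above is running you can confirm that the model is loaded before wiring it into a client. A minimal sanity check, assuming the server is reachable on localhost and the requests package is installed:

import requests

# List the models served by the local vLLM instance started above.
resp = requests.get("http://localhost:1234/v1/models", timeout=10)
resp.raise_for_status()
print(resp.json())  # the model list should include "Menlo/Jan-nano-128k"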

Or serve with llama-server from llama.cpp:

llama-server ... --rope-scaling yarn --rope-scale 3.2 --yarn-orig-ctx 40960

Note: The chat template is included in the tokenizer. For troubleshooting, download the Non-think chat template.

Recommended Sampling Parameters

  • Temperature: 0.7
  • Top-p: 0.8
  • Top-k: 20
  • Min-p: 0.0
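
These parameters can be set per request when querying either server above through its OpenAI-compatible endpoint. Below is a minimal sketch using the openai Python client against the vLLM server on port 1234; the prompt is illustrative, and the extra_body passthrough for top_k/min_p is a vLLM-specific extension that other servers may ignore:

from openai import OpenAI

# Point the client at the local server; no real API key is required.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="Menlo/Jan-nano-128k",
    messages=[{"role": "user", "content": "Summarize the main findings of this paper: <paste text here>"}],
    temperature=0.7,
    top_p=0.8,
    max_tokens=1024,
    # top_k and min_p are not part of the OpenAI schema; vLLM reads them from extra_body.
    extra_body={"top_k": 20, "min_p": 0.0},
)
print(response.choices[0].message.content)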

🀝 Community & Support

πŸ“„ Citation

@model{jan-nano-128k,
  title={Jan-Nano-128k: Deep Research with Extended Context},
  author={Dao, Alan and Dinh, Bach Vu and Le, Thinh},
  year={2024},
  url={https://huggingface.co/Menlo/Jan-nano-128k}
}

Jan-Nano-128k: Empowering deeper research through extended context understanding.
