---
title: Json Structured
emoji: πŸƒ
colorFrom: red
colorTo: gray
sdk: gradio
sdk_version: 5.33.0
app_file: app.py
pinned: false
short_description: Plain text to JSON using llama.cpp
---

# Plain Text to JSON with llama.cpp

This Hugging Face Space converts plain text into structured JSON using llama.cpp for efficient CPU inference, powered by the Osmosis Structure 0.6B model.

## Features

- **llama.cpp Integration**: Uses `llama-cpp-python` for efficient CPU model inference
- **Osmosis Structure Model**: Specialized 0.6B parameter model for structured data extraction
- **Gradio Interface**: User-friendly web interface
- **JSON Conversion**: Converts unstructured text to well-formatted JSON
- **Auto-Download**: Automatically downloads the Osmosis model on first use
- **Demo Mode**: Basic functionality without requiring the AI model
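Demo mode's exact behavior isn't specified in this README. As a hedged illustration only, a model-free fallback could turn `key: value` lines into a JSON object (the function name and parsing rule below are assumptions, not the Space's actual code):

```python
import json

def demo_text_to_json(text: str) -> str:
    """Naive model-free fallback: turn 'key: value' lines into a JSON object.

    Hypothetical sketch only -- the real Space may parse input differently.
    """
    result = {}
    for line in text.splitlines():
        if ":" in line:
            # Split on the first colon only, so values may contain colons.
            key, _, value = line.partition(":")
            result[key.strip()] = value.strip()
    return json.dumps(result, indent=2)

print(demo_text_to_json("name: Ada\nrole: engineer"))
```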

## Setup

The Space automatically installs:
- `llama-cpp-python` for llama.cpp integration
- Required build tools (`build-essential`, `cmake`)
- Gradio and other dependencies
- The Osmosis Structure 0.6B model (~1.2GB), downloaded on first use
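The dependency files themselves aren't shown in this README; a minimal `requirements.txt` for such a Space might look like the following (the Gradio pin matches the `sdk_version` above; the rest are assumptions):

```text
llama-cpp-python
gradio==5.33.0
huggingface_hub
```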

## Usage

1. **Quick Start**: Run `python setup_and_run.py` for automated setup
2. **Demo Mode**: Use "Demo (No Model)" for basic text-to-JSON conversion
3. **Full Mode**: Click "Load Model" to download and use the Osmosis model
4. **Customize**: Adjust temperature and max_tokens for different output styles
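Small models often wrap their JSON in extra prose, so a common post-processing step (a sketch under that assumption, not the Space's actual code) is to pull the first balanced JSON object out of the raw completion:

```python
import json

def extract_first_json(raw: str):
    """Return the first parseable JSON object found in raw model output.

    Sketch only: brace counting ignores braces inside string values,
    which is usually fine for short structured outputs.
    """
    start = raw.find("{")
    while start != -1:
        depth = 0
        for i, ch in enumerate(raw[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    try:
                        return json.loads(raw[start : i + 1])
                    except json.JSONDecodeError:
                        break  # not valid JSON; try the next opening brace
        start = raw.find("{", start + 1)
    return None

print(extract_first_json('Sure! Here is the JSON: {"a": 1, "b": [2, 3]}'))
```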

## Model Details

- **Model**: Osmosis Structure 0.6B BF16 GGUF
- **Repository**: https://huggingface.co/osmosis-ai/Osmosis-Structure-0.6B
- **Specialization**: Structure extraction and JSON generation
- **Size**: ~1.2GB download
- **Format**: GGUF (optimized for llama.cpp)
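A minimal sketch of fetching the GGUF weights with `huggingface_hub` (the filename below is an assumption; check the repository's file listing for the exact name):

```python
MODEL_REPO = "osmosis-ai/Osmosis-Structure-0.6B"
MODEL_FILE = "Osmosis-Structure-0.6B-BF16.gguf"  # assumed filename -- verify in the repo

def download_model(repo_id: str = MODEL_REPO, filename: str = MODEL_FILE) -> str:
    """Download the GGUF weights (~1.2GB) and return the local file path."""
    # Deferred import so demo mode still works without huggingface_hub installed.
    from huggingface_hub import hf_hub_download
    return hf_hub_download(repo_id=repo_id, filename=filename)
```

The returned path can then be passed to `llama_cpp.Llama(model_path=...)` to load the model.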

## Configuration

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference