---
pipeline_tag: zero-shot-object-detection
library_name: transformers
language:
- en
license: apache-2.0
datasets:
- mlfoundations/Click-100k
base_model: Qwen/Qwen3-VL-30B-A3B-Instruct
---

# 🍨 Gelato — From Data Curation to Reinforcement Learning: Building a Strong Grounding Model for Computer-Use Agents

[🍨 **Blog Post / Codebase**](https://github.com/mlfoundations/gelato) | [🖱️ **Click-100k (dataset)**](https://huggingface.co/datasets/mlfoundations/Click-100k)

*Figure 1: Gelato achieves SOTA performance on grounding benchmarks.*

We are releasing **🍨 Gelato-30B-A3B**, a state-of-the-art grounding model for GUI computer-use tasks! Gelato is trained on our open-sourced [**Click-100k**](https://huggingface.co/datasets/mlfoundations/Click-100k) dataset and achieves **63.88% accuracy on ScreenSpot-Pro** [[3](#ref-screenspot-pro)] and **69.15% / 74.65% on OS-World-G / OS-World-G (Refined)** [[4](#ref-jedi)], surpassing prior specialized computer grounding models such as GTA1-32B [[5](#ref-gta1)] and much larger VLMs, including Qwen3-VL-235B-A22B-Instruct [[10](#ref-qwen3vl)]. For details on data curation and training, refer to our [blog post](https://huggingface.co/mlfoundations-cua-dev/Gelato-30B-A3B).

# Performance

Gelato-30B-A3B outperforms GTA1-32B, the previous state-of-the-art specialized computer grounding model, as well as larger VLMs on the ScreenSpot-Pro and OS-World-G grounding benchmarks.

| **Model** | **Activated Size** | **ScreenSpot-Pro** | **OS-World-G** | **OS-World-G (Refined)** |
|------------|:--------------:|:------------------:|:----------------:|:----------------:|
| Qwen3-VL-30B-A3B-Instruct | 3 B | 60.5% | 61.0% | - |
| Qwen3-VL-235B-A22B-Instruct | 22 B | 62.0% | 66.7% | - |
| OpenCUA-72B | 72 B | 60.8% | 59.6% | - |
| GTA1-32B | 32 B | 63.6% | 65.2% | 72.2% |
| Gelato-30B-A3B | 3 B | **63.88%** | **69.15%** | **74.65%** |

# Inference

Below is a code snippet demonstrating how to run inference with Gelato-30B-A3B. Given an image and an instruction, the model outputs normalized click coordinates in the range [0, 1000].

![Fig2: Sample GUI image with instruction and grounding by Gelato-30B-A3B](model_card_fig2.png)

```python
from io import BytesIO
import re

import requests
from PIL import Image, ImageDraw
from transformers import AutoProcessor, Qwen3VLMoeForConditionalGeneration


def extract_coordinates(raw_string):
    """
    Extract the first (x, y) coordinate pair from the model output.

    Args:
        raw_string: str (e.g. "(100, 200)")

    Returns:
        (x, y): tuple of ints, or (0, 0) if no coordinates are found.
    """
    try:
        matches = re.findall(r"\((-?\d*\.?\d+),\s*(-?\d*\.?\d+)\)", raw_string)
        return tuple(int(float(v)) for v in matches[0])
    except (IndexError, ValueError):
        return 0, 0


def visualize_prediction(img, pred_x, pred_y, img_width, img_height):
    """
    Visualize the predicted coordinates on the image (high visibility).
    """
    # Convert normalized [0, 1000] coordinates to absolute pixel coordinates.
    pred_x = int((pred_x * img_width) / 1000)
    pred_y = int((pred_y * img_height) / 1000)

    draw = ImageDraw.Draw(img, "RGBA")

    # Semi-transparent circle around the predicted point.
    r = 30
    draw.ellipse(
        (pred_x - r, pred_y - r, pred_x + r, pred_y + r),
        outline="lime",
        fill=(0, 255, 0, 90),
        width=5,
    )

    # Crosshair at the predicted point.
    cross_len = 15
    draw.line((pred_x - cross_len, pred_y, pred_x + cross_len, pred_y), fill="lime", width=5)
    draw.line((pred_x, pred_y - cross_len, pred_x, pred_y + cross_len), fill="lime", width=5)

    img.save("predicted_coordinates.png")
    print(f"Predicted coordinates: ({pred_x}, {pred_y})")


# Load the model and processor
MODEL_PATH = "mlfoundations/Gelato-30B-A3B"

model = Qwen3VLMoeForConditionalGeneration.from_pretrained(
    MODEL_PATH,
    device_map="auto",
    dtype="auto",
)
processor = AutoProcessor.from_pretrained(MODEL_PATH)

# Download a sample screenshot
url = "https://github.com/QwenLM/Qwen3-VL/raw/main/cookbooks/assets/computer_use/computer_use1.jpeg"
response = requests.get(url)
img = Image.open(BytesIO(response.content))
img_width, img_height = img.size

# Prepare messages
PROMPT = '''
You are an expert UI element locator. Given a GUI image and a user's element description, provide the coordinates of the specified element as a single (x,y) point. For elements with area, return the center point.

Output the coordinate pair exactly:
(x,y)
'''
PROMPT = PROMPT.strip()

INSTRUCTION = "Reload the cache."

messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": PROMPT + "\n\n"},
            {"type": "image", "image": img},
            {"type": "text", "text": "\n" + INSTRUCTION},
        ],
    }
]

device = next(model.parameters()).device
inputs = processor.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
).to(device)

# Inference: generate the output
generated_ids = model.generate(**inputs, max_new_tokens=32)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)

# Extract the normalized coordinates from the output text
print(f"Model output: {output_text[0]}")
pred_x, pred_y = extract_coordinates(output_text[0])

# Convert to absolute pixel coordinates and visualize the prediction
visualize_prediction(img, pred_x, pred_y, img_width, img_height)
```
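In an agent loop, the normalized prediction has to be mapped back onto the live screen before a click can be issued. The snippet below is a minimal sketch of that step using `pyautogui`; the `click_normalized` helper is hypothetical (not part of the Gelato release), and it assumes the screenshot sent to the model was a full-screen capture at the display's current resolution.

```python
# Minimal sketch (assumption, not part of the Gelato release): map Gelato's
# normalized [0, 1000] prediction onto the live screen and click it.
# Assumes the model saw a full-screen screenshot at the current resolution,
# so scaling by screen size matches scaling by image size.
import pyautogui


def click_normalized(pred_x: int, pred_y: int) -> None:
    """Click the point predicted by Gelato on the current screen."""
    screen_w, screen_h = pyautogui.size()   # current screen resolution
    abs_x = int(pred_x * screen_w / 1000)   # normalized x -> pixel x
    abs_y = int(pred_y * screen_h / 1000)   # normalized y -> pixel y
    pyautogui.click(abs_x, abs_y)           # issue the click


# Example, reusing the parsed output from the snippet above:
# click_normalized(*extract_coordinates(output_text[0]))
```

If the screenshot was resized or cropped before being passed to the model, scale by the original image dimensions (as `visualize_prediction` does) and map back to screen coordinates from there.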
""" pred_x = int((pred_x * img_width) / 1000) pred_y = int((pred_y * img_height) / 1000) draw = ImageDraw.Draw(img, "RGBA") r = 30 draw.ellipse( (pred_x - r, pred_y - r, pred_x + r, pred_y + r), outline="lime", fill=(0, 255, 0, 90), width=5 ) cross_len = 15 draw.line((pred_x - cross_len, pred_y, pred_x + cross_len, pred_y), fill="lime", width=5) draw.line((pred_x, pred_y - cross_len, pred_x, pred_y + cross_len), fill="lime", width=5) img.save("predicted_coordinates.png") print(f"Predicted coordinates: ({pred_x}, {pred_y})") # Load the model and processor MODEL_PATH = "mlfoundations/Gelato-30B-A3B" model = Qwen3VLMoeForConditionalGeneration.from_pretrained( MODEL_PATH, device_map="auto", dtype="auto" ) processor = AutoProcessor.from_pretrained( MODEL_PATH ) url = "https://github.com/QwenLM/Qwen3-VL/raw/main/cookbooks/assets/computer_use/computer_use1.jpeg" response = requests.get(url) img = Image.open(BytesIO(response.content)) img_width, img_height = img.size # Prepare messages PROMPT = ''' You are an expert UI element locator. Given a GUI image and a user's element description, provide the coordinates of the specified element as a single (x,y) point. For elements with area, return the center point. Output the coordinate pair exactly: (x,y) ''' PROMPT = PROMPT.strip() INSTRUCTION = "Reload the cache." messages = [ { "role": "user", "content": [ {"type": "text", "text": PROMPT + "\n\n"}, {"type": "image", "image": img}, {"type": "text", "text": "\n" + INSTRUCTION}, ], } ] device = next(model.parameters()).device inputs = processor.apply_chat_template( messages, tokenize=True, add_generation_prompt=True, return_dict=True, return_tensors="pt" ).to(device) # Inference: Generation of the output generated_ids = model.generate(**inputs, max_new_tokens=32) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_text = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) # Extract the coordinates from the output text print(f"Model output: {output_text[0]}") pred_x, pred_y = extract_coordinates(output_text[0]) # Calculate the absolute coordinates from normalized coordinates visualize_prediction(img, pred_x, pred_y, img_width, img_height) ``` ## Citation If you use **🍨 Gelato** in your research, please cite it as follows: ``` @misc{gelato2025, title={Gelato — From Data Curation to Reinforcement Learning: Building a Strong Grounding Model for Computer-Use Agents}, author={Gelato Team}, year={2025}, publisher={GitHub}, howpublished={\url{https://github.com/mlfoundations/gelato}}, } ```