# Qwen3-VL-8B-Thinking-abliterated
Qwen3-VL-8B-Thinking-abliterated is an abliterated (v1.0) variant of Qwen3-VL-8B-Thinking, tuned for uncensored reasoning and captioning. The model produces detailed captions and reasoning outputs across a wide range of visual and multimodal contexts, including complex, sensitive, or nuanced content. It supports diverse aspect ratios, resolutions, and prompt conditions while maintaining reasoning integrity and descriptive precision.
## Key Highlights
- **Abliterated / Uncensored Captioning**: Fine-tuned to bypass conventional content filters while preserving factual, descriptive, and reasoning-rich outputs.
- **High-Fidelity Reasoning and Descriptions**: Generates comprehensive captions and reasoning for general, artistic, technical, abstract, and low-context images.
- **Robust Across Aspect Ratios**: Performs consistently on wide, tall, square, panoramic, and irregular image dimensions.
- **Variational Detail Control**: Produces outputs ranging from concise summaries to fine-grained, high-context reasoning and descriptions (see the prompt sketch after this list).
- **Foundation on Qwen3-VL-8B-Thinking Architecture**: Built upon the Qwen3-VL-8B-Thinking model's advanced multimodal reasoning and instruction-following capabilities.
- **Multilingual Output Capability**: Defaults to English but can be adapted to other languages via prompt engineering (see the prompt sketch after this list).
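
Both detail control and output language are steered purely through the text part of the chat message. The sketch below shows this under assumed prompt wordings; none of these strings are fixed phrases the model requires, and `build_messages` is a hypothetical helper that mirrors the message format used in the Quick Start section.

```python
# Illustrative prompt variants (assumed wordings, not required strings).
concise = {"type": "text", "text": "Give a one-sentence caption for this image."}
detailed = {
    "type": "text",
    "text": "Describe this image exhaustively: subjects, setting, style, "
            "lighting, and composition, with step-by-step reasoning.",
}
# The output language is requested directly in the prompt; English is the default.
german = {"type": "text", "text": "Describe this image in detail, answering in German."}

def build_messages(image_url: str, text_part: dict) -> list:
    """Hypothetical helper: a single-turn message in the Quick Start format."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image_url},
                text_part,
            ],
        }
    ]

messages = build_messages(
    "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
    detailed,  # swap in `concise` or `german` to change the behavior
)
```

The resulting `messages` list drops straight into the Quick Start pipeline below; nothing else needs to change between variants.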
## Quick Start with Transformers
```python
from transformers import Qwen3VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
import torch

# Load the model with automatic dtype selection and device placement.
model = Qwen3VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/Qwen3-VL-8B-Thinking-abliterated",
    torch_dtype="auto",
    device_map="auto",
)
processor = AutoProcessor.from_pretrained("prithivMLmods/Qwen3-VL-8B-Thinking-abliterated")

# A single-turn chat message combining an image and a text instruction.
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Provide a detailed caption and reasoning for this image."},
        ],
    }
]

# Render the chat template and extract image/video tensors from the messages.
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to("cuda")

# Generate, then strip the prompt tokens so only new tokens are decoded.
# Note: thinking models spend tokens on reasoning, so raise max_new_tokens
# if you want the full reasoning trace.
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False,
)
print(output_text)
```
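
Since this is a Thinking variant, the decoded output generally contains a reasoning trace followed by the final answer. The minimal sketch below assumes the `<think>...</think>` tag convention used by other Qwen3 thinking models; depending on the tokenizer configuration the tags may instead be stripped as special tokens during decoding, in which case the fallback branch applies.

```python
# Minimal sketch, assuming reasoning arrives wrapped in <think>...</think>.
raw = output_text[0]
if "</think>" in raw:
    reasoning, _, answer = raw.partition("</think>")
    reasoning = reasoning.replace("<think>", "").strip()
    answer = answer.strip()
else:
    # Tags absent (e.g. stripped as special tokens): treat it all as the answer.
    reasoning, answer = "", raw.strip()

print("Reasoning:", reasoning)
print("Answer:", answer)
```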
## Intended Use
This model is suited for:
- Generating detailed, uncensored captions and reasoning for general-purpose, artistic, or research-oriented datasets.
- Research in content moderation, red-teaming, and generative safety analysis.
- Enabling descriptive captioning and reasoning for datasets typically excluded from mainstream models.
- Creative applications such as visual storytelling, art description, and multimodal reasoning exploration.
- Captioning and reasoning for images with non-standard or stylized visual structures.
## Limitations
- May generate explicit, sensitive, or offensive content depending on prompts and image input.
- Not suitable for production systems that require strict content moderation.
- Output style, tone, and reasoning depth may vary based on input phrasing.
- Accuracy may fluctuate for abstract, synthetic, or highly stylized visuals.
