
Qwen3-VL 4B & 8B

Deploy Qwen3-VL on a Spheron GPU instance. These vision-language models process text, images, and video with a 256K native context window (scalable to 1M tokens). Two size variants are available, 4B and 8B, each also released as a Thinking variant with enhanced reasoning.

Training: 36 trillion tokens, 119 languages/dialects

Key features

Architecture:
  • Interleaved-MRoPE: Interleaved multimodal rotary position embeddings for long video reasoning
  • DeepStack: Multi-level ViT feature fusion for fine-grained detail
  • Text-Timestamp Alignment: Precise event localization in videos
Capabilities:
  • Visual agents (GUI automation, OSWorld, AndroidControl)
  • Visual coding (mockups to HTML/CSS/JS, Draw.io diagrams)
  • Spatial understanding (2D/3D grounding, position/viewpoint)
  • OCR (32 languages, robust to low-light/blur/tilt)
Benchmarks:
  • 8B-Thinking: MathVision 36.8, MMMU 61.7, MathVista 71.3
  • 235B: Top scores on agent, document, and spatial reasoning benchmarks

Requirements

Hardware:
  • GPU: RTX 4090, A6000, A100, H100
  • VRAM: 8GB minimum, 16GB+ recommended
  • RAM: 16GB+
  • Storage: 10GB+ (SSD recommended)
Software:
  • Ubuntu 22.04 LTS
  • CUDA 12.1+
  • Python 3.11
  • Conda/Miniconda

FP8-quantized versions reduce VRAM requirements (block size 128).
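
If VRAM is tight, an official FP8 checkpoint avoids quantizing locally. A minimal sketch follows, assuming an FP8 release named Qwen/Qwen3-VL-8B-Instruct-FP8 exists on the Hugging Face Hub; check the Qwen collection for the exact repo id, and note that some transformers versions may require serving FP8 weights through vLLM or SGLang instead:

from transformers import Qwen3VLForConditionalGeneration

# Assumption: an FP8 release under the Qwen org -- verify the exact repo id.
model = Qwen3VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen3-VL-8B-Instruct-FP8",
    dtype="auto",        # keep the stored FP8 precision
    device_map="auto",
)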

Deploy on Spheron

  1. Sign up at app.spheron.ai
  2. Add credits (card/crypto)
  3. Click Deploy → RTX 4090/A100 → Region → Ubuntu 22.04 → SSH key → Deploy
Connect:
ssh -i <private-key-path> root@<your-vm-ip>

New to Spheron? See Getting Started and SSH Setup.

Installation

Install Miniconda

curl -O https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh -b -p $HOME/miniconda3
$HOME/miniconda3/bin/conda init bash
source ~/.bashrc

Create environment

conda create -n qwen python=3.11 -y && conda activate qwen

Accept ToS if prompted:

conda tos accept --override-channels --channel https://repo.anaconda.com/pkgs/main
conda tos accept --override-channels --channel https://repo.anaconda.com/pkgs/r

Install PyTorch (CUDA 12.1)

pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
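
A quick check that the CUDA build installed correctly (run inside the qwen environment):

import torch

# Verify the CUDA 12.1 wheel is active and the GPU is visible.
print(torch.__version__)              # should report a +cu121 build
print(torch.cuda.is_available())      # should print True
print(torch.cuda.get_device_name(0))  # e.g. the RTX 4090 / A100 you deployed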

Install dependencies

pip install git+https://github.com/huggingface/transformers
pip install git+https://github.com/huggingface/accelerate
pip install huggingface_hub
pip install einops timm pillow sentencepiece protobuf decord numpy requests
pip install bitsandbytes
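
A one-off import check confirms the core libraries are usable before writing the script:

import transformers, accelerate, decord  # noqa: F401

# transformers and accelerate come from git, so expect dev version strings.
print(transformers.__version__)
print(accelerate.__version__)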

Create test.py

Create the inference script:

import torch  # used by the optional dtype / flash-attention settings below
from transformers import Qwen3VLForConditionalGeneration, AutoProcessor
 
# Load the model on available devices
model = Qwen3VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen3-VL-4B-Thinking",
    dtype="auto",
    device_map="auto"
)
 
# Optional: Enable flash_attention_2 for better performance and memory efficiency,
# especially in multi-image or video tasks.
# model = Qwen3VLForConditionalGeneration.from_pretrained(
#     "Qwen/Qwen3-VL-4B-Thinking",
#     dtype=torch.bfloat16,
#     attn_implementation="flash_attention_2",
#     device_map="auto",
# )
 
# Load the processor
processor = AutoProcessor.from_pretrained("Qwen/Qwen3-VL-4B-Thinking")
 
# Define input messages (image + text prompt)
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
 
# Prepare inputs for inference
inputs = processor.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt"
).to(model.device)
 
# Generate model output
generated_ids = model.generate(**inputs, max_new_tokens=128)
 
# Extract generated tokens (excluding prompt tokens)
generated_ids_trimmed = [
    output[len(input_ids):] for input_ids, output in zip(inputs.input_ids, generated_ids)
]
 
# Decode output text
output_text = processor.batch_decode(
    generated_ids_trimmed,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False
)
 
print(output_text)

Run script

conda activate qwen
python3 test.py
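
test.py sends a single image, but Qwen3-VL also accepts video (decord was installed above for decoding). Below is a hedged sketch of the alternative messages payload with a placeholder path; the exact content keys can vary across transformers versions, so check the model card if this errors:

# Video variant of the messages payload in test.py.
# "file:///path/to/video.mp4" is a placeholder -- substitute a real file or URL.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "video", "video": "file:///path/to/video.mp4"},
            {"type": "text", "text": "Describe what happens in this video."},
        ],
    }
]
# apply_chat_template, generate, and decoding are unchanged from test.py.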

Configuration

Model variants:
  • 4B: Qwen/Qwen3-VL-4B-Thinking
  • 8B: Qwen/Qwen3-VL-8B-Thinking (requires more VRAM)
Precision:
  • dtype=torch.float16 or torch.bfloat16 (A100/H100)
Flash Attention:
  • Add attn_implementation="flash_attention_2" if supported.
Device:
  • device_map="auto" (recommended)
  • device_map={"":0} (single GPU)
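
Putting those options together, a sketch for the 8B variant on an A100, assuming the flash-attn package is installed:

import torch
from transformers import Qwen3VLForConditionalGeneration

# 8B Thinking variant in bfloat16 with FlashAttention-2, pinned to GPU 0.
model = Qwen3VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen3-VL-8B-Thinking",
    dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",  # requires pip install flash-attn
    device_map={"": 0},                       # single-GPU placement
)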

Troubleshooting

Issue: Out of memory (OOM)

Symptoms: CUDA OOM error during model load or inference.
Resolution: Reduce max_new_tokens, switch to dtype=torch.float16, or enable bitsandbytes quantization (see the sketch below).
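
A minimal 4-bit loading sketch with bitsandbytes (installed earlier), assuming your transformers build supports quantized loading for Qwen3-VL:

import torch
from transformers import BitsAndBytesConfig, Qwen3VLForConditionalGeneration

# 4-bit NF4 quantization cuts weight memory to roughly a quarter of FP16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = Qwen3VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen3-VL-4B-Thinking",
    quantization_config=bnb_config,
    device_map="auto",
)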

Issue: Slow model loading

Symptoms: The model takes several minutes to load.
Resolution: Cache models locally (see the sketch below), use NVMe storage, and pass use_safetensors=True.
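
To warm the cache ahead of time (huggingface_hub was installed with the dependencies):

from huggingface_hub import snapshot_download

# Download the weights once; later from_pretrained calls hit the local cache.
snapshot_download("Qwen/Qwen3-VL-4B-Thinking")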

Issue: CUDA errors

Symptoms: CUDA version mismatch errors.
Resolution: Verify that your PyTorch build and CUDA versions match. Run nvidia-smi to check the CUDA version supported by the driver.

What's next