Comprehensive side-by-side LLM comparison: Qwen2.5-VL 32B Instruct vs. UI-TARS-2
UI-TARS-2 leads with a 49.2% higher score on the models' one shared benchmark, making it the stronger overall choice for coding tasks.
Alibaba / Qwen
Qwen2.5-VL-32B-Instruct is a 32-billion-parameter vision-language model from Alibaba, extending the Qwen2.5 architecture with multimodal capabilities for understanding images, documents, charts, and video frames alongside text. The model was designed for tasks requiring deep visual reasoning — such as document parsing, table extraction, and spatial understanding — with performance that made it a practical choice for document intelligence and visual data analysis workflows. As an open-weight model, it became a widely adopted foundation for fine-tuning domain-specific multimodal applications.
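As a concrete example of the document-intelligence workflows described above, the sketch below runs single-image question answering with the open weights through Hugging Face transformers, following the usage pattern published on the Qwen model cards. The image URL and prompt are placeholders, and the qwen_vl_utils helper package is assumed to be installed alongside transformers.

```python
# Minimal sketch, assuming the Hugging Face transformers integration and the
# qwen_vl_utils helper package (pip install transformers qwen-vl-utils).
# The image URL and question below are placeholders.
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from qwen_vl_utils import process_vision_info

model_id = "Qwen/Qwen2.5-VL-32B-Instruct"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# One image plus a question, in the chat format the processor expects.
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "https://example.com/invoice.png"},  # placeholder
        {"type": "text", "text": "Extract the line items from this invoice as a table."},
    ],
}]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

# Generate, then strip the prompt tokens before decoding the answer.
output_ids = model.generate(**inputs, max_new_tokens=512)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, output_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```

The same pattern extends to multi-image and video-frame inputs, which is what makes the model practical for table extraction and chart-reading pipelines.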
ByteDance
UI-TARS-2, released by ByteDance in September 2025, is a major generational upgrade of the UI-TARS family of GUI interaction models, with enhanced capabilities across computer control, game environments, code generation, and tool use. It targets agentic workflows requiring robust multimodal understanding of graphical interfaces across diverse application domains.
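To make the agentic GUI workflow concrete, the sketch below shows the generic perceive-act loop such models run in: capture a screenshot, ask the model for the next UI action, execute it, and repeat. This is a hypothetical illustration only; query_ui_tars, the action schema, and the pyautogui grounding are stand-ins for whatever serving endpoint and output format an actual deployment uses, not ByteDance's API.

```python
# Hypothetical perceive-act loop for a GUI agent in the UI-TARS style.
# query_ui_tars and the action schema are illustrative stand-ins, not
# ByteDance's actual interface.
import io
import pyautogui  # cross-platform screenshot + mouse/keyboard control

def shot_to_png(img) -> bytes:
    """Encode a PIL screenshot as PNG bytes for the model request."""
    buf = io.BytesIO()
    img.save(buf, format="PNG")
    return buf.getvalue()

def query_ui_tars(screenshot_png: bytes, instruction: str) -> dict:
    """Placeholder: send the screen state and goal to a UI-TARS-2 endpoint
    and parse the reply into e.g. {"type": "click", "x": 100, "y": 200}."""
    raise NotImplementedError("wire this to your serving endpoint")

def run_agent(instruction: str, max_steps: int = 20) -> None:
    for _ in range(max_steps):
        shot = pyautogui.screenshot()                  # perceive: capture the screen
        action = query_ui_tars(shot_to_png(shot), instruction)
        if action["type"] == "done":                   # model signals completion
            break
        if action["type"] == "click":                  # act: ground the prediction
            pyautogui.click(action["x"], action["y"])
        elif action["type"] == "type":
            pyautogui.typewrite(action["text"])
```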
Qwen2.5-VL 32B Instruct: Alibaba / Qwen, released 2025-03-01
UI-TARS-2: ByteDance, released 2025-09-04 (6 months newer)
Average performance across the one common benchmark: Qwen2.5-VL 32B Instruct vs. UI-TARS-2
Performance comparison across key benchmark categories: Qwen2.5-VL 32B Instruct vs. UI-TARS-2
Available providers and their performance metrics: Qwen2.5-VL 32B Instruct vs. UI-TARS-2