Browse Models
The Qwen 2 model family consists of transformer-based language models developed by Alibaba Cloud, ranging from 1.5B to 72.7B parameters and released between June 2024 and January 2025. The family includes base models (Qwen2, Qwen2.5); specialized variants for coding (Qwen2.5-Coder), mathematics (Qwen2.5-Math), and vision-language tasks (Qwen2.5-VL); long-context models that support up to 1 million tokens through architectural innovations such as dual-chunk attention; and process reward models that provide step-wise supervision.