Qwen3 235B A22B
About
Qwen3-235B-A22B is a 235B-parameter mixture-of-experts (MoE) model developed by Qwen, activating 22B parameters per forward pass. It supports seamless switching between a "thinking" mode for complex reasoning, math, and code tasks, and a "non-thinking" mode for efficient general conversation. The model demonstrates strong reasoning ability, multilingual support (100+ languages and dialects), advanced instruction following, and agentic tool-calling capabilities. It natively handles a 32K token context window, extendable to 131K tokens.
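Qwen3's mode switching can be driven with the documented `/think` and `/no_think` soft switches appended to a user turn. A minimal sketch in Python, assuming an OpenAI-style message format; the model ID shown is illustrative and no request is actually sent:

```python
# Sketch: toggling Qwen3's thinking mode via its soft switches.
# "/think" and "/no_think" are the documented mode tags; the model ID
# below is an illustrative assumption, not a verified endpoint name.

def build_messages(prompt: str, thinking: bool) -> list[dict]:
    """Return an OpenAI-style message list with Qwen3's mode soft switch."""
    switch = "/think" if thinking else "/no_think"
    return [{"role": "user", "content": f"{prompt} {switch}"}]

payload = {
    "model": "Qwen/Qwen3-235B-A22B",  # illustrative model ID
    "messages": build_messages("Summarize MoE routing.", thinking=False),
    "max_tokens": 512,
}
print(payload["messages"][0]["content"])  # → Summarize MoE routing. /no_think
```

In a real deployment this payload would go to an OpenAI-compatible chat-completions endpoint; some serving stacks also expose a dedicated flag for disabling thinking, so check your server's documentation.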
Details

| Attribute | Value |
| --- | --- |
| Modality | text |
| Context Window | 131.1K tokens |
| Release Date | Apr 28, 2025 |
| API Available | Yes |
| Hosting | self-hosted, API |
| Output Speed | 120 tokens/sec |
| Time to First Token | 600 ms |
| Quality Index | 82/100 |
| Coding Index | 75/100 |
| Reasoning Index | 84/100 |
Quick Info

- Organization: Alibaba Cloud
- Pricing: Free (self-hosted) / $0.45/1M input tokens (API)
- Free Tier: Yes
- Popularity: 50/100
- Updated: Feb 20, 2026