The Mistral model family comprises open-weight, transformer-based large language models developed by Mistral AI, ranging from efficient 12B-parameter models like Mistral NeMo to high-capacity 123B-parameter systems like Mistral Large 2. These models feature long context windows of up to 128,000 tokens, multilingual support through the Tekken tokenizer, and capabilities spanning text generation, code development, mathematical reasoning, and multimodal vision-language processing, as well as function calling.
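As a concrete illustration of the function-calling capability mentioned above, the sketch below builds an OpenAI-compatible chat request that exposes one callable tool to a Mistral model. The endpoint URL, the model name `mistral-large-latest`, and the `get_weather` tool are illustrative assumptions, not details taken from this page; consult Mistral AI's API documentation for the exact current interface.

```python
import json
import os
import urllib.request

# Assumed OpenAI-compatible chat endpoint (verify against Mistral's docs).
ENDPOINT = "https://api.mistral.ai/v1/chat/completions"

def build_request(prompt: str) -> dict:
    """Build a chat-completion payload that exposes one callable tool."""
    return {
        "model": "mistral-large-latest",  # assumed model alias
        "messages": [{"role": "user", "content": prompt}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool for illustration
                "description": "Look up the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
    }

payload = build_request("What's the weather in Paris?")
print(json.dumps(payload, indent=2))

# Only attempt the network call when an API key is actually configured.
if os.environ.get("MISTRAL_API_KEY"):
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        # If the model decides to call the tool, the reply's message carries
        # a tool_calls entry instead of plain text content.
        print(json.load(resp)["choices"][0]["message"])
```

When the model elects to use the tool, the application is expected to execute the named function itself and send the result back in a follow-up message, so the model can compose its final answer.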