Mistral Small 3.2 24B
Mistral Small 3.2 24B is a 24-billion-parameter multimodal AI model with strong vision understanding, precise instruction following, and robust function calling, backed by a 128K-token context window. A drop-in upgrade over its predecessor, it delivers efficient text and image performance that rivals much larger models while reducing repetition errors.
Available for Chat, Vision, and File Uploads.
Performance Benchmarks
How do you want to interact?
Start a Conversation
Ask anything.
Have a natural conversation, brainstorm ideas, draft emails, or ask for advice.
Use a Persona
Specialized Experts.
Instruct the AI to act as a Coding Tutor, Marketing Expert, or Travel Guide.
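A persona is typically set with a system message. The sketch below assumes the OpenAI-compatible chat message format; the wording of the system prompt is only illustrative:

```python
# Assumed format: OpenAI-style chat messages, where a "system" message
# establishes the persona before the user's first question.
persona_messages = [
    {
        "role": "system",
        "content": "You are a patient coding tutor. Explain concepts step by step.",
    },
    {
        "role": "user",
        "content": "What does a Python list comprehension do?",
    },
]
```

Swapping the system message for "You are a marketing expert" or "You are a travel guide" changes the persona without altering the rest of the conversation.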
Why use Mistral Small 3.2 24B?
Enhanced Instruction Following
Improved accuracy in precise instruction adherence, from 82.75% to 84.78%, with gains on benchmarks like Wildbench v2 (55.6% to 65.33%) and Arena Hard v2 (19.56% to 43.1%)
Reduced Repetition
Halved rate of infinite generations or repetitive outputs, from 2.11% to 1.29%, for more reliable responses
Robust Function Calling
Upgraded template for reliable tool-use and structured API interactions, ideal for low-latency applications
Multimodal Vision Processing
Supports image-text-to-text capabilities for document understanding, visual Q&A, and image-grounded generation
Capability Examples
Vision Understanding
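A minimal sketch of how an image-plus-text request can be structured for visual Q&A, assuming an OpenAI-compatible message format where images are passed inline as base64 data URLs (the model identifier and helper name are illustrative, and the actual network call is omitted):

```python
import base64

def build_vision_request(image_bytes: bytes, question: str,
                         model: str = "mistral-small-3.2-24b") -> dict:
    """Build an image-grounded chat request in the OpenAI-compatible format."""
    # Encode the raw image bytes as a base64 data URL for inline transport.
    data_url = "data:image/png;base64," + base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                # Mixed content: a text part and an image part in one turn.
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": data_url}},
                ],
            }
        ],
    }

# Example with placeholder image bytes:
request = build_vision_request(b"\x89PNG...", "What does this chart show?")
```

The same shape works for document understanding: attach a page image and ask a question about its contents.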
Function Calling
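A sketch of the tool-use flow: declare a tool in the JSON-Schema style used by OpenAI-compatible APIs, then route the model's structured tool call to a local function. The `get_weather` tool and the simulated call are illustrative, not part of any real API:

```python
import json

# A tool declaration in the JSON-Schema style (fields are illustrative).
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

def dispatch_tool_call(tool_call: dict) -> str:
    """Route a model-emitted tool call to a local Python handler."""
    handlers = {"get_weather": lambda city: f"Sunny in {city}"}
    name = tool_call["function"]["name"]
    # Models return arguments as a JSON string, so parse before calling.
    args = json.loads(tool_call["function"]["arguments"])
    return handlers[name](**args)

# Simulate a tool call shaped like a model response:
fake_call = {"function": {"name": "get_weather", "arguments": '{"city": "Paris"}'}}
result = dispatch_tool_call(fake_call)
```

In a real application the handler's result is appended to the conversation as a tool message so the model can compose its final answer.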
How to use
Go to Chat
Navigate to the "AI Chat" page.
Select Model
Ensure Mistral Small 3.2 24B is selected.
Type Prompt
Ask a question or paste code.
Interact
Refine the answer by replying to the AI.
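The refine-by-replying loop in the steps above can also be sketched programmatically: each follow-up is appended to a running message history so the model sees the full conversation. This assumes the OpenAI-compatible message format; the actual client call is omitted:

```python
def refine(history: list, follow_up: str, assistant_answer: str) -> list:
    """Append a user follow-up and the model's answer to the running history."""
    return history + [
        {"role": "user", "content": follow_up},
        {"role": "assistant", "content": assistant_answer},
    ]

# Turn 1: the initial prompt (pasted code, a question, etc.).
history = [{"role": "user", "content": "Review this Python function for bugs."}]

# Turn 2: refine by replying; the assistant answer here is a placeholder.
history = refine(history, "Now suggest unit tests for it.", "Here are three tests...")
```

Because the whole history is sent on each turn, the model can refine earlier answers rather than starting from scratch.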
Compare LLMs Side-by-Side
Is Mistral Small 3.2 24B better than Claude 3.5 or Gemini? Run the same prompts side-by-side in the Chat Playground and compare the answers.
Open Chat Playground