
Medium Reasoning

R1 Distill Qwen 1.5B

DeepSeek R1 Distill Qwen 1.5B is a compact 1.5B-parameter model distilled from the 671B-parameter DeepSeek-R1, inheriting strong chain-of-thought performance on math (83.9% on MATH-500) and code tasks while fitting on a single laptop GPU. It can be deployed on AWS or on edge devices for efficient reasoning under tight resource constraints.

Context window: 128k
Intelligence: Medium
Knowledge cutoff: not specified

Available for Chat, Vision, and File Uploads.

Performance Benchmarks

MMLU: 17.7%
HumanEval: 22.0%
GPQA: 9.1%

How do you want to interact?

Start a Conversation

Ask anything.
Have a natural conversation, brainstorm ideas, draft emails, or ask for advice.

Start Chatting

Use a Persona

Specialized Experts.
Instruct the AI to act as a Coding Tutor, Marketing Expert, or Travel Guide.

Pick a Persona

Why use R1 Distill Qwen 1.5B?

Advanced Chain-of-Thought Reasoning

Inherits complex multi-step reasoning from DeepSeek-R1 via distillation, excelling in math, logic, and code tasks

Parameter Efficiency

Compact 1.5B model outperforms larger LLMs on benchmarks like AIME2024 and MATH-500 while fitting resource-limited devices

Resource-Constrained Deployment

Optimized dense architecture with GQA enables local execution on consumer hardware and edge devices
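As a sketch of the local deployment described above: the checkpoint is published on Hugging Face as deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B, so it can be run with the transformers library (assuming transformers and torch are installed and roughly 4 GB of GPU or CPU memory is available). The generate_reply helper below is a hypothetical wrapper for illustration, not part of AI4Chat's product.

```python
# Sketch: run DeepSeek-R1-Distill-Qwen-1.5B locally with Hugging Face transformers.
# Assumes `pip install transformers torch`; the heavy imports and the model
# download happen lazily, on the first call to generate_reply().

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"


def generate_reply(prompt: str, max_new_tokens: int = 512) -> str:
    """Load the model on first use and return its reply to a single prompt."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    # Format the prompt with the model's chat template before generating.
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)

    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
```

Keeping the imports inside the function means the snippet costs nothing to load; the first call pays the one-time download and initialization.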

Capability Examples

Math Reasoning Demo
Solve this step-by-step: If a train leaves at 60 mph and another at 70 mph towards each other 200 miles apart, when do they meet?
Let's think step-by-step. The closing speed is 60 + 70 = 130 mph, so the time to meet is 200 / 130 ≈ 1.538 hours, or about 1 hour 32 minutes.
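The arithmetic in the demo answer can be checked directly:

```python
# Verify the train meeting-time arithmetic from the demo above.
distance_miles = 200
closing_speed_mph = 60 + 70  # the trains approach each other, so speeds add
hours = distance_miles / closing_speed_mph
minutes = (hours - int(hours)) * 60
print(f"{hours:.3f} hours = {int(hours)} h {minutes:.0f} min")  # 1.538 hours = 1 h 32 min
```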
Code Generation Demo
Write a Python function to find the Fibonacci number at index n using memoization.
def fib(n, memo=None):
    # Memoized Fibonacci; use None instead of a mutable default argument.
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]
    if n <= 1:
        return n
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

How to use

1
Go to Chat

Navigate to the "AI Chat" page.

2
Select Model

Ensure R1 Distill Qwen 1.5B is selected.

3
Type Prompt

Ask a question or paste code.

4
Interact

Refine the answer by replying to the AI.

Compare LLMs Side-by-Side

Is R1 Distill Qwen 1.5B better than Claude 3.5 or Gemini? Test the same prompts side-by-side in the Chat Playground.

Open Chat Playground

Made with ❤ by AI4Chat