
Low Reasoning

LFM2 8B

LFM2-8B-A1B is Liquid AI's efficient Mixture-of-Experts model. It delivers 3-4B-class quality with only 1.5B active parameters, making it well suited to fast, high-quality inference on edge devices such as phones and laptops. Designed for agentic tasks, data extraction, RAG, and creative writing, it runs up to 2x faster on CPU than similarly sized models while maintaining strong accuracy across benchmarks.

32k Context
Low Intelligence
Knowledge cutoff: not publicly documented by Liquid AI.

Available for Chat, Vision, and File Uploads.
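As a rough illustration of the 32k-token context window, here is a Python sketch of a prompt-budget check. It uses the common ~4-characters-per-token heuristic; the real count depends on LFM2's tokenizer, so leave headroom for the reply.

```python
def fits_context(text: str, context_tokens: int = 32_000,
                 chars_per_token: float = 4.0) -> bool:
    """Rough check that a prompt fits the 32K-token window.

    Uses the ~4-characters-per-token heuristic; the actual count
    depends on LFM2's tokenizer.
    """
    return len(text) / chars_per_token <= context_tokens

short_ok = fits_context("hello " * 1000)   # ~1.5K estimated tokens
huge_ok = fits_context("x" * 200_000)      # ~50K estimated tokens
print(short_ok, huge_ok)                   # True False
```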

Performance Benchmarks

MMLU Pro: 50.5%
GPQA: 34.4%
IFBench: 25.85%

How do you want to interact?

Start a Conversation

Ask anything.
Have a natural conversation, brainstorm ideas, draft emails, or ask for advice.

Start Chatting

Use a Persona

Specialized Experts.
Instruct the AI to act as a Coding Tutor, Marketing Expert, or Travel Guide.

Pick a Persona

Why use LFM2 8B?

On-Device Efficiency

8.3B total parameters with ~1.5B active per token for fast inference on phones, laptops, and edge hardware

Mixture-of-Experts Architecture

Sparse MoE with 32 experts, top-4 routing, and LFM2 fast backbone for high capacity at low compute
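As a rough illustration (not Liquid AI's actual implementation), top-4-of-32 routing amounts to scoring every expert for each token, keeping the four highest-scoring experts, and renormalizing their gate weights:

```python
import math

def top_k_route(gate_logits, k=4):
    """Pick the top-k experts for one token and renormalize their gate weights."""
    # Indices of the k largest router scores, best first.
    top = sorted(range(len(gate_logits)), key=lambda i: gate_logits[i], reverse=True)[:k]
    # Softmax over the selected experts only (shifted for numerical stability).
    m = max(gate_logits[i] for i in top)
    exps = [math.exp(gate_logits[i] - m) for i in top]
    total = sum(exps)
    return top, [e / total for e in exps]

# One token routed across 32 experts with top-4 active: only those four
# experts' weights are used, which is why ~1.5B of the 8.3B total
# parameters are active per token.
logits = [0.01 * i for i in range(28)] + [5.0, 4.0, 3.0, 2.0]
experts, weights = top_k_route(logits, k=4)
print(experts)                 # [28, 29, 30, 31]
print(round(sum(weights), 6))  # 1.0
```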

Multimodal Support

Supports vision input and function calling, with a 32K-token context window
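A minimal sketch of the application side of function calling: the model emits a JSON tool call, and your code dispatches it to a local function. The `get_weather` tool and the exact wire format here are hypothetical stand-ins, since the real format depends on the serving stack you run LFM2 behind.

```python
import json

# Hypothetical tool registry; get_weather is a stand-in for your own function.
TOOLS = {
    "get_weather": lambda city: {"city": city, "forecast": "partly cloudy, 72F"},
}

def dispatch(tool_call_json: str):
    """Parse a model-emitted tool call and run the matching local function."""
    call = json.loads(tool_call_json)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# A tool call as the model might emit it (format assumed for illustration).
result = dispatch('{"name": "get_weather", "arguments": {"city": "Austin"}}')
print(result["forecast"])   # partly cloudy, 72F
```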

Capability Examples

Fast On-Device Chat
Hey LFM2, what's the weather like today on my phone app?
It's partly cloudy with a high of 72°F in your area—pulled from local APIs in under 50ms thanks to my sparse MoE efficiency!
Low-Latency Reasoning
Solve this: If a train leaves at 3pm traveling 60mph, and another at 4pm at 80mph, when do they meet if 200 miles apart?
Assuming they travel toward each other: by 4pm the first train has covered 60 miles, leaving 140 miles closing at a combined 140 mph. They meet one hour later at 5pm, 120 miles from the first station, computed with my 1.5B active params for quick edge-device reasoning.
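The train puzzle is easy to sanity-check with a few lines of Python, assuming the trains head toward each other on the same line:

```python
def meeting_time(gap_miles, v1, v2, head_start_hours):
    """Hours after the second train departs until the trains meet,
    assuming they travel toward each other on the same line."""
    remaining = gap_miles - v1 * head_start_hours  # distance left when train 2 starts
    return remaining / (v1 + v2)                   # closing at the combined speed

t = meeting_time(200, 60, 80, 1)  # 200 mi apart, 60 & 80 mph, 1 h head start
print(t)                          # 1.0 -> they meet at 5 pm
print(60 * (1 + t))               # 120.0 miles from the first station
```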

How to use

1
Go to Chat

Navigate to the "AI Chat" page.

2
Select Model

Ensure LFM2 8B is selected.

3
Type Prompt

Ask a question or paste code.

4
Interact

Refine the answer by replying to the AI.

Compare LLMs Side-by-Side

Is LFM2 8B better than Claude 3.5 or Gemini? Test the same prompts simultaneously in the Chat Playground.

Open Chat Playground

Made with ❤ by AI4Chat