🤖 Can I Run This?

Local AI Hardware Calculator
Instant VRAM requirements & performance estimates

Check Your Hardware


🦙 Llama Models

• 8B: 4-16GB VRAM depending on quantization (4-bit to FP16)
• 70B: 35-140GB VRAM depending on quantization
• Token throughput depends mainly on GPU memory bandwidth

🧠 Qwen Models

• 7B: 4-14GB VRAM depending on quantization
• 72B: 36-144GB VRAM depending on quantization
• Excellent multilingual performance

⚡ Mistral Models

• 7B: 4-14GB VRAM depending on quantization
• 22B: 11-44GB VRAM depending on quantization
• Optimized for speed and efficiency
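The ranges above all follow the same rule of thumb: VRAM for the weights is roughly parameter count times bytes per weight, where the bytes per weight depend on the quantization (about 0.5 bytes at 4-bit, 2 bytes at FP16). A minimal sketch of that estimate (the quantization table and the function name are illustrative, and real runtimes need extra memory for the KV cache and activations on top of this):

```python
# Rough weights-only VRAM estimate, matching the ranges listed above.
# Note: actual usage is higher because of KV cache and activation memory.
QUANT_BITS = {"q4": 4, "q8": 8, "fp16": 16}

def estimate_vram_gb(params_billions: float, quant: str) -> float:
    """Estimate VRAM (GB) needed to hold the model weights."""
    bytes_per_param = QUANT_BITS[quant] / 8
    return round(params_billions * bytes_per_param, 1)

print(estimate_vram_gb(8, "q4"))     # Llama 8B at 4-bit  -> 4.0
print(estimate_vram_gb(70, "fp16"))  # Llama 70B at FP16  -> 140.0
```

For example, an 8B model at 4-bit lands at the low end of the 4-16GB range above, while a 70B model at FP16 hits the 140GB high end.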