🤖 Can I Run This?
Local AI Hardware Calculator
Instant VRAM requirements & performance estimates
Check Your Hardware
Your GPU:
Select your GPU...
RTX 4090 (24GB VRAM)
RTX 4080 (16GB VRAM)
RTX 4070 Ti (12GB VRAM)
RTX 3090 (24GB VRAM)
RTX 3080 (10GB VRAM)
RTX 3070 (8GB VRAM)
RTX 3060 (12GB VRAM)
RX 7900 XTX (24GB VRAM)
RX 6800 XT (16GB VRAM)
Apple M1 Ultra (64GB Unified)
Apple M2 Ultra (96GB Unified)
AI Model:
Select model...
Llama 3.1 8B
Llama 3.1 70B
Qwen 2.5 7B
Qwen 2.5 72B
Mistral 7B
Codestral 22B
DeepSeek V2
Quantization:
Select quantization...
Q4 (4-bit, fastest)
Q5 (5-bit, balanced)
Q8 (8-bit, best quality)
FP16 (full precision)
Calculate Performance
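The quantization options above map to an approximate memory cost per weight, which is how a calculator like this can turn a parameter count into a VRAM figure. A minimal sketch of that rule of thumb (the bytes-per-parameter values and the 20% overhead factor for KV cache and runtime buffers are assumptions, not this tool's exact formula):

```python
# Approximate bytes per parameter for each quantization option.
# These figures are rough community rules of thumb, not exact.
BYTES_PER_PARAM = {
    "Q4": 0.5,     # 4-bit
    "Q5": 0.625,   # 5-bit
    "Q8": 1.0,     # 8-bit
    "FP16": 2.0,   # full precision (16-bit)
}

def estimate_vram_gb(params_billions: float, quant: str,
                     overhead: float = 1.2) -> float:
    """Weights-only VRAM estimate in GB, scaled by a ~20% overhead
    factor (assumed) for KV cache and runtime buffers."""
    return params_billions * BYTES_PER_PARAM[quant] * overhead

# Example: an 8B model at Q4 needs roughly 8 * 0.5 * 1.2 = 4.8 GB
print(estimate_vram_gb(8, "Q4"))
```

The weights-only numbers (overhead factor of 1.0) reproduce the low/high ends of the ranges shown on the model cards, e.g. 8B at Q4 ≈ 4 GB and at FP16 ≈ 16 GB.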
🦙 Llama Models
• 8B: 4-16GB VRAM depending on quant
• 70B: 35-140GB VRAM depending on quant
• Performance varies by hardware
🧠 Qwen Models
• 7B: 4-14GB VRAM depending on quant
• 72B: 36-144GB VRAM depending on quant
• Excellent multilingual performance
⚡ Mistral Models
• 7B: 4-14GB VRAM depending on quant
• 22B: 11-44GB VRAM depending on quant
• Optimized for speed and efficiency
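Putting the GPU list and the model cards together, the "can I run this?" check itself is just a comparison of estimated model size against available VRAM. A minimal sketch under the same assumptions (the tables below list only a few illustrative entries, and the bytes-per-parameter figures are rough rules of thumb):

```python
# Illustrative subset of the hardware and model options above.
GPU_VRAM_GB = {
    "RTX 4090": 24,
    "RTX 3080": 10,
    "Apple M2 Ultra": 96,  # unified memory
}
MODEL_PARAMS_B = {
    "Llama 3.1 8B": 8,
    "Llama 3.1 70B": 70,
    "Mistral 7B": 7,
}
BYTES_PER_PARAM = {"Q4": 0.5, "Q5": 0.625, "Q8": 1.0, "FP16": 2.0}

def can_run(gpu: str, model: str, quant: str) -> bool:
    """Weights-only fit check: does the quantized model fit in VRAM?
    Ignores KV cache and runtime overhead (assumption)."""
    needed_gb = MODEL_PARAMS_B[model] * BYTES_PER_PARAM[quant]
    return needed_gb <= GPU_VRAM_GB[gpu]

# 70B at Q4 needs ~35 GB, more than a 24 GB RTX 4090 provides
print(can_run("RTX 4090", "Llama 3.1 70B", "Q4"))
```

This also shows why the big unified-memory Apple parts appear in the GPU list: a 96 GB M2 Ultra can hold a 70B model even at Q8 (~70 GB), where every 24 GB discrete card falls short.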