
DeepSeek V4 ships with open weights and Huawei Ascend support, narrowing the gap with closed AI models
DeepSeek shipped its V4 series on April 24: a 1.6-trillion-parameter, MIT-licensed mixture-of-experts (MoE) flagship and a 284-billion-parameter sibling, both with 1M-token context windows. A permanent 10× cut to cached-input pricing makes V4-Pro roughly 139× cheaper than GPT-5.5 on the same workload. The line on page 16, though, is still the bigger news.
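
For readers who want to sanity-check a multiple like that against their own traffic, the arithmetic is straightforward. The sketch below uses hypothetical per-million-token rates and a made-up workload mix, not either provider's published pricing, so the printed ratio is illustrative only.

```python
# Back-of-the-envelope cost comparison for a long, mostly-cached prompt.
# All per-million-token rates and the workload mix below are hypothetical
# placeholders, not published pricing for DeepSeek V4-Pro or GPT-5.5.

def request_cost(cached_in, fresh_in, out,
                 cached_rate, input_rate, output_rate):
    """Dollar cost of one request, given per-million-token rates."""
    return (cached_in * cached_rate
            + fresh_in * input_rate
            + out * output_rate) / 1_000_000

# Hypothetical workload: a 950k-token prompt that is mostly cache hits,
# plus a short completion.
workload = dict(cached_in=900_000, fresh_in=50_000, out=2_000)

cheap = request_cost(**workload, cached_rate=0.014, input_rate=0.28, output_rate=0.42)
pricey = request_cost(**workload, cached_rate=1.25, input_rate=2.50, output_rate=10.00)

print(f"cheap model:  ${cheap:.4f} per request")
print(f"pricey model: ${pricey:.4f} per request")
print(f"ratio: {pricey / cheap:.0f}x")
```

On long-context workloads most input tokens are cache hits, which is why a deep cut to the cached-input rate moves the overall cost multiple more than any other lever.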
