Gate News, April 24 — A debate has erupted in the United States over DeepSeek V4's technological capabilities and compliance. Chris McGuire, a senior fellow at the Council on Foreign Relations (CFR) and a former White House National Security Council and Department of Defense official, published an analysis arguing that V4 has not shifted the U.S.-China AI competitive landscape. According to McGuire, DeepSeek's own V4 report acknowledges that its reasoning capabilities lag frontier models by approximately 3 to 6 months, since it benchmarks against GPT-5.2 and Gemini 3.0 Pro, both released six months earlier.
McGuire raised concerns that while the V4 report discloses inference-stage adaptation to NVIDIA GPUs and Huawei Ascend NPUs, it does not publicly specify the GPU models or training costs used during development. He questioned whether this silence suggests the use of export-controlled NVIDIA Blackwell chips, noting that V3 previously claimed to use 2,000 H800 GPUs at a cost of $5.57 million. DeepSeek has denied using Blackwell, stating that the model was trained on NVIDIA H800 and Huawei Ascend 910C processors.
Replit CEO Amjad Masad countered McGuire’s analysis, arguing that Chinese scientists are publicly sharing genuine AI breakthroughs while American policymakers and lobbyists amplify “China distillation” concerns. Masad highlighted architectural innovations disclosed in DeepSeek’s official statements, including token-level attention compression (DeepSeek Sparse Attention) and significant efficiency improvements for long-context computation. He noted that V4-Pro demonstrates substantially lower per-token inference compute and KV cache requirements at 1M context lengths compared to V3.2, emphasizing that these architectural advances are unrelated to training data distillation and that all researchers, including American laboratories, can benefit from open-source developments.
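To make the KV-cache claim concrete, here is a back-of-envelope sizing sketch. Every parameter below (layer count, head count, head and latent dimensions) is a hypothetical placeholder chosen for illustration, not DeepSeek's published architecture; the point is only that a per-token latent-compression scheme scales the cache by a much smaller constant than dense attention at a 1M-token context.

```python
# Back-of-envelope KV-cache sizing. All model hyperparameters here are
# hypothetical placeholders, not DeepSeek's published figures.

def kv_cache_bytes(context_len: int, n_layers: int, n_kv_heads: int,
                   head_dim: int, bytes_per_elem: int = 2) -> int:
    """Dense attention: cache a full K and a full V vector per token, per layer."""
    return 2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_elem

def compressed_kv_cache_bytes(context_len: int, n_layers: int,
                              latent_dim: int, bytes_per_elem: int = 2) -> int:
    """Latent-compression scheme: cache one small latent per token, per layer."""
    return n_layers * latent_dim * context_len * bytes_per_elem

if __name__ == "__main__":
    ctx = 1_000_000  # 1M-token context window
    dense = kv_cache_bytes(ctx, n_layers=60, n_kv_heads=8, head_dim=128)
    compact = compressed_kv_cache_bytes(ctx, n_layers=60, latent_dim=512)
    print(f"dense:      {dense / 2**30:.1f} GiB")   # ~228.9 GiB
    print(f"compressed: {compact / 2**30:.1f} GiB")  # ~57.2 GiB
```

Under these made-up numbers the compressed cache is 4x smaller; the actual savings depend entirely on the real architecture's dimensions, which the article does not specify.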
Related News
Tencent open-sources a preview version of Hy3, with code benchmark scores improved 40% over the previous generation
OpenAI launches GPT-5.5: 12M context, top of the AA index, and a new agent benchmark record of 82.7% on Terminal-Bench
Google releases a new candidate version of Jules, repositioning it as an end-to-end product development platform